FreeIPA Identity Management planet - technical blogs

October 19, 2018

Fraser Tweedale

Should FreeIPA ship a subordinate CA profile?

In my previous post I discussed how to issue subordinate CA (sub-CA) certificates from FreeIPA. In brief, the administrator must create and import a profile configuration for issuing certificates with the needed characteristics. The profile must add a Basic Constraints extension asserting that the subject is a CA.

That post later formed the basis of an official Red Hat solution (Red Hat subscription required to view). Subsequently, an RFE was filed requesting that a sub-CA profile be included by default in FreeIPA. In this short post I’ll outline why this might not be a good idea, and what the profile might look like if we did ship one.

The case against

The most important reason not to include a sub-CA profile is that it will not be appropriate for many use cases. Important attributes of a sub-CA certificate include:

  • validity period (how long will the certificate be valid for?)
  • key usage and extended key usage (what can the certificate be used for?)
  • path length constraint (how many further subordinate CAs may be issued below this CA?)
  • name constraints (what namespaces can this CA issue certificates for?)

If we ship a default sub-CA profile in FreeIPA, all of these attributes will be determined ahead of time and fixed. There is a good chance the values will not be appropriate, and the administrator will have to create a custom profile configuration anyway. Worse, there is a risk that the profile will be used without due consideration of its appropriateness.

If we do nothing, we still have the blog post and official solution to guide administrators through the process. The administrator has the opportunity to alter the profile configuration according to their security or operational requirements.

The case for

The RFE description states:

Signing a subordinate CA’s CSR in IdM is difficult and requires tinkering. This functionality should be built in and present with the product. Please bundle a subordinate CA profile like the one described in the [blog post].

I agree that Dogtag profile configuration is difficult, even obtuse. It is not well documented and there is limited sanity checking. There is no “one size fits all” when it comes to sub-CA profiles, but can there be a “one size fits most”? Such a profile might have:

  • path length constraint of zero (the CA can only issue leaf certificates)
  • name constraints limiting DNS names to the FreeIPA domain (and subdomains)
  • a validity period of two years

In terms of security these are conservative attributes but they still admit the most common use case. Two years may or may not be a reasonable lifetime for the subordinate CA, but we have to choose some fixed value. The downside is that customers could use this profile without being aware of its limitations (path length, name constraints). The resulting issues will frustrate the customer and probably result in some support cases too.
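
For concreteness, the Basic Constraints part of such a profile would look much like the configuration in my earlier sub-CA post, with the path length pinned to zero, plus a validity default of roughly two years. A rough sketch of the relevant defaults (the matching constraint components and the name constraints configuration are omitted here, and the parameter names should be checked against the Dogtag documentation):

policyset.serverCertSet.2.default.class_id=validityDefaultImpl
policyset.serverCertSet.2.default.name=Validity Default
policyset.serverCertSet.2.default.params.range=730
policyset.serverCertSet.15.default.class_id=basicConstraintsExtDefaultImpl
policyset.serverCertSet.15.default.name=Basic Constraints Extension Default
policyset.serverCertSet.15.default.params.basicConstraintsCritical=true
policyset.serverCertSet.15.default.params.basicConstraintsIsCA=true
policyset.serverCertSet.15.default.params.basicConstraintsPathLen=0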

Alternatives and conclusion

There is a middle road: instead of shipping the profile, we ship a “profile assistant” tool that asks some questions and builds the profile configuration. Questions would include the desired validity period, whether it’s for a CA (and if so the path length constraint), name constraints (if any), and so on. Then it imports the configuration.

There may be merit to this option, but none of the machinery exists, so the effort and lead time are high. The other options, doing nothing (or rather, improving and maintaining the documentation) and shipping a default sub-CA profile, are both low in effort and lead time.

In conclusion, I am open to either leaving sub-CA profiles as a documentation concern, or including a conservative default profile. But because there is no one size fits all, I prefer to leave sub-CA profile creation as a documented process that administrators can perform themselves—and tweak as they see fit.

October 19, 2018 12:00 AM

October 18, 2018

William Brown

Rust RwLock and Mutex Performance Oddities

Recently I have been working on Rust data structures once again. In the process I wanted to test how my work performed compared to the standard library RwLock and Mutex. On my home laptop the RwLock was 5 times faster, and the Mutex 2 times faster, than my work.

So I checked out my code on my workplace workstation and ran my benchmarks. The Mutex was the same - 2 times faster. The RwLock, however, was 4000 times slower.

What’s a RwLock and Mutex anyway?

In a multithreaded application, it’s important that data shared between threads is consistent when accessed. This is not just logical consistency of the data; it also involves hardware consistency of the memory in cache. As a simple example, let’s examine an update to a bank account done by two threads:

acc = 10
deposit = 3
withdrawal = 5

[ Thread A ]            [ Thread B ]
acc = load_balance()    acc = load_balance()
acc = acc + deposit     acc = acc - withdrawal
store_balance(acc)      store_balance(acc)

What will the account balance be at the end? The answer is “it depends”. Because threads are working in parallel these operations could happen:

  • At the same time
  • Interleaved (various possibilities)
  • Sequentially

This isn’t very healthy for our bank account. We could lose our deposit, or end up with invalid data. The possible outcomes are that acc ends up as 13, 5 or 8 - and only 8 (the deposit and the withdrawal both applied) is correct.

A mutex protects our data in multiple ways. It provides hardware consistency operations so that our CPUs’ cache state is valid. It also allows only a single thread inside the mutex at a time, so we can linearise operations. Mutex is short for “Mutual Exclusion”, after all.

So our example with a mutex now becomes:

acc = 10
deposit = 3
withdrawal = 5

[ Thread A ]            [ Thread B ]
mutex.lock()            mutex.lock()
acc = load_balance()    acc = load_balance()
acc = acc + deposit     acc = acc - withdrawal
store_balance(acc)      store_balance(acc)
mutex.unlock()          mutex.unlock()

Now only one thread will access our account at a time: The other thread will block until the mutex is released.
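
In Rust this maps onto std::sync::Mutex. As a rough sketch of the account example (not the benchmark code discussed below), lock() returns a guard, and the mutex is released when the guard goes out of scope:

use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // The shared account balance, protected by a mutex.
    let acc = Arc::new(Mutex::new(10));

    let acc_a = Arc::clone(&acc);
    let thread_a = thread::spawn(move || {
        // lock() blocks until we have exclusive access.
        let mut balance = acc_a.lock().unwrap();
        *balance += 3; // deposit
        // The guard is dropped here, unlocking the mutex.
    });

    let acc_b = Arc::clone(&acc);
    let thread_b = thread::spawn(move || {
        let mut balance = acc_b.lock().unwrap();
        *balance -= 5; // withdrawal
    });

    thread_a.join().unwrap();
    thread_b.join().unwrap();

    // With the mutex, the operations are linearised: always 8.
    println!("balance = {}", *acc.lock().unwrap());
}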

An RwLock is an extension of this pattern. Where a mutex guarantees exclusive access to the data for both reading and writing, an RwLock (Read Write Lock) allows either multiple concurrent read-only views OR a single writer (with read and write access). Importantly, when a writer wants to take the lock, all readers must complete their work and “drain”. Once the write is complete, readers can begin again. So you can imagine it as:

Time ->

T1: -- read --> x
T2:     -- read --> x                x -- read -->
T3:     -- read --> x                x -- read -->
T4:                   | -- write -- |
T5:                                  x -- read -->
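
In Rust this is std::sync::RwLock, where read() and write() return guards in the same way as Mutex::lock(). A small sketch:

use std::sync::RwLock;

fn main() {
    let lock = RwLock::new(5);

    // Many readers may hold the lock at the same time.
    {
        let r1 = lock.read().unwrap();
        let r2 = lock.read().unwrap();
        assert_eq!(*r1 + *r2, 10);
    } // both read guards are dropped here

    // A writer needs exclusive access, so all readers must drain first.
    {
        let mut w = lock.write().unwrap();
        *w += 1;
    }

    assert_eq!(*lock.read().unwrap(), 6);
}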

Test Case for the RwLock

My test case is simple. Given a set of 12 threads, we spawn:

  • 8 readers. Take a read lock, read the value, release the read lock. If the value == target then stop the thread.
  • 4 writers. Take a write lock, read the value. Add one and write. Continue until value == target then stop.

Other conditions:

  • The test code is identical between Mutex/RwLock (besides the locking construct)
  • --release is used for compiler optimisations
  • The test hardware is as close as possible (i7 quad core)
  • The tests are run multiple times to construct averages of the performance

The idea is that a target number of writes (X) must occur, while many readers contend as fast as possible on the read. We are pressuring the system to choose between “many readers getting to read fast” and “writers getting priority to drain/block readers”.
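
The actual benchmark is part of my project, but a minimal sketch of the shape of the RwLock case described above looks something like this:

use std::sync::{Arc, RwLock};
use std::thread;
use std::time::Instant;

const TARGET: usize = 500;

fn main() {
    let value = Arc::new(RwLock::new(0usize));
    let start = Instant::now();
    let mut handles = Vec::new();

    // 4 writers: take the write lock, increment, stop at the target.
    for _ in 0..4 {
        let value = Arc::clone(&value);
        handles.push(thread::spawn(move || loop {
            let mut v = value.write().unwrap();
            if *v >= TARGET {
                break;
            }
            *v += 1;
        }));
    }

    // 8 readers: take the read lock as fast as possible until the
    // target value is observed, creating contention with the writers.
    for _ in 0..8 {
        let value = Arc::clone(&value);
        handles.push(thread::spawn(move || loop {
            let v = value.read().unwrap();
            if *v >= TARGET {
                break;
            }
        }));
    }

    for h in handles {
        h.join().unwrap();
    }
    println!("{} writes took {:?}", TARGET, start.elapsed());
}

The Mutex version has the same shape, with lock() in place of read() and write().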

On OSX, given a target of 500 writes, the RwLock version was able to complete in 0.01 seconds. (MBP 2011, 2.8GHz)

On Linux, given the same target of 500 writes, it completed in 42 seconds. This is a 4000 times difference. (i7-7700 CPU @ 3.60GHz)

All things considered the Linux machine should have an advantage - it’s a desktop processor, of a newer generation, with a much faster clock speed. So why is the RwLock performance so different on Linux?

To the source code!

Examining the Rust source code, we see that many OS primitives come from libc, because they require OS support to function. RwLock is an example of this, as is Mutex, and many more. The unix implementation for Rust consumes the pthread_rwlock primitive. This means we need to read man pages to understand the details of each.

OSX uses FreeBSD userland components, so we can assume they follow the BSD man pages. In the FreeBSD man page for pthread_rwlock_rdlock we see:

IMPLEMENTATION NOTES

 To prevent writer starvation, writers are favored over readers.

Linux, however, uses a different construct. Looking at the Linux man page:

PTHREAD_RWLOCK_PREFER_READER_NP
  This is the default.  A thread may hold multiple read locks;
  that is, read locks are recursive.  According to The Single
  Unix Specification, the behavior is unspecified when a reader
  tries to place a lock, and there is no write lock but writers
  are waiting.  Giving preference to the reader, as is set by
  PTHREAD_RWLOCK_PREFER_READER_NP, implies that the reader will
  receive the requested lock, even if a writer is waiting.  As
  long as there are readers, the writer will be starved.

Reader vs Writer Preferences?

Because an RwLock allows multiple readers OR a single writer, a preference has to be given to one or the other. The preference basically boils down to the choice of:

  • Do you respond to write requests and have new readers block?
  • Do you favour readers but let writers block until reads are complete?

The difference is that on a read heavy workload, a write will continue to be delayed so that readers can begin and complete (up until some threshold of time). However, on a writer focused workload, you allow readers to stall so that writes can complete sooner.

On Linux, they choose a reader preference. On OSX/BSD they choose a writer preference.

Because our test is about how fast a target number of write operations can complete, the writer preference of BSD/OSX makes this test much faster. Our readers still “read”, but they give way to the writers, so the test completes sooner.

However, the Linux “reader favour” policy means that our readers (designed to create contention) are allowed to skip the queue and block writers. This causes our writers to starve. Because the test is only concerned with writer completion, the result (correctly) shows that our writers are heavily delayed - even though many more readers are completing.

If we were to track the number of reads that completed, I am sure we would see a large difference, with Linux having allowed many more readers to complete than the OSX version.

The Linux pthread_rwlock does allow you to change this policy (PTHREAD_RWLOCK_PREFER_WRITER_NP), but this isn’t exposed via Rust. This means that today you accept (and trust) the OS default. Rust is simply unaware, at compile time and run time, that such a policy difference exists.

Conclusion

Rust, like any language, consumes operating system primitives. Every OS implements these differently, and these differences in OS policy can cause real performance differences in applications between development and production.

It’s well worth understanding the constructions used in programming languages and how they affect the performance of your application - and the decisions behind those tradeoffs.

This isn’t meant to say “don’t use RwLock in Rust on Linux”. It is meant to say “choose it when it makes sense - on read-heavy loads, understanding that writers will be delayed”. For my project (a copy-on-write cell) I will likely conditionally compile RwLock on OSX but Mutex on Linux, as I require writer-favoured behaviour. There are certainly applications that will benefit from the reader priority on Linux (especially if there is low writer volume and a low penalty for delayed writes).

October 18, 2018 02:00 PM

September 05, 2018

Rob Crittenden

Migration and User Private Groups

When an IPA user is added, a User-Private Group (UPG) is typically created along with it. This is a group with the same name as the user and the same GID. It is treated specially in that it cannot have members and does not normally appear in group searches using the IPA API (unless the private option is included).
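
For example, a user’s private group is excluded from an ordinary group search but shows up when the private option is given (the user name alice here is just an example):

$ ipa group-find alice
$ ipa group-find alice --private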

Migrating from another LDAP source, including another IPA server, does not create UPGs. There are a number of reasons for this:

  1. It can be expensive to be sure that no groups reference any given user
  2. It is unclear what to do if a group cannot be made into a UPG. There is no interactive mode, so the choice is either to skip it, add it as a non-UPG, or drop the group members and add it as a UPG.

We took the easy way out and don’t convert any. There is an RFE to be able to do this during migration.

This came up recently on the freeipa-users list and I thought about what it would take to convert a group back into a UPG.

My first solution was a rather complex set of ldapmodify operations.

One to update the group:

$ kinit admin
$ ldapmodify -Y GSSAPI
dn: cn=test,cn=groups,cn=accounts,dc=example,dc=com
changetype: modify
add: objectclass
objectClass: mepManagedEntry
-
add: mepManagedBy
mepManagedBy: uid=test,cn=users,cn=accounts,dc=example,dc=com
- 
delete: objectclass
objectClass: ipausergroup
-
delete: objectclass
objectClass: groupofnames
-
delete: objectclass
objectClass: nestedgroup

^D

And one to update the user:

$ ldapmodify -Y GSSAPI
dn: uid=test,cn=users,cn=accounts,dc=example,dc=com
changetype: modify
add: objectclass
objectClass: mepOriginEntry
-
add: mepManagedEntry
mepManagedEntry: cn=test,cn=groups,cn=accounts,dc=example,dc=com

^D

This seemed cumbersome, especially if there are a lot of groups to convert. It also doesn’t consider the case where a group has members and so cannot be converted; an ObjectclassViolation LDAP error would be thrown in that case.

So I poked at the group-detach command and came up with this. If you drop this file, attach.py, into /usr/lib/python-*/site-packages/ipaserver/plugins and restart Apache you’ll have the group-attach command:

import six

from ipalib import Str
from ipalib.plugable import Registry
from .baseldap import (
    pkey_to_value,
    LDAPQuery
)
from ipalib import _, ngettext
from ipalib import errors
from ipalib import output

register = Registry()


@register()
class group_attach(LDAPQuery):
    __doc__ = _('Attach a managed group to a user.')

    takes_params = (
        Str('user',
            doc=_('User to attach group to'),
            flags=['no_create', 'no_update', 'no_search'],
        ),
    )

    has_output = output.standard_value
    msg_summary = _('Attached group "%(value)s" to user "%(value)s"')

    def execute(self, *keys, **options):
        """
        This requires updating both the user and the group. We first need to
        verify that both the user and group can be updated, then we go
        about our work. We don't want a situation where only the user or
        group can be modified and we're left in a bad state.
        """
        ldap = self.obj.backend

        group_dn = self.obj.get_dn(*keys, **options)
        user_dn = self.api.Object['user'].get_dn(*keys)

        try:
            user_attrs = ldap.get_entry(user_dn)
        except errors.NotFound:
            raise self.obj.handle_not_found(*keys)
        is_managed = self.obj.has_objectclass(
            user_attrs['objectclass'], 'mepmanagedentry'
        )
        if (not ldap.can_write(user_dn, "objectclass") or
                not ldap.can_write(user_dn, "mepManagedEntry")
                and is_managed):
            raise errors.ACIError(
                info=_('not allowed to modify user entries')
            )

        group_attrs = ldap.get_entry(group_dn)
        is_managed = self.obj.has_objectclass(
            group_attrs['objectclass'], 'mepmanagedby'
        )
        if (not ldap.can_write(group_dn, "objectclass") or
                not ldap.can_write(group_dn, "mepManagedBy")
                and is_managed):
            raise errors.ACIError(
                info=_('not allowed to modify group entries')
            )

        objectclasses = user_attrs['objectclass']
        if 'meporiginentry' in [x.lower() for x in objectclasses]:
            raise errors.ACIError(
                info=_('The user is already attached to a group')
            )

        group_attrs = ldap.get_entry(group_dn)
        objectclasses = group_attrs['objectclass']
        if 'mepmanagedentry' in [x.lower() for x in objectclasses]:
            raise errors.ACIError(
                info=_('The group is already managed')
            )
        if group_attrs.get('member'):
            raise errors.ACIError(
                info=_('The group has members')
            )

        for objectclass in ('ipausergroup', 'groupofnames', 'nestedgroup'):
            try:
                i = objectclasses.index(objectclass)
            except ValueError:
                # this should never happen
                pass
            else:
                del objectclasses[i]

        objectclasses.append('mepManagedEntry')

        group_attrs['mepManagedBy'] = user_dn
        group_attrs['objectclass'] = objectclasses
        ldap.update_entry(group_attrs)

        try:
            user_attrs['objectclass'].append('mepOriginEntry')
            user_attrs['mepManagedEntry'] = group_dn
            ldap.update_entry(user_attrs)
        except ValueError:
            # Somehow the user isn't managed, let it pass for now. We'll
            # let the group throw "Not managed".
            pass

        return dict(
            result=True,
            value=pkey_to_value(keys[0], options),
        )

This leaves some things to be desired, notably the exceptions are ACIError rather than something perhaps more relevant.

$ ipa group-detach test
--------------------------------------
Detached group "test" from user "test"
--------------------------------------

$ ipa group-attach test
------------------------------------
Attached group "test" to user "test"
------------------------------------

There be dragons. I have barely tested this, just enough to scratch the itch of my curiosity.

by rcritten at September 05, 2018 09:26 PM

August 21, 2018

Fraser Tweedale

Issuing subordinate CA certificates from FreeIPA

FreeIPA, since version 4.4, has supported creating subordinate CAs within the deployment’s Dogtag CA instance. This feature is called lightweight sub-CAs. But what about when you need to issue a subordinate CA certificate to an external entity? One use case would be chaining a FreeIPA deployment up to some existing FreeIPA deployment. This is similar to what many customers do with Active Directory. In this post I’ll show how you can issue subordinate CA certificates from FreeIPA.

Scenario description

The existing FreeIPA deployment has the realm IPA.LOCAL and domain ipa.local. Its CA’s Subject Distinguished Name (Subject DN) is CN=Certificate Authority,O=IPA.LOCAL 201808022359. The master’s hostname is f28-0.ipa.local. I will refer to this deployment as the existing or primary deployment.

I will install a new FreeIPA deployment on the host f28-1.ipa.local, with realm SUB.IPA.LOCAL and domain sub.ipa.local. This will be called the secondary deployment. Its CA will be signed by the CA of the primary deployment.

Choice of subject principal and Subject DN

All certificate issuance via FreeIPA (with some limited exceptions) requires a nominated subject principal. Subject names in the CSR (Subject DN and Subject Alternative Names) are validated against the subject principal. We must create a subject principal in the primary deployment to represent the CA of the secondary deployment.

When validating CSRs, the Common Name (CN) of the Subject DN is checked against the subject principal, in the following ways:

  • for user principals, the CN must match the UID
  • for host principals, the CN must match the hostname (case-insensitive)
  • for service principals, the CN must match the hostname (case-insensitive); only principal aliases with the same service type as the canonical principal are checked

This validation regime imposes a restriction on what the CN of the subordinate CA can be. In particular:

  • the Subject DN must contain a CN attribute
  • the CN value can be a hostname (host or service principal), or a UID (user principal)

For this scenario, I chose to create a host principal for the domain of the secondary deployment:

[f28-0]% ipa host-add --force sub.ipa.local
--------------------------
Added host "sub.ipa.local"
--------------------------
  Host name: sub.ipa.local
  Principal name: host/sub.ipa.local@IPA.LOCAL
  Principal alias: host/sub.ipa.local@IPA.LOCAL
  Password: False
  Keytab: False
  Managed by: sub.ipa.local

Creating a certificate profile for sub-CAs

We will tweak the caIPAserviceCert profile configuration to create a new profile for subordinate CAs. Export the profile configuration:

[f28-0]% ipa certprofile-show caIPAserviceCert --out SubCA.cfg
------------------------------------------------
Profile configuration stored in file 'SubCA.cfg'
------------------------------------------------
  Profile ID: caIPAserviceCert
  Profile description: Standard profile for network services
  Store issued certificates: TRUE

Perform the following edits to SubCA.cfg:

  1. Replace profileId=caIPAserviceCert with profileId=SubCA.
  2. Replace the subjectNameDefaultImpl component with the userSubjectNameDefaultImpl component. This will use the Subject DN from the CSR as is, without restriction:

    policyset.serverCertSet.1.constraint.class_id=noConstraintImpl
    policyset.serverCertSet.1.constraint.name=No Constraint
    policyset.serverCertSet.1.default.class_id=userSubjectNameDefaultImpl
    policyset.serverCertSet.1.default.name=Subject Name Default
  3. Edit the keyUsageExtDefaultImpl and keyUsageExtConstraintImpl configurations. They should have the following settings:
    • keyUsageCrlSign=true
    • keyUsageDataEncipherment=false
    • keyUsageDecipherOnly=false
    • keyUsageDigitalSignature=true
    • keyUsageEncipherOnly=false
    • keyUsageKeyAgreement=false
    • keyUsageKeyCertSign=true
    • keyUsageKeyEncipherment=false
    • keyUsageNonRepudiation=true
  4. Add the Basic Constraints extension configuration:

    policyset.serverCertSet.15.constraint.class_id=basicConstraintsExtConstraintImpl
    policyset.serverCertSet.15.constraint.name=Basic Constraint Extension Constraint
    policyset.serverCertSet.15.constraint.params.basicConstraintsCritical=true
    policyset.serverCertSet.15.constraint.params.basicConstraintsIsCA=true
    policyset.serverCertSet.15.constraint.params.basicConstraintsMinPathLen=0
    policyset.serverCertSet.15.constraint.params.basicConstraintsMaxPathLen=0
    policyset.serverCertSet.15.default.class_id=basicConstraintsExtDefaultImpl
    policyset.serverCertSet.15.default.name=Basic Constraints Extension Default
    policyset.serverCertSet.15.default.params.basicConstraintsCritical=true
    policyset.serverCertSet.15.default.params.basicConstraintsIsCA=true
    policyset.serverCertSet.15.default.params.basicConstraintsPathLen=0

    Add the new component’s index to the component list, to ensure it gets processed:

    policyset.serverCertSet.list=1,2,3,4,5,6,7,8,9,10,11,12,15
  5. Remove the commonNameToSANDefaultImpl and Extended Key Usage related components. This can be accomplished by removing the relevant indices (in my case, 7 and 12) from the component list:

    policyset.serverCertSet.list=1,2,3,4,5,6,8,9,10,11,15
  6. (Optional) edit the validity period in the validityDefaultImpl and validityConstraintImpl components. The default is 731 days. I did not change it.

For the avoidance of doubt, the diff between the caIPAserviceCert profile configuration and SubCA is:

--- caIPAserviceCert.cfg        2018-08-21 12:44:01.748884778 +1000
+++ SubCA.cfg   2018-08-21 14:05:53.484698688 +1000
@@ -13,5 +13,3 @@
-policyset.serverCertSet.1.constraint.class_id=subjectNameConstraintImpl
-policyset.serverCertSet.1.constraint.name=Subject Name Constraint
-policyset.serverCertSet.1.constraint.params.accept=true
-policyset.serverCertSet.1.constraint.params.pattern=CN=[^,]+,.+
-policyset.serverCertSet.1.default.class_id=subjectNameDefaultImpl
+policyset.serverCertSet.1.constraint.class_id=noConstraintImpl
+policyset.serverCertSet.1.constraint.name=No Constraint
+policyset.serverCertSet.1.default.class_id=userSubjectNameDefaultImpl
@@ -19 +16,0 @@
-policyset.serverCertSet.1.default.params.name=CN=$request.req_subject_name.cn$, o=IPA.LOCAL 201808022359
@@ -66,2 +63,2 @@
-policyset.serverCertSet.6.constraint.params.keyUsageCrlSign=false
-policyset.serverCertSet.6.constraint.params.keyUsageDataEncipherment=true
+policyset.serverCertSet.6.constraint.params.keyUsageCrlSign=true
+policyset.serverCertSet.6.constraint.params.keyUsageDataEncipherment=false
@@ -72,2 +69,2 @@
-policyset.serverCertSet.6.constraint.params.keyUsageKeyCertSign=false
-policyset.serverCertSet.6.constraint.params.keyUsageKeyEncipherment=true
+policyset.serverCertSet.6.constraint.params.keyUsageKeyCertSign=true
+policyset.serverCertSet.6.constraint.params.keyUsageKeyEncipherment=false
@@ -78,2 +75,2 @@
-policyset.serverCertSet.6.default.params.keyUsageCrlSign=false
-policyset.serverCertSet.6.default.params.keyUsageDataEncipherment=true
+policyset.serverCertSet.6.default.params.keyUsageCrlSign=true
+policyset.serverCertSet.6.default.params.keyUsageDataEncipherment=false
@@ -84,2 +81,2 @@
-policyset.serverCertSet.6.default.params.keyUsageKeyCertSign=false
-policyset.serverCertSet.6.default.params.keyUsageKeyEncipherment=true
+policyset.serverCertSet.6.default.params.keyUsageKeyCertSign=true
+policyset.serverCertSet.6.default.params.keyUsageKeyEncipherment=false
@@ -111,2 +108,13 @@
-policyset.serverCertSet.list=1,2,3,4,5,6,7,8,9,10,11,12
-profileId=caIPAserviceCert
+policyset.serverCertSet.15.constraint.class_id=basicConstraintsExtConstraintImpl
+policyset.serverCertSet.15.constraint.name=Basic Constraint Extension Constraint
+policyset.serverCertSet.15.constraint.params.basicConstraintsCritical=true
+policyset.serverCertSet.15.constraint.params.basicConstraintsIsCA=true
+policyset.serverCertSet.15.constraint.params.basicConstraintsMinPathLen=0
+policyset.serverCertSet.15.constraint.params.basicConstraintsMaxPathLen=0
+policyset.serverCertSet.15.default.class_id=basicConstraintsExtDefaultImpl
+policyset.serverCertSet.15.default.name=Basic Constraints Extension Default
+policyset.serverCertSet.15.default.params.basicConstraintsCritical=true
+policyset.serverCertSet.15.default.params.basicConstraintsIsCA=true
+policyset.serverCertSet.15.default.params.basicConstraintsPathLen=0
+policyset.serverCertSet.list=1,2,3,4,5,6,8,9,10,11,15
+profileId=SubCA

Now import the profile:

[f28-0]% ipa certprofile-import SubCA \
            --desc "Subordinate CA" \
            --file SubCA.cfg \
            --store=1
------------------------
Imported profile "SubCA"
------------------------
  Profile ID: SubCA
  Profile description: Subordinate CA
  Store issued certificates: TRUE

Creating the CA ACL

Before issuing a certificate, CA ACLs are checked to determine if the combination of CA, profile and subject principal is acceptable. We must create a CA ACL that permits use of the SubCA profile to issue certificate to our subject principal:

[f28-0]% ipa caacl-add SubCA
--------------------
Added CA ACL "SubCA"
--------------------
  ACL name: SubCA
  Enabled: TRUE

[f28-0]% ipa caacl-add-profile SubCA --certprofile SubCA
  ACL name: SubCA
  Enabled: TRUE
  Profiles: SubCA
-------------------------
Number of members added 1
-------------------------

[f28-0]% ipa caacl-add-ca SubCA --ca ipa
  ACL name: SubCA
  Enabled: TRUE
  CAs: ipa
  Profiles: SubCA
-------------------------
Number of members added 1
-------------------------

[f28-0]% ipa caacl-add-host SubCA --hosts sub.ipa.local
  ACL name: SubCA
  Enabled: TRUE
  CAs: ipa
  Profiles: SubCA
  Hosts: sub.ipa.local
-------------------------
Number of members added 1
-------------------------

Installing the secondary FreeIPA deployment

We are finally ready to run ipa-server-install to set up the secondary deployment. We need to use the --ca-subject option to override the default Subject DN that will be included in the CSR, providing a valid DN according to the rules discussed above.

[root@f28-1]# ipa-server-install \
    --realm SUB.IPA.LOCAL \
    --domain sub.ipa.local \
    --external-ca \
    --ca-subject 'CN=SUB.IPA.LOCAL,O=Red Hat'

...

The IPA Master Server will be configured with:
Hostname:       f28-1.ipa.local
IP address(es): 192.168.124.142
Domain name:    sub.ipa.local
Realm name:     SUB.IPA.LOCAL

The CA will be configured with:
Subject DN:   CN=SUB.IPA.LOCAL,O=Red Hat
Subject base: O=SUB.IPA.LOCAL
Chaining:     externally signed (two-step installation)

Continue to configure the system with these values? [no]: yes

...

Configuring certificate server (pki-tomcatd). Estimated time: 3 minutes
  [1/8]: configuring certificate server instance

The next step is to get /root/ipa.csr signed by your CA and re-run
/usr/sbin/ipa-server-install as:
/usr/sbin/ipa-server-install
  --external-cert-file=/path/to/signed_certificate
  --external-cert-file=/path/to/external_ca_certificate
The ipa-server-install command was successful

Let’s inspect /root/ipa.csr:

[root@f28-1]# openssl req -text < /root/ipa.csr |grep Subject:
        Subject: O = Red Hat, CN = SUB.IPA.LOCAL

The desired Subject DN appears in the CSR (note that openssl shows DN components in the opposite order from FreeIPA). After copying the CSR to f28-0.ipa.local we can request the certificate:

[f28-0]% ipa cert-request ~/ipa.csr \
            --principal host/sub.ipa.local \
            --profile SubCA \
            --certificate-out ipa.pem
  Issuing CA: ipa
  Certificate: MIIEAzCCAuugAwIBAgIBFTANBgkqhkiG9w0BAQsF...
  Subject: CN=SUB.IPA.LOCAL,O=Red Hat
  Issuer: CN=Certificate Authority,O=IPA.LOCAL 201808022359
  Not Before: Tue Aug 21 04:16:24 2018 UTC
  Not After: Fri Aug 21 04:16:24 2020 UTC
  Serial number: 21
  Serial number (hex): 0x15

The certificate was saved in the file ipa.pem. We can see from the command output that the Subject DN in the certificate is exactly what was in the CSR. Further inspecting the certificate, observe that the Basic Constraints extension is present and the Key Usage extension contains the appropriate assertions:

[f28-0]% openssl x509 -text < ipa.pem
...
      X509v3 extensions:
          ...
          X509v3 Key Usage: critical
              Digital Signature, Non Repudiation, Certificate Sign, CRL Sign
          ...
          X509v3 Basic Constraints: critical
              CA:TRUE, pathlen:0
          ...

Now, after copying the just-issued subordinate CA certificate and the primary CA certificate (/etc/ipa/ca.crt) over to f28-1.ipa.local, we can continue the installation:

[root@f28-1]# ipa-server-install \
                --external-cert-file ca.crt \
                --external-cert-file ipa.pem

The log file for this installation can be found in /var/log/ipaserver-install.log
Directory Manager password: XXXXXXXX

...

Adding [192.168.124.142 f28-1.ipa.local] to your /etc/hosts file
Configuring ipa-custodia
  [1/5]: Making sure custodia container exists
...
The ipa-server-install command was successful

And we’re done.

Discussion

I’ve shown how to create a profile for issuing subordinate CA certificates in FreeIPA. Because of the way FreeIPA validates certificate requests (always against a subject principal), there are restrictions on what the Subject DN of the subordinate CA can be. The Subject DN must contain a CN attribute matching either the hostname of a host or service principal, or the UID of a user principal.

If you want to avoid these Subject DN restrictions, right now there is no choice but to use the Dogtag CA directly, instead of via the FreeIPA commands. If such a requirement emerges it might make sense to implement some “special handling” for issuing sub-CA certificates (similar to what we currently do for the KDC certificate). But the certificate request logic is already complicated; I am hesitant to complicate it even more.

Currently there is no sub-CA profile included in FreeIPA by default. It might make sense to include it, or at least to produce an official solution document describing the procedure outlined in this post.

August 21, 2018 12:00 AM

August 05, 2018

William Brown

Photography - Why You Should Use JPG (not RAW)

When I started my modern journey into photography, I simply shot in JPG. I was happy with the results, and the images I was able to produce. It was only later that I was introduced to a now good friend and he said: “You should always shoot RAW! You can edit so much more if you do.”. It’s not hard to find many ‘beginner’ videos all touting the value of RAW for post editing, and how it’s the step from beginner to serious photographer (and editor).

Today, I would like to explore why I have turned off RAW on my camera bodies for good. This is a deeply personal decision, and I hope that my experience helps you to think about your own creative choices. If you want to stay shooting RAW and editing - good on you. If this encourages you to try turning back to JPG - good on you too.

There are two primary reasons for why I turned off RAW:

  • Colour reproduction of in-body JPG is better to the eye today.
  • Photography is about composing an image from what you have in front of you.

Colour is about experts (and detail)

I have always been unhappy with the colour output of my editing software when processing RAW images. As someone who is colour blind, I did not know if it was just my perception or if real issues existed. No one else complained, so it must just be me, right?

Eventually I stumbled on an article about how to develop real colour and extract camera film simulations for my editor. I was interested in both the ability to get true reflections of colour in my images, but also to use the film simulations in post (the black and white of my camera body is beautiful and soft, but my editor is harsh).

I spent a solid week testing and profiling both of my cameras. I quickly realised a great deal about what was occurring in my editor, and also in my camera body.

The editor I have is attempting to generalise over the entire set of sensors that a manufacturer has created. It is also attempting to create a true-colour output profile that is as reflective of reality as possible. So when I was exporting RAWs to JPG, I was seeing the difference between what my camera hardware produces and the editor’s profiles. (This was particularly bad on my older body, so I suspect the RAW profiles are designed for the newer sensor.)

I then created film simulations and quickly noticed the subtle changes. Blacks were blacker, but retained more fine detail with the simulation. Skin tone was softer. Exposure was more even across a variety of image types. How? RAW and my editor are meant to create the best image possible. Why was a film simulation I had “extracted” creating better images?

As any good engineer would do I created sample images. A/B testing. I would provide the RAW processed by my editor, and a RAW processed with my film simulation. I would vary the left/right of the image, exposure, subject, and more. After about 10 tests across 5 people, only on one occasion did someone prefer the RAW from my editor.

At this point I realised that my camera manufacturer is hiring experts who build, live and breathe colour technology. They have tested and examined everything about the body I have, and likely calibrated it individually in the process to produce exact reproductions of what they see in a lab. They are developing colour profiles that are not just broadly applicable, but also pleasing to look at (even if not accurate reproductions).

So how can my film simulations I extracted and built in a week, measure up to the experts? I decided to find out. I shot test images in JPG and RAW and began to provide A/B/C tests to people.

If the editor RAW was washed out compared to the RAW with my film simulation, the JPG from the body made both pale in comparison. Every detail was better, across a range of conditions. The features in my camera body are better than those of my editor: noise reduction, dynamic range, sharpening, softening, colour saturation. I was holding in my hands a device with thousands of hours of expert design behind it, which could eclipse anything I had built on a weekend for fun.

It was then I came to think about and realise …

Composition (and effects) is about you

Photography is a complex skill. It’s not having a fancy camera and just clicking the shutter, or zooming in. Photography is about taking that camera and putting it in a position to take a well composed image based on many rules (and exceptions) that I am still continually learning.

When you stop to look at an image you should always think “how can I compose the best image possible?”.

So why shoot in RAW? RAW is all about enabling editing in post, after you have already composed and taken the image. There are valid times and useful functions for editing - for example, white balance correction and minor cropping in some cases. Both of these are easily done with JPG with no loss in quality compared to RAW, and I still commonly do both.

However RAW allows you to recover mistakes during composition (to a point). For example, the powerful base-curve fusion module allows dynamic range “after the fact”. You may even add high or low pass filters, or mask areas to filter and affect the colour to make things pop, or want that RAW data to make your vibrance control as perfect as possible. You may change the perspective, or even add filters and more. Maybe you want to optimise de-noise to make smooth high ISO images. There are so many options!

But all these things are about you composing after the fact. Today, many of these functions are in your camera - and better performing. So while I’m composing I can enable dynamic range for the darker elements of the frame. I can compose and add my colour saturation (or remove it). I can sharpen and soften. I can move my own body to change perspective. All of this happens at the time I am building the image in my mind: as I compose, I am able to decide on the creative effects I want to place in that image. I’m no longer just composing within a frame, but on a canvas of potential effects.

To me this was an important distinction. I always found I was editing poorly-composed images in an attempt to “fix” them to something acceptable. Instead I should have been looking at how to compose them from the start to be great, using the tool in my hand - my camera.

Really, this is a decision that is yours. Do you spend more time now to make the image you want? Or do you spend it later editing to achieve what you want?

Conclusion

Photography is a creative process. You will have your own ideas of how that process should look, and how you want to work with it. Great! This was my experience and how I have arrived at a creative process that I am satisfied with. I hope that it provides you an alternate perspective to the generally accepted “RAW is imperative” line that many people advertise.

August 05, 2018 02:00 PM
