FreeIPA Identity Management planet - technical blogs

December 07, 2016

Rich Megginson

Monitoring Fluentd and the Elasticsearch output plugin

Fluentd has a monitor input plugin: http://docs.fluentd.org/articles/monitoring

Unfortunately, the documentation is pretty scant, and some of the useful, interesting endpoints and options are not documented. I've captured some of that missing information below, and shown how it can be used to monitor the Elasticsearch output plugin.
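
These endpoints are served by the monitor_agent input plugin, so it has to be enabled in the fluentd configuration first. Judging from the plugin config echoed back in the JSON output below (and the config_path shown later), the relevant snippet in /etc/fluent/fluent.conf should look roughly like this:
$ more /etc/fluent/fluent.conf
...
<source>
  @type monitor_agent
  bind 0.0.0.0
  port 24220
</source>
...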

Endpoints

/api/plugins

Provides information about each plugin in a text based columnar format:
$ curl -s http://localhost:24220/api/plugins
plugin_id:object:1dce4b0        plugin_category:input   type:monitor_agent      output_plugin:false     retry_count:
plugin_id:object:11b4120        plugin_category:input   type:systemd    output_plugin:false     retry_count:
plugin_id:object:19fb914        plugin_category:output  type:rewrite_tag_filter output_plugin:true      retry_count:
...

/api/plugins.json

Same as /api/plugins except in JSON format:
$ curl -s http://localhost:24220/api/plugins.json | python -mjson.tool
{
    "plugins": [
        {
            "config": {
                "@type": "monitor_agent",
                "bind": "0.0.0.0",
                "port": "24220"
            },
            "output_plugin": false,
            "plugin_category": "input",
            "plugin_id": "object:1dce4b0",
            "retry_count": null,
            "type": "monitor_agent"
        },
...

/api/config

Provides basic fluentd configuration information in text format:
$ curl -s http://localhost:24220/api/config
pid:19  ppid:1  config_path:/etc/fluent/fluent.conf     pid_file:       plugin_dirs:["/etc/fluent/plugin"]      log_path:

/api/config.json

Provides basic fluentd configuration information in JSON format:
$ curl -s http://localhost:24220/api/config.json | python -mjson.tool
{
    "config_path": "/etc/fluent/fluent.conf",
    "log_path": null,
    "pid": 19,
    "pid_file": null,
    "plugin_dirs": [
        "/etc/fluent/plugin"
    ],
    "ppid": 1
}

Query String Options

debug

For plugins, this will print all of the instance variables:
$ curl -s http://localhost:24220/api/plugins.json\?debug=1 | python -mjson.tool
{
    "plugins": [
        {
            "config": {
                "@type": "monitor_agent",
                "bind": "0.0.0.0",
                "port": "24220"
            },
            "instance_variables": {
                "bind": "0.0.0.0",
                "emit_config": false,
                "emit_interval": 60,
...

@type

Search for plugin by @type:
$ curl -s http://localhost:24220/api/plugins.json\?@type=monitor_agent | python -mjson.tool
{
    "plugins": [
        {
            "config": {
                "@type": "monitor_agent",
                "bind": "0.0.0.0",
                "port": "24220"
            },
            "output_plugin": false,
            "plugin_category": "input",
            "plugin_id": "object:1dce4b0",
            "retry_count": null,
            "type": "monitor_agent"
        }
    ]
}

@id

Search for plugin by @id. For example, in the above output, there is "plugin_id": "object:1dce4b0". Once you have identified the id, you can use that to display only the information for that particular id:
$ curl -s http://localhost:24220/api/plugins.json\?@id=object:1dce4b0 | python -mjson.tool
{
    "plugins": [
        {
            "config": {
                "@type": "monitor_agent",
                "bind": "0.0.0.0",
                "port": "24220"
            },
            "output_plugin": false,
            "plugin_category": "input",
            "plugin_id": "object:1dce4b0",
            "retry_count": null,
            "type": "monitor_agent"
        }
    ]
}

tag

Match the given tag and get the info from the matching output plugin. This only works on output plugins. I unfortunately don't have an example, but I suppose you could use something like this to find the output plugins whose match block matches sendtoforwarder:
$ curl -s http://localhost:24220/api/plugins.json\?tag=prefix_sendtoforwarder_suffix | python -mjson.tool
{
    "plugins": [
        {
...

Debugging the Fluentd Elasticsearch plugin


First, identify the output plugin in question to get the plugin id:
$ curl -s http://localhost:24220/api/plugins.json\?@type=elasticsearch_dynamic | python -mjson.tool
{
    "plugins": [
        {
            "buffer_queue_length": 0,
            "buffer_total_queued_size": 0,
            "config": {
                "@type": "elasticsearch_dynamic",
...
                "index_name": ".operations.${record['@timestamp'].nil? ? Time.at
(time).getutc.strftime(@logstash_dateformat) : Time.parse(record['@timestamp']).
getutc.strftime(@logstash_dateformat)}",
...
            "plugin_id": "object:1b4cc64",
...

This is the one I'm looking for, which has a plugin id of object:1b4cc64. Next, I can use the @id parameter in conjunction with the debug one to get some interesting statistics:
$ curl -s http://localhost:24220/api/plugins.json\?@id=object:1b4cc64\&debug=1 | \
  python -mjson.tool | \
  egrep 'buffer_total_queued_size|emit_count'
            "buffer_total_queued_size": 0,
                "emit_count": 3164,

I can even put this in a simple loop to see how the queue size and emit count change over time:
$ while true ; do
  date
  curl -s http://localhost:24220/api/plugins.json\?@id=object:1b4cc64\&debug=1 | \
    python -mjson.tool | egrep 'buffer_total_queued_size|emit_count'
  sleep 1
done
Wed Dec  7 23:56:18 UTC 2016
            "buffer_total_queued_size": 0,
                "emit_count": 3318,
Wed Dec  7 23:56:21 UTC 2016
            "buffer_total_queued_size": 1654,
                "emit_count": 3322,
Wed Dec  7 23:56:23 UTC 2016
            "buffer_total_queued_size": 2146,
                "emit_count": 3324,
Wed Dec  7 23:56:25 UTC 2016
            "buffer_total_queued_size": 0,
                "emit_count": 3326,

This tells me that the plugin is working, the queues are being flushed regularly, and the emit count (roughly, the number of times fluentd flushes the queued output, i.e. the number of requests made to Elasticsearch) is steadily increasing.
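
If jq happens to be available, the same two fields can be pulled out a bit more robustly than with egrep. This is just a sketch assuming the field layout shown above (buffer_total_queued_size at the top level of the plugin object, emit_count inside instance_variables):
$ curl -s http://localhost:24220/api/plugins.json\?@id=object:1b4cc64\&debug=1 | \
    jq '.plugins[] | {buffer_total_queued_size, emit_count: .instance_variables.emit_count}'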

December 07, 2016 11:58 PM

December 06, 2016

Red Hat Blog

PCI Series: Requirement 8 – Identify and Authenticate Access to System Components

This post continues my series dedicated to the use of Identity Management (IdM) and related technologies to address the Payment Card Industry Data Security Standard (PCI DSS).  This specific post is related to requirement eight (i.e. the requirement to identify and authenticate access to system components). The outline and mapping of individual articles to requirements can be found in the overarching post that started the series.

Requirement eight is directly related to IdM. IdM can be used to address most of the requirements in this section. IdM stores user accounts, provides user account life-cycle management (from creation to termination), and controls the different types of credentials that users can use to authenticate (e.g. passwords, certificates, and one-time-password tokens); it also defines policies related to a number of associated credentials (e.g. password complexity, strength, and expiration policies or account lockout and retry policies). The details about these capabilities can be found in different chapters of the Linux Domain Identity, Authentication, and Policy Guide.

Requirement 8.3 explicitly calls for multi-factor authentication. IdM has integrated support for open standard OTP tokens (e.g. YubiKey, FreeOTP, and Google Authenticator) and can also leverage existing authentication systems like, for example, RSA Authentication Manager. IdM can even be used as a back-end for RADIUS/TACACS or for a VPN server – allowing 2FA for remote access into a given network.
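
As a rough illustration of the OTP support, a software TOTP token can be created for a user straight from the IdM command line (the user and description here are made up; ipa help otptoken-add lists the exact options):
$ kinit admin
$ ipa otptoken-add --type=totp --owner=jdoe --desc="laptop soft token"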

Questions about how Identity Management relates to requirement eight? Reach out using the comments section (below).

by Dmitri Pal at December 06, 2016 11:00 PM

Florence Blanc-Renaud

Using Certmonger to track certificates

When FreeIPA is installed with an integrated IdM CA, it uses certmonger to track and renew its certificates. But what exactly does this mean?

When the certificates approach their expiration date, certmonger detects that they need to be renewed and takes care of the renewal (it requests a renewed certificate, installs the new certificate in the right location, and finally restarts the service so that it picks up the new certificate). This means that the system administrator no longer needs to worry about renewals!

Well… when everything works, it really is a great feature. But sometimes a small problem can prevent the renewal, and FreeIPA ends up with expired certificates and HTTP or LDAP services refusing to start. In that case, it can be really difficult to understand what has gone wrong and how to fix the issue.

In this post, I will explain what happens behind the scenes with certmonger, so that you know where to look if you need to troubleshoot.

Certmonger concepts

Certmonger daemon and CLI

Certmonger provides two main components:

  • the certmonger daemon, the “engine” that tracks the list of certificates and launches renewal commands
  • the command-line interface, getcert, which lets you send commands to the certmonger daemon (for instance request a new certificate, list the tracked certificates, start or stop tracking a certificate, renew a certificate…); an example request is shown below
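
For example, a request for a new tracked certificate might look something like this (the database, nickname, principal and post-save command are placeholders; see the getcert man pages for the full option list):
$ getcert request -c IPA -d /etc/httpd/alias -n Server-Cert \
    -K HTTP/ipaserver.domain.com -C /usr/lib64/ipa/certmonger/restart_httpd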

Certificate Authority

Certmonger provides a generic interface for communicating with various certificate systems, such as Dogtag or FreeIPA. A simple definition of a Certificate System would be a software solution able to deliver certificates. This makes it possible to use the same certmonger commands regardless of the Certificate System that will actually handle the request: the getcert command simply reads the -c argument to know which Certificate Authority to interface with.

Then certmonger needs to know how to interface with each type of Certificate System. This is done by defining Certificate Authorities that can be listed with:

$ getcert list-cas
CA 'SelfSign':
 is-default: no
 ca-type: INTERNAL:SELF
 next-serial-number: 01
CA 'IPA':
 is-default: no
 ca-type: EXTERNAL
 helper-location: /usr/libexec/certmonger/ipa-submit
[...]

Each section starting with ‘CA’ defines a type of Certificate Authority that certmonger knows how to handle. The output of the command also shows a helper-location, which is the command that certmonger will run to communicate with that Certificate Authority. For instance:

$ getcert list-cas -c IPA
CA 'IPA':
 is-default: no
 ca-type: EXTERNAL
 helper-location: /usr/libexec/certmonger/ipa-submit

shows that certmonger will run the command “/usr/libexec/certmonger/ipa-submit” when interfacing with the IPA certificate authority.

Each helper command follows an interface defined by certmonger: for instance, certmonger sets environment variables to provide the operation to execute, the CSR, and so on.

Certificate tracking

List of tracked certificates

To see the list of certificates currently tracked by certmonger, use the command getcert list. It shows a lot of information:

  • the certificate location (for instance HTTP server cert is stored in the NSS database /etc/httpd/alias)
  • the certificate nickname
  • the file storing the pin
  • the Certificate Authority that will be used to renew the certificate
  • the expiration date
  • the status of the certificate (MONITORING when it is tracked and not expired)

For instance, to list all the tracking requests for certificates with a nickname “Server-Cert” stored in the NSS db /etc/httpd/alias:

$ getcert list -n Server-Cert -d /etc/httpd/alias/
Number of certificates and requests being tracked: 8.
Request ID '20161122101308':
 status: MONITORING
 stuck: no
 key pair storage: type=NSSDB,location='/etc/httpd/alias',nickname='Server-Cert',token='NSS Certificate DB',pinfile='/etc/httpd/alias/pwdfile.txt'
 certificate: type=NSSDB,location='/etc/httpd/alias',nickname='Server-Cert',token='NSS Certificate DB'
 CA: IPA
 issuer: CN=Certificate Authority,O=DOMAIN.COM
 subject: CN=ipaserver.domain.com,O=DOMAIN.COM
 expires: 2018-11-23 10:09:34 UTC
 key usage: digitalSignature,nonRepudiation,keyEncipherment,dataEncipherment
 eku: id-kp-serverAuth,id-kp-clientAuth
 pre-save command: 
 post-save command: /usr/lib64/ipa/certmonger/restart_httpd
 track: yes
 auto-renew: yes

Certificate renewal

When a certificate is near its expiration date, the certmonger daemon will automatically issue a renewal command using the CA helper, obtain a renewed certificate and replace the previous cert with the new one.

It is also possible to manually renew a certificate in advance by using the command getcert resubmit -i <id>, where <id> is the Request ID displayed by getcert list for the targeted certificate. This command will renew the certificate using the right helper command.
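
For example, using the Request ID from the listing above:

$ getcert resubmit -i 20161122101308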

Start/Stop tracking a certificate

The commands getcert start-tracking and getcert stop-tracking enable or disable the monitoring of a certificate. It is important to understand that they do not manipulate the certificate (stop-tracking does not delete it or remove it from the NSS database) but simply add/remove the certificate to/from the list of monitored certificates.
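
For instance, to stop monitoring the HTTP server certificate from the earlier listing and then track it again with the same CA and post-save command (a sketch; the getcert-start-tracking man page describes the exact options):

$ getcert stop-tracking -d /etc/httpd/alias -n Server-Cert
$ getcert start-tracking -d /etc/httpd/alias -n Server-Cert -c IPA \
    -C /usr/lib64/ipa/certmonger/restart_httpd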

Pre and post-save commands

When a certificate is tracked by certmonger, it can be useful to define pre-save and post-save commands that certmonger will call during the renewal process. For instance:

$ getcert list -n Server-Cert -d /etc/httpd/alias/
Number of certificates and requests being tracked: 8.
Request ID '20161122101308':
 status: MONITORING
 stuck: no
 key pair storage: type=NSSDB,location='/etc/httpd/alias',nickname='Server-Cert',token='NSS Certificate DB',pinfile='/etc/httpd/alias/pwdfile.txt'
 certificate: type=NSSDB,location='/etc/httpd/alias',nickname='Server-Cert',token='NSS Certificate DB'
[...]
 pre-save command: 
 post-save command: /usr/lib64/ipa/certmonger/restart_httpd
 track: yes
 auto-renew: yes

shows that the renewal of the HTTPd Server-Cert:

  • will be handled by the IPA Certificate Authority (remember, we can find the associated helper using getcert list-cas -c IPA)
  • will also launch the command restart_httpd

This is useful when a service needs to be restarted in order to pick up the new certificate.

Troubleshooting

Certmonger logs

Certmonger logs to the journal. For instance, when a certificate is near its expiration date, the journal will show:

$ sudo journalctl -xe -t certmonger | more
Nov 05 11:35:47 ipaserver.domain.com certmonger[59223]: Certificate named "auditSigningCert cert-pki-ca" in token "NSS Certificate DB" in database "/etc/pki/pki-tomcat/alias" will not be valid after 20161115150822.

And when the certificate has been automatically renewed, the journal will show:

$ journalctl -t certmonger | more
Nov 24 12:23:15 ipaserver.domain.com certmonger[36674]: Certificate named "ipaCert" in token "NSS Certificate DB" in database "/etc/httpd/alias" issued by CA and saved.

Output of getcert list

It is possible to check the status for each certificate using getcert list:

  • when the certificate is still valid, the status should be MONITORING.
  • when the certificate is near its expiration date, certmonger will request its renewal and the status will change from MONITORING to SUBMITTING and finally back to MONITORING (you may also see intermediate status PRE_SAVE_CERT and POST_SAVE_CERT).

When the renewal fails, getcert list will also show an error message. It will help determine which phase failed, and from there you will need to check the logs specific to the CA helper or to the pre-save or post-save commands.

In the next post, I will detail the errors that can arise with the helpers used with FreeIPA.


by floblanc at December 06, 2016 01:17 PM

November 28, 2016

Red Hat Blog

PCI Series: Requirement 7 – Restrict Access to Cardholder Data by Business Need to Know

This is my sixth post dedicated to the use of Identity Management (IdM) and related technologies to address the Payment Card Industry Data Security Standard (PCI DSS).  This specific post is related to requirement seven (i.e. the requirement to restrict access to cardholder data by business need to know).  The outline and mapping of individual articles to the requirements can be found in the overarching post that started the series.

Section 7 of the PCI DSS standard talks about access control and limiting the privileges of administrative accounts.  IdM can play a big role in addressing these requirements.  IdM provides several key features that are related to access control and privileged account management.  The first one is host-based access control (HBAC).  With HBAC, one can centrally define which groups of users can access which groups of systems using which login services.  Another feature is the ability to centrally define sudo rules that control which users can run which commands on which systems as other users (usually as root).  Yet another capability worth mentioning is the ability to define how user accounts are mapped to SELinux users.  Using this feature one can, for example, prevent developer accounts from touching executables on production machines while still allowing them read access to parts of the application data and logs for better troubleshooting of potential bugs or misconfigurations.
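
A rough sketch of what these three capabilities look like from the IdM command line (all rule, group and host group names below are invented for illustration, and a real sudo rule would also need hosts and run-as settings):

$ ipa hbacrule-add sysadmin-ssh
$ ipa hbacrule-add-user sysadmin-ssh --groups=sysadmins
$ ipa hbacrule-add-host sysadmin-ssh --hostgroups=cde-servers
$ ipa hbacrule-add-service sysadmin-ssh --hbacsvcs=sshd

$ ipa sudocmd-add /usr/bin/systemctl
$ ipa sudorule-add restart-services
$ ipa sudorule-add-allow-command restart-services --sudocmds=/usr/bin/systemctl
$ ipa sudorule-add-user restart-services --groups=webadmins

$ ipa selinuxusermap-add developers-limited --selinuxuser=user_u:s0
$ ipa selinuxusermap-add-user developers-limited --groups=developers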

Questions about how Identity Management relates to requirement seven? Reach out using the comments section (below).

by Dmitri Pal at November 28, 2016 03:30 PM

November 02, 2016

Alexander Bokovoy

FreeIPA JSON-RPC API article

Since version 4.2, the FreeIPA Web UI has provided a browser for discovering the application programming interface (API). However, the API itself is not yet officially supported and there is no documentation on how to access it. Some time ago I wrote a blog post detailing how to access the API from an external client, like the curl utility. The blog post was quite popular and allowed bindings to the FreeIPA API to be created in Perl and other languages. However, the blog post assumed you knew what you were doing. In order to help those starting from scratch, I wrote a larger article, FreeIPA management API in a nutshell.

The article is available in a new documentation section so that it stays independent of the blog. I plan to extend the API documentation and add more details later. The document assumes you are using FreeIPA 4.4 or later which is available in upcoming Fedora 25 and RHEL 7.3 releases.
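
For the impatient, the flow described in that article boils down to roughly the following two curl calls (the hostname, credentials and API version string are placeholders here; see the article for the details and for proper certificate handling instead of -k):

$ curl -s -k -c cookie.jar \
    -H 'referer: https://ipa.example.test/ipa' \
    -H 'Accept: text/plain' \
    -d 'user=admin' -d 'password=Secret123' \
    https://ipa.example.test/ipa/session/login_password

$ curl -s -k -b cookie.jar \
    -H 'referer: https://ipa.example.test/ipa' \
    -H 'Content-Type: application/json' -H 'Accept: application/json' \
    -d '{"method":"user_find","params":[[""],{"version":"2.164"}],"id":0}' \
    https://ipa.example.test/ipa/session/json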

November 02, 2016 02:00 PM

October 14, 2016

Rich Megginson

External Elasticsearch route with OpenShift logging

The Elasticsearch deployed with OpenShift aggregated logging is not accessible externally, outside the logging cluster, by default. The intention is that Kibana will be used to access the data, and the various ways to deploy/install OpenShift with logging allow you to specify the externally visible hostname that Kibana (including the separate operations cluster) will use. However, there are many tools that want to access the data from Elasticsearch. This post describes how to enable a route for external access to Elasticsearch.


You will first need an FQDN for the Elasticsearch (and a separate FQDN for the Elasticsearch ops instance if using the separate operations cluster). I am testing with an all-in-one (OpenShift master + node + logging components) install on an OpenStack machine, which has a private IP and hostname, and a public (floating) IP and hostname. In a real deployment, the public IP addresses and hostnames for the elasticsearch services will need to be added to DNS.
private host, IP: host-192-168-78-2.openstacklocal, 192.168.78.2
public host, IP: run-logging-source.oshift.rmeggins.test.novalocal, 10.x.y.z 

I have done the following on my local machine and in the all-in-one machine, by hacking /etc/hosts. All-in-one machine:
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.x.y.z run-logging-source.oshift.rmeggins.test.novalocal kibana.run-logging-source.oshift.rmeggins.test kibana-ops.run-logging-source.oshift.rmeggins.test es.run-logging-source.oshift.rmeggins.test es-ops.run-logging-source.oshift.rmeggins.test

My local machine:
10.x.y.z run-logging-source.oshift.rmeggins.test.novalocal run-logging-source.oshift.rmeggins.test kibana.run-logging-source.oshift.rmeggins.test kibana-ops.run-logging-source.oshift.rmeggins.test es.run-logging-source.oshift.rmeggins.test es-ops.run-logging-source.oshift.rmeggins.test

I set up a router after installing OpenShift:
$ oc create serviceaccount router -n default
$ oadm policy add-scc-to-user privileged system:serviceaccount:default:router
$ oadm policy add-cluster-role-to-user cluster-reader system:serviceaccount:default:router
$ oadm router --create --namespace default --service-account=router \
     --credentials $MASTER_CONFIG_DIR/openshift-router.kubeconfig

$ oc get pods -n default
NAME                      READY     STATUS    RESTARTS   AGE
docker-registry-1-7z0gq   1/1       Running   0          35m
router-1-8bp88            1/1       Running   0          24m

$ oc logs -n default router-1-8bp88
I1010 19:57:57.815578       1 router.go:161] Router is including routes in all namespaces
I1010 19:57:57.922277       1 router.go:404] Router reloaded:
 - Checking HAProxy /healthz on port 1936 ...
 - HAProxy port 1936 health check ok : 0 retry attempt(s).
...

Logging setup should have already created services for Elasticsearch:
$ oc project logging
$ oc get svc
NAME                     CLUSTER-IP       EXTERNAL-IP   PORT(S)  AGE
logging-es               172.30.76.153    none          9200/TCP 33m
logging-es-ops           172.30.128.108   none          9200/TCP 33m

The route is a reencrypt route. TLS is terminated at the router, then reencrypted using client cert auth to Elasticsearch - SearchGuard is configured to require client cert auth. We use the admin cert/key (extracted using the method from the previous posting). This allows us to use username/password/token authentication to Elasticsearch - the auth is proxied through the router to SearchGuard/Elasticsearch.
$ ca=`mktemp`
$ cert=`mktemp`
$ key=`mktemp`
$ oc get secret logging-elasticsearch \
    --template='{{index .data "admin-ca"}}' | base64 -d > $ca
$ oc get secret logging-elasticsearch \
    --template='{{index .data "admin-cert"}}' | base64 -d > $cert
$ oc get secret logging-elasticsearch \
    --template='{{index .data "admin-key"}}' | base64 -d > $key
$ oc create route -n logging reencrypt --service logging-es \
                        --port 9200 --hostname es.run-logging-source.oshift.rmeggins.test \
                        --dest-ca-cert=$ca --ca-cert=$ca --cert=$cert --key=$key
$ oc create route -n logging reencrypt --service logging-es-ops \
                         --port 9200 --hostname es-ops.run-logging-source.oshift.rmeggins.test \
                         --dest-ca-cert=$ca --ca-cert=$ca --cert=$cert --key=$key

I'm using the AllowAll identity provider so I can just create users/passwords with oc login (for testing):
$ more /tmp/openshift/origin-aggregated-logging/openshift.local.config/master/master-config.yaml
...
oauthConfig:
  identityProviders:
  - challenge: true
    login: true
    mappingMethod: claim
    name: anypassword
    provider:
      apiVersion: v1
      kind: AllowAllPasswordIdentityProvider

I create a user called "kibtest" (I also use this user for kibana testing) that has cluster admin rights:
$ oc login --username=system:admin
$ oc login --username=kibtest --password=kibtest
$ oc login --username=system:admin
$ oadm policy add-cluster-role-to-user cluster-admin kibtest

I get the username and token for kibtest:
$ oc login --username=kibtest --password=kibtest
$ test_token="$(oc whoami -t)"
$ test_name="$(oc whoami)"
$ test_ip="127.0.0.1"
$ oc login --username=system:admin

Now I can use curl like this:
$ curl -s -k -H "X-Proxy-Remote-User: $test_name" -H "Authorization: Bearer $test_token" -H "X-Forwarded-For: 127.0.0.1" https://es.run-logging-source.oshift.rmeggins.test
{
  "name" : "Sugar Man",
  "cluster_name" : "logging-es",
  "version" : {
    "number" : "2.3.5",
    "build_hash" : "90f439ff60a3c0f497f91663701e64ccd01edbb4",
    "build_timestamp" : "2016-07-27T10:36:52Z",
    "build_snapshot" : false,
    "lucene_version" : "5.5.0"
  },
  "tagline" : "You Know, for Search"
}

$ curl -s -k -H "X-Proxy-Remote-User: $test_name" -H "Authorization: Bearer $test_token" -H "X-Forwarded-For: 127.0.0.1" https://es-ops.run-logging-source.oshift.rmeggins.test/.operations.*/_search?q=message:centos | python -mjson.tool | more
{
    "_shards": {
        "failed": 0,
        "successful": 1,
        "total": 1
    },
    "hits": {
        "hits": [
            {
                "_id": "AVewK5inAJ6n02oOdaIc",
                "_index": ".operations.2016.10.10",
                "_score": 11.1106205,
                "_source": {
                    "@timestamp": "2016-10-10T19:46:43.000000+00:00",
                    "hostname": "host-192-168-78-2.openstacklocal",
                    "ident": "docker-current",
                    "ipaddr4": "172.17.0.5",
                    "ipaddr6": "fe80::42:acff:fe11:5",
                    "message": "time=\"2016-10-10T19:46:43.564686094Z\" level=in....."
...

Works the same from my local machine.

October 14, 2016 04:29 PM

October 07, 2016

Adam Young

Securing the Cyrus SASL Sample Server and Client with Kerberos

Since running the Cyrus SASL sample server and client was not too bad, I figured I would see what happened when I tried to secure it using Kerberos.

Mechanisms

I’m going to run this on a system that has been enrolled as a FreeIPA client, so I start with a known good Kerberos setup.

To see the list of mechanisms available, run

sasl2-shared-mechlist 

I have the following available.

Available mechanisms: GSS-SPNEGO,GSSAPI,DIGEST-MD5,CRAM-MD5,ANONYMOUS
Library supports: ANONYMOUS,CRAM-MD5,EXTERNAL,DIGEST-MD5,GSSAPI,GSS-SPNEGO

For Kerberos, I want to use GSSAPI.

Let's do this the hard way, by trial and error. First, run the server, telling it to use the GSSAPI mechanism:

/usr/bin/sasl2-sample-server -p 1789 -h localhost -s hello  -m GSSAPI

Then run the client in another terminal:

sasl2-sample-client -s hello -p 1789  -m GSSAPI localhost

Which includes the following in the output:

starting SASL negotiation: generic failure
SASL(-1): generic failure: GSSAPI Error: Unspecified GSS failure. Minor code may provide more information (No Kerberos credentials available)
closing connection

Kerberos

I need a Kerberos TGT in order to get a service ticket. Use kinit

$ kinit admin
Password for admin@AYOUNG-DELL-T1700.TEST: 

This time the error message is:

starting SASL negotiation: generic failure
SASL(-1): generic failure: GSSAPI Error: Unspecified GSS failure. Minor code may provide more information (Server rcmd/localhost@AYOUNG-DELL-T1700.TEST not found in Kerberos database)

I notice two things here:

  1. The service needs to be in the Kerberos server's directory.
  2. The service name should match the hostname.

 

If I rerun the command using the FQDN of the server, I can see the service name as expected:

 

$ sasl2-sample-client -s hello -p 1789 -m GSSAPI undercloud.ayoung-dell-t1700.test
receiving capability list... ...
SASL(-1): generic failure: GSSAPI Error: Unspecified GSS failure.  Minor code may provide more information (Server hello/undercloud.ayoung-dell-t1700.test@AYOUNG-DELL-T1700.TEST not found in Kerberos database)
closing connection

 

So I tried to create the service in the ipa server:

ipa service-add
Principal: hello/overcloud.ayoung-dell-t1700.test@AYOUNG-DELL-T1700.TEST
ipa: ERROR: Host does not have corresponding DNS A/AAAA record
[stack@overcloud ~]$ ipa service-find

A strange error that I don't understand, as the host does have an A record.

Work around it with Force:

ipa service-add  --force  hello/undercloud.ayoung-dell-t1700.test@AYOUNG-DELL-T1700.TEST

Success:

------------------------------------------------------------------------------
Added service "hello/undercloud.ayoung-dell-t1700.test@AYOUNG-DELL-T1700.TEST"
------------------------------------------------------------------------------
  Principal: hello/undercloud.ayoung-dell-t1700.test@AYOUNG-DELL-T1700.TEST
  Managed by: undercloud.ayoung-dell-t1700.test

OK, let's try running this again.

 sasl2-sample-client -s hello -p 1789 -m GSSAPI 
...

SASL(-1): generic failure: GSSAPI Error: Unspecified GSS failure.  Minor code may provide more information (KDC has no support for encryption type)

Keytabs

OK, I’m going to guess that this is because my remote service can’t deal with the Kerberos service tickets it is getting. Since the service tickets are for the principal: hello/undercloud.ayoung-dell-t1700.test@AYOUNG-DELL-T1700.TEST it needs to be able to decrypt requests using a key meant for this principal.

Fetch a keytab for that principal, and put it in a place where the GSSAPI libraries can access it automatically. This place is:

/var/kerberos/krb5/user/{uid}

Where {uid} is the numeric UID of the user. In this case, the user's name is stack and I can find the numeric UID value using getent.

KRB5_KTNAME=/var/kerberos/krb5/user/1000/client.keytab

ipa-getkeytab -p hello/undercloud.ayoung-dell-t1700.test@AYOUNG-DELL-T1700.TEST -k client.keytab  -s identity.ayoung-dell-t1700.test
Keytab successfully retrieved and stored in: client.keytab
$  getent passwd stack
stack:x:1000:1000::/home/stack:/bin/bash
$ sudo mkdir /var/kerberos/krb5/user/1000
$ sudo chown stack:stack /var/kerberos/krb5/user/1000
$ mv client.keytab /var/kerberos/krb5/user/1000

Restart the server process, try again, and the log is interesting. Here is the full client side trace.

$ sasl2-sample-client -s hello -p 1789 -m GSSAPI undercloud.ayoung-dell-t1700.test
receiving capability list... recv: {6}
GSSAPI
GSSAPI
please enter an authorization id: admin
using mechanism GSSAPI
send: {6}
GSSAPI
send: {1}
Y
send: {655}
`[82][2][8B][6][9]*[86]H[86][F7][12][1][2][2][1][0]n[82][2]z0[82][2]v[A0][3][2][1][5][A1][3][2][1][E][A2][7][3][5][0] [0][0][0][A3][82][1][82]a[82][1]~0[82][1]z[A0][3][2][1][5][A1][18][1B][16]AYOUNG-DELL-T1700.TEST[A2]503[A0][3][2][1][3][A1],0*[1B][5]hello[1B]!undercloud.ayoung-dell-t1700.test[A3][82][1] 0[82][1][1C][A0][3][2][1][12][A1][3][2][1][1][A2][82][1][E][4][82][1][A]T[DD][F8]B[F4][B4]5[D]`[A3]![EE][19]-NN[8E][F5][B7]{O,#[91][A4]}[86]k[D5][EE]vL[E4]&[6][3][A][1C][91][A5][A7][88]j[D1][A3][82][EC][A][D6][CB][F3]9[C][13]#[94][86]d+[B8]V[B7]C^[C6][A8][16][D1]r[E4][0][B9][2][2]&2[E5]Y~[C1]\([BA]x}[17][BC][D][FC][D5][CA][CA]h[E4][A1][81].[15][17]?[CA][A][8B]}[1C]l[F0][D9][E8][96]3<+[84][E7]q.[8E][D5][6][1C]p[E6][6]v[B0][84]5[9][B7]w[D6]3[B8][E3][5]T[BF][92][AA][D5][B3][[83]X[C0]:[BA]V[E5]{>[A5]T[F6]j[CB]p[BF]][EF][E1][91][ED][C][F3]Y[4]x[8E][C2]H[E7][14]#9[EE]5[B3]=[FA][80][DD][93][EF]3[0]q~22[6]I<[EB][F9]V[D1][9D][A8][A6]:[CE]u[AE]-l[D3]"[D7][FE]iB[84][E0]]B[E][C8]U[E][FD][D2]=[F2][97][88][D3][DA]j[B4][FA][16][D1]^CE2?[9F][89]^A[E9][AF][1A]5[99][CE][7][AF]M[1A][A][CB]^[E1][BA]f[7]-n<[F8]8![A4][81][DA]0[81][D7][A0][3][2][1][12][A2][81][CF][4][81][CC][91][F0][A]D[91][F6][FA][F4][B9][13][DF]d|[F4]Y[DF][9E]M[A2]f[11][15]x[C5]-|Qt[F4]nL>@[F4][18][FF],[F6][B5]F6[EC]+[C3]V[F1][81][97][E2][1D]i[4]wD&[9A]V[CE][A1][16][D7]4[E0]C[B]O[D1]v[DD][E9][84]lW[DA]%[F6]v[93]<m"SAfiF[8E][[95]"[CC][D2]4[FA]_[FB]i[E7][D4]M[AE][5][82][FF][D7][0][8C]6[8D][B0]3[F8][E3][B4]P[9C][9E][A2]`[7]U[F7][1D]zub[E0]([A9]P>[AE]f[1A][B1][80][A0]}s[EA][D1]Zk[FF]n_S[9E]rK[E5]n [85]#[DB][FF][B3][E2][19];[F5][E2][8A]>2[E5][A4][81][E8]z[9D][E3][BC][C8][87][F]:[81]7[C9]ix[1E]5[15])[8D][9D][C7][DB][13][98][97][C7]C[6]q[D2][C1][ED][B3]:[E0]
waiting for server reply...
authentication failed
closing connection

On the server side, it looks similar, but ends like this:

starting SASL negotiation: generic failureclosing connection

It is not a GSSAPI error this time. To dig deeper, I’m going to look at the source code on the server side.

Debugging

I’ll shortcut a few steps. Install both gdb and the debugInfo for the sample code:

sudo yum install gdb
sudo debuginfo-install cyrus-sasl-devel-2.1.26-20.el7_2.x86_64

Note that the version might change for the debuginfo.

The source code is included with the debuginfo rpm:

$ rpmquery  --list cyrus-sasl-debuginfo-2.1.26-20.el7_2.x86_64 | grep server.c
/usr/src/debug/cyrus-sasl-2.1.26/lib/server.c
/usr/src/debug/cyrus-sasl-2.1.26/sample/server.c

Looking at the server code at line 267 I see:

if (r != SASL_OK && r != SASL_CONTINUE) {
    saslerr(r, "starting SASL negotiation");
    fputc('N', out); /* send NO to client */
    fflush(out);
    return -1;
}

Let’s put a breakpoint at line 255 above it and see what is happening. Here is the session for setting up the breakpoint:

$  gdb /usr/bin/sasl2-sample-server
...
(gdb) break 255
Breakpoint 1 at 0x2557: file server.c, line 255.
(gdb) run  -h undercloud.ayoung-dell-t1700.test -p 1789 -m GSSAPI

Running the client code gets as far as the prompt please enter an authorization id:, where I entered admiyo.

This is suspect. We’ll come back to it in a moment.

Back on the server, now, we see the breakpoint has been hit.

Breakpoint 1, mysasl_negotiate (in=0x55555575c150, out=0x55555575c390, conn=0x55555575a6e0)
    at server.c:255
255	    if(buf[0] == 'Y') {
Missing separate debuginfos, use: debuginfo-install keyutils-libs-1.5.8-3.el7.x86_64 libdb-5.3.21-19.el7.x86_64 libselinux-2.2.2-6.el7.x86_64 nss-softokn-freebl-3.16.2.3-14.2.el7_2.x86_64 openssl-libs-1.0.1e-51.el7_2.7.x86_64 pcre-8.32-15.el7_2.1.x86_64 xz-libs-5.1.2-12alpha.el7.x86_64 zlib-1.2.7-15.el7.x86_64

We might need some other RPMS if we want to step deeper through the code, but for now, let’s keep on here.

(gdb) print buf
$1 = "Y", '\000' ...
(gdb) n
257	        len = recv_string(in, buf, sizeof(buf));
(gdb) n
recv: {655}
`[82][2][8B][6][9]*[86]H[86][F7][12][1][2][2][1][0]n[82][2]z0[82][2]v[A0][3][2][1][5][A1][3][2][1][E][A2][7][3][5][0] [0][0][0][A3][82][1][82]a[82][1]~0[82][1]z[A0][3][2][1][5][A1][18][1B][16]AYOUNG-DELL-T1700.TEST[A2]503[A0][3][2][1][3][A1],0*[1B][5]hello[1B]!undercloud.ayoung-dell-t1700.test[A3][82][1] 0[82][1][1C][A0][3][2][1][12][A1][3][2][1][1][A2][82][1][E][4][82][1][A]T[DD][F8]B[F4][B4]5[D]`[A3]![EE][19]-NN[8E][F5][B7]{O,#[91][A4]}[86]k[D5][EE]vL[E4]&[6][3][A][1C][91][A5][A7][88]j[D1][A3][82][EC][A][D6][CB][F3]9[C][13]#[94][86]d+[B8]V[B7]C^[C6][A8][16][D1]r[E4][0][B9][2][2]&2[E5]Y~[C1]\([BA]x}[17][BC][D][FC][D5][CA][CA]h[E4][A1][81].[15][17]?[CA][A][8B]}[1C]l[F0][D9][E8][96]3<+[84][E7]q.[8E][D5][6][1C]p[E6][6]v[B0][84]5[9][B7]w[D6]3[B8][E3][5]T[BF][92][AA][D5][B3][[83]X[C0]:[BA]V[E5]{>[A5]T[F6]j[CB]p[BF]][EF][E1][91][ED][C][F3]Y[4]x[8E][C2]H[E7][14]#9[EE]5[B3]=[FA][80][DD][93][EF]3[0]q~22[6]I<[EB][F9]V[D1][9D][A8][A6]:[CE]u[AE]-l[D3]"[D7][FE]iB[84][E0]]B[E][C8]U[E][FD][D2]=[F2][97][88][D3][DA]j[B4][FA][16][D1]^CE2?[9F][89]^A[E9][AF][1A]5[99][CE][7][AF]M[1A][A][CB]^[E1][BA]f[7]-n<[F8]8![A4][81][DA]0[81][D7][A0][3][2][1][12][A2][81][CF][4][81][CC]hgdf j[CF][AE][7F]:![1C]D[F8]3^w[B7];"[3][D8]3"[8]i[9]J[D3]R[F]A[E7]![BE]0<[8][D3]'j`[B7]J[16][A9][F3][E6]=[E5]J[FE].-[A1]t[[2]W[8D]7[F3][8][EC][92][BB][A3]o5h[C1]A[CC][A2][F1][99][AA][93]2{[BA]Mx0[9D][9][CC]![A]Y[12][D8][2][95][17]ml[B4][1A][94]y[1A][BC][D2]I[8F]7Vg2[8E]6[13]:Lx[E6][1][D3][3][7]r?[12][84]3[B1][B5][AA]E)[EA][87][A][9F]Nk[D1]I[FD]{[B8]9#-[D][8]2[CC]C1[A8]Lfl[B0][E8][82][13][F9]t[1A][F6]^[8D] O13[12]L[E7][C0]k[99][E1]J[1F][FE]#[14]u[B][B2][8F][DB][E6]73*[FA][ED][11][F7][9E][B0][DC][D9][19][AB][97][D7][8B][BB]
260	        r = sasl_server_start(conn, chosenmech, buf, len,
(gdb) print len
$2 = 1
(gdb) n
257	        len = recv_string(in, buf, sizeof(buf));
(gdb) 
260	        r = sasl_server_start(conn, chosenmech, buf, len,
(gdb) 
267	    if (r != SASL_OK && r != SASL_CONTINUE) {
Missing separate debuginfos, use: debuginfo-install gssproxy-0.4.1-8.el7_2.x86_64
(gdb) print r
$3 = -1

A -1 response code usually is an error. Looking in /usr/include/sasl/sasl.h:

#define SASL_FAIL -1 /* generic failure */

I wonder if we can figure out why. Let's see, first, if we can figure out what the client is sending in the authentication request. If it is a bad principal, then we have a pretty good reason to expect the server to reject it.

Let’s let the server continue running, and try debugging the client.

Client code can be found here

$ rpmquery  --list cyrus-sasl-debuginfo | grep client.c
/usr/src/debug/cyrus-sasl-2.1.26/lib/client.c
/usr/src/debug/cyrus-sasl-2.1.26/sample/client.c

At line 258 I see the call to sasl_client_start, which includes what appears to be the initialization of the data variable. Set a breakpoint there.

Running the code in the debugger like this:

$ gdb sasl2-sample-client
...
(gdb) break 258
Breakpoint 1 at 0x201b: file client.c, line 258.
(gdb) run -s hello -p 1789 -m GSSAPI undercloud.ayoung-dell-t1700.test
Starting program: /bin/sasl2-sample-client -s hello -p 1789 -m GSSAPI undercloud.ayoung-dell-t1700.test
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
receiving capability list... recv: {6}
GSSAPI
GSSAPI

Breakpoint 1, mysasl_negotiate (in=0x55555575cab0, out=0x55555575ccf0, conn=0x55555575b520)
    at client.c:258
258	    r = sasl_client_start(conn, mech, NULL, &data, &len, &chosenmech);
(gdb) print data
$1 = 0x0
(gdb) print mech
$2 = 0x7fffffffe714 "GSSAPI"
(gdb) print conn
$3 = (sasl_conn_t *) 0x55555575b520
(gdb) print len
$4 = 6
(gdb) n
please enter an authorization id: 

So it is the SASL library itself requesting an authorization ID. Let me try putting in the full Principal associated with the service ticket.

 
please enter an authorization id: ayoung@AYOUNG-DELL-T1700.TEST
259	    if (r != SASL_OK && r != SASL_CONTINUE) {
Missing separate debuginfos, use: debuginfo-install gssproxy-0.4.1-8.el7_2.x86_64
(gdb) print r
$5 = 1
(gdb) 

And from sasl.h we know that is good.

#define SASL_CONTINUE 1 /* another step is needed in authentication */

Let’s let it continue.

authentication failed

Nope. Continuing through the debugger, I see another generic failure here:

1531	            } else {
1532	                /* Mech wants client-first, so let them have it */
1533	                result = sasl_server_step(conn,
1534	                                          clientin,
1535						  clientinlen,
1536	                                          serverout,
1537						  serveroutlen);
(gdb) n
1557	    if (  result != SASL_OK
(gdb) print result
$15 = -1

Still… why is the client-side SASL call kicking into an interactive prompt? There should be enough information via the GSSAPI SASL library interaction to authenticate. The man page for sasl_client_start even indicates that there might be prompts returned.

Looking deeper at the client code, I do see that the prompt is from line 122. The function simple at line 107 must be set as a callback. Perhaps the client code is not smart enough to work with GSSAPI? At lines 190 and 192 I see that the simple code is provided as a callback for the SASL_CB_USER and SASL_CB_AUTHNAME responses. Setting a breakpoint and rerunning shows the id value to be 16385, or 0x4001.

#define SASL_CB_USER 0x4001 /* client user identity to login as */

 

Humility and Success

If you have followed through this far, you know I am in the weeds. I asked for help. Help, in this case, was Robbie Harwood, who showed me that the sample server/client worked OK if I ran the server as root and used the service host instead of hello. That gave me a successful comparison to work with. I ran both under strace and noticed that the failing version was not trying to read the keytab file from /var/kerberos/krb5/user/1000/client.keytab. The successful one, running as root, read the keytab from /etc/krb5.keytab. The failing one was also trying to read from there and getting a permissions failure. The final blow that took down the wall was realizing that the krb5.conf file defined different values for default_client_keytab_name and default_keytab_name, with the latter being set to FILE:/etc/krb5.keytab. To work around this, I needed the environment variable KRB5_KTNAME to be set to the keytab. This was the winning entry:

KRB5_KTNAME=/var/kerberos/krb5/user/1000/client.keytab  sasl2-sample-server -h $HOSTNAME -p 9999 -s hello -m GSSAPI 

And then ran

sasl2-sample-client -s hello -p 9999 -m GSSAPI undercloud.ayoung-dell-t1700.test

Oh, one other thing Robbie told me was that the string I type when prompted with

please enter an authorization id:

should be the Kerberos principal, minus the realm, so for me it was

please enter an authorization id: ayoung

by Adam Young at October 07, 2016 02:34 AM

September 22, 2016

Red Hat Blog

PCI Series: Requirement 6 – Develop and Maintain Secure Systems and Applications

This post is the fifth installment in my PCI DSS series – a series dedicated to the use of Identity Management (IdM) and related technologies to address the Payment Card Industry Data Security Standard (PCI DSS). This specific post is related to requirement six (i.e. the requirement to develop and maintain secure systems and applications). The outline and mapping of individual articles to requirements can be found in the overarching post that started the series.

Section six of the PCI DSS standard covers guidelines related to secure application development and testing. IdM and its ecosystem can help in multiple ways to address requirements in this part of the PCI-DSS standard. First of all, IdM includes a set of Apache modules for different methods of authentication. These modules externalize authentication logic from a web application so that the application does not need to re-implement different authentication methods itself. Such an approach significantly reduces the effort that developers need to invest into building different authentication capabilities into their applications – allowing them to focus on the business logic of the application itself and to deliver results faster. Externalized authentication based on Apache modules is (just) one of the best practices currently being adopted in the industry. There are a number of modules that provide different authentication methods, including:

  • A forms based password or one-time-password (OTP) authentication module (…a module that integrates with a given application’s login page and uses the PAM stack and SSSD in particular).
  • A Kerberos based single-sign-on (GSSAPI) module that allows for login into an application without prompting a given user for his or her credentials if he (or she) is already authenticated against a Kerberos server and holds proof of the authentication.
  • Certificate based modules based on either NSS or OpenSSL crypto libraries that enable certificate based authentication into an application.
  • A SAML module that connects an application to an identity provider (IdP); IdP-based federation uses redirection of the application login to an IdP – then accepting an authentication assertion as issued by the IdP.
  • An OpenID Connect module (similar to the SAML module) that allows an application to accept an OpenID Connect ticket from an authentication server.

The modules and details on how to integrate them are described on the following wiki page. Of note: all of the aforementioned modules are available in the current shipping version of Red Hat Enterprise Linux except for the OpenID Connect one.
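
As a small illustration, protecting an application location with the Kerberos/GSSAPI module typically amounts to a handful of Apache directives along these lines (the location and keytab path are only examples):

$ cat /etc/httpd/conf.d/myapp-auth.conf
<Location /myapp/login>
    AuthType GSSAPI
    AuthName "Kerberos Login"
    GssapiCredStore keytab:/etc/httpd/http.keytab
    Require valid-user
</Location>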

As mentioned (above), externalizing authentication saves a lot of effort and is a good practice. To make developer life even easier we have been working on a container-based developer environment that would provide an application container, Apache web server (with pre-configured modules), an authentication server based on IdM (FreeIPA), and a client that allows for the testing of an application via browser. A prototype of this setup can be found here and the following video demonstrates how it can be used for development.

There is also an existing feature of the IdM server that allows for the management of SSH keys for different environments. Imagine you have an application with an administrative account. There are some operations that are done using this account, including SSH-ing into the system the application is running on. If you are developing this application, or if you are testing this application, or (perhaps) if you are deploying this application – you would (likely) want to have different credentials for administrative accounts. IdM allows for the creation of ID views. Loading different SSH keys into different views enables use of the same administrative account across different environments with different SSH keys. Together with different credentials, IdM allows for defining access control rules that are different for different environments and thus (for example) addresses requirement 6.4.1 (…or, to some extent, requirement 6.5.8).

Finally, it’s worth mentioning that it’s generally not a good idea to store passwords in configuration files. That said, indeed, some applications were built this way (in the past). To help developers to deal with secrets that an application needs to use, there are plans to provide a secrets API that would allow applications to fetch or store secrets in a more secure way without putting them in clear text in configuration files. You can read more about this capability here. A Technology Preview of the API is included as a part of SSSD (System Security Services Daemon) in the beta release of Red Hat Enterprise Linux 7.3.  Please reach out if you are interested in using this feature – our Technical Account Managers and Solution Architects would love to speak with you.

Questions about how Identity Management relates to requirement six?  Reach out using the comments section (below).

by Dmitri Pal at September 22, 2016 06:06 PM

September 20, 2016

Adam Young

Mirroring Keystone Delegations in FreeIPA/389DS

This is more musing than a practical design.

Most application servers have a means to query LDAP for the authorization information for a user.  This is separate from, and follows after, authentication, which may use one of multiple mechanisms, possibly not even querying LDAP (although that would be strange).

And there are other mechanisms (SAML2, SSSD+mod_lookup_identity) that can also provide the authorization attributes.

Separating mechanism from meaning, however, we are left with the fact that applications need a way to query attributes to make authorization decisions.  In Keystone, the general pattern is this:

A project is a group of resources.

A user is assigned a role on a project.

A user requests a token for a project. That token references the user's roles.

The user passes the token to the server when accessing an API. Access control is based on the roles that the user has in the associated token.
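
In OpenStack CLI terms the pattern looks roughly like this (the project, user and role names are invented for illustration):

$ openstack role add --project emea-sales --user ayoung Member
$ openstack role assignment list --user ayoung --project emea-sales --names
$ OS_PROJECT_NAME=emea-sales openstack token issue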

The key point here is that it is the roles associated with the token in question that matter.  From that point on, we have the ability to inject layers of indirection.

Here is where things fall down today. If we take an app like WordPress and try to make it query Red Hat's LDAP server for the groups to use, there is no mapping between the groups assigned and the permissions that the user should have.  As the WordPress instance might be run by any one of several organizations within Red Hat, no direct mapping is possible.

If we map this problem domain to IPA, we see where things fall down.

WordPress, here, is a service.  If the host it is running on is owned by a particular organization (say, EMEA-Sales) it should be the EMEA Sales group that determines who gets what permissions on WordPress.

Aside: WordPress, by the way, makes a great example to use, as it has very clear, well defined roles,  which have a clear scope of authorization for operations.

Subscriber < Contributor < Author < Editor < Administrator

Back to our regular article:

If we define an actor as either a user or a group of users, a role assignment is a tuple: (actor, organization, application, role)

 

[diagram: role-assignment-1]

Now, a user should not have to go to IPA, get a token, and hand that to WordPress.  When a user connects to WordPress, and attempts to do any non-public action, they are prompted for credentials, and are authenticated.  At this point, WordPress can do the LDAP query. And here is the question:

“what should an application query for in LDAP”

If we use groups, then we have a nasty naming scheme: EMEA-sales_wordpress_admin versus LATAM-sales_wordpress_admin.  This encodes both the query (organization, application) and the result (role) in the group name.

Ideally, we would tag the role on the service.  The service already reflects organization and application.

In the RFC-based schemas, there is an organizationalRole objectclass which almost mirrors what we want.  But I think the most important thing is to return an object that looks like a group, most specifically groupOfNames.  Fortunately, I think this is just the 'cn'.

Can we put a group of names under a service?  It's not a container.

'ipaService' DESC 'IPA service objectclass' AUXILIARY MAY ( memberOf $ managedBy $ ipaKrbAuthzData) X-ORIGIN 'IPA v2' )

objectClass: ipaobject
objectClass: top
objectClass: ipaservice
objectClass: pkiuser
objectClass: ipakrbprincipal
objectClass: krbprincipal
objectClass: krbprincipalaux
objectClass: krbTicketPolicyAux

It would probably make more sense to have a separate service-roles subtree, with each service name a container and each role a group-of-names under that container. The application would filter on the service name to get the set of roles.  For a specific user, the service would add an additional filter for membership.
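
With that hypothetical layout, the application-side query could be as simple as the following ldapsearch (the tree layout and DNs are invented to match the idea above):

$ ldapsearch -Y GSSAPI -b "cn=wordpress,cn=service-roles,dc=example,dc=test" \
    "(&(objectclass=groupofnames)(member=uid=ayoung,cn=users,cn=accounts,dc=example,dc=test))" cn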

Now, that is a lot of embedded knowledge in the application, and does not provide any way to do additional business logic in the IPA server or to hide that complexity from the end user.  Ideally, we would have something like automember to populate these role assignments, or, even better, a light-weight way for a user with a role assignment to re-delegate that to another user or principal.

That is where this really becomes valuable: user self-service for delegation.  We want to make it such that you do not need to be an admin to create a role assignment, but rather (with exceptions) you can delegate to others any role that you have assigned to yourself.  This is a question of scale.

However, more than just scale, we want to be able to track responsibility;  who assigned a user the role that they have, and how did they have the authority to assign it?  When a user no longer has authority, should the people they have delegated to also lose it, or does that delegation get transferred?  Both patterns are required for some uses.

I think this fast gets beyond what can be represented easily in an LDAP schema.  Probably the right step is to use something like automember to place users into role assignments.  Expanding nested groups, while nice, might be too complicated.

by Adam Young at September 20, 2016 03:37 AM

September 19, 2016

Alexander Bokovoy

Samba and identity tales

Samba is built to bridge the Windows and POSIX worlds. Apart from the file system semantics, there are many other differences. The story I'm about to tell concerns users and groups. They have different meanings and representations in the two worlds, so translation is required, much like in real life, where translators often have to take into account cultural differences and sometimes the lack of certain concepts in the language they are translating to.

The protocol communications that Samba implements end up bringing in objects that have a certain meaning in one world but no one-to-one counterpart on the other side. One of the tasks Samba undertakes is translating these concepts between Windows and POSIX. It does this translation with the help of mapping databases.

Security identifiers

In Windows, access controls are built around the concepts of security identifiers and security descriptors. A security identifier (SID) is associated with the object it represents. Internal processes in Windows refer to the security identifiers of objects rather than to their names. A security descriptor is used to list which security identifiers can have access to a certain resource and what kind of access that can be. An important part of the story is that security identifiers have the same structure regardless of the object they represent. When a security identifier is expressed in textual form, in general we cannot say what kind of object it represents – a user, a group, or a machine account – apart from the so-called ‘well-known’ SIDs. A nice property of a SID is that it is a global identifier – for two different domains their SIDs are guaranteed to be different, even for ‘well-known’ objects within the domains.

POSIX identifiers

In the POSIX world, access controls are built around a simple model of rights for the resource owner, rights for the resource's group, and rights for all others. The model is further extended with POSIX Access Control Lists (ACLs), which allow multiple simple-model descriptors to be associated with a single resource, but the resulting access descriptor is still far from its Windows counterpart.

In the kernel of a POSIX-compatible operating system, access checks are done using numbers which represent users and groups. The kernel application interfaces don't deal with user or group names; they deal with integer-based identifiers. The standard language library is supposed to translate user or group names to their numeric identifiers when talking to the kernel.

When operating on files and directories, Samba needs to translate NTFS-like semantics to POSIX file semantics. This includes translating the security identifiers of SMB clients to the POSIX identifiers of the users and their group memberships. There are no SID-like structures in the kernel of a POSIX operating system that Samba could map to directly; instead, it has to maintain such a mapping in user space.

However, a POSIX operating system already has its own databases for users and groups, which all POSIX applications use. In their most primitive form these databases are stored as text files, /etc/passwd and /etc/group, with a well-defined format. On Linux systems there are other ways to store information about POSIX users and groups, with the help of so-called ‘name service switch’ modules (NSS modules). How multiple modules are stacked up to deliver information about users, groups, and other resources is defined in the /etc/nsswitch.conf configuration file. The standard C library reads this configuration file at application start and loads the modules responsible for each resource. Standard application interfaces then call the modules as defined in /etc/nsswitch.conf to retrieve the required information.
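
On a typical Linux system the relevant nsswitch.conf lines look like the following (the exact module list depends on the distribution and on whether SSSD, winbind or plain files are in use):

$ grep -E '^(passwd|group):' /etc/nsswitch.conf
passwd:     files sss
group:      files sss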

Identity mapping

The information NSS modules provide includes nothing related to the SMB protocol. Applications can query by user or group name, but that's all: they cannot query by SID value. Also, the interface functions differentiate between user and group information. When Samba gets a SID, it does not know whether it corresponds to a user or to a group, so it cannot choose which interface function to call.

Let's step aside at this point. Samba needs to deal with the system-level databases for users and groups. Samba also needs to deal with SIDs that could be mapped to users, groups, and machine accounts. When a user is referenced in SMB protocol communication, it can be in the form of a user name or of a SID associated with the user object. When a group is referenced in SMB protocol communication, it can also be in the form of a group name or of a SID associated with the group object. Finally, the same applies to machine accounts, but here Samba (and Windows) cheat and represent machine accounts as a special type of user object.

The fact that Samba sits in the middle, between the SMB protocol communication and the system-level databases for users and groups, means Samba has to maintain its own mapping between the information relevant to the SMB protocol and the information relevant to system-level references to users and groups. In Windows, the system-level interface and database for users, groups, and machine accounts is called the Security Account Manager, SAM. Samba implements an abstraction layer that allows it to handle SAM-like requests. In fact, it implements two such layers, not one.

IDMAP layer

To map a security identifier to a POSIX identifier, Samba uses the identity mapping (IDMAP) interface. The IDMAP interface is very simple; it only has three functions:

  • map SID to a POSIX ID
  • map POSIX ID to a SID
  • allocate POSIX ID for a SID

The mapping between SIDs and POSIX IDs is handled by an IDMAP module. The SID name space is larger than the POSIX ID name spaces (for users and groups combined). The relative identifier part of the SID, the RID, is 32 bits long and identifies resources within a single domain, but there can be multiple domains involved. Samba potentially has to map all of those RIDs from all domains into a single 32-bit user and a single 32-bit group name space. When done algorithmically, such a mapping is most likely a compression scheme with a potential for collisions. There can also be limiting factors on which particular 32-bit values for user and group identifiers can be chosen, and manual assignment is something that could also be done. Thus, there are many IDMAP modules in Samba to cater to different needs.

The default IDMAP module in Samba is idmap_tdb. This module stores SID to POSIX ID mappings in Samba’s native database format, the so-called ‘trivial database’, TDB. When Samba requests a lookup by SID, the idmap_tdb module may allocate a new POSIX ID if this SID is not mapped yet and there are still free POSIX IDs in the range defined for the domain. As a result, when the range is big enough to cover all users and groups from the domain, all SIDs will be mapped. However, there is no guarantee that SIDs will be mapped to the same POSIX IDs on all Samba servers in the domain: the order in which SID mapping requests arrive influences which POSIX ID is allocated for a SID. If different Samba servers get requests in a different order, they will assign different POSIX IDs to the same SIDs. This is, of course, a problem when accessing files on a distributed file system.
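
As a sketch, a plain idmap_tdb setup in smb.conf only needs a backend and an allocation range for the default domain (the range values here are arbitrary examples):

    idmap config * : backend = tdb
    idmap config * : range = 10000-99999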

To solve this problem, other IDMAP modules were created. The idmap_rid module algorithmically maps the relative identifier of the SID onto the range associated with the domain. idmap_ad looks up POSIX IDs at a domain controller of the Active Directory domain. In a similar approach, idmap_ldap looks up POSIX IDs at the LDAP server defined in the configuration.
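
For example, idmap_rid could be configured per domain roughly like this (the domain name and range are placeholders). With such a configuration the POSIX ID is derived arithmetically from the RID, so a SID ending in RID 1105 would map to 100000 + 1105 = 101105 on every server, assuming the default base RID of 0:

    idmap config EXAMPLE : backend = rid
    idmap config EXAMPLE : range = 100000-199999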

For configurations where users and groups are maintained in the system-level databases, Samba allows the idmap_nss module to be used. The module queries the system-level databases when it is known whether the SID maps to a user or to a group. When it is unknown, the IDMAP module queries a primary domain controller of the domain to convert the SID to a name. A primary domain controller should know all users and groups of the domain, thus it should be able to answer what the SID maps to, or fail the request. In the latter case idmap_nss will also fail the request and Samba will consider the SID as unmapped.
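
A sketch of wiring idmap_nss for a specific domain, while keeping a default backend for everything else, could look like this (the domain name and ranges are placeholders):

    idmap config * : backend = tdb
    idmap config * : range = 1000000-1999999
    idmap config EXAMPLE : backend = nss
    idmap config EXAMPLE : range = 1000-999999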

PASSDB layer

Users and groups need to be known to Samba before they can be used. The very same users and groups must be known to the operating system, because Samba processes change identity when performing operations as a particular user. The second layer Samba uses for identity mapping also allows managing users and groups: creating new ones, deleting existing ones, modifying information about them and, in general, performing many of the actions Windows expects from the SAM interface.

A PASSDB module is an abstraction over the system-level database of users. It allows user information to be retrieved from an LDAP server or another storage scheme. The reason for this is, again, a lack of the needed information in the system-level database format. Samba needs to know many more details about a user than POSIX interfaces provide, and some of this information is unique to the SMB protocol. For example, for each user to be able to authenticate with a password, Samba needs to know the corresponding password hashes for NTLM negotiation. NT and LM hashes are not used by POSIX-compatible operating systems. Also, the POSIX interface for retrieving user information does not give access to actual passwords. In fact, in many environments applications have no access to password hashes, let alone plaintext passwords.

The default PASSDB module is tdbsam. Similar to idmap_tdb, it stores the additional information Samba needs to know about users in its own ‘trivial database’, TDB. tdbsam expects that if user information is stored in its database, the very same user also exists in the system-level databases.
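
The accounts stored in the tdbsam database can be listed with pdbedit; the short listing prints the user name, the POSIX uid, and the full name (the account shown here is just an example):

# pdbedit -L
administrator:1002: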

One can also force the IDMAP subsystem to look up SID to POSIX ID mappings in a PASSDB backend. For this, the IDMAP module idmap_passdb can be used. As a result, Samba will look up SIDs and POSIX IDs in the PASSDB module defined in smb.conf.

Group mapping

Groups are not stored in Samba databases. Instead, Samba allows an existing POSIX group to be mapped to a group in a domain. Because groups in the Windows world can have different scopes, Samba provides a mechanism to specify which POSIX group is mapped to which Windows group and what scope it should have. The mapping is managed with the help of Samba’s net utility: the net groupmap family includes commands to add, modify, and remove group mappings. It also allows associating (aliasing) certain SIDs with existing groups and listing the members of the groups.

For distributed environments it is convenient to store POSIX and SMB information about users and groups in the same place. For example, an LDAP server could be used to store and retrieve such information with the ldapsam PASSDB module and the idmap_ldap IDMAP module. However, group mapping would still be maintained locally with the net groupmap set of commands.
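
A sketch of such a distributed setup in smb.conf might combine ldapsam and idmap_ldap like this (the server URL, suffix, and range are placeholders, and a real deployment needs additional LDAP options such as the admin dn):

    passdb backend = ldapsam:ldap://ldap.example.com
    ldap suffix = dc=example,dc=com
    idmap config * : backend = ldap
    idmap config * : ldap_url = ldap://ldap.example.com
    idmap config * : range = 100000-199999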

Practical considerations

Let’s apply everything discussed above in practice. Consider a single Samba server which serves as a primary domain controller for its own domain. The server does not use LDAP or any other distributed storage for its POSIX and SMB user and group information.

A minimal smb.conf configuration file for a primary domain controller is as follows:

# Global parameters
[global]
    workgroup = SAMBA
    domain logons = Yes
    security = USER
    winbind offline logon = Yes
    winbind use default domain = Yes
    idmap config * : range = 1000-1000000
    idmap config * : backend = passdb
    passdb backend = tdbsam
    template homedir = /home/%U
    template shell = /bin/bash

[homes]
    comment = Home Directories
    browseable = No
    inherit acls = Yes
    read only = No
    valid users = %S %D%w%S

This configuration defines a single-domain SMB server with an IDMAP configuration that looks up SID to POSIX ID mappings in a PASSDB module. The PASSDB module is set to tdbsam, which is the default.

As a result of this configuration, all non-POSIX attributes of users need to be stored in the PASSDB module. To modify them one can use the pdbedit tool. But before that, we need to create the users and groups at the system level.

SMB domains have a few ‘well-known’ groups: ‘Domain Admins’, ‘Domain Users’, and ‘Domain Guests’. For ‘Domain Users’ and ‘Domain Guests’ we can reuse the POSIX groups ‘users’ and ‘nobody’; for ‘Domain Admins’ it is better to create a separate group, for example, ‘admins’.

On Fedora 24 there are existing POSIX groups ‘users’ and ‘nobody’:

# getent group users nobody
users:x:100:
nobody:x:99:

We can create the ‘admins’ group using the groupadd utility:

# groupadd admins

When the groups are ready, we can associate them with the well-known domain groups using net groupmap commands:

# net groupmap add ntgroup="Domain Admins" unixgroup=admins rid=512 type=d
Successfully added group Domain Admins to the mapping db as a domain group
# net groupmap add ntgroup="Domain Users"  unixgroup=users rid=513 
Successfully added group Domain Users to the mapping db as a domain group
# net groupmap add ntgroup="Domain Guests"  unixgroup=nobody rid=514
Successfully added group Domain Guests to the mapping db as a domain group
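
The resulting mappings can be verified with net groupmap list, which prints each Windows group, its SID, and the POSIX group it maps to (the SIDs below are shortened):

# net groupmap list
Domain Admins (S-1-5-21-...-512) -> admins
Domain Users (S-1-5-21-...-513) -> users
Domain Guests (S-1-5-21-...-514) -> nobody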

Finally, add users. Users should have their primary group set to one of the groups mapped to the domain, because Samba needs to recognize it: there should be a SID to POSIX ID mapping for the primary group. Let’s assume that all our users are members of the ‘users’ group:

# useradd -m -g users -G admins administrator
# pdbedit -a -u administrator
new password:
retype new password:
Unix username:        administrator
NT username:          
Account Flags:        [U          ]
User SID:             S-1-5-21-1345368309-3761995768-4153620981-1008
Primary Group SID:    S-1-5-21-1345368309-3761995768-4153620981-513
Full Name:            
Home Directory:       \\smb\administrator
HomeDir Drive:        
Logon Script:         
Profile Path:         \\smb\administrator\profile
Domain:               SAMBA
Account desc:         
Workstations:         
Munged dial:          
Logon time:           0
Logoff time:          Wed, 06 Feb 2036 17:06:39 EET
Kickoff time:         Wed, 06 Feb 2036 17:06:39 EET
Password last set:    Mon, 19 Sep 2016 12:43:45 EEST
Password can change:  Mon, 19 Sep 2016 12:43:45 EEST
Password must change: never
Last bad password   : 0
Bad password count  : 0
Logon hours         : FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF

In the output above, the ‘Primary Group SID’ was automatically inferred from the group mapping.

We can now ask winbindd to resolve user information based on the IDMAP and PASSDB databases:

# wbinfo -i administrator
administrator:*:1002:100::/home/administrator:/bin/bash
# wbinfo -n administrator
S-1-5-21-1345368309-3761995768-4153620981-1008 SID_USER (1)
# wbinfo -s S-1-5-21-1345368309-3761995768-4153620981-1008
SAMBA\administrator 1
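
wbinfo can also translate the SID straight to the POSIX uid allocated for it, which should match the uid reported by wbinfo -i above:

# wbinfo -S S-1-5-21-1345368309-3761995768-4153620981-1008
1002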

September 19, 2016 09:52 AM

September 16, 2016

Rich Megginson

How to print field name with dash ("-") in a golang template

For example, let's say your OpenShift secret has been created like this:
$ oc secrets new logging-elasticsearch \
        key=$dir/keystore.jks truststore=$dir/truststore.jks \
        searchguard.key=$dir/searchguard_node_key \
        searchguard.truststore=$dir/searchguard_node_truststore \
        admin-key=$dir/${admin_user}.key admin-cert=$dir/${admin_user}.crt \
        admin-ca=$dir/ca.crt \
        admin.jks=$dir/${admin_user}.jks

Now you want to extract the CA cert:
$ oc get secret logging-elasticsearch --template='{{.data.admin-ca}}'
error: error parsing template {{.data.admin-ca}}, template: output:1: bad character U+002D '-'

It doesn't like the - character in the field name. You can work around this using index like so:
$ oc get secret logging-elasticsearch --template='{{index .data "admin-ca"}}' |base64 -d > ca
$ openssl x509 -in ca -text|more
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 1 (0x1)
    Signature Algorithm: sha256WithRSAEncryption
        Issuer: CN=logging-signer-20160915173520
        Validity
            Not Before: Sep 15 17:35:19 2016 GMT
            Not After : Sep 14 17:35:20 2021 GMT
        Subject: CN=logging-signer-20160915173520
        Subject Public Key Info:
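
The same index trick works for any of the other dash-containing keys in this secret, for example the admin key and certificate created above:

$ oc get secret logging-elasticsearch --template='{{index .data "admin-key"}}' | base64 -d > admin.key
$ oc get secret logging-elasticsearch --template='{{index .data "admin-cert"}}' | base64 -d > admin.crt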

September 16, 2016 01:57 AM
