NFS Client Kerberos Cache


Greetings,

Not sure if this is the correct mailing list.

I have the following test environment set up:
– 1x IPA master = ipa1.example.com
– 1x NFS server = nfs1.example.com
– 1x NFS client = nfsclient1.example.com

NFS version 4 is used and the appropriate Kerberos principal has been created in IPA:

[root@nfs1 ~]# ipa service-show nfs/nfs1.example.com@EXAMPLE.COM

Principal: nfs/nfs1.example.com@EXAMPLE.COM
Keytab: True
Managed by: nfs1.example.com
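
(For reference, the principal was created and the keytab fetched roughly like this; I'm sketching the usual IPA steps from memory, so the exact options may have differed:)

[root@nfs1 ~]# ipa service-add nfs/nfs1.example.com
[root@nfs1 ~]# ipa-getkeytab -s ipa1.example.com -p nfs/nfs1.example.com -k /etc/krb5.keytab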

Mounting using krb5p works:

[root@nfsclient1 ~]# mount -v -t nfs -o sec=krb5p nfs1.example.com:/exports/homes/ /mnt

mount.nfs: timeout set for Mon Jan 6 21:25:56 2014
mount.nfs: trying text-based options
'sec=krb5p,vers=4,addr=192.168.12.172,clientaddr=192.168.12.173'
nfs1.example.com:/exports/homes/ on /mnt type nfs (rw,sec=krb5p)
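
(For completeness, the export on nfs1 is along these lines; again from memory, so the exact export options may differ:)

[root@nfs1 ~]# cat /etc/exports
/exports/homes *(rw,sec=krb5p)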

rpcgssd created the Kerberos cache file as indicated in /var/log/messages:
rpc.gssd[2473]: INFO: Credentials in CC
'FILE:/tmp/krb5cc_machine_EXAMPLE.COM' are good until 1389125973

So far so good, but then:

1) I unmount everything from nfs1 on the client, then remove the nfs1.example.com host, its DNS record(s) and its service principals.
2) I redeploy nfs1.example.com and re-create the nfs/nfs1.example.com@EXAMPLE.COM principal (the commands are sketched below, after the error output).

3) When I try to mount the same NFS share from nfs1 on nfsclient1, I get an error:
mount.nfs: trying text-based options
'sec=krb5p,vers=4,addr=192.168.12.172,clientaddr=192.168.12.173'
mount.nfs: mount(2): Operation not permitted
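
For steps 1) and 2) the commands were roughly as follows (sketched from memory, exact options may have differed), after which I repeated the ipa service-add / ipa-getkeytab steps shown above on the redeployed host:

[root@nfsclient1 ~]# umount /mnt
[root@ipa1 ~]# ipa service-del nfs/nfs1.example.com
[root@ipa1 ~]# ipa host-del nfs1.example.com --updatedns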

Now, I'm not an IPA or Kerberos expert, but my guess is that this happens because nfsclient1 still has, and keeps using, the /tmp/krb5cc_machine_EXAMPLE.COM cache file, which would still hold the “old” Kerberos credentials?
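
(I haven't verified this yet, but listing the machine cache on the client should show whether it still holds a service ticket obtained against the old nfs/nfs1.example.com key:)

[root@nfsclient1 ~]# klist -c /tmp/krb5cc_machine_EXAMPLE.COM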

On the NFS server in /var/log/messages this error message is displayed:
“rpc.svcgssd[5983]: ERROR: GSS-API: error in handle_nullreq:
gss_accept_sec_context(): GSS_S_FAILURE (Unspecified GSS failure. Minor code may provide more information) – Wrong principal in request”
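
(A way to double-check the server side would be to compare the key versions in the keytab on the redeployed nfs1 with what the KDC hands out, something like:)

[root@nfs1 ~]# klist -kt /etc/krb5.keytab
[root@nfs1 ~]# kvno nfs/nfs1.example.com@EXAMPLE.COM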

On the NFS client in /var/log/messages these messages are displayed:
“creating context with server nfs@nfs1.example.com”

“WARNING: Failed to create machine krb5 context with credentials cache FILE:/tmp/krb5cc_machine_EXAMPLE.COM for server nfs1.example.com”

“WARNING: Machine cache is prematurely expired or corrupted trying to recreate cache for server nfs1.example.com”
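
(If more detail is needed I can turn up rpc.gssd verbosity; on CentOS 6 that should just be something like the following in /etc/sysconfig/nfs, followed by a restart of rpcgssd:)

RPCGSSDARGS="-vvv"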

Restarting the rpcgssd daemon works: it removes the /tmp/krb5cc_machine_EXAMPLE.COM file, and on the next mount the cache is recreated. However, restarting the rpcgssd daemon on all NFS clients every time an NFS server is redeployed doesn't feel right.
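
For reference, this is what I run on each client:

[root@nfsclient1 ~]# service rpcgssd restart

(I have not tried whether simply destroying the machine cache, e.g. with "kdestroy -c FILE:/tmp/krb5cc_machine_EXAMPLE.COM", would also be enough without restarting the daemon.)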

Does anyone perhaps have an idea of what I might be doing wrong?
Or is this by design?