From: Chuck Lever III <chuck.lever@oracle.com>
To: Jeff Layton <jlayton@kernel.org>
Cc: Olga Kornievskaia <aglo@umich.edu>,
"kernel-tls-handshake@lists.linux.dev"
<kernel-tls-handshake@lists.linux.dev>
Subject: Re: problems getting rpc over tls to work
Date: Tue, 28 Mar 2023 16:06:10 +0000 [thread overview]
Message-ID: <59A9649B-A426-4D08-ABFC-2F3F54464767@oracle.com> (raw)
In-Reply-To: <fb72ac57e1afe372cdec36fc6e6998cf8179e959.camel@kernel.org>
> On Mar 28, 2023, at 11:48 AM, Jeff Layton <jlayton@kernel.org> wrote:
>
> On Tue, 2023-03-28 at 10:38 -0400, Jeff Layton wrote:
>> On Tue, 2023-03-28 at 10:25 -0400, Olga Kornievskaia wrote:
>>> On Tue, Mar 28, 2023 at 10:14 AM Jeff Layton <jlayton@kernel.org> wrote:
>>>>
>>>> On Tue, 2023-03-28 at 13:55 +0000, Chuck Lever III wrote:
>>>>>
>>>>>> On Mar 28, 2023, at 9:29 AM, Chuck Lever III <chuck.lever@oracle.com> wrote:
>>>>>>
>>>>>>
>>>>>>
>>>>>>> On Mar 28, 2023, at 8:27 AM, Jeff Layton <jlayton@kernel.org> wrote:
>>>>>>>
>>>>>>> Hi Chuck!
>>>>>>>
>>>>>>> I have started the packaging work for Fedora for ktls-utils:
>>>>>>>
>>>>>>> https://bugzilla.redhat.com/show_bug.cgi?id=2182151
>>>>>>>
>>>>>>> I also built packages for this in copr:
>>>>>>>
>>>>>>> https://copr.fedorainfracloud.org/coprs/jlayton/ktls-utils/
>>>>>>>
>>>>>>> ...and built some interim nfs-utils packages with the requisite exportfs
>>>>>>> patches:
>>>>>>>
>>>>>>> https://copr.fedorainfracloud.org/coprs/jlayton/nfs-utils/
>>>>>>
>>>>>> Note that the nfs-utils changes aren't necessary to support
>>>>>> the kernel server in "opportunistic" mode -- the server will
>>>>>> use RPC-with-TLS if a client requests it, but otherwise does
>>>>>> not restrict access.
>>>>>>
>>>>>> Client side also has no nfs-utils requirements at this time,
>>>>>> since the new mount options are handled by the kernel.
>>>>>
>>>>> In case I wasn't clear:
>>>>>
>>>>> This was meant as a suggestion. If you want to simplify your
>>>>> test set-up a bit, the nfs-utils piece isn't needed at this
>>>>> point. But feel free to include it if you like!
>>>>>
>>>>
>>>> Understood. I needed to build it for the server side anyway, so I
>>>> figured I might as well. Eventually I'd like to set up a Fedora COPR
>>>> repo that has all of the packages we need to test this, but I need to
>>>> sort through the certificate handling here first.
>>>>
>>>> Are there docs on how to administer gnutls? For instance, I guess I'll
>>>> want to set up my own CA and issue client and server certs. How do I
>>>> make gnutls trust a new CA?
>>>
>>> Hi Jeff,
>>>
>>> To get self-signed certificates to work, you need to copy your
>>> server's cert.pem file onto the client machine, place it in
>>> /etc/pki/ca-trust/source/anchors, and then run "update-ca-trust
>>> extract".
>>>
>>>
>>
>> Many thanks, Olga! That got me further:
>>
>> Mar 28 10:35:05 nfsclnt tlshd[1498]: Handshake with nfsd.poochiereds.net (192.168.1.140) was successful
>>
>> The mount still isn't working yet, but I think I'm getting closer. I'll
>> keep poking at it.
>>
>
> OK! I cranked up the debugging. Here's the kernel tracepoints during
> this time:
>
> <idle>-0 [007] ..s2. 3657.494946: svc_xprt_enqueue: server=0.0.0.0:2049 client=(einval) flags=BUSY|CONN|CHNGBUF|LISTENER|CACHE_AUTH|CONG_CTRL pid=1051
> nfsd-1051 [005] ..... 3657.494980: svc_xprt_dequeue: server=0.0.0.0:2049 client=(einval) flags=BUSY|CONN|CHNGBUF|LISTENER|CACHE_AUTH|CONG_CTRL wakeup-us=45
> nfsd-1051 [005] ..... 3657.495071: svcsock_new_socket: type=STREAM family=AF_INET
> nfsd-1051 [005] ..... 3657.495085: svc_xprt_enqueue: server=192.168.1.140:2049 client=192.168.1.136:818 flags=BUSY|DATA|TEMP|CACHE_AUTH|CONG_CTRL pid=1050
> nfsd-1051 [005] ..... 3657.495086: svc_xprt_accept: server=192.168.1.140:2049 client=192.168.1.136:818 flags=BUSY|DATA|TEMP|CACHE_AUTH|CONG_CTRL protocol=tcp service=nfsd
> nfsd-1051 [005] ..... 3657.495092: svc_xprt_enqueue: server=0.0.0.0:2049 client=(einval) flags=BUSY|CONN|CHNGBUF|LISTENER|CACHE_AUTH|CONG_CTRL pid=1049
> nfsd-1051 [005] ..... 3657.495095: svc_xprt_dequeue: server=192.168.1.140:2049 client=192.168.1.136:818 flags=BUSY|DATA|TEMP|CACHE_AUTH|CONG_CTRL wakeup-us=158
> nfsd-1051 [005] ..... 3657.495101: svcsock_marker: addr=192.168.1.136:818 length=40 (last)
> nfsd-1051 [005] ..... 3657.495104: svcsock_tcp_recv: addr=192.168.1.136:818 result=40 flags=BUSY|DATA|TEMP|CACHE_AUTH|CONG_CTRL
> nfsd-1051 [005] ..... 3657.495111: svc_xprt_enqueue: server=192.168.1.140:2049 client=192.168.1.136:818 flags=BUSY|DATA|TEMP|CACHE_AUTH|CONG_CTRL pid=1048
> nfsd-1051 [005] ..... 3657.495112: svc_xdr_recvfrom: xid=0xd1e2303e head=[000000005f040892,40] page=0 tail=[0000000000000000,0] len=40
> nfsd-1050 [007] ..... 3657.495121: svc_xprt_dequeue: server=0.0.0.0:2049 client=(einval) flags=BUSY|CONN|CHNGBUF|LISTENER|CACHE_AUTH|CONG_CTRL wakeup-us=44
> nfsd-1051 [005] ..... 3657.495125: svc_tls_start: server=192.168.1.140:2049 client=192.168.1.136:818 flags=BUSY|DATA|TEMP|CACHE_AUTH|CONG_CTRL
> nfsd-1051 [005] ..... 3657.495128: svc_process: addr=192.168.1.136:818 xid=0xd1e2303e service=nfsd vers=4 proc=NULL
> nfsd-1051 [005] ..... 3657.495132: svc_xdr_sendto: xid=0xd1e2303e head=[000000005ccd151e,32] page=0(0) tail=[0000000000000000,0] len=32
> nfsd-1051 [005] ..... 3657.495133: svc_stats_latency: xid=0xd1e2303e server=192.168.1.140:2049 client=192.168.1.136:818 proc=NULL execute-us=21
> nfsd-1050 [007] ..... 3657.495146: svcsock_accept_err: addr=listener service=nfsd status=-11
> nfsd-1048 [004] ..... 3657.495147: svc_xprt_dequeue: server=192.168.1.140:2049 client=192.168.1.136:818 flags=BUSY|DATA|TEMP|CACHE_AUTH|CONG_CTRL|HANDSHAKE wakeup-us=42
> nfsd-1048 [004] ..... 3657.495151: svc_tls_upcall: server=192.168.1.140:2049 client=192.168.1.136:818 flags=BUSY|DATA|TEMP|CACHE_AUTH|CONG_CTRL|HANDSHAKE
> nfsd-1051 [005] ..... 3657.495163: svcsock_tcp_send: addr=192.168.1.136:818 result=36 flags=BUSY|DATA|TEMP|CACHE_AUTH|CONG_CTRL|HANDSHAKE
> nfsd-1051 [005] ..... 3657.495198: svc_send: xid=0xd1e2303e server=192.168.1.140:2049 client=192.168.1.136:818 status=36 flags=SECURE|USEDEFERRAL|SPLICE_OK|BUSY|DATA
> <idle>-0 [007] ..s2. 3657.651316: svcsock_data_ready: addr=192.168.1.136:818 result=0 flags=BUSY|DATA|TEMP|CACHE_AUTH|CONG_CTRL|HANDSHAKE
> <idle>-0 [007] ..s2. 3657.655648: svcsock_data_ready: addr=192.168.1.136:818 result=0 flags=BUSY|DATA|TEMP|CACHE_AUTH|CONG_CTRL|HANDSHAKE
> <idle>-0 [007] ..s2. 3657.669552: svcsock_data_ready: addr=192.168.1.136:818 result=0 flags=BUSY|DATA|TEMP|CACHE_AUTH|CONG_CTRL|HANDSHAKE
> nfsd-1048 [004] ..... 3662.666590: svc_tls_timed_out: server=192.168.1.140:2049 client=192.168.1.136:818 flags=BUSY|DATA|TEMP|CACHE_AUTH|CONG_CTRL|HANDSHAKE <<<<<<<<<<< TIMEOUT HERE
> nfsd-1048 [004] ..... 3662.666602: svc_xprt_enqueue: server=192.168.1.140:2049 client=192.168.1.136:818 flags=BUSY|CLOSE|DATA|TEMP|CACHE_AUTH|CONG_CTRL pid=1051
> nfsd-1048 [004] ..... 3662.666630: svc_xprt_dequeue: server=192.168.1.140:2049 client=192.168.1.136:818 flags=BUSY|CLOSE|DATA|TEMP|CACHE_AUTH|CONG_CTRL wakeup-us=5171655
> nfsd-1048 [004] ..... 3662.666631: svc_xprt_detach: server=192.168.1.140:2049 client=192.168.1.136:818 flags=BUSY|CLOSE|DATA|TEMP|DEAD|CACHE_AUTH|CONG_CTRL
> nfsd-1048 [004] ..... 3662.666689: svc_xprt_free: server=192.168.1.140:2049 client=192.168.1.136:818 flags=BUSY|CLOSE|DATA|TEMP|DEAD|CACHE_AUTH|CONG_CTRL
>
> It looks like the kernel timed out waiting for the downcall. I cranked
> up the debug logging in tlshd at the same time and attached it here.
> tlshd appears to have completed the handshake successfully, so I'm not
> sure why the kernel never saw the downcall.
Check that src/tlshd/netlink.h matches include/uapi/linux/handshake.h
exactly.

Otherwise, enable function tracing to confirm whether the downcall is
not being made at all, or is being made but failing.
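A minimal ftrace recipe for that might look like the following (a sketch; the '*handshake*' filter pattern assumes the relevant net/handshake functions contain "handshake" in their names):

```shell
# Sketch: function-trace the kernel handshake code to see whether
# tlshd's netlink downcall reaches net/handshake at all.
cd /sys/kernel/tracing
echo '*handshake*' > set_ftrace_filter
echo function > current_tracer
echo 1 > tracing_on

# ... reproduce the failing mount, then stop and inspect:
echo 0 > tracing_on
cat trace
```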
--
Chuck Lever