From: Pavel Vazharov
Date: Thu, 15 Apr 2021 09:57:47 +0300
To: Hao Chen
Cc: users@dpdk.org
Subject: Re: [dpdk-users] What is TCP read performance by using DPDK?

Hi,

"Does it mean your code just look at IPHeader and TCPheader without handling TCP payload?"

The proxy works at the application layer, i.e. with regular BSD sockets. As I said, we use a modified version of F-stack (https://github.com/F-Stack/f-stack) for this. Basically, our version is very close to the original libuinet (https://github.com/pkelsey/libuinet) but is based on a newer version of the FreeBSD networking stack (FreeBSD 11). Here is a rough description of how it works:

1. Every thread of our application reads packets in bursts from a single RX queue using the DPDK API.
2. These packets are then passed/injected into the FreeBSD/F-stack networking stack. We use a separate networking stack per thread.
3. The networking stack processes the packets, queueing them in the receive buffers of the TCP sockets. These are regular sockets.
4. Every application thread also regularly calls an epoll_wait API provided by the F-stack library; it is just a wrapper over the kevent API provided by FreeBSD.
5. The application gets the read/write events from epoll_wait and reads/writes to the corresponding sockets, exactly as in a regular Linux application where you read/write data from/to sockets.
6. Our test proxy application used the sockets in pairs: all data read from a given TCP socket were written to the corresponding TCP socket in the other direction.
7. Data written to a given socket are put in the send buffers of that socket and eventually sent out via the given TX queue using the DPDK API. This happens via a callback that is provided to the F-stack. The callback is called for every single packet that needs to be sent out by the F-stack, and our application implements this callback using the DPDK functionality. In our design the F-stack/FreeBSD stack doesn't know about the DPDK; it could work with a different packet processing framework.

"Does it mean UDP-payload-size is NOT 1400 bytes (MTU size)? And it is as smaller as 64 bytes for example?"

My personal observation is that, for the same amount of traffic, the UTP traffic generates many more packets per second than the corresponding HTTP traffic running over TCP. These are the two tests that we did. I can't provide you numbers about this at the moment, but there are usually lots of packets smaller than the MTU size. I think they come from things like the internal ACK packets, which seem to be sent more frequently than in TCP. Also the request, cancel, have, etc. messages of the BitTorrent protocol are most of the time sent in smaller packets.

"Do you handle UTP payload, or just "relay" it like proxy?"

Our proxies always work with sockets; we have application business logic built on top of the socket layer. For the test case we just proxied the data between pairs of UTP sockets, the same way we did it for the TCP proxy above. We have an implementation of the UTP protocol which provides a socket API similar to the BSD socket API, with read/write/shutdown/close/etc. functions. As you may have read, the UTP protocol is, in a way, a simplified version of the TCP protocol, but one more suitable for the needs of BitTorrent traffic.
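To make steps 4-6 above a bit more concrete, here is a rough sketch of the per-thread relay logic. The real proxy is C code driving F-stack's epoll_wait/read/write wrappers after packets have been injected from DPDK; in this illustration, Python's standard `selectors` and kernel socket pairs stand in for those APIs, and the `relay_once`/`pump` helpers are invented names, not our actual functions:

```python
import selectors
import socket

def relay_once(src: socket.socket, dst: socket.socket, bufsize: int = 4096) -> int:
    """Move whatever is currently readable on src to its paired dst socket.

    In the real proxy this read/write goes through F-stack's BSD-style
    socket API after an epoll_wait readiness event; ordinary kernel
    sockets stand in here purely to show the data flow.
    """
    data = src.recv(bufsize)
    if not data:
        return 0
    dst.sendall(data)
    return len(data)

def pump(sel: selectors.DefaultSelector) -> None:
    """One event-loop iteration: ask the selector (standing in for
    F-stack's epoll_wait wrapper) for readable sockets and relay each
    one to the paired socket registered alongside it."""
    for key, _events in sel.select(timeout=0):
        relay_once(key.fileobj, key.data)
```

Typical wiring: each proxy-side socket is registered with its partner as the attached data, so a readiness event on either side of a pair moves bytes to the other side, mirroring step 6.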
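Since UTP is a reliable protocol layered over UDP, segments can arrive out of order and have to sit in socket buffers until a contiguous run can be handed to the application. The in-order delivery idea can be sketched roughly like this (illustrative Python only; the class and method names are made up for the example and are not taken from our actual C implementation):

```python
class ReorderBuffer:
    """Minimal reassembly buffer: accepts segments keyed by byte
    offset and releases only contiguous data, the way a reliable
    protocol over UDP must before notifying the application layer."""

    def __init__(self) -> None:
        self.next_seq = 0    # next byte offset the application may consume
        self.segments = {}   # offset -> payload, held while out of order

    def push(self, seq: int, payload: bytes) -> bytes:
        """Accept one segment; return any newly contiguous bytes
        (empty if the segment only filled a hole further ahead)."""
        self.segments[seq] = payload
        out = bytearray()
        while self.next_seq in self.segments:
            chunk = self.segments.pop(self.next_seq)
            out += chunk
            self.next_seq += len(chunk)
        return bytes(out)
```

A segment arriving ahead of a gap is held back; once the gap is filled, everything contiguous is released at once, which corresponds to the "fires notification to the application layer" behaviour described below.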
So this is a reliable protocol, and that means there is a need for socket buffers. Our implementation is built over the UDP sockets provided by the F-stack. The data are read from the UDP sockets and put into the buffers of the corresponding UTP socket. When contiguous data have been collected in the buffers, the implementation fires a notification to the application layer. The write direction works the opposite way: the data from the application are first written to the buffers of the UTP socket and later sent via the internal UDP socket from the F-stack.

To summarize the above: we handle the TCP/UDP payload using the regular BSD socket API provided by the F-stack library and by our UTP stack library. For the test we just relayed the data between a few thousand pairs of sockets. Currently we do much more complex manipulation of this data, but that is still work in progress and the final performance is not tested yet.

Hope the above explanations help.
Pavel.