From: Harold Demure
To: users@dpdk.org
Date: Mon, 17 Jul 2017 15:18:30 +0200
Subject: [dpdk-users] Strange packet loss with multi-frame payloads

Hello,

I am having a problem with packet loss and I hope you can help me out. Below you will find a description of the application and of the problem. It is a little long, but I really hope somebody out there can help me, because this is driving me crazy.

*Application*
I have a client-server application: a single server and multiple clients. The machines have 8 active cores, each of which polls its own RX queue to receive packets and bursts out packets on its own TX queue (i.e., a run-to-completion model).

*Workload*
The workload is composed mostly of single-frame packets, but occasionally the clients send multi-frame requests to the server, and occasionally the server sends multi-frame replies back to the clients. Packets are fragmented at the UDP level (i.e., there is no IP fragmentation: every frame of the same request has frag_id == 0, even though the frames share the same packet_id).

*Problem*
I experience huge packet loss on the server when the occasional multi-frame requests from the clients carry a big payload (> 300 Kb). The eth stats that I gather on the server report no errors and no packet loss (q_errors, imissed, ierrors, oerrors and rx_nombuf are all equal to 0). Yet the application never sees some of the packets of the big requests that the clients send.
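For reference, the counters mentioned above are read roughly as follows (a simplified sketch, not my exact code; the helper name and the port/queue ids are placeholders, and port_id is uint8_t because DPDK 2.2 still uses 8-bit port ids):

#include <string.h>
#include <stdio.h>
#include <inttypes.h>
#include <rte_ethdev.h>

/* Sketch: dump the per-port and per-queue drop/error counters. */
static void
dump_drop_counters(uint8_t port_id, uint16_t nb_rx_queues)
{
	struct rte_eth_stats stats;
	uint16_t q;

	memset(&stats, 0, sizeof(stats));
	rte_eth_stats_get(port_id, &stats);

	printf("port %u: imissed=%" PRIu64 " ierrors=%" PRIu64
	       " oerrors=%" PRIu64 " rx_nombuf=%" PRIu64 "\n",
	       (unsigned)port_id, stats.imissed, stats.ierrors,
	       stats.oerrors, stats.rx_nombuf);

	for (q = 0; q < nb_rx_queues && q < RTE_ETHDEV_QUEUE_STAT_CNTRS; q++)
		printf("  rxq %u: q_errors=%" PRIu64 "\n",
		       (unsigned)q, stats.q_errors[q]);
}

Every one of these counters stays at 0 on the server while the application-level loss is happening.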
I have recorded some interesting facts:

1) The clients do not experience such packet loss, even though they also receive packets whose aggregate payload is the same size as the packets received by the server. The only differences with respect to the server are that a client machine of course has a lower RX load (it only gets the replies to its own requests) and that a client thread only receives packets from a single machine (the server).

2) This behavior does not arise as long as the biggest payload exchanged between clients and server is < 200 Kb. This leads me to conclude that fragmentation itself is not the issue (also, if I implement a stubborn retransmission, eventually all packets are received even with bigger payloads). In addition, I reserve plenty of memory for my mempool, so I don't think the server runs out of mbufs (and if that were the case, I guess I would see it in the dropped-packet counters, right?).

3) If I switch to the pipeline model (on the server only), the problem basically disappears. By pipeline model I mean something like the load-balancing example app, where a single core on the server receives all client packets on a single RX queue and hands them to worker cores, which reply back to the clients using their own TX queues. This leads me to think that the problem is on the server, not on the clients.

4) It does not seem to be a "load" problem. If I run the same tests multiple times, in some "lucky" runs the run-to-completion model outperforms the pipeline one. Also, the run-to-completion model with single-frame packets can handle far more single-frame packets per second than the number of frames per second generated by the workload that includes some big packets.

*Question*
Do you have any idea why I am witnessing this behavior? I know that having fewer queues can help performance by relieving contention on the NIC, but is it possible that this contention actually causes packets to be dropped?

*Platform*
DPDK: v2.2.0 (I know this is an old version, but I am dealing with legacy code I cannot change)
MLNX_OFED_LINUX-3.1-1.0.3-ubuntu14.04-x86_64
NIC: Mellanox Technologies MT27520 Family [ConnectX-3 Pro]
OS: Ubuntu 16.04.02 with a 4.4.0-72-generic kernel
CPU: Intel(R) Xeon(R) CPU E5-2630 v3 @ 2.40GHz, 2x8 cores

Thanks a lot, especially if you went through the whole email :)

Regards,
Harold
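P.S. In case it helps to visualize the run-to-completion model described above, each server core essentially runs a loop like the one below (a heavily simplified sketch, not my actual code; handle_request() is a placeholder for the real parsing/reassembly/reply logic, and the port/queue ids are just examples):

#include <stdint.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST_SIZE 32

static void
handle_request(struct rte_mbuf *pkt, uint8_t port_id, uint16_t queue_id)
{
	/* Placeholder for the real per-packet work: parse the UDP
	 * payload, reassemble multi-frame requests at the application
	 * level, build the reply and send it with rte_eth_tx_burst()
	 * on this core's TX queue. Here we just drop the packet. */
	(void)port_id;
	(void)queue_id;
	rte_pktmbuf_free(pkt);
}

/* Per-lcore loop, launched on each core with rte_eal_remote_launch():
 * each core polls its own RX queue and replies on its own TX queue. */
static int
lcore_main_loop(void *arg)
{
	const uint8_t port_id = 0;                           /* example port id    */
	const uint16_t queue_id = (uint16_t)(uintptr_t)arg;  /* one queue per core */
	struct rte_mbuf *bufs[BURST_SIZE];
	uint16_t nb_rx, i;

	for (;;) {
		nb_rx = rte_eth_rx_burst(port_id, queue_id, bufs, BURST_SIZE);
		for (i = 0; i < nb_rx; i++)
			handle_request(bufs[i], port_id, queue_id);
	}
	return 0;
}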