From: Shyam Shrivastav
Date: Tue, 18 Jul 2017 11:20:32 +0530
To: Harold Demure
Cc: Pavel Shirshov, users@dpdk.org
Subject: Re: [dpdk-users] Strange packet loss with multi-frame payloads
List-Id: DPDK usage discussions

As I understand it, the problem disappears with 1 RX queue on the server. You
can reduce the number of queues on the server from 8 and arrive at an optimal
value without packet loss. For the Intel 82599 NIC, packet loss with more than
4 RX queues was reported on the dpdk dev or users mailing list; I read it in
the archives some time back while looking for similar information on the 82599.
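
Just to make the suggestion concrete, here is a minimal sketch of the usual
port-init pattern where the RX queue count is the knob to tune. NB_RX_QUEUES,
NB_RX_DESC, port_conf and mbuf_pool are placeholders, not taken from your
application; TX queue setup and rte_eth_dev_start() are omitted for brevity:

#include <rte_ethdev.h>
#include <rte_mempool.h>

#define NB_RX_QUEUES 4   /* try 8 -> 4 -> 2 -> 1 and watch for loss */
#define NB_RX_DESC   512

static int
setup_rx(uint16_t port_id, const struct rte_eth_conf *port_conf,
         struct rte_mempool *mbuf_pool)
{
        uint16_t q;
        int ret;

        /* 1 TX queue only to keep the sketch short */
        ret = rte_eth_dev_configure(port_id, NB_RX_QUEUES, 1, port_conf);
        if (ret < 0)
                return ret;

        for (q = 0; q < NB_RX_QUEUES; q++) {
                ret = rte_eth_rx_queue_setup(port_id, q, NB_RX_DESC,
                                rte_eth_dev_socket_id(port_id),
                                NULL, mbuf_pool);
                if (ret < 0)
                        return ret;
        }
        return 0;
}
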
On Tue, Jul 18, 2017 at 4:54 AM, Harold Demure wrote:

> Hello again,
> I tried to convert my statically defined buffers into buffers allocated
> through rte_malloc (as discussed in the previous email, see quoted text).
> Unfortunately, the problem is still there :(
> Regards,
> Harold
>
> > 2. How do you know you have the packet loss?
> >
> > *I know it because some fragmented packets never get reassembled fully.
> > If I print the packets seen by the server I see something like
> > "PCKT_ID 10 FRAG 250, PCKT_ID 10 FRAG 252". And FRAG 251 is never
> > printed.*
> >
> > *Actually, something strange that happens sometimes is that a core
> > receives fragments of two packets and, say, receives frag 1 of packet X,
> > frag 2 of packet Y, frag 3 of packet X, frag 4 of packet Y.*
> > *Or, after "losing" a fragment for packet X, I only see printed
> > fragments with an EVEN frag_id for that packet X. At least for a while.*
> >
> > *This also led me to consider a bug in my implementation (I don't
> > experience this problem if I run with a SINGLE client thread). However,
> > with smaller payloads, even fragmented, everything runs smoothly.*
> > *If you have any suggestions for tests to run to spot a possible bug in
> > my implementation, it'd be more than welcome!*
> >
> > *MORE ON THIS: the buffers in which I store the packets taken from RX
> > are statically defined arrays, like struct rte_mbuf* temp_mbuf[SIZE].
> > SIZE can be pretty high (say, 10K entries), and there are 3 of those
> > arrays per core. Can it be that, somehow, they mess up the memory
> > layout (e.g., they intersect)?*
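
For reference, below is a minimal sketch of the kind of change Harold
describes above -- replacing a static per-core array such as
struct rte_mbuf *temp_mbuf[SIZE] with one allocated through the rte_malloc
family. MBUF_ARRAY_SIZE and the "temp_mbuf" tag are placeholders; this only
illustrates the approach, it is not the code from the thread:

#include <rte_malloc.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>

#define MBUF_ARRAY_SIZE 10240   /* "SIZE can be pretty high (say, 10K)" */

/* Instead of:  static struct rte_mbuf *temp_mbuf[MBUF_ARRAY_SIZE];  */
static struct rte_mbuf **
alloc_mbuf_array(void)
{
        /* rte_zmalloc_socket() zeroes the memory, places it on the local
         * NUMA socket and cannot overlap any other allocation; align = 0
         * means default (cache-line) alignment.  The caller must check
         * for NULL and later release the array with rte_free(). */
        return rte_zmalloc_socket("temp_mbuf",
                        MBUF_ARRAY_SIZE * sizeof(struct rte_mbuf *),
                        0, rte_socket_id());
}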