From: Cliff Burdick
Date: Sun, 7 Jun 2020 06:16:40 -0700
To: Alex Kiselev
Cc: Stephen Hemminger, users
Subject: Re: [dpdk-users] segmentation fault while accessing mbuf
List-Id: DPDK usage discussions

That shouldn't matter. The mbuf size is allocated when you create the
mempool, and data_len/pkt_len just specify the size of the total packet
and of each segment; the underlying storage size stays the same. Have
you checked whether it's potentially a hugepage issue?

On Sun, Jun 7, 2020, 02:59 Alex Kiselev wrote:

> On 2020-06-07 04:41, Cliff Burdick wrote:
> > I can't tell from your code, but you assigned nb_rx to the number of
> > packets received, but then used vec_size, which might be larger. Does
> > this happen if you use nb_rx in your loops?
>
> No, this doesn't happen.
> I just skip the part of the code that translates nb_rx to vec_size,
> since that code is double-checked.
>
> My actual question now is about the possible impact of using
> incorrect values of the mbuf's pkt_len and data_len fields.
> > On Sat, Jun 6, 2020 at 5:59 AM Alex Kiselev wrote:
> >
> >>> On 1 June 2020, at 19:17, Stephen Hemminger wrote:
> >>>
> >>> On Mon, 01 Jun 2020 15:24:25 +0200
> >>> Alex Kiselev wrote:
> >>>
> >>>> Hello,
> >>>>
> >>>> I've got a segmentation fault error in my data plane path.
> >>>> I am pretty sure the code where the segfault happened is ok,
> >>>> so my guess is that I somehow received a corrupted mbuf.
> >>>> How could I troubleshoot this? Is there any way?
> >>>> Is it possible that other threads of the application
> >>>> corrupted that mbuf?
> >>>>
> >>>> I would really appreciate any advice.
> >>>> Thanks.
> >>>>
> >>>> DPDK 18.11.3
> >>>> NIC: 82599ES
> >>>>
> >>>> Code:
> >>>>
> >>>> nb_rx = rte_eth_rx_burst(port_id, queue_id, pkts_burst,
> >>>>                          MAX_PKT_BURST);
> >>>>
> >>>> ...
> >>>>
> >>>> for (i = 0; i < vec_size; i++)
> >>>>     rte_prefetch0(rte_pktmbuf_mtod(m_v[i], void *));
> >>>>
> >>>> for (i = 0; i < vec_size; i++) {
> >>>>     m = m_v[i];
> >>>>     eth_hdr = rte_pktmbuf_mtod(m, struct ether_hdr *);
> >>>>     eth_type = rte_be_to_cpu_16(eth_hdr->ether_type);   <---
> >>>> Segmentation fault
> >>>> ...
> >>>>
> >>>> #0  rte_arch_bswap16 (_x=<error reading variable: Cannot access
> >>>> memory at address 0x4d80000000053010>)
> >>>
> >>> Build with as many of the debug options turned on in the DPDK
> >>> config, and build with EXTRA_CFLAGS of -g.
> >>
> >> Could using an incorrect (a very big) value of mbuf pkt_len and
> >> data_len while transmitting cause mbuf corruption and a subsequent
> >> segmentation fault on rx?
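Cliff's point about the length fields can be sketched with a toy model
(standalone C, not actual DPDK code; `toy_mbuf` and its fields merely
mirror the names in `struct rte_mbuf`). The fields only *describe* the
payload; the backing buffer stays the size chosen at pool creation, so
an oversized data_len doesn't resize anything, but anything that trusts
it (a TX driver doing DMA, a copy routine) will walk past buf_len into
neighbouring pool elements, which is exactly the kind of corruption
being asked about:

```c
#include <assert.h>
#include <stdint.h>

/* Simplified model of the mbuf length fields (a sketch, not DPDK). */
struct toy_mbuf {
    uint16_t buf_len;   /* backing buffer size, fixed at pool creation */
    uint16_t data_off;  /* headroom before the packet data             */
    uint16_t data_len;  /* amount of data in this segment              */
    uint32_t pkt_len;   /* total length across all segments            */
};

/* The invariant a sane segment must satisfy: the described data must
 * fit inside the fixed buffer. A bogus data_len violates this without
 * changing the buffer itself. */
static int toy_mbuf_len_ok(const struct toy_mbuf *m)
{
    return (uint32_t)m->data_off + (uint32_t)m->data_len <= m->buf_len;
}
```

A check like this before queuing mbufs for TX would catch the
oversized-length case the thread is discussing.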
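For a make-built DPDK 18.11 tree, Stephen's advice maps to something
like the following (a sketch; verify the exact option names against
config/common_base in your checkout before relying on them):

```shell
# Turn on mbuf/mempool debug checks in the build config
# (option names as found in DPDK 18.11's config/common_base).
sed -i 's/CONFIG_RTE_LIBRTE_MBUF_DEBUG=n/CONFIG_RTE_LIBRTE_MBUF_DEBUG=y/' config/common_base
sed -i 's/CONFIG_RTE_LIBRTE_MEMPOOL_DEBUG=n/CONFIG_RTE_LIBRTE_MEMPOOL_DEBUG=y/' config/common_base

# Rebuild with symbols and no optimization so the backtrace is usable
make config T=x86_64-native-linuxapp-gcc
make EXTRA_CFLAGS='-g -O0'
```

With the mbuf debug option enabled, rte_mbuf_sanity_check() is run on
rx/tx paths and should abort at the point of corruption rather than at
a later, unrelated dereference.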