From: Arvind Narayanan
Date: Sat, 4 May 2019 18:07:24 -0500
To: Yongseok Koh
Cc: users
Subject: Re: [dpdk-users] Issue with mlx5_rxtx.c while calling rte_eth_tx_burst() in DPDK 18.11

It passes __rte_mbuf_sanity_check. rte_mbuf_check() is not available in
DPDK 18.11. I debugged when the assertion failed and double-checked every
mbuf's pkt_len and data_len; all seem fine. Yes, in my case it's simple:
all mbufs are single-segment.

Is there some bound on the number of tx calls we can do consecutively with
the mlx5 driver? If I do a lot of tx calls back to back (e.g. ~10 to 20
calls to rte_eth_tx_burst(), each sending out a burst of ~64 mbufs), I hit
this problem; otherwise I don't. Thoughts?
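For reference, the length check I ran over each batch looks roughly like
this (simplified sketch; the helper name is mine, not lifted from my
actual code):

==================
#include <rte_mbuf.h>

/* Illustrative helper: verify the single-segment length invariant that
 * the mlx5 assertion is checking. Returns 0 if every mbuf is OK. */
static int
check_single_seg_lengths(struct rte_mbuf **mbufs, uint16_t n)
{
    uint16_t i;

    for (i = 0; i < n; i++) {
        struct rte_mbuf *m = mbufs[i];

        if (m->nb_segs != 1 || m->next != NULL)
            return -1;                  /* expected a single segment */
        if (m->pkt_len != m->data_len)
            return -1;                  /* pkt_len/data_len mismatch */
    }
    return 0;
}
==================

A check along these lines passes for every batch right before the tx call.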
Arvind

On Tue, Apr 23, 2019 at 6:45 PM Yongseok Koh wrote:
>
> > On Apr 21, 2019, at 9:59 PM, Arvind Narayanan wrote:
> >
> > I am running into a weird problem when using rte_eth_tx_burst() with
> > mlx5 in DPDK 18.11, running on Ubuntu 18.04 LTS (using a Mellanox
> > ConnectX-5 100G EN).
> >
> > Here is a simplified snippet.
> >
> > ==================
> > #define MAX_BATCHES 64
> > #define MAX_BURST_SIZE 64
> >
> > struct batch {
> >     struct rte_mbuf *mbufs[MAX_BURST_SIZE]; // array of packets
> >     int num_mbufs;                          // number of mbufs
> >     int queue;                              // outgoing tx queue
> >     int port;                               // outgoing port
> > };
> >
> > struct batch *batches[MAX_BATCHES];
> >
> > /* dequeue a number of batches */
> > int batch_count = rte_ring_sc_dequeue_bulk(some_rte_ring,
> >         (void **)&batches, MAX_BATCHES, NULL);
> >
> > /* transmit out all pkts from every batch */
> > if (likely(batch_count > 0)) {
> >     for (i = 0; i < batch_count; i++) {
> >         ret = rte_eth_tx_burst(batches[i]->port, batches[i]->queue,
> >                 (struct rte_mbuf **)batches[i]->mbufs,
> >                 batches[i]->num_mbufs);
> >     }
> > }
> > ==================
> >
> > At rte_eth_tx_burst(), I keep getting an error saying:
> >
> > myapp: /home/arvind/dpdk/drivers/net/mlx5/mlx5_rxtx.c:1652: uint16_t
> > txq_burst_empw(struct mlx5_txq_data *, struct rte_mbuf **, uint16_t):
> > Assertion `length == DATA_LEN(buf)' failed.
> >
> > OR
> >
> > myapp: /home/arvind/dpdk/drivers/net/mlx5/mlx5_rxtx.c:1609: uint16_t
> > txq_burst_empw(struct mlx5_txq_data *, struct rte_mbuf **, uint16_t):
> > Assertion `length == DATA_LEN(buf)' failed.
> >
> > I have debugged and ensured all the mbuf counts (at least in my code)
> > are good. All the memory references to the mbufs also look good.
> > However, I am not sure why the Mellanox driver would complain.
> >
> > I have also tried to play with mlx5_rxtx.c by changing the above lines
> > to something like
> >     assert(length == pkts_n); // pkts_n is an argument passed to the func
> > Didn't help.
> >
> > Any thoughts?
>
> Hi,
>
> Does your mbuf pass rte_mbuf_check()?
> That complaint is about a mismatch between m->pkt_len and m->data_len.
> If the mbuf is a single-segment packet (m->nb_segs == 1, m->next == NULL),
> m->pkt_len should be the same as m->data_len.
>
> That assert() isn't strictly needed in txq_burst_empw(), though.
>
>
> Thanks,
> Yongseok
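PS: One thing I still need to rule out on my side is the return value of
rte_eth_tx_burst() in the snippet above. It can accept fewer packets than
requested (e.g. when the tx queue fills up during long runs of back-to-back
bursts), and the unsent mbufs stay owned by the caller. Roughly, as an
illustrative sketch (not my actual code, and not necessarily related to the
assertion itself):

==================
#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* Generic pattern, illustrative only: send a burst and free whatever the
 * PMD did not accept, so the mbufs are neither leaked nor re-sent. */
static void
send_burst(uint16_t port, uint16_t queue, struct rte_mbuf **mbufs, uint16_t n)
{
    uint16_t sent = rte_eth_tx_burst(port, queue, mbufs, n);

    while (sent < n)
        rte_pktmbuf_free(mbufs[sent++]);
}
==================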