DPDK patches and discussions
From: bugzilla@dpdk.org
To: dev@dpdk.org
Subject: [dpdk-dev] [Bug 650] af_xdp:the packets in complete_queue can't match the tx_queue
Date: Tue, 09 Mar 2021 10:22:42 +0000	[thread overview]
Message-ID: <bug-650-3@http.bugs.dpdk.org/> (raw)

https://bugs.dpdk.org/show_bug.cgi?id=650

            Bug ID: 650
           Summary: af_xdp:the packets in complete_queue can't match the
                    tx_queue
           Product: DPDK
           Version: 20.05
          Hardware: x86
                OS: Linux
            Status: UNCONFIRMED
          Severity: normal
          Priority: Normal
         Component: other
          Assignee: dev@dpdk.org
          Reporter: huangying-c@360.cn
  Target Milestone: ---

kernel:5.9.1
=================================================
NIC:
driver: ixgbe
version: 5.9.1
firmware-version: 0x800006c5, 15.0.27
expansion-rom-version: 
bus-info: 0000:03:00.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: yes

ethtool -l eth2
Channel parameters for eth2:
Pre-set maximums:
RX:             0
TX:             0
Other:          1
Combined:       63
Current hardware settings:
RX:             0
TX:             0
Other:          1
Combined:       1
==================================================
application args:
EAL = --syslog local1 --log-level=pmd.net.af_xdp:8
--vdev=net_af_xdp,iface=eth2,start_queue=0,queue_count=1 -l0-1
==================================================


I wrote a DNS server using the AF_XDP driver. Under stress testing, I found
that the number of packets reported as sent (in complete_queue) was
consistently lower than the number placed into tx_queue, and this was always
the case, resulting in poor performance during the stress tests.
I then added logging to the code.


The log follows:
af_xdp_tx_zc(): want to send nb_pkts 8            /* 8 packets handed to af_xdp_tx_zc */
pull_umem_cq(): send complete n 3 packets
af_xdp_tx_zc(): submit to tx_ring count 8 packets /* the 8 packets are put into tx_ring */
af_xdp_rx_zc(): rcvd 5 packets
reserve_fill_queue_zc(): reserve size 5 for fill queue
af_xdp_tx_zc(): want to send nb_pkts 5
pull_umem_cq(): send complete n 5 packets         /* only 5 packets completed, not 8 */

The log added in rte_eth_af_xdp.c:af_xdp_tx_zc:
static uint16_t
af_xdp_tx_zc(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
{
    struct pkt_tx_queue *txq = queue;
    struct xsk_umem_info *umem = txq->umem;
    struct rte_mbuf *mbuf;
    unsigned long tx_bytes = 0;
    int i;
    uint32_t idx_tx;
    uint16_t count = 0;
    struct xdp_desc *desc;
    uint64_t addr, offset;

    AF_XDP_LOG(DEBUG, "want to send nb_pkts %u\n", nb_pkts); /* log added by me */
    pull_umem_cq(umem, nb_pkts); /* reap completions for previously sent packets */

    for (i = 0; i < nb_pkts; i++) {
        mbuf = bufs[i];

        if (mbuf->pool == umem->mb_pool) {
            if (!xsk_ring_prod__reserve(&txq->tx, 1, &idx_tx)) {
                AF_XDP_LOG(DEBUG, "=======nb_pkts %u, count %u\n",
                        nb_pkts, count);
                kick_tx(txq);
                goto out;
            }
            desc = xsk_ring_prod__tx_desc(&txq->tx, idx_tx);
            desc->len = mbuf->pkt_len;
            addr = (uint64_t)mbuf - (uint64_t)umem->buffer -
                    umem->mb_pool->header_size;
            offset = rte_pktmbuf_mtod(mbuf, uint64_t) -
                    (uint64_t)mbuf +
                    umem->mb_pool->header_size;
            offset = offset << XSK_UNALIGNED_BUF_OFFSET_SHIFT;
            desc->addr = addr | offset;
            count++;
        } else {
            struct rte_mbuf *local_mbuf =
                    rte_pktmbuf_alloc(umem->mb_pool);
            void *pkt;

            if (local_mbuf == NULL)
                goto out;

            if (!xsk_ring_prod__reserve(&txq->tx, 1, &idx_tx)) {
                rte_pktmbuf_free(local_mbuf);
                kick_tx(txq);
                goto out;
            }

            desc = xsk_ring_prod__tx_desc(&txq->tx, idx_tx);
            desc->len = mbuf->pkt_len;

            addr = (uint64_t)local_mbuf - (uint64_t)umem->buffer -
                    umem->mb_pool->header_size;
            offset = rte_pktmbuf_mtod(local_mbuf, uint64_t) -
                    (uint64_t)local_mbuf +
                    umem->mb_pool->header_size;
            pkt = xsk_umem__get_data(umem->buffer, addr + offset);
            offset = offset << XSK_UNALIGNED_BUF_OFFSET_SHIFT;
            desc->addr = addr | offset;
            rte_memcpy(pkt, rte_pktmbuf_mtod(mbuf, void *),
                    desc->len);
            rte_pktmbuf_free(mbuf);
            count++;
        }

        tx_bytes += mbuf->pkt_len;
    }

    kick_tx(txq);
    AF_XDP_LOG(DEBUG, "submit to tx_ring count %u packets\n", count); /* log added by me */
out:
    xsk_ring_prod__submit(&txq->tx, count);

    txq->stats.tx_pkts += count;
    txq->stats.tx_bytes += tx_bytes;
    txq->stats.tx_dropped += nb_pkts - count;

    return count;
}

The log added in rte_eth_af_xdp.c:pull_umem_cq:
static void
pull_umem_cq(struct xsk_umem_info *umem, int size)
{
    struct xsk_ring_cons *cq = &umem->cq;
    size_t i, n;
    uint32_t idx_cq = 0;

    n = xsk_ring_cons__peek(cq, size, &idx_cq);

    for (i = 0; i < n; i++) {
        uint64_t addr;
        addr = *xsk_ring_cons__comp_addr(cq, idx_cq++);
#if defined(XDP_UMEM_UNALIGNED_CHUNK_FLAG)
        addr = xsk_umem__extract_addr(addr);
        rte_pktmbuf_free((struct rte_mbuf *)
                    xsk_umem__get_data(umem->buffer,
                    addr + umem->mb_pool->header_size));
#else
        rte_ring_enqueue(umem->buf_ring, (void *)addr);
#endif
    }
    AF_XDP_LOG(DEBUG, "send complete n %lu packets\n", n); /* log added by me */
    xsk_ring_cons__release(cq, n);
}

-- 
You are receiving this mail because:
You are the assignee for the bug.


Thread overview: 2+ messages
2021-03-09 10:22 bugzilla [this message]
2021-03-10  9:33 ` bugzilla
