DPDK patches and discussions
* [dpdk-dev] [Bug 650] af_xdp:the packets in complete_queue can't match the tx_queue
@ 2021-03-09 10:22 bugzilla
  2021-03-10  9:33 ` bugzilla
From: bugzilla @ 2021-03-09 10:22 UTC (permalink / raw)
  To: dev

https://bugs.dpdk.org/show_bug.cgi?id=650

            Bug ID: 650
           Summary: af_xdp:the packets in complete_queue can't match the
                    tx_queue
           Product: DPDK
           Version: 20.05
          Hardware: x86
                OS: Linux
            Status: UNCONFIRMED
          Severity: normal
          Priority: Normal
         Component: other
          Assignee: dev@dpdk.org
          Reporter: huangying-c@360.cn
  Target Milestone: ---

kernel:5.9.1
=================================================
NIC:
driver: ixgbe
version: 5.9.1
firmware-version: 0x800006c5, 15.0.27
expansion-rom-version: 
bus-info: 0000:03:00.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: yes

ethtool -l eth2
Channel parameters for eth2:
Pre-set maximums:
RX:             0
TX:             0
Other:          1
Combined:       63
Current hardware settings:
RX:             0
TX:             0
Other:          1
Combined:       1
==================================================
application args:
EAL = --syslog local1 --log-level=pmd.net.af_xdp:8
--vdev=net_af_xdp,iface=eth2,start_queue=0,queue_count=1 -l0-1
==================================================


I wrote a DNS server using the AF_XDP driver. During stress testing I found
that the number of packets reported as sent (in the complete_queue) was
consistently lower than the number of packets placed into the tx_queue, and
this gap persisted throughout the test, resulting in poor performance under
load. To investigate, I added logging to the code.
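
A minimal way to see this from the application side (a hypothetical helper,
not part of my server or of the driver): mbufs handed to the TX ring return
to the mempool only after their completions are reclaimed, so a persistent
TX/completion gap shows up as a growing in-use count under constant load.

#include <stdio.h>
#include <rte_mempool.h>

/* Hypothetical instrumentation: mbufs submitted to the TX ring are freed
 * back to the mempool only once pull_umem_cq() sees their completions, so
 * a lasting TX/completion gap appears as a shrinking free count. */
static void
report_mbuf_backlog(const struct rte_mempool *mb_pool)
{
    unsigned int avail = rte_mempool_avail_count(mb_pool);
    unsigned int in_use = rte_mempool_in_use_count(mb_pool);

    printf("mbufs free: %u, in flight (rings + app): %u\n", avail, in_use);
}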

The log is as follows:
af_xdp_tx_zc(): want to send nb_pkts 8            /* I want to send 8 packets via af_xdp_tx_zc */
pull_umem_cq(): send complete n 3 packets
af_xdp_tx_zc(): submit to tx_ring count 8 packets /* the 8 packets are put into the tx_ring */
af_xdp_rx_zc(): rcvd 5 packets
reserve_fill_queue_zc(): reserve size 5 for fill queue
af_xdp_tx_zc(): want to send nb_pkts 5
pull_umem_cq(): send complete n 5 packets         /* only 5 packets complete here, not 8 */
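
Reading this together with the code below: pull_umem_cq() peeks at most
size completion entries, and af_xdp_tx_zc() passes the current burst's
nb_pkts as that size, so when a burst of 8 is followed by a burst of 5, at
most 5 of the 8 outstanding completions can be reclaimed in that call; the
rest wait for later bursts, delaying the return of their mbufs to the pool.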

The added log in rte_eth_af_xdp.c:af_xdp_tx_zc:
static uint16_t
af_xdp_tx_zc(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
{
    struct pkt_tx_queue *txq = queue;
    struct xsk_umem_info *umem = txq->umem;
    struct rte_mbuf *mbuf;
    unsigned long tx_bytes = 0;
    int i;
    uint32_t idx_tx;
    uint16_t count = 0;
    struct xdp_desc *desc;
    uint64_t addr, offset;

    AF_XDP_LOG(DEBUG, "want to send nb_pkts %u\n", nb_pkts); /* log added by me */
    pull_umem_cq(umem, nb_pkts); /* reclaim completions from previous TX bursts */

    for (i = 0; i < nb_pkts; i++) {
        mbuf = bufs[i];

        if (mbuf->pool == umem->mb_pool) {
            if (!xsk_ring_prod__reserve(&txq->tx, 1, &idx_tx)) {
                AF_XDP_LOG(DEBUG, "=======nb_pkts %u, count %u\n", nb_pkts, count);
                kick_tx(txq);
                goto out;
            }
            desc = xsk_ring_prod__tx_desc(&txq->tx, idx_tx);
            desc->len = mbuf->pkt_len;
            addr = (uint64_t)mbuf - (uint64_t)umem->buffer -
                    umem->mb_pool->header_size;
            offset = rte_pktmbuf_mtod(mbuf, uint64_t) -
                    (uint64_t)mbuf +
                    umem->mb_pool->header_size;
            offset = offset << XSK_UNALIGNED_BUF_OFFSET_SHIFT;
            desc->addr = addr | offset;
            count++;
        } else {
            struct rte_mbuf *local_mbuf =
                    rte_pktmbuf_alloc(umem->mb_pool);
            void *pkt;

            if (local_mbuf == NULL)
                goto out;

            if (!xsk_ring_prod__reserve(&txq->tx, 1, &idx_tx)) {
                rte_pktmbuf_free(local_mbuf);
                kick_tx(txq);
                goto out;
            }

            desc = xsk_ring_prod__tx_desc(&txq->tx, idx_tx);
            desc->len = mbuf->pkt_len;

            addr = (uint64_t)local_mbuf - (uint64_t)umem->buffer -
                    umem->mb_pool->header_size;
            offset = rte_pktmbuf_mtod(local_mbuf, uint64_t) -
                    (uint64_t)local_mbuf +
                    umem->mb_pool->header_size;
            pkt = xsk_umem__get_data(umem->buffer, addr + offset);
            offset = offset << XSK_UNALIGNED_BUF_OFFSET_SHIFT;
            desc->addr = addr | offset;
            rte_memcpy(pkt, rte_pktmbuf_mtod(mbuf, void *),
                    desc->len);
            rte_pktmbuf_free(mbuf);
            count++;
        }

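        /* Note: in the copy path above, mbuf was already freed by
         * rte_pktmbuf_free(mbuf), so the statement below reads a freed
         * mbuf; caching pkt_len before the free would be safer. */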
        tx_bytes += mbuf->pkt_len;
    }

    kick_tx(txq);
    AF_XDP_LOG(DEBUG, "submit to tx_ring count %u packets\n", count); /* log added by me */
out:
    xsk_ring_prod__submit(&txq->tx, count);

    txq->stats.tx_pkts += count;
    txq->stats.tx_bytes += tx_bytes;
    txq->stats.tx_dropped += nb_pkts - count;

    return count;
}

The added log in rte_eth_af_xdp.c:pull_umem_cq:
static void
pull_umem_cq(struct xsk_umem_info *umem, int size)
{
    struct xsk_ring_cons *cq = &umem->cq;
    size_t i, n;
    uint32_t idx_cq = 0;

    n = xsk_ring_cons__peek(cq, size, &idx_cq);

    for (i = 0; i < n; i++) {
        uint64_t addr;
        addr = *xsk_ring_cons__comp_addr(cq, idx_cq++);
#if defined(XDP_UMEM_UNALIGNED_CHUNK_FLAG)
        addr = xsk_umem__extract_addr(addr);
        rte_pktmbuf_free((struct rte_mbuf *)
                    xsk_umem__get_data(umem->buffer,
                    addr + umem->mb_pool->header_size));
#else
        rte_ring_enqueue(umem->buf_ring, (void *)addr);
#endif
    }
    AF_XDP_LOG(DEBUG, "send complete n %lu packets\n", n); /* log added by me */
    xsk_ring_cons__release(cq, n);
}
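
Note that xsk_ring_cons__peek() above is capped at size, which
af_xdp_tx_zc() passes as the current burst's nb_pkts. One possible
mitigation (a sketch only, under the assumption that draining the whole
completion ring per call is safe here; this is not necessarily the upstream
change that later resolved this bug) is to peek up to the full ring depth
on every call:

/* Sketch, meant to live in rte_eth_af_xdp.c next to pull_umem_cq(): drain
 * everything available in the completion ring rather than capping at the
 * current burst size. XSK_RING_CONS__DEFAULT_NUM_DESCS is the libbpf
 * default ring depth; a real driver would use the size the completion
 * ring was actually created with. */
static void
pull_umem_cq_all(struct xsk_umem_info *umem)
{
    pull_umem_cq(umem, XSK_RING_CONS__DEFAULT_NUM_DESCS);
}

With this, the call pull_umem_cq(umem, nb_pkts) at the top of
af_xdp_tx_zc() would become pull_umem_cq_all(umem).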


* [dpdk-dev] [Bug 650] af_xdp:the packets in complete_queue can't match the tx_queue
  2021-03-09 10:22 [dpdk-dev] [Bug 650] af_xdp:the packets in complete_queue can't match the tx_queue bugzilla
@ 2021-03-10  9:33 ` bugzilla
From: bugzilla @ 2021-03-10  9:33 UTC (permalink / raw)
  To: dev

https://bugs.dpdk.org/show_bug.cgi?id=650

Ciara Loftus (ciara.loftus@intel.com) changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
             Status|IN_PROGRESS                 |RESOLVED
         Resolution|---                         |FIXED

--- Comment #3 from Ciara Loftus (ciara.loftus@intel.com) ---
Closing. Issue resolved upstream.

