From mboxrd@z Thu Jan  1 00:00:00 1970
Message-ID: <55C07F8A.6070202@linaro.org>
Date: Tue, 04 Aug 2015 10:02:02 +0100
From: Zoltan Kiss
To: Cunming Liang, dev@dpdk.org
References: <1438330669-25942-1-git-send-email-cunming.liang@intel.com> <1438673558-13284-1-git-send-email-cunming.liang@intel.com>
In-Reply-To: <1438673558-13284-1-git-send-email-cunming.liang@intel.com>
Subject: Re: [dpdk-dev] [PATCH v2] ixgbe: remove vector pmd burst size restriction
List-Id: patches and discussions about DPDK

On 04/08/15 08:32, Cunming Liang wrote:
> On the receive side, the burst size now floor-aligns to RTE_IXGBE_DESCS_PER_LOOP (a power of 2).
> Under this rule, a burst size of less than 4 still won't receive anything.
> (Before this change, a burst size of less than 32 couldn't receive anything.)
> _recv_*_pkts_vec returns no more than 32 (RTE_IXGBE_RXQ_REARM_THRESH) packets.
>
> On the transmit side, the max burst size is no longer bound to a constant, but it still
> has to be checked against crossing the tx_rs_thresh boundary.
>
> There's no obvious performance drop found on either recv_pkts_vec
> or recv_scattered_pkts_vec at burst size 32.
>
> Signed-off-by: Cunming Liang
> ---
> v2 change:
>   - keep max rx burst size in 32
>   - reword some comments
>
>  drivers/net/ixgbe/ixgbe_rxtx.c     |  4 +++-
>  drivers/net/ixgbe/ixgbe_rxtx.h     |  5 ++---
>  drivers/net/ixgbe/ixgbe_rxtx_vec.c | 39 +++++++++++++++++++++-----------------
>  3 files changed, 27 insertions(+), 21 deletions(-)
>
> diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
> index 91023b9..d06aaae 100644
> --- a/drivers/net/ixgbe/ixgbe_rxtx.c
> +++ b/drivers/net/ixgbe/ixgbe_rxtx.c
> @@ -4008,7 +4008,9 @@ ixgbe_set_rx_function(struct rte_eth_dev *dev)
>  	 */
>  	} else if (adapter->rx_vec_allowed) {
>  		PMD_INIT_LOG(DEBUG, "Vector rx enabled, please make sure RX "
> -			     "burst size no less than 32.");
> +			     "burst size no less than "
> +			     "RTE_IXGBE_DESCS_PER_LOOP(=4) (port=%d).",

I think you should write this line as:

    "burst size no less than %d (port=%d).", RTE_IXGBE_DESCS_PER_LOOP,

so the logged value follows the macro instead of hardcoding "(=4)".

> +			     dev->data->port_id);
>
>  		dev->rx_pkt_burst = ixgbe_recv_pkts_vec;
>  	} else if (adapter->rx_bulk_alloc_allowed) {
> diff --git a/drivers/net/ixgbe/ixgbe_rxtx.h b/drivers/net/ixgbe/ixgbe_rxtx.h
> index 113682a..b9eca67 100644
> --- a/drivers/net/ixgbe/ixgbe_rxtx.h
> +++ b/drivers/net/ixgbe/ixgbe_rxtx.h
> @@ -47,9 +47,8 @@
>  	(uint64_t) ((mb)->buf_physaddr + RTE_PKTMBUF_HEADROOM)
>
>  #ifdef RTE_IXGBE_INC_VECTOR
> -#define RTE_IXGBE_VPMD_RX_BURST         32
> -#define RTE_IXGBE_VPMD_TX_BURST         32
> -#define RTE_IXGBE_RXQ_REARM_THRESH      RTE_IXGBE_VPMD_RX_BURST
> +#define RTE_IXGBE_RXQ_REARM_THRESH      32
> +#define RTE_IXGBE_MAX_RX_BURST          RTE_IXGBE_RXQ_REARM_THRESH
>  #define RTE_IXGBE_TX_MAX_FREE_BUF_SZ    64
>  #endif
>
> diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec.c b/drivers/net/ixgbe/ixgbe_rxtx_vec.c
> index cf25a53..2ca0e4c 100644
> --- a/drivers/net/ixgbe/ixgbe_rxtx_vec.c
> +++ b/drivers/net/ixgbe/ixgbe_rxtx_vec.c
> @@ -245,13 +245,13 @@ desc_to_olflags_v(__m128i descs[4], struct rte_mbuf **rx_pkts)
>  #endif
>
>  /*
> - * vPMD receive routine, now only accept (nb_pkts == RTE_IXGBE_VPMD_RX_BURST)
> - * in one loop
> + * vPMD raw receive routine, only accept(nb_pkts >= RTE_IXGBE_DESCS_PER_LOOP)
>   *
>   * Notice:
> - * - nb_pkts < RTE_IXGBE_VPMD_RX_BURST, just return no packet
> - * - nb_pkts > RTE_IXGBE_VPMD_RX_BURST, only scan RTE_IXGBE_VPMD_RX_BURST
> + * - nb_pkts < RTE_IXGBE_DESCS_PER_LOOP, just return no packet
> + * - nb_pkts > RTE_IXGBE_MAX_RX_BURST, only scan RTE_IXGBE_MAX_RX_BURST
>   *   numbers of DD bit
> + * - floor align nb_pkts to a RTE_IXGBE_DESC_PER_LOOP power-of-two
>   * - don't support ol_flags for rss and csum err
>   */
>  static inline uint16_t
> @@ -286,8 +286,11 @@ _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
>  	__m128i dd_check, eop_check;
>  #endif /* RTE_NEXT_ABI */
>
> -	if (unlikely(nb_pkts < RTE_IXGBE_VPMD_RX_BURST))
> -		return 0;
> +	/* nb_pkts shall be less equal than RTE_IXGBE_MAX_RX_BURST */
> +	nb_pkts = RTE_MIN(nb_pkts, RTE_IXGBE_MAX_RX_BURST);
> +
> +	/* nb_pkts has to be floor-aligned to RTE_IXGBE_DESCS_PER_LOOP */
> +	nb_pkts = RTE_ALIGN_FLOOR(nb_pkts, RTE_IXGBE_DESCS_PER_LOOP);
>
>  	/* Just the act of getting into the function from the application is
>  	 * going to cost about 7 cycles */
> @@ -356,7 +359,7 @@ _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
>  	 * D. fill info. from desc to mbuf
>  	 */
>  #endif /* RTE_NEXT_ABI */
> -	for (pos = 0, nb_pkts_recd = 0; pos < RTE_IXGBE_VPMD_RX_BURST;
> +	for (pos = 0, nb_pkts_recd = 0; pos < nb_pkts;
>  			pos += RTE_IXGBE_DESCS_PER_LOOP,
>  			rxdp += RTE_IXGBE_DESCS_PER_LOOP) {
>  #ifdef RTE_NEXT_ABI
> @@ -518,13 +521,13 @@ _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
>  }
>
>  /*
> - * vPMD receive routine, now only accept (nb_pkts == RTE_IXGBE_VPMD_RX_BURST)
> - * in one loop
> + * vPMD receive routine, only accept(nb_pkts >= RTE_IXGBE_DESCS_PER_LOOP)
>   *
>   * Notice:
> - * - nb_pkts < RTE_IXGBE_VPMD_RX_BURST, just return no packet
> - * - nb_pkts > RTE_IXGBE_VPMD_RX_BURST, only scan RTE_IXGBE_VPMD_RX_BURST
> + * - nb_pkts < RTE_IXGBE_DESCS_PER_LOOP, just return no packet
> + * - nb_pkts > RTE_IXGBE_MAX_RX_BURST, only scan RTE_IXGBE_MAX_RX_BURST
>   *   numbers of DD bit
> + * - floor align nb_pkts to a RTE_IXGBE_DESC_PER_LOOP power-of-two
>   * - don't support ol_flags for rss and csum err
>   */
>  uint16_t
> @@ -538,12 +541,11 @@ static inline uint16_t
>  reassemble_packets(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_bufs,
>  		uint16_t nb_bufs, uint8_t *split_flags)
>  {
> -	struct rte_mbuf *pkts[RTE_IXGBE_VPMD_RX_BURST]; /*finished pkts*/
> +	struct rte_mbuf *pkts[nb_bufs]; /*finished pkts*/
>  	struct rte_mbuf *start = rxq->pkt_first_seg;
>  	struct rte_mbuf *end = rxq->pkt_last_seg;
>  	unsigned pkt_idx, buf_idx;
>
> -
>  	for (buf_idx = 0, pkt_idx = 0; buf_idx < nb_bufs; buf_idx++) {
>  		if (end != NULL) {
>  			/* processing a split packet */
> @@ -603,14 +605,17 @@ reassemble_packets(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_bufs,
>   *
>   * Notice:
>   * - don't support ol_flags for rss and csum err
> - * - now only accept (nb_pkts == RTE_IXGBE_VPMD_RX_BURST)
> + * - nb_pkts < RTE_IXGBE_DESCS_PER_LOOP, just return no packet
> + * - nb_pkts > RTE_IXGBE_MAX_RX_BURST, only scan RTE_IXGBE_MAX_RX_BURST
> + *   numbers of DD bit
> + * - floor align nb_pkts to a RTE_IXGBE_DESC_PER_LOOP power-of-two
>   */
>  uint16_t
>  ixgbe_recv_scattered_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
>  		uint16_t nb_pkts)
>  {
>  	struct ixgbe_rx_queue *rxq = rx_queue;
> -	uint8_t split_flags[RTE_IXGBE_VPMD_RX_BURST] = {0};
> +	uint8_t split_flags[RTE_IXGBE_MAX_RX_BURST] = {0};
>
>  	/* get some new buffers */
>  	uint16_t nb_bufs = _recv_raw_pkts_vec(rxq, rx_pkts, nb_pkts,

I don't know if it actually matters from a performance point of view, but you check the whole split_flags array even if you received only 4 packets. Then again, the overhead of an extra for loop might be bigger.

> @@ -735,8 +740,8 @@ ixgbe_xmit_pkts_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
>  	uint64_t rs = IXGBE_ADVTXD_DCMD_RS|DCMD_DTYP_FLAGS;
>  	int i;
>
> -	if (unlikely(nb_pkts > RTE_IXGBE_VPMD_TX_BURST))
> -		nb_pkts = RTE_IXGBE_VPMD_TX_BURST;
> +	/* cross rx_thresh boundary is not allowed */
> +	nb_pkts = RTE_MIN(nb_pkts, txq->tx_rs_thresh);
>
>  	if (txq->nb_tx_free < txq->tx_free_thresh)
>  		ixgbe_tx_free_bufs(txq);
>