From: "Zhang, Qi Z"
To: Jianbo Liu
CC: "Zhang, Helin", "Wu, Jingjing", jerin.jacob@caviumnetworks.com, dev@dpdk.org
Subject: Re: [dpdk-dev] [PATCH 1/5] i40e: extract non-x86 specific code from vector driver
Date: Wed, 12 Oct 2016 02:55:32 +0000
Message-ID: <039ED4275CED7440929022BC67E706115065A1BD@SHSMSX103.ccr.corp.intel.com>
References: <1472032425-16136-1-git-send-email-jianbo.liu@linaro.org> <1472032425-16136-2-git-send-email-jianbo.liu@linaro.org>
In-Reply-To: <1472032425-16136-2-git-send-email-jianbo.liu@linaro.org>

Hi Jianbo,

> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Jianbo Liu
> Sent: Wednesday, August 24, 2016 5:54 PM
> To: Zhang, Helin; Wu, Jingjing; jerin.jacob@caviumnetworks.com; dev@dpdk.org
> Cc: Jianbo Liu
> Subject: [dpdk-dev] [PATCH 1/5] i40e: extract non-x86 specific code from vector driver
>
> Move the scalar code which does not use x86 intrinsic functions to a new file,
> "i40e_rxtx_vec_common.h", while keeping the x86 code in i40e_rxtx_vec.c.
> This allows the scalar code to be shared among vector drivers for different
> platforms.
>
> Signed-off-by: Jianbo Liu
> ---
>  drivers/net/i40e/i40e_rxtx_vec.c        | 184 +-----------------------
>  drivers/net/i40e/i40e_rxtx_vec_common.h | 239 ++++++++++++++++++++++++++++++++
>  2 files changed, 243 insertions(+), 180 deletions(-)
>  create mode 100644 drivers/net/i40e/i40e_rxtx_vec_common.h
>
> diff --git a/drivers/net/i40e/i40e_rxtx_vec.c b/drivers/net/i40e/i40e_rxtx_vec.c
> index 51fb282..f847469 100644
> --- a/drivers/net/i40e/i40e_rxtx_vec.c
> +++ b/drivers/net/i40e/i40e_rxtx_vec.c
> @@ -39,6 +39,7 @@
>  #include "base/i40e_type.h"
>  #include "i40e_ethdev.h"
>  #include "i40e_rxtx.h"
> +#include "i40e_rxtx_vec_common.h"
>
>  #include <tmmintrin.h>
>
> @@ -421,68 +422,6 @@ i40e_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
>  	return _recv_raw_pkts_vec(rx_queue, rx_pkts, nb_pkts, NULL);
>  }
>
> -static inline uint16_t
> -reassemble_packets(struct i40e_rx_queue *rxq, struct rte_mbuf **rx_bufs,
> -		   uint16_t nb_bufs, uint8_t *split_flags)
> -{
> -	struct rte_mbuf *pkts[RTE_I40E_VPMD_RX_BURST]; /*finished pkts*/
> -	struct rte_mbuf *start = rxq->pkt_first_seg;
> -	struct rte_mbuf *end = rxq->pkt_last_seg;
> -	unsigned pkt_idx, buf_idx;
> -
> -	for (buf_idx = 0, pkt_idx = 0; buf_idx < nb_bufs; buf_idx++) {
> -		if (end != NULL) {
> -			/* processing a split packet */
> -			end->next = rx_bufs[buf_idx];
> -			rx_bufs[buf_idx]->data_len += rxq->crc_len;
> -
> -			start->nb_segs++;
> -			start->pkt_len += rx_bufs[buf_idx]->data_len;
> -			end = end->next;
> -
> -			if (!split_flags[buf_idx]) {
> -				/* it's the last packet of the set */
> -				start->hash = end->hash;
> -				start->ol_flags = end->ol_flags;
> -				/* we need to strip crc for the whole packet */
> -				start->pkt_len -= rxq->crc_len;
> -				if (end->data_len > rxq->crc_len) {
> -					end->data_len -= rxq->crc_len;
> -				} else {
> -					/* free up last mbuf */
> -					struct rte_mbuf *secondlast = start;
> -
> -					while (secondlast->next != end)
> -						secondlast = secondlast->next;
> -					secondlast->data_len -= (rxq->crc_len -
> -							end->data_len);
> -					secondlast->next = NULL;
> -					rte_pktmbuf_free_seg(end);
> -					end = secondlast;
> -				}
> -				pkts[pkt_idx++] = start;
> -				start = end = NULL;
> -			}
> -		} else {
> -			/* not processing a split packet */
> -			if (!split_flags[buf_idx]) {
> -				/* not a split packet, save and skip */
> -				pkts[pkt_idx++] = rx_bufs[buf_idx];
> -				continue;
> -			}
> -			end = start = rx_bufs[buf_idx];
> -			rx_bufs[buf_idx]->data_len += rxq->crc_len;
> -			rx_bufs[buf_idx]->pkt_len += rxq->crc_len;
> -		}
> -	}
> -
> -	/* save the partial packet for next time */
> -	rxq->pkt_first_seg = start;
> -	rxq->pkt_last_seg = end;
> -	memcpy(rx_bufs, pkts, pkt_idx * (sizeof(*pkts)));
> -	return pkt_idx;
> -}
> -
>  /* vPMD receive routine that reassembles scattered packets
>   * Notice:
>   * - nb_pkts < RTE_I40E_DESCS_PER_LOOP, just return no packet
> @@ -548,73 +487,6 @@ vtx(volatile struct i40e_tx_desc *txdp,
>  		vtx1(txdp, *pkt, flags);
>  }
>
> -static inline int __attribute__((always_inline))
> -i40e_tx_free_bufs(struct i40e_tx_queue *txq)
> -{
> -	struct i40e_tx_entry *txep;
> -	uint32_t n;
> -	uint32_t i;
> -	int nb_free = 0;
> -	struct rte_mbuf *m, *free[RTE_I40E_TX_MAX_FREE_BUF_SZ];
> -
> -	/* check DD bits on threshold descriptor */
> -	if ((txq->tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
> -			rte_cpu_to_le_64(I40E_TXD_QW1_DTYPE_MASK)) !=
> -			rte_cpu_to_le_64(I40E_TX_DESC_DTYPE_DESC_DONE))
> -		return 0;
> -
> -	n = txq->tx_rs_thresh;
> -
> -	/* first buffer to free from S/W ring is at index
> -	 * tx_next_dd - (tx_rs_thresh-1)
> -	 */
> -	txep = &txq->sw_ring[txq->tx_next_dd - (n - 1)];
> -	m = __rte_pktmbuf_prefree_seg(txep[0].mbuf);
> -	if (likely(m != NULL)) {
> -		free[0] = m;
> -		nb_free = 1;
> -		for (i = 1; i < n; i++) {
> -			m = __rte_pktmbuf_prefree_seg(txep[i].mbuf);
> -			if (likely(m != NULL)) {
> -				if (likely(m->pool == free[0]->pool)) {
> -					free[nb_free++] = m;
> -				} else {
> -					rte_mempool_put_bulk(free[0]->pool,
> -							     (void *)free,
> -							     nb_free);
> -					free[0] = m;
> -					nb_free = 1;
> -				}
> -			}
> -		}
> -		rte_mempool_put_bulk(free[0]->pool, (void **)free, nb_free);
> -	} else {
> -		for (i = 1; i < n; i++) {
> -			m = __rte_pktmbuf_prefree_seg(txep[i].mbuf);
> -			if (m != NULL)
> -				rte_mempool_put(m->pool, m);
> -		}
> -	}
> -
> -	/* buffers were freed, update counters */
> -	txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + txq->tx_rs_thresh);
> -	txq->tx_next_dd = (uint16_t)(txq->tx_next_dd + txq->tx_rs_thresh);
> -	if (txq->tx_next_dd >= txq->nb_tx_desc)
> -		txq->tx_next_dd = (uint16_t)(txq->tx_rs_thresh - 1);
> -
> -	return txq->tx_rs_thresh;
> -}
> -
> -static inline void __attribute__((always_inline))
> -tx_backlog_entry(struct i40e_tx_entry *txep,
> -		 struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
> -{
> -	int i;
> -
> -	for (i = 0; i < (int)nb_pkts; ++i)
> -		txep[i].mbuf = tx_pkts[i];
> -}
> -
>  uint16_t
>  i40e_xmit_pkts_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
>  		   uint16_t nb_pkts)
> @@ -685,37 +557,13 @@ i40e_xmit_pkts_vec(void *tx_queue, struct rte_mbuf **tx_pkts,
>  void __attribute__((cold))
>  i40e_rx_queue_release_mbufs_vec(struct i40e_rx_queue *rxq)
>  {
> -	const unsigned mask = rxq->nb_rx_desc - 1;
> -	unsigned i;
> -
> -	if (rxq->sw_ring == NULL || rxq->rxrearm_nb >= rxq->nb_rx_desc)
> -		return;
> -
> -	/* free all mbufs that are valid in the ring */
> -	for (i = rxq->rx_tail; i != rxq->rxrearm_start; i = (i + 1) & mask)
> -		rte_pktmbuf_free_seg(rxq->sw_ring[i].mbuf);
> -	rxq->rxrearm_nb = rxq->nb_rx_desc;
> -
> -	/* set all entries to NULL */
> -	memset(rxq->sw_ring, 0, sizeof(rxq->sw_ring[0]) * rxq->nb_rx_desc);
> +	_i40e_rx_queue_release_mbufs_vec(rxq);
>  }
>
>  int __attribute__((cold))
>  i40e_rxq_vec_setup(struct i40e_rx_queue *rxq)
>  {
> -	uintptr_t p;
> -	struct rte_mbuf mb_def = { .buf_addr = 0 }; /* zeroed mbuf */
> -
> -	mb_def.nb_segs = 1;
> -	mb_def.data_off = RTE_PKTMBUF_HEADROOM;
> -	mb_def.port = rxq->port_id;
> -	rte_mbuf_refcnt_set(&mb_def, 1);
> -
> -	/* prevent compiler reordering: rearm_data covers previous fields */
> -	rte_compiler_barrier();
> -	p = (uintptr_t)&mb_def.rearm_data;
> -	rxq->mbuf_initializer = *(uint64_t *)p;
> -	return 0;
> +	return i40e_rxq_vec_setup_default(rxq);
>  }
>
>  int __attribute__((cold))
> @@ -728,34 +576,10 @@ int __attribute__((cold))
>  i40e_rx_vec_dev_conf_condition_check(struct rte_eth_dev *dev)
>  {
>  #ifndef RTE_LIBRTE_IEEE1588
> -	struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
> -	struct rte_fdir_conf *fconf = &dev->data->dev_conf.fdir_conf;
> -
>  	/* need SSE4.1 support */
>  	if (!rte_cpu_get_flag_enabled(RTE_CPUFLAG_SSE4_1))
>  		return -1;
> -
> -#ifndef RTE_LIBRTE_I40E_RX_OLFLAGS_ENABLE
> -	/* whithout rx ol_flags, no VP flag report */
> -	if (rxmode->hw_vlan_strip != 0 ||
> -	    rxmode->hw_vlan_extend != 0)
> -		return -1;
>  #endif
>
> -	/* no fdir support */
> -	if (fconf->mode != RTE_FDIR_MODE_NONE)
> -		return -1;
> -
> -	/* - no csum error report support
> -	 * - no header split support
> -	 */
> -	if (rxmode->hw_ip_checksum == 1 ||
> -	    rxmode->header_split == 1)
> -		return -1;
> -
> -	return 0;
> -#else
> -	RTE_SET_USED(dev);
> -	return -1;
> -#endif
> +	return i40e_rx_vec_dev_conf_condition_check_default(dev);
>  }
> diff --git a/drivers/net/i40e/i40e_rxtx_vec_common.h b/drivers/net/i40e/i40e_rxtx_vec_common.h
> new file mode 100644
> index 0000000..b31b39e
> --- /dev/null
> +++ b/drivers/net/i40e/i40e_rxtx_vec_common.h
> @@ -0,0 +1,239 @@
> +/*-
> + *   BSD LICENSE
> + *
> + *   Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
> + *   All rights reserved.
> + *
> + *   Redistribution and use in source and binary forms, with or without
> + *   modification, are permitted provided that the following conditions
> + *   are met:
> + *
> + *     * Redistributions of source code must retain the above copyright
> + *       notice, this list of conditions and the following disclaimer.
> + *     * Redistributions in binary form must reproduce the above copyright
> + *       notice, this list of conditions and the following disclaimer in
> + *       the documentation and/or other materials provided with the
> + *       distribution.
> + *     * Neither the name of Intel Corporation nor the names of its
> + *       contributors may be used to endorse or promote products derived
> + *       from this software without specific prior written permission.
> + *
> + *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> + *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> + *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
> + *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
> + *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
> + *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
> + *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
> + *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
> + *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> + *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
> + *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> + */
> +
> +#ifndef _I40E_RXTX_VEC_COMMON_H_
> +#define _I40E_RXTX_VEC_COMMON_H_
> +#include <stdint.h>
> +#include <rte_ethdev.h>
> +#include <rte_malloc.h>
> +
> +#include "i40e_ethdev.h"
> +#include "i40e_rxtx.h"
> +
> +static inline uint16_t
> +reassemble_packets(struct i40e_rx_queue *rxq, struct rte_mbuf **rx_bufs,
> +		   uint16_t nb_bufs, uint8_t *split_flags)
> +{
> +	struct rte_mbuf *pkts[RTE_I40E_VPMD_RX_BURST]; /*finished pkts*/
> +	struct rte_mbuf *start = rxq->pkt_first_seg;
> +	struct rte_mbuf *end = rxq->pkt_last_seg;
> +	unsigned pkt_idx, buf_idx;
> +
> +	for (buf_idx = 0, pkt_idx = 0; buf_idx < nb_bufs; buf_idx++) {
> +		if (end != NULL) {
> +			/* processing a split packet */
> +			end->next = rx_bufs[buf_idx];
> +			rx_bufs[buf_idx]->data_len += rxq->crc_len;
> +
> +			start->nb_segs++;
> +			start->pkt_len += rx_bufs[buf_idx]->data_len;
> +			end = end->next;
> +
> +			if (!split_flags[buf_idx]) {
> +				/* it's the last packet of the set */
> +				start->hash = end->hash;
> +				start->ol_flags = end->ol_flags;
> +				/* we need to strip crc for the whole packet */
> +				start->pkt_len -= rxq->crc_len;
> +				if (end->data_len > rxq->crc_len) {
> +					end->data_len -= rxq->crc_len;
> +				} else {
> +					/* free up last mbuf */
> +					struct rte_mbuf *secondlast = start;
> +
> +					while (secondlast->next != end)
> +						secondlast = secondlast->next;
> +					secondlast->data_len -= (rxq->crc_len -
> +							end->data_len);
> +					secondlast->next = NULL;
> +					rte_pktmbuf_free_seg(end);
> +					end = secondlast;
> +				}
> +				pkts[pkt_idx++] = start;
> +				start = end = NULL;
> +			}
> +		} else {
> +			/* not processing a split packet */
> +			if (!split_flags[buf_idx]) {
> +				/* not a split packet, save and skip */
> +				pkts[pkt_idx++] = rx_bufs[buf_idx];
> +				continue;
> +			}
> +			end = start = rx_bufs[buf_idx];
> +			rx_bufs[buf_idx]->data_len += rxq->crc_len;
> +			rx_bufs[buf_idx]->pkt_len += rxq->crc_len;
> +		}
> +	}
> +
> +	/* save the partial packet for next time */
> +	rxq->pkt_first_seg = start;
> +	rxq->pkt_last_seg = end;
> +	memcpy(rx_bufs, pkts, pkt_idx * (sizeof(*pkts)));
> +	return pkt_idx;
> +}
> +
> +static inline int __attribute__((always_inline))
> +i40e_tx_free_bufs(struct i40e_tx_queue *txq)
> +{
> +	struct i40e_tx_entry *txep;
> +	uint32_t n;
> +	uint32_t i;
> +	int nb_free = 0;
> +	struct rte_mbuf *m, *free[RTE_I40E_TX_MAX_FREE_BUF_SZ];
> +
> +	/* check DD bits on threshold descriptor */
> +	if ((txq->tx_ring[txq->tx_next_dd].cmd_type_offset_bsz &
> +			rte_cpu_to_le_64(I40E_TXD_QW1_DTYPE_MASK)) !=
> +			rte_cpu_to_le_64(I40E_TX_DESC_DTYPE_DESC_DONE))
> +		return 0;
> +
> +	n = txq->tx_rs_thresh;
> +
> +	/* first buffer to free from S/W ring is at index
> +	 * tx_next_dd - (tx_rs_thresh-1)
> +	 */
> +	txep = &txq->sw_ring[txq->tx_next_dd - (n - 1)];
> +	m = __rte_pktmbuf_prefree_seg(txep[0].mbuf);
> +	if (likely(m != NULL)) {
> +		free[0] = m;
> +		nb_free = 1;
> +		for (i = 1; i < n; i++) {
> +			m = __rte_pktmbuf_prefree_seg(txep[i].mbuf);
> +			if (likely(m != NULL)) {
> +				if (likely(m->pool == free[0]->pool)) {
> +					free[nb_free++] = m;
> +				} else {
> +					rte_mempool_put_bulk(free[0]->pool,
> +							     (void *)free,
> +							     nb_free);
> +					free[0] = m;
> +					nb_free = 1;
> +				}
> +			}
> +		}
> +		rte_mempool_put_bulk(free[0]->pool, (void **)free, nb_free);
> +	} else {
> +		for (i = 1; i < n; i++) {
> +			m = __rte_pktmbuf_prefree_seg(txep[i].mbuf);
> +			if (m != NULL)
> +				rte_mempool_put(m->pool, m);
> +		}
> +	}
> +
> +	/* buffers were freed, update counters */
> +	txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + txq->tx_rs_thresh);
> +	txq->tx_next_dd = (uint16_t)(txq->tx_next_dd + txq->tx_rs_thresh);
> +	if (txq->tx_next_dd >= txq->nb_tx_desc)
> +		txq->tx_next_dd = (uint16_t)(txq->tx_rs_thresh - 1);
> +
> +	return txq->tx_rs_thresh;
> +}
> +
> +static inline void __attribute__((always_inline))
> +tx_backlog_entry(struct i40e_tx_entry *txep,
> +		 struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
> +{
> +	int i;
> +
> +	for (i = 0; i < (int)nb_pkts; ++i)
> +		txep[i].mbuf = tx_pkts[i];
> +}
> +
> +static inline void
> +_i40e_rx_queue_release_mbufs_vec(struct i40e_rx_queue *rxq)
> +{
> +	const unsigned mask = rxq->nb_rx_desc - 1;
> +	unsigned i;
> +
> +	if (rxq->sw_ring == NULL || rxq->rxrearm_nb >= rxq->nb_rx_desc)
> +		return;
> +
> +	/* free all mbufs that are valid in the ring */
> +	for (i = rxq->rx_tail; i != rxq->rxrearm_start; i = (i + 1) & mask)
> +		rte_pktmbuf_free_seg(rxq->sw_ring[i].mbuf);
> +	rxq->rxrearm_nb = rxq->nb_rx_desc;
> +
> +	/* set all entries to NULL */
> +	memset(rxq->sw_ring, 0, sizeof(rxq->sw_ring[0]) * rxq->nb_rx_desc);
> +}
> +
> +static inline int
> +i40e_rxq_vec_setup_default(struct i40e_rx_queue *rxq)
> +{
> +	uintptr_t p;
> +	struct rte_mbuf mb_def = { .buf_addr = 0 }; /* zeroed mbuf */
> +
> +	mb_def.nb_segs = 1;
> +	mb_def.data_off = RTE_PKTMBUF_HEADROOM;
> +	mb_def.port = rxq->port_id;
> +	rte_mbuf_refcnt_set(&mb_def, 1);
> +
> +	/* prevent compiler reordering: rearm_data covers previous fields */
> +	rte_compiler_barrier();
> +	p = (uintptr_t)&mb_def.rearm_data;
> +	rxq->mbuf_initializer = *(uint64_t *)p;
> +	return 0;
> +}
> +
> +static inline int
> +i40e_rx_vec_dev_conf_condition_check_default(struct rte_eth_dev *dev)
> +{
> +#ifndef RTE_LIBRTE_IEEE1588
> +	struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
> +	struct rte_fdir_conf *fconf = &dev->data->dev_conf.fdir_conf;
> +
> +#ifndef RTE_LIBRTE_I40E_RX_OLFLAGS_ENABLE
> +	/* whithout rx ol_flags, no VP flag report */
> +	if (rxmode->hw_vlan_strip != 0 ||
> +	    rxmode->hw_vlan_extend != 0)
> +		return -1;
> +#endif
> +
> +	/* no fdir support */
> +	if (fconf->mode != RTE_FDIR_MODE_NONE)
> +		return -1;
> +
> +	/* - no csum error report support
> +	 * - no header split support
> +	 */
> +	if (rxmode->hw_ip_checksum == 1 ||
> +	    rxmode->header_split == 1)
> +		return -1;
> +
> +	return 0;
> +#else
> +	RTE_SET_USED(dev);
> +	return -1;
> +#endif
> +}
> +#endif
> --
> 2.4.11

Should we rename the function "_i40e_rx_queue_release_mbufs_vec" to
"i40e_rx_queue_release_mbufs_vec_default", so that the wrapped functions
follow a consistent naming rule?

Thanks!
Qi
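P.S. To illustrate, a rough sketch of the naming I have in mind, assuming only
the rename (the "_default" name below is just my suggestion, not what the patch
currently uses; the bodies are unchanged from your patch):

/* i40e_rxtx_vec_common.h: shared helper, with the suggested "_default" name */
static inline void
i40e_rx_queue_release_mbufs_vec_default(struct i40e_rx_queue *rxq)
{
	const unsigned mask = rxq->nb_rx_desc - 1;
	unsigned i;

	if (rxq->sw_ring == NULL || rxq->rxrearm_nb >= rxq->nb_rx_desc)
		return;

	/* free all mbufs that are valid in the ring */
	for (i = rxq->rx_tail; i != rxq->rxrearm_start; i = (i + 1) & mask)
		rte_pktmbuf_free_seg(rxq->sw_ring[i].mbuf);
	rxq->rxrearm_nb = rxq->nb_rx_desc;

	/* set all entries to NULL */
	memset(rxq->sw_ring, 0, sizeof(rxq->sw_ring[0]) * rxq->nb_rx_desc);
}

/* i40e_rxtx_vec.c: the per-arch wrapper keeps its public name and only
 * forwards to the common helper, the same way i40e_rxq_vec_setup_default()
 * and i40e_rx_vec_dev_conf_condition_check_default() are wrapped.
 */
void __attribute__((cold))
i40e_rx_queue_release_mbufs_vec(struct i40e_rx_queue *rxq)
{
	i40e_rx_queue_release_mbufs_vec_default(rxq);
}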