From mboxrd@z Thu Jan  1 00:00:00 1970
From: "Ananyev, Konstantin"
To: "robertshearman@gmail.com", "dev@dpdk.org"
Cc: "Lu, Wenzhuo", Robert Shearman
Date: Wed, 12 Sep 2018 14:59:39 +0000
Message-ID: <2601191342CEEE43887BDE71AB977258EA954B73@irsmsx105.ger.corp.intel.com>
In-Reply-To: <1535128501-31597-1-git-send-email-robertshearman@gmail.com>
Subject: Re: [dpdk-dev] [PATCH] net/ixgbe: Strip SR-IOV transparent VLANs in VF
List-Id: DPDK patches and discussions

Hi Robert,

> -----Original Message-----
> From: robertshearman@gmail.com [mailto:robertshearman@gmail.com]
> Sent: Friday, August 24, 2018 5:35 PM
> To: dev@dpdk.org
> Cc: Lu, Wenzhuo; Ananyev, Konstantin; Robert Shearman
> Subject: [PATCH] net/ixgbe: Strip SR-IOV transparent VLANs in VF
>
> From: Robert Shearman
>
> SR-IOV VFs support "transparent" VLANs. Traffic from/to a VM
> associated with a VF has a VLAN tag inserted/stripped in a manner
> intended to be totally transparent to the VM. On a Linux hypervisor
> the VLAN can be specified by "ip link set <dev> vf <num> vlan <vlan-id>".
> The VM VF driver is not configured to use any VLAN and the VM should
> never see the transparent VLAN for that reason. However, in practice
> these VLAN headers are being received by the VM, which discards the
> packets as that VLAN is unknown to it. The Linux kernel ixgbe driver
> explicitly removes the VLAN in this case (presumably because the
> hardware is not able to do this), but the DPDK driver does not.
>
> This patch mirrors the kernel driver behaviour by removing the VLAN on
> the VF side. This is done by checking the VLAN in the VFTA, where the
> hypervisor will have set the bit in the VFTA corresponding to the VLAN
> if transparent VLANs were being used for the VF. If the VLAN is set in
> the VFTA then it is known that it's a transparent VLAN case and so the
> VLAN is stripped from the mbuf. To limit any potential performance
> impact on the PF data path, the RX path is split into PF and VF
> versions, with the transparent VLAN stripping only done in the VF
> path. Measurements with our application show a ~2% performance hit for
> the VF case and none for the PF case.

I did some perf measurements too, and unfortunately I am seeing a ~4% drop
(testpmd iofwd on one core over 4x10Gb: from ~44.7 Mpps to ~43 Mpps; that's
on BDX 2.2GHz).

As you mentioned above:
"The VM VF driver is not configured to use any VLAN and the VM should
never see the transparent VLAN for that reason."

I wonder, would it be sufficient for your purposes if the VF RX function
just ignored the HW descriptor values and never set
PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED?
I think that could be done pretty easily (by setting rxq->vlan_flags).
In that case no changes in the RX code would be required (and no perf
changes). It could be controlled by DEV_RX_OFFLOAD_VLAN_STRIP; not sure
whether that would be sufficient for you.

BTW, in your case how will the hypervisor propagate the new VFTA table to
the VF? Presumably the same way could be used to propagate RX offload
flags?
Konstantin

> Signed-off-by: Robert Shearman
> ---
>  drivers/net/ixgbe/ixgbe_ethdev.c        | 18 +++----
>  drivers/net/ixgbe/ixgbe_ethdev.h        | 38 +++++++++++++++
>  drivers/net/ixgbe/ixgbe_rxtx.c          | 83 +++++++++++++++++++++++++++++---
>  drivers/net/ixgbe/ixgbe_rxtx.h          | 31 +++++++++++-
>  drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c |  7 +++
>  drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c  | 84 ++++++++++++++++++++++++++++++---
>  6 files changed, 238 insertions(+), 23 deletions(-)
>
> diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
> index 26b1927..3f88a02 100644
> --- a/drivers/net/ixgbe/ixgbe_ethdev.c
> +++ b/drivers/net/ixgbe/ixgbe_ethdev.c
> @@ -604,7 +604,7 @@ static const struct eth_dev_ops ixgbevf_eth_dev_ops = {
>  	.vlan_filter_set      = ixgbevf_vlan_filter_set,
>  	.vlan_strip_queue_set = ixgbevf_vlan_strip_queue_set,
>  	.vlan_offload_set     = ixgbevf_vlan_offload_set,
> -	.rx_queue_setup       = ixgbe_dev_rx_queue_setup,
> +	.rx_queue_setup       = ixgbevf_dev_rx_queue_setup,
>  	.rx_queue_release     = ixgbe_dev_rx_queue_release,
>  	.rx_descriptor_done   = ixgbe_dev_rx_descriptor_done,
>  	.rx_descriptor_status = ixgbe_dev_rx_descriptor_status,
> @@ -1094,7 +1094,7 @@ eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
>  			     "Using default TX function.");
>  	}
>
> -	ixgbe_set_rx_function(eth_dev);
> +	ixgbe_set_rx_function(eth_dev, true);
>
>  	return 0;
>  }
> @@ -1576,7 +1576,7 @@ eth_ixgbevf_dev_init(struct rte_eth_dev *eth_dev)
>  			     "No TX queues configured yet. Using default TX function.");
>  	}
>
> -	ixgbe_set_rx_function(eth_dev);
> +	ixgbe_set_rx_function(eth_dev, true);
>
>  	return 0;
>  }
> @@ -1839,8 +1839,8 @@ ixgbe_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
>  	uint32_t vid_idx;
>  	uint32_t vid_bit;
>
> -	vid_idx = (uint32_t) ((vlan_id >> 5) & 0x7F);
> -	vid_bit = (uint32_t) (1 << (vlan_id & 0x1F));
> +	vid_idx = ixgbe_vfta_index(vlan_id);
> +	vid_bit = ixgbe_vfta_bit(vlan_id);
>  	vfta = IXGBE_READ_REG(hw, IXGBE_VFTA(vid_idx));
>  	if (on)
>  		vfta |= vid_bit;
> @@ -3807,7 +3807,9 @@ ixgbe_dev_supported_ptypes_get(struct rte_eth_dev *dev)
>
>  #if defined(RTE_ARCH_X86)
>  	if (dev->rx_pkt_burst == ixgbe_recv_pkts_vec ||
> -	    dev->rx_pkt_burst == ixgbe_recv_scattered_pkts_vec)
> +	    dev->rx_pkt_burst == ixgbe_recv_scattered_pkts_vec ||
> +	    dev->rx_pkt_burst == ixgbevf_recv_pkts_vec ||
> +	    dev->rx_pkt_burst == ixgbevf_recv_scattered_pkts_vec)
>  		return ptypes;
>  #endif
>  	return NULL;
> @@ -5231,8 +5233,8 @@ ixgbevf_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
>  		PMD_INIT_LOG(ERR, "Unable to set VF vlan");
>  		return ret;
>  	}
> -	vid_idx = (uint32_t) ((vlan_id >> 5) & 0x7F);
> -	vid_bit = (uint32_t) (1 << (vlan_id & 0x1F));
> +	vid_idx = ixgbe_vfta_index(vlan_id);
> +	vid_bit = ixgbe_vfta_bit(vlan_id);
>
>  	/* Save what we set and retore it after device reset */
>  	if (on)
> diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
> index d0b9396..483d2cd 100644
> --- a/drivers/net/ixgbe/ixgbe_ethdev.h
> +++ b/drivers/net/ixgbe/ixgbe_ethdev.h
> @@ -568,6 +568,11 @@ int ixgbe_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
>  		const struct rte_eth_rxconf *rx_conf,
>  		struct rte_mempool *mb_pool);
>
> +int ixgbevf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
> +		uint16_t nb_rx_desc, unsigned int socket_id,
> +		const struct rte_eth_rxconf *rx_conf,
> +		struct rte_mempool *mb_pool);
> +
>  int ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
>  		uint16_t nb_tx_desc, unsigned int socket_id,
>  		const struct rte_eth_txconf *tx_conf);
> @@ -779,4 +784,37 @@ ixgbe_ethertype_filter_remove(struct ixgbe_filter_info *filter_info,
>  	return idx;
>  }
>
> +int ixgbe_fdir_ctrl_func(struct rte_eth_dev *dev,
> +			enum rte_filter_op filter_op, void *arg);
> +
> +/*
> + * Calculate index in vfta array of the 32 bit value enclosing
> + * a given vlan id
> + */
> +static inline uint32_t
> +ixgbe_vfta_index(uint16_t vlan)
> +{
> +	return (vlan >> 5) & 0x7f;
> +}
> +
> +/*
> + * Calculate vfta array entry bitmask for vlan id within the
> + * enclosing 32 bit entry.
> + */
> +static inline uint32_t
> +ixgbe_vfta_bit(uint16_t vlan)
> +{
> +	return 1 << (vlan & 0x1f);
> +}
> +
> +/*
> + * Check in the vfta bit array if the bit corresponding to
> + * the given vlan is set.
> + */
> +static inline bool
> +ixgbe_vfta_is_vlan_set(const struct ixgbe_vfta *vfta, uint16_t vlan)
> +{
> +	return (vfta->vfta[ixgbe_vfta_index(vlan)] & ixgbe_vfta_bit(vlan)) != 0;
> +}
> +
>  #endif /* _IXGBE_ETHDEV_H_ */
> diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
> index f82b74a..26a99cb 100644
> --- a/drivers/net/ixgbe/ixgbe_rxtx.c
> +++ b/drivers/net/ixgbe/ixgbe_rxtx.c
> @@ -1623,14 +1623,23 @@ ixgbe_rx_fill_from_stage(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
>  			 uint16_t nb_pkts)
>  {
>  	struct rte_mbuf **stage = &rxq->rx_stage[rxq->rx_next_avail];
> +	const struct rte_eth_dev *dev;
> +	const struct ixgbe_vfta *vfta;
>  	int i;
>
> +	dev = &rte_eth_devices[rxq->port_id];
> +	vfta = IXGBE_DEV_PRIVATE_TO_VFTA(dev->data->dev_private);
> +
>  	/* how many packets are ready to return? */
>  	nb_pkts = (uint16_t)RTE_MIN(nb_pkts, rxq->rx_nb_avail);
>
>  	/* copy mbuf pointers to the application's packet list */
> -	for (i = 0; i < nb_pkts; ++i)
> +	for (i = 0; i < nb_pkts; ++i) {
>  		rx_pkts[i] = stage[i];
> +		if (rxq->vf)
> +			ixgbevf_trans_vlan_sw_filter_hdr(rx_pkts[i],
> +							 vfta);
> +	}
>
>  	/* update internal queue state */
>  	rxq->rx_nb_avail = (uint16_t)(rxq->rx_nb_avail - nb_pkts);
> @@ -1750,6 +1759,8 @@ ixgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
>  	uint16_t nb_hold;
>  	uint64_t pkt_flags;
>  	uint64_t vlan_flags;
> +	const struct rte_eth_dev *dev;
> +	const struct ixgbe_vfta *vfta;
>
>  	nb_rx = 0;
>  	nb_hold = 0;
> @@ -1758,6 +1769,9 @@ ixgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
>  	rx_ring = rxq->rx_ring;
>  	sw_ring = rxq->sw_ring;
>  	vlan_flags = rxq->vlan_flags;
> +	dev = &rte_eth_devices[rxq->port_id];
> +	vfta = IXGBE_DEV_PRIVATE_TO_VFTA(dev->data->dev_private);
> +
>  	while (nb_rx < nb_pkts) {
>  		/*
>  		 * The order of operations here is important as the DD status
> @@ -1876,6 +1890,10 @@ ixgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
>  			ixgbe_rxd_pkt_info_to_pkt_type(pkt_info,
>  						       rxq->pkt_type_mask);
>
> +		if (rxq->vf)
> +			ixgbevf_trans_vlan_sw_filter_hdr(rxm,
> +							 vfta);
> +
>  		if (likely(pkt_flags & PKT_RX_RSS_HASH))
>  			rxm->hash.rss = rte_le_to_cpu_32(
>  						rxd.wb.lower.hi_dword.rss);
> @@ -2016,6 +2034,11 @@ ixgbe_recv_pkts_lro(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts,
>  	uint16_t nb_rx = 0;
>  	uint16_t nb_hold = rxq->nb_rx_hold;
>  	uint16_t prev_id = rxq->rx_tail;
> +	const struct rte_eth_dev *dev;
> +	const struct ixgbe_vfta *vfta;
> +
> +	dev = &rte_eth_devices[rxq->port_id];
> +	vfta = IXGBE_DEV_PRIVATE_TO_VFTA(dev->data->dev_private);
>
>  	while (nb_rx < nb_pkts) {
>  		bool eop;
> @@ -2230,6 +2253,10 @@ ixgbe_recv_pkts_lro(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts,
>  		rte_packet_prefetch((char *)first_seg->buf_addr +
>  				    first_seg->data_off);
>
> +		if (rxq->vf)
> +			ixgbevf_trans_vlan_sw_filter_hdr(first_seg,
> +							 vfta);
> +
>  		/*
>  		 * Store the mbuf address into the next entry of the array
>  		 * of returned packets.
> @@ -3066,6 +3093,25 @@ ixgbe_dev_rx_queue_setup(struct rte_eth_dev *dev,
>  	return 0;
>  }
>
> +int __attribute__((cold))
> +ixgbevf_dev_rx_queue_setup(struct rte_eth_dev *dev,
> +			   uint16_t queue_idx,
> +			   uint16_t nb_desc,
> +			   unsigned int socket_id,
> +			   const struct rte_eth_rxconf *rx_conf,
> +			   struct rte_mempool *mp)
> +{
> +	struct ixgbe_rx_queue *rxq;
> +
> +	ixgbe_dev_rx_queue_setup(dev, queue_idx, nb_desc, socket_id,
> +				 rx_conf, mp);
> +
> +	rxq = dev->data->rx_queues[queue_idx];
> +	rxq->vf = true;
> +
> +	return 0;
> +}
> +
>  uint32_t
>  ixgbe_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
>  {
> @@ -4561,7 +4607,7 @@ ixgbe_set_ivar(struct rte_eth_dev *dev, u8 entry, u8 vector, s8 type)
>  }
>
>  void __attribute__((cold))
> -ixgbe_set_rx_function(struct rte_eth_dev *dev)
> +ixgbe_set_rx_function(struct rte_eth_dev *dev, bool vf)
>  {
>  	uint16_t i, rx_using_sse;
>  	struct ixgbe_adapter *adapter =
> @@ -4608,7 +4654,9 @@ ixgbe_set_rx_function(struct rte_eth_dev *dev)
>  				    "callback (port=%d).",
>  				    dev->data->port_id);
>
> -			dev->rx_pkt_burst = ixgbe_recv_scattered_pkts_vec;
> +			dev->rx_pkt_burst = vf ?
> +				ixgbevf_recv_scattered_pkts_vec :
> +				ixgbe_recv_scattered_pkts_vec;
>  		} else if (adapter->rx_bulk_alloc_allowed) {
>  			PMD_INIT_LOG(DEBUG, "Using a Scattered with bulk "
>  					    "allocation callback (port=%d).",
> @@ -4637,7 +4685,8 @@ ixgbe_set_rx_function(struct rte_eth_dev *dev)
>  			     RTE_IXGBE_DESCS_PER_LOOP,
>  			     dev->data->port_id);
>
> -		dev->rx_pkt_burst = ixgbe_recv_pkts_vec;
> +		dev->rx_pkt_burst = vf ? ixgbevf_recv_pkts_vec :
> +			ixgbe_recv_pkts_vec;
>  	} else if (adapter->rx_bulk_alloc_allowed) {
>  		PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions are "
>  				    "satisfied. Rx Burst Bulk Alloc function "
> @@ -4658,7 +4707,9 @@ ixgbe_set_rx_function(struct rte_eth_dev *dev)
>
>  	rx_using_sse =
>  		(dev->rx_pkt_burst == ixgbe_recv_scattered_pkts_vec ||
> -		 dev->rx_pkt_burst == ixgbe_recv_pkts_vec);
> +		 dev->rx_pkt_burst == ixgbe_recv_pkts_vec ||
> +		 dev->rx_pkt_burst == ixgbevf_recv_scattered_pkts_vec ||
> +		 dev->rx_pkt_burst == ixgbevf_recv_pkts_vec);
>
>  	for (i = 0; i < dev->data->nb_rx_queues; i++) {
>  		struct ixgbe_rx_queue *rxq = dev->data->rx_queues[i];
> @@ -4977,7 +5028,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
>  		if (rc)
>  			return rc;
>
> -	ixgbe_set_rx_function(dev);
> +	ixgbe_set_rx_function(dev, false);
>
>  	return 0;
>  }
> @@ -5500,7 +5551,7 @@ ixgbevf_dev_rx_init(struct rte_eth_dev *dev)
>  			IXGBE_PSRTYPE_RQPL_SHIFT;
>  	IXGBE_WRITE_REG(hw, IXGBE_VFPSRTYPE, psrtype);
>
> -	ixgbe_set_rx_function(dev);
> +	ixgbe_set_rx_function(dev, true);
>
>  	return 0;
>  }
> @@ -5731,6 +5782,24 @@ ixgbe_recv_pkts_vec(
>  }
>
>  uint16_t __attribute__((weak))
> +ixgbevf_recv_pkts_vec(
> +	void __rte_unused *rx_queue,
> +	struct rte_mbuf __rte_unused **rx_pkts,
> +	uint16_t __rte_unused nb_pkts)
> +{
> +	return 0;
> +}
> +
> +uint16_t __attribute__((weak))
> +ixgbevf_recv_scattered_pkts_vec(
> +	void __rte_unused *rx_queue,
> +	struct rte_mbuf __rte_unused **rx_pkts,
> +	uint16_t __rte_unused nb_pkts)
> +{
> +	return 0;
> +}
> +
> +uint16_t __attribute__((weak))
>  ixgbe_recv_scattered_pkts_vec(
>  	void __rte_unused *rx_queue,
>  	struct rte_mbuf __rte_unused **rx_pkts,
> diff --git a/drivers/net/ixgbe/ixgbe_rxtx.h b/drivers/net/ixgbe/ixgbe_rxtx.h
> index 39378f7..676557b 100644
> --- a/drivers/net/ixgbe/ixgbe_rxtx.h
> +++ b/drivers/net/ixgbe/ixgbe_rxtx.h
> @@ -111,6 +111,7 @@ struct ixgbe_rx_queue {
>  	uint16_t rx_free_trigger; /**< triggers rx buffer allocation */
>  	uint8_t  rx_using_sse;
>  	/**< indicates that vector RX is in use */
> +	uint8_t  vf; /**< indicates that this is for a VF */
>  #ifdef RTE_LIBRTE_SECURITY
>  	uint8_t  using_ipsec;
>  	/**< indicates that IPsec RX feature is in use */
> @@ -254,6 +255,30 @@ struct ixgbe_txq_ops {
>  		IXGBE_ADVTXD_DCMD_EOP)
>
>
> +
> +/*
> + * Filter out unknown vlans resulting from use of transparent vlan.
> + *
> + * When a VF is configured to use transparent vlans then the VF can
> + * see this VLAN being set in the packet, meaning that the transparent
> + * property isn't preserved. Furthermore, when the VF is used in a
> + * guest VM then there's no way of knowing for sure that transparent
> + * VLAN is in use and what tag value has been configured. So work
> + * around this by removing the VLAN flag if the VF isn't interested in
> + * the VLAN tag.
> + */
> +static inline void
> +ixgbevf_trans_vlan_sw_filter_hdr(struct rte_mbuf *m,
> +				 const struct ixgbe_vfta *vfta)
> +{
> +	if (m->ol_flags & PKT_RX_VLAN) {
> +		uint16_t vlan = m->vlan_tci & 0xFFF;
> +
> +		if (!ixgbe_vfta_is_vlan_set(vfta, vlan))
> +			m->ol_flags &= ~PKT_RX_VLAN;
> +	}
> +}
> +
>  /* Takes an ethdev and a queue and sets up the tx function to be used based on
>   * the queue parameters. Used in tx_queue_setup by primary process and then
>   * in dev_init by secondary process when attaching to an existing ethdev.
> @@ -274,12 +299,16 @@ void ixgbe_set_tx_function(struct rte_eth_dev *dev, struct ixgbe_tx_queue *txq);
>   *
>   * @dev rte_eth_dev handle
>   */
> -void ixgbe_set_rx_function(struct rte_eth_dev *dev);
> +void ixgbe_set_rx_function(struct rte_eth_dev *dev, bool vf);
>
>  uint16_t ixgbe_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
>  		uint16_t nb_pkts);
>  uint16_t ixgbe_recv_scattered_pkts_vec(void *rx_queue,
>  		struct rte_mbuf **rx_pkts, uint16_t nb_pkts);
> +uint16_t ixgbevf_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
> +		uint16_t nb_pkts);
> +uint16_t ixgbevf_recv_scattered_pkts_vec(void *rx_queue,
> +		struct rte_mbuf **rx_pkts, uint16_t nb_pkts);
>  int ixgbe_rx_vec_dev_conf_condition_check(struct rte_eth_dev *dev);
>  int ixgbe_rxq_vec_setup(struct ixgbe_rx_queue *rxq);
>  void ixgbe_rx_queue_release_mbufs_vec(struct ixgbe_rx_queue *rxq);
> diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
> index edb1383..d077918 100644
> --- a/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
> +++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_neon.c
> @@ -149,6 +149,9 @@ static inline uint16_t
>  _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
>  		   uint16_t nb_pkts, uint8_t *split_packet)
>  {
> +	const struct rte_eth_dev *dev = &rte_eth_devices[rxq->port_id];
> +	const struct ixgbe_vfta *vfta
> +		= IXGBE_DEV_PRIVATE_TO_VFTA(dev->data->dev_private);
>  	volatile union ixgbe_adv_rx_desc *rxdp;
>  	struct ixgbe_rx_entry *sw_ring;
>  	uint16_t nb_pkts_recd;
> @@ -272,8 +275,10 @@ _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
>  		/* D.3 copy final 3,4 data to rx_pkts */
>  		vst1q_u8((void *)&rx_pkts[pos + 3]->rx_descriptor_fields1,
>  			 pkt_mb4);
> +		ixgbe_unknown_vlan_sw_filter_hdr(rx_pkts[pos + 3], vfta, rxq);
>  		vst1q_u8((void *)&rx_pkts[pos + 2]->rx_descriptor_fields1,
>  			 pkt_mb3);
> +		ixgbe_unknown_vlan_sw_filter_hdr(rx_pkts[pos + 2], vfta, rxq);
>
>  		/* D.2 pkt 1,2 set in_port/nb_seg and remove crc */
>  		tmp = vsubq_u16(vreinterpretq_u16_u8(pkt_mb2), crc_adjust);
> @@ -294,8 +299,10 @@ _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
>  		/* D.3 copy final 1,2 data to rx_pkts */
>  		vst1q_u8((uint8_t *)&rx_pkts[pos + 1]->rx_descriptor_fields1,
>  			 pkt_mb2);
> +		ixgbe_unknown_vlan_sw_filter_hdr(rx_pkts[pos + 1], vfta, rxq);
>  		vst1q_u8((uint8_t *)&rx_pkts[pos]->rx_descriptor_fields1,
>  			 pkt_mb1);
> +		ixgbe_unknown_vlan_sw_filter_hdr(rx_pkts[pos], vfta, rxq);
>
>  		stat &= IXGBE_VPMD_DESC_DD_MASK;
>
> diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
> index c9ba482..04a3307 100644
> --- a/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
> +++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_sse.c
> @@ -313,9 +313,10 @@ desc_to_ptype_v(__m128i descs[4], uint16_t pkt_type_mask,
>   */
>  static inline uint16_t
>  _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
> -		uint16_t nb_pkts, uint8_t *split_packet)
> +		uint16_t nb_pkts, bool vf, uint8_t *split_packet)
>  {
>  	volatile union ixgbe_adv_rx_desc *rxdp;
> +	const struct ixgbe_vfta *vfta = NULL;
>  	struct ixgbe_rx_entry *sw_ring;
>  	uint16_t nb_pkts_recd;
>  #ifdef RTE_LIBRTE_SECURITY
> @@ -344,6 +345,13 @@ _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
>  	__m128i mbuf_init;
>  	uint8_t vlan_flags;
>
> +	if (vf) {
> +		const struct rte_eth_dev *dev =
> +			&rte_eth_devices[rxq->port_id];
> +
> +		vfta = IXGBE_DEV_PRIVATE_TO_VFTA(dev->data->dev_private);
> +	}
> +
>  	/* nb_pkts shall be less equal than RTE_IXGBE_MAX_RX_BURST */
>  	nb_pkts = RTE_MIN(nb_pkts, RTE_IXGBE_MAX_RX_BURST);
>
> @@ -500,8 +508,15 @@ _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
>  		/* D.3 copy final 3,4 data to rx_pkts */
>  		_mm_storeu_si128((void *)&rx_pkts[pos+3]->rx_descriptor_fields1,
>  				pkt_mb4);
> +		if (vf)
> +			ixgbevf_trans_vlan_sw_filter_hdr(rx_pkts[pos + 3],
> +							 vfta);
> +
>  		_mm_storeu_si128((void *)&rx_pkts[pos+2]->rx_descriptor_fields1,
>  				pkt_mb3);
> +		if (vf)
> +			ixgbevf_trans_vlan_sw_filter_hdr(rx_pkts[pos + 2],
> +							 vfta);
>
>  		/* D.2 pkt 1,2 set in_port/nb_seg and remove crc */
>  		pkt_mb2 = _mm_add_epi16(pkt_mb2, crc_adjust);
> @@ -536,8 +551,15 @@ _recv_raw_pkts_vec(struct ixgbe_rx_queue *rxq, struct rte_mbuf **rx_pkts,
>  		/* D.3 copy final 1,2 data to rx_pkts */
>  		_mm_storeu_si128((void *)&rx_pkts[pos+1]->rx_descriptor_fields1,
>  				pkt_mb2);
> +		if (vf)
> +			ixgbevf_trans_vlan_sw_filter_hdr(rx_pkts[pos + 1],
> +							 vfta);
> +
>  		_mm_storeu_si128((void *)&rx_pkts[pos]->rx_descriptor_fields1,
>  				pkt_mb1);
> +		if (vf)
> +			ixgbevf_trans_vlan_sw_filter_hdr(rx_pkts[pos],
> +							 vfta);
>
>  		desc_to_ptype_v(descs, rxq->pkt_type_mask, &rx_pkts[pos]);
>
> @@ -569,11 +591,11 @@ uint16_t
>  ixgbe_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
>  		uint16_t nb_pkts)
>  {
> -	return _recv_raw_pkts_vec(rx_queue, rx_pkts, nb_pkts, NULL);
> +	return _recv_raw_pkts_vec(rx_queue, rx_pkts, nb_pkts, false, NULL);
>  }
>
>  /*
> - * vPMD receive routine that reassembles scattered packets
> + * vPMD raw receive routine that reassembles scattered packets
>   *
>   * Notice:
>   * - nb_pkts < RTE_IXGBE_DESCS_PER_LOOP, just return no packet
>   * - nb_pkts > RTE_IXGBE_MAX_RX_BURST, only scan RTE_IXGBE_MAX_RX_BURST
>   *   numbers of DD bit
>   * - floor align nb_pkts to a RTE_IXGBE_DESC_PER_LOOP power-of-two
>   */
> -uint16_t
> -ixgbe_recv_scattered_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
> -		uint16_t nb_pkts)
> +static inline uint16_t
> +_recv_raw_scattered_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
> +		uint16_t nb_pkts, bool vf)
>  {
>  	struct ixgbe_rx_queue *rxq = rx_queue;
>  	uint8_t split_flags[RTE_IXGBE_MAX_RX_BURST] = {0};
>
>  	/* get some new buffers */
>  	uint16_t nb_bufs = _recv_raw_pkts_vec(rxq, rx_pkts, nb_pkts,
> -			split_flags);
> +			vf, split_flags);
>  	if (nb_bufs == 0)
>  		return 0;
>
> @@ -614,6 +636,54 @@ ixgbe_recv_scattered_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
>  				 &split_flags[i]);
>  }
>
> +/*
> + * vPMD receive routine that reassembles scattered packets
> + *
> + * Notice:
> + * - nb_pkts < RTE_IXGBE_DESCS_PER_LOOP, just return no packet
> + * - nb_pkts > RTE_IXGBE_MAX_RX_BURST, only scan RTE_IXGBE_MAX_RX_BURST
> + *   numbers of DD bit
> + * - floor align nb_pkts to a RTE_IXGBE_DESC_PER_LOOP power-of-two
> + */
> +uint16_t
> +ixgbe_recv_scattered_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
> +		uint16_t nb_pkts)
> +{
> +	return _recv_raw_scattered_pkts_vec(rx_queue, rx_pkts, nb_pkts, false);
> +}
> +
> +/*
> + * vPMD VF receive routine, only accept(nb_pkts >= RTE_IXGBE_DESCS_PER_LOOP)
> + *
> + * Notice:
> + * - nb_pkts < RTE_IXGBE_DESCS_PER_LOOP, just return no packet
> + * - nb_pkts > RTE_IXGBE_MAX_RX_BURST, only scan RTE_IXGBE_MAX_RX_BURST
> + *   numbers of DD bit
> + * - floor align nb_pkts to a RTE_IXGBE_DESC_PER_LOOP power-of-two
> + */
> +uint16_t
> +ixgbevf_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
> +		uint16_t nb_pkts)
> +{
> +	return _recv_raw_pkts_vec(rx_queue, rx_pkts, nb_pkts, true, NULL);
> +}
> +
> +/*
> + * vPMD VF receive routine that reassembles scattered packets
> + *
> + * Notice:
> + * - nb_pkts < RTE_IXGBE_DESCS_PER_LOOP, just return no packet
> + * - nb_pkts > RTE_IXGBE_MAX_RX_BURST, only scan RTE_IXGBE_MAX_RX_BURST
> + *   numbers of DD bit
> + * - floor align nb_pkts to a RTE_IXGBE_DESC_PER_LOOP power-of-two
> + */
> +uint16_t
> +ixgbevf_recv_scattered_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
> +		uint16_t nb_pkts)
> +{
> +	return _recv_raw_scattered_pkts_vec(rx_queue, rx_pkts, nb_pkts, true);
> +}
> +
>  static inline void
>  vtx1(volatile union ixgbe_adv_tx_desc *txdp,
>  	struct rte_mbuf *pkt, uint64_t flags)
> --
> 2.7.4