* Re: [dpdk-dev] [PATCH v1] net/iavf: support flex desc metadata extraction
2020-09-09 2:54 [dpdk-dev] [PATCH v1] net/iavf: support flex desc metadata extraction Jeff Guo
@ 2020-09-17 3:00 ` Wang, Haiyue
2020-09-18 2:41 ` Guo, Jia
2020-09-23 7:45 ` [dpdk-dev] [PATCH v2] " Jeff Guo
` (12 subsequent siblings)
13 siblings, 1 reply; 40+ messages in thread
From: Wang, Haiyue @ 2020-09-17 3:00 UTC (permalink / raw)
To: Guo, Jia, Wu, Jingjing, Zhang, Qi Z, Xing, Beilei; +Cc: dev, Guo, Jia
Hi Jeff,
> -----Original Message-----
> From: dev <dev-bounces@dpdk.org> On Behalf Of Jeff Guo
> Sent: Wednesday, September 9, 2020 10:54
> To: Wu, Jingjing <jingjing.wu@intel.com>; Zhang, Qi Z <qi.z.zhang@intel.com>; Xing, Beilei
> <beilei.xing@intel.com>
> Cc: dev@dpdk.org; Guo, Jia <jia.guo@intel.com>
> Subject: [dpdk-dev] [PATCH v1] net/iavf: support flex desc metadata extraction
>
> Enable metadata extraction for flexible descriptors in AVF, so that
> network functions can get metadata directly, without additional parsing,
> which reduces the CPU cost on VFs. The enabled extractions cover the
> metadata of the VLAN/IPv4/IPv6/IPv6-FLOW/TCP/OVS/MPLS flexible
> descriptors; the VF negotiates the flexible descriptor capability with
> the PF and configures the specific offload on the receive queues
> accordingly.
>
> Signed-off-by: Jeff Guo <jia.guo@intel.com>
> ---
> doc/guides/rel_notes/release_20_11.rst | 6 +
> drivers/net/iavf/Makefile | 1 +
> drivers/net/iavf/iavf.h | 25 +-
> drivers/net/iavf/iavf_ethdev.c | 398 +++++++++++++++++++++++++
> drivers/net/iavf/iavf_rxtx.c | 230 +++++++++++++-
> drivers/net/iavf/iavf_rxtx.h | 17 ++
> drivers/net/iavf/iavf_vchnl.c | 22 +-
> drivers/net/iavf/meson.build | 2 +
> drivers/net/iavf/rte_pmd_iavf.h | 258 ++++++++++++++++
> 9 files changed, 937 insertions(+), 22 deletions(-)
> create mode 100644 drivers/net/iavf/rte_pmd_iavf.h
>
> -------------
> diff --git a/drivers/net/iavf/Makefile b/drivers/net/iavf/Makefile
> index 792cbb7f7..05fcbdc47 100644
> --- a/drivers/net/iavf/Makefile
> +++ b/drivers/net/iavf/Makefile
The build is meson-only now; please remove the Makefile change.
> diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
> index 05a7dd898..fa71b4a80 100644
> --- a/drivers/net/iavf/iavf_rxtx.c
> +++ b/drivers/net/iavf/iavf_rxtx.c
> @@ -26,6 +26,74 @@
> +
> +/* Translate the rx flex descriptor status to pkt flags */
> +static inline void
> +iavf_rxd_to_pkt_fields(struct rte_mbuf *mb,
> + volatile union iavf_rx_flex_desc *rxdp, uint8_t rxdid)
> +{
> + if (rxdid == IAVF_RXDID_COMMS_GENERIC ||
> + rxdid == IAVF_RXDID_COMMS_AUX_VLAN ||
> + rxdid == IAVF_RXDID_COMMS_AUX_IPV4 ||
> + rxdid == IAVF_RXDID_COMMS_AUX_IPV6 ||
> + rxdid == IAVF_RXDID_COMMS_AUX_IPV6_FLOW ||
> + rxdid == IAVF_RXDID_COMMS_AUX_TCP ||
> + rxdid == IAVF_RXDID_COMMS_AUX_IP_OFFSET)
> + iavf_rxd_to_pkt_fields_aux(mb, rxdp);
> + else if (rxdid == IAVF_RXDID_COMMS_OVS_1)
> + iavf_rxd_to_pkt_fields_ovs(mb, rxdp);
> +}
We can optimize this by using a function handle:
struct iavf_rx_queue *rxq->rxd_to_pkt_fields(mb, rxdp)
and when setting up the queue, assign the right handle according to the rxdid.
if (rxdid == IAVF_RXDID_COMMS_GENERIC ...)
rxq->rxd_to_pkt_fields = iavf_rxd_to_pkt_fields_aux;
else if (OVS_1)
rxq->rxd_to_pkt_fields = iavf_rxd_to_pkt_fields_ovs;
> --- a/drivers/net/iavf/meson.build
> +++ b/drivers/net/iavf/meson.build
> @@ -35,3 +35,5 @@ if arch_subdir == 'x86'
> objs += iavf_avx2_lib.extract_objects('iavf_rxtx_vec_avx2.c')
> endif
> endif
> +
> +install_headers('rte_pmd_iavf.h')
> diff --git a/drivers/net/iavf/rte_pmd_iavf.h b/drivers/net/iavf/rte_pmd_iavf.h
> new file mode 100644
> index 000000000..858201bd7
> --- /dev/null
> +++ b/drivers/net/iavf/rte_pmd_iavf.h
> @@ -0,0 +1,258 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2019 Intel Corporation
> + */
> +
> +#ifndef _RTE_PMD_IAVF_H_
> +#define _RTE_PMD_IAVF_H_
> +
> +/**
> + * @file rte_pmd_iavf.h
> + *
> + * iavf PMD specific functions.
> + *
> + * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
> + *
> + */
> +
> +#include <stdio.h>
> +#include <rte_mbuf.h>
> +#include <rte_mbuf_dyn.h>
> +
> +#ifdef __cplusplus
> +extern "C" {
> +#endif
> +
> +/**
> + * The supported network flexible descriptor's extraction metadata format.
> + */
> +union rte_net_iavf_flex_desc_metadata {
> + uint32_t metadata;
> +
> + struct {
> + uint16_t data0;
> + uint16_t data1;
> + } raw;
> +
> + struct {
> + uint16_t stag_vid:12,
> + stag_dei:1,
> + stag_pcp:3;
> + uint16_t ctag_vid:12,
> + ctag_dei:1,
> + ctag_pcp:3;
> + } vlan;
> +
> + struct {
> + uint16_t protocol:8,
> + ttl:8;
> + uint16_t tos:8,
> + ihl:4,
> + version:4;
> + } ipv4;
> +
> + struct {
> + uint16_t hoplimit:8,
> + nexthdr:8;
> + uint16_t flowhi4:4,
> + tc:8,
> + version:4;
> + } ipv6;
> +
> + struct {
> + uint16_t flowlo16;
> + uint16_t flowhi4:4,
> + tc:8,
> + version:4;
> + } ipv6_flow;
> +
> + struct {
> + uint16_t fin:1,
> + syn:1,
> + rst:1,
> + psh:1,
> + ack:1,
> + urg:1,
> + ece:1,
> + cwr:1,
> + res1:4,
> + doff:4;
> + uint16_t rsvd;
> + } tcp;
> +
> + uint32_t ip_ofs;
> +};
> +
> +/* Offset of mbuf dynamic field for flexible descriptor's extraction data */
> +extern int rte_net_iavf_dynfield_flex_desc_metadata_offs;
> +
> +/* Mask of mbuf dynamic flags for flexible descriptor's extraction type */
> +extern uint64_t rte_net_iavf_dynflag_flex_desc_vlan_mask;
> +extern uint64_t rte_net_iavf_dynflag_flex_desc_ipv4_mask;
> +extern uint64_t rte_net_iavf_dynflag_flex_desc_ipv6_mask;
> +extern uint64_t rte_net_iavf_dynflag_flex_desc_ipv6_flow_mask;
> +extern uint64_t rte_net_iavf_dynflag_flex_desc_tcp_mask;
> +extern uint64_t rte_net_iavf_dynflag_flex_desc_ovs_mask;
> +extern uint64_t rte_net_iavf_dynflag_flex_desc_ip_offset_mask;
> +
> +/**
> + * The mbuf dynamic field pointer for flexible descriptor's extraction metadata.
> + */
> +#define RTE_NET_IAVF_DYNF_FLEX_DESC_METADATA(m) \
> + RTE_MBUF_DYNFIELD((m), \
> + rte_net_iavf_dynfield_flex_desc_metadata_offs, \
> + uint32_t *)
> +
> +/**
> + * The mbuf dynamic flag for VLAN protocol extraction metadata, it is valid
> + * when dev_args 'flex_desc' has 'vlan' specified.
> + */
> +#define RTE_IAVF_PKT_RX_DYNF_FLEX_DESC_VLAN \
> + (rte_net_iavf_dynflag_flex_desc_vlan_mask)
> +
> +/**
> + * The mbuf dynamic flag for IPv4 protocol extraction metadata, it is valid
> + * when dev_args 'flex_desc' has 'ipv4' specified.
> + */
> +#define RTE_IAVF_PKT_RX_DYNF_FLEX_DESC_IPV4 \
> + (rte_net_iavf_dynflag_flex_desc_ipv4_mask)
> +
> +/**
> + * The mbuf dynamic flag for IPv6 protocol extraction metadata, it is valid
> + * when dev_args 'flex_desc' has 'ipv6' specified.
> + */
> +#define RTE_IAVF_PKT_RX_DYNF_FLEX_DESC_IPV6 \
> + (rte_net_iavf_dynflag_flex_desc_ipv6_mask)
> +
> +/**
> + * The mbuf dynamic flag for IPv6 with flow protocol extraction metadata, it is
> + * valid when dev_args 'flex_desc' has 'ipv6_flow' specified.
> + */
> +#define RTE_IAVF_PKT_RX_DYNF_FLEX_DESC_IPV6_FLOW \
> + (rte_net_iavf_dynflag_flex_desc_ipv6_flow_mask)
> +
> +/**
> + * The mbuf dynamic flag for TCP protocol extraction metadata, it is valid
> + * when dev_args 'flex_desc' has 'tcp' specified.
> + */
> +#define RTE_IAVF_PKT_RX_DYNF_FLEX_DESC_TCP \
> + (rte_net_iavf_dynflag_flex_desc_tcp_mask)
> +
> +/**
> + * The mbuf dynamic flag for the extraction metadata of OVS flexible
> + * descriptor, it is valid when dev_args 'flex_desc' has 'ovs' specified.
> + */
> +#define RTE_IAVF_PKT_RX_DYNF_FLEX_DESC_OVS \
> + (rte_net_iavf_dynflag_flex_desc_ovs_mask)
> +
> +/**
> + * The mbuf dynamic flag for IP_OFFSET extraction metadata, it is valid
> + * when dev_args 'flex_desc' has 'ip_offset' specified.
> + */
> +#define RTE_IAVF_PKT_RX_DYNF_FLEX_DESC_IP_OFFSET \
> + (rte_net_iavf_dynflag_flex_desc_ip_offset_mask)
> +
> +/**
> + * Check if mbuf dynamic field for flexible descriptor's extraction metadata
> + * is registered.
> + *
> + * @return
> + * True if registered, false otherwise.
> + */
> +__rte_experimental
> +static __rte_always_inline int
> +rte_net_iavf_dynf_flex_desc_metadata_avail(void)
> +{
> + return rte_net_iavf_dynfield_flex_desc_metadata_offs != -1;
> +}
> +
> +/**
> + * Get the mbuf dynamic field for flexible descriptor's extraction metadata.
> + *
> + * @param m
> + * The pointer to the mbuf.
> + * @return
> + * The saved protocol extraction metadata.
> + */
> +__rte_experimental
> +static __rte_always_inline uint32_t
> +rte_net_iavf_dynf_flex_desc_metadata_get(struct rte_mbuf *m)
> +{
> + return *RTE_NET_IAVF_DYNF_FLEX_DESC_METADATA(m);
> +}
> +
> +/**
> + * Dump the mbuf dynamic field for flexible descriptor's extraction metadata.
> + *
> + * @param m
> + * The pointer to the mbuf.
> + */
> +__rte_experimental
> +static inline void
> +rte_net_iavf_dump_flex_desc_metadata(struct rte_mbuf *m)
> +{
> + union rte_net_iavf_flex_desc_metadata data;
> +
> + if (!rte_net_iavf_dynf_flex_desc_metadata_avail())
> + return;
> +
> + data.metadata = rte_net_iavf_dynf_flex_desc_metadata_get(m);
> +
> + if (m->ol_flags & RTE_IAVF_PKT_RX_DYNF_FLEX_DESC_VLAN)
> + printf(" - Flexible descriptor's Metadata: [0x%04x:0x%04x],"
> + "vlan,stag=%u:%u:%u,ctag=%u:%u:%u",
> + data.raw.data0, data.raw.data1,
> + data.vlan.stag_pcp,
> + data.vlan.stag_dei,
> + data.vlan.stag_vid,
> + data.vlan.ctag_pcp,
> + data.vlan.ctag_dei,
> + data.vlan.ctag_vid);
> + else if (m->ol_flags & RTE_IAVF_PKT_RX_DYNF_FLEX_DESC_IPV4)
> + printf(" - Flexible descriptor's Metadata: [0x%04x:0x%04x],"
> + "ipv4,ver=%u,hdrlen=%u,tos=%u,ttl=%u,proto=%u",
> + data.raw.data0, data.raw.data1,
> + data.ipv4.version,
> + data.ipv4.ihl,
> + data.ipv4.tos,
> + data.ipv4.ttl,
> + data.ipv4.protocol);
> + else if (m->ol_flags & RTE_IAVF_PKT_RX_DYNF_FLEX_DESC_IPV6)
> + printf(" - Flexible descriptor's Metadata: [0x%04x:0x%04x],"
> + "ipv6,ver=%u,tc=%u,flow_hi4=0x%x,nexthdr=%u,hoplimit=%u",
> + data.raw.data0, data.raw.data1,
> + data.ipv6.version,
> + data.ipv6.tc,
> + data.ipv6.flowhi4,
> + data.ipv6.nexthdr,
> + data.ipv6.hoplimit);
> + else if (m->ol_flags & RTE_IAVF_PKT_RX_DYNF_FLEX_DESC_IPV6_FLOW)
> + printf(" - Flexible descriptor's Metadata: [0x%04x:0x%04x],"
> + "ipv6_flow,ver=%u,tc=%u,flow=0x%x%04x",
> + data.raw.data0, data.raw.data1,
> + data.ipv6_flow.version,
> + data.ipv6_flow.tc,
> + data.ipv6_flow.flowhi4,
> + data.ipv6_flow.flowlo16);
> + else if (m->ol_flags & RTE_IAVF_PKT_RX_DYNF_FLEX_DESC_TCP)
> + printf(" - Flexible descriptor's Metadata: [0x%04x:0x%04x],"
> + "tcp,doff=%u,flags=%s%s%s%s%s%s%s%s",
> + data.raw.data0, data.raw.data1,
> + data.tcp.doff,
> + data.tcp.cwr ? "C" : "",
> + data.tcp.ece ? "E" : "",
> + data.tcp.urg ? "U" : "",
> + data.tcp.ack ? "A" : "",
> + data.tcp.psh ? "P" : "",
> + data.tcp.rst ? "R" : "",
> + data.tcp.syn ? "S" : "",
> + data.tcp.fin ? "F" : "");
> + else if (m->ol_flags & RTE_IAVF_PKT_RX_DYNF_FLEX_DESC_IP_OFFSET)
> + printf(" - Flexible descriptor's Extraction: ip_offset=%u",
> + data.ip_ofs);
> +}
> +
> +#ifdef __cplusplus
> +}
> +#endif
> +
> +#endif /* _RTE_PMD_IAVF_H_ */
You need to export these global symbols into rte_pmd_iavf_version.map like:
EXPERIMENTAL {
global:
rte_net_iavf_dynfield_proto_xtr_metadata_offs;
...
};
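Spelled out with the symbol names this v1 patch actually exports, the map file would look roughly like this (a sketch; the final list should follow whatever names land in the next revision):

```
EXPERIMENTAL {
	global:

	rte_net_iavf_dynfield_flex_desc_metadata_offs;
	rte_net_iavf_dynflag_flex_desc_vlan_mask;
	rte_net_iavf_dynflag_flex_desc_ipv4_mask;
	rte_net_iavf_dynflag_flex_desc_ipv6_mask;
	rte_net_iavf_dynflag_flex_desc_ipv6_flow_mask;
	rte_net_iavf_dynflag_flex_desc_tcp_mask;
	rte_net_iavf_dynflag_flex_desc_ovs_mask;
	rte_net_iavf_dynflag_flex_desc_ip_offset_mask;
};
```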
> --
> 2.20.1
^ permalink raw reply [flat|nested] 40+ messages in thread
* Re: [dpdk-dev] [PATCH v1] net/iavf: support flex desc metadata extraction
2020-09-17 3:00 ` Wang, Haiyue
@ 2020-09-18 2:41 ` Guo, Jia
0 siblings, 0 replies; 40+ messages in thread
From: Guo, Jia @ 2020-09-18 2:41 UTC (permalink / raw)
To: Wang, Haiyue, Wu, Jingjing, Zhang, Qi Z, Xing, Beilei; +Cc: dev
Hi, Haiyue
> -----Original Message-----
> From: Wang, Haiyue <haiyue.wang@intel.com>
> Sent: Thursday, September 17, 2020 11:00 AM
> To: Guo, Jia <jia.guo@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>;
> Zhang, Qi Z <qi.z.zhang@intel.com>; Xing, Beilei <beilei.xing@intel.com>
> Cc: dev@dpdk.org; Guo, Jia <jia.guo@intel.com>
> Subject: RE: [dpdk-dev] [PATCH v1] net/iavf: support flex desc metadata
> extraction
>
> Hi Jeff,
>
> > -----Original Message-----
> > From: dev <dev-bounces@dpdk.org> On Behalf Of Jeff Guo
> > Sent: Wednesday, September 9, 2020 10:54
> > To: Wu, Jingjing <jingjing.wu@intel.com>; Zhang, Qi Z
> > <qi.z.zhang@intel.com>; Xing, Beilei <beilei.xing@intel.com>
> > Cc: dev@dpdk.org; Guo, Jia <jia.guo@intel.com>
> > Subject: [dpdk-dev] [PATCH v1] net/iavf: support flex desc metadata
> > extraction
> >
> > Enable metadata extraction for flexible descriptors in AVF, that would
> > allow network function directly get metadata without additional
> > parsing which would reduce the CPU cost for VFs. The enabling metadata
> > extractions involve the metadata of VLAN/IPv4/IPv6/IPv6-FLOW/TCP/OVS/
> > MPLS flexible descriptors, and the VF could negotiate the capability
> > of the flexible descriptor with PF and correspondingly configure the
> > specific offload at receiving queues.
> >
> > Signed-off-by: Jeff Guo <jia.guo@intel.com>
> > ---
> > doc/guides/rel_notes/release_20_11.rst | 6 +
> > drivers/net/iavf/Makefile | 1 +
> > drivers/net/iavf/iavf.h | 25 +-
> > drivers/net/iavf/iavf_ethdev.c | 398 +++++++++++++++++++++++++
> > drivers/net/iavf/iavf_rxtx.c | 230 +++++++++++++-
> > drivers/net/iavf/iavf_rxtx.h | 17 ++
> > drivers/net/iavf/iavf_vchnl.c | 22 +-
> > drivers/net/iavf/meson.build | 2 +
> > drivers/net/iavf/rte_pmd_iavf.h | 258 ++++++++++++++++
> > 9 files changed, 937 insertions(+), 22 deletions(-) create mode
> > 100644 drivers/net/iavf/rte_pmd_iavf.h
> >
>
>
> > -------------
> > diff --git a/drivers/net/iavf/Makefile b/drivers/net/iavf/Makefile
> > index 792cbb7f7..05fcbdc47 100644
> > --- a/drivers/net/iavf/Makefile
> > +++ b/drivers/net/iavf/Makefile
>
> meson build only now, remove the Makefile
>
Oh, that is exactly right; will remove it.
>
> > diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
> > index 05a7dd898..fa71b4a80 100644
> > --- a/drivers/net/iavf/iavf_rxtx.c
> > +++ b/drivers/net/iavf/iavf_rxtx.c
> > @@ -26,6 +26,74 @@
> >
> > +/* Translate the rx flex descriptor status to pkt flags */
> > +static inline void
> > +iavf_rxd_to_pkt_fields(struct rte_mbuf *mb,
> > +		       volatile union iavf_rx_flex_desc *rxdp, uint8_t rxdid)
> > +{
> > +	if (rxdid == IAVF_RXDID_COMMS_GENERIC ||
> > +	    rxdid == IAVF_RXDID_COMMS_AUX_VLAN ||
> > +	    rxdid == IAVF_RXDID_COMMS_AUX_IPV4 ||
> > +	    rxdid == IAVF_RXDID_COMMS_AUX_IPV6 ||
> > +	    rxdid == IAVF_RXDID_COMMS_AUX_IPV6_FLOW ||
> > +	    rxdid == IAVF_RXDID_COMMS_AUX_TCP ||
> > +	    rxdid == IAVF_RXDID_COMMS_AUX_IP_OFFSET)
> > +		iavf_rxd_to_pkt_fields_aux(mb, rxdp);
> > +	else if (rxdid == IAVF_RXDID_COMMS_OVS_1)
> > +		iavf_rxd_to_pkt_fields_ovs(mb, rxdp);
> > +}
>
> We can optimize this by calling function handle:
>
> struct iavf_rx_queue *rxq->rxd_to_pkt_fields(mb, rxdp)
>
> and when setup the queue, assign the right handle according to the rxdid.
> if (rxdid == IAVF_RXDID_COMMS_GENERIC ...)
> rxq->rxd_to_pkt_fields = iavf_rxd_to_pkt_fields_aux;
> else if (OVS_1)
> rxq->rxd_to_pkt_fields = iavf_rxd_to_pkt_fields_ovs;
>
>
Sounds good; that would make handling these diverse protocols clearer. Let's see what modification we can bring in the coming version.
> > [snip]
>
> You need to export these global symbols into rte_pmd_iavf_version.map like:
>
Ok.
> EXPERIMENTAL {
> global:
>
> rte_net_iavf_dynfield_proto_xtr_metadata_offs;
> ...
> };
>
> > --
> > 2.20.1
>
^ permalink raw reply [flat|nested] 40+ messages in thread
* [dpdk-dev] [PATCH v2] net/iavf: support flex desc metadata extraction
2020-09-09 2:54 [dpdk-dev] [PATCH v1] net/iavf: support flex desc metadata extraction Jeff Guo
2020-09-17 3:00 ` Wang, Haiyue
@ 2020-09-23 7:45 ` Jeff Guo
2020-09-23 7:52 ` [dpdk-dev] [PATCH v3] " Jeff Guo
` (11 subsequent siblings)
13 siblings, 0 replies; 40+ messages in thread
From: Jeff Guo @ 2020-09-23 7:45 UTC (permalink / raw)
To: jingjing.wu, qi.z.zhang, beilei.xing; +Cc: dev, haiyue.wang, jia.guo
Enable metadata extraction for flexible descriptors in AVF, so that
network functions can get metadata directly, without additional parsing,
which reduces the CPU cost on VFs. The enabled extractions cover the
metadata of the VLAN/IPv4/IPv6/IPv6-FLOW/TCP/MPLS flexible descriptors;
the VF negotiates the flexible descriptor capability with the PF and
configures the specific offload on the receive queues accordingly.
Signed-off-by: Jeff Guo <jia.guo@intel.com>
---
v1->v2:
remove makefile change and modify the rxdid handling
---
doc/guides/rel_notes/release_20_11.rst | 6 +
drivers/net/iavf/iavf.h | 25 +-
drivers/net/iavf/iavf_ethdev.c | 395 +++++++++++++++++++++++++
drivers/net/iavf/iavf_rxtx.c | 282 ++++++++++++++++--
drivers/net/iavf/iavf_rxtx.h | 233 ++++++++-------
drivers/net/iavf/iavf_vchnl.c | 22 +-
drivers/net/iavf/meson.build | 2 +
drivers/net/iavf/rte_pmd_iavf.h | 250 ++++++++++++++++
8 files changed, 1068 insertions(+), 147 deletions(-)
create mode 100644 drivers/net/iavf/rte_pmd_iavf.h
diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index d4a66d045..054424d94 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -62,6 +62,12 @@ New Features
* Added support for non-zero priorities for group 0 flows
* Added support for VXLAN decap combined with VLAN pop
+* **Updated Intel iavf driver.**
+
+ Updated iavf PMD with new features and improvements, including:
+
+ * Added support for flexible descriptor metadata extraction.
+
Removed Items
-------------
diff --git a/drivers/net/iavf/iavf.h b/drivers/net/iavf/iavf.h
index 3198d85b3..44e28df56 100644
--- a/drivers/net/iavf/iavf.h
+++ b/drivers/net/iavf/iavf.h
@@ -119,7 +119,7 @@ struct iavf_info {
struct virtchnl_vf_resource *vf_res; /* VF resource */
struct virtchnl_vsi_resource *vsi_res; /* LAN VSI */
uint64_t supported_rxdid;
-
+ uint8_t *flex_desc; /* flexible descriptor type for all queues */
volatile enum virtchnl_ops pend_cmd; /* pending command not finished */
uint32_t cmd_retval; /* return value of the cmd response from PF */
uint8_t *aq_resp; /* buffer to store the adminq response from PF */
@@ -153,6 +153,28 @@ struct iavf_info {
#define IAVF_MAX_PKT_TYPE 1024
+#define IAVF_MAX_QUEUE_NUM 2048
+
+enum iavf_flex_desc_type {
+ IAVF_FLEX_DESC_NONE,
+ IAVF_FLEX_DESC_VLAN,
+ IAVF_FLEX_DESC_IPV4,
+ IAVF_FLEX_DESC_IPV6,
+ IAVF_FLEX_DESC_IPV6_FLOW,
+ IAVF_FLEX_DESC_TCP,
+ IAVF_FLEX_DESC_OVS,
+ IAVF_FLEX_DESC_IP_OFFSET,
+ IAVF_FLEX_DESC_MAX,
+};
+
+/**
+ * Cache devargs parse result.
+ */
+struct iavf_devargs {
+ uint8_t flex_desc_dflt;
+ uint8_t flex_desc[IAVF_MAX_QUEUE_NUM];
+};
+
/* Structure to store private data for each VF instance. */
struct iavf_adapter {
struct iavf_hw hw;
@@ -166,6 +188,7 @@ struct iavf_adapter {
const uint32_t *ptype_tbl;
bool stopped;
uint16_t fdir_ref_cnt;
+ struct iavf_devargs devargs;
};
/* IAVF_DEV_PRIVATE_TO */
diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
index 440da7d76..02b55cb49 100644
--- a/drivers/net/iavf/iavf_ethdev.c
+++ b/drivers/net/iavf/iavf_ethdev.c
@@ -28,6 +28,49 @@
#include "iavf.h"
#include "iavf_rxtx.h"
#include "iavf_generic_flow.h"
+#include "rte_pmd_iavf.h"
+
+/* devargs */
+#define IAVF_FLEX_DESC_ARG "flex_desc"
+
+static const char * const iavf_valid_args[] = {
+ IAVF_FLEX_DESC_ARG,
+ NULL
+};
+
+static const struct rte_mbuf_dynfield iavf_flex_desc_metadata_param = {
+ .name = "iavf_dynfield_flex_desc_metadata",
+ .size = sizeof(uint32_t),
+ .align = __alignof__(uint32_t),
+ .flags = 0,
+};
+
+struct iavf_flex_desc_ol_flag {
+ const struct rte_mbuf_dynflag param;
+ uint64_t *ol_flag;
+ bool required;
+};
+
+static struct iavf_flex_desc_ol_flag iavf_flex_desc_ol_flag_params[] = {
+ [IAVF_FLEX_DESC_VLAN] = {
+ .param = { .name = "iavf_dynflag_flex_desc_vlan" },
+ .ol_flag = &rte_net_iavf_dynflag_flex_desc_vlan_mask },
+ [IAVF_FLEX_DESC_IPV4] = {
+ .param = { .name = "iavf_dynflag_flex_desc_ipv4" },
+ .ol_flag = &rte_net_iavf_dynflag_flex_desc_ipv4_mask },
+ [IAVF_FLEX_DESC_IPV6] = {
+ .param = { .name = "iavf_dynflag_flex_desc_ipv6" },
+ .ol_flag = &rte_net_iavf_dynflag_flex_desc_ipv6_mask },
+ [IAVF_FLEX_DESC_IPV6_FLOW] = {
+ .param = { .name = "iavf_dynflag_flex_desc_ipv6_flow" },
+ .ol_flag = &rte_net_iavf_dynflag_flex_desc_ipv6_flow_mask },
+ [IAVF_FLEX_DESC_TCP] = {
+ .param = { .name = "iavf_dynflag_flex_desc_tcp" },
+ .ol_flag = &rte_net_iavf_dynflag_flex_desc_tcp_mask },
+ [IAVF_FLEX_DESC_IP_OFFSET] = {
+ .param = { .name = "iavf_dynflag_flex_desc_ip_offset" },
+ .ol_flag = &rte_net_iavf_dynflag_flex_desc_ip_offset_mask },
+};
static int iavf_dev_configure(struct rte_eth_dev *dev);
static int iavf_dev_start(struct rte_eth_dev *dev);
@@ -1213,6 +1256,350 @@ iavf_check_vf_reset_done(struct iavf_hw *hw)
return 0;
}
+static int
+iavf_lookup_flex_desc_type(const char *flex_name)
+{
+ static struct {
+ const char *name;
+ enum iavf_flex_desc_type type;
+ } flex_type_map[] = {
+ { "vlan", IAVF_FLEX_DESC_VLAN },
+ { "ipv4", IAVF_FLEX_DESC_IPV4 },
+ { "ipv6", IAVF_FLEX_DESC_IPV6 },
+ { "ipv6_flow", IAVF_FLEX_DESC_IPV6_FLOW },
+ { "tcp", IAVF_FLEX_DESC_TCP },
+ { "ovs", IAVF_FLEX_DESC_OVS },
+ { "ip_offset", IAVF_FLEX_DESC_IP_OFFSET },
+ };
+ uint32_t i;
+
+ for (i = 0; i < RTE_DIM(flex_type_map); i++) {
+ if (strcmp(flex_name, flex_type_map[i].name) == 0)
+ return flex_type_map[i].type;
+ }
+
+ PMD_DRV_LOG(ERR, "wrong flex_desc type, "
+ "it should be: vlan|ipv4|ipv6|ipv6_flow|tcp|ovs|ip_offset");
+
+ return -1;
+}
+
+/**
+ * Parse elem, the elem could be single number/range or '(' ')' group
+ * 1) A single number elem, it's just a simple digit. e.g. 9
+ * 2) A single range elem, two digits with a '-' between. e.g. 2-6
+ * 3) A group elem, combines multiple 1) or 2) with '( )'. e.g (0,2-4,6)
+ * Within group elem, '-' used for a range separator;
+ * ',' used for a single number.
+ */
+static int
+iavf_parse_queue_set(const char *input, int flex_type,
+ struct iavf_devargs *devargs)
+{
+ const char *str = input;
+ char *end = NULL;
+ uint32_t min, max;
+ uint32_t idx;
+
+ while (isblank(*str))
+ str++;
+
+ if (!isdigit(*str) && *str != '(')
+ return -1;
+
+ /* process single number or single range of number */
+ if (*str != '(') {
+ errno = 0;
+ idx = strtoul(str, &end, 10);
+ if (errno || !end || idx >= IAVF_MAX_QUEUE_NUM)
+ return -1;
+
+ while (isblank(*end))
+ end++;
+
+ min = idx;
+ max = idx;
+
+ /* process single <number>-<number> */
+ if (*end == '-') {
+ end++;
+ while (isblank(*end))
+ end++;
+ if (!isdigit(*end))
+ return -1;
+
+ errno = 0;
+ idx = strtoul(end, &end, 10);
+ if (errno || !end || idx >= IAVF_MAX_QUEUE_NUM)
+ return -1;
+
+ max = idx;
+ while (isblank(*end))
+ end++;
+ }
+
+ if (*end != ':')
+ return -1;
+
+ for (idx = RTE_MIN(min, max);
+ idx <= RTE_MAX(min, max); idx++)
+ devargs->flex_desc[idx] = flex_type;
+
+ return 0;
+ }
+
+ /* process set within bracket */
+ str++;
+ while (isblank(*str))
+ str++;
+ if (*str == '\0')
+ return -1;
+
+ min = IAVF_MAX_QUEUE_NUM;
+ do {
+ /* go ahead to the first digit */
+ while (isblank(*str))
+ str++;
+ if (!isdigit(*str))
+ return -1;
+
+ /* get the digit value */
+ errno = 0;
+ idx = strtoul(str, &end, 10);
+ if (errno || !end || idx >= IAVF_MAX_QUEUE_NUM)
+ return -1;
+
+ /* go ahead to separator '-',',' and ')' */
+ while (isblank(*end))
+ end++;
+ if (*end == '-') {
+ if (min == IAVF_MAX_QUEUE_NUM)
+ min = idx;
+ else /* avoid continuous '-' */
+ return -1;
+ } else if (*end == ',' || *end == ')') {
+ max = idx;
+ if (min == IAVF_MAX_QUEUE_NUM)
+ min = idx;
+
+ for (idx = RTE_MIN(min, max);
+ idx <= RTE_MAX(min, max); idx++)
+ devargs->flex_desc[idx] = flex_type;
+
+ min = IAVF_MAX_QUEUE_NUM;
+ } else {
+ return -1;
+ }
+
+ str = end + 1;
+ } while (*end != ')' && *end != '\0');
+
+ return 0;
+}
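
The group grammar accepted above can be exercised with a small, self-contained sketch. This is a simplified reimplementation for illustration only (names such as `parse_set` and the `MAX_QUEUE` bound are made up, and blank handling is reduced), not the driver code itself:

```c
#include <assert.h>
#include <ctype.h>
#include <stdint.h>
#include <stdlib.h>

#define MAX_QUEUE 16

/* Simplified parser for a "(0,2-4,6)" group: mark every listed queue
 * index in 'mark'. Returns 0 on success, -1 on a malformed input. */
static int parse_set(const char *s, uint8_t *mark)
{
	unsigned long min = MAX_QUEUE, idx;
	char *end;

	if (*s++ != '(')
		return -1;
	do {
		while (isblank((unsigned char)*s))
			s++;
		if (!isdigit((unsigned char)*s))
			return -1;
		idx = strtoul(s, &end, 10);
		if (idx >= MAX_QUEUE)
			return -1;
		if (*end == '-') {
			if (min != MAX_QUEUE)
				return -1;	/* reject "1-2-3" */
			min = idx;
		} else if (*end == ',' || *end == ')') {
			unsigned long lo = (min == MAX_QUEUE) ? idx : min;
			unsigned long hi = idx;

			for (unsigned long q = (lo < hi ? lo : hi);
			     q <= (lo < hi ? hi : lo); q++)
				mark[q] = 1;	/* range is inclusive */
			min = MAX_QUEUE;
		} else {
			return -1;
		}
		s = end + 1;
	} while (*end != ')');
	return 0;
}
```

Note how a pending `min` distinguishes "inside a range" from "single number", which is the same trick the driver uses to reject a doubled '-'.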
+
+static int
+iavf_parse_queue_flex_desc(const char *queues, struct iavf_devargs *devargs)
+{
+ const char *queue_start;
+ uint32_t idx;
+ int flex_type;
+ char flex_name[32];
+
+ while (isblank(*queues))
+ queues++;
+
+ if (*queues != '[') {
+ flex_type = iavf_lookup_flex_desc_type(queues);
+ if (flex_type < 0)
+ return -1;
+
+ devargs->flex_desc_dflt = flex_type;
+
+ return 0;
+ }
+
+ queues++;
+ do {
+ while (isblank(*queues))
+ queues++;
+ if (*queues == '\0')
+ return -1;
+
+ queue_start = queues;
+
+ /* go across a complete bracket */
+ if (*queue_start == '(') {
+ queues += strcspn(queues, ")");
+ if (*queues != ')')
+ return -1;
+ }
+
+ /* scan the separator ':' */
+ queues += strcspn(queues, ":");
+ if (*queues++ != ':')
+ return -1;
+ while (isblank(*queues))
+ queues++;
+
+ for (idx = 0; ; idx++) {
+ if (isblank(queues[idx]) ||
+ queues[idx] == ',' ||
+ queues[idx] == ']' ||
+ queues[idx] == '\0')
+ break;
+
+ if (idx > sizeof(flex_name) - 2)
+ return -1;
+
+ flex_name[idx] = queues[idx];
+ }
+ flex_name[idx] = '\0';
+ flex_type = iavf_lookup_flex_desc_type(flex_name);
+ if (flex_type < 0)
+ return -1;
+
+ queues += idx;
+
+ while (isblank(*queues) || *queues == ',' || *queues == ']')
+ queues++;
+
+ if (iavf_parse_queue_set(queue_start, flex_type, devargs) < 0)
+ return -1;
+ } while (*queues != '\0');
+
+ return 0;
+}
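
The list form handled above is `[queue_set:type,queue_set:type,...]`, e.g. `flex_desc=[(0,2):vlan,(3):tcp]`. A minimal sketch of the element-splitting step (illustrative only; `split_elem` is a made-up helper, the driver walks the string in place with `strcspn`):

```c
#include <assert.h>
#include <string.h>

/* Split one "queue_set:type" element of a flex_desc list such as
 * "[(0,2):vlan,(3):tcp]": copy the type name that follows ':' into
 * 'name', stopping at ',' or ']' and refusing to overflow. */
static int split_elem(const char *elem, char *name, size_t len)
{
	const char *colon = strchr(elem, ':');
	size_t n;

	if (!colon)
		return -1;
	colon++;
	n = strcspn(colon, ",]");	/* span up to the next separator */
	if (n >= len)
		return -1;		/* name would not fit */
	memcpy(name, colon, n);
	name[n] = '\0';
	return 0;
}
```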
+
+static int
+iavf_handle_flex_desc_arg(__rte_unused const char *key, const char *value,
+ void *extra_args)
+{
+ struct iavf_devargs *devargs = extra_args;
+
+ if (!value || !extra_args)
+ return -EINVAL;
+
+ if (iavf_parse_queue_flex_desc(value, devargs) < 0) {
+ PMD_DRV_LOG(ERR, "invalid flex_desc parameter: '%s'", value);
+ return -1;
+ }
+
+ return 0;
+}
+
+static int iavf_parse_devargs(struct rte_eth_dev *dev)
+{
+ struct iavf_adapter *ad =
+ IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+ struct rte_devargs *devargs = dev->device->devargs;
+ struct rte_kvargs *kvlist;
+ int ret;
+
+ if (!devargs)
+ return 0;
+
+ kvlist = rte_kvargs_parse(devargs->args, iavf_valid_args);
+ if (!kvlist) {
+ PMD_INIT_LOG(ERR, "invalid kvargs key");
+ return -EINVAL;
+ }
+
+ ad->devargs.flex_desc_dflt = IAVF_FLEX_DESC_NONE;
+ memset(ad->devargs.flex_desc, IAVF_FLEX_DESC_NONE,
+ sizeof(ad->devargs.flex_desc));
+
+ ret = rte_kvargs_process(kvlist, IAVF_FLEX_DESC_ARG,
+ &iavf_handle_flex_desc_arg, &ad->devargs);
+
+ rte_kvargs_free(kvlist);
+ return ret;
+}
+
+static void
+iavf_init_flex_desc(struct rte_eth_dev *dev)
+{
+ struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+ struct iavf_adapter *ad =
+ IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+ const struct iavf_flex_desc_ol_flag *ol_flag;
+ bool flex_desc_enable = false;
+ int offset;
+ uint16_t i;
+
+ vf->flex_desc = rte_zmalloc("vf flex desc",
+ vf->vsi_res->num_queue_pairs, 0);
+ if (unlikely(!(vf->flex_desc))) {
+ PMD_DRV_LOG(ERR, "no memory for the flex_desc table");
+ return;
+ }
+
+ for (i = 0; i < vf->vsi_res->num_queue_pairs; i++) {
+ vf->flex_desc[i] = ad->devargs.flex_desc[i] !=
+ IAVF_FLEX_DESC_NONE ?
+ ad->devargs.flex_desc[i] :
+ ad->devargs.flex_desc_dflt;
+
+ if (vf->flex_desc[i] != IAVF_FLEX_DESC_NONE) {
+ uint8_t type = vf->flex_desc[i];
+
+ iavf_flex_desc_ol_flag_params[type].required = true;
+ flex_desc_enable = true;
+ }
+ }
+
+ if (likely(!flex_desc_enable))
+ return;
+
+ offset = rte_mbuf_dynfield_register(&iavf_flex_desc_metadata_param);
+ if (unlikely(offset == -1)) {
+ PMD_DRV_LOG(ERR,
+ "failed to register flex_desc metadata dynfield, error %d",
+ -rte_errno);
+ return;
+ }
+
+ PMD_DRV_LOG(DEBUG,
+ "flex_desc extraction metadata offset in mbuf is : %d",
+ offset);
+ rte_net_iavf_dynfield_flex_desc_metadata_offs = offset;
+
+ for (i = 0; i < RTE_DIM(iavf_flex_desc_ol_flag_params); i++) {
+ ol_flag = &iavf_flex_desc_ol_flag_params[i];
+
+ uint8_t rxdid = iavf_flex_desc_type_to_rxdid((uint8_t)i);
+
+ if (!ol_flag->required)
+ continue;
+
+ if (!(vf->supported_rxdid & BIT(rxdid))) {
+ PMD_DRV_LOG(ERR,
+ "rxdid[%u] is not supported in hardware",
+ rxdid);
+ rte_net_iavf_dynfield_flex_desc_metadata_offs = -1;
+ break;
+ }
+
+ offset = rte_mbuf_dynflag_register(&ol_flag->param);
+ if (unlikely(offset == -1)) {
+ PMD_DRV_LOG(ERR,
+ "failed to register offload '%s', error %d",
+ ol_flag->param.name, -rte_errno);
+
+ rte_net_iavf_dynfield_flex_desc_metadata_offs = -1;
+ break;
+ }
+
+ PMD_DRV_LOG(DEBUG,
+ "flex_desc extraction offload '%s' offset in mbuf is : %d",
+ ol_flag->param.name, offset);
+ *ol_flag->ol_flag = 1ULL << offset;
+ }
+}
+
static int
iavf_init_vf(struct rte_eth_dev *dev)
{
@@ -1222,6 +1609,12 @@ iavf_init_vf(struct rte_eth_dev *dev)
struct iavf_hw *hw = IAVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+ err = iavf_parse_devargs(dev);
+ if (err) {
+ PMD_INIT_LOG(ERR, "Failed to parse devargs");
+ goto err;
+ }
+
err = iavf_set_mac_type(hw);
if (err) {
PMD_INIT_LOG(ERR, "set_mac_type failed: %d", err);
@@ -1287,6 +1680,8 @@ iavf_init_vf(struct rte_eth_dev *dev)
vf->vf_reset = false;
+ iavf_init_flex_desc(dev);
+
return 0;
err_rss:
rte_free(vf->rss_key);
diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
index 05a7dd898..a65c8454d 100644
--- a/drivers/net/iavf/iavf_rxtx.c
+++ b/drivers/net/iavf/iavf_rxtx.c
@@ -26,6 +26,36 @@
#include "iavf.h"
#include "iavf_rxtx.h"
+#include "rte_pmd_iavf.h"
+
+/* Offset of mbuf dynamic field for flexible descriptor's extraction data */
+int rte_net_iavf_dynfield_flex_desc_metadata_offs = -1;
+
+/* Mask of mbuf dynamic flags for flexible descriptor's type */
+uint64_t rte_net_iavf_dynflag_flex_desc_vlan_mask;
+uint64_t rte_net_iavf_dynflag_flex_desc_ipv4_mask;
+uint64_t rte_net_iavf_dynflag_flex_desc_ipv6_mask;
+uint64_t rte_net_iavf_dynflag_flex_desc_ipv6_flow_mask;
+uint64_t rte_net_iavf_dynflag_flex_desc_tcp_mask;
+uint64_t rte_net_iavf_dynflag_flex_desc_ip_offset_mask;
+
+uint8_t
+iavf_flex_desc_type_to_rxdid(uint8_t flex_type)
+{
+ static uint8_t rxdid_map[] = {
+ [IAVF_FLEX_DESC_NONE] = IAVF_RXDID_COMMS_GENERIC,
+ [IAVF_FLEX_DESC_VLAN] = IAVF_RXDID_COMMS_AUX_VLAN,
+ [IAVF_FLEX_DESC_IPV4] = IAVF_RXDID_COMMS_AUX_IPV4,
+ [IAVF_FLEX_DESC_IPV6] = IAVF_RXDID_COMMS_AUX_IPV6,
+ [IAVF_FLEX_DESC_IPV6_FLOW] = IAVF_RXDID_COMMS_AUX_IPV6_FLOW,
+ [IAVF_FLEX_DESC_TCP] = IAVF_RXDID_COMMS_AUX_TCP,
+ [IAVF_FLEX_DESC_OVS] = IAVF_RXDID_COMMS_OVS_1,
+ [IAVF_FLEX_DESC_IP_OFFSET] = IAVF_RXDID_COMMS_AUX_IP_OFFSET,
+ };
+
+ return flex_type < RTE_DIM(rxdid_map) ?
+ rxdid_map[flex_type] : IAVF_RXDID_COMMS_GENERIC;
+}
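
The designated-initializer table with a bounds-checked fallback used by `iavf_flex_desc_type_to_rxdid()` is a handy pattern on its own. A self-contained sketch (the enum and RXDID values here are placeholders, not the driver's real numbers):

```c
#include <assert.h>
#include <stdint.h>

enum xtr { XTR_NONE, XTR_VLAN, XTR_IPV4 };

#define RXDID_GENERIC 16
#define RXDID_VLAN    17
#define RXDID_IPV4    18

/* Map an extraction type to a hardware RXDID profile; any value
 * outside the table falls back to the generic descriptor. */
static uint8_t type_to_rxdid(uint8_t t)
{
	static const uint8_t map[] = {
		[XTR_NONE] = RXDID_GENERIC,
		[XTR_VLAN] = RXDID_VLAN,
		[XTR_IPV4] = RXDID_IPV4,
	};

	return t < sizeof(map) ? map[t] : RXDID_GENERIC;
}
```

Because the table is `uint8_t`, `sizeof(map)` equals the element count, which is what `RTE_DIM()` computes in the patch.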
static inline int
check_rx_thresh(uint16_t nb_desc, uint16_t thresh)
@@ -294,6 +324,160 @@ static const struct iavf_txq_ops def_txq_ops = {
.release_mbufs = release_txq_mbufs,
};
+static inline void
+iavf_rxd_to_pkt_fields_by_comms_ovs(__rte_unused struct iavf_rx_queue *rxq,
+ struct rte_mbuf *mb,
+ volatile union iavf_rx_flex_desc *rxdp)
+{
+ volatile struct iavf_32b_rx_flex_desc_comms_ovs *desc =
+ (volatile struct iavf_32b_rx_flex_desc_comms_ovs *)rxdp;
+#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
+ uint16_t stat_err;
+#endif
+
+ if (desc->flow_id != 0xFFFFFFFF) {
+ mb->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+ mb->hash.fdir.hi = rte_le_to_cpu_32(desc->flow_id);
+ }
+
+#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
+ stat_err = rte_le_to_cpu_16(desc->status_error0);
+ if (likely(stat_err & (1 << IAVF_RX_FLEX_DESC_STATUS0_RSS_VALID_S))) {
+ mb->ol_flags |= PKT_RX_RSS_HASH;
+ mb->hash.rss = rte_le_to_cpu_32(desc->rss_hash);
+ }
+#endif
+}
+
+static inline void
+iavf_rxd_to_pkt_fields_by_comms_aux_v1(struct iavf_rx_queue *rxq,
+ struct rte_mbuf *mb,
+ volatile union iavf_rx_flex_desc *rxdp)
+{
+ volatile struct iavf_32b_rx_flex_desc_comms *desc =
+ (volatile struct iavf_32b_rx_flex_desc_comms *)rxdp;
+ uint16_t stat_err;
+
+ stat_err = rte_le_to_cpu_16(desc->status_error0);
+ if (likely(stat_err & (1 << IAVF_RX_FLEX_DESC_STATUS0_RSS_VALID_S))) {
+ mb->ol_flags |= PKT_RX_RSS_HASH;
+ mb->hash.rss = rte_le_to_cpu_32(desc->rss_hash);
+ }
+
+#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
+ if (desc->flow_id != 0xFFFFFFFF) {
+ mb->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+ mb->hash.fdir.hi = rte_le_to_cpu_32(desc->flow_id);
+ }
+
+ if (rxq->xtr_ol_flag) {
+ uint32_t metadata = 0;
+
+ stat_err = rte_le_to_cpu_16(desc->status_error1);
+
+ if (stat_err & (1 << IAVF_RX_FLEX_DESC_STATUS1_XTRMD4_VALID_S))
+ metadata = rte_le_to_cpu_16(desc->flex_ts.flex.aux0);
+
+ if (stat_err & (1 << IAVF_RX_FLEX_DESC_STATUS1_XTRMD5_VALID_S))
+ metadata |=
+ rte_le_to_cpu_16(desc->flex_ts.flex.aux1) << 16;
+
+ if (metadata) {
+ mb->ol_flags |= rxq->xtr_ol_flag;
+
+ *RTE_NET_IAVF_DYNF_FLEX_DESC_METADATA(mb) = metadata;
+ }
+ }
+#endif
+}
+
+static inline void
+iavf_rxd_to_pkt_fields_by_comms_aux_v2(struct iavf_rx_queue *rxq,
+ struct rte_mbuf *mb,
+ volatile union iavf_rx_flex_desc *rxdp)
+{
+ volatile struct iavf_32b_rx_flex_desc_comms *desc =
+ (volatile struct iavf_32b_rx_flex_desc_comms *)rxdp;
+ uint16_t stat_err;
+
+ stat_err = rte_le_to_cpu_16(desc->status_error0);
+ if (likely(stat_err & (1 << IAVF_RX_FLEX_DESC_STATUS0_RSS_VALID_S))) {
+ mb->ol_flags |= PKT_RX_RSS_HASH;
+ mb->hash.rss = rte_le_to_cpu_32(desc->rss_hash);
+ }
+
+#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
+ if (desc->flow_id != 0xFFFFFFFF) {
+ mb->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+ mb->hash.fdir.hi = rte_le_to_cpu_32(desc->flow_id);
+ }
+
+ if (rxq->xtr_ol_flag) {
+ uint32_t metadata = 0;
+
+ if (desc->flex_ts.flex.aux0 != 0xFFFF)
+ metadata = rte_le_to_cpu_16(desc->flex_ts.flex.aux0);
+ else if (desc->flex_ts.flex.aux1 != 0xFFFF)
+ metadata = rte_le_to_cpu_16(desc->flex_ts.flex.aux1);
+
+ if (metadata) {
+ mb->ol_flags |= rxq->xtr_ol_flag;
+
+ *RTE_NET_IAVF_DYNF_FLEX_DESC_METADATA(mb) = metadata;
+ }
+ }
+#endif
+}
+
+static void
+iavf_select_rxd_to_pkt_fields_handler(struct iavf_rx_queue *rxq, uint32_t rxdid)
+{
+ switch (rxdid) {
+ case IAVF_RXDID_COMMS_AUX_VLAN:
+ rxq->xtr_ol_flag = rte_net_iavf_dynflag_flex_desc_vlan_mask;
+ rxq->rxd_to_pkt_fields =
+ iavf_rxd_to_pkt_fields_by_comms_aux_v1;
+ break;
+ case IAVF_RXDID_COMMS_AUX_IPV4:
+ rxq->xtr_ol_flag = rte_net_iavf_dynflag_flex_desc_ipv4_mask;
+ rxq->rxd_to_pkt_fields =
+ iavf_rxd_to_pkt_fields_by_comms_aux_v1;
+ break;
+ case IAVF_RXDID_COMMS_AUX_IPV6:
+ rxq->xtr_ol_flag = rte_net_iavf_dynflag_flex_desc_ipv6_mask;
+ rxq->rxd_to_pkt_fields =
+ iavf_rxd_to_pkt_fields_by_comms_aux_v1;
+ break;
+ case IAVF_RXDID_COMMS_AUX_IPV6_FLOW:
+ rxq->xtr_ol_flag =
+ rte_net_iavf_dynflag_flex_desc_ipv6_flow_mask;
+ rxq->rxd_to_pkt_fields =
+ iavf_rxd_to_pkt_fields_by_comms_aux_v1;
+ break;
+ case IAVF_RXDID_COMMS_AUX_TCP:
+ rxq->xtr_ol_flag = rte_net_iavf_dynflag_flex_desc_tcp_mask;
+ rxq->rxd_to_pkt_fields =
+ iavf_rxd_to_pkt_fields_by_comms_aux_v1;
+ break;
+ case IAVF_RXDID_COMMS_AUX_IP_OFFSET:
+ rxq->xtr_ol_flag =
+ rte_net_iavf_dynflag_flex_desc_ip_offset_mask;
+ rxq->rxd_to_pkt_fields =
+ iavf_rxd_to_pkt_fields_by_comms_aux_v2;
+ break;
+ case IAVF_RXDID_COMMS_OVS_1:
+ rxq->rxd_to_pkt_fields = iavf_rxd_to_pkt_fields_by_comms_ovs;
+ break;
+ default:
+ /* IAVF_FLEX_DESC_NONE maps to the default (OVS) RXDID handler */
+ rxq->rxd_to_pkt_fields = iavf_rxd_to_pkt_fields_by_comms_ovs;
+ break;
+ }
+
+ if (!rte_net_iavf_dynf_flex_desc_metadata_avail())
+ rxq->xtr_ol_flag = 0;
+}
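
The handler selection above replaces a per-packet switch with one indirect call chosen at queue setup, which is what lets the hot Rx path stay branch-free per RXDID. A minimal sketch of the pattern (all names and the descriptor decoding are illustrative, not the driver's):

```c
#include <assert.h>
#include <stdint.h>

typedef int (*parse_fn)(uint32_t desc);

/* Two toy per-RXDID parsers standing in for the
 * iavf_rxd_to_pkt_fields_by_* family. */
static int parse_ovs(uint32_t desc) { return (int)(desc & 0xff); }
static int parse_aux(uint32_t desc) { return (int)(desc >> 8); }

struct rxq { parse_fn parse; };

/* Bind the parser once, at queue setup time. 22 mirrors
 * IAVF_RXDID_COMMS_OVS_1 in the patch. */
static void select_handler(struct rxq *q, int rxdid)
{
	switch (rxdid) {
	case 22: q->parse = parse_ovs; break;
	default: q->parse = parse_aux; break;
	}
}
```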
+
int
iavf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
uint16_t nb_desc, unsigned int socket_id,
@@ -309,6 +493,7 @@ iavf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
struct iavf_rx_queue *rxq;
const struct rte_memzone *mz;
uint32_t ring_size;
+ uint8_t flex_desc;
uint16_t len;
uint16_t rx_free_thresh;
@@ -346,14 +531,16 @@ iavf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
return -ENOMEM;
}
- if (vf->vf_res->vf_cap_flags &
- VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC &&
- vf->supported_rxdid & BIT(IAVF_RXDID_COMMS_OVS_1)) {
- rxq->rxdid = IAVF_RXDID_COMMS_OVS_1;
+ if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC) {
+ flex_desc = vf->flex_desc ? vf->flex_desc[queue_idx] :
+ IAVF_FLEX_DESC_NONE;
+ rxq->rxdid = iavf_flex_desc_type_to_rxdid(flex_desc);
} else {
rxq->rxdid = IAVF_RXDID_LEGACY_1;
}
+ iavf_select_rxd_to_pkt_fields_handler(rxq, rxq->rxdid);
+
rxq->mp = mp;
rxq->nb_rx_desc = nb_desc;
rxq->rx_free_thresh = rx_free_thresh;
@@ -715,6 +902,45 @@ iavf_stop_queues(struct rte_eth_dev *dev)
}
}
+#define IAVF_RX_FLEX_ERR0_BITS \
+ ((1 << IAVF_RX_FLEX_DESC_STATUS0_HBO_S) | \
+ (1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_IPE_S) | \
+ (1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_L4E_S) | \
+ (1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_EIPE_S) | \
+ (1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_EUDPE_S) | \
+ (1 << IAVF_RX_FLEX_DESC_STATUS0_RXE_S))
+
+/* Rx L3/L4 checksum */
+static inline uint64_t
+iavf_rxd_error_to_pkt_flags(uint16_t stat_err0)
+{
+ uint64_t flags = 0;
+
+ /* check if HW has decoded the packet and checksum */
+ if (unlikely(!(stat_err0 & (1 << IAVF_RX_FLEX_DESC_STATUS0_L3L4P_S))))
+ return 0;
+
+ if (likely(!(stat_err0 & IAVF_RX_FLEX_ERR0_BITS))) {
+ flags |= (PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD);
+ return flags;
+ }
+
+ if (unlikely(stat_err0 & (1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_IPE_S)))
+ flags |= PKT_RX_IP_CKSUM_BAD;
+ else
+ flags |= PKT_RX_IP_CKSUM_GOOD;
+
+ if (unlikely(stat_err0 & (1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_L4E_S)))
+ flags |= PKT_RX_L4_CKSUM_BAD;
+ else
+ flags |= PKT_RX_L4_CKSUM_GOOD;
+
+ if (unlikely(stat_err0 & (1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_EIPE_S)))
+ flags |= PKT_RX_EIP_CKSUM_BAD;
+
+ return flags;
+}
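
The decode above first gates on the "L3/L4 parsed" bit, then turns each error bit into a good/bad flag pair. A reduced, self-contained version of that logic (bit positions and flag values are made up for the sketch, the real ones come from the flex descriptor spec and mbuf flags):

```c
#include <assert.h>
#include <stdint.h>

#define ST_L3L4P  (1u << 3)	/* HW parsed L3/L4 headers */
#define ST_IPE    (1u << 5)	/* IP checksum error */
#define ST_L4E    (1u << 6)	/* L4 checksum error */

#define F_IP_GOOD (1ull << 0)
#define F_IP_BAD  (1ull << 1)
#define F_L4_GOOD (1ull << 2)
#define F_L4_BAD  (1ull << 3)

static uint64_t err_to_flags(uint16_t st)
{
	uint64_t f = 0;

	if (!(st & ST_L3L4P))	/* HW gave no verdict: report nothing */
		return 0;
	f |= (st & ST_IPE) ? F_IP_BAD : F_IP_GOOD;
	f |= (st & ST_L4E) ? F_L4_BAD : F_L4_GOOD;
	return f;
}
```

The early return matters: without the parsed bit set, neither GOOD nor BAD may be reported, since the hardware made no checksum decision.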
+
static inline void
iavf_rxd_to_vlan_tci(struct rte_mbuf *mb, volatile union iavf_rx_desc *rxdp)
{
@@ -740,6 +966,21 @@ iavf_flex_rxd_to_vlan_tci(struct rte_mbuf *mb,
} else {
mb->vlan_tci = 0;
}
+
+#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
+ if (rte_le_to_cpu_16(rxdp->wb.status_error1) &
+ (1 << IAVF_RX_FLEX_DESC_STATUS1_L2TAG2P_S)) {
+ mb->ol_flags |= PKT_RX_QINQ_STRIPPED | PKT_RX_QINQ |
+ PKT_RX_VLAN_STRIPPED | PKT_RX_VLAN;
+ mb->vlan_tci_outer = mb->vlan_tci;
+ mb->vlan_tci = rte_le_to_cpu_16(rxdp->wb.l2tag2_2nd);
+ PMD_RX_LOG(DEBUG, "Descriptor l2tag2_1: %u, l2tag2_2: %u",
+ rte_le_to_cpu_16(rxdp->wb.l2tag2_1st),
+ rte_le_to_cpu_16(rxdp->wb.l2tag2_2nd));
+ } else {
+ mb->vlan_tci_outer = 0;
+ }
+#endif
}
/* Translate the rx descriptor status and error fields to pkt flags */
@@ -804,30 +1045,6 @@ iavf_rxd_build_fdir(volatile union iavf_rx_desc *rxdp, struct rte_mbuf *mb)
return flags;
}
-
-/* Translate the rx flex descriptor status to pkt flags */
-static inline void
-iavf_rxd_to_pkt_fields(struct rte_mbuf *mb,
- volatile union iavf_rx_flex_desc *rxdp)
-{
- volatile struct iavf_32b_rx_flex_desc_comms_ovs *desc =
- (volatile struct iavf_32b_rx_flex_desc_comms_ovs *)rxdp;
-#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
- uint16_t stat_err;
-
- stat_err = rte_le_to_cpu_16(desc->status_error0);
- if (likely(stat_err & (1 << IAVF_RX_FLEX_DESC_STATUS0_RSS_VALID_S))) {
- mb->ol_flags |= PKT_RX_RSS_HASH;
- mb->hash.rss = rte_le_to_cpu_32(desc->rss_hash);
- }
-#endif
-
- if (desc->flow_id != 0xFFFFFFFF) {
- mb->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
- mb->hash.fdir.hi = rte_le_to_cpu_32(desc->flow_id);
- }
-}
-
#define IAVF_RX_FLEX_ERR0_BITS \
((1 << IAVF_RX_FLEX_DESC_STATUS0_HBO_S) | \
(1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_IPE_S) | \
@@ -1082,7 +1299,7 @@ iavf_recv_pkts_flex_rxd(void *rx_queue,
rxm->packet_type = ptype_tbl[IAVF_RX_FLEX_DESC_PTYPE_M &
rte_le_to_cpu_16(rxd.wb.ptype_flex_flags0)];
iavf_flex_rxd_to_vlan_tci(rxm, &rxd);
- iavf_rxd_to_pkt_fields(rxm, &rxd);
+ rxq->rxd_to_pkt_fields(rxq, rxm, &rxd);
pkt_flags = iavf_flex_rxd_error_to_pkt_flags(rx_stat_err0);
rxm->ol_flags |= pkt_flags;
@@ -1223,7 +1440,7 @@ iavf_recv_scattered_pkts_flex_rxd(void *rx_queue, struct rte_mbuf **rx_pkts,
first_seg->packet_type = ptype_tbl[IAVF_RX_FLEX_DESC_PTYPE_M &
rte_le_to_cpu_16(rxd.wb.ptype_flex_flags0)];
iavf_flex_rxd_to_vlan_tci(first_seg, &rxd);
- iavf_rxd_to_pkt_fields(first_seg, &rxd);
+ rxq->rxd_to_pkt_fields(rxq, first_seg, &rxd);
pkt_flags = iavf_flex_rxd_error_to_pkt_flags(rx_stat_err0);
first_seg->ol_flags |= pkt_flags;
@@ -1460,7 +1677,7 @@ iavf_rx_scan_hw_ring_flex_rxd(struct iavf_rx_queue *rxq)
mb->packet_type = ptype_tbl[IAVF_RX_FLEX_DESC_PTYPE_M &
rte_le_to_cpu_16(rxdp[j].wb.ptype_flex_flags0)];
iavf_flex_rxd_to_vlan_tci(mb, &rxdp[j]);
- iavf_rxd_to_pkt_fields(mb, &rxdp[j]);
+ rxq->rxd_to_pkt_fields(rxq, mb, &rxdp[j]);
stat_err0 = rte_le_to_cpu_16(rxdp[j].wb.status_error0);
pkt_flags = iavf_flex_rxd_error_to_pkt_flags(stat_err0);
@@ -1652,7 +1869,7 @@ rx_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
if (rxq->rx_nb_avail)
return iavf_rx_fill_from_stage(rxq, rx_pkts, nb_pkts);
- if (rxq->rxdid == IAVF_RXDID_COMMS_OVS_1)
+ if (rxq->rxdid >= IAVF_RXDID_FLEX_NIC && rxq->rxdid <= IAVF_RXDID_LAST)
nb_rx = (uint16_t)iavf_rx_scan_hw_ring_flex_rxd(rxq);
else
nb_rx = (uint16_t)iavf_rx_scan_hw_ring(rxq);
@@ -2100,6 +2317,7 @@ iavf_set_rx_function(struct rte_eth_dev *dev)
struct iavf_adapter *adapter =
IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+
#ifdef RTE_ARCH_X86
struct iavf_rx_queue *rxq;
int i;
diff --git a/drivers/net/iavf/iavf_rxtx.h b/drivers/net/iavf/iavf_rxtx.h
index 59625a979..de7a1f633 100644
--- a/drivers/net/iavf/iavf_rxtx.h
+++ b/drivers/net/iavf/iavf_rxtx.h
@@ -57,110 +57,6 @@
#define IAVF_TX_OFFLOAD_NOTSUP_MASK \
(PKT_TX_OFFLOAD_MASK ^ IAVF_TX_OFFLOAD_MASK)
-/* HW desc structure, both 16-byte and 32-byte types are supported */
-#ifdef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
-#define iavf_rx_desc iavf_16byte_rx_desc
-#define iavf_rx_flex_desc iavf_16b_rx_flex_desc
-#else
-#define iavf_rx_desc iavf_32byte_rx_desc
-#define iavf_rx_flex_desc iavf_32b_rx_flex_desc
-#endif
-
-struct iavf_rxq_ops {
- void (*release_mbufs)(struct iavf_rx_queue *rxq);
-};
-
-struct iavf_txq_ops {
- void (*release_mbufs)(struct iavf_tx_queue *txq);
-};
-
-/* Structure associated with each Rx queue. */
-struct iavf_rx_queue {
- struct rte_mempool *mp; /* mbuf pool to populate Rx ring */
- const struct rte_memzone *mz; /* memzone for Rx ring */
- volatile union iavf_rx_desc *rx_ring; /* Rx ring virtual address */
- uint64_t rx_ring_phys_addr; /* Rx ring DMA address */
- struct rte_mbuf **sw_ring; /* address of SW ring */
- uint16_t nb_rx_desc; /* ring length */
- uint16_t rx_tail; /* current value of tail */
- volatile uint8_t *qrx_tail; /* register address of tail */
- uint16_t rx_free_thresh; /* max free RX desc to hold */
- uint16_t nb_rx_hold; /* number of held free RX desc */
- struct rte_mbuf *pkt_first_seg; /* first segment of current packet */
- struct rte_mbuf *pkt_last_seg; /* last segment of current packet */
- struct rte_mbuf fake_mbuf; /* dummy mbuf */
- uint8_t rxdid;
-
- /* used for VPMD */
- uint16_t rxrearm_nb; /* number of remaining to be re-armed */
- uint16_t rxrearm_start; /* the idx we start the re-arming from */
- uint64_t mbuf_initializer; /* value to init mbufs */
-
- /* for rx bulk */
- uint16_t rx_nb_avail; /* number of staged packets ready */
- uint16_t rx_next_avail; /* index of next staged packets */
- uint16_t rx_free_trigger; /* triggers rx buffer allocation */
- struct rte_mbuf *rx_stage[IAVF_RX_MAX_BURST * 2]; /* store mbuf */
-
- uint16_t port_id; /* device port ID */
- uint8_t crc_len; /* 0 if CRC stripped, 4 otherwise */
- uint8_t fdir_enabled; /* 0 if FDIR disabled, 1 when enabled */
- uint16_t queue_id; /* Rx queue index */
- uint16_t rx_buf_len; /* The packet buffer size */
- uint16_t rx_hdr_len; /* The header buffer size */
- uint16_t max_pkt_len; /* Maximum packet length */
- struct iavf_vsi *vsi; /**< the VSI this queue belongs to */
-
- bool q_set; /* if rx queue has been configured */
- bool rx_deferred_start; /* don't start this queue in dev start */
- const struct iavf_rxq_ops *ops;
-};
-
-struct iavf_tx_entry {
- struct rte_mbuf *mbuf;
- uint16_t next_id;
- uint16_t last_id;
-};
-
-/* Structure associated with each TX queue. */
-struct iavf_tx_queue {
- const struct rte_memzone *mz; /* memzone for Tx ring */
- volatile struct iavf_tx_desc *tx_ring; /* Tx ring virtual address */
- uint64_t tx_ring_phys_addr; /* Tx ring DMA address */
- struct iavf_tx_entry *sw_ring; /* address array of SW ring */
- uint16_t nb_tx_desc; /* ring length */
- uint16_t tx_tail; /* current value of tail */
- volatile uint8_t *qtx_tail; /* register address of tail */
- /* number of used desc since RS bit set */
- uint16_t nb_used;
- uint16_t nb_free;
- uint16_t last_desc_cleaned; /* last desc have been cleaned*/
- uint16_t free_thresh;
- uint16_t rs_thresh;
-
- uint16_t port_id;
- uint16_t queue_id;
- uint64_t offloads;
- uint16_t next_dd; /* next to set RS, for VPMD */
- uint16_t next_rs; /* next to check DD, for VPMD */
-
- bool q_set; /* if rx queue has been configured */
- bool tx_deferred_start; /* don't start this queue in dev start */
- const struct iavf_txq_ops *ops;
-};
-
-/* Offload features */
-union iavf_tx_offload {
- uint64_t data;
- struct {
- uint64_t l2_len:7; /* L2 (MAC) Header Length. */
- uint64_t l3_len:9; /* L3 (IP) Header Length. */
- uint64_t l4_len:8; /* L4 Header Length. */
- uint64_t tso_segsz:16; /* TCP TSO segment size */
- /* uint64_t unused : 24; */
- };
-};
-
/* Rx Flex Descriptors
* These descriptors are used instead of the legacy version descriptors
*/
@@ -331,6 +227,7 @@ enum iavf_rxdid {
IAVF_RXDID_COMMS_AUX_TCP = 21,
IAVF_RXDID_COMMS_OVS_1 = 22,
IAVF_RXDID_COMMS_OVS_2 = 23,
+ IAVF_RXDID_COMMS_AUX_IP_OFFSET = 25,
IAVF_RXDID_LAST = 63,
};
@@ -355,12 +252,138 @@ enum iavf_rx_flex_desc_status_error_0_bits {
IAVF_RX_FLEX_DESC_STATUS0_LAST /* this entry must be last!!! */
};
+enum iavf_rx_flex_desc_status_error_1_bits {
+ /* Note: These are predefined bit offsets */
+ IAVF_RX_FLEX_DESC_STATUS1_CPM_S = 0, /* 4 bits */
+ IAVF_RX_FLEX_DESC_STATUS1_NAT_S = 4,
+ IAVF_RX_FLEX_DESC_STATUS1_CRYPTO_S = 5,
+ /* [10:6] reserved */
+ IAVF_RX_FLEX_DESC_STATUS1_L2TAG2P_S = 11,
+ IAVF_RX_FLEX_DESC_STATUS1_XTRMD2_VALID_S = 12,
+ IAVF_RX_FLEX_DESC_STATUS1_XTRMD3_VALID_S = 13,
+ IAVF_RX_FLEX_DESC_STATUS1_XTRMD4_VALID_S = 14,
+ IAVF_RX_FLEX_DESC_STATUS1_XTRMD5_VALID_S = 15,
+ IAVF_RX_FLEX_DESC_STATUS1_LAST /* this entry must be last!!! */
+};
+
/* for iavf_32b_rx_flex_desc.ptype_flex_flags0 member */
#define IAVF_RX_FLEX_DESC_PTYPE_M (0x3FF) /* 10-bits */
/* for iavf_32b_rx_flex_desc.pkt_len member */
#define IAVF_RX_FLX_DESC_PKT_LEN_M (0x3FFF) /* 14-bits */
+/* HW desc structure, both 16-byte and 32-byte types are supported */
+#ifdef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
+#define iavf_rx_desc iavf_16byte_rx_desc
+#define iavf_rx_flex_desc iavf_16b_rx_flex_desc
+#else
+#define iavf_rx_desc iavf_32byte_rx_desc
+#define iavf_rx_flex_desc iavf_32b_rx_flex_desc
+#endif
+
+typedef void (*iavf_rxd_to_pkt_fields_t)(struct iavf_rx_queue *rxq,
+ struct rte_mbuf *mb,
+ volatile union iavf_rx_flex_desc *rxdp);
+
+struct iavf_rxq_ops {
+ void (*release_mbufs)(struct iavf_rx_queue *rxq);
+};
+
+struct iavf_txq_ops {
+ void (*release_mbufs)(struct iavf_tx_queue *txq);
+};
+
+/* Structure associated with each Rx queue. */
+struct iavf_rx_queue {
+ struct rte_mempool *mp; /* mbuf pool to populate Rx ring */
+ const struct rte_memzone *mz; /* memzone for Rx ring */
+ volatile union iavf_rx_desc *rx_ring; /* Rx ring virtual address */
+ uint64_t rx_ring_phys_addr; /* Rx ring DMA address */
+ struct rte_mbuf **sw_ring; /* address of SW ring */
+ uint16_t nb_rx_desc; /* ring length */
+ uint16_t rx_tail; /* current value of tail */
+ volatile uint8_t *qrx_tail; /* register address of tail */
+ uint16_t rx_free_thresh; /* max free RX desc to hold */
+ uint16_t nb_rx_hold; /* number of held free RX desc */
+ struct rte_mbuf *pkt_first_seg; /* first segment of current packet */
+ struct rte_mbuf *pkt_last_seg; /* last segment of current packet */
+ struct rte_mbuf fake_mbuf; /* dummy mbuf */
+ uint8_t rxdid;
+
+ /* used for VPMD */
+ uint16_t rxrearm_nb; /* number of remaining to be re-armed */
+ uint16_t rxrearm_start; /* the idx we start the re-arming from */
+ uint64_t mbuf_initializer; /* value to init mbufs */
+
+ /* for rx bulk */
+ uint16_t rx_nb_avail; /* number of staged packets ready */
+ uint16_t rx_next_avail; /* index of next staged packets */
+ uint16_t rx_free_trigger; /* triggers rx buffer allocation */
+ struct rte_mbuf *rx_stage[IAVF_RX_MAX_BURST * 2]; /* store mbuf */
+
+ uint16_t port_id; /* device port ID */
+ uint8_t crc_len; /* 0 if CRC stripped, 4 otherwise */
+ uint8_t fdir_enabled; /* 0 if FDIR disabled, 1 when enabled */
+ uint16_t queue_id; /* Rx queue index */
+ uint16_t rx_buf_len; /* The packet buffer size */
+ uint16_t rx_hdr_len; /* The header buffer size */
+ uint16_t max_pkt_len; /* Maximum packet length */
+ struct iavf_vsi *vsi; /**< the VSI this queue belongs to */
+
+ bool q_set; /* if rx queue has been configured */
+ bool rx_deferred_start; /* don't start this queue in dev start */
+ const struct iavf_rxq_ops *ops;
+ uint64_t xtr_ol_flag;
+ /* flexible descriptor metadata extraction offload flag */
+ iavf_rxd_to_pkt_fields_t rxd_to_pkt_fields;
+ /* handle flexible descriptor by RXDID */
+};
+
+struct iavf_tx_entry {
+ struct rte_mbuf *mbuf;
+ uint16_t next_id;
+ uint16_t last_id;
+};
+
+/* Structure associated with each TX queue. */
+struct iavf_tx_queue {
+ const struct rte_memzone *mz; /* memzone for Tx ring */
+ volatile struct iavf_tx_desc *tx_ring; /* Tx ring virtual address */
+ uint64_t tx_ring_phys_addr; /* Tx ring DMA address */
+ struct iavf_tx_entry *sw_ring; /* address array of SW ring */
+ uint16_t nb_tx_desc; /* ring length */
+ uint16_t tx_tail; /* current value of tail */
+ volatile uint8_t *qtx_tail; /* register address of tail */
+ /* number of used desc since RS bit set */
+ uint16_t nb_used;
+ uint16_t nb_free;
+ uint16_t last_desc_cleaned; /* last desc have been cleaned*/
+ uint16_t free_thresh;
+ uint16_t rs_thresh;
+
+ uint16_t port_id;
+ uint16_t queue_id;
+ uint64_t offloads;
+ uint16_t next_dd; /* next to set RS, for VPMD */
+ uint16_t next_rs; /* next to check DD, for VPMD */
+
+ bool q_set; /* if rx queue has been configured */
+ bool tx_deferred_start; /* don't start this queue in dev start */
+ const struct iavf_txq_ops *ops;
+};
+
+/* Offload features */
+union iavf_tx_offload {
+ uint64_t data;
+ struct {
+ uint64_t l2_len:7; /* L2 (MAC) Header Length. */
+ uint64_t l3_len:9; /* L3 (IP) Header Length. */
+ uint64_t l4_len:8; /* L4 Header Length. */
+ uint64_t tso_segsz:16; /* TCP TSO segment size */
+ /* uint64_t unused : 24; */
+ };
+};
+
int iavf_dev_rx_queue_setup(struct rte_eth_dev *dev,
uint16_t queue_idx,
uint16_t nb_desc,
@@ -438,6 +461,8 @@ int iavf_tx_vec_dev_check(struct rte_eth_dev *dev);
int iavf_rxq_vec_setup(struct iavf_rx_queue *rxq);
int iavf_txq_vec_setup(struct iavf_tx_queue *txq);
+uint8_t iavf_flex_desc_type_to_rxdid(uint8_t xtr_type);
+
const uint32_t *iavf_get_default_ptype_table(void);
static inline
diff --git a/drivers/net/iavf/iavf_vchnl.c b/drivers/net/iavf/iavf_vchnl.c
index 76f8e38d1..7981dfa30 100644
--- a/drivers/net/iavf/iavf_vchnl.c
+++ b/drivers/net/iavf/iavf_vchnl.c
@@ -647,25 +647,27 @@ iavf_configure_queues(struct iavf_adapter *adapter)
#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
if (vf->vf_res->vf_cap_flags &
- VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC &&
- vf->supported_rxdid & BIT(IAVF_RXDID_COMMS_OVS_1)) {
- vc_qp->rxq.rxdid = IAVF_RXDID_COMMS_OVS_1;
- PMD_DRV_LOG(NOTICE, "request RXDID == %d in "
- "Queue[%d]", vc_qp->rxq.rxdid, i);
+ VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC &&
+ vf->supported_rxdid & BIT(rxq[i]->rxdid)) {
+ vc_qp->rxq.rxdid = rxq[i]->rxdid;
+ PMD_DRV_LOG(NOTICE, "request RXDID[%d] in Queue[%d]",
+ vc_qp->rxq.rxdid, i);
} else {
+ PMD_DRV_LOG(NOTICE, "RXDID[%d] is not supported, "
+ "request default RXDID[%d] in Queue[%d]",
+ rxq[i]->rxdid, IAVF_RXDID_LEGACY_1, i);
vc_qp->rxq.rxdid = IAVF_RXDID_LEGACY_1;
- PMD_DRV_LOG(NOTICE, "request RXDID == %d in "
- "Queue[%d]", vc_qp->rxq.rxdid, i);
}
#else
if (vf->vf_res->vf_cap_flags &
VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC &&
vf->supported_rxdid & BIT(IAVF_RXDID_LEGACY_0)) {
vc_qp->rxq.rxdid = IAVF_RXDID_LEGACY_0;
- PMD_DRV_LOG(NOTICE, "request RXDID == %d in "
- "Queue[%d]", vc_qp->rxq.rxdid, i);
+ PMD_DRV_LOG(NOTICE, "request RXDID[%d] in Queue[%d]",
+ vc_qp->rxq.rxdid, i);
} else {
- PMD_DRV_LOG(ERR, "RXDID == 0 is not supported");
+ PMD_DRV_LOG(ERR, "RXDID[%d] is not supported",
+ IAVF_RXDID_LEGACY_0);
return -1;
}
#endif
diff --git a/drivers/net/iavf/meson.build b/drivers/net/iavf/meson.build
index a3fad363d..cd5159332 100644
--- a/drivers/net/iavf/meson.build
+++ b/drivers/net/iavf/meson.build
@@ -35,3 +35,5 @@ if arch_subdir == 'x86'
objs += iavf_avx2_lib.extract_objects('iavf_rxtx_vec_avx2.c')
endif
endif
+
+install_headers('rte_pmd_iavf.h')
diff --git a/drivers/net/iavf/rte_pmd_iavf.h b/drivers/net/iavf/rte_pmd_iavf.h
new file mode 100644
index 000000000..dddb4340a
--- /dev/null
+++ b/drivers/net/iavf/rte_pmd_iavf.h
@@ -0,0 +1,250 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#ifndef _RTE_PMD_IAVF_H_
+#define _RTE_PMD_IAVF_H_
+
+/**
+ * @file rte_pmd_iavf.h
+ *
+ * iavf PMD specific functions.
+ *
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
+ *
+ */
+
+#include <stdio.h>
+#include <rte_mbuf.h>
+#include <rte_mbuf_dyn.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * The formats of the metadata extracted via the supported network
+ * flexible descriptors.
+ */
+union rte_net_iavf_flex_desc_metadata {
+ uint32_t metadata;
+
+ struct {
+ uint16_t data0;
+ uint16_t data1;
+ } raw;
+
+ struct {
+ uint16_t stag_vid:12,
+ stag_dei:1,
+ stag_pcp:3;
+ uint16_t ctag_vid:12,
+ ctag_dei:1,
+ ctag_pcp:3;
+ } vlan;
+
+ struct {
+ uint16_t protocol:8,
+ ttl:8;
+ uint16_t tos:8,
+ ihl:4,
+ version:4;
+ } ipv4;
+
+ struct {
+ uint16_t hoplimit:8,
+ nexthdr:8;
+ uint16_t flowhi4:4,
+ tc:8,
+ version:4;
+ } ipv6;
+
+ struct {
+ uint16_t flowlo16;
+ uint16_t flowhi4:4,
+ tc:8,
+ version:4;
+ } ipv6_flow;
+
+ struct {
+ uint16_t fin:1,
+ syn:1,
+ rst:1,
+ psh:1,
+ ack:1,
+ urg:1,
+ ece:1,
+ cwr:1,
+ res1:4,
+ doff:4;
+ uint16_t rsvd;
+ } tcp;
+
+ uint32_t ip_ofs;
+};
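
The union overlays one 32-bit metadata word with per-type views. Since C bitfield layout is implementation-defined, a portable consumer may prefer explicit shifts and masks over the bitfield members; a minimal sketch of that idea (the `meta_low`/`meta_high` helpers are illustrative, not part of the API):

```c
#include <assert.h>
#include <stdint.h>

/* Overlay of a single 32-bit metadata word, mirroring the shape of
 * union rte_net_iavf_flex_desc_metadata in miniature. */
union meta {
	uint32_t metadata;
	struct { uint16_t data0, data1; } raw;
};

/* Endianness-independent accessors for the two 16-bit halves. */
static uint16_t meta_low(union meta m)  { return (uint16_t)(m.metadata & 0xffff); }
static uint16_t meta_high(union meta m) { return (uint16_t)(m.metadata >> 16); }
```

Note that `raw.data0` only matches `meta_low()` on little-endian hosts; the shift-based helpers give the same answer everywhere.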
+
+/* Offset of mbuf dynamic field for flexible descriptor's extraction data */
+extern int rte_net_iavf_dynfield_flex_desc_metadata_offs;
+
+/* Mask of mbuf dynamic flags for flexible descriptor's extraction type */
+extern uint64_t rte_net_iavf_dynflag_flex_desc_vlan_mask;
+extern uint64_t rte_net_iavf_dynflag_flex_desc_ipv4_mask;
+extern uint64_t rte_net_iavf_dynflag_flex_desc_ipv6_mask;
+extern uint64_t rte_net_iavf_dynflag_flex_desc_ipv6_flow_mask;
+extern uint64_t rte_net_iavf_dynflag_flex_desc_tcp_mask;
+extern uint64_t rte_net_iavf_dynflag_flex_desc_ip_offset_mask;
+
+/**
+ * The mbuf dynamic field pointer for flexible descriptor's extraction metadata.
+ */
+#define RTE_NET_IAVF_DYNF_FLEX_DESC_METADATA(m) \
+ RTE_MBUF_DYNFIELD((m), \
+ rte_net_iavf_dynfield_flex_desc_metadata_offs, \
+ uint32_t *)
+
+/**
+ * The mbuf dynamic flag for VLAN protocol extraction metadata; it is
+ * valid when the devargs 'flex_desc' has 'vlan' specified.
+ */
+#define RTE_IAVF_PKT_RX_DYNF_FLEX_DESC_VLAN \
+ (rte_net_iavf_dynflag_flex_desc_vlan_mask)
+
+/**
+ * The mbuf dynamic flag for IPv4 protocol extraction metadata; it is
+ * valid when the devargs 'flex_desc' has 'ipv4' specified.
+ */
+#define RTE_IAVF_PKT_RX_DYNF_FLEX_DESC_IPV4 \
+ (rte_net_iavf_dynflag_flex_desc_ipv4_mask)
+
+/**
+ * The mbuf dynamic flag for IPv6 protocol extraction metadata; it is
+ * valid when the devargs 'flex_desc' has 'ipv6' specified.
+ */
+#define RTE_IAVF_PKT_RX_DYNF_FLEX_DESC_IPV6 \
+ (rte_net_iavf_dynflag_flex_desc_ipv6_mask)
+
+/**
+ * The mbuf dynamic flag for IPv6 with flow protocol extraction metadata;
+ * it is valid when the devargs 'flex_desc' has 'ipv6_flow' specified.
+ */
+#define RTE_IAVF_PKT_RX_DYNF_FLEX_DESC_IPV6_FLOW \
+ (rte_net_iavf_dynflag_flex_desc_ipv6_flow_mask)
+
+/**
+ * The mbuf dynamic flag for TCP protocol extraction metadata. It is valid
+ * when the devargs 'flex_desc' has 'tcp' specified.
+ */
+#define RTE_IAVF_PKT_RX_DYNF_FLEX_DESC_TCP \
+ (rte_net_iavf_dynflag_flex_desc_tcp_mask)
+
+/**
+ * The mbuf dynamic flag for IP_OFFSET extraction metadata. It is valid
+ * when the devargs 'flex_desc' has 'ip_offset' specified.
+ */
+#define RTE_IAVF_PKT_RX_DYNF_FLEX_DESC_IP_OFFSET \
+ (rte_net_iavf_dynflag_flex_desc_ip_offset_mask)
+
+/**
+ * Check if mbuf dynamic field for flexible descriptor's extraction metadata
+ * is registered.
+ *
+ * @return
+ * True if registered, false otherwise.
+ */
+__rte_experimental
+static __rte_always_inline int
+rte_net_iavf_dynf_flex_desc_metadata_avail(void)
+{
+ return rte_net_iavf_dynfield_flex_desc_metadata_offs != -1;
+}
+
+/**
+ * Get the mbuf dynamic field for flexible descriptor's extraction metadata.
+ *
+ * @param m
+ * The pointer to the mbuf.
+ * @return
+ * The saved protocol extraction metadata.
+ */
+__rte_experimental
+static __rte_always_inline uint32_t
+rte_net_iavf_dynf_flex_desc_metadata_get(struct rte_mbuf *m)
+{
+ return *RTE_NET_IAVF_DYNF_FLEX_DESC_METADATA(m);
+}
+
+/**
+ * Dump the mbuf dynamic field for flexible descriptor's extraction metadata.
+ *
+ * @param m
+ * The pointer to the mbuf.
+ */
+__rte_experimental
+static inline void
+rte_net_iavf_dump_flex_desc_metadata(struct rte_mbuf *m)
+{
+ union rte_net_iavf_flex_desc_metadata data;
+
+ if (!rte_net_iavf_dynf_flex_desc_metadata_avail())
+ return;
+
+ data.metadata = rte_net_iavf_dynf_flex_desc_metadata_get(m);
+
+ if (m->ol_flags & RTE_IAVF_PKT_RX_DYNF_FLEX_DESC_VLAN)
+ printf(" - Flexible descriptor's Metadata: [0x%04x:0x%04x],"
+ "vlan,stag=%u:%u:%u,ctag=%u:%u:%u",
+ data.raw.data0, data.raw.data1,
+ data.vlan.stag_pcp,
+ data.vlan.stag_dei,
+ data.vlan.stag_vid,
+ data.vlan.ctag_pcp,
+ data.vlan.ctag_dei,
+ data.vlan.ctag_vid);
+ else if (m->ol_flags & RTE_IAVF_PKT_RX_DYNF_FLEX_DESC_IPV4)
+ printf(" - Flexible descriptor's Metadata: [0x%04x:0x%04x],"
+ "ipv4,ver=%u,hdrlen=%u,tos=%u,ttl=%u,proto=%u",
+ data.raw.data0, data.raw.data1,
+ data.ipv4.version,
+ data.ipv4.ihl,
+ data.ipv4.tos,
+ data.ipv4.ttl,
+ data.ipv4.protocol);
+ else if (m->ol_flags & RTE_IAVF_PKT_RX_DYNF_FLEX_DESC_IPV6)
+ printf(" - Flexible descriptor's Metadata: [0x%04x:0x%04x],"
+ "ipv6,ver=%u,tc=%u,flow_hi4=0x%x,nexthdr=%u,hoplimit=%u",
+ data.raw.data0, data.raw.data1,
+ data.ipv6.version,
+ data.ipv6.tc,
+ data.ipv6.flowhi4,
+ data.ipv6.nexthdr,
+ data.ipv6.hoplimit);
+ else if (m->ol_flags & RTE_IAVF_PKT_RX_DYNF_FLEX_DESC_IPV6_FLOW)
+ printf(" - Flexible descriptor's Metadata: [0x%04x:0x%04x],"
+ "ipv6_flow,ver=%u,tc=%u,flow=0x%x%04x",
+ data.raw.data0, data.raw.data1,
+ data.ipv6_flow.version,
+ data.ipv6_flow.tc,
+ data.ipv6_flow.flowhi4,
+ data.ipv6_flow.flowlo16);
+ else if (m->ol_flags & RTE_IAVF_PKT_RX_DYNF_FLEX_DESC_TCP)
+ printf(" - Flexible descriptor's Metadata: [0x%04x:0x%04x],"
+ "tcp,doff=%u,flags=%s%s%s%s%s%s%s%s",
+ data.raw.data0, data.raw.data1,
+ data.tcp.doff,
+ data.tcp.cwr ? "C" : "",
+ data.tcp.ece ? "E" : "",
+ data.tcp.urg ? "U" : "",
+ data.tcp.ack ? "A" : "",
+ data.tcp.psh ? "P" : "",
+ data.tcp.rst ? "R" : "",
+ data.tcp.syn ? "S" : "",
+ data.tcp.fin ? "F" : "");
+ else if (m->ol_flags & RTE_IAVF_PKT_RX_DYNF_FLEX_DESC_IP_OFFSET)
+ printf(" - Flexible descriptor's Extraction: ip_offset=%u",
+ data.ip_ofs);
+}
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_PMD_IAVF_H_ */
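[Editorial illustration, not part of the patch.] The TCP flag bits laid out in the 'tcp' bitfield struct above can be decoded into the same "CEUAPRSF" style string that the dump helper prints. A minimal standalone sketch; the bit positions assume the usual little-endian bitfield layout (fin = bit 0 ... cwr = bit 7), and all names here are hypothetical:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical bit positions matching the 'tcp' bitfield struct above
 * under a little-endian bitfield layout. */
enum {
	MD_TCP_FIN = 1 << 0, MD_TCP_SYN = 1 << 1,
	MD_TCP_RST = 1 << 2, MD_TCP_PSH = 1 << 3,
	MD_TCP_ACK = 1 << 4, MD_TCP_URG = 1 << 5,
	MD_TCP_ECE = 1 << 6, MD_TCP_CWR = 1 << 7,
};

/* Decode the low 16 bits of the TCP metadata word into a flag string,
 * in the same C/E/U/A/P/R/S/F order the dump helper prints. */
static void tcp_md_flags(uint16_t raw, char out[9])
{
	static const struct { uint16_t bit; char c; } map[] = {
		{ MD_TCP_CWR, 'C' }, { MD_TCP_ECE, 'E' },
		{ MD_TCP_URG, 'U' }, { MD_TCP_ACK, 'A' },
		{ MD_TCP_PSH, 'P' }, { MD_TCP_RST, 'R' },
		{ MD_TCP_SYN, 'S' }, { MD_TCP_FIN, 'F' },
	};
	unsigned int i, n = 0;

	for (i = 0; i < sizeof(map) / sizeof(map[0]); i++)
		if (raw & map[i].bit)
			out[n++] = map[i].c;
	out[n] = '\0';
}
```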
--
2.20.1
* [dpdk-dev] [PATCH v3] net/iavf: support flex desc metadata extraction
2020-09-09 2:54 [dpdk-dev] [PATCH v1] net/iavf: support flex desc metadata extraction Jeff Guo
2020-09-17 3:00 ` Wang, Haiyue
2020-09-23 7:45 ` [dpdk-dev] [PATCH v2] " Jeff Guo
@ 2020-09-23 7:52 ` Jeff Guo
2020-09-23 8:10 ` Wang, Haiyue
2020-09-23 15:36 ` [dpdk-dev] [PATCH v4] " Jeff Guo
` (10 subsequent siblings)
13 siblings, 1 reply; 40+ messages in thread
From: Jeff Guo @ 2020-09-23 7:52 UTC (permalink / raw)
To: jingjing.wu, qi.z.zhang, beilei.xing; +Cc: dev, haiyue.wang, jia.guo
Enable metadata extraction for flexible descriptors in AVF, which allows
network functions to get metadata directly without additional parsing and
so reduces the CPU cost for VFs. The metadata extraction covers the
VLAN/IPv4/IPv6/IPv6-FLOW/TCP/MPLS flexible descriptors; the VF negotiates
the flexible descriptor capability with the PF and configures the
corresponding offload on the receive queues.
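[Editorial illustration, not part of the commit message.] Based on the 'flex_desc' parser added in this patch, the extraction could be selected either globally or per queue via devargs, e.g. (the PCI address is hypothetical):

```
-w 18:00.0,flex_desc=vlan
-w 18:00.0,flex_desc='[0:ipv4,(1,2-3):tcp]'
```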
Signed-off-by: Jeff Guo <jia.guo@intel.com>
---
v3:
export these global symbols into .map
v2:
remove makefile change and modify the rxdid handling
---
doc/guides/rel_notes/release_20_11.rst | 6 +
drivers/net/iavf/iavf.h | 25 +-
drivers/net/iavf/iavf_ethdev.c | 395 ++++++++++++++++++++++
drivers/net/iavf/iavf_rxtx.c | 282 +++++++++++++--
drivers/net/iavf/iavf_rxtx.h | 233 +++++++------
drivers/net/iavf/iavf_vchnl.c | 22 +-
drivers/net/iavf/meson.build | 2 +
drivers/net/iavf/rte_pmd_iavf.h | 250 ++++++++++++++
drivers/net/iavf/rte_pmd_iavf_version.map | 13 +
9 files changed, 1081 insertions(+), 147 deletions(-)
create mode 100644 drivers/net/iavf/rte_pmd_iavf.h
diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index d4a66d045..054424d94 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -62,6 +62,12 @@ New Features
* Added support for non-zero priorities for group 0 flows
* Added support for VXLAN decap combined with VLAN pop
+* **Updated Intel iavf driver.**
+
+ Updated iavf PMD with new features and improvements, including:
+
+ * Added support for flexible descriptor metadata extraction.
+
Removed Items
-------------
diff --git a/drivers/net/iavf/iavf.h b/drivers/net/iavf/iavf.h
index 3198d85b3..44e28df56 100644
--- a/drivers/net/iavf/iavf.h
+++ b/drivers/net/iavf/iavf.h
@@ -119,7 +119,7 @@ struct iavf_info {
struct virtchnl_vf_resource *vf_res; /* VF resource */
struct virtchnl_vsi_resource *vsi_res; /* LAN VSI */
uint64_t supported_rxdid;
-
+ uint8_t *flex_desc; /* flexible descriptor type for all queues */
volatile enum virtchnl_ops pend_cmd; /* pending command not finished */
uint32_t cmd_retval; /* return value of the cmd response from PF */
uint8_t *aq_resp; /* buffer to store the adminq response from PF */
@@ -153,6 +153,28 @@ struct iavf_info {
#define IAVF_MAX_PKT_TYPE 1024
+#define IAVF_MAX_QUEUE_NUM 2048
+
+enum iavf_flex_desc_type {
+ IAVF_FLEX_DESC_NONE,
+ IAVF_FLEX_DESC_VLAN,
+ IAVF_FLEX_DESC_IPV4,
+ IAVF_FLEX_DESC_IPV6,
+ IAVF_FLEX_DESC_IPV6_FLOW,
+ IAVF_FLEX_DESC_TCP,
+ IAVF_FLEX_DESC_OVS,
+ IAVF_FLEX_DESC_IP_OFFSET,
+ IAVF_FLEX_DESC_MAX,
+};
+
+/**
+ * Cache devargs parse result.
+ */
+struct iavf_devargs {
+ uint8_t flex_desc_dflt;
+ uint8_t flex_desc[IAVF_MAX_QUEUE_NUM];
+};
+
/* Structure to store private data for each VF instance. */
struct iavf_adapter {
struct iavf_hw hw;
@@ -166,6 +188,7 @@ struct iavf_adapter {
const uint32_t *ptype_tbl;
bool stopped;
uint16_t fdir_ref_cnt;
+ struct iavf_devargs devargs;
};
/* IAVF_DEV_PRIVATE_TO */
diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
index 440da7d76..02b55cb49 100644
--- a/drivers/net/iavf/iavf_ethdev.c
+++ b/drivers/net/iavf/iavf_ethdev.c
@@ -28,6 +28,49 @@
#include "iavf.h"
#include "iavf_rxtx.h"
#include "iavf_generic_flow.h"
+#include "rte_pmd_iavf.h"
+
+/* devargs */
+#define IAVF_FLEX_DESC_ARG "flex_desc"
+
+static const char * const iavf_valid_args[] = {
+ IAVF_FLEX_DESC_ARG,
+ NULL
+};
+
+static const struct rte_mbuf_dynfield iavf_flex_desc_metadata_param = {
+ .name = "iavf_dynfield_flex_desc_metadata",
+ .size = sizeof(uint32_t),
+ .align = __alignof__(uint32_t),
+ .flags = 0,
+};
+
+struct iavf_flex_desc_ol_flag {
+ const struct rte_mbuf_dynflag param;
+ uint64_t *ol_flag;
+ bool required;
+};
+
+static struct iavf_flex_desc_ol_flag iavf_flex_desc_ol_flag_params[] = {
+ [IAVF_FLEX_DESC_VLAN] = {
+ .param = { .name = "iavf_dynflag_flex_desc_vlan" },
+ .ol_flag = &rte_net_iavf_dynflag_flex_desc_vlan_mask },
+ [IAVF_FLEX_DESC_IPV4] = {
+ .param = { .name = "iavf_dynflag_flex_desc_ipv4" },
+ .ol_flag = &rte_net_iavf_dynflag_flex_desc_ipv4_mask },
+ [IAVF_FLEX_DESC_IPV6] = {
+ .param = { .name = "iavf_dynflag_flex_desc_ipv6" },
+ .ol_flag = &rte_net_iavf_dynflag_flex_desc_ipv6_mask },
+ [IAVF_FLEX_DESC_IPV6_FLOW] = {
+ .param = { .name = "iavf_dynflag_flex_desc_ipv6_flow" },
+ .ol_flag = &rte_net_iavf_dynflag_flex_desc_ipv6_flow_mask },
+ [IAVF_FLEX_DESC_TCP] = {
+ .param = { .name = "iavf_dynflag_flex_desc_tcp" },
+ .ol_flag = &rte_net_iavf_dynflag_flex_desc_tcp_mask },
+ [IAVF_FLEX_DESC_IP_OFFSET] = {
+ .param = { .name = "iavf_dynflag_flex_desc_ip_offset" },
+ .ol_flag = &rte_net_iavf_dynflag_flex_desc_ip_offset_mask },
+};
static int iavf_dev_configure(struct rte_eth_dev *dev);
static int iavf_dev_start(struct rte_eth_dev *dev);
@@ -1213,6 +1256,350 @@ iavf_check_vf_reset_done(struct iavf_hw *hw)
return 0;
}
+static int
+iavf_lookup_flex_desc_type(const char *flex_name)
+{
+ static struct {
+ const char *name;
+ enum iavf_flex_desc_type type;
+ } flex_type_map[] = {
+ { "vlan", IAVF_FLEX_DESC_VLAN },
+ { "ipv4", IAVF_FLEX_DESC_IPV4 },
+ { "ipv6", IAVF_FLEX_DESC_IPV6 },
+ { "ipv6_flow", IAVF_FLEX_DESC_IPV6_FLOW },
+ { "tcp", IAVF_FLEX_DESC_TCP },
+ { "ovs", IAVF_FLEX_DESC_OVS },
+ { "ip_offset", IAVF_FLEX_DESC_IP_OFFSET },
+ };
+ uint32_t i;
+
+ for (i = 0; i < RTE_DIM(flex_type_map); i++) {
+ if (strcmp(flex_name, flex_type_map[i].name) == 0)
+ return flex_type_map[i].type;
+ }
+
+ PMD_DRV_LOG(ERR, "wrong flex_desc type, "
+ "it should be: vlan|ipv4|ipv6|ipv6_flow|tcp|ovs|ip_offset");
+
+ return -1;
+}
+
+/**
+ * Parse an element; it can be a single number, a range, or a '(' ')' group:
+ * 1) A single number element is a plain number, e.g. 9
+ * 2) A single range element is two numbers joined by '-', e.g. 2-6
+ * 3) A group element combines multiple 1) or 2) inside '( )', e.g. (0,2-4,6)
+ *    Within a group element, '-' is the range separator and
+ *    ',' separates single numbers.
+ */
+static int
+iavf_parse_queue_set(const char *input, int flex_type,
+ struct iavf_devargs *devargs)
+{
+ const char *str = input;
+ char *end = NULL;
+ uint32_t min, max;
+ uint32_t idx;
+
+ while (isblank(*str))
+ str++;
+
+ if (!isdigit(*str) && *str != '(')
+ return -1;
+
+ /* process single number or single range of number */
+ if (*str != '(') {
+ errno = 0;
+ idx = strtoul(str, &end, 10);
+ if (errno || !end || idx >= IAVF_MAX_QUEUE_NUM)
+ return -1;
+
+ while (isblank(*end))
+ end++;
+
+ min = idx;
+ max = idx;
+
+ /* process single <number>-<number> */
+ if (*end == '-') {
+ end++;
+ while (isblank(*end))
+ end++;
+ if (!isdigit(*end))
+ return -1;
+
+ errno = 0;
+ idx = strtoul(end, &end, 10);
+ if (errno || !end || idx >= IAVF_MAX_QUEUE_NUM)
+ return -1;
+
+ max = idx;
+ while (isblank(*end))
+ end++;
+ }
+
+ if (*end != ':')
+ return -1;
+
+ for (idx = RTE_MIN(min, max);
+ idx <= RTE_MAX(min, max); idx++)
+ devargs->flex_desc[idx] = flex_type;
+
+ return 0;
+ }
+
+ /* process set within bracket */
+ str++;
+ while (isblank(*str))
+ str++;
+ if (*str == '\0')
+ return -1;
+
+ min = IAVF_MAX_QUEUE_NUM;
+ do {
+ /* go ahead to the first digit */
+ while (isblank(*str))
+ str++;
+ if (!isdigit(*str))
+ return -1;
+
+ /* get the digit value */
+ errno = 0;
+ idx = strtoul(str, &end, 10);
+ if (errno || !end || idx >= IAVF_MAX_QUEUE_NUM)
+ return -1;
+
+ /* go ahead to separator '-',',' and ')' */
+ while (isblank(*end))
+ end++;
+ if (*end == '-') {
+ if (min == IAVF_MAX_QUEUE_NUM)
+ min = idx;
+ else /* avoid continuous '-' */
+ return -1;
+ } else if (*end == ',' || *end == ')') {
+ max = idx;
+ if (min == IAVF_MAX_QUEUE_NUM)
+ min = idx;
+
+ for (idx = RTE_MIN(min, max);
+ idx <= RTE_MAX(min, max); idx++)
+ devargs->flex_desc[idx] = flex_type;
+
+ min = IAVF_MAX_QUEUE_NUM;
+ } else {
+ return -1;
+ }
+
+ str = end + 1;
+ } while (*end != ')' && *end != '\0');
+
+ return 0;
+}
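[Editorial illustration, not part of the patch.] The non-bracket case of the parser above boils down to reading "<number>" or "<number>-<number>" with strtoul() and marking the covered queues. A simplified standalone sketch of that logic (all names are hypothetical, and the queue-table size is shrunk for the example):

```c
#include <assert.h>
#include <stdlib.h>

#define MAX_QUEUE 16

/* Simplified sketch of the "<number>:" / "<number>-<number>:" element
 * form: mark every queue in the range with 'type', accepting the
 * bounds in either order like the driver code does. */
static int parse_simple_set(const char *s, int type,
			    unsigned char queues[MAX_QUEUE])
{
	char *end;
	unsigned long min, max, i;

	min = strtoul(s, &end, 10);
	max = min;
	if (*end == '-')
		max = strtoul(end + 1, &end, 10);
	if (*end != ':' || min >= MAX_QUEUE || max >= MAX_QUEUE)
		return -1;

	for (i = min < max ? min : max;
	     i <= (min > max ? min : max); i++)
		queues[i] = (unsigned char)type;

	return 0;
}
```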
+
+static int
+iavf_parse_queue_flex_desc(const char *queues, struct iavf_devargs *devargs)
+{
+ const char *queue_start;
+ uint32_t idx;
+ int flex_type;
+ char flex_name[32];
+
+ while (isblank(*queues))
+ queues++;
+
+ if (*queues != '[') {
+ flex_type = iavf_lookup_flex_desc_type(queues);
+ if (flex_type < 0)
+ return -1;
+
+ devargs->flex_desc_dflt = flex_type;
+
+ return 0;
+ }
+
+ queues++;
+ do {
+ while (isblank(*queues))
+ queues++;
+ if (*queues == '\0')
+ return -1;
+
+ queue_start = queues;
+
+ /* go across a complete bracket */
+ if (*queue_start == '(') {
+ queues += strcspn(queues, ")");
+ if (*queues != ')')
+ return -1;
+ }
+
+ /* scan the separator ':' */
+ queues += strcspn(queues, ":");
+ if (*queues++ != ':')
+ return -1;
+ while (isblank(*queues))
+ queues++;
+
+ for (idx = 0; ; idx++) {
+ if (isblank(queues[idx]) ||
+ queues[idx] == ',' ||
+ queues[idx] == ']' ||
+ queues[idx] == '\0')
+ break;
+
+ if (idx > sizeof(flex_name) - 2)
+ return -1;
+
+ flex_name[idx] = queues[idx];
+ }
+ flex_name[idx] = '\0';
+ flex_type = iavf_lookup_flex_desc_type(flex_name);
+ if (flex_type < 0)
+ return -1;
+
+ queues += idx;
+
+ while (isblank(*queues) || *queues == ',' || *queues == ']')
+ queues++;
+
+ if (iavf_parse_queue_set(queue_start, flex_type, devargs) < 0)
+ return -1;
+ } while (*queues != '\0');
+
+ return 0;
+}
+
+static int
+iavf_handle_flex_desc_arg(__rte_unused const char *key, const char *value,
+ void *extra_args)
+{
+ struct iavf_devargs *devargs = extra_args;
+
+ if (!value || !extra_args)
+ return -EINVAL;
+
+ if (iavf_parse_queue_flex_desc(value, devargs) < 0) {
+ PMD_DRV_LOG(ERR, "invalid flex_desc parameter: '%s'",
+ value);
+ return -1;
+ }
+
+ return 0;
+}
+
+static int iavf_parse_devargs(struct rte_eth_dev *dev)
+{
+ struct iavf_adapter *ad =
+ IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+ struct rte_devargs *devargs = dev->device->devargs;
+ struct rte_kvargs *kvlist;
+ int ret;
+
+ if (!devargs)
+ return 0;
+
+ kvlist = rte_kvargs_parse(devargs->args, iavf_valid_args);
+ if (!kvlist) {
+ PMD_INIT_LOG(ERR, "invalid kvargs key");
+ return -EINVAL;
+ }
+
+ ad->devargs.flex_desc_dflt = IAVF_FLEX_DESC_NONE;
+ memset(ad->devargs.flex_desc, IAVF_FLEX_DESC_NONE,
+ sizeof(ad->devargs.flex_desc));
+
+ ret = rte_kvargs_process(kvlist, IAVF_FLEX_DESC_ARG,
+ &iavf_handle_flex_desc_arg, &ad->devargs);
+ if (ret)
+ goto bail;
+
+bail:
+ rte_kvargs_free(kvlist);
+ return ret;
+}
+
+static void
+iavf_init_flex_desc(struct rte_eth_dev *dev)
+{
+ struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+ struct iavf_adapter *ad =
+ IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+ const struct iavf_flex_desc_ol_flag *ol_flag;
+ bool flex_desc_enable = false;
+ int offset;
+ uint16_t i;
+
+ vf->flex_desc = rte_zmalloc("vf flex desc",
+ vf->vsi_res->num_queue_pairs, 0);
+ if (unlikely(!(vf->flex_desc))) {
+ PMD_DRV_LOG(ERR, "failed to allocate memory for the flex_desc table");
+ return;
+ }
+
+ for (i = 0; i < vf->vsi_res->num_queue_pairs; i++) {
+ vf->flex_desc[i] = ad->devargs.flex_desc[i] !=
+ IAVF_FLEX_DESC_NONE ?
+ ad->devargs.flex_desc[i] :
+ ad->devargs.flex_desc_dflt;
+
+ if (vf->flex_desc[i] != IAVF_FLEX_DESC_NONE) {
+ uint8_t type = vf->flex_desc[i];
+
+ iavf_flex_desc_ol_flag_params[type].required = true;
+ flex_desc_enable = true;
+ }
+ }
+
+ if (likely(!flex_desc_enable))
+ return;
+
+ offset = rte_mbuf_dynfield_register(&iavf_flex_desc_metadata_param);
+ if (unlikely(offset == -1)) {
+ PMD_DRV_LOG(ERR,
+ "failed to register mbuf dynfield for flex_desc metadata, error %d",
+ -rte_errno);
+ return;
+ }
+
+ PMD_DRV_LOG(DEBUG,
+ "flex_desc extraction metadata offset in mbuf: %d",
+ offset);
+ rte_net_iavf_dynfield_flex_desc_metadata_offs = offset;
+
+ for (i = 0; i < RTE_DIM(iavf_flex_desc_ol_flag_params); i++) {
+ ol_flag = &iavf_flex_desc_ol_flag_params[i];
+
+ uint8_t rxdid = iavf_flex_desc_type_to_rxdid((uint8_t)i);
+
+ if (!ol_flag->required)
+ continue;
+
+ if (!(vf->supported_rxdid & BIT(rxdid))) {
+ PMD_DRV_LOG(ERR,
+ "rxdid[%u] is not supported in hardware",
+ rxdid);
+ rte_net_iavf_dynfield_flex_desc_metadata_offs = -1;
+ break;
+ }
+
+ offset = rte_mbuf_dynflag_register(&ol_flag->param);
+ if (unlikely(offset == -1)) {
+ PMD_DRV_LOG(ERR,
+ "failed to register offload '%s', error %d",
+ ol_flag->param.name, -rte_errno);
+
+ rte_net_iavf_dynfield_flex_desc_metadata_offs = -1;
+ break;
+ }
+
+ PMD_DRV_LOG(DEBUG,
+ "flex_desc extraction offload '%s' offset in mbuf: %d",
+ ol_flag->param.name, offset);
+ *ol_flag->ol_flag = 1ULL << offset;
+ }
+}
+
static int
iavf_init_vf(struct rte_eth_dev *dev)
{
@@ -1222,6 +1609,12 @@ iavf_init_vf(struct rte_eth_dev *dev)
struct iavf_hw *hw = IAVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+ err = iavf_parse_devargs(dev);
+ if (err) {
+ PMD_INIT_LOG(ERR, "Failed to parse devargs");
+ goto err;
+ }
+
err = iavf_set_mac_type(hw);
if (err) {
PMD_INIT_LOG(ERR, "set_mac_type failed: %d", err);
@@ -1287,6 +1680,8 @@ iavf_init_vf(struct rte_eth_dev *dev)
vf->vf_reset = false;
+ iavf_init_flex_desc(dev);
+
return 0;
err_rss:
rte_free(vf->rss_key);
diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
index 05a7dd898..a65c8454d 100644
--- a/drivers/net/iavf/iavf_rxtx.c
+++ b/drivers/net/iavf/iavf_rxtx.c
@@ -26,6 +26,36 @@
#include "iavf.h"
#include "iavf_rxtx.h"
+#include "rte_pmd_iavf.h"
+
+/* Offset of mbuf dynamic field for flexible descriptor's extraction data */
+int rte_net_iavf_dynfield_flex_desc_metadata_offs = -1;
+
+/* Mask of mbuf dynamic flags for flexible descriptor's extraction type */
+uint64_t rte_net_iavf_dynflag_flex_desc_vlan_mask;
+uint64_t rte_net_iavf_dynflag_flex_desc_ipv4_mask;
+uint64_t rte_net_iavf_dynflag_flex_desc_ipv6_mask;
+uint64_t rte_net_iavf_dynflag_flex_desc_ipv6_flow_mask;
+uint64_t rte_net_iavf_dynflag_flex_desc_tcp_mask;
+uint64_t rte_net_iavf_dynflag_flex_desc_ip_offset_mask;
+
+uint8_t
+iavf_flex_desc_type_to_rxdid(uint8_t flex_type)
+{
+ static uint8_t rxdid_map[] = {
+ [IAVF_FLEX_DESC_NONE] = IAVF_RXDID_COMMS_GENERIC,
+ [IAVF_FLEX_DESC_VLAN] = IAVF_RXDID_COMMS_AUX_VLAN,
+ [IAVF_FLEX_DESC_IPV4] = IAVF_RXDID_COMMS_AUX_IPV4,
+ [IAVF_FLEX_DESC_IPV6] = IAVF_RXDID_COMMS_AUX_IPV6,
+ [IAVF_FLEX_DESC_IPV6_FLOW] = IAVF_RXDID_COMMS_AUX_IPV6_FLOW,
+ [IAVF_FLEX_DESC_TCP] = IAVF_RXDID_COMMS_AUX_TCP,
+ [IAVF_FLEX_DESC_OVS] = IAVF_RXDID_COMMS_OVS_1,
+ [IAVF_FLEX_DESC_IP_OFFSET] = IAVF_RXDID_COMMS_AUX_IP_OFFSET,
+ };
+
+ return flex_type < RTE_DIM(rxdid_map) ?
+ rxdid_map[flex_type] : IAVF_RXDID_COMMS_GENERIC;
+}
static inline int
check_rx_thresh(uint16_t nb_desc, uint16_t thresh)
@@ -294,6 +324,160 @@ static const struct iavf_txq_ops def_txq_ops = {
.release_mbufs = release_txq_mbufs,
};
+static inline void
+iavf_rxd_to_pkt_fields_by_comms_ovs(__rte_unused struct iavf_rx_queue *rxq,
+ struct rte_mbuf *mb,
+ volatile union iavf_rx_flex_desc *rxdp)
+{
+ volatile struct iavf_32b_rx_flex_desc_comms_ovs *desc =
+ (volatile struct iavf_32b_rx_flex_desc_comms_ovs *)rxdp;
+#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
+ uint16_t stat_err;
+#endif
+
+ if (desc->flow_id != 0xFFFFFFFF) {
+ mb->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+ mb->hash.fdir.hi = rte_le_to_cpu_32(desc->flow_id);
+ }
+
+#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
+ stat_err = rte_le_to_cpu_16(desc->status_error0);
+ if (likely(stat_err & (1 << IAVF_RX_FLEX_DESC_STATUS0_RSS_VALID_S))) {
+ mb->ol_flags |= PKT_RX_RSS_HASH;
+ mb->hash.rss = rte_le_to_cpu_32(desc->rss_hash);
+ }
+#endif
+}
+
+static inline void
+iavf_rxd_to_pkt_fields_by_comms_aux_v1(struct iavf_rx_queue *rxq,
+ struct rte_mbuf *mb,
+ volatile union iavf_rx_flex_desc *rxdp)
+{
+ volatile struct iavf_32b_rx_flex_desc_comms *desc =
+ (volatile struct iavf_32b_rx_flex_desc_comms *)rxdp;
+ uint16_t stat_err;
+
+ stat_err = rte_le_to_cpu_16(desc->status_error0);
+ if (likely(stat_err & (1 << IAVF_RX_FLEX_DESC_STATUS0_RSS_VALID_S))) {
+ mb->ol_flags |= PKT_RX_RSS_HASH;
+ mb->hash.rss = rte_le_to_cpu_32(desc->rss_hash);
+ }
+
+#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
+ if (desc->flow_id != 0xFFFFFFFF) {
+ mb->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+ mb->hash.fdir.hi = rte_le_to_cpu_32(desc->flow_id);
+ }
+
+ if (rxq->xtr_ol_flag) {
+ uint32_t metadata = 0;
+
+ stat_err = rte_le_to_cpu_16(desc->status_error1);
+
+ if (stat_err & (1 << IAVF_RX_FLEX_DESC_STATUS1_XTRMD4_VALID_S))
+ metadata = rte_le_to_cpu_16(desc->flex_ts.flex.aux0);
+
+ if (stat_err & (1 << IAVF_RX_FLEX_DESC_STATUS1_XTRMD5_VALID_S))
+ metadata |=
+ rte_le_to_cpu_16(desc->flex_ts.flex.aux1) << 16;
+
+ if (metadata) {
+ mb->ol_flags |= rxq->xtr_ol_flag;
+
+ *RTE_NET_IAVF_DYNF_FLEX_DESC_METADATA(mb) = metadata;
+ }
+ }
+#endif
+}
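[Editorial illustration, not part of the patch.] The v1 handler above assembles the 32-bit metadata from the two 16-bit auxiliary words: aux0 supplies the low half when XTRMD4 is valid, aux1 the high half when XTRMD5 is valid. A standalone sketch of that assembly (function name is hypothetical):

```c
#include <assert.h>
#include <stdint.h>

/* Combine the two 16-bit auxiliary words into the 32-bit metadata
 * value as the v1 handler does: aux0 = low 16 bits when XTRMD4 is
 * valid, aux1 = high 16 bits when XTRMD5 is valid. */
static uint32_t combine_aux_md(uint16_t aux0, int md4_valid,
			       uint16_t aux1, int md5_valid)
{
	uint32_t metadata = 0;

	if (md4_valid)
		metadata = aux0;
	if (md5_valid)
		metadata |= (uint32_t)aux1 << 16;

	return metadata;
}
```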
+
+static inline void
+iavf_rxd_to_pkt_fields_by_comms_aux_v2(struct iavf_rx_queue *rxq,
+ struct rte_mbuf *mb,
+ volatile union iavf_rx_flex_desc *rxdp)
+{
+ volatile struct iavf_32b_rx_flex_desc_comms *desc =
+ (volatile struct iavf_32b_rx_flex_desc_comms *)rxdp;
+ uint16_t stat_err;
+
+ stat_err = rte_le_to_cpu_16(desc->status_error0);
+ if (likely(stat_err & (1 << IAVF_RX_FLEX_DESC_STATUS0_RSS_VALID_S))) {
+ mb->ol_flags |= PKT_RX_RSS_HASH;
+ mb->hash.rss = rte_le_to_cpu_32(desc->rss_hash);
+ }
+
+#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
+ if (desc->flow_id != 0xFFFFFFFF) {
+ mb->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+ mb->hash.fdir.hi = rte_le_to_cpu_32(desc->flow_id);
+ }
+
+ if (rxq->xtr_ol_flag) {
+ uint32_t metadata = 0;
+
+ if (desc->flex_ts.flex.aux0 != 0xFFFF)
+ metadata = rte_le_to_cpu_16(desc->flex_ts.flex.aux0);
+ else if (desc->flex_ts.flex.aux1 != 0xFFFF)
+ metadata = rte_le_to_cpu_16(desc->flex_ts.flex.aux1);
+
+ if (metadata) {
+ mb->ol_flags |= rxq->xtr_ol_flag;
+
+ *RTE_NET_IAVF_DYNF_FLEX_DESC_METADATA(mb) = metadata;
+ }
+ }
+#endif
+}
+
+static void
+iavf_select_rxd_to_pkt_fields_handler(struct iavf_rx_queue *rxq, uint32_t rxdid)
+{
+ switch (rxdid) {
+ case IAVF_RXDID_COMMS_AUX_VLAN:
+ rxq->xtr_ol_flag = rte_net_iavf_dynflag_flex_desc_vlan_mask;
+ rxq->rxd_to_pkt_fields =
+ iavf_rxd_to_pkt_fields_by_comms_aux_v1;
+ break;
+ case IAVF_RXDID_COMMS_AUX_IPV4:
+ rxq->xtr_ol_flag = rte_net_iavf_dynflag_flex_desc_ipv4_mask;
+ rxq->rxd_to_pkt_fields =
+ iavf_rxd_to_pkt_fields_by_comms_aux_v1;
+ break;
+ case IAVF_RXDID_COMMS_AUX_IPV6:
+ rxq->xtr_ol_flag = rte_net_iavf_dynflag_flex_desc_ipv6_mask;
+ rxq->rxd_to_pkt_fields =
+ iavf_rxd_to_pkt_fields_by_comms_aux_v1;
+ break;
+ case IAVF_RXDID_COMMS_AUX_IPV6_FLOW:
+ rxq->xtr_ol_flag =
+ rte_net_iavf_dynflag_flex_desc_ipv6_flow_mask;
+ rxq->rxd_to_pkt_fields =
+ iavf_rxd_to_pkt_fields_by_comms_aux_v1;
+ break;
+ case IAVF_RXDID_COMMS_AUX_TCP:
+ rxq->xtr_ol_flag = rte_net_iavf_dynflag_flex_desc_tcp_mask;
+ rxq->rxd_to_pkt_fields =
+ iavf_rxd_to_pkt_fields_by_comms_aux_v1;
+ break;
+ case IAVF_RXDID_COMMS_AUX_IP_OFFSET:
+ rxq->xtr_ol_flag =
+ rte_net_iavf_dynflag_flex_desc_ip_offset_mask;
+ rxq->rxd_to_pkt_fields =
+ iavf_rxd_to_pkt_fields_by_comms_aux_v2;
+ break;
+ case IAVF_RXDID_COMMS_OVS_1:
+ rxq->rxd_to_pkt_fields = iavf_rxd_to_pkt_fields_by_comms_ovs;
+ break;
+ default:
+ /* keep in sync with the rxdid used for IAVF_FLEX_DESC_NONE */
+ rxq->rxd_to_pkt_fields = iavf_rxd_to_pkt_fields_by_comms_ovs;
+ break;
+ }
+
+ if (!rte_net_iavf_dynf_flex_desc_metadata_avail())
+ rxq->xtr_ol_flag = 0;
+}
+
int
iavf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
uint16_t nb_desc, unsigned int socket_id,
@@ -309,6 +493,7 @@ iavf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
struct iavf_rx_queue *rxq;
const struct rte_memzone *mz;
uint32_t ring_size;
+ uint8_t flex_desc;
uint16_t len;
uint16_t rx_free_thresh;
@@ -346,14 +531,16 @@ iavf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
return -ENOMEM;
}
- if (vf->vf_res->vf_cap_flags &
- VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC &&
- vf->supported_rxdid & BIT(IAVF_RXDID_COMMS_OVS_1)) {
- rxq->rxdid = IAVF_RXDID_COMMS_OVS_1;
+ if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC) {
+ flex_desc = vf->flex_desc ? vf->flex_desc[queue_idx] :
+ IAVF_FLEX_DESC_NONE;
+ rxq->rxdid = iavf_flex_desc_type_to_rxdid(flex_desc);
} else {
rxq->rxdid = IAVF_RXDID_LEGACY_1;
}
+ iavf_select_rxd_to_pkt_fields_handler(rxq, rxq->rxdid);
+
rxq->mp = mp;
rxq->nb_rx_desc = nb_desc;
rxq->rx_free_thresh = rx_free_thresh;
@@ -715,6 +902,45 @@ iavf_stop_queues(struct rte_eth_dev *dev)
}
}
+#define IAVF_RX_FLEX_ERR0_BITS \
+ ((1 << IAVF_RX_FLEX_DESC_STATUS0_HBO_S) | \
+ (1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_IPE_S) | \
+ (1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_L4E_S) | \
+ (1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_EIPE_S) | \
+ (1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_EUDPE_S) | \
+ (1 << IAVF_RX_FLEX_DESC_STATUS0_RXE_S))
+
+/* Rx L3/L4 checksum */
+static inline uint64_t
+iavf_rxd_error_to_pkt_flags(uint16_t stat_err0)
+{
+ uint64_t flags = 0;
+
+ /* check if HW has decoded the packet and checksum */
+ if (unlikely(!(stat_err0 & (1 << IAVF_RX_FLEX_DESC_STATUS0_L3L4P_S))))
+ return 0;
+
+ if (likely(!(stat_err0 & IAVF_RX_FLEX_ERR0_BITS))) {
+ flags |= (PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD);
+ return flags;
+ }
+
+ if (unlikely(stat_err0 & (1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_IPE_S)))
+ flags |= PKT_RX_IP_CKSUM_BAD;
+ else
+ flags |= PKT_RX_IP_CKSUM_GOOD;
+
+ if (unlikely(stat_err0 & (1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_L4E_S)))
+ flags |= PKT_RX_L4_CKSUM_BAD;
+ else
+ flags |= PKT_RX_L4_CKSUM_GOOD;
+
+ if (unlikely(stat_err0 & (1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_EIPE_S)))
+ flags |= PKT_RX_EIP_CKSUM_BAD;
+
+ return flags;
+}
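[Editorial illustration, not part of the patch.] The checksum translation above follows a simple shape: bail out if the hardware did not decode L3/L4, take a fast path when no error bit is set, otherwise map each error bit to a BAD/GOOD flag pair. A standalone sketch with hypothetical stand-ins for the PKT_RX_* flags and descriptor bits:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical stand-ins for the descriptor status bits and mbuf flags */
#define ERR_L3L4P (1u << 0)  /* HW processed L3/L4 headers */
#define ERR_IPE   (1u << 1)  /* IP checksum error */
#define ERR_L4E   (1u << 2)  /* L4 checksum error */
#define F_IP_GOOD (1ull << 0)
#define F_IP_BAD  (1ull << 1)
#define F_L4_GOOD (1ull << 2)
#define F_L4_BAD  (1ull << 3)

static uint64_t err_to_flags(uint16_t stat_err0)
{
	uint64_t flags = 0;

	/* HW did not decode the packet: no checksum information */
	if (!(stat_err0 & ERR_L3L4P))
		return 0;

	flags |= (stat_err0 & ERR_IPE) ? F_IP_BAD : F_IP_GOOD;
	flags |= (stat_err0 & ERR_L4E) ? F_L4_BAD : F_L4_GOOD;
	return flags;
}
```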
+
static inline void
iavf_rxd_to_vlan_tci(struct rte_mbuf *mb, volatile union iavf_rx_desc *rxdp)
{
@@ -740,6 +966,21 @@ iavf_flex_rxd_to_vlan_tci(struct rte_mbuf *mb,
} else {
mb->vlan_tci = 0;
}
+
+#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
+ if (rte_le_to_cpu_16(rxdp->wb.status_error1) &
+ (1 << IAVF_RX_FLEX_DESC_STATUS1_L2TAG2P_S)) {
+ mb->ol_flags |= PKT_RX_QINQ_STRIPPED | PKT_RX_QINQ |
+ PKT_RX_VLAN_STRIPPED | PKT_RX_VLAN;
+ mb->vlan_tci_outer = mb->vlan_tci;
+ mb->vlan_tci = rte_le_to_cpu_16(rxdp->wb.l2tag2_2nd);
+ PMD_RX_LOG(DEBUG, "Descriptor l2tag2_1: %u, l2tag2_2: %u",
+ rte_le_to_cpu_16(rxdp->wb.l2tag2_1st),
+ rte_le_to_cpu_16(rxdp->wb.l2tag2_2nd));
+ } else {
+ mb->vlan_tci_outer = 0;
+ }
+#endif
}
/* Translate the rx descriptor status and error fields to pkt flags */
@@ -804,30 +1045,6 @@ iavf_rxd_build_fdir(volatile union iavf_rx_desc *rxdp, struct rte_mbuf *mb)
return flags;
}
-
-/* Translate the rx flex descriptor status to pkt flags */
-static inline void
-iavf_rxd_to_pkt_fields(struct rte_mbuf *mb,
- volatile union iavf_rx_flex_desc *rxdp)
-{
- volatile struct iavf_32b_rx_flex_desc_comms_ovs *desc =
- (volatile struct iavf_32b_rx_flex_desc_comms_ovs *)rxdp;
-#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
- uint16_t stat_err;
-
- stat_err = rte_le_to_cpu_16(desc->status_error0);
- if (likely(stat_err & (1 << IAVF_RX_FLEX_DESC_STATUS0_RSS_VALID_S))) {
- mb->ol_flags |= PKT_RX_RSS_HASH;
- mb->hash.rss = rte_le_to_cpu_32(desc->rss_hash);
- }
-#endif
-
- if (desc->flow_id != 0xFFFFFFFF) {
- mb->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
- mb->hash.fdir.hi = rte_le_to_cpu_32(desc->flow_id);
- }
-}
-
#define IAVF_RX_FLEX_ERR0_BITS \
((1 << IAVF_RX_FLEX_DESC_STATUS0_HBO_S) | \
(1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_IPE_S) | \
@@ -1082,7 +1299,7 @@ iavf_recv_pkts_flex_rxd(void *rx_queue,
rxm->packet_type = ptype_tbl[IAVF_RX_FLEX_DESC_PTYPE_M &
rte_le_to_cpu_16(rxd.wb.ptype_flex_flags0)];
iavf_flex_rxd_to_vlan_tci(rxm, &rxd);
- iavf_rxd_to_pkt_fields(rxm, &rxd);
+ rxq->rxd_to_pkt_fields(rxq, rxm, &rxd);
pkt_flags = iavf_flex_rxd_error_to_pkt_flags(rx_stat_err0);
rxm->ol_flags |= pkt_flags;
@@ -1223,7 +1440,7 @@ iavf_recv_scattered_pkts_flex_rxd(void *rx_queue, struct rte_mbuf **rx_pkts,
first_seg->packet_type = ptype_tbl[IAVF_RX_FLEX_DESC_PTYPE_M &
rte_le_to_cpu_16(rxd.wb.ptype_flex_flags0)];
iavf_flex_rxd_to_vlan_tci(first_seg, &rxd);
- iavf_rxd_to_pkt_fields(first_seg, &rxd);
+ rxq->rxd_to_pkt_fields(rxq, first_seg, &rxd);
pkt_flags = iavf_flex_rxd_error_to_pkt_flags(rx_stat_err0);
first_seg->ol_flags |= pkt_flags;
@@ -1460,7 +1677,7 @@ iavf_rx_scan_hw_ring_flex_rxd(struct iavf_rx_queue *rxq)
mb->packet_type = ptype_tbl[IAVF_RX_FLEX_DESC_PTYPE_M &
rte_le_to_cpu_16(rxdp[j].wb.ptype_flex_flags0)];
iavf_flex_rxd_to_vlan_tci(mb, &rxdp[j]);
- iavf_rxd_to_pkt_fields(mb, &rxdp[j]);
+ rxq->rxd_to_pkt_fields(rxq, mb, &rxdp[j]);
stat_err0 = rte_le_to_cpu_16(rxdp[j].wb.status_error0);
pkt_flags = iavf_flex_rxd_error_to_pkt_flags(stat_err0);
@@ -1652,7 +1869,7 @@ rx_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
if (rxq->rx_nb_avail)
return iavf_rx_fill_from_stage(rxq, rx_pkts, nb_pkts);
- if (rxq->rxdid == IAVF_RXDID_COMMS_OVS_1)
+ if (rxq->rxdid >= IAVF_RXDID_FLEX_NIC && rxq->rxdid <= IAVF_RXDID_LAST)
nb_rx = (uint16_t)iavf_rx_scan_hw_ring_flex_rxd(rxq);
else
nb_rx = (uint16_t)iavf_rx_scan_hw_ring(rxq);
@@ -2100,6 +2317,7 @@ iavf_set_rx_function(struct rte_eth_dev *dev)
struct iavf_adapter *adapter =
IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+
#ifdef RTE_ARCH_X86
struct iavf_rx_queue *rxq;
int i;
diff --git a/drivers/net/iavf/iavf_rxtx.h b/drivers/net/iavf/iavf_rxtx.h
index 59625a979..de7a1f633 100644
--- a/drivers/net/iavf/iavf_rxtx.h
+++ b/drivers/net/iavf/iavf_rxtx.h
@@ -57,110 +57,6 @@
#define IAVF_TX_OFFLOAD_NOTSUP_MASK \
(PKT_TX_OFFLOAD_MASK ^ IAVF_TX_OFFLOAD_MASK)
-/* HW desc structure, both 16-byte and 32-byte types are supported */
-#ifdef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
-#define iavf_rx_desc iavf_16byte_rx_desc
-#define iavf_rx_flex_desc iavf_16b_rx_flex_desc
-#else
-#define iavf_rx_desc iavf_32byte_rx_desc
-#define iavf_rx_flex_desc iavf_32b_rx_flex_desc
-#endif
-
-struct iavf_rxq_ops {
- void (*release_mbufs)(struct iavf_rx_queue *rxq);
-};
-
-struct iavf_txq_ops {
- void (*release_mbufs)(struct iavf_tx_queue *txq);
-};
-
-/* Structure associated with each Rx queue. */
-struct iavf_rx_queue {
- struct rte_mempool *mp; /* mbuf pool to populate Rx ring */
- const struct rte_memzone *mz; /* memzone for Rx ring */
- volatile union iavf_rx_desc *rx_ring; /* Rx ring virtual address */
- uint64_t rx_ring_phys_addr; /* Rx ring DMA address */
- struct rte_mbuf **sw_ring; /* address of SW ring */
- uint16_t nb_rx_desc; /* ring length */
- uint16_t rx_tail; /* current value of tail */
- volatile uint8_t *qrx_tail; /* register address of tail */
- uint16_t rx_free_thresh; /* max free RX desc to hold */
- uint16_t nb_rx_hold; /* number of held free RX desc */
- struct rte_mbuf *pkt_first_seg; /* first segment of current packet */
- struct rte_mbuf *pkt_last_seg; /* last segment of current packet */
- struct rte_mbuf fake_mbuf; /* dummy mbuf */
- uint8_t rxdid;
-
- /* used for VPMD */
- uint16_t rxrearm_nb; /* number of remaining to be re-armed */
- uint16_t rxrearm_start; /* the idx we start the re-arming from */
- uint64_t mbuf_initializer; /* value to init mbufs */
-
- /* for rx bulk */
- uint16_t rx_nb_avail; /* number of staged packets ready */
- uint16_t rx_next_avail; /* index of next staged packets */
- uint16_t rx_free_trigger; /* triggers rx buffer allocation */
- struct rte_mbuf *rx_stage[IAVF_RX_MAX_BURST * 2]; /* store mbuf */
-
- uint16_t port_id; /* device port ID */
- uint8_t crc_len; /* 0 if CRC stripped, 4 otherwise */
- uint8_t fdir_enabled; /* 0 if FDIR disabled, 1 when enabled */
- uint16_t queue_id; /* Rx queue index */
- uint16_t rx_buf_len; /* The packet buffer size */
- uint16_t rx_hdr_len; /* The header buffer size */
- uint16_t max_pkt_len; /* Maximum packet length */
- struct iavf_vsi *vsi; /**< the VSI this queue belongs to */
-
- bool q_set; /* if rx queue has been configured */
- bool rx_deferred_start; /* don't start this queue in dev start */
- const struct iavf_rxq_ops *ops;
-};
-
-struct iavf_tx_entry {
- struct rte_mbuf *mbuf;
- uint16_t next_id;
- uint16_t last_id;
-};
-
-/* Structure associated with each TX queue. */
-struct iavf_tx_queue {
- const struct rte_memzone *mz; /* memzone for Tx ring */
- volatile struct iavf_tx_desc *tx_ring; /* Tx ring virtual address */
- uint64_t tx_ring_phys_addr; /* Tx ring DMA address */
- struct iavf_tx_entry *sw_ring; /* address array of SW ring */
- uint16_t nb_tx_desc; /* ring length */
- uint16_t tx_tail; /* current value of tail */
- volatile uint8_t *qtx_tail; /* register address of tail */
- /* number of used desc since RS bit set */
- uint16_t nb_used;
- uint16_t nb_free;
- uint16_t last_desc_cleaned; /* last desc have been cleaned*/
- uint16_t free_thresh;
- uint16_t rs_thresh;
-
- uint16_t port_id;
- uint16_t queue_id;
- uint64_t offloads;
- uint16_t next_dd; /* next to set RS, for VPMD */
- uint16_t next_rs; /* next to check DD, for VPMD */
-
- bool q_set; /* if rx queue has been configured */
- bool tx_deferred_start; /* don't start this queue in dev start */
- const struct iavf_txq_ops *ops;
-};
-
-/* Offload features */
-union iavf_tx_offload {
- uint64_t data;
- struct {
- uint64_t l2_len:7; /* L2 (MAC) Header Length. */
- uint64_t l3_len:9; /* L3 (IP) Header Length. */
- uint64_t l4_len:8; /* L4 Header Length. */
- uint64_t tso_segsz:16; /* TCP TSO segment size */
- /* uint64_t unused : 24; */
- };
-};
-
/* Rx Flex Descriptors
* These descriptors are used instead of the legacy version descriptors
*/
@@ -331,6 +227,7 @@ enum iavf_rxdid {
IAVF_RXDID_COMMS_AUX_TCP = 21,
IAVF_RXDID_COMMS_OVS_1 = 22,
IAVF_RXDID_COMMS_OVS_2 = 23,
+ IAVF_RXDID_COMMS_AUX_IP_OFFSET = 25,
IAVF_RXDID_LAST = 63,
};
@@ -355,12 +252,138 @@ enum iavf_rx_flex_desc_status_error_0_bits {
IAVF_RX_FLEX_DESC_STATUS0_LAST /* this entry must be last!!! */
};
+enum iavf_rx_flex_desc_status_error_1_bits {
+ /* Note: These are predefined bit offsets */
+ IAVF_RX_FLEX_DESC_STATUS1_CPM_S = 0, /* 4 bits */
+ IAVF_RX_FLEX_DESC_STATUS1_NAT_S = 4,
+ IAVF_RX_FLEX_DESC_STATUS1_CRYPTO_S = 5,
+ /* [10:6] reserved */
+ IAVF_RX_FLEX_DESC_STATUS1_L2TAG2P_S = 11,
+ IAVF_RX_FLEX_DESC_STATUS1_XTRMD2_VALID_S = 12,
+ IAVF_RX_FLEX_DESC_STATUS1_XTRMD3_VALID_S = 13,
+ IAVF_RX_FLEX_DESC_STATUS1_XTRMD4_VALID_S = 14,
+ IAVF_RX_FLEX_DESC_STATUS1_XTRMD5_VALID_S = 15,
+ IAVF_RX_FLEX_DESC_STATUS1_LAST /* this entry must be last!!! */
+};
+
/* for iavf_32b_rx_flex_desc.ptype_flex_flags0 member */
#define IAVF_RX_FLEX_DESC_PTYPE_M (0x3FF) /* 10-bits */
/* for iavf_32b_rx_flex_desc.pkt_len member */
#define IAVF_RX_FLX_DESC_PKT_LEN_M (0x3FFF) /* 14-bits */
+/* HW desc structure, both 16-byte and 32-byte types are supported */
+#ifdef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
+#define iavf_rx_desc iavf_16byte_rx_desc
+#define iavf_rx_flex_desc iavf_16b_rx_flex_desc
+#else
+#define iavf_rx_desc iavf_32byte_rx_desc
+#define iavf_rx_flex_desc iavf_32b_rx_flex_desc
+#endif
+
+typedef void (*iavf_rxd_to_pkt_fields_t)(struct iavf_rx_queue *rxq,
+ struct rte_mbuf *mb,
+ volatile union iavf_rx_flex_desc *rxdp);
+
+struct iavf_rxq_ops {
+ void (*release_mbufs)(struct iavf_rx_queue *rxq);
+};
+
+struct iavf_txq_ops {
+ void (*release_mbufs)(struct iavf_tx_queue *txq);
+};
+
+/* Structure associated with each Rx queue. */
+struct iavf_rx_queue {
+ struct rte_mempool *mp; /* mbuf pool to populate Rx ring */
+ const struct rte_memzone *mz; /* memzone for Rx ring */
+ volatile union iavf_rx_desc *rx_ring; /* Rx ring virtual address */
+ uint64_t rx_ring_phys_addr; /* Rx ring DMA address */
+ struct rte_mbuf **sw_ring; /* address of SW ring */
+ uint16_t nb_rx_desc; /* ring length */
+ uint16_t rx_tail; /* current value of tail */
+ volatile uint8_t *qrx_tail; /* register address of tail */
+ uint16_t rx_free_thresh; /* max free RX desc to hold */
+ uint16_t nb_rx_hold; /* number of held free RX desc */
+ struct rte_mbuf *pkt_first_seg; /* first segment of current packet */
+ struct rte_mbuf *pkt_last_seg; /* last segment of current packet */
+ struct rte_mbuf fake_mbuf; /* dummy mbuf */
+ uint8_t rxdid;
+
+ /* used for VPMD */
+ uint16_t rxrearm_nb; /* number of remaining to be re-armed */
+ uint16_t rxrearm_start; /* the idx we start the re-arming from */
+ uint64_t mbuf_initializer; /* value to init mbufs */
+
+ /* for rx bulk */
+ uint16_t rx_nb_avail; /* number of staged packets ready */
+ uint16_t rx_next_avail; /* index of next staged packets */
+ uint16_t rx_free_trigger; /* triggers rx buffer allocation */
+ struct rte_mbuf *rx_stage[IAVF_RX_MAX_BURST * 2]; /* store mbuf */
+
+ uint16_t port_id; /* device port ID */
+ uint8_t crc_len; /* 0 if CRC stripped, 4 otherwise */
+ uint8_t fdir_enabled; /* 0 if FDIR disabled, 1 when enabled */
+ uint16_t queue_id; /* Rx queue index */
+ uint16_t rx_buf_len; /* The packet buffer size */
+ uint16_t rx_hdr_len; /* The header buffer size */
+ uint16_t max_pkt_len; /* Maximum packet length */
+ struct iavf_vsi *vsi; /**< the VSI this queue belongs to */
+
+ bool q_set; /* if rx queue has been configured */
+ bool rx_deferred_start; /* don't start this queue in dev start */
+ const struct iavf_rxq_ops *ops;
+ uint64_t xtr_ol_flag;
+ /* flexible descriptor metadata extraction offload flag */
+ iavf_rxd_to_pkt_fields_t rxd_to_pkt_fields;
+ /* handle flexible descriptor by RXDID */
+};
+
+struct iavf_tx_entry {
+ struct rte_mbuf *mbuf;
+ uint16_t next_id;
+ uint16_t last_id;
+};
+
+/* Structure associated with each TX queue. */
+struct iavf_tx_queue {
+ const struct rte_memzone *mz; /* memzone for Tx ring */
+ volatile struct iavf_tx_desc *tx_ring; /* Tx ring virtual address */
+ uint64_t tx_ring_phys_addr; /* Tx ring DMA address */
+ struct iavf_tx_entry *sw_ring; /* address array of SW ring */
+ uint16_t nb_tx_desc; /* ring length */
+ uint16_t tx_tail; /* current value of tail */
+ volatile uint8_t *qtx_tail; /* register address of tail */
+ /* number of used desc since RS bit set */
+ uint16_t nb_used;
+ uint16_t nb_free;
+ uint16_t last_desc_cleaned; /* last desc have been cleaned*/
+ uint16_t free_thresh;
+ uint16_t rs_thresh;
+
+ uint16_t port_id;
+ uint16_t queue_id;
+ uint64_t offloads;
+ uint16_t next_dd; /* next to set RS, for VPMD */
+ uint16_t next_rs; /* next to check DD, for VPMD */
+
+ bool q_set; /* if tx queue has been configured */
+ bool tx_deferred_start; /* don't start this queue in dev start */
+ const struct iavf_txq_ops *ops;
+};
+
+/* Offload features */
+union iavf_tx_offload {
+ uint64_t data;
+ struct {
+ uint64_t l2_len:7; /* L2 (MAC) Header Length. */
+ uint64_t l3_len:9; /* L3 (IP) Header Length. */
+ uint64_t l4_len:8; /* L4 Header Length. */
+ uint64_t tso_segsz:16; /* TCP TSO segment size */
+ /* uint64_t unused : 24; */
+ };
+};
+
int iavf_dev_rx_queue_setup(struct rte_eth_dev *dev,
uint16_t queue_idx,
uint16_t nb_desc,
@@ -438,6 +461,8 @@ int iavf_tx_vec_dev_check(struct rte_eth_dev *dev);
int iavf_rxq_vec_setup(struct iavf_rx_queue *rxq);
int iavf_txq_vec_setup(struct iavf_tx_queue *txq);
+uint8_t iavf_flex_desc_type_to_rxdid(uint8_t xtr_type);
+
const uint32_t *iavf_get_default_ptype_table(void);
static inline
diff --git a/drivers/net/iavf/iavf_vchnl.c b/drivers/net/iavf/iavf_vchnl.c
index 76f8e38d1..7981dfa30 100644
--- a/drivers/net/iavf/iavf_vchnl.c
+++ b/drivers/net/iavf/iavf_vchnl.c
@@ -647,25 +647,27 @@ iavf_configure_queues(struct iavf_adapter *adapter)
#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
if (vf->vf_res->vf_cap_flags &
- VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC &&
- vf->supported_rxdid & BIT(IAVF_RXDID_COMMS_OVS_1)) {
- vc_qp->rxq.rxdid = IAVF_RXDID_COMMS_OVS_1;
- PMD_DRV_LOG(NOTICE, "request RXDID == %d in "
- "Queue[%d]", vc_qp->rxq.rxdid, i);
+ VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC &&
+ vf->supported_rxdid & BIT(rxq[i]->rxdid)) {
+ vc_qp->rxq.rxdid = rxq[i]->rxdid;
+ PMD_DRV_LOG(NOTICE, "request RXDID[%d] in Queue[%d]",
+ vc_qp->rxq.rxdid, i);
} else {
+ PMD_DRV_LOG(NOTICE, "RXDID[%d] is not supported, "
+ "request default RXDID[%d] in Queue[%d]",
+ rxq[i]->rxdid, IAVF_RXDID_LEGACY_1, i);
vc_qp->rxq.rxdid = IAVF_RXDID_LEGACY_1;
- PMD_DRV_LOG(NOTICE, "request RXDID == %d in "
- "Queue[%d]", vc_qp->rxq.rxdid, i);
}
#else
if (vf->vf_res->vf_cap_flags &
VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC &&
vf->supported_rxdid & BIT(IAVF_RXDID_LEGACY_0)) {
vc_qp->rxq.rxdid = IAVF_RXDID_LEGACY_0;
- PMD_DRV_LOG(NOTICE, "request RXDID == %d in "
- "Queue[%d]", vc_qp->rxq.rxdid, i);
+ PMD_DRV_LOG(NOTICE, "request RXDID[%d] in Queue[%d]",
+ vc_qp->rxq.rxdid, i);
} else {
- PMD_DRV_LOG(ERR, "RXDID == 0 is not supported");
+ PMD_DRV_LOG(ERR, "RXDID[%d] is not supported",
+ IAVF_RXDID_LEGACY_0);
return -1;
}
#endif
diff --git a/drivers/net/iavf/meson.build b/drivers/net/iavf/meson.build
index a3fad363d..cd5159332 100644
--- a/drivers/net/iavf/meson.build
+++ b/drivers/net/iavf/meson.build
@@ -35,3 +35,5 @@ if arch_subdir == 'x86'
objs += iavf_avx2_lib.extract_objects('iavf_rxtx_vec_avx2.c')
endif
endif
+
+install_headers('rte_pmd_iavf.h')
diff --git a/drivers/net/iavf/rte_pmd_iavf.h b/drivers/net/iavf/rte_pmd_iavf.h
new file mode 100644
index 000000000..dddb4340a
--- /dev/null
+++ b/drivers/net/iavf/rte_pmd_iavf.h
@@ -0,0 +1,250 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#ifndef _RTE_PMD_IAVF_H_
+#define _RTE_PMD_IAVF_H_
+
+/**
+ * @file rte_pmd_iavf.h
+ *
+ * iavf PMD specific functions.
+ *
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
+ *
+ */
+
+#include <stdio.h>
+#include <rte_mbuf.h>
+#include <rte_mbuf_dyn.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * The supported network flexible descriptor's extraction metadata format.
+ */
+union rte_net_iavf_flex_desc_metadata {
+ uint32_t metadata;
+
+ struct {
+ uint16_t data0;
+ uint16_t data1;
+ } raw;
+
+ struct {
+ uint16_t stag_vid:12,
+ stag_dei:1,
+ stag_pcp:3;
+ uint16_t ctag_vid:12,
+ ctag_dei:1,
+ ctag_pcp:3;
+ } vlan;
+
+ struct {
+ uint16_t protocol:8,
+ ttl:8;
+ uint16_t tos:8,
+ ihl:4,
+ version:4;
+ } ipv4;
+
+ struct {
+ uint16_t hoplimit:8,
+ nexthdr:8;
+ uint16_t flowhi4:4,
+ tc:8,
+ version:4;
+ } ipv6;
+
+ struct {
+ uint16_t flowlo16;
+ uint16_t flowhi4:4,
+ tc:8,
+ version:4;
+ } ipv6_flow;
+
+ struct {
+ uint16_t fin:1,
+ syn:1,
+ rst:1,
+ psh:1,
+ ack:1,
+ urg:1,
+ ece:1,
+ cwr:1,
+ res1:4,
+ doff:4;
+ uint16_t rsvd;
+ } tcp;
+
+ uint32_t ip_ofs;
+};
+
+/* Offset of mbuf dynamic field for flexible descriptor's extraction data */
+extern int rte_net_iavf_dynfield_flex_desc_metadata_offs;
+
+/* Mask of mbuf dynamic flags for flexible descriptor's extraction type */
+extern uint64_t rte_net_iavf_dynflag_flex_desc_vlan_mask;
+extern uint64_t rte_net_iavf_dynflag_flex_desc_ipv4_mask;
+extern uint64_t rte_net_iavf_dynflag_flex_desc_ipv6_mask;
+extern uint64_t rte_net_iavf_dynflag_flex_desc_ipv6_flow_mask;
+extern uint64_t rte_net_iavf_dynflag_flex_desc_tcp_mask;
+extern uint64_t rte_net_iavf_dynflag_flex_desc_ip_offset_mask;
+
+/**
+ * The mbuf dynamic field pointer for flexible descriptor's extraction metadata.
+ */
+#define RTE_NET_IAVF_DYNF_FLEX_DESC_METADATA(m) \
+ RTE_MBUF_DYNFIELD((m), \
+ rte_net_iavf_dynfield_flex_desc_metadata_offs, \
+ uint32_t *)
+
+/**
+ * The mbuf dynamic flag for VLAN protocol extraction metadata, it is valid
+ * when dev_args 'flex_desc' has 'vlan' specified.
+ */
+#define RTE_IAVF_PKT_RX_DYNF_FLEX_DESC_VLAN \
+ (rte_net_iavf_dynflag_flex_desc_vlan_mask)
+
+/**
+ * The mbuf dynamic flag for IPv4 protocol extraction metadata, it is valid
+ * when dev_args 'flex_desc' has 'ipv4' specified.
+ */
+#define RTE_IAVF_PKT_RX_DYNF_FLEX_DESC_IPV4 \
+ (rte_net_iavf_dynflag_flex_desc_ipv4_mask)
+
+/**
+ * The mbuf dynamic flag for IPv6 protocol extraction metadata, it is valid
+ * when dev_args 'flex_desc' has 'ipv6' specified.
+ */
+#define RTE_IAVF_PKT_RX_DYNF_FLEX_DESC_IPV6 \
+ (rte_net_iavf_dynflag_flex_desc_ipv6_mask)
+
+/**
+ * The mbuf dynamic flag for IPv6 with flow protocol extraction metadata, it is
+ * valid when dev_args 'flex_desc' has 'ipv6_flow' specified.
+ */
+#define RTE_IAVF_PKT_RX_DYNF_FLEX_DESC_IPV6_FLOW \
+ (rte_net_iavf_dynflag_flex_desc_ipv6_flow_mask)
+
+/**
+ * The mbuf dynamic flag for TCP protocol extraction metadata, it is valid
+ * when dev_args 'flex_desc' has 'tcp' specified.
+ */
+#define RTE_IAVF_PKT_RX_DYNF_FLEX_DESC_TCP \
+ (rte_net_iavf_dynflag_flex_desc_tcp_mask)
+
+/**
+ * The mbuf dynamic flag for IP_OFFSET extraction metadata, it is valid
+ * when dev_args 'flex_desc' has 'ip_offset' specified.
+ */
+#define RTE_IAVF_PKT_RX_DYNF_FLEX_DESC_IP_OFFSET \
+ (rte_net_iavf_dynflag_flex_desc_ip_offset_mask)
+
+/**
+ * Check if mbuf dynamic field for flexible descriptor's extraction metadata
+ * is registered.
+ *
+ * @return
+ * True if registered, false otherwise.
+ */
+__rte_experimental
+static __rte_always_inline int
+rte_net_iavf_dynf_flex_desc_metadata_avail(void)
+{
+ return rte_net_iavf_dynfield_flex_desc_metadata_offs != -1;
+}
+
+/**
+ * Get the mbuf dynamic field for flexible descriptor's extraction metadata.
+ *
+ * @param m
+ * The pointer to the mbuf.
+ * @return
+ * The saved protocol extraction metadata.
+ */
+__rte_experimental
+static __rte_always_inline uint32_t
+rte_net_iavf_dynf_flex_desc_metadata_get(struct rte_mbuf *m)
+{
+ return *RTE_NET_IAVF_DYNF_FLEX_DESC_METADATA(m);
+}
+
+/**
+ * Dump the mbuf dynamic field for flexible descriptor's extraction metadata.
+ *
+ * @param m
+ * The pointer to the mbuf.
+ */
+__rte_experimental
+static inline void
+rte_net_iavf_dump_flex_desc_metadata(struct rte_mbuf *m)
+{
+ union rte_net_iavf_flex_desc_metadata data;
+
+ if (!rte_net_iavf_dynf_flex_desc_metadata_avail())
+ return;
+
+ data.metadata = rte_net_iavf_dynf_flex_desc_metadata_get(m);
+
+ if (m->ol_flags & RTE_IAVF_PKT_RX_DYNF_FLEX_DESC_VLAN)
+ printf(" - Flexible descriptor's Metadata: [0x%04x:0x%04x],"
+ "vlan,stag=%u:%u:%u,ctag=%u:%u:%u",
+ data.raw.data0, data.raw.data1,
+ data.vlan.stag_pcp,
+ data.vlan.stag_dei,
+ data.vlan.stag_vid,
+ data.vlan.ctag_pcp,
+ data.vlan.ctag_dei,
+ data.vlan.ctag_vid);
+ else if (m->ol_flags & RTE_IAVF_PKT_RX_DYNF_FLEX_DESC_IPV4)
+ printf(" - Flexible descriptor's Metadata: [0x%04x:0x%04x],"
+ "ipv4,ver=%u,hdrlen=%u,tos=%u,ttl=%u,proto=%u",
+ data.raw.data0, data.raw.data1,
+ data.ipv4.version,
+ data.ipv4.ihl,
+ data.ipv4.tos,
+ data.ipv4.ttl,
+ data.ipv4.protocol);
+ else if (m->ol_flags & RTE_IAVF_PKT_RX_DYNF_FLEX_DESC_IPV6)
+ printf(" - Flexible descriptor's Metadata: [0x%04x:0x%04x],"
+ "ipv6,ver=%u,tc=%u,flow_hi4=0x%x,nexthdr=%u,hoplimit=%u",
+ data.raw.data0, data.raw.data1,
+ data.ipv6.version,
+ data.ipv6.tc,
+ data.ipv6.flowhi4,
+ data.ipv6.nexthdr,
+ data.ipv6.hoplimit);
+ else if (m->ol_flags & RTE_IAVF_PKT_RX_DYNF_FLEX_DESC_IPV6_FLOW)
+ printf(" - Flexible descriptor's Metadata: [0x%04x:0x%04x],"
+ "ipv6_flow,ver=%u,tc=%u,flow=0x%x%04x",
+ data.raw.data0, data.raw.data1,
+ data.ipv6_flow.version,
+ data.ipv6_flow.tc,
+ data.ipv6_flow.flowhi4,
+ data.ipv6_flow.flowlo16);
+ else if (m->ol_flags & RTE_IAVF_PKT_RX_DYNF_FLEX_DESC_TCP)
+ printf(" - Flexible descriptor's Metadata: [0x%04x:0x%04x],"
+ "tcp,doff=%u,flags=%s%s%s%s%s%s%s%s",
+ data.raw.data0, data.raw.data1,
+ data.tcp.doff,
+ data.tcp.cwr ? "C" : "",
+ data.tcp.ece ? "E" : "",
+ data.tcp.urg ? "U" : "",
+ data.tcp.ack ? "A" : "",
+ data.tcp.psh ? "P" : "",
+ data.tcp.rst ? "R" : "",
+ data.tcp.syn ? "S" : "",
+ data.tcp.fin ? "F" : "");
+ else if (m->ol_flags & RTE_IAVF_PKT_RX_DYNF_FLEX_DESC_IP_OFFSET)
+ printf(" - Flexible descriptor's Extraction: ip_offset=%u",
+ data.ip_ofs);
+}
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_PMD_IAVF_H_ */
diff --git a/drivers/net/iavf/rte_pmd_iavf_version.map b/drivers/net/iavf/rte_pmd_iavf_version.map
index 4a76d1d52..6c821c88d 100644
--- a/drivers/net/iavf/rte_pmd_iavf_version.map
+++ b/drivers/net/iavf/rte_pmd_iavf_version.map
@@ -1,3 +1,16 @@
DPDK_21 {
local: *;
};
+
+EXPERIMENTAL {
+ global:
+
+ # added in 20.11
+ rte_net_iavf_dynfield_flex_desc_metadata_offs;
+ rte_net_iavf_dynflag_flex_desc_vlan_mask;
+ rte_net_iavf_dynflag_flex_desc_ipv4_mask;
+ rte_net_iavf_dynflag_flex_desc_ipv6_mask;
+ rte_net_iavf_dynflag_flex_desc_ipv6_flow_mask;
+ rte_net_iavf_dynflag_flex_desc_tcp_mask;
+ rte_net_iavf_dynflag_flex_desc_ip_offset_mask;
+};
--
2.20.1
^ permalink raw reply [flat|nested] 40+ messages in thread
* Re: [dpdk-dev] [PATCH v3] net/iavf: support flex desc metadata extraction
2020-09-23 7:52 ` [dpdk-dev] [PATCH v3] " Jeff Guo
@ 2020-09-23 8:10 ` Wang, Haiyue
2020-09-23 8:22 ` Guo, Jia
0 siblings, 1 reply; 40+ messages in thread
From: Wang, Haiyue @ 2020-09-23 8:10 UTC (permalink / raw)
To: Guo, Jia, Wu, Jingjing, Zhang, Qi Z, Xing, Beilei; +Cc: dev
> -----Original Message-----
> From: Guo, Jia <jia.guo@intel.com>
> Sent: Wednesday, September 23, 2020 15:53
> To: Wu, Jingjing <jingjing.wu@intel.com>; Zhang, Qi Z <qi.z.zhang@intel.com>; Xing, Beilei
> <beilei.xing@intel.com>
> Cc: dev@dpdk.org; Wang, Haiyue <haiyue.wang@intel.com>; Guo, Jia <jia.guo@intel.com>
> Subject: [PATCH v3] net/iavf: support flex desc metadata extraction
>
> Enable metadata extraction for flexible descriptors in AVF, that would
> allow network function directly get metadata without additional parsing
> which would reduce the CPU cost for VFs. The enabling metadata
> extractions involve the metadata of VLAN/IPv4/IPv6/IPv6-FLOW/TCP/MPLS
> flexible descriptors, and the VF could negotiate the capability of
> the flexible descriptor with PF and correspondingly configure the
> specific offload at receiving queues.
>
> Signed-off-by: Jeff Guo <jia.guo@intel.com>
> ---
> v3:
> export these global symbols into .map
>
> v2:
> remove makefile change and modify the rxdid handling
> ---
> doc/guides/rel_notes/release_20_11.rst | 6 +
> drivers/net/iavf/iavf.h | 25 +-
> drivers/net/iavf/iavf_ethdev.c | 395 ++++++++++++++++++++++
> drivers/net/iavf/iavf_rxtx.c | 282 +++++++++++++--
> drivers/net/iavf/iavf_rxtx.h | 233 +++++++------
> drivers/net/iavf/iavf_vchnl.c | 22 +-
> drivers/net/iavf/meson.build | 2 +
> drivers/net/iavf/rte_pmd_iavf.h | 250 ++++++++++++++
> drivers/net/iavf/rte_pmd_iavf_version.map | 13 +
> 9 files changed, 1081 insertions(+), 147 deletions(-)
> create mode 100644 drivers/net/iavf/rte_pmd_iavf.h
>
> +enum iavf_flex_desc_type {
> + IAVF_FLEX_DESC_NONE,
> + IAVF_FLEX_DESC_VLAN,
> + IAVF_FLEX_DESC_IPV4,
> + IAVF_FLEX_DESC_IPV6,
> + IAVF_FLEX_DESC_IPV6_FLOW,
> + IAVF_FLEX_DESC_TCP,
> + IAVF_FLEX_DESC_OVS,
> + IAVF_FLEX_DESC_IP_OFFSET,
> + IAVF_FLEX_DESC_MAX,
> +};
Will the vector PMD also support extracting the above data types? Take ice as
an example: if the user specifies 'proto_xtr', the vector Rx path is disabled
and Rx is handled in a C function.
enum proto_xtr_type {
PROTO_XTR_NONE,
PROTO_XTR_VLAN,
PROTO_XTR_IPV4,
PROTO_XTR_IPV6,
PROTO_XTR_IPV6_FLOW,
PROTO_XTR_TCP,
PROTO_XTR_IP_OFFSET,
PROTO_XTR_MAX /* The last one */
};
static inline int
ice_rx_vec_queue_default(struct ice_rx_queue *rxq)
{
...
if (rxq->proto_xtr != PROTO_XTR_NONE)
return -1;
return 0;
}
> --
> 2.20.1
^ permalink raw reply [flat|nested] 40+ messages in thread
* Re: [dpdk-dev] [PATCH v3] net/iavf: support flex desc metadata extraction
2020-09-23 8:10 ` Wang, Haiyue
@ 2020-09-23 8:22 ` Guo, Jia
0 siblings, 0 replies; 40+ messages in thread
From: Guo, Jia @ 2020-09-23 8:22 UTC (permalink / raw)
To: Wang, Haiyue, Wu, Jingjing, Zhang, Qi Z, Xing, Beilei; +Cc: dev
> -----Original Message-----
> From: Wang, Haiyue <haiyue.wang@intel.com>
> Sent: Wednesday, September 23, 2020 4:11 PM
> To: Guo, Jia <jia.guo@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>;
> Zhang, Qi Z <qi.z.zhang@intel.com>; Xing, Beilei <beilei.xing@intel.com>
> Cc: dev@dpdk.org
> Subject: RE: [PATCH v3] net/iavf: support flex desc metadata extraction
>
> > -----Original Message-----
> > From: Guo, Jia <jia.guo@intel.com>
> > Sent: Wednesday, September 23, 2020 15:53
> > To: Wu, Jingjing <jingjing.wu@intel.com>; Zhang, Qi Z
> > <qi.z.zhang@intel.com>; Xing, Beilei <beilei.xing@intel.com>
> > Cc: dev@dpdk.org; Wang, Haiyue <haiyue.wang@intel.com>; Guo, Jia
> > <jia.guo@intel.com>
> > Subject: [PATCH v3] net/iavf: support flex desc metadata extraction
> >
> > Enable metadata extraction for flexible descriptors in AVF, that would
> > allow network function directly get metadata without additional
> > parsing which would reduce the CPU cost for VFs. The enabling metadata
> > extractions involve the metadata of VLAN/IPv4/IPv6/IPv6-
> FLOW/TCP/MPLS
> > flexible descriptors, and the VF could negotiate the capability of the
> > flexible descriptor with PF and correspondingly configure the specific
> > offload at receiving queues.
> >
> > Signed-off-by: Jeff Guo <jia.guo@intel.com>
> > ---
> > v3:
> > export these global symbols into .map
> >
> > v2:
> > remove makefile change and modify the rxdid handling
> > ---
> > doc/guides/rel_notes/release_20_11.rst | 6 +
> > drivers/net/iavf/iavf.h | 25 +-
> > drivers/net/iavf/iavf_ethdev.c | 395 ++++++++++++++++++++++
> > drivers/net/iavf/iavf_rxtx.c | 282 +++++++++++++--
> > drivers/net/iavf/iavf_rxtx.h | 233 +++++++------
> > drivers/net/iavf/iavf_vchnl.c | 22 +-
> > drivers/net/iavf/meson.build | 2 +
> > drivers/net/iavf/rte_pmd_iavf.h | 250 ++++++++++++++
> > drivers/net/iavf/rte_pmd_iavf_version.map | 13 +
> > 9 files changed, 1081 insertions(+), 147 deletions(-) create mode
> > 100644 drivers/net/iavf/rte_pmd_iavf.h
> >
>
>
> > +enum iavf_flex_desc_type {
> > +IAVF_FLEX_DESC_NONE,
> > +IAVF_FLEX_DESC_VLAN,
> > +IAVF_FLEX_DESC_IPV4,
> > +IAVF_FLEX_DESC_IPV6,
> > +IAVF_FLEX_DESC_IPV6_FLOW,
> > +IAVF_FLEX_DESC_TCP,
> > +IAVF_FLEX_DESC_OVS,
> > +IAVF_FLEX_DESC_IP_OFFSET,
> > +IAVF_FLEX_DESC_MAX,
> > +};
>
> The vector PMD will also support extract the above data type ? Take ice as an
> example, if user specifies the 'proto_xtr', the vector Rx path will be disabled,
> it will be handled in C function.
>
> enum proto_xtr_type {
> PROTO_XTR_NONE,
> PROTO_XTR_VLAN,
> PROTO_XTR_IPV4,
> PROTO_XTR_IPV6,
> PROTO_XTR_IPV6_FLOW,
> PROTO_XTR_TCP,
> PROTO_XTR_IP_OFFSET,
> PROTO_XTR_MAX /* The last one */
> };
>
> static inline int
> ice_rx_vec_queue_default(struct ice_rx_queue *rxq) { ...
>
> if (rxq->proto_xtr != PROTO_XTR_NONE)
> return -1;
>
> return 0;
> }
>
You are right, the vector path will not support extraction; this version lacks the handling for that.
>
> > --
> > 2.20.1
>
^ permalink raw reply [flat|nested] 40+ messages in thread
* [dpdk-dev] [PATCH v4] net/iavf: support flex desc metadata extraction
2020-09-09 2:54 [dpdk-dev] [PATCH v1] net/iavf: support flex desc metadata extraction Jeff Guo
` (2 preceding siblings ...)
2020-09-23 7:52 ` [dpdk-dev] [PATCH v3] " Jeff Guo
@ 2020-09-23 15:36 ` Jeff Guo
2020-09-25 6:23 ` [dpdk-dev] [PATCH v5] " Jeff Guo
` (9 subsequent siblings)
13 siblings, 0 replies; 40+ messages in thread
From: Jeff Guo @ 2020-09-23 15:36 UTC (permalink / raw)
To: jingjing.wu, qi.z.zhang, beilei.xing; +Cc: dev, haiyue.wang, jia.guo
Enable metadata extraction for flexible descriptors in AVF, which allows
network functions to get metadata directly without additional parsing and
so reduces the CPU cost for VFs. The metadata extraction covers the
VLAN/IPv4/IPv6/IPv6-FLOW/TCP/MPLS flexible descriptors, and the VF can
negotiate the flexible descriptor capability with the PF and configure
the specific offload on the receive queues accordingly.
Signed-off-by: Jeff Guo <jia.guo@intel.com>
---
v4:
add flex desc type in rx queue for handling vector path
handle ovs flex type
v3:
export these global symbols into .map
v2:
remove makefile change and modify the rxdid handling
---
doc/guides/rel_notes/release_20_11.rst | 6 +
drivers/net/iavf/iavf.h | 25 +-
drivers/net/iavf/iavf_ethdev.c | 399 ++++++++++++++++++++++
drivers/net/iavf/iavf_rxtx.c | 284 +++++++++++++--
drivers/net/iavf/iavf_rxtx.h | 234 +++++++------
drivers/net/iavf/iavf_rxtx_vec_common.h | 3 +
drivers/net/iavf/iavf_vchnl.c | 22 +-
drivers/net/iavf/meson.build | 2 +
drivers/net/iavf/rte_pmd_iavf.h | 250 ++++++++++++++
drivers/net/iavf/rte_pmd_iavf_version.map | 13 +
10 files changed, 1091 insertions(+), 147 deletions(-)
create mode 100644 drivers/net/iavf/rte_pmd_iavf.h
diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index d4a66d045..054424d94 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -62,6 +62,12 @@ New Features
* Added support for non-zero priorities for group 0 flows
* Added support for VXLAN decap combined with VLAN pop
+* **Updated Intel iavf driver.**
+
+ Updated iavf PMD with new features and improvements, including:
+
+ * Added support for flexible descriptor metadata extraction.
+
Removed Items
-------------
diff --git a/drivers/net/iavf/iavf.h b/drivers/net/iavf/iavf.h
index 3198d85b3..44e28df56 100644
--- a/drivers/net/iavf/iavf.h
+++ b/drivers/net/iavf/iavf.h
@@ -119,7 +119,7 @@ struct iavf_info {
struct virtchnl_vf_resource *vf_res; /* VF resource */
struct virtchnl_vsi_resource *vsi_res; /* LAN VSI */
uint64_t supported_rxdid;
-
+ uint8_t *flex_desc; /* flexible descriptor type for all queues */
volatile enum virtchnl_ops pend_cmd; /* pending command not finished */
uint32_t cmd_retval; /* return value of the cmd response from PF */
uint8_t *aq_resp; /* buffer to store the adminq response from PF */
@@ -153,6 +153,28 @@ struct iavf_info {
#define IAVF_MAX_PKT_TYPE 1024
+#define IAVF_MAX_QUEUE_NUM 2048
+
+enum iavf_flex_desc_type {
+ IAVF_FLEX_DESC_NONE,
+ IAVF_FLEX_DESC_VLAN,
+ IAVF_FLEX_DESC_IPV4,
+ IAVF_FLEX_DESC_IPV6,
+ IAVF_FLEX_DESC_IPV6_FLOW,
+ IAVF_FLEX_DESC_TCP,
+ IAVF_FLEX_DESC_OVS,
+ IAVF_FLEX_DESC_IP_OFFSET,
+ IAVF_FLEX_DESC_MAX,
+};
+
+/**
+ * Cache devargs parse result.
+ */
+struct iavf_devargs {
+ uint8_t flex_desc_dflt;
+ uint8_t flex_desc[IAVF_MAX_QUEUE_NUM];
+};
+
/* Structure to store private data for each VF instance. */
struct iavf_adapter {
struct iavf_hw hw;
@@ -166,6 +188,7 @@ struct iavf_adapter {
const uint32_t *ptype_tbl;
bool stopped;
uint16_t fdir_ref_cnt;
+ struct iavf_devargs devargs;
};
/* IAVF_DEV_PRIVATE_TO */
diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
index 440da7d76..e057fd875 100644
--- a/drivers/net/iavf/iavf_ethdev.c
+++ b/drivers/net/iavf/iavf_ethdev.c
@@ -28,6 +28,49 @@
#include "iavf.h"
#include "iavf_rxtx.h"
#include "iavf_generic_flow.h"
+#include "rte_pmd_iavf.h"
+
+/* devargs */
+#define IAVF_FLEX_DESC_ARG "flex_desc"
+
+static const char * const iavf_valid_args[] = {
+ IAVF_FLEX_DESC_ARG,
+ NULL
+};
+
+static const struct rte_mbuf_dynfield iavf_flex_desc_metadata_param = {
+ .name = "iavf_dynfield_flex_desc_metadata",
+ .size = sizeof(uint32_t),
+ .align = __alignof__(uint32_t),
+ .flags = 0,
+};
+
+struct iavf_flex_desc_xtr_ol {
+ const struct rte_mbuf_dynflag param;
+ uint64_t *ol_flag;
+ bool required;
+};
+
+static struct iavf_flex_desc_xtr_ol iavf_flex_desc_xtr_params[] = {
+ [IAVF_FLEX_DESC_VLAN] = {
+ .param = { .name = "iavf_dynflag_flex_desc_vlan" },
+ .ol_flag = &rte_net_iavf_dynflag_flex_desc_vlan_mask },
+ [IAVF_FLEX_DESC_IPV4] = {
+ .param = { .name = "iavf_dynflag_flex_desc_ipv4" },
+ .ol_flag = &rte_net_iavf_dynflag_flex_desc_ipv4_mask },
+ [IAVF_FLEX_DESC_IPV6] = {
+ .param = { .name = "iavf_dynflag_flex_desc_ipv6" },
+ .ol_flag = &rte_net_iavf_dynflag_flex_desc_ipv6_mask },
+ [IAVF_FLEX_DESC_IPV6_FLOW] = {
+ .param = { .name = "iavf_dynflag_flex_desc_ipv6_flow" },
+ .ol_flag = &rte_net_iavf_dynflag_flex_desc_ipv6_flow_mask },
+ [IAVF_FLEX_DESC_TCP] = {
+ .param = { .name = "iavf_dynflag_flex_desc_tcp" },
+ .ol_flag = &rte_net_iavf_dynflag_flex_desc_tcp_mask },
+ [IAVF_FLEX_DESC_IP_OFFSET] = {
+ .param = { .name = "ice_dynflag_flex_desc_ip_offset" },
+ .ol_flag = &rte_net_iavf_dynflag_flex_desc_ip_offset_mask },
+};
static int iavf_dev_configure(struct rte_eth_dev *dev);
static int iavf_dev_start(struct rte_eth_dev *dev);
@@ -1213,6 +1256,354 @@ iavf_check_vf_reset_done(struct iavf_hw *hw)
return 0;
}
+static int
+iavf_lookup_flex_desc_type(const char *flex_name)
+{
+ static struct {
+ const char *name;
+ enum iavf_flex_desc_type type;
+ } flex_type_map[] = {
+ { "vlan", IAVF_FLEX_DESC_VLAN },
+ { "ipv4", IAVF_FLEX_DESC_IPV4 },
+ { "ipv6", IAVF_FLEX_DESC_IPV6 },
+ { "ipv6_flow", IAVF_FLEX_DESC_IPV6_FLOW },
+ { "tcp", IAVF_FLEX_DESC_TCP },
+ { "ovs", IAVF_FLEX_DESC_OVS },
+ { "ip_offset", IAVF_FLEX_DESC_IP_OFFSET },
+ };
+ uint32_t i;
+
+ for (i = 0; i < RTE_DIM(flex_type_map); i++) {
+ if (strcmp(flex_name, flex_type_map[i].name) == 0)
+ return flex_type_map[i].type;
+ }
+
+ PMD_DRV_LOG(ERR, "wrong flex_desc type, "
+ "it should be: vlan|ipv4|ipv6|ipv6_flow|tcp|ovs|ip_offset");
+
+ return -1;
+}
+
+/**
+ * Parse elem, the elem could be single number/range or '(' ')' group
+ * 1) A single number elem, it's just a simple digit. e.g. 9
+ * 2) A single range elem, two digits with a '-' between. e.g. 2-6
+ * 3) A group elem, combines multiple 1) or 2) with '( )'. e.g (0,2-4,6)
+ * Within group elem, '-' used for a range separator;
+ * ',' used for a single number.
+ */
+static int
+iavf_parse_queue_set(const char *input, int flex_type,
+ struct iavf_devargs *devargs)
+{
+ const char *str = input;
+ char *end = NULL;
+ uint32_t min, max;
+ uint32_t idx;
+
+ while (isblank(*str))
+ str++;
+
+ if (!isdigit(*str) && *str != '(')
+ return -1;
+
+ /* process single number or single range of number */
+ if (*str != '(') {
+ errno = 0;
+ idx = strtoul(str, &end, 10);
+ if (errno || !end || idx >= IAVF_MAX_QUEUE_NUM)
+ return -1;
+
+ while (isblank(*end))
+ end++;
+
+ min = idx;
+ max = idx;
+
+ /* process single <number>-<number> */
+ if (*end == '-') {
+ end++;
+ while (isblank(*end))
+ end++;
+ if (!isdigit(*end))
+ return -1;
+
+ errno = 0;
+ idx = strtoul(end, &end, 10);
+ if (errno || !end || idx >= IAVF_MAX_QUEUE_NUM)
+ return -1;
+
+ max = idx;
+ while (isblank(*end))
+ end++;
+ }
+
+ if (*end != ':')
+ return -1;
+
+ for (idx = RTE_MIN(min, max);
+ idx <= RTE_MAX(min, max); idx++)
+ devargs->flex_desc[idx] = flex_type;
+
+ return 0;
+ }
+
+ /* process set within bracket */
+ str++;
+ while (isblank(*str))
+ str++;
+ if (*str == '\0')
+ return -1;
+
+ min = IAVF_MAX_QUEUE_NUM;
+ do {
+ /* go ahead to the first digit */
+ while (isblank(*str))
+ str++;
+ if (!isdigit(*str))
+ return -1;
+
+ /* get the digit value */
+ errno = 0;
+ idx = strtoul(str, &end, 10);
+ if (errno || !end || idx >= IAVF_MAX_QUEUE_NUM)
+ return -1;
+
+ /* go ahead to separator '-',',' and ')' */
+ while (isblank(*end))
+ end++;
+ if (*end == '-') {
+ if (min == IAVF_MAX_QUEUE_NUM)
+ min = idx;
+ else /* avoid continuous '-' */
+ return -1;
+ } else if (*end == ',' || *end == ')') {
+ max = idx;
+ if (min == IAVF_MAX_QUEUE_NUM)
+ min = idx;
+
+ for (idx = RTE_MIN(min, max);
+ idx <= RTE_MAX(min, max); idx++)
+ devargs->flex_desc[idx] = flex_type;
+
+ min = IAVF_MAX_QUEUE_NUM;
+ } else {
+ return -1;
+ }
+
+ str = end + 1;
+ } while (*end != ')' && *end != '\0');
+
+ return 0;
+}
+
+static int
+iavf_parse_queue_flex_desc(const char *queues, struct iavf_devargs *devargs)
+{
+ const char *queue_start;
+ uint32_t idx;
+ int flex_type;
+ char flex_name[32];
+
+ while (isblank(*queues))
+ queues++;
+
+ if (*queues != '[') {
+ flex_type = iavf_lookup_flex_desc_type(queues);
+ if (flex_type < 0)
+ return -1;
+
+ devargs->flex_desc_dflt = flex_type;
+
+ return 0;
+ }
+
+ queues++;
+ do {
+ while (isblank(*queues))
+ queues++;
+ if (*queues == '\0')
+ return -1;
+
+ queue_start = queues;
+
+ /* go across a complete bracket */
+ if (*queue_start == '(') {
+ queues += strcspn(queues, ")");
+ if (*queues != ')')
+ return -1;
+ }
+
+ /* scan the separator ':' */
+ queues += strcspn(queues, ":");
+ if (*queues++ != ':')
+ return -1;
+ while (isblank(*queues))
+ queues++;
+
+ for (idx = 0; ; idx++) {
+ if (isblank(queues[idx]) ||
+ queues[idx] == ',' ||
+ queues[idx] == ']' ||
+ queues[idx] == '\0')
+ break;
+
+ if (idx > sizeof(flex_name) - 2)
+ return -1;
+
+ flex_name[idx] = queues[idx];
+ }
+ flex_name[idx] = '\0';
+ flex_type = iavf_lookup_flex_desc_type(flex_name);
+ if (flex_type < 0)
+ return -1;
+
+ queues += idx;
+
+ while (isblank(*queues) || *queues == ',' || *queues == ']')
+ queues++;
+
+ if (iavf_parse_queue_set(queue_start, flex_type, devargs) < 0)
+ return -1;
+ } while (*queues != '\0');
+
+ return 0;
+}
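For reference, the devargs grammar the two parsers above accept looks like `flex_desc=[(0-3):vlan,(4):ipv4]`. A minimal standalone sketch of the bracketed range step only (the name and queue limit below are illustrative, not the driver's):

```c
#include <assert.h>
#include <stdlib.h>

#define MAX_QUEUE 16 /* illustrative limit, not IAVF_MAX_QUEUE_NUM */

/* Parse a set like "(2-5)" or "(3)" and return how many queues it covers,
 * or -1 on malformed input; mirrors the min/max handling above. */
static int queue_set_width(const char *s)
{
	unsigned long lo, hi;
	char *end;

	if (*s++ != '(')
		return -1;
	lo = strtoul(s, &end, 10);
	if (end == s || lo >= MAX_QUEUE)
		return -1;
	if (*end == '-') {
		s = end + 1;
		hi = strtoul(s, &end, 10);
		if (end == s || hi >= MAX_QUEUE)
			return -1;
	} else {
		hi = lo; /* single queue */
	}
	if (*end != ')')
		return -1;
	/* ranges may be written in either order, like RTE_MIN/RTE_MAX above */
	return (int)(lo < hi ? hi - lo : lo - hi) + 1;
}
```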
+
+static int
+iavf_handle_flex_desc_arg(__rte_unused const char *key, const char *value,
+ void *extra_args)
+{
+ struct iavf_devargs *devargs = extra_args;
+
+ if (!value || !extra_args)
+ return -EINVAL;
+
+ if (iavf_parse_queue_flex_desc(value, devargs) < 0) {
+ PMD_DRV_LOG(ERR, "invalid flex_desc parameter: '%s'",
+ value);
+ return -1;
+ }
+
+ return 0;
+}
+
+static int iavf_parse_devargs(struct rte_eth_dev *dev)
+{
+ struct iavf_adapter *ad =
+ IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+ struct rte_devargs *devargs = dev->device->devargs;
+ struct rte_kvargs *kvlist;
+ int ret;
+
+ if (!devargs)
+ return 0;
+
+ kvlist = rte_kvargs_parse(devargs->args, iavf_valid_args);
+ if (!kvlist) {
+ PMD_INIT_LOG(ERR, "invalid kvargs key");
+ return -EINVAL;
+ }
+
+ ad->devargs.flex_desc_dflt = IAVF_FLEX_DESC_NONE;
+ memset(ad->devargs.flex_desc, IAVF_FLEX_DESC_NONE,
+ sizeof(ad->devargs.flex_desc));
+
+ ret = rte_kvargs_process(kvlist, IAVF_FLEX_DESC_ARG,
+ &iavf_handle_flex_desc_arg, &ad->devargs);
+
+ rte_kvargs_free(kvlist);
+ return ret;
+}
+
+static void
+iavf_init_flex_desc(struct rte_eth_dev *dev)
+{
+ struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+ struct iavf_adapter *ad =
+ IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+ const struct iavf_flex_desc_xtr_ol *xtr_ol;
+ bool flex_desc_enable = false;
+ int offset;
+ uint16_t i;
+
+ vf->flex_desc = rte_zmalloc("vf flex desc",
+ vf->vsi_res->num_queue_pairs, 0);
+ if (unlikely(!(vf->flex_desc))) {
+ PMD_DRV_LOG(ERR, "failed to allocate the flex_desc table");
+ return;
+ }
+
+ for (i = 0; i < vf->vsi_res->num_queue_pairs; i++) {
+ vf->flex_desc[i] = ad->devargs.flex_desc[i] !=
+ IAVF_FLEX_DESC_NONE ?
+ ad->devargs.flex_desc[i] :
+ ad->devargs.flex_desc_dflt;
+
+ if (vf->flex_desc[i] != IAVF_FLEX_DESC_NONE) {
+ /* no metadata extraction for OVS */
+ if (vf->flex_desc[i] != IAVF_FLEX_DESC_OVS) {
+ uint8_t tp = vf->flex_desc[i];
+
+ iavf_flex_desc_xtr_params[tp].required = true;
+ }
+
+ flex_desc_enable = true;
+ }
+ }
+
+ if (likely(!flex_desc_enable))
+ return;
+
+ offset = rte_mbuf_dynfield_register(&iavf_flex_desc_metadata_param);
+ if (unlikely(offset == -1)) {
+ PMD_DRV_LOG(ERR,
+ "failed to register flex_desc metadata dynfield, error %d",
+ -rte_errno);
+ return;
+ }
+
+ PMD_DRV_LOG(DEBUG,
+ "flex_desc extraction metadata offset in mbuf is: %d",
+ offset);
+ rte_net_iavf_dynfield_flex_desc_metadata_offs = offset;
+
+ for (i = 0; i < RTE_DIM(iavf_flex_desc_xtr_params); i++) {
+ xtr_ol = &iavf_flex_desc_xtr_params[i];
+
+ uint8_t rxdid = iavf_flex_desc_type_to_rxdid((uint8_t)i);
+
+ if (!xtr_ol->required)
+ continue;
+
+ if (!(vf->supported_rxdid & BIT(rxdid))) {
+ PMD_DRV_LOG(ERR,
+ "rxdid[%u] is not supported in hardware",
+ rxdid);
+ rte_net_iavf_dynfield_flex_desc_metadata_offs = -1;
+ break;
+ }
+
+ offset = rte_mbuf_dynflag_register(&xtr_ol->param);
+ if (unlikely(offset == -1)) {
+ PMD_DRV_LOG(ERR,
+ "failed to register extraction offload '%s', error %d",
+ xtr_ol->param.name, -rte_errno);
+
+ rte_net_iavf_dynfield_flex_desc_metadata_offs = -1;
+ break;
+ }
+
+ PMD_DRV_LOG(DEBUG,
+ "flex_desc extraction offload '%s' offset in mbuf is: %d",
+ xtr_ol->param.name, offset);
+ *xtr_ol->ol_flag = 1ULL << offset;
+ }
+}
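The loop above converts each successful registration into a cached bit mask (`1ULL << offset`) so the RX path can tag `ol_flags` with a single OR. A toy model of that bookkeeping, with a simulated bit allocator standing in for DPDK's real dynflag registry:

```c
#include <assert.h>
#include <stdint.h>

static int next_free_bit = 20; /* pretend ol_flags bits 0-19 are taken */

/* Stand-in for rte_mbuf_dynflag_register(): hand out the next free bit,
 * or -1 when the 64-bit ol_flags space is exhausted. */
static int fake_dynflag_register(void)
{
	if (next_free_bit > 63)
		return -1;
	return next_free_bit++;
}

/* Cache the flag as a mask, as the driver does with *xtr_ol->ol_flag. */
static uint64_t register_xtr_flag(void)
{
	int offset = fake_dynflag_register();

	return offset < 0 ? 0 : 1ULL << offset;
}
```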
+
static int
iavf_init_vf(struct rte_eth_dev *dev)
{
@@ -1222,6 +1613,12 @@ iavf_init_vf(struct rte_eth_dev *dev)
struct iavf_hw *hw = IAVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+ err = iavf_parse_devargs(dev);
+ if (err) {
+ PMD_INIT_LOG(ERR, "Failed to parse devargs");
+ goto err;
+ }
+
err = iavf_set_mac_type(hw);
if (err) {
PMD_INIT_LOG(ERR, "set_mac_type failed: %d", err);
@@ -1287,6 +1684,8 @@ iavf_init_vf(struct rte_eth_dev *dev)
vf->vf_reset = false;
+ iavf_init_flex_desc(dev);
+
return 0;
err_rss:
rte_free(vf->rss_key);
diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
index 05a7dd898..ec6609178 100644
--- a/drivers/net/iavf/iavf_rxtx.c
+++ b/drivers/net/iavf/iavf_rxtx.c
@@ -26,6 +26,36 @@
#include "iavf.h"
#include "iavf_rxtx.h"
+#include "rte_pmd_iavf.h"
+
+/* Offset of mbuf dynamic field for flexible descriptor's extraction data */
+int rte_net_iavf_dynfield_flex_desc_metadata_offs = -1;
+
+/* Mask of mbuf dynamic flags for flexible descriptor's type */
+uint64_t rte_net_iavf_dynflag_flex_desc_vlan_mask;
+uint64_t rte_net_iavf_dynflag_flex_desc_ipv4_mask;
+uint64_t rte_net_iavf_dynflag_flex_desc_ipv6_mask;
+uint64_t rte_net_iavf_dynflag_flex_desc_ipv6_flow_mask;
+uint64_t rte_net_iavf_dynflag_flex_desc_tcp_mask;
+uint64_t rte_net_iavf_dynflag_flex_desc_ip_offset_mask;
+
+uint8_t
+iavf_flex_desc_type_to_rxdid(uint8_t flex_type)
+{
+ static uint8_t rxdid_map[] = {
+ [IAVF_FLEX_DESC_NONE] = IAVF_RXDID_COMMS_GENERIC,
+ [IAVF_FLEX_DESC_VLAN] = IAVF_RXDID_COMMS_AUX_VLAN,
+ [IAVF_FLEX_DESC_IPV4] = IAVF_RXDID_COMMS_AUX_IPV4,
+ [IAVF_FLEX_DESC_IPV6] = IAVF_RXDID_COMMS_AUX_IPV6,
+ [IAVF_FLEX_DESC_IPV6_FLOW] = IAVF_RXDID_COMMS_AUX_IPV6_FLOW,
+ [IAVF_FLEX_DESC_TCP] = IAVF_RXDID_COMMS_AUX_TCP,
+ [IAVF_FLEX_DESC_OVS] = IAVF_RXDID_COMMS_OVS_1,
+ [IAVF_FLEX_DESC_IP_OFFSET] = IAVF_RXDID_COMMS_AUX_IP_OFFSET,
+ };
+
+ return flex_type < RTE_DIM(rxdid_map) ?
+ rxdid_map[flex_type] : IAVF_RXDID_COMMS_GENERIC;
+}
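The mapping above is a designated-initializer lookup table with a generic fallback for out-of-range types. A self-contained sketch of the same pattern (the enum values and RXDID numbers here are placeholders, not the driver's):

```c
#include <assert.h>
#include <stdint.h>

enum { XTR_NONE, XTR_VLAN, XTR_IPV4 };
enum { RXDID_GENERIC = 16, RXDID_VLAN = 17, RXDID_IPV4 = 18 };

/* Map an extraction type to a descriptor ID, falling back to the
 * generic descriptor for anything outside the table. */
static uint8_t xtr_type_to_rxdid(uint8_t type)
{
	static const uint8_t map[] = {
		[XTR_NONE] = RXDID_GENERIC,
		[XTR_VLAN] = RXDID_VLAN,
		[XTR_IPV4] = RXDID_IPV4,
	};

	return type < sizeof(map) / sizeof(map[0]) ?
		map[type] : RXDID_GENERIC;
}
```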
static inline int
check_rx_thresh(uint16_t nb_desc, uint16_t thresh)
@@ -294,6 +324,160 @@ static const struct iavf_txq_ops def_txq_ops = {
.release_mbufs = release_txq_mbufs,
};
+static inline void
+iavf_rxd_to_pkt_fields_by_comms_ovs(__rte_unused struct iavf_rx_queue *rxq,
+ struct rte_mbuf *mb,
+ volatile union iavf_rx_flex_desc *rxdp)
+{
+ volatile struct iavf_32b_rx_flex_desc_comms_ovs *desc =
+ (volatile struct iavf_32b_rx_flex_desc_comms_ovs *)rxdp;
+#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
+ uint16_t stat_err;
+#endif
+
+ if (desc->flow_id != 0xFFFFFFFF) {
+ mb->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+ mb->hash.fdir.hi = rte_le_to_cpu_32(desc->flow_id);
+ }
+
+#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
+ stat_err = rte_le_to_cpu_16(desc->status_error0);
+ if (likely(stat_err & (1 << IAVF_RX_FLEX_DESC_STATUS0_RSS_VALID_S))) {
+ mb->ol_flags |= PKT_RX_RSS_HASH;
+ mb->hash.rss = rte_le_to_cpu_32(desc->rss_hash);
+ }
+#endif
+}
+
+static inline void
+iavf_rxd_to_pkt_fields_by_comms_aux_v1(struct iavf_rx_queue *rxq,
+ struct rte_mbuf *mb,
+ volatile union iavf_rx_flex_desc *rxdp)
+{
+ volatile struct iavf_32b_rx_flex_desc_comms *desc =
+ (volatile struct iavf_32b_rx_flex_desc_comms *)rxdp;
+ uint16_t stat_err;
+
+ stat_err = rte_le_to_cpu_16(desc->status_error0);
+ if (likely(stat_err & (1 << IAVF_RX_FLEX_DESC_STATUS0_RSS_VALID_S))) {
+ mb->ol_flags |= PKT_RX_RSS_HASH;
+ mb->hash.rss = rte_le_to_cpu_32(desc->rss_hash);
+ }
+
+#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
+ if (desc->flow_id != 0xFFFFFFFF) {
+ mb->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+ mb->hash.fdir.hi = rte_le_to_cpu_32(desc->flow_id);
+ }
+
+ if (rxq->xtr_ol_flag) {
+ uint32_t metadata = 0;
+
+ stat_err = rte_le_to_cpu_16(desc->status_error1);
+
+ if (stat_err & (1 << IAVF_RX_FLEX_DESC_STATUS1_XTRMD4_VALID_S))
+ metadata = rte_le_to_cpu_16(desc->flex_ts.flex.aux0);
+
+ if (stat_err & (1 << IAVF_RX_FLEX_DESC_STATUS1_XTRMD5_VALID_S))
+ metadata |=
+ rte_le_to_cpu_16(desc->flex_ts.flex.aux1) << 16;
+
+ if (metadata) {
+ mb->ol_flags |= rxq->xtr_ol_flag;
+
+ *RTE_NET_IAVF_DYNF_FLEX_DESC_METADATA(mb) = metadata;
+ }
+ }
+#endif
+}
+
+static inline void
+iavf_rxd_to_pkt_fields_by_comms_aux_v2(struct iavf_rx_queue *rxq,
+ struct rte_mbuf *mb,
+ volatile union iavf_rx_flex_desc *rxdp)
+{
+ volatile struct iavf_32b_rx_flex_desc_comms *desc =
+ (volatile struct iavf_32b_rx_flex_desc_comms *)rxdp;
+ uint16_t stat_err;
+
+ stat_err = rte_le_to_cpu_16(desc->status_error0);
+ if (likely(stat_err & (1 << IAVF_RX_FLEX_DESC_STATUS0_RSS_VALID_S))) {
+ mb->ol_flags |= PKT_RX_RSS_HASH;
+ mb->hash.rss = rte_le_to_cpu_32(desc->rss_hash);
+ }
+
+#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
+ if (desc->flow_id != 0xFFFFFFFF) {
+ mb->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+ mb->hash.fdir.hi = rte_le_to_cpu_32(desc->flow_id);
+ }
+
+ if (rxq->xtr_ol_flag) {
+ uint32_t metadata = 0;
+
+ if (desc->flex_ts.flex.aux0 != 0xFFFF)
+ metadata = rte_le_to_cpu_16(desc->flex_ts.flex.aux0);
+ else if (desc->flex_ts.flex.aux1 != 0xFFFF)
+ metadata = rte_le_to_cpu_16(desc->flex_ts.flex.aux1);
+
+ if (metadata) {
+ mb->ol_flags |= rxq->xtr_ol_flag;
+
+ *RTE_NET_IAVF_DYNF_FLEX_DESC_METADATA(mb) = metadata;
+ }
+ }
+#endif
+}
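Both handlers above fold the two 16-bit auxiliary words into one 32-bit metadata value; the v1 variant gates each half on the XTRMD4/XTRMD5 valid bits. A minimal model of that assembly (the bit positions follow the status_error_1 enum in this patch, everything else is standalone):

```c
#include <assert.h>
#include <stdint.h>

#define XTRMD4_VALID (1u << 14) /* aux0 holds metadata */
#define XTRMD5_VALID (1u << 15) /* aux1 holds metadata */

/* Combine aux0/aux1 into the 32-bit dynfield value, low word first. */
static uint32_t assemble_metadata(uint16_t stat_err1,
				  uint16_t aux0, uint16_t aux1)
{
	uint32_t md = 0;

	if (stat_err1 & XTRMD4_VALID)
		md = aux0;
	if (stat_err1 & XTRMD5_VALID)
		md |= (uint32_t)aux1 << 16;
	return md;
}
```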
+
+static void
+iavf_select_rxd_to_pkt_fields_handler(struct iavf_rx_queue *rxq, uint32_t rxdid)
+{
+ switch (rxdid) {
+ case IAVF_RXDID_COMMS_AUX_VLAN:
+ rxq->xtr_ol_flag = rte_net_iavf_dynflag_flex_desc_vlan_mask;
+ rxq->rxd_to_pkt_fields =
+ iavf_rxd_to_pkt_fields_by_comms_aux_v1;
+ break;
+ case IAVF_RXDID_COMMS_AUX_IPV4:
+ rxq->xtr_ol_flag = rte_net_iavf_dynflag_flex_desc_ipv4_mask;
+ rxq->rxd_to_pkt_fields =
+ iavf_rxd_to_pkt_fields_by_comms_aux_v1;
+ break;
+ case IAVF_RXDID_COMMS_AUX_IPV6:
+ rxq->xtr_ol_flag = rte_net_iavf_dynflag_flex_desc_ipv6_mask;
+ rxq->rxd_to_pkt_fields =
+ iavf_rxd_to_pkt_fields_by_comms_aux_v1;
+ break;
+ case IAVF_RXDID_COMMS_AUX_IPV6_FLOW:
+ rxq->xtr_ol_flag =
+ rte_net_iavf_dynflag_flex_desc_ipv6_flow_mask;
+ rxq->rxd_to_pkt_fields =
+ iavf_rxd_to_pkt_fields_by_comms_aux_v1;
+ break;
+ case IAVF_RXDID_COMMS_AUX_TCP:
+ rxq->xtr_ol_flag = rte_net_iavf_dynflag_flex_desc_tcp_mask;
+ rxq->rxd_to_pkt_fields =
+ iavf_rxd_to_pkt_fields_by_comms_aux_v1;
+ break;
+ case IAVF_RXDID_COMMS_AUX_IP_OFFSET:
+ rxq->xtr_ol_flag =
+ rte_net_iavf_dynflag_flex_desc_ip_offset_mask;
+ rxq->rxd_to_pkt_fields =
+ iavf_rxd_to_pkt_fields_by_comms_aux_v2;
+ break;
+ case IAVF_RXDID_COMMS_OVS_1:
+ rxq->rxd_to_pkt_fields = iavf_rxd_to_pkt_fields_by_comms_ovs;
+ break;
+ default:
+ /* update this according to the RXDID for FLEX_DESC_NONE */
+ rxq->rxd_to_pkt_fields = iavf_rxd_to_pkt_fields_by_comms_ovs;
+ break;
+ }
+
+ if (!rte_net_iavf_dynf_flex_desc_metadata_avail())
+ rxq->xtr_ol_flag = 0;
+}
+
int
iavf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
uint16_t nb_desc, unsigned int socket_id,
@@ -309,6 +493,7 @@ iavf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
struct iavf_rx_queue *rxq;
const struct rte_memzone *mz;
uint32_t ring_size;
+ uint8_t flex_desc;
uint16_t len;
uint16_t rx_free_thresh;
@@ -346,14 +531,18 @@ iavf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
return -ENOMEM;
}
- if (vf->vf_res->vf_cap_flags &
- VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC &&
- vf->supported_rxdid & BIT(IAVF_RXDID_COMMS_OVS_1)) {
- rxq->rxdid = IAVF_RXDID_COMMS_OVS_1;
+ if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC) {
+ flex_desc = vf->flex_desc ? vf->flex_desc[queue_idx] :
+ IAVF_FLEX_DESC_NONE;
+ rxq->rxdid = iavf_flex_desc_type_to_rxdid(flex_desc);
+ rxq->flex_desc = flex_desc;
} else {
rxq->rxdid = IAVF_RXDID_LEGACY_1;
+ rxq->flex_desc = IAVF_FLEX_DESC_NONE;
}
+ iavf_select_rxd_to_pkt_fields_handler(rxq, rxq->rxdid);
+
rxq->mp = mp;
rxq->nb_rx_desc = nb_desc;
rxq->rx_free_thresh = rx_free_thresh;
@@ -715,6 +904,45 @@ iavf_stop_queues(struct rte_eth_dev *dev)
}
}
+#define IAVF_RX_FLEX_ERR0_BITS \
+ ((1 << IAVF_RX_FLEX_DESC_STATUS0_HBO_S) | \
+ (1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_IPE_S) | \
+ (1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_L4E_S) | \
+ (1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_EIPE_S) | \
+ (1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_EUDPE_S) | \
+ (1 << IAVF_RX_FLEX_DESC_STATUS0_RXE_S))
+
+/* Rx L3/L4 checksum */
+static inline uint64_t
+iavf_rxd_error_to_pkt_flags(uint16_t stat_err0)
+{
+ uint64_t flags = 0;
+
+ /* check if HW has decoded the packet and checksum */
+ if (unlikely(!(stat_err0 & (1 << IAVF_RX_FLEX_DESC_STATUS0_L3L4P_S))))
+ return 0;
+
+ if (likely(!(stat_err0 & IAVF_RX_FLEX_ERR0_BITS))) {
+ flags |= (PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD);
+ return flags;
+ }
+
+ if (unlikely(stat_err0 & (1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_IPE_S)))
+ flags |= PKT_RX_IP_CKSUM_BAD;
+ else
+ flags |= PKT_RX_IP_CKSUM_GOOD;
+
+ if (unlikely(stat_err0 & (1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_L4E_S)))
+ flags |= PKT_RX_L4_CKSUM_BAD;
+ else
+ flags |= PKT_RX_L4_CKSUM_GOOD;
+
+ if (unlikely(stat_err0 & (1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_EIPE_S)))
+ flags |= PKT_RX_EIP_CKSUM_BAD;
+
+ return flags;
+}
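The translation above reports nothing until the L3L4P bit confirms hardware actually parsed the L3/L4 headers, then maps each error bit to a good/bad flag pair. A reduced model of that decision structure (the bit and flag values are stand-ins, not the real status bits or PKT_RX_* constants):

```c
#include <assert.h>
#include <stdint.h>

#define ST_L3L4P (1u << 3) /* HW parsed L3/L4 */
#define ST_IPE   (1u << 4) /* IP checksum error */
#define ST_L4E   (1u << 5) /* L4 checksum error */

#define F_IP_GOOD (1ull << 0)
#define F_IP_BAD  (1ull << 1)
#define F_L4_GOOD (1ull << 2)
#define F_L4_BAD  (1ull << 3)

static uint64_t err_to_flags(uint16_t st)
{
	uint64_t f = 0;

	if (!(st & ST_L3L4P)) /* HW did not decode: report nothing */
		return 0;
	f |= (st & ST_IPE) ? F_IP_BAD : F_IP_GOOD;
	f |= (st & ST_L4E) ? F_L4_BAD : F_L4_GOOD;
	return f;
}
```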
+
static inline void
iavf_rxd_to_vlan_tci(struct rte_mbuf *mb, volatile union iavf_rx_desc *rxdp)
{
@@ -740,6 +968,21 @@ iavf_flex_rxd_to_vlan_tci(struct rte_mbuf *mb,
} else {
mb->vlan_tci = 0;
}
+
+#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
+ if (rte_le_to_cpu_16(rxdp->wb.status_error1) &
+ (1 << IAVF_RX_FLEX_DESC_STATUS1_L2TAG2P_S)) {
+ mb->ol_flags |= PKT_RX_QINQ_STRIPPED | PKT_RX_QINQ |
+ PKT_RX_VLAN_STRIPPED | PKT_RX_VLAN;
+ mb->vlan_tci_outer = mb->vlan_tci;
+ mb->vlan_tci = rte_le_to_cpu_16(rxdp->wb.l2tag2_2nd);
+ PMD_RX_LOG(DEBUG, "Descriptor l2tag2_1: %u, l2tag2_2: %u",
+ rte_le_to_cpu_16(rxdp->wb.l2tag2_1st),
+ rte_le_to_cpu_16(rxdp->wb.l2tag2_2nd));
+ } else {
+ mb->vlan_tci_outer = 0;
+ }
+#endif
}
/* Translate the rx descriptor status and error fields to pkt flags */
@@ -804,30 +1047,6 @@ iavf_rxd_build_fdir(volatile union iavf_rx_desc *rxdp, struct rte_mbuf *mb)
return flags;
}
-
-/* Translate the rx flex descriptor status to pkt flags */
-static inline void
-iavf_rxd_to_pkt_fields(struct rte_mbuf *mb,
- volatile union iavf_rx_flex_desc *rxdp)
-{
- volatile struct iavf_32b_rx_flex_desc_comms_ovs *desc =
- (volatile struct iavf_32b_rx_flex_desc_comms_ovs *)rxdp;
-#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
- uint16_t stat_err;
-
- stat_err = rte_le_to_cpu_16(desc->status_error0);
- if (likely(stat_err & (1 << IAVF_RX_FLEX_DESC_STATUS0_RSS_VALID_S))) {
- mb->ol_flags |= PKT_RX_RSS_HASH;
- mb->hash.rss = rte_le_to_cpu_32(desc->rss_hash);
- }
-#endif
-
- if (desc->flow_id != 0xFFFFFFFF) {
- mb->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
- mb->hash.fdir.hi = rte_le_to_cpu_32(desc->flow_id);
- }
-}
-
#define IAVF_RX_FLEX_ERR0_BITS \
((1 << IAVF_RX_FLEX_DESC_STATUS0_HBO_S) | \
(1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_IPE_S) | \
@@ -1082,7 +1301,7 @@ iavf_recv_pkts_flex_rxd(void *rx_queue,
rxm->packet_type = ptype_tbl[IAVF_RX_FLEX_DESC_PTYPE_M &
rte_le_to_cpu_16(rxd.wb.ptype_flex_flags0)];
iavf_flex_rxd_to_vlan_tci(rxm, &rxd);
- iavf_rxd_to_pkt_fields(rxm, &rxd);
+ rxq->rxd_to_pkt_fields(rxq, rxm, &rxd);
pkt_flags = iavf_flex_rxd_error_to_pkt_flags(rx_stat_err0);
rxm->ol_flags |= pkt_flags;
@@ -1223,7 +1442,7 @@ iavf_recv_scattered_pkts_flex_rxd(void *rx_queue, struct rte_mbuf **rx_pkts,
first_seg->packet_type = ptype_tbl[IAVF_RX_FLEX_DESC_PTYPE_M &
rte_le_to_cpu_16(rxd.wb.ptype_flex_flags0)];
iavf_flex_rxd_to_vlan_tci(first_seg, &rxd);
- iavf_rxd_to_pkt_fields(first_seg, &rxd);
+ rxq->rxd_to_pkt_fields(rxq, first_seg, &rxd);
pkt_flags = iavf_flex_rxd_error_to_pkt_flags(rx_stat_err0);
first_seg->ol_flags |= pkt_flags;
@@ -1460,7 +1679,7 @@ iavf_rx_scan_hw_ring_flex_rxd(struct iavf_rx_queue *rxq)
mb->packet_type = ptype_tbl[IAVF_RX_FLEX_DESC_PTYPE_M &
rte_le_to_cpu_16(rxdp[j].wb.ptype_flex_flags0)];
iavf_flex_rxd_to_vlan_tci(mb, &rxdp[j]);
- iavf_rxd_to_pkt_fields(mb, &rxdp[j]);
+ rxq->rxd_to_pkt_fields(rxq, mb, &rxdp[j]);
stat_err0 = rte_le_to_cpu_16(rxdp[j].wb.status_error0);
pkt_flags = iavf_flex_rxd_error_to_pkt_flags(stat_err0);
@@ -1652,7 +1871,7 @@ rx_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
if (rxq->rx_nb_avail)
return iavf_rx_fill_from_stage(rxq, rx_pkts, nb_pkts);
- if (rxq->rxdid == IAVF_RXDID_COMMS_OVS_1)
+ if (rxq->rxdid >= IAVF_RXDID_FLEX_NIC && rxq->rxdid <= IAVF_RXDID_LAST)
nb_rx = (uint16_t)iavf_rx_scan_hw_ring_flex_rxd(rxq);
else
nb_rx = (uint16_t)iavf_rx_scan_hw_ring(rxq);
@@ -2100,6 +2319,7 @@ iavf_set_rx_function(struct rte_eth_dev *dev)
struct iavf_adapter *adapter =
IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+
#ifdef RTE_ARCH_X86
struct iavf_rx_queue *rxq;
int i;
diff --git a/drivers/net/iavf/iavf_rxtx.h b/drivers/net/iavf/iavf_rxtx.h
index 59625a979..577df3c1b 100644
--- a/drivers/net/iavf/iavf_rxtx.h
+++ b/drivers/net/iavf/iavf_rxtx.h
@@ -57,110 +57,6 @@
#define IAVF_TX_OFFLOAD_NOTSUP_MASK \
(PKT_TX_OFFLOAD_MASK ^ IAVF_TX_OFFLOAD_MASK)
-/* HW desc structure, both 16-byte and 32-byte types are supported */
-#ifdef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
-#define iavf_rx_desc iavf_16byte_rx_desc
-#define iavf_rx_flex_desc iavf_16b_rx_flex_desc
-#else
-#define iavf_rx_desc iavf_32byte_rx_desc
-#define iavf_rx_flex_desc iavf_32b_rx_flex_desc
-#endif
-
-struct iavf_rxq_ops {
- void (*release_mbufs)(struct iavf_rx_queue *rxq);
-};
-
-struct iavf_txq_ops {
- void (*release_mbufs)(struct iavf_tx_queue *txq);
-};
-
-/* Structure associated with each Rx queue. */
-struct iavf_rx_queue {
- struct rte_mempool *mp; /* mbuf pool to populate Rx ring */
- const struct rte_memzone *mz; /* memzone for Rx ring */
- volatile union iavf_rx_desc *rx_ring; /* Rx ring virtual address */
- uint64_t rx_ring_phys_addr; /* Rx ring DMA address */
- struct rte_mbuf **sw_ring; /* address of SW ring */
- uint16_t nb_rx_desc; /* ring length */
- uint16_t rx_tail; /* current value of tail */
- volatile uint8_t *qrx_tail; /* register address of tail */
- uint16_t rx_free_thresh; /* max free RX desc to hold */
- uint16_t nb_rx_hold; /* number of held free RX desc */
- struct rte_mbuf *pkt_first_seg; /* first segment of current packet */
- struct rte_mbuf *pkt_last_seg; /* last segment of current packet */
- struct rte_mbuf fake_mbuf; /* dummy mbuf */
- uint8_t rxdid;
-
- /* used for VPMD */
- uint16_t rxrearm_nb; /* number of remaining to be re-armed */
- uint16_t rxrearm_start; /* the idx we start the re-arming from */
- uint64_t mbuf_initializer; /* value to init mbufs */
-
- /* for rx bulk */
- uint16_t rx_nb_avail; /* number of staged packets ready */
- uint16_t rx_next_avail; /* index of next staged packets */
- uint16_t rx_free_trigger; /* triggers rx buffer allocation */
- struct rte_mbuf *rx_stage[IAVF_RX_MAX_BURST * 2]; /* store mbuf */
-
- uint16_t port_id; /* device port ID */
- uint8_t crc_len; /* 0 if CRC stripped, 4 otherwise */
- uint8_t fdir_enabled; /* 0 if FDIR disabled, 1 when enabled */
- uint16_t queue_id; /* Rx queue index */
- uint16_t rx_buf_len; /* The packet buffer size */
- uint16_t rx_hdr_len; /* The header buffer size */
- uint16_t max_pkt_len; /* Maximum packet length */
- struct iavf_vsi *vsi; /**< the VSI this queue belongs to */
-
- bool q_set; /* if rx queue has been configured */
- bool rx_deferred_start; /* don't start this queue in dev start */
- const struct iavf_rxq_ops *ops;
-};
-
-struct iavf_tx_entry {
- struct rte_mbuf *mbuf;
- uint16_t next_id;
- uint16_t last_id;
-};
-
-/* Structure associated with each TX queue. */
-struct iavf_tx_queue {
- const struct rte_memzone *mz; /* memzone for Tx ring */
- volatile struct iavf_tx_desc *tx_ring; /* Tx ring virtual address */
- uint64_t tx_ring_phys_addr; /* Tx ring DMA address */
- struct iavf_tx_entry *sw_ring; /* address array of SW ring */
- uint16_t nb_tx_desc; /* ring length */
- uint16_t tx_tail; /* current value of tail */
- volatile uint8_t *qtx_tail; /* register address of tail */
- /* number of used desc since RS bit set */
- uint16_t nb_used;
- uint16_t nb_free;
- uint16_t last_desc_cleaned; /* last desc have been cleaned*/
- uint16_t free_thresh;
- uint16_t rs_thresh;
-
- uint16_t port_id;
- uint16_t queue_id;
- uint64_t offloads;
- uint16_t next_dd; /* next to set RS, for VPMD */
- uint16_t next_rs; /* next to check DD, for VPMD */
-
- bool q_set; /* if rx queue has been configured */
- bool tx_deferred_start; /* don't start this queue in dev start */
- const struct iavf_txq_ops *ops;
-};
-
-/* Offload features */
-union iavf_tx_offload {
- uint64_t data;
- struct {
- uint64_t l2_len:7; /* L2 (MAC) Header Length. */
- uint64_t l3_len:9; /* L3 (IP) Header Length. */
- uint64_t l4_len:8; /* L4 Header Length. */
- uint64_t tso_segsz:16; /* TCP TSO segment size */
- /* uint64_t unused : 24; */
- };
-};
-
/* Rx Flex Descriptors
* These descriptors are used instead of the legacy version descriptors
*/
@@ -331,6 +227,7 @@ enum iavf_rxdid {
IAVF_RXDID_COMMS_AUX_TCP = 21,
IAVF_RXDID_COMMS_OVS_1 = 22,
IAVF_RXDID_COMMS_OVS_2 = 23,
+ IAVF_RXDID_COMMS_AUX_IP_OFFSET = 25,
IAVF_RXDID_LAST = 63,
};
@@ -355,12 +252,139 @@ enum iavf_rx_flex_desc_status_error_0_bits {
IAVF_RX_FLEX_DESC_STATUS0_LAST /* this entry must be last!!! */
};
+enum iavf_rx_flex_desc_status_error_1_bits {
+ /* Note: These are predefined bit offsets */
+ IAVF_RX_FLEX_DESC_STATUS1_CPM_S = 0, /* 4 bits */
+ IAVF_RX_FLEX_DESC_STATUS1_NAT_S = 4,
+ IAVF_RX_FLEX_DESC_STATUS1_CRYPTO_S = 5,
+ /* [10:6] reserved */
+ IAVF_RX_FLEX_DESC_STATUS1_L2TAG2P_S = 11,
+ IAVF_RX_FLEX_DESC_STATUS1_XTRMD2_VALID_S = 12,
+ IAVF_RX_FLEX_DESC_STATUS1_XTRMD3_VALID_S = 13,
+ IAVF_RX_FLEX_DESC_STATUS1_XTRMD4_VALID_S = 14,
+ IAVF_RX_FLEX_DESC_STATUS1_XTRMD5_VALID_S = 15,
+ IAVF_RX_FLEX_DESC_STATUS1_LAST /* this entry must be last!!! */
+};
+
/* for iavf_32b_rx_flex_desc.ptype_flex_flags0 member */
#define IAVF_RX_FLEX_DESC_PTYPE_M (0x3FF) /* 10-bits */
/* for iavf_32b_rx_flex_desc.pkt_len member */
#define IAVF_RX_FLX_DESC_PKT_LEN_M (0x3FFF) /* 14-bits */
+/* HW desc structure, both 16-byte and 32-byte types are supported */
+#ifdef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
+#define iavf_rx_desc iavf_16byte_rx_desc
+#define iavf_rx_flex_desc iavf_16b_rx_flex_desc
+#else
+#define iavf_rx_desc iavf_32byte_rx_desc
+#define iavf_rx_flex_desc iavf_32b_rx_flex_desc
+#endif
+
+typedef void (*iavf_rxd_to_pkt_fields_t)(struct iavf_rx_queue *rxq,
+ struct rte_mbuf *mb,
+ volatile union iavf_rx_flex_desc *rxdp);
+
+struct iavf_rxq_ops {
+ void (*release_mbufs)(struct iavf_rx_queue *rxq);
+};
+
+struct iavf_txq_ops {
+ void (*release_mbufs)(struct iavf_tx_queue *txq);
+};
+
+/* Structure associated with each Rx queue. */
+struct iavf_rx_queue {
+ struct rte_mempool *mp; /* mbuf pool to populate Rx ring */
+ const struct rte_memzone *mz; /* memzone for Rx ring */
+ volatile union iavf_rx_desc *rx_ring; /* Rx ring virtual address */
+ uint64_t rx_ring_phys_addr; /* Rx ring DMA address */
+ struct rte_mbuf **sw_ring; /* address of SW ring */
+ uint16_t nb_rx_desc; /* ring length */
+ uint16_t rx_tail; /* current value of tail */
+ volatile uint8_t *qrx_tail; /* register address of tail */
+ uint16_t rx_free_thresh; /* max free RX desc to hold */
+ uint16_t nb_rx_hold; /* number of held free RX desc */
+ struct rte_mbuf *pkt_first_seg; /* first segment of current packet */
+ struct rte_mbuf *pkt_last_seg; /* last segment of current packet */
+ struct rte_mbuf fake_mbuf; /* dummy mbuf */
+ uint8_t rxdid;
+
+ /* used for VPMD */
+ uint16_t rxrearm_nb; /* number of remaining to be re-armed */
+ uint16_t rxrearm_start; /* the idx we start the re-arming from */
+ uint64_t mbuf_initializer; /* value to init mbufs */
+
+ /* for rx bulk */
+ uint16_t rx_nb_avail; /* number of staged packets ready */
+ uint16_t rx_next_avail; /* index of next staged packets */
+ uint16_t rx_free_trigger; /* triggers rx buffer allocation */
+ struct rte_mbuf *rx_stage[IAVF_RX_MAX_BURST * 2]; /* store mbuf */
+
+ uint16_t port_id; /* device port ID */
+ uint8_t crc_len; /* 0 if CRC stripped, 4 otherwise */
+ uint8_t fdir_enabled; /* 0 if FDIR disabled, 1 when enabled */
+ uint16_t queue_id; /* Rx queue index */
+ uint16_t rx_buf_len; /* The packet buffer size */
+ uint16_t rx_hdr_len; /* The header buffer size */
+ uint16_t max_pkt_len; /* Maximum packet length */
+ struct iavf_vsi *vsi; /**< the VSI this queue belongs to */
+
+ bool q_set; /* if rx queue has been configured */
+ bool rx_deferred_start; /* don't start this queue in dev start */
+ const struct iavf_rxq_ops *ops;
+ uint8_t flex_desc; /* flexible descriptor type */
+ uint64_t xtr_ol_flag;
+ /* flexible descriptor metadata extraction offload flag */
+ iavf_rxd_to_pkt_fields_t rxd_to_pkt_fields;
+ /* handle flexible descriptor by RXDID */
+};
+
+struct iavf_tx_entry {
+ struct rte_mbuf *mbuf;
+ uint16_t next_id;
+ uint16_t last_id;
+};
+
+/* Structure associated with each TX queue. */
+struct iavf_tx_queue {
+ const struct rte_memzone *mz; /* memzone for Tx ring */
+ volatile struct iavf_tx_desc *tx_ring; /* Tx ring virtual address */
+ uint64_t tx_ring_phys_addr; /* Tx ring DMA address */
+ struct iavf_tx_entry *sw_ring; /* address array of SW ring */
+ uint16_t nb_tx_desc; /* ring length */
+ uint16_t tx_tail; /* current value of tail */
+ volatile uint8_t *qtx_tail; /* register address of tail */
+ /* number of used desc since RS bit set */
+ uint16_t nb_used;
+ uint16_t nb_free;
+ uint16_t last_desc_cleaned; /* last desc have been cleaned*/
+ uint16_t free_thresh;
+ uint16_t rs_thresh;
+
+ uint16_t port_id;
+ uint16_t queue_id;
+ uint64_t offloads;
+ uint16_t next_dd; /* next to check DD, for VPMD */
+ uint16_t next_rs; /* next to set RS, for VPMD */
+
+ bool q_set; /* if tx queue has been configured */
+ bool tx_deferred_start; /* don't start this queue in dev start */
+ const struct iavf_txq_ops *ops;
+};
+
+/* Offload features */
+union iavf_tx_offload {
+ uint64_t data;
+ struct {
+ uint64_t l2_len:7; /* L2 (MAC) Header Length. */
+ uint64_t l3_len:9; /* L3 (IP) Header Length. */
+ uint64_t l4_len:8; /* L4 Header Length. */
+ uint64_t tso_segsz:16; /* TCP TSO segment size */
+ /* uint64_t unused : 24; */
+ };
+};
+
int iavf_dev_rx_queue_setup(struct rte_eth_dev *dev,
uint16_t queue_idx,
uint16_t nb_desc,
@@ -438,6 +462,8 @@ int iavf_tx_vec_dev_check(struct rte_eth_dev *dev);
int iavf_rxq_vec_setup(struct iavf_rx_queue *rxq);
int iavf_txq_vec_setup(struct iavf_tx_queue *txq);
+uint8_t iavf_flex_desc_type_to_rxdid(uint8_t xtr_type);
+
const uint32_t *iavf_get_default_ptype_table(void);
static inline
diff --git a/drivers/net/iavf/iavf_rxtx_vec_common.h b/drivers/net/iavf/iavf_rxtx_vec_common.h
index 25bb502de..58cce7a9d 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_common.h
+++ b/drivers/net/iavf/iavf_rxtx_vec_common.h
@@ -224,6 +224,9 @@ iavf_rx_vec_queue_default(struct iavf_rx_queue *rxq)
if (rxq->nb_rx_desc % rxq->rx_free_thresh)
return -1;
+ if (rxq->flex_desc != IAVF_FLEX_DESC_NONE)
+ return -1;
+
return 0;
}
diff --git a/drivers/net/iavf/iavf_vchnl.c b/drivers/net/iavf/iavf_vchnl.c
index 76f8e38d1..7981dfa30 100644
--- a/drivers/net/iavf/iavf_vchnl.c
+++ b/drivers/net/iavf/iavf_vchnl.c
@@ -647,25 +647,27 @@ iavf_configure_queues(struct iavf_adapter *adapter)
#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
if (vf->vf_res->vf_cap_flags &
- VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC &&
- vf->supported_rxdid & BIT(IAVF_RXDID_COMMS_OVS_1)) {
- vc_qp->rxq.rxdid = IAVF_RXDID_COMMS_OVS_1;
- PMD_DRV_LOG(NOTICE, "request RXDID == %d in "
- "Queue[%d]", vc_qp->rxq.rxdid, i);
+ VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC &&
+ vf->supported_rxdid & BIT(rxq[i]->rxdid)) {
+ vc_qp->rxq.rxdid = rxq[i]->rxdid;
+ PMD_DRV_LOG(NOTICE, "request RXDID[%d] in Queue[%d]",
+ vc_qp->rxq.rxdid, i);
} else {
+ PMD_DRV_LOG(NOTICE, "RXDID[%d] is not supported, "
+ "request default RXDID[%d] in Queue[%d]",
+ rxq[i]->rxdid, IAVF_RXDID_LEGACY_1, i);
vc_qp->rxq.rxdid = IAVF_RXDID_LEGACY_1;
- PMD_DRV_LOG(NOTICE, "request RXDID == %d in "
- "Queue[%d]", vc_qp->rxq.rxdid, i);
}
#else
if (vf->vf_res->vf_cap_flags &
VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC &&
vf->supported_rxdid & BIT(IAVF_RXDID_LEGACY_0)) {
vc_qp->rxq.rxdid = IAVF_RXDID_LEGACY_0;
- PMD_DRV_LOG(NOTICE, "request RXDID == %d in "
- "Queue[%d]", vc_qp->rxq.rxdid, i);
+ PMD_DRV_LOG(NOTICE, "request RXDID[%d] in Queue[%d]",
+ vc_qp->rxq.rxdid, i);
} else {
- PMD_DRV_LOG(ERR, "RXDID == 0 is not supported");
+ PMD_DRV_LOG(ERR, "RXDID[%d] is not supported",
+ IAVF_RXDID_LEGACY_0);
return -1;
}
#endif
diff --git a/drivers/net/iavf/meson.build b/drivers/net/iavf/meson.build
index a3fad363d..cd5159332 100644
--- a/drivers/net/iavf/meson.build
+++ b/drivers/net/iavf/meson.build
@@ -35,3 +35,5 @@ if arch_subdir == 'x86'
objs += iavf_avx2_lib.extract_objects('iavf_rxtx_vec_avx2.c')
endif
endif
+
+install_headers('rte_pmd_iavf.h')
diff --git a/drivers/net/iavf/rte_pmd_iavf.h b/drivers/net/iavf/rte_pmd_iavf.h
new file mode 100644
index 000000000..dddb4340a
--- /dev/null
+++ b/drivers/net/iavf/rte_pmd_iavf.h
@@ -0,0 +1,250 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#ifndef _RTE_PMD_IAVF_H_
+#define _RTE_PMD_IAVF_H_
+
+/**
+ * @file rte_pmd_iavf.h
+ *
+ * iavf PMD specific functions.
+ *
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
+ *
+ */
+
+#include <stdio.h>
+#include <rte_mbuf.h>
+#include <rte_mbuf_dyn.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * The metadata formats supported by flexible descriptor extraction.
+ */
+union rte_net_iavf_flex_desc_metadata {
+ uint32_t metadata;
+
+ struct {
+ uint16_t data0;
+ uint16_t data1;
+ } raw;
+
+ struct {
+ uint16_t stag_vid:12,
+ stag_dei:1,
+ stag_pcp:3;
+ uint16_t ctag_vid:12,
+ ctag_dei:1,
+ ctag_pcp:3;
+ } vlan;
+
+ struct {
+ uint16_t protocol:8,
+ ttl:8;
+ uint16_t tos:8,
+ ihl:4,
+ version:4;
+ } ipv4;
+
+ struct {
+ uint16_t hoplimit:8,
+ nexthdr:8;
+ uint16_t flowhi4:4,
+ tc:8,
+ version:4;
+ } ipv6;
+
+ struct {
+ uint16_t flowlo16;
+ uint16_t flowhi4:4,
+ tc:8,
+ version:4;
+ } ipv6_flow;
+
+ struct {
+ uint16_t fin:1,
+ syn:1,
+ rst:1,
+ psh:1,
+ ack:1,
+ urg:1,
+ ece:1,
+ cwr:1,
+ res1:4,
+ doff:4;
+ uint16_t rsvd;
+ } tcp;
+
+ uint32_t ip_ofs;
+};
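Since C bitfield layout is implementation-defined, an application wanting a portable decode of a member like `ipv4` above can use shifts over the raw words instead. A sketch under the little-endian allocation the union assumes (protocol and ttl in data0, tos/ihl/version in data1):

```c
#include <assert.h>
#include <stdint.h>

struct ipv4_md {
	uint8_t protocol, ttl, tos, ihl, version;
};

/* Decode the 32-bit dynfield value with explicit shifts, mirroring the
 * ipv4 bitfield layout of the union above. */
static struct ipv4_md decode_ipv4_md(uint32_t metadata)
{
	uint16_t data0 = metadata & 0xFFFF;
	uint16_t data1 = metadata >> 16;
	struct ipv4_md md = {
		.protocol = data0 & 0xFF,
		.ttl      = data0 >> 8,
		.tos      = data1 & 0xFF,
		.ihl      = (data1 >> 8) & 0xF,
		.version  = data1 >> 12,
	};

	return md;
}
```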
+
+/* Offset of mbuf dynamic field for flexible descriptor's extraction data */
+extern int rte_net_iavf_dynfield_flex_desc_metadata_offs;
+
+/* Mask of mbuf dynamic flags for flexible descriptor's extraction type */
+extern uint64_t rte_net_iavf_dynflag_flex_desc_vlan_mask;
+extern uint64_t rte_net_iavf_dynflag_flex_desc_ipv4_mask;
+extern uint64_t rte_net_iavf_dynflag_flex_desc_ipv6_mask;
+extern uint64_t rte_net_iavf_dynflag_flex_desc_ipv6_flow_mask;
+extern uint64_t rte_net_iavf_dynflag_flex_desc_tcp_mask;
+extern uint64_t rte_net_iavf_dynflag_flex_desc_ip_offset_mask;
+
+/**
+ * The mbuf dynamic field pointer for flexible descriptor's extraction metadata.
+ */
+#define RTE_NET_IAVF_DYNF_FLEX_DESC_METADATA(m) \
+ RTE_MBUF_DYNFIELD((m), \
+ rte_net_iavf_dynfield_flex_desc_metadata_offs, \
+ uint32_t *)
+
+/**
+ * The mbuf dynamic flag for VLAN protocol extraction metadata. It is valid
+ * when dev_args 'flex_desc' has 'vlan' specified.
+ */
+#define RTE_IAVF_PKT_RX_DYNF_FLEX_DESC_VLAN \
+ (rte_net_iavf_dynflag_flex_desc_vlan_mask)
+
+/**
+ * The mbuf dynamic flag for IPv4 protocol extraction metadata. It is valid
+ * when dev_args 'flex_desc' has 'ipv4' specified.
+ */
+#define RTE_IAVF_PKT_RX_DYNF_FLEX_DESC_IPV4 \
+ (rte_net_iavf_dynflag_flex_desc_ipv4_mask)
+
+/**
+ * The mbuf dynamic flag for IPv6 protocol extraction metadata. It is valid
+ * when dev_args 'flex_desc' has 'ipv6' specified.
+ */
+#define RTE_IAVF_PKT_RX_DYNF_FLEX_DESC_IPV6 \
+ (rte_net_iavf_dynflag_flex_desc_ipv6_mask)
+
+/**
+ * The mbuf dynamic flag for IPv6 with flow protocol extraction metadata.
+ * It is valid when dev_args 'flex_desc' has 'ipv6_flow' specified.
+ */
+#define RTE_IAVF_PKT_RX_DYNF_FLEX_DESC_IPV6_FLOW \
+ (rte_net_iavf_dynflag_flex_desc_ipv6_flow_mask)
+
+/**
+ * The mbuf dynamic flag for TCP protocol extraction metadata. It is valid
+ * when dev_args 'flex_desc' has 'tcp' specified.
+ */
+#define RTE_IAVF_PKT_RX_DYNF_FLEX_DESC_TCP \
+ (rte_net_iavf_dynflag_flex_desc_tcp_mask)
+
+/**
+ * The mbuf dynamic flag for IP_OFFSET extraction metadata. It is valid
+ * when dev_args 'flex_desc' has 'ip_offset' specified.
+ */
+#define RTE_IAVF_PKT_RX_DYNF_FLEX_DESC_IP_OFFSET \
+ (rte_net_iavf_dynflag_flex_desc_ip_offset_mask)
+
+/**
+ * Check if mbuf dynamic field for flexible descriptor's extraction metadata
+ * is registered.
+ *
+ * @return
+ * True if registered, false otherwise.
+ */
+__rte_experimental
+static __rte_always_inline int
+rte_net_iavf_dynf_flex_desc_metadata_avail(void)
+{
+ return rte_net_iavf_dynfield_flex_desc_metadata_offs != -1;
+}
+
+/**
+ * Get the mbuf dynamic field for flexible descriptor's extraction metadata.
+ *
+ * @param m
+ * The pointer to the mbuf.
+ * @return
+ * The saved protocol extraction metadata.
+ */
+__rte_experimental
+static __rte_always_inline uint32_t
+rte_net_iavf_dynf_flex_desc_metadata_get(struct rte_mbuf *m)
+{
+ return *RTE_NET_IAVF_DYNF_FLEX_DESC_METADATA(m);
+}
+
+/**
+ * Dump the mbuf dynamic field for flexible descriptor's extraction metadata.
+ *
+ * @param m
+ * The pointer to the mbuf.
+ */
+__rte_experimental
+static inline void
+rte_net_iavf_dump_flex_desc_metadata(struct rte_mbuf *m)
+{
+ union rte_net_iavf_flex_desc_metadata data;
+
+ if (!rte_net_iavf_dynf_flex_desc_metadata_avail())
+ return;
+
+ data.metadata = rte_net_iavf_dynf_flex_desc_metadata_get(m);
+
+ if (m->ol_flags & RTE_IAVF_PKT_RX_DYNF_FLEX_DESC_VLAN)
+ printf(" - Flexible descriptor's Metadata: [0x%04x:0x%04x],"
+ "vlan,stag=%u:%u:%u,ctag=%u:%u:%u",
+ data.raw.data0, data.raw.data1,
+ data.vlan.stag_pcp,
+ data.vlan.stag_dei,
+ data.vlan.stag_vid,
+ data.vlan.ctag_pcp,
+ data.vlan.ctag_dei,
+ data.vlan.ctag_vid);
+ else if (m->ol_flags & RTE_IAVF_PKT_RX_DYNF_FLEX_DESC_IPV4)
+ printf(" - Flexible descriptor's Metadata: [0x%04x:0x%04x],"
+ "ipv4,ver=%u,hdrlen=%u,tos=%u,ttl=%u,proto=%u",
+ data.raw.data0, data.raw.data1,
+ data.ipv4.version,
+ data.ipv4.ihl,
+ data.ipv4.tos,
+ data.ipv4.ttl,
+ data.ipv4.protocol);
+ else if (m->ol_flags & RTE_IAVF_PKT_RX_DYNF_FLEX_DESC_IPV6)
+ printf(" - Flexible descriptor's Metadata: [0x%04x:0x%04x],"
+ "ipv6,ver=%u,tc=%u,flow_hi4=0x%x,nexthdr=%u,hoplimit=%u",
+ data.raw.data0, data.raw.data1,
+ data.ipv6.version,
+ data.ipv6.tc,
+ data.ipv6.flowhi4,
+ data.ipv6.nexthdr,
+ data.ipv6.hoplimit);
+ else if (m->ol_flags & RTE_IAVF_PKT_RX_DYNF_FLEX_DESC_IPV6_FLOW)
+ printf(" - Flexible descriptor's Metadata: [0x%04x:0x%04x],"
+ "ipv6_flow,ver=%u,tc=%u,flow=0x%x%04x",
+ data.raw.data0, data.raw.data1,
+ data.ipv6_flow.version,
+ data.ipv6_flow.tc,
+ data.ipv6_flow.flowhi4,
+ data.ipv6_flow.flowlo16);
+ else if (m->ol_flags & RTE_IAVF_PKT_RX_DYNF_FLEX_DESC_TCP)
+ printf(" - Flexible descriptor's Metadata: [0x%04x:0x%04x],"
+ "tcp,doff=%u,flags=%s%s%s%s%s%s%s%s",
+ data.raw.data0, data.raw.data1,
+ data.tcp.doff,
+ data.tcp.cwr ? "C" : "",
+ data.tcp.ece ? "E" : "",
+ data.tcp.urg ? "U" : "",
+ data.tcp.ack ? "A" : "",
+ data.tcp.psh ? "P" : "",
+ data.tcp.rst ? "R" : "",
+ data.tcp.syn ? "S" : "",
+ data.tcp.fin ? "F" : "");
+ else if (m->ol_flags & RTE_IAVF_PKT_RX_DYNF_FLEX_DESC_IP_OFFSET)
+ printf(" - Flexible descriptor's Extraction: ip_offset=%u",
+ data.ip_ofs);
+}
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_PMD_IAVF_H_ */
diff --git a/drivers/net/iavf/rte_pmd_iavf_version.map b/drivers/net/iavf/rte_pmd_iavf_version.map
index 4a76d1d52..6c821c88d 100644
--- a/drivers/net/iavf/rte_pmd_iavf_version.map
+++ b/drivers/net/iavf/rte_pmd_iavf_version.map
@@ -1,3 +1,16 @@
DPDK_21 {
local: *;
};
+
+EXPERIMENTAL {
+ global:
+
+ # added in 20.11
+ rte_net_iavf_dynfield_flex_desc_metadata_offs;
+ rte_net_iavf_dynflag_flex_desc_vlan_mask;
+ rte_net_iavf_dynflag_flex_desc_ipv4_mask;
+ rte_net_iavf_dynflag_flex_desc_ipv6_mask;
+ rte_net_iavf_dynflag_flex_desc_ipv6_flow_mask;
+ rte_net_iavf_dynflag_flex_desc_tcp_mask;
+ rte_net_iavf_dynflag_flex_desc_ip_offset_mask;
+};
--
2.20.1
* [dpdk-dev] [PATCH v5] net/iavf: support flex desc metadata extraction
2020-09-09 2:54 [dpdk-dev] [PATCH v1] net/iavf: support flex desc metadata extraction Jeff Guo
` (3 preceding siblings ...)
2020-09-23 15:36 ` [dpdk-dev] [PATCH v4] " Jeff Guo
@ 2020-09-25 6:23 ` Jeff Guo
2020-09-25 6:33 ` Wang, Haiyue
2020-09-27 2:08 ` [dpdk-dev] [PATCH v6] " Jeff Guo
` (8 subsequent siblings)
13 siblings, 1 reply; 40+ messages in thread
From: Jeff Guo @ 2020-09-25 6:23 UTC (permalink / raw)
To: jingjing.wu, qi.z.zhang, beilei.xing; +Cc: dev, haiyue.wang, jia.guo
Enable metadata extraction for flexible descriptors in AVF, so that a
network function can get metadata directly without additional parsing,
which reduces the CPU cost for VFs. The metadata extraction covers the
VLAN/IPv4/IPv6/IPv6-FLOW/TCP/MPLS flexible descriptors, and the VF can
negotiate the flexible descriptor capability with the PF and configure
the specific offload on its receive queues accordingly.
Signed-off-by: Jeff Guo <jia.guo@intel.com>
---
v5:
remove ovs configure since ovs is not protocol extraction
v4:
add flex desc type in rx queue for handling vector path
handle ovs flex type
v3:
export these global symbols into .map
v2:
remove makefile change and modify the rxdid handling
---
doc/guides/rel_notes/release_20_11.rst | 6 +
drivers/net/iavf/iavf.h | 24 +-
drivers/net/iavf/iavf_ethdev.c | 394 ++++++++++++++++++++++
drivers/net/iavf/iavf_rxtx.c | 283 ++++++++++++++--
drivers/net/iavf/iavf_rxtx.h | 168 +++++----
drivers/net/iavf/iavf_rxtx_vec_common.h | 3 +
drivers/net/iavf/iavf_vchnl.c | 22 +-
drivers/net/iavf/meson.build | 2 +
drivers/net/iavf/rte_pmd_iavf.h | 250 ++++++++++++++
drivers/net/iavf/rte_pmd_iavf_version.map | 13 +
10 files changed, 1051 insertions(+), 114 deletions(-)
create mode 100644 drivers/net/iavf/rte_pmd_iavf.h
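The `proto_xtr` devargs parsed by this patch selects an extraction type either globally or per queue set, using the number/range/group grammar described in the parser's comment. An illustrative invocation is shown below; the PCI address and the testpmd command line are placeholders, not taken from the patch:

```shell
# Per-queue selection: VLAN metadata on queues 0 and 2-3, TCP metadata
# on queue 5; queues not listed keep the default (no extraction).
testpmd -w 18:01.0,proto_xtr='[(0,2-3):vlan,5:tcp]' -- -i

# A single type without brackets applies to every queue of the VF:
testpmd -w 18:01.0,proto_xtr=ip_offset -- -i
```

This is a configuration fragment only; the accepted type names are vlan, ipv4, ipv6, ipv6_flow, tcp and ip_offset, per iavf_lookup_proto_xtr_type() in this patch.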
diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index c6642f5f9..c4867b44d 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -78,6 +78,12 @@ New Features
``--portmask=N``
where N represents the hexadecimal bitmask of ports used.
+* **Updated Intel iavf driver.**
+
+ Updated iavf PMD with new features and improvements, including:
+
+ * Added support for flexible descriptor metadata extraction.
+
Removed Items
-------------
diff --git a/drivers/net/iavf/iavf.h b/drivers/net/iavf/iavf.h
index 3198d85b3..d56611608 100644
--- a/drivers/net/iavf/iavf.h
+++ b/drivers/net/iavf/iavf.h
@@ -119,7 +119,7 @@ struct iavf_info {
struct virtchnl_vf_resource *vf_res; /* VF resource */
struct virtchnl_vsi_resource *vsi_res; /* LAN VSI */
uint64_t supported_rxdid;
-
+ uint8_t *proto_xtr; /* proto xtr type for all queues */
volatile enum virtchnl_ops pend_cmd; /* pending command not finished */
uint32_t cmd_retval; /* return value of the cmd response from PF */
uint8_t *aq_resp; /* buffer to store the adminq response from PF */
@@ -153,6 +153,27 @@ struct iavf_info {
#define IAVF_MAX_PKT_TYPE 1024
+#define IAVF_MAX_QUEUE_NUM 2048
+
+enum iavf_proto_xtr_type {
+ IAVF_PROTO_XTR_NONE,
+ IAVF_PROTO_XTR_VLAN,
+ IAVF_PROTO_XTR_IPV4,
+ IAVF_PROTO_XTR_IPV6,
+ IAVF_PROTO_XTR_IPV6_FLOW,
+ IAVF_PROTO_XTR_TCP,
+ IAVF_PROTO_XTR_IP_OFFSET,
+ IAVF_PROTO_XTR_MAX,
+};
+
+/**
+ * Cache devargs parse result.
+ */
+struct iavf_devargs {
+ uint8_t proto_xtr_dflt;
+ uint8_t proto_xtr[IAVF_MAX_QUEUE_NUM];
+};
+
/* Structure to store private data for each VF instance. */
struct iavf_adapter {
struct iavf_hw hw;
@@ -166,6 +187,7 @@ struct iavf_adapter {
const uint32_t *ptype_tbl;
bool stopped;
uint16_t fdir_ref_cnt;
+ struct iavf_devargs devargs;
};
/* IAVF_DEV_PRIVATE_TO */
diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
index 440da7d76..a88d53ab0 100644
--- a/drivers/net/iavf/iavf_ethdev.c
+++ b/drivers/net/iavf/iavf_ethdev.c
@@ -28,6 +28,49 @@
#include "iavf.h"
#include "iavf_rxtx.h"
#include "iavf_generic_flow.h"
+#include "rte_pmd_iavf.h"
+
+/* devargs */
+#define IAVF_PROTO_XTR_ARG "proto_xtr"
+
+static const char * const iavf_valid_args[] = {
+ IAVF_PROTO_XTR_ARG,
+ NULL
+};
+
+static const struct rte_mbuf_dynfield iavf_proto_xtr_metadata_param = {
+ .name = "iavf_dynfield_proto_xtr_metadata",
+ .size = sizeof(uint32_t),
+ .align = __alignof__(uint32_t),
+ .flags = 0,
+};
+
+struct iavf_proto_xtr_ol {
+ const struct rte_mbuf_dynflag param;
+ uint64_t *ol_flag;
+ bool required;
+};
+
+static struct iavf_proto_xtr_ol iavf_proto_xtr_params[] = {
+ [IAVF_PROTO_XTR_VLAN] = {
+ .param = { .name = "iavf_dynflag_proto_xtr_vlan" },
+ .ol_flag = &rte_net_iavf_dynflag_proto_xtr_vlan_mask },
+ [IAVF_PROTO_XTR_IPV4] = {
+ .param = { .name = "iavf_dynflag_proto_xtr_ipv4" },
+ .ol_flag = &rte_net_iavf_dynflag_proto_xtr_ipv4_mask },
+ [IAVF_PROTO_XTR_IPV6] = {
+ .param = { .name = "iavf_dynflag_proto_xtr_ipv6" },
+ .ol_flag = &rte_net_iavf_dynflag_proto_xtr_ipv6_mask },
+ [IAVF_PROTO_XTR_IPV6_FLOW] = {
+ .param = { .name = "iavf_dynflag_proto_xtr_ipv6_flow" },
+ .ol_flag = &rte_net_iavf_dynflag_proto_xtr_ipv6_flow_mask },
+ [IAVF_PROTO_XTR_TCP] = {
+ .param = { .name = "iavf_dynflag_proto_xtr_tcp" },
+ .ol_flag = &rte_net_iavf_dynflag_proto_xtr_tcp_mask },
+ [IAVF_PROTO_XTR_IP_OFFSET] = {
+ .param = { .name = "ice_dynflag_proto_xtr_ip_offset" },
+ .ol_flag = &rte_net_iavf_dynflag_proto_xtr_ip_offset_mask },
+};
static int iavf_dev_configure(struct rte_eth_dev *dev);
static int iavf_dev_start(struct rte_eth_dev *dev);
@@ -1213,6 +1256,349 @@ iavf_check_vf_reset_done(struct iavf_hw *hw)
return 0;
}
+static int
+iavf_lookup_proto_xtr_type(const char *flex_name)
+{
+ static struct {
+ const char *name;
+ enum iavf_proto_xtr_type type;
+ } xtr_type_map[] = {
+ { "vlan", IAVF_PROTO_XTR_VLAN },
+ { "ipv4", IAVF_PROTO_XTR_IPV4 },
+ { "ipv6", IAVF_PROTO_XTR_IPV6 },
+ { "ipv6_flow", IAVF_PROTO_XTR_IPV6_FLOW },
+ { "tcp", IAVF_PROTO_XTR_TCP },
+ { "ip_offset", IAVF_PROTO_XTR_IP_OFFSET },
+ };
+ uint32_t i;
+
+ for (i = 0; i < RTE_DIM(xtr_type_map); i++) {
+ if (strcmp(flex_name, xtr_type_map[i].name) == 0)
+ return xtr_type_map[i].type;
+ }
+
+ PMD_DRV_LOG(ERR, "wrong proto_xtr type, "
+ "it should be: vlan|ipv4|ipv6|ipv6_flow|tcp|ip_offset");
+
+ return -1;
+}
+
+/**
+ * Parse an element; the element can be a single number/range or a '(' ')'
+ * group:
+ * 1) A single number element is just a simple digit, e.g. 9
+ * 2) A single range element is two digits with a '-' between them, e.g. 2-6
+ * 3) A group element combines multiple 1) or 2) with '( )', e.g. (0,2-4,6)
+ * Within a group element, '-' is used as a range separator and
+ * ',' separates single numbers.
+ */
+static int
+iavf_parse_queue_set(const char *input, int xtr_type,
+ struct iavf_devargs *devargs)
+{
+ const char *str = input;
+ char *end = NULL;
+ uint32_t min, max;
+ uint32_t idx;
+
+ while (isblank(*str))
+ str++;
+
+ if (!isdigit(*str) && *str != '(')
+ return -1;
+
+ /* process single number or single range of number */
+ if (*str != '(') {
+ errno = 0;
+ idx = strtoul(str, &end, 10);
+ if (errno || !end || idx >= IAVF_MAX_QUEUE_NUM)
+ return -1;
+
+ while (isblank(*end))
+ end++;
+
+ min = idx;
+ max = idx;
+
+ /* process single <number>-<number> */
+ if (*end == '-') {
+ end++;
+ while (isblank(*end))
+ end++;
+ if (!isdigit(*end))
+ return -1;
+
+ errno = 0;
+ idx = strtoul(end, &end, 10);
+ if (errno || !end || idx >= IAVF_MAX_QUEUE_NUM)
+ return -1;
+
+ max = idx;
+ while (isblank(*end))
+ end++;
+ }
+
+ if (*end != ':')
+ return -1;
+
+ for (idx = RTE_MIN(min, max);
+ idx <= RTE_MAX(min, max); idx++)
+ devargs->proto_xtr[idx] = xtr_type;
+
+ return 0;
+ }
+
+ /* process set within bracket */
+ str++;
+ while (isblank(*str))
+ str++;
+ if (*str == '\0')
+ return -1;
+
+ min = IAVF_MAX_QUEUE_NUM;
+ do {
+ /* go ahead to the first digit */
+ while (isblank(*str))
+ str++;
+ if (!isdigit(*str))
+ return -1;
+
+ /* get the digit value */
+ errno = 0;
+ idx = strtoul(str, &end, 10);
+ if (errno || !end || idx >= IAVF_MAX_QUEUE_NUM)
+ return -1;
+
+ /* go ahead to separator '-',',' and ')' */
+ while (isblank(*end))
+ end++;
+ if (*end == '-') {
+ if (min == IAVF_MAX_QUEUE_NUM)
+ min = idx;
+ else /* avoid continuous '-' */
+ return -1;
+ } else if (*end == ',' || *end == ')') {
+ max = idx;
+ if (min == IAVF_MAX_QUEUE_NUM)
+ min = idx;
+
+ for (idx = RTE_MIN(min, max);
+ idx <= RTE_MAX(min, max); idx++)
+ devargs->proto_xtr[idx] = xtr_type;
+
+ min = IAVF_MAX_QUEUE_NUM;
+ } else {
+ return -1;
+ }
+
+ str = end + 1;
+ } while (*end != ')' && *end != '\0');
+
+ return 0;
+}
+
+static int
+iavf_parse_queue_proto_xtr(const char *queues, struct iavf_devargs *devargs)
+{
+ const char *queue_start;
+ uint32_t idx;
+ int xtr_type;
+ char flex_name[32];
+
+ while (isblank(*queues))
+ queues++;
+
+ if (*queues != '[') {
+ xtr_type = iavf_lookup_proto_xtr_type(queues);
+ if (xtr_type < 0)
+ return -1;
+
+ devargs->proto_xtr_dflt = xtr_type;
+
+ return 0;
+ }
+
+ queues++;
+ do {
+ while (isblank(*queues))
+ queues++;
+ if (*queues == '\0')
+ return -1;
+
+ queue_start = queues;
+
+ /* go across a complete bracket */
+ if (*queue_start == '(') {
+ queues += strcspn(queues, ")");
+ if (*queues != ')')
+ return -1;
+ }
+
+ /* scan the separator ':' */
+ queues += strcspn(queues, ":");
+ if (*queues++ != ':')
+ return -1;
+ while (isblank(*queues))
+ queues++;
+
+ for (idx = 0; ; idx++) {
+ if (isblank(queues[idx]) ||
+ queues[idx] == ',' ||
+ queues[idx] == ']' ||
+ queues[idx] == '\0')
+ break;
+
+ if (idx > sizeof(flex_name) - 2)
+ return -1;
+
+ flex_name[idx] = queues[idx];
+ }
+ flex_name[idx] = '\0';
+ xtr_type = iavf_lookup_proto_xtr_type(flex_name);
+ if (xtr_type < 0)
+ return -1;
+
+ queues += idx;
+
+ while (isblank(*queues) || *queues == ',' || *queues == ']')
+ queues++;
+
+ if (iavf_parse_queue_set(queue_start, xtr_type, devargs) < 0)
+ return -1;
+ } while (*queues != '\0');
+
+ return 0;
+}
+
+static int
+iavf_handle_proto_xtr_arg(__rte_unused const char *key, const char *value,
+ void *extra_args)
+{
+ struct iavf_devargs *devargs = extra_args;
+
+ if (!value || !extra_args)
+ return -EINVAL;
+
+ if (iavf_parse_queue_proto_xtr(value, devargs) < 0) {
+ PMD_DRV_LOG(ERR, "the proto_xtr's parameter is wrong : '%s'",
+ value);
+ return -1;
+ }
+
+ return 0;
+}
+
+static int iavf_parse_devargs(struct rte_eth_dev *dev)
+{
+ struct iavf_adapter *ad =
+ IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+ struct rte_devargs *devargs = dev->device->devargs;
+ struct rte_kvargs *kvlist;
+ int ret;
+
+ if (!devargs)
+ return 0;
+
+ kvlist = rte_kvargs_parse(devargs->args, iavf_valid_args);
+ if (!kvlist) {
+ PMD_INIT_LOG(ERR, "invalid kvargs key\n");
+ return -EINVAL;
+ }
+
+ ad->devargs.proto_xtr_dflt = IAVF_PROTO_XTR_NONE;
+ memset(ad->devargs.proto_xtr, IAVF_PROTO_XTR_NONE,
+ sizeof(ad->devargs.proto_xtr));
+
+ ret = rte_kvargs_process(kvlist, IAVF_PROTO_XTR_ARG,
+ &iavf_handle_proto_xtr_arg, &ad->devargs);
+ if (ret)
+ goto bail;
+
+bail:
+ rte_kvargs_free(kvlist);
+ return ret;
+}
+
+static void
+iavf_init_proto_xtr(struct rte_eth_dev *dev)
+{
+ struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+ struct iavf_adapter *ad =
+ IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+ const struct iavf_proto_xtr_ol *xtr_ol;
+ bool proto_xtr_enable = false;
+ int offset;
+ uint16_t i;
+
+ vf->proto_xtr = rte_zmalloc("vf proto xtr",
+ vf->vsi_res->num_queue_pairs, 0);
+ if (unlikely(!(vf->proto_xtr))) {
+ PMD_DRV_LOG(ERR, "no memory for setting up proto_xtr's table");
+ return;
+ }
+
+ for (i = 0; i < vf->vsi_res->num_queue_pairs; i++) {
+ vf->proto_xtr[i] = ad->devargs.proto_xtr[i] !=
+ IAVF_PROTO_XTR_NONE ?
+ ad->devargs.proto_xtr[i] :
+ ad->devargs.proto_xtr_dflt;
+
+ if (vf->proto_xtr[i] != IAVF_PROTO_XTR_NONE) {
+ uint8_t type = vf->proto_xtr[i];
+
+ iavf_proto_xtr_params[type].required = true;
+ proto_xtr_enable = true;
+ }
+ }
+
+ if (likely(!proto_xtr_enable))
+ return;
+
+ offset = rte_mbuf_dynfield_register(&iavf_proto_xtr_metadata_param);
+ if (unlikely(offset == -1)) {
+ PMD_DRV_LOG(ERR,
+ "failed to extract protocol metadata, error %d",
+ -rte_errno);
+ return;
+ }
+
+ PMD_DRV_LOG(DEBUG,
+ "proto_xtr metadata offset in mbuf is : %d",
+ offset);
+ rte_net_iavf_dynfield_proto_xtr_metadata_offs = offset;
+
+ for (i = 0; i < RTE_DIM(iavf_proto_xtr_params); i++) {
+ xtr_ol = &iavf_proto_xtr_params[i];
+
+ uint8_t rxdid = iavf_proto_xtr_type_to_rxdid((uint8_t)i);
+
+ if (!xtr_ol->required)
+ continue;
+
+ if (!(vf->supported_rxdid & BIT(rxdid))) {
+ PMD_DRV_LOG(ERR,
+ "rxdid[%u] is not supported in hardware",
+ rxdid);
+ rte_net_iavf_dynfield_proto_xtr_metadata_offs = -1;
+ break;
+ }
+
+ offset = rte_mbuf_dynflag_register(&xtr_ol->param);
+ if (unlikely(offset == -1)) {
+ PMD_DRV_LOG(ERR,
+ "failed to register proto_xtr offload '%s', error %d",
+ xtr_ol->param.name, -rte_errno);
+
+ rte_net_iavf_dynfield_proto_xtr_metadata_offs = -1;
+ break;
+ }
+
+ PMD_DRV_LOG(DEBUG,
+ "proto_xtr offload '%s' offset in mbuf is : %d",
+ xtr_ol->param.name, offset);
+ *xtr_ol->ol_flag = 1ULL << offset;
+ }
+}
+
static int
iavf_init_vf(struct rte_eth_dev *dev)
{
@@ -1222,6 +1608,12 @@ iavf_init_vf(struct rte_eth_dev *dev)
struct iavf_hw *hw = IAVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+ err = iavf_parse_devargs(dev);
+ if (err) {
+ PMD_INIT_LOG(ERR, "Failed to parse devargs");
+ goto err;
+ }
+
err = iavf_set_mac_type(hw);
if (err) {
PMD_INIT_LOG(ERR, "set_mac_type failed: %d", err);
@@ -1287,6 +1679,8 @@ iavf_init_vf(struct rte_eth_dev *dev)
vf->vf_reset = false;
+ iavf_init_proto_xtr(dev);
+
return 0;
err_rss:
rte_free(vf->rss_key);
diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
index 05a7dd898..7b81bf8ad 100644
--- a/drivers/net/iavf/iavf_rxtx.c
+++ b/drivers/net/iavf/iavf_rxtx.c
@@ -26,6 +26,35 @@
#include "iavf.h"
#include "iavf_rxtx.h"
+#include "rte_pmd_iavf.h"
+
+/* Offset of mbuf dynamic field for protocol extraction's metadata */
+int rte_net_iavf_dynfield_proto_xtr_metadata_offs = -1;
+
+/* Mask of mbuf dynamic flags for protocol extraction's type */
+uint64_t rte_net_iavf_dynflag_proto_xtr_vlan_mask;
+uint64_t rte_net_iavf_dynflag_proto_xtr_ipv4_mask;
+uint64_t rte_net_iavf_dynflag_proto_xtr_ipv6_mask;
+uint64_t rte_net_iavf_dynflag_proto_xtr_ipv6_flow_mask;
+uint64_t rte_net_iavf_dynflag_proto_xtr_tcp_mask;
+uint64_t rte_net_iavf_dynflag_proto_xtr_ip_offset_mask;
+
+uint8_t
+iavf_proto_xtr_type_to_rxdid(uint8_t flex_type)
+{
+ static uint8_t rxdid_map[] = {
+ [IAVF_PROTO_XTR_NONE] = IAVF_RXDID_COMMS_OVS_1,
+ [IAVF_PROTO_XTR_VLAN] = IAVF_RXDID_COMMS_AUX_VLAN,
+ [IAVF_PROTO_XTR_IPV4] = IAVF_RXDID_COMMS_AUX_IPV4,
+ [IAVF_PROTO_XTR_IPV6] = IAVF_RXDID_COMMS_AUX_IPV6,
+ [IAVF_PROTO_XTR_IPV6_FLOW] = IAVF_RXDID_COMMS_AUX_IPV6_FLOW,
+ [IAVF_PROTO_XTR_TCP] = IAVF_RXDID_COMMS_AUX_TCP,
+ [IAVF_PROTO_XTR_IP_OFFSET] = IAVF_RXDID_COMMS_AUX_IP_OFFSET,
+ };
+
+ return flex_type < RTE_DIM(rxdid_map) ?
+ rxdid_map[flex_type] : IAVF_RXDID_COMMS_OVS_1;
+}
static inline int
check_rx_thresh(uint16_t nb_desc, uint16_t thresh)
@@ -294,6 +323,160 @@ static const struct iavf_txq_ops def_txq_ops = {
.release_mbufs = release_txq_mbufs,
};
+static inline void
+iavf_rxd_to_pkt_fields_by_comms_ovs(__rte_unused struct iavf_rx_queue *rxq,
+ struct rte_mbuf *mb,
+ volatile union iavf_rx_flex_desc *rxdp)
+{
+ volatile struct iavf_32b_rx_flex_desc_comms_ovs *desc =
+ (volatile struct iavf_32b_rx_flex_desc_comms_ovs *)rxdp;
+#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
+ uint16_t stat_err;
+#endif
+
+ if (desc->flow_id != 0xFFFFFFFF) {
+ mb->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+ mb->hash.fdir.hi = rte_le_to_cpu_32(desc->flow_id);
+ }
+
+#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
+ stat_err = rte_le_to_cpu_16(desc->status_error0);
+ if (likely(stat_err & (1 << IAVF_RX_FLEX_DESC_STATUS0_RSS_VALID_S))) {
+ mb->ol_flags |= PKT_RX_RSS_HASH;
+ mb->hash.rss = rte_le_to_cpu_32(desc->rss_hash);
+ }
+#endif
+}
+
+static inline void
+iavf_rxd_to_pkt_fields_by_comms_aux_v1(struct iavf_rx_queue *rxq,
+ struct rte_mbuf *mb,
+ volatile union iavf_rx_flex_desc *rxdp)
+{
+ volatile struct iavf_32b_rx_flex_desc_comms *desc =
+ (volatile struct iavf_32b_rx_flex_desc_comms *)rxdp;
+ uint16_t stat_err;
+
+ stat_err = rte_le_to_cpu_16(desc->status_error0);
+ if (likely(stat_err & (1 << IAVF_RX_FLEX_DESC_STATUS0_RSS_VALID_S))) {
+ mb->ol_flags |= PKT_RX_RSS_HASH;
+ mb->hash.rss = rte_le_to_cpu_32(desc->rss_hash);
+ }
+
+#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
+ if (desc->flow_id != 0xFFFFFFFF) {
+ mb->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+ mb->hash.fdir.hi = rte_le_to_cpu_32(desc->flow_id);
+ }
+
+ if (rxq->xtr_ol_flag) {
+ uint32_t metadata = 0;
+
+ stat_err = rte_le_to_cpu_16(desc->status_error1);
+
+ if (stat_err & (1 << IAVF_RX_FLEX_DESC_STATUS1_XTRMD4_VALID_S))
+ metadata = rte_le_to_cpu_16(desc->flex_ts.flex.aux0);
+
+ if (stat_err & (1 << IAVF_RX_FLEX_DESC_STATUS1_XTRMD5_VALID_S))
+ metadata |=
+ rte_le_to_cpu_16(desc->flex_ts.flex.aux1) << 16;
+
+ if (metadata) {
+ mb->ol_flags |= rxq->xtr_ol_flag;
+
+ *RTE_NET_IAVF_DYNF_PROTO_XTR_METADATA(mb) = metadata;
+ }
+ }
+#endif
+}
+
+static inline void
+iavf_rxd_to_pkt_fields_by_comms_aux_v2(struct iavf_rx_queue *rxq,
+ struct rte_mbuf *mb,
+ volatile union iavf_rx_flex_desc *rxdp)
+{
+ volatile struct iavf_32b_rx_flex_desc_comms *desc =
+ (volatile struct iavf_32b_rx_flex_desc_comms *)rxdp;
+ uint16_t stat_err;
+
+ stat_err = rte_le_to_cpu_16(desc->status_error0);
+ if (likely(stat_err & (1 << IAVF_RX_FLEX_DESC_STATUS0_RSS_VALID_S))) {
+ mb->ol_flags |= PKT_RX_RSS_HASH;
+ mb->hash.rss = rte_le_to_cpu_32(desc->rss_hash);
+ }
+
+#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
+ if (desc->flow_id != 0xFFFFFFFF) {
+ mb->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+ mb->hash.fdir.hi = rte_le_to_cpu_32(desc->flow_id);
+ }
+
+ if (rxq->xtr_ol_flag) {
+ uint32_t metadata = 0;
+
+ if (desc->flex_ts.flex.aux0 != 0xFFFF)
+ metadata = rte_le_to_cpu_16(desc->flex_ts.flex.aux0);
+ else if (desc->flex_ts.flex.aux1 != 0xFFFF)
+ metadata = rte_le_to_cpu_16(desc->flex_ts.flex.aux1);
+
+ if (metadata) {
+ mb->ol_flags |= rxq->xtr_ol_flag;
+
+ *RTE_NET_IAVF_DYNF_PROTO_XTR_METADATA(mb) = metadata;
+ }
+ }
+#endif
+}
+
+static void
+iavf_select_rxd_to_pkt_fields_handler(struct iavf_rx_queue *rxq, uint32_t rxdid)
+{
+ switch (rxdid) {
+ case IAVF_RXDID_COMMS_AUX_VLAN:
+ rxq->xtr_ol_flag = rte_net_iavf_dynflag_proto_xtr_vlan_mask;
+ rxq->rxd_to_pkt_fields =
+ iavf_rxd_to_pkt_fields_by_comms_aux_v1;
+ break;
+ case IAVF_RXDID_COMMS_AUX_IPV4:
+ rxq->xtr_ol_flag = rte_net_iavf_dynflag_proto_xtr_ipv4_mask;
+ rxq->rxd_to_pkt_fields =
+ iavf_rxd_to_pkt_fields_by_comms_aux_v1;
+ break;
+ case IAVF_RXDID_COMMS_AUX_IPV6:
+ rxq->xtr_ol_flag = rte_net_iavf_dynflag_proto_xtr_ipv6_mask;
+ rxq->rxd_to_pkt_fields =
+ iavf_rxd_to_pkt_fields_by_comms_aux_v1;
+ break;
+ case IAVF_RXDID_COMMS_AUX_IPV6_FLOW:
+ rxq->xtr_ol_flag =
+ rte_net_iavf_dynflag_proto_xtr_ipv6_flow_mask;
+ rxq->rxd_to_pkt_fields =
+ iavf_rxd_to_pkt_fields_by_comms_aux_v1;
+ break;
+ case IAVF_RXDID_COMMS_AUX_TCP:
+ rxq->xtr_ol_flag = rte_net_iavf_dynflag_proto_xtr_tcp_mask;
+ rxq->rxd_to_pkt_fields =
+ iavf_rxd_to_pkt_fields_by_comms_aux_v1;
+ break;
+ case IAVF_RXDID_COMMS_AUX_IP_OFFSET:
+ rxq->xtr_ol_flag =
+ rte_net_iavf_dynflag_proto_xtr_ip_offset_mask;
+ rxq->rxd_to_pkt_fields =
+ iavf_rxd_to_pkt_fields_by_comms_aux_v2;
+ break;
+ case IAVF_RXDID_COMMS_OVS_1:
+ rxq->rxd_to_pkt_fields = iavf_rxd_to_pkt_fields_by_comms_ovs;
+ break;
+ default:
+ /* update this according to the RXDID for FLEX_DESC_NONE */
+ rxq->rxd_to_pkt_fields = iavf_rxd_to_pkt_fields_by_comms_ovs;
+ break;
+ }
+
+ if (!rte_net_iavf_dynf_proto_xtr_metadata_avail())
+ rxq->xtr_ol_flag = 0;
+}
+
int
iavf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
uint16_t nb_desc, unsigned int socket_id,
@@ -309,6 +492,7 @@ iavf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
struct iavf_rx_queue *rxq;
const struct rte_memzone *mz;
uint32_t ring_size;
+ uint8_t proto_xtr;
uint16_t len;
uint16_t rx_free_thresh;
@@ -346,14 +530,18 @@ iavf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
return -ENOMEM;
}
- if (vf->vf_res->vf_cap_flags &
- VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC &&
- vf->supported_rxdid & BIT(IAVF_RXDID_COMMS_OVS_1)) {
- rxq->rxdid = IAVF_RXDID_COMMS_OVS_1;
+ if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC) {
+ proto_xtr = vf->proto_xtr ? vf->proto_xtr[queue_idx] :
+ IAVF_PROTO_XTR_NONE;
+ rxq->rxdid = iavf_proto_xtr_type_to_rxdid(proto_xtr);
+ rxq->proto_xtr = proto_xtr;
} else {
rxq->rxdid = IAVF_RXDID_LEGACY_1;
+ rxq->proto_xtr = IAVF_PROTO_XTR_NONE;
}
+ iavf_select_rxd_to_pkt_fields_handler(rxq, rxq->rxdid);
+
rxq->mp = mp;
rxq->nb_rx_desc = nb_desc;
rxq->rx_free_thresh = rx_free_thresh;
@@ -715,6 +903,45 @@ iavf_stop_queues(struct rte_eth_dev *dev)
}
}
+#define IAVF_RX_FLEX_ERR0_BITS \
+ ((1 << IAVF_RX_FLEX_DESC_STATUS0_HBO_S) | \
+ (1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_IPE_S) | \
+ (1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_L4E_S) | \
+ (1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_EIPE_S) | \
+ (1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_EUDPE_S) | \
+ (1 << IAVF_RX_FLEX_DESC_STATUS0_RXE_S))
+
+/* Rx L3/L4 checksum */
+static inline uint64_t
+iavf_rxd_error_to_pkt_flags(uint16_t stat_err0)
+{
+ uint64_t flags = 0;
+
+ /* check if HW has decoded the packet and checksum */
+ if (unlikely(!(stat_err0 & (1 << IAVF_RX_FLEX_DESC_STATUS0_L3L4P_S))))
+ return 0;
+
+ if (likely(!(stat_err0 & IAVF_RX_FLEX_ERR0_BITS))) {
+ flags |= (PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD);
+ return flags;
+ }
+
+ if (unlikely(stat_err0 & (1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_IPE_S)))
+ flags |= PKT_RX_IP_CKSUM_BAD;
+ else
+ flags |= PKT_RX_IP_CKSUM_GOOD;
+
+ if (unlikely(stat_err0 & (1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_L4E_S)))
+ flags |= PKT_RX_L4_CKSUM_BAD;
+ else
+ flags |= PKT_RX_L4_CKSUM_GOOD;
+
+ if (unlikely(stat_err0 & (1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_EIPE_S)))
+ flags |= PKT_RX_EIP_CKSUM_BAD;
+
+ return flags;
+}
+
static inline void
iavf_rxd_to_vlan_tci(struct rte_mbuf *mb, volatile union iavf_rx_desc *rxdp)
{
@@ -740,6 +967,21 @@ iavf_flex_rxd_to_vlan_tci(struct rte_mbuf *mb,
} else {
mb->vlan_tci = 0;
}
+
+#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
+ if (rte_le_to_cpu_16(rxdp->wb.status_error1) &
+ (1 << IAVF_RX_FLEX_DESC_STATUS1_L2TAG2P_S)) {
+ mb->ol_flags |= PKT_RX_QINQ_STRIPPED | PKT_RX_QINQ |
+ PKT_RX_VLAN_STRIPPED | PKT_RX_VLAN;
+ mb->vlan_tci_outer = mb->vlan_tci;
+ mb->vlan_tci = rte_le_to_cpu_16(rxdp->wb.l2tag2_2nd);
+ PMD_RX_LOG(DEBUG, "Descriptor l2tag2_1: %u, l2tag2_2: %u",
+ rte_le_to_cpu_16(rxdp->wb.l2tag2_1st),
+ rte_le_to_cpu_16(rxdp->wb.l2tag2_2nd));
+ } else {
+ mb->vlan_tci_outer = 0;
+ }
+#endif
}
/* Translate the rx descriptor status and error fields to pkt flags */
@@ -804,30 +1046,6 @@ iavf_rxd_build_fdir(volatile union iavf_rx_desc *rxdp, struct rte_mbuf *mb)
return flags;
}
-
-/* Translate the rx flex descriptor status to pkt flags */
-static inline void
-iavf_rxd_to_pkt_fields(struct rte_mbuf *mb,
- volatile union iavf_rx_flex_desc *rxdp)
-{
- volatile struct iavf_32b_rx_flex_desc_comms_ovs *desc =
- (volatile struct iavf_32b_rx_flex_desc_comms_ovs *)rxdp;
-#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
- uint16_t stat_err;
-
- stat_err = rte_le_to_cpu_16(desc->status_error0);
- if (likely(stat_err & (1 << IAVF_RX_FLEX_DESC_STATUS0_RSS_VALID_S))) {
- mb->ol_flags |= PKT_RX_RSS_HASH;
- mb->hash.rss = rte_le_to_cpu_32(desc->rss_hash);
- }
-#endif
-
- if (desc->flow_id != 0xFFFFFFFF) {
- mb->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
- mb->hash.fdir.hi = rte_le_to_cpu_32(desc->flow_id);
- }
-}
-
#define IAVF_RX_FLEX_ERR0_BITS \
((1 << IAVF_RX_FLEX_DESC_STATUS0_HBO_S) | \
(1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_IPE_S) | \
@@ -1082,7 +1300,7 @@ iavf_recv_pkts_flex_rxd(void *rx_queue,
rxm->packet_type = ptype_tbl[IAVF_RX_FLEX_DESC_PTYPE_M &
rte_le_to_cpu_16(rxd.wb.ptype_flex_flags0)];
iavf_flex_rxd_to_vlan_tci(rxm, &rxd);
- iavf_rxd_to_pkt_fields(rxm, &rxd);
+ rxq->rxd_to_pkt_fields(rxq, rxm, &rxd);
pkt_flags = iavf_flex_rxd_error_to_pkt_flags(rx_stat_err0);
rxm->ol_flags |= pkt_flags;
@@ -1223,7 +1441,7 @@ iavf_recv_scattered_pkts_flex_rxd(void *rx_queue, struct rte_mbuf **rx_pkts,
first_seg->packet_type = ptype_tbl[IAVF_RX_FLEX_DESC_PTYPE_M &
rte_le_to_cpu_16(rxd.wb.ptype_flex_flags0)];
iavf_flex_rxd_to_vlan_tci(first_seg, &rxd);
- iavf_rxd_to_pkt_fields(first_seg, &rxd);
+ rxq->rxd_to_pkt_fields(rxq, first_seg, &rxd);
pkt_flags = iavf_flex_rxd_error_to_pkt_flags(rx_stat_err0);
first_seg->ol_flags |= pkt_flags;
@@ -1460,7 +1678,7 @@ iavf_rx_scan_hw_ring_flex_rxd(struct iavf_rx_queue *rxq)
mb->packet_type = ptype_tbl[IAVF_RX_FLEX_DESC_PTYPE_M &
rte_le_to_cpu_16(rxdp[j].wb.ptype_flex_flags0)];
iavf_flex_rxd_to_vlan_tci(mb, &rxdp[j]);
- iavf_rxd_to_pkt_fields(mb, &rxdp[j]);
+ rxq->rxd_to_pkt_fields(rxq, mb, &rxdp[j]);
stat_err0 = rte_le_to_cpu_16(rxdp[j].wb.status_error0);
pkt_flags = iavf_flex_rxd_error_to_pkt_flags(stat_err0);
@@ -1652,7 +1870,7 @@ rx_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
if (rxq->rx_nb_avail)
return iavf_rx_fill_from_stage(rxq, rx_pkts, nb_pkts);
- if (rxq->rxdid == IAVF_RXDID_COMMS_OVS_1)
+ if (rxq->rxdid >= IAVF_RXDID_FLEX_NIC && rxq->rxdid <= IAVF_RXDID_LAST)
nb_rx = (uint16_t)iavf_rx_scan_hw_ring_flex_rxd(rxq);
else
nb_rx = (uint16_t)iavf_rx_scan_hw_ring(rxq);
@@ -2100,6 +2318,7 @@ iavf_set_rx_function(struct rte_eth_dev *dev)
struct iavf_adapter *adapter =
IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+
#ifdef RTE_ARCH_X86
struct iavf_rx_queue *rxq;
int i;
diff --git a/drivers/net/iavf/iavf_rxtx.h b/drivers/net/iavf/iavf_rxtx.h
index 59625a979..5225493bc 100644
--- a/drivers/net/iavf/iavf_rxtx.h
+++ b/drivers/net/iavf/iavf_rxtx.h
@@ -57,6 +57,77 @@
#define IAVF_TX_OFFLOAD_NOTSUP_MASK \
(PKT_TX_OFFLOAD_MASK ^ IAVF_TX_OFFLOAD_MASK)
+/* Rx Flex Descriptors
+ * These descriptors are used instead of the legacy version descriptors
+ */
+union iavf_16b_rx_flex_desc {
+ struct {
+ __le64 pkt_addr; /* Packet buffer address */
+ __le64 hdr_addr; /* Header buffer address */
+ /* bit 0 of hdr_addr is DD bit */
+ } read;
+ struct {
+ /* Qword 0 */
+ u8 rxdid; /* descriptor builder profile ID */
+ u8 mir_id_umb_cast; /* mirror=[5:0], umb=[7:6] */
+ __le16 ptype_flex_flags0; /* ptype=[9:0], ff0=[15:10] */
+ __le16 pkt_len; /* [15:14] are reserved */
+ __le16 hdr_len_sph_flex_flags1; /* header=[10:0] */
+ /* sph=[11:11] */
+ /* ff1/ext=[15:12] */
+
+ /* Qword 1 */
+ __le16 status_error0;
+ __le16 l2tag1;
+ __le16 flex_meta0;
+ __le16 flex_meta1;
+ } wb; /* writeback */
+};
+
+union iavf_32b_rx_flex_desc {
+ struct {
+ __le64 pkt_addr; /* Packet buffer address */
+ __le64 hdr_addr; /* Header buffer address */
+ /* bit 0 of hdr_addr is DD bit */
+ __le64 rsvd1;
+ __le64 rsvd2;
+ } read;
+ struct {
+ /* Qword 0 */
+ u8 rxdid; /* descriptor builder profile ID */
+ u8 mir_id_umb_cast; /* mirror=[5:0], umb=[7:6] */
+ __le16 ptype_flex_flags0; /* ptype=[9:0], ff0=[15:10] */
+ __le16 pkt_len; /* [15:14] are reserved */
+ __le16 hdr_len_sph_flex_flags1; /* header=[10:0] */
+ /* sph=[11:11] */
+ /* ff1/ext=[15:12] */
+
+ /* Qword 1 */
+ __le16 status_error0;
+ __le16 l2tag1;
+ __le16 flex_meta0;
+ __le16 flex_meta1;
+
+ /* Qword 2 */
+ __le16 status_error1;
+ u8 flex_flags2;
+ u8 time_stamp_low;
+ __le16 l2tag2_1st;
+ __le16 l2tag2_2nd;
+
+ /* Qword 3 */
+ __le16 flex_meta2;
+ __le16 flex_meta3;
+ union {
+ struct {
+ __le16 flex_meta4;
+ __le16 flex_meta5;
+ } flex;
+ __le32 ts_high;
+ } flex_ts;
+ } wb; /* writeback */
+};
+
/* HW desc structure, both 16-byte and 32-byte types are supported */
#ifdef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
#define iavf_rx_desc iavf_16byte_rx_desc
@@ -66,6 +137,10 @@
#define iavf_rx_flex_desc iavf_32b_rx_flex_desc
#endif
+typedef void (*iavf_rxd_to_pkt_fields_t)(struct iavf_rx_queue *rxq,
+ struct rte_mbuf *mb,
+ volatile union iavf_rx_flex_desc *rxdp);
+
struct iavf_rxq_ops {
void (*release_mbufs)(struct iavf_rx_queue *rxq);
};
@@ -114,6 +189,11 @@ struct iavf_rx_queue {
bool q_set; /* if rx queue has been configured */
bool rx_deferred_start; /* don't start this queue in dev start */
const struct iavf_rxq_ops *ops;
+ uint8_t proto_xtr; /* protocol extraction type */
+ uint64_t xtr_ol_flag;
+ /* flexible descriptor metadata extraction offload flag */
+ iavf_rxd_to_pkt_fields_t rxd_to_pkt_fields;
+ /* handle flexible descriptor by RXDID */
};
struct iavf_tx_entry {
@@ -161,77 +241,6 @@ union iavf_tx_offload {
};
};
-/* Rx Flex Descriptors
- * These descriptors are used instead of the legacy version descriptors
- */
-union iavf_16b_rx_flex_desc {
- struct {
- __le64 pkt_addr; /* Packet buffer address */
- __le64 hdr_addr; /* Header buffer address */
- /* bit 0 of hdr_addr is DD bit */
- } read;
- struct {
- /* Qword 0 */
- u8 rxdid; /* descriptor builder profile ID */
- u8 mir_id_umb_cast; /* mirror=[5:0], umb=[7:6] */
- __le16 ptype_flex_flags0; /* ptype=[9:0], ff0=[15:10] */
- __le16 pkt_len; /* [15:14] are reserved */
- __le16 hdr_len_sph_flex_flags1; /* header=[10:0] */
- /* sph=[11:11] */
- /* ff1/ext=[15:12] */
-
- /* Qword 1 */
- __le16 status_error0;
- __le16 l2tag1;
- __le16 flex_meta0;
- __le16 flex_meta1;
- } wb; /* writeback */
-};
-
-union iavf_32b_rx_flex_desc {
- struct {
- __le64 pkt_addr; /* Packet buffer address */
- __le64 hdr_addr; /* Header buffer address */
- /* bit 0 of hdr_addr is DD bit */
- __le64 rsvd1;
- __le64 rsvd2;
- } read;
- struct {
- /* Qword 0 */
- u8 rxdid; /* descriptor builder profile ID */
- u8 mir_id_umb_cast; /* mirror=[5:0], umb=[7:6] */
- __le16 ptype_flex_flags0; /* ptype=[9:0], ff0=[15:10] */
- __le16 pkt_len; /* [15:14] are reserved */
- __le16 hdr_len_sph_flex_flags1; /* header=[10:0] */
- /* sph=[11:11] */
- /* ff1/ext=[15:12] */
-
- /* Qword 1 */
- __le16 status_error0;
- __le16 l2tag1;
- __le16 flex_meta0;
- __le16 flex_meta1;
-
- /* Qword 2 */
- __le16 status_error1;
- u8 flex_flags2;
- u8 time_stamp_low;
- __le16 l2tag2_1st;
- __le16 l2tag2_2nd;
-
- /* Qword 3 */
- __le16 flex_meta2;
- __le16 flex_meta3;
- union {
- struct {
- __le16 flex_meta4;
- __le16 flex_meta5;
- } flex;
- __le32 ts_high;
- } flex_ts;
- } wb; /* writeback */
-};
-
/* Rx Flex Descriptor
* RxDID Profile ID 16-21
* Flex-field 0: RSS hash lower 16-bits
@@ -331,6 +340,7 @@ enum iavf_rxdid {
IAVF_RXDID_COMMS_AUX_TCP = 21,
IAVF_RXDID_COMMS_OVS_1 = 22,
IAVF_RXDID_COMMS_OVS_2 = 23,
+ IAVF_RXDID_COMMS_AUX_IP_OFFSET = 25,
IAVF_RXDID_LAST = 63,
};
@@ -355,6 +365,20 @@ enum iavf_rx_flex_desc_status_error_0_bits {
IAVF_RX_FLEX_DESC_STATUS0_LAST /* this entry must be last!!! */
};
+enum iavf_rx_flex_desc_status_error_1_bits {
+ /* Note: These are predefined bit offsets */
+ IAVF_RX_FLEX_DESC_STATUS1_CPM_S = 0, /* 4 bits */
+ IAVF_RX_FLEX_DESC_STATUS1_NAT_S = 4,
+ IAVF_RX_FLEX_DESC_STATUS1_CRYPTO_S = 5,
+ /* [10:6] reserved */
+ IAVF_RX_FLEX_DESC_STATUS1_L2TAG2P_S = 11,
+ IAVF_RX_FLEX_DESC_STATUS1_XTRMD2_VALID_S = 12,
+ IAVF_RX_FLEX_DESC_STATUS1_XTRMD3_VALID_S = 13,
+ IAVF_RX_FLEX_DESC_STATUS1_XTRMD4_VALID_S = 14,
+ IAVF_RX_FLEX_DESC_STATUS1_XTRMD5_VALID_S = 15,
+ IAVF_RX_FLEX_DESC_STATUS1_LAST /* this entry must be last!!! */
+};
+
/* for iavf_32b_rx_flex_desc.ptype_flex_flags0 member */
#define IAVF_RX_FLEX_DESC_PTYPE_M (0x3FF) /* 10-bits */
@@ -438,6 +462,8 @@ int iavf_tx_vec_dev_check(struct rte_eth_dev *dev);
int iavf_rxq_vec_setup(struct iavf_rx_queue *rxq);
int iavf_txq_vec_setup(struct iavf_tx_queue *txq);
+uint8_t iavf_proto_xtr_type_to_rxdid(uint8_t xtr_type);
+
const uint32_t *iavf_get_default_ptype_table(void);
static inline
diff --git a/drivers/net/iavf/iavf_rxtx_vec_common.h b/drivers/net/iavf/iavf_rxtx_vec_common.h
index 25bb502de..7ad1e0f68 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_common.h
+++ b/drivers/net/iavf/iavf_rxtx_vec_common.h
@@ -224,6 +224,9 @@ iavf_rx_vec_queue_default(struct iavf_rx_queue *rxq)
if (rxq->nb_rx_desc % rxq->rx_free_thresh)
return -1;
+ if (rxq->proto_xtr != IAVF_PROTO_XTR_NONE)
+ return -1;
+
return 0;
}
diff --git a/drivers/net/iavf/iavf_vchnl.c b/drivers/net/iavf/iavf_vchnl.c
index 76f8e38d1..7981dfa30 100644
--- a/drivers/net/iavf/iavf_vchnl.c
+++ b/drivers/net/iavf/iavf_vchnl.c
@@ -647,25 +647,27 @@ iavf_configure_queues(struct iavf_adapter *adapter)
#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
if (vf->vf_res->vf_cap_flags &
- VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC &&
- vf->supported_rxdid & BIT(IAVF_RXDID_COMMS_OVS_1)) {
- vc_qp->rxq.rxdid = IAVF_RXDID_COMMS_OVS_1;
- PMD_DRV_LOG(NOTICE, "request RXDID == %d in "
- "Queue[%d]", vc_qp->rxq.rxdid, i);
+ VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC &&
+ vf->supported_rxdid & BIT(rxq[i]->rxdid)) {
+ vc_qp->rxq.rxdid = rxq[i]->rxdid;
+ PMD_DRV_LOG(NOTICE, "request RXDID[%d] in Queue[%d]",
+ vc_qp->rxq.rxdid, i);
} else {
+ PMD_DRV_LOG(NOTICE, "RXDID[%d] is not supported, "
+ "request default RXDID[%d] in Queue[%d]",
+ rxq[i]->rxdid, IAVF_RXDID_LEGACY_1, i);
vc_qp->rxq.rxdid = IAVF_RXDID_LEGACY_1;
- PMD_DRV_LOG(NOTICE, "request RXDID == %d in "
- "Queue[%d]", vc_qp->rxq.rxdid, i);
}
#else
if (vf->vf_res->vf_cap_flags &
VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC &&
vf->supported_rxdid & BIT(IAVF_RXDID_LEGACY_0)) {
vc_qp->rxq.rxdid = IAVF_RXDID_LEGACY_0;
- PMD_DRV_LOG(NOTICE, "request RXDID == %d in "
- "Queue[%d]", vc_qp->rxq.rxdid, i);
+ PMD_DRV_LOG(NOTICE, "request RXDID[%d] in Queue[%d]",
+ vc_qp->rxq.rxdid, i);
} else {
- PMD_DRV_LOG(ERR, "RXDID == 0 is not supported");
+ PMD_DRV_LOG(ERR, "RXDID[%d] is not supported",
+ IAVF_RXDID_LEGACY_0);
return -1;
}
#endif
diff --git a/drivers/net/iavf/meson.build b/drivers/net/iavf/meson.build
index a3fad363d..cd5159332 100644
--- a/drivers/net/iavf/meson.build
+++ b/drivers/net/iavf/meson.build
@@ -35,3 +35,5 @@ if arch_subdir == 'x86'
objs += iavf_avx2_lib.extract_objects('iavf_rxtx_vec_avx2.c')
endif
endif
+
+install_headers('rte_pmd_iavf.h')
diff --git a/drivers/net/iavf/rte_pmd_iavf.h b/drivers/net/iavf/rte_pmd_iavf.h
new file mode 100644
index 000000000..5e41568c3
--- /dev/null
+++ b/drivers/net/iavf/rte_pmd_iavf.h
@@ -0,0 +1,250 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#ifndef _RTE_PMD_IAVF_H_
+#define _RTE_PMD_IAVF_H_
+
+/**
+ * @file rte_pmd_iavf.h
+ *
+ * iavf PMD specific functions.
+ *
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
+ *
+ */
+
+#include <stdio.h>
+#include <rte_mbuf.h>
+#include <rte_mbuf_dyn.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * The metadata format extracted by the supported network flexible descriptors.
+ */
+union rte_net_iavf_proto_xtr_metadata {
+ uint32_t metadata;
+
+ struct {
+ uint16_t data0;
+ uint16_t data1;
+ } raw;
+
+ struct {
+ uint16_t stag_vid:12,
+ stag_dei:1,
+ stag_pcp:3;
+ uint16_t ctag_vid:12,
+ ctag_dei:1,
+ ctag_pcp:3;
+ } vlan;
+
+ struct {
+ uint16_t protocol:8,
+ ttl:8;
+ uint16_t tos:8,
+ ihl:4,
+ version:4;
+ } ipv4;
+
+ struct {
+ uint16_t hoplimit:8,
+ nexthdr:8;
+ uint16_t flowhi4:4,
+ tc:8,
+ version:4;
+ } ipv6;
+
+ struct {
+ uint16_t flowlo16;
+ uint16_t flowhi4:4,
+ tc:8,
+ version:4;
+ } ipv6_flow;
+
+ struct {
+ uint16_t fin:1,
+ syn:1,
+ rst:1,
+ psh:1,
+ ack:1,
+ urg:1,
+ ece:1,
+ cwr:1,
+ res1:4,
+ doff:4;
+ uint16_t rsvd;
+ } tcp;
+
+ uint32_t ip_ofs;
+};
+
+/* Offset of mbuf dynamic field for flexible descriptor's extraction data */
+extern int rte_net_iavf_dynfield_proto_xtr_metadata_offs;
+
+/* Mask of mbuf dynamic flags for flexible descriptor's extraction type */
+extern uint64_t rte_net_iavf_dynflag_proto_xtr_vlan_mask;
+extern uint64_t rte_net_iavf_dynflag_proto_xtr_ipv4_mask;
+extern uint64_t rte_net_iavf_dynflag_proto_xtr_ipv6_mask;
+extern uint64_t rte_net_iavf_dynflag_proto_xtr_ipv6_flow_mask;
+extern uint64_t rte_net_iavf_dynflag_proto_xtr_tcp_mask;
+extern uint64_t rte_net_iavf_dynflag_proto_xtr_ip_offset_mask;
+
+/**
+ * The mbuf dynamic field pointer for flexible descriptor's extraction metadata.
+ */
+#define RTE_NET_IAVF_DYNF_PROTO_XTR_METADATA(m) \
+ RTE_MBUF_DYNFIELD((m), \
+ rte_net_iavf_dynfield_proto_xtr_metadata_offs, \
+ uint32_t *)
+
+/**
+ * The mbuf dynamic flag for VLAN protocol extraction metadata. It is valid
+ * when devargs 'proto_xtr' has 'vlan' specified.
+ */
+#define RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_VLAN \
+ (rte_net_iavf_dynflag_proto_xtr_vlan_mask)
+
+/**
+ * The mbuf dynamic flag for IPv4 protocol extraction metadata. It is valid
+ * when devargs 'proto_xtr' has 'ipv4' specified.
+ */
+#define RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_IPV4 \
+ (rte_net_iavf_dynflag_proto_xtr_ipv4_mask)
+
+/**
+ * The mbuf dynamic flag for IPv6 protocol extraction metadata. It is valid
+ * when devargs 'proto_xtr' has 'ipv6' specified.
+ */
+#define RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_IPV6 \
+ (rte_net_iavf_dynflag_proto_xtr_ipv6_mask)
+
+/**
+ * The mbuf dynamic flag for IPv6 with flow protocol extraction metadata. It
+ * is valid when devargs 'proto_xtr' has 'ipv6_flow' specified.
+ */
+#define RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_IPV6_FLOW \
+ (rte_net_iavf_dynflag_proto_xtr_ipv6_flow_mask)
+
+/**
+ * The mbuf dynamic flag for TCP protocol extraction metadata. It is valid
+ * when devargs 'proto_xtr' has 'tcp' specified.
+ */
+#define RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_TCP \
+ (rte_net_iavf_dynflag_proto_xtr_tcp_mask)
+
+/**
+ * The mbuf dynamic flag for IP_OFFSET extraction metadata. It is valid
+ * when devargs 'proto_xtr' has 'ip_offset' specified.
+ */
+#define RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_IP_OFFSET \
+ (rte_net_iavf_dynflag_proto_xtr_ip_offset_mask)
+
+/**
+ * Check if mbuf dynamic field for flexible descriptor's extraction metadata
+ * is registered.
+ *
+ * @return
+ * True if registered, false otherwise.
+ */
+__rte_experimental
+static __rte_always_inline int
+rte_net_iavf_dynf_proto_xtr_metadata_avail(void)
+{
+ return rte_net_iavf_dynfield_proto_xtr_metadata_offs != -1;
+}
+
+/**
+ * Get the mbuf dynamic field for flexible descriptor's extraction metadata.
+ *
+ * @param m
+ * The pointer to the mbuf.
+ * @return
+ * The saved protocol extraction metadata.
+ */
+__rte_experimental
+static __rte_always_inline uint32_t
+rte_net_iavf_dynf_proto_xtr_metadata_get(struct rte_mbuf *m)
+{
+ return *RTE_NET_IAVF_DYNF_PROTO_XTR_METADATA(m);
+}
+
+/**
+ * Dump the mbuf dynamic field for flexible descriptor's extraction metadata.
+ *
+ * @param m
+ * The pointer to the mbuf.
+ */
+__rte_experimental
+static inline void
+rte_net_iavf_dump_proto_xtr_metadata(struct rte_mbuf *m)
+{
+ union rte_net_iavf_proto_xtr_metadata data;
+
+ if (!rte_net_iavf_dynf_proto_xtr_metadata_avail())
+ return;
+
+ data.metadata = rte_net_iavf_dynf_proto_xtr_metadata_get(m);
+
+ if (m->ol_flags & RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_VLAN)
+ printf(" - Flexible descriptor's Metadata: [0x%04x:0x%04x],"
+ "vlan,stag=%u:%u:%u,ctag=%u:%u:%u",
+ data.raw.data0, data.raw.data1,
+ data.vlan.stag_pcp,
+ data.vlan.stag_dei,
+ data.vlan.stag_vid,
+ data.vlan.ctag_pcp,
+ data.vlan.ctag_dei,
+ data.vlan.ctag_vid);
+ else if (m->ol_flags & RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_IPV4)
+ printf(" - Flexible descriptor's Metadata: [0x%04x:0x%04x],"
+ "ipv4,ver=%u,hdrlen=%u,tos=%u,ttl=%u,proto=%u",
+ data.raw.data0, data.raw.data1,
+ data.ipv4.version,
+ data.ipv4.ihl,
+ data.ipv4.tos,
+ data.ipv4.ttl,
+ data.ipv4.protocol);
+ else if (m->ol_flags & RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_IPV6)
+ printf(" - Flexible descriptor's Metadata: [0x%04x:0x%04x],"
+ "ipv6,ver=%u,tc=%u,flow_hi4=0x%x,nexthdr=%u,hoplimit=%u",
+ data.raw.data0, data.raw.data1,
+ data.ipv6.version,
+ data.ipv6.tc,
+ data.ipv6.flowhi4,
+ data.ipv6.nexthdr,
+ data.ipv6.hoplimit);
+ else if (m->ol_flags & RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_IPV6_FLOW)
+ printf(" - Flexible descriptor's Metadata: [0x%04x:0x%04x],"
+ "ipv6_flow,ver=%u,tc=%u,flow=0x%x%04x",
+ data.raw.data0, data.raw.data1,
+ data.ipv6_flow.version,
+ data.ipv6_flow.tc,
+ data.ipv6_flow.flowhi4,
+ data.ipv6_flow.flowlo16);
+ else if (m->ol_flags & RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_TCP)
+ printf(" - Flexible descriptor's Metadata: [0x%04x:0x%04x],"
+ "tcp,doff=%u,flags=%s%s%s%s%s%s%s%s",
+ data.raw.data0, data.raw.data1,
+ data.tcp.doff,
+ data.tcp.cwr ? "C" : "",
+ data.tcp.ece ? "E" : "",
+ data.tcp.urg ? "U" : "",
+ data.tcp.ack ? "A" : "",
+ data.tcp.psh ? "P" : "",
+ data.tcp.rst ? "R" : "",
+ data.tcp.syn ? "S" : "",
+ data.tcp.fin ? "F" : "");
+ else if (m->ol_flags & RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_IP_OFFSET)
+ printf(" - Flexible descriptor's Extraction: ip_offset=%u",
+ data.ip_ofs);
+}
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_PMD_IAVF_H_ */
diff --git a/drivers/net/iavf/rte_pmd_iavf_version.map b/drivers/net/iavf/rte_pmd_iavf_version.map
index 4a76d1d52..d7afd31d1 100644
--- a/drivers/net/iavf/rte_pmd_iavf_version.map
+++ b/drivers/net/iavf/rte_pmd_iavf_version.map
@@ -1,3 +1,16 @@
DPDK_21 {
local: *;
};
+
+EXPERIMENTAL {
+ global:
+
+ # added in 20.11
+ rte_net_iavf_dynfield_proto_xtr_metadata_offs;
+ rte_net_iavf_dynflag_proto_xtr_vlan_mask;
+ rte_net_iavf_dynflag_proto_xtr_ipv4_mask;
+ rte_net_iavf_dynflag_proto_xtr_ipv6_mask;
+ rte_net_iavf_dynflag_proto_xtr_ipv6_flow_mask;
+ rte_net_iavf_dynflag_proto_xtr_tcp_mask;
+ rte_net_iavf_dynflag_proto_xtr_ip_offset_mask;
+};
--
2.20.1
^ permalink raw reply [flat|nested] 40+ messages in thread
* Re: [dpdk-dev] [PATCH v5] net/iavf: support flex desc metadata extraction
2020-09-25 6:23 ` [dpdk-dev] [PATCH v5] " Jeff Guo
@ 2020-09-25 6:33 ` Wang, Haiyue
0 siblings, 0 replies; 40+ messages in thread
From: Wang, Haiyue @ 2020-09-25 6:33 UTC (permalink / raw)
To: Guo, Jia, Wu, Jingjing, Zhang, Qi Z, Xing, Beilei; +Cc: dev
> -----Original Message-----
> From: Guo, Jia <jia.guo@intel.com>
> Sent: Friday, September 25, 2020 14:23
> To: Wu, Jingjing <jingjing.wu@intel.com>; Zhang, Qi Z <qi.z.zhang@intel.com>; Xing, Beilei
> <beilei.xing@intel.com>
> Cc: dev@dpdk.org; Wang, Haiyue <haiyue.wang@intel.com>; Guo, Jia <jia.guo@intel.com>
> Subject: [PATCH v5] net/iavf: support flex desc metadata extraction
>
> Enable metadata extraction for flexible descriptors in AVF, which allows
> network functions to get metadata directly, without additional parsing,
> and thus reduces the CPU cost for VFs. The enabled metadata extraction
> covers the metadata of the VLAN/IPv4/IPv6/IPv6-FLOW/TCP/MPLS flexible
> descriptors; the VF negotiates the flexible descriptor capability with
> the PF and configures the specific offload on the receive queues
> accordingly.
>
> Signed-off-by: Jeff Guo <jia.guo@intel.com>
> ---
> v5:
> remove ovs configure since ovs is not protocol extraction
>
> v4:
> add flex desc type in rx queue for handling vector path
> handle ovs flex type
>
> v3:
> export these global symbols into .map
>
> v2:
> remove makefile change and modify the rxdid handling
> ---
> doc/guides/rel_notes/release_20_11.rst | 6 +
> drivers/net/iavf/iavf.h | 24 +-
> drivers/net/iavf/iavf_ethdev.c | 394 ++++++++++++++++++++++
> drivers/net/iavf/iavf_rxtx.c | 283 ++++++++++++++--
> drivers/net/iavf/iavf_rxtx.h | 168 +++++----
> drivers/net/iavf/iavf_rxtx_vec_common.h | 3 +
> drivers/net/iavf/iavf_vchnl.c | 22 +-
> drivers/net/iavf/meson.build | 2 +
> drivers/net/iavf/rte_pmd_iavf.h | 250 ++++++++++++++
> drivers/net/iavf/rte_pmd_iavf_version.map | 13 +
> 10 files changed, 1051 insertions(+), 114 deletions(-)
> create mode 100644 drivers/net/iavf/rte_pmd_iavf.h
LGTM
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
> 2.20.1
* [dpdk-dev] [PATCH v6] net/iavf: support flex desc metadata extraction
2020-09-09 2:54 [dpdk-dev] [PATCH v1] net/iavf: support flex desc metadata extraction Jeff Guo
` (4 preceding siblings ...)
2020-09-25 6:23 ` [dpdk-dev] [PATCH v5] " Jeff Guo
@ 2020-09-27 2:08 ` Jeff Guo
2020-09-27 3:00 ` Zhang, Qi Z
2020-09-28 15:59 ` Ferruh Yigit
2020-09-29 6:10 ` [dpdk-dev] [PATCH v7] " Jeff Guo
` (7 subsequent siblings)
13 siblings, 2 replies; 40+ messages in thread
From: Jeff Guo @ 2020-09-27 2:08 UTC (permalink / raw)
To: jingjing.wu, qi.z.zhang, beilei.xing; +Cc: dev, haiyue.wang, jia.guo
Enable metadata extraction for flexible descriptors in AVF, which allows
network functions to get metadata directly, without additional parsing,
and thus reduces the CPU cost for VFs. The enabled metadata extraction
covers the metadata of the VLAN/IPv4/IPv6/IPv6-FLOW/TCP/MPLS flexible
descriptors; the VF negotiates the flexible descriptor capability with
the PF and configures the specific offload on the receive queues
accordingly.
Signed-off-by: Jeff Guo <jia.guo@intel.com>
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
---
v6:
rebase patch
v5:
remove ovs configure since ovs is not protocol extraction
v4:
add flex desc type in rx queue for handling vector path
handle ovs flex type
v3:
export these global symbols into .map
v2:
remove makefile change and modify the rxdid handling
---
doc/guides/rel_notes/release_20_11.rst | 6 +
drivers/net/iavf/iavf.h | 24 +-
drivers/net/iavf/iavf_ethdev.c | 394 ++++++++++++++++++++++
drivers/net/iavf/iavf_rxtx.c | 283 ++++++++++++++--
drivers/net/iavf/iavf_rxtx.h | 168 +++++----
drivers/net/iavf/iavf_rxtx_vec_common.h | 3 +
drivers/net/iavf/iavf_vchnl.c | 22 +-
drivers/net/iavf/meson.build | 2 +
drivers/net/iavf/rte_pmd_iavf.h | 250 ++++++++++++++
drivers/net/iavf/rte_pmd_iavf_version.map | 13 +
10 files changed, 1051 insertions(+), 114 deletions(-)
create mode 100644 drivers/net/iavf/rte_pmd_iavf.h
diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index 4bcf220c3..96d8c1448 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -84,6 +84,12 @@ New Features
* Added support for 200G PAM4 link speed.
+* **Updated Intel iavf driver.**
+
+ Updated iavf PMD with new features and improvements, including:
+
+ * Added support for flexible descriptor metadata extraction.
+
Removed Items
-------------
diff --git a/drivers/net/iavf/iavf.h b/drivers/net/iavf/iavf.h
index 3198d85b3..d56611608 100644
--- a/drivers/net/iavf/iavf.h
+++ b/drivers/net/iavf/iavf.h
@@ -119,7 +119,7 @@ struct iavf_info {
struct virtchnl_vf_resource *vf_res; /* VF resource */
struct virtchnl_vsi_resource *vsi_res; /* LAN VSI */
uint64_t supported_rxdid;
-
+ uint8_t *proto_xtr; /* proto xtr type for all queues */
volatile enum virtchnl_ops pend_cmd; /* pending command not finished */
uint32_t cmd_retval; /* return value of the cmd response from PF */
uint8_t *aq_resp; /* buffer to store the adminq response from PF */
@@ -153,6 +153,27 @@ struct iavf_info {
#define IAVF_MAX_PKT_TYPE 1024
+#define IAVF_MAX_QUEUE_NUM 2048
+
+enum iavf_proto_xtr_type {
+ IAVF_PROTO_XTR_NONE,
+ IAVF_PROTO_XTR_VLAN,
+ IAVF_PROTO_XTR_IPV4,
+ IAVF_PROTO_XTR_IPV6,
+ IAVF_PROTO_XTR_IPV6_FLOW,
+ IAVF_PROTO_XTR_TCP,
+ IAVF_PROTO_XTR_IP_OFFSET,
+ IAVF_PROTO_XTR_MAX,
+};
+
+/**
+ * Cache devargs parse result.
+ */
+struct iavf_devargs {
+ uint8_t proto_xtr_dflt;
+ uint8_t proto_xtr[IAVF_MAX_QUEUE_NUM];
+};
+
/* Structure to store private data for each VF instance. */
struct iavf_adapter {
struct iavf_hw hw;
@@ -166,6 +187,7 @@ struct iavf_adapter {
const uint32_t *ptype_tbl;
bool stopped;
uint16_t fdir_ref_cnt;
+ struct iavf_devargs devargs;
};
/* IAVF_DEV_PRIVATE_TO */
diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
index 440da7d76..a88d53ab0 100644
--- a/drivers/net/iavf/iavf_ethdev.c
+++ b/drivers/net/iavf/iavf_ethdev.c
@@ -28,6 +28,49 @@
#include "iavf.h"
#include "iavf_rxtx.h"
#include "iavf_generic_flow.h"
+#include "rte_pmd_iavf.h"
+
+/* devargs */
+#define IAVF_PROTO_XTR_ARG "proto_xtr"
+
+static const char * const iavf_valid_args[] = {
+ IAVF_PROTO_XTR_ARG,
+ NULL
+};
+
+static const struct rte_mbuf_dynfield iavf_proto_xtr_metadata_param = {
+ .name = "iavf_dynfield_proto_xtr_metadata",
+ .size = sizeof(uint32_t),
+ .align = __alignof__(uint32_t),
+ .flags = 0,
+};
+
+struct iavf_proto_xtr_ol {
+ const struct rte_mbuf_dynflag param;
+ uint64_t *ol_flag;
+ bool required;
+};
+
+static struct iavf_proto_xtr_ol iavf_proto_xtr_params[] = {
+ [IAVF_PROTO_XTR_VLAN] = {
+ .param = { .name = "iavf_dynflag_proto_xtr_vlan" },
+ .ol_flag = &rte_net_iavf_dynflag_proto_xtr_vlan_mask },
+ [IAVF_PROTO_XTR_IPV4] = {
+ .param = { .name = "iavf_dynflag_proto_xtr_ipv4" },
+ .ol_flag = &rte_net_iavf_dynflag_proto_xtr_ipv4_mask },
+ [IAVF_PROTO_XTR_IPV6] = {
+ .param = { .name = "iavf_dynflag_proto_xtr_ipv6" },
+ .ol_flag = &rte_net_iavf_dynflag_proto_xtr_ipv6_mask },
+ [IAVF_PROTO_XTR_IPV6_FLOW] = {
+ .param = { .name = "iavf_dynflag_proto_xtr_ipv6_flow" },
+ .ol_flag = &rte_net_iavf_dynflag_proto_xtr_ipv6_flow_mask },
+ [IAVF_PROTO_XTR_TCP] = {
+ .param = { .name = "iavf_dynflag_proto_xtr_tcp" },
+ .ol_flag = &rte_net_iavf_dynflag_proto_xtr_tcp_mask },
+ [IAVF_PROTO_XTR_IP_OFFSET] = {
+ .param = { .name = "iavf_dynflag_proto_xtr_ip_offset" },
+ .ol_flag = &rte_net_iavf_dynflag_proto_xtr_ip_offset_mask },
+};
static int iavf_dev_configure(struct rte_eth_dev *dev);
static int iavf_dev_start(struct rte_eth_dev *dev);
@@ -1213,6 +1256,349 @@ iavf_check_vf_reset_done(struct iavf_hw *hw)
return 0;
}
+static int
+iavf_lookup_proto_xtr_type(const char *flex_name)
+{
+ static struct {
+ const char *name;
+ enum iavf_proto_xtr_type type;
+ } xtr_type_map[] = {
+ { "vlan", IAVF_PROTO_XTR_VLAN },
+ { "ipv4", IAVF_PROTO_XTR_IPV4 },
+ { "ipv6", IAVF_PROTO_XTR_IPV6 },
+ { "ipv6_flow", IAVF_PROTO_XTR_IPV6_FLOW },
+ { "tcp", IAVF_PROTO_XTR_TCP },
+ { "ip_offset", IAVF_PROTO_XTR_IP_OFFSET },
+ };
+ uint32_t i;
+
+ for (i = 0; i < RTE_DIM(xtr_type_map); i++) {
+ if (strcmp(flex_name, xtr_type_map[i].name) == 0)
+ return xtr_type_map[i].type;
+ }
+
+ PMD_DRV_LOG(ERR, "wrong proto_xtr type, "
+ "it should be: vlan|ipv4|ipv6|ipv6_flow|tcp|ip_offset");
+
+ return -1;
+}
+
+/**
+ * Parse an element; the element can be a single number/range or a '( )' group
+ * 1) A single number element is just a simple digit, e.g. 9
+ * 2) A single range element is two digits separated by a '-', e.g. 2-6
+ * 3) A group element combines multiple 1) or 2) with '( )', e.g. (0,2-4,6)
+ *    Within a group element, '-' is used as a range separator and
+ *    ',' separates single numbers.
+ */
+static int
+iavf_parse_queue_set(const char *input, int xtr_type,
+ struct iavf_devargs *devargs)
+{
+ const char *str = input;
+ char *end = NULL;
+ uint32_t min, max;
+ uint32_t idx;
+
+ while (isblank(*str))
+ str++;
+
+ if (!isdigit(*str) && *str != '(')
+ return -1;
+
+ /* process single number or single range of number */
+ if (*str != '(') {
+ errno = 0;
+ idx = strtoul(str, &end, 10);
+ if (errno || !end || idx >= IAVF_MAX_QUEUE_NUM)
+ return -1;
+
+ while (isblank(*end))
+ end++;
+
+ min = idx;
+ max = idx;
+
+ /* process single <number>-<number> */
+ if (*end == '-') {
+ end++;
+ while (isblank(*end))
+ end++;
+ if (!isdigit(*end))
+ return -1;
+
+ errno = 0;
+ idx = strtoul(end, &end, 10);
+ if (errno || !end || idx >= IAVF_MAX_QUEUE_NUM)
+ return -1;
+
+ max = idx;
+ while (isblank(*end))
+ end++;
+ }
+
+ if (*end != ':')
+ return -1;
+
+ for (idx = RTE_MIN(min, max);
+ idx <= RTE_MAX(min, max); idx++)
+ devargs->proto_xtr[idx] = xtr_type;
+
+ return 0;
+ }
+
+ /* process set within bracket */
+ str++;
+ while (isblank(*str))
+ str++;
+ if (*str == '\0')
+ return -1;
+
+ min = IAVF_MAX_QUEUE_NUM;
+ do {
+ /* go ahead to the first digit */
+ while (isblank(*str))
+ str++;
+ if (!isdigit(*str))
+ return -1;
+
+ /* get the digit value */
+ errno = 0;
+ idx = strtoul(str, &end, 10);
+ if (errno || !end || idx >= IAVF_MAX_QUEUE_NUM)
+ return -1;
+
+ /* go ahead to separator '-',',' and ')' */
+ while (isblank(*end))
+ end++;
+ if (*end == '-') {
+ if (min == IAVF_MAX_QUEUE_NUM)
+ min = idx;
+ else /* avoid continuous '-' */
+ return -1;
+ } else if (*end == ',' || *end == ')') {
+ max = idx;
+ if (min == IAVF_MAX_QUEUE_NUM)
+ min = idx;
+
+ for (idx = RTE_MIN(min, max);
+ idx <= RTE_MAX(min, max); idx++)
+ devargs->proto_xtr[idx] = xtr_type;
+
+ min = IAVF_MAX_QUEUE_NUM;
+ } else {
+ return -1;
+ }
+
+ str = end + 1;
+ } while (*end != ')' && *end != '\0');
+
+ return 0;
+}
+
+static int
+iavf_parse_queue_proto_xtr(const char *queues, struct iavf_devargs *devargs)
+{
+ const char *queue_start;
+ uint32_t idx;
+ int xtr_type;
+ char flex_name[32];
+
+ while (isblank(*queues))
+ queues++;
+
+ if (*queues != '[') {
+ xtr_type = iavf_lookup_proto_xtr_type(queues);
+ if (xtr_type < 0)
+ return -1;
+
+ devargs->proto_xtr_dflt = xtr_type;
+
+ return 0;
+ }
+
+ queues++;
+ do {
+ while (isblank(*queues))
+ queues++;
+ if (*queues == '\0')
+ return -1;
+
+ queue_start = queues;
+
+ /* go across a complete bracket */
+ if (*queue_start == '(') {
+ queues += strcspn(queues, ")");
+ if (*queues != ')')
+ return -1;
+ }
+
+ /* scan the separator ':' */
+ queues += strcspn(queues, ":");
+ if (*queues++ != ':')
+ return -1;
+ while (isblank(*queues))
+ queues++;
+
+ for (idx = 0; ; idx++) {
+ if (isblank(queues[idx]) ||
+ queues[idx] == ',' ||
+ queues[idx] == ']' ||
+ queues[idx] == '\0')
+ break;
+
+ if (idx > sizeof(flex_name) - 2)
+ return -1;
+
+ flex_name[idx] = queues[idx];
+ }
+ flex_name[idx] = '\0';
+ xtr_type = iavf_lookup_proto_xtr_type(flex_name);
+ if (xtr_type < 0)
+ return -1;
+
+ queues += idx;
+
+ while (isblank(*queues) || *queues == ',' || *queues == ']')
+ queues++;
+
+ if (iavf_parse_queue_set(queue_start, xtr_type, devargs) < 0)
+ return -1;
+ } while (*queues != '\0');
+
+ return 0;
+}
+
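For reference, the queue-set grammar accepted above maps to devargs strings like the following; the PCI address and queue numbers are made-up examples, not values from this patch:

```shell
# Extract TCP metadata on queues 0 and 2-4, VLAN metadata on queue 6;
# the remaining queues use no protocol extraction.
dpdk-testpmd -a 18:01.0,proto_xtr='[(0,2-4):tcp,6:vlan]' -- -i

# A bare type (no queue set) becomes the default for all queues:
dpdk-testpmd -a 18:01.0,proto_xtr=ip_offset -- -i
```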
+static int
+iavf_handle_proto_xtr_arg(__rte_unused const char *key, const char *value,
+ void *extra_args)
+{
+ struct iavf_devargs *devargs = extra_args;
+
+ if (!value || !extra_args)
+ return -EINVAL;
+
+ if (iavf_parse_queue_proto_xtr(value, devargs) < 0) {
+ PMD_DRV_LOG(ERR, "invalid proto_xtr parameter: '%s'",
+ value);
+ return -1;
+ }
+
+ return 0;
+}
+
+static int iavf_parse_devargs(struct rte_eth_dev *dev)
+{
+ struct iavf_adapter *ad =
+ IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+ struct rte_devargs *devargs = dev->device->devargs;
+ struct rte_kvargs *kvlist;
+ int ret;
+
+ if (!devargs)
+ return 0;
+
+ kvlist = rte_kvargs_parse(devargs->args, iavf_valid_args);
+ if (!kvlist) {
+ PMD_INIT_LOG(ERR, "invalid kvargs key\n");
+ return -EINVAL;
+ }
+
+ ad->devargs.proto_xtr_dflt = IAVF_PROTO_XTR_NONE;
+ memset(ad->devargs.proto_xtr, IAVF_PROTO_XTR_NONE,
+ sizeof(ad->devargs.proto_xtr));
+
+ ret = rte_kvargs_process(kvlist, IAVF_PROTO_XTR_ARG,
+ &iavf_handle_proto_xtr_arg, &ad->devargs);
+ if (ret)
+ goto bail;
+
+bail:
+ rte_kvargs_free(kvlist);
+ return ret;
+}
+
+static void
+iavf_init_proto_xtr(struct rte_eth_dev *dev)
+{
+ struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+ struct iavf_adapter *ad =
+ IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+ const struct iavf_proto_xtr_ol *xtr_ol;
+ bool proto_xtr_enable = false;
+ int offset;
+ uint16_t i;
+
+ vf->proto_xtr = rte_zmalloc("vf proto xtr",
+ vf->vsi_res->num_queue_pairs, 0);
+ if (unlikely(!(vf->proto_xtr))) {
+ PMD_DRV_LOG(ERR, "no memory for setting up proto_xtr's table");
+ return;
+ }
+
+ for (i = 0; i < vf->vsi_res->num_queue_pairs; i++) {
+ vf->proto_xtr[i] = ad->devargs.proto_xtr[i] !=
+ IAVF_PROTO_XTR_NONE ?
+ ad->devargs.proto_xtr[i] :
+ ad->devargs.proto_xtr_dflt;
+
+ if (vf->proto_xtr[i] != IAVF_PROTO_XTR_NONE) {
+ uint8_t type = vf->proto_xtr[i];
+
+ iavf_proto_xtr_params[type].required = true;
+ proto_xtr_enable = true;
+ }
+ }
+
+ if (likely(!proto_xtr_enable))
+ return;
+
+ offset = rte_mbuf_dynfield_register(&iavf_proto_xtr_metadata_param);
+ if (unlikely(offset == -1)) {
+ PMD_DRV_LOG(ERR,
+ "failed to register proto_xtr metadata dynfield, error %d",
+ -rte_errno);
+ return;
+ }
+
+ PMD_DRV_LOG(DEBUG,
+ "proto_xtr metadata offset in mbuf is : %d",
+ offset);
+ rte_net_iavf_dynfield_proto_xtr_metadata_offs = offset;
+
+ for (i = 0; i < RTE_DIM(iavf_proto_xtr_params); i++) {
+ xtr_ol = &iavf_proto_xtr_params[i];
+
+ uint8_t rxdid = iavf_proto_xtr_type_to_rxdid((uint8_t)i);
+
+ if (!xtr_ol->required)
+ continue;
+
+ if (!(vf->supported_rxdid & BIT(rxdid))) {
+ PMD_DRV_LOG(ERR,
+ "rxdid[%u] is not supported in hardware",
+ rxdid);
+ rte_net_iavf_dynfield_proto_xtr_metadata_offs = -1;
+ break;
+ }
+
+ offset = rte_mbuf_dynflag_register(&xtr_ol->param);
+ if (unlikely(offset == -1)) {
+ PMD_DRV_LOG(ERR,
+ "failed to register proto_xtr offload '%s', error %d",
+ xtr_ol->param.name, -rte_errno);
+
+ rte_net_iavf_dynfield_proto_xtr_metadata_offs = -1;
+ break;
+ }
+
+ PMD_DRV_LOG(DEBUG,
+ "proto_xtr offload '%s' offset in mbuf is : %d",
+ xtr_ol->param.name, offset);
+ *xtr_ol->ol_flag = 1ULL << offset;
+ }
+}
+
static int
iavf_init_vf(struct rte_eth_dev *dev)
{
@@ -1222,6 +1608,12 @@ iavf_init_vf(struct rte_eth_dev *dev)
struct iavf_hw *hw = IAVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+ err = iavf_parse_devargs(dev);
+ if (err) {
+ PMD_INIT_LOG(ERR, "Failed to parse devargs");
+ goto err;
+ }
+
err = iavf_set_mac_type(hw);
if (err) {
PMD_INIT_LOG(ERR, "set_mac_type failed: %d", err);
@@ -1287,6 +1679,8 @@ iavf_init_vf(struct rte_eth_dev *dev)
vf->vf_reset = false;
+ iavf_init_proto_xtr(dev);
+
return 0;
err_rss:
rte_free(vf->rss_key);
diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
index 05a7dd898..7b81bf8ad 100644
--- a/drivers/net/iavf/iavf_rxtx.c
+++ b/drivers/net/iavf/iavf_rxtx.c
@@ -26,6 +26,35 @@
#include "iavf.h"
#include "iavf_rxtx.h"
+#include "rte_pmd_iavf.h"
+
+/* Offset of mbuf dynamic field for protocol extraction's metadata */
+int rte_net_iavf_dynfield_proto_xtr_metadata_offs = -1;
+
+/* Mask of mbuf dynamic flags for protocol extraction's type */
+uint64_t rte_net_iavf_dynflag_proto_xtr_vlan_mask;
+uint64_t rte_net_iavf_dynflag_proto_xtr_ipv4_mask;
+uint64_t rte_net_iavf_dynflag_proto_xtr_ipv6_mask;
+uint64_t rte_net_iavf_dynflag_proto_xtr_ipv6_flow_mask;
+uint64_t rte_net_iavf_dynflag_proto_xtr_tcp_mask;
+uint64_t rte_net_iavf_dynflag_proto_xtr_ip_offset_mask;
+
+uint8_t
+iavf_proto_xtr_type_to_rxdid(uint8_t flex_type)
+{
+ static uint8_t rxdid_map[] = {
+ [IAVF_PROTO_XTR_NONE] = IAVF_RXDID_COMMS_OVS_1,
+ [IAVF_PROTO_XTR_VLAN] = IAVF_RXDID_COMMS_AUX_VLAN,
+ [IAVF_PROTO_XTR_IPV4] = IAVF_RXDID_COMMS_AUX_IPV4,
+ [IAVF_PROTO_XTR_IPV6] = IAVF_RXDID_COMMS_AUX_IPV6,
+ [IAVF_PROTO_XTR_IPV6_FLOW] = IAVF_RXDID_COMMS_AUX_IPV6_FLOW,
+ [IAVF_PROTO_XTR_TCP] = IAVF_RXDID_COMMS_AUX_TCP,
+ [IAVF_PROTO_XTR_IP_OFFSET] = IAVF_RXDID_COMMS_AUX_IP_OFFSET,
+ };
+
+ return flex_type < RTE_DIM(rxdid_map) ?
+ rxdid_map[flex_type] : IAVF_RXDID_COMMS_OVS_1;
+}
static inline int
check_rx_thresh(uint16_t nb_desc, uint16_t thresh)
@@ -294,6 +323,160 @@ static const struct iavf_txq_ops def_txq_ops = {
.release_mbufs = release_txq_mbufs,
};
+static inline void
+iavf_rxd_to_pkt_fields_by_comms_ovs(__rte_unused struct iavf_rx_queue *rxq,
+ struct rte_mbuf *mb,
+ volatile union iavf_rx_flex_desc *rxdp)
+{
+ volatile struct iavf_32b_rx_flex_desc_comms_ovs *desc =
+ (volatile struct iavf_32b_rx_flex_desc_comms_ovs *)rxdp;
+#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
+ uint16_t stat_err;
+#endif
+
+ if (desc->flow_id != 0xFFFFFFFF) {
+ mb->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+ mb->hash.fdir.hi = rte_le_to_cpu_32(desc->flow_id);
+ }
+
+#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
+ stat_err = rte_le_to_cpu_16(desc->status_error0);
+ if (likely(stat_err & (1 << IAVF_RX_FLEX_DESC_STATUS0_RSS_VALID_S))) {
+ mb->ol_flags |= PKT_RX_RSS_HASH;
+ mb->hash.rss = rte_le_to_cpu_32(desc->rss_hash);
+ }
+#endif
+}
+
+static inline void
+iavf_rxd_to_pkt_fields_by_comms_aux_v1(struct iavf_rx_queue *rxq,
+ struct rte_mbuf *mb,
+ volatile union iavf_rx_flex_desc *rxdp)
+{
+ volatile struct iavf_32b_rx_flex_desc_comms *desc =
+ (volatile struct iavf_32b_rx_flex_desc_comms *)rxdp;
+ uint16_t stat_err;
+
+ stat_err = rte_le_to_cpu_16(desc->status_error0);
+ if (likely(stat_err & (1 << IAVF_RX_FLEX_DESC_STATUS0_RSS_VALID_S))) {
+ mb->ol_flags |= PKT_RX_RSS_HASH;
+ mb->hash.rss = rte_le_to_cpu_32(desc->rss_hash);
+ }
+
+#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
+ if (desc->flow_id != 0xFFFFFFFF) {
+ mb->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+ mb->hash.fdir.hi = rte_le_to_cpu_32(desc->flow_id);
+ }
+
+ if (rxq->xtr_ol_flag) {
+ uint32_t metadata = 0;
+
+ stat_err = rte_le_to_cpu_16(desc->status_error1);
+
+ if (stat_err & (1 << IAVF_RX_FLEX_DESC_STATUS1_XTRMD4_VALID_S))
+ metadata = rte_le_to_cpu_16(desc->flex_ts.flex.aux0);
+
+ if (stat_err & (1 << IAVF_RX_FLEX_DESC_STATUS1_XTRMD5_VALID_S))
+ metadata |=
+ rte_le_to_cpu_16(desc->flex_ts.flex.aux1) << 16;
+
+ if (metadata) {
+ mb->ol_flags |= rxq->xtr_ol_flag;
+
+ *RTE_NET_IAVF_DYNF_PROTO_XTR_METADATA(mb) = metadata;
+ }
+ }
+#endif
+}
+
+static inline void
+iavf_rxd_to_pkt_fields_by_comms_aux_v2(struct iavf_rx_queue *rxq,
+ struct rte_mbuf *mb,
+ volatile union iavf_rx_flex_desc *rxdp)
+{
+ volatile struct iavf_32b_rx_flex_desc_comms *desc =
+ (volatile struct iavf_32b_rx_flex_desc_comms *)rxdp;
+ uint16_t stat_err;
+
+ stat_err = rte_le_to_cpu_16(desc->status_error0);
+ if (likely(stat_err & (1 << IAVF_RX_FLEX_DESC_STATUS0_RSS_VALID_S))) {
+ mb->ol_flags |= PKT_RX_RSS_HASH;
+ mb->hash.rss = rte_le_to_cpu_32(desc->rss_hash);
+ }
+
+#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
+ if (desc->flow_id != 0xFFFFFFFF) {
+ mb->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+ mb->hash.fdir.hi = rte_le_to_cpu_32(desc->flow_id);
+ }
+
+ if (rxq->xtr_ol_flag) {
+ uint32_t metadata = 0;
+
+ if (desc->flex_ts.flex.aux0 != 0xFFFF)
+ metadata = rte_le_to_cpu_16(desc->flex_ts.flex.aux0);
+ else if (desc->flex_ts.flex.aux1 != 0xFFFF)
+ metadata = rte_le_to_cpu_16(desc->flex_ts.flex.aux1);
+
+ if (metadata) {
+ mb->ol_flags |= rxq->xtr_ol_flag;
+
+ *RTE_NET_IAVF_DYNF_PROTO_XTR_METADATA(mb) = metadata;
+ }
+ }
+#endif
+}
+
+static void
+iavf_select_rxd_to_pkt_fields_handler(struct iavf_rx_queue *rxq, uint32_t rxdid)
+{
+ switch (rxdid) {
+ case IAVF_RXDID_COMMS_AUX_VLAN:
+ rxq->xtr_ol_flag = rte_net_iavf_dynflag_proto_xtr_vlan_mask;
+ rxq->rxd_to_pkt_fields =
+ iavf_rxd_to_pkt_fields_by_comms_aux_v1;
+ break;
+ case IAVF_RXDID_COMMS_AUX_IPV4:
+ rxq->xtr_ol_flag = rte_net_iavf_dynflag_proto_xtr_ipv4_mask;
+ rxq->rxd_to_pkt_fields =
+ iavf_rxd_to_pkt_fields_by_comms_aux_v1;
+ break;
+ case IAVF_RXDID_COMMS_AUX_IPV6:
+ rxq->xtr_ol_flag = rte_net_iavf_dynflag_proto_xtr_ipv6_mask;
+ rxq->rxd_to_pkt_fields =
+ iavf_rxd_to_pkt_fields_by_comms_aux_v1;
+ break;
+ case IAVF_RXDID_COMMS_AUX_IPV6_FLOW:
+ rxq->xtr_ol_flag =
+ rte_net_iavf_dynflag_proto_xtr_ipv6_flow_mask;
+ rxq->rxd_to_pkt_fields =
+ iavf_rxd_to_pkt_fields_by_comms_aux_v1;
+ break;
+ case IAVF_RXDID_COMMS_AUX_TCP:
+ rxq->xtr_ol_flag = rte_net_iavf_dynflag_proto_xtr_tcp_mask;
+ rxq->rxd_to_pkt_fields =
+ iavf_rxd_to_pkt_fields_by_comms_aux_v1;
+ break;
+ case IAVF_RXDID_COMMS_AUX_IP_OFFSET:
+ rxq->xtr_ol_flag =
+ rte_net_iavf_dynflag_proto_xtr_ip_offset_mask;
+ rxq->rxd_to_pkt_fields =
+ iavf_rxd_to_pkt_fields_by_comms_aux_v2;
+ break;
+ case IAVF_RXDID_COMMS_OVS_1:
+ rxq->rxd_to_pkt_fields = iavf_rxd_to_pkt_fields_by_comms_ovs;
+ break;
+ default:
+ /* update this according to the RXDID for FLEX_DESC_NONE */
+ rxq->rxd_to_pkt_fields = iavf_rxd_to_pkt_fields_by_comms_ovs;
+ break;
+ }
+
+ if (!rte_net_iavf_dynf_proto_xtr_metadata_avail())
+ rxq->xtr_ol_flag = 0;
+}
+
int
iavf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
uint16_t nb_desc, unsigned int socket_id,
@@ -309,6 +492,7 @@ iavf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
struct iavf_rx_queue *rxq;
const struct rte_memzone *mz;
uint32_t ring_size;
+ uint8_t proto_xtr;
uint16_t len;
uint16_t rx_free_thresh;
@@ -346,14 +530,18 @@ iavf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
return -ENOMEM;
}
- if (vf->vf_res->vf_cap_flags &
- VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC &&
- vf->supported_rxdid & BIT(IAVF_RXDID_COMMS_OVS_1)) {
- rxq->rxdid = IAVF_RXDID_COMMS_OVS_1;
+ if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC) {
+ proto_xtr = vf->proto_xtr ? vf->proto_xtr[queue_idx] :
+ IAVF_PROTO_XTR_NONE;
+ rxq->rxdid = iavf_proto_xtr_type_to_rxdid(proto_xtr);
+ rxq->proto_xtr = proto_xtr;
} else {
rxq->rxdid = IAVF_RXDID_LEGACY_1;
+ rxq->proto_xtr = IAVF_PROTO_XTR_NONE;
}
+ iavf_select_rxd_to_pkt_fields_handler(rxq, rxq->rxdid);
+
rxq->mp = mp;
rxq->nb_rx_desc = nb_desc;
rxq->rx_free_thresh = rx_free_thresh;
@@ -715,6 +903,45 @@ iavf_stop_queues(struct rte_eth_dev *dev)
}
}
+#define IAVF_RX_FLEX_ERR0_BITS \
+ ((1 << IAVF_RX_FLEX_DESC_STATUS0_HBO_S) | \
+ (1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_IPE_S) | \
+ (1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_L4E_S) | \
+ (1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_EIPE_S) | \
+ (1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_EUDPE_S) | \
+ (1 << IAVF_RX_FLEX_DESC_STATUS0_RXE_S))
+
+/* Rx L3/L4 checksum */
+static inline uint64_t
+iavf_rxd_error_to_pkt_flags(uint16_t stat_err0)
+{
+ uint64_t flags = 0;
+
+ /* check if HW has decoded the packet and checksum */
+ if (unlikely(!(stat_err0 & (1 << IAVF_RX_FLEX_DESC_STATUS0_L3L4P_S))))
+ return 0;
+
+ if (likely(!(stat_err0 & IAVF_RX_FLEX_ERR0_BITS))) {
+ flags |= (PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD);
+ return flags;
+ }
+
+ if (unlikely(stat_err0 & (1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_IPE_S)))
+ flags |= PKT_RX_IP_CKSUM_BAD;
+ else
+ flags |= PKT_RX_IP_CKSUM_GOOD;
+
+ if (unlikely(stat_err0 & (1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_L4E_S)))
+ flags |= PKT_RX_L4_CKSUM_BAD;
+ else
+ flags |= PKT_RX_L4_CKSUM_GOOD;
+
+ if (unlikely(stat_err0 & (1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_EIPE_S)))
+ flags |= PKT_RX_EIP_CKSUM_BAD;
+
+ return flags;
+}
+
static inline void
iavf_rxd_to_vlan_tci(struct rte_mbuf *mb, volatile union iavf_rx_desc *rxdp)
{
@@ -740,6 +967,21 @@ iavf_flex_rxd_to_vlan_tci(struct rte_mbuf *mb,
} else {
mb->vlan_tci = 0;
}
+
+#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
+ if (rte_le_to_cpu_16(rxdp->wb.status_error1) &
+ (1 << IAVF_RX_FLEX_DESC_STATUS1_L2TAG2P_S)) {
+ mb->ol_flags |= PKT_RX_QINQ_STRIPPED | PKT_RX_QINQ |
+ PKT_RX_VLAN_STRIPPED | PKT_RX_VLAN;
+ mb->vlan_tci_outer = mb->vlan_tci;
+ mb->vlan_tci = rte_le_to_cpu_16(rxdp->wb.l2tag2_2nd);
+ PMD_RX_LOG(DEBUG, "Descriptor l2tag2_1: %u, l2tag2_2: %u",
+ rte_le_to_cpu_16(rxdp->wb.l2tag2_1st),
+ rte_le_to_cpu_16(rxdp->wb.l2tag2_2nd));
+ } else {
+ mb->vlan_tci_outer = 0;
+ }
+#endif
}
/* Translate the rx descriptor status and error fields to pkt flags */
@@ -804,30 +1046,6 @@ iavf_rxd_build_fdir(volatile union iavf_rx_desc *rxdp, struct rte_mbuf *mb)
return flags;
}
-
-/* Translate the rx flex descriptor status to pkt flags */
-static inline void
-iavf_rxd_to_pkt_fields(struct rte_mbuf *mb,
- volatile union iavf_rx_flex_desc *rxdp)
-{
- volatile struct iavf_32b_rx_flex_desc_comms_ovs *desc =
- (volatile struct iavf_32b_rx_flex_desc_comms_ovs *)rxdp;
-#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
- uint16_t stat_err;
-
- stat_err = rte_le_to_cpu_16(desc->status_error0);
- if (likely(stat_err & (1 << IAVF_RX_FLEX_DESC_STATUS0_RSS_VALID_S))) {
- mb->ol_flags |= PKT_RX_RSS_HASH;
- mb->hash.rss = rte_le_to_cpu_32(desc->rss_hash);
- }
-#endif
-
- if (desc->flow_id != 0xFFFFFFFF) {
- mb->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
- mb->hash.fdir.hi = rte_le_to_cpu_32(desc->flow_id);
- }
-}
-
#define IAVF_RX_FLEX_ERR0_BITS \
((1 << IAVF_RX_FLEX_DESC_STATUS0_HBO_S) | \
(1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_IPE_S) | \
@@ -1082,7 +1300,7 @@ iavf_recv_pkts_flex_rxd(void *rx_queue,
rxm->packet_type = ptype_tbl[IAVF_RX_FLEX_DESC_PTYPE_M &
rte_le_to_cpu_16(rxd.wb.ptype_flex_flags0)];
iavf_flex_rxd_to_vlan_tci(rxm, &rxd);
- iavf_rxd_to_pkt_fields(rxm, &rxd);
+ rxq->rxd_to_pkt_fields(rxq, rxm, &rxd);
pkt_flags = iavf_flex_rxd_error_to_pkt_flags(rx_stat_err0);
rxm->ol_flags |= pkt_flags;
@@ -1223,7 +1441,7 @@ iavf_recv_scattered_pkts_flex_rxd(void *rx_queue, struct rte_mbuf **rx_pkts,
first_seg->packet_type = ptype_tbl[IAVF_RX_FLEX_DESC_PTYPE_M &
rte_le_to_cpu_16(rxd.wb.ptype_flex_flags0)];
iavf_flex_rxd_to_vlan_tci(first_seg, &rxd);
- iavf_rxd_to_pkt_fields(first_seg, &rxd);
+ rxq->rxd_to_pkt_fields(rxq, first_seg, &rxd);
pkt_flags = iavf_flex_rxd_error_to_pkt_flags(rx_stat_err0);
first_seg->ol_flags |= pkt_flags;
@@ -1460,7 +1678,7 @@ iavf_rx_scan_hw_ring_flex_rxd(struct iavf_rx_queue *rxq)
mb->packet_type = ptype_tbl[IAVF_RX_FLEX_DESC_PTYPE_M &
rte_le_to_cpu_16(rxdp[j].wb.ptype_flex_flags0)];
iavf_flex_rxd_to_vlan_tci(mb, &rxdp[j]);
- iavf_rxd_to_pkt_fields(mb, &rxdp[j]);
+ rxq->rxd_to_pkt_fields(rxq, mb, &rxdp[j]);
stat_err0 = rte_le_to_cpu_16(rxdp[j].wb.status_error0);
pkt_flags = iavf_flex_rxd_error_to_pkt_flags(stat_err0);
@@ -1652,7 +1870,7 @@ rx_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
if (rxq->rx_nb_avail)
return iavf_rx_fill_from_stage(rxq, rx_pkts, nb_pkts);
- if (rxq->rxdid == IAVF_RXDID_COMMS_OVS_1)
+ if (rxq->rxdid >= IAVF_RXDID_FLEX_NIC && rxq->rxdid <= IAVF_RXDID_LAST)
nb_rx = (uint16_t)iavf_rx_scan_hw_ring_flex_rxd(rxq);
else
nb_rx = (uint16_t)iavf_rx_scan_hw_ring(rxq);
@@ -2100,6 +2318,7 @@ iavf_set_rx_function(struct rte_eth_dev *dev)
struct iavf_adapter *adapter =
IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+
#ifdef RTE_ARCH_X86
struct iavf_rx_queue *rxq;
int i;
diff --git a/drivers/net/iavf/iavf_rxtx.h b/drivers/net/iavf/iavf_rxtx.h
index 59625a979..5225493bc 100644
--- a/drivers/net/iavf/iavf_rxtx.h
+++ b/drivers/net/iavf/iavf_rxtx.h
@@ -57,6 +57,77 @@
#define IAVF_TX_OFFLOAD_NOTSUP_MASK \
(PKT_TX_OFFLOAD_MASK ^ IAVF_TX_OFFLOAD_MASK)
+/* Rx Flex Descriptors
+ * These descriptors are used instead of the legacy version descriptors
+ */
+union iavf_16b_rx_flex_desc {
+ struct {
+ __le64 pkt_addr; /* Packet buffer address */
+ __le64 hdr_addr; /* Header buffer address */
+ /* bit 0 of hdr_addr is DD bit */
+ } read;
+ struct {
+ /* Qword 0 */
+ u8 rxdid; /* descriptor builder profile ID */
+ u8 mir_id_umb_cast; /* mirror=[5:0], umb=[7:6] */
+ __le16 ptype_flex_flags0; /* ptype=[9:0], ff0=[15:10] */
+ __le16 pkt_len; /* [15:14] are reserved */
+ __le16 hdr_len_sph_flex_flags1; /* header=[10:0] */
+ /* sph=[11:11] */
+ /* ff1/ext=[15:12] */
+
+ /* Qword 1 */
+ __le16 status_error0;
+ __le16 l2tag1;
+ __le16 flex_meta0;
+ __le16 flex_meta1;
+ } wb; /* writeback */
+};
+
+union iavf_32b_rx_flex_desc {
+ struct {
+ __le64 pkt_addr; /* Packet buffer address */
+ __le64 hdr_addr; /* Header buffer address */
+ /* bit 0 of hdr_addr is DD bit */
+ __le64 rsvd1;
+ __le64 rsvd2;
+ } read;
+ struct {
+ /* Qword 0 */
+ u8 rxdid; /* descriptor builder profile ID */
+ u8 mir_id_umb_cast; /* mirror=[5:0], umb=[7:6] */
+ __le16 ptype_flex_flags0; /* ptype=[9:0], ff0=[15:10] */
+ __le16 pkt_len; /* [15:14] are reserved */
+ __le16 hdr_len_sph_flex_flags1; /* header=[10:0] */
+ /* sph=[11:11] */
+ /* ff1/ext=[15:12] */
+
+ /* Qword 1 */
+ __le16 status_error0;
+ __le16 l2tag1;
+ __le16 flex_meta0;
+ __le16 flex_meta1;
+
+ /* Qword 2 */
+ __le16 status_error1;
+ u8 flex_flags2;
+ u8 time_stamp_low;
+ __le16 l2tag2_1st;
+ __le16 l2tag2_2nd;
+
+ /* Qword 3 */
+ __le16 flex_meta2;
+ __le16 flex_meta3;
+ union {
+ struct {
+ __le16 flex_meta4;
+ __le16 flex_meta5;
+ } flex;
+ __le32 ts_high;
+ } flex_ts;
+ } wb; /* writeback */
+};
+
/* HW desc structure, both 16-byte and 32-byte types are supported */
#ifdef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
#define iavf_rx_desc iavf_16byte_rx_desc
@@ -66,6 +137,10 @@
#define iavf_rx_flex_desc iavf_32b_rx_flex_desc
#endif
+typedef void (*iavf_rxd_to_pkt_fields_t)(struct iavf_rx_queue *rxq,
+ struct rte_mbuf *mb,
+ volatile union iavf_rx_flex_desc *rxdp);
+
struct iavf_rxq_ops {
void (*release_mbufs)(struct iavf_rx_queue *rxq);
};
@@ -114,6 +189,11 @@ struct iavf_rx_queue {
bool q_set; /* if rx queue has been configured */
bool rx_deferred_start; /* don't start this queue in dev start */
const struct iavf_rxq_ops *ops;
+ uint8_t proto_xtr; /* protocol extraction type */
+ uint64_t xtr_ol_flag;
+ /* flexible descriptor metadata extraction offload flag */
+ iavf_rxd_to_pkt_fields_t rxd_to_pkt_fields;
+ /* handle flexible descriptor by RXDID */
};
struct iavf_tx_entry {
@@ -161,77 +241,6 @@ union iavf_tx_offload {
};
};
-/* Rx Flex Descriptors
- * These descriptors are used instead of the legacy version descriptors
- */
-union iavf_16b_rx_flex_desc {
- struct {
- __le64 pkt_addr; /* Packet buffer address */
- __le64 hdr_addr; /* Header buffer address */
- /* bit 0 of hdr_addr is DD bit */
- } read;
- struct {
- /* Qword 0 */
- u8 rxdid; /* descriptor builder profile ID */
- u8 mir_id_umb_cast; /* mirror=[5:0], umb=[7:6] */
- __le16 ptype_flex_flags0; /* ptype=[9:0], ff0=[15:10] */
- __le16 pkt_len; /* [15:14] are reserved */
- __le16 hdr_len_sph_flex_flags1; /* header=[10:0] */
- /* sph=[11:11] */
- /* ff1/ext=[15:12] */
-
- /* Qword 1 */
- __le16 status_error0;
- __le16 l2tag1;
- __le16 flex_meta0;
- __le16 flex_meta1;
- } wb; /* writeback */
-};
-
-union iavf_32b_rx_flex_desc {
- struct {
- __le64 pkt_addr; /* Packet buffer address */
- __le64 hdr_addr; /* Header buffer address */
- /* bit 0 of hdr_addr is DD bit */
- __le64 rsvd1;
- __le64 rsvd2;
- } read;
- struct {
- /* Qword 0 */
- u8 rxdid; /* descriptor builder profile ID */
- u8 mir_id_umb_cast; /* mirror=[5:0], umb=[7:6] */
- __le16 ptype_flex_flags0; /* ptype=[9:0], ff0=[15:10] */
- __le16 pkt_len; /* [15:14] are reserved */
- __le16 hdr_len_sph_flex_flags1; /* header=[10:0] */
- /* sph=[11:11] */
- /* ff1/ext=[15:12] */
-
- /* Qword 1 */
- __le16 status_error0;
- __le16 l2tag1;
- __le16 flex_meta0;
- __le16 flex_meta1;
-
- /* Qword 2 */
- __le16 status_error1;
- u8 flex_flags2;
- u8 time_stamp_low;
- __le16 l2tag2_1st;
- __le16 l2tag2_2nd;
-
- /* Qword 3 */
- __le16 flex_meta2;
- __le16 flex_meta3;
- union {
- struct {
- __le16 flex_meta4;
- __le16 flex_meta5;
- } flex;
- __le32 ts_high;
- } flex_ts;
- } wb; /* writeback */
-};
-
/* Rx Flex Descriptor
* RxDID Profile ID 16-21
* Flex-field 0: RSS hash lower 16-bits
@@ -331,6 +340,7 @@ enum iavf_rxdid {
IAVF_RXDID_COMMS_AUX_TCP = 21,
IAVF_RXDID_COMMS_OVS_1 = 22,
IAVF_RXDID_COMMS_OVS_2 = 23,
+ IAVF_RXDID_COMMS_AUX_IP_OFFSET = 25,
IAVF_RXDID_LAST = 63,
};
@@ -355,6 +365,20 @@ enum iavf_rx_flex_desc_status_error_0_bits {
IAVF_RX_FLEX_DESC_STATUS0_LAST /* this entry must be last!!! */
};
+enum iavf_rx_flex_desc_status_error_1_bits {
+ /* Note: These are predefined bit offsets */
+ IAVF_RX_FLEX_DESC_STATUS1_CPM_S = 0, /* 4 bits */
+ IAVF_RX_FLEX_DESC_STATUS1_NAT_S = 4,
+ IAVF_RX_FLEX_DESC_STATUS1_CRYPTO_S = 5,
+ /* [10:6] reserved */
+ IAVF_RX_FLEX_DESC_STATUS1_L2TAG2P_S = 11,
+ IAVF_RX_FLEX_DESC_STATUS1_XTRMD2_VALID_S = 12,
+ IAVF_RX_FLEX_DESC_STATUS1_XTRMD3_VALID_S = 13,
+ IAVF_RX_FLEX_DESC_STATUS1_XTRMD4_VALID_S = 14,
+ IAVF_RX_FLEX_DESC_STATUS1_XTRMD5_VALID_S = 15,
+ IAVF_RX_FLEX_DESC_STATUS1_LAST /* this entry must be last!!! */
+};
+
/* for iavf_32b_rx_flex_desc.ptype_flex_flags0 member */
#define IAVF_RX_FLEX_DESC_PTYPE_M (0x3FF) /* 10-bits */
@@ -438,6 +462,8 @@ int iavf_tx_vec_dev_check(struct rte_eth_dev *dev);
int iavf_rxq_vec_setup(struct iavf_rx_queue *rxq);
int iavf_txq_vec_setup(struct iavf_tx_queue *txq);
+uint8_t iavf_proto_xtr_type_to_rxdid(uint8_t xtr_type);
+
const uint32_t *iavf_get_default_ptype_table(void);
static inline
diff --git a/drivers/net/iavf/iavf_rxtx_vec_common.h b/drivers/net/iavf/iavf_rxtx_vec_common.h
index 25bb502de..7ad1e0f68 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_common.h
+++ b/drivers/net/iavf/iavf_rxtx_vec_common.h
@@ -224,6 +224,9 @@ iavf_rx_vec_queue_default(struct iavf_rx_queue *rxq)
if (rxq->nb_rx_desc % rxq->rx_free_thresh)
return -1;
+ if (rxq->proto_xtr != IAVF_PROTO_XTR_NONE)
+ return -1;
+
return 0;
}
diff --git a/drivers/net/iavf/iavf_vchnl.c b/drivers/net/iavf/iavf_vchnl.c
index 76f8e38d1..7981dfa30 100644
--- a/drivers/net/iavf/iavf_vchnl.c
+++ b/drivers/net/iavf/iavf_vchnl.c
@@ -647,25 +647,27 @@ iavf_configure_queues(struct iavf_adapter *adapter)
#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
if (vf->vf_res->vf_cap_flags &
- VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC &&
- vf->supported_rxdid & BIT(IAVF_RXDID_COMMS_OVS_1)) {
- vc_qp->rxq.rxdid = IAVF_RXDID_COMMS_OVS_1;
- PMD_DRV_LOG(NOTICE, "request RXDID == %d in "
- "Queue[%d]", vc_qp->rxq.rxdid, i);
+ VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC &&
+ vf->supported_rxdid & BIT(rxq[i]->rxdid)) {
+ vc_qp->rxq.rxdid = rxq[i]->rxdid;
+ PMD_DRV_LOG(NOTICE, "request RXDID[%d] in Queue[%d]",
+ vc_qp->rxq.rxdid, i);
} else {
+ PMD_DRV_LOG(NOTICE, "RXDID[%d] is not supported, "
+ "request default RXDID[%d] in Queue[%d]",
+ rxq[i]->rxdid, IAVF_RXDID_LEGACY_1, i);
vc_qp->rxq.rxdid = IAVF_RXDID_LEGACY_1;
- PMD_DRV_LOG(NOTICE, "request RXDID == %d in "
- "Queue[%d]", vc_qp->rxq.rxdid, i);
}
#else
if (vf->vf_res->vf_cap_flags &
VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC &&
vf->supported_rxdid & BIT(IAVF_RXDID_LEGACY_0)) {
vc_qp->rxq.rxdid = IAVF_RXDID_LEGACY_0;
- PMD_DRV_LOG(NOTICE, "request RXDID == %d in "
- "Queue[%d]", vc_qp->rxq.rxdid, i);
+ PMD_DRV_LOG(NOTICE, "request RXDID[%d] in Queue[%d]",
+ vc_qp->rxq.rxdid, i);
} else {
- PMD_DRV_LOG(ERR, "RXDID == 0 is not supported");
+ PMD_DRV_LOG(ERR, "RXDID[%d] is not supported",
+ IAVF_RXDID_LEGACY_0);
return -1;
}
#endif
diff --git a/drivers/net/iavf/meson.build b/drivers/net/iavf/meson.build
index 33407c503..c1c74571a 100644
--- a/drivers/net/iavf/meson.build
+++ b/drivers/net/iavf/meson.build
@@ -35,3 +35,5 @@ if arch_subdir == 'x86'
objs += iavf_avx2_lib.extract_objects('iavf_rxtx_vec_avx2.c')
endif
endif
+
+install_headers('rte_pmd_iavf.h')
diff --git a/drivers/net/iavf/rte_pmd_iavf.h b/drivers/net/iavf/rte_pmd_iavf.h
new file mode 100644
index 000000000..5e41568c3
--- /dev/null
+++ b/drivers/net/iavf/rte_pmd_iavf.h
@@ -0,0 +1,250 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#ifndef _RTE_PMD_IAVF_H_
+#define _RTE_PMD_IAVF_H_
+
+/**
+ * @file rte_pmd_iavf.h
+ *
+ * iavf PMD specific functions.
+ *
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
+ *
+ */
+
+#include <stdio.h>
+#include <rte_mbuf.h>
+#include <rte_mbuf_dyn.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * The supported network flexible descriptor's extraction metadata format.
+ */
+union rte_net_iavf_proto_xtr_metadata {
+ uint32_t metadata;
+
+ struct {
+ uint16_t data0;
+ uint16_t data1;
+ } raw;
+
+ struct {
+ uint16_t stag_vid:12,
+ stag_dei:1,
+ stag_pcp:3;
+ uint16_t ctag_vid:12,
+ ctag_dei:1,
+ ctag_pcp:3;
+ } vlan;
+
+ struct {
+ uint16_t protocol:8,
+ ttl:8;
+ uint16_t tos:8,
+ ihl:4,
+ version:4;
+ } ipv4;
+
+ struct {
+ uint16_t hoplimit:8,
+ nexthdr:8;
+ uint16_t flowhi4:4,
+ tc:8,
+ version:4;
+ } ipv6;
+
+ struct {
+ uint16_t flowlo16;
+ uint16_t flowhi4:4,
+ tc:8,
+ version:4;
+ } ipv6_flow;
+
+ struct {
+ uint16_t fin:1,
+ syn:1,
+ rst:1,
+ psh:1,
+ ack:1,
+ urg:1,
+ ece:1,
+ cwr:1,
+ res1:4,
+ doff:4;
+ uint16_t rsvd;
+ } tcp;
+
+ uint32_t ip_ofs;
+};
+
+/* Offset of mbuf dynamic field for flexible descriptor's extraction data */
+extern int rte_net_iavf_dynfield_proto_xtr_metadata_offs;
+
+/* Mask of mbuf dynamic flags for flexible descriptor's extraction type */
+extern uint64_t rte_net_iavf_dynflag_proto_xtr_vlan_mask;
+extern uint64_t rte_net_iavf_dynflag_proto_xtr_ipv4_mask;
+extern uint64_t rte_net_iavf_dynflag_proto_xtr_ipv6_mask;
+extern uint64_t rte_net_iavf_dynflag_proto_xtr_ipv6_flow_mask;
+extern uint64_t rte_net_iavf_dynflag_proto_xtr_tcp_mask;
+extern uint64_t rte_net_iavf_dynflag_proto_xtr_ip_offset_mask;
+
+/**
+ * The mbuf dynamic field pointer for flexible descriptor's extraction metadata.
+ */
+#define RTE_NET_IAVF_DYNF_PROTO_XTR_METADATA(m) \
+ RTE_MBUF_DYNFIELD((m), \
+ rte_net_iavf_dynfield_proto_xtr_metadata_offs, \
+ uint32_t *)
+
+/**
+ * The mbuf dynamic flag for VLAN protocol extraction metadata; it is valid
+ * when devargs 'proto_xtr' has 'vlan' specified.
+ */
+#define RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_VLAN \
+ (rte_net_iavf_dynflag_proto_xtr_vlan_mask)
+
+/**
+ * The mbuf dynamic flag for IPv4 protocol extraction metadata; it is valid
+ * when devargs 'proto_xtr' has 'ipv4' specified.
+ */
+#define RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_IPV4 \
+ (rte_net_iavf_dynflag_proto_xtr_ipv4_mask)
+
+/**
+ * The mbuf dynamic flag for IPv6 protocol extraction metadata; it is valid
+ * when devargs 'proto_xtr' has 'ipv6' specified.
+ */
+#define RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_IPV6 \
+ (rte_net_iavf_dynflag_proto_xtr_ipv6_mask)
+
+/**
+ * The mbuf dynamic flag for IPv6 with flow protocol extraction metadata; it is
+ * valid when devargs 'proto_xtr' has 'ipv6_flow' specified.
+ */
+#define RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_IPV6_FLOW \
+ (rte_net_iavf_dynflag_proto_xtr_ipv6_flow_mask)
+
+/**
+ * The mbuf dynamic flag for TCP protocol extraction metadata; it is valid
+ * when devargs 'proto_xtr' has 'tcp' specified.
+ */
+#define RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_TCP \
+ (rte_net_iavf_dynflag_proto_xtr_tcp_mask)
+
+/**
+ * The mbuf dynamic flag for IP_OFFSET extraction metadata; it is valid
+ * when devargs 'proto_xtr' has 'ip_offset' specified.
+ */
+#define RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_IP_OFFSET \
+ (rte_net_iavf_dynflag_proto_xtr_ip_offset_mask)
+
+/**
+ * Check if mbuf dynamic field for flexible descriptor's extraction metadata
+ * is registered.
+ *
+ * @return
+ * True if registered, false otherwise.
+ */
+__rte_experimental
+static __rte_always_inline int
+rte_net_iavf_dynf_proto_xtr_metadata_avail(void)
+{
+ return rte_net_iavf_dynfield_proto_xtr_metadata_offs != -1;
+}
+
+/**
+ * Get the mbuf dynamic field for flexible descriptor's extraction metadata.
+ *
+ * @param m
+ * The pointer to the mbuf.
+ * @return
+ * The saved protocol extraction metadata.
+ */
+__rte_experimental
+static __rte_always_inline uint32_t
+rte_net_iavf_dynf_proto_xtr_metadata_get(struct rte_mbuf *m)
+{
+ return *RTE_NET_IAVF_DYNF_PROTO_XTR_METADATA(m);
+}
+
+/**
+ * Dump the mbuf dynamic field for flexible descriptor's extraction metadata.
+ *
+ * @param m
+ * The pointer to the mbuf.
+ */
+__rte_experimental
+static inline void
+rte_net_iavf_dump_proto_xtr_metadata(struct rte_mbuf *m)
+{
+ union rte_net_iavf_proto_xtr_metadata data;
+
+ if (!rte_net_iavf_dynf_proto_xtr_metadata_avail())
+ return;
+
+ data.metadata = rte_net_iavf_dynf_proto_xtr_metadata_get(m);
+
+ if (m->ol_flags & RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_VLAN)
+ printf(" - Flexible descriptor's Metadata: [0x%04x:0x%04x],"
+ "vlan,stag=%u:%u:%u,ctag=%u:%u:%u",
+ data.raw.data0, data.raw.data1,
+ data.vlan.stag_pcp,
+ data.vlan.stag_dei,
+ data.vlan.stag_vid,
+ data.vlan.ctag_pcp,
+ data.vlan.ctag_dei,
+ data.vlan.ctag_vid);
+ else if (m->ol_flags & RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_IPV4)
+ printf(" - Flexible descriptor's Metadata: [0x%04x:0x%04x],"
+ "ipv4,ver=%u,hdrlen=%u,tos=%u,ttl=%u,proto=%u",
+ data.raw.data0, data.raw.data1,
+ data.ipv4.version,
+ data.ipv4.ihl,
+ data.ipv4.tos,
+ data.ipv4.ttl,
+ data.ipv4.protocol);
+ else if (m->ol_flags & RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_IPV6)
+ printf(" - Flexible descriptor's Metadata: [0x%04x:0x%04x],"
+ "ipv6,ver=%u,tc=%u,flow_hi4=0x%x,nexthdr=%u,hoplimit=%u",
+ data.raw.data0, data.raw.data1,
+ data.ipv6.version,
+ data.ipv6.tc,
+ data.ipv6.flowhi4,
+ data.ipv6.nexthdr,
+ data.ipv6.hoplimit);
+ else if (m->ol_flags & RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_IPV6_FLOW)
+ printf(" - Flexible descriptor's Metadata: [0x%04x:0x%04x],"
+ "ipv6_flow,ver=%u,tc=%u,flow=0x%x%04x",
+ data.raw.data0, data.raw.data1,
+ data.ipv6_flow.version,
+ data.ipv6_flow.tc,
+ data.ipv6_flow.flowhi4,
+ data.ipv6_flow.flowlo16);
+ else if (m->ol_flags & RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_TCP)
+ printf(" - Flexible descriptor's Metadata: [0x%04x:0x%04x],"
+ "tcp,doff=%u,flags=%s%s%s%s%s%s%s%s",
+ data.raw.data0, data.raw.data1,
+ data.tcp.doff,
+ data.tcp.cwr ? "C" : "",
+ data.tcp.ece ? "E" : "",
+ data.tcp.urg ? "U" : "",
+ data.tcp.ack ? "A" : "",
+ data.tcp.psh ? "P" : "",
+ data.tcp.rst ? "R" : "",
+ data.tcp.syn ? "S" : "",
+ data.tcp.fin ? "F" : "");
+ else if (m->ol_flags & RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_IP_OFFSET)
+ printf(" - Flexible descriptor's Extraction: ip_offset=%u",
+ data.ip_ofs);
+}
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_PMD_IAVF_H_ */
diff --git a/drivers/net/iavf/rte_pmd_iavf_version.map b/drivers/net/iavf/rte_pmd_iavf_version.map
index 4a76d1d52..d7afd31d1 100644
--- a/drivers/net/iavf/rte_pmd_iavf_version.map
+++ b/drivers/net/iavf/rte_pmd_iavf_version.map
@@ -1,3 +1,16 @@
DPDK_21 {
local: *;
};
+
+EXPERIMENTAL {
+ global:
+
+ # added in 20.11
+ rte_net_iavf_dynfield_proto_xtr_metadata_offs;
+ rte_net_iavf_dynflag_proto_xtr_vlan_mask;
+ rte_net_iavf_dynflag_proto_xtr_ipv4_mask;
+ rte_net_iavf_dynflag_proto_xtr_ipv6_mask;
+ rte_net_iavf_dynflag_proto_xtr_ipv6_flow_mask;
+ rte_net_iavf_dynflag_proto_xtr_tcp_mask;
+ rte_net_iavf_dynflag_proto_xtr_ip_offset_mask;
+};
--
2.20.1
^ permalink raw reply [flat|nested] 40+ messages in thread
* Re: [dpdk-dev] [PATCH v6] net/iavf: support flex desc metadata extraction
2020-09-27 2:08 ` [dpdk-dev] [PATCH v6] " Jeff Guo
@ 2020-09-27 3:00 ` Zhang, Qi Z
2020-09-28 15:59 ` Ferruh Yigit
1 sibling, 0 replies; 40+ messages in thread
From: Zhang, Qi Z @ 2020-09-27 3:00 UTC (permalink / raw)
To: Guo, Jia, Wu, Jingjing, Xing, Beilei; +Cc: dev, Wang, Haiyue
> -----Original Message-----
> From: Guo, Jia <jia.guo@intel.com>
> Sent: Sunday, September 27, 2020 10:09 AM
> To: Wu, Jingjing <jingjing.wu@intel.com>; Zhang, Qi Z <qi.z.zhang@intel.com>;
> Xing, Beilei <beilei.xing@intel.com>
> Cc: dev@dpdk.org; Wang, Haiyue <haiyue.wang@intel.com>; Guo, Jia
> <jia.guo@intel.com>
> Subject: [PATCH v6] net/iavf: support flex desc metadata extraction
>
> Enable metadata extraction for flexible descriptors in AVF, that would allow
> network function directly get metadata without additional parsing which
> would reduce the CPU cost for VFs. The enabling metadata extractions involve
> the metadata of VLAN/IPv4/IPv6/IPv6-FLOW/TCP/MPLS flexible descriptors,
> and the VF could negotiate the capability of the flexible descriptor with PF and
> correspondingly configure the specific offload at receiving queues.
>
> Signed-off-by: Jeff Guo <jia.guo@intel.com>
> Acked-by: Haiyue Wang <haiyue.wang@intel.com>
Applied to dpdk-next-net-intel.
Thanks
Qi
* Re: [dpdk-dev] [PATCH v6] net/iavf: support flex desc metadata extraction
2020-09-27 2:08 ` [dpdk-dev] [PATCH v6] " Jeff Guo
2020-09-27 3:00 ` Zhang, Qi Z
@ 2020-09-28 15:59 ` Ferruh Yigit
2020-09-28 16:17 ` Wang, Haiyue
2020-09-29 2:27 ` Guo, Jia
1 sibling, 2 replies; 40+ messages in thread
From: Ferruh Yigit @ 2020-09-28 15:59 UTC (permalink / raw)
To: Jeff Guo, jingjing.wu, qi.z.zhang, beilei.xing; +Cc: dev, haiyue.wang
On 9/27/2020 3:08 AM, Jeff Guo wrote:
> Enable metadata extraction for flexible descriptors in AVF, that would
> allow network function directly get metadata without additional parsing
> which would reduce the CPU cost for VFs. The enabling metadata
> extractions involve the metadata of VLAN/IPv4/IPv6/IPv6-FLOW/TCP/MPLS
> flexible descriptors, and the VF could negotiate the capability of
> the flexible descriptor with PF and correspondingly configure the
> specific offload at receiving queues.
>
> Signed-off-by: Jeff Guo <jia.guo@intel.com>
> Acked-by: Haiyue Wang <haiyue.wang@intel.com>
<...>
> +/* Rx L3/L4 checksum */
> +static inline uint64_t
> +iavf_rxd_error_to_pkt_flags(uint16_t stat_err0)
> +{
> + uint64_t flags = 0;
> +
> + /* check if HW has decoded the packet and checksum */
> + if (unlikely(!(stat_err0 & (1 << IAVF_RX_FLEX_DESC_STATUS0_L3L4P_S))))
> + return 0;
> +
> + if (likely(!(stat_err0 & IAVF_RX_FLEX_ERR0_BITS))) {
> + flags |= (PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD);
> + return flags;
> + }
> +
> + if (unlikely(stat_err0 & (1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_IPE_S)))
> + flags |= PKT_RX_IP_CKSUM_BAD;
> + else
> + flags |= PKT_RX_IP_CKSUM_GOOD;
> +
> + if (unlikely(stat_err0 & (1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_L4E_S)))
> + flags |= PKT_RX_L4_CKSUM_BAD;
> + else
> + flags |= PKT_RX_L4_CKSUM_GOOD;
> +
> + if (unlikely(stat_err0 & (1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_EIPE_S)))
> + flags |= PKT_RX_EIP_CKSUM_BAD;
> +
> + return flags;
> +}
Is this static inline function used anywhere? If not, can we delete it?
> +
> static inline void
> iavf_rxd_to_vlan_tci(struct rte_mbuf *mb, volatile union iavf_rx_desc *rxdp)
> {
> @@ -740,6 +967,21 @@ iavf_flex_rxd_to_vlan_tci(struct rte_mbuf *mb,
> } else {
> mb->vlan_tci = 0;
> }
> +
> +#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
> + if (rte_le_to_cpu_16(rxdp->wb.status_error1) &
> + (1 << IAVF_RX_FLEX_DESC_STATUS1_L2TAG2P_S)) {
> + mb->ol_flags |= PKT_RX_QINQ_STRIPPED | PKT_RX_QINQ |
> + PKT_RX_VLAN_STRIPPED | PKT_RX_VLAN;
> + mb->vlan_tci_outer = mb->vlan_tci;
> + mb->vlan_tci = rte_le_to_cpu_16(rxdp->wb.l2tag2_2nd);
> + PMD_RX_LOG(DEBUG, "Descriptor l2tag2_1: %u, l2tag2_2: %u",
> + rte_le_to_cpu_16(rxdp->wb.l2tag2_1st),
> + rte_le_to_cpu_16(rxdp->wb.l2tag2_2nd));
> + } else {
> + mb->vlan_tci_outer = 0;
> + }
> +#endif
How is this 'RTE_LIBRTE_IAVF_16BYTE_RX_DESC' controlled with meson?
Also, is it mentioned in any driver documentation?
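[Editor's note: as background to the question above — when a compile-time macro such as RTE_LIBRTE_IAVF_16BYTE_RX_DESC is not exposed as a dedicated meson build option, the conventional way to enable it is through meson's built-in `c_args` option. This is a sketch of the assumed workflow, not something this patch wires up.]

```shell
# Define the macro for the whole DPDK build via meson's generic
# c_args option (assumed workflow; the macro name is taken from
# the driver code above).
meson setup build -Dc_args=-DRTE_LIBRTE_IAVF_16BYTE_RX_DESC
ninja -C build
```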
* Re: [dpdk-dev] [PATCH v6] net/iavf: support flex desc metadata extraction
2020-09-28 15:59 ` Ferruh Yigit
@ 2020-09-28 16:17 ` Wang, Haiyue
2020-09-28 16:21 ` Bruce Richardson
2020-09-29 2:27 ` Guo, Jia
1 sibling, 1 reply; 40+ messages in thread
From: Wang, Haiyue @ 2020-09-28 16:17 UTC (permalink / raw)
To: Yigit, Ferruh, Guo, Jia, Wu, Jingjing, Zhang, Qi Z, Xing, Beilei,
Richardson, Bruce
Cc: dev
+ Bruce
> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@intel.com>
> Sent: Tuesday, September 29, 2020 00:00
> To: Guo, Jia <jia.guo@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>; Zhang, Qi Z
> <qi.z.zhang@intel.com>; Xing, Beilei <beilei.xing@intel.com>
> Cc: dev@dpdk.org; Wang, Haiyue <haiyue.wang@intel.com>
> Subject: Re: [dpdk-dev] [PATCH v6] net/iavf: support flex desc metadata extraction
>
> On 9/27/2020 3:08 AM, Jeff Guo wrote:
> > Enable metadata extraction for flexible descriptors in AVF, that would
> > allow network function directly get metadata without additional parsing
> > which would reduce the CPU cost for VFs. The enabling metadata
> > extractions involve the metadata of VLAN/IPv4/IPv6/IPv6-FLOW/TCP/MPLS
> > flexible descriptors, and the VF could negotiate the capability of
> > the flexible descriptor with PF and correspondingly configure the
> > specific offload at receiving queues.
> >
> > Signed-off-by: Jeff Guo <jia.guo@intel.com>
> > Acked-by: Haiyue Wang <haiyue.wang@intel.com>
>
> <...>
>
> > +/* Rx L3/L4 checksum */
> > +static inline uint64_t
> > +iavf_rxd_error_to_pkt_flags(uint16_t stat_err0)
> > +{
> > + uint64_t flags = 0;
> > +
> > + /* check if HW has decoded the packet and checksum */
> > + if (unlikely(!(stat_err0 & (1 << IAVF_RX_FLEX_DESC_STATUS0_L3L4P_S))))
> > + return 0;
> > +
> > + if (likely(!(stat_err0 & IAVF_RX_FLEX_ERR0_BITS))) {
> > + flags |= (PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD);
> > + return flags;
> > + }
> > +
> > + if (unlikely(stat_err0 & (1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_IPE_S)))
> > + flags |= PKT_RX_IP_CKSUM_BAD;
> > + else
> > + flags |= PKT_RX_IP_CKSUM_GOOD;
> > +
> > + if (unlikely(stat_err0 & (1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_L4E_S)))
> > + flags |= PKT_RX_L4_CKSUM_BAD;
> > + else
> > + flags |= PKT_RX_L4_CKSUM_GOOD;
> > +
> > + if (unlikely(stat_err0 & (1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_EIPE_S)))
> > + flags |= PKT_RX_EIP_CKSUM_BAD;
> > +
> > + return flags;
> > +}
>
> Is this static inline function used anywhere? If not, can we delete it?
>
It duplicates iavf_flex_rxd_error_to_pkt_flags.
It looks like the meson/gcc build missed catching this with [-Werror,-Wunused-function]:
http://mails.dpdk.org/archives/test-report/2020-September/154839.html
* Re: [dpdk-dev] [PATCH v6] net/iavf: support flex desc metadata extraction
2020-09-28 16:17 ` Wang, Haiyue
@ 2020-09-28 16:21 ` Bruce Richardson
2020-09-28 16:29 ` Wang, Haiyue
0 siblings, 1 reply; 40+ messages in thread
From: Bruce Richardson @ 2020-09-28 16:21 UTC (permalink / raw)
To: Wang, Haiyue
Cc: Yigit, Ferruh, Guo, Jia, Wu, Jingjing, Zhang, Qi Z, Xing, Beilei, dev
On Mon, Sep 28, 2020 at 05:17:24PM +0100, Wang, Haiyue wrote:
> + Bruce
>
> > -----Original Message-----
> > From: Ferruh Yigit <ferruh.yigit@intel.com>
> > Sent: Tuesday, September 29, 2020 00:00
> > To: Guo, Jia <jia.guo@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>; Zhang, Qi Z
> > <qi.z.zhang@intel.com>; Xing, Beilei <beilei.xing@intel.com>
> > Cc: dev@dpdk.org; Wang, Haiyue <haiyue.wang@intel.com>
> > Subject: Re: [dpdk-dev] [PATCH v6] net/iavf: support flex desc metadata extraction
> >
> > On 9/27/2020 3:08 AM, Jeff Guo wrote:
> > > Enable metadata extraction for flexible descriptors in AVF, that would
> > > allow network function directly get metadata without additional parsing
> > > which would reduce the CPU cost for VFs. The enabling metadata
> > > extractions involve the metadata of VLAN/IPv4/IPv6/IPv6-FLOW/TCP/MPLS
> > > flexible descriptors, and the VF could negotiate the capability of
> > > the flexible descriptor with PF and correspondingly configure the
> > > specific offload at receiving queues.
> > >
> > > Signed-off-by: Jeff Guo <jia.guo@intel.com>
> > > Acked-by: Haiyue Wang <haiyue.wang@intel.com>
> >
> > <...>
> >
> > > +/* Rx L3/L4 checksum */
> > > +static inline uint64_t
> > > +iavf_rxd_error_to_pkt_flags(uint16_t stat_err0)
> > > +{
> > > +	uint64_t flags = 0;
> > > +
> > > +	/* check if HW has decoded the packet and checksum */
> > > +	if (unlikely(!(stat_err0 & (1 << IAVF_RX_FLEX_DESC_STATUS0_L3L4P_S))))
> > > +		return 0;
> > > +
> > > +	if (likely(!(stat_err0 & IAVF_RX_FLEX_ERR0_BITS))) {
> > > +		flags |= (PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD);
> > > +		return flags;
> > > +	}
> > > +
> > > +	if (unlikely(stat_err0 & (1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_IPE_S)))
> > > +		flags |= PKT_RX_IP_CKSUM_BAD;
> > > +	else
> > > +		flags |= PKT_RX_IP_CKSUM_GOOD;
> > > +
> > > +	if (unlikely(stat_err0 & (1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_L4E_S)))
> > > +		flags |= PKT_RX_L4_CKSUM_BAD;
> > > +	else
> > > +		flags |= PKT_RX_L4_CKSUM_GOOD;
> > > +
> > > +	if (unlikely(stat_err0 & (1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_EIPE_S)))
> > > +		flags |= PKT_RX_EIP_CKSUM_BAD;
> > > +
> > > +	return flags;
> > > +}
> >
> > Is this static inline function used anywhere? If not can we delete it?
> >
>
> The same function as iavf_flex_rxd_error_to_pkt_flags.
>
> Looks like meson/gcc missed this [-Werror,-Wunused-function] capturing.
> http://mails.dpdk.org/archives/test-report/2020-September/154839.html
>
AFAIK, unused static functions get a warning about being unused, but static inline
functions don't. Unless you are defining a function in a header file, I'd
recommend omitting the "inline" part so you do get unused-function errors.
/Bruce
^ permalink raw reply [flat|nested] 40+ messages in thread
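Bruce's distinction above is easy to reproduce with GCC: -Wunused-function fires for an unused plain `static` function but stays silent for a `static inline` one. A minimal sketch (an illustrative standalone file, not part of the patch):

```c
/* Compile with: gcc -Wall -Wunused-function -c unused.c
 *
 * GCC warns for an unused plain "static" function but, by design,
 * not for an unused "static inline" helper.
 */

/* Warns: 'never_called' defined but not used [-Wunused-function] */
static int never_called(int x)
{
	return x + 1;
}

/* No warning: GCC treats unused "static inline" helpers as acceptable,
 * since header-defined inline helpers are often unused in a given TU. */
static inline int also_never_called(int x)
{
	return x + 2;
}
```

This is why dropping `inline` from `iavf_rxd_error_to_pkt_flags` (as shown later in the thread) made the warning appear.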
* Re: [dpdk-dev] [PATCH v6] net/iavf: support flex desc metadata extraction
2020-09-28 16:21 ` Bruce Richardson
@ 2020-09-28 16:29 ` Wang, Haiyue
0 siblings, 0 replies; 40+ messages in thread
From: Wang, Haiyue @ 2020-09-28 16:29 UTC (permalink / raw)
To: Richardson, Bruce
Cc: Yigit, Ferruh, Guo, Jia, Wu, Jingjing, Zhang, Qi Z, Xing, Beilei, dev
> -----Original Message-----
> From: Bruce Richardson <bruce.richardson@intel.com>
> Sent: Tuesday, September 29, 2020 00:22
> To: Wang, Haiyue <haiyue.wang@intel.com>
> Cc: Yigit, Ferruh <ferruh.yigit@intel.com>; Guo, Jia <jia.guo@intel.com>; Wu, Jingjing
> <jingjing.wu@intel.com>; Zhang, Qi Z <qi.z.zhang@intel.com>; Xing, Beilei <beilei.xing@intel.com>;
> dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH v6] net/iavf: support flex desc metadata extraction
>
> On Mon, Sep 28, 2020 at 05:17:24PM +0100, Wang, Haiyue wrote:
> > + Bruce
> >
> > > -----Original Message-----
> > > From: Ferruh Yigit <ferruh.yigit@intel.com>
> > > Sent: Tuesday, September 29, 2020 00:00
> > > To: Guo, Jia <jia.guo@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>; Zhang, Qi Z
> > > <qi.z.zhang@intel.com>; Xing, Beilei <beilei.xing@intel.com>
> > > Cc: dev@dpdk.org; Wang, Haiyue <haiyue.wang@intel.com>
> > > Subject: Re: [dpdk-dev] [PATCH v6] net/iavf: support flex desc metadata extraction
> > >
> > > On 9/27/2020 3:08 AM, Jeff Guo wrote:
> > > > Enable metadata extraction for flexible descriptors in AVF, that would
> > > > allow network function directly get metadata without additional parsing
> > > > which would reduce the CPU cost for VFs. The enabling metadata
> > > > extractions involve the metadata of VLAN/IPv4/IPv6/IPv6-FLOW/TCP/MPLS
> > > > flexible descriptors, and the VF could negotiate the capability of
> > > > the flexible descriptor with PF and correspondingly configure the
> > > > specific offload at receiving queues.
> > > >
> > > > Signed-off-by: Jeff Guo <jia.guo@intel.com>
> > > > Acked-by: Haiyue Wang <haiyue.wang@intel.com>
> > >
> > > <...>
> > >
> > > > +/* Rx L3/L4 checksum */
> > > > +static inline uint64_t
> > > > +iavf_rxd_error_to_pkt_flags(uint16_t stat_err0)
> > > > +{
> > > > +	uint64_t flags = 0;
> > > > +
> > > > +	/* check if HW has decoded the packet and checksum */
> > > > +	if (unlikely(!(stat_err0 & (1 << IAVF_RX_FLEX_DESC_STATUS0_L3L4P_S))))
> > > > +		return 0;
> > > > +
> > > > +	if (likely(!(stat_err0 & IAVF_RX_FLEX_ERR0_BITS))) {
> > > > +		flags |= (PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD);
> > > > +		return flags;
> > > > +	}
> > > > +
> > > > +	if (unlikely(stat_err0 & (1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_IPE_S)))
> > > > +		flags |= PKT_RX_IP_CKSUM_BAD;
> > > > +	else
> > > > +		flags |= PKT_RX_IP_CKSUM_GOOD;
> > > > +
> > > > +	if (unlikely(stat_err0 & (1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_L4E_S)))
> > > > +		flags |= PKT_RX_L4_CKSUM_BAD;
> > > > +	else
> > > > +		flags |= PKT_RX_L4_CKSUM_GOOD;
> > > > +
> > > > +	if (unlikely(stat_err0 & (1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_EIPE_S)))
> > > > +		flags |= PKT_RX_EIP_CKSUM_BAD;
> > > > +
> > > > +	return flags;
> > > > +}
> > >
> > > Is this static inline function used anywhere? If not can we delete it?
> > >
> >
> > The same function as iavf_flex_rxd_error_to_pkt_flags.
> >
> > Looks like meson/gcc missed this [-Werror,-Wunused-function] capturing.
> > http://mails.dpdk.org/archives/test-report/2020-September/154839.html
> >
> AFAIK, unused static functions get a warning about being unused, static inline
> functions don't. Unless you are defining a function in a header files, I'd
> recommend omitting the "inline" part so you do get unused function errors.
>
Thanks, Bruce.
Yes, after removing the 'inline' keyword, I got the warning message:
[1315/2332] Compiling C object 'drivers/drivers@@tmp_rte_pmd_iavf@sta/net_iavf_iavf_rxtx.c.o'.
../drivers/net/iavf/iavf_rxtx.c:916:1: warning: 'iavf_rxd_error_to_pkt_flags' defined but not used [-Wunused-function]
916 | iavf_rxd_error_to_pkt_flags(uint16_t stat_err0)
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~
Not sure how clang handles 'static inline' versus plain 'static'. Anyway, code review
needs to be enhanced. ;-)
> /Bruce
^ permalink raw reply [flat|nested] 40+ messages in thread
* Re: [dpdk-dev] [PATCH v6] net/iavf: support flex desc metadata extraction
2020-09-28 15:59 ` Ferruh Yigit
2020-09-28 16:17 ` Wang, Haiyue
@ 2020-09-29 2:27 ` Guo, Jia
1 sibling, 0 replies; 40+ messages in thread
From: Guo, Jia @ 2020-09-29 2:27 UTC (permalink / raw)
To: Yigit, Ferruh, Wu, Jingjing, Zhang, Qi Z, Xing, Beilei; +Cc: dev, Wang, Haiyue
> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@intel.com>
> Sent: Tuesday, September 29, 2020 12:00 AM
> To: Guo, Jia <jia.guo@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>;
> Zhang, Qi Z <qi.z.zhang@intel.com>; Xing, Beilei <beilei.xing@intel.com>
> Cc: dev@dpdk.org; Wang, Haiyue <haiyue.wang@intel.com>
> Subject: Re: [dpdk-dev] [PATCH v6] net/iavf: support flex desc metadata
> extraction
>
> On 9/27/2020 3:08 AM, Jeff Guo wrote:
> > Enable metadata extraction for flexible descriptors in AVF, that would
> > allow network function directly get metadata without additional
> > parsing which would reduce the CPU cost for VFs. The enabling metadata
> > extractions involve the metadata of VLAN/IPv4/IPv6/IPv6-
> FLOW/TCP/MPLS
> > flexible descriptors, and the VF could negotiate the capability of the
> > flexible descriptor with PF and correspondingly configure the specific
> > offload at receiving queues.
> >
> > Signed-off-by: Jeff Guo <jia.guo@intel.com>
> > Acked-by: Haiyue Wang <haiyue.wang@intel.com>
>
> <...>
>
> > +/* Rx L3/L4 checksum */
> > +static inline uint64_t
> > +iavf_rxd_error_to_pkt_flags(uint16_t stat_err0)
> > +{
> > +	uint64_t flags = 0;
> > +
> > +	/* check if HW has decoded the packet and checksum */
> > +	if (unlikely(!(stat_err0 & (1 << IAVF_RX_FLEX_DESC_STATUS0_L3L4P_S))))
> > +		return 0;
> > +
> > +	if (likely(!(stat_err0 & IAVF_RX_FLEX_ERR0_BITS))) {
> > +		flags |= (PKT_RX_IP_CKSUM_GOOD | PKT_RX_L4_CKSUM_GOOD);
> > +		return flags;
> > +	}
> > +
> > +	if (unlikely(stat_err0 & (1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_IPE_S)))
> > +		flags |= PKT_RX_IP_CKSUM_BAD;
> > +	else
> > +		flags |= PKT_RX_IP_CKSUM_GOOD;
> > +
> > +	if (unlikely(stat_err0 & (1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_L4E_S)))
> > +		flags |= PKT_RX_L4_CKSUM_BAD;
> > +	else
> > +		flags |= PKT_RX_L4_CKSUM_GOOD;
> > +
> > +	if (unlikely(stat_err0 & (1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_EIPE_S)))
> > +		flags |= PKT_RX_EIP_CKSUM_BAD;
> > +
> > +	return flags;
> > +}
>
> Is this static inline function used anywhere? If not can we delete it?
>
Oh, sorry, that is a mistake; it can and should be deleted. Thanks, Ferruh.
> > +
> > static inline void
> > iavf_rxd_to_vlan_tci(struct rte_mbuf *mb, volatile union iavf_rx_desc *rxdp)
> > {
> > @@ -740,6 +967,21 @@ iavf_flex_rxd_to_vlan_tci(struct rte_mbuf *mb,
> > 	} else {
> > 		mb->vlan_tci = 0;
> > 	}
> > +
> > +#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
> > +	if (rte_le_to_cpu_16(rxdp->wb.status_error1) &
> > +	    (1 << IAVF_RX_FLEX_DESC_STATUS1_L2TAG2P_S)) {
> > +		mb->ol_flags |= PKT_RX_QINQ_STRIPPED | PKT_RX_QINQ |
> > +				PKT_RX_VLAN_STRIPPED | PKT_RX_VLAN;
> > +		mb->vlan_tci_outer = mb->vlan_tci;
> > +		mb->vlan_tci = rte_le_to_cpu_16(rxdp->wb.l2tag2_2nd);
> > +		PMD_RX_LOG(DEBUG, "Descriptor l2tag2_1: %u, l2tag2_2: %u",
> > +			   rte_le_to_cpu_16(rxdp->wb.l2tag2_1st),
> > +			   rte_le_to_cpu_16(rxdp->wb.l2tag2_2nd));
> > +	} else {
> > +		mb->vlan_tci_outer = 0;
> > +	}
> > +#endif
>
> How this 'RTE_LIBRTE_IAVF_16BYTE_RX_DESC' controlled with meson?
> Also is it mentioned in any driver documentation?
Oh, another thing I missed; the config option should be announced and documented. I will add that later.
^ permalink raw reply [flat|nested] 40+ messages in thread
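The `RTE_LIBRTE_IAVF_16BYTE_RX_DESC` toggle discussed above follows the usual `rte_config.h` pattern: the option is left `#undef` by default, so the fields that only exist in the 32-byte descriptor layout are compiled in under `#ifndef`. A minimal sketch of the pattern (the struct and field names here are illustrative, not the real iavf descriptor layout):

```c
#include <stddef.h>
#include <stdint.h>

/* In config/rte_config.h the option is left undefined by default:
 *   #undef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
 * so the #ifndef branch below is compiled in (32-byte descriptor). */

struct demo_rx_desc {
	uint64_t qword0;
	uint64_t qword1;
#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
	/* Extra write-back fields that only exist in the 32-byte layout */
	uint64_t qword2;
	uint64_t qword3;
#endif
};

/* Returns the descriptor size this build was configured for:
 * 32 bytes by default, 16 bytes when the option is defined. */
static size_t demo_rx_desc_size(void)
{
	return sizeof(struct demo_rx_desc);
}
```

Code that reads the extra fields (e.g. the QinQ handling quoted above) must sit under the same `#ifndef`, which is why the question about documenting how the option is controlled matters.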
* [dpdk-dev] [PATCH v7] net/iavf: support flex desc metadata extraction
2020-09-09 2:54 [dpdk-dev] [PATCH v1] net/iavf: support flex desc metadata extraction Jeff Guo
` (5 preceding siblings ...)
2020-09-27 2:08 ` [dpdk-dev] [PATCH v6] " Jeff Guo
@ 2020-09-29 6:10 ` Jeff Guo
2020-09-29 6:12 ` Jeff Guo
` (6 subsequent siblings)
13 siblings, 0 replies; 40+ messages in thread
From: Jeff Guo @ 2020-09-29 6:10 UTC (permalink / raw)
To: jingjing.wu, qi.z.zhang, beilei.xing; +Cc: dev, haiyue.wang, jia.guo
Enable metadata extraction for flexible descriptors in AVF. This allows
network functions to get metadata directly, without additional parsing,
which reduces the CPU cost for VFs. The enabled metadata extraction
covers the VLAN/IPv4/IPv6/IPv6-FLOW/TCP/MPLS flexible descriptors, and
the VF can negotiate flexible descriptor capability with the PF and
configure the corresponding offload on its receive queues.
Signed-off-by: Jeff Guo <jia.guo@intel.com>
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
---
v7:
clean up some unused code and add documentation
v6:
rebase the patch
v5:
remove the OVS configuration since OVS is not a protocol extraction type
v4:
add the flex desc type in the Rx queue for handling the vector path
handle the OVS flex type
v3:
export the global symbols into the .map file
v2:
remove the makefile change and modify the rxdid handling
---
config/rte_config.h | 3 +
doc/guides/nics/intel_vf.rst | 16 +
doc/guides/rel_notes/release_20_11.rst | 6 +
drivers/net/iavf/iavf.h | 24 +-
drivers/net/iavf/iavf_ethdev.c | 394 ++++++++++++++++++++++
drivers/net/iavf/iavf_rxtx.c | 252 ++++++++++++--
drivers/net/iavf/iavf_rxtx.h | 168 +++++----
drivers/net/iavf/iavf_rxtx_vec_common.h | 3 +
drivers/net/iavf/iavf_vchnl.c | 22 +-
drivers/net/iavf/meson.build | 2 +
drivers/net/iavf/rte_pmd_iavf.h | 250 ++++++++++++++
drivers/net/iavf/rte_pmd_iavf_version.map | 13 +
12 files changed, 1039 insertions(+), 114 deletions(-)
create mode 100644 drivers/net/iavf/rte_pmd_iavf.h
diff --git a/config/rte_config.h b/config/rte_config.h
index 0bae630fd..e6db2c840 100644
--- a/config/rte_config.h
+++ b/config/rte_config.h
@@ -124,6 +124,9 @@
#define RTE_LIBRTE_I40E_QUEUE_NUM_PER_VF 4
#define RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM 4
+/* iavf defines */
+#undef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
+
/* Ring net PMD settings */
#define RTE_PMD_RING_MAX_RX_RINGS 16
#define RTE_PMD_RING_MAX_TX_RINGS 16
diff --git a/doc/guides/nics/intel_vf.rst b/doc/guides/nics/intel_vf.rst
index ade515259..207f45614 100644
--- a/doc/guides/nics/intel_vf.rst
+++ b/doc/guides/nics/intel_vf.rst
@@ -615,3 +615,19 @@ which belongs to the destination VF on the VM.
.. figure:: img/inter_vm_comms.*
Inter-VM Communication
+
+
+Pre-Installation Configuration
+------------------------------
+
+Config File Options
+~~~~~~~~~~~~~~~~~~~
+
+The following options can be modified in the ``config`` file.
+Please note that enabling debugging options may affect system performance.
+
+- ``CONFIG_RTE_LIBRTE_IAVF_16BYTE_RX_DESC`` (default ``n``)
+
+ Toggle the use of a 16-byte Rx descriptor; by default the Rx descriptor is 32 bytes.
+ Configuring a 16-byte Rx descriptor may cause a negotiation failure during VF driver
+ initialization if the PF driver does not support it.
diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index 4bcf220c3..96d8c1448 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -84,6 +84,12 @@ New Features
* Added support for 200G PAM4 link speed.
+* **Updated Intel iavf driver.**
+
+ Updated iavf PMD with new features and improvements, including:
+
+ * Added support for flexible descriptor metadata extraction.
+
Removed Items
-------------
diff --git a/drivers/net/iavf/iavf.h b/drivers/net/iavf/iavf.h
index 3198d85b3..d56611608 100644
--- a/drivers/net/iavf/iavf.h
+++ b/drivers/net/iavf/iavf.h
@@ -119,7 +119,7 @@ struct iavf_info {
struct virtchnl_vf_resource *vf_res; /* VF resource */
struct virtchnl_vsi_resource *vsi_res; /* LAN VSI */
uint64_t supported_rxdid;
-
+ uint8_t *proto_xtr; /* proto xtr type for all queues */
volatile enum virtchnl_ops pend_cmd; /* pending command not finished */
uint32_t cmd_retval; /* return value of the cmd response from PF */
uint8_t *aq_resp; /* buffer to store the adminq response from PF */
@@ -153,6 +153,27 @@ struct iavf_info {
#define IAVF_MAX_PKT_TYPE 1024
+#define IAVF_MAX_QUEUE_NUM 2048
+
+enum iavf_proto_xtr_type {
+ IAVF_PROTO_XTR_NONE,
+ IAVF_PROTO_XTR_VLAN,
+ IAVF_PROTO_XTR_IPV4,
+ IAVF_PROTO_XTR_IPV6,
+ IAVF_PROTO_XTR_IPV6_FLOW,
+ IAVF_PROTO_XTR_TCP,
+ IAVF_PROTO_XTR_IP_OFFSET,
+ IAVF_PROTO_XTR_MAX,
+};
+
+/**
+ * Cache devargs parse result.
+ */
+struct iavf_devargs {
+ uint8_t proto_xtr_dflt;
+ uint8_t proto_xtr[IAVF_MAX_QUEUE_NUM];
+};
+
/* Structure to store private data for each VF instance. */
struct iavf_adapter {
struct iavf_hw hw;
@@ -166,6 +187,7 @@ struct iavf_adapter {
const uint32_t *ptype_tbl;
bool stopped;
uint16_t fdir_ref_cnt;
+ struct iavf_devargs devargs;
};
/* IAVF_DEV_PRIVATE_TO */
diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
index 440da7d76..a88d53ab0 100644
--- a/drivers/net/iavf/iavf_ethdev.c
+++ b/drivers/net/iavf/iavf_ethdev.c
@@ -28,6 +28,49 @@
#include "iavf.h"
#include "iavf_rxtx.h"
#include "iavf_generic_flow.h"
+#include "rte_pmd_iavf.h"
+
+/* devargs */
+#define IAVF_PROTO_XTR_ARG "proto_xtr"
+
+static const char * const iavf_valid_args[] = {
+ IAVF_PROTO_XTR_ARG,
+ NULL
+};
+
+static const struct rte_mbuf_dynfield iavf_proto_xtr_metadata_param = {
+ .name = "iavf_dynfield_proto_xtr_metadata",
+ .size = sizeof(uint32_t),
+ .align = __alignof__(uint32_t),
+ .flags = 0,
+};
+
+struct iavf_proto_xtr_ol {
+ const struct rte_mbuf_dynflag param;
+ uint64_t *ol_flag;
+ bool required;
+};
+
+static struct iavf_proto_xtr_ol iavf_proto_xtr_params[] = {
+ [IAVF_PROTO_XTR_VLAN] = {
+ .param = { .name = "iavf_dynflag_proto_xtr_vlan" },
+ .ol_flag = &rte_net_iavf_dynflag_proto_xtr_vlan_mask },
+ [IAVF_PROTO_XTR_IPV4] = {
+ .param = { .name = "iavf_dynflag_proto_xtr_ipv4" },
+ .ol_flag = &rte_net_iavf_dynflag_proto_xtr_ipv4_mask },
+ [IAVF_PROTO_XTR_IPV6] = {
+ .param = { .name = "iavf_dynflag_proto_xtr_ipv6" },
+ .ol_flag = &rte_net_iavf_dynflag_proto_xtr_ipv6_mask },
+ [IAVF_PROTO_XTR_IPV6_FLOW] = {
+ .param = { .name = "iavf_dynflag_proto_xtr_ipv6_flow" },
+ .ol_flag = &rte_net_iavf_dynflag_proto_xtr_ipv6_flow_mask },
+ [IAVF_PROTO_XTR_TCP] = {
+ .param = { .name = "iavf_dynflag_proto_xtr_tcp" },
+ .ol_flag = &rte_net_iavf_dynflag_proto_xtr_tcp_mask },
+ [IAVF_PROTO_XTR_IP_OFFSET] = {
+ .param = { .name = "ice_dynflag_proto_xtr_ip_offset" },
+ .ol_flag = &rte_net_iavf_dynflag_proto_xtr_ip_offset_mask },
+};
static int iavf_dev_configure(struct rte_eth_dev *dev);
static int iavf_dev_start(struct rte_eth_dev *dev);
@@ -1213,6 +1256,349 @@ iavf_check_vf_reset_done(struct iavf_hw *hw)
return 0;
}
+static int
+iavf_lookup_proto_xtr_type(const char *flex_name)
+{
+ static struct {
+ const char *name;
+ enum iavf_proto_xtr_type type;
+ } xtr_type_map[] = {
+ { "vlan", IAVF_PROTO_XTR_VLAN },
+ { "ipv4", IAVF_PROTO_XTR_IPV4 },
+ { "ipv6", IAVF_PROTO_XTR_IPV6 },
+ { "ipv6_flow", IAVF_PROTO_XTR_IPV6_FLOW },
+ { "tcp", IAVF_PROTO_XTR_TCP },
+ { "ip_offset", IAVF_PROTO_XTR_IP_OFFSET },
+ };
+ uint32_t i;
+
+ for (i = 0; i < RTE_DIM(xtr_type_map); i++) {
+ if (strcmp(flex_name, xtr_type_map[i].name) == 0)
+ return xtr_type_map[i].type;
+ }
+
+ PMD_DRV_LOG(ERR, "wrong proto_xtr type, "
+ "it should be: vlan|ipv4|ipv6|ipv6_flow|tcp|ip_offset");
+
+ return -1;
+}
+
+/**
+ * Parse an elem; the elem could be a single number/range or a '(' ')' group
+ * 1) A single number elem is just a simple digit, e.g. 9
+ * 2) A single range elem is two digits with a '-' between, e.g. 2-6
+ * 3) A group elem combines multiple 1) or 2) with '( )', e.g. (0,2-4,6)
+ *    Within a group elem, '-' is used as a range separator and
+ *    ',' is used for a single number.
+ */
+static int
+iavf_parse_queue_set(const char *input, int xtr_type,
+ struct iavf_devargs *devargs)
+{
+ const char *str = input;
+ char *end = NULL;
+ uint32_t min, max;
+ uint32_t idx;
+
+ while (isblank(*str))
+ str++;
+
+ if (!isdigit(*str) && *str != '(')
+ return -1;
+
+ /* process single number or single range of number */
+ if (*str != '(') {
+ errno = 0;
+ idx = strtoul(str, &end, 10);
+ if (errno || !end || idx >= IAVF_MAX_QUEUE_NUM)
+ return -1;
+
+ while (isblank(*end))
+ end++;
+
+ min = idx;
+ max = idx;
+
+ /* process single <number>-<number> */
+ if (*end == '-') {
+ end++;
+ while (isblank(*end))
+ end++;
+ if (!isdigit(*end))
+ return -1;
+
+ errno = 0;
+ idx = strtoul(end, &end, 10);
+ if (errno || !end || idx >= IAVF_MAX_QUEUE_NUM)
+ return -1;
+
+ max = idx;
+ while (isblank(*end))
+ end++;
+ }
+
+ if (*end != ':')
+ return -1;
+
+ for (idx = RTE_MIN(min, max);
+ idx <= RTE_MAX(min, max); idx++)
+ devargs->proto_xtr[idx] = xtr_type;
+
+ return 0;
+ }
+
+ /* process set within bracket */
+ str++;
+ while (isblank(*str))
+ str++;
+ if (*str == '\0')
+ return -1;
+
+ min = IAVF_MAX_QUEUE_NUM;
+ do {
+ /* go ahead to the first digit */
+ while (isblank(*str))
+ str++;
+ if (!isdigit(*str))
+ return -1;
+
+ /* get the digit value */
+ errno = 0;
+ idx = strtoul(str, &end, 10);
+ if (errno || !end || idx >= IAVF_MAX_QUEUE_NUM)
+ return -1;
+
+ /* go ahead to separator '-',',' and ')' */
+ while (isblank(*end))
+ end++;
+ if (*end == '-') {
+ if (min == IAVF_MAX_QUEUE_NUM)
+ min = idx;
+ else /* avoid continuous '-' */
+ return -1;
+ } else if (*end == ',' || *end == ')') {
+ max = idx;
+ if (min == IAVF_MAX_QUEUE_NUM)
+ min = idx;
+
+ for (idx = RTE_MIN(min, max);
+ idx <= RTE_MAX(min, max); idx++)
+ devargs->proto_xtr[idx] = xtr_type;
+
+ min = IAVF_MAX_QUEUE_NUM;
+ } else {
+ return -1;
+ }
+
+ str = end + 1;
+ } while (*end != ')' && *end != '\0');
+
+ return 0;
+}
+
+static int
+iavf_parse_queue_proto_xtr(const char *queues, struct iavf_devargs *devargs)
+{
+ const char *queue_start;
+ uint32_t idx;
+ int xtr_type;
+ char flex_name[32];
+
+ while (isblank(*queues))
+ queues++;
+
+ if (*queues != '[') {
+ xtr_type = iavf_lookup_proto_xtr_type(queues);
+ if (xtr_type < 0)
+ return -1;
+
+ devargs->proto_xtr_dflt = xtr_type;
+
+ return 0;
+ }
+
+ queues++;
+ do {
+ while (isblank(*queues))
+ queues++;
+ if (*queues == '\0')
+ return -1;
+
+ queue_start = queues;
+
+ /* go across a complete bracket */
+ if (*queue_start == '(') {
+ queues += strcspn(queues, ")");
+ if (*queues != ')')
+ return -1;
+ }
+
+ /* scan the separator ':' */
+ queues += strcspn(queues, ":");
+ if (*queues++ != ':')
+ return -1;
+ while (isblank(*queues))
+ queues++;
+
+ for (idx = 0; ; idx++) {
+ if (isblank(queues[idx]) ||
+ queues[idx] == ',' ||
+ queues[idx] == ']' ||
+ queues[idx] == '\0')
+ break;
+
+ if (idx > sizeof(flex_name) - 2)
+ return -1;
+
+ flex_name[idx] = queues[idx];
+ }
+ flex_name[idx] = '\0';
+ xtr_type = iavf_lookup_proto_xtr_type(flex_name);
+ if (xtr_type < 0)
+ return -1;
+
+ queues += idx;
+
+ while (isblank(*queues) || *queues == ',' || *queues == ']')
+ queues++;
+
+ if (iavf_parse_queue_set(queue_start, xtr_type, devargs) < 0)
+ return -1;
+ } while (*queues != '\0');
+
+ return 0;
+}
+
+static int
+iavf_handle_proto_xtr_arg(__rte_unused const char *key, const char *value,
+ void *extra_args)
+{
+ struct iavf_devargs *devargs = extra_args;
+
+ if (!value || !extra_args)
+ return -EINVAL;
+
+ if (iavf_parse_queue_proto_xtr(value, devargs) < 0) {
+ PMD_DRV_LOG(ERR, "the proto_xtr's parameter is wrong : '%s'",
+ value);
+ return -1;
+ }
+
+ return 0;
+}
+
+static int iavf_parse_devargs(struct rte_eth_dev *dev)
+{
+ struct iavf_adapter *ad =
+ IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+ struct rte_devargs *devargs = dev->device->devargs;
+ struct rte_kvargs *kvlist;
+ int ret;
+
+ if (!devargs)
+ return 0;
+
+ kvlist = rte_kvargs_parse(devargs->args, iavf_valid_args);
+ if (!kvlist) {
+ PMD_INIT_LOG(ERR, "invalid kvargs key\n");
+ return -EINVAL;
+ }
+
+ ad->devargs.proto_xtr_dflt = IAVF_PROTO_XTR_NONE;
+ memset(ad->devargs.proto_xtr, IAVF_PROTO_XTR_NONE,
+ sizeof(ad->devargs.proto_xtr));
+
+ ret = rte_kvargs_process(kvlist, IAVF_PROTO_XTR_ARG,
+ &iavf_handle_proto_xtr_arg, &ad->devargs);
+ if (ret)
+ goto bail;
+
+bail:
+ rte_kvargs_free(kvlist);
+ return ret;
+}
+
+static void
+iavf_init_proto_xtr(struct rte_eth_dev *dev)
+{
+ struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+ struct iavf_adapter *ad =
+ IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+ const struct iavf_proto_xtr_ol *xtr_ol;
+ bool proto_xtr_enable = false;
+ int offset;
+ uint16_t i;
+
+ vf->proto_xtr = rte_zmalloc("vf proto xtr",
+ vf->vsi_res->num_queue_pairs, 0);
+ if (unlikely(!(vf->proto_xtr))) {
+ PMD_DRV_LOG(ERR, "no memory for setting up proto_xtr's table");
+ return;
+ }
+
+ for (i = 0; i < vf->vsi_res->num_queue_pairs; i++) {
+ vf->proto_xtr[i] = ad->devargs.proto_xtr[i] !=
+ IAVF_PROTO_XTR_NONE ?
+ ad->devargs.proto_xtr[i] :
+ ad->devargs.proto_xtr_dflt;
+
+ if (vf->proto_xtr[i] != IAVF_PROTO_XTR_NONE) {
+ uint8_t type = vf->proto_xtr[i];
+
+ iavf_proto_xtr_params[type].required = true;
+ proto_xtr_enable = true;
+ }
+ }
+
+ if (likely(!proto_xtr_enable))
+ return;
+
+ offset = rte_mbuf_dynfield_register(&iavf_proto_xtr_metadata_param);
+ if (unlikely(offset == -1)) {
+ PMD_DRV_LOG(ERR,
+ "failed to extract protocol metadata, error %d",
+ -rte_errno);
+ return;
+ }
+
+ PMD_DRV_LOG(DEBUG,
+ "proto_xtr metadata offset in mbuf is : %d",
+ offset);
+ rte_net_iavf_dynfield_proto_xtr_metadata_offs = offset;
+
+ for (i = 0; i < RTE_DIM(iavf_proto_xtr_params); i++) {
+ xtr_ol = &iavf_proto_xtr_params[i];
+
+ uint8_t rxdid = iavf_proto_xtr_type_to_rxdid((uint8_t)i);
+
+ if (!xtr_ol->required)
+ continue;
+
+ if (!(vf->supported_rxdid & BIT(rxdid))) {
+ PMD_DRV_LOG(ERR,
+ "rxdid[%u] is not supported in hardware",
+ rxdid);
+ rte_net_iavf_dynfield_proto_xtr_metadata_offs = -1;
+ break;
+ }
+
+ offset = rte_mbuf_dynflag_register(&xtr_ol->param);
+ if (unlikely(offset == -1)) {
+ PMD_DRV_LOG(ERR,
+ "failed to register proto_xtr offload '%s', error %d",
+ xtr_ol->param.name, -rte_errno);
+
+ rte_net_iavf_dynfield_proto_xtr_metadata_offs = -1;
+ break;
+ }
+
+ PMD_DRV_LOG(DEBUG,
+ "proto_xtr offload '%s' offset in mbuf is : %d",
+ xtr_ol->param.name, offset);
+ *xtr_ol->ol_flag = 1ULL << offset;
+ }
+}
+
static int
iavf_init_vf(struct rte_eth_dev *dev)
{
@@ -1222,6 +1608,12 @@ iavf_init_vf(struct rte_eth_dev *dev)
struct iavf_hw *hw = IAVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+ err = iavf_parse_devargs(dev);
+ if (err) {
+ PMD_INIT_LOG(ERR, "Failed to parse devargs");
+ goto err;
+ }
+
err = iavf_set_mac_type(hw);
if (err) {
PMD_INIT_LOG(ERR, "set_mac_type failed: %d", err);
@@ -1287,6 +1679,8 @@ iavf_init_vf(struct rte_eth_dev *dev)
vf->vf_reset = false;
+ iavf_init_proto_xtr(dev);
+
return 0;
err_rss:
rte_free(vf->rss_key);
diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
index 05a7dd898..b3534472e 100644
--- a/drivers/net/iavf/iavf_rxtx.c
+++ b/drivers/net/iavf/iavf_rxtx.c
@@ -26,6 +26,35 @@
#include "iavf.h"
#include "iavf_rxtx.h"
+#include "rte_pmd_iavf.h"
+
+/* Offset of mbuf dynamic field for protocol extraction's metadata */
+int rte_net_iavf_dynfield_proto_xtr_metadata_offs = -1;
+
+/* Mask of mbuf dynamic flags for protocol extraction's type */
+uint64_t rte_net_iavf_dynflag_proto_xtr_vlan_mask;
+uint64_t rte_net_iavf_dynflag_proto_xtr_ipv4_mask;
+uint64_t rte_net_iavf_dynflag_proto_xtr_ipv6_mask;
+uint64_t rte_net_iavf_dynflag_proto_xtr_ipv6_flow_mask;
+uint64_t rte_net_iavf_dynflag_proto_xtr_tcp_mask;
+uint64_t rte_net_iavf_dynflag_proto_xtr_ip_offset_mask;
+
+uint8_t
+iavf_proto_xtr_type_to_rxdid(uint8_t flex_type)
+{
+ static uint8_t rxdid_map[] = {
+ [IAVF_PROTO_XTR_NONE] = IAVF_RXDID_COMMS_OVS_1,
+ [IAVF_PROTO_XTR_VLAN] = IAVF_RXDID_COMMS_AUX_VLAN,
+ [IAVF_PROTO_XTR_IPV4] = IAVF_RXDID_COMMS_AUX_IPV4,
+ [IAVF_PROTO_XTR_IPV6] = IAVF_RXDID_COMMS_AUX_IPV6,
+ [IAVF_PROTO_XTR_IPV6_FLOW] = IAVF_RXDID_COMMS_AUX_IPV6_FLOW,
+ [IAVF_PROTO_XTR_TCP] = IAVF_RXDID_COMMS_AUX_TCP,
+ [IAVF_PROTO_XTR_IP_OFFSET] = IAVF_RXDID_COMMS_AUX_IP_OFFSET,
+ };
+
+ return flex_type < RTE_DIM(rxdid_map) ?
+ rxdid_map[flex_type] : IAVF_RXDID_COMMS_OVS_1;
+}
static inline int
check_rx_thresh(uint16_t nb_desc, uint16_t thresh)
@@ -294,6 +323,160 @@ static const struct iavf_txq_ops def_txq_ops = {
.release_mbufs = release_txq_mbufs,
};
+static inline void
+iavf_rxd_to_pkt_fields_by_comms_ovs(__rte_unused struct iavf_rx_queue *rxq,
+ struct rte_mbuf *mb,
+ volatile union iavf_rx_flex_desc *rxdp)
+{
+ volatile struct iavf_32b_rx_flex_desc_comms_ovs *desc =
+ (volatile struct iavf_32b_rx_flex_desc_comms_ovs *)rxdp;
+#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
+ uint16_t stat_err;
+#endif
+
+ if (desc->flow_id != 0xFFFFFFFF) {
+ mb->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+ mb->hash.fdir.hi = rte_le_to_cpu_32(desc->flow_id);
+ }
+
+#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
+ stat_err = rte_le_to_cpu_16(desc->status_error0);
+ if (likely(stat_err & (1 << IAVF_RX_FLEX_DESC_STATUS0_RSS_VALID_S))) {
+ mb->ol_flags |= PKT_RX_RSS_HASH;
+ mb->hash.rss = rte_le_to_cpu_32(desc->rss_hash);
+ }
+#endif
+}
+
+static inline void
+iavf_rxd_to_pkt_fields_by_comms_aux_v1(struct iavf_rx_queue *rxq,
+ struct rte_mbuf *mb,
+ volatile union iavf_rx_flex_desc *rxdp)
+{
+ volatile struct iavf_32b_rx_flex_desc_comms *desc =
+ (volatile struct iavf_32b_rx_flex_desc_comms *)rxdp;
+ uint16_t stat_err;
+
+ stat_err = rte_le_to_cpu_16(desc->status_error0);
+ if (likely(stat_err & (1 << IAVF_RX_FLEX_DESC_STATUS0_RSS_VALID_S))) {
+ mb->ol_flags |= PKT_RX_RSS_HASH;
+ mb->hash.rss = rte_le_to_cpu_32(desc->rss_hash);
+ }
+
+#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
+ if (desc->flow_id != 0xFFFFFFFF) {
+ mb->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+ mb->hash.fdir.hi = rte_le_to_cpu_32(desc->flow_id);
+ }
+
+ if (rxq->xtr_ol_flag) {
+ uint32_t metadata = 0;
+
+ stat_err = rte_le_to_cpu_16(desc->status_error1);
+
+ if (stat_err & (1 << IAVF_RX_FLEX_DESC_STATUS1_XTRMD4_VALID_S))
+ metadata = rte_le_to_cpu_16(desc->flex_ts.flex.aux0);
+
+ if (stat_err & (1 << IAVF_RX_FLEX_DESC_STATUS1_XTRMD5_VALID_S))
+ metadata |=
+ rte_le_to_cpu_16(desc->flex_ts.flex.aux1) << 16;
+
+ if (metadata) {
+ mb->ol_flags |= rxq->xtr_ol_flag;
+
+ *RTE_NET_IAVF_DYNF_PROTO_XTR_METADATA(mb) = metadata;
+ }
+ }
+#endif
+}
+
+static inline void
+iavf_rxd_to_pkt_fields_by_comms_aux_v2(struct iavf_rx_queue *rxq,
+ struct rte_mbuf *mb,
+ volatile union iavf_rx_flex_desc *rxdp)
+{
+ volatile struct iavf_32b_rx_flex_desc_comms *desc =
+ (volatile struct iavf_32b_rx_flex_desc_comms *)rxdp;
+ uint16_t stat_err;
+
+ stat_err = rte_le_to_cpu_16(desc->status_error0);
+ if (likely(stat_err & (1 << IAVF_RX_FLEX_DESC_STATUS0_RSS_VALID_S))) {
+ mb->ol_flags |= PKT_RX_RSS_HASH;
+ mb->hash.rss = rte_le_to_cpu_32(desc->rss_hash);
+ }
+
+#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
+ if (desc->flow_id != 0xFFFFFFFF) {
+ mb->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+ mb->hash.fdir.hi = rte_le_to_cpu_32(desc->flow_id);
+ }
+
+ if (rxq->xtr_ol_flag) {
+ uint32_t metadata = 0;
+
+ if (desc->flex_ts.flex.aux0 != 0xFFFF)
+ metadata = rte_le_to_cpu_16(desc->flex_ts.flex.aux0);
+ else if (desc->flex_ts.flex.aux1 != 0xFFFF)
+ metadata = rte_le_to_cpu_16(desc->flex_ts.flex.aux1);
+
+ if (metadata) {
+ mb->ol_flags |= rxq->xtr_ol_flag;
+
+ *RTE_NET_IAVF_DYNF_PROTO_XTR_METADATA(mb) = metadata;
+ }
+ }
+#endif
+}
+
+static void
+iavf_select_rxd_to_pkt_fields_handler(struct iavf_rx_queue *rxq, uint32_t rxdid)
+{
+ switch (rxdid) {
+ case IAVF_RXDID_COMMS_AUX_VLAN:
+ rxq->xtr_ol_flag = rte_net_iavf_dynflag_proto_xtr_vlan_mask;
+ rxq->rxd_to_pkt_fields =
+ iavf_rxd_to_pkt_fields_by_comms_aux_v1;
+ break;
+ case IAVF_RXDID_COMMS_AUX_IPV4:
+ rxq->xtr_ol_flag = rte_net_iavf_dynflag_proto_xtr_ipv4_mask;
+ rxq->rxd_to_pkt_fields =
+ iavf_rxd_to_pkt_fields_by_comms_aux_v1;
+ break;
+ case IAVF_RXDID_COMMS_AUX_IPV6:
+ rxq->xtr_ol_flag = rte_net_iavf_dynflag_proto_xtr_ipv6_mask;
+ rxq->rxd_to_pkt_fields =
+ iavf_rxd_to_pkt_fields_by_comms_aux_v1;
+ break;
+ case IAVF_RXDID_COMMS_AUX_IPV6_FLOW:
+ rxq->xtr_ol_flag =
+ rte_net_iavf_dynflag_proto_xtr_ipv6_flow_mask;
+ rxq->rxd_to_pkt_fields =
+ iavf_rxd_to_pkt_fields_by_comms_aux_v1;
+ break;
+ case IAVF_RXDID_COMMS_AUX_TCP:
+ rxq->xtr_ol_flag = rte_net_iavf_dynflag_proto_xtr_tcp_mask;
+ rxq->rxd_to_pkt_fields =
+ iavf_rxd_to_pkt_fields_by_comms_aux_v1;
+ break;
+ case IAVF_RXDID_COMMS_AUX_IP_OFFSET:
+ rxq->xtr_ol_flag =
+ rte_net_iavf_dynflag_proto_xtr_ip_offset_mask;
+ rxq->rxd_to_pkt_fields =
+ iavf_rxd_to_pkt_fields_by_comms_aux_v2;
+ break;
+ case IAVF_RXDID_COMMS_OVS_1:
+ rxq->rxd_to_pkt_fields = iavf_rxd_to_pkt_fields_by_comms_ovs;
+ break;
+ default:
+ /* update this according to the RXDID for FLEX_DESC_NONE */
+ rxq->rxd_to_pkt_fields = iavf_rxd_to_pkt_fields_by_comms_ovs;
+ break;
+ }
+
+ if (!rte_net_iavf_dynf_proto_xtr_metadata_avail())
+ rxq->xtr_ol_flag = 0;
+}
+
int
iavf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
uint16_t nb_desc, unsigned int socket_id,
@@ -309,6 +492,7 @@ iavf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
struct iavf_rx_queue *rxq;
const struct rte_memzone *mz;
uint32_t ring_size;
+ uint8_t proto_xtr;
uint16_t len;
uint16_t rx_free_thresh;
@@ -346,14 +530,18 @@ iavf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
return -ENOMEM;
}
- if (vf->vf_res->vf_cap_flags &
- VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC &&
- vf->supported_rxdid & BIT(IAVF_RXDID_COMMS_OVS_1)) {
- rxq->rxdid = IAVF_RXDID_COMMS_OVS_1;
+ if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC) {
+ proto_xtr = vf->proto_xtr ? vf->proto_xtr[queue_idx] :
+ IAVF_PROTO_XTR_NONE;
+ rxq->rxdid = iavf_proto_xtr_type_to_rxdid(proto_xtr);
+ rxq->proto_xtr = proto_xtr;
} else {
rxq->rxdid = IAVF_RXDID_LEGACY_1;
+ rxq->proto_xtr = IAVF_PROTO_XTR_NONE;
}
+ iavf_select_rxd_to_pkt_fields_handler(rxq, rxq->rxdid);
+
rxq->mp = mp;
rxq->nb_rx_desc = nb_desc;
rxq->rx_free_thresh = rx_free_thresh;
@@ -715,6 +903,14 @@ iavf_stop_queues(struct rte_eth_dev *dev)
}
}
+#define IAVF_RX_FLEX_ERR0_BITS \
+ ((1 << IAVF_RX_FLEX_DESC_STATUS0_HBO_S) | \
+ (1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_IPE_S) | \
+ (1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_L4E_S) | \
+ (1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_EIPE_S) | \
+ (1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_EUDPE_S) | \
+ (1 << IAVF_RX_FLEX_DESC_STATUS0_RXE_S))
+
static inline void
iavf_rxd_to_vlan_tci(struct rte_mbuf *mb, volatile union iavf_rx_desc *rxdp)
{
@@ -740,6 +936,21 @@ iavf_flex_rxd_to_vlan_tci(struct rte_mbuf *mb,
} else {
mb->vlan_tci = 0;
}
+
+#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
+ if (rte_le_to_cpu_16(rxdp->wb.status_error1) &
+ (1 << IAVF_RX_FLEX_DESC_STATUS1_L2TAG2P_S)) {
+ mb->ol_flags |= PKT_RX_QINQ_STRIPPED | PKT_RX_QINQ |
+ PKT_RX_VLAN_STRIPPED | PKT_RX_VLAN;
+ mb->vlan_tci_outer = mb->vlan_tci;
+ mb->vlan_tci = rte_le_to_cpu_16(rxdp->wb.l2tag2_2nd);
+ PMD_RX_LOG(DEBUG, "Descriptor l2tag2_1: %u, l2tag2_2: %u",
+ rte_le_to_cpu_16(rxdp->wb.l2tag2_1st),
+ rte_le_to_cpu_16(rxdp->wb.l2tag2_2nd));
+ } else {
+ mb->vlan_tci_outer = 0;
+ }
+#endif
}
/* Translate the rx descriptor status and error fields to pkt flags */
@@ -804,30 +1015,6 @@ iavf_rxd_build_fdir(volatile union iavf_rx_desc *rxdp, struct rte_mbuf *mb)
return flags;
}
-
-/* Translate the rx flex descriptor status to pkt flags */
-static inline void
-iavf_rxd_to_pkt_fields(struct rte_mbuf *mb,
- volatile union iavf_rx_flex_desc *rxdp)
-{
- volatile struct iavf_32b_rx_flex_desc_comms_ovs *desc =
- (volatile struct iavf_32b_rx_flex_desc_comms_ovs *)rxdp;
-#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
- uint16_t stat_err;
-
- stat_err = rte_le_to_cpu_16(desc->status_error0);
- if (likely(stat_err & (1 << IAVF_RX_FLEX_DESC_STATUS0_RSS_VALID_S))) {
- mb->ol_flags |= PKT_RX_RSS_HASH;
- mb->hash.rss = rte_le_to_cpu_32(desc->rss_hash);
- }
-#endif
-
- if (desc->flow_id != 0xFFFFFFFF) {
- mb->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
- mb->hash.fdir.hi = rte_le_to_cpu_32(desc->flow_id);
- }
-}
-
#define IAVF_RX_FLEX_ERR0_BITS \
((1 << IAVF_RX_FLEX_DESC_STATUS0_HBO_S) | \
(1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_IPE_S) | \
@@ -1082,7 +1269,7 @@ iavf_recv_pkts_flex_rxd(void *rx_queue,
rxm->packet_type = ptype_tbl[IAVF_RX_FLEX_DESC_PTYPE_M &
rte_le_to_cpu_16(rxd.wb.ptype_flex_flags0)];
iavf_flex_rxd_to_vlan_tci(rxm, &rxd);
- iavf_rxd_to_pkt_fields(rxm, &rxd);
+ rxq->rxd_to_pkt_fields(rxq, rxm, &rxd);
pkt_flags = iavf_flex_rxd_error_to_pkt_flags(rx_stat_err0);
rxm->ol_flags |= pkt_flags;
@@ -1223,7 +1410,7 @@ iavf_recv_scattered_pkts_flex_rxd(void *rx_queue, struct rte_mbuf **rx_pkts,
first_seg->packet_type = ptype_tbl[IAVF_RX_FLEX_DESC_PTYPE_M &
rte_le_to_cpu_16(rxd.wb.ptype_flex_flags0)];
iavf_flex_rxd_to_vlan_tci(first_seg, &rxd);
- iavf_rxd_to_pkt_fields(first_seg, &rxd);
+ rxq->rxd_to_pkt_fields(rxq, first_seg, &rxd);
pkt_flags = iavf_flex_rxd_error_to_pkt_flags(rx_stat_err0);
first_seg->ol_flags |= pkt_flags;
@@ -1460,7 +1647,7 @@ iavf_rx_scan_hw_ring_flex_rxd(struct iavf_rx_queue *rxq)
mb->packet_type = ptype_tbl[IAVF_RX_FLEX_DESC_PTYPE_M &
rte_le_to_cpu_16(rxdp[j].wb.ptype_flex_flags0)];
iavf_flex_rxd_to_vlan_tci(mb, &rxdp[j]);
- iavf_rxd_to_pkt_fields(mb, &rxdp[j]);
+ rxq->rxd_to_pkt_fields(rxq, mb, &rxdp[j]);
stat_err0 = rte_le_to_cpu_16(rxdp[j].wb.status_error0);
pkt_flags = iavf_flex_rxd_error_to_pkt_flags(stat_err0);
@@ -1652,7 +1839,7 @@ rx_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
if (rxq->rx_nb_avail)
return iavf_rx_fill_from_stage(rxq, rx_pkts, nb_pkts);
- if (rxq->rxdid == IAVF_RXDID_COMMS_OVS_1)
+ if (rxq->rxdid >= IAVF_RXDID_FLEX_NIC && rxq->rxdid <= IAVF_RXDID_LAST)
nb_rx = (uint16_t)iavf_rx_scan_hw_ring_flex_rxd(rxq);
else
nb_rx = (uint16_t)iavf_rx_scan_hw_ring(rxq);
@@ -2100,6 +2287,7 @@ iavf_set_rx_function(struct rte_eth_dev *dev)
struct iavf_adapter *adapter =
IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+
#ifdef RTE_ARCH_X86
struct iavf_rx_queue *rxq;
int i;
diff --git a/drivers/net/iavf/iavf_rxtx.h b/drivers/net/iavf/iavf_rxtx.h
index 59625a979..5225493bc 100644
--- a/drivers/net/iavf/iavf_rxtx.h
+++ b/drivers/net/iavf/iavf_rxtx.h
@@ -57,6 +57,77 @@
#define IAVF_TX_OFFLOAD_NOTSUP_MASK \
(PKT_TX_OFFLOAD_MASK ^ IAVF_TX_OFFLOAD_MASK)
+/* Rx Flex Descriptors
+ * These descriptors are used instead of the legacy version descriptors
+ */
+union iavf_16b_rx_flex_desc {
+ struct {
+ __le64 pkt_addr; /* Packet buffer address */
+ __le64 hdr_addr; /* Header buffer address */
+ /* bit 0 of hdr_addr is DD bit */
+ } read;
+ struct {
+ /* Qword 0 */
+ u8 rxdid; /* descriptor builder profile ID */
+ u8 mir_id_umb_cast; /* mirror=[5:0], umb=[7:6] */
+ __le16 ptype_flex_flags0; /* ptype=[9:0], ff0=[15:10] */
+ __le16 pkt_len; /* [15:14] are reserved */
+ __le16 hdr_len_sph_flex_flags1; /* header=[10:0] */
+ /* sph=[11:11] */
+ /* ff1/ext=[15:12] */
+
+ /* Qword 1 */
+ __le16 status_error0;
+ __le16 l2tag1;
+ __le16 flex_meta0;
+ __le16 flex_meta1;
+ } wb; /* writeback */
+};
+
+union iavf_32b_rx_flex_desc {
+ struct {
+ __le64 pkt_addr; /* Packet buffer address */
+ __le64 hdr_addr; /* Header buffer address */
+ /* bit 0 of hdr_addr is DD bit */
+ __le64 rsvd1;
+ __le64 rsvd2;
+ } read;
+ struct {
+ /* Qword 0 */
+ u8 rxdid; /* descriptor builder profile ID */
+ u8 mir_id_umb_cast; /* mirror=[5:0], umb=[7:6] */
+ __le16 ptype_flex_flags0; /* ptype=[9:0], ff0=[15:10] */
+ __le16 pkt_len; /* [15:14] are reserved */
+ __le16 hdr_len_sph_flex_flags1; /* header=[10:0] */
+ /* sph=[11:11] */
+ /* ff1/ext=[15:12] */
+
+ /* Qword 1 */
+ __le16 status_error0;
+ __le16 l2tag1;
+ __le16 flex_meta0;
+ __le16 flex_meta1;
+
+ /* Qword 2 */
+ __le16 status_error1;
+ u8 flex_flags2;
+ u8 time_stamp_low;
+ __le16 l2tag2_1st;
+ __le16 l2tag2_2nd;
+
+ /* Qword 3 */
+ __le16 flex_meta2;
+ __le16 flex_meta3;
+ union {
+ struct {
+ __le16 flex_meta4;
+ __le16 flex_meta5;
+ } flex;
+ __le32 ts_high;
+ } flex_ts;
+ } wb; /* writeback */
+};
+
/* HW desc structure, both 16-byte and 32-byte types are supported */
#ifdef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
#define iavf_rx_desc iavf_16byte_rx_desc
@@ -66,6 +137,10 @@
#define iavf_rx_flex_desc iavf_32b_rx_flex_desc
#endif
+typedef void (*iavf_rxd_to_pkt_fields_t)(struct iavf_rx_queue *rxq,
+ struct rte_mbuf *mb,
+ volatile union iavf_rx_flex_desc *rxdp);
+
struct iavf_rxq_ops {
void (*release_mbufs)(struct iavf_rx_queue *rxq);
};
@@ -114,6 +189,11 @@ struct iavf_rx_queue {
bool q_set; /* if rx queue has been configured */
bool rx_deferred_start; /* don't start this queue in dev start */
const struct iavf_rxq_ops *ops;
+ uint8_t proto_xtr; /* protocol extraction type */
+ uint64_t xtr_ol_flag;
+ /* flexible descriptor metadata extraction offload flag */
+ iavf_rxd_to_pkt_fields_t rxd_to_pkt_fields;
+ /* handle flexible descriptor by RXDID */
};
struct iavf_tx_entry {
@@ -161,77 +241,6 @@ union iavf_tx_offload {
};
};
-/* Rx Flex Descriptors
- * These descriptors are used instead of the legacy version descriptors
- */
-union iavf_16b_rx_flex_desc {
- struct {
- __le64 pkt_addr; /* Packet buffer address */
- __le64 hdr_addr; /* Header buffer address */
- /* bit 0 of hdr_addr is DD bit */
- } read;
- struct {
- /* Qword 0 */
- u8 rxdid; /* descriptor builder profile ID */
- u8 mir_id_umb_cast; /* mirror=[5:0], umb=[7:6] */
- __le16 ptype_flex_flags0; /* ptype=[9:0], ff0=[15:10] */
- __le16 pkt_len; /* [15:14] are reserved */
- __le16 hdr_len_sph_flex_flags1; /* header=[10:0] */
- /* sph=[11:11] */
- /* ff1/ext=[15:12] */
-
- /* Qword 1 */
- __le16 status_error0;
- __le16 l2tag1;
- __le16 flex_meta0;
- __le16 flex_meta1;
- } wb; /* writeback */
-};
-
-union iavf_32b_rx_flex_desc {
- struct {
- __le64 pkt_addr; /* Packet buffer address */
- __le64 hdr_addr; /* Header buffer address */
- /* bit 0 of hdr_addr is DD bit */
- __le64 rsvd1;
- __le64 rsvd2;
- } read;
- struct {
- /* Qword 0 */
- u8 rxdid; /* descriptor builder profile ID */
- u8 mir_id_umb_cast; /* mirror=[5:0], umb=[7:6] */
- __le16 ptype_flex_flags0; /* ptype=[9:0], ff0=[15:10] */
- __le16 pkt_len; /* [15:14] are reserved */
- __le16 hdr_len_sph_flex_flags1; /* header=[10:0] */
- /* sph=[11:11] */
- /* ff1/ext=[15:12] */
-
- /* Qword 1 */
- __le16 status_error0;
- __le16 l2tag1;
- __le16 flex_meta0;
- __le16 flex_meta1;
-
- /* Qword 2 */
- __le16 status_error1;
- u8 flex_flags2;
- u8 time_stamp_low;
- __le16 l2tag2_1st;
- __le16 l2tag2_2nd;
-
- /* Qword 3 */
- __le16 flex_meta2;
- __le16 flex_meta3;
- union {
- struct {
- __le16 flex_meta4;
- __le16 flex_meta5;
- } flex;
- __le32 ts_high;
- } flex_ts;
- } wb; /* writeback */
-};
-
/* Rx Flex Descriptor
* RxDID Profile ID 16-21
* Flex-field 0: RSS hash lower 16-bits
@@ -331,6 +340,7 @@ enum iavf_rxdid {
IAVF_RXDID_COMMS_AUX_TCP = 21,
IAVF_RXDID_COMMS_OVS_1 = 22,
IAVF_RXDID_COMMS_OVS_2 = 23,
+ IAVF_RXDID_COMMS_AUX_IP_OFFSET = 25,
IAVF_RXDID_LAST = 63,
};
@@ -355,6 +365,20 @@ enum iavf_rx_flex_desc_status_error_0_bits {
IAVF_RX_FLEX_DESC_STATUS0_LAST /* this entry must be last!!! */
};
+enum iavf_rx_flex_desc_status_error_1_bits {
+ /* Note: These are predefined bit offsets */
+ IAVF_RX_FLEX_DESC_STATUS1_CPM_S = 0, /* 4 bits */
+ IAVF_RX_FLEX_DESC_STATUS1_NAT_S = 4,
+ IAVF_RX_FLEX_DESC_STATUS1_CRYPTO_S = 5,
+ /* [10:6] reserved */
+ IAVF_RX_FLEX_DESC_STATUS1_L2TAG2P_S = 11,
+ IAVF_RX_FLEX_DESC_STATUS1_XTRMD2_VALID_S = 12,
+ IAVF_RX_FLEX_DESC_STATUS1_XTRMD3_VALID_S = 13,
+ IAVF_RX_FLEX_DESC_STATUS1_XTRMD4_VALID_S = 14,
+ IAVF_RX_FLEX_DESC_STATUS1_XTRMD5_VALID_S = 15,
+ IAVF_RX_FLEX_DESC_STATUS1_LAST /* this entry must be last!!! */
+};
+
/* for iavf_32b_rx_flex_desc.ptype_flex_flags0 member */
#define IAVF_RX_FLEX_DESC_PTYPE_M (0x3FF) /* 10-bits */
@@ -438,6 +462,8 @@ int iavf_tx_vec_dev_check(struct rte_eth_dev *dev);
int iavf_rxq_vec_setup(struct iavf_rx_queue *rxq);
int iavf_txq_vec_setup(struct iavf_tx_queue *txq);
+uint8_t iavf_proto_xtr_type_to_rxdid(uint8_t xtr_type);
+
const uint32_t *iavf_get_default_ptype_table(void);
static inline
diff --git a/drivers/net/iavf/iavf_rxtx_vec_common.h b/drivers/net/iavf/iavf_rxtx_vec_common.h
index 25bb502de..7ad1e0f68 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_common.h
+++ b/drivers/net/iavf/iavf_rxtx_vec_common.h
@@ -224,6 +224,9 @@ iavf_rx_vec_queue_default(struct iavf_rx_queue *rxq)
if (rxq->nb_rx_desc % rxq->rx_free_thresh)
return -1;
+ if (rxq->proto_xtr != IAVF_PROTO_XTR_NONE)
+ return -1;
+
return 0;
}
diff --git a/drivers/net/iavf/iavf_vchnl.c b/drivers/net/iavf/iavf_vchnl.c
index 76f8e38d1..7981dfa30 100644
--- a/drivers/net/iavf/iavf_vchnl.c
+++ b/drivers/net/iavf/iavf_vchnl.c
@@ -647,25 +647,27 @@ iavf_configure_queues(struct iavf_adapter *adapter)
#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
if (vf->vf_res->vf_cap_flags &
- VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC &&
- vf->supported_rxdid & BIT(IAVF_RXDID_COMMS_OVS_1)) {
- vc_qp->rxq.rxdid = IAVF_RXDID_COMMS_OVS_1;
- PMD_DRV_LOG(NOTICE, "request RXDID == %d in "
- "Queue[%d]", vc_qp->rxq.rxdid, i);
+ VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC &&
+ vf->supported_rxdid & BIT(rxq[i]->rxdid)) {
+ vc_qp->rxq.rxdid = rxq[i]->rxdid;
+ PMD_DRV_LOG(NOTICE, "request RXDID[%d] in Queue[%d]",
+ vc_qp->rxq.rxdid, i);
} else {
+ PMD_DRV_LOG(NOTICE, "RXDID[%d] is not supported, "
+ "request default RXDID[%d] in Queue[%d]",
+ rxq[i]->rxdid, IAVF_RXDID_LEGACY_1, i);
vc_qp->rxq.rxdid = IAVF_RXDID_LEGACY_1;
- PMD_DRV_LOG(NOTICE, "request RXDID == %d in "
- "Queue[%d]", vc_qp->rxq.rxdid, i);
}
#else
if (vf->vf_res->vf_cap_flags &
VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC &&
vf->supported_rxdid & BIT(IAVF_RXDID_LEGACY_0)) {
vc_qp->rxq.rxdid = IAVF_RXDID_LEGACY_0;
- PMD_DRV_LOG(NOTICE, "request RXDID == %d in "
- "Queue[%d]", vc_qp->rxq.rxdid, i);
+ PMD_DRV_LOG(NOTICE, "request RXDID[%d] in Queue[%d]",
+ vc_qp->rxq.rxdid, i);
} else {
- PMD_DRV_LOG(ERR, "RXDID == 0 is not supported");
+ PMD_DRV_LOG(ERR, "RXDID[%d] is not supported",
+ IAVF_RXDID_LEGACY_0);
return -1;
}
#endif
diff --git a/drivers/net/iavf/meson.build b/drivers/net/iavf/meson.build
index 33407c503..c1c74571a 100644
--- a/drivers/net/iavf/meson.build
+++ b/drivers/net/iavf/meson.build
@@ -35,3 +35,5 @@ if arch_subdir == 'x86'
objs += iavf_avx2_lib.extract_objects('iavf_rxtx_vec_avx2.c')
endif
endif
+
+install_headers('rte_pmd_iavf.h')
diff --git a/drivers/net/iavf/rte_pmd_iavf.h b/drivers/net/iavf/rte_pmd_iavf.h
new file mode 100644
index 000000000..5e41568c3
--- /dev/null
+++ b/drivers/net/iavf/rte_pmd_iavf.h
@@ -0,0 +1,250 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#ifndef _RTE_PMD_IAVF_H_
+#define _RTE_PMD_IAVF_H_
+
+/**
+ * @file rte_pmd_iavf.h
+ *
+ * iavf PMD specific functions.
+ *
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
+ *
+ */
+
+#include <stdio.h>
+#include <rte_mbuf.h>
+#include <rte_mbuf_dyn.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * The supported network flexible descriptor's extraction metadata format.
+ */
+union rte_net_iavf_proto_xtr_metadata {
+ uint32_t metadata;
+
+ struct {
+ uint16_t data0;
+ uint16_t data1;
+ } raw;
+
+ struct {
+ uint16_t stag_vid:12,
+ stag_dei:1,
+ stag_pcp:3;
+ uint16_t ctag_vid:12,
+ ctag_dei:1,
+ ctag_pcp:3;
+ } vlan;
+
+ struct {
+ uint16_t protocol:8,
+ ttl:8;
+ uint16_t tos:8,
+ ihl:4,
+ version:4;
+ } ipv4;
+
+ struct {
+ uint16_t hoplimit:8,
+ nexthdr:8;
+ uint16_t flowhi4:4,
+ tc:8,
+ version:4;
+ } ipv6;
+
+ struct {
+ uint16_t flowlo16;
+ uint16_t flowhi4:4,
+ tc:8,
+ version:4;
+ } ipv6_flow;
+
+ struct {
+ uint16_t fin:1,
+ syn:1,
+ rst:1,
+ psh:1,
+ ack:1,
+ urg:1,
+ ece:1,
+ cwr:1,
+ res1:4,
+ doff:4;
+ uint16_t rsvd;
+ } tcp;
+
+ uint32_t ip_ofs;
+};
+
+/* Offset of mbuf dynamic field for flexible descriptor's extraction data */
+extern int rte_net_iavf_dynfield_proto_xtr_metadata_offs;
+
+/* Mask of mbuf dynamic flags for flexible descriptor's extraction type */
+extern uint64_t rte_net_iavf_dynflag_proto_xtr_vlan_mask;
+extern uint64_t rte_net_iavf_dynflag_proto_xtr_ipv4_mask;
+extern uint64_t rte_net_iavf_dynflag_proto_xtr_ipv6_mask;
+extern uint64_t rte_net_iavf_dynflag_proto_xtr_ipv6_flow_mask;
+extern uint64_t rte_net_iavf_dynflag_proto_xtr_tcp_mask;
+extern uint64_t rte_net_iavf_dynflag_proto_xtr_ip_offset_mask;
+
+/**
+ * The mbuf dynamic field pointer for flexible descriptor's extraction metadata.
+ */
+#define RTE_NET_IAVF_DYNF_PROTO_XTR_METADATA(m) \
+ RTE_MBUF_DYNFIELD((m), \
+ rte_net_iavf_dynfield_proto_xtr_metadata_offs, \
+ uint32_t *)
+
+/**
+ * The mbuf dynamic flag for VLAN protocol extraction metadata, it is valid
+ * when dev_args 'proto_xtr' has 'vlan' specified.
+ */
+#define RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_VLAN \
+ (rte_net_iavf_dynflag_proto_xtr_vlan_mask)
+
+/**
+ * The mbuf dynamic flag for IPv4 protocol extraction metadata, it is valid
+ * when dev_args 'proto_xtr' has 'ipv4' specified.
+ */
+#define RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_IPV4 \
+ (rte_net_iavf_dynflag_proto_xtr_ipv4_mask)
+
+/**
+ * The mbuf dynamic flag for IPv6 protocol extraction metadata, it is valid
+ * when dev_args 'proto_xtr' has 'ipv6' specified.
+ */
+#define RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_IPV6 \
+ (rte_net_iavf_dynflag_proto_xtr_ipv6_mask)
+
+/**
+ * The mbuf dynamic flag for IPv6 with flow protocol extraction metadata, it is
+ * valid when dev_args 'proto_xtr' has 'ipv6_flow' specified.
+ */
+#define RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_IPV6_FLOW \
+ (rte_net_iavf_dynflag_proto_xtr_ipv6_flow_mask)
+
+/**
+ * The mbuf dynamic flag for TCP protocol extraction metadata, it is valid
+ * when dev_args 'proto_xtr' has 'tcp' specified.
+ */
+#define RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_TCP \
+ (rte_net_iavf_dynflag_proto_xtr_tcp_mask)
+
+/**
+ * The mbuf dynamic flag for IP_OFFSET extraction metadata, it is valid
+ * when dev_args 'proto_xtr' has 'ip_offset' specified.
+ */
+#define RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_IP_OFFSET \
+ (rte_net_iavf_dynflag_proto_xtr_ip_offset_mask)
+
+/**
+ * Check if mbuf dynamic field for flexible descriptor's extraction metadata
+ * is registered.
+ *
+ * @return
+ * True if registered, false otherwise.
+ */
+__rte_experimental
+static __rte_always_inline int
+rte_net_iavf_dynf_proto_xtr_metadata_avail(void)
+{
+ return rte_net_iavf_dynfield_proto_xtr_metadata_offs != -1;
+}
+
+/**
+ * Get the mbuf dynamic field for flexible descriptor's extraction metadata.
+ *
+ * @param m
+ * The pointer to the mbuf.
+ * @return
+ * The saved protocol extraction metadata.
+ */
+__rte_experimental
+static __rte_always_inline uint32_t
+rte_net_iavf_dynf_proto_xtr_metadata_get(struct rte_mbuf *m)
+{
+ return *RTE_NET_IAVF_DYNF_PROTO_XTR_METADATA(m);
+}
+
+/**
+ * Dump the mbuf dynamic field for flexible descriptor's extraction metadata.
+ *
+ * @param m
+ * The pointer to the mbuf.
+ */
+__rte_experimental
+static inline void
+rte_net_iavf_dump_proto_xtr_metadata(struct rte_mbuf *m)
+{
+ union rte_net_iavf_proto_xtr_metadata data;
+
+ if (!rte_net_iavf_dynf_proto_xtr_metadata_avail())
+ return;
+
+ data.metadata = rte_net_iavf_dynf_proto_xtr_metadata_get(m);
+
+ if (m->ol_flags & RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_VLAN)
+ printf(" - Flexible descriptor's Metadata: [0x%04x:0x%04x],"
+ "vlan,stag=%u:%u:%u,ctag=%u:%u:%u",
+ data.raw.data0, data.raw.data1,
+ data.vlan.stag_pcp,
+ data.vlan.stag_dei,
+ data.vlan.stag_vid,
+ data.vlan.ctag_pcp,
+ data.vlan.ctag_dei,
+ data.vlan.ctag_vid);
+ else if (m->ol_flags & RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_IPV4)
+ printf(" - Flexible descriptor's Metadata: [0x%04x:0x%04x],"
+ "ipv4,ver=%u,hdrlen=%u,tos=%u,ttl=%u,proto=%u",
+ data.raw.data0, data.raw.data1,
+ data.ipv4.version,
+ data.ipv4.ihl,
+ data.ipv4.tos,
+ data.ipv4.ttl,
+ data.ipv4.protocol);
+ else if (m->ol_flags & RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_IPV6)
+ printf(" - Flexible descriptor's Metadata: [0x%04x:0x%04x],"
+ "ipv6,ver=%u,tc=%u,flow_hi4=0x%x,nexthdr=%u,hoplimit=%u",
+ data.raw.data0, data.raw.data1,
+ data.ipv6.version,
+ data.ipv6.tc,
+ data.ipv6.flowhi4,
+ data.ipv6.nexthdr,
+ data.ipv6.hoplimit);
+ else if (m->ol_flags & RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_IPV6_FLOW)
+ printf(" - Flexible descriptor's Metadata: [0x%04x:0x%04x],"
+ "ipv6_flow,ver=%u,tc=%u,flow=0x%x%04x",
+ data.raw.data0, data.raw.data1,
+ data.ipv6_flow.version,
+ data.ipv6_flow.tc,
+ data.ipv6_flow.flowhi4,
+ data.ipv6_flow.flowlo16);
+ else if (m->ol_flags & RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_TCP)
+ printf(" - Flexible descriptor's Metadata: [0x%04x:0x%04x],"
+ "tcp,doff=%u,flags=%s%s%s%s%s%s%s%s",
+ data.raw.data0, data.raw.data1,
+ data.tcp.doff,
+ data.tcp.cwr ? "C" : "",
+ data.tcp.ece ? "E" : "",
+ data.tcp.urg ? "U" : "",
+ data.tcp.ack ? "A" : "",
+ data.tcp.psh ? "P" : "",
+ data.tcp.rst ? "R" : "",
+ data.tcp.syn ? "S" : "",
+ data.tcp.fin ? "F" : "");
+ else if (m->ol_flags & RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_IP_OFFSET)
+ printf(" - Flexible descriptor's Extraction: ip_offset=%u",
+ data.ip_ofs);
+}
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_PMD_IAVF_H_ */
diff --git a/drivers/net/iavf/rte_pmd_iavf_version.map b/drivers/net/iavf/rte_pmd_iavf_version.map
index 4a76d1d52..d7afd31d1 100644
--- a/drivers/net/iavf/rte_pmd_iavf_version.map
+++ b/drivers/net/iavf/rte_pmd_iavf_version.map
@@ -1,3 +1,16 @@
DPDK_21 {
local: *;
};
+
+EXPERIMENTAL {
+ global:
+
+ # added in 20.11
+ rte_net_iavf_dynfield_proto_xtr_metadata_offs;
+ rte_net_iavf_dynflag_proto_xtr_vlan_mask;
+ rte_net_iavf_dynflag_proto_xtr_ipv4_mask;
+ rte_net_iavf_dynflag_proto_xtr_ipv6_mask;
+ rte_net_iavf_dynflag_proto_xtr_ipv6_flow_mask;
+ rte_net_iavf_dynflag_proto_xtr_tcp_mask;
+ rte_net_iavf_dynflag_proto_xtr_ip_offset_mask;
+};
--
2.20.1
* [dpdk-dev] [PATCH v7] net/iavf: support flex desc metadata extraction
2020-09-09 2:54 [dpdk-dev] [PATCH v1] net/iavf: support flex desc metadata extraction Jeff Guo
` (6 preceding siblings ...)
2020-09-29 6:10 ` [dpdk-dev] [PATCH v7] " Jeff Guo
@ 2020-09-29 6:12 ` Jeff Guo
2020-10-13 8:17 ` [dpdk-dev] [PATCH v8] " Jeff Guo
` (5 subsequent siblings)
13 siblings, 0 replies; 40+ messages in thread
From: Jeff Guo @ 2020-09-29 6:12 UTC (permalink / raw)
To: jingjing.wu, qi.z.zhang, beilei.xing, ferruh.yigit
Cc: dev, haiyue.wang, jia.guo
Enable metadata extraction for flexible descriptors in AVF, so that
network functions can get metadata directly without additional parsing,
which reduces the CPU cost for VFs. The enabled metadata extraction
covers the VLAN/IPv4/IPv6/IPv6-FLOW/TCP/MPLS flexible descriptors, and
the VF can negotiate the flexible descriptor capability with the PF and
configure the corresponding offload on its receive queues.
Signed-off-by: Jeff Guo <jia.guo@intel.com>
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
---
v7:
clean some useless and add doc
v6:
rebase patch
v5:
remove ovs configure since ovs is not protocol extraction
v4:
add flex desc type in rx queue for handling vector path
handle ovs flex type
v3:
export these global symbols into .map
v2:
remove makefile change and modify the rxdid handling
---
config/rte_config.h | 3 +
doc/guides/nics/intel_vf.rst | 16 +
doc/guides/rel_notes/release_20_11.rst | 6 +
drivers/net/iavf/iavf.h | 24 +-
drivers/net/iavf/iavf_ethdev.c | 394 ++++++++++++++++++++++
drivers/net/iavf/iavf_rxtx.c | 252 ++++++++++++--
drivers/net/iavf/iavf_rxtx.h | 168 +++++----
drivers/net/iavf/iavf_rxtx_vec_common.h | 3 +
drivers/net/iavf/iavf_vchnl.c | 22 +-
drivers/net/iavf/meson.build | 2 +
drivers/net/iavf/rte_pmd_iavf.h | 250 ++++++++++++++
drivers/net/iavf/rte_pmd_iavf_version.map | 13 +
12 files changed, 1039 insertions(+), 114 deletions(-)
create mode 100644 drivers/net/iavf/rte_pmd_iavf.h
diff --git a/config/rte_config.h b/config/rte_config.h
index 0bae630fd..e6db2c840 100644
--- a/config/rte_config.h
+++ b/config/rte_config.h
@@ -124,6 +124,9 @@
#define RTE_LIBRTE_I40E_QUEUE_NUM_PER_VF 4
#define RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM 4
+/* iavf defines */
+#undef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
+
/* Ring net PMD settings */
#define RTE_PMD_RING_MAX_RX_RINGS 16
#define RTE_PMD_RING_MAX_TX_RINGS 16
diff --git a/doc/guides/nics/intel_vf.rst b/doc/guides/nics/intel_vf.rst
index ade515259..207f45614 100644
--- a/doc/guides/nics/intel_vf.rst
+++ b/doc/guides/nics/intel_vf.rst
@@ -615,3 +615,19 @@ which belongs to the destination VF on the VM.
.. figure:: img/inter_vm_comms.*
Inter-VM Communication
+
+
+Pre-Installation Configuration
+------------------------------
+
+Config File Options
+~~~~~~~~~~~~~~~~~~~
+
+The following options can be modified in the ``config`` file.
+Please note that enabling debugging options may affect system performance.
+
+- ``CONFIG_RTE_LIBRTE_IAVF_16BYTE_RX_DESC`` (default ``n``)
+
+ Toggle the use of a 16-byte Rx descriptor; by default the Rx descriptor is 32 bytes.
+ Configuring a 16-byte Rx descriptor may cause a negotiation failure during VF driver
+ initialization if the PF driver does not support it.
diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index 4bcf220c3..96d8c1448 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -84,6 +84,12 @@ New Features
* Added support for 200G PAM4 link speed.
+* **Updated Intel iavf driver.**
+
+ Updated iavf PMD with new features and improvements, including:
+
+ * Added support for flexible descriptor metadata extraction.
+
Removed Items
-------------
diff --git a/drivers/net/iavf/iavf.h b/drivers/net/iavf/iavf.h
index 3198d85b3..d56611608 100644
--- a/drivers/net/iavf/iavf.h
+++ b/drivers/net/iavf/iavf.h
@@ -119,7 +119,7 @@ struct iavf_info {
struct virtchnl_vf_resource *vf_res; /* VF resource */
struct virtchnl_vsi_resource *vsi_res; /* LAN VSI */
uint64_t supported_rxdid;
-
+ uint8_t *proto_xtr; /* proto xtr type for all queues */
volatile enum virtchnl_ops pend_cmd; /* pending command not finished */
uint32_t cmd_retval; /* return value of the cmd response from PF */
uint8_t *aq_resp; /* buffer to store the adminq response from PF */
@@ -153,6 +153,27 @@ struct iavf_info {
#define IAVF_MAX_PKT_TYPE 1024
+#define IAVF_MAX_QUEUE_NUM 2048
+
+enum iavf_proto_xtr_type {
+ IAVF_PROTO_XTR_NONE,
+ IAVF_PROTO_XTR_VLAN,
+ IAVF_PROTO_XTR_IPV4,
+ IAVF_PROTO_XTR_IPV6,
+ IAVF_PROTO_XTR_IPV6_FLOW,
+ IAVF_PROTO_XTR_TCP,
+ IAVF_PROTO_XTR_IP_OFFSET,
+ IAVF_PROTO_XTR_MAX,
+};
+
+/**
+ * Cache devargs parse result.
+ */
+struct iavf_devargs {
+ uint8_t proto_xtr_dflt;
+ uint8_t proto_xtr[IAVF_MAX_QUEUE_NUM];
+};
+
/* Structure to store private data for each VF instance. */
struct iavf_adapter {
struct iavf_hw hw;
@@ -166,6 +187,7 @@ struct iavf_adapter {
const uint32_t *ptype_tbl;
bool stopped;
uint16_t fdir_ref_cnt;
+ struct iavf_devargs devargs;
};
/* IAVF_DEV_PRIVATE_TO */
diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
index 440da7d76..a88d53ab0 100644
--- a/drivers/net/iavf/iavf_ethdev.c
+++ b/drivers/net/iavf/iavf_ethdev.c
@@ -28,6 +28,49 @@
#include "iavf.h"
#include "iavf_rxtx.h"
#include "iavf_generic_flow.h"
+#include "rte_pmd_iavf.h"
+
+/* devargs */
+#define IAVF_PROTO_XTR_ARG "proto_xtr"
+
+static const char * const iavf_valid_args[] = {
+ IAVF_PROTO_XTR_ARG,
+ NULL
+};
+
+static const struct rte_mbuf_dynfield iavf_proto_xtr_metadata_param = {
+ .name = "iavf_dynfield_proto_xtr_metadata",
+ .size = sizeof(uint32_t),
+ .align = __alignof__(uint32_t),
+ .flags = 0,
+};
+
+struct iavf_proto_xtr_ol {
+ const struct rte_mbuf_dynflag param;
+ uint64_t *ol_flag;
+ bool required;
+};
+
+static struct iavf_proto_xtr_ol iavf_proto_xtr_params[] = {
+ [IAVF_PROTO_XTR_VLAN] = {
+ .param = { .name = "iavf_dynflag_proto_xtr_vlan" },
+ .ol_flag = &rte_net_iavf_dynflag_proto_xtr_vlan_mask },
+ [IAVF_PROTO_XTR_IPV4] = {
+ .param = { .name = "iavf_dynflag_proto_xtr_ipv4" },
+ .ol_flag = &rte_net_iavf_dynflag_proto_xtr_ipv4_mask },
+ [IAVF_PROTO_XTR_IPV6] = {
+ .param = { .name = "iavf_dynflag_proto_xtr_ipv6" },
+ .ol_flag = &rte_net_iavf_dynflag_proto_xtr_ipv6_mask },
+ [IAVF_PROTO_XTR_IPV6_FLOW] = {
+ .param = { .name = "iavf_dynflag_proto_xtr_ipv6_flow" },
+ .ol_flag = &rte_net_iavf_dynflag_proto_xtr_ipv6_flow_mask },
+ [IAVF_PROTO_XTR_TCP] = {
+ .param = { .name = "iavf_dynflag_proto_xtr_tcp" },
+ .ol_flag = &rte_net_iavf_dynflag_proto_xtr_tcp_mask },
+ [IAVF_PROTO_XTR_IP_OFFSET] = {
+ .param = { .name = "ice_dynflag_proto_xtr_ip_offset" },
+ .ol_flag = &rte_net_iavf_dynflag_proto_xtr_ip_offset_mask },
+};
static int iavf_dev_configure(struct rte_eth_dev *dev);
static int iavf_dev_start(struct rte_eth_dev *dev);
@@ -1213,6 +1256,349 @@ iavf_check_vf_reset_done(struct iavf_hw *hw)
return 0;
}
+static int
+iavf_lookup_proto_xtr_type(const char *flex_name)
+{
+ static struct {
+ const char *name;
+ enum iavf_proto_xtr_type type;
+ } xtr_type_map[] = {
+ { "vlan", IAVF_PROTO_XTR_VLAN },
+ { "ipv4", IAVF_PROTO_XTR_IPV4 },
+ { "ipv6", IAVF_PROTO_XTR_IPV6 },
+ { "ipv6_flow", IAVF_PROTO_XTR_IPV6_FLOW },
+ { "tcp", IAVF_PROTO_XTR_TCP },
+ { "ip_offset", IAVF_PROTO_XTR_IP_OFFSET },
+ };
+ uint32_t i;
+
+ for (i = 0; i < RTE_DIM(xtr_type_map); i++) {
+ if (strcmp(flex_name, xtr_type_map[i].name) == 0)
+ return xtr_type_map[i].type;
+ }
+
+ PMD_DRV_LOG(ERR, "wrong proto_xtr type, "
+ "it should be: vlan|ipv4|ipv6|ipv6_flow|tcp|ip_offset");
+
+ return -1;
+}
+
+/**
+ * Parse an element; an element can be a single number, a range, or a
+ * '(' ')' group:
+ * 1) A single number, e.g. 9
+ * 2) A range, two numbers separated by '-', e.g. 2-6
+ * 3) A group, combining multiple 1) or 2) within '( )', e.g. (0,2-4,6)
+ * Within a group, '-' is the range separator and
+ * ',' separates single numbers.
+ */
+static int
+iavf_parse_queue_set(const char *input, int xtr_type,
+ struct iavf_devargs *devargs)
+{
+ const char *str = input;
+ char *end = NULL;
+ uint32_t min, max;
+ uint32_t idx;
+
+ while (isblank(*str))
+ str++;
+
+ if (!isdigit(*str) && *str != '(')
+ return -1;
+
+ /* process single number or single range of number */
+ if (*str != '(') {
+ errno = 0;
+ idx = strtoul(str, &end, 10);
+ if (errno || !end || idx >= IAVF_MAX_QUEUE_NUM)
+ return -1;
+
+ while (isblank(*end))
+ end++;
+
+ min = idx;
+ max = idx;
+
+ /* process single <number>-<number> */
+ if (*end == '-') {
+ end++;
+ while (isblank(*end))
+ end++;
+ if (!isdigit(*end))
+ return -1;
+
+ errno = 0;
+ idx = strtoul(end, &end, 10);
+ if (errno || !end || idx >= IAVF_MAX_QUEUE_NUM)
+ return -1;
+
+ max = idx;
+ while (isblank(*end))
+ end++;
+ }
+
+ if (*end != ':')
+ return -1;
+
+ for (idx = RTE_MIN(min, max);
+ idx <= RTE_MAX(min, max); idx++)
+ devargs->proto_xtr[idx] = xtr_type;
+
+ return 0;
+ }
+
+ /* process set within bracket */
+ str++;
+ while (isblank(*str))
+ str++;
+ if (*str == '\0')
+ return -1;
+
+ min = IAVF_MAX_QUEUE_NUM;
+ do {
+ /* go ahead to the first digit */
+ while (isblank(*str))
+ str++;
+ if (!isdigit(*str))
+ return -1;
+
+ /* get the digit value */
+ errno = 0;
+ idx = strtoul(str, &end, 10);
+ if (errno || !end || idx >= IAVF_MAX_QUEUE_NUM)
+ return -1;
+
+ /* go ahead to separator '-',',' and ')' */
+ while (isblank(*end))
+ end++;
+ if (*end == '-') {
+ if (min == IAVF_MAX_QUEUE_NUM)
+ min = idx;
+ else /* avoid continuous '-' */
+ return -1;
+ } else if (*end == ',' || *end == ')') {
+ max = idx;
+ if (min == IAVF_MAX_QUEUE_NUM)
+ min = idx;
+
+ for (idx = RTE_MIN(min, max);
+ idx <= RTE_MAX(min, max); idx++)
+ devargs->proto_xtr[idx] = xtr_type;
+
+ min = IAVF_MAX_QUEUE_NUM;
+ } else {
+ return -1;
+ }
+
+ str = end + 1;
+ } while (*end != ')' && *end != '\0');
+
+ return 0;
+}
+
+static int
+iavf_parse_queue_proto_xtr(const char *queues, struct iavf_devargs *devargs)
+{
+ const char *queue_start;
+ uint32_t idx;
+ int xtr_type;
+ char flex_name[32];
+
+ while (isblank(*queues))
+ queues++;
+
+ if (*queues != '[') {
+ xtr_type = iavf_lookup_proto_xtr_type(queues);
+ if (xtr_type < 0)
+ return -1;
+
+ devargs->proto_xtr_dflt = xtr_type;
+
+ return 0;
+ }
+
+ queues++;
+ do {
+ while (isblank(*queues))
+ queues++;
+ if (*queues == '\0')
+ return -1;
+
+ queue_start = queues;
+
+ /* go across a complete bracket */
+ if (*queue_start == '(') {
+ queues += strcspn(queues, ")");
+ if (*queues != ')')
+ return -1;
+ }
+
+ /* scan the separator ':' */
+ queues += strcspn(queues, ":");
+ if (*queues++ != ':')
+ return -1;
+ while (isblank(*queues))
+ queues++;
+
+ for (idx = 0; ; idx++) {
+ if (isblank(queues[idx]) ||
+ queues[idx] == ',' ||
+ queues[idx] == ']' ||
+ queues[idx] == '\0')
+ break;
+
+ if (idx > sizeof(flex_name) - 2)
+ return -1;
+
+ flex_name[idx] = queues[idx];
+ }
+ flex_name[idx] = '\0';
+ xtr_type = iavf_lookup_proto_xtr_type(flex_name);
+ if (xtr_type < 0)
+ return -1;
+
+ queues += idx;
+
+ while (isblank(*queues) || *queues == ',' || *queues == ']')
+ queues++;
+
+ if (iavf_parse_queue_set(queue_start, xtr_type, devargs) < 0)
+ return -1;
+ } while (*queues != '\0');
+
+ return 0;
+}
+
+static int
+iavf_handle_proto_xtr_arg(__rte_unused const char *key, const char *value,
+ void *extra_args)
+{
+ struct iavf_devargs *devargs = extra_args;
+
+ if (!value || !extra_args)
+ return -EINVAL;
+
+ if (iavf_parse_queue_proto_xtr(value, devargs) < 0) {
+ PMD_DRV_LOG(ERR, "invalid proto_xtr parameter: '%s'",
+ value);
+ return -1;
+ }
+
+ return 0;
+}
+
+static int iavf_parse_devargs(struct rte_eth_dev *dev)
+{
+ struct iavf_adapter *ad =
+ IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+ struct rte_devargs *devargs = dev->device->devargs;
+ struct rte_kvargs *kvlist;
+ int ret;
+
+ if (!devargs)
+ return 0;
+
+ kvlist = rte_kvargs_parse(devargs->args, iavf_valid_args);
+ if (!kvlist) {
+ PMD_INIT_LOG(ERR, "invalid kvargs key");
+ return -EINVAL;
+ }
+
+ ad->devargs.proto_xtr_dflt = IAVF_PROTO_XTR_NONE;
+ memset(ad->devargs.proto_xtr, IAVF_PROTO_XTR_NONE,
+ sizeof(ad->devargs.proto_xtr));
+
+ ret = rte_kvargs_process(kvlist, IAVF_PROTO_XTR_ARG,
+ &iavf_handle_proto_xtr_arg, &ad->devargs);
+ if (ret)
+ goto bail;
+
+bail:
+ rte_kvargs_free(kvlist);
+ return ret;
+}
+
+static void
+iavf_init_proto_xtr(struct rte_eth_dev *dev)
+{
+ struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+ struct iavf_adapter *ad =
+ IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+ const struct iavf_proto_xtr_ol *xtr_ol;
+ bool proto_xtr_enable = false;
+ int offset;
+ uint16_t i;
+
+ vf->proto_xtr = rte_zmalloc("vf proto xtr",
+ vf->vsi_res->num_queue_pairs, 0);
+ if (unlikely(!(vf->proto_xtr))) {
+ PMD_DRV_LOG(ERR, "no memory for setting up proto_xtr's table");
+ return;
+ }
+
+ for (i = 0; i < vf->vsi_res->num_queue_pairs; i++) {
+ vf->proto_xtr[i] = ad->devargs.proto_xtr[i] !=
+ IAVF_PROTO_XTR_NONE ?
+ ad->devargs.proto_xtr[i] :
+ ad->devargs.proto_xtr_dflt;
+
+ if (vf->proto_xtr[i] != IAVF_PROTO_XTR_NONE) {
+ uint8_t type = vf->proto_xtr[i];
+
+ iavf_proto_xtr_params[type].required = true;
+ proto_xtr_enable = true;
+ }
+ }
+
+ if (likely(!proto_xtr_enable))
+ return;
+
+ offset = rte_mbuf_dynfield_register(&iavf_proto_xtr_metadata_param);
+ if (unlikely(offset == -1)) {
+ PMD_DRV_LOG(ERR,
+ "failed to extract protocol metadata, error %d",
+ -rte_errno);
+ return;
+ }
+
+ PMD_DRV_LOG(DEBUG,
+ "proto_xtr metadata offset in mbuf is : %d",
+ offset);
+ rte_net_iavf_dynfield_proto_xtr_metadata_offs = offset;
+
+ for (i = 0; i < RTE_DIM(iavf_proto_xtr_params); i++) {
+ xtr_ol = &iavf_proto_xtr_params[i];
+
+ uint8_t rxdid = iavf_proto_xtr_type_to_rxdid((uint8_t)i);
+
+ if (!xtr_ol->required)
+ continue;
+
+ if (!(vf->supported_rxdid & BIT(rxdid))) {
+ PMD_DRV_LOG(ERR,
+ "rxdid[%u] is not supported in hardware",
+ rxdid);
+ rte_net_iavf_dynfield_proto_xtr_metadata_offs = -1;
+ break;
+ }
+
+ offset = rte_mbuf_dynflag_register(&xtr_ol->param);
+ if (unlikely(offset == -1)) {
+ PMD_DRV_LOG(ERR,
+ "failed to register proto_xtr offload '%s', error %d",
+ xtr_ol->param.name, -rte_errno);
+
+ rte_net_iavf_dynfield_proto_xtr_metadata_offs = -1;
+ break;
+ }
+
+ PMD_DRV_LOG(DEBUG,
+ "proto_xtr offload '%s' offset in mbuf is : %d",
+ xtr_ol->param.name, offset);
+ *xtr_ol->ol_flag = 1ULL << offset;
+ }
+}
+
static int
iavf_init_vf(struct rte_eth_dev *dev)
{
@@ -1222,6 +1608,12 @@ iavf_init_vf(struct rte_eth_dev *dev)
struct iavf_hw *hw = IAVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+ err = iavf_parse_devargs(dev);
+ if (err) {
+ PMD_INIT_LOG(ERR, "Failed to parse devargs");
+ goto err;
+ }
+
err = iavf_set_mac_type(hw);
if (err) {
PMD_INIT_LOG(ERR, "set_mac_type failed: %d", err);
@@ -1287,6 +1679,8 @@ iavf_init_vf(struct rte_eth_dev *dev)
vf->vf_reset = false;
+ iavf_init_proto_xtr(dev);
+
return 0;
err_rss:
rte_free(vf->rss_key);
diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
index 05a7dd898..b3534472e 100644
--- a/drivers/net/iavf/iavf_rxtx.c
+++ b/drivers/net/iavf/iavf_rxtx.c
@@ -26,6 +26,35 @@
#include "iavf.h"
#include "iavf_rxtx.h"
+#include "rte_pmd_iavf.h"
+
+/* Offset of mbuf dynamic field for protocol extraction's metadata */
+int rte_net_iavf_dynfield_proto_xtr_metadata_offs = -1;
+
+/* Mask of mbuf dynamic flags for protocol extraction's type */
+uint64_t rte_net_iavf_dynflag_proto_xtr_vlan_mask;
+uint64_t rte_net_iavf_dynflag_proto_xtr_ipv4_mask;
+uint64_t rte_net_iavf_dynflag_proto_xtr_ipv6_mask;
+uint64_t rte_net_iavf_dynflag_proto_xtr_ipv6_flow_mask;
+uint64_t rte_net_iavf_dynflag_proto_xtr_tcp_mask;
+uint64_t rte_net_iavf_dynflag_proto_xtr_ip_offset_mask;
+
+uint8_t
+iavf_proto_xtr_type_to_rxdid(uint8_t flex_type)
+{
+ static uint8_t rxdid_map[] = {
+ [IAVF_PROTO_XTR_NONE] = IAVF_RXDID_COMMS_OVS_1,
+ [IAVF_PROTO_XTR_VLAN] = IAVF_RXDID_COMMS_AUX_VLAN,
+ [IAVF_PROTO_XTR_IPV4] = IAVF_RXDID_COMMS_AUX_IPV4,
+ [IAVF_PROTO_XTR_IPV6] = IAVF_RXDID_COMMS_AUX_IPV6,
+ [IAVF_PROTO_XTR_IPV6_FLOW] = IAVF_RXDID_COMMS_AUX_IPV6_FLOW,
+ [IAVF_PROTO_XTR_TCP] = IAVF_RXDID_COMMS_AUX_TCP,
+ [IAVF_PROTO_XTR_IP_OFFSET] = IAVF_RXDID_COMMS_AUX_IP_OFFSET,
+ };
+
+ return flex_type < RTE_DIM(rxdid_map) ?
+ rxdid_map[flex_type] : IAVF_RXDID_COMMS_OVS_1;
+}
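
The function above is a bounds-checked table lookup with the OVS profile as the
fallback. A hedged Python analogue (the numeric RXDID values are assumed to
follow the `iavf_rxdid` enum in this patch, where `IAVF_RXDID_COMMS_AUX_TCP` is
21 and `IAVF_RXDID_COMMS_OVS_1` is 22):

```python
IAVF_RXDID_COMMS_OVS_1 = 22  # default descriptor profile

# proto_xtr type -> RXDID; values assumed from the iavf_rxdid enum
RXDID_MAP = {
    "none": IAVF_RXDID_COMMS_OVS_1,
    "vlan": 17, "ipv4": 18, "ipv6": 19,
    "ipv6_flow": 20, "tcp": 21, "ip_offset": 25,
}

def proto_xtr_type_to_rxdid(flex_type):
    # unknown types fall back to the default OVS profile, as in the C code
    return RXDID_MAP.get(flex_type, IAVF_RXDID_COMMS_OVS_1)
```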
static inline int
check_rx_thresh(uint16_t nb_desc, uint16_t thresh)
@@ -294,6 +323,160 @@ static const struct iavf_txq_ops def_txq_ops = {
.release_mbufs = release_txq_mbufs,
};
+static inline void
+iavf_rxd_to_pkt_fields_by_comms_ovs(__rte_unused struct iavf_rx_queue *rxq,
+ struct rte_mbuf *mb,
+ volatile union iavf_rx_flex_desc *rxdp)
+{
+ volatile struct iavf_32b_rx_flex_desc_comms_ovs *desc =
+ (volatile struct iavf_32b_rx_flex_desc_comms_ovs *)rxdp;
+#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
+ uint16_t stat_err;
+#endif
+
+ if (desc->flow_id != 0xFFFFFFFF) {
+ mb->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+ mb->hash.fdir.hi = rte_le_to_cpu_32(desc->flow_id);
+ }
+
+#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
+ stat_err = rte_le_to_cpu_16(desc->status_error0);
+ if (likely(stat_err & (1 << IAVF_RX_FLEX_DESC_STATUS0_RSS_VALID_S))) {
+ mb->ol_flags |= PKT_RX_RSS_HASH;
+ mb->hash.rss = rte_le_to_cpu_32(desc->rss_hash);
+ }
+#endif
+}
+
+static inline void
+iavf_rxd_to_pkt_fields_by_comms_aux_v1(struct iavf_rx_queue *rxq,
+ struct rte_mbuf *mb,
+ volatile union iavf_rx_flex_desc *rxdp)
+{
+ volatile struct iavf_32b_rx_flex_desc_comms *desc =
+ (volatile struct iavf_32b_rx_flex_desc_comms *)rxdp;
+ uint16_t stat_err;
+
+ stat_err = rte_le_to_cpu_16(desc->status_error0);
+ if (likely(stat_err & (1 << IAVF_RX_FLEX_DESC_STATUS0_RSS_VALID_S))) {
+ mb->ol_flags |= PKT_RX_RSS_HASH;
+ mb->hash.rss = rte_le_to_cpu_32(desc->rss_hash);
+ }
+
+#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
+ if (desc->flow_id != 0xFFFFFFFF) {
+ mb->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+ mb->hash.fdir.hi = rte_le_to_cpu_32(desc->flow_id);
+ }
+
+ if (rxq->xtr_ol_flag) {
+ uint32_t metadata = 0;
+
+ stat_err = rte_le_to_cpu_16(desc->status_error1);
+
+ if (stat_err & (1 << IAVF_RX_FLEX_DESC_STATUS1_XTRMD4_VALID_S))
+ metadata = rte_le_to_cpu_16(desc->flex_ts.flex.aux0);
+
+ if (stat_err & (1 << IAVF_RX_FLEX_DESC_STATUS1_XTRMD5_VALID_S))
+ metadata |=
+ rte_le_to_cpu_16(desc->flex_ts.flex.aux1) << 16;
+
+ if (metadata) {
+ mb->ol_flags |= rxq->xtr_ol_flag;
+
+ *RTE_NET_IAVF_DYNF_PROTO_XTR_METADATA(mb) = metadata;
+ }
+ }
+#endif
+}
+
+static inline void
+iavf_rxd_to_pkt_fields_by_comms_aux_v2(struct iavf_rx_queue *rxq,
+ struct rte_mbuf *mb,
+ volatile union iavf_rx_flex_desc *rxdp)
+{
+ volatile struct iavf_32b_rx_flex_desc_comms *desc =
+ (volatile struct iavf_32b_rx_flex_desc_comms *)rxdp;
+ uint16_t stat_err;
+
+ stat_err = rte_le_to_cpu_16(desc->status_error0);
+ if (likely(stat_err & (1 << IAVF_RX_FLEX_DESC_STATUS0_RSS_VALID_S))) {
+ mb->ol_flags |= PKT_RX_RSS_HASH;
+ mb->hash.rss = rte_le_to_cpu_32(desc->rss_hash);
+ }
+
+#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
+ if (desc->flow_id != 0xFFFFFFFF) {
+ mb->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+ mb->hash.fdir.hi = rte_le_to_cpu_32(desc->flow_id);
+ }
+
+ if (rxq->xtr_ol_flag) {
+ uint32_t metadata = 0;
+
+ if (desc->flex_ts.flex.aux0 != 0xFFFF)
+ metadata = rte_le_to_cpu_16(desc->flex_ts.flex.aux0);
+ else if (desc->flex_ts.flex.aux1 != 0xFFFF)
+ metadata = rte_le_to_cpu_16(desc->flex_ts.flex.aux1);
+
+ if (metadata) {
+ mb->ol_flags |= rxq->xtr_ol_flag;
+
+ *RTE_NET_IAVF_DYNF_PROTO_XTR_METADATA(mb) = metadata;
+ }
+ }
+#endif
+}
+
+static void
+iavf_select_rxd_to_pkt_fields_handler(struct iavf_rx_queue *rxq, uint32_t rxdid)
+{
+ switch (rxdid) {
+ case IAVF_RXDID_COMMS_AUX_VLAN:
+ rxq->xtr_ol_flag = rte_net_iavf_dynflag_proto_xtr_vlan_mask;
+ rxq->rxd_to_pkt_fields =
+ iavf_rxd_to_pkt_fields_by_comms_aux_v1;
+ break;
+ case IAVF_RXDID_COMMS_AUX_IPV4:
+ rxq->xtr_ol_flag = rte_net_iavf_dynflag_proto_xtr_ipv4_mask;
+ rxq->rxd_to_pkt_fields =
+ iavf_rxd_to_pkt_fields_by_comms_aux_v1;
+ break;
+ case IAVF_RXDID_COMMS_AUX_IPV6:
+ rxq->xtr_ol_flag = rte_net_iavf_dynflag_proto_xtr_ipv6_mask;
+ rxq->rxd_to_pkt_fields =
+ iavf_rxd_to_pkt_fields_by_comms_aux_v1;
+ break;
+ case IAVF_RXDID_COMMS_AUX_IPV6_FLOW:
+ rxq->xtr_ol_flag =
+ rte_net_iavf_dynflag_proto_xtr_ipv6_flow_mask;
+ rxq->rxd_to_pkt_fields =
+ iavf_rxd_to_pkt_fields_by_comms_aux_v1;
+ break;
+ case IAVF_RXDID_COMMS_AUX_TCP:
+ rxq->xtr_ol_flag = rte_net_iavf_dynflag_proto_xtr_tcp_mask;
+ rxq->rxd_to_pkt_fields =
+ iavf_rxd_to_pkt_fields_by_comms_aux_v1;
+ break;
+ case IAVF_RXDID_COMMS_AUX_IP_OFFSET:
+ rxq->xtr_ol_flag =
+ rte_net_iavf_dynflag_proto_xtr_ip_offset_mask;
+ rxq->rxd_to_pkt_fields =
+ iavf_rxd_to_pkt_fields_by_comms_aux_v2;
+ break;
+ case IAVF_RXDID_COMMS_OVS_1:
+ rxq->rxd_to_pkt_fields = iavf_rxd_to_pkt_fields_by_comms_ovs;
+ break;
+ default:
+ /* update this according to the RXDID for FLEX_DESC_NONE */
+ rxq->rxd_to_pkt_fields = iavf_rxd_to_pkt_fields_by_comms_ovs;
+ break;
+ }
+
+ if (!rte_net_iavf_dynf_proto_xtr_metadata_avail())
+ rxq->xtr_ol_flag = 0;
+}
+
int
iavf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
uint16_t nb_desc, unsigned int socket_id,
@@ -309,6 +492,7 @@ iavf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
struct iavf_rx_queue *rxq;
const struct rte_memzone *mz;
uint32_t ring_size;
+ uint8_t proto_xtr;
uint16_t len;
uint16_t rx_free_thresh;
@@ -346,14 +530,18 @@ iavf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
return -ENOMEM;
}
- if (vf->vf_res->vf_cap_flags &
- VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC &&
- vf->supported_rxdid & BIT(IAVF_RXDID_COMMS_OVS_1)) {
- rxq->rxdid = IAVF_RXDID_COMMS_OVS_1;
+ if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC) {
+ proto_xtr = vf->proto_xtr ? vf->proto_xtr[queue_idx] :
+ IAVF_PROTO_XTR_NONE;
+ rxq->rxdid = iavf_proto_xtr_type_to_rxdid(proto_xtr);
+ rxq->proto_xtr = proto_xtr;
} else {
rxq->rxdid = IAVF_RXDID_LEGACY_1;
+ rxq->proto_xtr = IAVF_PROTO_XTR_NONE;
}
+ iavf_select_rxd_to_pkt_fields_handler(rxq, rxq->rxdid);
+
rxq->mp = mp;
rxq->nb_rx_desc = nb_desc;
rxq->rx_free_thresh = rx_free_thresh;
@@ -715,6 +903,14 @@ iavf_stop_queues(struct rte_eth_dev *dev)
}
}
+#define IAVF_RX_FLEX_ERR0_BITS \
+ ((1 << IAVF_RX_FLEX_DESC_STATUS0_HBO_S) | \
+ (1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_IPE_S) | \
+ (1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_L4E_S) | \
+ (1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_EIPE_S) | \
+ (1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_EUDPE_S) | \
+ (1 << IAVF_RX_FLEX_DESC_STATUS0_RXE_S))
+
static inline void
iavf_rxd_to_vlan_tci(struct rte_mbuf *mb, volatile union iavf_rx_desc *rxdp)
{
@@ -740,6 +936,21 @@ iavf_flex_rxd_to_vlan_tci(struct rte_mbuf *mb,
} else {
mb->vlan_tci = 0;
}
+
+#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
+ if (rte_le_to_cpu_16(rxdp->wb.status_error1) &
+ (1 << IAVF_RX_FLEX_DESC_STATUS1_L2TAG2P_S)) {
+ mb->ol_flags |= PKT_RX_QINQ_STRIPPED | PKT_RX_QINQ |
+ PKT_RX_VLAN_STRIPPED | PKT_RX_VLAN;
+ mb->vlan_tci_outer = mb->vlan_tci;
+ mb->vlan_tci = rte_le_to_cpu_16(rxdp->wb.l2tag2_2nd);
+ PMD_RX_LOG(DEBUG, "Descriptor l2tag2_1: %u, l2tag2_2: %u",
+ rte_le_to_cpu_16(rxdp->wb.l2tag2_1st),
+ rte_le_to_cpu_16(rxdp->wb.l2tag2_2nd));
+ } else {
+ mb->vlan_tci_outer = 0;
+ }
+#endif
}
/* Translate the rx descriptor status and error fields to pkt flags */
@@ -804,30 +1015,6 @@ iavf_rxd_build_fdir(volatile union iavf_rx_desc *rxdp, struct rte_mbuf *mb)
return flags;
}
-
-/* Translate the rx flex descriptor status to pkt flags */
-static inline void
-iavf_rxd_to_pkt_fields(struct rte_mbuf *mb,
- volatile union iavf_rx_flex_desc *rxdp)
-{
- volatile struct iavf_32b_rx_flex_desc_comms_ovs *desc =
- (volatile struct iavf_32b_rx_flex_desc_comms_ovs *)rxdp;
-#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
- uint16_t stat_err;
-
- stat_err = rte_le_to_cpu_16(desc->status_error0);
- if (likely(stat_err & (1 << IAVF_RX_FLEX_DESC_STATUS0_RSS_VALID_S))) {
- mb->ol_flags |= PKT_RX_RSS_HASH;
- mb->hash.rss = rte_le_to_cpu_32(desc->rss_hash);
- }
-#endif
-
- if (desc->flow_id != 0xFFFFFFFF) {
- mb->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
- mb->hash.fdir.hi = rte_le_to_cpu_32(desc->flow_id);
- }
-}
-
#define IAVF_RX_FLEX_ERR0_BITS \
((1 << IAVF_RX_FLEX_DESC_STATUS0_HBO_S) | \
(1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_IPE_S) | \
@@ -1082,7 +1269,7 @@ iavf_recv_pkts_flex_rxd(void *rx_queue,
rxm->packet_type = ptype_tbl[IAVF_RX_FLEX_DESC_PTYPE_M &
rte_le_to_cpu_16(rxd.wb.ptype_flex_flags0)];
iavf_flex_rxd_to_vlan_tci(rxm, &rxd);
- iavf_rxd_to_pkt_fields(rxm, &rxd);
+ rxq->rxd_to_pkt_fields(rxq, rxm, &rxd);
pkt_flags = iavf_flex_rxd_error_to_pkt_flags(rx_stat_err0);
rxm->ol_flags |= pkt_flags;
@@ -1223,7 +1410,7 @@ iavf_recv_scattered_pkts_flex_rxd(void *rx_queue, struct rte_mbuf **rx_pkts,
first_seg->packet_type = ptype_tbl[IAVF_RX_FLEX_DESC_PTYPE_M &
rte_le_to_cpu_16(rxd.wb.ptype_flex_flags0)];
iavf_flex_rxd_to_vlan_tci(first_seg, &rxd);
- iavf_rxd_to_pkt_fields(first_seg, &rxd);
+ rxq->rxd_to_pkt_fields(rxq, first_seg, &rxd);
pkt_flags = iavf_flex_rxd_error_to_pkt_flags(rx_stat_err0);
first_seg->ol_flags |= pkt_flags;
@@ -1460,7 +1647,7 @@ iavf_rx_scan_hw_ring_flex_rxd(struct iavf_rx_queue *rxq)
mb->packet_type = ptype_tbl[IAVF_RX_FLEX_DESC_PTYPE_M &
rte_le_to_cpu_16(rxdp[j].wb.ptype_flex_flags0)];
iavf_flex_rxd_to_vlan_tci(mb, &rxdp[j]);
- iavf_rxd_to_pkt_fields(mb, &rxdp[j]);
+ rxq->rxd_to_pkt_fields(rxq, mb, &rxdp[j]);
stat_err0 = rte_le_to_cpu_16(rxdp[j].wb.status_error0);
pkt_flags = iavf_flex_rxd_error_to_pkt_flags(stat_err0);
@@ -1652,7 +1839,7 @@ rx_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
if (rxq->rx_nb_avail)
return iavf_rx_fill_from_stage(rxq, rx_pkts, nb_pkts);
- if (rxq->rxdid == IAVF_RXDID_COMMS_OVS_1)
+ if (rxq->rxdid >= IAVF_RXDID_FLEX_NIC && rxq->rxdid <= IAVF_RXDID_LAST)
nb_rx = (uint16_t)iavf_rx_scan_hw_ring_flex_rxd(rxq);
else
nb_rx = (uint16_t)iavf_rx_scan_hw_ring(rxq);
@@ -2100,6 +2287,7 @@ iavf_set_rx_function(struct rte_eth_dev *dev)
struct iavf_adapter *adapter =
IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+
#ifdef RTE_ARCH_X86
struct iavf_rx_queue *rxq;
int i;
diff --git a/drivers/net/iavf/iavf_rxtx.h b/drivers/net/iavf/iavf_rxtx.h
index 59625a979..5225493bc 100644
--- a/drivers/net/iavf/iavf_rxtx.h
+++ b/drivers/net/iavf/iavf_rxtx.h
@@ -57,6 +57,77 @@
#define IAVF_TX_OFFLOAD_NOTSUP_MASK \
(PKT_TX_OFFLOAD_MASK ^ IAVF_TX_OFFLOAD_MASK)
+/* Rx Flex Descriptors
+ * These descriptors are used instead of the legacy version descriptors
+ */
+union iavf_16b_rx_flex_desc {
+ struct {
+ __le64 pkt_addr; /* Packet buffer address */
+ __le64 hdr_addr; /* Header buffer address */
+ /* bit 0 of hdr_addr is DD bit */
+ } read;
+ struct {
+ /* Qword 0 */
+ u8 rxdid; /* descriptor builder profile ID */
+ u8 mir_id_umb_cast; /* mirror=[5:0], umb=[7:6] */
+ __le16 ptype_flex_flags0; /* ptype=[9:0], ff0=[15:10] */
+ __le16 pkt_len; /* [15:14] are reserved */
+ __le16 hdr_len_sph_flex_flags1; /* header=[10:0] */
+ /* sph=[11:11] */
+ /* ff1/ext=[15:12] */
+
+ /* Qword 1 */
+ __le16 status_error0;
+ __le16 l2tag1;
+ __le16 flex_meta0;
+ __le16 flex_meta1;
+ } wb; /* writeback */
+};
+
+union iavf_32b_rx_flex_desc {
+ struct {
+ __le64 pkt_addr; /* Packet buffer address */
+ __le64 hdr_addr; /* Header buffer address */
+ /* bit 0 of hdr_addr is DD bit */
+ __le64 rsvd1;
+ __le64 rsvd2;
+ } read;
+ struct {
+ /* Qword 0 */
+ u8 rxdid; /* descriptor builder profile ID */
+ u8 mir_id_umb_cast; /* mirror=[5:0], umb=[7:6] */
+ __le16 ptype_flex_flags0; /* ptype=[9:0], ff0=[15:10] */
+ __le16 pkt_len; /* [15:14] are reserved */
+ __le16 hdr_len_sph_flex_flags1; /* header=[10:0] */
+ /* sph=[11:11] */
+ /* ff1/ext=[15:12] */
+
+ /* Qword 1 */
+ __le16 status_error0;
+ __le16 l2tag1;
+ __le16 flex_meta0;
+ __le16 flex_meta1;
+
+ /* Qword 2 */
+ __le16 status_error1;
+ u8 flex_flags2;
+ u8 time_stamp_low;
+ __le16 l2tag2_1st;
+ __le16 l2tag2_2nd;
+
+ /* Qword 3 */
+ __le16 flex_meta2;
+ __le16 flex_meta3;
+ union {
+ struct {
+ __le16 flex_meta4;
+ __le16 flex_meta5;
+ } flex;
+ __le32 ts_high;
+ } flex_ts;
+ } wb; /* writeback */
+};
+
/* HW desc structure, both 16-byte and 32-byte types are supported */
#ifdef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
#define iavf_rx_desc iavf_16byte_rx_desc
@@ -66,6 +137,10 @@
#define iavf_rx_flex_desc iavf_32b_rx_flex_desc
#endif
+typedef void (*iavf_rxd_to_pkt_fields_t)(struct iavf_rx_queue *rxq,
+ struct rte_mbuf *mb,
+ volatile union iavf_rx_flex_desc *rxdp);
+
struct iavf_rxq_ops {
void (*release_mbufs)(struct iavf_rx_queue *rxq);
};
@@ -114,6 +189,11 @@ struct iavf_rx_queue {
bool q_set; /* if rx queue has been configured */
bool rx_deferred_start; /* don't start this queue in dev start */
const struct iavf_rxq_ops *ops;
+ uint8_t proto_xtr; /* protocol extraction type */
+ uint64_t xtr_ol_flag;
+ /* flexible descriptor metadata extraction offload flag */
+ iavf_rxd_to_pkt_fields_t rxd_to_pkt_fields;
+ /* handle flexible descriptor by RXDID */
};
struct iavf_tx_entry {
@@ -161,77 +241,6 @@ union iavf_tx_offload {
};
};
-/* Rx Flex Descriptors
- * These descriptors are used instead of the legacy version descriptors
- */
-union iavf_16b_rx_flex_desc {
- struct {
- __le64 pkt_addr; /* Packet buffer address */
- __le64 hdr_addr; /* Header buffer address */
- /* bit 0 of hdr_addr is DD bit */
- } read;
- struct {
- /* Qword 0 */
- u8 rxdid; /* descriptor builder profile ID */
- u8 mir_id_umb_cast; /* mirror=[5:0], umb=[7:6] */
- __le16 ptype_flex_flags0; /* ptype=[9:0], ff0=[15:10] */
- __le16 pkt_len; /* [15:14] are reserved */
- __le16 hdr_len_sph_flex_flags1; /* header=[10:0] */
- /* sph=[11:11] */
- /* ff1/ext=[15:12] */
-
- /* Qword 1 */
- __le16 status_error0;
- __le16 l2tag1;
- __le16 flex_meta0;
- __le16 flex_meta1;
- } wb; /* writeback */
-};
-
-union iavf_32b_rx_flex_desc {
- struct {
- __le64 pkt_addr; /* Packet buffer address */
- __le64 hdr_addr; /* Header buffer address */
- /* bit 0 of hdr_addr is DD bit */
- __le64 rsvd1;
- __le64 rsvd2;
- } read;
- struct {
- /* Qword 0 */
- u8 rxdid; /* descriptor builder profile ID */
- u8 mir_id_umb_cast; /* mirror=[5:0], umb=[7:6] */
- __le16 ptype_flex_flags0; /* ptype=[9:0], ff0=[15:10] */
- __le16 pkt_len; /* [15:14] are reserved */
- __le16 hdr_len_sph_flex_flags1; /* header=[10:0] */
- /* sph=[11:11] */
- /* ff1/ext=[15:12] */
-
- /* Qword 1 */
- __le16 status_error0;
- __le16 l2tag1;
- __le16 flex_meta0;
- __le16 flex_meta1;
-
- /* Qword 2 */
- __le16 status_error1;
- u8 flex_flags2;
- u8 time_stamp_low;
- __le16 l2tag2_1st;
- __le16 l2tag2_2nd;
-
- /* Qword 3 */
- __le16 flex_meta2;
- __le16 flex_meta3;
- union {
- struct {
- __le16 flex_meta4;
- __le16 flex_meta5;
- } flex;
- __le32 ts_high;
- } flex_ts;
- } wb; /* writeback */
-};
-
/* Rx Flex Descriptor
* RxDID Profile ID 16-21
* Flex-field 0: RSS hash lower 16-bits
@@ -331,6 +340,7 @@ enum iavf_rxdid {
IAVF_RXDID_COMMS_AUX_TCP = 21,
IAVF_RXDID_COMMS_OVS_1 = 22,
IAVF_RXDID_COMMS_OVS_2 = 23,
+ IAVF_RXDID_COMMS_AUX_IP_OFFSET = 25,
IAVF_RXDID_LAST = 63,
};
@@ -355,6 +365,20 @@ enum iavf_rx_flex_desc_status_error_0_bits {
IAVF_RX_FLEX_DESC_STATUS0_LAST /* this entry must be last!!! */
};
+enum iavf_rx_flex_desc_status_error_1_bits {
+ /* Note: These are predefined bit offsets */
+ IAVF_RX_FLEX_DESC_STATUS1_CPM_S = 0, /* 4 bits */
+ IAVF_RX_FLEX_DESC_STATUS1_NAT_S = 4,
+ IAVF_RX_FLEX_DESC_STATUS1_CRYPTO_S = 5,
+ /* [10:6] reserved */
+ IAVF_RX_FLEX_DESC_STATUS1_L2TAG2P_S = 11,
+ IAVF_RX_FLEX_DESC_STATUS1_XTRMD2_VALID_S = 12,
+ IAVF_RX_FLEX_DESC_STATUS1_XTRMD3_VALID_S = 13,
+ IAVF_RX_FLEX_DESC_STATUS1_XTRMD4_VALID_S = 14,
+ IAVF_RX_FLEX_DESC_STATUS1_XTRMD5_VALID_S = 15,
+ IAVF_RX_FLEX_DESC_STATUS1_LAST /* this entry must be last!!! */
+};
+
/* for iavf_32b_rx_flex_desc.ptype_flex_flags0 member */
#define IAVF_RX_FLEX_DESC_PTYPE_M (0x3FF) /* 10-bits */
@@ -438,6 +462,8 @@ int iavf_tx_vec_dev_check(struct rte_eth_dev *dev);
int iavf_rxq_vec_setup(struct iavf_rx_queue *rxq);
int iavf_txq_vec_setup(struct iavf_tx_queue *txq);
+uint8_t iavf_proto_xtr_type_to_rxdid(uint8_t xtr_type);
+
const uint32_t *iavf_get_default_ptype_table(void);
static inline
diff --git a/drivers/net/iavf/iavf_rxtx_vec_common.h b/drivers/net/iavf/iavf_rxtx_vec_common.h
index 25bb502de..7ad1e0f68 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_common.h
+++ b/drivers/net/iavf/iavf_rxtx_vec_common.h
@@ -224,6 +224,9 @@ iavf_rx_vec_queue_default(struct iavf_rx_queue *rxq)
if (rxq->nb_rx_desc % rxq->rx_free_thresh)
return -1;
+ if (rxq->proto_xtr != IAVF_PROTO_XTR_NONE)
+ return -1;
+
return 0;
}
diff --git a/drivers/net/iavf/iavf_vchnl.c b/drivers/net/iavf/iavf_vchnl.c
index 76f8e38d1..7981dfa30 100644
--- a/drivers/net/iavf/iavf_vchnl.c
+++ b/drivers/net/iavf/iavf_vchnl.c
@@ -647,25 +647,27 @@ iavf_configure_queues(struct iavf_adapter *adapter)
#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
if (vf->vf_res->vf_cap_flags &
- VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC &&
- vf->supported_rxdid & BIT(IAVF_RXDID_COMMS_OVS_1)) {
- vc_qp->rxq.rxdid = IAVF_RXDID_COMMS_OVS_1;
- PMD_DRV_LOG(NOTICE, "request RXDID == %d in "
- "Queue[%d]", vc_qp->rxq.rxdid, i);
+ VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC &&
+ vf->supported_rxdid & BIT(rxq[i]->rxdid)) {
+ vc_qp->rxq.rxdid = rxq[i]->rxdid;
+ PMD_DRV_LOG(NOTICE, "request RXDID[%d] in Queue[%d]",
+ vc_qp->rxq.rxdid, i);
} else {
+ PMD_DRV_LOG(NOTICE, "RXDID[%d] is not supported, "
+ "request default RXDID[%d] in Queue[%d]",
+ rxq[i]->rxdid, IAVF_RXDID_LEGACY_1, i);
vc_qp->rxq.rxdid = IAVF_RXDID_LEGACY_1;
- PMD_DRV_LOG(NOTICE, "request RXDID == %d in "
- "Queue[%d]", vc_qp->rxq.rxdid, i);
}
#else
if (vf->vf_res->vf_cap_flags &
VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC &&
vf->supported_rxdid & BIT(IAVF_RXDID_LEGACY_0)) {
vc_qp->rxq.rxdid = IAVF_RXDID_LEGACY_0;
- PMD_DRV_LOG(NOTICE, "request RXDID == %d in "
- "Queue[%d]", vc_qp->rxq.rxdid, i);
+ PMD_DRV_LOG(NOTICE, "request RXDID[%d] in Queue[%d]",
+ vc_qp->rxq.rxdid, i);
} else {
- PMD_DRV_LOG(ERR, "RXDID == 0 is not supported");
+ PMD_DRV_LOG(ERR, "RXDID[%d] is not supported",
+ IAVF_RXDID_LEGACY_0);
return -1;
}
#endif
diff --git a/drivers/net/iavf/meson.build b/drivers/net/iavf/meson.build
index 33407c503..c1c74571a 100644
--- a/drivers/net/iavf/meson.build
+++ b/drivers/net/iavf/meson.build
@@ -35,3 +35,5 @@ if arch_subdir == 'x86'
objs += iavf_avx2_lib.extract_objects('iavf_rxtx_vec_avx2.c')
endif
endif
+
+install_headers('rte_pmd_iavf.h')
diff --git a/drivers/net/iavf/rte_pmd_iavf.h b/drivers/net/iavf/rte_pmd_iavf.h
new file mode 100644
index 000000000..5e41568c3
--- /dev/null
+++ b/drivers/net/iavf/rte_pmd_iavf.h
@@ -0,0 +1,250 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#ifndef _RTE_PMD_IAVF_H_
+#define _RTE_PMD_IAVF_H_
+
+/**
+ * @file rte_pmd_iavf.h
+ *
+ * iavf PMD specific functions.
+ *
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
+ *
+ */
+
+#include <stdio.h>
+#include <rte_mbuf.h>
+#include <rte_mbuf_dyn.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * The supported network flexible descriptor's extraction metadata format.
+ */
+union rte_net_iavf_proto_xtr_metadata {
+ uint32_t metadata;
+
+ struct {
+ uint16_t data0;
+ uint16_t data1;
+ } raw;
+
+ struct {
+ uint16_t stag_vid:12,
+ stag_dei:1,
+ stag_pcp:3;
+ uint16_t ctag_vid:12,
+ ctag_dei:1,
+ ctag_pcp:3;
+ } vlan;
+
+ struct {
+ uint16_t protocol:8,
+ ttl:8;
+ uint16_t tos:8,
+ ihl:4,
+ version:4;
+ } ipv4;
+
+ struct {
+ uint16_t hoplimit:8,
+ nexthdr:8;
+ uint16_t flowhi4:4,
+ tc:8,
+ version:4;
+ } ipv6;
+
+ struct {
+ uint16_t flowlo16;
+ uint16_t flowhi4:4,
+ tc:8,
+ version:4;
+ } ipv6_flow;
+
+ struct {
+ uint16_t fin:1,
+ syn:1,
+ rst:1,
+ psh:1,
+ ack:1,
+ urg:1,
+ ece:1,
+ cwr:1,
+ res1:4,
+ doff:4;
+ uint16_t rsvd;
+ } tcp;
+
+ uint32_t ip_ofs;
+};
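
The union above overlays several bitfield views on a single 32-bit metadata
word. As an illustration, the VLAN view can be decoded by hand; this sketch
assumes a little-endian layout where the first struct member (`stag`) occupies
the low 16 bits and bitfields are allocated low-bit first, matching the C
declaration order:

```python
def decode_vlan_metadata(word):
    """Split a 32-bit proto_xtr word into stag/ctag VLAN fields (sketch)."""
    stag, ctag = word & 0xFFFF, word >> 16
    return {
        "stag_vid": stag & 0xFFF,          # bits [11:0]
        "stag_dei": (stag >> 12) & 0x1,    # bit  [12]
        "stag_pcp": (stag >> 13) & 0x7,    # bits [15:13]
        "ctag_vid": ctag & 0xFFF,
        "ctag_dei": (ctag >> 12) & 0x1,
        "ctag_pcp": (ctag >> 13) & 0x7,
    }
```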
+
+/* Offset of mbuf dynamic field for flexible descriptor's extraction data */
+extern int rte_net_iavf_dynfield_proto_xtr_metadata_offs;
+
+/* Mask of mbuf dynamic flags for flexible descriptor's extraction type */
+extern uint64_t rte_net_iavf_dynflag_proto_xtr_vlan_mask;
+extern uint64_t rte_net_iavf_dynflag_proto_xtr_ipv4_mask;
+extern uint64_t rte_net_iavf_dynflag_proto_xtr_ipv6_mask;
+extern uint64_t rte_net_iavf_dynflag_proto_xtr_ipv6_flow_mask;
+extern uint64_t rte_net_iavf_dynflag_proto_xtr_tcp_mask;
+extern uint64_t rte_net_iavf_dynflag_proto_xtr_ip_offset_mask;
+
+/**
+ * The mbuf dynamic field pointer for flexible descriptor's extraction metadata.
+ */
+#define RTE_NET_IAVF_DYNF_PROTO_XTR_METADATA(m) \
+ RTE_MBUF_DYNFIELD((m), \
+ rte_net_iavf_dynfield_proto_xtr_metadata_offs, \
+ uint32_t *)
+
+/**
+ * The mbuf dynamic flag for VLAN protocol extraction metadata, it is valid
+ * when dev_args 'proto_xtr' has 'vlan' specified.
+ */
+#define RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_VLAN \
+ (rte_net_iavf_dynflag_proto_xtr_vlan_mask)
+
+/**
+ * The mbuf dynamic flag for IPv4 protocol extraction metadata, it is valid
+ * when dev_args 'proto_xtr' has 'ipv4' specified.
+ */
+#define RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_IPV4 \
+ (rte_net_iavf_dynflag_proto_xtr_ipv4_mask)
+
+/**
+ * The mbuf dynamic flag for IPv6 protocol extraction metadata, it is valid
+ * when dev_args 'proto_xtr' has 'ipv6' specified.
+ */
+#define RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_IPV6 \
+ (rte_net_iavf_dynflag_proto_xtr_ipv6_mask)
+
+/**
+ * The mbuf dynamic flag for IPv6 with flow protocol extraction metadata, it is
+ * valid when dev_args 'proto_xtr' has 'ipv6_flow' specified.
+ */
+#define RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_IPV6_FLOW \
+ (rte_net_iavf_dynflag_proto_xtr_ipv6_flow_mask)
+
+/**
+ * The mbuf dynamic flag for TCP protocol extraction metadata, it is valid
+ * when dev_args 'proto_xtr' has 'tcp' specified.
+ */
+#define RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_TCP \
+ (rte_net_iavf_dynflag_proto_xtr_tcp_mask)
+
+/**
+ * The mbuf dynamic flag for IP_OFFSET extraction metadata, it is valid
+ * when dev_args 'proto_xtr' has 'ip_offset' specified.
+ */
+#define RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_IP_OFFSET \
+ (rte_net_iavf_dynflag_proto_xtr_ip_offset_mask)
+
+/**
+ * Check if mbuf dynamic field for flexible descriptor's extraction metadata
+ * is registered.
+ *
+ * @return
+ * True if registered, false otherwise.
+ */
+__rte_experimental
+static __rte_always_inline int
+rte_net_iavf_dynf_proto_xtr_metadata_avail(void)
+{
+ return rte_net_iavf_dynfield_proto_xtr_metadata_offs != -1;
+}
+
+/**
+ * Get the mbuf dynamic field for flexible descriptor's extraction metadata.
+ *
+ * @param m
+ * The pointer to the mbuf.
+ * @return
+ * The saved protocol extraction metadata.
+ */
+__rte_experimental
+static __rte_always_inline uint32_t
+rte_net_iavf_dynf_proto_xtr_metadata_get(struct rte_mbuf *m)
+{
+ return *RTE_NET_IAVF_DYNF_PROTO_XTR_METADATA(m);
+}
+
+/**
+ * Dump the mbuf dynamic field for flexible descriptor's extraction metadata.
+ *
+ * @param m
+ * The pointer to the mbuf.
+ */
+__rte_experimental
+static inline void
+rte_net_iavf_dump_proto_xtr_metadata(struct rte_mbuf *m)
+{
+ union rte_net_iavf_proto_xtr_metadata data;
+
+ if (!rte_net_iavf_dynf_proto_xtr_metadata_avail())
+ return;
+
+ data.metadata = rte_net_iavf_dynf_proto_xtr_metadata_get(m);
+
+ if (m->ol_flags & RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_VLAN)
+ printf(" - Flexible descriptor's Metadata: [0x%04x:0x%04x],"
+ "vlan,stag=%u:%u:%u,ctag=%u:%u:%u",
+ data.raw.data0, data.raw.data1,
+ data.vlan.stag_pcp,
+ data.vlan.stag_dei,
+ data.vlan.stag_vid,
+ data.vlan.ctag_pcp,
+ data.vlan.ctag_dei,
+ data.vlan.ctag_vid);
+ else if (m->ol_flags & RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_IPV4)
+ printf(" - Flexible descriptor's Metadata: [0x%04x:0x%04x],"
+ "ipv4,ver=%u,hdrlen=%u,tos=%u,ttl=%u,proto=%u",
+ data.raw.data0, data.raw.data1,
+ data.ipv4.version,
+ data.ipv4.ihl,
+ data.ipv4.tos,
+ data.ipv4.ttl,
+ data.ipv4.protocol);
+ else if (m->ol_flags & RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_IPV6)
+ printf(" - Flexible descriptor's Metadata: [0x%04x:0x%04x],"
+ "ipv6,ver=%u,tc=%u,flow_hi4=0x%x,nexthdr=%u,hoplimit=%u",
+ data.raw.data0, data.raw.data1,
+ data.ipv6.version,
+ data.ipv6.tc,
+ data.ipv6.flowhi4,
+ data.ipv6.nexthdr,
+ data.ipv6.hoplimit);
+ else if (m->ol_flags & RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_IPV6_FLOW)
+ printf(" - Flexible descriptor's Metadata: [0x%04x:0x%04x],"
+ "ipv6_flow,ver=%u,tc=%u,flow=0x%x%04x",
+ data.raw.data0, data.raw.data1,
+ data.ipv6_flow.version,
+ data.ipv6_flow.tc,
+ data.ipv6_flow.flowhi4,
+ data.ipv6_flow.flowlo16);
+ else if (m->ol_flags & RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_TCP)
+ printf(" - Flexible descriptor's Metadata: [0x%04x:0x%04x],"
+ "tcp,doff=%u,flags=%s%s%s%s%s%s%s%s",
+ data.raw.data0, data.raw.data1,
+ data.tcp.doff,
+ data.tcp.cwr ? "C" : "",
+ data.tcp.ece ? "E" : "",
+ data.tcp.urg ? "U" : "",
+ data.tcp.ack ? "A" : "",
+ data.tcp.psh ? "P" : "",
+ data.tcp.rst ? "R" : "",
+ data.tcp.syn ? "S" : "",
+ data.tcp.fin ? "F" : "");
+ else if (m->ol_flags & RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_IP_OFFSET)
+ printf(" - Flexible descriptor's Extraction: ip_offset=%u",
+ data.ip_ofs);
+}
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_PMD_IAVF_H_ */
diff --git a/drivers/net/iavf/rte_pmd_iavf_version.map b/drivers/net/iavf/rte_pmd_iavf_version.map
index 4a76d1d52..d7afd31d1 100644
--- a/drivers/net/iavf/rte_pmd_iavf_version.map
+++ b/drivers/net/iavf/rte_pmd_iavf_version.map
@@ -1,3 +1,16 @@
DPDK_21 {
local: *;
};
+
+EXPERIMENTAL {
+ global:
+
+ # added in 20.11
+ rte_net_iavf_dynfield_proto_xtr_metadata_offs;
+ rte_net_iavf_dynflag_proto_xtr_vlan_mask;
+ rte_net_iavf_dynflag_proto_xtr_ipv4_mask;
+ rte_net_iavf_dynflag_proto_xtr_ipv6_mask;
+ rte_net_iavf_dynflag_proto_xtr_ipv6_flow_mask;
+ rte_net_iavf_dynflag_proto_xtr_tcp_mask;
+ rte_net_iavf_dynflag_proto_xtr_ip_offset_mask;
+};
--
2.20.1
* [dpdk-dev] [PATCH v8] net/iavf: support flex desc metadata extraction
2020-09-09 2:54 [dpdk-dev] [PATCH v1] net/iavf: support flex desc metadata extraction Jeff Guo
` (7 preceding siblings ...)
2020-09-29 6:12 ` Jeff Guo
@ 2020-10-13 8:17 ` Jeff Guo
2020-10-13 10:10 ` Zhang, Qi Z
2020-10-14 12:31 ` Ferruh Yigit
2020-10-15 3:41 ` [dpdk-dev] [PATCH v9] " Jeff Guo
` (4 subsequent siblings)
13 siblings, 2 replies; 40+ messages in thread
From: Jeff Guo @ 2020-10-13 8:17 UTC (permalink / raw)
To: jingjing.wu, qi.z.zhang, beilei.xing, ferruh.yigit
Cc: dev, haiyue.wang, jia.guo
Enable metadata extraction for flexible descriptors in AVF, so that
network functions can get metadata directly without additional parsing,
which reduces the CPU cost for VFs. The metadata extraction covers the
VLAN/IPv4/IPv6/IPv6-FLOW/TCP/MPLS flexible descriptors, and the VF can
negotiate the flexible descriptor capability with the PF and configure
the corresponding offload on its receive queues.
Signed-off-by: Jeff Guo <jia.guo@intel.com>
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
---
v8:
rebase patch for apply issue
v7:
clean up some unused code and add documentation
v6:
rebase patch
v5:
remove ovs configure since ovs is not protocol extraction
v4:
add flex desc type in rx queue for handling vector path
handle ovs flex type
v3:
export these global symbols into .map
v2:
remove makefile change and modify the rxdid handling
---
config/rte_config.h | 3 +
doc/guides/nics/intel_vf.rst | 16 +
doc/guides/rel_notes/release_20_11.rst | 6 +
drivers/net/iavf/iavf.h | 24 +-
drivers/net/iavf/iavf_ethdev.c | 394 ++++++++++++++++++++++
drivers/net/iavf/iavf_rxtx.c | 252 ++++++++++++--
drivers/net/iavf/iavf_rxtx.h | 168 +++++----
drivers/net/iavf/iavf_rxtx_vec_common.h | 3 +
drivers/net/iavf/iavf_vchnl.c | 22 +-
drivers/net/iavf/meson.build | 2 +
drivers/net/iavf/rte_pmd_iavf.h | 250 ++++++++++++++
drivers/net/iavf/rte_pmd_iavf_version.map | 13 +
12 files changed, 1039 insertions(+), 114 deletions(-)
create mode 100644 drivers/net/iavf/rte_pmd_iavf.h
diff --git a/config/rte_config.h b/config/rte_config.h
index 03d90d78bc..2c53072c3d 100644
--- a/config/rte_config.h
+++ b/config/rte_config.h
@@ -127,6 +127,9 @@
#define RTE_LIBRTE_I40E_QUEUE_NUM_PER_VF 4
#define RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM 4
+/* iavf defines */
+#undef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
+
/* Ring net PMD settings */
#define RTE_PMD_RING_MAX_RX_RINGS 16
#define RTE_PMD_RING_MAX_TX_RINGS 16
diff --git a/doc/guides/nics/intel_vf.rst b/doc/guides/nics/intel_vf.rst
index ade5152595..207f456143 100644
--- a/doc/guides/nics/intel_vf.rst
+++ b/doc/guides/nics/intel_vf.rst
@@ -615,3 +615,19 @@ which belongs to the destination VF on the VM.
.. figure:: img/inter_vm_comms.*
Inter-VM Communication
+
+
+Pre-Installation Configuration
+------------------------------
+
+Config File Options
+~~~~~~~~~~~~~~~~~~~
+
+The following options can be modified in the ``config`` file.
+Please note that enabling debugging options may affect system performance.
+
+- ``CONFIG_RTE_LIBRTE_IAVF_16BYTE_RX_DESC`` (default ``n``)
+
+ Toggle to use a 16-byte Rx descriptor; by default the Rx descriptor is 32 bytes.
+ Configuring the 16-byte Rx descriptor may cause a negotiation failure during VF driver
+ initialization if the PF driver doesn't support it.
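Not part of the patch: with the make build system removed, this option is
toggled by editing config/rte_config.h directly; a hypothetical local
change to enable the 16-byte descriptor would look like:

```c
/* config/rte_config.h -- local edit (illustrative); replaces the
 * default "#undef RTE_LIBRTE_IAVF_16BYTE_RX_DESC" added by this patch */
#define RTE_LIBRTE_IAVF_16BYTE_RX_DESC 1
```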
diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index e7691ee732..93d3ccc60a 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -160,6 +160,12 @@ New Features
packets with specified ratio, and apply with own set of actions with a fate
action. When the ratio is set to 1 then the packets will be 100% mirrored.
+* **Updated Intel iavf driver.**
+
+ Updated iavf PMD with new features and improvements, including:
+
+ * Added support for flexible descriptor metadata extraction.
+
Removed Items
-------------
diff --git a/drivers/net/iavf/iavf.h b/drivers/net/iavf/iavf.h
index 3198d85b3a..d566116086 100644
--- a/drivers/net/iavf/iavf.h
+++ b/drivers/net/iavf/iavf.h
@@ -119,7 +119,7 @@ struct iavf_info {
struct virtchnl_vf_resource *vf_res; /* VF resource */
struct virtchnl_vsi_resource *vsi_res; /* LAN VSI */
uint64_t supported_rxdid;
-
+ uint8_t *proto_xtr; /* proto xtr type for all queues */
volatile enum virtchnl_ops pend_cmd; /* pending command not finished */
uint32_t cmd_retval; /* return value of the cmd response from PF */
uint8_t *aq_resp; /* buffer to store the adminq response from PF */
@@ -153,6 +153,27 @@ struct iavf_info {
#define IAVF_MAX_PKT_TYPE 1024
+#define IAVF_MAX_QUEUE_NUM 2048
+
+enum iavf_proto_xtr_type {
+ IAVF_PROTO_XTR_NONE,
+ IAVF_PROTO_XTR_VLAN,
+ IAVF_PROTO_XTR_IPV4,
+ IAVF_PROTO_XTR_IPV6,
+ IAVF_PROTO_XTR_IPV6_FLOW,
+ IAVF_PROTO_XTR_TCP,
+ IAVF_PROTO_XTR_IP_OFFSET,
+ IAVF_PROTO_XTR_MAX,
+};
+
+/**
+ * Cache devargs parse result.
+ */
+struct iavf_devargs {
+ uint8_t proto_xtr_dflt;
+ uint8_t proto_xtr[IAVF_MAX_QUEUE_NUM];
+};
+
/* Structure to store private data for each VF instance. */
struct iavf_adapter {
struct iavf_hw hw;
@@ -166,6 +187,7 @@ struct iavf_adapter {
const uint32_t *ptype_tbl;
bool stopped;
uint16_t fdir_ref_cnt;
+ struct iavf_devargs devargs;
};
/* IAVF_DEV_PRIVATE_TO */
diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
index f5e6e852ae..93e26c768c 100644
--- a/drivers/net/iavf/iavf_ethdev.c
+++ b/drivers/net/iavf/iavf_ethdev.c
@@ -28,6 +28,49 @@
#include "iavf.h"
#include "iavf_rxtx.h"
#include "iavf_generic_flow.h"
+#include "rte_pmd_iavf.h"
+
+/* devargs */
+#define IAVF_PROTO_XTR_ARG "proto_xtr"
+
+static const char * const iavf_valid_args[] = {
+ IAVF_PROTO_XTR_ARG,
+ NULL
+};
+
+static const struct rte_mbuf_dynfield iavf_proto_xtr_metadata_param = {
+ .name = "iavf_dynfield_proto_xtr_metadata",
+ .size = sizeof(uint32_t),
+ .align = __alignof__(uint32_t),
+ .flags = 0,
+};
+
+struct iavf_proto_xtr_ol {
+ const struct rte_mbuf_dynflag param;
+ uint64_t *ol_flag;
+ bool required;
+};
+
+static struct iavf_proto_xtr_ol iavf_proto_xtr_params[] = {
+ [IAVF_PROTO_XTR_VLAN] = {
+ .param = { .name = "iavf_dynflag_proto_xtr_vlan" },
+ .ol_flag = &rte_net_iavf_dynflag_proto_xtr_vlan_mask },
+ [IAVF_PROTO_XTR_IPV4] = {
+ .param = { .name = "iavf_dynflag_proto_xtr_ipv4" },
+ .ol_flag = &rte_net_iavf_dynflag_proto_xtr_ipv4_mask },
+ [IAVF_PROTO_XTR_IPV6] = {
+ .param = { .name = "iavf_dynflag_proto_xtr_ipv6" },
+ .ol_flag = &rte_net_iavf_dynflag_proto_xtr_ipv6_mask },
+ [IAVF_PROTO_XTR_IPV6_FLOW] = {
+ .param = { .name = "iavf_dynflag_proto_xtr_ipv6_flow" },
+ .ol_flag = &rte_net_iavf_dynflag_proto_xtr_ipv6_flow_mask },
+ [IAVF_PROTO_XTR_TCP] = {
+ .param = { .name = "iavf_dynflag_proto_xtr_tcp" },
+ .ol_flag = &rte_net_iavf_dynflag_proto_xtr_tcp_mask },
+ [IAVF_PROTO_XTR_IP_OFFSET] = {
+ .param = { .name = "ice_dynflag_proto_xtr_ip_offset" },
+ .ol_flag = &rte_net_iavf_dynflag_proto_xtr_ip_offset_mask },
+};
static int iavf_dev_configure(struct rte_eth_dev *dev);
static int iavf_dev_start(struct rte_eth_dev *dev);
@@ -1247,6 +1290,349 @@ iavf_check_vf_reset_done(struct iavf_hw *hw)
return 0;
}
+static int
+iavf_lookup_proto_xtr_type(const char *flex_name)
+{
+ static struct {
+ const char *name;
+ enum iavf_proto_xtr_type type;
+ } xtr_type_map[] = {
+ { "vlan", IAVF_PROTO_XTR_VLAN },
+ { "ipv4", IAVF_PROTO_XTR_IPV4 },
+ { "ipv6", IAVF_PROTO_XTR_IPV6 },
+ { "ipv6_flow", IAVF_PROTO_XTR_IPV6_FLOW },
+ { "tcp", IAVF_PROTO_XTR_TCP },
+ { "ip_offset", IAVF_PROTO_XTR_IP_OFFSET },
+ };
+ uint32_t i;
+
+ for (i = 0; i < RTE_DIM(xtr_type_map); i++) {
+ if (strcmp(flex_name, xtr_type_map[i].name) == 0)
+ return xtr_type_map[i].type;
+ }
+
+ PMD_DRV_LOG(ERR, "wrong proto_xtr type, "
+ "it should be: vlan|ipv4|ipv6|ipv6_flow|tcp|ip_offset");
+
+ return -1;
+}
+
+/**
+ * Parse an element; the element can be a single number/range or a '(' ')' group.
+ * 1) A single number element is just a simple digit, e.g. 9
+ * 2) A single range element is two digits with a '-' between them, e.g. 2-6
+ * 3) A group element combines multiple 1) or 2) with '( )', e.g. (0,2-4,6)
+ *    Within a group element, '-' is used as the range separator and
+ *    ',' as the single-number separator.
+ */
+static int
+iavf_parse_queue_set(const char *input, int xtr_type,
+ struct iavf_devargs *devargs)
+{
+ const char *str = input;
+ char *end = NULL;
+ uint32_t min, max;
+ uint32_t idx;
+
+ while (isblank(*str))
+ str++;
+
+ if (!isdigit(*str) && *str != '(')
+ return -1;
+
+ /* process single number or single range of number */
+ if (*str != '(') {
+ errno = 0;
+ idx = strtoul(str, &end, 10);
+ if (errno || !end || idx >= IAVF_MAX_QUEUE_NUM)
+ return -1;
+
+ while (isblank(*end))
+ end++;
+
+ min = idx;
+ max = idx;
+
+ /* process single <number>-<number> */
+ if (*end == '-') {
+ end++;
+ while (isblank(*end))
+ end++;
+ if (!isdigit(*end))
+ return -1;
+
+ errno = 0;
+ idx = strtoul(end, &end, 10);
+ if (errno || !end || idx >= IAVF_MAX_QUEUE_NUM)
+ return -1;
+
+ max = idx;
+ while (isblank(*end))
+ end++;
+ }
+
+ if (*end != ':')
+ return -1;
+
+ for (idx = RTE_MIN(min, max);
+ idx <= RTE_MAX(min, max); idx++)
+ devargs->proto_xtr[idx] = xtr_type;
+
+ return 0;
+ }
+
+ /* process set within bracket */
+ str++;
+ while (isblank(*str))
+ str++;
+ if (*str == '\0')
+ return -1;
+
+ min = IAVF_MAX_QUEUE_NUM;
+ do {
+ /* go ahead to the first digit */
+ while (isblank(*str))
+ str++;
+ if (!isdigit(*str))
+ return -1;
+
+ /* get the digit value */
+ errno = 0;
+ idx = strtoul(str, &end, 10);
+ if (errno || !end || idx >= IAVF_MAX_QUEUE_NUM)
+ return -1;
+
+ /* go ahead to separator '-',',' and ')' */
+ while (isblank(*end))
+ end++;
+ if (*end == '-') {
+ if (min == IAVF_MAX_QUEUE_NUM)
+ min = idx;
+ else /* avoid continuous '-' */
+ return -1;
+ } else if (*end == ',' || *end == ')') {
+ max = idx;
+ if (min == IAVF_MAX_QUEUE_NUM)
+ min = idx;
+
+ for (idx = RTE_MIN(min, max);
+ idx <= RTE_MAX(min, max); idx++)
+ devargs->proto_xtr[idx] = xtr_type;
+
+ min = IAVF_MAX_QUEUE_NUM;
+ } else {
+ return -1;
+ }
+
+ str = end + 1;
+ } while (*end != ')' && *end != '\0');
+
+ return 0;
+}
+
+static int
+iavf_parse_queue_proto_xtr(const char *queues, struct iavf_devargs *devargs)
+{
+ const char *queue_start;
+ uint32_t idx;
+ int xtr_type;
+ char flex_name[32];
+
+ while (isblank(*queues))
+ queues++;
+
+ if (*queues != '[') {
+ xtr_type = iavf_lookup_proto_xtr_type(queues);
+ if (xtr_type < 0)
+ return -1;
+
+ devargs->proto_xtr_dflt = xtr_type;
+
+ return 0;
+ }
+
+ queues++;
+ do {
+ while (isblank(*queues))
+ queues++;
+ if (*queues == '\0')
+ return -1;
+
+ queue_start = queues;
+
+ /* go across a complete bracket */
+ if (*queue_start == '(') {
+ queues += strcspn(queues, ")");
+ if (*queues != ')')
+ return -1;
+ }
+
+ /* scan the separator ':' */
+ queues += strcspn(queues, ":");
+ if (*queues++ != ':')
+ return -1;
+ while (isblank(*queues))
+ queues++;
+
+ for (idx = 0; ; idx++) {
+ if (isblank(queues[idx]) ||
+ queues[idx] == ',' ||
+ queues[idx] == ']' ||
+ queues[idx] == '\0')
+ break;
+
+ if (idx > sizeof(flex_name) - 2)
+ return -1;
+
+ flex_name[idx] = queues[idx];
+ }
+ flex_name[idx] = '\0';
+ xtr_type = iavf_lookup_proto_xtr_type(flex_name);
+ if (xtr_type < 0)
+ return -1;
+
+ queues += idx;
+
+ while (isblank(*queues) || *queues == ',' || *queues == ']')
+ queues++;
+
+ if (iavf_parse_queue_set(queue_start, xtr_type, devargs) < 0)
+ return -1;
+ } while (*queues != '\0');
+
+ return 0;
+}
+
+static int
+iavf_handle_proto_xtr_arg(__rte_unused const char *key, const char *value,
+ void *extra_args)
+{
+ struct iavf_devargs *devargs = extra_args;
+
+ if (!value || !extra_args)
+ return -EINVAL;
+
+ if (iavf_parse_queue_proto_xtr(value, devargs) < 0) {
+ PMD_DRV_LOG(ERR, "invalid proto_xtr parameter: '%s'",
+ value);
+ return -1;
+ }
+
+ return 0;
+}
+
+static int iavf_parse_devargs(struct rte_eth_dev *dev)
+{
+ struct iavf_adapter *ad =
+ IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+ struct rte_devargs *devargs = dev->device->devargs;
+ struct rte_kvargs *kvlist;
+ int ret;
+
+ if (!devargs)
+ return 0;
+
+ kvlist = rte_kvargs_parse(devargs->args, iavf_valid_args);
+ if (!kvlist) {
+ PMD_INIT_LOG(ERR, "invalid kvargs key");
+ return -EINVAL;
+ }
+
+ ad->devargs.proto_xtr_dflt = IAVF_PROTO_XTR_NONE;
+ memset(ad->devargs.proto_xtr, IAVF_PROTO_XTR_NONE,
+ sizeof(ad->devargs.proto_xtr));
+
+ ret = rte_kvargs_process(kvlist, IAVF_PROTO_XTR_ARG,
+ &iavf_handle_proto_xtr_arg, &ad->devargs);
+ if (ret)
+ goto bail;
+
+bail:
+ rte_kvargs_free(kvlist);
+ return ret;
+}
+
+static void
+iavf_init_proto_xtr(struct rte_eth_dev *dev)
+{
+ struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+ struct iavf_adapter *ad =
+ IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+ const struct iavf_proto_xtr_ol *xtr_ol;
+ bool proto_xtr_enable = false;
+ int offset;
+ uint16_t i;
+
+ vf->proto_xtr = rte_zmalloc("vf proto xtr",
+ vf->vsi_res->num_queue_pairs, 0);
+ if (unlikely(!(vf->proto_xtr))) {
+ PMD_DRV_LOG(ERR, "no memory for setting up proto_xtr's table");
+ return;
+ }
+
+ for (i = 0; i < vf->vsi_res->num_queue_pairs; i++) {
+ vf->proto_xtr[i] = ad->devargs.proto_xtr[i] !=
+ IAVF_PROTO_XTR_NONE ?
+ ad->devargs.proto_xtr[i] :
+ ad->devargs.proto_xtr_dflt;
+
+ if (vf->proto_xtr[i] != IAVF_PROTO_XTR_NONE) {
+ uint8_t type = vf->proto_xtr[i];
+
+ iavf_proto_xtr_params[type].required = true;
+ proto_xtr_enable = true;
+ }
+ }
+
+ if (likely(!proto_xtr_enable))
+ return;
+
+ offset = rte_mbuf_dynfield_register(&iavf_proto_xtr_metadata_param);
+ if (unlikely(offset == -1)) {
+ PMD_DRV_LOG(ERR,
+ "failed to extract protocol metadata, error %d",
+ -rte_errno);
+ return;
+ }
+
+ PMD_DRV_LOG(DEBUG,
+ "proto_xtr metadata offset in mbuf is : %d",
+ offset);
+ rte_net_iavf_dynfield_proto_xtr_metadata_offs = offset;
+
+ for (i = 0; i < RTE_DIM(iavf_proto_xtr_params); i++) {
+ xtr_ol = &iavf_proto_xtr_params[i];
+
+ uint8_t rxdid = iavf_proto_xtr_type_to_rxdid((uint8_t)i);
+
+ if (!xtr_ol->required)
+ continue;
+
+ if (!(vf->supported_rxdid & BIT(rxdid))) {
+ PMD_DRV_LOG(ERR,
+ "rxdid[%u] is not supported in hardware",
+ rxdid);
+ rte_net_iavf_dynfield_proto_xtr_metadata_offs = -1;
+ break;
+ }
+
+ offset = rte_mbuf_dynflag_register(&xtr_ol->param);
+ if (unlikely(offset == -1)) {
+ PMD_DRV_LOG(ERR,
+ "failed to register proto_xtr offload '%s', error %d",
+ xtr_ol->param.name, -rte_errno);
+
+ rte_net_iavf_dynfield_proto_xtr_metadata_offs = -1;
+ break;
+ }
+
+ PMD_DRV_LOG(DEBUG,
+ "proto_xtr offload '%s' offset in mbuf is : %d",
+ xtr_ol->param.name, offset);
+ *xtr_ol->ol_flag = 1ULL << offset;
+ }
+}
+
static int
iavf_init_vf(struct rte_eth_dev *dev)
{
@@ -1256,6 +1642,12 @@ iavf_init_vf(struct rte_eth_dev *dev)
struct iavf_hw *hw = IAVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+ err = iavf_parse_devargs(dev);
+ if (err) {
+ PMD_INIT_LOG(ERR, "Failed to parse devargs");
+ goto err;
+ }
+
err = iavf_set_mac_type(hw);
if (err) {
PMD_INIT_LOG(ERR, "set_mac_type failed: %d", err);
@@ -1319,6 +1711,8 @@ iavf_init_vf(struct rte_eth_dev *dev)
}
}
+ iavf_init_proto_xtr(dev);
+
return 0;
err_rss:
rte_free(vf->rss_key);
diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
index 1b0efe0433..7e6e425ac8 100644
--- a/drivers/net/iavf/iavf_rxtx.c
+++ b/drivers/net/iavf/iavf_rxtx.c
@@ -26,6 +26,35 @@
#include "iavf.h"
#include "iavf_rxtx.h"
+#include "rte_pmd_iavf.h"
+
+/* Offset of mbuf dynamic field for protocol extraction's metadata */
+int rte_net_iavf_dynfield_proto_xtr_metadata_offs = -1;
+
+/* Mask of mbuf dynamic flags for protocol extraction's type */
+uint64_t rte_net_iavf_dynflag_proto_xtr_vlan_mask;
+uint64_t rte_net_iavf_dynflag_proto_xtr_ipv4_mask;
+uint64_t rte_net_iavf_dynflag_proto_xtr_ipv6_mask;
+uint64_t rte_net_iavf_dynflag_proto_xtr_ipv6_flow_mask;
+uint64_t rte_net_iavf_dynflag_proto_xtr_tcp_mask;
+uint64_t rte_net_iavf_dynflag_proto_xtr_ip_offset_mask;
+
+uint8_t
+iavf_proto_xtr_type_to_rxdid(uint8_t flex_type)
+{
+ static uint8_t rxdid_map[] = {
+ [IAVF_PROTO_XTR_NONE] = IAVF_RXDID_COMMS_OVS_1,
+ [IAVF_PROTO_XTR_VLAN] = IAVF_RXDID_COMMS_AUX_VLAN,
+ [IAVF_PROTO_XTR_IPV4] = IAVF_RXDID_COMMS_AUX_IPV4,
+ [IAVF_PROTO_XTR_IPV6] = IAVF_RXDID_COMMS_AUX_IPV6,
+ [IAVF_PROTO_XTR_IPV6_FLOW] = IAVF_RXDID_COMMS_AUX_IPV6_FLOW,
+ [IAVF_PROTO_XTR_TCP] = IAVF_RXDID_COMMS_AUX_TCP,
+ [IAVF_PROTO_XTR_IP_OFFSET] = IAVF_RXDID_COMMS_AUX_IP_OFFSET,
+ };
+
+ return flex_type < RTE_DIM(rxdid_map) ?
+ rxdid_map[flex_type] : IAVF_RXDID_COMMS_OVS_1;
+}
static inline int
check_rx_thresh(uint16_t nb_desc, uint16_t thresh)
@@ -294,6 +323,160 @@ static const struct iavf_txq_ops def_txq_ops = {
.release_mbufs = release_txq_mbufs,
};
+static inline void
+iavf_rxd_to_pkt_fields_by_comms_ovs(__rte_unused struct iavf_rx_queue *rxq,
+ struct rte_mbuf *mb,
+ volatile union iavf_rx_flex_desc *rxdp)
+{
+ volatile struct iavf_32b_rx_flex_desc_comms_ovs *desc =
+ (volatile struct iavf_32b_rx_flex_desc_comms_ovs *)rxdp;
+#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
+ uint16_t stat_err;
+#endif
+
+ if (desc->flow_id != 0xFFFFFFFF) {
+ mb->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+ mb->hash.fdir.hi = rte_le_to_cpu_32(desc->flow_id);
+ }
+
+#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
+ stat_err = rte_le_to_cpu_16(desc->status_error0);
+ if (likely(stat_err & (1 << IAVF_RX_FLEX_DESC_STATUS0_RSS_VALID_S))) {
+ mb->ol_flags |= PKT_RX_RSS_HASH;
+ mb->hash.rss = rte_le_to_cpu_32(desc->rss_hash);
+ }
+#endif
+}
+
+static inline void
+iavf_rxd_to_pkt_fields_by_comms_aux_v1(struct iavf_rx_queue *rxq,
+ struct rte_mbuf *mb,
+ volatile union iavf_rx_flex_desc *rxdp)
+{
+ volatile struct iavf_32b_rx_flex_desc_comms *desc =
+ (volatile struct iavf_32b_rx_flex_desc_comms *)rxdp;
+ uint16_t stat_err;
+
+ stat_err = rte_le_to_cpu_16(desc->status_error0);
+ if (likely(stat_err & (1 << IAVF_RX_FLEX_DESC_STATUS0_RSS_VALID_S))) {
+ mb->ol_flags |= PKT_RX_RSS_HASH;
+ mb->hash.rss = rte_le_to_cpu_32(desc->rss_hash);
+ }
+
+#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
+ if (desc->flow_id != 0xFFFFFFFF) {
+ mb->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+ mb->hash.fdir.hi = rte_le_to_cpu_32(desc->flow_id);
+ }
+
+ if (rxq->xtr_ol_flag) {
+ uint32_t metadata = 0;
+
+ stat_err = rte_le_to_cpu_16(desc->status_error1);
+
+ if (stat_err & (1 << IAVF_RX_FLEX_DESC_STATUS1_XTRMD4_VALID_S))
+ metadata = rte_le_to_cpu_16(desc->flex_ts.flex.aux0);
+
+ if (stat_err & (1 << IAVF_RX_FLEX_DESC_STATUS1_XTRMD5_VALID_S))
+ metadata |=
+ rte_le_to_cpu_16(desc->flex_ts.flex.aux1) << 16;
+
+ if (metadata) {
+ mb->ol_flags |= rxq->xtr_ol_flag;
+
+ *RTE_NET_IAVF_DYNF_PROTO_XTR_METADATA(mb) = metadata;
+ }
+ }
+#endif
+}
+
+static inline void
+iavf_rxd_to_pkt_fields_by_comms_aux_v2(struct iavf_rx_queue *rxq,
+ struct rte_mbuf *mb,
+ volatile union iavf_rx_flex_desc *rxdp)
+{
+ volatile struct iavf_32b_rx_flex_desc_comms *desc =
+ (volatile struct iavf_32b_rx_flex_desc_comms *)rxdp;
+ uint16_t stat_err;
+
+ stat_err = rte_le_to_cpu_16(desc->status_error0);
+ if (likely(stat_err & (1 << IAVF_RX_FLEX_DESC_STATUS0_RSS_VALID_S))) {
+ mb->ol_flags |= PKT_RX_RSS_HASH;
+ mb->hash.rss = rte_le_to_cpu_32(desc->rss_hash);
+ }
+
+#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
+ if (desc->flow_id != 0xFFFFFFFF) {
+ mb->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+ mb->hash.fdir.hi = rte_le_to_cpu_32(desc->flow_id);
+ }
+
+ if (rxq->xtr_ol_flag) {
+ uint32_t metadata = 0;
+
+ if (desc->flex_ts.flex.aux0 != 0xFFFF)
+ metadata = rte_le_to_cpu_16(desc->flex_ts.flex.aux0);
+ else if (desc->flex_ts.flex.aux1 != 0xFFFF)
+ metadata = rte_le_to_cpu_16(desc->flex_ts.flex.aux1);
+
+ if (metadata) {
+ mb->ol_flags |= rxq->xtr_ol_flag;
+
+ *RTE_NET_IAVF_DYNF_PROTO_XTR_METADATA(mb) = metadata;
+ }
+ }
+#endif
+}
+
+static void
+iavf_select_rxd_to_pkt_fields_handler(struct iavf_rx_queue *rxq, uint32_t rxdid)
+{
+ switch (rxdid) {
+ case IAVF_RXDID_COMMS_AUX_VLAN:
+ rxq->xtr_ol_flag = rte_net_iavf_dynflag_proto_xtr_vlan_mask;
+ rxq->rxd_to_pkt_fields =
+ iavf_rxd_to_pkt_fields_by_comms_aux_v1;
+ break;
+ case IAVF_RXDID_COMMS_AUX_IPV4:
+ rxq->xtr_ol_flag = rte_net_iavf_dynflag_proto_xtr_ipv4_mask;
+ rxq->rxd_to_pkt_fields =
+ iavf_rxd_to_pkt_fields_by_comms_aux_v1;
+ break;
+ case IAVF_RXDID_COMMS_AUX_IPV6:
+ rxq->xtr_ol_flag = rte_net_iavf_dynflag_proto_xtr_ipv6_mask;
+ rxq->rxd_to_pkt_fields =
+ iavf_rxd_to_pkt_fields_by_comms_aux_v1;
+ break;
+ case IAVF_RXDID_COMMS_AUX_IPV6_FLOW:
+ rxq->xtr_ol_flag =
+ rte_net_iavf_dynflag_proto_xtr_ipv6_flow_mask;
+ rxq->rxd_to_pkt_fields =
+ iavf_rxd_to_pkt_fields_by_comms_aux_v1;
+ break;
+ case IAVF_RXDID_COMMS_AUX_TCP:
+ rxq->xtr_ol_flag = rte_net_iavf_dynflag_proto_xtr_tcp_mask;
+ rxq->rxd_to_pkt_fields =
+ iavf_rxd_to_pkt_fields_by_comms_aux_v1;
+ break;
+ case IAVF_RXDID_COMMS_AUX_IP_OFFSET:
+ rxq->xtr_ol_flag =
+ rte_net_iavf_dynflag_proto_xtr_ip_offset_mask;
+ rxq->rxd_to_pkt_fields =
+ iavf_rxd_to_pkt_fields_by_comms_aux_v2;
+ break;
+ case IAVF_RXDID_COMMS_OVS_1:
+ rxq->rxd_to_pkt_fields = iavf_rxd_to_pkt_fields_by_comms_ovs;
+ break;
+ default:
+ /* update this according to the RXDID for FLEX_DESC_NONE */
+ rxq->rxd_to_pkt_fields = iavf_rxd_to_pkt_fields_by_comms_ovs;
+ break;
+ }
+
+ if (!rte_net_iavf_dynf_proto_xtr_metadata_avail())
+ rxq->xtr_ol_flag = 0;
+}
+
int
iavf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
uint16_t nb_desc, unsigned int socket_id,
@@ -309,6 +492,7 @@ iavf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
struct iavf_rx_queue *rxq;
const struct rte_memzone *mz;
uint32_t ring_size;
+ uint8_t proto_xtr;
uint16_t len;
uint16_t rx_free_thresh;
@@ -346,14 +530,18 @@ iavf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
return -ENOMEM;
}
- if (vf->vf_res->vf_cap_flags &
- VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC &&
- vf->supported_rxdid & BIT(IAVF_RXDID_COMMS_OVS_1)) {
- rxq->rxdid = IAVF_RXDID_COMMS_OVS_1;
+ if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC) {
+ proto_xtr = vf->proto_xtr ? vf->proto_xtr[queue_idx] :
+ IAVF_PROTO_XTR_NONE;
+ rxq->rxdid = iavf_proto_xtr_type_to_rxdid(proto_xtr);
+ rxq->proto_xtr = proto_xtr;
} else {
rxq->rxdid = IAVF_RXDID_LEGACY_1;
+ rxq->proto_xtr = IAVF_PROTO_XTR_NONE;
}
+ iavf_select_rxd_to_pkt_fields_handler(rxq, rxq->rxdid);
+
rxq->mp = mp;
rxq->nb_rx_desc = nb_desc;
rxq->rx_free_thresh = rx_free_thresh;
@@ -715,6 +903,14 @@ iavf_stop_queues(struct rte_eth_dev *dev)
}
}
+#define IAVF_RX_FLEX_ERR0_BITS \
+ ((1 << IAVF_RX_FLEX_DESC_STATUS0_HBO_S) | \
+ (1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_IPE_S) | \
+ (1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_L4E_S) | \
+ (1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_EIPE_S) | \
+ (1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_EUDPE_S) | \
+ (1 << IAVF_RX_FLEX_DESC_STATUS0_RXE_S))
+
static inline void
iavf_rxd_to_vlan_tci(struct rte_mbuf *mb, volatile union iavf_rx_desc *rxdp)
{
@@ -740,6 +936,21 @@ iavf_flex_rxd_to_vlan_tci(struct rte_mbuf *mb,
} else {
mb->vlan_tci = 0;
}
+
+#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
+ if (rte_le_to_cpu_16(rxdp->wb.status_error1) &
+ (1 << IAVF_RX_FLEX_DESC_STATUS1_L2TAG2P_S)) {
+ mb->ol_flags |= PKT_RX_QINQ_STRIPPED | PKT_RX_QINQ |
+ PKT_RX_VLAN_STRIPPED | PKT_RX_VLAN;
+ mb->vlan_tci_outer = mb->vlan_tci;
+ mb->vlan_tci = rte_le_to_cpu_16(rxdp->wb.l2tag2_2nd);
+ PMD_RX_LOG(DEBUG, "Descriptor l2tag2_1: %u, l2tag2_2: %u",
+ rte_le_to_cpu_16(rxdp->wb.l2tag2_1st),
+ rte_le_to_cpu_16(rxdp->wb.l2tag2_2nd));
+ } else {
+ mb->vlan_tci_outer = 0;
+ }
+#endif
}
/* Translate the rx descriptor status and error fields to pkt flags */
@@ -804,30 +1015,6 @@ iavf_rxd_build_fdir(volatile union iavf_rx_desc *rxdp, struct rte_mbuf *mb)
return flags;
}
-
-/* Translate the rx flex descriptor status to pkt flags */
-static inline void
-iavf_rxd_to_pkt_fields(struct rte_mbuf *mb,
- volatile union iavf_rx_flex_desc *rxdp)
-{
- volatile struct iavf_32b_rx_flex_desc_comms_ovs *desc =
- (volatile struct iavf_32b_rx_flex_desc_comms_ovs *)rxdp;
-#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
- uint16_t stat_err;
-
- stat_err = rte_le_to_cpu_16(desc->status_error0);
- if (likely(stat_err & (1 << IAVF_RX_FLEX_DESC_STATUS0_RSS_VALID_S))) {
- mb->ol_flags |= PKT_RX_RSS_HASH;
- mb->hash.rss = rte_le_to_cpu_32(desc->rss_hash);
- }
-#endif
-
- if (desc->flow_id != 0xFFFFFFFF) {
- mb->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
- mb->hash.fdir.hi = rte_le_to_cpu_32(desc->flow_id);
- }
-}
-
#define IAVF_RX_FLEX_ERR0_BITS \
((1 << IAVF_RX_FLEX_DESC_STATUS0_HBO_S) | \
(1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_IPE_S) | \
@@ -1082,7 +1269,7 @@ iavf_recv_pkts_flex_rxd(void *rx_queue,
rxm->packet_type = ptype_tbl[IAVF_RX_FLEX_DESC_PTYPE_M &
rte_le_to_cpu_16(rxd.wb.ptype_flex_flags0)];
iavf_flex_rxd_to_vlan_tci(rxm, &rxd);
- iavf_rxd_to_pkt_fields(rxm, &rxd);
+ rxq->rxd_to_pkt_fields(rxq, rxm, &rxd);
pkt_flags = iavf_flex_rxd_error_to_pkt_flags(rx_stat_err0);
rxm->ol_flags |= pkt_flags;
@@ -1223,7 +1410,7 @@ iavf_recv_scattered_pkts_flex_rxd(void *rx_queue, struct rte_mbuf **rx_pkts,
first_seg->packet_type = ptype_tbl[IAVF_RX_FLEX_DESC_PTYPE_M &
rte_le_to_cpu_16(rxd.wb.ptype_flex_flags0)];
iavf_flex_rxd_to_vlan_tci(first_seg, &rxd);
- iavf_rxd_to_pkt_fields(first_seg, &rxd);
+ rxq->rxd_to_pkt_fields(rxq, first_seg, &rxd);
pkt_flags = iavf_flex_rxd_error_to_pkt_flags(rx_stat_err0);
first_seg->ol_flags |= pkt_flags;
@@ -1460,7 +1647,7 @@ iavf_rx_scan_hw_ring_flex_rxd(struct iavf_rx_queue *rxq)
mb->packet_type = ptype_tbl[IAVF_RX_FLEX_DESC_PTYPE_M &
rte_le_to_cpu_16(rxdp[j].wb.ptype_flex_flags0)];
iavf_flex_rxd_to_vlan_tci(mb, &rxdp[j]);
- iavf_rxd_to_pkt_fields(mb, &rxdp[j]);
+ rxq->rxd_to_pkt_fields(rxq, mb, &rxdp[j]);
stat_err0 = rte_le_to_cpu_16(rxdp[j].wb.status_error0);
pkt_flags = iavf_flex_rxd_error_to_pkt_flags(stat_err0);
@@ -1652,7 +1839,7 @@ rx_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
if (rxq->rx_nb_avail)
return iavf_rx_fill_from_stage(rxq, rx_pkts, nb_pkts);
- if (rxq->rxdid == IAVF_RXDID_COMMS_OVS_1)
+ if (rxq->rxdid >= IAVF_RXDID_FLEX_NIC && rxq->rxdid <= IAVF_RXDID_LAST)
nb_rx = (uint16_t)iavf_rx_scan_hw_ring_flex_rxd(rxq);
else
nb_rx = (uint16_t)iavf_rx_scan_hw_ring(rxq);
@@ -2099,6 +2286,7 @@ iavf_set_rx_function(struct rte_eth_dev *dev)
struct iavf_adapter *adapter =
IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+
#ifdef RTE_ARCH_X86
struct iavf_rx_queue *rxq;
int i;
diff --git a/drivers/net/iavf/iavf_rxtx.h b/drivers/net/iavf/iavf_rxtx.h
index 3d02c6589d..39b31aaa8e 100644
--- a/drivers/net/iavf/iavf_rxtx.h
+++ b/drivers/net/iavf/iavf_rxtx.h
@@ -57,6 +57,77 @@
#define IAVF_TX_OFFLOAD_NOTSUP_MASK \
(PKT_TX_OFFLOAD_MASK ^ IAVF_TX_OFFLOAD_MASK)
+/* Rx Flex Descriptors
+ * These descriptors are used instead of the legacy version descriptors
+ */
+union iavf_16b_rx_flex_desc {
+ struct {
+ __le64 pkt_addr; /* Packet buffer address */
+ __le64 hdr_addr; /* Header buffer address */
+ /* bit 0 of hdr_addr is DD bit */
+ } read;
+ struct {
+ /* Qword 0 */
+ u8 rxdid; /* descriptor builder profile ID */
+ u8 mir_id_umb_cast; /* mirror=[5:0], umb=[7:6] */
+ __le16 ptype_flex_flags0; /* ptype=[9:0], ff0=[15:10] */
+ __le16 pkt_len; /* [15:14] are reserved */
+ __le16 hdr_len_sph_flex_flags1; /* header=[10:0] */
+ /* sph=[11:11] */
+ /* ff1/ext=[15:12] */
+
+ /* Qword 1 */
+ __le16 status_error0;
+ __le16 l2tag1;
+ __le16 flex_meta0;
+ __le16 flex_meta1;
+ } wb; /* writeback */
+};
+
+union iavf_32b_rx_flex_desc {
+ struct {
+ __le64 pkt_addr; /* Packet buffer address */
+ __le64 hdr_addr; /* Header buffer address */
+ /* bit 0 of hdr_addr is DD bit */
+ __le64 rsvd1;
+ __le64 rsvd2;
+ } read;
+ struct {
+ /* Qword 0 */
+ u8 rxdid; /* descriptor builder profile ID */
+ u8 mir_id_umb_cast; /* mirror=[5:0], umb=[7:6] */
+ __le16 ptype_flex_flags0; /* ptype=[9:0], ff0=[15:10] */
+ __le16 pkt_len; /* [15:14] are reserved */
+ __le16 hdr_len_sph_flex_flags1; /* header=[10:0] */
+ /* sph=[11:11] */
+ /* ff1/ext=[15:12] */
+
+ /* Qword 1 */
+ __le16 status_error0;
+ __le16 l2tag1;
+ __le16 flex_meta0;
+ __le16 flex_meta1;
+
+ /* Qword 2 */
+ __le16 status_error1;
+ u8 flex_flags2;
+ u8 time_stamp_low;
+ __le16 l2tag2_1st;
+ __le16 l2tag2_2nd;
+
+ /* Qword 3 */
+ __le16 flex_meta2;
+ __le16 flex_meta3;
+ union {
+ struct {
+ __le16 flex_meta4;
+ __le16 flex_meta5;
+ } flex;
+ __le32 ts_high;
+ } flex_ts;
+ } wb; /* writeback */
+};
+
/* HW desc structure, both 16-byte and 32-byte types are supported */
#ifdef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
#define iavf_rx_desc iavf_16byte_rx_desc
@@ -66,6 +137,10 @@
#define iavf_rx_flex_desc iavf_32b_rx_flex_desc
#endif
+typedef void (*iavf_rxd_to_pkt_fields_t)(struct iavf_rx_queue *rxq,
+ struct rte_mbuf *mb,
+ volatile union iavf_rx_flex_desc *rxdp);
+
struct iavf_rxq_ops {
void (*release_mbufs)(struct iavf_rx_queue *rxq);
};
@@ -114,6 +189,11 @@ struct iavf_rx_queue {
bool q_set; /* if rx queue has been configured */
bool rx_deferred_start; /* don't start this queue in dev start */
const struct iavf_rxq_ops *ops;
+ uint8_t proto_xtr; /* protocol extraction type */
+ uint64_t xtr_ol_flag;
+ /* flexible descriptor metadata extraction offload flag */
+ iavf_rxd_to_pkt_fields_t rxd_to_pkt_fields;
+ /* handle flexible descriptor by RXDID */
};
struct iavf_tx_entry {
@@ -161,77 +241,6 @@ union iavf_tx_offload {
};
};
-/* Rx Flex Descriptors
- * These descriptors are used instead of the legacy version descriptors
- */
-union iavf_16b_rx_flex_desc {
- struct {
- __le64 pkt_addr; /* Packet buffer address */
- __le64 hdr_addr; /* Header buffer address */
- /* bit 0 of hdr_addr is DD bit */
- } read;
- struct {
- /* Qword 0 */
- u8 rxdid; /* descriptor builder profile ID */
- u8 mir_id_umb_cast; /* mirror=[5:0], umb=[7:6] */
- __le16 ptype_flex_flags0; /* ptype=[9:0], ff0=[15:10] */
- __le16 pkt_len; /* [15:14] are reserved */
- __le16 hdr_len_sph_flex_flags1; /* header=[10:0] */
- /* sph=[11:11] */
- /* ff1/ext=[15:12] */
-
- /* Qword 1 */
- __le16 status_error0;
- __le16 l2tag1;
- __le16 flex_meta0;
- __le16 flex_meta1;
- } wb; /* writeback */
-};
-
-union iavf_32b_rx_flex_desc {
- struct {
- __le64 pkt_addr; /* Packet buffer address */
- __le64 hdr_addr; /* Header buffer address */
- /* bit 0 of hdr_addr is DD bit */
- __le64 rsvd1;
- __le64 rsvd2;
- } read;
- struct {
- /* Qword 0 */
- u8 rxdid; /* descriptor builder profile ID */
- u8 mir_id_umb_cast; /* mirror=[5:0], umb=[7:6] */
- __le16 ptype_flex_flags0; /* ptype=[9:0], ff0=[15:10] */
- __le16 pkt_len; /* [15:14] are reserved */
- __le16 hdr_len_sph_flex_flags1; /* header=[10:0] */
- /* sph=[11:11] */
- /* ff1/ext=[15:12] */
-
- /* Qword 1 */
- __le16 status_error0;
- __le16 l2tag1;
- __le16 flex_meta0;
- __le16 flex_meta1;
-
- /* Qword 2 */
- __le16 status_error1;
- u8 flex_flags2;
- u8 time_stamp_low;
- __le16 l2tag2_1st;
- __le16 l2tag2_2nd;
-
- /* Qword 3 */
- __le16 flex_meta2;
- __le16 flex_meta3;
- union {
- struct {
- __le16 flex_meta4;
- __le16 flex_meta5;
- } flex;
- __le32 ts_high;
- } flex_ts;
- } wb; /* writeback */
-};
-
/* Rx Flex Descriptor
* RxDID Profile ID 16-21
* Flex-field 0: RSS hash lower 16-bits
@@ -331,6 +340,7 @@ enum iavf_rxdid {
IAVF_RXDID_COMMS_AUX_TCP = 21,
IAVF_RXDID_COMMS_OVS_1 = 22,
IAVF_RXDID_COMMS_OVS_2 = 23,
+ IAVF_RXDID_COMMS_AUX_IP_OFFSET = 25,
IAVF_RXDID_LAST = 63,
};
@@ -355,6 +365,20 @@ enum iavf_rx_flex_desc_status_error_0_bits {
IAVF_RX_FLEX_DESC_STATUS0_LAST /* this entry must be last!!! */
};
+enum iavf_rx_flex_desc_status_error_1_bits {
+ /* Note: These are predefined bit offsets */
+ IAVF_RX_FLEX_DESC_STATUS1_CPM_S = 0, /* 4 bits */
+ IAVF_RX_FLEX_DESC_STATUS1_NAT_S = 4,
+ IAVF_RX_FLEX_DESC_STATUS1_CRYPTO_S = 5,
+ /* [10:6] reserved */
+ IAVF_RX_FLEX_DESC_STATUS1_L2TAG2P_S = 11,
+ IAVF_RX_FLEX_DESC_STATUS1_XTRMD2_VALID_S = 12,
+ IAVF_RX_FLEX_DESC_STATUS1_XTRMD3_VALID_S = 13,
+ IAVF_RX_FLEX_DESC_STATUS1_XTRMD4_VALID_S = 14,
+ IAVF_RX_FLEX_DESC_STATUS1_XTRMD5_VALID_S = 15,
+ IAVF_RX_FLEX_DESC_STATUS1_LAST /* this entry must be last!!! */
+};
+
/* for iavf_32b_rx_flex_desc.ptype_flex_flags0 member */
#define IAVF_RX_FLEX_DESC_PTYPE_M (0x3FF) /* 10-bits */
@@ -439,6 +463,8 @@ int iavf_tx_vec_dev_check(struct rte_eth_dev *dev);
int iavf_rxq_vec_setup(struct iavf_rx_queue *rxq);
int iavf_txq_vec_setup(struct iavf_tx_queue *txq);
+uint8_t iavf_proto_xtr_type_to_rxdid(uint8_t xtr_type);
+
const uint32_t *iavf_get_default_ptype_table(void);
static inline
diff --git a/drivers/net/iavf/iavf_rxtx_vec_common.h b/drivers/net/iavf/iavf_rxtx_vec_common.h
index 25bb502de2..7ad1e0f68a 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_common.h
+++ b/drivers/net/iavf/iavf_rxtx_vec_common.h
@@ -224,6 +224,9 @@ iavf_rx_vec_queue_default(struct iavf_rx_queue *rxq)
if (rxq->nb_rx_desc % rxq->rx_free_thresh)
return -1;
+ if (rxq->proto_xtr != IAVF_PROTO_XTR_NONE)
+ return -1;
+
return 0;
}
diff --git a/drivers/net/iavf/iavf_vchnl.c b/drivers/net/iavf/iavf_vchnl.c
index db0b768765..5e7142893b 100644
--- a/drivers/net/iavf/iavf_vchnl.c
+++ b/drivers/net/iavf/iavf_vchnl.c
@@ -648,25 +648,27 @@ iavf_configure_queues(struct iavf_adapter *adapter)
#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
if (vf->vf_res->vf_cap_flags &
- VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC &&
- vf->supported_rxdid & BIT(IAVF_RXDID_COMMS_OVS_1)) {
- vc_qp->rxq.rxdid = IAVF_RXDID_COMMS_OVS_1;
- PMD_DRV_LOG(NOTICE, "request RXDID == %d in "
- "Queue[%d]", vc_qp->rxq.rxdid, i);
+ VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC &&
+ vf->supported_rxdid & BIT(rxq[i]->rxdid)) {
+ vc_qp->rxq.rxdid = rxq[i]->rxdid;
+ PMD_DRV_LOG(NOTICE, "request RXDID[%d] in Queue[%d]",
+ vc_qp->rxq.rxdid, i);
} else {
+ PMD_DRV_LOG(NOTICE, "RXDID[%d] is not supported, "
+ "request default RXDID[%d] in Queue[%d]",
+ rxq[i]->rxdid, IAVF_RXDID_LEGACY_1, i);
vc_qp->rxq.rxdid = IAVF_RXDID_LEGACY_1;
- PMD_DRV_LOG(NOTICE, "request RXDID == %d in "
- "Queue[%d]", vc_qp->rxq.rxdid, i);
}
#else
if (vf->vf_res->vf_cap_flags &
VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC &&
vf->supported_rxdid & BIT(IAVF_RXDID_LEGACY_0)) {
vc_qp->rxq.rxdid = IAVF_RXDID_LEGACY_0;
- PMD_DRV_LOG(NOTICE, "request RXDID == %d in "
- "Queue[%d]", vc_qp->rxq.rxdid, i);
+ PMD_DRV_LOG(NOTICE, "request RXDID[%d] in Queue[%d]",
+ vc_qp->rxq.rxdid, i);
} else {
- PMD_DRV_LOG(ERR, "RXDID == 0 is not supported");
+ PMD_DRV_LOG(ERR, "RXDID[%d] is not supported",
+ IAVF_RXDID_LEGACY_0);
return -1;
}
#endif
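
The negotiation in the hunk above boils down to a bitmap check: use the queue's requested RXDID if the PF advertises it in `supported_rxdid`, otherwise fall back to a default descriptor ID. A minimal, self-contained sketch of that check (the `BIT` macro and the RXDID values below are stand-ins for illustration, not the driver's definitions):

```c
#include <stdint.h>

#define BIT(n) (1ULL << (n))

/* Return the requested RX descriptor ID if the PF advertises support for
 * it in the 'supported' bitmap, otherwise fall back to a default ID. */
static uint8_t
pick_rxdid(uint64_t supported, uint8_t requested, uint8_t fallback)
{
	return (supported & BIT(requested)) ? requested : fallback;
}
```

With `supported = BIT(1) | BIT(16)`, requesting RXDID 16 succeeds, while requesting an unadvertised RXDID such as 22 falls back to RXDID 1.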
diff --git a/drivers/net/iavf/meson.build b/drivers/net/iavf/meson.build
index 33407c5032..c1c74571a1 100644
--- a/drivers/net/iavf/meson.build
+++ b/drivers/net/iavf/meson.build
@@ -35,3 +35,5 @@ if arch_subdir == 'x86'
objs += iavf_avx2_lib.extract_objects('iavf_rxtx_vec_avx2.c')
endif
endif
+
+install_headers('rte_pmd_iavf.h')
diff --git a/drivers/net/iavf/rte_pmd_iavf.h b/drivers/net/iavf/rte_pmd_iavf.h
new file mode 100644
index 0000000000..5e41568c32
--- /dev/null
+++ b/drivers/net/iavf/rte_pmd_iavf.h
@@ -0,0 +1,250 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#ifndef _RTE_PMD_IAVF_H_
+#define _RTE_PMD_IAVF_H_
+
+/**
+ * @file rte_pmd_iavf.h
+ *
+ * iavf PMD specific functions.
+ *
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
+ *
+ */
+
+#include <stdio.h>
+#include <rte_mbuf.h>
+#include <rte_mbuf_dyn.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * The supported network flexible descriptor's extraction metadata format.
+ */
+union rte_net_iavf_proto_xtr_metadata {
+ uint32_t metadata;
+
+ struct {
+ uint16_t data0;
+ uint16_t data1;
+ } raw;
+
+ struct {
+ uint16_t stag_vid:12,
+ stag_dei:1,
+ stag_pcp:3;
+ uint16_t ctag_vid:12,
+ ctag_dei:1,
+ ctag_pcp:3;
+ } vlan;
+
+ struct {
+ uint16_t protocol:8,
+ ttl:8;
+ uint16_t tos:8,
+ ihl:4,
+ version:4;
+ } ipv4;
+
+ struct {
+ uint16_t hoplimit:8,
+ nexthdr:8;
+ uint16_t flowhi4:4,
+ tc:8,
+ version:4;
+ } ipv6;
+
+ struct {
+ uint16_t flowlo16;
+ uint16_t flowhi4:4,
+ tc:8,
+ version:4;
+ } ipv6_flow;
+
+ struct {
+ uint16_t fin:1,
+ syn:1,
+ rst:1,
+ psh:1,
+ ack:1,
+ urg:1,
+ ece:1,
+ cwr:1,
+ res1:4,
+ doff:4;
+ uint16_t rsvd;
+ } tcp;
+
+ uint32_t ip_ofs;
+};
+
+/* Offset of mbuf dynamic field for flexible descriptor's extraction data */
+extern int rte_net_iavf_dynfield_proto_xtr_metadata_offs;
+
+/* Mask of mbuf dynamic flags for flexible descriptor's extraction type */
+extern uint64_t rte_net_iavf_dynflag_proto_xtr_vlan_mask;
+extern uint64_t rte_net_iavf_dynflag_proto_xtr_ipv4_mask;
+extern uint64_t rte_net_iavf_dynflag_proto_xtr_ipv6_mask;
+extern uint64_t rte_net_iavf_dynflag_proto_xtr_ipv6_flow_mask;
+extern uint64_t rte_net_iavf_dynflag_proto_xtr_tcp_mask;
+extern uint64_t rte_net_iavf_dynflag_proto_xtr_ip_offset_mask;
+
+/**
+ * The mbuf dynamic field pointer for flexible descriptor's extraction metadata.
+ */
+#define RTE_NET_IAVF_DYNF_PROTO_XTR_METADATA(m) \
+ RTE_MBUF_DYNFIELD((m), \
+ rte_net_iavf_dynfield_proto_xtr_metadata_offs, \
+ uint32_t *)
+
+/**
+ * The mbuf dynamic flag for VLAN protocol extraction metadata, it is valid
+ * when dev_args 'proto_xtr' has 'vlan' specified.
+ */
+#define RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_VLAN \
+ (rte_net_iavf_dynflag_proto_xtr_vlan_mask)
+
+/**
+ * The mbuf dynamic flag for IPv4 protocol extraction metadata, it is valid
+ * when dev_args 'proto_xtr' has 'ipv4' specified.
+ */
+#define RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_IPV4 \
+ (rte_net_iavf_dynflag_proto_xtr_ipv4_mask)
+
+/**
+ * The mbuf dynamic flag for IPv6 protocol extraction metadata, it is valid
+ * when dev_args 'proto_xtr' has 'ipv6' specified.
+ */
+#define RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_IPV6 \
+ (rte_net_iavf_dynflag_proto_xtr_ipv6_mask)
+
+/**
+ * The mbuf dynamic flag for IPv6 with flow protocol extraction metadata, it is
+ * valid when dev_args 'proto_xtr' has 'ipv6_flow' specified.
+ */
+#define RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_IPV6_FLOW \
+ (rte_net_iavf_dynflag_proto_xtr_ipv6_flow_mask)
+
+/**
+ * The mbuf dynamic flag for TCP protocol extraction metadata, it is valid
+ * when dev_args 'proto_xtr' has 'tcp' specified.
+ */
+#define RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_TCP \
+ (rte_net_iavf_dynflag_proto_xtr_tcp_mask)
+
+/**
+ * The mbuf dynamic flag for IP_OFFSET extraction metadata, it is valid
+ * when dev_args 'proto_xtr' has 'ip_offset' specified.
+ */
+#define RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_IP_OFFSET \
+ (rte_net_iavf_dynflag_proto_xtr_ip_offset_mask)
+
+/**
+ * Check if mbuf dynamic field for flexible descriptor's extraction metadata
+ * is registered.
+ *
+ * @return
+ * True if registered, false otherwise.
+ */
+__rte_experimental
+static __rte_always_inline int
+rte_net_iavf_dynf_proto_xtr_metadata_avail(void)
+{
+ return rte_net_iavf_dynfield_proto_xtr_metadata_offs != -1;
+}
+
+/**
+ * Get the mbuf dynamic field for flexible descriptor's extraction metadata.
+ *
+ * @param m
+ * The pointer to the mbuf.
+ * @return
+ * The saved protocol extraction metadata.
+ */
+__rte_experimental
+static __rte_always_inline uint32_t
+rte_net_iavf_dynf_proto_xtr_metadata_get(struct rte_mbuf *m)
+{
+ return *RTE_NET_IAVF_DYNF_PROTO_XTR_METADATA(m);
+}
+
+/**
+ * Dump the mbuf dynamic field for flexible descriptor's extraction metadata.
+ *
+ * @param m
+ * The pointer to the mbuf.
+ */
+__rte_experimental
+static inline void
+rte_net_iavf_dump_proto_xtr_metadata(struct rte_mbuf *m)
+{
+ union rte_net_iavf_proto_xtr_metadata data;
+
+ if (!rte_net_iavf_dynf_proto_xtr_metadata_avail())
+ return;
+
+ data.metadata = rte_net_iavf_dynf_proto_xtr_metadata_get(m);
+
+ if (m->ol_flags & RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_VLAN)
+ printf(" - Flexible descriptor's Metadata: [0x%04x:0x%04x],"
+ "vlan,stag=%u:%u:%u,ctag=%u:%u:%u",
+ data.raw.data0, data.raw.data1,
+ data.vlan.stag_pcp,
+ data.vlan.stag_dei,
+ data.vlan.stag_vid,
+ data.vlan.ctag_pcp,
+ data.vlan.ctag_dei,
+ data.vlan.ctag_vid);
+ else if (m->ol_flags & RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_IPV4)
+ printf(" - Flexible descriptor's Metadata: [0x%04x:0x%04x],"
+ "ipv4,ver=%u,hdrlen=%u,tos=%u,ttl=%u,proto=%u",
+ data.raw.data0, data.raw.data1,
+ data.ipv4.version,
+ data.ipv4.ihl,
+ data.ipv4.tos,
+ data.ipv4.ttl,
+ data.ipv4.protocol);
+ else if (m->ol_flags & RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_IPV6)
+ printf(" - Flexible descriptor's Metadata: [0x%04x:0x%04x],"
+ "ipv6,ver=%u,tc=%u,flow_hi4=0x%x,nexthdr=%u,hoplimit=%u",
+ data.raw.data0, data.raw.data1,
+ data.ipv6.version,
+ data.ipv6.tc,
+ data.ipv6.flowhi4,
+ data.ipv6.nexthdr,
+ data.ipv6.hoplimit);
+ else if (m->ol_flags & RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_IPV6_FLOW)
+ printf(" - Flexible descriptor's Metadata: [0x%04x:0x%04x],"
+ "ipv6_flow,ver=%u,tc=%u,flow=0x%x%04x",
+ data.raw.data0, data.raw.data1,
+ data.ipv6_flow.version,
+ data.ipv6_flow.tc,
+ data.ipv6_flow.flowhi4,
+ data.ipv6_flow.flowlo16);
+ else if (m->ol_flags & RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_TCP)
+ printf(" - Flexible descriptor's Metadata: [0x%04x:0x%04x],"
+ "tcp,doff=%u,flags=%s%s%s%s%s%s%s%s",
+ data.raw.data0, data.raw.data1,
+ data.tcp.doff,
+ data.tcp.cwr ? "C" : "",
+ data.tcp.ece ? "E" : "",
+ data.tcp.urg ? "U" : "",
+ data.tcp.ack ? "A" : "",
+ data.tcp.psh ? "P" : "",
+ data.tcp.rst ? "R" : "",
+ data.tcp.syn ? "S" : "",
+ data.tcp.fin ? "F" : "");
+ else if (m->ol_flags & RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_IP_OFFSET)
+ printf(" - Flexible descriptor's Extraction: ip_offset=%u",
+ data.ip_ofs);
+}
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_PMD_IAVF_H_ */
diff --git a/drivers/net/iavf/rte_pmd_iavf_version.map b/drivers/net/iavf/rte_pmd_iavf_version.map
index 4a76d1d52d..d7afd31d14 100644
--- a/drivers/net/iavf/rte_pmd_iavf_version.map
+++ b/drivers/net/iavf/rte_pmd_iavf_version.map
@@ -1,3 +1,16 @@
DPDK_21 {
local: *;
};
+
+EXPERIMENTAL {
+ global:
+
+ # added in 20.11
+ rte_net_iavf_dynfield_proto_xtr_metadata_offs;
+ rte_net_iavf_dynflag_proto_xtr_vlan_mask;
+ rte_net_iavf_dynflag_proto_xtr_ipv4_mask;
+ rte_net_iavf_dynflag_proto_xtr_ipv6_mask;
+ rte_net_iavf_dynflag_proto_xtr_ipv6_flow_mask;
+ rte_net_iavf_dynflag_proto_xtr_tcp_mask;
+ rte_net_iavf_dynflag_proto_xtr_ip_offset_mask;
+};
--
2.20.1
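
The extraction word delivered in the mbuf dynamic field is decoded through the bit-field views of `rte_net_iavf_proto_xtr_metadata` in the header above. As a rough, self-contained sketch of that decoding (the union here is a trimmed copy limited to the VLAN view, and `xtr_md_pack_vlan` is an illustrative helper, not part of the patch):

```c
#include <stdint.h>

/* Trimmed, illustrative copy of the VLAN view of
 * union rte_net_iavf_proto_xtr_metadata -- not the installed header. */
union xtr_md {
	uint32_t metadata;
	struct {
		uint16_t stag_vid:12, stag_dei:1, stag_pcp:3;
		uint16_t ctag_vid:12, ctag_dei:1, ctag_pcp:3;
	} vlan;
};

/* Pack outer (S-TAG) and inner (C-TAG) VLAN fields into one 32-bit word,
 * the same shape the flexible descriptor reports for 'proto_xtr=vlan'. */
static union xtr_md
xtr_md_pack_vlan(uint16_t svid, uint8_t sdei, uint8_t spcp,
		 uint16_t cvid, uint8_t cdei, uint8_t cpcp)
{
	union xtr_md md = { .metadata = 0 };

	md.vlan.stag_vid = svid & 0xFFF;
	md.vlan.stag_dei = sdei & 0x1;
	md.vlan.stag_pcp = spcp & 0x7;
	md.vlan.ctag_vid = cvid & 0xFFF;
	md.vlan.ctag_dei = cdei & 0x1;
	md.vlan.ctag_pcp = cpcp & 0x7;
	return md;
}
```

Reading the fields back mirrors what `rte_net_iavf_dump_proto_xtr_metadata()` prints for a VLAN extraction; the raw `metadata` layout is implementation-defined bit-field packing, so only the named fields are portable.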
* Re: [dpdk-dev] [PATCH v8] net/iavf: support flex desc metadata extraction
2020-10-13 8:17 ` [dpdk-dev] [PATCH v8] " Jeff Guo
@ 2020-10-13 10:10 ` Zhang, Qi Z
2020-10-14 12:31 ` Ferruh Yigit
1 sibling, 0 replies; 40+ messages in thread
From: Zhang, Qi Z @ 2020-10-13 10:10 UTC (permalink / raw)
To: Guo, Jia, Wu, Jingjing, Xing, Beilei, Yigit, Ferruh; +Cc: dev, Wang, Haiyue
> -----Original Message-----
> From: Guo, Jia <jia.guo@intel.com>
> Sent: Tuesday, October 13, 2020 4:18 PM
> To: Wu, Jingjing <jingjing.wu@intel.com>; Zhang, Qi Z <qi.z.zhang@intel.com>;
> Xing, Beilei <beilei.xing@intel.com>; Yigit, Ferruh <ferruh.yigit@intel.com>
> Cc: dev@dpdk.org; Wang, Haiyue <haiyue.wang@intel.com>; Guo, Jia
> <jia.guo@intel.com>
> Subject: [PATCH v8] net/iavf: support flex desc metadata extraction
>
> Enable metadata extraction for flexible descriptors in AVF, that would allow
> network function directly get metadata without additional parsing which
> would reduce the CPU cost for VFs. The enabling metadata extractions involve
> the metadata of VLAN/IPv4/IPv6/IPv6-FLOW/TCP/MPLS flexible descriptors,
> and the VF could negotiate the capability of the flexible descriptor with PF and
> correspondingly configure the specific offload at receiving queues.
>
> Signed-off-by: Jeff Guo <jia.guo@intel.com>
> Acked-by: Haiyue Wang <haiyue.wang@intel.com>
Applied to dpdk-next-net-intel.
Thanks
Qi
* Re: [dpdk-dev] [PATCH v8] net/iavf: support flex desc metadata extraction
2020-10-13 8:17 ` [dpdk-dev] [PATCH v8] " Jeff Guo
2020-10-13 10:10 ` Zhang, Qi Z
@ 2020-10-14 12:31 ` Ferruh Yigit
2020-10-14 14:03 ` Bruce Richardson
` (2 more replies)
1 sibling, 3 replies; 40+ messages in thread
From: Ferruh Yigit @ 2020-10-14 12:31 UTC (permalink / raw)
To: Jeff Guo, jingjing.wu, qi.z.zhang, beilei.xing
Cc: dev, haiyue.wang, Bruce Richardson, Olivier Matz
On 10/13/2020 9:17 AM, Jeff Guo wrote:
> Enable metadata extraction for flexible descriptors in AVF, that would
> allow network function directly get metadata without additional parsing
> which would reduce the CPU cost for VFs. The enabling metadata
> extractions involve the metadata of VLAN/IPv4/IPv6/IPv6-FLOW/TCP/MPLS
> flexible descriptors, and the VF could negotiate the capability of
> the flexible descriptor with PF and correspondingly configure the
> specific offload at receiving queues.
>
> Signed-off-by: Jeff Guo <jia.guo@intel.com>
> Acked-by: Haiyue Wang <haiyue.wang@intel.com>
> ---
> v8:
> rebase patch for apply issue
>
> v7:
> clean some useless and add doc
>
> v6:
> rebase patch
>
> v5:
> remove ovs configure since ovs is not protocol extraction
>
> v4:
> add flex desc type in rx queue for handling vector path
> handle ovs flex type
>
> v3:
> export these global symbols into .map
>
> v2:
> remove makefile change and modify the rxdid handling
> ---
> config/rte_config.h | 3 +
> doc/guides/nics/intel_vf.rst | 16 +
> doc/guides/rel_notes/release_20_11.rst | 6 +
> drivers/net/iavf/iavf.h | 24 +-
> drivers/net/iavf/iavf_ethdev.c | 394 ++++++++++++++++++++++
> drivers/net/iavf/iavf_rxtx.c | 252 ++++++++++++--
> drivers/net/iavf/iavf_rxtx.h | 168 +++++----
> drivers/net/iavf/iavf_rxtx_vec_common.h | 3 +
> drivers/net/iavf/iavf_vchnl.c | 22 +-
> drivers/net/iavf/meson.build | 2 +
> drivers/net/iavf/rte_pmd_iavf.h | 250 ++++++++++++++
> drivers/net/iavf/rte_pmd_iavf_version.map | 13 +
> 12 files changed, 1039 insertions(+), 114 deletions(-)
> create mode 100644 drivers/net/iavf/rte_pmd_iavf.h
>
> diff --git a/config/rte_config.h b/config/rte_config.h
> index 03d90d78bc..2c53072c3d 100644
> --- a/config/rte_config.h
> +++ b/config/rte_config.h
> @@ -127,6 +127,9 @@
> #define RTE_LIBRTE_I40E_QUEUE_NUM_PER_VF 4
> #define RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM 4
>
> +/* iavf defines */
> +#undef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
> +
Hi Jeff,
The 'RTE_LIBRTE_IAVF_16BYTE_RX_DESC' was already there, not introduced with this
patch, so I think it is better to add this change as a separate patch.
Also not sure if we want to add more config options to the 'rte_config.h';
indeed it's the other way around, and we are trying to get rid of as many
compile-time options as possible.
cc'ed Bruce too.
> /* Ring net PMD settings */
> #define RTE_PMD_RING_MAX_RX_RINGS 16
> #define RTE_PMD_RING_MAX_TX_RINGS 16
> diff --git a/doc/guides/nics/intel_vf.rst b/doc/guides/nics/intel_vf.rst
> index ade5152595..207f456143 100644
> --- a/doc/guides/nics/intel_vf.rst
> +++ b/doc/guides/nics/intel_vf.rst
> @@ -615,3 +615,19 @@ which belongs to the destination VF on the VM.
> .. figure:: img/inter_vm_comms.*
>
> Inter-VM Communication
> +
> +
> +Pre-Installation Configuration
> +------------------------------
> +
> +Config File Options
> +~~~~~~~~~~~~~~~~~~~
> +
> +The following options can be modified in the ``config`` file.
> +Please note that enabling debugging options may affect system performance.
> +
> +- ``CONFIG_RTE_LIBRTE_IAVF_16BYTE_RX_DESC`` (default ``n``)
There is no 'CONFIG_RTE_LIBRTE_IAVF_16BYTE_RX_DESC' anymore; this naming is from
the make build days.
Instead, what do you think not adding the 'RTE_LIBRTE_IAVF_16BYTE_RX_DESC' to
the 'rte_config.h', but document how this flag can be provided by meson during
build:
meson -Dc_args="-DRTE_LIBRTE_IAVF_16BYTE_RX_DESC"
And we should plan, in the long term, to convert this compile-time flag to a
runtime devarg.
What do you think?
> +
> > + Toggle to use a 16-byte Rx descriptor; by default the Rx descriptor is 32 bytes.
> > + Configuring a 16-byte Rx descriptor may cause a negotiation failure during VF driver initialization
> > + if the PF driver doesn't support it.
> diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
> index e7691ee732..93d3ccc60a 100644
> --- a/doc/guides/rel_notes/release_20_11.rst
> +++ b/doc/guides/rel_notes/release_20_11.rst
> @@ -160,6 +160,12 @@ New Features
> packets with specified ratio, and apply with own set of actions with a fate
> action. When the ratio is set to 1 then the packets will be 100% mirrored.
>
> +* **Updated Intel iavf driver.**
> +
> + Updated iavf PMD with new features and improvements, including:
> +
> + * Added support for flexible descriptor metadata extraction.
> +
Can you please move the update to the net drivers block, instead of the very bottom.
There is an order in the release notes (as commented in section header) like:
- core libs
- ethdev lib related changes
- ethdev PMDS change
- ...
<...>
> +
> +EXPERIMENTAL {
> + global:
> +
> + # added in 20.11
> + rte_net_iavf_dynfield_proto_xtr_metadata_offs;
> + rte_net_iavf_dynflag_proto_xtr_vlan_mask;
> + rte_net_iavf_dynflag_proto_xtr_ipv4_mask;
> + rte_net_iavf_dynflag_proto_xtr_ipv6_mask;
> + rte_net_iavf_dynflag_proto_xtr_ipv6_flow_mask;
> + rte_net_iavf_dynflag_proto_xtr_tcp_mask;
> + rte_net_iavf_dynflag_proto_xtr_ip_offset_mask;
Previously the "rte_pmd_xxx" namespace was used for PMD-specific APIs; can you
please switch to that?
'rte_net_' is used by the 'librte_net' library.
The above list is the dynfield values. What is the correct usage for dynfields:
1- Put dynfield names into the header, and the application does a lookup
('rte_mbuf_dynfield_lookup()') to get the dynfield values.
or
2- Expose dynfield values to be accessed directly from application, as done above.
@Olivier, can you please advise?
I can see (1) has the advantage of portability if more than one PMD supports the
same dynfield names, but that seems not to be the case for the above ones.
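
For reference, option (1) is the lookup-by-name pattern: a PMD registers a dynamic field under a well-known name, and the application resolves the offset at runtime. A self-contained mock of that pattern follows (this is NOT the real `rte_mbuf_dynfield_register()`/`rte_mbuf_dynfield_lookup()` API, just an illustration of why shared names make the lookup portable):

```c
#include <string.h>

/* Minimal mock of an mbuf dynamic-field registry, contrasting option (1),
 * lookup by name at runtime, with option (2), exported offset symbols. */
#define MAX_DYNFIELDS 8

struct dynfield {
	const char *name;
	int offset;
};

static struct dynfield registry[MAX_DYNFIELDS];
static int nb_fields;
static int next_offset = 64; /* pretend free mbuf space starts here */

/* Register a field; re-registering an existing name returns its offset. */
static int
dynfield_register(const char *name, int size)
{
	int i;

	for (i = 0; i < nb_fields; i++)
		if (strcmp(registry[i].name, name) == 0)
			return registry[i].offset;
	if (nb_fields == MAX_DYNFIELDS)
		return -1; /* no space left */
	registry[nb_fields].name = name;
	registry[nb_fields].offset = next_offset;
	next_offset += size;
	return registry[nb_fields++].offset;
}

/* Option (1): the application looks the offset up by name at runtime. */
static int
dynfield_lookup(const char *name)
{
	int i;

	for (i = 0; i < nb_fields; i++)
		if (strcmp(registry[i].name, name) == 0)
			return registry[i].offset;
	return -1; /* not registered */
}
```

If two PMDs register the same field name, both get the same offset, so an application that does a single lookup works with either driver; with PMD-private fields like the ones above, that portability benefit does not apply.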
* Re: [dpdk-dev] [PATCH v8] net/iavf: support flex desc metadata extraction
2020-10-14 12:31 ` Ferruh Yigit
@ 2020-10-14 14:03 ` Bruce Richardson
2020-10-15 3:40 ` Guo, Jia
2020-10-15 5:26 ` Guo, Jia
2020-10-26 9:37 ` Olivier Matz
2 siblings, 1 reply; 40+ messages in thread
From: Bruce Richardson @ 2020-10-14 14:03 UTC (permalink / raw)
To: Ferruh Yigit
Cc: Jeff Guo, jingjing.wu, qi.z.zhang, beilei.xing, dev, haiyue.wang,
Olivier Matz
On Wed, Oct 14, 2020 at 01:31:39PM +0100, Ferruh Yigit wrote:
> On 10/13/2020 9:17 AM, Jeff Guo wrote:
> > Enable metadata extraction for flexible descriptors in AVF, that would
> > allow network function directly get metadata without additional parsing
> > which would reduce the CPU cost for VFs. The enabling metadata
> > extractions involve the metadata of VLAN/IPv4/IPv6/IPv6-FLOW/TCP/MPLS
> > flexible descriptors, and the VF could negotiate the capability of
> > the flexible descriptor with PF and correspondingly configure the
> > specific offload at receiving queues.
> >
> > Signed-off-by: Jeff Guo <jia.guo@intel.com>
> > Acked-by: Haiyue Wang <haiyue.wang@intel.com>
> > ---
> > v8:
> > rebase patch for apply issue
> >
> > v7:
> > clean some useless and add doc
> >
> > v6:
> > rebase patch
> >
> > v5:
> > remove ovs configure since ovs is not protocol extraction
> >
> > v4:
> > add flex desc type in rx queue for handling vector path
> > handle ovs flex type
> >
> > v3:
> > export these global symbols into .map
> >
> > v2:
> > remove makefile change and modify the rxdid handling
> > ---
> > config/rte_config.h | 3 +
> > doc/guides/nics/intel_vf.rst | 16 +
> > doc/guides/rel_notes/release_20_11.rst | 6 +
> > drivers/net/iavf/iavf.h | 24 +-
> > drivers/net/iavf/iavf_ethdev.c | 394 ++++++++++++++++++++++
> > drivers/net/iavf/iavf_rxtx.c | 252 ++++++++++++--
> > drivers/net/iavf/iavf_rxtx.h | 168 +++++----
> > drivers/net/iavf/iavf_rxtx_vec_common.h | 3 +
> > drivers/net/iavf/iavf_vchnl.c | 22 +-
> > drivers/net/iavf/meson.build | 2 +
> > drivers/net/iavf/rte_pmd_iavf.h | 250 ++++++++++++++
> > drivers/net/iavf/rte_pmd_iavf_version.map | 13 +
> > 12 files changed, 1039 insertions(+), 114 deletions(-)
> > create mode 100644 drivers/net/iavf/rte_pmd_iavf.h
> >
> > diff --git a/config/rte_config.h b/config/rte_config.h
> > index 03d90d78bc..2c53072c3d 100644
> > --- a/config/rte_config.h
> > +++ b/config/rte_config.h
> > @@ -127,6 +127,9 @@
> > #define RTE_LIBRTE_I40E_QUEUE_NUM_PER_VF 4
> > #define RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM 4
> > +/* iavf defines */
> > +#undef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
> > +
>
> Hi Jeff,
>
> The 'RTE_LIBRTE_IAVF_16BYTE_RX_DESC' was already there, not introduced with
> this patch, so I think better to add this change as different patch.
>
> Also not sure if we want to add more config options to the 'rte_config.h',
> indeed otherway around and we are trying to get rid of as much as compile
> time optios.
> cc'ed Bruce too.
>
Actually, there is also patchset [1] to consider, which changes the format
of these values in the header file. It's better not to "undef" the unset
values, as that prevents someone from setting them via cflags/c_args when
building.
/Bruce
[1] http://patches.dpdk.org/project/dpdk/list/?series=11928
* Re: [dpdk-dev] [PATCH v8] net/iavf: support flex desc metadata extraction
2020-10-14 14:03 ` Bruce Richardson
@ 2020-10-15 3:40 ` Guo, Jia
0 siblings, 0 replies; 40+ messages in thread
From: Guo, Jia @ 2020-10-15 3:40 UTC (permalink / raw)
To: Richardson, Bruce, Yigit, Ferruh
Cc: Wu, Jingjing, Zhang, Qi Z, Xing, Beilei, dev, Wang, Haiyue, Olivier Matz
> -----Original Message-----
> From: Bruce Richardson <bruce.richardson@intel.com>
> Sent: Wednesday, October 14, 2020 10:04 PM
> To: Yigit, Ferruh <ferruh.yigit@intel.com>
> Cc: Guo, Jia <jia.guo@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>;
> Zhang, Qi Z <qi.z.zhang@intel.com>; Xing, Beilei <beilei.xing@intel.com>;
> dev@dpdk.org; Wang, Haiyue <haiyue.wang@intel.com>; Olivier Matz
> <olivier.matz@6wind.com>
> Subject: Re: [PATCH v8] net/iavf: support flex desc metadata extraction
>
> On Wed, Oct 14, 2020 at 01:31:39PM +0100, Ferruh Yigit wrote:
> > On 10/13/2020 9:17 AM, Jeff Guo wrote:
> > > Enable metadata extraction for flexible descriptors in AVF, that
> > > would allow network function directly get metadata without
> > > additional parsing which would reduce the CPU cost for VFs. The
> > > enabling metadata extractions involve the metadata of
> > > VLAN/IPv4/IPv6/IPv6-FLOW/TCP/MPLS flexible descriptors, and the VF
> > > could negotiate the capability of the flexible descriptor with PF
> > > and correspondingly configure the specific offload at receiving queues.
> > >
> > > Signed-off-by: Jeff Guo <jia.guo@intel.com>
> > > Acked-by: Haiyue Wang <haiyue.wang@intel.com>
> > > ---
> > > v8:
> > > rebase patch for apply issue
> > >
> > > v7:
> > > clean some useless and add doc
> > >
> > > v6:
> > > rebase patch
> > >
> > > v5:
> > > remove ovs configure since ovs is not protocol extraction
> > >
> > > v4:
> > > add flex desc type in rx queue for handling vector path handle ovs
> > > flex type
> > >
> > > v3:
> > > export these global symbols into .map
> > >
> > > v2:
> > > remove makefile change and modify the rxdid handling
> > > ---
> > > config/rte_config.h | 3 +
> > > doc/guides/nics/intel_vf.rst | 16 +
> > > doc/guides/rel_notes/release_20_11.rst | 6 +
> > > drivers/net/iavf/iavf.h | 24 +-
> > > drivers/net/iavf/iavf_ethdev.c | 394 ++++++++++++++++++++++
> > > drivers/net/iavf/iavf_rxtx.c | 252 ++++++++++++--
> > > drivers/net/iavf/iavf_rxtx.h | 168 +++++----
> > > drivers/net/iavf/iavf_rxtx_vec_common.h | 3 +
> > > drivers/net/iavf/iavf_vchnl.c | 22 +-
> > > drivers/net/iavf/meson.build | 2 +
> > > drivers/net/iavf/rte_pmd_iavf.h | 250 ++++++++++++++
> > > drivers/net/iavf/rte_pmd_iavf_version.map | 13 +
> > > 12 files changed, 1039 insertions(+), 114 deletions(-)
> > > create mode 100644 drivers/net/iavf/rte_pmd_iavf.h
> > >
> > > diff --git a/config/rte_config.h b/config/rte_config.h index
> > > 03d90d78bc..2c53072c3d 100644
> > > --- a/config/rte_config.h
> > > +++ b/config/rte_config.h
> > > @@ -127,6 +127,9 @@
> > > #define RTE_LIBRTE_I40E_QUEUE_NUM_PER_VF 4
> > > #define RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM 4
> > > +/* iavf defines */
> > > +#undef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
> > > +
> >
> > Hi Jeff,
> >
> > The 'RTE_LIBRTE_IAVF_16BYTE_RX_DESC' was already there, not
> introduced
> > with this patch, so I think better to add this change as different patch.
> >
> > Also not sure if we want to add more config options to the
> > 'rte_config.h', indeed otherway around and we are trying to get rid of
> > as much as compile time optios.
> > cc'ed Bruce too.
> >
> Actually, there is also patchset [1] to consider, which changes the format of
> these values in the header file. It's better to not "undef" the not set values,
> as that prevents someone from setting them via cflags/c_args when building.
>
> /Bruce
>
> [1] http://patches.dpdk.org/project/dpdk/list/?series=11928
OK, makes sense, and I will follow the new policy when I update the new version.
* Re: [dpdk-dev] [PATCH v8] net/iavf: support flex desc metadata extraction
2020-10-14 12:31 ` Ferruh Yigit
2020-10-14 14:03 ` Bruce Richardson
@ 2020-10-15 5:26 ` Guo, Jia
2020-10-15 8:33 ` Ferruh Yigit
2020-10-26 9:37 ` Olivier Matz
2 siblings, 1 reply; 40+ messages in thread
From: Guo, Jia @ 2020-10-15 5:26 UTC (permalink / raw)
To: Yigit, Ferruh, Wu, Jingjing, Zhang, Qi Z, Xing, Beilei
Cc: dev, Wang, Haiyue, Richardson, Bruce, Olivier Matz
> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@intel.com>
> Sent: Wednesday, October 14, 2020 8:32 PM
> To: Guo, Jia <jia.guo@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>;
> Zhang, Qi Z <qi.z.zhang@intel.com>; Xing, Beilei <beilei.xing@intel.com>
> Cc: dev@dpdk.org; Wang, Haiyue <haiyue.wang@intel.com>; Richardson,
> Bruce <bruce.richardson@intel.com>; Olivier Matz <olivier.matz@6wind.com>
> Subject: Re: [PATCH v8] net/iavf: support flex desc metadata extraction
>
> On 10/13/2020 9:17 AM, Jeff Guo wrote:
> > Enable metadata extraction for flexible descriptors in AVF, that would
> > allow network function directly get metadata without additional
> > parsing which would reduce the CPU cost for VFs. The enabling metadata
> > extractions involve the metadata of VLAN/IPv4/IPv6/IPv6-
> FLOW/TCP/MPLS
> > flexible descriptors, and the VF could negotiate the capability of the
> > flexible descriptor with PF and correspondingly configure the specific
> > offload at receiving queues.
> >
> > Signed-off-by: Jeff Guo <jia.guo@intel.com>
> > Acked-by: Haiyue Wang <haiyue.wang@intel.com>
> > ---
> > v8:
> > rebase patch for apply issue
> >
> > v7:
> > clean some useless and add doc
> >
> > v6:
> > rebase patch
> >
> > v5:
> > remove ovs configure since ovs is not protocol extraction
> >
> > v4:
> > add flex desc type in rx queue for handling vector path handle ovs
> > flex type
> >
> > v3:
> > export these global symbols into .map
> >
> > v2:
> > remove makefile change and modify the rxdid handling
> > ---
> > config/rte_config.h | 3 +
> > doc/guides/nics/intel_vf.rst | 16 +
> > doc/guides/rel_notes/release_20_11.rst | 6 +
> > drivers/net/iavf/iavf.h | 24 +-
> > drivers/net/iavf/iavf_ethdev.c | 394 ++++++++++++++++++++++
> > drivers/net/iavf/iavf_rxtx.c | 252 ++++++++++++--
> > drivers/net/iavf/iavf_rxtx.h | 168 +++++----
> > drivers/net/iavf/iavf_rxtx_vec_common.h | 3 +
> > drivers/net/iavf/iavf_vchnl.c | 22 +-
> > drivers/net/iavf/meson.build | 2 +
> > drivers/net/iavf/rte_pmd_iavf.h | 250 ++++++++++++++
> > drivers/net/iavf/rte_pmd_iavf_version.map | 13 +
> > 12 files changed, 1039 insertions(+), 114 deletions(-)
> > create mode 100644 drivers/net/iavf/rte_pmd_iavf.h
> >
> > diff --git a/config/rte_config.h b/config/rte_config.h index
> > 03d90d78bc..2c53072c3d 100644
> > --- a/config/rte_config.h
> > +++ b/config/rte_config.h
> > @@ -127,6 +127,9 @@
> > #define RTE_LIBRTE_I40E_QUEUE_NUM_PER_VF 4
> > #define RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM 4
> >
> > +/* iavf defines */
> > +#undef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
> > +
>
> Hi Jeff,
>
> The 'RTE_LIBRTE_IAVF_16BYTE_RX_DESC' was already there, not introduced
> with this patch, so I think better to add this change as different patch.
>
> Also not sure if we want to add more config options to the 'rte_config.h',
> indeed otherway around and we are trying to get rid of as much as compile
> time optios.
> cc'ed Bruce too.
>
> > /* Ring net PMD settings */
> > #define RTE_PMD_RING_MAX_RX_RINGS 16
> > #define RTE_PMD_RING_MAX_TX_RINGS 16 diff --git
> > a/doc/guides/nics/intel_vf.rst b/doc/guides/nics/intel_vf.rst index
> > ade5152595..207f456143 100644
> > --- a/doc/guides/nics/intel_vf.rst
> > +++ b/doc/guides/nics/intel_vf.rst
> > @@ -615,3 +615,19 @@ which belongs to the destination VF on the VM.
> > .. figure:: img/inter_vm_comms.*
> >
> > Inter-VM Communication
> > +
> > +
> > +Pre-Installation Configuration
> > +------------------------------
> > +
> > +Config File Options
> > +~~~~~~~~~~~~~~~~~~~
> > +
> > +The following options can be modified in the ``config`` file.
> > +Please note that enabling debugging options may affect system
> performance.
> > +
> > +- ``CONFIG_RTE_LIBRTE_IAVF_16BYTE_RX_DESC`` (default ``n``)
>
> There is no 'CONFIG_RTE_LIBRTE_IAVF_16BYTE_RX_DESC' anymore, this is
> from make days naming.
>
> Instead, what do you think not adding the
> 'RTE_LIBRTE_IAVF_16BYTE_RX_DESC' to the 'rte_config.h', but document
> how this flag can be provided by meson during
> build:
> meson -Dc_args="-DRTE_LIBRTE_IAVF_16BYTE_RX_DESC"
>
> And we should plan for long term to convert this compile time flag to runtime
> devargs.
>
> What do you think?
>
Sorry, I missed this comment. And I agree: too many compile-time flags are not friendly to use. Do you agree to do the runtime devargs in a separate patch set next?
> > +
> > + Toggle to use a 16-byte Rx descriptor; by default the Rx descriptor is 32
> bytes.
> > + Configuring a 16-byte Rx descriptor may cause a negotiation failure
> > + during VF driver initialization if the PF driver doesn't support it.
> > diff --git a/doc/guides/rel_notes/release_20_11.rst
> > b/doc/guides/rel_notes/release_20_11.rst
> > index e7691ee732..93d3ccc60a 100644
> > --- a/doc/guides/rel_notes/release_20_11.rst
> > +++ b/doc/guides/rel_notes/release_20_11.rst
> > @@ -160,6 +160,12 @@ New Features
> > packets with specified ratio, and apply with own set of actions with a fate
> > action. When the ratio is set to 1 then the packets will be 100% mirrored.
> >
> > +* **Updated Intel iavf driver.**
> > +
> > + Updated iavf PMD with new features and improvements, including:
> > +
> > + * Added support for flexible descriptor metadata extraction.
> > +
>
> Can you please move the update to the net drivers block, instead of very
> bottom.
> There is an order in the release notes (as commented in section header) like:
> - core libs
> - ethdev lib related changes
> - ethdev PMDS change
> - ...
>
Sure, will update it in v10.
> <...>
>
> > +
> > +EXPERIMENTAL {
> > + global:
> > +
> > + # added in 20.11
> > + rte_net_iavf_dynfield_proto_xtr_metadata_offs;
> > + rte_net_iavf_dynflag_proto_xtr_vlan_mask;
> > + rte_net_iavf_dynflag_proto_xtr_ipv4_mask;
> > + rte_net_iavf_dynflag_proto_xtr_ipv6_mask;
> > + rte_net_iavf_dynflag_proto_xtr_ipv6_flow_mask;
> > + rte_net_iavf_dynflag_proto_xtr_tcp_mask;
> > + rte_net_iavf_dynflag_proto_xtr_ip_offset_mask;
>
> As a namespace previously "rte_pmd_xxx" was used for PMD specific APIs,
> can you please switch to that?
> 'rte_net_' is used by the 'librte_net' library.
>
Makes sense.
> The above list is the dynfield values. What is the correct usage for dynfields:
> 1- Put dynfield names into the header, and the application does a lookup
> ('rte_mbuf_dynfield_lookup()') to get the dynfield values,
> or
> 2- Expose dynfield values to be accessed directly from application, as done
> above.
>
> @Olivier, can you please advise?
>
> I can see (1) has the advantage of portability if more than one PMD supports
> the same dynfield names, but that seems not to be the case for the above ones.
* Re: [dpdk-dev] [PATCH v8] net/iavf: support flex desc metadata extraction
2020-10-15 5:26 ` Guo, Jia
@ 2020-10-15 8:33 ` Ferruh Yigit
0 siblings, 0 replies; 40+ messages in thread
From: Ferruh Yigit @ 2020-10-15 8:33 UTC (permalink / raw)
To: Guo, Jia, Wu, Jingjing, Zhang, Qi Z, Xing, Beilei
Cc: dev, Wang, Haiyue, Richardson, Bruce, Olivier Matz
On 10/15/2020 6:26 AM, Guo, Jia wrote:
>
>> -----Original Message-----
>> From: Ferruh Yigit <ferruh.yigit@intel.com>
>> Sent: Wednesday, October 14, 2020 8:32 PM
>> To: Guo, Jia <jia.guo@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>;
>> Zhang, Qi Z <qi.z.zhang@intel.com>; Xing, Beilei <beilei.xing@intel.com>
>> Cc: dev@dpdk.org; Wang, Haiyue <haiyue.wang@intel.com>; Richardson,
>> Bruce <bruce.richardson@intel.com>; Olivier Matz <olivier.matz@6wind.com>
>> Subject: Re: [PATCH v8] net/iavf: support flex desc metadata extraction
>>
>> On 10/13/2020 9:17 AM, Jeff Guo wrote:
>>> Enable metadata extraction for flexible descriptors in AVF, that would
>>> allow network function directly get metadata without additional
>>> parsing which would reduce the CPU cost for VFs. The enabling metadata
>>> extractions involve the metadata of VLAN/IPv4/IPv6/IPv6-
>> FLOW/TCP/MPLS
>>> flexible descriptors, and the VF could negotiate the capability of the
>>> flexible descriptor with PF and correspondingly configure the specific
>>> offload at receiving queues.
>>>
>>> Signed-off-by: Jeff Guo <jia.guo@intel.com>
>>> Acked-by: Haiyue Wang <haiyue.wang@intel.com>
>>> ---
>>> v8:
>>> rebase patch for apply issue
>>>
>>> v7:
>>> clean some useless and add doc
>>>
>>> v6:
>>> rebase patch
>>>
>>> v5:
>>> remove ovs configure since ovs is not protocol extraction
>>>
>>> v4:
>>> add flex desc type in rx queue for handling vector path handle ovs
>>> flex type
>>>
>>> v3:
>>> export these global symbols into .map
>>>
>>> v2:
>>> remove makefile change and modify the rxdid handling
>>> ---
>>> config/rte_config.h | 3 +
>>> doc/guides/nics/intel_vf.rst | 16 +
>>> doc/guides/rel_notes/release_20_11.rst | 6 +
>>> drivers/net/iavf/iavf.h | 24 +-
>>> drivers/net/iavf/iavf_ethdev.c | 394 ++++++++++++++++++++++
>>> drivers/net/iavf/iavf_rxtx.c | 252 ++++++++++++--
>>> drivers/net/iavf/iavf_rxtx.h | 168 +++++----
>>> drivers/net/iavf/iavf_rxtx_vec_common.h | 3 +
>>> drivers/net/iavf/iavf_vchnl.c | 22 +-
>>> drivers/net/iavf/meson.build | 2 +
>>> drivers/net/iavf/rte_pmd_iavf.h | 250 ++++++++++++++
>>> drivers/net/iavf/rte_pmd_iavf_version.map | 13 +
>>> 12 files changed, 1039 insertions(+), 114 deletions(-)
>>> create mode 100644 drivers/net/iavf/rte_pmd_iavf.h
>>>
>>> diff --git a/config/rte_config.h b/config/rte_config.h index
>>> 03d90d78bc..2c53072c3d 100644
>>> --- a/config/rte_config.h
>>> +++ b/config/rte_config.h
>>> @@ -127,6 +127,9 @@
>>> #define RTE_LIBRTE_I40E_QUEUE_NUM_PER_VF 4
>>> #define RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM 4
>>>
>>> +/* iavf defines */
>>> +#undef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
>>> +
>>
>> Hi Jeff,
>>
>> The 'RTE_LIBRTE_IAVF_16BYTE_RX_DESC' was already there, not introduced
>> with this patch, so I think better to add this change as different patch.
>>
>> Also not sure if we want to add more config options to the 'rte_config.h',
>> indeed it is the other way around and we are trying to get rid of as many
>> compile-time options as possible.
>> cc'ed Bruce too.
>>
>>> /* Ring net PMD settings */
>>> #define RTE_PMD_RING_MAX_RX_RINGS 16
>>> #define RTE_PMD_RING_MAX_TX_RINGS 16 diff --git
>>> a/doc/guides/nics/intel_vf.rst b/doc/guides/nics/intel_vf.rst index
>>> ade5152595..207f456143 100644
>>> --- a/doc/guides/nics/intel_vf.rst
>>> +++ b/doc/guides/nics/intel_vf.rst
>>> @@ -615,3 +615,19 @@ which belongs to the destination VF on the VM.
>>> .. figure:: img/inter_vm_comms.*
>>>
>>> Inter-VM Communication
>>> +
>>> +
>>> +Pre-Installation Configuration
>>> +------------------------------
>>> +
>>> +Config File Options
>>> +~~~~~~~~~~~~~~~~~~~
>>> +
>>> +The following options can be modified in the ``config`` file.
>>> +Please note that enabling debugging options may affect system
>> performance.
>>> +
>>> +- ``CONFIG_RTE_LIBRTE_IAVF_16BYTE_RX_DESC`` (default ``n``)
>>
>> There is no 'CONFIG_RTE_LIBRTE_IAVF_16BYTE_RX_DESC' anymore, this is
>> from make days naming.
>>
>> Instead, what do you think not adding the
>> 'RTE_LIBRTE_IAVF_16BYTE_RX_DESC' to the 'rte_config.h', but document
>> how this flag can be provided by meson during
>> build:
>> meson -Dc_args="-DRTE_LIBRTE_IAVF_16BYTE_RX_DESC"
>>
>> And we should plan for long term to convert this compile time flag to runtime
>> devargs.
>>
>> What do you think?
>>
>
> Sorry, I missed this comment. And I agree: too many compile-time flags are not friendly to use. Do you agree to do the runtime devargs in a separate patch set?
>
Sure, OK to have it as a separate patchset.
>>> +
>>> + Toggle to use a 16-byte RX descriptor, by default the RX descriptor is 32
>> byte.
>>> + Configure to 16-byte Rx descriptor may cause a negotiation failure
>>> + during VF driver initialization if the PF driver doesn't support.
>>> diff --git a/doc/guides/rel_notes/release_20_11.rst
>>> b/doc/guides/rel_notes/release_20_11.rst
>>> index e7691ee732..93d3ccc60a 100644
>>> --- a/doc/guides/rel_notes/release_20_11.rst
>>> +++ b/doc/guides/rel_notes/release_20_11.rst
>>> @@ -160,6 +160,12 @@ New Features
>>> packets with specified ratio, and apply with own set of actions with a fate
>>> action. When the ratio is set to 1 then the packets will be 100% mirrored.
>>>
>>> +* **Updated Intel iavf driver.**
>>> +
>>> + Updated iavf PMD with new features and improvements, including:
>>> +
>>> + * Added support for flexible descriptor metadata extraction.
>>> +
>>
>> Can you please move the update to the net drivers block, instead of the
>> very bottom.
>> There is an order in the release notes (as commented in section header) like:
>> - core libs
>> - ethdev lib related changes
>> - ethdev PMDS change
>> - ...
>>
>
> Sure, will update it in v10.
>
>> <...>
>>
>>> +
>>> +EXPERIMENTAL {
>>> + global:
>>> +
>>> + # added in 20.11
>>> + rte_net_iavf_dynfield_proto_xtr_metadata_offs;
>>> + rte_net_iavf_dynflag_proto_xtr_vlan_mask;
>>> + rte_net_iavf_dynflag_proto_xtr_ipv4_mask;
>>> + rte_net_iavf_dynflag_proto_xtr_ipv6_mask;
>>> + rte_net_iavf_dynflag_proto_xtr_ipv6_flow_mask;
>>> + rte_net_iavf_dynflag_proto_xtr_tcp_mask;
>>> + rte_net_iavf_dynflag_proto_xtr_ip_offset_mask;
>>
>> As a namespace previously "rte_pmd_xxx" was used for PMD specific APIs,
>> can you please switch to that?
>> 'rte_net_' is used by the 'librte_net' library.
>>
>
> Make sense.
>
>> The above list is the dynfield values; what is the correct usage for dynfields:
>> 1- Put dynfield names into the header, and the application does a lookup
>> ('rte_mbuf_dynfield_lookup()') to get the dynfield values.
>> or
>> 2- Expose dynfield values to be accessed directly from the application, as done
>> above.
>>
>> @Olivier, can you please advise.
>>
>> I can see (1) has the advantage of portability if more than one PMD supports
>> the same dynfield names, but that seems not to be the case for the ones above.
^ permalink raw reply [flat|nested] 40+ messages in thread
* Re: [dpdk-dev] [PATCH v8] net/iavf: support flex desc metadata extraction
2020-10-14 12:31 ` Ferruh Yigit
2020-10-14 14:03 ` Bruce Richardson
2020-10-15 5:26 ` Guo, Jia
@ 2020-10-26 9:37 ` Olivier Matz
2020-10-26 11:41 ` Wang, Haiyue
2 siblings, 1 reply; 40+ messages in thread
From: Olivier Matz @ 2020-10-26 9:37 UTC (permalink / raw)
To: Ferruh Yigit
Cc: Jeff Guo, jingjing.wu, qi.z.zhang, beilei.xing, dev, haiyue.wang,
Bruce Richardson
Hi,
On Wed, Oct 14, 2020 at 01:31:39PM +0100, Ferruh Yigit wrote:
> On 10/13/2020 9:17 AM, Jeff Guo wrote:
> > Enable metadata extraction for flexible descriptors in AVF, which allows
> > network functions to get metadata directly without additional parsing,
> > reducing the CPU cost for VFs. The enabled metadata extraction covers the
> > VLAN/IPv4/IPv6/IPv6-FLOW/TCP/MPLS flexible descriptors, and the VF can
> > negotiate the flexible descriptor capability with the PF and configure the
> > corresponding offload on receive queues.
> >
> > Signed-off-by: Jeff Guo <jia.guo@intel.com>
> > Acked-by: Haiyue Wang <haiyue.wang@intel.com>
[...]
> > +EXPERIMENTAL {
> > + global:
> > +
> > + # added in 20.11
> > + rte_net_iavf_dynfield_proto_xtr_metadata_offs;
> > + rte_net_iavf_dynflag_proto_xtr_vlan_mask;
> > + rte_net_iavf_dynflag_proto_xtr_ipv4_mask;
> > + rte_net_iavf_dynflag_proto_xtr_ipv6_mask;
> > + rte_net_iavf_dynflag_proto_xtr_ipv6_flow_mask;
> > + rte_net_iavf_dynflag_proto_xtr_tcp_mask;
> > + rte_net_iavf_dynflag_proto_xtr_ip_offset_mask;
>
> As a namespace previously "rte_pmd_xxx" was used for PMD specific APIs, can
> you please switch to that?
> 'rte_net_' is used by the 'librte_net' library.
>
> The above list is the dynfield values; what is the correct usage for dynfields:
> 1- Put dynfield names into the header, and the application does a lookup
> ('rte_mbuf_dynfield_lookup()') to get the dynfield values.
> or
> 2- Expose dynfield values to be accessed directly from the application, as done above.
>
> @Olivier, can you please advise.
>
> I can see (1) has the advantage of portability if more than one PMD supports
> the same dynfield names, but that seems not to be the case for the ones above.
If I understand the question correctly, this is the same as what was
discussed here:
http://inbox.dpdk.org/dev/20191030165626.w3flq5wdpitpsv2v@platinum/
To me, exporting the variables containing the dynfield offsets is easier
to use: we don't need additional private variables to store them
in each API user (usually one static variable per file, which can be
heavy).
Olivier
^ permalink raw reply [flat|nested] 40+ messages in thread
* Re: [dpdk-dev] [PATCH v8] net/iavf: support flex desc metadata extraction
2020-10-26 9:37 ` Olivier Matz
@ 2020-10-26 11:41 ` Wang, Haiyue
0 siblings, 0 replies; 40+ messages in thread
From: Wang, Haiyue @ 2020-10-26 11:41 UTC (permalink / raw)
To: Olivier Matz, Yigit, Ferruh
Cc: Guo, Jia, Wu, Jingjing, Zhang, Qi Z, Xing, Beilei, dev,
Richardson, Bruce
Hi Olivier,
> -----Original Message-----
> From: Olivier Matz <olivier.matz@6wind.com>
> Sent: Monday, October 26, 2020 17:37
> To: Yigit, Ferruh <ferruh.yigit@intel.com>
> Cc: Guo, Jia <jia.guo@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>; Zhang, Qi Z
> <qi.z.zhang@intel.com>; Xing, Beilei <beilei.xing@intel.com>; dev@dpdk.org; Wang, Haiyue
> <haiyue.wang@intel.com>; Richardson, Bruce <bruce.richardson@intel.com>
> Subject: Re: [PATCH v8] net/iavf: support flex desc metadata extraction
>
> Hi,
>
> On Wed, Oct 14, 2020 at 01:31:39PM +0100, Ferruh Yigit wrote:
> > On 10/13/2020 9:17 AM, Jeff Guo wrote:
> > > Enable metadata extraction for flexible descriptors in AVF, which allows
> > > network functions to get metadata directly without additional parsing,
> > > reducing the CPU cost for VFs. The enabled metadata extraction covers the
> > > VLAN/IPv4/IPv6/IPv6-FLOW/TCP/MPLS flexible descriptors, and the VF can
> > > negotiate the flexible descriptor capability with the PF and configure the
> > > corresponding offload on receive queues.
> > >
> > > Signed-off-by: Jeff Guo <jia.guo@intel.com>
> > > Acked-by: Haiyue Wang <haiyue.wang@intel.com>
>
> [...]
>
> > > +EXPERIMENTAL {
> > > + global:
> > > +
> > > + # added in 20.11
> > > + rte_net_iavf_dynfield_proto_xtr_metadata_offs;
> > > + rte_net_iavf_dynflag_proto_xtr_vlan_mask;
> > > + rte_net_iavf_dynflag_proto_xtr_ipv4_mask;
> > > + rte_net_iavf_dynflag_proto_xtr_ipv6_mask;
> > > + rte_net_iavf_dynflag_proto_xtr_ipv6_flow_mask;
> > > + rte_net_iavf_dynflag_proto_xtr_tcp_mask;
> > > + rte_net_iavf_dynflag_proto_xtr_ip_offset_mask;
> >
> > As a namespace previously "rte_pmd_xxx" was used for PMD specific APIs, can
> > you please switch to that?
> > 'rte_net_' is used by the 'librte_net' library.
> >
> > The above list is the dynfield values; what is the correct usage for dynfields:
> > 1- Put dynfield names into the header, and the application does a lookup
> > ('rte_mbuf_dynfield_lookup()') to get the dynfield values.
> > or
> > 2- Expose dynfield values to be accessed directly from the application, as done above.
> >
> > @Olivier, can you please advise.
> >
> > I can see (1) has the advantage of portability if more than one PMD supports
> > the same dynfield names, but that seems not to be the case for the ones above.
>
> If I understand the question correctly, this is the same as what was
> discussed here:
>
> http://inbox.dpdk.org/dev/20191030165626.w3flq5wdpitpsv2v@platinum/
>
> To me, exporting the variables containing the dynfield offsets is easier
> to use: we don't need additional private variables to store them
> in each API user (usually one static variable per file, which can be
> heavy).
No issue for one PMD, but if two PMDs share the same dynfields, the application
has to use two namespace variables to access the same value, like:
if (mb->ol_flags & PMD_A_DYNFIELD_B_MASK)
else if (mb->ol_flags & PMD_B_DYNFIELD_B_MASK)
This makes the application code a little duplicated. ;-)
>
> Olivier
^ permalink raw reply [flat|nested] 40+ messages in thread
* [dpdk-dev] [PATCH v9] net/iavf: support flex desc metadata extraction
2020-09-09 2:54 [dpdk-dev] [PATCH v1] net/iavf: support flex desc metadata extraction Jeff Guo
` (8 preceding siblings ...)
2020-10-13 8:17 ` [dpdk-dev] [PATCH v8] " Jeff Guo
@ 2020-10-15 3:41 ` Jeff Guo
2020-10-27 5:04 ` [dpdk-dev] [PATCH v10] " Jeff Guo
` (3 subsequent siblings)
13 siblings, 0 replies; 40+ messages in thread
From: Jeff Guo @ 2020-10-15 3:41 UTC (permalink / raw)
To: jingjing.wu, qi.z.zhang, beilei.xing, ferruh.yigit
Cc: dev, haiyue.wang, bruce.richardson, jia.guo
Enable metadata extraction for flexible descriptors in AVF, which allows
network functions to get metadata directly without additional parsing,
reducing the CPU cost for VFs. The enabled metadata extraction covers
the VLAN/IPv4/IPv6/IPv6-FLOW/TCP/MPLS flexible descriptors, and the VF
can negotiate the flexible descriptor capability with the PF and
configure the corresponding offload on receive queues.
Signed-off-by: Jeff Guo <jia.guo@intel.com>
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
---
v9:
change the undef config
v8:
rebase patch for apply issue
v7:
clean some useless and add doc
v6:
rebase patch
v5:
remove ovs configure since ovs is not protocol extraction
v4:
add flex desc type in rx queue for handling vector path
handle ovs flex type
v3:
export these global symbols into .map
v2:
remove makefile change and modify the rxdid handling
---
config/rte_config.h | 3 +
doc/guides/nics/intel_vf.rst | 16 +
doc/guides/rel_notes/release_20_11.rst | 6 +
drivers/net/iavf/iavf.h | 24 +-
drivers/net/iavf/iavf_ethdev.c | 394 ++++++++++++++++++++++
drivers/net/iavf/iavf_rxtx.c | 252 ++++++++++++--
drivers/net/iavf/iavf_rxtx.h | 168 +++++----
drivers/net/iavf/iavf_rxtx_vec_common.h | 3 +
drivers/net/iavf/iavf_vchnl.c | 22 +-
drivers/net/iavf/meson.build | 2 +
drivers/net/iavf/rte_pmd_iavf.h | 250 ++++++++++++++
drivers/net/iavf/rte_pmd_iavf_version.map | 13 +
12 files changed, 1039 insertions(+), 114 deletions(-)
create mode 100644 drivers/net/iavf/rte_pmd_iavf.h
diff --git a/config/rte_config.h b/config/rte_config.h
index 03d90d78bc..67a7ad66df 100644
--- a/config/rte_config.h
+++ b/config/rte_config.h
@@ -127,6 +127,9 @@
#define RTE_LIBRTE_I40E_QUEUE_NUM_PER_VF 4
#define RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM 4
+/* iavf defines */
+// RTE_LIBRTE_IAVF_16BYTE_RX_DESC is not set
+
/* Ring net PMD settings */
#define RTE_PMD_RING_MAX_RX_RINGS 16
#define RTE_PMD_RING_MAX_TX_RINGS 16
diff --git a/doc/guides/nics/intel_vf.rst b/doc/guides/nics/intel_vf.rst
index ade5152595..207f456143 100644
--- a/doc/guides/nics/intel_vf.rst
+++ b/doc/guides/nics/intel_vf.rst
@@ -615,3 +615,19 @@ which belongs to the destination VF on the VM.
.. figure:: img/inter_vm_comms.*
Inter-VM Communication
+
+
+Pre-Installation Configuration
+------------------------------
+
+Config File Options
+~~~~~~~~~~~~~~~~~~~
+
+The following options can be modified in the ``config`` file.
+Please note that enabling debugging options may affect system performance.
+
+- ``CONFIG_RTE_LIBRTE_IAVF_16BYTE_RX_DESC`` (default ``n``)
+
+ Toggle to use a 16-byte Rx descriptor; by default the Rx descriptor is 32 bytes.
+ Configuring the 16-byte Rx descriptor may cause a negotiation failure during VF driver initialization
+ if the PF driver doesn't support it.
diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index 30db8f27e9..e8ae4d4912 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -183,6 +183,12 @@ New Features
packets with specified ratio, and apply with own set of actions with a fate
action. When the ratio is set to 1 then the packets will be 100% mirrored.
+* **Updated Intel iavf driver.**
+
+ Updated iavf PMD with new features and improvements, including:
+
+ * Added support for flexible descriptor metadata extraction.
+
Removed Items
-------------
diff --git a/drivers/net/iavf/iavf.h b/drivers/net/iavf/iavf.h
index 3198d85b3a..d566116086 100644
--- a/drivers/net/iavf/iavf.h
+++ b/drivers/net/iavf/iavf.h
@@ -119,7 +119,7 @@ struct iavf_info {
struct virtchnl_vf_resource *vf_res; /* VF resource */
struct virtchnl_vsi_resource *vsi_res; /* LAN VSI */
uint64_t supported_rxdid;
-
+ uint8_t *proto_xtr; /* proto xtr type for all queues */
volatile enum virtchnl_ops pend_cmd; /* pending command not finished */
uint32_t cmd_retval; /* return value of the cmd response from PF */
uint8_t *aq_resp; /* buffer to store the adminq response from PF */
@@ -153,6 +153,27 @@ struct iavf_info {
#define IAVF_MAX_PKT_TYPE 1024
+#define IAVF_MAX_QUEUE_NUM 2048
+
+enum iavf_proto_xtr_type {
+ IAVF_PROTO_XTR_NONE,
+ IAVF_PROTO_XTR_VLAN,
+ IAVF_PROTO_XTR_IPV4,
+ IAVF_PROTO_XTR_IPV6,
+ IAVF_PROTO_XTR_IPV6_FLOW,
+ IAVF_PROTO_XTR_TCP,
+ IAVF_PROTO_XTR_IP_OFFSET,
+ IAVF_PROTO_XTR_MAX,
+};
+
+/**
+ * Cache devargs parse result.
+ */
+struct iavf_devargs {
+ uint8_t proto_xtr_dflt;
+ uint8_t proto_xtr[IAVF_MAX_QUEUE_NUM];
+};
+
/* Structure to store private data for each VF instance. */
struct iavf_adapter {
struct iavf_hw hw;
@@ -166,6 +187,7 @@ struct iavf_adapter {
const uint32_t *ptype_tbl;
bool stopped;
uint16_t fdir_ref_cnt;
+ struct iavf_devargs devargs;
};
/* IAVF_DEV_PRIVATE_TO */
diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
index f5e6e852ae..93e26c768c 100644
--- a/drivers/net/iavf/iavf_ethdev.c
+++ b/drivers/net/iavf/iavf_ethdev.c
@@ -28,6 +28,49 @@
#include "iavf.h"
#include "iavf_rxtx.h"
#include "iavf_generic_flow.h"
+#include "rte_pmd_iavf.h"
+
+/* devargs */
+#define IAVF_PROTO_XTR_ARG "proto_xtr"
+
+static const char * const iavf_valid_args[] = {
+ IAVF_PROTO_XTR_ARG,
+ NULL
+};
+
+static const struct rte_mbuf_dynfield iavf_proto_xtr_metadata_param = {
+ .name = "iavf_dynfield_proto_xtr_metadata",
+ .size = sizeof(uint32_t),
+ .align = __alignof__(uint32_t),
+ .flags = 0,
+};
+
+struct iavf_proto_xtr_ol {
+ const struct rte_mbuf_dynflag param;
+ uint64_t *ol_flag;
+ bool required;
+};
+
+static struct iavf_proto_xtr_ol iavf_proto_xtr_params[] = {
+ [IAVF_PROTO_XTR_VLAN] = {
+ .param = { .name = "iavf_dynflag_proto_xtr_vlan" },
+ .ol_flag = &rte_net_iavf_dynflag_proto_xtr_vlan_mask },
+ [IAVF_PROTO_XTR_IPV4] = {
+ .param = { .name = "iavf_dynflag_proto_xtr_ipv4" },
+ .ol_flag = &rte_net_iavf_dynflag_proto_xtr_ipv4_mask },
+ [IAVF_PROTO_XTR_IPV6] = {
+ .param = { .name = "iavf_dynflag_proto_xtr_ipv6" },
+ .ol_flag = &rte_net_iavf_dynflag_proto_xtr_ipv6_mask },
+ [IAVF_PROTO_XTR_IPV6_FLOW] = {
+ .param = { .name = "iavf_dynflag_proto_xtr_ipv6_flow" },
+ .ol_flag = &rte_net_iavf_dynflag_proto_xtr_ipv6_flow_mask },
+ [IAVF_PROTO_XTR_TCP] = {
+ .param = { .name = "iavf_dynflag_proto_xtr_tcp" },
+ .ol_flag = &rte_net_iavf_dynflag_proto_xtr_tcp_mask },
+ [IAVF_PROTO_XTR_IP_OFFSET] = {
+ .param = { .name = "ice_dynflag_proto_xtr_ip_offset" },
+ .ol_flag = &rte_net_iavf_dynflag_proto_xtr_ip_offset_mask },
+};
static int iavf_dev_configure(struct rte_eth_dev *dev);
static int iavf_dev_start(struct rte_eth_dev *dev);
@@ -1247,6 +1290,349 @@ iavf_check_vf_reset_done(struct iavf_hw *hw)
return 0;
}
+static int
+iavf_lookup_proto_xtr_type(const char *flex_name)
+{
+ static struct {
+ const char *name;
+ enum iavf_proto_xtr_type type;
+ } xtr_type_map[] = {
+ { "vlan", IAVF_PROTO_XTR_VLAN },
+ { "ipv4", IAVF_PROTO_XTR_IPV4 },
+ { "ipv6", IAVF_PROTO_XTR_IPV6 },
+ { "ipv6_flow", IAVF_PROTO_XTR_IPV6_FLOW },
+ { "tcp", IAVF_PROTO_XTR_TCP },
+ { "ip_offset", IAVF_PROTO_XTR_IP_OFFSET },
+ };
+ uint32_t i;
+
+ for (i = 0; i < RTE_DIM(xtr_type_map); i++) {
+ if (strcmp(flex_name, xtr_type_map[i].name) == 0)
+ return xtr_type_map[i].type;
+ }
+
+ PMD_DRV_LOG(ERR, "wrong proto_xtr type, "
+ "it should be: vlan|ipv4|ipv6|ipv6_flow|tcp|ip_offset");
+
+ return -1;
+}
+
+/**
+ * Parse an elem; an elem can be a single number/range or a '(' ')' group:
+ * 1) A single number elem is just a simple digit, e.g. 9
+ * 2) A single range elem is two digits with a '-' between, e.g. 2-6
+ * 3) A group elem combines multiple 1) or 2) with '( )', e.g. (0,2-4,6)
+ * Within a group elem, '-' is used as a range separator and
+ * ',' separates single numbers.
+ */
+static int
+iavf_parse_queue_set(const char *input, int xtr_type,
+ struct iavf_devargs *devargs)
+{
+ const char *str = input;
+ char *end = NULL;
+ uint32_t min, max;
+ uint32_t idx;
+
+ while (isblank(*str))
+ str++;
+
+ if (!isdigit(*str) && *str != '(')
+ return -1;
+
+ /* process single number or single range of number */
+ if (*str != '(') {
+ errno = 0;
+ idx = strtoul(str, &end, 10);
+ if (errno || !end || idx >= IAVF_MAX_QUEUE_NUM)
+ return -1;
+
+ while (isblank(*end))
+ end++;
+
+ min = idx;
+ max = idx;
+
+ /* process single <number>-<number> */
+ if (*end == '-') {
+ end++;
+ while (isblank(*end))
+ end++;
+ if (!isdigit(*end))
+ return -1;
+
+ errno = 0;
+ idx = strtoul(end, &end, 10);
+ if (errno || !end || idx >= IAVF_MAX_QUEUE_NUM)
+ return -1;
+
+ max = idx;
+ while (isblank(*end))
+ end++;
+ }
+
+ if (*end != ':')
+ return -1;
+
+ for (idx = RTE_MIN(min, max);
+ idx <= RTE_MAX(min, max); idx++)
+ devargs->proto_xtr[idx] = xtr_type;
+
+ return 0;
+ }
+
+ /* process set within bracket */
+ str++;
+ while (isblank(*str))
+ str++;
+ if (*str == '\0')
+ return -1;
+
+ min = IAVF_MAX_QUEUE_NUM;
+ do {
+ /* go ahead to the first digit */
+ while (isblank(*str))
+ str++;
+ if (!isdigit(*str))
+ return -1;
+
+ /* get the digit value */
+ errno = 0;
+ idx = strtoul(str, &end, 10);
+ if (errno || !end || idx >= IAVF_MAX_QUEUE_NUM)
+ return -1;
+
+ /* go ahead to separator '-',',' and ')' */
+ while (isblank(*end))
+ end++;
+ if (*end == '-') {
+ if (min == IAVF_MAX_QUEUE_NUM)
+ min = idx;
+ else /* avoid continuous '-' */
+ return -1;
+ } else if (*end == ',' || *end == ')') {
+ max = idx;
+ if (min == IAVF_MAX_QUEUE_NUM)
+ min = idx;
+
+ for (idx = RTE_MIN(min, max);
+ idx <= RTE_MAX(min, max); idx++)
+ devargs->proto_xtr[idx] = xtr_type;
+
+ min = IAVF_MAX_QUEUE_NUM;
+ } else {
+ return -1;
+ }
+
+ str = end + 1;
+ } while (*end != ')' && *end != '\0');
+
+ return 0;
+}
+
+static int
+iavf_parse_queue_proto_xtr(const char *queues, struct iavf_devargs *devargs)
+{
+ const char *queue_start;
+ uint32_t idx;
+ int xtr_type;
+ char flex_name[32];
+
+ while (isblank(*queues))
+ queues++;
+
+ if (*queues != '[') {
+ xtr_type = iavf_lookup_proto_xtr_type(queues);
+ if (xtr_type < 0)
+ return -1;
+
+ devargs->proto_xtr_dflt = xtr_type;
+
+ return 0;
+ }
+
+ queues++;
+ do {
+ while (isblank(*queues))
+ queues++;
+ if (*queues == '\0')
+ return -1;
+
+ queue_start = queues;
+
+ /* go across a complete bracket */
+ if (*queue_start == '(') {
+ queues += strcspn(queues, ")");
+ if (*queues != ')')
+ return -1;
+ }
+
+ /* scan the separator ':' */
+ queues += strcspn(queues, ":");
+ if (*queues++ != ':')
+ return -1;
+ while (isblank(*queues))
+ queues++;
+
+ for (idx = 0; ; idx++) {
+ if (isblank(queues[idx]) ||
+ queues[idx] == ',' ||
+ queues[idx] == ']' ||
+ queues[idx] == '\0')
+ break;
+
+ if (idx > sizeof(flex_name) - 2)
+ return -1;
+
+ flex_name[idx] = queues[idx];
+ }
+ flex_name[idx] = '\0';
+ xtr_type = iavf_lookup_proto_xtr_type(flex_name);
+ if (xtr_type < 0)
+ return -1;
+
+ queues += idx;
+
+ while (isblank(*queues) || *queues == ',' || *queues == ']')
+ queues++;
+
+ if (iavf_parse_queue_set(queue_start, xtr_type, devargs) < 0)
+ return -1;
+ } while (*queues != '\0');
+
+ return 0;
+}
+
+static int
+iavf_handle_proto_xtr_arg(__rte_unused const char *key, const char *value,
+ void *extra_args)
+{
+ struct iavf_devargs *devargs = extra_args;
+
+ if (!value || !extra_args)
+ return -EINVAL;
+
+ if (iavf_parse_queue_proto_xtr(value, devargs) < 0) {
+ PMD_DRV_LOG(ERR, "the proto_xtr's parameter is wrong : '%s'",
+ value);
+ return -1;
+ }
+
+ return 0;
+}
+
+static int iavf_parse_devargs(struct rte_eth_dev *dev)
+{
+ struct iavf_adapter *ad =
+ IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+ struct rte_devargs *devargs = dev->device->devargs;
+ struct rte_kvargs *kvlist;
+ int ret;
+
+ if (!devargs)
+ return 0;
+
+ kvlist = rte_kvargs_parse(devargs->args, iavf_valid_args);
+ if (!kvlist) {
+ PMD_INIT_LOG(ERR, "invalid kvargs key\n");
+ return -EINVAL;
+ }
+
+ ad->devargs.proto_xtr_dflt = IAVF_PROTO_XTR_NONE;
+ memset(ad->devargs.proto_xtr, IAVF_PROTO_XTR_NONE,
+ sizeof(ad->devargs.proto_xtr));
+
+ ret = rte_kvargs_process(kvlist, IAVF_PROTO_XTR_ARG,
+ &iavf_handle_proto_xtr_arg, &ad->devargs);
+ if (ret)
+ goto bail;
+
+bail:
+ rte_kvargs_free(kvlist);
+ return ret;
+}
+
+static void
+iavf_init_proto_xtr(struct rte_eth_dev *dev)
+{
+ struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+ struct iavf_adapter *ad =
+ IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+ const struct iavf_proto_xtr_ol *xtr_ol;
+ bool proto_xtr_enable = false;
+ int offset;
+ uint16_t i;
+
+ vf->proto_xtr = rte_zmalloc("vf proto xtr",
+ vf->vsi_res->num_queue_pairs, 0);
+ if (unlikely(!(vf->proto_xtr))) {
+ PMD_DRV_LOG(ERR, "no memory for setting up proto_xtr's table");
+ return;
+ }
+
+ for (i = 0; i < vf->vsi_res->num_queue_pairs; i++) {
+ vf->proto_xtr[i] = ad->devargs.proto_xtr[i] !=
+ IAVF_PROTO_XTR_NONE ?
+ ad->devargs.proto_xtr[i] :
+ ad->devargs.proto_xtr_dflt;
+
+ if (vf->proto_xtr[i] != IAVF_PROTO_XTR_NONE) {
+ uint8_t type = vf->proto_xtr[i];
+
+ iavf_proto_xtr_params[type].required = true;
+ proto_xtr_enable = true;
+ }
+ }
+
+ if (likely(!proto_xtr_enable))
+ return;
+
+ offset = rte_mbuf_dynfield_register(&iavf_proto_xtr_metadata_param);
+ if (unlikely(offset == -1)) {
+ PMD_DRV_LOG(ERR,
+ "failed to extract protocol metadata, error %d",
+ -rte_errno);
+ return;
+ }
+
+ PMD_DRV_LOG(DEBUG,
+ "proto_xtr metadata offset in mbuf is : %d",
+ offset);
+ rte_net_iavf_dynfield_proto_xtr_metadata_offs = offset;
+
+ for (i = 0; i < RTE_DIM(iavf_proto_xtr_params); i++) {
+ xtr_ol = &iavf_proto_xtr_params[i];
+
+ uint8_t rxdid = iavf_proto_xtr_type_to_rxdid((uint8_t)i);
+
+ if (!xtr_ol->required)
+ continue;
+
+ if (!(vf->supported_rxdid & BIT(rxdid))) {
+ PMD_DRV_LOG(ERR,
+ "rxdid[%u] is not supported in hardware",
+ rxdid);
+ rte_net_iavf_dynfield_proto_xtr_metadata_offs = -1;
+ break;
+ }
+
+ offset = rte_mbuf_dynflag_register(&xtr_ol->param);
+ if (unlikely(offset == -1)) {
+ PMD_DRV_LOG(ERR,
+ "failed to register proto_xtr offload '%s', error %d",
+ xtr_ol->param.name, -rte_errno);
+
+ rte_net_iavf_dynfield_proto_xtr_metadata_offs = -1;
+ break;
+ }
+
+ PMD_DRV_LOG(DEBUG,
+ "proto_xtr offload '%s' offset in mbuf is : %d",
+ xtr_ol->param.name, offset);
+ *xtr_ol->ol_flag = 1ULL << offset;
+ }
+}
+
static int
iavf_init_vf(struct rte_eth_dev *dev)
{
@@ -1256,6 +1642,12 @@ iavf_init_vf(struct rte_eth_dev *dev)
struct iavf_hw *hw = IAVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+ err = iavf_parse_devargs(dev);
+ if (err) {
+ PMD_INIT_LOG(ERR, "Failed to parse devargs");
+ goto err;
+ }
+
err = iavf_set_mac_type(hw);
if (err) {
PMD_INIT_LOG(ERR, "set_mac_type failed: %d", err);
@@ -1319,6 +1711,8 @@ iavf_init_vf(struct rte_eth_dev *dev)
}
}
+ iavf_init_proto_xtr(dev);
+
return 0;
err_rss:
rte_free(vf->rss_key);
diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
index 1b0efe0433..7e6e425ac8 100644
--- a/drivers/net/iavf/iavf_rxtx.c
+++ b/drivers/net/iavf/iavf_rxtx.c
@@ -26,6 +26,35 @@
#include "iavf.h"
#include "iavf_rxtx.h"
+#include "rte_pmd_iavf.h"
+
+/* Offset of mbuf dynamic field for protocol extraction's metadata */
+int rte_net_iavf_dynfield_proto_xtr_metadata_offs = -1;
+
+/* Mask of mbuf dynamic flags for protocol extraction's type */
+uint64_t rte_net_iavf_dynflag_proto_xtr_vlan_mask;
+uint64_t rte_net_iavf_dynflag_proto_xtr_ipv4_mask;
+uint64_t rte_net_iavf_dynflag_proto_xtr_ipv6_mask;
+uint64_t rte_net_iavf_dynflag_proto_xtr_ipv6_flow_mask;
+uint64_t rte_net_iavf_dynflag_proto_xtr_tcp_mask;
+uint64_t rte_net_iavf_dynflag_proto_xtr_ip_offset_mask;
+
+uint8_t
+iavf_proto_xtr_type_to_rxdid(uint8_t flex_type)
+{
+ static uint8_t rxdid_map[] = {
+ [IAVF_PROTO_XTR_NONE] = IAVF_RXDID_COMMS_OVS_1,
+ [IAVF_PROTO_XTR_VLAN] = IAVF_RXDID_COMMS_AUX_VLAN,
+ [IAVF_PROTO_XTR_IPV4] = IAVF_RXDID_COMMS_AUX_IPV4,
+ [IAVF_PROTO_XTR_IPV6] = IAVF_RXDID_COMMS_AUX_IPV6,
+ [IAVF_PROTO_XTR_IPV6_FLOW] = IAVF_RXDID_COMMS_AUX_IPV6_FLOW,
+ [IAVF_PROTO_XTR_TCP] = IAVF_RXDID_COMMS_AUX_TCP,
+ [IAVF_PROTO_XTR_IP_OFFSET] = IAVF_RXDID_COMMS_AUX_IP_OFFSET,
+ };
+
+ return flex_type < RTE_DIM(rxdid_map) ?
+ rxdid_map[flex_type] : IAVF_RXDID_COMMS_OVS_1;
+}
static inline int
check_rx_thresh(uint16_t nb_desc, uint16_t thresh)
@@ -294,6 +323,160 @@ static const struct iavf_txq_ops def_txq_ops = {
.release_mbufs = release_txq_mbufs,
};
+static inline void
+iavf_rxd_to_pkt_fields_by_comms_ovs(__rte_unused struct iavf_rx_queue *rxq,
+ struct rte_mbuf *mb,
+ volatile union iavf_rx_flex_desc *rxdp)
+{
+ volatile struct iavf_32b_rx_flex_desc_comms_ovs *desc =
+ (volatile struct iavf_32b_rx_flex_desc_comms_ovs *)rxdp;
+#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
+ uint16_t stat_err;
+#endif
+
+ if (desc->flow_id != 0xFFFFFFFF) {
+ mb->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+ mb->hash.fdir.hi = rte_le_to_cpu_32(desc->flow_id);
+ }
+
+#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
+ stat_err = rte_le_to_cpu_16(desc->status_error0);
+ if (likely(stat_err & (1 << IAVF_RX_FLEX_DESC_STATUS0_RSS_VALID_S))) {
+ mb->ol_flags |= PKT_RX_RSS_HASH;
+ mb->hash.rss = rte_le_to_cpu_32(desc->rss_hash);
+ }
+#endif
+}
+
+static inline void
+iavf_rxd_to_pkt_fields_by_comms_aux_v1(struct iavf_rx_queue *rxq,
+ struct rte_mbuf *mb,
+ volatile union iavf_rx_flex_desc *rxdp)
+{
+ volatile struct iavf_32b_rx_flex_desc_comms *desc =
+ (volatile struct iavf_32b_rx_flex_desc_comms *)rxdp;
+ uint16_t stat_err;
+
+ stat_err = rte_le_to_cpu_16(desc->status_error0);
+ if (likely(stat_err & (1 << IAVF_RX_FLEX_DESC_STATUS0_RSS_VALID_S))) {
+ mb->ol_flags |= PKT_RX_RSS_HASH;
+ mb->hash.rss = rte_le_to_cpu_32(desc->rss_hash);
+ }
+
+#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
+ if (desc->flow_id != 0xFFFFFFFF) {
+ mb->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+ mb->hash.fdir.hi = rte_le_to_cpu_32(desc->flow_id);
+ }
+
+ if (rxq->xtr_ol_flag) {
+ uint32_t metadata = 0;
+
+ stat_err = rte_le_to_cpu_16(desc->status_error1);
+
+ if (stat_err & (1 << IAVF_RX_FLEX_DESC_STATUS1_XTRMD4_VALID_S))
+ metadata = rte_le_to_cpu_16(desc->flex_ts.flex.aux0);
+
+ if (stat_err & (1 << IAVF_RX_FLEX_DESC_STATUS1_XTRMD5_VALID_S))
+ metadata |=
+ rte_le_to_cpu_16(desc->flex_ts.flex.aux1) << 16;
+
+ if (metadata) {
+ mb->ol_flags |= rxq->xtr_ol_flag;
+
+ *RTE_NET_IAVF_DYNF_PROTO_XTR_METADATA(mb) = metadata;
+ }
+ }
+#endif
+}
+
+static inline void
+iavf_rxd_to_pkt_fields_by_comms_aux_v2(struct iavf_rx_queue *rxq,
+ struct rte_mbuf *mb,
+ volatile union iavf_rx_flex_desc *rxdp)
+{
+ volatile struct iavf_32b_rx_flex_desc_comms *desc =
+ (volatile struct iavf_32b_rx_flex_desc_comms *)rxdp;
+ uint16_t stat_err;
+
+ stat_err = rte_le_to_cpu_16(desc->status_error0);
+ if (likely(stat_err & (1 << IAVF_RX_FLEX_DESC_STATUS0_RSS_VALID_S))) {
+ mb->ol_flags |= PKT_RX_RSS_HASH;
+ mb->hash.rss = rte_le_to_cpu_32(desc->rss_hash);
+ }
+
+#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
+ if (desc->flow_id != 0xFFFFFFFF) {
+ mb->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+ mb->hash.fdir.hi = rte_le_to_cpu_32(desc->flow_id);
+ }
+
+ if (rxq->xtr_ol_flag) {
+ uint32_t metadata = 0;
+
+ if (desc->flex_ts.flex.aux0 != 0xFFFF)
+ metadata = rte_le_to_cpu_16(desc->flex_ts.flex.aux0);
+ else if (desc->flex_ts.flex.aux1 != 0xFFFF)
+ metadata = rte_le_to_cpu_16(desc->flex_ts.flex.aux1);
+
+ if (metadata) {
+ mb->ol_flags |= rxq->xtr_ol_flag;
+
+ *RTE_NET_IAVF_DYNF_PROTO_XTR_METADATA(mb) = metadata;
+ }
+ }
+#endif
+}
+
+static void
+iavf_select_rxd_to_pkt_fields_handler(struct iavf_rx_queue *rxq, uint32_t rxdid)
+{
+ switch (rxdid) {
+ case IAVF_RXDID_COMMS_AUX_VLAN:
+ rxq->xtr_ol_flag = rte_net_iavf_dynflag_proto_xtr_vlan_mask;
+ rxq->rxd_to_pkt_fields =
+ iavf_rxd_to_pkt_fields_by_comms_aux_v1;
+ break;
+ case IAVF_RXDID_COMMS_AUX_IPV4:
+ rxq->xtr_ol_flag = rte_net_iavf_dynflag_proto_xtr_ipv4_mask;
+ rxq->rxd_to_pkt_fields =
+ iavf_rxd_to_pkt_fields_by_comms_aux_v1;
+ break;
+ case IAVF_RXDID_COMMS_AUX_IPV6:
+ rxq->xtr_ol_flag = rte_net_iavf_dynflag_proto_xtr_ipv6_mask;
+ rxq->rxd_to_pkt_fields =
+ iavf_rxd_to_pkt_fields_by_comms_aux_v1;
+ break;
+ case IAVF_RXDID_COMMS_AUX_IPV6_FLOW:
+ rxq->xtr_ol_flag =
+ rte_net_iavf_dynflag_proto_xtr_ipv6_flow_mask;
+ rxq->rxd_to_pkt_fields =
+ iavf_rxd_to_pkt_fields_by_comms_aux_v1;
+ break;
+ case IAVF_RXDID_COMMS_AUX_TCP:
+ rxq->xtr_ol_flag = rte_net_iavf_dynflag_proto_xtr_tcp_mask;
+ rxq->rxd_to_pkt_fields =
+ iavf_rxd_to_pkt_fields_by_comms_aux_v1;
+ break;
+ case IAVF_RXDID_COMMS_AUX_IP_OFFSET:
+ rxq->xtr_ol_flag =
+ rte_net_iavf_dynflag_proto_xtr_ip_offset_mask;
+ rxq->rxd_to_pkt_fields =
+ iavf_rxd_to_pkt_fields_by_comms_aux_v2;
+ break;
+ case IAVF_RXDID_COMMS_OVS_1:
+ rxq->rxd_to_pkt_fields = iavf_rxd_to_pkt_fields_by_comms_ovs;
+ break;
+ default:
+ /* update this according to the RXDID for FLEX_DESC_NONE */
+ rxq->rxd_to_pkt_fields = iavf_rxd_to_pkt_fields_by_comms_ovs;
+ break;
+ }
+
+ if (!rte_net_iavf_dynf_proto_xtr_metadata_avail())
+ rxq->xtr_ol_flag = 0;
+}
+
int
iavf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
uint16_t nb_desc, unsigned int socket_id,
@@ -309,6 +492,7 @@ iavf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
struct iavf_rx_queue *rxq;
const struct rte_memzone *mz;
uint32_t ring_size;
+ uint8_t proto_xtr;
uint16_t len;
uint16_t rx_free_thresh;
@@ -346,14 +530,18 @@ iavf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
return -ENOMEM;
}
- if (vf->vf_res->vf_cap_flags &
- VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC &&
- vf->supported_rxdid & BIT(IAVF_RXDID_COMMS_OVS_1)) {
- rxq->rxdid = IAVF_RXDID_COMMS_OVS_1;
+ if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC) {
+ proto_xtr = vf->proto_xtr ? vf->proto_xtr[queue_idx] :
+ IAVF_PROTO_XTR_NONE;
+ rxq->rxdid = iavf_proto_xtr_type_to_rxdid(proto_xtr);
+ rxq->proto_xtr = proto_xtr;
} else {
rxq->rxdid = IAVF_RXDID_LEGACY_1;
+ rxq->proto_xtr = IAVF_PROTO_XTR_NONE;
}
+ iavf_select_rxd_to_pkt_fields_handler(rxq, rxq->rxdid);
+
rxq->mp = mp;
rxq->nb_rx_desc = nb_desc;
rxq->rx_free_thresh = rx_free_thresh;
@@ -715,6 +903,14 @@ iavf_stop_queues(struct rte_eth_dev *dev)
}
}
+#define IAVF_RX_FLEX_ERR0_BITS \
+ ((1 << IAVF_RX_FLEX_DESC_STATUS0_HBO_S) | \
+ (1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_IPE_S) | \
+ (1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_L4E_S) | \
+ (1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_EIPE_S) | \
+ (1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_EUDPE_S) | \
+ (1 << IAVF_RX_FLEX_DESC_STATUS0_RXE_S))
+
static inline void
iavf_rxd_to_vlan_tci(struct rte_mbuf *mb, volatile union iavf_rx_desc *rxdp)
{
@@ -740,6 +936,21 @@ iavf_flex_rxd_to_vlan_tci(struct rte_mbuf *mb,
} else {
mb->vlan_tci = 0;
}
+
+#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
+ if (rte_le_to_cpu_16(rxdp->wb.status_error1) &
+ (1 << IAVF_RX_FLEX_DESC_STATUS1_L2TAG2P_S)) {
+ mb->ol_flags |= PKT_RX_QINQ_STRIPPED | PKT_RX_QINQ |
+ PKT_RX_VLAN_STRIPPED | PKT_RX_VLAN;
+ mb->vlan_tci_outer = mb->vlan_tci;
+ mb->vlan_tci = rte_le_to_cpu_16(rxdp->wb.l2tag2_2nd);
+ PMD_RX_LOG(DEBUG, "Descriptor l2tag2_1: %u, l2tag2_2: %u",
+ rte_le_to_cpu_16(rxdp->wb.l2tag2_1st),
+ rte_le_to_cpu_16(rxdp->wb.l2tag2_2nd));
+ } else {
+ mb->vlan_tci_outer = 0;
+ }
+#endif
}
/* Translate the rx descriptor status and error fields to pkt flags */
@@ -804,30 +1015,6 @@ iavf_rxd_build_fdir(volatile union iavf_rx_desc *rxdp, struct rte_mbuf *mb)
return flags;
}
-
-/* Translate the rx flex descriptor status to pkt flags */
-static inline void
-iavf_rxd_to_pkt_fields(struct rte_mbuf *mb,
- volatile union iavf_rx_flex_desc *rxdp)
-{
- volatile struct iavf_32b_rx_flex_desc_comms_ovs *desc =
- (volatile struct iavf_32b_rx_flex_desc_comms_ovs *)rxdp;
-#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
- uint16_t stat_err;
-
- stat_err = rte_le_to_cpu_16(desc->status_error0);
- if (likely(stat_err & (1 << IAVF_RX_FLEX_DESC_STATUS0_RSS_VALID_S))) {
- mb->ol_flags |= PKT_RX_RSS_HASH;
- mb->hash.rss = rte_le_to_cpu_32(desc->rss_hash);
- }
-#endif
-
- if (desc->flow_id != 0xFFFFFFFF) {
- mb->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
- mb->hash.fdir.hi = rte_le_to_cpu_32(desc->flow_id);
- }
-}
-
#define IAVF_RX_FLEX_ERR0_BITS \
((1 << IAVF_RX_FLEX_DESC_STATUS0_HBO_S) | \
(1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_IPE_S) | \
@@ -1082,7 +1269,7 @@ iavf_recv_pkts_flex_rxd(void *rx_queue,
rxm->packet_type = ptype_tbl[IAVF_RX_FLEX_DESC_PTYPE_M &
rte_le_to_cpu_16(rxd.wb.ptype_flex_flags0)];
iavf_flex_rxd_to_vlan_tci(rxm, &rxd);
- iavf_rxd_to_pkt_fields(rxm, &rxd);
+ rxq->rxd_to_pkt_fields(rxq, rxm, &rxd);
pkt_flags = iavf_flex_rxd_error_to_pkt_flags(rx_stat_err0);
rxm->ol_flags |= pkt_flags;
@@ -1223,7 +1410,7 @@ iavf_recv_scattered_pkts_flex_rxd(void *rx_queue, struct rte_mbuf **rx_pkts,
first_seg->packet_type = ptype_tbl[IAVF_RX_FLEX_DESC_PTYPE_M &
rte_le_to_cpu_16(rxd.wb.ptype_flex_flags0)];
iavf_flex_rxd_to_vlan_tci(first_seg, &rxd);
- iavf_rxd_to_pkt_fields(first_seg, &rxd);
+ rxq->rxd_to_pkt_fields(rxq, first_seg, &rxd);
pkt_flags = iavf_flex_rxd_error_to_pkt_flags(rx_stat_err0);
first_seg->ol_flags |= pkt_flags;
@@ -1460,7 +1647,7 @@ iavf_rx_scan_hw_ring_flex_rxd(struct iavf_rx_queue *rxq)
mb->packet_type = ptype_tbl[IAVF_RX_FLEX_DESC_PTYPE_M &
rte_le_to_cpu_16(rxdp[j].wb.ptype_flex_flags0)];
iavf_flex_rxd_to_vlan_tci(mb, &rxdp[j]);
- iavf_rxd_to_pkt_fields(mb, &rxdp[j]);
+ rxq->rxd_to_pkt_fields(rxq, mb, &rxdp[j]);
stat_err0 = rte_le_to_cpu_16(rxdp[j].wb.status_error0);
pkt_flags = iavf_flex_rxd_error_to_pkt_flags(stat_err0);
@@ -1652,7 +1839,7 @@ rx_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
if (rxq->rx_nb_avail)
return iavf_rx_fill_from_stage(rxq, rx_pkts, nb_pkts);
- if (rxq->rxdid == IAVF_RXDID_COMMS_OVS_1)
+ if (rxq->rxdid >= IAVF_RXDID_FLEX_NIC && rxq->rxdid <= IAVF_RXDID_LAST)
nb_rx = (uint16_t)iavf_rx_scan_hw_ring_flex_rxd(rxq);
else
nb_rx = (uint16_t)iavf_rx_scan_hw_ring(rxq);
@@ -2099,6 +2286,7 @@ iavf_set_rx_function(struct rte_eth_dev *dev)
struct iavf_adapter *adapter =
IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+
#ifdef RTE_ARCH_X86
struct iavf_rx_queue *rxq;
int i;
diff --git a/drivers/net/iavf/iavf_rxtx.h b/drivers/net/iavf/iavf_rxtx.h
index 3d02c6589d..39b31aaa8e 100644
--- a/drivers/net/iavf/iavf_rxtx.h
+++ b/drivers/net/iavf/iavf_rxtx.h
@@ -57,6 +57,77 @@
#define IAVF_TX_OFFLOAD_NOTSUP_MASK \
(PKT_TX_OFFLOAD_MASK ^ IAVF_TX_OFFLOAD_MASK)
+/* Rx Flex Descriptors
+ * These descriptors are used instead of the legacy version descriptors
+ */
+union iavf_16b_rx_flex_desc {
+ struct {
+ __le64 pkt_addr; /* Packet buffer address */
+ __le64 hdr_addr; /* Header buffer address */
+ /* bit 0 of hdr_addr is DD bit */
+ } read;
+ struct {
+ /* Qword 0 */
+ u8 rxdid; /* descriptor builder profile ID */
+ u8 mir_id_umb_cast; /* mirror=[5:0], umb=[7:6] */
+ __le16 ptype_flex_flags0; /* ptype=[9:0], ff0=[15:10] */
+ __le16 pkt_len; /* [15:14] are reserved */
+ __le16 hdr_len_sph_flex_flags1; /* header=[10:0] */
+ /* sph=[11:11] */
+ /* ff1/ext=[15:12] */
+
+ /* Qword 1 */
+ __le16 status_error0;
+ __le16 l2tag1;
+ __le16 flex_meta0;
+ __le16 flex_meta1;
+ } wb; /* writeback */
+};
+
+union iavf_32b_rx_flex_desc {
+ struct {
+ __le64 pkt_addr; /* Packet buffer address */
+ __le64 hdr_addr; /* Header buffer address */
+ /* bit 0 of hdr_addr is DD bit */
+ __le64 rsvd1;
+ __le64 rsvd2;
+ } read;
+ struct {
+ /* Qword 0 */
+ u8 rxdid; /* descriptor builder profile ID */
+ u8 mir_id_umb_cast; /* mirror=[5:0], umb=[7:6] */
+ __le16 ptype_flex_flags0; /* ptype=[9:0], ff0=[15:10] */
+ __le16 pkt_len; /* [15:14] are reserved */
+ __le16 hdr_len_sph_flex_flags1; /* header=[10:0] */
+ /* sph=[11:11] */
+ /* ff1/ext=[15:12] */
+
+ /* Qword 1 */
+ __le16 status_error0;
+ __le16 l2tag1;
+ __le16 flex_meta0;
+ __le16 flex_meta1;
+
+ /* Qword 2 */
+ __le16 status_error1;
+ u8 flex_flags2;
+ u8 time_stamp_low;
+ __le16 l2tag2_1st;
+ __le16 l2tag2_2nd;
+
+ /* Qword 3 */
+ __le16 flex_meta2;
+ __le16 flex_meta3;
+ union {
+ struct {
+ __le16 flex_meta4;
+ __le16 flex_meta5;
+ } flex;
+ __le32 ts_high;
+ } flex_ts;
+ } wb; /* writeback */
+};
+
/* HW desc structure, both 16-byte and 32-byte types are supported */
#ifdef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
#define iavf_rx_desc iavf_16byte_rx_desc
@@ -66,6 +137,10 @@
#define iavf_rx_flex_desc iavf_32b_rx_flex_desc
#endif
+typedef void (*iavf_rxd_to_pkt_fields_t)(struct iavf_rx_queue *rxq,
+ struct rte_mbuf *mb,
+ volatile union iavf_rx_flex_desc *rxdp);
+
struct iavf_rxq_ops {
void (*release_mbufs)(struct iavf_rx_queue *rxq);
};
@@ -114,6 +189,11 @@ struct iavf_rx_queue {
bool q_set; /* if rx queue has been configured */
bool rx_deferred_start; /* don't start this queue in dev start */
const struct iavf_rxq_ops *ops;
+ uint8_t proto_xtr; /* protocol extraction type */
+ uint64_t xtr_ol_flag;
+ /* flexible descriptor metadata extraction offload flag */
+ iavf_rxd_to_pkt_fields_t rxd_to_pkt_fields;
+ /* handle flexible descriptor by RXDID */
};
struct iavf_tx_entry {
@@ -161,77 +241,6 @@ union iavf_tx_offload {
};
};
-/* Rx Flex Descriptors
- * These descriptors are used instead of the legacy version descriptors
- */
-union iavf_16b_rx_flex_desc {
- struct {
- __le64 pkt_addr; /* Packet buffer address */
- __le64 hdr_addr; /* Header buffer address */
- /* bit 0 of hdr_addr is DD bit */
- } read;
- struct {
- /* Qword 0 */
- u8 rxdid; /* descriptor builder profile ID */
- u8 mir_id_umb_cast; /* mirror=[5:0], umb=[7:6] */
- __le16 ptype_flex_flags0; /* ptype=[9:0], ff0=[15:10] */
- __le16 pkt_len; /* [15:14] are reserved */
- __le16 hdr_len_sph_flex_flags1; /* header=[10:0] */
- /* sph=[11:11] */
- /* ff1/ext=[15:12] */
-
- /* Qword 1 */
- __le16 status_error0;
- __le16 l2tag1;
- __le16 flex_meta0;
- __le16 flex_meta1;
- } wb; /* writeback */
-};
-
-union iavf_32b_rx_flex_desc {
- struct {
- __le64 pkt_addr; /* Packet buffer address */
- __le64 hdr_addr; /* Header buffer address */
- /* bit 0 of hdr_addr is DD bit */
- __le64 rsvd1;
- __le64 rsvd2;
- } read;
- struct {
- /* Qword 0 */
- u8 rxdid; /* descriptor builder profile ID */
- u8 mir_id_umb_cast; /* mirror=[5:0], umb=[7:6] */
- __le16 ptype_flex_flags0; /* ptype=[9:0], ff0=[15:10] */
- __le16 pkt_len; /* [15:14] are reserved */
- __le16 hdr_len_sph_flex_flags1; /* header=[10:0] */
- /* sph=[11:11] */
- /* ff1/ext=[15:12] */
-
- /* Qword 1 */
- __le16 status_error0;
- __le16 l2tag1;
- __le16 flex_meta0;
- __le16 flex_meta1;
-
- /* Qword 2 */
- __le16 status_error1;
- u8 flex_flags2;
- u8 time_stamp_low;
- __le16 l2tag2_1st;
- __le16 l2tag2_2nd;
-
- /* Qword 3 */
- __le16 flex_meta2;
- __le16 flex_meta3;
- union {
- struct {
- __le16 flex_meta4;
- __le16 flex_meta5;
- } flex;
- __le32 ts_high;
- } flex_ts;
- } wb; /* writeback */
-};
-
/* Rx Flex Descriptor
* RxDID Profile ID 16-21
* Flex-field 0: RSS hash lower 16-bits
@@ -331,6 +340,7 @@ enum iavf_rxdid {
IAVF_RXDID_COMMS_AUX_TCP = 21,
IAVF_RXDID_COMMS_OVS_1 = 22,
IAVF_RXDID_COMMS_OVS_2 = 23,
+ IAVF_RXDID_COMMS_AUX_IP_OFFSET = 25,
IAVF_RXDID_LAST = 63,
};
@@ -355,6 +365,20 @@ enum iavf_rx_flex_desc_status_error_0_bits {
IAVF_RX_FLEX_DESC_STATUS0_LAST /* this entry must be last!!! */
};
+enum iavf_rx_flex_desc_status_error_1_bits {
+ /* Note: These are predefined bit offsets */
+ IAVF_RX_FLEX_DESC_STATUS1_CPM_S = 0, /* 4 bits */
+ IAVF_RX_FLEX_DESC_STATUS1_NAT_S = 4,
+ IAVF_RX_FLEX_DESC_STATUS1_CRYPTO_S = 5,
+ /* [10:6] reserved */
+ IAVF_RX_FLEX_DESC_STATUS1_L2TAG2P_S = 11,
+ IAVF_RX_FLEX_DESC_STATUS1_XTRMD2_VALID_S = 12,
+ IAVF_RX_FLEX_DESC_STATUS1_XTRMD3_VALID_S = 13,
+ IAVF_RX_FLEX_DESC_STATUS1_XTRMD4_VALID_S = 14,
+ IAVF_RX_FLEX_DESC_STATUS1_XTRMD5_VALID_S = 15,
+ IAVF_RX_FLEX_DESC_STATUS1_LAST /* this entry must be last!!! */
+};
+
/* for iavf_32b_rx_flex_desc.ptype_flex_flags0 member */
#define IAVF_RX_FLEX_DESC_PTYPE_M (0x3FF) /* 10-bits */
@@ -439,6 +463,8 @@ int iavf_tx_vec_dev_check(struct rte_eth_dev *dev);
int iavf_rxq_vec_setup(struct iavf_rx_queue *rxq);
int iavf_txq_vec_setup(struct iavf_tx_queue *txq);
+uint8_t iavf_proto_xtr_type_to_rxdid(uint8_t xtr_type);
+
const uint32_t *iavf_get_default_ptype_table(void);
static inline
diff --git a/drivers/net/iavf/iavf_rxtx_vec_common.h b/drivers/net/iavf/iavf_rxtx_vec_common.h
index 25bb502de2..7ad1e0f68a 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_common.h
+++ b/drivers/net/iavf/iavf_rxtx_vec_common.h
@@ -224,6 +224,9 @@ iavf_rx_vec_queue_default(struct iavf_rx_queue *rxq)
if (rxq->nb_rx_desc % rxq->rx_free_thresh)
return -1;
+ if (rxq->proto_xtr != IAVF_PROTO_XTR_NONE)
+ return -1;
+
return 0;
}
diff --git a/drivers/net/iavf/iavf_vchnl.c b/drivers/net/iavf/iavf_vchnl.c
index db0b768765..5e7142893b 100644
--- a/drivers/net/iavf/iavf_vchnl.c
+++ b/drivers/net/iavf/iavf_vchnl.c
@@ -648,25 +648,27 @@ iavf_configure_queues(struct iavf_adapter *adapter)
#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
if (vf->vf_res->vf_cap_flags &
- VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC &&
- vf->supported_rxdid & BIT(IAVF_RXDID_COMMS_OVS_1)) {
- vc_qp->rxq.rxdid = IAVF_RXDID_COMMS_OVS_1;
- PMD_DRV_LOG(NOTICE, "request RXDID == %d in "
- "Queue[%d]", vc_qp->rxq.rxdid, i);
+ VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC &&
+ vf->supported_rxdid & BIT(rxq[i]->rxdid)) {
+ vc_qp->rxq.rxdid = rxq[i]->rxdid;
+ PMD_DRV_LOG(NOTICE, "request RXDID[%d] in Queue[%d]",
+ vc_qp->rxq.rxdid, i);
} else {
+ PMD_DRV_LOG(NOTICE, "RXDID[%d] is not supported, "
+ "request default RXDID[%d] in Queue[%d]",
+ rxq[i]->rxdid, IAVF_RXDID_LEGACY_1, i);
vc_qp->rxq.rxdid = IAVF_RXDID_LEGACY_1;
- PMD_DRV_LOG(NOTICE, "request RXDID == %d in "
- "Queue[%d]", vc_qp->rxq.rxdid, i);
}
#else
if (vf->vf_res->vf_cap_flags &
VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC &&
vf->supported_rxdid & BIT(IAVF_RXDID_LEGACY_0)) {
vc_qp->rxq.rxdid = IAVF_RXDID_LEGACY_0;
- PMD_DRV_LOG(NOTICE, "request RXDID == %d in "
- "Queue[%d]", vc_qp->rxq.rxdid, i);
+ PMD_DRV_LOG(NOTICE, "request RXDID[%d] in Queue[%d]",
+ vc_qp->rxq.rxdid, i);
} else {
- PMD_DRV_LOG(ERR, "RXDID == 0 is not supported");
+ PMD_DRV_LOG(ERR, "RXDID[%d] is not supported",
+ IAVF_RXDID_LEGACY_0);
return -1;
}
#endif
diff --git a/drivers/net/iavf/meson.build b/drivers/net/iavf/meson.build
index 33407c5032..c1c74571a1 100644
--- a/drivers/net/iavf/meson.build
+++ b/drivers/net/iavf/meson.build
@@ -35,3 +35,5 @@ if arch_subdir == 'x86'
objs += iavf_avx2_lib.extract_objects('iavf_rxtx_vec_avx2.c')
endif
endif
+
+install_headers('rte_pmd_iavf.h')
diff --git a/drivers/net/iavf/rte_pmd_iavf.h b/drivers/net/iavf/rte_pmd_iavf.h
new file mode 100644
index 0000000000..5e41568c32
--- /dev/null
+++ b/drivers/net/iavf/rte_pmd_iavf.h
@@ -0,0 +1,250 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#ifndef _RTE_PMD_IAVF_H_
+#define _RTE_PMD_IAVF_H_
+
+/**
+ * @file rte_pmd_iavf.h
+ *
+ * iavf PMD specific functions.
+ *
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
+ *
+ */
+
+#include <stdio.h>
+#include <rte_mbuf.h>
+#include <rte_mbuf_dyn.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * The supported formats of the network flexible descriptor's extraction metadata.
+ */
+union rte_net_iavf_proto_xtr_metadata {
+ uint32_t metadata;
+
+ struct {
+ uint16_t data0;
+ uint16_t data1;
+ } raw;
+
+ struct {
+ uint16_t stag_vid:12,
+ stag_dei:1,
+ stag_pcp:3;
+ uint16_t ctag_vid:12,
+ ctag_dei:1,
+ ctag_pcp:3;
+ } vlan;
+
+ struct {
+ uint16_t protocol:8,
+ ttl:8;
+ uint16_t tos:8,
+ ihl:4,
+ version:4;
+ } ipv4;
+
+ struct {
+ uint16_t hoplimit:8,
+ nexthdr:8;
+ uint16_t flowhi4:4,
+ tc:8,
+ version:4;
+ } ipv6;
+
+ struct {
+ uint16_t flowlo16;
+ uint16_t flowhi4:4,
+ tc:8,
+ version:4;
+ } ipv6_flow;
+
+ struct {
+ uint16_t fin:1,
+ syn:1,
+ rst:1,
+ psh:1,
+ ack:1,
+ urg:1,
+ ece:1,
+ cwr:1,
+ res1:4,
+ doff:4;
+ uint16_t rsvd;
+ } tcp;
+
+ uint32_t ip_ofs;
+};
+
+/* Offset of mbuf dynamic field for flexible descriptor's extraction data */
+extern int rte_net_iavf_dynfield_proto_xtr_metadata_offs;
+
+/* Mask of mbuf dynamic flags for flexible descriptor's extraction type */
+extern uint64_t rte_net_iavf_dynflag_proto_xtr_vlan_mask;
+extern uint64_t rte_net_iavf_dynflag_proto_xtr_ipv4_mask;
+extern uint64_t rte_net_iavf_dynflag_proto_xtr_ipv6_mask;
+extern uint64_t rte_net_iavf_dynflag_proto_xtr_ipv6_flow_mask;
+extern uint64_t rte_net_iavf_dynflag_proto_xtr_tcp_mask;
+extern uint64_t rte_net_iavf_dynflag_proto_xtr_ip_offset_mask;
+
+/**
+ * The mbuf dynamic field pointer for flexible descriptor's extraction metadata.
+ */
+#define RTE_NET_IAVF_DYNF_PROTO_XTR_METADATA(m) \
+ RTE_MBUF_DYNFIELD((m), \
+ rte_net_iavf_dynfield_proto_xtr_metadata_offs, \
+ uint32_t *)
+
+/**
+ * The mbuf dynamic flag for VLAN protocol extraction metadata; it is valid
+ * when devargs 'proto_xtr' has 'vlan' specified.
+ */
+#define RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_VLAN \
+ (rte_net_iavf_dynflag_proto_xtr_vlan_mask)
+
+/**
+ * The mbuf dynamic flag for IPv4 protocol extraction metadata; it is valid
+ * when devargs 'proto_xtr' has 'ipv4' specified.
+ */
+#define RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_IPV4 \
+ (rte_net_iavf_dynflag_proto_xtr_ipv4_mask)
+
+/**
+ * The mbuf dynamic flag for IPv6 protocol extraction metadata; it is valid
+ * when devargs 'proto_xtr' has 'ipv6' specified.
+ */
+#define RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_IPV6 \
+ (rte_net_iavf_dynflag_proto_xtr_ipv6_mask)
+
+/**
+ * The mbuf dynamic flag for IPv6 with flow protocol extraction metadata; it
+ * is valid when devargs 'proto_xtr' has 'ipv6_flow' specified.
+ */
+#define RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_IPV6_FLOW \
+ (rte_net_iavf_dynflag_proto_xtr_ipv6_flow_mask)
+
+/**
+ * The mbuf dynamic flag for TCP protocol extraction metadata; it is valid
+ * when devargs 'proto_xtr' has 'tcp' specified.
+ */
+#define RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_TCP \
+ (rte_net_iavf_dynflag_proto_xtr_tcp_mask)
+
+/**
+ * The mbuf dynamic flag for IP_OFFSET extraction metadata; it is valid
+ * when devargs 'proto_xtr' has 'ip_offset' specified.
+ */
+#define RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_IP_OFFSET \
+ (rte_net_iavf_dynflag_proto_xtr_ip_offset_mask)
+
+/**
+ * Check if mbuf dynamic field for flexible descriptor's extraction metadata
+ * is registered.
+ *
+ * @return
+ * True if registered, false otherwise.
+ */
+__rte_experimental
+static __rte_always_inline int
+rte_net_iavf_dynf_proto_xtr_metadata_avail(void)
+{
+ return rte_net_iavf_dynfield_proto_xtr_metadata_offs != -1;
+}
+
+/**
+ * Get the mbuf dynamic field for flexible descriptor's extraction metadata.
+ *
+ * @param m
+ * The pointer to the mbuf.
+ * @return
+ * The saved protocol extraction metadata.
+ */
+__rte_experimental
+static __rte_always_inline uint32_t
+rte_net_iavf_dynf_proto_xtr_metadata_get(struct rte_mbuf *m)
+{
+ return *RTE_NET_IAVF_DYNF_PROTO_XTR_METADATA(m);
+}
+
+/**
+ * Dump the mbuf dynamic field for flexible descriptor's extraction metadata.
+ *
+ * @param m
+ * The pointer to the mbuf.
+ */
+__rte_experimental
+static inline void
+rte_net_iavf_dump_proto_xtr_metadata(struct rte_mbuf *m)
+{
+ union rte_net_iavf_proto_xtr_metadata data;
+
+ if (!rte_net_iavf_dynf_proto_xtr_metadata_avail())
+ return;
+
+ data.metadata = rte_net_iavf_dynf_proto_xtr_metadata_get(m);
+
+ if (m->ol_flags & RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_VLAN)
+ printf(" - Flexible descriptor's Metadata: [0x%04x:0x%04x],"
+ "vlan,stag=%u:%u:%u,ctag=%u:%u:%u",
+ data.raw.data0, data.raw.data1,
+ data.vlan.stag_pcp,
+ data.vlan.stag_dei,
+ data.vlan.stag_vid,
+ data.vlan.ctag_pcp,
+ data.vlan.ctag_dei,
+ data.vlan.ctag_vid);
+ else if (m->ol_flags & RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_IPV4)
+ printf(" - Flexible descriptor's Metadata: [0x%04x:0x%04x],"
+ "ipv4,ver=%u,hdrlen=%u,tos=%u,ttl=%u,proto=%u",
+ data.raw.data0, data.raw.data1,
+ data.ipv4.version,
+ data.ipv4.ihl,
+ data.ipv4.tos,
+ data.ipv4.ttl,
+ data.ipv4.protocol);
+ else if (m->ol_flags & RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_IPV6)
+ printf(" - Flexible descriptor's Metadata: [0x%04x:0x%04x],"
+ "ipv6,ver=%u,tc=%u,flow_hi4=0x%x,nexthdr=%u,hoplimit=%u",
+ data.raw.data0, data.raw.data1,
+ data.ipv6.version,
+ data.ipv6.tc,
+ data.ipv6.flowhi4,
+ data.ipv6.nexthdr,
+ data.ipv6.hoplimit);
+ else if (m->ol_flags & RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_IPV6_FLOW)
+ printf(" - Flexible descriptor's Metadata: [0x%04x:0x%04x],"
+ "ipv6_flow,ver=%u,tc=%u,flow=0x%x%04x",
+ data.raw.data0, data.raw.data1,
+ data.ipv6_flow.version,
+ data.ipv6_flow.tc,
+ data.ipv6_flow.flowhi4,
+ data.ipv6_flow.flowlo16);
+ else if (m->ol_flags & RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_TCP)
+ printf(" - Flexible descriptor's Metadata: [0x%04x:0x%04x],"
+ "tcp,doff=%u,flags=%s%s%s%s%s%s%s%s",
+ data.raw.data0, data.raw.data1,
+ data.tcp.doff,
+ data.tcp.cwr ? "C" : "",
+ data.tcp.ece ? "E" : "",
+ data.tcp.urg ? "U" : "",
+ data.tcp.ack ? "A" : "",
+ data.tcp.psh ? "P" : "",
+ data.tcp.rst ? "R" : "",
+ data.tcp.syn ? "S" : "",
+ data.tcp.fin ? "F" : "");
+ else if (m->ol_flags & RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_IP_OFFSET)
+ printf(" - Flexible descriptor's Extraction: ip_offset=%u",
+ data.ip_ofs);
+}
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_PMD_IAVF_H_ */
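The `rte_net_iavf_proto_xtr_metadata` union above overlays typed views on the 32-bit extracted word. A minimal standalone sketch of that access pattern, using only the VLAN view (layout copied from the header; the bitfield packing assumed here is the little-endian convention, first-declared field in the least-significant bits, as on the x86 targets this PMD usually runs on; `make_stag` is an illustrative helper, not a driver API):

```c
#include <stdint.h>

/* VLAN view of the 32-bit extraction word, mirrored from the
 * rte_net_iavf_proto_xtr_metadata union in this patch. Bitfield
 * packing assumes a little-endian ABI (first field = LSBs). */
union xtr_metadata {
	uint32_t metadata;
	struct {
		uint16_t data0;
		uint16_t data1;
	} raw;
	struct {
		uint16_t stag_vid:12, stag_dei:1, stag_pcp:3;
		uint16_t ctag_vid:12, ctag_dei:1, ctag_pcp:3;
	} vlan;
};

/* Build the raw 16-bit half-word a descriptor would carry for the
 * given 802.1Q tag fields; purely illustrative. */
static inline uint16_t make_stag(uint16_t pcp, uint16_t dei, uint16_t vid)
{
	return (uint16_t)((pcp << 13) | (dei << 12) | (vid & 0xFFF));
}
```

In the driver the same 32-bit word is fetched per-mbuf via `RTE_NET_IAVF_DYNF_PROTO_XTR_METADATA(mb)` and then decoded through whichever view matches the `ol_flags` dynamic flag that is set.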
diff --git a/drivers/net/iavf/rte_pmd_iavf_version.map b/drivers/net/iavf/rte_pmd_iavf_version.map
index 4a76d1d52d..d7afd31d14 100644
--- a/drivers/net/iavf/rte_pmd_iavf_version.map
+++ b/drivers/net/iavf/rte_pmd_iavf_version.map
@@ -1,3 +1,16 @@
DPDK_21 {
local: *;
};
+
+EXPERIMENTAL {
+ global:
+
+ # added in 20.11
+ rte_net_iavf_dynfield_proto_xtr_metadata_offs;
+ rte_net_iavf_dynflag_proto_xtr_vlan_mask;
+ rte_net_iavf_dynflag_proto_xtr_ipv4_mask;
+ rte_net_iavf_dynflag_proto_xtr_ipv6_mask;
+ rte_net_iavf_dynflag_proto_xtr_ipv6_flow_mask;
+ rte_net_iavf_dynflag_proto_xtr_tcp_mask;
+ rte_net_iavf_dynflag_proto_xtr_ip_offset_mask;
+};
--
2.20.1
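The core mechanism of this patch is visible in `iavf_select_rxd_to_pkt_fields_handler`: the RXDID chosen at queue setup selects the `rxd_to_pkt_fields` callback once, so the Rx hot path never branches on descriptor format per packet. A self-contained sketch of that dispatch shape (names and types simplified, not the driver's actual definitions):

```c
#include <stddef.h>

/* Simplified RXDID values, loosely mirroring enum iavf_rxdid. */
enum rxdid { RXDID_LEGACY_1 = 1, RXDID_AUX_VLAN = 16, RXDID_OVS_1 = 22 };

struct rx_queue;
typedef const char *(*rxd_to_fields_t)(struct rx_queue *q);

struct rx_queue {
	enum rxdid rxdid;
	rxd_to_fields_t rxd_to_pkt_fields; /* set once at queue setup */
};

static const char *fields_aux_v1(struct rx_queue *q) { (void)q; return "aux_v1"; }
static const char *fields_ovs(struct rx_queue *q) { (void)q; return "ovs"; }

/* Analogous to iavf_select_rxd_to_pkt_fields_handler: the OVS
 * profile doubles as the fallback, matching the patch's default case. */
static void select_handler(struct rx_queue *q)
{
	switch (q->rxdid) {
	case RXDID_AUX_VLAN:
		q->rxd_to_pkt_fields = fields_aux_v1;
		break;
	default:
		q->rxd_to_pkt_fields = fields_ovs;
		break;
	}
}
```

This is why the burst functions above changed from calling a fixed `iavf_rxd_to_pkt_fields()` to calling `rxq->rxd_to_pkt_fields(rxq, ...)`.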
* [dpdk-dev] [PATCH v10] net/iavf: support flex desc metadata extraction
2020-09-09 2:54 [dpdk-dev] [PATCH v1] net/iavf: support flex desc metadata extraction Jeff Guo
` (9 preceding siblings ...)
2020-10-15 3:41 ` [dpdk-dev] [PATCH v9] " Jeff Guo
@ 2020-10-27 5:04 ` Jeff Guo
2020-10-27 5:21 ` Wang, Haiyue
2020-10-30 2:54 ` [dpdk-dev] [PATCH v11] " Jeff Guo
` (2 subsequent siblings)
13 siblings, 1 reply; 40+ messages in thread
From: Jeff Guo @ 2020-10-27 5:04 UTC (permalink / raw)
To: jingjing.wu, qi.z.zhang, beilei.xing, ferruh.yigit
Cc: dev, haiyue.wang, bruce.richardson, jia.guo
Enable metadata extraction for flexible descriptors in AVF, so that
a network function can get metadata directly without additional
parsing, which reduces the CPU cost for VFs. The enabled metadata
extraction covers the VLAN/IPv4/IPv6/IPv6-FLOW/TCP/MPLS flexible
descriptors, and the VF can negotiate the flexible descriptor
capability with the PF and configure the corresponding offload on
its receive queues.
Signed-off-by: Jeff Guo <jia.guo@intel.com>
---
v10:
delete the makefile configure and rename the dynamic mbuf name
v9:
change the undef config
v8:
rebase patch for apply issue
v7:
remove some useless code and add documentation
v6:
rebase patch
v5:
remove ovs configure since ovs is not protocol extraction
v4:
add flex desc type in rx queue for handling vector path
handle ovs flex type
v3:
export these global symbols into .map
v2:
remove makefile change and modify the rxdid handling
---
doc/guides/rel_notes/release_20_11.rst | 6 +
drivers/net/iavf/iavf.h | 24 +-
drivers/net/iavf/iavf_ethdev.c | 394 ++++++++++++++++++++++++
drivers/net/iavf/iavf_rxtx.c | 252 +++++++++++++--
drivers/net/iavf/iavf_rxtx.h | 169 +++++-----
drivers/net/iavf/iavf_rxtx_vec_common.h | 3 +
drivers/net/iavf/iavf_vchnl.c | 22 +-
drivers/net/iavf/meson.build | 2 +
drivers/net/iavf/rte_pmd_iavf.h | 250 +++++++++++++++
9 files changed, 1008 insertions(+), 114 deletions(-)
create mode 100644 drivers/net/iavf/rte_pmd_iavf.h
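The `proto_xtr` devargs added in `iavf_ethdev.c` accept a per-queue queue-set syntax, e.g. `proto_xtr='[(1,2-3):tcp,4:vlan]'` (an example string inferred from the parser below, not quoted from the patch). A toy, runnable sketch of just the single-number/range part of that grammar, the piece handled by `iavf_parse_queue_set` before bracketed groups come into play:

```c
#include <stdint.h>
#include <stdlib.h>

/* Toy version of the proto_xtr queue-set grammar from the patch:
 * a single queue index ("9") or a range ("2-6"). The real parser
 * also accepts "(...)" groups and blanks; this sketch omits them. */
#define MAX_Q 64

static int parse_queue_set(const char *s, uint8_t type, uint8_t *tbl)
{
	char *end;
	unsigned long lo, hi;

	lo = strtoul(s, &end, 10);
	if (end == s || lo >= MAX_Q)
		return -1;
	hi = lo;
	if (*end == '-') {
		hi = strtoul(end + 1, &end, 10);
		if (hi >= MAX_Q)
			return -1;
	}
	if (*end != '\0')
		return -1;
	/* mark every queue in [min, max] with the extraction type */
	for (unsigned long i = lo < hi ? lo : hi;
	     i <= (lo < hi ? hi : lo); i++)
		tbl[i] = type;
	return 0;
}
```

The driver's version does the same index-range fill into `devargs->proto_xtr[]`, with added handling for `(a,b-c,...)` groups, blank skipping, and `errno` checks around `strtoul`.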
diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index 3bc4b42dc5..89e0959f98 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -190,6 +190,12 @@ New Features
Updated the Intel qat driver to use write combining stores.
+* **Updated Intel iavf driver.**
+
+ Updated iavf PMD with new features and improvements, including:
+
+ * Added support for flexible descriptor metadata extraction.
+
* **Updated Memif PMD.**
* Added support for abstract socket address.
diff --git a/drivers/net/iavf/iavf.h b/drivers/net/iavf/iavf.h
index 3d3b0da5dd..6d5912d8c1 100644
--- a/drivers/net/iavf/iavf.h
+++ b/drivers/net/iavf/iavf.h
@@ -133,7 +133,7 @@ struct iavf_info {
struct virtchnl_vf_resource *vf_res; /* VF resource */
struct virtchnl_vsi_resource *vsi_res; /* LAN VSI */
uint64_t supported_rxdid;
-
+ uint8_t *proto_xtr; /* proto xtr type for all queues */
volatile enum virtchnl_ops pend_cmd; /* pending command not finished */
uint32_t cmd_retval; /* return value of the cmd response from PF */
uint8_t *aq_resp; /* buffer to store the adminq response from PF */
@@ -169,6 +169,27 @@ struct iavf_info {
#define IAVF_MAX_PKT_TYPE 1024
+#define IAVF_MAX_QUEUE_NUM 2048
+
+enum iavf_proto_xtr_type {
+ IAVF_PROTO_XTR_NONE,
+ IAVF_PROTO_XTR_VLAN,
+ IAVF_PROTO_XTR_IPV4,
+ IAVF_PROTO_XTR_IPV6,
+ IAVF_PROTO_XTR_IPV6_FLOW,
+ IAVF_PROTO_XTR_TCP,
+ IAVF_PROTO_XTR_IP_OFFSET,
+ IAVF_PROTO_XTR_MAX,
+};
+
+/**
+ * Cache devargs parse result.
+ */
+struct iavf_devargs {
+ uint8_t proto_xtr_dflt;
+ uint8_t proto_xtr[IAVF_MAX_QUEUE_NUM];
+};
+
/* Structure to store private data for each VF instance. */
struct iavf_adapter {
struct iavf_hw hw;
@@ -182,6 +203,7 @@ struct iavf_adapter {
const uint32_t *ptype_tbl;
bool stopped;
uint16_t fdir_ref_cnt;
+ struct iavf_devargs devargs;
};
/* IAVF_DEV_PRIVATE_TO */
diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
index d12a2363f5..6a67990839 100644
--- a/drivers/net/iavf/iavf_ethdev.c
+++ b/drivers/net/iavf/iavf_ethdev.c
@@ -28,6 +28,49 @@
#include "iavf.h"
#include "iavf_rxtx.h"
#include "iavf_generic_flow.h"
+#include "rte_pmd_iavf.h"
+
+/* devargs */
+#define IAVF_PROTO_XTR_ARG "proto_xtr"
+
+static const char * const iavf_valid_args[] = {
+ IAVF_PROTO_XTR_ARG,
+ NULL
+};
+
+static const struct rte_mbuf_dynfield iavf_proto_xtr_metadata_param = {
+ .name = "intel_pmd_dynfield_proto_xtr_metadata",
+ .size = sizeof(uint32_t),
+ .align = __alignof__(uint32_t),
+ .flags = 0,
+};
+
+struct iavf_proto_xtr_ol {
+ const struct rte_mbuf_dynflag param;
+ uint64_t *ol_flag;
+ bool required;
+};
+
+static struct iavf_proto_xtr_ol iavf_proto_xtr_params[] = {
+ [IAVF_PROTO_XTR_VLAN] = {
+ .param = { .name = "intel_pmd_dynflag_proto_xtr_vlan" },
+ .ol_flag = &rte_pmd_ifd_dynflag_proto_xtr_vlan_mask },
+ [IAVF_PROTO_XTR_IPV4] = {
+ .param = { .name = "intel_pmd_dynflag_proto_xtr_ipv4" },
+ .ol_flag = &rte_pmd_ifd_dynflag_proto_xtr_ipv4_mask },
+ [IAVF_PROTO_XTR_IPV6] = {
+ .param = { .name = "intel_pmd_dynflag_proto_xtr_ipv6" },
+ .ol_flag = &rte_pmd_ifd_dynflag_proto_xtr_ipv6_mask },
+ [IAVF_PROTO_XTR_IPV6_FLOW] = {
+ .param = { .name = "intel_pmd_dynflag_proto_xtr_ipv6_flow" },
+ .ol_flag = &rte_pmd_ifd_dynflag_proto_xtr_ipv6_flow_mask },
+ [IAVF_PROTO_XTR_TCP] = {
+ .param = { .name = "intel_pmd_dynflag_proto_xtr_tcp" },
+ .ol_flag = &rte_pmd_ifd_dynflag_proto_xtr_tcp_mask },
+ [IAVF_PROTO_XTR_IP_OFFSET] = {
+ .param = { .name = "intel_pmd_dynflag_proto_xtr_ip_offset" },
+ .ol_flag = &rte_pmd_ifd_dynflag_proto_xtr_ip_offset_mask },
+};
static int iavf_dev_configure(struct rte_eth_dev *dev);
static int iavf_dev_start(struct rte_eth_dev *dev);
@@ -1393,6 +1436,349 @@ iavf_check_vf_reset_done(struct iavf_hw *hw)
return 0;
}
+static int
+iavf_lookup_proto_xtr_type(const char *flex_name)
+{
+ static struct {
+ const char *name;
+ enum iavf_proto_xtr_type type;
+ } xtr_type_map[] = {
+ { "vlan", IAVF_PROTO_XTR_VLAN },
+ { "ipv4", IAVF_PROTO_XTR_IPV4 },
+ { "ipv6", IAVF_PROTO_XTR_IPV6 },
+ { "ipv6_flow", IAVF_PROTO_XTR_IPV6_FLOW },
+ { "tcp", IAVF_PROTO_XTR_TCP },
+ { "ip_offset", IAVF_PROTO_XTR_IP_OFFSET },
+ };
+ uint32_t i;
+
+ for (i = 0; i < RTE_DIM(xtr_type_map); i++) {
+ if (strcmp(flex_name, xtr_type_map[i].name) == 0)
+ return xtr_type_map[i].type;
+ }
+
+ PMD_DRV_LOG(ERR, "wrong proto_xtr type, "
+ "it should be: vlan|ipv4|ipv6|ipv6_flow|tcp|ip_offset");
+
+ return -1;
+}
+
+/**
+ * Parse elem, the elem could be single number/range or '(' ')' group
+ * 1) A single number elem, it's just a simple digit. e.g. 9
+ * 2) A single range elem, two digits with a '-' between. e.g. 2-6
+ * 3) A group elem, combines multiple 1) or 2) with '( )'. e.g (0,2-4,6)
+ * Within group elem, '-' used for a range separator;
+ * ',' used for a single number.
+ */
+static int
+iavf_parse_queue_set(const char *input, int xtr_type,
+ struct iavf_devargs *devargs)
+{
+ const char *str = input;
+ char *end = NULL;
+ uint32_t min, max;
+ uint32_t idx;
+
+ while (isblank(*str))
+ str++;
+
+ if (!isdigit(*str) && *str != '(')
+ return -1;
+
+ /* process single number or single range of number */
+ if (*str != '(') {
+ errno = 0;
+ idx = strtoul(str, &end, 10);
+ if (errno || !end || idx >= IAVF_MAX_QUEUE_NUM)
+ return -1;
+
+ while (isblank(*end))
+ end++;
+
+ min = idx;
+ max = idx;
+
+ /* process single <number>-<number> */
+ if (*end == '-') {
+ end++;
+ while (isblank(*end))
+ end++;
+ if (!isdigit(*end))
+ return -1;
+
+ errno = 0;
+ idx = strtoul(end, &end, 10);
+ if (errno || !end || idx >= IAVF_MAX_QUEUE_NUM)
+ return -1;
+
+ max = idx;
+ while (isblank(*end))
+ end++;
+ }
+
+ if (*end != ':')
+ return -1;
+
+ for (idx = RTE_MIN(min, max);
+ idx <= RTE_MAX(min, max); idx++)
+ devargs->proto_xtr[idx] = xtr_type;
+
+ return 0;
+ }
+
+ /* process set within bracket */
+ str++;
+ while (isblank(*str))
+ str++;
+ if (*str == '\0')
+ return -1;
+
+ min = IAVF_MAX_QUEUE_NUM;
+ do {
+ /* go ahead to the first digit */
+ while (isblank(*str))
+ str++;
+ if (!isdigit(*str))
+ return -1;
+
+ /* get the digit value */
+ errno = 0;
+ idx = strtoul(str, &end, 10);
+ if (errno || !end || idx >= IAVF_MAX_QUEUE_NUM)
+ return -1;
+
+ /* go ahead to separator '-',',' and ')' */
+ while (isblank(*end))
+ end++;
+ if (*end == '-') {
+ if (min == IAVF_MAX_QUEUE_NUM)
+ min = idx;
+ else /* avoid continuous '-' */
+ return -1;
+ } else if (*end == ',' || *end == ')') {
+ max = idx;
+ if (min == IAVF_MAX_QUEUE_NUM)
+ min = idx;
+
+ for (idx = RTE_MIN(min, max);
+ idx <= RTE_MAX(min, max); idx++)
+ devargs->proto_xtr[idx] = xtr_type;
+
+ min = IAVF_MAX_QUEUE_NUM;
+ } else {
+ return -1;
+ }
+
+ str = end + 1;
+ } while (*end != ')' && *end != '\0');
+
+ return 0;
+}
+
+static int
+iavf_parse_queue_proto_xtr(const char *queues, struct iavf_devargs *devargs)
+{
+ const char *queue_start;
+ uint32_t idx;
+ int xtr_type;
+ char flex_name[32];
+
+ while (isblank(*queues))
+ queues++;
+
+ if (*queues != '[') {
+ xtr_type = iavf_lookup_proto_xtr_type(queues);
+ if (xtr_type < 0)
+ return -1;
+
+ devargs->proto_xtr_dflt = xtr_type;
+
+ return 0;
+ }
+
+ queues++;
+ do {
+ while (isblank(*queues))
+ queues++;
+ if (*queues == '\0')
+ return -1;
+
+ queue_start = queues;
+
+ /* go across a complete bracket */
+ if (*queue_start == '(') {
+ queues += strcspn(queues, ")");
+ if (*queues != ')')
+ return -1;
+ }
+
+ /* scan the separator ':' */
+ queues += strcspn(queues, ":");
+ if (*queues++ != ':')
+ return -1;
+ while (isblank(*queues))
+ queues++;
+
+ for (idx = 0; ; idx++) {
+ if (isblank(queues[idx]) ||
+ queues[idx] == ',' ||
+ queues[idx] == ']' ||
+ queues[idx] == '\0')
+ break;
+
+ if (idx > sizeof(flex_name) - 2)
+ return -1;
+
+ flex_name[idx] = queues[idx];
+ }
+ flex_name[idx] = '\0';
+ xtr_type = iavf_lookup_proto_xtr_type(flex_name);
+ if (xtr_type < 0)
+ return -1;
+
+ queues += idx;
+
+ while (isblank(*queues) || *queues == ',' || *queues == ']')
+ queues++;
+
+ if (iavf_parse_queue_set(queue_start, xtr_type, devargs) < 0)
+ return -1;
+ } while (*queues != '\0');
+
+ return 0;
+}
+
+static int
+iavf_handle_proto_xtr_arg(__rte_unused const char *key, const char *value,
+ void *extra_args)
+{
+ struct iavf_devargs *devargs = extra_args;
+
+ if (!value || !extra_args)
+ return -EINVAL;
+
+ if (iavf_parse_queue_proto_xtr(value, devargs) < 0) {
+ PMD_DRV_LOG(ERR, "invalid proto_xtr parameter: '%s'",
+ value);
+ return -1;
+ }
+
+ return 0;
+}
+
+static int iavf_parse_devargs(struct rte_eth_dev *dev)
+{
+ struct iavf_adapter *ad =
+ IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+ struct rte_devargs *devargs = dev->device->devargs;
+ struct rte_kvargs *kvlist;
+ int ret;
+
+ if (!devargs)
+ return 0;
+
+ kvlist = rte_kvargs_parse(devargs->args, iavf_valid_args);
+ if (!kvlist) {
+ PMD_INIT_LOG(ERR, "invalid kvargs key\n");
+ return -EINVAL;
+ }
+
+ ad->devargs.proto_xtr_dflt = IAVF_PROTO_XTR_NONE;
+ memset(ad->devargs.proto_xtr, IAVF_PROTO_XTR_NONE,
+ sizeof(ad->devargs.proto_xtr));
+
+ ret = rte_kvargs_process(kvlist, IAVF_PROTO_XTR_ARG,
+ &iavf_handle_proto_xtr_arg, &ad->devargs);
+ if (ret)
+ goto bail;
+
+bail:
+ rte_kvargs_free(kvlist);
+ return ret;
+}
+
+static void
+iavf_init_proto_xtr(struct rte_eth_dev *dev)
+{
+ struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+ struct iavf_adapter *ad =
+ IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+ const struct iavf_proto_xtr_ol *xtr_ol;
+ bool proto_xtr_enable = false;
+ int offset;
+ uint16_t i;
+
+ vf->proto_xtr = rte_zmalloc("vf proto xtr",
+ vf->vsi_res->num_queue_pairs, 0);
+ if (unlikely(!(vf->proto_xtr))) {
+ PMD_DRV_LOG(ERR, "failed to allocate the proto_xtr table");
+ return;
+ }
+
+ for (i = 0; i < vf->vsi_res->num_queue_pairs; i++) {
+ vf->proto_xtr[i] = ad->devargs.proto_xtr[i] !=
+ IAVF_PROTO_XTR_NONE ?
+ ad->devargs.proto_xtr[i] :
+ ad->devargs.proto_xtr_dflt;
+
+ if (vf->proto_xtr[i] != IAVF_PROTO_XTR_NONE) {
+ uint8_t type = vf->proto_xtr[i];
+
+ iavf_proto_xtr_params[type].required = true;
+ proto_xtr_enable = true;
+ }
+ }
+
+ if (likely(!proto_xtr_enable))
+ return;
+
+ offset = rte_mbuf_dynfield_register(&iavf_proto_xtr_metadata_param);
+ if (unlikely(offset == -1)) {
+ PMD_DRV_LOG(ERR,
+ "failed to register mbuf dynfield for protocol metadata, error %d",
+ -rte_errno);
+ return;
+ }
+
+ PMD_DRV_LOG(DEBUG,
+ "proto_xtr metadata offset in mbuf is : %d",
+ offset);
+ rte_pmd_ifd_dynfield_proto_xtr_metadata_offs = offset;
+
+ for (i = 0; i < RTE_DIM(iavf_proto_xtr_params); i++) {
+ xtr_ol = &iavf_proto_xtr_params[i];
+
+ uint8_t rxdid = iavf_proto_xtr_type_to_rxdid((uint8_t)i);
+
+ if (!xtr_ol->required)
+ continue;
+
+ if (!(vf->supported_rxdid & BIT(rxdid))) {
+ PMD_DRV_LOG(ERR,
+ "rxdid[%u] is not supported in hardware",
+ rxdid);
+ rte_pmd_ifd_dynfield_proto_xtr_metadata_offs = -1;
+ break;
+ }
+
+ offset = rte_mbuf_dynflag_register(&xtr_ol->param);
+ if (unlikely(offset == -1)) {
+ PMD_DRV_LOG(ERR,
+ "failed to register proto_xtr offload '%s', error %d",
+ xtr_ol->param.name, -rte_errno);
+
+ rte_pmd_ifd_dynfield_proto_xtr_metadata_offs = -1;
+ break;
+ }
+
+ PMD_DRV_LOG(DEBUG,
+ "proto_xtr offload '%s' offset in mbuf is : %d",
+ xtr_ol->param.name, offset);
+ *xtr_ol->ol_flag = 1ULL << offset;
+ }
+}
+
static int
iavf_init_vf(struct rte_eth_dev *dev)
{
@@ -1402,6 +1788,12 @@ iavf_init_vf(struct rte_eth_dev *dev)
struct iavf_hw *hw = IAVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+ err = iavf_parse_devargs(dev);
+ if (err) {
+ PMD_INIT_LOG(ERR, "Failed to parse devargs");
+ goto err;
+ }
+
err = iavf_set_mac_type(hw);
if (err) {
PMD_INIT_LOG(ERR, "set_mac_type failed: %d", err);
@@ -1465,6 +1857,8 @@ iavf_init_vf(struct rte_eth_dev *dev)
}
}
+ iavf_init_proto_xtr(dev);
+
return 0;
err_rss:
rte_free(vf->rss_key);
diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
index 6635f7fd91..d30aaf8920 100644
--- a/drivers/net/iavf/iavf_rxtx.c
+++ b/drivers/net/iavf/iavf_rxtx.c
@@ -27,6 +27,35 @@
#include "iavf.h"
#include "iavf_rxtx.h"
+#include "rte_pmd_iavf.h"
+
+/* Offset of mbuf dynamic field for protocol extraction's metadata */
+int rte_pmd_ifd_dynfield_proto_xtr_metadata_offs = -1;
+
+/* Mask of mbuf dynamic flags for protocol extraction's type */
+uint64_t rte_pmd_ifd_dynflag_proto_xtr_vlan_mask;
+uint64_t rte_pmd_ifd_dynflag_proto_xtr_ipv4_mask;
+uint64_t rte_pmd_ifd_dynflag_proto_xtr_ipv6_mask;
+uint64_t rte_pmd_ifd_dynflag_proto_xtr_ipv6_flow_mask;
+uint64_t rte_pmd_ifd_dynflag_proto_xtr_tcp_mask;
+uint64_t rte_pmd_ifd_dynflag_proto_xtr_ip_offset_mask;
+
+uint8_t
+iavf_proto_xtr_type_to_rxdid(uint8_t flex_type)
+{
+ static uint8_t rxdid_map[] = {
+ [IAVF_PROTO_XTR_NONE] = IAVF_RXDID_COMMS_OVS_1,
+ [IAVF_PROTO_XTR_VLAN] = IAVF_RXDID_COMMS_AUX_VLAN,
+ [IAVF_PROTO_XTR_IPV4] = IAVF_RXDID_COMMS_AUX_IPV4,
+ [IAVF_PROTO_XTR_IPV6] = IAVF_RXDID_COMMS_AUX_IPV6,
+ [IAVF_PROTO_XTR_IPV6_FLOW] = IAVF_RXDID_COMMS_AUX_IPV6_FLOW,
+ [IAVF_PROTO_XTR_TCP] = IAVF_RXDID_COMMS_AUX_TCP,
+ [IAVF_PROTO_XTR_IP_OFFSET] = IAVF_RXDID_COMMS_AUX_IP_OFFSET,
+ };
+
+ return flex_type < RTE_DIM(rxdid_map) ?
+ rxdid_map[flex_type] : IAVF_RXDID_COMMS_OVS_1;
+}
static inline int
check_rx_thresh(uint16_t nb_desc, uint16_t thresh)
@@ -295,6 +324,160 @@ static const struct iavf_txq_ops def_txq_ops = {
.release_mbufs = release_txq_mbufs,
};
+static inline void
+iavf_rxd_to_pkt_fields_by_comms_ovs(__rte_unused struct iavf_rx_queue *rxq,
+ struct rte_mbuf *mb,
+ volatile union iavf_rx_flex_desc *rxdp)
+{
+ volatile struct iavf_32b_rx_flex_desc_comms_ovs *desc =
+ (volatile struct iavf_32b_rx_flex_desc_comms_ovs *)rxdp;
+#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
+ uint16_t stat_err;
+#endif
+
+ if (desc->flow_id != 0xFFFFFFFF) {
+ mb->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+ mb->hash.fdir.hi = rte_le_to_cpu_32(desc->flow_id);
+ }
+
+#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
+ stat_err = rte_le_to_cpu_16(desc->status_error0);
+ if (likely(stat_err & (1 << IAVF_RX_FLEX_DESC_STATUS0_RSS_VALID_S))) {
+ mb->ol_flags |= PKT_RX_RSS_HASH;
+ mb->hash.rss = rte_le_to_cpu_32(desc->rss_hash);
+ }
+#endif
+}
+
+static inline void
+iavf_rxd_to_pkt_fields_by_comms_aux_v1(struct iavf_rx_queue *rxq,
+ struct rte_mbuf *mb,
+ volatile union iavf_rx_flex_desc *rxdp)
+{
+ volatile struct iavf_32b_rx_flex_desc_comms *desc =
+ (volatile struct iavf_32b_rx_flex_desc_comms *)rxdp;
+ uint16_t stat_err;
+
+ stat_err = rte_le_to_cpu_16(desc->status_error0);
+ if (likely(stat_err & (1 << IAVF_RX_FLEX_DESC_STATUS0_RSS_VALID_S))) {
+ mb->ol_flags |= PKT_RX_RSS_HASH;
+ mb->hash.rss = rte_le_to_cpu_32(desc->rss_hash);
+ }
+
+#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
+ if (desc->flow_id != 0xFFFFFFFF) {
+ mb->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+ mb->hash.fdir.hi = rte_le_to_cpu_32(desc->flow_id);
+ }
+
+ if (rxq->xtr_ol_flag) {
+ uint32_t metadata = 0;
+
+ stat_err = rte_le_to_cpu_16(desc->status_error1);
+
+ if (stat_err & (1 << IAVF_RX_FLEX_DESC_STATUS1_XTRMD4_VALID_S))
+ metadata = rte_le_to_cpu_16(desc->flex_ts.flex.aux0);
+
+ if (stat_err & (1 << IAVF_RX_FLEX_DESC_STATUS1_XTRMD5_VALID_S))
+ metadata |=
+ rte_le_to_cpu_16(desc->flex_ts.flex.aux1) << 16;
+
+ if (metadata) {
+ mb->ol_flags |= rxq->xtr_ol_flag;
+
+ *RTE_PMD_IFD_DYNF_PROTO_XTR_METADATA(mb) = metadata;
+ }
+ }
+#endif
+}
+
+static inline void
+iavf_rxd_to_pkt_fields_by_comms_aux_v2(struct iavf_rx_queue *rxq,
+ struct rte_mbuf *mb,
+ volatile union iavf_rx_flex_desc *rxdp)
+{
+ volatile struct iavf_32b_rx_flex_desc_comms *desc =
+ (volatile struct iavf_32b_rx_flex_desc_comms *)rxdp;
+ uint16_t stat_err;
+
+ stat_err = rte_le_to_cpu_16(desc->status_error0);
+ if (likely(stat_err & (1 << IAVF_RX_FLEX_DESC_STATUS0_RSS_VALID_S))) {
+ mb->ol_flags |= PKT_RX_RSS_HASH;
+ mb->hash.rss = rte_le_to_cpu_32(desc->rss_hash);
+ }
+
+#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
+ if (desc->flow_id != 0xFFFFFFFF) {
+ mb->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+ mb->hash.fdir.hi = rte_le_to_cpu_32(desc->flow_id);
+ }
+
+ if (rxq->xtr_ol_flag) {
+ uint32_t metadata = 0;
+
+ if (desc->flex_ts.flex.aux0 != 0xFFFF)
+ metadata = rte_le_to_cpu_16(desc->flex_ts.flex.aux0);
+ else if (desc->flex_ts.flex.aux1 != 0xFFFF)
+ metadata = rte_le_to_cpu_16(desc->flex_ts.flex.aux1);
+
+ if (metadata) {
+ mb->ol_flags |= rxq->xtr_ol_flag;
+
+ *RTE_PMD_IFD_DYNF_PROTO_XTR_METADATA(mb) = metadata;
+ }
+ }
+#endif
+}
+
+static void
+iavf_select_rxd_to_pkt_fields_handler(struct iavf_rx_queue *rxq, uint32_t rxdid)
+{
+ switch (rxdid) {
+ case IAVF_RXDID_COMMS_AUX_VLAN:
+ rxq->xtr_ol_flag = rte_pmd_ifd_dynflag_proto_xtr_vlan_mask;
+ rxq->rxd_to_pkt_fields =
+ iavf_rxd_to_pkt_fields_by_comms_aux_v1;
+ break;
+ case IAVF_RXDID_COMMS_AUX_IPV4:
+ rxq->xtr_ol_flag = rte_pmd_ifd_dynflag_proto_xtr_ipv4_mask;
+ rxq->rxd_to_pkt_fields =
+ iavf_rxd_to_pkt_fields_by_comms_aux_v1;
+ break;
+ case IAVF_RXDID_COMMS_AUX_IPV6:
+ rxq->xtr_ol_flag = rte_pmd_ifd_dynflag_proto_xtr_ipv6_mask;
+ rxq->rxd_to_pkt_fields =
+ iavf_rxd_to_pkt_fields_by_comms_aux_v1;
+ break;
+ case IAVF_RXDID_COMMS_AUX_IPV6_FLOW:
+ rxq->xtr_ol_flag =
+ rte_pmd_ifd_dynflag_proto_xtr_ipv6_flow_mask;
+ rxq->rxd_to_pkt_fields =
+ iavf_rxd_to_pkt_fields_by_comms_aux_v1;
+ break;
+ case IAVF_RXDID_COMMS_AUX_TCP:
+ rxq->xtr_ol_flag = rte_pmd_ifd_dynflag_proto_xtr_tcp_mask;
+ rxq->rxd_to_pkt_fields =
+ iavf_rxd_to_pkt_fields_by_comms_aux_v1;
+ break;
+ case IAVF_RXDID_COMMS_AUX_IP_OFFSET:
+ rxq->xtr_ol_flag =
+ rte_pmd_ifd_dynflag_proto_xtr_ip_offset_mask;
+ rxq->rxd_to_pkt_fields =
+ iavf_rxd_to_pkt_fields_by_comms_aux_v2;
+ break;
+ case IAVF_RXDID_COMMS_OVS_1:
+ rxq->rxd_to_pkt_fields = iavf_rxd_to_pkt_fields_by_comms_ovs;
+ break;
+ default:
+ /* update this according to the RXDID for FLEX_DESC_NONE */
+ rxq->rxd_to_pkt_fields = iavf_rxd_to_pkt_fields_by_comms_ovs;
+ break;
+ }
+
+ if (!rte_pmd_ifd_dynf_proto_xtr_metadata_avail())
+ rxq->xtr_ol_flag = 0;
+}
+
int
iavf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
uint16_t nb_desc, unsigned int socket_id,
@@ -310,6 +493,7 @@ iavf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
struct iavf_rx_queue *rxq;
const struct rte_memzone *mz;
uint32_t ring_size;
+ uint8_t proto_xtr;
uint16_t len;
uint16_t rx_free_thresh;
@@ -347,14 +531,18 @@ iavf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
return -ENOMEM;
}
- if (vf->vf_res->vf_cap_flags &
- VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC &&
- vf->supported_rxdid & BIT(IAVF_RXDID_COMMS_OVS_1)) {
- rxq->rxdid = IAVF_RXDID_COMMS_OVS_1;
+ if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC) {
+ proto_xtr = vf->proto_xtr ? vf->proto_xtr[queue_idx] :
+ IAVF_PROTO_XTR_NONE;
+ rxq->rxdid = iavf_proto_xtr_type_to_rxdid(proto_xtr);
+ rxq->proto_xtr = proto_xtr;
} else {
rxq->rxdid = IAVF_RXDID_LEGACY_1;
+ rxq->proto_xtr = IAVF_PROTO_XTR_NONE;
}
+ iavf_select_rxd_to_pkt_fields_handler(rxq, rxq->rxdid);
+
rxq->mp = mp;
rxq->nb_rx_desc = nb_desc;
rxq->rx_free_thresh = rx_free_thresh;
@@ -735,6 +923,14 @@ iavf_stop_queues(struct rte_eth_dev *dev)
}
}
+#define IAVF_RX_FLEX_ERR0_BITS \
+ ((1 << IAVF_RX_FLEX_DESC_STATUS0_HBO_S) | \
+ (1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_IPE_S) | \
+ (1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_L4E_S) | \
+ (1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_EIPE_S) | \
+ (1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_EUDPE_S) | \
+ (1 << IAVF_RX_FLEX_DESC_STATUS0_RXE_S))
+
static inline void
iavf_rxd_to_vlan_tci(struct rte_mbuf *mb, volatile union iavf_rx_desc *rxdp)
{
@@ -760,6 +956,21 @@ iavf_flex_rxd_to_vlan_tci(struct rte_mbuf *mb,
} else {
mb->vlan_tci = 0;
}
+
+#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
+ if (rte_le_to_cpu_16(rxdp->wb.status_error1) &
+ (1 << IAVF_RX_FLEX_DESC_STATUS1_L2TAG2P_S)) {
+ mb->ol_flags |= PKT_RX_QINQ_STRIPPED | PKT_RX_QINQ |
+ PKT_RX_VLAN_STRIPPED | PKT_RX_VLAN;
+ mb->vlan_tci_outer = mb->vlan_tci;
+ mb->vlan_tci = rte_le_to_cpu_16(rxdp->wb.l2tag2_2nd);
+ PMD_RX_LOG(DEBUG, "Descriptor l2tag2_1: %u, l2tag2_2: %u",
+ rte_le_to_cpu_16(rxdp->wb.l2tag2_1st),
+ rte_le_to_cpu_16(rxdp->wb.l2tag2_2nd));
+ } else {
+ mb->vlan_tci_outer = 0;
+ }
+#endif
}
/* Translate the rx descriptor status and error fields to pkt flags */
@@ -824,30 +1035,6 @@ iavf_rxd_build_fdir(volatile union iavf_rx_desc *rxdp, struct rte_mbuf *mb)
return flags;
}
-
-/* Translate the rx flex descriptor status to pkt flags */
-static inline void
-iavf_rxd_to_pkt_fields(struct rte_mbuf *mb,
- volatile union iavf_rx_flex_desc *rxdp)
-{
- volatile struct iavf_32b_rx_flex_desc_comms_ovs *desc =
- (volatile struct iavf_32b_rx_flex_desc_comms_ovs *)rxdp;
-#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
- uint16_t stat_err;
-
- stat_err = rte_le_to_cpu_16(desc->status_error0);
- if (likely(stat_err & (1 << IAVF_RX_FLEX_DESC_STATUS0_RSS_VALID_S))) {
- mb->ol_flags |= PKT_RX_RSS_HASH;
- mb->hash.rss = rte_le_to_cpu_32(desc->rss_hash);
- }
-#endif
-
- if (desc->flow_id != 0xFFFFFFFF) {
- mb->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
- mb->hash.fdir.hi = rte_le_to_cpu_32(desc->flow_id);
- }
-}
-
#define IAVF_RX_FLEX_ERR0_BITS \
((1 << IAVF_RX_FLEX_DESC_STATUS0_HBO_S) | \
(1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_IPE_S) | \
@@ -1102,7 +1289,7 @@ iavf_recv_pkts_flex_rxd(void *rx_queue,
rxm->packet_type = ptype_tbl[IAVF_RX_FLEX_DESC_PTYPE_M &
rte_le_to_cpu_16(rxd.wb.ptype_flex_flags0)];
iavf_flex_rxd_to_vlan_tci(rxm, &rxd);
- iavf_rxd_to_pkt_fields(rxm, &rxd);
+ rxq->rxd_to_pkt_fields(rxq, rxm, &rxd);
pkt_flags = iavf_flex_rxd_error_to_pkt_flags(rx_stat_err0);
rxm->ol_flags |= pkt_flags;
@@ -1243,7 +1430,7 @@ iavf_recv_scattered_pkts_flex_rxd(void *rx_queue, struct rte_mbuf **rx_pkts,
first_seg->packet_type = ptype_tbl[IAVF_RX_FLEX_DESC_PTYPE_M &
rte_le_to_cpu_16(rxd.wb.ptype_flex_flags0)];
iavf_flex_rxd_to_vlan_tci(first_seg, &rxd);
- iavf_rxd_to_pkt_fields(first_seg, &rxd);
+ rxq->rxd_to_pkt_fields(rxq, first_seg, &rxd);
pkt_flags = iavf_flex_rxd_error_to_pkt_flags(rx_stat_err0);
first_seg->ol_flags |= pkt_flags;
@@ -1480,7 +1667,7 @@ iavf_rx_scan_hw_ring_flex_rxd(struct iavf_rx_queue *rxq)
mb->packet_type = ptype_tbl[IAVF_RX_FLEX_DESC_PTYPE_M &
rte_le_to_cpu_16(rxdp[j].wb.ptype_flex_flags0)];
iavf_flex_rxd_to_vlan_tci(mb, &rxdp[j]);
- iavf_rxd_to_pkt_fields(mb, &rxdp[j]);
+ rxq->rxd_to_pkt_fields(rxq, mb, &rxdp[j]);
stat_err0 = rte_le_to_cpu_16(rxdp[j].wb.status_error0);
pkt_flags = iavf_flex_rxd_error_to_pkt_flags(stat_err0);
@@ -1672,7 +1859,7 @@ rx_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
if (rxq->rx_nb_avail)
return iavf_rx_fill_from_stage(rxq, rx_pkts, nb_pkts);
- if (rxq->rxdid == IAVF_RXDID_COMMS_OVS_1)
+ if (rxq->rxdid >= IAVF_RXDID_FLEX_NIC && rxq->rxdid <= IAVF_RXDID_LAST)
nb_rx = (uint16_t)iavf_rx_scan_hw_ring_flex_rxd(rxq);
else
nb_rx = (uint16_t)iavf_rx_scan_hw_ring(rxq);
@@ -2119,6 +2306,7 @@ iavf_set_rx_function(struct rte_eth_dev *dev)
struct iavf_adapter *adapter =
IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+
#ifdef RTE_ARCH_X86
struct iavf_rx_queue *rxq;
int i;
diff --git a/drivers/net/iavf/iavf_rxtx.h b/drivers/net/iavf/iavf_rxtx.h
index 3d02c6589d..02945b8768 100644
--- a/drivers/net/iavf/iavf_rxtx.h
+++ b/drivers/net/iavf/iavf_rxtx.h
@@ -57,6 +57,78 @@
#define IAVF_TX_OFFLOAD_NOTSUP_MASK \
(PKT_TX_OFFLOAD_MASK ^ IAVF_TX_OFFLOAD_MASK)
+/**
+ * Rx Flex Descriptors
+ * These descriptors are used instead of the legacy version descriptors
+ */
+union iavf_16b_rx_flex_desc {
+ struct {
+ __le64 pkt_addr; /* Packet buffer address */
+ __le64 hdr_addr; /* Header buffer address */
+ /* bit 0 of hdr_addr is DD bit */
+ } read;
+ struct {
+ /* Qword 0 */
+ u8 rxdid; /* descriptor builder profile ID */
+ u8 mir_id_umb_cast; /* mirror=[5:0], umb=[7:6] */
+ __le16 ptype_flex_flags0; /* ptype=[9:0], ff0=[15:10] */
+ __le16 pkt_len; /* [15:14] are reserved */
+ __le16 hdr_len_sph_flex_flags1; /* header=[10:0] */
+ /* sph=[11:11] */
+ /* ff1/ext=[15:12] */
+
+ /* Qword 1 */
+ __le16 status_error0;
+ __le16 l2tag1;
+ __le16 flex_meta0;
+ __le16 flex_meta1;
+ } wb; /* writeback */
+};
+
+union iavf_32b_rx_flex_desc {
+ struct {
+ __le64 pkt_addr; /* Packet buffer address */
+ __le64 hdr_addr; /* Header buffer address */
+ /* bit 0 of hdr_addr is DD bit */
+ __le64 rsvd1;
+ __le64 rsvd2;
+ } read;
+ struct {
+ /* Qword 0 */
+ u8 rxdid; /* descriptor builder profile ID */
+ u8 mir_id_umb_cast; /* mirror=[5:0], umb=[7:6] */
+ __le16 ptype_flex_flags0; /* ptype=[9:0], ff0=[15:10] */
+ __le16 pkt_len; /* [15:14] are reserved */
+ __le16 hdr_len_sph_flex_flags1; /* header=[10:0] */
+ /* sph=[11:11] */
+ /* ff1/ext=[15:12] */
+
+ /* Qword 1 */
+ __le16 status_error0;
+ __le16 l2tag1;
+ __le16 flex_meta0;
+ __le16 flex_meta1;
+
+ /* Qword 2 */
+ __le16 status_error1;
+ u8 flex_flags2;
+ u8 time_stamp_low;
+ __le16 l2tag2_1st;
+ __le16 l2tag2_2nd;
+
+ /* Qword 3 */
+ __le16 flex_meta2;
+ __le16 flex_meta3;
+ union {
+ struct {
+ __le16 flex_meta4;
+ __le16 flex_meta5;
+ } flex;
+ __le32 ts_high;
+ } flex_ts;
+ } wb; /* writeback */
+};
+
/* HW desc structure, both 16-byte and 32-byte types are supported */
#ifdef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
#define iavf_rx_desc iavf_16byte_rx_desc
@@ -66,6 +138,10 @@
#define iavf_rx_flex_desc iavf_32b_rx_flex_desc
#endif
+typedef void (*iavf_rxd_to_pkt_fields_t)(struct iavf_rx_queue *rxq,
+ struct rte_mbuf *mb,
+ volatile union iavf_rx_flex_desc *rxdp);
+
struct iavf_rxq_ops {
void (*release_mbufs)(struct iavf_rx_queue *rxq);
};
@@ -114,6 +190,11 @@ struct iavf_rx_queue {
bool q_set; /* if rx queue has been configured */
bool rx_deferred_start; /* don't start this queue in dev start */
const struct iavf_rxq_ops *ops;
+ uint8_t proto_xtr; /* protocol extraction type */
+ uint64_t xtr_ol_flag;
+ /* flexible descriptor metadata extraction offload flag */
+ iavf_rxd_to_pkt_fields_t rxd_to_pkt_fields;
+ /* handle flexible descriptor by RXDID */
};
struct iavf_tx_entry {
@@ -161,77 +242,6 @@ union iavf_tx_offload {
};
};
-/* Rx Flex Descriptors
- * These descriptors are used instead of the legacy version descriptors
- */
-union iavf_16b_rx_flex_desc {
- struct {
- __le64 pkt_addr; /* Packet buffer address */
- __le64 hdr_addr; /* Header buffer address */
- /* bit 0 of hdr_addr is DD bit */
- } read;
- struct {
- /* Qword 0 */
- u8 rxdid; /* descriptor builder profile ID */
- u8 mir_id_umb_cast; /* mirror=[5:0], umb=[7:6] */
- __le16 ptype_flex_flags0; /* ptype=[9:0], ff0=[15:10] */
- __le16 pkt_len; /* [15:14] are reserved */
- __le16 hdr_len_sph_flex_flags1; /* header=[10:0] */
- /* sph=[11:11] */
- /* ff1/ext=[15:12] */
-
- /* Qword 1 */
- __le16 status_error0;
- __le16 l2tag1;
- __le16 flex_meta0;
- __le16 flex_meta1;
- } wb; /* writeback */
-};
-
-union iavf_32b_rx_flex_desc {
- struct {
- __le64 pkt_addr; /* Packet buffer address */
- __le64 hdr_addr; /* Header buffer address */
- /* bit 0 of hdr_addr is DD bit */
- __le64 rsvd1;
- __le64 rsvd2;
- } read;
- struct {
- /* Qword 0 */
- u8 rxdid; /* descriptor builder profile ID */
- u8 mir_id_umb_cast; /* mirror=[5:0], umb=[7:6] */
- __le16 ptype_flex_flags0; /* ptype=[9:0], ff0=[15:10] */
- __le16 pkt_len; /* [15:14] are reserved */
- __le16 hdr_len_sph_flex_flags1; /* header=[10:0] */
- /* sph=[11:11] */
- /* ff1/ext=[15:12] */
-
- /* Qword 1 */
- __le16 status_error0;
- __le16 l2tag1;
- __le16 flex_meta0;
- __le16 flex_meta1;
-
- /* Qword 2 */
- __le16 status_error1;
- u8 flex_flags2;
- u8 time_stamp_low;
- __le16 l2tag2_1st;
- __le16 l2tag2_2nd;
-
- /* Qword 3 */
- __le16 flex_meta2;
- __le16 flex_meta3;
- union {
- struct {
- __le16 flex_meta4;
- __le16 flex_meta5;
- } flex;
- __le32 ts_high;
- } flex_ts;
- } wb; /* writeback */
-};
-
/* Rx Flex Descriptor
* RxDID Profile ID 16-21
* Flex-field 0: RSS hash lower 16-bits
@@ -331,6 +341,7 @@ enum iavf_rxdid {
IAVF_RXDID_COMMS_AUX_TCP = 21,
IAVF_RXDID_COMMS_OVS_1 = 22,
IAVF_RXDID_COMMS_OVS_2 = 23,
+ IAVF_RXDID_COMMS_AUX_IP_OFFSET = 25,
IAVF_RXDID_LAST = 63,
};
@@ -355,6 +366,20 @@ enum iavf_rx_flex_desc_status_error_0_bits {
IAVF_RX_FLEX_DESC_STATUS0_LAST /* this entry must be last!!! */
};
+enum iavf_rx_flex_desc_status_error_1_bits {
+ /* Note: These are predefined bit offsets */
+ IAVF_RX_FLEX_DESC_STATUS1_CPM_S = 0, /* 4 bits */
+ IAVF_RX_FLEX_DESC_STATUS1_NAT_S = 4,
+ IAVF_RX_FLEX_DESC_STATUS1_CRYPTO_S = 5,
+ /* [10:6] reserved */
+ IAVF_RX_FLEX_DESC_STATUS1_L2TAG2P_S = 11,
+ IAVF_RX_FLEX_DESC_STATUS1_XTRMD2_VALID_S = 12,
+ IAVF_RX_FLEX_DESC_STATUS1_XTRMD3_VALID_S = 13,
+ IAVF_RX_FLEX_DESC_STATUS1_XTRMD4_VALID_S = 14,
+ IAVF_RX_FLEX_DESC_STATUS1_XTRMD5_VALID_S = 15,
+ IAVF_RX_FLEX_DESC_STATUS1_LAST /* this entry must be last!!! */
+};
+
/* for iavf_32b_rx_flex_desc.ptype_flex_flags0 member */
#define IAVF_RX_FLEX_DESC_PTYPE_M (0x3FF) /* 10-bits */
@@ -439,6 +464,8 @@ int iavf_tx_vec_dev_check(struct rte_eth_dev *dev);
int iavf_rxq_vec_setup(struct iavf_rx_queue *rxq);
int iavf_txq_vec_setup(struct iavf_tx_queue *txq);
+uint8_t iavf_proto_xtr_type_to_rxdid(uint8_t xtr_type);
+
const uint32_t *iavf_get_default_ptype_table(void);
static inline
diff --git a/drivers/net/iavf/iavf_rxtx_vec_common.h b/drivers/net/iavf/iavf_rxtx_vec_common.h
index 25bb502de2..7ad1e0f68a 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_common.h
+++ b/drivers/net/iavf/iavf_rxtx_vec_common.h
@@ -224,6 +224,9 @@ iavf_rx_vec_queue_default(struct iavf_rx_queue *rxq)
if (rxq->nb_rx_desc % rxq->rx_free_thresh)
return -1;
+ if (rxq->proto_xtr != IAVF_PROTO_XTR_NONE)
+ return -1;
+
return 0;
}
diff --git a/drivers/net/iavf/iavf_vchnl.c b/drivers/net/iavf/iavf_vchnl.c
index 54d9917c0a..64d194670b 100644
--- a/drivers/net/iavf/iavf_vchnl.c
+++ b/drivers/net/iavf/iavf_vchnl.c
@@ -850,25 +850,27 @@ iavf_configure_queues(struct iavf_adapter *adapter,
#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
if (vf->vf_res->vf_cap_flags &
- VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC &&
- vf->supported_rxdid & BIT(IAVF_RXDID_COMMS_OVS_1)) {
- vc_qp->rxq.rxdid = IAVF_RXDID_COMMS_OVS_1;
- PMD_DRV_LOG(NOTICE, "request RXDID == %d in "
- "Queue[%d]", vc_qp->rxq.rxdid, i);
+ VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC &&
+ vf->supported_rxdid & BIT(rxq[i]->rxdid)) {
+ vc_qp->rxq.rxdid = rxq[i]->rxdid;
+ PMD_DRV_LOG(NOTICE, "request RXDID[%d] in Queue[%d]",
+ vc_qp->rxq.rxdid, i);
} else {
+ PMD_DRV_LOG(NOTICE, "RXDID[%d] is not supported, "
+ "request default RXDID[%d] in Queue[%d]",
+ rxq[i]->rxdid, IAVF_RXDID_LEGACY_1, i);
vc_qp->rxq.rxdid = IAVF_RXDID_LEGACY_1;
- PMD_DRV_LOG(NOTICE, "request RXDID == %d in "
- "Queue[%d]", vc_qp->rxq.rxdid, i);
}
#else
if (vf->vf_res->vf_cap_flags &
VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC &&
vf->supported_rxdid & BIT(IAVF_RXDID_LEGACY_0)) {
vc_qp->rxq.rxdid = IAVF_RXDID_LEGACY_0;
- PMD_DRV_LOG(NOTICE, "request RXDID == %d in "
- "Queue[%d]", vc_qp->rxq.rxdid, i);
+ PMD_DRV_LOG(NOTICE, "request RXDID[%d] in Queue[%d]",
+ vc_qp->rxq.rxdid, i);
} else {
- PMD_DRV_LOG(ERR, "RXDID == 0 is not supported");
+ PMD_DRV_LOG(ERR, "RXDID[%d] is not supported",
+ IAVF_RXDID_LEGACY_0);
return -1;
}
#endif
diff --git a/drivers/net/iavf/meson.build b/drivers/net/iavf/meson.build
index 33407c5032..c1c74571a1 100644
--- a/drivers/net/iavf/meson.build
+++ b/drivers/net/iavf/meson.build
@@ -35,3 +35,5 @@ if arch_subdir == 'x86'
objs += iavf_avx2_lib.extract_objects('iavf_rxtx_vec_avx2.c')
endif
endif
+
+install_headers('rte_pmd_iavf.h')
diff --git a/drivers/net/iavf/rte_pmd_iavf.h b/drivers/net/iavf/rte_pmd_iavf.h
new file mode 100644
index 0000000000..955084e197
--- /dev/null
+++ b/drivers/net/iavf/rte_pmd_iavf.h
@@ -0,0 +1,250 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#ifndef _RTE_PMD_IAVF_H_
+#define _RTE_PMD_IAVF_H_
+
+/**
+ * @file rte_pmd_iavf.h
+ *
+ * iavf PMD specific functions.
+ *
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
+ *
+ */
+
+#include <stdio.h>
+#include <rte_mbuf.h>
+#include <rte_mbuf_dyn.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * The metadata format extracted from the supported network flexible descriptors.
+ */
+union rte_pmd_ifd_proto_xtr_metadata {
+ uint32_t metadata;
+
+ struct {
+ uint16_t data0;
+ uint16_t data1;
+ } raw;
+
+ struct {
+ uint16_t stag_vid:12,
+ stag_dei:1,
+ stag_pcp:3;
+ uint16_t ctag_vid:12,
+ ctag_dei:1,
+ ctag_pcp:3;
+ } vlan;
+
+ struct {
+ uint16_t protocol:8,
+ ttl:8;
+ uint16_t tos:8,
+ ihl:4,
+ version:4;
+ } ipv4;
+
+ struct {
+ uint16_t hoplimit:8,
+ nexthdr:8;
+ uint16_t flowhi4:4,
+ tc:8,
+ version:4;
+ } ipv6;
+
+ struct {
+ uint16_t flowlo16;
+ uint16_t flowhi4:4,
+ tc:8,
+ version:4;
+ } ipv6_flow;
+
+ struct {
+ uint16_t fin:1,
+ syn:1,
+ rst:1,
+ psh:1,
+ ack:1,
+ urg:1,
+ ece:1,
+ cwr:1,
+ res1:4,
+ doff:4;
+ uint16_t rsvd;
+ } tcp;
+
+ uint32_t ip_ofs;
+};
+
+/* Offset of mbuf dynamic field for flexible descriptor's extraction data */
+extern int rte_pmd_ifd_dynfield_proto_xtr_metadata_offs;
+
+/* Mask of mbuf dynamic flags for flexible descriptor's extraction type */
+extern uint64_t rte_pmd_ifd_dynflag_proto_xtr_vlan_mask;
+extern uint64_t rte_pmd_ifd_dynflag_proto_xtr_ipv4_mask;
+extern uint64_t rte_pmd_ifd_dynflag_proto_xtr_ipv6_mask;
+extern uint64_t rte_pmd_ifd_dynflag_proto_xtr_ipv6_flow_mask;
+extern uint64_t rte_pmd_ifd_dynflag_proto_xtr_tcp_mask;
+extern uint64_t rte_pmd_ifd_dynflag_proto_xtr_ip_offset_mask;
+
+/**
+ * The mbuf dynamic field pointer for flexible descriptor's extraction metadata.
+ */
+#define RTE_PMD_IFD_DYNF_PROTO_XTR_METADATA(m) \
+ RTE_MBUF_DYNFIELD((m), \
+ rte_pmd_ifd_dynfield_proto_xtr_metadata_offs, \
+ uint32_t *)
+
+/**
+ * The mbuf dynamic flag for VLAN protocol extraction metadata; it is valid
+ * when the devargs 'proto_xtr' has 'vlan' specified.
+ */
+#define RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_VLAN \
+ (rte_pmd_ifd_dynflag_proto_xtr_vlan_mask)
+
+/**
+ * The mbuf dynamic flag for IPv4 protocol extraction metadata; it is valid
+ * when the devargs 'proto_xtr' has 'ipv4' specified.
+ */
+#define RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_IPV4 \
+ (rte_pmd_ifd_dynflag_proto_xtr_ipv4_mask)
+
+/**
+ * The mbuf dynamic flag for IPv6 protocol extraction metadata; it is valid
+ * when the devargs 'proto_xtr' has 'ipv6' specified.
+ */
+#define RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_IPV6 \
+ (rte_pmd_ifd_dynflag_proto_xtr_ipv6_mask)
+
+/**
+ * The mbuf dynamic flag for IPv6 with flow protocol extraction metadata;
+ * it is valid when the devargs 'proto_xtr' has 'ipv6_flow' specified.
+ */
+#define RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_IPV6_FLOW \
+ (rte_pmd_ifd_dynflag_proto_xtr_ipv6_flow_mask)
+
+/**
+ * The mbuf dynamic flag for TCP protocol extraction metadata; it is valid
+ * when the devargs 'proto_xtr' has 'tcp' specified.
+ */
+#define RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_TCP \
+ (rte_pmd_ifd_dynflag_proto_xtr_tcp_mask)
+
+/**
+ * The mbuf dynamic flag for IP_OFFSET extraction metadata; it is valid
+ * when the devargs 'proto_xtr' has 'ip_offset' specified.
+ */
+#define RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_IP_OFFSET \
+ (rte_pmd_ifd_dynflag_proto_xtr_ip_offset_mask)
+
+/**
+ * Check if mbuf dynamic field for flexible descriptor's extraction metadata
+ * is registered.
+ *
+ * @return
+ * True if registered, false otherwise.
+ */
+__rte_experimental
+static __rte_always_inline int
+rte_pmd_ifd_dynf_proto_xtr_metadata_avail(void)
+{
+ return rte_pmd_ifd_dynfield_proto_xtr_metadata_offs != -1;
+}
+
+/**
+ * Get the mbuf dynamic field for flexible descriptor's extraction metadata.
+ *
+ * @param m
+ * The pointer to the mbuf.
+ * @return
+ * The saved protocol extraction metadata.
+ */
+__rte_experimental
+static __rte_always_inline uint32_t
+rte_pmd_ifd_dynf_proto_xtr_metadata_get(struct rte_mbuf *m)
+{
+ return *RTE_PMD_IFD_DYNF_PROTO_XTR_METADATA(m);
+}
+
+/**
+ * Dump the mbuf dynamic field for flexible descriptor's extraction metadata.
+ *
+ * @param m
+ * The pointer to the mbuf.
+ */
+__rte_experimental
+static inline void
+rte_pmd_ifd_dump_proto_xtr_metadata(struct rte_mbuf *m)
+{
+ union rte_pmd_ifd_proto_xtr_metadata data;
+
+ if (!rte_pmd_ifd_dynf_proto_xtr_metadata_avail())
+ return;
+
+ data.metadata = rte_pmd_ifd_dynf_proto_xtr_metadata_get(m);
+
+ if (m->ol_flags & RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_VLAN)
+ printf(" - Flexible descriptor's Metadata: [0x%04x:0x%04x],"
+ "vlan,stag=%u:%u:%u,ctag=%u:%u:%u",
+ data.raw.data0, data.raw.data1,
+ data.vlan.stag_pcp,
+ data.vlan.stag_dei,
+ data.vlan.stag_vid,
+ data.vlan.ctag_pcp,
+ data.vlan.ctag_dei,
+ data.vlan.ctag_vid);
+ else if (m->ol_flags & RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_IPV4)
+ printf(" - Flexible descriptor's Metadata: [0x%04x:0x%04x],"
+ "ipv4,ver=%u,hdrlen=%u,tos=%u,ttl=%u,proto=%u",
+ data.raw.data0, data.raw.data1,
+ data.ipv4.version,
+ data.ipv4.ihl,
+ data.ipv4.tos,
+ data.ipv4.ttl,
+ data.ipv4.protocol);
+ else if (m->ol_flags & RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_IPV6)
+ printf(" - Flexible descriptor's Metadata: [0x%04x:0x%04x],"
+ "ipv6,ver=%u,tc=%u,flow_hi4=0x%x,nexthdr=%u,hoplimit=%u",
+ data.raw.data0, data.raw.data1,
+ data.ipv6.version,
+ data.ipv6.tc,
+ data.ipv6.flowhi4,
+ data.ipv6.nexthdr,
+ data.ipv6.hoplimit);
+ else if (m->ol_flags & RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_IPV6_FLOW)
+ printf(" - Flexible descriptor's Metadata: [0x%04x:0x%04x],"
+ "ipv6_flow,ver=%u,tc=%u,flow=0x%x%04x",
+ data.raw.data0, data.raw.data1,
+ data.ipv6_flow.version,
+ data.ipv6_flow.tc,
+ data.ipv6_flow.flowhi4,
+ data.ipv6_flow.flowlo16);
+ else if (m->ol_flags & RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_TCP)
+ printf(" - Flexible descriptor's Metadata: [0x%04x:0x%04x],"
+ "tcp,doff=%u,flags=%s%s%s%s%s%s%s%s",
+ data.raw.data0, data.raw.data1,
+ data.tcp.doff,
+ data.tcp.cwr ? "C" : "",
+ data.tcp.ece ? "E" : "",
+ data.tcp.urg ? "U" : "",
+ data.tcp.ack ? "A" : "",
+ data.tcp.psh ? "P" : "",
+ data.tcp.rst ? "R" : "",
+ data.tcp.syn ? "S" : "",
+ data.tcp.fin ? "F" : "");
+ else if (m->ol_flags & RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_IP_OFFSET)
+ printf(" - Flexible descriptor's Extraction: ip_offset=%u",
+ data.ip_ofs);
+}
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_PMD_IAVF_H_ */
--
2.20.1
* Re: [dpdk-dev] [PATCH v10] net/iavf: support flex desc metadata extraction
2020-10-27 5:04 ` [dpdk-dev] [PATCH v10] " Jeff Guo
@ 2020-10-27 5:21 ` Wang, Haiyue
2020-10-27 8:27 ` Guo, Jia
2020-10-27 11:55 ` Zhang, Qi Z
0 siblings, 2 replies; 40+ messages in thread
From: Wang, Haiyue @ 2020-10-27 5:21 UTC (permalink / raw)
To: Guo, Jia, Wu, Jingjing, Zhang, Qi Z, Xing, Beilei, Yigit, Ferruh
Cc: dev, Richardson, Bruce
> -----Original Message-----
> From: Guo, Jia <jia.guo@intel.com>
> Sent: Tuesday, October 27, 2020 13:05
> To: Wu, Jingjing <jingjing.wu@intel.com>; Zhang, Qi Z <qi.z.zhang@intel.com>; Xing, Beilei
> <beilei.xing@intel.com>; Yigit, Ferruh <ferruh.yigit@intel.com>
> Cc: dev@dpdk.org; Wang, Haiyue <haiyue.wang@intel.com>; Richardson, Bruce <bruce.richardson@intel.com>;
> Guo, Jia <jia.guo@intel.com>
> Subject: [PATCH v10] net/iavf: support flex desc metadata extraction
>
> Enable metadata extraction for flexible descriptors in AVF, which
> allows network functions to get metadata directly without additional
> parsing and thus reduces the CPU cost for VFs. The metadata extraction
> covers the VLAN/IPv4/IPv6/IPv6-FLOW/TCP/MPLS flexible descriptors, and
> the VF can negotiate the flexible descriptor capability with the PF
> and configure the corresponding offload on the receive queues.
>
> Signed-off-by: Jeff Guo <jia.guo@intel.com>
> ---
> v10:
> remove the Makefile configuration and rename the dynamic mbuf field
>
> v9:
> change the undef config
>
> v8:
> rebase patch for apply issue
>
> v7:
> clean up some unused code and add documentation
>
> v6:
> rebase patch
>
> v5:
> remove the OVS configuration since OVS is not a protocol extraction type
>
> v4:
> add flex desc type in rx queue for handling vector path
> handle ovs flex type
>
> v3:
> export these global symbols into .map
>
> v2:
> remove makefile change and modify the rxdid handling
> ---
> doc/guides/rel_notes/release_20_11.rst | 6 +
> drivers/net/iavf/iavf.h | 24 +-
> drivers/net/iavf/iavf_ethdev.c | 394 ++++++++++++++++++++++++
> drivers/net/iavf/iavf_rxtx.c | 252 +++++++++++++--
> drivers/net/iavf/iavf_rxtx.h | 169 +++++-----
> drivers/net/iavf/iavf_rxtx_vec_common.h | 3 +
> drivers/net/iavf/iavf_vchnl.c | 22 +-
> drivers/net/iavf/meson.build | 2 +
> drivers/net/iavf/rte_pmd_iavf.h | 250 +++++++++++++++
> 9 files changed, 1008 insertions(+), 114 deletions(-)
> create mode 100644 drivers/net/iavf/rte_pmd_iavf.h
> --- a/drivers/net/iavf/meson.build
> +++ b/drivers/net/iavf/meson.build
> @@ -35,3 +35,5 @@ if arch_subdir == 'x86'
> objs += iavf_avx2_lib.extract_objects('iavf_rxtx_vec_avx2.c')
> endif
> endif
> +
> +install_headers('rte_pmd_iavf.h')
One issue: headers = files('rte_pmd_iavf.h')
Please refer to:
commit 30105f664f8ebbecd878deff7f0733a3f92edd17
Author: David Marchand <david.marchand@redhat.com>
Date: Thu Oct 22 09:55:45 2020 +0200
drivers: add headers install helper
A lot of drivers export headers, reproduce the same facility than for
libraries.
Others, LGTM.
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
> --
> 2.20.1
* Re: [dpdk-dev] [PATCH v10] net/iavf: support flex desc metadata extraction
2020-10-27 5:21 ` Wang, Haiyue
@ 2020-10-27 8:27 ` Guo, Jia
2020-10-27 11:55 ` Zhang, Qi Z
1 sibling, 0 replies; 40+ messages in thread
From: Guo, Jia @ 2020-10-27 8:27 UTC (permalink / raw)
To: Wang, Haiyue, Wu, Jingjing, Zhang, Qi Z, Xing, Beilei, Yigit, Ferruh
Cc: dev, Richardson, Bruce
> -----Original Message-----
> From: Wang, Haiyue <haiyue.wang@intel.com>
> Sent: Tuesday, October 27, 2020 1:22 PM
> To: Guo, Jia <jia.guo@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>;
> Zhang, Qi Z <qi.z.zhang@intel.com>; Xing, Beilei <beilei.xing@intel.com>;
> Yigit, Ferruh <ferruh.yigit@intel.com>
> Cc: dev@dpdk.org; Richardson, Bruce <bruce.richardson@intel.com>
> Subject: RE: [PATCH v10] net/iavf: support flex desc metadata extraction
>
> > -----Original Message-----
> > From: Guo, Jia <jia.guo@intel.com>
> > Sent: Tuesday, October 27, 2020 13:05
> > To: Wu, Jingjing <jingjing.wu@intel.com>; Zhang, Qi Z
> > <qi.z.zhang@intel.com>; Xing, Beilei <beilei.xing@intel.com>; Yigit,
> > Ferruh <ferruh.yigit@intel.com>
> > Cc: dev@dpdk.org; Wang, Haiyue <haiyue.wang@intel.com>; Richardson,
> > Bruce <bruce.richardson@intel.com>; Guo, Jia <jia.guo@intel.com>
> > Subject: [PATCH v10] net/iavf: support flex desc metadata extraction
> >
> > Enable metadata extraction for flexible descriptors in AVF, allowing
> > network functions to get metadata directly without additional parsing,
> > which reduces the CPU cost for VFs. The enabled metadata extractions
> > cover the VLAN/IPv4/IPv6/IPv6-FLOW/TCP/MPLS flexible descriptors, and
> > the VF can negotiate the flexible descriptor capability with the PF and
> > configure the specific offload on receive queues accordingly.
> >
> > Signed-off-by: Jeff Guo <jia.guo@intel.com>
> > ---
> > v10:
> > delete the makefile configure and rename the dynamic mbuf name
> >
> > v9:
> > change the undef config
> >
> > v8:
> > rebase patch for apply issue
> >
> > v7:
> > clean some useless and add doc
> >
> > v6:
> > rebase patch
> >
> > v5:
> > remove ovs configure since ovs is not protocol extraction
> >
> > v4:
> > add flex desc type in rx queue for handling vector path handle ovs
> > flex type
> >
> > v3:
> > export these global symbols into .map
> >
> > v2:
> > remove makefile change and modify the rxdid handling
> > ---
> > doc/guides/rel_notes/release_20_11.rst | 6 +
> > drivers/net/iavf/iavf.h | 24 +-
> > drivers/net/iavf/iavf_ethdev.c | 394 ++++++++++++++++++++++++
> > drivers/net/iavf/iavf_rxtx.c | 252 +++++++++++++--
> > drivers/net/iavf/iavf_rxtx.h | 169 +++++-----
> > drivers/net/iavf/iavf_rxtx_vec_common.h | 3 +
> > drivers/net/iavf/iavf_vchnl.c | 22 +-
> > drivers/net/iavf/meson.build | 2 +
> > drivers/net/iavf/rte_pmd_iavf.h | 250 +++++++++++++++
> > 9 files changed, 1008 insertions(+), 114 deletions(-) create mode
> > 100644 drivers/net/iavf/rte_pmd_iavf.h
>
>
> > --- a/drivers/net/iavf/meson.build
> > +++ b/drivers/net/iavf/meson.build
> > @@ -35,3 +35,5 @@ if arch_subdir == 'x86'
> > objs += iavf_avx2_lib.extract_objects('iavf_rxtx_vec_avx2.c')
> > endif
> > endif
> > +
> > +install_headers('rte_pmd_iavf.h')
>
> One issue: headers = files('rte_pmd_iavf.h')
>
> Please refer to:
>
> commit 30105f664f8ebbecd878deff7f0733a3f92edd17
> Author: David Marchand <david.marchand@redhat.com>
> Date: Thu Oct 22 09:55:45 2020 +0200
>
> drivers: add headers install helper
>
> A lot of drivers export headers, reproduce the same facility than for
> libraries.
>
Oh, thanks for the notice, will update it later.
>
> Others, LGTM.
>
> Acked-by: Haiyue Wang <haiyue.wang@intel.com>
>
> > --
> > 2.20.1
>
* Re: [dpdk-dev] [PATCH v10] net/iavf: support flex desc metadata extraction
2020-10-27 5:21 ` Wang, Haiyue
2020-10-27 8:27 ` Guo, Jia
@ 2020-10-27 11:55 ` Zhang, Qi Z
1 sibling, 0 replies; 40+ messages in thread
From: Zhang, Qi Z @ 2020-10-27 11:55 UTC (permalink / raw)
To: Wang, Haiyue, Guo, Jia, Wu, Jingjing, Xing, Beilei, Yigit, Ferruh
Cc: dev, Richardson, Bruce
> -----Original Message-----
> From: Wang, Haiyue <haiyue.wang@intel.com>
> Sent: Tuesday, October 27, 2020 1:22 PM
> To: Guo, Jia <jia.guo@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>;
> Zhang, Qi Z <qi.z.zhang@intel.com>; Xing, Beilei <beilei.xing@intel.com>; Yigit,
> Ferruh <ferruh.yigit@intel.com>
> Cc: dev@dpdk.org; Richardson, Bruce <bruce.richardson@intel.com>
> Subject: RE: [PATCH v10] net/iavf: support flex desc metadata extraction
>
> > -----Original Message-----
> > From: Guo, Jia <jia.guo@intel.com>
> > Sent: Tuesday, October 27, 2020 13:05
> > To: Wu, Jingjing <jingjing.wu@intel.com>; Zhang, Qi Z
> > <qi.z.zhang@intel.com>; Xing, Beilei <beilei.xing@intel.com>; Yigit,
> > Ferruh <ferruh.yigit@intel.com>
> > Cc: dev@dpdk.org; Wang, Haiyue <haiyue.wang@intel.com>; Richardson,
> > Bruce <bruce.richardson@intel.com>; Guo, Jia <jia.guo@intel.com>
> > Subject: [PATCH v10] net/iavf: support flex desc metadata extraction
> >
> > Enable metadata extraction for flexible descriptors in AVF, allowing
> > network functions to get metadata directly without additional parsing,
> > which reduces the CPU cost for VFs. The enabled metadata extractions
> > cover the VLAN/IPv4/IPv6/IPv6-FLOW/TCP/MPLS flexible descriptors, and
> > the VF can negotiate the flexible descriptor capability with the PF and
> > configure the specific offload on receive queues accordingly.
> >
> > Signed-off-by: Jeff Guo <jia.guo@intel.com>
> > ---
> > v10:
> > delete the makefile configure and rename the dynamic mbuf name
> >
> > v9:
> > change the undef config
> >
> > v8:
> > rebase patch for apply issue
> >
> > v7:
> > clean some useless and add doc
> >
> > v6:
> > rebase patch
> >
> > v5:
> > remove ovs configure since ovs is not protocol extraction
> >
> > v4:
> > add flex desc type in rx queue for handling vector path handle ovs
> > flex type
> >
> > v3:
> > export these global symbols into .map
> >
> > v2:
> > remove makefile change and modify the rxdid handling
> > ---
> > doc/guides/rel_notes/release_20_11.rst | 6 +
> > drivers/net/iavf/iavf.h | 24 +-
> > drivers/net/iavf/iavf_ethdev.c | 394
> ++++++++++++++++++++++++
> > drivers/net/iavf/iavf_rxtx.c | 252 +++++++++++++--
> > drivers/net/iavf/iavf_rxtx.h | 169 +++++-----
> > drivers/net/iavf/iavf_rxtx_vec_common.h | 3 +
> > drivers/net/iavf/iavf_vchnl.c | 22 +-
> > drivers/net/iavf/meson.build | 2 +
> > drivers/net/iavf/rte_pmd_iavf.h | 250 +++++++++++++++
> > 9 files changed, 1008 insertions(+), 114 deletions(-) create mode
> > 100644 drivers/net/iavf/rte_pmd_iavf.h
>
>
> > --- a/drivers/net/iavf/meson.build
> > +++ b/drivers/net/iavf/meson.build
> > @@ -35,3 +35,5 @@ if arch_subdir == 'x86'
> > objs += iavf_avx2_lib.extract_objects('iavf_rxtx_vec_avx2.c')
> > endif
> > endif
> > +
> > +install_headers('rte_pmd_iavf.h')
>
> One issue: headers = files('rte_pmd_iavf.h')
Will fix when applying the patch.
>
> Please refer to:
>
> commit 30105f664f8ebbecd878deff7f0733a3f92edd17
> Author: David Marchand <david.marchand@redhat.com>
> Date: Thu Oct 22 09:55:45 2020 +0200
>
> drivers: add headers install helper
>
> A lot of drivers export headers, reproduce the same facility than for
> libraries.
>
>
> Others, LGTM.
>
> Acked-by: Haiyue Wang <haiyue.wang@intel.com>
Applied to dpdk-next-net-intel.
Thanks
Qi
>
* [dpdk-dev] [PATCH v11] net/iavf: support flex desc metadata extraction
2020-09-09 2:54 [dpdk-dev] [PATCH v1] net/iavf: support flex desc metadata extraction Jeff Guo
` (10 preceding siblings ...)
2020-10-27 5:04 ` [dpdk-dev] [PATCH v10] " Jeff Guo
@ 2020-10-30 2:54 ` Jeff Guo
2020-10-30 8:34 ` [dpdk-dev] [PATCH v12] " Jeff Guo
2020-10-30 8:40 ` Jeff Guo
13 siblings, 0 replies; 40+ messages in thread
From: Jeff Guo @ 2020-10-30 2:54 UTC (permalink / raw)
To: jingjing.wu, qi.z.zhang, beilei.xing, ferruh.yigit
Cc: dev, haiyue.wang, bruce.richardson, jia.guo
Enable metadata extraction for flexible descriptors in AVF, allowing
network functions to get metadata directly without additional parsing,
which reduces the CPU cost for VFs. The enabled metadata extractions
cover the VLAN/IPv4/IPv6/IPv6-FLOW/TCP/MPLS flexible descriptors, and
the VF can negotiate the flexible descriptor capability with the PF and
configure the specific offload on receive queues accordingly.
Signed-off-by: Jeff Guo <jia.guo@intel.com>
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
---
v11:
update doc in .map and .rst
v10:
delete the makefile configure and rename the dynamic mbuf name
v9:
change the undef config
v8:
rebase patch for apply issue
v7:
clean some useless and add doc
v6:
rebase patch
v5:
remove ovs configure since ovs is not protocol extraction
v4:
add flex desc type in rx queue for handling vector path
handle ovs flex type
v3:
export these global symbols into .map
v2:
remove makefile change and modify the rxdid handling
---
doc/guides/nics/intel_vf.rst | 122 ++++++++
doc/guides/rel_notes/release_20_11.rst | 6 +
drivers/net/iavf/iavf.h | 24 +-
drivers/net/iavf/iavf_ethdev.c | 394 ++++++++++++++++++++++++
drivers/net/iavf/iavf_rxtx.c | 252 +++++++++++++--
drivers/net/iavf/iavf_rxtx.h | 169 +++++-----
drivers/net/iavf/iavf_rxtx_vec_common.h | 3 +
drivers/net/iavf/iavf_vchnl.c | 22 +-
drivers/net/iavf/meson.build | 2 +
drivers/net/iavf/rte_pmd_iavf.h | 250 +++++++++++++++
drivers/net/iavf/version.map | 13 +
11 files changed, 1143 insertions(+), 114 deletions(-)
create mode 100644 drivers/net/iavf/rte_pmd_iavf.h
diff --git a/doc/guides/nics/intel_vf.rst b/doc/guides/nics/intel_vf.rst
index 723a9c0fa2..c7d238b8cb 100644
--- a/doc/guides/nics/intel_vf.rst
+++ b/doc/guides/nics/intel_vf.rst
@@ -606,3 +606,125 @@ which belongs to the destination VF on the VM.
.. figure:: img/inter_vm_comms.*
Inter-VM Communication
+
+
+Pre-Installation Configuration
+------------------------------
+
+Runtime Config Options
+~~~~~~~~~~~~~~~~~~~~~~
+
+- ``Protocol extraction for per queue``
+
+ Configure the RX queues to do protocol extraction into mbuf for protocol
+ handling acceleration, like checking the TCP SYN packets quickly.
+
+ The argument format is::
+
+ -w 18:00.0,proto_xtr=<queues:protocol>[<queues:protocol>...]
+ -w 18:00.0,proto_xtr=<protocol>
+
+ Queues are grouped between ``(`` and ``)``. Within a group, the ``-``
+ character is used as a range separator and ``,`` is used as a single number
+ separator. The grouping ``()`` can be omitted for a single-element group. If
+ no queues are specified, the PMD will use this protocol extraction type for
+ all queues.
+
+ The protocol is one of: ``vlan, ipv4, ipv6, ipv6_flow, tcp, ip_offset``.
+
+ .. code-block:: console
+
+ dpdk-testpmd -w 18:00.0,proto_xtr='[(1,2-3,8-9):tcp,10-13:vlan]'
+
+ This setting means queues 1, 2-3, 8-9 use TCP extraction, queues 10-13 use
+ VLAN extraction, and the other queues run with no protocol extraction.
+
+ .. code-block:: console
+
+ dpdk-testpmd -w 18:00.0,proto_xtr=vlan,proto_xtr='[(1,2-3,8-9):tcp,10-23:ipv6]'
+
+ This setting means queues 1, 2-3, 8-9 use TCP extraction, queues 10-23 use
+ IPv6 extraction, and the other queues use the default VLAN extraction.
+
+ The extraction metadata is copied into the registered dynamic mbuf field, and
+ the related dynamic mbuf flag is set.
+
+ .. table:: Protocol extraction : ``vlan``
+
+ +----------------------------+----------------------------+
+ | VLAN2 | VLAN1 |
+ +======+===+=================+======+===+=================+
+ | PCP | D | VID | PCP | D | VID |
+ +------+---+-----------------+------+---+-----------------+
+
+ VLAN1 - single or EVLAN (first for QinQ).
+
+ VLAN2 - C-VLAN (second for QinQ).
+
+ .. table:: Protocol extraction : ``ipv4``
+
+ +----------------------------+----------------------------+
+ | IPHDR2 | IPHDR1 |
+ +======+=======+=============+==============+=============+
+ | Ver |Hdr Len| ToS | TTL | Protocol |
+ +------+-------+-------------+--------------+-------------+
+
+ IPHDR1 - IPv4 header word 4, "TTL" and "Protocol" fields.
+
+ IPHDR2 - IPv4 header word 0, "Ver", "Hdr Len" and "Type of Service" fields.
+
+ .. table:: Protocol extraction : ``ipv6``
+
+ +----------------------------+----------------------------+
+ | IPHDR2 | IPHDR1 |
+ +=====+=============+========+=============+==============+
+ | Ver |Traffic class| Flow | Next Header | Hop Limit |
+ +-----+-------------+--------+-------------+--------------+
+
+ IPHDR1 - IPv6 header word 3, "Next Header" and "Hop Limit" fields.
+
+ IPHDR2 - IPv6 header word 0, "Ver", "Traffic class" and high 4 bits of
+ "Flow Label" fields.
+
+ .. table:: Protocol extraction : ``ipv6_flow``
+
+ +----------------------------+----------------------------+
+ | IPHDR2 | IPHDR1 |
+ +=====+=============+========+============================+
+ | Ver |Traffic class| Flow Label |
+ +-----+-------------+-------------------------------------+
+
+ IPHDR1 - IPv6 header word 1, 16 low bits of the "Flow Label" field.
+
+ IPHDR2 - IPv6 header word 0, "Ver", "Traffic class" and high 4 bits of
+ "Flow Label" fields.
+
+ .. table:: Protocol extraction : ``tcp``
+
+ +----------------------------+----------------------------+
+ | TCPHDR2 | TCPHDR1 |
+ +============================+======+======+==============+
+ | Reserved |Offset| RSV | Flags |
+ +----------------------------+------+------+--------------+
+
+ TCPHDR1 - TCP header word 6, "Data Offset" and "Flags" fields.
+
+ TCPHDR2 - Reserved
+
+ .. table:: Protocol extraction : ``ip_offset``
+
+ +----------------------------+----------------------------+
+ | IPHDR2 | IPHDR1 |
+ +============================+============================+
+ | IPv6 HDR Offset | IPv4 HDR Offset |
+ +----------------------------+----------------------------+
+
+ IPHDR1 - Outer/Single IPv4 Header offset.
+
+ IPHDR2 - Outer/Single IPv6 Header offset.
+
+ Use ``rte_pmd_ifd_dynf_proto_xtr_metadata_get`` to access the protocol
+ extraction metadata, and test the ``RTE_PKT_RX_DYNF_PROTO_XTR_*`` flags in
+ ``struct rte_mbuf::ol_flags`` to get the metadata type.
+
+ The ``rte_pmd_ifd_dump_proto_xtr_metadata`` routine shows how to
+ access the protocol extraction result in ``struct rte_mbuf``.
diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index 17b59c2c3d..022aa0dc6f 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -205,6 +205,12 @@ New Features
Updated the Intel qat driver to use write combining stores.
+* **Updated Intel iavf driver.**
+
+ Updated iavf PMD with new features and improvements, including:
+
+ * Added support for flexible descriptor metadata extraction.
+
* **Updated Memif PMD.**
* Added support for abstract socket address.
diff --git a/drivers/net/iavf/iavf.h b/drivers/net/iavf/iavf.h
index 3d3b0da5dd..6d5912d8c1 100644
--- a/drivers/net/iavf/iavf.h
+++ b/drivers/net/iavf/iavf.h
@@ -133,7 +133,7 @@ struct iavf_info {
struct virtchnl_vf_resource *vf_res; /* VF resource */
struct virtchnl_vsi_resource *vsi_res; /* LAN VSI */
uint64_t supported_rxdid;
-
+ uint8_t *proto_xtr; /* proto xtr type for all queues */
volatile enum virtchnl_ops pend_cmd; /* pending command not finished */
uint32_t cmd_retval; /* return value of the cmd response from PF */
uint8_t *aq_resp; /* buffer to store the adminq response from PF */
@@ -169,6 +169,27 @@ struct iavf_info {
#define IAVF_MAX_PKT_TYPE 1024
+#define IAVF_MAX_QUEUE_NUM 2048
+
+enum iavf_proto_xtr_type {
+ IAVF_PROTO_XTR_NONE,
+ IAVF_PROTO_XTR_VLAN,
+ IAVF_PROTO_XTR_IPV4,
+ IAVF_PROTO_XTR_IPV6,
+ IAVF_PROTO_XTR_IPV6_FLOW,
+ IAVF_PROTO_XTR_TCP,
+ IAVF_PROTO_XTR_IP_OFFSET,
+ IAVF_PROTO_XTR_MAX,
+};
+
+/**
+ * Cache devargs parse result.
+ */
+struct iavf_devargs {
+ uint8_t proto_xtr_dflt;
+ uint8_t proto_xtr[IAVF_MAX_QUEUE_NUM];
+};
+
/* Structure to store private data for each VF instance. */
struct iavf_adapter {
struct iavf_hw hw;
@@ -182,6 +203,7 @@ struct iavf_adapter {
const uint32_t *ptype_tbl;
bool stopped;
uint16_t fdir_ref_cnt;
+ struct iavf_devargs devargs;
};
/* IAVF_DEV_PRIVATE_TO */
diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
index 9eea8bf90c..7e3c26a94e 100644
--- a/drivers/net/iavf/iavf_ethdev.c
+++ b/drivers/net/iavf/iavf_ethdev.c
@@ -28,6 +28,49 @@
#include "iavf.h"
#include "iavf_rxtx.h"
#include "iavf_generic_flow.h"
+#include "rte_pmd_iavf.h"
+
+/* devargs */
+#define IAVF_PROTO_XTR_ARG "proto_xtr"
+
+static const char * const iavf_valid_args[] = {
+ IAVF_PROTO_XTR_ARG,
+ NULL
+};
+
+static const struct rte_mbuf_dynfield iavf_proto_xtr_metadata_param = {
+ .name = "intel_pmd_dynfield_proto_xtr_metadata",
+ .size = sizeof(uint32_t),
+ .align = __alignof__(uint32_t),
+ .flags = 0,
+};
+
+struct iavf_proto_xtr_ol {
+ const struct rte_mbuf_dynflag param;
+ uint64_t *ol_flag;
+ bool required;
+};
+
+static struct iavf_proto_xtr_ol iavf_proto_xtr_params[] = {
+ [IAVF_PROTO_XTR_VLAN] = {
+ .param = { .name = "intel_pmd_dynflag_proto_xtr_vlan" },
+ .ol_flag = &rte_pmd_ifd_dynflag_proto_xtr_vlan_mask },
+ [IAVF_PROTO_XTR_IPV4] = {
+ .param = { .name = "intel_pmd_dynflag_proto_xtr_ipv4" },
+ .ol_flag = &rte_pmd_ifd_dynflag_proto_xtr_ipv4_mask },
+ [IAVF_PROTO_XTR_IPV6] = {
+ .param = { .name = "intel_pmd_dynflag_proto_xtr_ipv6" },
+ .ol_flag = &rte_pmd_ifd_dynflag_proto_xtr_ipv6_mask },
+ [IAVF_PROTO_XTR_IPV6_FLOW] = {
+ .param = { .name = "intel_pmd_dynflag_proto_xtr_ipv6_flow" },
+ .ol_flag = &rte_pmd_ifd_dynflag_proto_xtr_ipv6_flow_mask },
+ [IAVF_PROTO_XTR_TCP] = {
+ .param = { .name = "intel_pmd_dynflag_proto_xtr_tcp" },
+ .ol_flag = &rte_pmd_ifd_dynflag_proto_xtr_tcp_mask },
+ [IAVF_PROTO_XTR_IP_OFFSET] = {
+ .param = { .name = "intel_pmd_dynflag_proto_xtr_ip_offset" },
+ .ol_flag = &rte_pmd_ifd_dynflag_proto_xtr_ip_offset_mask },
+};
static int iavf_dev_configure(struct rte_eth_dev *dev);
static int iavf_dev_start(struct rte_eth_dev *dev);
@@ -1394,6 +1437,349 @@ iavf_check_vf_reset_done(struct iavf_hw *hw)
return 0;
}
+static int
+iavf_lookup_proto_xtr_type(const char *flex_name)
+{
+ static struct {
+ const char *name;
+ enum iavf_proto_xtr_type type;
+ } xtr_type_map[] = {
+ { "vlan", IAVF_PROTO_XTR_VLAN },
+ { "ipv4", IAVF_PROTO_XTR_IPV4 },
+ { "ipv6", IAVF_PROTO_XTR_IPV6 },
+ { "ipv6_flow", IAVF_PROTO_XTR_IPV6_FLOW },
+ { "tcp", IAVF_PROTO_XTR_TCP },
+ { "ip_offset", IAVF_PROTO_XTR_IP_OFFSET },
+ };
+ uint32_t i;
+
+ for (i = 0; i < RTE_DIM(xtr_type_map); i++) {
+ if (strcmp(flex_name, xtr_type_map[i].name) == 0)
+ return xtr_type_map[i].type;
+ }
+
+ PMD_DRV_LOG(ERR, "wrong proto_xtr type, "
+ "it should be: vlan|ipv4|ipv6|ipv6_flow|tcp|ip_offset");
+
+ return -1;
+}
+
+/**
+ * Parse an elem; the elem can be a single number/range or a '(' ')' group:
+ * 1) A single number elem: just a simple digit, e.g. 9
+ * 2) A single range elem: two digits with a '-' between, e.g. 2-6
+ * 3) A group elem: combines multiple 1) or 2) with '( )', e.g. (0,2-4,6)
+ * Within a group elem, '-' is used as a range separator and
+ * ',' as a single number separator.
+ */
+static int
+iavf_parse_queue_set(const char *input, int xtr_type,
+ struct iavf_devargs *devargs)
+{
+ const char *str = input;
+ char *end = NULL;
+ uint32_t min, max;
+ uint32_t idx;
+
+ while (isblank(*str))
+ str++;
+
+ if (!isdigit(*str) && *str != '(')
+ return -1;
+
+ /* process single number or single range of number */
+ if (*str != '(') {
+ errno = 0;
+ idx = strtoul(str, &end, 10);
+ if (errno || !end || idx >= IAVF_MAX_QUEUE_NUM)
+ return -1;
+
+ while (isblank(*end))
+ end++;
+
+ min = idx;
+ max = idx;
+
+ /* process single <number>-<number> */
+ if (*end == '-') {
+ end++;
+ while (isblank(*end))
+ end++;
+ if (!isdigit(*end))
+ return -1;
+
+ errno = 0;
+ idx = strtoul(end, &end, 10);
+ if (errno || !end || idx >= IAVF_MAX_QUEUE_NUM)
+ return -1;
+
+ max = idx;
+ while (isblank(*end))
+ end++;
+ }
+
+ if (*end != ':')
+ return -1;
+
+ for (idx = RTE_MIN(min, max);
+ idx <= RTE_MAX(min, max); idx++)
+ devargs->proto_xtr[idx] = xtr_type;
+
+ return 0;
+ }
+
+ /* process set within bracket */
+ str++;
+ while (isblank(*str))
+ str++;
+ if (*str == '\0')
+ return -1;
+
+ min = IAVF_MAX_QUEUE_NUM;
+ do {
+ /* go ahead to the first digit */
+ while (isblank(*str))
+ str++;
+ if (!isdigit(*str))
+ return -1;
+
+ /* get the digit value */
+ errno = 0;
+ idx = strtoul(str, &end, 10);
+ if (errno || !end || idx >= IAVF_MAX_QUEUE_NUM)
+ return -1;
+
+ /* go ahead to separator '-',',' and ')' */
+ while (isblank(*end))
+ end++;
+ if (*end == '-') {
+ if (min == IAVF_MAX_QUEUE_NUM)
+ min = idx;
+ else /* avoid continuous '-' */
+ return -1;
+ } else if (*end == ',' || *end == ')') {
+ max = idx;
+ if (min == IAVF_MAX_QUEUE_NUM)
+ min = idx;
+
+ for (idx = RTE_MIN(min, max);
+ idx <= RTE_MAX(min, max); idx++)
+ devargs->proto_xtr[idx] = xtr_type;
+
+ min = IAVF_MAX_QUEUE_NUM;
+ } else {
+ return -1;
+ }
+
+ str = end + 1;
+ } while (*end != ')' && *end != '\0');
+
+ return 0;
+}
+
+static int
+iavf_parse_queue_proto_xtr(const char *queues, struct iavf_devargs *devargs)
+{
+ const char *queue_start;
+ uint32_t idx;
+ int xtr_type;
+ char flex_name[32];
+
+ while (isblank(*queues))
+ queues++;
+
+ if (*queues != '[') {
+ xtr_type = iavf_lookup_proto_xtr_type(queues);
+ if (xtr_type < 0)
+ return -1;
+
+ devargs->proto_xtr_dflt = xtr_type;
+
+ return 0;
+ }
+
+ queues++;
+ do {
+ while (isblank(*queues))
+ queues++;
+ if (*queues == '\0')
+ return -1;
+
+ queue_start = queues;
+
+ /* go across a complete bracket */
+ if (*queue_start == '(') {
+ queues += strcspn(queues, ")");
+ if (*queues != ')')
+ return -1;
+ }
+
+ /* scan the separator ':' */
+ queues += strcspn(queues, ":");
+ if (*queues++ != ':')
+ return -1;
+ while (isblank(*queues))
+ queues++;
+
+ for (idx = 0; ; idx++) {
+ if (isblank(queues[idx]) ||
+ queues[idx] == ',' ||
+ queues[idx] == ']' ||
+ queues[idx] == '\0')
+ break;
+
+ if (idx > sizeof(flex_name) - 2)
+ return -1;
+
+ flex_name[idx] = queues[idx];
+ }
+ flex_name[idx] = '\0';
+ xtr_type = iavf_lookup_proto_xtr_type(flex_name);
+ if (xtr_type < 0)
+ return -1;
+
+ queues += idx;
+
+ while (isblank(*queues) || *queues == ',' || *queues == ']')
+ queues++;
+
+ if (iavf_parse_queue_set(queue_start, xtr_type, devargs) < 0)
+ return -1;
+ } while (*queues != '\0');
+
+ return 0;
+}
+
+static int
+iavf_handle_proto_xtr_arg(__rte_unused const char *key, const char *value,
+ void *extra_args)
+{
+ struct iavf_devargs *devargs = extra_args;
+
+ if (!value || !extra_args)
+ return -EINVAL;
+
+ if (iavf_parse_queue_proto_xtr(value, devargs) < 0) {
+ PMD_DRV_LOG(ERR, "the proto_xtr's parameter is wrong : '%s'",
+ value);
+ return -1;
+ }
+
+ return 0;
+}
+
+static int iavf_parse_devargs(struct rte_eth_dev *dev)
+{
+ struct iavf_adapter *ad =
+ IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+ struct rte_devargs *devargs = dev->device->devargs;
+ struct rte_kvargs *kvlist;
+ int ret;
+
+ if (!devargs)
+ return 0;
+
+ kvlist = rte_kvargs_parse(devargs->args, iavf_valid_args);
+ if (!kvlist) {
+ PMD_INIT_LOG(ERR, "invalid kvargs key\n");
+ return -EINVAL;
+ }
+
+ ad->devargs.proto_xtr_dflt = IAVF_PROTO_XTR_NONE;
+ memset(ad->devargs.proto_xtr, IAVF_PROTO_XTR_NONE,
+ sizeof(ad->devargs.proto_xtr));
+
+ ret = rte_kvargs_process(kvlist, IAVF_PROTO_XTR_ARG,
+ &iavf_handle_proto_xtr_arg, &ad->devargs);
+ if (ret)
+ goto bail;
+
+bail:
+ rte_kvargs_free(kvlist);
+ return ret;
+}
+
+static void
+iavf_init_proto_xtr(struct rte_eth_dev *dev)
+{
+ struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+ struct iavf_adapter *ad =
+ IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+ const struct iavf_proto_xtr_ol *xtr_ol;
+ bool proto_xtr_enable = false;
+ int offset;
+ uint16_t i;
+
+ vf->proto_xtr = rte_zmalloc("vf proto xtr",
+ vf->vsi_res->num_queue_pairs, 0);
+ if (unlikely(!(vf->proto_xtr))) {
+ PMD_DRV_LOG(ERR, "no memory for setting up proto_xtr's table");
+ return;
+ }
+
+ for (i = 0; i < vf->vsi_res->num_queue_pairs; i++) {
+ vf->proto_xtr[i] = ad->devargs.proto_xtr[i] !=
+ IAVF_PROTO_XTR_NONE ?
+ ad->devargs.proto_xtr[i] :
+ ad->devargs.proto_xtr_dflt;
+
+ if (vf->proto_xtr[i] != IAVF_PROTO_XTR_NONE) {
+ uint8_t type = vf->proto_xtr[i];
+
+ iavf_proto_xtr_params[type].required = true;
+ proto_xtr_enable = true;
+ }
+ }
+
+ if (likely(!proto_xtr_enable))
+ return;
+
+ offset = rte_mbuf_dynfield_register(&iavf_proto_xtr_metadata_param);
+ if (unlikely(offset == -1)) {
+ PMD_DRV_LOG(ERR,
+ "failed to extract protocol metadata, error %d",
+ -rte_errno);
+ return;
+ }
+
+ PMD_DRV_LOG(DEBUG,
+ "proto_xtr metadata offset in mbuf is : %d",
+ offset);
+ rte_pmd_ifd_dynfield_proto_xtr_metadata_offs = offset;
+
+ for (i = 0; i < RTE_DIM(iavf_proto_xtr_params); i++) {
+ xtr_ol = &iavf_proto_xtr_params[i];
+
+ uint8_t rxdid = iavf_proto_xtr_type_to_rxdid((uint8_t)i);
+
+ if (!xtr_ol->required)
+ continue;
+
+ if (!(vf->supported_rxdid & BIT(rxdid))) {
+ PMD_DRV_LOG(ERR,
+ "rxdid[%u] is not supported in hardware",
+ rxdid);
+ rte_pmd_ifd_dynfield_proto_xtr_metadata_offs = -1;
+ break;
+ }
+
+ offset = rte_mbuf_dynflag_register(&xtr_ol->param);
+ if (unlikely(offset == -1)) {
+ PMD_DRV_LOG(ERR,
+ "failed to register proto_xtr offload '%s', error %d",
+ xtr_ol->param.name, -rte_errno);
+
+ rte_pmd_ifd_dynfield_proto_xtr_metadata_offs = -1;
+ break;
+ }
+
+ PMD_DRV_LOG(DEBUG,
+ "proto_xtr offload '%s' offset in mbuf is : %d",
+ xtr_ol->param.name, offset);
+ *xtr_ol->ol_flag = 1ULL << offset;
+ }
+}
+
static int
iavf_init_vf(struct rte_eth_dev *dev)
{
@@ -1403,6 +1789,12 @@ iavf_init_vf(struct rte_eth_dev *dev)
struct iavf_hw *hw = IAVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+ err = iavf_parse_devargs(dev);
+ if (err) {
+ PMD_INIT_LOG(ERR, "Failed to parse devargs");
+ goto err;
+ }
+
err = iavf_set_mac_type(hw);
if (err) {
PMD_INIT_LOG(ERR, "set_mac_type failed: %d", err);
@@ -1466,6 +1858,8 @@ iavf_init_vf(struct rte_eth_dev *dev)
}
}
+ iavf_init_proto_xtr(dev);
+
return 0;
err_rss:
rte_free(vf->rss_key);
diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
index 160d81b761..baac5d65c8 100644
--- a/drivers/net/iavf/iavf_rxtx.c
+++ b/drivers/net/iavf/iavf_rxtx.c
@@ -27,6 +27,35 @@
#include "iavf.h"
#include "iavf_rxtx.h"
+#include "rte_pmd_iavf.h"
+
+/* Offset of mbuf dynamic field for protocol extraction's metadata */
+int rte_pmd_ifd_dynfield_proto_xtr_metadata_offs = -1;
+
+/* Mask of mbuf dynamic flags for protocol extraction's type */
+uint64_t rte_pmd_ifd_dynflag_proto_xtr_vlan_mask;
+uint64_t rte_pmd_ifd_dynflag_proto_xtr_ipv4_mask;
+uint64_t rte_pmd_ifd_dynflag_proto_xtr_ipv6_mask;
+uint64_t rte_pmd_ifd_dynflag_proto_xtr_ipv6_flow_mask;
+uint64_t rte_pmd_ifd_dynflag_proto_xtr_tcp_mask;
+uint64_t rte_pmd_ifd_dynflag_proto_xtr_ip_offset_mask;
+
+uint8_t
+iavf_proto_xtr_type_to_rxdid(uint8_t flex_type)
+{
+ static uint8_t rxdid_map[] = {
+ [IAVF_PROTO_XTR_NONE] = IAVF_RXDID_COMMS_OVS_1,
+ [IAVF_PROTO_XTR_VLAN] = IAVF_RXDID_COMMS_AUX_VLAN,
+ [IAVF_PROTO_XTR_IPV4] = IAVF_RXDID_COMMS_AUX_IPV4,
+ [IAVF_PROTO_XTR_IPV6] = IAVF_RXDID_COMMS_AUX_IPV6,
+ [IAVF_PROTO_XTR_IPV6_FLOW] = IAVF_RXDID_COMMS_AUX_IPV6_FLOW,
+ [IAVF_PROTO_XTR_TCP] = IAVF_RXDID_COMMS_AUX_TCP,
+ [IAVF_PROTO_XTR_IP_OFFSET] = IAVF_RXDID_COMMS_AUX_IP_OFFSET,
+ };
+
+ return flex_type < RTE_DIM(rxdid_map) ?
+ rxdid_map[flex_type] : IAVF_RXDID_COMMS_OVS_1;
+}
static inline int
check_rx_thresh(uint16_t nb_desc, uint16_t thresh)
@@ -295,6 +324,160 @@ static const struct iavf_txq_ops def_txq_ops = {
.release_mbufs = release_txq_mbufs,
};
+static inline void
+iavf_rxd_to_pkt_fields_by_comms_ovs(__rte_unused struct iavf_rx_queue *rxq,
+ struct rte_mbuf *mb,
+ volatile union iavf_rx_flex_desc *rxdp)
+{
+ volatile struct iavf_32b_rx_flex_desc_comms_ovs *desc =
+ (volatile struct iavf_32b_rx_flex_desc_comms_ovs *)rxdp;
+#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
+ uint16_t stat_err;
+#endif
+
+ if (desc->flow_id != 0xFFFFFFFF) {
+ mb->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+ mb->hash.fdir.hi = rte_le_to_cpu_32(desc->flow_id);
+ }
+
+#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
+ stat_err = rte_le_to_cpu_16(desc->status_error0);
+ if (likely(stat_err & (1 << IAVF_RX_FLEX_DESC_STATUS0_RSS_VALID_S))) {
+ mb->ol_flags |= PKT_RX_RSS_HASH;
+ mb->hash.rss = rte_le_to_cpu_32(desc->rss_hash);
+ }
+#endif
+}
+
+static inline void
+iavf_rxd_to_pkt_fields_by_comms_aux_v1(struct iavf_rx_queue *rxq,
+ struct rte_mbuf *mb,
+ volatile union iavf_rx_flex_desc *rxdp)
+{
+ volatile struct iavf_32b_rx_flex_desc_comms *desc =
+ (volatile struct iavf_32b_rx_flex_desc_comms *)rxdp;
+ uint16_t stat_err;
+
+ stat_err = rte_le_to_cpu_16(desc->status_error0);
+ if (likely(stat_err & (1 << IAVF_RX_FLEX_DESC_STATUS0_RSS_VALID_S))) {
+ mb->ol_flags |= PKT_RX_RSS_HASH;
+ mb->hash.rss = rte_le_to_cpu_32(desc->rss_hash);
+ }
+
+#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
+ if (desc->flow_id != 0xFFFFFFFF) {
+ mb->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+ mb->hash.fdir.hi = rte_le_to_cpu_32(desc->flow_id);
+ }
+
+ if (rxq->xtr_ol_flag) {
+ uint32_t metadata = 0;
+
+ stat_err = rte_le_to_cpu_16(desc->status_error1);
+
+ if (stat_err & (1 << IAVF_RX_FLEX_DESC_STATUS1_XTRMD4_VALID_S))
+ metadata = rte_le_to_cpu_16(desc->flex_ts.flex.aux0);
+
+ if (stat_err & (1 << IAVF_RX_FLEX_DESC_STATUS1_XTRMD5_VALID_S))
+ metadata |=
+ rte_le_to_cpu_16(desc->flex_ts.flex.aux1) << 16;
+
+ if (metadata) {
+ mb->ol_flags |= rxq->xtr_ol_flag;
+
+ *RTE_PMD_IFD_DYNF_PROTO_XTR_METADATA(mb) = metadata;
+ }
+ }
+#endif
+}
+
+static inline void
+iavf_rxd_to_pkt_fields_by_comms_aux_v2(struct iavf_rx_queue *rxq,
+ struct rte_mbuf *mb,
+ volatile union iavf_rx_flex_desc *rxdp)
+{
+ volatile struct iavf_32b_rx_flex_desc_comms *desc =
+ (volatile struct iavf_32b_rx_flex_desc_comms *)rxdp;
+ uint16_t stat_err;
+
+ stat_err = rte_le_to_cpu_16(desc->status_error0);
+ if (likely(stat_err & (1 << IAVF_RX_FLEX_DESC_STATUS0_RSS_VALID_S))) {
+ mb->ol_flags |= PKT_RX_RSS_HASH;
+ mb->hash.rss = rte_le_to_cpu_32(desc->rss_hash);
+ }
+
+#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
+ if (desc->flow_id != 0xFFFFFFFF) {
+ mb->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+ mb->hash.fdir.hi = rte_le_to_cpu_32(desc->flow_id);
+ }
+
+ if (rxq->xtr_ol_flag) {
+ uint32_t metadata = 0;
+
+ if (desc->flex_ts.flex.aux0 != 0xFFFF)
+ metadata = rte_le_to_cpu_16(desc->flex_ts.flex.aux0);
+ else if (desc->flex_ts.flex.aux1 != 0xFFFF)
+ metadata = rte_le_to_cpu_16(desc->flex_ts.flex.aux1);
+
+ if (metadata) {
+ mb->ol_flags |= rxq->xtr_ol_flag;
+
+ *RTE_PMD_IFD_DYNF_PROTO_XTR_METADATA(mb) = metadata;
+ }
+ }
+#endif
+}
+
+static void
+iavf_select_rxd_to_pkt_fields_handler(struct iavf_rx_queue *rxq, uint32_t rxdid)
+{
+ switch (rxdid) {
+ case IAVF_RXDID_COMMS_AUX_VLAN:
+ rxq->xtr_ol_flag = rte_pmd_ifd_dynflag_proto_xtr_vlan_mask;
+ rxq->rxd_to_pkt_fields =
+ iavf_rxd_to_pkt_fields_by_comms_aux_v1;
+ break;
+ case IAVF_RXDID_COMMS_AUX_IPV4:
+ rxq->xtr_ol_flag = rte_pmd_ifd_dynflag_proto_xtr_ipv4_mask;
+ rxq->rxd_to_pkt_fields =
+ iavf_rxd_to_pkt_fields_by_comms_aux_v1;
+ break;
+ case IAVF_RXDID_COMMS_AUX_IPV6:
+ rxq->xtr_ol_flag = rte_pmd_ifd_dynflag_proto_xtr_ipv6_mask;
+ rxq->rxd_to_pkt_fields =
+ iavf_rxd_to_pkt_fields_by_comms_aux_v1;
+ break;
+ case IAVF_RXDID_COMMS_AUX_IPV6_FLOW:
+ rxq->xtr_ol_flag =
+ rte_pmd_ifd_dynflag_proto_xtr_ipv6_flow_mask;
+ rxq->rxd_to_pkt_fields =
+ iavf_rxd_to_pkt_fields_by_comms_aux_v1;
+ break;
+ case IAVF_RXDID_COMMS_AUX_TCP:
+ rxq->xtr_ol_flag = rte_pmd_ifd_dynflag_proto_xtr_tcp_mask;
+ rxq->rxd_to_pkt_fields =
+ iavf_rxd_to_pkt_fields_by_comms_aux_v1;
+ break;
+ case IAVF_RXDID_COMMS_AUX_IP_OFFSET:
+ rxq->xtr_ol_flag =
+ rte_pmd_ifd_dynflag_proto_xtr_ip_offset_mask;
+ rxq->rxd_to_pkt_fields =
+ iavf_rxd_to_pkt_fields_by_comms_aux_v2;
+ break;
+ case IAVF_RXDID_COMMS_OVS_1:
+ rxq->rxd_to_pkt_fields = iavf_rxd_to_pkt_fields_by_comms_ovs;
+ break;
+ default:
+ /* update this according to the RXDID for FLEX_DESC_NONE */
+ rxq->rxd_to_pkt_fields = iavf_rxd_to_pkt_fields_by_comms_ovs;
+ break;
+ }
+
+ if (!rte_pmd_ifd_dynf_proto_xtr_metadata_avail())
+ rxq->xtr_ol_flag = 0;
+}
+
int
iavf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
uint16_t nb_desc, unsigned int socket_id,
@@ -310,6 +493,7 @@ iavf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
struct iavf_rx_queue *rxq;
const struct rte_memzone *mz;
uint32_t ring_size;
+ uint8_t proto_xtr;
uint16_t len;
uint16_t rx_free_thresh;
@@ -347,14 +531,18 @@ iavf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
return -ENOMEM;
}
- if (vf->vf_res->vf_cap_flags &
- VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC &&
- vf->supported_rxdid & BIT(IAVF_RXDID_COMMS_OVS_1)) {
- rxq->rxdid = IAVF_RXDID_COMMS_OVS_1;
+ if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC) {
+ proto_xtr = vf->proto_xtr ? vf->proto_xtr[queue_idx] :
+ IAVF_PROTO_XTR_NONE;
+ rxq->rxdid = iavf_proto_xtr_type_to_rxdid(proto_xtr);
+ rxq->proto_xtr = proto_xtr;
} else {
rxq->rxdid = IAVF_RXDID_LEGACY_1;
+ rxq->proto_xtr = IAVF_PROTO_XTR_NONE;
}
+ iavf_select_rxd_to_pkt_fields_handler(rxq, rxq->rxdid);
+
rxq->mp = mp;
rxq->nb_rx_desc = nb_desc;
rxq->rx_free_thresh = rx_free_thresh;
@@ -735,6 +923,14 @@ iavf_stop_queues(struct rte_eth_dev *dev)
}
}
+#define IAVF_RX_FLEX_ERR0_BITS \
+ ((1 << IAVF_RX_FLEX_DESC_STATUS0_HBO_S) | \
+ (1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_IPE_S) | \
+ (1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_L4E_S) | \
+ (1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_EIPE_S) | \
+ (1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_EUDPE_S) | \
+ (1 << IAVF_RX_FLEX_DESC_STATUS0_RXE_S))
+
static inline void
iavf_rxd_to_vlan_tci(struct rte_mbuf *mb, volatile union iavf_rx_desc *rxdp)
{
@@ -760,6 +956,21 @@ iavf_flex_rxd_to_vlan_tci(struct rte_mbuf *mb,
} else {
mb->vlan_tci = 0;
}
+
+#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
+ if (rte_le_to_cpu_16(rxdp->wb.status_error1) &
+ (1 << IAVF_RX_FLEX_DESC_STATUS1_L2TAG2P_S)) {
+ mb->ol_flags |= PKT_RX_QINQ_STRIPPED | PKT_RX_QINQ |
+ PKT_RX_VLAN_STRIPPED | PKT_RX_VLAN;
+ mb->vlan_tci_outer = mb->vlan_tci;
+ mb->vlan_tci = rte_le_to_cpu_16(rxdp->wb.l2tag2_2nd);
+ PMD_RX_LOG(DEBUG, "Descriptor l2tag2_1: %u, l2tag2_2: %u",
+ rte_le_to_cpu_16(rxdp->wb.l2tag2_1st),
+ rte_le_to_cpu_16(rxdp->wb.l2tag2_2nd));
+ } else {
+ mb->vlan_tci_outer = 0;
+ }
+#endif
}
/* Translate the rx descriptor status and error fields to pkt flags */
@@ -824,30 +1035,6 @@ iavf_rxd_build_fdir(volatile union iavf_rx_desc *rxdp, struct rte_mbuf *mb)
return flags;
}
-
-/* Translate the rx flex descriptor status to pkt flags */
-static inline void
-iavf_rxd_to_pkt_fields(struct rte_mbuf *mb,
- volatile union iavf_rx_flex_desc *rxdp)
-{
- volatile struct iavf_32b_rx_flex_desc_comms_ovs *desc =
- (volatile struct iavf_32b_rx_flex_desc_comms_ovs *)rxdp;
-#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
- uint16_t stat_err;
-
- stat_err = rte_le_to_cpu_16(desc->status_error0);
- if (likely(stat_err & (1 << IAVF_RX_FLEX_DESC_STATUS0_RSS_VALID_S))) {
- mb->ol_flags |= PKT_RX_RSS_HASH;
- mb->hash.rss = rte_le_to_cpu_32(desc->rss_hash);
- }
-#endif
-
- if (desc->flow_id != 0xFFFFFFFF) {
- mb->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
- mb->hash.fdir.hi = rte_le_to_cpu_32(desc->flow_id);
- }
-}
-
#define IAVF_RX_FLEX_ERR0_BITS \
((1 << IAVF_RX_FLEX_DESC_STATUS0_HBO_S) | \
(1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_IPE_S) | \
@@ -1102,7 +1289,7 @@ iavf_recv_pkts_flex_rxd(void *rx_queue,
rxm->packet_type = ptype_tbl[IAVF_RX_FLEX_DESC_PTYPE_M &
rte_le_to_cpu_16(rxd.wb.ptype_flex_flags0)];
iavf_flex_rxd_to_vlan_tci(rxm, &rxd);
- iavf_rxd_to_pkt_fields(rxm, &rxd);
+ rxq->rxd_to_pkt_fields(rxq, rxm, &rxd);
pkt_flags = iavf_flex_rxd_error_to_pkt_flags(rx_stat_err0);
rxm->ol_flags |= pkt_flags;
@@ -1243,7 +1430,7 @@ iavf_recv_scattered_pkts_flex_rxd(void *rx_queue, struct rte_mbuf **rx_pkts,
first_seg->packet_type = ptype_tbl[IAVF_RX_FLEX_DESC_PTYPE_M &
rte_le_to_cpu_16(rxd.wb.ptype_flex_flags0)];
iavf_flex_rxd_to_vlan_tci(first_seg, &rxd);
- iavf_rxd_to_pkt_fields(first_seg, &rxd);
+ rxq->rxd_to_pkt_fields(rxq, first_seg, &rxd);
pkt_flags = iavf_flex_rxd_error_to_pkt_flags(rx_stat_err0);
first_seg->ol_flags |= pkt_flags;
@@ -1480,7 +1667,7 @@ iavf_rx_scan_hw_ring_flex_rxd(struct iavf_rx_queue *rxq)
mb->packet_type = ptype_tbl[IAVF_RX_FLEX_DESC_PTYPE_M &
rte_le_to_cpu_16(rxdp[j].wb.ptype_flex_flags0)];
iavf_flex_rxd_to_vlan_tci(mb, &rxdp[j]);
- iavf_rxd_to_pkt_fields(mb, &rxdp[j]);
+ rxq->rxd_to_pkt_fields(rxq, mb, &rxdp[j]);
stat_err0 = rte_le_to_cpu_16(rxdp[j].wb.status_error0);
pkt_flags = iavf_flex_rxd_error_to_pkt_flags(stat_err0);
@@ -1672,7 +1859,7 @@ rx_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
if (rxq->rx_nb_avail)
return iavf_rx_fill_from_stage(rxq, rx_pkts, nb_pkts);
- if (rxq->rxdid == IAVF_RXDID_COMMS_OVS_1)
+ if (rxq->rxdid >= IAVF_RXDID_FLEX_NIC && rxq->rxdid <= IAVF_RXDID_LAST)
nb_rx = (uint16_t)iavf_rx_scan_hw_ring_flex_rxd(rxq);
else
nb_rx = (uint16_t)iavf_rx_scan_hw_ring(rxq);
@@ -2119,6 +2306,7 @@ iavf_set_rx_function(struct rte_eth_dev *dev)
struct iavf_adapter *adapter =
IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+
#ifdef RTE_ARCH_X86
struct iavf_rx_queue *rxq;
int i;
diff --git a/drivers/net/iavf/iavf_rxtx.h b/drivers/net/iavf/iavf_rxtx.h
index b22ccc42eb..d4b4935be6 100644
--- a/drivers/net/iavf/iavf_rxtx.h
+++ b/drivers/net/iavf/iavf_rxtx.h
@@ -57,6 +57,78 @@
#define IAVF_TX_OFFLOAD_NOTSUP_MASK \
(PKT_TX_OFFLOAD_MASK ^ IAVF_TX_OFFLOAD_MASK)
+/**
+ * Rx Flex Descriptors
+ * These descriptors are used instead of the legacy version descriptors
+ */
+union iavf_16b_rx_flex_desc {
+ struct {
+ __le64 pkt_addr; /* Packet buffer address */
+ __le64 hdr_addr; /* Header buffer address */
+ /* bit 0 of hdr_addr is DD bit */
+ } read;
+ struct {
+ /* Qword 0 */
+ u8 rxdid; /* descriptor builder profile ID */
+ u8 mir_id_umb_cast; /* mirror=[5:0], umb=[7:6] */
+ __le16 ptype_flex_flags0; /* ptype=[9:0], ff0=[15:10] */
+ __le16 pkt_len; /* [15:14] are reserved */
+ __le16 hdr_len_sph_flex_flags1; /* header=[10:0] */
+ /* sph=[11:11] */
+ /* ff1/ext=[15:12] */
+
+ /* Qword 1 */
+ __le16 status_error0;
+ __le16 l2tag1;
+ __le16 flex_meta0;
+ __le16 flex_meta1;
+ } wb; /* writeback */
+};
+
+union iavf_32b_rx_flex_desc {
+ struct {
+ __le64 pkt_addr; /* Packet buffer address */
+ __le64 hdr_addr; /* Header buffer address */
+ /* bit 0 of hdr_addr is DD bit */
+ __le64 rsvd1;
+ __le64 rsvd2;
+ } read;
+ struct {
+ /* Qword 0 */
+ u8 rxdid; /* descriptor builder profile ID */
+ u8 mir_id_umb_cast; /* mirror=[5:0], umb=[7:6] */
+ __le16 ptype_flex_flags0; /* ptype=[9:0], ff0=[15:10] */
+ __le16 pkt_len; /* [15:14] are reserved */
+ __le16 hdr_len_sph_flex_flags1; /* header=[10:0] */
+ /* sph=[11:11] */
+ /* ff1/ext=[15:12] */
+
+ /* Qword 1 */
+ __le16 status_error0;
+ __le16 l2tag1;
+ __le16 flex_meta0;
+ __le16 flex_meta1;
+
+ /* Qword 2 */
+ __le16 status_error1;
+ u8 flex_flags2;
+ u8 time_stamp_low;
+ __le16 l2tag2_1st;
+ __le16 l2tag2_2nd;
+
+ /* Qword 3 */
+ __le16 flex_meta2;
+ __le16 flex_meta3;
+ union {
+ struct {
+ __le16 flex_meta4;
+ __le16 flex_meta5;
+ } flex;
+ __le32 ts_high;
+ } flex_ts;
+ } wb; /* writeback */
+};
+
/* HW desc structure, both 16-byte and 32-byte types are supported */
#ifdef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
#define iavf_rx_desc iavf_16byte_rx_desc
@@ -66,6 +138,10 @@
#define iavf_rx_flex_desc iavf_32b_rx_flex_desc
#endif
+typedef void (*iavf_rxd_to_pkt_fields_t)(struct iavf_rx_queue *rxq,
+ struct rte_mbuf *mb,
+ volatile union iavf_rx_flex_desc *rxdp);
+
struct iavf_rxq_ops {
void (*release_mbufs)(struct iavf_rx_queue *rxq);
};
@@ -114,6 +190,11 @@ struct iavf_rx_queue {
bool q_set; /* if rx queue has been configured */
bool rx_deferred_start; /* don't start this queue in dev start */
const struct iavf_rxq_ops *ops;
+ uint8_t proto_xtr; /* protocol extraction type */
+ uint64_t xtr_ol_flag;
+ /* flexible descriptor metadata extraction offload flag */
+ iavf_rxd_to_pkt_fields_t rxd_to_pkt_fields;
+ /* handle flexible descriptor by RXDID */
};
struct iavf_tx_entry {
@@ -165,77 +246,6 @@ union iavf_tx_offload {
};
};
-/* Rx Flex Descriptors
- * These descriptors are used instead of the legacy version descriptors
- */
-union iavf_16b_rx_flex_desc {
- struct {
- __le64 pkt_addr; /* Packet buffer address */
- __le64 hdr_addr; /* Header buffer address */
- /* bit 0 of hdr_addr is DD bit */
- } read;
- struct {
- /* Qword 0 */
- u8 rxdid; /* descriptor builder profile ID */
- u8 mir_id_umb_cast; /* mirror=[5:0], umb=[7:6] */
- __le16 ptype_flex_flags0; /* ptype=[9:0], ff0=[15:10] */
- __le16 pkt_len; /* [15:14] are reserved */
- __le16 hdr_len_sph_flex_flags1; /* header=[10:0] */
- /* sph=[11:11] */
- /* ff1/ext=[15:12] */
-
- /* Qword 1 */
- __le16 status_error0;
- __le16 l2tag1;
- __le16 flex_meta0;
- __le16 flex_meta1;
- } wb; /* writeback */
-};
-
-union iavf_32b_rx_flex_desc {
- struct {
- __le64 pkt_addr; /* Packet buffer address */
- __le64 hdr_addr; /* Header buffer address */
- /* bit 0 of hdr_addr is DD bit */
- __le64 rsvd1;
- __le64 rsvd2;
- } read;
- struct {
- /* Qword 0 */
- u8 rxdid; /* descriptor builder profile ID */
- u8 mir_id_umb_cast; /* mirror=[5:0], umb=[7:6] */
- __le16 ptype_flex_flags0; /* ptype=[9:0], ff0=[15:10] */
- __le16 pkt_len; /* [15:14] are reserved */
- __le16 hdr_len_sph_flex_flags1; /* header=[10:0] */
- /* sph=[11:11] */
- /* ff1/ext=[15:12] */
-
- /* Qword 1 */
- __le16 status_error0;
- __le16 l2tag1;
- __le16 flex_meta0;
- __le16 flex_meta1;
-
- /* Qword 2 */
- __le16 status_error1;
- u8 flex_flags2;
- u8 time_stamp_low;
- __le16 l2tag2_1st;
- __le16 l2tag2_2nd;
-
- /* Qword 3 */
- __le16 flex_meta2;
- __le16 flex_meta3;
- union {
- struct {
- __le16 flex_meta4;
- __le16 flex_meta5;
- } flex;
- __le32 ts_high;
- } flex_ts;
- } wb; /* writeback */
-};
-
/* Rx Flex Descriptor
* RxDID Profile ID 16-21
* Flex-field 0: RSS hash lower 16-bits
@@ -335,6 +345,7 @@ enum iavf_rxdid {
IAVF_RXDID_COMMS_AUX_TCP = 21,
IAVF_RXDID_COMMS_OVS_1 = 22,
IAVF_RXDID_COMMS_OVS_2 = 23,
+ IAVF_RXDID_COMMS_AUX_IP_OFFSET = 25,
IAVF_RXDID_LAST = 63,
};
@@ -359,6 +370,20 @@ enum iavf_rx_flex_desc_status_error_0_bits {
IAVF_RX_FLEX_DESC_STATUS0_LAST /* this entry must be last!!! */
};
+enum iavf_rx_flex_desc_status_error_1_bits {
+ /* Note: These are predefined bit offsets */
+ IAVF_RX_FLEX_DESC_STATUS1_CPM_S = 0, /* 4 bits */
+ IAVF_RX_FLEX_DESC_STATUS1_NAT_S = 4,
+ IAVF_RX_FLEX_DESC_STATUS1_CRYPTO_S = 5,
+ /* [10:6] reserved */
+ IAVF_RX_FLEX_DESC_STATUS1_L2TAG2P_S = 11,
+ IAVF_RX_FLEX_DESC_STATUS1_XTRMD2_VALID_S = 12,
+ IAVF_RX_FLEX_DESC_STATUS1_XTRMD3_VALID_S = 13,
+ IAVF_RX_FLEX_DESC_STATUS1_XTRMD4_VALID_S = 14,
+ IAVF_RX_FLEX_DESC_STATUS1_XTRMD5_VALID_S = 15,
+ IAVF_RX_FLEX_DESC_STATUS1_LAST /* this entry must be last!!! */
+};
+
/* for iavf_32b_rx_flex_desc.ptype_flex_flags0 member */
#define IAVF_RX_FLEX_DESC_PTYPE_M (0x3FF) /* 10-bits */
@@ -457,6 +482,8 @@ uint16_t iavf_xmit_pkts_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts);
int iavf_txq_vec_setup_avx512(struct iavf_tx_queue *txq);
+uint8_t iavf_proto_xtr_type_to_rxdid(uint8_t xtr_type);
+
const uint32_t *iavf_get_default_ptype_table(void);
static inline
diff --git a/drivers/net/iavf/iavf_rxtx_vec_common.h b/drivers/net/iavf/iavf_rxtx_vec_common.h
index 25bb502de2..7ad1e0f68a 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_common.h
+++ b/drivers/net/iavf/iavf_rxtx_vec_common.h
@@ -224,6 +224,9 @@ iavf_rx_vec_queue_default(struct iavf_rx_queue *rxq)
if (rxq->nb_rx_desc % rxq->rx_free_thresh)
return -1;
+ if (rxq->proto_xtr != IAVF_PROTO_XTR_NONE)
+ return -1;
+
return 0;
}
diff --git a/drivers/net/iavf/iavf_vchnl.c b/drivers/net/iavf/iavf_vchnl.c
index 54d9917c0a..64d194670b 100644
--- a/drivers/net/iavf/iavf_vchnl.c
+++ b/drivers/net/iavf/iavf_vchnl.c
@@ -850,25 +850,27 @@ iavf_configure_queues(struct iavf_adapter *adapter,
#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
if (vf->vf_res->vf_cap_flags &
- VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC &&
- vf->supported_rxdid & BIT(IAVF_RXDID_COMMS_OVS_1)) {
- vc_qp->rxq.rxdid = IAVF_RXDID_COMMS_OVS_1;
- PMD_DRV_LOG(NOTICE, "request RXDID == %d in "
- "Queue[%d]", vc_qp->rxq.rxdid, i);
+ VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC &&
+ vf->supported_rxdid & BIT(rxq[i]->rxdid)) {
+ vc_qp->rxq.rxdid = rxq[i]->rxdid;
+ PMD_DRV_LOG(NOTICE, "request RXDID[%d] in Queue[%d]",
+ vc_qp->rxq.rxdid, i);
} else {
+ PMD_DRV_LOG(NOTICE, "RXDID[%d] is not supported, "
+ "request default RXDID[%d] in Queue[%d]",
+ rxq[i]->rxdid, IAVF_RXDID_LEGACY_1, i);
vc_qp->rxq.rxdid = IAVF_RXDID_LEGACY_1;
- PMD_DRV_LOG(NOTICE, "request RXDID == %d in "
- "Queue[%d]", vc_qp->rxq.rxdid, i);
}
#else
if (vf->vf_res->vf_cap_flags &
VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC &&
vf->supported_rxdid & BIT(IAVF_RXDID_LEGACY_0)) {
vc_qp->rxq.rxdid = IAVF_RXDID_LEGACY_0;
- PMD_DRV_LOG(NOTICE, "request RXDID == %d in "
- "Queue[%d]", vc_qp->rxq.rxdid, i);
+ PMD_DRV_LOG(NOTICE, "request RXDID[%d] in Queue[%d]",
+ vc_qp->rxq.rxdid, i);
} else {
- PMD_DRV_LOG(ERR, "RXDID == 0 is not supported");
+ PMD_DRV_LOG(ERR, "RXDID[%d] is not supported",
+ IAVF_RXDID_LEGACY_0);
return -1;
}
#endif
diff --git a/drivers/net/iavf/meson.build b/drivers/net/iavf/meson.build
index 3388cdf407..e257f5a6e1 100644
--- a/drivers/net/iavf/meson.build
+++ b/drivers/net/iavf/meson.build
@@ -55,3 +55,5 @@ if arch_subdir == 'x86'
objs += iavf_avx512_lib.extract_objects('iavf_rxtx_vec_avx512.c')
endif
endif
+
+headers = files('rte_pmd_iavf.h')
diff --git a/drivers/net/iavf/rte_pmd_iavf.h b/drivers/net/iavf/rte_pmd_iavf.h
new file mode 100644
index 0000000000..955084e197
--- /dev/null
+++ b/drivers/net/iavf/rte_pmd_iavf.h
@@ -0,0 +1,250 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#ifndef _RTE_PMD_IAVF_H_
+#define _RTE_PMD_IAVF_H_
+
+/**
+ * @file rte_pmd_iavf.h
+ *
+ * iavf PMD specific functions.
+ *
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
+ *
+ */
+
+#include <stdio.h>
+#include <rte_mbuf.h>
+#include <rte_mbuf_dyn.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * The supported formats of the network flexible descriptor's extraction metadata.
+ */
+union rte_pmd_ifd_proto_xtr_metadata {
+ uint32_t metadata;
+
+ struct {
+ uint16_t data0;
+ uint16_t data1;
+ } raw;
+
+ struct {
+ uint16_t stag_vid:12,
+ stag_dei:1,
+ stag_pcp:3;
+ uint16_t ctag_vid:12,
+ ctag_dei:1,
+ ctag_pcp:3;
+ } vlan;
+
+ struct {
+ uint16_t protocol:8,
+ ttl:8;
+ uint16_t tos:8,
+ ihl:4,
+ version:4;
+ } ipv4;
+
+ struct {
+ uint16_t hoplimit:8,
+ nexthdr:8;
+ uint16_t flowhi4:4,
+ tc:8,
+ version:4;
+ } ipv6;
+
+ struct {
+ uint16_t flowlo16;
+ uint16_t flowhi4:4,
+ tc:8,
+ version:4;
+ } ipv6_flow;
+
+ struct {
+ uint16_t fin:1,
+ syn:1,
+ rst:1,
+ psh:1,
+ ack:1,
+ urg:1,
+ ece:1,
+ cwr:1,
+ res1:4,
+ doff:4;
+ uint16_t rsvd;
+ } tcp;
+
+ uint32_t ip_ofs;
+};
+
+/* Offset of mbuf dynamic field for flexible descriptor's extraction data */
+extern int rte_pmd_ifd_dynfield_proto_xtr_metadata_offs;
+
+/* Mask of mbuf dynamic flags for flexible descriptor's extraction type */
+extern uint64_t rte_pmd_ifd_dynflag_proto_xtr_vlan_mask;
+extern uint64_t rte_pmd_ifd_dynflag_proto_xtr_ipv4_mask;
+extern uint64_t rte_pmd_ifd_dynflag_proto_xtr_ipv6_mask;
+extern uint64_t rte_pmd_ifd_dynflag_proto_xtr_ipv6_flow_mask;
+extern uint64_t rte_pmd_ifd_dynflag_proto_xtr_tcp_mask;
+extern uint64_t rte_pmd_ifd_dynflag_proto_xtr_ip_offset_mask;
+
+/**
+ * The mbuf dynamic field pointer for flexible descriptor's extraction metadata.
+ */
+#define RTE_PMD_IFD_DYNF_PROTO_XTR_METADATA(m) \
+ RTE_MBUF_DYNFIELD((m), \
+ rte_pmd_ifd_dynfield_proto_xtr_metadata_offs, \
+ uint32_t *)
+
+/**
+ * The mbuf dynamic flag for VLAN protocol extraction metadata; it is valid
+ * when dev_args 'proto_xtr' has 'vlan' specified.
+ */
+#define RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_VLAN \
+ (rte_pmd_ifd_dynflag_proto_xtr_vlan_mask)
+
+/**
+ * The mbuf dynamic flag for IPv4 protocol extraction metadata; it is valid
+ * when dev_args 'proto_xtr' has 'ipv4' specified.
+ */
+#define RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_IPV4 \
+ (rte_pmd_ifd_dynflag_proto_xtr_ipv4_mask)
+
+/**
+ * The mbuf dynamic flag for IPv6 protocol extraction metadata; it is valid
+ * when dev_args 'proto_xtr' has 'ipv6' specified.
+ */
+#define RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_IPV6 \
+ (rte_pmd_ifd_dynflag_proto_xtr_ipv6_mask)
+
+/**
+ * The mbuf dynamic flag for IPv6 with flow protocol extraction metadata; it is
+ * valid when dev_args 'proto_xtr' has 'ipv6_flow' specified.
+ */
+#define RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_IPV6_FLOW \
+ (rte_pmd_ifd_dynflag_proto_xtr_ipv6_flow_mask)
+
+/**
+ * The mbuf dynamic flag for TCP protocol extraction metadata; it is valid
+ * when dev_args 'proto_xtr' has 'tcp' specified.
+ */
+#define RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_TCP \
+ (rte_pmd_ifd_dynflag_proto_xtr_tcp_mask)
+
+/**
+ * The mbuf dynamic flag for IP_OFFSET extraction metadata; it is valid
+ * when dev_args 'proto_xtr' has 'ip_offset' specified.
+ */
+#define RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_IP_OFFSET \
+ (rte_pmd_ifd_dynflag_proto_xtr_ip_offset_mask)
+
+/**
+ * Check if mbuf dynamic field for flexible descriptor's extraction metadata
+ * is registered.
+ *
+ * @return
+ * True if registered, false otherwise.
+ */
+__rte_experimental
+static __rte_always_inline int
+rte_pmd_ifd_dynf_proto_xtr_metadata_avail(void)
+{
+ return rte_pmd_ifd_dynfield_proto_xtr_metadata_offs != -1;
+}
+
+/**
+ * Get the mbuf dynamic field for flexible descriptor's extraction metadata.
+ *
+ * @param m
+ * The pointer to the mbuf.
+ * @return
+ * The saved protocol extraction metadata.
+ */
+__rte_experimental
+static __rte_always_inline uint32_t
+rte_pmd_ifd_dynf_proto_xtr_metadata_get(struct rte_mbuf *m)
+{
+ return *RTE_PMD_IFD_DYNF_PROTO_XTR_METADATA(m);
+}
+
+/**
+ * Dump the mbuf dynamic field for flexible descriptor's extraction metadata.
+ *
+ * @param m
+ * The pointer to the mbuf.
+ */
+__rte_experimental
+static inline void
+rte_pmd_ifd_dump_proto_xtr_metadata(struct rte_mbuf *m)
+{
+ union rte_pmd_ifd_proto_xtr_metadata data;
+
+ if (!rte_pmd_ifd_dynf_proto_xtr_metadata_avail())
+ return;
+
+ data.metadata = rte_pmd_ifd_dynf_proto_xtr_metadata_get(m);
+
+ if (m->ol_flags & RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_VLAN)
+ printf(" - Flexible descriptor's Metadata: [0x%04x:0x%04x],"
+ "vlan,stag=%u:%u:%u,ctag=%u:%u:%u",
+ data.raw.data0, data.raw.data1,
+ data.vlan.stag_pcp,
+ data.vlan.stag_dei,
+ data.vlan.stag_vid,
+ data.vlan.ctag_pcp,
+ data.vlan.ctag_dei,
+ data.vlan.ctag_vid);
+ else if (m->ol_flags & RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_IPV4)
+ printf(" - Flexible descriptor's Metadata: [0x%04x:0x%04x],"
+ "ipv4,ver=%u,hdrlen=%u,tos=%u,ttl=%u,proto=%u",
+ data.raw.data0, data.raw.data1,
+ data.ipv4.version,
+ data.ipv4.ihl,
+ data.ipv4.tos,
+ data.ipv4.ttl,
+ data.ipv4.protocol);
+ else if (m->ol_flags & RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_IPV6)
+ printf(" - Flexible descriptor's Metadata: [0x%04x:0x%04x],"
+ "ipv6,ver=%u,tc=%u,flow_hi4=0x%x,nexthdr=%u,hoplimit=%u",
+ data.raw.data0, data.raw.data1,
+ data.ipv6.version,
+ data.ipv6.tc,
+ data.ipv6.flowhi4,
+ data.ipv6.nexthdr,
+ data.ipv6.hoplimit);
+ else if (m->ol_flags & RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_IPV6_FLOW)
+ printf(" - Flexible descriptor's Metadata: [0x%04x:0x%04x],"
+ "ipv6_flow,ver=%u,tc=%u,flow=0x%x%04x",
+ data.raw.data0, data.raw.data1,
+ data.ipv6_flow.version,
+ data.ipv6_flow.tc,
+ data.ipv6_flow.flowhi4,
+ data.ipv6_flow.flowlo16);
+ else if (m->ol_flags & RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_TCP)
+ printf(" - Flexible descriptor's Metadata: [0x%04x:0x%04x],"
+ "tcp,doff=%u,flags=%s%s%s%s%s%s%s%s",
+ data.raw.data0, data.raw.data1,
+ data.tcp.doff,
+ data.tcp.cwr ? "C" : "",
+ data.tcp.ece ? "E" : "",
+ data.tcp.urg ? "U" : "",
+ data.tcp.ack ? "A" : "",
+ data.tcp.psh ? "P" : "",
+ data.tcp.rst ? "R" : "",
+ data.tcp.syn ? "S" : "",
+ data.tcp.fin ? "F" : "");
+ else if (m->ol_flags & RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_IP_OFFSET)
+ printf(" - Flexible descriptor's Extraction: ip_offset=%u",
+ data.ip_ofs);
+}
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_PMD_IAVF_H_ */
diff --git a/drivers/net/iavf/version.map b/drivers/net/iavf/version.map
index 4a76d1d52d..2a411da2e9 100644
--- a/drivers/net/iavf/version.map
+++ b/drivers/net/iavf/version.map
@@ -1,3 +1,16 @@
DPDK_21 {
local: *;
};
+
+EXPERIMENTAL {
+ global:
+
+ # added in 20.11
+ rte_pmd_ifd_dynfield_proto_xtr_metadata_offs;
+ rte_pmd_ifd_dynflag_proto_xtr_vlan_mask;
+ rte_pmd_ifd_dynflag_proto_xtr_ipv4_mask;
+ rte_pmd_ifd_dynflag_proto_xtr_ipv6_mask;
+ rte_pmd_ifd_dynflag_proto_xtr_ipv6_flow_mask;
+ rte_pmd_ifd_dynflag_proto_xtr_tcp_mask;
+ rte_pmd_ifd_dynflag_proto_xtr_ip_offset_mask;
+};
--
2.20.1
^ permalink raw reply [flat|nested] 40+ messages in thread
* [dpdk-dev] [PATCH v12] net/iavf: support flex desc metadata extraction
2020-09-09 2:54 [dpdk-dev] [PATCH v1] net/iavf: support flex desc metadata extraction Jeff Guo
` (11 preceding siblings ...)
2020-10-30 2:54 ` [dpdk-dev] [PATCH v11] " Jeff Guo
@ 2020-10-30 8:34 ` Jeff Guo
2020-10-30 8:40 ` Jeff Guo
13 siblings, 0 replies; 40+ messages in thread
From: Jeff Guo @ 2020-10-30 8:34 UTC (permalink / raw)
To: jingjing.wu, qi.z.zhang, beilei.xing, ferruh.yigit
Cc: dev, haiyue.wang, bruce.richardson, jia.guo
Enable metadata extraction for flexible descriptors in AVF, which
allows network functions to get metadata directly without additional
parsing and thus reduces the CPU cost for VFs. The enabled metadata
extraction covers the VLAN/IPv4/IPv6/IPv6-FLOW/TCP/MPLS flexible
descriptors, and the VF can negotiate the flexible descriptor
capability with the PF and configure the specific offload on its
receive queues accordingly.
Signed-off-by: Jeff Guo <jia.guo@intel.com>
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
---
v12:
refine the doc to be more concise
v11:
update doc in .map and .rst
v10:
delete the makefile configure and rename the dynamic mbuf name
v9:
change the undef config
v8:
rebase patch for apply issue
v7:
clean up some useless code and add doc
v6:
rebase patch
v5:
remove ovs configure since ovs is not protocol extraction
v4:
add flex desc type in rx queue for handling vector path
handle ovs flex type
v3:
export these global symbols into .map
v2:
remove makefile change and modify the rxdid handling
---
doc/guides/nics/intel_vf.rst | 4 +
doc/guides/rel_notes/release_20_11.rst | 6 +
drivers/net/iavf/iavf.h | 24 +-
drivers/net/iavf/iavf_ethdev.c | 394 ++++++++++++++++++++++++
drivers/net/iavf/iavf_rxtx.c | 252 +++++++++++++--
drivers/net/iavf/iavf_rxtx.h | 169 +++++-----
drivers/net/iavf/iavf_rxtx_vec_common.h | 3 +
drivers/net/iavf/iavf_vchnl.c | 22 +-
drivers/net/iavf/meson.build | 2 +
drivers/net/iavf/rte_pmd_iavf.h | 250 +++++++++++++++
drivers/net/iavf/version.map | 13 +
11 files changed, 1025 insertions(+), 114 deletions(-)
create mode 100644 drivers/net/iavf/rte_pmd_iavf.h
diff --git a/doc/guides/nics/intel_vf.rst b/doc/guides/nics/intel_vf.rst
index 723a9c0fa2..e767695724 100644
--- a/doc/guides/nics/intel_vf.rst
+++ b/doc/guides/nics/intel_vf.rst
@@ -87,6 +87,10 @@ For more detail on SR-IOV, please refer to the following documents:
To use DPDK IAVF PMD on Intel® 700 Series Ethernet Controller, the device id (0x1889) need to specified during device
assignment in hypervisor. Take qemu for example, the device assignment should carry the IAVF device id (0x1889) like
``-device vfio-pci,x-pci-device-id=0x1889,host=03:0a.0``.
+
+ When IAVF is backed by an Intel® E810 device, the "Protocol Extraction" feature supported by the ice PMD is also
+ available for the IAVF PMD. The same devargs with the same parameters can be applied to the IAVF PMD; for details,
+ please refer to the section ``Protocol extraction for per queue`` of ice.rst.
The PCIE host-interface of Intel Ethernet Switch FM10000 Series VF infrastructure
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index 17b59c2c3d..022aa0dc6f 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -205,6 +205,12 @@ New Features
Updated the Intel qat driver to use write combining stores.
+* **Updated Intel iavf driver.**
+
+ Updated iavf PMD with new features and improvements, including:
+
+ * Added support for flexible descriptor metadata extraction.
+
* **Updated Memif PMD.**
* Added support for abstract socket address.
diff --git a/drivers/net/iavf/iavf.h b/drivers/net/iavf/iavf.h
index 3d3b0da5dd..6d5912d8c1 100644
--- a/drivers/net/iavf/iavf.h
+++ b/drivers/net/iavf/iavf.h
@@ -133,7 +133,7 @@ struct iavf_info {
struct virtchnl_vf_resource *vf_res; /* VF resource */
struct virtchnl_vsi_resource *vsi_res; /* LAN VSI */
uint64_t supported_rxdid;
-
+ uint8_t *proto_xtr; /* proto xtr type for all queues */
volatile enum virtchnl_ops pend_cmd; /* pending command not finished */
uint32_t cmd_retval; /* return value of the cmd response from PF */
uint8_t *aq_resp; /* buffer to store the adminq response from PF */
@@ -169,6 +169,27 @@ struct iavf_info {
#define IAVF_MAX_PKT_TYPE 1024
+#define IAVF_MAX_QUEUE_NUM 2048
+
+enum iavf_proto_xtr_type {
+ IAVF_PROTO_XTR_NONE,
+ IAVF_PROTO_XTR_VLAN,
+ IAVF_PROTO_XTR_IPV4,
+ IAVF_PROTO_XTR_IPV6,
+ IAVF_PROTO_XTR_IPV6_FLOW,
+ IAVF_PROTO_XTR_TCP,
+ IAVF_PROTO_XTR_IP_OFFSET,
+ IAVF_PROTO_XTR_MAX,
+};
+
+/**
+ * Cache devargs parse result.
+ */
+struct iavf_devargs {
+ uint8_t proto_xtr_dflt;
+ uint8_t proto_xtr[IAVF_MAX_QUEUE_NUM];
+};
+
/* Structure to store private data for each VF instance. */
struct iavf_adapter {
struct iavf_hw hw;
@@ -182,6 +203,7 @@ struct iavf_adapter {
const uint32_t *ptype_tbl;
bool stopped;
uint16_t fdir_ref_cnt;
+ struct iavf_devargs devargs;
};
/* IAVF_DEV_PRIVATE_TO */
diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
index 9eea8bf90c..7e3c26a94e 100644
--- a/drivers/net/iavf/iavf_ethdev.c
+++ b/drivers/net/iavf/iavf_ethdev.c
@@ -28,6 +28,49 @@
#include "iavf.h"
#include "iavf_rxtx.h"
#include "iavf_generic_flow.h"
+#include "rte_pmd_iavf.h"
+
+/* devargs */
+#define IAVF_PROTO_XTR_ARG "proto_xtr"
+
+static const char * const iavf_valid_args[] = {
+ IAVF_PROTO_XTR_ARG,
+ NULL
+};
+
+static const struct rte_mbuf_dynfield iavf_proto_xtr_metadata_param = {
+ .name = "intel_pmd_dynfield_proto_xtr_metadata",
+ .size = sizeof(uint32_t),
+ .align = __alignof__(uint32_t),
+ .flags = 0,
+};
+
+struct iavf_proto_xtr_ol {
+ const struct rte_mbuf_dynflag param;
+ uint64_t *ol_flag;
+ bool required;
+};
+
+static struct iavf_proto_xtr_ol iavf_proto_xtr_params[] = {
+ [IAVF_PROTO_XTR_VLAN] = {
+ .param = { .name = "intel_pmd_dynflag_proto_xtr_vlan" },
+ .ol_flag = &rte_pmd_ifd_dynflag_proto_xtr_vlan_mask },
+ [IAVF_PROTO_XTR_IPV4] = {
+ .param = { .name = "intel_pmd_dynflag_proto_xtr_ipv4" },
+ .ol_flag = &rte_pmd_ifd_dynflag_proto_xtr_ipv4_mask },
+ [IAVF_PROTO_XTR_IPV6] = {
+ .param = { .name = "intel_pmd_dynflag_proto_xtr_ipv6" },
+ .ol_flag = &rte_pmd_ifd_dynflag_proto_xtr_ipv6_mask },
+ [IAVF_PROTO_XTR_IPV6_FLOW] = {
+ .param = { .name = "intel_pmd_dynflag_proto_xtr_ipv6_flow" },
+ .ol_flag = &rte_pmd_ifd_dynflag_proto_xtr_ipv6_flow_mask },
+ [IAVF_PROTO_XTR_TCP] = {
+ .param = { .name = "intel_pmd_dynflag_proto_xtr_tcp" },
+ .ol_flag = &rte_pmd_ifd_dynflag_proto_xtr_tcp_mask },
+ [IAVF_PROTO_XTR_IP_OFFSET] = {
+ .param = { .name = "intel_pmd_dynflag_proto_xtr_ip_offset" },
+ .ol_flag = &rte_pmd_ifd_dynflag_proto_xtr_ip_offset_mask },
+};
static int iavf_dev_configure(struct rte_eth_dev *dev);
static int iavf_dev_start(struct rte_eth_dev *dev);
@@ -1394,6 +1437,349 @@ iavf_check_vf_reset_done(struct iavf_hw *hw)
return 0;
}
+static int
+iavf_lookup_proto_xtr_type(const char *flex_name)
+{
+ static struct {
+ const char *name;
+ enum iavf_proto_xtr_type type;
+ } xtr_type_map[] = {
+ { "vlan", IAVF_PROTO_XTR_VLAN },
+ { "ipv4", IAVF_PROTO_XTR_IPV4 },
+ { "ipv6", IAVF_PROTO_XTR_IPV6 },
+ { "ipv6_flow", IAVF_PROTO_XTR_IPV6_FLOW },
+ { "tcp", IAVF_PROTO_XTR_TCP },
+ { "ip_offset", IAVF_PROTO_XTR_IP_OFFSET },
+ };
+ uint32_t i;
+
+ for (i = 0; i < RTE_DIM(xtr_type_map); i++) {
+ if (strcmp(flex_name, xtr_type_map[i].name) == 0)
+ return xtr_type_map[i].type;
+ }
+
+ PMD_DRV_LOG(ERR, "wrong proto_xtr type, "
+ "it should be: vlan|ipv4|ipv6|ipv6_flow|tcp|ip_offset");
+
+ return -1;
+}
+
+/**
+ * Parse an elem; an elem can be a single number/range or a '(' ')' group:
+ * 1) A single number elem, just a plain number, e.g. 9
+ * 2) A single range elem, two numbers joined by '-', e.g. 2-6
+ * 3) A group elem, combining multiple 1) or 2) within '( )', e.g. (0,2-4,6)
+ * Within a group elem, '-' is used as a range separator and
+ * ',' separates single numbers.
+ */
+static int
+iavf_parse_queue_set(const char *input, int xtr_type,
+ struct iavf_devargs *devargs)
+{
+ const char *str = input;
+ char *end = NULL;
+ uint32_t min, max;
+ uint32_t idx;
+
+ while (isblank(*str))
+ str++;
+
+ if (!isdigit(*str) && *str != '(')
+ return -1;
+
+ /* process single number or single range of number */
+ if (*str != '(') {
+ errno = 0;
+ idx = strtoul(str, &end, 10);
+ if (errno || !end || idx >= IAVF_MAX_QUEUE_NUM)
+ return -1;
+
+ while (isblank(*end))
+ end++;
+
+ min = idx;
+ max = idx;
+
+ /* process single <number>-<number> */
+ if (*end == '-') {
+ end++;
+ while (isblank(*end))
+ end++;
+ if (!isdigit(*end))
+ return -1;
+
+ errno = 0;
+ idx = strtoul(end, &end, 10);
+ if (errno || !end || idx >= IAVF_MAX_QUEUE_NUM)
+ return -1;
+
+ max = idx;
+ while (isblank(*end))
+ end++;
+ }
+
+ if (*end != ':')
+ return -1;
+
+ for (idx = RTE_MIN(min, max);
+ idx <= RTE_MAX(min, max); idx++)
+ devargs->proto_xtr[idx] = xtr_type;
+
+ return 0;
+ }
+
+ /* process set within bracket */
+ str++;
+ while (isblank(*str))
+ str++;
+ if (*str == '\0')
+ return -1;
+
+ min = IAVF_MAX_QUEUE_NUM;
+ do {
+ /* go ahead to the first digit */
+ while (isblank(*str))
+ str++;
+ if (!isdigit(*str))
+ return -1;
+
+ /* get the digit value */
+ errno = 0;
+ idx = strtoul(str, &end, 10);
+ if (errno || !end || idx >= IAVF_MAX_QUEUE_NUM)
+ return -1;
+
+ /* go ahead to separator '-',',' and ')' */
+ while (isblank(*end))
+ end++;
+ if (*end == '-') {
+ if (min == IAVF_MAX_QUEUE_NUM)
+ min = idx;
+ else /* avoid continuous '-' */
+ return -1;
+ } else if (*end == ',' || *end == ')') {
+ max = idx;
+ if (min == IAVF_MAX_QUEUE_NUM)
+ min = idx;
+
+ for (idx = RTE_MIN(min, max);
+ idx <= RTE_MAX(min, max); idx++)
+ devargs->proto_xtr[idx] = xtr_type;
+
+ min = IAVF_MAX_QUEUE_NUM;
+ } else {
+ return -1;
+ }
+
+ str = end + 1;
+ } while (*end != ')' && *end != '\0');
+
+ return 0;
+}
+
+static int
+iavf_parse_queue_proto_xtr(const char *queues, struct iavf_devargs *devargs)
+{
+ const char *queue_start;
+ uint32_t idx;
+ int xtr_type;
+ char flex_name[32];
+
+ while (isblank(*queues))
+ queues++;
+
+ if (*queues != '[') {
+ xtr_type = iavf_lookup_proto_xtr_type(queues);
+ if (xtr_type < 0)
+ return -1;
+
+ devargs->proto_xtr_dflt = xtr_type;
+
+ return 0;
+ }
+
+ queues++;
+ do {
+ while (isblank(*queues))
+ queues++;
+ if (*queues == '\0')
+ return -1;
+
+ queue_start = queues;
+
+ /* go across a complete bracket */
+ if (*queue_start == '(') {
+ queues += strcspn(queues, ")");
+ if (*queues != ')')
+ return -1;
+ }
+
+ /* scan the separator ':' */
+ queues += strcspn(queues, ":");
+ if (*queues++ != ':')
+ return -1;
+ while (isblank(*queues))
+ queues++;
+
+ for (idx = 0; ; idx++) {
+ if (isblank(queues[idx]) ||
+ queues[idx] == ',' ||
+ queues[idx] == ']' ||
+ queues[idx] == '\0')
+ break;
+
+ if (idx > sizeof(flex_name) - 2)
+ return -1;
+
+ flex_name[idx] = queues[idx];
+ }
+ flex_name[idx] = '\0';
+ xtr_type = iavf_lookup_proto_xtr_type(flex_name);
+ if (xtr_type < 0)
+ return -1;
+
+ queues += idx;
+
+ while (isblank(*queues) || *queues == ',' || *queues == ']')
+ queues++;
+
+ if (iavf_parse_queue_set(queue_start, xtr_type, devargs) < 0)
+ return -1;
+ } while (*queues != '\0');
+
+ return 0;
+}
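[Editor's note: for readers unfamiliar with the `proto_xtr` devargs grammar parsed above (e.g. `proto_xtr=vlan` for all queues, or `proto_xtr=[(0,2-3):ipv4,5:tcp]` for per-queue sets), the range-assignment idea can be sketched standalone. This is a hypothetical helper mirroring the `iavf_parse_queue_set()` logic, not the driver code itself.]

```c
#include <errno.h>
#include <stdlib.h>

#define MAX_QUEUES 16

/* Parse "N" or "N-M" and mark queues[lo..hi] with xtr_type.
 * Returns 0 on success, -1 on malformed input or out-of-range index.
 * Illustrative only; the real parser also handles blanks, ':' and
 * bracketed sets. */
static int parse_queue_range(const char *s, int xtr_type, unsigned char *queues)
{
	char *end;
	unsigned long lo, hi;

	errno = 0;
	lo = strtoul(s, &end, 10);
	if (errno || end == s || lo >= MAX_QUEUES)
		return -1;
	hi = lo;
	if (*end == '-') {
		const char *p = end + 1;

		errno = 0;
		hi = strtoul(p, &end, 10);
		if (errno || end == p || hi >= MAX_QUEUES)
			return -1;
	}
	if (*end != '\0')
		return -1;
	for (unsigned long i = (lo < hi ? lo : hi);
	     i <= (lo < hi ? hi : lo); i++)
		queues[i] = (unsigned char)xtr_type;
	return 0;
}
```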
+
+static int
+iavf_handle_proto_xtr_arg(__rte_unused const char *key, const char *value,
+ void *extra_args)
+{
+ struct iavf_devargs *devargs = extra_args;
+
+ if (!value || !extra_args)
+ return -EINVAL;
+
+ if (iavf_parse_queue_proto_xtr(value, devargs) < 0) {
+ PMD_DRV_LOG(ERR, "invalid proto_xtr parameter: '%s'",
+ value);
+ return -1;
+ }
+
+ return 0;
+}
+
+static int iavf_parse_devargs(struct rte_eth_dev *dev)
+{
+ struct iavf_adapter *ad =
+ IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+ struct rte_devargs *devargs = dev->device->devargs;
+ struct rte_kvargs *kvlist;
+ int ret;
+
+ if (!devargs)
+ return 0;
+
+ kvlist = rte_kvargs_parse(devargs->args, iavf_valid_args);
+ if (!kvlist) {
+ PMD_INIT_LOG(ERR, "invalid kvargs key");
+ return -EINVAL;
+ }
+
+ ad->devargs.proto_xtr_dflt = IAVF_PROTO_XTR_NONE;
+ memset(ad->devargs.proto_xtr, IAVF_PROTO_XTR_NONE,
+ sizeof(ad->devargs.proto_xtr));
+
+ ret = rte_kvargs_process(kvlist, IAVF_PROTO_XTR_ARG,
+ &iavf_handle_proto_xtr_arg, &ad->devargs);
+ if (ret)
+ goto bail;
+
+bail:
+ rte_kvargs_free(kvlist);
+ return ret;
+}
+
+static void
+iavf_init_proto_xtr(struct rte_eth_dev *dev)
+{
+ struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+ struct iavf_adapter *ad =
+ IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+ const struct iavf_proto_xtr_ol *xtr_ol;
+ bool proto_xtr_enable = false;
+ int offset;
+ uint16_t i;
+
+ vf->proto_xtr = rte_zmalloc("vf proto xtr",
+ vf->vsi_res->num_queue_pairs, 0);
+ if (unlikely(!(vf->proto_xtr))) {
+ PMD_DRV_LOG(ERR, "no memory for setting up proto_xtr's table");
+ return;
+ }
+
+ for (i = 0; i < vf->vsi_res->num_queue_pairs; i++) {
+ vf->proto_xtr[i] = ad->devargs.proto_xtr[i] !=
+ IAVF_PROTO_XTR_NONE ?
+ ad->devargs.proto_xtr[i] :
+ ad->devargs.proto_xtr_dflt;
+
+ if (vf->proto_xtr[i] != IAVF_PROTO_XTR_NONE) {
+ uint8_t type = vf->proto_xtr[i];
+
+ iavf_proto_xtr_params[type].required = true;
+ proto_xtr_enable = true;
+ }
+ }
+
+ if (likely(!proto_xtr_enable))
+ return;
+
+ offset = rte_mbuf_dynfield_register(&iavf_proto_xtr_metadata_param);
+ if (unlikely(offset == -1)) {
+ PMD_DRV_LOG(ERR,
+ "failed to extract protocol metadata, error %d",
+ -rte_errno);
+ return;
+ }
+
+ PMD_DRV_LOG(DEBUG,
+ "proto_xtr metadata offset in mbuf is : %d",
+ offset);
+ rte_pmd_ifd_dynfield_proto_xtr_metadata_offs = offset;
+
+ for (i = 0; i < RTE_DIM(iavf_proto_xtr_params); i++) {
+ xtr_ol = &iavf_proto_xtr_params[i];
+
+ uint8_t rxdid = iavf_proto_xtr_type_to_rxdid((uint8_t)i);
+
+ if (!xtr_ol->required)
+ continue;
+
+ if (!(vf->supported_rxdid & BIT(rxdid))) {
+ PMD_DRV_LOG(ERR,
+ "rxdid[%u] is not supported in hardware",
+ rxdid);
+ rte_pmd_ifd_dynfield_proto_xtr_metadata_offs = -1;
+ break;
+ }
+
+ offset = rte_mbuf_dynflag_register(&xtr_ol->param);
+ if (unlikely(offset == -1)) {
+ PMD_DRV_LOG(ERR,
+ "failed to register proto_xtr offload '%s', error %d",
+ xtr_ol->param.name, -rte_errno);
+
+ rte_pmd_ifd_dynfield_proto_xtr_metadata_offs = -1;
+ break;
+ }
+
+ PMD_DRV_LOG(DEBUG,
+ "proto_xtr offload '%s' offset in mbuf is : %d",
+ xtr_ol->param.name, offset);
+ *xtr_ol->ol_flag = 1ULL << offset;
+ }
+}
+
static int
iavf_init_vf(struct rte_eth_dev *dev)
{
@@ -1403,6 +1789,12 @@ iavf_init_vf(struct rte_eth_dev *dev)
struct iavf_hw *hw = IAVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+ err = iavf_parse_devargs(dev);
+ if (err) {
+ PMD_INIT_LOG(ERR, "Failed to parse devargs");
+ goto err;
+ }
+
err = iavf_set_mac_type(hw);
if (err) {
PMD_INIT_LOG(ERR, "set_mac_type failed: %d", err);
@@ -1466,6 +1858,8 @@ iavf_init_vf(struct rte_eth_dev *dev)
}
}
+ iavf_init_proto_xtr(dev);
+
return 0;
err_rss:
rte_free(vf->rss_key);
diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
index 160d81b761..baac5d65c8 100644
--- a/drivers/net/iavf/iavf_rxtx.c
+++ b/drivers/net/iavf/iavf_rxtx.c
@@ -27,6 +27,35 @@
#include "iavf.h"
#include "iavf_rxtx.h"
+#include "rte_pmd_iavf.h"
+
+/* Offset of mbuf dynamic field for protocol extraction's metadata */
+int rte_pmd_ifd_dynfield_proto_xtr_metadata_offs = -1;
+
+/* Mask of mbuf dynamic flags for protocol extraction's type */
+uint64_t rte_pmd_ifd_dynflag_proto_xtr_vlan_mask;
+uint64_t rte_pmd_ifd_dynflag_proto_xtr_ipv4_mask;
+uint64_t rte_pmd_ifd_dynflag_proto_xtr_ipv6_mask;
+uint64_t rte_pmd_ifd_dynflag_proto_xtr_ipv6_flow_mask;
+uint64_t rte_pmd_ifd_dynflag_proto_xtr_tcp_mask;
+uint64_t rte_pmd_ifd_dynflag_proto_xtr_ip_offset_mask;
+
+uint8_t
+iavf_proto_xtr_type_to_rxdid(uint8_t flex_type)
+{
+ static uint8_t rxdid_map[] = {
+ [IAVF_PROTO_XTR_NONE] = IAVF_RXDID_COMMS_OVS_1,
+ [IAVF_PROTO_XTR_VLAN] = IAVF_RXDID_COMMS_AUX_VLAN,
+ [IAVF_PROTO_XTR_IPV4] = IAVF_RXDID_COMMS_AUX_IPV4,
+ [IAVF_PROTO_XTR_IPV6] = IAVF_RXDID_COMMS_AUX_IPV6,
+ [IAVF_PROTO_XTR_IPV6_FLOW] = IAVF_RXDID_COMMS_AUX_IPV6_FLOW,
+ [IAVF_PROTO_XTR_TCP] = IAVF_RXDID_COMMS_AUX_TCP,
+ [IAVF_PROTO_XTR_IP_OFFSET] = IAVF_RXDID_COMMS_AUX_IP_OFFSET,
+ };
+
+ return flex_type < RTE_DIM(rxdid_map) ?
+ rxdid_map[flex_type] : IAVF_RXDID_COMMS_OVS_1;
+}
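[Editor's note: the function above is a bounded table lookup with a safe default for any out-of-range extraction type. The pattern, with illustrative values rather than the real RXDID numbers, can be sketched as:]

```c
/* Illustrative sketch of the bounded lookup-with-fallback pattern used
 * by iavf_proto_xtr_type_to_rxdid(); enum names and RXDID values here
 * are hypothetical placeholders. */
enum xtr_type { XTR_NONE, XTR_VLAN, XTR_IPV4 };

#define RXDID_DEFAULT 22 /* stands in for IAVF_RXDID_COMMS_OVS_1 */

static unsigned char xtr_to_rxdid(unsigned char t)
{
	static const unsigned char map[] = {
		[XTR_NONE] = RXDID_DEFAULT,
		[XTR_VLAN] = 17,
		[XTR_IPV4] = 18,
	};

	/* Any type beyond the table falls back to the default profile. */
	return t < sizeof(map) / sizeof(map[0]) ? map[t] : RXDID_DEFAULT;
}
```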
static inline int
check_rx_thresh(uint16_t nb_desc, uint16_t thresh)
@@ -295,6 +324,160 @@ static const struct iavf_txq_ops def_txq_ops = {
.release_mbufs = release_txq_mbufs,
};
+static inline void
+iavf_rxd_to_pkt_fields_by_comms_ovs(__rte_unused struct iavf_rx_queue *rxq,
+ struct rte_mbuf *mb,
+ volatile union iavf_rx_flex_desc *rxdp)
+{
+ volatile struct iavf_32b_rx_flex_desc_comms_ovs *desc =
+ (volatile struct iavf_32b_rx_flex_desc_comms_ovs *)rxdp;
+#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
+ uint16_t stat_err;
+#endif
+
+ if (desc->flow_id != 0xFFFFFFFF) {
+ mb->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+ mb->hash.fdir.hi = rte_le_to_cpu_32(desc->flow_id);
+ }
+
+#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
+ stat_err = rte_le_to_cpu_16(desc->status_error0);
+ if (likely(stat_err & (1 << IAVF_RX_FLEX_DESC_STATUS0_RSS_VALID_S))) {
+ mb->ol_flags |= PKT_RX_RSS_HASH;
+ mb->hash.rss = rte_le_to_cpu_32(desc->rss_hash);
+ }
+#endif
+}
+
+static inline void
+iavf_rxd_to_pkt_fields_by_comms_aux_v1(struct iavf_rx_queue *rxq,
+ struct rte_mbuf *mb,
+ volatile union iavf_rx_flex_desc *rxdp)
+{
+ volatile struct iavf_32b_rx_flex_desc_comms *desc =
+ (volatile struct iavf_32b_rx_flex_desc_comms *)rxdp;
+ uint16_t stat_err;
+
+ stat_err = rte_le_to_cpu_16(desc->status_error0);
+ if (likely(stat_err & (1 << IAVF_RX_FLEX_DESC_STATUS0_RSS_VALID_S))) {
+ mb->ol_flags |= PKT_RX_RSS_HASH;
+ mb->hash.rss = rte_le_to_cpu_32(desc->rss_hash);
+ }
+
+#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
+ if (desc->flow_id != 0xFFFFFFFF) {
+ mb->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+ mb->hash.fdir.hi = rte_le_to_cpu_32(desc->flow_id);
+ }
+
+ if (rxq->xtr_ol_flag) {
+ uint32_t metadata = 0;
+
+ stat_err = rte_le_to_cpu_16(desc->status_error1);
+
+ if (stat_err & (1 << IAVF_RX_FLEX_DESC_STATUS1_XTRMD4_VALID_S))
+ metadata = rte_le_to_cpu_16(desc->flex_ts.flex.aux0);
+
+ if (stat_err & (1 << IAVF_RX_FLEX_DESC_STATUS1_XTRMD5_VALID_S))
+ metadata |=
+ rte_le_to_cpu_16(desc->flex_ts.flex.aux1) << 16;
+
+ if (metadata) {
+ mb->ol_flags |= rxq->xtr_ol_flag;
+
+ *RTE_PMD_IFD_DYNF_PROTO_XTR_METADATA(mb) = metadata;
+ }
+ }
+#endif
+}
+
+static inline void
+iavf_rxd_to_pkt_fields_by_comms_aux_v2(struct iavf_rx_queue *rxq,
+ struct rte_mbuf *mb,
+ volatile union iavf_rx_flex_desc *rxdp)
+{
+ volatile struct iavf_32b_rx_flex_desc_comms *desc =
+ (volatile struct iavf_32b_rx_flex_desc_comms *)rxdp;
+ uint16_t stat_err;
+
+ stat_err = rte_le_to_cpu_16(desc->status_error0);
+ if (likely(stat_err & (1 << IAVF_RX_FLEX_DESC_STATUS0_RSS_VALID_S))) {
+ mb->ol_flags |= PKT_RX_RSS_HASH;
+ mb->hash.rss = rte_le_to_cpu_32(desc->rss_hash);
+ }
+
+#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
+ if (desc->flow_id != 0xFFFFFFFF) {
+ mb->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+ mb->hash.fdir.hi = rte_le_to_cpu_32(desc->flow_id);
+ }
+
+ if (rxq->xtr_ol_flag) {
+ uint32_t metadata = 0;
+
+ if (desc->flex_ts.flex.aux0 != 0xFFFF)
+ metadata = rte_le_to_cpu_16(desc->flex_ts.flex.aux0);
+ else if (desc->flex_ts.flex.aux1 != 0xFFFF)
+ metadata = rte_le_to_cpu_16(desc->flex_ts.flex.aux1);
+
+ if (metadata) {
+ mb->ol_flags |= rxq->xtr_ol_flag;
+
+ *RTE_PMD_IFD_DYNF_PROTO_XTR_METADATA(mb) = metadata;
+ }
+ }
+#endif
+}
+
+static void
+iavf_select_rxd_to_pkt_fields_handler(struct iavf_rx_queue *rxq, uint32_t rxdid)
+{
+ switch (rxdid) {
+ case IAVF_RXDID_COMMS_AUX_VLAN:
+ rxq->xtr_ol_flag = rte_pmd_ifd_dynflag_proto_xtr_vlan_mask;
+ rxq->rxd_to_pkt_fields =
+ iavf_rxd_to_pkt_fields_by_comms_aux_v1;
+ break;
+ case IAVF_RXDID_COMMS_AUX_IPV4:
+ rxq->xtr_ol_flag = rte_pmd_ifd_dynflag_proto_xtr_ipv4_mask;
+ rxq->rxd_to_pkt_fields =
+ iavf_rxd_to_pkt_fields_by_comms_aux_v1;
+ break;
+ case IAVF_RXDID_COMMS_AUX_IPV6:
+ rxq->xtr_ol_flag = rte_pmd_ifd_dynflag_proto_xtr_ipv6_mask;
+ rxq->rxd_to_pkt_fields =
+ iavf_rxd_to_pkt_fields_by_comms_aux_v1;
+ break;
+ case IAVF_RXDID_COMMS_AUX_IPV6_FLOW:
+ rxq->xtr_ol_flag =
+ rte_pmd_ifd_dynflag_proto_xtr_ipv6_flow_mask;
+ rxq->rxd_to_pkt_fields =
+ iavf_rxd_to_pkt_fields_by_comms_aux_v1;
+ break;
+ case IAVF_RXDID_COMMS_AUX_TCP:
+ rxq->xtr_ol_flag = rte_pmd_ifd_dynflag_proto_xtr_tcp_mask;
+ rxq->rxd_to_pkt_fields =
+ iavf_rxd_to_pkt_fields_by_comms_aux_v1;
+ break;
+ case IAVF_RXDID_COMMS_AUX_IP_OFFSET:
+ rxq->xtr_ol_flag =
+ rte_pmd_ifd_dynflag_proto_xtr_ip_offset_mask;
+ rxq->rxd_to_pkt_fields =
+ iavf_rxd_to_pkt_fields_by_comms_aux_v2;
+ break;
+ case IAVF_RXDID_COMMS_OVS_1:
+ rxq->rxd_to_pkt_fields = iavf_rxd_to_pkt_fields_by_comms_ovs;
+ break;
+ default:
+ /* update this according to the RXDID for FLEX_DESC_NONE */
+ rxq->rxd_to_pkt_fields = iavf_rxd_to_pkt_fields_by_comms_ovs;
+ break;
+ }
+
+ if (!rte_pmd_ifd_dynf_proto_xtr_metadata_avail())
+ rxq->xtr_ol_flag = 0;
+}
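[Editor's note: the handler selection above binds a per-queue function pointer once at queue setup, so the hot RX path makes a single indirect call per descriptor instead of switching on the RXDID per packet. A minimal sketch of that dispatch pattern, with hypothetical types:]

```c
#include <stdint.h>

struct pkt { uint64_t flags; uint32_t meta; };

struct queue {
	void (*fill)(struct queue *q, struct pkt *p);
	uint64_t xtr_flag;
};

static void fill_basic(struct queue *q, struct pkt *p)
{
	(void)q;
	p->flags |= 1; /* e.g. RSS-hash-valid only */
}

static void fill_aux(struct queue *q, struct pkt *p)
{
	p->flags |= q->xtr_flag; /* per-queue extraction dynflag */
	p->meta = 42;            /* stand-in for extracted metadata */
}

/* Chosen once at setup, like iavf_select_rxd_to_pkt_fields_handler(). */
static void select_handler(struct queue *q, int rxdid_has_aux)
{
	q->fill = rxdid_has_aux ? fill_aux : fill_basic;
}
```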
+
int
iavf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
uint16_t nb_desc, unsigned int socket_id,
@@ -310,6 +493,7 @@ iavf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
struct iavf_rx_queue *rxq;
const struct rte_memzone *mz;
uint32_t ring_size;
+ uint8_t proto_xtr;
uint16_t len;
uint16_t rx_free_thresh;
@@ -347,14 +531,18 @@ iavf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
return -ENOMEM;
}
- if (vf->vf_res->vf_cap_flags &
- VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC &&
- vf->supported_rxdid & BIT(IAVF_RXDID_COMMS_OVS_1)) {
- rxq->rxdid = IAVF_RXDID_COMMS_OVS_1;
+ if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC) {
+ proto_xtr = vf->proto_xtr ? vf->proto_xtr[queue_idx] :
+ IAVF_PROTO_XTR_NONE;
+ rxq->rxdid = iavf_proto_xtr_type_to_rxdid(proto_xtr);
+ rxq->proto_xtr = proto_xtr;
} else {
rxq->rxdid = IAVF_RXDID_LEGACY_1;
+ rxq->proto_xtr = IAVF_PROTO_XTR_NONE;
}
+ iavf_select_rxd_to_pkt_fields_handler(rxq, rxq->rxdid);
+
rxq->mp = mp;
rxq->nb_rx_desc = nb_desc;
rxq->rx_free_thresh = rx_free_thresh;
@@ -735,6 +923,14 @@ iavf_stop_queues(struct rte_eth_dev *dev)
}
}
+#define IAVF_RX_FLEX_ERR0_BITS \
+ ((1 << IAVF_RX_FLEX_DESC_STATUS0_HBO_S) | \
+ (1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_IPE_S) | \
+ (1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_L4E_S) | \
+ (1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_EIPE_S) | \
+ (1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_EUDPE_S) | \
+ (1 << IAVF_RX_FLEX_DESC_STATUS0_RXE_S))
+
static inline void
iavf_rxd_to_vlan_tci(struct rte_mbuf *mb, volatile union iavf_rx_desc *rxdp)
{
@@ -760,6 +956,21 @@ iavf_flex_rxd_to_vlan_tci(struct rte_mbuf *mb,
} else {
mb->vlan_tci = 0;
}
+
+#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
+ if (rte_le_to_cpu_16(rxdp->wb.status_error1) &
+ (1 << IAVF_RX_FLEX_DESC_STATUS1_L2TAG2P_S)) {
+ mb->ol_flags |= PKT_RX_QINQ_STRIPPED | PKT_RX_QINQ |
+ PKT_RX_VLAN_STRIPPED | PKT_RX_VLAN;
+ mb->vlan_tci_outer = mb->vlan_tci;
+ mb->vlan_tci = rte_le_to_cpu_16(rxdp->wb.l2tag2_2nd);
+ PMD_RX_LOG(DEBUG, "Descriptor l2tag2_1: %u, l2tag2_2: %u",
+ rte_le_to_cpu_16(rxdp->wb.l2tag2_1st),
+ rte_le_to_cpu_16(rxdp->wb.l2tag2_2nd));
+ } else {
+ mb->vlan_tci_outer = 0;
+ }
+#endif
}
/* Translate the rx descriptor status and error fields to pkt flags */
@@ -824,30 +1035,6 @@ iavf_rxd_build_fdir(volatile union iavf_rx_desc *rxdp, struct rte_mbuf *mb)
return flags;
}
-
-/* Translate the rx flex descriptor status to pkt flags */
-static inline void
-iavf_rxd_to_pkt_fields(struct rte_mbuf *mb,
- volatile union iavf_rx_flex_desc *rxdp)
-{
- volatile struct iavf_32b_rx_flex_desc_comms_ovs *desc =
- (volatile struct iavf_32b_rx_flex_desc_comms_ovs *)rxdp;
-#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
- uint16_t stat_err;
-
- stat_err = rte_le_to_cpu_16(desc->status_error0);
- if (likely(stat_err & (1 << IAVF_RX_FLEX_DESC_STATUS0_RSS_VALID_S))) {
- mb->ol_flags |= PKT_RX_RSS_HASH;
- mb->hash.rss = rte_le_to_cpu_32(desc->rss_hash);
- }
-#endif
-
- if (desc->flow_id != 0xFFFFFFFF) {
- mb->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
- mb->hash.fdir.hi = rte_le_to_cpu_32(desc->flow_id);
- }
-}
-
#define IAVF_RX_FLEX_ERR0_BITS \
((1 << IAVF_RX_FLEX_DESC_STATUS0_HBO_S) | \
(1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_IPE_S) | \
@@ -1102,7 +1289,7 @@ iavf_recv_pkts_flex_rxd(void *rx_queue,
rxm->packet_type = ptype_tbl[IAVF_RX_FLEX_DESC_PTYPE_M &
rte_le_to_cpu_16(rxd.wb.ptype_flex_flags0)];
iavf_flex_rxd_to_vlan_tci(rxm, &rxd);
- iavf_rxd_to_pkt_fields(rxm, &rxd);
+ rxq->rxd_to_pkt_fields(rxq, rxm, &rxd);
pkt_flags = iavf_flex_rxd_error_to_pkt_flags(rx_stat_err0);
rxm->ol_flags |= pkt_flags;
@@ -1243,7 +1430,7 @@ iavf_recv_scattered_pkts_flex_rxd(void *rx_queue, struct rte_mbuf **rx_pkts,
first_seg->packet_type = ptype_tbl[IAVF_RX_FLEX_DESC_PTYPE_M &
rte_le_to_cpu_16(rxd.wb.ptype_flex_flags0)];
iavf_flex_rxd_to_vlan_tci(first_seg, &rxd);
- iavf_rxd_to_pkt_fields(first_seg, &rxd);
+ rxq->rxd_to_pkt_fields(rxq, first_seg, &rxd);
pkt_flags = iavf_flex_rxd_error_to_pkt_flags(rx_stat_err0);
first_seg->ol_flags |= pkt_flags;
@@ -1480,7 +1667,7 @@ iavf_rx_scan_hw_ring_flex_rxd(struct iavf_rx_queue *rxq)
mb->packet_type = ptype_tbl[IAVF_RX_FLEX_DESC_PTYPE_M &
rte_le_to_cpu_16(rxdp[j].wb.ptype_flex_flags0)];
iavf_flex_rxd_to_vlan_tci(mb, &rxdp[j]);
- iavf_rxd_to_pkt_fields(mb, &rxdp[j]);
+ rxq->rxd_to_pkt_fields(rxq, mb, &rxdp[j]);
stat_err0 = rte_le_to_cpu_16(rxdp[j].wb.status_error0);
pkt_flags = iavf_flex_rxd_error_to_pkt_flags(stat_err0);
@@ -1672,7 +1859,7 @@ rx_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
if (rxq->rx_nb_avail)
return iavf_rx_fill_from_stage(rxq, rx_pkts, nb_pkts);
- if (rxq->rxdid == IAVF_RXDID_COMMS_OVS_1)
+ if (rxq->rxdid >= IAVF_RXDID_FLEX_NIC && rxq->rxdid <= IAVF_RXDID_LAST)
nb_rx = (uint16_t)iavf_rx_scan_hw_ring_flex_rxd(rxq);
else
nb_rx = (uint16_t)iavf_rx_scan_hw_ring(rxq);
@@ -2119,6 +2306,7 @@ iavf_set_rx_function(struct rte_eth_dev *dev)
struct iavf_adapter *adapter =
IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+
#ifdef RTE_ARCH_X86
struct iavf_rx_queue *rxq;
int i;
diff --git a/drivers/net/iavf/iavf_rxtx.h b/drivers/net/iavf/iavf_rxtx.h
index b22ccc42eb..d4b4935be6 100644
--- a/drivers/net/iavf/iavf_rxtx.h
+++ b/drivers/net/iavf/iavf_rxtx.h
@@ -57,6 +57,78 @@
#define IAVF_TX_OFFLOAD_NOTSUP_MASK \
(PKT_TX_OFFLOAD_MASK ^ IAVF_TX_OFFLOAD_MASK)
+/**
+ * Rx Flex Descriptors
+ * These descriptors are used instead of the legacy version descriptors
+ */
+union iavf_16b_rx_flex_desc {
+ struct {
+ __le64 pkt_addr; /* Packet buffer address */
+ __le64 hdr_addr; /* Header buffer address */
+ /* bit 0 of hdr_addr is DD bit */
+ } read;
+ struct {
+ /* Qword 0 */
+ u8 rxdid; /* descriptor builder profile ID */
+ u8 mir_id_umb_cast; /* mirror=[5:0], umb=[7:6] */
+ __le16 ptype_flex_flags0; /* ptype=[9:0], ff0=[15:10] */
+ __le16 pkt_len; /* [15:14] are reserved */
+ __le16 hdr_len_sph_flex_flags1; /* header=[10:0] */
+ /* sph=[11:11] */
+ /* ff1/ext=[15:12] */
+
+ /* Qword 1 */
+ __le16 status_error0;
+ __le16 l2tag1;
+ __le16 flex_meta0;
+ __le16 flex_meta1;
+ } wb; /* writeback */
+};
+
+union iavf_32b_rx_flex_desc {
+ struct {
+ __le64 pkt_addr; /* Packet buffer address */
+ __le64 hdr_addr; /* Header buffer address */
+ /* bit 0 of hdr_addr is DD bit */
+ __le64 rsvd1;
+ __le64 rsvd2;
+ } read;
+ struct {
+ /* Qword 0 */
+ u8 rxdid; /* descriptor builder profile ID */
+ u8 mir_id_umb_cast; /* mirror=[5:0], umb=[7:6] */
+ __le16 ptype_flex_flags0; /* ptype=[9:0], ff0=[15:10] */
+ __le16 pkt_len; /* [15:14] are reserved */
+ __le16 hdr_len_sph_flex_flags1; /* header=[10:0] */
+ /* sph=[11:11] */
+ /* ff1/ext=[15:12] */
+
+ /* Qword 1 */
+ __le16 status_error0;
+ __le16 l2tag1;
+ __le16 flex_meta0;
+ __le16 flex_meta1;
+
+ /* Qword 2 */
+ __le16 status_error1;
+ u8 flex_flags2;
+ u8 time_stamp_low;
+ __le16 l2tag2_1st;
+ __le16 l2tag2_2nd;
+
+ /* Qword 3 */
+ __le16 flex_meta2;
+ __le16 flex_meta3;
+ union {
+ struct {
+ __le16 flex_meta4;
+ __le16 flex_meta5;
+ } flex;
+ __le32 ts_high;
+ } flex_ts;
+ } wb; /* writeback */
+};
+
/* HW desc structure, both 16-byte and 32-byte types are supported */
#ifdef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
#define iavf_rx_desc iavf_16byte_rx_desc
@@ -66,6 +138,10 @@
#define iavf_rx_flex_desc iavf_32b_rx_flex_desc
#endif
+typedef void (*iavf_rxd_to_pkt_fields_t)(struct iavf_rx_queue *rxq,
+ struct rte_mbuf *mb,
+ volatile union iavf_rx_flex_desc *rxdp);
+
struct iavf_rxq_ops {
void (*release_mbufs)(struct iavf_rx_queue *rxq);
};
@@ -114,6 +190,11 @@ struct iavf_rx_queue {
bool q_set; /* if rx queue has been configured */
bool rx_deferred_start; /* don't start this queue in dev start */
const struct iavf_rxq_ops *ops;
+ uint8_t proto_xtr; /* protocol extraction type */
+ uint64_t xtr_ol_flag;
+ /* flexible descriptor metadata extraction offload flag */
+ iavf_rxd_to_pkt_fields_t rxd_to_pkt_fields;
+ /* handle flexible descriptor by RXDID */
};
struct iavf_tx_entry {
@@ -165,77 +246,6 @@ union iavf_tx_offload {
};
};
-/* Rx Flex Descriptors
- * These descriptors are used instead of the legacy version descriptors
- */
-union iavf_16b_rx_flex_desc {
- struct {
- __le64 pkt_addr; /* Packet buffer address */
- __le64 hdr_addr; /* Header buffer address */
- /* bit 0 of hdr_addr is DD bit */
- } read;
- struct {
- /* Qword 0 */
- u8 rxdid; /* descriptor builder profile ID */
- u8 mir_id_umb_cast; /* mirror=[5:0], umb=[7:6] */
- __le16 ptype_flex_flags0; /* ptype=[9:0], ff0=[15:10] */
- __le16 pkt_len; /* [15:14] are reserved */
- __le16 hdr_len_sph_flex_flags1; /* header=[10:0] */
- /* sph=[11:11] */
- /* ff1/ext=[15:12] */
-
- /* Qword 1 */
- __le16 status_error0;
- __le16 l2tag1;
- __le16 flex_meta0;
- __le16 flex_meta1;
- } wb; /* writeback */
-};
-
-union iavf_32b_rx_flex_desc {
- struct {
- __le64 pkt_addr; /* Packet buffer address */
- __le64 hdr_addr; /* Header buffer address */
- /* bit 0 of hdr_addr is DD bit */
- __le64 rsvd1;
- __le64 rsvd2;
- } read;
- struct {
- /* Qword 0 */
- u8 rxdid; /* descriptor builder profile ID */
- u8 mir_id_umb_cast; /* mirror=[5:0], umb=[7:6] */
- __le16 ptype_flex_flags0; /* ptype=[9:0], ff0=[15:10] */
- __le16 pkt_len; /* [15:14] are reserved */
- __le16 hdr_len_sph_flex_flags1; /* header=[10:0] */
- /* sph=[11:11] */
- /* ff1/ext=[15:12] */
-
- /* Qword 1 */
- __le16 status_error0;
- __le16 l2tag1;
- __le16 flex_meta0;
- __le16 flex_meta1;
-
- /* Qword 2 */
- __le16 status_error1;
- u8 flex_flags2;
- u8 time_stamp_low;
- __le16 l2tag2_1st;
- __le16 l2tag2_2nd;
-
- /* Qword 3 */
- __le16 flex_meta2;
- __le16 flex_meta3;
- union {
- struct {
- __le16 flex_meta4;
- __le16 flex_meta5;
- } flex;
- __le32 ts_high;
- } flex_ts;
- } wb; /* writeback */
-};
-
/* Rx Flex Descriptor
* RxDID Profile ID 16-21
* Flex-field 0: RSS hash lower 16-bits
@@ -335,6 +345,7 @@ enum iavf_rxdid {
IAVF_RXDID_COMMS_AUX_TCP = 21,
IAVF_RXDID_COMMS_OVS_1 = 22,
IAVF_RXDID_COMMS_OVS_2 = 23,
+ IAVF_RXDID_COMMS_AUX_IP_OFFSET = 25,
IAVF_RXDID_LAST = 63,
};
@@ -359,6 +370,20 @@ enum iavf_rx_flex_desc_status_error_0_bits {
IAVF_RX_FLEX_DESC_STATUS0_LAST /* this entry must be last!!! */
};
+enum iavf_rx_flex_desc_status_error_1_bits {
+ /* Note: These are predefined bit offsets */
+ IAVF_RX_FLEX_DESC_STATUS1_CPM_S = 0, /* 4 bits */
+ IAVF_RX_FLEX_DESC_STATUS1_NAT_S = 4,
+ IAVF_RX_FLEX_DESC_STATUS1_CRYPTO_S = 5,
+ /* [10:6] reserved */
+ IAVF_RX_FLEX_DESC_STATUS1_L2TAG2P_S = 11,
+ IAVF_RX_FLEX_DESC_STATUS1_XTRMD2_VALID_S = 12,
+ IAVF_RX_FLEX_DESC_STATUS1_XTRMD3_VALID_S = 13,
+ IAVF_RX_FLEX_DESC_STATUS1_XTRMD4_VALID_S = 14,
+ IAVF_RX_FLEX_DESC_STATUS1_XTRMD5_VALID_S = 15,
+ IAVF_RX_FLEX_DESC_STATUS1_LAST /* this entry must be last!!! */
+};
+
/* for iavf_32b_rx_flex_desc.ptype_flex_flags0 member */
#define IAVF_RX_FLEX_DESC_PTYPE_M (0x3FF) /* 10-bits */
@@ -457,6 +482,8 @@ uint16_t iavf_xmit_pkts_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts);
int iavf_txq_vec_setup_avx512(struct iavf_tx_queue *txq);
+uint8_t iavf_proto_xtr_type_to_rxdid(uint8_t xtr_type);
+
const uint32_t *iavf_get_default_ptype_table(void);
static inline
diff --git a/drivers/net/iavf/iavf_rxtx_vec_common.h b/drivers/net/iavf/iavf_rxtx_vec_common.h
index 25bb502de2..7ad1e0f68a 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_common.h
+++ b/drivers/net/iavf/iavf_rxtx_vec_common.h
@@ -224,6 +224,9 @@ iavf_rx_vec_queue_default(struct iavf_rx_queue *rxq)
if (rxq->nb_rx_desc % rxq->rx_free_thresh)
return -1;
+ if (rxq->proto_xtr != IAVF_PROTO_XTR_NONE)
+ return -1;
+
return 0;
}
diff --git a/drivers/net/iavf/iavf_vchnl.c b/drivers/net/iavf/iavf_vchnl.c
index 54d9917c0a..64d194670b 100644
--- a/drivers/net/iavf/iavf_vchnl.c
+++ b/drivers/net/iavf/iavf_vchnl.c
@@ -850,25 +850,27 @@ iavf_configure_queues(struct iavf_adapter *adapter,
#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
if (vf->vf_res->vf_cap_flags &
- VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC &&
- vf->supported_rxdid & BIT(IAVF_RXDID_COMMS_OVS_1)) {
- vc_qp->rxq.rxdid = IAVF_RXDID_COMMS_OVS_1;
- PMD_DRV_LOG(NOTICE, "request RXDID == %d in "
- "Queue[%d]", vc_qp->rxq.rxdid, i);
+ VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC &&
+ vf->supported_rxdid & BIT(rxq[i]->rxdid)) {
+ vc_qp->rxq.rxdid = rxq[i]->rxdid;
+ PMD_DRV_LOG(NOTICE, "request RXDID[%d] in Queue[%d]",
+ vc_qp->rxq.rxdid, i);
} else {
+ PMD_DRV_LOG(NOTICE, "RXDID[%d] is not supported, "
+ "request default RXDID[%d] in Queue[%d]",
+ rxq[i]->rxdid, IAVF_RXDID_LEGACY_1, i);
vc_qp->rxq.rxdid = IAVF_RXDID_LEGACY_1;
- PMD_DRV_LOG(NOTICE, "request RXDID == %d in "
- "Queue[%d]", vc_qp->rxq.rxdid, i);
}
#else
if (vf->vf_res->vf_cap_flags &
VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC &&
vf->supported_rxdid & BIT(IAVF_RXDID_LEGACY_0)) {
vc_qp->rxq.rxdid = IAVF_RXDID_LEGACY_0;
- PMD_DRV_LOG(NOTICE, "request RXDID == %d in "
- "Queue[%d]", vc_qp->rxq.rxdid, i);
+ PMD_DRV_LOG(NOTICE, "request RXDID[%d] in Queue[%d]",
+ vc_qp->rxq.rxdid, i);
} else {
- PMD_DRV_LOG(ERR, "RXDID == 0 is not supported");
+ PMD_DRV_LOG(ERR, "RXDID[%d] is not supported",
+ IAVF_RXDID_LEGACY_0);
return -1;
}
#endif
diff --git a/drivers/net/iavf/meson.build b/drivers/net/iavf/meson.build
index 3388cdf407..e257f5a6e1 100644
--- a/drivers/net/iavf/meson.build
+++ b/drivers/net/iavf/meson.build
@@ -55,3 +55,5 @@ if arch_subdir == 'x86'
objs += iavf_avx512_lib.extract_objects('iavf_rxtx_vec_avx512.c')
endif
endif
+
+headers = files('rte_pmd_iavf.h')
diff --git a/drivers/net/iavf/rte_pmd_iavf.h b/drivers/net/iavf/rte_pmd_iavf.h
new file mode 100644
index 0000000000..955084e197
--- /dev/null
+++ b/drivers/net/iavf/rte_pmd_iavf.h
@@ -0,0 +1,250 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#ifndef _RTE_PMD_IAVF_H_
+#define _RTE_PMD_IAVF_H_
+
+/**
+ * @file rte_pmd_iavf.h
+ *
+ * iavf PMD specific functions.
+ *
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
+ *
+ */
+
+#include <stdio.h>
+#include <rte_mbuf.h>
+#include <rte_mbuf_dyn.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * The supported network flexible descriptor's extraction metadata format.
+ */
+union rte_pmd_ifd_proto_xtr_metadata {
+ uint32_t metadata;
+
+ struct {
+ uint16_t data0;
+ uint16_t data1;
+ } raw;
+
+ struct {
+ uint16_t stag_vid:12,
+ stag_dei:1,
+ stag_pcp:3;
+ uint16_t ctag_vid:12,
+ ctag_dei:1,
+ ctag_pcp:3;
+ } vlan;
+
+ struct {
+ uint16_t protocol:8,
+ ttl:8;
+ uint16_t tos:8,
+ ihl:4,
+ version:4;
+ } ipv4;
+
+ struct {
+ uint16_t hoplimit:8,
+ nexthdr:8;
+ uint16_t flowhi4:4,
+ tc:8,
+ version:4;
+ } ipv6;
+
+ struct {
+ uint16_t flowlo16;
+ uint16_t flowhi4:4,
+ tc:8,
+ version:4;
+ } ipv6_flow;
+
+ struct {
+ uint16_t fin:1,
+ syn:1,
+ rst:1,
+ psh:1,
+ ack:1,
+ urg:1,
+ ece:1,
+ cwr:1,
+ res1:4,
+ doff:4;
+ uint16_t rsvd;
+ } tcp;
+
+ uint32_t ip_ofs;
+};
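[Editor's note: each view of the union above decodes the same 32-bit metadata word written by the descriptor. A simplified standalone mirror of the `vlan` view shows the round-trip; note that bit-field ordering is compiler/ABI dependent, and this matches the little-endian GCC/Clang layout the header assumes.]

```c
#include <stdint.h>

/* Simplified mirror of the vlan view of
 * rte_pmd_ifd_proto_xtr_metadata; field widths copied from the header,
 * layout assumed to be the common little-endian GCC/Clang ordering. */
union vlan_meta {
	uint32_t metadata;
	struct {
		uint16_t stag_vid:12, stag_dei:1, stag_pcp:3;
		uint16_t ctag_vid:12, ctag_dei:1, ctag_pcp:3;
	} vlan;
};
```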
+
+/* Offset of mbuf dynamic field for flexible descriptor's extraction data */
+extern int rte_pmd_ifd_dynfield_proto_xtr_metadata_offs;
+
+/* Mask of mbuf dynamic flags for flexible descriptor's extraction type */
+extern uint64_t rte_pmd_ifd_dynflag_proto_xtr_vlan_mask;
+extern uint64_t rte_pmd_ifd_dynflag_proto_xtr_ipv4_mask;
+extern uint64_t rte_pmd_ifd_dynflag_proto_xtr_ipv6_mask;
+extern uint64_t rte_pmd_ifd_dynflag_proto_xtr_ipv6_flow_mask;
+extern uint64_t rte_pmd_ifd_dynflag_proto_xtr_tcp_mask;
+extern uint64_t rte_pmd_ifd_dynflag_proto_xtr_ip_offset_mask;
+
+/**
+ * The mbuf dynamic field pointer for flexible descriptor's extraction metadata.
+ */
+#define RTE_PMD_IFD_DYNF_PROTO_XTR_METADATA(m) \
+ RTE_MBUF_DYNFIELD((m), \
+ rte_pmd_ifd_dynfield_proto_xtr_metadata_offs, \
+ uint32_t *)
+
+/**
+ * The mbuf dynamic flag for VLAN protocol extraction metadata, it is valid
+ * when dev_args 'proto_xtr' has 'vlan' specified.
+ */
+#define RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_VLAN \
+ (rte_pmd_ifd_dynflag_proto_xtr_vlan_mask)
+
+/**
+ * The mbuf dynamic flag for IPv4 protocol extraction metadata, it is valid
+ * when dev_args 'proto_xtr' has 'ipv4' specified.
+ */
+#define RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_IPV4 \
+ (rte_pmd_ifd_dynflag_proto_xtr_ipv4_mask)
+
+/**
+ * The mbuf dynamic flag for IPv6 protocol extraction metadata, it is valid
+ * when dev_args 'proto_xtr' has 'ipv6' specified.
+ */
+#define RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_IPV6 \
+ (rte_pmd_ifd_dynflag_proto_xtr_ipv6_mask)
+
+/**
+ * The mbuf dynamic flag for IPv6 with flow protocol extraction metadata, it is
+ * valid when dev_args 'proto_xtr' has 'ipv6_flow' specified.
+ */
+#define RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_IPV6_FLOW \
+ (rte_pmd_ifd_dynflag_proto_xtr_ipv6_flow_mask)
+
+/**
+ * The mbuf dynamic flag for TCP protocol extraction metadata, it is valid
+ * when dev_args 'proto_xtr' has 'tcp' specified.
+ */
+#define RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_TCP \
+ (rte_pmd_ifd_dynflag_proto_xtr_tcp_mask)
+
+/**
+ * The mbuf dynamic flag for IP_OFFSET extraction metadata, it is valid
+ * when dev_args 'proto_xtr' has 'ip_offset' specified.
+ */
+#define RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_IP_OFFSET \
+ (rte_pmd_ifd_dynflag_proto_xtr_ip_offset_mask)
+
+/**
+ * Check if mbuf dynamic field for flexible descriptor's extraction metadata
+ * is registered.
+ *
+ * @return
+ * True if registered, false otherwise.
+ */
+__rte_experimental
+static __rte_always_inline int
+rte_pmd_ifd_dynf_proto_xtr_metadata_avail(void)
+{
+ return rte_pmd_ifd_dynfield_proto_xtr_metadata_offs != -1;
+}
+
+/**
+ * Get the mbuf dynamic field for flexible descriptor's extraction metadata.
+ *
+ * @param m
+ * The pointer to the mbuf.
+ * @return
+ * The saved protocol extraction metadata.
+ */
+__rte_experimental
+static __rte_always_inline uint32_t
+rte_pmd_ifd_dynf_proto_xtr_metadata_get(struct rte_mbuf *m)
+{
+ return *RTE_PMD_IFD_DYNF_PROTO_XTR_METADATA(m);
+}
+
+/**
+ * Dump the mbuf dynamic field for flexible descriptor's extraction metadata.
+ *
+ * @param m
+ * The pointer to the mbuf.
+ */
+__rte_experimental
+static inline void
+rte_pmd_ifd_dump_proto_xtr_metadata(struct rte_mbuf *m)
+{
+ union rte_pmd_ifd_proto_xtr_metadata data;
+
+ if (!rte_pmd_ifd_dynf_proto_xtr_metadata_avail())
+ return;
+
+ data.metadata = rte_pmd_ifd_dynf_proto_xtr_metadata_get(m);
+
+ if (m->ol_flags & RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_VLAN)
+ printf(" - Flexible descriptor's Metadata: [0x%04x:0x%04x],"
+ "vlan,stag=%u:%u:%u,ctag=%u:%u:%u",
+ data.raw.data0, data.raw.data1,
+ data.vlan.stag_pcp,
+ data.vlan.stag_dei,
+ data.vlan.stag_vid,
+ data.vlan.ctag_pcp,
+ data.vlan.ctag_dei,
+ data.vlan.ctag_vid);
+ else if (m->ol_flags & RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_IPV4)
+ printf(" - Flexible descriptor's Metadata: [0x%04x:0x%04x],"
+ "ipv4,ver=%u,hdrlen=%u,tos=%u,ttl=%u,proto=%u",
+ data.raw.data0, data.raw.data1,
+ data.ipv4.version,
+ data.ipv4.ihl,
+ data.ipv4.tos,
+ data.ipv4.ttl,
+ data.ipv4.protocol);
+ else if (m->ol_flags & RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_IPV6)
+ printf(" - Flexible descriptor's Metadata: [0x%04x:0x%04x],"
+ "ipv6,ver=%u,tc=%u,flow_hi4=0x%x,nexthdr=%u,hoplimit=%u",
+ data.raw.data0, data.raw.data1,
+ data.ipv6.version,
+ data.ipv6.tc,
+ data.ipv6.flowhi4,
+ data.ipv6.nexthdr,
+ data.ipv6.hoplimit);
+ else if (m->ol_flags & RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_IPV6_FLOW)
+ printf(" - Flexible descriptor's Metadata: [0x%04x:0x%04x],"
+ "ipv6_flow,ver=%u,tc=%u,flow=0x%x%04x",
+ data.raw.data0, data.raw.data1,
+ data.ipv6_flow.version,
+ data.ipv6_flow.tc,
+ data.ipv6_flow.flowhi4,
+ data.ipv6_flow.flowlo16);
+ else if (m->ol_flags & RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_TCP)
+ printf(" - Flexible descriptor's Metadata: [0x%04x:0x%04x],"
+ "tcp,doff=%u,flags=%s%s%s%s%s%s%s%s",
+ data.raw.data0, data.raw.data1,
+ data.tcp.doff,
+ data.tcp.cwr ? "C" : "",
+ data.tcp.ece ? "E" : "",
+ data.tcp.urg ? "U" : "",
+ data.tcp.ack ? "A" : "",
+ data.tcp.psh ? "P" : "",
+ data.tcp.rst ? "R" : "",
+ data.tcp.syn ? "S" : "",
+ data.tcp.fin ? "F" : "");
+ else if (m->ol_flags & RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_IP_OFFSET)
+ printf(" - Flexible descriptor's Extraction: ip_offset=%u",
+ data.ip_ofs);
+}
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_PMD_IAVF_H_ */
diff --git a/drivers/net/iavf/version.map b/drivers/net/iavf/version.map
index 4a76d1d52d..2a411da2e9 100644
--- a/drivers/net/iavf/version.map
+++ b/drivers/net/iavf/version.map
@@ -1,3 +1,16 @@
DPDK_21 {
local: *;
};
+
+EXPERIMENTAL {
+ global:
+
+ # added in 20.11
+ rte_pmd_ifd_dynfield_proto_xtr_metadata_offs;
+ rte_pmd_ifd_dynflag_proto_xtr_vlan_mask;
+ rte_pmd_ifd_dynflag_proto_xtr_ipv4_mask;
+ rte_pmd_ifd_dynflag_proto_xtr_ipv6_mask;
+ rte_pmd_ifd_dynflag_proto_xtr_ipv6_flow_mask;
+ rte_pmd_ifd_dynflag_proto_xtr_tcp_mask;
+ rte_pmd_ifd_dynflag_proto_xtr_ip_offset_mask;
+};
--
2.20.1
^ permalink raw reply [flat|nested] 40+ messages in thread
* [dpdk-dev] [PATCH v12] net/iavf: support flex desc metadata extraction
2020-09-09 2:54 [dpdk-dev] [PATCH v1] net/iavf: support flex desc metadata extraction Jeff Guo
` (12 preceding siblings ...)
2020-10-30 8:34 ` [dpdk-dev] [PATCH v12] " Jeff Guo
@ 2020-10-30 8:40 ` Jeff Guo
2020-10-30 9:35 ` Zhang, Qi Z
2020-10-30 10:51 ` Ferruh Yigit
13 siblings, 2 replies; 40+ messages in thread
From: Jeff Guo @ 2020-10-30 8:40 UTC (permalink / raw)
To: jingjing.wu, qi.z.zhang, beilei.xing, ferruh.yigit
Cc: dev, haiyue.wang, bruce.richardson, jia.guo
Enable metadata extraction for flexible descriptors in AVF, allowing
network functions to get metadata directly without additional parsing,
which reduces the CPU cost for VFs. The extracted metadata covers the
VLAN/IPv4/IPv6/IPv6-FLOW/TCP/MPLS flexible descriptors; the VF can
negotiate flexible descriptor capability with the PF and configure the
corresponding offload on its receive queues.
Signed-off-by: Jeff Guo <jia.guo@intel.com>
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
---
v12:
refine the doc to be brief
v11:
update doc in .map and .rst
v10:
delete the makefile configure and rename the dynamic mbuf name
v9:
change the undef config
v8:
rebase patch for apply issue
v7:
clean up some useless code and add docs
v6:
rebase patch
v5:
remove the OVS configuration since OVS is not a protocol extraction type
v4:
add flex desc type in rx queue for handling vector path
handle ovs flex type
v3:
export these global symbols into .map
v2:
remove makefile change and modify the rxdid handling
---
doc/guides/nics/intel_vf.rst | 4 +
doc/guides/rel_notes/release_20_11.rst | 6 +
drivers/net/iavf/iavf.h | 24 +-
drivers/net/iavf/iavf_ethdev.c | 394 ++++++++++++++++++++++++
drivers/net/iavf/iavf_rxtx.c | 252 +++++++++++++--
drivers/net/iavf/iavf_rxtx.h | 169 +++++-----
drivers/net/iavf/iavf_rxtx_vec_common.h | 3 +
drivers/net/iavf/iavf_vchnl.c | 22 +-
drivers/net/iavf/meson.build | 2 +
drivers/net/iavf/rte_pmd_iavf.h | 250 +++++++++++++++
drivers/net/iavf/version.map | 13 +
11 files changed, 1025 insertions(+), 114 deletions(-)
create mode 100644 drivers/net/iavf/rte_pmd_iavf.h
diff --git a/doc/guides/nics/intel_vf.rst b/doc/guides/nics/intel_vf.rst
index 723a9c0fa2..529ff4a955 100644
--- a/doc/guides/nics/intel_vf.rst
+++ b/doc/guides/nics/intel_vf.rst
@@ -88,6 +88,10 @@ For more detail on SR-IOV, please refer to the following documents:
assignment in hypervisor. Take qemu for example, the device assignment should carry the IAVF device id (0x1889) like
``-device vfio-pci,x-pci-device-id=0x1889,host=03:0a.0``.
+ When IAVF is backed by an Intel® E810 device, the "Protocol Extraction" feature supported by the ice PMD is also
+ available to the IAVF PMD. The same devargs with the same parameters can be applied to the IAVF PMD; for details,
+ please refer to the section ``Protocol extraction for per queue`` of ice.rst.
+
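As an illustration of the shared syntax (the PCI address and queue IDs below are invented for this example, not taken from the patch), enabling per-queue extraction could look like:

```shell
# Hypothetical example: TCP metadata on queues 0-1, VLAN metadata on
# queue 3, for a VF at PCI address 18:01.0 (both are made up here).
#
#   dpdk-testpmd -a 18:01.0,proto_xtr='[(0-1):tcp,3:vlan]' -- -i
#
# Keep the quotes: brackets and parentheses must reach the PMD intact.
printf '%s\n' "proto_xtr=[(0-1):tcp,3:vlan]"
```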
The PCIE host-interface of Intel Ethernet Switch FM10000 Series VF infrastructure
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index 17b59c2c3d..022aa0dc6f 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -205,6 +205,12 @@ New Features
Updated the Intel qat driver to use write combining stores.
+* **Updated Intel iavf driver.**
+
+ Updated iavf PMD with new features and improvements, including:
+
+ * Added support for flexible descriptor metadata extraction.
+
* **Updated Memif PMD.**
* Added support for abstract socket address.
diff --git a/drivers/net/iavf/iavf.h b/drivers/net/iavf/iavf.h
index 3d3b0da5dd..6d5912d8c1 100644
--- a/drivers/net/iavf/iavf.h
+++ b/drivers/net/iavf/iavf.h
@@ -133,7 +133,7 @@ struct iavf_info {
struct virtchnl_vf_resource *vf_res; /* VF resource */
struct virtchnl_vsi_resource *vsi_res; /* LAN VSI */
uint64_t supported_rxdid;
-
+ uint8_t *proto_xtr; /* proto xtr type for all queues */
volatile enum virtchnl_ops pend_cmd; /* pending command not finished */
uint32_t cmd_retval; /* return value of the cmd response from PF */
uint8_t *aq_resp; /* buffer to store the adminq response from PF */
@@ -169,6 +169,27 @@ struct iavf_info {
#define IAVF_MAX_PKT_TYPE 1024
+#define IAVF_MAX_QUEUE_NUM 2048
+
+enum iavf_proto_xtr_type {
+ IAVF_PROTO_XTR_NONE,
+ IAVF_PROTO_XTR_VLAN,
+ IAVF_PROTO_XTR_IPV4,
+ IAVF_PROTO_XTR_IPV6,
+ IAVF_PROTO_XTR_IPV6_FLOW,
+ IAVF_PROTO_XTR_TCP,
+ IAVF_PROTO_XTR_IP_OFFSET,
+ IAVF_PROTO_XTR_MAX,
+};
+
+/**
+ * Cache devargs parse result.
+ */
+struct iavf_devargs {
+ uint8_t proto_xtr_dflt;
+ uint8_t proto_xtr[IAVF_MAX_QUEUE_NUM];
+};
+
/* Structure to store private data for each VF instance. */
struct iavf_adapter {
struct iavf_hw hw;
@@ -182,6 +203,7 @@ struct iavf_adapter {
const uint32_t *ptype_tbl;
bool stopped;
uint16_t fdir_ref_cnt;
+ struct iavf_devargs devargs;
};
/* IAVF_DEV_PRIVATE_TO */
diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
index 9eea8bf90c..7e3c26a94e 100644
--- a/drivers/net/iavf/iavf_ethdev.c
+++ b/drivers/net/iavf/iavf_ethdev.c
@@ -28,6 +28,49 @@
#include "iavf.h"
#include "iavf_rxtx.h"
#include "iavf_generic_flow.h"
+#include "rte_pmd_iavf.h"
+
+/* devargs */
+#define IAVF_PROTO_XTR_ARG "proto_xtr"
+
+static const char * const iavf_valid_args[] = {
+ IAVF_PROTO_XTR_ARG,
+ NULL
+};
+
+static const struct rte_mbuf_dynfield iavf_proto_xtr_metadata_param = {
+ .name = "intel_pmd_dynfield_proto_xtr_metadata",
+ .size = sizeof(uint32_t),
+ .align = __alignof__(uint32_t),
+ .flags = 0,
+};
+
+struct iavf_proto_xtr_ol {
+ const struct rte_mbuf_dynflag param;
+ uint64_t *ol_flag;
+ bool required;
+};
+
+static struct iavf_proto_xtr_ol iavf_proto_xtr_params[] = {
+ [IAVF_PROTO_XTR_VLAN] = {
+ .param = { .name = "intel_pmd_dynflag_proto_xtr_vlan" },
+ .ol_flag = &rte_pmd_ifd_dynflag_proto_xtr_vlan_mask },
+ [IAVF_PROTO_XTR_IPV4] = {
+ .param = { .name = "intel_pmd_dynflag_proto_xtr_ipv4" },
+ .ol_flag = &rte_pmd_ifd_dynflag_proto_xtr_ipv4_mask },
+ [IAVF_PROTO_XTR_IPV6] = {
+ .param = { .name = "intel_pmd_dynflag_proto_xtr_ipv6" },
+ .ol_flag = &rte_pmd_ifd_dynflag_proto_xtr_ipv6_mask },
+ [IAVF_PROTO_XTR_IPV6_FLOW] = {
+ .param = { .name = "intel_pmd_dynflag_proto_xtr_ipv6_flow" },
+ .ol_flag = &rte_pmd_ifd_dynflag_proto_xtr_ipv6_flow_mask },
+ [IAVF_PROTO_XTR_TCP] = {
+ .param = { .name = "intel_pmd_dynflag_proto_xtr_tcp" },
+ .ol_flag = &rte_pmd_ifd_dynflag_proto_xtr_tcp_mask },
+ [IAVF_PROTO_XTR_IP_OFFSET] = {
+ .param = { .name = "intel_pmd_dynflag_proto_xtr_ip_offset" },
+ .ol_flag = &rte_pmd_ifd_dynflag_proto_xtr_ip_offset_mask },
+};
static int iavf_dev_configure(struct rte_eth_dev *dev);
static int iavf_dev_start(struct rte_eth_dev *dev);
@@ -1394,6 +1437,349 @@ iavf_check_vf_reset_done(struct iavf_hw *hw)
return 0;
}
+static int
+iavf_lookup_proto_xtr_type(const char *flex_name)
+{
+ static struct {
+ const char *name;
+ enum iavf_proto_xtr_type type;
+ } xtr_type_map[] = {
+ { "vlan", IAVF_PROTO_XTR_VLAN },
+ { "ipv4", IAVF_PROTO_XTR_IPV4 },
+ { "ipv6", IAVF_PROTO_XTR_IPV6 },
+ { "ipv6_flow", IAVF_PROTO_XTR_IPV6_FLOW },
+ { "tcp", IAVF_PROTO_XTR_TCP },
+ { "ip_offset", IAVF_PROTO_XTR_IP_OFFSET },
+ };
+ uint32_t i;
+
+ for (i = 0; i < RTE_DIM(xtr_type_map); i++) {
+ if (strcmp(flex_name, xtr_type_map[i].name) == 0)
+ return xtr_type_map[i].type;
+ }
+
+ PMD_DRV_LOG(ERR, "wrong proto_xtr type, "
+ "it should be: vlan|ipv4|ipv6|ipv6_flow|tcp|ip_offset");
+
+ return -1;
+}
+
+/**
+ * Parse elem, the elem could be single number/range or '(' ')' group
+ * 1) A single number elem, it's just a simple digit. e.g. 9
+ * 2) A single range elem, two digits with a '-' between. e.g. 2-6
+ * 3) A group elem, combines multiple 1) or 2) with '( )'. e.g (0,2-4,6)
+ * Within group elem, '-' used for a range separator;
+ * ',' used for a single number.
+ */
+static int
+iavf_parse_queue_set(const char *input, int xtr_type,
+ struct iavf_devargs *devargs)
+{
+ const char *str = input;
+ char *end = NULL;
+ uint32_t min, max;
+ uint32_t idx;
+
+ while (isblank(*str))
+ str++;
+
+ if (!isdigit(*str) && *str != '(')
+ return -1;
+
+ /* process single number or single range of number */
+ if (*str != '(') {
+ errno = 0;
+ idx = strtoul(str, &end, 10);
+ if (errno || !end || idx >= IAVF_MAX_QUEUE_NUM)
+ return -1;
+
+ while (isblank(*end))
+ end++;
+
+ min = idx;
+ max = idx;
+
+ /* process single <number>-<number> */
+ if (*end == '-') {
+ end++;
+ while (isblank(*end))
+ end++;
+ if (!isdigit(*end))
+ return -1;
+
+ errno = 0;
+ idx = strtoul(end, &end, 10);
+ if (errno || !end || idx >= IAVF_MAX_QUEUE_NUM)
+ return -1;
+
+ max = idx;
+ while (isblank(*end))
+ end++;
+ }
+
+ if (*end != ':')
+ return -1;
+
+ for (idx = RTE_MIN(min, max);
+ idx <= RTE_MAX(min, max); idx++)
+ devargs->proto_xtr[idx] = xtr_type;
+
+ return 0;
+ }
+
+ /* process set within bracket */
+ str++;
+ while (isblank(*str))
+ str++;
+ if (*str == '\0')
+ return -1;
+
+ min = IAVF_MAX_QUEUE_NUM;
+ do {
+ /* go ahead to the first digit */
+ while (isblank(*str))
+ str++;
+ if (!isdigit(*str))
+ return -1;
+
+ /* get the digit value */
+ errno = 0;
+ idx = strtoul(str, &end, 10);
+ if (errno || !end || idx >= IAVF_MAX_QUEUE_NUM)
+ return -1;
+
+ /* go ahead to separator '-',',' and ')' */
+ while (isblank(*end))
+ end++;
+ if (*end == '-') {
+ if (min == IAVF_MAX_QUEUE_NUM)
+ min = idx;
+ else /* avoid continuous '-' */
+ return -1;
+ } else if (*end == ',' || *end == ')') {
+ max = idx;
+ if (min == IAVF_MAX_QUEUE_NUM)
+ min = idx;
+
+ for (idx = RTE_MIN(min, max);
+ idx <= RTE_MAX(min, max); idx++)
+ devargs->proto_xtr[idx] = xtr_type;
+
+ min = IAVF_MAX_QUEUE_NUM;
+ } else {
+ return -1;
+ }
+
+ str = end + 1;
+ } while (*end != ')' && *end != '\0');
+
+ return 0;
+}
+
+static int
+iavf_parse_queue_proto_xtr(const char *queues, struct iavf_devargs *devargs)
+{
+ const char *queue_start;
+ uint32_t idx;
+ int xtr_type;
+ char flex_name[32];
+
+ while (isblank(*queues))
+ queues++;
+
+ if (*queues != '[') {
+ xtr_type = iavf_lookup_proto_xtr_type(queues);
+ if (xtr_type < 0)
+ return -1;
+
+ devargs->proto_xtr_dflt = xtr_type;
+
+ return 0;
+ }
+
+ queues++;
+ do {
+ while (isblank(*queues))
+ queues++;
+ if (*queues == '\0')
+ return -1;
+
+ queue_start = queues;
+
+ /* go across a complete bracket */
+ if (*queue_start == '(') {
+ queues += strcspn(queues, ")");
+ if (*queues != ')')
+ return -1;
+ }
+
+ /* scan the separator ':' */
+ queues += strcspn(queues, ":");
+ if (*queues++ != ':')
+ return -1;
+ while (isblank(*queues))
+ queues++;
+
+ for (idx = 0; ; idx++) {
+ if (isblank(queues[idx]) ||
+ queues[idx] == ',' ||
+ queues[idx] == ']' ||
+ queues[idx] == '\0')
+ break;
+
+ if (idx > sizeof(flex_name) - 2)
+ return -1;
+
+ flex_name[idx] = queues[idx];
+ }
+ flex_name[idx] = '\0';
+ xtr_type = iavf_lookup_proto_xtr_type(flex_name);
+ if (xtr_type < 0)
+ return -1;
+
+ queues += idx;
+
+ while (isblank(*queues) || *queues == ',' || *queues == ']')
+ queues++;
+
+ if (iavf_parse_queue_set(queue_start, xtr_type, devargs) < 0)
+ return -1;
+ } while (*queues != '\0');
+
+ return 0;
+}
+
+static int
+iavf_handle_proto_xtr_arg(__rte_unused const char *key, const char *value,
+ void *extra_args)
+{
+ struct iavf_devargs *devargs = extra_args;
+
+ if (!value || !extra_args)
+ return -EINVAL;
+
+ if (iavf_parse_queue_proto_xtr(value, devargs) < 0) {
+ PMD_DRV_LOG(ERR, "the proto_xtr's parameter is wrong : '%s'",
+ value);
+ return -1;
+ }
+
+ return 0;
+}
+
+static int iavf_parse_devargs(struct rte_eth_dev *dev)
+{
+ struct iavf_adapter *ad =
+ IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+ struct rte_devargs *devargs = dev->device->devargs;
+ struct rte_kvargs *kvlist;
+ int ret;
+
+ if (!devargs)
+ return 0;
+
+ kvlist = rte_kvargs_parse(devargs->args, iavf_valid_args);
+ if (!kvlist) {
+ PMD_INIT_LOG(ERR, "invalid kvargs key");
+ return -EINVAL;
+ }
+
+ ad->devargs.proto_xtr_dflt = IAVF_PROTO_XTR_NONE;
+ memset(ad->devargs.proto_xtr, IAVF_PROTO_XTR_NONE,
+ sizeof(ad->devargs.proto_xtr));
+
+ ret = rte_kvargs_process(kvlist, IAVF_PROTO_XTR_ARG,
+ &iavf_handle_proto_xtr_arg, &ad->devargs);
+ if (ret)
+ goto bail;
+
+bail:
+ rte_kvargs_free(kvlist);
+ return ret;
+}
+
+static void
+iavf_init_proto_xtr(struct rte_eth_dev *dev)
+{
+ struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+ struct iavf_adapter *ad =
+ IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+ const struct iavf_proto_xtr_ol *xtr_ol;
+ bool proto_xtr_enable = false;
+ int offset;
+ uint16_t i;
+
+ vf->proto_xtr = rte_zmalloc("vf proto xtr",
+ vf->vsi_res->num_queue_pairs, 0);
+ if (unlikely(!(vf->proto_xtr))) {
+ PMD_DRV_LOG(ERR, "no memory for setting up proto_xtr's table");
+ return;
+ }
+
+ for (i = 0; i < vf->vsi_res->num_queue_pairs; i++) {
+ vf->proto_xtr[i] = ad->devargs.proto_xtr[i] !=
+ IAVF_PROTO_XTR_NONE ?
+ ad->devargs.proto_xtr[i] :
+ ad->devargs.proto_xtr_dflt;
+
+ if (vf->proto_xtr[i] != IAVF_PROTO_XTR_NONE) {
+ uint8_t type = vf->proto_xtr[i];
+
+ iavf_proto_xtr_params[type].required = true;
+ proto_xtr_enable = true;
+ }
+ }
+
+ if (likely(!proto_xtr_enable))
+ return;
+
+ offset = rte_mbuf_dynfield_register(&iavf_proto_xtr_metadata_param);
+ if (unlikely(offset == -1)) {
+ PMD_DRV_LOG(ERR,
+ "failed to extract protocol metadata, error %d",
+ -rte_errno);
+ return;
+ }
+
+ PMD_DRV_LOG(DEBUG,
+ "proto_xtr metadata offset in mbuf is : %d",
+ offset);
+ rte_pmd_ifd_dynfield_proto_xtr_metadata_offs = offset;
+
+ for (i = 0; i < RTE_DIM(iavf_proto_xtr_params); i++) {
+ xtr_ol = &iavf_proto_xtr_params[i];
+
+ uint8_t rxdid = iavf_proto_xtr_type_to_rxdid((uint8_t)i);
+
+ if (!xtr_ol->required)
+ continue;
+
+ if (!(vf->supported_rxdid & BIT(rxdid))) {
+ PMD_DRV_LOG(ERR,
+ "rxdid[%u] is not supported in hardware",
+ rxdid);
+ rte_pmd_ifd_dynfield_proto_xtr_metadata_offs = -1;
+ break;
+ }
+
+ offset = rte_mbuf_dynflag_register(&xtr_ol->param);
+ if (unlikely(offset == -1)) {
+ PMD_DRV_LOG(ERR,
+ "failed to register proto_xtr offload '%s', error %d",
+ xtr_ol->param.name, -rte_errno);
+
+ rte_pmd_ifd_dynfield_proto_xtr_metadata_offs = -1;
+ break;
+ }
+
+ PMD_DRV_LOG(DEBUG,
+ "proto_xtr offload '%s' offset in mbuf is : %d",
+ xtr_ol->param.name, offset);
+ *xtr_ol->ol_flag = 1ULL << offset;
+ }
+}
+
static int
iavf_init_vf(struct rte_eth_dev *dev)
{
@@ -1403,6 +1789,12 @@ iavf_init_vf(struct rte_eth_dev *dev)
struct iavf_hw *hw = IAVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+ err = iavf_parse_devargs(dev);
+ if (err) {
+ PMD_INIT_LOG(ERR, "Failed to parse devargs");
+ goto err;
+ }
+
err = iavf_set_mac_type(hw);
if (err) {
PMD_INIT_LOG(ERR, "set_mac_type failed: %d", err);
@@ -1466,6 +1858,8 @@ iavf_init_vf(struct rte_eth_dev *dev)
}
}
+ iavf_init_proto_xtr(dev);
+
return 0;
err_rss:
rte_free(vf->rss_key);
diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
index 160d81b761..baac5d65c8 100644
--- a/drivers/net/iavf/iavf_rxtx.c
+++ b/drivers/net/iavf/iavf_rxtx.c
@@ -27,6 +27,35 @@
#include "iavf.h"
#include "iavf_rxtx.h"
+#include "rte_pmd_iavf.h"
+
+/* Offset of mbuf dynamic field for protocol extraction's metadata */
+int rte_pmd_ifd_dynfield_proto_xtr_metadata_offs = -1;
+
+/* Mask of mbuf dynamic flags for protocol extraction's type */
+uint64_t rte_pmd_ifd_dynflag_proto_xtr_vlan_mask;
+uint64_t rte_pmd_ifd_dynflag_proto_xtr_ipv4_mask;
+uint64_t rte_pmd_ifd_dynflag_proto_xtr_ipv6_mask;
+uint64_t rte_pmd_ifd_dynflag_proto_xtr_ipv6_flow_mask;
+uint64_t rte_pmd_ifd_dynflag_proto_xtr_tcp_mask;
+uint64_t rte_pmd_ifd_dynflag_proto_xtr_ip_offset_mask;
+
+uint8_t
+iavf_proto_xtr_type_to_rxdid(uint8_t flex_type)
+{
+ static uint8_t rxdid_map[] = {
+ [IAVF_PROTO_XTR_NONE] = IAVF_RXDID_COMMS_OVS_1,
+ [IAVF_PROTO_XTR_VLAN] = IAVF_RXDID_COMMS_AUX_VLAN,
+ [IAVF_PROTO_XTR_IPV4] = IAVF_RXDID_COMMS_AUX_IPV4,
+ [IAVF_PROTO_XTR_IPV6] = IAVF_RXDID_COMMS_AUX_IPV6,
+ [IAVF_PROTO_XTR_IPV6_FLOW] = IAVF_RXDID_COMMS_AUX_IPV6_FLOW,
+ [IAVF_PROTO_XTR_TCP] = IAVF_RXDID_COMMS_AUX_TCP,
+ [IAVF_PROTO_XTR_IP_OFFSET] = IAVF_RXDID_COMMS_AUX_IP_OFFSET,
+ };
+
+ return flex_type < RTE_DIM(rxdid_map) ?
+ rxdid_map[flex_type] : IAVF_RXDID_COMMS_OVS_1;
+}
static inline int
check_rx_thresh(uint16_t nb_desc, uint16_t thresh)
@@ -295,6 +324,160 @@ static const struct iavf_txq_ops def_txq_ops = {
.release_mbufs = release_txq_mbufs,
};
+static inline void
+iavf_rxd_to_pkt_fields_by_comms_ovs(__rte_unused struct iavf_rx_queue *rxq,
+ struct rte_mbuf *mb,
+ volatile union iavf_rx_flex_desc *rxdp)
+{
+ volatile struct iavf_32b_rx_flex_desc_comms_ovs *desc =
+ (volatile struct iavf_32b_rx_flex_desc_comms_ovs *)rxdp;
+#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
+ uint16_t stat_err;
+#endif
+
+ if (desc->flow_id != 0xFFFFFFFF) {
+ mb->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+ mb->hash.fdir.hi = rte_le_to_cpu_32(desc->flow_id);
+ }
+
+#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
+ stat_err = rte_le_to_cpu_16(desc->status_error0);
+ if (likely(stat_err & (1 << IAVF_RX_FLEX_DESC_STATUS0_RSS_VALID_S))) {
+ mb->ol_flags |= PKT_RX_RSS_HASH;
+ mb->hash.rss = rte_le_to_cpu_32(desc->rss_hash);
+ }
+#endif
+}
+
+static inline void
+iavf_rxd_to_pkt_fields_by_comms_aux_v1(struct iavf_rx_queue *rxq,
+ struct rte_mbuf *mb,
+ volatile union iavf_rx_flex_desc *rxdp)
+{
+ volatile struct iavf_32b_rx_flex_desc_comms *desc =
+ (volatile struct iavf_32b_rx_flex_desc_comms *)rxdp;
+ uint16_t stat_err;
+
+ stat_err = rte_le_to_cpu_16(desc->status_error0);
+ if (likely(stat_err & (1 << IAVF_RX_FLEX_DESC_STATUS0_RSS_VALID_S))) {
+ mb->ol_flags |= PKT_RX_RSS_HASH;
+ mb->hash.rss = rte_le_to_cpu_32(desc->rss_hash);
+ }
+
+#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
+ if (desc->flow_id != 0xFFFFFFFF) {
+ mb->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+ mb->hash.fdir.hi = rte_le_to_cpu_32(desc->flow_id);
+ }
+
+ if (rxq->xtr_ol_flag) {
+ uint32_t metadata = 0;
+
+ stat_err = rte_le_to_cpu_16(desc->status_error1);
+
+ if (stat_err & (1 << IAVF_RX_FLEX_DESC_STATUS1_XTRMD4_VALID_S))
+ metadata = rte_le_to_cpu_16(desc->flex_ts.flex.aux0);
+
+ if (stat_err & (1 << IAVF_RX_FLEX_DESC_STATUS1_XTRMD5_VALID_S))
+ metadata |=
+ rte_le_to_cpu_16(desc->flex_ts.flex.aux1) << 16;
+
+ if (metadata) {
+ mb->ol_flags |= rxq->xtr_ol_flag;
+
+ *RTE_PMD_IFD_DYNF_PROTO_XTR_METADATA(mb) = metadata;
+ }
+ }
+#endif
+}
+
+static inline void
+iavf_rxd_to_pkt_fields_by_comms_aux_v2(struct iavf_rx_queue *rxq,
+ struct rte_mbuf *mb,
+ volatile union iavf_rx_flex_desc *rxdp)
+{
+ volatile struct iavf_32b_rx_flex_desc_comms *desc =
+ (volatile struct iavf_32b_rx_flex_desc_comms *)rxdp;
+ uint16_t stat_err;
+
+ stat_err = rte_le_to_cpu_16(desc->status_error0);
+ if (likely(stat_err & (1 << IAVF_RX_FLEX_DESC_STATUS0_RSS_VALID_S))) {
+ mb->ol_flags |= PKT_RX_RSS_HASH;
+ mb->hash.rss = rte_le_to_cpu_32(desc->rss_hash);
+ }
+
+#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
+ if (desc->flow_id != 0xFFFFFFFF) {
+ mb->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
+ mb->hash.fdir.hi = rte_le_to_cpu_32(desc->flow_id);
+ }
+
+ if (rxq->xtr_ol_flag) {
+ uint32_t metadata = 0;
+
+ if (desc->flex_ts.flex.aux0 != 0xFFFF)
+ metadata = rte_le_to_cpu_16(desc->flex_ts.flex.aux0);
+ else if (desc->flex_ts.flex.aux1 != 0xFFFF)
+ metadata = rte_le_to_cpu_16(desc->flex_ts.flex.aux1);
+
+ if (metadata) {
+ mb->ol_flags |= rxq->xtr_ol_flag;
+
+ *RTE_PMD_IFD_DYNF_PROTO_XTR_METADATA(mb) = metadata;
+ }
+ }
+#endif
+}
+
+static void
+iavf_select_rxd_to_pkt_fields_handler(struct iavf_rx_queue *rxq, uint32_t rxdid)
+{
+ switch (rxdid) {
+ case IAVF_RXDID_COMMS_AUX_VLAN:
+ rxq->xtr_ol_flag = rte_pmd_ifd_dynflag_proto_xtr_vlan_mask;
+ rxq->rxd_to_pkt_fields =
+ iavf_rxd_to_pkt_fields_by_comms_aux_v1;
+ break;
+ case IAVF_RXDID_COMMS_AUX_IPV4:
+ rxq->xtr_ol_flag = rte_pmd_ifd_dynflag_proto_xtr_ipv4_mask;
+ rxq->rxd_to_pkt_fields =
+ iavf_rxd_to_pkt_fields_by_comms_aux_v1;
+ break;
+ case IAVF_RXDID_COMMS_AUX_IPV6:
+ rxq->xtr_ol_flag = rte_pmd_ifd_dynflag_proto_xtr_ipv6_mask;
+ rxq->rxd_to_pkt_fields =
+ iavf_rxd_to_pkt_fields_by_comms_aux_v1;
+ break;
+ case IAVF_RXDID_COMMS_AUX_IPV6_FLOW:
+ rxq->xtr_ol_flag =
+ rte_pmd_ifd_dynflag_proto_xtr_ipv6_flow_mask;
+ rxq->rxd_to_pkt_fields =
+ iavf_rxd_to_pkt_fields_by_comms_aux_v1;
+ break;
+ case IAVF_RXDID_COMMS_AUX_TCP:
+ rxq->xtr_ol_flag = rte_pmd_ifd_dynflag_proto_xtr_tcp_mask;
+ rxq->rxd_to_pkt_fields =
+ iavf_rxd_to_pkt_fields_by_comms_aux_v1;
+ break;
+ case IAVF_RXDID_COMMS_AUX_IP_OFFSET:
+ rxq->xtr_ol_flag =
+ rte_pmd_ifd_dynflag_proto_xtr_ip_offset_mask;
+ rxq->rxd_to_pkt_fields =
+ iavf_rxd_to_pkt_fields_by_comms_aux_v2;
+ break;
+ case IAVF_RXDID_COMMS_OVS_1:
+ rxq->rxd_to_pkt_fields = iavf_rxd_to_pkt_fields_by_comms_ovs;
+ break;
+ default:
+ /* update this according to the RXDID for FLEX_DESC_NONE */
+ rxq->rxd_to_pkt_fields = iavf_rxd_to_pkt_fields_by_comms_ovs;
+ break;
+ }
+
+ if (!rte_pmd_ifd_dynf_proto_xtr_metadata_avail())
+ rxq->xtr_ol_flag = 0;
+}
+
int
iavf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
uint16_t nb_desc, unsigned int socket_id,
@@ -310,6 +493,7 @@ iavf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
struct iavf_rx_queue *rxq;
const struct rte_memzone *mz;
uint32_t ring_size;
+ uint8_t proto_xtr;
uint16_t len;
uint16_t rx_free_thresh;
@@ -347,14 +531,18 @@ iavf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
return -ENOMEM;
}
- if (vf->vf_res->vf_cap_flags &
- VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC &&
- vf->supported_rxdid & BIT(IAVF_RXDID_COMMS_OVS_1)) {
- rxq->rxdid = IAVF_RXDID_COMMS_OVS_1;
+ if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC) {
+ proto_xtr = vf->proto_xtr ? vf->proto_xtr[queue_idx] :
+ IAVF_PROTO_XTR_NONE;
+ rxq->rxdid = iavf_proto_xtr_type_to_rxdid(proto_xtr);
+ rxq->proto_xtr = proto_xtr;
} else {
rxq->rxdid = IAVF_RXDID_LEGACY_1;
+ rxq->proto_xtr = IAVF_PROTO_XTR_NONE;
}
+ iavf_select_rxd_to_pkt_fields_handler(rxq, rxq->rxdid);
+
rxq->mp = mp;
rxq->nb_rx_desc = nb_desc;
rxq->rx_free_thresh = rx_free_thresh;
@@ -735,6 +923,14 @@ iavf_stop_queues(struct rte_eth_dev *dev)
}
}
+#define IAVF_RX_FLEX_ERR0_BITS \
+ ((1 << IAVF_RX_FLEX_DESC_STATUS0_HBO_S) | \
+ (1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_IPE_S) | \
+ (1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_L4E_S) | \
+ (1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_EIPE_S) | \
+ (1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_EUDPE_S) | \
+ (1 << IAVF_RX_FLEX_DESC_STATUS0_RXE_S))
+
static inline void
iavf_rxd_to_vlan_tci(struct rte_mbuf *mb, volatile union iavf_rx_desc *rxdp)
{
@@ -760,6 +956,21 @@ iavf_flex_rxd_to_vlan_tci(struct rte_mbuf *mb,
} else {
mb->vlan_tci = 0;
}
+
+#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
+ if (rte_le_to_cpu_16(rxdp->wb.status_error1) &
+ (1 << IAVF_RX_FLEX_DESC_STATUS1_L2TAG2P_S)) {
+ mb->ol_flags |= PKT_RX_QINQ_STRIPPED | PKT_RX_QINQ |
+ PKT_RX_VLAN_STRIPPED | PKT_RX_VLAN;
+ mb->vlan_tci_outer = mb->vlan_tci;
+ mb->vlan_tci = rte_le_to_cpu_16(rxdp->wb.l2tag2_2nd);
+ PMD_RX_LOG(DEBUG, "Descriptor l2tag2_1: %u, l2tag2_2: %u",
+ rte_le_to_cpu_16(rxdp->wb.l2tag2_1st),
+ rte_le_to_cpu_16(rxdp->wb.l2tag2_2nd));
+ } else {
+ mb->vlan_tci_outer = 0;
+ }
+#endif
}
/* Translate the rx descriptor status and error fields to pkt flags */
@@ -824,30 +1035,6 @@ iavf_rxd_build_fdir(volatile union iavf_rx_desc *rxdp, struct rte_mbuf *mb)
return flags;
}
-
-/* Translate the rx flex descriptor status to pkt flags */
-static inline void
-iavf_rxd_to_pkt_fields(struct rte_mbuf *mb,
- volatile union iavf_rx_flex_desc *rxdp)
-{
- volatile struct iavf_32b_rx_flex_desc_comms_ovs *desc =
- (volatile struct iavf_32b_rx_flex_desc_comms_ovs *)rxdp;
-#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
- uint16_t stat_err;
-
- stat_err = rte_le_to_cpu_16(desc->status_error0);
- if (likely(stat_err & (1 << IAVF_RX_FLEX_DESC_STATUS0_RSS_VALID_S))) {
- mb->ol_flags |= PKT_RX_RSS_HASH;
- mb->hash.rss = rte_le_to_cpu_32(desc->rss_hash);
- }
-#endif
-
- if (desc->flow_id != 0xFFFFFFFF) {
- mb->ol_flags |= PKT_RX_FDIR | PKT_RX_FDIR_ID;
- mb->hash.fdir.hi = rte_le_to_cpu_32(desc->flow_id);
- }
-}
-
#define IAVF_RX_FLEX_ERR0_BITS \
((1 << IAVF_RX_FLEX_DESC_STATUS0_HBO_S) | \
(1 << IAVF_RX_FLEX_DESC_STATUS0_XSUM_IPE_S) | \
@@ -1102,7 +1289,7 @@ iavf_recv_pkts_flex_rxd(void *rx_queue,
rxm->packet_type = ptype_tbl[IAVF_RX_FLEX_DESC_PTYPE_M &
rte_le_to_cpu_16(rxd.wb.ptype_flex_flags0)];
iavf_flex_rxd_to_vlan_tci(rxm, &rxd);
- iavf_rxd_to_pkt_fields(rxm, &rxd);
+ rxq->rxd_to_pkt_fields(rxq, rxm, &rxd);
pkt_flags = iavf_flex_rxd_error_to_pkt_flags(rx_stat_err0);
rxm->ol_flags |= pkt_flags;
@@ -1243,7 +1430,7 @@ iavf_recv_scattered_pkts_flex_rxd(void *rx_queue, struct rte_mbuf **rx_pkts,
first_seg->packet_type = ptype_tbl[IAVF_RX_FLEX_DESC_PTYPE_M &
rte_le_to_cpu_16(rxd.wb.ptype_flex_flags0)];
iavf_flex_rxd_to_vlan_tci(first_seg, &rxd);
- iavf_rxd_to_pkt_fields(first_seg, &rxd);
+ rxq->rxd_to_pkt_fields(rxq, first_seg, &rxd);
pkt_flags = iavf_flex_rxd_error_to_pkt_flags(rx_stat_err0);
first_seg->ol_flags |= pkt_flags;
@@ -1480,7 +1667,7 @@ iavf_rx_scan_hw_ring_flex_rxd(struct iavf_rx_queue *rxq)
mb->packet_type = ptype_tbl[IAVF_RX_FLEX_DESC_PTYPE_M &
rte_le_to_cpu_16(rxdp[j].wb.ptype_flex_flags0)];
iavf_flex_rxd_to_vlan_tci(mb, &rxdp[j]);
- iavf_rxd_to_pkt_fields(mb, &rxdp[j]);
+ rxq->rxd_to_pkt_fields(rxq, mb, &rxdp[j]);
stat_err0 = rte_le_to_cpu_16(rxdp[j].wb.status_error0);
pkt_flags = iavf_flex_rxd_error_to_pkt_flags(stat_err0);
@@ -1672,7 +1859,7 @@ rx_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
if (rxq->rx_nb_avail)
return iavf_rx_fill_from_stage(rxq, rx_pkts, nb_pkts);
- if (rxq->rxdid == IAVF_RXDID_COMMS_OVS_1)
+ if (rxq->rxdid >= IAVF_RXDID_FLEX_NIC && rxq->rxdid <= IAVF_RXDID_LAST)
nb_rx = (uint16_t)iavf_rx_scan_hw_ring_flex_rxd(rxq);
else
nb_rx = (uint16_t)iavf_rx_scan_hw_ring(rxq);
@@ -2119,6 +2306,7 @@ iavf_set_rx_function(struct rte_eth_dev *dev)
struct iavf_adapter *adapter =
IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+
#ifdef RTE_ARCH_X86
struct iavf_rx_queue *rxq;
int i;
diff --git a/drivers/net/iavf/iavf_rxtx.h b/drivers/net/iavf/iavf_rxtx.h
index b22ccc42eb..d4b4935be6 100644
--- a/drivers/net/iavf/iavf_rxtx.h
+++ b/drivers/net/iavf/iavf_rxtx.h
@@ -57,6 +57,78 @@
#define IAVF_TX_OFFLOAD_NOTSUP_MASK \
(PKT_TX_OFFLOAD_MASK ^ IAVF_TX_OFFLOAD_MASK)
+/**
+ * Rx Flex Descriptors
+ * These descriptors are used instead of the legacy version descriptors
+ */
+union iavf_16b_rx_flex_desc {
+ struct {
+ __le64 pkt_addr; /* Packet buffer address */
+ __le64 hdr_addr; /* Header buffer address */
+ /* bit 0 of hdr_addr is DD bit */
+ } read;
+ struct {
+ /* Qword 0 */
+ u8 rxdid; /* descriptor builder profile ID */
+ u8 mir_id_umb_cast; /* mirror=[5:0], umb=[7:6] */
+ __le16 ptype_flex_flags0; /* ptype=[9:0], ff0=[15:10] */
+ __le16 pkt_len; /* [15:14] are reserved */
+ __le16 hdr_len_sph_flex_flags1; /* header=[10:0] */
+ /* sph=[11:11] */
+ /* ff1/ext=[15:12] */
+
+ /* Qword 1 */
+ __le16 status_error0;
+ __le16 l2tag1;
+ __le16 flex_meta0;
+ __le16 flex_meta1;
+ } wb; /* writeback */
+};
+
+union iavf_32b_rx_flex_desc {
+ struct {
+ __le64 pkt_addr; /* Packet buffer address */
+ __le64 hdr_addr; /* Header buffer address */
+ /* bit 0 of hdr_addr is DD bit */
+ __le64 rsvd1;
+ __le64 rsvd2;
+ } read;
+ struct {
+ /* Qword 0 */
+ u8 rxdid; /* descriptor builder profile ID */
+ u8 mir_id_umb_cast; /* mirror=[5:0], umb=[7:6] */
+ __le16 ptype_flex_flags0; /* ptype=[9:0], ff0=[15:10] */
+ __le16 pkt_len; /* [15:14] are reserved */
+ __le16 hdr_len_sph_flex_flags1; /* header=[10:0] */
+ /* sph=[11:11] */
+ /* ff1/ext=[15:12] */
+
+ /* Qword 1 */
+ __le16 status_error0;
+ __le16 l2tag1;
+ __le16 flex_meta0;
+ __le16 flex_meta1;
+
+ /* Qword 2 */
+ __le16 status_error1;
+ u8 flex_flags2;
+ u8 time_stamp_low;
+ __le16 l2tag2_1st;
+ __le16 l2tag2_2nd;
+
+ /* Qword 3 */
+ __le16 flex_meta2;
+ __le16 flex_meta3;
+ union {
+ struct {
+ __le16 flex_meta4;
+ __le16 flex_meta5;
+ } flex;
+ __le32 ts_high;
+ } flex_ts;
+ } wb; /* writeback */
+};
+
/* HW desc structure, both 16-byte and 32-byte types are supported */
#ifdef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
#define iavf_rx_desc iavf_16byte_rx_desc
@@ -66,6 +138,10 @@
#define iavf_rx_flex_desc iavf_32b_rx_flex_desc
#endif
+typedef void (*iavf_rxd_to_pkt_fields_t)(struct iavf_rx_queue *rxq,
+ struct rte_mbuf *mb,
+ volatile union iavf_rx_flex_desc *rxdp);
+
struct iavf_rxq_ops {
void (*release_mbufs)(struct iavf_rx_queue *rxq);
};
@@ -114,6 +190,11 @@ struct iavf_rx_queue {
bool q_set; /* if rx queue has been configured */
bool rx_deferred_start; /* don't start this queue in dev start */
const struct iavf_rxq_ops *ops;
+ uint8_t proto_xtr; /* protocol extraction type */
+ uint64_t xtr_ol_flag;
+ /* flexible descriptor metadata extraction offload flag */
+ iavf_rxd_to_pkt_fields_t rxd_to_pkt_fields;
+ /* handle flexible descriptor by RXDID */
};
struct iavf_tx_entry {
@@ -165,77 +246,6 @@ union iavf_tx_offload {
};
};
-/* Rx Flex Descriptors
- * These descriptors are used instead of the legacy version descriptors
- */
-union iavf_16b_rx_flex_desc {
- struct {
- __le64 pkt_addr; /* Packet buffer address */
- __le64 hdr_addr; /* Header buffer address */
- /* bit 0 of hdr_addr is DD bit */
- } read;
- struct {
- /* Qword 0 */
- u8 rxdid; /* descriptor builder profile ID */
- u8 mir_id_umb_cast; /* mirror=[5:0], umb=[7:6] */
- __le16 ptype_flex_flags0; /* ptype=[9:0], ff0=[15:10] */
- __le16 pkt_len; /* [15:14] are reserved */
- __le16 hdr_len_sph_flex_flags1; /* header=[10:0] */
- /* sph=[11:11] */
- /* ff1/ext=[15:12] */
-
- /* Qword 1 */
- __le16 status_error0;
- __le16 l2tag1;
- __le16 flex_meta0;
- __le16 flex_meta1;
- } wb; /* writeback */
-};
-
-union iavf_32b_rx_flex_desc {
- struct {
- __le64 pkt_addr; /* Packet buffer address */
- __le64 hdr_addr; /* Header buffer address */
- /* bit 0 of hdr_addr is DD bit */
- __le64 rsvd1;
- __le64 rsvd2;
- } read;
- struct {
- /* Qword 0 */
- u8 rxdid; /* descriptor builder profile ID */
- u8 mir_id_umb_cast; /* mirror=[5:0], umb=[7:6] */
- __le16 ptype_flex_flags0; /* ptype=[9:0], ff0=[15:10] */
- __le16 pkt_len; /* [15:14] are reserved */
- __le16 hdr_len_sph_flex_flags1; /* header=[10:0] */
- /* sph=[11:11] */
- /* ff1/ext=[15:12] */
-
- /* Qword 1 */
- __le16 status_error0;
- __le16 l2tag1;
- __le16 flex_meta0;
- __le16 flex_meta1;
-
- /* Qword 2 */
- __le16 status_error1;
- u8 flex_flags2;
- u8 time_stamp_low;
- __le16 l2tag2_1st;
- __le16 l2tag2_2nd;
-
- /* Qword 3 */
- __le16 flex_meta2;
- __le16 flex_meta3;
- union {
- struct {
- __le16 flex_meta4;
- __le16 flex_meta5;
- } flex;
- __le32 ts_high;
- } flex_ts;
- } wb; /* writeback */
-};
-
/* Rx Flex Descriptor
* RxDID Profile ID 16-21
* Flex-field 0: RSS hash lower 16-bits
@@ -335,6 +345,7 @@ enum iavf_rxdid {
IAVF_RXDID_COMMS_AUX_TCP = 21,
IAVF_RXDID_COMMS_OVS_1 = 22,
IAVF_RXDID_COMMS_OVS_2 = 23,
+ IAVF_RXDID_COMMS_AUX_IP_OFFSET = 25,
IAVF_RXDID_LAST = 63,
};
@@ -359,6 +370,20 @@ enum iavf_rx_flex_desc_status_error_0_bits {
IAVF_RX_FLEX_DESC_STATUS0_LAST /* this entry must be last!!! */
};
+enum iavf_rx_flex_desc_status_error_1_bits {
+ /* Note: These are predefined bit offsets */
+ IAVF_RX_FLEX_DESC_STATUS1_CPM_S = 0, /* 4 bits */
+ IAVF_RX_FLEX_DESC_STATUS1_NAT_S = 4,
+ IAVF_RX_FLEX_DESC_STATUS1_CRYPTO_S = 5,
+ /* [10:6] reserved */
+ IAVF_RX_FLEX_DESC_STATUS1_L2TAG2P_S = 11,
+ IAVF_RX_FLEX_DESC_STATUS1_XTRMD2_VALID_S = 12,
+ IAVF_RX_FLEX_DESC_STATUS1_XTRMD3_VALID_S = 13,
+ IAVF_RX_FLEX_DESC_STATUS1_XTRMD4_VALID_S = 14,
+ IAVF_RX_FLEX_DESC_STATUS1_XTRMD5_VALID_S = 15,
+ IAVF_RX_FLEX_DESC_STATUS1_LAST /* this entry must be last!!! */
+};
+
/* for iavf_32b_rx_flex_desc.ptype_flex_flags0 member */
#define IAVF_RX_FLEX_DESC_PTYPE_M (0x3FF) /* 10-bits */
@@ -457,6 +482,8 @@ uint16_t iavf_xmit_pkts_vec_avx512(void *tx_queue, struct rte_mbuf **tx_pkts,
uint16_t nb_pkts);
int iavf_txq_vec_setup_avx512(struct iavf_tx_queue *txq);
+uint8_t iavf_proto_xtr_type_to_rxdid(uint8_t xtr_type);
+
const uint32_t *iavf_get_default_ptype_table(void);
static inline
diff --git a/drivers/net/iavf/iavf_rxtx_vec_common.h b/drivers/net/iavf/iavf_rxtx_vec_common.h
index 25bb502de2..7ad1e0f68a 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_common.h
+++ b/drivers/net/iavf/iavf_rxtx_vec_common.h
@@ -224,6 +224,9 @@ iavf_rx_vec_queue_default(struct iavf_rx_queue *rxq)
if (rxq->nb_rx_desc % rxq->rx_free_thresh)
return -1;
+ if (rxq->proto_xtr != IAVF_PROTO_XTR_NONE)
+ return -1;
+
return 0;
}
diff --git a/drivers/net/iavf/iavf_vchnl.c b/drivers/net/iavf/iavf_vchnl.c
index 54d9917c0a..64d194670b 100644
--- a/drivers/net/iavf/iavf_vchnl.c
+++ b/drivers/net/iavf/iavf_vchnl.c
@@ -850,25 +850,27 @@ iavf_configure_queues(struct iavf_adapter *adapter,
#ifndef RTE_LIBRTE_IAVF_16BYTE_RX_DESC
if (vf->vf_res->vf_cap_flags &
- VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC &&
- vf->supported_rxdid & BIT(IAVF_RXDID_COMMS_OVS_1)) {
- vc_qp->rxq.rxdid = IAVF_RXDID_COMMS_OVS_1;
- PMD_DRV_LOG(NOTICE, "request RXDID == %d in "
- "Queue[%d]", vc_qp->rxq.rxdid, i);
+ VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC &&
+ vf->supported_rxdid & BIT(rxq[i]->rxdid)) {
+ vc_qp->rxq.rxdid = rxq[i]->rxdid;
+ PMD_DRV_LOG(NOTICE, "request RXDID[%d] in Queue[%d]",
+ vc_qp->rxq.rxdid, i);
} else {
+ PMD_DRV_LOG(NOTICE, "RXDID[%d] is not supported, "
+ "request default RXDID[%d] in Queue[%d]",
+ rxq[i]->rxdid, IAVF_RXDID_LEGACY_1, i);
vc_qp->rxq.rxdid = IAVF_RXDID_LEGACY_1;
- PMD_DRV_LOG(NOTICE, "request RXDID == %d in "
- "Queue[%d]", vc_qp->rxq.rxdid, i);
}
#else
if (vf->vf_res->vf_cap_flags &
VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC &&
vf->supported_rxdid & BIT(IAVF_RXDID_LEGACY_0)) {
vc_qp->rxq.rxdid = IAVF_RXDID_LEGACY_0;
- PMD_DRV_LOG(NOTICE, "request RXDID == %d in "
- "Queue[%d]", vc_qp->rxq.rxdid, i);
+ PMD_DRV_LOG(NOTICE, "request RXDID[%d] in Queue[%d]",
+ vc_qp->rxq.rxdid, i);
} else {
- PMD_DRV_LOG(ERR, "RXDID == 0 is not supported");
+ PMD_DRV_LOG(ERR, "RXDID[%d] is not supported",
+ IAVF_RXDID_LEGACY_0);
return -1;
}
#endif
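The hunk above changes iavf_configure_queues() so that each queue requests its own RXDID only when the PF advertises it in the supported_rxdid bitmap, falling back to the legacy descriptor otherwise. A minimal sketch of that selection logic (Python used for brevity; the function name is illustrative, and IAVF_RXDID_LEGACY_1 = 1 as in the driver's rxdid enum — the real code additionally requires VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC in vf_cap_flags before consulting the bitmap):

```python
IAVF_RXDID_LEGACY_1 = 1  # legacy 32B descriptor, always supported by the PF

def select_rxdid(supported_rxdid: int, requested_rxdid: int) -> int:
    """Pick the RXDID to request from the PF: use the queue's requested
    descriptor ID only if the PF advertises it in the supported_rxdid
    bitmap; otherwise fall back to the legacy descriptor."""
    if supported_rxdid & (1 << requested_rxdid):
        return requested_rxdid
    return IAVF_RXDID_LEGACY_1

# e.g. a queue asking for IP_OFFSET extraction (RXDID 25)
print(select_rxdid(1 << 25, 25))  # PF supports it -> 25
print(select_rxdid(1 << 22, 25))  # PF does not -> falls back to 1
```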
diff --git a/drivers/net/iavf/meson.build b/drivers/net/iavf/meson.build
index 3388cdf407..e257f5a6e1 100644
--- a/drivers/net/iavf/meson.build
+++ b/drivers/net/iavf/meson.build
@@ -55,3 +55,5 @@ if arch_subdir == 'x86'
objs += iavf_avx512_lib.extract_objects('iavf_rxtx_vec_avx512.c')
endif
endif
+
+headers = files('rte_pmd_iavf.h')
diff --git a/drivers/net/iavf/rte_pmd_iavf.h b/drivers/net/iavf/rte_pmd_iavf.h
new file mode 100644
index 0000000000..955084e197
--- /dev/null
+++ b/drivers/net/iavf/rte_pmd_iavf.h
@@ -0,0 +1,250 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#ifndef _RTE_PMD_IAVF_H_
+#define _RTE_PMD_IAVF_H_
+
+/**
+ * @file rte_pmd_iavf.h
+ *
+ * iavf PMD specific functions.
+ *
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
+ *
+ */
+
+#include <stdio.h>
+#include <rte_mbuf.h>
+#include <rte_mbuf_dyn.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * The supported flexible descriptor extraction metadata formats.
+ */
+union rte_pmd_ifd_proto_xtr_metadata {
+ uint32_t metadata;
+
+ struct {
+ uint16_t data0;
+ uint16_t data1;
+ } raw;
+
+ struct {
+ uint16_t stag_vid:12,
+ stag_dei:1,
+ stag_pcp:3;
+ uint16_t ctag_vid:12,
+ ctag_dei:1,
+ ctag_pcp:3;
+ } vlan;
+
+ struct {
+ uint16_t protocol:8,
+ ttl:8;
+ uint16_t tos:8,
+ ihl:4,
+ version:4;
+ } ipv4;
+
+ struct {
+ uint16_t hoplimit:8,
+ nexthdr:8;
+ uint16_t flowhi4:4,
+ tc:8,
+ version:4;
+ } ipv6;
+
+ struct {
+ uint16_t flowlo16;
+ uint16_t flowhi4:4,
+ tc:8,
+ version:4;
+ } ipv6_flow;
+
+ struct {
+ uint16_t fin:1,
+ syn:1,
+ rst:1,
+ psh:1,
+ ack:1,
+ urg:1,
+ ece:1,
+ cwr:1,
+ res1:4,
+ doff:4;
+ uint16_t rsvd;
+ } tcp;
+
+ uint32_t ip_ofs;
+};
+
+/* Offset of mbuf dynamic field for flexible descriptor's extraction data */
+extern int rte_pmd_ifd_dynfield_proto_xtr_metadata_offs;
+
+/* Mask of mbuf dynamic flags for flexible descriptor's extraction type */
+extern uint64_t rte_pmd_ifd_dynflag_proto_xtr_vlan_mask;
+extern uint64_t rte_pmd_ifd_dynflag_proto_xtr_ipv4_mask;
+extern uint64_t rte_pmd_ifd_dynflag_proto_xtr_ipv6_mask;
+extern uint64_t rte_pmd_ifd_dynflag_proto_xtr_ipv6_flow_mask;
+extern uint64_t rte_pmd_ifd_dynflag_proto_xtr_tcp_mask;
+extern uint64_t rte_pmd_ifd_dynflag_proto_xtr_ip_offset_mask;
+
+/**
+ * The mbuf dynamic field pointer for flexible descriptor's extraction metadata.
+ */
+#define RTE_PMD_IFD_DYNF_PROTO_XTR_METADATA(m) \
+ RTE_MBUF_DYNFIELD((m), \
+ rte_pmd_ifd_dynfield_proto_xtr_metadata_offs, \
+ uint32_t *)
+
+/**
+ * The mbuf dynamic flag for VLAN protocol extraction metadata. It is valid
+ * when dev_args 'proto_xtr' has 'vlan' specified.
+ */
+#define RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_VLAN \
+ (rte_pmd_ifd_dynflag_proto_xtr_vlan_mask)
+
+/**
+ * The mbuf dynamic flag for IPv4 protocol extraction metadata. It is valid
+ * when dev_args 'proto_xtr' has 'ipv4' specified.
+ */
+#define RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_IPV4 \
+ (rte_pmd_ifd_dynflag_proto_xtr_ipv4_mask)
+
+/**
+ * The mbuf dynamic flag for IPv6 protocol extraction metadata. It is valid
+ * when dev_args 'proto_xtr' has 'ipv6' specified.
+ */
+#define RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_IPV6 \
+ (rte_pmd_ifd_dynflag_proto_xtr_ipv6_mask)
+
+/**
+ * The mbuf dynamic flag for IPv6 with flow protocol extraction metadata. It
+ * is valid when dev_args 'proto_xtr' has 'ipv6_flow' specified.
+ */
+#define RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_IPV6_FLOW \
+ (rte_pmd_ifd_dynflag_proto_xtr_ipv6_flow_mask)
+
+/**
+ * The mbuf dynamic flag for TCP protocol extraction metadata. It is valid
+ * when dev_args 'proto_xtr' has 'tcp' specified.
+ */
+#define RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_TCP \
+ (rte_pmd_ifd_dynflag_proto_xtr_tcp_mask)
+
+/**
+ * The mbuf dynamic flag for IP_OFFSET extraction metadata. It is valid
+ * when dev_args 'proto_xtr' has 'ip_offset' specified.
+ */
+#define RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_IP_OFFSET \
+ (rte_pmd_ifd_dynflag_proto_xtr_ip_offset_mask)
+
+/**
+ * Check if mbuf dynamic field for flexible descriptor's extraction metadata
+ * is registered.
+ *
+ * @return
+ * True if registered, false otherwise.
+ */
+__rte_experimental
+static __rte_always_inline int
+rte_pmd_ifd_dynf_proto_xtr_metadata_avail(void)
+{
+ return rte_pmd_ifd_dynfield_proto_xtr_metadata_offs != -1;
+}
+
+/**
+ * Get the mbuf dynamic field for flexible descriptor's extraction metadata.
+ *
+ * @param m
+ * The pointer to the mbuf.
+ * @return
+ * The saved protocol extraction metadata.
+ */
+__rte_experimental
+static __rte_always_inline uint32_t
+rte_pmd_ifd_dynf_proto_xtr_metadata_get(struct rte_mbuf *m)
+{
+ return *RTE_PMD_IFD_DYNF_PROTO_XTR_METADATA(m);
+}
+
+/**
+ * Dump the mbuf dynamic field for flexible descriptor's extraction metadata.
+ *
+ * @param m
+ * The pointer to the mbuf.
+ */
+__rte_experimental
+static inline void
+rte_pmd_ifd_dump_proto_xtr_metadata(struct rte_mbuf *m)
+{
+ union rte_pmd_ifd_proto_xtr_metadata data;
+
+ if (!rte_pmd_ifd_dynf_proto_xtr_metadata_avail())
+ return;
+
+ data.metadata = rte_pmd_ifd_dynf_proto_xtr_metadata_get(m);
+
+ if (m->ol_flags & RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_VLAN)
+ printf(" - Flexible descriptor's Metadata: [0x%04x:0x%04x],"
+ "vlan,stag=%u:%u:%u,ctag=%u:%u:%u",
+ data.raw.data0, data.raw.data1,
+ data.vlan.stag_pcp,
+ data.vlan.stag_dei,
+ data.vlan.stag_vid,
+ data.vlan.ctag_pcp,
+ data.vlan.ctag_dei,
+ data.vlan.ctag_vid);
+ else if (m->ol_flags & RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_IPV4)
+ printf(" - Flexible descriptor's Metadata: [0x%04x:0x%04x],"
+ "ipv4,ver=%u,hdrlen=%u,tos=%u,ttl=%u,proto=%u",
+ data.raw.data0, data.raw.data1,
+ data.ipv4.version,
+ data.ipv4.ihl,
+ data.ipv4.tos,
+ data.ipv4.ttl,
+ data.ipv4.protocol);
+ else if (m->ol_flags & RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_IPV6)
+ printf(" - Flexible descriptor's Metadata: [0x%04x:0x%04x],"
+ "ipv6,ver=%u,tc=%u,flow_hi4=0x%x,nexthdr=%u,hoplimit=%u",
+ data.raw.data0, data.raw.data1,
+ data.ipv6.version,
+ data.ipv6.tc,
+ data.ipv6.flowhi4,
+ data.ipv6.nexthdr,
+ data.ipv6.hoplimit);
+ else if (m->ol_flags & RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_IPV6_FLOW)
+ printf(" - Flexible descriptor's Metadata: [0x%04x:0x%04x],"
+ "ipv6_flow,ver=%u,tc=%u,flow=0x%x%04x",
+ data.raw.data0, data.raw.data1,
+ data.ipv6_flow.version,
+ data.ipv6_flow.tc,
+ data.ipv6_flow.flowhi4,
+ data.ipv6_flow.flowlo16);
+ else if (m->ol_flags & RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_TCP)
+ printf(" - Flexible descriptor's Metadata: [0x%04x:0x%04x],"
+ "tcp,doff=%u,flags=%s%s%s%s%s%s%s%s",
+ data.raw.data0, data.raw.data1,
+ data.tcp.doff,
+ data.tcp.cwr ? "C" : "",
+ data.tcp.ece ? "E" : "",
+ data.tcp.urg ? "U" : "",
+ data.tcp.ack ? "A" : "",
+ data.tcp.psh ? "P" : "",
+ data.tcp.rst ? "R" : "",
+ data.tcp.syn ? "S" : "",
+ data.tcp.fin ? "F" : "");
+ else if (m->ol_flags & RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_IP_OFFSET)
+ printf(" - Flexible descriptor's Extraction: ip_offset=%u",
+ data.ip_ofs);
+}
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_PMD_IAVF_H_ */
diff --git a/drivers/net/iavf/version.map b/drivers/net/iavf/version.map
index 4a76d1d52d..2a411da2e9 100644
--- a/drivers/net/iavf/version.map
+++ b/drivers/net/iavf/version.map
@@ -1,3 +1,16 @@
DPDK_21 {
local: *;
};
+
+EXPERIMENTAL {
+ global:
+
+ # added in 20.11
+ rte_pmd_ifd_dynfield_proto_xtr_metadata_offs;
+ rte_pmd_ifd_dynflag_proto_xtr_vlan_mask;
+ rte_pmd_ifd_dynflag_proto_xtr_ipv4_mask;
+ rte_pmd_ifd_dynflag_proto_xtr_ipv6_mask;
+ rte_pmd_ifd_dynflag_proto_xtr_ipv6_flow_mask;
+ rte_pmd_ifd_dynflag_proto_xtr_tcp_mask;
+ rte_pmd_ifd_dynflag_proto_xtr_ip_offset_mask;
+};
--
2.20.1
^ permalink raw reply [flat|nested] 40+ messages in thread
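The rte_pmd_ifd_proto_xtr_metadata union introduced by the header above overlays several bit-field layouts on a single 32-bit word; which layout applies is indicated by the RTE_IAVF_PKT_RX_DYNF_PROTO_XTR_* flag set in the mbuf's ol_flags. As a sketch of the VLAN layout (Python for illustration; the helper name is hypothetical, and it assumes the little-endian, LSB-first bit-field allocation used by the compilers the driver targets):

```python
def decode_vlan_xtr(metadata: int) -> dict:
    """Decode 32-bit VLAN extraction metadata per the union layout:
    data0 (low 16 bits) holds the outer S-tag, data1 (high 16 bits)
    the inner C-tag; each tag is vid:12, dei:1, pcp:3 from the LSB."""
    stag, ctag = metadata & 0xFFFF, (metadata >> 16) & 0xFFFF

    def tag(word: int) -> dict:
        return {"vid": word & 0xFFF,
                "dei": (word >> 12) & 0x1,
                "pcp": (word >> 13) & 0x7}

    return {"stag": tag(stag), "ctag": tag(ctag)}

# S-tag vid=100/pcp=3, C-tag vid=200/pcp=1 packed into one word
meta = ((200 | (1 << 13)) << 16) | (100 | (3 << 13))
print(decode_vlan_xtr(meta))
```

The ipv4, ipv6, ipv6_flow, and tcp variants follow the same pattern, each re-slicing data0/data1 according to its struct in the union.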
* Re: [dpdk-dev] [PATCH v12] net/iavf: support flex desc metadata extraction
2020-10-30 8:40 ` Jeff Guo
@ 2020-10-30 9:35 ` Zhang, Qi Z
2020-10-30 10:51 ` Ferruh Yigit
1 sibling, 0 replies; 40+ messages in thread
From: Zhang, Qi Z @ 2020-10-30 9:35 UTC (permalink / raw)
To: Guo, Jia, Wu, Jingjing, Xing, Beilei, Yigit, Ferruh
Cc: dev, Wang, Haiyue, Richardson, Bruce
> -----Original Message-----
> From: Guo, Jia <jia.guo@intel.com>
> Sent: Friday, October 30, 2020 4:41 PM
> To: Wu, Jingjing <jingjing.wu@intel.com>; Zhang, Qi Z <qi.z.zhang@intel.com>;
> Xing, Beilei <beilei.xing@intel.com>; Yigit, Ferruh <ferruh.yigit@intel.com>
> Cc: dev@dpdk.org; Wang, Haiyue <haiyue.wang@intel.com>; Richardson,
> Bruce <bruce.richardson@intel.com>; Guo, Jia <jia.guo@intel.com>
> Subject: [PATCH v12] net/iavf: support flex desc metadata extraction
>
> Enable metadata extraction for flexible descriptors in AVF, that would allow
> network function directly get metadata without additional parsing which
> would reduce the CPU cost for VFs. The enabling metadata extractions involve
> the metadata of VLAN/IPv4/IPv6/IPv6-FLOW/TCP/MPLS flexible descriptors,
> and the VF could negotiate the capability of the flexible descriptor with PF and
> correspondingly configure the specific offload at receiving queues.
>
> Signed-off-by: Jeff Guo <jia.guo@intel.com>
> Acked-by: Haiyue Wang <haiyue.wang@intel.com>
Applied to dpdk-next-net-intel after revert v10
Thanks
Qi
* Re: [dpdk-dev] [PATCH v12] net/iavf: support flex desc metadata extraction
2020-10-30 8:40 ` Jeff Guo
2020-10-30 9:35 ` Zhang, Qi Z
@ 2020-10-30 10:51 ` Ferruh Yigit
2020-10-30 11:14 ` Zhang, Qi Z
1 sibling, 1 reply; 40+ messages in thread
From: Ferruh Yigit @ 2020-10-30 10:51 UTC (permalink / raw)
To: Jeff Guo, jingjing.wu, qi.z.zhang, beilei.xing
Cc: dev, haiyue.wang, bruce.richardson
On 10/30/2020 8:40 AM, Jeff Guo wrote:
> Enable metadata extraction for flexible descriptors in AVF, that would
> allow network function directly get metadata without additional parsing
> which would reduce the CPU cost for VFs. The enabling metadata
> extractions involve the metadata of VLAN/IPv4/IPv6/IPv6-FLOW/TCP/MPLS
> flexible descriptors, and the VF could negotiate the capability of
> the flexible descriptor with PF and correspondingly configure the
> specific offload at receiving queues.
>
> Signed-off-by: Jeff Guo <jia.guo@intel.com>
> Acked-by: Haiyue Wang <haiyue.wang@intel.com>
> ---
> v12:
> refine doc to be briefly
>
> v11:
> update doc in .map and .rst
>
> v10:
> delete the makefile configure and rename the dynamic mbuf name
>
> v9:
> change the undef config
>
> v8:
> rebase patch for apply issue
>
> v7:
> clean some useless and add doc
>
> v6:
> rebase patch
>
> v5:
> remove ovs configure since ovs is not protocol extraction
>
> v4:
> add flex desc type in rx queue for handling vector path
> handle ovs flex type
>
> v3:
> export these global symbols into .map
>
> v2:
> remove makefile change and modify the rxdid handling
> ---
> doc/guides/nics/intel_vf.rst | 4 +
> doc/guides/rel_notes/release_20_11.rst | 6 +
> drivers/net/iavf/iavf.h | 24 +-
> drivers/net/iavf/iavf_ethdev.c | 394 ++++++++++++++++++++++++
> drivers/net/iavf/iavf_rxtx.c | 252 +++++++++++++--
> drivers/net/iavf/iavf_rxtx.h | 169 +++++-----
> drivers/net/iavf/iavf_rxtx_vec_common.h | 3 +
> drivers/net/iavf/iavf_vchnl.c | 22 +-
> drivers/net/iavf/meson.build | 2 +
> drivers/net/iavf/rte_pmd_iavf.h | 250 +++++++++++++++
We should add this public header to the API documentation; if that is the only
change needed, I can do it while merging. Something like:
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index a9c12d1a2f..36f8ed7ba8 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -41,6 +41,7 @@ The public API headers are grouped by topics:
[vhost] (@ref rte_vhost.h),
[vdpa] (@ref rte_vdpa.h),
[KNI] (@ref rte_kni.h),
+ [iavf] (@ref rte_pmd_iavf.h),
[ixgbe] (@ref rte_pmd_ixgbe.h),
[i40e] (@ref rte_pmd_i40e.h),
[ice] (@ref rte_pmd_ice.h),
* Re: [dpdk-dev] [PATCH v12] net/iavf: support flex desc metadata extraction
2020-10-30 10:51 ` Ferruh Yigit
@ 2020-10-30 11:14 ` Zhang, Qi Z
2020-10-30 16:03 ` Ferruh Yigit
0 siblings, 1 reply; 40+ messages in thread
From: Zhang, Qi Z @ 2020-10-30 11:14 UTC (permalink / raw)
To: Yigit, Ferruh, Guo, Jia, Wu, Jingjing, Xing, Beilei
Cc: dev, Wang, Haiyue, Richardson, Bruce
> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@intel.com>
> Sent: Friday, October 30, 2020 6:52 PM
> To: Guo, Jia <jia.guo@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>;
> Zhang, Qi Z <qi.z.zhang@intel.com>; Xing, Beilei <beilei.xing@intel.com>
> Cc: dev@dpdk.org; Wang, Haiyue <haiyue.wang@intel.com>; Richardson,
> Bruce <bruce.richardson@intel.com>
> Subject: Re: [PATCH v12] net/iavf: support flex desc metadata extraction
>
> On 10/30/2020 8:40 AM, Jeff Guo wrote:
> > Enable metadata extraction for flexible descriptors in AVF, that would
> > allow network function directly get metadata without additional
> > parsing which would reduce the CPU cost for VFs. The enabling metadata
> > extractions involve the metadata of VLAN/IPv4/IPv6/IPv6-FLOW/TCP/MPLS
> > flexible descriptors, and the VF could negotiate the capability of the
> > flexible descriptor with PF and correspondingly configure the specific
> > offload at receiving queues.
> >
> > Signed-off-by: Jeff Guo <jia.guo@intel.com>
> > Acked-by: Haiyue Wang <haiyue.wang@intel.com>
> > ---
> > v12:
> > refine doc to be briefly
> >
> > v11:
> > update doc in .map and .rst
> >
> > v10:
> > delete the makefile configure and rename the dynamic mbuf name
> >
> > v9:
> > change the undef config
> >
> > v8:
> > rebase patch for apply issue
> >
> > v7:
> > clean some useless and add doc
> >
> > v6:
> > rebase patch
> >
> > v5:
> > remove ovs configure since ovs is not protocol extraction
> >
> > v4:
> > add flex desc type in rx queue for handling vector path handle ovs
> > flex type
> >
> > v3:
> > export these global symbols into .map
> >
> > v2:
> > remove makefile change and modify the rxdid handling
> > ---
> > doc/guides/nics/intel_vf.rst | 4 +
> > doc/guides/rel_notes/release_20_11.rst | 6 +
> > drivers/net/iavf/iavf.h | 24 +-
> > drivers/net/iavf/iavf_ethdev.c | 394
> ++++++++++++++++++++++++
> > drivers/net/iavf/iavf_rxtx.c | 252 +++++++++++++--
> > drivers/net/iavf/iavf_rxtx.h | 169 +++++-----
> > drivers/net/iavf/iavf_rxtx_vec_common.h | 3 +
> > drivers/net/iavf/iavf_vchnl.c | 22 +-
> > drivers/net/iavf/meson.build | 2 +
> > drivers/net/iavf/rte_pmd_iavf.h | 250 +++++++++++++++
>
> We should add this public header to the API documentation, if that is the only
> change I can do while merging. Something like:
Yes, I think this is the only change, thanks.
> diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
> index a9c12d1a2f..36f8ed7ba8 100644
> --- a/doc/api/doxy-api-index.md
> +++ b/doc/api/doxy-api-index.md
> @@ -41,6 +41,7 @@ The public API headers are grouped by topics:
> [vhost] (@ref rte_vhost.h),
> [vdpa] (@ref rte_vdpa.h),
> [KNI] (@ref rte_kni.h),
> + [iavf] (@ref rte_pmd_iavf.h),
> [ixgbe] (@ref rte_pmd_ixgbe.h),
> [i40e] (@ref rte_pmd_i40e.h),
> [ice] (@ref rte_pmd_ice.h),
* Re: [dpdk-dev] [PATCH v12] net/iavf: support flex desc metadata extraction
2020-10-30 11:14 ` Zhang, Qi Z
@ 2020-10-30 16:03 ` Ferruh Yigit
0 siblings, 0 replies; 40+ messages in thread
From: Ferruh Yigit @ 2020-10-30 16:03 UTC (permalink / raw)
To: Zhang, Qi Z, Guo, Jia, Wu, Jingjing, Xing, Beilei
Cc: dev, Wang, Haiyue, Richardson, Bruce
On 10/30/2020 11:14 AM, Zhang, Qi Z wrote:
>
>
>> -----Original Message-----
>> From: Ferruh Yigit <ferruh.yigit@intel.com>
>> Sent: Friday, October 30, 2020 6:52 PM
>> To: Guo, Jia <jia.guo@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>;
>> Zhang, Qi Z <qi.z.zhang@intel.com>; Xing, Beilei <beilei.xing@intel.com>
>> Cc: dev@dpdk.org; Wang, Haiyue <haiyue.wang@intel.com>; Richardson,
>> Bruce <bruce.richardson@intel.com>
>> Subject: Re: [PATCH v12] net/iavf: support flex desc metadata extraction
>>
>> On 10/30/2020 8:40 AM, Jeff Guo wrote:
>>> Enable metadata extraction for flexible descriptors in AVF, that would
>>> allow network function directly get metadata without additional
>>> parsing which would reduce the CPU cost for VFs. The enabling metadata
>>> extractions involve the metadata of VLAN/IPv4/IPv6/IPv6-FLOW/TCP/MPLS
>>> flexible descriptors, and the VF could negotiate the capability of the
>>> flexible descriptor with PF and correspondingly configure the specific
>>> offload at receiving queues.
>>>
>>> Signed-off-by: Jeff Guo <jia.guo@intel.com>
>>> Acked-by: Haiyue Wang <haiyue.wang@intel.com>
>>> ---
>>> v12:
>>> refine doc to be briefly
>>>
>>> v11:
>>> update doc in .map and .rst
>>>
>>> v10:
>>> delete the makefile configure and rename the dynamic mbuf name
>>>
>>> v9:
>>> change the undef config
>>>
>>> v8:
>>> rebase patch for apply issue
>>>
>>> v7:
>>> clean some useless and add doc
>>>
>>> v6:
>>> rebase patch
>>>
>>> v5:
>>> remove ovs configure since ovs is not protocol extraction
>>>
>>> v4:
>>> add flex desc type in rx queue for handling vector path handle ovs
>>> flex type
>>>
>>> v3:
>>> export these global symbols into .map
>>>
>>> v2:
>>> remove makefile change and modify the rxdid handling
>>> ---
>>> doc/guides/nics/intel_vf.rst | 4 +
>>> doc/guides/rel_notes/release_20_11.rst | 6 +
>>> drivers/net/iavf/iavf.h | 24 +-
>>> drivers/net/iavf/iavf_ethdev.c | 394
>> ++++++++++++++++++++++++
>>> drivers/net/iavf/iavf_rxtx.c | 252 +++++++++++++--
>>> drivers/net/iavf/iavf_rxtx.h | 169 +++++-----
>>> drivers/net/iavf/iavf_rxtx_vec_common.h | 3 +
>>> drivers/net/iavf/iavf_vchnl.c | 22 +-
>>> drivers/net/iavf/meson.build | 2 +
>>> drivers/net/iavf/rte_pmd_iavf.h | 250 +++++++++++++++
>>
>> We should add this public header to the API documentation, if that is the only
>> change I can do while merging. Something like:
>
> Yes, I think this is the only change, thanks.
>
Added following while merging:
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index c629b5fea9..9c9899c45a 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -44,6 +44,7 @@ The public API headers are grouped by topics:
[ixgbe] (@ref rte_pmd_ixgbe.h),
[i40e] (@ref rte_pmd_i40e.h),
[ice] (@ref rte_pmd_ice.h),
+ [iavf] (@ref rte_pmd_iavf.h),
[ioat] (@ref rte_ioat_rawdev.h),
[bnxt] (@ref rte_pmd_bnxt.h),
[dpaa] (@ref rte_pmd_dpaa.h),
diff --git a/doc/api/doxy-api.conf.in b/doc/api/doxy-api.conf.in
index 567fe62f8f..6eeabba9e1 100644
--- a/doc/api/doxy-api.conf.in
+++ b/doc/api/doxy-api.conf.in
@@ -13,6 +13,7 @@ INPUT = @TOPDIR@/doc/api/doxy-api-index.md \
@TOPDIR@/drivers/net/dpaa \
@TOPDIR@/drivers/net/dpaa2 \
@TOPDIR@/drivers/net/i40e \
+ @TOPDIR@/drivers/net/iavf \
@TOPDIR@/drivers/net/ice \
@TOPDIR@/drivers/net/ixgbe \
@TOPDIR@/drivers/net/mlx5 \