* [dpdk-dev] [PATCH] RFC: ethdev: add reassembly offload
@ 2021-08-23 10:02 Akhil Goyal
2021-08-23 10:18 ` Andrew Rybchenko
` (3 more replies)
0 siblings, 4 replies; 184+ messages in thread
From: Akhil Goyal @ 2021-08-23 10:02 UTC (permalink / raw)
To: dev
Cc: anoobj, radu.nicolau, declan.doherty, hemant.agrawal, matan,
konstantin.ananyev, thomas, adwivedi, ferruh.yigit,
andrew.rybchenko, Akhil Goyal
Reassembly is a costly operation if it is done in
software; however, if it is offloaded to HW, it can
save considerable application cycles.
The operation becomes even more costly if the IP
fragments are encrypted.
To resolve the above two issues, a new offload,
DEV_RX_OFFLOAD_REASSEMBLY, is introduced in ethdev for
devices which can attempt reassembly of packets in hardware.
rte_eth_dev_info is extended with the reassembly capabilities
which a device can support.
Now, if IP fragments are encrypted, reassembly can also be
attempted while doing inline IPsec processing.
This is controlled by a flag in rte_security_ipsec_sa_options
to enable reassembly of encrypted IP fragments in the inline
path.
The resulting reassembled packet would be a typical
segmented mbuf in case of success.
If reassembly of the fragments fails or is incomplete (i.e.
some fragments do not arrive before reass_timeout), the mbuf
is marked with the ol_flag PKT_RX_REASSEMBLY_INCOMPLETE and
returned as is. The application may then decide the fate of
the packet: wait longer for the remaining fragments, or drop it.
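The receive-path behaviour described above can be sketched as follows. This is an illustrative stand-in, not part of the patch: the mbuf layout is reduced to the one field used, and the flag value mirrors the (1ULL << 23) proposed for rte_mbuf_core.h below.

```c
#include <assert.h>
#include <stdint.h>

/* Stand-in for the proposed flag; the patch defines it as (1ULL << 23)
 * in rte_mbuf_core.h. */
#define PKT_RX_REASSEMBLY_INCOMPLETE (1ULL << 23)

/* Minimal stand-in for struct rte_mbuf: only ol_flags is needed here. */
struct mbuf_stub {
	uint64_t ol_flags;
};

enum verdict {
	DELIVER,      /* fully reassembled (or never fragmented) packet */
	WAIT_OR_DROP  /* incomplete: buffer for SW reassembly, or drop */
};

/* The decision the commit message leaves to the application: a packet
 * marked incomplete is returned as-is and the app chooses its fate. */
static enum verdict
handle_rx(const struct mbuf_stub *m)
{
	if (m->ol_flags & PKT_RX_REASSEMBLY_INCOMPLETE)
		return WAIT_OR_DROP;
	return DELIVER;
}
```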
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
---
lib/ethdev/rte_ethdev.c | 1 +
lib/ethdev/rte_ethdev.h | 18 +++++++++++++++++-
lib/mbuf/rte_mbuf_core.h | 3 ++-
lib/security/rte_security.h | 10 ++++++++++
4 files changed, 30 insertions(+), 2 deletions(-)
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 9d95cd11e1..1ab3a093cf 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -119,6 +119,7 @@ static const struct {
RTE_RX_OFFLOAD_BIT2STR(VLAN_FILTER),
RTE_RX_OFFLOAD_BIT2STR(VLAN_EXTEND),
RTE_RX_OFFLOAD_BIT2STR(JUMBO_FRAME),
+ RTE_RX_OFFLOAD_BIT2STR(REASSEMBLY),
RTE_RX_OFFLOAD_BIT2STR(SCATTER),
RTE_RX_OFFLOAD_BIT2STR(TIMESTAMP),
RTE_RX_OFFLOAD_BIT2STR(SECURITY),
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index d2b27c351f..e89a4dc1eb 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -1360,6 +1360,7 @@ struct rte_eth_conf {
#define DEV_RX_OFFLOAD_VLAN_FILTER 0x00000200
#define DEV_RX_OFFLOAD_VLAN_EXTEND 0x00000400
#define DEV_RX_OFFLOAD_JUMBO_FRAME 0x00000800
+#define DEV_RX_OFFLOAD_REASSEMBLY 0x00001000
#define DEV_RX_OFFLOAD_SCATTER 0x00002000
/**
* Timestamp is set by the driver in RTE_MBUF_DYNFIELD_TIMESTAMP_NAME
@@ -1477,6 +1478,20 @@ struct rte_eth_dev_portconf {
*/
#define RTE_ETH_DEV_SWITCH_DOMAIN_ID_INVALID (UINT16_MAX)
+/**
+ * Reassembly capabilities that a device can support.
+ * The device which can support reassembly offload should set
+ * DEV_RX_OFFLOAD_REASSEMBLY
+ */
+struct rte_eth_reass_capa {
+ /** Maximum time in ns that a fragment can wait for further fragments */
+ uint64_t reass_timeout;
+ /** Maximum number of fragments that device can reassemble */
+ uint16_t max_frags;
+ /** Reserved for future capabilities */
+ uint16_t reserved[3];
+};
+
/**
* Ethernet device associated switch information
*/
@@ -1582,8 +1597,9 @@ struct rte_eth_dev_info {
* embedded managed interconnect/switch.
*/
struct rte_eth_switch_info switch_info;
+ /* Reassembly capabilities of a device for reassembly offload */
+ struct rte_eth_reass_capa reass_capa;
- uint64_t reserved_64s[2]; /**< Reserved for future fields */
void *reserved_ptrs[2]; /**< Reserved for future fields */
};
diff --git a/lib/mbuf/rte_mbuf_core.h b/lib/mbuf/rte_mbuf_core.h
index bb38d7f581..cea25c87f7 100644
--- a/lib/mbuf/rte_mbuf_core.h
+++ b/lib/mbuf/rte_mbuf_core.h
@@ -200,10 +200,11 @@ extern "C" {
#define PKT_RX_OUTER_L4_CKSUM_BAD (1ULL << 21)
#define PKT_RX_OUTER_L4_CKSUM_GOOD (1ULL << 22)
#define PKT_RX_OUTER_L4_CKSUM_INVALID ((1ULL << 21) | (1ULL << 22))
+#define PKT_RX_REASSEMBLY_INCOMPLETE (1ULL << 23)
/* add new RX flags here, don't forget to update PKT_FIRST_FREE */
-#define PKT_FIRST_FREE (1ULL << 23)
+#define PKT_FIRST_FREE (1ULL << 24)
#define PKT_LAST_FREE (1ULL << 40)
/* add new TX flags here, don't forget to update PKT_LAST_FREE */
diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
index 88d31de0a6..364eeb5cd4 100644
--- a/lib/security/rte_security.h
+++ b/lib/security/rte_security.h
@@ -181,6 +181,16 @@ struct rte_security_ipsec_sa_options {
* * 0: Disable per session security statistics collection for this SA.
*/
uint32_t stats : 1;
+
+ /** Enable reassembly on incoming packets.
+ *
+ * * 1: Enable driver to try reassembly of encrypted IP packets for
+ * this SA, if supported by the driver. This feature will work
+ * only if rx_offload DEV_RX_OFFLOAD_REASSEMBLY is set in
+ * inline ethernet device.
+ * * 0: Disable reassembly of packets (default).
+ */
+ uint32_t reass_en : 1;
};
/** IPSec security association direction */
--
2.25.1
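Tying the two halves of the patch together, an application would only set the SA option when the inline Ethernet device actually carries the Rx offload. A sketch, using stand-in types (the struct below mirrors only the two bitfields relevant here, not the real rte_security_ipsec_sa_options):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Offload bit as defined in the rte_ethdev.h hunk above. */
#define DEV_RX_OFFLOAD_REASSEMBLY 0x00001000ULL

/* Minimal stand-in for the relevant bits of
 * struct rte_security_ipsec_sa_options. */
struct sa_options_stub {
	uint32_t stats : 1;
	uint32_t reass_en : 1;
};

/* reass_en only works when the inline Ethernet device was configured
 * with DEV_RX_OFFLOAD_REASSEMBLY, so gate the SA flag on the device's
 * negotiated Rx offloads. Returns true if reassembly was enabled. */
static bool
sa_enable_reassembly(uint64_t dev_rx_offloads, struct sa_options_stub *opts)
{
	if (!(dev_rx_offloads & DEV_RX_OFFLOAD_REASSEMBLY)) {
		opts->reass_en = 0;
		return false;
	}
	opts->reass_en = 1;
	return true;
}
```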
^ permalink raw reply [flat|nested] 184+ messages in thread
* Re: [dpdk-dev] [PATCH] RFC: ethdev: add reassembly offload
2021-08-23 10:02 [dpdk-dev] [PATCH] RFC: ethdev: add reassembly offload Akhil Goyal
@ 2021-08-23 10:18 ` Andrew Rybchenko
2021-08-29 13:14 ` [dpdk-dev] [EXT] " Akhil Goyal
2021-09-07 8:47 ` [dpdk-dev] " Ferruh Yigit
` (2 subsequent siblings)
3 siblings, 1 reply; 184+ messages in thread
From: Andrew Rybchenko @ 2021-08-23 10:18 UTC (permalink / raw)
To: Akhil Goyal, dev
Cc: anoobj, radu.nicolau, declan.doherty, hemant.agrawal, matan,
konstantin.ananyev, thomas, adwivedi, ferruh.yigit
On 8/23/21 1:02 PM, Akhil Goyal wrote:
> Reassembly is a costly operation if it is done in
> software, however, if it is offloaded to HW, it can
> considerably save application cycles.
> The operation becomes even more costly if IP fragments
> are encrypted.
>
> To resolve above two issues, a new offload
> DEV_RX_OFFLOAD_REASSEMBLY is introduced in ethdev for
> devices which can attempt reassembly of packets in hardware.
> rte_eth_dev_info is added with the reassembly capabilities
> which a device can support.
> Now, if IP fragments are encrypted, reassembly can also be
> attempted while doing inline IPsec processing.
> This is controlled by a flag in rte_security_ipsec_sa_options
> to enable reassembly of encrypted IP fragments in the inline
> path.
>
> The resulting reassembled packet would be a typical
> segmented mbuf in case of success.
>
> And if reassembly of fragments fails or is incomplete (if
> fragments do not come before the reass_timeout), the mbuf is
> updated with an ol_flag PKT_RX_REASSEMBLY_INCOMPLETE and
> mbuf is returned as is. Now application may decide the fate
> of the packet to wait more for fragments to come or drop.
>
> Signed-off-by: Akhil Goyal <gakhil@marvell.com>
Is it IPv4 only or IPv6 as well? I guess IPv4 only to start
with. If so, I think the offload name should say so. See below.
I'd say that the feature should be added to
doc/guides/nics/features.rst
Do we really need RX_REASSEMBLY_INCOMPLETE if we provide
the buffered packets for incomplete reassembly anyway?
I guess it is sufficient to cover only the simple reassembly
case in HW, when there are no overlapping fragments etc.
Everything else should be handled in SW anyway, just as
without the offload support at all.
> ---
> lib/ethdev/rte_ethdev.c | 1 +
> lib/ethdev/rte_ethdev.h | 18 +++++++++++++++++-
> lib/mbuf/rte_mbuf_core.h | 3 ++-
> lib/security/rte_security.h | 10 ++++++++++
> 4 files changed, 30 insertions(+), 2 deletions(-)
>
> diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
> index 9d95cd11e1..1ab3a093cf 100644
> --- a/lib/ethdev/rte_ethdev.c
> +++ b/lib/ethdev/rte_ethdev.c
> @@ -119,6 +119,7 @@ static const struct {
> RTE_RX_OFFLOAD_BIT2STR(VLAN_FILTER),
> RTE_RX_OFFLOAD_BIT2STR(VLAN_EXTEND),
> RTE_RX_OFFLOAD_BIT2STR(JUMBO_FRAME),
> + RTE_RX_OFFLOAD_BIT2STR(REASSEMBLY),
> RTE_RX_OFFLOAD_BIT2STR(SCATTER),
> RTE_RX_OFFLOAD_BIT2STR(TIMESTAMP),
> RTE_RX_OFFLOAD_BIT2STR(SECURITY),
> diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
> index d2b27c351f..e89a4dc1eb 100644
> --- a/lib/ethdev/rte_ethdev.h
> +++ b/lib/ethdev/rte_ethdev.h
> @@ -1360,6 +1360,7 @@ struct rte_eth_conf {
> #define DEV_RX_OFFLOAD_VLAN_FILTER 0x00000200
> #define DEV_RX_OFFLOAD_VLAN_EXTEND 0x00000400
> #define DEV_RX_OFFLOAD_JUMBO_FRAME 0x00000800
> +#define DEV_RX_OFFLOAD_REASSEMBLY 0x00001000
I think it should be:
RTE_ETH_RX_OFFLOAD_IPV4_REASSEMBLY
i.e. have correct prefix similar to
RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT and mention IPv4.
If we'd like to cover IPv6 as well, it could be
RTE_ETH_RX_OFFLOAD_IP_REASSEMBLY and have IPv4/6
support bits in the offload capabilities below.
> #define DEV_RX_OFFLOAD_SCATTER 0x00002000
> /**
> * Timestamp is set by the driver in RTE_MBUF_DYNFIELD_TIMESTAMP_NAME
> @@ -1477,6 +1478,20 @@ struct rte_eth_dev_portconf {
> */
> #define RTE_ETH_DEV_SWITCH_DOMAIN_ID_INVALID (UINT16_MAX)
>
> +/**
> + * Reassembly capabilities that a device can support.
> + * The device which can support reassembly offload should set
> + * DEV_RX_OFFLOAD_REASSEMBLY
> + */
> +struct rte_eth_reass_capa {
> + /** Maximum time in ns that a fragment can wait for further fragments */
> + uint64_t reass_timeout;
> + /** Maximum number of fragments that device can reassemble */
> + uint16_t max_frags;
> + /** Reserved for future capabilities */
> + uint16_t reserved[3];
> +};
> +
> /**
> * Ethernet device associated switch information
> */
> @@ -1582,8 +1597,9 @@ struct rte_eth_dev_info {
> * embedded managed interconnect/switch.
> */
> struct rte_eth_switch_info switch_info;
> + /* Reassembly capabilities of a device for reassembly offload */
> + struct rte_eth_reass_capa reass_capa;
>
> - uint64_t reserved_64s[2]; /**< Reserved for future fields */
> void *reserved_ptrs[2]; /**< Reserved for future fields */
> };
>
> diff --git a/lib/mbuf/rte_mbuf_core.h b/lib/mbuf/rte_mbuf_core.h
> index bb38d7f581..cea25c87f7 100644
> --- a/lib/mbuf/rte_mbuf_core.h
> +++ b/lib/mbuf/rte_mbuf_core.h
> @@ -200,10 +200,11 @@ extern "C" {
> #define PKT_RX_OUTER_L4_CKSUM_BAD (1ULL << 21)
> #define PKT_RX_OUTER_L4_CKSUM_GOOD (1ULL << 22)
> #define PKT_RX_OUTER_L4_CKSUM_INVALID ((1ULL << 21) | (1ULL << 22))
> +#define PKT_RX_REASSEMBLY_INCOMPLETE (1ULL << 23)
In accordance with deprecation notice it should be
RTE_MBUF_F_RX_REASSEMBLY_INCOMPLETE
>
> /* add new RX flags here, don't forget to update PKT_FIRST_FREE */
>
> -#define PKT_FIRST_FREE (1ULL << 23)
> +#define PKT_FIRST_FREE (1ULL << 24)
> #define PKT_LAST_FREE (1ULL << 40)
>
> /* add new TX flags here, don't forget to update PKT_LAST_FREE */
> diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
> index 88d31de0a6..364eeb5cd4 100644
> --- a/lib/security/rte_security.h
> +++ b/lib/security/rte_security.h
> @@ -181,6 +181,16 @@ struct rte_security_ipsec_sa_options {
> * * 0: Disable per session security statistics collection for this SA.
> */
> uint32_t stats : 1;
> +
> + /** Enable reassembly on incoming packets.
> + *
> + * * 1: Enable driver to try reassembly of encrypted IP packets for
> + * this SA, if supported by the driver. This feature will work
> + * only if rx_offload DEV_RX_OFFLOAD_REASSEMBLY is set in
> + * inline ethernet device.
ethernet -> Ethernet
> + * * 0: Disable reassembly of packets (default).
> + */
> + uint32_t reass_en : 1;
> };
>
> /** IPSec security association direction */
>
* Re: [dpdk-dev] [EXT] Re: [PATCH] RFC: ethdev: add reassembly offload
2021-08-23 10:18 ` Andrew Rybchenko
@ 2021-08-29 13:14 ` Akhil Goyal
2021-09-21 19:59 ` Thomas Monjalon
0 siblings, 1 reply; 184+ messages in thread
From: Akhil Goyal @ 2021-08-29 13:14 UTC (permalink / raw)
To: Andrew Rybchenko, dev
Cc: Anoob Joseph, radu.nicolau, declan.doherty, hemant.agrawal,
matan, konstantin.ananyev, thomas, Ankur Dwivedi, ferruh.yigit
> On 8/23/21 1:02 PM, Akhil Goyal wrote:
> > Reassembly is a costly operation if it is done in
> > software, however, if it is offloaded to HW, it can
> > considerably save application cycles.
> > The operation becomes even more costly if IP fragments
> > are encrypted.
> >
> > To resolve above two issues, a new offload
> > DEV_RX_OFFLOAD_REASSEMBLY is introduced in ethdev for
> > devices which can attempt reassembly of packets in hardware.
> > rte_eth_dev_info is added with the reassembly capabilities
> > which a device can support.
> > Now, if IP fragments are encrypted, reassembly can also be
> > attempted while doing inline IPsec processing.
> > This is controlled by a flag in rte_security_ipsec_sa_options
> > to enable reassembly of encrypted IP fragments in the inline
> > path.
> >
> > The resulting reassembled packet would be a typical
> > segmented mbuf in case of success.
> >
> > And if reassembly of fragments fails or is incomplete (if
> > fragments do not come before the reass_timeout), the mbuf is
> > updated with an ol_flag PKT_RX_REASSEMBLY_INCOMPLETE and
> > mbuf is returned as is. Now application may decide the fate
> > of the packet to wait more for fragments to come or drop.
> >
> > Signed-off-by: Akhil Goyal <gakhil@marvell.com>
>
> Is it IPv4 only or IPv6 as well? I guess IPv4 only to start
> with. If so, I think offload name should say so. See below.
>
We can update the spec for both and update the capabilities for both.
See below.
> I'd say that the feature should be added to
> doc/guides/nics/features.rst
OK, will update in the next version.
>
> Do we really need RX_REASSEMBLY_INCOMPLETE if we provide
> buffered packets for incomplete reassembly anyway?
> I guess it is sufficient to cover simply reassembly case
> only in HW when there is no overlapping fragments etc.
> Everything else should be handled in SW anyway as without
> the offload support at all.
>
In that case, the application would need to parse the packet again
to check whether it is a fragment or not, even when reassembly
is not required. However, we will consider your suggestion in the
implementation.
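The per-packet cost Akhil refers to is the fragment check an application must do in software when no flag is set. A minimal sketch of that check (the header field is assumed already byte-swapped to host order; real DPDK code would use rte_be_to_cpu_16() on the IPv4 header's fragment_offset):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Flags/offset layout of the IPv4 fragment_offset field (RFC 791):
 * bit 13 = MF (more fragments), low 13 bits = fragment offset. */
#define IPV4_MF_FLAG     0x2000
#define IPV4_OFFSET_MASK 0x1fff

/* A packet is a fragment iff MF is set or the offset is nonzero.
 * 'frag_field' is the fragment_offset field in host byte order. */
static bool
is_ipv4_fragment(uint16_t frag_field)
{
	return (frag_field & (IPV4_MF_FLAG | IPV4_OFFSET_MASK)) != 0;
}
```

With PKT_RX_REASSEMBLY_INCOMPLETE, this parse is replaced by a single ol_flags bit test on the hot path.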
> > ---
> > lib/ethdev/rte_ethdev.c | 1 +
> > lib/ethdev/rte_ethdev.h | 18 +++++++++++++++++-
> > lib/mbuf/rte_mbuf_core.h | 3 ++-
> > lib/security/rte_security.h | 10 ++++++++++
> > 4 files changed, 30 insertions(+), 2 deletions(-)
> >
> > diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
> > index 9d95cd11e1..1ab3a093cf 100644
> > --- a/lib/ethdev/rte_ethdev.c
> > +++ b/lib/ethdev/rte_ethdev.c
> > @@ -119,6 +119,7 @@ static const struct {
> > RTE_RX_OFFLOAD_BIT2STR(VLAN_FILTER),
> > RTE_RX_OFFLOAD_BIT2STR(VLAN_EXTEND),
> > RTE_RX_OFFLOAD_BIT2STR(JUMBO_FRAME),
> > + RTE_RX_OFFLOAD_BIT2STR(REASSEMBLY),
> > RTE_RX_OFFLOAD_BIT2STR(SCATTER),
> > RTE_RX_OFFLOAD_BIT2STR(TIMESTAMP),
> > RTE_RX_OFFLOAD_BIT2STR(SECURITY),
> > diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
> > index d2b27c351f..e89a4dc1eb 100644
> > --- a/lib/ethdev/rte_ethdev.h
> > +++ b/lib/ethdev/rte_ethdev.h
> > @@ -1360,6 +1360,7 @@ struct rte_eth_conf {
> > #define DEV_RX_OFFLOAD_VLAN_FILTER 0x00000200
> > #define DEV_RX_OFFLOAD_VLAN_EXTEND 0x00000400
> > #define DEV_RX_OFFLOAD_JUMBO_FRAME 0x00000800
> > +#define DEV_RX_OFFLOAD_REASSEMBLY 0x00001000
>
> I think it should be:
> RTE_ETH_RX_OFFLOAD_IPV4_REASSEMBLY
>
> i.e. have correct prefix similar to
> RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT and mention IPv4.
>
> If we'd like to cover IPv6 as well, it could be
> RTE_ETH_RX_OFFLOAD_IP_REASSEMBLY and have IPv4/6
> support bits in the offload capabilities below.
The intention is to update the spec for both.
We will update the capabilities accordingly to cover both IPv4 and IPv6.
>
> > #define DEV_RX_OFFLOAD_SCATTER 0x00002000
> > /**
> > * Timestamp is set by the driver in
> RTE_MBUF_DYNFIELD_TIMESTAMP_NAME
> > @@ -1477,6 +1478,20 @@ struct rte_eth_dev_portconf {
> > */
> > #define RTE_ETH_DEV_SWITCH_DOMAIN_ID_INVALID
> (UINT16_MAX)
> >
> > +/**
> > + * Reassembly capabilities that a device can support.
> > + * The device which can support reassembly offload should set
> > + * DEV_RX_OFFLOAD_REASSEMBLY
> > + */
> > +struct rte_eth_reass_capa {
> > + /** Maximum time in ns that a fragment can wait for further
> fragments */
> > + uint64_t reass_timeout;
> > + /** Maximum number of fragments that device can reassemble */
> > + uint16_t max_frags;
> > + /** Reserved for future capabilities */
> > + uint16_t reserved[3];
> > +};
> > +
> > /**
> > * Ethernet device associated switch information
> > */
> > @@ -1582,8 +1597,9 @@ struct rte_eth_dev_info {
> > * embedded managed interconnect/switch.
> > */
> > struct rte_eth_switch_info switch_info;
> > + /* Reassembly capabilities of a device for reassembly offload */
> > + struct rte_eth_reass_capa reass_capa;
> >
> > - uint64_t reserved_64s[2]; /**< Reserved for future fields */
> > void *reserved_ptrs[2]; /**< Reserved for future fields */
> > };
> >
> > diff --git a/lib/mbuf/rte_mbuf_core.h b/lib/mbuf/rte_mbuf_core.h
> > index bb38d7f581..cea25c87f7 100644
> > --- a/lib/mbuf/rte_mbuf_core.h
> > +++ b/lib/mbuf/rte_mbuf_core.h
> > @@ -200,10 +200,11 @@ extern "C" {
> > #define PKT_RX_OUTER_L4_CKSUM_BAD (1ULL << 21)
> > #define PKT_RX_OUTER_L4_CKSUM_GOOD (1ULL << 22)
> > #define PKT_RX_OUTER_L4_CKSUM_INVALID ((1ULL << 21) | (1ULL << 22))
> > +#define PKT_RX_REASSEMBLY_INCOMPLETE (1ULL << 23)
>
> In accordance with deprecation notice it should be
> RTE_MBUF_F_RX_REASSEMBLY_INCOMPLETE
>
OK, will correct in the next version.
> >
> > /* add new RX flags here, don't forget to update PKT_FIRST_FREE */
> >
> > -#define PKT_FIRST_FREE (1ULL << 23)
> > +#define PKT_FIRST_FREE (1ULL << 24)
> > #define PKT_LAST_FREE (1ULL << 40)
> >
> > /* add new TX flags here, don't forget to update PKT_LAST_FREE */
> > diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
> > index 88d31de0a6..364eeb5cd4 100644
> > --- a/lib/security/rte_security.h
> > +++ b/lib/security/rte_security.h
> > @@ -181,6 +181,16 @@ struct rte_security_ipsec_sa_options {
> > * * 0: Disable per session security statistics collection for this SA.
> > */
> > uint32_t stats : 1;
> > +
> > + /** Enable reassembly on incoming packets.
> > + *
> > + * * 1: Enable driver to try reassembly of encrypted IP packets for
> > + * this SA, if supported by the driver. This feature will work
> > + * only if rx_offload DEV_RX_OFFLOAD_REASSEMBLY is set in
> > + * inline ethernet device.
>
> ethernet -> Ethernet
>
> > + * * 0: Disable reassembly of packets (default).
> > + */
> > + uint32_t reass_en : 1;
> > };
> >
> > /** IPSec security association direction */
> >
* Re: [dpdk-dev] [PATCH] RFC: ethdev: add reassembly offload
2021-08-23 10:02 [dpdk-dev] [PATCH] RFC: ethdev: add reassembly offload Akhil Goyal
2021-08-23 10:18 ` Andrew Rybchenko
@ 2021-09-07 8:47 ` Ferruh Yigit
2021-09-08 10:29 ` [dpdk-dev] [EXT] " Anoob Joseph
2021-09-08 6:34 ` [dpdk-dev] " Xu, Rosen
2022-01-03 15:08 ` [PATCH 0/8] ethdev: introduce IP " Akhil Goyal
3 siblings, 1 reply; 184+ messages in thread
From: Ferruh Yigit @ 2021-09-07 8:47 UTC (permalink / raw)
To: Akhil Goyal, dev
Cc: anoobj, radu.nicolau, declan.doherty, hemant.agrawal, matan,
konstantin.ananyev, thomas, adwivedi, andrew.rybchenko
On 8/23/2021 11:02 AM, Akhil Goyal wrote:
> Reassembly is a costly operation if it is done in
> software, however, if it is offloaded to HW, it can
> considerably save application cycles.
> The operation becomes even more costly if IP fragments
> are encrypted.
>
> To resolve above two issues, a new offload
> DEV_RX_OFFLOAD_REASSEMBLY is introduced in ethdev for
> devices which can attempt reassembly of packets in hardware.
> rte_eth_dev_info is added with the reassembly capabilities
> which a device can support.
> Now, if IP fragments are encrypted, reassembly can also be
> attempted while doing inline IPsec processing.
> This is controlled by a flag in rte_security_ipsec_sa_options
> to enable reassembly of encrypted IP fragments in the inline
> path.
>
> The resulting reassembled packet would be a typical
> segmented mbuf in case of success.
>
> And if reassembly of fragments fails or is incomplete (if
> fragments do not come before the reass_timeout), the mbuf is
> updated with an ol_flag PKT_RX_REASSEMBLY_INCOMPLETE and
> mbuf is returned as is. Now application may decide the fate
> of the packet to wait more for fragments to come or drop.
>
> Signed-off-by: Akhil Goyal <gakhil@marvell.com>
> ---
> lib/ethdev/rte_ethdev.c | 1 +
> lib/ethdev/rte_ethdev.h | 18 +++++++++++++++++-
> lib/mbuf/rte_mbuf_core.h | 3 ++-
> lib/security/rte_security.h | 10 ++++++++++
> 4 files changed, 30 insertions(+), 2 deletions(-)
>
> diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
> index 9d95cd11e1..1ab3a093cf 100644
> --- a/lib/ethdev/rte_ethdev.c
> +++ b/lib/ethdev/rte_ethdev.c
> @@ -119,6 +119,7 @@ static const struct {
> RTE_RX_OFFLOAD_BIT2STR(VLAN_FILTER),
> RTE_RX_OFFLOAD_BIT2STR(VLAN_EXTEND),
> RTE_RX_OFFLOAD_BIT2STR(JUMBO_FRAME),
> + RTE_RX_OFFLOAD_BIT2STR(REASSEMBLY),
> RTE_RX_OFFLOAD_BIT2STR(SCATTER),
> RTE_RX_OFFLOAD_BIT2STR(TIMESTAMP),
> RTE_RX_OFFLOAD_BIT2STR(SECURITY),
> diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
> index d2b27c351f..e89a4dc1eb 100644
> --- a/lib/ethdev/rte_ethdev.h
> +++ b/lib/ethdev/rte_ethdev.h
> @@ -1360,6 +1360,7 @@ struct rte_eth_conf {
> #define DEV_RX_OFFLOAD_VLAN_FILTER 0x00000200
> #define DEV_RX_OFFLOAD_VLAN_EXTEND 0x00000400
> #define DEV_RX_OFFLOAD_JUMBO_FRAME 0x00000800
> +#define DEV_RX_OFFLOAD_REASSEMBLY 0x00001000
previous '0x00001000' was 'DEV_RX_OFFLOAD_CRC_STRIP'; it has been a long time since
that offload was removed, but I am not sure whether reusing the bit causes any problem.
> #define DEV_RX_OFFLOAD_SCATTER 0x00002000
> /**
> * Timestamp is set by the driver in RTE_MBUF_DYNFIELD_TIMESTAMP_NAME
> @@ -1477,6 +1478,20 @@ struct rte_eth_dev_portconf {
> */
> #define RTE_ETH_DEV_SWITCH_DOMAIN_ID_INVALID (UINT16_MAX)
>
> +/**
> + * Reassembly capabilities that a device can support.
> + * The device which can support reassembly offload should set
> + * DEV_RX_OFFLOAD_REASSEMBLY
> + */
> +struct rte_eth_reass_capa {
> + /** Maximum time in ns that a fragment can wait for further fragments */
> + uint64_t reass_timeout;
> + /** Maximum number of fragments that device can reassemble */
> + uint16_t max_frags;
> + /** Reserved for future capabilities */
> + uint16_t reserved[3];
> +};
> +
I wonder if there is any other hardware around that supports reassembly offload; it
would be good to get more feedback on the capabilities list.
> /**
> * Ethernet device associated switch information
> */
> @@ -1582,8 +1597,9 @@ struct rte_eth_dev_info {
> * embedded managed interconnect/switch.
> */
> struct rte_eth_switch_info switch_info;
> + /* Reassembly capabilities of a device for reassembly offload */
> + struct rte_eth_reass_capa reass_capa;
>
> - uint64_t reserved_64s[2]; /**< Reserved for future fields */
Reserved fields were added to be able to update the struct without breaking the
ABI, so that a critical change doesn't have to wait until the next ABI-break release.
Since this is an ABI-break release, we can keep the reserved field and add the new
struct. Or this can be an opportunity to get rid of the reserved field.
Personally I have no objection to getting rid of the reserved field, but it is
better to agree on this explicitly.
> void *reserved_ptrs[2]; /**< Reserved for future fields */
> };
>
> diff --git a/lib/mbuf/rte_mbuf_core.h b/lib/mbuf/rte_mbuf_core.h
> index bb38d7f581..cea25c87f7 100644
> --- a/lib/mbuf/rte_mbuf_core.h
> +++ b/lib/mbuf/rte_mbuf_core.h
> @@ -200,10 +200,11 @@ extern "C" {
> #define PKT_RX_OUTER_L4_CKSUM_BAD (1ULL << 21)
> #define PKT_RX_OUTER_L4_CKSUM_GOOD (1ULL << 22)
> #define PKT_RX_OUTER_L4_CKSUM_INVALID ((1ULL << 21) | (1ULL << 22))
> +#define PKT_RX_REASSEMBLY_INCOMPLETE (1ULL << 23)
>
Similar comment to Andrew's: what is the expectation from the application if this
flag exists? Can we drop it to simplify the logic in the application?
> /* add new RX flags here, don't forget to update PKT_FIRST_FREE */
>
> -#define PKT_FIRST_FREE (1ULL << 23)
> +#define PKT_FIRST_FREE (1ULL << 24)
> #define PKT_LAST_FREE (1ULL << 40)
>
> /* add new TX flags here, don't forget to update PKT_LAST_FREE */
> diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
> index 88d31de0a6..364eeb5cd4 100644
> --- a/lib/security/rte_security.h
> +++ b/lib/security/rte_security.h
> @@ -181,6 +181,16 @@ struct rte_security_ipsec_sa_options {
> * * 0: Disable per session security statistics collection for this SA.
> */
> uint32_t stats : 1;
> +
> + /** Enable reassembly on incoming packets.
> + *
> + * * 1: Enable driver to try reassembly of encrypted IP packets for
> + * this SA, if supported by the driver. This feature will work
> + * only if rx_offload DEV_RX_OFFLOAD_REASSEMBLY is set in
> + * inline ethernet device.
> + * * 0: Disable reassembly of packets (default).
> + */
> + uint32_t reass_en : 1;
> };
>
> /** IPSec security association direction */
>
* Re: [dpdk-dev] [PATCH] RFC: ethdev: add reassembly offload
2021-08-23 10:02 [dpdk-dev] [PATCH] RFC: ethdev: add reassembly offload Akhil Goyal
2021-08-23 10:18 ` Andrew Rybchenko
2021-09-07 8:47 ` [dpdk-dev] " Ferruh Yigit
@ 2021-09-08 6:34 ` Xu, Rosen
2021-09-08 6:36 ` Xu, Rosen
2022-01-03 15:08 ` [PATCH 0/8] ethdev: introduce IP " Akhil Goyal
3 siblings, 1 reply; 184+ messages in thread
From: Xu, Rosen @ 2021-09-08 6:34 UTC (permalink / raw)
To: Akhil Goyal, dev
Cc: anoobj, Nicolau, Radu, Doherty, Declan, hemant.agrawal, matan,
Ananyev, Konstantin, thomas, adwivedi, Yigit, Ferruh,
andrew.rybchenko
Hi,
> -----Original Message-----
> From: dev <dev-bounces@dpdk.org> On Behalf Of Akhil Goyal
> Sent: Monday, August 23, 2021 18:03
> To: dev@dpdk.org
> Cc: anoobj@marvell.com; Nicolau, Radu <radu.nicolau@intel.com>; Doherty,
> Declan <declan.doherty@intel.com>; hemant.agrawal@nxp.com;
> matan@nvidia.com; Ananyev, Konstantin <konstantin.ananyev@intel.com>;
> thomas@monjalon.net; adwivedi@marvell.com; Yigit, Ferruh
> <ferruh.yigit@intel.com>; andrew.rybchenko@oktetlabs.ru; Akhil Goyal
> <gakhil@marvell.com>
> Subject: [dpdk-dev] [PATCH] RFC: ethdev: add reassembly offload
>
> Reassembly is a costly operation if it is done in software, however, if it is
> offloaded to HW, it can considerably save application cycles.
> The operation becomes even more costly if IP fragments are encrypted.
>
> To resolve above two issues, a new offload DEV_RX_OFFLOAD_REASSEMBLY
> is introduced in ethdev for devices which can attempt reassembly of packets
> in hardware.
> rte_eth_dev_info is added with the reassembly capabilities which a device
> can support.
> Now, if IP fragments are encrypted, reassembly can also be attempted while
> doing inline IPsec processing.
> This is controlled by a flag in rte_security_ipsec_sa_options to enable
> reassembly of encrypted IP fragments in the inline path.
>
> The resulting reassembled packet would be a typical segmented mbuf in case
> of success.
>
> And if reassembly of fragments fails or is incomplete (if fragments do not
> come before the reass_timeout), the mbuf is updated with an ol_flag
> PKT_RX_REASSEMBLY_INCOMPLETE and mbuf is returned as is. Now
> application may decide the fate of the packet to wait more for fragments to
> come or drop.
>
> Signed-off-by: Akhil Goyal <gakhil@marvell.com>
> ---
> lib/ethdev/rte_ethdev.c | 1 +
> lib/ethdev/rte_ethdev.h | 18 +++++++++++++++++-
> lib/mbuf/rte_mbuf_core.h | 3 ++-
> lib/security/rte_security.h | 10 ++++++++++
> 4 files changed, 30 insertions(+), 2 deletions(-)
>
> diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c index
> 9d95cd11e1..1ab3a093cf 100644
> --- a/lib/ethdev/rte_ethdev.c
> +++ b/lib/ethdev/rte_ethdev.c
> @@ -119,6 +119,7 @@ static const struct {
> RTE_RX_OFFLOAD_BIT2STR(VLAN_FILTER),
> RTE_RX_OFFLOAD_BIT2STR(VLAN_EXTEND),
> RTE_RX_OFFLOAD_BIT2STR(JUMBO_FRAME),
> + RTE_RX_OFFLOAD_BIT2STR(REASSEMBLY),
> RTE_RX_OFFLOAD_BIT2STR(SCATTER),
> RTE_RX_OFFLOAD_BIT2STR(TIMESTAMP),
> RTE_RX_OFFLOAD_BIT2STR(SECURITY),
> diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h index
> d2b27c351f..e89a4dc1eb 100644
> --- a/lib/ethdev/rte_ethdev.h
> +++ b/lib/ethdev/rte_ethdev.h
> @@ -1360,6 +1360,7 @@ struct rte_eth_conf {
> #define DEV_RX_OFFLOAD_VLAN_FILTER 0x00000200
> #define DEV_RX_OFFLOAD_VLAN_EXTEND 0x00000400
> #define DEV_RX_OFFLOAD_JUMBO_FRAME 0x00000800
> +#define DEV_RX_OFFLOAD_REASSEMBLY 0x00001000
> #define DEV_RX_OFFLOAD_SCATTER 0x00002000
> /**
> * Timestamp is set by the driver in
> RTE_MBUF_DYNFIELD_TIMESTAMP_NAME @@ -1477,6 +1478,20 @@ struct
> rte_eth_dev_portconf {
> */
> #define RTE_ETH_DEV_SWITCH_DOMAIN_ID_INVALID
> (UINT16_MAX)
>
> +/**
> + * Reassembly capabilities that a device can support.
> + * The device which can support reassembly offload should set
> + * DEV_RX_OFFLOAD_REASSEMBLY
> + */
> +struct rte_eth_reass_capa {
> + /** Maximum time in ns that a fragment can wait for further
> fragments */
> + uint64_t reass_timeout;
> + /** Maximum number of fragments that device can reassemble */
> + uint16_t max_frags;
> + /** Reserved for future capabilities */
> + uint16_t reserved[3];
> +};
IP reassembly occurs at the final recipient of the message, so a NIC attempting to do it faces a few challenges. Having NICs worry about reassembling fragments would increase their complexity, so most likely a NIC can handle only a limited range of datagram lengths. rte_eth_reass_capa seems to miss the maximum original datagram length which the NIC can support; this feature had better be negotiated between the NIC and SW as well.
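The negotiation Rosen suggests could look roughly like the sketch below. The max_datagram_len field is NOT in the patch; it and the function name are illustrative assumptions only.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical extension of the patch's rte_eth_reass_capa, adding the
 * maximum-original-datagram-length capability suggested above. */
struct reass_capa_ext {
	uint64_t reass_timeout;    /* ns a fragment may wait (from the patch) */
	uint16_t max_frags;        /* max fragments per datagram (from the patch) */
	uint32_t max_datagram_len; /* suggested: max reassembled length, bytes */
};

/* Negotiation sketch: only enable the offload when the application's
 * expected traffic fits inside the device limits; otherwise fall back
 * to software reassembly. */
static bool
reass_offload_fits(const struct reass_capa_ext *capa,
		   uint16_t needed_frags, uint32_t needed_len)
{
	return needed_frags <= capa->max_frags &&
	       needed_len <= capa->max_datagram_len;
}
```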
> /**
> * Ethernet device associated switch information
> */
> @@ -1582,8 +1597,9 @@ struct rte_eth_dev_info {
> * embedded managed interconnect/switch.
> */
> struct rte_eth_switch_info switch_info;
> + /* Reassembly capabilities of a device for reassembly offload */
> + struct rte_eth_reass_capa reass_capa;
>
> - uint64_t reserved_64s[2]; /**< Reserved for future fields */
> void *reserved_ptrs[2]; /**< Reserved for future fields */
> };
>
> diff --git a/lib/mbuf/rte_mbuf_core.h b/lib/mbuf/rte_mbuf_core.h index
> bb38d7f581..cea25c87f7 100644
> --- a/lib/mbuf/rte_mbuf_core.h
> +++ b/lib/mbuf/rte_mbuf_core.h
> @@ -200,10 +200,11 @@ extern "C" {
> #define PKT_RX_OUTER_L4_CKSUM_BAD (1ULL << 21)
> #define PKT_RX_OUTER_L4_CKSUM_GOOD (1ULL << 22)
> #define PKT_RX_OUTER_L4_CKSUM_INVALID ((1ULL << 21) | (1ULL << 22))
> +#define PKT_RX_REASSEMBLY_INCOMPLETE (1ULL << 23)
>
> /* add new RX flags here, don't forget to update PKT_FIRST_FREE */
>
> -#define PKT_FIRST_FREE (1ULL << 23)
> +#define PKT_FIRST_FREE (1ULL << 24)
> #define PKT_LAST_FREE (1ULL << 40)
>
> /* add new TX flags here, don't forget to update PKT_LAST_FREE */ diff --git
> a/lib/security/rte_security.h b/lib/security/rte_security.h index
> 88d31de0a6..364eeb5cd4 100644
> --- a/lib/security/rte_security.h
> +++ b/lib/security/rte_security.h
> @@ -181,6 +181,16 @@ struct rte_security_ipsec_sa_options {
> * * 0: Disable per session security statistics collection for this SA.
> */
> uint32_t stats : 1;
> +
> + /** Enable reassembly on incoming packets.
> + *
> + * * 1: Enable driver to try reassembly of encrypted IP packets for
> + * this SA, if supported by the driver. This feature will work
> + * only if rx_offload DEV_RX_OFFLOAD_REASSEMBLY is set in
> + * inline ethernet device.
> + * * 0: Disable reassembly of packets (default).
> + */
> + uint32_t reass_en : 1;
> };
>
> /** IPSec security association direction */
> --
> 2.25.1
* Re: [dpdk-dev] [PATCH] RFC: ethdev: add reassembly offload
2021-09-08 6:34 ` [dpdk-dev] " Xu, Rosen
@ 2021-09-08 6:36 ` Xu, Rosen
0 siblings, 0 replies; 184+ messages in thread
From: Xu, Rosen @ 2021-09-08 6:36 UTC (permalink / raw)
To: Xu, Rosen, Akhil Goyal, dev
Cc: anoobj, Nicolau, Radu, Doherty, Declan, hemant.agrawal, matan,
Ananyev, Konstantin, thomas, adwivedi, Yigit, Ferruh,
andrew.rybchenko, Xu, Rosen
Cc myself
> -----Original Message-----
> From: dev <dev-bounces@dpdk.org> On Behalf Of Xu, Rosen
> Sent: Wednesday, September 08, 2021 14:34
> To: Akhil Goyal <gakhil@marvell.com>; dev@dpdk.org
> Cc: anoobj@marvell.com; Nicolau, Radu <radu.nicolau@intel.com>; Doherty,
> Declan <declan.doherty@intel.com>; hemant.agrawal@nxp.com;
> matan@nvidia.com; Ananyev, Konstantin <konstantin.ananyev@intel.com>;
> thomas@monjalon.net; adwivedi@marvell.com; Yigit, Ferruh
> <ferruh.yigit@intel.com>; andrew.rybchenko@oktetlabs.ru
> Subject: Re: [dpdk-dev] [PATCH] RFC: ethdev: add reassembly offload
>
> Hi,
>
> > -----Original Message-----
> > From: dev <dev-bounces@dpdk.org> On Behalf Of Akhil Goyal
> > Sent: Monday, August 23, 2021 18:03
> > To: dev@dpdk.org
> > Cc: anoobj@marvell.com; Nicolau, Radu <radu.nicolau@intel.com>;
> > Doherty, Declan <declan.doherty@intel.com>; hemant.agrawal@nxp.com;
> > matan@nvidia.com; Ananyev, Konstantin
> <konstantin.ananyev@intel.com>;
> > thomas@monjalon.net; adwivedi@marvell.com; Yigit, Ferruh
> > <ferruh.yigit@intel.com>; andrew.rybchenko@oktetlabs.ru; Akhil Goyal
> > <gakhil@marvell.com>
> > Subject: [dpdk-dev] [PATCH] RFC: ethdev: add reassembly offload
> >
> > Reassembly is a costly operation if it is done in software, however,
> > if it is offloaded to HW, it can considerably save application cycles.
> > The operation becomes even more costlier if IP fragmants are encrypted.
> >
> > To resolve above two issues, a new offload
> DEV_RX_OFFLOAD_REASSEMBLY
> > is introduced in ethdev for devices which can attempt reassembly of
> > packets in hardware.
> > rte_eth_dev_info is added with the reassembly capabilities which a
> > device can support.
> > Now, if IP fragments are encrypted, reassembly can also be attempted
> > while doing inline IPsec processing.
> > This is controlled by a flag in rte_security_ipsec_sa_options to
> > enable reassembly of encrypted IP fragments in the inline path.
> >
> > The resulting reassembled packet would be a typical segmented mbuf in
> > case of success.
> >
> > And if reassembly of fragments is failed or is incomplete (if
> > fragments do not come before the reass_timeout), the mbuf is updated
> > with an ol_flag PKT_RX_REASSEMBLY_INCOMPLETE and mbuf is returned
> as
> > is. Now application may decide the fate of the packet to wait more for
> > fragments to come or drop.
> >
> > Signed-off-by: Akhil Goyal <gakhil@marvell.com>
> > ---
> > lib/ethdev/rte_ethdev.c | 1 +
> > lib/ethdev/rte_ethdev.h | 18 +++++++++++++++++-
> > lib/mbuf/rte_mbuf_core.h | 3 ++-
> > lib/security/rte_security.h | 10 ++++++++++
> > 4 files changed, 30 insertions(+), 2 deletions(-)
> >
> > diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c index
> > 9d95cd11e1..1ab3a093cf 100644
> > --- a/lib/ethdev/rte_ethdev.c
> > +++ b/lib/ethdev/rte_ethdev.c
> > @@ -119,6 +119,7 @@ static const struct {
> > RTE_RX_OFFLOAD_BIT2STR(VLAN_FILTER),
> > RTE_RX_OFFLOAD_BIT2STR(VLAN_EXTEND),
> > RTE_RX_OFFLOAD_BIT2STR(JUMBO_FRAME),
> > + RTE_RX_OFFLOAD_BIT2STR(REASSEMBLY),
> > RTE_RX_OFFLOAD_BIT2STR(SCATTER),
> > RTE_RX_OFFLOAD_BIT2STR(TIMESTAMP),
> > RTE_RX_OFFLOAD_BIT2STR(SECURITY),
> > diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h index
> > d2b27c351f..e89a4dc1eb 100644
> > --- a/lib/ethdev/rte_ethdev.h
> > +++ b/lib/ethdev/rte_ethdev.h
> > @@ -1360,6 +1360,7 @@ struct rte_eth_conf {
> > #define DEV_RX_OFFLOAD_VLAN_FILTER 0x00000200
> > #define DEV_RX_OFFLOAD_VLAN_EXTEND 0x00000400
> > #define DEV_RX_OFFLOAD_JUMBO_FRAME 0x00000800
> > +#define DEV_RX_OFFLOAD_REASSEMBLY 0x00001000
> > #define DEV_RX_OFFLOAD_SCATTER 0x00002000
> > /**
> > * Timestamp is set by the driver in
> > RTE_MBUF_DYNFIELD_TIMESTAMP_NAME @@ -1477,6 +1478,20 @@
> struct
> > rte_eth_dev_portconf {
> > */
> > #define RTE_ETH_DEV_SWITCH_DOMAIN_ID_INVALID
> > (UINT16_MAX)
> >
> > +/**
> > + * Reassembly capabilities that a device can support.
> > + * The device which can support reassembly offload should set
> > + * DEV_RX_OFFLOAD_REASSEMBLY
> > + */
> > +struct rte_eth_reass_capa {
> > + /** Maximum time in ns that a fragment can wait for further
> > fragments */
> > + uint64_t reass_timeout;
> > + /** Maximum number of fragments that device can reassemble */
> > + uint16_t max_frags;
> > + /** Reserved for future capabilities */
> > + uint16_t reserved[3];
> > +};
>
> IP reassembly occurs at the final recipient of the message; a NIC that
> attempts it faces a few challenges. Having NICs worry about reassembling
> fragments would increase their complexity, so most likely they can only
> handle a limited range of datagram lengths. rte_eth_reass_capa seems to miss
> the maximum original datagram length that the NIC can support; this feature
> is better negotiated between the NIC and SW as well.
>
> > /**
> > * Ethernet device associated switch information
> > */
> > @@ -1582,8 +1597,9 @@ struct rte_eth_dev_info {
> > * embedded managed interconnect/switch.
> > */
> > struct rte_eth_switch_info switch_info;
> > + /* Reassembly capabilities of a device for reassembly offload */
> > + struct rte_eth_reass_capa reass_capa;
> >
> > - uint64_t reserved_64s[2]; /**< Reserved for future fields */
> > void *reserved_ptrs[2]; /**< Reserved for future fields */
> > };
> >
> > diff --git a/lib/mbuf/rte_mbuf_core.h b/lib/mbuf/rte_mbuf_core.h index
> > bb38d7f581..cea25c87f7 100644
> > --- a/lib/mbuf/rte_mbuf_core.h
> > +++ b/lib/mbuf/rte_mbuf_core.h
> > @@ -200,10 +200,11 @@ extern "C" {
> > #define PKT_RX_OUTER_L4_CKSUM_BAD (1ULL << 21)
> > #define PKT_RX_OUTER_L4_CKSUM_GOOD (1ULL << 22)
> > #define PKT_RX_OUTER_L4_CKSUM_INVALID ((1ULL << 21) | (1ULL
> << 22))
> > +#define PKT_RX_REASSEMBLY_INCOMPLETE (1ULL << 23)
> >
> > /* add new RX flags here, don't forget to update PKT_FIRST_FREE */
> >
> > -#define PKT_FIRST_FREE (1ULL << 23)
> > +#define PKT_FIRST_FREE (1ULL << 24)
> > #define PKT_LAST_FREE (1ULL << 40)
> >
> > /* add new TX flags here, don't forget to update PKT_LAST_FREE */
> > diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
> > index
> > 88d31de0a6..364eeb5cd4 100644
> > --- a/lib/security/rte_security.h
> > +++ b/lib/security/rte_security.h
> > @@ -181,6 +181,16 @@ struct rte_security_ipsec_sa_options {
> > * * 0: Disable per session security statistics collection for this SA.
> > */
> > uint32_t stats : 1;
> > +
> > + /** Enable reassembly on incoming packets.
> > + *
> > + * * 1: Enable driver to try reassembly of encrypted IP packets for
> > + * this SA, if supported by the driver. This feature will work
> > + * only if rx_offload DEV_RX_OFFLOAD_REASSEMBLY is set in
> > + * inline ethernet device.
> > + * * 0: Disable reassembly of packets (default).
> > + */
> > + uint32_t reass_en : 1;
> > };
> >
> > /** IPSec security association direction */
> > --
> > 2.25.1
* Re: [dpdk-dev] [EXT] Re: [PATCH] RFC: ethdev: add reassembly offload
2021-09-07 8:47 ` [dpdk-dev] " Ferruh Yigit
@ 2021-09-08 10:29 ` Anoob Joseph
2021-09-13 6:56 ` Xu, Rosen
0 siblings, 1 reply; 184+ messages in thread
From: Anoob Joseph @ 2021-09-08 10:29 UTC (permalink / raw)
To: Ferruh Yigit, Xu, Rosen, Andrew Rybchenko
Cc: radu.nicolau, declan.doherty, hemant.agrawal, matan,
konstantin.ananyev, thomas, Ankur Dwivedi, andrew.rybchenko,
Akhil Goyal, dev
Hi Ferruh, Rosen, Andrew,
Please see inline.
Thanks,
Anoob
> Subject: [EXT] Re: [PATCH] RFC: ethdev: add reassembly offload
>
> External Email
>
> ----------------------------------------------------------------------
> On 8/23/2021 11:02 AM, Akhil Goyal wrote:
> > Reassembly is a costly operation if it is done in software, however,
> > if it is offloaded to HW, it can considerably save application cycles.
> > The operation becomes even more costlier if IP fragmants are
> > encrypted.
> >
> > To resolve above two issues, a new offload
> DEV_RX_OFFLOAD_REASSEMBLY
> > is introduced in ethdev for devices which can attempt reassembly of
> > packets in hardware.
> > rte_eth_dev_info is added with the reassembly capabilities which a
> > device can support.
> > Now, if IP fragments are encrypted, reassembly can also be attempted
> > while doing inline IPsec processing.
> > This is controlled by a flag in rte_security_ipsec_sa_options to
> > enable reassembly of encrypted IP fragments in the inline path.
> >
> > The resulting reassembled packet would be a typical segmented mbuf in
> > case of success.
> >
> > And if reassembly of fragments is failed or is incomplete (if
> > fragments do not come before the reass_timeout), the mbuf is updated
> > with an ol_flag PKT_RX_REASSEMBLY_INCOMPLETE and mbuf is returned
> as
> > is. Now application may decide the fate of the packet to wait more for
> > fragments to come or drop.
> >
> > Signed-off-by: Akhil Goyal <gakhil@marvell.com>
> > ---
> > lib/ethdev/rte_ethdev.c | 1 +
> > lib/ethdev/rte_ethdev.h | 18 +++++++++++++++++-
> > lib/mbuf/rte_mbuf_core.h | 3 ++-
> > lib/security/rte_security.h | 10 ++++++++++
> > 4 files changed, 30 insertions(+), 2 deletions(-)
> >
> > diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c index
> > 9d95cd11e1..1ab3a093cf 100644
> > --- a/lib/ethdev/rte_ethdev.c
> > +++ b/lib/ethdev/rte_ethdev.c
> > @@ -119,6 +119,7 @@ static const struct {
> > RTE_RX_OFFLOAD_BIT2STR(VLAN_FILTER),
> > RTE_RX_OFFLOAD_BIT2STR(VLAN_EXTEND),
> > RTE_RX_OFFLOAD_BIT2STR(JUMBO_FRAME),
> > + RTE_RX_OFFLOAD_BIT2STR(REASSEMBLY),
> > RTE_RX_OFFLOAD_BIT2STR(SCATTER),
> > RTE_RX_OFFLOAD_BIT2STR(TIMESTAMP),
> > RTE_RX_OFFLOAD_BIT2STR(SECURITY),
> > diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h index
> > d2b27c351f..e89a4dc1eb 100644
> > --- a/lib/ethdev/rte_ethdev.h
> > +++ b/lib/ethdev/rte_ethdev.h
> > @@ -1360,6 +1360,7 @@ struct rte_eth_conf {
> > #define DEV_RX_OFFLOAD_VLAN_FILTER 0x00000200
> > #define DEV_RX_OFFLOAD_VLAN_EXTEND 0x00000400
> > #define DEV_RX_OFFLOAD_JUMBO_FRAME 0x00000800
> > +#define DEV_RX_OFFLOAD_REASSEMBLY 0x00001000
>
> previous '0x00001000' was 'DEV_RX_OFFLOAD_CRC_STRIP', it has been long
> that offload has been removed, but not sure if it cause any problem to re-
> use it.
>
> > #define DEV_RX_OFFLOAD_SCATTER 0x00002000
> > /**
> > * Timestamp is set by the driver in
> RTE_MBUF_DYNFIELD_TIMESTAMP_NAME
> > @@ -1477,6 +1478,20 @@ struct rte_eth_dev_portconf {
> > */
> > #define RTE_ETH_DEV_SWITCH_DOMAIN_ID_INVALID
> (UINT16_MAX)
> >
> > +/**
> > + * Reassembly capabilities that a device can support.
> > + * The device which can support reassembly offload should set
> > + * DEV_RX_OFFLOAD_REASSEMBLY
> > + */
> > +struct rte_eth_reass_capa {
> > + /** Maximum time in ns that a fragment can wait for further
> fragments */
> > + uint64_t reass_timeout;
> > + /** Maximum number of fragments that device can reassemble */
> > + uint16_t max_frags;
> > + /** Reserved for future capabilities */
> > + uint16_t reserved[3];
> > +};
> > +
>
> I wonder if there is any other hardware around supports reassembly offload,
> it would be good to get more feedback on the capabilities list.
>
> > /**
> > * Ethernet device associated switch information
> > */
> > @@ -1582,8 +1597,9 @@ struct rte_eth_dev_info {
> > * embedded managed interconnect/switch.
> > */
> > struct rte_eth_switch_info switch_info;
> > + /* Reassembly capabilities of a device for reassembly offload */
> > + struct rte_eth_reass_capa reass_capa;
> >
> > - uint64_t reserved_64s[2]; /**< Reserved for future fields */
>
> Reserved fields were added to be able to update the struct without breaking
> the ABI, so that a critical change doesn't have to wait until next ABI break
> release.
> Since this is ABI break release, we can keep the reserved field and add the
> new struct. Or this can be an opportunity to get rid of the reserved field.
>
> Personally I have no objection to get rid of the reserved field, but better to
> agree on this explicitly.
>
> > void *reserved_ptrs[2]; /**< Reserved for future fields */
> > };
> >
> > diff --git a/lib/mbuf/rte_mbuf_core.h b/lib/mbuf/rte_mbuf_core.h index
> > bb38d7f581..cea25c87f7 100644
> > --- a/lib/mbuf/rte_mbuf_core.h
> > +++ b/lib/mbuf/rte_mbuf_core.h
> > @@ -200,10 +200,11 @@ extern "C" {
> > #define PKT_RX_OUTER_L4_CKSUM_BAD (1ULL << 21)
> > #define PKT_RX_OUTER_L4_CKSUM_GOOD (1ULL << 22)
> > #define PKT_RX_OUTER_L4_CKSUM_INVALID ((1ULL << 21) | (1ULL
> << 22))
> > +#define PKT_RX_REASSEMBLY_INCOMPLETE (1ULL << 23)
> >
>
> Similar comment with Andrew's, what is the expectation from application if
> this flag exists? Can we drop it to simplify the logic in the application?
[Anoob] There can be a few cases where the hardware/NIC attempts inline reassembly but fails to complete it:
1. The number of fragments is larger than what the hardware supports
2. Hardware reassembly resources are exhausted (due to limited reassembly contexts etc.)
3. Reassembly errors such as overlapping fragments
4. Wait time exhausted (reassembly timeout)
In such cases, the application would be required to retrieve the original fragments so that it can attempt reassembly in software. The incomplete flag is useful for two purposes:
1. The application needs to retrieve the time the fragment has already spent in hardware reassembly, so that the software reassembly attempt can compensate for it. Otherwise, the reassembly timeout across hardware + software will not be accurate.
2. Retrieving the original fragments. With this proposal, an incomplete reassembly would result in a chained mbuf, but the segments need not be consecutive. To explain a bit more:
Suppose we have a packet that is fragmented into 3 fragments, and fragment 3 & fragment 1 arrive in that order. Fragment 2 didn't arrive, and the hardware ultimately pushes the packet out. In that case, the application would receive a chained/segmented mbuf with fragment 1 & fragment 3 chained.
Now, this chained mbuf can't be treated like a regular chained mbuf. Each fragment would have its own IP header, and there are fragments missing in between. The only thing the application is expected to do is retrieve the fragments and push them to s/w reassembly.
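To make the expected application flow concrete, here is a minimal sketch in plain C. The `struct mbuf` below is a stripped-down stand-in for `rte_mbuf` (only `ol_flags` and `next` matter here), and `frags_for_sw_reassembly()` is a hypothetical helper, not part of the proposed API:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Flag value as proposed in the patch (bit 23 of ol_flags). */
#define PKT_RX_REASSEMBLY_INCOMPLETE (1ULL << 23)

/* Minimal stand-in for rte_mbuf: only the fields this sketch needs. */
struct mbuf {
	uint64_t ol_flags;
	struct mbuf *next; /* chains the (possibly non-consecutive) fragments */
};

/*
 * Count the fragments the application must hand over to software
 * reassembly. Returns 0 when the incomplete flag is not set, i.e. the
 * chain is a regular segmented packet and needs no SW fallback.
 */
static int frags_for_sw_reassembly(const struct mbuf *m)
{
	if (!(m->ol_flags & PKT_RX_REASSEMBLY_INCOMPLETE))
		return 0;
	int n = 0;
	for (; m != NULL; m = m->next)
		n++; /* each segment is an original IP fragment */
	return n;
}
```

If the count is non-zero, each segment still carries its own IP header, so the application would feed the segments to a software reassembly library (e.g. DPDK's rte_ip_frag) rather than treat the chain as contiguous payload.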
>
> > /* add new RX flags here, don't forget to update PKT_FIRST_FREE */
> >
> > -#define PKT_FIRST_FREE (1ULL << 23)
> > +#define PKT_FIRST_FREE (1ULL << 24)
> > #define PKT_LAST_FREE (1ULL << 40)
> >
> > /* add new TX flags here, don't forget to update PKT_LAST_FREE */
> > diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
> > index 88d31de0a6..364eeb5cd4 100644
> > --- a/lib/security/rte_security.h
> > +++ b/lib/security/rte_security.h
> > @@ -181,6 +181,16 @@ struct rte_security_ipsec_sa_options {
> > * * 0: Disable per session security statistics collection for this SA.
> > */
> > uint32_t stats : 1;
> > +
> > + /** Enable reassembly on incoming packets.
> > + *
> > + * * 1: Enable driver to try reassembly of encrypted IP packets for
> > + * this SA, if supported by the driver. This feature will work
> > + * only if rx_offload DEV_RX_OFFLOAD_REASSEMBLY is set in
> > + * inline ethernet device.
> > + * * 0: Disable reassembly of packets (default).
> > + */
> > + uint32_t reass_en : 1;
> > };
> >
> > /** IPSec security association direction */
> >
* Re: [dpdk-dev] [EXT] Re: [PATCH] RFC: ethdev: add reassembly offload
2021-09-08 10:29 ` [dpdk-dev] [EXT] " Anoob Joseph
@ 2021-09-13 6:56 ` Xu, Rosen
2021-09-13 7:22 ` Andrew Rybchenko
0 siblings, 1 reply; 184+ messages in thread
From: Xu, Rosen @ 2021-09-13 6:56 UTC (permalink / raw)
To: Anoob Joseph, Yigit, Ferruh, Andrew Rybchenko
Cc: Nicolau, Radu, Doherty, Declan, hemant.agrawal, matan, Ananyev,
Konstantin, thomas, Ankur Dwivedi, andrew.rybchenko, Akhil Goyal,
dev, Xu, Rosen
Hi,
> -----Original Message-----
> From: Anoob Joseph <anoobj@marvell.com>
> Sent: Wednesday, September 08, 2021 18:30
> To: Yigit, Ferruh <ferruh.yigit@intel.com>; Xu, Rosen <rosen.xu@intel.com>;
> Andrew Rybchenko <arybchenko@solarflare.com>
> Cc: Nicolau, Radu <radu.nicolau@intel.com>; Doherty, Declan
> <declan.doherty@intel.com>; hemant.agrawal@nxp.com;
> matan@nvidia.com; Ananyev, Konstantin <konstantin.ananyev@intel.com>;
> thomas@monjalon.net; Ankur Dwivedi <adwivedi@marvell.com>;
> andrew.rybchenko@oktetlabs.ru; Akhil Goyal <gakhil@marvell.com>;
> dev@dpdk.org
> Subject: RE: [EXT] Re: [PATCH] RFC: ethdev: add reassembly offload
>
> Hi Ferruh, Rosen, Andrew,
>
> Please see inline.
>
> Thanks,
> Anoob
>
> > Subject: [EXT] Re: [PATCH] RFC: ethdev: add reassembly offload
> >
> > External Email
> >
> > ----------------------------------------------------------------------
> > On 8/23/2021 11:02 AM, Akhil Goyal wrote:
> > > Reassembly is a costly operation if it is done in software, however,
> > > if it is offloaded to HW, it can considerably save application cycles.
> > > The operation becomes even more costlier if IP fragmants are
> > > encrypted.
> > >
> > > To resolve above two issues, a new offload
> > DEV_RX_OFFLOAD_REASSEMBLY
> > > is introduced in ethdev for devices which can attempt reassembly of
> > > packets in hardware.
> > > rte_eth_dev_info is added with the reassembly capabilities which a
> > > device can support.
> > > Now, if IP fragments are encrypted, reassembly can also be attempted
> > > while doing inline IPsec processing.
> > > This is controlled by a flag in rte_security_ipsec_sa_options to
> > > enable reassembly of encrypted IP fragments in the inline path.
> > >
> > > The resulting reassembled packet would be a typical segmented mbuf
> > > in case of success.
> > >
> > > And if reassembly of fragments is failed or is incomplete (if
> > > fragments do not come before the reass_timeout), the mbuf is updated
> > > with an ol_flag PKT_RX_REASSEMBLY_INCOMPLETE and mbuf is returned
> > as
> > > is. Now application may decide the fate of the packet to wait more
> > > for fragments to come or drop.
> > >
> > > Signed-off-by: Akhil Goyal <gakhil@marvell.com>
> > > ---
> > > lib/ethdev/rte_ethdev.c | 1 +
> > > lib/ethdev/rte_ethdev.h | 18 +++++++++++++++++-
> > > lib/mbuf/rte_mbuf_core.h | 3 ++-
> > > lib/security/rte_security.h | 10 ++++++++++
> > > 4 files changed, 30 insertions(+), 2 deletions(-)
> > >
> > > diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c index
> > > 9d95cd11e1..1ab3a093cf 100644
> > > --- a/lib/ethdev/rte_ethdev.c
> > > +++ b/lib/ethdev/rte_ethdev.c
> > > @@ -119,6 +119,7 @@ static const struct {
> > > RTE_RX_OFFLOAD_BIT2STR(VLAN_FILTER),
> > > RTE_RX_OFFLOAD_BIT2STR(VLAN_EXTEND),
> > > RTE_RX_OFFLOAD_BIT2STR(JUMBO_FRAME),
> > > + RTE_RX_OFFLOAD_BIT2STR(REASSEMBLY),
> > > RTE_RX_OFFLOAD_BIT2STR(SCATTER),
> > > RTE_RX_OFFLOAD_BIT2STR(TIMESTAMP),
> > > RTE_RX_OFFLOAD_BIT2STR(SECURITY),
> > > diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h index
> > > d2b27c351f..e89a4dc1eb 100644
> > > --- a/lib/ethdev/rte_ethdev.h
> > > +++ b/lib/ethdev/rte_ethdev.h
> > > @@ -1360,6 +1360,7 @@ struct rte_eth_conf {
> > > #define DEV_RX_OFFLOAD_VLAN_FILTER 0x00000200
> > > #define DEV_RX_OFFLOAD_VLAN_EXTEND 0x00000400
> > > #define DEV_RX_OFFLOAD_JUMBO_FRAME 0x00000800
> > > +#define DEV_RX_OFFLOAD_REASSEMBLY 0x00001000
> >
> > previous '0x00001000' was 'DEV_RX_OFFLOAD_CRC_STRIP', it has been
> long
> > that offload has been removed, but not sure if it cause any problem to
> > re- use it.
> >
> > > #define DEV_RX_OFFLOAD_SCATTER 0x00002000
> > > /**
> > > * Timestamp is set by the driver in
> > RTE_MBUF_DYNFIELD_TIMESTAMP_NAME
> > > @@ -1477,6 +1478,20 @@ struct rte_eth_dev_portconf {
> > > */
> > > #define RTE_ETH_DEV_SWITCH_DOMAIN_ID_INVALID
> > (UINT16_MAX)
> > >
> > > +/**
> > > + * Reassembly capabilities that a device can support.
> > > + * The device which can support reassembly offload should set
> > > + * DEV_RX_OFFLOAD_REASSEMBLY
> > > + */
> > > +struct rte_eth_reass_capa {
> > > + /** Maximum time in ns that a fragment can wait for further
> > fragments */
> > > + uint64_t reass_timeout;
> > > + /** Maximum number of fragments that device can reassemble */
> > > + uint16_t max_frags;
> > > + /** Reserved for future capabilities */
> > > + uint16_t reserved[3];
> > > +};
> > > +
> >
> > I wonder if there is any other hardware around supports reassembly
> > offload, it would be good to get more feedback on the capabilities list.
> >
> > > /**
> > > * Ethernet device associated switch information
> > > */
> > > @@ -1582,8 +1597,9 @@ struct rte_eth_dev_info {
> > > * embedded managed interconnect/switch.
> > > */
> > > struct rte_eth_switch_info switch_info;
> > > + /* Reassembly capabilities of a device for reassembly offload */
> > > + struct rte_eth_reass_capa reass_capa;
> > >
> > > - uint64_t reserved_64s[2]; /**< Reserved for future fields */
> >
> > Reserved fields were added to be able to update the struct without
> > breaking the ABI, so that a critical change doesn't have to wait until
> > next ABI break release.
> > Since this is ABI break release, we can keep the reserved field and
> > add the new struct. Or this can be an opportunity to get rid of the reserved
> field.
> >
> > Personally I have no objection to get rid of the reserved field, but
> > better to agree on this explicitly.
> >
> > > void *reserved_ptrs[2]; /**< Reserved for future fields */
> > > };
> > >
> > > diff --git a/lib/mbuf/rte_mbuf_core.h b/lib/mbuf/rte_mbuf_core.h
> > > index
> > > bb38d7f581..cea25c87f7 100644
> > > --- a/lib/mbuf/rte_mbuf_core.h
> > > +++ b/lib/mbuf/rte_mbuf_core.h
> > > @@ -200,10 +200,11 @@ extern "C" {
> > > #define PKT_RX_OUTER_L4_CKSUM_BAD (1ULL << 21)
> > > #define PKT_RX_OUTER_L4_CKSUM_GOOD (1ULL << 22)
> > > #define PKT_RX_OUTER_L4_CKSUM_INVALID ((1ULL << 21) | (1ULL
> > << 22))
> > > +#define PKT_RX_REASSEMBLY_INCOMPLETE (1ULL << 23)
> > >
> >
> > Similar comment with Andrew's, what is the expectation from
> > application if this flag exists? Can we drop it to simplify the logic in the
> application?
>
> [Anoob] There can be few cases where hardware/NIC attempts inline
> reassembly but it fails to complete it
>
> 1. Number of fragments is larger than what is supported by the hardware 2.
> Hardware reassembly resources are exhausted (due to limited reassembly
> contexts etc) 3. Reassembly errors such as overlapping fragments 4. Wait
> time exhausted (or reassembly timeout)
>
> In such cases, application would be required to retrieve the original
> fragments so that it can attempt reassembly in software. The incomplete flag
> is useful for 2 purposes basically, 1. Application would need to retrieve the
> time the fragment has already spend in hardware reassembly so that
> software reassembly attempt can compensate for it. Otherwise, reassembly
> timeout across hardware + software will not be accurate 2. Retrieve original
> fragments. With this proposal, an incomplete reassembly would result in a
> chained mbuf but the segments need not be consecutive. To explain bit more,
>
> Suppose we have a packet that is fragmented into 3 fragments, and fragment
> 3 & fragment 1 arrives in that order. Fragment 2 didn't arrive and hardware
> ultimately pushes it. In that case, application would be receiving a
> chained/segmented mbuf with fragment 1 & fragment 3 chained.
>
> Now, this chained mbuf can't be treated like a regular chained mbuf. Each
> fragment would have its IP hdr and there are fragments missing in between.
> The only thing application is expected to do is, retrieve fragments, push it to
> s/w reassembly.
What you mentioned is error identification, but a negotiation of the maximum frame size is actually needed before datagrams are transmitted and received.
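One way such a negotiation could look from the application side, assuming the proposed capability struct is filled in by rte_eth_dev_info_get(): estimate the fragment count for the largest expected datagram and enable the offload only if the device can hold that many fragments. The struct layout below is copied from the patch; `hw_reass_usable()` and the IPv4-only size arithmetic are illustrative assumptions, not part of the proposal:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Capability struct as proposed in the patch. */
struct rte_eth_reass_capa {
	uint64_t reass_timeout; /* max ns a fragment may wait for the rest */
	uint16_t max_frags;     /* max fragments the device can reassemble */
	uint16_t reserved[3];
};

/*
 * Hypothetical negotiation check: with an MTU of `mtu` bytes, a datagram
 * of `datagram_len` bytes splits into roughly
 * ceil(datagram_len / (mtu - 20)) fragments (20 bytes = minimal IPv4
 * header repeated in every fragment). The offload is usable only if the
 * device can hold that many fragments per packet.
 */
static bool hw_reass_usable(const struct rte_eth_reass_capa *capa,
			    uint32_t datagram_len, uint32_t mtu)
{
	uint32_t payload_per_frag = mtu - 20;
	uint32_t frags = (datagram_len + payload_per_frag - 1) /
			 payload_per_frag;
	return frags <= capa->max_frags;
}
```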
> >
> > > /* add new RX flags here, don't forget to update PKT_FIRST_FREE */
> > >
> > > -#define PKT_FIRST_FREE (1ULL << 23)
> > > +#define PKT_FIRST_FREE (1ULL << 24)
> > > #define PKT_LAST_FREE (1ULL << 40)
> > >
> > > /* add new TX flags here, don't forget to update PKT_LAST_FREE */
> > > diff --git a/lib/security/rte_security.h
> > > b/lib/security/rte_security.h index 88d31de0a6..364eeb5cd4 100644
> > > --- a/lib/security/rte_security.h
> > > +++ b/lib/security/rte_security.h
> > > @@ -181,6 +181,16 @@ struct rte_security_ipsec_sa_options {
> > > * * 0: Disable per session security statistics collection for this SA.
> > > */
> > > uint32_t stats : 1;
> > > +
> > > + /** Enable reassembly on incoming packets.
> > > + *
> > > + * * 1: Enable driver to try reassembly of encrypted IP packets for
> > > + * this SA, if supported by the driver. This feature will work
> > > + * only if rx_offload DEV_RX_OFFLOAD_REASSEMBLY is set in
> > > + * inline ethernet device.
> > > + * * 0: Disable reassembly of packets (default).
> > > + */
> > > + uint32_t reass_en : 1;
> > > };
> > >
> > > /** IPSec security association direction */
> > >
* Re: [dpdk-dev] [EXT] Re: [PATCH] RFC: ethdev: add reassembly offload
2021-09-13 6:56 ` Xu, Rosen
@ 2021-09-13 7:22 ` Andrew Rybchenko
2021-09-14 5:14 ` Anoob Joseph
0 siblings, 1 reply; 184+ messages in thread
From: Andrew Rybchenko @ 2021-09-13 7:22 UTC (permalink / raw)
To: Xu, Rosen, Anoob Joseph, Yigit, Ferruh, Andrew Rybchenko
Cc: Nicolau, Radu, Doherty, Declan, hemant.agrawal, matan, Ananyev,
Konstantin, thomas, Ankur Dwivedi, Akhil Goyal, dev
On 9/13/21 9:56 AM, Xu, Rosen wrote:
> Hi,
>
>> -----Original Message-----
>> From: Anoob Joseph <anoobj@marvell.com>
>> Sent: Wednesday, September 08, 2021 18:30
>> To: Yigit, Ferruh <ferruh.yigit@intel.com>; Xu, Rosen <rosen.xu@intel.com>;
>> Andrew Rybchenko <arybchenko@solarflare.com>
>> Cc: Nicolau, Radu <radu.nicolau@intel.com>; Doherty, Declan
>> <declan.doherty@intel.com>; hemant.agrawal@nxp.com;
>> matan@nvidia.com; Ananyev, Konstantin <konstantin.ananyev@intel.com>;
>> thomas@monjalon.net; Ankur Dwivedi <adwivedi@marvell.com>;
>> andrew.rybchenko@oktetlabs.ru; Akhil Goyal <gakhil@marvell.com>;
>> dev@dpdk.org
>> Subject: RE: [EXT] Re: [PATCH] RFC: ethdev: add reassembly offload
>>
>> Hi Ferruh, Rosen, Andrew,
>>
>> Please see inline.
>>
>> Thanks,
>> Anoob
>>
>>> Subject: [EXT] Re: [PATCH] RFC: ethdev: add reassembly offload
>>>
>>> External Email
>>>
>>> ----------------------------------------------------------------------
>>> On 8/23/2021 11:02 AM, Akhil Goyal wrote:
>>>> Reassembly is a costly operation if it is done in software, however,
>>>> if it is offloaded to HW, it can considerably save application cycles.
>>>> The operation becomes even more costlier if IP fragmants are
>>>> encrypted.
>>>>
>>>> To resolve above two issues, a new offload
>>> DEV_RX_OFFLOAD_REASSEMBLY
>>>> is introduced in ethdev for devices which can attempt reassembly of
>>>> packets in hardware.
>>>> rte_eth_dev_info is added with the reassembly capabilities which a
>>>> device can support.
>>>> Now, if IP fragments are encrypted, reassembly can also be attempted
>>>> while doing inline IPsec processing.
>>>> This is controlled by a flag in rte_security_ipsec_sa_options to
>>>> enable reassembly of encrypted IP fragments in the inline path.
>>>>
>>>> The resulting reassembled packet would be a typical segmented mbuf
>>>> in case of success.
>>>>
>>>> And if reassembly of fragments is failed or is incomplete (if
>>>> fragments do not come before the reass_timeout), the mbuf is updated
>>>> with an ol_flag PKT_RX_REASSEMBLY_INCOMPLETE and mbuf is returned
>>> as
>>>> is. Now application may decide the fate of the packet to wait more
>>>> for fragments to come or drop.
>>>>
>>>> Signed-off-by: Akhil Goyal <gakhil@marvell.com>
>>>> ---
>>>> lib/ethdev/rte_ethdev.c | 1 +
>>>> lib/ethdev/rte_ethdev.h | 18 +++++++++++++++++-
>>>> lib/mbuf/rte_mbuf_core.h | 3 ++-
>>>> lib/security/rte_security.h | 10 ++++++++++
>>>> 4 files changed, 30 insertions(+), 2 deletions(-)
>>>>
>>>> diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c index
>>>> 9d95cd11e1..1ab3a093cf 100644
>>>> --- a/lib/ethdev/rte_ethdev.c
>>>> +++ b/lib/ethdev/rte_ethdev.c
>>>> @@ -119,6 +119,7 @@ static const struct {
>>>> RTE_RX_OFFLOAD_BIT2STR(VLAN_FILTER),
>>>> RTE_RX_OFFLOAD_BIT2STR(VLAN_EXTEND),
>>>> RTE_RX_OFFLOAD_BIT2STR(JUMBO_FRAME),
>>>> + RTE_RX_OFFLOAD_BIT2STR(REASSEMBLY),
>>>> RTE_RX_OFFLOAD_BIT2STR(SCATTER),
>>>> RTE_RX_OFFLOAD_BIT2STR(TIMESTAMP),
>>>> RTE_RX_OFFLOAD_BIT2STR(SECURITY),
>>>> diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h index
>>>> d2b27c351f..e89a4dc1eb 100644
>>>> --- a/lib/ethdev/rte_ethdev.h
>>>> +++ b/lib/ethdev/rte_ethdev.h
>>>> @@ -1360,6 +1360,7 @@ struct rte_eth_conf {
>>>> #define DEV_RX_OFFLOAD_VLAN_FILTER 0x00000200
>>>> #define DEV_RX_OFFLOAD_VLAN_EXTEND 0x00000400
>>>> #define DEV_RX_OFFLOAD_JUMBO_FRAME 0x00000800
>>>> +#define DEV_RX_OFFLOAD_REASSEMBLY 0x00001000
>>>
>>> previous '0x00001000' was 'DEV_RX_OFFLOAD_CRC_STRIP', it has been
>> long
>>> that offload has been removed, but not sure if it cause any problem to
>>> re- use it.
>>>
>>>> #define DEV_RX_OFFLOAD_SCATTER 0x00002000
>>>> /**
>>>> * Timestamp is set by the driver in
>>> RTE_MBUF_DYNFIELD_TIMESTAMP_NAME
>>>> @@ -1477,6 +1478,20 @@ struct rte_eth_dev_portconf {
>>>> */
>>>> #define RTE_ETH_DEV_SWITCH_DOMAIN_ID_INVALID
>>> (UINT16_MAX)
>>>>
>>>> +/**
>>>> + * Reassembly capabilities that a device can support.
>>>> + * The device which can support reassembly offload should set
>>>> + * DEV_RX_OFFLOAD_REASSEMBLY
>>>> + */
>>>> +struct rte_eth_reass_capa {
>>>> + /** Maximum time in ns that a fragment can wait for further
>>> fragments */
>>>> + uint64_t reass_timeout;
>>>> + /** Maximum number of fragments that device can reassemble */
>>>> + uint16_t max_frags;
>>>> + /** Reserved for future capabilities */
>>>> + uint16_t reserved[3];
>>>> +};
>>>> +
>>>
>>> I wonder if there is any other hardware around supports reassembly
>>> offload, it would be good to get more feedback on the capabilities list.
>>>
>>>> /**
>>>> * Ethernet device associated switch information
>>>> */
>>>> @@ -1582,8 +1597,9 @@ struct rte_eth_dev_info {
>>>> * embedded managed interconnect/switch.
>>>> */
>>>> struct rte_eth_switch_info switch_info;
>>>> + /* Reassembly capabilities of a device for reassembly offload */
>>>> + struct rte_eth_reass_capa reass_capa;
>>>>
>>>> - uint64_t reserved_64s[2]; /**< Reserved for future fields */
>>>
>>> Reserved fields were added to be able to update the struct without
>>> breaking the ABI, so that a critical change doesn't have to wait until
>>> next ABI break release.
>>> Since this is ABI break release, we can keep the reserved field and
>>> add the new struct. Or this can be an opportunity to get rid of the reserved
>> field.
>>>
>>> Personally I have no objection to get rid of the reserved field, but
>>> better to agree on this explicitly.
>>>
>>>> void *reserved_ptrs[2]; /**< Reserved for future fields */
>>>> };
>>>>
>>>> diff --git a/lib/mbuf/rte_mbuf_core.h b/lib/mbuf/rte_mbuf_core.h
>>>> index
>>>> bb38d7f581..cea25c87f7 100644
>>>> --- a/lib/mbuf/rte_mbuf_core.h
>>>> +++ b/lib/mbuf/rte_mbuf_core.h
>>>> @@ -200,10 +200,11 @@ extern "C" {
>>>> #define PKT_RX_OUTER_L4_CKSUM_BAD (1ULL << 21)
>>>> #define PKT_RX_OUTER_L4_CKSUM_GOOD (1ULL << 22)
>>>> #define PKT_RX_OUTER_L4_CKSUM_INVALID ((1ULL << 21) | (1ULL
>>> << 22))
>>>> +#define PKT_RX_REASSEMBLY_INCOMPLETE (1ULL << 23)
>>>>
>>>
>>> Similar comment with Andrew's, what is the expectation from
>>> application if this flag exists? Can we drop it to simplify the logic in the
>> application?
>>
>> [Anoob] There can be few cases where hardware/NIC attempts inline
>> reassembly but it fails to complete it
>>
>> 1. Number of fragments is larger than what is supported by the hardware 2.
>> Hardware reassembly resources are exhausted (due to limited reassembly
>> contexts etc) 3. Reassembly errors such as overlapping fragments 4. Wait
>> time exhausted (or reassembly timeout)
>>
>> In such cases, the application would be required to retrieve the original
>> fragments so that it can attempt reassembly in software. The incomplete flag
>> is useful for two purposes basically: 1. The application would need to retrieve
>> the time the fragment has already spent in hardware reassembly so that the
>> software reassembly attempt can compensate for it. Otherwise, the reassembly
>> timeout across hardware + software will not be accurate.
Could you clarify how the application will find out the time spent
in HW?
>> 2. Retrieve original
>> fragments. With this proposal, an incomplete reassembly would result in a
>> chained mbuf but the segments need not be consecutive. To explain a bit more,
>>
>> Suppose we have a packet that is fragmented into 3 fragments, and fragment
>> 3 & fragment 1 arrive in that order. Fragment 2 didn't arrive and the hardware
>> ultimately pushes it. In that case, the application would be receiving a
>> chained/segmented mbuf with fragment 1 & fragment 3 chained.
>>
>> Now, this chained mbuf can't be treated like a regular chained mbuf. Each
>> fragment would have its own IP hdr and there are fragments missing in between.
>> The only thing the application is expected to do is retrieve the fragments and
>> push them to s/w reassembly.
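To make the expected application-side handling concrete, here is a minimal sketch of walking such a chain and collecting its fragments for a software reassembly attempt. The struct and names below are simplified stand-ins for illustration only, not the real rte_mbuf layout:

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-in for an mbuf segment; the real struct rte_mbuf and
 * its 'next' segment pointer live in rte_mbuf_core.h. */
struct frag_mbuf {
	struct frag_mbuf *next; /* next fragment in the chain, NULL at end */
	unsigned int frag_id;   /* stand-in for the IP fragment offset */
};

/* Walk the chain of an incomplete reassembly and collect each fragment,
 * so they can be handed to a software reassembly stage (e.g. rte_ip_frag).
 * Returns the number of fragments collected. */
static unsigned int
collect_frags(struct frag_mbuf *head, struct frag_mbuf **out, unsigned int max)
{
	unsigned int n = 0;
	struct frag_mbuf *m;

	for (m = head; m != NULL && n < max; m = m->next)
		out[n++] = m;
	return n;
}
```

With fragments 1 and 3 chained as in the example above, collect_frags() hands both to software reassembly, which can then keep waiting for fragment 2.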
It sounds like it conflicts with SCATTER and BUFFER_SPLIT
offloads which allow returning chained mbufs. I don't know
if it is good or bad, but in any case it must be documented.
>
> What you mentioned is error identification. But actually a negotiation about the max frame size is needed before datagram tx/rx.
It sounds like it is OK for informational purposes, but
right now I don't understand how it could be used by the
application. Application still has to support reassembly
in SW regardless of the information.
>>>
>>>> /* add new RX flags here, don't forget to update PKT_FIRST_FREE */
>>>>
>>>> -#define PKT_FIRST_FREE (1ULL << 23)
>>>> +#define PKT_FIRST_FREE (1ULL << 24)
>>>> #define PKT_LAST_FREE (1ULL << 40)
>>>>
>>>> /* add new TX flags here, don't forget to update PKT_LAST_FREE */
>>>> diff --git a/lib/security/rte_security.h
>>>> b/lib/security/rte_security.h index 88d31de0a6..364eeb5cd4 100644
>>>> --- a/lib/security/rte_security.h
>>>> +++ b/lib/security/rte_security.h
>>>> @@ -181,6 +181,16 @@ struct rte_security_ipsec_sa_options {
>>>> * * 0: Disable per session security statistics collection for this SA.
>>>> */
>>>> uint32_t stats : 1;
>>>> +
>>>> + /** Enable reassembly on incoming packets.
>>>> + *
>>>> + * * 1: Enable driver to try reassembly of encrypted IP packets for
>>>> + * this SA, if supported by the driver. This feature will work
>>>> + * only if rx_offload DEV_RX_OFFLOAD_REASSEMBLY is set in
>>>> + * inline ethernet device.
>>>> + * * 0: Disable reassembly of packets (default).
>>>> + */
>>>> + uint32_t reass_en : 1;
>>>> };
>>>>
>>>> /** IPSec security association direction */
>>>>
>
^ permalink raw reply [flat|nested] 184+ messages in thread
* Re: [dpdk-dev] [EXT] Re: [PATCH] RFC: ethdev: add reassembly offload
2021-09-13 7:22 ` Andrew Rybchenko
@ 2021-09-14 5:14 ` Anoob Joseph
0 siblings, 0 replies; 184+ messages in thread
From: Anoob Joseph @ 2021-09-14 5:14 UTC (permalink / raw)
To: Andrew Rybchenko, Xu, Rosen, Yigit, Ferruh, Andrew Rybchenko
Cc: Nicolau, Radu, Doherty, Declan, hemant.agrawal, matan, Ananyev,
Konstantin, thomas, Ankur Dwivedi, Akhil Goyal, dev
Hi Andrew, Rosen,
Please see inline.
Thanks,
Anoob
> -----Original Message-----
> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> Sent: Monday, September 13, 2021 12:52 PM
> To: Xu, Rosen <rosen.xu@intel.com>; Anoob Joseph
> <anoobj@marvell.com>; Yigit, Ferruh <ferruh.yigit@intel.com>; Andrew
> Rybchenko <arybchenko@solarflare.com>
> Cc: Nicolau, Radu <radu.nicolau@intel.com>; Doherty, Declan
> <declan.doherty@intel.com>; hemant.agrawal@nxp.com;
> matan@nvidia.com; Ananyev, Konstantin <konstantin.ananyev@intel.com>;
> thomas@monjalon.net; Ankur Dwivedi <adwivedi@marvell.com>; Akhil
> Goyal <gakhil@marvell.com>; dev@dpdk.org
> Subject: Re: [EXT] Re: [PATCH] RFC: ethdev: add reassembly offload
>
> On 9/13/21 9:56 AM, Xu, Rosen wrote:
> > Hi,
> >
> >> -----Original Message-----
> >> From: Anoob Joseph <anoobj@marvell.com>
> >> Sent: Wednesday, September 08, 2021 18:30
> >> To: Yigit, Ferruh <ferruh.yigit@intel.com>; Xu, Rosen
> >> <rosen.xu@intel.com>; Andrew Rybchenko
> <arybchenko@solarflare.com>
> >> Cc: Nicolau, Radu <radu.nicolau@intel.com>; Doherty, Declan
> >> <declan.doherty@intel.com>; hemant.agrawal@nxp.com;
> matan@nvidia.com;
> >> Ananyev, Konstantin <konstantin.ananyev@intel.com>;
> >> thomas@monjalon.net; Ankur Dwivedi <adwivedi@marvell.com>;
> >> andrew.rybchenko@oktetlabs.ru; Akhil Goyal <gakhil@marvell.com>;
> >> dev@dpdk.org
> >> Subject: RE: [EXT] Re: [PATCH] RFC: ethdev: add reassembly offload
> >>
> >> Hi Ferruh, Rosen, Andrew,
> >>
> >> Please see inline.
> >>
> >> Thanks,
> >> Anoob
> >>
> >>> Subject: [EXT] Re: [PATCH] RFC: ethdev: add reassembly offload
> >>>
> >>> External Email
> >>>
> >>> --------------------------------------------------------------------
> >>> -- On 8/23/2021 11:02 AM, Akhil Goyal wrote:
> >>>> Reassembly is a costly operation if it is done in software,
> >>>> however, if it is offloaded to HW, it can considerably save application
> cycles.
> >>>> The operation becomes even costlier if IP fragments are
> >>>> encrypted.
> >>>>
> >>>> To resolve above two issues, a new offload
> >>> DEV_RX_OFFLOAD_REASSEMBLY
> >>>> is introduced in ethdev for devices which can attempt reassembly of
> >>>> packets in hardware.
> >>>> rte_eth_dev_info is added with the reassembly capabilities which a
> >>>> device can support.
> >>>> Now, if IP fragments are encrypted, reassembly can also be
> >>>> attempted while doing inline IPsec processing.
> >>>> This is controlled by a flag in rte_security_ipsec_sa_options to
> >>>> enable reassembly of encrypted IP fragments in the inline path.
> >>>>
> >>>> The resulting reassembled packet would be a typical segmented mbuf
> >>>> in case of success.
> >>>>
> >>>> And if reassembly of fragments is failed or is incomplete (if
> >>>> fragments do not come before the reass_timeout), the mbuf is
> >>>> updated with an ol_flag PKT_RX_REASSEMBLY_INCOMPLETE and mbuf
> is
> >>>> returned
> >>> as
> >>>> is. Now application may decide the fate of the packet to wait more
> >>>> for fragments to come or drop.
> >>>>
> >>>> Signed-off-by: Akhil Goyal <gakhil@marvell.com>
> >>>> ---
> >>>> lib/ethdev/rte_ethdev.c | 1 +
> >>>> lib/ethdev/rte_ethdev.h | 18 +++++++++++++++++-
> >>>> lib/mbuf/rte_mbuf_core.h | 3 ++-
> >>>> lib/security/rte_security.h | 10 ++++++++++
> >>>> 4 files changed, 30 insertions(+), 2 deletions(-)
> >>>>
> >>>> diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
> >>>> index 9d95cd11e1..1ab3a093cf 100644
> >>>> --- a/lib/ethdev/rte_ethdev.c
> >>>> +++ b/lib/ethdev/rte_ethdev.c
> >>>> @@ -119,6 +119,7 @@ static const struct {
> >>>> RTE_RX_OFFLOAD_BIT2STR(VLAN_FILTER),
> >>>> RTE_RX_OFFLOAD_BIT2STR(VLAN_EXTEND),
> >>>> RTE_RX_OFFLOAD_BIT2STR(JUMBO_FRAME),
> >>>> + RTE_RX_OFFLOAD_BIT2STR(REASSEMBLY),
> >>>> RTE_RX_OFFLOAD_BIT2STR(SCATTER),
> >>>> RTE_RX_OFFLOAD_BIT2STR(TIMESTAMP),
> >>>> RTE_RX_OFFLOAD_BIT2STR(SECURITY), diff --git
> >>>> a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h index
> >>>> d2b27c351f..e89a4dc1eb 100644
> >>>> --- a/lib/ethdev/rte_ethdev.h
> >>>> +++ b/lib/ethdev/rte_ethdev.h
> >>>> @@ -1360,6 +1360,7 @@ struct rte_eth_conf {
> >>>> #define DEV_RX_OFFLOAD_VLAN_FILTER 0x00000200
> >>>> #define DEV_RX_OFFLOAD_VLAN_EXTEND 0x00000400
> >>>> #define DEV_RX_OFFLOAD_JUMBO_FRAME 0x00000800
> >>>> +#define DEV_RX_OFFLOAD_REASSEMBLY 0x00001000
> >>>
> >>> previous '0x00001000' was 'DEV_RX_OFFLOAD_CRC_STRIP'; it has been
> >>> long since that offload was removed, but not sure if it causes any
> >>> problem to re-use it.
> >>>
> >>>> #define DEV_RX_OFFLOAD_SCATTER 0x00002000
> >>>> /**
> >>>> * Timestamp is set by the driver in
> >>> RTE_MBUF_DYNFIELD_TIMESTAMP_NAME
> >>>> @@ -1477,6 +1478,20 @@ struct rte_eth_dev_portconf {
> >>>> */
> >>>> #define RTE_ETH_DEV_SWITCH_DOMAIN_ID_INVALID
> >>> (UINT16_MAX)
> >>>>
> >>>> +/**
> >>>> + * Reassembly capabilities that a device can support.
> >>>> + * The device which can support reassembly offload should set
> >>>> + * DEV_RX_OFFLOAD_REASSEMBLY
> >>>> + */
> >>>> +struct rte_eth_reass_capa {
> >>>> + /** Maximum time in ns that a fragment can wait for further
> >>> fragments */
> >>>> + uint64_t reass_timeout;
> >>>> + /** Maximum number of fragments that device can reassemble */
> >>>> + uint16_t max_frags;
> >>>> + /** Reserved for future capabilities */
> >>>> + uint16_t reserved[3];
> >>>> +};
> >>>> +
> >>>
> >>> I wonder if there is any other hardware around that supports reassembly
> >>> offload; it would be good to get more feedback on the capabilities list.
> >>>
> >>>> /**
> >>>> * Ethernet device associated switch information
> >>>> */
> >>>> @@ -1582,8 +1597,9 @@ struct rte_eth_dev_info {
> >>>> * embedded managed interconnect/switch.
> >>>> */
> >>>> struct rte_eth_switch_info switch_info;
> >>>> + /* Reassembly capabilities of a device for reassembly offload */
> >>>> + struct rte_eth_reass_capa reass_capa;
> >>>>
> >>>> - uint64_t reserved_64s[2]; /**< Reserved for future fields */
> >>>
> >>> Reserved fields were added to be able to update the struct without
> >>> breaking the ABI, so that a critical change doesn't have to wait
> >>> until the next ABI break release.
> >>> Since this is an ABI break release, we can keep the reserved field and
> >>> add the new struct. Or this can be an opportunity to get rid of the
> >>> reserved field.
> >>>
> >>> Personally I have no objection to getting rid of the reserved field,
> >>> but it is better to agree on this explicitly.
> >>>
> >>>> void *reserved_ptrs[2]; /**< Reserved for future fields */
> >>>> };
> >>>>
> >>>> diff --git a/lib/mbuf/rte_mbuf_core.h b/lib/mbuf/rte_mbuf_core.h
> >>>> index
> >>>> bb38d7f581..cea25c87f7 100644
> >>>> --- a/lib/mbuf/rte_mbuf_core.h
> >>>> +++ b/lib/mbuf/rte_mbuf_core.h
> >>>> @@ -200,10 +200,11 @@ extern "C" {
> >>>> #define PKT_RX_OUTER_L4_CKSUM_BAD (1ULL << 21)
> >>>> #define PKT_RX_OUTER_L4_CKSUM_GOOD (1ULL << 22)
> >>>> #define PKT_RX_OUTER_L4_CKSUM_INVALID ((1ULL << 21) | (1ULL
> >>> << 22))
> >>>> +#define PKT_RX_REASSEMBLY_INCOMPLETE (1ULL << 23)
> >>>>
> >>>
> >>> Similar comment to Andrew's: what is the expectation from the
> >>> application if this flag exists? Can we drop it to simplify the
> >>> logic in the application?
> >>
> >> [Anoob] There can be a few cases where hardware/NIC attempts inline
> >> reassembly but fails to complete it:
> >>
> >> 1. Number of fragments is larger than what is supported by the hardware
> >> 2. Hardware reassembly resources are exhausted (due to limited reassembly contexts etc.)
> >> 3. Reassembly errors such as overlapping fragments
> >> 4. Wait time exhausted (or reassembly timeout)
> >>
> >> In such cases, the application would be required to retrieve the original
> >> fragments so that it can attempt reassembly in software. The
> >> incomplete flag is useful for two purposes basically: 1. The application
> >> would need to retrieve the time the fragment has already spent in
> >> hardware reassembly so that the software reassembly attempt can
> >> compensate for it. Otherwise, the reassembly timeout across hardware +
> >> software will not be accurate.
>
> Could you clarify how the application will find out the time spent in HW?
[Anoob] We could use rte_mbuf dynamic fields for the same. Looks like the RFC hasn't touched on this aspect yet.
>
> >> 2. Retrieve original fragments. With this proposal, an incomplete
> >> reassembly would result in a chained mbuf but the segments need not be
> >> consecutive. To explain a bit more,
> >>
> >> Suppose we have a packet that is fragmented into 3 fragments, and
> >> fragment 3 & fragment 1 arrive in that order. Fragment 2 didn't arrive and
> >> the hardware ultimately pushes it. In that case, the application would be
> >> receiving a chained/segmented mbuf with fragment 1 & fragment 3 chained.
> >>
> >> Now, this chained mbuf can't be treated like a regular chained mbuf.
> >> Each fragment would have its own IP hdr and there are fragments missing
> >> in between.
> >> The only thing the application is expected to do is retrieve the
> >> fragments and push them to s/w reassembly.
>
> It sounds like it conflicts with SCATTER and BUFFER_SPLIT offloads which
> allow returning chained mbufs. I don't know if it is good or bad, but in any
> case it must be documented.
[Anoob] Agreed.
>
> >
> > What you mentioned is error identification. But actually a negotiation about
> > the max frame size is needed before datagram tx/rx.
[Anoob] The actual reassembly settings would be negotiated by the s/w. The offload can be thought of like how checksum is being done now. S/w negotiates with the peer and then enables the hardware to accelerate. If the hardware is able to reassemble, then well and good. If not, we would have software compensate for it.
>
> It sounds like it is OK for informational purposes, but right now I don't
> understand how it could be used by the application. Application still has to
> support reassembly in SW regardless of the information.
[Anoob] The additional information from an "incomplete reassembly" attempt would be useful for software to properly compensate for the hardware reassembly attempt (basically, the reassembly timeout is honored across the s/w + h/w reassembly attempts).
The benefit of such an offload is in accelerating reassembly in hardware for performance use cases. If the application expects heavy fragmentation, then every packet would have a cost of ~1000 cycles (typically) to get reassembled. By offloading this (at least some portion of it) to hardware, the application would be able to save significant cycles.
Since IP reassembly presents varying challenges depending on hardware implementation, we cannot expect complete reassembly offload in hardware. For some vendors, the maximum number of fragments supported could be limited. Some vendors could have a limited reassembly timeout (or wait_time). Some vendors could have limitations depending on datagram sizes. So s/w reassembly is not going away even with the proposed hardware-assisted inline reassembly.
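The timeout compensation discussed above can be sketched roughly as follows. The helper name and the idea of reading the hardware wait time back from a dynamic field are assumptions for illustration, not part of the proposed API:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical helper: given the configured total reassembly timeout and
 * the time (in ms) the hardware already spent waiting for fragments (as
 * would be reported back to the application, e.g. via an mbuf dynamic
 * field), compute the remaining budget for the software reassembly stage. */
static uint32_t
sw_reass_budget_ms(uint32_t total_timeout_ms, uint16_t hw_time_spent_ms)
{
	if (hw_time_spent_ms >= total_timeout_ms)
		return 0; /* budget already exhausted in hardware */
	return total_timeout_ms - hw_time_spent_ms;
}
```

This way the total h/w + s/w wait never exceeds the single configured reassembly timeout.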
>
> >>>
> >>>> /* add new RX flags here, don't forget to update PKT_FIRST_FREE */
> >>>>
> >>>> -#define PKT_FIRST_FREE (1ULL << 23)
> >>>> +#define PKT_FIRST_FREE (1ULL << 24)
> >>>> #define PKT_LAST_FREE (1ULL << 40)
> >>>>
> >>>> /* add new TX flags here, don't forget to update PKT_LAST_FREE */
> >>>> diff --git a/lib/security/rte_security.h
> >>>> b/lib/security/rte_security.h index 88d31de0a6..364eeb5cd4 100644
> >>>> --- a/lib/security/rte_security.h
> >>>> +++ b/lib/security/rte_security.h
> >>>> @@ -181,6 +181,16 @@ struct rte_security_ipsec_sa_options {
> >>>> * * 0: Disable per session security statistics collection for this SA.
> >>>> */
> >>>> uint32_t stats : 1;
> >>>> +
> >>>> + /** Enable reassembly on incoming packets.
> >>>> + *
> >>>> + * * 1: Enable driver to try reassembly of encrypted IP packets for
> >>>> + * this SA, if supported by the driver. This feature will work
> >>>> + * only if rx_offload DEV_RX_OFFLOAD_REASSEMBLY is set in
> >>>> + * inline ethernet device.
> >>>> + * * 0: Disable reassembly of packets (default).
> >>>> + */
> >>>> + uint32_t reass_en : 1;
> >>>> };
> >>>>
> >>>> /** IPSec security association direction */
> >>>>
> >
^ permalink raw reply [flat|nested] 184+ messages in thread
* Re: [dpdk-dev] [EXT] Re: [PATCH] RFC: ethdev: add reassembly offload
2021-08-29 13:14 ` [dpdk-dev] [EXT] " Akhil Goyal
@ 2021-09-21 19:59 ` Thomas Monjalon
0 siblings, 0 replies; 184+ messages in thread
From: Thomas Monjalon @ 2021-09-21 19:59 UTC (permalink / raw)
To: Akhil Goyal
Cc: Andrew Rybchenko, dev, Anoob Joseph, radu.nicolau,
declan.doherty, hemant.agrawal, matan, konstantin.ananyev,
Ankur Dwivedi, ferruh.yigit
29/08/2021 15:14, Akhil Goyal:
> > On 8/23/21 1:02 PM, Akhil Goyal wrote:
> > > +#define DEV_RX_OFFLOAD_REASSEMBLY 0x00001000
> >
> > I think it should be:
> > RTE_ETH_RX_OFFLOAD_IPV4_REASSEMBLY
> >
> > i.e. have correct prefix similar to
> > RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT and mention IPv4.
> >
> > If we'd like to cover IPv6 as well, it could be
> > RTE_ETH_RX_OFFLOAD_IP_REASSEMBLY and have IPv4/6
> > support bits in the offload capabilities below.
>
> Intention is to update spec for both.
> Will update the capabilities accordingly to have both IPv4 and IPv6.
>
> >
> > > #define DEV_RX_OFFLOAD_SCATTER 0x00002000
> > > /**
> > > * Timestamp is set by the driver in
> > RTE_MBUF_DYNFIELD_TIMESTAMP_NAME
> > > @@ -1477,6 +1478,20 @@ struct rte_eth_dev_portconf {
> > > */
> > > #define RTE_ETH_DEV_SWITCH_DOMAIN_ID_INVALID
> > (UINT16_MAX)
> > >
> > > +/**
> > > + * Reassembly capabilities that a device can support.
> > > + * The device which can support reassembly offload should set
> > > + * DEV_RX_OFFLOAD_REASSEMBLY
> > > + */
> > > +struct rte_eth_reass_capa {
Please add "IP" in flags, struct and comments.
^ permalink raw reply [flat|nested] 184+ messages in thread
* [PATCH 0/8] ethdev: introduce IP reassembly offload
2021-08-23 10:02 [dpdk-dev] [PATCH] RFC: ethdev: add reassembly offload Akhil Goyal
` (2 preceding siblings ...)
2021-09-08 6:34 ` [dpdk-dev] " Xu, Rosen
@ 2022-01-03 15:08 ` Akhil Goyal
2022-01-03 15:08 ` [PATCH 1/8] " Akhil Goyal
` (9 more replies)
3 siblings, 10 replies; 184+ messages in thread
From: Akhil Goyal @ 2022-01-03 15:08 UTC (permalink / raw)
To: dev
Cc: anoobj, radu.nicolau, declan.doherty, hemant.agrawal, matan,
konstantin.ananyev, thomas, ferruh.yigit, andrew.rybchenko,
olivier.matz, rosen.xu, Akhil Goyal
As discussed in the RFC[1] sent in 21.11, a new offload is
introduced in ethdev for IP reassembly.
This patchset adds the RX offload and an application to test it.
Currently, the offload is tested along with inline IPsec processing.
It can also be updated as a standalone offload without IPsec, if there
is some hardware available to test it.
The patchset is tested on cnxk platform. The driver implementation is
added as a separate patchset.
[1]: http://patches.dpdk.org/project/dpdk/patch/20210823100259.1619886-1-gakhil@marvell.com/
Akhil Goyal (8):
ethdev: introduce IP reassembly offload
ethdev: add dev op for IP reassembly configuration
ethdev: add mbuf dynfield for incomplete IP reassembly
security: add IPsec option for IP reassembly
app/test: add unit cases for inline IPsec offload
app/test: add IP reassembly case with no frags
app/test: add IP reassembly cases with multiple fragments
app/test: add IP reassembly negative cases
app/test/meson.build | 1 +
app/test/test_inline_ipsec.c | 1036 +++++++++++++++++
.../test_inline_ipsec_reassembly_vectors.h | 790 +++++++++++++
doc/guides/nics/features.rst | 12 +
lib/ethdev/ethdev_driver.h | 27 +
lib/ethdev/rte_ethdev.c | 47 +
lib/ethdev/rte_ethdev.h | 117 +-
lib/ethdev/version.map | 5 +
lib/mbuf/rte_mbuf_core.h | 3 +-
lib/security/rte_security.h | 12 +-
10 files changed, 2047 insertions(+), 3 deletions(-)
create mode 100644 app/test/test_inline_ipsec.c
create mode 100644 app/test/test_inline_ipsec_reassembly_vectors.h
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* [PATCH 1/8] ethdev: introduce IP reassembly offload
2022-01-03 15:08 ` [PATCH 0/8] ethdev: introduce IP " Akhil Goyal
@ 2022-01-03 15:08 ` Akhil Goyal
2022-01-11 16:03 ` Ananyev, Konstantin
2022-01-22 7:38 ` Andrew Rybchenko
2022-01-03 15:08 ` [PATCH 2/8] ethdev: add dev op for IP reassembly configuration Akhil Goyal
` (8 subsequent siblings)
9 siblings, 2 replies; 184+ messages in thread
From: Akhil Goyal @ 2022-01-03 15:08 UTC (permalink / raw)
To: dev
Cc: anoobj, radu.nicolau, declan.doherty, hemant.agrawal, matan,
konstantin.ananyev, thomas, ferruh.yigit, andrew.rybchenko,
olivier.matz, rosen.xu, Akhil Goyal
IP Reassembly is a costly operation if it is done in software.
The operation becomes even costlier if IP fragments are encrypted.
However, if it is offloaded to HW, it can considerably save application cycles.
Hence, a new offload RTE_ETH_RX_OFFLOAD_IP_REASSEMBLY is introduced in
ethdev for devices which can attempt reassembly of packets in hardware.
rte_eth_dev_info is updated with the reassembly capabilities which a device
can support.
The resulting reassembled packet would be a typical segmented mbuf in
case of success.
And if reassembly of fragments fails or is incomplete (if fragments do
not come before the reass_timeout), the mbuf ol_flags can be updated.
This is updated in a subsequent patch.
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
---
doc/guides/nics/features.rst | 12 ++++++++++++
lib/ethdev/rte_ethdev.c | 1 +
lib/ethdev/rte_ethdev.h | 32 +++++++++++++++++++++++++++++++-
3 files changed, 44 insertions(+), 1 deletion(-)
diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
index 27be2d2576..1dfdee9602 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -602,6 +602,18 @@ Supports inner packet L4 checksum.
``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM``.
+.. _nic_features_ip_reassembly:
+
+IP reassembly
+-------------
+
+Supports IP reassembly in hardware.
+
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_IP_REASSEMBLY``.
+* **[provides] mbuf**: ``mbuf.ol_flags:RTE_MBUF_F_RX_IP_REASSEMBLY_INCOMPLETE``.
+* **[provides] rte_eth_dev_info**: ``reass_capa``.
+
+
.. _nic_features_shared_rx_queue:
Shared Rx queue
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index a1d475a292..d9a03f12f9 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -126,6 +126,7 @@ static const struct {
RTE_RX_OFFLOAD_BIT2STR(OUTER_UDP_CKSUM),
RTE_RX_OFFLOAD_BIT2STR(RSS_HASH),
RTE_RX_OFFLOAD_BIT2STR(BUFFER_SPLIT),
+ RTE_RX_OFFLOAD_BIT2STR(IP_REASSEMBLY),
};
#undef RTE_RX_OFFLOAD_BIT2STR
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index fa299c8ad7..11427b2e4d 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -1586,6 +1586,7 @@ struct rte_eth_conf {
#define RTE_ETH_RX_OFFLOAD_RSS_HASH RTE_BIT64(19)
#define DEV_RX_OFFLOAD_RSS_HASH RTE_ETH_RX_OFFLOAD_RSS_HASH
#define RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT RTE_BIT64(20)
+#define RTE_ETH_RX_OFFLOAD_IP_REASSEMBLY RTE_BIT64(21)
#define RTE_ETH_RX_OFFLOAD_CHECKSUM (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | \
RTE_ETH_RX_OFFLOAD_UDP_CKSUM | \
@@ -1781,6 +1782,33 @@ enum rte_eth_representor_type {
RTE_ETH_REPRESENTOR_PF, /**< representor of Physical Function. */
};
+/* Flag to offload IP reassembly for IPv4 packets. */
+#define RTE_ETH_DEV_REASSEMBLY_F_IPV4 (RTE_BIT32(0))
+/* Flag to offload IP reassembly for IPv6 packets. */
+#define RTE_ETH_DEV_REASSEMBLY_F_IPV6 (RTE_BIT32(1))
+/**
+ * @warning
+ * @b EXPERIMENTAL: this structure may change without prior notice.
+ *
+ * A structure used to set IP reassembly configuration.
+ *
+ * If RTE_ETH_RX_OFFLOAD_IP_REASSEMBLY flag is set in offloads field,
+ * the PMD will attempt IP reassembly for the received packets as per
+ * properties defined in this structure:
+ *
+ */
+struct rte_eth_ip_reass_params {
+ /** Maximum time in ms which PMD can wait for other fragments. */
+ uint32_t reass_timeout;
+ /** Maximum number of fragments that can be reassembled. */
+ uint16_t max_frags;
+ /**
+ * Flags to enable reassembly of packet types -
+ * RTE_ETH_DEV_REASSEMBLY_F_xxx.
+ */
+ uint16_t flags;
+};
+
/**
* A structure used to retrieve the contextual information of
* an Ethernet device, such as the controlling driver of the
@@ -1841,8 +1869,10 @@ struct rte_eth_dev_info {
* embedded managed interconnect/switch.
*/
struct rte_eth_switch_info switch_info;
+ /** IP reassembly offload capabilities that a device can support. */
+ struct rte_eth_ip_reass_params reass_capa;
- uint64_t reserved_64s[2]; /**< Reserved for future fields */
+ uint64_t reserved_64s[1]; /**< Reserved for future fields */
void *reserved_ptrs[2]; /**< Reserved for future fields */
};
--
2.25.1
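For reference, a minimal sketch of how an application might check the advertised packet-type flags before enabling the offload. The flag values are copied from the patch (RTE_BIT32(0) and RTE_BIT32(1)); the helper itself and its name are hypothetical:

```c
#include <assert.h>
#include <stdint.h>

/* Values copied from the patch:
 * RTE_ETH_DEV_REASSEMBLY_F_IPV4 = RTE_BIT32(0),
 * RTE_ETH_DEV_REASSEMBLY_F_IPV6 = RTE_BIT32(1). */
#define REASSEMBLY_F_IPV4 (UINT32_C(1) << 0)
#define REASSEMBLY_F_IPV6 (UINT32_C(1) << 1)

/* Hypothetical helper: true when the device capability flags (as read
 * from rte_eth_dev_info.reass_capa.flags) cover every packet type the
 * application wants reassembled in hardware. */
static int
reass_capa_covers(uint16_t capa_flags, uint16_t wanted_flags)
{
	return (capa_flags & wanted_flags) == wanted_flags;
}
```

An application that needs IPv6 reassembly but sees only the IPv4 bit set would fall back to software reassembly for IPv6 traffic.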
^ permalink raw reply [flat|nested] 184+ messages in thread
* [PATCH 2/8] ethdev: add dev op for IP reassembly configuration
2022-01-03 15:08 ` [PATCH 0/8] ethdev: introduce IP " Akhil Goyal
2022-01-03 15:08 ` [PATCH 1/8] " Akhil Goyal
@ 2022-01-03 15:08 ` Akhil Goyal
2022-01-11 16:09 ` Ananyev, Konstantin
2022-01-03 15:08 ` [PATCH 3/8] ethdev: add mbuf dynfield for incomplete IP reassembly Akhil Goyal
` (7 subsequent siblings)
9 siblings, 1 reply; 184+ messages in thread
From: Akhil Goyal @ 2022-01-03 15:08 UTC (permalink / raw)
To: dev
Cc: anoobj, radu.nicolau, declan.doherty, hemant.agrawal, matan,
konstantin.ananyev, thomas, ferruh.yigit, andrew.rybchenko,
olivier.matz, rosen.xu, Akhil Goyal
A new ethernet device op is added to give application control over
the IP reassembly configuration. This operation is an optional
call from the application, default values are set by PMD and
exposed via rte_eth_dev_info.
Application should always first retrieve the capabilities from
rte_eth_dev_info and then set the fields accordingly.
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
---
lib/ethdev/ethdev_driver.h | 19 +++++++++++++++++++
lib/ethdev/rte_ethdev.c | 30 ++++++++++++++++++++++++++++++
lib/ethdev/rte_ethdev.h | 28 ++++++++++++++++++++++++++++
lib/ethdev/version.map | 3 +++
4 files changed, 80 insertions(+)
diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
index d95605a355..0ed53c14f3 100644
--- a/lib/ethdev/ethdev_driver.h
+++ b/lib/ethdev/ethdev_driver.h
@@ -990,6 +990,22 @@ typedef int (*eth_representor_info_get_t)(struct rte_eth_dev *dev,
typedef int (*eth_rx_metadata_negotiate_t)(struct rte_eth_dev *dev,
uint64_t *features);
+/**
+ * @internal
+ * Set configuration parameters for enabling IP reassembly offload in hardware.
+ *
+ * @param dev
+ * Port (ethdev) handle
+ *
+ * @param[in] conf
+ * Configuration parameters for IP reassembly.
+ *
+ * @return
+ * Negative errno value on error, zero otherwise
+ */
+typedef int (*eth_ip_reassembly_conf_set_t)(struct rte_eth_dev *dev,
+ struct rte_eth_ip_reass_params *conf);
+
/**
* @internal A structure containing the functions exported by an Ethernet driver.
*/
@@ -1186,6 +1202,9 @@ struct eth_dev_ops {
* kinds of metadata to the PMD
*/
eth_rx_metadata_negotiate_t rx_metadata_negotiate;
+
+ /** Set IP reassembly configuration */
+ eth_ip_reassembly_conf_set_t ip_reassembly_conf_set;
};
/**
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index d9a03f12f9..ecc6c1fe37 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -6473,6 +6473,36 @@ rte_eth_rx_metadata_negotiate(uint16_t port_id, uint64_t *features)
(*dev->dev_ops->rx_metadata_negotiate)(dev, features));
}
+int
+rte_eth_ip_reassembly_conf_set(uint16_t port_id,
+ struct rte_eth_ip_reass_params *conf)
+{
+ struct rte_eth_dev *dev;
+
+ RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+ dev = &rte_eth_devices[port_id];
+
+ if ((dev->data->dev_conf.rxmode.offloads &
+ RTE_ETH_RX_OFFLOAD_IP_REASSEMBLY) == 0) {
+ RTE_ETHDEV_LOG(ERR,
+ "The port (ID=%"PRIu16") is not configured for IP reassembly\n",
+ port_id);
+ return -EINVAL;
+ }
+
+
+ if (conf == NULL) {
+ RTE_ETHDEV_LOG(ERR,
+ "Invalid IP reassembly configuration (NULL)\n");
+ return -EINVAL;
+ }
+
+ RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->ip_reassembly_conf_set,
+ -ENOTSUP);
+ return eth_err(port_id,
+ (*dev->dev_ops->ip_reassembly_conf_set)(dev, conf));
+}
+
RTE_LOG_REGISTER_DEFAULT(rte_eth_dev_logtype, INFO);
RTE_INIT(ethdev_init_telemetry)
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index 11427b2e4d..891f9a6e06 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -5218,6 +5218,34 @@ int rte_eth_representor_info_get(uint16_t port_id,
__rte_experimental
int rte_eth_rx_metadata_negotiate(uint16_t port_id, uint64_t *features);
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Set IP reassembly configuration parameters if device rx offload
+ * flag (RTE_ETH_RX_OFFLOAD_IP_REASSEMBLY) is enabled and the PMD
+ * supports IP reassembly offload. User should first check the
+ * reass_capa in rte_eth_dev_info before setting the configuration.
+ * The values of configuration parameters must not exceed the device
+ * capabilities. The use of this API is optional and if called, it
+ * should be called before rte_eth_dev_start().
+ *
+ * @param port_id
+ * The port identifier of the device.
+ * @param conf
+ * A pointer to rte_eth_ip_reass_params structure.
+ * @return
+ * - (-ENOTSUP) if offload configuration is not supported by device.
+ * - (-EINVAL) if offload is not enabled in rte_eth_conf.
+ * - (-ENODEV) if *port_id* invalid.
+ * - (-EIO) if device is removed.
+ * - (0) on success.
+ */
+__rte_experimental
+int rte_eth_ip_reassembly_conf_set(uint16_t port_id,
+ struct rte_eth_ip_reass_params *conf);
+
+
#include <rte_ethdev_core.h>
/**
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index c2fb0669a4..f08fe72044 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -256,6 +256,9 @@ EXPERIMENTAL {
rte_flow_flex_item_create;
rte_flow_flex_item_release;
rte_flow_pick_transfer_proxy;
+
+ #added in 22.03
+ rte_eth_ip_reassembly_conf_set;
};
INTERNAL {
--
2.25.1
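The contract described above (retrieve capabilities first, then pass a configuration whose values must not exceed them, before rte_eth_dev_start()) can be sketched as follows. The struct mirrors struct rte_eth_ip_reass_params from the patch; the clamping helper is a hypothetical application-side convenience, not part of the API:

```c
#include <assert.h>
#include <stdint.h>

/* Mirror of struct rte_eth_ip_reass_params from the patch. */
struct ip_reass_params {
	uint32_t reass_timeout; /* maximum wait for other fragments, in ms */
	uint16_t max_frags;     /* maximum fragments per reassembled packet */
	uint16_t flags;         /* RTE_ETH_DEV_REASSEMBLY_F_* packet types */
};

/* Hypothetical helper: clamp the application's request against the
 * capabilities reported in rte_eth_dev_info.reass_capa, so the values
 * passed to rte_eth_ip_reassembly_conf_set() never exceed device limits. */
static void
clamp_reass_conf(struct ip_reass_params *req,
		 const struct ip_reass_params *capa)
{
	if (req->reass_timeout > capa->reass_timeout)
		req->reass_timeout = capa->reass_timeout;
	if (req->max_frags > capa->max_frags)
		req->max_frags = capa->max_frags;
	req->flags &= capa->flags; /* keep only supported packet types */
}
```

After clamping, the application would call rte_eth_ip_reassembly_conf_set() with the result and only then start the port.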
^ permalink raw reply [flat|nested] 184+ messages in thread
* [PATCH 3/8] ethdev: add mbuf dynfield for incomplete IP reassembly
2022-01-03 15:08 ` [PATCH 0/8] ethdev: introduce IP " Akhil Goyal
2022-01-03 15:08 ` [PATCH 1/8] " Akhil Goyal
2022-01-03 15:08 ` [PATCH 2/8] ethdev: add dev op for IP reassembly configuration Akhil Goyal
@ 2022-01-03 15:08 ` Akhil Goyal
2022-01-11 17:04 ` Ananyev, Konstantin
2022-01-03 15:08 ` [PATCH 4/8] security: add IPsec option for " Akhil Goyal
` (6 subsequent siblings)
9 siblings, 1 reply; 184+ messages in thread
From: Akhil Goyal @ 2022-01-03 15:08 UTC (permalink / raw)
To: dev
Cc: anoobj, radu.nicolau, declan.doherty, hemant.agrawal, matan,
konstantin.ananyev, thomas, ferruh.yigit, andrew.rybchenko,
olivier.matz, rosen.xu, Akhil Goyal
Hardware IP reassembly may be incomplete for multiple reasons like
reassembly timeout reached, duplicate fragments, etc.
To save application cycles to process these packets again, a new
mbuf ol_flag (RTE_MBUF_F_RX_IPREASSEMBLY_INCOMPLETE) is added to
show that the mbuf received is not reassembled properly.
Now if this flag is set, the application can retrieve the corresponding chain of
mbufs using the mbuf dynfield set by the PMD. It will then be up to the
application to either drop those fragments or wait for more time.
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
---
lib/ethdev/ethdev_driver.h | 8 ++++++
lib/ethdev/rte_ethdev.c | 16 +++++++++++
lib/ethdev/rte_ethdev.h | 57 ++++++++++++++++++++++++++++++++++++++
lib/ethdev/version.map | 2 ++
lib/mbuf/rte_mbuf_core.h | 3 +-
5 files changed, 85 insertions(+), 1 deletion(-)
diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
index 0ed53c14f3..9a0bab9a61 100644
--- a/lib/ethdev/ethdev_driver.h
+++ b/lib/ethdev/ethdev_driver.h
@@ -1671,6 +1671,14 @@ int
rte_eth_hairpin_queue_peer_unbind(uint16_t cur_port, uint16_t cur_queue,
uint32_t direction);
+/**
+ * @internal
+ * Register mbuf dynamic field for IP reassembly incomplete case.
+ */
+__rte_internal
+int
+rte_eth_ip_reass_dynfield_register(void);
+
/*
* Legacy ethdev API used internally by drivers.
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index ecc6c1fe37..d53ce4eaca 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -6503,6 +6503,22 @@ rte_eth_ip_reassembly_conf_set(uint16_t port_id,
(*dev->dev_ops->ip_reassembly_conf_set)(dev, conf));
}
+#define RTE_ETH_IP_REASS_DYNFIELD_NAME "rte_eth_ip_reass_dynfield"
+int rte_eth_ip_reass_dynfield_offset = -1;
+
+int
+rte_eth_ip_reass_dynfield_register(void)
+{
+ static const struct rte_mbuf_dynfield dynfield_desc = {
+ .name = RTE_ETH_IP_REASS_DYNFIELD_NAME,
+ .size = sizeof(rte_eth_ip_reass_dynfield_t),
+ .align = __alignof__(rte_eth_ip_reass_dynfield_t),
+ };
+ rte_eth_ip_reass_dynfield_offset =
+ rte_mbuf_dynfield_register(&dynfield_desc);
+ return rte_eth_ip_reass_dynfield_offset;
+}
+
RTE_LOG_REGISTER_DEFAULT(rte_eth_dev_logtype, INFO);
RTE_INIT(ethdev_init_telemetry)
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index 891f9a6e06..c4024d2265 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -5245,6 +5245,63 @@ __rte_experimental
int rte_eth_ip_reassembly_conf_set(uint16_t port_id,
struct rte_eth_ip_reass_params *conf);
+/**
+ * In case of IP reassembly offload failure, ol_flags in mbuf will be set
+ * with RTE_MBUF_F_RX_IPREASSEMBLY_INCOMPLETE and packets will be returned
+ * without alteration. The application can retrieve the attached fragments
+ * using mbuf dynamic field.
+ */
+typedef struct {
+ /**
+ * Next fragment packet. The application should fetch the dynamic
+ * field of each fragment until next_frag is NULL and nb_frags is 0.
+ */
+ struct rte_mbuf *next_frag;
+ /** Time spent (in ms) by the HW waiting for further fragments. */
+ uint16_t time_spent;
+ /** Number of more fragments attached in mbuf dynamic fields. */
+ uint16_t nb_frags;
+} rte_eth_ip_reass_dynfield_t;
+
+extern int rte_eth_ip_reass_dynfield_offset;
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Get pointer to mbuf dynamic field for getting incomplete
+ * reassembled fragments.
+ *
+ * For performance reasons, no check is done;
+ * the dynamic field may not be registered.
+ * @see rte_eth_ip_reass_dynfield_is_registered
+ *
+ * @param mbuf packet to access
+ * @return pointer to mbuf dynamic field
+ */
+__rte_experimental
+static inline rte_eth_ip_reass_dynfield_t *
+rte_eth_ip_reass_dynfield(struct rte_mbuf *mbuf)
+{
+ return RTE_MBUF_DYNFIELD(mbuf,
+ rte_eth_ip_reass_dynfield_offset,
+ rte_eth_ip_reass_dynfield_t *);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Check whether the dynamic field is registered.
+ *
+ * @return true if rte_eth_ip_reass_dynfield_register() has been called.
+ */
+__rte_experimental
+static inline bool rte_eth_ip_reass_dynfield_is_registered(void)
+{
+ return rte_eth_ip_reass_dynfield_offset >= 0;
+}
+
#include <rte_ethdev_core.h>
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index f08fe72044..e824b776b1 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -259,6 +259,7 @@ EXPERIMENTAL {
# added in 22.03
rte_eth_ip_reassembly_conf_set;
+ rte_eth_ip_reass_dynfield_offset;
};
INTERNAL {
@@ -282,6 +283,7 @@ INTERNAL {
rte_eth_hairpin_queue_peer_bind;
rte_eth_hairpin_queue_peer_unbind;
rte_eth_hairpin_queue_peer_update;
+ rte_eth_ip_reass_dynfield_register;
rte_eth_representor_id_get;
rte_eth_switch_domain_alloc;
rte_eth_switch_domain_free;
diff --git a/lib/mbuf/rte_mbuf_core.h b/lib/mbuf/rte_mbuf_core.h
index 321a419c71..2cd1f95ae4 100644
--- a/lib/mbuf/rte_mbuf_core.h
+++ b/lib/mbuf/rte_mbuf_core.h
@@ -233,10 +233,11 @@ extern "C" {
#define PKT_RX_OUTER_L4_CKSUM_INVALID \
RTE_DEPRECATED(PKT_RX_OUTER_L4_CKSUM_INVALID) \
RTE_MBUF_F_RX_OUTER_L4_CKSUM_INVALID
+#define RTE_MBUF_F_RX_IPREASSEMBLY_INCOMPLETE (1ULL << 23)
/* add new RX flags here, don't forget to update RTE_MBUF_F_FIRST_FREE */
-#define RTE_MBUF_F_FIRST_FREE (1ULL << 23)
+#define RTE_MBUF_F_FIRST_FREE (1ULL << 24)
#define PKT_FIRST_FREE RTE_DEPRECATED(PKT_FIRST_FREE) RTE_MBUF_F_FIRST_FREE
#define RTE_MBUF_F_LAST_FREE (1ULL << 40)
#define PKT_LAST_FREE RTE_DEPRECATED(PKT_LAST_FREE) RTE_MBUF_F_LAST_FREE
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* [PATCH 4/8] security: add IPsec option for IP reassembly
2022-01-03 15:08 ` [PATCH 0/8] ethdev: introduce IP " Akhil Goyal
` (2 preceding siblings ...)
2022-01-03 15:08 ` [PATCH 3/8] ethdev: add mbuf dynfield for incomplete IP reassembly Akhil Goyal
@ 2022-01-03 15:08 ` Akhil Goyal
2022-01-03 15:08 ` [PATCH 5/8] app/test: add unit cases for inline IPsec offload Akhil Goyal
` (5 subsequent siblings)
9 siblings, 0 replies; 184+ messages in thread
From: Akhil Goyal @ 2022-01-03 15:08 UTC (permalink / raw)
To: dev
Cc: anoobj, radu.nicolau, declan.doherty, hemant.agrawal, matan,
konstantin.ananyev, thomas, ferruh.yigit, andrew.rybchenko,
olivier.matz, rosen.xu, Akhil Goyal
A new option is added to the IPsec SA options to enable reassembly
of inbound packets.
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
---
lib/security/rte_security.h | 12 +++++++++++-
1 file changed, 11 insertions(+), 1 deletion(-)
diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
index 1228b6c8b1..168b837a82 100644
--- a/lib/security/rte_security.h
+++ b/lib/security/rte_security.h
@@ -264,6 +264,16 @@ struct rte_security_ipsec_sa_options {
*/
uint32_t l4_csum_enable : 1;
+ /** Enable reassembly on incoming packets.
+ *
+ * * 1: Enable driver to try reassembly of encrypted IP packets for
+ * this SA, if supported by the driver. This feature will work
+ * only if the rx_offload RTE_ETH_RX_OFFLOAD_IP_REASSEMBLY is set on
+ * the inline Ethernet device.
+ * * 0: Disable reassembly of packets (default).
+ */
+ uint32_t reass_en : 1;
+
/** Reserved bit fields for future extension
*
* User should ensure reserved_opts is cleared as it may change in
@@ -271,7 +281,7 @@ struct rte_security_ipsec_sa_options {
*
* Note: Reduce number of bits in reserved_opts for every new option.
*/
- uint32_t reserved_opts : 18;
+ uint32_t reserved_opts : 17;
};
/** IPSec security association direction */
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* [PATCH 5/8] app/test: add unit cases for inline IPsec offload
2022-01-03 15:08 ` [PATCH 0/8] ethdev: introduce IP " Akhil Goyal
` (3 preceding siblings ...)
2022-01-03 15:08 ` [PATCH 4/8] security: add IPsec option for " Akhil Goyal
@ 2022-01-03 15:08 ` Akhil Goyal
2022-01-20 16:48 ` [PATCH v2 0/4] app/test: add inline IPsec and reassembly cases Akhil Goyal
2022-01-03 15:08 ` [PATCH 6/8] app/test: add IP reassembly case with no frags Akhil Goyal
` (4 subsequent siblings)
9 siblings, 1 reply; 184+ messages in thread
From: Akhil Goyal @ 2022-01-03 15:08 UTC (permalink / raw)
To: dev
Cc: anoobj, radu.nicolau, declan.doherty, hemant.agrawal, matan,
konstantin.ananyev, thomas, ferruh.yigit, andrew.rybchenko,
olivier.matz, rosen.xu, Akhil Goyal
A new test suite is added in the test app to test inline IPsec protocol
offload. In this patch, a couple of predefined plain and cipher test
vectors are used to verify the IPsec functionality without the need for
external traffic generators. The transmitted packet is looped back onto
the same interface, where it is received and matched against the
expected output.
The test suite can be updated further with other functional test cases.
The testsuite can be run using:
RTE> inline_ipsec_autotest
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
---
app/test/meson.build | 1 +
app/test/test_inline_ipsec.c | 728 ++++++++++++++++++
.../test_inline_ipsec_reassembly_vectors.h | 198 +++++
3 files changed, 927 insertions(+)
create mode 100644 app/test/test_inline_ipsec.c
create mode 100644 app/test/test_inline_ipsec_reassembly_vectors.h
diff --git a/app/test/meson.build b/app/test/meson.build
index 2b480adfba..9c88240e3f 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -74,6 +74,7 @@ test_sources = files(
'test_hash_readwrite.c',
'test_hash_perf.c',
'test_hash_readwrite_lf_perf.c',
+ 'test_inline_ipsec.c',
'test_interrupts.c',
'test_ipfrag.c',
'test_ipsec.c',
diff --git a/app/test/test_inline_ipsec.c b/app/test/test_inline_ipsec.c
new file mode 100644
index 0000000000..54b56ba9e8
--- /dev/null
+++ b/app/test/test_inline_ipsec.c
@@ -0,0 +1,728 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+
+#include <stdio.h>
+#include <inttypes.h>
+#include <signal.h>
+#include <unistd.h>
+#include <rte_cycles.h>
+#include <rte_ethdev.h>
+#include <rte_security.h>
+#include <rte_ipsec.h>
+#include <rte_byteorder.h>
+#include <rte_atomic.h>
+#include <rte_malloc.h>
+#include "test_inline_ipsec_reassembly_vectors.h"
+#include "test.h"
+
+#define NB_ETHPORTS_USED (1)
+#define NB_SOCKETS (2)
+#define MEMPOOL_CACHE_SIZE 32
+#define MAX_PKT_BURST (32)
+#define RTE_TEST_RX_DESC_DEFAULT (1024)
+#define RTE_TEST_TX_DESC_DEFAULT (1024)
+#define RTE_PORT_ALL (~(uint16_t)0x0)
+
+/*
+ * RX and TX Prefetch, Host, and Write-back threshold values should be
+ * carefully set for optimal performance. Consult the network
+ * controller's datasheet and supporting DPDK documentation for guidance
+ * on how these parameters should be set.
+ */
+#define RX_PTHRESH 8 /**< Default values of RX prefetch threshold reg. */
+#define RX_HTHRESH 8 /**< Default values of RX host threshold reg. */
+#define RX_WTHRESH 0 /**< Default values of RX write-back threshold reg. */
+
+#define TX_PTHRESH 32 /**< Default values of TX prefetch threshold reg. */
+#define TX_HTHRESH 0 /**< Default values of TX host threshold reg. */
+#define TX_WTHRESH 0 /**< Default values of TX write-back threshold reg. */
+
+#define MAX_TRAFFIC_BURST 2048
+
+#define NB_MBUF 1024
+
+#define APP_REASS_TIMEOUT 20
+
+static struct rte_mempool *mbufpool[NB_SOCKETS];
+static struct rte_mempool *sess_pool[NB_SOCKETS];
+static struct rte_mempool *sess_priv_pool[NB_SOCKETS];
+/* ethernet addresses of ports */
+static struct rte_ether_addr ports_eth_addr[RTE_MAX_ETHPORTS];
+
+static struct rte_eth_conf port_conf = {
+ .rxmode = {
+ .mq_mode = RTE_ETH_MQ_RX_NONE,
+ .split_hdr_size = 0,
+ .offloads = RTE_ETH_RX_OFFLOAD_IP_REASSEMBLY |
+ RTE_ETH_RX_OFFLOAD_CHECKSUM |
+ RTE_ETH_RX_OFFLOAD_SECURITY,
+ },
+ .txmode = {
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
+ .offloads = RTE_ETH_TX_OFFLOAD_SECURITY |
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE,
+ },
+ .lpbk_mode = 1, /* enable loopback */
+};
+
+static struct rte_eth_rxconf rx_conf = {
+ .rx_thresh = {
+ .pthresh = RX_PTHRESH,
+ .hthresh = RX_HTHRESH,
+ .wthresh = RX_WTHRESH,
+ },
+ .rx_free_thresh = 32,
+};
+
+static struct rte_eth_txconf tx_conf = {
+ .tx_thresh = {
+ .pthresh = TX_PTHRESH,
+ .hthresh = TX_HTHRESH,
+ .wthresh = TX_WTHRESH,
+ },
+ .tx_free_thresh = 32, /* Use PMD default values */
+ .tx_rs_thresh = 32, /* Use PMD default values */
+};
+
+enum {
+ LCORE_INVALID = 0,
+ LCORE_AVAIL,
+ LCORE_USED,
+};
+
+struct lcore_cfg {
+ uint8_t status;
+ uint8_t socketid;
+ uint16_t nb_ports;
+ uint16_t port;
+} __rte_cache_aligned;
+
+struct lcore_cfg lcore_cfg;
+
+static uint64_t link_mbps;
+
+/* Create Inline IPsec session */
+static int
+create_inline_ipsec_session(struct ipsec_session_data *sa,
+ uint16_t portid, struct rte_ipsec_session *ips,
+ enum rte_security_ipsec_sa_direction dir,
+ enum rte_security_ipsec_tunnel_type tun_type)
+{
+ int32_t ret = 0;
+ struct rte_security_ctx *sec_ctx;
+ uint32_t src_v4 = rte_cpu_to_be_32(RTE_IPV4(192, 168, 1, 0));
+ uint32_t dst_v4 = rte_cpu_to_be_32(RTE_IPV4(192, 168, 1, 1));
+ uint16_t src_v6[8] = {0x2607, 0xf8b0, 0x400c, 0x0c03, 0x0000, 0x0000,
+ 0x0000, 0x001a};
+ uint16_t dst_v6[8] = {0x2001, 0x0470, 0xe5bf, 0xdead, 0x4957, 0x2174,
+ 0xe82c, 0x4887};
+ struct rte_security_session_conf sess_conf = {
+ .action_type = RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL,
+ .protocol = RTE_SECURITY_PROTOCOL_IPSEC,
+ .ipsec = sa->ipsec_xform,
+ .crypto_xform = &sa->xform.aead,
+ .userdata = NULL,
+ };
+ sess_conf.ipsec.direction = dir;
+
+ const struct rte_security_capability *sec_cap;
+
+ sec_ctx = (struct rte_security_ctx *)
+ rte_eth_dev_get_sec_ctx(portid);
+
+ if (sec_ctx == NULL) {
+ printf("Ethernet device doesn't support security features.\n");
+ return TEST_SKIPPED;
+ }
+
+ sess_conf.crypto_xform->aead.key.data = sa->key.data;
+
+ /* Save SA as userdata for the security session. When
+ * the packet is received, this userdata will be
+ * retrieved using the metadata from the packet.
+ *
+ * The PMD is expected to set similar metadata for other
+ * operations, like rte_eth_event, which are tied to
+ * security session. In such cases, the userdata could
+ * be obtained to uniquely identify the security
+ * parameters denoted.
+ */
+
+ sess_conf.userdata = (void *) sa;
+ sess_conf.ipsec.tunnel.type = tun_type;
+ if (tun_type == RTE_SECURITY_IPSEC_TUNNEL_IPV4) {
+ memcpy(&sess_conf.ipsec.tunnel.ipv4.src_ip, &src_v4,
+ sizeof(src_v4));
+ memcpy(&sess_conf.ipsec.tunnel.ipv4.dst_ip, &dst_v4,
+ sizeof(dst_v4));
+ } else {
+ memcpy(&sess_conf.ipsec.tunnel.ipv6.src_addr, &src_v6,
+ sizeof(src_v6));
+ memcpy(&sess_conf.ipsec.tunnel.ipv6.dst_addr, &dst_v6,
+ sizeof(dst_v6));
+ }
+ ips->security.ses = rte_security_session_create(sec_ctx,
+ &sess_conf, sess_pool[lcore_cfg.socketid],
+ sess_priv_pool[lcore_cfg.socketid]);
+ if (ips->security.ses == NULL) {
+ printf("SEC Session init failed: err: %d\n", ret);
+ return TEST_FAILED;
+ }
+
+ sec_cap = rte_security_capabilities_get(sec_ctx);
+ if (sec_cap == NULL) {
+ printf("No capabilities registered\n");
+ return TEST_SKIPPED;
+ }
+
+ /* Iterate until the ESP tunnel capability is found */
+ while (sec_cap->action !=
+ RTE_SECURITY_ACTION_TYPE_NONE) {
+ if (sec_cap->action == sess_conf.action_type &&
+ sec_cap->protocol ==
+ RTE_SECURITY_PROTOCOL_IPSEC &&
+ sec_cap->ipsec.mode ==
+ sess_conf.ipsec.mode &&
+ sec_cap->ipsec.direction == dir)
+ break;
+ sec_cap++;
+ }
+
+ if (sec_cap->action == RTE_SECURITY_ACTION_TYPE_NONE) {
+ printf("No suitable security capability found\n");
+ return TEST_SKIPPED;
+ }
+
+ ips->security.ol_flags = sec_cap->ol_flags;
+ ips->security.ctx = sec_ctx;
+
+ return 0;
+}
+
+/* Check the link status of all ports in up to 3s, and print them finally */
+static void
+check_all_ports_link_status(uint16_t port_num, uint32_t port_mask)
+{
+#define CHECK_INTERVAL 100 /* 100ms */
+#define MAX_CHECK_TIME 30 /* 3s (30 * 100ms) in total */
+ uint16_t portid;
+ uint8_t count, all_ports_up, print_flag = 0;
+ struct rte_eth_link link;
+ int ret;
+ char link_status[RTE_ETH_LINK_MAX_STR_LEN];
+
+ printf("Checking link statuses...\n");
+ fflush(stdout);
+ for (count = 0; count <= MAX_CHECK_TIME; count++) {
+ all_ports_up = 1;
+ for (portid = 0; portid < port_num; portid++) {
+ if ((port_mask & (1 << portid)) == 0)
+ continue;
+ memset(&link, 0, sizeof(link));
+ ret = rte_eth_link_get_nowait(portid, &link);
+ if (ret < 0) {
+ all_ports_up = 0;
+ if (print_flag == 1)
+ printf("Port %u link get failed: %s\n",
+ portid, rte_strerror(-ret));
+ continue;
+ }
+
+ /* print link status if flag set */
+ if (print_flag == 1) {
+ if (link.link_status && link_mbps == 0)
+ link_mbps = link.link_speed;
+
+ rte_eth_link_to_str(link_status,
+ sizeof(link_status), &link);
+ printf("Port %d %s\n", portid, link_status);
+ continue;
+ }
+ /* clear all_ports_up flag if any link down */
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
+ all_ports_up = 0;
+ break;
+ }
+ }
+ /* after finally printing all link status, get out */
+ if (print_flag == 1)
+ break;
+
+ if (all_ports_up == 0) {
+ fflush(stdout);
+ rte_delay_ms(CHECK_INTERVAL);
+ }
+
+ /* set the print_flag if all ports up or timeout */
+ if (all_ports_up == 1 || count == (MAX_CHECK_TIME - 1))
+ print_flag = 1;
+ }
+}
+
+static void
+print_ethaddr(const char *name, const struct rte_ether_addr *eth_addr)
+{
+ char buf[RTE_ETHER_ADDR_FMT_SIZE];
+ rte_ether_format_addr(buf, RTE_ETHER_ADDR_FMT_SIZE, eth_addr);
+ printf("%s%s", name, buf);
+}
+
+static void
+copy_buf_to_pkt_segs(void *buf, unsigned len, struct rte_mbuf *pkt,
+ unsigned offset)
+{
+ struct rte_mbuf *seg;
+ void *seg_buf;
+ unsigned copy_len;
+
+ seg = pkt;
+ while (offset >= seg->data_len) {
+ offset -= seg->data_len;
+ seg = seg->next;
+ }
+ copy_len = seg->data_len - offset;
+ seg_buf = rte_pktmbuf_mtod_offset(seg, char *, offset);
+ while (len > copy_len) {
+ rte_memcpy(seg_buf, buf, (size_t) copy_len);
+ len -= copy_len;
+ buf = ((char *) buf + copy_len);
+ seg = seg->next;
+ seg_buf = rte_pktmbuf_mtod(seg, void *);
+ copy_len = seg->data_len;
+ }
+ rte_memcpy(seg_buf, buf, (size_t) len);
+}
+
+static inline void
+copy_buf_to_pkt(void *buf, unsigned len, struct rte_mbuf *pkt, unsigned offset)
+{
+ if (offset + len <= pkt->data_len) {
+ rte_memcpy(rte_pktmbuf_mtod_offset(pkt, char *, offset), buf,
+ (size_t) len);
+ return;
+ }
+ copy_buf_to_pkt_segs(buf, len, pkt, offset);
+}
+
+static inline int
+init_traffic(struct rte_mempool *mp,
+ struct rte_mbuf **pkts_burst,
+ struct ipsec_test_packet *vectors[],
+ uint32_t nb_pkts)
+{
+ struct rte_mbuf *pkt;
+ uint32_t i;
+
+ for (i = 0; i < nb_pkts; i++) {
+ pkt = rte_pktmbuf_alloc(mp);
+ if (pkt == NULL) {
+ return TEST_FAILED;
+ }
+ pkt->data_len = vectors[i]->len;
+ pkt->pkt_len = vectors[i]->len;
+ copy_buf_to_pkt(vectors[i]->data, vectors[i]->len,
+ pkt, vectors[i]->l2_offset);
+
+ pkts_burst[i] = pkt;
+ }
+ return i;
+}
+
+static int
+init_lcore(void)
+{
+ unsigned lcore_id;
+
+ for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
+ lcore_cfg.socketid =
+ rte_lcore_to_socket_id(lcore_id);
+ if (rte_lcore_is_enabled(lcore_id) == 0) {
+ lcore_cfg.status = LCORE_INVALID;
+ continue;
+ } else {
+ lcore_cfg.status = LCORE_AVAIL;
+ break;
+ }
+ }
+ return 0;
+}
+
+static int
+init_mempools(unsigned nb_mbuf)
+{
+ struct rte_security_ctx *sec_ctx;
+ int socketid;
+ unsigned lcore_id;
+ uint16_t nb_sess = 64;
+ uint32_t sess_sz;
+ char s[64];
+
+ for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
+ if (rte_lcore_is_enabled(lcore_id) == 0)
+ continue;
+
+ socketid = rte_lcore_to_socket_id(lcore_id);
+ if (socketid >= NB_SOCKETS) {
+ rte_exit(EXIT_FAILURE,
+ "Socket %d of lcore %u is out of range %d\n",
+ socketid, lcore_id, NB_SOCKETS);
+ }
+ if (mbufpool[socketid] == NULL) {
+ snprintf(s, sizeof(s), "mbuf_pool_%d", socketid);
+ mbufpool[socketid] =
+ rte_pktmbuf_pool_create(s, nb_mbuf,
+ MEMPOOL_CACHE_SIZE, 0,
+ RTE_MBUF_DEFAULT_BUF_SIZE, socketid);
+ if (mbufpool[socketid] == NULL)
+ rte_exit(EXIT_FAILURE,
+ "Cannot init mbuf pool on socket %d\n",
+ socketid);
+ else
+ printf("Allocated mbuf pool on socket %d\n",
+ socketid);
+ }
+
+ sec_ctx = rte_eth_dev_get_sec_ctx(lcore_cfg.port);
+ if (sec_ctx == NULL)
+ continue;
+
+ sess_sz = rte_security_session_get_size(sec_ctx);
+ if (sess_pool[socketid] == NULL) {
+ snprintf(s, sizeof(s), "sess_pool_%d", socketid);
+ sess_pool[socketid] =
+ rte_mempool_create(s, nb_sess,
+ sess_sz,
+ MEMPOOL_CACHE_SIZE, 0,
+ NULL, NULL, NULL, NULL,
+ socketid, 0);
+ if (sess_pool[socketid] == NULL) {
+ printf("Cannot init sess pool on socket %d\n",
+ socketid);
+ return TEST_FAILED;
+ } else
+ printf("Allocated sess pool on socket %d\n",
+ socketid);
+ }
+ if (sess_priv_pool[socketid] == NULL) {
+ snprintf(s, sizeof(s), "sess_priv_pool_%d", socketid);
+ sess_priv_pool[socketid] =
+ rte_mempool_create(s, nb_sess,
+ sess_sz,
+ MEMPOOL_CACHE_SIZE, 0,
+ NULL, NULL, NULL, NULL,
+ socketid, 0);
+ if (sess_priv_pool[socketid] == NULL) {
+ printf("Cannot init sess_priv pool on socket %d\n",
+ socketid);
+ return TEST_FAILED;
+ } else
+ printf("Allocated sess_priv pool on socket %d\n",
+ socketid);
+ }
+ }
+ return 0;
+}
+
+static void
+create_default_flow(uint16_t port_id)
+{
+ struct rte_flow_action action[2];
+ struct rte_flow_item pattern[2];
+ struct rte_flow_attr attr = {0};
+ struct rte_flow_error err;
+ struct rte_flow *flow;
+ int ret;
+
+ /* Add the default rte_flow to enable SECURITY for all ESP packets */
+
+ pattern[0].type = RTE_FLOW_ITEM_TYPE_ESP;
+ pattern[0].spec = NULL;
+ pattern[0].mask = NULL;
+ pattern[0].last = NULL;
+ pattern[1].type = RTE_FLOW_ITEM_TYPE_END;
+
+ action[0].type = RTE_FLOW_ACTION_TYPE_SECURITY;
+ action[0].conf = NULL;
+ action[1].type = RTE_FLOW_ACTION_TYPE_END;
+ action[1].conf = NULL;
+
+ attr.ingress = 1;
+
+ ret = rte_flow_validate(port_id, &attr, pattern, action, &err);
+ if (ret)
+ return;
+
+ flow = rte_flow_create(port_id, &attr, pattern, action, &err);
+ if (flow == NULL)
+ return;
+}
+
+struct rte_mbuf **tx_pkts_burst;
+
+static int
+test_ipsec(struct reassembly_vector *vector,
+ enum rte_security_ipsec_sa_direction dir,
+ enum rte_security_ipsec_tunnel_type tun_type)
+{
+ struct rte_mbuf *pkts_burst[MAX_PKT_BURST];
+ unsigned i, portid, nb_rx = 0, nb_tx = 1;
+ struct rte_ipsec_session ips = {0};
+ struct rte_eth_dev_info dev_info = {0};
+
+ portid = lcore_cfg.port;
+ rte_eth_dev_info_get(portid, &dev_info);
+ if (dev_info.reass_capa.max_frags < nb_tx)
+ return TEST_SKIPPED;
+
+ init_traffic(mbufpool[lcore_cfg.socketid],
+ tx_pkts_burst, vector->frags, nb_tx);
+
+ /* Create Inline IPsec session. */
+ if (create_inline_ipsec_session(vector->sa_data, portid, &ips, dir,
+ tun_type))
+ return TEST_FAILED;
+ if (dir == RTE_SECURITY_IPSEC_SA_DIR_INGRESS)
+ create_default_flow(portid);
+ else {
+ for (i = 0; i < nb_tx; i++) {
+ if (ips.security.ol_flags &
+ RTE_SECURITY_TX_OLOAD_NEED_MDATA)
+ rte_security_set_pkt_metadata(ips.security.ctx,
+ ips.security.ses, tx_pkts_burst[i], NULL);
+ tx_pkts_burst[i]->ol_flags |= RTE_MBUF_F_TX_SEC_OFFLOAD;
+ tx_pkts_burst[i]->l2_len = 14;
+ }
+ }
+
+ nb_tx = rte_eth_tx_burst(portid, 0, tx_pkts_burst, nb_tx);
+
+ rte_pause();
+
+ do {
+ nb_rx = rte_eth_rx_burst(portid, 0, pkts_burst, MAX_PKT_BURST);
+ } while (nb_rx == 0);
+
+ /* Destroy session so that other cases can create the session again */
+ rte_security_session_destroy(ips.security.ctx, ips.security.ses);
+
+ /* Compare results with known vectors. */
+ if (nb_rx == 1) {
+ if (memcmp(rte_pktmbuf_mtod(pkts_burst[0], char *),
+ vector->full_pkt->data,
+ (size_t) vector->full_pkt->len)) {
+ printf("\n====Inline IPsec case failed: Data Mismatch");
+ rte_hexdump(stdout, "received",
+ rte_pktmbuf_mtod(pkts_burst[0], char *),
+ vector->full_pkt->len);
+ rte_hexdump(stdout, "reference",
+ vector->full_pkt->data,
+ vector->full_pkt->len);
+ return TEST_FAILED;
+ }
+ return TEST_SUCCESS;
+ } else
+ return TEST_FAILED;
+}
+
+static int
+ut_setup_inline_ipsec(void)
+{
+ uint16_t portid = lcore_cfg.port;
+ int ret;
+
+ /* Set IP reassembly configuration. */
+ struct rte_eth_dev_info dev_info = {0};
+ rte_eth_dev_info_get(portid, &dev_info);
+
+ ret = rte_eth_ip_reassembly_conf_set(portid, &dev_info.reass_capa);
+ if (ret < 0) {
+ printf("IP reassembly configuration err=%d, port=%d\n",
+ ret, portid);
+ return ret;
+ }
+
+ /* Start device */
+ ret = rte_eth_dev_start(portid);
+ if (ret < 0) {
+ printf("rte_eth_dev_start: err=%d, port=%d\n",
+ ret, portid);
+ return ret;
+ }
+ /* always enable promiscuous mode */
+ ret = rte_eth_promiscuous_enable(portid);
+ if (ret != 0) {
+ printf("rte_eth_promiscuous_enable: err=%s, port=%d\n",
+ rte_strerror(-ret), portid);
+ return ret;
+ }
+ lcore_cfg.port = portid;
+ check_all_ports_link_status(1, RTE_PORT_ALL);
+
+ return 0;
+}
+
+static void
+ut_teardown_inline_ipsec(void)
+{
+ uint16_t portid = lcore_cfg.port;
+ int socketid = lcore_cfg.socketid;
+ int ret;
+
+ /* port tear down */
+ RTE_ETH_FOREACH_DEV(portid) {
+ if (socketid != rte_eth_dev_socket_id(portid))
+ continue;
+
+ ret = rte_eth_dev_stop(portid);
+ if (ret != 0)
+ printf("rte_eth_dev_stop: err=%s, port=%u\n",
+ rte_strerror(-ret), portid);
+ }
+}
+
+static int
+testsuite_setup(void)
+{
+ uint16_t nb_rxd;
+ uint16_t nb_txd;
+ uint16_t nb_ports;
+ int socketid, ret;
+ uint16_t nb_rx_queue = 1, nb_tx_queue = 1;
+ uint16_t portid = lcore_cfg.port;
+
+ printf("Start inline IPsec test.\n");
+
+ nb_ports = rte_eth_dev_count_avail();
+ if (nb_ports < NB_ETHPORTS_USED) {
+ printf("At least %u port(s) used for test\n",
+ NB_ETHPORTS_USED);
+ return -1;
+ }
+
+ init_lcore();
+
+ init_mempools(NB_MBUF);
+
+ socketid = lcore_cfg.socketid;
+ if (tx_pkts_burst == NULL) {
+ tx_pkts_burst = (struct rte_mbuf **)
+ rte_calloc_socket("tx_buff",
+ MAX_TRAFFIC_BURST * nb_ports,
+ sizeof(void *),
+ RTE_CACHE_LINE_SIZE, socketid);
+ if (!tx_pkts_burst)
+ return -1;
+ }
+
+ printf("Generate %d packets @socket %d\n",
+ MAX_TRAFFIC_BURST * nb_ports, socketid);
+
+ nb_rxd = RTE_TEST_RX_DESC_DEFAULT;
+ nb_txd = RTE_TEST_TX_DESC_DEFAULT;
+
+ /* port configure */
+ ret = rte_eth_dev_configure(portid, nb_rx_queue,
+ nb_tx_queue, &port_conf);
+ if (ret < 0) {
+ printf("Cannot configure device: err=%d, port=%d\n",
+ ret, portid);
+ return ret;
+ }
+ ret = rte_eth_macaddr_get(portid, &ports_eth_addr[portid]);
+ if (ret < 0) {
+ printf("Cannot get mac address: err=%d, port=%d\n",
+ ret, portid);
+ return ret;
+ }
+ printf("Port %u ", portid);
+ print_ethaddr("Address:", &ports_eth_addr[portid]);
+ printf("\n");
+
+ /* tx queue setup */
+ ret = rte_eth_tx_queue_setup(portid, 0, nb_txd,
+ socketid, &tx_conf);
+ if (ret < 0) {
+ printf("rte_eth_tx_queue_setup: err=%d, port=%d\n",
+ ret, portid);
+ return ret;
+ }
+ /* rx queue setup */
+ ret = rte_eth_rx_queue_setup(portid, 0, nb_rxd,
+ socketid, &rx_conf,
+ mbufpool[socketid]);
+ if (ret < 0) {
+ printf("rte_eth_rx_queue_setup: err=%d, port=%d\n",
+ ret, portid);
+ return ret;
+ }
+
+
+ return 0;
+}
+
+static void
+testsuite_teardown(void)
+{
+ int ret;
+ uint16_t portid = lcore_cfg.port;
+ uint16_t socketid = lcore_cfg.socketid;
+
+ /* port tear down */
+ RTE_ETH_FOREACH_DEV(portid) {
+ if (socketid != rte_eth_dev_socket_id(portid))
+ continue;
+
+ ret = rte_eth_dev_stop(portid);
+ if (ret != 0)
+ printf("rte_eth_dev_stop: err=%s, port=%u\n",
+ rte_strerror(-ret), portid);
+ }
+}
+static int
+test_ipsec_ipv4_encap_nofrag(void) {
+ struct reassembly_vector ipv4_nofrag_case = {
+ .sa_data = &conf_aes_128_gcm,
+ .full_pkt = &pkt_ipv4_gcm128_cipher,
+ .frags[0] = &pkt_ipv4_plain,
+ };
+ return test_ipsec(&ipv4_nofrag_case,
+ RTE_SECURITY_IPSEC_SA_DIR_EGRESS,
+ RTE_SECURITY_IPSEC_TUNNEL_IPV4);
+}
+
+static int
+test_ipsec_ipv4_decap_nofrag(void) {
+ struct reassembly_vector ipv4_nofrag_case = {
+ .sa_data = &conf_aes_128_gcm,
+ .full_pkt = &pkt_ipv4_plain,
+ .frags[0] = &pkt_ipv4_gcm128_cipher,
+ };
+ return test_ipsec(&ipv4_nofrag_case,
+ RTE_SECURITY_IPSEC_SA_DIR_INGRESS,
+ RTE_SECURITY_IPSEC_TUNNEL_IPV4);
+}
+
+static struct unit_test_suite inline_ipsec_testsuite = {
+ .suite_name = "Inline IPsec Ethernet Device Unit Test Suite",
+ .setup = testsuite_setup,
+ .teardown = testsuite_teardown,
+ .unit_test_cases = {
+ TEST_CASE_ST(ut_setup_inline_ipsec,
+ ut_teardown_inline_ipsec,
+ test_ipsec_ipv4_encap_nofrag),
+ TEST_CASE_ST(ut_setup_inline_ipsec,
+ ut_teardown_inline_ipsec,
+ test_ipsec_ipv4_decap_nofrag),
+
+ TEST_CASES_END() /**< NULL terminate unit test array */
+ }
+};
+
+static int
+test_inline_ipsec(void)
+{
+ return unit_test_suite_runner(&inline_ipsec_testsuite);
+}
+
+REGISTER_TEST_COMMAND(inline_ipsec_autotest, test_inline_ipsec);
diff --git a/app/test/test_inline_ipsec_reassembly_vectors.h b/app/test/test_inline_ipsec_reassembly_vectors.h
new file mode 100644
index 0000000000..68066a0957
--- /dev/null
+++ b/app/test/test_inline_ipsec_reassembly_vectors.h
@@ -0,0 +1,198 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+#ifndef _TEST_INLINE_IPSEC_REASSEMBLY_VECTORS_H_
+#define _TEST_INLINE_IPSEC_REASSEMBLY_VECTORS_H_
+
+#define MAX_FRAG_LEN 1500
+#define MAX_FRAGS 6
+#define MAX_PKT_LEN (MAX_FRAG_LEN * MAX_FRAGS)
+struct ipsec_session_data {
+ struct {
+ uint8_t data[32];
+ } key;
+ struct {
+ uint8_t data[4];
+ unsigned int len;
+ } salt;
+ struct {
+ uint8_t data[16];
+ } iv;
+ struct rte_security_ipsec_xform ipsec_xform;
+ bool aead;
+ union {
+ struct {
+ struct rte_crypto_sym_xform cipher;
+ struct rte_crypto_sym_xform auth;
+ } chain;
+ struct rte_crypto_sym_xform aead;
+ } xform;
+};
+
+struct ipsec_test_packet {
+ uint32_t len;
+ uint32_t l2_offset;
+ uint32_t l3_offset;
+ uint32_t l4_offset;
+ uint8_t data[MAX_PKT_LEN];
+};
+
+struct reassembly_vector {
+ struct ipsec_session_data *sa_data;
+ struct ipsec_test_packet *full_pkt;
+ struct ipsec_test_packet *frags[MAX_FRAGS];
+};
+
+struct ipsec_test_packet pkt_ipv4_plain = {
+ .len = 76,
+ .l2_offset = 0,
+ .l3_offset = 14,
+ .l4_offset = 34,
+ .data = {
+ /* ETH */
+ 0xf1, 0xf1, 0xf1, 0xf1, 0xf1, 0xf1,
+ 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0x08, 0x00,
+
+ /* IP */
+ 0x45, 0x00, 0x00, 0x3e, 0x69, 0x8f, 0x00, 0x00,
+ 0x80, 0x11, 0x4d, 0xcc, 0xc0, 0xa8, 0x01, 0x02,
+ 0xc0, 0xa8, 0x01, 0x01,
+
+ /* UDP */
+ 0x0a, 0x98, 0x00, 0x35, 0x00, 0x2a, 0x23, 0x43,
+ 0xb2, 0xd0, 0x01, 0x00, 0x00, 0x01, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00, 0x03, 0x73, 0x69, 0x70,
+ 0x09, 0x63, 0x79, 0x62, 0x65, 0x72, 0x63, 0x69,
+ 0x74, 0x79, 0x02, 0x64, 0x6b, 0x00, 0x00, 0x01,
+ 0x00, 0x01,
+ },
+};
+
+struct ipsec_test_packet pkt_ipv4_gcm128_cipher = {
+ .len = 130,
+ .l2_offset = 0,
+ .l3_offset = 14,
+ .l4_offset = 34,
+ .data = {
+ /* ETH */
+ 0xf1, 0xf1, 0xf1, 0xf1, 0xf1, 0xf1,
+ 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0x08, 0x00,
+
+ /* IP - outer header */
+ 0x45, 0x00, 0x00, 0x74, 0x69, 0x8f, 0x00, 0x00,
+ 0x80, 0x32, 0x4d, 0x75, 0xc0, 0xa8, 0x01, 0x02,
+ 0xc0, 0xa8, 0x01, 0x01,
+
+ /* ESP */
+ 0x00, 0x00, 0xa5, 0xf8, 0x00, 0x00, 0x00, 0x01,
+
+ /* IV */
+ 0xfa, 0xce, 0xdb, 0xad, 0xde, 0xca, 0xf8, 0x88,
+
+ /* Data */
+ 0xde, 0xb2, 0x2c, 0xd9, 0xb0, 0x7c, 0x72, 0xc1,
+ 0x6e, 0x3a, 0x65, 0xbe, 0xeb, 0x8d, 0xf3, 0x04,
+ 0xa5, 0xa5, 0x89, 0x7d, 0x33, 0xae, 0x53, 0x0f,
+ 0x1b, 0xa7, 0x6d, 0x5d, 0x11, 0x4d, 0x2a, 0x5c,
+ 0x3d, 0xe8, 0x18, 0x27, 0xc1, 0x0e, 0x9a, 0x4f,
+ 0x51, 0x33, 0x0d, 0x0e, 0xec, 0x41, 0x66, 0x42,
+ 0xcf, 0xbb, 0x85, 0xa5, 0xb4, 0x7e, 0x48, 0xa4,
+ 0xec, 0x3b, 0x9b, 0xa9, 0x5d, 0x91, 0x8b, 0xd4,
+ 0x29, 0xc7, 0x37, 0x57, 0x9f, 0xf1, 0x9e, 0x58,
+ 0xcf, 0xfc, 0x60, 0x7a, 0x3b, 0xce, 0x89, 0x94,
+ },
+};
+
+static inline void
+test_vector_payload_populate(struct ipsec_test_packet *pkt,
+ bool first_frag)
+{
+ uint32_t i = pkt->l4_offset;
+
+ /* For non-fragmented packets and the first fragment, skip 8 bytes
+ * from l4_offset for the UDP header. */
+
+ if (first_frag)
+ i += 8;
+
+ for (; i < pkt->len; i++)
+ pkt->data[i] = 0x58;
+}
+
+static inline unsigned int
+reass_test_vectors_init(struct reassembly_vector *vector)
+{
+ unsigned int i = 0;
+
+ if (vector->frags[0] != NULL && vector->frags[1] == NULL)
+ return 1;
+
+ test_vector_payload_populate(vector->full_pkt, true);
+ for (; i < MAX_FRAGS && vector->frags[i] != NULL; i++)
+ test_vector_payload_populate(vector->frags[i],
+ (i == 0) ? true : false);
+ return i;
+}
+
+struct ipsec_session_data conf_aes_128_gcm = {
+ .key = {
+ .data = {
+ 0xfe, 0xff, 0xe9, 0x92, 0x86, 0x65, 0x73, 0x1c,
+ 0x6d, 0x6a, 0x8f, 0x94, 0x67, 0x30, 0x83, 0x08
+ },
+ },
+
+ .salt = {
+ .data = {
+ 0xca, 0xfe, 0xba, 0xbe
+ },
+ .len = 4,
+ },
+
+ .iv = {
+ .data = {
+ 0xfa, 0xce, 0xdb, 0xad, 0xde, 0xca, 0xf8, 0x88
+ },
+ },
+
+ .ipsec_xform = {
+ .spi = 0xa5f8,
+ .salt = 0xbebafeca,
+ .options.esn = 0,
+ .options.udp_encap = 0,
+ .options.copy_dscp = 0,
+ .options.copy_flabel = 0,
+ .options.copy_df = 0,
+ .options.dec_ttl = 0,
+ .options.ecn = 0,
+ .options.stats = 0,
+ .options.tunnel_hdr_verify = 0,
+ .options.ip_csum_enable = 0,
+ .options.l4_csum_enable = 0,
+ .options.reass_en = 1,
+ .direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS,
+ .proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
+ .mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
+ .tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4,
+ .replay_win_sz = 0,
+ },
+
+ .aead = true,
+
+ .xform = {
+ .aead = {
+ .next = NULL,
+ .type = RTE_CRYPTO_SYM_XFORM_AEAD,
+ .aead = {
+ .op = RTE_CRYPTO_AEAD_OP_ENCRYPT,
+ .algo = RTE_CRYPTO_AEAD_AES_GCM,
+ .key.length = 16,
+ .iv.length = 12,
+ .iv.offset = 0,
+ .digest_length = 16,
+ .aad_length = 12,
+ },
+ },
+ },
+};
+#endif
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* [PATCH 6/8] app/test: add IP reassembly case with no frags
2022-01-03 15:08 ` [PATCH 0/8] ethdev: introduce IP " Akhil Goyal
` (4 preceding siblings ...)
2022-01-03 15:08 ` [PATCH 5/8] app/test: add unit cases for inline IPsec offload Akhil Goyal
@ 2022-01-03 15:08 ` Akhil Goyal
2022-01-03 15:08 ` [PATCH 7/8] app/test: add IP reassembly cases with multiple fragments Akhil Goyal
` (3 subsequent siblings)
9 siblings, 0 replies; 184+ messages in thread
From: Akhil Goyal @ 2022-01-03 15:08 UTC (permalink / raw)
To: dev
Cc: anoobj, radu.nicolau, declan.doherty, hemant.agrawal, matan,
konstantin.ananyev, thomas, ferruh.yigit, andrew.rybchenko,
olivier.matz, rosen.xu, Akhil Goyal
The test_inline_ipsec testsuite is extended to test IP reassembly of
inbound fragmented packets. The fragments are sent on an interface
which encrypts them; they are then looped back on the same interface,
which decrypts the packets and then attempts IP reassembly of the
decrypted fragments.
In this patch, a case is added for packets without fragmentation to
verify the complete path. Other cases are added in subsequent patches.
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
---
app/test/test_inline_ipsec.c | 154 +++++++++++++++++++++++++++++++++++
1 file changed, 154 insertions(+)
diff --git a/app/test/test_inline_ipsec.c b/app/test/test_inline_ipsec.c
index 54b56ba9e8..f704725c0f 100644
--- a/app/test/test_inline_ipsec.c
+++ b/app/test/test_inline_ipsec.c
@@ -460,6 +460,145 @@ create_default_flow(uint16_t port_id)
struct rte_mbuf **tx_pkts_burst;
+static int
+compare_pkt_data(struct rte_mbuf *m, uint8_t *ref, unsigned int tot_len)
+{
+ unsigned int len;
+ unsigned int nb_segs = m->nb_segs;
+ unsigned int matched = 0;
+
+ while (m && nb_segs != 0) {
+ len = tot_len;
+ if (len > m->data_len)
+ len = m->data_len;
+ if (len != 0) {
+ if (memcmp(rte_pktmbuf_mtod(m, char *),
+ ref + matched, len)) {
+ printf("\n====Reassembly case failed: Data Mismatch");
+ rte_hexdump(stdout, "Reassembled",
+ rte_pktmbuf_mtod(m, char *),
+ len);
+ rte_hexdump(stdout, "reference",
+ ref + matched,
+ len);
+ return TEST_FAILED;
+ }
+ }
+ tot_len -= len;
+ matched += len;
+ m = m->next;
+ nb_segs--;
+ }
+ return TEST_SUCCESS;
+}
+
+static int
+test_reassembly(struct reassembly_vector *vector,
+ enum rte_security_ipsec_tunnel_type tun_type)
+{
+ struct rte_mbuf *pkts_burst[MAX_PKT_BURST];
+ unsigned i, portid, nb_rx = 0, nb_tx = 0;
+ struct rte_ipsec_session out_ips = {0};
+ struct rte_ipsec_session in_ips = {0};
+ struct rte_eth_dev_info dev_info = {0};
+ int ret = 0;
+
+ /* Initialize mbuf with test vectors. */
+ nb_tx = reass_test_vectors_init(vector);
+
+ portid = lcore_cfg.port;
+ rte_eth_dev_info_get(portid, &dev_info);
+ if (dev_info.reass_capa.max_frags < nb_tx)
+ return TEST_SKIPPED;
+
+ /**
+ * Set a finite timeout in case the PMD supports much
+ * more than required by this app.
+ */
+ if (dev_info.reass_capa.reass_timeout > APP_REASS_TIMEOUT) {
+ dev_info.reass_capa.reass_timeout = APP_REASS_TIMEOUT;
+ rte_eth_ip_reassembly_conf_set(portid, &dev_info.reass_capa);
+ }
+
+ init_traffic(mbufpool[lcore_cfg.socketid],
+ tx_pkts_burst, vector->frags, nb_tx);
+
+ /* Create Inline IPsec outbound session. */
+ ret = create_inline_ipsec_session(vector->sa_data, portid, &out_ips,
+ RTE_SECURITY_IPSEC_SA_DIR_EGRESS, tun_type);
+ if (ret)
+ return ret;
+ for (i = 0; i < nb_tx; i++) {
+ if (out_ips.security.ol_flags &
+ RTE_SECURITY_TX_OLOAD_NEED_MDATA)
+ rte_security_set_pkt_metadata(out_ips.security.ctx,
+ out_ips.security.ses, tx_pkts_burst[i], NULL);
+ tx_pkts_burst[i]->ol_flags |= RTE_MBUF_F_TX_SEC_OFFLOAD;
+ tx_pkts_burst[i]->l2_len = RTE_ETHER_HDR_LEN;
+ }
+ /* Create Inline IPsec inbound session. */
+ create_inline_ipsec_session(vector->sa_data, portid, &in_ips,
+ RTE_SECURITY_IPSEC_SA_DIR_INGRESS, tun_type);
+ create_default_flow(portid);
+
+ nb_tx = rte_eth_tx_burst(portid, 0, tx_pkts_burst, nb_tx);
+
+ rte_pause();
+
+ do {
+ nb_rx = rte_eth_rx_burst(portid, 0, pkts_burst, MAX_PKT_BURST);
+ for (i = 0; i < nb_rx; i++) {
+ if ((pkts_burst[i]->ol_flags &
+ RTE_MBUF_F_RX_IPREASSEMBLY_INCOMPLETE) &&
+ rte_eth_ip_reass_dynfield_is_registered()) {
+ rte_eth_ip_reass_dynfield_t *dynfield[MAX_PKT_BURST];
+ int j = 0;
+
+ dynfield[j] = rte_eth_ip_reass_dynfield(pkts_burst[i]);
+ while ((dynfield[j]->next_frag->ol_flags &
+ RTE_MBUF_F_RX_IPREASSEMBLY_INCOMPLETE) &&
+ dynfield[j]->nb_frags > 0) {
+
+ rte_pktmbuf_dump(stdout,
+ dynfield[j]->next_frag,
+ dynfield[j]->next_frag->data_len);
+ j++;
+ dynfield[j] = rte_eth_ip_reass_dynfield(
+ dynfield[j-1]->next_frag);
+ }
+ /**
+ * IP reassembly offload is incomplete; the
+ * fragments are chained via the dynfield and
+ * can be reassembled in SW.
+ */
+ printf("\nHW IP Reassembly failed,"
+ "\nAttempt SW IP Reassembly,"
+ "\nmbuf is chained with fragments.\n");
+ }
+ }
+ } while (nb_rx == 0);
+
+ /* Clear session data. */
+ rte_security_session_destroy(out_ips.security.ctx,
+ out_ips.security.ses);
+ rte_security_session_destroy(in_ips.security.ctx,
+ in_ips.security.ses);
+
+ /* Compare results with known vectors. */
+ if (nb_rx == 1) {
+ if (vector->full_pkt->len == pkts_burst[0]->pkt_len)
+ return compare_pkt_data(pkts_burst[0],
+ vector->full_pkt->data,
+ vector->full_pkt->len);
+ else {
+ rte_pktmbuf_dump(stdout, pkts_burst[0],
+ pkts_burst[0]->pkt_len);
+ }
+ }
+
+ return TEST_FAILED;
+}
+
static int
test_ipsec(struct reassembly_vector *vector,
enum rte_security_ipsec_sa_direction dir,
@@ -703,6 +842,18 @@ test_ipsec_ipv4_decap_nofrag(void) {
RTE_SECURITY_IPSEC_TUNNEL_IPV4);
}
+static int
+test_reassembly_ipv4_nofrag(void) {
+ struct reassembly_vector ipv4_nofrag_case = {
+ .sa_data = &conf_aes_128_gcm,
+ .full_pkt = &pkt_ipv4_plain,
+ .frags[0] = &pkt_ipv4_plain,
+ };
+ return test_reassembly(&ipv4_nofrag_case,
+ RTE_SECURITY_IPSEC_TUNNEL_IPV4);
+}
+
+
static struct unit_test_suite inline_ipsec_testsuite = {
.suite_name = "Inline IPsec Ethernet Device Unit Test Suite",
.setup = testsuite_setup,
@@ -714,6 +865,9 @@ static struct unit_test_suite inline_ipsec_testsuite = {
TEST_CASE_ST(ut_setup_inline_ipsec,
ut_teardown_inline_ipsec,
test_ipsec_ipv4_decap_nofrag),
+ TEST_CASE_ST(ut_setup_inline_ipsec,
+ ut_teardown_inline_ipsec,
+ test_reassembly_ipv4_nofrag),
TEST_CASES_END() /**< NULL terminate unit test array */
}
--
2.25.1
* [PATCH 7/8] app/test: add IP reassembly cases with multiple fragments
2022-01-03 15:08 ` [PATCH 0/8] ethdev: introduce IP " Akhil Goyal
` (5 preceding siblings ...)
2022-01-03 15:08 ` [PATCH 6/8] app/test: add IP reassembly case with no frags Akhil Goyal
@ 2022-01-03 15:08 ` Akhil Goyal
2022-01-03 15:08 ` [PATCH 8/8] app/test: add IP reassembly negative cases Akhil Goyal
` (2 subsequent siblings)
9 siblings, 0 replies; 184+ messages in thread
From: Akhil Goyal @ 2022-01-03 15:08 UTC (permalink / raw)
To: dev
Cc: anoobj, radu.nicolau, declan.doherty, hemant.agrawal, matan,
konstantin.ananyev, thomas, ferruh.yigit, andrew.rybchenko,
olivier.matz, rosen.xu, Akhil Goyal
More cases are added to the test_inline_ipsec test suite to verify packets
having multiple IP(v4/v6) fragments. The fragments are encrypted and then
decrypted as per inline IPsec processing, after which an attempt is made
to reassemble them. The reassembled packet content is matched against the
known test vectors.
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
---
app/test/test_inline_ipsec.c | 101 +++
.../test_inline_ipsec_reassembly_vectors.h | 592 ++++++++++++++++++
2 files changed, 693 insertions(+)
diff --git a/app/test/test_inline_ipsec.c b/app/test/test_inline_ipsec.c
index f704725c0f..3f3731760d 100644
--- a/app/test/test_inline_ipsec.c
+++ b/app/test/test_inline_ipsec.c
@@ -853,6 +853,89 @@ test_reassembly_ipv4_nofrag(void) {
RTE_SECURITY_IPSEC_TUNNEL_IPV4);
}
+static int
+test_reassembly_ipv4_2frag(void) {
+ struct reassembly_vector ipv4_2frag_case = {
+ .sa_data = &conf_aes_128_gcm,
+ .full_pkt = &pkt_ipv4_udp_p1,
+ .frags[0] = &pkt_ipv4_udp_p1_f1,
+ .frags[1] = &pkt_ipv4_udp_p1_f2,
+
+ };
+ return test_reassembly(&ipv4_2frag_case,
+ RTE_SECURITY_IPSEC_TUNNEL_IPV4);
+}
+
+static int
+test_reassembly_ipv6_2frag(void) {
+ struct reassembly_vector ipv6_2frag_case = {
+ .sa_data = &conf_aes_128_gcm,
+ .full_pkt = &pkt_ipv6_udp_p1,
+ .frags[0] = &pkt_ipv6_udp_p1_f1,
+ .frags[1] = &pkt_ipv6_udp_p1_f2,
+ };
+ return test_reassembly(&ipv6_2frag_case,
+ RTE_SECURITY_IPSEC_TUNNEL_IPV6);
+}
+
+static int
+test_reassembly_ipv4_4frag(void) {
+ struct reassembly_vector ipv4_4frag_case = {
+ .sa_data = &conf_aes_128_gcm,
+ .full_pkt = &pkt_ipv4_udp_p2,
+ .frags[0] = &pkt_ipv4_udp_p2_f1,
+ .frags[1] = &pkt_ipv4_udp_p2_f2,
+ .frags[2] = &pkt_ipv4_udp_p2_f3,
+ .frags[3] = &pkt_ipv4_udp_p2_f4,
+ };
+ return test_reassembly(&ipv4_4frag_case,
+ RTE_SECURITY_IPSEC_TUNNEL_IPV4);
+}
+
+static int
+test_reassembly_ipv6_4frag(void) {
+ struct reassembly_vector ipv6_4frag_case = {
+ .sa_data = &conf_aes_128_gcm,
+ .full_pkt = &pkt_ipv6_udp_p2,
+ .frags[0] = &pkt_ipv6_udp_p2_f1,
+ .frags[1] = &pkt_ipv6_udp_p2_f2,
+ .frags[2] = &pkt_ipv6_udp_p2_f3,
+ .frags[3] = &pkt_ipv6_udp_p2_f4,
+ };
+ return test_reassembly(&ipv6_4frag_case,
+ RTE_SECURITY_IPSEC_TUNNEL_IPV6);
+}
+
+static int
+test_reassembly_ipv4_5frag(void) {
+ struct reassembly_vector ipv4_5frag_case = {
+ .sa_data = &conf_aes_128_gcm,
+ .full_pkt = &pkt_ipv4_udp_p3,
+ .frags[0] = &pkt_ipv4_udp_p3_f1,
+ .frags[1] = &pkt_ipv4_udp_p3_f2,
+ .frags[2] = &pkt_ipv4_udp_p3_f3,
+ .frags[3] = &pkt_ipv4_udp_p3_f4,
+ .frags[4] = &pkt_ipv4_udp_p3_f5,
+ };
+ return test_reassembly(&ipv4_5frag_case,
+ RTE_SECURITY_IPSEC_TUNNEL_IPV4);
+}
+
+static int
+test_reassembly_ipv6_5frag(void) {
+ struct reassembly_vector ipv6_5frag_case = {
+ .sa_data = &conf_aes_128_gcm,
+ .full_pkt = &pkt_ipv6_udp_p3,
+ .frags[0] = &pkt_ipv6_udp_p3_f1,
+ .frags[1] = &pkt_ipv6_udp_p3_f2,
+ .frags[2] = &pkt_ipv6_udp_p3_f3,
+ .frags[3] = &pkt_ipv6_udp_p3_f4,
+ .frags[4] = &pkt_ipv6_udp_p3_f5,
+ };
+ return test_reassembly(&ipv6_5frag_case,
+ RTE_SECURITY_IPSEC_TUNNEL_IPV6);
+}
+
static struct unit_test_suite inline_ipsec_testsuite = {
.suite_name = "Inline IPsec Ethernet Device Unit Test Suite",
@@ -868,6 +951,24 @@ static struct unit_test_suite inline_ipsec_testsuite = {
TEST_CASE_ST(ut_setup_inline_ipsec,
ut_teardown_inline_ipsec,
test_reassembly_ipv4_nofrag),
+ TEST_CASE_ST(ut_setup_inline_ipsec,
+ ut_teardown_inline_ipsec,
+ test_reassembly_ipv4_2frag),
+ TEST_CASE_ST(ut_setup_inline_ipsec,
+ ut_teardown_inline_ipsec,
+ test_reassembly_ipv6_2frag),
+ TEST_CASE_ST(ut_setup_inline_ipsec,
+ ut_teardown_inline_ipsec,
+ test_reassembly_ipv4_4frag),
+ TEST_CASE_ST(ut_setup_inline_ipsec,
+ ut_teardown_inline_ipsec,
+ test_reassembly_ipv6_4frag),
+ TEST_CASE_ST(ut_setup_inline_ipsec,
+ ut_teardown_inline_ipsec,
+ test_reassembly_ipv4_5frag),
+ TEST_CASE_ST(ut_setup_inline_ipsec,
+ ut_teardown_inline_ipsec,
+ test_reassembly_ipv6_5frag),
TEST_CASES_END() /**< NULL terminate unit test array */
}
diff --git a/app/test/test_inline_ipsec_reassembly_vectors.h b/app/test/test_inline_ipsec_reassembly_vectors.h
index 68066a0957..04cc3367c1 100644
--- a/app/test/test_inline_ipsec_reassembly_vectors.h
+++ b/app/test/test_inline_ipsec_reassembly_vectors.h
@@ -4,6 +4,47 @@
#ifndef _TEST_INLINE_IPSEC_REASSEMBLY_VECTORS_H_
#define _TEST_INLINE_IPSEC_REASSEMBLY_VECTORS_H_
+/* This file includes the test vectors listed below. */
+/* IPv6:
+ *
+ * 1) pkt_ipv6_udp_p1
+ * pkt_ipv6_udp_p1_f1
+ * pkt_ipv6_udp_p1_f2
+ *
+ * 2) pkt_ipv6_udp_p2
+ * pkt_ipv6_udp_p2_f1
+ * pkt_ipv6_udp_p2_f2
+ * pkt_ipv6_udp_p2_f3
+ * pkt_ipv6_udp_p2_f4
+ *
+ * 3) pkt_ipv6_udp_p3
+ * pkt_ipv6_udp_p3_f1
+ * pkt_ipv6_udp_p3_f2
+ * pkt_ipv6_udp_p3_f3
+ * pkt_ipv6_udp_p3_f4
+ * pkt_ipv6_udp_p3_f5
+ */
+
+/* IPv4:
+ *
+ * 1) pkt_ipv4_udp_p1
+ * pkt_ipv4_udp_p1_f1
+ * pkt_ipv4_udp_p1_f2
+ *
+ * 2) pkt_ipv4_udp_p2
+ * pkt_ipv4_udp_p2_f1
+ * pkt_ipv4_udp_p2_f2
+ * pkt_ipv4_udp_p2_f3
+ * pkt_ipv4_udp_p2_f4
+ *
+ * 3) pkt_ipv4_udp_p3
+ * pkt_ipv4_udp_p3_f1
+ * pkt_ipv4_udp_p3_f2
+ * pkt_ipv4_udp_p3_f3
+ * pkt_ipv4_udp_p3_f4
+ * pkt_ipv4_udp_p3_f5
+ */
+
#define MAX_FRAG_LEN 1500
#define MAX_FRAGS 6
#define MAX_PKT_LEN (MAX_FRAG_LEN * MAX_FRAGS)
@@ -43,6 +84,557 @@ struct reassembly_vector {
struct ipsec_test_packet *frags[MAX_FRAGS];
};
+struct ipsec_test_packet pkt_ipv6_udp_p1 = {
+ .len = 1514,
+ .l2_offset = 0,
+ .l3_offset = 14,
+ .l4_offset = 54,
+ .data = {
+ /* ETH */
+ 0xf1, 0xf1, 0xf1, 0xf1, 0xf1, 0xf1,
+ 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0x86, 0xdd,
+
+ /* IP */
+ 0x60, 0x00, 0x00, 0x00, 0x05, 0xb4, 0x2C, 0x40,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x0d, 0x00, 0x00, 0x02,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x02, 0x00, 0x00, 0x02,
+
+ /* UDP */
+ 0x08, 0x00, 0x27, 0x10, 0x05, 0xb4, 0x2b, 0xe8,
+ },
+};
+
+struct ipsec_test_packet pkt_ipv6_udp_p1_f1 = {
+ .len = 1398,
+ .l2_offset = 0,
+ .l3_offset = 14,
+ .l4_offset = 62,
+ .data = {
+ /* ETH */
+ 0xf1, 0xf1, 0xf1, 0xf1, 0xf1, 0xf1,
+ 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0x86, 0xdd,
+
+ /* IP */
+ 0x60, 0x00, 0x00, 0x00, 0x05, 0x40, 0x2c, 0x40,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x0d, 0x00, 0x00, 0x02,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x02, 0x00, 0x00, 0x02,
+ 0x11, 0x00, 0x00, 0x01, 0x5c, 0x92, 0xac, 0xf1,
+
+ /* UDP */
+ 0x08, 0x00, 0x27, 0x10, 0x05, 0xb4, 0x2b, 0xe8,
+ },
+};
+
+struct ipsec_test_packet pkt_ipv6_udp_p1_f2 = {
+ .len = 186,
+ .l2_offset = 0,
+ .l3_offset = 14,
+ .l4_offset = 62,
+ .data = {
+ /* ETH */
+ 0xf1, 0xf1, 0xf1, 0xf1, 0xf1, 0xf1,
+ 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0x86, 0xdd,
+
+ /* IP */
+ 0x60, 0x00, 0x00, 0x00, 0x00, 0x84, 0x2c, 0x40,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x0d, 0x00, 0x00, 0x02,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x02, 0x00, 0x00, 0x02,
+ 0x11, 0x00, 0x05, 0x38, 0x5c, 0x92, 0xac, 0xf1,
+ },
+};
+
+struct ipsec_test_packet pkt_ipv6_udp_p2 = {
+ .len = 4496,
+ .l2_offset = 0,
+ .l3_offset = 14,
+ .l4_offset = 54,
+ .data = {
+ /* ETH */
+ 0xf1, 0xf1, 0xf1, 0xf1, 0xf1, 0xf1,
+ 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0x86, 0xdd,
+
+ /* IP */
+ 0x60, 0x00, 0x00, 0x00, 0x11, 0x5a, 0x2c, 0x40,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x0d, 0x00, 0x00, 0x02,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x02, 0x00, 0x00, 0x02,
+
+ /* UDP */
+ 0x08, 0x00, 0x27, 0x10, 0x11, 0x5a, 0x8a, 0x11,
+ },
+};
+
+struct ipsec_test_packet pkt_ipv6_udp_p2_f1 = {
+ .len = 1398,
+ .l2_offset = 0,
+ .l3_offset = 14,
+ .l4_offset = 62,
+ .data = {
+ /* ETH */
+ 0xf1, 0xf1, 0xf1, 0xf1, 0xf1, 0xf1,
+ 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0x86, 0xdd,
+
+ /* IP */
+ 0x60, 0x00, 0x00, 0x00, 0x05, 0x40, 0x2c, 0x40,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x0d, 0x00, 0x00, 0x02,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x02, 0x00, 0x00, 0x02,
+ 0x11, 0x00, 0x00, 0x01, 0x64, 0x6c, 0x68, 0x9f,
+
+ /* UDP */
+ 0x08, 0x00, 0x27, 0x10, 0x11, 0x5a, 0x8a, 0x11,
+ },
+};
+
+struct ipsec_test_packet pkt_ipv6_udp_p2_f2 = {
+ .len = 1398,
+ .l2_offset = 0,
+ .l3_offset = 14,
+ .l4_offset = 62,
+ .data = {
+ /* ETH */
+ 0xf1, 0xf1, 0xf1, 0xf1, 0xf1, 0xf1,
+ 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0x86, 0xdd,
+
+ /* IP */
+ 0x60, 0x00, 0x00, 0x00, 0x05, 0x40, 0x2c, 0x40,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x0d, 0x00, 0x00, 0x02,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x02, 0x00, 0x00, 0x02,
+ 0x11, 0x00, 0x05, 0x39, 0x64, 0x6c, 0x68, 0x9f,
+ },
+};
+
+struct ipsec_test_packet pkt_ipv6_udp_p2_f3 = {
+ .len = 1398,
+ .l2_offset = 0,
+ .l3_offset = 14,
+ .l4_offset = 62,
+ .data = {
+ /* ETH */
+ 0xf1, 0xf1, 0xf1, 0xf1, 0xf1, 0xf1,
+ 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0x86, 0xdd,
+
+ /* IP */
+ 0x60, 0x00, 0x00, 0x00, 0x05, 0x40, 0x2c, 0x40,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x0d, 0x00, 0x00, 0x02,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x02, 0x00, 0x00, 0x02,
+ 0x11, 0x00, 0x0a, 0x71, 0x64, 0x6c, 0x68, 0x9f,
+ },
+};
+
+struct ipsec_test_packet pkt_ipv6_udp_p2_f4 = {
+ .len = 496,
+ .l2_offset = 0,
+ .l3_offset = 14,
+ .l4_offset = 62,
+ .data = {
+ /* ETH */
+ 0xf1, 0xf1, 0xf1, 0xf1, 0xf1, 0xf1,
+ 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0x86, 0xdd,
+
+ /* IP */
+ 0x60, 0x00, 0x00, 0x00, 0x01, 0xba, 0x2c, 0x40,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x0d, 0x00, 0x00, 0x02,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x02, 0x00, 0x00, 0x02,
+ 0x11, 0x00, 0x0f, 0xa8, 0x64, 0x6c, 0x68, 0x9f,
+ },
+};
+
+struct ipsec_test_packet pkt_ipv6_udp_p3 = {
+ .len = 5796,
+ .l2_offset = 0,
+ .l3_offset = 14,
+ .l4_offset = 62,
+ .data = {
+ /* ETH */
+ 0xf1, 0xf1, 0xf1, 0xf1, 0xf1, 0xf1,
+ 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0x86, 0xdd,
+
+ /* IP */
+ 0x60, 0x00, 0x00, 0x00, 0x16, 0x6e, 0x2c, 0x40,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x0d, 0x00, 0x00, 0x02,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x02, 0x00, 0x00, 0x02,
+
+ /* UDP */
+ 0x08, 0x00, 0x27, 0x10, 0x16, 0x6e, 0x2f, 0x99,
+ },
+};
+
+struct ipsec_test_packet pkt_ipv6_udp_p3_f1 = {
+ .len = 1398,
+ .l2_offset = 0,
+ .l3_offset = 14,
+ .l4_offset = 62,
+ .data = {
+ /* ETH */
+ 0xf1, 0xf1, 0xf1, 0xf1, 0xf1, 0xf1,
+ 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0x86, 0xdd,
+
+ /* IP */
+ 0x60, 0x00, 0x00, 0x00, 0x05, 0x40, 0x2c, 0x40,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x0d, 0x00, 0x00, 0x02,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x02, 0x00, 0x00, 0x02,
+ 0x11, 0x00, 0x00, 0x01, 0x65, 0xcf, 0x5a, 0xae,
+
+ /* UDP */
+ 0x80, 0x00, 0x27, 0x10, 0x16, 0x6e, 0x2f, 0x99,
+ },
+};
+
+struct ipsec_test_packet pkt_ipv6_udp_p3_f2 = {
+ .len = 1398,
+ .l2_offset = 0,
+ .l3_offset = 14,
+ .l4_offset = 62,
+ .data = {
+ /* ETH */
+ 0xf1, 0xf1, 0xf1, 0xf1, 0xf1, 0xf1,
+ 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0x86, 0xdd,
+
+ /* IP */
+ 0x60, 0x00, 0x00, 0x00, 0x05, 0x40, 0x2c, 0x40,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x0d, 0x00, 0x00, 0x02,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x02, 0x00, 0x00, 0x02,
+ 0x11, 0x00, 0x05, 0x39, 0x65, 0xcf, 0x5a, 0xae,
+ },
+};
+
+struct ipsec_test_packet pkt_ipv6_udp_p3_f3 = {
+ .len = 1398,
+ .l2_offset = 0,
+ .l3_offset = 14,
+ .l4_offset = 62,
+ .data = {
+ /* ETH */
+ 0xf1, 0xf1, 0xf1, 0xf1, 0xf1, 0xf1,
+ 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0x86, 0xdd,
+
+ /* IP */
+ 0x60, 0x00, 0x00, 0x00, 0x05, 0x40, 0x2c, 0x40,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x0d, 0x00, 0x00, 0x02,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x02, 0x00, 0x00, 0x02,
+ 0x11, 0x00, 0x0a, 0x71, 0x65, 0xcf, 0x5a, 0xae,
+ },
+};
+
+struct ipsec_test_packet pkt_ipv6_udp_p3_f4 = {
+ .len = 1398,
+ .l2_offset = 0,
+ .l3_offset = 14,
+ .l4_offset = 62,
+ .data = {
+ /* ETH */
+ 0xf1, 0xf1, 0xf1, 0xf1, 0xf1, 0xf1,
+ 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0x86, 0xdd,
+
+ /* IP */
+ 0x60, 0x00, 0x00, 0x00, 0x05, 0x40, 0x2c, 0x40,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x0d, 0x00, 0x00, 0x02,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x02, 0x00, 0x00, 0x02,
+ 0x11, 0x00, 0x0f, 0xa9, 0x65, 0xcf, 0x5a, 0xae,
+ },
+};
+
+struct ipsec_test_packet pkt_ipv6_udp_p3_f5 = {
+ .len = 460,
+ .l2_offset = 0,
+ .l3_offset = 14,
+ .l4_offset = 62,
+ .data = {
+ /* ETH */
+ 0xf1, 0xf1, 0xf1, 0xf1, 0xf1, 0xf1,
+ 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0x86, 0xdd,
+
+ /* IP */
+ 0x60, 0x00, 0x00, 0x00, 0x01, 0x96, 0x2c, 0x40,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x0d, 0x00, 0x00, 0x02,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x02, 0x00, 0x00, 0x02,
+ 0x11, 0x00, 0x14, 0xe0, 0x65, 0xcf, 0x5a, 0xae,
+ },
+};
+
+struct ipsec_test_packet pkt_ipv4_udp_p1 = {
+ .len = 1514,
+ .l2_offset = 0,
+ .l3_offset = 14,
+ .l4_offset = 34,
+ .data = {
+ /* ETH */
+ 0xf1, 0xf1, 0xf1, 0xf1, 0xf1, 0xf1,
+ 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0x08, 0x00,
+
+ /* IP */
+ 0x45, 0x00, 0x05, 0xdc, 0x00, 0x01, 0x00, 0x00,
+ 0x40, 0x11, 0x66, 0x0d, 0x0d, 0x00, 0x00, 0x02,
+ 0x02, 0x00, 0x00, 0x02,
+
+ /* UDP */
+ 0x08, 0x00, 0x27, 0x10, 0x05, 0xc8, 0xb8, 0x4c,
+ },
+};
+
+struct ipsec_test_packet pkt_ipv4_udp_p1_f1 = {
+ .len = 1434,
+ .l2_offset = 0,
+ .l3_offset = 14,
+ .l4_offset = 34,
+ .data = {
+ /* ETH */
+ 0xf1, 0xf1, 0xf1, 0xf1, 0xf1, 0xf1,
+ 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0x08, 0x00,
+
+ /* IP */
+ 0x45, 0x00, 0x05, 0x8c, 0x00, 0x01, 0x20, 0x00,
+ 0x40, 0x11, 0x46, 0x5d, 0x0d, 0x00, 0x00, 0x02,
+ 0x02, 0x00, 0x00, 0x02,
+
+ /* UDP */
+ 0x08, 0x00, 0x27, 0x10, 0x05, 0xc8, 0xb8, 0x4c,
+ },
+};
+
+struct ipsec_test_packet pkt_ipv4_udp_p1_f2 = {
+ .len = 114,
+ .l2_offset = 0,
+ .l3_offset = 14,
+ .l4_offset = 34,
+ .data = {
+ /* ETH */
+ 0xf1, 0xf1, 0xf1, 0xf1, 0xf1, 0xf1,
+ 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0x08, 0x00,
+
+ /* IP */
+ 0x45, 0x00, 0x00, 0x64, 0x00, 0x01, 0x00, 0xaf,
+ 0x40, 0x11, 0x6a, 0xd6, 0x0d, 0x00, 0x00, 0x02,
+ 0x02, 0x00, 0x00, 0x02,
+ },
+};
+
+struct ipsec_test_packet pkt_ipv4_udp_p2 = {
+ .len = 4496,
+ .l2_offset = 0,
+ .l3_offset = 14,
+ .l4_offset = 34,
+ .data = {
+ /* ETH */
+ 0xf1, 0xf1, 0xf1, 0xf1, 0xf1, 0xf1,
+ 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0x08, 0x00,
+
+ /* IP */
+ 0x45, 0x00, 0x11, 0x82, 0x00, 0x02, 0x00, 0x00,
+ 0x40, 0x11, 0x5a, 0x66, 0x0d, 0x00, 0x00, 0x02,
+ 0x02, 0x00, 0x00, 0x02,
+
+ /* UDP */
+ 0x08, 0x00, 0x27, 0x10, 0x11, 0x6e, 0x16, 0x76,
+ },
+};
+
+struct ipsec_test_packet pkt_ipv4_udp_p2_f1 = {
+ .len = 1434,
+ .l2_offset = 0,
+ .l3_offset = 14,
+ .l4_offset = 34,
+ .data = {
+ /* ETH */
+ 0xf1, 0xf1, 0xf1, 0xf1, 0xf1, 0xf1,
+ 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0x08, 0x00,
+
+ /* IP */
+ 0x45, 0x00, 0x05, 0x8c, 0x00, 0x02, 0x20, 0x00,
+ 0x40, 0x11, 0x46, 0x5c, 0x0d, 0x00, 0x00, 0x02,
+ 0x02, 0x00, 0x00, 0x02,
+
+ /* UDP */
+ 0x08, 0x00, 0x27, 0x10, 0x11, 0x6e, 0x16, 0x76,
+ },
+};
+
+struct ipsec_test_packet pkt_ipv4_udp_p2_f2 = {
+ .len = 1434,
+ .l2_offset = 0,
+ .l3_offset = 14,
+ .l4_offset = 34,
+ .data = {
+ /* ETH */
+ 0xf1, 0xf1, 0xf1, 0xf1, 0xf1, 0xf1,
+ 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0x08, 0x00,
+
+ /* IP */
+ 0x45, 0x00, 0x05, 0x8c, 0x00, 0x02, 0x20, 0xaf,
+ 0x40, 0x11, 0x45, 0xad, 0x0d, 0x00, 0x00, 0x02,
+ 0x02, 0x00, 0x00, 0x02,
+ },
+};
+
+struct ipsec_test_packet pkt_ipv4_udp_p2_f3 = {
+ .len = 1434,
+ .l2_offset = 0,
+ .l3_offset = 14,
+ .l4_offset = 34,
+ .data = {
+ /* ETH */
+ 0xf1, 0xf1, 0xf1, 0xf1, 0xf1, 0xf1,
+ 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0x08, 0x00,
+
+ /* IP */
+ 0x45, 0x00, 0x05, 0x8c, 0x00, 0x02, 0x21, 0x5e,
+ 0x40, 0x11, 0x44, 0xfe, 0x0d, 0x00, 0x00, 0x02,
+ 0x02, 0x00, 0x00, 0x02,
+ },
+};
+
+struct ipsec_test_packet pkt_ipv4_udp_p2_f4 = {
+ .len = 296,
+ .l2_offset = 0,
+ .l3_offset = 14,
+ .l4_offset = 34,
+ .data = {
+ /* ETH */
+ 0xf1, 0xf1, 0xf1, 0xf1, 0xf1, 0xf1,
+ 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0x08, 0x00,
+
+ /* IP */
+ 0x45, 0x00, 0x01, 0x1a, 0x00, 0x02, 0x02, 0x0d,
+ 0x40, 0x11, 0x68, 0xc1, 0x0d, 0x00, 0x00, 0x02,
+ 0x02, 0x00, 0x00, 0x02,
+ },
+};
+
+struct ipsec_test_packet pkt_ipv4_udp_p3 = {
+ .len = 5796,
+ .l2_offset = 0,
+ .l3_offset = 14,
+ .l4_offset = 34,
+ .data = {
+ /* ETH */
+ 0xf1, 0xf1, 0xf1, 0xf1, 0xf1, 0xf1,
+ 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0x08, 0x00,
+
+ /* IP */
+ 0x45, 0x00, 0x16, 0x96, 0x00, 0x03, 0x00, 0x00,
+ 0x40, 0x11, 0x55, 0x51, 0x0d, 0x00, 0x00, 0x02,
+ 0x02, 0x00, 0x00, 0x02,
+
+ /* UDP */
+ 0x08, 0x00, 0x27, 0x10, 0x16, 0x82, 0xbb, 0xfd,
+ },
+};
+
+struct ipsec_test_packet pkt_ipv4_udp_p3_f1 = {
+ .len = 1434,
+ .l2_offset = 0,
+ .l3_offset = 14,
+ .l4_offset = 34,
+ .data = {
+ /* ETH */
+ 0xf1, 0xf1, 0xf1, 0xf1, 0xf1, 0xf1,
+ 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0x08, 0x00,
+
+ /* IP */
+ 0x45, 0x00, 0x05, 0x8c, 0x00, 0x03, 0x20, 0x00,
+ 0x40, 0x11, 0x46, 0x5b, 0x0d, 0x00, 0x00, 0x02,
+ 0x02, 0x00, 0x00, 0x02,
+
+ /* UDP */
+ 0x80, 0x00, 0x27, 0x10, 0x16, 0x82, 0xbb, 0xfd,
+ },
+};
+
+struct ipsec_test_packet pkt_ipv4_udp_p3_f2 = {
+ .len = 1434,
+ .l2_offset = 0,
+ .l3_offset = 14,
+ .l4_offset = 34,
+ .data = {
+ /* ETH */
+ 0xf1, 0xf1, 0xf1, 0xf1, 0xf1, 0xf1,
+ 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0x08, 0x00,
+
+ /* IP */
+ 0x45, 0x00, 0x05, 0x8c, 0x00, 0x03, 0x20, 0xaf,
+ 0x40, 0x11, 0x45, 0xac, 0x0d, 0x00, 0x00, 0x02,
+ 0x02, 0x00, 0x00, 0x02,
+ },
+};
+
+struct ipsec_test_packet pkt_ipv4_udp_p3_f3 = {
+ .len = 1434,
+ .l2_offset = 0,
+ .l3_offset = 14,
+ .l4_offset = 34,
+ .data = {
+ /* ETH */
+ 0xf1, 0xf1, 0xf1, 0xf1, 0xf1, 0xf1,
+ 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0x08, 0x00,
+
+ /* IP */
+ 0x45, 0x00, 0x05, 0x8c, 0x00, 0x03, 0x21, 0x5e,
+ 0x40, 0x11, 0x44, 0xfd, 0x0d, 0x00, 0x00, 0x02,
+ 0x02, 0x00, 0x00, 0x02,
+ },
+};
+
+struct ipsec_test_packet pkt_ipv4_udp_p3_f4 = {
+ .len = 1434,
+ .l2_offset = 0,
+ .l3_offset = 14,
+ .l4_offset = 34,
+ .data = {
+ /* ETH */
+ 0xf1, 0xf1, 0xf1, 0xf1, 0xf1, 0xf1,
+ 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0x08, 0x00,
+
+ /* IP */
+ 0x45, 0x00, 0x05, 0x8c, 0x00, 0x03, 0x22, 0x0d,
+ 0x40, 0x11, 0x44, 0x4e, 0x0d, 0x00, 0x00, 0x02,
+ 0x02, 0x00, 0x00, 0x02,
+ },
+};
+
+struct ipsec_test_packet pkt_ipv4_udp_p3_f5 = {
+ .len = 196,
+ .l2_offset = 0,
+ .l3_offset = 14,
+ .l4_offset = 34,
+ .data = {
+ /* ETH */
+ 0xf1, 0xf1, 0xf1, 0xf1, 0xf1, 0xf1,
+ 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0x08, 0x00,
+
+ /* IP */
+ 0x45, 0x00, 0x00, 0xb6, 0x00, 0x03, 0x02, 0xbc,
+ 0x40, 0x11, 0x68, 0x75, 0x0d, 0x00, 0x00, 0x02,
+ 0x02, 0x00, 0x00, 0x02,
+ },
+};
+
struct ipsec_test_packet pkt_ipv4_plain = {
.len = 76,
.l2_offset = 0,
--
2.25.1
* [PATCH 8/8] app/test: add IP reassembly negative cases
2022-01-03 15:08 ` [PATCH 0/8] ethdev: introduce IP " Akhil Goyal
` (6 preceding siblings ...)
2022-01-03 15:08 ` [PATCH 7/8] app/test: add IP reassembly cases with multiple fragments Akhil Goyal
@ 2022-01-03 15:08 ` Akhil Goyal
2022-01-06 9:51 ` [PATCH 0/8] ethdev: introduce IP reassembly offload David Marchand
2022-01-20 16:26 ` [PATCH v2 0/4] " Akhil Goyal
9 siblings, 0 replies; 184+ messages in thread
From: Akhil Goyal @ 2022-01-03 15:08 UTC (permalink / raw)
To: dev
Cc: anoobj, radu.nicolau, declan.doherty, hemant.agrawal, matan,
konstantin.ananyev, thomas, ferruh.yigit, andrew.rybchenko,
olivier.matz, rosen.xu, Akhil Goyal
test_inline_ipsec test suite is extended with cases where the IP reassembly
is incomplete and software will need to reassemble the fragments later.
The failure cases added are:
- not all fragments are received.
- the same fragment is received more than once.
- fragments are received out of order.
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
---
app/test/test_inline_ipsec.c | 53 ++++++++++++++++++++++++++++++++++++
1 file changed, 53 insertions(+)
diff --git a/app/test/test_inline_ipsec.c b/app/test/test_inline_ipsec.c
index 3f3731760d..0d74e23359 100644
--- a/app/test/test_inline_ipsec.c
+++ b/app/test/test_inline_ipsec.c
@@ -936,6 +936,50 @@ test_reassembly_ipv6_5frag(void) {
RTE_SECURITY_IPSEC_TUNNEL_IPV6);
}
+static int
+test_reassembly_incomplete(void) {
+ /* Negative test case, not sending all fragments. */
+ struct reassembly_vector ipv4_incomplete_case = {
+ .sa_data = &conf_aes_128_gcm,
+ .full_pkt = &pkt_ipv4_udp_p2,
+ .frags[0] = &pkt_ipv4_udp_p2_f1,
+ .frags[1] = &pkt_ipv4_udp_p2_f2,
+ .frags[2] = NULL,
+ .frags[3] = NULL,
+ };
+ return test_reassembly(&ipv4_incomplete_case,
+ RTE_SECURITY_IPSEC_TUNNEL_IPV4);
+}
+
+static int
+test_reassembly_overlap(void) {
+ /* Negative test case, sending 1 fragment twice. */
+ struct reassembly_vector ipv4_overlap_case = {
+ .sa_data = &conf_aes_128_gcm,
+ .full_pkt = &pkt_ipv4_udp_p2,
+ .frags[0] = &pkt_ipv4_udp_p2_f1,
+ .frags[1] = &pkt_ipv4_udp_p2_f2,
+ .frags[2] = &pkt_ipv4_udp_p2_f2, /* overlap */
+ .frags[3] = &pkt_ipv4_udp_p2_f3,
+ };
+ return test_reassembly(&ipv4_overlap_case,
+ RTE_SECURITY_IPSEC_TUNNEL_IPV4);
+}
+
+static int
+test_reassembly_out_of_order(void) {
+ /* Negative test case, sending fragments out of order. */
+ struct reassembly_vector ipv4_ooo_case = {
+ .sa_data = &conf_aes_128_gcm,
+ .full_pkt = &pkt_ipv4_udp_p2,
+ .frags[0] = &pkt_ipv4_udp_p2_f4,
+ .frags[1] = &pkt_ipv4_udp_p2_f3,
+ .frags[2] = &pkt_ipv4_udp_p2_f1,
+ .frags[3] = &pkt_ipv4_udp_p2_f2,
+ };
+ return test_reassembly(&ipv4_ooo_case,
+ RTE_SECURITY_IPSEC_TUNNEL_IPV4);
+}
static struct unit_test_suite inline_ipsec_testsuite = {
.suite_name = "Inline IPsec Ethernet Device Unit Test Suite",
@@ -969,6 +1013,15 @@ static struct unit_test_suite inline_ipsec_testsuite = {
TEST_CASE_ST(ut_setup_inline_ipsec,
ut_teardown_inline_ipsec,
test_reassembly_ipv6_5frag),
+ TEST_CASE_ST(ut_setup_inline_ipsec,
+ ut_teardown_inline_ipsec,
+ test_reassembly_incomplete),
+ TEST_CASE_ST(ut_setup_inline_ipsec,
+ ut_teardown_inline_ipsec,
+ test_reassembly_overlap),
+ TEST_CASE_ST(ut_setup_inline_ipsec,
+ ut_teardown_inline_ipsec,
+ test_reassembly_out_of_order),
TEST_CASES_END() /**< NULL terminate unit test array */
}
--
2.25.1
* Re: [PATCH 0/8] ethdev: introduce IP reassembly offload
2022-01-03 15:08 ` [PATCH 0/8] ethdev: introduce IP " Akhil Goyal
` (7 preceding siblings ...)
2022-01-03 15:08 ` [PATCH 8/8] app/test: add IP reassembly negative cases Akhil Goyal
@ 2022-01-06 9:51 ` David Marchand
2022-01-06 9:54 ` [EXT] " Akhil Goyal
2022-01-20 16:26 ` [PATCH v2 0/4] " Akhil Goyal
9 siblings, 1 reply; 184+ messages in thread
From: David Marchand @ 2022-01-06 9:51 UTC (permalink / raw)
To: Akhil Goyal
Cc: dev, Anoob Joseph, Radu Nicolau, Declan Doherty, Hemant Agrawal,
Matan Azrad, Ananyev, Konstantin, Thomas Monjalon, Yigit, Ferruh,
Andrew Rybchenko, Olivier Matz, Rosen Xu
On Mon, Jan 3, 2022 at 4:08 PM Akhil Goyal <gakhil@marvell.com> wrote:
>
> As discussed in the RFC[1] sent in 21.11, a new offload is
> introduced in ethdev for IP reassembly.
>
> This patchset add the RX offload and an application to test it.
> Currently, the offload is tested along with inline IPsec processing.
> It can also be updated as a standalone offload without IPsec, if there
> are some hardware available to test it.
> The patchset is tested on cnxk platform. The driver implementation is
> added as a separate patchset.
>
> [1]: http://patches.dpdk.org/project/dpdk/patch/20210823100259.1619886-1-gakhil@marvell.com/
>
>
> Akhil Goyal (8):
> ethdev: introduce IP reassembly offload
> ethdev: add dev op for IP reassembly configuration
> ethdev: add mbuf dynfield for incomplete IP reassembly
> security: add IPsec option for IP reassembly
> app/test: add unit cases for inline IPsec offload
> app/test: add IP reassembly case with no frags
> app/test: add IP reassembly cases with multiple fragments
> app/test: add IP reassembly negative cases
>
> app/test/meson.build | 1 +
> app/test/test_inline_ipsec.c | 1036 +++++++++++++++++
> .../test_inline_ipsec_reassembly_vectors.h | 790 +++++++++++++
I see no update in MAINTAINERS for those new files.
So I think they end up in the "main" repo scope.
You can either update MAINTAINERS (changing the app/test/test_ipsec*
pattern as app/test/test_*ipsec*) or rename files as
app/test/test_ipsec_inline.c, for example.
--
David Marchand
* RE: [EXT] Re: [PATCH 0/8] ethdev: introduce IP reassembly offload
2022-01-06 9:51 ` [PATCH 0/8] ethdev: introduce IP reassembly offload David Marchand
@ 2022-01-06 9:54 ` Akhil Goyal
0 siblings, 0 replies; 184+ messages in thread
From: Akhil Goyal @ 2022-01-06 9:54 UTC (permalink / raw)
To: David Marchand
Cc: dev, Anoob Joseph, Radu Nicolau, Declan Doherty, Hemant Agrawal,
Matan Azrad, Ananyev, Konstantin, Thomas Monjalon, Yigit, Ferruh,
Andrew Rybchenko, Olivier Matz, Rosen Xu
> > Akhil Goyal (8):
> > ethdev: introduce IP reassembly offload
> > ethdev: add dev op for IP reassembly configuration
> > ethdev: add mbuf dynfield for incomplete IP reassembly
> > security: add IPsec option for IP reassembly
> > app/test: add unit cases for inline IPsec offload
> > app/test: add IP reassembly case with no frags
> > app/test: add IP reassembly cases with multiple fragments
> > app/test: add IP reassembly negative cases
> >
> > app/test/meson.build | 1 +
> > app/test/test_inline_ipsec.c | 1036 +++++++++++++++++
> > .../test_inline_ipsec_reassembly_vectors.h | 790 +++++++++++++
>
> I see no update in MAINTAINERS for those new files.
> So I think they end up in the "main" repo scope.
>
> You can either update MAINTAINERS (changing the app/test/test_ipsec*
> pattern to app/test/test_*ipsec*) or rename the files as
> app/test/test_ipsec_inline.c, for example.
>
Thanks for the update David,
There are a few other issues in the patchset; I will post a new version in the next few days
with MAINTAINERS updated.
^ permalink raw reply [flat|nested] 184+ messages in thread
* RE: [PATCH 1/8] ethdev: introduce IP reassembly offload
2022-01-03 15:08 ` [PATCH 1/8] " Akhil Goyal
@ 2022-01-11 16:03 ` Ananyev, Konstantin
2022-01-22 7:38 ` Andrew Rybchenko
1 sibling, 0 replies; 184+ messages in thread
From: Ananyev, Konstantin @ 2022-01-11 16:03 UTC (permalink / raw)
To: Akhil Goyal, dev
Cc: anoobj, Nicolau, Radu, Doherty, Declan, hemant.agrawal, matan,
thomas, Yigit, Ferruh, andrew.rybchenko, olivier.matz, Xu, Rosen
> IP Reassembly is a costly operation if it is done in software.
> The operation becomes even costlier if the IP fragments are encrypted.
> However, if it is offloaded to HW, it can considerably save application cycles.
>
> Hence, a new offload RTE_ETH_RX_OFFLOAD_IP_REASSEMBLY is introduced in
> ethdev for devices which can attempt reassembly of packets in hardware.
> rte_eth_dev_info is updated with the reassembly capabilities which a device
> can support.
>
> The resulting reassembled packet would be a typical segmented mbuf in
> case of success.
>
> And if reassembly of the fragments fails or is incomplete (if fragments do
> not arrive before the reass_timeout), the mbuf ol_flags can be updated.
> This is done in a subsequent patch.
>
> Signed-off-by: Akhil Goyal <gakhil@marvell.com>
> ---
> doc/guides/nics/features.rst | 12 ++++++++++++
> lib/ethdev/rte_ethdev.c | 1 +
> lib/ethdev/rte_ethdev.h | 32 +++++++++++++++++++++++++++++++-
> 3 files changed, 44 insertions(+), 1 deletion(-)
>
> diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
> index 27be2d2576..1dfdee9602 100644
> --- a/doc/guides/nics/features.rst
> +++ b/doc/guides/nics/features.rst
> @@ -602,6 +602,18 @@ Supports inner packet L4 checksum.
> ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM``.
>
>
> +.. _nic_features_ip_reassembly:
> +
> +IP reassembly
> +-------------
> +
> +Supports IP reassembly in hardware.
> +
> +* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_IP_REASSEMBLY``.
> +* **[provides] mbuf**: ``mbuf.ol_flags:RTE_MBUF_F_RX_IP_REASSEMBLY_INCOMPLETE``.
> +* **[provides] rte_eth_dev_info**: ``reass_capa``.
> +
> +
> .. _nic_features_shared_rx_queue:
>
> Shared Rx queue
> diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
> index a1d475a292..d9a03f12f9 100644
> --- a/lib/ethdev/rte_ethdev.c
> +++ b/lib/ethdev/rte_ethdev.c
> @@ -126,6 +126,7 @@ static const struct {
> RTE_RX_OFFLOAD_BIT2STR(OUTER_UDP_CKSUM),
> RTE_RX_OFFLOAD_BIT2STR(RSS_HASH),
> RTE_RX_OFFLOAD_BIT2STR(BUFFER_SPLIT),
> + RTE_RX_OFFLOAD_BIT2STR(IP_REASSEMBLY),
> };
>
> #undef RTE_RX_OFFLOAD_BIT2STR
> diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
> index fa299c8ad7..11427b2e4d 100644
> --- a/lib/ethdev/rte_ethdev.h
> +++ b/lib/ethdev/rte_ethdev.h
> @@ -1586,6 +1586,7 @@ struct rte_eth_conf {
> #define RTE_ETH_RX_OFFLOAD_RSS_HASH RTE_BIT64(19)
> #define DEV_RX_OFFLOAD_RSS_HASH RTE_ETH_RX_OFFLOAD_RSS_HASH
> #define RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT RTE_BIT64(20)
> +#define RTE_ETH_RX_OFFLOAD_IP_REASSEMBLY RTE_BIT64(21)
>
> #define RTE_ETH_RX_OFFLOAD_CHECKSUM (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | \
> RTE_ETH_RX_OFFLOAD_UDP_CKSUM | \
> @@ -1781,6 +1782,33 @@ enum rte_eth_representor_type {
> RTE_ETH_REPRESENTOR_PF, /**< representor of Physical Function. */
> };
>
> +/* Flag to offload IP reassembly for IPv4 packets. */
> +#define RTE_ETH_DEV_REASSEMBLY_F_IPV4 (RTE_BIT32(0))
> +/* Flag to offload IP reassembly for IPv6 packets. */
> +#define RTE_ETH_DEV_REASSEMBLY_F_IPV6 (RTE_BIT32(1))
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this structure may change without prior notice.
> + *
> + * A structure used to set IP reassembly configuration.
> + *
> + * If RTE_ETH_RX_OFFLOAD_IP_REASSEMBLY flag is set in offloads field,
> + * the PMD will attempt IP reassembly for the received packets as per
> + * properties defined in this structure:
> + *
> + */
> +struct rte_eth_ip_reass_params {
> + /** Maximum time in ms which PMD can wait for other fragments. */
> + uint32_t reass_timeout;
> + /** Maximum number of fragments that can be reassembled. */
> + uint16_t max_frags;
> + /**
> + * Flags to enable reassembly of packet types -
> + * RTE_ETH_DEV_REASSEMBLY_F_xxx.
> + */
> + uint16_t flags;
> +};
> +
> /**
> * A structure used to retrieve the contextual information of
> * an Ethernet device, such as the controlling driver of the
> @@ -1841,8 +1869,10 @@ struct rte_eth_dev_info {
> * embedded managed interconnect/switch.
> */
> struct rte_eth_switch_info switch_info;
> + /** IP reassembly offload capabilities that a device can support. */
> + struct rte_eth_ip_reass_params reass_capa;
>
> - uint64_t reserved_64s[2]; /**< Reserved for future fields */
> + uint64_t reserved_64s[1]; /**< Reserved for future fields */
> void *reserved_ptrs[2]; /**< Reserved for future fields */
> };
>
> --
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> 2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* RE: [PATCH 2/8] ethdev: add dev op for IP reassembly configuration
2022-01-03 15:08 ` [PATCH 2/8] ethdev: add dev op for IP reassembly configuration Akhil Goyal
@ 2022-01-11 16:09 ` Ananyev, Konstantin
2022-01-11 18:54 ` Akhil Goyal
0 siblings, 1 reply; 184+ messages in thread
From: Ananyev, Konstantin @ 2022-01-11 16:09 UTC (permalink / raw)
To: Akhil Goyal, dev
Cc: anoobj, Nicolau, Radu, Doherty, Declan, hemant.agrawal, matan,
thomas, Yigit, Ferruh, andrew.rybchenko, olivier.matz, Xu, Rosen
> A new Ethernet device op is added to give the application control over
> the IP reassembly configuration. This operation is an optional
> call from the application; default values are set by the PMD and
> exposed via rte_eth_dev_info.
> The application should always first retrieve the capabilities from
> rte_eth_dev_info and then set the fields accordingly.
>
> Signed-off-by: Akhil Goyal <gakhil@marvell.com>
> ---
> lib/ethdev/ethdev_driver.h | 19 +++++++++++++++++++
> lib/ethdev/rte_ethdev.c | 30 ++++++++++++++++++++++++++++++
> lib/ethdev/rte_ethdev.h | 28 ++++++++++++++++++++++++++++
> lib/ethdev/version.map | 3 +++
> 4 files changed, 80 insertions(+)
>
> diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
> index d95605a355..0ed53c14f3 100644
> --- a/lib/ethdev/ethdev_driver.h
> +++ b/lib/ethdev/ethdev_driver.h
> @@ -990,6 +990,22 @@ typedef int (*eth_representor_info_get_t)(struct rte_eth_dev *dev,
> typedef int (*eth_rx_metadata_negotiate_t)(struct rte_eth_dev *dev,
> uint64_t *features);
>
> +/**
> + * @internal
> + * Set configuration parameters for enabling IP reassembly offload in hardware.
> + *
> + * @param dev
> + * Port (ethdev) handle
> + *
> + * @param[in] conf
> + * Configuration parameters for IP reassembly.
> + *
> + * @return
> + * Negative errno value on error, zero otherwise
> + */
> +typedef int (*eth_ip_reassembly_conf_set_t)(struct rte_eth_dev *dev,
> + struct rte_eth_ip_reass_params *conf);
> +
> /**
> * @internal A structure containing the functions exported by an Ethernet driver.
> */
> @@ -1186,6 +1202,9 @@ struct eth_dev_ops {
> * kinds of metadata to the PMD
> */
> eth_rx_metadata_negotiate_t rx_metadata_negotiate;
> +
> + /** Set IP reassembly configuration */
> + eth_ip_reassembly_conf_set_t ip_reassembly_conf_set;
> };
>
> /**
> diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
> index d9a03f12f9..ecc6c1fe37 100644
> --- a/lib/ethdev/rte_ethdev.c
> +++ b/lib/ethdev/rte_ethdev.c
> @@ -6473,6 +6473,36 @@ rte_eth_rx_metadata_negotiate(uint16_t port_id, uint64_t *features)
> (*dev->dev_ops->rx_metadata_negotiate)(dev, features));
> }
>
> +int
> +rte_eth_ip_reassembly_conf_set(uint16_t port_id,
> + struct rte_eth_ip_reass_params *conf)
> +{
> + struct rte_eth_dev *dev;
> +
> + RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
> + dev = &rte_eth_devices[port_id];
Should we check here that device is properly configured, but not started yet?
Another question - if we have reassembly_conf_set() would it make sense to
have also reassembly_conf_get?
So user can retrieve current ip_reassembly config values?
> +
> + if ((dev->data->dev_conf.rxmode.offloads &
> + RTE_ETH_RX_OFFLOAD_IP_REASSEMBLY) == 0) {
> + RTE_ETHDEV_LOG(ERR,
> + "The port (ID=%"PRIu16") is not configured for IP reassembly\n",
> + port_id);
> + return -EINVAL;
> + }
> +
> +
> + if (conf == NULL) {
> + RTE_ETHDEV_LOG(ERR,
> + "Invalid IP reassembly configuration (NULL)\n");
> + return -EINVAL;
> + }
> +
> + RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->ip_reassembly_conf_set,
> + -ENOTSUP);
> + return eth_err(port_id,
> + (*dev->dev_ops->ip_reassembly_conf_set)(dev, conf));
> +}
> +
> RTE_LOG_REGISTER_DEFAULT(rte_eth_dev_logtype, INFO);
>
> RTE_INIT(ethdev_init_telemetry)
^ permalink raw reply [flat|nested] 184+ messages in thread
* RE: [PATCH 3/8] ethdev: add mbuf dynfield for incomplete IP reassembly
2022-01-03 15:08 ` [PATCH 3/8] ethdev: add mbuf dynfield for incomplete IP reassembly Akhil Goyal
@ 2022-01-11 17:04 ` Ananyev, Konstantin
2022-01-11 18:44 ` Akhil Goyal
0 siblings, 1 reply; 184+ messages in thread
From: Ananyev, Konstantin @ 2022-01-11 17:04 UTC (permalink / raw)
To: Akhil Goyal, dev
Cc: anoobj, Nicolau, Radu, Doherty, Declan, hemant.agrawal, matan,
thomas, Yigit, Ferruh, andrew.rybchenko, olivier.matz, Xu, Rosen
> Hardware IP reassembly may be incomplete for multiple reasons like
> reassembly timeout reached, duplicate fragments, etc.
> To save application cycles to process these packets again, a new
> mbuf ol_flag (RTE_MBUF_F_RX_IPREASSEMBLY_INCOMPLETE) is added to
> show that the mbuf received is not reassembled properly.
If we use a dynfield for the data, why not use a dynflag for RTE_MBUF_F_RX_IPREASSEMBLY_INCOMPLETE?
That way we can avoid introducing hardcoded (always defined) flags for that case.
>
> Now if this flag is set, the application can retrieve the corresponding chain of
> mbufs using the mbuf dynfield set by the PMD. It will then be up to the
> application to either drop those fragments or wait for more time.
>
> Signed-off-by: Akhil Goyal <gakhil@marvell.com>
> ---
> lib/ethdev/ethdev_driver.h | 8 ++++++
> lib/ethdev/rte_ethdev.c | 16 +++++++++++
> lib/ethdev/rte_ethdev.h | 57 ++++++++++++++++++++++++++++++++++++++
> lib/ethdev/version.map | 2 ++
> lib/mbuf/rte_mbuf_core.h | 3 +-
> 5 files changed, 85 insertions(+), 1 deletion(-)
>
> diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
> index 0ed53c14f3..9a0bab9a61 100644
> --- a/lib/ethdev/ethdev_driver.h
> +++ b/lib/ethdev/ethdev_driver.h
> @@ -1671,6 +1671,14 @@ int
> rte_eth_hairpin_queue_peer_unbind(uint16_t cur_port, uint16_t cur_queue,
> uint32_t direction);
>
> +/**
> + * @internal
> + * Register mbuf dynamic field for IP reassembly incomplete case.
> + */
> +__rte_internal
> +int
> +rte_eth_ip_reass_dynfield_register(void);
> +
>
> /*
> * Legacy ethdev API used internally by drivers.
> diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
> index ecc6c1fe37..d53ce4eaca 100644
> --- a/lib/ethdev/rte_ethdev.c
> +++ b/lib/ethdev/rte_ethdev.c
> @@ -6503,6 +6503,22 @@ rte_eth_ip_reassembly_conf_set(uint16_t port_id,
> (*dev->dev_ops->ip_reassembly_conf_set)(dev, conf));
> }
>
> +#define RTE_ETH_IP_REASS_DYNFIELD_NAME "rte_eth_ip_reass_dynfield"
> +int rte_eth_ip_reass_dynfield_offset = -1;
> +
> +int
> +rte_eth_ip_reass_dynfield_register(void)
> +{
> + static const struct rte_mbuf_dynfield dynfield_desc = {
> + .name = RTE_ETH_IP_REASS_DYNFIELD_NAME,
> + .size = sizeof(rte_eth_ip_reass_dynfield_t),
> + .align = __alignof__(rte_eth_ip_reass_dynfield_t),
> + };
> + rte_eth_ip_reass_dynfield_offset =
> + rte_mbuf_dynfield_register(&dynfield_desc);
> + return rte_eth_ip_reass_dynfield_offset;
> +}
> +
> RTE_LOG_REGISTER_DEFAULT(rte_eth_dev_logtype, INFO);
>
> RTE_INIT(ethdev_init_telemetry)
> diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
> index 891f9a6e06..c4024d2265 100644
> --- a/lib/ethdev/rte_ethdev.h
> +++ b/lib/ethdev/rte_ethdev.h
> @@ -5245,6 +5245,63 @@ __rte_experimental
> int rte_eth_ip_reassembly_conf_set(uint16_t port_id,
> struct rte_eth_ip_reass_params *conf);
>
> +/**
> + * In case of IP reassembly offload failure, ol_flags in mbuf will be set
> + * with RTE_MBUF_F_RX_IPREASSEMBLY_INCOMPLETE and packets will be returned
> + * without alteration. The application can retrieve the attached fragments
> + * using mbuf dynamic field.
> + */
> +typedef struct {
> + /**
> + * Next fragment packet. Application should fetch dynamic field of
> + * each fragment until a NULL is received and nb_frags is 0.
> + */
> + struct rte_mbuf *next_frag;
> + /** Time spent (in ms) by HW waiting for further fragments. */
> + uint16_t time_spent;
> + /** Number of more fragments attached in mbuf dynamic fields. */
> + uint16_t nb_frags;
> +} rte_eth_ip_reass_dynfield_t;
Looks like a bit of overkill to me:
We do already have 'next' and 'nb_frags' fields inside mbuf,
why can't they be used here? Why are separate ones necessary?
> +
> +extern int rte_eth_ip_reass_dynfield_offset;
> +
^ permalink raw reply [flat|nested] 184+ messages in thread
* RE: [PATCH 3/8] ethdev: add mbuf dynfield for incomplete IP reassembly
2022-01-11 17:04 ` Ananyev, Konstantin
@ 2022-01-11 18:44 ` Akhil Goyal
2022-01-12 10:30 ` Ananyev, Konstantin
2022-01-13 13:18 ` Akhil Goyal
0 siblings, 2 replies; 184+ messages in thread
From: Akhil Goyal @ 2022-01-11 18:44 UTC (permalink / raw)
To: Ananyev, Konstantin, dev
Cc: Anoob Joseph, Nicolau, Radu, Doherty, Declan, hemant.agrawal,
matan, thomas, Yigit, Ferruh, andrew.rybchenko, olivier.matz, Xu,
Rosen
>
> > Hardware IP reassembly may be incomplete for multiple reasons like
> > reassembly timeout reached, duplicate fragments, etc.
> > To save application cycles to process these packets again, a new
> > mbuf ol_flag (RTE_MBUF_F_RX_IPREASSEMBLY_INCOMPLETE) is added to
> > show that the mbuf received is not reassembled properly.
>
> If we use a dynfield for the data, why not use a dynflag for
> RTE_MBUF_F_RX_IPREASSEMBLY_INCOMPLETE?
> That way we can avoid introducing hardcoded (always defined) flags for that
> case.
I have not looked into using dynflag. Will explore if it can be used.
> >
> > +/**
> > + * In case of IP reassembly offload failure, ol_flags in mbuf will be set
> > + * with RTE_MBUF_F_RX_IPREASSEMBLY_INCOMPLETE and packets will be
> returned
> > + * without alteration. The application can retrieve the attached fragments
> > + * using mbuf dynamic field.
> > + */
> > +typedef struct {
> > + /**
> > + * Next fragment packet. Application should fetch dynamic field of
> > + * each fragment until a NULL is received and nb_frags is 0.
> > + */
> > + struct rte_mbuf *next_frag;
> > + /** Time spent(in ms) by HW in waiting for further fragments. */
> > + uint16_t time_spent;
> > + /** Number of more fragments attached in mbuf dynamic fields. */
> > + uint16_t nb_frags;
> > +} rte_eth_ip_reass_dynfield_t;
>
>
> Looks like a bit of overkill to me:
> We do already have 'next' and 'nb_frags' fields inside mbuf,
> why can't they be used here? Why are separate ones necessary?
>
The next and nb_frags fields in mbuf are for segmented buffers, not IP fragments.
Here we will have a separate mbuf in each dynfield denoting each of the
fragments, and each fragment may itself have further segmented buffers.
^ permalink raw reply [flat|nested] 184+ messages in thread
* RE: [PATCH 2/8] ethdev: add dev op for IP reassembly configuration
2022-01-11 16:09 ` Ananyev, Konstantin
@ 2022-01-11 18:54 ` Akhil Goyal
2022-01-12 10:22 ` Ananyev, Konstantin
0 siblings, 1 reply; 184+ messages in thread
From: Akhil Goyal @ 2022-01-11 18:54 UTC (permalink / raw)
To: Ananyev, Konstantin, dev
Cc: Anoob Joseph, Nicolau, Radu, Doherty, Declan, hemant.agrawal,
matan, thomas, Yigit, Ferruh, andrew.rybchenko, olivier.matz, Xu,
Rosen
> > diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
> > index d9a03f12f9..ecc6c1fe37 100644
> > --- a/lib/ethdev/rte_ethdev.c
> > +++ b/lib/ethdev/rte_ethdev.c
> > @@ -6473,6 +6473,36 @@ rte_eth_rx_metadata_negotiate(uint16_t port_id,
> uint64_t *features)
> > (*dev->dev_ops->rx_metadata_negotiate)(dev, features));
> > }
> >
> > +int
> > +rte_eth_ip_reassembly_conf_set(uint16_t port_id,
> > + struct rte_eth_ip_reass_params *conf)
> > +{
> > + struct rte_eth_dev *dev;
> > +
> > + RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
> > + dev = &rte_eth_devices[port_id];
>
> Should we check here that device is properly configured, but not started yet?
OK, will add checks for dev->data->dev_configured and dev->data->dev_started.
> Another question - if we have reassembly_conf_set() would it make sense to
> have also reassembly_conf_get?
> So user can retrieve current ip_reassembly config values?
>
The set/supported values can be retrieved using rte_eth_dev_info :: reass_capa
^ permalink raw reply [flat|nested] 184+ messages in thread
* RE: [PATCH 2/8] ethdev: add dev op for IP reassembly configuration
2022-01-11 18:54 ` Akhil Goyal
@ 2022-01-12 10:22 ` Ananyev, Konstantin
2022-01-12 10:32 ` Akhil Goyal
0 siblings, 1 reply; 184+ messages in thread
From: Ananyev, Konstantin @ 2022-01-12 10:22 UTC (permalink / raw)
To: Akhil Goyal, dev
Cc: Anoob Joseph, Nicolau, Radu, Doherty, Declan, hemant.agrawal,
matan, thomas, Yigit, Ferruh, andrew.rybchenko, olivier.matz, Xu,
Rosen
> > > diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
> > > index d9a03f12f9..ecc6c1fe37 100644
> > > --- a/lib/ethdev/rte_ethdev.c
> > > +++ b/lib/ethdev/rte_ethdev.c
> > > @@ -6473,6 +6473,36 @@ rte_eth_rx_metadata_negotiate(uint16_t port_id,
> > uint64_t *features)
> > > (*dev->dev_ops->rx_metadata_negotiate)(dev, features));
> > > }
> > >
> > > +int
> > > +rte_eth_ip_reassembly_conf_set(uint16_t port_id,
> > > + struct rte_eth_ip_reass_params *conf)
> > > +{
> > > + struct rte_eth_dev *dev;
> > > +
> > > + RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
> > > + dev = &rte_eth_devices[port_id];
> >
> > Should we check here that device is properly configured, but not started yet?
> Ok will add checks for dev->data->dev_configured and dev->data->dev_started
>
> > Another question - if we have reassembly_conf_set() would it make sense to
> > have also reassembly_conf_get?
> > So user can retrieve current ip_reassembly config values?
> >
> The set/supported values can be retrieved using rte_eth_dev_info :: reass_capa
Hmm, I thought rte_eth_dev_info :: reass_capa reports
max supported values, not currently set values.
Did I misunderstand something?
^ permalink raw reply [flat|nested] 184+ messages in thread
* RE: [PATCH 3/8] ethdev: add mbuf dynfield for incomplete IP reassembly
2022-01-11 18:44 ` Akhil Goyal
@ 2022-01-12 10:30 ` Ananyev, Konstantin
2022-01-12 10:59 ` Akhil Goyal
2022-01-13 13:18 ` Akhil Goyal
1 sibling, 1 reply; 184+ messages in thread
From: Ananyev, Konstantin @ 2022-01-12 10:30 UTC (permalink / raw)
To: Akhil Goyal, dev
Cc: Anoob Joseph, Nicolau, Radu, Doherty, Declan, hemant.agrawal,
matan, thomas, Yigit, Ferruh, andrew.rybchenko, olivier.matz, Xu,
Rosen
> >
> > > Hardware IP reassembly may be incomplete for multiple reasons like
> > > reassembly timeout reached, duplicate fragments, etc.
> > > To save application cycles to process these packets again, a new
> > > mbuf ol_flag (RTE_MBUF_F_RX_IPREASSEMBLY_INCOMPLETE) is added to
> > > show that the mbuf received is not reassembled properly.
> >
> > If we use a dynfield for the data, why not use a dynflag for
> > RTE_MBUF_F_RX_IPREASSEMBLY_INCOMPLETE?
> > That way we can avoid introducing hardcoded (always defined) flags for that
> > case.
>
> I have not looked into using dynflag. Will explore if it can be used.
>
>
> > >
> > > +/**
> > > + * In case of IP reassembly offload failure, ol_flags in mbuf will be set
> > > + * with RTE_MBUF_F_RX_IPREASSEMBLY_INCOMPLETE and packets will be
> > returned
> > > + * without alteration. The application can retrieve the attached fragments
> > > + * using mbuf dynamic field.
> > > + */
> > > +typedef struct {
> > > + /**
> > > + * Next fragment packet. Application should fetch dynamic field of
> > > + * each fragment until a NULL is received and nb_frags is 0.
> > > + */
> > > + struct rte_mbuf *next_frag;
> > > + /** Time spent(in ms) by HW in waiting for further fragments. */
> > > + uint16_t time_spent;
> > > + /** Number of more fragments attached in mbuf dynamic fields. */
> > > + uint16_t nb_frags;
> > > +} rte_eth_ip_reass_dynfield_t;
> >
> >
> > Looks like a bit of overkill to me:
> > We do already have 'next' and 'nb_frags' fields inside mbuf,
> > why can't they be used here? Why are separate ones necessary?
> >
> The next and nb_frags in mbuf are for segmented buffers and not IP fragments.
> But here we will have separate mbufs in each dynfield denoting each of the
> fragments which may have further segmented buffers.
Makes sense, thanks for the explanation.
Though in that case just 'struct rte_mbuf *next_frag' might be enough
(the user will walk through the list till mbuf->next_frag != NULL)?
The reason I am asking: the current sizeof(rte_eth_ip_reass_dynfield_t) is 16B,
which is quite a lot for the mbuf, especially considering that it has to be a contiguous 16B.
Making it smaller (8B), or even splitting it into 2 fields (8+4), would give it a better chance
to coexist with other dynfields.
^ permalink raw reply [flat|nested] 184+ messages in thread
* RE: [PATCH 2/8] ethdev: add dev op for IP reassembly configuration
2022-01-12 10:22 ` Ananyev, Konstantin
@ 2022-01-12 10:32 ` Akhil Goyal
2022-01-12 10:48 ` Ananyev, Konstantin
0 siblings, 1 reply; 184+ messages in thread
From: Akhil Goyal @ 2022-01-12 10:32 UTC (permalink / raw)
To: Ananyev, Konstantin, dev
Cc: Anoob Joseph, Nicolau, Radu, Doherty, Declan, hemant.agrawal,
matan, thomas, Yigit, Ferruh, andrew.rybchenko, olivier.matz, Xu,
Rosen
> > > Another question - if we have reassembly_conf_set() would it make sense to
> > > have also reassembly_conf_get?
> > > So user can retrieve current ip_reassembly config values?
> > >
> > The set/supported values can be retrieved using rte_eth_dev_info ::
> reass_capa
>
> Hmm, I thought rte_eth_dev_info :: reass_capa reports
> max supported values, not currently set values.
> Did I misunderstand something?
>
Reassembly configuration is expected to be a one-time setting, not something
that changes multiple times during the application's lifetime.
You are correct that rte_eth_dev_info :: reass_capa reports max supported values
by the PMD.
But if somebody uses the _set API, dev_info values will be overwritten.
However, a get API can be added if we have some use case.
IMO, we can add it later if required.
^ permalink raw reply [flat|nested] 184+ messages in thread
* RE: [PATCH 2/8] ethdev: add dev op for IP reassembly configuration
2022-01-12 10:32 ` Akhil Goyal
@ 2022-01-12 10:48 ` Ananyev, Konstantin
2022-01-12 11:06 ` Akhil Goyal
0 siblings, 1 reply; 184+ messages in thread
From: Ananyev, Konstantin @ 2022-01-12 10:48 UTC (permalink / raw)
To: Akhil Goyal, dev
Cc: Anoob Joseph, Nicolau, Radu, Doherty, Declan, hemant.agrawal,
matan, thomas, Yigit, Ferruh, andrew.rybchenko, olivier.matz, Xu,
Rosen
> > > > Another question - if we have reassembly_conf_set() would it make sense to
> > > > have also reassembly_conf_get?
> > > > So user can retrieve current ip_reassembly config values?
> > > >
> > > The set/supported values can be retrieved using rte_eth_dev_info ::
> > reass_capa
> >
> > Hmm, I thought rte_eth_dev_info :: reass_capa reports
> > max supported values, not currently set values.
> > Did I misunderstand something?
> >
> Reassembly configuration is expected to be a one-time setting and is not expected
> to change multiple times in the application.
> You are correct that rte_eth_dev_info :: reass_capa reports max supported values
> by the PMD.
> But if somebody uses the _set API, dev_info values will be overwritten.
> However, a get API can be added, if we have some use case.
> IMO, we can add it later if it will be required.
Basically you forbid the user from reconfiguring this feature
during the application's lifetime?
That sounds like a really strange approach to me and
will probably affect its usability in a negative way.
Why does it have to be that restrictive?
Also, with the model you suggest, what would happen after the user does:
dev_stop(); dev_configure();?
Would rte_eth_dev_info :: reass_capa be reset to the initial values,
would the user values be preserved, or ...?
^ permalink raw reply [flat|nested] 184+ messages in thread
* RE: [PATCH 3/8] ethdev: add mbuf dynfield for incomplete IP reassembly
2022-01-12 10:30 ` Ananyev, Konstantin
@ 2022-01-12 10:59 ` Akhil Goyal
2022-01-13 22:29 ` Ananyev, Konstantin
0 siblings, 1 reply; 184+ messages in thread
From: Akhil Goyal @ 2022-01-12 10:59 UTC (permalink / raw)
To: Ananyev, Konstantin, dev
Cc: Anoob Joseph, Nicolau, Radu, Doherty, Declan, hemant.agrawal,
matan, thomas, Yigit, Ferruh, andrew.rybchenko, olivier.matz, Xu,
Rosen
> > > >
> > > > +/**
> > > > + * In case of IP reassembly offload failure, ol_flags in mbuf will be set
> > > > + * with RTE_MBUF_F_RX_IPREASSEMBLY_INCOMPLETE and packets will
> be
> > > returned
> > > > + * without alteration. The application can retrieve the attached fragments
> > > > + * using mbuf dynamic field.
> > > > + */
> > > > +typedef struct {
> > > > + /**
> > > > + * Next fragment packet. Application should fetch dynamic field of
> > > > + * each fragment until a NULL is received and nb_frags is 0.
> > > > + */
> > > > + struct rte_mbuf *next_frag;
> > > > + /** Time spent(in ms) by HW in waiting for further fragments. */
> > > > + uint16_t time_spent;
> > > > + /** Number of more fragments attached in mbuf dynamic fields. */
> > > > + uint16_t nb_frags;
> > > > +} rte_eth_ip_reass_dynfield_t;
> > >
> > >
> > > Looks like a bit of overkill to me:
> > > We do already have 'next' and 'nb_frags' fields inside mbuf,
> > > why can't they be used here? Why are separate ones necessary?
> > >
> > The next and nb_frags in mbuf are for segmented buffers and not IP fragments.
> > But here we will have separate mbufs in each dynfield denoting each of the
> > fragments which may have further segmented buffers.
>
> Makes sense, thanks for explanation.
> Though in that case just 'struct rte_mbuf *next_frag' might be enough
> (user will walk through the list till mbuf->next_frag != NULL)?
> The reason I am asking: current sizeof(rte_eth_ip_reass_dynfield_t) is 16B,
> which is quite a lot for mbuf, especially considering that it has to be continuous
> 16B.
> Making it smaller (8B) or even splitting into 2 fields (8+4) will give it more
> chances
> to coexist with other dynfields.
Even if we drop nb_frags, we will be left with the uint16_t time_spent.
Are you suggesting using a separate dynfield altogether for the 2 bytes of time_spent?
^ permalink raw reply [flat|nested] 184+ messages in thread
* RE: [PATCH 2/8] ethdev: add dev op for IP reassembly configuration
2022-01-12 10:48 ` Ananyev, Konstantin
@ 2022-01-12 11:06 ` Akhil Goyal
2022-01-13 13:31 ` Akhil Goyal
0 siblings, 1 reply; 184+ messages in thread
From: Akhil Goyal @ 2022-01-12 11:06 UTC (permalink / raw)
To: Ananyev, Konstantin, dev
Cc: Anoob Joseph, Nicolau, Radu, Doherty, Declan, hemant.agrawal,
matan, thomas, Yigit, Ferruh, andrew.rybchenko, olivier.matz, Xu,
Rosen
> > > > > Another question - if we have reassembly_conf_set() would it make sense
> to
> > > > > have also reassembly_conf_get?
> > > > > So user can retrieve current ip_reassembly config values?
> > > > >
> > > > The set/supported values can be retrieved using rte_eth_dev_info ::
> > > reass_capa
> > >
> > > Hmm, I thought rte_eth_dev_info :: reass_capa reports
> > > max supported values, not currently set values.
> > > Did I misunderstand something?
> > >
> > Reassembly configuration is expected to be a one-time setting and is not
> expected
> > to change multiple times in the application.
> > You are correct that rte_eth_dev_info :: reass_capa reports max supported
> values
> > by the PMD.
> > But if somebody uses the _set API, dev_info values will be overwritten.
> > However, a get API can be added, if we have some use case.
> > IMO, we can add it later if it will be required.
>
> Basically you forbid user to reconfigure this feature
> during application life-time?
> That sounds like a really strange approach to me and
> Probably will affect its usability in a negative way.
> Wonder why it has to be that restrictive?
> Also with the model you suggest, what would happen after user will do:
> dev_stop(); dev_configure();?
> Would rte_eth_dev_info :: reass_capa be reset to initial values,
> or user values will be preserved, or ...?
>
I am not restricting the user from reconfiguring the feature.
When dev_configure() is called again after dev_stop(), it will reset the previously
set values to the max ones.
However, if you insist, the get API can be added. No strong opinion on that.
^ permalink raw reply [flat|nested] 184+ messages in thread
* RE: [PATCH 3/8] ethdev: add mbuf dynfield for incomplete IP reassembly
2022-01-11 18:44 ` Akhil Goyal
2022-01-12 10:30 ` Ananyev, Konstantin
@ 2022-01-13 13:18 ` Akhil Goyal
2022-01-13 14:36 ` Ananyev, Konstantin
1 sibling, 1 reply; 184+ messages in thread
From: Akhil Goyal @ 2022-01-13 13:18 UTC (permalink / raw)
To: Ananyev, Konstantin, dev
Cc: Anoob Joseph, Nicolau, Radu, Doherty, Declan, hemant.agrawal,
matan, thomas, Yigit, Ferruh, andrew.rybchenko, olivier.matz, Xu,
Rosen
Hi Konstantin,
> > > Hardware IP reassembly may be incomplete for multiple reasons like
> > > reassembly timeout reached, duplicate fragments, etc.
> > > To save application cycles to process these packets again, a new
> > > mbuf ol_flag (RTE_MBUF_F_RX_IPREASSEMBLY_INCOMPLETE) is added to
> > > show that the mbuf received is not reassembled properly.
> >
> > If we use a dynfield for the data, why not use a dynflag for
> > RTE_MBUF_F_RX_IPREASSEMBLY_INCOMPLETE?
> > That way we can avoid introducing hardcoded (always defined) flags for that
> > case.
>
> I have not looked into using dynflag. Will explore if it can be used.
The intent of adding this feature is to reduce application cycles for IP reassembly.
But if we use a dynflag, it will take a lot of cycles to check whether the flag is set or not.
As I understand it, the flag first needs to be looked up in a linked list and then checked.
And this check will be done for each packet, even if no reassembly is involved.
^ permalink raw reply [flat|nested] 184+ messages in thread
* RE: [PATCH 2/8] ethdev: add dev op for IP reassembly configuration
2022-01-12 11:06 ` Akhil Goyal
@ 2022-01-13 13:31 ` Akhil Goyal
2022-01-13 14:41 ` Ananyev, Konstantin
0 siblings, 1 reply; 184+ messages in thread
From: Akhil Goyal @ 2022-01-13 13:31 UTC (permalink / raw)
To: Ananyev, Konstantin, dev
Cc: Anoob Joseph, Nicolau, Radu, Doherty, Declan, hemant.agrawal,
matan, thomas, Yigit, Ferruh, andrew.rybchenko, olivier.matz, Xu,
Rosen
Hi Konstantin,
> > > > > > Another question - if we have reassembly_conf_set() would it make
> sense
> > to
> > > > > > have also reassembly_conf_get?
> > > > > > So user can retrieve current ip_reassembly config values?
> > > > > >
> > > > > The set/supported values can be retrieved using rte_eth_dev_info ::
> > > > reass_capa
> > > >
> > > > Hmm, I thought rte_eth_dev_info :: reass_capa reports
> > > > max supported values, not currently set values.
> > > > Did I misunderstand something?
> > > >
> > > Reassembly configuration is expected to be a one-time setting and is not
> > expected
> > > to change multiple times in the application.
> > > You are correct that rte_eth_dev_info :: reass_capa reports max supported
> > values
> > > by the PMD.
> > > But if somebody uses the _set API, dev_info values will be overwritten.
> > > However, a get API can be added, if we have some use case.
> > > IMO, we can add it later if it will be required.
> >
> > Basically you forbid user to reconfigure this feature
> > during application life-time?
> > That sounds like a really strange approach to me and
> > Probably will affect its usability in a negative way.
> > Wonder why it has to be that restrictive?
> > Also with the model you suggest, what would happen after user will do:
> > dev_stop(); dev_configure();?
> > Would rte_eth_dev_info :: reass_capa be reset to initial values,
> > or user values will be preserved, or ...?
> >
> I am not restricting the user to not reconfigure the feature.
> When dev_configure() is called again after dev_stop(), it will reset the previously
> set values to max ones.
> However, if you insist the get API can be added. No strong opinion on that.
On second thought, it makes more sense to set dev_info :: reass_capa to the maximum
values and leave it unchanged in reassembly_conf_set().
The most common case would be to get the maximum values and, if they are not good
enough for the application, set smaller values using the new API.
I do not see a use case for getting the currently set values. It might help when debugging
a driver issue related to these values, but I believe that can be managed internally
in the PMD. Do you see any other use case for a get API?
^ permalink raw reply [flat|nested] 184+ messages in thread
* RE: [PATCH 3/8] ethdev: add mbuf dynfield for incomplete IP reassembly
2022-01-13 13:18 ` Akhil Goyal
@ 2022-01-13 14:36 ` Ananyev, Konstantin
2022-01-13 15:04 ` Akhil Goyal
0 siblings, 1 reply; 184+ messages in thread
From: Ananyev, Konstantin @ 2022-01-13 14:36 UTC (permalink / raw)
To: Akhil Goyal, dev
Cc: Anoob Joseph, Nicolau, Radu, Doherty, Declan, hemant.agrawal,
matan, thomas, Yigit, Ferruh, andrew.rybchenko, olivier.matz, Xu,
Rosen
Hi Akhil,
> Hi Konstantin,
> > > > Hardware IP reassembly may be incomplete for multiple reasons like
> > > > reassembly timeout reached, duplicate fragments, etc.
> > > > To save application cycles to process these packets again, a new
> > > > mbuf ol_flag (RTE_MBUF_F_RX_IPREASSEMBLY_INCOMPLETE) is added to
> > > > show that the mbuf received is not reassembled properly.
> > >
> > > If we use dynfiled for data, why not use dynflag for
> > > RTE_MBUF_F_RX_IPREASSEMBLY_INCOMPLETE?
> > > That way we can avoid introduced hardcoded (always defined) flags for that
> > > case.
> >
> > I have not looked into using dynflag. Will explore if it can be used.
> The intent of adding this feature is to reduce application cycles for IP reassembly.
> But if we use dynflag, it will take a lot of cycles to check if dyn flag is set or not.
> As I understand, it first need to be looked up in a linked list and then checked.
> And this will be checked for each packet even if there is no reassembly involved.
No, I don't think that is a correct understanding.
A dyn-flag follows the same approach as a dyn-field:
at init time it selects the bit which will be used and returns its value to the user.
Then the user sets/checks that bit at runtime,
so there are no linked-list walks at runtime.
All you lose compared to hard-coded values is compiler optimizations.
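The register-once, bit-test-per-packet pattern described above can be sketched self-contained. The mock registry below only stands in for rte_mbuf_dynflag_register(), which in DPDK picks a free ol_flags bit at init time and returns its number; the flag name and helper names here are illustrative, not DPDK API.

```c
#include <stdint.h>
#include <string.h>

/* Stand-in for rte_mbuf_dynflag_register(): at init time it picks a free
 * bit and returns its number. The real DPDK registry is shared, so every
 * user of the same name gets the same bit. */
static int
mock_dynflag_register(const char *name)
{
	static const char *names[64];
	int i;

	for (i = 0; i < 64; i++) {
		if (names[i] == NULL) {
			names[i] = name;  /* first free bit wins */
			return i;
		}
		if (strcmp(names[i], name) == 0)
			return i;         /* already registered */
	}
	return -1;                        /* no free bits left */
}

/* Looked up once at init; afterwards it behaves like a hard-coded flag. */
static uint64_t ip_reass_incomplete_flag;

static int
init_reass_flag(void)
{
	int bitnum = mock_dynflag_register("ip_reass_incomplete");

	if (bitnum < 0)
		return -1;
	ip_reass_incomplete_flag = UINT64_C(1) << bitnum;
	return 0;
}

/* Per-packet check is a plain bit test - no list walk at runtime. */
static inline int
is_reass_incomplete(uint64_t ol_flags)
{
	return (ol_flags & ip_reass_incomplete_flag) != 0;
}
```

The only per-packet cost versus a hard-coded flag is that the mask is loaded from a variable instead of being an immediate, which is what the remark about compiler optimizations refers to.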
^ permalink raw reply [flat|nested] 184+ messages in thread
* RE: [PATCH 2/8] ethdev: add dev op for IP reassembly configuration
2022-01-13 13:31 ` Akhil Goyal
@ 2022-01-13 14:41 ` Ananyev, Konstantin
0 siblings, 0 replies; 184+ messages in thread
From: Ananyev, Konstantin @ 2022-01-13 14:41 UTC (permalink / raw)
To: Akhil Goyal, dev
Cc: Anoob Joseph, Nicolau, Radu, Doherty, Declan, hemant.agrawal,
matan, thomas, Yigit, Ferruh, andrew.rybchenko, olivier.matz, Xu,
Rosen
> > > > > > > Another question - if we have reassembly_conf_set() would it make
> > sense
> > > to
> > > > > > > have also reassembly_conf_get?
> > > > > > > So user can retrieve current ip_reassembly config values?
> > > > > > >
> > > > > > The set/supported values can be retrieved using rte_eth_dev_info ::
> > > > > reass_capa
> > > > >
> > > > > Hmm, I thought rte_eth_dev_info :: reass_capa reports
> > > > > max supported values, not currently set values.
> > > > > Did I misunderstand something?
> > > > >
> > > > Reassembly configuration is expected to be a one-time setting and is not
> > > expected
> > > > to change multiple times in the application.
> > > > You are correct that rte_eth_dev_info :: reass_capa reports max supported
> > > values
> > > > by the PMD.
> > > > But if somebody uses the _set API, dev_info values will be overwritten.
> > > > However, a get API can be added, if we have some use case.
> > > > IMO, we can add it later if it will be required.
> > >
> > > Basically you forbid user to reconfigure this feature
> > > during application life-time?
> > > That sounds like a really strange approach to me and
> > > Probably will affect its usability in a negative way.
> > > Wonder why it has to be that restrictive?
> > > Also with the model you suggest, what would happen after user will do:
> > > dev_stop(); dev_configure();?
> > > Would rte_eth_dev_info :: reass_capa be reset to initial values,
> > > or user values will be preserved, or ...?
> > >
> > I am not restricting the user to not reconfigure the feature.
> > When dev_configure() is called again after dev_stop(), it will reset the previously
> > set values to max ones.
> > However, if you insist the get API can be added. No strong opinion on that.
>
> On another thought, setting dev_info :: reass_capa to a max value and not changing it
> in reassembly_conf_set() will make more sense.
Yes, agree.
> The most common case, would be to get the max values and if they are not good
> Enough for the application, set lesser values using the new API.
> I do not see a use case to get the current values set. However, it may be used for debugging
> some driver issue related to these values. But, I believe that can be managed internally
> in the PMD. Do you suspect any other use case for get API?
I think it would be genuinely useful for both the user and the ethdev layer to be able
to retrieve the values that are currently in place.
^ permalink raw reply [flat|nested] 184+ messages in thread
* RE: [PATCH 3/8] ethdev: add mbuf dynfield for incomplete IP reassembly
2022-01-13 14:36 ` Ananyev, Konstantin
@ 2022-01-13 15:04 ` Akhil Goyal
0 siblings, 0 replies; 184+ messages in thread
From: Akhil Goyal @ 2022-01-13 15:04 UTC (permalink / raw)
To: Ananyev, Konstantin, dev
Cc: Anoob Joseph, Nicolau, Radu, Doherty, Declan, hemant.agrawal,
matan, thomas, Yigit, Ferruh, andrew.rybchenko, olivier.matz, Xu,
Rosen
> Hi Akhil,
>
> > Hi Konstantin,
> > > > > Hardware IP reassembly may be incomplete for multiple reasons like
> > > > > reassembly timeout reached, duplicate fragments, etc.
> > > > > To save application cycles to process these packets again, a new
> > > > > mbuf ol_flag (RTE_MBUF_F_RX_IPREASSEMBLY_INCOMPLETE) is added
> to
> > > > > show that the mbuf received is not reassembled properly.
> > > >
> > > > If we use dynfiled for data, why not use dynflag for
> > > > RTE_MBUF_F_RX_IPREASSEMBLY_INCOMPLETE?
> > > > That way we can avoid introduced hardcoded (always defined) flags for
> that
> > > > case.
> > >
> > > I have not looked into using dynflag. Will explore if it can be used.
> > The intent of adding this feature is to reduce application cycles for IP
> reassembly.
> > But if we use dynflag, it will take a lot of cycles to check if dyn flag is set or
> not.
> > As I understand, it first need to be looked up in a linked list and then checked.
> > And this will be checked for each packet even if there is no reassembly
> involved.
>
> No, I don't think it is correct understanding.
> For dyn-flag it is the same approach as for dyn-field.
> At init time it selects the bit which will be used and return it'e value to the user.
> Then user will set/check the at runtime.
> So no linking list walks at runtime.
> All you missing comparing to hard-coded values: complier optimizations.
>
Ok, got it. rte_mbuf_dynflag_lookup() needs to happen only once, for the first mbuf.
I was looking at is_timestamp_enabled() in test-pmd and did not notice that the dynflag
was cached in a static variable, so I thought the lookup was happening for each packet.
^ permalink raw reply [flat|nested] 184+ messages in thread
* RE: [PATCH 3/8] ethdev: add mbuf dynfield for incomplete IP reassembly
2022-01-12 10:59 ` Akhil Goyal
@ 2022-01-13 22:29 ` Ananyev, Konstantin
0 siblings, 0 replies; 184+ messages in thread
From: Ananyev, Konstantin @ 2022-01-13 22:29 UTC (permalink / raw)
To: Akhil Goyal, dev
Cc: Anoob Joseph, Nicolau, Radu, Doherty, Declan, hemant.agrawal,
matan, thomas, Yigit, Ferruh, andrew.rybchenko, olivier.matz, Xu,
Rosen
> > > > >
> > > > > +/**
> > > > > + * In case of IP reassembly offload failure, ol_flags in mbuf will be set
> > > > > + * with RTE_MBUF_F_RX_IPREASSEMBLY_INCOMPLETE and packets will
> > be
> > > > returned
> > > > > + * without alteration. The application can retrieve the attached fragments
> > > > > + * using mbuf dynamic field.
> > > > > + */
> > > > > +typedef struct {
> > > > > + /**
> > > > > + * Next fragment packet. Application should fetch dynamic field of
> > > > > + * each fragment until a NULL is received and nb_frags is 0.
> > > > > + */
> > > > > + struct rte_mbuf *next_frag;
> > > > > + /** Time spent(in ms) by HW in waiting for further fragments. */
> > > > > + uint16_t time_spent;
> > > > > + /** Number of more fragments attached in mbuf dynamic fields. */
> > > > > + uint16_t nb_frags;
> > > > > +} rte_eth_ip_reass_dynfield_t;
> > > >
> > > >
> > > > Looks like a bit of overkill to me:
> > > > We do already have 'next' and 'nb_frags' fields inside mbuf,
> > > > why can't they be used here? Why a separate ones are necessary?
> > > >
> > > The next and nb_frags in mbuf is for segmented buffers and not IP fragments.
> > > But here we will have separate mbufs in each dynfield denoting each of the
> > > fragments which may have further segmented buffers.
> >
> > Makes sense, thanks for explanation.
> > Though in that case just 'struct rte_mbuf *next_frag' might be enough
> > (user will walk though the list till mbuf->next_frag != NULL)?
> > The reason I am asking: current sizeof(rte_eth_ip_reass_dynfield_t) is 16B,
> > which is quite a lot for mbuf, especially considering that it has to be continuous
> > 16B.
> > Making it smaller (8B) or even splitting into 2 fileds (8+4) will give it more
> > chances
> > to coexist with other dynfields.
>
> Even if we drop nb_frags, we will be left with uint16_t time_spent.
> Are you suggesting to use separate dynfield altogether for 2 bytes of time_spent?
Yes, that's was my thought - split it into two separate fields, if possible.
^ permalink raw reply [flat|nested] 184+ messages in thread
* [PATCH v2 0/4] ethdev: introduce IP reassembly offload
2022-01-03 15:08 ` [PATCH 0/8] ethdev: introduce IP " Akhil Goyal
` (8 preceding siblings ...)
2022-01-06 9:51 ` [PATCH 0/8] ethdev: introduce IP reassembly offload David Marchand
@ 2022-01-20 16:26 ` Akhil Goyal
2022-01-20 16:26 ` [PATCH v2 1/4] " Akhil Goyal
` (4 more replies)
9 siblings, 5 replies; 184+ messages in thread
From: Akhil Goyal @ 2022-01-20 16:26 UTC (permalink / raw)
To: dev
Cc: anoobj, radu.nicolau, declan.doherty, hemant.agrawal, matan,
konstantin.ananyev, thomas, ferruh.yigit, andrew.rybchenko,
olivier.matz, rosen.xu, jerinj, Akhil Goyal
As discussed in the RFC[1] sent in 21.11, a new offload is
introduced in ethdev for IP reassembly.
This patchset adds the IP reassembly Rx offload.
Currently, the offload is tested along with inline IPsec processing.
It can also be used as a standalone offload without IPsec, if
hardware becomes available to test it.
The patchset is tested on the cnxk platform. The driver implementation
and a test app are added as separate patchsets.
[1]: http://patches.dpdk.org/project/dpdk/patch/20210823100259.1619886-1-gakhil@marvell.com/
changes in v2:
- added ABI ignore exceptions for modifications in reserved fields.
This is a crude way to work around the rte_security and rte_ipsec
ABI issue; please suggest a better way.
- incorporated Konstantin's comment to add extra checks in the newly
introduced API.
- converted static mbuf ol_flag to mbuf dynflag (Konstantin)
- added a get API for reassembly configuration (Konstantin)
- Fixed checkpatch issues.
- Dynfield is NOT split into 2 parts as it would cause an extra fetch in
case of IP reassembly failure.
- Application patches are split into a separate series.
Akhil Goyal (4):
ethdev: introduce IP reassembly offload
ethdev: add dev op to set/get IP reassembly configuration
ethdev: add mbuf dynfield for incomplete IP reassembly
security: add IPsec option for IP reassembly
devtools/libabigail.abignore | 19 ++++++
doc/guides/nics/features.rst | 11 ++++
lib/ethdev/ethdev_driver.h | 45 ++++++++++++++
lib/ethdev/rte_ethdev.c | 110 +++++++++++++++++++++++++++++++++++
lib/ethdev/rte_ethdev.h | 104 ++++++++++++++++++++++++++++++++-
lib/ethdev/version.map | 5 ++
lib/security/rte_security.h | 12 +++-
7 files changed, 304 insertions(+), 2 deletions(-)
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* [PATCH v2 1/4] ethdev: introduce IP reassembly offload
2022-01-20 16:26 ` [PATCH v2 0/4] " Akhil Goyal
@ 2022-01-20 16:26 ` Akhil Goyal
2022-01-20 16:45 ` Stephen Hemminger
2022-01-20 16:26 ` [PATCH v2 2/4] ethdev: add dev op to set/get IP reassembly configuration Akhil Goyal
` (3 subsequent siblings)
4 siblings, 1 reply; 184+ messages in thread
From: Akhil Goyal @ 2022-01-20 16:26 UTC (permalink / raw)
To: dev
Cc: anoobj, radu.nicolau, declan.doherty, hemant.agrawal, matan,
konstantin.ananyev, thomas, ferruh.yigit, andrew.rybchenko,
olivier.matz, rosen.xu, jerinj, Akhil Goyal
IP reassembly is a costly operation if it is done in software.
The operation becomes even costlier if the IP fragments are encrypted.
However, if it is offloaded to HW, it can considerably save application
cycles.
Hence, a new offload RTE_ETH_RX_OFFLOAD_IP_REASSEMBLY is introduced in
ethdev for devices which can attempt reassembly of packets in hardware.
rte_eth_dev_info is updated with the reassembly capabilities which a device
can support.
The resulting reassembled packet would be a typical segmented mbuf in
case of success.
And if reassembly of the fragments fails or is incomplete (fragments do
not arrive before the reass_timeout), the mbuf ol_flags can be updated.
This is covered in a subsequent patch.
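As a rough illustration of how an application would request this offload, the helper below enables the bit only when the device advertises it. The flag value is mirrored from the patch for a self-contained build; this is a sketch, not the DPDK headers.

```c
#include <stdint.h>

/* Mirrored from the patch for illustration only. */
#define RTE_BIT64(n) (UINT64_C(1) << (n))
#define RTE_ETH_RX_OFFLOAD_IP_REASSEMBLY RTE_BIT64(21)

/* Add IP reassembly to the Rx offload mask only if the device reports
 * it in its capabilities (rx_offload_capa in rte_eth_dev_info). */
static uint64_t
request_ip_reassembly(uint64_t rx_offload_capa, uint64_t offloads)
{
	if (rx_offload_capa & RTE_ETH_RX_OFFLOAD_IP_REASSEMBLY)
		offloads |= RTE_ETH_RX_OFFLOAD_IP_REASSEMBLY;
	return offloads;
}
```

In a real application the resulting mask would go into rte_eth_conf's rxmode.offloads before rte_eth_dev_configure().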
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
devtools/libabigail.abignore | 5 +++++
doc/guides/nics/features.rst | 11 +++++++++++
lib/ethdev/rte_ethdev.c | 1 +
lib/ethdev/rte_ethdev.h | 32 +++++++++++++++++++++++++++++++-
4 files changed, 48 insertions(+), 1 deletion(-)
diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
index 4b676f317d..90f449c43a 100644
--- a/devtools/libabigail.abignore
+++ b/devtools/libabigail.abignore
@@ -11,3 +11,8 @@
; Ignore generated PMD information strings
[suppress_variable]
name_regexp = _pmd_info$
+
+; Ignore fields inserted in place of reserved_64s of rte_eth_dev_info
+[suppress_type]
+ name = rte_eth_dev_info
+ has_data_member_inserted_between = {offset_of(reserved_64s), end}
diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
index 27be2d2576..b45bce4a78 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -602,6 +602,17 @@ Supports inner packet L4 checksum.
``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM``.
+.. _nic_features_ip_reassembly:
+
+IP reassembly
+-------------
+
+Supports IP reassembly in hardware.
+
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_IP_REASSEMBLY``.
+* **[provides] rte_eth_dev_info**: ``reass_capa``.
+
+
.. _nic_features_shared_rx_queue:
Shared Rx queue
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index a1d475a292..d9a03f12f9 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -126,6 +126,7 @@ static const struct {
RTE_RX_OFFLOAD_BIT2STR(OUTER_UDP_CKSUM),
RTE_RX_OFFLOAD_BIT2STR(RSS_HASH),
RTE_RX_OFFLOAD_BIT2STR(BUFFER_SPLIT),
+ RTE_RX_OFFLOAD_BIT2STR(IP_REASSEMBLY),
};
#undef RTE_RX_OFFLOAD_BIT2STR
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index fa299c8ad7..11427b2e4d 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -1586,6 +1586,7 @@ struct rte_eth_conf {
#define RTE_ETH_RX_OFFLOAD_RSS_HASH RTE_BIT64(19)
#define DEV_RX_OFFLOAD_RSS_HASH RTE_ETH_RX_OFFLOAD_RSS_HASH
#define RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT RTE_BIT64(20)
+#define RTE_ETH_RX_OFFLOAD_IP_REASSEMBLY RTE_BIT64(21)
#define RTE_ETH_RX_OFFLOAD_CHECKSUM (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | \
RTE_ETH_RX_OFFLOAD_UDP_CKSUM | \
@@ -1781,6 +1782,33 @@ enum rte_eth_representor_type {
RTE_ETH_REPRESENTOR_PF, /**< representor of Physical Function. */
};
+/* Flag to offload IP reassembly for IPv4 packets. */
+#define RTE_ETH_DEV_REASSEMBLY_F_IPV4 (RTE_BIT32(0))
+/* Flag to offload IP reassembly for IPv6 packets. */
+#define RTE_ETH_DEV_REASSEMBLY_F_IPV6 (RTE_BIT32(1))
+/**
+ * @warning
+ * @b EXPERIMENTAL: this structure may change without prior notice.
+ *
+ * A structure used to set IP reassembly configuration.
+ *
+ * If RTE_ETH_RX_OFFLOAD_IP_REASSEMBLY flag is set in offloads field,
+ * the PMD will attempt IP reassembly for the received packets as per
+ * properties defined in this structure:
+ *
+ */
+struct rte_eth_ip_reass_params {
+ /** Maximum time in ms which PMD can wait for other fragments. */
+ uint32_t reass_timeout;
+ /** Maximum number of fragments that can be reassembled. */
+ uint16_t max_frags;
+ /**
+ * Flags to enable reassembly of packet types -
+ * RTE_ETH_DEV_REASSEMBLY_F_xxx.
+ */
+ uint16_t flags;
+};
+
/**
* A structure used to retrieve the contextual information of
* an Ethernet device, such as the controlling driver of the
@@ -1841,8 +1869,10 @@ struct rte_eth_dev_info {
* embedded managed interconnect/switch.
*/
struct rte_eth_switch_info switch_info;
+ /** IP reassembly offload capabilities that a device can support. */
+ struct rte_eth_ip_reass_params reass_capa;
- uint64_t reserved_64s[2]; /**< Reserved for future fields */
+ uint64_t reserved_64s[1]; /**< Reserved for future fields */
void *reserved_ptrs[2]; /**< Reserved for future fields */
};
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* [PATCH v2 2/4] ethdev: add dev op to set/get IP reassembly configuration
2022-01-20 16:26 ` [PATCH v2 0/4] " Akhil Goyal
2022-01-20 16:26 ` [PATCH v2 1/4] " Akhil Goyal
@ 2022-01-20 16:26 ` Akhil Goyal
2022-01-22 8:17 ` Andrew Rybchenko
2022-01-20 16:26 ` [PATCH v2 3/4] ethdev: add mbuf dynfield for incomplete IP reassembly Akhil Goyal
` (2 subsequent siblings)
4 siblings, 1 reply; 184+ messages in thread
From: Akhil Goyal @ 2022-01-20 16:26 UTC (permalink / raw)
To: dev
Cc: anoobj, radu.nicolau, declan.doherty, hemant.agrawal, matan,
konstantin.ananyev, thomas, ferruh.yigit, andrew.rybchenko,
olivier.matz, rosen.xu, jerinj, Akhil Goyal
A new Ethernet device op is added to give the application control over
the IP reassembly configuration. This operation is an optional
call from the application; default values are set by the PMD and
exposed via rte_eth_dev_info.
The application should always first retrieve the capabilities from
rte_eth_dev_info and then set the fields accordingly.
The user can retrieve the currently set values using the get API.
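The intended flow is to read the maximum supported values and then set equal or smaller ones. That logic can be sketched with a parameter-clamping helper; the struct mirrors rte_eth_ip_reass_params from the patch and is not the DPDK header itself.

```c
#include <stdint.h>

/* Mirror of rte_eth_ip_reass_params from the patch, for illustration. */
struct ip_reass_params {
	uint32_t reass_timeout; /* ms the PMD may wait for fragments */
	uint16_t max_frags;     /* max fragments per reassembled packet */
	uint16_t flags;         /* RTE_ETH_DEV_REASSEMBLY_F_xxx bits */
};

/* Build a configuration that never exceeds the device capabilities
 * (reass_capa from rte_eth_dev_info in DPDK). The result would be
 * passed to rte_eth_ip_reassembly_conf_set() before dev_start(). */
static struct ip_reass_params
clamp_reass_conf(struct ip_reass_params capa, struct ip_reass_params want)
{
	struct ip_reass_params conf = want;

	if (conf.reass_timeout > capa.reass_timeout)
		conf.reass_timeout = capa.reass_timeout;
	if (conf.max_frags > capa.max_frags)
		conf.max_frags = capa.max_frags;
	conf.flags &= capa.flags; /* only packet types the device supports */
	return conf;
}
```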
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
---
lib/ethdev/ethdev_driver.h | 37 +++++++++++++++++
lib/ethdev/rte_ethdev.c | 81 ++++++++++++++++++++++++++++++++++++++
lib/ethdev/rte_ethdev.h | 51 ++++++++++++++++++++++++
lib/ethdev/version.map | 4 ++
4 files changed, 173 insertions(+)
diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
index d95605a355..a310001648 100644
--- a/lib/ethdev/ethdev_driver.h
+++ b/lib/ethdev/ethdev_driver.h
@@ -990,6 +990,38 @@ typedef int (*eth_representor_info_get_t)(struct rte_eth_dev *dev,
typedef int (*eth_rx_metadata_negotiate_t)(struct rte_eth_dev *dev,
uint64_t *features);
+/**
+ * @internal
+ * Get IP reassembly offload configuration parameters set in PMD.
+ *
+ * @param dev
+ * Port (ethdev) handle
+ *
+ * @param[out] conf
+ * Configuration parameters for IP reassembly.
+ *
+ * @return
+ * Negative errno value on error, zero otherwise
+ */
+typedef int (*eth_ip_reassembly_conf_get_t)(struct rte_eth_dev *dev,
+ struct rte_eth_ip_reass_params *conf);
+
+/**
+ * @internal
+ * Set configuration parameters for enabling IP reassembly offload in hardware.
+ *
+ * @param dev
+ * Port (ethdev) handle
+ *
+ * @param[in] conf
+ * Configuration parameters for IP reassembly.
+ *
+ * @return
+ * Negative errno value on error, zero otherwise
+ */
+typedef int (*eth_ip_reassembly_conf_set_t)(struct rte_eth_dev *dev,
+ struct rte_eth_ip_reass_params *conf);
+
/**
* @internal A structure containing the functions exported by an Ethernet driver.
*/
@@ -1186,6 +1218,11 @@ struct eth_dev_ops {
* kinds of metadata to the PMD
*/
eth_rx_metadata_negotiate_t rx_metadata_negotiate;
+
+ /** Get IP reassembly configuration */
+ eth_ip_reassembly_conf_get_t ip_reassembly_conf_get;
+ /** Set IP reassembly configuration */
+ eth_ip_reassembly_conf_set_t ip_reassembly_conf_set;
};
/**
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index d9a03f12f9..4bd31034a6 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -6473,6 +6473,87 @@ rte_eth_rx_metadata_negotiate(uint16_t port_id, uint64_t *features)
(*dev->dev_ops->rx_metadata_negotiate)(dev, features));
}
+int
+rte_eth_ip_reassembly_conf_set(uint16_t port_id,
+ struct rte_eth_ip_reass_params *conf)
+{
+ struct rte_eth_dev *dev;
+
+ RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+ dev = &rte_eth_devices[port_id];
+
+ if (dev->data->dev_configured == 0) {
+ RTE_ETHDEV_LOG(ERR,
+ "Device with port_id=%"PRIu16" is not configured.\n",
+ port_id);
+ return -EINVAL;
+ }
+
+ if (dev->data->dev_started != 0) {
+ RTE_ETHDEV_LOG(ERR,
+ "Device with port_id=%"PRIu16" started,\n"
+ "cannot configure IP reassembly params.\n",
+ port_id);
+ return -EINVAL;
+ }
+
+ if ((dev->data->dev_conf.rxmode.offloads &
+ RTE_ETH_RX_OFFLOAD_IP_REASSEMBLY) == 0) {
+ RTE_ETHDEV_LOG(ERR,
+ "The port (ID=%"PRIu16") is not configured for IP reassembly\n",
+ port_id);
+ return -EINVAL;
+ }
+
+
+ if (conf == NULL) {
+ RTE_ETHDEV_LOG(ERR,
+ "Invalid IP reassembly configuration (NULL)\n");
+ return -EINVAL;
+ }
+
+ RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->ip_reassembly_conf_set,
+ -ENOTSUP);
+ return eth_err(port_id,
+ (*dev->dev_ops->ip_reassembly_conf_set)(dev, conf));
+}
+
+int
+rte_eth_ip_reassembly_conf_get(uint16_t port_id,
+ struct rte_eth_ip_reass_params *conf)
+{
+ struct rte_eth_dev *dev;
+
+ RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+ dev = &rte_eth_devices[port_id];
+
+ if (conf == NULL) {
+ RTE_ETHDEV_LOG(ERR, "Cannot get reassembly info to NULL");
+ return -EINVAL;
+ }
+
+ if (dev->data->dev_configured == 0) {
+ RTE_ETHDEV_LOG(ERR,
+ "Device with port_id=%"PRIu16" is not configured.\n",
+ port_id);
+ return -EINVAL;
+ }
+
+ if ((dev->data->dev_conf.rxmode.offloads &
+ RTE_ETH_RX_OFFLOAD_IP_REASSEMBLY) == 0) {
+ RTE_ETHDEV_LOG(ERR,
+ "The port (ID=%"PRIu16") is not configured for IP reassembly\n",
+ port_id);
+ return -EINVAL;
+ }
+
+ RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->ip_reassembly_conf_get,
+ -ENOTSUP);
+ memset(conf, 0, sizeof(struct rte_eth_ip_reass_params));
+ return eth_err(port_id,
+ (*dev->dev_ops->ip_reassembly_conf_get)(dev, conf));
+}
+
RTE_LOG_REGISTER_DEFAULT(rte_eth_dev_logtype, INFO);
RTE_INIT(ethdev_init_telemetry)
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index 11427b2e4d..53af158bcb 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -5218,6 +5218,57 @@ int rte_eth_representor_info_get(uint16_t port_id,
__rte_experimental
int rte_eth_rx_metadata_negotiate(uint16_t port_id, uint64_t *features);
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Get IP reassembly configuration parameters currently set in PMD,
+ * if device rx offload flag (RTE_ETH_RX_OFFLOAD_IP_REASSEMBLY) is
+ * enabled and the PMD supports IP reassembly offload.
+ *
+ * @param port_id
+ * The port identifier of the device.
+ * @param conf
+ * A pointer to rte_eth_ip_reass_params structure.
+ * @return
+ * - (-ENOTSUP) if offload configuration is not supported by device.
+ * - (-EINVAL) if offload is not enabled in rte_eth_conf.
+ * - (-ENODEV) if *port_id* invalid.
+ * - (-EIO) if device is removed.
+ * - (0) on success.
+ */
+__rte_experimental
+int rte_eth_ip_reassembly_conf_get(uint16_t port_id,
+ struct rte_eth_ip_reass_params *conf);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Set IP reassembly configuration parameters if device rx offload
+ * flag (RTE_ETH_RX_OFFLOAD_IP_REASSEMBLY) is enabled and the PMD
+ * supports IP reassembly offload. User should first check the
+ * reass_capa in rte_eth_dev_info before setting the configuration.
+ * The values of configuration parameters must not exceed the device
+ * capabilities. The use of this API is optional and if called, it
+ * should be called before rte_eth_dev_start().
+ *
+ * @param port_id
+ * The port identifier of the device.
+ * @param conf
+ * A pointer to rte_eth_ip_reass_params structure.
+ * @return
+ * - (-ENOTSUP) if offload configuration is not supported by device.
+ * - (-EINVAL) if offload is not enabled in rte_eth_conf.
+ * - (-ENODEV) if *port_id* invalid.
+ * - (-EIO) if device is removed.
+ * - (0) on success.
+ */
+__rte_experimental
+int rte_eth_ip_reassembly_conf_set(uint16_t port_id,
+ struct rte_eth_ip_reass_params *conf);
+
+
#include <rte_ethdev_core.h>
/**
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index c2fb0669a4..ad829dd47e 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -256,6 +256,10 @@ EXPERIMENTAL {
rte_flow_flex_item_create;
rte_flow_flex_item_release;
rte_flow_pick_transfer_proxy;
+
+ #added in 22.03
+ rte_eth_ip_reassembly_conf_get;
+ rte_eth_ip_reassembly_conf_set;
};
INTERNAL {
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* [PATCH v2 3/4] ethdev: add mbuf dynfield for incomplete IP reassembly
2022-01-20 16:26 ` [PATCH v2 0/4] " Akhil Goyal
2022-01-20 16:26 ` [PATCH v2 1/4] " Akhil Goyal
2022-01-20 16:26 ` [PATCH v2 2/4] ethdev: add dev op to set/get IP reassembly configuration Akhil Goyal
@ 2022-01-20 16:26 ` Akhil Goyal
2022-01-20 16:26 ` [PATCH v2 4/4] security: add IPsec option for " Akhil Goyal
2022-01-30 17:59 ` [PATCH v3 0/4] ethdev: introduce IP reassembly offload Akhil Goyal
4 siblings, 0 replies; 184+ messages in thread
From: Akhil Goyal @ 2022-01-20 16:26 UTC (permalink / raw)
To: dev
Cc: anoobj, radu.nicolau, declan.doherty, hemant.agrawal, matan,
konstantin.ananyev, thomas, ferruh.yigit, andrew.rybchenko,
olivier.matz, rosen.xu, jerinj, Akhil Goyal
Hardware IP reassembly may be incomplete for multiple reasons, such as
the reassembly timeout being reached, duplicate fragments, etc.
To save the application cycles of processing these packets again, a new
mbuf dynflag is added to indicate that the received mbuf was not
reassembled properly.
If this dynflag is set, the application can retrieve the corresponding
chain of mbufs using the mbuf dynfield set by the PMD. It is then up
to the application to either drop those fragments or wait longer.
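The fragment walk described above can be sketched self-contained. The struct below mirrors rte_eth_ip_reass_dynfield_t from the patch; for simplicity a plain pointer stands in for reading the dynfield out of each fragment mbuf (which in DPDK is done via RTE_MBUF_DYNFIELD() at the registered offset).

```c
#include <stddef.h>
#include <stdint.h>

/* Mirror of rte_eth_ip_reass_dynfield_t, for illustration. In DPDK
 * next_frag is a struct rte_mbuf *, and the application fetches this
 * dynfield from each fragment in turn. */
struct ip_reass_dynfield {
	struct ip_reass_dynfield *next_frag; /* next fragment, NULL at end */
	uint16_t time_spent;                 /* ms HW waited for fragments */
	uint16_t nb_frags;                   /* fragments still attached */
};

/* Count the fragments of an incomplete reassembly by following
 * next_frag until NULL, as the dynfield documentation suggests. */
static unsigned int
count_fragments(struct ip_reass_dynfield *first)
{
	unsigned int n = 0;
	struct ip_reass_dynfield *f;

	for (f = first; f != NULL; f = f->next_frag)
		n++;
	return n;
}
```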
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
---
lib/ethdev/ethdev_driver.h | 8 ++++++++
lib/ethdev/rte_ethdev.c | 28 ++++++++++++++++++++++++++++
lib/ethdev/rte_ethdev.h | 21 +++++++++++++++++++++
lib/ethdev/version.map | 1 +
4 files changed, 58 insertions(+)
diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
index a310001648..7499a4fbf5 100644
--- a/lib/ethdev/ethdev_driver.h
+++ b/lib/ethdev/ethdev_driver.h
@@ -1689,6 +1689,14 @@ int
rte_eth_hairpin_queue_peer_unbind(uint16_t cur_port, uint16_t cur_queue,
uint32_t direction);
+/**
+ * @internal
+ * Register mbuf dynamic field and flag for IP reassembly incomplete case.
+ */
+__rte_internal
+int
+rte_eth_ip_reass_dynfield_register(int *field_offset, int *flag);
+
/*
* Legacy ethdev API used internally by drivers.
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 4bd31034a6..f6a155dceb 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -6554,6 +6554,34 @@ rte_eth_ip_reassembly_conf_get(uint16_t port_id,
(*dev->dev_ops->ip_reassembly_conf_get)(dev, conf));
}
+int
+rte_eth_ip_reass_dynfield_register(int *field_offset, int *flag_offset)
+{
+ static const struct rte_mbuf_dynfield field_desc = {
+ .name = RTE_ETH_IP_REASS_DYNFIELD_NAME,
+ .size = sizeof(rte_eth_ip_reass_dynfield_t),
+ .align = __alignof__(rte_eth_ip_reass_dynfield_t),
+ };
+ static const struct rte_mbuf_dynflag ip_reass_dynflag = {
+ .name = RTE_ETH_IP_REASS_INCOMPLETE_DYNFLAG_NAME,
+ };
+ int offset;
+
+ offset = rte_mbuf_dynfield_register(&field_desc);
+ if (offset < 0)
+ return -1;
+ if (field_offset != NULL)
+ *field_offset = offset;
+
+ offset = rte_mbuf_dynflag_register(&ip_reass_dynflag);
+ if (offset < 0)
+ return -1;
+ if (flag_offset != NULL)
+ *flag_offset = offset;
+
+ return 0;
+}
+
RTE_LOG_REGISTER_DEFAULT(rte_eth_dev_logtype, INFO);
RTE_INIT(ethdev_init_telemetry)
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index 53af158bcb..a6b43bcf2c 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -5268,6 +5268,27 @@ __rte_experimental
int rte_eth_ip_reassembly_conf_set(uint16_t port_id,
struct rte_eth_ip_reass_params *conf);
+#define RTE_ETH_IP_REASS_DYNFIELD_NAME "rte_eth_ip_reass_dynfield"
+#define RTE_ETH_IP_REASS_INCOMPLETE_DYNFLAG_NAME "rte_eth_ip_reass_incomplete_dynflag"
+
+/**
+ * In case of IP reassembly offload failure, ol_flags in mbuf will be set
+ * with RTE_MBUF_F_RX_IPREASSEMBLY_INCOMPLETE and packets will be returned
+ * without alteration. The application can retrieve the attached fragments
+ * using mbuf dynamic field.
+ */
+typedef struct {
+ /**
+ * Next fragment packet. Application should fetch dynamic field of
+ * each fragment until a NULL is received and nb_frags is 0.
+ */
+ struct rte_mbuf *next_frag;
+ /** Time spent(in ms) by HW in waiting for further fragments. */
+ uint16_t time_spent;
+ /** Number of more fragments attached in mbuf dynamic fields. */
+ uint16_t nb_frags;
+} rte_eth_ip_reass_dynfield_t;
+
#include <rte_ethdev_core.h>
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index ad829dd47e..8b7578471a 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -283,6 +283,7 @@ INTERNAL {
rte_eth_hairpin_queue_peer_bind;
rte_eth_hairpin_queue_peer_unbind;
rte_eth_hairpin_queue_peer_update;
+ rte_eth_ip_reass_dynfield_register;
rte_eth_representor_id_get;
rte_eth_switch_domain_alloc;
rte_eth_switch_domain_free;
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* [PATCH v2 4/4] security: add IPsec option for IP reassembly
2022-01-20 16:26 ` [PATCH v2 0/4] " Akhil Goyal
` (2 preceding siblings ...)
2022-01-20 16:26 ` [PATCH v2 3/4] ethdev: add mbuf dynfield for incomplete IP reassembly Akhil Goyal
@ 2022-01-20 16:26 ` Akhil Goyal
2022-01-30 17:59 ` [PATCH v3 0/4] ethdev: introduce IP reassembly offload Akhil Goyal
4 siblings, 0 replies; 184+ messages in thread
From: Akhil Goyal @ 2022-01-20 16:26 UTC (permalink / raw)
To: dev
Cc: anoobj, radu.nicolau, declan.doherty, hemant.agrawal, matan,
konstantin.ananyev, thomas, ferruh.yigit, andrew.rybchenko,
olivier.matz, rosen.xu, jerinj, Akhil Goyal
A new option is added in the IPsec SA options to enable the driver to
attempt reassembly of inbound fragmented packets.
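Because the new bit is carved out of reserved_opts, the options struct keeps its 32-bit layout. A minimal mirror (the real rte_security_ipsec_sa_options has many more option bits, so its reserved_opts shrinks from 18 to 17) shows the idea:

```c
#include <stdint.h>

/* Simplified mirror of rte_security_ipsec_sa_options: one new option
 * bit is taken from the reserved pool, so the overall 32-bit layout
 * and hence the ABI size stay unchanged. */
struct sa_options_mirror {
	uint32_t l4_csum_enable : 1;
	uint32_t reass_en : 1;       /* new: try reassembly for this SA */
	uint32_t reserved_opts : 30; /* real struct: 17 bits + other options */
};
```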
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
---
devtools/libabigail.abignore | 14 ++++++++++++++
lib/security/rte_security.h | 12 +++++++++++-
2 files changed, 25 insertions(+), 1 deletion(-)
diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
index 90f449c43a..c6e304282f 100644
--- a/devtools/libabigail.abignore
+++ b/devtools/libabigail.abignore
@@ -16,3 +16,17 @@
[suppress_type]
name = rte_eth_dev_info
has_data_member_inserted_between = {offset_of(reserved_64s), end}
+
+; Ignore fields inserted in place of reserved_opts of rte_security_ipsec_sa_options
+[suppress_type]
+ name = rte_ipsec_sa_prm
+ name = rte_security_ipsec_sa_options
+ has_data_member_inserted_between = {offset_of(reserved_opts), end}
+
+[suppress_type]
+ name = rte_security_capability
+ has_data_member_inserted_between = {offset_of(reserved_opts), (offset_of(reserved_opts) + 18)}
+
+[suppress_type]
+ name = rte_security_session_conf
+ has_data_member_inserted_between = {offset_of(reserved_opts), (offset_of(reserved_opts) + 18)}
diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
index 1228b6c8b1..168b837a82 100644
--- a/lib/security/rte_security.h
+++ b/lib/security/rte_security.h
@@ -264,6 +264,16 @@ struct rte_security_ipsec_sa_options {
*/
uint32_t l4_csum_enable : 1;
+ /** Enable reassembly on incoming packets.
+ *
+ * * 1: Attempt reassembly of encrypted IP packets for this SA,
+ * if supported by the driver. This feature works only if the
+ * RTE_ETH_RX_OFFLOAD_IP_REASSEMBLY Rx offload is enabled on the
+ * inline Ethernet device.
+ * * 0: Disable reassembly of packets (default).
+ */
+ uint32_t reass_en : 1;
+
/** Reserved bit fields for future extension
*
* User should ensure reserved_opts is cleared as it may change in
@@ -271,7 +281,7 @@ struct rte_security_ipsec_sa_options {
*
* Note: Reduce number of bits in reserved_opts for every new option.
*/
- uint32_t reserved_opts : 18;
+ uint32_t reserved_opts : 17;
};
/** IPSec security association direction */
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* Re: [PATCH v2 1/4] ethdev: introduce IP reassembly offload
2022-01-20 16:26 ` [PATCH v2 1/4] " Akhil Goyal
@ 2022-01-20 16:45 ` Stephen Hemminger
2022-01-20 17:11 ` [EXT] " Akhil Goyal
0 siblings, 1 reply; 184+ messages in thread
From: Stephen Hemminger @ 2022-01-20 16:45 UTC (permalink / raw)
To: Akhil Goyal
Cc: dev, anoobj, radu.nicolau, declan.doherty, hemant.agrawal, matan,
konstantin.ananyev, thomas, ferruh.yigit, andrew.rybchenko,
olivier.matz, rosen.xu, jerinj
On Thu, 20 Jan 2022 21:56:24 +0530
Akhil Goyal <gakhil@marvell.com> wrote:
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this structure may change without prior notice.
> + *
> + * A structure used to set IP reassembly configuration.
> + *
> + * If RTE_ETH_RX_OFFLOAD_IP_REASSEMBLY flag is set in offloads field,
> + * the PMD will attempt IP reassembly for the received packets as per
> + * properties defined in this structure:
> + *
> + */
> +struct rte_eth_ip_reass_params {
> + /** Maximum time in ms which PMD can wait for other fragments. */
> + uint32_t reass_timeout;
> + /** Maximum number of fragments that can be reassembled. */
> + uint16_t max_frags;
> + /**
> + * Flags to enable reassembly of packet types -
> + * RTE_ETH_DEV_REASSEMBLY_F_xxx.
> + */
> + uint16_t flags;
> +};
> +
Actually, this is not experimental. You are embedding this in dev_info
and dev_info is not experimental; therefore the reassembly parameters
can never change without breaking ABI of dev_info.
^ permalink raw reply [flat|nested] 184+ messages in thread
* [PATCH v2 0/4] app/test: add inline IPsec and reassembly cases
2022-01-03 15:08 ` [PATCH 5/8] app/test: add unit cases for inline IPsec offload Akhil Goyal
@ 2022-01-20 16:48 ` Akhil Goyal
2022-01-20 16:48 ` [PATCH v2 1/4] app/test: add unit cases for inline IPsec offload Akhil Goyal
` (4 more replies)
0 siblings, 5 replies; 184+ messages in thread
From: Akhil Goyal @ 2022-01-20 16:48 UTC (permalink / raw)
To: dev
Cc: anoobj, radu.nicolau, declan.doherty, hemant.agrawal, matan,
konstantin.ananyev, thomas, ferruh.yigit, andrew.rybchenko,
olivier.matz, rosen.xu, jerinj, Akhil Goyal
IP reassembly RX offload is introduced in [1].
This patchset tests the IP reassembly RX offload as well as other
inline IPsec cases which need to be verified before testing
IP reassembly in the inline inbound path.
In this app, plain IP packets (with or without IP fragments) are sent
on one interface for outbound processing and are then received back
on the same interface using loopback mode.
On reception, the packets undergo inline inbound IPsec processing,
and fragmented packets are reassembled before being delivered to the
driver/app.
v1 of this patchset was sent along with the ethdev changes in [2].
v2 is split so that it can be reviewed separately.
changes in v2:
- added IPsec burst mode case
- updated as per the latest ethdev changes in [1].
[1] http://patches.dpdk.org/project/dpdk/list/?series=21283
[2] http://patches.dpdk.org/project/dpdk/list/?series=21052
Akhil Goyal (4):
app/test: add unit cases for inline IPsec offload
app/test: add IP reassembly case with no frags
app/test: add IP reassembly cases with multiple fragments
app/test: add IP reassembly negative cases
MAINTAINERS | 2 +-
app/test/meson.build | 1 +
app/test/test_security_inline_proto.c | 1299 +++++++++++++++++
app/test/test_security_inline_proto_vectors.h | 778 ++++++++++
4 files changed, 2079 insertions(+), 1 deletion(-)
create mode 100644 app/test/test_security_inline_proto.c
create mode 100644 app/test/test_security_inline_proto_vectors.h
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* [PATCH v2 1/4] app/test: add unit cases for inline IPsec offload
2022-01-20 16:48 ` [PATCH v2 0/4] app/test: add inline IPsec and reassembly cases Akhil Goyal
@ 2022-01-20 16:48 ` Akhil Goyal
2022-01-20 16:48 ` [PATCH v2 2/4] app/test: add IP reassembly case with no frags Akhil Goyal
` (3 subsequent siblings)
4 siblings, 0 replies; 184+ messages in thread
From: Akhil Goyal @ 2022-01-20 16:48 UTC (permalink / raw)
To: dev
Cc: anoobj, radu.nicolau, declan.doherty, hemant.agrawal, matan,
konstantin.ananyev, thomas, ferruh.yigit, andrew.rybchenko,
olivier.matz, rosen.xu, jerinj, Akhil Goyal, Nithin Dabilpuram
A new test suite is added to the test app to exercise inline IPsec
protocol offload. In this patch, a couple of predefined plaintext and
ciphertext test vectors are used to verify IPsec functionality without
the need for external traffic generators. The transmitted packet is
looped back onto the same interface, received, and matched against the
expected output.
The test suite can be extended further with other functional test cases.
The testsuite can be run using:
RTE> inline_ipsec_autotest
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
MAINTAINERS | 2 +-
app/test/meson.build | 1 +
app/test/test_security_inline_proto.c | 758 ++++++++++++++++++
app/test/test_security_inline_proto_vectors.h | 185 +++++
4 files changed, 945 insertions(+), 1 deletion(-)
create mode 100644 app/test/test_security_inline_proto.c
create mode 100644 app/test/test_security_inline_proto_vectors.h
diff --git a/MAINTAINERS b/MAINTAINERS
index f46cec0c55..832bff3609 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -439,7 +439,7 @@ M: Declan Doherty <declan.doherty@intel.com>
T: git://dpdk.org/next/dpdk-next-crypto
F: lib/security/
F: doc/guides/prog_guide/rte_security.rst
-F: app/test/test_security.c
+F: app/test/test_security*
Compression API - EXPERIMENTAL
M: Fan Zhang <roy.fan.zhang@intel.com>
diff --git a/app/test/meson.build b/app/test/meson.build
index 344a609a4d..2161afa7be 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -131,6 +131,7 @@ test_sources = files(
'test_rwlock.c',
'test_sched.c',
'test_security.c',
+ 'test_security_inline_proto.c',
'test_service_cores.c',
'test_spinlock.c',
'test_stack.c',
diff --git a/app/test/test_security_inline_proto.c b/app/test/test_security_inline_proto.c
new file mode 100644
index 0000000000..4738792cb8
--- /dev/null
+++ b/app/test/test_security_inline_proto.c
@@ -0,0 +1,758 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+
+#include <stdio.h>
+#include <inttypes.h>
+#include <signal.h>
+#include <unistd.h>
+#include <rte_cycles.h>
+#include <rte_ethdev.h>
+#include <rte_security.h>
+#include <rte_ipsec.h>
+#include <rte_byteorder.h>
+#include <rte_atomic.h>
+#include <rte_malloc.h>
+#include "test_security_inline_proto_vectors.h"
+#include "test.h"
+
+#define NB_ETHPORTS_USED (1)
+#define NB_SOCKETS (2)
+#define MEMPOOL_CACHE_SIZE 32
+#define MAX_PKT_BURST (32)
+#define RTE_TEST_RX_DESC_DEFAULT (1024)
+#define RTE_TEST_TX_DESC_DEFAULT (1024)
+#define RTE_PORT_ALL (~(uint16_t)0x0)
+
+/*
+ * RX and TX Prefetch, Host, and Write-back threshold values should be
+ * carefully set for optimal performance. Consult the network
+ * controller's datasheet and supporting DPDK documentation for guidance
+ * on how these parameters should be set.
+ */
+#define RX_PTHRESH 8 /**< Default values of RX prefetch threshold reg. */
+#define RX_HTHRESH 8 /**< Default values of RX host threshold reg. */
+#define RX_WTHRESH 0 /**< Default values of RX write-back threshold reg. */
+
+#define TX_PTHRESH 32 /**< Default values of TX prefetch threshold reg. */
+#define TX_HTHRESH 0 /**< Default values of TX host threshold reg. */
+#define TX_WTHRESH 0 /**< Default values of TX write-back threshold reg. */
+
+#define MAX_TRAFFIC_BURST 2048
+
+#define NB_MBUF 1024
+
+#define APP_REASS_TIMEOUT 10
+
+static struct rte_mempool *mbufpool[NB_SOCKETS];
+static struct rte_mempool *sess_pool[NB_SOCKETS];
+static struct rte_mempool *sess_priv_pool[NB_SOCKETS];
+/* ethernet addresses of ports */
+static struct rte_ether_addr ports_eth_addr[RTE_MAX_ETHPORTS];
+
+static struct rte_eth_conf port_conf = {
+ .rxmode = {
+ .mq_mode = RTE_ETH_MQ_RX_NONE,
+ .split_hdr_size = 0,
+ .offloads = RTE_ETH_RX_OFFLOAD_IP_REASSEMBLY |
+ RTE_ETH_RX_OFFLOAD_CHECKSUM |
+ RTE_ETH_RX_OFFLOAD_SECURITY,
+ },
+ .txmode = {
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
+ .offloads = RTE_ETH_TX_OFFLOAD_SECURITY |
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE,
+ },
+ .lpbk_mode = 1, /* enable loopback */
+};
+
+static struct rte_eth_rxconf rx_conf = {
+ .rx_thresh = {
+ .pthresh = RX_PTHRESH,
+ .hthresh = RX_HTHRESH,
+ .wthresh = RX_WTHRESH,
+ },
+ .rx_free_thresh = 32,
+};
+
+static struct rte_eth_txconf tx_conf = {
+ .tx_thresh = {
+ .pthresh = TX_PTHRESH,
+ .hthresh = TX_HTHRESH,
+ .wthresh = TX_WTHRESH,
+ },
+ .tx_free_thresh = 32, /* Use PMD default values */
+ .tx_rs_thresh = 32, /* Use PMD default values */
+};
+
+enum {
+ LCORE_INVALID = 0,
+ LCORE_AVAIL,
+ LCORE_USED,
+};
+
+struct lcore_cfg {
+ uint8_t status;
+ uint8_t socketid;
+ uint16_t nb_ports;
+ uint16_t port;
+} __rte_cache_aligned;
+
+struct lcore_cfg lcore_cfg;
+
+static uint64_t link_mbps;
+
+static struct rte_flow *default_flow[RTE_MAX_ETHPORTS];
+
+/* Create Inline IPsec session */
+static int
+create_inline_ipsec_session(struct ipsec_session_data *sa,
+ uint16_t portid, struct rte_ipsec_session *ips,
+ enum rte_security_ipsec_sa_direction dir,
+ enum rte_security_ipsec_tunnel_type tun_type)
+{
+ int32_t ret = 0;
+ struct rte_security_ctx *sec_ctx;
+ uint32_t src_v4 = rte_cpu_to_be_32(RTE_IPV4(192, 168, 1, 2));
+ uint32_t dst_v4 = rte_cpu_to_be_32(RTE_IPV4(192, 168, 1, 1));
+ uint16_t src_v6[8] = {0x2607, 0xf8b0, 0x400c, 0x0c03, 0x0000, 0x0000,
+ 0x0000, 0x001a};
+ uint16_t dst_v6[8] = {0x2001, 0x0470, 0xe5bf, 0xdead, 0x4957, 0x2174,
+ 0xe82c, 0x4887};
+ struct rte_security_session_conf sess_conf = {
+ .action_type = RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL,
+ .protocol = RTE_SECURITY_PROTOCOL_IPSEC,
+ .ipsec = sa->ipsec_xform,
+ .crypto_xform = &sa->xform.aead,
+ .userdata = NULL,
+ };
+ sess_conf.ipsec.direction = dir;
+
+ const struct rte_security_capability *sec_cap;
+
+ sec_ctx = (struct rte_security_ctx *)
+ rte_eth_dev_get_sec_ctx(portid);
+
+ if (sec_ctx == NULL) {
+ printf("Ethernet device doesn't support security features.\n");
+ return TEST_SKIPPED;
+ }
+
+ sess_conf.crypto_xform->aead.key.data = sa->key.data;
+
+ /* Save SA as userdata for the security session. When
+ * the packet is received, this userdata will be
+ * retrieved using the metadata from the packet.
+ *
+ * The PMD is expected to set similar metadata for other
+ * operations, like rte_eth_event, which are tied to the
+ * security session. In such cases, the userdata can be
+ * retrieved to uniquely identify the security
+ * parameters in use.
+ */
+
+ sess_conf.userdata = (void *) sa;
+ sess_conf.ipsec.tunnel.type = tun_type;
+ if (tun_type == RTE_SECURITY_IPSEC_TUNNEL_IPV4) {
+ memcpy(&sess_conf.ipsec.tunnel.ipv4.src_ip, &src_v4,
+ sizeof(src_v4));
+ memcpy(&sess_conf.ipsec.tunnel.ipv4.dst_ip, &dst_v4,
+ sizeof(dst_v4));
+ } else {
+ memcpy(&sess_conf.ipsec.tunnel.ipv6.src_addr, &src_v6,
+ sizeof(src_v6));
+ memcpy(&sess_conf.ipsec.tunnel.ipv6.dst_addr, &dst_v6,
+ sizeof(dst_v6));
+ }
+ ips->security.ses = rte_security_session_create(sec_ctx,
+ &sess_conf, sess_pool[lcore_cfg.socketid],
+ sess_priv_pool[lcore_cfg.socketid]);
+ if (ips->security.ses == NULL) {
+ printf("SEC Session init failed: err: %d\n", ret);
+ return TEST_FAILED;
+ }
+
+ sec_cap = rte_security_capabilities_get(sec_ctx);
+ if (sec_cap == NULL) {
+ printf("No capabilities registered\n");
+ return TEST_SKIPPED;
+ }
+
+ /* Iterate until the matching ESP tunnel capability is found */
+ while (sec_cap->action !=
+ RTE_SECURITY_ACTION_TYPE_NONE) {
+ if (sec_cap->action == sess_conf.action_type &&
+ sec_cap->protocol ==
+ RTE_SECURITY_PROTOCOL_IPSEC &&
+ sec_cap->ipsec.mode ==
+ sess_conf.ipsec.mode &&
+ sec_cap->ipsec.direction == dir)
+ break;
+ sec_cap++;
+ }
+
+ if (sec_cap->action == RTE_SECURITY_ACTION_TYPE_NONE) {
+ printf("No suitable security capability found\n");
+ return TEST_SKIPPED;
+ }
+
+ ips->security.ol_flags = sec_cap->ol_flags;
+ ips->security.ctx = sec_ctx;
+
+ return 0;
+}
+
+/* Check the link status of all ports for up to 3s, and print the final status */
+static void
+check_all_ports_link_status(uint16_t port_num, uint32_t port_mask)
+{
+#define CHECK_INTERVAL 100 /* 100ms */
+#define MAX_CHECK_TIME 30 /* 3s (30 * 100ms) in total */
+ uint16_t portid;
+ uint8_t count, all_ports_up, print_flag = 0;
+ struct rte_eth_link link;
+ int ret;
+ char link_status[RTE_ETH_LINK_MAX_STR_LEN];
+
+ printf("Checking link statuses...\n");
+ fflush(stdout);
+ for (count = 0; count <= MAX_CHECK_TIME; count++) {
+ all_ports_up = 1;
+ for (portid = 0; portid < port_num; portid++) {
+ if ((port_mask & (1 << portid)) == 0)
+ continue;
+ memset(&link, 0, sizeof(link));
+ ret = rte_eth_link_get_nowait(portid, &link);
+ if (ret < 0) {
+ all_ports_up = 0;
+ if (print_flag == 1)
+ printf("Port %u link get failed: %s\n",
+ portid, rte_strerror(-ret));
+ continue;
+ }
+
+ /* print link status if flag set */
+ if (print_flag == 1) {
+ if (link.link_status && link_mbps == 0)
+ link_mbps = link.link_speed;
+
+ rte_eth_link_to_str(link_status,
+ sizeof(link_status), &link);
+ printf("Port %d %s\n", portid, link_status);
+ continue;
+ }
+ /* clear all_ports_up flag if any link down */
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
+ all_ports_up = 0;
+ break;
+ }
+ }
+ /* after finally printing all link status, get out */
+ if (print_flag == 1)
+ break;
+
+ if (all_ports_up == 0) {
+ fflush(stdout);
+ rte_delay_ms(CHECK_INTERVAL);
+ }
+
+ /* set the print_flag if all ports up or timeout */
+ if (all_ports_up == 1 || count == (MAX_CHECK_TIME - 1))
+ print_flag = 1;
+ }
+}
+
+static void
+print_ethaddr(const char *name, const struct rte_ether_addr *eth_addr)
+{
+ char buf[RTE_ETHER_ADDR_FMT_SIZE];
+ rte_ether_format_addr(buf, RTE_ETHER_ADDR_FMT_SIZE, eth_addr);
+ printf("%s%s", name, buf);
+}
+
+static void
+copy_buf_to_pkt_segs(void *buf, unsigned int len,
+ struct rte_mbuf *pkt, unsigned int offset)
+{
+ struct rte_mbuf *seg;
+ void *seg_buf;
+ unsigned int copy_len;
+
+ seg = pkt;
+ while (offset >= seg->data_len) {
+ offset -= seg->data_len;
+ seg = seg->next;
+ }
+ copy_len = seg->data_len - offset;
+ seg_buf = rte_pktmbuf_mtod_offset(seg, char *, offset);
+ while (len > copy_len) {
+ rte_memcpy(seg_buf, buf, (size_t) copy_len);
+ len -= copy_len;
+ buf = ((char *) buf + copy_len);
+ seg = seg->next;
+ seg_buf = rte_pktmbuf_mtod(seg, void *);
+ }
+ rte_memcpy(seg_buf, buf, (size_t) len);
+}
+
+static inline void
+copy_buf_to_pkt(void *buf, unsigned int len,
+ struct rte_mbuf *pkt, unsigned int offset)
+{
+ if (offset + len <= pkt->data_len) {
+ rte_memcpy(rte_pktmbuf_mtod_offset(pkt, char *, offset), buf,
+ (size_t) len);
+ return;
+ }
+ copy_buf_to_pkt_segs(buf, len, pkt, offset);
+}
+
+static inline int
+init_traffic(struct rte_mempool *mp,
+ struct rte_mbuf **pkts_burst,
+ struct ipsec_test_packet *vectors[],
+ uint32_t nb_pkts)
+{
+ struct rte_mbuf *pkt;
+ uint32_t i;
+
+ for (i = 0; i < nb_pkts; i++) {
+ pkt = rte_pktmbuf_alloc(mp);
+ if (pkt == NULL)
+ return TEST_FAILED;
+
+ pkt->data_len = vectors[i]->len;
+ pkt->pkt_len = vectors[i]->len;
+ copy_buf_to_pkt(vectors[i]->data, vectors[i]->len,
+ pkt, vectors[i]->l2_offset);
+
+ pkts_burst[i] = pkt;
+ }
+ return i;
+}
+
+static int
+init_lcore(void)
+{
+ unsigned int lcore_id;
+
+ for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
+ lcore_cfg.socketid =
+ rte_lcore_to_socket_id(lcore_id);
+ if (rte_lcore_is_enabled(lcore_id) == 0) {
+ lcore_cfg.status = LCORE_INVALID;
+ continue;
+ } else {
+ lcore_cfg.status = LCORE_AVAIL;
+ break;
+ }
+ }
+ return 0;
+}
+
+static int
+init_mempools(unsigned int nb_mbuf)
+{
+ struct rte_security_ctx *sec_ctx;
+ int socketid;
+ unsigned int lcore_id;
+ uint16_t nb_sess = 512;
+ uint32_t sess_sz;
+ char s[64];
+
+ for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
+ if (rte_lcore_is_enabled(lcore_id) == 0)
+ continue;
+
+ socketid = rte_lcore_to_socket_id(lcore_id);
+ if (socketid >= NB_SOCKETS)
+ printf("Socket %d of lcore %u is out of range %d\n",
+ socketid, lcore_id, NB_SOCKETS);
+
+ if (mbufpool[socketid] == NULL) {
+ snprintf(s, sizeof(s), "mbuf_pool_%d", socketid);
+ mbufpool[socketid] = rte_pktmbuf_pool_create(s, nb_mbuf,
+ MEMPOOL_CACHE_SIZE, 0,
+ RTE_MBUF_DEFAULT_BUF_SIZE, socketid);
+ if (mbufpool[socketid] == NULL)
+ printf("Cannot init mbuf pool on socket %d\n",
+ socketid);
+ printf("Allocated mbuf pool on socket %d\n", socketid);
+ }
+
+ sec_ctx = rte_eth_dev_get_sec_ctx(lcore_cfg.port);
+ if (sec_ctx == NULL)
+ continue;
+
+ sess_sz = rte_security_session_get_size(sec_ctx);
+ if (sess_pool[socketid] == NULL) {
+ snprintf(s, sizeof(s), "sess_pool_%d", socketid);
+ sess_pool[socketid] =
+ rte_mempool_create(s, nb_sess,
+ sess_sz,
+ MEMPOOL_CACHE_SIZE, 0,
+ NULL, NULL, NULL, NULL,
+ socketid, 0);
+ if (sess_pool[socketid] == NULL) {
+ printf("Cannot init sess pool on socket %d\n",
+ socketid);
+ return TEST_FAILED;
+ }
+ printf("Allocated sess pool on socket %d\n", socketid);
+ }
+ if (sess_priv_pool[socketid] == NULL) {
+ snprintf(s, sizeof(s), "sess_priv_pool_%d", socketid);
+ sess_priv_pool[socketid] =
+ rte_mempool_create(s, nb_sess,
+ sess_sz,
+ MEMPOOL_CACHE_SIZE, 0,
+ NULL, NULL, NULL, NULL,
+ socketid, 0);
+ if (sess_priv_pool[socketid] == NULL) {
+ printf("Cannot init sess_priv pool on socket %d\n",
+ socketid);
+ return TEST_FAILED;
+ }
+ printf("Allocated sess_priv pool on socket %d\n",
+ socketid);
+ }
+ }
+ return 0;
+}
+
+static void
+create_default_flow(uint16_t port_id)
+{
+ struct rte_flow_action action[2];
+ struct rte_flow_item pattern[2];
+ struct rte_flow_attr attr = {0};
+ struct rte_flow_error err;
+ struct rte_flow *flow;
+ int ret;
+
+ /* Add the default rte_flow to enable SECURITY for all ESP packets */
+
+ pattern[0].type = RTE_FLOW_ITEM_TYPE_ESP;
+ pattern[0].spec = NULL;
+ pattern[0].mask = NULL;
+ pattern[0].last = NULL;
+ pattern[1].type = RTE_FLOW_ITEM_TYPE_END;
+
+ action[0].type = RTE_FLOW_ACTION_TYPE_SECURITY;
+ action[0].conf = NULL;
+ action[1].type = RTE_FLOW_ACTION_TYPE_END;
+ action[1].conf = NULL;
+
+ attr.ingress = 1;
+
+ ret = rte_flow_validate(port_id, &attr, pattern, action, &err);
+ if (ret)
+ return;
+
+ flow = rte_flow_create(port_id, &attr, pattern, action, &err);
+ if (flow == NULL) {
+ printf("\nDefault flow rule create failed\n");
+ return;
+ }
+
+ default_flow[port_id] = flow;
+}
+
+static void
+destroy_default_flow(uint16_t port_id)
+{
+ struct rte_flow_error err;
+ int ret;
+ if (!default_flow[port_id])
+ return;
+ ret = rte_flow_destroy(port_id, default_flow[port_id], &err);
+ if (ret) {
+ printf("\nDefault flow rule destroy failed\n");
+ return;
+ }
+ default_flow[port_id] = NULL;
+}
+
+struct rte_mbuf **tx_pkts_burst;
+struct rte_mbuf **rx_pkts_burst;
+
+static int
+test_ipsec(struct reassembly_vector *vector,
+ enum rte_security_ipsec_sa_direction dir,
+ enum rte_security_ipsec_tunnel_type tun_type)
+{
+ unsigned int i, portid, nb_rx = 0, nb_tx = 1;
+ struct rte_mbuf *pkts_burst[MAX_PKT_BURST];
+ struct rte_eth_dev_info dev_info = {0};
+ struct rte_ipsec_session ips = {0};
+
+ portid = lcore_cfg.port;
+ rte_eth_dev_info_get(portid, &dev_info);
+ if (dev_info.reass_capa.max_frags < nb_tx)
+ return TEST_SKIPPED;
+
+ init_traffic(mbufpool[lcore_cfg.socketid],
+ tx_pkts_burst, vector->frags, nb_tx);
+
+ /* Create Inline IPsec session. */
+ if (create_inline_ipsec_session(vector->sa_data, portid, &ips, dir,
+ tun_type))
+ return TEST_FAILED;
+ if (dir == RTE_SECURITY_IPSEC_SA_DIR_INGRESS)
+ create_default_flow(portid);
+ else {
+ for (i = 0; i < nb_tx; i++) {
+ if (ips.security.ol_flags &
+ RTE_SECURITY_TX_OLOAD_NEED_MDATA)
+ rte_security_set_pkt_metadata(ips.security.ctx,
+ ips.security.ses, tx_pkts_burst[i], NULL);
+ tx_pkts_burst[i]->ol_flags |= RTE_MBUF_F_TX_SEC_OFFLOAD;
+ tx_pkts_burst[i]->l2_len = 14;
+ }
+ }
+
+ nb_tx = rte_eth_tx_burst(portid, 0, tx_pkts_burst, nb_tx);
+
+ rte_pause();
+
+ int j = 0;
+ do {
+ nb_rx = rte_eth_rx_burst(portid, 0, pkts_burst, MAX_PKT_BURST);
+ rte_delay_ms(100);
+ j++;
+ } while (nb_rx == 0 && j < 5);
+
+ destroy_default_flow(portid);
+
+ /* Destroy session so that other cases can create the session again */
+ rte_security_session_destroy(ips.security.ctx, ips.security.ses);
+
+ /* Compare results with known vectors. */
+ if (nb_rx == 1) {
+ if (memcmp(rte_pktmbuf_mtod(pkts_burst[0], char *),
+ vector->full_pkt->data,
+ (size_t) vector->full_pkt->len)) {
+ printf("\n====Inline IPsec case failed: Data Mismatch");
+ rte_hexdump(stdout, "received",
+ rte_pktmbuf_mtod(pkts_burst[0], char *),
+ vector->full_pkt->len);
+ rte_hexdump(stdout, "reference",
+ vector->full_pkt->data,
+ vector->full_pkt->len);
+ return TEST_FAILED;
+ }
+ return TEST_SUCCESS;
+ } else
+ return TEST_FAILED;
+}
+
+static int
+ut_setup_inline_ipsec(void)
+{
+ uint16_t portid = lcore_cfg.port;
+ int ret;
+
+ /* Start device */
+ ret = rte_eth_dev_start(portid);
+ if (ret < 0) {
+ printf("rte_eth_dev_start: err=%d, port=%d\n",
+ ret, portid);
+ return ret;
+ }
+ /* always enable promiscuous */
+ ret = rte_eth_promiscuous_enable(portid);
+ if (ret != 0) {
+ printf("rte_eth_promiscuous_enable: err=%s, port=%d\n",
+ rte_strerror(-ret), portid);
+ return ret;
+ }
+ lcore_cfg.port = portid;
+ check_all_ports_link_status(1, RTE_PORT_ALL);
+
+ return 0;
+}
+
+static void
+ut_teardown_inline_ipsec(void)
+{
+ uint16_t portid = lcore_cfg.port;
+ int socketid = lcore_cfg.socketid;
+ int ret;
+
+ /* port tear down */
+ RTE_ETH_FOREACH_DEV(portid) {
+ if (socketid != rte_eth_dev_socket_id(portid))
+ continue;
+
+ ret = rte_eth_dev_stop(portid);
+ if (ret != 0)
+ printf("rte_eth_dev_stop: err=%s, port=%u\n",
+ rte_strerror(-ret), portid);
+ }
+}
+
+static int
+testsuite_setup(void)
+{
+ uint16_t nb_rxd;
+ uint16_t nb_txd;
+ uint16_t nb_ports;
+ int socketid, ret;
+ uint16_t nb_rx_queue = 1, nb_tx_queue = 1;
+ uint16_t portid = lcore_cfg.port;
+ struct rte_eth_dev_info dev_info = {0};
+
+ printf("Start inline IPsec test.\n");
+
+ nb_ports = rte_eth_dev_count_avail();
+ if (nb_ports < NB_ETHPORTS_USED) {
+ printf("At least %u port(s) needed for test\n",
+ NB_ETHPORTS_USED);
+ return -1;
+ }
+
+ init_lcore();
+
+ init_mempools(NB_MBUF);
+
+ socketid = lcore_cfg.socketid;
+ if (tx_pkts_burst == NULL) {
+ tx_pkts_burst = (struct rte_mbuf **)
+ rte_calloc_socket("tx_buff",
+ MAX_TRAFFIC_BURST * nb_ports,
+ sizeof(void *),
+ RTE_CACHE_LINE_SIZE, socketid);
+ if (!tx_pkts_burst)
+ return -1;
+
+ rx_pkts_burst = (struct rte_mbuf **)
+ rte_calloc_socket("rx_buff",
+ MAX_TRAFFIC_BURST * nb_ports,
+ sizeof(void *),
+ RTE_CACHE_LINE_SIZE, socketid);
+ if (!rx_pkts_burst)
+ return -1;
+ }
+
+ printf("Generate %d packets @socket %d\n",
+ MAX_TRAFFIC_BURST * nb_ports, socketid);
+
+ nb_rxd = RTE_TEST_RX_DESC_DEFAULT;
+ nb_txd = RTE_TEST_TX_DESC_DEFAULT;
+
+ /* port configure */
+ ret = rte_eth_dev_configure(portid, nb_rx_queue,
+ nb_tx_queue, &port_conf);
+ if (ret < 0) {
+ printf("Cannot configure device: err=%d, port=%d\n",
+ ret, portid);
+ return ret;
+ }
+ ret = rte_eth_macaddr_get(portid, &ports_eth_addr[portid]);
+ if (ret < 0) {
+ printf("Cannot get mac address: err=%d, port=%d\n",
+ ret, portid);
+ return ret;
+ }
+ printf("Port %u ", portid);
+ print_ethaddr("Address:", &ports_eth_addr[portid]);
+ printf("\n");
+
+ /* tx queue setup */
+ ret = rte_eth_tx_queue_setup(portid, 0, nb_txd,
+ socketid, &tx_conf);
+ if (ret < 0) {
+ printf("rte_eth_tx_queue_setup: err=%d, port=%d\n",
+ ret, portid);
+ return ret;
+ }
+ /* rx queue setup */
+ ret = rte_eth_rx_queue_setup(portid, 0, nb_rxd,
+ socketid, &rx_conf,
+ mbufpool[socketid]);
+ if (ret < 0) {
+ printf("rte_eth_rx_queue_setup: err=%d, port=%d\n",
+ ret, portid);
+ return ret;
+ }
+
+ rte_eth_dev_info_get(portid, &dev_info);
+
+ if (dev_info.reass_capa.reass_timeout > APP_REASS_TIMEOUT) {
+ dev_info.reass_capa.reass_timeout = APP_REASS_TIMEOUT;
+ rte_eth_ip_reassembly_conf_set(portid, &dev_info.reass_capa);
+ }
+
+ return 0;
+}
+
+static void
+testsuite_teardown(void)
+{
+ int ret;
+ uint16_t portid = lcore_cfg.port;
+ uint16_t socketid = lcore_cfg.socketid;
+
+ /* port tear down */
+ RTE_ETH_FOREACH_DEV(portid) {
+ if (socketid != rte_eth_dev_socket_id(portid))
+ continue;
+
+ ret = rte_eth_dev_reset(portid);
+ if (ret != 0)
+ printf("rte_eth_dev_reset: err=%s, port=%u\n",
+ rte_strerror(-ret), portid);
+ }
+}
+static int
+test_ipsec_ipv4_encap_nofrag(void)
+{
+ struct reassembly_vector ipv4_nofrag_case = {
+ .sa_data = &conf_aes_128_gcm,
+ .full_pkt = &pkt_ipv4_gcm128_cipher,
+ .frags[0] = &pkt_ipv4_plain,
+ .nb_frags = 1,
+ };
+ return test_ipsec(&ipv4_nofrag_case,
+ RTE_SECURITY_IPSEC_SA_DIR_EGRESS,
+ RTE_SECURITY_IPSEC_TUNNEL_IPV4);
+}
+
+static int
+test_ipsec_ipv4_decap_nofrag(void)
+{
+ struct reassembly_vector ipv4_nofrag_case = {
+ .sa_data = &conf_aes_128_gcm,
+ .full_pkt = &pkt_ipv4_plain,
+ .frags[0] = &pkt_ipv4_gcm128_cipher,
+ .nb_frags = 1,
+ };
+ return test_ipsec(&ipv4_nofrag_case,
+ RTE_SECURITY_IPSEC_SA_DIR_INGRESS,
+ RTE_SECURITY_IPSEC_TUNNEL_IPV4);
+}
+
+static struct unit_test_suite inline_ipsec_testsuite = {
+ .suite_name = "Inline IPsec Ethernet Device Unit Test Suite",
+ .setup = testsuite_setup,
+ .teardown = testsuite_teardown,
+ .unit_test_cases = {
+ TEST_CASE_ST(ut_setup_inline_ipsec,
+ ut_teardown_inline_ipsec,
+ test_ipsec_ipv4_encap_nofrag),
+ TEST_CASE_ST(ut_setup_inline_ipsec,
+ ut_teardown_inline_ipsec,
+ test_ipsec_ipv4_decap_nofrag),
+
+ TEST_CASES_END() /**< NULL terminate unit test array */
+ }
+};
+
+static int
+test_inline_ipsec(void)
+{
+ return unit_test_suite_runner(&inline_ipsec_testsuite);
+}
+
+REGISTER_TEST_COMMAND(inline_ipsec_autotest, test_inline_ipsec);
diff --git a/app/test/test_security_inline_proto_vectors.h b/app/test/test_security_inline_proto_vectors.h
new file mode 100644
index 0000000000..08e6868b0d
--- /dev/null
+++ b/app/test/test_security_inline_proto_vectors.h
@@ -0,0 +1,185 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+#ifndef _TEST_INLINE_IPSEC_REASSEMBLY_VECTORS_H_
+#define _TEST_INLINE_IPSEC_REASSEMBLY_VECTORS_H_
+
+#define MAX_FRAG_LEN 1500
+#define MAX_FRAGS 6
+#define MAX_PKT_LEN (MAX_FRAG_LEN * MAX_FRAGS)
+struct ipsec_session_data {
+ struct {
+ uint8_t data[32];
+ } key;
+ struct {
+ uint8_t data[4];
+ unsigned int len;
+ } salt;
+ struct {
+ uint8_t data[16];
+ } iv;
+ struct rte_security_ipsec_xform ipsec_xform;
+ bool aead;
+ union {
+ struct {
+ struct rte_crypto_sym_xform cipher;
+ struct rte_crypto_sym_xform auth;
+ } chain;
+ struct rte_crypto_sym_xform aead;
+ } xform;
+};
+
+struct ipsec_test_packet {
+ uint32_t len;
+ uint32_t l2_offset;
+ uint32_t l3_offset;
+ uint32_t l4_offset;
+ uint8_t data[MAX_PKT_LEN];
+};
+
+struct reassembly_vector {
+ struct ipsec_session_data *sa_data;
+ struct ipsec_test_packet *full_pkt;
+ struct ipsec_test_packet *frags[MAX_FRAGS];
+ uint16_t nb_frags;
+};
+
+struct ipsec_test_packet pkt_ipv4_plain = {
+ .len = 76,
+ .l2_offset = 0,
+ .l3_offset = 14,
+ .l4_offset = 34,
+ .data = {
+ /* ETH */
+ 0xf1, 0xf1, 0xf1, 0xf1, 0xf1, 0xf1,
+ 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0x08, 0x00,
+
+ /* IP */
+ 0x45, 0x00, 0x00, 0x3e, 0x69, 0x8f, 0x00, 0x00,
+ 0x80, 0x11, 0x4d, 0xcc, 0xc0, 0xa8, 0x01, 0x02,
+ 0xc0, 0xa8, 0x01, 0x01,
+
+ /* UDP */
+ 0x0a, 0x98, 0x00, 0x35, 0x00, 0x2a, 0x23, 0x43,
+ 0xb2, 0xd0, 0x01, 0x00, 0x00, 0x01, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00, 0x03, 0x73, 0x69, 0x70,
+ 0x09, 0x63, 0x79, 0x62, 0x65, 0x72, 0x63, 0x69,
+ 0x74, 0x79, 0x02, 0x64, 0x6b, 0x00, 0x00, 0x01,
+ 0x00, 0x01,
+ },
+};
+
+struct ipsec_test_packet pkt_ipv4_gcm128_cipher = {
+ .len = 130,
+ .l2_offset = 0,
+ .l3_offset = 14,
+ .l4_offset = 34,
+ .data = {
+ /* ETH */
+ 0xf1, 0xf1, 0xf1, 0xf1, 0xf1, 0xf1,
+ 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0x08, 0x00,
+
+ /* IP - outer header */
+ 0x45, 0x00, 0x00, 0x74, 0x00, 0x01, 0x00, 0x00,
+ 0x40, 0x32, 0xf7, 0x03, 0xc0, 0xa8, 0x01, 0x02,
+ 0xc0, 0xa8, 0x01, 0x01,
+
+ /* ESP */
+ 0x00, 0x00, 0xa5, 0xf8, 0x00, 0x00, 0x00, 0x01,
+
+ /* IV */
+ 0xfa, 0xce, 0xdb, 0xad, 0xde, 0xca, 0xf8, 0x88,
+
+ /* Data */
+ 0xde, 0xb2, 0x2c, 0xd9, 0xb0, 0x7c, 0x72, 0xc1,
+ 0x6e, 0x3a, 0x65, 0xbe, 0xeb, 0x8d, 0xf3, 0x04,
+ 0xa5, 0xa5, 0x89, 0x7d, 0x33, 0xae, 0x53, 0x0f,
+ 0x1b, 0xa7, 0x6d, 0x5d, 0x11, 0x4d, 0x2a, 0x5c,
+ 0x3d, 0xe8, 0x18, 0x27, 0xc1, 0x0e, 0x9a, 0x4f,
+ 0x51, 0x33, 0x0d, 0x0e, 0xec, 0x41, 0x66, 0x42,
+ 0xcf, 0xbb, 0x85, 0xa5, 0xb4, 0x7e, 0x48, 0xa4,
+ 0xec, 0x3b, 0x9b, 0xa9, 0x5d, 0x91, 0x8b, 0xd4,
+ 0x29, 0xc7, 0x37, 0x57, 0x9f, 0xf1, 0x9e, 0x58,
+ 0xcf, 0xfc, 0x60, 0x7a, 0x3b, 0xce, 0x89, 0x94,
+ },
+};
+
+static inline void
+test_vector_payload_populate(struct ipsec_test_packet *pkt,
+ bool first_frag)
+{
+ uint32_t i = pkt->l4_offset;
+
+ /**
+ * For non-fragmented packets and first frag, skip 8 bytes from
+ * l4_offset for UDP header.
+ */
+ if (first_frag)
+ i += 8;
+
+ for (; i < pkt->len; i++)
+ pkt->data[i] = 0x58;
+}
+
+struct ipsec_session_data conf_aes_128_gcm = {
+ .key = {
+ .data = {
+ 0xfe, 0xff, 0xe9, 0x92, 0x86, 0x65, 0x73, 0x1c,
+ 0x6d, 0x6a, 0x8f, 0x94, 0x67, 0x30, 0x83, 0x08
+ },
+ },
+
+ .salt = {
+ .data = {
+ 0xca, 0xfe, 0xba, 0xbe
+ },
+ .len = 4,
+ },
+
+ .iv = {
+ .data = {
+ 0xfa, 0xce, 0xdb, 0xad, 0xde, 0xca, 0xf8, 0x88
+ },
+ },
+
+ .ipsec_xform = {
+ .spi = 0xa5f8,
+ .salt = 0xbebafeca,
+ .options.esn = 0,
+ .options.udp_encap = 0,
+ .options.copy_dscp = 0,
+ .options.copy_flabel = 0,
+ .options.copy_df = 0,
+ .options.dec_ttl = 0,
+ .options.ecn = 0,
+ .options.stats = 0,
+ .options.tunnel_hdr_verify = 0,
+ .options.ip_csum_enable = 0,
+ .options.l4_csum_enable = 0,
+ .options.reass_en = 1,
+ .direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS,
+ .proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
+ .mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
+ .tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4,
+ .replay_win_sz = 0,
+ },
+
+ .aead = true,
+
+ .xform = {
+ .aead = {
+ .next = NULL,
+ .type = RTE_CRYPTO_SYM_XFORM_AEAD,
+ .aead = {
+ .op = RTE_CRYPTO_AEAD_OP_ENCRYPT,
+ .algo = RTE_CRYPTO_AEAD_AES_GCM,
+ .key.length = 16,
+ .iv.length = 12,
+ .iv.offset = 0,
+ .digest_length = 16,
+ .aad_length = 12,
+ },
+ },
+ },
+};
+#endif
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* [PATCH v2 2/4] app/test: add IP reassembly case with no frags
2022-01-20 16:48 ` [PATCH v2 0/4] app/test: add inline IPsec and reassembly cases Akhil Goyal
2022-01-20 16:48 ` [PATCH v2 1/4] app/test: add unit cases for inline IPsec offload Akhil Goyal
@ 2022-01-20 16:48 ` Akhil Goyal
2022-01-20 16:48 ` [PATCH v2 3/4] app/test: add IP reassembly cases with multiple fragments Akhil Goyal
` (2 subsequent siblings)
4 siblings, 0 replies; 184+ messages in thread
From: Akhil Goyal @ 2022-01-20 16:48 UTC (permalink / raw)
To: dev
Cc: anoobj, radu.nicolau, declan.doherty, hemant.agrawal, matan,
konstantin.ananyev, thomas, ferruh.yigit, andrew.rybchenko,
olivier.matz, rosen.xu, jerinj, Akhil Goyal, Nithin Dabilpuram
The test_inline_ipsec test suite is extended to test IP reassembly of inbound
fragmented packets. The fragmented packet is sent on an interface
which encrypts it; the packet is then looped back on the
same interface, which decrypts it and attempts IP reassembly
of the decrypted fragments.
In this patch, a case is added for packets without fragmentation to
verify the complete path. Other cases are added in subsequent patches.
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
app/test/test_security_inline_proto.c | 325 ++++++++++++++++++
app/test/test_security_inline_proto_vectors.h | 1 +
2 files changed, 326 insertions(+)
diff --git a/app/test/test_security_inline_proto.c b/app/test/test_security_inline_proto.c
index 4738792cb8..9dc083369a 100644
--- a/app/test/test_security_inline_proto.c
+++ b/app/test/test_security_inline_proto.c
@@ -25,6 +25,8 @@
#define RTE_TEST_TX_DESC_DEFAULT (1024)
#define RTE_PORT_ALL (~(uint16_t)0x0)
+#define ENCAP_DECAP_BURST_SZ 33
+
/*
* RX and TX Prefetch, Host, and Write-back threshold values should be
* carefully set for optimal performance. Consult the network
@@ -103,6 +105,8 @@ struct lcore_cfg lcore_cfg;
static uint64_t link_mbps;
+static int ip_reass_dynfield_offset = -1;
+
static struct rte_flow *default_flow[RTE_MAX_ETHPORTS];
/* Create Inline IPsec session */
@@ -477,6 +481,293 @@ destroy_default_flow(uint16_t port_id)
struct rte_mbuf **tx_pkts_burst;
struct rte_mbuf **rx_pkts_burst;
+static int
+compare_pkt_data(struct rte_mbuf *m, uint8_t *ref, unsigned int tot_len)
+{
+ unsigned int len;
+ unsigned int nb_segs = m->nb_segs;
+ unsigned int matched = 0;
+ struct rte_mbuf *save = m;
+
+ while (m && nb_segs != 0) {
+ len = tot_len;
+ if (len > m->data_len)
+ len = m->data_len;
+ if (len != 0) {
+ if (memcmp(rte_pktmbuf_mtod(m, char *),
+ ref + matched, len)) {
+ printf("\n====Reassembly case failed: Data Mismatch");
+ rte_hexdump(stdout, "Reassembled",
+ rte_pktmbuf_mtod(m, char *),
+ len);
+ rte_hexdump(stdout, "reference",
+ ref + matched,
+ len);
+ return TEST_FAILED;
+ }
+ }
+ tot_len -= len;
+ matched += len;
+ m = m->next;
+ nb_segs--;
+ }
+
+ if (tot_len) {
+ printf("\n====Reassembly case failed: Data Missing %u",
+ tot_len);
+ printf("\n====nb_segs %u, tot_len %u", nb_segs, tot_len);
+ rte_pktmbuf_dump(stderr, save, -1);
+ return TEST_FAILED;
+ }
+ return TEST_SUCCESS;
+}
+
+static inline bool
+is_ip_reassembly_incomplete(struct rte_mbuf *mbuf)
+{
+ static uint64_t ip_reass_dynflag;
+ int ip_reass_dynflag_offset;
+
+ if (ip_reass_dynflag == 0) {
+ ip_reass_dynflag_offset = rte_mbuf_dynflag_lookup(
+ RTE_ETH_IP_REASS_INCOMPLETE_DYNFLAG_NAME, NULL);
+ if (ip_reass_dynflag_offset < 0)
+ return false;
+ ip_reass_dynflag = RTE_BIT64(ip_reass_dynflag_offset);
+ }
+
+ return (mbuf->ol_flags & ip_reass_dynflag) != 0;
+}
+
+static void
+free_mbuf(struct rte_mbuf *mbuf)
+{
+ rte_eth_ip_reass_dynfield_t dynfield;
+
+ if (!mbuf)
+ return;
+
+ if (!is_ip_reassembly_incomplete(mbuf)) {
+ rte_pktmbuf_free(mbuf);
+ } else {
+ if (ip_reass_dynfield_offset < 0)
+ return;
+
+ while (mbuf) {
+ dynfield = *RTE_MBUF_DYNFIELD(mbuf, ip_reass_dynfield_offset,
+ rte_eth_ip_reass_dynfield_t *);
+ rte_pktmbuf_free(mbuf);
+ mbuf = dynfield.next_frag;
+ }
+ }
+}
+
+
+static int
+get_and_verify_incomplete_frags(struct rte_mbuf *mbuf,
+ struct reassembly_vector *vector)
+{
+ rte_eth_ip_reass_dynfield_t *dynfield[MAX_PKT_BURST];
+ int j = 0, ret;
+ /**
+ * IP reassembly offload is incomplete, and fragments are listed in
+ * dynfield which can be reassembled in SW.
+ */
+ printf("\nHW IP Reassembly is not complete; attempt SW IP Reassembly,"
+ "\nMatching with original frags.");
+
+ if (ip_reass_dynfield_offset < 0)
+ return -1;
+
+ printf("\ncomparing frag: %d", j);
+ ret = compare_pkt_data(mbuf, vector->frags[j]->data,
+ vector->frags[j]->len);
+ if (ret)
+ return ret;
+ j++;
+ dynfield[j] = RTE_MBUF_DYNFIELD(mbuf, ip_reass_dynfield_offset,
+ rte_eth_ip_reass_dynfield_t *);
+ printf("\ncomparing frag: %d", j);
+ ret = compare_pkt_data(dynfield[j]->next_frag, vector->frags[j]->data,
+ vector->frags[j]->len);
+ if (ret)
+ return ret;
+
+ while ((dynfield[j]->nb_frags > 1) &&
+ is_ip_reassembly_incomplete(dynfield[j]->next_frag)) {
+ j++;
+ dynfield[j] = RTE_MBUF_DYNFIELD(dynfield[j-1]->next_frag,
+ ip_reass_dynfield_offset,
+ rte_eth_ip_reass_dynfield_t *);
+ printf("\ncomparing frag: %d", j);
+ ret = compare_pkt_data(dynfield[j]->next_frag,
+ vector->frags[j]->data, vector->frags[j]->len);
+ if (ret)
+ return ret;
+ }
+ return ret;
+}
+
+static int
+test_ipsec_encap_decap(struct reassembly_vector *vector,
+ enum rte_security_ipsec_tunnel_type tun_type)
+{
+ struct rte_ipsec_session out_ips[ENCAP_DECAP_BURST_SZ] = {0};
+ struct rte_ipsec_session in_ips[ENCAP_DECAP_BURST_SZ] = {0};
+ unsigned int nb_tx, burst_sz, nb_sent = 0;
+ struct rte_eth_dev_info dev_info = {0};
+ unsigned int i, portid, nb_rx = 0, j;
+ struct ipsec_session_data sa_data;
+ int ret = 0;
+
+ burst_sz = vector->burst ? ENCAP_DECAP_BURST_SZ : 1;
+
+ portid = lcore_cfg.port;
+ rte_eth_dev_info_get(portid, &dev_info);
+ if (dev_info.reass_capa.max_frags < vector->nb_frags)
+ return TEST_SKIPPED;
+
+ nb_tx = vector->nb_frags * burst_sz;
+ memset(tx_pkts_burst, 0, sizeof(tx_pkts_burst[0]) * nb_tx);
+ memset(rx_pkts_burst, 0, sizeof(rx_pkts_burst[0]) * nb_tx);
+
+ for (i = 0; i < nb_tx; i += vector->nb_frags) {
+ ret = init_traffic(mbufpool[lcore_cfg.socketid],
+ &tx_pkts_burst[i], vector->frags,
+ vector->nb_frags);
+ if (ret != vector->nb_frags) {
+ ret = -1;
+ goto out;
+ }
+ }
+
+ for (i = 0; i < burst_sz; i++) {
+ memcpy(&sa_data, vector->sa_data, sizeof(sa_data));
+ /* Update SPI for every new SA */
+ sa_data.ipsec_xform.spi += i;
+
+ /* Create Inline IPsec outbound session. */
+ ret = create_inline_ipsec_session(&sa_data, portid, &out_ips[i],
+ RTE_SECURITY_IPSEC_SA_DIR_EGRESS,
+ tun_type);
+ if (ret)
+ goto out;
+ }
+
+ j = 0;
+ for (i = 0; i < nb_tx; i++) {
+ if (out_ips[j].security.ol_flags &
+ RTE_SECURITY_TX_OLOAD_NEED_MDATA)
+ rte_security_set_pkt_metadata(out_ips[j].security.ctx,
+ out_ips[j].security.ses, tx_pkts_burst[i], NULL);
+ tx_pkts_burst[i]->ol_flags |= RTE_MBUF_F_TX_SEC_OFFLOAD;
+ tx_pkts_burst[i]->l2_len = RTE_ETHER_HDR_LEN;
+
+ /* Move to next SA after nb_frags */
+ if ((i + 1) % vector->nb_frags == 0)
+ j++;
+ }
+
+ for (i = 0; i < burst_sz; i++) {
+ memcpy(&sa_data, vector->sa_data, sizeof(sa_data));
+ /* Update SPI for every new SA */
+ sa_data.ipsec_xform.spi += i;
+
+ /* Create Inline IPsec inbound session. */
+ ret = create_inline_ipsec_session(&sa_data, portid, &in_ips[i],
+ RTE_SECURITY_IPSEC_SA_DIR_INGRESS,
+ tun_type);
+ if (ret)
+ goto out;
+ }
+
+ /* Retrieve reassembly dynfield offset if available */
+ if (ip_reass_dynfield_offset < 0 && vector->nb_frags > 1)
+ ip_reass_dynfield_offset = rte_mbuf_dynfield_lookup(
+ RTE_ETH_IP_REASS_DYNFIELD_NAME, NULL);
+
+
+ create_default_flow(portid);
+
+ nb_sent = rte_eth_tx_burst(portid, 0, tx_pkts_burst, nb_tx);
+ if (nb_sent != nb_tx) {
+ ret = -1;
+ printf("\nFailed to tx %u pkts", nb_tx);
+ goto out;
+ }
+
+ rte_delay_ms(100);
+
+ /* Retry few times before giving up */
+ nb_rx = 0;
+ j = 0;
+ do {
+ nb_rx += rte_eth_rx_burst(portid, 0, &rx_pkts_burst[nb_rx],
+ nb_tx - nb_rx);
+ j++;
+ if (nb_rx >= nb_tx)
+ break;
+ rte_delay_ms(100);
+ } while (j < 5 || !nb_rx);
+
+ /* Check for minimum number of Rx packets expected */
+ if ((vector->nb_frags == 1 && nb_rx != nb_tx) ||
+ (vector->nb_frags > 1 && nb_rx < burst_sz)) {
+ printf("\nreceived less Rx pkts(%u) pkts\n", nb_rx);
+ ret = TEST_FAILED;
+ goto out;
+ }
+
+ for (i = 0; i < nb_rx; i++) {
+ if (vector->nb_frags > 1 &&
+ is_ip_reassembly_incomplete(rx_pkts_burst[i])) {
+ ret = get_and_verify_incomplete_frags(rx_pkts_burst[i],
+ vector);
+ if (ret != TEST_SUCCESS)
+ break;
+ continue;
+ }
+
+ if (rx_pkts_burst[i]->ol_flags &
+ RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED ||
+ !(rx_pkts_burst[i]->ol_flags & RTE_MBUF_F_RX_SEC_OFFLOAD)) {
+ printf("\nsecurity offload failed\n");
+ ret = TEST_FAILED;
+ break;
+ }
+
+ if (vector->full_pkt->len != rx_pkts_burst[i]->pkt_len) {
+ printf("\nreassembled/decrypted packet length mismatch\n");
+ ret = TEST_FAILED;
+ break;
+ }
+ ret = compare_pkt_data(rx_pkts_burst[i],
+ vector->full_pkt->data,
+ vector->full_pkt->len);
+ if (ret != TEST_SUCCESS)
+ break;
+ }
+
+out:
+ destroy_default_flow(portid);
+
+ /* Clear session data. */
+ for (i = 0; i < burst_sz; i++) {
+ if (out_ips[i].security.ses)
+ rte_security_session_destroy(out_ips[i].security.ctx,
+ out_ips[i].security.ses);
+ if (in_ips[i].security.ses)
+ rte_security_session_destroy(in_ips[i].security.ctx,
+ in_ips[i].security.ses);
+ }
+
+ for (i = nb_sent; i < nb_tx; i++)
+ free_mbuf(tx_pkts_burst[i]);
+ for (i = 0; i < nb_rx; i++)
+ free_mbuf(rx_pkts_burst[i]);
+ return ret;
+}
+
static int
test_ipsec(struct reassembly_vector *vector,
enum rte_security_ipsec_sa_direction dir,
@@ -733,6 +1024,34 @@ test_ipsec_ipv4_decap_nofrag(void)
RTE_SECURITY_IPSEC_TUNNEL_IPV4);
}
+static int
+test_reassembly_ipv4_nofrag(void)
+{
+ struct reassembly_vector ipv4_nofrag_case = {
+ .sa_data = &conf_aes_128_gcm,
+ .full_pkt = &pkt_ipv4_plain,
+ .frags[0] = &pkt_ipv4_plain,
+ .nb_frags = 1,
+ };
+ return test_ipsec_encap_decap(&ipv4_nofrag_case,
+ RTE_SECURITY_IPSEC_TUNNEL_IPV4);
+}
+
+
+static int
+test_ipsec_ipv4_burst_encap_decap(void)
+{
+ struct reassembly_vector ipv4_nofrag_case = {
+ .sa_data = &conf_aes_128_gcm,
+ .full_pkt = &pkt_ipv4_plain,
+ .frags[0] = &pkt_ipv4_plain,
+ .nb_frags = 1,
+ .burst = true,
+ };
+ return test_ipsec_encap_decap(&ipv4_nofrag_case,
+ RTE_SECURITY_IPSEC_TUNNEL_IPV4);
+}
+
static struct unit_test_suite inline_ipsec_testsuite = {
.suite_name = "Inline IPsec Ethernet Device Unit Test Suite",
.setup = testsuite_setup,
@@ -744,6 +1063,12 @@ static struct unit_test_suite inline_ipsec_testsuite = {
TEST_CASE_ST(ut_setup_inline_ipsec,
ut_teardown_inline_ipsec,
test_ipsec_ipv4_decap_nofrag),
+ TEST_CASE_ST(ut_setup_inline_ipsec,
+ ut_teardown_inline_ipsec,
+ test_reassembly_ipv4_nofrag),
+ TEST_CASE_ST(ut_setup_inline_ipsec,
+ ut_teardown_inline_ipsec,
+ test_ipsec_ipv4_burst_encap_decap),
TEST_CASES_END() /**< NULL terminate unit test array */
}
diff --git a/app/test/test_security_inline_proto_vectors.h b/app/test/test_security_inline_proto_vectors.h
index 08e6868b0d..861c4fad48 100644
--- a/app/test/test_security_inline_proto_vectors.h
+++ b/app/test/test_security_inline_proto_vectors.h
@@ -42,6 +42,7 @@ struct reassembly_vector {
struct ipsec_test_packet *full_pkt;
struct ipsec_test_packet *frags[MAX_FRAGS];
uint16_t nb_frags;
+ bool burst;
};
struct ipsec_test_packet pkt_ipv4_plain = {
--
2.25.1
* [PATCH v2 3/4] app/test: add IP reassembly cases with multiple fragments
2022-01-20 16:48 ` [PATCH v2 0/4] app/test: add inline IPsec and reassembly cases Akhil Goyal
2022-01-20 16:48 ` [PATCH v2 1/4] app/test: add unit cases for inline IPsec offload Akhil Goyal
2022-01-20 16:48 ` [PATCH v2 2/4] app/test: add IP reassembly case with no frags Akhil Goyal
@ 2022-01-20 16:48 ` Akhil Goyal
2022-01-20 16:48 ` [PATCH v2 4/4] app/test: add IP reassembly negative cases Akhil Goyal
2022-02-17 17:23 ` [PATCH v3 0/4] app/test: add inline IPsec and reassembly cases Akhil Goyal
4 siblings, 0 replies; 184+ messages in thread
From: Akhil Goyal @ 2022-01-20 16:48 UTC (permalink / raw)
To: dev
Cc: anoobj, radu.nicolau, declan.doherty, hemant.agrawal, matan,
konstantin.ananyev, thomas, ferruh.yigit, andrew.rybchenko,
olivier.matz, rosen.xu, jerinj, Akhil Goyal
More cases are added to the test_inline_ipsec test suite to verify packets
with multiple IP(v4/v6) fragments. The fragments are encrypted
and then decrypted via inline IPsec processing, after which an attempt
is made to reassemble them. The reassembled packet
content is matched against the known test vectors.
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
---
app/test/test_security_inline_proto.c | 147 ++++-
app/test/test_security_inline_proto_vectors.h | 592 ++++++++++++++++++
2 files changed, 738 insertions(+), 1 deletion(-)
diff --git a/app/test/test_security_inline_proto.c b/app/test/test_security_inline_proto.c
index 9dc083369a..d05325b205 100644
--- a/app/test/test_security_inline_proto.c
+++ b/app/test/test_security_inline_proto.c
@@ -1037,7 +1037,6 @@ test_reassembly_ipv4_nofrag(void)
RTE_SECURITY_IPSEC_TUNNEL_IPV4);
}
-
static int
test_ipsec_ipv4_burst_encap_decap(void)
{
@@ -1052,6 +1051,134 @@ test_ipsec_ipv4_burst_encap_decap(void)
RTE_SECURITY_IPSEC_TUNNEL_IPV4);
}
+static int
+test_reassembly_ipv4_2frag(void)
+{
+ struct reassembly_vector ipv4_2frag_case = {
+ .sa_data = &conf_aes_128_gcm,
+ .full_pkt = &pkt_ipv4_udp_p1,
+ .frags[0] = &pkt_ipv4_udp_p1_f1,
+ .frags[1] = &pkt_ipv4_udp_p1_f2,
+ .nb_frags = 2,
+ };
+ test_vector_payload_populate(&pkt_ipv4_udp_p1, true);
+ test_vector_payload_populate(&pkt_ipv4_udp_p1_f1, true);
+ test_vector_payload_populate(&pkt_ipv4_udp_p1_f2, false);
+
+ return test_ipsec_encap_decap(&ipv4_2frag_case,
+ RTE_SECURITY_IPSEC_TUNNEL_IPV4);
+}
+
+static int
+test_reassembly_ipv6_2frag(void)
+{
+ struct reassembly_vector ipv6_2frag_case = {
+ .sa_data = &conf_aes_128_gcm,
+ .full_pkt = &pkt_ipv6_udp_p1,
+ .frags[0] = &pkt_ipv6_udp_p1_f1,
+ .frags[1] = &pkt_ipv6_udp_p1_f2,
+ .nb_frags = 2,
+ };
+ test_vector_payload_populate(&pkt_ipv6_udp_p1, true);
+ test_vector_payload_populate(&pkt_ipv6_udp_p1_f1, true);
+ test_vector_payload_populate(&pkt_ipv6_udp_p1_f2, false);
+
+ return test_ipsec_encap_decap(&ipv6_2frag_case,
+ RTE_SECURITY_IPSEC_TUNNEL_IPV6);
+}
+
+static int
+test_reassembly_ipv4_4frag(void)
+{
+ struct reassembly_vector ipv4_4frag_case = {
+ .sa_data = &conf_aes_128_gcm,
+ .full_pkt = &pkt_ipv4_udp_p2,
+ .frags[0] = &pkt_ipv4_udp_p2_f1,
+ .frags[1] = &pkt_ipv4_udp_p2_f2,
+ .frags[2] = &pkt_ipv4_udp_p2_f3,
+ .frags[3] = &pkt_ipv4_udp_p2_f4,
+ .nb_frags = 4,
+ };
+ test_vector_payload_populate(&pkt_ipv4_udp_p2, true);
+ test_vector_payload_populate(&pkt_ipv4_udp_p2_f1, true);
+ test_vector_payload_populate(&pkt_ipv4_udp_p2_f2, false);
+ test_vector_payload_populate(&pkt_ipv4_udp_p2_f3, false);
+ test_vector_payload_populate(&pkt_ipv4_udp_p2_f4, false);
+
+ return test_ipsec_encap_decap(&ipv4_4frag_case,
+ RTE_SECURITY_IPSEC_TUNNEL_IPV4);
+}
+
+static int
+test_reassembly_ipv6_4frag(void)
+{
+ struct reassembly_vector ipv6_4frag_case = {
+ .sa_data = &conf_aes_128_gcm,
+ .full_pkt = &pkt_ipv6_udp_p2,
+ .frags[0] = &pkt_ipv6_udp_p2_f1,
+ .frags[1] = &pkt_ipv6_udp_p2_f2,
+ .frags[2] = &pkt_ipv6_udp_p2_f3,
+ .frags[3] = &pkt_ipv6_udp_p2_f4,
+ .nb_frags = 4,
+ };
+ test_vector_payload_populate(&pkt_ipv6_udp_p2, true);
+ test_vector_payload_populate(&pkt_ipv6_udp_p2_f1, true);
+ test_vector_payload_populate(&pkt_ipv6_udp_p2_f2, false);
+ test_vector_payload_populate(&pkt_ipv6_udp_p2_f3, false);
+ test_vector_payload_populate(&pkt_ipv6_udp_p2_f4, false);
+
+ return test_ipsec_encap_decap(&ipv6_4frag_case,
+ RTE_SECURITY_IPSEC_TUNNEL_IPV6);
+}
+
+static int
+test_reassembly_ipv4_5frag(void)
+{
+ struct reassembly_vector ipv4_5frag_case = {
+ .sa_data = &conf_aes_128_gcm,
+ .full_pkt = &pkt_ipv4_udp_p3,
+ .frags[0] = &pkt_ipv4_udp_p3_f1,
+ .frags[1] = &pkt_ipv4_udp_p3_f2,
+ .frags[2] = &pkt_ipv4_udp_p3_f3,
+ .frags[3] = &pkt_ipv4_udp_p3_f4,
+ .frags[4] = &pkt_ipv4_udp_p3_f5,
+ .nb_frags = 5,
+ };
+ test_vector_payload_populate(&pkt_ipv4_udp_p3, true);
+ test_vector_payload_populate(&pkt_ipv4_udp_p3_f1, true);
+ test_vector_payload_populate(&pkt_ipv4_udp_p3_f2, false);
+ test_vector_payload_populate(&pkt_ipv4_udp_p3_f3, false);
+ test_vector_payload_populate(&pkt_ipv4_udp_p3_f4, false);
+ test_vector_payload_populate(&pkt_ipv4_udp_p3_f5, false);
+
+ return test_ipsec_encap_decap(&ipv4_5frag_case,
+ RTE_SECURITY_IPSEC_TUNNEL_IPV4);
+}
+
+static int
+test_reassembly_ipv6_5frag(void)
+{
+ struct reassembly_vector ipv6_5frag_case = {
+ .sa_data = &conf_aes_128_gcm,
+ .full_pkt = &pkt_ipv6_udp_p3,
+ .frags[0] = &pkt_ipv6_udp_p3_f1,
+ .frags[1] = &pkt_ipv6_udp_p3_f2,
+ .frags[2] = &pkt_ipv6_udp_p3_f3,
+ .frags[3] = &pkt_ipv6_udp_p3_f4,
+ .frags[4] = &pkt_ipv6_udp_p3_f5,
+ .nb_frags = 5,
+ };
+ test_vector_payload_populate(&pkt_ipv6_udp_p3, true);
+ test_vector_payload_populate(&pkt_ipv6_udp_p3_f1, true);
+ test_vector_payload_populate(&pkt_ipv6_udp_p3_f2, false);
+ test_vector_payload_populate(&pkt_ipv6_udp_p3_f3, false);
+ test_vector_payload_populate(&pkt_ipv6_udp_p3_f4, false);
+ test_vector_payload_populate(&pkt_ipv6_udp_p3_f5, false);
+
+ return test_ipsec_encap_decap(&ipv6_5frag_case,
+ RTE_SECURITY_IPSEC_TUNNEL_IPV6);
+}
+
static struct unit_test_suite inline_ipsec_testsuite = {
.suite_name = "Inline IPsec Ethernet Device Unit Test Suite",
.setup = testsuite_setup,
@@ -1069,6 +1196,24 @@ static struct unit_test_suite inline_ipsec_testsuite = {
TEST_CASE_ST(ut_setup_inline_ipsec,
ut_teardown_inline_ipsec,
test_ipsec_ipv4_burst_encap_decap),
+ TEST_CASE_ST(ut_setup_inline_ipsec,
+ ut_teardown_inline_ipsec,
+ test_reassembly_ipv4_2frag),
+ TEST_CASE_ST(ut_setup_inline_ipsec,
+ ut_teardown_inline_ipsec,
+ test_reassembly_ipv6_2frag),
+ TEST_CASE_ST(ut_setup_inline_ipsec,
+ ut_teardown_inline_ipsec,
+ test_reassembly_ipv4_4frag),
+ TEST_CASE_ST(ut_setup_inline_ipsec,
+ ut_teardown_inline_ipsec,
+ test_reassembly_ipv6_4frag),
+ TEST_CASE_ST(ut_setup_inline_ipsec,
+ ut_teardown_inline_ipsec,
+ test_reassembly_ipv4_5frag),
+ TEST_CASE_ST(ut_setup_inline_ipsec,
+ ut_teardown_inline_ipsec,
+ test_reassembly_ipv6_5frag),
TEST_CASES_END() /**< NULL terminate unit test array */
}
diff --git a/app/test/test_security_inline_proto_vectors.h b/app/test/test_security_inline_proto_vectors.h
index 861c4fad48..49d94f37df 100644
--- a/app/test/test_security_inline_proto_vectors.h
+++ b/app/test/test_security_inline_proto_vectors.h
@@ -4,6 +4,47 @@
#ifndef _TEST_INLINE_IPSEC_REASSEMBLY_VECTORS_H_
#define _TEST_INLINE_IPSEC_REASSEMBLY_VECTORS_H_
+/* The source file includes below test vectors */
+/* IPv6:
+ *
+ * 1) pkt_ipv6_udp_p1
+ * pkt_ipv6_udp_p1_f1
+ * pkt_ipv6_udp_p1_f2
+ *
+ * 2) pkt_ipv6_udp_p2
+ * pkt_ipv6_udp_p2_f1
+ * pkt_ipv6_udp_p2_f2
+ * pkt_ipv6_udp_p2_f3
+ * pkt_ipv6_udp_p2_f4
+ *
+ * 3) pkt_ipv6_udp_p3
+ * pkt_ipv6_udp_p3_f1
+ * pkt_ipv6_udp_p3_f2
+ * pkt_ipv6_udp_p3_f3
+ * pkt_ipv6_udp_p3_f4
+ * pkt_ipv6_udp_p3_f5
+ */
+
+/* IPv4:
+ *
+ * 1) pkt_ipv4_udp_p1
+ * pkt_ipv4_udp_p1_f1
+ * pkt_ipv4_udp_p1_f2
+ *
+ * 2) pkt_ipv4_udp_p2
+ * pkt_ipv4_udp_p2_f1
+ * pkt_ipv4_udp_p2_f2
+ * pkt_ipv4_udp_p2_f3
+ * pkt_ipv4_udp_p2_f4
+ *
+ * 3) pkt_ipv4_udp_p3
+ * pkt_ipv4_udp_p3_f1
+ * pkt_ipv4_udp_p3_f2
+ * pkt_ipv4_udp_p3_f3
+ * pkt_ipv4_udp_p3_f4
+ * pkt_ipv4_udp_p3_f5
+ */
+
#define MAX_FRAG_LEN 1500
#define MAX_FRAGS 6
#define MAX_PKT_LEN (MAX_FRAG_LEN * MAX_FRAGS)
@@ -45,6 +86,557 @@ struct reassembly_vector {
bool burst;
};
+struct ipsec_test_packet pkt_ipv6_udp_p1 = {
+ .len = 1514,
+ .l2_offset = 0,
+ .l3_offset = 14,
+ .l4_offset = 54,
+ .data = {
+ /* ETH */
+ 0xf1, 0xf1, 0xf1, 0xf1, 0xf1, 0xf1,
+ 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0x86, 0xdd,
+
+ /* IP */
+ 0x60, 0x00, 0x00, 0x00, 0x05, 0xb4, 0x2C, 0x40,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x0d, 0x00, 0x00, 0x02,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x02, 0x00, 0x00, 0x02,
+
+ /* UDP */
+ 0x08, 0x00, 0x27, 0x10, 0x05, 0xb4, 0x2b, 0xe8,
+ },
+};
+
+struct ipsec_test_packet pkt_ipv6_udp_p1_f1 = {
+ .len = 1398,
+ .l2_offset = 0,
+ .l3_offset = 14,
+ .l4_offset = 62,
+ .data = {
+ /* ETH */
+ 0xf1, 0xf1, 0xf1, 0xf1, 0xf1, 0xf1,
+ 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0x86, 0xdd,
+
+ /* IP */
+ 0x60, 0x00, 0x00, 0x00, 0x05, 0x40, 0x2c, 0x40,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x0d, 0x00, 0x00, 0x02,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x02, 0x00, 0x00, 0x02,
+ 0x11, 0x00, 0x00, 0x01, 0x5c, 0x92, 0xac, 0xf1,
+
+ /* UDP */
+ 0x08, 0x00, 0x27, 0x10, 0x05, 0xb4, 0x2b, 0xe8,
+ },
+};
+
+struct ipsec_test_packet pkt_ipv6_udp_p1_f2 = {
+ .len = 186,
+ .l2_offset = 0,
+ .l3_offset = 14,
+ .l4_offset = 62,
+ .data = {
+ /* ETH */
+ 0xf1, 0xf1, 0xf1, 0xf1, 0xf1, 0xf1,
+ 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0x86, 0xdd,
+
+ /* IP */
+ 0x60, 0x00, 0x00, 0x00, 0x00, 0x84, 0x2c, 0x40,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x0d, 0x00, 0x00, 0x02,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x02, 0x00, 0x00, 0x02,
+ 0x11, 0x00, 0x05, 0x38, 0x5c, 0x92, 0xac, 0xf1,
+ },
+};
+
+struct ipsec_test_packet pkt_ipv6_udp_p2 = {
+ .len = 4496,
+ .l2_offset = 0,
+ .l3_offset = 14,
+ .l4_offset = 54,
+ .data = {
+ /* ETH */
+ 0xf1, 0xf1, 0xf1, 0xf1, 0xf1, 0xf1,
+ 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0x86, 0xdd,
+
+ /* IP */
+ 0x60, 0x00, 0x00, 0x00, 0x11, 0x5a, 0x2c, 0x40,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x0d, 0x00, 0x00, 0x02,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x02, 0x00, 0x00, 0x02,
+
+ /* UDP */
+ 0x08, 0x00, 0x27, 0x10, 0x11, 0x5a, 0x8a, 0x11,
+ },
+};
+
+struct ipsec_test_packet pkt_ipv6_udp_p2_f1 = {
+ .len = 1398,
+ .l2_offset = 0,
+ .l3_offset = 14,
+ .l4_offset = 62,
+ .data = {
+ /* ETH */
+ 0xf1, 0xf1, 0xf1, 0xf1, 0xf1, 0xf1,
+ 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0x86, 0xdd,
+
+ /* IP */
+ 0x60, 0x00, 0x00, 0x00, 0x05, 0x40, 0x2c, 0x40,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x0d, 0x00, 0x00, 0x02,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x02, 0x00, 0x00, 0x02,
+ 0x11, 0x00, 0x00, 0x01, 0x64, 0x6c, 0x68, 0x9f,
+
+ /* UDP */
+ 0x08, 0x00, 0x27, 0x10, 0x11, 0x5a, 0x8a, 0x11,
+ },
+};
+
+struct ipsec_test_packet pkt_ipv6_udp_p2_f2 = {
+ .len = 1398,
+ .l2_offset = 0,
+ .l3_offset = 14,
+ .l4_offset = 62,
+ .data = {
+ /* ETH */
+ 0xf1, 0xf1, 0xf1, 0xf1, 0xf1, 0xf1,
+ 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0x86, 0xdd,
+
+ /* IP */
+ 0x60, 0x00, 0x00, 0x00, 0x05, 0x40, 0x2c, 0x40,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x0d, 0x00, 0x00, 0x02,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x02, 0x00, 0x00, 0x02,
+ 0x11, 0x00, 0x05, 0x39, 0x64, 0x6c, 0x68, 0x9f,
+ },
+};
+
+struct ipsec_test_packet pkt_ipv6_udp_p2_f3 = {
+ .len = 1398,
+ .l2_offset = 0,
+ .l3_offset = 14,
+ .l4_offset = 62,
+ .data = {
+ /* ETH */
+ 0xf1, 0xf1, 0xf1, 0xf1, 0xf1, 0xf1,
+ 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0x86, 0xdd,
+
+ /* IP */
+ 0x60, 0x00, 0x00, 0x00, 0x05, 0x40, 0x2c, 0x40,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x0d, 0x00, 0x00, 0x02,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x02, 0x00, 0x00, 0x02,
+ 0x11, 0x00, 0x0a, 0x71, 0x64, 0x6c, 0x68, 0x9f,
+ },
+};
+
+struct ipsec_test_packet pkt_ipv6_udp_p2_f4 = {
+ .len = 496,
+ .l2_offset = 0,
+ .l3_offset = 14,
+ .l4_offset = 62,
+ .data = {
+ /* ETH */
+ 0xf1, 0xf1, 0xf1, 0xf1, 0xf1, 0xf1,
+ 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0x86, 0xdd,
+
+ /* IP */
+ 0x60, 0x00, 0x00, 0x00, 0x01, 0xba, 0x2c, 0x40,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x0d, 0x00, 0x00, 0x02,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x02, 0x00, 0x00, 0x02,
+ 0x11, 0x00, 0x0f, 0xa8, 0x64, 0x6c, 0x68, 0x9f,
+ },
+};
+
+struct ipsec_test_packet pkt_ipv6_udp_p3 = {
+ .len = 5796,
+ .l2_offset = 0,
+ .l3_offset = 14,
+ .l4_offset = 54,
+ .data = {
+ /* ETH */
+ 0xf1, 0xf1, 0xf1, 0xf1, 0xf1, 0xf1,
+ 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0x86, 0xdd,
+
+ /* IP */
+ 0x60, 0x00, 0x00, 0x00, 0x16, 0x6e, 0x2c, 0x40,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x0d, 0x00, 0x00, 0x02,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x02, 0x00, 0x00, 0x02,
+
+ /* UDP */
+ 0x08, 0x00, 0x27, 0x10, 0x16, 0x6e, 0x2f, 0x99,
+ },
+};
+
+struct ipsec_test_packet pkt_ipv6_udp_p3_f1 = {
+ .len = 1398,
+ .l2_offset = 0,
+ .l3_offset = 14,
+ .l4_offset = 62,
+ .data = {
+ /* ETH */
+ 0xf1, 0xf1, 0xf1, 0xf1, 0xf1, 0xf1,
+ 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0x86, 0xdd,
+
+ /* IP */
+ 0x60, 0x00, 0x00, 0x00, 0x05, 0x40, 0x2c, 0x40,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x0d, 0x00, 0x00, 0x02,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x02, 0x00, 0x00, 0x02,
+ 0x11, 0x00, 0x00, 0x01, 0x65, 0xcf, 0x5a, 0xae,
+
+ /* UDP */
+ 0x80, 0x00, 0x27, 0x10, 0x16, 0x6e, 0x2f, 0x99,
+ },
+};
+
+struct ipsec_test_packet pkt_ipv6_udp_p3_f2 = {
+ .len = 1398,
+ .l2_offset = 0,
+ .l3_offset = 14,
+ .l4_offset = 62,
+ .data = {
+ /* ETH */
+ 0xf1, 0xf1, 0xf1, 0xf1, 0xf1, 0xf1,
+ 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0x86, 0xdd,
+
+ /* IP */
+ 0x60, 0x00, 0x00, 0x00, 0x05, 0x40, 0x2c, 0x40,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x0d, 0x00, 0x00, 0x02,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x02, 0x00, 0x00, 0x02,
+ 0x11, 0x00, 0x05, 0x39, 0x65, 0xcf, 0x5a, 0xae,
+ },
+};
+
+struct ipsec_test_packet pkt_ipv6_udp_p3_f3 = {
+ .len = 1398,
+ .l2_offset = 0,
+ .l3_offset = 14,
+ .l4_offset = 62,
+ .data = {
+ /* ETH */
+ 0xf1, 0xf1, 0xf1, 0xf1, 0xf1, 0xf1,
+ 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0x86, 0xdd,
+
+ /* IP */
+ 0x60, 0x00, 0x00, 0x00, 0x05, 0x40, 0x2c, 0x40,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x0d, 0x00, 0x00, 0x02,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x02, 0x00, 0x00, 0x02,
+ 0x11, 0x00, 0x0a, 0x71, 0x65, 0xcf, 0x5a, 0xae,
+ },
+};
+
+struct ipsec_test_packet pkt_ipv6_udp_p3_f4 = {
+ .len = 1398,
+ .l2_offset = 0,
+ .l3_offset = 14,
+ .l4_offset = 62,
+ .data = {
+ /* ETH */
+ 0xf1, 0xf1, 0xf1, 0xf1, 0xf1, 0xf1,
+ 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0x86, 0xdd,
+
+ /* IP */
+ 0x60, 0x00, 0x00, 0x00, 0x05, 0x40, 0x2c, 0x40,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x0d, 0x00, 0x00, 0x02,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x02, 0x00, 0x00, 0x02,
+ 0x11, 0x00, 0x0f, 0xa9, 0x65, 0xcf, 0x5a, 0xae,
+ },
+};
+
+struct ipsec_test_packet pkt_ipv6_udp_p3_f5 = {
+ .len = 460,
+ .l2_offset = 0,
+ .l3_offset = 14,
+ .l4_offset = 62,
+ .data = {
+ /* ETH */
+ 0xf1, 0xf1, 0xf1, 0xf1, 0xf1, 0xf1,
+ 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0x86, 0xdd,
+
+ /* IP */
+ 0x60, 0x00, 0x00, 0x00, 0x01, 0x96, 0x2c, 0x40,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x0d, 0x00, 0x00, 0x02,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x02, 0x00, 0x00, 0x02,
+ 0x11, 0x00, 0x14, 0xe0, 0x65, 0xcf, 0x5a, 0xae,
+ },
+};
+
+struct ipsec_test_packet pkt_ipv4_udp_p1 = {
+ .len = 1514,
+ .l2_offset = 0,
+ .l3_offset = 14,
+ .l4_offset = 34,
+ .data = {
+ /* ETH */
+ 0xf1, 0xf1, 0xf1, 0xf1, 0xf1, 0xf1,
+ 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0x08, 0x00,
+
+ /* IP */
+ 0x45, 0x00, 0x05, 0xdc, 0x00, 0x01, 0x00, 0x00,
+ 0x40, 0x11, 0x66, 0x0d, 0x0d, 0x00, 0x00, 0x02,
+ 0x02, 0x00, 0x00, 0x02,
+
+ /* UDP */
+ 0x08, 0x00, 0x27, 0x10, 0x05, 0xc8, 0xb8, 0x4c,
+ },
+};
+
+struct ipsec_test_packet pkt_ipv4_udp_p1_f1 = {
+ .len = 1434,
+ .l2_offset = 0,
+ .l3_offset = 14,
+ .l4_offset = 34,
+ .data = {
+ /* ETH */
+ 0xf1, 0xf1, 0xf1, 0xf1, 0xf1, 0xf1,
+ 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0x08, 0x00,
+
+ /* IP */
+ 0x45, 0x00, 0x05, 0x8c, 0x00, 0x01, 0x20, 0x00,
+ 0x40, 0x11, 0x46, 0x5d, 0x0d, 0x00, 0x00, 0x02,
+ 0x02, 0x00, 0x00, 0x02,
+
+ /* UDP */
+ 0x08, 0x00, 0x27, 0x10, 0x05, 0xc8, 0xb8, 0x4c,
+ },
+};
+
+struct ipsec_test_packet pkt_ipv4_udp_p1_f2 = {
+ .len = 114,
+ .l2_offset = 0,
+ .l3_offset = 14,
+ .l4_offset = 34,
+ .data = {
+ /* ETH */
+ 0xf1, 0xf1, 0xf1, 0xf1, 0xf1, 0xf1,
+ 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0x08, 0x00,
+
+ /* IP */
+ 0x45, 0x00, 0x00, 0x64, 0x00, 0x01, 0x00, 0xaf,
+ 0x40, 0x11, 0x6a, 0xd6, 0x0d, 0x00, 0x00, 0x02,
+ 0x02, 0x00, 0x00, 0x02,
+ },
+};
+
+struct ipsec_test_packet pkt_ipv4_udp_p2 = {
+ .len = 4496,
+ .l2_offset = 0,
+ .l3_offset = 14,
+ .l4_offset = 34,
+ .data = {
+ /* ETH */
+ 0xf1, 0xf1, 0xf1, 0xf1, 0xf1, 0xf1,
+ 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0x08, 0x00,
+
+ /* IP */
+ 0x45, 0x00, 0x11, 0x82, 0x00, 0x02, 0x00, 0x00,
+ 0x40, 0x11, 0x5a, 0x66, 0x0d, 0x00, 0x00, 0x02,
+ 0x02, 0x00, 0x00, 0x02,
+
+ /* UDP */
+ 0x08, 0x00, 0x27, 0x10, 0x11, 0x6e, 0x16, 0x76,
+ },
+};
+
+struct ipsec_test_packet pkt_ipv4_udp_p2_f1 = {
+ .len = 1434,
+ .l2_offset = 0,
+ .l3_offset = 14,
+ .l4_offset = 34,
+ .data = {
+ /* ETH */
+ 0xf1, 0xf1, 0xf1, 0xf1, 0xf1, 0xf1,
+ 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0x08, 0x00,
+
+ /* IP */
+ 0x45, 0x00, 0x05, 0x8c, 0x00, 0x02, 0x20, 0x00,
+ 0x40, 0x11, 0x46, 0x5c, 0x0d, 0x00, 0x00, 0x02,
+ 0x02, 0x00, 0x00, 0x02,
+
+ /* UDP */
+ 0x08, 0x00, 0x27, 0x10, 0x11, 0x6e, 0x16, 0x76,
+ },
+};
+
+struct ipsec_test_packet pkt_ipv4_udp_p2_f2 = {
+ .len = 1434,
+ .l2_offset = 0,
+ .l3_offset = 14,
+ .l4_offset = 34,
+ .data = {
+ /* ETH */
+ 0xf1, 0xf1, 0xf1, 0xf1, 0xf1, 0xf1,
+ 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0x08, 0x00,
+
+ /* IP */
+ 0x45, 0x00, 0x05, 0x8c, 0x00, 0x02, 0x20, 0xaf,
+ 0x40, 0x11, 0x45, 0xad, 0x0d, 0x00, 0x00, 0x02,
+ 0x02, 0x00, 0x00, 0x02,
+ },
+};
+
+struct ipsec_test_packet pkt_ipv4_udp_p2_f3 = {
+ .len = 1434,
+ .l2_offset = 0,
+ .l3_offset = 14,
+ .l4_offset = 34,
+ .data = {
+ /* ETH */
+ 0xf1, 0xf1, 0xf1, 0xf1, 0xf1, 0xf1,
+ 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0x08, 0x00,
+
+ /* IP */
+ 0x45, 0x00, 0x05, 0x8c, 0x00, 0x02, 0x21, 0x5e,
+ 0x40, 0x11, 0x44, 0xfe, 0x0d, 0x00, 0x00, 0x02,
+ 0x02, 0x00, 0x00, 0x02,
+ },
+};
+
+struct ipsec_test_packet pkt_ipv4_udp_p2_f4 = {
+ .len = 296,
+ .l2_offset = 0,
+ .l3_offset = 14,
+ .l4_offset = 34,
+ .data = {
+ /* ETH */
+ 0xf1, 0xf1, 0xf1, 0xf1, 0xf1, 0xf1,
+ 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0x08, 0x00,
+
+ /* IP */
+ 0x45, 0x00, 0x01, 0x1a, 0x00, 0x02, 0x02, 0x0d,
+ 0x40, 0x11, 0x68, 0xc1, 0x0d, 0x00, 0x00, 0x02,
+ 0x02, 0x00, 0x00, 0x02,
+ },
+};
+
+struct ipsec_test_packet pkt_ipv4_udp_p3 = {
+ .len = 5796,
+ .l2_offset = 0,
+ .l3_offset = 14,
+ .l4_offset = 34,
+ .data = {
+ /* ETH */
+ 0xf1, 0xf1, 0xf1, 0xf1, 0xf1, 0xf1,
+ 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0x08, 0x00,
+
+ /* IP */
+ 0x45, 0x00, 0x16, 0x96, 0x00, 0x03, 0x00, 0x00,
+ 0x40, 0x11, 0x55, 0x51, 0x0d, 0x00, 0x00, 0x02,
+ 0x02, 0x00, 0x00, 0x02,
+
+ /* UDP */
+ 0x08, 0x00, 0x27, 0x10, 0x16, 0x82, 0xbb, 0xfd,
+ },
+};
+
+struct ipsec_test_packet pkt_ipv4_udp_p3_f1 = {
+ .len = 1434,
+ .l2_offset = 0,
+ .l3_offset = 14,
+ .l4_offset = 34,
+ .data = {
+ /* ETH */
+ 0xf1, 0xf1, 0xf1, 0xf1, 0xf1, 0xf1,
+ 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0x08, 0x00,
+
+ /* IP */
+ 0x45, 0x00, 0x05, 0x8c, 0x00, 0x03, 0x20, 0x00,
+ 0x40, 0x11, 0x46, 0x5b, 0x0d, 0x00, 0x00, 0x02,
+ 0x02, 0x00, 0x00, 0x02,
+
+ /* UDP */
+ 0x08, 0x00, 0x27, 0x10, 0x16, 0x82, 0xbb, 0xfd,
+ },
+};
+
+struct ipsec_test_packet pkt_ipv4_udp_p3_f2 = {
+ .len = 1434,
+ .l2_offset = 0,
+ .l3_offset = 14,
+ .l4_offset = 34,
+ .data = {
+ /* ETH */
+ 0xf1, 0xf1, 0xf1, 0xf1, 0xf1, 0xf1,
+ 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0x08, 0x00,
+
+ /* IP */
+ 0x45, 0x00, 0x05, 0x8c, 0x00, 0x03, 0x20, 0xaf,
+ 0x40, 0x11, 0x45, 0xac, 0x0d, 0x00, 0x00, 0x02,
+ 0x02, 0x00, 0x00, 0x02,
+ },
+};
+
+struct ipsec_test_packet pkt_ipv4_udp_p3_f3 = {
+ .len = 1434,
+ .l2_offset = 0,
+ .l3_offset = 14,
+ .l4_offset = 34,
+ .data = {
+ /* ETH */
+ 0xf1, 0xf1, 0xf1, 0xf1, 0xf1, 0xf1,
+ 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0x08, 0x00,
+
+ /* IP */
+ 0x45, 0x00, 0x05, 0x8c, 0x00, 0x03, 0x21, 0x5e,
+ 0x40, 0x11, 0x44, 0xfd, 0x0d, 0x00, 0x00, 0x02,
+ 0x02, 0x00, 0x00, 0x02,
+ },
+};
+
+struct ipsec_test_packet pkt_ipv4_udp_p3_f4 = {
+ .len = 1434,
+ .l2_offset = 0,
+ .l3_offset = 14,
+ .l4_offset = 34,
+ .data = {
+ /* ETH */
+ 0xf1, 0xf1, 0xf1, 0xf1, 0xf1, 0xf1,
+ 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0x08, 0x00,
+
+ /* IP */
+ 0x45, 0x00, 0x05, 0x8c, 0x00, 0x03, 0x22, 0x0d,
+ 0x40, 0x11, 0x44, 0x4e, 0x0d, 0x00, 0x00, 0x02,
+ 0x02, 0x00, 0x00, 0x02,
+ },
+};
+
+struct ipsec_test_packet pkt_ipv4_udp_p3_f5 = {
+ .len = 196,
+ .l2_offset = 0,
+ .l3_offset = 14,
+ .l4_offset = 34,
+ .data = {
+ /* ETH */
+ 0xf1, 0xf1, 0xf1, 0xf1, 0xf1, 0xf1,
+ 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0x08, 0x00,
+
+ /* IP */
+ 0x45, 0x00, 0x00, 0xb6, 0x00, 0x03, 0x02, 0xbc,
+ 0x40, 0x11, 0x68, 0x75, 0x0d, 0x00, 0x00, 0x02,
+ 0x02, 0x00, 0x00, 0x02,
+ },
+};
+
struct ipsec_test_packet pkt_ipv4_plain = {
.len = 76,
.l2_offset = 0,
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* [PATCH v2 4/4] app/test: add IP reassembly negative cases
2022-01-20 16:48 ` [PATCH v2 0/4] app/test: add inline IPsec and reassembly cases Akhil Goyal
` (2 preceding siblings ...)
2022-01-20 16:48 ` [PATCH v2 3/4] app/test: add IP reassembly cases with multiple fragments Akhil Goyal
@ 2022-01-20 16:48 ` Akhil Goyal
2022-02-17 17:23 ` [PATCH v3 0/4] app/test: add inline IPsec and reassembly cases Akhil Goyal
4 siblings, 0 replies; 184+ messages in thread
From: Akhil Goyal @ 2022-01-20 16:48 UTC (permalink / raw)
To: dev
Cc: anoobj, radu.nicolau, declan.doherty, hemant.agrawal, matan,
konstantin.ananyev, thomas, ferruh.yigit, andrew.rybchenko,
olivier.matz, rosen.xu, jerinj, Akhil Goyal
The test_inline_ipsec test suite is extended with cases where IP reassembly
is incomplete and the application will need to reassemble the fragments
later in software.
The failure cases added are:
- not all fragments are received.
- the same fragment is received more than once.
- fragments are received out of order.
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
---
app/test/test_security_inline_proto.c | 71 +++++++++++++++++++++++++++
1 file changed, 71 insertions(+)
diff --git a/app/test/test_security_inline_proto.c b/app/test/test_security_inline_proto.c
index d05325b205..b1794c1bc7 100644
--- a/app/test/test_security_inline_proto.c
+++ b/app/test/test_security_inline_proto.c
@@ -1179,6 +1179,68 @@ test_reassembly_ipv6_5frag(void)
RTE_SECURITY_IPSEC_TUNNEL_IPV6);
}
+static int
+test_reassembly_incomplete(void)
+{
+ /* Negative test case, not sending all fragments. */
+ struct reassembly_vector ipv4_incomplete_case = {
+ .sa_data = &conf_aes_128_gcm,
+ .full_pkt = &pkt_ipv4_udp_p2,
+ .frags[0] = &pkt_ipv4_udp_p2_f1,
+ .frags[1] = &pkt_ipv4_udp_p2_f2,
+ .nb_frags = 2,
+ };
+ test_vector_payload_populate(&pkt_ipv4_udp_p2, true);
+ test_vector_payload_populate(&pkt_ipv4_udp_p2_f1, true);
+ test_vector_payload_populate(&pkt_ipv4_udp_p2_f2, false);
+
+ return test_ipsec_encap_decap(&ipv4_incomplete_case,
+ RTE_SECURITY_IPSEC_TUNNEL_IPV4);
+}
+
+static int
+test_reassembly_overlap(void)
+{
+ /* Negative test case, sending 1 fragment twice. */
+ struct reassembly_vector ipv4_overlap_case = {
+ .sa_data = &conf_aes_128_gcm,
+ .full_pkt = &pkt_ipv4_udp_p1,
+ .frags[0] = &pkt_ipv4_udp_p1_f1,
+ .frags[1] = &pkt_ipv4_udp_p1_f1, /* Overlap */
+ .frags[2] = &pkt_ipv4_udp_p1_f2,
+ .nb_frags = 3,
+ };
+ test_vector_payload_populate(&pkt_ipv4_udp_p1, true);
+ test_vector_payload_populate(&pkt_ipv4_udp_p1_f1, true);
+ test_vector_payload_populate(&pkt_ipv4_udp_p1_f2, false);
+
+ return test_ipsec_encap_decap(&ipv4_overlap_case,
+ RTE_SECURITY_IPSEC_TUNNEL_IPV4);
+}
+
+static int
+test_reassembly_out_of_order(void)
+{
+ /* Negative test case, out of order fragments. */
+ struct reassembly_vector ipv4_ooo_case = {
+ .sa_data = &conf_aes_128_gcm,
+ .full_pkt = &pkt_ipv4_udp_p2,
+ .frags[0] = &pkt_ipv4_udp_p2_f1,
+ .frags[1] = &pkt_ipv4_udp_p2_f3,
+ .frags[2] = &pkt_ipv4_udp_p2_f4,
+ .frags[3] = &pkt_ipv4_udp_p2_f2,
+ .nb_frags = 4,
+ };
+ test_vector_payload_populate(&pkt_ipv4_udp_p2, true);
+ test_vector_payload_populate(&pkt_ipv4_udp_p2_f1, true);
+ test_vector_payload_populate(&pkt_ipv4_udp_p2_f2, false);
+ test_vector_payload_populate(&pkt_ipv4_udp_p2_f3, false);
+ test_vector_payload_populate(&pkt_ipv4_udp_p2_f4, false);
+
+ return test_ipsec_encap_decap(&ipv4_ooo_case,
+ RTE_SECURITY_IPSEC_TUNNEL_IPV4);
+}
+
static struct unit_test_suite inline_ipsec_testsuite = {
.suite_name = "Inline IPsec Ethernet Device Unit Test Suite",
.setup = testsuite_setup,
@@ -1214,6 +1276,15 @@ static struct unit_test_suite inline_ipsec_testsuite = {
TEST_CASE_ST(ut_setup_inline_ipsec,
ut_teardown_inline_ipsec,
test_reassembly_ipv6_5frag),
+ TEST_CASE_ST(ut_setup_inline_ipsec,
+ ut_teardown_inline_ipsec,
+ test_reassembly_incomplete),
+ TEST_CASE_ST(ut_setup_inline_ipsec,
+ ut_teardown_inline_ipsec,
+ test_reassembly_overlap),
+ TEST_CASE_ST(ut_setup_inline_ipsec,
+ ut_teardown_inline_ipsec,
+ test_reassembly_out_of_order),
TEST_CASES_END() /**< NULL terminate unit test array */
}
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
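As background for the three negative cases above, here is a small self-contained model of the checks a reassembly engine (or the software fallback) has to perform on a fragment sequence. The struct layout, offsets, and lengths are purely illustrative and are not taken from the test vectors in the patch:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Minimal stand-in for one IP fragment: payload offset, payload length,
 * and the "more fragments" flag. */
struct frag {
	uint16_t offset;
	uint16_t len;
	int more;
};

/* Returns 1 when the fragment list forms one contiguous, in-order,
 * complete datagram payload; 0 otherwise (gap, duplicate/overlap,
 * out-of-order arrival, or missing last fragment) -- exactly the
 * conditions the negative tests exercise. */
static int
frags_complete(const struct frag *f, size_t n)
{
	uint16_t next = 0;
	size_t i;

	for (i = 0; i < n; i++) {
		if (f[i].offset != next)
			return 0;          /* gap, overlap or out-of-order */
		next = f[i].offset + f[i].len;
		if (!f[i].more)
			return i == n - 1; /* last fragment must be final */
	}
	return 0;                          /* never saw the last fragment */
}
```

A real implementation would additionally bound the wait with a timeout and a fragment-count limit, matching reass_timeout and max_frags in the proposed capability struct.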
* RE: [EXT] Re: [PATCH v2 1/4] ethdev: introduce IP reassembly offload
2022-01-20 16:45 ` Stephen Hemminger
@ 2022-01-20 17:11 ` Akhil Goyal
0 siblings, 0 replies; 184+ messages in thread
From: Akhil Goyal @ 2022-01-20 17:11 UTC (permalink / raw)
To: Stephen Hemminger
Cc: dev, Anoob Joseph, radu.nicolau, declan.doherty, hemant.agrawal,
matan, konstantin.ananyev, thomas, ferruh.yigit,
andrew.rybchenko, olivier.matz, rosen.xu,
Jerin Jacob Kollanukkaran
> On Thu, 20 Jan 2022 21:56:24 +0530
> Akhil Goyal <gakhil@marvell.com> wrote:
>
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this structure may change without prior notice.
> > + *
> > + * A structure used to set IP reassembly configuration.
> > + *
> > + * If RTE_ETH_RX_OFFLOAD_IP_REASSEMBLY flag is set in offloads field,
> > + * the PMD will attempt IP reassembly for the received packets as per
> > + * properties defined in this structure:
> > + *
> > + */
> > +struct rte_eth_ip_reass_params {
> > + /** Maximum time in ms which PMD can wait for other fragments. */
> > + uint32_t reass_timeout;
> > + /** Maximum number of fragments that can be reassembled. */
> > + uint16_t max_frags;
> > + /**
> > + * Flags to enable reassembly of packet types -
> > + * RTE_ETH_DEV_REASSEMBLY_F_xxx.
> > + */
> > + uint16_t flags;
> > +};
> > +
>
> Actually, this is not experimental. You are embedding this in dev_info
> and dev_info is not experimental; therefore the reassembly parameters
> can never change without breaking ABI of dev_info.
Agreed, will remove the experimental tag from this struct.
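To make the capability-vs-configuration distinction discussed here concrete, a minimal self-contained sketch of how an application might reconcile its requested reassembly settings with the advertised capabilities. The struct layout mirrors rte_eth_ip_reass_params from the patch, but the local type and the clamping helper are purely illustrative, not part of the proposed API:

```c
#include <assert.h>
#include <stdint.h>

/* Local mirror of the patch's rte_eth_ip_reass_params (illustrative). */
struct ip_reass_params {
	uint32_t reass_timeout; /* max time in ms to wait for other fragments */
	uint16_t max_frags;     /* max number of fragments per packet */
	uint16_t flags;         /* RTE_ETH_DEV_REASSEMBLY_F_xxx bits */
};

/* Clamp a requested configuration against device capabilities, as the
 * thread suggests the application (or the set() path) should do before
 * enabling the offload. */
static void
clamp_reass_conf(struct ip_reass_params *req,
		 const struct ip_reass_params *capa)
{
	if (req->reass_timeout > capa->reass_timeout)
		req->reass_timeout = capa->reass_timeout;
	if (req->max_frags > capa->max_frags)
		req->max_frags = capa->max_frags;
	/* keep only the packet types the device supports */
	req->flags &= capa->flags;
}
```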
^ permalink raw reply [flat|nested] 184+ messages in thread
* Re: [PATCH 1/8] ethdev: introduce IP reassembly offload
2022-01-03 15:08 ` [PATCH 1/8] " Akhil Goyal
2022-01-11 16:03 ` Ananyev, Konstantin
@ 2022-01-22 7:38 ` Andrew Rybchenko
2022-01-30 16:53 ` [EXT] " Akhil Goyal
1 sibling, 1 reply; 184+ messages in thread
From: Andrew Rybchenko @ 2022-01-22 7:38 UTC (permalink / raw)
To: Akhil Goyal, dev
Cc: anoobj, radu.nicolau, declan.doherty, hemant.agrawal, matan,
konstantin.ananyev, thomas, ferruh.yigit, olivier.matz, rosen.xu
On 1/3/22 18:08, Akhil Goyal wrote:
> IP Reassembly is a costly operation if it is done in software.
> The operation becomes even more costly if the IP fragments are encrypted.
> However, if it is offloaded to HW, it can considerably save application cycles.
>
> Hence, a new offload RTE_ETH_RX_OFFLOAD_IP_REASSEMBLY is introduced in
> ethdev for devices which can attempt reassembly of packets in hardware.
> rte_eth_dev_info is updated with the reassembly capabilities which a device
> can support.
Yes, reassembly is a really complicated process, given the possibility of
overlapping fragments, out-of-order arrival, etc.
There are network attacks based on IP reassembly.
Will it simply result in IP reassembly failure if no buffers are left
for IP fragments? What will be reported in the mbuf if some fragments
overlap? Just the raw packets as-is, or a reassembly result with holes?
I think the behaviour should be specified.
> The resulting reassembled packet would be a typical segmented mbuf in
> case of success.
>
> And if reassembly of fragments is failed or is incomplete (if fragments do
> not come before the reass_timeout), the mbuf ol_flags can be updated.
> This is updated in a subsequent patch.
>
> Signed-off-by: Akhil Goyal <gakhil@marvell.com>
> ---
> doc/guides/nics/features.rst | 12 ++++++++++++
> lib/ethdev/rte_ethdev.c | 1 +
> lib/ethdev/rte_ethdev.h | 32 +++++++++++++++++++++++++++++++-
> 3 files changed, 44 insertions(+), 1 deletion(-)
>
> diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
> index 27be2d2576..1dfdee9602 100644
> --- a/doc/guides/nics/features.rst
> +++ b/doc/guides/nics/features.rst
> @@ -602,6 +602,18 @@ Supports inner packet L4 checksum.
> ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM``.
>
>
> +.. _nic_features_ip_reassembly:
> +
> +IP reassembly
> +-------------
> +
> +Supports IP reassembly in hardware.
> +
> +* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_IP_REASSEMBLY``.
Looking at the patch I see no changes to, or usage of, rte_eth_rxconf and
rte_eth_rxmode. It should be added here later if the corresponding changes
come in subsequent patches.
> +* **[provides] mbuf**: ``mbuf.ol_flags:RTE_MBUF_F_RX_IP_REASSEMBLY_INCOMPLETE``
Same here. The flag is not defined yet. So, it must not be mentioned in
the patch.
.
> +* **[provides] rte_eth_dev_info**: ``reass_capa``.
> +
> +
> .. _nic_features_shared_rx_queue:
>
> Shared Rx queue
> diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
> index fa299c8ad7..11427b2e4d 100644
> --- a/lib/ethdev/rte_ethdev.h
> +++ b/lib/ethdev/rte_ethdev.h
[snip]
> @@ -1781,6 +1782,33 @@ enum rte_eth_representor_type {
> RTE_ETH_REPRESENTOR_PF, /**< representor of Physical Function. */
> };
>
> +/* Flag to offload IP reassembly for IPv4 packets. */
> +#define RTE_ETH_DEV_REASSEMBLY_F_IPV4 (RTE_BIT32(0))
> +/* Flag to offload IP reassembly for IPv6 packets. */
> +#define RTE_ETH_DEV_REASSEMBLY_F_IPV6 (RTE_BIT32(1))
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this structure may change without prior notice.
> + *
> + * A structure used to set IP reassembly configuration.
In the patch the structure is used to provide capabilities,
not to set configuration.
If you are going to use the same structure in capabilities and
configuration, it could be handy, but really confusing since
interpretation of fields should be different.
As a bare minimum the difference must be specified in comments.
Right now all fields make sense in capabilities and configuration:
maximum possible vs actual value, however, not everything could be
really configurable and it will become confusing. It is really hard
to discuss right now since the patch does not provide usage of the
structure for the configuration.
> + *
> + * If RTE_ETH_RX_OFFLOAD_IP_REASSEMBLY flag is set in offloads field,
> + * the PMD will attempt IP reassembly for the received packets as per
> + * properties defined in this structure:
> + *
> + */
> +struct rte_eth_ip_reass_params {
> + /** Maximum time in ms which PMD can wait for other fragments. */
> + uint32_t reass_timeout;
Please, specify units. May be even in field name. E.g. reass_timeout_ms.
> + /** Maximum number of fragments that can be reassembled. */
> + uint16_t max_frags;
> + /**
> + * Flags to enable reassembly of packet types -
> + * RTE_ETH_DEV_REASSEMBLY_F_xxx.
> + */
> + uint16_t flags;
If it is just for packet types, I'd suggest naming the field more
precisely. It will also avoid flags vs frags misreading.
Just an idea. Up to you.
> +};
> +
> /**
> * A structure used to retrieve the contextual information of
> * an Ethernet device, such as the controlling driver of the
> @@ -1841,8 +1869,10 @@ struct rte_eth_dev_info {
> * embedded managed interconnect/switch.
> */
> struct rte_eth_switch_info switch_info;
> + /** IP reassembly offload capabilities that a device can support. */
> + struct rte_eth_ip_reass_params reass_capa;
>
> - uint64_t reserved_64s[2]; /**< Reserved for future fields */
> + uint64_t reserved_64s[1]; /**< Reserved for future fields */
> void *reserved_ptrs[2]; /**< Reserved for future fields */
> };
>
^ permalink raw reply [flat|nested] 184+ messages in thread
* Re: [PATCH v2 2/4] ethdev: add dev op to set/get IP reassembly configuration
2022-01-20 16:26 ` [PATCH v2 2/4] ethdev: add dev op to set/get IP reassembly configuration Akhil Goyal
@ 2022-01-22 8:17 ` Andrew Rybchenko
2022-01-30 16:30 ` [EXT] " Akhil Goyal
0 siblings, 1 reply; 184+ messages in thread
From: Andrew Rybchenko @ 2022-01-22 8:17 UTC (permalink / raw)
To: Akhil Goyal, dev
Cc: anoobj, radu.nicolau, declan.doherty, hemant.agrawal, matan,
konstantin.ananyev, thomas, ferruh.yigit, olivier.matz, rosen.xu,
jerinj
On 1/20/22 19:26, Akhil Goyal wrote:
> A new ethernet device op is added to give application control over
ethernet -> Ethernet
> the IP reassembly configuration. This operation is an optional
> call from the application, default values are set by PMD and
> exposed via rte_eth_dev_info.
Are defaults or maximum supported values exposed via rte_eth_dev_info?
I guess it should be the maximum. Defaults can be obtained using
get without set.
> Application should always first retrieve the capabilities from
> rte_eth_dev_info and then set the fields accordingly.
> User can get the currently set values using the get API.
>
> Signed-off-by: Akhil Goyal <gakhil@marvell.com>
[snip]
> +/**
> + * @internal
> + * Set configuration parameters for enabling IP reassembly offload in hardware.
> + *
> + * @param dev
> + * Port (ethdev) handle
> + *
> + * @param[in] conf
> + * Configuration parameters for IP reassembly.
> + *
> + * @return
> + * Negative errno value on error, zero otherwise
> + */
> +typedef int (*eth_ip_reassembly_conf_set_t)(struct rte_eth_dev *dev,
> + struct rte_eth_ip_reass_params *conf);
const
[snip]
> +int
> +rte_eth_ip_reassembly_conf_get(uint16_t port_id,
> + struct rte_eth_ip_reass_params *conf)
Please, preserve order everywhere. If get comes first, it must be first
everywhere.
> +{
> + struct rte_eth_dev *dev;
> +
> + RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
> + dev = &rte_eth_devices[port_id];
> +
> + if (conf == NULL) {
> + RTE_ETHDEV_LOG(ERR, "Cannot get reassembly info to NULL");
> + return -EINVAL;
> + }
Why is order of check different in set and get?
> +
> + if (dev->data->dev_configured == 0) {
> + RTE_ETHDEV_LOG(ERR,
> + "Device with port_id=%"PRIu16" is not configured.\n",
> + port_id);
> + return -EINVAL;
> + }
> +
> + if ((dev->data->dev_conf.rxmode.offloads &
> + RTE_ETH_RX_OFFLOAD_IP_REASSEMBLY) == 0) {
> + RTE_ETHDEV_LOG(ERR,
> + "The port (ID=%"PRIu16") is not configured for IP reassembly\n",
> + port_id);
> + return -EINVAL;
> + }
> +
> + RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->ip_reassembly_conf_get,
> + -ENOTSUP);
> + memset(conf, 0, sizeof(struct rte_eth_ip_reass_params));
> + return eth_err(port_id,
> + (*dev->dev_ops->ip_reassembly_conf_get)(dev, conf));
> +}
> +
> RTE_LOG_REGISTER_DEFAULT(rte_eth_dev_logtype, INFO);
>
> RTE_INIT(ethdev_init_telemetry)
> diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
> index 11427b2e4d..53af158bcb 100644
> --- a/lib/ethdev/rte_ethdev.h
> +++ b/lib/ethdev/rte_ethdev.h
> @@ -5218,6 +5218,57 @@ int rte_eth_representor_info_get(uint16_t port_id,
> __rte_experimental
> int rte_eth_rx_metadata_negotiate(uint16_t port_id, uint64_t *features);
>
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Get IP reassembly configuration parameters currently set in PMD,
> + * if device rx offload flag (RTE_ETH_RX_OFFLOAD_IP_REASSEMBLY) is
rx -> Rx
> + * enabled and the PMD supports IP reassembly offload.
> + *
> + * @param port_id
> + * The port identifier of the device.
> + * @param conf
> + * A pointer to rte_eth_ip_reass_params structure.
> + * @return
> + * - (-ENOTSUP) if offload configuration is not supported by device.
> + * - (-EINVAL) if offload is not enabled in rte_eth_conf.
> + * - (-ENODEV) if *port_id* invalid.
> + * - (-EIO) if device is removed.
> + * - (0) on success.
> + */
> +__rte_experimental
> +int rte_eth_ip_reassembly_conf_get(uint16_t port_id,
> + struct rte_eth_ip_reass_params *conf);
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Set IP reassembly configuration parameters if device rx offload
rx -> Rx
> + * flag (RTE_ETH_RX_OFFLOAD_IP_REASSEMBLY) is enabled and the PMD
> + * supports IP reassembly offload. User should first check the
> + * reass_capa in rte_eth_dev_info before setting the configuration.
> + * The values of configuration parameters must not exceed the device
> + * capabilities.
It sounds like set API should retrieve dev_info and check set
values vs maximums.
> The use of this API is optional and if called, it
> + * should be called before rte_eth_dev_start().
It should be highlighted that the device must be already configured.
> + *
> + * @param port_id
> + * The port identifier of the device.
> + * @param conf
> + * A pointer to rte_eth_ip_reass_params structure.
> + * @return
> + * - (-ENOTSUP) if offload configuration is not supported by device.
> + * - (-EINVAL) if offload is not enabled in rte_eth_conf.
> + * - (-ENODEV) if *port_id* invalid.
> + * - (-EIO) if device is removed.
> + * - (0) on success.
> + */
> +__rte_experimental
> +int rte_eth_ip_reassembly_conf_set(uint16_t port_id,
> + struct rte_eth_ip_reass_params *conf);
> +
> +
> #include <rte_ethdev_core.h>
>
> /**
> diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
> index c2fb0669a4..ad829dd47e 100644
> --- a/lib/ethdev/version.map
> +++ b/lib/ethdev/version.map
> @@ -256,6 +256,10 @@ EXPERIMENTAL {
> rte_flow_flex_item_create;
> rte_flow_flex_item_release;
> rte_flow_pick_transfer_proxy;
> +
> + #added in 22.03
> + rte_eth_ip_reassembly_conf_get;
> + rte_eth_ip_reassembly_conf_set;
> };
>
> INTERNAL {
^ permalink raw reply [flat|nested] 184+ messages in thread
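Regarding the review comment above that the order of checks differs between set() and get(), one common remedy is a shared precondition helper so both entry points validate identically. This is a self-contained sketch with simulated device state, not the actual ethdev implementation; the struct and helper names are invented for illustration:

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

/* Simulated device state: only the two preconditions the thread
 * discusses (device configured, reassembly offload enabled). */
struct dev_sim {
	int configured;
	int reass_offload_enabled;
};

/* Shared validation used by both the get() and set() paths, so the
 * order of checks cannot diverge between them. */
static int
reass_conf_precheck(const struct dev_sim *dev, const void *conf)
{
	if (conf == NULL)
		return -EINVAL;
	if (!dev->configured)
		return -EINVAL;
	if (!dev->reass_offload_enabled)
		return -EINVAL;
	return 0;
}
```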
* RE: [EXT] Re: [PATCH v2 2/4] ethdev: add dev op to set/get IP reassembly configuration
2022-01-22 8:17 ` Andrew Rybchenko
@ 2022-01-30 16:30 ` Akhil Goyal
0 siblings, 0 replies; 184+ messages in thread
From: Akhil Goyal @ 2022-01-30 16:30 UTC (permalink / raw)
To: Andrew Rybchenko, dev
Cc: Anoob Joseph, radu.nicolau, declan.doherty, hemant.agrawal,
matan, konstantin.ananyev, thomas, ferruh.yigit, olivier.matz,
rosen.xu, Jerin Jacob Kollanukkaran
Hi Andrew,
> On 1/20/22 19:26, Akhil Goyal wrote:
> > A new ethernet device op is added to give application control over
>
> ethernet -> Ethernet
Ok
>
> > the IP reassembly configuration. This operation is an optional
> > call from the application, default values are set by PMD and
> > exposed via rte_eth_dev_info.
>
> Are defaults or maximum support values exposed via rte_eth_dev_info?
> I guess it should be maximum. Defaults can be obtained using
> get without set.
>
rte_eth_dev_info gives the maximum values/capabilities that a device can
support, and also the default values used if the user does not call the
set() API.
The get() op will give the currently set values.
> > Application should always first retrieve the capabilities from
> > rte_eth_dev_info and then set the fields accordingly.
> > User can get the currently set values using the get API.
> >
> > Signed-off-by: Akhil Goyal <gakhil@marvell.com>
>
> [snip]
>
>
> > +/**
> > + * @internal
> > + * Set configuration parameters for enabling IP reassembly offload in
> hardware.
> > + *
> > + * @param dev
> > + * Port (ethdev) handle
> > + *
> > + * @param[in] conf
> > + * Configuration parameters for IP reassembly.
> > + *
> > + * @return
> > + * Negative errno value on error, zero otherwise
> > + */
> > +typedef int (*eth_ip_reassembly_conf_set_t)(struct rte_eth_dev *dev,
> > + struct rte_eth_ip_reass_params *conf);
>
> const
>
> [snip]
>
> > +int
> > +rte_eth_ip_reassembly_conf_get(uint16_t port_id,
> > + struct rte_eth_ip_reass_params *conf)
>
> Please, preserve order everywhere. If get comes first, it must be first
> everywhere.
ok
>
> > +{
> > + struct rte_eth_dev *dev;
> > +
> > + RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
> > + dev = &rte_eth_devices[port_id];
> > +
> > + if (conf == NULL) {
> > + RTE_ETHDEV_LOG(ERR, "Cannot get reassembly info to NULL");
> > + return -EINVAL;
> > + }
>
> Why is order of check different in set and get?
Ok will correct it.
>
> > +
> > + if (dev->data->dev_configured == 0) {
> > + RTE_ETHDEV_LOG(ERR,
> > + "Device with port_id=%"PRIu16" is not configured.\n",
> > + port_id);
> > + return -EINVAL;
> > + }
> > +
> > + if ((dev->data->dev_conf.rxmode.offloads &
> > + RTE_ETH_RX_OFFLOAD_IP_REASSEMBLY) == 0) {
> > + RTE_ETHDEV_LOG(ERR,
> > + "The port (ID=%"PRIu16") is not configured for IP
> reassembly\n",
> > + port_id);
> > + return -EINVAL;
> > + }
> > +
> > + RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops-
> >ip_reassembly_conf_get,
> > + -ENOTSUP);
> > + memset(conf, 0, sizeof(struct rte_eth_ip_reass_params));
> > + return eth_err(port_id,
> > + (*dev->dev_ops->ip_reassembly_conf_get)(dev, conf));
> > +}
> > +
> > RTE_LOG_REGISTER_DEFAULT(rte_eth_dev_logtype, INFO);
> >
> > RTE_INIT(ethdev_init_telemetry)
> > diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
> > index 11427b2e4d..53af158bcb 100644
> > --- a/lib/ethdev/rte_ethdev.h
> > +++ b/lib/ethdev/rte_ethdev.h
> > @@ -5218,6 +5218,57 @@ int rte_eth_representor_info_get(uint16_t port_id,
> > __rte_experimental
> > int rte_eth_rx_metadata_negotiate(uint16_t port_id, uint64_t *features);
> >
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice
> > + *
> > + * Get IP reassembly configuration parameters currently set in PMD,
> > + * if device rx offload flag (RTE_ETH_RX_OFFLOAD_IP_REASSEMBLY) is
>
> rx -> Rx
>
> > + * enabled and the PMD supports IP reassembly offload.
> > + *
> > + * @param port_id
> > + * The port identifier of the device.
> > + * @param conf
> > + * A pointer to rte_eth_ip_reass_params structure.
> > + * @return
> > + * - (-ENOTSUP) if offload configuration is not supported by device.
> > + * - (-EINVAL) if offload is not enabled in rte_eth_conf.
> > + * - (-ENODEV) if *port_id* invalid.
> > + * - (-EIO) if device is removed.
> > + * - (0) on success.
> > + */
> > +__rte_experimental
> > +int rte_eth_ip_reassembly_conf_get(uint16_t port_id,
> > + struct rte_eth_ip_reass_params *conf);
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice
> > + *
> > + * Set IP reassembly configuration parameters if device rx offload
>
> rx -> Rx
>
Ok
> > + * flag (RTE_ETH_RX_OFFLOAD_IP_REASSEMBLY) is enabled and the PMD
> > + * supports IP reassembly offload. User should first check the
> > + * reass_capa in rte_eth_dev_info before setting the configuration.
> > + * The values of configuration parameters must not exceed the device
> > + * capabilities.
>
> It sounds like set API should retrieve dev_info and check set
> values vs maximums.
Yes.
>
> > The use of this API is optional and if called, it
> > + * should be called before rte_eth_dev_start().
>
> It should be highlighted that the device must be already configured.
Where should this be highlighted?
^ permalink raw reply [flat|nested] 184+ messages in thread
* RE: [EXT] Re: [PATCH 1/8] ethdev: introduce IP reassembly offload
2022-01-22 7:38 ` Andrew Rybchenko
@ 2022-01-30 16:53 ` Akhil Goyal
0 siblings, 0 replies; 184+ messages in thread
From: Akhil Goyal @ 2022-01-30 16:53 UTC (permalink / raw)
To: Andrew Rybchenko, dev
Cc: Anoob Joseph, radu.nicolau, declan.doherty, hemant.agrawal,
matan, konstantin.ananyev, thomas, ferruh.yigit, olivier.matz,
rosen.xu
Hi Andrew,
Thanks for the review.
> On 1/3/22 18:08, Akhil Goyal wrote:
> > IP Reassembly is a costly operation if it is done in software.
> > The operation becomes even more costly if the IP fragments are encrypted.
> > However, if it is offloaded to HW, it can considerably save application cycles.
> >
> > Hence, a new offload RTE_ETH_RX_OFFLOAD_IP_REASSEMBLY is introduced
> in
> > ethdev for devices which can attempt reassembly of packets in hardware.
> > rte_eth_dev_info is updated with the reassembly capabilities which a device
> > can support.
>
> Yes, reassembly is a really complicated process, given the possibility of
> overlapping fragments, out-of-order arrival, etc.
> There are network attacks based on IP reassembly.
> Will it simply result in IP reassembly failure if no buffers are left
> for IP fragments? What will be reported in the mbuf if some fragments
> overlap? Just the raw packets as-is, or a reassembly result with holes?
> I think the behaviour should be specified.
The PMD will set the reassembly-incomplete dynflag, and the user can
retrieve the mbufs using dynfields. This is shown in v2 of the patchset,
and a test app is also added to cover this negative case.
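To make that dynflag flow concrete, here is a self-contained sketch of how an application might test the flag on a received mbuf. The field names and the bit position are invented for illustration; the real patch registers the flag and field at runtime via the rte_mbuf dynflag/dynfield APIs rather than using fixed values:

```c
#include <assert.h>
#include <stdint.h>

/* Simulated mbuf with only the fields needed for the sketch. */
struct mbuf_sim {
	uint64_t ol_flags;      /* offload flags, as in rte_mbuf */
	uint16_t nb_frags_left; /* stand-in for the dynfield payload */
};

/* Bit position a dynflag registration might return (illustrative). */
static const int reass_incomplete_bit = 40;

/* Nonzero when HW reassembly did not complete and the application must
 * fall back to software reassembly of the returned fragments. */
static int
reass_incomplete(const struct mbuf_sim *m)
{
	return (m->ol_flags & (1ULL << reass_incomplete_bit)) != 0;
}
```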
>
> > The resulting reassembled packet would be a typical segmented mbuf in
> > case of success.
> >
> > And if reassembly of fragments is failed or is incomplete (if fragments do
> > not come before the reass_timeout), the mbuf ol_flags can be updated.
> > This is updated in a subsequent patch.
> >
> > Signed-off-by: Akhil Goyal <gakhil@marvell.com>
> > ---
> > doc/guides/nics/features.rst | 12 ++++++++++++
> > lib/ethdev/rte_ethdev.c | 1 +
> > lib/ethdev/rte_ethdev.h | 32 +++++++++++++++++++++++++++++++-
> > 3 files changed, 44 insertions(+), 1 deletion(-)
> >
> > diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
> > index 27be2d2576..1dfdee9602 100644
> > --- a/doc/guides/nics/features.rst
> > +++ b/doc/guides/nics/features.rst
> > @@ -602,6 +602,18 @@ Supports inner packet L4 checksum.
> >
> ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_OUTER_UD
> P_CKSUM``.
> >
> >
> > +.. _nic_features_ip_reassembly:
> > +
> > +IP reassembly
> > +-------------
> > +
> > +Supports IP reassembly in hardware.
> > +
> > +* **[uses] rte_eth_rxconf,rte_eth_rxmode**:
> ``offloads:RTE_ETH_RX_OFFLOAD_IP_REASSEMBLY``.
>
> Looking at the patch I see no changes and usage of rte_eth_rxconf and
> rte_eth_rxmode. It should be added here later if corresponding changes
> come in subsequent patches.
>
> > +* **[provides] mbuf**:
> ``mbuf.ol_flags:RTE_MBUF_F_RX_IP_REASSEMBLY_INCOMPLETE``
>
> Same here. The flag is not defined yet. So, it must not be mentioned in
> the patch.
> .
Ok
> > +* **[provides] rte_eth_dev_info**: ``reass_capa``.
> > +
> > +
> > .. _nic_features_shared_rx_queue:
> >
> > Shared Rx queue
>
>
> > diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
> > index fa299c8ad7..11427b2e4d 100644
> > --- a/lib/ethdev/rte_ethdev.h
> > +++ b/lib/ethdev/rte_ethdev.h
>
> [snip]
>
> > @@ -1781,6 +1782,33 @@ enum rte_eth_representor_type {
> > RTE_ETH_REPRESENTOR_PF, /**< representor of Physical Function. */
> > };
> >
> > +/* Flag to offload IP reassembly for IPv4 packets. */
> > +#define RTE_ETH_DEV_REASSEMBLY_F_IPV4 (RTE_BIT32(0))
> > +/* Flag to offload IP reassembly for IPv6 packets. */
> > +#define RTE_ETH_DEV_REASSEMBLY_F_IPV6 (RTE_BIT32(1))
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this structure may change without prior notice.
> > + *
> > + * A structure used to set IP reassembly configuration.
>
> In the patch the structure is used to provide capabilities,
> not to set configuration.
>
> If you are going to use the same structure in capabilities and
> configuration, it could be handy, but really confusing since
> interpretation of fields should be different.
> As a bare minimum the difference must be specified in comments.
> Right now all fields makes sense in capabilities and configuration:
> maximum possible vs actual value, however, not everything could be
> really configurable and it will become confusing. It is really hard
> to discuss right now since the patch does not provide usage of the
> structure for the configuration.
The idea is to use it both for capabilities as well as for configuration
of those values, and, from the library's perspective, not to impose any
limitation on which params can and cannot be configured.
However, I will add a comment to specify both usages.
Could you please check v2 and the test app for the usage?
>
> > + *
> > + * If RTE_ETH_RX_OFFLOAD_IP_REASSEMBLY flag is set in offloads field,
> > + * the PMD will attempt IP reassembly for the received packets as per
> > + * properties defined in this structure:
> > + *
> > + */
> > +struct rte_eth_ip_reass_params {
> > + /** Maximum time in ms which PMD can wait for other fragments. */
> > + uint32_t reass_timeout;
>
> Please, specify units. May be even in field name. E.g. reass_timeout_ms.
Ok
>
> > + /** Maximum number of fragments that can be reassembled. */
> > + uint16_t max_frags;
> > + /**
> > + * Flags to enable reassembly of packet types -
> > + * RTE_ETH_DEV_REASSEMBLY_F_xxx.
> > + */
> > + uint16_t flags;
>
> If it is just for packet types, I'd suggest to name the field more
> precise. Also it will avoid flags vs frags misreading.
> Just an idea. Up to you.
Currently it is for packet types only, but it can be extended for further
usage in the future. Hence I thought of making it generic - flags.
What do you suggest? How about renaming it to 'options'?
Regards,
Akhil
^ permalink raw reply [flat|nested] 184+ messages in thread
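Whatever the field is ultimately named ('flags' or 'options'), it is a plain bitmask. A self-contained sketch of how the IPv4/IPv6 bits from the patch would be combined and checked against capabilities; the macro and function names here are illustrative mirrors, with values matching RTE_BIT32(0)/RTE_BIT32(1):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative mirrors of the patch's packet-type bits. */
#define REASSEMBLY_F_IPV4 (UINT32_C(1) << 0)
#define REASSEMBLY_F_IPV6 (UINT32_C(1) << 1)

/* Nonzero when every packet type requested by the application is
 * covered by the device's advertised capability bits. */
static int
reass_types_supported(uint16_t requested, uint16_t capa)
{
	return (requested & ~capa) == 0;
}
```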
* [PATCH v3 0/4] ethdev: introduce IP reassembly offload
2022-01-20 16:26 ` [PATCH v2 0/4] " Akhil Goyal
` (3 preceding siblings ...)
2022-01-20 16:26 ` [PATCH v2 4/4] security: add IPsec option for " Akhil Goyal
@ 2022-01-30 17:59 ` Akhil Goyal
2022-01-30 17:59 ` [PATCH v3 1/4] " Akhil Goyal
` (5 more replies)
4 siblings, 6 replies; 184+ messages in thread
From: Akhil Goyal @ 2022-01-30 17:59 UTC (permalink / raw)
To: dev
Cc: anoobj, matan, konstantin.ananyev, thomas, ferruh.yigit,
andrew.rybchenko, rosen.xu, jerinj, stephen, mdr, Akhil Goyal
As discussed in the RFC[1] sent in 21.11, a new offload is
introduced in ethdev for IP reassembly.
This patchset adds the IP reassembly Rx offload.
Currently, the offload is tested along with inline IPsec processing.
It can also be updated as a standalone offload without IPsec, if there
is hardware available to test it.
The patchset is tested on cnxk platform. The driver implementation
and a test app are added as separate patchsets.
[1]: http://patches.dpdk.org/project/dpdk/patch/20210823100259.1619886-1-gakhil@marvell.com/
changes in v3:
- incorporated comments from Andrew and Stephen Hemminger
changes in v2:
- added abi ignore exceptions for modifications in reserved fields.
Added a crude way to subside the rte_security and rte_ipsec ABI issue.
Please suggest a better way.
- incorporated Konstantin's comment for extra checks in new API
introduced.
- converted static mbuf ol_flag to mbuf dynflag (Konstantin)
- added a get API for reassembly configuration (Konstantin)
- Fixed checkpatch issues.
- Dynfield is NOT split into 2 parts as it would cause an extra fetch in
case of IP reassembly failure.
- Application patches are split into a separate series.
Akhil Goyal (4):
ethdev: introduce IP reassembly offload
ethdev: add dev op to set/get IP reassembly configuration
ethdev: add mbuf dynfield for incomplete IP reassembly
security: add IPsec option for IP reassembly
devtools/libabigail.abignore | 19 ++++++
doc/guides/nics/features.rst | 12 ++++
lib/ethdev/ethdev_driver.h | 45 +++++++++++++++
lib/ethdev/rte_ethdev.c | 109 +++++++++++++++++++++++++++++++++++
lib/ethdev/rte_ethdev.h | 100 +++++++++++++++++++++++++++++++-
lib/ethdev/version.map | 5 ++
lib/security/rte_security.h | 12 +++-
7 files changed, 300 insertions(+), 2 deletions(-)
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* [PATCH v3 1/4] ethdev: introduce IP reassembly offload
2022-01-30 17:59 ` [PATCH v3 0/4] ethdev: introduce IP reassembly offload Akhil Goyal
@ 2022-01-30 17:59 ` Akhil Goyal
2022-02-01 14:11 ` Ferruh Yigit
2022-01-30 17:59 ` [PATCH v3 2/4] ethdev: add dev op to set/get IP reassembly configuration Akhil Goyal
` (4 subsequent siblings)
5 siblings, 1 reply; 184+ messages in thread
From: Akhil Goyal @ 2022-01-30 17:59 UTC (permalink / raw)
To: dev
Cc: anoobj, matan, konstantin.ananyev, thomas, ferruh.yigit,
andrew.rybchenko, rosen.xu, jerinj, stephen, mdr, Akhil Goyal
IP Reassembly is a costly operation if it is done in software.
The operation becomes even costlier if the IP fragments are encrypted.
However, if it is offloaded to HW, it can considerably save application
cycles.
Hence, a new offload RTE_ETH_RX_OFFLOAD_IP_REASSEMBLY is introduced in
ethdev for devices which can attempt reassembly of packets in hardware.
rte_eth_dev_info is updated with the reassembly capabilities which a device
can support.
The resulting reassembled packet would be a typical segmented mbuf in
case of success.
And if reassembly of the fragments fails or is incomplete (i.e. some
fragments do not arrive before the reass_timeout), the mbuf ol_flags can be
updated. This is done in a subsequent patch.
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
devtools/libabigail.abignore | 5 +++++
doc/guides/nics/features.rst | 11 +++++++++++
lib/ethdev/rte_ethdev.c | 1 +
lib/ethdev/rte_ethdev.h | 28 +++++++++++++++++++++++++++-
4 files changed, 44 insertions(+), 1 deletion(-)
diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
index 4b676f317d..90f449c43a 100644
--- a/devtools/libabigail.abignore
+++ b/devtools/libabigail.abignore
@@ -11,3 +11,8 @@
; Ignore generated PMD information strings
[suppress_variable]
name_regexp = _pmd_info$
+
+; Ignore fields inserted in place of reserved_64s of rte_eth_dev_info
+[suppress_type]
+ name = rte_eth_dev_info
+ has_data_member_inserted_between = {offset_of(reserved_64s), end}
diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
index 27be2d2576..b45bce4a78 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -602,6 +602,17 @@ Supports inner packet L4 checksum.
``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM``.
+.. _nic_features_ip_reassembly:
+
+IP reassembly
+-------------
+
+Supports IP reassembly in hardware.
+
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_IP_REASSEMBLY``.
+* **[provides] rte_eth_dev_info**: ``reass_capa``.
+
+
.. _nic_features_shared_rx_queue:
Shared Rx queue
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index a1d475a292..d9a03f12f9 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -126,6 +126,7 @@ static const struct {
RTE_RX_OFFLOAD_BIT2STR(OUTER_UDP_CKSUM),
RTE_RX_OFFLOAD_BIT2STR(RSS_HASH),
RTE_RX_OFFLOAD_BIT2STR(BUFFER_SPLIT),
+ RTE_RX_OFFLOAD_BIT2STR(IP_REASSEMBLY),
};
#undef RTE_RX_OFFLOAD_BIT2STR
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index fa299c8ad7..cfaf7a5afc 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -1586,6 +1586,7 @@ struct rte_eth_conf {
#define RTE_ETH_RX_OFFLOAD_RSS_HASH RTE_BIT64(19)
#define DEV_RX_OFFLOAD_RSS_HASH RTE_ETH_RX_OFFLOAD_RSS_HASH
#define RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT RTE_BIT64(20)
+#define RTE_ETH_RX_OFFLOAD_IP_REASSEMBLY RTE_BIT64(21)
#define RTE_ETH_RX_OFFLOAD_CHECKSUM (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | \
RTE_ETH_RX_OFFLOAD_UDP_CKSUM | \
@@ -1781,6 +1782,29 @@ enum rte_eth_representor_type {
RTE_ETH_REPRESENTOR_PF, /**< representor of Physical Function. */
};
+/* Flag to offload IP reassembly for IPv4 packets. */
+#define RTE_ETH_DEV_REASSEMBLY_F_IPV4 (RTE_BIT32(0))
+/* Flag to offload IP reassembly for IPv6 packets. */
+#define RTE_ETH_DEV_REASSEMBLY_F_IPV6 (RTE_BIT32(1))
+/**
+ * A structure used to get/set IP reassembly configuration.
+ *
+ * If RTE_ETH_RX_OFFLOAD_IP_REASSEMBLY flag is set in offloads field,
+ * the PMD will attempt IP reassembly for the received packets as per
+ * properties defined in this structure.
+ */
+struct rte_eth_ip_reass_params {
+ /** Maximum time in ms which PMD can wait for other fragments. */
+ uint32_t reass_timeout_ms;
+ /** Maximum number of fragments that can be reassembled. */
+ uint16_t max_frags;
+ /**
+ * Flags to enable reassembly of packet types -
+ * RTE_ETH_DEV_REASSEMBLY_F_xxx.
+ */
+ uint16_t flags;
+};
+
/**
* A structure used to retrieve the contextual information of
* an Ethernet device, such as the controlling driver of the
@@ -1841,8 +1865,10 @@ struct rte_eth_dev_info {
* embedded managed interconnect/switch.
*/
struct rte_eth_switch_info switch_info;
+ /** IP reassembly offload capabilities that a device can support. */
+ struct rte_eth_ip_reass_params reass_capa;
- uint64_t reserved_64s[2]; /**< Reserved for future fields */
+ uint64_t reserved_64s[1]; /**< Reserved for future fields */
void *reserved_ptrs[2]; /**< Reserved for future fields */
};
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* [PATCH v3 2/4] ethdev: add dev op to set/get IP reassembly configuration
2022-01-30 17:59 ` [PATCH v3 0/4] ethdev: introduce IP reassembly offload Akhil Goyal
2022-01-30 17:59 ` [PATCH v3 1/4] " Akhil Goyal
@ 2022-01-30 17:59 ` Akhil Goyal
2022-01-30 17:59 ` [PATCH v3 3/4] ethdev: add mbuf dynfield for incomplete IP reassembly Akhil Goyal
` (3 subsequent siblings)
5 siblings, 0 replies; 184+ messages in thread
From: Akhil Goyal @ 2022-01-30 17:59 UTC (permalink / raw)
To: dev
Cc: anoobj, matan, konstantin.ananyev, thomas, ferruh.yigit,
andrew.rybchenko, rosen.xu, jerinj, stephen, mdr, Akhil Goyal
A new Ethernet device op is added to give the application control over
the IP reassembly configuration. This operation is an optional
call from the application; default/max values are set by the PMD and
exposed via rte_eth_dev_info.
The application should always first retrieve the capabilities from
rte_eth_dev_info and then set the fields accordingly. The set
API should be called before starting the device.
The user can get the currently set values using the get API.
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
---
doc/guides/nics/features.rst | 1 +
lib/ethdev/ethdev_driver.h | 37 +++++++++++++++++
lib/ethdev/rte_ethdev.c | 80 ++++++++++++++++++++++++++++++++++++
lib/ethdev/rte_ethdev.h | 51 +++++++++++++++++++++++
lib/ethdev/version.map | 4 ++
5 files changed, 173 insertions(+)
diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
index b45bce4a78..2a3cf09066 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -611,6 +611,7 @@ Supports IP reassembly in hardware.
* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_IP_REASSEMBLY``.
* **[provides] rte_eth_dev_info**: ``reass_capa``.
+* **[provides] eth_dev_ops**: ``ip_reassembly_conf_get:ip_reassembly_conf_set``.
.. _nic_features_shared_rx_queue:
diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
index d95605a355..a310001648 100644
--- a/lib/ethdev/ethdev_driver.h
+++ b/lib/ethdev/ethdev_driver.h
@@ -990,6 +990,38 @@ typedef int (*eth_representor_info_get_t)(struct rte_eth_dev *dev,
typedef int (*eth_rx_metadata_negotiate_t)(struct rte_eth_dev *dev,
uint64_t *features);
+/**
+ * @internal
+ * Get IP reassembly offload configuration parameters set in PMD.
+ *
+ * @param dev
+ * Port (ethdev) handle
+ *
+ * @param[out] conf
+ * Configuration parameters for IP reassembly.
+ *
+ * @return
+ * Negative errno value on error, zero otherwise
+ */
+typedef int (*eth_ip_reassembly_conf_get_t)(struct rte_eth_dev *dev,
+ struct rte_eth_ip_reass_params *conf);
+
+/**
+ * @internal
+ * Set configuration parameters for enabling IP reassembly offload in hardware.
+ *
+ * @param dev
+ * Port (ethdev) handle
+ *
+ * @param[in] conf
+ * Configuration parameters for IP reassembly.
+ *
+ * @return
+ * Negative errno value on error, zero otherwise
+ */
+typedef int (*eth_ip_reassembly_conf_set_t)(struct rte_eth_dev *dev,
+ struct rte_eth_ip_reass_params *conf);
+
/**
* @internal A structure containing the functions exported by an Ethernet driver.
*/
@@ -1186,6 +1218,11 @@ struct eth_dev_ops {
* kinds of metadata to the PMD
*/
eth_rx_metadata_negotiate_t rx_metadata_negotiate;
+
+ /** Get IP reassembly configuration */
+ eth_ip_reassembly_conf_get_t ip_reassembly_conf_get;
+ /** Set IP reassembly configuration */
+ eth_ip_reassembly_conf_set_t ip_reassembly_conf_set;
};
/**
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index d9a03f12f9..6e9a8cf33b 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -6473,6 +6473,86 @@ rte_eth_rx_metadata_negotiate(uint16_t port_id, uint64_t *features)
(*dev->dev_ops->rx_metadata_negotiate)(dev, features));
}
+int
+rte_eth_ip_reassembly_conf_get(uint16_t port_id,
+ struct rte_eth_ip_reass_params *conf)
+{
+ struct rte_eth_dev *dev;
+
+ RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+ dev = &rte_eth_devices[port_id];
+
+ if (dev->data->dev_configured == 0) {
+ RTE_ETHDEV_LOG(ERR,
+ "Device with port_id=%"PRIu16" is not configured.\n",
+ port_id);
+ return -EINVAL;
+ }
+
+ if ((dev->data->dev_conf.rxmode.offloads &
+ RTE_ETH_RX_OFFLOAD_IP_REASSEMBLY) == 0) {
+ RTE_ETHDEV_LOG(ERR,
+ "The port (ID=%"PRIu16") is not configured for IP reassembly\n",
+ port_id);
+ return -EINVAL;
+ }
+
+ if (conf == NULL) {
+ RTE_ETHDEV_LOG(ERR, "Cannot get reassembly info to NULL");
+ return -EINVAL;
+ }
+
+ RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->ip_reassembly_conf_get,
+ -ENOTSUP);
+ memset(conf, 0, sizeof(struct rte_eth_ip_reass_params));
+ return eth_err(port_id,
+ (*dev->dev_ops->ip_reassembly_conf_get)(dev, conf));
+}
+
+int
+rte_eth_ip_reassembly_conf_set(uint16_t port_id,
+ struct rte_eth_ip_reass_params *conf)
+{
+ struct rte_eth_dev *dev;
+
+ RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+ dev = &rte_eth_devices[port_id];
+
+ if (dev->data->dev_configured == 0) {
+ RTE_ETHDEV_LOG(ERR,
+ "Device with port_id=%"PRIu16" is not configured.\n",
+ port_id);
+ return -EINVAL;
+ }
+
+ if (dev->data->dev_started != 0) {
+ RTE_ETHDEV_LOG(ERR,
+ "Device with port_id=%"PRIu16" started,\n"
+ "cannot configure IP reassembly params.\n",
+ port_id);
+ return -EINVAL;
+ }
+
+ if ((dev->data->dev_conf.rxmode.offloads &
+ RTE_ETH_RX_OFFLOAD_IP_REASSEMBLY) == 0) {
+ RTE_ETHDEV_LOG(ERR,
+ "The port (ID=%"PRIu16") is not configured for IP reassembly\n",
+ port_id);
+ return -EINVAL;
+ }
+
+ if (conf == NULL) {
+ RTE_ETHDEV_LOG(ERR,
+ "Invalid IP reassembly configuration (NULL)\n");
+ return -EINVAL;
+ }
+
+ RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->ip_reassembly_conf_set,
+ -ENOTSUP);
+ return eth_err(port_id,
+ (*dev->dev_ops->ip_reassembly_conf_set)(dev, conf));
+}
+
RTE_LOG_REGISTER_DEFAULT(rte_eth_dev_logtype, INFO);
RTE_INIT(ethdev_init_telemetry)
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index cfaf7a5afc..e3532591f4 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -5214,6 +5214,57 @@ int rte_eth_representor_info_get(uint16_t port_id,
__rte_experimental
int rte_eth_rx_metadata_negotiate(uint16_t port_id, uint64_t *features);
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Get IP reassembly configuration parameters currently set in PMD,
+ * if device Rx offload flag (RTE_ETH_RX_OFFLOAD_IP_REASSEMBLY) is
+ * enabled and the PMD supports IP reassembly offload.
+ *
+ * @param port_id
+ * The port identifier of the device.
+ * @param conf
+ * A pointer to rte_eth_ip_reass_params structure.
+ * @return
+ * - (-ENOTSUP) if offload configuration is not supported by device.
+ * - (-EINVAL) if offload is not enabled in rte_eth_conf.
+ * - (-ENODEV) if *port_id* invalid.
+ * - (-EIO) if device is removed.
+ * - (0) on success.
+ */
+__rte_experimental
+int rte_eth_ip_reassembly_conf_get(uint16_t port_id,
+ struct rte_eth_ip_reass_params *conf);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Set IP reassembly configuration parameters if device Rx offload
+ * flag (RTE_ETH_RX_OFFLOAD_IP_REASSEMBLY) is enabled and the PMD
+ * supports IP reassembly offload. User should first check the
+ * reass_capa in rte_eth_dev_info before setting the configuration.
+ * The values of configuration parameters must not exceed the device
+ * capabilities. The use of this API is optional and if called, it
+ * should be called before rte_eth_dev_start().
+ *
+ * @param port_id
+ * The port identifier of the device.
+ * @param conf
+ * A pointer to rte_eth_ip_reass_params structure.
+ * @return
+ * - (-ENOTSUP) if offload configuration is not supported by device.
+ * - (-EINVAL) if offload is not enabled in rte_eth_conf.
+ * - (-ENODEV) if *port_id* invalid.
+ * - (-EIO) if device is removed.
+ * - (0) on success.
+ */
+__rte_experimental
+int rte_eth_ip_reassembly_conf_set(uint16_t port_id,
+ struct rte_eth_ip_reass_params *conf);
+
+
#include <rte_ethdev_core.h>
/**
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index c2fb0669a4..ad829dd47e 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -256,6 +256,10 @@ EXPERIMENTAL {
rte_flow_flex_item_create;
rte_flow_flex_item_release;
rte_flow_pick_transfer_proxy;
+
+ #added in 22.03
+ rte_eth_ip_reassembly_conf_get;
+ rte_eth_ip_reassembly_conf_set;
};
INTERNAL {
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* [PATCH v3 3/4] ethdev: add mbuf dynfield for incomplete IP reassembly
2022-01-30 17:59 ` [PATCH v3 0/4] ethdev: introduce IP reassembly offload Akhil Goyal
2022-01-30 17:59 ` [PATCH v3 1/4] " Akhil Goyal
2022-01-30 17:59 ` [PATCH v3 2/4] ethdev: add dev op to set/get IP reassembly configuration Akhil Goyal
@ 2022-01-30 17:59 ` Akhil Goyal
2022-02-01 14:11 ` Ferruh Yigit
2022-01-30 17:59 ` [PATCH v3 4/4] security: add IPsec option for " Akhil Goyal
` (2 subsequent siblings)
5 siblings, 1 reply; 184+ messages in thread
From: Akhil Goyal @ 2022-01-30 17:59 UTC (permalink / raw)
To: dev
Cc: anoobj, matan, konstantin.ananyev, thomas, ferruh.yigit,
andrew.rybchenko, rosen.xu, jerinj, stephen, mdr, Akhil Goyal
Hardware IP reassembly may be incomplete for multiple reasons, e.g. the
reassembly timeout was reached or duplicate fragments arrived.
To save the application the cycles of processing these packets again, a new
mbuf dynflag is added to indicate that the received mbuf was not
reassembled properly.
If this dynflag is set, the application can retrieve the corresponding
chain of mbufs using a mbuf dynfield set by the PMD. It is then up to the
application to either drop those fragments or wait longer for more to arrive.
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
---
lib/ethdev/ethdev_driver.h | 8 ++++++++
lib/ethdev/rte_ethdev.c | 28 ++++++++++++++++++++++++++++
lib/ethdev/rte_ethdev.h | 21 +++++++++++++++++++++
lib/ethdev/version.map | 1 +
4 files changed, 58 insertions(+)
diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
index a310001648..7499a4fbf5 100644
--- a/lib/ethdev/ethdev_driver.h
+++ b/lib/ethdev/ethdev_driver.h
@@ -1689,6 +1689,14 @@ int
rte_eth_hairpin_queue_peer_unbind(uint16_t cur_port, uint16_t cur_queue,
uint32_t direction);
+/**
+ * @internal
+ * Register mbuf dynamic field and flag for IP reassembly incomplete case.
+ */
+__rte_internal
+int
+rte_eth_ip_reass_dynfield_register(int *field_offset, int *flag);
+
/*
* Legacy ethdev API used internally by drivers.
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 6e9a8cf33b..3c68a951c0 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -6553,6 +6553,34 @@ rte_eth_ip_reassembly_conf_set(uint16_t port_id,
(*dev->dev_ops->ip_reassembly_conf_set)(dev, conf));
}
+int
+rte_eth_ip_reass_dynfield_register(int *field_offset, int *flag_offset)
+{
+ static const struct rte_mbuf_dynfield field_desc = {
+ .name = RTE_ETH_IP_REASS_DYNFIELD_NAME,
+ .size = sizeof(rte_eth_ip_reass_dynfield_t),
+ .align = __alignof__(rte_eth_ip_reass_dynfield_t),
+ };
+ static const struct rte_mbuf_dynflag ip_reass_dynflag = {
+ .name = RTE_ETH_IP_REASS_INCOMPLETE_DYNFLAG_NAME,
+ };
+ int offset;
+
+ offset = rte_mbuf_dynfield_register(&field_desc);
+ if (offset < 0)
+ return -1;
+ if (field_offset != NULL)
+ *field_offset = offset;
+
+ offset = rte_mbuf_dynflag_register(&ip_reass_dynflag);
+ if (offset < 0)
+ return -1;
+ if (flag_offset != NULL)
+ *flag_offset = offset;
+
+ return 0;
+}
+
RTE_LOG_REGISTER_DEFAULT(rte_eth_dev_logtype, INFO);
RTE_INIT(ethdev_init_telemetry)
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index e3532591f4..e3e6368a1d 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -5264,6 +5264,27 @@ __rte_experimental
int rte_eth_ip_reassembly_conf_set(uint16_t port_id,
struct rte_eth_ip_reass_params *conf);
+#define RTE_ETH_IP_REASS_DYNFIELD_NAME "rte_eth_ip_reass_dynfield"
+#define RTE_ETH_IP_REASS_INCOMPLETE_DYNFLAG_NAME "rte_eth_ip_reass_incomplete_dynflag"
+
+/**
+ * In case of IP reassembly offload failure, ol_flags in mbuf will be set
+ * with RTE_MBUF_F_RX_IPREASSEMBLY_INCOMPLETE and packets will be returned
+ * without alteration. The application can retrieve the attached fragments
+ * using mbuf dynamic field.
+ */
+typedef struct {
+ /**
+ * Next fragment packet. Application should fetch dynamic field of
+ * each fragment until a NULL is received and nb_frags is 0.
+ */
+ struct rte_mbuf *next_frag;
+ /** Time spent(in ms) by HW in waiting for further fragments. */
+ uint16_t time_spent_ms;
+ /** Number of more fragments attached in mbuf dynamic fields. */
+ uint16_t nb_frags;
+} rte_eth_ip_reass_dynfield_t;
+
#include <rte_ethdev_core.h>
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index ad829dd47e..8b7578471a 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -283,6 +283,7 @@ INTERNAL {
rte_eth_hairpin_queue_peer_bind;
rte_eth_hairpin_queue_peer_unbind;
rte_eth_hairpin_queue_peer_update;
+ rte_eth_ip_reass_dynfield_register;
rte_eth_representor_id_get;
rte_eth_switch_domain_alloc;
rte_eth_switch_domain_free;
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* [PATCH v3 4/4] security: add IPsec option for IP reassembly
2022-01-30 17:59 ` [PATCH v3 0/4] ethdev: introduce IP reassembly offload Akhil Goyal
` (2 preceding siblings ...)
2022-01-30 17:59 ` [PATCH v3 3/4] ethdev: add mbuf dynfield for incomplete IP reassembly Akhil Goyal
@ 2022-01-30 17:59 ` Akhil Goyal
2022-02-01 14:12 ` Ferruh Yigit
2022-02-01 14:10 ` [PATCH v3 0/4] ethdev: introduce IP reassembly offload Ferruh Yigit
2022-02-04 22:13 ` [PATCH v4 0/3] " Akhil Goyal
5 siblings, 1 reply; 184+ messages in thread
From: Akhil Goyal @ 2022-01-30 17:59 UTC (permalink / raw)
To: dev
Cc: anoobj, matan, konstantin.ananyev, thomas, ferruh.yigit,
andrew.rybchenko, rosen.xu, jerinj, stephen, mdr, Akhil Goyal
A new option is added in IPsec to enable the driver to attempt reassembly
of inbound packets.
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
---
devtools/libabigail.abignore | 14 ++++++++++++++
lib/security/rte_security.h | 12 +++++++++++-
2 files changed, 25 insertions(+), 1 deletion(-)
diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
index 90f449c43a..c6e304282f 100644
--- a/devtools/libabigail.abignore
+++ b/devtools/libabigail.abignore
@@ -16,3 +16,17 @@
[suppress_type]
name = rte_eth_dev_info
has_data_member_inserted_between = {offset_of(reserved_64s), end}
+
+; Ignore fields inserted in place of reserved_opts of rte_security_ipsec_sa_options
+[suppress_type]
+ name = rte_ipsec_sa_prm
+ name = rte_security_ipsec_sa_options
+ has_data_member_inserted_between = {offset_of(reserved_opts), end}
+
+[suppress_type]
+ name = rte_security_capability
+ has_data_member_inserted_between = {offset_of(reserved_opts), (offset_of(reserved_opts) + 18)}
+
+[suppress_type]
+ name = rte_security_session_conf
+ has_data_member_inserted_between = {offset_of(reserved_opts), (offset_of(reserved_opts) + 18)}
diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
index 1228b6c8b1..168b837a82 100644
--- a/lib/security/rte_security.h
+++ b/lib/security/rte_security.h
@@ -264,6 +264,16 @@ struct rte_security_ipsec_sa_options {
*/
uint32_t l4_csum_enable : 1;
+ /** Enable reassembly on incoming packets.
+ *
+ * * 1: Enable driver to try reassembly of encrypted IP packets for
+ * this SA, if supported by the driver. This feature will work
+ * only if rx_offload RTE_ETH_RX_OFFLOAD_IP_REASSEMBLY is set in
+ * inline Ethernet device.
+ * * 0: Disable reassembly of packets (default).
+ */
+ uint32_t reass_en : 1;
+
/** Reserved bit fields for future extension
*
* User should ensure reserved_opts is cleared as it may change in
@@ -271,7 +281,7 @@ struct rte_security_ipsec_sa_options {
*
* Note: Reduce number of bits in reserved_opts for every new option.
*/
- uint32_t reserved_opts : 18;
+ uint32_t reserved_opts : 17;
};
/** IPSec security association direction */
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* Re: [PATCH v3 0/4] ethdev: introduce IP reassembly offload
2022-01-30 17:59 ` [PATCH v3 0/4] ethdev: introduce IP reassembly offload Akhil Goyal
` (3 preceding siblings ...)
2022-01-30 17:59 ` [PATCH v3 4/4] security: add IPsec option for " Akhil Goyal
@ 2022-02-01 14:10 ` Ferruh Yigit
2022-02-02 9:05 ` [EXT] " Akhil Goyal
2022-02-04 22:13 ` [PATCH v4 0/3] " Akhil Goyal
5 siblings, 1 reply; 184+ messages in thread
From: Ferruh Yigit @ 2022-02-01 14:10 UTC (permalink / raw)
To: Akhil Goyal, dev
Cc: anoobj, matan, konstantin.ananyev, thomas, andrew.rybchenko,
rosen.xu, jerinj, stephen, mdr
On 1/30/2022 5:59 PM, Akhil Goyal wrote:
> As discussed in the RFC[1] sent in 21.11, a new offload is
> introduced in ethdev for IP reassembly.
>
> This patchset add the IP reassembly RX offload.
> Currently, the offload is tested along with inline IPsec processing.
> It can also be updated as a standalone offload without IPsec, if there
> are some hardware available to test it.
> The patchset is tested on cnxk platform. The driver implementation
> and a test app are added as separate patchsets.>
Can you please share the links of those sets?
> [1]: http://patches.dpdk.org/project/dpdk/patch/20210823100259.1619886-1-gakhil@marvell.com/
>
> changes in v3:
> - incorporated comments from Andrew and Stephen Hemminger
>
> changes in v2:
> - added abi ignore exceptions for modifications in reserved fields.
> Added a crude way to subside the rte_security and rte_ipsec ABI issue.
> Please suggest a better way.
> - incorporated Konstantin's comment for extra checks in new API
> introduced.
> - converted static mbuf ol_flag to mbuf dynflag (Konstantin)
> - added a get API for reassembly configuration (Konstantin)
> - Fixed checkpatch issues.
> - Dynfield is NOT split into 2 parts as it would cause an extra fetch in
> case of IP reassembly failure.
> - Application patches are split into a separate series.
>
>
> Akhil Goyal (4):
> ethdev: introduce IP reassembly offload
> ethdev: add dev op to set/get IP reassembly configuration
> ethdev: add mbuf dynfield for incomplete IP reassembly
> security: add IPsec option for IP reassembly
>
> devtools/libabigail.abignore | 19 ++++++
> doc/guides/nics/features.rst | 12 ++++
> lib/ethdev/ethdev_driver.h | 45 +++++++++++++++
> lib/ethdev/rte_ethdev.c | 109 +++++++++++++++++++++++++++++++++++
> lib/ethdev/rte_ethdev.h | 100 +++++++++++++++++++++++++++++++-
> lib/ethdev/version.map | 5 ++
> lib/security/rte_security.h | 12 +++-
> 7 files changed, 300 insertions(+), 2 deletions(-)
>
^ permalink raw reply [flat|nested] 184+ messages in thread
* Re: [PATCH v3 1/4] ethdev: introduce IP reassembly offload
2022-01-30 17:59 ` [PATCH v3 1/4] " Akhil Goyal
@ 2022-02-01 14:11 ` Ferruh Yigit
2022-02-02 10:57 ` [EXT] " Akhil Goyal
0 siblings, 1 reply; 184+ messages in thread
From: Ferruh Yigit @ 2022-02-01 14:11 UTC (permalink / raw)
To: Akhil Goyal, dev
Cc: anoobj, matan, konstantin.ananyev, thomas, andrew.rybchenko,
rosen.xu, jerinj, stephen, mdr
On 1/30/2022 5:59 PM, Akhil Goyal wrote:
> IP Reassembly is a costly operation if it is done in software.
> The operation becomes even more costlier if IP fragments are encrypted.
> However, if it is offloaded to HW, it can considerably save application
> cycles.
>
> Hence, a new offload RTE_ETH_RX_OFFLOAD_IP_REASSEMBLY is introduced in
> ethdev for devices which can attempt reassembly of packets in hardware.
> rte_eth_dev_info is updated with the reassembly capabilities which a device
> can support.
>
> The resulting reassembled packet would be a typical segmented mbuf in
> case of success.
>
> And if reassembly of fragments is failed or is incomplete (if fragments do
> not come before the reass_timeout), the mbuf ol_flags can be updated.
> This is updated in a subsequent patch.
>
> Signed-off-by: Akhil Goyal <gakhil@marvell.com>
> Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> ---
> devtools/libabigail.abignore | 5 +++++
> doc/guides/nics/features.rst | 11 +++++++++++
> lib/ethdev/rte_ethdev.c | 1 +
> lib/ethdev/rte_ethdev.h | 28 +++++++++++++++++++++++++++-
> 4 files changed, 44 insertions(+), 1 deletion(-)
>
> diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
> index 4b676f317d..90f449c43a 100644
> --- a/devtools/libabigail.abignore
> +++ b/devtools/libabigail.abignore
> @@ -11,3 +11,8 @@
> ; Ignore generated PMD information strings
> [suppress_variable]
> name_regexp = _pmd_info$
> +
> +; Ignore fields inserted in place of reserved_64s of rte_eth_dev_info
> +[suppress_type]
> + name = rte_eth_dev_info
> + has_data_member_inserted_between = {offset_of(reserved_64s), end}
> diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
> index 27be2d2576..b45bce4a78 100644
> --- a/doc/guides/nics/features.rst
> +++ b/doc/guides/nics/features.rst
> @@ -602,6 +602,17 @@ Supports inner packet L4 checksum.
> ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM``.
>
>
> +.. _nic_features_ip_reassembly:
> +
> +IP reassembly
> +-------------
> +
> +Supports IP reassembly in hardware.
> +
> +* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_IP_REASSEMBLY``.
> +* **[provides] rte_eth_dev_info**: ``reass_capa``.
> +
> +
> .. _nic_features_shared_rx_queue:
>
> Shared Rx queue
> diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
> index a1d475a292..d9a03f12f9 100644
> --- a/lib/ethdev/rte_ethdev.c
> +++ b/lib/ethdev/rte_ethdev.c
> @@ -126,6 +126,7 @@ static const struct {
> RTE_RX_OFFLOAD_BIT2STR(OUTER_UDP_CKSUM),
> RTE_RX_OFFLOAD_BIT2STR(RSS_HASH),
> RTE_RX_OFFLOAD_BIT2STR(BUFFER_SPLIT),
> + RTE_RX_OFFLOAD_BIT2STR(IP_REASSEMBLY),
> };
>
> #undef RTE_RX_OFFLOAD_BIT2STR
> diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
> index fa299c8ad7..cfaf7a5afc 100644
> --- a/lib/ethdev/rte_ethdev.h
> +++ b/lib/ethdev/rte_ethdev.h
> @@ -1586,6 +1586,7 @@ struct rte_eth_conf {
> #define RTE_ETH_RX_OFFLOAD_RSS_HASH RTE_BIT64(19)
> #define DEV_RX_OFFLOAD_RSS_HASH RTE_ETH_RX_OFFLOAD_RSS_HASH
> #define RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT RTE_BIT64(20)
> +#define RTE_ETH_RX_OFFLOAD_IP_REASSEMBLY RTE_BIT64(21)
>
> #define RTE_ETH_RX_OFFLOAD_CHECKSUM (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | \
> RTE_ETH_RX_OFFLOAD_UDP_CKSUM | \
> @@ -1781,6 +1782,29 @@ enum rte_eth_representor_type {
> RTE_ETH_REPRESENTOR_PF, /**< representor of Physical Function. */
> };
>
> +/* Flag to offload IP reassembly for IPv4 packets. */
> +#define RTE_ETH_DEV_REASSEMBLY_F_IPV4 (RTE_BIT32(0))
> +/* Flag to offload IP reassembly for IPv6 packets. */
> +#define RTE_ETH_DEV_REASSEMBLY_F_IPV6 (RTE_BIT32(1))
> +/**
> + * A structure used to get/set IP reassembly configuration.
> + *
> + * If RTE_ETH_RX_OFFLOAD_IP_REASSEMBLY flag is set in offloads field,
> + * the PMD will attempt IP reassembly for the received packets as per
> + * properties defined in this structure.
> + */
> +struct rte_eth_ip_reass_params {
As a generic comment, what do you think about using the full 'reassembly'
instead of the short 'reass', to clarify/simplify the meaning?
> + /** Maximum time in ms which PMD can wait for other fragments. */
> + uint32_t reass_timeout_ms;
> + /** Maximum number of fragments that can be reassembled. */
> + uint16_t max_frags;
> + /**
> + * Flags to enable reassembly of packet types -
> + * RTE_ETH_DEV_REASSEMBLY_F_xxx.
> + */
> + uint16_t flags;
> +};
> +
> /**
> * A structure used to retrieve the contextual information of
> * an Ethernet device, such as the controlling driver of the
> @@ -1841,8 +1865,10 @@ struct rte_eth_dev_info {
> * embedded managed interconnect/switch.
> */
> struct rte_eth_switch_info switch_info;
> + /** IP reassembly offload capabilities that a device can support. */
> + struct rte_eth_ip_reass_params reass_capa;
>
"struct rte_eth_dev_info" & 'rte_eth_dev_info_get()' are very common,
all applications that use net devices and even some internal APIs rely on
this struct and API.
It makes me uneasy to extend this struct with rarely used features,
worrying that loading too much (capability/status/config) onto a single
API/struct can make the code unmaintainable over time.
Also, most of the time (if not always) an offload flag is just an on/off
switch to the PMD: the application sets/unsets the offload flag and the PMD
knows what to do. But in this case some capability variables and a
configuration API are required/involved.
Considering the above two points, what do you think about implementing this
as control plane APIs instead of an offload flag?
There are already 'conf_set()' and 'conf_get()' APIs introduced in the coming
patches; introducing an additional 'capability_get()' API removes the need
to change "struct rte_eth_dev_info", and 'RTE_ETH_RX_OFFLOAD_IP_REASSEMBLY'
can be removed.
Thomas, Andrew, what do you think?
> - uint64_t reserved_64s[2]; /**< Reserved for future fields */
> + uint64_t reserved_64s[1]; /**< Reserved for future fields */
> void *reserved_ptrs[2]; /**< Reserved for future fields */
> };
>
^ permalink raw reply [flat|nested] 184+ messages in thread
* Re: [PATCH v3 3/4] ethdev: add mbuf dynfield for incomplete IP reassembly
2022-01-30 17:59 ` [PATCH v3 3/4] ethdev: add mbuf dynfield for incomplete IP reassembly Akhil Goyal
@ 2022-02-01 14:11 ` Ferruh Yigit
2022-02-02 9:13 ` [EXT] " Akhil Goyal
0 siblings, 1 reply; 184+ messages in thread
From: Ferruh Yigit @ 2022-02-01 14:11 UTC (permalink / raw)
To: Akhil Goyal, dev, Olivier Matz
Cc: anoobj, matan, konstantin.ananyev, thomas, andrew.rybchenko,
rosen.xu, jerinj, stephen, mdr
On 1/30/2022 5:59 PM, Akhil Goyal wrote:
> Hardware IP reassembly may be incomplete for multiple reasons like
> reassembly timeout reached, duplicate fragments, etc.
> To save application cycles to process these packets again, a new
> mbuf dynflag is added to show that the mbuf received is not
> reassembled properly.
>
> Now if this dynflag is set, application can retrieve corresponding
> chain of mbufs using mbuf dynfield set by the PMD. Now, it will be
> up to application to either drop those fragments or wait for more time.
>
> Signed-off-by: Akhil Goyal <gakhil@marvell.com>
<...>
> index e3532591f4..e3e6368a1d 100644
> --- a/lib/ethdev/rte_ethdev.h
> +++ b/lib/ethdev/rte_ethdev.h
> @@ -5264,6 +5264,27 @@ __rte_experimental
> int rte_eth_ip_reassembly_conf_set(uint16_t port_id,
> struct rte_eth_ip_reass_params *conf);
>
> +#define RTE_ETH_IP_REASS_DYNFIELD_NAME "rte_eth_ip_reass_dynfield"
> +#define RTE_ETH_IP_REASS_INCOMPLETE_DYNFLAG_NAME "rte_eth_ip_reass_incomplete_dynflag"
For other dynfield/dynflag these defines resides in mbuf library, not sure
if these also should go there. cc'ed Olivier for comment.
^ permalink raw reply [flat|nested] 184+ messages in thread
* Re: [PATCH v3 4/4] security: add IPsec option for IP reassembly
2022-01-30 17:59 ` [PATCH v3 4/4] security: add IPsec option for " Akhil Goyal
@ 2022-02-01 14:12 ` Ferruh Yigit
2022-02-02 9:15 ` [EXT] " Akhil Goyal
0 siblings, 1 reply; 184+ messages in thread
From: Ferruh Yigit @ 2022-02-01 14:12 UTC (permalink / raw)
To: Akhil Goyal, dev, Radu Nicolau
Cc: anoobj, matan, konstantin.ananyev, thomas, andrew.rybchenko,
rosen.xu, jerinj, stephen, mdr
On 1/30/2022 5:59 PM, Akhil Goyal wrote:
> A new option is added in IPsec to enable and attempt reassembly
> of inbound packets.
>
> Signed-off-by: Akhil Goyal <gakhil@marvell.com>
> ---
> devtools/libabigail.abignore | 14 ++++++++++++++
> lib/security/rte_security.h | 12 +++++++++++-
+Radu for review
> 2 files changed, 25 insertions(+), 1 deletion(-)
>
> diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
> index 90f449c43a..c6e304282f 100644
> --- a/devtools/libabigail.abignore
> +++ b/devtools/libabigail.abignore
> @@ -16,3 +16,17 @@
> [suppress_type]
> name = rte_eth_dev_info
> has_data_member_inserted_between = {offset_of(reserved_64s), end}
> +
> +; Ignore fields inserted in place of reserved_opts of rte_security_ipsec_sa_options
> +[suppress_type]
> + name = rte_ipsec_sa_prm
> + name = rte_security_ipsec_sa_options
> + has_data_member_inserted_between = {offset_of(reserved_opts), end}
> +
> +[suppress_type]
> + name = rte_security_capability
> + has_data_member_inserted_between = {offset_of(reserved_opts), (offset_of(reserved_opts) + 18)}
> +
> +[suppress_type]
> + name = rte_security_session_conf
> + has_data_member_inserted_between = {offset_of(reserved_opts), (offset_of(reserved_opts) + 18)}
> diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
> index 1228b6c8b1..168b837a82 100644
> --- a/lib/security/rte_security.h
> +++ b/lib/security/rte_security.h
> @@ -264,6 +264,16 @@ struct rte_security_ipsec_sa_options {
> */
> uint32_t l4_csum_enable : 1;
>
> + /** Enable reassembly on incoming packets.
> + *
> + * * 1: Enable driver to try reassembly of encrypted IP packets for
> + * this SA, if supported by the driver. This feature will work
> + * only if rx_offload RTE_ETH_RX_OFFLOAD_IP_REASSEMBLY is set in
> + * inline Ethernet device.
> + * * 0: Disable reassembly of packets (default).
> + */
> + uint32_t reass_en : 1;
> +
> /** Reserved bit fields for future extension
> *
> * User should ensure reserved_opts is cleared as it may change in
> @@ -271,7 +281,7 @@ struct rte_security_ipsec_sa_options {
> *
> * Note: Reduce number of bits in reserved_opts for every new option.
> */
> - uint32_t reserved_opts : 18;
> + uint32_t reserved_opts : 17;
> };
>
> /** IPSec security association direction */
^ permalink raw reply [flat|nested] 184+ messages in thread
* RE: [EXT] Re: [PATCH v3 0/4] ethdev: introduce IP reassembly offload
2022-02-01 14:10 ` [PATCH v3 0/4] ethdev: introduce IP reassembly offload Ferruh Yigit
@ 2022-02-02 9:05 ` Akhil Goyal
0 siblings, 0 replies; 184+ messages in thread
From: Akhil Goyal @ 2022-02-02 9:05 UTC (permalink / raw)
To: Ferruh Yigit, dev
Cc: Anoob Joseph, matan, konstantin.ananyev, thomas,
andrew.rybchenko, rosen.xu, Jerin Jacob Kollanukkaran, stephen,
mdr
> On 1/30/2022 5:59 PM, Akhil Goyal wrote:
> > As discussed in the RFC[1] sent in 21.11, a new offload is
> > introduced in ethdev for IP reassembly.
> >
> > This patchset add the IP reassembly RX offload.
> > Currently, the offload is tested along with inline IPsec processing.
> > It can also be updated as a standalone offload without IPsec, if there
> > are some hardware available to test it.
> > The patchset is tested on cnxk platform. The driver implementation
> > and a test app are added as separate patchsets.
>
> Can you please share the links of those sets?
APP: http://patches.dpdk.org/project/dpdk/list/?series=21284
PMD: http://patches.dpdk.org/project/dpdk/list/?series=21285
^ permalink raw reply [flat|nested] 184+ messages in thread
* RE: [EXT] Re: [PATCH v3 3/4] ethdev: add mbuf dynfield for incomplete IP reassembly
2022-02-01 14:11 ` Ferruh Yigit
@ 2022-02-02 9:13 ` Akhil Goyal
0 siblings, 0 replies; 184+ messages in thread
From: Akhil Goyal @ 2022-02-02 9:13 UTC (permalink / raw)
To: Ferruh Yigit, dev, Olivier Matz
Cc: Anoob Joseph, matan, konstantin.ananyev, thomas,
andrew.rybchenko, rosen.xu, Jerin Jacob Kollanukkaran, stephen,
mdr
> On 1/30/2022 5:59 PM, Akhil Goyal wrote:
> > Hardware IP reassembly may be incomplete for multiple reasons like
> > reassembly timeout reached, duplicate fragments, etc.
> > To save application cycles to process these packets again, a new
> > mbuf dynflag is added to show that the mbuf received is not
> > reassembled properly.
> >
> > Now if this dynflag is set, application can retrieve corresponding
> > chain of mbufs using mbuf dynfield set by the PMD. Now, it will be
> > up to application to either drop those fragments or wait for more time.
> >
> > Signed-off-by: Akhil Goyal <gakhil@marvell.com>
>
> <...>
>
> > index e3532591f4..e3e6368a1d 100644
> > --- a/lib/ethdev/rte_ethdev.h
> > +++ b/lib/ethdev/rte_ethdev.h
> > @@ -5264,6 +5264,27 @@ __rte_experimental
> > int rte_eth_ip_reassembly_conf_set(uint16_t port_id,
> > struct rte_eth_ip_reass_params *conf);
> >
> > +#define RTE_ETH_IP_REASS_DYNFIELD_NAME "rte_eth_ip_reass_dynfield"
> > +#define RTE_ETH_IP_REASS_INCOMPLETE_DYNFLAG_NAME
> "rte_eth_ip_reass_incomplete_dynflag"
>
> For other dynfield/dynflag these defines resides in mbuf library, not sure
> if these also should go there. cc'ed Olivier for comment.
Ok, I will move these to rte_mbuf_dyn.h.
^ permalink raw reply [flat|nested] 184+ messages in thread
* RE: [EXT] Re: [PATCH v3 4/4] security: add IPsec option for IP reassembly
2022-02-01 14:12 ` Ferruh Yigit
@ 2022-02-02 9:15 ` Akhil Goyal
2022-02-02 14:04 ` Ferruh Yigit
0 siblings, 1 reply; 184+ messages in thread
From: Akhil Goyal @ 2022-02-02 9:15 UTC (permalink / raw)
To: Ferruh Yigit, dev, Radu Nicolau, mdr
Cc: Anoob Joseph, matan, konstantin.ananyev, thomas,
andrew.rybchenko, rosen.xu, Jerin Jacob Kollanukkaran, stephen
> On 1/30/2022 5:59 PM, Akhil Goyal wrote:
> > A new option is added in IPsec to enable and attempt reassembly
> > of inbound packets.
> >
> > Signed-off-by: Akhil Goyal <gakhil@marvell.com>
> > ---
> > devtools/libabigail.abignore | 14 ++++++++++++++
> > lib/security/rte_security.h | 12 +++++++++++-
>
>
> +Radu for review
>
> > 2 files changed, 25 insertions(+), 1 deletion(-)
> >
> > diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
> > index 90f449c43a..c6e304282f 100644
> > --- a/devtools/libabigail.abignore
> > +++ b/devtools/libabigail.abignore
> > @@ -16,3 +16,17 @@
> > [suppress_type]
> > name = rte_eth_dev_info
> > has_data_member_inserted_between = {offset_of(reserved_64s), end}
> > +
> > +; Ignore fields inserted in place of reserved_opts of
> rte_security_ipsec_sa_options
> > +[suppress_type]
> > + name = rte_ipsec_sa_prm
> > + name = rte_security_ipsec_sa_options
> > + has_data_member_inserted_between = {offset_of(reserved_opts), end}
> > +
> > +[suppress_type]
> > + name = rte_security_capability
> > + has_data_member_inserted_between = {offset_of(reserved_opts),
> (offset_of(reserved_opts) + 18)}
> > +
> > +[suppress_type]
> > + name = rte_security_session_conf
> > + has_data_member_inserted_between = {offset_of(reserved_opts),
> (offset_of(reserved_opts) + 18)}
Could not find any better way to suppress the ABI warning.
Any better idea?
> > diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
> > index 1228b6c8b1..168b837a82 100644
> > --- a/lib/security/rte_security.h
> > +++ b/lib/security/rte_security.h
> > @@ -264,6 +264,16 @@ struct rte_security_ipsec_sa_options {
> > */
> > uint32_t l4_csum_enable : 1;
> >
> > + /** Enable reassembly on incoming packets.
> > + *
> > + * * 1: Enable driver to try reassembly of encrypted IP packets for
> > + * this SA, if supported by the driver. This feature will work
> > + * only if rx_offload RTE_ETH_RX_OFFLOAD_IP_REASSEMBLY is set in
> > + * inline Ethernet device.
> > + * * 0: Disable reassembly of packets (default).
> > + */
> > + uint32_t reass_en : 1;
> > +
> > /** Reserved bit fields for future extension
> > *
> > * User should ensure reserved_opts is cleared as it may change in
> > @@ -271,7 +281,7 @@ struct rte_security_ipsec_sa_options {
> > *
> > * Note: Reduce number of bits in reserved_opts for every new option.
> > */
> > - uint32_t reserved_opts : 18;
> > + uint32_t reserved_opts : 17;
> > };
> >
> > /** IPSec security association direction */
^ permalink raw reply [flat|nested] 184+ messages in thread
* RE: [EXT] Re: [PATCH v3 1/4] ethdev: introduce IP reassembly offload
2022-02-01 14:11 ` Ferruh Yigit
@ 2022-02-02 10:57 ` Akhil Goyal
2022-02-02 14:05 ` Ferruh Yigit
0 siblings, 1 reply; 184+ messages in thread
From: Akhil Goyal @ 2022-02-02 10:57 UTC (permalink / raw)
To: Ferruh Yigit, dev, thomas, andrew.rybchenko
Cc: Anoob Joseph, matan, konstantin.ananyev, rosen.xu,
Jerin Jacob Kollanukkaran, stephen, mdr
> > +/* Flag to offload IP reassembly for IPv4 packets. */
> > +#define RTE_ETH_DEV_REASSEMBLY_F_IPV4 (RTE_BIT32(0))
> > +/* Flag to offload IP reassembly for IPv6 packets. */
> > +#define RTE_ETH_DEV_REASSEMBLY_F_IPV6 (RTE_BIT32(1))
> > +/**
> > + * A structure used to get/set IP reassembly configuration.
> > + *
> > + * If RTE_ETH_RX_OFFLOAD_IP_REASSEMBLY flag is set in offloads field,
> > + * the PMD will attempt IP reassembly for the received packets as per
> > + * properties defined in this structure.
> > + */
> > +struct rte_eth_ip_reass_params {
>
> As a generic comment, what do you think to use full 'reassembly' instead
> of 'reass' short version, to clarify/simplify the meaning?
Full reassembly was used in most places. But here the struct name would be too big.
IMO, reass is good enough here. Though, no strong opinion. Will change if you insist.
>
> > + /** Maximum time in ms which PMD can wait for other fragments. */
> > + uint32_t reass_timeout_ms;
> > + /** Maximum number of fragments that can be reassembled. */
> > + uint16_t max_frags;
> > + /**
> > + * Flags to enable reassembly of packet types -
> > + * RTE_ETH_DEV_REASSEMBLY_F_xxx.
> > + */
> > + uint16_t flags;
> > +};
> > +
> > /**
> > * A structure used to retrieve the contextual information of
> > * an Ethernet device, such as the controlling driver of the
> > @@ -1841,8 +1865,10 @@ struct rte_eth_dev_info {
> > * embedded managed interconnect/switch.
> > */
> > struct rte_eth_switch_info switch_info;
> > + /** IP reassembly offload capabilities that a device can support. */
> > + struct rte_eth_ip_reass_params reass_capa;
> >
>
> "struct rte_eth_dev_info" & 'rte_eth_dev_info_get()' are very common,
> all applications that use net devices and even some internal APIs rely on
> this struct and API.
> It makes me uneasy to extend this struct with rarely used features,
> worrying on loading to much (capability/status/config) on single
> API/struct can cause an unmaintainable code by time.
>
> Also most of the time (if not always) offload flag is just an on/off flag
> to the PMD, application set/unset offload flag and PMD knows what to do.
> But for this case some capability variables, and a configuration API is
> required/involved.
>
> For considering above two cases, what do you think implement this as
> control plane APIs instead of offload flag?
> There are already 'conf_set()' and 'conf_get()' APIs introduced in coming
> patches, introducing an additional 'capability_get()' API removes the need
> of change in "struct rte_eth_dev_info" and
> 'RTE_ETH_RX_OFFLOAD_IP_REASSEMBLY'
> can be removed.
> Thomas, Andrew, what do you think?
We are ok to add a new dev_op for capability_get() if we agree on that.
Thomas, Andrew, let me know if you think otherwise.
>
>
> > - uint64_t reserved_64s[2]; /**< Reserved for future fields */
> > + uint64_t reserved_64s[1]; /**< Reserved for future fields */
> > void *reserved_ptrs[2]; /**< Reserved for future fields */
> > };
> >
^ permalink raw reply [flat|nested] 184+ messages in thread
* Re: [EXT] Re: [PATCH v3 4/4] security: add IPsec option for IP reassembly
2022-02-02 9:15 ` [EXT] " Akhil Goyal
@ 2022-02-02 14:04 ` Ferruh Yigit
0 siblings, 0 replies; 184+ messages in thread
From: Ferruh Yigit @ 2022-02-02 14:04 UTC (permalink / raw)
To: Akhil Goyal, dev, Radu Nicolau, mdr, David Marchand
Cc: Anoob Joseph, matan, konstantin.ananyev, thomas,
andrew.rybchenko, rosen.xu, Jerin Jacob Kollanukkaran, stephen
On 2/2/2022 9:15 AM, Akhil Goyal wrote:
>> On 1/30/2022 5:59 PM, Akhil Goyal wrote:
>>> A new option is added in IPsec to enable and attempt reassembly
>>> of inbound packets.
>>>
>>> Signed-off-by: Akhil Goyal <gakhil@marvell.com>
>>> ---
>>> devtools/libabigail.abignore | 14 ++++++++++++++
>>> lib/security/rte_security.h | 12 +++++++++++-
>>
>>
>> +Radu for review
>>
>>> 2 files changed, 25 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
>>> index 90f449c43a..c6e304282f 100644
>>> --- a/devtools/libabigail.abignore
>>> +++ b/devtools/libabigail.abignore
>>> @@ -16,3 +16,17 @@
>>> [suppress_type]
>>> name = rte_eth_dev_info
>>> has_data_member_inserted_between = {offset_of(reserved_64s), end}
>>> +
>>> +; Ignore fields inserted in place of reserved_opts of
>> rte_security_ipsec_sa_options
>>> +[suppress_type]
>>> + name = rte_ipsec_sa_prm
>>> + name = rte_security_ipsec_sa_options
>>> + has_data_member_inserted_between = {offset_of(reserved_opts), end}
>>> +
>>> +[suppress_type]
>>> + name = rte_security_capability
>>> + has_data_member_inserted_between = {offset_of(reserved_opts),
>> (offset_of(reserved_opts) + 18)}
>>> +
>>> +[suppress_type]
>>> + name = rte_security_session_conf
>>> + has_data_member_inserted_between = {offset_of(reserved_opts),
>> (offset_of(reserved_opts) + 18)}
>
> Could not find any better way to suppress the ABI warning.
> Any better idea?
>
+David for it, who knows abigail better.
>>> diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
>>> index 1228b6c8b1..168b837a82 100644
>>> --- a/lib/security/rte_security.h
>>> +++ b/lib/security/rte_security.h
>>> @@ -264,6 +264,16 @@ struct rte_security_ipsec_sa_options {
>>> */
>>> uint32_t l4_csum_enable : 1;
>>>
>>> + /** Enable reassembly on incoming packets.
>>> + *
>>> + * * 1: Enable driver to try reassembly of encrypted IP packets for
>>> + * this SA, if supported by the driver. This feature will work
>>> + * only if rx_offload RTE_ETH_RX_OFFLOAD_IP_REASSEMBLY is set in
>>> + * inline Ethernet device.
>>> + * * 0: Disable reassembly of packets (default).
>>> + */
>>> + uint32_t reass_en : 1;
>>> +
>>> /** Reserved bit fields for future extension
>>> *
>>> * User should ensure reserved_opts is cleared as it may change in
>>> @@ -271,7 +281,7 @@ struct rte_security_ipsec_sa_options {
>>> *
>>> * Note: Reduce number of bits in reserved_opts for every new option.
>>> */
>>> - uint32_t reserved_opts : 18;
>>> + uint32_t reserved_opts : 17;
>>> };
>>>
>>> /** IPSec security association direction */
>
^ permalink raw reply [flat|nested] 184+ messages in thread
* Re: [EXT] Re: [PATCH v3 1/4] ethdev: introduce IP reassembly offload
2022-02-02 10:57 ` [EXT] " Akhil Goyal
@ 2022-02-02 14:05 ` Ferruh Yigit
0 siblings, 0 replies; 184+ messages in thread
From: Ferruh Yigit @ 2022-02-02 14:05 UTC (permalink / raw)
To: Akhil Goyal, dev, thomas, andrew.rybchenko
Cc: Anoob Joseph, matan, konstantin.ananyev, rosen.xu,
Jerin Jacob Kollanukkaran, stephen, mdr
On 2/2/2022 10:57 AM, Akhil Goyal wrote:
>>> +/* Flag to offload IP reassembly for IPv4 packets. */
>>> +#define RTE_ETH_DEV_REASSEMBLY_F_IPV4 (RTE_BIT32(0))
>>> +/* Flag to offload IP reassembly for IPv6 packets. */
>>> +#define RTE_ETH_DEV_REASSEMBLY_F_IPV6 (RTE_BIT32(1))
>>> +/**
>>> + * A structure used to get/set IP reassembly configuration.
>>> + *
>>> + * If RTE_ETH_RX_OFFLOAD_IP_REASSEMBLY flag is set in offloads field,
>>> + * the PMD will attempt IP reassembly for the received packets as per
>>> + * properties defined in this structure.
>>> + */
>>> +struct rte_eth_ip_reass_params {
>>
>> As a generic comment, what do you think to use full 'reassembly' instead
>> of 'reass' short version, to clarify/simplify the meaning?
>
> Full reassembly was used in most places. But here the struct name would be too big.
> IMO, reass is good enough here. Though, no strong opinion. Will change if you insist.
>
It just doesn't bring 'reassembly' to mind when I see 'reass', so I don't
think it is clear; we can wait for more comments from others.
>>
>>> + /** Maximum time in ms which PMD can wait for other fragments. */
>>> + uint32_t reass_timeout_ms;
>>> + /** Maximum number of fragments that can be reassembled. */
>>> + uint16_t max_frags;
>>> + /**
>>> + * Flags to enable reassembly of packet types -
>>> + * RTE_ETH_DEV_REASSEMBLY_F_xxx.
>>> + */
>>> + uint16_t flags;
>>> +};
>>> +
>>> /**
>>> * A structure used to retrieve the contextual information of
>>> * an Ethernet device, such as the controlling driver of the
>>> @@ -1841,8 +1865,10 @@ struct rte_eth_dev_info {
>>> * embedded managed interconnect/switch.
>>> */
>>> struct rte_eth_switch_info switch_info;
>>> + /** IP reassembly offload capabilities that a device can support. */
>>> + struct rte_eth_ip_reass_params reass_capa;
>>>
>>
>> "struct rte_eth_dev_info" & 'rte_eth_dev_info_get()' are very common,
>> all applications that use net devices and even some internal APIs rely on
>> this struct and API.
>> It makes me uneasy to extend this struct with rarely used features,
>> worrying on loading to much (capability/status/config) on single
>> API/struct can cause an unmaintainable code by time.
>>
>> Also most of the time (if not always) offload flag is just an on/off flag
>> to the PMD, application set/unset offload flag and PMD knows what to do.
>> But for this case some capability variables, and a configuration API is
>> required/involved.
>>
>> For considering above two cases, what do you think implement this as
>> control plane APIs instead of offload flag?
>> There are already 'conf_set()' and 'conf_get()' APIs introduced in coming
>> patches, introducing an additional 'capability_get()' API removes the need
>> of change in "struct rte_eth_dev_info" and
>> 'RTE_ETH_RX_OFFLOAD_IP_REASSEMBLY'
>> can be removed.
>> Thomas, Andrew, what do you think?
>
> We are ok to add a new dev_op for capability_get() if we agree on that.
> Thomas, Andrew, let me know if you think otherwise.
>
>>
>>
>>> - uint64_t reserved_64s[2]; /**< Reserved for future fields */
>>> + uint64_t reserved_64s[1]; /**< Reserved for future fields */
>>> void *reserved_ptrs[2]; /**< Reserved for future fields */
>>> };
>>>
>
^ permalink raw reply [flat|nested] 184+ messages in thread
* [PATCH v4 0/3] ethdev: introduce IP reassembly offload
2022-01-30 17:59 ` [PATCH v3 0/4] ethdev: introduce IP reassembly offload Akhil Goyal
` (4 preceding siblings ...)
2022-02-01 14:10 ` [PATCH v3 0/4] ethdev: introduce IP reassembly offload Ferruh Yigit
@ 2022-02-04 22:13 ` Akhil Goyal
2022-02-04 22:13 ` [PATCH v4 1/3] " Akhil Goyal
` (3 more replies)
5 siblings, 4 replies; 184+ messages in thread
From: Akhil Goyal @ 2022-02-04 22:13 UTC (permalink / raw)
To: dev
Cc: anoobj, matan, konstantin.ananyev, thomas, ferruh.yigit,
andrew.rybchenko, rosen.xu, olivier.matz, david.marchand,
radu.nicolau, jerinj, stephen, mdr, Akhil Goyal
As discussed in the RFC[1] sent in 21.11, a new offload is
introduced in ethdev for IP reassembly.
This patchset adds the IP reassembly RX offload.
Currently, the offload is tested along with inline IPsec processing.
It can also be updated as a standalone offload without IPsec, if there
is some hardware available to test it.
The patchset is tested on cnxk platform. The driver implementation
and a test app are added as separate patchsets.[2][3]
[1]: http://patches.dpdk.org/project/dpdk/patch/20210823100259.1619886-1-gakhil@marvell.com/
[2]: APP: http://patches.dpdk.org/project/dpdk/list/?series=21284
[3]: PMD: http://patches.dpdk.org/project/dpdk/list/?series=21285
Newer versions of app and PMD will be sent once library changes are
acked.
Changes in v4:
- removed rte_eth_dev_info update for capability (Ferruh)
- removed Rx offload flag (Ferruh)
- added capability_get() (Ferruh)
- moved dynfield and dynflag namedefines in rte_mbuf_dyn.h (Ferruh)
changes in v3:
- incorporated comments from Andrew and Stephen Hemminger
changes in v2:
- added ABI ignore exceptions for modifications in reserved fields.
Added a crude way to suppress the rte_security and rte_ipsec ABI issue.
Please suggest a better way.
- incorporated Konstantin's comment for extra checks in new API
introduced.
- converted static mbuf ol_flag to mbuf dynflag (Konstantin)
- added a get API for reassembly configuration (Konstantin)
- Fixed checkpatch issues.
- Dynfield is NOT split into 2 parts as it would cause an extra fetch in
case of IP reassembly failure.
- Application patches are split into a separate series.
Akhil Goyal (3):
ethdev: introduce IP reassembly offload
ethdev: add mbuf dynfield for incomplete IP reassembly
security: add IPsec option for IP reassembly
devtools/libabigail.abignore | 14 ++++
doc/guides/nics/features.rst | 13 ++++
lib/ethdev/ethdev_driver.h | 63 ++++++++++++++++++
lib/ethdev/rte_ethdev.c | 121 +++++++++++++++++++++++++++++++++++
lib/ethdev/rte_ethdev.h | 108 +++++++++++++++++++++++++++++++
lib/ethdev/version.map | 6 ++
lib/mbuf/rte_mbuf_dyn.h | 9 +++
lib/security/rte_security.h | 12 +++-
8 files changed, 345 insertions(+), 1 deletion(-)
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* [PATCH v4 1/3] ethdev: introduce IP reassembly offload
2022-02-04 22:13 ` [PATCH v4 0/3] " Akhil Goyal
@ 2022-02-04 22:13 ` Akhil Goyal
2022-02-04 22:20 ` Akhil Goyal
2022-02-07 13:53 ` Ferruh Yigit
2022-02-04 22:13 ` [PATCH v4 2/3] ethdev: add mbuf dynfield for incomplete IP reassembly Akhil Goyal
` (2 subsequent siblings)
3 siblings, 2 replies; 184+ messages in thread
From: Akhil Goyal @ 2022-02-04 22:13 UTC (permalink / raw)
To: dev
Cc: anoobj, matan, konstantin.ananyev, thomas, ferruh.yigit,
andrew.rybchenko, rosen.xu, olivier.matz, david.marchand,
radu.nicolau, jerinj, stephen, mdr, Akhil Goyal
IP reassembly is a costly operation if it is done in software.
The operation becomes even costlier if the IP fragments are encrypted.
However, if it is offloaded to HW, it can considerably save application
cycles.
Hence, a new offload feature is exposed in eth_dev ops for devices which can
attempt IP reassembly of packets in hardware.
- rte_eth_ip_reassembly_capability_get() - to get the maximum values
of reassembly configuration which can be set.
- rte_eth_ip_reassembly_conf_set() - to set IP reassembly configuration
and to enable the feature in the PMD (to be called before rte_eth_dev_start()).
- rte_eth_ip_reassembly_conf_get() - to get the current configuration
set in PMD.
Now when the offload is enabled using rte_eth_ip_reassembly_conf_set(),
the resulting reassembled IP packet would be a typical segmented mbuf in
case of success.
And if reassembly of the IP fragments fails or is incomplete (e.g. if
fragments do not arrive before the reass_timeout, or they overlap), the mbuf
dynamic flags can be updated by the PMD. This is done in a subsequent patch.
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
Change-Id: Ic20bb3af1ed599e8f2f3665d2d6c47b2e420e509
---
doc/guides/nics/features.rst | 13 +++++
lib/ethdev/ethdev_driver.h | 55 +++++++++++++++++++++
lib/ethdev/rte_ethdev.c | 93 ++++++++++++++++++++++++++++++++++++
lib/ethdev/rte_ethdev.h | 91 +++++++++++++++++++++++++++++++++++
lib/ethdev/version.map | 5 ++
5 files changed, 257 insertions(+)
diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
index 27be2d2576..e6e0bbe9d8 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -602,6 +602,19 @@ Supports inner packet L4 checksum.
``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM``.
+.. _nic_features_ip_reassembly:
+
+IP reassembly
+-------------
+
+Supports IP reassembly in hardware.
+
+* **[provides] eth_dev_ops**: ``ip_reassembly_capability_get``,
+ ``ip_reassembly_conf_get``, ``ip_reassembly_conf_set``.
+* **[related] API**: ``rte_eth_ip_reassembly_capability_get()``,
+ ``rte_eth_ip_reassembly_conf_get()``, ``rte_eth_ip_reassembly_conf_set()``.
+
+
.. _nic_features_shared_rx_queue:
Shared Rx queue
diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
index d95605a355..8fe77f283f 100644
--- a/lib/ethdev/ethdev_driver.h
+++ b/lib/ethdev/ethdev_driver.h
@@ -990,6 +990,54 @@ typedef int (*eth_representor_info_get_t)(struct rte_eth_dev *dev,
typedef int (*eth_rx_metadata_negotiate_t)(struct rte_eth_dev *dev,
uint64_t *features);
+/**
+ * @internal
+ * Get IP reassembly offload capability of a PMD.
+ *
+ * @param dev
+ * Port (ethdev) handle
+ *
+ * @param[out] capa
+ * IP reassembly capability supported by the PMD
+ *
+ * @return
+ * Negative errno value on error, zero otherwise
+ */
+typedef int (*eth_ip_reassembly_capability_get_t)(struct rte_eth_dev *dev,
+ struct rte_eth_ip_reass_params *capa);
+
+/**
+ * @internal
+ * Get IP reassembly offload configuration parameters set in PMD.
+ *
+ * @param dev
+ * Port (ethdev) handle
+ *
+ * @param[out] conf
+ * Configuration parameters for IP reassembly.
+ *
+ * @return
+ * Negative errno value on error, zero otherwise
+ */
+typedef int (*eth_ip_reassembly_conf_get_t)(struct rte_eth_dev *dev,
+ struct rte_eth_ip_reass_params *conf);
+
+/**
+ * @internal
+ * Set configuration parameters for enabling IP reassembly offload in hardware.
+ *
+ * @param dev
+ * Port (ethdev) handle
+ *
+ * @param[in] conf
+ * Configuration parameters for IP reassembly.
+ *
+ * @return
+ * Negative errno value on error, zero otherwise
+ */
+typedef int (*eth_ip_reassembly_conf_set_t)(struct rte_eth_dev *dev,
+ const struct rte_eth_ip_reass_params *conf);
+
/**
* @internal A structure containing the functions exported by an Ethernet driver.
*/
@@ -1186,6 +1234,13 @@ struct eth_dev_ops {
* kinds of metadata to the PMD
*/
eth_rx_metadata_negotiate_t rx_metadata_negotiate;
+
+ /** Get IP reassembly capability */
+ eth_ip_reassembly_capability_get_t ip_reassembly_capability_get;
+ /** Get IP reassembly configuration */
+ eth_ip_reassembly_conf_get_t ip_reassembly_conf_get;
+ /** Set IP reassembly configuration */
+ eth_ip_reassembly_conf_set_t ip_reassembly_conf_set;
};
/**
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 29e21ad580..88ca4ce867 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -6474,6 +6474,99 @@ rte_eth_rx_metadata_negotiate(uint16_t port_id, uint64_t *features)
(*dev->dev_ops->rx_metadata_negotiate)(dev, features));
}
+int
+rte_eth_ip_reassembly_capability_get(uint16_t port_id,
+ struct rte_eth_ip_reass_params *reass_capa)
+{
+ struct rte_eth_dev *dev;
+
+ RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+ dev = &rte_eth_devices[port_id];
+
+ if (dev->data->dev_configured == 0) {
+ RTE_ETHDEV_LOG(ERR,
+ "Device with port_id=%"PRIu16" is not configured.\n",
+ port_id);
+ return -EINVAL;
+ }
+
+ if (reass_capa == NULL) {
+ RTE_ETHDEV_LOG(ERR, "Cannot get reassembly capability to NULL");
+ return -EINVAL;
+ }
+
+ RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->ip_reassembly_capability_get,
+ -ENOTSUP);
+ memset(reass_capa, 0, sizeof(struct rte_eth_ip_reass_params));
+
+ return eth_err(port_id, (*dev->dev_ops->ip_reassembly_capability_get)
+ (dev, reass_capa));
+}
+
+int
+rte_eth_ip_reassembly_conf_get(uint16_t port_id,
+ struct rte_eth_ip_reass_params *conf)
+{
+ struct rte_eth_dev *dev;
+
+ RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+ dev = &rte_eth_devices[port_id];
+
+ if (dev->data->dev_configured == 0) {
+ RTE_ETHDEV_LOG(ERR,
+ "Device with port_id=%"PRIu16" is not configured.\n",
+ port_id);
+ return -EINVAL;
+ }
+
+ if (conf == NULL) {
+ RTE_ETHDEV_LOG(ERR, "Cannot get reassembly info to NULL");
+ return -EINVAL;
+ }
+
+ RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->ip_reassembly_conf_get,
+ -ENOTSUP);
+ memset(conf, 0, sizeof(struct rte_eth_ip_reass_params));
+ return eth_err(port_id,
+ (*dev->dev_ops->ip_reassembly_conf_get)(dev, conf));
+}
+
+int
+rte_eth_ip_reassembly_conf_set(uint16_t port_id,
+ const struct rte_eth_ip_reass_params *conf)
+{
+ struct rte_eth_dev *dev;
+
+ RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+ dev = &rte_eth_devices[port_id];
+
+ if (dev->data->dev_configured == 0) {
+ RTE_ETHDEV_LOG(ERR,
+ "Device with port_id=%"PRIu16" is not configured.\n",
+ port_id);
+ return -EINVAL;
+ }
+
+ if (dev->data->dev_started != 0) {
+ RTE_ETHDEV_LOG(ERR,
+ "Device with port_id=%"PRIu16" started,\n"
+ "cannot configure IP reassembly params.\n",
+ port_id);
+ return -EINVAL;
+ }
+
+ if (conf == NULL) {
+ RTE_ETHDEV_LOG(ERR,
+ "Invalid IP reassembly configuration (NULL)\n");
+ return -EINVAL;
+ }
+
+ RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->ip_reassembly_conf_set,
+ -ENOTSUP);
+ return eth_err(port_id,
+ (*dev->dev_ops->ip_reassembly_conf_set)(dev, conf));
+}
+
RTE_LOG_REGISTER_DEFAULT(rte_eth_dev_logtype, INFO);
RTE_INIT(ethdev_init_telemetry)
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index 147cc1ced3..ecc5cd50b9 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -1794,6 +1794,29 @@ enum rte_eth_representor_type {
RTE_ETH_REPRESENTOR_PF, /**< representor of Physical Function. */
};
+/* Flag to offload IP reassembly for IPv4 packets. */
+#define RTE_ETH_DEV_REASSEMBLY_F_IPV4 (RTE_BIT32(0))
+/* Flag to offload IP reassembly for IPv6 packets. */
+#define RTE_ETH_DEV_REASSEMBLY_F_IPV6 (RTE_BIT32(1))
+/**
+ * A structure used to get/set IP reassembly configuration/capability.
+ *
+ * If rte_eth_ip_reassembly_capability_get() returns 0, IP reassembly can be
+ * enabled using rte_eth_ip_reassembly_conf_set() and params values lower than
+ * capability can be set in the PMD.
+ */
+struct rte_eth_ip_reass_params {
+ /** Maximum time in ms which PMD can wait for other fragments. */
+ uint32_t reass_timeout_ms;
+ /** Maximum number of fragments that can be reassembled. */
+ uint16_t max_frags;
+ /**
+ * Flags to enable reassembly of packet types -
+ * RTE_ETH_DEV_REASSEMBLY_F_xxx.
+ */
+ uint16_t flags;
+};
+
/**
* A structure used to retrieve the contextual information of
* an Ethernet device, such as the controlling driver of the
@@ -5202,6 +5225,74 @@ int rte_eth_representor_info_get(uint16_t port_id,
__rte_experimental
int rte_eth_rx_metadata_negotiate(uint16_t port_id, uint64_t *features);
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Get IP reassembly capabilities supported by the PMD.
+ *
+ * @param port_id
+ * The port identifier of the device.
+ * @param conf
+ * A pointer to rte_eth_ip_reass_params structure.
+ * @return
+ * - (-ENOTSUP) if offload configuration is not supported by device.
+ * - (-ENODEV) if *port_id* invalid.
+ * - (-EIO) if device is removed.
+ * - (0) on success.
+ */
+__rte_experimental
+int rte_eth_ip_reassembly_capability_get(uint16_t port_id,
+ struct rte_eth_ip_reass_params *conf);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Get IP reassembly configuration parameters currently set in the PMD,
+ * if IP reassembly has been enabled via rte_eth_ip_reassembly_conf_set()
+ * and the PMD supports IP reassembly offload.
+ *
+ * @param port_id
+ * The port identifier of the device.
+ * @param conf
+ * A pointer to rte_eth_ip_reass_params structure.
+ * @return
+ * - (-ENOTSUP) if offload configuration is not supported by device.
+ * - (-EINVAL) if offload is not enabled in rte_eth_conf.
+ * - (-ENODEV) if *port_id* invalid.
+ * - (-EIO) if device is removed.
+ * - (0) on success.
+ */
+__rte_experimental
+int rte_eth_ip_reassembly_conf_get(uint16_t port_id,
+ struct rte_eth_ip_reass_params *conf);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Set IP reassembly configuration parameters if the PMD supports IP reassembly
+ * offload. User should first call rte_eth_ip_reassembly_capability_get() to
+ * check the maximum values supported by the PMD before setting the
+ * configuration. The use of this API is mandatory to enable this feature and
+ * should be called before rte_eth_dev_start().
+ *
+ * @param port_id
+ * The port identifier of the device.
+ * @param conf
+ * A pointer to rte_eth_ip_reass_params structure.
+ * @return
+ * - (-ENOTSUP) if offload configuration is not supported by device.
+ * - (-ENODEV) if *port_id* invalid.
+ * - (-EIO) if device is removed.
+ * - (0) on success.
+ */
+__rte_experimental
+int rte_eth_ip_reassembly_conf_set(uint16_t port_id,
+ const struct rte_eth_ip_reass_params *conf);
+
+
#include <rte_ethdev_core.h>
/**
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index c2fb0669a4..e22c102818 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -256,6 +256,11 @@ EXPERIMENTAL {
rte_flow_flex_item_create;
rte_flow_flex_item_release;
rte_flow_pick_transfer_proxy;
+
+ # added in 22.03
+ rte_eth_ip_reassembly_capability_get;
+ rte_eth_ip_reassembly_conf_get;
+ rte_eth_ip_reassembly_conf_set;
};
INTERNAL {
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* [PATCH v4 2/3] ethdev: add mbuf dynfield for incomplete IP reassembly
2022-02-04 22:13 ` [PATCH v4 0/3] " Akhil Goyal
2022-02-04 22:13 ` [PATCH v4 1/3] " Akhil Goyal
@ 2022-02-04 22:13 ` Akhil Goyal
2022-02-07 13:58 ` Ferruh Yigit
2022-02-07 17:23 ` Stephen Hemminger
2022-02-04 22:13 ` [PATCH v4 3/3] security: add IPsec option for " Akhil Goyal
2022-02-08 20:11 ` [PATCH v5 0/3] ethdev: introduce IP reassembly offload Akhil Goyal
3 siblings, 2 replies; 184+ messages in thread
From: Akhil Goyal @ 2022-02-04 22:13 UTC (permalink / raw)
To: dev
Cc: anoobj, matan, konstantin.ananyev, thomas, ferruh.yigit,
andrew.rybchenko, rosen.xu, olivier.matz, david.marchand,
radu.nicolau, jerinj, stephen, mdr, Akhil Goyal
Hardware IP reassembly may be incomplete for multiple reasons, such as the
reassembly timeout being reached, duplicate fragments, etc.
To save the application cycles spent processing these packets again, a new
mbuf dynflag is added to indicate that the received mbuf was not
reassembled properly.
If this dynflag is set, the application can retrieve the corresponding
chain of mbufs using the mbuf dynfield set by the PMD. It is then up to the
application to either drop those fragments or wait longer for more to arrive.
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
Change-Id: I118847cd6269da7e6313ac4e0d970d790dfef1ff
---
lib/ethdev/ethdev_driver.h | 8 ++++++++
lib/ethdev/rte_ethdev.c | 28 ++++++++++++++++++++++++++++
lib/ethdev/rte_ethdev.h | 17 +++++++++++++++++
lib/ethdev/version.map | 1 +
lib/mbuf/rte_mbuf_dyn.h | 9 +++++++++
5 files changed, 63 insertions(+)
diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
index 8fe77f283f..6cfb266f7d 100644
--- a/lib/ethdev/ethdev_driver.h
+++ b/lib/ethdev/ethdev_driver.h
@@ -1707,6 +1707,14 @@ int
rte_eth_hairpin_queue_peer_unbind(uint16_t cur_port, uint16_t cur_queue,
uint32_t direction);
+/**
+ * @internal
+ * Register mbuf dynamic field and flag for IP reassembly incomplete case.
+ */
+__rte_internal
+int
+rte_eth_ip_reass_dynfield_register(int *field_offset, int *flag);
+
/*
* Legacy ethdev API used internally by drivers.
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 88ca4ce867..48367dbec1 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -6567,6 +6567,34 @@ rte_eth_ip_reassembly_conf_set(uint16_t port_id,
(*dev->dev_ops->ip_reassembly_conf_set)(dev, conf));
}
+int
+rte_eth_ip_reass_dynfield_register(int *field_offset, int *flag_offset)
+{
+ static const struct rte_mbuf_dynfield field_desc = {
+ .name = RTE_MBUF_DYNFIELD_IP_REASS_NAME,
+ .size = sizeof(rte_eth_ip_reass_dynfield_t),
+ .align = __alignof__(rte_eth_ip_reass_dynfield_t),
+ };
+ static const struct rte_mbuf_dynflag ip_reass_dynflag = {
+ .name = RTE_MBUF_DYNFLAG_IP_REASS_INCOMPLETE_NAME,
+ };
+ int offset;
+
+ offset = rte_mbuf_dynfield_register(&field_desc);
+ if (offset < 0)
+ return -1;
+ if (field_offset != NULL)
+ *field_offset = offset;
+
+ offset = rte_mbuf_dynflag_register(&ip_reass_dynflag);
+ if (offset < 0)
+ return -1;
+ if (flag_offset != NULL)
+ *flag_offset = offset;
+
+ return 0;
+}
+
RTE_LOG_REGISTER_DEFAULT(rte_eth_dev_logtype, INFO);
RTE_INIT(ethdev_init_telemetry)
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index ecc5cd50b9..ce35023c40 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -5292,6 +5292,23 @@ __rte_experimental
int rte_eth_ip_reassembly_conf_set(uint16_t port_id,
const struct rte_eth_ip_reass_params *conf);
+/**
+ * In case of IP reassembly offload failure, ol_flags in mbuf will be
+ * updated with dynamic flag and packets will be returned without alteration.
+ * The application can retrieve the attached fragments using mbuf dynamic field.
+ */
+typedef struct {
+ /**
+ * Next fragment packet. Application should fetch dynamic field of
+ * each fragment until a NULL is received and nb_frags is 0.
+ */
+ struct rte_mbuf *next_frag;
+ /** Time spent (in ms) by HW waiting for further fragments. */
+ uint16_t time_spent;
+ /** Number of more fragments attached in mbuf dynamic fields. */
+ uint16_t nb_frags;
+} rte_eth_ip_reass_dynfield_t;
+
#include <rte_ethdev_core.h>
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index e22c102818..d6de8d402e 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -284,6 +284,7 @@ INTERNAL {
rte_eth_hairpin_queue_peer_bind;
rte_eth_hairpin_queue_peer_unbind;
rte_eth_hairpin_queue_peer_update;
+ rte_eth_ip_reass_dynfield_register;
rte_eth_representor_id_get;
rte_eth_switch_domain_alloc;
rte_eth_switch_domain_free;
diff --git a/lib/mbuf/rte_mbuf_dyn.h b/lib/mbuf/rte_mbuf_dyn.h
index 29abe8da53..299638513b 100644
--- a/lib/mbuf/rte_mbuf_dyn.h
+++ b/lib/mbuf/rte_mbuf_dyn.h
@@ -320,6 +320,15 @@ int rte_mbuf_dyn_rx_timestamp_register(int *field_offset, uint64_t *rx_flag);
*/
int rte_mbuf_dyn_tx_timestamp_register(int *field_offset, uint64_t *tx_flag);
+/**
+ * For the PMDs which support IP reassembly of packets, the PMD will update the
+ * packet with RTE_MBUF_DYNFLAG_IP_REASS_INCOMPLETE_NAME to denote that
+ * IP reassembly is incomplete and application can retrieve the packets back
+ * using RTE_MBUF_DYNFIELD_IP_REASS_NAME.
+ */
+#define RTE_MBUF_DYNFIELD_IP_REASS_NAME "rte_eth_ip_reass_dynfield"
+#define RTE_MBUF_DYNFLAG_IP_REASS_INCOMPLETE_NAME "rte_eth_ip_reass_incomplete_dynflag"
+
#ifdef __cplusplus
}
#endif
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* [PATCH v4 3/3] security: add IPsec option for IP reassembly
2022-02-04 22:13 ` [PATCH v4 0/3] " Akhil Goyal
2022-02-04 22:13 ` [PATCH v4 1/3] " Akhil Goyal
2022-02-04 22:13 ` [PATCH v4 2/3] ethdev: add mbuf dynfield for incomplete IP reassembly Akhil Goyal
@ 2022-02-04 22:13 ` Akhil Goyal
2022-02-08 9:01 ` David Marchand
2022-02-08 20:11 ` [PATCH v5 0/3] ethdev: introduce IP reassembly offload Akhil Goyal
3 siblings, 1 reply; 184+ messages in thread
From: Akhil Goyal @ 2022-02-04 22:13 UTC (permalink / raw)
To: dev
Cc: anoobj, matan, konstantin.ananyev, thomas, ferruh.yigit,
andrew.rybchenko, rosen.xu, olivier.matz, david.marchand,
radu.nicolau, jerinj, stephen, mdr, Akhil Goyal
A new option is added in IPsec to enable and attempt reassembly
of inbound packets.
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
Change-Id: I6f66f0b5a659550976a32629130594070cb16cb1
---
devtools/libabigail.abignore | 14 ++++++++++++++
lib/security/rte_security.h | 12 +++++++++++-
2 files changed, 25 insertions(+), 1 deletion(-)
diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
index 4b676f317d..3bd39042e8 100644
--- a/devtools/libabigail.abignore
+++ b/devtools/libabigail.abignore
@@ -11,3 +11,17 @@
; Ignore generated PMD information strings
[suppress_variable]
name_regexp = _pmd_info$
+
+; Ignore fields inserted in place of reserved_opts of rte_security_ipsec_sa_options
+[suppress_type]
+ name = rte_ipsec_sa_prm
+ name = rte_security_ipsec_sa_options
+ has_data_member_inserted_between = {offset_of(reserved_opts), end}
+
+[suppress_type]
+ name = rte_security_capability
+ has_data_member_inserted_between = {offset_of(reserved_opts), (offset_of(reserved_opts) + 18)}
+
+[suppress_type]
+ name = rte_security_session_conf
+ has_data_member_inserted_between = {offset_of(reserved_opts), (offset_of(reserved_opts) + 18)}
diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
index 1228b6c8b1..168b837a82 100644
--- a/lib/security/rte_security.h
+++ b/lib/security/rte_security.h
@@ -264,6 +264,16 @@ struct rte_security_ipsec_sa_options {
*/
uint32_t l4_csum_enable : 1;
+ /** Enable reassembly on incoming packets.
+ *
+ * * 1: Enable driver to try reassembly of encrypted IP packets for
+ * this SA, if supported by the driver. This feature will work
+ * only if rx_offload RTE_ETH_RX_OFFLOAD_IP_REASSEMBLY is set in
+ * inline Ethernet device.
+ * * 0: Disable reassembly of packets (default).
+ */
+ uint32_t reass_en : 1;
+
/** Reserved bit fields for future extension
*
* User should ensure reserved_opts is cleared as it may change in
@@ -271,7 +281,7 @@ struct rte_security_ipsec_sa_options {
*
* Note: Reduce number of bits in reserved_opts for every new option.
*/
- uint32_t reserved_opts : 18;
+ uint32_t reserved_opts : 17;
};
/** IPSec security association direction */
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* RE: [PATCH v4 1/3] ethdev: introduce IP reassembly offload
2022-02-04 22:13 ` [PATCH v4 1/3] " Akhil Goyal
@ 2022-02-04 22:20 ` Akhil Goyal
2022-02-07 13:53 ` Ferruh Yigit
1 sibling, 0 replies; 184+ messages in thread
From: Akhil Goyal @ 2022-02-04 22:20 UTC (permalink / raw)
To: Akhil Goyal, dev, ferruh.yigit, andrew.rybchenko, thomas,
olivier.matz, david.marchand
Cc: Anoob Joseph, matan, konstantin.ananyev, rosen.xu, radu.nicolau,
Jerin Jacob Kollanukkaran, stephen, mdr
> Subject: [PATCH v4 1/3] ethdev: introduce IP reassembly offload
>
> IP Reassembly is a costly operation if it is done in software.
> The operation becomes even costlier if IP fragments are encrypted.
> However, if it is offloaded to HW, it can considerably save application
> cycles.
>
> Hence, a new offload feature is exposed in eth_dev ops for devices which can
> attempt IP reassembly of packets in hardware.
> - rte_eth_ip_reassembly_capability_get() - to get the maximum values
> of reassembly configuration which can be set.
> - rte_eth_ip_reassembly_conf_set() - to set IP reassembly configuration
> and to enable the feature in the PMD (to be called before rte_eth_dev_start()).
> - rte_eth_ip_reassembly_conf_get() - to get the current configuration
> set in PMD.
>
> Now when the offload is enabled using rte_eth_ip_reassembly_conf_set(),
> the resulting reassembled IP packet would be a typical segmented mbuf in
> case of success.
>
> And if reassembly of IP fragments is failed or is incomplete (if fragments do
> not come before the reass_timeout, overlap, etc), the mbuf dynamic flags can
> be
> updated by the PMD. This is updated in a subsequent patch.
>
> Signed-off-by: Akhil Goyal <gakhil@marvell.com>
> Change-Id: Ic20bb3af1ed599e8f2f3665d2d6c47b2e420e509
Please ignore the Change-Id; I will remove it in the next version, or it can be removed
while applying if there are no further comments.
^ permalink raw reply [flat|nested] 184+ messages in thread
* Re: [PATCH v4 1/3] ethdev: introduce IP reassembly offload
2022-02-04 22:13 ` [PATCH v4 1/3] " Akhil Goyal
2022-02-04 22:20 ` Akhil Goyal
@ 2022-02-07 13:53 ` Ferruh Yigit
2022-02-07 14:36 ` [EXT] " Akhil Goyal
1 sibling, 1 reply; 184+ messages in thread
From: Ferruh Yigit @ 2022-02-07 13:53 UTC (permalink / raw)
To: Akhil Goyal, dev
Cc: anoobj, matan, konstantin.ananyev, thomas, andrew.rybchenko,
rosen.xu, olivier.matz, david.marchand, radu.nicolau, jerinj,
stephen, mdr
On 2/4/2022 10:13 PM, Akhil Goyal wrote:
> IP Reassembly is a costly operation if it is done in software.
> The operation becomes even costlier if IP fragments are encrypted.
> However, if it is offloaded to HW, it can considerably save application
> cycles.
>
> Hence, a new offload feature is exposed in eth_dev ops for devices which can
> attempt IP reassembly of packets in hardware.
> - rte_eth_ip_reassembly_capability_get() - to get the maximum values
> of reassembly configuration which can be set.
> - rte_eth_ip_reassembly_conf_set() - to set IP reassembly configuration
> and to enable the feature in the PMD (to be called before rte_eth_dev_start()).
> - rte_eth_ip_reassembly_conf_get() - to get the current configuration
> set in PMD.
>
> Now when the offload is enabled using rte_eth_ip_reassembly_conf_set(),
> the resulting reassembled IP packet would be a typical segmented mbuf in
> case of success.
>
> And if reassembly of IP fragments is failed or is incomplete (if fragments do
> not come before the reass_timeout, overlap, etc), the mbuf dynamic flags can be
> updated by the PMD. This is updated in a subsequent patch.
>
> Signed-off-by: Akhil Goyal <gakhil@marvell.com>
> Change-Id: Ic20bb3af1ed599e8f2f3665d2d6c47b2e420e509
> ---
> doc/guides/nics/features.rst | 13 +++++
> lib/ethdev/ethdev_driver.h | 55 +++++++++++++++++++++
> lib/ethdev/rte_ethdev.c | 93 ++++++++++++++++++++++++++++++++++++
> lib/ethdev/rte_ethdev.h | 91 +++++++++++++++++++++++++++++++++++
> lib/ethdev/version.map | 5 ++
> 5 files changed, 257 insertions(+)
>
> diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
> index 27be2d2576..e6e0bbe9d8 100644
> --- a/doc/guides/nics/features.rst
> +++ b/doc/guides/nics/features.rst
> @@ -602,6 +602,19 @@ Supports inner packet L4 checksum.
> ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM``.
>
>
> +.. _nic_features_ip_reassembly:
> +
> +IP reassembly
> +-------------
> +
> +Supports IP reassembly in hardware.
> +
> +* **[provides] eth_dev_ops**: ``ip_reassemly_capability_get``,
> + ``ip_reassembly_conf_get``, ``ip_reassembly_conf_set``.
> +* **[related] API**: ``rte_eth_ip_reassembly_capability_get()``,
> + ``rte_eth_ip_reassembly_conf_get()``, ``rte_eth_ip_reassembly_conf_set()``.
> +
> +
Need to update 'default.ini' to have this new feature.
> .. _nic_features_shared_rx_queue:
>
> Shared Rx queue
> diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
> index d95605a355..8fe77f283f 100644
> --- a/lib/ethdev/ethdev_driver.h
> +++ b/lib/ethdev/ethdev_driver.h
> @@ -990,6 +990,54 @@ typedef int (*eth_representor_info_get_t)(struct rte_eth_dev *dev,
> typedef int (*eth_rx_metadata_negotiate_t)(struct rte_eth_dev *dev,
> uint64_t *features);
>
> +/**
> + * @internal
> + * Get IP reassembly offload capability of a PMD.
> + *
> + * @param dev
> + * Port (ethdev) handle
> + *
> + * @param[out] conf
> + * IP reassembly capability supported by the PMD
> + *
> + * @return
> + * Negative errno value on error, zero otherwise
> + */
> +typedef int (*eth_ip_reassembly_capability_get_t)(struct rte_eth_dev *dev,
> + struct rte_eth_ip_reass_params *capa);
> +
> +/**
> + * @internal
> + * Get IP reassembly offload configuration parameters set in PMD.
> + *
> + * @param dev
> + * Port (ethdev) handle
> + *
> + * @param[out] conf
> + * Configuration parameters for IP reassembly.
> + *
> + * @return
> + * Negative errno value on error, zero otherwise
> + */
> +typedef int (*eth_ip_reassembly_conf_get_t)(struct rte_eth_dev *dev,
> + struct rte_eth_ip_reass_params *conf);
> +
> +/**
> + * @internal
> + * Set configuration parameters for enabling IP reassembly offload in hardware.
> + *
> + * @param dev
> + * Port (ethdev) handle
> + *
> + * @param[in] conf
> + * Configuration parameters for IP reassembly.
> + *
> + * @return
> + * Negative errno value on error, zero otherwise
> + */
> +typedef int (*eth_ip_reassembly_conf_set_t)(struct rte_eth_dev *dev,
> + const struct rte_eth_ip_reass_params *conf);
> +
> /**
> * @internal A structure containing the functions exported by an Ethernet driver.
> */
> @@ -1186,6 +1234,13 @@ struct eth_dev_ops {
> * kinds of metadata to the PMD
> */
> eth_rx_metadata_negotiate_t rx_metadata_negotiate;
> +
> + /** Get IP reassembly capability */
> + eth_ip_reassembly_capability_get_t ip_reassembly_capability_get;
> + /** Get IP reassembly configuration */
> + eth_ip_reassembly_conf_get_t ip_reassembly_conf_get;
> + /** Set IP reassembly configuration */
> + eth_ip_reassembly_conf_set_t ip_reassembly_conf_set;
> };
>
> /**
> diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
> index 29e21ad580..88ca4ce867 100644
> --- a/lib/ethdev/rte_ethdev.c
> +++ b/lib/ethdev/rte_ethdev.c
> @@ -6474,6 +6474,99 @@ rte_eth_rx_metadata_negotiate(uint16_t port_id, uint64_t *features)
> (*dev->dev_ops->rx_metadata_negotiate)(dev, features));
> }
>
> +int
> +rte_eth_ip_reassembly_capability_get(uint16_t port_id,
> + struct rte_eth_ip_reass_params *reass_capa)
Syntax: the DPDK coding convention doesn't align the continuation line; it
mostly uses two tabs on the next line.
The same comment applies to many instances below.
> +{
> + struct rte_eth_dev *dev;
> +
> + RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
> + dev = &rte_eth_devices[port_id];
> +
> + if (dev->data->dev_configured == 0) {
> + RTE_ETHDEV_LOG(ERR,
> + "Device with port_id=%"PRIu16" is not configured.\n",
- While printing port_id, %u can be used; 'PRIu16' has no benefit as far as I can see.
Both are used in the existing code; I prefer %u, but no strong opinion there.
- The log doesn't mention IP reassembly capability at all. Assuming you see this
log, will it give enough context to understand the problem is related to IP reassembly?
- Andrew commented before that each log message should be unique, to be able
to locate it easily; the exact same log message is used in a few other locations.
> + port_id);
> + return -EINVAL;
> + }
> +
> + if (reass_capa == NULL) {
I still think all 'reass' usage should be 'reassembly', but again that is me.
> + RTE_ETHDEV_LOG(ERR, "Cannot get reassembly capability to NULL");
> + return -EINVAL;
> + }
> +
> + RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->ip_reassembly_capability_get,
> + -ENOTSUP);
> + memset(reass_capa, 0, sizeof(struct rte_eth_ip_reass_params));
> +
> + return eth_err(port_id, (*dev->dev_ops->ip_reassembly_capability_get)
> + (dev, reass_capa));
> +}
> +
> +int
> +rte_eth_ip_reassembly_conf_get(uint16_t port_id,
> + struct rte_eth_ip_reass_params *conf)
> +{
> + struct rte_eth_dev *dev;
> +
> + RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
> + dev = &rte_eth_devices[port_id];
> +
> + if (dev->data->dev_configured == 0) {
> + RTE_ETHDEV_LOG(ERR,
> + "Device with port_id=%"PRIu16" is not configured.\n",
> + port_id);
> + return -EINVAL;
> + }
> +
> + if (conf == NULL) {
> + RTE_ETHDEV_LOG(ERR, "Cannot get reassembly info to NULL");
> + return -EINVAL;
> + }
> +
> + RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->ip_reassembly_conf_get,
> + -ENOTSUP);
> + memset(conf, 0, sizeof(struct rte_eth_ip_reass_params));
If the user didn't call 'rte_eth_ip_reassembly_conf_set()' prior to this call, what
will the user get here? All zeros, or can the PMD fill in some defaults?
Or should this API return some kind of error in that case to highlight that there is
no configuration?
> + return eth_err(port_id,
> + (*dev->dev_ops->ip_reassembly_conf_get)(dev, conf));
> +}
> +
> +int
> +rte_eth_ip_reassembly_conf_set(uint16_t port_id,
> + const struct rte_eth_ip_reass_params *conf)
> +{
> + struct rte_eth_dev *dev;
> +
> + RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
> + dev = &rte_eth_devices[port_id];
> +
> + if (dev->data->dev_configured == 0) {
> + RTE_ETHDEV_LOG(ERR,
> + "Device with port_id=%"PRIu16" is not configured.\n",
> + port_id);
> + return -EINVAL;
> + }
> +
> + if (dev->data->dev_started != 0) {
> + RTE_ETHDEV_LOG(ERR,
> + "Device with port_id=%"PRIu16" started,\n"
> + "cannot configure IP reassembly params.\n",
> + port_id);
> + return -EINVAL;
> + }
> +
> + if (conf == NULL) {
> + RTE_ETHDEV_LOG(ERR,
> + "Invalid IP reassembly configuration (NULL)\n");
> + return -EINVAL;
> + }
> +
> + RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->ip_reassembly_conf_set,
> + -ENOTSUP);
> + return eth_err(port_id,
> + (*dev->dev_ops->ip_reassembly_conf_set)(dev, conf));
> +}
> +
> RTE_LOG_REGISTER_DEFAULT(rte_eth_dev_logtype, INFO);
>
> RTE_INIT(ethdev_init_telemetry)
> diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
> index 147cc1ced3..ecc5cd50b9 100644
> --- a/lib/ethdev/rte_ethdev.h
> +++ b/lib/ethdev/rte_ethdev.h
> @@ -1794,6 +1794,29 @@ enum rte_eth_representor_type {
> RTE_ETH_REPRESENTOR_PF, /**< representor of Physical Function. */
> };
>
> +/* Flag to offload IP reassembly for IPv4 packets. */
> +#define RTE_ETH_DEV_REASSEMBLY_F_IPV4 (RTE_BIT32(0))
> +/* Flag to offload IP reassembly for IPv6 packets. */
> +#define RTE_ETH_DEV_REASSEMBLY_F_IPV6 (RTE_BIT32(1))
> +/**
> + * A structure used to get/set IP reassembly configuration/capability.
> + *
> + * If rte_eth_ip_reassembly_capability_get() returns 0, IP reassembly can be
> + * enabled using rte_eth_ip_reassembly_conf_set() and params values lower than
> + * capability can be set in the PMD.
Can you please clarify "params values lower than capability"? Is it referring to
'max_frags'?
> + */
> +struct rte_eth_ip_reass_params {
> + /** Maximum time in ms which PMD can wait for other fragments. */
> + uint32_t reass_timeout_ms;
The other variables in the struct are missing the 'reass_' prefix, e.g. it is 'max_frags'
instead of 'reass_max_frags'; should we do the same for this variable?
> + /** Maximum number of fragments that can be reassembled. */
> + uint16_t max_frags;
> + /**
> + * Flags to enable reassembly of packet types -
> + * RTE_ETH_DEV_REASSEMBLY_F_xxx.
> + */
> + uint16_t flags;
> +};
> +
> /**
> * A structure used to retrieve the contextual information of
> * an Ethernet device, such as the controlling driver of the
> @@ -5202,6 +5225,74 @@ int rte_eth_representor_info_get(uint16_t port_id,
> __rte_experimental
> int rte_eth_rx_metadata_negotiate(uint16_t port_id, uint64_t *features);
>
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Get IP reassembly capabilities supported by the PMD,
> + *
> + * @param port_id
> + * The port identifier of the device.
> + * @param conf
> + * A pointer to rte_eth_ip_reass_params structure.
> + * @return
> + * - (-ENOTSUP) if offload configuration is not supported by device.
> + * - (-ENODEV) if *port_id* invalid.
> + * - (-EIO) if device is removed.
> + * - (0) on success.
> + */
> +__rte_experimental
> +int rte_eth_ip_reassembly_capability_get(uint16_t port_id,
> + struct rte_eth_ip_reass_params *conf);
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Get IP reassembly configuration parameters currently set in PMD,
> + * if device Rx offload flag (RTE_ETH_RX_OFFLOAD_IP_REASSEMBLY) is
This flag no longer exists. The sentence needs to be updated.
> + * enabled and the PMD supports IP reassembly offload.
> + *
The device needs to be configured before calling this API; should that be documented here?
> + * @param port_id
> + * The port identifier of the device.
> + * @param conf
> + * A pointer to rte_eth_ip_reass_params structure.
> + * @return
> + * - (-ENOTSUP) if offload configuration is not supported by device.
> + * - (-EINVAL) if offload is not enabled in rte_eth_conf.
> + * - (-ENODEV) if *port_id* invalid.
> + * - (-EIO) if device is removed.
> + * - (0) on success.
> + */
> +__rte_experimental
> +int rte_eth_ip_reassembly_conf_get(uint16_t port_id,
> + struct rte_eth_ip_reass_params *conf);
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Set IP reassembly configuration parameters if the PMD supports IP reassembly
> + * offload. User should first call rte_eth_ip_reassembly_capability_get() to
> + * check the maximum values supported by the PMD before setting the
> + * configuration. The use of this API is mandatory to enable this feature and
> + * should be called before rte_eth_dev_start().
> + *
Not sure if the above causes confusion; what it means is that this API should be called
when the device is stopped. When you say it should be called before 'rte_eth_dev_start()',
could that be taken to mean it only needs to be called before the first ever start()?
If you think the above is clear enough, we can continue as it is.
> + * @param port_id
> + * The port identifier of the device.
> + * @param conf
> + * A pointer to rte_eth_ip_reass_params structure.
> + * @return
> + * - (-ENOTSUP) if offload configuration is not supported by device.
> + * - (-ENODEV) if *port_id* invalid.
> + * - (-EIO) if device is removed.
> + * - (0) on success.
> + */
> +__rte_experimental
> +int rte_eth_ip_reassembly_conf_set(uint16_t port_id,
> + const struct rte_eth_ip_reass_params *conf);
> +
> +
> #include <rte_ethdev_core.h>
>
> /**
> diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
> index c2fb0669a4..e22c102818 100644
> --- a/lib/ethdev/version.map
> +++ b/lib/ethdev/version.map
> @@ -256,6 +256,11 @@ EXPERIMENTAL {
> rte_flow_flex_item_create;
> rte_flow_flex_item_release;
> rte_flow_pick_transfer_proxy;
> +
> + # added in 22.03
> + rte_eth_ip_reassembly_capability_get;
> + rte_eth_ip_reassembly_conf_get;
> + rte_eth_ip_reassembly_conf_set;
> };
>
> INTERNAL {
^ permalink raw reply [flat|nested] 184+ messages in thread
* Re: [PATCH v4 2/3] ethdev: add mbuf dynfield for incomplete IP reassembly
2022-02-04 22:13 ` [PATCH v4 2/3] ethdev: add mbuf dynfield for incomplete IP reassembly Akhil Goyal
@ 2022-02-07 13:58 ` Ferruh Yigit
2022-02-07 14:20 ` [EXT] " Akhil Goyal
2022-02-07 17:23 ` Stephen Hemminger
1 sibling, 1 reply; 184+ messages in thread
From: Ferruh Yigit @ 2022-02-07 13:58 UTC (permalink / raw)
To: Akhil Goyal, dev
Cc: anoobj, matan, konstantin.ananyev, thomas, andrew.rybchenko,
rosen.xu, olivier.matz, david.marchand, radu.nicolau, jerinj,
stephen, mdr
On 2/4/2022 10:13 PM, Akhil Goyal wrote:
> Hardware IP reassembly may be incomplete for multiple reasons like
> reassembly timeout reached, duplicate fragments, etc.
> To save application cycles to process these packets again, a new
> mbuf dynflag is added to show that the mbuf received is not
> reassembled properly.
>
> Now if this dynflag is set, application can retrieve corresponding
> chain of mbufs using mbuf dynfield set by the PMD. Now, it will be
> up to application to either drop those fragments or wait for more time.
>
> Signed-off-by: Akhil Goyal <gakhil@marvell.com>
> Change-Id: I118847cd6269da7e6313ac4e0d970d790dfef1ff
> ---
> lib/ethdev/ethdev_driver.h | 8 ++++++++
> lib/ethdev/rte_ethdev.c | 28 ++++++++++++++++++++++++++++
> lib/ethdev/rte_ethdev.h | 17 +++++++++++++++++
> lib/ethdev/version.map | 1 +
> lib/mbuf/rte_mbuf_dyn.h | 9 +++++++++
> 5 files changed, 63 insertions(+)
>
> diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
> index 8fe77f283f..6cfb266f7d 100644
> --- a/lib/ethdev/ethdev_driver.h
> +++ b/lib/ethdev/ethdev_driver.h
> @@ -1707,6 +1707,14 @@ int
> rte_eth_hairpin_queue_peer_unbind(uint16_t cur_port, uint16_t cur_queue,
> uint32_t direction);
>
> +/**
> + * @internal
> + * Register mbuf dynamic field and flag for IP reassembly incomplete case.
> + */
> +__rte_internal
> +int
> +rte_eth_ip_reass_dynfield_register(int *field_offset, int *flag);
> +
>
> /*
> * Legacy ethdev API used internally by drivers.
> diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
> index 88ca4ce867..48367dbec1 100644
> --- a/lib/ethdev/rte_ethdev.c
> +++ b/lib/ethdev/rte_ethdev.c
> @@ -6567,6 +6567,34 @@ rte_eth_ip_reassembly_conf_set(uint16_t port_id,
> (*dev->dev_ops->ip_reassembly_conf_set)(dev, conf));
> }
>
> +int
> +rte_eth_ip_reass_dynfield_register(int *field_offset, int *flag_offset)
> +{
> + static const struct rte_mbuf_dynfield field_desc = {
> + .name = RTE_MBUF_DYNFIELD_IP_REASS_NAME,
> + .size = sizeof(rte_eth_ip_reass_dynfield_t),
> + .align = __alignof__(rte_eth_ip_reass_dynfield_t),
> + };
> + static const struct rte_mbuf_dynflag ip_reass_dynflag = {
> + .name = RTE_MBUF_DYNFLAG_IP_REASS_INCOMPLETE_NAME,
> + };
> + int offset;
> +
> + offset = rte_mbuf_dynfield_register(&field_desc);
> + if (offset < 0)
> + return -1;
> + if (field_offset != NULL)
> + *field_offset = offset;
> +
> + offset = rte_mbuf_dynflag_register(&ip_reass_dynflag);
> + if (offset < 0)
> + return -1;
> + if (flag_offset != NULL)
> + *flag_offset = offset;
> +
> + return 0;
> +}
> +
How mandatory is this field for the feature?
If 'rte_eth_ip_reass_dynfield_register()' fails, what should the PMD do?
Should this API be called before 'rte_eth_ip_reassembly_capability_get()', and
if registering the dynfield fails, should the PMD report the feature as not supported?
Can you please describe this dependency, preferably in the
'rte_eth_ip_reassembly_capability_get()' doxygen comment?
> RTE_LOG_REGISTER_DEFAULT(rte_eth_dev_logtype, INFO);
>
> RTE_INIT(ethdev_init_telemetry)
> diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
> index ecc5cd50b9..ce35023c40 100644
> --- a/lib/ethdev/rte_ethdev.h
> +++ b/lib/ethdev/rte_ethdev.h
> @@ -5292,6 +5292,23 @@ __rte_experimental
> int rte_eth_ip_reassembly_conf_set(uint16_t port_id,
> const struct rte_eth_ip_reass_params *conf);
>
> +/**
> + * In case of IP reassembly offload failure, ol_flags in mbuf will be
> + * updated with dynamic flag and packets will be returned without alteration.
> + * The application can retrieve the attached fragments using mbuf dynamic field.
> + */
> +typedef struct {
> + /**
> + * Next fragment packet. Application should fetch dynamic field of
> + * each fragment until a NULL is received and nb_frags is 0.
> + */
> + struct rte_mbuf *next_frag;
> + /** Time spent(in ms) by HW in waiting for further fragments. */
> + uint16_t time_spent;
> + /** Number of more fragments attached in mbuf dynamic fields. */
> + uint16_t nb_frags;
> +} rte_eth_ip_reass_dynfield_t;
> +
>
> #include <rte_ethdev_core.h>
>
> diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
> index e22c102818..d6de8d402e 100644
> --- a/lib/ethdev/version.map
> +++ b/lib/ethdev/version.map
> @@ -284,6 +284,7 @@ INTERNAL {
> rte_eth_hairpin_queue_peer_bind;
> rte_eth_hairpin_queue_peer_unbind;
> rte_eth_hairpin_queue_peer_update;
> + rte_eth_ip_reass_dynfield_register;
> rte_eth_representor_id_get;
> rte_eth_switch_domain_alloc;
> rte_eth_switch_domain_free;
> diff --git a/lib/mbuf/rte_mbuf_dyn.h b/lib/mbuf/rte_mbuf_dyn.h
> index 29abe8da53..299638513b 100644
> --- a/lib/mbuf/rte_mbuf_dyn.h
> +++ b/lib/mbuf/rte_mbuf_dyn.h
> @@ -320,6 +320,15 @@ int rte_mbuf_dyn_rx_timestamp_register(int *field_offset, uint64_t *rx_flag);
> */
> int rte_mbuf_dyn_tx_timestamp_register(int *field_offset, uint64_t *tx_flag);
>
> +/**
> + * For the PMDs which support IP reassembly of packets, the PMD will update the
> + * packet with RTE_MBUF_DYNFLAG_IP_REASS_INCOMPLETE_NAME to denote that
> + * IP reassembly is incomplete and application can retrieve the packets back
> + * using RTE_MBUF_DYNFIELD_IP_REASS_NAME.
> + */
> +#define RTE_MBUF_DYNFIELD_IP_REASS_NAME "rte_eth_ip_reass_dynfield"
> +#define RTE_MBUF_DYNFLAG_IP_REASS_INCOMPLETE_NAME "rte_eth_ip_reass_incomplete_dynflag"
> +
Needs Olivier's comment/ack.
* RE: [EXT] Re: [PATCH v4 2/3] ethdev: add mbuf dynfield for incomplete IP reassembly
2022-02-07 13:58 ` Ferruh Yigit
@ 2022-02-07 14:20 ` Akhil Goyal
2022-02-07 14:56 ` Ferruh Yigit
0 siblings, 1 reply; 184+ messages in thread
From: Akhil Goyal @ 2022-02-07 14:20 UTC (permalink / raw)
To: Ferruh Yigit, dev, olivier.matz
Cc: Anoob Joseph, matan, konstantin.ananyev, thomas,
andrew.rybchenko, rosen.xu, david.marchand, radu.nicolau,
Jerin Jacob Kollanukkaran, stephen, mdr
> On 2/4/2022 10:13 PM, Akhil Goyal wrote:
> > Hardware IP reassembly may be incomplete for multiple reasons like
> > reassembly timeout reached, duplicate fragments, etc.
> > To save application cycles to process these packets again, a new
> > mbuf dynflag is added to show that the mbuf received is not
> > reassembled properly.
> >
> > Now if this dynflag is set, application can retrieve corresponding
> > chain of mbufs using mbuf dynfield set by the PMD. Now, it will be
> > up to application to either drop those fragments or wait for more time.
> >
> > Signed-off-by: Akhil Goyal <gakhil@marvell.com>
> > Change-Id: I118847cd6269da7e6313ac4e0d970d790dfef1ff
> > ---
> > lib/ethdev/ethdev_driver.h | 8 ++++++++
> > lib/ethdev/rte_ethdev.c | 28 ++++++++++++++++++++++++++++
> > lib/ethdev/rte_ethdev.h | 17 +++++++++++++++++
> > lib/ethdev/version.map | 1 +
> > lib/mbuf/rte_mbuf_dyn.h | 9 +++++++++
> > 5 files changed, 63 insertions(+)
> >
> > diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
> > index 8fe77f283f..6cfb266f7d 100644
> > --- a/lib/ethdev/ethdev_driver.h
> > +++ b/lib/ethdev/ethdev_driver.h
> > @@ -1707,6 +1707,14 @@ int
> > rte_eth_hairpin_queue_peer_unbind(uint16_t cur_port, uint16_t
> cur_queue,
> > uint32_t direction);
> >
> > +/**
> > + * @internal
> > + * Register mbuf dynamic field and flag for IP reassembly incomplete case.
> > + */
> > +__rte_internal
> > +int
> > +rte_eth_ip_reass_dynfield_register(int *field_offset, int *flag);
> > +
> >
> > /*
> > * Legacy ethdev API used internally by drivers.
> > diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
> > index 88ca4ce867..48367dbec1 100644
> > --- a/lib/ethdev/rte_ethdev.c
> > +++ b/lib/ethdev/rte_ethdev.c
> > @@ -6567,6 +6567,34 @@ rte_eth_ip_reassembly_conf_set(uint16_t
> port_id,
> > (*dev->dev_ops->ip_reassembly_conf_set)(dev, conf));
> > }
> >
> > +int
> > +rte_eth_ip_reass_dynfield_register(int *field_offset, int *flag_offset)
> > +{
> > + static const struct rte_mbuf_dynfield field_desc = {
> > + .name = RTE_MBUF_DYNFIELD_IP_REASS_NAME,
> > + .size = sizeof(rte_eth_ip_reass_dynfield_t),
> > + .align = __alignof__(rte_eth_ip_reass_dynfield_t),
> > + };
> > + static const struct rte_mbuf_dynflag ip_reass_dynflag = {
> > + .name =
> RTE_MBUF_DYNFLAG_IP_REASS_INCOMPLETE_NAME,
> > + };
> > + int offset;
> > +
> > + offset = rte_mbuf_dynfield_register(&field_desc);
> > + if (offset < 0)
> > + return -1;
> > + if (field_offset != NULL)
> > + *field_offset = offset;
> > +
> > + offset = rte_mbuf_dynflag_register(&ip_reass_dynflag);
> > + if (offset < 0)
> > + return -1;
> > + if (flag_offset != NULL)
> > + *flag_offset = offset;
> > +
> > + return 0;
> > +}
> > +
>
> How mandatory is this field for the feature?
>
> If 'rte_eth_ip_reass_dynfield_register()' fails, what should the PMD do?
> Should this API be called before 'rte_eth_ip_reassembly_capability_get()', and
> if registering the dynfield fails, should the PMD return the feature as not
> supported?
Dynfield is added for the error/incomplete reassembly case.
If the dynfield is not registered, the feature can still work well for success scenarios.
Dynfield registration is the responsibility of the PMD, and it is up to the driver to decide
when to set the dynfield. The registration can be done in the conf_set() API.
>
> Can you please describe this dependency, preferable in the
> 'rte_eth_ip_reassembly_capability_get()' doxygen comment?
Capability get is not the place where the feature is enabled.
The dynfield should be registered only in case the feature is enabled.
I will add the following line in the conf_set() doxygen comment:
The PMD should call 'rte_eth_ip_reass_dynfield_register()' when
the feature is enabled and return an error if the dynfield is not registered.
The dynfield is needed to give packets back to the application in case the
reassembly is not complete.
>
> > RTE_LOG_REGISTER_DEFAULT(rte_eth_dev_logtype, INFO);
> >
> > RTE_INIT(ethdev_init_telemetry)
> > diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
> > index ecc5cd50b9..ce35023c40 100644
> > --- a/lib/ethdev/rte_ethdev.h
> > +++ b/lib/ethdev/rte_ethdev.h
> > @@ -5292,6 +5292,23 @@ __rte_experimental
> > int rte_eth_ip_reassembly_conf_set(uint16_t port_id,
> > const struct rte_eth_ip_reass_params
> *conf);
> >
> > +/**
> > + * In case of IP reassembly offload failure, ol_flags in mbuf will be
> > + * updated with dynamic flag and packets will be returned without
> alteration.
> > + * The application can retrieve the attached fragments using mbuf
> dynamic field.
> > + */
> > +typedef struct {
> > + /**
> > + * Next fragment packet. Application should fetch dynamic field of
> > + * each fragment until a NULL is received and nb_frags is 0.
> > + */
> > + struct rte_mbuf *next_frag;
> > + /** Time spent(in ms) by HW in waiting for further fragments. */
> > + uint16_t time_spent;
> > + /** Number of more fragments attached in mbuf dynamic fields. */
> > + uint16_t nb_frags;
> > +} rte_eth_ip_reass_dynfield_t;
> > +
> >
> > #include <rte_ethdev_core.h>
> >
> > diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
> > index e22c102818..d6de8d402e 100644
> > --- a/lib/ethdev/version.map
> > +++ b/lib/ethdev/version.map
> > @@ -284,6 +284,7 @@ INTERNAL {
> > rte_eth_hairpin_queue_peer_bind;
> > rte_eth_hairpin_queue_peer_unbind;
> > rte_eth_hairpin_queue_peer_update;
> > + rte_eth_ip_reass_dynfield_register;
> > rte_eth_representor_id_get;
> > rte_eth_switch_domain_alloc;
> > rte_eth_switch_domain_free;
> > diff --git a/lib/mbuf/rte_mbuf_dyn.h b/lib/mbuf/rte_mbuf_dyn.h
> > index 29abe8da53..299638513b 100644
> > --- a/lib/mbuf/rte_mbuf_dyn.h
> > +++ b/lib/mbuf/rte_mbuf_dyn.h
> > @@ -320,6 +320,15 @@ int rte_mbuf_dyn_rx_timestamp_register(int
> *field_offset, uint64_t *rx_flag);
> > */
> > int rte_mbuf_dyn_tx_timestamp_register(int *field_offset, uint64_t
> *tx_flag);
> >
> > +/**
> > + * For the PMDs which support IP reassembly of packets, PMD will
> updated the
> > + * packet with RTE_MBUF_DYNFLAG_IP_REASS_INCOMPLETE_NAME to
> denote that
> > + * IP reassembly is incomplete and application can retrieve the packets
> back
> > + * using RTE_MBUF_DYNFIELD_IP_REASS_NAME.
> > + */
> > +#define RTE_MBUF_DYNFIELD_IP_REASS_NAME
> "rte_eth_ip_reass_dynfield"
> > +#define RTE_MBUF_DYNFLAG_IP_REASS_INCOMPLETE_NAME
> "rte_eth_ip_reass_incomplete_dynflag"
> > +
>
> Needs Olivier's comment/ack.
@Olivier: could you please comment on this?
* RE: [EXT] Re: [PATCH v4 1/3] ethdev: introduce IP reassembly offload
2022-02-07 13:53 ` Ferruh Yigit
@ 2022-02-07 14:36 ` Akhil Goyal
0 siblings, 0 replies; 184+ messages in thread
From: Akhil Goyal @ 2022-02-07 14:36 UTC (permalink / raw)
To: Ferruh Yigit, dev
Cc: Anoob Joseph, matan, konstantin.ananyev, thomas,
andrew.rybchenko, rosen.xu, olivier.matz, david.marchand,
radu.nicolau, Jerin Jacob Kollanukkaran, stephen, mdr
Hi Ferruh,
Thanks for review,
Will send the next version soon.
Please see the comments inline.
> On 2/4/2022 10:13 PM, Akhil Goyal wrote:
> > IP Reassembly is a costly operation if it is done in software.
> > The operation becomes even more costlier if IP fragments are encrypted.
> > However, if it is offloaded to HW, it can considerably save application
> > cycles.
> >
> > Hence, a new offload feature is exposed in eth_dev ops for devices which
> can
> > attempt IP reassembly of packets in hardware.
> > - rte_eth_ip_reassembly_capability_get() - to get the maximum values
> > of reassembly configuration which can be set.
> > - rte_eth_ip_reassembly_conf_set() - to set IP reassembly configuration
> > and to enable the feature in the PMD (to be called before
> rte_eth_dev_start()).
> > - rte_eth_ip_reassembly_conf_get() - to get the current configuration
> > set in PMD.
> >
> > Now when the offload is enabled using
> rte_eth_ip_reassembly_conf_set(),
> > the resulting reassembled IP packet would be a typical segmented mbuf in
> > case of success.
> >
> > And if reassembly of IP fragments is failed or is incomplete (if fragments do
> > not come before the reass_timeout, overlap, etc), the mbuf dynamic flags
> can be
> > updated by the PMD. This is updated in a subsequent patch.
> >
> > Signed-off-by: Akhil Goyal <gakhil@marvell.com>
> > Change-Id: Ic20bb3af1ed599e8f2f3665d2d6c47b2e420e509
> > ---
> > doc/guides/nics/features.rst | 13 +++++
> > lib/ethdev/ethdev_driver.h | 55 +++++++++++++++++++++
> > lib/ethdev/rte_ethdev.c | 93
> ++++++++++++++++++++++++++++++++++++
> > lib/ethdev/rte_ethdev.h | 91
> +++++++++++++++++++++++++++++++++++
> > lib/ethdev/version.map | 5 ++
> > 5 files changed, 257 insertions(+)
> >
> > diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
> > index 27be2d2576..e6e0bbe9d8 100644
> > --- a/doc/guides/nics/features.rst
> > +++ b/doc/guides/nics/features.rst
> > @@ -602,6 +602,19 @@ Supports inner packet L4 checksum.
> >
> ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_OUTER_
> UDP_CKSUM``.
> >
> >
> > +.. _nic_features_ip_reassembly:
> > +
> > +IP reassembly
> > +-------------
> > +
> > +Supports IP reassembly in hardware.
> > +
> > +* **[provides] eth_dev_ops**: ``ip_reassembly_capability_get``,
> > + ``ip_reassembly_conf_get``, ``ip_reassembly_conf_set``.
> > +* **[related] API**: ``rte_eth_ip_reassembly_capability_get()``,
> > + ``rte_eth_ip_reassembly_conf_get()``,
> ``rte_eth_ip_reassembly_conf_set()``.
> > +
> > +
>
> Need to update 'default.ini' to have this new feature.
Yes, it got missed.
>
> > .. _nic_features_shared_rx_queue:
> >
> > Shared Rx queue
> > diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
> > index d95605a355..8fe77f283f 100644
> > --- a/lib/ethdev/ethdev_driver.h
> > +++ b/lib/ethdev/ethdev_driver.h
> > @@ -990,6 +990,54 @@ typedef int (*eth_representor_info_get_t)(struct
> rte_eth_dev *dev,
> > typedef int (*eth_rx_metadata_negotiate_t)(struct rte_eth_dev *dev,
> > uint64_t *features);
> >
> > +/**
> > + * @internal
> > + * Get IP reassembly offload capability of a PMD.
> > + *
> > + * @param dev
> > + * Port (ethdev) handle
> > + *
> > + * @param[out] conf
> > + * IP reassembly capability supported by the PMD
> > + *
> > + * @return
> > + * Negative errno value on error, zero otherwise
> > + */
> > +typedef int (*eth_ip_reassembly_capability_get_t)(struct rte_eth_dev
> *dev,
> > + struct rte_eth_ip_reass_params *capa);
> > +
> > +/**
> > + * @internal
> > + * Get IP reassembly offload configuration parameters set in PMD.
> > + *
> > + * @param dev
> > + * Port (ethdev) handle
> > + *
> > + * @param[out] conf
> > + * Configuration parameters for IP reassembly.
> > + *
> > + * @return
> > + * Negative errno value on error, zero otherwise
> > + */
> > +typedef int (*eth_ip_reassembly_conf_get_t)(struct rte_eth_dev *dev,
> > + struct rte_eth_ip_reass_params *conf);
> > +
> > +/**
> > + * @internal
> > + * Set configuration parameters for enabling IP reassembly offload in
> hardware.
> > + *
> > + * @param dev
> > + * Port (ethdev) handle
> > + *
> > + * @param[in] conf
> > + * Configuration parameters for IP reassembly.
> > + *
> > + * @return
> > + * Negative errno value on error, zero otherwise
> > + */
> > +typedef int (*eth_ip_reassembly_conf_set_t)(struct rte_eth_dev *dev,
> > + const struct rte_eth_ip_reass_params
> *conf);
> > +
> > /**
> > * @internal A structure containing the functions exported by an Ethernet
> driver.
> > */
> > @@ -1186,6 +1234,13 @@ struct eth_dev_ops {
> > * kinds of metadata to the PMD
> > */
> > eth_rx_metadata_negotiate_t rx_metadata_negotiate;
> > +
> > + /** Get IP reassembly capability */
> > + eth_ip_reassembly_capability_get_t ip_reassembly_capability_get;
> > + /** Get IP reassembly configuration */
> > + eth_ip_reassembly_conf_get_t ip_reassembly_conf_get;
> > + /** Set IP reassembly configuration */
> > + eth_ip_reassembly_conf_set_t ip_reassembly_conf_set;
> > };
> >
> > /**
> > diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
> > index 29e21ad580..88ca4ce867 100644
> > --- a/lib/ethdev/rte_ethdev.c
> > +++ b/lib/ethdev/rte_ethdev.c
> > @@ -6474,6 +6474,99 @@ rte_eth_rx_metadata_negotiate(uint16_t
> port_id, uint64_t *features)
> > (*dev->dev_ops->rx_metadata_negotiate)(dev,
> features));
> > }
> >
> > +int
> > +rte_eth_ip_reassembly_capability_get(uint16_t port_id,
> > + struct rte_eth_ip_reass_params
> *reass_capa)
>
> Syntax: the DPDK coding convention doesn't align the next line; mostly it has
> two tabs on the next line.
>
> Same comment valid for many instances below.
>
Ok
> > +{
> > + struct rte_eth_dev *dev;
> > +
> > + RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
> > + dev = &rte_eth_devices[port_id];
> > +
> > + if (dev->data->dev_configured == 0) {
> > + RTE_ETHDEV_LOG(ERR,
> > + "Device with port_id=%"PRIu16" is not
> configured.\n",
>
> - while printing port_id, we can use %u; 'PRIu16' has no benefit as far as I
> can see, and I see both are used in existing code. I prefer %u but have no
> strong opinion there.
>
> - The log doesn't mention IP reassembly capability at all; assuming you see
> this log, will it give enough context to understand the problem is related to
> IP reassembly?
>
> - Andrew commented before that each log message should be unique, to be able
> to detect the location easily; the exact same log message is used in a few
> other locations.
Ok will update the logs.
>
> > + port_id);
> > + return -EINVAL;
> > + }
> > +
> > + if (reass_capa == NULL) {
>
> I still think all 'reass' usage should be 'reassembly', but again that is me.
Ok will change it.
>
> > + RTE_ETHDEV_LOG(ERR, "Cannot get reassembly capability to
> NULL");
> > + return -EINVAL;
> > + }
> > +
> > + RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops-
> >ip_reassembly_capability_get,
> > + -ENOTSUP);
> > + memset(reass_capa, 0, sizeof(struct rte_eth_ip_reass_params));
> > +
> > + return eth_err(port_id, (*dev->dev_ops-
> >ip_reassembly_capability_get)
> > + (dev, reass_capa));
> > +}
> > +
> > +int
> > +rte_eth_ip_reassembly_conf_get(uint16_t port_id,
> > + struct rte_eth_ip_reass_params *conf)
> > +{
> > + struct rte_eth_dev *dev;
> > +
> > + RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
> > + dev = &rte_eth_devices[port_id];
> > +
> > + if (dev->data->dev_configured == 0) {
> > + RTE_ETHDEV_LOG(ERR,
> > + "Device with port_id=%"PRIu16" is not
> configured.\n",
> > + port_id);
> > + return -EINVAL;
> > + }
> > +
> > + if (conf == NULL) {
> > + RTE_ETHDEV_LOG(ERR, "Cannot get reassembly info to
> NULL");
> > + return -EINVAL;
> > + }
> > +
> > + RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops-
> >ip_reassembly_conf_get,
> > + -ENOTSUP);
> > + memset(conf, 0, sizeof(struct rte_eth_ip_reass_params));
>
> If the user didn't call 'rte_eth_ip_reassembly_conf_set()' prior to this call,
> what will the user get here? All zeros, or can the PMD fill in some defaults?
> Or should this API set some kind of error in that case to highlight that there
> is no configuration?
OK, I will update the doxygen comments for this. I believe it should give an error.
>
> > + return eth_err(port_id,
> > + (*dev->dev_ops->ip_reassembly_conf_get)(dev, conf));
> > +}
> > +
> > +int
> > +rte_eth_ip_reassembly_conf_set(uint16_t port_id,
> > + const struct rte_eth_ip_reass_params *conf)
> > +{
> > + struct rte_eth_dev *dev;
> > +
> > + RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
> > + dev = &rte_eth_devices[port_id];
> > +
> > + if (dev->data->dev_configured == 0) {
> > + RTE_ETHDEV_LOG(ERR,
> > + "Device with port_id=%"PRIu16" is not
> configured.\n",
> > + port_id);
> > + return -EINVAL;
> > + }
> > +
> > + if (dev->data->dev_started != 0) {
> > + RTE_ETHDEV_LOG(ERR,
> > + "Device with port_id=%"PRIu16" started,\n"
> > + "cannot configure IP reassembly params.\n",
> > + port_id);
> > + return -EINVAL;
> > + }
> > +
> > + if (conf == NULL) {
> > + RTE_ETHDEV_LOG(ERR,
> > + "Invalid IP reassembly configuration
> (NULL)\n");
> > + return -EINVAL;
> > + }
> > +
> > + RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops-
> >ip_reassembly_conf_set,
> > + -ENOTSUP);
> > + return eth_err(port_id,
> > + (*dev->dev_ops->ip_reassembly_conf_set)(dev, conf));
> > +}
> > +
> > RTE_LOG_REGISTER_DEFAULT(rte_eth_dev_logtype, INFO);
> >
> > RTE_INIT(ethdev_init_telemetry)
> > diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
> > index 147cc1ced3..ecc5cd50b9 100644
> > --- a/lib/ethdev/rte_ethdev.h
> > +++ b/lib/ethdev/rte_ethdev.h
> > @@ -1794,6 +1794,29 @@ enum rte_eth_representor_type {
> > RTE_ETH_REPRESENTOR_PF, /**< representor of Physical Function.
> */
> > };
> >
> > +/* Flag to offload IP reassembly for IPv4 packets. */
> > +#define RTE_ETH_DEV_REASSEMBLY_F_IPV4 (RTE_BIT32(0))
> > +/* Flag to offload IP reassembly for IPv6 packets. */
> > +#define RTE_ETH_DEV_REASSEMBLY_F_IPV6 (RTE_BIT32(1))
> > +/**
> > + * A structure used to get/set IP reassembly configuration/capability.
> > + *
> > + * If rte_eth_ip_reassembly_capability_get() returns 0, IP reassembly can
> be
> > + * enabled using rte_eth_ip_reassembly_conf_set() and params values
> lower than
> > + * capability can be set in the PMD.
>
> Can you please clarify "params values lower than capability"? Is it referring to
> 'max_frags'?
It can be for all three params.
The timeout should be less than the capability of the driver.
It may be the case that the app wants to enable only IPv4 and not IPv6, while the PMD can support both.
So the conf_set() params should not exceed the capability.
>
> > + */
> > +struct rte_eth_ip_reass_params {
> > + /** Maximum time in ms which PMD can wait for other fragments.
> */
> > + uint32_t reass_timeout_ms;
>
> Other variables in the struct is missing 'reass_' prefix, like it is 'max_frags'
> instead of 'reass_max_frags', should we do the same for this variable?
>
OK, I will drop 'reass' from reass_timeout_ms, as it is part of rte_eth_ip_reass_params,
or I should say rte_eth_ip_reassembly_params, as per your comment.
> > + /** Maximum number of fragments that can be reassembled. */
> > + uint16_t max_frags;
> > + /**
> > + * Flags to enable reassembly of packet types -
> > + * RTE_ETH_DEV_REASSEMBLY_F_xxx.
> > + */
> > + uint16_t flags;
> > +};
> > +
> > /**
> > * A structure used to retrieve the contextual information of
> > * an Ethernet device, such as the controlling driver of the
> > @@ -5202,6 +5225,74 @@ int rte_eth_representor_info_get(uint16_t
> port_id,
> > __rte_experimental
> > int rte_eth_rx_metadata_negotiate(uint16_t port_id, uint64_t
> *features);
> >
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice
> > + *
> > + * Get IP reassembly capabilities supported by the PMD,
> > + *
> > + * @param port_id
> > + * The port identifier of the device.
> > + * @param conf
> > + * A pointer to rte_eth_ip_reass_params structure.
> > + * @return
> > + * - (-ENOTSUP) if offload configuration is not supported by device.
> > + * - (-ENODEV) if *port_id* invalid.
> > + * - (-EIO) if device is removed.
> > + * - (0) on success.
> > + */
> > +__rte_experimental
> > +int rte_eth_ip_reassembly_capability_get(uint16_t port_id,
> > + struct rte_eth_ip_reass_params
> *conf);
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice
> > + *
> > + * Get IP reassembly configuration parameters currently set in PMD,
> > + * if device Rx offload flag (RTE_ETH_RX_OFFLOAD_IP_REASSEMBLY) is
>
> This flag is no more; the sentence needs to be updated.
Ahh my bad. Will correct it.
>
> > + * enabled and the PMD supports IP reassembly offload.
> > + *
>
> The device needs to be configured before calling this API; should that be
> documented here?
Ok will add one.
>
> > + * @param port_id
> > + * The port identifier of the device.
> > + * @param conf
> > + * A pointer to rte_eth_ip_reass_params structure.
> > + * @return
> > + * - (-ENOTSUP) if offload configuration is not supported by device.
> > + * - (-EINVAL) if offload is not enabled in rte_eth_conf.
> > + * - (-ENODEV) if *port_id* invalid.
> > + * - (-EIO) if device is removed.
> > + * - (0) on success.
> > + */
> > +__rte_experimental
> > +int rte_eth_ip_reassembly_conf_get(uint16_t port_id,
> > + struct rte_eth_ip_reass_params *conf);
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice
> > + *
> > + * Set IP reassembly configuration parameters if the PMD supports IP
> reassembly
> > + * offload. User should first call rte_eth_ip_reassembly_capability_get()
> to
> > + * check the maximum values supported by the PMD before setting the
> > + * configuration. The use of this API is mandatory to enable this feature
> and
> > + * should be called before rte_eth_dev_start().
> > + *
>
> Not sure if the above causes confusion; what it means is that this API should
> be called when the device is stopped. When you say it should be called before
> 'rte_eth_dev_start()', can it be taken to mean it should be called before the
> first ever start() call?
> If you think the above is clear enough, we can continue as it is.
I believe dev_start() should be called after conf_set(), so we can continue as is.
There is no requirement for it to be the first ever start call.
>
> > + * @param port_id
> > + * The port identifier of the device.
> > + * @param conf
> > + * A pointer to rte_eth_ip_reass_params structure.
> > + * @return
> > + * - (-ENOTSUP) if offload configuration is not supported by device.
> > + * - (-ENODEV) if *port_id* invalid.
> > + * - (-EIO) if device is removed.
> > + * - (0) on success.
> > + */
> > +__rte_experimental
> > +int rte_eth_ip_reassembly_conf_set(uint16_t port_id,
> > + const struct rte_eth_ip_reass_params
> *conf);
> > +
> > +
> > #include <rte_ethdev_core.h>
> >
> > /**
> > diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
> > index c2fb0669a4..e22c102818 100644
> > --- a/lib/ethdev/version.map
> > +++ b/lib/ethdev/version.map
> > @@ -256,6 +256,11 @@ EXPERIMENTAL {
> > rte_flow_flex_item_create;
> > rte_flow_flex_item_release;
> > rte_flow_pick_transfer_proxy;
> > +
> > + # added in 22.03
> > + rte_eth_ip_reassembly_capability_get;
> > + rte_eth_ip_reassembly_conf_get;
> > + rte_eth_ip_reassembly_conf_set;
> > };
> >
> > INTERNAL {
* Re: [EXT] Re: [PATCH v4 2/3] ethdev: add mbuf dynfield for incomplete IP reassembly
2022-02-07 14:20 ` [EXT] " Akhil Goyal
@ 2022-02-07 14:56 ` Ferruh Yigit
2022-02-07 16:20 ` Akhil Goyal
0 siblings, 1 reply; 184+ messages in thread
From: Ferruh Yigit @ 2022-02-07 14:56 UTC (permalink / raw)
To: Akhil Goyal, dev, olivier.matz
Cc: Anoob Joseph, matan, konstantin.ananyev, thomas,
andrew.rybchenko, rosen.xu, david.marchand, radu.nicolau,
Jerin Jacob Kollanukkaran, stephen, mdr
On 2/7/2022 2:20 PM, Akhil Goyal wrote:
>> On 2/4/2022 10:13 PM, Akhil Goyal wrote:
>>> Hardware IP reassembly may be incomplete for multiple reasons like
>>> reassembly timeout reached, duplicate fragments, etc.
>>> To save application cycles to process these packets again, a new
>>> mbuf dynflag is added to show that the mbuf received is not
>>> reassembled properly.
>>>
>>> Now if this dynflag is set, application can retrieve corresponding
>>> chain of mbufs using mbuf dynfield set by the PMD. Now, it will be
>>> up to application to either drop those fragments or wait for more time.
>>>
>>> Signed-off-by: Akhil Goyal <gakhil@marvell.com>
>>> Change-Id: I118847cd6269da7e6313ac4e0d970d790dfef1ff
>>> ---
>>> lib/ethdev/ethdev_driver.h | 8 ++++++++
>>> lib/ethdev/rte_ethdev.c | 28 ++++++++++++++++++++++++++++
>>> lib/ethdev/rte_ethdev.h | 17 +++++++++++++++++
>>> lib/ethdev/version.map | 1 +
>>> lib/mbuf/rte_mbuf_dyn.h | 9 +++++++++
>>> 5 files changed, 63 insertions(+)
>>>
>>> diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
>>> index 8fe77f283f..6cfb266f7d 100644
>>> --- a/lib/ethdev/ethdev_driver.h
>>> +++ b/lib/ethdev/ethdev_driver.h
>>> @@ -1707,6 +1707,14 @@ int
>>> rte_eth_hairpin_queue_peer_unbind(uint16_t cur_port, uint16_t
>> cur_queue,
>>> uint32_t direction);
>>>
>>> +/**
>>> + * @internal
>>> + * Register mbuf dynamic field and flag for IP reassembly incomplete case.
>>> + */
>>> +__rte_internal
>>> +int
>>> +rte_eth_ip_reass_dynfield_register(int *field_offset, int *flag);
>>> +
>>>
>>> /*
>>> * Legacy ethdev API used internally by drivers.
>>> diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
>>> index 88ca4ce867..48367dbec1 100644
>>> --- a/lib/ethdev/rte_ethdev.c
>>> +++ b/lib/ethdev/rte_ethdev.c
>>> @@ -6567,6 +6567,34 @@ rte_eth_ip_reassembly_conf_set(uint16_t
>> port_id,
>>> (*dev->dev_ops->ip_reassembly_conf_set)(dev, conf));
>>> }
>>>
>>> +int
>>> +rte_eth_ip_reass_dynfield_register(int *field_offset, int *flag_offset)
>>> +{
>>> + static const struct rte_mbuf_dynfield field_desc = {
>>> + .name = RTE_MBUF_DYNFIELD_IP_REASS_NAME,
>>> + .size = sizeof(rte_eth_ip_reass_dynfield_t),
>>> + .align = __alignof__(rte_eth_ip_reass_dynfield_t),
>>> + };
>>> + static const struct rte_mbuf_dynflag ip_reass_dynflag = {
>>> + .name =
>> RTE_MBUF_DYNFLAG_IP_REASS_INCOMPLETE_NAME,
>>> + };
>>> + int offset;
>>> +
>>> + offset = rte_mbuf_dynfield_register(&field_desc);
>>> + if (offset < 0)
>>> + return -1;
>>> + if (field_offset != NULL)
>>> + *field_offset = offset;
>>> +
>>> + offset = rte_mbuf_dynflag_register(&ip_reass_dynflag);
>>> + if (offset < 0)
>>> + return -1;
>>> + if (flag_offset != NULL)
>>> + *flag_offset = offset;
>>> +
>>> + return 0;
>>> +}
>>> +
>>
>> How mandatory is this field for the feature?
>>
>> If 'rte_eth_ip_reass_dynfield_register()' fails, what should the PMD do?
>> Should this API be called before 'rte_eth_ip_reassembly_capability_get()', and
>> if registering the dynfield fails, should the PMD return the feature as not supported?
>
> Dynfield is added for the error/ incomplete reassembly case.
> If the dynfield is not registered, the feature can work well for success scenarios.
> Dynfield registration is the responsibility of the PMD and it is up to the driver to decide
> when to set the dynfield. The registration can be done in conf_set() API.
>
>>
>> Can you please describe this dependency, preferable in the
>> 'rte_eth_ip_reassembly_capability_get()' doxygen comment?
>
> Capability get is not a place where the feature is enabled.
> Dynfield should be registered only in case the feature is enabled.
> I will add following line in conf_set() doxygen comment.
>
> The PMD should call 'rte_eth_ip_reass_dynfield_register()' when
> the feature is enabled and return error if dynfield is not registered.
> Dynfield is needed to give packets back to the application in case the
> reassembly is not complete.
>
Can you also clarify what the PMD should do for the IP reassembly feature
when registering the dynfield fails? Should it keep the feature enabled or disabled?
This will also clarify things for the application: if the application detects that
'RTE_MBUF_DYNFIELD_IP_REASS_NAME' is not registered, how should it behave?
Ignore it? Fail? Disable IP reassembly?
>>
>>> RTE_LOG_REGISTER_DEFAULT(rte_eth_dev_logtype, INFO);
>>>
>>> RTE_INIT(ethdev_init_telemetry)
>>> diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
>>> index ecc5cd50b9..ce35023c40 100644
>>> --- a/lib/ethdev/rte_ethdev.h
>>> +++ b/lib/ethdev/rte_ethdev.h
>>> @@ -5292,6 +5292,23 @@ __rte_experimental
>>> int rte_eth_ip_reassembly_conf_set(uint16_t port_id,
>>> const struct rte_eth_ip_reass_params
>> *conf);
>>>
>>> +/**
>>> + * In case of IP reassembly offload failure, ol_flags in mbuf will be
>>> + * updated with dynamic flag and packets will be returned without
>> alteration.
>>> + * The application can retrieve the attached fragments using mbuf
>> dynamic field.
>>> + */
>>> +typedef struct {
>>> + /**
>>> + * Next fragment packet. Application should fetch dynamic field of
>>> + * each fragment until a NULL is received and nb_frags is 0.
>>> + */
>>> + struct rte_mbuf *next_frag;
>>> + /** Time spent(in ms) by HW in waiting for further fragments. */
>>> + uint16_t time_spent;
>>> + /** Number of more fragments attached in mbuf dynamic fields. */
>>> + uint16_t nb_frags;
>>> +} rte_eth_ip_reass_dynfield_t;
>>> +
>>>
>>> #include <rte_ethdev_core.h>
>>>
>>> diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
>>> index e22c102818..d6de8d402e 100644
>>> --- a/lib/ethdev/version.map
>>> +++ b/lib/ethdev/version.map
>>> @@ -284,6 +284,7 @@ INTERNAL {
>>> rte_eth_hairpin_queue_peer_bind;
>>> rte_eth_hairpin_queue_peer_unbind;
>>> rte_eth_hairpin_queue_peer_update;
>>> + rte_eth_ip_reass_dynfield_register;
>>> rte_eth_representor_id_get;
>>> rte_eth_switch_domain_alloc;
>>> rte_eth_switch_domain_free;
>>> diff --git a/lib/mbuf/rte_mbuf_dyn.h b/lib/mbuf/rte_mbuf_dyn.h
>>> index 29abe8da53..299638513b 100644
>>> --- a/lib/mbuf/rte_mbuf_dyn.h
>>> +++ b/lib/mbuf/rte_mbuf_dyn.h
>>> @@ -320,6 +320,15 @@ int rte_mbuf_dyn_rx_timestamp_register(int
>> *field_offset, uint64_t *rx_flag);
>>> */
>>> int rte_mbuf_dyn_tx_timestamp_register(int *field_offset, uint64_t
>> *tx_flag);
>>>
>>> +/**
>>> + * For the PMDs which support IP reassembly of packets, PMD will
>> updated the
>>> + * packet with RTE_MBUF_DYNFLAG_IP_REASS_INCOMPLETE_NAME to
>> denote that
>>> + * IP reassembly is incomplete and application can retrieve the packets
>> back
>>> + * using RTE_MBUF_DYNFIELD_IP_REASS_NAME.
>>> + */
>>> +#define RTE_MBUF_DYNFIELD_IP_REASS_NAME
>> "rte_eth_ip_reass_dynfield"
>>> +#define RTE_MBUF_DYNFLAG_IP_REASS_INCOMPLETE_NAME
>> "rte_eth_ip_reass_incomplete_dynflag"
>>> +
>>
>> Needs Olivier's comment/ack.
> @Olivier: could you please comment on this?
* RE: [EXT] Re: [PATCH v4 2/3] ethdev: add mbuf dynfield for incomplete IP reassembly
2022-02-07 14:56 ` Ferruh Yigit
@ 2022-02-07 16:20 ` Akhil Goyal
2022-02-07 16:41 ` Ferruh Yigit
0 siblings, 1 reply; 184+ messages in thread
From: Akhil Goyal @ 2022-02-07 16:20 UTC (permalink / raw)
To: Ferruh Yigit, dev, olivier.matz
Cc: Anoob Joseph, matan, konstantin.ananyev, thomas,
andrew.rybchenko, rosen.xu, david.marchand, radu.nicolau,
Jerin Jacob Kollanukkaran, stephen, mdr
> On 2/7/2022 2:20 PM, Akhil Goyal wrote:
> >> On 2/4/2022 10:13 PM, Akhil Goyal wrote:
> >>> Hardware IP reassembly may be incomplete for multiple reasons like
> >>> reassembly timeout reached, duplicate fragments, etc.
> >>> To save application cycles to process these packets again, a new
> >>> mbuf dynflag is added to show that the mbuf received is not
> >>> reassembled properly.
> >>>
> >>> Now if this dynflag is set, application can retrieve corresponding
> >>> chain of mbufs using mbuf dynfield set by the PMD. Now, it will be
> >>> up to application to either drop those fragments or wait for more time.
> >>>
> >>> Signed-off-by: Akhil Goyal <gakhil@marvell.com>
> >>> Change-Id: I118847cd6269da7e6313ac4e0d970d790dfef1ff
> >>> ---
> >>> lib/ethdev/ethdev_driver.h | 8 ++++++++
> >>> lib/ethdev/rte_ethdev.c | 28 ++++++++++++++++++++++++++++
> >>> lib/ethdev/rte_ethdev.h | 17 +++++++++++++++++
> >>> lib/ethdev/version.map | 1 +
> >>> lib/mbuf/rte_mbuf_dyn.h | 9 +++++++++
> >>> 5 files changed, 63 insertions(+)
> >>>
> >>> diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
> >>> index 8fe77f283f..6cfb266f7d 100644
> >>> --- a/lib/ethdev/ethdev_driver.h
> >>> +++ b/lib/ethdev/ethdev_driver.h
> >>> @@ -1707,6 +1707,14 @@ int
> >>> rte_eth_hairpin_queue_peer_unbind(uint16_t cur_port, uint16_t
> >> cur_queue,
> >>> uint32_t direction);
> >>>
> >>> +/**
> >>> + * @internal
> >>> + * Register mbuf dynamic field and flag for IP reassembly incomplete case.
> >>> + */
> >>> +__rte_internal
> >>> +int
> >>> +rte_eth_ip_reass_dynfield_register(int *field_offset, int *flag);
> >>> +
> >>>
> >>> /*
> >>> * Legacy ethdev API used internally by drivers.
> >>> diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
> >>> index 88ca4ce867..48367dbec1 100644
> >>> --- a/lib/ethdev/rte_ethdev.c
> >>> +++ b/lib/ethdev/rte_ethdev.c
> >>> @@ -6567,6 +6567,34 @@ rte_eth_ip_reassembly_conf_set(uint16_t
> >> port_id,
> >>> (*dev->dev_ops->ip_reassembly_conf_set)(dev, conf));
> >>> }
> >>>
> >>> +int
> >>> +rte_eth_ip_reass_dynfield_register(int *field_offset, int *flag_offset)
> >>> +{
> >>> + static const struct rte_mbuf_dynfield field_desc = {
> >>> + .name = RTE_MBUF_DYNFIELD_IP_REASS_NAME,
> >>> + .size = sizeof(rte_eth_ip_reass_dynfield_t),
> >>> + .align = __alignof__(rte_eth_ip_reass_dynfield_t),
> >>> + };
> >>> + static const struct rte_mbuf_dynflag ip_reass_dynflag = {
> >>> + .name =
> >> RTE_MBUF_DYNFLAG_IP_REASS_INCOMPLETE_NAME,
> >>> + };
> >>> + int offset;
> >>> +
> >>> + offset = rte_mbuf_dynfield_register(&field_desc);
> >>> + if (offset < 0)
> >>> + return -1;
> >>> + if (field_offset != NULL)
> >>> + *field_offset = offset;
> >>> +
> >>> + offset = rte_mbuf_dynflag_register(&ip_reass_dynflag);
> >>> + if (offset < 0)
> >>> + return -1;
> >>> + if (flag_offset != NULL)
> >>> + *flag_offset = offset;
> >>> +
> >>> + return 0;
> >>> +}
> >>> +
> >>
> >> How mandatory is this field for the feature?
> >>
> >> If 'rte_eth_ip_reass_dynfield_register()' fails, what PMD should do?
> >> Should this API called before 'rte_eth_ip_reassembly_capability_get()' and
> >> if registering dnyfield fails should PMD return feature as not supported?
> >
> > Dynfield is added for the error/ incomplete reassembly case.
> > If the dynfield is not registered, the feature can work well for success
> scenarios.
> > Dynfield registration is responsibility of PMD and it is upto the driver to decide
> > when to set the dynfield. The registration can be done in conf_set() API.
> >
> >>
> >> Can you please describe this dependency, preferable in the
> >> 'rte_eth_ip_reassembly_capability_get()' doxygen comment?
> >
> > Capability get is not a place where the feature is enabled.
> > Dynfield should be registered only in case the feature is enabled.
> > I will add following line in conf_set() doxygen comment.
> >
> > The PMD should call 'rte_eth_ip_reass_dynfield_register()' when
> > the feature is enabled and return error if dynfield is not registered.
> > Dynfield is needed to give packets back to the application in case the
> > reassembly is not complete.
> >
>
> Can you also clarify what PMD should do related to the ip reassembly feature
> when registering dynfield fails? Should it keep the feature enabled or disabled?
>
> This will also clarify for the application, if application detects that
> 'RTE_MBUF_DYNFIELD_IP_REASS_NAME' is not registered how it should
> behave?
> Ignore it? Fail? Disable ip reassembly?
The PMD can return an error in the conf_set API if the dynfield is not successfully
registered. Or, in the case of inline IPsec, the PMD can return the error while
creating the inline security session.
Hence the application will get to know whether the dynfield is successfully
configured or not.
If the dynfield is not configured, the PMD will return a configuration error
(either in conf_set or in security_session_create) and the feature
will not be enabled.
^ permalink raw reply [flat|nested] 184+ messages in thread
* Re: [EXT] Re: [PATCH v4 2/3] ethdev: add mbuf dynfield for incomplete IP reassembly
2022-02-07 16:20 ` Akhil Goyal
@ 2022-02-07 16:41 ` Ferruh Yigit
2022-02-07 17:17 ` Akhil Goyal
0 siblings, 1 reply; 184+ messages in thread
From: Ferruh Yigit @ 2022-02-07 16:41 UTC (permalink / raw)
To: Akhil Goyal, dev, olivier.matz
Cc: Anoob Joseph, matan, konstantin.ananyev, thomas,
andrew.rybchenko, rosen.xu, david.marchand, radu.nicolau,
Jerin Jacob Kollanukkaran, stephen, mdr
On 2/7/2022 4:20 PM, Akhil Goyal wrote:
>> On 2/7/2022 2:20 PM, Akhil Goyal wrote:
>>>> On 2/4/2022 10:13 PM, Akhil Goyal wrote:
>>>>> Hardware IP reassembly may be incomplete for multiple reasons like
>>>>> reassembly timeout reached, duplicate fragments, etc.
>>>>> To save application cycles to process these packets again, a new
>>>>> mbuf dynflag is added to show that the mbuf received is not
>>>>> reassembled properly.
>>>>>
>>>>> Now if this dynflag is set, application can retrieve corresponding
>>>>> chain of mbufs using mbuf dynfield set by the PMD. Now, it will be
>>>>> up to application to either drop those fragments or wait for more time.
>>>>>
>>>>> Signed-off-by: Akhil Goyal <gakhil@marvell.com>
>>>>> Change-Id: I118847cd6269da7e6313ac4e0d970d790dfef1ff
>>>>> ---
>>>>> lib/ethdev/ethdev_driver.h | 8 ++++++++
>>>>> lib/ethdev/rte_ethdev.c | 28 ++++++++++++++++++++++++++++
>>>>> lib/ethdev/rte_ethdev.h | 17 +++++++++++++++++
>>>>> lib/ethdev/version.map | 1 +
>>>>> lib/mbuf/rte_mbuf_dyn.h | 9 +++++++++
>>>>> 5 files changed, 63 insertions(+)
>>>>>
>>>>> diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
>>>>> index 8fe77f283f..6cfb266f7d 100644
>>>>> --- a/lib/ethdev/ethdev_driver.h
>>>>> +++ b/lib/ethdev/ethdev_driver.h
>>>>> @@ -1707,6 +1707,14 @@ int
>>>>> rte_eth_hairpin_queue_peer_unbind(uint16_t cur_port, uint16_t
>>>> cur_queue,
>>>>> uint32_t direction);
>>>>>
>>>>> +/**
>>>>> + * @internal
>>>>> + * Register mbuf dynamic field and flag for IP reassembly incomplete case.
>>>>> + */
>>>>> +__rte_internal
>>>>> +int
>>>>> +rte_eth_ip_reass_dynfield_register(int *field_offset, int *flag);
>>>>> +
>>>>>
>>>>> /*
>>>>> * Legacy ethdev API used internally by drivers.
>>>>> diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
>>>>> index 88ca4ce867..48367dbec1 100644
>>>>> --- a/lib/ethdev/rte_ethdev.c
>>>>> +++ b/lib/ethdev/rte_ethdev.c
>>>>> @@ -6567,6 +6567,34 @@ rte_eth_ip_reassembly_conf_set(uint16_t
>>>> port_id,
>>>>> (*dev->dev_ops->ip_reassembly_conf_set)(dev, conf));
>>>>> }
>>>>>
>>>>> +int
>>>>> +rte_eth_ip_reass_dynfield_register(int *field_offset, int *flag_offset)
>>>>> +{
>>>>> + static const struct rte_mbuf_dynfield field_desc = {
>>>>> + .name = RTE_MBUF_DYNFIELD_IP_REASS_NAME,
>>>>> + .size = sizeof(rte_eth_ip_reass_dynfield_t),
>>>>> + .align = __alignof__(rte_eth_ip_reass_dynfield_t),
>>>>> + };
>>>>> + static const struct rte_mbuf_dynflag ip_reass_dynflag = {
>>>>> + .name =
>>>> RTE_MBUF_DYNFLAG_IP_REASS_INCOMPLETE_NAME,
>>>>> + };
>>>>> + int offset;
>>>>> +
>>>>> + offset = rte_mbuf_dynfield_register(&field_desc);
>>>>> + if (offset < 0)
>>>>> + return -1;
>>>>> + if (field_offset != NULL)
>>>>> + *field_offset = offset;
>>>>> +
>>>>> + offset = rte_mbuf_dynflag_register(&ip_reass_dynflag);
>>>>> + if (offset < 0)
>>>>> + return -1;
>>>>> + if (flag_offset != NULL)
>>>>> + *flag_offset = offset;
>>>>> +
>>>>> + return 0;
>>>>> +}
>>>>> +
>>>>
>>>> How mandatory is this field for the feature?
>>>>
>>>> If 'rte_eth_ip_reass_dynfield_register()' fails, what PMD should do?
>>>> Should this API called before 'rte_eth_ip_reassembly_capability_get()' and
>>>> if registering dnyfield fails should PMD return feature as not supported?
>>>
>>> Dynfield is added for the error/ incomplete reassembly case.
>>> If the dynfield is not registered, the feature can work well for success
>> scenarios.
>>> Dynfield registration is responsibility of PMD and it is upto the driver to decide
>>> when to set the dynfield. The registration can be done in conf_set() API.
>>>
>>>>
>>>> Can you please describe this dependency, preferable in the
>>>> 'rte_eth_ip_reassembly_capability_get()' doxygen comment?
>>>
>>> Capability get is not a place where the feature is enabled.
>>> Dynfield should be registered only in case the feature is enabled.
>>> I will add following line in conf_set() doxygen comment.
>>>
>>> The PMD should call 'rte_eth_ip_reass_dynfield_register()' when
>>> the feature is enabled and return error if dynfield is not registered.
>>> Dynfield is needed to give packets back to the application in case the
>>> reassembly is not complete.
>>>
>>
>> Can you also clarify what PMD should do related to the ip reassembly feature
>> when registering dynfield fails? Should it keep the feature enabled or disabled?
>>
>> This will also clarify for the application, if application detects that
>> 'RTE_MBUF_DYNFIELD_IP_REASS_NAME' is not registered how it should
>> behave?
>> Ignore it? Fail? Disable ip reassembly?
>
> The PMD can return error in the conf_set API, if dynfield is not successfully
> registered. Or in case of inline IPsec, the PMD can return the error while
> creating inline security session.
>
I think it is better to handle this in conf_set, since there can be other users than
IPsec.
> Hence the application will get to know if the dynfield is successfully configured
> Or not.
The application can already know whether the dynfield is registered or not via
'rte_mbuf_dynfield_lookup()'.
> If the dynfield is not configured, PMD will return configuration error
> (either in conf_set or in security_session_create) and feature
> will not be enabled.
>
Ack.
What do you think about documenting this in the API documentation (doxygen comment)?
^ permalink raw reply [flat|nested] 184+ messages in thread
* RE: [EXT] Re: [PATCH v4 2/3] ethdev: add mbuf dynfield for incomplete IP reassembly
2022-02-07 16:41 ` Ferruh Yigit
@ 2022-02-07 17:17 ` Akhil Goyal
0 siblings, 0 replies; 184+ messages in thread
From: Akhil Goyal @ 2022-02-07 17:17 UTC (permalink / raw)
To: Ferruh Yigit, dev, olivier.matz
Cc: Anoob Joseph, matan, konstantin.ananyev, thomas,
andrew.rybchenko, rosen.xu, david.marchand, radu.nicolau,
Jerin Jacob Kollanukkaran, stephen, mdr
> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@intel.com>
> Sent: Monday, February 7, 2022 10:11 PM
> To: Akhil Goyal <gakhil@marvell.com>; dev@dpdk.org;
> olivier.matz@6wind.com
> Cc: Anoob Joseph <anoobj@marvell.com>; matan@nvidia.com;
> konstantin.ananyev@intel.com; thomas@monjalon.net;
> andrew.rybchenko@oktetlabs.ru; rosen.xu@intel.com;
> david.marchand@redhat.com; radu.nicolau@intel.com; Jerin Jacob
> Kollanukkaran <jerinj@marvell.com>; stephen@networkplumber.org;
> mdr@ashroe.eu
> Subject: Re: [EXT] Re: [PATCH v4 2/3] ethdev: add mbuf dynfield for
> incomplete IP reassembly
>
> On 2/7/2022 4:20 PM, Akhil Goyal wrote:
> >> On 2/7/2022 2:20 PM, Akhil Goyal wrote:
> >>>> On 2/4/2022 10:13 PM, Akhil Goyal wrote:
> >>>>> Hardware IP reassembly may be incomplete for multiple reasons like
> >>>>> reassembly timeout reached, duplicate fragments, etc.
> >>>>> To save application cycles to process these packets again, a new
> >>>>> mbuf dynflag is added to show that the mbuf received is not
> >>>>> reassembled properly.
> >>>>>
> >>>>> Now if this dynflag is set, application can retrieve corresponding
> >>>>> chain of mbufs using mbuf dynfield set by the PMD. Now, it will be
> >>>>> up to application to either drop those fragments or wait for more
> time.
> >>>>>
> >>>>> Signed-off-by: Akhil Goyal <gakhil@marvell.com>
> >>>>> Change-Id: I118847cd6269da7e6313ac4e0d970d790dfef1ff
> >>>>> ---
> >>>>> lib/ethdev/ethdev_driver.h | 8 ++++++++
> >>>>> lib/ethdev/rte_ethdev.c | 28 ++++++++++++++++++++++++++++
> >>>>> lib/ethdev/rte_ethdev.h | 17 +++++++++++++++++
> >>>>> lib/ethdev/version.map | 1 +
> >>>>> lib/mbuf/rte_mbuf_dyn.h | 9 +++++++++
> >>>>> 5 files changed, 63 insertions(+)
> >>>>>
> >>>>> diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
> >>>>> index 8fe77f283f..6cfb266f7d 100644
> >>>>> --- a/lib/ethdev/ethdev_driver.h
> >>>>> +++ b/lib/ethdev/ethdev_driver.h
> >>>>> @@ -1707,6 +1707,14 @@ int
> >>>>> rte_eth_hairpin_queue_peer_unbind(uint16_t cur_port, uint16_t
> >>>> cur_queue,
> >>>>> uint32_t direction);
> >>>>>
> >>>>> +/**
> >>>>> + * @internal
> >>>>> + * Register mbuf dynamic field and flag for IP reassembly incomplete
> case.
> >>>>> + */
> >>>>> +__rte_internal
> >>>>> +int
> >>>>> +rte_eth_ip_reass_dynfield_register(int *field_offset, int *flag);
> >>>>> +
> >>>>>
> >>>>> /*
> >>>>> * Legacy ethdev API used internally by drivers.
> >>>>> diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
> >>>>> index 88ca4ce867..48367dbec1 100644
> >>>>> --- a/lib/ethdev/rte_ethdev.c
> >>>>> +++ b/lib/ethdev/rte_ethdev.c
> >>>>> @@ -6567,6 +6567,34 @@ rte_eth_ip_reassembly_conf_set(uint16_t
> >>>> port_id,
> >>>>> (*dev->dev_ops->ip_reassembly_conf_set)(dev,
> conf));
> >>>>> }
> >>>>>
> >>>>> +int
> >>>>> +rte_eth_ip_reass_dynfield_register(int *field_offset, int
> *flag_offset)
> >>>>> +{
> >>>>> + static const struct rte_mbuf_dynfield field_desc = {
> >>>>> + .name = RTE_MBUF_DYNFIELD_IP_REASS_NAME,
> >>>>> + .size = sizeof(rte_eth_ip_reass_dynfield_t),
> >>>>> + .align = __alignof__(rte_eth_ip_reass_dynfield_t),
> >>>>> + };
> >>>>> + static const struct rte_mbuf_dynflag ip_reass_dynflag = {
> >>>>> + .name =
> >>>> RTE_MBUF_DYNFLAG_IP_REASS_INCOMPLETE_NAME,
> >>>>> + };
> >>>>> + int offset;
> >>>>> +
> >>>>> + offset = rte_mbuf_dynfield_register(&field_desc);
> >>>>> + if (offset < 0)
> >>>>> + return -1;
> >>>>> + if (field_offset != NULL)
> >>>>> + *field_offset = offset;
> >>>>> +
> >>>>> + offset = rte_mbuf_dynflag_register(&ip_reass_dynflag);
> >>>>> + if (offset < 0)
> >>>>> + return -1;
> >>>>> + if (flag_offset != NULL)
> >>>>> + *flag_offset = offset;
> >>>>> +
> >>>>> + return 0;
> >>>>> +}
> >>>>> +
> >>>>
> >>>> How mandatory is this field for the feature?
> >>>>
> >>>> If 'rte_eth_ip_reass_dynfield_register()' fails, what PMD should do?
> >>>> Should this API called before 'rte_eth_ip_reassembly_capability_get()'
> and
> >>>> if registering dnyfield fails should PMD return feature as not
> supported?
> >>>
> >>> Dynfield is added for the error/ incomplete reassembly case.
> >>> If the dynfield is not registered, the feature can work well for success
> >> scenarios.
> >>> Dynfield registration is responsibility of PMD and it is upto the driver to
> decide
> >>> when to set the dynfield. The registration can be done in conf_set() API.
> >>>
> >>>>
> >>>> Can you please describe this dependency, preferable in the
> >>>> 'rte_eth_ip_reassembly_capability_get()' doxygen comment?
> >>>
> >>> Capability get is not a place where the feature is enabled.
> >>> Dynfield should be registered only in case the feature is enabled.
> >>> I will add following line in conf_set() doxygen comment.
> >>>
> >>> The PMD should call 'rte_eth_ip_reass_dynfield_register()' when
> >>> the feature is enabled and return error if dynfield is not registered.
> >>> Dynfield is needed to give packets back to the application in case the
> >>> reassembly is not complete.
> >>>
> >>
> >> Can you also clarify what PMD should do related to the ip reassembly
> feature
> >> when registering dynfield fails? Should it keep the feature enabled or
> disabled?
> >>
> >> This will also clarify for the application, if application detects that
> >> 'RTE_MBUF_DYNFIELD_IP_REASS_NAME' is not registered how it should
> >> behave?
> >> Ignore it? Fail? Disable ip reassembly?
> >
> > The PMD can return error in the conf_set API, if dynfield is not successfully
> > registered. Or in case of inline IPsec, the PMD can return the error while
> > creating inline security session.
> >
>
> I think better to handle in the conf_set, since there can be other users than
> IPSec.
I think it can be left to the PMD, as there can be cases where reassembly is supported
only with IPsec flows and not with normal IP packets.
So reassembly may get enabled for IPsec flows only when the security session is created.
In that case, session creation will fail.
>
> > Hence the application will get to know if the dynfield is successfully
> configured
> > Or not.
>
> Application already can know if dynfield is registered or not via
> 'rte_mbuf_dynfield_lookup()'.
>
> > If the dynfield is not configured, PMD will return configuration error
> > (either in conf_set or in security_session_create) and feature
> > will not be enabled.
> >
>
> ack.
> What do you think to document this in the API documentation (doxygen
> comment)?
OK, I will try to clarify this in the doxygen comments.
^ permalink raw reply [flat|nested] 184+ messages in thread
* Re: [PATCH v4 2/3] ethdev: add mbuf dynfield for incomplete IP reassembly
2022-02-04 22:13 ` [PATCH v4 2/3] ethdev: add mbuf dynfield for incomplete IP reassembly Akhil Goyal
2022-02-07 13:58 ` Ferruh Yigit
@ 2022-02-07 17:23 ` Stephen Hemminger
2022-02-07 17:28 ` Ferruh Yigit
2022-02-07 17:29 ` Akhil Goyal
1 sibling, 2 replies; 184+ messages in thread
From: Stephen Hemminger @ 2022-02-07 17:23 UTC (permalink / raw)
To: Akhil Goyal
Cc: dev, anoobj, matan, konstantin.ananyev, thomas, ferruh.yigit,
andrew.rybchenko, rosen.xu, olivier.matz, david.marchand,
radu.nicolau, jerinj, mdr
On Sat, 5 Feb 2022 03:43:33 +0530
Akhil Goyal <gakhil@marvell.com> wrote:
> +/**
> + * @internal
> + * Register mbuf dynamic field and flag for IP reassembly incomplete case.
> + */
> +__rte_internal
> +int
> +rte_eth_ip_reass_dynfield_register(int *field_offset, int *flag);
Maybe use RTE_INIT() constructor for this?
^ permalink raw reply [flat|nested] 184+ messages in thread
* Re: [PATCH v4 2/3] ethdev: add mbuf dynfield for incomplete IP reassembly
2022-02-07 17:23 ` Stephen Hemminger
@ 2022-02-07 17:28 ` Ferruh Yigit
2022-02-07 18:01 ` Stephen Hemminger
2022-02-07 17:29 ` Akhil Goyal
1 sibling, 1 reply; 184+ messages in thread
From: Ferruh Yigit @ 2022-02-07 17:28 UTC (permalink / raw)
To: Stephen Hemminger, Akhil Goyal
Cc: dev, anoobj, matan, konstantin.ananyev, thomas, andrew.rybchenko,
rosen.xu, olivier.matz, david.marchand, radu.nicolau, jerinj,
mdr
On 2/7/2022 5:23 PM, Stephen Hemminger wrote:
> On Sat, 5 Feb 2022 03:43:33 +0530
> Akhil Goyal <gakhil@marvell.com> wrote:
>
>> +/**
>> + * @internal
>> + * Register mbuf dynamic field and flag for IP reassembly incomplete case.
>> + */
>> +__rte_internal
>> +int
>> +rte_eth_ip_reass_dynfield_register(int *field_offset, int *flag);
>
> Maybe use RTE_INIT() constructor for this?
Dynfield should be registered only when the user asks for the feature.
^ permalink raw reply [flat|nested] 184+ messages in thread
* RE: [EXT] Re: [PATCH v4 2/3] ethdev: add mbuf dynfield for incomplete IP reassembly
2022-02-07 17:23 ` Stephen Hemminger
2022-02-07 17:28 ` Ferruh Yigit
@ 2022-02-07 17:29 ` Akhil Goyal
1 sibling, 0 replies; 184+ messages in thread
From: Akhil Goyal @ 2022-02-07 17:29 UTC (permalink / raw)
To: Stephen Hemminger
Cc: dev, Anoob Joseph, matan, konstantin.ananyev, thomas,
ferruh.yigit, andrew.rybchenko, rosen.xu, olivier.matz,
david.marchand, radu.nicolau, Jerin Jacob Kollanukkaran, mdr
> On Sat, 5 Feb 2022 03:43:33 +0530
> Akhil Goyal <gakhil@marvell.com> wrote:
>
> > +/**
> > + * @internal
> > + * Register mbuf dynamic field and flag for IP reassembly incomplete case.
> > + */
> > +__rte_internal
> > +int
> > +rte_eth_ip_reass_dynfield_register(int *field_offset, int *flag);
>
> Maybe use RTE_INIT() constructor for this?
The same application can be used for the non-reassembly case as well.
RTE_INIT would mean the dynfield would always be registered, whether
the feature is enabled or not.
The current implementation registers it only when the
reassembly option is enabled.
^ permalink raw reply [flat|nested] 184+ messages in thread
* Re: [PATCH v4 2/3] ethdev: add mbuf dynfield for incomplete IP reassembly
2022-02-07 17:28 ` Ferruh Yigit
@ 2022-02-07 18:01 ` Stephen Hemminger
2022-02-07 18:28 ` [EXT] " Akhil Goyal
0 siblings, 1 reply; 184+ messages in thread
From: Stephen Hemminger @ 2022-02-07 18:01 UTC (permalink / raw)
To: Ferruh Yigit
Cc: Akhil Goyal, dev, anoobj, matan, konstantin.ananyev, thomas,
andrew.rybchenko, rosen.xu, olivier.matz, david.marchand,
radu.nicolau, jerinj, mdr
On Mon, 7 Feb 2022 17:28:26 +0000
Ferruh Yigit <ferruh.yigit@intel.com> wrote:
> On 2/7/2022 5:23 PM, Stephen Hemminger wrote:
> > On Sat, 5 Feb 2022 03:43:33 +0530
> > Akhil Goyal <gakhil@marvell.com> wrote:
> >
> >> +/**
> >> + * @internal
> >> + * Register mbuf dynamic field and flag for IP reassembly incomplete case.
> >> + */
> >> +__rte_internal
> >> +int
> >> +rte_eth_ip_reass_dynfield_register(int *field_offset, int *flag);
> >
> > Maybe use RTE_INIT() constructor for this?
>
> Dynfiled should be registered only when users asks for the feature.
Right, but making the user ask can lead to errors; can it be done implicitly
on first use?
^ permalink raw reply [flat|nested] 184+ messages in thread
* RE: [EXT] Re: [PATCH v4 2/3] ethdev: add mbuf dynfield for incomplete IP reassembly
2022-02-07 18:01 ` Stephen Hemminger
@ 2022-02-07 18:28 ` Akhil Goyal
2022-02-07 19:08 ` Stephen Hemminger
0 siblings, 1 reply; 184+ messages in thread
From: Akhil Goyal @ 2022-02-07 18:28 UTC (permalink / raw)
To: Stephen Hemminger, Ferruh Yigit
Cc: dev, Anoob Joseph, matan, konstantin.ananyev, thomas,
andrew.rybchenko, rosen.xu, olivier.matz, david.marchand,
radu.nicolau, Jerin Jacob Kollanukkaran, mdr
> On Mon, 7 Feb 2022 17:28:26 +0000
> Ferruh Yigit <ferruh.yigit@intel.com> wrote:
>
> > On 2/7/2022 5:23 PM, Stephen Hemminger wrote:
> > > On Sat, 5 Feb 2022 03:43:33 +0530
> > > Akhil Goyal <gakhil@marvell.com> wrote:
> > >
> > >> +/**
> > >> + * @internal
> > >> + * Register mbuf dynamic field and flag for IP reassembly incomplete
> case.
> > >> + */
> > >> +__rte_internal
> > >> +int
> > >> +rte_eth_ip_reass_dynfield_register(int *field_offset, int *flag);
> > >
> > > Maybe use RTE_INIT() constructor for this?
> >
> > Dynfiled should be registered only when users asks for the feature.
>
> right but making the user ask can lead to errors, can it be done implicitly
> on first use.
Registering the dynfield is the responsibility of the PMD when the application asks for the feature.
So how can it lead to errors?
^ permalink raw reply [flat|nested] 184+ messages in thread
* Re: [EXT] Re: [PATCH v4 2/3] ethdev: add mbuf dynfield for incomplete IP reassembly
2022-02-07 18:28 ` [EXT] " Akhil Goyal
@ 2022-02-07 19:08 ` Stephen Hemminger
0 siblings, 0 replies; 184+ messages in thread
From: Stephen Hemminger @ 2022-02-07 19:08 UTC (permalink / raw)
To: Akhil Goyal
Cc: Ferruh Yigit, dev, Anoob Joseph, matan, konstantin.ananyev,
thomas, andrew.rybchenko, rosen.xu, olivier.matz, david.marchand,
radu.nicolau, Jerin Jacob Kollanukkaran, mdr
On Mon, 7 Feb 2022 18:28:03 +0000
Akhil Goyal <gakhil@marvell.com> wrote:
> > On Mon, 7 Feb 2022 17:28:26 +0000
> > Ferruh Yigit <ferruh.yigit@intel.com> wrote:
> >
> > > On 2/7/2022 5:23 PM, Stephen Hemminger wrote:
> > > > On Sat, 5 Feb 2022 03:43:33 +0530
> > > > Akhil Goyal <gakhil@marvell.com> wrote:
> > > >
> > > >> +/**
> > > >> + * @internal
> > > >> + * Register mbuf dynamic field and flag for IP reassembly incomplete
> > case.
> > > >> + */
> > > >> +__rte_internal
> > > >> +int
> > > >> +rte_eth_ip_reass_dynfield_register(int *field_offset, int *flag);
> > > >
> > > > Maybe use RTE_INIT() constructor for this?
> > >
> > > Dynfiled should be registered only when users asks for the feature.
> >
> > right but making the user ask can lead to errors, can it be done implicitly
> > on first use.
>
> Registering dynfield is responsibility of PMD when the application asks for the feature.
> So how can it lead to errors.
Sorry, forgot this is a PMD internal thing.
^ permalink raw reply [flat|nested] 184+ messages in thread
* Re: [PATCH v4 3/3] security: add IPsec option for IP reassembly
2022-02-04 22:13 ` [PATCH v4 3/3] security: add IPsec option for " Akhil Goyal
@ 2022-02-08 9:01 ` David Marchand
2022-02-08 9:18 ` [EXT] " Akhil Goyal
0 siblings, 1 reply; 184+ messages in thread
From: David Marchand @ 2022-02-08 9:01 UTC (permalink / raw)
To: Akhil Goyal
Cc: dev, Anoob Joseph, Matan Azrad, Ananyev, Konstantin,
Thomas Monjalon, Yigit, Ferruh, Andrew Rybchenko, Rosen Xu,
Olivier Matz, Radu Nicolau, Jerin Jacob Kollanukkaran,
Stephen Hemminger, Ray Kinsella, Dodji Seketeli
Hello Akhil,
On Fri, Feb 4, 2022 at 11:14 PM Akhil Goyal <gakhil@marvell.com> wrote:
>
> A new option is added in IPsec to enable and attempt reassembly
> of inbound packets.
First, about extending this structure.
Copying the header:
/** Reserved bit fields for future extension
*
* User should ensure reserved_opts is cleared as it may change in
* subsequent releases to support new options.
*
* Note: Reduce number of bits in reserved_opts for every new option.
*/
uint32_t reserved_opts : 18;
I did not follow the introduction of the reserved_opts field, but
writing this comment in the API only is weak.
Why can't the rte_security API enforce reserved_opts == 0 (like in
rte_security_session_create)?
>
> Signed-off-by: Akhil Goyal <gakhil@marvell.com>
> Change-Id: I6f66f0b5a659550976a32629130594070cb16cb1
^^^
Internal tag, please remove.
> ---
> devtools/libabigail.abignore | 14 ++++++++++++++
> lib/security/rte_security.h | 12 +++++++++++-
> 2 files changed, 25 insertions(+), 1 deletion(-)
>
> diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
> index 4b676f317d..3bd39042e8 100644
> --- a/devtools/libabigail.abignore
> +++ b/devtools/libabigail.abignore
> @@ -11,3 +11,17 @@
> ; Ignore generated PMD information strings
> [suppress_variable]
> name_regexp = _pmd_info$
> +
> +; Ignore fields inserted in place of reserved_opts of rte_security_ipsec_sa_options
> +[suppress_type]
> + name = rte_ipsec_sa_prm
> + name = rte_security_ipsec_sa_options
> + has_data_member_inserted_between = {offset_of(reserved_opts), end}
> +
> +[suppress_type]
> + name = rte_security_capability
> + has_data_member_inserted_between = {offset_of(reserved_opts), (offset_of(reserved_opts) + 18)}
> +
> +[suppress_type]
> + name = rte_security_session_conf
> + has_data_member_inserted_between = {offset_of(reserved_opts), (offset_of(reserved_opts) + 18)}
Now, about the suppression rule, I don't understand the intention of
those 3 rules.
I would simply suppress modifications (after reserved_opts) to the
rte_security_ipsec_sa_options struct.
Like:
; Ignore fields inserted in place of reserved_opts of
rte_security_ipsec_sa_options
[suppress_type]
name = rte_security_ipsec_sa_options
has_data_member_inserted_between = {offset_of(reserved_opts), end}
--
David Marchand
^ permalink raw reply [flat|nested] 184+ messages in thread
* RE: [EXT] Re: [PATCH v4 3/3] security: add IPsec option for IP reassembly
2022-02-08 9:01 ` David Marchand
@ 2022-02-08 9:18 ` Akhil Goyal
2022-02-08 9:27 ` David Marchand
0 siblings, 1 reply; 184+ messages in thread
From: Akhil Goyal @ 2022-02-08 9:18 UTC (permalink / raw)
To: David Marchand
Cc: dev, Anoob Joseph, Matan Azrad, Ananyev, Konstantin,
Thomas Monjalon, Yigit, Ferruh, Andrew Rybchenko, Rosen Xu,
Olivier Matz, Radu Nicolau, Jerin Jacob Kollanukkaran,
Stephen Hemminger, Ray Kinsella, Dodji Seketeli
> Hello Akhil,
>
>
> On Fri, Feb 4, 2022 at 11:14 PM Akhil Goyal <gakhil@marvell.com> wrote:
> >
> > A new option is added in IPsec to enable and attempt reassembly
> > of inbound packets.
>
> First, about extending this structure.
>
> Copying the header:
>
> /** Reserved bit fields for future extension
> *
> * User should ensure reserved_opts is cleared as it may change in
> * subsequent releases to support new options.
> *
> * Note: Reduce number of bits in reserved_opts for every new option.
> */
> uint32_t reserved_opts : 18;
>
> I did not follow the introduction of the reserved_opts field, but
> writing this comment in the API only is weak.
> Why can't the rte_security API enforce reserved_opts == 0 (like in
> rte_security_session_create)?
>
This was discussed here.
http://patches.dpdk.org/project/dpdk/patch/20211008204516.3497060-3-gakhil@marvell.com/
rte_security_ipsec_sa_options is used in multiple places, as listed below in the abignore file.
Checking a particular field in each of the APIs does not make sense to me.
>
>
> >
> > Signed-off-by: Akhil Goyal <gakhil@marvell.com>
> > Change-Id: I6f66f0b5a659550976a32629130594070cb16cb1
> ^^^
> Internal tag, please remove.
>
Yes, I missed that; will remove.
>
> > ---
> > devtools/libabigail.abignore | 14 ++++++++++++++
> > lib/security/rte_security.h | 12 +++++++++++-
> > 2 files changed, 25 insertions(+), 1 deletion(-)
> >
> > diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
> > index 4b676f317d..3bd39042e8 100644
> > --- a/devtools/libabigail.abignore
> > +++ b/devtools/libabigail.abignore
> > @@ -11,3 +11,17 @@
> > ; Ignore generated PMD information strings
> > [suppress_variable]
> > name_regexp = _pmd_info$
> > +
> > +; Ignore fields inserted in place of reserved_opts of
> rte_security_ipsec_sa_options
> > +[suppress_type]
> > + name = rte_ipsec_sa_prm
> > + name = rte_security_ipsec_sa_options
> > + has_data_member_inserted_between = {offset_of(reserved_opts), end}
> > +
> > +[suppress_type]
> > + name = rte_security_capability
> > + has_data_member_inserted_between = {offset_of(reserved_opts),
> (offset_of(reserved_opts) + 18)}
> > +
> > +[suppress_type]
> > + name = rte_security_session_conf
> > + has_data_member_inserted_between = {offset_of(reserved_opts),
> (offset_of(reserved_opts) + 18)}
>
> Now, about the suppression rule, I don't understand the intention of
> those 3 rules.
>
> I would simply suppress modifications (after reserved_opts) to the
> rte_security_ipsec_sa_options struct.
> Like:
>
> ; Ignore fields inserted in place of reserved_opts of
> rte_security_ipsec_sa_options
> [suppress_type]
> name = rte_security_ipsec_sa_options
> has_data_member_inserted_between = {offset_of(reserved_opts), end}
>
I tried this in the first place, but the ABI check was complaining about other structures which
include rte_security_ipsec_sa_options. So I had to add suppressions for those as well.
Can you try at your end?
^ permalink raw reply [flat|nested] 184+ messages in thread
* Re: [EXT] Re: [PATCH v4 3/3] security: add IPsec option for IP reassembly
2022-02-08 9:18 ` [EXT] " Akhil Goyal
@ 2022-02-08 9:27 ` David Marchand
2022-02-08 10:45 ` Akhil Goyal
0 siblings, 1 reply; 184+ messages in thread
From: David Marchand @ 2022-02-08 9:27 UTC (permalink / raw)
To: Akhil Goyal
Cc: dev, Anoob Joseph, Matan Azrad, Ananyev, Konstantin,
Thomas Monjalon, Yigit, Ferruh, Andrew Rybchenko, Rosen Xu,
Olivier Matz, Radu Nicolau, Jerin Jacob Kollanukkaran,
Stephen Hemminger, Ray Kinsella, Dodji Seketeli
On Tue, Feb 8, 2022 at 10:19 AM Akhil Goyal <gakhil@marvell.com> wrote:
>
> > Hello Akhil,
> >
> >
> > On Fri, Feb 4, 2022 at 11:14 PM Akhil Goyal <gakhil@marvell.com> wrote:
> > >
> > > A new option is added in IPsec to enable and attempt reassembly
> > > of inbound packets.
> >
> > First, about extending this structure.
> >
> > Copying the header:
> >
> > /** Reserved bit fields for future extension
> > *
> > * User should ensure reserved_opts is cleared as it may change in
> > * subsequent releases to support new options.
> > *
> > * Note: Reduce number of bits in reserved_opts for every new option.
> > */
> > uint32_t reserved_opts : 18;
> >
> > I did not follow the introduction of the reserved_opts field, but
> > writing this comment in the API only is weak.
> > Why can't the rte_security API enforce reserved_opts == 0 (like in
> > rte_security_session_create)?
> >
> This was discussed here.
> http://patches.dpdk.org/project/dpdk/patch/20211008204516.3497060-3-gakhil@marvell.com/
> rte_security_ipsec_sa_options is being used at multiple places as listed below in abiignore.
> Checking a particular field in each of the API does not make sense to me.
It's strange to me that a user may pass this structure as input to
multiple functions.
But if that's how the security lib works, OK.
>
> >
> >
> > >
> > > Signed-off-by: Akhil Goyal <gakhil@marvell.com>
> > > Change-Id: I6f66f0b5a659550976a32629130594070cb16cb1
> > ^^^
> > Internal tag, please remove.
> >
> Yes, missed that will remove.
> >
> > > ---
> > > devtools/libabigail.abignore | 14 ++++++++++++++
> > > lib/security/rte_security.h | 12 +++++++++++-
> > > 2 files changed, 25 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
> > > index 4b676f317d..3bd39042e8 100644
> > > --- a/devtools/libabigail.abignore
> > > +++ b/devtools/libabigail.abignore
> > > @@ -11,3 +11,17 @@
> > > ; Ignore generated PMD information strings
> > > [suppress_variable]
> > > name_regexp = _pmd_info$
> > > +
> > > +; Ignore fields inserted in place of reserved_opts of
> > rte_security_ipsec_sa_options
> > > +[suppress_type]
> > > + name = rte_ipsec_sa_prm
> > > + name = rte_security_ipsec_sa_options
> > > + has_data_member_inserted_between = {offset_of(reserved_opts), end}
> > > +
> > > +[suppress_type]
> > > + name = rte_security_capability
> > > + has_data_member_inserted_between = {offset_of(reserved_opts),
> > (offset_of(reserved_opts) + 18)}
> > > +
> > > +[suppress_type]
> > > + name = rte_security_session_conf
> > > + has_data_member_inserted_between = {offset_of(reserved_opts),
> > (offset_of(reserved_opts) + 18)}
> >
> > Now, about the suppression rule, I don't understand the intention of
> > those 3 rules.
> >
> > I would simply suppress modifications (after reserved_opts) to the
> > rte_security_ipsec_sa_options struct.
> > Like:
> >
> > ; Ignore fields inserted in place of reserved_opts of
> > rte_security_ipsec_sa_options
> > [suppress_type]
> > name = rte_security_ipsec_sa_options
> > has_data_member_inserted_between = {offset_of(reserved_opts), end}
> >
> I tried this in the first place but abi check was complaining in other structures which included
> rte_security_ipsec_sa_options. So I had to add suppression for those as well.
> Can you try at your end?
I tried before suggesting, and it works with a single rule on this structure.
I'm using libabigail current master, which version are you using so I
can try with the same?
--
David Marchand
^ permalink raw reply [flat|nested] 184+ messages in thread
* RE: [EXT] Re: [PATCH v4 3/3] security: add IPsec option for IP reassembly
2022-02-08 9:27 ` David Marchand
@ 2022-02-08 10:45 ` Akhil Goyal
2022-02-08 13:19 ` Akhil Goyal
0 siblings, 1 reply; 184+ messages in thread
From: Akhil Goyal @ 2022-02-08 10:45 UTC (permalink / raw)
To: David Marchand
Cc: dev, Anoob Joseph, Matan Azrad, Ananyev, Konstantin,
Thomas Monjalon, Yigit, Ferruh, Andrew Rybchenko, Rosen Xu,
Olivier Matz, Radu Nicolau, Jerin Jacob Kollanukkaran,
Stephen Hemminger, Ray Kinsella, Dodji Seketeli
> > > > diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
> > > > index 4b676f317d..3bd39042e8 100644
> > > > --- a/devtools/libabigail.abignore
> > > > +++ b/devtools/libabigail.abignore
> > > > @@ -11,3 +11,17 @@
> > > > ; Ignore generated PMD information strings
> > > > [suppress_variable]
> > > > name_regexp = _pmd_info$
> > > > +
> > > > +; Ignore fields inserted in place of reserved_opts of
> > > rte_security_ipsec_sa_options
> > > > +[suppress_type]
> > > > + name = rte_ipsec_sa_prm
> > > > + name = rte_security_ipsec_sa_options
> > > > + has_data_member_inserted_between = {offset_of(reserved_opts),
> end}
> > > > +
> > > > +[suppress_type]
> > > > + name = rte_security_capability
> > > > + has_data_member_inserted_between = {offset_of(reserved_opts),
> > > (offset_of(reserved_opts) + 18)}
> > > > +
> > > > +[suppress_type]
> > > > + name = rte_security_session_conf
> > > > + has_data_member_inserted_between = {offset_of(reserved_opts),
> > > (offset_of(reserved_opts) + 18)}
> > >
> > > Now, about the suppression rule, I don't understand the intention of
> > > those 3 rules.
> > >
> > > I would simply suppress modifications (after reserved_opts) to the
> > > rte_security_ipsec_sa_options struct.
> > > Like:
> > >
> > > ; Ignore fields inserted in place of reserved_opts of
> > > rte_security_ipsec_sa_options
> > > [suppress_type]
> > > name = rte_security_ipsec_sa_options
> > > has_data_member_inserted_between = {offset_of(reserved_opts), end}
> > >
> > I tried this in the first place but abi check was complaining in other structures
> which included
> > rte_security_ipsec_sa_options. So I had to add suppression for those as well.
> > Can you try at your end?
>
> I tried before suggesting, and it works with a single rule on this structure.
>
> I'm using libabigail current master, which version are you using so I
> can try with the same?
>
I am currently using version 1.6. I will try with the latest version.
$ abidiff --version
abidiff: 1.6.0
and I get the following issue after removing the last two suppression rules.
Functions changes summary: 0 Removed, 1 Changed (8 filtered out), 0 Added functions
Variables changes summary: 0 Removed, 0 Changed, 0 Added variable
1 function with some indirect sub-type change:
[C]'function const rte_security_capability* rte_security_capabilities_get(rte_security_ctx*)' at rte_security.c:158:1 has some indirect sub-type changes:
return type changed:
in pointed to type 'const rte_security_capability':
in unqualified underlying type 'struct rte_security_capability' at rte_security.h:808:1:
type size hasn't changed
1 data member change:
parameter 1 of type 'rte_security_ctx*' has sub-type changes:
in pointed to type 'struct rte_security_ctx' at rte_security.h:72:1:
type size hasn't changed
1 data member change:
type of 'const rte_security_ops* rte_security_ctx::ops' changed:
in pointed to type 'const rte_security_ops':
in unqualified underlying type 'struct rte_security_ops' at rte_security_driver.h:140:1:
type size hasn't changed
1 data member changes (2 filtered):
type of 'security_session_create_t rte_security_ops::session_create' changed:
underlying type 'int (void*, rte_security_session_conf*, rte_security_session*, rte_mempool*)*' changed:
in pointed to type 'function type int (void*, rte_security_session_conf*, rte_security_session*, rte_mempool*)':
parameter 2 of type 'rte_security_session_conf*' has sub-type changes:
in pointed to type 'struct rte_security_session_conf' at rte_security.h:502:1:
type size hasn't changed
1 data member change:
* RE: [EXT] Re: [PATCH v4 3/3] security: add IPsec option for IP reassembly
2022-02-08 10:45 ` Akhil Goyal
@ 2022-02-08 13:19 ` Akhil Goyal
2022-02-08 19:55 ` David Marchand
0 siblings, 1 reply; 184+ messages in thread
From: Akhil Goyal @ 2022-02-08 13:19 UTC (permalink / raw)
To: David Marchand
Cc: dev, Anoob Joseph, Matan Azrad, Ananyev, Konstantin,
Thomas Monjalon, Yigit, Ferruh, Andrew Rybchenko, Rosen Xu,
Olivier Matz, Radu Nicolau, Jerin Jacob Kollanukkaran,
Stephen Hemminger, Ray Kinsella, Dodji Seketeli
Hi David,
> > > > > diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
> > > > > index 4b676f317d..3bd39042e8 100644
> > > > > --- a/devtools/libabigail.abignore
> > > > > +++ b/devtools/libabigail.abignore
> > > > > @@ -11,3 +11,17 @@
> > > > > ; Ignore generated PMD information strings
> > > > > [suppress_variable]
> > > > > name_regexp = _pmd_info$
> > > > > +
> > > > > +; Ignore fields inserted in place of reserved_opts of
> > > > rte_security_ipsec_sa_options
> > > > > +[suppress_type]
> > > > > + name = rte_ipsec_sa_prm
> > > > > + name = rte_security_ipsec_sa_options
> > > > > + has_data_member_inserted_between = {offset_of(reserved_opts),
> > end}
> > > > > +
> > > > > +[suppress_type]
> > > > > + name = rte_security_capability
> > > > > + has_data_member_inserted_between = {offset_of(reserved_opts),
> > > > (offset_of(reserved_opts) + 18)}
> > > > > +
> > > > > +[suppress_type]
> > > > > + name = rte_security_session_conf
> > > > > + has_data_member_inserted_between = {offset_of(reserved_opts),
> > > > (offset_of(reserved_opts) + 18)}
> > > >
> > > > Now, about the suppression rule, I don't understand the intention of
> > > > those 3 rules.
> > > >
> > > > I would simply suppress modifications (after reserved_opts) to the
> > > > rte_security_ipsec_sa_options struct.
> > > > Like:
> > > >
> > > > ; Ignore fields inserted in place of reserved_opts of
> > > > rte_security_ipsec_sa_options
> > > > [suppress_type]
> > > > name = rte_security_ipsec_sa_options
> > > > has_data_member_inserted_between = {offset_of(reserved_opts),
> end}
> > > >
> > > I tried this in the first place but abi check was complaining in other structures
> > which included
> > > rte_security_ipsec_sa_options. So I had to add suppression for those as well.
> > > Can you try at your end?
> >
> > I tried before suggesting, and it works with a single rule on this structure.
> >
> > I'm using libabigail current master, which version are you using so I
> > can try with the same?
> >
> I am currently using 1.6 version. I will try with latest version.
> $ abidiff --version
> abidiff: 1.6.0
>
It seems the latest version, 2.0, is not compatible with Ubuntu 20.04;
it does not compile.
Can you check with version 1.6.0?
* Re: [EXT] Re: [PATCH v4 3/3] security: add IPsec option for IP reassembly
2022-02-08 13:19 ` Akhil Goyal
@ 2022-02-08 19:55 ` David Marchand
2022-02-08 20:01 ` Akhil Goyal
0 siblings, 1 reply; 184+ messages in thread
From: David Marchand @ 2022-02-08 19:55 UTC (permalink / raw)
To: Akhil Goyal, Dodji Seketeli
Cc: dev, Anoob Joseph, Matan Azrad, Ananyev, Konstantin,
Thomas Monjalon, Yigit, Ferruh, Andrew Rybchenko, Rosen Xu,
Olivier Matz, Radu Nicolau, Jerin Jacob Kollanukkaran,
Stephen Hemminger, Ray Kinsella
On Tue, Feb 8, 2022 at 2:19 PM Akhil Goyal <gakhil@marvell.com> wrote:
> > > > I tried this in the first place but abi check was complaining in other structures
> > > which included
> > > > rte_security_ipsec_sa_options. So I had to add suppression for those as well.
> > > > Can you try at your end?
> > >
> > > I tried before suggesting, and it works with a single rule on this structure.
> > >
> > > I'm using libabigail current master, which version are you using so I
> > > can try with the same?
> > >
> > I am currently using 1.6 version. I will try with latest version.
> > $ abidiff --version
> > abidiff: 1.6.0
> >
> It seems the latest version 2.0 is not compatible with Ubuntu 20.04.
> It is not getting compiled.
I am using the HEAD of libabigail master branch, so maybe something
got fixed between 2.0 and the current master.
> Can you check with 1.6.0 version?
I tried 1.6 in GHA (Ubuntu 18.04), and I can reproduce the warnings
you reported.
But in the end, we use 1.8 in GHA:
https://git.dpdk.org/dpdk/tree/.github/workflows/build.yml#n23
The simplest rule (on rte_security_ipsec_sa_options only) passes fine
with this version of libabigail:
https://github.com/david-marchand/dpdk/runs/5109221298?check_suite_focus=true
--
David Marchand
* RE: [EXT] Re: [PATCH v4 3/3] security: add IPsec option for IP reassembly
2022-02-08 19:55 ` David Marchand
@ 2022-02-08 20:01 ` Akhil Goyal
0 siblings, 0 replies; 184+ messages in thread
From: Akhil Goyal @ 2022-02-08 20:01 UTC (permalink / raw)
To: David Marchand, Dodji Seketeli
Cc: dev, Anoob Joseph, Matan Azrad, Ananyev, Konstantin,
Thomas Monjalon, Yigit, Ferruh, Andrew Rybchenko, Rosen Xu,
Olivier Matz, Radu Nicolau, Jerin Jacob Kollanukkaran,
Stephen Hemminger, Ray Kinsella
Hi David,
> On Tue, Feb 8, 2022 at 2:19 PM Akhil Goyal <gakhil@marvell.com> wrote:
> > > > > I tried this in the first place but abi check was complaining in other
> structures
> > > > which included
> > > > > rte_security_ipsec_sa_options. So I had to add suppression for those as
> well.
> > > > > Can you try at your end?
> > > >
> > > > I tried before suggesting, and it works with a single rule on this structure.
> > > >
> > > > I'm using libabigail current master, which version are you using so I
> > > > can try with the same?
> > > >
> > > I am currently using 1.6 version. I will try with latest version.
> > > $ abidiff --version
> > > abidiff: 1.6.0
> > >
> > It seems the latest version 2.0 is not compatible with Ubuntu 20.04.
> > It is not getting compiled.
>
> I am using the HEAD of libabigail master branch, so maybe something
> got fixed between 2.0 and the current master.
>
>
> > Can you check with 1.6.0 version?
>
> I tried 1.6 in GHA (Ubuntu 18.04), and I can reproduce the warnings
> you reported.
>
> But in the end, we use 1.8 in GHA:
> https://git.dpdk.org/dpdk/tree/.github/workflows/build.yml#n23
>
> The simplest rule (on rte_security_ipsec_sa_options only) passes fine
> with this version of libabigail:
> https://github.com/david-marchand/dpdk/runs/5109221298?check_suite_focus=true
Thanks for trying it out. I will remove the last two rules and send the next version.
* [PATCH v5 0/3] ethdev: introduce IP reassembly offload
2022-02-04 22:13 ` [PATCH v4 0/3] " Akhil Goyal
` (2 preceding siblings ...)
2022-02-04 22:13 ` [PATCH v4 3/3] security: add IPsec option for " Akhil Goyal
@ 2022-02-08 20:11 ` Akhil Goyal
2022-02-08 20:11 ` [PATCH v5 1/3] " Akhil Goyal
` (3 more replies)
3 siblings, 4 replies; 184+ messages in thread
From: Akhil Goyal @ 2022-02-08 20:11 UTC (permalink / raw)
To: dev
Cc: anoobj, matan, konstantin.ananyev, thomas, ferruh.yigit,
andrew.rybchenko, rosen.xu, olivier.matz, david.marchand,
radu.nicolau, jerinj, stephen, mdr, Akhil Goyal
As discussed in the RFC[1] sent in 21.11, a new offload is
introduced in ethdev for IP reassembly.
This patchset adds the IP reassembly Rx offload.
Currently, the offload is tested along with inline IPsec processing.
It can also be used as a standalone offload without IPsec, if there
is hardware available to test it.
The patchset is tested on cnxk platform. The driver implementation
and a test app are added as separate patchsets.[2][3]
[1]: http://patches.dpdk.org/project/dpdk/patch/20210823100259.1619886-1-gakhil@marvell.com/
[2]: APP: http://patches.dpdk.org/project/dpdk/list/?series=21284
[3]: PMD: http://patches.dpdk.org/project/dpdk/list/?series=21285
Newer versions of the app and PMD will be sent once the library changes
are acked.
Changes in v5:
- updated Doxygen comments.(Ferruh)
- Added release notes.
- updated libabigail suppress rules.(David)
Changes in v4:
- removed rte_eth_dev_info update for capability (Ferruh)
- removed Rx offload flag (Ferruh)
- added capability_get() (Ferruh)
- moved dynfield and dynflag namedefines in rte_mbuf_dyn.h (Ferruh)
Changes in v3:
- incorporated comments from Andrew and Stephen Hemminger
Changes in v2:
- added ABI ignore exceptions for modifications in reserved fields.
Added a crude way to suppress the rte_security and rte_ipsec ABI issue.
Please suggest a better way.
- incorporated Konstantin's comment for extra checks in new API
introduced.
- converted static mbuf ol_flag to mbuf dynflag (Konstantin)
- added a get API for reassembly configuration (Konstantin)
- Fixed checkpatch issues.
- Dynfield is NOT split into 2 parts as it would cause an extra fetch in
case of IP reassembly failure.
- Application patches are split into a separate series.
Akhil Goyal (3):
ethdev: introduce IP reassembly offload
ethdev: add mbuf dynfield for incomplete IP reassembly
security: add IPsec option for IP reassembly
devtools/libabigail.abignore | 5 +
doc/guides/nics/features.rst | 13 +++
doc/guides/nics/features/default.ini | 1 +
doc/guides/rel_notes/release_22_03.rst | 6 ++
lib/ethdev/ethdev_driver.h | 63 +++++++++++++
lib/ethdev/rte_ethdev.c | 124 ++++++++++++++++++++++++
lib/ethdev/rte_ethdev.h | 126 +++++++++++++++++++++++++
lib/ethdev/version.map | 6 ++
lib/mbuf/rte_mbuf_dyn.h | 9 ++
lib/security/rte_security.h | 15 ++-
10 files changed, 367 insertions(+), 1 deletion(-)
--
2.25.1
* [PATCH v5 1/3] ethdev: introduce IP reassembly offload
2022-02-08 20:11 ` [PATCH v5 0/3] ethdev: introduce IP reassembly offload Akhil Goyal
@ 2022-02-08 20:11 ` Akhil Goyal
2022-02-08 20:11 ` [PATCH v5 2/3] ethdev: add mbuf dynfield for incomplete IP reassembly Akhil Goyal
` (2 subsequent siblings)
3 siblings, 0 replies; 184+ messages in thread
From: Akhil Goyal @ 2022-02-08 20:11 UTC (permalink / raw)
To: dev
Cc: anoobj, matan, konstantin.ananyev, thomas, ferruh.yigit,
andrew.rybchenko, rosen.xu, olivier.matz, david.marchand,
radu.nicolau, jerinj, stephen, mdr, Akhil Goyal
IP Reassembly is a costly operation if it is done in software.
The operation becomes even costlier if the IP fragments are encrypted.
However, if it is offloaded to HW, it can considerably save application
cycles.
Hence, a new offload feature is exposed in eth_dev ops for devices which can
attempt IP reassembly of packets in hardware.
- rte_eth_ip_reassembly_capability_get() - to get the maximum values
of reassembly configuration which can be set.
- rte_eth_ip_reassembly_conf_set() - to set IP reassembly configuration
and to enable the feature in the PMD (to be called before rte_eth_dev_start()).
- rte_eth_ip_reassembly_conf_get() - to get the current configuration
set in PMD.
Now when the offload is enabled using rte_eth_ip_reassembly_conf_set(),
the resulting reassembled IP packet would be a typical segmented mbuf in
case of success.
And if reassembly of the IP fragments fails or is incomplete (e.g. if
fragments do not arrive before the reass_timeout, or fragments overlap),
the mbuf dynamic flags can be updated by the PMD. This is added in a
subsequent patch.
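The call flow above (query the capability first, then set a configuration no larger than the capability, before rte_eth_dev_start()) can be sketched with a small, self-contained simulation. The struct below mirrors rte_eth_ip_reassembly_params from this patch; the clamp helper is purely illustrative and not part of the proposed API:

```c
#include <stdint.h>

/* Mirrors struct rte_eth_ip_reassembly_params from this patch. */
struct ip_reassembly_params {
	uint32_t timeout_ms; /* max time to wait for remaining fragments */
	uint16_t max_frags;  /* max fragments per reassembled packet */
	uint16_t flags;      /* RTE_ETH_DEV_REASSEMBLY_F_xxx bits */
};

/*
 * Illustrative helper (not part of the API): derive a valid
 * configuration from the PMD capability, since conf values passed to
 * rte_eth_ip_reassembly_conf_set() must not exceed what
 * rte_eth_ip_reassembly_capability_get() reported.
 */
static void
clamp_to_capability(struct ip_reassembly_params *conf,
		    const struct ip_reassembly_params *capa)
{
	if (conf->timeout_ms > capa->timeout_ms)
		conf->timeout_ms = capa->timeout_ms;
	if (conf->max_frags > capa->max_frags)
		conf->max_frags = capa->max_frags;
	conf->flags &= capa->flags; /* keep only PMD-supported flags */
}
```

In a real application, the clamped conf would then be passed to rte_eth_ip_reassembly_conf_set() before starting the port.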
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
---
doc/guides/nics/features.rst | 13 ++++
doc/guides/nics/features/default.ini | 1 +
doc/guides/rel_notes/release_22_03.rst | 6 ++
lib/ethdev/ethdev_driver.h | 55 ++++++++++++++
lib/ethdev/rte_ethdev.c | 96 ++++++++++++++++++++++++
lib/ethdev/rte_ethdev.h | 100 +++++++++++++++++++++++++
lib/ethdev/version.map | 5 ++
7 files changed, 276 insertions(+)
diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
index 27be2d2576..e6112e16f4 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -602,6 +602,19 @@ Supports inner packet L4 checksum.
``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM``.
+.. _nic_features_ip_reassembly:
+
+IP reassembly
+-------------
+
+Supports IP reassembly in hardware.
+
+* **[provides] eth_dev_ops**: ``ip_reassembly_capability_get``,
+ ``ip_reassembly_conf_get``, ``ip_reassembly_conf_set``.
+* **[related] API**: ``rte_eth_ip_reassembly_capability_get()``,
+ ``rte_eth_ip_reassembly_conf_get()``, ``rte_eth_ip_reassembly_conf_set()``.
+
+
.. _nic_features_shared_rx_queue:
Shared Rx queue
diff --git a/doc/guides/nics/features/default.ini b/doc/guides/nics/features/default.ini
index c96a52b58e..22750dfacb 100644
--- a/doc/guides/nics/features/default.ini
+++ b/doc/guides/nics/features/default.ini
@@ -52,6 +52,7 @@ Timestamp offload =
MACsec offload =
Inner L3 checksum =
Inner L4 checksum =
+IP reassembly =
Packet type parsing =
Timesync =
Rx descriptor status =
diff --git a/doc/guides/rel_notes/release_22_03.rst b/doc/guides/rel_notes/release_22_03.rst
index 746f50e84f..03ec8d6faa 100644
--- a/doc/guides/rel_notes/release_22_03.rst
+++ b/doc/guides/rel_notes/release_22_03.rst
@@ -55,6 +55,12 @@ New Features
Also, make sure to start the actual text at the margin.
=======================================================
+* **Added IP reassembly Ethernet offload API, to get and set config.**
+
+ Added IP reassembly offload APIs which provide functions to query IP
+ reassembly capabilities, to set configuration and to get currently set
+ reassembly configuration.
+
* **Updated Cisco enic driver.**
* Added rte_flow support for matching GENEVE packets.
diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
index d95605a355..4acf75781d 100644
--- a/lib/ethdev/ethdev_driver.h
+++ b/lib/ethdev/ethdev_driver.h
@@ -990,6 +990,54 @@ typedef int (*eth_representor_info_get_t)(struct rte_eth_dev *dev,
typedef int (*eth_rx_metadata_negotiate_t)(struct rte_eth_dev *dev,
uint64_t *features);
+/**
+ * @internal
+ * Get IP reassembly offload capability of a PMD.
+ *
+ * @param dev
+ * Port (ethdev) handle
+ *
+ * @param[out] conf
+ * IP reassembly capability supported by the PMD
+ *
+ * @return
+ * Negative errno value on error, zero otherwise
+ */
+typedef int (*eth_ip_reassembly_capability_get_t)(struct rte_eth_dev *dev,
+ struct rte_eth_ip_reassembly_params *capa);
+
+/**
+ * @internal
+ * Get IP reassembly offload configuration parameters set in PMD.
+ *
+ * @param dev
+ * Port (ethdev) handle
+ *
+ * @param[out] conf
+ * Configuration parameters for IP reassembly.
+ *
+ * @return
+ * Negative errno value on error, zero otherwise
+ */
+typedef int (*eth_ip_reassembly_conf_get_t)(struct rte_eth_dev *dev,
+ struct rte_eth_ip_reassembly_params *conf);
+
+/**
+ * @internal
+ * Set configuration parameters for enabling IP reassembly offload in hardware.
+ *
+ * @param dev
+ * Port (ethdev) handle
+ *
+ * @param[in] conf
+ * Configuration parameters for IP reassembly.
+ *
+ * @return
+ * Negative errno value on error, zero otherwise
+ */
+typedef int (*eth_ip_reassembly_conf_set_t)(struct rte_eth_dev *dev,
+ const struct rte_eth_ip_reassembly_params *conf);
+
/**
* @internal A structure containing the functions exported by an Ethernet driver.
*/
@@ -1186,6 +1234,13 @@ struct eth_dev_ops {
* kinds of metadata to the PMD
*/
eth_rx_metadata_negotiate_t rx_metadata_negotiate;
+
+ /** Get IP reassembly capability */
+ eth_ip_reassembly_capability_get_t ip_reassembly_capability_get;
+ /** Get IP reassembly configuration */
+ eth_ip_reassembly_conf_get_t ip_reassembly_conf_get;
+ /** Set IP reassembly configuration */
+ eth_ip_reassembly_conf_set_t ip_reassembly_conf_set;
};
/**
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 29e21ad580..6b37cffd07 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -6474,6 +6474,102 @@ rte_eth_rx_metadata_negotiate(uint16_t port_id, uint64_t *features)
(*dev->dev_ops->rx_metadata_negotiate)(dev, features));
}
+int
+rte_eth_ip_reassembly_capability_get(uint16_t port_id,
+ struct rte_eth_ip_reassembly_params *reassembly_capa)
+{
+ struct rte_eth_dev *dev;
+
+ RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+ dev = &rte_eth_devices[port_id];
+
+ if (dev->data->dev_configured == 0) {
+ RTE_ETHDEV_LOG(ERR,
+ "Device with port_id=%u is not configured.\n"
+ "Cannot get IP reassembly capability\n",
+ port_id);
+ return -EINVAL;
+ }
+
+ if (reassembly_capa == NULL) {
+ RTE_ETHDEV_LOG(ERR, "Cannot get reassembly capability to NULL");
+ return -EINVAL;
+ }
+
+ RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->ip_reassembly_capability_get,
+ -ENOTSUP);
+ memset(reassembly_capa, 0, sizeof(struct rte_eth_ip_reassembly_params));
+
+ return eth_err(port_id, (*dev->dev_ops->ip_reassembly_capability_get)
+ (dev, reassembly_capa));
+}
+
+int
+rte_eth_ip_reassembly_conf_get(uint16_t port_id,
+ struct rte_eth_ip_reassembly_params *conf)
+{
+ struct rte_eth_dev *dev;
+
+ RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+ dev = &rte_eth_devices[port_id];
+
+ if (dev->data->dev_configured == 0) {
+ RTE_ETHDEV_LOG(ERR,
+ "Device with port_id=%u is not configured.\n"
+ "Cannot get IP reassembly configuration\n",
+ port_id);
+ return -EINVAL;
+ }
+
+ if (conf == NULL) {
+ RTE_ETHDEV_LOG(ERR, "Cannot get reassembly info to NULL");
+ return -EINVAL;
+ }
+
+ RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->ip_reassembly_conf_get,
+ -ENOTSUP);
+ memset(conf, 0, sizeof(struct rte_eth_ip_reassembly_params));
+ return eth_err(port_id,
+ (*dev->dev_ops->ip_reassembly_conf_get)(dev, conf));
+}
+
+int
+rte_eth_ip_reassembly_conf_set(uint16_t port_id,
+ const struct rte_eth_ip_reassembly_params *conf)
+{
+ struct rte_eth_dev *dev;
+
+ RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+ dev = &rte_eth_devices[port_id];
+
+ if (dev->data->dev_configured == 0) {
+ RTE_ETHDEV_LOG(ERR,
+ "Device with port_id=%u is not configured.\n"
+ "Cannot set IP reassembly configuration",
+ port_id);
+ return -EINVAL;
+ }
+
+ if (dev->data->dev_started != 0) {
+ RTE_ETHDEV_LOG(ERR,
+ "Device with port_id=%u started,\n"
+ "cannot configure IP reassembly params.\n",
+ port_id);
+ return -EINVAL;
+ }
+
+ if (conf == NULL) {
+ RTE_ETHDEV_LOG(ERR,
+ "Invalid IP reassembly configuration (NULL)\n");
+ return -EINVAL;
+ }
+
+ RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->ip_reassembly_conf_set,
+ -ENOTSUP);
+ return eth_err(port_id,
+ (*dev->dev_ops->ip_reassembly_conf_set)(dev, conf));
+}
+
RTE_LOG_REGISTER_DEFAULT(rte_eth_dev_logtype, INFO);
RTE_INIT(ethdev_init_telemetry)
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index 147cc1ced3..dd416fda37 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -5202,6 +5202,106 @@ int rte_eth_representor_info_get(uint16_t port_id,
__rte_experimental
int rte_eth_rx_metadata_negotiate(uint16_t port_id, uint64_t *features);
+/* Flag to offload IP reassembly for IPv4 packets. */
+#define RTE_ETH_DEV_REASSEMBLY_F_IPV4 (RTE_BIT32(0))
+/* Flag to offload IP reassembly for IPv6 packets. */
+#define RTE_ETH_DEV_REASSEMBLY_F_IPV6 (RTE_BIT32(1))
+/**
+ * A structure used to get/set IP reassembly configuration. It is also used
+ * to get the maximum capability values that a PMD can support.
+ *
+ * If rte_eth_ip_reassembly_capability_get() returns 0, IP reassembly can be
+ * enabled using rte_eth_ip_reassembly_conf_set() and params values lower than
+ * capability params can be set in the PMD.
+ */
+struct rte_eth_ip_reassembly_params {
+ /** Maximum time in ms which PMD can wait for other fragments. */
+ uint32_t timeout_ms;
+ /** Maximum number of fragments that can be reassembled. */
+ uint16_t max_frags;
+ /**
+ * Flags to enable reassembly of packet types -
+ * RTE_ETH_DEV_REASSEMBLY_F_xxx.
+ */
+ uint16_t flags;
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Get IP reassembly capabilities supported by the PMD. This is the first API
+ * to be called for enabling the IP reassembly offload feature. PMD will return
+ * the maximum values of parameters that PMD can support and user can call
+ * rte_eth_ip_reassembly_conf_set() with param values lower than capability.
+ *
+ * @param port_id
+ * The port identifier of the device.
+ * @param conf
+ * A pointer to rte_eth_ip_reassembly_params structure.
+ * @return
+ * - (-ENOTSUP) if offload configuration is not supported by device.
+ * - (-ENODEV) if *port_id* invalid.
+ * - (-EIO) if device is removed.
+ * - (-EINVAL) if device is not configured or *capa* passed is NULL.
+ * - (0) on success.
+ */
+__rte_experimental
+int rte_eth_ip_reassembly_capability_get(uint16_t port_id,
+ struct rte_eth_ip_reassembly_params *capa);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Get IP reassembly configuration parameters currently set in PMD.
+ * The API will return error if the configuration is not already
+ * set using rte_eth_ip_reassembly_conf_set() before calling this API or if
+ * the device is not configured.
+ *
+ * @param port_id
+ * The port identifier of the device.
+ * @param conf
+ * A pointer to rte_eth_ip_reassembly_params structure.
+ * @return
+ * - (-ENOTSUP) if offload configuration is not supported by device.
+ * - (-ENODEV) if *port_id* invalid.
+ * - (-EIO) if device is removed.
+ * - (-EINVAL) if device is not configured or if *conf* passed is NULL or if
+ * configuration is not set using rte_eth_ip_reassembly_conf_set().
+ * - (0) on success.
+ */
+__rte_experimental
+int rte_eth_ip_reassembly_conf_get(uint16_t port_id,
+ struct rte_eth_ip_reassembly_params *conf);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Set IP reassembly configuration parameters if the PMD supports IP reassembly
+ * offload. User should first call rte_eth_ip_reassembly_capability_get() to
+ * check the maximum values supported by the PMD before setting the
+ * configuration. The use of this API is mandatory to enable this feature and
+ * should be called before rte_eth_dev_start().
+ *
+ * @param port_id
+ * The port identifier of the device.
+ * @param conf
+ * A pointer to rte_eth_ip_reass_params structure.
+ * @return
+ * - (-ENOTSUP) if offload configuration is not supported by device.
+ * - (-ENODEV) if *port_id* invalid.
+ * - (-EIO) if device is removed.
+ * - (-EINVAL) if device is not configured or if device is already started or
+ * if *conf* passed is NULL.
+ * - (0) on success.
+ */
+__rte_experimental
+int rte_eth_ip_reassembly_conf_set(uint16_t port_id,
+ const struct rte_eth_ip_reassembly_params *conf);
+
+
#include <rte_ethdev_core.h>
/**
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index c2fb0669a4..e22c102818 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -256,6 +256,11 @@ EXPERIMENTAL {
rte_flow_flex_item_create;
rte_flow_flex_item_release;
rte_flow_pick_transfer_proxy;
+
+ # added in 22.03
+ rte_eth_ip_reassembly_capability_get;
+ rte_eth_ip_reassembly_conf_get;
+ rte_eth_ip_reassembly_conf_set;
};
INTERNAL {
--
2.25.1
* [PATCH v5 2/3] ethdev: add mbuf dynfield for incomplete IP reassembly
2022-02-08 20:11 ` [PATCH v5 0/3] ethdev: introduce IP reassembly offload Akhil Goyal
2022-02-08 20:11 ` [PATCH v5 1/3] " Akhil Goyal
@ 2022-02-08 20:11 ` Akhil Goyal
2022-02-08 20:11 ` [PATCH v5 3/3] security: add IPsec option for " Akhil Goyal
2022-02-08 22:20 ` [PATCH v6 0/3] ethdev: introduce IP reassembly offload Akhil Goyal
3 siblings, 0 replies; 184+ messages in thread
From: Akhil Goyal @ 2022-02-08 20:11 UTC (permalink / raw)
To: dev
Cc: anoobj, matan, konstantin.ananyev, thomas, ferruh.yigit,
andrew.rybchenko, rosen.xu, olivier.matz, david.marchand,
radu.nicolau, jerinj, stephen, mdr, Akhil Goyal
Hardware IP reassembly may be incomplete for multiple reasons like
reassembly timeout reached, duplicate fragments, etc.
To save the application cycles spent in processing these packets again,
a new mbuf dynflag is added to indicate that the received mbuf was not
reassembled properly.
If this dynflag is set, the application can retrieve the corresponding
chain of mbufs using the mbuf dynfield set by the PMD. It is then up
to the application to either drop those fragments or wait for more time.
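The fragment-chain walk described above can be sketched as a self-contained simulation. The structs below are simplified stand-ins for rte_mbuf and the reassembly dynfield introduced in this series; a real application would locate the dynfield via the registered offset rather than a named member:

```c
#include <stddef.h>
#include <stdint.h>

struct mbuf; /* simplified stand-in for struct rte_mbuf */

/* Simplified stand-in for the IP reassembly mbuf dynfield. */
struct reass_dynfield {
	struct mbuf *next_frag; /* next fragment, NULL at end of chain */
	uint16_t time_spent;    /* ms spent by HW waiting for fragments */
	uint16_t nb_frags;      /* fragments still chained after this one */
};

struct mbuf {
	struct reass_dynfield dynfield; /* real mbufs use a dynfield offset */
};

/*
 * Walk the fragment chain as an application would after seeing the
 * "reassembly incomplete" dynflag; returns the number of fragments.
 */
static unsigned int
count_frags(struct mbuf *first)
{
	unsigned int n = 0;
	struct mbuf *m;

	for (m = first; m != NULL; m = m->dynfield.next_frag)
		n++;
	return n;
}
```

After counting (or inspecting time_spent), the application decides whether to drop the fragments or hold them for further reassembly attempts.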
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
---
lib/ethdev/ethdev_driver.h | 8 ++++++++
lib/ethdev/rte_ethdev.c | 28 ++++++++++++++++++++++++++++
lib/ethdev/rte_ethdev.h | 28 +++++++++++++++++++++++++++-
lib/ethdev/version.map | 1 +
lib/mbuf/rte_mbuf_dyn.h | 9 +++++++++
5 files changed, 73 insertions(+), 1 deletion(-)
diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
index 4acf75781d..81be991191 100644
--- a/lib/ethdev/ethdev_driver.h
+++ b/lib/ethdev/ethdev_driver.h
@@ -1707,6 +1707,14 @@ int
rte_eth_hairpin_queue_peer_unbind(uint16_t cur_port, uint16_t cur_queue,
uint32_t direction);
+/**
+ * @internal
+ * Register mbuf dynamic field and flag for IP reassembly incomplete case.
+ */
+__rte_internal
+int
+rte_eth_ip_reassembly_dynfield_register(int *field_offset, int *flag);
+
/*
* Legacy ethdev API used internally by drivers.
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 6b37cffd07..a707f395c4 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -6570,6 +6570,34 @@ rte_eth_ip_reassembly_conf_set(uint16_t port_id,
(*dev->dev_ops->ip_reassembly_conf_set)(dev, conf));
}
+int
+rte_eth_ip_reassembly_dynfield_register(int *field_offset, int *flag_offset)
+{
+ static const struct rte_mbuf_dynfield field_desc = {
+ .name = RTE_MBUF_DYNFIELD_IP_REASSEMBLY_NAME,
+ .size = sizeof(rte_eth_ip_reassembly_dynfield_t),
+ .align = __alignof__(rte_eth_ip_reassembly_dynfield_t),
+ };
+ static const struct rte_mbuf_dynflag ip_reassembly_dynflag = {
+ .name = RTE_MBUF_DYNFLAG_IP_REASSEMBLY_INCOMPLETE_NAME,
+ };
+ int offset;
+
+ offset = rte_mbuf_dynfield_register(&field_desc);
+ if (offset < 0)
+ return -1;
+ if (field_offset != NULL)
+ *field_offset = offset;
+
+ offset = rte_mbuf_dynflag_register(&ip_reassembly_dynflag);
+ if (offset < 0)
+ return -1;
+ if (flag_offset != NULL)
+ *flag_offset = offset;
+
+ return 0;
+}
+
RTE_LOG_REGISTER_DEFAULT(rte_eth_dev_logtype, INFO);
RTE_INIT(ethdev_init_telemetry)
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index dd416fda37..b2c41ceb9a 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -5285,6 +5285,12 @@ int rte_eth_ip_reassembly_conf_get(uint16_t port_id,
* configuration. The use of this API is mandatory to enable this feature and
* should be called before rte_eth_dev_start().
*
+ * In datapath, PMD cannot guarantee that IP reassembly is always successful.
+ * Hence, PMD shall register mbuf dynamic field and dynamic flag using
+ * rte_eth_ip_reassembly_dynfield_register() to denote incomplete IP reassembly.
+ * If dynfield is not successfully registered, an error will be returned and
+ * IP reassembly offload cannot be used.
+ *
* @param port_id
* The port identifier of the device.
* @param conf
@@ -5294,13 +5300,33 @@ int rte_eth_ip_reassembly_conf_get(uint16_t port_id,
* - (-ENODEV) if *port_id* invalid.
* - (-EIO) if device is removed.
* - (-EINVAL) if device is not configured or if device is already started or
- * if *conf* passed is NULL.
+ * if *conf* passed is NULL or if mbuf dynfield is not registered
+ * successfully by the PMD.
* - (0) on success.
*/
__rte_experimental
int rte_eth_ip_reassembly_conf_set(uint16_t port_id,
const struct rte_eth_ip_reassembly_params *conf);
+/**
+ * In case of IP reassembly offload failure, the packet will be updated with
+ * the dynamic flag RTE_MBUF_DYNFLAG_IP_REASSEMBLY_INCOMPLETE_NAME and the
+ * packet will be returned without alteration.
+ * The application can retrieve the attached fragments using mbuf dynamic field
+ * RTE_MBUF_DYNFIELD_IP_REASSEMBLY_NAME.
+ */
+typedef struct {
+ /**
+ * Next fragment packet. Application should fetch dynamic field of
+ * each fragment until a NULL is received and nb_frags is 0.
+ */
+ struct rte_mbuf *next_frag;
+ /** Time spent (in ms) by HW waiting for further fragments. */
+ uint16_t time_spent;
+ /** Number of more fragments attached in mbuf dynamic fields. */
+ uint16_t nb_frags;
+} rte_eth_ip_reassembly_dynfield_t;
+
#include <rte_ethdev_core.h>
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index e22c102818..b5499cd9b5 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -284,6 +284,7 @@ INTERNAL {
rte_eth_hairpin_queue_peer_bind;
rte_eth_hairpin_queue_peer_unbind;
rte_eth_hairpin_queue_peer_update;
+ rte_eth_ip_reassembly_dynfield_register;
rte_eth_representor_id_get;
rte_eth_switch_domain_alloc;
rte_eth_switch_domain_free;
diff --git a/lib/mbuf/rte_mbuf_dyn.h b/lib/mbuf/rte_mbuf_dyn.h
index 29abe8da53..1c948e996e 100644
--- a/lib/mbuf/rte_mbuf_dyn.h
+++ b/lib/mbuf/rte_mbuf_dyn.h
@@ -320,6 +320,15 @@ int rte_mbuf_dyn_rx_timestamp_register(int *field_offset, uint64_t *rx_flag);
*/
int rte_mbuf_dyn_tx_timestamp_register(int *field_offset, uint64_t *tx_flag);
+/**
+ * For the PMDs which support IP reassembly of packets, the PMD will update
+ * the packet with RTE_MBUF_DYNFLAG_IP_REASSEMBLY_INCOMPLETE_NAME to denote
+ * that IP reassembly is incomplete and the application can retrieve the
+ * fragments using RTE_MBUF_DYNFIELD_IP_REASSEMBLY_NAME.
+ */
+#define RTE_MBUF_DYNFIELD_IP_REASSEMBLY_NAME "rte_dynfield_ip_reassembly"
+#define RTE_MBUF_DYNFLAG_IP_REASSEMBLY_INCOMPLETE_NAME "rte_dynflag_ip_reassembly_incomplete"
+
#ifdef __cplusplus
}
#endif
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* [PATCH v5 3/3] security: add IPsec option for IP reassembly
2022-02-08 20:11 ` [PATCH v5 0/3] ethdev: introduce IP reassembly offload Akhil Goyal
2022-02-08 20:11 ` [PATCH v5 1/3] " Akhil Goyal
2022-02-08 20:11 ` [PATCH v5 2/3] ethdev: add mbuf dynfield for incomplete IP reassembly Akhil Goyal
@ 2022-02-08 20:11 ` Akhil Goyal
2022-02-08 22:20 ` [PATCH v6 0/3] ethdev: introduce IP reassembly offload Akhil Goyal
3 siblings, 0 replies; 184+ messages in thread
From: Akhil Goyal @ 2022-02-08 20:11 UTC (permalink / raw)
To: dev
Cc: anoobj, matan, konstantin.ananyev, thomas, ferruh.yigit,
andrew.rybchenko, rosen.xu, olivier.matz, david.marchand,
radu.nicolau, jerinj, stephen, mdr, Akhil Goyal
A new option is added in IPsec to enable and attempt reassembly
of inbound IP packets.
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
---
devtools/libabigail.abignore | 5 +++++
lib/security/rte_security.h | 15 ++++++++++++++-
2 files changed, 19 insertions(+), 1 deletion(-)
diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
index 4b676f317d..5be41b8805 100644
--- a/devtools/libabigail.abignore
+++ b/devtools/libabigail.abignore
@@ -11,3 +11,8 @@
; Ignore generated PMD information strings
[suppress_variable]
name_regexp = _pmd_info$
+
+; Ignore fields inserted in place of reserved_opts of rte_security_ipsec_sa_options
+[suppress_type]
+ name = rte_security_ipsec_sa_options
+ has_data_member_inserted_between = {offset_of(reserved_opts), end}
diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
index 1228b6c8b1..b080d10c2c 100644
--- a/lib/security/rte_security.h
+++ b/lib/security/rte_security.h
@@ -264,6 +264,19 @@ struct rte_security_ipsec_sa_options {
*/
uint32_t l4_csum_enable : 1;
+ /** Enable IP reassembly on inline inbound packets.
+ *
+ * * 1: Enable driver to try reassembly of encrypted IP packets for
+ * this SA, if supported by the driver. This feature will work
+ * only if user has successfully set IP reassembly config params
+ * using rte_eth_ip_reassembly_conf_set() for the inline Ethernet
+ * device. The PMD needs to register mbuf dynamic fields using
+ * rte_eth_ip_reassembly_dynfield_register() and security session
+ * creation would fail if dynfield is not registered successfully.
+ * * 0: Disable IP reassembly of packets (default).
+ */
+ uint32_t ip_reassembly_en : 1;
+
/** Reserved bit fields for future extension
*
* User should ensure reserved_opts is cleared as it may change in
@@ -271,7 +284,7 @@ struct rte_security_ipsec_sa_options {
*
* Note: Reduce number of bits in reserved_opts for every new option.
*/
- uint32_t reserved_opts : 18;
+ uint32_t reserved_opts : 17;
};
/** IPSec security association direction */
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* [PATCH v6 0/3] ethdev: introduce IP reassembly offload
2022-02-08 20:11 ` [PATCH v5 0/3] ethdev: introduce IP reassembly offload Akhil Goyal
` (2 preceding siblings ...)
2022-02-08 20:11 ` [PATCH v5 3/3] security: add IPsec option for " Akhil Goyal
@ 2022-02-08 22:20 ` Akhil Goyal
2022-02-08 22:20 ` [PATCH v6 1/3] " Akhil Goyal
` (3 more replies)
3 siblings, 4 replies; 184+ messages in thread
From: Akhil Goyal @ 2022-02-08 22:20 UTC (permalink / raw)
To: dev
Cc: anoobj, matan, konstantin.ananyev, thomas, ferruh.yigit,
andrew.rybchenko, rosen.xu, olivier.matz, david.marchand,
radu.nicolau, jerinj, stephen, mdr, Akhil Goyal
As discussed in the RFC[1] sent in 21.11, a new offload is
introduced in ethdev for IP reassembly.
This patchset adds the IP reassembly RX offload.
Currently, the offload is tested along with inline IPsec processing.
It can also be used as a standalone offload without IPsec, if there
is hardware available to test it.
The patchset is tested on cnxk platform. The driver implementation
and a test app are added as separate patchsets.[2][3]
[1]: http://patches.dpdk.org/project/dpdk/patch/20210823100259.1619886-1-gakhil@marvell.com/
[2]: APP: http://patches.dpdk.org/project/dpdk/list/?series=21284
[3]: PMD: http://patches.dpdk.org/project/dpdk/list/?series=21285
Newer versions of app and PMD will be sent once library changes are
acked.
Changes in v6:
- fix warnings.
Changes in v5:
- updated Doxygen comments.(Ferruh)
- Added release notes.
- updated libabigail suppress rules.(David)
Changes in v4:
- removed rte_eth_dev_info update for capability (Ferruh)
- removed Rx offload flag (Ferruh)
- added capability_get() (Ferruh)
- moved dynfield and dynflag namedefines in rte_mbuf_dyn.h (Ferruh)
changes in v3:
- incorporated comments from Andrew and Stephen Hemminger
changes in v2:
- added abi ignore exceptions for modifications in reserved fields.
Added a crude way to subside the rte_security and rte_ipsec ABI issue.
Please suggest a better way.
- incorporated Konstantin's comment for extra checks in new API
introduced.
- converted static mbuf ol_flag to mbuf dynflag (Konstantin)
- added a get API for reassembly configuration (Konstantin)
- Fixed checkpatch issues.
- Dynfield is NOT split into 2 parts as it would cause an extra fetch in
case of IP reassembly failure.
- Application patches are split into a separate series.
Akhil Goyal (3):
ethdev: introduce IP reassembly offload
ethdev: add mbuf dynfield for incomplete IP reassembly
security: add IPsec option for IP reassembly
devtools/libabigail.abignore | 5 +
doc/guides/nics/features.rst | 13 +++
doc/guides/nics/features/default.ini | 1 +
doc/guides/rel_notes/release_22_03.rst | 6 ++
lib/ethdev/ethdev_driver.h | 63 +++++++++++++
lib/ethdev/rte_ethdev.c | 124 ++++++++++++++++++++++++
lib/ethdev/rte_ethdev.h | 126 +++++++++++++++++++++++++
lib/ethdev/version.map | 6 ++
lib/mbuf/rte_mbuf_dyn.h | 9 ++
lib/security/rte_security.h | 15 ++-
10 files changed, 367 insertions(+), 1 deletion(-)
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* [PATCH v6 1/3] ethdev: introduce IP reassembly offload
2022-02-08 22:20 ` [PATCH v6 0/3] ethdev: introduce IP reassembly offload Akhil Goyal
@ 2022-02-08 22:20 ` Akhil Goyal
2022-02-10 8:54 ` Ferruh Yigit
2022-02-10 10:08 ` Andrew Rybchenko
2022-02-08 22:20 ` [PATCH v6 2/3] ethdev: add mbuf dynfield for incomplete IP reassembly Akhil Goyal
` (2 subsequent siblings)
3 siblings, 2 replies; 184+ messages in thread
From: Akhil Goyal @ 2022-02-08 22:20 UTC (permalink / raw)
To: dev
Cc: anoobj, matan, konstantin.ananyev, thomas, ferruh.yigit,
andrew.rybchenko, rosen.xu, olivier.matz, david.marchand,
radu.nicolau, jerinj, stephen, mdr, Akhil Goyal
IP Reassembly is a costly operation if it is done in software.
The operation becomes even costlier if IP fragments are encrypted.
However, if it is offloaded to HW, it can considerably save application
cycles.
Hence, a new offload feature is exposed in eth_dev ops for devices which
can attempt IP reassembly of packets in hardware.
- rte_eth_ip_reassembly_capability_get() - to get the maximum values
of reassembly configuration which can be set.
- rte_eth_ip_reassembly_conf_set() - to set IP reassembly configuration
and to enable the feature in the PMD (to be called before
rte_eth_dev_start()).
- rte_eth_ip_reassembly_conf_get() - to get the current configuration
set in PMD.
Now when the offload is enabled using rte_eth_ip_reassembly_conf_set(),
the resulting reassembled IP packet would be a typical segmented mbuf in
case of success.
And if reassembly of IP fragments fails or is incomplete (if
fragments do not arrive before the reass_timeout, overlap, etc.), the mbuf
dynamic flags can be updated by the PMD. This is added in a subsequent
patch.
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
---
doc/guides/nics/features.rst | 13 ++++
doc/guides/nics/features/default.ini | 1 +
doc/guides/rel_notes/release_22_03.rst | 6 ++
lib/ethdev/ethdev_driver.h | 55 ++++++++++++++
lib/ethdev/rte_ethdev.c | 96 ++++++++++++++++++++++++
lib/ethdev/rte_ethdev.h | 100 +++++++++++++++++++++++++
lib/ethdev/version.map | 5 ++
7 files changed, 276 insertions(+)
diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
index 27be2d2576..e6112e16f4 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -602,6 +602,19 @@ Supports inner packet L4 checksum.
``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM``.
+.. _nic_features_ip_reassembly:
+
+IP reassembly
+-------------
+
+Supports IP reassembly in hardware.
+
+* **[provides] eth_dev_ops**: ``ip_reassembly_capability_get``,
+ ``ip_reassembly_conf_get``, ``ip_reassembly_conf_set``.
+* **[related] API**: ``rte_eth_ip_reassembly_capability_get()``,
+ ``rte_eth_ip_reassembly_conf_get()``, ``rte_eth_ip_reassembly_conf_set()``.
+
+
.. _nic_features_shared_rx_queue:
Shared Rx queue
diff --git a/doc/guides/nics/features/default.ini b/doc/guides/nics/features/default.ini
index c96a52b58e..22750dfacb 100644
--- a/doc/guides/nics/features/default.ini
+++ b/doc/guides/nics/features/default.ini
@@ -52,6 +52,7 @@ Timestamp offload =
MACsec offload =
Inner L3 checksum =
Inner L4 checksum =
+IP reassembly =
Packet type parsing =
Timesync =
Rx descriptor status =
diff --git a/doc/guides/rel_notes/release_22_03.rst b/doc/guides/rel_notes/release_22_03.rst
index 746f50e84f..03ec8d6faa 100644
--- a/doc/guides/rel_notes/release_22_03.rst
+++ b/doc/guides/rel_notes/release_22_03.rst
@@ -55,6 +55,12 @@ New Features
Also, make sure to start the actual text at the margin.
=======================================================
+* **Added IP reassembly Ethernet offload API, to get and set config.**
+
+ Added IP reassembly offload APIs which provide functions to query IP
+ reassembly capabilities, to set configuration and to get currently set
+ reassembly configuration.
+
* **Updated Cisco enic driver.**
* Added rte_flow support for matching GENEVE packets.
diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
index d95605a355..4acf75781d 100644
--- a/lib/ethdev/ethdev_driver.h
+++ b/lib/ethdev/ethdev_driver.h
@@ -990,6 +990,54 @@ typedef int (*eth_representor_info_get_t)(struct rte_eth_dev *dev,
typedef int (*eth_rx_metadata_negotiate_t)(struct rte_eth_dev *dev,
uint64_t *features);
+/**
+ * @internal
+ * Get IP reassembly offload capability of a PMD.
+ *
+ * @param dev
+ * Port (ethdev) handle
+ *
+ * @param[out] capa
+ * IP reassembly capability supported by the PMD
+ *
+ * @return
+ * Negative errno value on error, zero otherwise
+ */
+typedef int (*eth_ip_reassembly_capability_get_t)(struct rte_eth_dev *dev,
+ struct rte_eth_ip_reassembly_params *capa);
+
+/**
+ * @internal
+ * Get IP reassembly offload configuration parameters set in PMD.
+ *
+ * @param dev
+ * Port (ethdev) handle
+ *
+ * @param[out] conf
+ * Configuration parameters for IP reassembly.
+ *
+ * @return
+ * Negative errno value on error, zero otherwise
+ */
+typedef int (*eth_ip_reassembly_conf_get_t)(struct rte_eth_dev *dev,
+ struct rte_eth_ip_reassembly_params *conf);
+
+/**
+ * @internal
+ * Set configuration parameters for enabling IP reassembly offload in hardware.
+ *
+ * @param dev
+ * Port (ethdev) handle
+ *
+ * @param[in] conf
+ * Configuration parameters for IP reassembly.
+ *
+ * @return
+ * Negative errno value on error, zero otherwise
+ */
+typedef int (*eth_ip_reassembly_conf_set_t)(struct rte_eth_dev *dev,
+ const struct rte_eth_ip_reassembly_params *conf);
+
/**
* @internal A structure containing the functions exported by an Ethernet driver.
*/
@@ -1186,6 +1234,13 @@ struct eth_dev_ops {
* kinds of metadata to the PMD
*/
eth_rx_metadata_negotiate_t rx_metadata_negotiate;
+
+ /** Get IP reassembly capability */
+ eth_ip_reassembly_capability_get_t ip_reassembly_capability_get;
+ /** Get IP reassembly configuration */
+ eth_ip_reassembly_conf_get_t ip_reassembly_conf_get;
+ /** Set IP reassembly configuration */
+ eth_ip_reassembly_conf_set_t ip_reassembly_conf_set;
};
/**
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 29e21ad580..6b37cffd07 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -6474,6 +6474,102 @@ rte_eth_rx_metadata_negotiate(uint16_t port_id, uint64_t *features)
(*dev->dev_ops->rx_metadata_negotiate)(dev, features));
}
+int
+rte_eth_ip_reassembly_capability_get(uint16_t port_id,
+ struct rte_eth_ip_reassembly_params *reassembly_capa)
+{
+ struct rte_eth_dev *dev;
+
+ RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+ dev = &rte_eth_devices[port_id];
+
+ if (dev->data->dev_configured == 0) {
+ RTE_ETHDEV_LOG(ERR,
+ "Device with port_id=%u is not configured.\n"
+ "Cannot get IP reassembly capability\n",
+ port_id);
+ return -EINVAL;
+ }
+
+ if (reassembly_capa == NULL) {
+ RTE_ETHDEV_LOG(ERR, "Cannot get reassembly capability to NULL");
+ return -EINVAL;
+ }
+
+ RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->ip_reassembly_capability_get,
+ -ENOTSUP);
+ memset(reassembly_capa, 0, sizeof(struct rte_eth_ip_reassembly_params));
+
+ return eth_err(port_id, (*dev->dev_ops->ip_reassembly_capability_get)
+ (dev, reassembly_capa));
+}
+
+int
+rte_eth_ip_reassembly_conf_get(uint16_t port_id,
+ struct rte_eth_ip_reassembly_params *conf)
+{
+ struct rte_eth_dev *dev;
+
+ RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+ dev = &rte_eth_devices[port_id];
+
+ if (dev->data->dev_configured == 0) {
+ RTE_ETHDEV_LOG(ERR,
+ "Device with port_id=%u is not configured.\n"
+ "Cannot get IP reassembly configuration\n",
+ port_id);
+ return -EINVAL;
+ }
+
+ if (conf == NULL) {
+ RTE_ETHDEV_LOG(ERR, "Cannot get reassembly info to NULL");
+ return -EINVAL;
+ }
+
+ RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->ip_reassembly_conf_get,
+ -ENOTSUP);
+ memset(conf, 0, sizeof(struct rte_eth_ip_reassembly_params));
+ return eth_err(port_id,
+ (*dev->dev_ops->ip_reassembly_conf_get)(dev, conf));
+}
+
+int
+rte_eth_ip_reassembly_conf_set(uint16_t port_id,
+ const struct rte_eth_ip_reassembly_params *conf)
+{
+ struct rte_eth_dev *dev;
+
+ RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+ dev = &rte_eth_devices[port_id];
+
+ if (dev->data->dev_configured == 0) {
+ RTE_ETHDEV_LOG(ERR,
+ "Device with port_id=%u is not configured.\n"
+ "Cannot set IP reassembly configuration",
+ port_id);
+ return -EINVAL;
+ }
+
+ if (dev->data->dev_started != 0) {
+ RTE_ETHDEV_LOG(ERR,
+ "Device with port_id=%u started,\n"
+ "cannot configure IP reassembly params.\n",
+ port_id);
+ return -EINVAL;
+ }
+
+ if (conf == NULL) {
+ RTE_ETHDEV_LOG(ERR,
+ "Invalid IP reassembly configuration (NULL)\n");
+ return -EINVAL;
+ }
+
+ RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->ip_reassembly_conf_set,
+ -ENOTSUP);
+ return eth_err(port_id,
+ (*dev->dev_ops->ip_reassembly_conf_set)(dev, conf));
+}
+
RTE_LOG_REGISTER_DEFAULT(rte_eth_dev_logtype, INFO);
RTE_INIT(ethdev_init_telemetry)
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index 147cc1ced3..0215f9d854 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -5202,6 +5202,106 @@ int rte_eth_representor_info_get(uint16_t port_id,
__rte_experimental
int rte_eth_rx_metadata_negotiate(uint16_t port_id, uint64_t *features);
+/* Flag to offload IP reassembly for IPv4 packets. */
+#define RTE_ETH_DEV_REASSEMBLY_F_IPV4 (RTE_BIT32(0))
+/* Flag to offload IP reassembly for IPv6 packets. */
+#define RTE_ETH_DEV_REASSEMBLY_F_IPV6 (RTE_BIT32(1))
+/**
+ * A structure used to get/set IP reassembly configuration. It is also used
+ * to get the maximum capability values that a PMD can support.
+ *
+ * If rte_eth_ip_reassembly_capability_get() returns 0, IP reassembly can be
+ * enabled using rte_eth_ip_reassembly_conf_set() with parameter values no
+ * higher than the capability values reported by the PMD.
+ */
+struct rte_eth_ip_reassembly_params {
+ /** Maximum time in ms which PMD can wait for other fragments. */
+ uint32_t timeout_ms;
+ /** Maximum number of fragments that can be reassembled. */
+ uint16_t max_frags;
+ /**
+ * Flags to enable reassembly of packet types -
+ * RTE_ETH_DEV_REASSEMBLY_F_xxx.
+ */
+ uint16_t flags;
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Get IP reassembly capabilities supported by the PMD. This is the first API
+ * to be called for enabling the IP reassembly offload feature. PMD will return
+ * the maximum values of parameters that PMD can support and user can call
+ * rte_eth_ip_reassembly_conf_set() with param values lower than capability.
+ *
+ * @param port_id
+ * The port identifier of the device.
+ * @param capa
+ * A pointer to rte_eth_ip_reassembly_params structure.
+ * @return
+ * - (-ENOTSUP) if offload configuration is not supported by device.
+ * - (-ENODEV) if *port_id* invalid.
+ * - (-EIO) if device is removed.
+ * - (-EINVAL) if device is not configured or *capa* passed is NULL.
+ * - (0) on success.
+ */
+__rte_experimental
+int rte_eth_ip_reassembly_capability_get(uint16_t port_id,
+ struct rte_eth_ip_reassembly_params *capa);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Get IP reassembly configuration parameters currently set in PMD.
+ * The API will return an error if the configuration is not already
+ * set using rte_eth_ip_reassembly_conf_set() before calling this API or if
+ * the device is not configured.
+ *
+ * @param port_id
+ * The port identifier of the device.
+ * @param conf
+ * A pointer to rte_eth_ip_reassembly_params structure.
+ * @return
+ * - (-ENOTSUP) if offload configuration is not supported by device.
+ * - (-ENODEV) if *port_id* invalid.
+ * - (-EIO) if device is removed.
+ * - (-EINVAL) if device is not configured or if *conf* passed is NULL or if
+ * configuration is not set using rte_eth_ip_reassembly_conf_set().
+ * - (0) on success.
+ */
+__rte_experimental
+int rte_eth_ip_reassembly_conf_get(uint16_t port_id,
+ struct rte_eth_ip_reassembly_params *conf);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Set IP reassembly configuration parameters if the PMD supports IP reassembly
+ * offload. User should first call rte_eth_ip_reassembly_capability_get() to
+ * check the maximum values supported by the PMD before setting the
+ * configuration. The use of this API is mandatory to enable this feature and
+ * should be called before rte_eth_dev_start().
+ *
+ * @param port_id
+ * The port identifier of the device.
+ * @param conf
+ * A pointer to rte_eth_ip_reassembly_params structure.
+ * @return
+ * - (-ENOTSUP) if offload configuration is not supported by device.
+ * - (-ENODEV) if *port_id* invalid.
+ * - (-EIO) if device is removed.
+ * - (-EINVAL) if device is not configured or if device is already started or
+ * if *conf* passed is NULL.
+ * - (0) on success.
+ */
+__rte_experimental
+int rte_eth_ip_reassembly_conf_set(uint16_t port_id,
+ const struct rte_eth_ip_reassembly_params *conf);
+
+
#include <rte_ethdev_core.h>
/**
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index c2fb0669a4..e22c102818 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -256,6 +256,11 @@ EXPERIMENTAL {
rte_flow_flex_item_create;
rte_flow_flex_item_release;
rte_flow_pick_transfer_proxy;
+
+ # added in 22.03
+ rte_eth_ip_reassembly_capability_get;
+ rte_eth_ip_reassembly_conf_get;
+ rte_eth_ip_reassembly_conf_set;
};
INTERNAL {
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* [PATCH v6 2/3] ethdev: add mbuf dynfield for incomplete IP reassembly
2022-02-08 22:20 ` [PATCH v6 0/3] ethdev: introduce IP reassembly offload Akhil Goyal
2022-02-08 22:20 ` [PATCH v6 1/3] " Akhil Goyal
@ 2022-02-08 22:20 ` Akhil Goyal
2022-02-10 8:54 ` Ferruh Yigit
2022-02-08 22:20 ` [PATCH v6 3/3] security: add IPsec option for " Akhil Goyal
2022-02-10 8:54 ` [PATCH v6 0/3] ethdev: introduce IP reassembly offload Ferruh Yigit
3 siblings, 1 reply; 184+ messages in thread
From: Akhil Goyal @ 2022-02-08 22:20 UTC (permalink / raw)
To: dev
Cc: anoobj, matan, konstantin.ananyev, thomas, ferruh.yigit,
andrew.rybchenko, rosen.xu, olivier.matz, david.marchand,
radu.nicolau, jerinj, stephen, mdr, Akhil Goyal
Hardware IP reassembly may be incomplete for multiple reasons like
reassembly timeout reached, duplicate fragments, etc.
To save application cycles to process these packets again, a new
mbuf dynflag is added to show that the mbuf received is not
reassembled properly.
If this dynflag is set, the application can retrieve the corresponding
chain of mbufs using the mbuf dynfield set by the PMD. It is then up
to the application to either drop those fragments or wait longer for more to arrive.
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
---
lib/ethdev/ethdev_driver.h | 8 ++++++++
lib/ethdev/rte_ethdev.c | 28 ++++++++++++++++++++++++++++
lib/ethdev/rte_ethdev.h | 28 +++++++++++++++++++++++++++-
lib/ethdev/version.map | 1 +
lib/mbuf/rte_mbuf_dyn.h | 9 +++++++++
5 files changed, 73 insertions(+), 1 deletion(-)
diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
index 4acf75781d..81be991191 100644
--- a/lib/ethdev/ethdev_driver.h
+++ b/lib/ethdev/ethdev_driver.h
@@ -1707,6 +1707,14 @@ int
rte_eth_hairpin_queue_peer_unbind(uint16_t cur_port, uint16_t cur_queue,
uint32_t direction);
+/**
+ * @internal
+ * Register mbuf dynamic field and flag for IP reassembly incomplete case.
+ */
+__rte_internal
+int
+rte_eth_ip_reassembly_dynfield_register(int *field_offset, int *flag);
+
/*
* Legacy ethdev API used internally by drivers.
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 6b37cffd07..a707f395c4 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -6570,6 +6570,34 @@ rte_eth_ip_reassembly_conf_set(uint16_t port_id,
(*dev->dev_ops->ip_reassembly_conf_set)(dev, conf));
}
+int
+rte_eth_ip_reassembly_dynfield_register(int *field_offset, int *flag_offset)
+{
+ static const struct rte_mbuf_dynfield field_desc = {
+ .name = RTE_MBUF_DYNFIELD_IP_REASSEMBLY_NAME,
+ .size = sizeof(rte_eth_ip_reassembly_dynfield_t),
+ .align = __alignof__(rte_eth_ip_reassembly_dynfield_t),
+ };
+ static const struct rte_mbuf_dynflag ip_reassembly_dynflag = {
+ .name = RTE_MBUF_DYNFLAG_IP_REASSEMBLY_INCOMPLETE_NAME,
+ };
+ int offset;
+
+ offset = rte_mbuf_dynfield_register(&field_desc);
+ if (offset < 0)
+ return -1;
+ if (field_offset != NULL)
+ *field_offset = offset;
+
+ offset = rte_mbuf_dynflag_register(&ip_reassembly_dynflag);
+ if (offset < 0)
+ return -1;
+ if (flag_offset != NULL)
+ *flag_offset = offset;
+
+ return 0;
+}
+
RTE_LOG_REGISTER_DEFAULT(rte_eth_dev_logtype, INFO);
RTE_INIT(ethdev_init_telemetry)
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index 0215f9d854..eba9c2f402 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -5285,6 +5285,12 @@ int rte_eth_ip_reassembly_conf_get(uint16_t port_id,
* configuration. The use of this API is mandatory to enable this feature and
* should be called before rte_eth_dev_start().
*
+ * In datapath, PMD cannot guarantee that IP reassembly is always successful.
+ * Hence, PMD shall register mbuf dynamic field and dynamic flag using
+ * rte_eth_ip_reassembly_dynfield_register() to denote incomplete IP reassembly.
+ * If dynfield is not successfully registered, an error will be returned and
+ * IP reassembly offload cannot be used.
+ *
* @param port_id
* The port identifier of the device.
* @param conf
@@ -5294,13 +5300,33 @@ int rte_eth_ip_reassembly_conf_get(uint16_t port_id,
* - (-ENODEV) if *port_id* invalid.
* - (-EIO) if device is removed.
* - (-EINVAL) if device is not configured or if device is already started or
- * if *conf* passed is NULL.
+ * if *conf* passed is NULL or if mbuf dynfield is not registered
+ * successfully by the PMD.
* - (0) on success.
*/
__rte_experimental
int rte_eth_ip_reassembly_conf_set(uint16_t port_id,
const struct rte_eth_ip_reassembly_params *conf);
+/**
+ * In case of IP reassembly offload failure, the packet will be updated with
+ * the dynamic flag RTE_MBUF_DYNFLAG_IP_REASSEMBLY_INCOMPLETE_NAME and the
+ * packet will be returned without alteration.
+ * The application can retrieve the attached fragments using mbuf dynamic field
+ * RTE_MBUF_DYNFIELD_IP_REASSEMBLY_NAME.
+ */
+typedef struct {
+ /**
+ * Next fragment packet. Application should fetch dynamic field of
+ * each fragment until a NULL is received and nb_frags is 0.
+ */
+ struct rte_mbuf *next_frag;
+ /** Time spent (in ms) by HW waiting for further fragments. */
+ uint16_t time_spent;
+ /** Number of more fragments attached in mbuf dynamic fields. */
+ uint16_t nb_frags;
+} rte_eth_ip_reassembly_dynfield_t;
+
#include <rte_ethdev_core.h>
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index e22c102818..b5499cd9b5 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -284,6 +284,7 @@ INTERNAL {
rte_eth_hairpin_queue_peer_bind;
rte_eth_hairpin_queue_peer_unbind;
rte_eth_hairpin_queue_peer_update;
+ rte_eth_ip_reassembly_dynfield_register;
rte_eth_representor_id_get;
rte_eth_switch_domain_alloc;
rte_eth_switch_domain_free;
diff --git a/lib/mbuf/rte_mbuf_dyn.h b/lib/mbuf/rte_mbuf_dyn.h
index 29abe8da53..1c948e996e 100644
--- a/lib/mbuf/rte_mbuf_dyn.h
+++ b/lib/mbuf/rte_mbuf_dyn.h
@@ -320,6 +320,15 @@ int rte_mbuf_dyn_rx_timestamp_register(int *field_offset, uint64_t *rx_flag);
*/
int rte_mbuf_dyn_tx_timestamp_register(int *field_offset, uint64_t *tx_flag);
+/**
+ * For the PMDs which support IP reassembly of packets, the PMD will update
+ * the packet with RTE_MBUF_DYNFLAG_IP_REASSEMBLY_INCOMPLETE_NAME to denote
+ * that IP reassembly is incomplete and the application can retrieve the
+ * fragments using RTE_MBUF_DYNFIELD_IP_REASSEMBLY_NAME.
+ */
+#define RTE_MBUF_DYNFIELD_IP_REASSEMBLY_NAME "rte_dynfield_ip_reassembly"
+#define RTE_MBUF_DYNFLAG_IP_REASSEMBLY_INCOMPLETE_NAME "rte_dynflag_ip_reassembly_incomplete"
+
#ifdef __cplusplus
}
#endif
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* [PATCH v6 3/3] security: add IPsec option for IP reassembly
2022-02-08 22:20 ` [PATCH v6 0/3] ethdev: introduce IP reassembly offload Akhil Goyal
2022-02-08 22:20 ` [PATCH v6 1/3] " Akhil Goyal
2022-02-08 22:20 ` [PATCH v6 2/3] ethdev: add mbuf dynfield for incomplete IP reassembly Akhil Goyal
@ 2022-02-08 22:20 ` Akhil Goyal
2022-02-10 8:54 ` [PATCH v6 0/3] ethdev: introduce IP reassembly offload Ferruh Yigit
3 siblings, 0 replies; 184+ messages in thread
From: Akhil Goyal @ 2022-02-08 22:20 UTC (permalink / raw)
To: dev
Cc: anoobj, matan, konstantin.ananyev, thomas, ferruh.yigit,
andrew.rybchenko, rosen.xu, olivier.matz, david.marchand,
radu.nicolau, jerinj, stephen, mdr, Akhil Goyal
A new option is added in IPsec to enable and attempt reassembly
of inbound IP packets.
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
---
devtools/libabigail.abignore | 5 +++++
lib/security/rte_security.h | 15 ++++++++++++++-
2 files changed, 19 insertions(+), 1 deletion(-)
diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
index 4b676f317d..5be41b8805 100644
--- a/devtools/libabigail.abignore
+++ b/devtools/libabigail.abignore
@@ -11,3 +11,8 @@
; Ignore generated PMD information strings
[suppress_variable]
name_regexp = _pmd_info$
+
+; Ignore fields inserted in place of reserved_opts of rte_security_ipsec_sa_options
+[suppress_type]
+ name = rte_security_ipsec_sa_options
+ has_data_member_inserted_between = {offset_of(reserved_opts), end}
diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
index 1228b6c8b1..b080d10c2c 100644
--- a/lib/security/rte_security.h
+++ b/lib/security/rte_security.h
@@ -264,6 +264,19 @@ struct rte_security_ipsec_sa_options {
*/
uint32_t l4_csum_enable : 1;
+ /** Enable IP reassembly on inline inbound packets.
+ *
+ * * 1: Enable driver to try reassembly of encrypted IP packets for
+ * this SA, if supported by the driver. This feature will work
+ * only if user has successfully set IP reassembly config params
+ * using rte_eth_ip_reassembly_conf_set() for the inline Ethernet
+ * device. The PMD needs to register mbuf dynamic fields using
+ * rte_eth_ip_reassembly_dynfield_register() and security session
+ * creation would fail if dynfield is not registered successfully.
+ * * 0: Disable IP reassembly of packets (default).
+ */
+ uint32_t ip_reassembly_en : 1;
+
/** Reserved bit fields for future extension
*
* User should ensure reserved_opts is cleared as it may change in
@@ -271,7 +284,7 @@ struct rte_security_ipsec_sa_options {
*
* Note: Reduce number of bits in reserved_opts for every new option.
*/
- uint32_t reserved_opts : 18;
+ uint32_t reserved_opts : 17;
};
/** IPSec security association direction */
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* Re: [PATCH v6 1/3] ethdev: introduce IP reassembly offload
2022-02-08 22:20 ` [PATCH v6 1/3] " Akhil Goyal
@ 2022-02-10 8:54 ` Ferruh Yigit
2022-02-10 10:08 ` Andrew Rybchenko
1 sibling, 0 replies; 184+ messages in thread
From: Ferruh Yigit @ 2022-02-10 8:54 UTC (permalink / raw)
To: Akhil Goyal, dev
Cc: anoobj, matan, konstantin.ananyev, thomas, andrew.rybchenko,
rosen.xu, olivier.matz, david.marchand, radu.nicolau, jerinj,
stephen, mdr
On 2/8/2022 10:20 PM, Akhil Goyal wrote:
> IP Reassembly is a costly operation if it is done in software.
> The operation becomes even costlier if IP fragments are encrypted.
> However, if it is offloaded to HW, it can considerably save application
> cycles.
>
> Hence, a new offload feature is exposed in eth_dev ops for devices which
> can attempt IP reassembly of packets in hardware.
> - rte_eth_ip_reassembly_capability_get() - to get the maximum values
> of reassembly configuration which can be set.
> - rte_eth_ip_reassembly_conf_set() - to set IP reassembly configuration
> and to enable the feature in the PMD (to be called before
> rte_eth_dev_start()).
> - rte_eth_ip_reassembly_conf_get() - to get the current configuration
> set in PMD.
>
> Now when the offload is enabled using rte_eth_ip_reassembly_conf_set(),
> the resulting reassembled IP packet would be a typical segmented mbuf in
> case of success.
>
> And if reassembly of IP fragments fails or is incomplete (if
> fragments do not arrive before the reass_timeout, overlap, etc.), the mbuf
> dynamic flags can be updated by the PMD. This is updated in a subsequent
> patch.
>
> Signed-off-by: Akhil Goyal<gakhil@marvell.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
^ permalink raw reply [flat|nested] 184+ messages in thread
* Re: [PATCH v6 2/3] ethdev: add mbuf dynfield for incomplete IP reassembly
2022-02-08 22:20 ` [PATCH v6 2/3] ethdev: add mbuf dynfield for incomplete IP reassembly Akhil Goyal
@ 2022-02-10 8:54 ` Ferruh Yigit
0 siblings, 0 replies; 184+ messages in thread
From: Ferruh Yigit @ 2022-02-10 8:54 UTC (permalink / raw)
To: Akhil Goyal, dev
Cc: anoobj, matan, konstantin.ananyev, thomas, andrew.rybchenko,
rosen.xu, olivier.matz, david.marchand, radu.nicolau, jerinj,
stephen, mdr
On 2/8/2022 10:20 PM, Akhil Goyal wrote:
> Hardware IP reassembly may be incomplete for multiple reasons like
> reassembly timeout reached, duplicate fragments, etc.
> To save application cycles to process these packets again, a new
> mbuf dynflag is added to show that the mbuf received is not
> reassembled properly.
>
> Now if this dynflag is set, the application can retrieve the
> corresponding chain of mbufs using the mbuf dynfield set by the PMD.
> It is then up to the application to either drop those fragments or
> wait for more time.
>
> Signed-off-by: Akhil Goyal<gakhil@marvell.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
^ permalink raw reply [flat|nested] 184+ messages in thread
* Re: [PATCH v6 0/3] ethdev: introduce IP reassembly offload
2022-02-08 22:20 ` [PATCH v6 0/3] ethdev: introduce IP reassembly offload Akhil Goyal
` (2 preceding siblings ...)
2022-02-08 22:20 ` [PATCH v6 3/3] security: add IPsec option for " Akhil Goyal
@ 2022-02-10 8:54 ` Ferruh Yigit
3 siblings, 0 replies; 184+ messages in thread
From: Ferruh Yigit @ 2022-02-10 8:54 UTC (permalink / raw)
To: Akhil Goyal, dev
Cc: anoobj, matan, konstantin.ananyev, thomas, andrew.rybchenko,
rosen.xu, olivier.matz, david.marchand, radu.nicolau, jerinj,
stephen, mdr
On 2/8/2022 10:20 PM, Akhil Goyal wrote:
> As discussed in the RFC[1] sent in 21.11, a new offload is
> introduced in ethdev for IP reassembly.
>
> This patchset adds the IP reassembly RX offload.
> Currently, the offload is tested along with inline IPsec processing.
> It can also be updated as a standalone offload without IPsec, if there
> is hardware available to test it.
> The patchset is tested on the cnxk platform. The driver implementation
> and a test app are added as separate patchsets.[2][3]
>
> [1]: http://patches.dpdk.org/project/dpdk/patch/20210823100259.1619886-1-gakhil@marvell.com/
> [2]: APP: http://patches.dpdk.org/project/dpdk/list/?series=21284
> [3]: PMD: http://patches.dpdk.org/project/dpdk/list/?series=21285
> Newer versions of app and PMD will be sent once library changes are
> acked.
>
> Changes in v6:
> - fix warnings.
>
> Changes in v5:
> - updated Doxygen comments.(Ferruh)
> - Added release notes.
> - updated libabigail suppress rules.(David)
>
> Changes in v4:
> - removed rte_eth_dev_info update for capability (Ferruh)
> - removed Rx offload flag (Ferruh)
> - added capability_get() (Ferruh)
> - moved dynfield and dynflag namedefines in rte_mbuf_dyn.h (Ferruh)
>
> changes in v3:
> - incorporated comments from Andrew and Stephen Hemminger
>
> changes in v2:
> - added abi ignore exceptions for modifications in reserved fields.
> Added a crude way to suppress the rte_security and rte_ipsec ABI issue.
> Please suggest a better way.
> - incorporated Konstantin's comment for extra checks in new API
> introduced.
> - converted static mbuf ol_flag to mbuf dynflag (Konstantin)
> - added a get API for reassembly configuration (Konstantin)
> - Fixed checkpatch issues.
> - Dynfield is NOT split into 2 parts as it would cause an extra fetch in
> case of IP reassembly failure.
> - Application patches are split into a separate series.
>
>
> Akhil Goyal (3):
> ethdev: introduce IP reassembly offload
> ethdev: add mbuf dynfield for incomplete IP reassembly
> security: add IPsec option for IP reassembly
>
Series applied to dpdk-next-net/main, thanks.
^ permalink raw reply [flat|nested] 184+ messages in thread
* Re: [PATCH v6 1/3] ethdev: introduce IP reassembly offload
2022-02-08 22:20 ` [PATCH v6 1/3] " Akhil Goyal
2022-02-10 8:54 ` Ferruh Yigit
@ 2022-02-10 10:08 ` Andrew Rybchenko
2022-02-10 10:20 ` Ferruh Yigit
1 sibling, 1 reply; 184+ messages in thread
From: Andrew Rybchenko @ 2022-02-10 10:08 UTC (permalink / raw)
To: Akhil Goyal, dev
Cc: anoobj, matan, konstantin.ananyev, thomas, ferruh.yigit,
rosen.xu, olivier.matz, david.marchand, radu.nicolau, jerinj,
stephen, mdr
On 2/9/22 01:20, Akhil Goyal wrote:
> IP Reassembly is a costly operation if it is done in software.
> The operation becomes even costlier if IP fragments are encrypted.
> However, if it is offloaded to HW, it can considerably save application
> cycles.
>
> Hence, a new offload feature is exposed in eth_dev ops for devices which
> can attempt IP reassembly of packets in hardware.
> - rte_eth_ip_reassembly_capability_get() - to get the maximum values
> of reassembly configuration which can be set.
> - rte_eth_ip_reassembly_conf_set() - to set IP reassembly configuration
> and to enable the feature in the PMD (to be called before
> rte_eth_dev_start()).
> - rte_eth_ip_reassembly_conf_get() - to get the current configuration
> set in PMD.
>
> Now when the offload is enabled using rte_eth_ip_reassembly_conf_set(),
> the resulting reassembled IP packet would be a typical segmented mbuf in
> case of success.
>
> And if reassembly of IP fragments fails or is incomplete (if
> fragments do not arrive before the reass_timeout, overlap, etc.), the mbuf
> dynamic flags can be updated by the PMD. This is updated in a subsequent
> patch.
>
> Signed-off-by: Akhil Goyal <gakhil@marvell.com>
Just one nit below, sorry that I'm so late
> diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
> index 147cc1ced3..0215f9d854 100644
> --- a/lib/ethdev/rte_ethdev.h
> +++ b/lib/ethdev/rte_ethdev.h
> @@ -5202,6 +5202,106 @@ int rte_eth_representor_info_get(uint16_t port_id,
> __rte_experimental
> int rte_eth_rx_metadata_negotiate(uint16_t port_id, uint64_t *features);
>
> +/* Flag to offload IP reassembly for IPv4 packets. */
> +#define RTE_ETH_DEV_REASSEMBLY_F_IPV4 (RTE_BIT32(0))
> +/* Flag to offload IP reassembly for IPv6 packets. */
> +#define RTE_ETH_DEV_REASSEMBLY_F_IPV6 (RTE_BIT32(1))
Doxygen style comments should be above: /**
^ permalink raw reply [flat|nested] 184+ messages in thread
* Re: [PATCH v6 1/3] ethdev: introduce IP reassembly offload
2022-02-10 10:08 ` Andrew Rybchenko
@ 2022-02-10 10:20 ` Ferruh Yigit
2022-02-10 10:30 ` Ferruh Yigit
0 siblings, 1 reply; 184+ messages in thread
From: Ferruh Yigit @ 2022-02-10 10:20 UTC (permalink / raw)
To: Andrew Rybchenko, Akhil Goyal, dev
Cc: anoobj, matan, konstantin.ananyev, thomas, rosen.xu,
olivier.matz, david.marchand, radu.nicolau, jerinj, stephen, mdr
On 2/10/2022 10:08 AM, Andrew Rybchenko wrote:
> On 2/9/22 01:20, Akhil Goyal wrote:
>> IP Reassembly is a costly operation if it is done in software.
>> The operation becomes even costlier if IP fragments are encrypted.
>> However, if it is offloaded to HW, it can considerably save application
>> cycles.
>>
>> Hence, a new offload feature is exposed in eth_dev ops for devices which
>> can attempt IP reassembly of packets in hardware.
>> - rte_eth_ip_reassembly_capability_get() - to get the maximum values
>> of reassembly configuration which can be set.
>> - rte_eth_ip_reassembly_conf_set() - to set IP reassembly configuration
>> and to enable the feature in the PMD (to be called before
>> rte_eth_dev_start()).
>> - rte_eth_ip_reassembly_conf_get() - to get the current configuration
>> set in PMD.
>>
>> Now when the offload is enabled using rte_eth_ip_reassembly_conf_set(),
>> the resulting reassembled IP packet would be a typical segmented mbuf in
>> case of success.
>>
>> And if reassembly of IP fragments fails or is incomplete (if
>> fragments do not arrive before the reass_timeout, overlap, etc.), the mbuf
>> dynamic flags can be updated by the PMD. This is updated in a subsequent
>> patch.
>>
>> Signed-off-by: Akhil Goyal <gakhil@marvell.com>
>
> Just one nit below, sorry that I'm so late
>
>> diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
>> index 147cc1ced3..0215f9d854 100644
>> --- a/lib/ethdev/rte_ethdev.h
>> +++ b/lib/ethdev/rte_ethdev.h
>> @@ -5202,6 +5202,106 @@ int rte_eth_representor_info_get(uint16_t port_id,
>> __rte_experimental
>> int rte_eth_rx_metadata_negotiate(uint16_t port_id, uint64_t *features);
>> +/* Flag to offload IP reassembly for IPv4 packets. */
>> +#define RTE_ETH_DEV_REASSEMBLY_F_IPV4 (RTE_BIT32(0))
>> +/* Flag to offload IP reassembly for IPv6 packets. */
>> +#define RTE_ETH_DEV_REASSEMBLY_F_IPV6 (RTE_BIT32(1))
>
> Doxygen style comments should be above: /**
ack. Let me fix that in next-net.
^ permalink raw reply [flat|nested] 184+ messages in thread
* Re: [PATCH v6 1/3] ethdev: introduce IP reassembly offload
2022-02-10 10:20 ` Ferruh Yigit
@ 2022-02-10 10:30 ` Ferruh Yigit
0 siblings, 0 replies; 184+ messages in thread
From: Ferruh Yigit @ 2022-02-10 10:30 UTC (permalink / raw)
To: Andrew Rybchenko, Akhil Goyal, dev
Cc: anoobj, matan, konstantin.ananyev, thomas, rosen.xu,
olivier.matz, david.marchand, radu.nicolau, jerinj, stephen, mdr
On 2/10/2022 10:20 AM, Ferruh Yigit wrote:
> On 2/10/2022 10:08 AM, Andrew Rybchenko wrote:
>> On 2/9/22 01:20, Akhil Goyal wrote:
>>> IP Reassembly is a costly operation if it is done in software.
>>> The operation becomes even costlier if IP fragments are encrypted.
>>> However, if it is offloaded to HW, it can considerably save application
>>> cycles.
>>>
>>> Hence, a new offload feature is exposed in eth_dev ops for devices which
>>> can attempt IP reassembly of packets in hardware.
>>> - rte_eth_ip_reassembly_capability_get() - to get the maximum values
>>> of reassembly configuration which can be set.
>>> - rte_eth_ip_reassembly_conf_set() - to set IP reassembly configuration
>>> and to enable the feature in the PMD (to be called before
>>> rte_eth_dev_start()).
>>> - rte_eth_ip_reassembly_conf_get() - to get the current configuration
>>> set in PMD.
>>>
>>> Now when the offload is enabled using rte_eth_ip_reassembly_conf_set(),
>>> the resulting reassembled IP packet would be a typical segmented mbuf in
>>> case of success.
>>>
>>> And if reassembly of IP fragments fails or is incomplete (if
>>> fragments do not arrive before the reass_timeout, overlap, etc.), the mbuf
>>> dynamic flags can be updated by the PMD. This is updated in a subsequent
>>> patch.
>>>
>>> Signed-off-by: Akhil Goyal <gakhil@marvell.com>
>>
>> Just one nit below, sorry that I'm so late
>>
>>> diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
>>> index 147cc1ced3..0215f9d854 100644
>>> --- a/lib/ethdev/rte_ethdev.h
>>> +++ b/lib/ethdev/rte_ethdev.h
>>> @@ -5202,6 +5202,106 @@ int rte_eth_representor_info_get(uint16_t port_id,
>>> __rte_experimental
>>> int rte_eth_rx_metadata_negotiate(uint16_t port_id, uint64_t *features);
>>> +/* Flag to offload IP reassembly for IPv4 packets. */
>>> +#define RTE_ETH_DEV_REASSEMBLY_F_IPV4 (RTE_BIT32(0))
>>> +/* Flag to offload IP reassembly for IPv6 packets. */
>>> +#define RTE_ETH_DEV_REASSEMBLY_F_IPV6 (RTE_BIT32(1))
>>
>> Doxygen style comments should be above: /**
>
> ack. Let me fix that in next-net.
done, please verify in next-net
^ permalink raw reply [flat|nested] 184+ messages in thread
* [PATCH v3 0/4] app/test: add inline IPsec and reassembly cases
2022-01-20 16:48 ` [PATCH v2 0/4] app/test: add inline IPsec and reassembly cases Akhil Goyal
` (3 preceding siblings ...)
2022-01-20 16:48 ` [PATCH v2 4/4] app/test: add IP reassembly negative cases Akhil Goyal
@ 2022-02-17 17:23 ` Akhil Goyal
2022-02-17 17:23 ` [PATCH v3 1/4] app/test: add unit cases for inline IPsec offload Akhil Goyal
` (4 more replies)
4 siblings, 5 replies; 184+ messages in thread
From: Akhil Goyal @ 2022-02-17 17:23 UTC (permalink / raw)
To: dev
Cc: anoobj, konstantin.ananyev, thomas, ferruh.yigit,
andrew.rybchenko, rosen.xu, olivier.matz, david.marchand,
radu.nicolau, jerinj, stephen, mdr, Akhil Goyal
IP reassembly RX offload is introduced in [1].
This patchset tests the IP reassembly RX offload along with other
inline IPsec cases which need to be verified before testing IP
reassembly in the inline inbound path.
In this app, plain IP packets (with/without IP fragments) are sent
on one interface for outbound processing and then received back on
the same interface using loopback mode.
On reception, the packets go through inline inbound IPsec processing,
and if they are fragmented, they are reassembled before being
delivered to the driver/app.
v1 of this patchset was sent along with the ethdev changes in [2].
v2 is split so that it can be reviewed separately.
Changes in v3:
- incorporated latest ethdev changes for reassembly.
- skipped the build on Windows as it needs the rte_ipsec lib, which is
not compiled on Windows.
changes in v2:
- added IPsec burst mode case
- updated as per the latest ethdev changes in [1].
[1] http://patches.dpdk.org/project/dpdk/list/?series=21283
[2] http://patches.dpdk.org/project/dpdk/list/?series=21052
Akhil Goyal (4):
app/test: add unit cases for inline IPsec offload
app/test: add IP reassembly case with no frags
app/test: add IP reassembly cases with multiple fragments
app/test: add IP reassembly negative cases
MAINTAINERS | 2 +-
app/test/meson.build | 4 +
app/test/test_security_inline_proto.c | 1299 +++++++++++++++++
app/test/test_security_inline_proto_vectors.h | 778 ++++++++++
4 files changed, 2082 insertions(+), 1 deletion(-)
create mode 100644 app/test/test_security_inline_proto.c
create mode 100644 app/test/test_security_inline_proto_vectors.h
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* [PATCH v3 1/4] app/test: add unit cases for inline IPsec offload
2022-02-17 17:23 ` [PATCH v3 0/4] app/test: add inline IPsec and reassembly cases Akhil Goyal
@ 2022-02-17 17:23 ` Akhil Goyal
2022-02-17 17:23 ` [PATCH v3 2/4] app/test: add IP reassembly case with no frags Akhil Goyal
` (3 subsequent siblings)
4 siblings, 0 replies; 184+ messages in thread
From: Akhil Goyal @ 2022-02-17 17:23 UTC (permalink / raw)
To: dev
Cc: anoobj, konstantin.ananyev, thomas, ferruh.yigit,
andrew.rybchenko, rosen.xu, olivier.matz, david.marchand,
radu.nicolau, jerinj, stephen, mdr, Akhil Goyal,
Nithin Dabilpuram
A new test suite is added in the test app to test inline IPsec protocol
offload. In this patch, a couple of predefined plaintext and ciphertext
test vectors are used to verify the IPsec functionality without the need
for external traffic generators. The sent packet is looped back onto the
same interface, received, and matched against the expected output.
The test suite can be extended further with other functional test cases.
The testsuite can be run using:
RTE> inline_ipsec_autotest
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
MAINTAINERS | 2 +-
app/test/meson.build | 4 +
app/test/test_security_inline_proto.c | 757 ++++++++++++++++++
app/test/test_security_inline_proto_vectors.h | 185 +++++
4 files changed, 947 insertions(+), 1 deletion(-)
create mode 100644 app/test/test_security_inline_proto.c
create mode 100644 app/test/test_security_inline_proto_vectors.h
diff --git a/MAINTAINERS b/MAINTAINERS
index d5cd0a6c2f..52eee9b2b3 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -439,7 +439,7 @@ M: Akhil Goyal <gakhil@marvell.com>
T: git://dpdk.org/next/dpdk-next-crypto
F: lib/security/
F: doc/guides/prog_guide/rte_security.rst
-F: app/test/test_security.c
+F: app/test/test_security*
Compression API - EXPERIMENTAL
M: Fan Zhang <roy.fan.zhang@intel.com>
diff --git a/app/test/meson.build b/app/test/meson.build
index 5fc1dd1b7b..5804b888c9 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -355,6 +355,10 @@ if not is_windows and dpdk_conf.has('RTE_LIB_TELEMETRY')
test_sources += ['test_telemetry_json.c', 'test_telemetry_data.c']
fast_tests += [['telemetry_json_autotest', true], ['telemetry_data_autotest', true]]
endif
+if not is_windows
+ test_sources += ['test_security_inline_proto.c']
+ fast_tests += [['inline_ipsec_autotest', false]]
+endif
if dpdk_conf.has('RTE_LIB_PIPELINE')
# pipeline lib depends on port and table libs, so those must be present
# if pipeline library is.
diff --git a/app/test/test_security_inline_proto.c b/app/test/test_security_inline_proto.c
new file mode 100644
index 0000000000..e2b95de68e
--- /dev/null
+++ b/app/test/test_security_inline_proto.c
@@ -0,0 +1,757 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+
+#include <stdio.h>
+#include <inttypes.h>
+#include <signal.h>
+#include <unistd.h>
+#include <rte_cycles.h>
+#include <rte_ethdev.h>
+#include <rte_security.h>
+#include <rte_ipsec.h>
+#include <rte_byteorder.h>
+#include <rte_atomic.h>
+#include <rte_malloc.h>
+#include "test_security_inline_proto_vectors.h"
+#include "test.h"
+
+#define NB_ETHPORTS_USED (1)
+#define NB_SOCKETS (2)
+#define MEMPOOL_CACHE_SIZE 32
+#define MAX_PKT_BURST (32)
+#define RTE_TEST_RX_DESC_DEFAULT (1024)
+#define RTE_TEST_TX_DESC_DEFAULT (1024)
+#define RTE_PORT_ALL (~(uint16_t)0x0)
+
+/*
+ * RX and TX Prefetch, Host, and Write-back threshold values should be
+ * carefully set for optimal performance. Consult the network
+ * controller's datasheet and supporting DPDK documentation for guidance
+ * on how these parameters should be set.
+ */
+#define RX_PTHRESH 8 /**< Default values of RX prefetch threshold reg. */
+#define RX_HTHRESH 8 /**< Default values of RX host threshold reg. */
+#define RX_WTHRESH 0 /**< Default values of RX write-back threshold reg. */
+
+#define TX_PTHRESH 32 /**< Default values of TX prefetch threshold reg. */
+#define TX_HTHRESH 0 /**< Default values of TX host threshold reg. */
+#define TX_WTHRESH 0 /**< Default values of TX write-back threshold reg. */
+
+#define MAX_TRAFFIC_BURST 2048
+
+#define NB_MBUF 10240
+
+#define APP_REASS_TIMEOUT 10
+
+static struct rte_mempool *mbufpool[NB_SOCKETS];
+static struct rte_mempool *sess_pool[NB_SOCKETS];
+static struct rte_mempool *sess_priv_pool[NB_SOCKETS];
+/* ethernet addresses of ports */
+static struct rte_ether_addr ports_eth_addr[RTE_MAX_ETHPORTS];
+
+static struct rte_eth_conf port_conf = {
+ .rxmode = {
+ .mq_mode = RTE_ETH_MQ_RX_NONE,
+ .split_hdr_size = 0,
+ .offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM |
+ RTE_ETH_RX_OFFLOAD_SECURITY,
+ },
+ .txmode = {
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
+ .offloads = RTE_ETH_TX_OFFLOAD_SECURITY |
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE,
+ },
+ .lpbk_mode = 1, /* enable loopback */
+};
+
+static struct rte_eth_rxconf rx_conf = {
+ .rx_thresh = {
+ .pthresh = RX_PTHRESH,
+ .hthresh = RX_HTHRESH,
+ .wthresh = RX_WTHRESH,
+ },
+ .rx_free_thresh = 32,
+};
+
+static struct rte_eth_txconf tx_conf = {
+ .tx_thresh = {
+ .pthresh = TX_PTHRESH,
+ .hthresh = TX_HTHRESH,
+ .wthresh = TX_WTHRESH,
+ },
+ .tx_free_thresh = 32, /* Use PMD default values */
+ .tx_rs_thresh = 32, /* Use PMD default values */
+};
+
+enum {
+ LCORE_INVALID = 0,
+ LCORE_AVAIL,
+ LCORE_USED,
+};
+
+struct lcore_cfg {
+ uint8_t status;
+ uint8_t socketid;
+ uint16_t nb_ports;
+ uint16_t port;
+} __rte_cache_aligned;
+
+struct lcore_cfg lcore_cfg;
+
+static uint64_t link_mbps;
+
+static struct rte_flow *default_flow[RTE_MAX_ETHPORTS];
+
+/* Create Inline IPsec session */
+static int
+create_inline_ipsec_session(struct ipsec_session_data *sa,
+ uint16_t portid, struct rte_ipsec_session *ips,
+ enum rte_security_ipsec_sa_direction dir,
+ enum rte_security_ipsec_tunnel_type tun_type)
+{
+ int32_t ret = 0;
+ struct rte_security_ctx *sec_ctx;
+ uint32_t src_v4 = rte_cpu_to_be_32(RTE_IPV4(192, 168, 1, 2));
+ uint32_t dst_v4 = rte_cpu_to_be_32(RTE_IPV4(192, 168, 1, 1));
+ uint16_t src_v6[8] = {0x2607, 0xf8b0, 0x400c, 0x0c03, 0x0000, 0x0000,
+ 0x0000, 0x001a};
+ uint16_t dst_v6[8] = {0x2001, 0x0470, 0xe5bf, 0xdead, 0x4957, 0x2174,
+ 0xe82c, 0x4887};
+ struct rte_security_session_conf sess_conf = {
+ .action_type = RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL,
+ .protocol = RTE_SECURITY_PROTOCOL_IPSEC,
+ .ipsec = sa->ipsec_xform,
+ .crypto_xform = &sa->xform.aead,
+ .userdata = NULL,
+ };
+ sess_conf.ipsec.direction = dir;
+
+ const struct rte_security_capability *sec_cap;
+
+ sec_ctx = (struct rte_security_ctx *)
+ rte_eth_dev_get_sec_ctx(portid);
+
+ if (sec_ctx == NULL) {
+ printf("Ethernet device doesn't support security features.\n");
+ return TEST_SKIPPED;
+ }
+
+ sess_conf.crypto_xform->aead.key.data = sa->key.data;
+
+ /* Save SA as userdata for the security session. When
+ * the packet is received, this userdata will be
+ * retrieved using the metadata from the packet.
+ *
+ * The PMD is expected to set similar metadata for other
+ * operations, like rte_eth_event, which are tied to
+ * security session. In such cases, the userdata could
+ * be obtained to uniquely identify the security
+ * parameters denoted.
+ */
+
+ sess_conf.userdata = (void *) sa;
+ sess_conf.ipsec.tunnel.type = tun_type;
+ if (tun_type == RTE_SECURITY_IPSEC_TUNNEL_IPV4) {
+ memcpy(&sess_conf.ipsec.tunnel.ipv4.src_ip, &src_v4,
+ sizeof(src_v4));
+ memcpy(&sess_conf.ipsec.tunnel.ipv4.dst_ip, &dst_v4,
+ sizeof(dst_v4));
+ } else {
+ memcpy(&sess_conf.ipsec.tunnel.ipv6.src_addr, &src_v6,
+ sizeof(src_v6));
+ memcpy(&sess_conf.ipsec.tunnel.ipv6.dst_addr, &dst_v6,
+ sizeof(dst_v6));
+ }
+ ips->security.ses = rte_security_session_create(sec_ctx,
+ &sess_conf, sess_pool[lcore_cfg.socketid],
+ sess_priv_pool[lcore_cfg.socketid]);
+ if (ips->security.ses == NULL) {
+ printf("SEC Session init failed: err: %d\n", ret);
+ return TEST_FAILED;
+ }
+
+ sec_cap = rte_security_capabilities_get(sec_ctx);
+ if (sec_cap == NULL) {
+ printf("No capabilities registered\n");
+ return TEST_SKIPPED;
+ }
+
+ /* iterate until ESP tunnel */
+ while (sec_cap->action !=
+ RTE_SECURITY_ACTION_TYPE_NONE) {
+ if (sec_cap->action == sess_conf.action_type &&
+ sec_cap->protocol ==
+ RTE_SECURITY_PROTOCOL_IPSEC &&
+ sec_cap->ipsec.mode ==
+ sess_conf.ipsec.mode &&
+ sec_cap->ipsec.direction == dir)
+ break;
+ sec_cap++;
+ }
+
+ if (sec_cap->action == RTE_SECURITY_ACTION_TYPE_NONE) {
+ printf("No suitable security capability found\n");
+ return TEST_SKIPPED;
+ }
+
+ ips->security.ol_flags = sec_cap->ol_flags;
+ ips->security.ctx = sec_ctx;
+
+ return 0;
+}
+
+/* Check the link status of all ports in up to 3s, and print them finally */
+static void
+check_all_ports_link_status(uint16_t port_num, uint32_t port_mask)
+{
+#define CHECK_INTERVAL 100 /* 100ms */
+#define MAX_CHECK_TIME 30 /* 3s (30 * 100ms) in total */
+ uint16_t portid;
+ uint8_t count, all_ports_up, print_flag = 0;
+ struct rte_eth_link link;
+ int ret;
+ char link_status[RTE_ETH_LINK_MAX_STR_LEN];
+
+ printf("Checking link statuses...\n");
+ fflush(stdout);
+ for (count = 0; count <= MAX_CHECK_TIME; count++) {
+ all_ports_up = 1;
+ for (portid = 0; portid < port_num; portid++) {
+ if ((port_mask & (1 << portid)) == 0)
+ continue;
+ memset(&link, 0, sizeof(link));
+ ret = rte_eth_link_get_nowait(portid, &link);
+ if (ret < 0) {
+ all_ports_up = 0;
+ if (print_flag == 1)
+ printf("Port %u link get failed: %s\n",
+ portid, rte_strerror(-ret));
+ continue;
+ }
+
+ /* print link status if flag set */
+ if (print_flag == 1) {
+ if (link.link_status && link_mbps == 0)
+ link_mbps = link.link_speed;
+
+ rte_eth_link_to_str(link_status,
+ sizeof(link_status), &link);
+ printf("Port %d %s\n", portid, link_status);
+ continue;
+ }
+ /* clear all_ports_up flag if any link down */
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
+ all_ports_up = 0;
+ break;
+ }
+ }
+ /* after finally printing all link status, get out */
+ if (print_flag == 1)
+ break;
+
+ if (all_ports_up == 0) {
+ fflush(stdout);
+ rte_delay_ms(CHECK_INTERVAL);
+ }
+
+ /* set the print_flag if all ports up or timeout */
+ if (all_ports_up == 1 || count == (MAX_CHECK_TIME - 1))
+ print_flag = 1;
+ }
+}
+
+static void
+print_ethaddr(const char *name, const struct rte_ether_addr *eth_addr)
+{
+ char buf[RTE_ETHER_ADDR_FMT_SIZE];
+ rte_ether_format_addr(buf, RTE_ETHER_ADDR_FMT_SIZE, eth_addr);
+ printf("%s%s", name, buf);
+}
+
+static void
+copy_buf_to_pkt_segs(void *buf, unsigned int len,
+ struct rte_mbuf *pkt, unsigned int offset)
+{
+ struct rte_mbuf *seg;
+ void *seg_buf;
+ unsigned int copy_len;
+
+ seg = pkt;
+ while (offset >= seg->data_len) {
+ offset -= seg->data_len;
+ seg = seg->next;
+ }
+ copy_len = seg->data_len - offset;
+ seg_buf = rte_pktmbuf_mtod_offset(seg, char *, offset);
+ while (len > copy_len) {
+ rte_memcpy(seg_buf, buf, (size_t) copy_len);
+ len -= copy_len;
+ buf = ((char *) buf + copy_len);
+ seg = seg->next;
+ seg_buf = rte_pktmbuf_mtod(seg, void *);
+ }
+ rte_memcpy(seg_buf, buf, (size_t) len);
+}
+
+static inline void
+copy_buf_to_pkt(void *buf, unsigned int len,
+ struct rte_mbuf *pkt, unsigned int offset)
+{
+ if (offset + len <= pkt->data_len) {
+ rte_memcpy(rte_pktmbuf_mtod_offset(pkt, char *, offset), buf,
+ (size_t) len);
+ return;
+ }
+ copy_buf_to_pkt_segs(buf, len, pkt, offset);
+}
+
+static inline int
+init_traffic(struct rte_mempool *mp,
+ struct rte_mbuf **pkts_burst,
+ struct ipsec_test_packet *vectors[],
+ uint32_t nb_pkts)
+{
+ struct rte_mbuf *pkt;
+ uint32_t i;
+
+ for (i = 0; i < nb_pkts; i++) {
+ pkt = rte_pktmbuf_alloc(mp);
+ if (pkt == NULL)
+ return TEST_FAILED;
+
+ pkt->data_len = vectors[i]->len;
+ pkt->pkt_len = vectors[i]->len;
+ copy_buf_to_pkt(vectors[i]->data, vectors[i]->len,
+ pkt, vectors[i]->l2_offset);
+
+ pkts_burst[i] = pkt;
+ }
+ return i;
+}
+
+static int
+init_lcore(void)
+{
+ unsigned int lcore_id;
+
+ for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
+ lcore_cfg.socketid =
+ rte_lcore_to_socket_id(lcore_id);
+ if (rte_lcore_is_enabled(lcore_id) == 0) {
+ lcore_cfg.status = LCORE_INVALID;
+ continue;
+ } else {
+ lcore_cfg.status = LCORE_AVAIL;
+ break;
+ }
+ }
+ return 0;
+}
+
+static int
+init_mempools(unsigned int nb_mbuf)
+{
+ struct rte_security_ctx *sec_ctx;
+ int socketid;
+ unsigned int lcore_id;
+ uint16_t nb_sess = 512;
+ uint32_t sess_sz;
+ char s[64];
+
+ for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
+ if (rte_lcore_is_enabled(lcore_id) == 0)
+ continue;
+
+ socketid = rte_lcore_to_socket_id(lcore_id);
+ if (socketid >= NB_SOCKETS)
+ printf("Socket %d of lcore %u is out of range %d\n",
+ socketid, lcore_id, NB_SOCKETS);
+
+ if (mbufpool[socketid] == NULL) {
+ snprintf(s, sizeof(s), "mbuf_pool_%d", socketid);
+ mbufpool[socketid] = rte_pktmbuf_pool_create(s, nb_mbuf,
+ MEMPOOL_CACHE_SIZE, 0,
+ RTE_MBUF_DEFAULT_BUF_SIZE, socketid);
+ if (mbufpool[socketid] == NULL)
+ printf("Cannot init mbuf pool on socket %d\n",
+ socketid);
+ printf("Allocated mbuf pool on socket %d\n", socketid);
+ }
+
+ sec_ctx = rte_eth_dev_get_sec_ctx(lcore_cfg.port);
+ if (sec_ctx == NULL)
+ continue;
+
+ sess_sz = rte_security_session_get_size(sec_ctx);
+ if (sess_pool[socketid] == NULL) {
+ snprintf(s, sizeof(s), "sess_pool_%d", socketid);
+ sess_pool[socketid] =
+ rte_mempool_create(s, nb_sess,
+ sess_sz,
+ MEMPOOL_CACHE_SIZE, 0,
+ NULL, NULL, NULL, NULL,
+ socketid, 0);
+ if (sess_pool[socketid] == NULL) {
+ printf("Cannot init sess pool on socket %d\n",
+ socketid);
+ return TEST_FAILED;
+ }
+ printf("Allocated sess pool on socket %d\n", socketid);
+ }
+ if (sess_priv_pool[socketid] == NULL) {
+ snprintf(s, sizeof(s), "sess_priv_pool_%d", socketid);
+ sess_priv_pool[socketid] =
+ rte_mempool_create(s, nb_sess,
+ sess_sz,
+ MEMPOOL_CACHE_SIZE, 0,
+ NULL, NULL, NULL, NULL,
+ socketid, 0);
+ if (sess_priv_pool[socketid] == NULL) {
+ printf("Cannot init sess_priv pool on socket %d\n",
+ socketid);
+ return TEST_FAILED;
+ }
+ printf("Allocated sess_priv pool on socket %d\n",
+ socketid);
+ }
+ }
+ return 0;
+}
+
+static void
+create_default_flow(uint16_t port_id)
+{
+ struct rte_flow_action action[2];
+ struct rte_flow_item pattern[2];
+ struct rte_flow_attr attr = {0};
+ struct rte_flow_error err;
+ struct rte_flow *flow;
+ int ret;
+
+ /* Add the default rte_flow to enable SECURITY for all ESP packets */
+
+ pattern[0].type = RTE_FLOW_ITEM_TYPE_ESP;
+ pattern[0].spec = NULL;
+ pattern[0].mask = NULL;
+ pattern[0].last = NULL;
+ pattern[1].type = RTE_FLOW_ITEM_TYPE_END;
+
+ action[0].type = RTE_FLOW_ACTION_TYPE_SECURITY;
+ action[0].conf = NULL;
+ action[1].type = RTE_FLOW_ACTION_TYPE_END;
+ action[1].conf = NULL;
+
+ attr.ingress = 1;
+
+ ret = rte_flow_validate(port_id, &attr, pattern, action, &err);
+ if (ret)
+ return;
+
+ flow = rte_flow_create(port_id, &attr, pattern, action, &err);
+ if (flow == NULL) {
+ printf("\nDefault flow rule create failed\n");
+ return;
+ }
+
+ default_flow[port_id] = flow;
+}
+
+static void
+destroy_default_flow(uint16_t port_id)
+{
+ struct rte_flow_error err;
+ int ret;
+ if (!default_flow[port_id])
+ return;
+ ret = rte_flow_destroy(port_id, default_flow[port_id], &err);
+ if (ret) {
+ printf("\nDefault flow rule destroy failed\n");
+ return;
+ }
+ default_flow[port_id] = NULL;
+}
+
+struct rte_mbuf **tx_pkts_burst;
+struct rte_mbuf **rx_pkts_burst;
+
+static int
+test_ipsec(struct reassembly_vector *vector,
+ enum rte_security_ipsec_sa_direction dir,
+ enum rte_security_ipsec_tunnel_type tun_type)
+{
+ struct rte_eth_ip_reassembly_params reass_capa = {0};
+ unsigned int i, portid, nb_rx = 0, nb_tx = 1;
+ struct rte_mbuf *pkts_burst[MAX_PKT_BURST];
+ struct rte_ipsec_session ips = {0};
+
+ portid = lcore_cfg.port;
+ rte_eth_ip_reassembly_capability_get(portid, &reass_capa);
+ if (reass_capa.max_frags < nb_tx)
+ return TEST_SKIPPED;
+
+ init_traffic(mbufpool[lcore_cfg.socketid],
+ tx_pkts_burst, vector->frags, nb_tx);
+
+ /* Create Inline IPsec session. */
+ if (create_inline_ipsec_session(vector->sa_data, portid, &ips, dir,
+ tun_type))
+ return TEST_FAILED;
+ if (dir == RTE_SECURITY_IPSEC_SA_DIR_INGRESS)
+ create_default_flow(portid);
+ else {
+ for (i = 0; i < nb_tx; i++) {
+ if (ips.security.ol_flags &
+ RTE_SECURITY_TX_OLOAD_NEED_MDATA)
+ rte_security_set_pkt_metadata(ips.security.ctx,
+ ips.security.ses, tx_pkts_burst[i], NULL);
+ tx_pkts_burst[i]->ol_flags |= RTE_MBUF_F_TX_SEC_OFFLOAD;
+ tx_pkts_burst[i]->l2_len = 14;
+ }
+ }
+
+ nb_tx = rte_eth_tx_burst(portid, 0, tx_pkts_burst, nb_tx);
+
+ rte_pause();
+
+ int j = 0;
+ do {
+ nb_rx = rte_eth_rx_burst(portid, 0, pkts_burst, MAX_PKT_BURST);
+ rte_delay_ms(100);
+ j++;
+ } while (nb_rx == 0 && j < 5);
+
+ destroy_default_flow(portid);
+
+ /* Destroy session so that other cases can create the session again */
+ rte_security_session_destroy(ips.security.ctx, ips.security.ses);
+
+ /* Compare results with known vectors. */
+ if (nb_rx == 1) {
+ if (memcmp(rte_pktmbuf_mtod(pkts_burst[0], char *),
+ vector->full_pkt->data,
+ (size_t) vector->full_pkt->len)) {
+ printf("\n====Inline IPsec case failed: Data Mismatch");
+ rte_hexdump(stdout, "received",
+ rte_pktmbuf_mtod(pkts_burst[0], char *),
+ vector->full_pkt->len);
+ rte_hexdump(stdout, "reference",
+ vector->full_pkt->data,
+ vector->full_pkt->len);
+ return TEST_FAILED;
+ }
+ return TEST_SUCCESS;
+ } else
+ return TEST_FAILED;
+}
+
+static int
+ut_setup_inline_ipsec(void)
+{
+ uint16_t portid = lcore_cfg.port;
+ int ret;
+
+ /* Start device */
+ ret = rte_eth_dev_start(portid);
+ if (ret < 0) {
+ printf("rte_eth_dev_start: err=%d, port=%d\n",
+ ret, portid);
+ return ret;
+ }
+ /* always enable promiscuous */
+ ret = rte_eth_promiscuous_enable(portid);
+ if (ret != 0) {
+ printf("rte_eth_promiscuous_enable: err=%s, port=%d\n",
+ rte_strerror(-ret), portid);
+ return ret;
+ }
+ lcore_cfg.port = portid;
+ check_all_ports_link_status(1, RTE_PORT_ALL);
+
+ return 0;
+}
+
+static void
+ut_teardown_inline_ipsec(void)
+{
+ uint16_t portid = lcore_cfg.port;
+ int socketid = lcore_cfg.socketid;
+ int ret;
+
+ /* port tear down */
+ RTE_ETH_FOREACH_DEV(portid) {
+ if (socketid != rte_eth_dev_socket_id(portid))
+ continue;
+
+ ret = rte_eth_dev_stop(portid);
+ if (ret != 0)
+ printf("rte_eth_dev_stop: err=%s, port=%u\n",
+ rte_strerror(-ret), portid);
+ }
+}
+
+static int
+testsuite_setup(void)
+{
+ uint16_t nb_rxd;
+ uint16_t nb_txd;
+ uint16_t nb_ports;
+ int socketid, ret;
+ uint16_t nb_rx_queue = 1, nb_tx_queue = 1;
+ uint16_t portid = lcore_cfg.port;
+ struct rte_eth_ip_reassembly_params reass_capa = {0};
+
+ printf("Start inline IPsec test.\n");
+
+ nb_ports = rte_eth_dev_count_avail();
+ if (nb_ports < NB_ETHPORTS_USED) {
+ printf("At least %u port(s) needed for test\n",
+ NB_ETHPORTS_USED);
+ return -1;
+ }
+
+ init_lcore();
+
+ init_mempools(NB_MBUF);
+
+ socketid = lcore_cfg.socketid;
+ if (tx_pkts_burst == NULL) {
+ tx_pkts_burst = (struct rte_mbuf **)
+ rte_calloc_socket("tx_buff",
+ MAX_TRAFFIC_BURST * nb_ports,
+ sizeof(void *),
+ RTE_CACHE_LINE_SIZE, socketid);
+ if (!tx_pkts_burst)
+ return -1;
+
+ rx_pkts_burst = (struct rte_mbuf **)
+ rte_calloc_socket("rx_buff",
+ MAX_TRAFFIC_BURST * nb_ports,
+ sizeof(void *),
+ RTE_CACHE_LINE_SIZE, socketid);
+ if (!rx_pkts_burst)
+ return -1;
+ }
+
+ printf("Generate %d packets @socket %d\n",
+ MAX_TRAFFIC_BURST * nb_ports, socketid);
+
+ nb_rxd = RTE_TEST_RX_DESC_DEFAULT;
+ nb_txd = RTE_TEST_TX_DESC_DEFAULT;
+
+ /* port configure */
+ ret = rte_eth_dev_configure(portid, nb_rx_queue,
+ nb_tx_queue, &port_conf);
+ if (ret < 0) {
+ printf("Cannot configure device: err=%d, port=%d\n",
+ ret, portid);
+ return ret;
+ }
+ ret = rte_eth_macaddr_get(portid, &ports_eth_addr[portid]);
+ if (ret < 0) {
+ printf("Cannot get mac address: err=%d, port=%d\n",
+ ret, portid);
+ return ret;
+ }
+ printf("Port %u ", portid);
+ print_ethaddr("Address:", &ports_eth_addr[portid]);
+ printf("\n");
+
+ /* tx queue setup */
+ ret = rte_eth_tx_queue_setup(portid, 0, nb_txd,
+ socketid, &tx_conf);
+ if (ret < 0) {
+ printf("rte_eth_tx_queue_setup: err=%d, port=%d\n",
+ ret, portid);
+ return ret;
+ }
+ /* rx queue setup */
+ ret = rte_eth_rx_queue_setup(portid, 0, nb_rxd,
+ socketid, &rx_conf,
+ mbufpool[socketid]);
+ if (ret < 0) {
+ printf("rte_eth_rx_queue_setup: err=%d, port=%d\n",
+ ret, portid);
+ return ret;
+ }
+
+ rte_eth_ip_reassembly_capability_get(portid, &reass_capa);
+
+ if (reass_capa.timeout_ms > APP_REASS_TIMEOUT) {
+ reass_capa.timeout_ms = APP_REASS_TIMEOUT;
+ rte_eth_ip_reassembly_conf_set(portid, &reass_capa);
+ }
+
+ return 0;
+}
+
+static void
+testsuite_teardown(void)
+{
+ int ret;
+ uint16_t portid = lcore_cfg.port;
+ uint16_t socketid = lcore_cfg.socketid;
+
+ /* port tear down */
+ RTE_ETH_FOREACH_DEV(portid) {
+ if (socketid != rte_eth_dev_socket_id(portid))
+ continue;
+
+ ret = rte_eth_dev_reset(portid);
+ if (ret != 0)
+ printf("rte_eth_dev_reset: err=%s, port=%u\n",
+ rte_strerror(-ret), portid);
+ }
+}
+static int
+test_ipsec_ipv4_encap_nofrag(void)
+{
+ struct reassembly_vector ipv4_nofrag_case = {
+ .sa_data = &conf_aes_128_gcm,
+ .full_pkt = &pkt_ipv4_gcm128_cipher,
+ .frags[0] = &pkt_ipv4_plain,
+ .nb_frags = 1,
+ };
+ return test_ipsec(&ipv4_nofrag_case,
+ RTE_SECURITY_IPSEC_SA_DIR_EGRESS,
+ RTE_SECURITY_IPSEC_TUNNEL_IPV4);
+}
+
+static int
+test_ipsec_ipv4_decap_nofrag(void)
+{
+ struct reassembly_vector ipv4_nofrag_case = {
+ .sa_data = &conf_aes_128_gcm,
+ .full_pkt = &pkt_ipv4_plain,
+ .frags[0] = &pkt_ipv4_gcm128_cipher,
+ .nb_frags = 1,
+ };
+ return test_ipsec(&ipv4_nofrag_case,
+ RTE_SECURITY_IPSEC_SA_DIR_INGRESS,
+ RTE_SECURITY_IPSEC_TUNNEL_IPV4);
+}
+
+static struct unit_test_suite inline_ipsec_testsuite = {
+ .suite_name = "Inline IPsec Ethernet Device Unit Test Suite",
+ .setup = testsuite_setup,
+ .teardown = testsuite_teardown,
+ .unit_test_cases = {
+ TEST_CASE_ST(ut_setup_inline_ipsec,
+ ut_teardown_inline_ipsec,
+ test_ipsec_ipv4_encap_nofrag),
+ TEST_CASE_ST(ut_setup_inline_ipsec,
+ ut_teardown_inline_ipsec,
+ test_ipsec_ipv4_decap_nofrag),
+
+ TEST_CASES_END() /**< NULL terminate unit test array */
+ }
+};
+
+static int
+test_inline_ipsec(void)
+{
+ return unit_test_suite_runner(&inline_ipsec_testsuite);
+}
+
+REGISTER_TEST_COMMAND(inline_ipsec_autotest, test_inline_ipsec);
diff --git a/app/test/test_security_inline_proto_vectors.h b/app/test/test_security_inline_proto_vectors.h
new file mode 100644
index 0000000000..94d2f0145c
--- /dev/null
+++ b/app/test/test_security_inline_proto_vectors.h
@@ -0,0 +1,185 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+#ifndef _TEST_INLINE_IPSEC_REASSEMBLY_VECTORS_H_
+#define _TEST_INLINE_IPSEC_REASSEMBLY_VECTORS_H_
+
+#define MAX_FRAG_LEN 1500
+#define MAX_FRAGS 6
+#define MAX_PKT_LEN (MAX_FRAG_LEN * MAX_FRAGS)
+struct ipsec_session_data {
+ struct {
+ uint8_t data[32];
+ } key;
+ struct {
+ uint8_t data[4];
+ unsigned int len;
+ } salt;
+ struct {
+ uint8_t data[16];
+ } iv;
+ struct rte_security_ipsec_xform ipsec_xform;
+ bool aead;
+ union {
+ struct {
+ struct rte_crypto_sym_xform cipher;
+ struct rte_crypto_sym_xform auth;
+ } chain;
+ struct rte_crypto_sym_xform aead;
+ } xform;
+};
+
+struct ipsec_test_packet {
+ uint32_t len;
+ uint32_t l2_offset;
+ uint32_t l3_offset;
+ uint32_t l4_offset;
+ uint8_t data[MAX_PKT_LEN];
+};
+
+struct reassembly_vector {
+ struct ipsec_session_data *sa_data;
+ struct ipsec_test_packet *full_pkt;
+ struct ipsec_test_packet *frags[MAX_FRAGS];
+ uint16_t nb_frags;
+};
+
+struct ipsec_test_packet pkt_ipv4_plain = {
+ .len = 76,
+ .l2_offset = 0,
+ .l3_offset = 14,
+ .l4_offset = 34,
+ .data = {
+ /* ETH */
+ 0xf1, 0xf1, 0xf1, 0xf1, 0xf1, 0xf1,
+ 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0x08, 0x00,
+
+ /* IP */
+ 0x45, 0x00, 0x00, 0x3e, 0x69, 0x8f, 0x00, 0x00,
+ 0x80, 0x11, 0x4d, 0xcc, 0xc0, 0xa8, 0x01, 0x02,
+ 0xc0, 0xa8, 0x01, 0x01,
+
+ /* UDP */
+ 0x0a, 0x98, 0x00, 0x35, 0x00, 0x2a, 0x23, 0x43,
+ 0xb2, 0xd0, 0x01, 0x00, 0x00, 0x01, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00, 0x03, 0x73, 0x69, 0x70,
+ 0x09, 0x63, 0x79, 0x62, 0x65, 0x72, 0x63, 0x69,
+ 0x74, 0x79, 0x02, 0x64, 0x6b, 0x00, 0x00, 0x01,
+ 0x00, 0x01,
+ },
+};
+
+struct ipsec_test_packet pkt_ipv4_gcm128_cipher = {
+ .len = 130,
+ .l2_offset = 0,
+ .l3_offset = 14,
+ .l4_offset = 34,
+ .data = {
+ /* ETH */
+ 0xf1, 0xf1, 0xf1, 0xf1, 0xf1, 0xf1,
+ 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0x08, 0x00,
+
+ /* IP - outer header */
+ 0x45, 0x00, 0x00, 0x74, 0x00, 0x01, 0x00, 0x00,
+ 0x40, 0x32, 0xf7, 0x03, 0xc0, 0xa8, 0x01, 0x02,
+ 0xc0, 0xa8, 0x01, 0x01,
+
+ /* ESP */
+ 0x00, 0x00, 0xa5, 0xf8, 0x00, 0x00, 0x00, 0x01,
+
+ /* IV */
+ 0xfa, 0xce, 0xdb, 0xad, 0xde, 0xca, 0xf8, 0x88,
+
+ /* Data */
+ 0xde, 0xb2, 0x2c, 0xd9, 0xb0, 0x7c, 0x72, 0xc1,
+ 0x6e, 0x3a, 0x65, 0xbe, 0xeb, 0x8d, 0xf3, 0x04,
+ 0xa5, 0xa5, 0x89, 0x7d, 0x33, 0xae, 0x53, 0x0f,
+ 0x1b, 0xa7, 0x6d, 0x5d, 0x11, 0x4d, 0x2a, 0x5c,
+ 0x3d, 0xe8, 0x18, 0x27, 0xc1, 0x0e, 0x9a, 0x4f,
+ 0x51, 0x33, 0x0d, 0x0e, 0xec, 0x41, 0x66, 0x42,
+ 0xcf, 0xbb, 0x85, 0xa5, 0xb4, 0x7e, 0x48, 0xa4,
+ 0xec, 0x3b, 0x9b, 0xa9, 0x5d, 0x91, 0x8b, 0xd4,
+ 0x29, 0xc7, 0x37, 0x57, 0x9f, 0xf1, 0x9e, 0x58,
+ 0xcf, 0xfc, 0x60, 0x7a, 0x3b, 0xce, 0x89, 0x94,
+ },
+};
+
+static inline void
+test_vector_payload_populate(struct ipsec_test_packet *pkt,
+ bool first_frag)
+{
+ uint32_t i = pkt->l4_offset;
+
+ /**
+ * For non-fragmented packets and first frag, skip 8 bytes from
+ * l4_offset for UDP header.
+ */
+ if (first_frag)
+ i += 8;
+
+ for (; i < pkt->len; i++)
+ pkt->data[i] = 0x58;
+}
+
+struct ipsec_session_data conf_aes_128_gcm = {
+ .key = {
+ .data = {
+ 0xfe, 0xff, 0xe9, 0x92, 0x86, 0x65, 0x73, 0x1c,
+ 0x6d, 0x6a, 0x8f, 0x94, 0x67, 0x30, 0x83, 0x08
+ },
+ },
+
+ .salt = {
+ .data = {
+ 0xca, 0xfe, 0xba, 0xbe
+ },
+ .len = 4,
+ },
+
+ .iv = {
+ .data = {
+ 0xfa, 0xce, 0xdb, 0xad, 0xde, 0xca, 0xf8, 0x88
+ },
+ },
+
+ .ipsec_xform = {
+ .spi = 0xa5f8,
+ .salt = 0xbebafeca,
+ .options.esn = 0,
+ .options.udp_encap = 0,
+ .options.copy_dscp = 0,
+ .options.copy_flabel = 0,
+ .options.copy_df = 0,
+ .options.dec_ttl = 0,
+ .options.ecn = 0,
+ .options.stats = 0,
+ .options.tunnel_hdr_verify = 0,
+ .options.ip_csum_enable = 0,
+ .options.l4_csum_enable = 0,
+ .options.ip_reassembly_en = 1,
+ .direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS,
+ .proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
+ .mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
+ .tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4,
+ .replay_win_sz = 0,
+ },
+
+ .aead = true,
+
+ .xform = {
+ .aead = {
+ .next = NULL,
+ .type = RTE_CRYPTO_SYM_XFORM_AEAD,
+ .aead = {
+ .op = RTE_CRYPTO_AEAD_OP_ENCRYPT,
+ .algo = RTE_CRYPTO_AEAD_AES_GCM,
+ .key.length = 16,
+ .iv.length = 12,
+ .iv.offset = 0,
+ .digest_length = 16,
+ .aad_length = 12,
+ },
+ },
+ },
+};
+#endif
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* [PATCH v3 2/4] app/test: add IP reassembly case with no frags
2022-02-17 17:23 ` [PATCH v3 0/4] app/test: add inline IPsec and reassembly cases Akhil Goyal
2022-02-17 17:23 ` [PATCH v3 1/4] app/test: add unit cases for inline IPsec offload Akhil Goyal
@ 2022-02-17 17:23 ` Akhil Goyal
2022-02-17 17:23 ` [PATCH v3 3/4] app/test: add IP reassembly cases with multiple fragments Akhil Goyal
` (2 subsequent siblings)
4 siblings, 0 replies; 184+ messages in thread
From: Akhil Goyal @ 2022-02-17 17:23 UTC (permalink / raw)
To: dev
Cc: anoobj, konstantin.ananyev, thomas, ferruh.yigit,
andrew.rybchenko, rosen.xu, olivier.matz, david.marchand,
radu.nicolau, jerinj, stephen, mdr, Akhil Goyal,
Nithin Dabilpuram
The test_inline_ipsec testsuite is extended to test IP reassembly of inbound
fragmented packets. The fragmented packet is sent on an interface which
encrypts it; the packet is then looped back on the same interface, which
decrypts it and attempts IP reassembly of the decrypted fragments.
In this patch, a case is added for packets without fragmentation to
verify the complete path. Other cases are added in subsequent patches.
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
app/test/test_security_inline_proto.c | 326 ++++++++++++++++++
app/test/test_security_inline_proto_vectors.h | 1 +
2 files changed, 327 insertions(+)
diff --git a/app/test/test_security_inline_proto.c b/app/test/test_security_inline_proto.c
index e2b95de68e..3fbf8105e1 100644
--- a/app/test/test_security_inline_proto.c
+++ b/app/test/test_security_inline_proto.c
@@ -25,6 +25,8 @@
#define RTE_TEST_TX_DESC_DEFAULT (1024)
#define RTE_PORT_ALL (~(uint16_t)0x0)
+#define ENCAP_DECAP_BURST_SZ 33
+
/*
* RX and TX Prefetch, Host, and Write-back threshold values should be
* carefully set for optimal performance. Consult the network
@@ -102,6 +104,8 @@ struct lcore_cfg lcore_cfg;
static uint64_t link_mbps;
+static int ip_reassembly_dynfield_offset = -1;
+
static struct rte_flow *default_flow[RTE_MAX_ETHPORTS];
/* Create Inline IPsec session */
@@ -476,6 +480,294 @@ destroy_default_flow(uint16_t port_id)
struct rte_mbuf **tx_pkts_burst;
struct rte_mbuf **rx_pkts_burst;
+static int
+compare_pkt_data(struct rte_mbuf *m, uint8_t *ref, unsigned int tot_len)
+{
+ unsigned int len;
+ unsigned int nb_segs = m->nb_segs;
+ unsigned int matched = 0;
+ struct rte_mbuf *save = m;
+
+ while (m && nb_segs != 0) {
+ len = tot_len;
+ if (len > m->data_len)
+ len = m->data_len;
+ if (len != 0) {
+ if (memcmp(rte_pktmbuf_mtod(m, char *),
+ ref + matched, len)) {
+ printf("\n====Reassembly case failed: Data Mismatch");
+ rte_hexdump(stdout, "Reassembled",
+ rte_pktmbuf_mtod(m, char *),
+ len);
+ rte_hexdump(stdout, "reference",
+ ref + matched,
+ len);
+ return TEST_FAILED;
+ }
+ }
+ tot_len -= len;
+ matched += len;
+ m = m->next;
+ nb_segs--;
+ }
+
+ if (tot_len) {
+ printf("\n====Reassembly case failed: Data Missing %u",
+ tot_len);
+ printf("\n====nb_segs %u, tot_len %u", nb_segs, tot_len);
+ rte_pktmbuf_dump(stderr, save, -1);
+ return TEST_FAILED;
+ }
+ return TEST_SUCCESS;
+}
+
+static inline bool
+is_ip_reassembly_incomplete(struct rte_mbuf *mbuf)
+{
+ static uint64_t ip_reassembly_dynflag;
+ int ip_reassembly_dynflag_offset;
+
+ if (ip_reassembly_dynflag == 0) {
+ ip_reassembly_dynflag_offset = rte_mbuf_dynflag_lookup(
+ RTE_MBUF_DYNFLAG_IP_REASSEMBLY_INCOMPLETE_NAME, NULL);
+ if (ip_reassembly_dynflag_offset < 0)
+ return false;
+ ip_reassembly_dynflag = RTE_BIT64(ip_reassembly_dynflag_offset);
+ }
+
+ return (mbuf->ol_flags & ip_reassembly_dynflag) != 0;
+}
+
+static void
+free_mbuf(struct rte_mbuf *mbuf)
+{
+ rte_eth_ip_reassembly_dynfield_t dynfield;
+
+ if (!mbuf)
+ return;
+
+ if (!is_ip_reassembly_incomplete(mbuf)) {
+ rte_pktmbuf_free(mbuf);
+ } else {
+ if (ip_reassembly_dynfield_offset < 0)
+ return;
+
+ while (mbuf) {
+ dynfield = *RTE_MBUF_DYNFIELD(mbuf,
+ ip_reassembly_dynfield_offset,
+ rte_eth_ip_reassembly_dynfield_t *);
+ rte_pktmbuf_free(mbuf);
+ mbuf = dynfield.next_frag;
+ }
+ }
+}
+
+
+static int
+get_and_verify_incomplete_frags(struct rte_mbuf *mbuf,
+ struct reassembly_vector *vector)
+{
+ rte_eth_ip_reassembly_dynfield_t *dynfield[MAX_PKT_BURST];
+ int j = 0, ret;
+ /**
+ * IP reassembly offload is incomplete, and fragments are listed in
+ * dynfield which can be reassembled in SW.
+ */
+ printf("\nHW IP reassembly is incomplete; fragments are available"
+ "\nin dynfield. Matching with original frags.");
+
+ if (ip_reassembly_dynfield_offset < 0)
+ return -1;
+
+ printf("\ncomparing frag: %d", j);
+ ret = compare_pkt_data(mbuf, vector->frags[j]->data,
+ vector->frags[j]->len);
+ if (ret)
+ return ret;
+ j++;
+ dynfield[j] = RTE_MBUF_DYNFIELD(mbuf, ip_reassembly_dynfield_offset,
+ rte_eth_ip_reassembly_dynfield_t *);
+ printf("\ncomparing frag: %d", j);
+ ret = compare_pkt_data(dynfield[j]->next_frag, vector->frags[j]->data,
+ vector->frags[j]->len);
+ if (ret)
+ return ret;
+
+ while ((dynfield[j]->nb_frags > 1) &&
+ is_ip_reassembly_incomplete(dynfield[j]->next_frag)) {
+ j++;
+ dynfield[j] = RTE_MBUF_DYNFIELD(dynfield[j-1]->next_frag,
+ ip_reassembly_dynfield_offset,
+ rte_eth_ip_reassembly_dynfield_t *);
+ printf("\ncomparing frag: %d", j);
+ ret = compare_pkt_data(dynfield[j]->next_frag,
+ vector->frags[j]->data, vector->frags[j]->len);
+ if (ret)
+ return ret;
+ }
+ return ret;
+}
+
+static int
+test_ipsec_encap_decap(struct reassembly_vector *vector,
+ enum rte_security_ipsec_tunnel_type tun_type)
+{
+ struct rte_ipsec_session out_ips[ENCAP_DECAP_BURST_SZ] = {0};
+ struct rte_ipsec_session in_ips[ENCAP_DECAP_BURST_SZ] = {0};
+ struct rte_eth_ip_reassembly_params reass_capa = {0};
+ unsigned int nb_tx, burst_sz, nb_sent = 0;
+ unsigned int i, portid, nb_rx = 0, j;
+ struct ipsec_session_data sa_data;
+ int ret = 0;
+
+ burst_sz = vector->burst ? ENCAP_DECAP_BURST_SZ : 1;
+
+ portid = lcore_cfg.port;
+ rte_eth_ip_reassembly_capability_get(portid, &reass_capa);
+ if (reass_capa.max_frags < vector->nb_frags)
+ return TEST_SKIPPED;
+
+ nb_tx = vector->nb_frags * burst_sz;
+ memset(tx_pkts_burst, 0, sizeof(tx_pkts_burst[0]) * nb_tx);
+ memset(rx_pkts_burst, 0, sizeof(rx_pkts_burst[0]) * nb_tx);
+
+ for (i = 0; i < nb_tx; i += vector->nb_frags) {
+ ret = init_traffic(mbufpool[lcore_cfg.socketid],
+ &tx_pkts_burst[i], vector->frags,
+ vector->nb_frags);
+ if (ret != vector->nb_frags) {
+ ret = -1;
+ goto out;
+ }
+ }
+
+ for (i = 0; i < burst_sz; i++) {
+ memcpy(&sa_data, vector->sa_data, sizeof(sa_data));
+ /* Update SPI for every new SA */
+ sa_data.ipsec_xform.spi += i;
+
+ /* Create Inline IPsec outbound session. */
+ ret = create_inline_ipsec_session(&sa_data, portid, &out_ips[i],
+ RTE_SECURITY_IPSEC_SA_DIR_EGRESS,
+ tun_type);
+ if (ret)
+ goto out;
+ }
+
+ j = 0;
+ for (i = 0; i < nb_tx; i++) {
+ if (out_ips[j].security.ol_flags &
+ RTE_SECURITY_TX_OLOAD_NEED_MDATA)
+ rte_security_set_pkt_metadata(out_ips[j].security.ctx,
+ out_ips[j].security.ses, tx_pkts_burst[i], NULL);
+ tx_pkts_burst[i]->ol_flags |= RTE_MBUF_F_TX_SEC_OFFLOAD;
+ tx_pkts_burst[i]->l2_len = RTE_ETHER_HDR_LEN;
+
+ /* Move to next SA after nb_frags */
+ if ((i + 1) % vector->nb_frags == 0)
+ j++;
+ }
+
+ for (i = 0; i < burst_sz; i++) {
+ memcpy(&sa_data, vector->sa_data, sizeof(sa_data));
+ /* Update SPI for every new SA */
+ sa_data.ipsec_xform.spi += i;
+
+ /* Create Inline IPsec inbound session. */
+ ret = create_inline_ipsec_session(&sa_data, portid, &in_ips[i],
+ RTE_SECURITY_IPSEC_SA_DIR_INGRESS,
+ tun_type);
+ if (ret)
+ goto out;
+ }
+
+ /* Retrieve reassembly dynfield offset if available */
+ if (ip_reassembly_dynfield_offset < 0 && vector->nb_frags > 1)
+ ip_reassembly_dynfield_offset = rte_mbuf_dynfield_lookup(
+ RTE_MBUF_DYNFIELD_IP_REASSEMBLY_NAME, NULL);
+
+
+ create_default_flow(portid);
+
+ nb_sent = rte_eth_tx_burst(portid, 0, tx_pkts_burst, nb_tx);
+ if (nb_sent != nb_tx) {
+ ret = -1;
+ printf("\nFailed to tx %u pkts", nb_tx);
+ goto out;
+ }
+
+ rte_delay_ms(100);
+
+ /* Retry few times before giving up */
+ nb_rx = 0;
+ j = 0;
+ do {
+ nb_rx += rte_eth_rx_burst(portid, 0, &rx_pkts_burst[nb_rx],
+ nb_tx - nb_rx);
+ j++;
+ if (nb_rx >= nb_tx)
+ break;
+ rte_delay_ms(100);
+ } while (j < 5);
+
+ /* Check for minimum number of Rx packets expected */
+ if ((vector->nb_frags == 1 && nb_rx != nb_tx) ||
+ (vector->nb_frags > 1 && nb_rx < burst_sz)) {
+ printf("\nreceived fewer Rx pkts (%u)\n", nb_rx);
+ ret = TEST_FAILED;
+ goto out;
+ }
+
+ for (i = 0; i < nb_rx; i++) {
+ if (vector->nb_frags > 1 &&
+ is_ip_reassembly_incomplete(rx_pkts_burst[i])) {
+ ret = get_and_verify_incomplete_frags(rx_pkts_burst[i],
+ vector);
+ if (ret != TEST_SUCCESS)
+ break;
+ continue;
+ }
+
+ if (rx_pkts_burst[i]->ol_flags &
+ RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED ||
+ !(rx_pkts_burst[i]->ol_flags & RTE_MBUF_F_RX_SEC_OFFLOAD)) {
+ printf("\nsecurity offload failed\n");
+ ret = TEST_FAILED;
+ break;
+ }
+
+ if (vector->full_pkt->len != rx_pkts_burst[i]->pkt_len) {
+ printf("\nreassembled/decrypted packet length mismatch\n");
+ ret = TEST_FAILED;
+ break;
+ }
+ ret = compare_pkt_data(rx_pkts_burst[i],
+ vector->full_pkt->data,
+ vector->full_pkt->len);
+ if (ret != TEST_SUCCESS)
+ break;
+ }
+
+out:
+ destroy_default_flow(portid);
+
+ /* Clear session data. */
+ for (i = 0; i < burst_sz; i++) {
+ if (out_ips[i].security.ses)
+ rte_security_session_destroy(out_ips[i].security.ctx,
+ out_ips[i].security.ses);
+ if (in_ips[i].security.ses)
+ rte_security_session_destroy(in_ips[i].security.ctx,
+ in_ips[i].security.ses);
+ }
+
+ for (i = nb_sent; i < nb_tx; i++)
+ free_mbuf(tx_pkts_burst[i]);
+ for (i = 0; i < nb_rx; i++)
+ free_mbuf(rx_pkts_burst[i]);
+ return ret;
+}
+
static int
test_ipsec(struct reassembly_vector *vector,
enum rte_security_ipsec_sa_direction dir,
@@ -732,6 +1024,34 @@ test_ipsec_ipv4_decap_nofrag(void)
RTE_SECURITY_IPSEC_TUNNEL_IPV4);
}
+static int
+test_reassembly_ipv4_nofrag(void)
+{
+ struct reassembly_vector ipv4_nofrag_case = {
+ .sa_data = &conf_aes_128_gcm,
+ .full_pkt = &pkt_ipv4_plain,
+ .frags[0] = &pkt_ipv4_plain,
+ .nb_frags = 1,
+ };
+ return test_ipsec_encap_decap(&ipv4_nofrag_case,
+ RTE_SECURITY_IPSEC_TUNNEL_IPV4);
+}
+
+
+static int
+test_ipsec_ipv4_burst_encap_decap(void)
+{
+ struct reassembly_vector ipv4_nofrag_case = {
+ .sa_data = &conf_aes_128_gcm,
+ .full_pkt = &pkt_ipv4_plain,
+ .frags[0] = &pkt_ipv4_plain,
+ .nb_frags = 1,
+ .burst = true,
+ };
+ return test_ipsec_encap_decap(&ipv4_nofrag_case,
+ RTE_SECURITY_IPSEC_TUNNEL_IPV4);
+}
+
static struct unit_test_suite inline_ipsec_testsuite = {
.suite_name = "Inline IPsec Ethernet Device Unit Test Suite",
.setup = testsuite_setup,
@@ -743,6 +1063,12 @@ static struct unit_test_suite inline_ipsec_testsuite = {
TEST_CASE_ST(ut_setup_inline_ipsec,
ut_teardown_inline_ipsec,
test_ipsec_ipv4_decap_nofrag),
+ TEST_CASE_ST(ut_setup_inline_ipsec,
+ ut_teardown_inline_ipsec,
+ test_reassembly_ipv4_nofrag),
+ TEST_CASE_ST(ut_setup_inline_ipsec,
+ ut_teardown_inline_ipsec,
+ test_ipsec_ipv4_burst_encap_decap),
TEST_CASES_END() /**< NULL terminate unit test array */
}
diff --git a/app/test/test_security_inline_proto_vectors.h b/app/test/test_security_inline_proto_vectors.h
index 94d2f0145c..2ee1b3fc41 100644
--- a/app/test/test_security_inline_proto_vectors.h
+++ b/app/test/test_security_inline_proto_vectors.h
@@ -42,6 +42,7 @@ struct reassembly_vector {
struct ipsec_test_packet *full_pkt;
struct ipsec_test_packet *frags[MAX_FRAGS];
uint16_t nb_frags;
+ bool burst;
};
struct ipsec_test_packet pkt_ipv4_plain = {
--
2.25.1
* [PATCH v3 3/4] app/test: add IP reassembly cases with multiple fragments
2022-02-17 17:23 ` [PATCH v3 0/4] app/test: add inline IPsec and reassembly cases Akhil Goyal
2022-02-17 17:23 ` [PATCH v3 1/4] app/test: add unit cases for inline IPsec offload Akhil Goyal
2022-02-17 17:23 ` [PATCH v3 2/4] app/test: add IP reassembly case with no frags Akhil Goyal
@ 2022-02-17 17:23 ` Akhil Goyal
2022-02-17 17:23 ` [PATCH v3 4/4] app/test: add IP reassembly negative cases Akhil Goyal
2022-04-16 19:25 ` [PATCH v4 00/10] app/test: add inline IPsec and reassembly cases Akhil Goyal
4 siblings, 0 replies; 184+ messages in thread
From: Akhil Goyal @ 2022-02-17 17:23 UTC (permalink / raw)
To: dev
Cc: anoobj, konstantin.ananyev, thomas, ferruh.yigit,
andrew.rybchenko, rosen.xu, olivier.matz, david.marchand,
radu.nicolau, jerinj, stephen, mdr, Akhil Goyal
More cases are added to the test_inline_ipsec test suite to verify packets
having multiple IP(v4/v6) fragments. These fragments are encrypted and then
decrypted as per inline IPsec processing, and an attempt is then made to
reassemble the fragments. The reassembled packet content is matched
against the known test vectors.
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
---
app/test/test_security_inline_proto.c | 147 ++++-
app/test/test_security_inline_proto_vectors.h | 592 ++++++++++++++++++
2 files changed, 738 insertions(+), 1 deletion(-)
diff --git a/app/test/test_security_inline_proto.c b/app/test/test_security_inline_proto.c
index 3fbf8105e1..9c9e34ca0d 100644
--- a/app/test/test_security_inline_proto.c
+++ b/app/test/test_security_inline_proto.c
@@ -1037,7 +1037,6 @@ test_reassembly_ipv4_nofrag(void)
RTE_SECURITY_IPSEC_TUNNEL_IPV4);
}
-
static int
test_ipsec_ipv4_burst_encap_decap(void)
{
@@ -1052,6 +1051,134 @@ test_ipsec_ipv4_burst_encap_decap(void)
RTE_SECURITY_IPSEC_TUNNEL_IPV4);
}
+static int
+test_reassembly_ipv4_2frag(void)
+{
+ struct reassembly_vector ipv4_2frag_case = {
+ .sa_data = &conf_aes_128_gcm,
+ .full_pkt = &pkt_ipv4_udp_p1,
+ .frags[0] = &pkt_ipv4_udp_p1_f1,
+ .frags[1] = &pkt_ipv4_udp_p1_f2,
+ .nb_frags = 2,
+ };
+ test_vector_payload_populate(&pkt_ipv4_udp_p1, true);
+ test_vector_payload_populate(&pkt_ipv4_udp_p1_f1, true);
+ test_vector_payload_populate(&pkt_ipv4_udp_p1_f2, false);
+
+ return test_ipsec_encap_decap(&ipv4_2frag_case,
+ RTE_SECURITY_IPSEC_TUNNEL_IPV4);
+}
+
+static int
+test_reassembly_ipv6_2frag(void)
+{
+ struct reassembly_vector ipv6_2frag_case = {
+ .sa_data = &conf_aes_128_gcm,
+ .full_pkt = &pkt_ipv6_udp_p1,
+ .frags[0] = &pkt_ipv6_udp_p1_f1,
+ .frags[1] = &pkt_ipv6_udp_p1_f2,
+ .nb_frags = 2,
+ };
+ test_vector_payload_populate(&pkt_ipv6_udp_p1, true);
+ test_vector_payload_populate(&pkt_ipv6_udp_p1_f1, true);
+ test_vector_payload_populate(&pkt_ipv6_udp_p1_f2, false);
+
+ return test_ipsec_encap_decap(&ipv6_2frag_case,
+ RTE_SECURITY_IPSEC_TUNNEL_IPV6);
+}
+
+static int
+test_reassembly_ipv4_4frag(void)
+{
+ struct reassembly_vector ipv4_4frag_case = {
+ .sa_data = &conf_aes_128_gcm,
+ .full_pkt = &pkt_ipv4_udp_p2,
+ .frags[0] = &pkt_ipv4_udp_p2_f1,
+ .frags[1] = &pkt_ipv4_udp_p2_f2,
+ .frags[2] = &pkt_ipv4_udp_p2_f3,
+ .frags[3] = &pkt_ipv4_udp_p2_f4,
+ .nb_frags = 4,
+ };
+ test_vector_payload_populate(&pkt_ipv4_udp_p2, true);
+ test_vector_payload_populate(&pkt_ipv4_udp_p2_f1, true);
+ test_vector_payload_populate(&pkt_ipv4_udp_p2_f2, false);
+ test_vector_payload_populate(&pkt_ipv4_udp_p2_f3, false);
+ test_vector_payload_populate(&pkt_ipv4_udp_p2_f4, false);
+
+ return test_ipsec_encap_decap(&ipv4_4frag_case,
+ RTE_SECURITY_IPSEC_TUNNEL_IPV4);
+}
+
+static int
+test_reassembly_ipv6_4frag(void)
+{
+ struct reassembly_vector ipv6_4frag_case = {
+ .sa_data = &conf_aes_128_gcm,
+ .full_pkt = &pkt_ipv6_udp_p2,
+ .frags[0] = &pkt_ipv6_udp_p2_f1,
+ .frags[1] = &pkt_ipv6_udp_p2_f2,
+ .frags[2] = &pkt_ipv6_udp_p2_f3,
+ .frags[3] = &pkt_ipv6_udp_p2_f4,
+ .nb_frags = 4,
+ };
+ test_vector_payload_populate(&pkt_ipv6_udp_p2, true);
+ test_vector_payload_populate(&pkt_ipv6_udp_p2_f1, true);
+ test_vector_payload_populate(&pkt_ipv6_udp_p2_f2, false);
+ test_vector_payload_populate(&pkt_ipv6_udp_p2_f3, false);
+ test_vector_payload_populate(&pkt_ipv6_udp_p2_f4, false);
+
+ return test_ipsec_encap_decap(&ipv6_4frag_case,
+ RTE_SECURITY_IPSEC_TUNNEL_IPV6);
+}
+
+static int
+test_reassembly_ipv4_5frag(void)
+{
+ struct reassembly_vector ipv4_5frag_case = {
+ .sa_data = &conf_aes_128_gcm,
+ .full_pkt = &pkt_ipv4_udp_p3,
+ .frags[0] = &pkt_ipv4_udp_p3_f1,
+ .frags[1] = &pkt_ipv4_udp_p3_f2,
+ .frags[2] = &pkt_ipv4_udp_p3_f3,
+ .frags[3] = &pkt_ipv4_udp_p3_f4,
+ .frags[4] = &pkt_ipv4_udp_p3_f5,
+ .nb_frags = 5,
+ };
+ test_vector_payload_populate(&pkt_ipv4_udp_p3, true);
+ test_vector_payload_populate(&pkt_ipv4_udp_p3_f1, true);
+ test_vector_payload_populate(&pkt_ipv4_udp_p3_f2, false);
+ test_vector_payload_populate(&pkt_ipv4_udp_p3_f3, false);
+ test_vector_payload_populate(&pkt_ipv4_udp_p3_f4, false);
+ test_vector_payload_populate(&pkt_ipv4_udp_p3_f5, false);
+
+ return test_ipsec_encap_decap(&ipv4_5frag_case,
+ RTE_SECURITY_IPSEC_TUNNEL_IPV4);
+}
+
+static int
+test_reassembly_ipv6_5frag(void)
+{
+ struct reassembly_vector ipv6_5frag_case = {
+ .sa_data = &conf_aes_128_gcm,
+ .full_pkt = &pkt_ipv6_udp_p3,
+ .frags[0] = &pkt_ipv6_udp_p3_f1,
+ .frags[1] = &pkt_ipv6_udp_p3_f2,
+ .frags[2] = &pkt_ipv6_udp_p3_f3,
+ .frags[3] = &pkt_ipv6_udp_p3_f4,
+ .frags[4] = &pkt_ipv6_udp_p3_f5,
+ .nb_frags = 5,
+ };
+ test_vector_payload_populate(&pkt_ipv6_udp_p3, true);
+ test_vector_payload_populate(&pkt_ipv6_udp_p3_f1, true);
+ test_vector_payload_populate(&pkt_ipv6_udp_p3_f2, false);
+ test_vector_payload_populate(&pkt_ipv6_udp_p3_f3, false);
+ test_vector_payload_populate(&pkt_ipv6_udp_p3_f4, false);
+ test_vector_payload_populate(&pkt_ipv6_udp_p3_f5, false);
+
+ return test_ipsec_encap_decap(&ipv6_5frag_case,
+ RTE_SECURITY_IPSEC_TUNNEL_IPV6);
+}
+
static struct unit_test_suite inline_ipsec_testsuite = {
.suite_name = "Inline IPsec Ethernet Device Unit Test Suite",
.setup = testsuite_setup,
@@ -1069,6 +1196,24 @@ static struct unit_test_suite inline_ipsec_testsuite = {
TEST_CASE_ST(ut_setup_inline_ipsec,
ut_teardown_inline_ipsec,
test_ipsec_ipv4_burst_encap_decap),
+ TEST_CASE_ST(ut_setup_inline_ipsec,
+ ut_teardown_inline_ipsec,
+ test_reassembly_ipv4_2frag),
+ TEST_CASE_ST(ut_setup_inline_ipsec,
+ ut_teardown_inline_ipsec,
+ test_reassembly_ipv6_2frag),
+ TEST_CASE_ST(ut_setup_inline_ipsec,
+ ut_teardown_inline_ipsec,
+ test_reassembly_ipv4_4frag),
+ TEST_CASE_ST(ut_setup_inline_ipsec,
+ ut_teardown_inline_ipsec,
+ test_reassembly_ipv6_4frag),
+ TEST_CASE_ST(ut_setup_inline_ipsec,
+ ut_teardown_inline_ipsec,
+ test_reassembly_ipv4_5frag),
+ TEST_CASE_ST(ut_setup_inline_ipsec,
+ ut_teardown_inline_ipsec,
+ test_reassembly_ipv6_5frag),
TEST_CASES_END() /**< NULL terminate unit test array */
}
diff --git a/app/test/test_security_inline_proto_vectors.h b/app/test/test_security_inline_proto_vectors.h
index 2ee1b3fc41..d12bd2fcf0 100644
--- a/app/test/test_security_inline_proto_vectors.h
+++ b/app/test/test_security_inline_proto_vectors.h
@@ -4,6 +4,47 @@
#ifndef _TEST_INLINE_IPSEC_REASSEMBLY_VECTORS_H_
#define _TEST_INLINE_IPSEC_REASSEMBLY_VECTORS_H_
+/* The source file includes below test vectors */
+/* IPv6:
+ *
+ * 1) pkt_ipv6_udp_p1
+ * pkt_ipv6_udp_p1_f1
+ * pkt_ipv6_udp_p1_f2
+ *
+ * 2) pkt_ipv6_udp_p2
+ * pkt_ipv6_udp_p2_f1
+ * pkt_ipv6_udp_p2_f2
+ * pkt_ipv6_udp_p2_f3
+ * pkt_ipv6_udp_p2_f4
+ *
+ * 3) pkt_ipv6_udp_p3
+ * pkt_ipv6_udp_p3_f1
+ * pkt_ipv6_udp_p3_f2
+ * pkt_ipv6_udp_p3_f3
+ * pkt_ipv6_udp_p3_f4
+ * pkt_ipv6_udp_p3_f5
+ */
+
+/* IPv4:
+ *
+ * 1) pkt_ipv4_udp_p1
+ * pkt_ipv4_udp_p1_f1
+ * pkt_ipv4_udp_p1_f2
+ *
+ * 2) pkt_ipv4_udp_p2
+ * pkt_ipv4_udp_p2_f1
+ * pkt_ipv4_udp_p2_f2
+ * pkt_ipv4_udp_p2_f3
+ * pkt_ipv4_udp_p2_f4
+ *
+ * 3) pkt_ipv4_udp_p3
+ * pkt_ipv4_udp_p3_f1
+ * pkt_ipv4_udp_p3_f2
+ * pkt_ipv4_udp_p3_f3
+ * pkt_ipv4_udp_p3_f4
+ * pkt_ipv4_udp_p3_f5
+ */
+
#define MAX_FRAG_LEN 1500
#define MAX_FRAGS 6
#define MAX_PKT_LEN (MAX_FRAG_LEN * MAX_FRAGS)
@@ -45,6 +86,557 @@ struct reassembly_vector {
bool burst;
};
+struct ipsec_test_packet pkt_ipv6_udp_p1 = {
+ .len = 1514,
+ .l2_offset = 0,
+ .l3_offset = 14,
+ .l4_offset = 54,
+ .data = {
+ /* ETH */
+ 0xf1, 0xf1, 0xf1, 0xf1, 0xf1, 0xf1,
+ 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0x86, 0xdd,
+
+ /* IP */
+ 0x60, 0x00, 0x00, 0x00, 0x05, 0xb4, 0x2C, 0x40,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x0d, 0x00, 0x00, 0x02,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x02, 0x00, 0x00, 0x02,
+
+ /* UDP */
+ 0x08, 0x00, 0x27, 0x10, 0x05, 0xb4, 0x2b, 0xe8,
+ },
+};
+
+struct ipsec_test_packet pkt_ipv6_udp_p1_f1 = {
+ .len = 1398,
+ .l2_offset = 0,
+ .l3_offset = 14,
+ .l4_offset = 62,
+ .data = {
+ /* ETH */
+ 0xf1, 0xf1, 0xf1, 0xf1, 0xf1, 0xf1,
+ 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0x86, 0xdd,
+
+ /* IP */
+ 0x60, 0x00, 0x00, 0x00, 0x05, 0x40, 0x2c, 0x40,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x0d, 0x00, 0x00, 0x02,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x02, 0x00, 0x00, 0x02,
+ 0x11, 0x00, 0x00, 0x01, 0x5c, 0x92, 0xac, 0xf1,
+
+ /* UDP */
+ 0x08, 0x00, 0x27, 0x10, 0x05, 0xb4, 0x2b, 0xe8,
+ },
+};
+
+struct ipsec_test_packet pkt_ipv6_udp_p1_f2 = {
+ .len = 186,
+ .l2_offset = 0,
+ .l3_offset = 14,
+ .l4_offset = 62,
+ .data = {
+ /* ETH */
+ 0xf1, 0xf1, 0xf1, 0xf1, 0xf1, 0xf1,
+ 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0x86, 0xdd,
+
+ /* IP */
+ 0x60, 0x00, 0x00, 0x00, 0x00, 0x84, 0x2c, 0x40,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x0d, 0x00, 0x00, 0x02,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x02, 0x00, 0x00, 0x02,
+ 0x11, 0x00, 0x05, 0x38, 0x5c, 0x92, 0xac, 0xf1,
+ },
+};
+
+struct ipsec_test_packet pkt_ipv6_udp_p2 = {
+ .len = 4496,
+ .l2_offset = 0,
+ .l3_offset = 14,
+ .l4_offset = 54,
+ .data = {
+ /* ETH */
+ 0xf1, 0xf1, 0xf1, 0xf1, 0xf1, 0xf1,
+ 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0x86, 0xdd,
+
+ /* IP */
+ 0x60, 0x00, 0x00, 0x00, 0x11, 0x5a, 0x2c, 0x40,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x0d, 0x00, 0x00, 0x02,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x02, 0x00, 0x00, 0x02,
+
+ /* UDP */
+ 0x08, 0x00, 0x27, 0x10, 0x11, 0x5a, 0x8a, 0x11,
+ },
+};
+
+struct ipsec_test_packet pkt_ipv6_udp_p2_f1 = {
+ .len = 1398,
+ .l2_offset = 0,
+ .l3_offset = 14,
+ .l4_offset = 62,
+ .data = {
+ /* ETH */
+ 0xf1, 0xf1, 0xf1, 0xf1, 0xf1, 0xf1,
+ 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0x86, 0xdd,
+
+ /* IP */
+ 0x60, 0x00, 0x00, 0x00, 0x05, 0x40, 0x2c, 0x40,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x0d, 0x00, 0x00, 0x02,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x02, 0x00, 0x00, 0x02,
+ 0x11, 0x00, 0x00, 0x01, 0x64, 0x6c, 0x68, 0x9f,
+
+ /* UDP */
+ 0x08, 0x00, 0x27, 0x10, 0x11, 0x5a, 0x8a, 0x11,
+ },
+};
+
+struct ipsec_test_packet pkt_ipv6_udp_p2_f2 = {
+ .len = 1398,
+ .l2_offset = 0,
+ .l3_offset = 14,
+ .l4_offset = 62,
+ .data = {
+ /* ETH */
+ 0xf1, 0xf1, 0xf1, 0xf1, 0xf1, 0xf1,
+ 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0x86, 0xdd,
+
+ /* IP */
+ 0x60, 0x00, 0x00, 0x00, 0x05, 0x40, 0x2c, 0x40,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x0d, 0x00, 0x00, 0x02,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x02, 0x00, 0x00, 0x02,
+ 0x11, 0x00, 0x05, 0x39, 0x64, 0x6c, 0x68, 0x9f,
+ },
+};
+
+struct ipsec_test_packet pkt_ipv6_udp_p2_f3 = {
+ .len = 1398,
+ .l2_offset = 0,
+ .l3_offset = 14,
+ .l4_offset = 62,
+ .data = {
+ /* ETH */
+ 0xf1, 0xf1, 0xf1, 0xf1, 0xf1, 0xf1,
+ 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0x86, 0xdd,
+
+ /* IP */
+ 0x60, 0x00, 0x00, 0x00, 0x05, 0x40, 0x2c, 0x40,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x0d, 0x00, 0x00, 0x02,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x02, 0x00, 0x00, 0x02,
+ 0x11, 0x00, 0x0a, 0x71, 0x64, 0x6c, 0x68, 0x9f,
+ },
+};
+
+struct ipsec_test_packet pkt_ipv6_udp_p2_f4 = {
+ .len = 496,
+ .l2_offset = 0,
+ .l3_offset = 14,
+ .l4_offset = 62,
+ .data = {
+ /* ETH */
+ 0xf1, 0xf1, 0xf1, 0xf1, 0xf1, 0xf1,
+ 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0x86, 0xdd,
+
+ /* IP */
+ 0x60, 0x00, 0x00, 0x00, 0x01, 0xba, 0x2c, 0x40,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x0d, 0x00, 0x00, 0x02,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x02, 0x00, 0x00, 0x02,
+ 0x11, 0x00, 0x0f, 0xa8, 0x64, 0x6c, 0x68, 0x9f,
+ },
+};
+
+struct ipsec_test_packet pkt_ipv6_udp_p3 = {
+ .len = 5796,
+ .l2_offset = 0,
+ .l3_offset = 14,
+ .l4_offset = 54,
+ .data = {
+ /* ETH */
+ 0xf1, 0xf1, 0xf1, 0xf1, 0xf1, 0xf1,
+ 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0x86, 0xdd,
+
+ /* IP */
+ 0x60, 0x00, 0x00, 0x00, 0x16, 0x6e, 0x2c, 0x40,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x0d, 0x00, 0x00, 0x02,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x02, 0x00, 0x00, 0x02,
+
+ /* UDP */
+ 0x08, 0x00, 0x27, 0x10, 0x16, 0x6e, 0x2f, 0x99,
+ },
+};
+
+struct ipsec_test_packet pkt_ipv6_udp_p3_f1 = {
+ .len = 1398,
+ .l2_offset = 0,
+ .l3_offset = 14,
+ .l4_offset = 62,
+ .data = {
+ /* ETH */
+ 0xf1, 0xf1, 0xf1, 0xf1, 0xf1, 0xf1,
+ 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0x86, 0xdd,
+
+ /* IP */
+ 0x60, 0x00, 0x00, 0x00, 0x05, 0x40, 0x2c, 0x40,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x0d, 0x00, 0x00, 0x02,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x02, 0x00, 0x00, 0x02,
+ 0x11, 0x00, 0x00, 0x01, 0x65, 0xcf, 0x5a, 0xae,
+
+ /* UDP */
+ 0x08, 0x00, 0x27, 0x10, 0x16, 0x6e, 0x2f, 0x99,
+ },
+};
+
+struct ipsec_test_packet pkt_ipv6_udp_p3_f2 = {
+ .len = 1398,
+ .l2_offset = 0,
+ .l3_offset = 14,
+ .l4_offset = 62,
+ .data = {
+ /* ETH */
+ 0xf1, 0xf1, 0xf1, 0xf1, 0xf1, 0xf1,
+ 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0x86, 0xdd,
+
+ /* IP */
+ 0x60, 0x00, 0x00, 0x00, 0x05, 0x40, 0x2c, 0x40,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x0d, 0x00, 0x00, 0x02,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x02, 0x00, 0x00, 0x02,
+ 0x11, 0x00, 0x05, 0x39, 0x65, 0xcf, 0x5a, 0xae,
+ },
+};
+
+struct ipsec_test_packet pkt_ipv6_udp_p3_f3 = {
+ .len = 1398,
+ .l2_offset = 0,
+ .l3_offset = 14,
+ .l4_offset = 62,
+ .data = {
+ /* ETH */
+ 0xf1, 0xf1, 0xf1, 0xf1, 0xf1, 0xf1,
+ 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0x86, 0xdd,
+
+ /* IP */
+ 0x60, 0x00, 0x00, 0x00, 0x05, 0x40, 0x2c, 0x40,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x0d, 0x00, 0x00, 0x02,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x02, 0x00, 0x00, 0x02,
+ 0x11, 0x00, 0x0a, 0x71, 0x65, 0xcf, 0x5a, 0xae,
+ },
+};
+
+struct ipsec_test_packet pkt_ipv6_udp_p3_f4 = {
+ .len = 1398,
+ .l2_offset = 0,
+ .l3_offset = 14,
+ .l4_offset = 62,
+ .data = {
+ /* ETH */
+ 0xf1, 0xf1, 0xf1, 0xf1, 0xf1, 0xf1,
+ 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0x86, 0xdd,
+
+ /* IP */
+ 0x60, 0x00, 0x00, 0x00, 0x05, 0x40, 0x2c, 0x40,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x0d, 0x00, 0x00, 0x02,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x02, 0x00, 0x00, 0x02,
+ 0x11, 0x00, 0x0f, 0xa9, 0x65, 0xcf, 0x5a, 0xae,
+ },
+};
+
+struct ipsec_test_packet pkt_ipv6_udp_p3_f5 = {
+ .len = 460,
+ .l2_offset = 0,
+ .l3_offset = 14,
+ .l4_offset = 62,
+ .data = {
+ /* ETH */
+ 0xf1, 0xf1, 0xf1, 0xf1, 0xf1, 0xf1,
+ 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0x86, 0xdd,
+
+ /* IP */
+ 0x60, 0x00, 0x00, 0x00, 0x01, 0x96, 0x2c, 0x40,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x0d, 0x00, 0x00, 0x02,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x02, 0x00, 0x00, 0x02,
+ 0x11, 0x00, 0x14, 0xe0, 0x65, 0xcf, 0x5a, 0xae,
+ },
+};
+
+struct ipsec_test_packet pkt_ipv4_udp_p1 = {
+ .len = 1514,
+ .l2_offset = 0,
+ .l3_offset = 14,
+ .l4_offset = 34,
+ .data = {
+ /* ETH */
+ 0xf1, 0xf1, 0xf1, 0xf1, 0xf1, 0xf1,
+ 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0x08, 0x00,
+
+ /* IP */
+ 0x45, 0x00, 0x05, 0xdc, 0x00, 0x01, 0x00, 0x00,
+ 0x40, 0x11, 0x66, 0x0d, 0x0d, 0x00, 0x00, 0x02,
+ 0x02, 0x00, 0x00, 0x02,
+
+ /* UDP */
+ 0x08, 0x00, 0x27, 0x10, 0x05, 0xc8, 0xb8, 0x4c,
+ },
+};
+
+struct ipsec_test_packet pkt_ipv4_udp_p1_f1 = {
+ .len = 1434,
+ .l2_offset = 0,
+ .l3_offset = 14,
+ .l4_offset = 34,
+ .data = {
+ /* ETH */
+ 0xf1, 0xf1, 0xf1, 0xf1, 0xf1, 0xf1,
+ 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0x08, 0x00,
+
+ /* IP */
+ 0x45, 0x00, 0x05, 0x8c, 0x00, 0x01, 0x20, 0x00,
+ 0x40, 0x11, 0x46, 0x5d, 0x0d, 0x00, 0x00, 0x02,
+ 0x02, 0x00, 0x00, 0x02,
+
+ /* UDP */
+ 0x08, 0x00, 0x27, 0x10, 0x05, 0xc8, 0xb8, 0x4c,
+ },
+};
+
+struct ipsec_test_packet pkt_ipv4_udp_p1_f2 = {
+ .len = 114,
+ .l2_offset = 0,
+ .l3_offset = 14,
+ .l4_offset = 34,
+ .data = {
+ /* ETH */
+ 0xf1, 0xf1, 0xf1, 0xf1, 0xf1, 0xf1,
+ 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0x08, 0x00,
+
+ /* IP */
+ 0x45, 0x00, 0x00, 0x64, 0x00, 0x01, 0x00, 0xaf,
+ 0x40, 0x11, 0x6a, 0xd6, 0x0d, 0x00, 0x00, 0x02,
+ 0x02, 0x00, 0x00, 0x02,
+ },
+};
+
+struct ipsec_test_packet pkt_ipv4_udp_p2 = {
+ .len = 4496,
+ .l2_offset = 0,
+ .l3_offset = 14,
+ .l4_offset = 34,
+ .data = {
+ /* ETH */
+ 0xf1, 0xf1, 0xf1, 0xf1, 0xf1, 0xf1,
+ 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0x08, 0x00,
+
+ /* IP */
+ 0x45, 0x00, 0x11, 0x82, 0x00, 0x02, 0x00, 0x00,
+ 0x40, 0x11, 0x5a, 0x66, 0x0d, 0x00, 0x00, 0x02,
+ 0x02, 0x00, 0x00, 0x02,
+
+ /* UDP */
+ 0x08, 0x00, 0x27, 0x10, 0x11, 0x6e, 0x16, 0x76,
+ },
+};
+
+struct ipsec_test_packet pkt_ipv4_udp_p2_f1 = {
+ .len = 1434,
+ .l2_offset = 0,
+ .l3_offset = 14,
+ .l4_offset = 34,
+ .data = {
+ /* ETH */
+ 0xf1, 0xf1, 0xf1, 0xf1, 0xf1, 0xf1,
+ 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0x08, 0x00,
+
+ /* IP */
+ 0x45, 0x00, 0x05, 0x8c, 0x00, 0x02, 0x20, 0x00,
+ 0x40, 0x11, 0x46, 0x5c, 0x0d, 0x00, 0x00, 0x02,
+ 0x02, 0x00, 0x00, 0x02,
+
+ /* UDP */
+ 0x08, 0x00, 0x27, 0x10, 0x11, 0x6e, 0x16, 0x76,
+ },
+};
+
+struct ipsec_test_packet pkt_ipv4_udp_p2_f2 = {
+ .len = 1434,
+ .l2_offset = 0,
+ .l3_offset = 14,
+ .l4_offset = 34,
+ .data = {
+ /* ETH */
+ 0xf1, 0xf1, 0xf1, 0xf1, 0xf1, 0xf1,
+ 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0x08, 0x00,
+
+ /* IP */
+ 0x45, 0x00, 0x05, 0x8c, 0x00, 0x02, 0x20, 0xaf,
+ 0x40, 0x11, 0x45, 0xad, 0x0d, 0x00, 0x00, 0x02,
+ 0x02, 0x00, 0x00, 0x02,
+ },
+};
+
+struct ipsec_test_packet pkt_ipv4_udp_p2_f3 = {
+ .len = 1434,
+ .l2_offset = 0,
+ .l3_offset = 14,
+ .l4_offset = 34,
+ .data = {
+ /* ETH */
+ 0xf1, 0xf1, 0xf1, 0xf1, 0xf1, 0xf1,
+ 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0x08, 0x00,
+
+ /* IP */
+ 0x45, 0x00, 0x05, 0x8c, 0x00, 0x02, 0x21, 0x5e,
+ 0x40, 0x11, 0x44, 0xfe, 0x0d, 0x00, 0x00, 0x02,
+ 0x02, 0x00, 0x00, 0x02,
+ },
+};
+
+struct ipsec_test_packet pkt_ipv4_udp_p2_f4 = {
+ .len = 296,
+ .l2_offset = 0,
+ .l3_offset = 14,
+ .l4_offset = 34,
+ .data = {
+ /* ETH */
+ 0xf1, 0xf1, 0xf1, 0xf1, 0xf1, 0xf1,
+ 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0x08, 0x00,
+
+ /* IP */
+ 0x45, 0x00, 0x01, 0x1a, 0x00, 0x02, 0x02, 0x0d,
+ 0x40, 0x11, 0x68, 0xc1, 0x0d, 0x00, 0x00, 0x02,
+ 0x02, 0x00, 0x00, 0x02,
+ },
+};
+
+struct ipsec_test_packet pkt_ipv4_udp_p3 = {
+ .len = 5796,
+ .l2_offset = 0,
+ .l3_offset = 14,
+ .l4_offset = 34,
+ .data = {
+ /* ETH */
+ 0xf1, 0xf1, 0xf1, 0xf1, 0xf1, 0xf1,
+ 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0x08, 0x00,
+
+ /* IP */
+ 0x45, 0x00, 0x16, 0x96, 0x00, 0x03, 0x00, 0x00,
+ 0x40, 0x11, 0x55, 0x51, 0x0d, 0x00, 0x00, 0x02,
+ 0x02, 0x00, 0x00, 0x02,
+
+ /* UDP */
+ 0x08, 0x00, 0x27, 0x10, 0x16, 0x82, 0xbb, 0xfd,
+ },
+};
+
+struct ipsec_test_packet pkt_ipv4_udp_p3_f1 = {
+ .len = 1434,
+ .l2_offset = 0,
+ .l3_offset = 14,
+ .l4_offset = 34,
+ .data = {
+ /* ETH */
+ 0xf1, 0xf1, 0xf1, 0xf1, 0xf1, 0xf1,
+ 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0x08, 0x00,
+
+ /* IP */
+ 0x45, 0x00, 0x05, 0x8c, 0x00, 0x03, 0x20, 0x00,
+ 0x40, 0x11, 0x46, 0x5b, 0x0d, 0x00, 0x00, 0x02,
+ 0x02, 0x00, 0x00, 0x02,
+
+ /* UDP */
+ 0x08, 0x00, 0x27, 0x10, 0x16, 0x82, 0xbb, 0xfd,
+ },
+};
+
+struct ipsec_test_packet pkt_ipv4_udp_p3_f2 = {
+ .len = 1434,
+ .l2_offset = 0,
+ .l3_offset = 14,
+ .l4_offset = 34,
+ .data = {
+ /* ETH */
+ 0xf1, 0xf1, 0xf1, 0xf1, 0xf1, 0xf1,
+ 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0x08, 0x00,
+
+ /* IP */
+ 0x45, 0x00, 0x05, 0x8c, 0x00, 0x03, 0x20, 0xaf,
+ 0x40, 0x11, 0x45, 0xac, 0x0d, 0x00, 0x00, 0x02,
+ 0x02, 0x00, 0x00, 0x02,
+ },
+};
+
+struct ipsec_test_packet pkt_ipv4_udp_p3_f3 = {
+ .len = 1434,
+ .l2_offset = 0,
+ .l3_offset = 14,
+ .l4_offset = 34,
+ .data = {
+ /* ETH */
+ 0xf1, 0xf1, 0xf1, 0xf1, 0xf1, 0xf1,
+ 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0x08, 0x00,
+
+ /* IP */
+ 0x45, 0x00, 0x05, 0x8c, 0x00, 0x03, 0x21, 0x5e,
+ 0x40, 0x11, 0x44, 0xfd, 0x0d, 0x00, 0x00, 0x02,
+ 0x02, 0x00, 0x00, 0x02,
+ },
+};
+
+struct ipsec_test_packet pkt_ipv4_udp_p3_f4 = {
+ .len = 1434,
+ .l2_offset = 0,
+ .l3_offset = 14,
+ .l4_offset = 34,
+ .data = {
+ /* ETH */
+ 0xf1, 0xf1, 0xf1, 0xf1, 0xf1, 0xf1,
+ 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0x08, 0x00,
+
+ /* IP */
+ 0x45, 0x00, 0x05, 0x8c, 0x00, 0x03, 0x22, 0x0d,
+ 0x40, 0x11, 0x44, 0x4e, 0x0d, 0x00, 0x00, 0x02,
+ 0x02, 0x00, 0x00, 0x02,
+ },
+};
+
+struct ipsec_test_packet pkt_ipv4_udp_p3_f5 = {
+ .len = 196,
+ .l2_offset = 0,
+ .l3_offset = 14,
+ .l4_offset = 34,
+ .data = {
+ /* ETH */
+ 0xf1, 0xf1, 0xf1, 0xf1, 0xf1, 0xf1,
+ 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0x08, 0x00,
+
+ /* IP */
+ 0x45, 0x00, 0x00, 0xb6, 0x00, 0x03, 0x02, 0xbc,
+ 0x40, 0x11, 0x68, 0x75, 0x0d, 0x00, 0x00, 0x02,
+ 0x02, 0x00, 0x00, 0x02,
+ },
+};
+
struct ipsec_test_packet pkt_ipv4_plain = {
.len = 76,
.l2_offset = 0,
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* [PATCH v3 4/4] app/test: add IP reassembly negative cases
2022-02-17 17:23 ` [PATCH v3 0/4] app/test: add inline IPsec and reassembly cases Akhil Goyal
` (2 preceding siblings ...)
2022-02-17 17:23 ` [PATCH v3 3/4] app/test: add IP reassembly cases with multiple fragments Akhil Goyal
@ 2022-02-17 17:23 ` Akhil Goyal
2022-04-16 19:25 ` [PATCH v4 00/10] app/test: add inline IPsec and reassembly cases Akhil Goyal
4 siblings, 0 replies; 184+ messages in thread
From: Akhil Goyal @ 2022-02-17 17:23 UTC (permalink / raw)
To: dev
Cc: anoobj, konstantin.ananyev, thomas, ferruh.yigit,
andrew.rybchenko, rosen.xu, olivier.matz, david.marchand,
radu.nicolau, jerinj, stephen, mdr, Akhil Goyal
The test_inline_ipsec test suite is extended with cases where IP reassembly
is incomplete and the application will need to reassemble the packets
later in software.
The failure cases added are:
- not all fragments are received.
- the same fragment is received more than once.
- fragments are received out of order.
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
---
app/test/test_security_inline_proto.c | 71 +++++++++++++++++++++++++++
1 file changed, 71 insertions(+)
diff --git a/app/test/test_security_inline_proto.c b/app/test/test_security_inline_proto.c
index 9c9e34ca0d..cae8b083bc 100644
--- a/app/test/test_security_inline_proto.c
+++ b/app/test/test_security_inline_proto.c
@@ -1179,6 +1179,68 @@ test_reassembly_ipv6_5frag(void)
RTE_SECURITY_IPSEC_TUNNEL_IPV6);
}
+static int
+test_reassembly_incomplete(void)
+{
+ /* Negative test case, not sending all fragments. */
+ struct reassembly_vector ipv4_incomplete_case = {
+ .sa_data = &conf_aes_128_gcm,
+ .full_pkt = &pkt_ipv4_udp_p2,
+ .frags[0] = &pkt_ipv4_udp_p2_f1,
+ .frags[1] = &pkt_ipv4_udp_p2_f2,
+ .nb_frags = 2,
+ };
+ test_vector_payload_populate(&pkt_ipv4_udp_p2, true);
+ test_vector_payload_populate(&pkt_ipv4_udp_p2_f1, true);
+ test_vector_payload_populate(&pkt_ipv4_udp_p2_f2, false);
+
+ return test_ipsec_encap_decap(&ipv4_incomplete_case,
+ RTE_SECURITY_IPSEC_TUNNEL_IPV4);
+}
+
+static int
+test_reassembly_overlap(void)
+{
+ /* Negative test case, sending 1 fragment twice. */
+ struct reassembly_vector ipv4_overlap_case = {
+ .sa_data = &conf_aes_128_gcm,
+ .full_pkt = &pkt_ipv4_udp_p1,
+ .frags[0] = &pkt_ipv4_udp_p1_f1,
+ .frags[1] = &pkt_ipv4_udp_p1_f1, /* Overlap */
+ .frags[2] = &pkt_ipv4_udp_p1_f2,
+ .nb_frags = 3,
+ };
+ test_vector_payload_populate(&pkt_ipv4_udp_p1, true);
+ test_vector_payload_populate(&pkt_ipv4_udp_p1_f1, true);
+ test_vector_payload_populate(&pkt_ipv4_udp_p1_f2, false);
+
+ return test_ipsec_encap_decap(&ipv4_overlap_case,
+ RTE_SECURITY_IPSEC_TUNNEL_IPV4);
+}
+
+static int
+test_reassembly_out_of_order(void)
+{
+ /* Negative test case, out of order fragments. */
+ struct reassembly_vector ipv4_ooo_case = {
+ .sa_data = &conf_aes_128_gcm,
+ .full_pkt = &pkt_ipv4_udp_p2,
+ .frags[0] = &pkt_ipv4_udp_p2_f1,
+ .frags[1] = &pkt_ipv4_udp_p2_f3,
+ .frags[2] = &pkt_ipv4_udp_p2_f4,
+ .frags[3] = &pkt_ipv4_udp_p2_f2,
+ .nb_frags = 4,
+ };
+ test_vector_payload_populate(&pkt_ipv4_udp_p2, true);
+ test_vector_payload_populate(&pkt_ipv4_udp_p2_f1, true);
+ test_vector_payload_populate(&pkt_ipv4_udp_p2_f2, false);
+ test_vector_payload_populate(&pkt_ipv4_udp_p2_f3, false);
+ test_vector_payload_populate(&pkt_ipv4_udp_p2_f4, false);
+
+ return test_ipsec_encap_decap(&ipv4_ooo_case,
+ RTE_SECURITY_IPSEC_TUNNEL_IPV4);
+}
+
static struct unit_test_suite inline_ipsec_testsuite = {
.suite_name = "Inline IPsec Ethernet Device Unit Test Suite",
.setup = testsuite_setup,
@@ -1214,6 +1276,15 @@ static struct unit_test_suite inline_ipsec_testsuite = {
TEST_CASE_ST(ut_setup_inline_ipsec,
ut_teardown_inline_ipsec,
test_reassembly_ipv6_5frag),
+ TEST_CASE_ST(ut_setup_inline_ipsec,
+ ut_teardown_inline_ipsec,
+ test_reassembly_incomplete),
+ TEST_CASE_ST(ut_setup_inline_ipsec,
+ ut_teardown_inline_ipsec,
+ test_reassembly_overlap),
+ TEST_CASE_ST(ut_setup_inline_ipsec,
+ ut_teardown_inline_ipsec,
+ test_reassembly_out_of_order),
TEST_CASES_END() /**< NULL terminate unit test array */
}
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* [PATCH v4 00/10] app/test: add inline IPsec and reassembly cases
2022-02-17 17:23 ` [PATCH v3 0/4] app/test: add inline IPsec and reassembly cases Akhil Goyal
` (3 preceding siblings ...)
2022-02-17 17:23 ` [PATCH v3 4/4] app/test: add IP reassembly negative cases Akhil Goyal
@ 2022-04-16 19:25 ` Akhil Goyal
2022-04-16 19:25 ` [PATCH v4 01/10] app/test: add unit cases for inline IPsec offload Akhil Goyal
` (11 more replies)
4 siblings, 12 replies; 184+ messages in thread
From: Akhil Goyal @ 2022-04-16 19:25 UTC (permalink / raw)
To: dev
Cc: thomas, david.marchand, hemant.agrawal, anoobj,
konstantin.ananyev, ciara.power, ferruh.yigit, andrew.rybchenko,
ndabilpuram, vattunuru, Akhil Goyal
IP reassembly offload was added in the last release.
This patchset adds a test app for unit testing IP
reassembly of inline inbound IPsec flows.
For testing IP reassembly, base inline IPsec cases are
also added. In v4 the app is enhanced to handle more
functional unit test cases for inline IPsec, similar to
Lookaside IPsec. The functions from Lookaside mode are
reused to verify the functional cases.
Changes in v4:
- rebased over next-crypto.
- updated the app to take advantage of the Lookaside protocol
test functions.
- added more functional cases.
- added soft and hard expiry event subtypes in ethdev
for testing SA soft and hard pkt/byte expiry events.
- squashed the reassembly cases into a single patch.
Changes in v3:
- incorporated the latest ethdev changes for reassembly.
- skipped the build on Windows as it needs the rte_ipsec lib, which
is not compiled on Windows.
Changes in v2:
- added the IPsec burst mode case.
- updated as per the latest ethdev changes.
Akhil Goyal (6):
app/test: add unit cases for inline IPsec offload
test/security: add inline inbound IPsec cases
test/security: add combined mode inline IPsec cases
test/security: add inline IPsec reassembly cases
test/security: add more inline IPsec functional cases
test/security: add ESN and anti-replay cases for inline
Vamsi Attunuru (4):
ethdev: add IPsec SA expiry event subtypes
test/security: add inline IPsec SA soft expiry cases
test/security: add inline IPsec SA hard expiry cases
test/security: add inline IPsec IPv6 flow label cases
MAINTAINERS | 2 +-
app/test/meson.build | 1 +
app/test/test_cryptodev_security_ipsec.c | 35 +-
app/test/test_cryptodev_security_ipsec.h | 12 +
app/test/test_security_inline_proto.c | 2525 +++++++++++++++++
app/test/test_security_inline_proto_vectors.h | 710 +++++
lib/ethdev/rte_ethdev.h | 9 +
7 files changed, 3292 insertions(+), 2 deletions(-)
create mode 100644 app/test/test_security_inline_proto.c
create mode 100644 app/test/test_security_inline_proto_vectors.h
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* [PATCH v4 01/10] app/test: add unit cases for inline IPsec offload
2022-04-16 19:25 ` [PATCH v4 00/10] app/test: add inline IPsec and reassembly cases Akhil Goyal
@ 2022-04-16 19:25 ` Akhil Goyal
2022-04-16 19:25 ` [PATCH v4 02/10] test/security: add inline inbound IPsec cases Akhil Goyal
` (10 subsequent siblings)
11 siblings, 0 replies; 184+ messages in thread
From: Akhil Goyal @ 2022-04-16 19:25 UTC (permalink / raw)
To: dev
Cc: thomas, david.marchand, hemant.agrawal, anoobj,
konstantin.ananyev, ciara.power, ferruh.yigit, andrew.rybchenko,
ndabilpuram, vattunuru, Akhil Goyal
A new test suite is added to the test app to exercise inline IPsec protocol
offload. In this patch, predefined vectors from the Lookaside IPsec tests
are used to verify the IPsec functionality without the need for external
traffic generators. The sent packet is looped back onto the same interface,
where it is received and matched against the expected output.
The test suite can be extended further with other functional test cases.
In this patch, encap-only cases are added.
The test suite can be run using:
RTE> inline_ipsec_autotest
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
MAINTAINERS | 2 +-
app/test/meson.build | 1 +
app/test/test_security_inline_proto.c | 881 ++++++++++++++++++
app/test/test_security_inline_proto_vectors.h | 20 +
4 files changed, 903 insertions(+), 1 deletion(-)
create mode 100644 app/test/test_security_inline_proto.c
create mode 100644 app/test/test_security_inline_proto_vectors.h
diff --git a/MAINTAINERS b/MAINTAINERS
index 15008c03bc..89affa08ff 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -440,7 +440,7 @@ M: Akhil Goyal <gakhil@marvell.com>
T: git://dpdk.org/next/dpdk-next-crypto
F: lib/security/
F: doc/guides/prog_guide/rte_security.rst
-F: app/test/test_security.c
+F: app/test/test_security*
Compression API - EXPERIMENTAL
M: Fan Zhang <roy.fan.zhang@intel.com>
diff --git a/app/test/meson.build b/app/test/meson.build
index 5fc1dd1b7b..39952c6c4f 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -125,6 +125,7 @@ test_sources = files(
'test_rwlock.c',
'test_sched.c',
'test_security.c',
+ 'test_security_inline_proto.c',
'test_service_cores.c',
'test_spinlock.c',
'test_stack.c',
diff --git a/app/test/test_security_inline_proto.c b/app/test/test_security_inline_proto.c
new file mode 100644
index 0000000000..aeb5a57aca
--- /dev/null
+++ b/app/test/test_security_inline_proto.c
@@ -0,0 +1,881 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2022 Marvell.
+ */
+
+
+#include <stdio.h>
+#include <inttypes.h>
+
+#include <rte_ethdev.h>
+#include <rte_malloc.h>
+#include <rte_security.h>
+
+#include "test.h"
+#include "test_security_inline_proto_vectors.h"
+
+#ifdef RTE_EXEC_ENV_WINDOWS
+static int
+test_inline_ipsec(void)
+{
+ printf("Inline ipsec not supported on Windows, skipping test\n");
+ return TEST_SKIPPED;
+}
+
+#else
+
+#define NB_ETHPORTS_USED 1
+#define MEMPOOL_CACHE_SIZE 32
+#define MAX_PKT_BURST 32
+#define RTE_TEST_RX_DESC_DEFAULT 1024
+#define RTE_TEST_TX_DESC_DEFAULT 1024
+#define RTE_PORT_ALL (~(uint16_t)0x0)
+
+#define RX_PTHRESH 8 /**< Default values of RX prefetch threshold reg. */
+#define RX_HTHRESH 8 /**< Default values of RX host threshold reg. */
+#define RX_WTHRESH 0 /**< Default values of RX write-back threshold reg. */
+
+#define TX_PTHRESH 32 /**< Default values of TX prefetch threshold reg. */
+#define TX_HTHRESH 0 /**< Default values of TX host threshold reg. */
+#define TX_WTHRESH 0 /**< Default values of TX write-back threshold reg. */
+
+#define MAX_TRAFFIC_BURST 2048
+#define NB_MBUF 10240
+
+extern struct ipsec_test_data pkt_aes_128_gcm;
+extern struct ipsec_test_data pkt_aes_192_gcm;
+extern struct ipsec_test_data pkt_aes_256_gcm;
+extern struct ipsec_test_data pkt_aes_128_gcm_frag;
+extern struct ipsec_test_data pkt_aes_128_cbc_null;
+extern struct ipsec_test_data pkt_null_aes_xcbc;
+extern struct ipsec_test_data pkt_aes_128_cbc_hmac_sha384;
+extern struct ipsec_test_data pkt_aes_128_cbc_hmac_sha512;
+
+static struct rte_mempool *mbufpool;
+static struct rte_mempool *sess_pool;
+static struct rte_mempool *sess_priv_pool;
+/* ethernet addresses of ports */
+static struct rte_ether_addr ports_eth_addr[RTE_MAX_ETHPORTS];
+
+static struct rte_eth_conf port_conf = {
+ .rxmode = {
+ .mq_mode = RTE_ETH_MQ_RX_NONE,
+ .split_hdr_size = 0,
+ .offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM |
+ RTE_ETH_RX_OFFLOAD_SECURITY,
+ },
+ .txmode = {
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
+ .offloads = RTE_ETH_TX_OFFLOAD_SECURITY |
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE,
+ },
+ .lpbk_mode = 1, /* enable loopback */
+};
+
+static struct rte_eth_rxconf rx_conf = {
+ .rx_thresh = {
+ .pthresh = RX_PTHRESH,
+ .hthresh = RX_HTHRESH,
+ .wthresh = RX_WTHRESH,
+ },
+ .rx_free_thresh = 32,
+};
+
+static struct rte_eth_txconf tx_conf = {
+ .tx_thresh = {
+ .pthresh = TX_PTHRESH,
+ .hthresh = TX_HTHRESH,
+ .wthresh = TX_WTHRESH,
+ },
+ .tx_free_thresh = 32, /* Use PMD default values */
+ .tx_rs_thresh = 32, /* Use PMD default values */
+};
+
+uint16_t port_id;
+
+static uint64_t link_mbps;
+
+static struct rte_flow *default_flow[RTE_MAX_ETHPORTS];
+
+/* Create Inline IPsec session */
+static int
+create_inline_ipsec_session(struct ipsec_test_data *sa, uint16_t portid,
+ struct rte_security_session **sess, struct rte_security_ctx **ctx,
+ uint32_t *ol_flags, const struct ipsec_test_flags *flags,
+ struct rte_security_session_conf *sess_conf)
+{
+ uint16_t src_v6[8] = {0x2607, 0xf8b0, 0x400c, 0x0c03, 0x0000, 0x0000,
+ 0x0000, 0x001a};
+ uint16_t dst_v6[8] = {0x2001, 0x0470, 0xe5bf, 0xdead, 0x4957, 0x2174,
+ 0xe82c, 0x4887};
+ uint32_t src_v4 = rte_cpu_to_be_32(RTE_IPV4(192, 168, 1, 2));
+ uint32_t dst_v4 = rte_cpu_to_be_32(RTE_IPV4(192, 168, 1, 1));
+ struct rte_security_capability_idx sec_cap_idx;
+ const struct rte_security_capability *sec_cap;
+ enum rte_security_ipsec_sa_direction dir;
+ struct rte_security_ctx *sec_ctx;
+ uint32_t verify;
+ int32_t ret = 0;
+
+ sess_conf->action_type = RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL;
+ sess_conf->protocol = RTE_SECURITY_PROTOCOL_IPSEC;
+ sess_conf->ipsec = sa->ipsec_xform;
+
+ dir = sa->ipsec_xform.direction;
+ verify = flags->tunnel_hdr_verify;
+
+ if ((dir == RTE_SECURITY_IPSEC_SA_DIR_INGRESS) && verify) {
+ if (verify == RTE_SECURITY_IPSEC_TUNNEL_VERIFY_SRC_DST_ADDR)
+ src_v4 += 1;
+ else if (verify == RTE_SECURITY_IPSEC_TUNNEL_VERIFY_DST_ADDR)
+ dst_v4 += 1;
+ }
+
+ if (sa->ipsec_xform.mode == RTE_SECURITY_IPSEC_SA_MODE_TUNNEL) {
+ if (sa->ipsec_xform.tunnel.type ==
+ RTE_SECURITY_IPSEC_TUNNEL_IPV4) {
+ memcpy(&sess_conf->ipsec.tunnel.ipv4.src_ip, &src_v4,
+ sizeof(src_v4));
+ memcpy(&sess_conf->ipsec.tunnel.ipv4.dst_ip, &dst_v4,
+ sizeof(dst_v4));
+
+ if (flags->df == TEST_IPSEC_SET_DF_0_INNER_1)
+ sess_conf->ipsec.tunnel.ipv4.df = 0;
+
+ if (flags->df == TEST_IPSEC_SET_DF_1_INNER_0)
+ sess_conf->ipsec.tunnel.ipv4.df = 1;
+
+ if (flags->dscp == TEST_IPSEC_SET_DSCP_0_INNER_1)
+ sess_conf->ipsec.tunnel.ipv4.dscp = 0;
+
+ if (flags->dscp == TEST_IPSEC_SET_DSCP_1_INNER_0)
+ sess_conf->ipsec.tunnel.ipv4.dscp =
+ TEST_IPSEC_DSCP_VAL;
+ } else {
+ if (flags->dscp == TEST_IPSEC_SET_DSCP_0_INNER_1)
+ sess_conf->ipsec.tunnel.ipv6.dscp = 0;
+
+ if (flags->dscp == TEST_IPSEC_SET_DSCP_1_INNER_0)
+ sess_conf->ipsec.tunnel.ipv6.dscp =
+ TEST_IPSEC_DSCP_VAL;
+
+ memcpy(&sess_conf->ipsec.tunnel.ipv6.src_addr, &src_v6,
+ sizeof(src_v6));
+ memcpy(&sess_conf->ipsec.tunnel.ipv6.dst_addr, &dst_v6,
+ sizeof(dst_v6));
+ }
+ }
+
+ /* Save SA as userdata for the security session. When
+ * the packet is received, this userdata will be
+ * retrieved using the metadata from the packet.
+ *
+ * The PMD is expected to set similar metadata for other
+ * operations, like rte_eth_event, which are tied to
+ * security session. In such cases, the userdata could
+ * be obtained to uniquely identify the security
+ * parameters denoted.
+ */
+
+ sess_conf->userdata = (void *) sa;
+
+ sec_ctx = (struct rte_security_ctx *)rte_eth_dev_get_sec_ctx(portid);
+ if (sec_ctx == NULL) {
+ printf("Ethernet device doesn't support security features.\n");
+ return TEST_SKIPPED;
+ }
+
+ sec_cap_idx.action = RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL;
+ sec_cap_idx.protocol = RTE_SECURITY_PROTOCOL_IPSEC;
+ sec_cap_idx.ipsec.proto = sess_conf->ipsec.proto;
+ sec_cap_idx.ipsec.mode = sess_conf->ipsec.mode;
+ sec_cap_idx.ipsec.direction = sess_conf->ipsec.direction;
+ sec_cap = rte_security_capability_get(sec_ctx, &sec_cap_idx);
+ if (sec_cap == NULL) {
+ printf("No capabilities registered\n");
+ return TEST_SKIPPED;
+ }
+
+ if (sa->aead || sa->aes_gmac)
+ memcpy(&sess_conf->ipsec.salt, sa->salt.data,
+ RTE_MIN(sizeof(sess_conf->ipsec.salt), sa->salt.len));
+
+ /* Copy cipher session parameters */
+ if (sa->aead) {
+ rte_memcpy(sess_conf->crypto_xform, &sa->xform.aead,
+ sizeof(struct rte_crypto_sym_xform));
+ sess_conf->crypto_xform->aead.key.data = sa->key.data;
+ /* Verify crypto capabilities */
+ if (test_ipsec_crypto_caps_aead_verify(sec_cap,
+ sess_conf->crypto_xform) != 0) {
+ RTE_LOG(INFO, USER1,
+ "Crypto capabilities not supported\n");
+ return TEST_SKIPPED;
+ }
+ } else {
+ if (dir == RTE_SECURITY_IPSEC_SA_DIR_EGRESS) {
+ rte_memcpy(&sess_conf->crypto_xform->cipher,
+ &sa->xform.chain.cipher.cipher,
+ sizeof(struct rte_crypto_cipher_xform));
+
+ rte_memcpy(&sess_conf->crypto_xform->next->auth,
+ &sa->xform.chain.auth.auth,
+ sizeof(struct rte_crypto_auth_xform));
+ sess_conf->crypto_xform->cipher.key.data =
+ sa->key.data;
+ sess_conf->crypto_xform->next->auth.key.data =
+ sa->auth_key.data;
+ /* Verify crypto capabilities */
+ if (test_ipsec_crypto_caps_cipher_verify(sec_cap,
+ sess_conf->crypto_xform) != 0) {
+ RTE_LOG(INFO, USER1,
+ "Cipher crypto capabilities not supported\n");
+ return TEST_SKIPPED;
+ }
+
+ if (test_ipsec_crypto_caps_auth_verify(sec_cap,
+ sess_conf->crypto_xform->next) != 0) {
+ RTE_LOG(INFO, USER1,
+ "Auth crypto capabilities not supported\n");
+ return TEST_SKIPPED;
+ }
+ } else {
+ rte_memcpy(&sess_conf->crypto_xform->next->cipher,
+ &sa->xform.chain.cipher.cipher,
+ sizeof(struct rte_crypto_cipher_xform));
+ rte_memcpy(&sess_conf->crypto_xform->auth,
+ &sa->xform.chain.auth.auth,
+ sizeof(struct rte_crypto_auth_xform));
+ sess_conf->crypto_xform->auth.key.data =
+ sa->auth_key.data;
+ sess_conf->crypto_xform->next->cipher.key.data =
+ sa->key.data;
+
+ /* Verify crypto capabilities */
+ if (test_ipsec_crypto_caps_cipher_verify(sec_cap,
+ sess_conf->crypto_xform->next) != 0) {
+ RTE_LOG(INFO, USER1,
+ "Cipher crypto capabilities not supported\n");
+ return TEST_SKIPPED;
+ }
+
+ if (test_ipsec_crypto_caps_auth_verify(sec_cap,
+ sess_conf->crypto_xform) != 0) {
+ RTE_LOG(INFO, USER1,
+ "Auth crypto capabilities not supported\n");
+ return TEST_SKIPPED;
+ }
+ }
+ }
+
+ if (test_ipsec_sec_caps_verify(&sess_conf->ipsec, sec_cap, false) != 0)
+ return TEST_SKIPPED;
+
+ if ((sa->ipsec_xform.direction ==
+ RTE_SECURITY_IPSEC_SA_DIR_EGRESS) &&
+ (sa->ipsec_xform.options.iv_gen_disable == 1)) {
+ /* Set env variable when IV generation is disabled */
+ char arr[128];
+ int len = 0, j = 0;
+ int iv_len = (sa->aead || sa->aes_gmac) ? 8 : 16;
+
+ for (; j < iv_len; j++)
+ len += snprintf(arr+len, sizeof(arr) - len,
+ "0x%x, ", sa->iv.data[j]);
+ setenv("ETH_SEC_IV_OVR", arr, 1);
+ }
+
+ *sess = rte_security_session_create(sec_ctx,
+ sess_conf, sess_pool, sess_priv_pool);
+ if (*sess == NULL) {
+ printf("SEC Session init failed: err: %d\n", ret);
+ return TEST_FAILED;
+ }
+
+ *ol_flags = sec_cap->ol_flags;
+ *ctx = sec_ctx;
+
+ return 0;
+}
+
+/* Check the link status of all ports in up to 3s, and print them finally */
+static void
+check_all_ports_link_status(uint16_t port_num, uint32_t port_mask)
+{
+#define CHECK_INTERVAL 100 /* 100ms */
+#define MAX_CHECK_TIME 30 /* 3s (30 * 100ms) in total */
+ uint16_t portid;
+ uint8_t count, all_ports_up, print_flag = 0;
+ struct rte_eth_link link;
+ int ret;
+ char link_status[RTE_ETH_LINK_MAX_STR_LEN];
+
+ printf("Checking link statuses...\n");
+ fflush(stdout);
+ for (count = 0; count <= MAX_CHECK_TIME; count++) {
+ all_ports_up = 1;
+ for (portid = 0; portid < port_num; portid++) {
+ if ((port_mask & (1 << portid)) == 0)
+ continue;
+ memset(&link, 0, sizeof(link));
+ ret = rte_eth_link_get_nowait(portid, &link);
+ if (ret < 0) {
+ all_ports_up = 0;
+ if (print_flag == 1)
+ printf("Port %u link get failed: %s\n",
+ portid, rte_strerror(-ret));
+ continue;
+ }
+
+ /* print link status if flag set */
+ if (print_flag == 1) {
+ if (link.link_status && link_mbps == 0)
+ link_mbps = link.link_speed;
+
+ rte_eth_link_to_str(link_status,
+ sizeof(link_status), &link);
+ printf("Port %d %s\n", portid, link_status);
+ continue;
+ }
+ /* clear all_ports_up flag if any link down */
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
+ all_ports_up = 0;
+ break;
+ }
+ }
+ /* after finally printing all link status, get out */
+ if (print_flag == 1)
+ break;
+
+ if (all_ports_up == 0) {
+ fflush(stdout);
+ rte_delay_ms(CHECK_INTERVAL);
+ }
+
+ /* set the print_flag if all ports up or timeout */
+ if (all_ports_up == 1 || count == (MAX_CHECK_TIME - 1))
+ print_flag = 1;
+ }
+}
+
+static void
+print_ethaddr(const char *name, const struct rte_ether_addr *eth_addr)
+{
+ char buf[RTE_ETHER_ADDR_FMT_SIZE];
+ rte_ether_format_addr(buf, RTE_ETHER_ADDR_FMT_SIZE, eth_addr);
+ printf("%s%s", name, buf);
+}
+
+static void
+copy_buf_to_pkt_segs(const uint8_t *buf, unsigned int len,
+ struct rte_mbuf *pkt, unsigned int offset)
+{
+ unsigned int copied = 0;
+ unsigned int copy_len;
+ struct rte_mbuf *seg;
+ void *seg_buf;
+
+ seg = pkt;
+ while (offset >= seg->data_len) {
+ offset -= seg->data_len;
+ seg = seg->next;
+ }
+ copy_len = seg->data_len - offset;
+ seg_buf = rte_pktmbuf_mtod_offset(seg, char *, offset);
+ while (len > copy_len) {
+ rte_memcpy(seg_buf, buf + copied, (size_t) copy_len);
+ len -= copy_len;
+ copied += copy_len;
+ seg = seg->next;
+ seg_buf = rte_pktmbuf_mtod(seg, void *);
+ }
+ rte_memcpy(seg_buf, buf + copied, (size_t) len);
+}
+
+static inline struct rte_mbuf *
+init_packet(struct rte_mempool *mp, const uint8_t *data, unsigned int len)
+{
+ struct rte_mbuf *pkt;
+
+ pkt = rte_pktmbuf_alloc(mp);
+ if (pkt == NULL)
+ return NULL;
+ if (((data[0] & 0xF0) >> 4) == IPVERSION) {
+ rte_memcpy(rte_pktmbuf_append(pkt, RTE_ETHER_HDR_LEN),
+ &dummy_ipv4_eth_hdr, RTE_ETHER_HDR_LEN);
+ pkt->l3_len = sizeof(struct rte_ipv4_hdr);
+ } else {
+ rte_memcpy(rte_pktmbuf_append(pkt, RTE_ETHER_HDR_LEN),
+ &dummy_ipv6_eth_hdr, RTE_ETHER_HDR_LEN);
+ pkt->l3_len = sizeof(struct rte_ipv6_hdr);
+ }
+ pkt->l2_len = RTE_ETHER_HDR_LEN;
+
+ if (pkt->buf_len > (len + RTE_ETHER_HDR_LEN))
+ rte_memcpy(rte_pktmbuf_append(pkt, len), data, len);
+ else
+ copy_buf_to_pkt_segs(data, len, pkt, RTE_ETHER_HDR_LEN);
+ return pkt;
+}
+
+static int
+init_mempools(unsigned int nb_mbuf)
+{
+ struct rte_security_ctx *sec_ctx;
+ uint16_t nb_sess = 512;
+ uint32_t sess_sz;
+ char s[64];
+
+ if (mbufpool == NULL) {
+ snprintf(s, sizeof(s), "mbuf_pool");
+ mbufpool = rte_pktmbuf_pool_create(s, nb_mbuf,
+ MEMPOOL_CACHE_SIZE, 0,
+ RTE_MBUF_DEFAULT_BUF_SIZE, SOCKET_ID_ANY);
+ if (mbufpool == NULL) {
+ printf("Cannot init mbuf pool\n");
+ return TEST_FAILED;
+ }
+ printf("Allocated mbuf pool\n");
+ }
+
+ sec_ctx = rte_eth_dev_get_sec_ctx(port_id);
+ if (sec_ctx == NULL) {
+ printf("Device does not support Security ctx\n");
+ return TEST_FAILED;
+ }
+ sess_sz = rte_security_session_get_size(sec_ctx);
+ if (sess_pool == NULL) {
+ snprintf(s, sizeof(s), "sess_pool");
+ sess_pool = rte_mempool_create(s, nb_sess, sess_sz,
+ MEMPOOL_CACHE_SIZE, 0,
+ NULL, NULL, NULL, NULL,
+ SOCKET_ID_ANY, 0);
+ if (sess_pool == NULL) {
+ printf("Cannot init sess pool\n");
+ return TEST_FAILED;
+ }
+ printf("Allocated sess pool\n");
+ }
+ if (sess_priv_pool == NULL) {
+ snprintf(s, sizeof(s), "sess_priv_pool");
+ sess_priv_pool = rte_mempool_create(s, nb_sess, sess_sz,
+ MEMPOOL_CACHE_SIZE, 0,
+ NULL, NULL, NULL, NULL,
+ SOCKET_ID_ANY, 0);
+ if (sess_priv_pool == NULL) {
+ printf("Cannot init sess_priv pool\n");
+ return TEST_FAILED;
+ }
+ printf("Allocated sess_priv pool\n");
+ }
+
+ return 0;
+}
+
+static void
+create_default_flow(uint16_t portid)
+{
+ struct rte_flow_action action[2];
+ struct rte_flow_item pattern[2];
+ struct rte_flow_attr attr = {0};
+ struct rte_flow_error err;
+ struct rte_flow *flow;
+ int ret;
+
+ /* Add the default rte_flow to enable SECURITY for all ESP packets */
+
+ pattern[0].type = RTE_FLOW_ITEM_TYPE_ESP;
+ pattern[0].spec = NULL;
+ pattern[0].mask = NULL;
+ pattern[0].last = NULL;
+ pattern[1].type = RTE_FLOW_ITEM_TYPE_END;
+
+ action[0].type = RTE_FLOW_ACTION_TYPE_SECURITY;
+ action[0].conf = NULL;
+ action[1].type = RTE_FLOW_ACTION_TYPE_END;
+ action[1].conf = NULL;
+
+ attr.ingress = 1;
+
+ ret = rte_flow_validate(portid, &attr, pattern, action, &err);
+ if (ret) {
+ printf("\nValidate flow failed, ret = %d\n", ret);
+ return;
+ }
+ flow = rte_flow_create(portid, &attr, pattern, action, &err);
+ if (flow == NULL) {
+ printf("\nDefault flow rule create failed\n");
+ return;
+ }
+
+ default_flow[portid] = flow;
+}
+
+static void
+destroy_default_flow(uint16_t portid)
+{
+ struct rte_flow_error err;
+ int ret;
+ if (!default_flow[portid])
+ return;
+ ret = rte_flow_destroy(portid, default_flow[portid], &err);
+ if (ret) {
+ printf("\nDefault flow rule destroy failed\n");
+ return;
+ }
+ default_flow[portid] = NULL;
+}
+
+struct rte_mbuf **tx_pkts_burst;
+struct rte_mbuf **rx_pkts_burst;
+
+static int
+test_ipsec_inline_proto_process(struct ipsec_test_data *td,
+ struct ipsec_test_data *res_d,
+ int nb_pkts,
+ bool silent,
+ const struct ipsec_test_flags *flags)
+{
+ struct rte_security_session_conf sess_conf = {0};
+ struct rte_crypto_sym_xform cipher = {0};
+ struct rte_crypto_sym_xform auth = {0};
+ struct rte_crypto_sym_xform aead = {0};
+ struct rte_security_session *ses;
+ struct rte_security_ctx *ctx;
+ int nb_rx = 0, nb_sent;
+ uint32_t ol_flags;
+ int i, j = 0, ret;
+
+ memset(rx_pkts_burst, 0, sizeof(rx_pkts_burst[0]) * nb_pkts);
+
+ if (td->aead) {
+ sess_conf.crypto_xform = &aead;
+ } else {
+ if (td->ipsec_xform.direction ==
+ RTE_SECURITY_IPSEC_SA_DIR_EGRESS) {
+ sess_conf.crypto_xform = &cipher;
+ sess_conf.crypto_xform->type = RTE_CRYPTO_SYM_XFORM_CIPHER;
+ sess_conf.crypto_xform->next = &auth;
+ sess_conf.crypto_xform->next->type = RTE_CRYPTO_SYM_XFORM_AUTH;
+ } else {
+ sess_conf.crypto_xform = &auth;
+ sess_conf.crypto_xform->type = RTE_CRYPTO_SYM_XFORM_AUTH;
+ sess_conf.crypto_xform->next = &cipher;
+ sess_conf.crypto_xform->next->type = RTE_CRYPTO_SYM_XFORM_CIPHER;
+ }
+ }
+
+ /* Create Inline IPsec session. */
+ ret = create_inline_ipsec_session(td, port_id, &ses, &ctx,
+ &ol_flags, flags, &sess_conf);
+ if (ret)
+ return ret;
+
+ if (td->ipsec_xform.direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS)
+ create_default_flow(port_id);
+
+ for (i = 0; i < nb_pkts; i++) {
+ tx_pkts_burst[i] = init_packet(mbufpool, td->input_text.data,
+ td->input_text.len);
+ if (tx_pkts_burst[i] == NULL) {
+ while (i--)
+ rte_pktmbuf_free(tx_pkts_burst[i]);
+ ret = TEST_FAILED;
+ goto out;
+ }
+
+ if (test_ipsec_pkt_update(rte_pktmbuf_mtod_offset(tx_pkts_burst[i],
+ uint8_t *, RTE_ETHER_HDR_LEN), flags)) {
+ while (i--)
+ rte_pktmbuf_free(tx_pkts_burst[i]);
+ ret = TEST_FAILED;
+ goto out;
+ }
+
+ if (td->ipsec_xform.direction == RTE_SECURITY_IPSEC_SA_DIR_EGRESS) {
+ if (ol_flags & RTE_SECURITY_TX_OLOAD_NEED_MDATA)
+ rte_security_set_pkt_metadata(ctx, ses,
+ tx_pkts_burst[i], NULL);
+ tx_pkts_burst[i]->ol_flags |= RTE_MBUF_F_TX_SEC_OFFLOAD;
+ }
+ }
+ /* Send packet to ethdev for inline IPsec processing. */
+ nb_sent = rte_eth_tx_burst(port_id, 0, tx_pkts_burst, nb_pkts);
+ if (nb_sent != nb_pkts) {
+ printf("\nUnable to TX %d packets", nb_pkts);
+ for ( ; nb_sent < nb_pkts; nb_sent++)
+ rte_pktmbuf_free(tx_pkts_burst[nb_sent]);
+ ret = TEST_FAILED;
+ goto out;
+ }
+
+ rte_pause();
+
+ /* Receive back packet on loopback interface. */
+ do {
+ rte_delay_ms(1);
+ nb_rx += rte_eth_rx_burst(port_id, 0, &rx_pkts_burst[nb_rx],
+ nb_sent - nb_rx);
+ if (nb_rx >= nb_sent)
+ break;
+ } while (j++ < 5);
+
+ if (nb_rx != nb_sent) {
+ printf("\nUnable to RX all %d packets", nb_sent);
+ while (nb_rx--)
+ rte_pktmbuf_free(rx_pkts_burst[nb_rx]);
+ ret = TEST_FAILED;
+ goto out;
+ }
+
+ for (i = 0; i < nb_rx; i++) {
+ rte_pktmbuf_adj(rx_pkts_burst[i], RTE_ETHER_HDR_LEN);
+
+ ret = test_ipsec_post_process(rx_pkts_burst[i], td,
+ res_d, silent, flags);
+ if (ret != TEST_SUCCESS) {
+ for ( ; i < nb_rx; i++)
+ rte_pktmbuf_free(rx_pkts_burst[i]);
+ goto out;
+ }
+
+ ret = test_ipsec_stats_verify(ctx, ses, flags,
+ td->ipsec_xform.direction);
+ if (ret != TEST_SUCCESS) {
+ for ( ; i < nb_rx; i++)
+ rte_pktmbuf_free(rx_pkts_burst[i]);
+ goto out;
+ }
+
+ rte_pktmbuf_free(rx_pkts_burst[i]);
+ rx_pkts_burst[i] = NULL;
+ }
+
+out:
+ if (td->ipsec_xform.direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS)
+ destroy_default_flow(port_id);
+
+ /* Destroy session so that other cases can create the session again */
+ rte_security_session_destroy(ctx, ses);
+ ses = NULL;
+
+ return ret;
+}
+
+static int
+ut_setup_inline_ipsec(void)
+{
+ int ret;
+
+ /* Start device */
+ ret = rte_eth_dev_start(port_id);
+ if (ret < 0) {
+ printf("rte_eth_dev_start: err=%d, port=%d\n",
+ ret, port_id);
+ return ret;
+ }
+ /* always enable promiscuous */
+ ret = rte_eth_promiscuous_enable(port_id);
+ if (ret != 0) {
+ printf("rte_eth_promiscuous_enable: err=%s, port=%d\n",
+ rte_strerror(-ret), port_id);
+ return ret;
+ }
+
+ check_all_ports_link_status(1, RTE_PORT_ALL);
+
+ return 0;
+}
+
+static void
+ut_teardown_inline_ipsec(void)
+{
+ uint16_t portid;
+ int ret;
+
+ /* port tear down */
+ RTE_ETH_FOREACH_DEV(portid) {
+ ret = rte_eth_dev_stop(portid);
+ if (ret != 0)
+ printf("rte_eth_dev_stop: err=%s, port=%u\n",
+ rte_strerror(-ret), portid);
+ }
+}
+
+static int
+inline_ipsec_testsuite_setup(void)
+{
+ uint16_t nb_rxd;
+ uint16_t nb_txd;
+ uint16_t nb_ports;
+ int ret;
+ uint16_t nb_rx_queue = 1, nb_tx_queue = 1;
+
+ printf("Start inline IPsec test.\n");
+
+ nb_ports = rte_eth_dev_count_avail();
+ if (nb_ports < NB_ETHPORTS_USED) {
+ printf("At least %u port(s) needed for test\n",
+ NB_ETHPORTS_USED);
+ return -1;
+ }
+
+ init_mempools(NB_MBUF);
+
+ if (tx_pkts_burst == NULL) {
+ tx_pkts_burst = (struct rte_mbuf **)rte_calloc("tx_buff",
+ MAX_TRAFFIC_BURST,
+ sizeof(void *),
+ RTE_CACHE_LINE_SIZE);
+ if (!tx_pkts_burst)
+ return -1;
+
+ rx_pkts_burst = (struct rte_mbuf **)rte_calloc("rx_buff",
+ MAX_TRAFFIC_BURST,
+ sizeof(void *),
+ RTE_CACHE_LINE_SIZE);
+ if (!rx_pkts_burst)
+ return -1;
+ }
+
+ printf("Generate %d packets\n", MAX_TRAFFIC_BURST);
+
+ nb_rxd = RTE_TEST_RX_DESC_DEFAULT;
+ nb_txd = RTE_TEST_TX_DESC_DEFAULT;
+
+ /* configuring port 0 for the test is enough */
+ port_id = 0;
+ /* port configure */
+ ret = rte_eth_dev_configure(port_id, nb_rx_queue,
+ nb_tx_queue, &port_conf);
+ if (ret < 0) {
+ printf("Cannot configure device: err=%d, port=%d\n",
+ ret, port_id);
+ return ret;
+ }
+ ret = rte_eth_macaddr_get(port_id, &ports_eth_addr[port_id]);
+ if (ret < 0) {
+ printf("Cannot get mac address: err=%d, port=%d\n",
+ ret, port_id);
+ return ret;
+ }
+ printf("Port %u ", port_id);
+ print_ethaddr("Address:", &ports_eth_addr[port_id]);
+ printf("\n");
+
+ /* tx queue setup */
+ ret = rte_eth_tx_queue_setup(port_id, 0, nb_txd,
+ SOCKET_ID_ANY, &tx_conf);
+ if (ret < 0) {
+ printf("rte_eth_tx_queue_setup: err=%d, port=%d\n",
+ ret, port_id);
+ return ret;
+ }
+ /* rx queue setup */
+ ret = rte_eth_rx_queue_setup(port_id, 0, nb_rxd, SOCKET_ID_ANY,
+ &rx_conf, mbufpool);
+ if (ret < 0) {
+ printf("rte_eth_rx_queue_setup: err=%d, port=%d\n",
+ ret, port_id);
+ return ret;
+ }
+ test_ipsec_alg_list_populate();
+
+ return 0;
+}
+
+static void
+inline_ipsec_testsuite_teardown(void)
+{
+ uint16_t portid;
+ int ret;
+
+ /* port tear down */
+ RTE_ETH_FOREACH_DEV(portid) {
+ ret = rte_eth_dev_reset(portid);
+ if (ret != 0)
+ printf("rte_eth_dev_reset: err=%s, port=%u\n",
+ rte_strerror(-ret), portid);
+ }
+}
+
+static int
+test_ipsec_inline_proto_known_vec(const void *test_data)
+{
+ struct ipsec_test_data td_outb;
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ memcpy(&td_outb, test_data, sizeof(td_outb));
+
+ if (td_outb.aead ||
+ td_outb.xform.chain.cipher.cipher.algo != RTE_CRYPTO_CIPHER_NULL) {
+ /* Disable IV gen to be able to test with known vectors */
+ td_outb.ipsec_xform.options.iv_gen_disable = 1;
+ }
+
+ return test_ipsec_inline_proto_process(&td_outb, NULL, 1,
+ false, &flags);
+}
+
+static struct unit_test_suite inline_ipsec_testsuite = {
+ .suite_name = "Inline IPsec Ethernet Device Unit Test Suite",
+ .setup = inline_ipsec_testsuite_setup,
+ .teardown = inline_ipsec_testsuite_teardown,
+ .unit_test_cases = {
+ TEST_CASE_NAMED_WITH_DATA(
+ "Outbound known vector (ESP tunnel mode IPv4 AES-GCM 128)",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_known_vec, &pkt_aes_128_gcm),
+ TEST_CASE_NAMED_WITH_DATA(
+ "Outbound known vector (ESP tunnel mode IPv4 AES-GCM 192)",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_known_vec, &pkt_aes_192_gcm),
+ TEST_CASE_NAMED_WITH_DATA(
+ "Outbound known vector (ESP tunnel mode IPv4 AES-GCM 256)",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_known_vec, &pkt_aes_256_gcm),
+ TEST_CASE_NAMED_WITH_DATA(
+ "Outbound known vector (ESP tunnel mode IPv4 AES-CBC 128 HMAC-SHA256 [16B ICV])",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_known_vec,
+ &pkt_aes_128_cbc_hmac_sha256),
+ TEST_CASE_NAMED_WITH_DATA(
+ "Outbound known vector (ESP tunnel mode IPv4 AES-CBC 128 HMAC-SHA384 [24B ICV])",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_known_vec,
+ &pkt_aes_128_cbc_hmac_sha384),
+ TEST_CASE_NAMED_WITH_DATA(
+ "Outbound known vector (ESP tunnel mode IPv4 AES-CBC 128 HMAC-SHA512 [32B ICV])",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_known_vec,
+ &pkt_aes_128_cbc_hmac_sha512),
+ TEST_CASE_NAMED_WITH_DATA(
+ "Outbound known vector (ESP tunnel mode IPv6 AES-GCM 128)",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_known_vec, &pkt_aes_256_gcm_v6),
+ TEST_CASE_NAMED_WITH_DATA(
+ "Outbound known vector (ESP tunnel mode IPv6 AES-CBC 128 HMAC-SHA256 [16B ICV])",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_known_vec,
+ &pkt_aes_128_cbc_hmac_sha256_v6),
+ TEST_CASE_NAMED_WITH_DATA(
+ "Outbound known vector (ESP tunnel mode IPv4 NULL AES-XCBC-MAC [12B ICV])",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_known_vec,
+ &pkt_null_aes_xcbc),
+
+ TEST_CASES_END() /**< NULL terminate unit test array */
+ },
+};
+
+
+static int
+test_inline_ipsec(void)
+{
+ return unit_test_suite_runner(&inline_ipsec_testsuite);
+}
+
+#endif /* !RTE_EXEC_ENV_WINDOWS */
+
+REGISTER_TEST_COMMAND(inline_ipsec_autotest, test_inline_ipsec);
diff --git a/app/test/test_security_inline_proto_vectors.h b/app/test/test_security_inline_proto_vectors.h
new file mode 100644
index 0000000000..d1074da36a
--- /dev/null
+++ b/app/test/test_security_inline_proto_vectors.h
@@ -0,0 +1,20 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2022 Marvell.
+ */
+#ifndef _TEST_INLINE_IPSEC_REASSEMBLY_VECTORS_H_
+#define _TEST_INLINE_IPSEC_REASSEMBLY_VECTORS_H_
+
+#include "test_cryptodev_security_ipsec.h"
+
+uint8_t dummy_ipv4_eth_hdr[] = {
+ /* ETH */
+ 0xf1, 0xf1, 0xf1, 0xf1, 0xf1, 0xf1,
+ 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0x08, 0x00,
+};
+uint8_t dummy_ipv6_eth_hdr[] = {
+ /* ETH */
+ 0xf1, 0xf1, 0xf1, 0xf1, 0xf1, 0xf1,
+ 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0x86, 0xdd,
+};
+
+#endif
--
2.25.1
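The Rx side of test_ipsec_inline_proto_process() above polls the port in a
retry loop until all sent packets come back. The same pattern, with an
explicit retry bound and the ethdev call replaced by a caller-supplied stub
so it can run without hardware, can be sketched as follows (names here are
illustrative, not DPDK API):

```c
#include <stdint.h>

/* Stand-in for rte_eth_rx_burst(): returns how many packets arrived on
 * this poll, never more than 'max'. */
typedef uint16_t (*rx_burst_fn)(void *ctx, uint16_t max);

/* Poll until nb_expected packets are received or max_retries polls have
 * elapsed, so an idle port cannot make the loop spin forever. */
static uint16_t
bounded_rx_poll(rx_burst_fn rx, void *ctx, uint16_t nb_expected,
		unsigned int max_retries)
{
	uint16_t nb_rx = 0;
	unsigned int tries = 0;

	while (nb_rx < nb_expected && tries++ < max_retries)
		nb_rx += rx(ctx, nb_expected - nb_rx);
	return nb_rx;
}

/* Test double: hands out a scripted sequence of burst sizes. */
struct stub_rx {
	const uint16_t *bursts;
	unsigned int idx, n;
};

static uint16_t
stub_rx_burst(void *ctx, uint16_t max)
{
	struct stub_rx *s = ctx;
	uint16_t got = (s->idx < s->n) ? s->bursts[s->idx++] : 0;

	return got > max ? max : got;
}
```

The key point is that both exit conditions (all packets received, retry
budget exhausted) are checked in one place, unlike a loop whose condition
keeps it alive while nothing has been received yet.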
^ permalink raw reply [flat|nested] 184+ messages in thread
* [PATCH v4 02/10] test/security: add inline inbound IPsec cases
2022-04-16 19:25 ` [PATCH v4 00/10] app/test: add inline IPsec and reassembly cases Akhil Goyal
2022-04-16 19:25 ` [PATCH v4 01/10] app/test: add unit cases for inline IPsec offload Akhil Goyal
@ 2022-04-16 19:25 ` Akhil Goyal
2022-04-16 19:25 ` [PATCH v4 03/10] test/security: add combined mode inline " Akhil Goyal
` (9 subsequent siblings)
11 siblings, 0 replies; 184+ messages in thread
From: Akhil Goyal @ 2022-04-16 19:25 UTC (permalink / raw)
To: dev
Cc: thomas, david.marchand, hemant.agrawal, anoobj,
konstantin.ananyev, ciara.power, ferruh.yigit, andrew.rybchenko,
ndabilpuram, vattunuru, Akhil Goyal
Added test cases for inline inbound protocol offload
verification with known test vectors from lookaside mode.
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
---
app/test/test_security_inline_proto.c | 65 +++++++++++++++++++++++++++
1 file changed, 65 insertions(+)
diff --git a/app/test/test_security_inline_proto.c b/app/test/test_security_inline_proto.c
index aeb5a57aca..fc0525479c 100644
--- a/app/test/test_security_inline_proto.c
+++ b/app/test/test_security_inline_proto.c
@@ -818,6 +818,24 @@ test_ipsec_inline_proto_known_vec(const void *test_data)
false, &flags);
}
+static int
+test_ipsec_inline_proto_known_vec_inb(const void *test_data)
+{
+ const struct ipsec_test_data *td = test_data;
+ struct ipsec_test_flags flags;
+ struct ipsec_test_data td_inb;
+
+ memset(&flags, 0, sizeof(flags));
+
+ if (td->ipsec_xform.direction == RTE_SECURITY_IPSEC_SA_DIR_EGRESS)
+ test_ipsec_td_in_from_out(td, &td_inb);
+ else
+ memcpy(&td_inb, td, sizeof(td_inb));
+
+ return test_ipsec_inline_proto_process(&td_inb, NULL, 1, false, &flags);
+}
+
+
static struct unit_test_suite inline_ipsec_testsuite = {
.suite_name = "Inline IPsec Ethernet Device Unit Test Suite",
.setup = inline_ipsec_testsuite_setup,
@@ -864,6 +882,53 @@ static struct unit_test_suite inline_ipsec_testsuite = {
ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
test_ipsec_inline_proto_known_vec,
&pkt_null_aes_xcbc),
+ TEST_CASE_NAMED_WITH_DATA(
+ "Inbound known vector (ESP tunnel mode IPv4 AES-GCM 128)",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_known_vec_inb, &pkt_aes_128_gcm),
+ TEST_CASE_NAMED_WITH_DATA(
+ "Inbound known vector (ESP tunnel mode IPv4 AES-GCM 192)",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_known_vec_inb, &pkt_aes_192_gcm),
+ TEST_CASE_NAMED_WITH_DATA(
+ "Inbound known vector (ESP tunnel mode IPv4 AES-GCM 256)",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_known_vec_inb, &pkt_aes_256_gcm),
+ TEST_CASE_NAMED_WITH_DATA(
+ "Inbound known vector (ESP tunnel mode IPv4 AES-CBC 128)",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_known_vec_inb, &pkt_aes_128_cbc_null),
+ TEST_CASE_NAMED_WITH_DATA(
+ "Inbound known vector (ESP tunnel mode IPv4 AES-CBC 128 HMAC-SHA256 [16B ICV])",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_known_vec_inb,
+ &pkt_aes_128_cbc_hmac_sha256),
+ TEST_CASE_NAMED_WITH_DATA(
+ "Inbound known vector (ESP tunnel mode IPv4 AES-CBC 128 HMAC-SHA384 [24B ICV])",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_known_vec_inb,
+ &pkt_aes_128_cbc_hmac_sha384),
+ TEST_CASE_NAMED_WITH_DATA(
+ "Inbound known vector (ESP tunnel mode IPv4 AES-CBC 128 HMAC-SHA512 [32B ICV])",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_known_vec_inb,
+ &pkt_aes_128_cbc_hmac_sha512),
+ TEST_CASE_NAMED_WITH_DATA(
+ "Inbound known vector (ESP tunnel mode IPv6 AES-GCM 128)",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_known_vec_inb, &pkt_aes_256_gcm_v6),
+ TEST_CASE_NAMED_WITH_DATA(
+ "Inbound known vector (ESP tunnel mode IPv6 AES-CBC 128 HMAC-SHA256 [16B ICV])",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_known_vec_inb,
+ &pkt_aes_128_cbc_hmac_sha256_v6),
+ TEST_CASE_NAMED_WITH_DATA(
+ "Inbound known vector (ESP tunnel mode IPv4 NULL AES-XCBC-MAC [12B ICV])",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_known_vec_inb,
+ &pkt_null_aes_xcbc),
+
+
TEST_CASES_END() /**< NULL terminate unit test array */
},
--
2.25.1
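The inbound cases above reuse the outbound vectors via
test_ipsec_td_in_from_out(). Its essential effect, swapping the
plain-text and cipher-text sides and flipping the SA direction, can be
sketched with simplified types (the struct layout below is an assumption
for illustration, not the real ipsec_test_data):

```c
#include <string.h>

enum sa_dir { DIR_EGRESS, DIR_INGRESS };

struct test_data {
	const char *input_text;  /* what is fed to the device */
	const char *output_text; /* what is expected back */
	enum sa_dir dir;
};

/* Derive an inbound vector from an outbound one: the encrypted output
 * of the egress case becomes the input of the ingress case, and the
 * original plain text becomes the expected result. */
static void
td_in_from_out(const struct test_data *out, struct test_data *in)
{
	in->input_text = out->output_text;
	in->output_text = out->input_text;
	in->dir = DIR_INGRESS;
}
```

This is why a single set of known lookaside-mode vectors is enough to
cover both directions.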
^ permalink raw reply [flat|nested] 184+ messages in thread
* [PATCH v4 03/10] test/security: add combined mode inline IPsec cases
2022-04-16 19:25 ` [PATCH v4 00/10] app/test: add inline IPsec and reassembly cases Akhil Goyal
2022-04-16 19:25 ` [PATCH v4 01/10] app/test: add unit cases for inline IPsec offload Akhil Goyal
2022-04-16 19:25 ` [PATCH v4 02/10] test/security: add inline inbound IPsec cases Akhil Goyal
@ 2022-04-16 19:25 ` Akhil Goyal
2022-04-16 19:25 ` [PATCH v4 04/10] test/security: add inline IPsec reassembly cases Akhil Goyal
` (8 subsequent siblings)
11 siblings, 0 replies; 184+ messages in thread
From: Akhil Goyal @ 2022-04-16 19:25 UTC (permalink / raw)
To: dev
Cc: thomas, david.marchand, hemant.agrawal, anoobj,
konstantin.ananyev, ciara.power, ferruh.yigit, andrew.rybchenko,
ndabilpuram, vattunuru, Akhil Goyal
Added combined encap and decap test cases for various
algorithm combinations.
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
---
app/test/test_security_inline_proto.c | 102 ++++++++++++++++++++++++++
1 file changed, 102 insertions(+)
diff --git a/app/test/test_security_inline_proto.c b/app/test/test_security_inline_proto.c
index fc0525479c..890bc10115 100644
--- a/app/test/test_security_inline_proto.c
+++ b/app/test/test_security_inline_proto.c
@@ -661,6 +661,92 @@ test_ipsec_inline_proto_process(struct ipsec_test_data *td,
return ret;
}
+static int
+test_ipsec_inline_proto_all(const struct ipsec_test_flags *flags)
+{
+ struct ipsec_test_data td_outb;
+ struct ipsec_test_data td_inb;
+ unsigned int i, nb_pkts = 1, pass_cnt = 0, fail_cnt = 0;
+ int ret;
+
+ if (flags->iv_gen || flags->sa_expiry_pkts_soft ||
+ flags->sa_expiry_pkts_hard)
+ nb_pkts = IPSEC_TEST_PACKETS_MAX;
+
+ for (i = 0; i < RTE_DIM(alg_list); i++) {
+ test_ipsec_td_prepare(alg_list[i].param1,
+ alg_list[i].param2,
+ flags, &td_outb, 1);
+
+ if (!td_outb.aead) {
+ enum rte_crypto_cipher_algorithm cipher_alg;
+ enum rte_crypto_auth_algorithm auth_alg;
+
+ cipher_alg = td_outb.xform.chain.cipher.cipher.algo;
+ auth_alg = td_outb.xform.chain.auth.auth.algo;
+
+ if (td_outb.aes_gmac && cipher_alg != RTE_CRYPTO_CIPHER_NULL)
+ continue;
+
+ /* ICV is not applicable for NULL auth */
+ if (flags->icv_corrupt &&
+ auth_alg == RTE_CRYPTO_AUTH_NULL)
+ continue;
+
+ /* IV is not applicable for NULL cipher */
+ if (flags->iv_gen &&
+ cipher_alg == RTE_CRYPTO_CIPHER_NULL)
+ continue;
+ }
+
+ if (flags->udp_encap)
+ td_outb.ipsec_xform.options.udp_encap = 1;
+
+ ret = test_ipsec_inline_proto_process(&td_outb, &td_inb, nb_pkts,
+ false, flags);
+ if (ret == TEST_SKIPPED)
+ continue;
+
+ if (ret == TEST_FAILED) {
+ printf("\n TEST FAILED");
+ test_ipsec_display_alg(alg_list[i].param1,
+ alg_list[i].param2);
+ fail_cnt++;
+ continue;
+ }
+
+ test_ipsec_td_update(&td_inb, &td_outb, 1, flags);
+
+ ret = test_ipsec_inline_proto_process(&td_inb, NULL, nb_pkts,
+ false, flags);
+ if (ret == TEST_SKIPPED)
+ continue;
+
+ if (ret == TEST_FAILED) {
+ printf("\n TEST FAILED");
+ test_ipsec_display_alg(alg_list[i].param1,
+ alg_list[i].param2);
+ fail_cnt++;
+ continue;
+ }
+
+ if (flags->display_alg)
+ test_ipsec_display_alg(alg_list[i].param1,
+ alg_list[i].param2);
+
+ pass_cnt++;
+ }
+
+ printf("Tests passed: %d, failed: %d", pass_cnt, fail_cnt);
+ if (fail_cnt > 0)
+ return TEST_FAILED;
+ if (pass_cnt > 0)
+ return TEST_SUCCESS;
+ else
+ return TEST_SKIPPED;
+}
+
+
static int
ut_setup_inline_ipsec(void)
{
@@ -835,6 +921,17 @@ test_ipsec_inline_proto_known_vec_inb(const void *test_data)
return test_ipsec_inline_proto_process(&td_inb, NULL, 1, false, &flags);
}
+static int
+test_ipsec_inline_proto_display_list(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.display_alg = true;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
static struct unit_test_suite inline_ipsec_testsuite = {
.suite_name = "Inline IPsec Ethernet Device Unit Test Suite",
@@ -928,6 +1025,11 @@ static struct unit_test_suite inline_ipsec_testsuite = {
test_ipsec_inline_proto_known_vec_inb,
&pkt_null_aes_xcbc),
+ TEST_CASE_NAMED_ST(
+ "Combined test alg list",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_display_list),
+
TEST_CASES_END() /**< NULL terminate unit test array */
--
2.25.1
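The tail of test_ipsec_inline_proto_all() collapses the per-algorithm
pass/fail/skip counts into a single suite verdict. That rule in isolation
looks like this (the result-code values are placeholders for the sketch,
not DPDK's actual definitions):

```c
/* Placeholder result codes for this sketch. */
enum { TEST_SUCCESS = 0, TEST_FAILED = -1, TEST_SKIPPED = 77 };

/* Any failure fails the run; otherwise at least one pass means success,
 * and an all-skipped run is reported as skipped rather than passed. */
static int
suite_verdict(unsigned int pass_cnt, unsigned int fail_cnt)
{
	if (fail_cnt > 0)
		return TEST_FAILED;
	if (pass_cnt > 0)
		return TEST_SUCCESS;
	return TEST_SKIPPED;
}
```

Reporting SKIPPED when nothing ran keeps a device that supports none of
the listed algorithms from being counted as a passing configuration.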
^ permalink raw reply [flat|nested] 184+ messages in thread
* [PATCH v4 04/10] test/security: add inline IPsec reassembly cases
2022-04-16 19:25 ` [PATCH v4 00/10] app/test: add inline IPsec and reassembly cases Akhil Goyal
` (2 preceding siblings ...)
2022-04-16 19:25 ` [PATCH v4 03/10] test/security: add combined mode inline " Akhil Goyal
@ 2022-04-16 19:25 ` Akhil Goyal
2022-04-16 19:25 ` [PATCH v4 05/10] test/security: add more inline IPsec functional cases Akhil Goyal
` (7 subsequent siblings)
11 siblings, 0 replies; 184+ messages in thread
From: Akhil Goyal @ 2022-04-16 19:25 UTC (permalink / raw)
To: dev
Cc: thomas, david.marchand, hemant.agrawal, anoobj,
konstantin.ananyev, ciara.power, ferruh.yigit, andrew.rybchenko,
ndabilpuram, vattunuru, Akhil Goyal
Added unit test cases for IP reassembly of inline IPsec
inbound scenarios.
In these cases, known fragment test vectors are first run
through inline outbound processing and then received back
on the loopback interface for inbound processing, along with
IP reassembly of the corresponding decrypted packets.
The resulting plain-text reassembled packet is compared with
the original unfragmented packet.
Cases are added for 2/4/5 fragments for both IPv4 and IPv6
packets, along with a few negative cases such as incomplete,
out-of-order, and duplicate fragments.
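When HW reassembly is incomplete, the received fragments stay linked
through a per-mbuf dynfield (next_frag) that the application must walk to
release them, as free_mbuf() does in this patch. The walk can be sketched
with a simplified fragment type (the struct below mirrors the idea of
rte_eth_ip_reassembly_dynfield_t but its layout is an assumption):

```c
#include <stddef.h>

/* Simplified stand-in for an mbuf carrying the reassembly dynfield. */
struct frag {
	struct frag *next_frag; /* next fragment of the incomplete chain */
	int freed;              /* instrumentation for this sketch */
};

/* Release an incomplete-reassembly chain head to tail; the next pointer
 * must be read before the current fragment is released. Returns the
 * number of fragments walked. */
static unsigned int
free_frag_chain(struct frag *f)
{
	unsigned int n = 0;

	while (f != NULL) {
		struct frag *next = f->next_frag;

		f->freed = 1; /* a real implementation calls rte_pktmbuf_free() */
		f = next;
		n++;
	}
	return n;
}
```

Freeing only the head mbuf would leak every fragment behind it, which is
exactly the case the negative tests (incomplete fragments) exercise.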
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
---
app/test/test_security_inline_proto.c | 421 ++++++++++-
app/test/test_security_inline_proto_vectors.h | 684 ++++++++++++++++++
2 files changed, 1104 insertions(+), 1 deletion(-)
diff --git a/app/test/test_security_inline_proto.c b/app/test/test_security_inline_proto.c
index 890bc10115..9ddc3f7dd4 100644
--- a/app/test/test_security_inline_proto.c
+++ b/app/test/test_security_inline_proto.c
@@ -41,6 +41,9 @@ test_inline_ipsec(void)
#define MAX_TRAFFIC_BURST 2048
#define NB_MBUF 10240
+#define ENCAP_DECAP_BURST_SZ 33
+#define APP_REASS_TIMEOUT 10
+
extern struct ipsec_test_data pkt_aes_128_gcm;
extern struct ipsec_test_data pkt_aes_192_gcm;
extern struct ipsec_test_data pkt_aes_256_gcm;
@@ -94,6 +97,8 @@ uint16_t port_id;
static uint64_t link_mbps;
+static int ip_reassembly_dynfield_offset = -1;
+
static struct rte_flow *default_flow[RTE_MAX_ETHPORTS];
/* Create Inline IPsec session */
@@ -528,6 +533,347 @@ destroy_default_flow(uint16_t portid)
struct rte_mbuf **tx_pkts_burst;
struct rte_mbuf **rx_pkts_burst;
+static int
+compare_pkt_data(struct rte_mbuf *m, uint8_t *ref, unsigned int tot_len)
+{
+ unsigned int len;
+ unsigned int nb_segs = m->nb_segs;
+ unsigned int matched = 0;
+ struct rte_mbuf *save = m;
+
+ while (m) {
+ len = tot_len;
+ if (len > m->data_len)
+ len = m->data_len;
+ if (len != 0) {
+ if (memcmp(rte_pktmbuf_mtod(m, char *),
+ ref + matched, len)) {
+ printf("\n====Reassembly case failed: Data Mismatch");
+ rte_hexdump(stdout, "Reassembled",
+ rte_pktmbuf_mtod(m, char *),
+ len);
+ rte_hexdump(stdout, "reference",
+ ref + matched,
+ len);
+ return TEST_FAILED;
+ }
+ }
+ tot_len -= len;
+ matched += len;
+ m = m->next;
+ }
+
+ if (tot_len) {
+ printf("\n====Reassembly case failed: Data Missing %u",
+ tot_len);
+ printf("\n====nb_segs %u, tot_len %u", nb_segs, tot_len);
+ rte_pktmbuf_dump(stderr, save, -1);
+ return TEST_FAILED;
+ }
+ return TEST_SUCCESS;
+}
+
+static inline bool
+is_ip_reassembly_incomplete(struct rte_mbuf *mbuf)
+{
+ static uint64_t ip_reassembly_dynflag;
+ int ip_reassembly_dynflag_offset;
+
+ if (ip_reassembly_dynflag == 0) {
+ ip_reassembly_dynflag_offset = rte_mbuf_dynflag_lookup(
+ RTE_MBUF_DYNFLAG_IP_REASSEMBLY_INCOMPLETE_NAME, NULL);
+ if (ip_reassembly_dynflag_offset < 0)
+ return false;
+ ip_reassembly_dynflag = RTE_BIT64(ip_reassembly_dynflag_offset);
+ }
+
+ return (mbuf->ol_flags & ip_reassembly_dynflag) != 0;
+}
+
+static void
+free_mbuf(struct rte_mbuf *mbuf)
+{
+ rte_eth_ip_reassembly_dynfield_t dynfield;
+
+ if (!mbuf)
+ return;
+
+ if (!is_ip_reassembly_incomplete(mbuf)) {
+ rte_pktmbuf_free(mbuf);
+ } else {
+ if (ip_reassembly_dynfield_offset < 0)
+ return;
+
+ while (mbuf) {
+ dynfield = *RTE_MBUF_DYNFIELD(mbuf,
+ ip_reassembly_dynfield_offset,
+ rte_eth_ip_reassembly_dynfield_t *);
+ rte_pktmbuf_free(mbuf);
+ mbuf = dynfield.next_frag;
+ }
+ }
+}
+
+
+static int
+get_and_verify_incomplete_frags(struct rte_mbuf *mbuf,
+ struct reassembly_vector *vector)
+{
+ rte_eth_ip_reassembly_dynfield_t *dynfield[MAX_PKT_BURST];
+ int j = 0, ret;
+ /**
+ * IP reassembly offload is incomplete, and fragments are listed in
+ * dynfield which can be reassembled in SW.
+ */
+ printf("\nHW IP reassembly is incomplete; fragments can be"
+ " reassembled in SW.\nMatching them against original frags.");
+
+ if (ip_reassembly_dynfield_offset < 0)
+ return -1;
+
+ printf("\ncomparing frag: %d", j);
+ /* Skip Ethernet header comparison */
+ rte_pktmbuf_adj(mbuf, RTE_ETHER_HDR_LEN);
+ ret = compare_pkt_data(mbuf, vector->frags[j]->data,
+ vector->frags[j]->len);
+ if (ret)
+ return ret;
+ j++;
+ dynfield[j] = RTE_MBUF_DYNFIELD(mbuf, ip_reassembly_dynfield_offset,
+ rte_eth_ip_reassembly_dynfield_t *);
+ printf("\ncomparing frag: %d", j);
+ /* Skip Ethernet header comparison */
+ rte_pktmbuf_adj(dynfield[j]->next_frag, RTE_ETHER_HDR_LEN);
+ ret = compare_pkt_data(dynfield[j]->next_frag, vector->frags[j]->data,
+ vector->frags[j]->len);
+ if (ret)
+ return ret;
+
+ while ((dynfield[j]->nb_frags > 1) &&
+ is_ip_reassembly_incomplete(dynfield[j]->next_frag)) {
+ j++;
+ dynfield[j] = RTE_MBUF_DYNFIELD(dynfield[j-1]->next_frag,
+ ip_reassembly_dynfield_offset,
+ rte_eth_ip_reassembly_dynfield_t *);
+ printf("\ncomparing frag: %d", j);
+ /* Skip Ethernet header comparison */
+ rte_pktmbuf_adj(dynfield[j]->next_frag, RTE_ETHER_HDR_LEN);
+ ret = compare_pkt_data(dynfield[j]->next_frag,
+ vector->frags[j]->data, vector->frags[j]->len);
+ if (ret)
+ return ret;
+ }
+ return ret;
+}
+
+static int
+test_ipsec_with_reassembly(struct reassembly_vector *vector,
+ const struct ipsec_test_flags *flags)
+{
+ struct rte_security_session *out_ses[ENCAP_DECAP_BURST_SZ] = {0};
+ struct rte_security_session *in_ses[ENCAP_DECAP_BURST_SZ] = {0};
+ struct rte_eth_ip_reassembly_params reass_capa = {0};
+ struct rte_security_session_conf sess_conf_out = {0};
+ struct rte_security_session_conf sess_conf_in = {0};
+ unsigned int nb_tx, burst_sz, nb_sent = 0;
+ struct rte_crypto_sym_xform cipher_out = {0};
+ struct rte_crypto_sym_xform auth_out = {0};
+ struct rte_crypto_sym_xform aead_out = {0};
+ struct rte_crypto_sym_xform cipher_in = {0};
+ struct rte_crypto_sym_xform auth_in = {0};
+ struct rte_crypto_sym_xform aead_in = {0};
+ struct ipsec_test_data sa_data = {0};
+ struct rte_security_ctx *ctx;
+ unsigned int i, nb_rx = 0, j;
+ uint32_t ol_flags;
+ int ret = 0;
+
+ burst_sz = vector->burst ? ENCAP_DECAP_BURST_SZ : 1;
+ nb_tx = vector->nb_frags * burst_sz;
+
+ ret = rte_eth_dev_stop(port_id);
+ if (ret != 0) {
+ printf("rte_eth_dev_stop: err=%s, port=%u\n",
+ rte_strerror(-ret), port_id);
+ return ret;
+ }
+ rte_eth_ip_reassembly_capability_get(port_id, &reass_capa);
+ if (reass_capa.max_frags < vector->nb_frags)
+ return TEST_SKIPPED;
+ if (reass_capa.timeout_ms > APP_REASS_TIMEOUT) {
+ reass_capa.timeout_ms = APP_REASS_TIMEOUT;
+ rte_eth_ip_reassembly_conf_set(port_id, &reass_capa);
+ }
+
+ ret = rte_eth_dev_start(port_id);
+ if (ret < 0) {
+ printf("rte_eth_dev_start: err=%d, port=%d\n",
+ ret, port_id);
+ return ret;
+ }
+
+ memset(tx_pkts_burst, 0, sizeof(tx_pkts_burst[0]) * nb_tx);
+ memset(rx_pkts_burst, 0, sizeof(rx_pkts_burst[0]) * nb_tx);
+
+ for (i = 0; i < nb_tx; i += vector->nb_frags) {
+ for (j = 0; j < vector->nb_frags; j++) {
+ tx_pkts_burst[i+j] = init_packet(mbufpool,
+ vector->frags[j]->data,
+ vector->frags[j]->len);
+ if (tx_pkts_burst[i+j] == NULL) {
+ ret = -1;
+ printf("\npacket init failed\n");
+ goto out;
+ }
+ }
+ }
+
+ for (i = 0; i < burst_sz; i++) {
+ memcpy(&sa_data, vector->sa_data,
+ sizeof(struct ipsec_test_data));
+ /* Update SPI for every new SA */
+ sa_data.ipsec_xform.spi += i;
+ sa_data.ipsec_xform.direction =
+ RTE_SECURITY_IPSEC_SA_DIR_EGRESS;
+ if (sa_data.aead) {
+ sess_conf_out.crypto_xform = &aead_out;
+ } else {
+ sess_conf_out.crypto_xform = &cipher_out;
+ sess_conf_out.crypto_xform->next = &auth_out;
+ }
+
+ /* Create Inline IPsec outbound session. */
+ ret = create_inline_ipsec_session(&sa_data, port_id,
+ &out_ses[i], &ctx, &ol_flags, flags,
+ &sess_conf_out);
+ if (ret) {
+ printf("\nInline outbound session create failed\n");
+ goto out;
+ }
+ }
+
+ j = 0;
+ for (i = 0; i < nb_tx; i++) {
+ if (ol_flags & RTE_SECURITY_TX_OLOAD_NEED_MDATA)
+ rte_security_set_pkt_metadata(ctx,
+ out_ses[j], tx_pkts_burst[i], NULL);
+ tx_pkts_burst[i]->ol_flags |= RTE_MBUF_F_TX_SEC_OFFLOAD;
+
+ /* Move to next SA after nb_frags */
+ if ((i + 1) % vector->nb_frags == 0)
+ j++;
+ }
+
+ for (i = 0; i < burst_sz; i++) {
+ memcpy(&sa_data, vector->sa_data,
+ sizeof(struct ipsec_test_data));
+ /* Update SPI for every new SA */
+ sa_data.ipsec_xform.spi += i;
+ sa_data.ipsec_xform.direction =
+ RTE_SECURITY_IPSEC_SA_DIR_INGRESS;
+
+ if (sa_data.aead) {
+ sess_conf_in.crypto_xform = &aead_in;
+ } else {
+ sess_conf_in.crypto_xform = &auth_in;
+ sess_conf_in.crypto_xform->next = &cipher_in;
+ }
+ /* Create Inline IPsec inbound session. */
+ ret = create_inline_ipsec_session(&sa_data, port_id, &in_ses[i],
+ &ctx, &ol_flags, flags, &sess_conf_in);
+ if (ret) {
+ printf("\nInline inbound session create failed\n");
+ goto out;
+ }
+ }
+
+ /* Retrieve reassembly dynfield offset if available */
+ if (ip_reassembly_dynfield_offset < 0 && vector->nb_frags > 1)
+ ip_reassembly_dynfield_offset = rte_mbuf_dynfield_lookup(
+ RTE_MBUF_DYNFIELD_IP_REASSEMBLY_NAME, NULL);
+
+ create_default_flow(port_id);
+
+ nb_sent = rte_eth_tx_burst(port_id, 0, tx_pkts_burst, nb_tx);
+ if (nb_sent != nb_tx) {
+ ret = -1;
+ printf("\nFailed to tx %u pkts", nb_tx);
+ goto out;
+ }
+
+ rte_delay_ms(1);
+
+ /* Retry few times before giving up */
+ nb_rx = 0;
+ j = 0;
+ do {
+ nb_rx += rte_eth_rx_burst(port_id, 0, &rx_pkts_burst[nb_rx],
+ nb_tx - nb_rx);
+ j++;
+ if (nb_rx >= nb_tx)
+ break;
+ rte_delay_ms(1);
+ } while (j < 5);
+
+ /* Check for minimum number of Rx packets expected */
+ if ((vector->nb_frags == 1 && nb_rx != nb_tx) ||
+ (vector->nb_frags > 1 && nb_rx < burst_sz)) {
+ printf("\nReceived fewer Rx pkts (%u) than expected\n", nb_rx);
+ ret = TEST_FAILED;
+ goto out;
+ }
+
+ for (i = 0; i < nb_rx; i++) {
+ if (vector->nb_frags > 1 &&
+ is_ip_reassembly_incomplete(rx_pkts_burst[i])) {
+ ret = get_and_verify_incomplete_frags(rx_pkts_burst[i],
+ vector);
+ if (ret != TEST_SUCCESS)
+ break;
+ continue;
+ }
+
+ if (rx_pkts_burst[i]->ol_flags &
+ RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED ||
+ !(rx_pkts_burst[i]->ol_flags & RTE_MBUF_F_RX_SEC_OFFLOAD)) {
+ printf("\nsecurity offload failed\n");
+ ret = TEST_FAILED;
+ break;
+ }
+
+ if (vector->full_pkt->len + RTE_ETHER_HDR_LEN !=
+ rx_pkts_burst[i]->pkt_len) {
+ printf("\nreassembled/decrypted packet length mismatch\n");
+ ret = TEST_FAILED;
+ break;
+ }
+ rte_pktmbuf_adj(rx_pkts_burst[i], RTE_ETHER_HDR_LEN);
+ ret = compare_pkt_data(rx_pkts_burst[i],
+ vector->full_pkt->data,
+ vector->full_pkt->len);
+ if (ret != TEST_SUCCESS)
+ break;
+ }
+
+out:
+ destroy_default_flow(port_id);
+
+ /* Clear session data. */
+ for (i = 0; i < burst_sz; i++) {
+ if (out_ses[i])
+ rte_security_session_destroy(ctx, out_ses[i]);
+ if (in_ses[i])
+ rte_security_session_destroy(ctx, in_ses[i]);
+ }
+
+ for (i = nb_sent; i < nb_tx; i++)
+ free_mbuf(tx_pkts_burst[i]);
+ for (i = 0; i < nb_rx; i++)
+ free_mbuf(rx_pkts_burst[i]);
+ return ret;
+}
+
static int
test_ipsec_inline_proto_process(struct ipsec_test_data *td,
struct ipsec_test_data *res_d,
@@ -775,6 +1121,7 @@ ut_setup_inline_ipsec(void)
static void
ut_teardown_inline_ipsec(void)
{
+ struct rte_eth_ip_reassembly_params reass_conf = {0};
uint16_t portid;
int ret;
@@ -784,6 +1131,9 @@ ut_teardown_inline_ipsec(void)
if (ret != 0)
printf("rte_eth_dev_stop: err=%s, port=%u\n",
rte_strerror(-ret), portid);
+
+ /* Clear reassembly configuration */
+ rte_eth_ip_reassembly_conf_set(portid, &reass_conf);
}
}
@@ -884,6 +1234,36 @@ inline_ipsec_testsuite_teardown(void)
}
}
+static int
+test_inline_ip_reassembly(const void *testdata)
+{
+ struct reassembly_vector reassembly_td = {0};
+ const struct reassembly_vector *td = testdata;
+ struct ip_reassembly_test_packet full_pkt;
+ struct ip_reassembly_test_packet frags[MAX_FRAGS];
+ struct ipsec_test_flags flags = {0};
+ int i = 0;
+
+ reassembly_td.sa_data = td->sa_data;
+ reassembly_td.nb_frags = td->nb_frags;
+ reassembly_td.burst = td->burst;
+
+ memcpy(&full_pkt, td->full_pkt,
+ sizeof(struct ip_reassembly_test_packet));
+ reassembly_td.full_pkt = &full_pkt;
+
+ test_vector_payload_populate(reassembly_td.full_pkt, true);
+ for (; i < reassembly_td.nb_frags; i++) {
+ memcpy(&frags[i], td->frags[i],
+ sizeof(struct ip_reassembly_test_packet));
+ reassembly_td.frags[i] = &frags[i];
+ test_vector_payload_populate(reassembly_td.frags[i],
+ i == 0);
+ }
+
+ return test_ipsec_with_reassembly(&reassembly_td, &flags);
+}
+
static int
test_ipsec_inline_proto_known_vec(const void *test_data)
{
@@ -1030,7 +1410,46 @@ static struct unit_test_suite inline_ipsec_testsuite = {
ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
test_ipsec_inline_proto_display_list),
-
+ TEST_CASE_NAMED_WITH_DATA(
+ "IPv4 Reassembly with 2 fragments",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_inline_ip_reassembly, &ipv4_2frag_vector),
+ TEST_CASE_NAMED_WITH_DATA(
+ "IPv6 Reassembly with 2 fragments",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_inline_ip_reassembly, &ipv6_2frag_vector),
+ TEST_CASE_NAMED_WITH_DATA(
+ "IPv4 Reassembly with 4 fragments",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_inline_ip_reassembly, &ipv4_4frag_vector),
+ TEST_CASE_NAMED_WITH_DATA(
+ "IPv6 Reassembly with 4 fragments",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_inline_ip_reassembly, &ipv6_4frag_vector),
+ TEST_CASE_NAMED_WITH_DATA(
+ "IPv4 Reassembly with 5 fragments",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_inline_ip_reassembly, &ipv4_5frag_vector),
+ TEST_CASE_NAMED_WITH_DATA(
+ "IPv6 Reassembly with 5 fragments",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_inline_ip_reassembly, &ipv6_5frag_vector),
+ TEST_CASE_NAMED_WITH_DATA(
+ "IPv4 Reassembly with incomplete fragments",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_inline_ip_reassembly, &ipv4_incomplete_vector),
+ TEST_CASE_NAMED_WITH_DATA(
+ "IPv4 Reassembly with overlapping fragments",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_inline_ip_reassembly, &ipv4_overlap_vector),
+ TEST_CASE_NAMED_WITH_DATA(
+ "IPv4 Reassembly with out of order fragments",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_inline_ip_reassembly, &ipv4_out_of_order_vector),
+ TEST_CASE_NAMED_WITH_DATA(
+ "IPv4 Reassembly with burst of 4 fragments",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_inline_ip_reassembly, &ipv4_4frag_burst_vector),
TEST_CASES_END() /**< NULL terminate unit test array */
},
diff --git a/app/test/test_security_inline_proto_vectors.h b/app/test/test_security_inline_proto_vectors.h
index d1074da36a..c18965d80f 100644
--- a/app/test/test_security_inline_proto_vectors.h
+++ b/app/test/test_security_inline_proto_vectors.h
@@ -17,4 +17,688 @@ uint8_t dummy_ipv6_eth_hdr[] = {
0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0x86, 0xdd,
};
+#define MAX_FRAG_LEN 1500
+#define MAX_FRAGS 6
+#define MAX_PKT_LEN (MAX_FRAG_LEN * MAX_FRAGS)
+
+struct ip_reassembly_test_packet {
+ uint32_t len;
+ uint32_t l4_offset;
+ uint8_t data[MAX_PKT_LEN];
+};
+
+struct reassembly_vector {
+ /* input/output text in struct ipsec_test_data are not used */
+ struct ipsec_test_data *sa_data;
+ struct ip_reassembly_test_packet *full_pkt;
+ struct ip_reassembly_test_packet *frags[MAX_FRAGS];
+ uint16_t nb_frags;
+ bool burst;
+};
+
+/* The source file includes the following test vectors */
+/* IPv6:
+ *
+ * 1) pkt_ipv6_udp_p1
+ * pkt_ipv6_udp_p1_f1
+ * pkt_ipv6_udp_p1_f2
+ *
+ * 2) pkt_ipv6_udp_p2
+ * pkt_ipv6_udp_p2_f1
+ * pkt_ipv6_udp_p2_f2
+ * pkt_ipv6_udp_p2_f3
+ * pkt_ipv6_udp_p2_f4
+ *
+ * 3) pkt_ipv6_udp_p3
+ * pkt_ipv6_udp_p3_f1
+ * pkt_ipv6_udp_p3_f2
+ * pkt_ipv6_udp_p3_f3
+ * pkt_ipv6_udp_p3_f4
+ * pkt_ipv6_udp_p3_f5
+ */
+
+/* IPv4:
+ *
+ * 1) pkt_ipv4_udp_p1
+ * pkt_ipv4_udp_p1_f1
+ * pkt_ipv4_udp_p1_f2
+ *
+ * 2) pkt_ipv4_udp_p2
+ * pkt_ipv4_udp_p2_f1
+ * pkt_ipv4_udp_p2_f2
+ * pkt_ipv4_udp_p2_f3
+ * pkt_ipv4_udp_p2_f4
+ *
+ * 3) pkt_ipv4_udp_p3
+ * pkt_ipv4_udp_p3_f1
+ * pkt_ipv4_udp_p3_f2
+ * pkt_ipv4_udp_p3_f3
+ * pkt_ipv4_udp_p3_f4
+ * pkt_ipv4_udp_p3_f5
+ */
+
+struct ip_reassembly_test_packet pkt_ipv6_udp_p1 = {
+ .len = 1500,
+ .l4_offset = 40,
+ .data = {
+ /* IP */
+ 0x60, 0x00, 0x00, 0x00, 0x05, 0xb4, 0x2C, 0x40,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x0d, 0x00, 0x00, 0x02,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x02, 0x00, 0x00, 0x02,
+
+ /* UDP */
+ 0x08, 0x00, 0x27, 0x10, 0x05, 0xb4, 0x2b, 0xe8,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv6_udp_p1_f1 = {
+ .len = 1384,
+ .l4_offset = 48,
+ .data = {
+ /* IP */
+ 0x60, 0x00, 0x00, 0x00, 0x05, 0x40, 0x2c, 0x40,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x0d, 0x00, 0x00, 0x02,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x02, 0x00, 0x00, 0x02,
+ 0x11, 0x00, 0x00, 0x01, 0x5c, 0x92, 0xac, 0xf1,
+
+ /* UDP */
+ 0x08, 0x00, 0x27, 0x10, 0x05, 0xb4, 0x2b, 0xe8,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv6_udp_p1_f2 = {
+ .len = 172,
+ .l4_offset = 48,
+ .data = {
+ /* IP */
+ 0x60, 0x00, 0x00, 0x00, 0x00, 0x84, 0x2c, 0x40,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x0d, 0x00, 0x00, 0x02,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x02, 0x00, 0x00, 0x02,
+ 0x11, 0x00, 0x05, 0x38, 0x5c, 0x92, 0xac, 0xf1,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv6_udp_p2 = {
+ .len = 4482,
+ .l4_offset = 40,
+ .data = {
+ /* IP */
+ 0x60, 0x00, 0x00, 0x00, 0x11, 0x5a, 0x2c, 0x40,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x0d, 0x00, 0x00, 0x02,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x02, 0x00, 0x00, 0x02,
+
+ /* UDP */
+ 0x08, 0x00, 0x27, 0x10, 0x11, 0x5a, 0x8a, 0x11,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv6_udp_p2_f1 = {
+ .len = 1384,
+ .l4_offset = 48,
+ .data = {
+ /* IP */
+ 0x60, 0x00, 0x00, 0x00, 0x05, 0x40, 0x2c, 0x40,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x0d, 0x00, 0x00, 0x02,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x02, 0x00, 0x00, 0x02,
+ 0x11, 0x00, 0x00, 0x01, 0x64, 0x6c, 0x68, 0x9f,
+
+ /* UDP */
+ 0x08, 0x00, 0x27, 0x10, 0x11, 0x5a, 0x8a, 0x11,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv6_udp_p2_f2 = {
+ .len = 1384,
+ .l4_offset = 48,
+ .data = {
+ /* IP */
+ 0x60, 0x00, 0x00, 0x00, 0x05, 0x40, 0x2c, 0x40,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x0d, 0x00, 0x00, 0x02,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x02, 0x00, 0x00, 0x02,
+ 0x11, 0x00, 0x05, 0x39, 0x64, 0x6c, 0x68, 0x9f,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv6_udp_p2_f3 = {
+ .len = 1384,
+ .l4_offset = 48,
+ .data = {
+ /* IP */
+ 0x60, 0x00, 0x00, 0x00, 0x05, 0x40, 0x2c, 0x40,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x0d, 0x00, 0x00, 0x02,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x02, 0x00, 0x00, 0x02,
+ 0x11, 0x00, 0x0a, 0x71, 0x64, 0x6c, 0x68, 0x9f,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv6_udp_p2_f4 = {
+ .len = 482,
+ .l4_offset = 48,
+ .data = {
+ /* IP */
+ 0x60, 0x00, 0x00, 0x00, 0x01, 0xba, 0x2c, 0x40,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x0d, 0x00, 0x00, 0x02,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x02, 0x00, 0x00, 0x02,
+ 0x11, 0x00, 0x0f, 0xa8, 0x64, 0x6c, 0x68, 0x9f,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv6_udp_p3 = {
+ .len = 5782,
+ .l4_offset = 40,
+ .data = {
+ /* IP */
+ 0x60, 0x00, 0x00, 0x00, 0x16, 0x6e, 0x2c, 0x40,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x0d, 0x00, 0x00, 0x02,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x02, 0x00, 0x00, 0x02,
+
+ /* UDP */
+ 0x08, 0x00, 0x27, 0x10, 0x16, 0x6e, 0x2f, 0x99,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv6_udp_p3_f1 = {
+ .len = 1384,
+ .l4_offset = 48,
+ .data = {
+ /* IP */
+ 0x60, 0x00, 0x00, 0x00, 0x05, 0x40, 0x2c, 0x40,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x0d, 0x00, 0x00, 0x02,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x02, 0x00, 0x00, 0x02,
+ 0x11, 0x00, 0x00, 0x01, 0x65, 0xcf, 0x5a, 0xae,
+
+ /* UDP */
+ 0x08, 0x00, 0x27, 0x10, 0x16, 0x6e, 0x2f, 0x99,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv6_udp_p3_f2 = {
+ .len = 1384,
+ .l4_offset = 48,
+ .data = {
+ /* IP */
+ 0x60, 0x00, 0x00, 0x00, 0x05, 0x40, 0x2c, 0x40,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x0d, 0x00, 0x00, 0x02,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x02, 0x00, 0x00, 0x02,
+ 0x11, 0x00, 0x05, 0x39, 0x65, 0xcf, 0x5a, 0xae,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv6_udp_p3_f3 = {
+ .len = 1384,
+ .l4_offset = 48,
+ .data = {
+ /* IP */
+ 0x60, 0x00, 0x00, 0x00, 0x05, 0x40, 0x2c, 0x40,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x0d, 0x00, 0x00, 0x02,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x02, 0x00, 0x00, 0x02,
+ 0x11, 0x00, 0x0a, 0x71, 0x65, 0xcf, 0x5a, 0xae,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv6_udp_p3_f4 = {
+ .len = 1384,
+ .l4_offset = 48,
+ .data = {
+ /* IP */
+ 0x60, 0x00, 0x00, 0x00, 0x05, 0x40, 0x2c, 0x40,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x0d, 0x00, 0x00, 0x02,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x02, 0x00, 0x00, 0x02,
+ 0x11, 0x00, 0x0f, 0xa9, 0x65, 0xcf, 0x5a, 0xae,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv6_udp_p3_f5 = {
+ .len = 446,
+ .l4_offset = 48,
+ .data = {
+ /* IP */
+ 0x60, 0x00, 0x00, 0x00, 0x01, 0x96, 0x2c, 0x40,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x0d, 0x00, 0x00, 0x02,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x02, 0x00, 0x00, 0x02,
+ 0x11, 0x00, 0x14, 0xe0, 0x65, 0xcf, 0x5a, 0xae,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv4_udp_p1 = {
+ .len = 1500,
+ .l4_offset = 20,
+ .data = {
+ /* IP */
+ 0x45, 0x00, 0x05, 0xdc, 0x00, 0x01, 0x00, 0x00,
+ 0x40, 0x11, 0x66, 0x0d, 0x0d, 0x00, 0x00, 0x02,
+ 0x02, 0x00, 0x00, 0x02,
+
+ /* UDP */
+ 0x08, 0x00, 0x27, 0x10, 0x05, 0xc8, 0xb8, 0x4c,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv4_udp_p1_f1 = {
+ .len = 1420,
+ .l4_offset = 20,
+ .data = {
+ /* IP */
+ 0x45, 0x00, 0x05, 0x8c, 0x00, 0x01, 0x20, 0x00,
+ 0x40, 0x11, 0x46, 0x5d, 0x0d, 0x00, 0x00, 0x02,
+ 0x02, 0x00, 0x00, 0x02,
+
+ /* UDP */
+ 0x08, 0x00, 0x27, 0x10, 0x05, 0xc8, 0xb8, 0x4c,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv4_udp_p1_f2 = {
+ .len = 100,
+ .l4_offset = 20,
+ .data = {
+ /* IP */
+ 0x45, 0x00, 0x00, 0x64, 0x00, 0x01, 0x00, 0xaf,
+ 0x40, 0x11, 0x6a, 0xd6, 0x0d, 0x00, 0x00, 0x02,
+ 0x02, 0x00, 0x00, 0x02,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv4_udp_p2 = {
+ .len = 4482,
+ .l4_offset = 20,
+ .data = {
+ /* IP */
+ 0x45, 0x00, 0x11, 0x82, 0x00, 0x02, 0x00, 0x00,
+ 0x40, 0x11, 0x5a, 0x66, 0x0d, 0x00, 0x00, 0x02,
+ 0x02, 0x00, 0x00, 0x02,
+
+ /* UDP */
+ 0x08, 0x00, 0x27, 0x10, 0x11, 0x6e, 0x16, 0x76,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv4_udp_p2_f1 = {
+ .len = 1420,
+ .l4_offset = 20,
+ .data = {
+ /* IP */
+ 0x45, 0x00, 0x05, 0x8c, 0x00, 0x02, 0x20, 0x00,
+ 0x40, 0x11, 0x46, 0x5c, 0x0d, 0x00, 0x00, 0x02,
+ 0x02, 0x00, 0x00, 0x02,
+
+ /* UDP */
+ 0x08, 0x00, 0x27, 0x10, 0x11, 0x6e, 0x16, 0x76,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv4_udp_p2_f2 = {
+ .len = 1420,
+ .l4_offset = 20,
+ .data = {
+ /* IP */
+ 0x45, 0x00, 0x05, 0x8c, 0x00, 0x02, 0x20, 0xaf,
+ 0x40, 0x11, 0x45, 0xad, 0x0d, 0x00, 0x00, 0x02,
+ 0x02, 0x00, 0x00, 0x02,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv4_udp_p2_f3 = {
+ .len = 1420,
+ .l4_offset = 20,
+ .data = {
+ /* IP */
+ 0x45, 0x00, 0x05, 0x8c, 0x00, 0x02, 0x21, 0x5e,
+ 0x40, 0x11, 0x44, 0xfe, 0x0d, 0x00, 0x00, 0x02,
+ 0x02, 0x00, 0x00, 0x02,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv4_udp_p2_f4 = {
+ .len = 282,
+ .l4_offset = 20,
+ .data = {
+ /* IP */
+ 0x45, 0x00, 0x01, 0x1a, 0x00, 0x02, 0x02, 0x0d,
+ 0x40, 0x11, 0x68, 0xc1, 0x0d, 0x00, 0x00, 0x02,
+ 0x02, 0x00, 0x00, 0x02,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv4_udp_p3 = {
+ .len = 5782,
+ .l4_offset = 20,
+ .data = {
+ /* IP */
+ 0x45, 0x00, 0x16, 0x96, 0x00, 0x03, 0x00, 0x00,
+ 0x40, 0x11, 0x55, 0x51, 0x0d, 0x00, 0x00, 0x02,
+ 0x02, 0x00, 0x00, 0x02,
+
+ /* UDP */
+ 0x08, 0x00, 0x27, 0x10, 0x16, 0x82, 0xbb, 0xfd,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv4_udp_p3_f1 = {
+ .len = 1420,
+ .l4_offset = 20,
+ .data = {
+ /* IP */
+ 0x45, 0x00, 0x05, 0x8c, 0x00, 0x03, 0x20, 0x00,
+ 0x40, 0x11, 0x46, 0x5b, 0x0d, 0x00, 0x00, 0x02,
+ 0x02, 0x00, 0x00, 0x02,
+
+ /* UDP */
+ 0x08, 0x00, 0x27, 0x10, 0x16, 0x82, 0xbb, 0xfd,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv4_udp_p3_f2 = {
+ .len = 1420,
+ .l4_offset = 20,
+ .data = {
+ /* IP */
+ 0x45, 0x00, 0x05, 0x8c, 0x00, 0x03, 0x20, 0xaf,
+ 0x40, 0x11, 0x45, 0xac, 0x0d, 0x00, 0x00, 0x02,
+ 0x02, 0x00, 0x00, 0x02,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv4_udp_p3_f3 = {
+ .len = 1420,
+ .l4_offset = 20,
+ .data = {
+ /* IP */
+ 0x45, 0x00, 0x05, 0x8c, 0x00, 0x03, 0x21, 0x5e,
+ 0x40, 0x11, 0x44, 0xfd, 0x0d, 0x00, 0x00, 0x02,
+ 0x02, 0x00, 0x00, 0x02,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv4_udp_p3_f4 = {
+ .len = 1420,
+ .l4_offset = 20,
+ .data = {
+ /* IP */
+ 0x45, 0x00, 0x05, 0x8c, 0x00, 0x03, 0x22, 0x0d,
+ 0x40, 0x11, 0x44, 0x4e, 0x0d, 0x00, 0x00, 0x02,
+ 0x02, 0x00, 0x00, 0x02,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv4_udp_p3_f5 = {
+ .len = 182,
+ .l4_offset = 20,
+ .data = {
+ /* IP */
+ 0x45, 0x00, 0x00, 0xb6, 0x00, 0x03, 0x02, 0xbc,
+ 0x40, 0x11, 0x68, 0x75, 0x0d, 0x00, 0x00, 0x02,
+ 0x02, 0x00, 0x00, 0x02,
+ },
+};
+
+static inline void
+test_vector_payload_populate(struct ip_reassembly_test_packet *pkt,
+ bool first_frag)
+{
+ uint32_t i = pkt->l4_offset;
+
+ /**
+ * For non-fragmented packets and first frag, skip 8 bytes from
+ * l4_offset for UDP header.
+ */
+ if (first_frag)
+ i += 8;
+
+ for (; i < pkt->len; i++)
+ pkt->data[i] = 0x58;
+}
+
+struct ipsec_test_data conf_aes_128_gcm = {
+ .key = {
+ .data = {
+ 0xfe, 0xff, 0xe9, 0x92, 0x86, 0x65, 0x73, 0x1c,
+ 0x6d, 0x6a, 0x8f, 0x94, 0x67, 0x30, 0x83, 0x08
+ },
+ },
+
+ .salt = {
+ .data = {
+ 0xca, 0xfe, 0xba, 0xbe
+ },
+ .len = 4,
+ },
+
+ .iv = {
+ .data = {
+ 0xfa, 0xce, 0xdb, 0xad, 0xde, 0xca, 0xf8, 0x88
+ },
+ },
+
+ .ipsec_xform = {
+ .spi = 0xa5f8,
+ .salt = 0xbebafeca,
+ .options.esn = 0,
+ .options.udp_encap = 0,
+ .options.copy_dscp = 0,
+ .options.copy_flabel = 0,
+ .options.copy_df = 0,
+ .options.dec_ttl = 0,
+ .options.ecn = 0,
+ .options.stats = 0,
+ .options.tunnel_hdr_verify = 0,
+ .options.ip_csum_enable = 0,
+ .options.l4_csum_enable = 0,
+ .options.ip_reassembly_en = 1,
+ .direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS,
+ .proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
+ .mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
+ .tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4,
+ .replay_win_sz = 0,
+ },
+
+ .aead = true,
+
+ .xform = {
+ .aead = {
+ .next = NULL,
+ .type = RTE_CRYPTO_SYM_XFORM_AEAD,
+ .aead = {
+ .op = RTE_CRYPTO_AEAD_OP_ENCRYPT,
+ .algo = RTE_CRYPTO_AEAD_AES_GCM,
+ .key.length = 16,
+ .iv.length = 12,
+ .iv.offset = 0,
+ .digest_length = 16,
+ .aad_length = 12,
+ },
+ },
+ },
+};
+
+struct ipsec_test_data conf_aes_128_gcm_v6_tunnel = {
+ .key = {
+ .data = {
+ 0xfe, 0xff, 0xe9, 0x92, 0x86, 0x65, 0x73, 0x1c,
+ 0x6d, 0x6a, 0x8f, 0x94, 0x67, 0x30, 0x83, 0x08
+ },
+ },
+
+ .salt = {
+ .data = {
+ 0xca, 0xfe, 0xba, 0xbe
+ },
+ .len = 4,
+ },
+
+ .iv = {
+ .data = {
+ 0xfa, 0xce, 0xdb, 0xad, 0xde, 0xca, 0xf8, 0x88
+ },
+ },
+
+ .ipsec_xform = {
+ .spi = 0xa5f8,
+ .salt = 0xbebafeca,
+ .options.esn = 0,
+ .options.udp_encap = 0,
+ .options.copy_dscp = 0,
+ .options.copy_flabel = 0,
+ .options.copy_df = 0,
+ .options.dec_ttl = 0,
+ .options.ecn = 0,
+ .options.stats = 0,
+ .options.tunnel_hdr_verify = 0,
+ .options.ip_csum_enable = 0,
+ .options.l4_csum_enable = 0,
+ .options.ip_reassembly_en = 1,
+ .direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS,
+ .proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
+ .mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
+ .tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV6,
+ .replay_win_sz = 0,
+ },
+
+ .aead = true,
+
+ .xform = {
+ .aead = {
+ .next = NULL,
+ .type = RTE_CRYPTO_SYM_XFORM_AEAD,
+ .aead = {
+ .op = RTE_CRYPTO_AEAD_OP_ENCRYPT,
+ .algo = RTE_CRYPTO_AEAD_AES_GCM,
+ .key.length = 16,
+ .iv.length = 12,
+ .iv.offset = 0,
+ .digest_length = 16,
+ .aad_length = 12,
+ },
+ },
+ },
+};
+
+const struct reassembly_vector ipv4_2frag_vector = {
+ .sa_data = &conf_aes_128_gcm,
+ .full_pkt = &pkt_ipv4_udp_p1,
+ .frags[0] = &pkt_ipv4_udp_p1_f1,
+ .frags[1] = &pkt_ipv4_udp_p1_f2,
+ .nb_frags = 2,
+ .burst = false,
+};
+
+const struct reassembly_vector ipv6_2frag_vector = {
+ .sa_data = &conf_aes_128_gcm_v6_tunnel,
+ .full_pkt = &pkt_ipv6_udp_p1,
+ .frags[0] = &pkt_ipv6_udp_p1_f1,
+ .frags[1] = &pkt_ipv6_udp_p1_f2,
+ .nb_frags = 2,
+ .burst = false,
+};
+
+const struct reassembly_vector ipv4_4frag_vector = {
+ .sa_data = &conf_aes_128_gcm,
+ .full_pkt = &pkt_ipv4_udp_p2,
+ .frags[0] = &pkt_ipv4_udp_p2_f1,
+ .frags[1] = &pkt_ipv4_udp_p2_f2,
+ .frags[2] = &pkt_ipv4_udp_p2_f3,
+ .frags[3] = &pkt_ipv4_udp_p2_f4,
+ .nb_frags = 4,
+ .burst = false,
+};
+
+const struct reassembly_vector ipv6_4frag_vector = {
+ .sa_data = &conf_aes_128_gcm_v6_tunnel,
+ .full_pkt = &pkt_ipv6_udp_p2,
+ .frags[0] = &pkt_ipv6_udp_p2_f1,
+ .frags[1] = &pkt_ipv6_udp_p2_f2,
+ .frags[2] = &pkt_ipv6_udp_p2_f3,
+ .frags[3] = &pkt_ipv6_udp_p2_f4,
+ .nb_frags = 4,
+ .burst = false,
+};
+const struct reassembly_vector ipv4_5frag_vector = {
+ .sa_data = &conf_aes_128_gcm,
+ .full_pkt = &pkt_ipv4_udp_p3,
+ .frags[0] = &pkt_ipv4_udp_p3_f1,
+ .frags[1] = &pkt_ipv4_udp_p3_f2,
+ .frags[2] = &pkt_ipv4_udp_p3_f3,
+ .frags[3] = &pkt_ipv4_udp_p3_f4,
+ .frags[4] = &pkt_ipv4_udp_p3_f5,
+ .nb_frags = 5,
+ .burst = false,
+};
+const struct reassembly_vector ipv6_5frag_vector = {
+ .sa_data = &conf_aes_128_gcm_v6_tunnel,
+ .full_pkt = &pkt_ipv6_udp_p3,
+ .frags[0] = &pkt_ipv6_udp_p3_f1,
+ .frags[1] = &pkt_ipv6_udp_p3_f2,
+ .frags[2] = &pkt_ipv6_udp_p3_f3,
+ .frags[3] = &pkt_ipv6_udp_p3_f4,
+ .frags[4] = &pkt_ipv6_udp_p3_f5,
+ .nb_frags = 5,
+ .burst = false,
+};
+/* Negative test cases. */
+const struct reassembly_vector ipv4_incomplete_vector = {
+ .sa_data = &conf_aes_128_gcm,
+ .full_pkt = &pkt_ipv4_udp_p2,
+ .frags[0] = &pkt_ipv4_udp_p2_f1,
+ .frags[1] = &pkt_ipv4_udp_p2_f2,
+ .nb_frags = 2,
+ .burst = false,
+};
+const struct reassembly_vector ipv4_overlap_vector = {
+ .sa_data = &conf_aes_128_gcm,
+ .full_pkt = &pkt_ipv4_udp_p1,
+ .frags[0] = &pkt_ipv4_udp_p1_f1,
+ .frags[1] = &pkt_ipv4_udp_p1_f1, /* Overlap */
+ .frags[2] = &pkt_ipv4_udp_p1_f2,
+ .nb_frags = 3,
+ .burst = false,
+};
+const struct reassembly_vector ipv4_out_of_order_vector = {
+ .sa_data = &conf_aes_128_gcm,
+ .full_pkt = &pkt_ipv4_udp_p2,
+ .frags[0] = &pkt_ipv4_udp_p2_f1,
+ .frags[1] = &pkt_ipv4_udp_p2_f3,
+ .frags[2] = &pkt_ipv4_udp_p2_f4,
+ .frags[3] = &pkt_ipv4_udp_p2_f2, /* out of order */
+ .nb_frags = 4,
+ .burst = false,
+};
+const struct reassembly_vector ipv4_4frag_burst_vector = {
+ .sa_data = &conf_aes_128_gcm,
+ .full_pkt = &pkt_ipv4_udp_p2,
+ .frags[0] = &pkt_ipv4_udp_p2_f1,
+ .frags[1] = &pkt_ipv4_udp_p2_f2,
+ .frags[2] = &pkt_ipv4_udp_p2_f3,
+ .frags[3] = &pkt_ipv4_udp_p2_f4,
+ .nb_frags = 4,
+ .burst = true,
+};
+
#endif
--
2.25.1
* [PATCH v4 05/10] test/security: add more inline IPsec functional cases
2022-04-16 19:25 ` [PATCH v4 00/10] app/test: add inline IPsec and reassembly cases Akhil Goyal
` (3 preceding siblings ...)
2022-04-16 19:25 ` [PATCH v4 04/10] test/security: add inline IPsec reassembly cases Akhil Goyal
@ 2022-04-16 19:25 ` Akhil Goyal
2022-04-16 19:25 ` [PATCH v4 06/10] test/security: add ESN and anti-replay cases for inline Akhil Goyal
` (6 subsequent siblings)
11 siblings, 0 replies; 184+ messages in thread
From: Akhil Goyal @ 2022-04-16 19:25 UTC (permalink / raw)
To: dev
Cc: thomas, david.marchand, hemant.agrawal, anoobj,
konstantin.ananyev, ciara.power, ferruh.yigit, andrew.rybchenko,
ndabilpuram, vattunuru, Akhil Goyal
Added more inline IPsec functional verification cases.
These cases do not have known test vectors; instead they
are verified with an encap + decap round trip for all the
algo combinations.
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
---
app/test/test_security_inline_proto.c | 517 ++++++++++++++++++++++++++
1 file changed, 517 insertions(+)
diff --git a/app/test/test_security_inline_proto.c b/app/test/test_security_inline_proto.c
index 9ddc3f7dd4..209999d02c 100644
--- a/app/test/test_security_inline_proto.c
+++ b/app/test/test_security_inline_proto.c
@@ -1313,6 +1313,394 @@ test_ipsec_inline_proto_display_list(const void *data __rte_unused)
return test_ipsec_inline_proto_all(&flags);
}
+static int
+test_ipsec_inline_proto_udp_encap(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.udp_encap = true;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_udp_ports_verify(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.udp_encap = true;
+ flags.udp_ports_verify = true;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_err_icv_corrupt(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.icv_corrupt = true;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_tunnel_dst_addr_verify(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.tunnel_hdr_verify = RTE_SECURITY_IPSEC_TUNNEL_VERIFY_DST_ADDR;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_tunnel_src_dst_addr_verify(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.tunnel_hdr_verify = RTE_SECURITY_IPSEC_TUNNEL_VERIFY_SRC_DST_ADDR;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_inner_ip_csum(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.ip_csum = true;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_inner_l4_csum(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.l4_csum = true;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_tunnel_v4_in_v4(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.ipv6 = false;
+ flags.tunnel_ipv6 = false;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_tunnel_v6_in_v6(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.ipv6 = true;
+ flags.tunnel_ipv6 = true;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_tunnel_v4_in_v6(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.ipv6 = false;
+ flags.tunnel_ipv6 = true;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_tunnel_v6_in_v4(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.ipv6 = true;
+ flags.tunnel_ipv6 = false;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_transport_v4(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.ipv6 = false;
+ flags.transport = true;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_transport_l4_csum(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags = {
+ .l4_csum = true,
+ .transport = true,
+ };
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_stats(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.stats_success = true;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_pkt_fragment(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.fragment = true;
+
+ return test_ipsec_inline_proto_all(&flags);
+
+}
+
+static int
+test_ipsec_inline_proto_copy_df_inner_0(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.df = TEST_IPSEC_COPY_DF_INNER_0;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_copy_df_inner_1(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.df = TEST_IPSEC_COPY_DF_INNER_1;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_set_df_0_inner_1(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.df = TEST_IPSEC_SET_DF_0_INNER_1;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_set_df_1_inner_0(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.df = TEST_IPSEC_SET_DF_1_INNER_0;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_ipv4_copy_dscp_inner_0(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.dscp = TEST_IPSEC_COPY_DSCP_INNER_0;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_ipv4_copy_dscp_inner_1(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.dscp = TEST_IPSEC_COPY_DSCP_INNER_1;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_ipv4_set_dscp_0_inner_1(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.dscp = TEST_IPSEC_SET_DSCP_0_INNER_1;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_ipv4_set_dscp_1_inner_0(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.dscp = TEST_IPSEC_SET_DSCP_1_INNER_0;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_ipv6_copy_dscp_inner_0(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.ipv6 = true;
+ flags.tunnel_ipv6 = true;
+ flags.dscp = TEST_IPSEC_COPY_DSCP_INNER_0;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_ipv6_copy_dscp_inner_1(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.ipv6 = true;
+ flags.tunnel_ipv6 = true;
+ flags.dscp = TEST_IPSEC_COPY_DSCP_INNER_1;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_ipv6_set_dscp_0_inner_1(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.ipv6 = true;
+ flags.tunnel_ipv6 = true;
+ flags.dscp = TEST_IPSEC_SET_DSCP_0_INNER_1;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_ipv6_set_dscp_1_inner_0(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.ipv6 = true;
+ flags.tunnel_ipv6 = true;
+ flags.dscp = TEST_IPSEC_SET_DSCP_1_INNER_0;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_ipv4_ttl_decrement(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags = {
+ .dec_ttl_or_hop_limit = true
+ };
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_ipv6_hop_limit_decrement(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags = {
+ .ipv6 = true,
+ .dec_ttl_or_hop_limit = true
+ };
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_iv_gen(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.iv_gen = true;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_known_vec_fragmented(const void *test_data)
+{
+ struct ipsec_test_data td_outb;
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+ flags.fragment = true;
+
+ memcpy(&td_outb, test_data, sizeof(td_outb));
+
+ /* Disable IV gen to be able to test with known vectors */
+ td_outb.ipsec_xform.options.iv_gen_disable = 1;
+
+ return test_ipsec_inline_proto_process(&td_outb, NULL, 1, false,
+ &flags);
+}
static struct unit_test_suite inline_ipsec_testsuite = {
.suite_name = "Inline IPsec Ethernet Device Unit Test Suite",
.setup = inline_ipsec_testsuite_setup,
@@ -1359,6 +1747,13 @@ static struct unit_test_suite inline_ipsec_testsuite = {
ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
test_ipsec_inline_proto_known_vec,
&pkt_null_aes_xcbc),
+
+ TEST_CASE_NAMED_WITH_DATA(
+ "Outbound fragmented packet",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_known_vec_fragmented,
+ &pkt_aes_128_gcm_frag),
+
TEST_CASE_NAMED_WITH_DATA(
"Inbound known vector (ESP tunnel mode IPv4 AES-GCM 128)",
ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
@@ -1410,6 +1805,128 @@ static struct unit_test_suite inline_ipsec_testsuite = {
ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
test_ipsec_inline_proto_display_list),
+ TEST_CASE_NAMED_ST(
+ "UDP encapsulation",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_udp_encap),
+ TEST_CASE_NAMED_ST(
+ "UDP encapsulation ports verification test",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_udp_ports_verify),
+ TEST_CASE_NAMED_ST(
+ "Negative test: ICV corruption",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_err_icv_corrupt),
+ TEST_CASE_NAMED_ST(
+ "Tunnel dst addr verification",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_tunnel_dst_addr_verify),
+ TEST_CASE_NAMED_ST(
+ "Tunnel src and dst addr verification",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_tunnel_src_dst_addr_verify),
+ TEST_CASE_NAMED_ST(
+ "Inner IP checksum",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_inner_ip_csum),
+ TEST_CASE_NAMED_ST(
+ "Inner L4 checksum",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_inner_l4_csum),
+ TEST_CASE_NAMED_ST(
+ "Tunnel IPv4 in IPv4",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_tunnel_v4_in_v4),
+ TEST_CASE_NAMED_ST(
+ "Tunnel IPv6 in IPv6",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_tunnel_v6_in_v6),
+ TEST_CASE_NAMED_ST(
+ "Tunnel IPv4 in IPv6",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_tunnel_v4_in_v6),
+ TEST_CASE_NAMED_ST(
+ "Tunnel IPv6 in IPv4",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_tunnel_v6_in_v4),
+ TEST_CASE_NAMED_ST(
+ "Transport IPv4",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_transport_v4),
+ TEST_CASE_NAMED_ST(
+ "Transport l4 checksum",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_transport_l4_csum),
+ TEST_CASE_NAMED_ST(
+ "Statistics: success",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_stats),
+ TEST_CASE_NAMED_ST(
+ "Fragmented packet",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_pkt_fragment),
+ TEST_CASE_NAMED_ST(
+ "Tunnel header copy DF (inner 0)",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_copy_df_inner_0),
+ TEST_CASE_NAMED_ST(
+ "Tunnel header copy DF (inner 1)",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_copy_df_inner_1),
+ TEST_CASE_NAMED_ST(
+ "Tunnel header set DF 0 (inner 1)",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_set_df_0_inner_1),
+ TEST_CASE_NAMED_ST(
+ "Tunnel header set DF 1 (inner 0)",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_set_df_1_inner_0),
+ TEST_CASE_NAMED_ST(
+ "Tunnel header IPv4 copy DSCP (inner 0)",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_ipv4_copy_dscp_inner_0),
+ TEST_CASE_NAMED_ST(
+ "Tunnel header IPv4 copy DSCP (inner 1)",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_ipv4_copy_dscp_inner_1),
+ TEST_CASE_NAMED_ST(
+ "Tunnel header IPv4 set DSCP 0 (inner 1)",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_ipv4_set_dscp_0_inner_1),
+ TEST_CASE_NAMED_ST(
+ "Tunnel header IPv4 set DSCP 1 (inner 0)",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_ipv4_set_dscp_1_inner_0),
+ TEST_CASE_NAMED_ST(
+ "Tunnel header IPv6 copy DSCP (inner 0)",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_ipv6_copy_dscp_inner_0),
+ TEST_CASE_NAMED_ST(
+ "Tunnel header IPv6 copy DSCP (inner 1)",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_ipv6_copy_dscp_inner_1),
+ TEST_CASE_NAMED_ST(
+ "Tunnel header IPv6 set DSCP 0 (inner 1)",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_ipv6_set_dscp_0_inner_1),
+ TEST_CASE_NAMED_ST(
+ "Tunnel header IPv6 set DSCP 1 (inner 0)",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_ipv6_set_dscp_1_inner_0),
+ TEST_CASE_NAMED_ST(
+ "Tunnel header IPv4 decrement inner TTL",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_ipv4_ttl_decrement),
+ TEST_CASE_NAMED_ST(
+ "Tunnel header IPv6 decrement inner hop limit",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_ipv6_hop_limit_decrement),
+ TEST_CASE_NAMED_ST(
+ "IV generation",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_iv_gen),
+
+
TEST_CASE_NAMED_WITH_DATA(
"IPv4 Reassembly with 2 fragments",
ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* [PATCH v4 06/10] test/security: add ESN and anti-replay cases for inline
2022-04-16 19:25 ` [PATCH v4 00/10] app/test: add inline IPsec and reassembly cases Akhil Goyal
` (4 preceding siblings ...)
2022-04-16 19:25 ` [PATCH v4 05/10] test/security: add more inline IPsec functional cases Akhil Goyal
@ 2022-04-16 19:25 ` Akhil Goyal
2022-04-16 19:25 ` [PATCH v4 07/10] ethdev: add IPsec SA expiry event subtypes Akhil Goyal
` (5 subsequent siblings)
11 siblings, 0 replies; 184+ messages in thread
From: Akhil Goyal @ 2022-04-16 19:25 UTC (permalink / raw)
To: dev
Cc: thomas, david.marchand, hemant.agrawal, anoobj,
konstantin.ananyev, ciara.power, ferruh.yigit, andrew.rybchenko,
ndabilpuram, vattunuru, Akhil Goyal
Added cases to test anti-replay for inline IPsec processing,
with and without extended sequence number (ESN) support.
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
---
app/test/test_security_inline_proto.c | 308 ++++++++++++++++++++++++++
1 file changed, 308 insertions(+)
diff --git a/app/test/test_security_inline_proto.c b/app/test/test_security_inline_proto.c
index 209999d02c..f8d6adc88f 100644
--- a/app/test/test_security_inline_proto.c
+++ b/app/test/test_security_inline_proto.c
@@ -1092,6 +1092,136 @@ test_ipsec_inline_proto_all(const struct ipsec_test_flags *flags)
return TEST_SKIPPED;
}
+static int
+test_ipsec_inline_proto_process_with_esn(struct ipsec_test_data td[],
+ struct ipsec_test_data res_d[],
+ int nb_pkts,
+ bool silent,
+ const struct ipsec_test_flags *flags)
+{
+ struct rte_security_session_conf sess_conf = {0};
+ struct ipsec_test_data *res_d_tmp = NULL;
+ struct rte_crypto_sym_xform cipher = {0};
+ struct rte_crypto_sym_xform auth = {0};
+ struct rte_crypto_sym_xform aead = {0};
+ struct rte_mbuf *rx_pkt = NULL;
+ struct rte_mbuf *tx_pkt = NULL;
+ int nb_rx, nb_sent;
+ struct rte_security_session *ses;
+ struct rte_security_ctx *ctx;
+ uint32_t ol_flags;
+ int i, ret;
+
+ if (td[0].aead) {
+ sess_conf.crypto_xform = &aead;
+ } else {
+ if (td[0].ipsec_xform.direction ==
+ RTE_SECURITY_IPSEC_SA_DIR_EGRESS) {
+ sess_conf.crypto_xform = &cipher;
+ sess_conf.crypto_xform->type = RTE_CRYPTO_SYM_XFORM_CIPHER;
+ sess_conf.crypto_xform->next = &auth;
+ sess_conf.crypto_xform->next->type = RTE_CRYPTO_SYM_XFORM_AUTH;
+ } else {
+ sess_conf.crypto_xform = &auth;
+ sess_conf.crypto_xform->type = RTE_CRYPTO_SYM_XFORM_AUTH;
+ sess_conf.crypto_xform->next = &cipher;
+ sess_conf.crypto_xform->next->type = RTE_CRYPTO_SYM_XFORM_CIPHER;
+ }
+ }
+
+ /* Create Inline IPsec session. */
+ ret = create_inline_ipsec_session(&td[0], port_id, &ses, &ctx,
+ &ol_flags, flags, &sess_conf);
+ if (ret)
+ return ret;
+
+ if (td[0].ipsec_xform.direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS)
+ create_default_flow(port_id);
+
+ for (i = 0; i < nb_pkts; i++) {
+ tx_pkt = init_packet(mbufpool, td[i].input_text.data,
+ td[i].input_text.len);
+ if (tx_pkt == NULL) {
+ ret = TEST_FAILED;
+ goto out;
+ }
+
+ if (test_ipsec_pkt_update(rte_pktmbuf_mtod_offset(tx_pkt,
+ uint8_t *, RTE_ETHER_HDR_LEN), flags)) {
+ ret = TEST_FAILED;
+ goto out;
+ }
+
+ if (td[i].ipsec_xform.direction ==
+ RTE_SECURITY_IPSEC_SA_DIR_EGRESS) {
+ if (flags->antireplay) {
+ sess_conf.ipsec.esn.value =
+ td[i].ipsec_xform.esn.value;
+ ret = rte_security_session_update(ctx, ses,
+ &sess_conf);
+ if (ret) {
+ printf("Could not update ESN in session\n");
+ rte_pktmbuf_free(tx_pkt);
+ goto out;
+ }
+ }
+ if (ol_flags & RTE_SECURITY_TX_OLOAD_NEED_MDATA)
+ rte_security_set_pkt_metadata(ctx, ses,
+ tx_pkt, NULL);
+ tx_pkt->ol_flags |= RTE_MBUF_F_TX_SEC_OFFLOAD;
+ }
+ /* Send packet to ethdev for inline IPsec processing. */
+ nb_sent = rte_eth_tx_burst(port_id, 0, &tx_pkt, 1);
+ if (nb_sent != 1) {
+ printf("\nUnable to TX packets");
+ rte_pktmbuf_free(tx_pkt);
+ ret = TEST_FAILED;
+ goto out;
+ }
+
+ rte_pause();
+
+ /* Receive back packet on loopback interface. */
+ do {
+ rte_delay_ms(1);
+ nb_rx = rte_eth_rx_burst(port_id, 0, &rx_pkt, 1);
+ } while (nb_rx == 0);
+
+ rte_pktmbuf_adj(rx_pkt, RTE_ETHER_HDR_LEN);
+
+ if (res_d != NULL)
+ res_d_tmp = &res_d[i];
+
+ ret = test_ipsec_post_process(rx_pkt, &td[i],
+ res_d_tmp, silent, flags);
+ if (ret != TEST_SUCCESS) {
+ rte_pktmbuf_free(rx_pkt);
+ goto out;
+ }
+
+ ret = test_ipsec_stats_verify(ctx, ses, flags,
+ td->ipsec_xform.direction);
+ if (ret != TEST_SUCCESS) {
+ rte_pktmbuf_free(rx_pkt);
+ goto out;
+ }
+
+ rte_pktmbuf_free(rx_pkt);
+ rx_pkt = NULL;
+ tx_pkt = NULL;
+ res_d_tmp = NULL;
+ }
+
+out:
+ if (td->ipsec_xform.direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS)
+ destroy_default_flow(port_id);
+
+ /* Destroy session so that other cases can create the session again */
+ rte_security_session_destroy(ctx, ses);
+ ses = NULL;
+
+ return ret;
+}
static int
ut_setup_inline_ipsec(void)
@@ -1701,6 +1831,153 @@ test_ipsec_inline_proto_known_vec_fragmented(const void *test_data)
return test_ipsec_inline_proto_process(&td_outb, NULL, 1, false,
&flags);
}
+
+static int
+test_ipsec_inline_pkt_replay(const void *test_data, const uint64_t esn[],
+ bool replayed_pkt[], uint32_t nb_pkts, bool esn_en,
+ uint64_t winsz)
+{
+ struct ipsec_test_data td_outb[IPSEC_TEST_PACKETS_MAX];
+ struct ipsec_test_data td_inb[IPSEC_TEST_PACKETS_MAX];
+ struct ipsec_test_flags flags;
+ uint32_t i, ret = 0;
+
+ memset(&flags, 0, sizeof(flags));
+ flags.antireplay = true;
+
+ for (i = 0; i < nb_pkts; i++) {
+ memcpy(&td_outb[i], test_data, sizeof(td_outb[i]));
+ td_outb[i].ipsec_xform.options.iv_gen_disable = 1;
+ td_outb[i].ipsec_xform.replay_win_sz = winsz;
+ td_outb[i].ipsec_xform.options.esn = esn_en;
+ }
+
+ for (i = 0; i < nb_pkts; i++)
+ td_outb[i].ipsec_xform.esn.value = esn[i];
+
+ ret = test_ipsec_inline_proto_process_with_esn(td_outb, td_inb,
+ nb_pkts, true, &flags);
+ if (ret != TEST_SUCCESS)
+ return ret;
+
+ test_ipsec_td_update(td_inb, td_outb, nb_pkts, &flags);
+
+ for (i = 0; i < nb_pkts; i++) {
+ td_inb[i].ipsec_xform.options.esn = esn_en;
+ /* Set antireplay flag for packets to be dropped */
+ td_inb[i].ar_packet = replayed_pkt[i];
+ }
+
+ ret = test_ipsec_inline_proto_process_with_esn(td_inb, NULL, nb_pkts,
+ true, &flags);
+
+ return ret;
+}
+
+static int
+test_ipsec_inline_proto_pkt_antireplay(const void *test_data, uint64_t winsz)
+{
+
+ uint32_t nb_pkts = 5;
+ bool replayed_pkt[5];
+ uint64_t esn[5];
+
+ /* 1. Advance the TOP of the window to WS * 2 */
+ esn[0] = winsz * 2;
+ /* 2. Test sequence number within the new window (WS + 1) */
+ esn[1] = winsz + 1;
+ /* 3. Test sequence number less than the window BOTTOM */
+ esn[2] = winsz;
+ /* 4. Test sequence number in the middle of the window */
+ esn[3] = winsz + (winsz / 2);
+ /* 5. Test replay of the packet in the middle of the window */
+ esn[4] = winsz + (winsz / 2);
+
+ replayed_pkt[0] = false;
+ replayed_pkt[1] = false;
+ replayed_pkt[2] = true;
+ replayed_pkt[3] = false;
+ replayed_pkt[4] = true;
+
+ return test_ipsec_inline_pkt_replay(test_data, esn, replayed_pkt,
+ nb_pkts, false, winsz);
+}
+
+static int
+test_ipsec_inline_proto_pkt_antireplay1024(const void *test_data)
+{
+ return test_ipsec_inline_proto_pkt_antireplay(test_data, 1024);
+}
+
+static int
+test_ipsec_inline_proto_pkt_antireplay2048(const void *test_data)
+{
+ return test_ipsec_inline_proto_pkt_antireplay(test_data, 2048);
+}
+
+static int
+test_ipsec_inline_proto_pkt_antireplay4096(const void *test_data)
+{
+ return test_ipsec_inline_proto_pkt_antireplay(test_data, 4096);
+}
+
+static int
+test_ipsec_inline_proto_pkt_esn_antireplay(const void *test_data, uint64_t winsz)
+{
+
+ uint32_t nb_pkts = 7;
+ bool replayed_pkt[7];
+ uint64_t esn[7];
+
+ /* Set the initial sequence number */
+ esn[0] = (uint64_t)(0xFFFFFFFF - winsz);
+ /* 1. Advance the TOP of the window to (1<<32 + WS/2) */
+ esn[1] = (uint64_t)((1ULL << 32) + (winsz / 2));
+ /* 2. Test sequence number within new window (1<<32 - WS/2 + 1) */
+ esn[2] = (uint64_t)((1ULL << 32) - (winsz / 2) + 1);
+ /* 3. Test with sequence number within window (1<<32 - 1) */
+ esn[3] = (uint64_t)((1ULL << 32) - 1);
+ /* 4. Test with sequence number within window (1<<32) */
+ esn[4] = (uint64_t)(1ULL << 32);
+ /* 5. Test with duplicate sequence number within
+ * new window (1<<32 - 1)
+ */
+ esn[5] = (uint64_t)((1ULL << 32) - 1);
+ /* 6. Test with duplicate sequence number within new window (1<<32) */
+ esn[6] = (uint64_t)(1ULL << 32);
+
+ replayed_pkt[0] = false;
+ replayed_pkt[1] = false;
+ replayed_pkt[2] = false;
+ replayed_pkt[3] = false;
+ replayed_pkt[4] = false;
+ replayed_pkt[5] = true;
+ replayed_pkt[6] = true;
+
+ return test_ipsec_inline_pkt_replay(test_data, esn, replayed_pkt, nb_pkts,
+ true, winsz);
+}
+
+static int
+test_ipsec_inline_proto_pkt_esn_antireplay1024(const void *test_data)
+{
+ return test_ipsec_inline_proto_pkt_esn_antireplay(test_data, 1024);
+}
+
+static int
+test_ipsec_inline_proto_pkt_esn_antireplay2048(const void *test_data)
+{
+ return test_ipsec_inline_proto_pkt_esn_antireplay(test_data, 2048);
+}
+
+static int
+test_ipsec_inline_proto_pkt_esn_antireplay4096(const void *test_data)
+{
+ return test_ipsec_inline_proto_pkt_esn_antireplay(test_data, 4096);
+}
+
+
+
static struct unit_test_suite inline_ipsec_testsuite = {
.suite_name = "Inline IPsec Ethernet Device Unit Test Suite",
.setup = inline_ipsec_testsuite_setup,
@@ -1927,6 +2204,37 @@ static struct unit_test_suite inline_ipsec_testsuite = {
test_ipsec_inline_proto_iv_gen),
+ TEST_CASE_NAMED_WITH_DATA(
+ "Antireplay with window size 1024",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_pkt_antireplay1024,
+ &pkt_aes_128_gcm),
+ TEST_CASE_NAMED_WITH_DATA(
+ "Antireplay with window size 2048",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_pkt_antireplay2048,
+ &pkt_aes_128_gcm),
+ TEST_CASE_NAMED_WITH_DATA(
+ "Antireplay with window size 4096",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_pkt_antireplay4096,
+ &pkt_aes_128_gcm),
+ TEST_CASE_NAMED_WITH_DATA(
+ "ESN and Antireplay with window size 1024",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_pkt_esn_antireplay1024,
+ &pkt_aes_128_gcm),
+ TEST_CASE_NAMED_WITH_DATA(
+ "ESN and Antireplay with window size 2048",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_pkt_esn_antireplay2048,
+ &pkt_aes_128_gcm),
+ TEST_CASE_NAMED_WITH_DATA(
+ "ESN and Antireplay with window size 4096",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_pkt_esn_antireplay4096,
+ &pkt_aes_128_gcm),
+
TEST_CASE_NAMED_WITH_DATA(
"IPv4 Reassembly with 2 fragments",
ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* [PATCH v4 07/10] ethdev: add IPsec SA expiry event subtypes
2022-04-16 19:25 ` [PATCH v4 00/10] app/test: add inline IPsec and reassembly cases Akhil Goyal
` (5 preceding siblings ...)
2022-04-16 19:25 ` [PATCH v4 06/10] test/security: add ESN and anti-replay cases for inline Akhil Goyal
@ 2022-04-16 19:25 ` Akhil Goyal
2022-04-19 8:58 ` Thomas Monjalon
2022-09-24 13:57 ` [PATCH v5 0/3] Add and test IPsec SA expiry events Akhil Goyal
2022-04-16 19:25 ` [PATCH v4 08/10] test/security: add inline IPsec SA soft " Akhil Goyal
` (4 subsequent siblings)
11 siblings, 2 replies; 184+ messages in thread
From: Akhil Goyal @ 2022-04-16 19:25 UTC (permalink / raw)
To: dev
Cc: thomas, david.marchand, hemant.agrawal, anoobj,
konstantin.ananyev, ciara.power, ferruh.yigit, andrew.rybchenko,
ndabilpuram, vattunuru
From: Vamsi Attunuru <vattunuru@marvell.com>
This patch adds new event subtypes to notify the application
when an IPsec SA reaches its soft packet expiry limit or its
hard packet/byte expiry limits.
Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
---
lib/ethdev/rte_ethdev.h | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index 04cff8ee10..08819fe4ba 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -3828,6 +3828,12 @@ enum rte_eth_event_ipsec_subtype {
RTE_ETH_EVENT_IPSEC_SA_TIME_EXPIRY,
/** Soft byte expiry of SA */
RTE_ETH_EVENT_IPSEC_SA_BYTE_EXPIRY,
+ /** Soft packet expiry of SA */
+ RTE_ETH_EVENT_IPSEC_SA_PKT_EXPIRY,
+ /** Hard byte expiry of SA */
+ RTE_ETH_EVENT_IPSEC_SA_BYTE_HARD_EXPIRY,
+ /** Hard packet expiry of SA */
+ RTE_ETH_EVENT_IPSEC_SA_PKT_HARD_EXPIRY,
/** Max value of this enum */
RTE_ETH_EVENT_IPSEC_MAX
};
@@ -3849,6 +3855,9 @@ struct rte_eth_event_ipsec_desc {
* - @ref RTE_ETH_EVENT_IPSEC_ESN_OVERFLOW
* - @ref RTE_ETH_EVENT_IPSEC_SA_TIME_EXPIRY
* - @ref RTE_ETH_EVENT_IPSEC_SA_BYTE_EXPIRY
+ * - @ref RTE_ETH_EVENT_IPSEC_SA_PKT_EXPIRY
+ * - @ref RTE_ETH_EVENT_IPSEC_SA_BYTE_HARD_EXPIRY
+ * - @ref RTE_ETH_EVENT_IPSEC_SA_PKT_HARD_EXPIRY
*
* @see struct rte_security_session_conf
*
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* [PATCH v4 08/10] test/security: add inline IPsec SA soft expiry cases
2022-04-16 19:25 ` [PATCH v4 00/10] app/test: add inline IPsec and reassembly cases Akhil Goyal
` (6 preceding siblings ...)
2022-04-16 19:25 ` [PATCH v4 07/10] ethdev: add IPsec SA expiry event subtypes Akhil Goyal
@ 2022-04-16 19:25 ` Akhil Goyal
2022-04-16 19:25 ` [PATCH v4 09/10] test/security: add inline IPsec SA hard " Akhil Goyal
` (3 subsequent siblings)
11 siblings, 0 replies; 184+ messages in thread
From: Akhil Goyal @ 2022-04-16 19:25 UTC (permalink / raw)
To: dev
Cc: thomas, david.marchand, hemant.agrawal, anoobj,
konstantin.ananyev, ciara.power, ferruh.yigit, andrew.rybchenko,
ndabilpuram, vattunuru
From: Vamsi Attunuru <vattunuru@marvell.com>
This patch adds unit tests for the packet and byte soft expiry events.
Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
---
app/test/test_cryptodev_security_ipsec.h | 2 +
app/test/test_security_inline_proto.c | 105 +++++++++++++++++-
app/test/test_security_inline_proto_vectors.h | 6 +
3 files changed, 112 insertions(+), 1 deletion(-)
diff --git a/app/test/test_cryptodev_security_ipsec.h b/app/test/test_cryptodev_security_ipsec.h
index 0d9b5b6e2e..418ab16ba6 100644
--- a/app/test/test_cryptodev_security_ipsec.h
+++ b/app/test/test_cryptodev_security_ipsec.h
@@ -77,6 +77,8 @@ struct ipsec_test_flags {
bool display_alg;
bool sa_expiry_pkts_soft;
bool sa_expiry_pkts_hard;
+ bool sa_expiry_bytes_soft;
+ bool sa_expiry_bytes_hard;
bool icv_corrupt;
bool iv_gen;
uint32_t tunnel_hdr_verify;
diff --git a/app/test/test_security_inline_proto.c b/app/test/test_security_inline_proto.c
index f8d6adc88f..5b111af53e 100644
--- a/app/test/test_security_inline_proto.c
+++ b/app/test/test_security_inline_proto.c
@@ -874,6 +874,62 @@ test_ipsec_with_reassembly(struct reassembly_vector *vector,
return ret;
}
+static int
+test_ipsec_inline_sa_exp_event_callback(uint16_t port_id,
+ enum rte_eth_event_type type, void *param, void *ret_param)
+{
+ struct sa_expiry_vector *vector = (struct sa_expiry_vector *)param;
+ struct rte_eth_event_ipsec_desc *event_desc = NULL;
+
+ RTE_SET_USED(port_id);
+
+ if (type != RTE_ETH_EVENT_IPSEC)
+ return -1;
+
+ event_desc = ret_param;
+ if (event_desc == NULL) {
+ printf("Event descriptor not set\n");
+ return -1;
+ }
+ vector->notify_event = true;
+ if (event_desc->metadata != (uint64_t)vector->sa_data) {
+ printf("Mismatch in event specific metadata\n");
+ return -1;
+ }
+ if (event_desc->subtype == RTE_ETH_EVENT_IPSEC_SA_PKT_EXPIRY) {
+ vector->event = RTE_ETH_EVENT_IPSEC_SA_PKT_EXPIRY;
+ return 0;
+ } else if (event_desc->subtype == RTE_ETH_EVENT_IPSEC_SA_BYTE_EXPIRY) {
+ vector->event = RTE_ETH_EVENT_IPSEC_SA_BYTE_EXPIRY;
+ return 0;
+ } else if (event_desc->subtype >= RTE_ETH_EVENT_IPSEC_MAX) {
+ printf("Invalid IPsec event reported\n");
+ return -1;
+ }
+
+ return -1;
+}
+
+static enum rte_eth_event_ipsec_subtype
+test_ipsec_inline_setup_expiry_vector(struct sa_expiry_vector *vector,
+ const struct ipsec_test_flags *flags,
+ struct ipsec_test_data *tdata)
+{
+ enum rte_eth_event_ipsec_subtype event = RTE_ETH_EVENT_IPSEC_UNKNOWN;
+
+ vector->event = RTE_ETH_EVENT_IPSEC_UNKNOWN;
+ vector->notify_event = false;
+ vector->sa_data = (void *)tdata;
+ if (flags->sa_expiry_pkts_soft)
+ event = RTE_ETH_EVENT_IPSEC_SA_PKT_EXPIRY;
+ else
+ event = RTE_ETH_EVENT_IPSEC_SA_BYTE_EXPIRY;
+ rte_eth_dev_callback_register(port_id, RTE_ETH_EVENT_IPSEC,
+ test_ipsec_inline_sa_exp_event_callback, vector);
+
+ return event;
+}
+
static int
test_ipsec_inline_proto_process(struct ipsec_test_data *td,
struct ipsec_test_data *res_d,
@@ -881,10 +937,12 @@ test_ipsec_inline_proto_process(struct ipsec_test_data *td,
bool silent,
const struct ipsec_test_flags *flags)
{
+ enum rte_eth_event_ipsec_subtype event = RTE_ETH_EVENT_IPSEC_UNKNOWN;
struct rte_security_session_conf sess_conf = {0};
struct rte_crypto_sym_xform cipher = {0};
struct rte_crypto_sym_xform auth = {0};
struct rte_crypto_sym_xform aead = {0};
+ struct sa_expiry_vector vector = {0};
struct rte_security_session *ses;
struct rte_security_ctx *ctx;
int nb_rx = 0, nb_sent;
@@ -893,6 +951,12 @@ test_ipsec_inline_proto_process(struct ipsec_test_data *td,
memset(rx_pkts_burst, 0, sizeof(rx_pkts_burst[0]) * nb_pkts);
+ if (flags->sa_expiry_pkts_soft || flags->sa_expiry_bytes_soft) {
+ if (td->ipsec_xform.direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS)
+ return TEST_SUCCESS;
+ event = test_ipsec_inline_setup_expiry_vector(&vector, flags, td);
+ }
+
if (td->aead) {
sess_conf.crypto_xform = &aead;
} else {
@@ -999,6 +1063,15 @@ test_ipsec_inline_proto_process(struct ipsec_test_data *td,
out:
if (td->ipsec_xform.direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS)
destroy_default_flow(port_id);
+ if (flags->sa_expiry_pkts_soft || flags->sa_expiry_bytes_soft) {
+ if (vector.notify_event && (vector.event == event))
+ ret = TEST_SUCCESS;
+ else
+ ret = TEST_FAILED;
+
+ rte_eth_dev_callback_unregister(port_id, RTE_ETH_EVENT_IPSEC,
+ test_ipsec_inline_sa_exp_event_callback, &vector);
+ }
/* Destroy session so that other cases can create the session again */
rte_security_session_destroy(ctx, ses);
@@ -1016,6 +1089,7 @@ test_ipsec_inline_proto_all(const struct ipsec_test_flags *flags)
int ret;
if (flags->iv_gen || flags->sa_expiry_pkts_soft ||
+ flags->sa_expiry_bytes_soft ||
flags->sa_expiry_pkts_hard)
nb_pkts = IPSEC_TEST_PACKETS_MAX;
@@ -1048,6 +1122,11 @@ test_ipsec_inline_proto_all(const struct ipsec_test_flags *flags)
if (flags->udp_encap)
td_outb.ipsec_xform.options.udp_encap = 1;
+ if (flags->sa_expiry_bytes_soft)
+ td_outb.ipsec_xform.life.bytes_soft_limit =
+ (((td_outb.output_text.len + RTE_ETHER_HDR_LEN)
+ * nb_pkts) >> 3) - 1;
+
ret = test_ipsec_inline_proto_process(&td_outb, &td_inb, nb_pkts,
false, flags);
if (ret == TEST_SKIPPED)
@@ -1814,6 +1893,23 @@ test_ipsec_inline_proto_iv_gen(const void *data __rte_unused)
return test_ipsec_inline_proto_all(&flags);
}
+static int
+test_ipsec_inline_proto_sa_pkt_soft_expiry(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags = {
+ .sa_expiry_pkts_soft = true
+ };
+ return test_ipsec_inline_proto_all(&flags);
+}
+static int
+test_ipsec_inline_proto_sa_byte_soft_expiry(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags = {
+ .sa_expiry_bytes_soft = true
+ };
+ return test_ipsec_inline_proto_all(&flags);
+}
+
static int
test_ipsec_inline_proto_known_vec_fragmented(const void *test_data)
{
@@ -2202,7 +2298,14 @@ static struct unit_test_suite inline_ipsec_testsuite = {
"IV generation",
ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
test_ipsec_inline_proto_iv_gen),
-
+ TEST_CASE_NAMED_ST(
+ "SA soft expiry with packet limit",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_sa_pkt_soft_expiry),
+ TEST_CASE_NAMED_ST(
+ "SA soft expiry with byte limit",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_sa_byte_soft_expiry),
TEST_CASE_NAMED_WITH_DATA(
"Antireplay with window size 1024",
diff --git a/app/test/test_security_inline_proto_vectors.h b/app/test/test_security_inline_proto_vectors.h
index c18965d80f..003537e200 100644
--- a/app/test/test_security_inline_proto_vectors.h
+++ b/app/test/test_security_inline_proto_vectors.h
@@ -36,6 +36,12 @@ struct reassembly_vector {
bool burst;
};
+struct sa_expiry_vector {
+ struct ipsec_session_data *sa_data;
+ enum rte_eth_event_ipsec_subtype event;
+ bool notify_event;
+};
+
/* The source file includes below test vectors */
/* IPv6:
*
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* [PATCH v4 09/10] test/security: add inline IPsec SA hard expiry cases
2022-04-16 19:25 ` [PATCH v4 00/10] app/test: add inline IPsec and reassembly cases Akhil Goyal
` (7 preceding siblings ...)
2022-04-16 19:25 ` [PATCH v4 08/10] test/security: add inline IPsec SA soft " Akhil Goyal
@ 2022-04-16 19:25 ` Akhil Goyal
2022-04-16 19:25 ` [PATCH v4 10/10] test/security: add inline IPsec IPv6 flow label cases Akhil Goyal
` (2 subsequent siblings)
11 siblings, 0 replies; 184+ messages in thread
From: Akhil Goyal @ 2022-04-16 19:25 UTC (permalink / raw)
To: dev
Cc: thomas, david.marchand, hemant.agrawal, anoobj,
konstantin.ananyev, ciara.power, ferruh.yigit, andrew.rybchenko,
ndabilpuram, vattunuru
From: Vamsi Attunuru <vattunuru@marvell.com>
Patch adds hard expiry unit tests for both packet
and byte limits.
Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
---
app/test/test_security_inline_proto.c | 71 +++++++++++++++++++++++----
1 file changed, 61 insertions(+), 10 deletions(-)
diff --git a/app/test/test_security_inline_proto.c b/app/test/test_security_inline_proto.c
index 5b111af53e..15f08a2d6c 100644
--- a/app/test/test_security_inline_proto.c
+++ b/app/test/test_security_inline_proto.c
@@ -896,18 +896,25 @@ test_ipsec_inline_sa_exp_event_callback(uint16_t port_id,
printf("Mismatch in event specific metadata\n");
return -1;
}
- if (event_desc->subtype == RTE_ETH_EVENT_IPSEC_SA_PKT_EXPIRY) {
+ switch (event_desc->subtype) {
+ case RTE_ETH_EVENT_IPSEC_SA_PKT_EXPIRY:
vector->event = RTE_ETH_EVENT_IPSEC_SA_PKT_EXPIRY;
- return 0;
- } else if (event_desc->subtype == RTE_ETH_EVENT_IPSEC_SA_BYTE_EXPIRY) {
+ break;
+ case RTE_ETH_EVENT_IPSEC_SA_BYTE_EXPIRY:
vector->event = RTE_ETH_EVENT_IPSEC_SA_BYTE_EXPIRY;
- return 0;
- } else if (event_desc->subtype >= RTE_ETH_EVENT_IPSEC_MAX) {
+ break;
+ case RTE_ETH_EVENT_IPSEC_SA_PKT_HARD_EXPIRY:
+ vector->event = RTE_ETH_EVENT_IPSEC_SA_PKT_HARD_EXPIRY;
+ break;
+ case RTE_ETH_EVENT_IPSEC_SA_BYTE_HARD_EXPIRY:
+ vector->event = RTE_ETH_EVENT_IPSEC_SA_BYTE_HARD_EXPIRY;
+ break;
+ default:
printf("Invalid IPsec event reported\n");
return -1;
}
- return -1;
+ return 0;
}
static enum rte_eth_event_ipsec_subtype
@@ -922,8 +929,12 @@ test_ipsec_inline_setup_expiry_vector(struct sa_expiry_vector *vector,
vector->sa_data = (void *)tdata;
if (flags->sa_expiry_pkts_soft)
event = RTE_ETH_EVENT_IPSEC_SA_PKT_EXPIRY;
- else
+ else if (flags->sa_expiry_bytes_soft)
event = RTE_ETH_EVENT_IPSEC_SA_BYTE_EXPIRY;
+ else if (flags->sa_expiry_pkts_hard)
+ event = RTE_ETH_EVENT_IPSEC_SA_PKT_HARD_EXPIRY;
+ else
+ event = RTE_ETH_EVENT_IPSEC_SA_BYTE_HARD_EXPIRY;
rte_eth_dev_callback_register(port_id, RTE_ETH_EVENT_IPSEC,
test_ipsec_inline_sa_exp_event_callback, vector);
@@ -951,7 +962,8 @@ test_ipsec_inline_proto_process(struct ipsec_test_data *td,
memset(rx_pkts_burst, 0, sizeof(rx_pkts_burst[0]) * nb_pkts);
- if (flags->sa_expiry_pkts_soft || flags->sa_expiry_bytes_soft) {
+ if (flags->sa_expiry_pkts_soft || flags->sa_expiry_bytes_soft ||
+ flags->sa_expiry_pkts_hard || flags->sa_expiry_bytes_hard) {
if (td->ipsec_xform.direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS)
return TEST_SUCCESS;
event = test_ipsec_inline_setup_expiry_vector(&vector, flags, td);
@@ -1029,7 +1041,9 @@ test_ipsec_inline_proto_process(struct ipsec_test_data *td,
break;
} while (j++ < 5 || nb_rx == 0);
- if (nb_rx != nb_sent) {
+ if (!flags->sa_expiry_pkts_hard &&
+ !flags->sa_expiry_bytes_hard &&
+ (nb_rx != nb_sent)) {
printf("\nUnable to RX all %d packets", nb_sent);
while(--nb_rx)
rte_pktmbuf_free(rx_pkts_burst[nb_rx]);
@@ -1063,7 +1077,8 @@ test_ipsec_inline_proto_process(struct ipsec_test_data *td,
out:
if (td->ipsec_xform.direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS)
destroy_default_flow(port_id);
- if (flags->sa_expiry_pkts_soft || flags->sa_expiry_bytes_soft) {
+ if (flags->sa_expiry_pkts_soft || flags->sa_expiry_bytes_soft ||
+ flags->sa_expiry_pkts_hard || flags->sa_expiry_bytes_hard) {
if (vector.notify_event && (vector.event == event))
ret = TEST_SUCCESS;
else
@@ -1090,6 +1105,7 @@ test_ipsec_inline_proto_all(const struct ipsec_test_flags *flags)
if (flags->iv_gen || flags->sa_expiry_pkts_soft ||
flags->sa_expiry_bytes_soft ||
+ flags->sa_expiry_bytes_hard ||
flags->sa_expiry_pkts_hard)
nb_pkts = IPSEC_TEST_PACKETS_MAX;
@@ -1126,6 +1142,13 @@ test_ipsec_inline_proto_all(const struct ipsec_test_flags *flags)
td_outb.ipsec_xform.life.bytes_soft_limit =
(((td_outb.output_text.len + RTE_ETHER_HDR_LEN)
* nb_pkts) >> 3) - 1;
+ if (flags->sa_expiry_pkts_hard)
+ td_outb.ipsec_xform.life.packets_hard_limit =
+ IPSEC_TEST_PACKETS_MAX - 1;
+ if (flags->sa_expiry_bytes_hard)
+ td_outb.ipsec_xform.life.bytes_hard_limit =
+ (((td_outb.output_text.len + RTE_ETHER_HDR_LEN)
+ * nb_pkts) >> 3) - 1;
ret = test_ipsec_inline_proto_process(&td_outb, &td_inb, nb_pkts,
false, flags);
@@ -1910,6 +1933,26 @@ test_ipsec_inline_proto_sa_byte_soft_expiry(const void *data __rte_unused)
return test_ipsec_inline_proto_all(&flags);
}
+static int
+test_ipsec_inline_proto_sa_pkt_hard_expiry(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags = {
+ .sa_expiry_pkts_hard = true
+ };
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_sa_byte_hard_expiry(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags = {
+ .sa_expiry_bytes_hard = true
+ };
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
static int
test_ipsec_inline_proto_known_vec_fragmented(const void *test_data)
{
@@ -2306,6 +2349,14 @@ static struct unit_test_suite inline_ipsec_testsuite = {
"SA soft expiry with byte limit",
ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
test_ipsec_inline_proto_sa_byte_soft_expiry),
+ TEST_CASE_NAMED_ST(
+ "SA hard expiry with packet limit",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_sa_pkt_hard_expiry),
+ TEST_CASE_NAMED_ST(
+ "SA hard expiry with byte limit",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_sa_byte_hard_expiry),
TEST_CASE_NAMED_WITH_DATA(
"Antireplay with window size 1024",
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* [PATCH v4 10/10] test/security: add inline IPsec IPv6 flow label cases
2022-04-16 19:25 ` [PATCH v4 00/10] app/test: add inline IPsec and reassembly cases Akhil Goyal
` (8 preceding siblings ...)
2022-04-16 19:25 ` [PATCH v4 09/10] test/security: add inline IPsec SA hard " Akhil Goyal
@ 2022-04-16 19:25 ` Akhil Goyal
2022-04-18 3:44 ` Anoob Joseph
2022-04-25 12:38 ` [PATCH v4 00/10] app/test: add inline IPsec and reassembly cases Poczatek, Jakub
2022-04-27 15:10 ` [PATCH v5 0/7] " Akhil Goyal
11 siblings, 1 reply; 184+ messages in thread
From: Akhil Goyal @ 2022-04-16 19:25 UTC (permalink / raw)
To: dev
Cc: thomas, david.marchand, hemant.agrawal, anoobj,
konstantin.ananyev, ciara.power, ferruh.yigit, andrew.rybchenko,
ndabilpuram, vattunuru
From: Vamsi Attunuru <vattunuru@marvell.com>
Patch adds unit tests for IPv6 flow label set & copy
operations.
Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
---
app/test/test_cryptodev_security_ipsec.c | 35 ++++++++++-
app/test/test_cryptodev_security_ipsec.h | 10 +++
app/test/test_security_inline_proto.c | 79 ++++++++++++++++++++++++
3 files changed, 123 insertions(+), 1 deletion(-)
diff --git a/app/test/test_cryptodev_security_ipsec.c b/app/test/test_cryptodev_security_ipsec.c
index 14c6ba681f..408bd0bc82 100644
--- a/app/test/test_cryptodev_security_ipsec.c
+++ b/app/test/test_cryptodev_security_ipsec.c
@@ -495,6 +495,10 @@ test_ipsec_td_prepare(const struct crypto_param *param1,
flags->dscp == TEST_IPSEC_COPY_DSCP_INNER_1)
td->ipsec_xform.options.copy_dscp = 1;
+ if (flags->flabel == TEST_IPSEC_COPY_FLABEL_INNER_0 ||
+ flags->flabel == TEST_IPSEC_COPY_FLABEL_INNER_1)
+ td->ipsec_xform.options.copy_flabel = 1;
+
if (flags->dec_ttl_or_hop_limit)
td->ipsec_xform.options.dec_ttl = 1;
}
@@ -933,6 +937,7 @@ test_ipsec_iph6_hdr_validate(const struct rte_ipv6_hdr *iph6,
const struct ipsec_test_flags *flags)
{
uint32_t vtc_flow;
+ uint32_t flabel;
uint8_t dscp;
if (!is_valid_ipv6_pkt(iph6)) {
@@ -959,6 +964,23 @@ test_ipsec_iph6_hdr_validate(const struct rte_ipv6_hdr *iph6,
}
}
+ flabel = vtc_flow & RTE_IPV6_HDR_FL_MASK;
+
+ if (flags->flabel == TEST_IPSEC_COPY_FLABEL_INNER_1 ||
+ flags->flabel == TEST_IPSEC_SET_FLABEL_1_INNER_0) {
+ if (flabel != TEST_IPSEC_FLABEL_VAL) {
+ printf("FLABEL value is not matching [exp: %x, actual: %x]\n",
+ TEST_IPSEC_FLABEL_VAL, flabel);
+ return -1;
+ }
+ } else {
+ if (flabel != 0) {
+ printf("FLABEL value is set [exp: 0, actual: %x]\n",
+ flabel);
+ return -1;
+ }
+ }
+
return 0;
}
@@ -1159,7 +1181,11 @@ test_ipsec_pkt_update(uint8_t *pkt, const struct ipsec_test_flags *flags)
if (flags->dscp == TEST_IPSEC_COPY_DSCP_INNER_1 ||
flags->dscp == TEST_IPSEC_SET_DSCP_0_INNER_1 ||
flags->dscp == TEST_IPSEC_COPY_DSCP_INNER_0 ||
- flags->dscp == TEST_IPSEC_SET_DSCP_1_INNER_0) {
+ flags->dscp == TEST_IPSEC_SET_DSCP_1_INNER_0 ||
+ flags->flabel == TEST_IPSEC_COPY_FLABEL_INNER_1 ||
+ flags->flabel == TEST_IPSEC_SET_FLABEL_0_INNER_1 ||
+ flags->flabel == TEST_IPSEC_COPY_FLABEL_INNER_0 ||
+ flags->flabel == TEST_IPSEC_SET_FLABEL_1_INNER_0) {
if (is_ipv4(iph4)) {
uint8_t tos;
@@ -1187,6 +1213,13 @@ test_ipsec_pkt_update(uint8_t *pkt, const struct ipsec_test_flags *flags)
else
vtc_flow &= ~RTE_IPV6_HDR_DSCP_MASK;
+ if (flags->flabel == TEST_IPSEC_COPY_FLABEL_INNER_1 ||
+ flags->flabel == TEST_IPSEC_SET_FLABEL_0_INNER_1)
+ vtc_flow |= (RTE_IPV6_HDR_FL_MASK &
+ (TEST_IPSEC_FLABEL_VAL << RTE_IPV6_HDR_FL_SHIFT));
+ else
+ vtc_flow &= ~RTE_IPV6_HDR_FL_MASK;
+
iph6->vtc_flow = rte_cpu_to_be_32(vtc_flow);
}
}
diff --git a/app/test/test_cryptodev_security_ipsec.h b/app/test/test_cryptodev_security_ipsec.h
index 418ab16ba6..9a3c021dd8 100644
--- a/app/test/test_cryptodev_security_ipsec.h
+++ b/app/test/test_cryptodev_security_ipsec.h
@@ -73,6 +73,15 @@ enum dscp_flags {
TEST_IPSEC_SET_DSCP_1_INNER_0,
};
+#define TEST_IPSEC_FLABEL_VAL 0x1234
+
+enum flabel_flags {
+ TEST_IPSEC_COPY_FLABEL_INNER_0 = 1,
+ TEST_IPSEC_COPY_FLABEL_INNER_1,
+ TEST_IPSEC_SET_FLABEL_0_INNER_1,
+ TEST_IPSEC_SET_FLABEL_1_INNER_0,
+};
+
struct ipsec_test_flags {
bool display_alg;
bool sa_expiry_pkts_soft;
@@ -94,6 +103,7 @@ struct ipsec_test_flags {
bool antireplay;
enum df_flags df;
enum dscp_flags dscp;
+ enum flabel_flags flabel;
bool dec_ttl_or_hop_limit;
bool ah;
};
diff --git a/app/test/test_security_inline_proto.c b/app/test/test_security_inline_proto.c
index 15f08a2d6c..16fe164f77 100644
--- a/app/test/test_security_inline_proto.c
+++ b/app/test/test_security_inline_proto.c
@@ -163,6 +163,13 @@ create_inline_ipsec_session(struct ipsec_test_data *sa, uint16_t portid,
sess_conf->ipsec.tunnel.ipv6.dscp =
TEST_IPSEC_DSCP_VAL;
+ if (flags->flabel == TEST_IPSEC_SET_FLABEL_0_INNER_1)
+ sess_conf->ipsec.tunnel.ipv6.flabel = 0;
+
+ if (flags->flabel == TEST_IPSEC_SET_FLABEL_1_INNER_0)
+ sess_conf->ipsec.tunnel.ipv6.flabel =
+ TEST_IPSEC_FLABEL_VAL;
+
memcpy(&sess_conf->ipsec.tunnel.ipv6.src_addr, &src_v6,
sizeof(src_v6));
memcpy(&sess_conf->ipsec.tunnel.ipv6.dst_addr, &dst_v6,
@@ -1883,6 +1890,62 @@ test_ipsec_inline_proto_ipv6_set_dscp_1_inner_0(const void *data __rte_unused)
return test_ipsec_inline_proto_all(&flags);
}
+static int
+test_ipsec_inline_proto_ipv6_copy_flabel_inner_0(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.ipv6 = true;
+ flags.tunnel_ipv6 = true;
+ flags.flabel = TEST_IPSEC_COPY_FLABEL_INNER_0;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_ipv6_copy_flabel_inner_1(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.ipv6 = true;
+ flags.tunnel_ipv6 = true;
+ flags.flabel = TEST_IPSEC_COPY_FLABEL_INNER_1;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_ipv6_set_flabel_0_inner_1(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.ipv6 = true;
+ flags.tunnel_ipv6 = true;
+ flags.flabel = TEST_IPSEC_SET_FLABEL_0_INNER_1;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_ipv6_set_flabel_1_inner_0(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.ipv6 = true;
+ flags.tunnel_ipv6 = true;
+ flags.flabel = TEST_IPSEC_SET_FLABEL_1_INNER_0;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
static int
test_ipsec_inline_proto_ipv4_ttl_decrement(const void *data __rte_unused)
{
@@ -2329,6 +2392,22 @@ static struct unit_test_suite inline_ipsec_testsuite = {
"Tunnel header IPv6 set DSCP 1 (inner 0)",
ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
test_ipsec_inline_proto_ipv6_set_dscp_1_inner_0),
+ TEST_CASE_NAMED_ST(
+ "Tunnel header IPv6 copy FLABEL (inner 0)",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_ipv6_copy_flabel_inner_0),
+ TEST_CASE_NAMED_ST(
+ "Tunnel header IPv6 copy FLABEL (inner 1)",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_ipv6_copy_flabel_inner_1),
+ TEST_CASE_NAMED_ST(
+ "Tunnel header IPv6 set FLABEL 0 (inner 1)",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_ipv6_set_flabel_0_inner_1),
+ TEST_CASE_NAMED_ST(
+ "Tunnel header IPv6 set FLABEL 1 (inner 0)",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_ipv6_set_flabel_1_inner_0),
TEST_CASE_NAMED_ST(
"Tunnel header IPv4 decrement inner TTL",
ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* RE: [PATCH v4 10/10] test/security: add inline IPsec IPv6 flow label cases
2022-04-16 19:25 ` [PATCH v4 10/10] test/security: add inline IPsec IPv6 flow label cases Akhil Goyal
@ 2022-04-18 3:44 ` Anoob Joseph
2022-04-18 3:55 ` Akhil Goyal
0 siblings, 1 reply; 184+ messages in thread
From: Anoob Joseph @ 2022-04-18 3:44 UTC (permalink / raw)
To: Akhil Goyal, Vamsi Krishna Attunuru
Cc: thomas, david.marchand, hemant.agrawal, konstantin.ananyev,
ciara.power, ferruh.yigit, andrew.rybchenko,
Nithin Kumar Dabilpuram, dev
Hi Akhil, Vamsi,
Please add the same test cases to the lookaside IPsec tests as well, and please update the release notes.
Thanks,
Anoob
> -----Original Message-----
> From: Akhil Goyal <gakhil@marvell.com>
> Sent: Sunday, April 17, 2022 12:56 AM
> To: dev@dpdk.org
> Cc: thomas@monjalon.net; david.marchand@redhat.com;
> hemant.agrawal@nxp.com; Anoob Joseph <anoobj@marvell.com>;
> konstantin.ananyev@intel.com; ciara.power@intel.com;
> ferruh.yigit@intel.com; andrew.rybchenko@oktetlabs.ru; Nithin Kumar
> Dabilpuram <ndabilpuram@marvell.com>; Vamsi Krishna Attunuru
> <vattunuru@marvell.com>
> Subject: [PATCH v4 10/10] test/security: add inline IPsec IPv6 flow label cases
>
> From: Vamsi Attunuru <vattunuru@marvell.com>
>
> Patch adds unit tests for IPv6 flow label set & copy operations.
>
> Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
> ---
> app/test/test_cryptodev_security_ipsec.c | 35 ++++++++++-
> app/test/test_cryptodev_security_ipsec.h | 10 +++
> app/test/test_security_inline_proto.c | 79 ++++++++++++++++++++++++
> 3 files changed, 123 insertions(+), 1 deletion(-)
>
> diff --git a/app/test/test_cryptodev_security_ipsec.c
> b/app/test/test_cryptodev_security_ipsec.c
> index 14c6ba681f..408bd0bc82 100644
> --- a/app/test/test_cryptodev_security_ipsec.c
> +++ b/app/test/test_cryptodev_security_ipsec.c
> @@ -495,6 +495,10 @@ test_ipsec_td_prepare(const struct crypto_param
> *param1,
> flags->dscp == TEST_IPSEC_COPY_DSCP_INNER_1)
> td->ipsec_xform.options.copy_dscp = 1;
>
> + if (flags->flabel == TEST_IPSEC_COPY_FLABEL_INNER_0 ||
> + flags->flabel == TEST_IPSEC_COPY_FLABEL_INNER_1)
> + td->ipsec_xform.options.copy_flabel = 1;
> +
> if (flags->dec_ttl_or_hop_limit)
> td->ipsec_xform.options.dec_ttl = 1;
> }
> @@ -933,6 +937,7 @@ test_ipsec_iph6_hdr_validate(const struct
> rte_ipv6_hdr *iph6,
> const struct ipsec_test_flags *flags) {
> uint32_t vtc_flow;
> + uint32_t flabel;
> uint8_t dscp;
>
> if (!is_valid_ipv6_pkt(iph6)) {
> @@ -959,6 +964,23 @@ test_ipsec_iph6_hdr_validate(const struct
> rte_ipv6_hdr *iph6,
> }
> }
>
> + flabel = vtc_flow & RTE_IPV6_HDR_FL_MASK;
> +
> + if (flags->flabel == TEST_IPSEC_COPY_FLABEL_INNER_1 ||
> + flags->flabel == TEST_IPSEC_SET_FLABEL_1_INNER_0) {
> + if (flabel != TEST_IPSEC_FLABEL_VAL) {
> + printf("FLABEL value is not matching [exp: %x, actual:
> %x]\n",
> + TEST_IPSEC_FLABEL_VAL, flabel);
> + return -1;
> + }
> + } else {
> + if (flabel != 0) {
> + printf("FLABEL value is set [exp: 0, actual: %x]\n",
> + flabel);
> + return -1;
> + }
> + }
> +
> return 0;
> }
>
> @@ -1159,7 +1181,11 @@ test_ipsec_pkt_update(uint8_t *pkt, const struct
> ipsec_test_flags *flags)
> if (flags->dscp == TEST_IPSEC_COPY_DSCP_INNER_1 ||
> flags->dscp == TEST_IPSEC_SET_DSCP_0_INNER_1 ||
> flags->dscp == TEST_IPSEC_COPY_DSCP_INNER_0 ||
> - flags->dscp == TEST_IPSEC_SET_DSCP_1_INNER_0) {
> + flags->dscp == TEST_IPSEC_SET_DSCP_1_INNER_0 ||
> + flags->flabel == TEST_IPSEC_COPY_FLABEL_INNER_1 ||
> + flags->flabel == TEST_IPSEC_SET_FLABEL_0_INNER_1 ||
> + flags->flabel == TEST_IPSEC_COPY_FLABEL_INNER_0 ||
> + flags->flabel == TEST_IPSEC_SET_FLABEL_1_INNER_0) {
>
> if (is_ipv4(iph4)) {
> uint8_t tos;
> @@ -1187,6 +1213,13 @@ test_ipsec_pkt_update(uint8_t *pkt, const struct
> ipsec_test_flags *flags)
> else
> vtc_flow &= ~RTE_IPV6_HDR_DSCP_MASK;
>
> + if (flags->flabel ==
> TEST_IPSEC_COPY_FLABEL_INNER_1 ||
> + flags->flabel ==
> TEST_IPSEC_SET_FLABEL_0_INNER_1)
> + vtc_flow |= (RTE_IPV6_HDR_FL_MASK &
> + (TEST_IPSEC_FLABEL_VAL <<
> RTE_IPV6_HDR_FL_SHIFT));
> + else
> + vtc_flow &= ~RTE_IPV6_HDR_FL_MASK;
> +
> iph6->vtc_flow = rte_cpu_to_be_32(vtc_flow);
> }
> }
> diff --git a/app/test/test_cryptodev_security_ipsec.h
> b/app/test/test_cryptodev_security_ipsec.h
> index 418ab16ba6..9a3c021dd8 100644
> --- a/app/test/test_cryptodev_security_ipsec.h
> +++ b/app/test/test_cryptodev_security_ipsec.h
> @@ -73,6 +73,15 @@ enum dscp_flags {
> TEST_IPSEC_SET_DSCP_1_INNER_0,
> };
>
> +#define TEST_IPSEC_FLABEL_VAL 0x1234
> +
> +enum flabel_flags {
> + TEST_IPSEC_COPY_FLABEL_INNER_0 = 1,
> + TEST_IPSEC_COPY_FLABEL_INNER_1,
> + TEST_IPSEC_SET_FLABEL_0_INNER_1,
> + TEST_IPSEC_SET_FLABEL_1_INNER_0,
> +};
> +
> struct ipsec_test_flags {
> bool display_alg;
> bool sa_expiry_pkts_soft;
> @@ -94,6 +103,7 @@ struct ipsec_test_flags {
> bool antireplay;
> enum df_flags df;
> enum dscp_flags dscp;
> + enum flabel_flags flabel;
> bool dec_ttl_or_hop_limit;
> bool ah;
> };
> diff --git a/app/test/test_security_inline_proto.c
> b/app/test/test_security_inline_proto.c
> index 15f08a2d6c..16fe164f77 100644
> --- a/app/test/test_security_inline_proto.c
> +++ b/app/test/test_security_inline_proto.c
> @@ -163,6 +163,13 @@ create_inline_ipsec_session(struct ipsec_test_data
> *sa, uint16_t portid,
> sess_conf->ipsec.tunnel.ipv6.dscp =
> TEST_IPSEC_DSCP_VAL;
>
> + if (flags->flabel ==
> TEST_IPSEC_SET_FLABEL_0_INNER_1)
> + sess_conf->ipsec.tunnel.ipv6.flabel = 0;
> +
> + if (flags->flabel ==
> TEST_IPSEC_SET_FLABEL_1_INNER_0)
> + sess_conf->ipsec.tunnel.ipv6.flabel =
> + TEST_IPSEC_FLABEL_VAL;
> +
> memcpy(&sess_conf->ipsec.tunnel.ipv6.src_addr,
> &src_v6,
> sizeof(src_v6));
> memcpy(&sess_conf->ipsec.tunnel.ipv6.dst_addr,
> &dst_v6, @@ -1883,6 +1890,62 @@
> test_ipsec_inline_proto_ipv6_set_dscp_1_inner_0(const void *data
> __rte_unused)
> return test_ipsec_inline_proto_all(&flags);
> }
>
> +static int
> +test_ipsec_inline_proto_ipv6_copy_flabel_inner_0(const void *data
> +__rte_unused) {
> + struct ipsec_test_flags flags;
> +
> + memset(&flags, 0, sizeof(flags));
> +
> + flags.ipv6 = true;
> + flags.tunnel_ipv6 = true;
> + flags.flabel = TEST_IPSEC_COPY_FLABEL_INNER_0;
> +
> + return test_ipsec_inline_proto_all(&flags);
> +}
> +
> +static int
> +test_ipsec_inline_proto_ipv6_copy_flabel_inner_1(const void *data
> +__rte_unused) {
> + struct ipsec_test_flags flags;
> +
> + memset(&flags, 0, sizeof(flags));
> +
> + flags.ipv6 = true;
> + flags.tunnel_ipv6 = true;
> + flags.flabel = TEST_IPSEC_COPY_FLABEL_INNER_1;
> +
> + return test_ipsec_inline_proto_all(&flags);
> +}
> +
> +static int
> +test_ipsec_inline_proto_ipv6_set_flabel_0_inner_1(const void *data
> +__rte_unused) {
> + struct ipsec_test_flags flags;
> +
> + memset(&flags, 0, sizeof(flags));
> +
> + flags.ipv6 = true;
> + flags.tunnel_ipv6 = true;
> + flags.flabel = TEST_IPSEC_SET_FLABEL_0_INNER_1;
> +
> + return test_ipsec_inline_proto_all(&flags);
> +}
> +
> +static int
> +test_ipsec_inline_proto_ipv6_set_flabel_1_inner_0(const void *data
> +__rte_unused) {
> + struct ipsec_test_flags flags;
> +
> + memset(&flags, 0, sizeof(flags));
> +
> + flags.ipv6 = true;
> + flags.tunnel_ipv6 = true;
> + flags.flabel = TEST_IPSEC_SET_FLABEL_1_INNER_0;
> +
> + return test_ipsec_inline_proto_all(&flags);
> +}
> +
> static int
> test_ipsec_inline_proto_ipv4_ttl_decrement(const void *data
> __rte_unused) { @@ -2329,6 +2392,22 @@ static struct unit_test_suite
> inline_ipsec_testsuite = {
> "Tunnel header IPv6 set DSCP 1 (inner 0)",
> ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
> test_ipsec_inline_proto_ipv6_set_dscp_1_inner_0),
> + TEST_CASE_NAMED_ST(
> + "Tunnel header IPv6 copy FLABEL (inner 0)",
> + ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
> + test_ipsec_inline_proto_ipv6_copy_flabel_inner_0),
> + TEST_CASE_NAMED_ST(
> + "Tunnel header IPv6 copy FLABEL (inner 1)",
> + ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
> + test_ipsec_inline_proto_ipv6_copy_flabel_inner_1),
> + TEST_CASE_NAMED_ST(
> + "Tunnel header IPv6 set FLABEL 0 (inner 1)",
> + ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
> +
> test_ipsec_inline_proto_ipv6_set_flabel_0_inner_1),
> + TEST_CASE_NAMED_ST(
> + "Tunnel header IPv6 set FLABEL 1 (inner 0)",
> + ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
> +
> test_ipsec_inline_proto_ipv6_set_flabel_1_inner_0),
> TEST_CASE_NAMED_ST(
> "Tunnel header IPv4 decrement inner TTL",
> ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
> --
> 2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* RE: [PATCH v4 10/10] test/security: add inline IPsec IPv6 flow label cases
2022-04-18 3:44 ` Anoob Joseph
@ 2022-04-18 3:55 ` Akhil Goyal
0 siblings, 0 replies; 184+ messages in thread
From: Akhil Goyal @ 2022-04-18 3:55 UTC (permalink / raw)
To: Anoob Joseph, Vamsi Krishna Attunuru
Cc: thomas, david.marchand, hemant.agrawal, konstantin.ananyev,
ciara.power, ferruh.yigit, andrew.rybchenko,
Nithin Kumar Dabilpuram, dev
Hi Anoob,
> Hi Akhil, Vamsi,
>
> Please add the same test cases in lookaside IPsec tests also. And please do
> update release notes.
>
I was planning to send it as a separate patchset.
^ permalink raw reply [flat|nested] 184+ messages in thread
* Re: [PATCH v4 07/10] ethdev: add IPsec SA expiry event subtypes
2022-04-16 19:25 ` [PATCH v4 07/10] ethdev: add IPsec SA expiry event subtypes Akhil Goyal
@ 2022-04-19 8:58 ` Thomas Monjalon
2022-04-19 10:14 ` [EXT] " Akhil Goyal
2022-09-24 13:57 ` [PATCH v5 0/3] Add and test IPsec SA expiry events Akhil Goyal
1 sibling, 1 reply; 184+ messages in thread
From: Thomas Monjalon @ 2022-04-19 8:58 UTC (permalink / raw)
To: Akhil Goyal
Cc: dev, david.marchand, hemant.agrawal, anoobj, konstantin.ananyev,
ciara.power, ferruh.yigit, andrew.rybchenko, ndabilpuram,
vattunuru
16/04/2022 21:25, Akhil Goyal:
> --- a/lib/ethdev/rte_ethdev.h
> +++ b/lib/ethdev/rte_ethdev.h
> @@ -3828,6 +3828,12 @@ enum rte_eth_event_ipsec_subtype {
> RTE_ETH_EVENT_IPSEC_SA_TIME_EXPIRY,
> /** Soft byte expiry of SA */
> RTE_ETH_EVENT_IPSEC_SA_BYTE_EXPIRY,
> + /** Soft packet expiry of SA */
Is there a reference explaining what exactly is a "soft packet expiry"?
I think you should also mention what should be done
in the event handler.
> + RTE_ETH_EVENT_IPSEC_SA_PKT_EXPIRY,
> + /** Hard byte expiry of SA */
> + RTE_ETH_EVENT_IPSEC_SA_BYTE_HARD_EXPIRY,
> + /** Hard packet expiry of SA */
> + RTE_ETH_EVENT_IPSEC_SA_PKT_HARD_EXPIRY,
Same comment for the 3 events.
> /** Max value of this enum */
> RTE_ETH_EVENT_IPSEC_MAX
> };
What is the impact of this "MAX" value on ABI compatibility?
^ permalink raw reply [flat|nested] 184+ messages in thread
* RE: [EXT] Re: [PATCH v4 07/10] ethdev: add IPsec SA expiry event subtypes
2022-04-19 8:58 ` Thomas Monjalon
@ 2022-04-19 10:14 ` Akhil Goyal
2022-04-19 10:19 ` Anoob Joseph
2022-04-19 10:47 ` Thomas Monjalon
0 siblings, 2 replies; 184+ messages in thread
From: Akhil Goyal @ 2022-04-19 10:14 UTC (permalink / raw)
To: Thomas Monjalon
Cc: dev, david.marchand, hemant.agrawal, Anoob Joseph,
konstantin.ananyev, ciara.power, ferruh.yigit, andrew.rybchenko,
Nithin Kumar Dabilpuram, Vamsi Krishna Attunuru
Hi Thomas,
> 16/04/2022 21:25, Akhil Goyal:
> > --- a/lib/ethdev/rte_ethdev.h
> > +++ b/lib/ethdev/rte_ethdev.h
> > @@ -3828,6 +3828,12 @@ enum rte_eth_event_ipsec_subtype {
> > RTE_ETH_EVENT_IPSEC_SA_TIME_EXPIRY,
> > /** Soft byte expiry of SA */
> > RTE_ETH_EVENT_IPSEC_SA_BYTE_EXPIRY,
> > + /** Soft packet expiry of SA */
>
> Is there a reference explaining what exactly is a "soft packet expiry"?
SA expiry is a very common procedure in IPsec,
and all stacks must support this feature.
You can refer to https://docs.strongswan.org/strongswan-docs/5.9/config/rekeying.html
for details.
Time expiry means the SA will expire after x seconds.
Packet expiry means the SA will expire after x packets have been processed.
Byte expiry means the SA will expire after x bytes of packets have been processed.
> I think you should also mention what should be done
> in the event handler.
I believe this is quite obvious as per the IPsec specifications:
the application needs to start rekeying, or the SA needs to be created again.
>
> > + RTE_ETH_EVENT_IPSEC_SA_PKT_EXPIRY,
> > + /** Hard byte expiry of SA */
> > + RTE_ETH_EVENT_IPSEC_SA_BYTE_HARD_EXPIRY,
> > + /** Hard packet expiry of SA */
> > + RTE_ETH_EVENT_IPSEC_SA_PKT_HARD_EXPIRY,
>
> Same comment for the 3 events.
>
> > /** Max value of this enum */
> > RTE_ETH_EVENT_IPSEC_MAX
> > };
>
> What is the impact of this "MAX" value on ABI compatibility?
I see no issues reported when running the ABI check.
There is no array being used inside the library based on MAX.
^ permalink raw reply [flat|nested] 184+ messages in thread
* RE: [EXT] Re: [PATCH v4 07/10] ethdev: add IPsec SA expiry event subtypes
2022-04-19 10:14 ` [EXT] " Akhil Goyal
@ 2022-04-19 10:19 ` Anoob Joseph
2022-04-19 10:37 ` Thomas Monjalon
2022-04-19 10:47 ` Thomas Monjalon
1 sibling, 1 reply; 184+ messages in thread
From: Anoob Joseph @ 2022-04-19 10:19 UTC (permalink / raw)
To: Akhil Goyal, Thomas Monjalon
Cc: dev, david.marchand, hemant.agrawal, konstantin.ananyev,
ciara.power, ferruh.yigit, andrew.rybchenko,
Nithin Kumar Dabilpuram, Vamsi Krishna Attunuru
Hi Thomas, Akhil,
> Is there a reference explaining what exactly is a "soft packet expiry"?
The SA lifetime/expiry is described in security library.
https://elixir.bootlin.com/dpdk/latest/source/lib/security/rte_security.h#L295
Thanks,
Anoob
> -----Original Message-----
> From: Akhil Goyal <gakhil@marvell.com>
> Sent: Tuesday, April 19, 2022 3:44 PM
> To: Thomas Monjalon <thomas@monjalon.net>
> Cc: dev@dpdk.org; david.marchand@redhat.com;
> hemant.agrawal@nxp.com; Anoob Joseph <anoobj@marvell.com>;
> konstantin.ananyev@intel.com; ciara.power@intel.com;
> ferruh.yigit@intel.com; andrew.rybchenko@oktetlabs.ru; Nithin Kumar
> Dabilpuram <ndabilpuram@marvell.com>; Vamsi Krishna Attunuru
> <vattunuru@marvell.com>
> Subject: RE: [EXT] Re: [PATCH v4 07/10] ethdev: add IPsec SA expiry event
> subtypes
>
> Hi Thomas,
>
> > 16/04/2022 21:25, Akhil Goyal:
> > > --- a/lib/ethdev/rte_ethdev.h
> > > +++ b/lib/ethdev/rte_ethdev.h
> > > @@ -3828,6 +3828,12 @@ enum rte_eth_event_ipsec_subtype {
> > > RTE_ETH_EVENT_IPSEC_SA_TIME_EXPIRY,
> > > /** Soft byte expiry of SA */
> > > RTE_ETH_EVENT_IPSEC_SA_BYTE_EXPIRY,
> > > + /** Soft packet expiry of SA */
> >
> > Is there a reference explaining what exactly is a "soft packet expiry"?
>
> SA expiry is a very common procedure in case of IPsec.
> And all stacks must support this feature.
> You can refer https://docs.strongswan.org/strongswan-
> docs/5.9/config/rekeying.html
> For details.
> Time expiry means after x seconds SA will expire.
> Packet expiry means after x packets processing SA will expire.
> Byte expiry means after x bytes of packet processing SA will expire.
>
> > I think you should also mention what should be done in the event
> > handler.
>
> I believe this is quite obvious as per IPsec specifications.
> Application need to start rekeying or SA need to be created again.
>
> >
> > > + RTE_ETH_EVENT_IPSEC_SA_PKT_EXPIRY,
> > > + /** Hard byte expiry of SA */
> > > + RTE_ETH_EVENT_IPSEC_SA_BYTE_HARD_EXPIRY,
> > > + /** Hard packet expiry of SA */
> > > + RTE_ETH_EVENT_IPSEC_SA_PKT_HARD_EXPIRY,
> >
> > Same comment for the 3 events.
> >
> > > /** Max value of this enum */
> > > RTE_ETH_EVENT_IPSEC_MAX
> > > };
> >
> > What is the impact of this "MAX" value on ABI compatibility?
> I see no issues reported while running ABI check.
> There is no array being used inside library based on MAX.
^ permalink raw reply [flat|nested] 184+ messages in thread
* Re: [EXT] Re: [PATCH v4 07/10] ethdev: add IPsec SA expiry event subtypes
2022-04-19 10:19 ` Anoob Joseph
@ 2022-04-19 10:37 ` Thomas Monjalon
2022-04-19 10:39 ` Anoob Joseph
0 siblings, 1 reply; 184+ messages in thread
From: Thomas Monjalon @ 2022-04-19 10:37 UTC (permalink / raw)
To: Anoob Joseph
Cc: Akhil Goyal, dev, david.marchand, hemant.agrawal,
konstantin.ananyev, ciara.power, ferruh.yigit, andrew.rybchenko,
Nithin Kumar Dabilpuram, Vamsi Krishna Attunuru
19/04/2022 12:19, Anoob Joseph:
> Hi Thomas, Akhil,
>
> > Is there a reference explaining what exactly is a "soft packet expiry"?
>
> The SA lifetime/expiry is described in security library.
> https://elixir.bootlin.com/dpdk/latest/source/lib/security/rte_security.h#L295
The comment you are referencing is using "soft" for all limits,
even for packets_hard_limit and bytes_hard_limit.
It seems these comments need to be fixed.
^ permalink raw reply [flat|nested] 184+ messages in thread
* RE: [EXT] Re: [PATCH v4 07/10] ethdev: add IPsec SA expiry event subtypes
2022-04-19 10:37 ` Thomas Monjalon
@ 2022-04-19 10:39 ` Anoob Joseph
0 siblings, 0 replies; 184+ messages in thread
From: Anoob Joseph @ 2022-04-19 10:39 UTC (permalink / raw)
To: Thomas Monjalon
Cc: Akhil Goyal, dev, david.marchand, hemant.agrawal,
konstantin.ananyev, ciara.power, ferruh.yigit, andrew.rybchenko,
Nithin Kumar Dabilpuram, Vamsi Krishna Attunuru
Hi Thomas,
Yes, the comments need to be fixed. I'll push a patch for that. Thanks for pointing it out.
Thanks,
Anoob
> -----Original Message-----
> From: Thomas Monjalon <thomas@monjalon.net>
> Sent: Tuesday, April 19, 2022 4:08 PM
> To: Anoob Joseph <anoobj@marvell.com>
> Cc: Akhil Goyal <gakhil@marvell.com>; dev@dpdk.org;
> david.marchand@redhat.com; hemant.agrawal@nxp.com;
> konstantin.ananyev@intel.com; ciara.power@intel.com;
> ferruh.yigit@intel.com; andrew.rybchenko@oktetlabs.ru; Nithin Kumar
> Dabilpuram <ndabilpuram@marvell.com>; Vamsi Krishna Attunuru
> <vattunuru@marvell.com>
> Subject: Re: [EXT] Re: [PATCH v4 07/10] ethdev: add IPsec SA expiry event
> subtypes
>
> 19/04/2022 12:19, Anoob Joseph:
> > Hi Thomas, Akhil,
> >
> > > Is there a reference explaining what exactly is a "soft packet expiry"?
> >
> > The SA lifetime/expiry is described in security library.
> > https://elixir.bootlin.com/dpdk/latest/source/lib/security/rte_security.h#L295
>
> The comment you are referencing is using "soft" for all limits, even for
> packets_hard_limit and bytes_hard_limit.
> It seems these comments need to be fixed.
>
^ permalink raw reply [flat|nested] 184+ messages in thread
* Re: [EXT] Re: [PATCH v4 07/10] ethdev: add IPsec SA expiry event subtypes
2022-04-19 10:14 ` [EXT] " Akhil Goyal
2022-04-19 10:19 ` Anoob Joseph
@ 2022-04-19 10:47 ` Thomas Monjalon
2022-04-19 12:27 ` Akhil Goyal
1 sibling, 1 reply; 184+ messages in thread
From: Thomas Monjalon @ 2022-04-19 10:47 UTC (permalink / raw)
To: Akhil Goyal
Cc: dev, david.marchand, hemant.agrawal, Anoob Joseph,
konstantin.ananyev, ciara.power, ferruh.yigit, andrew.rybchenko,
Nithin Kumar Dabilpuram, Vamsi Krishna Attunuru, mdr
19/04/2022 12:14, Akhil Goyal:
> Hi Thomas,
>
> > 16/04/2022 21:25, Akhil Goyal:
> > > --- a/lib/ethdev/rte_ethdev.h
> > > +++ b/lib/ethdev/rte_ethdev.h
> > > @@ -3828,6 +3828,12 @@ enum rte_eth_event_ipsec_subtype {
> > > RTE_ETH_EVENT_IPSEC_SA_TIME_EXPIRY,
> > > /** Soft byte expiry of SA */
> > > RTE_ETH_EVENT_IPSEC_SA_BYTE_EXPIRY,
> > > + /** Soft packet expiry of SA */
> >
> > Is there a reference explaining what exactly is a "soft packet expiry"?
>
> SA expiry is a very common procedure in case of IPsec.
> And all stacks must support this feature.
> You can refer https://docs.strongswan.org/strongswan-docs/5.9/config/rekeying.html
> For details.
> Time expiry means after x seconds SA will expire.
> Packet expiry means after x packets processing SA will expire.
> Byte expiry means after x bytes of packet processing SA will expire.
I think you should use the syntax @ref packets_soft_limit
so it is clear where the event come from.
> > I think you should also mention what should be done
> > in the event handler.
>
> I believe this is quite obvious as per IPsec specifications.
> Application need to start rekeying or SA need to be created again.
Yes indeed.
> > > + RTE_ETH_EVENT_IPSEC_SA_PKT_EXPIRY,
> > > + /** Hard byte expiry of SA */
> > > + RTE_ETH_EVENT_IPSEC_SA_BYTE_HARD_EXPIRY,
> > > + /** Hard packet expiry of SA */
> > > + RTE_ETH_EVENT_IPSEC_SA_PKT_HARD_EXPIRY,
> >
> > Same comment for the 3 events.
> >
> > > /** Max value of this enum */
> > > RTE_ETH_EVENT_IPSEC_MAX
> > > };
> >
> > What is the impact of this "MAX" value on ABI compatibility?
>
> I see no issues reported while running ABI check.
> There is no array being used inside library based on MAX.
No need of array inside the library, the events are exposed to the app.
I'm surprised libabigail is OK with changing an enum value.
^ permalink raw reply [flat|nested] 184+ messages in thread
* RE: [EXT] Re: [PATCH v4 07/10] ethdev: add IPsec SA expiry event subtypes
2022-04-19 10:47 ` Thomas Monjalon
@ 2022-04-19 12:27 ` Akhil Goyal
2022-04-19 15:41 ` Ray Kinsella
0 siblings, 1 reply; 184+ messages in thread
From: Akhil Goyal @ 2022-04-19 12:27 UTC (permalink / raw)
To: Thomas Monjalon, mdr
Cc: dev, david.marchand, hemant.agrawal, Anoob Joseph,
konstantin.ananyev, ciara.power, ferruh.yigit, andrew.rybchenko,
Nithin Kumar Dabilpuram, Vamsi Krishna Attunuru
> > Time expiry means after x seconds SA will expire.
> > Packet expiry means after x packets processing SA will expire.
> > Byte expiry means after x bytes of packet processing SA will expire.
>
> I think you should use the syntax @ref packets_soft_limit
> so it is clear where the event come from.
OK will update the comments.
>
>
> > > > + RTE_ETH_EVENT_IPSEC_SA_PKT_EXPIRY,
> > > > + /** Hard byte expiry of SA */
> > > > + RTE_ETH_EVENT_IPSEC_SA_BYTE_HARD_EXPIRY,
> > > > + /** Hard packet expiry of SA */
> > > > + RTE_ETH_EVENT_IPSEC_SA_PKT_HARD_EXPIRY,
> > >
> > > Same comment for the 3 events.
> > >
> > > > /** Max value of this enum */
> > > > RTE_ETH_EVENT_IPSEC_MAX
> > > > };
> > >
> > > What is the impact of this "MAX" value on ABI compatibility?
> >
> > I see no issues reported while running ABI check.
> > There is no array being used inside library based on MAX.
>
> No need of array inside the library, the events are exposed to the app.
> I'm surprised libabigail is OK with changing an enum value.
>
@Ray Can you please check if it is an issue to add more values in this enum?
^ permalink raw reply [flat|nested] 184+ messages in thread
* Re: [EXT] Re: [PATCH v4 07/10] ethdev: add IPsec SA expiry event subtypes
2022-04-19 12:27 ` Akhil Goyal
@ 2022-04-19 15:41 ` Ray Kinsella
2022-04-20 13:51 ` Akhil Goyal
0 siblings, 1 reply; 184+ messages in thread
From: Ray Kinsella @ 2022-04-19 15:41 UTC (permalink / raw)
To: Akhil Goyal
Cc: Thomas Monjalon, dev, david.marchand, hemant.agrawal,
Anoob Joseph, konstantin.ananyev, ciara.power, ferruh.yigit,
andrew.rybchenko, Nithin Kumar Dabilpuram,
Vamsi Krishna Attunuru
Akhil Goyal <gakhil@marvell.com> writes:
>> > Time expiry means after x seconds SA will expire.
>> > Packet expiry means after x packets processing SA will expire.
>> > Byte expiry means after x bytes of packet processing SA will expire.
>>
>> I think you should use the syntax @ref packets_soft_limit
>> so it is clear where the event comes from.
>
> OK will update the comments.
>
>>
>>
>> > > > + RTE_ETH_EVENT_IPSEC_SA_PKT_EXPIRY,
>> > > > + /** Hard byte expiry of SA */
>> > > > + RTE_ETH_EVENT_IPSEC_SA_BYTE_HARD_EXPIRY,
>> > > > + /** Hard packet expiry of SA */
>> > > > + RTE_ETH_EVENT_IPSEC_SA_PKT_HARD_EXPIRY,
>> > >
>> > > Same comment for the 3 events.
>> > >
>> > > > /** Max value of this enum */
>> > > > RTE_ETH_EVENT_IPSEC_MAX
>> > > > };
>> > >
>> > > What is the impact of this "MAX" value on ABI compatibility?
>> >
>> > I see no issues reported while running ABI check.
>> > There is no array being used inside library based on MAX.
>>
>> No need of array inside the library, the events are exposed to the app.
>> I'm surprised libabigail is OK with changing an enum value.
>>
> @Ray Can you please check if it is an issue to add more values in this enum?
Well, look, there are two separate things going on here.
Why isn't libabigail complaining about the _MAX value changing? I'll
need to look at libabigail to see what the issue is, so let's put this
one aside for a moment.
The second issue is whether it is safe for the _MAX value to change.
We have had a lot of back-and-forth argument on these, and previously
concluded that we should probably look to remove _MAX values in the
22.11 release.
The core issue is that we need to look at how a user is likely to use
rte_eth_event_ipsec_subtype. Take a look at the example below:-
/root/src/dpdk/examples/ipsec-secgw/ipsec-secgw.c:2592:0
For instance, can we guarantee that an application built against DPDK
21.11, but running against 22.xx, will never receive one of the new
values in event_desc->subtype (or by any other means)?
--
Regards, Ray K
^ permalink raw reply [flat|nested] 184+ messages in thread
* RE: [EXT] Re: [PATCH v4 07/10] ethdev: add IPsec SA expiry event subtypes
2022-04-19 15:41 ` Ray Kinsella
@ 2022-04-20 13:51 ` Akhil Goyal
0 siblings, 0 replies; 184+ messages in thread
From: Akhil Goyal @ 2022-04-20 13:51 UTC (permalink / raw)
To: Ray Kinsella
Cc: Thomas Monjalon, dev, david.marchand, hemant.agrawal,
Anoob Joseph, konstantin.ananyev, ciara.power, ferruh.yigit,
andrew.rybchenko, Nithin Kumar Dabilpuram,
Vamsi Krishna Attunuru
Hi Ray,
> >> > > > + RTE_ETH_EVENT_IPSEC_SA_PKT_EXPIRY,
> >> > > > + /** Hard byte expiry of SA */
> >> > > > + RTE_ETH_EVENT_IPSEC_SA_BYTE_HARD_EXPIRY,
> >> > > > + /** Hard packet expiry of SA */
> >> > > > + RTE_ETH_EVENT_IPSEC_SA_PKT_HARD_EXPIRY,
> >> > >
> >> > > Same comment for the 3 events.
> >> > >
> >> > > > /** Max value of this enum */
> >> > > > RTE_ETH_EVENT_IPSEC_MAX
> >> > > > };
> >> > >
> >> > > What is the impact of this "MAX" value on ABI compatibility?
> >> >
> >> > I see no issues reported while running ABI check.
> >> > There is no array being used inside library based on MAX.
> >>
> >> No need of array inside the library, the events are exposed to the app.
> >> I'm surprised libabigail is OK with changing an enum value.
> >>
> > @Ray Can you please check if it is an issue to add more values in this enum?
>
> Well, look, there are two separate things going on here.
>
> Why isn't libabigail complaining about the _MAX value changing? I'll
> need to look at libabigail to see what the issue is, so let's put this
> one aside for a moment.
>
> The second issue is whether it is safe for the _MAX value to change.
> We have had a lot of back-and-forth argument on these, and previously
> concluded that we should probably look to remove _MAX values in the
> 22.11 release.
Agreed.
>
> The core issue is that we need to look at how a user is likely to use
> rte_eth_event_ipsec_subtype. Take a look at the example below:-
>
> /root/src/dpdk/examples/ipsec-secgw/ipsec-secgw.c:2592:0
>
> For instance, can we guarantee that an application built against DPDK
> 21.11, but running against 22.xx, will never receive one of the new
> values in event_desc->subtype (or by any other means)?
OK, we can defer patches 7/10, 8/10 and 9/10 to the next release then.
^ permalink raw reply [flat|nested] 184+ messages in thread
* RE: [PATCH v4 00/10] app/test: add inline IPsec and reassembly cases
2022-04-16 19:25 ` [PATCH v4 00/10] app/test: add inline IPsec and reassembly cases Akhil Goyal
` (9 preceding siblings ...)
2022-04-16 19:25 ` [PATCH v4 10/10] test/security: add inline IPsec IPv6 flow label cases Akhil Goyal
@ 2022-04-25 12:38 ` Poczatek, Jakub
2022-04-27 15:10 ` [PATCH v5 0/7] " Akhil Goyal
11 siblings, 0 replies; 184+ messages in thread
From: Poczatek, Jakub @ 2022-04-25 12:38 UTC (permalink / raw)
To: Akhil Goyal, dev
Cc: thomas, david.marchand, hemant.agrawal, anoobj, Ananyev,
Konstantin, Power, Ciara, Yigit, Ferruh, andrew.rybchenko,
ndabilpuram, vattunuru
Hi everyone,
When running inline_ipsec_autotest, tests should be marked as "skipped" rather than "failed".
Kind Regards,
Jakub Poczatek
-----Original Message-----
From: Akhil Goyal <gakhil@marvell.com>
Sent: Saturday 16 April 2022 20:25
To: dev@dpdk.org
Cc: thomas@monjalon.net; david.marchand@redhat.com; hemant.agrawal@nxp.com; anoobj@marvell.com; Ananyev, Konstantin <konstantin.ananyev@intel.com>; Power, Ciara <ciara.power@intel.com>; Yigit, Ferruh <ferruh.yigit@intel.com>; andrew.rybchenko@oktetlabs.ru; ndabilpuram@marvell.com; vattunuru@marvell.com; Akhil Goyal <gakhil@marvell.com>
Subject: [PATCH v4 00/10] app/test: add inline IPsec and reassembly cases
IP reassembly offload was added in the last release.
The test app for unit testing IP reassembly of inline inbound IPsec flows is added in this patchset.
For testing IP reassembly, base inline IPsec is also added. The app is enhanced in v4 to handle more functional unit test cases for inline IPsec similar to Lookaside IPsec.
The functions from Lookaside mode are reused to verify functional cases.
Changes in v4:
- rebased over next-crypto
- updated app to take benefit from Lookaside protocol test functions.
- Added more functional cases
- Added soft and hard expiry event subtypes in ethdev for testing SA soft and hard pkt/byte expiry events.
- reassembly cases are squashed in a single patch
Changes in v3:
- incorporated latest ethdev changes for reassembly.
- skipped build on windows as it needs rte_ipsec lib which is not
compiled on windows.
Changes in v2:
- added IPsec burst mode case
- updated as per the latest ethdev changes.
Akhil Goyal (6):
app/test: add unit cases for inline IPsec offload
test/security: add inline inbound IPsec cases
test/security: add combined mode inline IPsec cases
test/security: add inline IPsec reassembly cases
test/security: add more inline IPsec functional cases
test/security: add ESN and anti-replay cases for inline
Vamsi Attunuru (4):
ethdev: add IPsec SA expiry event subtypes
test/security: add inline IPsec SA soft expiry cases
test/security: add inline IPsec SA hard expiry cases
test/security: add inline IPsec IPv6 flow label cases
MAINTAINERS | 2 +-
app/test/meson.build | 1 +
app/test/test_cryptodev_security_ipsec.c | 35 +-
app/test/test_cryptodev_security_ipsec.h | 12 +
app/test/test_security_inline_proto.c | 2525 +++++++++++++++++
app/test/test_security_inline_proto_vectors.h | 710 +++++
lib/ethdev/rte_ethdev.h | 9 +
7 files changed, 3292 insertions(+), 2 deletions(-) create mode 100644 app/test/test_security_inline_proto.c
create mode 100644 app/test/test_security_inline_proto_vectors.h
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* [PATCH v5 0/7] app/test: add inline IPsec and reassembly cases
2022-04-16 19:25 ` [PATCH v4 00/10] app/test: add inline IPsec and reassembly cases Akhil Goyal
` (10 preceding siblings ...)
2022-04-25 12:38 ` [PATCH v4 00/10] app/test: add inline IPsec and reassembly cases Poczatek, Jakub
@ 2022-04-27 15:10 ` Akhil Goyal
2022-04-27 15:10 ` [PATCH v5 1/7] app/test: add unit cases for inline IPsec offload Akhil Goyal
` (8 more replies)
11 siblings, 9 replies; 184+ messages in thread
From: Akhil Goyal @ 2022-04-27 15:10 UTC (permalink / raw)
To: dev
Cc: thomas, david.marchand, hemant.agrawal, anoobj,
konstantin.ananyev, ciara.power, ferruh.yigit, andrew.rybchenko,
ndabilpuram, vattunuru, Akhil Goyal
IP reassembly offload was added in the last release.
The test app for unit testing IP reassembly of inline
inbound IPsec flows is added in this patchset.
For testing IP reassembly, base inline IPsec is also
added. The app is enhanced in v4 to handle more functional
unit test cases for inline IPsec similar to Lookaside IPsec.
The functions from Lookaside mode are reused to verify
functional cases.
Changes in v5:
- removed soft/hard expiry patches which are deferred for next release
- skipped tests if no port is added.
- added release notes.
Changes in v4:
- rebased over next-crypto
- updated app to take benefit from Lookaside protocol
test functions.
- Added more functional cases
- Added soft and hard expiry event subtypes in ethdev
for testing SA soft and hard pkt/byte expiry events.
- reassembly cases are squashed in a single patch
Changes in v3:
- incorporated latest ethdev changes for reassembly.
- skipped build on windows as it needs rte_ipsec lib which is not
compiled on windows.
Changes in v2:
- added IPsec burst mode case
- updated as per the latest ethdev changes.
Akhil Goyal (6):
app/test: add unit cases for inline IPsec offload
test/security: add inline inbound IPsec cases
test/security: add combined mode inline IPsec cases
test/security: add inline IPsec reassembly cases
test/security: add more inline IPsec functional cases
test/security: add ESN and anti-replay cases for inline
Vamsi Attunuru (1):
test/security: add inline IPsec IPv6 flow label cases
MAINTAINERS | 2 +-
app/test/meson.build | 1 +
app/test/test_cryptodev_security_ipsec.c | 35 +-
app/test/test_cryptodev_security_ipsec.h | 10 +
app/test/test_security_inline_proto.c | 2372 +++++++++++++++++
app/test/test_security_inline_proto_vectors.h | 704 +++++
doc/guides/rel_notes/release_22_07.rst | 5 +
7 files changed, 3127 insertions(+), 2 deletions(-)
create mode 100644 app/test/test_security_inline_proto.c
create mode 100644 app/test/test_security_inline_proto_vectors.h
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* [PATCH v5 1/7] app/test: add unit cases for inline IPsec offload
2022-04-27 15:10 ` [PATCH v5 0/7] " Akhil Goyal
@ 2022-04-27 15:10 ` Akhil Goyal
2022-04-27 15:44 ` Zhang, Roy Fan
2022-04-27 15:10 ` [PATCH v5 2/7] test/security: add inline inbound IPsec cases Akhil Goyal
` (7 subsequent siblings)
8 siblings, 1 reply; 184+ messages in thread
From: Akhil Goyal @ 2022-04-27 15:10 UTC (permalink / raw)
To: dev
Cc: thomas, david.marchand, hemant.agrawal, anoobj,
konstantin.ananyev, ciara.power, ferruh.yigit, andrew.rybchenko,
ndabilpuram, vattunuru, Akhil Goyal
A new test suite is added to the test app to test inline IPsec protocol
offload. In this patch, predefined vectors from the Lookaside IPsec tests
are used to verify IPsec functionality without the need for
external traffic generators. The sent packet is looped back on the same
interface, received, and matched against the expected output.
The test suite can be extended further with other functional test cases.
In this patch, encap-only cases are added.
The testsuite can be run using:
RTE> inline_ipsec_autotest
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
MAINTAINERS | 2 +-
app/test/meson.build | 1 +
app/test/test_security_inline_proto.c | 882 ++++++++++++++++++
app/test/test_security_inline_proto_vectors.h | 20 +
doc/guides/rel_notes/release_22_07.rst | 5 +
5 files changed, 909 insertions(+), 1 deletion(-)
create mode 100644 app/test/test_security_inline_proto.c
create mode 100644 app/test/test_security_inline_proto_vectors.h
diff --git a/MAINTAINERS b/MAINTAINERS
index 15008c03bc..89affa08ff 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -440,7 +440,7 @@ M: Akhil Goyal <gakhil@marvell.com>
T: git://dpdk.org/next/dpdk-next-crypto
F: lib/security/
F: doc/guides/prog_guide/rte_security.rst
-F: app/test/test_security.c
+F: app/test/test_security*
Compression API - EXPERIMENTAL
M: Fan Zhang <roy.fan.zhang@intel.com>
diff --git a/app/test/meson.build b/app/test/meson.build
index 5fc1dd1b7b..39952c6c4f 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -125,6 +125,7 @@ test_sources = files(
'test_rwlock.c',
'test_sched.c',
'test_security.c',
+ 'test_security_inline_proto.c',
'test_service_cores.c',
'test_spinlock.c',
'test_stack.c',
diff --git a/app/test/test_security_inline_proto.c b/app/test/test_security_inline_proto.c
new file mode 100644
index 0000000000..249474be91
--- /dev/null
+++ b/app/test/test_security_inline_proto.c
@@ -0,0 +1,882 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2022 Marvell.
+ */
+
+
+#include <stdio.h>
+#include <inttypes.h>
+
+#include <rte_ethdev.h>
+#include <rte_malloc.h>
+#include <rte_security.h>
+
+#include "test.h"
+#include "test_security_inline_proto_vectors.h"
+
+#ifdef RTE_EXEC_ENV_WINDOWS
+static int
+test_inline_ipsec(void)
+{
+ printf("Inline ipsec not supported on Windows, skipping test\n");
+ return TEST_SKIPPED;
+}
+
+#else
+
+#define NB_ETHPORTS_USED 1
+#define MEMPOOL_CACHE_SIZE 32
+#define MAX_PKT_BURST 32
+#define RTE_TEST_RX_DESC_DEFAULT 1024
+#define RTE_TEST_TX_DESC_DEFAULT 1024
+#define RTE_PORT_ALL (~(uint16_t)0x0)
+
+#define RX_PTHRESH 8 /**< Default values of RX prefetch threshold reg. */
+#define RX_HTHRESH 8 /**< Default values of RX host threshold reg. */
+#define RX_WTHRESH 0 /**< Default values of RX write-back threshold reg. */
+
+#define TX_PTHRESH 32 /**< Default values of TX prefetch threshold reg. */
+#define TX_HTHRESH 0 /**< Default values of TX host threshold reg. */
+#define TX_WTHRESH 0 /**< Default values of TX write-back threshold reg. */
+
+#define MAX_TRAFFIC_BURST 2048
+#define NB_MBUF 10240
+
+extern struct ipsec_test_data pkt_aes_128_gcm;
+extern struct ipsec_test_data pkt_aes_192_gcm;
+extern struct ipsec_test_data pkt_aes_256_gcm;
+extern struct ipsec_test_data pkt_aes_128_gcm_frag;
+extern struct ipsec_test_data pkt_aes_128_cbc_null;
+extern struct ipsec_test_data pkt_null_aes_xcbc;
+extern struct ipsec_test_data pkt_aes_128_cbc_hmac_sha384;
+extern struct ipsec_test_data pkt_aes_128_cbc_hmac_sha512;
+
+static struct rte_mempool *mbufpool;
+static struct rte_mempool *sess_pool;
+static struct rte_mempool *sess_priv_pool;
+/* ethernet addresses of ports */
+static struct rte_ether_addr ports_eth_addr[RTE_MAX_ETHPORTS];
+
+static struct rte_eth_conf port_conf = {
+ .rxmode = {
+ .mq_mode = RTE_ETH_MQ_RX_NONE,
+ .split_hdr_size = 0,
+ .offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM |
+ RTE_ETH_RX_OFFLOAD_SECURITY,
+ },
+ .txmode = {
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
+ .offloads = RTE_ETH_TX_OFFLOAD_SECURITY |
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE,
+ },
+ .lpbk_mode = 1, /* enable loopback */
+};
+
+static struct rte_eth_rxconf rx_conf = {
+ .rx_thresh = {
+ .pthresh = RX_PTHRESH,
+ .hthresh = RX_HTHRESH,
+ .wthresh = RX_WTHRESH,
+ },
+ .rx_free_thresh = 32,
+};
+
+static struct rte_eth_txconf tx_conf = {
+ .tx_thresh = {
+ .pthresh = TX_PTHRESH,
+ .hthresh = TX_HTHRESH,
+ .wthresh = TX_WTHRESH,
+ },
+ .tx_free_thresh = 32, /* Use PMD default values */
+ .tx_rs_thresh = 32, /* Use PMD default values */
+};
+
+uint16_t port_id;
+
+static uint64_t link_mbps;
+
+static struct rte_flow *default_flow[RTE_MAX_ETHPORTS];
+
+/* Create Inline IPsec session */
+static int
+create_inline_ipsec_session(struct ipsec_test_data *sa, uint16_t portid,
+ struct rte_security_session **sess, struct rte_security_ctx **ctx,
+ uint32_t *ol_flags, const struct ipsec_test_flags *flags,
+ struct rte_security_session_conf *sess_conf)
+{
+ uint16_t src_v6[8] = {0x2607, 0xf8b0, 0x400c, 0x0c03, 0x0000, 0x0000,
+ 0x0000, 0x001a};
+ uint16_t dst_v6[8] = {0x2001, 0x0470, 0xe5bf, 0xdead, 0x4957, 0x2174,
+ 0xe82c, 0x4887};
+ uint32_t src_v4 = rte_cpu_to_be_32(RTE_IPV4(192, 168, 1, 2));
+ uint32_t dst_v4 = rte_cpu_to_be_32(RTE_IPV4(192, 168, 1, 1));
+ struct rte_security_capability_idx sec_cap_idx;
+ const struct rte_security_capability *sec_cap;
+ enum rte_security_ipsec_sa_direction dir;
+ struct rte_security_ctx *sec_ctx;
+ uint32_t verify;
+
+ sess_conf->action_type = RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL;
+ sess_conf->protocol = RTE_SECURITY_PROTOCOL_IPSEC;
+ sess_conf->ipsec = sa->ipsec_xform;
+
+ dir = sa->ipsec_xform.direction;
+ verify = flags->tunnel_hdr_verify;
+
+ if ((dir == RTE_SECURITY_IPSEC_SA_DIR_INGRESS) && verify) {
+ if (verify == RTE_SECURITY_IPSEC_TUNNEL_VERIFY_SRC_DST_ADDR)
+ src_v4 += 1;
+ else if (verify == RTE_SECURITY_IPSEC_TUNNEL_VERIFY_DST_ADDR)
+ dst_v4 += 1;
+ }
+
+ if (sa->ipsec_xform.mode == RTE_SECURITY_IPSEC_SA_MODE_TUNNEL) {
+ if (sa->ipsec_xform.tunnel.type ==
+ RTE_SECURITY_IPSEC_TUNNEL_IPV4) {
+ memcpy(&sess_conf->ipsec.tunnel.ipv4.src_ip, &src_v4,
+ sizeof(src_v4));
+ memcpy(&sess_conf->ipsec.tunnel.ipv4.dst_ip, &dst_v4,
+ sizeof(dst_v4));
+
+ if (flags->df == TEST_IPSEC_SET_DF_0_INNER_1)
+ sess_conf->ipsec.tunnel.ipv4.df = 0;
+
+ if (flags->df == TEST_IPSEC_SET_DF_1_INNER_0)
+ sess_conf->ipsec.tunnel.ipv4.df = 1;
+
+ if (flags->dscp == TEST_IPSEC_SET_DSCP_0_INNER_1)
+ sess_conf->ipsec.tunnel.ipv4.dscp = 0;
+
+ if (flags->dscp == TEST_IPSEC_SET_DSCP_1_INNER_0)
+ sess_conf->ipsec.tunnel.ipv4.dscp =
+ TEST_IPSEC_DSCP_VAL;
+ } else {
+ if (flags->dscp == TEST_IPSEC_SET_DSCP_0_INNER_1)
+ sess_conf->ipsec.tunnel.ipv6.dscp = 0;
+
+ if (flags->dscp == TEST_IPSEC_SET_DSCP_1_INNER_0)
+ sess_conf->ipsec.tunnel.ipv6.dscp =
+ TEST_IPSEC_DSCP_VAL;
+
+ memcpy(&sess_conf->ipsec.tunnel.ipv6.src_addr, &src_v6,
+ sizeof(src_v6));
+ memcpy(&sess_conf->ipsec.tunnel.ipv6.dst_addr, &dst_v6,
+ sizeof(dst_v6));
+ }
+ }
+
+ /* Save SA as userdata for the security session. When
+ * the packet is received, this userdata will be
+ * retrieved using the metadata from the packet.
+ *
+ * The PMD is expected to set similar metadata for other
+ * operations, like rte_eth_event, which are tied to
+ * security session. In such cases, the userdata could
+ * be obtained to uniquely identify the security
+ * parameters denoted.
+ */
+
+ sess_conf->userdata = (void *) sa;
+
+ sec_ctx = (struct rte_security_ctx *)rte_eth_dev_get_sec_ctx(portid);
+ if (sec_ctx == NULL) {
+ printf("Ethernet device doesn't support security features.\n");
+ return TEST_SKIPPED;
+ }
+
+ sec_cap_idx.action = RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL;
+ sec_cap_idx.protocol = RTE_SECURITY_PROTOCOL_IPSEC;
+ sec_cap_idx.ipsec.proto = sess_conf->ipsec.proto;
+ sec_cap_idx.ipsec.mode = sess_conf->ipsec.mode;
+ sec_cap_idx.ipsec.direction = sess_conf->ipsec.direction;
+ sec_cap = rte_security_capability_get(sec_ctx, &sec_cap_idx);
+ if (sec_cap == NULL) {
+ printf("No capabilities registered\n");
+ return TEST_SKIPPED;
+ }
+
+ if (sa->aead || sa->aes_gmac)
+ memcpy(&sess_conf->ipsec.salt, sa->salt.data,
+ RTE_MIN(sizeof(sess_conf->ipsec.salt), sa->salt.len));
+
+ /* Copy cipher session parameters */
+ if (sa->aead) {
+ rte_memcpy(sess_conf->crypto_xform, &sa->xform.aead,
+ sizeof(struct rte_crypto_sym_xform));
+ sess_conf->crypto_xform->aead.key.data = sa->key.data;
+ /* Verify crypto capabilities */
+ if (test_ipsec_crypto_caps_aead_verify(sec_cap,
+ sess_conf->crypto_xform) != 0) {
+ RTE_LOG(INFO, USER1,
+ "Crypto capabilities not supported\n");
+ return TEST_SKIPPED;
+ }
+ } else {
+ if (dir == RTE_SECURITY_IPSEC_SA_DIR_EGRESS) {
+ rte_memcpy(&sess_conf->crypto_xform->cipher,
+ &sa->xform.chain.cipher.cipher,
+ sizeof(struct rte_crypto_cipher_xform));
+
+ rte_memcpy(&sess_conf->crypto_xform->next->auth,
+ &sa->xform.chain.auth.auth,
+ sizeof(struct rte_crypto_auth_xform));
+ sess_conf->crypto_xform->cipher.key.data =
+ sa->key.data;
+ sess_conf->crypto_xform->next->auth.key.data =
+ sa->auth_key.data;
+ /* Verify crypto capabilities */
+ if (test_ipsec_crypto_caps_cipher_verify(sec_cap,
+ sess_conf->crypto_xform) != 0) {
+ RTE_LOG(INFO, USER1,
+ "Cipher crypto capabilities not supported\n");
+ return TEST_SKIPPED;
+ }
+
+ if (test_ipsec_crypto_caps_auth_verify(sec_cap,
+ sess_conf->crypto_xform->next) != 0) {
+ RTE_LOG(INFO, USER1,
+ "Auth crypto capabilities not supported\n");
+ return TEST_SKIPPED;
+ }
+ } else {
+ rte_memcpy(&sess_conf->crypto_xform->next->cipher,
+ &sa->xform.chain.cipher.cipher,
+ sizeof(struct rte_crypto_cipher_xform));
+ rte_memcpy(&sess_conf->crypto_xform->auth,
+ &sa->xform.chain.auth.auth,
+ sizeof(struct rte_crypto_auth_xform));
+ sess_conf->crypto_xform->auth.key.data =
+ sa->auth_key.data;
+ sess_conf->crypto_xform->next->cipher.key.data =
+ sa->key.data;
+
+ /* Verify crypto capabilities */
+ if (test_ipsec_crypto_caps_cipher_verify(sec_cap,
+ sess_conf->crypto_xform->next) != 0) {
+ RTE_LOG(INFO, USER1,
+ "Cipher crypto capabilities not supported\n");
+ return TEST_SKIPPED;
+ }
+
+ if (test_ipsec_crypto_caps_auth_verify(sec_cap,
+ sess_conf->crypto_xform) != 0) {
+ RTE_LOG(INFO, USER1,
+ "Auth crypto capabilities not supported\n");
+ return TEST_SKIPPED;
+ }
+ }
+ }
+
+ if (test_ipsec_sec_caps_verify(&sess_conf->ipsec, sec_cap, false) != 0)
+ return TEST_SKIPPED;
+
+ if ((sa->ipsec_xform.direction ==
+ RTE_SECURITY_IPSEC_SA_DIR_EGRESS) &&
+ (sa->ipsec_xform.options.iv_gen_disable == 1)) {
+ /* Set env variable when IV generation is disabled */
+ char arr[128];
+ int len = 0, j = 0;
+ int iv_len = (sa->aead || sa->aes_gmac) ? 8 : 16;
+
+ for (; j < iv_len; j++)
+ len += snprintf(arr+len, sizeof(arr) - len,
+ "0x%x, ", sa->iv.data[j]);
+ setenv("ETH_SEC_IV_OVR", arr, 1);
+ }
+
+ *sess = rte_security_session_create(sec_ctx,
+ sess_conf, sess_pool, sess_priv_pool);
+ if (*sess == NULL) {
+ printf("SEC Session init failed.\n");
+ return TEST_FAILED;
+ }
+
+ *ol_flags = sec_cap->ol_flags;
+ *ctx = sec_ctx;
+
+ return 0;
+}
+
+/* Check the link status of all ports in up to 3s, and print them finally */
+static void
+check_all_ports_link_status(uint16_t port_num, uint32_t port_mask)
+{
+#define CHECK_INTERVAL 100 /* 100ms */
+#define MAX_CHECK_TIME 30 /* 3s (30 * 100ms) in total */
+ uint16_t portid;
+ uint8_t count, all_ports_up, print_flag = 0;
+ struct rte_eth_link link;
+ int ret;
+ char link_status[RTE_ETH_LINK_MAX_STR_LEN];
+
+ printf("Checking link statuses...\n");
+ fflush(stdout);
+ for (count = 0; count <= MAX_CHECK_TIME; count++) {
+ all_ports_up = 1;
+ for (portid = 0; portid < port_num; portid++) {
+ if ((port_mask & (1 << portid)) == 0)
+ continue;
+ memset(&link, 0, sizeof(link));
+ ret = rte_eth_link_get_nowait(portid, &link);
+ if (ret < 0) {
+ all_ports_up = 0;
+ if (print_flag == 1)
+ printf("Port %u link get failed: %s\n",
+ portid, rte_strerror(-ret));
+ continue;
+ }
+
+ /* print link status if flag set */
+ if (print_flag == 1) {
+ if (link.link_status && link_mbps == 0)
+ link_mbps = link.link_speed;
+
+ rte_eth_link_to_str(link_status,
+ sizeof(link_status), &link);
+ printf("Port %d %s\n", portid, link_status);
+ continue;
+ }
+ /* clear all_ports_up flag if any link down */
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
+ all_ports_up = 0;
+ break;
+ }
+ }
+ /* after finally printing all link status, get out */
+ if (print_flag == 1)
+ break;
+
+ if (all_ports_up == 0) {
+ fflush(stdout);
+ rte_delay_ms(CHECK_INTERVAL);
+ }
+
+ /* set the print_flag if all ports up or timeout */
+ if (all_ports_up == 1 || count == (MAX_CHECK_TIME - 1))
+ print_flag = 1;
+ }
+}
+
+static void
+print_ethaddr(const char *name, const struct rte_ether_addr *eth_addr)
+{
+ char buf[RTE_ETHER_ADDR_FMT_SIZE];
+ rte_ether_format_addr(buf, RTE_ETHER_ADDR_FMT_SIZE, eth_addr);
+ printf("%s%s", name, buf);
+}
+
+static void
+copy_buf_to_pkt_segs(const uint8_t *buf, unsigned int len,
+ struct rte_mbuf *pkt, unsigned int offset)
+{
+ unsigned int copied = 0;
+ unsigned int copy_len;
+ struct rte_mbuf *seg;
+ void *seg_buf;
+
+ seg = pkt;
+ while (offset >= seg->data_len) {
+ offset -= seg->data_len;
+ seg = seg->next;
+ }
+ copy_len = seg->data_len - offset;
+ seg_buf = rte_pktmbuf_mtod_offset(seg, char *, offset);
+ while (len > copy_len) {
+ rte_memcpy(seg_buf, buf + copied, (size_t) copy_len);
+ len -= copy_len;
+ copied += copy_len;
+ seg = seg->next;
+ seg_buf = rte_pktmbuf_mtod(seg, void *);
+ }
+ rte_memcpy(seg_buf, buf + copied, (size_t) len);
+}
+
+static inline struct rte_mbuf *
+init_packet(struct rte_mempool *mp, const uint8_t *data, unsigned int len)
+{
+ struct rte_mbuf *pkt;
+
+ pkt = rte_pktmbuf_alloc(mp);
+ if (pkt == NULL)
+ return NULL;
+ if (((data[0] & 0xF0) >> 4) == IPVERSION) {
+ rte_memcpy(rte_pktmbuf_append(pkt, RTE_ETHER_HDR_LEN),
+ &dummy_ipv4_eth_hdr, RTE_ETHER_HDR_LEN);
+ pkt->l3_len = sizeof(struct rte_ipv4_hdr);
+ } else {
+ rte_memcpy(rte_pktmbuf_append(pkt, RTE_ETHER_HDR_LEN),
+ &dummy_ipv6_eth_hdr, RTE_ETHER_HDR_LEN);
+ pkt->l3_len = sizeof(struct rte_ipv6_hdr);
+ }
+ pkt->l2_len = RTE_ETHER_HDR_LEN;
+
+ if (pkt->buf_len > (len + RTE_ETHER_HDR_LEN))
+ rte_memcpy(rte_pktmbuf_append(pkt, len), data, len);
+ else
+ copy_buf_to_pkt_segs(data, len, pkt, RTE_ETHER_HDR_LEN);
+ return pkt;
+}
+
+static int
+init_mempools(unsigned int nb_mbuf)
+{
+ struct rte_security_ctx *sec_ctx;
+ uint16_t nb_sess = 512;
+ uint32_t sess_sz;
+ char s[64];
+
+ if (mbufpool == NULL) {
+ snprintf(s, sizeof(s), "mbuf_pool");
+ mbufpool = rte_pktmbuf_pool_create(s, nb_mbuf,
+ MEMPOOL_CACHE_SIZE, 0,
+ RTE_MBUF_DEFAULT_BUF_SIZE, SOCKET_ID_ANY);
+ if (mbufpool == NULL) {
+ printf("Cannot init mbuf pool\n");
+ return TEST_FAILED;
+ }
+ printf("Allocated mbuf pool\n");
+ }
+
+ sec_ctx = rte_eth_dev_get_sec_ctx(port_id);
+ if (sec_ctx == NULL) {
+ printf("Device does not support Security ctx\n");
+ return TEST_SKIPPED;
+ }
+ sess_sz = rte_security_session_get_size(sec_ctx);
+ if (sess_pool == NULL) {
+ snprintf(s, sizeof(s), "sess_pool");
+ sess_pool = rte_mempool_create(s, nb_sess, sess_sz,
+ MEMPOOL_CACHE_SIZE, 0,
+ NULL, NULL, NULL, NULL,
+ SOCKET_ID_ANY, 0);
+ if (sess_pool == NULL) {
+ printf("Cannot init sess pool\n");
+ return TEST_FAILED;
+ }
+ printf("Allocated sess pool\n");
+ }
+ if (sess_priv_pool == NULL) {
+ snprintf(s, sizeof(s), "sess_priv_pool");
+ sess_priv_pool = rte_mempool_create(s, nb_sess, sess_sz,
+ MEMPOOL_CACHE_SIZE, 0,
+ NULL, NULL, NULL, NULL,
+ SOCKET_ID_ANY, 0);
+ if (sess_priv_pool == NULL) {
+ printf("Cannot init sess_priv pool\n");
+ return TEST_FAILED;
+ }
+ printf("Allocated sess_priv pool\n");
+ }
+
+ return 0;
+}
+
+static void
+create_default_flow(uint16_t portid)
+{
+ struct rte_flow_action action[2];
+ struct rte_flow_item pattern[2];
+ struct rte_flow_attr attr = {0};
+ struct rte_flow_error err;
+ struct rte_flow *flow;
+ int ret;
+
+ /* Add the default rte_flow to enable SECURITY for all ESP packets */
+
+ pattern[0].type = RTE_FLOW_ITEM_TYPE_ESP;
+ pattern[0].spec = NULL;
+ pattern[0].mask = NULL;
+ pattern[0].last = NULL;
+ pattern[1].type = RTE_FLOW_ITEM_TYPE_END;
+
+ action[0].type = RTE_FLOW_ACTION_TYPE_SECURITY;
+ action[0].conf = NULL;
+ action[1].type = RTE_FLOW_ACTION_TYPE_END;
+ action[1].conf = NULL;
+
+ attr.ingress = 1;
+
+ ret = rte_flow_validate(portid, &attr, pattern, action, &err);
+ if (ret) {
+ printf("\nValidate flow failed, ret = %d\n", ret);
+ return;
+ }
+ flow = rte_flow_create(portid, &attr, pattern, action, &err);
+ if (flow == NULL) {
+ printf("\nDefault flow rule create failed\n");
+ return;
+ }
+
+ default_flow[portid] = flow;
+}
+
+static void
+destroy_default_flow(uint16_t portid)
+{
+ struct rte_flow_error err;
+ int ret;
+ if (!default_flow[portid])
+ return;
+ ret = rte_flow_destroy(portid, default_flow[portid], &err);
+ if (ret) {
+ printf("\nDefault flow rule destroy failed\n");
+ return;
+ }
+ default_flow[portid] = NULL;
+}
+
+struct rte_mbuf **tx_pkts_burst;
+struct rte_mbuf **rx_pkts_burst;
+
+static int
+test_ipsec_inline_proto_process(struct ipsec_test_data *td,
+ struct ipsec_test_data *res_d,
+ int nb_pkts,
+ bool silent,
+ const struct ipsec_test_flags *flags)
+{
+ struct rte_security_session_conf sess_conf = {0};
+ struct rte_crypto_sym_xform cipher = {0};
+ struct rte_crypto_sym_xform auth = {0};
+ struct rte_crypto_sym_xform aead = {0};
+ struct rte_security_session *ses;
+ struct rte_security_ctx *ctx;
+ int nb_rx = 0, nb_sent;
+ uint32_t ol_flags;
+ int i, j = 0, ret;
+
+ memset(rx_pkts_burst, 0, sizeof(rx_pkts_burst[0]) * nb_pkts);
+
+ if (td->aead) {
+ sess_conf.crypto_xform = &aead;
+ } else {
+ if (td->ipsec_xform.direction ==
+ RTE_SECURITY_IPSEC_SA_DIR_EGRESS) {
+ sess_conf.crypto_xform = &cipher;
+ sess_conf.crypto_xform->type = RTE_CRYPTO_SYM_XFORM_CIPHER;
+ sess_conf.crypto_xform->next = &auth;
+ sess_conf.crypto_xform->next->type = RTE_CRYPTO_SYM_XFORM_AUTH;
+ } else {
+ sess_conf.crypto_xform = &auth;
+ sess_conf.crypto_xform->type = RTE_CRYPTO_SYM_XFORM_AUTH;
+ sess_conf.crypto_xform->next = &cipher;
+ sess_conf.crypto_xform->next->type = RTE_CRYPTO_SYM_XFORM_CIPHER;
+ }
+ }
+
+ /* Create Inline IPsec session. */
+ ret = create_inline_ipsec_session(td, port_id, &ses, &ctx,
+ &ol_flags, flags, &sess_conf);
+ if (ret)
+ return ret;
+
+ if (td->ipsec_xform.direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS)
+ create_default_flow(port_id);
+
+ for (i = 0; i < nb_pkts; i++) {
+ tx_pkts_burst[i] = init_packet(mbufpool, td->input_text.data,
+ td->input_text.len);
+ if (tx_pkts_burst[i] == NULL) {
+ while (i--)
+ rte_pktmbuf_free(tx_pkts_burst[i]);
+ ret = TEST_FAILED;
+ goto out;
+ }
+
+ if (test_ipsec_pkt_update(rte_pktmbuf_mtod_offset(tx_pkts_burst[i],
+ uint8_t *, RTE_ETHER_HDR_LEN), flags)) {
+ while (i--)
+ rte_pktmbuf_free(tx_pkts_burst[i]);
+ ret = TEST_FAILED;
+ goto out;
+ }
+
+ if (td->ipsec_xform.direction == RTE_SECURITY_IPSEC_SA_DIR_EGRESS) {
+ if (ol_flags & RTE_SECURITY_TX_OLOAD_NEED_MDATA)
+ rte_security_set_pkt_metadata(ctx, ses,
+ tx_pkts_burst[i], NULL);
+ tx_pkts_burst[i]->ol_flags |= RTE_MBUF_F_TX_SEC_OFFLOAD;
+ }
+ }
+ /* Send packet to ethdev for inline IPsec processing. */
+ nb_sent = rte_eth_tx_burst(port_id, 0, tx_pkts_burst, nb_pkts);
+ if (nb_sent != nb_pkts) {
+ printf("\nUnable to TX %d packets", nb_pkts);
+ for ( ; nb_sent < nb_pkts; nb_sent++)
+ rte_pktmbuf_free(tx_pkts_burst[nb_sent]);
+ ret = TEST_FAILED;
+ goto out;
+ }
+
+ rte_pause();
+
+ /* Receive back packet on loopback interface. */
+ do {
+ rte_delay_ms(1);
+ nb_rx += rte_eth_rx_burst(port_id, 0, &rx_pkts_burst[nb_rx],
+ nb_sent - nb_rx);
+ if (nb_rx >= nb_sent)
+ break;
+ } while (j++ < 5 || nb_rx == 0);
+
+ if (nb_rx != nb_sent) {
+ printf("\nUnable to RX all %d packets", nb_sent);
+ while (--nb_rx)
+ rte_pktmbuf_free(rx_pkts_burst[nb_rx]);
+ ret = TEST_FAILED;
+ goto out;
+ }
+
+ for (i = 0; i < nb_rx; i++) {
+ rte_pktmbuf_adj(rx_pkts_burst[i], RTE_ETHER_HDR_LEN);
+
+ ret = test_ipsec_post_process(rx_pkts_burst[i], td,
+ res_d, silent, flags);
+ if (ret != TEST_SUCCESS) {
+ for ( ; i < nb_rx; i++)
+ rte_pktmbuf_free(rx_pkts_burst[i]);
+ goto out;
+ }
+
+ ret = test_ipsec_stats_verify(ctx, ses, flags,
+ td->ipsec_xform.direction);
+ if (ret != TEST_SUCCESS) {
+ for ( ; i < nb_rx; i++)
+ rte_pktmbuf_free(rx_pkts_burst[i]);
+ goto out;
+ }
+
+ rte_pktmbuf_free(rx_pkts_burst[i]);
+ rx_pkts_burst[i] = NULL;
+ }
+
+out:
+ if (td->ipsec_xform.direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS)
+ destroy_default_flow(port_id);
+
+ /* Destroy session so that other cases can create the session again */
+ rte_security_session_destroy(ctx, ses);
+ ses = NULL;
+
+ return ret;
+}
+
+static int
+ut_setup_inline_ipsec(void)
+{
+ int ret;
+
+ /* Start device */
+ ret = rte_eth_dev_start(port_id);
+ if (ret < 0) {
+ printf("rte_eth_dev_start: err=%d, port=%d\n",
+ ret, port_id);
+ return ret;
+ }
+ /* always enable promiscuous */
+ ret = rte_eth_promiscuous_enable(port_id);
+ if (ret != 0) {
+ printf("rte_eth_promiscuous_enable: err=%s, port=%d\n",
+ rte_strerror(-ret), port_id);
+ return ret;
+ }
+
+ check_all_ports_link_status(1, RTE_PORT_ALL);
+
+ return 0;
+}
+
+static void
+ut_teardown_inline_ipsec(void)
+{
+ uint16_t portid;
+ int ret;
+
+ /* port tear down */
+ RTE_ETH_FOREACH_DEV(portid) {
+ ret = rte_eth_dev_stop(portid);
+ if (ret != 0)
+ printf("rte_eth_dev_stop: err=%s, port=%u\n",
+ rte_strerror(-ret), portid);
+ }
+}
+
+static int
+inline_ipsec_testsuite_setup(void)
+{
+ uint16_t nb_rxd;
+ uint16_t nb_txd;
+ uint16_t nb_ports;
+ int ret;
+ uint16_t nb_rx_queue = 1, nb_tx_queue = 1;
+
+ printf("Start inline IPsec test.\n");
+
+ nb_ports = rte_eth_dev_count_avail();
+ if (nb_ports < NB_ETHPORTS_USED) {
+ printf("At least %u port(s) needed for test\n",
+ NB_ETHPORTS_USED);
+ return TEST_SKIPPED;
+ }
+
+ ret = init_mempools(NB_MBUF);
+ if (ret)
+ return ret;
+
+ if (tx_pkts_burst == NULL) {
+ tx_pkts_burst = (struct rte_mbuf **)rte_calloc("tx_buff",
+ MAX_TRAFFIC_BURST,
+ sizeof(void *),
+ RTE_CACHE_LINE_SIZE);
+ if (!tx_pkts_burst)
+ return TEST_FAILED;
+
+ rx_pkts_burst = (struct rte_mbuf **)rte_calloc("rx_buff",
+ MAX_TRAFFIC_BURST,
+ sizeof(void *),
+ RTE_CACHE_LINE_SIZE);
+ if (!rx_pkts_burst)
+ return TEST_FAILED;
+ }
+
+ printf("Generate %d packets\n", MAX_TRAFFIC_BURST);
+
+ nb_rxd = RTE_TEST_RX_DESC_DEFAULT;
+ nb_txd = RTE_TEST_TX_DESC_DEFAULT;
+
+ /* configuring port 0 for the test is enough */
+ port_id = 0;
+ /* port configure */
+ ret = rte_eth_dev_configure(port_id, nb_rx_queue,
+ nb_tx_queue, &port_conf);
+ if (ret < 0) {
+ printf("Cannot configure device: err=%d, port=%d\n",
+ ret, port_id);
+ return ret;
+ }
+ ret = rte_eth_macaddr_get(port_id, &ports_eth_addr[port_id]);
+ if (ret < 0) {
+ printf("Cannot get mac address: err=%d, port=%d\n",
+ ret, port_id);
+ return ret;
+ }
+ printf("Port %u ", port_id);
+ print_ethaddr("Address:", &ports_eth_addr[port_id]);
+ printf("\n");
+
+ /* tx queue setup */
+ ret = rte_eth_tx_queue_setup(port_id, 0, nb_txd,
+ SOCKET_ID_ANY, &tx_conf);
+ if (ret < 0) {
+ printf("rte_eth_tx_queue_setup: err=%d, port=%d\n",
+ ret, port_id);
+ return ret;
+ }
+ /* rx queue setup */
+ ret = rte_eth_rx_queue_setup(port_id, 0, nb_rxd, SOCKET_ID_ANY,
+ &rx_conf, mbufpool);
+ if (ret < 0) {
+ printf("rte_eth_rx_queue_setup: err=%d, port=%d\n",
+ ret, port_id);
+ return ret;
+ }
+ test_ipsec_alg_list_populate();
+
+ return 0;
+}
+
+static void
+inline_ipsec_testsuite_teardown(void)
+{
+ uint16_t portid;
+ int ret;
+
+ /* port tear down */
+ RTE_ETH_FOREACH_DEV(portid) {
+ ret = rte_eth_dev_reset(portid);
+ if (ret != 0)
+ printf("rte_eth_dev_reset: err=%s, port=%u\n",
rte_strerror(-ret), portid);
+ }
+}
+
+static int
+test_ipsec_inline_proto_known_vec(const void *test_data)
+{
+ struct ipsec_test_data td_outb;
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ memcpy(&td_outb, test_data, sizeof(td_outb));
+
+ if (td_outb.aead ||
+ td_outb.xform.chain.cipher.cipher.algo != RTE_CRYPTO_CIPHER_NULL) {
+ /* Disable IV gen to be able to test with known vectors */
+ td_outb.ipsec_xform.options.iv_gen_disable = 1;
+ }
+
+ return test_ipsec_inline_proto_process(&td_outb, NULL, 1,
+ false, &flags);
+}
+
+static struct unit_test_suite inline_ipsec_testsuite = {
+ .suite_name = "Inline IPsec Ethernet Device Unit Test Suite",
+ .setup = inline_ipsec_testsuite_setup,
+ .teardown = inline_ipsec_testsuite_teardown,
+ .unit_test_cases = {
+ TEST_CASE_NAMED_WITH_DATA(
+ "Outbound known vector (ESP tunnel mode IPv4 AES-GCM 128)",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_known_vec, &pkt_aes_128_gcm),
+ TEST_CASE_NAMED_WITH_DATA(
+ "Outbound known vector (ESP tunnel mode IPv4 AES-GCM 192)",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_known_vec, &pkt_aes_192_gcm),
+ TEST_CASE_NAMED_WITH_DATA(
+ "Outbound known vector (ESP tunnel mode IPv4 AES-GCM 256)",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_known_vec, &pkt_aes_256_gcm),
+ TEST_CASE_NAMED_WITH_DATA(
+ "Outbound known vector (ESP tunnel mode IPv4 AES-CBC 128 HMAC-SHA256 [16B ICV])",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_known_vec,
+ &pkt_aes_128_cbc_hmac_sha256),
+ TEST_CASE_NAMED_WITH_DATA(
+ "Outbound known vector (ESP tunnel mode IPv4 AES-CBC 128 HMAC-SHA384 [24B ICV])",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_known_vec,
+ &pkt_aes_128_cbc_hmac_sha384),
+ TEST_CASE_NAMED_WITH_DATA(
+ "Outbound known vector (ESP tunnel mode IPv4 AES-CBC 128 HMAC-SHA512 [32B ICV])",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_known_vec,
+ &pkt_aes_128_cbc_hmac_sha512),
+ TEST_CASE_NAMED_WITH_DATA(
+ "Outbound known vector (ESP tunnel mode IPv6 AES-GCM 128)",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_known_vec, &pkt_aes_256_gcm_v6),
+ TEST_CASE_NAMED_WITH_DATA(
+ "Outbound known vector (ESP tunnel mode IPv6 AES-CBC 128 HMAC-SHA256 [16B ICV])",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_known_vec,
+ &pkt_aes_128_cbc_hmac_sha256_v6),
+ TEST_CASE_NAMED_WITH_DATA(
+ "Outbound known vector (ESP tunnel mode IPv4 NULL AES-XCBC-MAC [12B ICV])",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_known_vec,
+ &pkt_null_aes_xcbc),
+
+ TEST_CASES_END() /**< NULL terminate unit test array */
+ },
+};
+
+
+static int
+test_inline_ipsec(void)
+{
+ return unit_test_suite_runner(&inline_ipsec_testsuite);
+}
+
+#endif /* !RTE_EXEC_ENV_WINDOWS */
+
+REGISTER_TEST_COMMAND(inline_ipsec_autotest, test_inline_ipsec);
diff --git a/app/test/test_security_inline_proto_vectors.h b/app/test/test_security_inline_proto_vectors.h
new file mode 100644
index 0000000000..d1074da36a
--- /dev/null
+++ b/app/test/test_security_inline_proto_vectors.h
@@ -0,0 +1,20 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2022 Marvell.
+ */
+#ifndef _TEST_INLINE_IPSEC_REASSEMBLY_VECTORS_H_
+#define _TEST_INLINE_IPSEC_REASSEMBLY_VECTORS_H_
+
+#include "test_cryptodev_security_ipsec.h"
+
+uint8_t dummy_ipv4_eth_hdr[] = {
+ /* ETH */
+ 0xf1, 0xf1, 0xf1, 0xf1, 0xf1, 0xf1,
+ 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0x08, 0x00,
+};
+uint8_t dummy_ipv6_eth_hdr[] = {
+ /* ETH */
+ 0xf1, 0xf1, 0xf1, 0xf1, 0xf1, 0xf1,
+ 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0x86, 0xdd,
+};
+
+#endif
diff --git a/doc/guides/rel_notes/release_22_07.rst b/doc/guides/rel_notes/release_22_07.rst
index 42a5f2d990..212116f166 100644
--- a/doc/guides/rel_notes/release_22_07.rst
+++ b/doc/guides/rel_notes/release_22_07.rst
@@ -55,6 +55,11 @@ New Features
Also, make sure to start the actual text at the margin.
=======================================================
+* **Added security inline protocol (IPsec) tests in dpdk-test.**
+
Added various functional test cases in dpdk-test to verify
inline IPsec protocol offload using a loopback interface.
+
Removed Items
-------------
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* [PATCH v5 2/7] test/security: add inline inbound IPsec cases
2022-04-27 15:10 ` [PATCH v5 0/7] " Akhil Goyal
2022-04-27 15:10 ` [PATCH v5 1/7] app/test: add unit cases for inline IPsec offload Akhil Goyal
@ 2022-04-27 15:10 ` Akhil Goyal
2022-04-27 15:44 ` Zhang, Roy Fan
2022-04-27 15:10 ` [PATCH v5 3/7] test/security: add combined mode inline " Akhil Goyal
` (6 subsequent siblings)
8 siblings, 1 reply; 184+ messages in thread
From: Akhil Goyal @ 2022-04-27 15:10 UTC (permalink / raw)
To: dev
Cc: thomas, david.marchand, hemant.agrawal, anoobj,
konstantin.ananyev, ciara.power, ferruh.yigit, andrew.rybchenko,
ndabilpuram, vattunuru, Akhil Goyal
Added test cases for inline inbound protocol offload
verification with known test vectors from lookaside mode.
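The conversion of a known outbound vector into its inbound counterpart (done in the patch by `test_ipsec_td_in_from_out()`) essentially swaps the plaintext/ciphertext roles and flips the SA direction. A minimal standalone model of that idea; the struct and field names below are simplified stand-ins, not the DPDK `ipsec_test_data` layout:

```c
#include <assert.h>
#include <string.h>

enum dir { EGRESS, INGRESS };

struct test_data {
	enum dir direction;
	unsigned char input[64];   /* plaintext for egress, ciphertext for ingress */
	unsigned char output[64];  /* ciphertext for egress, plaintext for ingress */
	size_t input_len, output_len;
};

/* Derive an inbound vector from an outbound one: the encrypted output
 * of the egress case becomes the input of the ingress case, and the
 * original plaintext becomes the expected output.
 */
static void
td_in_from_out(const struct test_data *out, struct test_data *in)
{
	memcpy(in->input, out->output, out->output_len);
	in->input_len = out->output_len;
	memcpy(in->output, out->input, out->input_len);
	in->output_len = out->input_len;
	in->direction = INGRESS;
}
```

This is why one set of known vectors can drive both directions: the outbound case validates encryption against the stored ciphertext, and the derived inbound case validates decryption back to the original plaintext.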
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
---
app/test/test_security_inline_proto.c | 65 +++++++++++++++++++++++++++
1 file changed, 65 insertions(+)
diff --git a/app/test/test_security_inline_proto.c b/app/test/test_security_inline_proto.c
index 249474be91..7dd9ba7aff 100644
--- a/app/test/test_security_inline_proto.c
+++ b/app/test/test_security_inline_proto.c
@@ -819,6 +819,24 @@ test_ipsec_inline_proto_known_vec(const void *test_data)
false, &flags);
}
+static int
+test_ipsec_inline_proto_known_vec_inb(const void *test_data)
+{
+ const struct ipsec_test_data *td = test_data;
+ struct ipsec_test_flags flags;
+ struct ipsec_test_data td_inb;
+
+ memset(&flags, 0, sizeof(flags));
+
+ if (td->ipsec_xform.direction == RTE_SECURITY_IPSEC_SA_DIR_EGRESS)
+ test_ipsec_td_in_from_out(td, &td_inb);
+ else
+ memcpy(&td_inb, td, sizeof(td_inb));
+
+ return test_ipsec_inline_proto_process(&td_inb, NULL, 1, false, &flags);
+}
+
+
static struct unit_test_suite inline_ipsec_testsuite = {
.suite_name = "Inline IPsec Ethernet Device Unit Test Suite",
.setup = inline_ipsec_testsuite_setup,
@@ -865,6 +883,53 @@ static struct unit_test_suite inline_ipsec_testsuite = {
ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
test_ipsec_inline_proto_known_vec,
&pkt_null_aes_xcbc),
+ TEST_CASE_NAMED_WITH_DATA(
+ "Inbound known vector (ESP tunnel mode IPv4 AES-GCM 128)",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_known_vec_inb, &pkt_aes_128_gcm),
+ TEST_CASE_NAMED_WITH_DATA(
+ "Inbound known vector (ESP tunnel mode IPv4 AES-GCM 192)",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_known_vec_inb, &pkt_aes_192_gcm),
+ TEST_CASE_NAMED_WITH_DATA(
+ "Inbound known vector (ESP tunnel mode IPv4 AES-GCM 256)",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_known_vec_inb, &pkt_aes_256_gcm),
+ TEST_CASE_NAMED_WITH_DATA(
+ "Inbound known vector (ESP tunnel mode IPv4 AES-CBC 128)",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_known_vec_inb, &pkt_aes_128_cbc_null),
+ TEST_CASE_NAMED_WITH_DATA(
+ "Inbound known vector (ESP tunnel mode IPv4 AES-CBC 128 HMAC-SHA256 [16B ICV])",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_known_vec_inb,
+ &pkt_aes_128_cbc_hmac_sha256),
+ TEST_CASE_NAMED_WITH_DATA(
+ "Inbound known vector (ESP tunnel mode IPv4 AES-CBC 128 HMAC-SHA384 [24B ICV])",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_known_vec_inb,
+ &pkt_aes_128_cbc_hmac_sha384),
+ TEST_CASE_NAMED_WITH_DATA(
+ "Inbound known vector (ESP tunnel mode IPv4 AES-CBC 128 HMAC-SHA512 [32B ICV])",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_known_vec_inb,
+ &pkt_aes_128_cbc_hmac_sha512),
+ TEST_CASE_NAMED_WITH_DATA(
+ "Inbound known vector (ESP tunnel mode IPv6 AES-GCM 128)",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_known_vec_inb, &pkt_aes_256_gcm_v6),
+ TEST_CASE_NAMED_WITH_DATA(
+ "Inbound known vector (ESP tunnel mode IPv6 AES-CBC 128 HMAC-SHA256 [16B ICV])",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_known_vec_inb,
+ &pkt_aes_128_cbc_hmac_sha256_v6),
+ TEST_CASE_NAMED_WITH_DATA(
+ "Inbound known vector (ESP tunnel mode IPv4 NULL AES-XCBC-MAC [12B ICV])",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_known_vec_inb,
+ &pkt_null_aes_xcbc),
+
+
TEST_CASES_END() /**< NULL terminate unit test array */
},
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* [PATCH v5 3/7] test/security: add combined mode inline IPsec cases
2022-04-27 15:10 ` [PATCH v5 0/7] " Akhil Goyal
2022-04-27 15:10 ` [PATCH v5 1/7] app/test: add unit cases for inline IPsec offload Akhil Goyal
2022-04-27 15:10 ` [PATCH v5 2/7] test/security: add inline inbound IPsec cases Akhil Goyal
@ 2022-04-27 15:10 ` Akhil Goyal
2022-04-27 15:45 ` Zhang, Roy Fan
2022-04-27 15:10 ` [PATCH v5 4/7] test/security: add inline IPsec reassembly cases Akhil Goyal
` (5 subsequent siblings)
8 siblings, 1 reply; 184+ messages in thread
From: Akhil Goyal @ 2022-04-27 15:10 UTC (permalink / raw)
To: dev
Cc: thomas, david.marchand, hemant.agrawal, anoobj,
konstantin.ananyev, ciara.power, ferruh.yigit, andrew.rybchenko,
ndabilpuram, vattunuru, Akhil Goyal
Added combined encap and decap test cases for various algorithm
combinations.
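The combined-mode runner iterates the algorithm list, runs outbound then inbound processing, and reduces the per-algorithm results to one suite verdict: any failure fails the run, otherwise at least one pass is required, else everything was skipped. A standalone sketch of that verdict logic; the enum values are illustrative, not DPDK's actual `TEST_*` constants:

```c
#include <assert.h>

/* Illustrative result codes; values mirror common test-harness
 * conventions, not DPDK's exact TEST_* macros.
 */
enum result { SUCCESS = 0, FAILED = -1, SKIPPED = 77 };

/* Reduce per-algorithm pass/fail counts to a suite verdict:
 * any failure fails the run; otherwise at least one pass is
 * needed, else every case was skipped.
 */
static enum result
suite_verdict(unsigned int pass_cnt, unsigned int fail_cnt)
{
	if (fail_cnt > 0)
		return FAILED;
	return (pass_cnt > 0) ? SUCCESS : FAILED + SKIPPED - FAILED;
}
```

The SKIPPED fallback matters on hardware that supports only a subset of the algorithm list: a run where every case was skipped should not be reported as a pass.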
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
---
app/test/test_security_inline_proto.c | 102 ++++++++++++++++++++++++++
1 file changed, 102 insertions(+)
diff --git a/app/test/test_security_inline_proto.c b/app/test/test_security_inline_proto.c
index 7dd9ba7aff..ea36d1188c 100644
--- a/app/test/test_security_inline_proto.c
+++ b/app/test/test_security_inline_proto.c
@@ -660,6 +660,92 @@ test_ipsec_inline_proto_process(struct ipsec_test_data *td,
return ret;
}
+static int
+test_ipsec_inline_proto_all(const struct ipsec_test_flags *flags)
+{
+ struct ipsec_test_data td_outb;
+ struct ipsec_test_data td_inb;
+ unsigned int i, nb_pkts = 1, pass_cnt = 0, fail_cnt = 0;
+ int ret;
+
+ if (flags->iv_gen || flags->sa_expiry_pkts_soft ||
+ flags->sa_expiry_pkts_hard)
+ nb_pkts = IPSEC_TEST_PACKETS_MAX;
+
+ for (i = 0; i < RTE_DIM(alg_list); i++) {
+ test_ipsec_td_prepare(alg_list[i].param1,
+ alg_list[i].param2,
+ flags, &td_outb, 1);
+
+ if (!td_outb.aead) {
+ enum rte_crypto_cipher_algorithm cipher_alg;
+ enum rte_crypto_auth_algorithm auth_alg;
+
+ cipher_alg = td_outb.xform.chain.cipher.cipher.algo;
+ auth_alg = td_outb.xform.chain.auth.auth.algo;
+
+ if (td_outb.aes_gmac && cipher_alg != RTE_CRYPTO_CIPHER_NULL)
+ continue;
+
+ /* ICV is not applicable for NULL auth */
+ if (flags->icv_corrupt &&
+ auth_alg == RTE_CRYPTO_AUTH_NULL)
+ continue;
+
+ /* IV is not applicable for NULL cipher */
+ if (flags->iv_gen &&
+ cipher_alg == RTE_CRYPTO_CIPHER_NULL)
+ continue;
+ }
+
+ if (flags->udp_encap)
+ td_outb.ipsec_xform.options.udp_encap = 1;
+
+ ret = test_ipsec_inline_proto_process(&td_outb, &td_inb, nb_pkts,
+ false, flags);
+ if (ret == TEST_SKIPPED)
+ continue;
+
+ if (ret == TEST_FAILED) {
+ printf("\n TEST FAILED");
+ test_ipsec_display_alg(alg_list[i].param1,
+ alg_list[i].param2);
+ fail_cnt++;
+ continue;
+ }
+
+ test_ipsec_td_update(&td_inb, &td_outb, 1, flags);
+
+ ret = test_ipsec_inline_proto_process(&td_inb, NULL, nb_pkts,
+ false, flags);
+ if (ret == TEST_SKIPPED)
+ continue;
+
+ if (ret == TEST_FAILED) {
+ printf("\n TEST FAILED");
+ test_ipsec_display_alg(alg_list[i].param1,
+ alg_list[i].param2);
+ fail_cnt++;
+ continue;
+ }
+
+ if (flags->display_alg)
+ test_ipsec_display_alg(alg_list[i].param1,
+ alg_list[i].param2);
+
+ pass_cnt++;
+ }
+
+ printf("Tests passed: %d, failed: %d", pass_cnt, fail_cnt);
+ if (fail_cnt > 0)
+ return TEST_FAILED;
+ if (pass_cnt > 0)
+ return TEST_SUCCESS;
+ else
+ return TEST_SKIPPED;
+}
+
+
static int
ut_setup_inline_ipsec(void)
{
@@ -836,6 +922,17 @@ test_ipsec_inline_proto_known_vec_inb(const void *test_data)
return test_ipsec_inline_proto_process(&td_inb, NULL, 1, false, &flags);
}
+static int
+test_ipsec_inline_proto_display_list(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.display_alg = true;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
static struct unit_test_suite inline_ipsec_testsuite = {
.suite_name = "Inline IPsec Ethernet Device Unit Test Suite",
@@ -929,6 +1026,11 @@ static struct unit_test_suite inline_ipsec_testsuite = {
test_ipsec_inline_proto_known_vec_inb,
&pkt_null_aes_xcbc),
+ TEST_CASE_NAMED_ST(
+ "Combined test alg list",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_display_list),
+
TEST_CASES_END() /**< NULL terminate unit test array */
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* [PATCH v5 4/7] test/security: add inline IPsec reassembly cases
2022-04-27 15:10 ` [PATCH v5 0/7] " Akhil Goyal
` (2 preceding siblings ...)
2022-04-27 15:10 ` [PATCH v5 3/7] test/security: add combined mode inline " Akhil Goyal
@ 2022-04-27 15:10 ` Akhil Goyal
2022-04-27 15:45 ` Zhang, Roy Fan
2022-04-27 15:10 ` [PATCH v5 5/7] test/security: add more inline IPsec functional cases Akhil Goyal
` (4 subsequent siblings)
8 siblings, 1 reply; 184+ messages in thread
From: Akhil Goyal @ 2022-04-27 15:10 UTC (permalink / raw)
To: dev
Cc: thomas, david.marchand, hemant.agrawal, anoobj,
konstantin.ananyev, ciara.power, ferruh.yigit, andrew.rybchenko,
ndabilpuram, vattunuru, Akhil Goyal
Added unit test cases for IP reassembly of inline IPsec
inbound scenarios.
In these cases, known test vectors of fragments are first
processed for inline outbound processing and then received
back on loopback interface for inbound processing along with
IP reassembly of the corresponding decrypted packets.
The resulting plaintext reassembled packet is compared with the
original unfragmented packet.
In this patch, cases are added for 2/4/5 fragments for both
IPv4 and IPv6 packets. A few negative test cases are also added,
such as incomplete fragments, out-of-order fragments, and duplicate
fragments.
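The post-reassembly check walks the chained mbuf segments of the reassembled packet and compares them byte-by-byte against the original unfragmented vector (`compare_pkt_data()` in this patch). A minimal standalone model of that walk, using a plain linked segment struct in place of `rte_mbuf`; the names here are illustrative, not DPDK APIs:

```c
#include <stddef.h>
#include <string.h>

/* Simplified stand-in for a chained rte_mbuf. */
struct seg {
	const unsigned char *data;
	size_t data_len;
	struct seg *next;
};

/* Compare a segment chain against a contiguous reference buffer.
 * Returns 0 on a full match, -1 on a data mismatch or if the chain
 * holds fewer than tot_len bytes (data missing after reassembly).
 */
static int
compare_seg_data(const struct seg *m, const unsigned char *ref,
		 size_t tot_len)
{
	size_t matched = 0;

	while (m != NULL && tot_len > 0) {
		size_t len = tot_len < m->data_len ? tot_len : m->data_len;

		if (memcmp(m->data, ref + matched, len) != 0)
			return -1;	/* data mismatch */
		matched += len;
		tot_len -= len;
		m = m->next;
	}
	return tot_len == 0 ? 0 : -1;	/* leftover tot_len: chain too short */
}
```

Both failure modes the patch reports (data mismatch, and "Data Missing" when the chain ends early) fall out of this one loop plus the final length check.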
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
---
app/test/test_security_inline_proto.c | 421 ++++++++++-
app/test/test_security_inline_proto_vectors.h | 684 ++++++++++++++++++
2 files changed, 1104 insertions(+), 1 deletion(-)
diff --git a/app/test/test_security_inline_proto.c b/app/test/test_security_inline_proto.c
index ea36d1188c..46636af072 100644
--- a/app/test/test_security_inline_proto.c
+++ b/app/test/test_security_inline_proto.c
@@ -41,6 +41,9 @@ test_inline_ipsec(void)
#define MAX_TRAFFIC_BURST 2048
#define NB_MBUF 10240
+#define ENCAP_DECAP_BURST_SZ 33
+#define APP_REASS_TIMEOUT 10
+
extern struct ipsec_test_data pkt_aes_128_gcm;
extern struct ipsec_test_data pkt_aes_192_gcm;
extern struct ipsec_test_data pkt_aes_256_gcm;
@@ -94,6 +97,8 @@ uint16_t port_id;
static uint64_t link_mbps;
+static int ip_reassembly_dynfield_offset = -1;
+
static struct rte_flow *default_flow[RTE_MAX_ETHPORTS];
/* Create Inline IPsec session */
@@ -527,6 +532,347 @@ destroy_default_flow(uint16_t portid)
struct rte_mbuf **tx_pkts_burst;
struct rte_mbuf **rx_pkts_burst;
+static int
+compare_pkt_data(struct rte_mbuf *m, uint8_t *ref, unsigned int tot_len)
+{
+ unsigned int len;
+ unsigned int nb_segs = m->nb_segs;
+ unsigned int matched = 0;
+ struct rte_mbuf *save = m;
+
+ while (m) {
+ len = tot_len;
+ if (len > m->data_len)
+ len = m->data_len;
+ if (len != 0) {
+ if (memcmp(rte_pktmbuf_mtod(m, char *),
+ ref + matched, len)) {
+ printf("\n====Reassembly case failed: Data Mismatch");
+ rte_hexdump(stdout, "Reassembled",
+ rte_pktmbuf_mtod(m, char *),
+ len);
+ rte_hexdump(stdout, "reference",
+ ref + matched,
+ len);
+ return TEST_FAILED;
+ }
+ }
+ tot_len -= len;
+ matched += len;
+ m = m->next;
+ }
+
+ if (tot_len) {
+ printf("\n====Reassembly case failed: Data Missing %u",
+ tot_len);
+ printf("\n====nb_segs %u, tot_len %u", nb_segs, tot_len);
+ rte_pktmbuf_dump(stderr, save, -1);
+ return TEST_FAILED;
+ }
+ return TEST_SUCCESS;
+}
+
+static inline bool
+is_ip_reassembly_incomplete(struct rte_mbuf *mbuf)
+{
+ static uint64_t ip_reassembly_dynflag;
+ int ip_reassembly_dynflag_offset;
+
+ if (ip_reassembly_dynflag == 0) {
+ ip_reassembly_dynflag_offset = rte_mbuf_dynflag_lookup(
+ RTE_MBUF_DYNFLAG_IP_REASSEMBLY_INCOMPLETE_NAME, NULL);
+ if (ip_reassembly_dynflag_offset < 0)
+ return false;
+ ip_reassembly_dynflag = RTE_BIT64(ip_reassembly_dynflag_offset);
+ }
+
+ return (mbuf->ol_flags & ip_reassembly_dynflag) != 0;
+}
+
+static void
+free_mbuf(struct rte_mbuf *mbuf)
+{
+ rte_eth_ip_reassembly_dynfield_t dynfield;
+
+ if (!mbuf)
+ return;
+
+ if (!is_ip_reassembly_incomplete(mbuf)) {
+ rte_pktmbuf_free(mbuf);
+ } else {
+ if (ip_reassembly_dynfield_offset < 0)
+ return;
+
+ while (mbuf) {
+ dynfield = *RTE_MBUF_DYNFIELD(mbuf,
+ ip_reassembly_dynfield_offset,
+ rte_eth_ip_reassembly_dynfield_t *);
+ rte_pktmbuf_free(mbuf);
+ mbuf = dynfield.next_frag;
+ }
+ }
+}
+
+
+static int
+get_and_verify_incomplete_frags(struct rte_mbuf *mbuf,
+ struct reassembly_vector *vector)
+{
+ rte_eth_ip_reassembly_dynfield_t *dynfield[MAX_PKT_BURST];
+ int j = 0, ret;
+ /**
+ * IP reassembly offload is incomplete, and fragments are listed in
+ * dynfield which can be reassembled in SW.
+ */
+ printf("\nHW IP reassembly is incomplete; attempting SW reassembly,"
+ "\nmatching against original frags.");
+
+ if (ip_reassembly_dynfield_offset < 0)
+ return -1;
+
+ printf("\ncomparing frag: %d", j);
+ /* Skip Ethernet header comparison */
+ rte_pktmbuf_adj(mbuf, RTE_ETHER_HDR_LEN);
+ ret = compare_pkt_data(mbuf, vector->frags[j]->data,
+ vector->frags[j]->len);
+ if (ret)
+ return ret;
+ j++;
+ dynfield[j] = RTE_MBUF_DYNFIELD(mbuf, ip_reassembly_dynfield_offset,
+ rte_eth_ip_reassembly_dynfield_t *);
+ printf("\ncomparing frag: %d", j);
+ /* Skip Ethernet header comparison */
+ rte_pktmbuf_adj(dynfield[j]->next_frag, RTE_ETHER_HDR_LEN);
+ ret = compare_pkt_data(dynfield[j]->next_frag, vector->frags[j]->data,
+ vector->frags[j]->len);
+ if (ret)
+ return ret;
+
+ while ((dynfield[j]->nb_frags > 1) &&
+ is_ip_reassembly_incomplete(dynfield[j]->next_frag)) {
+ j++;
+ dynfield[j] = RTE_MBUF_DYNFIELD(dynfield[j-1]->next_frag,
+ ip_reassembly_dynfield_offset,
+ rte_eth_ip_reassembly_dynfield_t *);
+ printf("\ncomparing frag: %d", j);
+ /* Skip Ethernet header comparison */
+ rte_pktmbuf_adj(dynfield[j]->next_frag, RTE_ETHER_HDR_LEN);
+ ret = compare_pkt_data(dynfield[j]->next_frag,
+ vector->frags[j]->data, vector->frags[j]->len);
+ if (ret)
+ return ret;
+ }
+ return ret;
+}
+
+static int
+test_ipsec_with_reassembly(struct reassembly_vector *vector,
+ const struct ipsec_test_flags *flags)
+{
+ struct rte_security_session *out_ses[ENCAP_DECAP_BURST_SZ] = {0};
+ struct rte_security_session *in_ses[ENCAP_DECAP_BURST_SZ] = {0};
+ struct rte_eth_ip_reassembly_params reass_capa = {0};
+ struct rte_security_session_conf sess_conf_out = {0};
+ struct rte_security_session_conf sess_conf_in = {0};
+ unsigned int nb_tx, burst_sz, nb_sent = 0;
+ struct rte_crypto_sym_xform cipher_out = {0};
+ struct rte_crypto_sym_xform auth_out = {0};
+ struct rte_crypto_sym_xform aead_out = {0};
+ struct rte_crypto_sym_xform cipher_in = {0};
+ struct rte_crypto_sym_xform auth_in = {0};
+ struct rte_crypto_sym_xform aead_in = {0};
+ struct ipsec_test_data sa_data = {0};
+ struct rte_security_ctx *ctx;
+ unsigned int i, nb_rx = 0, j;
+ uint32_t ol_flags;
+ int ret = 0;
+
+ burst_sz = vector->burst ? ENCAP_DECAP_BURST_SZ : 1;
+ nb_tx = vector->nb_frags * burst_sz;
+
+ ret = rte_eth_dev_stop(port_id);
+ if (ret != 0) {
+ printf("rte_eth_dev_stop: err=%s, port=%u\n",
+ rte_strerror(-ret), port_id);
+ return ret;
+ }
+ rte_eth_ip_reassembly_capability_get(port_id, &reass_capa);
+ if (reass_capa.max_frags < vector->nb_frags)
+ return TEST_SKIPPED;
+ if (reass_capa.timeout_ms > APP_REASS_TIMEOUT) {
+ reass_capa.timeout_ms = APP_REASS_TIMEOUT;
+ rte_eth_ip_reassembly_conf_set(port_id, &reass_capa);
+ }
+
+ ret = rte_eth_dev_start(port_id);
+ if (ret < 0) {
+ printf("rte_eth_dev_start: err=%d, port=%d\n",
+ ret, port_id);
+ return ret;
+ }
+
+ memset(tx_pkts_burst, 0, sizeof(tx_pkts_burst[0]) * nb_tx);
+ memset(rx_pkts_burst, 0, sizeof(rx_pkts_burst[0]) * nb_tx);
+
+ for (i = 0; i < nb_tx; i += vector->nb_frags) {
+ for (j = 0; j < vector->nb_frags; j++) {
+ tx_pkts_burst[i+j] = init_packet(mbufpool,
+ vector->frags[j]->data,
+ vector->frags[j]->len);
+ if (tx_pkts_burst[i+j] == NULL) {
+ ret = -1;
+ printf("\n packet init failed\n");
+ goto out;
+ }
+ }
+ }
+
+ for (i = 0; i < burst_sz; i++) {
+ memcpy(&sa_data, vector->sa_data,
+ sizeof(struct ipsec_test_data));
+ /* Update SPI for every new SA */
+ sa_data.ipsec_xform.spi += i;
+ sa_data.ipsec_xform.direction =
+ RTE_SECURITY_IPSEC_SA_DIR_EGRESS;
+ if (sa_data.aead) {
+ sess_conf_out.crypto_xform = &aead_out;
+ } else {
+ sess_conf_out.crypto_xform = &cipher_out;
+ sess_conf_out.crypto_xform->next = &auth_out;
+ }
+
+ /* Create Inline IPsec outbound session. */
+ ret = create_inline_ipsec_session(&sa_data, port_id,
+ &out_ses[i], &ctx, &ol_flags, flags,
+ &sess_conf_out);
+ if (ret) {
+ printf("\nInline outbound session create failed\n");
+ goto out;
+ }
+ }
+
+ j = 0;
+ for (i = 0; i < nb_tx; i++) {
+ if (ol_flags & RTE_SECURITY_TX_OLOAD_NEED_MDATA)
+ rte_security_set_pkt_metadata(ctx,
+ out_ses[j], tx_pkts_burst[i], NULL);
+ tx_pkts_burst[i]->ol_flags |= RTE_MBUF_F_TX_SEC_OFFLOAD;
+
+ /* Move to next SA after nb_frags */
+ if ((i + 1) % vector->nb_frags == 0)
+ j++;
+ }
+
+ for (i = 0; i < burst_sz; i++) {
+ memcpy(&sa_data, vector->sa_data,
+ sizeof(struct ipsec_test_data));
+ /* Update SPI for every new SA */
+ sa_data.ipsec_xform.spi += i;
+ sa_data.ipsec_xform.direction =
+ RTE_SECURITY_IPSEC_SA_DIR_INGRESS;
+
+ if (sa_data.aead) {
+ sess_conf_in.crypto_xform = &aead_in;
+ } else {
+ sess_conf_in.crypto_xform = &auth_in;
+ sess_conf_in.crypto_xform->next = &cipher_in;
+ }
+ /* Create Inline IPsec inbound session. */
+ ret = create_inline_ipsec_session(&sa_data, port_id, &in_ses[i],
+ &ctx, &ol_flags, flags, &sess_conf_in);
+ if (ret) {
+ printf("\nInline inbound session create failed\n");
+ goto out;
+ }
+ }
+
+ /* Retrieve reassembly dynfield offset if available */
+ if (ip_reassembly_dynfield_offset < 0 && vector->nb_frags > 1)
+ ip_reassembly_dynfield_offset = rte_mbuf_dynfield_lookup(
+ RTE_MBUF_DYNFIELD_IP_REASSEMBLY_NAME, NULL);
+
+
+ create_default_flow(port_id);
+
+ nb_sent = rte_eth_tx_burst(port_id, 0, tx_pkts_burst, nb_tx);
+ if (nb_sent != nb_tx) {
+ ret = -1;
+ printf("\nFailed to tx %u pkts", nb_tx);
+ goto out;
+ }
+
+ rte_delay_ms(1);
+
+ /* Retry few times before giving up */
+ nb_rx = 0;
+ j = 0;
+ do {
+ nb_rx += rte_eth_rx_burst(port_id, 0, &rx_pkts_burst[nb_rx],
+ nb_tx - nb_rx);
+ j++;
+ if (nb_rx >= nb_tx)
+ break;
+ rte_delay_ms(1);
+ } while (j < 5 || !nb_rx);
+
+ /* Check for minimum number of Rx packets expected */
+ if ((vector->nb_frags == 1 && nb_rx != nb_tx) ||
+ (vector->nb_frags > 1 && nb_rx < burst_sz)) {
+ printf("\nreceived fewer Rx pkts (%u)\n", nb_rx);
+ ret = TEST_FAILED;
+ goto out;
+ }
+
+ for (i = 0; i < nb_rx; i++) {
+ if (vector->nb_frags > 1 &&
+ is_ip_reassembly_incomplete(rx_pkts_burst[i])) {
+ ret = get_and_verify_incomplete_frags(rx_pkts_burst[i],
+ vector);
+ if (ret != TEST_SUCCESS)
+ break;
+ continue;
+ }
+
+ if (rx_pkts_burst[i]->ol_flags &
+ RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED ||
+ !(rx_pkts_burst[i]->ol_flags & RTE_MBUF_F_RX_SEC_OFFLOAD)) {
+ printf("\nsecurity offload failed\n");
+ ret = TEST_FAILED;
+ break;
+ }
+
+ if (vector->full_pkt->len + RTE_ETHER_HDR_LEN !=
+ rx_pkts_burst[i]->pkt_len) {
+ printf("\nreassembled/decrypted packet length mismatch\n");
+ ret = TEST_FAILED;
+ break;
+ }
+ rte_pktmbuf_adj(rx_pkts_burst[i], RTE_ETHER_HDR_LEN);
+ ret = compare_pkt_data(rx_pkts_burst[i],
+ vector->full_pkt->data,
+ vector->full_pkt->len);
+ if (ret != TEST_SUCCESS)
+ break;
+ }
+
+out:
+ destroy_default_flow(port_id);
+
+ /* Clear session data. */
+ for (i = 0; i < burst_sz; i++) {
+ if (out_ses[i])
+ rte_security_session_destroy(ctx, out_ses[i]);
+ if (in_ses[i])
+ rte_security_session_destroy(ctx, in_ses[i]);
+ }
+
+ for (i = nb_sent; i < nb_tx; i++)
+ free_mbuf(tx_pkts_burst[i]);
+ for (i = 0; i < nb_rx; i++)
+ free_mbuf(rx_pkts_burst[i]);
+ return ret;
+}
+
static int
test_ipsec_inline_proto_process(struct ipsec_test_data *td,
struct ipsec_test_data *res_d,
@@ -774,6 +1120,7 @@ ut_setup_inline_ipsec(void)
static void
ut_teardown_inline_ipsec(void)
{
+ struct rte_eth_ip_reassembly_params reass_conf = {0};
uint16_t portid;
int ret;
@@ -783,6 +1130,9 @@ ut_teardown_inline_ipsec(void)
if (ret != 0)
printf("rte_eth_dev_stop: err=%s, port=%u\n",
rte_strerror(-ret), portid);
+
+ /* Clear reassembly configuration */
+ rte_eth_ip_reassembly_conf_set(portid, &reass_conf);
}
}
@@ -885,6 +1235,36 @@ inline_ipsec_testsuite_teardown(void)
}
}
+static int
+test_inline_ip_reassembly(const void *testdata)
+{
+ struct reassembly_vector reassembly_td = {0};
+ const struct reassembly_vector *td = testdata;
+ struct ip_reassembly_test_packet full_pkt;
+ struct ip_reassembly_test_packet frags[MAX_FRAGS];
+ struct ipsec_test_flags flags = {0};
+ int i = 0;
+
+ reassembly_td.sa_data = td->sa_data;
+ reassembly_td.nb_frags = td->nb_frags;
+ reassembly_td.burst = td->burst;
+
+ memcpy(&full_pkt, td->full_pkt,
+ sizeof(struct ip_reassembly_test_packet));
+ reassembly_td.full_pkt = &full_pkt;
+
+ test_vector_payload_populate(reassembly_td.full_pkt, true);
+ for (; i < reassembly_td.nb_frags; i++) {
+ memcpy(&frags[i], td->frags[i],
+ sizeof(struct ip_reassembly_test_packet));
+ reassembly_td.frags[i] = &frags[i];
+ test_vector_payload_populate(reassembly_td.frags[i],
+ (i == 0) ? true : false);
+ }
+
+ return test_ipsec_with_reassembly(&reassembly_td, &flags);
+}
+
static int
test_ipsec_inline_proto_known_vec(const void *test_data)
{
@@ -1031,7 +1411,46 @@ static struct unit_test_suite inline_ipsec_testsuite = {
ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
test_ipsec_inline_proto_display_list),
-
+ TEST_CASE_NAMED_WITH_DATA(
+ "IPv4 Reassembly with 2 fragments",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_inline_ip_reassembly, &ipv4_2frag_vector),
+ TEST_CASE_NAMED_WITH_DATA(
+ "IPv6 Reassembly with 2 fragments",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_inline_ip_reassembly, &ipv6_2frag_vector),
+ TEST_CASE_NAMED_WITH_DATA(
+ "IPv4 Reassembly with 4 fragments",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_inline_ip_reassembly, &ipv4_4frag_vector),
+ TEST_CASE_NAMED_WITH_DATA(
+ "IPv6 Reassembly with 4 fragments",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_inline_ip_reassembly, &ipv6_4frag_vector),
+ TEST_CASE_NAMED_WITH_DATA(
+ "IPv4 Reassembly with 5 fragments",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_inline_ip_reassembly, &ipv4_5frag_vector),
+ TEST_CASE_NAMED_WITH_DATA(
+ "IPv6 Reassembly with 5 fragments",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_inline_ip_reassembly, &ipv6_5frag_vector),
+ TEST_CASE_NAMED_WITH_DATA(
+ "IPv4 Reassembly with incomplete fragments",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_inline_ip_reassembly, &ipv4_incomplete_vector),
+ TEST_CASE_NAMED_WITH_DATA(
+ "IPv4 Reassembly with overlapping fragments",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_inline_ip_reassembly, &ipv4_overlap_vector),
+ TEST_CASE_NAMED_WITH_DATA(
+ "IPv4 Reassembly with out of order fragments",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_inline_ip_reassembly, &ipv4_out_of_order_vector),
+ TEST_CASE_NAMED_WITH_DATA(
+ "IPv4 Reassembly with burst of 4 fragments",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_inline_ip_reassembly, &ipv4_4frag_burst_vector),
TEST_CASES_END() /**< NULL terminate unit test array */
},
diff --git a/app/test/test_security_inline_proto_vectors.h b/app/test/test_security_inline_proto_vectors.h
index d1074da36a..c18965d80f 100644
--- a/app/test/test_security_inline_proto_vectors.h
+++ b/app/test/test_security_inline_proto_vectors.h
@@ -17,4 +17,688 @@ uint8_t dummy_ipv6_eth_hdr[] = {
0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0x86, 0xdd,
};
+#define MAX_FRAG_LEN 1500
+#define MAX_FRAGS 6
+#define MAX_PKT_LEN (MAX_FRAG_LEN * MAX_FRAGS)
+
+struct ip_reassembly_test_packet {
+ uint32_t len;
+ uint32_t l4_offset;
+ uint8_t data[MAX_PKT_LEN];
+};
+
+struct reassembly_vector {
+ /* input/output text in struct ipsec_test_data are not used */
+ struct ipsec_test_data *sa_data;
+ struct ip_reassembly_test_packet *full_pkt;
+ struct ip_reassembly_test_packet *frags[MAX_FRAGS];
+ uint16_t nb_frags;
+ bool burst;
+};
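A quick consistency check on these vectors (an illustrative standalone sketch, not part of the patch; the struct and function names below are invented for the example): the payload bytes carried by the fragments must exactly cover the payload of the full packet. Note that for IPv6 the fragments use `l4_offset = 48` while the full packet uses 40, because each fragment carries an 8-byte Fragment extension header after the 40-byte base header.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Simplified mirror of struct ip_reassembly_test_packet above:
 * only the fields needed for the length check (illustrative only). */
struct frag_desc {
	uint32_t len;        /* total length, including IP headers */
	uint32_t l4_offset;  /* where the L4 header/payload begins */
};

/* The per-fragment payload bytes (len - l4_offset) must add up to the
 * payload of the reassembled packet (full_len - full_l4_offset). */
static bool
frags_cover_payload(const struct frag_desc *frags, int nb_frags,
		    uint32_t full_len, uint32_t full_l4_offset)
{
	uint32_t sum = 0;
	int i;

	for (i = 0; i < nb_frags; i++)
		sum += frags[i].len - frags[i].l4_offset;

	return sum == full_len - full_l4_offset;
}
```

For example, in the IPv4 two-fragment case below (1420- and 100-byte fragments with 20-byte headers), the payloads sum to 1400 + 80 = 1480 bytes, matching the 1500-byte full packet minus its 20-byte header.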
+
+/* This file defines the following test vectors: */
+/* IPv6:
+ *
+ * 1) pkt_ipv6_udp_p1
+ * pkt_ipv6_udp_p1_f1
+ * pkt_ipv6_udp_p1_f2
+ *
+ * 2) pkt_ipv6_udp_p2
+ * pkt_ipv6_udp_p2_f1
+ * pkt_ipv6_udp_p2_f2
+ * pkt_ipv6_udp_p2_f3
+ * pkt_ipv6_udp_p2_f4
+ *
+ * 3) pkt_ipv6_udp_p3
+ * pkt_ipv6_udp_p3_f1
+ * pkt_ipv6_udp_p3_f2
+ * pkt_ipv6_udp_p3_f3
+ * pkt_ipv6_udp_p3_f4
+ * pkt_ipv6_udp_p3_f5
+ */
+
+/* IPv4:
+ *
+ * 1) pkt_ipv4_udp_p1
+ * pkt_ipv4_udp_p1_f1
+ * pkt_ipv4_udp_p1_f2
+ *
+ * 2) pkt_ipv4_udp_p2
+ * pkt_ipv4_udp_p2_f1
+ * pkt_ipv4_udp_p2_f2
+ * pkt_ipv4_udp_p2_f3
+ * pkt_ipv4_udp_p2_f4
+ *
+ * 3) pkt_ipv4_udp_p3
+ * pkt_ipv4_udp_p3_f1
+ * pkt_ipv4_udp_p3_f2
+ * pkt_ipv4_udp_p3_f3
+ * pkt_ipv4_udp_p3_f4
+ * pkt_ipv4_udp_p3_f5
+ */
+
+struct ip_reassembly_test_packet pkt_ipv6_udp_p1 = {
+ .len = 1500,
+ .l4_offset = 40,
+ .data = {
+ /* IP */
+ 0x60, 0x00, 0x00, 0x00, 0x05, 0xb4, 0x2C, 0x40,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x0d, 0x00, 0x00, 0x02,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x02, 0x00, 0x00, 0x02,
+
+ /* UDP */
+ 0x08, 0x00, 0x27, 0x10, 0x05, 0xb4, 0x2b, 0xe8,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv6_udp_p1_f1 = {
+ .len = 1384,
+ .l4_offset = 48,
+ .data = {
+ /* IP */
+ 0x60, 0x00, 0x00, 0x00, 0x05, 0x40, 0x2c, 0x40,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x0d, 0x00, 0x00, 0x02,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x02, 0x00, 0x00, 0x02,
+ 0x11, 0x00, 0x00, 0x01, 0x5c, 0x92, 0xac, 0xf1,
+
+ /* UDP */
+ 0x08, 0x00, 0x27, 0x10, 0x05, 0xb4, 0x2b, 0xe8,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv6_udp_p1_f2 = {
+ .len = 172,
+ .l4_offset = 48,
+ .data = {
+ /* IP */
+ 0x60, 0x00, 0x00, 0x00, 0x00, 0x84, 0x2c, 0x40,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x0d, 0x00, 0x00, 0x02,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x02, 0x00, 0x00, 0x02,
+ 0x11, 0x00, 0x05, 0x38, 0x5c, 0x92, 0xac, 0xf1,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv6_udp_p2 = {
+ .len = 4482,
+ .l4_offset = 40,
+ .data = {
+ /* IP */
+ 0x60, 0x00, 0x00, 0x00, 0x11, 0x5a, 0x2c, 0x40,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x0d, 0x00, 0x00, 0x02,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x02, 0x00, 0x00, 0x02,
+
+ /* UDP */
+ 0x08, 0x00, 0x27, 0x10, 0x11, 0x5a, 0x8a, 0x11,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv6_udp_p2_f1 = {
+ .len = 1384,
+ .l4_offset = 48,
+ .data = {
+ /* IP */
+ 0x60, 0x00, 0x00, 0x00, 0x05, 0x40, 0x2c, 0x40,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x0d, 0x00, 0x00, 0x02,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x02, 0x00, 0x00, 0x02,
+ 0x11, 0x00, 0x00, 0x01, 0x64, 0x6c, 0x68, 0x9f,
+
+ /* UDP */
+ 0x08, 0x00, 0x27, 0x10, 0x11, 0x5a, 0x8a, 0x11,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv6_udp_p2_f2 = {
+ .len = 1384,
+ .l4_offset = 48,
+ .data = {
+ /* IP */
+ 0x60, 0x00, 0x00, 0x00, 0x05, 0x40, 0x2c, 0x40,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x0d, 0x00, 0x00, 0x02,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x02, 0x00, 0x00, 0x02,
+ 0x11, 0x00, 0x05, 0x39, 0x64, 0x6c, 0x68, 0x9f,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv6_udp_p2_f3 = {
+ .len = 1384,
+ .l4_offset = 48,
+ .data = {
+ /* IP */
+ 0x60, 0x00, 0x00, 0x00, 0x05, 0x40, 0x2c, 0x40,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x0d, 0x00, 0x00, 0x02,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x02, 0x00, 0x00, 0x02,
+ 0x11, 0x00, 0x0a, 0x71, 0x64, 0x6c, 0x68, 0x9f,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv6_udp_p2_f4 = {
+ .len = 482,
+ .l4_offset = 48,
+ .data = {
+ /* IP */
+ 0x60, 0x00, 0x00, 0x00, 0x01, 0xba, 0x2c, 0x40,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x0d, 0x00, 0x00, 0x02,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x02, 0x00, 0x00, 0x02,
+ 0x11, 0x00, 0x0f, 0xa8, 0x64, 0x6c, 0x68, 0x9f,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv6_udp_p3 = {
+ .len = 5782,
+ .l4_offset = 40,
+ .data = {
+ /* IP */
+ 0x60, 0x00, 0x00, 0x00, 0x16, 0x6e, 0x2c, 0x40,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x0d, 0x00, 0x00, 0x02,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x02, 0x00, 0x00, 0x02,
+
+ /* UDP */
+ 0x08, 0x00, 0x27, 0x10, 0x16, 0x6e, 0x2f, 0x99,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv6_udp_p3_f1 = {
+ .len = 1384,
+ .l4_offset = 48,
+ .data = {
+ /* IP */
+ 0x60, 0x00, 0x00, 0x00, 0x05, 0x40, 0x2c, 0x40,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x0d, 0x00, 0x00, 0x02,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x02, 0x00, 0x00, 0x02,
+ 0x11, 0x00, 0x00, 0x01, 0x65, 0xcf, 0x5a, 0xae,
+
+ /* UDP */
+ 0x08, 0x00, 0x27, 0x10, 0x16, 0x6e, 0x2f, 0x99,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv6_udp_p3_f2 = {
+ .len = 1384,
+ .l4_offset = 48,
+ .data = {
+ /* IP */
+ 0x60, 0x00, 0x00, 0x00, 0x05, 0x40, 0x2c, 0x40,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x0d, 0x00, 0x00, 0x02,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x02, 0x00, 0x00, 0x02,
+ 0x11, 0x00, 0x05, 0x39, 0x65, 0xcf, 0x5a, 0xae,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv6_udp_p3_f3 = {
+ .len = 1384,
+ .l4_offset = 48,
+ .data = {
+ /* IP */
+ 0x60, 0x00, 0x00, 0x00, 0x05, 0x40, 0x2c, 0x40,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x0d, 0x00, 0x00, 0x02,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x02, 0x00, 0x00, 0x02,
+ 0x11, 0x00, 0x0a, 0x71, 0x65, 0xcf, 0x5a, 0xae,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv6_udp_p3_f4 = {
+ .len = 1384,
+ .l4_offset = 48,
+ .data = {
+ /* IP */
+ 0x60, 0x00, 0x00, 0x00, 0x05, 0x40, 0x2c, 0x40,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x0d, 0x00, 0x00, 0x02,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x02, 0x00, 0x00, 0x02,
+ 0x11, 0x00, 0x0f, 0xa9, 0x65, 0xcf, 0x5a, 0xae,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv6_udp_p3_f5 = {
+ .len = 446,
+ .l4_offset = 48,
+ .data = {
+ /* IP */
+ 0x60, 0x00, 0x00, 0x00, 0x01, 0x96, 0x2c, 0x40,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x0d, 0x00, 0x00, 0x02,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x02, 0x00, 0x00, 0x02,
+ 0x11, 0x00, 0x14, 0xe0, 0x65, 0xcf, 0x5a, 0xae,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv4_udp_p1 = {
+ .len = 1500,
+ .l4_offset = 20,
+ .data = {
+ /* IP */
+ 0x45, 0x00, 0x05, 0xdc, 0x00, 0x01, 0x00, 0x00,
+ 0x40, 0x11, 0x66, 0x0d, 0x0d, 0x00, 0x00, 0x02,
+ 0x02, 0x00, 0x00, 0x02,
+
+ /* UDP */
+ 0x08, 0x00, 0x27, 0x10, 0x05, 0xc8, 0xb8, 0x4c,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv4_udp_p1_f1 = {
+ .len = 1420,
+ .l4_offset = 20,
+ .data = {
+ /* IP */
+ 0x45, 0x00, 0x05, 0x8c, 0x00, 0x01, 0x20, 0x00,
+ 0x40, 0x11, 0x46, 0x5d, 0x0d, 0x00, 0x00, 0x02,
+ 0x02, 0x00, 0x00, 0x02,
+
+ /* UDP */
+ 0x08, 0x00, 0x27, 0x10, 0x05, 0xc8, 0xb8, 0x4c,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv4_udp_p1_f2 = {
+ .len = 100,
+ .l4_offset = 20,
+ .data = {
+ /* IP */
+ 0x45, 0x00, 0x00, 0x64, 0x00, 0x01, 0x00, 0xaf,
+ 0x40, 0x11, 0x6a, 0xd6, 0x0d, 0x00, 0x00, 0x02,
+ 0x02, 0x00, 0x00, 0x02,
+ },
+};
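The two flags/fragment-offset bytes in the IPv4 headers above decode as follows (a hedged standalone sketch; the macro and function names are illustrative, not DPDK API): `0x20 0x00` in the first fragment means MF (More Fragments) set with offset 0, while `0x00 0xaf` in the second fragment means offset 0xaf × 8 = 1400 bytes, exactly the 1400 payload bytes carried by the first fragment (1420 minus the 20-byte header).

```c
#include <assert.h>
#include <stdint.h>

#define IPV4_FRAG_MF_FLAG     0x2000 /* More Fragments */
#define IPV4_FRAG_OFFSET_MASK 0x1fff

/* Byte offset of this fragment's payload within the original datagram;
 * the 13-bit offset field counts 8-byte units. */
static uint32_t
frag_byte_offset(uint16_t frag_field)
{
	return (uint32_t)(frag_field & IPV4_FRAG_OFFSET_MASK) * 8;
}

static int
more_fragments(uint16_t frag_field)
{
	return (frag_field & IPV4_FRAG_MF_FLAG) != 0;
}
```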
+
+struct ip_reassembly_test_packet pkt_ipv4_udp_p2 = {
+ .len = 4482,
+ .l4_offset = 20,
+ .data = {
+ /* IP */
+ 0x45, 0x00, 0x11, 0x82, 0x00, 0x02, 0x00, 0x00,
+ 0x40, 0x11, 0x5a, 0x66, 0x0d, 0x00, 0x00, 0x02,
+ 0x02, 0x00, 0x00, 0x02,
+
+ /* UDP */
+ 0x08, 0x00, 0x27, 0x10, 0x11, 0x6e, 0x16, 0x76,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv4_udp_p2_f1 = {
+ .len = 1420,
+ .l4_offset = 20,
+ .data = {
+ /* IP */
+ 0x45, 0x00, 0x05, 0x8c, 0x00, 0x02, 0x20, 0x00,
+ 0x40, 0x11, 0x46, 0x5c, 0x0d, 0x00, 0x00, 0x02,
+ 0x02, 0x00, 0x00, 0x02,
+
+ /* UDP */
+ 0x08, 0x00, 0x27, 0x10, 0x11, 0x6e, 0x16, 0x76,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv4_udp_p2_f2 = {
+ .len = 1420,
+ .l4_offset = 20,
+ .data = {
+ /* IP */
+ 0x45, 0x00, 0x05, 0x8c, 0x00, 0x02, 0x20, 0xaf,
+ 0x40, 0x11, 0x45, 0xad, 0x0d, 0x00, 0x00, 0x02,
+ 0x02, 0x00, 0x00, 0x02,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv4_udp_p2_f3 = {
+ .len = 1420,
+ .l4_offset = 20,
+ .data = {
+ /* IP */
+ 0x45, 0x00, 0x05, 0x8c, 0x00, 0x02, 0x21, 0x5e,
+ 0x40, 0x11, 0x44, 0xfe, 0x0d, 0x00, 0x00, 0x02,
+ 0x02, 0x00, 0x00, 0x02,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv4_udp_p2_f4 = {
+ .len = 282,
+ .l4_offset = 20,
+ .data = {
+ /* IP */
+ 0x45, 0x00, 0x01, 0x1a, 0x00, 0x02, 0x02, 0x0d,
+ 0x40, 0x11, 0x68, 0xc1, 0x0d, 0x00, 0x00, 0x02,
+ 0x02, 0x00, 0x00, 0x02,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv4_udp_p3 = {
+ .len = 5782,
+ .l4_offset = 20,
+ .data = {
+ /* IP */
+ 0x45, 0x00, 0x16, 0x96, 0x00, 0x03, 0x00, 0x00,
+ 0x40, 0x11, 0x55, 0x51, 0x0d, 0x00, 0x00, 0x02,
+ 0x02, 0x00, 0x00, 0x02,
+
+ /* UDP */
+ 0x08, 0x00, 0x27, 0x10, 0x16, 0x82, 0xbb, 0xfd,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv4_udp_p3_f1 = {
+ .len = 1420,
+ .l4_offset = 20,
+ .data = {
+ /* IP */
+ 0x45, 0x00, 0x05, 0x8c, 0x00, 0x03, 0x20, 0x00,
+ 0x40, 0x11, 0x46, 0x5b, 0x0d, 0x00, 0x00, 0x02,
+ 0x02, 0x00, 0x00, 0x02,
+
+ /* UDP */
+ 0x08, 0x00, 0x27, 0x10, 0x16, 0x82, 0xbb, 0xfd,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv4_udp_p3_f2 = {
+ .len = 1420,
+ .l4_offset = 20,
+ .data = {
+ /* IP */
+ 0x45, 0x00, 0x05, 0x8c, 0x00, 0x03, 0x20, 0xaf,
+ 0x40, 0x11, 0x45, 0xac, 0x0d, 0x00, 0x00, 0x02,
+ 0x02, 0x00, 0x00, 0x02,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv4_udp_p3_f3 = {
+ .len = 1420,
+ .l4_offset = 20,
+ .data = {
+ /* IP */
+ 0x45, 0x00, 0x05, 0x8c, 0x00, 0x03, 0x21, 0x5e,
+ 0x40, 0x11, 0x44, 0xfd, 0x0d, 0x00, 0x00, 0x02,
+ 0x02, 0x00, 0x00, 0x02,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv4_udp_p3_f4 = {
+ .len = 1420,
+ .l4_offset = 20,
+ .data = {
+ /* IP */
+ 0x45, 0x00, 0x05, 0x8c, 0x00, 0x03, 0x22, 0x0d,
+ 0x40, 0x11, 0x44, 0x4e, 0x0d, 0x00, 0x00, 0x02,
+ 0x02, 0x00, 0x00, 0x02,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv4_udp_p3_f5 = {
+ .len = 182,
+ .l4_offset = 20,
+ .data = {
+ /* IP */
+ 0x45, 0x00, 0x00, 0xb6, 0x00, 0x03, 0x02, 0xbc,
+ 0x40, 0x11, 0x68, 0x75, 0x0d, 0x00, 0x00, 0x02,
+ 0x02, 0x00, 0x00, 0x02,
+ },
+};
+
+static inline void
+test_vector_payload_populate(struct ip_reassembly_test_packet *pkt,
+ bool first_frag)
+{
+ uint32_t i = pkt->l4_offset;
+
+ /*
+ * For non-fragmented packets and the first fragment, skip the
+ * 8-byte UDP header at l4_offset so it is not overwritten.
+ */
+ if (first_frag)
+ i += 8;
+
+ for (; i < pkt->len; i++)
+ pkt->data[i] = 0x58;
+}
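The offset handling in test_vector_payload_populate() can be isolated as follows (a standalone sketch for illustration; names are invented): only an unfragmented packet or the first fragment carries the UDP header, so in that case the 0x58 payload pattern starts 8 bytes after l4_offset; later fragments carry raw payload immediately after the IP (and fragment extension) header.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define UDP_HDR_LEN 8

/* First byte index that receives the 0x58 payload pattern. */
static uint32_t
payload_fill_start(uint32_t l4_offset, bool first_frag)
{
	/* The UDP header is present only in the first fragment (or an
	 * unfragmented packet) and must not be overwritten. */
	return first_frag ? l4_offset + UDP_HDR_LEN : l4_offset;
}
```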
+
+struct ipsec_test_data conf_aes_128_gcm = {
+ .key = {
+ .data = {
+ 0xfe, 0xff, 0xe9, 0x92, 0x86, 0x65, 0x73, 0x1c,
+ 0x6d, 0x6a, 0x8f, 0x94, 0x67, 0x30, 0x83, 0x08
+ },
+ },
+
+ .salt = {
+ .data = {
+ 0xca, 0xfe, 0xba, 0xbe
+ },
+ .len = 4,
+ },
+
+ .iv = {
+ .data = {
+ 0xfa, 0xce, 0xdb, 0xad, 0xde, 0xca, 0xf8, 0x88
+ },
+ },
+
+ .ipsec_xform = {
+ .spi = 0xa5f8,
+ .salt = 0xbebafeca,
+ .options.esn = 0,
+ .options.udp_encap = 0,
+ .options.copy_dscp = 0,
+ .options.copy_flabel = 0,
+ .options.copy_df = 0,
+ .options.dec_ttl = 0,
+ .options.ecn = 0,
+ .options.stats = 0,
+ .options.tunnel_hdr_verify = 0,
+ .options.ip_csum_enable = 0,
+ .options.l4_csum_enable = 0,
+ .options.ip_reassembly_en = 1,
+ .direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS,
+ .proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
+ .mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
+ .tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4,
+ .replay_win_sz = 0,
+ },
+
+ .aead = true,
+
+ .xform = {
+ .aead = {
+ .next = NULL,
+ .type = RTE_CRYPTO_SYM_XFORM_AEAD,
+ .aead = {
+ .op = RTE_CRYPTO_AEAD_OP_ENCRYPT,
+ .algo = RTE_CRYPTO_AEAD_AES_GCM,
+ .key.length = 16,
+ .iv.length = 12,
+ .iv.offset = 0,
+ .digest_length = 16,
+ .aad_length = 12,
+ },
+ },
+ },
+};
+
+struct ipsec_test_data conf_aes_128_gcm_v6_tunnel = {
+ .key = {
+ .data = {
+ 0xfe, 0xff, 0xe9, 0x92, 0x86, 0x65, 0x73, 0x1c,
+ 0x6d, 0x6a, 0x8f, 0x94, 0x67, 0x30, 0x83, 0x08
+ },
+ },
+
+ .salt = {
+ .data = {
+ 0xca, 0xfe, 0xba, 0xbe
+ },
+ .len = 4,
+ },
+
+ .iv = {
+ .data = {
+ 0xfa, 0xce, 0xdb, 0xad, 0xde, 0xca, 0xf8, 0x88
+ },
+ },
+
+ .ipsec_xform = {
+ .spi = 0xa5f8,
+ .salt = 0xbebafeca,
+ .options.esn = 0,
+ .options.udp_encap = 0,
+ .options.copy_dscp = 0,
+ .options.copy_flabel = 0,
+ .options.copy_df = 0,
+ .options.dec_ttl = 0,
+ .options.ecn = 0,
+ .options.stats = 0,
+ .options.tunnel_hdr_verify = 0,
+ .options.ip_csum_enable = 0,
+ .options.l4_csum_enable = 0,
+ .options.ip_reassembly_en = 1,
+ .direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS,
+ .proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
+ .mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
+ .tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV6,
+ .replay_win_sz = 0,
+ },
+
+ .aead = true,
+
+ .xform = {
+ .aead = {
+ .next = NULL,
+ .type = RTE_CRYPTO_SYM_XFORM_AEAD,
+ .aead = {
+ .op = RTE_CRYPTO_AEAD_OP_ENCRYPT,
+ .algo = RTE_CRYPTO_AEAD_AES_GCM,
+ .key.length = 16,
+ .iv.length = 12,
+ .iv.offset = 0,
+ .digest_length = 16,
+ .aad_length = 12,
+ },
+ },
+ },
+};
+
+const struct reassembly_vector ipv4_2frag_vector = {
+ .sa_data = &conf_aes_128_gcm,
+ .full_pkt = &pkt_ipv4_udp_p1,
+ .frags[0] = &pkt_ipv4_udp_p1_f1,
+ .frags[1] = &pkt_ipv4_udp_p1_f2,
+ .nb_frags = 2,
+ .burst = false,
+};
+
+const struct reassembly_vector ipv6_2frag_vector = {
+ .sa_data = &conf_aes_128_gcm_v6_tunnel,
+ .full_pkt = &pkt_ipv6_udp_p1,
+ .frags[0] = &pkt_ipv6_udp_p1_f1,
+ .frags[1] = &pkt_ipv6_udp_p1_f2,
+ .nb_frags = 2,
+ .burst = false,
+};
+
+const struct reassembly_vector ipv4_4frag_vector = {
+ .sa_data = &conf_aes_128_gcm,
+ .full_pkt = &pkt_ipv4_udp_p2,
+ .frags[0] = &pkt_ipv4_udp_p2_f1,
+ .frags[1] = &pkt_ipv4_udp_p2_f2,
+ .frags[2] = &pkt_ipv4_udp_p2_f3,
+ .frags[3] = &pkt_ipv4_udp_p2_f4,
+ .nb_frags = 4,
+ .burst = false,
+};
+
+const struct reassembly_vector ipv6_4frag_vector = {
+ .sa_data = &conf_aes_128_gcm_v6_tunnel,
+ .full_pkt = &pkt_ipv6_udp_p2,
+ .frags[0] = &pkt_ipv6_udp_p2_f1,
+ .frags[1] = &pkt_ipv6_udp_p2_f2,
+ .frags[2] = &pkt_ipv6_udp_p2_f3,
+ .frags[3] = &pkt_ipv6_udp_p2_f4,
+ .nb_frags = 4,
+ .burst = false,
+};
+const struct reassembly_vector ipv4_5frag_vector = {
+ .sa_data = &conf_aes_128_gcm,
+ .full_pkt = &pkt_ipv4_udp_p3,
+ .frags[0] = &pkt_ipv4_udp_p3_f1,
+ .frags[1] = &pkt_ipv4_udp_p3_f2,
+ .frags[2] = &pkt_ipv4_udp_p3_f3,
+ .frags[3] = &pkt_ipv4_udp_p3_f4,
+ .frags[4] = &pkt_ipv4_udp_p3_f5,
+ .nb_frags = 5,
+ .burst = false,
+};
+const struct reassembly_vector ipv6_5frag_vector = {
+ .sa_data = &conf_aes_128_gcm_v6_tunnel,
+ .full_pkt = &pkt_ipv6_udp_p3,
+ .frags[0] = &pkt_ipv6_udp_p3_f1,
+ .frags[1] = &pkt_ipv6_udp_p3_f2,
+ .frags[2] = &pkt_ipv6_udp_p3_f3,
+ .frags[3] = &pkt_ipv6_udp_p3_f4,
+ .frags[4] = &pkt_ipv6_udp_p3_f5,
+ .nb_frags = 5,
+ .burst = false,
+};
+/* Negative test cases. */
+const struct reassembly_vector ipv4_incomplete_vector = {
+ .sa_data = &conf_aes_128_gcm,
+ .full_pkt = &pkt_ipv4_udp_p2,
+ .frags[0] = &pkt_ipv4_udp_p2_f1,
+ .frags[1] = &pkt_ipv4_udp_p2_f2,
+ .nb_frags = 2,
+ .burst = false,
+};
+const struct reassembly_vector ipv4_overlap_vector = {
+ .sa_data = &conf_aes_128_gcm,
+ .full_pkt = &pkt_ipv4_udp_p1,
+ .frags[0] = &pkt_ipv4_udp_p1_f1,
+ .frags[1] = &pkt_ipv4_udp_p1_f1, /* Overlap */
+ .frags[2] = &pkt_ipv4_udp_p1_f2,
+ .nb_frags = 3,
+ .burst = false,
+};
+const struct reassembly_vector ipv4_out_of_order_vector = {
+ .sa_data = &conf_aes_128_gcm,
+ .full_pkt = &pkt_ipv4_udp_p2,
+ .frags[0] = &pkt_ipv4_udp_p2_f1,
+ .frags[1] = &pkt_ipv4_udp_p2_f3,
+ .frags[2] = &pkt_ipv4_udp_p2_f4,
+ .frags[3] = &pkt_ipv4_udp_p2_f2, /* out of order */
+ .nb_frags = 4,
+ .burst = false,
+};
+const struct reassembly_vector ipv4_4frag_burst_vector = {
+ .sa_data = &conf_aes_128_gcm,
+ .full_pkt = &pkt_ipv4_udp_p2,
+ .frags[0] = &pkt_ipv4_udp_p2_f1,
+ .frags[1] = &pkt_ipv4_udp_p2_f2,
+ .frags[2] = &pkt_ipv4_udp_p2_f3,
+ .frags[3] = &pkt_ipv4_udp_p2_f4,
+ .nb_frags = 4,
+ .burst = true,
+};
+
#endif
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* [PATCH v5 5/7] test/security: add more inline IPsec functional cases
2022-04-27 15:10 ` [PATCH v5 0/7] " Akhil Goyal
` (3 preceding siblings ...)
2022-04-27 15:10 ` [PATCH v5 4/7] test/security: add inline IPsec reassembly cases Akhil Goyal
@ 2022-04-27 15:10 ` Akhil Goyal
2022-04-27 15:46 ` Zhang, Roy Fan
2022-04-27 15:10 ` [PATCH v5 6/7] test/security: add ESN and anti-replay cases for inline Akhil Goyal
` (3 subsequent siblings)
8 siblings, 1 reply; 184+ messages in thread
From: Akhil Goyal @ 2022-04-27 15:10 UTC (permalink / raw)
To: dev
Cc: thomas, david.marchand, hemant.agrawal, anoobj,
konstantin.ananyev, ciara.power, ferruh.yigit, andrew.rybchenko,
ndabilpuram, vattunuru, Akhil Goyal
Add more inline IPsec functional verification cases.
These cases do not have known test vectors; instead they
are verified with an encap + decap (outbound followed by
inbound) run for all supported algorithm combinations.
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
---
app/test/test_security_inline_proto.c | 517 ++++++++++++++++++++++++++
1 file changed, 517 insertions(+)
diff --git a/app/test/test_security_inline_proto.c b/app/test/test_security_inline_proto.c
index 46636af072..055b753634 100644
--- a/app/test/test_security_inline_proto.c
+++ b/app/test/test_security_inline_proto.c
@@ -1314,6 +1314,394 @@ test_ipsec_inline_proto_display_list(const void *data __rte_unused)
return test_ipsec_inline_proto_all(&flags);
}
+static int
+test_ipsec_inline_proto_udp_encap(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.udp_encap = true;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_udp_ports_verify(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.udp_encap = true;
+ flags.udp_ports_verify = true;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_err_icv_corrupt(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.icv_corrupt = true;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_tunnel_dst_addr_verify(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.tunnel_hdr_verify = RTE_SECURITY_IPSEC_TUNNEL_VERIFY_DST_ADDR;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_tunnel_src_dst_addr_verify(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.tunnel_hdr_verify = RTE_SECURITY_IPSEC_TUNNEL_VERIFY_SRC_DST_ADDR;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_inner_ip_csum(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.ip_csum = true;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_inner_l4_csum(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.l4_csum = true;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_tunnel_v4_in_v4(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.ipv6 = false;
+ flags.tunnel_ipv6 = false;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_tunnel_v6_in_v6(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.ipv6 = true;
+ flags.tunnel_ipv6 = true;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_tunnel_v4_in_v6(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.ipv6 = false;
+ flags.tunnel_ipv6 = true;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_tunnel_v6_in_v4(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.ipv6 = true;
+ flags.tunnel_ipv6 = false;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_transport_v4(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.ipv6 = false;
+ flags.transport = true;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_transport_l4_csum(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags = {
+ .l4_csum = true,
+ .transport = true,
+ };
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_stats(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.stats_success = true;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_pkt_fragment(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.fragment = true;
+
+ return test_ipsec_inline_proto_all(&flags);
+
+}
+
+static int
+test_ipsec_inline_proto_copy_df_inner_0(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.df = TEST_IPSEC_COPY_DF_INNER_0;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_copy_df_inner_1(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.df = TEST_IPSEC_COPY_DF_INNER_1;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_set_df_0_inner_1(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.df = TEST_IPSEC_SET_DF_0_INNER_1;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_set_df_1_inner_0(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.df = TEST_IPSEC_SET_DF_1_INNER_0;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_ipv4_copy_dscp_inner_0(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.dscp = TEST_IPSEC_COPY_DSCP_INNER_0;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_ipv4_copy_dscp_inner_1(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.dscp = TEST_IPSEC_COPY_DSCP_INNER_1;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_ipv4_set_dscp_0_inner_1(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.dscp = TEST_IPSEC_SET_DSCP_0_INNER_1;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_ipv4_set_dscp_1_inner_0(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.dscp = TEST_IPSEC_SET_DSCP_1_INNER_0;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_ipv6_copy_dscp_inner_0(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.ipv6 = true;
+ flags.tunnel_ipv6 = true;
+ flags.dscp = TEST_IPSEC_COPY_DSCP_INNER_0;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_ipv6_copy_dscp_inner_1(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.ipv6 = true;
+ flags.tunnel_ipv6 = true;
+ flags.dscp = TEST_IPSEC_COPY_DSCP_INNER_1;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_ipv6_set_dscp_0_inner_1(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.ipv6 = true;
+ flags.tunnel_ipv6 = true;
+ flags.dscp = TEST_IPSEC_SET_DSCP_0_INNER_1;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_ipv6_set_dscp_1_inner_0(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.ipv6 = true;
+ flags.tunnel_ipv6 = true;
+ flags.dscp = TEST_IPSEC_SET_DSCP_1_INNER_0;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_ipv4_ttl_decrement(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags = {
+ .dec_ttl_or_hop_limit = true
+ };
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_ipv6_hop_limit_decrement(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags = {
+ .ipv6 = true,
+ .dec_ttl_or_hop_limit = true
+ };
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_iv_gen(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.iv_gen = true;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_known_vec_fragmented(const void *test_data)
+{
+ struct ipsec_test_data td_outb;
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+ flags.fragment = true;
+
+ memcpy(&td_outb, test_data, sizeof(td_outb));
+
+ /* Disable IV gen to be able to test with known vectors */
+ td_outb.ipsec_xform.options.iv_gen_disable = 1;
+
+ return test_ipsec_inline_proto_process(&td_outb, NULL, 1, false,
+ &flags);
+}
static struct unit_test_suite inline_ipsec_testsuite = {
.suite_name = "Inline IPsec Ethernet Device Unit Test Suite",
.setup = inline_ipsec_testsuite_setup,
@@ -1360,6 +1748,13 @@ static struct unit_test_suite inline_ipsec_testsuite = {
ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
test_ipsec_inline_proto_known_vec,
&pkt_null_aes_xcbc),
+
+ TEST_CASE_NAMED_WITH_DATA(
+ "Outbound fragmented packet",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_known_vec_fragmented,
+ &pkt_aes_128_gcm_frag),
+
TEST_CASE_NAMED_WITH_DATA(
"Inbound known vector (ESP tunnel mode IPv4 AES-GCM 128)",
ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
@@ -1411,6 +1806,128 @@ static struct unit_test_suite inline_ipsec_testsuite = {
ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
test_ipsec_inline_proto_display_list),
+ TEST_CASE_NAMED_ST(
+ "UDP encapsulation",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_udp_encap),
+ TEST_CASE_NAMED_ST(
+ "UDP encapsulation ports verification test",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_udp_ports_verify),
+ TEST_CASE_NAMED_ST(
+ "Negative test: ICV corruption",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_err_icv_corrupt),
+ TEST_CASE_NAMED_ST(
+ "Tunnel dst addr verification",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_tunnel_dst_addr_verify),
+ TEST_CASE_NAMED_ST(
+ "Tunnel src and dst addr verification",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_tunnel_src_dst_addr_verify),
+ TEST_CASE_NAMED_ST(
+ "Inner IP checksum",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_inner_ip_csum),
+ TEST_CASE_NAMED_ST(
+ "Inner L4 checksum",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_inner_l4_csum),
+ TEST_CASE_NAMED_ST(
+ "Tunnel IPv4 in IPv4",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_tunnel_v4_in_v4),
+ TEST_CASE_NAMED_ST(
+ "Tunnel IPv6 in IPv6",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_tunnel_v6_in_v6),
+ TEST_CASE_NAMED_ST(
+ "Tunnel IPv4 in IPv6",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_tunnel_v4_in_v6),
+ TEST_CASE_NAMED_ST(
+ "Tunnel IPv6 in IPv4",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_tunnel_v6_in_v4),
+ TEST_CASE_NAMED_ST(
+ "Transport IPv4",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_transport_v4),
+ TEST_CASE_NAMED_ST(
+ "Transport l4 checksum",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_transport_l4_csum),
+ TEST_CASE_NAMED_ST(
+ "Statistics: success",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_stats),
+ TEST_CASE_NAMED_ST(
+ "Fragmented packet",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_pkt_fragment),
+ TEST_CASE_NAMED_ST(
+ "Tunnel header copy DF (inner 0)",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_copy_df_inner_0),
+ TEST_CASE_NAMED_ST(
+ "Tunnel header copy DF (inner 1)",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_copy_df_inner_1),
+ TEST_CASE_NAMED_ST(
+ "Tunnel header set DF 0 (inner 1)",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_set_df_0_inner_1),
+ TEST_CASE_NAMED_ST(
+ "Tunnel header set DF 1 (inner 0)",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_set_df_1_inner_0),
+ TEST_CASE_NAMED_ST(
+ "Tunnel header IPv4 copy DSCP (inner 0)",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_ipv4_copy_dscp_inner_0),
+ TEST_CASE_NAMED_ST(
+ "Tunnel header IPv4 copy DSCP (inner 1)",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_ipv4_copy_dscp_inner_1),
+ TEST_CASE_NAMED_ST(
+ "Tunnel header IPv4 set DSCP 0 (inner 1)",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_ipv4_set_dscp_0_inner_1),
+ TEST_CASE_NAMED_ST(
+ "Tunnel header IPv4 set DSCP 1 (inner 0)",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_ipv4_set_dscp_1_inner_0),
+ TEST_CASE_NAMED_ST(
+ "Tunnel header IPv6 copy DSCP (inner 0)",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_ipv6_copy_dscp_inner_0),
+ TEST_CASE_NAMED_ST(
+ "Tunnel header IPv6 copy DSCP (inner 1)",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_ipv6_copy_dscp_inner_1),
+ TEST_CASE_NAMED_ST(
+ "Tunnel header IPv6 set DSCP 0 (inner 1)",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_ipv6_set_dscp_0_inner_1),
+ TEST_CASE_NAMED_ST(
+ "Tunnel header IPv6 set DSCP 1 (inner 0)",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_ipv6_set_dscp_1_inner_0),
+ TEST_CASE_NAMED_ST(
+ "Tunnel header IPv4 decrement inner TTL",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_ipv4_ttl_decrement),
+ TEST_CASE_NAMED_ST(
+ "Tunnel header IPv6 decrement inner hop limit",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_ipv6_hop_limit_decrement),
+ TEST_CASE_NAMED_ST(
+ "IV generation",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_iv_gen),
+
+
TEST_CASE_NAMED_WITH_DATA(
"IPv4 Reassembly with 2 fragments",
ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
--
2.25.1
* [PATCH v5 6/7] test/security: add ESN and anti-replay cases for inline
2022-04-27 15:10 ` [PATCH v5 0/7] " Akhil Goyal
` (4 preceding siblings ...)
2022-04-27 15:10 ` [PATCH v5 5/7] test/security: add more inline IPsec functional cases Akhil Goyal
@ 2022-04-27 15:10 ` Akhil Goyal
2022-04-27 15:46 ` Zhang, Roy Fan
2022-04-28 5:25 ` Anoob Joseph
2022-04-27 15:10 ` [PATCH v5 7/7] test/security: add inline IPsec IPv6 flow label cases Akhil Goyal
` (2 subsequent siblings)
8 siblings, 2 replies; 184+ messages in thread
From: Akhil Goyal @ 2022-04-27 15:10 UTC (permalink / raw)
To: dev
Cc: thomas, david.marchand, hemant.agrawal, anoobj,
konstantin.ananyev, ciara.power, ferruh.yigit, andrew.rybchenko,
ndabilpuram, vattunuru, Akhil Goyal
Added cases to test anti-replay for inline IPsec processing
with and without extended sequence number support.
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
---
app/test/test_security_inline_proto.c | 308 ++++++++++++++++++++++++++
1 file changed, 308 insertions(+)
diff --git a/app/test/test_security_inline_proto.c b/app/test/test_security_inline_proto.c
index 055b753634..009405f403 100644
--- a/app/test/test_security_inline_proto.c
+++ b/app/test/test_security_inline_proto.c
@@ -1091,6 +1091,136 @@ test_ipsec_inline_proto_all(const struct ipsec_test_flags *flags)
return TEST_SKIPPED;
}
+static int
+test_ipsec_inline_proto_process_with_esn(struct ipsec_test_data td[],
+ struct ipsec_test_data res_d[],
+ int nb_pkts,
+ bool silent,
+ const struct ipsec_test_flags *flags)
+{
+ struct rte_security_session_conf sess_conf = {0};
+ struct ipsec_test_data *res_d_tmp = NULL;
+ struct rte_crypto_sym_xform cipher = {0};
+ struct rte_crypto_sym_xform auth = {0};
+ struct rte_crypto_sym_xform aead = {0};
+ struct rte_mbuf *rx_pkt = NULL;
+ struct rte_mbuf *tx_pkt = NULL;
+ int nb_rx, nb_sent;
+ struct rte_security_session *ses;
+ struct rte_security_ctx *ctx;
+ uint32_t ol_flags;
+ int i, ret;
+
+ if (td[0].aead) {
+ sess_conf.crypto_xform = &aead;
+ } else {
+ if (td[0].ipsec_xform.direction ==
+ RTE_SECURITY_IPSEC_SA_DIR_EGRESS) {
+ sess_conf.crypto_xform = &cipher;
+ sess_conf.crypto_xform->type = RTE_CRYPTO_SYM_XFORM_CIPHER;
+ sess_conf.crypto_xform->next = &auth;
+ sess_conf.crypto_xform->next->type = RTE_CRYPTO_SYM_XFORM_AUTH;
+ } else {
+ sess_conf.crypto_xform = &auth;
+ sess_conf.crypto_xform->type = RTE_CRYPTO_SYM_XFORM_AUTH;
+ sess_conf.crypto_xform->next = &cipher;
+ sess_conf.crypto_xform->next->type = RTE_CRYPTO_SYM_XFORM_CIPHER;
+ }
+ }
+
+ /* Create Inline IPsec session. */
+ ret = create_inline_ipsec_session(&td[0], port_id, &ses, &ctx,
+ &ol_flags, flags, &sess_conf);
+ if (ret)
+ return ret;
+
+ if (td[0].ipsec_xform.direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS)
+ create_default_flow(port_id);
+
+ for (i = 0; i < nb_pkts; i++) {
+ tx_pkt = init_packet(mbufpool, td[i].input_text.data,
+ td[i].input_text.len);
+ if (tx_pkt == NULL) {
+ ret = TEST_FAILED;
+ goto out;
+ }
+
+ if (test_ipsec_pkt_update(rte_pktmbuf_mtod_offset(tx_pkt,
+ uint8_t *, RTE_ETHER_HDR_LEN), flags)) {
+ ret = TEST_FAILED;
+ goto out;
+ }
+
+ if (td[i].ipsec_xform.direction ==
+ RTE_SECURITY_IPSEC_SA_DIR_EGRESS) {
+ if (flags->antireplay) {
+ sess_conf.ipsec.esn.value =
+ td[i].ipsec_xform.esn.value;
+ ret = rte_security_session_update(ctx, ses,
+ &sess_conf);
+ if (ret) {
+ printf("Could not update ESN in session\n");
+ rte_pktmbuf_free(tx_pkt);
+ goto out;
+ }
+ }
+ if (ol_flags & RTE_SECURITY_TX_OLOAD_NEED_MDATA)
+ rte_security_set_pkt_metadata(ctx, ses,
+ tx_pkt, NULL);
+ tx_pkt->ol_flags |= RTE_MBUF_F_TX_SEC_OFFLOAD;
+ }
+ /* Send packet to ethdev for inline IPsec processing. */
+ nb_sent = rte_eth_tx_burst(port_id, 0, &tx_pkt, 1);
+ if (nb_sent != 1) {
+ printf("Unable to TX packets\n");
+ rte_pktmbuf_free(tx_pkt);
+ ret = TEST_FAILED;
+ goto out;
+ }
+
+ rte_pause();
+
+ /* Receive back packet on loopback interface. */
+ do {
+ rte_delay_ms(1);
+ nb_rx = rte_eth_rx_burst(port_id, 0, &rx_pkt, 1);
+ } while (nb_rx == 0);
+
+ rte_pktmbuf_adj(rx_pkt, RTE_ETHER_HDR_LEN);
+
+ if (res_d != NULL)
+ res_d_tmp = &res_d[i];
+
+ ret = test_ipsec_post_process(rx_pkt, &td[i],
+ res_d_tmp, silent, flags);
+ if (ret != TEST_SUCCESS) {
+ rte_pktmbuf_free(rx_pkt);
+ goto out;
+ }
+
+ ret = test_ipsec_stats_verify(ctx, ses, flags,
+ td->ipsec_xform.direction);
+ if (ret != TEST_SUCCESS) {
+ rte_pktmbuf_free(rx_pkt);
+ goto out;
+ }
+
+ rte_pktmbuf_free(rx_pkt);
+ rx_pkt = NULL;
+ tx_pkt = NULL;
+ res_d_tmp = NULL;
+ }
+
+out:
+ if (td->ipsec_xform.direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS)
+ destroy_default_flow(port_id);
+
+ /* Destroy session so that other cases can create the session again */
+ rte_security_session_destroy(ctx, ses);
+ ses = NULL;
+
+ return ret;
+}
static int
ut_setup_inline_ipsec(void)
@@ -1702,6 +1832,153 @@ test_ipsec_inline_proto_known_vec_fragmented(const void *test_data)
return test_ipsec_inline_proto_process(&td_outb, NULL, 1, false,
&flags);
}
+
+static int
+test_ipsec_inline_pkt_replay(const void *test_data, const uint64_t esn[],
+ bool replayed_pkt[], uint32_t nb_pkts, bool esn_en,
+ uint64_t winsz)
+{
+ struct ipsec_test_data td_outb[IPSEC_TEST_PACKETS_MAX];
+ struct ipsec_test_data td_inb[IPSEC_TEST_PACKETS_MAX];
+ struct ipsec_test_flags flags;
+ uint32_t i, ret = 0;
+
+ memset(&flags, 0, sizeof(flags));
+ flags.antireplay = true;
+
+ for (i = 0; i < nb_pkts; i++) {
+ memcpy(&td_outb[i], test_data, sizeof(td_outb[0]));
+ td_outb[i].ipsec_xform.options.iv_gen_disable = 1;
+ td_outb[i].ipsec_xform.replay_win_sz = winsz;
+ td_outb[i].ipsec_xform.options.esn = esn_en;
+ }
+
+ for (i = 0; i < nb_pkts; i++)
+ td_outb[i].ipsec_xform.esn.value = esn[i];
+
+ ret = test_ipsec_inline_proto_process_with_esn(td_outb, td_inb,
+ nb_pkts, true, &flags);
+ if (ret != TEST_SUCCESS)
+ return ret;
+
+ test_ipsec_td_update(td_inb, td_outb, nb_pkts, &flags);
+
+ for (i = 0; i < nb_pkts; i++) {
+ td_inb[i].ipsec_xform.options.esn = esn_en;
+ /* Set antireplay flag for packets to be dropped */
+ td_inb[i].ar_packet = replayed_pkt[i];
+ }
+
+ ret = test_ipsec_inline_proto_process_with_esn(td_inb, NULL, nb_pkts,
+ true, &flags);
+
+ return ret;
+}
+
+static int
+test_ipsec_inline_proto_pkt_antireplay(const void *test_data, uint64_t winsz)
+{
+
+ uint32_t nb_pkts = 5;
+ bool replayed_pkt[5];
+ uint64_t esn[5];
+
+ /* 1. Advance the TOP of the window to WS * 2 */
+ esn[0] = winsz * 2;
+ /* 2. Test sequence number within the new window(WS + 1) */
+ esn[1] = winsz + 1;
+ /* 3. Test sequence number less than the window BOTTOM */
+ esn[2] = winsz;
+ /* 4. Test sequence number in the middle of the window */
+ esn[3] = winsz + (winsz / 2);
+ /* 5. Test replay of the packet in the middle of the window */
+ esn[4] = winsz + (winsz / 2);
+
+ replayed_pkt[0] = false;
+ replayed_pkt[1] = false;
+ replayed_pkt[2] = true;
+ replayed_pkt[3] = false;
+ replayed_pkt[4] = true;
+
+ return test_ipsec_inline_pkt_replay(test_data, esn, replayed_pkt,
+ nb_pkts, false, winsz);
+}
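The window movement these five vectors exercise (advance the top, accept inside the window, reject below the bottom, reject a duplicate) can be modelled with a small standalone sliding-window sketch. This is a simplified 64-entry illustration only, not the PMD's implementation; `struct ar_window` and `ar_check_update` are hypothetical names:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Simplified anti-replay window model; winsz must be <= 64 here
 * because a single 64-bit bitmap tracks received sequence numbers. */
struct ar_window {
	uint64_t top;    /* highest sequence number accepted so far */
	uint64_t bitmap; /* bit i set => (top - i) already received */
	uint64_t winsz;
};

/* Returns true if seq is accepted, false if it must be dropped. */
static bool ar_check_update(struct ar_window *w, uint64_t seq)
{
	if (seq > w->top) {
		/* Advance the window; slide the bitmap accordingly. */
		uint64_t shift = seq - w->top;
		w->bitmap = (shift >= 64) ? 0 : w->bitmap << shift;
		w->bitmap |= 1ULL; /* mark seq itself as seen */
		w->top = seq;
		return true;
	}
	uint64_t off = w->top - seq;
	if (off >= w->winsz)
		return false; /* below the window bottom */
	if (w->bitmap & (1ULL << off))
		return false; /* duplicate inside the window */
	w->bitmap |= (1ULL << off);
	return true;
}
```

With winsz = 64, replaying the pattern above (2*WS, WS+1, WS, middle, middle again) yields accept, accept, drop, accept, drop, matching `replayed_pkt[]`.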
+
+static int
+test_ipsec_inline_proto_pkt_antireplay1024(const void *test_data)
+{
+ return test_ipsec_inline_proto_pkt_antireplay(test_data, 1024);
+}
+
+static int
+test_ipsec_inline_proto_pkt_antireplay2048(const void *test_data)
+{
+ return test_ipsec_inline_proto_pkt_antireplay(test_data, 2048);
+}
+
+static int
+test_ipsec_inline_proto_pkt_antireplay4096(const void *test_data)
+{
+ return test_ipsec_inline_proto_pkt_antireplay(test_data, 4096);
+}
+
+static int
+test_ipsec_inline_proto_pkt_esn_antireplay(const void *test_data, uint64_t winsz)
+{
+
+ uint32_t nb_pkts = 7;
+ bool replayed_pkt[7];
+ uint64_t esn[7];
+
+ /* Set the initial sequence number */
+ esn[0] = (uint64_t)(0xFFFFFFFF - winsz);
+ /* 1. Advance the TOP of the window to (1<<32 + WS/2) */
+ esn[1] = (uint64_t)((1ULL << 32) + (winsz / 2));
+ /* 2. Test sequence number within new window (1<<32 - WS/2 + 1) */
+ esn[2] = (uint64_t)((1ULL << 32) - (winsz / 2) + 1);
+ /* 3. Test with sequence number within window (1<<32 - 1) */
+ esn[3] = (uint64_t)((1ULL << 32) - 1);
+ /* 4. Test with sequence number within window (1<<32) */
+ esn[4] = (uint64_t)(1ULL << 32);
+ /* 5. Test with duplicate sequence number within
+ * new window (1<<32 - 1)
+ */
+ esn[5] = (uint64_t)((1ULL << 32) - 1);
+ /* 6. Test with duplicate sequence number within new window (1<<32) */
+ esn[6] = (uint64_t)(1ULL << 32);
+
+ replayed_pkt[0] = false;
+ replayed_pkt[1] = false;
+ replayed_pkt[2] = false;
+ replayed_pkt[3] = false;
+ replayed_pkt[4] = false;
+ replayed_pkt[5] = true;
+ replayed_pkt[6] = true;
+
+ return test_ipsec_inline_pkt_replay(test_data, esn, replayed_pkt, nb_pkts,
+ true, winsz);
+}
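The 2^32-boundary vectors above depend on the receiver reconstructing the high 32 bits of the ESN from the 32-bit number on the wire. A sketch of the RFC 4303 Appendix A style estimation is below; this is an illustrative model under assumed names (`esn_estimate` is hypothetical), not the PMD's implementation:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Estimate the full 64-bit ESN for a received 32-bit sequence number.
 * 'top' is the highest ESN authenticated so far, 'seql' the low 32 bits
 * from the packet, 'w' the anti-replay window size.
 */
static uint64_t esn_estimate(uint64_t top, uint32_t seql, uint32_t w)
{
	uint32_t th = (uint32_t)(top >> 32);
	uint32_t tl = (uint32_t)top;
	uint32_t bl = tl - w + 1; /* window bottom, wraps mod 2^32 */
	uint32_t seqh;

	if (tl >= w - 1)
		/* Window lies within a single 2^32 subspace. */
		seqh = (seql >= bl) ? th : th + 1;
	else
		/* Window spans a 2^32 boundary; large seql values
		 * belong to the previous subspace. */
		seqh = (seql >= bl) ? th - 1 : th;

	return ((uint64_t)seqh << 32) | seql;
}
```

For example, with winsz 1024 and the top advanced to (1<<32) + WS/2 as in step 1 above, a received low half of 0xFFFFFFFF is placed just below the boundary, while 0 is placed just above it.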
+
+static int
+test_ipsec_inline_proto_pkt_esn_antireplay1024(const void *test_data)
+{
+ return test_ipsec_inline_proto_pkt_esn_antireplay(test_data, 1024);
+}
+
+static int
+test_ipsec_inline_proto_pkt_esn_antireplay2048(const void *test_data)
+{
+ return test_ipsec_inline_proto_pkt_esn_antireplay(test_data, 2048);
+}
+
+static int
+test_ipsec_inline_proto_pkt_esn_antireplay4096(const void *test_data)
+{
+ return test_ipsec_inline_proto_pkt_esn_antireplay(test_data, 4096);
+}
+
+
+
static struct unit_test_suite inline_ipsec_testsuite = {
.suite_name = "Inline IPsec Ethernet Device Unit Test Suite",
.setup = inline_ipsec_testsuite_setup,
@@ -1928,6 +2205,37 @@ static struct unit_test_suite inline_ipsec_testsuite = {
test_ipsec_inline_proto_iv_gen),
+ TEST_CASE_NAMED_WITH_DATA(
+ "Antireplay with window size 1024",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_pkt_antireplay1024,
+ &pkt_aes_128_gcm),
+ TEST_CASE_NAMED_WITH_DATA(
+ "Antireplay with window size 2048",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_pkt_antireplay2048,
+ &pkt_aes_128_gcm),
+ TEST_CASE_NAMED_WITH_DATA(
+ "Antireplay with window size 4096",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_pkt_antireplay4096,
+ &pkt_aes_128_gcm),
+ TEST_CASE_NAMED_WITH_DATA(
+ "ESN and Antireplay with window size 1024",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_pkt_esn_antireplay1024,
+ &pkt_aes_128_gcm),
+ TEST_CASE_NAMED_WITH_DATA(
+ "ESN and Antireplay with window size 2048",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_pkt_esn_antireplay2048,
+ &pkt_aes_128_gcm),
+ TEST_CASE_NAMED_WITH_DATA(
+ "ESN and Antireplay with window size 4096",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_pkt_esn_antireplay4096,
+ &pkt_aes_128_gcm),
+
TEST_CASE_NAMED_WITH_DATA(
"IPv4 Reassembly with 2 fragments",
ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
--
2.25.1

* [PATCH v5 7/7] test/security: add inline IPsec IPv6 flow label cases
2022-04-27 15:10 ` [PATCH v5 0/7] " Akhil Goyal
` (5 preceding siblings ...)
2022-04-27 15:10 ` [PATCH v5 6/7] test/security: add ESN and anti-replay cases for inline Akhil Goyal
@ 2022-04-27 15:10 ` Akhil Goyal
2022-04-27 15:46 ` Zhang, Roy Fan
2022-04-27 15:42 ` [PATCH v5 0/7] app/test: add inline IPsec and reassembly cases Zhang, Roy Fan
2022-05-13 7:31 ` [PATCH v6 " Akhil Goyal
8 siblings, 1 reply; 184+ messages in thread
From: Akhil Goyal @ 2022-04-27 15:10 UTC (permalink / raw)
To: dev
Cc: thomas, david.marchand, hemant.agrawal, anoobj,
konstantin.ananyev, ciara.power, ferruh.yigit, andrew.rybchenko,
ndabilpuram, vattunuru
From: Vamsi Attunuru <vattunuru@marvell.com>
Patch adds unit tests for IPv6 flow label set & copy
operations.
Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
---
app/test/test_cryptodev_security_ipsec.c | 35 ++++++++++-
app/test/test_cryptodev_security_ipsec.h | 10 +++
app/test/test_security_inline_proto.c | 79 ++++++++++++++++++++++++
3 files changed, 123 insertions(+), 1 deletion(-)
diff --git a/app/test/test_cryptodev_security_ipsec.c b/app/test/test_cryptodev_security_ipsec.c
index 14c6ba681f..408bd0bc82 100644
--- a/app/test/test_cryptodev_security_ipsec.c
+++ b/app/test/test_cryptodev_security_ipsec.c
@@ -495,6 +495,10 @@ test_ipsec_td_prepare(const struct crypto_param *param1,
flags->dscp == TEST_IPSEC_COPY_DSCP_INNER_1)
td->ipsec_xform.options.copy_dscp = 1;
+ if (flags->flabel == TEST_IPSEC_COPY_FLABEL_INNER_0 ||
+ flags->flabel == TEST_IPSEC_COPY_FLABEL_INNER_1)
+ td->ipsec_xform.options.copy_flabel = 1;
+
if (flags->dec_ttl_or_hop_limit)
td->ipsec_xform.options.dec_ttl = 1;
}
@@ -933,6 +937,7 @@ test_ipsec_iph6_hdr_validate(const struct rte_ipv6_hdr *iph6,
const struct ipsec_test_flags *flags)
{
uint32_t vtc_flow;
+ uint32_t flabel;
uint8_t dscp;
if (!is_valid_ipv6_pkt(iph6)) {
@@ -959,6 +964,23 @@ test_ipsec_iph6_hdr_validate(const struct rte_ipv6_hdr *iph6,
}
}
+ flabel = vtc_flow & RTE_IPV6_HDR_FL_MASK;
+
+ if (flags->flabel == TEST_IPSEC_COPY_FLABEL_INNER_1 ||
+ flags->flabel == TEST_IPSEC_SET_FLABEL_1_INNER_0) {
+ if (flabel != TEST_IPSEC_FLABEL_VAL) {
+ printf("FLABEL value is not matching [exp: %x, actual: %x]\n",
+ TEST_IPSEC_FLABEL_VAL, flabel);
+ return -1;
+ }
+ } else {
+ if (flabel != 0) {
+ printf("FLABEL value is set [exp: 0, actual: %x]\n",
+ flabel);
+ return -1;
+ }
+ }
+
return 0;
}
@@ -1159,7 +1181,11 @@ test_ipsec_pkt_update(uint8_t *pkt, const struct ipsec_test_flags *flags)
if (flags->dscp == TEST_IPSEC_COPY_DSCP_INNER_1 ||
flags->dscp == TEST_IPSEC_SET_DSCP_0_INNER_1 ||
flags->dscp == TEST_IPSEC_COPY_DSCP_INNER_0 ||
- flags->dscp == TEST_IPSEC_SET_DSCP_1_INNER_0) {
+ flags->dscp == TEST_IPSEC_SET_DSCP_1_INNER_0 ||
+ flags->flabel == TEST_IPSEC_COPY_FLABEL_INNER_1 ||
+ flags->flabel == TEST_IPSEC_SET_FLABEL_0_INNER_1 ||
+ flags->flabel == TEST_IPSEC_COPY_FLABEL_INNER_0 ||
+ flags->flabel == TEST_IPSEC_SET_FLABEL_1_INNER_0) {
if (is_ipv4(iph4)) {
uint8_t tos;
@@ -1187,6 +1213,13 @@ test_ipsec_pkt_update(uint8_t *pkt, const struct ipsec_test_flags *flags)
else
vtc_flow &= ~RTE_IPV6_HDR_DSCP_MASK;
+ if (flags->flabel == TEST_IPSEC_COPY_FLABEL_INNER_1 ||
+ flags->flabel == TEST_IPSEC_SET_FLABEL_0_INNER_1)
+ vtc_flow |= (RTE_IPV6_HDR_FL_MASK &
+ (TEST_IPSEC_FLABEL_VAL << RTE_IPV6_HDR_FL_SHIFT));
+ else
+ vtc_flow &= ~RTE_IPV6_HDR_FL_MASK;
+
iph6->vtc_flow = rte_cpu_to_be_32(vtc_flow);
}
}
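The hunk above packs the flow label into the host-order IPv6 `vtc_flow` word, whose layout is version (4 bits) | traffic class (8 bits) | flow label (20 bits). A standalone sketch of that packing, with mask and shift constants assumed to mirror DPDK's `RTE_IPV6_HDR_FL_MASK` (0x000FFFFF) and `RTE_IPV6_HDR_FL_SHIFT` (0):

```c
#include <assert.h>
#include <stdint.h>

/* Assumed values of DPDK's RTE_IPV6_HDR_FL_MASK / RTE_IPV6_HDR_FL_SHIFT. */
#define IPV6_FL_MASK  0x000FFFFFu
#define IPV6_FL_SHIFT 0

/* Overwrite only the flow-label bits, leaving version and TC untouched. */
static uint32_t ipv6_set_flabel(uint32_t vtc_flow, uint32_t flabel)
{
	return (vtc_flow & ~IPV6_FL_MASK) |
	       ((flabel << IPV6_FL_SHIFT) & IPV6_FL_MASK);
}

/* Extract the 20-bit flow label, as the validation hunk above does. */
static uint32_t ipv6_get_flabel(uint32_t vtc_flow)
{
	return (vtc_flow & IPV6_FL_MASK) >> IPV6_FL_SHIFT;
}
```

Setting TEST_IPSEC_FLABEL_VAL (0x1234) on a plain "version 6" word 0x60000000 gives 0x60001234, which is what the `test_ipsec_iph6_hdr_validate()` check expects to read back.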
diff --git a/app/test/test_cryptodev_security_ipsec.h b/app/test/test_cryptodev_security_ipsec.h
index 0d9b5b6e2e..744dd64a9e 100644
--- a/app/test/test_cryptodev_security_ipsec.h
+++ b/app/test/test_cryptodev_security_ipsec.h
@@ -73,6 +73,15 @@ enum dscp_flags {
TEST_IPSEC_SET_DSCP_1_INNER_0,
};
+#define TEST_IPSEC_FLABEL_VAL 0x1234
+
+enum flabel_flags {
+ TEST_IPSEC_COPY_FLABEL_INNER_0 = 1,
+ TEST_IPSEC_COPY_FLABEL_INNER_1,
+ TEST_IPSEC_SET_FLABEL_0_INNER_1,
+ TEST_IPSEC_SET_FLABEL_1_INNER_0,
+};
+
struct ipsec_test_flags {
bool display_alg;
bool sa_expiry_pkts_soft;
@@ -92,6 +101,7 @@ struct ipsec_test_flags {
bool antireplay;
enum df_flags df;
enum dscp_flags dscp;
+ enum flabel_flags flabel;
bool dec_ttl_or_hop_limit;
bool ah;
};
diff --git a/app/test/test_security_inline_proto.c b/app/test/test_security_inline_proto.c
index 009405f403..88ec1c0209 100644
--- a/app/test/test_security_inline_proto.c
+++ b/app/test/test_security_inline_proto.c
@@ -162,6 +162,13 @@ create_inline_ipsec_session(struct ipsec_test_data *sa, uint16_t portid,
sess_conf->ipsec.tunnel.ipv6.dscp =
TEST_IPSEC_DSCP_VAL;
+ if (flags->flabel == TEST_IPSEC_SET_FLABEL_0_INNER_1)
+ sess_conf->ipsec.tunnel.ipv6.flabel = 0;
+
+ if (flags->flabel == TEST_IPSEC_SET_FLABEL_1_INNER_0)
+ sess_conf->ipsec.tunnel.ipv6.flabel =
+ TEST_IPSEC_FLABEL_VAL;
+
memcpy(&sess_conf->ipsec.tunnel.ipv6.src_addr, &src_v6,
sizeof(src_v6));
memcpy(&sess_conf->ipsec.tunnel.ipv6.dst_addr, &dst_v6,
@@ -1782,6 +1789,62 @@ test_ipsec_inline_proto_ipv6_set_dscp_1_inner_0(const void *data __rte_unused)
return test_ipsec_inline_proto_all(&flags);
}
+static int
+test_ipsec_inline_proto_ipv6_copy_flabel_inner_0(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.ipv6 = true;
+ flags.tunnel_ipv6 = true;
+ flags.flabel = TEST_IPSEC_COPY_FLABEL_INNER_0;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_ipv6_copy_flabel_inner_1(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.ipv6 = true;
+ flags.tunnel_ipv6 = true;
+ flags.flabel = TEST_IPSEC_COPY_FLABEL_INNER_1;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_ipv6_set_flabel_0_inner_1(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.ipv6 = true;
+ flags.tunnel_ipv6 = true;
+ flags.flabel = TEST_IPSEC_SET_FLABEL_0_INNER_1;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_ipv6_set_flabel_1_inner_0(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.ipv6 = true;
+ flags.tunnel_ipv6 = true;
+ flags.flabel = TEST_IPSEC_SET_FLABEL_1_INNER_0;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
static int
test_ipsec_inline_proto_ipv4_ttl_decrement(const void *data __rte_unused)
{
@@ -2191,6 +2254,22 @@ static struct unit_test_suite inline_ipsec_testsuite = {
"Tunnel header IPv6 set DSCP 1 (inner 0)",
ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
test_ipsec_inline_proto_ipv6_set_dscp_1_inner_0),
+ TEST_CASE_NAMED_ST(
+ "Tunnel header IPv6 copy FLABEL (inner 0)",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_ipv6_copy_flabel_inner_0),
+ TEST_CASE_NAMED_ST(
+ "Tunnel header IPv6 copy FLABEL (inner 1)",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_ipv6_copy_flabel_inner_1),
+ TEST_CASE_NAMED_ST(
+ "Tunnel header IPv6 set FLABEL 0 (inner 1)",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_ipv6_set_flabel_0_inner_1),
+ TEST_CASE_NAMED_ST(
+ "Tunnel header IPv6 set FLABEL 1 (inner 0)",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_ipv6_set_flabel_1_inner_0),
TEST_CASE_NAMED_ST(
"Tunnel header IPv4 decrement inner TTL",
ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
--
2.25.1
* RE: [PATCH v5 0/7] app/test: add inline IPsec and reassembly cases
2022-04-27 15:10 ` [PATCH v5 0/7] " Akhil Goyal
` (6 preceding siblings ...)
2022-04-27 15:10 ` [PATCH v5 7/7] test/security: add inline IPsec IPv6 flow label cases Akhil Goyal
@ 2022-04-27 15:42 ` Zhang, Roy Fan
2022-05-13 7:31 ` [PATCH v6 " Akhil Goyal
8 siblings, 0 replies; 184+ messages in thread
From: Zhang, Roy Fan @ 2022-04-27 15:42 UTC (permalink / raw)
To: Akhil Goyal, dev
Cc: thomas, david.marchand, hemant.agrawal, anoobj, Ananyev,
Konstantin, Power, Ciara, ferruh.yigit, andrew.rybchenko,
ndabilpuram, vattunuru
> -----Original Message-----
> From: Akhil Goyal <gakhil@marvell.com>
> Sent: Wednesday, April 27, 2022 4:11 PM
> To: dev@dpdk.org
> Cc: thomas@monjalon.net; david.marchand@redhat.com;
> hemant.agrawal@nxp.com; anoobj@marvell.com; Ananyev, Konstantin
> <konstantin.ananyev@intel.com>; Power, Ciara <ciara.power@intel.com>;
> ferruh.yigit@intel.com; andrew.rybchenko@oktetlabs.ru;
> ndabilpuram@marvell.com; vattunuru@marvell.com; Akhil Goyal
> <gakhil@marvell.com>
> Subject: [PATCH v5 0/7] app/test: add inline IPsec and reassembly cases
>
> IP reassembly offload was added in last release.
> The test app for unit testing IP reassembly of inline
> inbound IPsec flows is added in this patchset.
> For testing IP reassembly, base inline IPsec is also
> added. The app is enhanced in v4 to handle more functional
> unit test cases for inline IPsec similar to Lookaside IPsec.
> The functions from Lookaside mode are reused to verify
> functional cases.
>
> changed in v5:
> - removed soft/hard expiry patches which are deferred for next release
> - skipped tests if no port is added.
> - added release notes.
> Changes in v4:
> - rebased over next-crypto
> - updated app to take benefit from Lookaside protocol
> test functions.
> - Added more functional cases
> - Added soft and hard expiry event subtypes in ethdev
> for testing SA soft and hard pkt/byte expiry events.
> - reassembly cases are squashed in a single patch
>
> Changes in v3:
> - incorporated latest ethdev changes for reassembly.
> - skipped build on windows as it needs rte_ipsec lib which is not
> compiled on windows.
> changes in v2:
> - added IPsec burst mode case
> - updated as per the latest ethdev changes.
>
>
> Akhil Goyal (6):
> app/test: add unit cases for inline IPsec offload
> test/security: add inline inbound IPsec cases
> test/security: add combined mode inline IPsec cases
> test/security: add inline IPsec reassembly cases
> test/security: add more inline IPsec functional cases
> test/security: add ESN and anti-replay cases for inline
>
> Vamsi Attunuru (1):
> test/security: add inline IPsec IPv6 flow label cases
>
> MAINTAINERS | 2 +-
> app/test/meson.build | 1 +
> app/test/test_cryptodev_security_ipsec.c | 35 +-
> app/test/test_cryptodev_security_ipsec.h | 10 +
> app/test/test_security_inline_proto.c | 2372 +++++++++++++++++
> app/test/test_security_inline_proto_vectors.h | 704 +++++
> doc/guides/rel_notes/release_22_07.rst | 5 +
> 7 files changed, 3127 insertions(+), 2 deletions(-)
> create mode 100644 app/test/test_security_inline_proto.c
> create mode 100644 app/test/test_security_inline_proto_vectors.h
>
> --
> 2.25.1
Series-acked-by: Fan Zhang <roy.fan.zhang@intel.com>
* RE: [PATCH v5 1/7] app/test: add unit cases for inline IPsec offload
2022-04-27 15:10 ` [PATCH v5 1/7] app/test: add unit cases for inline IPsec offload Akhil Goyal
@ 2022-04-27 15:44 ` Zhang, Roy Fan
0 siblings, 0 replies; 184+ messages in thread
From: Zhang, Roy Fan @ 2022-04-27 15:44 UTC (permalink / raw)
To: Akhil Goyal, dev
Cc: thomas, david.marchand, hemant.agrawal, anoobj, Ananyev,
Konstantin, Power, Ciara, ferruh.yigit, andrew.rybchenko,
ndabilpuram, vattunuru
> Subject: [PATCH v5 1/7] app/test: add unit cases for inline IPsec offload
>
> A new test suite is added in test app to test inline IPsec protocol
> offload. In this patch, predefined vectors from Lookaside IPsec test
> are used to verify the IPsec functionality without the need of
> external traffic generators. The sent packet is loopbacked onto the same
> interface which is received and matched with the expected output.
> The test suite can be updated further with other functional test cases.
> In this patch encap only cases are added.
> The testsuite can be run using:
> RTE> inline_ipsec_autotest
>
> Signed-off-by: Akhil Goyal <gakhil@marvell.com>
> Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
> ---
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
* RE: [PATCH v5 2/7] test/security: add inline inbound IPsec cases
2022-04-27 15:10 ` [PATCH v5 2/7] test/security: add inline inbound IPsec cases Akhil Goyal
@ 2022-04-27 15:44 ` Zhang, Roy Fan
0 siblings, 0 replies; 184+ messages in thread
From: Zhang, Roy Fan @ 2022-04-27 15:44 UTC (permalink / raw)
To: Akhil Goyal, dev
Cc: thomas, david.marchand, hemant.agrawal, anoobj, Ananyev,
Konstantin, Power, Ciara, ferruh.yigit, andrew.rybchenko,
ndabilpuram, vattunuru
> Subject: [PATCH v5 2/7] test/security: add inline inbound IPsec cases
>
> Added test cases for inline Inbound protocol offload
> verification with known test vectors from Lookaside mode.
>
> Signed-off-by: Akhil Goyal <gakhil@marvell.com>
> ---
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
* RE: [PATCH v5 3/7] test/security: add combined mode inline IPsec cases
2022-04-27 15:10 ` [PATCH v5 3/7] test/security: add combined mode inline " Akhil Goyal
@ 2022-04-27 15:45 ` Zhang, Roy Fan
0 siblings, 0 replies; 184+ messages in thread
From: Zhang, Roy Fan @ 2022-04-27 15:45 UTC (permalink / raw)
To: Akhil Goyal, dev
Cc: thomas, david.marchand, hemant.agrawal, anoobj, Ananyev,
Konstantin, Power, Ciara, ferruh.yigit, andrew.rybchenko,
ndabilpuram, vattunuru
> Subject: [PATCH v5 3/7] test/security: add combined mode inline IPsec cases
>
> Added combined encap and decap test cases for various algorithm
> combinations
>
> Signed-off-by: Akhil Goyal <gakhil@marvell.com>
> ---
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
* RE: [PATCH v5 4/7] test/security: add inline IPsec reassembly cases
2022-04-27 15:10 ` [PATCH v5 4/7] test/security: add inline IPsec reassembly cases Akhil Goyal
@ 2022-04-27 15:45 ` Zhang, Roy Fan
0 siblings, 0 replies; 184+ messages in thread
From: Zhang, Roy Fan @ 2022-04-27 15:45 UTC (permalink / raw)
To: Akhil Goyal, dev
Cc: thomas, david.marchand, hemant.agrawal, anoobj, Ananyev,
Konstantin, Power, Ciara, ferruh.yigit, andrew.rybchenko,
ndabilpuram, vattunuru
> Subject: [PATCH v5 4/7] test/security: add inline IPsec reassembly cases
>
> Added unit test cases for IP reassembly of inline IPsec
> inbound scenarios.
> In these cases, known test vectors of fragments are first
> processed for inline outbound processing and then received
> back on loopback interface for inbound processing along with
> IP reassembly of the corresponding decrypted packets.
> The resultant plain text reassembled packet is compared with
> original unfragmented packet.
>
> In this patch, cases are added for 2/4/5 fragments for both
> IPv4 and IPv6 packets. A few negative test cases are also added
> like incomplete fragments, out of place fragments, duplicate
> fragments.
>
> Signed-off-by: Akhil Goyal <gakhil@marvell.com>
> ---
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
* RE: [PATCH v5 5/7] test/security: add more inline IPsec functional cases
2022-04-27 15:10 ` [PATCH v5 5/7] test/security: add more inline IPsec functional cases Akhil Goyal
@ 2022-04-27 15:46 ` Zhang, Roy Fan
0 siblings, 0 replies; 184+ messages in thread
From: Zhang, Roy Fan @ 2022-04-27 15:46 UTC (permalink / raw)
To: Akhil Goyal, dev
Cc: thomas, david.marchand, hemant.agrawal, anoobj, Ananyev,
Konstantin, Power, Ciara, ferruh.yigit, andrew.rybchenko,
ndabilpuram, vattunuru
> Subject: [PATCH v5 5/7] test/security: add more inline IPsec functional cases
>
> Added more inline IPsec functional verification cases.
> These cases do not have known vectors but are verified
> using encap + decap test for all the algo combinations.
>
> Signed-off-by: Akhil Goyal <gakhil@marvell.com>
> ---
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
* RE: [PATCH v5 6/7] test/security: add ESN and anti-replay cases for inline
2022-04-27 15:10 ` [PATCH v5 6/7] test/security: add ESN and anti-replay cases for inline Akhil Goyal
@ 2022-04-27 15:46 ` Zhang, Roy Fan
2022-04-28 5:25 ` Anoob Joseph
1 sibling, 0 replies; 184+ messages in thread
From: Zhang, Roy Fan @ 2022-04-27 15:46 UTC (permalink / raw)
To: Akhil Goyal, dev
Cc: thomas, david.marchand, hemant.agrawal, anoobj, Ananyev,
Konstantin, Power, Ciara, ferruh.yigit, andrew.rybchenko,
ndabilpuram, vattunuru
> Subject: [PATCH v5 6/7] test/security: add ESN and anti-replay cases for inline
>
> Added cases to test anti replay for inline IPsec processing
> with and without extended sequence number support.
>
> Signed-off-by: Akhil Goyal <gakhil@marvell.com>
> ---
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
* RE: [PATCH v5 7/7] test/security: add inline IPsec IPv6 flow label cases
2022-04-27 15:10 ` [PATCH v5 7/7] test/security: add inline IPsec IPv6 flow label cases Akhil Goyal
@ 2022-04-27 15:46 ` Zhang, Roy Fan
0 siblings, 0 replies; 184+ messages in thread
From: Zhang, Roy Fan @ 2022-04-27 15:46 UTC (permalink / raw)
To: Akhil Goyal, dev
Cc: thomas, david.marchand, hemant.agrawal, anoobj, Ananyev,
Konstantin, Power, Ciara, ferruh.yigit, andrew.rybchenko,
ndabilpuram, vattunuru
>
> From: Vamsi Attunuru <vattunuru@marvell.com>
>
> Patch adds unit tests for IPv6 flow label set & copy
> operations.
>
> Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
> ---
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
* RE: [PATCH v5 6/7] test/security: add ESN and anti-replay cases for inline
2022-04-27 15:10 ` [PATCH v5 6/7] test/security: add ESN and anti-replay cases for inline Akhil Goyal
2022-04-27 15:46 ` Zhang, Roy Fan
@ 2022-04-28 5:25 ` Anoob Joseph
1 sibling, 0 replies; 184+ messages in thread
From: Anoob Joseph @ 2022-04-28 5:25 UTC (permalink / raw)
To: Akhil Goyal, dev
Cc: thomas, david.marchand, hemant.agrawal, konstantin.ananyev,
ciara.power, ferruh.yigit, andrew.rybchenko,
Nithin Kumar Dabilpuram, Vamsi Krishna Attunuru, Akhil Goyal
Hi Akhil,
Please see inline.
Thanks,
Anoob
> Subject: [PATCH v5 6/7] test/security: add ESN and anti-replay cases for inline
>
> Added cases to test anti-replay for inline IPsec processing with and without
> extended sequence number support.
>
> Signed-off-by: Akhil Goyal <gakhil@marvell.com>
> ---
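For readers unfamiliar with the mechanism being tested: ESP anti-replay keeps a sliding window of recently seen sequence numbers and rejects duplicates or packets that fall behind the window. The following is a minimal, self-contained sketch of that logic (a hypothetical 64-entry window; not the DPDK or hardware implementation, just an illustration of what these test cases exercise):

```c
#include <stdint.h>
#include <stdbool.h>
#include <assert.h>

/* Simplified 64-packet anti-replay window: 'top' is the highest
 * sequence number accepted so far, and bit i of 'bitmap' marks
 * that sequence number (top - i) has already been received. */
struct replay_window {
	uint64_t top;
	uint64_t bitmap;
};

/* Returns true if seq is fresh (accepted), false if it is a
 * replay or falls behind the window. */
bool replay_check_update(struct replay_window *w, uint64_t seq)
{
	if (seq > w->top) {
		uint64_t shift = seq - w->top;

		/* Advance the window; far jumps clear the bitmap. */
		w->bitmap = (shift >= 64) ? 0 : w->bitmap << shift;
		w->bitmap |= 1ULL; /* mark the new top as seen */
		w->top = seq;
		return true;
	}

	uint64_t off = w->top - seq;

	if (off >= 64)
		return false; /* too old: behind the window */
	if (w->bitmap & (1ULL << off))
		return false; /* duplicate: replay detected */
	w->bitmap |= (1ULL << off);
	return true;
}
```

With ESN support the sequence number is extended to 64 bits by tracking a high-order counter, but the window logic is the same.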
> app/test/test_security_inline_proto.c | 308 ++++++++++++++++++++++++++
> 1 file changed, 308 insertions(+)
>
> diff --git a/app/test/test_security_inline_proto.c
> b/app/test/test_security_inline_proto.c
> index 055b753634..009405f403 100644
> --- a/app/test/test_security_inline_proto.c
> +++ b/app/test/test_security_inline_proto.c
> @@ -1091,6 +1091,136 @@ test_ipsec_inline_proto_all(const struct ipsec_test_flags *flags)
> return TEST_SKIPPED;
> }
>
> +static int
> +test_ipsec_inline_proto_process_with_esn(struct ipsec_test_data td[],
> + struct ipsec_test_data res_d[],
> + int nb_pkts,
> + bool silent,
> + const struct ipsec_test_flags *flags)
> +{
> + struct rte_security_session_conf sess_conf = {0};
> + struct ipsec_test_data *res_d_tmp = NULL;
> + struct rte_crypto_sym_xform cipher = {0};
> + struct rte_crypto_sym_xform auth = {0};
> + struct rte_crypto_sym_xform aead = {0};
> + struct rte_mbuf *rx_pkt = NULL;
> + struct rte_mbuf *tx_pkt = NULL;
> + int nb_rx, nb_sent;
> + struct rte_security_session *ses;
> + struct rte_security_ctx *ctx;
> + uint32_t ol_flags;
> + int i, ret;
> +
> + if (td[0].aead) {
> + sess_conf.crypto_xform = &aead;
> + } else {
> + if (td[0].ipsec_xform.direction ==
> + RTE_SECURITY_IPSEC_SA_DIR_EGRESS) {
> + sess_conf.crypto_xform = &cipher;
> + sess_conf.crypto_xform->type = RTE_CRYPTO_SYM_XFORM_CIPHER;
> + sess_conf.crypto_xform->next = &auth;
> + sess_conf.crypto_xform->next->type = RTE_CRYPTO_SYM_XFORM_AUTH;
> + } else {
> + sess_conf.crypto_xform = &auth;
> + sess_conf.crypto_xform->type = RTE_CRYPTO_SYM_XFORM_AUTH;
> + sess_conf.crypto_xform->next = &cipher;
> + sess_conf.crypto_xform->next->type = RTE_CRYPTO_SYM_XFORM_CIPHER;
> + }
> + }
> +
> + /* Create Inline IPsec session. */
> + ret = create_inline_ipsec_session(&td[0], port_id, &ses, &ctx,
> + &ol_flags, flags, &sess_conf);
> + if (ret)
> + return ret;
> +
> + if (td[0].ipsec_xform.direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS)
> + create_default_flow(port_id);
[Anoob] If rte_flow creation fails, then the test should be skipped. I see that create_default_flow() is not returning error in case flow_validate() or flow_create() fails. IMO, it should be fixed.
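The fix suggested above amounts to having create_default_flow() return an error and the caller mapping that to a skip. A minimal sketch of the pattern follows; the flow_validate_ok/flow_create_ok globals are stand-ins for the rte_flow_validate()/rte_flow_create() results, and the TEST_* values here are hypothetical placeholders for the DPDK test harness codes:

```c
#include <stdio.h>
#include <assert.h>

#define TEST_SUCCESS 0
#define TEST_SKIPPED 77 /* placeholder value, not DPDK's definition */

/* Stand-ins for the rte_flow call outcomes in the real test app. */
int flow_validate_ok = 1;
int flow_create_ok = 1;

/* create_default_flow() variant that propagates failures instead of
 * returning silently when validate/create fails. */
int create_default_flow_checked(void)
{
	if (!flow_validate_ok) {
		printf("Validate flow failed\n");
		return -1;
	}
	if (!flow_create_ok) {
		printf("Default flow rule create failed\n");
		return -1;
	}
	return 0;
}

/* Caller maps a flow-setup failure to a skipped test, not a hard
 * failure, since the device may simply lack rte_flow support. */
int run_ingress_case(void)
{
	if (create_default_flow_checked() != 0)
		return TEST_SKIPPED;
	/* ... the rest of the ingress test would run here ... */
	return TEST_SUCCESS;
}
```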
> +
> + for (i = 0; i < nb_pkts; i++) {
> + tx_pkt = init_packet(mbufpool, td[i].input_text.data,
> + td[i].input_text.len);
> + if (tx_pkt == NULL) {
> + ret = TEST_FAILED;
> + goto out;
> + }
> +
> + if (test_ipsec_pkt_update(rte_pktmbuf_mtod_offset(tx_pkt,
> + uint8_t *, RTE_ETHER_HDR_LEN), flags)) {
> + ret = TEST_FAILED;
> + goto out;
> + }
> +
> + if (td[i].ipsec_xform.direction ==
> + RTE_SECURITY_IPSEC_SA_DIR_EGRESS) {
> + if (flags->antireplay) {
> + sess_conf.ipsec.esn.value =
> + td[i].ipsec_xform.esn.value;
> + ret = rte_security_session_update(ctx, ses,
> + &sess_conf);
> + if (ret) {
[Anoob] ret should be set as TEST_SKIPPED.
> + printf("Could not update ESN in session\n");
> + rte_pktmbuf_free(tx_pkt);
> + goto out;
> + }
> + }
> + if (ol_flags & RTE_SECURITY_TX_OLOAD_NEED_MDATA)
> + rte_security_set_pkt_metadata(ctx, ses,
> + tx_pkt, NULL);
> + tx_pkt->ol_flags |= RTE_MBUF_F_TX_SEC_OFFLOAD;
> + }
> + /* Send packet to ethdev for inline IPsec processing. */
> + nb_sent = rte_eth_tx_burst(port_id, 0, &tx_pkt, 1);
> + if (nb_sent != 1) {
> + printf("\nUnable to TX packets");
> + rte_pktmbuf_free(tx_pkt);
> + ret = TEST_FAILED;
> + goto out;
> + }
> +
> + rte_pause();
> +
> + /* Receive back packet on loopback interface. */
> + do {
> + rte_delay_ms(1);
> + nb_rx = rte_eth_rx_burst(port_id, 0, &rx_pkt, 1);
> + } while (nb_rx == 0);
> +
> + rte_pktmbuf_adj(rx_pkt, RTE_ETHER_HDR_LEN);
> +
> + if (res_d != NULL)
> + res_d_tmp = &res_d[i];
> +
> + ret = test_ipsec_post_process(rx_pkt, &td[i],
> + res_d_tmp, silent, flags);
> + if (ret != TEST_SUCCESS) {
> + rte_pktmbuf_free(rx_pkt);
> + goto out;
> + }
> +
> + ret = test_ipsec_stats_verify(ctx, ses, flags,
> + td->ipsec_xform.direction);
> + if (ret != TEST_SUCCESS) {
> + rte_pktmbuf_free(rx_pkt);
> + goto out;
> + }
> +
> + rte_pktmbuf_free(rx_pkt);
> + rx_pkt = NULL;
> + tx_pkt = NULL;
> + res_d_tmp = NULL;
[Anoob] Why do we need to set res_d_tmp to NULL?
> + }
> +
> +out:
> + if (td->ipsec_xform.direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS)
> + destroy_default_flow(port_id);
> +
> + /* Destroy session so that other cases can create the session again */
> + rte_security_session_destroy(ctx, ses);
> + ses = NULL;
> +
> + return ret;
> +}
>
<snip>
^ permalink raw reply [flat|nested] 184+ messages in thread
* [PATCH v6 0/7] app/test: add inline IPsec and reassembly cases
2022-04-27 15:10 ` [PATCH v5 0/7] " Akhil Goyal
` (7 preceding siblings ...)
2022-04-27 15:42 ` [PATCH v5 0/7] app/test: add inline IPsec and reassembly cases Zhang, Roy Fan
@ 2022-05-13 7:31 ` Akhil Goyal
2022-05-13 7:31 ` [PATCH v6 1/7] app/test: add unit cases for inline IPsec offload Akhil Goyal
` (7 more replies)
8 siblings, 8 replies; 184+ messages in thread
From: Akhil Goyal @ 2022-05-13 7:31 UTC (permalink / raw)
To: dev
Cc: thomas, david.marchand, hemant.agrawal, anoobj,
konstantin.ananyev, ciara.power, ferruh.yigit, andrew.rybchenko,
ndabilpuram, vattunuru, Akhil Goyal
IP reassembly offload was added in the last release.
The test app for unit testing IP reassembly of inline
inbound IPsec flows is added in this patchset.
For testing IP reassembly, base inline IPsec support is also
added. The app is enhanced in v4 to handle more functional
unit test cases for inline IPsec, similar to Lookaside IPsec.
The functions from Lookaside mode are reused to verify
functional cases.
Changes in v6:
- Addressed comments from Anoob.
Changes in v5:
- removed soft/hard expiry patches which are deferred for next release
- skipped tests if no port is added.
- added release notes.
Changes in v4:
- rebased over next-crypto
- updated app to take benefit from Lookaside protocol
test functions.
- Added more functional cases
- Added soft and hard expiry event subtypes in ethdev
for testing SA soft and hard pkt/byte expiry events.
- reassembly cases are squashed in a single patch
Changes in v3:
- incorporated latest ethdev changes for reassembly.
- skipped build on windows as it needs rte_ipsec lib which is not
compiled on windows.
Changes in v2:
- added IPsec burst mode case
- updated as per the latest ethdev changes.
Akhil Goyal (6):
app/test: add unit cases for inline IPsec offload
test/security: add inline inbound IPsec cases
test/security: add combined mode inline IPsec cases
test/security: add inline IPsec reassembly cases
test/security: add more inline IPsec functional cases
test/security: add ESN and anti-replay cases for inline
Vamsi Attunuru (1):
test/security: add inline IPsec IPv6 flow label cases
MAINTAINERS | 2 +-
app/test/meson.build | 1 +
app/test/test_cryptodev_security_ipsec.c | 35 +-
app/test/test_cryptodev_security_ipsec.h | 10 +
app/test/test_security_inline_proto.c | 2382 +++++++++++++++++
app/test/test_security_inline_proto_vectors.h | 704 +++++
doc/guides/rel_notes/release_22_07.rst | 5 +
7 files changed, 3137 insertions(+), 2 deletions(-)
create mode 100644 app/test/test_security_inline_proto.c
create mode 100644 app/test/test_security_inline_proto_vectors.h
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* [PATCH v6 1/7] app/test: add unit cases for inline IPsec offload
2022-05-13 7:31 ` [PATCH v6 " Akhil Goyal
@ 2022-05-13 7:31 ` Akhil Goyal
2022-05-13 7:31 ` [PATCH v6 2/7] test/security: add inline inbound IPsec cases Akhil Goyal
` (6 subsequent siblings)
7 siblings, 0 replies; 184+ messages in thread
From: Akhil Goyal @ 2022-05-13 7:31 UTC (permalink / raw)
To: dev
Cc: thomas, david.marchand, hemant.agrawal, anoobj,
konstantin.ananyev, ciara.power, ferruh.yigit, andrew.rybchenko,
ndabilpuram, vattunuru, Akhil Goyal, Fan Zhang
A new test suite is added in the test app to test inline IPsec protocol
offload. In this patch, predefined vectors from the Lookaside IPsec tests
are used to verify the IPsec functionality without the need for
external traffic generators. The sent packet is looped back onto the same
interface, received, and matched against the expected output.
The test suite can be extended further with other functional test cases.
In this patch, encap-only cases are added.
The testsuite can be run using:
RTE> inline_ipsec_autotest
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
---
MAINTAINERS | 2 +-
app/test/meson.build | 1 +
app/test/test_security_inline_proto.c | 887 ++++++++++++++++++
app/test/test_security_inline_proto_vectors.h | 20 +
doc/guides/rel_notes/release_22_07.rst | 5 +
5 files changed, 914 insertions(+), 1 deletion(-)
create mode 100644 app/test/test_security_inline_proto.c
create mode 100644 app/test/test_security_inline_proto_vectors.h
diff --git a/MAINTAINERS b/MAINTAINERS
index 8f4e9c3479..8031fed09f 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -440,7 +440,7 @@ M: Akhil Goyal <gakhil@marvell.com>
T: git://dpdk.org/next/dpdk-next-crypto
F: lib/security/
F: doc/guides/prog_guide/rte_security.rst
-F: app/test/test_security.c
+F: app/test/test_security*
Compression API - EXPERIMENTAL
M: Fan Zhang <roy.fan.zhang@intel.com>
diff --git a/app/test/meson.build b/app/test/meson.build
index bb4621ed2a..df01257142 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -125,6 +125,7 @@ test_sources = files(
'test_rwlock.c',
'test_sched.c',
'test_security.c',
+ 'test_security_inline_proto.c',
'test_service_cores.c',
'test_spinlock.c',
'test_stack.c',
diff --git a/app/test/test_security_inline_proto.c b/app/test/test_security_inline_proto.c
new file mode 100644
index 0000000000..4b960ddfe0
--- /dev/null
+++ b/app/test/test_security_inline_proto.c
@@ -0,0 +1,887 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2022 Marvell.
+ */
+
+
+#include <stdio.h>
+#include <inttypes.h>
+
+#include <rte_ethdev.h>
+#include <rte_malloc.h>
+#include <rte_security.h>
+
+#include "test.h"
+#include "test_security_inline_proto_vectors.h"
+
+#ifdef RTE_EXEC_ENV_WINDOWS
+static int
+test_inline_ipsec(void)
+{
+ printf("Inline ipsec not supported on Windows, skipping test\n");
+ return TEST_SKIPPED;
+}
+
+#else
+
+#define NB_ETHPORTS_USED 1
+#define MEMPOOL_CACHE_SIZE 32
+#define MAX_PKT_BURST 32
+#define RTE_TEST_RX_DESC_DEFAULT 1024
+#define RTE_TEST_TX_DESC_DEFAULT 1024
+#define RTE_PORT_ALL (~(uint16_t)0x0)
+
+#define RX_PTHRESH 8 /**< Default values of RX prefetch threshold reg. */
+#define RX_HTHRESH 8 /**< Default values of RX host threshold reg. */
+#define RX_WTHRESH 0 /**< Default values of RX write-back threshold reg. */
+
+#define TX_PTHRESH 32 /**< Default values of TX prefetch threshold reg. */
+#define TX_HTHRESH 0 /**< Default values of TX host threshold reg. */
+#define TX_WTHRESH 0 /**< Default values of TX write-back threshold reg. */
+
+#define MAX_TRAFFIC_BURST 2048
+#define NB_MBUF 10240
+
+extern struct ipsec_test_data pkt_aes_128_gcm;
+extern struct ipsec_test_data pkt_aes_192_gcm;
+extern struct ipsec_test_data pkt_aes_256_gcm;
+extern struct ipsec_test_data pkt_aes_128_gcm_frag;
+extern struct ipsec_test_data pkt_aes_128_cbc_null;
+extern struct ipsec_test_data pkt_null_aes_xcbc;
+extern struct ipsec_test_data pkt_aes_128_cbc_hmac_sha384;
+extern struct ipsec_test_data pkt_aes_128_cbc_hmac_sha512;
+
+static struct rte_mempool *mbufpool;
+static struct rte_mempool *sess_pool;
+static struct rte_mempool *sess_priv_pool;
+/* ethernet addresses of ports */
+static struct rte_ether_addr ports_eth_addr[RTE_MAX_ETHPORTS];
+
+static struct rte_eth_conf port_conf = {
+ .rxmode = {
+ .mq_mode = RTE_ETH_MQ_RX_NONE,
+ .split_hdr_size = 0,
+ .offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM |
+ RTE_ETH_RX_OFFLOAD_SECURITY,
+ },
+ .txmode = {
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
+ .offloads = RTE_ETH_TX_OFFLOAD_SECURITY |
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE,
+ },
+ .lpbk_mode = 1, /* enable loopback */
+};
+
+static struct rte_eth_rxconf rx_conf = {
+ .rx_thresh = {
+ .pthresh = RX_PTHRESH,
+ .hthresh = RX_HTHRESH,
+ .wthresh = RX_WTHRESH,
+ },
+ .rx_free_thresh = 32,
+};
+
+static struct rte_eth_txconf tx_conf = {
+ .tx_thresh = {
+ .pthresh = TX_PTHRESH,
+ .hthresh = TX_HTHRESH,
+ .wthresh = TX_WTHRESH,
+ },
+ .tx_free_thresh = 32, /* Use PMD default values */
+ .tx_rs_thresh = 32, /* Use PMD default values */
+};
+
+uint16_t port_id;
+
+static uint64_t link_mbps;
+
+static struct rte_flow *default_flow[RTE_MAX_ETHPORTS];
+
+/* Create Inline IPsec session */
+static int
+create_inline_ipsec_session(struct ipsec_test_data *sa, uint16_t portid,
+ struct rte_security_session **sess, struct rte_security_ctx **ctx,
+ uint32_t *ol_flags, const struct ipsec_test_flags *flags,
+ struct rte_security_session_conf *sess_conf)
+{
+ uint16_t src_v6[8] = {0x2607, 0xf8b0, 0x400c, 0x0c03, 0x0000, 0x0000,
+ 0x0000, 0x001a};
+ uint16_t dst_v6[8] = {0x2001, 0x0470, 0xe5bf, 0xdead, 0x4957, 0x2174,
+ 0xe82c, 0x4887};
+ uint32_t src_v4 = rte_cpu_to_be_32(RTE_IPV4(192, 168, 1, 2));
+ uint32_t dst_v4 = rte_cpu_to_be_32(RTE_IPV4(192, 168, 1, 1));
+ struct rte_security_capability_idx sec_cap_idx;
+ const struct rte_security_capability *sec_cap;
+ enum rte_security_ipsec_sa_direction dir;
+ struct rte_security_ctx *sec_ctx;
+ uint32_t verify;
+
+ sess_conf->action_type = RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL;
+ sess_conf->protocol = RTE_SECURITY_PROTOCOL_IPSEC;
+ sess_conf->ipsec = sa->ipsec_xform;
+
+ dir = sa->ipsec_xform.direction;
+ verify = flags->tunnel_hdr_verify;
+
+ if ((dir == RTE_SECURITY_IPSEC_SA_DIR_INGRESS) && verify) {
+ if (verify == RTE_SECURITY_IPSEC_TUNNEL_VERIFY_SRC_DST_ADDR)
+ src_v4 += 1;
+ else if (verify == RTE_SECURITY_IPSEC_TUNNEL_VERIFY_DST_ADDR)
+ dst_v4 += 1;
+ }
+
+ if (sa->ipsec_xform.mode == RTE_SECURITY_IPSEC_SA_MODE_TUNNEL) {
+ if (sa->ipsec_xform.tunnel.type ==
+ RTE_SECURITY_IPSEC_TUNNEL_IPV4) {
+ memcpy(&sess_conf->ipsec.tunnel.ipv4.src_ip, &src_v4,
+ sizeof(src_v4));
+ memcpy(&sess_conf->ipsec.tunnel.ipv4.dst_ip, &dst_v4,
+ sizeof(dst_v4));
+
+ if (flags->df == TEST_IPSEC_SET_DF_0_INNER_1)
+ sess_conf->ipsec.tunnel.ipv4.df = 0;
+
+ if (flags->df == TEST_IPSEC_SET_DF_1_INNER_0)
+ sess_conf->ipsec.tunnel.ipv4.df = 1;
+
+ if (flags->dscp == TEST_IPSEC_SET_DSCP_0_INNER_1)
+ sess_conf->ipsec.tunnel.ipv4.dscp = 0;
+
+ if (flags->dscp == TEST_IPSEC_SET_DSCP_1_INNER_0)
+ sess_conf->ipsec.tunnel.ipv4.dscp =
+ TEST_IPSEC_DSCP_VAL;
+ } else {
+ if (flags->dscp == TEST_IPSEC_SET_DSCP_0_INNER_1)
+ sess_conf->ipsec.tunnel.ipv6.dscp = 0;
+
+ if (flags->dscp == TEST_IPSEC_SET_DSCP_1_INNER_0)
+ sess_conf->ipsec.tunnel.ipv6.dscp =
+ TEST_IPSEC_DSCP_VAL;
+
+ memcpy(&sess_conf->ipsec.tunnel.ipv6.src_addr, &src_v6,
+ sizeof(src_v6));
+ memcpy(&sess_conf->ipsec.tunnel.ipv6.dst_addr, &dst_v6,
+ sizeof(dst_v6));
+ }
+ }
+
+ /* Save SA as userdata for the security session. When
+ * the packet is received, this userdata will be
+ * retrieved using the metadata from the packet.
+ *
+ * The PMD is expected to set similar metadata for other
+ * operations, like rte_eth_event, which are tied to
+ * security session. In such cases, the userdata could
+ * be obtained to uniquely identify the security
+ * parameters denoted.
+ */
+
+ sess_conf->userdata = (void *) sa;
+
+ sec_ctx = (struct rte_security_ctx *)rte_eth_dev_get_sec_ctx(portid);
+ if (sec_ctx == NULL) {
+ printf("Ethernet device doesn't support security features.\n");
+ return TEST_SKIPPED;
+ }
+
+ sec_cap_idx.action = RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL;
+ sec_cap_idx.protocol = RTE_SECURITY_PROTOCOL_IPSEC;
+ sec_cap_idx.ipsec.proto = sess_conf->ipsec.proto;
+ sec_cap_idx.ipsec.mode = sess_conf->ipsec.mode;
+ sec_cap_idx.ipsec.direction = sess_conf->ipsec.direction;
+ sec_cap = rte_security_capability_get(sec_ctx, &sec_cap_idx);
+ if (sec_cap == NULL) {
+ printf("No capabilities registered\n");
+ return TEST_SKIPPED;
+ }
+
+ if (sa->aead || sa->aes_gmac)
+ memcpy(&sess_conf->ipsec.salt, sa->salt.data,
+ RTE_MIN(sizeof(sess_conf->ipsec.salt), sa->salt.len));
+
+ /* Copy cipher session parameters */
+ if (sa->aead) {
+ rte_memcpy(sess_conf->crypto_xform, &sa->xform.aead,
+ sizeof(struct rte_crypto_sym_xform));
+ sess_conf->crypto_xform->aead.key.data = sa->key.data;
+ /* Verify crypto capabilities */
+ if (test_ipsec_crypto_caps_aead_verify(sec_cap,
+ sess_conf->crypto_xform) != 0) {
+ RTE_LOG(INFO, USER1,
+ "Crypto capabilities not supported\n");
+ return TEST_SKIPPED;
+ }
+ } else {
+ if (dir == RTE_SECURITY_IPSEC_SA_DIR_EGRESS) {
+ rte_memcpy(&sess_conf->crypto_xform->cipher,
+ &sa->xform.chain.cipher.cipher,
+ sizeof(struct rte_crypto_cipher_xform));
+
+ rte_memcpy(&sess_conf->crypto_xform->next->auth,
+ &sa->xform.chain.auth.auth,
+ sizeof(struct rte_crypto_auth_xform));
+ sess_conf->crypto_xform->cipher.key.data =
+ sa->key.data;
+ sess_conf->crypto_xform->next->auth.key.data =
+ sa->auth_key.data;
+ /* Verify crypto capabilities */
+ if (test_ipsec_crypto_caps_cipher_verify(sec_cap,
+ sess_conf->crypto_xform) != 0) {
+ RTE_LOG(INFO, USER1,
+ "Cipher crypto capabilities not supported\n");
+ return TEST_SKIPPED;
+ }
+
+ if (test_ipsec_crypto_caps_auth_verify(sec_cap,
+ sess_conf->crypto_xform->next) != 0) {
+ RTE_LOG(INFO, USER1,
+ "Auth crypto capabilities not supported\n");
+ return TEST_SKIPPED;
+ }
+ } else {
+ rte_memcpy(&sess_conf->crypto_xform->next->cipher,
+ &sa->xform.chain.cipher.cipher,
+ sizeof(struct rte_crypto_cipher_xform));
+ rte_memcpy(&sess_conf->crypto_xform->auth,
+ &sa->xform.chain.auth.auth,
+ sizeof(struct rte_crypto_auth_xform));
+ sess_conf->crypto_xform->auth.key.data =
+ sa->auth_key.data;
+ sess_conf->crypto_xform->next->cipher.key.data =
+ sa->key.data;
+
+ /* Verify crypto capabilities */
+ if (test_ipsec_crypto_caps_cipher_verify(sec_cap,
+ sess_conf->crypto_xform->next) != 0) {
+ RTE_LOG(INFO, USER1,
+ "Cipher crypto capabilities not supported\n");
+ return TEST_SKIPPED;
+ }
+
+ if (test_ipsec_crypto_caps_auth_verify(sec_cap,
+ sess_conf->crypto_xform) != 0) {
+ RTE_LOG(INFO, USER1,
+ "Auth crypto capabilities not supported\n");
+ return TEST_SKIPPED;
+ }
+ }
+ }
+
+ if (test_ipsec_sec_caps_verify(&sess_conf->ipsec, sec_cap, false) != 0)
+ return TEST_SKIPPED;
+
+ if ((sa->ipsec_xform.direction ==
+ RTE_SECURITY_IPSEC_SA_DIR_EGRESS) &&
+ (sa->ipsec_xform.options.iv_gen_disable == 1)) {
+ /* Set env variable when IV generation is disabled */
+ char arr[128];
+ int len = 0, j = 0;
+ int iv_len = (sa->aead || sa->aes_gmac) ? 8 : 16;
+
+ for (; j < iv_len; j++)
+ len += snprintf(arr+len, sizeof(arr) - len,
+ "0x%x, ", sa->iv.data[j]);
+ setenv("ETH_SEC_IV_OVR", arr, 1);
+ }
+
+ *sess = rte_security_session_create(sec_ctx,
+ sess_conf, sess_pool, sess_priv_pool);
+ if (*sess == NULL) {
+ printf("SEC Session init failed.\n");
+ return TEST_FAILED;
+ }
+
+ *ol_flags = sec_cap->ol_flags;
+ *ctx = sec_ctx;
+
+ return 0;
+}
+
+/* Check the link status of all ports for up to 3s, and finally print it */
+static void
+check_all_ports_link_status(uint16_t port_num, uint32_t port_mask)
+{
+#define CHECK_INTERVAL 100 /* 100ms */
+#define MAX_CHECK_TIME 30 /* 3s (30 * 100ms) in total */
+ uint16_t portid;
+ uint8_t count, all_ports_up, print_flag = 0;
+ struct rte_eth_link link;
+ int ret;
+ char link_status[RTE_ETH_LINK_MAX_STR_LEN];
+
+ printf("Checking link statuses...\n");
+ fflush(stdout);
+ for (count = 0; count <= MAX_CHECK_TIME; count++) {
+ all_ports_up = 1;
+ for (portid = 0; portid < port_num; portid++) {
+ if ((port_mask & (1 << portid)) == 0)
+ continue;
+ memset(&link, 0, sizeof(link));
+ ret = rte_eth_link_get_nowait(portid, &link);
+ if (ret < 0) {
+ all_ports_up = 0;
+ if (print_flag == 1)
+ printf("Port %u link get failed: %s\n",
+ portid, rte_strerror(-ret));
+ continue;
+ }
+
+ /* print link status if flag set */
+ if (print_flag == 1) {
+ if (link.link_status && link_mbps == 0)
+ link_mbps = link.link_speed;
+
+ rte_eth_link_to_str(link_status,
+ sizeof(link_status), &link);
+ printf("Port %d %s\n", portid, link_status);
+ continue;
+ }
+ /* clear all_ports_up flag if any link down */
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
+ all_ports_up = 0;
+ break;
+ }
+ }
+ /* after finally printing all link status, get out */
+ if (print_flag == 1)
+ break;
+
+ if (all_ports_up == 0) {
+ fflush(stdout);
+ rte_delay_ms(CHECK_INTERVAL);
+ }
+
+ /* set the print_flag if all ports up or timeout */
+ if (all_ports_up == 1 || count == (MAX_CHECK_TIME - 1))
+ print_flag = 1;
+ }
+}
+
+static void
+print_ethaddr(const char *name, const struct rte_ether_addr *eth_addr)
+{
+ char buf[RTE_ETHER_ADDR_FMT_SIZE];
+ rte_ether_format_addr(buf, RTE_ETHER_ADDR_FMT_SIZE, eth_addr);
+ printf("%s%s", name, buf);
+}
+
+static void
+copy_buf_to_pkt_segs(const uint8_t *buf, unsigned int len,
+ struct rte_mbuf *pkt, unsigned int offset)
+{
+ unsigned int copied = 0;
+ unsigned int copy_len;
+ struct rte_mbuf *seg;
+ void *seg_buf;
+
+ seg = pkt;
+ while (offset >= seg->data_len) {
+ offset -= seg->data_len;
+ seg = seg->next;
+ }
+ copy_len = seg->data_len - offset;
+ seg_buf = rte_pktmbuf_mtod_offset(seg, char *, offset);
+ while (len > copy_len) {
+ rte_memcpy(seg_buf, buf + copied, (size_t) copy_len);
+ len -= copy_len;
+ copied += copy_len;
+ seg = seg->next;
+ seg_buf = rte_pktmbuf_mtod(seg, void *);
+ }
+ rte_memcpy(seg_buf, buf + copied, (size_t) len);
+}
+
+static inline struct rte_mbuf *
+init_packet(struct rte_mempool *mp, const uint8_t *data, unsigned int len)
+{
+ struct rte_mbuf *pkt;
+
+ pkt = rte_pktmbuf_alloc(mp);
+ if (pkt == NULL)
+ return NULL;
+ if (((data[0] & 0xF0) >> 4) == IPVERSION) {
+ rte_memcpy(rte_pktmbuf_append(pkt, RTE_ETHER_HDR_LEN),
+ &dummy_ipv4_eth_hdr, RTE_ETHER_HDR_LEN);
+ pkt->l3_len = sizeof(struct rte_ipv4_hdr);
+ } else {
+ rte_memcpy(rte_pktmbuf_append(pkt, RTE_ETHER_HDR_LEN),
+ &dummy_ipv6_eth_hdr, RTE_ETHER_HDR_LEN);
+ pkt->l3_len = sizeof(struct rte_ipv6_hdr);
+ }
+ pkt->l2_len = RTE_ETHER_HDR_LEN;
+
+ if (pkt->buf_len > (len + RTE_ETHER_HDR_LEN))
+ rte_memcpy(rte_pktmbuf_append(pkt, len), data, len);
+ else
+ copy_buf_to_pkt_segs(data, len, pkt, RTE_ETHER_HDR_LEN);
+ return pkt;
+}
+
+static int
+init_mempools(unsigned int nb_mbuf)
+{
+ struct rte_security_ctx *sec_ctx;
+ uint16_t nb_sess = 512;
+ uint32_t sess_sz;
+ char s[64];
+
+ if (mbufpool == NULL) {
+ snprintf(s, sizeof(s), "mbuf_pool");
+ mbufpool = rte_pktmbuf_pool_create(s, nb_mbuf,
+ MEMPOOL_CACHE_SIZE, 0,
+ RTE_MBUF_DEFAULT_BUF_SIZE, SOCKET_ID_ANY);
+ if (mbufpool == NULL) {
+ printf("Cannot init mbuf pool\n");
+ return TEST_FAILED;
+ }
+ printf("Allocated mbuf pool\n");
+ }
+
+ sec_ctx = rte_eth_dev_get_sec_ctx(port_id);
+ if (sec_ctx == NULL) {
+ printf("Device does not support Security ctx\n");
+ return TEST_SKIPPED;
+ }
+ sess_sz = rte_security_session_get_size(sec_ctx);
+ if (sess_pool == NULL) {
+ snprintf(s, sizeof(s), "sess_pool");
+ sess_pool = rte_mempool_create(s, nb_sess, sess_sz,
+ MEMPOOL_CACHE_SIZE, 0,
+ NULL, NULL, NULL, NULL,
+ SOCKET_ID_ANY, 0);
+ if (sess_pool == NULL) {
+ printf("Cannot init sess pool\n");
+ return TEST_FAILED;
+ }
+ printf("Allocated sess pool\n");
+ }
+ if (sess_priv_pool == NULL) {
+ snprintf(s, sizeof(s), "sess_priv_pool");
+ sess_priv_pool = rte_mempool_create(s, nb_sess, sess_sz,
+ MEMPOOL_CACHE_SIZE, 0,
+ NULL, NULL, NULL, NULL,
+ SOCKET_ID_ANY, 0);
+ if (sess_priv_pool == NULL) {
+ printf("Cannot init sess_priv pool\n");
+ return TEST_FAILED;
+ }
+ printf("Allocated sess_priv pool\n");
+ }
+
+ return 0;
+}
+
+static int
+create_default_flow(uint16_t portid)
+{
+ struct rte_flow_action action[2];
+ struct rte_flow_item pattern[2];
+ struct rte_flow_attr attr = {0};
+ struct rte_flow_error err;
+ struct rte_flow *flow;
+ int ret;
+
+ /* Add the default rte_flow to enable SECURITY for all ESP packets */
+
+ pattern[0].type = RTE_FLOW_ITEM_TYPE_ESP;
+ pattern[0].spec = NULL;
+ pattern[0].mask = NULL;
+ pattern[0].last = NULL;
+ pattern[1].type = RTE_FLOW_ITEM_TYPE_END;
+
+ action[0].type = RTE_FLOW_ACTION_TYPE_SECURITY;
+ action[0].conf = NULL;
+ action[1].type = RTE_FLOW_ACTION_TYPE_END;
+ action[1].conf = NULL;
+
+ attr.ingress = 1;
+
+ ret = rte_flow_validate(portid, &attr, pattern, action, &err);
+ if (ret) {
+ printf("\nValidate flow failed, ret = %d\n", ret);
+ return -1;
+ }
+ flow = rte_flow_create(portid, &attr, pattern, action, &err);
+ if (flow == NULL) {
+ printf("\nDefault flow rule create failed\n");
+ return -1;
+ }
+
+ default_flow[portid] = flow;
+
+ return 0;
+}
+
+static void
+destroy_default_flow(uint16_t portid)
+{
+ struct rte_flow_error err;
+ int ret;
+
+ if (!default_flow[portid])
+ return;
+ ret = rte_flow_destroy(portid, default_flow[portid], &err);
+ if (ret) {
+ printf("\nDefault flow rule destroy failed\n");
+ return;
+ }
+ default_flow[portid] = NULL;
+}
+
+struct rte_mbuf **tx_pkts_burst;
+struct rte_mbuf **rx_pkts_burst;
+
+static int
+test_ipsec_inline_proto_process(struct ipsec_test_data *td,
+ struct ipsec_test_data *res_d,
+ int nb_pkts,
+ bool silent,
+ const struct ipsec_test_flags *flags)
+{
+ struct rte_security_session_conf sess_conf = {0};
+ struct rte_crypto_sym_xform cipher = {0};
+ struct rte_crypto_sym_xform auth = {0};
+ struct rte_crypto_sym_xform aead = {0};
+ struct rte_security_session *ses;
+ struct rte_security_ctx *ctx;
+ int nb_rx = 0, nb_sent;
+ uint32_t ol_flags;
+ int i, j = 0, ret;
+
+ memset(rx_pkts_burst, 0, sizeof(rx_pkts_burst[0]) * nb_pkts);
+
+ if (td->aead) {
+ sess_conf.crypto_xform = &aead;
+ } else {
+ if (td->ipsec_xform.direction ==
+ RTE_SECURITY_IPSEC_SA_DIR_EGRESS) {
+ sess_conf.crypto_xform = &cipher;
+ sess_conf.crypto_xform->type = RTE_CRYPTO_SYM_XFORM_CIPHER;
+ sess_conf.crypto_xform->next = &auth;
+ sess_conf.crypto_xform->next->type = RTE_CRYPTO_SYM_XFORM_AUTH;
+ } else {
+ sess_conf.crypto_xform = &auth;
+ sess_conf.crypto_xform->type = RTE_CRYPTO_SYM_XFORM_AUTH;
+ sess_conf.crypto_xform->next = &cipher;
+ sess_conf.crypto_xform->next->type = RTE_CRYPTO_SYM_XFORM_CIPHER;
+ }
+ }
+
+ /* Create Inline IPsec session. */
+ ret = create_inline_ipsec_session(td, port_id, &ses, &ctx,
+ &ol_flags, flags, &sess_conf);
+ if (ret)
+ return ret;
+
+ if (td->ipsec_xform.direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS) {
+ ret = create_default_flow(port_id);
+ if (ret)
+ goto out;
+ }
+ for (i = 0; i < nb_pkts; i++) {
+ tx_pkts_burst[i] = init_packet(mbufpool, td->input_text.data,
+ td->input_text.len);
+ if (tx_pkts_burst[i] == NULL) {
+ while (i--)
+ rte_pktmbuf_free(tx_pkts_burst[i]);
+ ret = TEST_FAILED;
+ goto out;
+ }
+
+ if (test_ipsec_pkt_update(rte_pktmbuf_mtod_offset(tx_pkts_burst[i],
+ uint8_t *, RTE_ETHER_HDR_LEN), flags)) {
+ while (i--)
+ rte_pktmbuf_free(tx_pkts_burst[i]);
+ ret = TEST_FAILED;
+ goto out;
+ }
+
+ if (td->ipsec_xform.direction == RTE_SECURITY_IPSEC_SA_DIR_EGRESS) {
+ if (ol_flags & RTE_SECURITY_TX_OLOAD_NEED_MDATA)
+ rte_security_set_pkt_metadata(ctx, ses,
+ tx_pkts_burst[i], NULL);
+ tx_pkts_burst[i]->ol_flags |= RTE_MBUF_F_TX_SEC_OFFLOAD;
+ }
+ }
+ /* Send packet to ethdev for inline IPsec processing. */
+ nb_sent = rte_eth_tx_burst(port_id, 0, tx_pkts_burst, nb_pkts);
+ if (nb_sent != nb_pkts) {
+ printf("\nUnable to TX %d packets", nb_pkts);
+ for ( ; nb_sent < nb_pkts; nb_sent++)
+ rte_pktmbuf_free(tx_pkts_burst[nb_sent]);
+ ret = TEST_FAILED;
+ goto out;
+ }
+
+ rte_pause();
+
+ /* Receive back packet on loopback interface. */
+ do {
+ rte_delay_ms(1);
+ nb_rx += rte_eth_rx_burst(port_id, 0, &rx_pkts_burst[nb_rx],
+ nb_sent - nb_rx);
+ if (nb_rx >= nb_sent)
+ break;
+ } while (j++ < 5 || nb_rx == 0);
+
+ if (nb_rx != nb_sent) {
+ printf("\nUnable to RX all %d packets", nb_sent);
+ while (--nb_rx)
+ rte_pktmbuf_free(rx_pkts_burst[nb_rx]);
+ ret = TEST_FAILED;
+ goto out;
+ }
+
+ for (i = 0; i < nb_rx; i++) {
+ rte_pktmbuf_adj(rx_pkts_burst[i], RTE_ETHER_HDR_LEN);
+
+ ret = test_ipsec_post_process(rx_pkts_burst[i], td,
+ res_d, silent, flags);
+ if (ret != TEST_SUCCESS) {
+ for ( ; i < nb_rx; i++)
+ rte_pktmbuf_free(rx_pkts_burst[i]);
+ goto out;
+ }
+
+ ret = test_ipsec_stats_verify(ctx, ses, flags,
+ td->ipsec_xform.direction);
+ if (ret != TEST_SUCCESS) {
+ for ( ; i < nb_rx; i++)
+ rte_pktmbuf_free(rx_pkts_burst[i]);
+ goto out;
+ }
+
+ rte_pktmbuf_free(rx_pkts_burst[i]);
+ rx_pkts_burst[i] = NULL;
+ }
+
+out:
+ if (td->ipsec_xform.direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS)
+ destroy_default_flow(port_id);
+
+ /* Destroy session so that other cases can create the session again */
+ rte_security_session_destroy(ctx, ses);
+ ses = NULL;
+
+ return ret;
+}
+
+static int
+ut_setup_inline_ipsec(void)
+{
+ int ret;
+
+ /* Start device */
+ ret = rte_eth_dev_start(port_id);
+ if (ret < 0) {
+ printf("rte_eth_dev_start: err=%d, port=%d\n",
+ ret, port_id);
+ return ret;
+ }
+ /* always enable promiscuous */
+ ret = rte_eth_promiscuous_enable(port_id);
+ if (ret != 0) {
+ printf("rte_eth_promiscuous_enable: err=%s, port=%d\n",
+ rte_strerror(-ret), port_id);
+ return ret;
+ }
+
+ check_all_ports_link_status(1, RTE_PORT_ALL);
+
+ return 0;
+}
+
+static void
+ut_teardown_inline_ipsec(void)
+{
+ uint16_t portid;
+ int ret;
+
+ /* port tear down */
+ RTE_ETH_FOREACH_DEV(portid) {
+ ret = rte_eth_dev_stop(portid);
+ if (ret != 0)
+ printf("rte_eth_dev_stop: err=%s, port=%u\n",
+ rte_strerror(-ret), portid);
+ }
+}
+
+static int
+inline_ipsec_testsuite_setup(void)
+{
+ uint16_t nb_rxd;
+ uint16_t nb_txd;
+ uint16_t nb_ports;
+ int ret;
+ uint16_t nb_rx_queue = 1, nb_tx_queue = 1;
+
+ printf("Start inline IPsec test.\n");
+
+ nb_ports = rte_eth_dev_count_avail();
+ if (nb_ports < NB_ETHPORTS_USED) {
+ printf("At least %u port(s) needed for test\n",
+ NB_ETHPORTS_USED);
+ return TEST_SKIPPED;
+ }
+
+ ret = init_mempools(NB_MBUF);
+ if (ret)
+ return ret;
+
+ if (tx_pkts_burst == NULL) {
+ tx_pkts_burst = (struct rte_mbuf **)rte_calloc("tx_buff",
+ MAX_TRAFFIC_BURST,
+ sizeof(void *),
+ RTE_CACHE_LINE_SIZE);
+ if (!tx_pkts_burst)
+ return TEST_FAILED;
+
+ rx_pkts_burst = (struct rte_mbuf **)rte_calloc("rx_buff",
+ MAX_TRAFFIC_BURST,
+ sizeof(void *),
+ RTE_CACHE_LINE_SIZE);
+ if (!rx_pkts_burst)
+ return TEST_FAILED;
+ }
+
+ printf("Generate %d packets\n", MAX_TRAFFIC_BURST);
+
+ nb_rxd = RTE_TEST_RX_DESC_DEFAULT;
+ nb_txd = RTE_TEST_TX_DESC_DEFAULT;
+
+ /* configuring port 0 for the test is enough */
+ port_id = 0;
+ /* port configure */
+ ret = rte_eth_dev_configure(port_id, nb_rx_queue,
+ nb_tx_queue, &port_conf);
+ if (ret < 0) {
+ printf("Cannot configure device: err=%d, port=%d\n",
+ ret, port_id);
+ return ret;
+ }
+ ret = rte_eth_macaddr_get(port_id, &ports_eth_addr[port_id]);
+ if (ret < 0) {
+ printf("Cannot get mac address: err=%d, port=%d\n",
+ ret, port_id);
+ return ret;
+ }
+ printf("Port %u ", port_id);
+ print_ethaddr("Address:", &ports_eth_addr[port_id]);
+ printf("\n");
+
+ /* tx queue setup */
+ ret = rte_eth_tx_queue_setup(port_id, 0, nb_txd,
+ SOCKET_ID_ANY, &tx_conf);
+ if (ret < 0) {
+ printf("rte_eth_tx_queue_setup: err=%d, port=%d\n",
+ ret, port_id);
+ return ret;
+ }
+ /* rx queue setup */
+ ret = rte_eth_rx_queue_setup(port_id, 0, nb_rxd, SOCKET_ID_ANY,
+ &rx_conf, mbufpool);
+ if (ret < 0) {
+ printf("rte_eth_rx_queue_setup: err=%d, port=%d\n",
+ ret, port_id);
+ return ret;
+ }
+ test_ipsec_alg_list_populate();
+
+ return 0;
+}
+
+static void
+inline_ipsec_testsuite_teardown(void)
+{
+ uint16_t portid;
+ int ret;
+
+ /* port tear down */
+ RTE_ETH_FOREACH_DEV(portid) {
+ ret = rte_eth_dev_reset(portid);
+ if (ret != 0)
+ printf("rte_eth_dev_reset: err=%s, port=%u\n",
+				rte_strerror(-ret), portid);
+ }
+}
+
+static int
+test_ipsec_inline_proto_known_vec(const void *test_data)
+{
+ struct ipsec_test_data td_outb;
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ memcpy(&td_outb, test_data, sizeof(td_outb));
+
+ if (td_outb.aead ||
+ td_outb.xform.chain.cipher.cipher.algo != RTE_CRYPTO_CIPHER_NULL) {
+ /* Disable IV gen to be able to test with known vectors */
+ td_outb.ipsec_xform.options.iv_gen_disable = 1;
+ }
+
+ return test_ipsec_inline_proto_process(&td_outb, NULL, 1,
+ false, &flags);
+}
+
+static struct unit_test_suite inline_ipsec_testsuite = {
+ .suite_name = "Inline IPsec Ethernet Device Unit Test Suite",
+ .setup = inline_ipsec_testsuite_setup,
+ .teardown = inline_ipsec_testsuite_teardown,
+ .unit_test_cases = {
+ TEST_CASE_NAMED_WITH_DATA(
+ "Outbound known vector (ESP tunnel mode IPv4 AES-GCM 128)",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_known_vec, &pkt_aes_128_gcm),
+ TEST_CASE_NAMED_WITH_DATA(
+ "Outbound known vector (ESP tunnel mode IPv4 AES-GCM 192)",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_known_vec, &pkt_aes_192_gcm),
+ TEST_CASE_NAMED_WITH_DATA(
+ "Outbound known vector (ESP tunnel mode IPv4 AES-GCM 256)",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_known_vec, &pkt_aes_256_gcm),
+ TEST_CASE_NAMED_WITH_DATA(
+ "Outbound known vector (ESP tunnel mode IPv4 AES-CBC 128 HMAC-SHA256 [16B ICV])",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_known_vec,
+ &pkt_aes_128_cbc_hmac_sha256),
+ TEST_CASE_NAMED_WITH_DATA(
+ "Outbound known vector (ESP tunnel mode IPv4 AES-CBC 128 HMAC-SHA384 [24B ICV])",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_known_vec,
+ &pkt_aes_128_cbc_hmac_sha384),
+ TEST_CASE_NAMED_WITH_DATA(
+ "Outbound known vector (ESP tunnel mode IPv4 AES-CBC 128 HMAC-SHA512 [32B ICV])",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_known_vec,
+ &pkt_aes_128_cbc_hmac_sha512),
+ TEST_CASE_NAMED_WITH_DATA(
+ "Outbound known vector (ESP tunnel mode IPv6 AES-GCM 128)",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_known_vec, &pkt_aes_256_gcm_v6),
+ TEST_CASE_NAMED_WITH_DATA(
+ "Outbound known vector (ESP tunnel mode IPv6 AES-CBC 128 HMAC-SHA256 [16B ICV])",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_known_vec,
+ &pkt_aes_128_cbc_hmac_sha256_v6),
+ TEST_CASE_NAMED_WITH_DATA(
+ "Outbound known vector (ESP tunnel mode IPv4 NULL AES-XCBC-MAC [12B ICV])",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_known_vec,
+ &pkt_null_aes_xcbc),
+
+ TEST_CASES_END() /**< NULL terminate unit test array */
+ },
+};
+
+
+static int
+test_inline_ipsec(void)
+{
+ return unit_test_suite_runner(&inline_ipsec_testsuite);
+}
+
+#endif /* !RTE_EXEC_ENV_WINDOWS */
+
+REGISTER_TEST_COMMAND(inline_ipsec_autotest, test_inline_ipsec);
diff --git a/app/test/test_security_inline_proto_vectors.h b/app/test/test_security_inline_proto_vectors.h
new file mode 100644
index 0000000000..d1074da36a
--- /dev/null
+++ b/app/test/test_security_inline_proto_vectors.h
@@ -0,0 +1,20 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2022 Marvell.
+ */
+#ifndef _TEST_INLINE_IPSEC_REASSEMBLY_VECTORS_H_
+#define _TEST_INLINE_IPSEC_REASSEMBLY_VECTORS_H_
+
+#include "test_cryptodev_security_ipsec.h"
+
+uint8_t dummy_ipv4_eth_hdr[] = {
+ /* ETH */
+ 0xf1, 0xf1, 0xf1, 0xf1, 0xf1, 0xf1,
+ 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0x08, 0x00,
+};
+uint8_t dummy_ipv6_eth_hdr[] = {
+ /* ETH */
+ 0xf1, 0xf1, 0xf1, 0xf1, 0xf1, 0xf1,
+ 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0x86, 0xdd,
+};
+
+#endif
diff --git a/doc/guides/rel_notes/release_22_07.rst b/doc/guides/rel_notes/release_22_07.rst
index 4ae91dd94d..b5ffbd8bca 100644
--- a/doc/guides/rel_notes/release_22_07.rst
+++ b/doc/guides/rel_notes/release_22_07.rst
@@ -70,6 +70,11 @@ New Features
* Added AH mode support in lookaside protocol (IPsec) for CN9K & CN10K.
* Added AES-GMAC support in lookaside protocol (IPsec) for CN9K & CN10K.
+* **Added security inline protocol (IPsec) tests in dpdk-test.**
+
+ Added various functional test cases in dpdk-test to verify
+  inline IPsec protocol offload using a loopback interface.
+
Removed Items
-------------
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* [PATCH v6 2/7] test/security: add inline inbound IPsec cases
2022-05-13 7:31 ` [PATCH v6 " Akhil Goyal
2022-05-13 7:31 ` [PATCH v6 1/7] app/test: add unit cases for inline IPsec offload Akhil Goyal
@ 2022-05-13 7:31 ` Akhil Goyal
2022-05-13 7:31 ` [PATCH v6 3/7] test/security: add combined mode inline " Akhil Goyal
` (5 subsequent siblings)
7 siblings, 0 replies; 184+ messages in thread
From: Akhil Goyal @ 2022-05-13 7:31 UTC (permalink / raw)
To: dev
Cc: thomas, david.marchand, hemant.agrawal, anoobj,
konstantin.ananyev, ciara.power, ferruh.yigit, andrew.rybchenko,
ndabilpuram, vattunuru, Akhil Goyal, Fan Zhang
Added test cases for inline inbound protocol offload
verification with known test vectors from lookaside mode.
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
---
app/test/test_security_inline_proto.c | 65 +++++++++++++++++++++++++++
1 file changed, 65 insertions(+)
diff --git a/app/test/test_security_inline_proto.c b/app/test/test_security_inline_proto.c
index 4b960ddfe0..4a95b25a0b 100644
--- a/app/test/test_security_inline_proto.c
+++ b/app/test/test_security_inline_proto.c
@@ -824,6 +824,24 @@ test_ipsec_inline_proto_known_vec(const void *test_data)
false, &flags);
}
+static int
+test_ipsec_inline_proto_known_vec_inb(const void *test_data)
+{
+ const struct ipsec_test_data *td = test_data;
+ struct ipsec_test_flags flags;
+ struct ipsec_test_data td_inb;
+
+ memset(&flags, 0, sizeof(flags));
+
+ if (td->ipsec_xform.direction == RTE_SECURITY_IPSEC_SA_DIR_EGRESS)
+ test_ipsec_td_in_from_out(td, &td_inb);
+ else
+ memcpy(&td_inb, td, sizeof(td_inb));
+
+ return test_ipsec_inline_proto_process(&td_inb, NULL, 1, false, &flags);
+}
+
+
static struct unit_test_suite inline_ipsec_testsuite = {
.suite_name = "Inline IPsec Ethernet Device Unit Test Suite",
.setup = inline_ipsec_testsuite_setup,
@@ -870,6 +888,53 @@ static struct unit_test_suite inline_ipsec_testsuite = {
ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
test_ipsec_inline_proto_known_vec,
&pkt_null_aes_xcbc),
+ TEST_CASE_NAMED_WITH_DATA(
+ "Inbound known vector (ESP tunnel mode IPv4 AES-GCM 128)",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_known_vec_inb, &pkt_aes_128_gcm),
+ TEST_CASE_NAMED_WITH_DATA(
+ "Inbound known vector (ESP tunnel mode IPv4 AES-GCM 192)",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_known_vec_inb, &pkt_aes_192_gcm),
+ TEST_CASE_NAMED_WITH_DATA(
+ "Inbound known vector (ESP tunnel mode IPv4 AES-GCM 256)",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_known_vec_inb, &pkt_aes_256_gcm),
+ TEST_CASE_NAMED_WITH_DATA(
+ "Inbound known vector (ESP tunnel mode IPv4 AES-CBC 128)",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_known_vec_inb, &pkt_aes_128_cbc_null),
+ TEST_CASE_NAMED_WITH_DATA(
+ "Inbound known vector (ESP tunnel mode IPv4 AES-CBC 128 HMAC-SHA256 [16B ICV])",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_known_vec_inb,
+ &pkt_aes_128_cbc_hmac_sha256),
+ TEST_CASE_NAMED_WITH_DATA(
+ "Inbound known vector (ESP tunnel mode IPv4 AES-CBC 128 HMAC-SHA384 [24B ICV])",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_known_vec_inb,
+ &pkt_aes_128_cbc_hmac_sha384),
+ TEST_CASE_NAMED_WITH_DATA(
+ "Inbound known vector (ESP tunnel mode IPv4 AES-CBC 128 HMAC-SHA512 [32B ICV])",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_known_vec_inb,
+ &pkt_aes_128_cbc_hmac_sha512),
+ TEST_CASE_NAMED_WITH_DATA(
+ "Inbound known vector (ESP tunnel mode IPv6 AES-GCM 128)",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_known_vec_inb, &pkt_aes_256_gcm_v6),
+ TEST_CASE_NAMED_WITH_DATA(
+ "Inbound known vector (ESP tunnel mode IPv6 AES-CBC 128 HMAC-SHA256 [16B ICV])",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_known_vec_inb,
+ &pkt_aes_128_cbc_hmac_sha256_v6),
+ TEST_CASE_NAMED_WITH_DATA(
+ "Inbound known vector (ESP tunnel mode IPv4 NULL AES-XCBC-MAC [12B ICV])",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_known_vec_inb,
+ &pkt_null_aes_xcbc),
+
+
TEST_CASES_END() /**< NULL terminate unit test array */
},
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
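The inbound cases above reuse the known outbound vectors by flipping the SA direction and swapping the input/output text, which is what test_ipsec_td_in_from_out() does before test_ipsec_inline_proto_process() runs the decap path. As a rough standalone sketch of that derivation (the structures here are simplified stand-ins, not the real ipsec_test_data, which also carries the crypto xforms and SA options):

```c
#include <assert.h>
#include <string.h>

/* Hypothetical, trimmed-down mirror of the test vector layout. */
enum sa_dir { DIR_EGRESS, DIR_INGRESS };

struct test_data {
	enum sa_dir direction;
	unsigned char input_text[64];	/* plain text on the egress side */
	unsigned char output_text[64];	/* cipher text on the egress side */
	unsigned int input_len;
	unsigned int output_len;
};

/* Derive an inbound vector from an outbound one: flip the SA direction
 * and swap input/output so the known cipher text becomes the inbound
 * input and the plain text becomes the expected result. */
static void td_in_from_out(const struct test_data *out, struct test_data *in)
{
	*in = *out;
	in->direction = DIR_INGRESS;
	memcpy(in->input_text, out->output_text, out->output_len);
	in->input_len = out->output_len;
	memcpy(in->output_text, out->input_text, out->input_len);
	in->output_len = out->input_len;
}
```

This way a single set of lookaside-mode vectors exercises both the encap and the decap direction of the inline path.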
* [PATCH v6 3/7] test/security: add combined mode inline IPsec cases
2022-05-13 7:31 ` [PATCH v6 " Akhil Goyal
2022-05-13 7:31 ` [PATCH v6 1/7] app/test: add unit cases for inline IPsec offload Akhil Goyal
2022-05-13 7:31 ` [PATCH v6 2/7] test/security: add inline inbound IPsec cases Akhil Goyal
@ 2022-05-13 7:31 ` Akhil Goyal
2022-05-13 7:31 ` [PATCH v6 4/7] test/security: add inline IPsec reassembly cases Akhil Goyal
` (4 subsequent siblings)
7 siblings, 0 replies; 184+ messages in thread
From: Akhil Goyal @ 2022-05-13 7:31 UTC (permalink / raw)
To: dev
Cc: thomas, david.marchand, hemant.agrawal, anoobj,
konstantin.ananyev, ciara.power, ferruh.yigit, andrew.rybchenko,
ndabilpuram, vattunuru, Akhil Goyal, Fan Zhang
Added combined encap and decap test cases for various algorithm
combinations.
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
---
app/test/test_security_inline_proto.c | 102 ++++++++++++++++++++++++++
1 file changed, 102 insertions(+)
diff --git a/app/test/test_security_inline_proto.c b/app/test/test_security_inline_proto.c
index 4a95b25a0b..a44a4f9b04 100644
--- a/app/test/test_security_inline_proto.c
+++ b/app/test/test_security_inline_proto.c
@@ -665,6 +665,92 @@ test_ipsec_inline_proto_process(struct ipsec_test_data *td,
return ret;
}
+static int
+test_ipsec_inline_proto_all(const struct ipsec_test_flags *flags)
+{
+ struct ipsec_test_data td_outb;
+ struct ipsec_test_data td_inb;
+ unsigned int i, nb_pkts = 1, pass_cnt = 0, fail_cnt = 0;
+ int ret;
+
+ if (flags->iv_gen || flags->sa_expiry_pkts_soft ||
+ flags->sa_expiry_pkts_hard)
+ nb_pkts = IPSEC_TEST_PACKETS_MAX;
+
+ for (i = 0; i < RTE_DIM(alg_list); i++) {
+ test_ipsec_td_prepare(alg_list[i].param1,
+ alg_list[i].param2,
+ flags, &td_outb, 1);
+
+ if (!td_outb.aead) {
+ enum rte_crypto_cipher_algorithm cipher_alg;
+ enum rte_crypto_auth_algorithm auth_alg;
+
+ cipher_alg = td_outb.xform.chain.cipher.cipher.algo;
+ auth_alg = td_outb.xform.chain.auth.auth.algo;
+
+ if (td_outb.aes_gmac && cipher_alg != RTE_CRYPTO_CIPHER_NULL)
+ continue;
+
+ /* ICV is not applicable for NULL auth */
+ if (flags->icv_corrupt &&
+ auth_alg == RTE_CRYPTO_AUTH_NULL)
+ continue;
+
+ /* IV is not applicable for NULL cipher */
+ if (flags->iv_gen &&
+ cipher_alg == RTE_CRYPTO_CIPHER_NULL)
+ continue;
+ }
+
+ if (flags->udp_encap)
+ td_outb.ipsec_xform.options.udp_encap = 1;
+
+ ret = test_ipsec_inline_proto_process(&td_outb, &td_inb, nb_pkts,
+ false, flags);
+ if (ret == TEST_SKIPPED)
+ continue;
+
+ if (ret == TEST_FAILED) {
+ printf("\n TEST FAILED");
+ test_ipsec_display_alg(alg_list[i].param1,
+ alg_list[i].param2);
+ fail_cnt++;
+ continue;
+ }
+
+ test_ipsec_td_update(&td_inb, &td_outb, 1, flags);
+
+ ret = test_ipsec_inline_proto_process(&td_inb, NULL, nb_pkts,
+ false, flags);
+ if (ret == TEST_SKIPPED)
+ continue;
+
+ if (ret == TEST_FAILED) {
+ printf("\n TEST FAILED");
+ test_ipsec_display_alg(alg_list[i].param1,
+ alg_list[i].param2);
+ fail_cnt++;
+ continue;
+ }
+
+ if (flags->display_alg)
+ test_ipsec_display_alg(alg_list[i].param1,
+ alg_list[i].param2);
+
+ pass_cnt++;
+ }
+
+ printf("Tests passed: %d, failed: %d", pass_cnt, fail_cnt);
+ if (fail_cnt > 0)
+ return TEST_FAILED;
+ if (pass_cnt > 0)
+ return TEST_SUCCESS;
+ else
+ return TEST_SKIPPED;
+}
+
+
static int
ut_setup_inline_ipsec(void)
{
@@ -841,6 +927,17 @@ test_ipsec_inline_proto_known_vec_inb(const void *test_data)
return test_ipsec_inline_proto_process(&td_inb, NULL, 1, false, &flags);
}
+static int
+test_ipsec_inline_proto_display_list(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.display_alg = true;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
static struct unit_test_suite inline_ipsec_testsuite = {
.suite_name = "Inline IPsec Ethernet Device Unit Test Suite",
@@ -934,6 +1031,11 @@ static struct unit_test_suite inline_ipsec_testsuite = {
test_ipsec_inline_proto_known_vec_inb,
&pkt_null_aes_xcbc),
+ TEST_CASE_NAMED_ST(
+ "Combined test alg list",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_display_list),
+
TEST_CASES_END() /**< NULL terminate unit test array */
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
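The combined-mode loop in this patch runs each algorithm's outbound (encap) pass first, updates the inbound vector from the captured result via test_ipsec_td_update(), and then expects the inbound (decap) pass to restore the original payload. A toy self-contained sketch of that round-trip invariant, with a self-inverse XOR transform standing in for the real inline cipher (all names here are illustrative, not DPDK API):

```c
#include <assert.h>
#include <stddef.h>

/* Toy self-inverse transform: applying it twice with the same key is a
 * no-op, like encap followed by decap on a correctly paired SA. */
static void xform(unsigned char *buf, size_t len, unsigned char key)
{
	for (size_t i = 0; i < len; i++)
		buf[i] ^= key;
}

/* Encap then decap a copy of the packet and check the payload survived,
 * mirroring the per-algorithm pass/fail decision in the combined loop. */
static int round_trip_ok(const unsigned char *pkt, size_t len,
			 unsigned char key)
{
	unsigned char work[64];
	size_t i;

	if (len > sizeof(work))
		return 0;
	for (i = 0; i < len; i++)
		work[i] = pkt[i];
	xform(work, len, key);	/* outbound (encap) */
	xform(work, len, key);	/* inbound (decap) */
	for (i = 0; i < len; i++)
		if (work[i] != pkt[i])
			return 0;
	return 1;
}
```

The real test additionally skips inapplicable flag/algorithm combinations (e.g. ICV corruption with NULL auth) and tallies pass/fail counts across alg_list.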
* [PATCH v6 4/7] test/security: add inline IPsec reassembly cases
2022-05-13 7:31 ` [PATCH v6 " Akhil Goyal
` (2 preceding siblings ...)
2022-05-13 7:31 ` [PATCH v6 3/7] test/security: add combined mode inline " Akhil Goyal
@ 2022-05-13 7:31 ` Akhil Goyal
2022-05-13 7:31 ` [PATCH v6 5/7] test/security: add more inline IPsec functional cases Akhil Goyal
` (3 subsequent siblings)
7 siblings, 0 replies; 184+ messages in thread
From: Akhil Goyal @ 2022-05-13 7:31 UTC (permalink / raw)
To: dev
Cc: thomas, david.marchand, hemant.agrawal, anoobj,
konstantin.ananyev, ciara.power, ferruh.yigit, andrew.rybchenko,
ndabilpuram, vattunuru, Akhil Goyal, Fan Zhang
Added unit test cases for IP reassembly of inline IPsec
inbound scenarios.
In these cases, known test vectors of fragments are first
processed for inline outbound processing and then received
back on loopback interface for inbound processing along with
IP reassembly of the corresponding decrypted packets.
The resulting plain text reassembled packet is compared with the
original unfragmented packet.
In this patch, cases are added for 2/4/5 fragments for both
IPv4 and IPv6 packets. A few negative test cases are also added,
such as incomplete, out-of-order, and duplicate fragments.
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
---
app/test/test_security_inline_proto.c | 423 ++++++++++-
app/test/test_security_inline_proto_vectors.h | 684 ++++++++++++++++++
2 files changed, 1106 insertions(+), 1 deletion(-)
diff --git a/app/test/test_security_inline_proto.c b/app/test/test_security_inline_proto.c
index a44a4f9b04..2aa7072512 100644
--- a/app/test/test_security_inline_proto.c
+++ b/app/test/test_security_inline_proto.c
@@ -41,6 +41,9 @@ test_inline_ipsec(void)
#define MAX_TRAFFIC_BURST 2048
#define NB_MBUF 10240
+#define ENCAP_DECAP_BURST_SZ 33
+#define APP_REASS_TIMEOUT 10
+
extern struct ipsec_test_data pkt_aes_128_gcm;
extern struct ipsec_test_data pkt_aes_192_gcm;
extern struct ipsec_test_data pkt_aes_256_gcm;
@@ -94,6 +97,8 @@ uint16_t port_id;
static uint64_t link_mbps;
+static int ip_reassembly_dynfield_offset = -1;
+
static struct rte_flow *default_flow[RTE_MAX_ETHPORTS];
/* Create Inline IPsec session */
@@ -530,6 +535,349 @@ destroy_default_flow(uint16_t portid)
struct rte_mbuf **tx_pkts_burst;
struct rte_mbuf **rx_pkts_burst;
+static int
+compare_pkt_data(struct rte_mbuf *m, uint8_t *ref, unsigned int tot_len)
+{
+ unsigned int len;
+ unsigned int nb_segs = m->nb_segs;
+ unsigned int matched = 0;
+ struct rte_mbuf *save = m;
+
+ while (m) {
+ len = tot_len;
+ if (len > m->data_len)
+ len = m->data_len;
+ if (len != 0) {
+ if (memcmp(rte_pktmbuf_mtod(m, char *),
+ ref + matched, len)) {
+ printf("\n====Reassembly case failed: Data Mismatch");
+ rte_hexdump(stdout, "Reassembled",
+ rte_pktmbuf_mtod(m, char *),
+ len);
+ rte_hexdump(stdout, "reference",
+ ref + matched,
+ len);
+ return TEST_FAILED;
+ }
+ }
+ tot_len -= len;
+ matched += len;
+ m = m->next;
+ }
+
+ if (tot_len) {
+ printf("\n====Reassembly case failed: Data Missing %u",
+ tot_len);
+ printf("\n====nb_segs %u, tot_len %u", nb_segs, tot_len);
+ rte_pktmbuf_dump(stderr, save, -1);
+ return TEST_FAILED;
+ }
+ return TEST_SUCCESS;
+}
+
+static inline bool
+is_ip_reassembly_incomplete(struct rte_mbuf *mbuf)
+{
+ static uint64_t ip_reassembly_dynflag;
+ int ip_reassembly_dynflag_offset;
+
+ if (ip_reassembly_dynflag == 0) {
+ ip_reassembly_dynflag_offset = rte_mbuf_dynflag_lookup(
+ RTE_MBUF_DYNFLAG_IP_REASSEMBLY_INCOMPLETE_NAME, NULL);
+ if (ip_reassembly_dynflag_offset < 0)
+ return false;
+ ip_reassembly_dynflag = RTE_BIT64(ip_reassembly_dynflag_offset);
+ }
+
+ return (mbuf->ol_flags & ip_reassembly_dynflag) != 0;
+}
+
+static void
+free_mbuf(struct rte_mbuf *mbuf)
+{
+ rte_eth_ip_reassembly_dynfield_t dynfield;
+
+ if (!mbuf)
+ return;
+
+ if (!is_ip_reassembly_incomplete(mbuf)) {
+ rte_pktmbuf_free(mbuf);
+ } else {
+ if (ip_reassembly_dynfield_offset < 0)
+ return;
+
+ while (mbuf) {
+ dynfield = *RTE_MBUF_DYNFIELD(mbuf,
+ ip_reassembly_dynfield_offset,
+ rte_eth_ip_reassembly_dynfield_t *);
+ rte_pktmbuf_free(mbuf);
+ mbuf = dynfield.next_frag;
+ }
+ }
+}
+
+
+static int
+get_and_verify_incomplete_frags(struct rte_mbuf *mbuf,
+ struct reassembly_vector *vector)
+{
+ rte_eth_ip_reassembly_dynfield_t *dynfield[MAX_PKT_BURST];
+ int j = 0, ret;
+ /**
+ * IP reassembly offload is incomplete, and fragments are listed in
+ * dynfield which can be reassembled in SW.
+ */
+ printf("\nHW IP Reassembly is not complete; attempt SW IP Reassembly,"
+ "\nMatching with original frags.");
+
+ if (ip_reassembly_dynfield_offset < 0)
+ return -1;
+
+ printf("\ncomparing frag: %d", j);
+ /* Skip Ethernet header comparison */
+ rte_pktmbuf_adj(mbuf, RTE_ETHER_HDR_LEN);
+ ret = compare_pkt_data(mbuf, vector->frags[j]->data,
+ vector->frags[j]->len);
+ if (ret)
+ return ret;
+ j++;
+ dynfield[j] = RTE_MBUF_DYNFIELD(mbuf, ip_reassembly_dynfield_offset,
+ rte_eth_ip_reassembly_dynfield_t *);
+ printf("\ncomparing frag: %d", j);
+ /* Skip Ethernet header comparison */
+ rte_pktmbuf_adj(dynfield[j]->next_frag, RTE_ETHER_HDR_LEN);
+ ret = compare_pkt_data(dynfield[j]->next_frag, vector->frags[j]->data,
+ vector->frags[j]->len);
+ if (ret)
+ return ret;
+
+ while ((dynfield[j]->nb_frags > 1) &&
+ is_ip_reassembly_incomplete(dynfield[j]->next_frag)) {
+ j++;
+ dynfield[j] = RTE_MBUF_DYNFIELD(dynfield[j-1]->next_frag,
+ ip_reassembly_dynfield_offset,
+ rte_eth_ip_reassembly_dynfield_t *);
+ printf("\ncomparing frag: %d", j);
+ /* Skip Ethernet header comparison */
+ rte_pktmbuf_adj(dynfield[j]->next_frag, RTE_ETHER_HDR_LEN);
+ ret = compare_pkt_data(dynfield[j]->next_frag,
+ vector->frags[j]->data, vector->frags[j]->len);
+ if (ret)
+ return ret;
+ }
+ return ret;
+}
+
+static int
+test_ipsec_with_reassembly(struct reassembly_vector *vector,
+ const struct ipsec_test_flags *flags)
+{
+ struct rte_security_session *out_ses[ENCAP_DECAP_BURST_SZ] = {0};
+ struct rte_security_session *in_ses[ENCAP_DECAP_BURST_SZ] = {0};
+ struct rte_eth_ip_reassembly_params reass_capa = {0};
+ struct rte_security_session_conf sess_conf_out = {0};
+ struct rte_security_session_conf sess_conf_in = {0};
+ unsigned int nb_tx, burst_sz, nb_sent = 0;
+ struct rte_crypto_sym_xform cipher_out = {0};
+ struct rte_crypto_sym_xform auth_out = {0};
+ struct rte_crypto_sym_xform aead_out = {0};
+ struct rte_crypto_sym_xform cipher_in = {0};
+ struct rte_crypto_sym_xform auth_in = {0};
+ struct rte_crypto_sym_xform aead_in = {0};
+ struct ipsec_test_data sa_data = {0};
+ struct rte_security_ctx *ctx;
+ unsigned int i, nb_rx = 0, j;
+ uint32_t ol_flags;
+ int ret = 0;
+
+ burst_sz = vector->burst ? ENCAP_DECAP_BURST_SZ : 1;
+ nb_tx = vector->nb_frags * burst_sz;
+
+	ret = rte_eth_dev_stop(port_id);
+ if (ret != 0) {
+ printf("rte_eth_dev_stop: err=%s, port=%u\n",
+ rte_strerror(-ret), port_id);
+ return ret;
+ }
+ rte_eth_ip_reassembly_capability_get(port_id, &reass_capa);
+ if (reass_capa.max_frags < vector->nb_frags)
+ return TEST_SKIPPED;
+ if (reass_capa.timeout_ms > APP_REASS_TIMEOUT) {
+ reass_capa.timeout_ms = APP_REASS_TIMEOUT;
+ rte_eth_ip_reassembly_conf_set(port_id, &reass_capa);
+ }
+
+ ret = rte_eth_dev_start(port_id);
+ if (ret < 0) {
+ printf("rte_eth_dev_start: err=%d, port=%d\n",
+ ret, port_id);
+ return ret;
+ }
+
+ memset(tx_pkts_burst, 0, sizeof(tx_pkts_burst[0]) * nb_tx);
+ memset(rx_pkts_burst, 0, sizeof(rx_pkts_burst[0]) * nb_tx);
+
+ for (i = 0; i < nb_tx; i += vector->nb_frags) {
+ for (j = 0; j < vector->nb_frags; j++) {
+ tx_pkts_burst[i+j] = init_packet(mbufpool,
+ vector->frags[j]->data,
+ vector->frags[j]->len);
+ if (tx_pkts_burst[i+j] == NULL) {
+ ret = -1;
+				printf("\n packet init failed\n");
+ goto out;
+ }
+ }
+ }
+
+ for (i = 0; i < burst_sz; i++) {
+ memcpy(&sa_data, vector->sa_data,
+ sizeof(struct ipsec_test_data));
+ /* Update SPI for every new SA */
+ sa_data.ipsec_xform.spi += i;
+ sa_data.ipsec_xform.direction =
+ RTE_SECURITY_IPSEC_SA_DIR_EGRESS;
+ if (sa_data.aead) {
+ sess_conf_out.crypto_xform = &aead_out;
+ } else {
+ sess_conf_out.crypto_xform = &cipher_out;
+ sess_conf_out.crypto_xform->next = &auth_out;
+ }
+
+ /* Create Inline IPsec outbound session. */
+ ret = create_inline_ipsec_session(&sa_data, port_id,
+ &out_ses[i], &ctx, &ol_flags, flags,
+ &sess_conf_out);
+ if (ret) {
+ printf("\nInline outbound session create failed\n");
+ goto out;
+ }
+ }
+
+ j = 0;
+ for (i = 0; i < nb_tx; i++) {
+ if (ol_flags & RTE_SECURITY_TX_OLOAD_NEED_MDATA)
+ rte_security_set_pkt_metadata(ctx,
+ out_ses[j], tx_pkts_burst[i], NULL);
+ tx_pkts_burst[i]->ol_flags |= RTE_MBUF_F_TX_SEC_OFFLOAD;
+
+ /* Move to next SA after nb_frags */
+ if ((i + 1) % vector->nb_frags == 0)
+ j++;
+ }
+
+ for (i = 0; i < burst_sz; i++) {
+ memcpy(&sa_data, vector->sa_data,
+ sizeof(struct ipsec_test_data));
+ /* Update SPI for every new SA */
+ sa_data.ipsec_xform.spi += i;
+ sa_data.ipsec_xform.direction =
+ RTE_SECURITY_IPSEC_SA_DIR_INGRESS;
+
+ if (sa_data.aead) {
+ sess_conf_in.crypto_xform = &aead_in;
+ } else {
+ sess_conf_in.crypto_xform = &auth_in;
+ sess_conf_in.crypto_xform->next = &cipher_in;
+ }
+ /* Create Inline IPsec inbound session. */
+ ret = create_inline_ipsec_session(&sa_data, port_id, &in_ses[i],
+ &ctx, &ol_flags, flags, &sess_conf_in);
+ if (ret) {
+ printf("\nInline inbound session create failed\n");
+ goto out;
+ }
+ }
+
+ /* Retrieve reassembly dynfield offset if available */
+ if (ip_reassembly_dynfield_offset < 0 && vector->nb_frags > 1)
+ ip_reassembly_dynfield_offset = rte_mbuf_dynfield_lookup(
+ RTE_MBUF_DYNFIELD_IP_REASSEMBLY_NAME, NULL);
+
+
+ ret = create_default_flow(port_id);
+ if (ret)
+ goto out;
+
+ nb_sent = rte_eth_tx_burst(port_id, 0, tx_pkts_burst, nb_tx);
+ if (nb_sent != nb_tx) {
+ ret = -1;
+ printf("\nFailed to tx %u pkts", nb_tx);
+ goto out;
+ }
+
+ rte_delay_ms(1);
+
+	/* Retry a few times before giving up */
+ nb_rx = 0;
+ j = 0;
+ do {
+ nb_rx += rte_eth_rx_burst(port_id, 0, &rx_pkts_burst[nb_rx],
+ nb_tx - nb_rx);
+ j++;
+ if (nb_rx >= nb_tx)
+ break;
+ rte_delay_ms(1);
+	} while (j < 5);
+
+ /* Check for minimum number of Rx packets expected */
+ if ((vector->nb_frags == 1 && nb_rx != nb_tx) ||
+ (vector->nb_frags > 1 && nb_rx < burst_sz)) {
+		printf("\nreceived fewer Rx pkts (%u) than expected\n", nb_rx);
+ ret = TEST_FAILED;
+ goto out;
+ }
+
+ for (i = 0; i < nb_rx; i++) {
+ if (vector->nb_frags > 1 &&
+ is_ip_reassembly_incomplete(rx_pkts_burst[i])) {
+ ret = get_and_verify_incomplete_frags(rx_pkts_burst[i],
+ vector);
+ if (ret != TEST_SUCCESS)
+ break;
+ continue;
+ }
+
+ if (rx_pkts_burst[i]->ol_flags &
+ RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED ||
+ !(rx_pkts_burst[i]->ol_flags & RTE_MBUF_F_RX_SEC_OFFLOAD)) {
+ printf("\nsecurity offload failed\n");
+ ret = TEST_FAILED;
+ break;
+ }
+
+ if (vector->full_pkt->len + RTE_ETHER_HDR_LEN !=
+ rx_pkts_burst[i]->pkt_len) {
+ printf("\nreassembled/decrypted packet length mismatch\n");
+ ret = TEST_FAILED;
+ break;
+ }
+ rte_pktmbuf_adj(rx_pkts_burst[i], RTE_ETHER_HDR_LEN);
+ ret = compare_pkt_data(rx_pkts_burst[i],
+ vector->full_pkt->data,
+ vector->full_pkt->len);
+ if (ret != TEST_SUCCESS)
+ break;
+ }
+
+out:
+ destroy_default_flow(port_id);
+
+ /* Clear session data. */
+ for (i = 0; i < burst_sz; i++) {
+ if (out_ses[i])
+ rte_security_session_destroy(ctx, out_ses[i]);
+ if (in_ses[i])
+ rte_security_session_destroy(ctx, in_ses[i]);
+ }
+
+ for (i = nb_sent; i < nb_tx; i++)
+ free_mbuf(tx_pkts_burst[i]);
+ for (i = 0; i < nb_rx; i++)
+ free_mbuf(rx_pkts_burst[i]);
+ return ret;
+}
+
static int
test_ipsec_inline_proto_process(struct ipsec_test_data *td,
struct ipsec_test_data *res_d,
@@ -779,6 +1127,7 @@ ut_setup_inline_ipsec(void)
static void
ut_teardown_inline_ipsec(void)
{
+ struct rte_eth_ip_reassembly_params reass_conf = {0};
uint16_t portid;
int ret;
@@ -788,6 +1137,9 @@ ut_teardown_inline_ipsec(void)
if (ret != 0)
printf("rte_eth_dev_stop: err=%s, port=%u\n",
rte_strerror(-ret), portid);
+
+ /* Clear reassembly configuration */
+ rte_eth_ip_reassembly_conf_set(portid, &reass_conf);
}
}
@@ -890,6 +1242,36 @@ inline_ipsec_testsuite_teardown(void)
}
}
+static int
+test_inline_ip_reassembly(const void *testdata)
+{
+ struct reassembly_vector reassembly_td = {0};
+ const struct reassembly_vector *td = testdata;
+ struct ip_reassembly_test_packet full_pkt;
+ struct ip_reassembly_test_packet frags[MAX_FRAGS];
+ struct ipsec_test_flags flags = {0};
+ int i = 0;
+
+ reassembly_td.sa_data = td->sa_data;
+ reassembly_td.nb_frags = td->nb_frags;
+ reassembly_td.burst = td->burst;
+
+ memcpy(&full_pkt, td->full_pkt,
+ sizeof(struct ip_reassembly_test_packet));
+ reassembly_td.full_pkt = &full_pkt;
+
+ test_vector_payload_populate(reassembly_td.full_pkt, true);
+ for (; i < reassembly_td.nb_frags; i++) {
+ memcpy(&frags[i], td->frags[i],
+ sizeof(struct ip_reassembly_test_packet));
+ reassembly_td.frags[i] = &frags[i];
+ test_vector_payload_populate(reassembly_td.frags[i],
+ (i == 0) ? true : false);
+ }
+
+ return test_ipsec_with_reassembly(&reassembly_td, &flags);
+}
+
static int
test_ipsec_inline_proto_known_vec(const void *test_data)
{
@@ -1036,7 +1418,46 @@ static struct unit_test_suite inline_ipsec_testsuite = {
ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
test_ipsec_inline_proto_display_list),
-
+ TEST_CASE_NAMED_WITH_DATA(
+ "IPv4 Reassembly with 2 fragments",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_inline_ip_reassembly, &ipv4_2frag_vector),
+ TEST_CASE_NAMED_WITH_DATA(
+ "IPv6 Reassembly with 2 fragments",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_inline_ip_reassembly, &ipv6_2frag_vector),
+ TEST_CASE_NAMED_WITH_DATA(
+ "IPv4 Reassembly with 4 fragments",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_inline_ip_reassembly, &ipv4_4frag_vector),
+ TEST_CASE_NAMED_WITH_DATA(
+ "IPv6 Reassembly with 4 fragments",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_inline_ip_reassembly, &ipv6_4frag_vector),
+ TEST_CASE_NAMED_WITH_DATA(
+ "IPv4 Reassembly with 5 fragments",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_inline_ip_reassembly, &ipv4_5frag_vector),
+ TEST_CASE_NAMED_WITH_DATA(
+ "IPv6 Reassembly with 5 fragments",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_inline_ip_reassembly, &ipv6_5frag_vector),
+ TEST_CASE_NAMED_WITH_DATA(
+ "IPv4 Reassembly with incomplete fragments",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_inline_ip_reassembly, &ipv4_incomplete_vector),
+ TEST_CASE_NAMED_WITH_DATA(
+ "IPv4 Reassembly with overlapping fragments",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_inline_ip_reassembly, &ipv4_overlap_vector),
+ TEST_CASE_NAMED_WITH_DATA(
+ "IPv4 Reassembly with out of order fragments",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_inline_ip_reassembly, &ipv4_out_of_order_vector),
+ TEST_CASE_NAMED_WITH_DATA(
+ "IPv4 Reassembly with burst of 4 fragments",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_inline_ip_reassembly, &ipv4_4frag_burst_vector),
TEST_CASES_END() /**< NULL terminate unit test array */
},
diff --git a/app/test/test_security_inline_proto_vectors.h b/app/test/test_security_inline_proto_vectors.h
index d1074da36a..c18965d80f 100644
--- a/app/test/test_security_inline_proto_vectors.h
+++ b/app/test/test_security_inline_proto_vectors.h
@@ -17,4 +17,688 @@ uint8_t dummy_ipv6_eth_hdr[] = {
0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0x86, 0xdd,
};
+#define MAX_FRAG_LEN 1500
+#define MAX_FRAGS 6
+#define MAX_PKT_LEN (MAX_FRAG_LEN * MAX_FRAGS)
+
+struct ip_reassembly_test_packet {
+ uint32_t len;
+ uint32_t l4_offset;
+ uint8_t data[MAX_PKT_LEN];
+};
+
+struct reassembly_vector {
+ /* input/output text in struct ipsec_test_data are not used */
+ struct ipsec_test_data *sa_data;
+ struct ip_reassembly_test_packet *full_pkt;
+ struct ip_reassembly_test_packet *frags[MAX_FRAGS];
+ uint16_t nb_frags;
+ bool burst;
+};
+
+/* The source file includes below test vectors */
+/* IPv6:
+ *
+ * 1) pkt_ipv6_udp_p1
+ * pkt_ipv6_udp_p1_f1
+ * pkt_ipv6_udp_p1_f2
+ *
+ * 2) pkt_ipv6_udp_p2
+ * pkt_ipv6_udp_p2_f1
+ * pkt_ipv6_udp_p2_f2
+ * pkt_ipv6_udp_p2_f3
+ * pkt_ipv6_udp_p2_f4
+ *
+ * 3) pkt_ipv6_udp_p3
+ * pkt_ipv6_udp_p3_f1
+ * pkt_ipv6_udp_p3_f2
+ * pkt_ipv6_udp_p3_f3
+ * pkt_ipv6_udp_p3_f4
+ * pkt_ipv6_udp_p3_f5
+ */
+
+/* IPv4:
+ *
+ * 1) pkt_ipv4_udp_p1
+ * pkt_ipv4_udp_p1_f1
+ * pkt_ipv4_udp_p1_f2
+ *
+ * 2) pkt_ipv4_udp_p2
+ * pkt_ipv4_udp_p2_f1
+ * pkt_ipv4_udp_p2_f2
+ * pkt_ipv4_udp_p2_f3
+ * pkt_ipv4_udp_p2_f4
+ *
+ * 3) pkt_ipv4_udp_p3
+ * pkt_ipv4_udp_p3_f1
+ * pkt_ipv4_udp_p3_f2
+ * pkt_ipv4_udp_p3_f3
+ * pkt_ipv4_udp_p3_f4
+ * pkt_ipv4_udp_p3_f5
+ */
+
+struct ip_reassembly_test_packet pkt_ipv6_udp_p1 = {
+ .len = 1500,
+ .l4_offset = 40,
+ .data = {
+ /* IP */
+ 0x60, 0x00, 0x00, 0x00, 0x05, 0xb4, 0x2C, 0x40,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x0d, 0x00, 0x00, 0x02,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x02, 0x00, 0x00, 0x02,
+
+ /* UDP */
+ 0x08, 0x00, 0x27, 0x10, 0x05, 0xb4, 0x2b, 0xe8,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv6_udp_p1_f1 = {
+ .len = 1384,
+ .l4_offset = 48,
+ .data = {
+ /* IP */
+ 0x60, 0x00, 0x00, 0x00, 0x05, 0x40, 0x2c, 0x40,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x0d, 0x00, 0x00, 0x02,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x02, 0x00, 0x00, 0x02,
+ 0x11, 0x00, 0x00, 0x01, 0x5c, 0x92, 0xac, 0xf1,
+
+ /* UDP */
+ 0x08, 0x00, 0x27, 0x10, 0x05, 0xb4, 0x2b, 0xe8,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv6_udp_p1_f2 = {
+ .len = 172,
+ .l4_offset = 48,
+ .data = {
+ /* IP */
+ 0x60, 0x00, 0x00, 0x00, 0x00, 0x84, 0x2c, 0x40,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x0d, 0x00, 0x00, 0x02,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x02, 0x00, 0x00, 0x02,
+ 0x11, 0x00, 0x05, 0x38, 0x5c, 0x92, 0xac, 0xf1,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv6_udp_p2 = {
+ .len = 4482,
+ .l4_offset = 40,
+ .data = {
+ /* IP */
+ 0x60, 0x00, 0x00, 0x00, 0x11, 0x5a, 0x2c, 0x40,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x0d, 0x00, 0x00, 0x02,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x02, 0x00, 0x00, 0x02,
+
+ /* UDP */
+ 0x08, 0x00, 0x27, 0x10, 0x11, 0x5a, 0x8a, 0x11,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv6_udp_p2_f1 = {
+ .len = 1384,
+ .l4_offset = 48,
+ .data = {
+ /* IP */
+ 0x60, 0x00, 0x00, 0x00, 0x05, 0x40, 0x2c, 0x40,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x0d, 0x00, 0x00, 0x02,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x02, 0x00, 0x00, 0x02,
+ 0x11, 0x00, 0x00, 0x01, 0x64, 0x6c, 0x68, 0x9f,
+
+ /* UDP */
+ 0x08, 0x00, 0x27, 0x10, 0x11, 0x5a, 0x8a, 0x11,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv6_udp_p2_f2 = {
+ .len = 1384,
+ .l4_offset = 48,
+ .data = {
+ /* IP */
+ 0x60, 0x00, 0x00, 0x00, 0x05, 0x40, 0x2c, 0x40,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x0d, 0x00, 0x00, 0x02,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x02, 0x00, 0x00, 0x02,
+ 0x11, 0x00, 0x05, 0x39, 0x64, 0x6c, 0x68, 0x9f,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv6_udp_p2_f3 = {
+ .len = 1384,
+ .l4_offset = 48,
+ .data = {
+ /* IP */
+ 0x60, 0x00, 0x00, 0x00, 0x05, 0x40, 0x2c, 0x40,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x0d, 0x00, 0x00, 0x02,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x02, 0x00, 0x00, 0x02,
+ 0x11, 0x00, 0x0a, 0x71, 0x64, 0x6c, 0x68, 0x9f,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv6_udp_p2_f4 = {
+ .len = 482,
+ .l4_offset = 48,
+ .data = {
+ /* IP */
+ 0x60, 0x00, 0x00, 0x00, 0x01, 0xba, 0x2c, 0x40,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x0d, 0x00, 0x00, 0x02,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x02, 0x00, 0x00, 0x02,
+ 0x11, 0x00, 0x0f, 0xa8, 0x64, 0x6c, 0x68, 0x9f,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv6_udp_p3 = {
+ .len = 5782,
+ .l4_offset = 40,
+ .data = {
+ /* IP */
+ 0x60, 0x00, 0x00, 0x00, 0x16, 0x6e, 0x2c, 0x40,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x0d, 0x00, 0x00, 0x02,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x02, 0x00, 0x00, 0x02,
+
+ /* UDP */
+ 0x08, 0x00, 0x27, 0x10, 0x16, 0x6e, 0x2f, 0x99,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv6_udp_p3_f1 = {
+ .len = 1384,
+ .l4_offset = 48,
+ .data = {
+ /* IP */
+ 0x60, 0x00, 0x00, 0x00, 0x05, 0x40, 0x2c, 0x40,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x0d, 0x00, 0x00, 0x02,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x02, 0x00, 0x00, 0x02,
+ 0x11, 0x00, 0x00, 0x01, 0x65, 0xcf, 0x5a, 0xae,
+
+ /* UDP */
+ 0x80, 0x00, 0x27, 0x10, 0x16, 0x6e, 0x2f, 0x99,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv6_udp_p3_f2 = {
+ .len = 1384,
+ .l4_offset = 48,
+ .data = {
+ /* IP */
+ 0x60, 0x00, 0x00, 0x00, 0x05, 0x40, 0x2c, 0x40,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x0d, 0x00, 0x00, 0x02,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x02, 0x00, 0x00, 0x02,
+ 0x11, 0x00, 0x05, 0x39, 0x65, 0xcf, 0x5a, 0xae,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv6_udp_p3_f3 = {
+ .len = 1384,
+ .l4_offset = 48,
+ .data = {
+ /* IP */
+ 0x60, 0x00, 0x00, 0x00, 0x05, 0x40, 0x2c, 0x40,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x0d, 0x00, 0x00, 0x02,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x02, 0x00, 0x00, 0x02,
+ 0x11, 0x00, 0x0a, 0x71, 0x65, 0xcf, 0x5a, 0xae,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv6_udp_p3_f4 = {
+ .len = 1384,
+ .l4_offset = 48,
+ .data = {
+ /* IP */
+ 0x60, 0x00, 0x00, 0x00, 0x05, 0x40, 0x2c, 0x40,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x0d, 0x00, 0x00, 0x02,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x02, 0x00, 0x00, 0x02,
+ 0x11, 0x00, 0x0f, 0xa9, 0x65, 0xcf, 0x5a, 0xae,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv6_udp_p3_f5 = {
+ .len = 446,
+ .l4_offset = 48,
+ .data = {
+ /* IP */
+ 0x60, 0x00, 0x00, 0x00, 0x01, 0x96, 0x2c, 0x40,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x0d, 0x00, 0x00, 0x02,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x02, 0x00, 0x00, 0x02,
+ 0x11, 0x00, 0x14, 0xe0, 0x65, 0xcf, 0x5a, 0xae,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv4_udp_p1 = {
+ .len = 1500,
+ .l4_offset = 20,
+ .data = {
+ /* IP */
+ 0x45, 0x00, 0x05, 0xdc, 0x00, 0x01, 0x00, 0x00,
+ 0x40, 0x11, 0x66, 0x0d, 0x0d, 0x00, 0x00, 0x02,
+ 0x02, 0x00, 0x00, 0x02,
+
+ /* UDP */
+ 0x08, 0x00, 0x27, 0x10, 0x05, 0xc8, 0xb8, 0x4c,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv4_udp_p1_f1 = {
+ .len = 1420,
+ .l4_offset = 20,
+ .data = {
+ /* IP */
+ 0x45, 0x00, 0x05, 0x8c, 0x00, 0x01, 0x20, 0x00,
+ 0x40, 0x11, 0x46, 0x5d, 0x0d, 0x00, 0x00, 0x02,
+ 0x02, 0x00, 0x00, 0x02,
+
+ /* UDP */
+ 0x08, 0x00, 0x27, 0x10, 0x05, 0xc8, 0xb8, 0x4c,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv4_udp_p1_f2 = {
+ .len = 100,
+ .l4_offset = 20,
+ .data = {
+ /* IP */
+ 0x45, 0x00, 0x00, 0x64, 0x00, 0x01, 0x00, 0xaf,
+ 0x40, 0x11, 0x6a, 0xd6, 0x0d, 0x00, 0x00, 0x02,
+ 0x02, 0x00, 0x00, 0x02,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv4_udp_p2 = {
+ .len = 4482,
+ .l4_offset = 20,
+ .data = {
+ /* IP */
+ 0x45, 0x00, 0x11, 0x82, 0x00, 0x02, 0x00, 0x00,
+ 0x40, 0x11, 0x5a, 0x66, 0x0d, 0x00, 0x00, 0x02,
+ 0x02, 0x00, 0x00, 0x02,
+
+ /* UDP */
+ 0x08, 0x00, 0x27, 0x10, 0x11, 0x6e, 0x16, 0x76,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv4_udp_p2_f1 = {
+ .len = 1420,
+ .l4_offset = 20,
+ .data = {
+ /* IP */
+ 0x45, 0x00, 0x05, 0x8c, 0x00, 0x02, 0x20, 0x00,
+ 0x40, 0x11, 0x46, 0x5c, 0x0d, 0x00, 0x00, 0x02,
+ 0x02, 0x00, 0x00, 0x02,
+
+ /* UDP */
+ 0x08, 0x00, 0x27, 0x10, 0x11, 0x6e, 0x16, 0x76,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv4_udp_p2_f2 = {
+ .len = 1420,
+ .l4_offset = 20,
+ .data = {
+ /* IP */
+ 0x45, 0x00, 0x05, 0x8c, 0x00, 0x02, 0x20, 0xaf,
+ 0x40, 0x11, 0x45, 0xad, 0x0d, 0x00, 0x00, 0x02,
+ 0x02, 0x00, 0x00, 0x02,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv4_udp_p2_f3 = {
+ .len = 1420,
+ .l4_offset = 20,
+ .data = {
+ /* IP */
+ 0x45, 0x00, 0x05, 0x8c, 0x00, 0x02, 0x21, 0x5e,
+ 0x40, 0x11, 0x44, 0xfe, 0x0d, 0x00, 0x00, 0x02,
+ 0x02, 0x00, 0x00, 0x02,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv4_udp_p2_f4 = {
+ .len = 282,
+ .l4_offset = 20,
+ .data = {
+ /* IP */
+ 0x45, 0x00, 0x01, 0x1a, 0x00, 0x02, 0x02, 0x0d,
+ 0x40, 0x11, 0x68, 0xc1, 0x0d, 0x00, 0x00, 0x02,
+ 0x02, 0x00, 0x00, 0x02,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv4_udp_p3 = {
+ .len = 5782,
+ .l4_offset = 20,
+ .data = {
+ /* IP */
+ 0x45, 0x00, 0x16, 0x96, 0x00, 0x03, 0x00, 0x00,
+ 0x40, 0x11, 0x55, 0x51, 0x0d, 0x00, 0x00, 0x02,
+ 0x02, 0x00, 0x00, 0x02,
+
+ /* UDP */
+ 0x08, 0x00, 0x27, 0x10, 0x16, 0x82, 0xbb, 0xfd,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv4_udp_p3_f1 = {
+ .len = 1420,
+ .l4_offset = 20,
+ .data = {
+ /* IP */
+ 0x45, 0x00, 0x05, 0x8c, 0x00, 0x03, 0x20, 0x00,
+ 0x40, 0x11, 0x46, 0x5b, 0x0d, 0x00, 0x00, 0x02,
+ 0x02, 0x00, 0x00, 0x02,
+
+ /* UDP */
+ 0x08, 0x00, 0x27, 0x10, 0x16, 0x82, 0xbb, 0xfd,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv4_udp_p3_f2 = {
+ .len = 1420,
+ .l4_offset = 20,
+ .data = {
+ /* IP */
+ 0x45, 0x00, 0x05, 0x8c, 0x00, 0x03, 0x20, 0xaf,
+ 0x40, 0x11, 0x45, 0xac, 0x0d, 0x00, 0x00, 0x02,
+ 0x02, 0x00, 0x00, 0x02,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv4_udp_p3_f3 = {
+ .len = 1420,
+ .l4_offset = 20,
+ .data = {
+ /* IP */
+ 0x45, 0x00, 0x05, 0x8c, 0x00, 0x03, 0x21, 0x5e,
+ 0x40, 0x11, 0x44, 0xfd, 0x0d, 0x00, 0x00, 0x02,
+ 0x02, 0x00, 0x00, 0x02,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv4_udp_p3_f4 = {
+ .len = 1420,
+ .l4_offset = 20,
+ .data = {
+ /* IP */
+ 0x45, 0x00, 0x05, 0x8c, 0x00, 0x03, 0x22, 0x0d,
+ 0x40, 0x11, 0x44, 0x4e, 0x0d, 0x00, 0x00, 0x02,
+ 0x02, 0x00, 0x00, 0x02,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv4_udp_p3_f5 = {
+ .len = 182,
+ .l4_offset = 20,
+ .data = {
+ /* IP */
+ 0x45, 0x00, 0x00, 0xb6, 0x00, 0x03, 0x02, 0xbc,
+ 0x40, 0x11, 0x68, 0x75, 0x0d, 0x00, 0x00, 0x02,
+ 0x02, 0x00, 0x00, 0x02,
+ },
+};
+
+static inline void
+test_vector_payload_populate(struct ip_reassembly_test_packet *pkt,
+ bool first_frag)
+{
+ uint32_t i = pkt->l4_offset;
+
+ /*
+ * For non-fragmented packets and the first fragment, skip the
+ * 8-byte UDP header that follows l4_offset.
+ */
+ if (first_frag)
+ i += 8;
+
+ for (; i < pkt->len; i++)
+ pkt->data[i] = 0x58;
+}
+
+struct ipsec_test_data conf_aes_128_gcm = {
+ .key = {
+ .data = {
+ 0xfe, 0xff, 0xe9, 0x92, 0x86, 0x65, 0x73, 0x1c,
+ 0x6d, 0x6a, 0x8f, 0x94, 0x67, 0x30, 0x83, 0x08
+ },
+ },
+
+ .salt = {
+ .data = {
+ 0xca, 0xfe, 0xba, 0xbe
+ },
+ .len = 4,
+ },
+
+ .iv = {
+ .data = {
+ 0xfa, 0xce, 0xdb, 0xad, 0xde, 0xca, 0xf8, 0x88
+ },
+ },
+
+ .ipsec_xform = {
+ .spi = 0xa5f8,
+ .salt = 0xbebafeca,
+ .options.esn = 0,
+ .options.udp_encap = 0,
+ .options.copy_dscp = 0,
+ .options.copy_flabel = 0,
+ .options.copy_df = 0,
+ .options.dec_ttl = 0,
+ .options.ecn = 0,
+ .options.stats = 0,
+ .options.tunnel_hdr_verify = 0,
+ .options.ip_csum_enable = 0,
+ .options.l4_csum_enable = 0,
+ .options.ip_reassembly_en = 1,
+ .direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS,
+ .proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
+ .mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
+ .tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4,
+ .replay_win_sz = 0,
+ },
+
+ .aead = true,
+
+ .xform = {
+ .aead = {
+ .next = NULL,
+ .type = RTE_CRYPTO_SYM_XFORM_AEAD,
+ .aead = {
+ .op = RTE_CRYPTO_AEAD_OP_ENCRYPT,
+ .algo = RTE_CRYPTO_AEAD_AES_GCM,
+ .key.length = 16,
+ .iv.length = 12,
+ .iv.offset = 0,
+ .digest_length = 16,
+ .aad_length = 12,
+ },
+ },
+ },
+};
+
+struct ipsec_test_data conf_aes_128_gcm_v6_tunnel = {
+ .key = {
+ .data = {
+ 0xfe, 0xff, 0xe9, 0x92, 0x86, 0x65, 0x73, 0x1c,
+ 0x6d, 0x6a, 0x8f, 0x94, 0x67, 0x30, 0x83, 0x08
+ },
+ },
+
+ .salt = {
+ .data = {
+ 0xca, 0xfe, 0xba, 0xbe
+ },
+ .len = 4,
+ },
+
+ .iv = {
+ .data = {
+ 0xfa, 0xce, 0xdb, 0xad, 0xde, 0xca, 0xf8, 0x88
+ },
+ },
+
+ .ipsec_xform = {
+ .spi = 0xa5f8,
+ .salt = 0xbebafeca,
+ .options.esn = 0,
+ .options.udp_encap = 0,
+ .options.copy_dscp = 0,
+ .options.copy_flabel = 0,
+ .options.copy_df = 0,
+ .options.dec_ttl = 0,
+ .options.ecn = 0,
+ .options.stats = 0,
+ .options.tunnel_hdr_verify = 0,
+ .options.ip_csum_enable = 0,
+ .options.l4_csum_enable = 0,
+ .options.ip_reassembly_en = 1,
+ .direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS,
+ .proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
+ .mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
+ .tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV6,
+ .replay_win_sz = 0,
+ },
+
+ .aead = true,
+
+ .xform = {
+ .aead = {
+ .next = NULL,
+ .type = RTE_CRYPTO_SYM_XFORM_AEAD,
+ .aead = {
+ .op = RTE_CRYPTO_AEAD_OP_ENCRYPT,
+ .algo = RTE_CRYPTO_AEAD_AES_GCM,
+ .key.length = 16,
+ .iv.length = 12,
+ .iv.offset = 0,
+ .digest_length = 16,
+ .aad_length = 12,
+ },
+ },
+ },
+};
+
+const struct reassembly_vector ipv4_2frag_vector = {
+ .sa_data = &conf_aes_128_gcm,
+ .full_pkt = &pkt_ipv4_udp_p1,
+ .frags[0] = &pkt_ipv4_udp_p1_f1,
+ .frags[1] = &pkt_ipv4_udp_p1_f2,
+ .nb_frags = 2,
+ .burst = false,
+};
+
+const struct reassembly_vector ipv6_2frag_vector = {
+ .sa_data = &conf_aes_128_gcm_v6_tunnel,
+ .full_pkt = &pkt_ipv6_udp_p1,
+ .frags[0] = &pkt_ipv6_udp_p1_f1,
+ .frags[1] = &pkt_ipv6_udp_p1_f2,
+ .nb_frags = 2,
+ .burst = false,
+};
+
+const struct reassembly_vector ipv4_4frag_vector = {
+ .sa_data = &conf_aes_128_gcm,
+ .full_pkt = &pkt_ipv4_udp_p2,
+ .frags[0] = &pkt_ipv4_udp_p2_f1,
+ .frags[1] = &pkt_ipv4_udp_p2_f2,
+ .frags[2] = &pkt_ipv4_udp_p2_f3,
+ .frags[3] = &pkt_ipv4_udp_p2_f4,
+ .nb_frags = 4,
+ .burst = false,
+};
+
+const struct reassembly_vector ipv6_4frag_vector = {
+ .sa_data = &conf_aes_128_gcm_v6_tunnel,
+ .full_pkt = &pkt_ipv6_udp_p2,
+ .frags[0] = &pkt_ipv6_udp_p2_f1,
+ .frags[1] = &pkt_ipv6_udp_p2_f2,
+ .frags[2] = &pkt_ipv6_udp_p2_f3,
+ .frags[3] = &pkt_ipv6_udp_p2_f4,
+ .nb_frags = 4,
+ .burst = false,
+};
+const struct reassembly_vector ipv4_5frag_vector = {
+ .sa_data = &conf_aes_128_gcm,
+ .full_pkt = &pkt_ipv4_udp_p3,
+ .frags[0] = &pkt_ipv4_udp_p3_f1,
+ .frags[1] = &pkt_ipv4_udp_p3_f2,
+ .frags[2] = &pkt_ipv4_udp_p3_f3,
+ .frags[3] = &pkt_ipv4_udp_p3_f4,
+ .frags[4] = &pkt_ipv4_udp_p3_f5,
+ .nb_frags = 5,
+ .burst = false,
+};
+const struct reassembly_vector ipv6_5frag_vector = {
+ .sa_data = &conf_aes_128_gcm_v6_tunnel,
+ .full_pkt = &pkt_ipv6_udp_p3,
+ .frags[0] = &pkt_ipv6_udp_p3_f1,
+ .frags[1] = &pkt_ipv6_udp_p3_f2,
+ .frags[2] = &pkt_ipv6_udp_p3_f3,
+ .frags[3] = &pkt_ipv6_udp_p3_f4,
+ .frags[4] = &pkt_ipv6_udp_p3_f5,
+ .nb_frags = 5,
+ .burst = false,
+};
+/* Negative test cases. */
+const struct reassembly_vector ipv4_incomplete_vector = {
+ .sa_data = &conf_aes_128_gcm,
+ .full_pkt = &pkt_ipv4_udp_p2,
+ .frags[0] = &pkt_ipv4_udp_p2_f1,
+ .frags[1] = &pkt_ipv4_udp_p2_f2,
+ .nb_frags = 2,
+ .burst = false,
+};
+const struct reassembly_vector ipv4_overlap_vector = {
+ .sa_data = &conf_aes_128_gcm,
+ .full_pkt = &pkt_ipv4_udp_p1,
+ .frags[0] = &pkt_ipv4_udp_p1_f1,
+ .frags[1] = &pkt_ipv4_udp_p1_f1, /* Overlap */
+ .frags[2] = &pkt_ipv4_udp_p1_f2,
+ .nb_frags = 3,
+ .burst = false,
+};
+const struct reassembly_vector ipv4_out_of_order_vector = {
+ .sa_data = &conf_aes_128_gcm,
+ .full_pkt = &pkt_ipv4_udp_p2,
+ .frags[0] = &pkt_ipv4_udp_p2_f1,
+ .frags[1] = &pkt_ipv4_udp_p2_f3,
+ .frags[2] = &pkt_ipv4_udp_p2_f4,
+ .frags[3] = &pkt_ipv4_udp_p2_f2, /* out of order */
+ .nb_frags = 4,
+ .burst = false,
+};
+const struct reassembly_vector ipv4_4frag_burst_vector = {
+ .sa_data = &conf_aes_128_gcm,
+ .full_pkt = &pkt_ipv4_udp_p2,
+ .frags[0] = &pkt_ipv4_udp_p2_f1,
+ .frags[1] = &pkt_ipv4_udp_p2_f2,
+ .frags[2] = &pkt_ipv4_udp_p2_f3,
+ .frags[3] = &pkt_ipv4_udp_p2_f4,
+ .nb_frags = 4,
+ .burst = true,
+};
+
#endif
--
2.25.1
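The IPv4 fragment vectors in the patch above carry their position in the flags/fragment-offset word of the IP header (bytes 6 and 7). A minimal decoding sketch (Python, for illustration only; not part of the test suite):

```python
def parse_ipv4_frag(hdr):
    """Decode the flags/fragment-offset word of an IPv4 header."""
    word = (hdr[6] << 8) | hdr[7]
    more_fragments = bool(word & 0x2000)   # MF flag
    offset_bytes = (word & 0x1FFF) * 8     # offset is stored in 8-byte units
    return more_fragments, offset_bytes

# pkt_ipv4_udp_p1_f1: 0x2000 -> MF set, offset 0 (first fragment)
f1 = [0x45, 0x00, 0x05, 0x8c, 0x00, 0x01, 0x20, 0x00]
# pkt_ipv4_udp_p1_f2: 0x00af -> MF clear, offset 175 * 8 = 1400 bytes
f2 = [0x45, 0x00, 0x00, 0x64, 0x00, 0x01, 0x00, 0xaf]

assert parse_ipv4_frag(f1) == (True, 0)
assert parse_ipv4_frag(f2) == (False, 1400)
```

The offsets line up with the fragment lengths: f1 carries 1420 - 20 = 1400 payload bytes, which is exactly where f2 begins.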
* [PATCH v6 5/7] test/security: add more inline IPsec functional cases
2022-05-13 7:31 ` [PATCH v6 " Akhil Goyal
` (3 preceding siblings ...)
2022-05-13 7:31 ` [PATCH v6 4/7] test/security: add inline IPsec reassembly cases Akhil Goyal
@ 2022-05-13 7:31 ` Akhil Goyal
2022-05-13 7:32 ` [PATCH v6 6/7] test/security: add ESN and anti-replay cases for inline Akhil Goyal
` (2 subsequent siblings)
7 siblings, 0 replies; 184+ messages in thread
From: Akhil Goyal @ 2022-05-13 7:31 UTC (permalink / raw)
To: dev
Cc: thomas, david.marchand, hemant.agrawal, anoobj,
konstantin.ananyev, ciara.power, ferruh.yigit, andrew.rybchenko,
ndabilpuram, vattunuru, Akhil Goyal, Fan Zhang
Added more inline IPsec functional verification cases.
These cases do not have known vectors but are verified
using an encap + decap test for all the algorithm combinations.
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
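The encap + decap round trip described in the commit message can be sketched as follows. This is an illustrative Python sketch, not the test suite's API; the XOR transform is a stand-in for the real AEAD/cipher+auth processing, chosen only because it is trivially invertible:

```python
def encap(plaintext: bytes, key: int) -> bytes:
    # Stand-in for outbound (egress) inline IPsec processing.
    return bytes(b ^ key for b in plaintext)

def decap(ciphertext: bytes, key: int) -> bytes:
    # Stand-in for inbound (ingress) inline IPsec processing.
    return bytes(b ^ key for b in ciphertext)

def roundtrip_verify(pkt: bytes, key: int = 0x5A) -> bool:
    # No known vector is needed: the outbound output feeds the inbound
    # path, and the result must equal the original packet.
    return decap(encap(pkt, key), key) == pkt

assert roundtrip_verify(b"\x45\x00\x00\x14" + b"\x58" * 16)
```

This is why the cases cover every algorithm combination without stored ciphertexts: correctness is defined by the round trip, not by comparison against a precomputed vector.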
---
app/test/test_security_inline_proto.c | 517 ++++++++++++++++++++++++++
1 file changed, 517 insertions(+)
diff --git a/app/test/test_security_inline_proto.c b/app/test/test_security_inline_proto.c
index 2aa7072512..ce68da7605 100644
--- a/app/test/test_security_inline_proto.c
+++ b/app/test/test_security_inline_proto.c
@@ -1321,6 +1321,394 @@ test_ipsec_inline_proto_display_list(const void *data __rte_unused)
return test_ipsec_inline_proto_all(&flags);
}
+static int
+test_ipsec_inline_proto_udp_encap(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.udp_encap = true;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_udp_ports_verify(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.udp_encap = true;
+ flags.udp_ports_verify = true;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_err_icv_corrupt(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.icv_corrupt = true;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_tunnel_dst_addr_verify(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.tunnel_hdr_verify = RTE_SECURITY_IPSEC_TUNNEL_VERIFY_DST_ADDR;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_tunnel_src_dst_addr_verify(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.tunnel_hdr_verify = RTE_SECURITY_IPSEC_TUNNEL_VERIFY_SRC_DST_ADDR;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_inner_ip_csum(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.ip_csum = true;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_inner_l4_csum(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.l4_csum = true;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_tunnel_v4_in_v4(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.ipv6 = false;
+ flags.tunnel_ipv6 = false;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_tunnel_v6_in_v6(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.ipv6 = true;
+ flags.tunnel_ipv6 = true;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_tunnel_v4_in_v6(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.ipv6 = false;
+ flags.tunnel_ipv6 = true;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_tunnel_v6_in_v4(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.ipv6 = true;
+ flags.tunnel_ipv6 = false;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_transport_v4(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.ipv6 = false;
+ flags.transport = true;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_transport_l4_csum(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags = {
+ .l4_csum = true,
+ .transport = true,
+ };
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_stats(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.stats_success = true;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_pkt_fragment(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.fragment = true;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_copy_df_inner_0(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.df = TEST_IPSEC_COPY_DF_INNER_0;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_copy_df_inner_1(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.df = TEST_IPSEC_COPY_DF_INNER_1;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_set_df_0_inner_1(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.df = TEST_IPSEC_SET_DF_0_INNER_1;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_set_df_1_inner_0(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.df = TEST_IPSEC_SET_DF_1_INNER_0;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_ipv4_copy_dscp_inner_0(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.dscp = TEST_IPSEC_COPY_DSCP_INNER_0;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_ipv4_copy_dscp_inner_1(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.dscp = TEST_IPSEC_COPY_DSCP_INNER_1;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_ipv4_set_dscp_0_inner_1(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.dscp = TEST_IPSEC_SET_DSCP_0_INNER_1;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_ipv4_set_dscp_1_inner_0(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.dscp = TEST_IPSEC_SET_DSCP_1_INNER_0;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_ipv6_copy_dscp_inner_0(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.ipv6 = true;
+ flags.tunnel_ipv6 = true;
+ flags.dscp = TEST_IPSEC_COPY_DSCP_INNER_0;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_ipv6_copy_dscp_inner_1(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.ipv6 = true;
+ flags.tunnel_ipv6 = true;
+ flags.dscp = TEST_IPSEC_COPY_DSCP_INNER_1;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_ipv6_set_dscp_0_inner_1(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.ipv6 = true;
+ flags.tunnel_ipv6 = true;
+ flags.dscp = TEST_IPSEC_SET_DSCP_0_INNER_1;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_ipv6_set_dscp_1_inner_0(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.ipv6 = true;
+ flags.tunnel_ipv6 = true;
+ flags.dscp = TEST_IPSEC_SET_DSCP_1_INNER_0;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_ipv4_ttl_decrement(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags = {
+ .dec_ttl_or_hop_limit = true
+ };
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_ipv6_hop_limit_decrement(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags = {
+ .ipv6 = true,
+ .dec_ttl_or_hop_limit = true
+ };
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_iv_gen(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.iv_gen = true;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_known_vec_fragmented(const void *test_data)
+{
+ struct ipsec_test_data td_outb;
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+ flags.fragment = true;
+
+ memcpy(&td_outb, test_data, sizeof(td_outb));
+
+ /* Disable IV gen to be able to test with known vectors */
+ td_outb.ipsec_xform.options.iv_gen_disable = 1;
+
+ return test_ipsec_inline_proto_process(&td_outb, NULL, 1, false,
+ &flags);
+}
static struct unit_test_suite inline_ipsec_testsuite = {
.suite_name = "Inline IPsec Ethernet Device Unit Test Suite",
.setup = inline_ipsec_testsuite_setup,
@@ -1367,6 +1755,13 @@ static struct unit_test_suite inline_ipsec_testsuite = {
ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
test_ipsec_inline_proto_known_vec,
&pkt_null_aes_xcbc),
+
+ TEST_CASE_NAMED_WITH_DATA(
+ "Outbound fragmented packet",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_known_vec_fragmented,
+ &pkt_aes_128_gcm_frag),
+
TEST_CASE_NAMED_WITH_DATA(
"Inbound known vector (ESP tunnel mode IPv4 AES-GCM 128)",
ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
@@ -1418,6 +1813,128 @@ static struct unit_test_suite inline_ipsec_testsuite = {
ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
test_ipsec_inline_proto_display_list),
+ TEST_CASE_NAMED_ST(
+ "UDP encapsulation",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_udp_encap),
+ TEST_CASE_NAMED_ST(
+ "UDP encapsulation ports verification test",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_udp_ports_verify),
+ TEST_CASE_NAMED_ST(
+ "Negative test: ICV corruption",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_err_icv_corrupt),
+ TEST_CASE_NAMED_ST(
+ "Tunnel dst addr verification",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_tunnel_dst_addr_verify),
+ TEST_CASE_NAMED_ST(
+ "Tunnel src and dst addr verification",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_tunnel_src_dst_addr_verify),
+ TEST_CASE_NAMED_ST(
+ "Inner IP checksum",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_inner_ip_csum),
+ TEST_CASE_NAMED_ST(
+ "Inner L4 checksum",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_inner_l4_csum),
+ TEST_CASE_NAMED_ST(
+ "Tunnel IPv4 in IPv4",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_tunnel_v4_in_v4),
+ TEST_CASE_NAMED_ST(
+ "Tunnel IPv6 in IPv6",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_tunnel_v6_in_v6),
+ TEST_CASE_NAMED_ST(
+ "Tunnel IPv4 in IPv6",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_tunnel_v4_in_v6),
+ TEST_CASE_NAMED_ST(
+ "Tunnel IPv6 in IPv4",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_tunnel_v6_in_v4),
+ TEST_CASE_NAMED_ST(
+ "Transport IPv4",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_transport_v4),
+ TEST_CASE_NAMED_ST(
+ "Transport l4 checksum",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_transport_l4_csum),
+ TEST_CASE_NAMED_ST(
+ "Statistics: success",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_stats),
+ TEST_CASE_NAMED_ST(
+ "Fragmented packet",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_pkt_fragment),
+ TEST_CASE_NAMED_ST(
+ "Tunnel header copy DF (inner 0)",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_copy_df_inner_0),
+ TEST_CASE_NAMED_ST(
+ "Tunnel header copy DF (inner 1)",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_copy_df_inner_1),
+ TEST_CASE_NAMED_ST(
+ "Tunnel header set DF 0 (inner 1)",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_set_df_0_inner_1),
+ TEST_CASE_NAMED_ST(
+ "Tunnel header set DF 1 (inner 0)",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_set_df_1_inner_0),
+ TEST_CASE_NAMED_ST(
+ "Tunnel header IPv4 copy DSCP (inner 0)",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_ipv4_copy_dscp_inner_0),
+ TEST_CASE_NAMED_ST(
+ "Tunnel header IPv4 copy DSCP (inner 1)",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_ipv4_copy_dscp_inner_1),
+ TEST_CASE_NAMED_ST(
+ "Tunnel header IPv4 set DSCP 0 (inner 1)",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_ipv4_set_dscp_0_inner_1),
+ TEST_CASE_NAMED_ST(
+ "Tunnel header IPv4 set DSCP 1 (inner 0)",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_ipv4_set_dscp_1_inner_0),
+ TEST_CASE_NAMED_ST(
+ "Tunnel header IPv6 copy DSCP (inner 0)",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_ipv6_copy_dscp_inner_0),
+ TEST_CASE_NAMED_ST(
+ "Tunnel header IPv6 copy DSCP (inner 1)",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_ipv6_copy_dscp_inner_1),
+ TEST_CASE_NAMED_ST(
+ "Tunnel header IPv6 set DSCP 0 (inner 1)",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_ipv6_set_dscp_0_inner_1),
+ TEST_CASE_NAMED_ST(
+ "Tunnel header IPv6 set DSCP 1 (inner 0)",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_ipv6_set_dscp_1_inner_0),
+ TEST_CASE_NAMED_ST(
+ "Tunnel header IPv4 decrement inner TTL",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_ipv4_ttl_decrement),
+ TEST_CASE_NAMED_ST(
+ "Tunnel header IPv6 decrement inner hop limit",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_ipv6_hop_limit_decrement),
+ TEST_CASE_NAMED_ST(
+ "IV generation",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_iv_gen),
+
TEST_CASE_NAMED_WITH_DATA(
"IPv4 Reassembly with 2 fragments",
ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
--
2.25.1
* [PATCH v6 6/7] test/security: add ESN and anti-replay cases for inline
2022-05-13 7:31 ` [PATCH v6 " Akhil Goyal
` (4 preceding siblings ...)
2022-05-13 7:31 ` [PATCH v6 5/7] test/security: add more inline IPsec functional cases Akhil Goyal
@ 2022-05-13 7:32 ` Akhil Goyal
2022-05-13 7:32 ` [PATCH v6 7/7] test/security: add inline IPsec IPv6 flow label cases Akhil Goyal
2022-05-24 7:22 ` [PATCH v7 0/7] app/test: add inline IPsec and reassembly cases Akhil Goyal
7 siblings, 0 replies; 184+ messages in thread
From: Akhil Goyal @ 2022-05-13 7:32 UTC (permalink / raw)
To: dev
Cc: thomas, david.marchand, hemant.agrawal, anoobj,
konstantin.ananyev, ciara.power, ferruh.yigit, andrew.rybchenko,
ndabilpuram, vattunuru, Akhil Goyal, Fan Zhang
Added cases to test anti-replay for inline IPsec processing,
with and without extended sequence number (ESN) support.
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
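The anti-replay behavior these cases exercise can be summarized with a sliding-window check in the spirit of RFC 4303. A minimal Python sketch (illustrative only; window eviction and ESN high-order tracking are omitted for brevity):

```python
class AntiReplay:
    """Toy sliding-window anti-replay check (RFC 4303 style sketch)."""

    def __init__(self, window_size: int = 64):
        self.window_size = window_size
        self.last_seq = 0
        self.seen = set()

    def check_and_update(self, seq: int) -> bool:
        # A sequence number ahead of the highest seen slides the window.
        if seq > self.last_seq:
            self.last_seq = seq
            self.seen.add(seq)
            return True
        # Older than the window: reject outright.
        if self.last_seq - seq >= self.window_size:
            return False
        # Inside the window: reject duplicates, accept first sightings.
        if seq in self.seen:
            return False
        self.seen.add(seq)
        return True
```

The ESN variants extend the same check so that sequence numbers are effectively 64-bit, letting the SA survive 32-bit counter wraparound.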
---
app/test/test_security_inline_proto.c | 311 ++++++++++++++++++++++++++
1 file changed, 311 insertions(+)
diff --git a/app/test/test_security_inline_proto.c b/app/test/test_security_inline_proto.c
index ce68da7605..1c32141cd1 100644
--- a/app/test/test_security_inline_proto.c
+++ b/app/test/test_security_inline_proto.c
@@ -1098,6 +1098,139 @@ test_ipsec_inline_proto_all(const struct ipsec_test_flags *flags)
return TEST_SKIPPED;
}
+static int
+test_ipsec_inline_proto_process_with_esn(struct ipsec_test_data td[],
+ struct ipsec_test_data res_d[],
+ int nb_pkts,
+ bool silent,
+ const struct ipsec_test_flags *flags)
+{
+ struct rte_security_session_conf sess_conf = {0};
+ struct ipsec_test_data *res_d_tmp = NULL;
+ struct rte_crypto_sym_xform cipher = {0};
+ struct rte_crypto_sym_xform auth = {0};
+ struct rte_crypto_sym_xform aead = {0};
+ struct rte_mbuf *rx_pkt = NULL;
+ struct rte_mbuf *tx_pkt = NULL;
+ int nb_rx, nb_sent;
+ struct rte_security_session *ses;
+ struct rte_security_ctx *ctx;
+ uint32_t ol_flags;
+ int i, ret;
+
+ if (td[0].aead) {
+ sess_conf.crypto_xform = &aead;
+ } else {
+ if (td[0].ipsec_xform.direction ==
+ RTE_SECURITY_IPSEC_SA_DIR_EGRESS) {
+ sess_conf.crypto_xform = &cipher;
+ sess_conf.crypto_xform->type = RTE_CRYPTO_SYM_XFORM_CIPHER;
+ sess_conf.crypto_xform->next = &auth;
+ sess_conf.crypto_xform->next->type = RTE_CRYPTO_SYM_XFORM_AUTH;
+ } else {
+ sess_conf.crypto_xform = &auth;
+ sess_conf.crypto_xform->type = RTE_CRYPTO_SYM_XFORM_AUTH;
+ sess_conf.crypto_xform->next = &cipher;
+ sess_conf.crypto_xform->next->type = RTE_CRYPTO_SYM_XFORM_CIPHER;
+ }
+ }
+
+ /* Create Inline IPsec session. */
+ ret = create_inline_ipsec_session(&td[0], port_id, &ses, &ctx,
+ &ol_flags, flags, &sess_conf);
+ if (ret)
+ return ret;
+
+ if (td[0].ipsec_xform.direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS) {
+ ret = create_default_flow(port_id);
+ if (ret)
+ goto out;
+ }
+
+ for (i = 0; i < nb_pkts; i++) {
+ tx_pkt = init_packet(mbufpool, td[i].input_text.data,
+ td[i].input_text.len);
+ if (tx_pkt == NULL) {
+ ret = TEST_FAILED;
+ goto out;
+ }
+
+ if (test_ipsec_pkt_update(rte_pktmbuf_mtod_offset(tx_pkt,
+ uint8_t *, RTE_ETHER_HDR_LEN), flags)) {
+ ret = TEST_FAILED;
+ goto out;
+ }
+
+ if (td[i].ipsec_xform.direction ==
+ RTE_SECURITY_IPSEC_SA_DIR_EGRESS) {
+ if (flags->antireplay) {
+ sess_conf.ipsec.esn.value =
+ td[i].ipsec_xform.esn.value;
+ ret = rte_security_session_update(ctx, ses,
+ &sess_conf);
+ if (ret) {
+ printf("Could not update ESN in session\n");
+ rte_pktmbuf_free(tx_pkt);
+ ret = TEST_SKIPPED;
+ goto out;
+ }
+ }
+ if (ol_flags & RTE_SECURITY_TX_OLOAD_NEED_MDATA)
+ rte_security_set_pkt_metadata(ctx, ses,
+ tx_pkt, NULL);
+ tx_pkt->ol_flags |= RTE_MBUF_F_TX_SEC_OFFLOAD;
+ }
+ /* Send packet to ethdev for inline IPsec processing. */
+ nb_sent = rte_eth_tx_burst(port_id, 0, &tx_pkt, 1);
+ if (nb_sent != 1) {
+ printf("Unable to TX packet\n");
+ rte_pktmbuf_free(tx_pkt);
+ ret = TEST_FAILED;
+ goto out;
+ }
+
+ rte_pause();
+
+ /* Receive back packet on loopback interface. */
+ do {
+ rte_delay_ms(1);
+ nb_rx = rte_eth_rx_burst(port_id, 0, &rx_pkt, 1);
+ } while (nb_rx == 0);
+
+ rte_pktmbuf_adj(rx_pkt, RTE_ETHER_HDR_LEN);
+
+ if (res_d != NULL)
+ res_d_tmp = &res_d[i];
+
+ ret = test_ipsec_post_process(rx_pkt, &td[i],
+ res_d_tmp, silent, flags);
+ if (ret != TEST_SUCCESS) {
+ rte_pktmbuf_free(rx_pkt);
+ goto out;
+ }
+
+ ret = test_ipsec_stats_verify(ctx, ses, flags,
+ td->ipsec_xform.direction);
+ if (ret != TEST_SUCCESS) {
+ rte_pktmbuf_free(rx_pkt);
+ goto out;
+ }
+
+ rte_pktmbuf_free(rx_pkt);
+ rx_pkt = NULL;
+ tx_pkt = NULL;
+ }
+
+out:
+ if (td->ipsec_xform.direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS)
+ destroy_default_flow(port_id);
+
+ /* Destroy session so that other cases can create the session again */
+ rte_security_session_destroy(ctx, ses);
+ ses = NULL;
+
+ return ret;
+}
+
static int
ut_setup_inline_ipsec(void)
@@ -1709,6 +1842,153 @@ test_ipsec_inline_proto_known_vec_fragmented(const void *test_data)
return test_ipsec_inline_proto_process(&td_outb, NULL, 1, false,
&flags);
}
+
+static int
+test_ipsec_inline_pkt_replay(const void *test_data, const uint64_t esn[],
+ bool replayed_pkt[], uint32_t nb_pkts, bool esn_en,
+ uint64_t winsz)
+{
+ struct ipsec_test_data td_outb[IPSEC_TEST_PACKETS_MAX];
+ struct ipsec_test_data td_inb[IPSEC_TEST_PACKETS_MAX];
+ struct ipsec_test_flags flags;
+ uint32_t i, ret = 0;
+
+ memset(&flags, 0, sizeof(flags));
+ flags.antireplay = true;
+
+ for (i = 0; i < nb_pkts; i++) {
+ memcpy(&td_outb[i], test_data, sizeof(td_outb[i]));
+ td_outb[i].ipsec_xform.options.iv_gen_disable = 1;
+ td_outb[i].ipsec_xform.replay_win_sz = winsz;
+ td_outb[i].ipsec_xform.options.esn = esn_en;
+ }
+
+ for (i = 0; i < nb_pkts; i++)
+ td_outb[i].ipsec_xform.esn.value = esn[i];
+
+ ret = test_ipsec_inline_proto_process_with_esn(td_outb, td_inb,
+ nb_pkts, true, &flags);
+ if (ret != TEST_SUCCESS)
+ return ret;
+
+ test_ipsec_td_update(td_inb, td_outb, nb_pkts, &flags);
+
+ for (i = 0; i < nb_pkts; i++) {
+ td_inb[i].ipsec_xform.options.esn = esn_en;
+ /* Set antireplay flag for packets to be dropped */
+ td_inb[i].ar_packet = replayed_pkt[i];
+ }
+
+ ret = test_ipsec_inline_proto_process_with_esn(td_inb, NULL, nb_pkts,
+ true, &flags);
+
+ return ret;
+}
+
+static int
+test_ipsec_inline_proto_pkt_antireplay(const void *test_data, uint64_t winsz)
+{
+ uint32_t nb_pkts = 5;
+ bool replayed_pkt[5];
+ uint64_t esn[5];
+
+ /* 1. Advance the TOP of the window to WS * 2 */
+ esn[0] = winsz * 2;
+ /* 2. Test sequence number within the new window (WS + 1) */
+ esn[1] = winsz + 1;
+ /* 3. Test sequence number less than the window BOTTOM */
+ esn[2] = winsz;
+ /* 4. Test sequence number in the middle of the window */
+ esn[3] = winsz + (winsz / 2);
+ /* 5. Test replay of the packet in the middle of the window */
+ esn[4] = winsz + (winsz / 2);
+
+ replayed_pkt[0] = false;
+ replayed_pkt[1] = false;
+ replayed_pkt[2] = true;
+ replayed_pkt[3] = false;
+ replayed_pkt[4] = true;
+
+ return test_ipsec_inline_pkt_replay(test_data, esn, replayed_pkt,
+ nb_pkts, false, winsz);
+}
+
+static int
+test_ipsec_inline_proto_pkt_antireplay1024(const void *test_data)
+{
+ return test_ipsec_inline_proto_pkt_antireplay(test_data, 1024);
+}
+
+static int
+test_ipsec_inline_proto_pkt_antireplay2048(const void *test_data)
+{
+ return test_ipsec_inline_proto_pkt_antireplay(test_data, 2048);
+}
+
+static int
+test_ipsec_inline_proto_pkt_antireplay4096(const void *test_data)
+{
+ return test_ipsec_inline_proto_pkt_antireplay(test_data, 4096);
+}
+
+static int
+test_ipsec_inline_proto_pkt_esn_antireplay(const void *test_data, uint64_t winsz)
+{
+ uint32_t nb_pkts = 7;
+ bool replayed_pkt[7];
+ uint64_t esn[7];
+
+ /* Set the initial sequence number */
+ esn[0] = (uint64_t)(0xFFFFFFFF - winsz);
+ /* 1. Advance the TOP of the window to (1<<32 + WS/2) */
+ esn[1] = (uint64_t)((1ULL << 32) + (winsz / 2));
+ /* 2. Test sequence number within new window (1<<32 - WS/2 + 1) */
+ esn[2] = (uint64_t)((1ULL << 32) - (winsz / 2) + 1);
+ /* 3. Test with sequence number within window (1<<32 - 1) */
+ esn[3] = (uint64_t)((1ULL << 32) - 1);
+ /* 4. Test with sequence number at the window top (1<<32) */
+ esn[4] = (uint64_t)(1ULL << 32);
+ /* 5. Test with duplicate sequence number within
+ * new window (1<<32 - 1)
+ */
+ esn[5] = (uint64_t)((1ULL << 32) - 1);
+ /* 6. Test with duplicate sequence number within new window (1<<32) */
+ esn[6] = (uint64_t)(1ULL << 32);
+
+ replayed_pkt[0] = false;
+ replayed_pkt[1] = false;
+ replayed_pkt[2] = false;
+ replayed_pkt[3] = false;
+ replayed_pkt[4] = false;
+ replayed_pkt[5] = true;
+ replayed_pkt[6] = true;
+
+ return test_ipsec_inline_pkt_replay(test_data, esn, replayed_pkt, nb_pkts,
+ true, winsz);
+}
+
+static int
+test_ipsec_inline_proto_pkt_esn_antireplay1024(const void *test_data)
+{
+ return test_ipsec_inline_proto_pkt_esn_antireplay(test_data, 1024);
+}
+
+static int
+test_ipsec_inline_proto_pkt_esn_antireplay2048(const void *test_data)
+{
+ return test_ipsec_inline_proto_pkt_esn_antireplay(test_data, 2048);
+}
+
+static int
+test_ipsec_inline_proto_pkt_esn_antireplay4096(const void *test_data)
+{
+ return test_ipsec_inline_proto_pkt_esn_antireplay(test_data, 4096);
+}
+
static struct unit_test_suite inline_ipsec_testsuite = {
.suite_name = "Inline IPsec Ethernet Device Unit Test Suite",
.setup = inline_ipsec_testsuite_setup,
@@ -1935,6 +2215,37 @@ static struct unit_test_suite inline_ipsec_testsuite = {
test_ipsec_inline_proto_iv_gen),
+ TEST_CASE_NAMED_WITH_DATA(
+ "Antireplay with window size 1024",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_pkt_antireplay1024,
+ &pkt_aes_128_gcm),
+ TEST_CASE_NAMED_WITH_DATA(
+ "Antireplay with window size 2048",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_pkt_antireplay2048,
+ &pkt_aes_128_gcm),
+ TEST_CASE_NAMED_WITH_DATA(
+ "Antireplay with window size 4096",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_pkt_antireplay4096,
+ &pkt_aes_128_gcm),
+ TEST_CASE_NAMED_WITH_DATA(
+ "ESN and Antireplay with window size 1024",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_pkt_esn_antireplay1024,
+ &pkt_aes_128_gcm),
+ TEST_CASE_NAMED_WITH_DATA(
+ "ESN and Antireplay with window size 2048",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_pkt_esn_antireplay2048,
+ &pkt_aes_128_gcm),
+ TEST_CASE_NAMED_WITH_DATA(
+ "ESN and Antireplay with window size 4096",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_pkt_esn_antireplay4096,
+ &pkt_aes_128_gcm),
+
TEST_CASE_NAMED_WITH_DATA(
"IPv4 Reassembly with 2 fragments",
ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
--
2.25.1
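The window movements exercised by the esn[] / replayed_pkt[] vectors in the
patch above can be sketched with a minimal 64-entry sliding-window anti-replay
check. This is a standalone illustration of the expected accept/drop semantics,
not the DPDK or PMD implementation; the names `ar_window` and `ar_check_update`
are hypothetical.

```c
#include <stdint.h>
#include <stdbool.h>

/* Minimal 64-entry anti-replay window. 'top' is the highest sequence
 * number accepted so far; bit k of 'bitmap' records whether sequence
 * number (top - k) has been seen.
 */
struct ar_window {
	uint64_t top;
	uint64_t bitmap;
};

/* Accept 'seq' if it advances the window or falls inside it unseen;
 * reject replays and anything below the window bottom.
 */
static bool ar_check_update(struct ar_window *w, uint64_t seq)
{
	if (seq > w->top) {
		uint64_t shift = seq - w->top;

		/* Slide the window up; shifting by >= 64 is UB, so clear. */
		w->bitmap = (shift >= 64) ? 0 : w->bitmap << shift;
		w->bitmap |= 1ULL;	/* mark 'seq' itself as seen */
		w->top = seq;
		return true;
	}

	uint64_t off = w->top - seq;

	if (off >= 64)
		return false;		/* below the window bottom: drop */
	if (w->bitmap & (1ULL << off))
		return false;		/* already seen: replay, drop */
	w->bitmap |= 1ULL << off;
	return true;
}
```

With a window size of 64 this reproduces the five-packet pattern above: seq
WS*2 and WS+1 pass, seq WS is below the bottom and drops, and a repeated
mid-window sequence number drops on its second appearance.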
^ permalink raw reply [flat|nested] 184+ messages in thread
* [PATCH v6 7/7] test/security: add inline IPsec IPv6 flow label cases
2022-05-13 7:31 ` [PATCH v6 " Akhil Goyal
` (5 preceding siblings ...)
2022-05-13 7:32 ` [PATCH v6 6/7] test/security: add ESN and anti-replay cases for inline Akhil Goyal
@ 2022-05-13 7:32 ` Akhil Goyal
2022-05-24 7:22 ` [PATCH v7 0/7] app/test: add inline IPsec and reassembly cases Akhil Goyal
7 siblings, 0 replies; 184+ messages in thread
From: Akhil Goyal @ 2022-05-13 7:32 UTC (permalink / raw)
To: dev
Cc: thomas, david.marchand, hemant.agrawal, anoobj,
konstantin.ananyev, ciara.power, ferruh.yigit, andrew.rybchenko,
ndabilpuram, vattunuru, Fan Zhang
From: Vamsi Attunuru <vattunuru@marvell.com>
This patch adds unit tests for IPv6 flow label set and copy
operations.
Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
---
app/test/test_cryptodev_security_ipsec.c | 35 ++++++++++-
app/test/test_cryptodev_security_ipsec.h | 10 +++
app/test/test_security_inline_proto.c | 79 ++++++++++++++++++++++++
3 files changed, 123 insertions(+), 1 deletion(-)
diff --git a/app/test/test_cryptodev_security_ipsec.c b/app/test/test_cryptodev_security_ipsec.c
index 14c6ba681f..408bd0bc82 100644
--- a/app/test/test_cryptodev_security_ipsec.c
+++ b/app/test/test_cryptodev_security_ipsec.c
@@ -495,6 +495,10 @@ test_ipsec_td_prepare(const struct crypto_param *param1,
flags->dscp == TEST_IPSEC_COPY_DSCP_INNER_1)
td->ipsec_xform.options.copy_dscp = 1;
+ if (flags->flabel == TEST_IPSEC_COPY_FLABEL_INNER_0 ||
+ flags->flabel == TEST_IPSEC_COPY_FLABEL_INNER_1)
+ td->ipsec_xform.options.copy_flabel = 1;
+
if (flags->dec_ttl_or_hop_limit)
td->ipsec_xform.options.dec_ttl = 1;
}
@@ -933,6 +937,7 @@ test_ipsec_iph6_hdr_validate(const struct rte_ipv6_hdr *iph6,
const struct ipsec_test_flags *flags)
{
uint32_t vtc_flow;
+ uint32_t flabel;
uint8_t dscp;
if (!is_valid_ipv6_pkt(iph6)) {
@@ -959,6 +964,23 @@ test_ipsec_iph6_hdr_validate(const struct rte_ipv6_hdr *iph6,
}
}
+ flabel = vtc_flow & RTE_IPV6_HDR_FL_MASK;
+
+ if (flags->flabel == TEST_IPSEC_COPY_FLABEL_INNER_1 ||
+ flags->flabel == TEST_IPSEC_SET_FLABEL_1_INNER_0) {
+ if (flabel != TEST_IPSEC_FLABEL_VAL) {
+ printf("FLABEL value is not matching [exp: %x, actual: %x]\n",
+ TEST_IPSEC_FLABEL_VAL, flabel);
+ return -1;
+ }
+ } else {
+ if (flabel != 0) {
+ printf("FLABEL value is set [exp: 0, actual: %x]\n",
+ flabel);
+ return -1;
+ }
+ }
+
return 0;
}
@@ -1159,7 +1181,11 @@ test_ipsec_pkt_update(uint8_t *pkt, const struct ipsec_test_flags *flags)
if (flags->dscp == TEST_IPSEC_COPY_DSCP_INNER_1 ||
flags->dscp == TEST_IPSEC_SET_DSCP_0_INNER_1 ||
flags->dscp == TEST_IPSEC_COPY_DSCP_INNER_0 ||
- flags->dscp == TEST_IPSEC_SET_DSCP_1_INNER_0) {
+ flags->dscp == TEST_IPSEC_SET_DSCP_1_INNER_0 ||
+ flags->flabel == TEST_IPSEC_COPY_FLABEL_INNER_1 ||
+ flags->flabel == TEST_IPSEC_SET_FLABEL_0_INNER_1 ||
+ flags->flabel == TEST_IPSEC_COPY_FLABEL_INNER_0 ||
+ flags->flabel == TEST_IPSEC_SET_FLABEL_1_INNER_0) {
if (is_ipv4(iph4)) {
uint8_t tos;
@@ -1187,6 +1213,13 @@ test_ipsec_pkt_update(uint8_t *pkt, const struct ipsec_test_flags *flags)
else
vtc_flow &= ~RTE_IPV6_HDR_DSCP_MASK;
+ if (flags->flabel == TEST_IPSEC_COPY_FLABEL_INNER_1 ||
+ flags->flabel == TEST_IPSEC_SET_FLABEL_0_INNER_1)
+ vtc_flow |= (RTE_IPV6_HDR_FL_MASK &
+ (TEST_IPSEC_FLABEL_VAL << RTE_IPV6_HDR_FL_SHIFT));
+ else
+ vtc_flow &= ~RTE_IPV6_HDR_FL_MASK;
+
iph6->vtc_flow = rte_cpu_to_be_32(vtc_flow);
}
}
diff --git a/app/test/test_cryptodev_security_ipsec.h b/app/test/test_cryptodev_security_ipsec.h
index 0d9b5b6e2e..744dd64a9e 100644
--- a/app/test/test_cryptodev_security_ipsec.h
+++ b/app/test/test_cryptodev_security_ipsec.h
@@ -73,6 +73,15 @@ enum dscp_flags {
TEST_IPSEC_SET_DSCP_1_INNER_0,
};
+#define TEST_IPSEC_FLABEL_VAL 0x1234
+
+enum flabel_flags {
+ TEST_IPSEC_COPY_FLABEL_INNER_0 = 1,
+ TEST_IPSEC_COPY_FLABEL_INNER_1,
+ TEST_IPSEC_SET_FLABEL_0_INNER_1,
+ TEST_IPSEC_SET_FLABEL_1_INNER_0,
+};
+
struct ipsec_test_flags {
bool display_alg;
bool sa_expiry_pkts_soft;
@@ -92,6 +101,7 @@ struct ipsec_test_flags {
bool antireplay;
enum df_flags df;
enum dscp_flags dscp;
+ enum flabel_flags flabel;
bool dec_ttl_or_hop_limit;
bool ah;
};
diff --git a/app/test/test_security_inline_proto.c b/app/test/test_security_inline_proto.c
index 1c32141cd1..e082a1612f 100644
--- a/app/test/test_security_inline_proto.c
+++ b/app/test/test_security_inline_proto.c
@@ -162,6 +162,13 @@ create_inline_ipsec_session(struct ipsec_test_data *sa, uint16_t portid,
sess_conf->ipsec.tunnel.ipv6.dscp =
TEST_IPSEC_DSCP_VAL;
+ if (flags->flabel == TEST_IPSEC_SET_FLABEL_0_INNER_1)
+ sess_conf->ipsec.tunnel.ipv6.flabel = 0;
+
+ if (flags->flabel == TEST_IPSEC_SET_FLABEL_1_INNER_0)
+ sess_conf->ipsec.tunnel.ipv6.flabel =
+ TEST_IPSEC_FLABEL_VAL;
+
memcpy(&sess_conf->ipsec.tunnel.ipv6.src_addr, &src_v6,
sizeof(src_v6));
memcpy(&sess_conf->ipsec.tunnel.ipv6.dst_addr, &dst_v6,
@@ -1792,6 +1799,62 @@ test_ipsec_inline_proto_ipv6_set_dscp_1_inner_0(const void *data __rte_unused)
return test_ipsec_inline_proto_all(&flags);
}
+static int
+test_ipsec_inline_proto_ipv6_copy_flabel_inner_0(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.ipv6 = true;
+ flags.tunnel_ipv6 = true;
+ flags.flabel = TEST_IPSEC_COPY_FLABEL_INNER_0;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_ipv6_copy_flabel_inner_1(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.ipv6 = true;
+ flags.tunnel_ipv6 = true;
+ flags.flabel = TEST_IPSEC_COPY_FLABEL_INNER_1;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_ipv6_set_flabel_0_inner_1(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.ipv6 = true;
+ flags.tunnel_ipv6 = true;
+ flags.flabel = TEST_IPSEC_SET_FLABEL_0_INNER_1;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_ipv6_set_flabel_1_inner_0(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.ipv6 = true;
+ flags.tunnel_ipv6 = true;
+ flags.flabel = TEST_IPSEC_SET_FLABEL_1_INNER_0;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
static int
test_ipsec_inline_proto_ipv4_ttl_decrement(const void *data __rte_unused)
{
@@ -2201,6 +2264,22 @@ static struct unit_test_suite inline_ipsec_testsuite = {
"Tunnel header IPv6 set DSCP 1 (inner 0)",
ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
test_ipsec_inline_proto_ipv6_set_dscp_1_inner_0),
+ TEST_CASE_NAMED_ST(
+ "Tunnel header IPv6 copy FLABEL (inner 0)",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_ipv6_copy_flabel_inner_0),
+ TEST_CASE_NAMED_ST(
+ "Tunnel header IPv6 copy FLABEL (inner 1)",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_ipv6_copy_flabel_inner_1),
+ TEST_CASE_NAMED_ST(
+ "Tunnel header IPv6 set FLABEL 0 (inner 1)",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_ipv6_set_flabel_0_inner_1),
+ TEST_CASE_NAMED_ST(
+ "Tunnel header IPv6 set FLABEL 1 (inner 0)",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_ipv6_set_flabel_1_inner_0),
TEST_CASE_NAMED_ST(
"Tunnel header IPv4 decrement inner TTL",
ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
--
2.25.1
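The vtc_flow bit manipulation verified by the cases above can be sketched in
isolation. The mask and shift constants below mirror RTE_IPV6_HDR_FL_MASK and
RTE_IPV6_HDR_FL_SHIFT from rte_ip.h but are redefined locally so the sketch
stands alone; `flabel_get` and `flabel_set` are illustrative helpers, not DPDK
API.

```c
#include <stdint.h>

/* The IPv6 vtc_flow word (host order here) packs: version (top 4 bits),
 * traffic class (next 8 bits), flow label (low 20 bits). These constants
 * mirror RTE_IPV6_HDR_FL_MASK / RTE_IPV6_HDR_FL_SHIFT.
 */
#define FL_SHIFT 0
#define FL_MASK  ((1u << 20) - 1)

/* Extract the flow label, as test_ipsec_iph6_hdr_validate() does. */
static uint32_t flabel_get(uint32_t vtc_flow)
{
	return (vtc_flow & FL_MASK) >> FL_SHIFT;
}

/* Overwrite only the flow-label bits, leaving version and traffic
 * class untouched, as test_ipsec_pkt_update() does.
 */
static uint32_t flabel_set(uint32_t vtc_flow, uint32_t flabel)
{
	return (vtc_flow & ~FL_MASK) | (FL_MASK & (flabel << FL_SHIFT));
}
```

Note that rte_ipv6_hdr stores vtc_flow in network byte order, so the real test
code converts with rte_be_to_cpu_32()/rte_cpu_to_be_32() around these
operations.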
* [PATCH v7 0/7] app/test: add inline IPsec and reassembly cases
2022-05-13 7:31 ` [PATCH v6 " Akhil Goyal
` (6 preceding siblings ...)
2022-05-13 7:32 ` [PATCH v6 7/7] test/security: add inline IPsec IPv6 flow label cases Akhil Goyal
@ 2022-05-24 7:22 ` Akhil Goyal
2022-05-24 7:22 ` [PATCH v7 1/7] app/test: add unit cases for inline IPsec offload Akhil Goyal
` (7 more replies)
7 siblings, 8 replies; 184+ messages in thread
From: Akhil Goyal @ 2022-05-24 7:22 UTC (permalink / raw)
To: dev
Cc: thomas, david.marchand, hemant.agrawal, anoobj,
konstantin.ananyev, ciara.power, ferruh.yigit, andrew.rybchenko,
ndabilpuram, vattunuru, Akhil Goyal
IP reassembly offload was added in the last release.
This patchset adds a test app for unit testing IP reassembly
of inline inbound IPsec flows.
For testing IP reassembly, base inline IPsec support is also
added. The app was enhanced in v4 to handle more functional
unit test cases for inline IPsec, similar to Lookaside IPsec.
Functions from Lookaside mode are reused to verify the
functional cases.
Changes in v7:
- fixed compilation
Changes in v6:
- Addressed comments from Anoob.
changes in v5:
- removed soft/hard expiry patches which are deferred for next release
- skipped tests if no port is added.
- added release notes.
Changes in v4:
- rebased over next-crypto
- updated app to take benefit from Lookaside protocol
test functions.
- Added more functional cases
- Added soft and hard expiry event subtypes in ethdev
for testing SA soft and hard pkt/byte expiry events.
- reassembly cases are squashed in a single patch
Changes in v3:
- incorporated latest ethdev changes for reassembly.
- skipped build on windows as it needs rte_ipsec lib which is not
compiled on windows.
changes in v2:
- added IPsec burst mode case
- updated as per the latest ethdev changes.
Akhil Goyal (6):
app/test: add unit cases for inline IPsec offload
test/security: add inline inbound IPsec cases
test/security: add combined mode inline IPsec cases
test/security: add inline IPsec reassembly cases
test/security: add more inline IPsec functional cases
test/security: add ESN and anti-replay cases for inline
Vamsi Attunuru (1):
test/security: add inline IPsec IPv6 flow label cases
MAINTAINERS | 2 +-
app/test/meson.build | 1 +
app/test/test_cryptodev_security_ipsec.c | 35 +-
app/test/test_cryptodev_security_ipsec.h | 10 +
app/test/test_security_inline_proto.c | 2382 +++++++++++++++++
app/test/test_security_inline_proto_vectors.h | 704 +++++
doc/guides/rel_notes/release_22_07.rst | 5 +
7 files changed, 3137 insertions(+), 2 deletions(-)
create mode 100644 app/test/test_security_inline_proto.c
create mode 100644 app/test/test_security_inline_proto_vectors.h
--
2.25.1
* [PATCH v7 1/7] app/test: add unit cases for inline IPsec offload
2022-05-24 7:22 ` [PATCH v7 0/7] app/test: add inline IPsec and reassembly cases Akhil Goyal
@ 2022-05-24 7:22 ` Akhil Goyal
2022-05-24 7:22 ` [PATCH v7 2/7] test/security: add inline inbound IPsec cases Akhil Goyal
` (6 subsequent siblings)
7 siblings, 0 replies; 184+ messages in thread
From: Akhil Goyal @ 2022-05-24 7:22 UTC (permalink / raw)
To: dev
Cc: thomas, david.marchand, hemant.agrawal, anoobj,
konstantin.ananyev, ciara.power, ferruh.yigit, andrew.rybchenko,
ndabilpuram, vattunuru, Akhil Goyal, Fan Zhang
A new test suite is added to the test app to exercise inline IPsec
protocol offload. In this patch, predefined vectors from the Lookaside
IPsec tests are used to verify IPsec functionality without the need for
external traffic generators. The transmitted packet is looped back onto
the same interface, where it is received and matched against the
expected output.
The test suite can be extended further with other functional test
cases; this patch adds encap-only cases.
The test suite can be run using:
RTE> inline_ipsec_autotest
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
---
MAINTAINERS | 2 +-
app/test/meson.build | 1 +
app/test/test_security_inline_proto.c | 887 ++++++++++++++++++
app/test/test_security_inline_proto_vectors.h | 20 +
doc/guides/rel_notes/release_22_07.rst | 5 +
5 files changed, 914 insertions(+), 1 deletion(-)
create mode 100644 app/test/test_security_inline_proto.c
create mode 100644 app/test/test_security_inline_proto_vectors.h
diff --git a/MAINTAINERS b/MAINTAINERS
index 17a0559ee7..841279814b 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -441,7 +441,7 @@ M: Akhil Goyal <gakhil@marvell.com>
T: git://dpdk.org/next/dpdk-next-crypto
F: lib/security/
F: doc/guides/prog_guide/rte_security.rst
-F: app/test/test_security.c
+F: app/test/test_security*
Compression API - EXPERIMENTAL
M: Fan Zhang <roy.fan.zhang@intel.com>
diff --git a/app/test/meson.build b/app/test/meson.build
index 15591ce5cf..0f712680de 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -125,6 +125,7 @@ test_sources = files(
'test_rwlock.c',
'test_sched.c',
'test_security.c',
+ 'test_security_inline_proto.c',
'test_service_cores.c',
'test_spinlock.c',
'test_stack.c',
diff --git a/app/test/test_security_inline_proto.c b/app/test/test_security_inline_proto.c
new file mode 100644
index 0000000000..4b960ddfe0
--- /dev/null
+++ b/app/test/test_security_inline_proto.c
@@ -0,0 +1,887 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2022 Marvell.
+ */
+
+#include <stdio.h>
+#include <inttypes.h>
+
+#include <rte_ethdev.h>
+#include <rte_malloc.h>
+#include <rte_security.h>
+
+#include "test.h"
+#include "test_security_inline_proto_vectors.h"
+
+#ifdef RTE_EXEC_ENV_WINDOWS
+static int
+test_inline_ipsec(void)
+{
+ printf("Inline ipsec not supported on Windows, skipping test\n");
+ return TEST_SKIPPED;
+}
+
+#else
+
+#define NB_ETHPORTS_USED 1
+#define MEMPOOL_CACHE_SIZE 32
+#define MAX_PKT_BURST 32
+#define RTE_TEST_RX_DESC_DEFAULT 1024
+#define RTE_TEST_TX_DESC_DEFAULT 1024
+#define RTE_PORT_ALL (~(uint16_t)0x0)
+
+#define RX_PTHRESH 8 /**< Default values of RX prefetch threshold reg. */
+#define RX_HTHRESH 8 /**< Default values of RX host threshold reg. */
+#define RX_WTHRESH 0 /**< Default values of RX write-back threshold reg. */
+
+#define TX_PTHRESH 32 /**< Default values of TX prefetch threshold reg. */
+#define TX_HTHRESH 0 /**< Default values of TX host threshold reg. */
+#define TX_WTHRESH 0 /**< Default values of TX write-back threshold reg. */
+
+#define MAX_TRAFFIC_BURST 2048
+#define NB_MBUF 10240
+
+extern struct ipsec_test_data pkt_aes_128_gcm;
+extern struct ipsec_test_data pkt_aes_192_gcm;
+extern struct ipsec_test_data pkt_aes_256_gcm;
+extern struct ipsec_test_data pkt_aes_128_gcm_frag;
+extern struct ipsec_test_data pkt_aes_128_cbc_null;
+extern struct ipsec_test_data pkt_null_aes_xcbc;
+extern struct ipsec_test_data pkt_aes_128_cbc_hmac_sha384;
+extern struct ipsec_test_data pkt_aes_128_cbc_hmac_sha512;
+
+static struct rte_mempool *mbufpool;
+static struct rte_mempool *sess_pool;
+static struct rte_mempool *sess_priv_pool;
+/* ethernet addresses of ports */
+static struct rte_ether_addr ports_eth_addr[RTE_MAX_ETHPORTS];
+
+static struct rte_eth_conf port_conf = {
+ .rxmode = {
+ .mq_mode = RTE_ETH_MQ_RX_NONE,
+ .split_hdr_size = 0,
+ .offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM |
+ RTE_ETH_RX_OFFLOAD_SECURITY,
+ },
+ .txmode = {
+ .mq_mode = RTE_ETH_MQ_TX_NONE,
+ .offloads = RTE_ETH_TX_OFFLOAD_SECURITY |
+ RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE,
+ },
+ .lpbk_mode = 1, /* enable loopback */
+};
+
+static struct rte_eth_rxconf rx_conf = {
+ .rx_thresh = {
+ .pthresh = RX_PTHRESH,
+ .hthresh = RX_HTHRESH,
+ .wthresh = RX_WTHRESH,
+ },
+ .rx_free_thresh = 32,
+};
+
+static struct rte_eth_txconf tx_conf = {
+ .tx_thresh = {
+ .pthresh = TX_PTHRESH,
+ .hthresh = TX_HTHRESH,
+ .wthresh = TX_WTHRESH,
+ },
+ .tx_free_thresh = 32, /* Use PMD default values */
+ .tx_rs_thresh = 32, /* Use PMD default values */
+};
+
+uint16_t port_id;
+
+static uint64_t link_mbps;
+
+static struct rte_flow *default_flow[RTE_MAX_ETHPORTS];
+
+/* Create Inline IPsec session */
+static int
+create_inline_ipsec_session(struct ipsec_test_data *sa, uint16_t portid,
+ struct rte_security_session **sess, struct rte_security_ctx **ctx,
+ uint32_t *ol_flags, const struct ipsec_test_flags *flags,
+ struct rte_security_session_conf *sess_conf)
+{
+ uint16_t src_v6[8] = {0x2607, 0xf8b0, 0x400c, 0x0c03, 0x0000, 0x0000,
+ 0x0000, 0x001a};
+ uint16_t dst_v6[8] = {0x2001, 0x0470, 0xe5bf, 0xdead, 0x4957, 0x2174,
+ 0xe82c, 0x4887};
+ uint32_t src_v4 = rte_cpu_to_be_32(RTE_IPV4(192, 168, 1, 2));
+ uint32_t dst_v4 = rte_cpu_to_be_32(RTE_IPV4(192, 168, 1, 1));
+ struct rte_security_capability_idx sec_cap_idx;
+ const struct rte_security_capability *sec_cap;
+ enum rte_security_ipsec_sa_direction dir;
+ struct rte_security_ctx *sec_ctx;
+ uint32_t verify;
+
+ sess_conf->action_type = RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL;
+ sess_conf->protocol = RTE_SECURITY_PROTOCOL_IPSEC;
+ sess_conf->ipsec = sa->ipsec_xform;
+
+ dir = sa->ipsec_xform.direction;
+ verify = flags->tunnel_hdr_verify;
+
+ if ((dir == RTE_SECURITY_IPSEC_SA_DIR_INGRESS) && verify) {
+ if (verify == RTE_SECURITY_IPSEC_TUNNEL_VERIFY_SRC_DST_ADDR)
+ src_v4 += 1;
+ else if (verify == RTE_SECURITY_IPSEC_TUNNEL_VERIFY_DST_ADDR)
+ dst_v4 += 1;
+ }
+
+ if (sa->ipsec_xform.mode == RTE_SECURITY_IPSEC_SA_MODE_TUNNEL) {
+ if (sa->ipsec_xform.tunnel.type ==
+ RTE_SECURITY_IPSEC_TUNNEL_IPV4) {
+ memcpy(&sess_conf->ipsec.tunnel.ipv4.src_ip, &src_v4,
+ sizeof(src_v4));
+ memcpy(&sess_conf->ipsec.tunnel.ipv4.dst_ip, &dst_v4,
+ sizeof(dst_v4));
+
+ if (flags->df == TEST_IPSEC_SET_DF_0_INNER_1)
+ sess_conf->ipsec.tunnel.ipv4.df = 0;
+
+ if (flags->df == TEST_IPSEC_SET_DF_1_INNER_0)
+ sess_conf->ipsec.tunnel.ipv4.df = 1;
+
+ if (flags->dscp == TEST_IPSEC_SET_DSCP_0_INNER_1)
+ sess_conf->ipsec.tunnel.ipv4.dscp = 0;
+
+ if (flags->dscp == TEST_IPSEC_SET_DSCP_1_INNER_0)
+ sess_conf->ipsec.tunnel.ipv4.dscp =
+ TEST_IPSEC_DSCP_VAL;
+ } else {
+ if (flags->dscp == TEST_IPSEC_SET_DSCP_0_INNER_1)
+ sess_conf->ipsec.tunnel.ipv6.dscp = 0;
+
+ if (flags->dscp == TEST_IPSEC_SET_DSCP_1_INNER_0)
+ sess_conf->ipsec.tunnel.ipv6.dscp =
+ TEST_IPSEC_DSCP_VAL;
+
+ memcpy(&sess_conf->ipsec.tunnel.ipv6.src_addr, &src_v6,
+ sizeof(src_v6));
+ memcpy(&sess_conf->ipsec.tunnel.ipv6.dst_addr, &dst_v6,
+ sizeof(dst_v6));
+ }
+ }
+
+ /* Save SA as userdata for the security session. When
+ * the packet is received, this userdata will be
+ * retrieved using the metadata from the packet.
+ *
+ * The PMD is expected to set similar metadata for other
+ * operations, like rte_eth_event, which are tied to the
+ * security session. In such cases, the userdata can be
+ * retrieved to uniquely identify the associated security
+ * parameters.
+ */
+
+ sess_conf->userdata = (void *) sa;
+
+ sec_ctx = (struct rte_security_ctx *)rte_eth_dev_get_sec_ctx(portid);
+ if (sec_ctx == NULL) {
+ printf("Ethernet device doesn't support security features.\n");
+ return TEST_SKIPPED;
+ }
+
+ sec_cap_idx.action = RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL;
+ sec_cap_idx.protocol = RTE_SECURITY_PROTOCOL_IPSEC;
+ sec_cap_idx.ipsec.proto = sess_conf->ipsec.proto;
+ sec_cap_idx.ipsec.mode = sess_conf->ipsec.mode;
+ sec_cap_idx.ipsec.direction = sess_conf->ipsec.direction;
+ sec_cap = rte_security_capability_get(sec_ctx, &sec_cap_idx);
+ if (sec_cap == NULL) {
+ printf("No capabilities registered\n");
+ return TEST_SKIPPED;
+ }
+
+ if (sa->aead || sa->aes_gmac)
+ memcpy(&sess_conf->ipsec.salt, sa->salt.data,
+ RTE_MIN(sizeof(sess_conf->ipsec.salt), sa->salt.len));
+
+ /* Copy cipher session parameters */
+ if (sa->aead) {
+ rte_memcpy(sess_conf->crypto_xform, &sa->xform.aead,
+ sizeof(struct rte_crypto_sym_xform));
+ sess_conf->crypto_xform->aead.key.data = sa->key.data;
+ /* Verify crypto capabilities */
+ if (test_ipsec_crypto_caps_aead_verify(sec_cap,
+ sess_conf->crypto_xform) != 0) {
+ RTE_LOG(INFO, USER1,
+ "Crypto capabilities not supported\n");
+ return TEST_SKIPPED;
+ }
+ } else {
+ if (dir == RTE_SECURITY_IPSEC_SA_DIR_EGRESS) {
+ rte_memcpy(&sess_conf->crypto_xform->cipher,
+ &sa->xform.chain.cipher.cipher,
+ sizeof(struct rte_crypto_cipher_xform));
+
+ rte_memcpy(&sess_conf->crypto_xform->next->auth,
+ &sa->xform.chain.auth.auth,
+ sizeof(struct rte_crypto_auth_xform));
+ sess_conf->crypto_xform->cipher.key.data =
+ sa->key.data;
+ sess_conf->crypto_xform->next->auth.key.data =
+ sa->auth_key.data;
+ /* Verify crypto capabilities */
+ if (test_ipsec_crypto_caps_cipher_verify(sec_cap,
+ sess_conf->crypto_xform) != 0) {
+ RTE_LOG(INFO, USER1,
+ "Cipher crypto capabilities not supported\n");
+ return TEST_SKIPPED;
+ }
+
+ if (test_ipsec_crypto_caps_auth_verify(sec_cap,
+ sess_conf->crypto_xform->next) != 0) {
+ RTE_LOG(INFO, USER1,
+ "Auth crypto capabilities not supported\n");
+ return TEST_SKIPPED;
+ }
+ } else {
+ rte_memcpy(&sess_conf->crypto_xform->next->cipher,
+ &sa->xform.chain.cipher.cipher,
+ sizeof(struct rte_crypto_cipher_xform));
+ rte_memcpy(&sess_conf->crypto_xform->auth,
+ &sa->xform.chain.auth.auth,
+ sizeof(struct rte_crypto_auth_xform));
+ sess_conf->crypto_xform->auth.key.data =
+ sa->auth_key.data;
+ sess_conf->crypto_xform->next->cipher.key.data =
+ sa->key.data;
+
+ /* Verify crypto capabilities */
+ if (test_ipsec_crypto_caps_cipher_verify(sec_cap,
+ sess_conf->crypto_xform->next) != 0) {
+ RTE_LOG(INFO, USER1,
+ "Cipher crypto capabilities not supported\n");
+ return TEST_SKIPPED;
+ }
+
+ if (test_ipsec_crypto_caps_auth_verify(sec_cap,
+ sess_conf->crypto_xform) != 0) {
+ RTE_LOG(INFO, USER1,
+ "Auth crypto capabilities not supported\n");
+ return TEST_SKIPPED;
+ }
+ }
+ }
+
+ if (test_ipsec_sec_caps_verify(&sess_conf->ipsec, sec_cap, false) != 0)
+ return TEST_SKIPPED;
+
+ if ((sa->ipsec_xform.direction ==
+ RTE_SECURITY_IPSEC_SA_DIR_EGRESS) &&
+ (sa->ipsec_xform.options.iv_gen_disable == 1)) {
+ /* Set env variable when IV generation is disabled */
+ char arr[128];
+ int len = 0, j = 0;
+ int iv_len = (sa->aead || sa->aes_gmac) ? 8 : 16;
+
+ for (; j < iv_len; j++)
+ len += snprintf(arr+len, sizeof(arr) - len,
+ "0x%x, ", sa->iv.data[j]);
+ setenv("ETH_SEC_IV_OVR", arr, 1);
+ }
+
+ *sess = rte_security_session_create(sec_ctx,
+ sess_conf, sess_pool, sess_priv_pool);
+ if (*sess == NULL) {
+ printf("SEC Session init failed.\n");
+ return TEST_FAILED;
+ }
+
+ *ol_flags = sec_cap->ol_flags;
+ *ctx = sec_ctx;
+
+ return 0;
+}
+
+/* Check the link status of all ports for up to 3s, printing the final status */
+static void
+check_all_ports_link_status(uint16_t port_num, uint32_t port_mask)
+{
+#define CHECK_INTERVAL 100 /* 100ms */
+#define MAX_CHECK_TIME 30 /* 3s (30 * 100ms) in total */
+ uint16_t portid;
+ uint8_t count, all_ports_up, print_flag = 0;
+ struct rte_eth_link link;
+ int ret;
+ char link_status[RTE_ETH_LINK_MAX_STR_LEN];
+
+ printf("Checking link statuses...\n");
+ fflush(stdout);
+ for (count = 0; count <= MAX_CHECK_TIME; count++) {
+ all_ports_up = 1;
+ for (portid = 0; portid < port_num; portid++) {
+ if ((port_mask & (1 << portid)) == 0)
+ continue;
+ memset(&link, 0, sizeof(link));
+ ret = rte_eth_link_get_nowait(portid, &link);
+ if (ret < 0) {
+ all_ports_up = 0;
+ if (print_flag == 1)
+ printf("Port %u link get failed: %s\n",
+ portid, rte_strerror(-ret));
+ continue;
+ }
+
+ /* print link status if flag set */
+ if (print_flag == 1) {
+ if (link.link_status && link_mbps == 0)
+ link_mbps = link.link_speed;
+
+ rte_eth_link_to_str(link_status,
+ sizeof(link_status), &link);
+ printf("Port %d %s\n", portid, link_status);
+ continue;
+ }
+ /* clear all_ports_up flag if any link down */
+ if (link.link_status == RTE_ETH_LINK_DOWN) {
+ all_ports_up = 0;
+ break;
+ }
+ }
+ /* after finally printing all link status, get out */
+ if (print_flag == 1)
+ break;
+
+ if (all_ports_up == 0) {
+ fflush(stdout);
+ rte_delay_ms(CHECK_INTERVAL);
+ }
+
+ /* set the print_flag if all ports up or timeout */
+ if (all_ports_up == 1 || count == (MAX_CHECK_TIME - 1))
+ print_flag = 1;
+ }
+}
+
+static void
+print_ethaddr(const char *name, const struct rte_ether_addr *eth_addr)
+{
+ char buf[RTE_ETHER_ADDR_FMT_SIZE];
+ rte_ether_format_addr(buf, RTE_ETHER_ADDR_FMT_SIZE, eth_addr);
+ printf("%s%s", name, buf);
+}
+
+static void
+copy_buf_to_pkt_segs(const uint8_t *buf, unsigned int len,
+ struct rte_mbuf *pkt, unsigned int offset)
+{
+ unsigned int copied = 0;
+ unsigned int copy_len;
+ struct rte_mbuf *seg;
+ void *seg_buf;
+
+ seg = pkt;
+ while (offset >= seg->data_len) {
+ offset -= seg->data_len;
+ seg = seg->next;
+ }
+ copy_len = seg->data_len - offset;
+ seg_buf = rte_pktmbuf_mtod_offset(seg, char *, offset);
+	while (len > copy_len) {
+		rte_memcpy(seg_buf, buf + copied, (size_t) copy_len);
+		len -= copy_len;
+		copied += copy_len;
+		seg = seg->next;
+		seg_buf = rte_pktmbuf_mtod(seg, void *);
+		copy_len = seg->data_len;
+	}
+ rte_memcpy(seg_buf, buf + copied, (size_t) len);
+}
+
+static inline struct rte_mbuf *
+init_packet(struct rte_mempool *mp, const uint8_t *data, unsigned int len)
+{
+ struct rte_mbuf *pkt;
+
+ pkt = rte_pktmbuf_alloc(mp);
+ if (pkt == NULL)
+ return NULL;
+ if (((data[0] & 0xF0) >> 4) == IPVERSION) {
+ rte_memcpy(rte_pktmbuf_append(pkt, RTE_ETHER_HDR_LEN),
+ &dummy_ipv4_eth_hdr, RTE_ETHER_HDR_LEN);
+ pkt->l3_len = sizeof(struct rte_ipv4_hdr);
+ } else {
+ rte_memcpy(rte_pktmbuf_append(pkt, RTE_ETHER_HDR_LEN),
+ &dummy_ipv6_eth_hdr, RTE_ETHER_HDR_LEN);
+ pkt->l3_len = sizeof(struct rte_ipv6_hdr);
+ }
+ pkt->l2_len = RTE_ETHER_HDR_LEN;
+
+ if (pkt->buf_len > (len + RTE_ETHER_HDR_LEN))
+ rte_memcpy(rte_pktmbuf_append(pkt, len), data, len);
+ else
+ copy_buf_to_pkt_segs(data, len, pkt, RTE_ETHER_HDR_LEN);
+ return pkt;
+}
+
+static int
+init_mempools(unsigned int nb_mbuf)
+{
+ struct rte_security_ctx *sec_ctx;
+ uint16_t nb_sess = 512;
+ uint32_t sess_sz;
+ char s[64];
+
+ if (mbufpool == NULL) {
+ snprintf(s, sizeof(s), "mbuf_pool");
+ mbufpool = rte_pktmbuf_pool_create(s, nb_mbuf,
+ MEMPOOL_CACHE_SIZE, 0,
+ RTE_MBUF_DEFAULT_BUF_SIZE, SOCKET_ID_ANY);
+ if (mbufpool == NULL) {
+ printf("Cannot init mbuf pool\n");
+ return TEST_FAILED;
+ }
+ printf("Allocated mbuf pool\n");
+ }
+
+ sec_ctx = rte_eth_dev_get_sec_ctx(port_id);
+ if (sec_ctx == NULL) {
+ printf("Device does not support Security ctx\n");
+ return TEST_SKIPPED;
+ }
+ sess_sz = rte_security_session_get_size(sec_ctx);
+ if (sess_pool == NULL) {
+ snprintf(s, sizeof(s), "sess_pool");
+ sess_pool = rte_mempool_create(s, nb_sess, sess_sz,
+ MEMPOOL_CACHE_SIZE, 0,
+ NULL, NULL, NULL, NULL,
+ SOCKET_ID_ANY, 0);
+ if (sess_pool == NULL) {
+ printf("Cannot init sess pool\n");
+ return TEST_FAILED;
+ }
+ printf("Allocated sess pool\n");
+ }
+ if (sess_priv_pool == NULL) {
+ snprintf(s, sizeof(s), "sess_priv_pool");
+ sess_priv_pool = rte_mempool_create(s, nb_sess, sess_sz,
+ MEMPOOL_CACHE_SIZE, 0,
+ NULL, NULL, NULL, NULL,
+ SOCKET_ID_ANY, 0);
+ if (sess_priv_pool == NULL) {
+ printf("Cannot init sess_priv pool\n");
+ return TEST_FAILED;
+ }
+ printf("Allocated sess_priv pool\n");
+ }
+
+ return 0;
+}
+
+static int
+create_default_flow(uint16_t portid)
+{
+ struct rte_flow_action action[2];
+ struct rte_flow_item pattern[2];
+ struct rte_flow_attr attr = {0};
+ struct rte_flow_error err;
+ struct rte_flow *flow;
+ int ret;
+
+ /* Add the default rte_flow to enable SECURITY for all ESP packets */
+
+ pattern[0].type = RTE_FLOW_ITEM_TYPE_ESP;
+ pattern[0].spec = NULL;
+ pattern[0].mask = NULL;
+ pattern[0].last = NULL;
+ pattern[1].type = RTE_FLOW_ITEM_TYPE_END;
+
+ action[0].type = RTE_FLOW_ACTION_TYPE_SECURITY;
+ action[0].conf = NULL;
+ action[1].type = RTE_FLOW_ACTION_TYPE_END;
+ action[1].conf = NULL;
+
+ attr.ingress = 1;
+
+ ret = rte_flow_validate(portid, &attr, pattern, action, &err);
+ if (ret) {
+ printf("\nValidate flow failed, ret = %d\n", ret);
+ return -1;
+ }
+ flow = rte_flow_create(portid, &attr, pattern, action, &err);
+ if (flow == NULL) {
+ printf("\nDefault flow rule create failed\n");
+ return -1;
+ }
+
+ default_flow[portid] = flow;
+
+ return 0;
+}
+
+static void
+destroy_default_flow(uint16_t portid)
+{
+ struct rte_flow_error err;
+ int ret;
+
+ if (!default_flow[portid])
+ return;
+ ret = rte_flow_destroy(portid, default_flow[portid], &err);
+ if (ret) {
+ printf("\nDefault flow rule destroy failed\n");
+ return;
+ }
+ default_flow[portid] = NULL;
+}
+
+struct rte_mbuf **tx_pkts_burst;
+struct rte_mbuf **rx_pkts_burst;
+
+static int
+test_ipsec_inline_proto_process(struct ipsec_test_data *td,
+ struct ipsec_test_data *res_d,
+ int nb_pkts,
+ bool silent,
+ const struct ipsec_test_flags *flags)
+{
+ struct rte_security_session_conf sess_conf = {0};
+ struct rte_crypto_sym_xform cipher = {0};
+ struct rte_crypto_sym_xform auth = {0};
+ struct rte_crypto_sym_xform aead = {0};
+ struct rte_security_session *ses;
+ struct rte_security_ctx *ctx;
+ int nb_rx = 0, nb_sent;
+ uint32_t ol_flags;
+ int i, j = 0, ret;
+
+ memset(rx_pkts_burst, 0, sizeof(rx_pkts_burst[0]) * nb_pkts);
+
+ if (td->aead) {
+ sess_conf.crypto_xform = &aead;
+ } else {
+ if (td->ipsec_xform.direction ==
+ RTE_SECURITY_IPSEC_SA_DIR_EGRESS) {
+ sess_conf.crypto_xform = &cipher;
+ sess_conf.crypto_xform->type = RTE_CRYPTO_SYM_XFORM_CIPHER;
+ sess_conf.crypto_xform->next = &auth;
+ sess_conf.crypto_xform->next->type = RTE_CRYPTO_SYM_XFORM_AUTH;
+ } else {
+ sess_conf.crypto_xform = &auth;
+ sess_conf.crypto_xform->type = RTE_CRYPTO_SYM_XFORM_AUTH;
+ sess_conf.crypto_xform->next = &cipher;
+ sess_conf.crypto_xform->next->type = RTE_CRYPTO_SYM_XFORM_CIPHER;
+ }
+ }
+
+ /* Create Inline IPsec session. */
+ ret = create_inline_ipsec_session(td, port_id, &ses, &ctx,
+ &ol_flags, flags, &sess_conf);
+ if (ret)
+ return ret;
+
+ if (td->ipsec_xform.direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS) {
+ ret = create_default_flow(port_id);
+ if (ret)
+ goto out;
+ }
+ for (i = 0; i < nb_pkts; i++) {
+ tx_pkts_burst[i] = init_packet(mbufpool, td->input_text.data,
+ td->input_text.len);
+ if (tx_pkts_burst[i] == NULL) {
+ while (i--)
+ rte_pktmbuf_free(tx_pkts_burst[i]);
+ ret = TEST_FAILED;
+ goto out;
+ }
+
+ if (test_ipsec_pkt_update(rte_pktmbuf_mtod_offset(tx_pkts_burst[i],
+ uint8_t *, RTE_ETHER_HDR_LEN), flags)) {
+ while (i--)
+ rte_pktmbuf_free(tx_pkts_burst[i]);
+ ret = TEST_FAILED;
+ goto out;
+ }
+
+ if (td->ipsec_xform.direction == RTE_SECURITY_IPSEC_SA_DIR_EGRESS) {
+ if (ol_flags & RTE_SECURITY_TX_OLOAD_NEED_MDATA)
+ rte_security_set_pkt_metadata(ctx, ses,
+ tx_pkts_burst[i], NULL);
+ tx_pkts_burst[i]->ol_flags |= RTE_MBUF_F_TX_SEC_OFFLOAD;
+ }
+ }
+ /* Send packet to ethdev for inline IPsec processing. */
+ nb_sent = rte_eth_tx_burst(port_id, 0, tx_pkts_burst, nb_pkts);
+ if (nb_sent != nb_pkts) {
+ printf("\nUnable to TX %d packets", nb_pkts);
+ for ( ; nb_sent < nb_pkts; nb_sent++)
+ rte_pktmbuf_free(tx_pkts_burst[nb_sent]);
+ ret = TEST_FAILED;
+ goto out;
+ }
+
+ rte_pause();
+
+ /* Receive back packet on loopback interface. */
+ do {
+ rte_delay_ms(1);
+ nb_rx += rte_eth_rx_burst(port_id, 0, &rx_pkts_burst[nb_rx],
+ nb_sent - nb_rx);
+ if (nb_rx >= nb_sent)
+ break;
+ } while (j++ < 5 || nb_rx == 0);
+
+ if (nb_rx != nb_sent) {
+ printf("\nUnable to RX all %d packets", nb_sent);
+		while (nb_rx--)
+ rte_pktmbuf_free(rx_pkts_burst[nb_rx]);
+ ret = TEST_FAILED;
+ goto out;
+ }
+
+ for (i = 0; i < nb_rx; i++) {
+ rte_pktmbuf_adj(rx_pkts_burst[i], RTE_ETHER_HDR_LEN);
+
+ ret = test_ipsec_post_process(rx_pkts_burst[i], td,
+ res_d, silent, flags);
+ if (ret != TEST_SUCCESS) {
+ for ( ; i < nb_rx; i++)
+ rte_pktmbuf_free(rx_pkts_burst[i]);
+ goto out;
+ }
+
+ ret = test_ipsec_stats_verify(ctx, ses, flags,
+ td->ipsec_xform.direction);
+ if (ret != TEST_SUCCESS) {
+ for ( ; i < nb_rx; i++)
+ rte_pktmbuf_free(rx_pkts_burst[i]);
+ goto out;
+ }
+
+ rte_pktmbuf_free(rx_pkts_burst[i]);
+ rx_pkts_burst[i] = NULL;
+ }
+
+out:
+ if (td->ipsec_xform.direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS)
+ destroy_default_flow(port_id);
+
+ /* Destroy session so that other cases can create the session again */
+ rte_security_session_destroy(ctx, ses);
+ ses = NULL;
+
+ return ret;
+}
+
+static int
+ut_setup_inline_ipsec(void)
+{
+ int ret;
+
+ /* Start device */
+ ret = rte_eth_dev_start(port_id);
+ if (ret < 0) {
+ printf("rte_eth_dev_start: err=%d, port=%d\n",
+ ret, port_id);
+ return ret;
+ }
+ /* always enable promiscuous */
+ ret = rte_eth_promiscuous_enable(port_id);
+ if (ret != 0) {
+ printf("rte_eth_promiscuous_enable: err=%s, port=%d\n",
+ rte_strerror(-ret), port_id);
+ return ret;
+ }
+
+ check_all_ports_link_status(1, RTE_PORT_ALL);
+
+ return 0;
+}
+
+static void
+ut_teardown_inline_ipsec(void)
+{
+ uint16_t portid;
+ int ret;
+
+ /* port tear down */
+ RTE_ETH_FOREACH_DEV(portid) {
+ ret = rte_eth_dev_stop(portid);
+ if (ret != 0)
+ printf("rte_eth_dev_stop: err=%s, port=%u\n",
+ rte_strerror(-ret), portid);
+ }
+}
+
+static int
+inline_ipsec_testsuite_setup(void)
+{
+ uint16_t nb_rxd;
+ uint16_t nb_txd;
+ uint16_t nb_ports;
+ int ret;
+ uint16_t nb_rx_queue = 1, nb_tx_queue = 1;
+
+ printf("Start inline IPsec test.\n");
+
+ nb_ports = rte_eth_dev_count_avail();
+ if (nb_ports < NB_ETHPORTS_USED) {
+ printf("At least %u port(s) used for test\n",
+ NB_ETHPORTS_USED);
+ return TEST_SKIPPED;
+ }
+
+ ret = init_mempools(NB_MBUF);
+ if (ret)
+ return ret;
+
+ if (tx_pkts_burst == NULL) {
+ tx_pkts_burst = (struct rte_mbuf **)rte_calloc("tx_buff",
+ MAX_TRAFFIC_BURST,
+ sizeof(void *),
+ RTE_CACHE_LINE_SIZE);
+ if (!tx_pkts_burst)
+ return TEST_FAILED;
+
+ rx_pkts_burst = (struct rte_mbuf **)rte_calloc("rx_buff",
+ MAX_TRAFFIC_BURST,
+ sizeof(void *),
+ RTE_CACHE_LINE_SIZE);
+ if (!rx_pkts_burst)
+ return TEST_FAILED;
+ }
+
+ printf("Generate %d packets\n", MAX_TRAFFIC_BURST);
+
+ nb_rxd = RTE_TEST_RX_DESC_DEFAULT;
+ nb_txd = RTE_TEST_TX_DESC_DEFAULT;
+
+ /* configuring port 0 for the test is enough */
+ port_id = 0;
+ /* port configure */
+ ret = rte_eth_dev_configure(port_id, nb_rx_queue,
+ nb_tx_queue, &port_conf);
+ if (ret < 0) {
+ printf("Cannot configure device: err=%d, port=%d\n",
+ ret, port_id);
+ return ret;
+ }
+ ret = rte_eth_macaddr_get(port_id, &ports_eth_addr[port_id]);
+ if (ret < 0) {
+ printf("Cannot get mac address: err=%d, port=%d\n",
+ ret, port_id);
+ return ret;
+ }
+ printf("Port %u ", port_id);
+ print_ethaddr("Address:", &ports_eth_addr[port_id]);
+ printf("\n");
+
+ /* tx queue setup */
+ ret = rte_eth_tx_queue_setup(port_id, 0, nb_txd,
+ SOCKET_ID_ANY, &tx_conf);
+ if (ret < 0) {
+ printf("rte_eth_tx_queue_setup: err=%d, port=%d\n",
+ ret, port_id);
+ return ret;
+ }
+	/* rx queue setup */
+ ret = rte_eth_rx_queue_setup(port_id, 0, nb_rxd, SOCKET_ID_ANY,
+ &rx_conf, mbufpool);
+ if (ret < 0) {
+ printf("rte_eth_rx_queue_setup: err=%d, port=%d\n",
+ ret, port_id);
+ return ret;
+ }
+ test_ipsec_alg_list_populate();
+
+ return 0;
+}
+
+static void
+inline_ipsec_testsuite_teardown(void)
+{
+ uint16_t portid;
+ int ret;
+
+ /* port tear down */
+ RTE_ETH_FOREACH_DEV(portid) {
+ ret = rte_eth_dev_reset(portid);
+ if (ret != 0)
+ printf("rte_eth_dev_reset: err=%s, port=%u\n",
+				rte_strerror(-ret), portid);
+ }
+}
+
+static int
+test_ipsec_inline_proto_known_vec(const void *test_data)
+{
+ struct ipsec_test_data td_outb;
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ memcpy(&td_outb, test_data, sizeof(td_outb));
+
+ if (td_outb.aead ||
+ td_outb.xform.chain.cipher.cipher.algo != RTE_CRYPTO_CIPHER_NULL) {
+ /* Disable IV gen to be able to test with known vectors */
+ td_outb.ipsec_xform.options.iv_gen_disable = 1;
+ }
+
+ return test_ipsec_inline_proto_process(&td_outb, NULL, 1,
+ false, &flags);
+}
+
+static struct unit_test_suite inline_ipsec_testsuite = {
+ .suite_name = "Inline IPsec Ethernet Device Unit Test Suite",
+ .setup = inline_ipsec_testsuite_setup,
+ .teardown = inline_ipsec_testsuite_teardown,
+ .unit_test_cases = {
+ TEST_CASE_NAMED_WITH_DATA(
+ "Outbound known vector (ESP tunnel mode IPv4 AES-GCM 128)",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_known_vec, &pkt_aes_128_gcm),
+ TEST_CASE_NAMED_WITH_DATA(
+ "Outbound known vector (ESP tunnel mode IPv4 AES-GCM 192)",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_known_vec, &pkt_aes_192_gcm),
+ TEST_CASE_NAMED_WITH_DATA(
+ "Outbound known vector (ESP tunnel mode IPv4 AES-GCM 256)",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_known_vec, &pkt_aes_256_gcm),
+ TEST_CASE_NAMED_WITH_DATA(
+ "Outbound known vector (ESP tunnel mode IPv4 AES-CBC 128 HMAC-SHA256 [16B ICV])",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_known_vec,
+ &pkt_aes_128_cbc_hmac_sha256),
+ TEST_CASE_NAMED_WITH_DATA(
+ "Outbound known vector (ESP tunnel mode IPv4 AES-CBC 128 HMAC-SHA384 [24B ICV])",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_known_vec,
+ &pkt_aes_128_cbc_hmac_sha384),
+ TEST_CASE_NAMED_WITH_DATA(
+ "Outbound known vector (ESP tunnel mode IPv4 AES-CBC 128 HMAC-SHA512 [32B ICV])",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_known_vec,
+ &pkt_aes_128_cbc_hmac_sha512),
+ TEST_CASE_NAMED_WITH_DATA(
+ "Outbound known vector (ESP tunnel mode IPv6 AES-GCM 128)",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_known_vec, &pkt_aes_256_gcm_v6),
+ TEST_CASE_NAMED_WITH_DATA(
+ "Outbound known vector (ESP tunnel mode IPv6 AES-CBC 128 HMAC-SHA256 [16B ICV])",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_known_vec,
+ &pkt_aes_128_cbc_hmac_sha256_v6),
+ TEST_CASE_NAMED_WITH_DATA(
+ "Outbound known vector (ESP tunnel mode IPv4 NULL AES-XCBC-MAC [12B ICV])",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_known_vec,
+ &pkt_null_aes_xcbc),
+
+ TEST_CASES_END() /**< NULL terminate unit test array */
+ },
+};
+
+
+static int
+test_inline_ipsec(void)
+{
+ return unit_test_suite_runner(&inline_ipsec_testsuite);
+}
+
+#endif /* !RTE_EXEC_ENV_WINDOWS */
+
+REGISTER_TEST_COMMAND(inline_ipsec_autotest, test_inline_ipsec);
diff --git a/app/test/test_security_inline_proto_vectors.h b/app/test/test_security_inline_proto_vectors.h
new file mode 100644
index 0000000000..d1074da36a
--- /dev/null
+++ b/app/test/test_security_inline_proto_vectors.h
@@ -0,0 +1,20 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2022 Marvell.
+ */
+#ifndef _TEST_INLINE_IPSEC_REASSEMBLY_VECTORS_H_
+#define _TEST_INLINE_IPSEC_REASSEMBLY_VECTORS_H_
+
+#include "test_cryptodev_security_ipsec.h"
+
+uint8_t dummy_ipv4_eth_hdr[] = {
+ /* ETH */
+ 0xf1, 0xf1, 0xf1, 0xf1, 0xf1, 0xf1,
+ 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0x08, 0x00,
+};
+uint8_t dummy_ipv6_eth_hdr[] = {
+ /* ETH */
+ 0xf1, 0xf1, 0xf1, 0xf1, 0xf1, 0xf1,
+ 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0x86, 0xdd,
+};
+
+#endif
diff --git a/doc/guides/rel_notes/release_22_07.rst b/doc/guides/rel_notes/release_22_07.rst
index e49cacecef..46d60f7369 100644
--- a/doc/guides/rel_notes/release_22_07.rst
+++ b/doc/guides/rel_notes/release_22_07.rst
@@ -104,6 +104,11 @@ New Features
* ``RTE_EVENT_QUEUE_ATTR_WEIGHT``
* ``RTE_EVENT_QUEUE_ATTR_AFFINITY``
+* **Added security inline protocol (IPsec) tests in dpdk-test.**
+
+ Added various functional test cases in dpdk-test to verify
+ inline IPsec protocol offload using loopback interface.
+
Removed Items
-------------
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* [PATCH v7 2/7] test/security: add inline inbound IPsec cases
2022-05-24 7:22 ` [PATCH v7 0/7] app/test: add inline IPsec and reassembly cases Akhil Goyal
2022-05-24 7:22 ` [PATCH v7 1/7] app/test: add unit cases for inline IPsec offload Akhil Goyal
@ 2022-05-24 7:22 ` Akhil Goyal
2022-05-24 7:22 ` [PATCH v7 3/7] test/security: add combined mode inline " Akhil Goyal
` (5 subsequent siblings)
7 siblings, 0 replies; 184+ messages in thread
From: Akhil Goyal @ 2022-05-24 7:22 UTC (permalink / raw)
To: dev
Cc: thomas, david.marchand, hemant.agrawal, anoobj,
konstantin.ananyev, ciara.power, ferruh.yigit, andrew.rybchenko,
ndabilpuram, vattunuru, Akhil Goyal, Fan Zhang
Added test cases for inline inbound protocol offload
verification with known test vectors from lookaside mode.
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
---
app/test/test_security_inline_proto.c | 65 +++++++++++++++++++++++++++
1 file changed, 65 insertions(+)
diff --git a/app/test/test_security_inline_proto.c b/app/test/test_security_inline_proto.c
index 4b960ddfe0..4a95b25a0b 100644
--- a/app/test/test_security_inline_proto.c
+++ b/app/test/test_security_inline_proto.c
@@ -824,6 +824,24 @@ test_ipsec_inline_proto_known_vec(const void *test_data)
false, &flags);
}
+static int
+test_ipsec_inline_proto_known_vec_inb(const void *test_data)
+{
+ const struct ipsec_test_data *td = test_data;
+ struct ipsec_test_flags flags;
+ struct ipsec_test_data td_inb;
+
+ memset(&flags, 0, sizeof(flags));
+
+ if (td->ipsec_xform.direction == RTE_SECURITY_IPSEC_SA_DIR_EGRESS)
+ test_ipsec_td_in_from_out(td, &td_inb);
+ else
+ memcpy(&td_inb, td, sizeof(td_inb));
+
+ return test_ipsec_inline_proto_process(&td_inb, NULL, 1, false, &flags);
+}
+
+
static struct unit_test_suite inline_ipsec_testsuite = {
.suite_name = "Inline IPsec Ethernet Device Unit Test Suite",
.setup = inline_ipsec_testsuite_setup,
@@ -870,6 +888,53 @@ static struct unit_test_suite inline_ipsec_testsuite = {
ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
test_ipsec_inline_proto_known_vec,
&pkt_null_aes_xcbc),
+ TEST_CASE_NAMED_WITH_DATA(
+ "Inbound known vector (ESP tunnel mode IPv4 AES-GCM 128)",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_known_vec_inb, &pkt_aes_128_gcm),
+ TEST_CASE_NAMED_WITH_DATA(
+ "Inbound known vector (ESP tunnel mode IPv4 AES-GCM 192)",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_known_vec_inb, &pkt_aes_192_gcm),
+ TEST_CASE_NAMED_WITH_DATA(
+ "Inbound known vector (ESP tunnel mode IPv4 AES-GCM 256)",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_known_vec_inb, &pkt_aes_256_gcm),
+ TEST_CASE_NAMED_WITH_DATA(
+ "Inbound known vector (ESP tunnel mode IPv4 AES-CBC 128)",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_known_vec_inb, &pkt_aes_128_cbc_null),
+ TEST_CASE_NAMED_WITH_DATA(
+ "Inbound known vector (ESP tunnel mode IPv4 AES-CBC 128 HMAC-SHA256 [16B ICV])",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_known_vec_inb,
+ &pkt_aes_128_cbc_hmac_sha256),
+ TEST_CASE_NAMED_WITH_DATA(
+ "Inbound known vector (ESP tunnel mode IPv4 AES-CBC 128 HMAC-SHA384 [24B ICV])",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_known_vec_inb,
+ &pkt_aes_128_cbc_hmac_sha384),
+ TEST_CASE_NAMED_WITH_DATA(
+ "Inbound known vector (ESP tunnel mode IPv4 AES-CBC 128 HMAC-SHA512 [32B ICV])",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_known_vec_inb,
+ &pkt_aes_128_cbc_hmac_sha512),
+ TEST_CASE_NAMED_WITH_DATA(
+ "Inbound known vector (ESP tunnel mode IPv6 AES-GCM 128)",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_known_vec_inb, &pkt_aes_256_gcm_v6),
+ TEST_CASE_NAMED_WITH_DATA(
+ "Inbound known vector (ESP tunnel mode IPv6 AES-CBC 128 HMAC-SHA256 [16B ICV])",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_known_vec_inb,
+ &pkt_aes_128_cbc_hmac_sha256_v6),
+ TEST_CASE_NAMED_WITH_DATA(
+ "Inbound known vector (ESP tunnel mode IPv4 NULL AES-XCBC-MAC [12B ICV])",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_known_vec_inb,
+ &pkt_null_aes_xcbc),
+
+
TEST_CASES_END() /**< NULL terminate unit test array */
},
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* [PATCH v7 3/7] test/security: add combined mode inline IPsec cases
2022-05-24 7:22 ` [PATCH v7 0/7] app/test: add inline IPsec and reassembly cases Akhil Goyal
2022-05-24 7:22 ` [PATCH v7 1/7] app/test: add unit cases for inline IPsec offload Akhil Goyal
2022-05-24 7:22 ` [PATCH v7 2/7] test/security: add inline inbound IPsec cases Akhil Goyal
@ 2022-05-24 7:22 ` Akhil Goyal
2022-05-24 7:22 ` [PATCH v7 4/7] test/security: add inline IPsec reassembly cases Akhil Goyal
` (4 subsequent siblings)
7 siblings, 0 replies; 184+ messages in thread
From: Akhil Goyal @ 2022-05-24 7:22 UTC (permalink / raw)
To: dev
Cc: thomas, david.marchand, hemant.agrawal, anoobj,
konstantin.ananyev, ciara.power, ferruh.yigit, andrew.rybchenko,
ndabilpuram, vattunuru, Akhil Goyal, Fan Zhang
Added combined encap and decap test cases for various algorithm
combinations.
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
---
app/test/test_security_inline_proto.c | 102 ++++++++++++++++++++++++++
1 file changed, 102 insertions(+)
diff --git a/app/test/test_security_inline_proto.c b/app/test/test_security_inline_proto.c
index 4a95b25a0b..a44a4f9b04 100644
--- a/app/test/test_security_inline_proto.c
+++ b/app/test/test_security_inline_proto.c
@@ -665,6 +665,92 @@ test_ipsec_inline_proto_process(struct ipsec_test_data *td,
return ret;
}
+static int
+test_ipsec_inline_proto_all(const struct ipsec_test_flags *flags)
+{
+ struct ipsec_test_data td_outb;
+ struct ipsec_test_data td_inb;
+ unsigned int i, nb_pkts = 1, pass_cnt = 0, fail_cnt = 0;
+ int ret;
+
+ if (flags->iv_gen || flags->sa_expiry_pkts_soft ||
+ flags->sa_expiry_pkts_hard)
+ nb_pkts = IPSEC_TEST_PACKETS_MAX;
+
+ for (i = 0; i < RTE_DIM(alg_list); i++) {
+ test_ipsec_td_prepare(alg_list[i].param1,
+ alg_list[i].param2,
+ flags, &td_outb, 1);
+
+ if (!td_outb.aead) {
+ enum rte_crypto_cipher_algorithm cipher_alg;
+ enum rte_crypto_auth_algorithm auth_alg;
+
+ cipher_alg = td_outb.xform.chain.cipher.cipher.algo;
+ auth_alg = td_outb.xform.chain.auth.auth.algo;
+
+ if (td_outb.aes_gmac && cipher_alg != RTE_CRYPTO_CIPHER_NULL)
+ continue;
+
+ /* ICV is not applicable for NULL auth */
+ if (flags->icv_corrupt &&
+ auth_alg == RTE_CRYPTO_AUTH_NULL)
+ continue;
+
+ /* IV is not applicable for NULL cipher */
+ if (flags->iv_gen &&
+ cipher_alg == RTE_CRYPTO_CIPHER_NULL)
+ continue;
+ }
+
+ if (flags->udp_encap)
+ td_outb.ipsec_xform.options.udp_encap = 1;
+
+ ret = test_ipsec_inline_proto_process(&td_outb, &td_inb, nb_pkts,
+ false, flags);
+ if (ret == TEST_SKIPPED)
+ continue;
+
+ if (ret == TEST_FAILED) {
+ printf("\n TEST FAILED");
+ test_ipsec_display_alg(alg_list[i].param1,
+ alg_list[i].param2);
+ fail_cnt++;
+ continue;
+ }
+
+ test_ipsec_td_update(&td_inb, &td_outb, 1, flags);
+
+ ret = test_ipsec_inline_proto_process(&td_inb, NULL, nb_pkts,
+ false, flags);
+ if (ret == TEST_SKIPPED)
+ continue;
+
+ if (ret == TEST_FAILED) {
+ printf("\n TEST FAILED");
+ test_ipsec_display_alg(alg_list[i].param1,
+ alg_list[i].param2);
+ fail_cnt++;
+ continue;
+ }
+
+ if (flags->display_alg)
+ test_ipsec_display_alg(alg_list[i].param1,
+ alg_list[i].param2);
+
+ pass_cnt++;
+ }
+
+ printf("Tests passed: %d, failed: %d", pass_cnt, fail_cnt);
+ if (fail_cnt > 0)
+ return TEST_FAILED;
+ if (pass_cnt > 0)
+ return TEST_SUCCESS;
+ else
+ return TEST_SKIPPED;
+}
+
+
static int
ut_setup_inline_ipsec(void)
{
@@ -841,6 +927,17 @@ test_ipsec_inline_proto_known_vec_inb(const void *test_data)
return test_ipsec_inline_proto_process(&td_inb, NULL, 1, false, &flags);
}
+static int
+test_ipsec_inline_proto_display_list(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.display_alg = true;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
static struct unit_test_suite inline_ipsec_testsuite = {
.suite_name = "Inline IPsec Ethernet Device Unit Test Suite",
@@ -934,6 +1031,11 @@ static struct unit_test_suite inline_ipsec_testsuite = {
test_ipsec_inline_proto_known_vec_inb,
&pkt_null_aes_xcbc),
+ TEST_CASE_NAMED_ST(
+ "Combined test alg list",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_display_list),
+
TEST_CASES_END() /**< NULL terminate unit test array */
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* [PATCH v7 4/7] test/security: add inline IPsec reassembly cases
2022-05-24 7:22 ` [PATCH v7 0/7] app/test: add inline IPsec and reassembly cases Akhil Goyal
` (2 preceding siblings ...)
2022-05-24 7:22 ` [PATCH v7 3/7] test/security: add combined mode inline " Akhil Goyal
@ 2022-05-24 7:22 ` Akhil Goyal
2022-05-24 7:22 ` [PATCH v7 5/7] test/security: add more inline IPsec functional cases Akhil Goyal
` (3 subsequent siblings)
7 siblings, 0 replies; 184+ messages in thread
From: Akhil Goyal @ 2022-05-24 7:22 UTC (permalink / raw)
To: dev
Cc: thomas, david.marchand, hemant.agrawal, anoobj,
konstantin.ananyev, ciara.power, ferruh.yigit, andrew.rybchenko,
ndabilpuram, vattunuru, Akhil Goyal, Fan Zhang
Added unit test cases for IP reassembly of inline IPsec
inbound scenarios.
In these cases, known test vectors of fragments are first
processed for inline outbound processing and then received
back on loopback interface for inbound processing along with
IP reassembly of the corresponding decrypted packets.
The resultant plain text reassembled packet is compared with
original unfragmented packet.
In this patch, cases are added for 2/4/5 fragments for both
IPv4 and IPv6 packets. A few negative test cases are also
added, such as incomplete fragments, out-of-order fragments,
and duplicate fragments.
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
---
app/test/test_security_inline_proto.c | 423 ++++++++++-
app/test/test_security_inline_proto_vectors.h | 684 ++++++++++++++++++
2 files changed, 1106 insertions(+), 1 deletion(-)
diff --git a/app/test/test_security_inline_proto.c b/app/test/test_security_inline_proto.c
index a44a4f9b04..e3e25af619 100644
--- a/app/test/test_security_inline_proto.c
+++ b/app/test/test_security_inline_proto.c
@@ -41,6 +41,9 @@ test_inline_ipsec(void)
#define MAX_TRAFFIC_BURST 2048
#define NB_MBUF 10240
+#define ENCAP_DECAP_BURST_SZ 33
+#define APP_REASS_TIMEOUT 10
+
extern struct ipsec_test_data pkt_aes_128_gcm;
extern struct ipsec_test_data pkt_aes_192_gcm;
extern struct ipsec_test_data pkt_aes_256_gcm;
@@ -94,6 +97,8 @@ uint16_t port_id;
static uint64_t link_mbps;
+static int ip_reassembly_dynfield_offset = -1;
+
static struct rte_flow *default_flow[RTE_MAX_ETHPORTS];
/* Create Inline IPsec session */
@@ -530,6 +535,349 @@ destroy_default_flow(uint16_t portid)
struct rte_mbuf **tx_pkts_burst;
struct rte_mbuf **rx_pkts_burst;
+static int
+compare_pkt_data(struct rte_mbuf *m, uint8_t *ref, unsigned int tot_len)
+{
+ unsigned int len;
+ unsigned int nb_segs = m->nb_segs;
+ unsigned int matched = 0;
+ struct rte_mbuf *save = m;
+
+ while (m) {
+ len = tot_len;
+ if (len > m->data_len)
+ len = m->data_len;
+ if (len != 0) {
+ if (memcmp(rte_pktmbuf_mtod(m, char *),
+ ref + matched, len)) {
+ printf("\n====Reassembly case failed: Data Mismatch");
+ rte_hexdump(stdout, "Reassembled",
+ rte_pktmbuf_mtod(m, char *),
+ len);
+ rte_hexdump(stdout, "reference",
+ ref + matched,
+ len);
+ return TEST_FAILED;
+ }
+ }
+ tot_len -= len;
+ matched += len;
+ m = m->next;
+ }
+
+ if (tot_len) {
+ printf("\n====Reassembly case failed: Data Missing %u",
+ tot_len);
+ printf("\n====nb_segs %u, tot_len %u", nb_segs, tot_len);
+ rte_pktmbuf_dump(stderr, save, -1);
+ return TEST_FAILED;
+ }
+ return TEST_SUCCESS;
+}
+
+static inline bool
+is_ip_reassembly_incomplete(struct rte_mbuf *mbuf)
+{
+ static uint64_t ip_reassembly_dynflag;
+ int ip_reassembly_dynflag_offset;
+
+ if (ip_reassembly_dynflag == 0) {
+ ip_reassembly_dynflag_offset = rte_mbuf_dynflag_lookup(
+ RTE_MBUF_DYNFLAG_IP_REASSEMBLY_INCOMPLETE_NAME, NULL);
+ if (ip_reassembly_dynflag_offset < 0)
+ return false;
+ ip_reassembly_dynflag = RTE_BIT64(ip_reassembly_dynflag_offset);
+ }
+
+ return (mbuf->ol_flags & ip_reassembly_dynflag) != 0;
+}
+
+static void
+free_mbuf(struct rte_mbuf *mbuf)
+{
+ rte_eth_ip_reassembly_dynfield_t dynfield;
+
+ if (!mbuf)
+ return;
+
+ if (!is_ip_reassembly_incomplete(mbuf)) {
+ rte_pktmbuf_free(mbuf);
+ } else {
+ if (ip_reassembly_dynfield_offset < 0)
+ return;
+
+ while (mbuf) {
+ dynfield = *RTE_MBUF_DYNFIELD(mbuf,
+ ip_reassembly_dynfield_offset,
+ rte_eth_ip_reassembly_dynfield_t *);
+ rte_pktmbuf_free(mbuf);
+ mbuf = dynfield.next_frag;
+ }
+ }
+}
+
+
+static int
+get_and_verify_incomplete_frags(struct rte_mbuf *mbuf,
+ struct reassembly_vector *vector)
+{
+ rte_eth_ip_reassembly_dynfield_t *dynfield[MAX_PKT_BURST];
+ int j = 0, ret;
+ /**
+ * IP reassembly offload is incomplete, and fragments are listed in
+ * dynfield which can be reassembled in SW.
+ */
+ printf("\nHW IP Reassembly is not complete; attempt SW IP Reassembly,"
+ "\nMatching with original frags.");
+
+ if (ip_reassembly_dynfield_offset < 0)
+ return -1;
+
+ printf("\ncomparing frag: %d", j);
+ /* Skip Ethernet header comparison */
+ rte_pktmbuf_adj(mbuf, RTE_ETHER_HDR_LEN);
+ ret = compare_pkt_data(mbuf, vector->frags[j]->data,
+ vector->frags[j]->len);
+ if (ret)
+ return ret;
+ j++;
+ dynfield[j] = RTE_MBUF_DYNFIELD(mbuf, ip_reassembly_dynfield_offset,
+ rte_eth_ip_reassembly_dynfield_t *);
+ printf("\ncomparing frag: %d", j);
+ /* Skip Ethernet header comparison */
+ rte_pktmbuf_adj(dynfield[j]->next_frag, RTE_ETHER_HDR_LEN);
+ ret = compare_pkt_data(dynfield[j]->next_frag, vector->frags[j]->data,
+ vector->frags[j]->len);
+ if (ret)
+ return ret;
+
+ while ((dynfield[j]->nb_frags > 1) &&
+ is_ip_reassembly_incomplete(dynfield[j]->next_frag)) {
+ j++;
+ dynfield[j] = RTE_MBUF_DYNFIELD(dynfield[j-1]->next_frag,
+ ip_reassembly_dynfield_offset,
+ rte_eth_ip_reassembly_dynfield_t *);
+ printf("\ncomparing frag: %d", j);
+ /* Skip Ethernet header comparison */
+ rte_pktmbuf_adj(dynfield[j]->next_frag, RTE_ETHER_HDR_LEN);
+ ret = compare_pkt_data(dynfield[j]->next_frag,
+ vector->frags[j]->data, vector->frags[j]->len);
+ if (ret)
+ return ret;
+ }
+ return ret;
+}
+
+static int
+test_ipsec_with_reassembly(struct reassembly_vector *vector,
+ const struct ipsec_test_flags *flags)
+{
+ struct rte_security_session *out_ses[ENCAP_DECAP_BURST_SZ] = {0};
+ struct rte_security_session *in_ses[ENCAP_DECAP_BURST_SZ] = {0};
+ struct rte_eth_ip_reassembly_params reass_capa = {0};
+ struct rte_security_session_conf sess_conf_out = {0};
+ struct rte_security_session_conf sess_conf_in = {0};
+ unsigned int nb_tx, burst_sz, nb_sent = 0;
+ struct rte_crypto_sym_xform cipher_out = {0};
+ struct rte_crypto_sym_xform auth_out = {0};
+ struct rte_crypto_sym_xform aead_out = {0};
+ struct rte_crypto_sym_xform cipher_in = {0};
+ struct rte_crypto_sym_xform auth_in = {0};
+ struct rte_crypto_sym_xform aead_in = {0};
+ struct ipsec_test_data sa_data;
+ struct rte_security_ctx *ctx;
+ unsigned int i, nb_rx = 0, j;
+ uint32_t ol_flags;
+ int ret = 0;
+
+ burst_sz = vector->burst ? ENCAP_DECAP_BURST_SZ : 1;
+ nb_tx = vector->nb_frags * burst_sz;
+
+	ret = rte_eth_dev_stop(port_id);
+ if (ret != 0) {
+ printf("rte_eth_dev_stop: err=%s, port=%u\n",
+ rte_strerror(-ret), port_id);
+ return ret;
+ }
+ rte_eth_ip_reassembly_capability_get(port_id, &reass_capa);
+ if (reass_capa.max_frags < vector->nb_frags)
+ return TEST_SKIPPED;
+ if (reass_capa.timeout_ms > APP_REASS_TIMEOUT) {
+ reass_capa.timeout_ms = APP_REASS_TIMEOUT;
+ rte_eth_ip_reassembly_conf_set(port_id, &reass_capa);
+ }
+
+ ret = rte_eth_dev_start(port_id);
+ if (ret < 0) {
+ printf("rte_eth_dev_start: err=%d, port=%d\n",
+ ret, port_id);
+ return ret;
+ }
+
+ memset(tx_pkts_burst, 0, sizeof(tx_pkts_burst[0]) * nb_tx);
+ memset(rx_pkts_burst, 0, sizeof(rx_pkts_burst[0]) * nb_tx);
+
+ for (i = 0; i < nb_tx; i += vector->nb_frags) {
+ for (j = 0; j < vector->nb_frags; j++) {
+ tx_pkts_burst[i+j] = init_packet(mbufpool,
+ vector->frags[j]->data,
+ vector->frags[j]->len);
+ if (tx_pkts_burst[i+j] == NULL) {
+ ret = -1;
+ printf("\n packed init failed\n");
+ goto out;
+ }
+ }
+ }
+
+ for (i = 0; i < burst_sz; i++) {
+ memcpy(&sa_data, vector->sa_data,
+ sizeof(struct ipsec_test_data));
+ /* Update SPI for every new SA */
+ sa_data.ipsec_xform.spi += i;
+ sa_data.ipsec_xform.direction =
+ RTE_SECURITY_IPSEC_SA_DIR_EGRESS;
+ if (sa_data.aead) {
+ sess_conf_out.crypto_xform = &aead_out;
+ } else {
+ sess_conf_out.crypto_xform = &cipher_out;
+ sess_conf_out.crypto_xform->next = &auth_out;
+ }
+
+ /* Create Inline IPsec outbound session. */
+ ret = create_inline_ipsec_session(&sa_data, port_id,
+ &out_ses[i], &ctx, &ol_flags, flags,
+ &sess_conf_out);
+ if (ret) {
+ printf("\nInline outbound session create failed\n");
+ goto out;
+ }
+ }
+
+ j = 0;
+ for (i = 0; i < nb_tx; i++) {
+ if (ol_flags & RTE_SECURITY_TX_OLOAD_NEED_MDATA)
+ rte_security_set_pkt_metadata(ctx,
+ out_ses[j], tx_pkts_burst[i], NULL);
+ tx_pkts_burst[i]->ol_flags |= RTE_MBUF_F_TX_SEC_OFFLOAD;
+
+ /* Move to next SA after nb_frags */
+ if ((i + 1) % vector->nb_frags == 0)
+ j++;
+ }
+
+ for (i = 0; i < burst_sz; i++) {
+ memcpy(&sa_data, vector->sa_data,
+ sizeof(struct ipsec_test_data));
+ /* Update SPI for every new SA */
+ sa_data.ipsec_xform.spi += i;
+ sa_data.ipsec_xform.direction =
+ RTE_SECURITY_IPSEC_SA_DIR_INGRESS;
+
+ if (sa_data.aead) {
+ sess_conf_in.crypto_xform = &aead_in;
+ } else {
+ sess_conf_in.crypto_xform = &auth_in;
+ sess_conf_in.crypto_xform->next = &cipher_in;
+ }
+ /* Create Inline IPsec inbound session. */
+ ret = create_inline_ipsec_session(&sa_data, port_id, &in_ses[i],
+ &ctx, &ol_flags, flags, &sess_conf_in);
+ if (ret) {
+ printf("\nInline inbound session create failed\n");
+ goto out;
+ }
+ }
+
+ /* Retrieve reassembly dynfield offset if available */
+ if (ip_reassembly_dynfield_offset < 0 && vector->nb_frags > 1)
+ ip_reassembly_dynfield_offset = rte_mbuf_dynfield_lookup(
+ RTE_MBUF_DYNFIELD_IP_REASSEMBLY_NAME, NULL);
+
+ ret = create_default_flow(port_id);
+ if (ret)
+ goto out;
+
+ nb_sent = rte_eth_tx_burst(port_id, 0, tx_pkts_burst, nb_tx);
+ if (nb_sent != nb_tx) {
+ ret = -1;
+ printf("\nFailed to tx %u pkts\n", nb_tx);
+ goto out;
+ }
+
+ rte_delay_ms(1);
+
+ /* Retry a few times before giving up */
+ nb_rx = 0;
+ j = 0;
+ do {
+ nb_rx += rte_eth_rx_burst(port_id, 0, &rx_pkts_burst[nb_rx],
+ nb_tx - nb_rx);
+ j++;
+ if (nb_rx >= nb_tx)
+ break;
+ rte_delay_ms(1);
+ } while (j < 5 || !nb_rx);
+
+ /* Check for minimum number of Rx packets expected */
+ if ((vector->nb_frags == 1 && nb_rx != nb_tx) ||
+ (vector->nb_frags > 1 && nb_rx < burst_sz)) {
+ printf("\nreceived fewer Rx pkts (%u) than expected\n", nb_rx);
+ ret = TEST_FAILED;
+ goto out;
+ }
+
+ for (i = 0; i < nb_rx; i++) {
+ if (vector->nb_frags > 1 &&
+ is_ip_reassembly_incomplete(rx_pkts_burst[i])) {
+ ret = get_and_verify_incomplete_frags(rx_pkts_burst[i],
+ vector);
+ if (ret != TEST_SUCCESS)
+ break;
+ continue;
+ }
+
+ if (rx_pkts_burst[i]->ol_flags &
+ RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED ||
+ !(rx_pkts_burst[i]->ol_flags & RTE_MBUF_F_RX_SEC_OFFLOAD)) {
+ printf("\nsecurity offload failed\n");
+ ret = TEST_FAILED;
+ break;
+ }
+
+ if (vector->full_pkt->len + RTE_ETHER_HDR_LEN !=
+ rx_pkts_burst[i]->pkt_len) {
+ printf("\nreassembled/decrypted packet length mismatch\n");
+ ret = TEST_FAILED;
+ break;
+ }
+ rte_pktmbuf_adj(rx_pkts_burst[i], RTE_ETHER_HDR_LEN);
+ ret = compare_pkt_data(rx_pkts_burst[i],
+ vector->full_pkt->data,
+ vector->full_pkt->len);
+ if (ret != TEST_SUCCESS)
+ break;
+ }
+
+out:
+ destroy_default_flow(port_id);
+
+ /* Clear session data. */
+ for (i = 0; i < burst_sz; i++) {
+ if (out_ses[i])
+ rte_security_session_destroy(ctx, out_ses[i]);
+ if (in_ses[i])
+ rte_security_session_destroy(ctx, in_ses[i]);
+ }
+
+ for (i = nb_sent; i < nb_tx; i++)
+ free_mbuf(tx_pkts_burst[i]);
+ for (i = 0; i < nb_rx; i++)
+ free_mbuf(rx_pkts_burst[i]);
+ return ret;
+}
+
static int
test_ipsec_inline_proto_process(struct ipsec_test_data *td,
struct ipsec_test_data *res_d,
@@ -779,6 +1127,7 @@ ut_setup_inline_ipsec(void)
static void
ut_teardown_inline_ipsec(void)
{
+ struct rte_eth_ip_reassembly_params reass_conf = {0};
uint16_t portid;
int ret;
@@ -788,6 +1137,9 @@ ut_teardown_inline_ipsec(void)
if (ret != 0)
printf("rte_eth_dev_stop: err=%s, port=%u\n",
rte_strerror(-ret), portid);
+
+ /* Clear reassembly configuration */
+ rte_eth_ip_reassembly_conf_set(portid, &reass_conf);
}
}
@@ -890,6 +1242,36 @@ inline_ipsec_testsuite_teardown(void)
}
}
+static int
+test_inline_ip_reassembly(const void *testdata)
+{
+ struct reassembly_vector reassembly_td = {0};
+ const struct reassembly_vector *td = testdata;
+ struct ip_reassembly_test_packet full_pkt;
+ struct ip_reassembly_test_packet frags[MAX_FRAGS];
+ struct ipsec_test_flags flags = {0};
+ int i = 0;
+
+ reassembly_td.sa_data = td->sa_data;
+ reassembly_td.nb_frags = td->nb_frags;
+ reassembly_td.burst = td->burst;
+
+ memcpy(&full_pkt, td->full_pkt,
+ sizeof(struct ip_reassembly_test_packet));
+ reassembly_td.full_pkt = &full_pkt;
+
+ test_vector_payload_populate(reassembly_td.full_pkt, true);
+ for (; i < reassembly_td.nb_frags; i++) {
+ memcpy(&frags[i], td->frags[i],
+ sizeof(struct ip_reassembly_test_packet));
+ reassembly_td.frags[i] = &frags[i];
+ test_vector_payload_populate(reassembly_td.frags[i],
+ (i == 0));
+ }
+
+ return test_ipsec_with_reassembly(&reassembly_td, &flags);
+}
+
static int
test_ipsec_inline_proto_known_vec(const void *test_data)
{
@@ -1036,7 +1418,46 @@ static struct unit_test_suite inline_ipsec_testsuite = {
ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
test_ipsec_inline_proto_display_list),
-
+ TEST_CASE_NAMED_WITH_DATA(
+ "IPv4 Reassembly with 2 fragments",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_inline_ip_reassembly, &ipv4_2frag_vector),
+ TEST_CASE_NAMED_WITH_DATA(
+ "IPv6 Reassembly with 2 fragments",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_inline_ip_reassembly, &ipv6_2frag_vector),
+ TEST_CASE_NAMED_WITH_DATA(
+ "IPv4 Reassembly with 4 fragments",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_inline_ip_reassembly, &ipv4_4frag_vector),
+ TEST_CASE_NAMED_WITH_DATA(
+ "IPv6 Reassembly with 4 fragments",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_inline_ip_reassembly, &ipv6_4frag_vector),
+ TEST_CASE_NAMED_WITH_DATA(
+ "IPv4 Reassembly with 5 fragments",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_inline_ip_reassembly, &ipv4_5frag_vector),
+ TEST_CASE_NAMED_WITH_DATA(
+ "IPv6 Reassembly with 5 fragments",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_inline_ip_reassembly, &ipv6_5frag_vector),
+ TEST_CASE_NAMED_WITH_DATA(
+ "IPv4 Reassembly with incomplete fragments",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_inline_ip_reassembly, &ipv4_incomplete_vector),
+ TEST_CASE_NAMED_WITH_DATA(
+ "IPv4 Reassembly with overlapping fragments",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_inline_ip_reassembly, &ipv4_overlap_vector),
+ TEST_CASE_NAMED_WITH_DATA(
+ "IPv4 Reassembly with out of order fragments",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_inline_ip_reassembly, &ipv4_out_of_order_vector),
+ TEST_CASE_NAMED_WITH_DATA(
+ "IPv4 Reassembly with burst of 4 fragments",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_inline_ip_reassembly, &ipv4_4frag_burst_vector),
TEST_CASES_END() /**< NULL terminate unit test array */
},
diff --git a/app/test/test_security_inline_proto_vectors.h b/app/test/test_security_inline_proto_vectors.h
index d1074da36a..c18965d80f 100644
--- a/app/test/test_security_inline_proto_vectors.h
+++ b/app/test/test_security_inline_proto_vectors.h
@@ -17,4 +17,688 @@ uint8_t dummy_ipv6_eth_hdr[] = {
0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0xf2, 0x86, 0xdd,
};
+#define MAX_FRAG_LEN 1500
+#define MAX_FRAGS 6
+#define MAX_PKT_LEN (MAX_FRAG_LEN * MAX_FRAGS)
+
+struct ip_reassembly_test_packet {
+ uint32_t len;
+ uint32_t l4_offset;
+ uint8_t data[MAX_PKT_LEN];
+};
+
+struct reassembly_vector {
+ /* The input/output text fields in struct ipsec_test_data are not used */
+ struct ipsec_test_data *sa_data;
+ struct ip_reassembly_test_packet *full_pkt;
+ struct ip_reassembly_test_packet *frags[MAX_FRAGS];
+ uint16_t nb_frags;
+ bool burst;
+};
+
+/* This file includes the following test vectors */
+/* IPv6:
+ *
+ * 1) pkt_ipv6_udp_p1
+ * pkt_ipv6_udp_p1_f1
+ * pkt_ipv6_udp_p1_f2
+ *
+ * 2) pkt_ipv6_udp_p2
+ * pkt_ipv6_udp_p2_f1
+ * pkt_ipv6_udp_p2_f2
+ * pkt_ipv6_udp_p2_f3
+ * pkt_ipv6_udp_p2_f4
+ *
+ * 3) pkt_ipv6_udp_p3
+ * pkt_ipv6_udp_p3_f1
+ * pkt_ipv6_udp_p3_f2
+ * pkt_ipv6_udp_p3_f3
+ * pkt_ipv6_udp_p3_f4
+ * pkt_ipv6_udp_p3_f5
+ */
+
+/* IPv4:
+ *
+ * 1) pkt_ipv4_udp_p1
+ * pkt_ipv4_udp_p1_f1
+ * pkt_ipv4_udp_p1_f2
+ *
+ * 2) pkt_ipv4_udp_p2
+ * pkt_ipv4_udp_p2_f1
+ * pkt_ipv4_udp_p2_f2
+ * pkt_ipv4_udp_p2_f3
+ * pkt_ipv4_udp_p2_f4
+ *
+ * 3) pkt_ipv4_udp_p3
+ * pkt_ipv4_udp_p3_f1
+ * pkt_ipv4_udp_p3_f2
+ * pkt_ipv4_udp_p3_f3
+ * pkt_ipv4_udp_p3_f4
+ * pkt_ipv4_udp_p3_f5
+ */
+
+struct ip_reassembly_test_packet pkt_ipv6_udp_p1 = {
+ .len = 1500,
+ .l4_offset = 40,
+ .data = {
+ /* IP */
+ 0x60, 0x00, 0x00, 0x00, 0x05, 0xb4, 0x2C, 0x40,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x0d, 0x00, 0x00, 0x02,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x02, 0x00, 0x00, 0x02,
+
+ /* UDP */
+ 0x08, 0x00, 0x27, 0x10, 0x05, 0xb4, 0x2b, 0xe8,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv6_udp_p1_f1 = {
+ .len = 1384,
+ .l4_offset = 48,
+ .data = {
+ /* IP */
+ 0x60, 0x00, 0x00, 0x00, 0x05, 0x40, 0x2c, 0x40,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x0d, 0x00, 0x00, 0x02,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x02, 0x00, 0x00, 0x02,
+ 0x11, 0x00, 0x00, 0x01, 0x5c, 0x92, 0xac, 0xf1,
+
+ /* UDP */
+ 0x08, 0x00, 0x27, 0x10, 0x05, 0xb4, 0x2b, 0xe8,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv6_udp_p1_f2 = {
+ .len = 172,
+ .l4_offset = 48,
+ .data = {
+ /* IP */
+ 0x60, 0x00, 0x00, 0x00, 0x00, 0x84, 0x2c, 0x40,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x0d, 0x00, 0x00, 0x02,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x02, 0x00, 0x00, 0x02,
+ 0x11, 0x00, 0x05, 0x38, 0x5c, 0x92, 0xac, 0xf1,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv6_udp_p2 = {
+ .len = 4482,
+ .l4_offset = 40,
+ .data = {
+ /* IP */
+ 0x60, 0x00, 0x00, 0x00, 0x11, 0x5a, 0x2c, 0x40,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x0d, 0x00, 0x00, 0x02,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x02, 0x00, 0x00, 0x02,
+
+ /* UDP */
+ 0x08, 0x00, 0x27, 0x10, 0x11, 0x5a, 0x8a, 0x11,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv6_udp_p2_f1 = {
+ .len = 1384,
+ .l4_offset = 48,
+ .data = {
+ /* IP */
+ 0x60, 0x00, 0x00, 0x00, 0x05, 0x40, 0x2c, 0x40,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x0d, 0x00, 0x00, 0x02,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x02, 0x00, 0x00, 0x02,
+ 0x11, 0x00, 0x00, 0x01, 0x64, 0x6c, 0x68, 0x9f,
+
+ /* UDP */
+ 0x08, 0x00, 0x27, 0x10, 0x11, 0x5a, 0x8a, 0x11,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv6_udp_p2_f2 = {
+ .len = 1384,
+ .l4_offset = 48,
+ .data = {
+ /* IP */
+ 0x60, 0x00, 0x00, 0x00, 0x05, 0x40, 0x2c, 0x40,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x0d, 0x00, 0x00, 0x02,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x02, 0x00, 0x00, 0x02,
+ 0x11, 0x00, 0x05, 0x39, 0x64, 0x6c, 0x68, 0x9f,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv6_udp_p2_f3 = {
+ .len = 1384,
+ .l4_offset = 48,
+ .data = {
+ /* IP */
+ 0x60, 0x00, 0x00, 0x00, 0x05, 0x40, 0x2c, 0x40,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x0d, 0x00, 0x00, 0x02,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x02, 0x00, 0x00, 0x02,
+ 0x11, 0x00, 0x0a, 0x71, 0x64, 0x6c, 0x68, 0x9f,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv6_udp_p2_f4 = {
+ .len = 482,
+ .l4_offset = 48,
+ .data = {
+ /* IP */
+ 0x60, 0x00, 0x00, 0x00, 0x01, 0xba, 0x2c, 0x40,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x0d, 0x00, 0x00, 0x02,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x02, 0x00, 0x00, 0x02,
+ 0x11, 0x00, 0x0f, 0xa8, 0x64, 0x6c, 0x68, 0x9f,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv6_udp_p3 = {
+ .len = 5782,
+ .l4_offset = 40,
+ .data = {
+ /* IP */
+ 0x60, 0x00, 0x00, 0x00, 0x16, 0x6e, 0x2c, 0x40,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x0d, 0x00, 0x00, 0x02,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x02, 0x00, 0x00, 0x02,
+
+ /* UDP */
+ 0x08, 0x00, 0x27, 0x10, 0x16, 0x6e, 0x2f, 0x99,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv6_udp_p3_f1 = {
+ .len = 1384,
+ .l4_offset = 48,
+ .data = {
+ /* IP */
+ 0x60, 0x00, 0x00, 0x00, 0x05, 0x40, 0x2c, 0x40,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x0d, 0x00, 0x00, 0x02,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x02, 0x00, 0x00, 0x02,
+ 0x11, 0x00, 0x00, 0x01, 0x65, 0xcf, 0x5a, 0xae,
+
+ /* UDP */
+ 0x08, 0x00, 0x27, 0x10, 0x16, 0x6e, 0x2f, 0x99,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv6_udp_p3_f2 = {
+ .len = 1384,
+ .l4_offset = 48,
+ .data = {
+ /* IP */
+ 0x60, 0x00, 0x00, 0x00, 0x05, 0x40, 0x2c, 0x40,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x0d, 0x00, 0x00, 0x02,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x02, 0x00, 0x00, 0x02,
+ 0x11, 0x00, 0x05, 0x39, 0x65, 0xcf, 0x5a, 0xae,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv6_udp_p3_f3 = {
+ .len = 1384,
+ .l4_offset = 48,
+ .data = {
+ /* IP */
+ 0x60, 0x00, 0x00, 0x00, 0x05, 0x40, 0x2c, 0x40,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x0d, 0x00, 0x00, 0x02,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x02, 0x00, 0x00, 0x02,
+ 0x11, 0x00, 0x0a, 0x71, 0x65, 0xcf, 0x5a, 0xae,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv6_udp_p3_f4 = {
+ .len = 1384,
+ .l4_offset = 48,
+ .data = {
+ /* IP */
+ 0x60, 0x00, 0x00, 0x00, 0x05, 0x40, 0x2c, 0x40,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x0d, 0x00, 0x00, 0x02,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x02, 0x00, 0x00, 0x02,
+ 0x11, 0x00, 0x0f, 0xa9, 0x65, 0xcf, 0x5a, 0xae,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv6_udp_p3_f5 = {
+ .len = 446,
+ .l4_offset = 48,
+ .data = {
+ /* IP */
+ 0x60, 0x00, 0x00, 0x00, 0x01, 0x96, 0x2c, 0x40,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x0d, 0x00, 0x00, 0x02,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0xff, 0xff, 0x02, 0x00, 0x00, 0x02,
+ 0x11, 0x00, 0x14, 0xe0, 0x65, 0xcf, 0x5a, 0xae,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv4_udp_p1 = {
+ .len = 1500,
+ .l4_offset = 20,
+ .data = {
+ /* IP */
+ 0x45, 0x00, 0x05, 0xdc, 0x00, 0x01, 0x00, 0x00,
+ 0x40, 0x11, 0x66, 0x0d, 0x0d, 0x00, 0x00, 0x02,
+ 0x02, 0x00, 0x00, 0x02,
+
+ /* UDP */
+ 0x08, 0x00, 0x27, 0x10, 0x05, 0xc8, 0xb8, 0x4c,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv4_udp_p1_f1 = {
+ .len = 1420,
+ .l4_offset = 20,
+ .data = {
+ /* IP */
+ 0x45, 0x00, 0x05, 0x8c, 0x00, 0x01, 0x20, 0x00,
+ 0x40, 0x11, 0x46, 0x5d, 0x0d, 0x00, 0x00, 0x02,
+ 0x02, 0x00, 0x00, 0x02,
+
+ /* UDP */
+ 0x08, 0x00, 0x27, 0x10, 0x05, 0xc8, 0xb8, 0x4c,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv4_udp_p1_f2 = {
+ .len = 100,
+ .l4_offset = 20,
+ .data = {
+ /* IP */
+ 0x45, 0x00, 0x00, 0x64, 0x00, 0x01, 0x00, 0xaf,
+ 0x40, 0x11, 0x6a, 0xd6, 0x0d, 0x00, 0x00, 0x02,
+ 0x02, 0x00, 0x00, 0x02,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv4_udp_p2 = {
+ .len = 4482,
+ .l4_offset = 20,
+ .data = {
+ /* IP */
+ 0x45, 0x00, 0x11, 0x82, 0x00, 0x02, 0x00, 0x00,
+ 0x40, 0x11, 0x5a, 0x66, 0x0d, 0x00, 0x00, 0x02,
+ 0x02, 0x00, 0x00, 0x02,
+
+ /* UDP */
+ 0x08, 0x00, 0x27, 0x10, 0x11, 0x6e, 0x16, 0x76,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv4_udp_p2_f1 = {
+ .len = 1420,
+ .l4_offset = 20,
+ .data = {
+ /* IP */
+ 0x45, 0x00, 0x05, 0x8c, 0x00, 0x02, 0x20, 0x00,
+ 0x40, 0x11, 0x46, 0x5c, 0x0d, 0x00, 0x00, 0x02,
+ 0x02, 0x00, 0x00, 0x02,
+
+ /* UDP */
+ 0x08, 0x00, 0x27, 0x10, 0x11, 0x6e, 0x16, 0x76,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv4_udp_p2_f2 = {
+ .len = 1420,
+ .l4_offset = 20,
+ .data = {
+ /* IP */
+ 0x45, 0x00, 0x05, 0x8c, 0x00, 0x02, 0x20, 0xaf,
+ 0x40, 0x11, 0x45, 0xad, 0x0d, 0x00, 0x00, 0x02,
+ 0x02, 0x00, 0x00, 0x02,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv4_udp_p2_f3 = {
+ .len = 1420,
+ .l4_offset = 20,
+ .data = {
+ /* IP */
+ 0x45, 0x00, 0x05, 0x8c, 0x00, 0x02, 0x21, 0x5e,
+ 0x40, 0x11, 0x44, 0xfe, 0x0d, 0x00, 0x00, 0x02,
+ 0x02, 0x00, 0x00, 0x02,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv4_udp_p2_f4 = {
+ .len = 282,
+ .l4_offset = 20,
+ .data = {
+ /* IP */
+ 0x45, 0x00, 0x01, 0x1a, 0x00, 0x02, 0x02, 0x0d,
+ 0x40, 0x11, 0x68, 0xc1, 0x0d, 0x00, 0x00, 0x02,
+ 0x02, 0x00, 0x00, 0x02,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv4_udp_p3 = {
+ .len = 5782,
+ .l4_offset = 20,
+ .data = {
+ /* IP */
+ 0x45, 0x00, 0x16, 0x96, 0x00, 0x03, 0x00, 0x00,
+ 0x40, 0x11, 0x55, 0x51, 0x0d, 0x00, 0x00, 0x02,
+ 0x02, 0x00, 0x00, 0x02,
+
+ /* UDP */
+ 0x08, 0x00, 0x27, 0x10, 0x16, 0x82, 0xbb, 0xfd,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv4_udp_p3_f1 = {
+ .len = 1420,
+ .l4_offset = 20,
+ .data = {
+ /* IP */
+ 0x45, 0x00, 0x05, 0x8c, 0x00, 0x03, 0x20, 0x00,
+ 0x40, 0x11, 0x46, 0x5b, 0x0d, 0x00, 0x00, 0x02,
+ 0x02, 0x00, 0x00, 0x02,
+
+ /* UDP */
+ 0x08, 0x00, 0x27, 0x10, 0x16, 0x82, 0xbb, 0xfd,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv4_udp_p3_f2 = {
+ .len = 1420,
+ .l4_offset = 20,
+ .data = {
+ /* IP */
+ 0x45, 0x00, 0x05, 0x8c, 0x00, 0x03, 0x20, 0xaf,
+ 0x40, 0x11, 0x45, 0xac, 0x0d, 0x00, 0x00, 0x02,
+ 0x02, 0x00, 0x00, 0x02,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv4_udp_p3_f3 = {
+ .len = 1420,
+ .l4_offset = 20,
+ .data = {
+ /* IP */
+ 0x45, 0x00, 0x05, 0x8c, 0x00, 0x03, 0x21, 0x5e,
+ 0x40, 0x11, 0x44, 0xfd, 0x0d, 0x00, 0x00, 0x02,
+ 0x02, 0x00, 0x00, 0x02,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv4_udp_p3_f4 = {
+ .len = 1420,
+ .l4_offset = 20,
+ .data = {
+ /* IP */
+ 0x45, 0x00, 0x05, 0x8c, 0x00, 0x03, 0x22, 0x0d,
+ 0x40, 0x11, 0x44, 0x4e, 0x0d, 0x00, 0x00, 0x02,
+ 0x02, 0x00, 0x00, 0x02,
+ },
+};
+
+struct ip_reassembly_test_packet pkt_ipv4_udp_p3_f5 = {
+ .len = 182,
+ .l4_offset = 20,
+ .data = {
+ /* IP */
+ 0x45, 0x00, 0x00, 0xb6, 0x00, 0x03, 0x02, 0xbc,
+ 0x40, 0x11, 0x68, 0x75, 0x0d, 0x00, 0x00, 0x02,
+ 0x02, 0x00, 0x00, 0x02,
+ },
+};
+
+static inline void
+test_vector_payload_populate(struct ip_reassembly_test_packet *pkt,
+ bool first_frag)
+{
+ uint32_t i = pkt->l4_offset;
+
+ /*
+ * For non-fragmented packets and first frag, skip 8 bytes from
+ * l4_offset for UDP header.
+ */
+ if (first_frag)
+ i += 8;
+
+ for (; i < pkt->len; i++)
+ pkt->data[i] = 0x58;
+}
+
+struct ipsec_test_data conf_aes_128_gcm = {
+ .key = {
+ .data = {
+ 0xfe, 0xff, 0xe9, 0x92, 0x86, 0x65, 0x73, 0x1c,
+ 0x6d, 0x6a, 0x8f, 0x94, 0x67, 0x30, 0x83, 0x08
+ },
+ },
+
+ .salt = {
+ .data = {
+ 0xca, 0xfe, 0xba, 0xbe
+ },
+ .len = 4,
+ },
+
+ .iv = {
+ .data = {
+ 0xfa, 0xce, 0xdb, 0xad, 0xde, 0xca, 0xf8, 0x88
+ },
+ },
+
+ .ipsec_xform = {
+ .spi = 0xa5f8,
+ .salt = 0xbebafeca,
+ .options.esn = 0,
+ .options.udp_encap = 0,
+ .options.copy_dscp = 0,
+ .options.copy_flabel = 0,
+ .options.copy_df = 0,
+ .options.dec_ttl = 0,
+ .options.ecn = 0,
+ .options.stats = 0,
+ .options.tunnel_hdr_verify = 0,
+ .options.ip_csum_enable = 0,
+ .options.l4_csum_enable = 0,
+ .options.ip_reassembly_en = 1,
+ .direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS,
+ .proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
+ .mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
+ .tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV4,
+ .replay_win_sz = 0,
+ },
+
+ .aead = true,
+
+ .xform = {
+ .aead = {
+ .next = NULL,
+ .type = RTE_CRYPTO_SYM_XFORM_AEAD,
+ .aead = {
+ .op = RTE_CRYPTO_AEAD_OP_ENCRYPT,
+ .algo = RTE_CRYPTO_AEAD_AES_GCM,
+ .key.length = 16,
+ .iv.length = 12,
+ .iv.offset = 0,
+ .digest_length = 16,
+ .aad_length = 12,
+ },
+ },
+ },
+};
+
+struct ipsec_test_data conf_aes_128_gcm_v6_tunnel = {
+ .key = {
+ .data = {
+ 0xfe, 0xff, 0xe9, 0x92, 0x86, 0x65, 0x73, 0x1c,
+ 0x6d, 0x6a, 0x8f, 0x94, 0x67, 0x30, 0x83, 0x08
+ },
+ },
+
+ .salt = {
+ .data = {
+ 0xca, 0xfe, 0xba, 0xbe
+ },
+ .len = 4,
+ },
+
+ .iv = {
+ .data = {
+ 0xfa, 0xce, 0xdb, 0xad, 0xde, 0xca, 0xf8, 0x88
+ },
+ },
+
+ .ipsec_xform = {
+ .spi = 0xa5f8,
+ .salt = 0xbebafeca,
+ .options.esn = 0,
+ .options.udp_encap = 0,
+ .options.copy_dscp = 0,
+ .options.copy_flabel = 0,
+ .options.copy_df = 0,
+ .options.dec_ttl = 0,
+ .options.ecn = 0,
+ .options.stats = 0,
+ .options.tunnel_hdr_verify = 0,
+ .options.ip_csum_enable = 0,
+ .options.l4_csum_enable = 0,
+ .options.ip_reassembly_en = 1,
+ .direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS,
+ .proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
+ .mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
+ .tunnel.type = RTE_SECURITY_IPSEC_TUNNEL_IPV6,
+ .replay_win_sz = 0,
+ },
+
+ .aead = true,
+
+ .xform = {
+ .aead = {
+ .next = NULL,
+ .type = RTE_CRYPTO_SYM_XFORM_AEAD,
+ .aead = {
+ .op = RTE_CRYPTO_AEAD_OP_ENCRYPT,
+ .algo = RTE_CRYPTO_AEAD_AES_GCM,
+ .key.length = 16,
+ .iv.length = 12,
+ .iv.offset = 0,
+ .digest_length = 16,
+ .aad_length = 12,
+ },
+ },
+ },
+};
+
+const struct reassembly_vector ipv4_2frag_vector = {
+ .sa_data = &conf_aes_128_gcm,
+ .full_pkt = &pkt_ipv4_udp_p1,
+ .frags[0] = &pkt_ipv4_udp_p1_f1,
+ .frags[1] = &pkt_ipv4_udp_p1_f2,
+ .nb_frags = 2,
+ .burst = false,
+};
+
+const struct reassembly_vector ipv6_2frag_vector = {
+ .sa_data = &conf_aes_128_gcm_v6_tunnel,
+ .full_pkt = &pkt_ipv6_udp_p1,
+ .frags[0] = &pkt_ipv6_udp_p1_f1,
+ .frags[1] = &pkt_ipv6_udp_p1_f2,
+ .nb_frags = 2,
+ .burst = false,
+};
+
+const struct reassembly_vector ipv4_4frag_vector = {
+ .sa_data = &conf_aes_128_gcm,
+ .full_pkt = &pkt_ipv4_udp_p2,
+ .frags[0] = &pkt_ipv4_udp_p2_f1,
+ .frags[1] = &pkt_ipv4_udp_p2_f2,
+ .frags[2] = &pkt_ipv4_udp_p2_f3,
+ .frags[3] = &pkt_ipv4_udp_p2_f4,
+ .nb_frags = 4,
+ .burst = false,
+};
+
+const struct reassembly_vector ipv6_4frag_vector = {
+ .sa_data = &conf_aes_128_gcm_v6_tunnel,
+ .full_pkt = &pkt_ipv6_udp_p2,
+ .frags[0] = &pkt_ipv6_udp_p2_f1,
+ .frags[1] = &pkt_ipv6_udp_p2_f2,
+ .frags[2] = &pkt_ipv6_udp_p2_f3,
+ .frags[3] = &pkt_ipv6_udp_p2_f4,
+ .nb_frags = 4,
+ .burst = false,
+};
+
+const struct reassembly_vector ipv4_5frag_vector = {
+ .sa_data = &conf_aes_128_gcm,
+ .full_pkt = &pkt_ipv4_udp_p3,
+ .frags[0] = &pkt_ipv4_udp_p3_f1,
+ .frags[1] = &pkt_ipv4_udp_p3_f2,
+ .frags[2] = &pkt_ipv4_udp_p3_f3,
+ .frags[3] = &pkt_ipv4_udp_p3_f4,
+ .frags[4] = &pkt_ipv4_udp_p3_f5,
+ .nb_frags = 5,
+ .burst = false,
+};
+
+const struct reassembly_vector ipv6_5frag_vector = {
+ .sa_data = &conf_aes_128_gcm_v6_tunnel,
+ .full_pkt = &pkt_ipv6_udp_p3,
+ .frags[0] = &pkt_ipv6_udp_p3_f1,
+ .frags[1] = &pkt_ipv6_udp_p3_f2,
+ .frags[2] = &pkt_ipv6_udp_p3_f3,
+ .frags[3] = &pkt_ipv6_udp_p3_f4,
+ .frags[4] = &pkt_ipv6_udp_p3_f5,
+ .nb_frags = 5,
+ .burst = false,
+};
+
+/* Negative test cases. */
+const struct reassembly_vector ipv4_incomplete_vector = {
+ .sa_data = &conf_aes_128_gcm,
+ .full_pkt = &pkt_ipv4_udp_p2,
+ .frags[0] = &pkt_ipv4_udp_p2_f1,
+ .frags[1] = &pkt_ipv4_udp_p2_f2,
+ .nb_frags = 2,
+ .burst = false,
+};
+
+const struct reassembly_vector ipv4_overlap_vector = {
+ .sa_data = &conf_aes_128_gcm,
+ .full_pkt = &pkt_ipv4_udp_p1,
+ .frags[0] = &pkt_ipv4_udp_p1_f1,
+ .frags[1] = &pkt_ipv4_udp_p1_f1, /* Overlap */
+ .frags[2] = &pkt_ipv4_udp_p1_f2,
+ .nb_frags = 3,
+ .burst = false,
+};
+
+const struct reassembly_vector ipv4_out_of_order_vector = {
+ .sa_data = &conf_aes_128_gcm,
+ .full_pkt = &pkt_ipv4_udp_p2,
+ .frags[0] = &pkt_ipv4_udp_p2_f1,
+ .frags[1] = &pkt_ipv4_udp_p2_f3,
+ .frags[2] = &pkt_ipv4_udp_p2_f4,
+ .frags[3] = &pkt_ipv4_udp_p2_f2, /* out of order */
+ .nb_frags = 4,
+ .burst = false,
+};
+
+const struct reassembly_vector ipv4_4frag_burst_vector = {
+ .sa_data = &conf_aes_128_gcm,
+ .full_pkt = &pkt_ipv4_udp_p2,
+ .frags[0] = &pkt_ipv4_udp_p2_f1,
+ .frags[1] = &pkt_ipv4_udp_p2_f2,
+ .frags[2] = &pkt_ipv4_udp_p2_f3,
+ .frags[3] = &pkt_ipv4_udp_p2_f4,
+ .nb_frags = 4,
+ .burst = true,
+};
+
#endif
--
2.25.1
* [PATCH v7 5/7] test/security: add more inline IPsec functional cases
2022-05-24 7:22 ` [PATCH v7 0/7] app/test: add inline IPsec and reassembly cases Akhil Goyal
` (3 preceding siblings ...)
2022-05-24 7:22 ` [PATCH v7 4/7] test/security: add inline IPsec reassembly cases Akhil Goyal
@ 2022-05-24 7:22 ` Akhil Goyal
2022-05-24 7:22 ` [PATCH v7 6/7] test/security: add ESN and anti-replay cases for inline Akhil Goyal
` (2 subsequent siblings)
7 siblings, 0 replies; 184+ messages in thread
From: Akhil Goyal @ 2022-05-24 7:22 UTC (permalink / raw)
To: dev
Cc: thomas, david.marchand, hemant.agrawal, anoobj,
konstantin.ananyev, ciara.power, ferruh.yigit, andrew.rybchenko,
ndabilpuram, vattunuru, Akhil Goyal, Fan Zhang
Add more inline IPsec functional verification cases.
These cases do not have known test vectors; they are
verified by running an encap + decap round trip for all
the algorithm combinations.
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
---
app/test/test_security_inline_proto.c | 517 ++++++++++++++++++++++++++
1 file changed, 517 insertions(+)
diff --git a/app/test/test_security_inline_proto.c b/app/test/test_security_inline_proto.c
index e3e25af619..ecd7d69097 100644
--- a/app/test/test_security_inline_proto.c
+++ b/app/test/test_security_inline_proto.c
@@ -1321,6 +1321,394 @@ test_ipsec_inline_proto_display_list(const void *data __rte_unused)
return test_ipsec_inline_proto_all(&flags);
}
+static int
+test_ipsec_inline_proto_udp_encap(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.udp_encap = true;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_udp_ports_verify(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.udp_encap = true;
+ flags.udp_ports_verify = true;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_err_icv_corrupt(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.icv_corrupt = true;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_tunnel_dst_addr_verify(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.tunnel_hdr_verify = RTE_SECURITY_IPSEC_TUNNEL_VERIFY_DST_ADDR;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_tunnel_src_dst_addr_verify(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.tunnel_hdr_verify = RTE_SECURITY_IPSEC_TUNNEL_VERIFY_SRC_DST_ADDR;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_inner_ip_csum(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.ip_csum = true;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_inner_l4_csum(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.l4_csum = true;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_tunnel_v4_in_v4(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.ipv6 = false;
+ flags.tunnel_ipv6 = false;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_tunnel_v6_in_v6(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.ipv6 = true;
+ flags.tunnel_ipv6 = true;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_tunnel_v4_in_v6(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.ipv6 = false;
+ flags.tunnel_ipv6 = true;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_tunnel_v6_in_v4(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.ipv6 = true;
+ flags.tunnel_ipv6 = false;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_transport_v4(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.ipv6 = false;
+ flags.transport = true;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_transport_l4_csum(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags = {
+ .l4_csum = true,
+ .transport = true,
+ };
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_stats(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.stats_success = true;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_pkt_fragment(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.fragment = true;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_copy_df_inner_0(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.df = TEST_IPSEC_COPY_DF_INNER_0;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_copy_df_inner_1(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.df = TEST_IPSEC_COPY_DF_INNER_1;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_set_df_0_inner_1(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.df = TEST_IPSEC_SET_DF_0_INNER_1;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_set_df_1_inner_0(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.df = TEST_IPSEC_SET_DF_1_INNER_0;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_ipv4_copy_dscp_inner_0(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.dscp = TEST_IPSEC_COPY_DSCP_INNER_0;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_ipv4_copy_dscp_inner_1(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.dscp = TEST_IPSEC_COPY_DSCP_INNER_1;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_ipv4_set_dscp_0_inner_1(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.dscp = TEST_IPSEC_SET_DSCP_0_INNER_1;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_ipv4_set_dscp_1_inner_0(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.dscp = TEST_IPSEC_SET_DSCP_1_INNER_0;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_ipv6_copy_dscp_inner_0(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.ipv6 = true;
+ flags.tunnel_ipv6 = true;
+ flags.dscp = TEST_IPSEC_COPY_DSCP_INNER_0;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_ipv6_copy_dscp_inner_1(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.ipv6 = true;
+ flags.tunnel_ipv6 = true;
+ flags.dscp = TEST_IPSEC_COPY_DSCP_INNER_1;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_ipv6_set_dscp_0_inner_1(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.ipv6 = true;
+ flags.tunnel_ipv6 = true;
+ flags.dscp = TEST_IPSEC_SET_DSCP_0_INNER_1;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_ipv6_set_dscp_1_inner_0(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.ipv6 = true;
+ flags.tunnel_ipv6 = true;
+ flags.dscp = TEST_IPSEC_SET_DSCP_1_INNER_0;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_ipv4_ttl_decrement(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags = {
+ .dec_ttl_or_hop_limit = true
+ };
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_ipv6_hop_limit_decrement(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags = {
+ .ipv6 = true,
+ .dec_ttl_or_hop_limit = true
+ };
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_iv_gen(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.iv_gen = true;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_known_vec_fragmented(const void *test_data)
+{
+ struct ipsec_test_data td_outb;
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+ flags.fragment = true;
+
+ memcpy(&td_outb, test_data, sizeof(td_outb));
+
+ /* Disable IV gen to be able to test with known vectors */
+ td_outb.ipsec_xform.options.iv_gen_disable = 1;
+
+ return test_ipsec_inline_proto_process(&td_outb, NULL, 1, false,
+ &flags);
+}
static struct unit_test_suite inline_ipsec_testsuite = {
.suite_name = "Inline IPsec Ethernet Device Unit Test Suite",
.setup = inline_ipsec_testsuite_setup,
@@ -1367,6 +1755,13 @@ static struct unit_test_suite inline_ipsec_testsuite = {
ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
test_ipsec_inline_proto_known_vec,
&pkt_null_aes_xcbc),
+
+ TEST_CASE_NAMED_WITH_DATA(
+ "Outbound fragmented packet",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_known_vec_fragmented,
+ &pkt_aes_128_gcm_frag),
+
TEST_CASE_NAMED_WITH_DATA(
"Inbound known vector (ESP tunnel mode IPv4 AES-GCM 128)",
ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
@@ -1418,6 +1813,128 @@ static struct unit_test_suite inline_ipsec_testsuite = {
ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
test_ipsec_inline_proto_display_list),
+ TEST_CASE_NAMED_ST(
+ "UDP encapsulation",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_udp_encap),
+ TEST_CASE_NAMED_ST(
+ "UDP encapsulation ports verification test",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_udp_ports_verify),
+ TEST_CASE_NAMED_ST(
+ "Negative test: ICV corruption",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_err_icv_corrupt),
+ TEST_CASE_NAMED_ST(
+ "Tunnel dst addr verification",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_tunnel_dst_addr_verify),
+ TEST_CASE_NAMED_ST(
+ "Tunnel src and dst addr verification",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_tunnel_src_dst_addr_verify),
+ TEST_CASE_NAMED_ST(
+ "Inner IP checksum",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_inner_ip_csum),
+ TEST_CASE_NAMED_ST(
+ "Inner L4 checksum",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_inner_l4_csum),
+ TEST_CASE_NAMED_ST(
+ "Tunnel IPv4 in IPv4",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_tunnel_v4_in_v4),
+ TEST_CASE_NAMED_ST(
+ "Tunnel IPv6 in IPv6",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_tunnel_v6_in_v6),
+ TEST_CASE_NAMED_ST(
+ "Tunnel IPv4 in IPv6",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_tunnel_v4_in_v6),
+ TEST_CASE_NAMED_ST(
+ "Tunnel IPv6 in IPv4",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_tunnel_v6_in_v4),
+ TEST_CASE_NAMED_ST(
+ "Transport IPv4",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_transport_v4),
+ TEST_CASE_NAMED_ST(
+ "Transport l4 checksum",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_transport_l4_csum),
+ TEST_CASE_NAMED_ST(
+ "Statistics: success",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_stats),
+ TEST_CASE_NAMED_ST(
+ "Fragmented packet",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_pkt_fragment),
+ TEST_CASE_NAMED_ST(
+ "Tunnel header copy DF (inner 0)",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_copy_df_inner_0),
+ TEST_CASE_NAMED_ST(
+ "Tunnel header copy DF (inner 1)",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_copy_df_inner_1),
+ TEST_CASE_NAMED_ST(
+ "Tunnel header set DF 0 (inner 1)",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_set_df_0_inner_1),
+ TEST_CASE_NAMED_ST(
+ "Tunnel header set DF 1 (inner 0)",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_set_df_1_inner_0),
+ TEST_CASE_NAMED_ST(
+ "Tunnel header IPv4 copy DSCP (inner 0)",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_ipv4_copy_dscp_inner_0),
+ TEST_CASE_NAMED_ST(
+ "Tunnel header IPv4 copy DSCP (inner 1)",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_ipv4_copy_dscp_inner_1),
+ TEST_CASE_NAMED_ST(
+ "Tunnel header IPv4 set DSCP 0 (inner 1)",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_ipv4_set_dscp_0_inner_1),
+ TEST_CASE_NAMED_ST(
+ "Tunnel header IPv4 set DSCP 1 (inner 0)",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_ipv4_set_dscp_1_inner_0),
+ TEST_CASE_NAMED_ST(
+ "Tunnel header IPv6 copy DSCP (inner 0)",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_ipv6_copy_dscp_inner_0),
+ TEST_CASE_NAMED_ST(
+ "Tunnel header IPv6 copy DSCP (inner 1)",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_ipv6_copy_dscp_inner_1),
+ TEST_CASE_NAMED_ST(
+ "Tunnel header IPv6 set DSCP 0 (inner 1)",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_ipv6_set_dscp_0_inner_1),
+ TEST_CASE_NAMED_ST(
+ "Tunnel header IPv6 set DSCP 1 (inner 0)",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_ipv6_set_dscp_1_inner_0),
+ TEST_CASE_NAMED_ST(
+ "Tunnel header IPv4 decrement inner TTL",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_ipv4_ttl_decrement),
+ TEST_CASE_NAMED_ST(
+ "Tunnel header IPv6 decrement inner hop limit",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_ipv6_hop_limit_decrement),
+ TEST_CASE_NAMED_ST(
+ "IV generation",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_iv_gen),
+
TEST_CASE_NAMED_WITH_DATA(
"IPv4 Reassembly with 2 fragments",
ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* [PATCH v7 6/7] test/security: add ESN and anti-replay cases for inline
2022-05-24 7:22 ` [PATCH v7 0/7] app/test: add inline IPsec and reassembly cases Akhil Goyal
` (4 preceding siblings ...)
2022-05-24 7:22 ` [PATCH v7 5/7] test/security: add more inline IPsec functional cases Akhil Goyal
@ 2022-05-24 7:22 ` Akhil Goyal
2022-05-24 7:22 ` [PATCH v7 7/7] test/security: add inline IPsec IPv6 flow label cases Akhil Goyal
2022-05-24 8:05 ` [PATCH v7 0/7] app/test: add inline IPsec and reassembly cases Anoob Joseph
7 siblings, 0 replies; 184+ messages in thread
From: Akhil Goyal @ 2022-05-24 7:22 UTC (permalink / raw)
To: dev
Cc: thomas, david.marchand, hemant.agrawal, anoobj,
konstantin.ananyev, ciara.power, ferruh.yigit, andrew.rybchenko,
ndabilpuram, vattunuru, Akhil Goyal, Fan Zhang
Added cases to test anti-replay for inline IPsec processing,
with and without extended sequence number (ESN) support.
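The cases added here exercise standard anti-replay sliding-window semantics: packets above the window top advance it, packets below the window bottom are dropped, and duplicates inside the window are dropped. As background, a minimal software sketch of that check is shown below. It assumes a window of at most 64 entries tracked in a single bitmap; it is an illustration only, not the DPDK or hardware implementation under test.

```c
#include <stdbool.h>
#include <stdint.h>

/* Minimal anti-replay sliding window (window size <= 64 for this sketch). */
struct ar_window {
	uint64_t top;    /* highest sequence number accepted so far */
	uint64_t bitmap; /* bit i set => (top - i) already received */
	uint64_t winsz;  /* window size, at most 64 here */
};

/* Returns true if seq is accepted, false if it must be dropped. */
static bool
ar_check_and_update(struct ar_window *w, uint64_t seq)
{
	if (seq > w->top) {
		/* Advance the window top; old history shifts out. */
		uint64_t shift = seq - w->top;

		w->bitmap = (shift >= 64) ? 0 : w->bitmap << shift;
		w->bitmap |= 1ULL; /* mark seq itself as seen */
		w->top = seq;
		return true;
	}

	uint64_t off = w->top - seq;

	if (off >= w->winsz)
		return false; /* below the window bottom: drop */
	if (w->bitmap & (1ULL << off))
		return false; /* duplicate within the window: drop */
	w->bitmap |= 1ULL << off;
	return true;
}
```

With winsz = 16, replaying the scenario used by the tests (top advanced to 2*WS, then WS+1, WS, WS+WS/2, and a duplicate of WS+WS/2) yields accept, accept, drop, accept, drop.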
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
---
app/test/test_security_inline_proto.c | 311 ++++++++++++++++++++++++++
1 file changed, 311 insertions(+)
diff --git a/app/test/test_security_inline_proto.c b/app/test/test_security_inline_proto.c
index ecd7d69097..518fb09113 100644
--- a/app/test/test_security_inline_proto.c
+++ b/app/test/test_security_inline_proto.c
@@ -1098,6 +1098,139 @@ test_ipsec_inline_proto_all(const struct ipsec_test_flags *flags)
return TEST_SKIPPED;
}
+static int
+test_ipsec_inline_proto_process_with_esn(struct ipsec_test_data td[],
+ struct ipsec_test_data res_d[],
+ int nb_pkts,
+ bool silent,
+ const struct ipsec_test_flags *flags)
+{
+ struct rte_security_session_conf sess_conf = {0};
+ struct ipsec_test_data *res_d_tmp = NULL;
+ struct rte_crypto_sym_xform cipher = {0};
+ struct rte_crypto_sym_xform auth = {0};
+ struct rte_crypto_sym_xform aead = {0};
+ struct rte_mbuf *rx_pkt = NULL;
+ struct rte_mbuf *tx_pkt = NULL;
+ int nb_rx, nb_sent;
+ struct rte_security_session *ses;
+ struct rte_security_ctx *ctx;
+ uint32_t ol_flags;
+ int i, ret;
+
+ if (td[0].aead) {
+ sess_conf.crypto_xform = &aead;
+ } else {
+ if (td[0].ipsec_xform.direction ==
+ RTE_SECURITY_IPSEC_SA_DIR_EGRESS) {
+ sess_conf.crypto_xform = &cipher;
+ sess_conf.crypto_xform->type = RTE_CRYPTO_SYM_XFORM_CIPHER;
+ sess_conf.crypto_xform->next = &auth;
+ sess_conf.crypto_xform->next->type = RTE_CRYPTO_SYM_XFORM_AUTH;
+ } else {
+ sess_conf.crypto_xform = &auth;
+ sess_conf.crypto_xform->type = RTE_CRYPTO_SYM_XFORM_AUTH;
+ sess_conf.crypto_xform->next = &cipher;
+ sess_conf.crypto_xform->next->type = RTE_CRYPTO_SYM_XFORM_CIPHER;
+ }
+ }
+
+ /* Create Inline IPsec session. */
+ ret = create_inline_ipsec_session(&td[0], port_id, &ses, &ctx,
+ &ol_flags, flags, &sess_conf);
+ if (ret)
+ return ret;
+
+ if (td[0].ipsec_xform.direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS) {
+ ret = create_default_flow(port_id);
+ if (ret)
+ goto out;
+ }
+
+ for (i = 0; i < nb_pkts; i++) {
+ tx_pkt = init_packet(mbufpool, td[i].input_text.data,
+ td[i].input_text.len);
+ if (tx_pkt == NULL) {
+ ret = TEST_FAILED;
+ goto out;
+ }
+
+ if (test_ipsec_pkt_update(rte_pktmbuf_mtod_offset(tx_pkt,
+ uint8_t *, RTE_ETHER_HDR_LEN), flags)) {
+ ret = TEST_FAILED;
+ goto out;
+ }
+
+ if (td[i].ipsec_xform.direction ==
+ RTE_SECURITY_IPSEC_SA_DIR_EGRESS) {
+ if (flags->antireplay) {
+ sess_conf.ipsec.esn.value =
+ td[i].ipsec_xform.esn.value;
+ ret = rte_security_session_update(ctx, ses,
+ &sess_conf);
+ if (ret) {
+ printf("Could not update ESN in session\n");
+ rte_pktmbuf_free(tx_pkt);
+ ret = TEST_SKIPPED;
+ goto out;
+ }
+ }
+ if (ol_flags & RTE_SECURITY_TX_OLOAD_NEED_MDATA)
+ rte_security_set_pkt_metadata(ctx, ses,
+ tx_pkt, NULL);
+ tx_pkt->ol_flags |= RTE_MBUF_F_TX_SEC_OFFLOAD;
+ }
+ /* Send packet to ethdev for inline IPsec processing. */
+ nb_sent = rte_eth_tx_burst(port_id, 0, &tx_pkt, 1);
+ if (nb_sent != 1) {
+ printf("\nUnable to TX packets");
+ rte_pktmbuf_free(tx_pkt);
+ ret = TEST_FAILED;
+ goto out;
+ }
+
+ rte_pause();
+
+ /* Receive back packet on loopback interface. */
+ do {
+ rte_delay_ms(1);
+ nb_rx = rte_eth_rx_burst(port_id, 0, &rx_pkt, 1);
+ } while (nb_rx == 0);
+
+ rte_pktmbuf_adj(rx_pkt, RTE_ETHER_HDR_LEN);
+
+ if (res_d != NULL)
+ res_d_tmp = &res_d[i];
+
+ ret = test_ipsec_post_process(rx_pkt, &td[i],
+ res_d_tmp, silent, flags);
+ if (ret != TEST_SUCCESS) {
+ rte_pktmbuf_free(rx_pkt);
+ goto out;
+ }
+
+ ret = test_ipsec_stats_verify(ctx, ses, flags,
+ td->ipsec_xform.direction);
+ if (ret != TEST_SUCCESS) {
+ rte_pktmbuf_free(rx_pkt);
+ goto out;
+ }
+
+ rte_pktmbuf_free(rx_pkt);
+ rx_pkt = NULL;
+ tx_pkt = NULL;
+ }
+
+out:
+ if (td->ipsec_xform.direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS)
+ destroy_default_flow(port_id);
+
+ /* Destroy session so that other cases can create the session again */
+ rte_security_session_destroy(ctx, ses);
+ ses = NULL;
+
+ return ret;
+}
static int
ut_setup_inline_ipsec(void)
@@ -1709,6 +1842,153 @@ test_ipsec_inline_proto_known_vec_fragmented(const void *test_data)
return test_ipsec_inline_proto_process(&td_outb, NULL, 1, false,
&flags);
}
+
+static int
+test_ipsec_inline_pkt_replay(const void *test_data, const uint64_t esn[],
+ bool replayed_pkt[], uint32_t nb_pkts, bool esn_en,
+ uint64_t winsz)
+{
+ struct ipsec_test_data td_outb[IPSEC_TEST_PACKETS_MAX];
+ struct ipsec_test_data td_inb[IPSEC_TEST_PACKETS_MAX];
+ struct ipsec_test_flags flags;
+ uint32_t i, ret = 0;
+
+ memset(&flags, 0, sizeof(flags));
+ flags.antireplay = true;
+
+ for (i = 0; i < nb_pkts; i++) {
+ memcpy(&td_outb[i], test_data, sizeof(td_outb[i]));
+ td_outb[i].ipsec_xform.options.iv_gen_disable = 1;
+ td_outb[i].ipsec_xform.replay_win_sz = winsz;
+ td_outb[i].ipsec_xform.options.esn = esn_en;
+ }
+
+ for (i = 0; i < nb_pkts; i++)
+ td_outb[i].ipsec_xform.esn.value = esn[i];
+
+ ret = test_ipsec_inline_proto_process_with_esn(td_outb, td_inb,
+ nb_pkts, true, &flags);
+ if (ret != TEST_SUCCESS)
+ return ret;
+
+ test_ipsec_td_update(td_inb, td_outb, nb_pkts, &flags);
+
+ for (i = 0; i < nb_pkts; i++) {
+ td_inb[i].ipsec_xform.options.esn = esn_en;
+ /* Set antireplay flag for packets to be dropped */
+ td_inb[i].ar_packet = replayed_pkt[i];
+ }
+
+ ret = test_ipsec_inline_proto_process_with_esn(td_inb, NULL, nb_pkts,
+ true, &flags);
+
+ return ret;
+}
+
+static int
+test_ipsec_inline_proto_pkt_antireplay(const void *test_data, uint64_t winsz)
+{
+ uint32_t nb_pkts = 5;
+ bool replayed_pkt[5];
+ uint64_t esn[5];
+
+ /* 1. Advance the TOP of the window to WS * 2 */
+ esn[0] = winsz * 2;
+ /* 2. Test sequence number within the new window (WS + 1) */
+ esn[1] = winsz + 1;
+ /* 3. Test sequence number less than the window BOTTOM */
+ esn[2] = winsz;
+ /* 4. Test sequence number in the middle of the window */
+ esn[3] = winsz + (winsz / 2);
+ /* 5. Test replay of the packet in the middle of the window */
+ esn[4] = winsz + (winsz / 2);
+
+ replayed_pkt[0] = false;
+ replayed_pkt[1] = false;
+ replayed_pkt[2] = true;
+ replayed_pkt[3] = false;
+ replayed_pkt[4] = true;
+
+ return test_ipsec_inline_pkt_replay(test_data, esn, replayed_pkt,
+ nb_pkts, false, winsz);
+}
+
+static int
+test_ipsec_inline_proto_pkt_antireplay1024(const void *test_data)
+{
+ return test_ipsec_inline_proto_pkt_antireplay(test_data, 1024);
+}
+
+static int
+test_ipsec_inline_proto_pkt_antireplay2048(const void *test_data)
+{
+ return test_ipsec_inline_proto_pkt_antireplay(test_data, 2048);
+}
+
+static int
+test_ipsec_inline_proto_pkt_antireplay4096(const void *test_data)
+{
+ return test_ipsec_inline_proto_pkt_antireplay(test_data, 4096);
+}
+
+static int
+test_ipsec_inline_proto_pkt_esn_antireplay(const void *test_data, uint64_t winsz)
+{
+ uint32_t nb_pkts = 7;
+ bool replayed_pkt[7];
+ uint64_t esn[7];
+
+ /* Set the initial sequence number */
+ esn[0] = (uint64_t)(0xFFFFFFFF - winsz);
+ /* 1. Advance the TOP of the window to (1<<32 + WS/2) */
+ esn[1] = (uint64_t)((1ULL << 32) + (winsz / 2));
+ /* 2. Test sequence number within new window (1<<32 - WS/2 + 1) */
+ esn[2] = (uint64_t)((1ULL << 32) - (winsz / 2) + 1);
+ /* 3. Test with sequence number within window (1<<32 - 1) */
+ esn[3] = (uint64_t)((1ULL << 32) - 1);
+ /* 4. Test with sequence number within window (1<<32) */
+ esn[4] = (uint64_t)(1ULL << 32);
+ /* 5. Test with duplicate sequence number within
+ * new window (1<<32 - 1)
+ */
+ esn[5] = (uint64_t)((1ULL << 32) - 1);
+ /* 6. Test with duplicate sequence number within new window (1<<32) */
+ esn[6] = (uint64_t)(1ULL << 32);
+
+ replayed_pkt[0] = false;
+ replayed_pkt[1] = false;
+ replayed_pkt[2] = false;
+ replayed_pkt[3] = false;
+ replayed_pkt[4] = false;
+ replayed_pkt[5] = true;
+ replayed_pkt[6] = true;
+
+ return test_ipsec_inline_pkt_replay(test_data, esn, replayed_pkt, nb_pkts,
+ true, winsz);
+}
+
+static int
+test_ipsec_inline_proto_pkt_esn_antireplay1024(const void *test_data)
+{
+ return test_ipsec_inline_proto_pkt_esn_antireplay(test_data, 1024);
+}
+
+static int
+test_ipsec_inline_proto_pkt_esn_antireplay2048(const void *test_data)
+{
+ return test_ipsec_inline_proto_pkt_esn_antireplay(test_data, 2048);
+}
+
+static int
+test_ipsec_inline_proto_pkt_esn_antireplay4096(const void *test_data)
+{
+ return test_ipsec_inline_proto_pkt_esn_antireplay(test_data, 4096);
+}
+
static struct unit_test_suite inline_ipsec_testsuite = {
.suite_name = "Inline IPsec Ethernet Device Unit Test Suite",
.setup = inline_ipsec_testsuite_setup,
@@ -1935,6 +2215,37 @@ static struct unit_test_suite inline_ipsec_testsuite = {
test_ipsec_inline_proto_iv_gen),
+ TEST_CASE_NAMED_WITH_DATA(
+ "Antireplay with window size 1024",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_pkt_antireplay1024,
+ &pkt_aes_128_gcm),
+ TEST_CASE_NAMED_WITH_DATA(
+ "Antireplay with window size 2048",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_pkt_antireplay2048,
+ &pkt_aes_128_gcm),
+ TEST_CASE_NAMED_WITH_DATA(
+ "Antireplay with window size 4096",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_pkt_antireplay4096,
+ &pkt_aes_128_gcm),
+ TEST_CASE_NAMED_WITH_DATA(
+ "ESN and Antireplay with window size 1024",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_pkt_esn_antireplay1024,
+ &pkt_aes_128_gcm),
+ TEST_CASE_NAMED_WITH_DATA(
+ "ESN and Antireplay with window size 2048",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_pkt_esn_antireplay2048,
+ &pkt_aes_128_gcm),
+ TEST_CASE_NAMED_WITH_DATA(
+ "ESN and Antireplay with window size 4096",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_pkt_esn_antireplay4096,
+ &pkt_aes_128_gcm),
+
TEST_CASE_NAMED_WITH_DATA(
"IPv4 Reassembly with 2 fragments",
ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
--
2.25.1
* [PATCH v7 7/7] test/security: add inline IPsec IPv6 flow label cases
2022-05-24 7:22 ` [PATCH v7 0/7] app/test: add inline IPsec and reassembly cases Akhil Goyal
` (5 preceding siblings ...)
2022-05-24 7:22 ` [PATCH v7 6/7] test/security: add ESN and anti-replay cases for inline Akhil Goyal
@ 2022-05-24 7:22 ` Akhil Goyal
2022-05-24 8:05 ` [PATCH v7 0/7] app/test: add inline IPsec and reassembly cases Anoob Joseph
7 siblings, 0 replies; 184+ messages in thread
From: Akhil Goyal @ 2022-05-24 7:22 UTC (permalink / raw)
To: dev
Cc: thomas, david.marchand, hemant.agrawal, anoobj,
konstantin.ananyev, ciara.power, ferruh.yigit, andrew.rybchenko,
ndabilpuram, vattunuru, Fan Zhang
From: Vamsi Attunuru <vattunuru@marvell.com>
Patch adds unit tests for IPv6 flow label set & copy
operations.
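As background for these cases, the IPv6 flow label occupies the low 20 bits of the header's vtc_flow word (RFC 8200). A minimal sketch of get/set helpers with those semantics is shown below; the local IPV6_HDR_FL_* defines mirror the values of DPDK's RTE_IPV6_HDR_FL_MASK and RTE_IPV6_HDR_FL_SHIFT so the sketch stands alone, and this is an illustration, not the patch's code.

```c
#include <stdint.h>

/* Flow label: low 20 bits of the host-order vtc_flow word
 * (4-bit version and 8-bit traffic class sit above it). */
#define IPV6_HDR_FL_SHIFT 0
#define IPV6_HDR_FL_MASK  0x000FFFFFu

/* Extract the 20-bit flow label from a host-order vtc_flow word. */
static inline uint32_t
ipv6_flabel_get(uint32_t vtc_flow)
{
	return (vtc_flow & IPV6_HDR_FL_MASK) >> IPV6_HDR_FL_SHIFT;
}

/* Overwrite the flow label, preserving version and traffic class bits. */
static inline uint32_t
ipv6_flabel_set(uint32_t vtc_flow, uint32_t flabel)
{
	return (vtc_flow & ~IPV6_HDR_FL_MASK) |
	       ((flabel << IPV6_HDR_FL_SHIFT) & IPV6_HDR_FL_MASK);
}
```

For example, setting flow label 0x1234 on a word whose upper bits carry version 6 leaves those upper bits untouched.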
Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
---
app/test/test_cryptodev_security_ipsec.c | 35 ++++++++++-
app/test/test_cryptodev_security_ipsec.h | 10 +++
app/test/test_security_inline_proto.c | 79 ++++++++++++++++++++++++
3 files changed, 123 insertions(+), 1 deletion(-)
diff --git a/app/test/test_cryptodev_security_ipsec.c b/app/test/test_cryptodev_security_ipsec.c
index 14c6ba681f..408bd0bc82 100644
--- a/app/test/test_cryptodev_security_ipsec.c
+++ b/app/test/test_cryptodev_security_ipsec.c
@@ -495,6 +495,10 @@ test_ipsec_td_prepare(const struct crypto_param *param1,
flags->dscp == TEST_IPSEC_COPY_DSCP_INNER_1)
td->ipsec_xform.options.copy_dscp = 1;
+ if (flags->flabel == TEST_IPSEC_COPY_FLABEL_INNER_0 ||
+ flags->flabel == TEST_IPSEC_COPY_FLABEL_INNER_1)
+ td->ipsec_xform.options.copy_flabel = 1;
+
if (flags->dec_ttl_or_hop_limit)
td->ipsec_xform.options.dec_ttl = 1;
}
@@ -933,6 +937,7 @@ test_ipsec_iph6_hdr_validate(const struct rte_ipv6_hdr *iph6,
const struct ipsec_test_flags *flags)
{
uint32_t vtc_flow;
+ uint32_t flabel;
uint8_t dscp;
if (!is_valid_ipv6_pkt(iph6)) {
@@ -959,6 +964,23 @@ test_ipsec_iph6_hdr_validate(const struct rte_ipv6_hdr *iph6,
}
}
+ flabel = vtc_flow & RTE_IPV6_HDR_FL_MASK;
+
+ if (flags->flabel == TEST_IPSEC_COPY_FLABEL_INNER_1 ||
+ flags->flabel == TEST_IPSEC_SET_FLABEL_1_INNER_0) {
+ if (flabel != TEST_IPSEC_FLABEL_VAL) {
+ printf("FLABEL value is not matching [exp: %x, actual: %x]\n",
+ TEST_IPSEC_FLABEL_VAL, flabel);
+ return -1;
+ }
+ } else {
+ if (flabel != 0) {
+ printf("FLABEL value is set [exp: 0, actual: %x]\n",
+ flabel);
+ return -1;
+ }
+ }
+
return 0;
}
@@ -1159,7 +1181,11 @@ test_ipsec_pkt_update(uint8_t *pkt, const struct ipsec_test_flags *flags)
if (flags->dscp == TEST_IPSEC_COPY_DSCP_INNER_1 ||
flags->dscp == TEST_IPSEC_SET_DSCP_0_INNER_1 ||
flags->dscp == TEST_IPSEC_COPY_DSCP_INNER_0 ||
- flags->dscp == TEST_IPSEC_SET_DSCP_1_INNER_0) {
+ flags->dscp == TEST_IPSEC_SET_DSCP_1_INNER_0 ||
+ flags->flabel == TEST_IPSEC_COPY_FLABEL_INNER_1 ||
+ flags->flabel == TEST_IPSEC_SET_FLABEL_0_INNER_1 ||
+ flags->flabel == TEST_IPSEC_COPY_FLABEL_INNER_0 ||
+ flags->flabel == TEST_IPSEC_SET_FLABEL_1_INNER_0) {
if (is_ipv4(iph4)) {
uint8_t tos;
@@ -1187,6 +1213,13 @@ test_ipsec_pkt_update(uint8_t *pkt, const struct ipsec_test_flags *flags)
else
vtc_flow &= ~RTE_IPV6_HDR_DSCP_MASK;
+ if (flags->flabel == TEST_IPSEC_COPY_FLABEL_INNER_1 ||
+ flags->flabel == TEST_IPSEC_SET_FLABEL_0_INNER_1)
+ vtc_flow |= (RTE_IPV6_HDR_FL_MASK &
+ (TEST_IPSEC_FLABEL_VAL << RTE_IPV6_HDR_FL_SHIFT));
+ else
+ vtc_flow &= ~RTE_IPV6_HDR_FL_MASK;
+
iph6->vtc_flow = rte_cpu_to_be_32(vtc_flow);
}
}
diff --git a/app/test/test_cryptodev_security_ipsec.h b/app/test/test_cryptodev_security_ipsec.h
index 0d9b5b6e2e..744dd64a9e 100644
--- a/app/test/test_cryptodev_security_ipsec.h
+++ b/app/test/test_cryptodev_security_ipsec.h
@@ -73,6 +73,15 @@ enum dscp_flags {
TEST_IPSEC_SET_DSCP_1_INNER_0,
};
+#define TEST_IPSEC_FLABEL_VAL 0x1234
+
+enum flabel_flags {
+ TEST_IPSEC_COPY_FLABEL_INNER_0 = 1,
+ TEST_IPSEC_COPY_FLABEL_INNER_1,
+ TEST_IPSEC_SET_FLABEL_0_INNER_1,
+ TEST_IPSEC_SET_FLABEL_1_INNER_0,
+};
+
struct ipsec_test_flags {
bool display_alg;
bool sa_expiry_pkts_soft;
@@ -92,6 +101,7 @@ struct ipsec_test_flags {
bool antireplay;
enum df_flags df;
enum dscp_flags dscp;
+ enum flabel_flags flabel;
bool dec_ttl_or_hop_limit;
bool ah;
};
diff --git a/app/test/test_security_inline_proto.c b/app/test/test_security_inline_proto.c
index 518fb09113..82d27550f4 100644
--- a/app/test/test_security_inline_proto.c
+++ b/app/test/test_security_inline_proto.c
@@ -162,6 +162,13 @@ create_inline_ipsec_session(struct ipsec_test_data *sa, uint16_t portid,
sess_conf->ipsec.tunnel.ipv6.dscp =
TEST_IPSEC_DSCP_VAL;
+ if (flags->flabel == TEST_IPSEC_SET_FLABEL_0_INNER_1)
+ sess_conf->ipsec.tunnel.ipv6.flabel = 0;
+
+ if (flags->flabel == TEST_IPSEC_SET_FLABEL_1_INNER_0)
+ sess_conf->ipsec.tunnel.ipv6.flabel =
+ TEST_IPSEC_FLABEL_VAL;
+
memcpy(&sess_conf->ipsec.tunnel.ipv6.src_addr, &src_v6,
sizeof(src_v6));
memcpy(&sess_conf->ipsec.tunnel.ipv6.dst_addr, &dst_v6,
@@ -1792,6 +1799,62 @@ test_ipsec_inline_proto_ipv6_set_dscp_1_inner_0(const void *data __rte_unused)
return test_ipsec_inline_proto_all(&flags);
}
+static int
+test_ipsec_inline_proto_ipv6_copy_flabel_inner_0(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.ipv6 = true;
+ flags.tunnel_ipv6 = true;
+ flags.flabel = TEST_IPSEC_COPY_FLABEL_INNER_0;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_ipv6_copy_flabel_inner_1(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.ipv6 = true;
+ flags.tunnel_ipv6 = true;
+ flags.flabel = TEST_IPSEC_COPY_FLABEL_INNER_1;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_ipv6_set_flabel_0_inner_1(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.ipv6 = true;
+ flags.tunnel_ipv6 = true;
+ flags.flabel = TEST_IPSEC_SET_FLABEL_0_INNER_1;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_ipv6_set_flabel_1_inner_0(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags;
+
+ memset(&flags, 0, sizeof(flags));
+
+ flags.ipv6 = true;
+ flags.tunnel_ipv6 = true;
+ flags.flabel = TEST_IPSEC_SET_FLABEL_1_INNER_0;
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
static int
test_ipsec_inline_proto_ipv4_ttl_decrement(const void *data __rte_unused)
{
@@ -2201,6 +2264,22 @@ static struct unit_test_suite inline_ipsec_testsuite = {
"Tunnel header IPv6 set DSCP 1 (inner 0)",
ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
test_ipsec_inline_proto_ipv6_set_dscp_1_inner_0),
+ TEST_CASE_NAMED_ST(
+ "Tunnel header IPv6 copy FLABEL (inner 0)",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_ipv6_copy_flabel_inner_0),
+ TEST_CASE_NAMED_ST(
+ "Tunnel header IPv6 copy FLABEL (inner 1)",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_ipv6_copy_flabel_inner_1),
+ TEST_CASE_NAMED_ST(
+ "Tunnel header IPv6 set FLABEL 0 (inner 1)",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_ipv6_set_flabel_0_inner_1),
+ TEST_CASE_NAMED_ST(
+ "Tunnel header IPv6 set FLABEL 1 (inner 0)",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_ipv6_set_flabel_1_inner_0),
TEST_CASE_NAMED_ST(
"Tunnel header IPv4 decrement inner TTL",
ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
--
2.25.1
* RE: [PATCH v7 0/7] app/test: add inline IPsec and reassembly cases
2022-05-24 7:22 ` [PATCH v7 0/7] app/test: add inline IPsec and reassembly cases Akhil Goyal
` (6 preceding siblings ...)
2022-05-24 7:22 ` [PATCH v7 7/7] test/security: add inline IPsec IPv6 flow label cases Akhil Goyal
@ 2022-05-24 8:05 ` Anoob Joseph
2022-05-24 9:38 ` Akhil Goyal
7 siblings, 1 reply; 184+ messages in thread
From: Anoob Joseph @ 2022-05-24 8:05 UTC (permalink / raw)
To: Akhil Goyal, dev
Cc: thomas, david.marchand, hemant.agrawal, konstantin.ananyev,
ciara.power, ferruh.yigit, andrew.rybchenko,
Nithin Kumar Dabilpuram, Vamsi Krishna Attunuru, Akhil Goyal
>
> IP reassembly offload was added in the last release.
> The test app for unit testing IP reassembly of inline inbound IPsec flows is added
> in this patchset.
> For testing IP reassembly, base inline IPsec support is also added. The app is enhanced in
> v4 to handle more functional unit test cases for inline IPsec, similar to Lookaside
> IPsec.
> The functions from Lookaside mode are reused to verify functional cases.
>
> Changes in v7:
> - fixed compilation
>
> Changes in v6:
> - Addressed comments from Anoob.
>
> changes in v5:
> - removed soft/hard expiry patches which are deferred for next release
> - skipped tests if no port is added.
> - added release notes.
> Changes in v4:
> - rebased over next-crypto
> - updated app to take benefit from Lookaside protocol test functions.
> - Added more functional cases
> - Added soft and hard expiry event subtypes in ethdev for testing SA soft and
> hard pkt/byte expiry events.
> - reassembly cases are squashed in a single patch
>
> Changes in v3:
> - incorporated latest ethdev changes for reassembly.
> - skipped build on windows as it needs rte_ipsec lib which is not
> compiled on windows.
> changes in v2:
> - added IPsec burst mode case
> - updated as per the latest ethdev changes.
>
>
> Akhil Goyal (6):
> app/test: add unit cases for inline IPsec offload
> test/security: add inline inbound IPsec cases
> test/security: add combined mode inline IPsec cases
> test/security: add inline IPsec reassembly cases
> test/security: add more inline IPsec functional cases
> test/security: add ESN and anti-replay cases for inline
>
> Vamsi Attunuru (1):
> test/security: add inline IPsec IPv6 flow label cases
>
> MAINTAINERS | 2 +-
> app/test/meson.build | 1 +
> app/test/test_cryptodev_security_ipsec.c | 35 +-
> app/test/test_cryptodev_security_ipsec.h | 10 +
> app/test/test_security_inline_proto.c | 2382 +++++++++++++++++
> app/test/test_security_inline_proto_vectors.h | 704 +++++
> doc/guides/rel_notes/release_22_07.rst | 5 +
> 7 files changed, 3137 insertions(+), 2 deletions(-) create mode 100644
> app/test/test_security_inline_proto.c
> create mode 100644 app/test/test_security_inline_proto_vectors.h
>
> --
> 2.25.1
Series Acked-by: Anoob Joseph <anoobj@marvell.com>
* RE: [PATCH v7 0/7] app/test: add inline IPsec and reassembly cases
2022-05-24 8:05 ` [PATCH v7 0/7] app/test: add inline IPsec and reassembly cases Anoob Joseph
@ 2022-05-24 9:38 ` Akhil Goyal
0 siblings, 0 replies; 184+ messages in thread
From: Akhil Goyal @ 2022-05-24 9:38 UTC (permalink / raw)
To: Anoob Joseph, dev
Cc: thomas, david.marchand, hemant.agrawal, konstantin.ananyev,
ciara.power, ferruh.yigit, andrew.rybchenko,
Nithin Kumar Dabilpuram, Vamsi Krishna Attunuru
> > Akhil Goyal (6):
> > app/test: add unit cases for inline IPsec offload
> > test/security: add inline inbound IPsec cases
> > test/security: add combined mode inline IPsec cases
> > test/security: add inline IPsec reassembly cases
> > test/security: add more inline IPsec functional cases
> > test/security: add ESN and anti-replay cases for inline
> >
> > Vamsi Attunuru (1):
> > test/security: add inline IPsec IPv6 flow label cases
> >
> > MAINTAINERS | 2 +-
> > app/test/meson.build | 1 +
> > app/test/test_cryptodev_security_ipsec.c | 35 +-
> > app/test/test_cryptodev_security_ipsec.h | 10 +
> > app/test/test_security_inline_proto.c | 2382 +++++++++++++++++
> > app/test/test_security_inline_proto_vectors.h | 704 +++++
> > doc/guides/rel_notes/release_22_07.rst | 5 +
> > 7 files changed, 3137 insertions(+), 2 deletions(-) create mode 100644
> > app/test/test_security_inline_proto.c
> > create mode 100644 app/test/test_security_inline_proto_vectors.h
> >
> > --
> > 2.25.1
>
> Series Acked-by: Anoob Joseph <anoobj@marvell.com>
Applied to dpdk-next-crypto
* [PATCH v5 0/3] Add and test IPsec SA expiry events
2022-04-16 19:25 ` [PATCH v4 07/10] ethdev: add IPsec SA expiry event subtypes Akhil Goyal
2022-04-19 8:58 ` Thomas Monjalon
@ 2022-09-24 13:57 ` Akhil Goyal
2022-09-24 13:57 ` [PATCH v5 1/3] ethdev: add IPsec SA expiry event subtypes Akhil Goyal
` (3 more replies)
1 sibling, 4 replies; 184+ messages in thread
From: Akhil Goyal @ 2022-09-24 13:57 UTC (permalink / raw)
To: dev
Cc: thomas, david.marchand, hemant.agrawal, vattunuru, ferruh.yigit,
andrew.rybchenko, konstantin.v.ananyev, jerinj, adwivedi, anoobj,
ndabilpuram, Akhil Goyal
This patchset is carried forward from the last release's patches [1],
which added test application changes to test inline IPsec.
These patches were not merged then due to ABI compatibility issues
caused by the extension of an enum.
Changes in this version:
Added a reference to the struct which raised these events.
[1] https://patches.dpdk.org/project/dpdk/patch/20220416192530.173895-8-gakhil@marvell.com/
Vamsi Attunuru (3):
ethdev: add IPsec SA expiry event subtypes
test/security: add inline IPsec SA soft expiry cases
test/security: add inline IPsec SA hard expiry cases
app/test/test_cryptodev_security_ipsec.h | 2 +
app/test/test_security_inline_proto.c | 158 +++++++++++++++++-
app/test/test_security_inline_proto_vectors.h | 6 +
lib/ethdev/rte_ethdev.h | 23 ++-
4 files changed, 186 insertions(+), 3 deletions(-)
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* [PATCH v5 1/3] ethdev: add IPsec SA expiry event subtypes
2022-09-24 13:57 ` [PATCH v5 0/3] Add and test IPsec SA expiry events Akhil Goyal
@ 2022-09-24 13:57 ` Akhil Goyal
2022-09-24 14:02 ` Akhil Goyal
2022-09-26 14:02 ` Thomas Monjalon
2022-09-24 13:57 ` [PATCH v5 2/3] test/security: add inline IPsec SA soft expiry cases Akhil Goyal
` (2 subsequent siblings)
3 siblings, 2 replies; 184+ messages in thread
From: Akhil Goyal @ 2022-09-24 13:57 UTC (permalink / raw)
To: dev
Cc: thomas, david.marchand, hemant.agrawal, vattunuru, ferruh.yigit,
andrew.rybchenko, konstantin.v.ananyev, jerinj, adwivedi, anoobj,
ndabilpuram, Akhil Goyal
From: Vamsi Attunuru <vattunuru@marvell.com>
Patch adds new event subtypes for notifying expiry
events upon reaching IPsec SA soft packet expiry and
hard packet/byte expiry limits.
Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
---
lib/ethdev/rte_ethdev.h | 23 ++++++++++++++++++++++-
1 file changed, 22 insertions(+), 1 deletion(-)
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index 2e783536c1..d730676a0e 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -3875,8 +3875,26 @@ enum rte_eth_event_ipsec_subtype {
RTE_ETH_EVENT_IPSEC_ESN_OVERFLOW,
/** Soft time expiry of SA */
RTE_ETH_EVENT_IPSEC_SA_TIME_EXPIRY,
- /** Soft byte expiry of SA */
+ /**
+ * Soft byte expiry of SA determined by @ref bytes_soft_limit
+ * defined in @ref rte_security_ipsec_lifetime
+ */
RTE_ETH_EVENT_IPSEC_SA_BYTE_EXPIRY,
+ /**
+ * Soft packet expiry of SA determined by @ref packets_soft_limit
+ * defined in @ref rte_security_ipsec_lifetime
+ */
+ RTE_ETH_EVENT_IPSEC_SA_PKT_EXPIRY,
+ /**
+ * Hard byte expiry of SA determined by @ref bytes_hard_limit
+ * defined in @ref rte_security_ipsec_lifetime
+ */
+ RTE_ETH_EVENT_IPSEC_SA_BYTE_HARD_EXPIRY,
+ /**
+ * Hard packet expiry of SA determined by @ref packets_hard_limit
+ * defined in @ref rte_security_ipsec_lifetime
+ */
+ RTE_ETH_EVENT_IPSEC_SA_PKT_HARD_EXPIRY,
/** Max value of this enum */
RTE_ETH_EVENT_IPSEC_MAX
};
@@ -3898,6 +3916,9 @@ struct rte_eth_event_ipsec_desc {
* - @ref RTE_ETH_EVENT_IPSEC_ESN_OVERFLOW
* - @ref RTE_ETH_EVENT_IPSEC_SA_TIME_EXPIRY
* - @ref RTE_ETH_EVENT_IPSEC_SA_BYTE_EXPIRY
+ * - @ref RTE_ETH_EVENT_IPSEC_SA_PKT_EXPIRY
+ * - @ref RTE_ETH_EVENT_IPSEC_SA_BYTE_HARD_EXPIRY
+ * - @ref RTE_ETH_EVENT_IPSEC_SA_PKT_HARD_EXPIRY
*
* @see struct rte_security_session_conf
*
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* [PATCH v5 2/3] test/security: add inline IPsec SA soft expiry cases
2022-09-24 13:57 ` [PATCH v5 0/3] Add and test IPsec SA expiry events Akhil Goyal
2022-09-24 13:57 ` [PATCH v5 1/3] ethdev: add IPsec SA expiry event subtypes Akhil Goyal
@ 2022-09-24 13:57 ` Akhil Goyal
2022-09-24 13:57 ` [PATCH v5 3/3] test/security: add inline IPsec SA hard " Akhil Goyal
2022-09-26 17:07 ` [PATCH v6 0/3] Add and test IPsec SA expiry events Akhil Goyal
3 siblings, 0 replies; 184+ messages in thread
From: Akhil Goyal @ 2022-09-24 13:57 UTC (permalink / raw)
To: dev
Cc: thomas, david.marchand, hemant.agrawal, vattunuru, ferruh.yigit,
andrew.rybchenko, konstantin.v.ananyev, jerinj, adwivedi, anoobj,
ndabilpuram
From: Vamsi Attunuru <vattunuru@marvell.com>
Patch adds unit tests for packet & byte soft expiry events.
Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
---
app/test/test_cryptodev_security_ipsec.h | 2 +
app/test/test_security_inline_proto.c | 105 +++++++++++++++++-
app/test/test_security_inline_proto_vectors.h | 6 +
3 files changed, 112 insertions(+), 1 deletion(-)
diff --git a/app/test/test_cryptodev_security_ipsec.h b/app/test/test_cryptodev_security_ipsec.h
index 744dd64a9e..9a3c021dd8 100644
--- a/app/test/test_cryptodev_security_ipsec.h
+++ b/app/test/test_cryptodev_security_ipsec.h
@@ -86,6 +86,8 @@ struct ipsec_test_flags {
bool display_alg;
bool sa_expiry_pkts_soft;
bool sa_expiry_pkts_hard;
+ bool sa_expiry_bytes_soft;
+ bool sa_expiry_bytes_hard;
bool icv_corrupt;
bool iv_gen;
uint32_t tunnel_hdr_verify;
diff --git a/app/test/test_security_inline_proto.c b/app/test/test_security_inline_proto.c
index 5f26a04b06..5747ee0990 100644
--- a/app/test/test_security_inline_proto.c
+++ b/app/test/test_security_inline_proto.c
@@ -947,6 +947,62 @@ event_rx_burst(struct rte_mbuf **rx_pkts, uint16_t nb_pkts_to_rx)
return nb_rx;
}
+static int
+test_ipsec_inline_sa_exp_event_callback(uint16_t port_id,
+ enum rte_eth_event_type type, void *param, void *ret_param)
+{
+ struct sa_expiry_vector *vector = (struct sa_expiry_vector *)param;
+ struct rte_eth_event_ipsec_desc *event_desc = NULL;
+
+ RTE_SET_USED(port_id);
+
+ if (type != RTE_ETH_EVENT_IPSEC)
+ return -1;
+
+ event_desc = ret_param;
+ if (event_desc == NULL) {
+ printf("Event descriptor not set\n");
+ return -1;
+ }
+ vector->notify_event = true;
+ if (event_desc->metadata != (uint64_t)vector->sa_data) {
+ printf("Mismatch in event specific metadata\n");
+ return -1;
+ }
+ if (event_desc->subtype == RTE_ETH_EVENT_IPSEC_SA_PKT_EXPIRY) {
+ vector->event = RTE_ETH_EVENT_IPSEC_SA_PKT_EXPIRY;
+ return 0;
+ } else if (event_desc->subtype == RTE_ETH_EVENT_IPSEC_SA_BYTE_EXPIRY) {
+ vector->event = RTE_ETH_EVENT_IPSEC_SA_BYTE_EXPIRY;
+ return 0;
+ } else if (event_desc->subtype >= RTE_ETH_EVENT_IPSEC_MAX) {
+ printf("Invalid IPsec event reported\n");
+ return -1;
+ }
+
+ return -1;
+}
+
+static enum rte_eth_event_ipsec_subtype
+test_ipsec_inline_setup_expiry_vector(struct sa_expiry_vector *vector,
+ const struct ipsec_test_flags *flags,
+ struct ipsec_test_data *tdata)
+{
+ enum rte_eth_event_ipsec_subtype event = RTE_ETH_EVENT_IPSEC_UNKNOWN;
+
+ vector->event = RTE_ETH_EVENT_IPSEC_UNKNOWN;
+ vector->notify_event = false;
+ vector->sa_data = (void *)tdata;
+ if (flags->sa_expiry_pkts_soft)
+ event = RTE_ETH_EVENT_IPSEC_SA_PKT_EXPIRY;
+ else
+ event = RTE_ETH_EVENT_IPSEC_SA_BYTE_EXPIRY;
+ rte_eth_dev_callback_register(port_id, RTE_ETH_EVENT_IPSEC,
+ test_ipsec_inline_sa_exp_event_callback, vector);
+
+ return event;
+}
+
static int
test_ipsec_inline_proto_process(struct ipsec_test_data *td,
struct ipsec_test_data *res_d,
@@ -954,10 +1010,12 @@ test_ipsec_inline_proto_process(struct ipsec_test_data *td,
bool silent,
const struct ipsec_test_flags *flags)
{
+ enum rte_eth_event_ipsec_subtype event = RTE_ETH_EVENT_IPSEC_UNKNOWN;
struct rte_security_session_conf sess_conf = {0};
struct rte_crypto_sym_xform cipher = {0};
struct rte_crypto_sym_xform auth = {0};
struct rte_crypto_sym_xform aead = {0};
+ struct sa_expiry_vector vector = {0};
struct rte_security_session *ses;
struct rte_security_ctx *ctx;
int nb_rx = 0, nb_sent;
@@ -966,6 +1024,12 @@ test_ipsec_inline_proto_process(struct ipsec_test_data *td,
memset(rx_pkts_burst, 0, sizeof(rx_pkts_burst[0]) * nb_pkts);
+ if (flags->sa_expiry_pkts_soft || flags->sa_expiry_bytes_soft) {
+ if (td->ipsec_xform.direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS)
+ return TEST_SUCCESS;
+ event = test_ipsec_inline_setup_expiry_vector(&vector, flags, td);
+ }
+
if (td->aead) {
sess_conf.crypto_xform = &aead;
} else {
@@ -1083,6 +1147,15 @@ test_ipsec_inline_proto_process(struct ipsec_test_data *td,
out:
if (td->ipsec_xform.direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS)
destroy_default_flow(port_id);
+ if (flags->sa_expiry_pkts_soft || flags->sa_expiry_bytes_soft) {
+ if (vector.notify_event && (vector.event == event))
+ ret = TEST_SUCCESS;
+ else
+ ret = TEST_FAILED;
+
+ rte_eth_dev_callback_unregister(port_id, RTE_ETH_EVENT_IPSEC,
+ test_ipsec_inline_sa_exp_event_callback, &vector);
+ }
/* Destroy session so that other cases can create the session again */
rte_security_session_destroy(ctx, ses);
@@ -1100,6 +1173,7 @@ test_ipsec_inline_proto_all(const struct ipsec_test_flags *flags)
int ret;
if (flags->iv_gen || flags->sa_expiry_pkts_soft ||
+ flags->sa_expiry_bytes_soft ||
flags->sa_expiry_pkts_hard)
nb_pkts = IPSEC_TEST_PACKETS_MAX;
@@ -1132,6 +1206,11 @@ test_ipsec_inline_proto_all(const struct ipsec_test_flags *flags)
if (flags->udp_encap)
td_outb.ipsec_xform.options.udp_encap = 1;
+ if (flags->sa_expiry_bytes_soft)
+ td_outb.ipsec_xform.life.bytes_soft_limit =
+ (((td_outb.output_text.len + RTE_ETHER_HDR_LEN)
+ * nb_pkts) >> 3) - 1;
+
ret = test_ipsec_inline_proto_process(&td_outb, &td_inb, nb_pkts,
false, flags);
if (ret == TEST_SKIPPED)
@@ -2242,6 +2321,23 @@ test_ipsec_inline_proto_iv_gen(const void *data __rte_unused)
return test_ipsec_inline_proto_all(&flags);
}
+static int
+test_ipsec_inline_proto_sa_pkt_soft_expiry(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags = {
+ .sa_expiry_pkts_soft = true
+ };
+ return test_ipsec_inline_proto_all(&flags);
+}
+static int
+test_ipsec_inline_proto_sa_byte_soft_expiry(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags = {
+ .sa_expiry_bytes_soft = true
+ };
+ return test_ipsec_inline_proto_all(&flags);
+}
+
static int
test_ipsec_inline_proto_known_vec_fragmented(const void *test_data)
{
@@ -2644,7 +2740,14 @@ static struct unit_test_suite inline_ipsec_testsuite = {
"IV generation",
ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
test_ipsec_inline_proto_iv_gen),
-
+ TEST_CASE_NAMED_ST(
+ "SA soft expiry with packet limit",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_sa_pkt_soft_expiry),
+ TEST_CASE_NAMED_ST(
+ "SA soft expiry with byte limit",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_sa_byte_soft_expiry),
TEST_CASE_NAMED_WITH_DATA(
"Antireplay with window size 1024",
diff --git a/app/test/test_security_inline_proto_vectors.h b/app/test/test_security_inline_proto_vectors.h
index c18965d80f..003537e200 100644
--- a/app/test/test_security_inline_proto_vectors.h
+++ b/app/test/test_security_inline_proto_vectors.h
@@ -36,6 +36,12 @@ struct reassembly_vector {
bool burst;
};
+struct sa_expiry_vector {
+ struct ipsec_session_data *sa_data;
+ enum rte_eth_event_ipsec_subtype event;
+ bool notify_event;
+};
+
/* The source file includes below test vectors */
/* IPv6:
*
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* [PATCH v5 3/3] test/security: add inline IPsec SA hard expiry cases
2022-09-24 13:57 ` [PATCH v5 0/3] Add and test IPsec SA expiry events Akhil Goyal
2022-09-24 13:57 ` [PATCH v5 1/3] ethdev: add IPsec SA expiry event subtypes Akhil Goyal
2022-09-24 13:57 ` [PATCH v5 2/3] test/security: add inline IPsec SA soft expiry cases Akhil Goyal
@ 2022-09-24 13:57 ` Akhil Goyal
2022-09-26 17:07 ` [PATCH v6 0/3] Add and test IPsec SA expiry events Akhil Goyal
3 siblings, 0 replies; 184+ messages in thread
From: Akhil Goyal @ 2022-09-24 13:57 UTC (permalink / raw)
To: dev
Cc: thomas, david.marchand, hemant.agrawal, vattunuru, ferruh.yigit,
andrew.rybchenko, konstantin.v.ananyev, jerinj, adwivedi, anoobj,
ndabilpuram
From: Vamsi Attunuru <vattunuru@marvell.com>
Patch adds hard expiry unit tests for both packet
and byte limits.
Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
---
app/test/test_security_inline_proto.c | 71 +++++++++++++++++++++++----
1 file changed, 61 insertions(+), 10 deletions(-)
diff --git a/app/test/test_security_inline_proto.c b/app/test/test_security_inline_proto.c
index 5747ee0990..8d0dd7765c 100644
--- a/app/test/test_security_inline_proto.c
+++ b/app/test/test_security_inline_proto.c
@@ -969,18 +969,25 @@ test_ipsec_inline_sa_exp_event_callback(uint16_t port_id,
printf("Mismatch in event specific metadata\n");
return -1;
}
- if (event_desc->subtype == RTE_ETH_EVENT_IPSEC_SA_PKT_EXPIRY) {
+ switch (event_desc->subtype) {
+ case RTE_ETH_EVENT_IPSEC_SA_PKT_EXPIRY:
vector->event = RTE_ETH_EVENT_IPSEC_SA_PKT_EXPIRY;
- return 0;
- } else if (event_desc->subtype == RTE_ETH_EVENT_IPSEC_SA_BYTE_EXPIRY) {
+ break;
+ case RTE_ETH_EVENT_IPSEC_SA_BYTE_EXPIRY:
vector->event = RTE_ETH_EVENT_IPSEC_SA_BYTE_EXPIRY;
- return 0;
- } else if (event_desc->subtype >= RTE_ETH_EVENT_IPSEC_MAX) {
+ break;
+ case RTE_ETH_EVENT_IPSEC_SA_PKT_HARD_EXPIRY:
+ vector->event = RTE_ETH_EVENT_IPSEC_SA_PKT_HARD_EXPIRY;
+ break;
+ case RTE_ETH_EVENT_IPSEC_SA_BYTE_HARD_EXPIRY:
+ vector->event = RTE_ETH_EVENT_IPSEC_SA_BYTE_HARD_EXPIRY;
+ break;
+ default:
printf("Invalid IPsec event reported\n");
return -1;
}
- return -1;
+ return 0;
}
static enum rte_eth_event_ipsec_subtype
@@ -995,8 +1002,12 @@ test_ipsec_inline_setup_expiry_vector(struct sa_expiry_vector *vector,
vector->sa_data = (void *)tdata;
if (flags->sa_expiry_pkts_soft)
event = RTE_ETH_EVENT_IPSEC_SA_PKT_EXPIRY;
- else
+ else if (flags->sa_expiry_bytes_soft)
event = RTE_ETH_EVENT_IPSEC_SA_BYTE_EXPIRY;
+ else if (flags->sa_expiry_pkts_hard)
+ event = RTE_ETH_EVENT_IPSEC_SA_PKT_HARD_EXPIRY;
+ else
+ event = RTE_ETH_EVENT_IPSEC_SA_BYTE_HARD_EXPIRY;
rte_eth_dev_callback_register(port_id, RTE_ETH_EVENT_IPSEC,
test_ipsec_inline_sa_exp_event_callback, vector);
@@ -1024,7 +1035,8 @@ test_ipsec_inline_proto_process(struct ipsec_test_data *td,
memset(rx_pkts_burst, 0, sizeof(rx_pkts_burst[0]) * nb_pkts);
- if (flags->sa_expiry_pkts_soft || flags->sa_expiry_bytes_soft) {
+ if (flags->sa_expiry_pkts_soft || flags->sa_expiry_bytes_soft ||
+ flags->sa_expiry_pkts_hard || flags->sa_expiry_bytes_hard) {
if (td->ipsec_xform.direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS)
return TEST_SUCCESS;
event = test_ipsec_inline_setup_expiry_vector(&vector, flags, td);
@@ -1112,7 +1124,9 @@ test_ipsec_inline_proto_process(struct ipsec_test_data *td,
break;
} while (j++ < 5 || nb_rx == 0);
- if (nb_rx != nb_sent) {
+ if (!flags->sa_expiry_pkts_hard &&
+ !flags->sa_expiry_bytes_hard &&
+ (nb_rx != nb_sent)) {
printf("\nUnable to RX all %d packets, received(%i)",
nb_sent, nb_rx);
while (--nb_rx >= 0)
@@ -1147,7 +1161,8 @@ test_ipsec_inline_proto_process(struct ipsec_test_data *td,
out:
if (td->ipsec_xform.direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS)
destroy_default_flow(port_id);
- if (flags->sa_expiry_pkts_soft || flags->sa_expiry_bytes_soft) {
+ if (flags->sa_expiry_pkts_soft || flags->sa_expiry_bytes_soft ||
+ flags->sa_expiry_pkts_hard || flags->sa_expiry_bytes_hard) {
if (vector.notify_event && (vector.event == event))
ret = TEST_SUCCESS;
else
@@ -1174,6 +1189,7 @@ test_ipsec_inline_proto_all(const struct ipsec_test_flags *flags)
if (flags->iv_gen || flags->sa_expiry_pkts_soft ||
flags->sa_expiry_bytes_soft ||
+ flags->sa_expiry_bytes_hard ||
flags->sa_expiry_pkts_hard)
nb_pkts = IPSEC_TEST_PACKETS_MAX;
@@ -1210,6 +1226,13 @@ test_ipsec_inline_proto_all(const struct ipsec_test_flags *flags)
td_outb.ipsec_xform.life.bytes_soft_limit =
(((td_outb.output_text.len + RTE_ETHER_HDR_LEN)
* nb_pkts) >> 3) - 1;
+ if (flags->sa_expiry_pkts_hard)
+ td_outb.ipsec_xform.life.packets_hard_limit =
+ IPSEC_TEST_PACKETS_MAX - 1;
+ if (flags->sa_expiry_bytes_hard)
+ td_outb.ipsec_xform.life.bytes_hard_limit =
+ (((td_outb.output_text.len + RTE_ETHER_HDR_LEN)
+ * nb_pkts) >> 3) - 1;
ret = test_ipsec_inline_proto_process(&td_outb, &td_inb, nb_pkts,
false, flags);
@@ -2338,6 +2361,26 @@ test_ipsec_inline_proto_sa_byte_soft_expiry(const void *data __rte_unused)
return test_ipsec_inline_proto_all(&flags);
}
+static int
+test_ipsec_inline_proto_sa_pkt_hard_expiry(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags = {
+ .sa_expiry_pkts_hard = true
+ };
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_sa_byte_hard_expiry(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags = {
+ .sa_expiry_bytes_hard = true
+ };
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
static int
test_ipsec_inline_proto_known_vec_fragmented(const void *test_data)
{
@@ -2748,6 +2791,14 @@ static struct unit_test_suite inline_ipsec_testsuite = {
"SA soft expiry with byte limit",
ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
test_ipsec_inline_proto_sa_byte_soft_expiry),
+ TEST_CASE_NAMED_ST(
+ "SA hard expiry with packet limit",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_sa_pkt_hard_expiry),
+ TEST_CASE_NAMED_ST(
+ "SA hard expiry with byte limit",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_sa_byte_hard_expiry),
TEST_CASE_NAMED_WITH_DATA(
"Antireplay with window size 1024",
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* RE: [PATCH v5 1/3] ethdev: add IPsec SA expiry event subtypes
2022-09-24 13:57 ` [PATCH v5 1/3] ethdev: add IPsec SA expiry event subtypes Akhil Goyal
@ 2022-09-24 14:02 ` Akhil Goyal
2022-09-26 14:02 ` Thomas Monjalon
1 sibling, 0 replies; 184+ messages in thread
From: Akhil Goyal @ 2022-09-24 14:02 UTC (permalink / raw)
To: Akhil Goyal, dev, thomas
Cc: david.marchand, hemant.agrawal, Vamsi Krishna Attunuru,
ferruh.yigit, andrew.rybchenko, konstantin.v.ananyev,
Jerin Jacob Kollanukkaran, Ankur Dwivedi, Anoob Joseph,
Nithin Kumar Dabilpuram
Hi Thomas,
> Subject: [PATCH v5 1/3] ethdev: add IPsec SA expiry event subtypes
>
> From: Vamsi Attunuru <vattunuru@marvell.com>
>
> Patch adds new event subtypes for notifying expiry
> events upon reaching IPsec SA soft packet expiry and
> hard packet/byte expiry limits.
>
> Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
> Signed-off-by: Akhil Goyal <gakhil@marvell.com>
> ---
Could you ack this patch if there are no further comments?
This patch is from the last release, deferred to 22.11.
https://patches.dpdk.org/project/dpdk/patch/20220416192530.173895-8-gakhil@marvell.com/
Regards,
Akhil
^ permalink raw reply [flat|nested] 184+ messages in thread
* Re: [PATCH v5 1/3] ethdev: add IPsec SA expiry event subtypes
2022-09-24 13:57 ` [PATCH v5 1/3] ethdev: add IPsec SA expiry event subtypes Akhil Goyal
2022-09-24 14:02 ` Akhil Goyal
@ 2022-09-26 14:02 ` Thomas Monjalon
2022-09-27 18:44 ` [EXT] " Akhil Goyal
1 sibling, 1 reply; 184+ messages in thread
From: Thomas Monjalon @ 2022-09-26 14:02 UTC (permalink / raw)
To: Akhil Goyal
Cc: dev, david.marchand, hemant.agrawal, vattunuru, ferruh.yigit,
andrew.rybchenko, konstantin.v.ananyev, jerinj, adwivedi, anoobj,
ndabilpuram
24/09/2022 15:57, Akhil Goyal:
> From: Vamsi Attunuru <vattunuru@marvell.com>
>
> Patch adds new event subtypes for notifying expiry
> events upon reaching IPsec SA soft packet expiry and
> hard packet/byte expiry limits.
>
> Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
> Signed-off-by: Akhil Goyal <gakhil@marvell.com>
> ---
> --- a/lib/ethdev/rte_ethdev.h
> +++ b/lib/ethdev/rte_ethdev.h
> @@ -3875,8 +3875,26 @@ enum rte_eth_event_ipsec_subtype {
> RTE_ETH_EVENT_IPSEC_ESN_OVERFLOW,
> /** Soft time expiry of SA */
> RTE_ETH_EVENT_IPSEC_SA_TIME_EXPIRY,
> - /** Soft byte expiry of SA */
> + /**
> + * Soft byte expiry of SA determined by @ref bytes_soft_limit
> + * defined in @ref rte_security_ipsec_lifetime
> + */
> RTE_ETH_EVENT_IPSEC_SA_BYTE_EXPIRY,
> + /**
> + * Soft packet expiry of SA determined by @ref packets_soft_limit
> + * defined in @ref rte_security_ipsec_lifetime
> + */
> + RTE_ETH_EVENT_IPSEC_SA_PKT_EXPIRY,
> + /**
> + * Hard byte expiry of SA determined by @ref bytes_hard_limit
> + * defined in @ref rte_security_ipsec_lifetime
> + */
> + RTE_ETH_EVENT_IPSEC_SA_BYTE_HARD_EXPIRY,
> + /**
> + * Hard packet expiry of SA determined by @ref packets_hard_limit
> + * defined in @ref rte_security_ipsec_lifetime
> + */
> + RTE_ETH_EVENT_IPSEC_SA_PKT_HARD_EXPIRY,
> /** Max value of this enum */
> RTE_ETH_EVENT_IPSEC_MAX
I would prefer we remove this MAX value, but it would be another patch.
Acked-by: Thomas Monjalon <thomas@monjalon.net>
You can merge this patch in the crypto tree.
^ permalink raw reply [flat|nested] 184+ messages in thread
* [PATCH v6 0/3] Add and test IPsec SA expiry events
2022-09-24 13:57 ` [PATCH v5 0/3] Add and test IPsec SA expiry events Akhil Goyal
` (2 preceding siblings ...)
2022-09-24 13:57 ` [PATCH v5 3/3] test/security: add inline IPsec SA hard " Akhil Goyal
@ 2022-09-26 17:07 ` Akhil Goyal
2022-09-26 17:07 ` [PATCH v6 1/3] ethdev: add IPsec SA expiry event subtypes Akhil Goyal
` (2 more replies)
3 siblings, 3 replies; 184+ messages in thread
From: Akhil Goyal @ 2022-09-26 17:07 UTC (permalink / raw)
To: dev
Cc: thomas, david.marchand, hemant.agrawal, vattunuru, ferruh.yigit,
andrew.rybchenko, konstantin.v.ananyev, jerinj, adwivedi, anoobj,
ndabilpuram, Akhil Goyal
This patchset is carried forward from the last release's patches [1],
which added test application changes to test inline IPsec.
These patches were not merged earlier because extending the enum
raised ABI compatibility concerns.
Changes in v6:
fixed doc build in 1/3
Changes in v5:
added a reference to the struct which raised these events.
[1] https://patches.dpdk.org/project/dpdk/patch/20220416192530.173895-8-gakhil@marvell.com/
Vamsi Attunuru (3):
ethdev: add IPsec SA expiry event subtypes
test/security: add inline IPsec SA soft expiry cases
test/security: add inline IPsec SA hard expiry cases
app/test/test_cryptodev_security_ipsec.h | 2 +
app/test/test_security_inline_proto.c | 158 +++++++++++++++++-
app/test/test_security_inline_proto_vectors.h | 6 +
lib/ethdev/rte_ethdev.h | 23 ++-
4 files changed, 186 insertions(+), 3 deletions(-)
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* [PATCH v6 1/3] ethdev: add IPsec SA expiry event subtypes
2022-09-26 17:07 ` [PATCH v6 0/3] Add and test IPsec SA expiry events Akhil Goyal
@ 2022-09-26 17:07 ` Akhil Goyal
2022-09-26 17:07 ` [PATCH v6 2/3] test/security: add inline IPsec SA soft expiry cases Akhil Goyal
2022-09-26 17:07 ` [PATCH v6 3/3] test/security: add inline IPsec SA hard " Akhil Goyal
2 siblings, 0 replies; 184+ messages in thread
From: Akhil Goyal @ 2022-09-26 17:07 UTC (permalink / raw)
To: dev
Cc: thomas, david.marchand, hemant.agrawal, vattunuru, ferruh.yigit,
andrew.rybchenko, konstantin.v.ananyev, jerinj, adwivedi, anoobj,
ndabilpuram, Akhil Goyal
From: Vamsi Attunuru <vattunuru@marvell.com>
Patch adds new event subtypes for notifying expiry
events upon reaching IPsec SA soft packet expiry and
hard packet/byte expiry limits.
Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
Signed-off-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
---
lib/ethdev/rte_ethdev.h | 23 ++++++++++++++++++++++-
1 file changed, 22 insertions(+), 1 deletion(-)
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index 2e783536c1..3ee6786a79 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -3875,8 +3875,26 @@ enum rte_eth_event_ipsec_subtype {
RTE_ETH_EVENT_IPSEC_ESN_OVERFLOW,
/** Soft time expiry of SA */
RTE_ETH_EVENT_IPSEC_SA_TIME_EXPIRY,
- /** Soft byte expiry of SA */
+ /**
+ * Soft byte expiry of SA determined by
+ * @ref rte_security_ipsec_lifetime::bytes_soft_limit
+ */
RTE_ETH_EVENT_IPSEC_SA_BYTE_EXPIRY,
+ /**
+ * Soft packet expiry of SA determined by
+ * @ref rte_security_ipsec_lifetime::packets_soft_limit
+ */
+ RTE_ETH_EVENT_IPSEC_SA_PKT_EXPIRY,
+ /**
+ * Hard byte expiry of SA determined by
+ * @ref rte_security_ipsec_lifetime::bytes_hard_limit
+ */
+ RTE_ETH_EVENT_IPSEC_SA_BYTE_HARD_EXPIRY,
+ /**
+ * Hard packet expiry of SA determined by
+ * @ref rte_security_ipsec_lifetime::packets_hard_limit
+ */
+ RTE_ETH_EVENT_IPSEC_SA_PKT_HARD_EXPIRY,
/** Max value of this enum */
RTE_ETH_EVENT_IPSEC_MAX
};
@@ -3898,6 +3916,9 @@ struct rte_eth_event_ipsec_desc {
* - @ref RTE_ETH_EVENT_IPSEC_ESN_OVERFLOW
* - @ref RTE_ETH_EVENT_IPSEC_SA_TIME_EXPIRY
* - @ref RTE_ETH_EVENT_IPSEC_SA_BYTE_EXPIRY
+ * - @ref RTE_ETH_EVENT_IPSEC_SA_PKT_EXPIRY
+ * - @ref RTE_ETH_EVENT_IPSEC_SA_BYTE_HARD_EXPIRY
+ * - @ref RTE_ETH_EVENT_IPSEC_SA_PKT_HARD_EXPIRY
*
* @see struct rte_security_session_conf
*
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* [PATCH v6 2/3] test/security: add inline IPsec SA soft expiry cases
2022-09-26 17:07 ` [PATCH v6 0/3] Add and test IPsec SA expiry events Akhil Goyal
2022-09-26 17:07 ` [PATCH v6 1/3] ethdev: add IPsec SA expiry event subtypes Akhil Goyal
@ 2022-09-26 17:07 ` Akhil Goyal
2022-09-26 17:07 ` [PATCH v6 3/3] test/security: add inline IPsec SA hard " Akhil Goyal
2 siblings, 0 replies; 184+ messages in thread
From: Akhil Goyal @ 2022-09-26 17:07 UTC (permalink / raw)
To: dev
Cc: thomas, david.marchand, hemant.agrawal, vattunuru, ferruh.yigit,
andrew.rybchenko, konstantin.v.ananyev, jerinj, adwivedi, anoobj,
ndabilpuram, Akhil Goyal
From: Vamsi Attunuru <vattunuru@marvell.com>
Patch adds unit tests for packet & byte soft expiry events.
Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
---
app/test/test_cryptodev_security_ipsec.h | 2 +
app/test/test_security_inline_proto.c | 105 +++++++++++++++++-
app/test/test_security_inline_proto_vectors.h | 6 +
3 files changed, 112 insertions(+), 1 deletion(-)
diff --git a/app/test/test_cryptodev_security_ipsec.h b/app/test/test_cryptodev_security_ipsec.h
index 744dd64a9e..9a3c021dd8 100644
--- a/app/test/test_cryptodev_security_ipsec.h
+++ b/app/test/test_cryptodev_security_ipsec.h
@@ -86,6 +86,8 @@ struct ipsec_test_flags {
bool display_alg;
bool sa_expiry_pkts_soft;
bool sa_expiry_pkts_hard;
+ bool sa_expiry_bytes_soft;
+ bool sa_expiry_bytes_hard;
bool icv_corrupt;
bool iv_gen;
uint32_t tunnel_hdr_verify;
diff --git a/app/test/test_security_inline_proto.c b/app/test/test_security_inline_proto.c
index 5f26a04b06..5747ee0990 100644
--- a/app/test/test_security_inline_proto.c
+++ b/app/test/test_security_inline_proto.c
@@ -947,6 +947,62 @@ event_rx_burst(struct rte_mbuf **rx_pkts, uint16_t nb_pkts_to_rx)
return nb_rx;
}
+static int
+test_ipsec_inline_sa_exp_event_callback(uint16_t port_id,
+ enum rte_eth_event_type type, void *param, void *ret_param)
+{
+ struct sa_expiry_vector *vector = (struct sa_expiry_vector *)param;
+ struct rte_eth_event_ipsec_desc *event_desc = NULL;
+
+ RTE_SET_USED(port_id);
+
+ if (type != RTE_ETH_EVENT_IPSEC)
+ return -1;
+
+ event_desc = ret_param;
+ if (event_desc == NULL) {
+ printf("Event descriptor not set\n");
+ return -1;
+ }
+ vector->notify_event = true;
+ if (event_desc->metadata != (uint64_t)vector->sa_data) {
+ printf("Mismatch in event specific metadata\n");
+ return -1;
+ }
+ if (event_desc->subtype == RTE_ETH_EVENT_IPSEC_SA_PKT_EXPIRY) {
+ vector->event = RTE_ETH_EVENT_IPSEC_SA_PKT_EXPIRY;
+ return 0;
+ } else if (event_desc->subtype == RTE_ETH_EVENT_IPSEC_SA_BYTE_EXPIRY) {
+ vector->event = RTE_ETH_EVENT_IPSEC_SA_BYTE_EXPIRY;
+ return 0;
+ } else if (event_desc->subtype >= RTE_ETH_EVENT_IPSEC_MAX) {
+ printf("Invalid IPsec event reported\n");
+ return -1;
+ }
+
+ return -1;
+}
+
+static enum rte_eth_event_ipsec_subtype
+test_ipsec_inline_setup_expiry_vector(struct sa_expiry_vector *vector,
+ const struct ipsec_test_flags *flags,
+ struct ipsec_test_data *tdata)
+{
+ enum rte_eth_event_ipsec_subtype event = RTE_ETH_EVENT_IPSEC_UNKNOWN;
+
+ vector->event = RTE_ETH_EVENT_IPSEC_UNKNOWN;
+ vector->notify_event = false;
+ vector->sa_data = (void *)tdata;
+ if (flags->sa_expiry_pkts_soft)
+ event = RTE_ETH_EVENT_IPSEC_SA_PKT_EXPIRY;
+ else
+ event = RTE_ETH_EVENT_IPSEC_SA_BYTE_EXPIRY;
+ rte_eth_dev_callback_register(port_id, RTE_ETH_EVENT_IPSEC,
+ test_ipsec_inline_sa_exp_event_callback, vector);
+
+ return event;
+}
+
static int
test_ipsec_inline_proto_process(struct ipsec_test_data *td,
struct ipsec_test_data *res_d,
@@ -954,10 +1010,12 @@ test_ipsec_inline_proto_process(struct ipsec_test_data *td,
bool silent,
const struct ipsec_test_flags *flags)
{
+ enum rte_eth_event_ipsec_subtype event = RTE_ETH_EVENT_IPSEC_UNKNOWN;
struct rte_security_session_conf sess_conf = {0};
struct rte_crypto_sym_xform cipher = {0};
struct rte_crypto_sym_xform auth = {0};
struct rte_crypto_sym_xform aead = {0};
+ struct sa_expiry_vector vector = {0};
struct rte_security_session *ses;
struct rte_security_ctx *ctx;
int nb_rx = 0, nb_sent;
@@ -966,6 +1024,12 @@ test_ipsec_inline_proto_process(struct ipsec_test_data *td,
memset(rx_pkts_burst, 0, sizeof(rx_pkts_burst[0]) * nb_pkts);
+ if (flags->sa_expiry_pkts_soft || flags->sa_expiry_bytes_soft) {
+ if (td->ipsec_xform.direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS)
+ return TEST_SUCCESS;
+ event = test_ipsec_inline_setup_expiry_vector(&vector, flags, td);
+ }
+
if (td->aead) {
sess_conf.crypto_xform = &aead;
} else {
@@ -1083,6 +1147,15 @@ test_ipsec_inline_proto_process(struct ipsec_test_data *td,
out:
if (td->ipsec_xform.direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS)
destroy_default_flow(port_id);
+ if (flags->sa_expiry_pkts_soft || flags->sa_expiry_bytes_soft) {
+ if (vector.notify_event && (vector.event == event))
+ ret = TEST_SUCCESS;
+ else
+ ret = TEST_FAILED;
+
+ rte_eth_dev_callback_unregister(port_id, RTE_ETH_EVENT_IPSEC,
+ test_ipsec_inline_sa_exp_event_callback, &vector);
+ }
/* Destroy session so that other cases can create the session again */
rte_security_session_destroy(ctx, ses);
@@ -1100,6 +1173,7 @@ test_ipsec_inline_proto_all(const struct ipsec_test_flags *flags)
int ret;
if (flags->iv_gen || flags->sa_expiry_pkts_soft ||
+ flags->sa_expiry_bytes_soft ||
flags->sa_expiry_pkts_hard)
nb_pkts = IPSEC_TEST_PACKETS_MAX;
@@ -1132,6 +1206,11 @@ test_ipsec_inline_proto_all(const struct ipsec_test_flags *flags)
if (flags->udp_encap)
td_outb.ipsec_xform.options.udp_encap = 1;
+ if (flags->sa_expiry_bytes_soft)
+ td_outb.ipsec_xform.life.bytes_soft_limit =
+ (((td_outb.output_text.len + RTE_ETHER_HDR_LEN)
+ * nb_pkts) >> 3) - 1;
+
ret = test_ipsec_inline_proto_process(&td_outb, &td_inb, nb_pkts,
false, flags);
if (ret == TEST_SKIPPED)
@@ -2242,6 +2321,23 @@ test_ipsec_inline_proto_iv_gen(const void *data __rte_unused)
return test_ipsec_inline_proto_all(&flags);
}
+static int
+test_ipsec_inline_proto_sa_pkt_soft_expiry(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags = {
+ .sa_expiry_pkts_soft = true
+ };
+ return test_ipsec_inline_proto_all(&flags);
+}
+static int
+test_ipsec_inline_proto_sa_byte_soft_expiry(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags = {
+ .sa_expiry_bytes_soft = true
+ };
+ return test_ipsec_inline_proto_all(&flags);
+}
+
static int
test_ipsec_inline_proto_known_vec_fragmented(const void *test_data)
{
@@ -2644,7 +2740,14 @@ static struct unit_test_suite inline_ipsec_testsuite = {
"IV generation",
ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
test_ipsec_inline_proto_iv_gen),
-
+ TEST_CASE_NAMED_ST(
+ "SA soft expiry with packet limit",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_sa_pkt_soft_expiry),
+ TEST_CASE_NAMED_ST(
+ "SA soft expiry with byte limit",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_sa_byte_soft_expiry),
TEST_CASE_NAMED_WITH_DATA(
"Antireplay with window size 1024",
diff --git a/app/test/test_security_inline_proto_vectors.h b/app/test/test_security_inline_proto_vectors.h
index c18965d80f..003537e200 100644
--- a/app/test/test_security_inline_proto_vectors.h
+++ b/app/test/test_security_inline_proto_vectors.h
@@ -36,6 +36,12 @@ struct reassembly_vector {
bool burst;
};
+struct sa_expiry_vector {
+ struct ipsec_session_data *sa_data;
+ enum rte_eth_event_ipsec_subtype event;
+ bool notify_event;
+};
+
/* The source file includes below test vectors */
/* IPv6:
*
--
2.25.1
^ permalink raw reply [flat|nested] 184+ messages in thread
* [PATCH v6 3/3] test/security: add inline IPsec SA hard expiry cases
2022-09-26 17:07 ` [PATCH v6 0/3] Add and test IPsec SA expiry events Akhil Goyal
2022-09-26 17:07 ` [PATCH v6 1/3] ethdev: add IPsec SA expiry event subtypes Akhil Goyal
2022-09-26 17:07 ` [PATCH v6 2/3] test/security: add inline IPsec SA soft expiry cases Akhil Goyal
@ 2022-09-26 17:07 ` Akhil Goyal
2 siblings, 0 replies; 184+ messages in thread
From: Akhil Goyal @ 2022-09-26 17:07 UTC (permalink / raw)
To: dev
Cc: thomas, david.marchand, hemant.agrawal, vattunuru, ferruh.yigit,
andrew.rybchenko, konstantin.v.ananyev, jerinj, adwivedi, anoobj,
ndabilpuram, Akhil Goyal
From: Vamsi Attunuru <vattunuru@marvell.com>
Patch adds hard expiry unit tests for both packet
and byte limits.
Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
---
app/test/test_security_inline_proto.c | 71 +++++++++++++++++++++++----
1 file changed, 61 insertions(+), 10 deletions(-)
diff --git a/app/test/test_security_inline_proto.c b/app/test/test_security_inline_proto.c
index 5747ee0990..8d0dd7765c 100644
--- a/app/test/test_security_inline_proto.c
+++ b/app/test/test_security_inline_proto.c
@@ -969,18 +969,25 @@ test_ipsec_inline_sa_exp_event_callback(uint16_t port_id,
printf("Mismatch in event specific metadata\n");
return -1;
}
- if (event_desc->subtype == RTE_ETH_EVENT_IPSEC_SA_PKT_EXPIRY) {
+ switch (event_desc->subtype) {
+ case RTE_ETH_EVENT_IPSEC_SA_PKT_EXPIRY:
vector->event = RTE_ETH_EVENT_IPSEC_SA_PKT_EXPIRY;
- return 0;
- } else if (event_desc->subtype == RTE_ETH_EVENT_IPSEC_SA_BYTE_EXPIRY) {
+ break;
+ case RTE_ETH_EVENT_IPSEC_SA_BYTE_EXPIRY:
vector->event = RTE_ETH_EVENT_IPSEC_SA_BYTE_EXPIRY;
- return 0;
- } else if (event_desc->subtype >= RTE_ETH_EVENT_IPSEC_MAX) {
+ break;
+ case RTE_ETH_EVENT_IPSEC_SA_PKT_HARD_EXPIRY:
+ vector->event = RTE_ETH_EVENT_IPSEC_SA_PKT_HARD_EXPIRY;
+ break;
+ case RTE_ETH_EVENT_IPSEC_SA_BYTE_HARD_EXPIRY:
+ vector->event = RTE_ETH_EVENT_IPSEC_SA_BYTE_HARD_EXPIRY;
+ break;
+ default:
printf("Invalid IPsec event reported\n");
return -1;
}
- return -1;
+ return 0;
}
static enum rte_eth_event_ipsec_subtype
@@ -995,8 +1002,12 @@ test_ipsec_inline_setup_expiry_vector(struct sa_expiry_vector *vector,
vector->sa_data = (void *)tdata;
if (flags->sa_expiry_pkts_soft)
event = RTE_ETH_EVENT_IPSEC_SA_PKT_EXPIRY;
- else
+ else if (flags->sa_expiry_bytes_soft)
event = RTE_ETH_EVENT_IPSEC_SA_BYTE_EXPIRY;
+ else if (flags->sa_expiry_pkts_hard)
+ event = RTE_ETH_EVENT_IPSEC_SA_PKT_HARD_EXPIRY;
+ else
+ event = RTE_ETH_EVENT_IPSEC_SA_BYTE_HARD_EXPIRY;
rte_eth_dev_callback_register(port_id, RTE_ETH_EVENT_IPSEC,
test_ipsec_inline_sa_exp_event_callback, vector);
@@ -1024,7 +1035,8 @@ test_ipsec_inline_proto_process(struct ipsec_test_data *td,
memset(rx_pkts_burst, 0, sizeof(rx_pkts_burst[0]) * nb_pkts);
- if (flags->sa_expiry_pkts_soft || flags->sa_expiry_bytes_soft) {
+ if (flags->sa_expiry_pkts_soft || flags->sa_expiry_bytes_soft ||
+ flags->sa_expiry_pkts_hard || flags->sa_expiry_bytes_hard) {
if (td->ipsec_xform.direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS)
return TEST_SUCCESS;
event = test_ipsec_inline_setup_expiry_vector(&vector, flags, td);
@@ -1112,7 +1124,9 @@ test_ipsec_inline_proto_process(struct ipsec_test_data *td,
break;
} while (j++ < 5 || nb_rx == 0);
- if (nb_rx != nb_sent) {
+ if (!flags->sa_expiry_pkts_hard &&
+ !flags->sa_expiry_bytes_hard &&
+ (nb_rx != nb_sent)) {
printf("\nUnable to RX all %d packets, received(%i)",
nb_sent, nb_rx);
while (--nb_rx >= 0)
@@ -1147,7 +1161,8 @@ test_ipsec_inline_proto_process(struct ipsec_test_data *td,
out:
if (td->ipsec_xform.direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS)
destroy_default_flow(port_id);
- if (flags->sa_expiry_pkts_soft || flags->sa_expiry_bytes_soft) {
+ if (flags->sa_expiry_pkts_soft || flags->sa_expiry_bytes_soft ||
+ flags->sa_expiry_pkts_hard || flags->sa_expiry_bytes_hard) {
if (vector.notify_event && (vector.event == event))
ret = TEST_SUCCESS;
else
@@ -1174,6 +1189,7 @@ test_ipsec_inline_proto_all(const struct ipsec_test_flags *flags)
if (flags->iv_gen || flags->sa_expiry_pkts_soft ||
flags->sa_expiry_bytes_soft ||
+ flags->sa_expiry_bytes_hard ||
flags->sa_expiry_pkts_hard)
nb_pkts = IPSEC_TEST_PACKETS_MAX;
@@ -1210,6 +1226,13 @@ test_ipsec_inline_proto_all(const struct ipsec_test_flags *flags)
td_outb.ipsec_xform.life.bytes_soft_limit =
(((td_outb.output_text.len + RTE_ETHER_HDR_LEN)
* nb_pkts) >> 3) - 1;
+ if (flags->sa_expiry_pkts_hard)
+ td_outb.ipsec_xform.life.packets_hard_limit =
+ IPSEC_TEST_PACKETS_MAX - 1;
+ if (flags->sa_expiry_bytes_hard)
+ td_outb.ipsec_xform.life.bytes_hard_limit =
+ (((td_outb.output_text.len + RTE_ETHER_HDR_LEN)
+ * nb_pkts) >> 3) - 1;
ret = test_ipsec_inline_proto_process(&td_outb, &td_inb, nb_pkts,
false, flags);
@@ -2338,6 +2361,26 @@ test_ipsec_inline_proto_sa_byte_soft_expiry(const void *data __rte_unused)
return test_ipsec_inline_proto_all(&flags);
}
+static int
+test_ipsec_inline_proto_sa_pkt_hard_expiry(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags = {
+ .sa_expiry_pkts_hard = true
+ };
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
+static int
+test_ipsec_inline_proto_sa_byte_hard_expiry(const void *data __rte_unused)
+{
+ struct ipsec_test_flags flags = {
+ .sa_expiry_bytes_hard = true
+ };
+
+ return test_ipsec_inline_proto_all(&flags);
+}
+
static int
test_ipsec_inline_proto_known_vec_fragmented(const void *test_data)
{
@@ -2748,6 +2791,14 @@ static struct unit_test_suite inline_ipsec_testsuite = {
"SA soft expiry with byte limit",
ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
test_ipsec_inline_proto_sa_byte_soft_expiry),
+ TEST_CASE_NAMED_ST(
+ "SA hard expiry with packet limit",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_sa_pkt_hard_expiry),
+ TEST_CASE_NAMED_ST(
+ "SA hard expiry with byte limit",
+ ut_setup_inline_ipsec, ut_teardown_inline_ipsec,
+ test_ipsec_inline_proto_sa_byte_hard_expiry),
TEST_CASE_NAMED_WITH_DATA(
"Antireplay with window size 1024",
--
2.25.1
* RE: [EXT] Re: [PATCH v5 1/3] ethdev: add IPsec SA expiry event subtypes
2022-09-26 14:02 ` Thomas Monjalon
@ 2022-09-27 18:44 ` Akhil Goyal
0 siblings, 0 replies; 184+ messages in thread
From: Akhil Goyal @ 2022-09-27 18:44 UTC (permalink / raw)
To: Thomas Monjalon
Cc: dev, david.marchand, hemant.agrawal, Vamsi Krishna Attunuru,
ferruh.yigit, andrew.rybchenko, konstantin.v.ananyev,
Jerin Jacob Kollanukkaran, Ankur Dwivedi, Anoob Joseph,
Nithin Kumar Dabilpuram
> 24/09/2022 15:57, Akhil Goyal:
> > From: Vamsi Attunuru <vattunuru@marvell.com>
> >
> > Patch adds new event subtypes for notifying expiry
> > events upon reaching IPsec SA soft packet expiry and
> > hard packet/byte expiry limits.
> >
> > Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
> > Signed-off-by: Akhil Goyal <gakhil@marvell.com>
> > ---
> > --- a/lib/ethdev/rte_ethdev.h
> > +++ b/lib/ethdev/rte_ethdev.h
> > @@ -3875,8 +3875,26 @@ enum rte_eth_event_ipsec_subtype {
> > RTE_ETH_EVENT_IPSEC_ESN_OVERFLOW,
> > /** Soft time expiry of SA */
> > RTE_ETH_EVENT_IPSEC_SA_TIME_EXPIRY,
> > - /** Soft byte expiry of SA */
> > + /**
> > + * Soft byte expiry of SA determined by @ref bytes_soft_limit
> > + * defined in @ref rte_security_ipsec_lifetime
> > + */
> > RTE_ETH_EVENT_IPSEC_SA_BYTE_EXPIRY,
> > + /**
> > + * Soft packet expiry of SA determined by @ref packets_soft_limit
> > + * defined in @ref rte_security_ipsec_lifetime
> > + */
> > + RTE_ETH_EVENT_IPSEC_SA_PKT_EXPIRY,
> > + /**
> > + * Hard byte expiry of SA determined by @ref bytes_hard_limit
> > + * defined in @ref rte_security_ipsec_lifetime
> > + */
> > + RTE_ETH_EVENT_IPSEC_SA_BYTE_HARD_EXPIRY,
> > + /**
> > + * Hard packet expiry of SA determined by @ref packets_hard_limit
> > + * defined in @ref rte_security_ipsec_lifetime
> > + */
> > + RTE_ETH_EVENT_IPSEC_SA_PKT_HARD_EXPIRY,
> > /** Max value of this enum */
> > RTE_ETH_EVENT_IPSEC_MAX
>
> I would prefer we remove this MAX value, but it would be another patch.
>
> Acked-by: Thomas Monjalon <thomas@monjalon.net>
>
> You can merge this patch in the crypto tree.
>
Series Applied to dpdk-next-crypto
end of thread, other threads:[~2022-09-27 18:44 UTC | newest]
Thread overview: 184+ messages
-- links below jump to the message on this page --
2021-08-23 10:02 [dpdk-dev] [PATCH] RFC: ethdev: add reassembly offload Akhil Goyal
2021-08-23 10:18 ` Andrew Rybchenko
2021-08-29 13:14 ` [dpdk-dev] [EXT] " Akhil Goyal
2021-09-21 19:59 ` Thomas Monjalon
2021-09-07 8:47 ` [dpdk-dev] " Ferruh Yigit
2021-09-08 10:29 ` [dpdk-dev] [EXT] " Anoob Joseph
2021-09-13 6:56 ` Xu, Rosen
2021-09-13 7:22 ` Andrew Rybchenko
2021-09-14 5:14 ` Anoob Joseph
2021-09-08 6:34 ` [dpdk-dev] " Xu, Rosen
2021-09-08 6:36 ` Xu, Rosen
2022-01-03 15:08 ` [PATCH 0/8] ethdev: introduce IP " Akhil Goyal
2022-01-03 15:08 ` [PATCH 1/8] " Akhil Goyal
2022-01-11 16:03 ` Ananyev, Konstantin
2022-01-22 7:38 ` Andrew Rybchenko
2022-01-30 16:53 ` [EXT] " Akhil Goyal
2022-01-03 15:08 ` [PATCH 2/8] ethdev: add dev op for IP reassembly configuration Akhil Goyal
2022-01-11 16:09 ` Ananyev, Konstantin
2022-01-11 18:54 ` Akhil Goyal
2022-01-12 10:22 ` Ananyev, Konstantin
2022-01-12 10:32 ` Akhil Goyal
2022-01-12 10:48 ` Ananyev, Konstantin
2022-01-12 11:06 ` Akhil Goyal
2022-01-13 13:31 ` Akhil Goyal
2022-01-13 14:41 ` Ananyev, Konstantin
2022-01-03 15:08 ` [PATCH 3/8] ethdev: add mbuf dynfield for incomplete IP reassembly Akhil Goyal
2022-01-11 17:04 ` Ananyev, Konstantin
2022-01-11 18:44 ` Akhil Goyal
2022-01-12 10:30 ` Ananyev, Konstantin
2022-01-12 10:59 ` Akhil Goyal
2022-01-13 22:29 ` Ananyev, Konstantin
2022-01-13 13:18 ` Akhil Goyal
2022-01-13 14:36 ` Ananyev, Konstantin
2022-01-13 15:04 ` Akhil Goyal
2022-01-03 15:08 ` [PATCH 4/8] security: add IPsec option for " Akhil Goyal
2022-01-03 15:08 ` [PATCH 5/8] app/test: add unit cases for inline IPsec offload Akhil Goyal
2022-01-20 16:48 ` [PATCH v2 0/4] app/test: add inline IPsec and reassembly cases Akhil Goyal
2022-01-20 16:48 ` [PATCH v2 1/4] app/test: add unit cases for inline IPsec offload Akhil Goyal
2022-01-20 16:48 ` [PATCH v2 2/4] app/test: add IP reassembly case with no frags Akhil Goyal
2022-01-20 16:48 ` [PATCH v2 3/4] app/test: add IP reassembly cases with multiple fragments Akhil Goyal
2022-01-20 16:48 ` [PATCH v2 4/4] app/test: add IP reassembly negative cases Akhil Goyal
2022-02-17 17:23 ` [PATCH v3 0/4] app/test: add inline IPsec and reassembly cases Akhil Goyal
2022-02-17 17:23 ` [PATCH v3 1/4] app/test: add unit cases for inline IPsec offload Akhil Goyal
2022-02-17 17:23 ` [PATCH v3 2/4] app/test: add IP reassembly case with no frags Akhil Goyal
2022-02-17 17:23 ` [PATCH v3 3/4] app/test: add IP reassembly cases with multiple fragments Akhil Goyal
2022-02-17 17:23 ` [PATCH v3 4/4] app/test: add IP reassembly negative cases Akhil Goyal
2022-04-16 19:25 ` [PATCH v4 00/10] app/test: add inline IPsec and reassembly cases Akhil Goyal
2022-04-16 19:25 ` [PATCH v4 01/10] app/test: add unit cases for inline IPsec offload Akhil Goyal
2022-04-16 19:25 ` [PATCH v4 02/10] test/security: add inline inbound IPsec cases Akhil Goyal
2022-04-16 19:25 ` [PATCH v4 03/10] test/security: add combined mode inline " Akhil Goyal
2022-04-16 19:25 ` [PATCH v4 04/10] test/security: add inline IPsec reassembly cases Akhil Goyal
2022-04-16 19:25 ` [PATCH v4 05/10] test/security: add more inline IPsec functional cases Akhil Goyal
2022-04-16 19:25 ` [PATCH v4 06/10] test/security: add ESN and anti-replay cases for inline Akhil Goyal
2022-04-16 19:25 ` [PATCH v4 07/10] ethdev: add IPsec SA expiry event subtypes Akhil Goyal
2022-04-19 8:58 ` Thomas Monjalon
2022-04-19 10:14 ` [EXT] " Akhil Goyal
2022-04-19 10:19 ` Anoob Joseph
2022-04-19 10:37 ` Thomas Monjalon
2022-04-19 10:39 ` Anoob Joseph
2022-04-19 10:47 ` Thomas Monjalon
2022-04-19 12:27 ` Akhil Goyal
2022-04-19 15:41 ` Ray Kinsella
2022-04-20 13:51 ` Akhil Goyal
2022-09-24 13:57 ` [PATCH v5 0/3] Add and test IPsec SA expiry events Akhil Goyal
2022-09-24 13:57 ` [PATCH v5 1/3] ethdev: add IPsec SA expiry event subtypes Akhil Goyal
2022-09-24 14:02 ` Akhil Goyal
2022-09-26 14:02 ` Thomas Monjalon
2022-09-27 18:44 ` [EXT] " Akhil Goyal
2022-09-24 13:57 ` [PATCH v5 2/3] test/security: add inline IPsec SA soft expiry cases Akhil Goyal
2022-09-24 13:57 ` [PATCH v5 3/3] test/security: add inline IPsec SA hard " Akhil Goyal
2022-09-26 17:07 ` [PATCH v6 0/3] Add and test IPsec SA expiry events Akhil Goyal
2022-09-26 17:07 ` [PATCH v6 1/3] ethdev: add IPsec SA expiry event subtypes Akhil Goyal
2022-09-26 17:07 ` [PATCH v6 2/3] test/security: add inline IPsec SA soft expiry cases Akhil Goyal
2022-09-26 17:07 ` [PATCH v6 3/3] test/security: add inline IPsec SA hard " Akhil Goyal
2022-04-16 19:25 ` [PATCH v4 08/10] test/security: add inline IPsec SA soft " Akhil Goyal
2022-04-16 19:25 ` [PATCH v4 09/10] test/security: add inline IPsec SA hard " Akhil Goyal
2022-04-16 19:25 ` [PATCH v4 10/10] test/security: add inline IPsec IPv6 flow label cases Akhil Goyal
2022-04-18 3:44 ` Anoob Joseph
2022-04-18 3:55 ` Akhil Goyal
2022-04-25 12:38 ` [PATCH v4 00/10] app/test: add inline IPsec and reassembly cases Poczatek, Jakub
2022-04-27 15:10 ` [PATCH v5 0/7] " Akhil Goyal
2022-04-27 15:10 ` [PATCH v5 1/7] app/test: add unit cases for inline IPsec offload Akhil Goyal
2022-04-27 15:44 ` Zhang, Roy Fan
2022-04-27 15:10 ` [PATCH v5 2/7] test/security: add inline inbound IPsec cases Akhil Goyal
2022-04-27 15:44 ` Zhang, Roy Fan
2022-04-27 15:10 ` [PATCH v5 3/7] test/security: add combined mode inline " Akhil Goyal
2022-04-27 15:45 ` Zhang, Roy Fan
2022-04-27 15:10 ` [PATCH v5 4/7] test/security: add inline IPsec reassembly cases Akhil Goyal
2022-04-27 15:45 ` Zhang, Roy Fan
2022-04-27 15:10 ` [PATCH v5 5/7] test/security: add more inline IPsec functional cases Akhil Goyal
2022-04-27 15:46 ` Zhang, Roy Fan
2022-04-27 15:10 ` [PATCH v5 6/7] test/security: add ESN and anti-replay cases for inline Akhil Goyal
2022-04-27 15:46 ` Zhang, Roy Fan
2022-04-28 5:25 ` Anoob Joseph
2022-04-27 15:10 ` [PATCH v5 7/7] test/security: add inline IPsec IPv6 flow label cases Akhil Goyal
2022-04-27 15:46 ` Zhang, Roy Fan
2022-04-27 15:42 ` [PATCH v5 0/7] app/test: add inline IPsec and reassembly cases Zhang, Roy Fan
2022-05-13 7:31 ` [PATCH v6 " Akhil Goyal
2022-05-13 7:31 ` [PATCH v6 1/7] app/test: add unit cases for inline IPsec offload Akhil Goyal
2022-05-13 7:31 ` [PATCH v6 2/7] test/security: add inline inbound IPsec cases Akhil Goyal
2022-05-13 7:31 ` [PATCH v6 3/7] test/security: add combined mode inline " Akhil Goyal
2022-05-13 7:31 ` [PATCH v6 4/7] test/security: add inline IPsec reassembly cases Akhil Goyal
2022-05-13 7:31 ` [PATCH v6 5/7] test/security: add more inline IPsec functional cases Akhil Goyal
2022-05-13 7:32 ` [PATCH v6 6/7] test/security: add ESN and anti-replay cases for inline Akhil Goyal
2022-05-13 7:32 ` [PATCH v6 7/7] test/security: add inline IPsec IPv6 flow label cases Akhil Goyal
2022-05-24 7:22 ` [PATCH v7 0/7] app/test: add inline IPsec and reassembly cases Akhil Goyal
2022-05-24 7:22 ` [PATCH v7 1/7] app/test: add unit cases for inline IPsec offload Akhil Goyal
2022-05-24 7:22 ` [PATCH v7 2/7] test/security: add inline inbound IPsec cases Akhil Goyal
2022-05-24 7:22 ` [PATCH v7 3/7] test/security: add combined mode inline " Akhil Goyal
2022-05-24 7:22 ` [PATCH v7 4/7] test/security: add inline IPsec reassembly cases Akhil Goyal
2022-05-24 7:22 ` [PATCH v7 5/7] test/security: add more inline IPsec functional cases Akhil Goyal
2022-05-24 7:22 ` [PATCH v7 6/7] test/security: add ESN and anti-replay cases for inline Akhil Goyal
2022-05-24 7:22 ` [PATCH v7 7/7] test/security: add inline IPsec IPv6 flow label cases Akhil Goyal
2022-05-24 8:05 ` [PATCH v7 0/7] app/test: add inline IPsec and reassembly cases Anoob Joseph
2022-05-24 9:38 ` Akhil Goyal
2022-01-03 15:08 ` [PATCH 6/8] app/test: add IP reassembly case with no frags Akhil Goyal
2022-01-03 15:08 ` [PATCH 7/8] app/test: add IP reassembly cases with multiple fragments Akhil Goyal
2022-01-03 15:08 ` [PATCH 8/8] app/test: add IP reassembly negative cases Akhil Goyal
2022-01-06 9:51 ` [PATCH 0/8] ethdev: introduce IP reassembly offload David Marchand
2022-01-06 9:54 ` [EXT] " Akhil Goyal
2022-01-20 16:26 ` [PATCH v2 0/4] " Akhil Goyal
2022-01-20 16:26 ` [PATCH v2 1/4] " Akhil Goyal
2022-01-20 16:45 ` Stephen Hemminger
2022-01-20 17:11 ` [EXT] " Akhil Goyal
2022-01-20 16:26 ` [PATCH v2 2/4] ethdev: add dev op to set/get IP reassembly configuration Akhil Goyal
2022-01-22 8:17 ` Andrew Rybchenko
2022-01-30 16:30 ` [EXT] " Akhil Goyal
2022-01-20 16:26 ` [PATCH v2 3/4] ethdev: add mbuf dynfield for incomplete IP reassembly Akhil Goyal
2022-01-20 16:26 ` [PATCH v2 4/4] security: add IPsec option for " Akhil Goyal
2022-01-30 17:59 ` [PATCH v3 0/4] ethdev: introduce IP reassembly offload Akhil Goyal
2022-01-30 17:59 ` [PATCH v3 1/4] " Akhil Goyal
2022-02-01 14:11 ` Ferruh Yigit
2022-02-02 10:57 ` [EXT] " Akhil Goyal
2022-02-02 14:05 ` Ferruh Yigit
2022-01-30 17:59 ` [PATCH v3 2/4] ethdev: add dev op to set/get IP reassembly configuration Akhil Goyal
2022-01-30 17:59 ` [PATCH v3 3/4] ethdev: add mbuf dynfield for incomplete IP reassembly Akhil Goyal
2022-02-01 14:11 ` Ferruh Yigit
2022-02-02 9:13 ` [EXT] " Akhil Goyal
2022-01-30 17:59 ` [PATCH v3 4/4] security: add IPsec option for " Akhil Goyal
2022-02-01 14:12 ` Ferruh Yigit
2022-02-02 9:15 ` [EXT] " Akhil Goyal
2022-02-02 14:04 ` Ferruh Yigit
2022-02-01 14:10 ` [PATCH v3 0/4] ethdev: introduce IP reassembly offload Ferruh Yigit
2022-02-02 9:05 ` [EXT] " Akhil Goyal
2022-02-04 22:13 ` [PATCH v4 0/3] " Akhil Goyal
2022-02-04 22:13 ` [PATCH v4 1/3] " Akhil Goyal
2022-02-04 22:20 ` Akhil Goyal
2022-02-07 13:53 ` Ferruh Yigit
2022-02-07 14:36 ` [EXT] " Akhil Goyal
2022-02-04 22:13 ` [PATCH v4 2/3] ethdev: add mbuf dynfield for incomplete IP reassembly Akhil Goyal
2022-02-07 13:58 ` Ferruh Yigit
2022-02-07 14:20 ` [EXT] " Akhil Goyal
2022-02-07 14:56 ` Ferruh Yigit
2022-02-07 16:20 ` Akhil Goyal
2022-02-07 16:41 ` Ferruh Yigit
2022-02-07 17:17 ` Akhil Goyal
2022-02-07 17:23 ` Stephen Hemminger
2022-02-07 17:28 ` Ferruh Yigit
2022-02-07 18:01 ` Stephen Hemminger
2022-02-07 18:28 ` [EXT] " Akhil Goyal
2022-02-07 19:08 ` Stephen Hemminger
2022-02-07 17:29 ` Akhil Goyal
2022-02-04 22:13 ` [PATCH v4 3/3] security: add IPsec option for " Akhil Goyal
2022-02-08 9:01 ` David Marchand
2022-02-08 9:18 ` [EXT] " Akhil Goyal
2022-02-08 9:27 ` David Marchand
2022-02-08 10:45 ` Akhil Goyal
2022-02-08 13:19 ` Akhil Goyal
2022-02-08 19:55 ` David Marchand
2022-02-08 20:01 ` Akhil Goyal
2022-02-08 20:11 ` [PATCH v5 0/3] ethdev: introduce IP reassembly offload Akhil Goyal
2022-02-08 20:11 ` [PATCH v5 1/3] " Akhil Goyal
2022-02-08 20:11 ` [PATCH v5 2/3] ethdev: add mbuf dynfield for incomplete IP reassembly Akhil Goyal
2022-02-08 20:11 ` [PATCH v5 3/3] security: add IPsec option for " Akhil Goyal
2022-02-08 22:20 ` [PATCH v6 0/3] ethdev: introduce IP reassembly offload Akhil Goyal
2022-02-08 22:20 ` [PATCH v6 1/3] " Akhil Goyal
2022-02-10 8:54 ` Ferruh Yigit
2022-02-10 10:08 ` Andrew Rybchenko
2022-02-10 10:20 ` Ferruh Yigit
2022-02-10 10:30 ` Ferruh Yigit
2022-02-08 22:20 ` [PATCH v6 2/3] ethdev: add mbuf dynfield for incomplete IP reassembly Akhil Goyal
2022-02-10 8:54 ` Ferruh Yigit
2022-02-08 22:20 ` [PATCH v6 3/3] security: add IPsec option for " Akhil Goyal
2022-02-10 8:54 ` [PATCH v6 0/3] ethdev: introduce IP reassembly offload Ferruh Yigit