* [dpdk-dev] [PATCH 00/14] add eventmode to ipsec-secgw
@ 2019-12-08 12:30 Anoob Joseph
  2019-12-08 12:30 ` [dpdk-dev] [PATCH 01/14] examples/ipsec-secgw: add default rte_flow for inline Rx Anoob Joseph
  ` (14 more replies)
  0 siblings, 15 replies; 147+ messages in thread
From: Anoob Joseph @ 2019-12-08 12:30 UTC (permalink / raw)
  To: Akhil Goyal, Radu Nicolau, Thomas Monjalon
  Cc: Anoob Joseph, Jerin Jacob, Narayana Prasad, Ankur Dwivedi,
	Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru, Lukasz Bartosik,
	Konstantin Ananyev, dev

This series introduces event-mode additions to ipsec-secgw. This effort is
based on the proposed changes for l2fwd-event and the additions in l3fwd
for event support.

With this series, ipsec-secgw can run in event mode. The worker thread
(executing loop) receives events and submits them back to the eventdev
after processing. This way, multicore scaling and h/w assisted scheduling
are achieved by making use of the eventdev capabilities.

Since underlying event devices have varying capabilities, the worker
thread can be drafted differently to maximize performance. This series
introduces usage of multiple worker threads, among which the one to be
used is determined by the operating conditions and the underlying device
capabilities. For example, if an event device - eth device pair has a Tx
internal port, then the application can do tx_adapter_enqueue() instead of
a regular event_enqueue(). So a thread that assumes the device pair has an
internal port will not be the right solution for another pair. The
infrastructure added with these patches aims to help the application have
multiple worker threads, thereby extracting maximum performance from every
device without affecting existing paths/use cases.

The eventmode configuration is predefined. All packets reaching one eth
port will hit one event queue. All event queues will be mapped to all
event ports.
So all cores will be able to receive traffic from all ports. When
schedule_type is set as RTE_SCHED_TYPE_ORDERED/ATOMIC, the event device
will ensure the ordering. Ordering would be lost when tried in PARALLEL.

The following command line options are introduced:

--transfer-mode: to choose between poll mode & event mode
--schedule-type: to specify the scheduling type
                 (RTE_SCHED_TYPE_ORDERED / RTE_SCHED_TYPE_ATOMIC /
                  RTE_SCHED_TYPE_PARALLEL)
--process-dir:   outbound/inbound
--process-mode:  app mode / driver mode

The two s/w config options added to ipsec-secgw can be used in
benchmarking h/w performance:

1. process-dir: states whether the direction is outbound/inbound. This
   option aims to avoid an unnecessary check of determining whether
   inbound/outbound processing needs to be done on the packet. For each
   option a different lightweight worker thread would be executed.

2. process-mode: states whether the application has to run in driver mode
   or app mode.

   Driver mode: This mode will have bare minimum changes in the
   application to support ipsec. There wouldn't be any lookup etc. done in
   the application. And for the inline-protocol use case, the thread would
   resemble l2fwd, as the ipsec processing would be done entirely in the
   h/w. This mode can be used to benchmark the raw performance of the h/w.
   All the application side steps (like lookup) can be redone based on the
   requirement of the end user. Hence the need for a mode which would
   report the raw performance.

   App mode: This mode will have all the features currently implemented
   with ipsec-secgw (non librte_ipsec mode). All the lookups etc. would
   follow the existing methods and would report numbers that can be
   compared against regular ipsec-secgw benchmark numbers.
Example commands to execute ipsec-secgw in various modes on the OCTEONTX2
platform:

#Inbound driver mode
./ipsec-secgw -w 0002:02:00.0,nb_ipsec_in_sa=128 -w 0002:03:00.0,nb_ipsec_in_sa=128 \
-w 0002:04:00.0,nb_ipsec_in_sa=128 -w 0002:07:00.0,nb_ipsec_in_sa=128 \
-w 0002:0e:00.0 -w 0002:10:00.1 --log-level=8 -c 0x7 -- -P -p 0xf \
--config "(0,0,0),(1,0,0),(2,0,0),(3,0,0)" -f dpdk_internal/100g_4.3.cfg \
--transfer-mode 1 --schedule-type 2 --process-mode drv --process-dir in

#Inbound app mode
./ipsec-secgw -w 0002:02:00.0,nb_ipsec_in_sa=128 -w 0002:03:00.0,nb_ipsec_in_sa=128 \
-w 0002:04:00.0,nb_ipsec_in_sa=128 -w 0002:07:00.0,nb_ipsec_in_sa=128 \
-w 0002:0e:00.0 -w 0002:10:00.1 --log-level=8 -c 0x3f -- -P -p 0xf \
--config "(0,0,0),(1,0,0),(2,0,0),(3,0,0)" -f dpdk_internal/100g_4.3.cfg \
--transfer-mode 1 --schedule-type 2 --process-mode app --process-dir in

#Outbound driver mode
./ipsec-secgw -w 0002:02:00.0 -w 0002:03:00.0 -w 0002:04:00.0 \
-w 0002:07:00.0 -w 0002:0e:00.0 -w 0002:10:00.1 --log-level=8 -c 0x1f -- \
-P -p 0xf --config "(0,0,0),(1,0,0),(2,0,0),(3,0,0)" -f a-aes-gcm-new.cfg \
--transfer-mode 1 --schedule-type 2 --process-mode drv --process-dir out

#Outbound app mode
./ipsec-secgw -w 0002:02:00.0 -w 0002:03:00.0 -w 0002:04:00.0 \
-w 0002:07:00.0 -w 0002:0e:00.0 -w 0002:10:00.1 --log-level=8 -c 0x7f -- \
-P -p 0xf --config "(0,0,0),(1,0,0),(2,0,0),(3,0,0)" -f a-aes-gcm-new.cfg \
--transfer-mode 1 --schedule-type 2 --process-mode app --process-dir out

This series doesn't introduce any library change. And the decision to add
the eventmode additions in ipsec-secgw was approved by the Tech Board.

This series adds non-burst Tx internal port workers only. It provides
infrastructure for non internal port workers, however it does not define
any. Also, only inline ipsec mode is supported by the worker threads
added.

Following are the planned features:
1. Add burst mode workers.
2. Add non internal port workers.
3. Verify support for Rx core (the support is added but lack of h/w to
   verify).
4.
Add lookaside protocol support.

Following are features that Marvell won't be attempting:
1. Inline crypto support.
2. Lookaside crypto support.

For the features that Marvell won't be attempting, new workers can be
introduced by the respective stakeholders.

This series is tested on Marvell OCTEONTX2.

Ankur Dwivedi (3):
  examples/ipsec-secgw: add default rte_flow for inline Rx
  examples/ipsec-secgw: add driver outbound worker
  examples/ipsec-secgw: add app outbound worker

Anoob Joseph (5):
  examples/ipsec-secgw: add framework for eventmode helper
  examples/ipsec-secgw: add eventdev port-lcore link
  examples/ipsec-secgw: add Rx adapter support
  examples/ipsec-secgw: add Tx adapter support
  examples/ipsec-secgw: add routines to display config

Lukasz Bartosik (6):
  examples/ipsec-secgw: add routines to launch workers
  examples/ipsec-secgw: add support for internal ports
  examples/ipsec-secgw: add eventmode to ipsec-secgw
  examples/ipsec-secgw: add app inbound worker
  examples/ipsec-secgw: add app processing code
  examples/ipsec-secgw: add cmd line option for bufs

 examples/ipsec-secgw/Makefile       |    2 +
 examples/ipsec-secgw/event_helper.c | 1742 +++++++++++++++++++++++++++++++++++
 examples/ipsec-secgw/event_helper.h |  324 +++++++
 examples/ipsec-secgw/ipsec-secgw.c  |  533 +++++++++--
 examples/ipsec-secgw/ipsec-secgw.h  |   81 ++
 examples/ipsec-secgw/ipsec.c        |   17 +
 examples/ipsec-secgw/ipsec.h        |   36 +-
 examples/ipsec-secgw/ipsec_worker.c |  766 +++++++++++++++
 examples/ipsec-secgw/ipsec_worker.h |   39 +
 examples/ipsec-secgw/meson.build    |    4 +-
 examples/ipsec-secgw/sa.c           |   11 -
 11 files changed, 3446 insertions(+), 109 deletions(-)
 create mode 100644 examples/ipsec-secgw/event_helper.c
 create mode 100644 examples/ipsec-secgw/event_helper.h
 create mode 100644 examples/ipsec-secgw/ipsec-secgw.h
 create mode 100644 examples/ipsec-secgw/ipsec_worker.c
 create mode 100644 examples/ipsec-secgw/ipsec_worker.h

-- 
2.7.4

^ permalink raw reply	[flat|nested] 147+ messages in thread
* [dpdk-dev] [PATCH 01/14] examples/ipsec-secgw: add default rte_flow for inline Rx
  2019-12-08 12:30 [dpdk-dev] [PATCH 00/14] add eventmode to ipsec-secgw Anoob Joseph
@ 2019-12-08 12:30 ` Anoob Joseph
  2019-12-16 14:20   ` Ananyev, Konstantin
  2019-12-08 12:30 ` [dpdk-dev] [PATCH 02/14] examples/ipsec-secgw: add framework for eventmode helper Anoob Joseph
  ` (13 subsequent siblings)
  14 siblings, 1 reply; 147+ messages in thread
From: Anoob Joseph @ 2019-12-08 12:30 UTC (permalink / raw)
  To: Akhil Goyal, Radu Nicolau, Thomas Monjalon
  Cc: Ankur Dwivedi, Jerin Jacob, Narayana Prasad, Anoob Joseph,
	Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru, Lukasz Bartosik,
	Konstantin Ananyev, dev

From: Ankur Dwivedi <adwivedi@marvell.com>

The default flow created would enable security processing on all ESP
packets. If the default flow is created, SA based rte_flow creation
would be skipped.

Signed-off-by: Ankur Dwivedi <adwivedi@marvell.com>
Signed-off-by: Anoob Joseph <anoobj@marvell.com>
---
 examples/ipsec-secgw/ipsec-secgw.c | 56 ++++++++++++++++++++++++++++++++++++++
 examples/ipsec-secgw/ipsec.c       |  8 ++++++
 examples/ipsec-secgw/ipsec.h       |  6 ++++
 3 files changed, 70 insertions(+)

diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c
index 3b5aaf6..7506922 100644
--- a/examples/ipsec-secgw/ipsec-secgw.c
+++ b/examples/ipsec-secgw/ipsec-secgw.c
@@ -128,6 +128,8 @@ struct ethaddr_info ethaddr_tbl[RTE_MAX_ETHPORTS] = {
 	{ 0, ETHADDR(0x00, 0x16, 0x3e, 0x49, 0x9e, 0xdd) }
 };
 
+struct flow_info flow_info_tbl[RTE_MAX_ETHPORTS];
+
 #define CMD_LINE_OPT_CONFIG		"config"
 #define CMD_LINE_OPT_SINGLE_SA		"single-sa"
 #define CMD_LINE_OPT_CRYPTODEV_MASK	"cryptodev_mask"
@@ -2406,6 +2408,55 @@ reassemble_init(void)
 	return rc;
 }
 
+static int
+create_default_ipsec_flow(uint16_t port_id, uint64_t rx_offloads)
+{
+	int ret = 0;
+
+	/* Add the default ipsec flow to detect all ESP packets for rx */
+	if (rx_offloads & DEV_RX_OFFLOAD_SECURITY) {
+		struct rte_flow_action
action[2];
+		struct rte_flow_item pattern[2];
+		struct rte_flow_attr attr = {0};
+		struct rte_flow_error err;
+		struct rte_flow *flow;
+
+		pattern[0].type = RTE_FLOW_ITEM_TYPE_ESP;
+		pattern[0].spec = NULL;
+		pattern[0].mask = NULL;
+		pattern[0].last = NULL;
+		pattern[1].type = RTE_FLOW_ITEM_TYPE_END;
+
+		action[0].type = RTE_FLOW_ACTION_TYPE_SECURITY;
+		action[0].conf = NULL;
+		action[1].type = RTE_FLOW_ACTION_TYPE_END;
+		action[1].conf = NULL;
+
+		attr.egress = 0;
+		attr.ingress = 1;
+
+		ret = rte_flow_validate(port_id, &attr, pattern, action, &err);
+		if (ret) {
+			RTE_LOG(ERR, IPSEC,
+				"Failed to validate ipsec flow %s\n",
+				err.message);
+			goto exit;
+		}
+
+		flow = rte_flow_create(port_id, &attr, pattern, action, &err);
+		if (flow == NULL) {
+			RTE_LOG(ERR, IPSEC,
+				"Failed to create ipsec flow %s\n",
+				err.message);
+			ret = -rte_errno;
+			goto exit;
+		}
+		flow_info_tbl[port_id].rx_def_flow = flow;
+	}
+exit:
+	return ret;
+}
+
 int32_t
 main(int32_t argc, char **argv)
 {
@@ -2478,6 +2529,11 @@ main(int32_t argc, char **argv)
 
 		sa_check_offloads(portid, &req_rx_offloads, &req_tx_offloads);
 		port_init(portid, req_rx_offloads, req_tx_offloads);
+		/* Create default ipsec flow for the ethernet device */
+		ret = create_default_ipsec_flow(portid, req_rx_offloads);
+		if (ret)
+			printf("Cannot create default flow, err=%d, port=%d\n",
+			       ret, portid);
 	}
 
 	cryptodevs_init();
diff --git a/examples/ipsec-secgw/ipsec.c b/examples/ipsec-secgw/ipsec.c
index d4b5712..e529f68 100644
--- a/examples/ipsec-secgw/ipsec.c
+++ b/examples/ipsec-secgw/ipsec.c
@@ -261,6 +261,12 @@ create_inline_session(struct socket_ctx *skt_ctx, struct ipsec_sa *sa,
 	unsigned int i;
 	unsigned int j;
 
+	/*
+	 * Don't create flow if default flow is already created
+	 */
+	if (flow_info_tbl[sa->portid].rx_def_flow)
+		goto set_cdev_id;
+
 	ret = rte_eth_dev_info_get(sa->portid, &dev_info);
 	if (ret != 0) {
 		RTE_LOG(ERR, IPSEC,
@@ -396,6 +402,8 @@ create_inline_session(struct socket_ctx *skt_ctx, struct ipsec_sa *sa,
 	ips->security.ol_flags = sec_cap->ol_flags;
 	ips->security.ctx = sec_ctx;
 	}
+
+set_cdev_id:
 	sa->cdev_id_qp = 0;
 
 	return 0;
diff --git a/examples/ipsec-secgw/ipsec.h b/examples/ipsec-secgw/ipsec.h
index 8e07521..28ff07d 100644
--- a/examples/ipsec-secgw/ipsec.h
+++ b/examples/ipsec-secgw/ipsec.h
@@ -81,6 +81,12 @@ struct app_sa_prm {
 
 extern struct app_sa_prm app_sa_prm;
 
+struct flow_info {
+	struct rte_flow *rx_def_flow;
+};
+
+extern struct flow_info flow_info_tbl[RTE_MAX_ETHPORTS];
+
 enum {
 	IPSEC_SESSION_PRIMARY = 0,
 	IPSEC_SESSION_FALLBACK = 1,
-- 
2.7.4

^ permalink raw reply	[flat|nested] 147+ messages in thread
* Re: [dpdk-dev] [PATCH 01/14] examples/ipsec-secgw: add default rte_flow for inline Rx
  2019-12-08 12:30 ` [dpdk-dev] [PATCH 01/14] examples/ipsec-secgw: add default rte_flow for inline Rx Anoob Joseph
@ 2019-12-16 14:20   ` Ananyev, Konstantin
  2019-12-16 15:58     ` Anoob Joseph
  0 siblings, 1 reply; 147+ messages in thread
From: Ananyev, Konstantin @ 2019-12-16 14:20 UTC (permalink / raw)
  To: Anoob Joseph, Akhil Goyal, Nicolau, Radu, Thomas Monjalon
  Cc: Ankur Dwivedi, Jerin Jacob, Narayana Prasad, Archana Muniganti,
	Tejasree Kondoj, Vamsi Attunuru, Lukasz Bartosik, dev

> From: Ankur Dwivedi <adwivedi@marvell.com>
>
> The default flow created would enable security processing on all ESP
> packets. If the default flow is created, SA based rte_flow creation
> would be skipped.

I suppose that one depends on:
http://patches.dpdk.org/patch/63621/
http://patches.dpdk.org/cover/63625/
to work as expected?
If so probably worth to mention in that header or in cover letter (or both).

>
> Signed-off-by: Ankur Dwivedi <adwivedi@marvell.com>
> Signed-off-by: Anoob Joseph <anoobj@marvell.com>
> ---
>  examples/ipsec-secgw/ipsec-secgw.c | 56 ++++++++++++++++++++++++++++++++++++++
>  examples/ipsec-secgw/ipsec.c       |  8 ++++++
>  examples/ipsec-secgw/ipsec.h       |  6 ++++
>  3 files changed, 70 insertions(+)
>
> diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c
> index 3b5aaf6..7506922 100644
> --- a/examples/ipsec-secgw/ipsec-secgw.c
> +++ b/examples/ipsec-secgw/ipsec-secgw.c
> @@ -128,6 +128,8 @@ struct ethaddr_info ethaddr_tbl[RTE_MAX_ETHPORTS] = {
>  	{ 0, ETHADDR(0x00, 0x16, 0x3e, 0x49, 0x9e, 0xdd) }
>  };
>
> +struct flow_info flow_info_tbl[RTE_MAX_ETHPORTS];

Need to be initialized with zeroes somewhere.
> +
>  #define CMD_LINE_OPT_CONFIG		"config"
>  #define CMD_LINE_OPT_SINGLE_SA		"single-sa"
>  #define CMD_LINE_OPT_CRYPTODEV_MASK	"cryptodev_mask"
> @@ -2406,6 +2408,55 @@ reassemble_init(void)
>  	return rc;
>  }
>
> +static int
> +create_default_ipsec_flow(uint16_t port_id, uint64_t rx_offloads)
> +{
> +	int ret = 0;
> +
> +	/* Add the default ipsec flow to detect all ESP packets for rx */
> +	if (rx_offloads & DEV_RX_OFFLOAD_SECURITY) {
> +		struct rte_flow_action action[2];
> +		struct rte_flow_item pattern[2];
> +		struct rte_flow_attr attr = {0};
> +		struct rte_flow_error err;
> +		struct rte_flow *flow;
> +
> +		pattern[0].type = RTE_FLOW_ITEM_TYPE_ESP;
> +		pattern[0].spec = NULL;
> +		pattern[0].mask = NULL;
> +		pattern[0].last = NULL;
> +		pattern[1].type = RTE_FLOW_ITEM_TYPE_END;
> +
> +		action[0].type = RTE_FLOW_ACTION_TYPE_SECURITY;
> +		action[0].conf = NULL;
> +		action[1].type = RTE_FLOW_ACTION_TYPE_END;
> +		action[1].conf = NULL;
> +
> +		attr.egress = 0;
> +		attr.ingress = 1;
> +
> +		ret = rte_flow_validate(port_id, &attr, pattern, action, &err);
> +		if (ret) {

As I understand, flow_validate() is used here to query whether this
capability (multiple security sessions for same flow) is supported by
PMD/HW? If so, then probably no need for error message if it doesn't.

> +			RTE_LOG(ERR, IPSEC,
> +				"Failed to validate ipsec flow %s\n",
> +				err.message);
> +			goto exit;
> +		}
> +
> +		flow = rte_flow_create(port_id, &attr, pattern, action, &err);

Same question as for http://patches.dpdk.org/patch/63621/, why do you need it at all?
What will it enable/disable?

> +		if (flow == NULL) {
> +			RTE_LOG(ERR, IPSEC,
> +				"Failed to create ipsec flow %s\n",
> +				err.message);
> +			ret = -rte_errno;
> +			goto exit;

Why not just 'return ret;' here?
> +		}
> +		flow_info_tbl[port_id].rx_def_flow = flow;
> +	}
> +exit:
> +	return ret;
> +}
> +
>  int32_t
>  main(int32_t argc, char **argv)
>  {
> @@ -2478,6 +2529,11 @@ main(int32_t argc, char **argv)
>
>  		sa_check_offloads(portid, &req_rx_offloads, &req_tx_offloads);
>  		port_init(portid, req_rx_offloads, req_tx_offloads);
> +		/* Create default ipsec flow for the ethernet device */
> +		ret = create_default_ipsec_flow(portid, req_rx_offloads);
> +		if (ret)
> +			printf("Cannot create default flow, err=%d, port=%d\n",
> +				ret, portid);

Again it is an optional feature, so not sure if we need to report it for
every port. Might be better to do vice-versa: LOG(INFO, ...) when
create_default() was successful.

>  	}
>
>  	cryptodevs_init();
> diff --git a/examples/ipsec-secgw/ipsec.c b/examples/ipsec-secgw/ipsec.c
> index d4b5712..e529f68 100644
> --- a/examples/ipsec-secgw/ipsec.c
> +++ b/examples/ipsec-secgw/ipsec.c
> @@ -261,6 +261,12 @@ create_inline_session(struct socket_ctx *skt_ctx, struct ipsec_sa *sa,
>  	unsigned int i;
>  	unsigned int j;
>
> +	/*
> +	 * Don't create flow if default flow is already created
> +	 */
> +	if (flow_info_tbl[sa->portid].rx_def_flow)
> +		goto set_cdev_id;

As a nit: would be great to avoid introducing extra gotos.

> +

As I can see, that block of code is for
RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO only.
Is that what was intended?
BTW, for RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL, it seems rte_flow is
never created anyway inside that function.
>  	ret = rte_eth_dev_info_get(sa->portid, &dev_info);
>  	if (ret != 0) {
>  		RTE_LOG(ERR, IPSEC,
> @@ -396,6 +402,8 @@ create_inline_session(struct socket_ctx *skt_ctx, struct ipsec_sa *sa,
>  		ips->security.ol_flags = sec_cap->ol_flags;
>  		ips->security.ctx = sec_ctx;
>  	}
> +
> +set_cdev_id:
>  	sa->cdev_id_qp = 0;
>
>  	return 0;
> diff --git a/examples/ipsec-secgw/ipsec.h b/examples/ipsec-secgw/ipsec.h
> index 8e07521..28ff07d 100644
> --- a/examples/ipsec-secgw/ipsec.h
> +++ b/examples/ipsec-secgw/ipsec.h
> @@ -81,6 +81,12 @@ struct app_sa_prm {
>
>  extern struct app_sa_prm app_sa_prm;
>
> +struct flow_info {
> +	struct rte_flow *rx_def_flow;
> +};
> +
> +extern struct flow_info flow_info_tbl[RTE_MAX_ETHPORTS];
> +
>  enum {
>  	IPSEC_SESSION_PRIMARY = 0,
>  	IPSEC_SESSION_FALLBACK = 1,
> --
> 2.7.4

^ permalink raw reply	[flat|nested] 147+ messages in thread
* Re: [dpdk-dev] [PATCH 01/14] examples/ipsec-secgw: add default rte_flow for inline Rx
  2019-12-16 14:20   ` Ananyev, Konstantin
@ 2019-12-16 15:58     ` Anoob Joseph
  2020-01-09 12:01       ` Lukas Bartosik
  0 siblings, 1 reply; 147+ messages in thread
From: Anoob Joseph @ 2019-12-16 15:58 UTC (permalink / raw)
  To: Ananyev, Konstantin, Akhil Goyal, Nicolau, Radu, Thomas Monjalon
  Cc: Ankur Dwivedi, Jerin Jacob Kollanukkaran, Narayana Prasad Raju Athreya,
	Archana Muniganti, Tejasree Kondoj, Vamsi Krishna Attunuru,
	Lukas Bartosik, dev

Hi Konstantin,

Thanks for the review. Please see inline.

Thanks,
Anoob

> -----Original Message-----
> From: Ananyev, Konstantin <konstantin.ananyev@intel.com>
> Sent: Monday, December 16, 2019 7:51 PM
> To: Anoob Joseph <anoobj@marvell.com>; Akhil Goyal <akhil.goyal@nxp.com>;
> Nicolau, Radu <radu.nicolau@intel.com>; Thomas Monjalon
> <thomas@monjalon.net>
> Cc: Ankur Dwivedi <adwivedi@marvell.com>; Jerin Jacob Kollanukkaran
> <jerinj@marvell.com>; Narayana Prasad Raju Athreya
> <pathreya@marvell.com>; Archana Muniganti <marchana@marvell.com>;
> Tejasree Kondoj <ktejasree@marvell.com>; Vamsi Krishna Attunuru
> <vattunuru@marvell.com>; Lukas Bartosik <lbartosik@marvell.com>;
> dev@dpdk.org
> Subject: [EXT] RE: [PATCH 01/14] examples/ipsec-secgw: add default rte_flow
> for inline Rx
>
> External Email
>
> ----------------------------------------------------------------------
>
> > From: Ankur Dwivedi <adwivedi@marvell.com>
> >
> > The default flow created would enable security processing on all ESP
> > packets. If the default flow is created, SA based rte_flow creation
> > would be skipped.
>
> I suppose that one depends on:
> http://patches.dpdk.org/patch/63621/
> http://patches.dpdk.org/cover/63625/
> to work as expected?
> If so probably worth to mention in that header or in cover letter (or both).

[Anoob] Yes. Usually the dependency is not added in the commit header.
I'll update the v2 cover letter with such details.
>
> > Signed-off-by: Ankur Dwivedi <adwivedi@marvell.com>
> > Signed-off-by: Anoob Joseph <anoobj@marvell.com>
> > ---
> >  examples/ipsec-secgw/ipsec-secgw.c | 56 ++++++++++++++++++++++++++++++++++++++
> >  examples/ipsec-secgw/ipsec.c       |  8 ++++++
> >  examples/ipsec-secgw/ipsec.h       |  6 ++++
> >  3 files changed, 70 insertions(+)
> >
> > diff --git a/examples/ipsec-secgw/ipsec-secgw.c
> > b/examples/ipsec-secgw/ipsec-secgw.c
> > index 3b5aaf6..7506922 100644
> > --- a/examples/ipsec-secgw/ipsec-secgw.c
> > +++ b/examples/ipsec-secgw/ipsec-secgw.c
> > @@ -128,6 +128,8 @@ struct ethaddr_info ethaddr_tbl[RTE_MAX_ETHPORTS] = {
> >  	{ 0, ETHADDR(0x00, 0x16, 0x3e, 0x49, 0x9e, 0xdd) } };
> >
> > +struct flow_info flow_info_tbl[RTE_MAX_ETHPORTS];
>
> Need to be initialized with zeroes somewhere.

[Anoob] Will add it in v2.

>
> > +
> >  #define CMD_LINE_OPT_CONFIG "config"
> >  #define CMD_LINE_OPT_SINGLE_SA "single-sa"
> >  #define CMD_LINE_OPT_CRYPTODEV_MASK "cryptodev_mask"
> > @@ -2406,6 +2408,55 @@ reassemble_init(void)
> >  	return rc;
> >  }
> >
> > +static int
> > +create_default_ipsec_flow(uint16_t port_id, uint64_t rx_offloads) {
> > +	int ret = 0;
> > +
> > +	/* Add the default ipsec flow to detect all ESP packets for rx */
> > +	if (rx_offloads & DEV_RX_OFFLOAD_SECURITY) {
> > +		struct rte_flow_action action[2];
> > +		struct rte_flow_item pattern[2];
> > +		struct rte_flow_attr attr = {0};
> > +		struct rte_flow_error err;
> > +		struct rte_flow *flow;
> > +
> > +		pattern[0].type = RTE_FLOW_ITEM_TYPE_ESP;
> > +		pattern[0].spec = NULL;
> > +		pattern[0].mask = NULL;
> > +		pattern[0].last = NULL;
> > +		pattern[1].type = RTE_FLOW_ITEM_TYPE_END;
> > +
> > +		action[0].type = RTE_FLOW_ACTION_TYPE_SECURITY;
> > +		action[0].conf = NULL;
> > +		action[1].type = RTE_FLOW_ACTION_TYPE_END;
> > +		action[1].conf = NULL;
> > +
> > +		attr.egress = 0;
> > +		attr.ingress = 1;
> > +
> > +		ret = rte_flow_validate(port_id, &attr, pattern, action, &err);
> > +		if (ret) {
>
> As I understand,
> flow_validate() is used here to query whether this capability
> (multiple security sessions for same flow) is supported by PMD/HW?
> If so, then probably no need for error message if it doesn't.

[Anoob] Yes. Will remove the error log.

>
> > +			RTE_LOG(ERR, IPSEC,
> > +				"Failed to validate ipsec flow %s\n",
> > +				err.message);
> > +			goto exit;
> > +		}
> > +
> > +		flow = rte_flow_create(port_id, &attr, pattern, action, &err);
>
> Same question as for http://patches.dpdk.org/patch/63621/, why do you need it at all?
> What will it enable/disable?

[Anoob] Your followup question there accurately describes the usage. If the
application wants to enable H/w IPsec processing only on a specific SPI
range, it will be allowed so with this kind of flow.

Let's say, the application wants to allow H/w processing only for SPI 1-8192.
In that case, either 8192 rte_flows need to be created, or one rte_flow rule
with SPI 1-8192 range can be created. Any SPI outside the range won't match
the rule and rte_flow could have further rules to act on such packets.

>
> > +		if (flow == NULL) {
> > +			RTE_LOG(ERR, IPSEC,
> > +				"Failed to create ipsec flow %s\n",
> > +				err.message);
> > +			ret = -rte_errno;
> > +			goto exit;
>
> Why not just 'return ret;' here?

[Anoob] Will fix in v2.

>
> > +		}
> > +		flow_info_tbl[port_id].rx_def_flow = flow;
> > +	}
> > +exit:
> > +	return ret;
> > +}
> > +
> >  int32_t
> >  main(int32_t argc, char **argv)
> >  {
> > @@ -2478,6 +2529,11 @@ main(int32_t argc, char **argv)
> >
> >  		sa_check_offloads(portid, &req_rx_offloads, &req_tx_offloads);
> >  		port_init(portid, req_rx_offloads, req_tx_offloads);
> > +		/* Create default ipsec flow for the ethernet device */
> > +		ret = create_default_ipsec_flow(portid, req_rx_offloads);
> > +		if (ret)
> > +			printf("Cannot create default flow, err=%d, port=%d\n",
> > +				ret, portid);
>
> Again it is an optional feature, so not sure if we need to report it for every port.
> Might be better to do vice-versa: LOG(INFO, ...)
> when create_default() was successful.

[Anoob] Will update in v2.

>
> >  	}
> >
> >  	cryptodevs_init();
> > diff --git a/examples/ipsec-secgw/ipsec.c
> > b/examples/ipsec-secgw/ipsec.c
> > index d4b5712..e529f68 100644
> > --- a/examples/ipsec-secgw/ipsec.c
> > +++ b/examples/ipsec-secgw/ipsec.c
> > @@ -261,6 +261,12 @@ create_inline_session(struct socket_ctx *skt_ctx, struct ipsec_sa *sa,
> >  	unsigned int i;
> >  	unsigned int j;
> >
> > +	/*
> > +	 * Don't create flow if default flow is already created
> > +	 */
> > +	if (flow_info_tbl[sa->portid].rx_def_flow)
> > +		goto set_cdev_id;
>
> As a nit: would be great to avoid introducing extra gotos.

[Anoob] So, set the cdev_id and return here itself? Will make that change
in v2.

>
> > +
>
> As I can see, that block of code is for
> RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO only.
> Is that what was intended?

[Anoob] Yes

> BTW, for RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL, it seems rte_flow
> is never created anyway inside that function.

[Anoob] Yes. Current ipsec-secgw doesn't have rte_flow creation for inline
protocol. It is done only for inline crypto. The default flow that we are
adding is applicable for both inline crypto & inline protocol. Hence adding
the extra check in the inline crypto path to avoid creating duplicate
rte_flows.
>
> >  	ret = rte_eth_dev_info_get(sa->portid, &dev_info);
> >  	if (ret != 0) {
> >  		RTE_LOG(ERR, IPSEC,
> > @@ -396,6 +402,8 @@ create_inline_session(struct socket_ctx *skt_ctx, struct ipsec_sa *sa,
> >  		ips->security.ol_flags = sec_cap->ol_flags;
> >  		ips->security.ctx = sec_ctx;
> >  	}
> > +
> > +set_cdev_id:
> >  	sa->cdev_id_qp = 0;
> >
> >  	return 0;
> > diff --git a/examples/ipsec-secgw/ipsec.h
> > b/examples/ipsec-secgw/ipsec.h index 8e07521..28ff07d 100644
> > --- a/examples/ipsec-secgw/ipsec.h
> > +++ b/examples/ipsec-secgw/ipsec.h
> > @@ -81,6 +81,12 @@ struct app_sa_prm {
> >
> >  extern struct app_sa_prm app_sa_prm;
> >
> > +struct flow_info {
> > +	struct rte_flow *rx_def_flow;
> > +};
> > +
> > +extern struct flow_info flow_info_tbl[RTE_MAX_ETHPORTS];
> > +
> >  enum {
> >  	IPSEC_SESSION_PRIMARY = 0,
> >  	IPSEC_SESSION_FALLBACK = 1,
> > --
> > 2.7.4

^ permalink raw reply	[flat|nested] 147+ messages in thread
* Re: [dpdk-dev] [PATCH 01/14] examples/ipsec-secgw: add default rte_flow for inline Rx
  2019-12-16 15:58     ` Anoob Joseph
@ 2020-01-09 12:01       ` Lukas Bartosik
  2020-01-09 19:09         ` Ananyev, Konstantin
  0 siblings, 1 reply; 147+ messages in thread
From: Lukas Bartosik @ 2020-01-09 12:01 UTC (permalink / raw)
  To: Anoob Joseph, Ananyev, Konstantin, Akhil Goyal, Nicolau, Radu,
	Thomas Monjalon
  Cc: Ankur Dwivedi, Jerin Jacob Kollanukkaran, Narayana Prasad Raju Athreya,
	Archana Muniganti, Tejasree Kondoj, Vamsi Krishna Attunuru, dev

Hi Konstantin,

Please see my question inline.

Thanks,
Lukasz

On 16.12.2019 16:58, Anoob Joseph wrote:
> Hi Konstantin,
>
> Thanks for the review. Please see inline.
>
> Thanks,
> Anoob
>
>> -----Original Message-----
>> From: Ananyev, Konstantin <konstantin.ananyev@intel.com>
>> Sent: Monday, December 16, 2019 7:51 PM
>> To: Anoob Joseph <anoobj@marvell.com>; Akhil Goyal <akhil.goyal@nxp.com>;
>> Nicolau, Radu <radu.nicolau@intel.com>; Thomas Monjalon
>> <thomas@monjalon.net>
>> Cc: Ankur Dwivedi <adwivedi@marvell.com>; Jerin Jacob Kollanukkaran
>> <jerinj@marvell.com>; Narayana Prasad Raju Athreya
>> <pathreya@marvell.com>; Archana Muniganti <marchana@marvell.com>;
>> Tejasree Kondoj <ktejasree@marvell.com>; Vamsi Krishna Attunuru
>> <vattunuru@marvell.com>; Lukas Bartosik <lbartosik@marvell.com>;
>> dev@dpdk.org
>> Subject: [EXT] RE: [PATCH 01/14] examples/ipsec-secgw: add default rte_flow
>> for inline Rx
>>
>> External Email
>>
>> ----------------------------------------------------------------------
>>
>>> From: Ankur Dwivedi <adwivedi@marvell.com>
>>>
>>> The default flow created would enable security processing on all ESP
>>> packets. If the default flow is created, SA based rte_flow creation
>>> would be skipped.
>>
>> I suppose that one depends on:
>> http://patches.dpdk.org/patch/63621/
>> http://patches.dpdk.org/cover/63625/
>> to work as expected?
>> If so probably worth to mention in that header or in cover letter (or both).
>
> [Anoob] Yes. Usually the dependency is not added in the commit header.
> I'll update the v2 cover letter with such details.
>
>>
>>> Signed-off-by: Ankur Dwivedi <adwivedi@marvell.com>
>>> Signed-off-by: Anoob Joseph <anoobj@marvell.com>
>>> ---
>>>  examples/ipsec-secgw/ipsec-secgw.c | 56 ++++++++++++++++++++++++++++++++++++++
>>>  examples/ipsec-secgw/ipsec.c       |  8 ++++++
>>>  examples/ipsec-secgw/ipsec.h       |  6 ++++
>>>  3 files changed, 70 insertions(+)
>>>
>>> diff --git a/examples/ipsec-secgw/ipsec-secgw.c
>>> b/examples/ipsec-secgw/ipsec-secgw.c
>>> index 3b5aaf6..7506922 100644
>>> --- a/examples/ipsec-secgw/ipsec-secgw.c
>>> +++ b/examples/ipsec-secgw/ipsec-secgw.c
>>> @@ -128,6 +128,8 @@ struct ethaddr_info ethaddr_tbl[RTE_MAX_ETHPORTS] = {
>>>  	{ 0, ETHADDR(0x00, 0x16, 0x3e, 0x49, 0x9e, 0xdd) } };
>>>
>>> +struct flow_info flow_info_tbl[RTE_MAX_ETHPORTS];
>>
>> Need to be initialized with zeroes somewhere.
>
> [Anoob] Will add it in v2.

[Lukasz] Is there any reason to initialize flow_info_tbl explicitly with
zeros? As a global array it will be automatically zeroized by the compiler.
>>
>>> +
>>>  #define CMD_LINE_OPT_CONFIG "config"
>>>  #define CMD_LINE_OPT_SINGLE_SA "single-sa"
>>>  #define CMD_LINE_OPT_CRYPTODEV_MASK "cryptodev_mask"
>>> @@ -2406,6 +2408,55 @@ reassemble_init(void)
>>>  	return rc;
>>>  }
>>>
>>> +static int
>>> +create_default_ipsec_flow(uint16_t port_id, uint64_t rx_offloads) {
>>> +	int ret = 0;
>>> +
>>> +	/* Add the default ipsec flow to detect all ESP packets for rx */
>>> +	if (rx_offloads & DEV_RX_OFFLOAD_SECURITY) {
>>> +		struct rte_flow_action action[2];
>>> +		struct rte_flow_item pattern[2];
>>> +		struct rte_flow_attr attr = {0};
>>> +		struct rte_flow_error err;
>>> +		struct rte_flow *flow;
>>> +
>>> +		pattern[0].type = RTE_FLOW_ITEM_TYPE_ESP;
>>> +		pattern[0].spec = NULL;
>>> +		pattern[0].mask = NULL;
>>> +		pattern[0].last = NULL;
>>> +		pattern[1].type = RTE_FLOW_ITEM_TYPE_END;
>>> +
>>> +		action[0].type = RTE_FLOW_ACTION_TYPE_SECURITY;
>>> +		action[0].conf = NULL;
>>> +		action[1].type = RTE_FLOW_ACTION_TYPE_END;
>>> +		action[1].conf = NULL;
>>> +
>>> +		attr.egress = 0;
>>> +		attr.ingress = 1;
>>> +
>>> +		ret = rte_flow_validate(port_id, &attr, pattern, action, &err);
>>> +		if (ret) {
>>
>> As I understand, flow_validate() is used here to query whether this
>> capability (multiple security sessions for same flow) is supported by
>> PMD/HW? If so, then probably no need for error message if it doesn't.
>
> [Anoob] Yes. Will remove the error log.
>
>>
>>> +			RTE_LOG(ERR, IPSEC,
>>> +				"Failed to validate ipsec flow %s\n",
>>> +				err.message);
>>> +			goto exit;
>>> +		}
>>> +
>>> +		flow = rte_flow_create(port_id, &attr, pattern, action, &err);
>>
>> Same question as for http://patches.dpdk.org/patch/63621/, why do you need it at all?
>> What will it enable/disable?
>
> [Anoob] Your followup question there accurately describes the usage. If the
> application wants to enable H/w IPsec processing only on a specific SPI
> range, it will be allowed so with this kind of flow.
>
> Let's say, the application wants to allow H/w processing only for SPI
> 1-8192. In that case, either 8192 rte_flows need to be created, or one
> rte_flow rule with SPI 1-8192 range can be created. Any SPI outside the
> range won't match the rule and rte_flow could have further rules to act
> on such packets.
>
>>
>>> +		if (flow == NULL) {
>>> +			RTE_LOG(ERR, IPSEC,
>>> +				"Failed to create ipsec flow %s\n",
>>> +				err.message);
>>> +			ret = -rte_errno;
>>> +			goto exit;
>>
>> Why not just 'return ret;' here?
>
> [Anoob] Will fix in v2.
>
>>
>>> +		}
>>> +		flow_info_tbl[port_id].rx_def_flow = flow;
>>> +	}
>>> +exit:
>>> +	return ret;
>>> +}
>>> +
>>>  int32_t
>>>  main(int32_t argc, char **argv)
>>>  {
>>> @@ -2478,6 +2529,11 @@ main(int32_t argc, char **argv)
>>>
>>>  		sa_check_offloads(portid, &req_rx_offloads, &req_tx_offloads);
>>>  		port_init(portid, req_rx_offloads, req_tx_offloads);
>>> +		/* Create default ipsec flow for the ethernet device */
>>> +		ret = create_default_ipsec_flow(portid, req_rx_offloads);
>>> +		if (ret)
>>> +			printf("Cannot create default flow, err=%d, port=%d\n",
>>> +				ret, portid);
>>
>> Again it is an optional feature, so not sure if we need to report it for
>> every port. Might be better to do vice-versa: LOG(INFO, ...) when
>> create_default() was successful.
>
> [Anoob] Will update in v2.
>
>>
>>>  	}
>>>
>>>  	cryptodevs_init();
>>> diff --git a/examples/ipsec-secgw/ipsec.c
>>> b/examples/ipsec-secgw/ipsec.c index d4b5712..e529f68 100644
>>> --- a/examples/ipsec-secgw/ipsec.c
>>> +++ b/examples/ipsec-secgw/ipsec.c
>>> @@ -261,6 +261,12 @@ create_inline_session(struct socket_ctx *skt_ctx, struct ipsec_sa *sa,
>>>  	unsigned int i;
>>>  	unsigned int j;
>>>
>>> +	/*
>>> +	 * Don't create flow if default flow is already created
>>> +	 */
>>> +	if (flow_info_tbl[sa->portid].rx_def_flow)
>>> +		goto set_cdev_id;
>>
>> As a nit: would be great to avoid introducing extra gotos.
>
> [Anoob] So, set the cdev_id and return here itself?
> > Will make that change in v2. > >> >>> + >> >> As I can see, that block of code is for >> RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO only. >> Is that what intended? > > [Anoob] Yes > >> BTW, for RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL, it seems rte_flow >> is never created anyway inside that function. > > [Anoob] Yes. Current ipsec-secgw doesn't have rte_flow creation for inline protocol. It is done only for inline crypto. The default flow that we are adding is applicable for both inline crypto & inline protocol. Hence adding the extra check in inline crypto path to avoid creating duplicate rte_flows. > >> >>> ret = rte_eth_dev_info_get(sa->portid, &dev_info); >>> if (ret != 0) { >>> RTE_LOG(ERR, IPSEC, >>> @@ -396,6 +402,8 @@ create_inline_session(struct socket_ctx *skt_ctx, >> struct ipsec_sa *sa, >>> ips->security.ol_flags = sec_cap->ol_flags; >>> ips->security.ctx = sec_ctx; >>> } >>> + >>> +set_cdev_id: >>> sa->cdev_id_qp = 0; >>> >>> return 0; >>> diff --git a/examples/ipsec-secgw/ipsec.h >>> b/examples/ipsec-secgw/ipsec.h index 8e07521..28ff07d 100644 >>> --- a/examples/ipsec-secgw/ipsec.h >>> +++ b/examples/ipsec-secgw/ipsec.h >>> @@ -81,6 +81,12 @@ struct app_sa_prm { >>> >>> extern struct app_sa_prm app_sa_prm; >>> >>> +struct flow_info { >>> + struct rte_flow *rx_def_flow; >>> +}; >>> + >>> +extern struct flow_info flow_info_tbl[RTE_MAX_ETHPORTS]; >>> + >>> enum { >>> IPSEC_SESSION_PRIMARY = 0, >>> IPSEC_SESSION_FALLBACK = 1, >>> -- >>> 2.7.4 > ^ permalink raw reply [flat|nested] 147+ messages in thread
* Re: [dpdk-dev] [PATCH 01/14] examples/ipsec-secgw: add default rte_flow for inline Rx 2020-01-09 12:01 ` Lukas Bartosik @ 2020-01-09 19:09 ` Ananyev, Konstantin 2020-01-13 11:40 ` Ananyev, Konstantin 0 siblings, 1 reply; 147+ messages in thread From: Ananyev, Konstantin @ 2020-01-09 19:09 UTC (permalink / raw) To: Lukas Bartosik, Anoob Joseph, Akhil Goyal, Nicolau, Radu, Thomas Monjalon Cc: Ankur Dwivedi, Jerin Jacob Kollanukkaran, Narayana Prasad Raju Athreya, Archana Muniganti, Tejasree Kondoj, Vamsi Krishna Attunuru, dev > >>> > >>> Signed-off-by: Ankur Dwivedi <adwivedi@marvell.com> > >>> Signed-off-by: Anoob Joseph <anoobj@marvell.com> > >>> --- > >>> examples/ipsec-secgw/ipsec-secgw.c | 56 > >> ++++++++++++++++++++++++++++++++++++++ > >>> examples/ipsec-secgw/ipsec.c | 8 ++++++ > >>> examples/ipsec-secgw/ipsec.h | 6 ++++ > >>> 3 files changed, 70 insertions(+) > >>> > >>> diff --git a/examples/ipsec-secgw/ipsec-secgw.c > >>> b/examples/ipsec-secgw/ipsec-secgw.c > >>> index 3b5aaf6..7506922 100644 > >>> --- a/examples/ipsec-secgw/ipsec-secgw.c > >>> +++ b/examples/ipsec-secgw/ipsec-secgw.c > >>> @@ -128,6 +128,8 @@ struct ethaddr_info > >> ethaddr_tbl[RTE_MAX_ETHPORTS] = { > >>> { 0, ETHADDR(0x00, 0x16, 0x3e, 0x49, 0x9e, 0xdd) } }; > >>> > >>> +struct flow_info flow_info_tbl[RTE_MAX_ETHPORTS]; > >> > >> Need to be initialized with zeroes somewhere. > > > > [Anoob] Will add it in v2. > > [Lukasz] Is there any reason to initialize flow_info_tbl explicitly with zeros ? As a global array it will be automatically > zeroized by the compiler. I think, it wouldn't. Only static ones will be silently initialized by compiler. Otherwise it could be anything. 
> > >> > >>> + > >>> #define CMD_LINE_OPT_CONFIG "config" > >>> #define CMD_LINE_OPT_SINGLE_SA "single-sa" > >>> #define CMD_LINE_OPT_CRYPTODEV_MASK "cryptodev_mask" > >>> @@ -2406,6 +2408,55 @@ reassemble_init(void) > >>> return rc; > >>> } > >>> > >>> +static int > >>> +create_default_ipsec_flow(uint16_t port_id, uint64_t rx_offloads) { > >>> + int ret = 0; > >>> + > >>> + /* Add the default ipsec flow to detect all ESP packets for rx */ > >>> + if (rx_offloads & DEV_RX_OFFLOAD_SECURITY) { > >>> + struct rte_flow_action action[2]; > >>> + struct rte_flow_item pattern[2]; > >>> + struct rte_flow_attr attr = {0}; > >>> + struct rte_flow_error err; > >>> + struct rte_flow *flow; > >>> + > >>> + pattern[0].type = RTE_FLOW_ITEM_TYPE_ESP; > >>> + pattern[0].spec = NULL; > >>> + pattern[0].mask = NULL; > >>> + pattern[0].last = NULL; > >>> + pattern[1].type = RTE_FLOW_ITEM_TYPE_END; > >>> + > >>> + action[0].type = RTE_FLOW_ACTION_TYPE_SECURITY; > >>> + action[0].conf = NULL; > >>> + action[1].type = RTE_FLOW_ACTION_TYPE_END; > >>> + action[1].conf = NULL; > >>> + > >>> + attr.egress = 0; > >>> + attr.ingress = 1; > >>> + > >>> + ret = rte_flow_validate(port_id, &attr, pattern, action, &err); > >>> + if (ret) { > >> > >> As I understand, flow_validate() is used here to query does this capability > >> (multiple security sessions for same flow) is supported by PMD/HW? > >> If so, then probably no need for error message if it doesn't. > > > > [Anoob] Yes. Will remove the error log. > > > >> > >>> + RTE_LOG(ERR, IPSEC, > >>> + "Failed to validate ipsec flow %s\n", > >>> + err.message); > >>> + goto exit; > >>> + } > >>> + > >>> + flow = rte_flow_create(port_id, &attr, pattern, action, &err); > >> > >> Same question as for http://patches.dpdk.org/patch/63621/ , why do you need it at all? > >> What it will enable/disable? > > > > [Anoob] Your followup question there accurately describes the usage. 
If the application wants to enable H/w IPsec processing only on a > specific SPI range, it will be allowed so with this kind of flow. > > > > Let's say, application wants to allow H/w processing only for SPI 1-8192. In that case, either 8192 rte_flows need to be created, or one > rte_flow rule with SPI 1-8192 range can be created. Any SPI outside the range won't match the rule and rte_flow could have further rules to > act on such packets. > > > >> > >>> + if (flow == NULL) { > >>> + RTE_LOG(ERR, IPSEC, > >>> + "Failed to create ipsec flow %s\n", > >>> + err.message); > >>> + ret = -rte_errno; > >>> + goto exit; > >> > >> Why not just 'return ret;' here? > > > > [Anoob] Will fix in v2. > > > >> > >>> + } > >>> + flow_info_tbl[port_id].rx_def_flow = flow; > >>> + } > >>> +exit: > >>> + return ret; > >>> +} > >>> + > >>> int32_t > >>> main(int32_t argc, char **argv) > >>> { > >>> @@ -2478,6 +2529,11 @@ main(int32_t argc, char **argv) > >>> > >>> sa_check_offloads(portid, &req_rx_offloads, > >> &req_tx_offloads); > >>> port_init(portid, req_rx_offloads, req_tx_offloads); > >>> + /* Create default ipsec flow for the ethernet device */ > >>> + ret = create_default_ipsec_flow(portid, req_rx_offloads); > >>> + if (ret) > >>> + printf("Cannot create default flow, err=%d, > >> port=%d\n", > >>> + ret, portid); > >> > >> Again it is an optional feature, so not sure if we need to report it for every port. > >> Might be better to do visa-versa: LOG(INFO, ...) when create_default() was > >> successfull. > > > > [Anoob] Will update in v2. 
> > > >> > >>> } > >>> > >>> cryptodevs_init(); > >>> diff --git a/examples/ipsec-secgw/ipsec.c > >>> b/examples/ipsec-secgw/ipsec.c index d4b5712..e529f68 100644 > >>> --- a/examples/ipsec-secgw/ipsec.c > >>> +++ b/examples/ipsec-secgw/ipsec.c > >>> @@ -261,6 +261,12 @@ create_inline_session(struct socket_ctx *skt_ctx, > >> struct ipsec_sa *sa, > >>> unsigned int i; > >>> unsigned int j; > >>> > >>> + /* > >>> + * Don't create flow if default flow is already created > >>> + */ > >>> + if (flow_info_tbl[sa->portid].rx_def_flow) > >>> + goto set_cdev_id; > >> > >> As a nit: would be great to avoid introducing extra gotos. > > > > [Anoob] So, set the cdev_id and return here itself? > > > > Will make that change in v2. > > > >> > >>> + > >> > >> As I can see, that block of code is for > >> RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO only. > >> Is that what intended? > > > > [Anoob] Yes > > > >> BTW, for RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL, it seems rte_flow > >> is never created anyway inside that function. > > > > [Anoob] Yes. Current ipsec-secgw doesn't have rte_flow creation for inline protocol. It is done only for inline crypto. The default flow that > we are adding is applicable for both inline crypto & inline protocol. Hence adding the extra check in inline crypto path to avoid creating > duplicate rte_flows. 
> > > >> > >>> ret = rte_eth_dev_info_get(sa->portid, &dev_info); > >>> if (ret != 0) { > >>> RTE_LOG(ERR, IPSEC, > >>> @@ -396,6 +402,8 @@ create_inline_session(struct socket_ctx *skt_ctx, > >> struct ipsec_sa *sa, > >>> ips->security.ol_flags = sec_cap->ol_flags; > >>> ips->security.ctx = sec_ctx; > >>> } > >>> + > >>> +set_cdev_id: > >>> sa->cdev_id_qp = 0; > >>> > >>> return 0; > >>> diff --git a/examples/ipsec-secgw/ipsec.h > >>> b/examples/ipsec-secgw/ipsec.h index 8e07521..28ff07d 100644 > >>> --- a/examples/ipsec-secgw/ipsec.h > >>> +++ b/examples/ipsec-secgw/ipsec.h > >>> @@ -81,6 +81,12 @@ struct app_sa_prm { > >>> > >>> extern struct app_sa_prm app_sa_prm; > >>> > >>> +struct flow_info { > >>> + struct rte_flow *rx_def_flow; > >>> +}; > >>> + > >>> +extern struct flow_info flow_info_tbl[RTE_MAX_ETHPORTS]; > >>> + > >>> enum { > >>> IPSEC_SESSION_PRIMARY = 0, > >>> IPSEC_SESSION_FALLBACK = 1, > >>> -- > >>> 2.7.4 > > ^ permalink raw reply [flat|nested] 147+ messages in thread
* Re: [dpdk-dev] [PATCH 01/14] examples/ipsec-secgw: add default rte_flow for inline Rx 2020-01-09 19:09 ` Ananyev, Konstantin @ 2020-01-13 11:40 ` Ananyev, Konstantin 0 siblings, 0 replies; 147+ messages in thread From: Ananyev, Konstantin @ 2020-01-13 11:40 UTC (permalink / raw) To: Lukas Bartosik, Anoob Joseph, Akhil Goyal, Nicolau, Radu, Thomas Monjalon Cc: Ankur Dwivedi, Jerin Jacob Kollanukkaran, Narayana Prasad Raju Athreya, Archana Muniganti, Tejasree Kondoj, Vamsi Krishna Attunuru, dev, Yigit, Ferruh > > >>> > > >>> Signed-off-by: Ankur Dwivedi <adwivedi@marvell.com> > > >>> Signed-off-by: Anoob Joseph <anoobj@marvell.com> > > >>> --- > > >>> examples/ipsec-secgw/ipsec-secgw.c | 56 > > >> ++++++++++++++++++++++++++++++++++++++ > > >>> examples/ipsec-secgw/ipsec.c | 8 ++++++ > > >>> examples/ipsec-secgw/ipsec.h | 6 ++++ > > >>> 3 files changed, 70 insertions(+) > > >>> > > >>> diff --git a/examples/ipsec-secgw/ipsec-secgw.c > > >>> b/examples/ipsec-secgw/ipsec-secgw.c > > >>> index 3b5aaf6..7506922 100644 > > >>> --- a/examples/ipsec-secgw/ipsec-secgw.c > > >>> +++ b/examples/ipsec-secgw/ipsec-secgw.c > > >>> @@ -128,6 +128,8 @@ struct ethaddr_info > > >> ethaddr_tbl[RTE_MAX_ETHPORTS] = { > > >>> { 0, ETHADDR(0x00, 0x16, 0x3e, 0x49, 0x9e, 0xdd) } }; > > >>> > > >>> +struct flow_info flow_info_tbl[RTE_MAX_ETHPORTS]; > > >> > > >> Need to be initialized with zeroes somewhere. > > > > > > [Anoob] Will add it in v2. > > > > [Lukasz] Is there any reason to initialize flow_info_tbl explicitly with zeros ? As a global array it will be automatically > > zeroized by the compiler. > > I think, it wouldn't. > Only static ones will be silently initialized by compiler. > Otherwise it could be anything. Actually as pointed by Ferruh: Compiler wouldn't zero it out, but it will make it a'common' symbol and let linker to decide. As there is no other symbols for that var, linker should put it into .bss. 
So it seems I was too conservative, and it is safe not to have explicit initialization here. Konstantin > > > > > >> > > >>> + > > >>> #define CMD_LINE_OPT_CONFIG "config" > > >>> #define CMD_LINE_OPT_SINGLE_SA "single-sa" > > >>> #define CMD_LINE_OPT_CRYPTODEV_MASK "cryptodev_mask" > > >>> @@ -2406,6 +2408,55 @@ reassemble_init(void) > > >>> return rc; > > >>> } > > >>> > > >>> +static int > > >>> +create_default_ipsec_flow(uint16_t port_id, uint64_t rx_offloads) { > > >>> + int ret = 0; > > >>> + > > >>> + /* Add the default ipsec flow to detect all ESP packets for rx */ > > >>> + if (rx_offloads & DEV_RX_OFFLOAD_SECURITY) { > > >>> + struct rte_flow_action action[2]; > > >>> + struct rte_flow_item pattern[2]; > > >>> + struct rte_flow_attr attr = {0}; > > >>> + struct rte_flow_error err; > > >>> + struct rte_flow *flow; > > >>> + > > >>> + pattern[0].type = RTE_FLOW_ITEM_TYPE_ESP; > > >>> + pattern[0].spec = NULL; > > >>> + pattern[0].mask = NULL; > > >>> + pattern[0].last = NULL; > > >>> + pattern[1].type = RTE_FLOW_ITEM_TYPE_END; > > >>> + > > >>> + action[0].type = RTE_FLOW_ACTION_TYPE_SECURITY; > > >>> + action[0].conf = NULL; > > >>> + action[1].type = RTE_FLOW_ACTION_TYPE_END; > > >>> + action[1].conf = NULL; > > >>> + > > >>> + attr.egress = 0; > > >>> + attr.ingress = 1; > > >>> + > > >>> + ret = rte_flow_validate(port_id, &attr, pattern, action, &err); > > >>> + if (ret) { > > >> > > >> As I understand, flow_validate() is used here to query does this capability > > >> (multiple security sessions for same flow) is supported by PMD/HW? > > >> If so, then probably no need for error message if it doesn't. > > > > > > [Anoob] Yes. Will remove the error log. 
> > > > > >> > > >>> + RTE_LOG(ERR, IPSEC, > > >>> + "Failed to validate ipsec flow %s\n", > > >>> + err.message); > > >>> + goto exit; > > >>> + } > > >>> + > > >>> + flow = rte_flow_create(port_id, &attr, pattern, action, &err); > > >> > > >> Same question as for http://patches.dpdk.org/patch/63621/ , why do you need it at all? > > >> What it will enable/disable? > > > > > > [Anoob] Your followup question there accurately describes the usage. If the application wants to enable H/w IPsec processing only on a > > specific SPI range, it will be allowed so with this kind of flow. > > > > > > Let's say, application wants to allow H/w processing only for SPI 1-8192. In that case, either 8192 rte_flows need to be created, or one > > rte_flow rule with SPI 1-8192 range can be created. Any SPI outside the range won't match the rule and rte_flow could have further rules > to > > act on such packets. > > > > > >> > > >>> + if (flow == NULL) { > > >>> + RTE_LOG(ERR, IPSEC, > > >>> + "Failed to create ipsec flow %s\n", > > >>> + err.message); > > >>> + ret = -rte_errno; > > >>> + goto exit; > > >> > > >> Why not just 'return ret;' here? > > > > > > [Anoob] Will fix in v2. > > > > > >> > > >>> + } > > >>> + flow_info_tbl[port_id].rx_def_flow = flow; > > >>> + } > > >>> +exit: > > >>> + return ret; > > >>> +} > > >>> + > > >>> int32_t > > >>> main(int32_t argc, char **argv) > > >>> { > > >>> @@ -2478,6 +2529,11 @@ main(int32_t argc, char **argv) > > >>> > > >>> sa_check_offloads(portid, &req_rx_offloads, > > >> &req_tx_offloads); > > >>> port_init(portid, req_rx_offloads, req_tx_offloads); > > >>> + /* Create default ipsec flow for the ethernet device */ > > >>> + ret = create_default_ipsec_flow(portid, req_rx_offloads); > > >>> + if (ret) > > >>> + printf("Cannot create default flow, err=%d, > > >> port=%d\n", > > >>> + ret, portid); > > >> > > >> Again it is an optional feature, so not sure if we need to report it for every port. 
> > >> Might be better to do visa-versa: LOG(INFO, ...) when create_default() was > > >> successfull. > > > > > > [Anoob] Will update in v2. > > > > > >> > > >>> } > > >>> > > >>> cryptodevs_init(); > > >>> diff --git a/examples/ipsec-secgw/ipsec.c > > >>> b/examples/ipsec-secgw/ipsec.c index d4b5712..e529f68 100644 > > >>> --- a/examples/ipsec-secgw/ipsec.c > > >>> +++ b/examples/ipsec-secgw/ipsec.c > > >>> @@ -261,6 +261,12 @@ create_inline_session(struct socket_ctx *skt_ctx, > > >> struct ipsec_sa *sa, > > >>> unsigned int i; > > >>> unsigned int j; > > >>> > > >>> + /* > > >>> + * Don't create flow if default flow is already created > > >>> + */ > > >>> + if (flow_info_tbl[sa->portid].rx_def_flow) > > >>> + goto set_cdev_id; > > >> > > >> As a nit: would be great to avoid introducing extra gotos. > > > > > > [Anoob] So, set the cdev_id and return here itself? > > > > > > Will make that change in v2. > > > > > >> > > >>> + > > >> > > >> As I can see, that block of code is for > > >> RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO only. > > >> Is that what intended? > > > > > > [Anoob] Yes > > > > > >> BTW, for RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL, it seems rte_flow > > >> is never created anyway inside that function. > > > > > > [Anoob] Yes. Current ipsec-secgw doesn't have rte_flow creation for inline protocol. It is done only for inline crypto. The default flow > that > > we are adding is applicable for both inline crypto & inline protocol. Hence adding the extra check in inline crypto path to avoid creating > > duplicate rte_flows. 
> > > > > >> > > >>> ret = rte_eth_dev_info_get(sa->portid, &dev_info); > > >>> if (ret != 0) { > > >>> RTE_LOG(ERR, IPSEC, > > >>> @@ -396,6 +402,8 @@ create_inline_session(struct socket_ctx *skt_ctx, > > >> struct ipsec_sa *sa, > > >>> ips->security.ol_flags = sec_cap->ol_flags; > > >>> ips->security.ctx = sec_ctx; > > >>> } > > >>> + > > >>> +set_cdev_id: > > >>> sa->cdev_id_qp = 0; > > >>> > > >>> return 0; > > >>> diff --git a/examples/ipsec-secgw/ipsec.h > > >>> b/examples/ipsec-secgw/ipsec.h index 8e07521..28ff07d 100644 > > >>> --- a/examples/ipsec-secgw/ipsec.h > > >>> +++ b/examples/ipsec-secgw/ipsec.h > > >>> @@ -81,6 +81,12 @@ struct app_sa_prm { > > >>> > > >>> extern struct app_sa_prm app_sa_prm; > > >>> > > >>> +struct flow_info { > > >>> + struct rte_flow *rx_def_flow; > > >>> +}; > > >>> + > > >>> +extern struct flow_info flow_info_tbl[RTE_MAX_ETHPORTS]; > > >>> + > > >>> enum { > > >>> IPSEC_SESSION_PRIMARY = 0, > > >>> IPSEC_SESSION_FALLBACK = 1, > > >>> -- > > >>> 2.7.4 > > > ^ permalink raw reply [flat|nested] 147+ messages in thread
* [dpdk-dev] [PATCH 02/14] examples/ipsec-secgw: add framework for eventmode helper 2019-12-08 12:30 [dpdk-dev] [PATCH 00/14] add eventmode to ipsec-secgw Anoob Joseph 2019-12-08 12:30 ` [dpdk-dev] [PATCH 01/14] examples/ipsec-secgw: add default rte_flow for inline Rx Anoob Joseph @ 2019-12-08 12:30 ` Anoob Joseph 2019-12-08 12:30 ` [dpdk-dev] [PATCH 03/14] examples/ipsec-secgw: add eventdev port-lcore link Anoob Joseph ` (12 subsequent siblings) 14 siblings, 0 replies; 147+ messages in thread From: Anoob Joseph @ 2019-12-08 12:30 UTC (permalink / raw) To: Akhil Goyal, Radu Nicolau, Thomas Monjalon Cc: Anoob Joseph, Jerin Jacob, Narayana Prasad, Ankur Dwivedi, Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru, Lukasz Bartosik, Konstantin Ananyev, dev Add framework for eventmode helper. Event mode would involve initialization of multiple devices, like eventdev, ethdev etc. Add routines to initialize and uninitialize event devices. Generate a default config for event devices if it is not specified in the configuration. The init routine will iterate over available event devices and their properties and will set the config accordingly. 
Signed-off-by: Anoob Joseph <anoobj@marvell.com> Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> --- examples/ipsec-secgw/Makefile | 1 + examples/ipsec-secgw/event_helper.c | 311 ++++++++++++++++++++++++++++++++++++ examples/ipsec-secgw/event_helper.h | 115 +++++++++++++ examples/ipsec-secgw/meson.build | 4 +- 4 files changed, 429 insertions(+), 2 deletions(-) create mode 100644 examples/ipsec-secgw/event_helper.c create mode 100644 examples/ipsec-secgw/event_helper.h diff --git a/examples/ipsec-secgw/Makefile b/examples/ipsec-secgw/Makefile index a4977f6..09e3c5a 100644 --- a/examples/ipsec-secgw/Makefile +++ b/examples/ipsec-secgw/Makefile @@ -15,6 +15,7 @@ SRCS-y += sa.c SRCS-y += rt.c SRCS-y += ipsec_process.c SRCS-y += ipsec-secgw.c +SRCS-y += event_helper.c CFLAGS += -gdwarf-2 diff --git a/examples/ipsec-secgw/event_helper.c b/examples/ipsec-secgw/event_helper.c new file mode 100644 index 0000000..b11e861 --- /dev/null +++ b/examples/ipsec-secgw/event_helper.c @@ -0,0 +1,311 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C) 2019 Marvell International Ltd. 
+ */ +#include <rte_ethdev.h> +#include <rte_eventdev.h> + +#include "event_helper.h" + +static int +eh_set_default_conf_eventdev(struct eventmode_conf *em_conf) +{ + struct eventdev_params *eventdev_config; + struct rte_event_dev_info dev_info; + int nb_eventdev; + int i, ret; + + /* Get the number of event devices */ + nb_eventdev = rte_event_dev_count(); + + if (nb_eventdev == 0) { + EH_LOG_ERR("No event devices detected"); + return -EINVAL; + } + + for (i = 0; i < nb_eventdev; i++) { + + /* Get the event dev conf */ + eventdev_config = &(em_conf->eventdev_config[i]); + + /* Read event device info */ + ret = rte_event_dev_info_get(i, &dev_info); + + if (ret < 0) { + EH_LOG_ERR("Failed to read event device info %d", ret); + return ret; + } + + /* Check if enough ports are available */ + if (dev_info.max_event_ports < 2) { + EH_LOG_ERR("Not enough event ports available"); + return -EINVAL; + } + + /* Save number of queues & ports available */ + eventdev_config->eventdev_id = i; + eventdev_config->nb_eventqueue = dev_info.max_event_queues; + eventdev_config->nb_eventport = dev_info.max_event_ports; + eventdev_config->ev_queue_mode = + RTE_EVENT_QUEUE_CFG_ALL_TYPES; + + /* One port is required for eth Rx adapter */ + eventdev_config->nb_eventport -= 1; + + /* One port is reserved for eth Tx adapter */ + eventdev_config->nb_eventport -= 1; + + /* Update the number of event devices */ + em_conf->nb_eventdev++; + } + + return 0; +} + +static int +eh_validate_conf(struct eventmode_conf *em_conf) +{ + int ret; + + /* + * Check if event devs are specified. 
Else probe the event devices + * and initialize the config with all ports & queues available + */ + if (em_conf->nb_eventdev == 0) { + ret = eh_set_default_conf_eventdev(em_conf); + if (ret != 0) + return ret; + } + + return 0; +} + +static int +eh_initialize_eventdev(struct eventmode_conf *em_conf) +{ + struct rte_event_queue_conf eventq_conf = {0}; + struct rte_event_dev_info evdev_default_conf; + struct rte_event_dev_config eventdev_conf; + struct eventdev_params *eventdev_config; + int nb_eventdev = em_conf->nb_eventdev; + uint8_t eventdev_id; + int nb_eventqueue; + uint8_t i, j; + int ret; + + for (i = 0; i < nb_eventdev; i++) { + + /* Get eventdev config */ + eventdev_config = &(em_conf->eventdev_config[i]); + + /* Get event dev ID */ + eventdev_id = eventdev_config->eventdev_id; + + /* Get the number of queues */ + nb_eventqueue = eventdev_config->nb_eventqueue; + + /* One queue is reserved for the final stage (doing eth tx) */ + nb_eventqueue += 1; + + /* Reset the default conf */ + memset(&evdev_default_conf, 0, + sizeof(struct rte_event_dev_info)); + + /* Get default conf of eventdev */ + ret = rte_event_dev_info_get(eventdev_id, &evdev_default_conf); + if (ret < 0) { + EH_LOG_ERR( + "Error in getting event device info[devID:%d]", + eventdev_id); + return ret; + } + + memset(&eventdev_conf, 0, sizeof(struct rte_event_dev_config)); + eventdev_conf.nb_events_limit = + evdev_default_conf.max_num_events; + eventdev_conf.nb_event_queues = nb_eventqueue; + eventdev_conf.nb_event_ports = + eventdev_config->nb_eventport; + eventdev_conf.nb_event_queue_flows = + evdev_default_conf.max_event_queue_flows; + eventdev_conf.nb_event_port_dequeue_depth = + evdev_default_conf.max_event_port_dequeue_depth; + eventdev_conf.nb_event_port_enqueue_depth = + evdev_default_conf.max_event_port_enqueue_depth; + + /* Configure event device */ + ret = rte_event_dev_configure(eventdev_id, &eventdev_conf); + if (ret < 0) { + EH_LOG_ERR("Error in configuring event device"); + return 
ret; + } + + /* Configure event queues */ + for (j = 0; j < nb_eventqueue; j++) { + + memset(&eventq_conf, 0, + sizeof(struct rte_event_queue_conf)); + + /* Read the requested conf */ + + /* Per event dev queues can be ATQ or SINGLE LINK */ + eventq_conf.event_queue_cfg = + eventdev_config->ev_queue_mode; + /* + * All queues need to be set with sched_type as + * schedule type for the application stage. One queue + * would be reserved for the final eth tx stage. This + * will be an atomic queue. + */ + if (j == nb_eventqueue-1) { + eventq_conf.schedule_type = + RTE_SCHED_TYPE_ATOMIC; + } else { + eventq_conf.schedule_type = + em_conf->ext_params.sched_type; + } + + /* Set max atomic flows to 1024 */ + eventq_conf.nb_atomic_flows = 1024; + eventq_conf.nb_atomic_order_sequences = 1024; + + /* Setup the queue */ + ret = rte_event_queue_setup(eventdev_id, j, + &eventq_conf); + if (ret < 0) { + EH_LOG_ERR("Failed to setup event queue %d", + ret); + return ret; + } + } + + /* Configure event ports */ + for (j = 0; j < eventdev_config->nb_eventport; j++) { + ret = rte_event_port_setup(eventdev_id, j, NULL); + if (ret < 0) { + EH_LOG_ERR("Failed to setup event port %d", + ret); + return ret; + } + } + } + + /* Start event devices */ + for (i = 0; i < nb_eventdev; i++) { + + /* Get eventdev config */ + eventdev_config = &(em_conf->eventdev_config[i]); + + ret = rte_event_dev_start(eventdev_config->eventdev_id); + if (ret < 0) { + EH_LOG_ERR("Failed to start event device %d, %d", + i, ret); + return ret; + } + } + return 0; +} + +int32_t +eh_devs_init(struct eh_conf *conf) +{ + struct eventmode_conf *em_conf; + uint16_t port_id; + int ret; + + if (conf == NULL) { + EH_LOG_ERR("Invalid event helper configuration"); + return -EINVAL; + } + + if (conf->mode != EH_PKT_TRANSFER_MODE_EVENT) + return 0; + + if (conf->mode_params == NULL) { + EH_LOG_ERR("Invalid event mode parameters"); + return -EINVAL; + } + + /* Get eventmode conf */ + em_conf = (struct eventmode_conf 
*)(conf->mode_params); + + /* Validate the requested config */ + ret = eh_validate_conf(em_conf); + if (ret < 0) { + EH_LOG_ERR("Failed to validate the requested config %d", ret); + return ret; + } + + /* Stop eth devices before setting up adapter */ + RTE_ETH_FOREACH_DEV(port_id) { + + /* Use only the ports enabled */ + if ((conf->eth_portmask & (1 << port_id)) == 0) + continue; + + rte_eth_dev_stop(port_id); + } + + /* Setup eventdev */ + ret = eh_initialize_eventdev(em_conf); + if (ret < 0) { + EH_LOG_ERR("Failed to initialize event dev %d", ret); + return ret; + } + + /* Start eth devices after setting up adapter */ + RTE_ETH_FOREACH_DEV(port_id) { + + /* Use only the ports enabled */ + if ((conf->eth_portmask & (1 << port_id)) == 0) + continue; + + ret = rte_eth_dev_start(port_id); + if (ret < 0) { + EH_LOG_ERR("Failed to start eth dev %d, %d", + port_id, ret); + return ret; + } + } + + return 0; +} + +int32_t +eh_devs_uninit(struct eh_conf *conf) +{ + struct eventmode_conf *em_conf; + uint16_t id; + int ret, i; + + if (conf == NULL) { + EH_LOG_ERR("Invalid event helper configuration"); + return -EINVAL; + } + + if (conf->mode != EH_PKT_TRANSFER_MODE_EVENT) + return 0; + + if (conf->mode_params == NULL) { + EH_LOG_ERR("Invalid event mode parameters"); + return -EINVAL; + } + + /* Get eventmode conf */ + em_conf = (struct eventmode_conf *)(conf->mode_params); + + /* Stop and release event devices */ + for (i = 0; i < em_conf->nb_eventdev; i++) { + + id = em_conf->eventdev_config[i].eventdev_id; + rte_event_dev_stop(id); + + ret = rte_event_dev_close(id); + if (ret < 0) { + EH_LOG_ERR("Failed to close event dev %d, %d", + id, ret); + return ret; + } + } + + return 0; +} diff --git a/examples/ipsec-secgw/event_helper.h b/examples/ipsec-secgw/event_helper.h new file mode 100644 index 0000000..5a33fed --- /dev/null +++ b/examples/ipsec-secgw/event_helper.h @@ -0,0 +1,115 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C) 2019 Marvell International Ltd. 
+ */ +#ifndef _EVENT_HELPER_H_ +#define _EVENT_HELPER_H_ + +#ifdef __cplusplus +extern "C" { +#endif + +#include <rte_log.h> + +#define RTE_LOGTYPE_EH RTE_LOGTYPE_USER4 + +#define EH_LOG_ERR(...) \ + RTE_LOG(ERR, EH, \ + RTE_FMT("%s() line %u: " RTE_FMT_HEAD(__VA_ARGS__ ,) "\n", \ + __func__, __LINE__, RTE_FMT_TAIL(__VA_ARGS__ ,))) + +/* Max event devices supported */ +#define EVENT_MODE_MAX_EVENT_DEVS RTE_EVENT_MAX_DEVS + +/** + * Packet transfer mode of the application + */ +enum eh_pkt_transfer_mode { + EH_PKT_TRANSFER_MODE_POLL = 0, + EH_PKT_TRANSFER_MODE_EVENT, +}; + +/* Event dev params */ +struct eventdev_params { + uint8_t eventdev_id; + uint8_t nb_eventqueue; + uint8_t nb_eventport; + uint8_t ev_queue_mode; +}; + +/* Eventmode conf data */ +struct eventmode_conf { + int nb_eventdev; + /**< No of event devs */ + struct eventdev_params eventdev_config[EVENT_MODE_MAX_EVENT_DEVS]; + /**< Per event dev conf */ + union { + RTE_STD_C11 + struct { + uint64_t sched_type : 2; + /**< Schedule type */ + }; + uint64_t u64; + } ext_params; + /**< 64 bit field to specify extended params */ +}; + +/** + * Event helper configuration + */ +struct eh_conf { + enum eh_pkt_transfer_mode mode; + /**< Packet transfer mode of the application */ + uint32_t eth_portmask; + /**< + * Mask of the eth ports to be used. This portmask would be + * checked while initializing devices using helper routines. + */ + void *mode_params; + /**< Mode specific parameters */ +}; + +/** + * Initialize event mode devices + * + * Application can call this function to get the event devices, eth devices + * and eth rx & tx adapters initialized according to the default config or + * config populated using the command line args. + * + * Application is expected to initialize the eth devices and then the event + * mode helper subsystem will stop & start eth devices according to its + * requirement. Call to this function should be done after the eth devices + * are successfully initialized. 
+ * + * @param conf + * Event helper configuration + * @return + * - 0 on success. + * - (<0) on failure. + */ +int32_t +eh_devs_init(struct eh_conf *conf); + +/** + * Release event mode devices + * + * Application can call this function to release event devices, + * eth rx & tx adapters according to the config. + * + * Call to this function should be done before application stops + * and closes eth devices. This function will not close and stop + * eth devices. + * + * @param conf + * Event helper configuration + * @return + * - 0 on success. + * - (<0) on failure. + */ +int32_t +eh_devs_uninit(struct eh_conf *conf); + +#ifdef __cplusplus +} +#endif + +#endif /* _EVENT_HELPER_H_ */ diff --git a/examples/ipsec-secgw/meson.build b/examples/ipsec-secgw/meson.build index 9ece345..20f4064 100644 --- a/examples/ipsec-secgw/meson.build +++ b/examples/ipsec-secgw/meson.build @@ -6,9 +6,9 @@ # To build this example as a standalone application with an already-installed # DPDK instance, use 'make' -deps += ['security', 'lpm', 'acl', 'hash', 'ip_frag', 'ipsec'] +deps += ['security', 'lpm', 'acl', 'hash', 'ip_frag', 'ipsec', 'eventdev'] allow_experimental_apis = true sources = files( 'esp.c', 'ipsec.c', 'ipsec_process.c', 'ipsec-secgw.c', - 'parser.c', 'rt.c', 'sa.c', 'sp4.c', 'sp6.c' + 'parser.c', 'rt.c', 'sa.c', 'sp4.c', 'sp6.c', 'event_helper.c' ) -- 2.7.4 ^ permalink raw reply [flat|nested] 147+ messages in thread
* [dpdk-dev] [PATCH 03/14] examples/ipsec-secgw: add eventdev port-lcore link 2019-12-08 12:30 [dpdk-dev] [PATCH 00/14] add eventmode to ipsec-secgw Anoob Joseph 2019-12-08 12:30 ` [dpdk-dev] [PATCH 01/14] examples/ipsec-secgw: add default rte_flow for inline Rx Anoob Joseph 2019-12-08 12:30 ` [dpdk-dev] [PATCH 02/14] examples/ipsec-secgw: add framework for eventmode helper Anoob Joseph @ 2019-12-08 12:30 ` Anoob Joseph 2019-12-08 12:30 ` [dpdk-dev] [PATCH 04/14] examples/ipsec-secgw: add Rx adapter support Anoob Joseph ` (11 subsequent siblings) 14 siblings, 0 replies; 147+ messages in thread From: Anoob Joseph @ 2019-12-08 12:30 UTC (permalink / raw) To: Akhil Goyal, Radu Nicolau, Thomas Monjalon Cc: Anoob Joseph, Jerin Jacob, Narayana Prasad, Ankur Dwivedi, Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru, Lukasz Bartosik, Konstantin Ananyev, dev Add event device port-lcore link and specify which event queues should be connected to the event port. Generate a default config for event port-lcore links if it is not specified in the configuration. This routine will check the number of available ports and then create links according to the number of cores available. This patch also adds a new entry in the eventmode conf to denote that all queues are to be linked with every port. This enables one core to receive packets from all ethernet ports. Signed-off-by: Anoob Joseph <anoobj@marvell.com> Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> --- examples/ipsec-secgw/event_helper.c | 131 ++++++++++++++++++++++++++++++++++++ examples/ipsec-secgw/event_helper.h | 33 +++++++++ 2 files changed, 164 insertions(+) diff --git a/examples/ipsec-secgw/event_helper.c b/examples/ipsec-secgw/event_helper.c index b11e861..d0157f4 100644 --- a/examples/ipsec-secgw/event_helper.c +++ b/examples/ipsec-secgw/event_helper.c @@ -1,11 +1,35 @@ /* SPDX-License-Identifier: BSD-3-Clause * Copyright (C) 2019 Marvell International Ltd. 
*/ +#include <rte_bitmap.h> #include <rte_ethdev.h> #include <rte_eventdev.h> +#include <rte_malloc.h> #include "event_helper.h" +static inline unsigned int +eh_get_next_active_core(struct eventmode_conf *em_conf, unsigned int prev_core) +{ + unsigned int next_core; + +get_next_core: + /* Get the next core */ + next_core = rte_get_next_lcore(prev_core, 0, 0); + + /* Check if we have reached max lcores */ + if (next_core == RTE_MAX_LCORE) + return next_core; + + /* Skip cores reserved as eth cores */ + if (rte_bitmap_get(em_conf->eth_core_mask, next_core)) { + prev_core = next_core; + goto get_next_core; + } + + return next_core; +} + static int eh_set_default_conf_eventdev(struct eventmode_conf *em_conf) { @@ -62,6 +86,74 @@ eh_set_default_conf_eventdev(struct eventmode_conf *em_conf) } static int +eh_set_default_conf_link(struct eventmode_conf *em_conf) +{ + struct eventdev_params *eventdev_config; + struct eh_event_link_info *link; + unsigned int lcore_id = -1; + int link_index; + int i, j; + + /* + * Create a 1:1 mapping from event ports to cores. If the number + * of event ports is lesser than the cores, some cores won't + * execute worker. If there are more event ports, then some ports + * won't be used. + * + */ + + /* + * The event queue-port mapping is done according to the link. Since + * we are falling back to the default link config, enabling + * "all_ev_queue_to_ev_port" mode flag. This will map all queues + * to the port. 
+ */ + em_conf->ext_params.all_ev_queue_to_ev_port = 1; + + for (i = 0; i < em_conf->nb_eventdev; i++) { + + /* Get event dev conf */ + eventdev_config = &(em_conf->eventdev_config[i]); + + /* Loop through the ports */ + for (j = 0; j < eventdev_config->nb_eventport; j++) { + + /* Get next active core id */ + lcore_id = eh_get_next_active_core(em_conf, + lcore_id); + + if (lcore_id == RTE_MAX_LCORE) { + /* Reached max cores */ + return 0; + } + + /* Save the current combination as one link */ + + /* Get the index */ + link_index = em_conf->nb_link; + + /* Get the corresponding link */ + link = &(em_conf->link[link_index]); + + /* Save link */ + link->eventdev_id = eventdev_config->eventdev_id; + link->event_port_id = j; + link->lcore_id = lcore_id; + + /* + * Don't set eventq_id as by default all queues + * need to be mapped to the port, which is controlled + * by the operating mode. + */ + + /* Update number of links */ + em_conf->nb_link++; + } + } + return 0; +} + +static int eh_validate_conf(struct eventmode_conf *em_conf) { int ret; @@ -76,6 +168,16 @@ eh_validate_conf(struct eventmode_conf *em_conf) return ret; } + /* + * Check if links are specified. Else generate a default config for + * the event ports used. 
+ */ + if (em_conf->nb_link == 0) { + ret = eh_set_default_conf_link(em_conf); + if (ret != 0) + return ret; + } + return 0; } @@ -87,6 +189,8 @@ eh_initialize_eventdev(struct eventmode_conf *em_conf) struct rte_event_dev_config eventdev_conf; struct eventdev_params *eventdev_config; int nb_eventdev = em_conf->nb_eventdev; + struct eh_event_link_info *link; + uint8_t *queue = NULL; uint8_t eventdev_id; int nb_eventqueue; uint8_t i, j; @@ -189,6 +293,33 @@ eh_initialize_eventdev(struct eventmode_conf *em_conf) } } + /* Make event queue - event port link */ + for (j = 0; j < em_conf->nb_link; j++) { + + /* Get link info */ + link = &(em_conf->link[j]); + + /* Get event dev ID */ + eventdev_id = link->eventdev_id; + + /* + * If "all_ev_queue_to_ev_port" params flag is selected, all + * queues need to be mapped to the port. + */ + if (em_conf->ext_params.all_ev_queue_to_ev_port) + queue = NULL; + else + queue = &(link->eventq_id); + + /* Link queue to port */ + ret = rte_event_port_link(eventdev_id, link->event_port_id, + queue, NULL, 1); + if (ret < 0) { + EH_LOG_ERR("Failed to link event port %d", ret); + return ret; + } + } + /* Start event devices */ for (i = 0; i < nb_eventdev; i++) { diff --git a/examples/ipsec-secgw/event_helper.h b/examples/ipsec-secgw/event_helper.h index 5a33fed..2d217e2 100644 --- a/examples/ipsec-secgw/event_helper.h +++ b/examples/ipsec-secgw/event_helper.h @@ -20,6 +20,13 @@ extern "C" { /* Max event devices supported */ #define EVENT_MODE_MAX_EVENT_DEVS RTE_EVENT_MAX_DEVS +/* Max event queues supported per event device */ +#define EVENT_MODE_MAX_EVENT_QUEUES_PER_DEV RTE_EVENT_MAX_QUEUES_PER_DEV + +/* Max event-lcore links */ +#define EVENT_MODE_MAX_LCORE_LINKS \ + (EVENT_MODE_MAX_EVENT_DEVS * EVENT_MODE_MAX_EVENT_QUEUES_PER_DEV) + /** * Packet transfer mode of the application */ @@ -36,17 +43,43 @@ struct eventdev_params { uint8_t ev_queue_mode; }; +/** + * Event-lcore link configuration + */ +struct eh_event_link_info { + uint8_t 
eventdev_id; + /**< Event device ID */ + uint8_t event_port_id; + /**< Event port ID */ + uint8_t eventq_id; + /**< Event queue to be linked to the port */ + uint8_t lcore_id; + /**< Lcore to be polling on this port */ +}; + /* Eventmode conf data */ struct eventmode_conf { int nb_eventdev; /**< No of event devs */ struct eventdev_params eventdev_config[EVENT_MODE_MAX_EVENT_DEVS]; /**< Per event dev conf */ + uint8_t nb_link; + /**< No of links */ + struct eh_event_link_info + link[EVENT_MODE_MAX_LCORE_LINKS]; + /**< Per link conf */ + struct rte_bitmap *eth_core_mask; + /**< Core mask of cores to be used for software Rx and Tx */ union { RTE_STD_C11 struct { uint64_t sched_type : 2; /**< Schedule type */ + uint64_t all_ev_queue_to_ev_port : 1; + /**< + * When enabled, all event queues need to be mapped to + * each event port + */ }; uint64_t u64; } ext_params; -- 2.7.4 ^ permalink raw reply [flat|nested] 147+ messages in thread
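[Editorial note] The default link generation in this patch can be sketched, independent of DPDK, as a round-robin hand-out of event ports to worker lcores that skips the lcores reserved as eth cores. All names below (the arrays, `next_active_core()`, `make_default_links()`) are hypothetical stand-ins for the patch's structures and for `rte_get_next_lcore()`/`rte_bitmap`, not the actual code:

```c
#include <assert.h>

/*
 * Standalone sketch (assumed simplification, not the patch's code) of
 * eh_set_default_conf_link(): one link per event port, assigned
 * round-robin to active worker lcores, skipping eth cores.
 */
#define MAX_LINKS 8

struct link_info {
	int event_port_id;
	int lcore_id;
};

/* Return the index of the next lcore not reserved as an eth core,
 * or -1 when the list is exhausted (the RTE_MAX_LCORE case). */
static int
next_active_core(const unsigned char *is_eth_core, int nb_lcores, int idx)
{
	for (; idx < nb_lcores; idx++)
		if (!is_eth_core[idx])
			return idx;
	return -1;
}

/* Create one link per event port; with fewer worker cores than ports,
 * the surplus ports are simply left unused, as described in the patch. */
static int
make_default_links(const int *lcores, const unsigned char *is_eth_core,
		   int nb_lcores, int nb_event_ports, struct link_info *links)
{
	int nb_link = 0, idx = -1, port;

	for (port = 0; port < nb_event_ports; port++) {
		idx = next_active_core(is_eth_core, nb_lcores, idx + 1);
		if (idx < 0)
			break;
		links[nb_link].event_port_id = port;
		links[nb_link].lcore_id = lcores[idx];
		nb_link++;
	}
	return nb_link;
}
```

Note that, as in the patch, no event queue id is recorded per link: the `all_ev_queue_to_ev_port` flag later makes `rte_event_port_link()` map every queue to each port.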
* [dpdk-dev] [PATCH 04/14] examples/ipsec-secgw: add Rx adapter support 2019-12-08 12:30 [dpdk-dev] [PATCH 00/14] add eventmode to ipsec-secgw Anoob Joseph ` (2 preceding siblings ...) 2019-12-08 12:30 ` [dpdk-dev] [PATCH 03/14] examples/ipsec-secgw: add eventdev port-lcore link Anoob Joseph @ 2019-12-08 12:30 ` Anoob Joseph 2019-12-11 11:33 ` Akhil Goyal 2019-12-23 18:48 ` Ananyev, Konstantin 2019-12-08 12:30 ` [dpdk-dev] [PATCH 05/14] examples/ipsec-secgw: add Tx " Anoob Joseph ` (10 subsequent siblings) 14 siblings, 2 replies; 147+ messages in thread From: Anoob Joseph @ 2019-12-08 12:30 UTC (permalink / raw) To: Akhil Goyal, Radu Nicolau, Thomas Monjalon Cc: Anoob Joseph, Jerin Jacob, Narayana Prasad, Ankur Dwivedi, Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru, Lukasz Bartosik, Konstantin Ananyev, dev Add Rx adapter support. The event helper init routine will initialize the Rx adapter according to the configuration. If Rx adapter config is not present it will generate a default config. It will check the available eth ports and event queues and map them 1:1. So one eth port will be connected to one event queue. This way event queue ID could be used to figure out the port on which a packet came in. 
Signed-off-by: Anoob Joseph <anoobj@marvell.com> Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> --- examples/ipsec-secgw/event_helper.c | 289 +++++++++++++++++++++++++++++++++++- examples/ipsec-secgw/event_helper.h | 29 ++++ 2 files changed, 317 insertions(+), 1 deletion(-) diff --git a/examples/ipsec-secgw/event_helper.c b/examples/ipsec-secgw/event_helper.c index d0157f4..f0eca01 100644 --- a/examples/ipsec-secgw/event_helper.c +++ b/examples/ipsec-secgw/event_helper.c @@ -4,10 +4,60 @@ #include <rte_bitmap.h> #include <rte_ethdev.h> #include <rte_eventdev.h> +#include <rte_event_eth_rx_adapter.h> #include <rte_malloc.h> #include "event_helper.h" +static int +eh_get_enabled_cores(struct rte_bitmap *eth_core_mask) +{ + int i; + int count = 0; + + RTE_LCORE_FOREACH(i) { + /* Check if this core is enabled in core mask*/ + if (rte_bitmap_get(eth_core_mask, i)) { + /* We have found enabled core */ + count++; + } + } + return count; +} + +static inline unsigned int +eh_get_next_eth_core(struct eventmode_conf *em_conf) +{ + static unsigned int prev_core = -1; + unsigned int next_core; + + /* + * Make sure we have at least one eth core running, else the following + * logic would lead to an infinite loop. + */ + if (eh_get_enabled_cores(em_conf->eth_core_mask) == 0) { + EH_LOG_ERR("No enabled eth core found"); + return RTE_MAX_LCORE; + } + +get_next_core: + /* Get the next core */ + next_core = rte_get_next_lcore(prev_core, 0, 1); + + /* Check if we have reached max lcores */ + if (next_core == RTE_MAX_LCORE) + return next_core; + + /* Update prev_core */ + prev_core = next_core; + + /* Only some cores are marked as eth cores. 
Skip others */ + if (!(rte_bitmap_get(em_conf->eth_core_mask, next_core))) + goto get_next_core; + + return next_core; +} + static inline unsigned int eh_get_next_active_core(struct eventmode_conf *em_conf, unsigned int prev_core) { @@ -154,6 +204,87 @@ eh_set_default_conf_link(struct eventmode_conf *em_conf) } static int +eh_set_default_conf_rx_adapter(struct eventmode_conf *em_conf) +{ + struct rx_adapter_connection_info *conn; + struct eventdev_params *eventdev_config; + struct rx_adapter_conf *adapter; + int eventdev_id; + int nb_eth_dev; + int adapter_id; + int conn_id; + int i; + + /* Create one adapter with all eth queues mapped to event queues 1:1 */ + + if (em_conf->nb_eventdev == 0) { + EH_LOG_ERR("No event devs registered"); + return -EINVAL; + } + + /* Get the number of eth devs */ + nb_eth_dev = rte_eth_dev_count_avail(); + + /* Use the first event dev */ + eventdev_config = &(em_conf->eventdev_config[0]); + + /* Get eventdev ID */ + eventdev_id = eventdev_config->eventdev_id; + adapter_id = 0; + + /* Get adapter conf */ + adapter = &(em_conf->rx_adapter[adapter_id]); + + /* Set adapter conf */ + adapter->eventdev_id = eventdev_id; + adapter->adapter_id = adapter_id; + adapter->rx_core_id = eh_get_next_eth_core(em_conf); + + /* + * Map all queues of one eth device (port) to one event + * queue. Each port will have an individual connection. 
+ * + */ + + /* Make sure there is enough event queues for 1:1 mapping */ + if (nb_eth_dev > eventdev_config->nb_eventqueue) { + EH_LOG_ERR("Not enough event queues for 1:1 mapping " + "[eth devs: %d, event queues: %d]\n", + nb_eth_dev, eventdev_config->nb_eventqueue); + return -EINVAL; + } + + for (i = 0; i < nb_eth_dev; i++) { + + /* Use only the ports enabled */ + if ((em_conf->eth_portmask & (1 << i)) == 0) + continue; + + /* Get the connection id */ + conn_id = adapter->nb_connections; + + /* Get the connection */ + conn = &(adapter->conn[conn_id]); + + /* Set 1:1 mapping between eth ports & event queues*/ + conn->ethdev_id = i; + conn->eventq_id = i; + + /* Add all eth queues of one eth port to one event queue */ + conn->ethdev_rx_qid = -1; + + /* Update no of connections */ + adapter->nb_connections++; + + } + + /* We have setup one adapter */ + em_conf->nb_rx_adapter = 1; + + return 0; +} + +static int eh_validate_conf(struct eventmode_conf *em_conf) { int ret; @@ -178,6 +309,16 @@ eh_validate_conf(struct eventmode_conf *em_conf) return ret; } + /* + * Check if rx adapters are specified. Else generate a default config + * with one rx adapter and all eth queues - event queue mapped. 
+ */ + if (em_conf->nb_rx_adapter == 0) { + ret = eh_set_default_conf_rx_adapter(em_conf); + if (ret != 0) + return ret; + } + return 0; } @@ -336,6 +477,113 @@ eh_initialize_eventdev(struct eventmode_conf *em_conf) return 0; } +static int +eh_rx_adapter_configure(struct eventmode_conf *em_conf, + struct rx_adapter_conf *adapter) +{ + struct rte_event_eth_rx_adapter_queue_conf queue_conf = {0}; + struct rte_event_dev_info evdev_default_conf = {0}; + struct rte_event_port_conf port_conf = {0}; + struct rx_adapter_connection_info *conn; + uint8_t eventdev_id; + uint32_t service_id; + int ret; + int j; + + /* Get event dev ID */ + eventdev_id = adapter->eventdev_id; + + /* Get default configuration of event dev */ + ret = rte_event_dev_info_get(eventdev_id, &evdev_default_conf); + if (ret < 0) { + EH_LOG_ERR("Failed to get event dev info %d", ret); + return ret; + } + + /* Setup port conf */ + port_conf.new_event_threshold = 1200; + port_conf.dequeue_depth = + evdev_default_conf.max_event_port_dequeue_depth; + port_conf.enqueue_depth = + evdev_default_conf.max_event_port_enqueue_depth; + + /* Create Rx adapter */ + ret = rte_event_eth_rx_adapter_create(adapter->adapter_id, + adapter->eventdev_id, + &port_conf); + if (ret < 0) { + EH_LOG_ERR("Failed to create rx adapter %d", ret); + return ret; + } + + /* Setup various connections in the adapter */ + + queue_conf.rx_queue_flags = + RTE_EVENT_ETH_RX_ADAPTER_QUEUE_FLOW_ID_VALID; + + for (j = 0; j < adapter->nb_connections; j++) { + /* Get connection */ + conn = &(adapter->conn[j]); + + /* Setup queue conf */ + queue_conf.ev.queue_id = conn->eventq_id; + queue_conf.ev.sched_type = em_conf->ext_params.sched_type; + + /* Set flow ID as ethdev ID */ + queue_conf.ev.flow_id = conn->ethdev_id; + + /* Add queue to the adapter */ + ret = rte_event_eth_rx_adapter_queue_add( + adapter->adapter_id, + conn->ethdev_id, + conn->ethdev_rx_qid, + &queue_conf); + if (ret < 0) { + EH_LOG_ERR("Failed to add eth queue to rx adapter %d", + 
ret); + return ret; + } + } + + /* Get the service ID used by rx adapter */ + ret = rte_event_eth_rx_adapter_service_id_get(adapter->adapter_id, + &service_id); + if (ret != -ESRCH && ret < 0) { + EH_LOG_ERR("Failed to get service id used by rx adapter %d", + ret); + return ret; + } + + rte_service_set_runstate_mapped_check(service_id, 0); + + /* Start adapter */ + ret = rte_event_eth_rx_adapter_start(adapter->adapter_id); + if (ret < 0) { + EH_LOG_ERR("Failed to start rx adapter %d", ret); + return ret; + } + + return 0; +} + +static int +eh_initialize_rx_adapter(struct eventmode_conf *em_conf) +{ + struct rx_adapter_conf *adapter; + int i, ret; + + /* Configure rx adapters */ + for (i = 0; i < em_conf->nb_rx_adapter; i++) { + adapter = &(em_conf->rx_adapter[i]); + ret = eh_rx_adapter_configure(em_conf, adapter); + if (ret < 0) { + EH_LOG_ERR("Failed to configure rx adapter %d", ret); + return ret; + } + } + return 0; +} + int32_t eh_devs_init(struct eh_conf *conf) { @@ -359,6 +607,9 @@ eh_devs_init(struct eh_conf *conf) /* Get eventmode conf */ em_conf = (struct eventmode_conf *)(conf->mode_params); + /* Eventmode conf would need eth portmask */ + em_conf->eth_portmask = conf->eth_portmask; + /* Validate the requested config */ ret = eh_validate_conf(em_conf); if (ret < 0) { @@ -383,6 +634,13 @@ eh_devs_init(struct eh_conf *conf) return ret; } + /* Setup Rx adapter */ + ret = eh_initialize_rx_adapter(em_conf); + if (ret < 0) { + EH_LOG_ERR("Failed to initialize rx adapter %d", ret); + return ret; + } + /* Start eth devices after setting up adapter */ RTE_ETH_FOREACH_DEV(port_id) { @@ -405,8 +663,8 @@ int32_t eh_devs_uninit(struct eh_conf *conf) { struct eventmode_conf *em_conf; + int ret, i, j; uint16_t id; - int ret, i; if (conf == NULL) { EH_LOG_ERR("Invalid event helper configuration"); @@ -424,6 +682,35 @@ eh_devs_uninit(struct eh_conf *conf) /* Get eventmode conf */ em_conf = (struct eventmode_conf *)(conf->mode_params); + /* Stop and release rx adapters */ 
+ for (i = 0; i < em_conf->nb_rx_adapter; i++) { + + id = em_conf->rx_adapter[i].adapter_id; + ret = rte_event_eth_rx_adapter_stop(id); + if (ret < 0) { + EH_LOG_ERR("Failed to stop rx adapter %d", ret); + return ret; + } + + for (j = 0; j < em_conf->rx_adapter[i].nb_connections; j++) { + + ret = rte_event_eth_rx_adapter_queue_del(id, + em_conf->rx_adapter[i].conn[j].ethdev_id, -1); + if (ret < 0) { + EH_LOG_ERR( + "Failed to remove rx adapter queues %d", + ret); + return ret; + } + } + + ret = rte_event_eth_rx_adapter_free(id); + if (ret < 0) { + EH_LOG_ERR("Failed to free rx adapter %d", ret); + return ret; + } + } + /* Stop and release event devices */ for (i = 0; i < em_conf->nb_eventdev; i++) { diff --git a/examples/ipsec-secgw/event_helper.h b/examples/ipsec-secgw/event_helper.h index 2d217e2..0f89c31 100644 --- a/examples/ipsec-secgw/event_helper.h +++ b/examples/ipsec-secgw/event_helper.h @@ -20,6 +20,12 @@ extern "C" { /* Max event devices supported */ #define EVENT_MODE_MAX_EVENT_DEVS RTE_EVENT_MAX_DEVS +/* Max Rx adapters supported */ +#define EVENT_MODE_MAX_RX_ADAPTERS RTE_EVENT_MAX_DEVS + +/* Max Rx adapter connections */ +#define EVENT_MODE_MAX_CONNECTIONS_PER_ADAPTER 16 + /* Max event queues supported per event device */ #define EVENT_MODE_MAX_EVENT_QUEUES_PER_DEV RTE_EVENT_MAX_QUEUES_PER_DEV @@ -57,12 +63,33 @@ struct eh_event_link_info { /**< Lcore to be polling on this port */ }; +/* Rx adapter connection info */ +struct rx_adapter_connection_info { + uint8_t ethdev_id; + uint8_t eventq_id; + int32_t ethdev_rx_qid; +}; + +/* Rx adapter conf */ +struct rx_adapter_conf { + int32_t eventdev_id; + int32_t adapter_id; + uint32_t rx_core_id; + uint8_t nb_connections; + struct rx_adapter_connection_info + conn[EVENT_MODE_MAX_CONNECTIONS_PER_ADAPTER]; +}; + /* Eventmode conf data */ struct eventmode_conf { int nb_eventdev; /**< No of event devs */ struct eventdev_params eventdev_config[EVENT_MODE_MAX_EVENT_DEVS]; /**< Per event dev conf */ + uint8_t 
nb_rx_adapter; + /**< No of Rx adapters */ + struct rx_adapter_conf rx_adapter[EVENT_MODE_MAX_RX_ADAPTERS]; + /**< Rx adapter conf */ uint8_t nb_link; /**< No of links */ struct eh_event_link_info @@ -70,6 +97,8 @@ struct eventmode_conf { /**< Per link conf */ struct rte_bitmap *eth_core_mask; /**< Core mask of cores to be used for software Rx and Tx */ + uint32_t eth_portmask; + /**< Mask of the eth ports to be used */ union { RTE_STD_C11 struct { -- 2.7.4 ^ permalink raw reply [flat|nested] 147+ messages in thread
* Re: [dpdk-dev] [PATCH 04/14] examples/ipsec-secgw: add Rx adapter support 2019-12-08 12:30 ` [dpdk-dev] [PATCH 04/14] examples/ipsec-secgw: add Rx adapter support Anoob Joseph @ 2019-12-11 11:33 ` Akhil Goyal 2019-12-12 5:18 ` Anoob Joseph 2019-12-23 18:48 ` Ananyev, Konstantin 1 sibling, 1 reply; 147+ messages in thread From: Akhil Goyal @ 2019-12-11 11:33 UTC (permalink / raw) To: Anoob Joseph, Radu Nicolau, Thomas Monjalon Cc: Jerin Jacob, Narayana Prasad, Ankur Dwivedi, Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru, Lukasz Bartosik, Konstantin Ananyev, dev Hi Anoob, I have just started looking into this patchset. Will be doing a detailed review soon. But an initial comment. Why do you need 1-1 mapping between event queue and ethdev queue. All eth and crypto queues will be attached to eventdev. And there may be single event queue Or multiple but not necessarily equal to eth queues. > + > + /* Make sure there is enough event queues for 1:1 mapping */ > + if (nb_eth_dev > eventdev_config->nb_eventqueue) { > + EH_LOG_ERR("Not enough event queues for 1:1 mapping " > + "[eth devs: %d, event queues: %d]\n", > + nb_eth_dev, eventdev_config->nb_eventqueue); > + return -EINVAL; > + } > + ^ permalink raw reply [flat|nested] 147+ messages in thread
* Re: [dpdk-dev] [PATCH 04/14] examples/ipsec-secgw: add Rx adapter support 2019-12-11 11:33 ` Akhil Goyal @ 2019-12-12 5:18 ` Anoob Joseph 0 siblings, 0 replies; 147+ messages in thread From: Anoob Joseph @ 2019-12-12 5:18 UTC (permalink / raw) To: Akhil Goyal, Radu Nicolau, Thomas Monjalon Cc: Jerin Jacob Kollanukkaran, Narayana Prasad Raju Athreya, Ankur Dwivedi, Archana Muniganti, Tejasree Kondoj, Vamsi Krishna Attunuru, Lukas Bartosik, Konstantin Ananyev, dev Hi Akhil, Please see inline. Thanks, Anoob > -----Original Message----- > From: Akhil Goyal <akhil.goyal@nxp.com> > Sent: Wednesday, December 11, 2019 5:04 PM > To: Anoob Joseph <anoobj@marvell.com>; Radu Nicolau > <radu.nicolau@intel.com>; Thomas Monjalon <thomas@monjalon.net> > Cc: Jerin Jacob Kollanukkaran <jerinj@marvell.com>; Narayana Prasad Raju > Athreya <pathreya@marvell.com>; Ankur Dwivedi <adwivedi@marvell.com>; > Archana Muniganti <marchana@marvell.com>; Tejasree Kondoj > <ktejasree@marvell.com>; Vamsi Krishna Attunuru <vattunuru@marvell.com>; > Lukas Bartosik <lbartosik@marvell.com>; Konstantin Ananyev > <konstantin.ananyev@intel.com>; dev@dpdk.org > Subject: [EXT] RE: [PATCH 04/14] examples/ipsec-secgw: add Rx adapter > support > > External Email > > ---------------------------------------------------------------------- > Hi Anoob, > > I have just started looking into this patchset. Will be doing a detailed review > soon. > But an initial comment. > Why do you need 1-1 mapping between event queue and ethdev queue. > > All eth and crypto queues will be attached to eventdev. And there may be single > event queue Or multiple but not necessarily equal to eth queues. [Anoob] You are right. We could have one single event queue which could handle all traffic. But the idea of more event queues is to better isolate independent traffic flows. If all traffic is forwarded to single event queue, it could lead to artificial dependency between otherwise independent flows and underutilization of resources. 
But having a single event queue is also a valid case, and we shouldn't have that case failing. So I'll have an else case for the below check and would adjust the code to work with single event queue. > > + > > + /* Make sure there is enough event queues for 1:1 mapping */ > > + if (nb_eth_dev > eventdev_config->nb_eventqueue) { > > + EH_LOG_ERR("Not enough event queues for 1:1 mapping " > > + "[eth devs: %d, event queues: %d]\n", > > + nb_eth_dev, eventdev_config->nb_eventqueue); > > + return -EINVAL; > > + } > > + ^ permalink raw reply [flat|nested] 147+ messages in thread
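[Editorial note] The default 1:1 eth-port-to-event-queue mapping debated above can be modeled without DPDK as follows; the struct and function names are hypothetical simplifications of the patch's `rx_adapter_connection_info` handling:

```c
#include <assert.h>

/*
 * Standalone model (assumed names, no DPDK dependency) of the default
 * Rx adapter connections generated in eh_set_default_conf_rx_adapter():
 * each enabled eth port maps to the event queue with the same id, so a
 * worker can recover the ingress port from the event queue id alone.
 */
struct conn_info {
	int ethdev_id;
	int eventq_id;
	int rx_qid;	/* -1 means "all Rx queues of this port" */
};

static int
default_rx_connections(unsigned int portmask, int nb_eth_dev,
		       int nb_event_queues, struct conn_info *conns)
{
	int i, n = 0;

	/* The patch rejects configs without enough queues for 1:1 */
	if (nb_eth_dev > nb_event_queues)
		return -1;

	for (i = 0; i < nb_eth_dev; i++) {
		if ((portmask & (1u << i)) == 0)
			continue;	/* port not enabled in eth portmask */
		conns[n].ethdev_id = i;
		conns[n].eventq_id = i;	/* identity: queue id == port id */
		conns[n].rx_qid = -1;
		n++;
	}
	return n;
}
```

Per the exchange above, a later revision would presumably relax the failing 1:1 check so that a single shared event queue is also accepted.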
* Re: [dpdk-dev] [PATCH 04/14] examples/ipsec-secgw: add Rx adapter support 2019-12-08 12:30 ` [dpdk-dev] [PATCH 04/14] examples/ipsec-secgw: add Rx adapter support Anoob Joseph 2019-12-11 11:33 ` Akhil Goyal @ 2019-12-23 18:48 ` Ananyev, Konstantin 2020-01-07 6:12 ` Anoob Joseph 1 sibling, 1 reply; 147+ messages in thread From: Ananyev, Konstantin @ 2019-12-23 18:48 UTC (permalink / raw) To: Anoob Joseph, Akhil Goyal, Nicolau, Radu, Thomas Monjalon Cc: Jerin Jacob, Narayana Prasad, Ankur Dwivedi, Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru, Lukasz Bartosik, dev > Add Rx adapter support. The event helper init routine will initialize > the Rx adapter according to the configuration. If Rx adapter config > is not present it will generate a default config. It will check the > available eth ports and event queues and map them 1:1. So one eth port > will be connected to one event queue. This way event queue ID could > be used to figure out the port on which a packet came in. > > Signed-off-by: Anoob Joseph <anoobj@marvell.com> > Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> > --- > examples/ipsec-secgw/event_helper.c | 289 +++++++++++++++++++++++++++++++++++- > examples/ipsec-secgw/event_helper.h | 29 ++++ > 2 files changed, 317 insertions(+), 1 deletion(-) > > diff --git a/examples/ipsec-secgw/event_helper.c b/examples/ipsec-secgw/event_helper.c > index d0157f4..f0eca01 100644 > --- a/examples/ipsec-secgw/event_helper.c > +++ b/examples/ipsec-secgw/event_helper.c > @@ -4,10 +4,60 @@ > #include <rte_bitmap.h> > #include <rte_ethdev.h> > #include <rte_eventdev.h> > +#include <rte_event_eth_rx_adapter.h> > #include <rte_malloc.h> > > #include "event_helper.h" > > +static int > +eh_get_enabled_cores(struct rte_bitmap *eth_core_mask) > +{ > + int i; > + int count = 0; > + > + RTE_LCORE_FOREACH(i) { > + /* Check if this core is enabled in core mask*/ > + if (rte_bitmap_get(eth_core_mask, i)) { > + /* We have found enabled core */ > + count++; > + } > + } > + 
return count; > +} > + > +static inline unsigned int > +eh_get_next_eth_core(struct eventmode_conf *em_conf) > +{ > + static unsigned int prev_core = -1; > + unsigned int next_core; > + > + /* > + * Make sure we have at least one eth core running, else the following > + * logic would lead to an infinite loop. > + */ > + if (eh_get_enabled_cores(em_conf->eth_core_mask) == 0) { > + EH_LOG_ERR("No enabled eth core found"); > + return RTE_MAX_LCORE; > + } > + > +get_next_core: > + /* Get the next core */ > + next_core = rte_get_next_lcore(prev_core, 0, 1); > + > + /* Check if we have reached max lcores */ > + if (next_core == RTE_MAX_LCORE) > + return next_core; > + > + /* Update prev_core */ > + prev_core = next_core; > + > + /* Only some cores are marked as eth cores. Skip others */ > + if (!(rte_bitmap_get(em_conf->eth_core_mask, next_core))) > + goto get_next_core; Are loops statements forbidden in C now? 😉 As a generic comment - too many (unnecessary) gotos in this patch series. It is not uncommon to see 2-3 labels inside the function and bunch gotos to them. Would be good to rework the code a bit to get rid of them. > + > + return next_core; > +} > + > static inline unsigned int > eh_get_next_active_core(struct eventmode_conf *em_conf, unsigned int prev_core) > { > @@ -154,6 +204,87 @@ eh_set_default_conf_link(struct eventmode_conf *em_conf) > } > ^ permalink raw reply [flat|nested] 147+ messages in thread
* Re: [dpdk-dev] [PATCH 04/14] examples/ipsec-secgw: add Rx adapter support 2019-12-23 18:48 ` Ananyev, Konstantin @ 2020-01-07 6:12 ` Anoob Joseph 2020-01-07 14:32 ` Ananyev, Konstantin 0 siblings, 1 reply; 147+ messages in thread From: Anoob Joseph @ 2020-01-07 6:12 UTC (permalink / raw) To: Ananyev, Konstantin, Akhil Goyal, Nicolau, Radu, Thomas Monjalon Cc: Jerin Jacob Kollanukkaran, Narayana Prasad Raju Athreya, Ankur Dwivedi, Archana Muniganti, Tejasree Kondoj, Vamsi Krishna Attunuru, Lukas Bartosik, dev Hi Konstantin, Please see inline. Thanks, Anoob > -----Original Message----- > From: Ananyev, Konstantin <konstantin.ananyev@intel.com> > Sent: Tuesday, December 24, 2019 12:18 AM > To: Anoob Joseph <anoobj@marvell.com>; Akhil Goyal <akhil.goyal@nxp.com>; > Nicolau, Radu <radu.nicolau@intel.com>; Thomas Monjalon > <thomas@monjalon.net> > Cc: Jerin Jacob Kollanukkaran <jerinj@marvell.com>; Narayana Prasad Raju > Athreya <pathreya@marvell.com>; Ankur Dwivedi <adwivedi@marvell.com>; > Archana Muniganti <marchana@marvell.com>; Tejasree Kondoj > <ktejasree@marvell.com>; Vamsi Krishna Attunuru <vattunuru@marvell.com>; > Lukas Bartosik <lbartosik@marvell.com>; dev@dpdk.org > Subject: [EXT] RE: [PATCH 04/14] examples/ipsec-secgw: add Rx adapter > support > > External Email > > ---------------------------------------------------------------------- > > Add Rx adapter support. The event helper init routine will initialize > > the Rx adapter according to the configuration. If Rx adapter config is > > not present it will generate a default config. It will check the > > available eth ports and event queues and map them 1:1. So one eth port > > will be connected to one event queue. This way event queue ID could be > > used to figure out the port on which a packet came in. 
> > > > Signed-off-by: Anoob Joseph <anoobj@marvell.com> > > Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> > > --- > > examples/ipsec-secgw/event_helper.c | 289 > > +++++++++++++++++++++++++++++++++++- > > examples/ipsec-secgw/event_helper.h | 29 ++++ > > 2 files changed, 317 insertions(+), 1 deletion(-) > > > > diff --git a/examples/ipsec-secgw/event_helper.c > > b/examples/ipsec-secgw/event_helper.c > > index d0157f4..f0eca01 100644 > > --- a/examples/ipsec-secgw/event_helper.c > > +++ b/examples/ipsec-secgw/event_helper.c > > @@ -4,10 +4,60 @@ > > #include <rte_bitmap.h> > > #include <rte_ethdev.h> > > #include <rte_eventdev.h> > > +#include <rte_event_eth_rx_adapter.h> > > #include <rte_malloc.h> > > > > #include "event_helper.h" > > > > +static int > > +eh_get_enabled_cores(struct rte_bitmap *eth_core_mask) { > > + int i; > > + int count = 0; > > + > > + RTE_LCORE_FOREACH(i) { > > + /* Check if this core is enabled in core mask*/ > > + if (rte_bitmap_get(eth_core_mask, i)) { > > + /* We have found enabled core */ > > + count++; > > + } > > + } > > + return count; > > +} > > + > > +static inline unsigned int > > +eh_get_next_eth_core(struct eventmode_conf *em_conf) { > > + static unsigned int prev_core = -1; > > + unsigned int next_core; > > + > > + /* > > + * Make sure we have at least one eth core running, else the following > > + * logic would lead to an infinite loop. > > + */ > > + if (eh_get_enabled_cores(em_conf->eth_core_mask) == 0) { > > + EH_LOG_ERR("No enabled eth core found"); > > + return RTE_MAX_LCORE; > > + } > > + > > +get_next_core: > > + /* Get the next core */ > > + next_core = rte_get_next_lcore(prev_core, 0, 1); > > + > > + /* Check if we have reached max lcores */ > > + if (next_core == RTE_MAX_LCORE) > > + return next_core; > > + > > + /* Update prev_core */ > > + prev_core = next_core; > > + > > + /* Only some cores are marked as eth cores. 
Skip others */ > > + if (!(rte_bitmap_get(em_conf->eth_core_mask, next_core))) > > + goto get_next_core; > > Are loops statements forbidden in C now? 😉 > As a generic comment - too many (unnecessary) gotos in this patch series. > It is not uncommon to see 2-3 labels inside the function and bunch gotos to > them. > Would be good to rework the code a bit to get rid of them. [Anoob] Sure. Will rework the code and see if the gotos can be minimized. In this case, it seemed more straightforward to have goto instead of the loop. Will recheck anyway. > > > + > > + return next_core; > > +} > > + > > static inline unsigned int > > eh_get_next_active_core(struct eventmode_conf *em_conf, unsigned int > > prev_core) { @@ -154,6 +204,87 @@ eh_set_default_conf_link(struct > > eventmode_conf *em_conf) } > > ^ permalink raw reply [flat|nested] 147+ messages in thread
* Re: [dpdk-dev] [PATCH 04/14] examples/ipsec-secgw: add Rx adapter support 2020-01-07 6:12 ` Anoob Joseph @ 2020-01-07 14:32 ` Ananyev, Konstantin 0 siblings, 0 replies; 147+ messages in thread From: Ananyev, Konstantin @ 2020-01-07 14:32 UTC (permalink / raw) To: Anoob Joseph, Akhil Goyal, Nicolau, Radu, Thomas Monjalon Cc: Jerin Jacob Kollanukkaran, Narayana Prasad Raju Athreya, Ankur Dwivedi, Archana Muniganti, Tejasree Kondoj, Vamsi Krishna Attunuru, Lukas Bartosik, dev > > > Add Rx adapter support. The event helper init routine will initialize > > > the Rx adapter according to the configuration. If Rx adapter config is > > > not present it will generate a default config. It will check the > > > available eth ports and event queues and map them 1:1. So one eth port > > > will be connected to one event queue. This way event queue ID could be > > > used to figure out the port on which a packet came in. > > > > > > Signed-off-by: Anoob Joseph <anoobj@marvell.com> > > > Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> > > > --- > > > examples/ipsec-secgw/event_helper.c | 289 > > > +++++++++++++++++++++++++++++++++++- > > > examples/ipsec-secgw/event_helper.h | 29 ++++ > > > 2 files changed, 317 insertions(+), 1 deletion(-) > > > > > > diff --git a/examples/ipsec-secgw/event_helper.c > > > b/examples/ipsec-secgw/event_helper.c > > > index d0157f4..f0eca01 100644 > > > --- a/examples/ipsec-secgw/event_helper.c > > > +++ b/examples/ipsec-secgw/event_helper.c > > > @@ -4,10 +4,60 @@ > > > #include <rte_bitmap.h> > > > #include <rte_ethdev.h> > > > #include <rte_eventdev.h> > > > +#include <rte_event_eth_rx_adapter.h> > > > #include <rte_malloc.h> > > > > > > #include "event_helper.h" > > > > > > +static int > > > +eh_get_enabled_cores(struct rte_bitmap *eth_core_mask) { > > > + int i; > > > + int count = 0; > > > + > > > + RTE_LCORE_FOREACH(i) { > > > + /* Check if this core is enabled in core mask*/ > > > + if (rte_bitmap_get(eth_core_mask, i)) { > > > + /* We 
have found enabled core */ > > > + count++; > > > + } > > > + } > > > + return count; > > > +} > > > + > > > +static inline unsigned int > > > +eh_get_next_eth_core(struct eventmode_conf *em_conf) { > > > + static unsigned int prev_core = -1; > > > + unsigned int next_core; > > > + > > > + /* > > > + * Make sure we have at least one eth core running, else the following > > > + * logic would lead to an infinite loop. > > > + */ > > > + if (eh_get_enabled_cores(em_conf->eth_core_mask) == 0) { > > > + EH_LOG_ERR("No enabled eth core found"); > > > + return RTE_MAX_LCORE; > > > + } > > > + > > > +get_next_core: > > > + /* Get the next core */ > > > + next_core = rte_get_next_lcore(prev_core, 0, 1); > > > + > > > + /* Check if we have reached max lcores */ > > > + if (next_core == RTE_MAX_LCORE) > > > + return next_core; > > > + > > > + /* Update prev_core */ > > > + prev_core = next_core; > > > + > > > + /* Only some cores are marked as eth cores. Skip others */ > > > + if (!(rte_bitmap_get(em_conf->eth_core_mask, next_core))) > > > + goto get_next_core; > > > > Are loops statements forbidden in C now? 😉 > > As a generic comment - too many (unnecessary) gotos in this patch series. > > It is not uncommon to see 2-3 labels inside the function and bunch gotos to > > them. > > Would be good to rework the code a bit to get rid of them. > > [Anoob] Sure. Will rework the code and see if the gotos can be minimized. In this case, it seemed more straightforward to have goto > instead of the loop. Will recheck anyway. The code above looks like a classical do {..} while (...); example, no? > > > > > > + > > > + return next_core; > > > +} > > > + > > > static inline unsigned int > > > eh_get_next_active_core(struct eventmode_conf *em_conf, unsigned int > > > prev_core) { @@ -154,6 +204,87 @@ eh_set_default_conf_link(struct > > > eventmode_conf *em_conf) } > > > ^ permalink raw reply [flat|nested] 147+ messages in thread
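[Editorial note] For illustration, the goto-based skip loop quoted above maps directly onto the do { } while () form Konstantin suggests. In this standalone sketch, `MAX_LCORE`, the mask array and `get_next_lcore()` are trivial stand-ins for `RTE_MAX_LCORE`, the `rte_bitmap` and `rte_get_next_lcore()`:

```c
#include <assert.h>

/* Stand-ins (assumptions for illustration only) for the DPDK lcore
 * iteration and the eth-core bitmap used by eh_get_next_eth_core(). */
#define MAX_LCORE 8u

static const unsigned char eth_core_mask[MAX_LCORE] = {0, 1, 1, 0, 1, 0, 0, 0};

static unsigned int
get_next_lcore(unsigned int prev)
{
	return (prev + 1 >= MAX_LCORE) ? MAX_LCORE : prev + 1;
}

/* do/while rewrite of the goto loop: return the next lcore marked as
 * an eth core, or MAX_LCORE when the lcores are exhausted. */
static unsigned int
next_eth_core(unsigned int prev)
{
	unsigned int next;

	do {
		next = get_next_lcore(prev);
		if (next == MAX_LCORE)
			return next;	/* reached max lcores */
		prev = next;
	} while (!eth_core_mask[next]);	/* skip cores not marked as eth */

	return next;
}
```

The loop body is identical to the original between the `get_next_core:` label and the `goto`; only the backward jump becomes the loop condition.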
* [dpdk-dev] [PATCH 05/14] examples/ipsec-secgw: add Tx adapter support 2019-12-08 12:30 [dpdk-dev] [PATCH 00/14] add eventmode to ipsec-secgw Anoob Joseph ` (3 preceding siblings ...) 2019-12-08 12:30 ` [dpdk-dev] [PATCH 04/14] examples/ipsec-secgw: add Rx adapter support Anoob Joseph @ 2019-12-08 12:30 ` Anoob Joseph 2019-12-08 12:30 ` [dpdk-dev] [PATCH 06/14] examples/ipsec-secgw: add routines to display config Anoob Joseph ` (9 subsequent siblings) 14 siblings, 0 replies; 147+ messages in thread From: Anoob Joseph @ 2019-12-08 12:30 UTC (permalink / raw) To: Akhil Goyal, Radu Nicolau, Thomas Monjalon Cc: Anoob Joseph, Jerin Jacob, Narayana Prasad, Ankur Dwivedi, Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru, Lukasz Bartosik, Konstantin Ananyev, dev Add Tx adapter support. The event helper init routine will initialize the Tx adapter according to the configuration. If Tx adapter config is not present it will generate a default config. Signed-off-by: Anoob Joseph <anoobj@marvell.com> Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> --- examples/ipsec-secgw/event_helper.c | 326 ++++++++++++++++++++++++++++++++++++ examples/ipsec-secgw/event_helper.h | 48 ++++++ 2 files changed, 374 insertions(+) diff --git a/examples/ipsec-secgw/event_helper.c b/examples/ipsec-secgw/event_helper.c index f0eca01..9c07cc7 100644 --- a/examples/ipsec-secgw/event_helper.c +++ b/examples/ipsec-secgw/event_helper.c @@ -5,6 +5,7 @@ #include <rte_ethdev.h> #include <rte_eventdev.h> #include <rte_event_eth_rx_adapter.h> +#include <rte_event_eth_tx_adapter.h> #include <rte_malloc.h> #include "event_helper.h" @@ -80,6 +81,22 @@ eh_get_next_active_core(struct eventmode_conf *em_conf, unsigned int prev_core) return next_core; } +static struct eventdev_params * +eh_get_eventdev_params(struct eventmode_conf *em_conf, uint8_t eventdev_id) +{ + int i; + + for (i = 0; i < em_conf->nb_eventdev; i++) { + if (em_conf->eventdev_config[i].eventdev_id == eventdev_id) + break; + } + + /* No 
match */ + if (i == em_conf->nb_eventdev) + return NULL; + + return &(em_conf->eventdev_config[i]); +} static int eh_set_default_conf_eventdev(struct eventmode_conf *em_conf) { @@ -285,6 +302,99 @@ eh_set_default_conf_rx_adapter(struct eventmode_conf *em_conf) } static int +eh_set_default_conf_tx_adapter(struct eventmode_conf *em_conf) +{ + struct tx_adapter_connection_info *conn; + struct eventdev_params *eventdev_config; + struct tx_adapter_conf *tx_adapter; + int eventdev_id; + int adapter_id; + int nb_eth_dev; + int conn_id; + int i; + + /* + * Create one Tx adapter with all eth queues mapped to event queues + * 1:1. + */ + + if (em_conf->nb_eventdev == 0) { + EH_LOG_ERR("No event devs registered"); + return -EINVAL; + } + + /* Get the number of eth devs */ + nb_eth_dev = rte_eth_dev_count_avail(); + + /* Use the first event dev */ + eventdev_config = &(em_conf->eventdev_config[0]); + + /* Get eventdev ID */ + eventdev_id = eventdev_config->eventdev_id; + adapter_id = 0; + + /* Get adapter conf */ + tx_adapter = &(em_conf->tx_adapter[adapter_id]); + + /* Set adapter conf */ + tx_adapter->eventdev_id = eventdev_id; + tx_adapter->adapter_id = adapter_id; + + /* TODO: Tx core is required only when internal port is not present */ + + tx_adapter->tx_core_id = eh_get_next_eth_core(em_conf); + + /* + * Application uses one event queue per adapter for submitting + * packets for Tx. Reserve the last queue available and decrement + * the total available event queues for this + */ + + /* Queue numbers start at 0 */ + tx_adapter->tx_ev_queue = eventdev_config->nb_eventqueue - 1; + + /* Update the number of event queues available in eventdev */ + eventdev_config->nb_eventqueue--; + + /* + * Map all Tx queues of the eth device (port) to the event device. + */ + + /* Set defaults for connections */ + + /* + * One eth device (port) is one connection. Map all Tx queues + * of the device to the Tx adapter. 
+ */ + + for (i = 0; i < nb_eth_dev; i++) { + + /* Use only the ports enabled */ + if ((em_conf->eth_portmask & (1 << i)) == 0) + continue; + + /* Get the connection id */ + conn_id = tx_adapter->nb_connections; + + /* Get the connection */ + conn = &(tx_adapter->conn[conn_id]); + + /* Add ethdev to connections */ + conn->ethdev_id = i; + + /* Add all eth tx queues to adapter */ + conn->ethdev_tx_qid = -1; + + /* Update no of connections */ + tx_adapter->nb_connections++; + } + + /* We have setup one adapter */ + em_conf->nb_tx_adapter = 1; + return 0; +} + +static int eh_validate_conf(struct eventmode_conf *em_conf) { int ret; @@ -319,6 +429,16 @@ eh_validate_conf(struct eventmode_conf *em_conf) return ret; } + /* + * Check if tx adapters are specified. Else generate a default config + * with one tx adapter. + */ + if (em_conf->nb_tx_adapter == 0) { + ret = eh_set_default_conf_tx_adapter(em_conf); + if (ret != 0) + return ret; + } + return 0; } @@ -584,6 +704,142 @@ eh_initialize_rx_adapter(struct eventmode_conf *em_conf) return 0; } +static int +eh_tx_adapter_configure(struct eventmode_conf *em_conf, + struct tx_adapter_conf *adapter) +{ + struct rte_event_dev_info evdev_default_conf = {0}; + struct rte_event_port_conf port_conf = {0}; + struct tx_adapter_connection_info *conn; + struct eventdev_params *eventdev_config; + uint8_t tx_port_id = 0; + uint8_t eventdev_id; + uint32_t service_id; + int ret, j; + + /* Get event dev ID */ + eventdev_id = adapter->eventdev_id; + + /* Get event device conf */ + eventdev_config = eh_get_eventdev_params(em_conf, eventdev_id); + + /* Create Tx adapter */ + + /* Get default configuration of event dev */ + ret = rte_event_dev_info_get(eventdev_id, &evdev_default_conf); + if (ret < 0) { + EH_LOG_ERR("Failed to get event dev info %d", ret); + return ret; + } + + /* Setup port conf */ + port_conf.new_event_threshold = + evdev_default_conf.max_num_events; + port_conf.dequeue_depth = + 
evdev_default_conf.max_event_port_dequeue_depth; + port_conf.enqueue_depth = + evdev_default_conf.max_event_port_enqueue_depth; + + /* Create Tx adapter */ + ret = rte_event_eth_tx_adapter_create(adapter->adapter_id, + adapter->eventdev_id, + &port_conf); + if (ret < 0) { + EH_LOG_ERR("Failed to create tx adapter %d", ret); + return ret; + } + + /* Setup various connections in the adapter */ + for (j = 0; j < adapter->nb_connections; j++) { + + /* Get connection */ + conn = &(adapter->conn[j]); + + /* Add queue to the adapter */ + ret = rte_event_eth_tx_adapter_queue_add( + adapter->adapter_id, + conn->ethdev_id, + conn->ethdev_tx_qid); + if (ret < 0) { + EH_LOG_ERR("Failed to add eth queue to tx adapter %d", + ret); + return ret; + } + } + + /* Get event port used by the adapter */ + ret = rte_event_eth_tx_adapter_event_port_get( + adapter->adapter_id, + &tx_port_id); + if (ret) { + EH_LOG_ERR("Failed to get tx adapter port id %d", ret); + return ret; + } + + /* + * TODO: event queue for Tx adapter is required only if the + * INTERNAL PORT is not present. + */ + + /* + * Tx event queue is reserved for Tx adapter. 
Unlink this queue + * from all other ports + * + */ + for (j = 0; j < eventdev_config->nb_eventport; j++) { + rte_event_port_unlink(eventdev_id, j, + &(adapter->tx_ev_queue), 1); + } + + ret = rte_event_port_link( + eventdev_id, + tx_port_id, + &(adapter->tx_ev_queue), + NULL, 1); + if (ret != 1) { + EH_LOG_ERR("Failed to link event queue to port"); + return ret; + } + + /* Get the service ID used by Tx adapter */ + ret = rte_event_eth_tx_adapter_service_id_get(adapter->adapter_id, + &service_id); + if (ret != -ESRCH && ret < 0) { + EH_LOG_ERR("Failed to get service id used by tx adapter %d", + ret); + return ret; + } + + rte_service_set_runstate_mapped_check(service_id, 0); + + /* Start adapter */ + ret = rte_event_eth_tx_adapter_start(adapter->adapter_id); + if (ret < 0) { + EH_LOG_ERR("Failed to start tx adapter %d", ret); + return ret; + } + + return 0; +} + +static int +eh_initialize_tx_adapter(struct eventmode_conf *em_conf) +{ + struct tx_adapter_conf *adapter; + int i, ret; + + /* Configure Tx adapters */ + for (i = 0; i < em_conf->nb_tx_adapter; i++) { + adapter = &(em_conf->tx_adapter[i]); + ret = eh_tx_adapter_configure(em_conf, adapter); + if (ret < 0) { + EH_LOG_ERR("Failed to configure tx adapter %d", ret); + return ret; + } + } + return 0; +} + int32_t eh_devs_init(struct eh_conf *conf) { @@ -641,6 +897,13 @@ eh_devs_init(struct eh_conf *conf) return ret; } + /* Setup Tx adapter */ + ret = eh_initialize_tx_adapter(em_conf); + if (ret < 0) { + EH_LOG_ERR("Failed to initialize tx adapter %d", ret); + return ret; + } + /* Start eth devices after setting up adapter */ RTE_ETH_FOREACH_DEV(port_id) { @@ -725,5 +988,68 @@ eh_devs_uninit(struct eh_conf *conf) } } + /* Stop and release tx adapters */ + for (i = 0; i < em_conf->nb_tx_adapter; i++) { + + id = em_conf->tx_adapter[i].adapter_id; + ret = rte_event_eth_tx_adapter_stop(id); + if (ret < 0) { + EH_LOG_ERR("Failed to stop tx adapter %d", ret); + return ret; + } + + for (j = 0; j < 
em_conf->tx_adapter[i].nb_connections; j++) { + + ret = rte_event_eth_tx_adapter_queue_del(id, + em_conf->tx_adapter[i].conn[j].ethdev_id, -1); + if (ret < 0) { + EH_LOG_ERR( + "Failed to remove tx adapter queues %d", + ret); + return ret; + } + } + + ret = rte_event_eth_tx_adapter_free(id); + if (ret < 0) { + EH_LOG_ERR("Failed to free tx adapter %d", ret); + return ret; + } + } + return 0; } + +uint8_t +eh_get_tx_queue(struct eh_conf *conf, uint8_t eventdev_id) +{ + struct eventdev_params *eventdev_config; + struct eventmode_conf *em_conf; + + if (conf == NULL) { + EH_LOG_ERR("Invalid event helper configuration"); + return -EINVAL; + } + + if (conf->mode_params == NULL) { + EH_LOG_ERR("Invalid event mode parameters"); + return -EINVAL; + } + + /* Get eventmode conf */ + em_conf = (struct eventmode_conf *)(conf->mode_params); + + /* Get event device conf */ + eventdev_config = eh_get_eventdev_params(em_conf, eventdev_id); + + if (eventdev_config == NULL) { + EH_LOG_ERR("Failed to read eventdev config"); + return -EINVAL; + } + + /* + * The last queue is reserved to be used as atomic queue for the + * last stage (eth packet tx stage) + */ + return eventdev_config->nb_eventqueue - 1; +} diff --git a/examples/ipsec-secgw/event_helper.h b/examples/ipsec-secgw/event_helper.h index 0f89c31..da35726 100644 --- a/examples/ipsec-secgw/event_helper.h +++ b/examples/ipsec-secgw/event_helper.h @@ -23,9 +23,15 @@ extern "C" { /* Max Rx adapters supported */ #define EVENT_MODE_MAX_RX_ADAPTERS RTE_EVENT_MAX_DEVS +/* Max Tx adapters supported */ +#define EVENT_MODE_MAX_TX_ADAPTERS RTE_EVENT_MAX_DEVS + /* Max Rx adapter connections */ #define EVENT_MODE_MAX_CONNECTIONS_PER_ADAPTER 16 +/* Max Tx adapter connections */ +#define EVENT_MODE_MAX_CONNECTIONS_PER_TX_ADAPTER 16 + /* Max event queues supported per event device */ #define EVENT_MODE_MAX_EVENT_QUEUES_PER_DEV RTE_EVENT_MAX_QUEUES_PER_DEV @@ -33,6 +39,9 @@ extern "C" { #define EVENT_MODE_MAX_LCORE_LINKS \ 
(EVENT_MODE_MAX_EVENT_DEVS * EVENT_MODE_MAX_EVENT_QUEUES_PER_DEV) +/* Max adapters that one Tx core can handle */ +#define EVENT_MODE_MAX_ADAPTERS_PER_TX_CORE EVENT_MODE_MAX_TX_ADAPTERS + /** * Packet transfer mode of the application */ @@ -80,6 +89,23 @@ struct rx_adapter_conf { conn[EVENT_MODE_MAX_CONNECTIONS_PER_ADAPTER]; }; +/* Tx adapter connection info */ +struct tx_adapter_connection_info { + uint8_t ethdev_id; + int32_t ethdev_tx_qid; +}; + +/* Tx adapter conf */ +struct tx_adapter_conf { + int32_t eventdev_id; + int32_t adapter_id; + uint32_t tx_core_id; + uint8_t nb_connections; + struct tx_adapter_connection_info + conn[EVENT_MODE_MAX_CONNECTIONS_PER_TX_ADAPTER]; + uint8_t tx_ev_queue; +}; + /* Eventmode conf data */ struct eventmode_conf { int nb_eventdev; @@ -90,6 +116,10 @@ struct eventmode_conf { /**< No of Rx adapters */ struct rx_adapter_conf rx_adapter[EVENT_MODE_MAX_RX_ADAPTERS]; /**< Rx adapter conf */ + uint8_t nb_tx_adapter; + /**< No of Tx adapters */ + struct tx_adapter_conf tx_adapter[EVENT_MODE_MAX_TX_ADAPTERS]; + /**< Tx adapter conf */ uint8_t nb_link; /**< No of links */ struct eh_event_link_info @@ -170,6 +200,24 @@ eh_devs_init(struct eh_conf *conf); int32_t eh_devs_uninit(struct eh_conf *conf); +/** + * Get eventdev tx queue + * + * If the application uses an event device which does not support an internal + * port then it needs to submit the events to a Tx queue before final + * transmission. This Tx queue will be created internally by the eventmode + * helper subsystem, and the application will need its queue ID when it runs + * the execution loop. + * + * @param conf + * Event helper configuration + * @param eventdev_id + * Event device ID + * @return + * Tx queue ID + */ +uint8_t +eh_get_tx_queue(struct eh_conf *conf, uint8_t eventdev_id); + #ifdef __cplusplus } #endif -- 2.7.4 ^ permalink raw reply [flat|nested] 147+ messages in thread
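[Editor's note] The default Tx adapter config in the patch above reserves the last event queue of the device for the Tx adapter (`tx_ev_queue = nb_eventqueue - 1`) and decrements the count of queues left for the Rx side; `eh_get_tx_queue()` later returns that same last-queue ID. A minimal model of that bookkeeping, using a hypothetical struct rather than the patch's `eventdev_params`:

```c
#include <stdint.h>

/* Hypothetical, reduced model of the event device config: only the
 * queue count matters for this illustration. */
struct evdev_model {
	uint8_t nb_eventqueue; /* queues configured on the event dev */
};

/* Mirror of the default-config logic: reserve the last queue for the
 * Tx adapter and remove it from the pool available to Rx/workers. */
static uint8_t
reserve_tx_queue(struct evdev_model *dev)
{
	uint8_t tx_queue = dev->nb_eventqueue - 1; /* last queue */

	dev->nb_eventqueue--; /* no longer available to the Rx adapter */
	return tx_queue;
}
```

With 4 queues configured, queue 3 becomes the Tx event queue and queues 0-2 remain for the Rx adapter connections, which is why `eh_get_tx_queue()` can recompute the ID as `nb_eventqueue - 1` without extra state.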
* [dpdk-dev] [PATCH 06/14] examples/ipsec-secgw: add routines to display config 2019-12-08 12:30 [dpdk-dev] [PATCH 00/14] add eventmode to ipsec-secgw Anoob Joseph ` (4 preceding siblings ...) 2019-12-08 12:30 ` [dpdk-dev] [PATCH 05/14] examples/ipsec-secgw: add Tx " Anoob Joseph @ 2019-12-08 12:30 ` Anoob Joseph 2019-12-08 12:30 ` [dpdk-dev] [PATCH 07/14] examples/ipsec-secgw: add routines to launch workers Anoob Joseph ` (8 subsequent siblings) 14 siblings, 0 replies; 147+ messages in thread From: Anoob Joseph @ 2019-12-08 12:30 UTC (permalink / raw) To: Akhil Goyal, Radu Nicolau, Thomas Monjalon Cc: Anoob Joseph, Jerin Jacob, Narayana Prasad, Ankur Dwivedi, Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru, Lukasz Bartosik, Konstantin Ananyev, dev Add routines to display the eventmode configuration. This gives an overview of the devices used. Signed-off-by: Anoob Joseph <anoobj@marvell.com> Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> --- examples/ipsec-secgw/event_helper.c | 207 ++++++++++++++++++++++++++++++++++++ examples/ipsec-secgw/event_helper.h | 14 +++ 2 files changed, 221 insertions(+) diff --git a/examples/ipsec-secgw/event_helper.c b/examples/ipsec-secgw/event_helper.c index 9c07cc7..f120e43 100644 --- a/examples/ipsec-secgw/event_helper.c +++ b/examples/ipsec-secgw/event_helper.c @@ -840,6 +840,210 @@ eh_initialize_tx_adapter(struct eventmode_conf *em_conf) return 0; } +static void +eh_display_operating_mode(struct eventmode_conf *em_conf) +{ + char sched_types[][32] = { + "RTE_SCHED_TYPE_ORDERED", + "RTE_SCHED_TYPE_ATOMIC", + "RTE_SCHED_TYPE_PARALLEL", + }; + EH_LOG_INFO("Operating mode:"); + + EH_LOG_INFO("\tScheduling type: \t%s", + sched_types[em_conf->ext_params.sched_type]); + + EH_LOG_INFO(""); +} + +static void +eh_display_event_dev_conf(struct eventmode_conf *em_conf) +{ + char queue_mode[][32] = { + "", + "ATQ (ALL TYPE QUEUE)", + "SINGLE LINK", + }; + char print_buf[256] = { 0 }; + int i; + + EH_LOG_INFO("Event Device 
Configuration:"); + + for (i = 0; i < em_conf->nb_eventdev; i++) { + sprintf(print_buf, + "\tDev ID: %-2d \tQueues: %-2d \tPorts: %-2d", + em_conf->eventdev_config[i].eventdev_id, + em_conf->eventdev_config[i].nb_eventqueue, + em_conf->eventdev_config[i].nb_eventport); + sprintf(print_buf + strlen(print_buf), + "\tQueue mode: %s", + queue_mode[em_conf->eventdev_config[i].ev_queue_mode]); + EH_LOG_INFO("%s", print_buf); + } + EH_LOG_INFO(""); +} + +static void +eh_display_rx_adapter_conf(struct eventmode_conf *em_conf) +{ + int nb_rx_adapter = em_conf->nb_rx_adapter; + struct rx_adapter_connection_info *conn; + struct rx_adapter_conf *adapter; + char print_buf[256] = { 0 }; + int i, j; + + EH_LOG_INFO("Rx adapters configured: %d", nb_rx_adapter); + + for (i = 0; i < nb_rx_adapter; i++) { + adapter = &(em_conf->rx_adapter[i]); + EH_LOG_INFO( + "\tRx adapter ID: %-2d\tConnections: %-2d\tEvent dev ID: %-2d" + "\tRx core: %-2d", + adapter->adapter_id, + adapter->nb_connections, + adapter->eventdev_id, + adapter->rx_core_id); + + for (j = 0; j < adapter->nb_connections; j++) { + conn = &(adapter->conn[j]); + + sprintf(print_buf, + "\t\tEthdev ID: %-2d", conn->ethdev_id); + + if (conn->ethdev_rx_qid == -1) + sprintf(print_buf + strlen(print_buf), + "\tEth rx queue: %-2s", "ALL"); + else + sprintf(print_buf + strlen(print_buf), + "\tEth rx queue: %-2d", + conn->ethdev_rx_qid); + + sprintf(print_buf + strlen(print_buf), + "\tEvent queue: %-2d", conn->eventq_id); + EH_LOG_INFO("%s", print_buf); + } + } + EH_LOG_INFO(""); +} + +static void +eh_display_tx_adapter_conf(struct eventmode_conf *em_conf) +{ + int nb_tx_adapter = em_conf->nb_tx_adapter; + struct tx_adapter_connection_info *conn; + struct tx_adapter_conf *adapter; + char print_buf[256] = { 0 }; + int i, j; + + EH_LOG_INFO("Tx adapters configured: %d", nb_tx_adapter); + + for (i = 0; i < nb_tx_adapter; i++) { + adapter = &(em_conf->tx_adapter[i]); + sprintf(print_buf, + "\tTx adapter ID: %-2d\tConnections: %-2d\tEvent 
dev ID: %-2d", + adapter->adapter_id, + adapter->nb_connections, + adapter->eventdev_id); + if (adapter->tx_core_id == (uint32_t)-1) + sprintf(print_buf + strlen(print_buf), + "\tTx core: %-2s", "[INTERNAL PORT]"); + else if (adapter->tx_core_id == RTE_MAX_LCORE) + sprintf(print_buf + strlen(print_buf), + "\tTx core: %-2s", "[NONE]"); + else + sprintf(print_buf + strlen(print_buf), + "\tTx core: %-2d,\tInput event queue: %-2d", + adapter->tx_core_id, adapter->tx_ev_queue); + + EH_LOG_INFO("%s", print_buf); + + for (j = 0; j < adapter->nb_connections; j++) { + conn = &(adapter->conn[j]); + + sprintf(print_buf, + "\t\tEthdev ID: %-2d", conn->ethdev_id); + + if (conn->ethdev_tx_qid == -1) + sprintf(print_buf + strlen(print_buf), + "\tEth tx queue: %-2s", "ALL"); + else + sprintf(print_buf + strlen(print_buf), + "\tEth tx queue: %-2d", + conn->ethdev_tx_qid); + EH_LOG_INFO("%s", print_buf); + } + } + EH_LOG_INFO(""); +} + +static void +eh_display_link_conf(struct eventmode_conf *em_conf) +{ + struct eh_event_link_info *link; + char print_buf[256] = { 0 }; + int i; + + EH_LOG_INFO("Links configured: %d", em_conf->nb_link); + + for (i = 0; i < em_conf->nb_link; i++) { + link = &(em_conf->link[i]); + + sprintf(print_buf, + "\tEvent dev ID: %-2d\tEvent port: %-2d", + link->eventdev_id, + link->event_port_id); + + if (em_conf->ext_params.all_ev_queue_to_ev_port) + sprintf(print_buf + strlen(print_buf), + "Event queue: %-2s\t", "ALL"); + else + sprintf(print_buf + strlen(print_buf), + "Event queue: %-2d\t", link->eventq_id); + + sprintf(print_buf + strlen(print_buf), + "Lcore: %-2d", link->lcore_id); + EH_LOG_INFO("%s", print_buf); + } + EH_LOG_INFO(""); +} + +void +eh_display_conf(struct eh_conf *conf) +{ + struct eventmode_conf *em_conf; + + if (conf == NULL) { + EH_LOG_ERR("Invalid event helper configuration"); + return; + } + + if (conf->mode != EH_PKT_TRANSFER_MODE_EVENT) + return; + + if (conf->mode_params == NULL) { + EH_LOG_ERR("Invalid event mode parameters"); + 
return; + } + + /* Get eventmode conf */ + em_conf = (struct eventmode_conf *)(conf->mode_params); + + /* Display user exposed operating modes */ + eh_display_operating_mode(em_conf); + + /* Display event device conf */ + eh_display_event_dev_conf(em_conf); + + /* Display Rx adapter conf */ + eh_display_rx_adapter_conf(em_conf); + + /* Display Tx adapter conf */ + eh_display_tx_adapter_conf(em_conf); + + /* Display event-lcore link */ + eh_display_link_conf(em_conf); +} + int32_t eh_devs_init(struct eh_conf *conf) { @@ -873,6 +1077,9 @@ eh_devs_init(struct eh_conf *conf) return ret; } + /* Display the current configuration */ + eh_display_conf(conf); + /* Stop eth devices before setting up adapter */ RTE_ETH_FOREACH_DEV(port_id) { diff --git a/examples/ipsec-secgw/event_helper.h b/examples/ipsec-secgw/event_helper.h index da35726..3e2627f 100644 --- a/examples/ipsec-secgw/event_helper.h +++ b/examples/ipsec-secgw/event_helper.h @@ -17,6 +17,11 @@ extern "C" { RTE_FMT("%s() line %u: " RTE_FMT_HEAD(__VA_ARGS__ ,) "\n", \ __func__, __LINE__, RTE_FMT_TAIL(__VA_ARGS__ ,))) +#define EH_LOG_INFO(...) \ + RTE_LOG(INFO, EH, \ + RTE_FMT("%s() line %u: " RTE_FMT_HEAD(__VA_ARGS__ ,) "\n", \ + __func__, __LINE__, RTE_FMT_TAIL(__VA_ARGS__ ,))) + /* Max event devices supported */ #define EVENT_MODE_MAX_EVENT_DEVS RTE_EVENT_MAX_DEVS @@ -218,6 +223,15 @@ eh_devs_uninit(struct eh_conf *conf); uint8_t eh_get_tx_queue(struct eh_conf *conf, uint8_t eventdev_id); +/** + * Display event mode configuration + * + * @param conf + * Event helper configuration + */ +void +eh_display_conf(struct eh_conf *conf); + #ifdef __cplusplus } #endif -- 2.7.4 ^ permalink raw reply [flat|nested] 147+ messages in thread
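[Editor's note] The display routines in this patch build each log line by appending with `sprintf(print_buf + strlen(print_buf), ...)` into a fixed 256-byte buffer. For reference, a bounds-checked variant of the same append pattern could look like the hypothetical helper below (not part of the patch):

```c
#include <stdarg.h>
#include <stdio.h>
#include <string.h>

/* Append a formatted string to 'buf' without overflowing 'cap'
 * bytes. Same usage pattern as the patch's sprintf(buf + strlen(buf),
 * ...) calls, but truncates instead of writing past the buffer.
 * Returns vsnprintf()'s result, or -1 if the buffer is already full. */
static int
buf_append(char *buf, size_t cap, const char *fmt, ...)
{
	size_t used = strlen(buf);
	va_list ap;
	int ret;

	if (used >= cap)
		return -1;

	va_start(ap, fmt);
	ret = vsnprintf(buf + used, cap - used, fmt, ap);
	va_end(ap);
	return ret;
}
```

The call sites would stay almost identical, e.g. `buf_append(print_buf, sizeof(print_buf), "\tQueue mode: %s", mode)` instead of the unbounded `sprintf` into `print_buf + strlen(print_buf)`.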
* [dpdk-dev] [PATCH 07/14] examples/ipsec-secgw: add routines to launch workers 2019-12-08 12:30 [dpdk-dev] [PATCH 00/14] add eventmode to ipsec-secgw Anoob Joseph ` (5 preceding siblings ...) 2019-12-08 12:30 ` [dpdk-dev] [PATCH 06/14] examples/ipsec-secgw: add routines to display config Anoob Joseph @ 2019-12-08 12:30 ` Anoob Joseph 2019-12-08 12:30 ` [dpdk-dev] [PATCH 08/14] examples/ipsec-secgw: add support for internal ports Anoob Joseph ` (7 subsequent siblings) 14 siblings, 0 replies; 147+ messages in thread From: Anoob Joseph @ 2019-12-08 12:30 UTC (permalink / raw) To: Akhil Goyal, Radu Nicolau, Thomas Monjalon Cc: Lukasz Bartosik, Jerin Jacob, Narayana Prasad, Ankur Dwivedi, Anoob Joseph, Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru, Konstantin Ananyev, dev From: Lukasz Bartosik <lbartosik@marvell.com> With eventmode, workers can be drafted differently according to the capabilities of the underlying event device. The added functions will receive an array of such workers and probe the eventmode properties to choose the worker. 
Signed-off-by: Anoob Joseph <anoobj@marvell.com> Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> --- examples/ipsec-secgw/event_helper.c | 340 ++++++++++++++++++++++++++++++++++++ examples/ipsec-secgw/event_helper.h | 48 +++++ 2 files changed, 388 insertions(+) diff --git a/examples/ipsec-secgw/event_helper.c b/examples/ipsec-secgw/event_helper.c index f120e43..a67132a 100644 --- a/examples/ipsec-secgw/event_helper.c +++ b/examples/ipsec-secgw/event_helper.c @@ -7,9 +7,12 @@ #include <rte_event_eth_rx_adapter.h> #include <rte_event_eth_tx_adapter.h> #include <rte_malloc.h> +#include <stdbool.h> #include "event_helper.h" +static volatile bool eth_core_running; + static int eh_get_enabled_cores(struct rte_bitmap *eth_core_mask) { @@ -97,6 +100,16 @@ eh_get_eventdev_params(struct eventmode_conf *em_conf, uint8_t eventdev_id) return &(em_conf->eventdev_config[i]); } +static inline bool +eh_dev_has_burst_mode(uint8_t dev_id) +{ + struct rte_event_dev_info dev_info; + + rte_event_dev_info_get(dev_id, &dev_info); + return (dev_info.event_dev_cap & RTE_EVENT_DEV_CAP_BURST_MODE) ? + true : false; +} + static int eh_set_default_conf_eventdev(struct eventmode_conf *em_conf) { @@ -704,6 +717,260 @@ eh_initialize_rx_adapter(struct eventmode_conf *em_conf) return 0; } +static int32_t +eh_start_worker_eth_core(struct eventmode_conf *conf, uint32_t lcore_id) +{ + uint32_t service_id[EVENT_MODE_MAX_ADAPTERS_PER_RX_CORE]; + struct rx_adapter_conf *rx_adapter; + struct tx_adapter_conf *tx_adapter; + int service_count = 0; + int adapter_id; + int32_t ret; + int i; + + EH_LOG_INFO("Entering eth_core processing on lcore %u", lcore_id); + + /* + * Parse adapter config to check which of all Rx adapters need + * to be handled by this core. 
+ */ + for (i = 0; i < conf->nb_rx_adapter; i++) { + /* Check if we have exceeded the max allowed */ + if (service_count > EVENT_MODE_MAX_ADAPTERS_PER_RX_CORE) { + EH_LOG_ERR( + "Exceeded the max allowed adapters per rx core"); + break; + } + + rx_adapter = &(conf->rx_adapter[i]); + if (rx_adapter->rx_core_id != lcore_id) + continue; + + /* Adapter is handled by this core */ + adapter_id = rx_adapter->adapter_id; + + /* Get the service ID for the adapters */ + ret = rte_event_eth_rx_adapter_service_id_get(adapter_id, + &(service_id[service_count])); + + if (ret != -ESRCH && ret < 0) { + EH_LOG_ERR( + "Failed to get service id used by rx adapter"); + return ret; + } + + /* Update service count */ + service_count++; + } + + /* + * Parse adapter config to see which of the Tx adapters need + * to be handled by this core. + */ + for (i = 0; i < conf->nb_tx_adapter; i++) { + /* Check if we have exceeded the max allowed */ + if (service_count > EVENT_MODE_MAX_ADAPTERS_PER_TX_CORE) { + EH_LOG_ERR( + "Exceeded the max allowed adapters per tx core"); + break; + } + + tx_adapter = &conf->tx_adapter[i]; + if (tx_adapter->tx_core_id != lcore_id) + continue; + + /* Adapter is handled by this core */ + adapter_id = tx_adapter->adapter_id; + + /* Get the service ID for the adapters */ + ret = rte_event_eth_tx_adapter_service_id_get(adapter_id, + &(service_id[service_count])); + + if (ret != -ESRCH && ret < 0) { + EH_LOG_ERR( + "Failed to get service id used by tx adapter"); + return ret; + } + + /* Update service count */ + service_count++; + } + + eth_core_running = true; + + while (eth_core_running) { + for (i = 0; i < service_count; i++) { + /* Initiate adapter service */ + rte_service_run_iter_on_app_lcore(service_id[i], 0); + } + } + + return 0; +} + +static int32_t +eh_stop_worker_eth_core(void) +{ + if (eth_core_running) { + EH_LOG_INFO("Stopping eth cores"); + eth_core_running = false; + } + return 0; +} + +static struct eh_app_worker_params * +eh_find_worker(uint32_t lcore_id, 
struct eh_conf *conf, + struct eh_app_worker_params *app_wrkrs, uint8_t nb_wrkr_param) +{ + struct eh_app_worker_params curr_conf = { + {{0} }, NULL}; + struct eh_event_link_info *link = NULL; + struct eh_app_worker_params *tmp_wrkr; + struct eventmode_conf *em_conf; + uint8_t eventdev_id; + int i; + + /* Get eventmode config */ + em_conf = (struct eventmode_conf *)(conf->mode_params); + + /* + * Use event device from the first lcore-event link. + * + * Assumption: All lcore-event links tied to a core are using the + * same event device. In other words, one core would be polling on + * queues of a single event device only. + */ + + /* Get a link for this lcore */ + for (i = 0; i < em_conf->nb_link; i++) { + link = &(em_conf->link[i]); + if (link->lcore_id == lcore_id) + break; + } + + if (link == NULL) { + EH_LOG_ERR( + "No valid link found for lcore %d", lcore_id); + return NULL; + } + + /* Get event dev ID */ + eventdev_id = link->eventdev_id; + + /* Populate the curr_conf with the capabilities */ + + /* Check for burst mode */ + if (eh_dev_has_burst_mode(eventdev_id)) + curr_conf.cap.burst = EH_RX_TYPE_BURST; + else + curr_conf.cap.burst = EH_RX_TYPE_NON_BURST; + + /* Parse the passed list and see if we have matching capabilities */ + + /* Initialize the pointer used to traverse the list */ + tmp_wrkr = app_wrkrs; + + for (i = 0; i < nb_wrkr_param; i++, tmp_wrkr++) { + + /* Skip this if capabilities are not matching */ + if (tmp_wrkr->cap.u64 != curr_conf.cap.u64) + continue; + + /* If the checks pass, we have a match */ + return tmp_wrkr; + } + + return NULL; +} + +static int +eh_verify_match_worker(struct eh_app_worker_params *match_wrkr) +{ + /* Verify registered worker */ + if (match_wrkr->worker_thread == NULL) { + EH_LOG_ERR("No worker registered"); + return 0; + } + + /* Success */ + return 1; +} + +static uint8_t +eh_get_event_lcore_links(uint32_t lcore_id, struct eh_conf *conf, + struct eh_event_link_info **links) +{ + struct eh_event_link_info 
*link_cache; + struct eventmode_conf *em_conf = NULL; + struct eh_event_link_info *link; + uint8_t lcore_nb_link = 0; + size_t single_link_size; + size_t cache_size; + int index = 0; + int i; + + if (conf == NULL || links == NULL) { + EH_LOG_ERR("Invalid args"); + return -EINVAL; + } + + /* Get eventmode conf */ + em_conf = (struct eventmode_conf *)(conf->mode_params); + + if (em_conf == NULL) { + EH_LOG_ERR("Invalid event mode parameters"); + return -EINVAL; + } + + /* Get the number of links registered */ + for (i = 0; i < em_conf->nb_link; i++) { + + /* Get link */ + link = &(em_conf->link[i]); + + /* Check if we have link intended for this lcore */ + if (link->lcore_id == lcore_id) { + + /* Update the number of links for this core */ + lcore_nb_link++; + + } + } + + /* Compute size of one entry to be copied */ + single_link_size = sizeof(struct eh_event_link_info); + + /* Compute size of the buffer required */ + cache_size = lcore_nb_link * + sizeof(struct eh_event_link_info); + + /* Compute size of the buffer required */ + link_cache = calloc(1, cache_size); + + /* Get the number of links registered */ + for (i = 0; i < em_conf->nb_link; i++) { + + /* Get link */ + link = &(em_conf->link[i]); + + /* Check if we have link intended for this lcore */ + if (link->lcore_id == lcore_id) { + + /* Cache the link */ + memcpy(&link_cache[index], link, single_link_size); + + /* Update index */ + index++; + } + } + + /* Update the links for application to use the cached links */ + *links = link_cache; + + /* Return the number of cached links */ + return lcore_nb_link; +} + static int eh_tx_adapter_configure(struct eventmode_conf *em_conf, struct tx_adapter_conf *adapter) @@ -1227,6 +1494,79 @@ eh_devs_uninit(struct eh_conf *conf) return 0; } +void +eh_launch_worker(struct eh_conf *conf, struct eh_app_worker_params *app_wrkr, + uint8_t nb_wrkr_param) +{ + struct eh_app_worker_params *match_wrkr; + struct eh_event_link_info *links = NULL; + struct eventmode_conf *em_conf; + 
uint32_t lcore_id; + uint8_t nb_links; + + if (conf == NULL) { + EH_LOG_ERR("Invalid event helper configuration"); + return; + } + + if (conf->mode_params == NULL) { + EH_LOG_ERR("Invalid event mode parameters"); + return; + } + + /* Get eventmode conf */ + em_conf = (struct eventmode_conf *)(conf->mode_params); + + /* Get core ID */ + lcore_id = rte_lcore_id(); + + /* Check if this is eth core */ + if (rte_bitmap_get(em_conf->eth_core_mask, lcore_id)) { + eh_start_worker_eth_core(em_conf, lcore_id); + return; + } + + if (app_wrkr == NULL || nb_wrkr_param == 0) { + EH_LOG_ERR("Invalid args"); + return; + } + + /* + * This is a regular worker thread. The application registers + * multiple workers with various capabilities. Run worker + * based on the selected capabilities of the event + * device configured. + */ + + /* Get the first matching worker for the event device */ + match_wrkr = eh_find_worker(lcore_id, conf, app_wrkr, nb_wrkr_param); + if (match_wrkr == NULL) { + EH_LOG_ERR("Failed to match worker registered for lcore %d", + lcore_id); + goto clean_and_exit; + } + + /* Verify sanity of the matched worker */ + if (eh_verify_match_worker(match_wrkr) != 1) { + EH_LOG_ERR("Failed to validate the matched worker"); + goto clean_and_exit; + } + + /* Get worker links */ + nb_links = eh_get_event_lcore_links(lcore_id, conf, &links); + + /* Launch the worker thread */ + match_wrkr->worker_thread(links, nb_links); + + /* Free links info memory */ + free(links); + +clean_and_exit: + + /* Flag eth_cores to stop, if started */ + eh_stop_worker_eth_core(); +} + uint8_t eh_get_tx_queue(struct eh_conf *conf, uint8_t eventdev_id) { diff --git a/examples/ipsec-secgw/event_helper.h b/examples/ipsec-secgw/event_helper.h index 3e2627f..aad87f7 100644 --- a/examples/ipsec-secgw/event_helper.h +++ b/examples/ipsec-secgw/event_helper.h @@ -44,6 +44,9 @@ extern "C" { #define EVENT_MODE_MAX_LCORE_LINKS \ (EVENT_MODE_MAX_EVENT_DEVS * EVENT_MODE_MAX_EVENT_QUEUES_PER_DEV) +/* Max 
adapters that one Rx core can handle */ +#define EVENT_MODE_MAX_ADAPTERS_PER_RX_CORE EVENT_MODE_MAX_RX_ADAPTERS + /* Max adapters that one Tx core can handle */ #define EVENT_MODE_MAX_ADAPTERS_PER_TX_CORE EVENT_MODE_MAX_TX_ADAPTERS @@ -55,6 +58,14 @@ enum eh_pkt_transfer_mode { EH_PKT_TRANSFER_MODE_EVENT, }; +/** + * Event mode packet rx types + */ +enum eh_rx_types { + EH_RX_TYPE_NON_BURST = 0, + EH_RX_TYPE_BURST +}; + /* Event dev params */ struct eventdev_params { uint8_t eventdev_id; @@ -165,6 +176,22 @@ struct eh_conf { /**< Mode specific parameters */ }; +/* Workers registered by the application */ +struct eh_app_worker_params { + union { + RTE_STD_C11 + struct { + uint64_t burst : 1; + /**< Specify status of rx type burst */ + }; + uint64_t u64; + } cap; + /**< Capabilities of this worker */ + void (*worker_thread)(struct eh_event_link_info *links, + uint8_t nb_links); + /**< Worker thread */ +}; + /** * Initialize event mode devices * @@ -232,6 +259,27 @@ eh_get_tx_queue(struct eh_conf *conf, uint8_t eventdev_id); void eh_display_conf(struct eh_conf *conf); + +/** + * Launch eventmode worker + * + * The application can request the eventmode helper subsystem to launch the + * worker based on the capabilities of event device and the options selected + * while initializing the eventmode. + * + * @param conf + * Event helper configuration + * @param app_wrkr + * List of all the workers registered by application, along with its + * capabilities + * @param nb_wrkr_param + * Number of workers passed by the application + * + */ +void +eh_launch_worker(struct eh_conf *conf, struct eh_app_worker_params *app_wrkr, + uint8_t nb_wrkr_param); + #ifdef __cplusplus } #endif -- 2.7.4 ^ permalink raw reply [flat|nested] 147+ messages in thread
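[Editor's note] The worker selection in `eh_find_worker()` above probes the event device, fills a capability word, and picks the first registered worker whose word matches exactly (`tmp_wrkr->cap.u64 != curr_conf.cap.u64`). A reduced model of that exact-match lookup, keeping only the burst bit and using simplified types (the patch's real struct also carries the RTE_STD_C11 marker and link info):

```c
#include <stddef.h>
#include <stdint.h>

/* Capability word: a C11 anonymous struct viewed either as named
 * bits or as one u64 for whole-word comparison, like the patch's
 * eh_app_worker_params::cap union. */
union worker_cap {
	struct {
		uint64_t burst : 1;
	};
	uint64_t u64;
};

struct app_worker {
	union worker_cap cap;
	void (*worker_thread)(void); /* simplified signature */
};

/* Return the first worker whose capability word matches the probed
 * device capabilities exactly, or NULL when none matches. */
static struct app_worker *
find_worker(struct app_worker *wrkrs, size_t n, union worker_cap want)
{
	size_t i;

	for (i = 0; i < n; i++)
		if (wrkrs[i].cap.u64 == want.u64)
			return &wrkrs[i];
	return NULL;
}
```

Comparing the whole `u64` at once is what lets the helper add more capability bits later (internal port, Tx mode, etc.) without changing the matching loop.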
* [dpdk-dev] [PATCH 08/14] examples/ipsec-secgw: add support for internal ports 2019-12-08 12:30 [dpdk-dev] [PATCH 00/14] add eventmode to ipsec-secgw Anoob Joseph ` (6 preceding siblings ...) 2019-12-08 12:30 ` [dpdk-dev] [PATCH 07/14] examples/ipsec-secgw: add routines to launch workers Anoob Joseph @ 2019-12-08 12:30 ` Anoob Joseph 2019-12-08 12:30 ` [dpdk-dev] [PATCH 09/14] examples/ipsec-secgw: add eventmode to ipsec-secgw Anoob Joseph ` (6 subsequent siblings) 14 siblings, 0 replies; 147+ messages in thread From: Anoob Joseph @ 2019-12-08 12:30 UTC (permalink / raw) To: Akhil Goyal, Radu Nicolau, Thomas Monjalon Cc: Lukasz Bartosik, Jerin Jacob, Narayana Prasad, Ankur Dwivedi, Anoob Joseph, Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru, Konstantin Ananyev, dev From: Lukasz Bartosik <lbartosik@marvell.com> Add support for Rx and Tx internal ports. When internal ports are available then a packet can be received from an eth port and forwarded to an event queue by HW without any software intervention. The same applies to the Tx side, where a packet sent to an event queue can be forwarded by HW to an eth port without any software intervention. 
Signed-off-by: Anoob Joseph <anoobj@marvell.com> Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> --- examples/ipsec-secgw/event_helper.c | 231 ++++++++++++++++++++++++++++-------- examples/ipsec-secgw/event_helper.h | 11 ++ 2 files changed, 195 insertions(+), 47 deletions(-) diff --git a/examples/ipsec-secgw/event_helper.c b/examples/ipsec-secgw/event_helper.c index a67132a..6549875 100644 --- a/examples/ipsec-secgw/event_helper.c +++ b/examples/ipsec-secgw/event_helper.c @@ -100,6 +100,39 @@ eh_get_eventdev_params(struct eventmode_conf *em_conf, uint8_t eventdev_id) return &(em_conf->eventdev_config[i]); } + +static inline bool +eh_dev_has_rx_internal_port(uint8_t eventdev_id) +{ + int j; + bool flag = true; + + RTE_ETH_FOREACH_DEV(j) { + uint32_t caps = 0; + + rte_event_eth_rx_adapter_caps_get(eventdev_id, j, &caps); + if (!(caps & RTE_EVENT_ETH_RX_ADAPTER_CAP_INTERNAL_PORT)) + flag = false; + } + return flag; +} + +static inline bool +eh_dev_has_tx_internal_port(uint8_t eventdev_id) +{ + int j; + bool flag = true; + + RTE_ETH_FOREACH_DEV(j) { + uint32_t caps = 0; + + rte_event_eth_tx_adapter_caps_get(eventdev_id, j, &caps); + if (!(caps & RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT)) + flag = false; + } + return flag; +} + static inline bool eh_dev_has_burst_mode(uint8_t dev_id) { @@ -115,7 +148,9 @@ eh_set_default_conf_eventdev(struct eventmode_conf *em_conf) { struct eventdev_params *eventdev_config; struct rte_event_dev_info dev_info; + int lcore_count; int nb_eventdev; + int nb_eth_dev; int i, ret; /* Get the number of event devices */ @@ -126,6 +161,17 @@ eh_set_default_conf_eventdev(struct eventmode_conf *em_conf) return -EINVAL; } + /* Get the number of eth devs */ + nb_eth_dev = rte_eth_dev_count_avail(); + + if (nb_eth_dev == 0) { + EH_LOG_ERR("No eth devices detected"); + return -EINVAL; + } + + /* Get the number of lcores */ + lcore_count = rte_lcore_count(); + for (i = 0; i < nb_eventdev; i++) { /* Get the event dev conf */ @@ -152,11 +198,17 
@@ eh_set_default_conf_eventdev(struct eventmode_conf *em_conf) eventdev_config->ev_queue_mode = RTE_EVENT_QUEUE_CFG_ALL_TYPES; - /* One port is required for eth Rx adapter */ - eventdev_config->nb_eventport -= 1; + /* Check if there are more queues than required */ + if (eventdev_config->nb_eventqueue > nb_eth_dev + 1) { + /* One queue is reserved for Tx */ + eventdev_config->nb_eventqueue = nb_eth_dev + 1; + } - /* One port is reserved for eth Tx adapter */ - eventdev_config->nb_eventport -= 1; + /* Check if there are more ports than required */ + if (eventdev_config->nb_eventport > lcore_count) { + /* One port per lcore is enough */ + eventdev_config->nb_eventport = lcore_count; + } /* Update the number of event devices */ em_conf->nb_eventdev++; @@ -165,6 +217,42 @@ eh_set_default_conf_eventdev(struct eventmode_conf *em_conf) return 0; } +static void +eh_do_capability_check(struct eventmode_conf *em_conf) +{ + struct eventdev_params *eventdev_config; + int all_internal_ports = 1; + uint32_t eventdev_id; + int i; + + for (i = 0; i < em_conf->nb_eventdev; i++) { + + /* Get the event dev conf */ + eventdev_config = &(em_conf->eventdev_config[i]); + eventdev_id = eventdev_config->eventdev_id; + + /* Check if event device has internal port for Rx & Tx */ + if (eh_dev_has_rx_internal_port(eventdev_id) && + eh_dev_has_tx_internal_port(eventdev_id)) { + eventdev_config->all_internal_ports = 1; + } else { + all_internal_ports = 0; + } + } + + /* + * If Rx & Tx internal ports are supported by all event devices then + * eth cores won't be required. Override the eth core mask requested + * and decrement number of event queues by one as it won't be needed + * for Tx. 
+ */ + if (all_internal_ports) { + rte_bitmap_reset(em_conf->eth_core_mask); + for (i = 0; i < em_conf->nb_eventdev; i++) + em_conf->eventdev_config[i].nb_eventqueue--; + } +} + static int eh_set_default_conf_link(struct eventmode_conf *em_conf) { @@ -239,6 +327,9 @@ eh_set_default_conf_rx_adapter(struct eventmode_conf *em_conf) struct rx_adapter_connection_info *conn; struct eventdev_params *eventdev_config; struct rx_adapter_conf *adapter; + bool rx_internal_port = true; + int nb_eventqueue; + uint32_t caps = 0; int eventdev_id; int nb_eth_dev; int adapter_id; @@ -268,7 +359,14 @@ eh_set_default_conf_rx_adapter(struct eventmode_conf *em_conf) /* Set adapter conf */ adapter->eventdev_id = eventdev_id; adapter->adapter_id = adapter_id; - adapter->rx_core_id = eh_get_next_eth_core(em_conf); + + /* + * If event device does not have internal ports for passing + * packets then reserved one queue for Tx path + */ + nb_eventqueue = eventdev_config->all_internal_ports ? + eventdev_config->nb_eventqueue : + eventdev_config->nb_eventqueue - 1; /* * Map all queues of one eth device (port) to one event @@ -277,10 +375,10 @@ eh_set_default_conf_rx_adapter(struct eventmode_conf *em_conf) */ /* Make sure there is enough event queues for 1:1 mapping */ - if (nb_eth_dev > eventdev_config->nb_eventqueue) { + if (nb_eth_dev > nb_eventqueue) { EH_LOG_ERR("Not enough event queues for 1:1 mapping " "[eth devs: %d, event queues: %d]\n", - nb_eth_dev, eventdev_config->nb_eventqueue); + nb_eth_dev, nb_eventqueue); return -EINVAL; } @@ -303,11 +401,24 @@ eh_set_default_conf_rx_adapter(struct eventmode_conf *em_conf) /* Add all eth queues of one eth port to one event queue */ conn->ethdev_rx_qid = -1; + /* Get Rx adapter capabilities */ + rte_event_eth_rx_adapter_caps_get(eventdev_id, i, &caps); + if (!(caps & RTE_EVENT_ETH_RX_ADAPTER_CAP_INTERNAL_PORT)) + rx_internal_port = false; + /* Update no of connections */ adapter->nb_connections++; } + if (rx_internal_port) { + /* Rx core is not 
required */ + adapter->rx_core_id = -1; + } else { + /* Rx core is required */ + adapter->rx_core_id = eh_get_next_eth_core(em_conf); + } + /* We have setup one adapter */ em_conf->nb_rx_adapter = 1; @@ -320,6 +431,8 @@ eh_set_default_conf_tx_adapter(struct eventmode_conf *em_conf) struct tx_adapter_connection_info *conn; struct eventdev_params *eventdev_config; struct tx_adapter_conf *tx_adapter; + bool tx_internal_port = true; + uint32_t caps = 0; int eventdev_id; int adapter_id; int nb_eth_dev; @@ -353,22 +466,6 @@ eh_set_default_conf_tx_adapter(struct eventmode_conf *em_conf) tx_adapter->eventdev_id = eventdev_id; tx_adapter->adapter_id = adapter_id; - /* TODO: Tx core is required only when internal port is not present */ - - tx_adapter->tx_core_id = eh_get_next_eth_core(em_conf); - - /* - * Application uses one event queue per adapter for submitting - * packets for Tx. Reserve the last queue available and decrement - * the total available event queues for this - */ - - /* Queue numbers start at 0 */ - tx_adapter->tx_ev_queue = eventdev_config->nb_eventqueue - 1; - - /* Update the number of event queues available in eventdev */ - eventdev_config->nb_eventqueue--; - /* * Map all Tx queues of the eth device (port) to the event device. */ @@ -398,10 +495,30 @@ eh_set_default_conf_tx_adapter(struct eventmode_conf *em_conf) /* Add all eth tx queues to adapter */ conn->ethdev_tx_qid = -1; + /* Get Tx adapter capabilities */ + rte_event_eth_tx_adapter_caps_get(eventdev_id, i, &caps); + if (!(caps & RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT)) + tx_internal_port = false; + /* Update no of connections */ tx_adapter->nb_connections++; } + if (tx_internal_port) { + /* Tx core is not required */ + tx_adapter->tx_core_id = -1; + } else { + /* Tx core is required */ + tx_adapter->tx_core_id = eh_get_next_eth_core(em_conf); + + /* + * Use one event queue per adapter for submitting packets + * for Tx. 
Reserving the last queue available + */ + /* Queue numbers start at 0 */ + tx_adapter->tx_ev_queue = eventdev_config->nb_eventqueue - 1; + } + /* We have setup one adapter */ em_conf->nb_tx_adapter = 1; return 0; @@ -422,6 +539,9 @@ eh_validate_conf(struct eventmode_conf *em_conf) return ret; } + /* Perform capability check for the selected event devices */ + eh_do_capability_check(em_conf); + /* * Check if links are specified. Else generate a default config for * the event ports used. @@ -481,9 +601,6 @@ eh_initialize_eventdev(struct eventmode_conf *em_conf) /* Get the number of queues */ nb_eventqueue = eventdev_config->nb_eventqueue; - /* One queue is reserved for the final stage (doing eth tx) */ - nb_eventqueue += 1; - /* Reset the default conf */ memset(&evdev_default_conf, 0, sizeof(struct rte_event_dev_info)); @@ -530,11 +647,13 @@ eh_initialize_eventdev(struct eventmode_conf *em_conf) eventdev_config->ev_queue_mode; /* * All queues need to be set with sched_type as - * schedule type for the application stage. One queue - * would be reserved for the final eth tx stage. This - * will be an atomic queue. + * schedule type for the application stage. One + * queue would be reserved for the final eth tx + * stage if event device does not have internal + * ports. This will be an atomic queue. 
*/ - if (j == nb_eventqueue-1) { + if (!eventdev_config->all_internal_ports && + j == nb_eventqueue-1) { eventq_conf.schedule_type = RTE_SCHED_TYPE_ATOMIC; } else { @@ -650,10 +769,6 @@ eh_rx_adapter_configure(struct eventmode_conf *em_conf, } /* Setup various connections in the adapter */ - - queue_conf.rx_queue_flags = - RTE_EVENT_ETH_RX_ADAPTER_QUEUE_FLOW_ID_VALID; - for (j = 0; j < adapter->nb_connections; j++) { /* Get connection */ conn = &(adapter->conn[j]); @@ -661,9 +776,7 @@ eh_rx_adapter_configure(struct eventmode_conf *em_conf, /* Setup queue conf */ queue_conf.ev.queue_id = conn->eventq_id; queue_conf.ev.sched_type = em_conf->ext_params.sched_type; - - /* Set flow ID as ethdev ID */ - queue_conf.ev.flow_id = conn->ethdev_id; + queue_conf.ev.event_type = RTE_EVENT_TYPE_ETHDEV; /* Add queue to the adapter */ ret = rte_event_eth_rx_adapter_queue_add( @@ -859,6 +972,12 @@ eh_find_worker(uint32_t lcore_id, struct eh_conf *conf, /* Populate the curr_conf with the capabilities */ + /* Check for Tx internal port */ + if (eh_dev_has_tx_internal_port(eventdev_id)) + curr_conf.cap.tx_internal_port = EH_TX_TYPE_INTERNAL_PORT; + else + curr_conf.cap.tx_internal_port = EH_TX_TYPE_NO_INTERNAL_PORT; + /* Check for burst mode */ if (eh_dev_has_burst_mode(eventdev_id)) curr_conf.cap.burst = EH_RX_TYPE_BURST; @@ -1034,6 +1153,18 @@ eh_tx_adapter_configure(struct eventmode_conf *em_conf, } } + /* + * Check if Tx core is assigned. 
If Tx core is not assigned, then + * the adapter has internal port for submitting packets for Tx + * and so Tx event queue & port setup is not required + */ + if (adapter->tx_core_id == (uint32_t) (-1)) { + /* Internal port is present */ + goto skip_tx_queue_port_setup; + } + + /* Setup Tx queue & port */ + /* Get event port used by the adapter */ ret = rte_event_eth_tx_adapter_event_port_get( adapter->adapter_id, @@ -1044,11 +1175,6 @@ eh_tx_adapter_configure(struct eventmode_conf *em_conf, } /* - * TODO: event queue for Tx adapter is required only if the - * INTERNAL PORT is not present. - */ - - /* * Tx event queue is reserved for Tx adapter. Unlink this queue * from all other ports * @@ -1058,6 +1184,7 @@ eh_tx_adapter_configure(struct eventmode_conf *em_conf, &(adapter->tx_ev_queue), 1); } + /* Link Tx event queue to Tx port */ ret = rte_event_port_link( eventdev_id, tx_port_id, @@ -1079,6 +1206,7 @@ eh_tx_adapter_configure(struct eventmode_conf *em_conf, rte_service_set_runstate_mapped_check(service_id, 0); +skip_tx_queue_port_setup: /* Start adapter */ ret = rte_event_eth_tx_adapter_start(adapter->adapter_id); if (ret < 0) { @@ -1163,13 +1291,22 @@ eh_display_rx_adapter_conf(struct eventmode_conf *em_conf) for (i = 0; i < nb_rx_adapter; i++) { adapter = &(em_conf->rx_adapter[i]); - EH_LOG_INFO( - "\tRx adaper ID: %-2d\tConnections: %-2d\tEvent dev ID: %-2d" - "\tRx core: %-2d", + sprintf(print_buf, + "\tRx adaper ID: %-2d\tConnections: %-2d\tEvent dev ID: %-2d", adapter->adapter_id, adapter->nb_connections, - adapter->eventdev_id, - adapter->rx_core_id); + adapter->eventdev_id); + if (adapter->rx_core_id == (uint32_t)-1) + sprintf(print_buf + strlen(print_buf), + "\tRx core: %-2s", "[INTERNAL PORT]"); + else if (adapter->rx_core_id == RTE_MAX_LCORE) + sprintf(print_buf + strlen(print_buf), + "\tRx core: %-2s", "[NONE]"); + else + sprintf(print_buf + strlen(print_buf), + "\tRx core: %-2d", adapter->rx_core_id); + + EH_LOG_INFO("%s", print_buf); for (j = 0; j 
< adapter->nb_connections; j++) { conn = &(adapter->conn[j]); diff --git a/examples/ipsec-secgw/event_helper.h b/examples/ipsec-secgw/event_helper.h index aad87f7..2895dfa 100644 --- a/examples/ipsec-secgw/event_helper.h +++ b/examples/ipsec-secgw/event_helper.h @@ -66,12 +66,21 @@ enum eh_rx_types { EH_RX_TYPE_BURST }; +/** + * Event mode packet tx types + */ +enum eh_tx_types { + EH_TX_TYPE_INTERNAL_PORT = 0, + EH_TX_TYPE_NO_INTERNAL_PORT +}; + /* Event dev params */ struct eventdev_params { uint8_t eventdev_id; uint8_t nb_eventqueue; uint8_t nb_eventport; uint8_t ev_queue_mode; + uint8_t all_internal_ports; }; /** @@ -183,6 +192,8 @@ struct eh_app_worker_params { struct { uint64_t burst : 1; /**< Specify status of rx type burst */ + uint64_t tx_internal_port : 1; + /**< Specify whether tx internal port is available */ }; uint64_t u64; } cap; -- 2.7.4 ^ permalink raw reply [flat|nested] 147+ messages in thread
* [dpdk-dev] [PATCH 09/14] examples/ipsec-secgw: add eventmode to ipsec-secgw 2019-12-08 12:30 [dpdk-dev] [PATCH 00/14] add eventmode to ipsec-secgw Anoob Joseph ` (7 preceding siblings ...) 2019-12-08 12:30 ` [dpdk-dev] [PATCH 08/14] examples/ipsec-secgw: add support for internal ports Anoob Joseph @ 2019-12-08 12:30 ` Anoob Joseph 2019-12-23 16:43 ` Ananyev, Konstantin 2019-12-24 12:47 ` Ananyev, Konstantin 2019-12-08 12:30 ` [dpdk-dev] [PATCH 10/14] examples/ipsec-secgw: add app inbound worker Anoob Joseph ` (5 subsequent siblings) 14 siblings, 2 replies; 147+ messages in thread From: Anoob Joseph @ 2019-12-08 12:30 UTC (permalink / raw) To: Akhil Goyal, Radu Nicolau, Thomas Monjalon Cc: Lukasz Bartosik, Jerin Jacob, Narayana Prasad, Ankur Dwivedi, Anoob Joseph, Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru, Konstantin Ananyev, dev From: Lukasz Bartosik <lbartosik@marvell.com> Add eventmode support to ipsec-secgw. This uses the event helper to set up and use the eventmode capabilities. Add driver inbound worker. 
Example command: ./ipsec-secgw -c 0x1 -w 0002:02:00.0,ipsec_in_max_spi=100 -w 0002:07:00.0 -w 0002:0e:00.0 -w 0002:10:00.1 -- -P -p 0x3 -u 0x1 --config "(0,0,0),(1,0,0)" -f a-aes-gcm-msa.cfg --transfer-mode 1 --schedule-type 2 --process-mode drv --process-dir in Signed-off-by: Anoob Joseph <anoobj@marvell.com> Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> --- examples/ipsec-secgw/Makefile | 1 + examples/ipsec-secgw/event_helper.c | 3 + examples/ipsec-secgw/event_helper.h | 26 +++ examples/ipsec-secgw/ipsec-secgw.c | 344 +++++++++++++++++++++++++++++++++++- examples/ipsec-secgw/ipsec.h | 7 + examples/ipsec-secgw/ipsec_worker.c | 180 +++++++++++++++++++ examples/ipsec-secgw/meson.build | 2 +- 7 files changed, 555 insertions(+), 8 deletions(-) create mode 100644 examples/ipsec-secgw/ipsec_worker.c diff --git a/examples/ipsec-secgw/Makefile b/examples/ipsec-secgw/Makefile index 09e3c5a..f6fd94c 100644 --- a/examples/ipsec-secgw/Makefile +++ b/examples/ipsec-secgw/Makefile @@ -15,6 +15,7 @@ SRCS-y += sa.c SRCS-y += rt.c SRCS-y += ipsec_process.c SRCS-y += ipsec-secgw.c +SRCS-y += ipsec_worker.c SRCS-y += event_helper.c CFLAGS += -gdwarf-2 diff --git a/examples/ipsec-secgw/event_helper.c b/examples/ipsec-secgw/event_helper.c index 6549875..44f997d 100644 --- a/examples/ipsec-secgw/event_helper.c +++ b/examples/ipsec-secgw/event_helper.c @@ -984,6 +984,9 @@ eh_find_worker(uint32_t lcore_id, struct eh_conf *conf, else curr_conf.cap.burst = EH_RX_TYPE_NON_BURST; + curr_conf.cap.ipsec_mode = conf->ipsec_mode; + curr_conf.cap.ipsec_dir = conf->ipsec_dir; + /* Parse the passed list and see if we have matching capabilities */ /* Initialize the pointer used to traverse the list */ diff --git a/examples/ipsec-secgw/event_helper.h b/examples/ipsec-secgw/event_helper.h index 2895dfa..07849b0 100644 --- a/examples/ipsec-secgw/event_helper.h +++ b/examples/ipsec-secgw/event_helper.h @@ -74,6 +74,22 @@ enum eh_tx_types { EH_TX_TYPE_NO_INTERNAL_PORT }; +/** + * Event mode 
ipsec mode types + */ +enum eh_ipsec_mode_types { + EH_IPSEC_MODE_TYPE_APP = 0, + EH_IPSEC_MODE_TYPE_DRIVER +}; + +/** + * Event mode ipsec direction types + */ +enum eh_ipsec_dir_types { + EH_IPSEC_DIR_TYPE_OUTBOUND = 0, + EH_IPSEC_DIR_TYPE_INBOUND, +}; + /* Event dev params */ struct eventdev_params { uint8_t eventdev_id; @@ -183,6 +199,12 @@ struct eh_conf { */ void *mode_params; /**< Mode specific parameters */ + + /** Application specific params */ + enum eh_ipsec_mode_types ipsec_mode; + /**< Mode of ipsec run */ + enum eh_ipsec_dir_types ipsec_dir; + /**< Direction of ipsec processing */ }; /* Workers registered by the application */ @@ -194,6 +216,10 @@ struct eh_app_worker_params { /**< Specify status of rx type burst */ uint64_t tx_internal_port : 1; /**< Specify whether tx internal port is available */ + uint64_t ipsec_mode : 1; + /**< Specify ipsec processing level */ + uint64_t ipsec_dir : 1; + /**< Specify direction of ipsec */ }; uint64_t u64; } cap; diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c index 7506922..c5d95b9 100644 --- a/examples/ipsec-secgw/ipsec-secgw.c +++ b/examples/ipsec-secgw/ipsec-secgw.c @@ -2,6 +2,7 @@ * Copyright(c) 2016 Intel Corporation */ +#include <stdbool.h> #include <stdio.h> #include <stdlib.h> #include <stdint.h> @@ -14,6 +15,7 @@ #include <sys/queue.h> #include <stdarg.h> #include <errno.h> +#include <signal.h> #include <getopt.h> #include <rte_common.h> @@ -41,12 +43,17 @@ #include <rte_jhash.h> #include <rte_cryptodev.h> #include <rte_security.h> +#include <rte_bitmap.h> +#include <rte_eventdev.h> #include <rte_ip.h> #include <rte_ip_frag.h> +#include "event_helper.h" #include "ipsec.h" #include "parser.h" +volatile bool force_quit; + #define RTE_LOGTYPE_IPSEC RTE_LOGTYPE_USER1 #define MAX_JUMBO_PKT_LEN 9600 @@ -133,12 +140,21 @@ struct flow_info flow_info_tbl[RTE_MAX_ETHPORTS]; #define CMD_LINE_OPT_CONFIG "config" #define CMD_LINE_OPT_SINGLE_SA "single-sa" #define 
CMD_LINE_OPT_CRYPTODEV_MASK "cryptodev_mask" +#define CMD_LINE_OPT_TRANSFER_MODE "transfer-mode" +#define CMD_LINE_OPT_SCHEDULE_TYPE "schedule-type" +#define CMD_LINE_OPT_IPSEC_MODE "process-mode" +#define CMD_LINE_OPT_IPSEC_DIR "process-dir" #define CMD_LINE_OPT_RX_OFFLOAD "rxoffload" #define CMD_LINE_OPT_TX_OFFLOAD "txoffload" #define CMD_LINE_OPT_REASSEMBLE "reassemble" #define CMD_LINE_OPT_MTU "mtu" #define CMD_LINE_OPT_FRAG_TTL "frag-ttl" +#define CMD_LINE_ARG_APP "app" +#define CMD_LINE_ARG_DRV "drv" +#define CMD_LINE_ARG_INB "in" +#define CMD_LINE_ARG_OUT "out" + enum { /* long options mapped to a short option */ @@ -149,7 +165,11 @@ enum { CMD_LINE_OPT_CONFIG_NUM, CMD_LINE_OPT_SINGLE_SA_NUM, CMD_LINE_OPT_CRYPTODEV_MASK_NUM, + CMD_LINE_OPT_TRANSFER_MODE_NUM, + CMD_LINE_OPT_SCHEDULE_TYPE_NUM, CMD_LINE_OPT_RX_OFFLOAD_NUM, + CMD_LINE_OPT_IPSEC_MODE_NUM, + CMD_LINE_OPT_IPSEC_DIR_NUM, CMD_LINE_OPT_TX_OFFLOAD_NUM, CMD_LINE_OPT_REASSEMBLE_NUM, CMD_LINE_OPT_MTU_NUM, @@ -160,6 +180,10 @@ static const struct option lgopts[] = { {CMD_LINE_OPT_CONFIG, 1, 0, CMD_LINE_OPT_CONFIG_NUM}, {CMD_LINE_OPT_SINGLE_SA, 1, 0, CMD_LINE_OPT_SINGLE_SA_NUM}, {CMD_LINE_OPT_CRYPTODEV_MASK, 1, 0, CMD_LINE_OPT_CRYPTODEV_MASK_NUM}, + {CMD_LINE_OPT_TRANSFER_MODE, 1, 0, CMD_LINE_OPT_TRANSFER_MODE_NUM}, + {CMD_LINE_OPT_SCHEDULE_TYPE, 1, 0, CMD_LINE_OPT_SCHEDULE_TYPE_NUM}, + {CMD_LINE_OPT_IPSEC_MODE, 1, 0, CMD_LINE_OPT_IPSEC_MODE_NUM}, + {CMD_LINE_OPT_IPSEC_DIR, 1, 0, CMD_LINE_OPT_IPSEC_DIR_NUM}, {CMD_LINE_OPT_RX_OFFLOAD, 1, 0, CMD_LINE_OPT_RX_OFFLOAD_NUM}, {CMD_LINE_OPT_TX_OFFLOAD, 1, 0, CMD_LINE_OPT_TX_OFFLOAD_NUM}, {CMD_LINE_OPT_REASSEMBLE, 1, 0, CMD_LINE_OPT_REASSEMBLE_NUM}, @@ -1094,8 +1118,8 @@ drain_outbound_crypto_queues(const struct lcore_conf *qconf, } /* main processing loop */ -static int32_t -main_loop(__attribute__((unused)) void *dummy) +void +ipsec_poll_mode_worker(void) { struct rte_mbuf *pkts[MAX_PKT_BURST]; uint32_t lcore_id; @@ -1137,7 +1161,7 @@ 
main_loop(__attribute__((unused)) void *dummy) if (qconf->nb_rx_queue == 0) { RTE_LOG(DEBUG, IPSEC, "lcore %u has nothing to do\n", lcore_id); - return 0; + return; } RTE_LOG(INFO, IPSEC, "entering main loop on lcore %u\n", lcore_id); @@ -1150,7 +1174,7 @@ main_loop(__attribute__((unused)) void *dummy) lcore_id, portid, queueid); } - while (1) { + while (!force_quit) { cur_tsc = rte_rdtsc(); /* TX queue buffer drain */ @@ -1277,6 +1301,10 @@ print_usage(const char *prgname) " --config (port,queue,lcore)[,(port,queue,lcore)]" " [--single-sa SAIDX]" " [--cryptodev_mask MASK]" + " [--transfer-mode MODE]" + " [--schedule-type TYPE]" + " [--process-mode MODE]" + " [--process-dir DIR]" " [--" CMD_LINE_OPT_RX_OFFLOAD " RX_OFFLOAD_MASK]" " [--" CMD_LINE_OPT_TX_OFFLOAD " TX_OFFLOAD_MASK]" " [--" CMD_LINE_OPT_REASSEMBLE " REASSEMBLE_TABLE_SIZE]" @@ -1298,6 +1326,22 @@ print_usage(const char *prgname) " bypassing the SP\n" " --cryptodev_mask MASK: Hexadecimal bitmask of the crypto\n" " devices to configure\n" + " --transfer-mode MODE\n" + " 0: Packet transfer via polling (default)\n" + " 1: Packet transfer via eventdev\n" + " --schedule-type TYPE queue schedule type, used only when\n" + " transfer mode is set to eventdev\n" + " 0: Ordered (default)\n" + " 1: Atomic\n" + " 2: Parallel\n" + " --process-mode MODE processing mode, used only when\n" + " transfer mode is set to eventdev\n" + " \"app\" : application mode (default)\n" + " \"drv\" : driver mode\n" + " --process-dir DIR processing direction, used only when\n" + " transfer mode is set to eventdev\n" + " \"out\" : outbound (default)\n" + " \"in\" : inbound\n" " --" CMD_LINE_OPT_RX_OFFLOAD ": bitmask of the RX HW offload capabilities to enable/use\n" " (DEV_RX_OFFLOAD_*)\n" @@ -1433,7 +1477,89 @@ print_app_sa_prm(const struct app_sa_prm *prm) } static int32_t -parse_args(int32_t argc, char **argv) +eh_parse_decimal(const char *str) +{ + unsigned long num; + char *end = NULL; + + num = strtoul(str, &end, 10); + if ((str[0] 
== '\0') || (end == NULL) || (*end != '\0')) + return -EINVAL; + + return num; +} + +static int +parse_transfer_mode(struct eh_conf *conf, const char *optarg) +{ + int32_t parsed_dec; + + parsed_dec = eh_parse_decimal(optarg); + if (parsed_dec != EH_PKT_TRANSFER_MODE_POLL && + parsed_dec != EH_PKT_TRANSFER_MODE_EVENT) { + printf("Unsupported packet transfer mode"); + return -EINVAL; + } + conf->mode = parsed_dec; + return 0; +} + +static int +parse_schedule_type(struct eh_conf *conf, const char *optarg) +{ + struct eventmode_conf *em_conf = NULL; + int32_t parsed_dec; + + parsed_dec = eh_parse_decimal(optarg); + if (parsed_dec != RTE_SCHED_TYPE_ORDERED && + parsed_dec != RTE_SCHED_TYPE_ATOMIC && + parsed_dec != RTE_SCHED_TYPE_PARALLEL) + return -EINVAL; + + /* Get eventmode conf */ + em_conf = (struct eventmode_conf *)(conf->mode_params); + + em_conf->ext_params.sched_type = parsed_dec; + + return 0; +} + +static int +parse_ipsec_mode(struct eh_conf *conf, const char *optarg) +{ + if (!strncmp(CMD_LINE_ARG_APP, optarg, strlen(CMD_LINE_ARG_APP)) && + strlen(optarg) == strlen(CMD_LINE_ARG_APP)) + conf->ipsec_mode = EH_IPSEC_MODE_TYPE_APP; + else if (!strncmp(CMD_LINE_ARG_DRV, optarg, strlen(CMD_LINE_ARG_DRV)) && + strlen(optarg) == strlen(CMD_LINE_ARG_DRV)) + conf->ipsec_mode = EH_IPSEC_MODE_TYPE_DRIVER; + else { + printf("Unsupported ipsec mode\n"); + return -EINVAL; + } + + return 0; +} + +static int +parse_ipsec_dir(struct eh_conf *conf, const char *optarg) +{ + if (!strncmp(CMD_LINE_ARG_INB, optarg, strlen(CMD_LINE_ARG_INB)) && + strlen(optarg) == strlen(CMD_LINE_ARG_INB)) + conf->ipsec_dir = EH_IPSEC_DIR_TYPE_INBOUND; + else if (!strncmp(CMD_LINE_ARG_OUT, optarg, strlen(CMD_LINE_ARG_OUT)) && + strlen(optarg) == strlen(CMD_LINE_ARG_OUT)) + conf->ipsec_dir = EH_IPSEC_DIR_TYPE_OUTBOUND; + else { + printf("Unsupported ipsec direction\n"); + return -EINVAL; + } + + return 0; +} + +static int32_t +parse_args(int32_t argc, char **argv, struct eh_conf *eh_conf) { int 
opt; int64_t ret; @@ -1536,6 +1662,43 @@ parse_args(int32_t argc, char **argv) /* else */ enabled_cryptodev_mask = ret; break; + + case CMD_LINE_OPT_TRANSFER_MODE_NUM: + ret = parse_transfer_mode(eh_conf, optarg); + if (ret < 0) { + printf("Invalid packet transfer mode\n"); + print_usage(prgname); + return -1; + } + break; + + case CMD_LINE_OPT_SCHEDULE_TYPE_NUM: + ret = parse_schedule_type(eh_conf, optarg); + if (ret < 0) { + printf("Invalid queue schedule type\n"); + print_usage(prgname); + return -1; + } + break; + + case CMD_LINE_OPT_IPSEC_MODE_NUM: + ret = parse_ipsec_mode(eh_conf, optarg); + if (ret < 0) { + printf("Invalid ipsec mode\n"); + print_usage(prgname); + return -1; + } + break; + + case CMD_LINE_OPT_IPSEC_DIR_NUM: + ret = parse_ipsec_dir(eh_conf, optarg); + if (ret < 0) { + printf("Invalid ipsec direction\n"); + print_usage(prgname); + return -1; + } + break; + case CMD_LINE_OPT_RX_OFFLOAD_NUM: ret = parse_mask(optarg, &dev_rx_offload); if (ret != 0) { @@ -2457,6 +2620,132 @@ create_default_ipsec_flow(uint16_t port_id, uint64_t rx_offloads) return ret; } +static struct eh_conf * +eh_conf_init(void) +{ + struct eventmode_conf *em_conf = NULL; + struct eh_conf *conf = NULL; + unsigned int eth_core_id; + uint32_t nb_bytes; + void *mem = NULL; + + /* Allocate memory for config */ + conf = calloc(1, sizeof(struct eh_conf)); + if (conf == NULL) { + printf("Failed to allocate memory for eventmode helper conf"); + goto err; + } + + /* Set default conf */ + + /* Packet transfer mode: poll */ + conf->mode = EH_PKT_TRANSFER_MODE_POLL; + conf->ipsec_mode = EH_IPSEC_MODE_TYPE_APP; + conf->ipsec_dir = EH_IPSEC_DIR_TYPE_OUTBOUND; + + /* Keep all ethernet ports enabled by default */ + conf->eth_portmask = -1; + + /* Allocate memory for event mode params */ + conf->mode_params = + calloc(1, sizeof(struct eventmode_conf)); + if (conf->mode_params == NULL) { + printf("Failed to allocate memory for event mode params"); + goto err; + } + + /* Get eventmode conf */ + 
em_conf = (struct eventmode_conf *)(conf->mode_params); + + /* Allocate and initialize bitmap for eth cores */ + nb_bytes = rte_bitmap_get_memory_footprint(RTE_MAX_LCORE); + if (!nb_bytes) { + printf("Failed to get bitmap footprint"); + goto err; + } + + mem = rte_zmalloc("event-helper-ethcore-bitmap", nb_bytes, + RTE_CACHE_LINE_SIZE); + if (!mem) { + printf("Failed to allocate memory for eth cores bitmap\n"); + goto err; + } + + em_conf->eth_core_mask = rte_bitmap_init(RTE_MAX_LCORE, mem, nb_bytes); + if (!em_conf->eth_core_mask) { + printf("Failed to initialize bitmap"); + goto err; + } + + /* Schedule type: ordered */ + em_conf->ext_params.sched_type = RTE_SCHED_TYPE_ORDERED; + + /* Set two cores as eth cores for Rx & Tx */ + + /* Use first core other than master core as Rx core */ + eth_core_id = rte_get_next_lcore(0, /* curr core */ + 1, /* skip master core */ + 0 /* wrap */); + + rte_bitmap_set(em_conf->eth_core_mask, eth_core_id); + + /* Use next core as Tx core */ + eth_core_id = rte_get_next_lcore(eth_core_id, /* curr core */ + 1, /* skip master core */ + 0 /* wrap */); + + rte_bitmap_set(em_conf->eth_core_mask, eth_core_id); + + return conf; +err: + rte_free(mem); + free(em_conf); + free(conf); + return NULL; +} + +static void +eh_conf_uninit(struct eh_conf *conf) +{ + struct eventmode_conf *em_conf = NULL; + + /* Get eventmode conf */ + em_conf = (struct eventmode_conf *)(conf->mode_params); + + /* Free evenmode configuration memory */ + rte_free(em_conf->eth_core_mask); + free(em_conf); + free(conf); +} + +static void +signal_handler(int signum) +{ + if (signum == SIGINT || signum == SIGTERM) { + uint16_t port_id; + printf("\n\nSignal %d received, preparing to exit...\n", + signum); + force_quit = true; + + /* Destroy the default ipsec flow */ + RTE_ETH_FOREACH_DEV(port_id) { + if ((enabled_port_mask & (1 << port_id)) == 0) + continue; + if (flow_info_tbl[port_id].rx_def_flow) { + struct rte_flow_error err; + int ret; + ret = rte_flow_destroy(port_id, + 
flow_info_tbl[port_id].rx_def_flow, + &err); + if (ret) + RTE_LOG(ERR, IPSEC, + "Failed to destroy flow for port %u, " + "err msg: %s\n", port_id, err.message); + } + } + } +} + int32_t main(int32_t argc, char **argv) { @@ -2466,6 +2755,7 @@ main(int32_t argc, char **argv) uint8_t socket_id; uint16_t portid; uint64_t req_rx_offloads, req_tx_offloads; + struct eh_conf *eh_conf = NULL; size_t sess_sz; /* init EAL */ @@ -2475,8 +2765,17 @@ main(int32_t argc, char **argv) argc -= ret; argv += ret; + force_quit = false; + signal(SIGINT, signal_handler); + signal(SIGTERM, signal_handler); + + /* initialize event helper configuration */ + eh_conf = eh_conf_init(); + if (eh_conf == NULL) + rte_exit(EXIT_FAILURE, "Failed to init event helper config"); + /* parse application arguments (after the EAL ones) */ - ret = parse_args(argc, argv); + ret = parse_args(argc, argv, eh_conf); if (ret < 0) rte_exit(EXIT_FAILURE, "Invalid parameters\n"); @@ -2592,12 +2891,43 @@ main(int32_t argc, char **argv) check_all_ports_link_status(enabled_port_mask); + /* + * Set the enabled port mask in helper config for use by helper + * sub-system. This will be used while intializing devices using + * helper sub-system. 
+ */ + eh_conf->eth_portmask = enabled_port_mask; + + /* Initialize eventmode components */ + ret = eh_devs_init(eh_conf); + if (ret < 0) + rte_exit(EXIT_FAILURE, "eh_devs_init failed, err=%d\n", ret); + /* launch per-lcore init on every lcore */ - rte_eal_mp_remote_launch(main_loop, NULL, CALL_MASTER); + rte_eal_mp_remote_launch(ipsec_launch_one_lcore, eh_conf, CALL_MASTER); + RTE_LCORE_FOREACH_SLAVE(lcore_id) { if (rte_eal_wait_lcore(lcore_id) < 0) return -1; } + /* Uninitialize eventmode components */ + ret = eh_devs_uninit(eh_conf); + if (ret < 0) + rte_exit(EXIT_FAILURE, "eh_devs_uninit failed, err=%d\n", ret); + + /* Free eventmode configuration memory */ + eh_conf_uninit(eh_conf); + + RTE_ETH_FOREACH_DEV(portid) { + if ((enabled_port_mask & (1 << portid)) == 0) + continue; + printf("Closing port %d...", portid); + rte_eth_dev_stop(portid); + rte_eth_dev_close(portid); + printf(" Done\n"); + } + printf("Bye...\n"); + return 0; } diff --git a/examples/ipsec-secgw/ipsec.h b/examples/ipsec-secgw/ipsec.h index 28ff07d..0b9fc04 100644 --- a/examples/ipsec-secgw/ipsec.h +++ b/examples/ipsec-secgw/ipsec.h @@ -247,6 +247,13 @@ struct ipsec_traffic { struct traffic_type ip6; }; + +void +ipsec_poll_mode_worker(void); + +int +ipsec_launch_one_lcore(void *args); + uint16_t ipsec_inbound(struct ipsec_ctx *ctx, struct rte_mbuf *pkts[], uint16_t nb_pkts, uint16_t len); diff --git a/examples/ipsec-secgw/ipsec_worker.c b/examples/ipsec-secgw/ipsec_worker.c new file mode 100644 index 0000000..87c657b --- /dev/null +++ b/examples/ipsec-secgw/ipsec_worker.c @@ -0,0 +1,180 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2010-2016 Intel Corporation + * Copyright (C) 2019 Marvell International Ltd. 
+ */ +#include <stdio.h> +#include <stdlib.h> +#include <string.h> +#include <stdint.h> +#include <inttypes.h> +#include <sys/types.h> +#include <sys/queue.h> +#include <netinet/in.h> +#include <setjmp.h> +#include <stdarg.h> +#include <ctype.h> +#include <stdbool.h> + +#include <rte_common.h> +#include <rte_log.h> +#include <rte_memcpy.h> +#include <rte_atomic.h> +#include <rte_cycles.h> +#include <rte_prefetch.h> +#include <rte_lcore.h> +#include <rte_branch_prediction.h> +#include <rte_event_eth_tx_adapter.h> +#include <rte_ether.h> +#include <rte_ethdev.h> +#include <rte_eventdev.h> +#include <rte_malloc.h> +#include <rte_mbuf.h> + +#include "ipsec.h" +#include "event_helper.h" + +extern volatile bool force_quit; + +static inline void +ipsec_event_pre_forward(struct rte_mbuf *m, unsigned int port_id) +{ + /* Save the destination port in the mbuf */ + m->port = port_id; + + /* Save eth queue for Tx */ + rte_event_eth_tx_adapter_txq_set(m, 0); +} + +/* + * Event mode exposes various operating modes depending on the + * capabilities of the event device and the operating mode + * selected. + */ + +/* Workers registered */ +#define IPSEC_EVENTMODE_WORKERS 1 + +/* + * Event mode worker + * Operating parameters : non-burst - Tx internal port - driver mode - inbound + */ +static void +ipsec_wrkr_non_burst_int_port_drvr_mode_inb(struct eh_event_link_info *links, + uint8_t nb_links) +{ + unsigned int nb_rx = 0; + struct rte_mbuf *pkt; + unsigned int port_id; + struct rte_event ev; + uint32_t lcore_id; + + /* Check if we have links registered for this lcore */ + if (nb_links == 0) { + /* No links registered - exit */ + goto exit; + } + + /* Get core ID */ + lcore_id = rte_lcore_id(); + + RTE_LOG(INFO, IPSEC, + "Launching event mode worker (non-burst - Tx internal port - " + "driver mode - inbound) on lcore %d\n", lcore_id); + + /* We have valid links */ + + /* Check if it's single link */ + if (nb_links != 1) { + RTE_LOG(INFO, IPSEC, + "Multiple links not supported. 
Using first link\n"); + } + + RTE_LOG(INFO, IPSEC, " -- lcoreid=%u event_port_id=%u\n", lcore_id, + links[0].event_port_id); + while (!force_quit) { + /* Read packet from event queues */ + nb_rx = rte_event_dequeue_burst(links[0].eventdev_id, + links[0].event_port_id, + &ev, /* events */ + 1, /* nb_events */ + 0 /* timeout_ticks */); + + if (nb_rx == 0) + continue; + + port_id = ev.queue_id; + pkt = ev.mbuf; + + rte_prefetch0(rte_pktmbuf_mtod(pkt, void *)); + + /* Process packet */ + ipsec_event_pre_forward(pkt, port_id); + + /* + * Since tx internal port is available, events can be + * directly enqueued to the adapter and it would be + * internally submitted to the eth device. + */ + rte_event_eth_tx_adapter_enqueue(links[0].eventdev_id, + links[0].event_port_id, + &ev, /* events */ + 1, /* nb_events */ + 0 /* flags */); + } + +exit: + return; +} + +static uint8_t +ipsec_eventmode_populate_wrkr_params(struct eh_app_worker_params *wrkrs) +{ + struct eh_app_worker_params *wrkr; + uint8_t nb_wrkr_param = 0; + + /* Save workers */ + wrkr = wrkrs; + + /* Non-burst - Tx internal port - driver mode - inbound */ + wrkr->cap.burst = EH_RX_TYPE_NON_BURST; + wrkr->cap.tx_internal_port = EH_TX_TYPE_INTERNAL_PORT; + wrkr->cap.ipsec_mode = EH_IPSEC_MODE_TYPE_DRIVER; + wrkr->cap.ipsec_dir = EH_IPSEC_DIR_TYPE_INBOUND; + wrkr->worker_thread = ipsec_wrkr_non_burst_int_port_drvr_mode_inb; + + nb_wrkr_param++; + return nb_wrkr_param; +} + +static void +ipsec_eventmode_worker(struct eh_conf *conf) +{ + struct eh_app_worker_params ipsec_wrkr[IPSEC_EVENTMODE_WORKERS] = { + {{{0} }, NULL } }; + uint8_t nb_wrkr_param; + + /* Populate ipsec_wrkr params */ + nb_wrkr_param = ipsec_eventmode_populate_wrkr_params(ipsec_wrkr); + + /* + * Launch correct worker after checking + * the event device's capabilities. 
+ */ + eh_launch_worker(conf, ipsec_wrkr, nb_wrkr_param); +} + +int ipsec_launch_one_lcore(void *args) +{ + struct eh_conf *conf; + + conf = (struct eh_conf *)args; + + if (conf->mode == EH_PKT_TRANSFER_MODE_POLL) { + /* Run in poll mode */ + ipsec_poll_mode_worker(); + } else if (conf->mode == EH_PKT_TRANSFER_MODE_EVENT) { + /* Run in event mode */ + ipsec_eventmode_worker(conf); + } + return 0; +} diff --git a/examples/ipsec-secgw/meson.build b/examples/ipsec-secgw/meson.build index 20f4064..ab40ca5 100644 --- a/examples/ipsec-secgw/meson.build +++ b/examples/ipsec-secgw/meson.build @@ -10,5 +10,5 @@ deps += ['security', 'lpm', 'acl', 'hash', 'ip_frag', 'ipsec', 'eventdev'] allow_experimental_apis = true sources = files( 'esp.c', 'ipsec.c', 'ipsec_process.c', 'ipsec-secgw.c', - 'parser.c', 'rt.c', 'sa.c', 'sp4.c', 'sp6.c', 'event_helper.c' + 'parser.c', 'rt.c', 'sa.c', 'sp4.c', 'sp6.c', 'event_helper.c', 'ipsec_worker.c' ) -- 2.7.4 ^ permalink raw reply [flat|nested] 147+ messages in thread
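The `eh_launch_worker()` call above selects one of the registered worker threads by matching application capabilities against runtime conditions. Since each worker advertises its capabilities as one-bit fields packed in a union with a `u64`, the match can be done with a single integer compare. The sketch below illustrates that idea only; the struct layout and function names are simplified assumptions, not the actual `event_helper.c` code.

```c
#include <stdint.h>
#include <stddef.h>

/* Simplified stand-in for struct eh_app_worker_params */
struct wrkr_params {
	union {
		struct {
			uint64_t burst : 1;
			uint64_t tx_internal_port : 1;
			uint64_t ipsec_mode : 1;
			uint64_t ipsec_dir : 1;
		};
		uint64_t u64;	/* all capability bits viewed at once */
	} cap;
	void (*worker_thread)(void);
};

/* Return the first registered worker whose capabilities match exactly */
static struct wrkr_params *
find_match(struct wrkr_params *wrkrs, size_t nb_wrkr, uint64_t curr_cap)
{
	size_t i;

	for (i = 0; i < nb_wrkr; i++) {
		/* One u64 compare covers every capability bit */
		if (wrkrs[i].cap.u64 == curr_cap)
			return &wrkrs[i];
	}
	return NULL;	/* the real helper would fall back to a default worker */
}
```

Packing the capability bits this way keeps the matching loop trivial and makes adding new capability bits (as this patch does with `ipsec_mode` and `ipsec_dir`) a non-invasive change.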
* Re: [dpdk-dev] [PATCH 09/14] examples/ipsec-secgw: add eventmode to ipsec-secgw 2019-12-08 12:30 ` [dpdk-dev] [PATCH 09/14] examples/ipsec-secgw: add eventmode to ipsec-secgw Anoob Joseph @ 2019-12-23 16:43 ` Ananyev, Konstantin 2020-01-03 10:18 ` Anoob Joseph 2019-12-24 12:47 ` Ananyev, Konstantin 1 sibling, 1 reply; 147+ messages in thread From: Ananyev, Konstantin @ 2019-12-23 16:43 UTC (permalink / raw) To: Anoob Joseph, Akhil Goyal, Nicolau, Radu, Thomas Monjalon Cc: Lukasz Bartosik, Jerin Jacob, Narayana Prasad, Ankur Dwivedi, Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru, dev > > Add eventmode support to ipsec-secgw. This uses event helper to setup > and use the eventmode capabilities. Add driver inbound worker. > > Example command: > ./ipsec-secgw -c 0x1 -w 0002:02:00.0,ipsec_in_max_spi=100 -w 0002:07:00.0 > -w 0002:0e:00.0 -w 0002:10:00.1 -- -P -p 0x3 -u 0x1 > --config "(0,0,0),(1,0,0)" -f a-aes-gcm-msa.cfg --transfer-mode 1 > --schedule-type 2 --process-mode drv --process-dir in > > Signed-off-by: Anoob Joseph <anoobj@marvell.com> > Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> > --- > examples/ipsec-secgw/Makefile | 1 + > examples/ipsec-secgw/event_helper.c | 3 + > examples/ipsec-secgw/event_helper.h | 26 +++ > examples/ipsec-secgw/ipsec-secgw.c | 344 +++++++++++++++++++++++++++++++++++- > examples/ipsec-secgw/ipsec.h | 7 + > examples/ipsec-secgw/ipsec_worker.c | 180 +++++++++++++++++++ > examples/ipsec-secgw/meson.build | 2 +- > 7 files changed, 555 insertions(+), 8 deletions(-) > create mode 100644 examples/ipsec-secgw/ipsec_worker.c > > diff --git a/examples/ipsec-secgw/Makefile b/examples/ipsec-secgw/Makefile > index 09e3c5a..f6fd94c 100644 > --- a/examples/ipsec-secgw/Makefile > +++ b/examples/ipsec-secgw/Makefile > @@ -15,6 +15,7 @@ SRCS-y += sa.c > SRCS-y += rt.c > SRCS-y += ipsec_process.c > SRCS-y += ipsec-secgw.c > +SRCS-y += ipsec_worker.c > SRCS-y += event_helper.c > > CFLAGS += -gdwarf-2 > diff --git 
a/examples/ipsec-secgw/event_helper.c b/examples/ipsec-secgw/event_helper.c > index 6549875..44f997d 100644 > --- a/examples/ipsec-secgw/event_helper.c > +++ b/examples/ipsec-secgw/event_helper.c > @@ -984,6 +984,9 @@ eh_find_worker(uint32_t lcore_id, struct eh_conf *conf, > else > curr_conf.cap.burst = EH_RX_TYPE_NON_BURST; > > + curr_conf.cap.ipsec_mode = conf->ipsec_mode; > + curr_conf.cap.ipsec_dir = conf->ipsec_dir; > + > /* Parse the passed list and see if we have matching capabilities */ > > /* Initialize the pointer used to traverse the list */ > diff --git a/examples/ipsec-secgw/event_helper.h b/examples/ipsec-secgw/event_helper.h > index 2895dfa..07849b0 100644 > --- a/examples/ipsec-secgw/event_helper.h > +++ b/examples/ipsec-secgw/event_helper.h > @@ -74,6 +74,22 @@ enum eh_tx_types { > EH_TX_TYPE_NO_INTERNAL_PORT > }; > > +/** > + * Event mode ipsec mode types > + */ > +enum eh_ipsec_mode_types { > + EH_IPSEC_MODE_TYPE_APP = 0, > + EH_IPSEC_MODE_TYPE_DRIVER > +}; > + > +/** > + * Event mode ipsec direction types > + */ > +enum eh_ipsec_dir_types { > + EH_IPSEC_DIR_TYPE_OUTBOUND = 0, > + EH_IPSEC_DIR_TYPE_INBOUND, > +}; > + > /* Event dev params */ > struct eventdev_params { > uint8_t eventdev_id; > @@ -183,6 +199,12 @@ struct eh_conf { > */ > void *mode_params; > /**< Mode specific parameters */ > + > + /** Application specific params */ > + enum eh_ipsec_mode_types ipsec_mode; > + /**< Mode of ipsec run */ > + enum eh_ipsec_dir_types ipsec_dir; > + /**< Direction of ipsec processing */ > }; > > /* Workers registered by the application */ > @@ -194,6 +216,10 @@ struct eh_app_worker_params { > /**< Specify status of rx type burst */ > uint64_t tx_internal_port : 1; > /**< Specify whether tx internal port is available */ > + uint64_t ipsec_mode : 1; > + /**< Specify ipsec processing level */ > + uint64_t ipsec_dir : 1; > + /**< Specify direction of ipsec */ > }; > uint64_t u64; > } cap; > diff --git a/examples/ipsec-secgw/ipsec-secgw.c 
b/examples/ipsec-secgw/ipsec-secgw.c > index 7506922..c5d95b9 100644 > --- a/examples/ipsec-secgw/ipsec-secgw.c > +++ b/examples/ipsec-secgw/ipsec-secgw.c > @@ -2,6 +2,7 @@ > * Copyright(c) 2016 Intel Corporation > */ > > +#include <stdbool.h> > #include <stdio.h> > #include <stdlib.h> > #include <stdint.h> > @@ -14,6 +15,7 @@ > #include <sys/queue.h> > #include <stdarg.h> > #include <errno.h> > +#include <signal.h> > #include <getopt.h> > > #include <rte_common.h> > @@ -41,12 +43,17 @@ > #include <rte_jhash.h> > #include <rte_cryptodev.h> > #include <rte_security.h> > +#include <rte_bitmap.h> > +#include <rte_eventdev.h> > #include <rte_ip.h> > #include <rte_ip_frag.h> > > +#include "event_helper.h" > #include "ipsec.h" > #include "parser.h" > > +volatile bool force_quit; > + > #define RTE_LOGTYPE_IPSEC RTE_LOGTYPE_USER1 > > #define MAX_JUMBO_PKT_LEN 9600 > @@ -133,12 +140,21 @@ struct flow_info flow_info_tbl[RTE_MAX_ETHPORTS]; > #define CMD_LINE_OPT_CONFIG "config" > #define CMD_LINE_OPT_SINGLE_SA "single-sa" > #define CMD_LINE_OPT_CRYPTODEV_MASK "cryptodev_mask" > +#define CMD_LINE_OPT_TRANSFER_MODE "transfer-mode" > +#define CMD_LINE_OPT_SCHEDULE_TYPE "schedule-type" > +#define CMD_LINE_OPT_IPSEC_MODE "process-mode" > +#define CMD_LINE_OPT_IPSEC_DIR "process-dir" > #define CMD_LINE_OPT_RX_OFFLOAD "rxoffload" > #define CMD_LINE_OPT_TX_OFFLOAD "txoffload" > #define CMD_LINE_OPT_REASSEMBLE "reassemble" > #define CMD_LINE_OPT_MTU "mtu" > #define CMD_LINE_OPT_FRAG_TTL "frag-ttl" > > +#define CMD_LINE_ARG_APP "app" > +#define CMD_LINE_ARG_DRV "drv" > +#define CMD_LINE_ARG_INB "in" > +#define CMD_LINE_ARG_OUT "out" > + > enum { > /* long options mapped to a short option */ > > @@ -149,7 +165,11 @@ enum { > CMD_LINE_OPT_CONFIG_NUM, > CMD_LINE_OPT_SINGLE_SA_NUM, > CMD_LINE_OPT_CRYPTODEV_MASK_NUM, > + CMD_LINE_OPT_TRANSFER_MODE_NUM, > + CMD_LINE_OPT_SCHEDULE_TYPE_NUM, > CMD_LINE_OPT_RX_OFFLOAD_NUM, > + CMD_LINE_OPT_IPSEC_MODE_NUM, > + CMD_LINE_OPT_IPSEC_DIR_NUM, > 
CMD_LINE_OPT_TX_OFFLOAD_NUM, > CMD_LINE_OPT_REASSEMBLE_NUM, > CMD_LINE_OPT_MTU_NUM, > @@ -160,6 +180,10 @@ static const struct option lgopts[] = { > {CMD_LINE_OPT_CONFIG, 1, 0, CMD_LINE_OPT_CONFIG_NUM}, > {CMD_LINE_OPT_SINGLE_SA, 1, 0, CMD_LINE_OPT_SINGLE_SA_NUM}, > {CMD_LINE_OPT_CRYPTODEV_MASK, 1, 0, CMD_LINE_OPT_CRYPTODEV_MASK_NUM}, > + {CMD_LINE_OPT_TRANSFER_MODE, 1, 0, CMD_LINE_OPT_TRANSFER_MODE_NUM}, > + {CMD_LINE_OPT_SCHEDULE_TYPE, 1, 0, CMD_LINE_OPT_SCHEDULE_TYPE_NUM}, > + {CMD_LINE_OPT_IPSEC_MODE, 1, 0, CMD_LINE_OPT_IPSEC_MODE_NUM}, > + {CMD_LINE_OPT_IPSEC_DIR, 1, 0, CMD_LINE_OPT_IPSEC_DIR_NUM}, > {CMD_LINE_OPT_RX_OFFLOAD, 1, 0, CMD_LINE_OPT_RX_OFFLOAD_NUM}, > {CMD_LINE_OPT_TX_OFFLOAD, 1, 0, CMD_LINE_OPT_TX_OFFLOAD_NUM}, > {CMD_LINE_OPT_REASSEMBLE, 1, 0, CMD_LINE_OPT_REASSEMBLE_NUM}, > @@ -1094,8 +1118,8 @@ drain_outbound_crypto_queues(const struct lcore_conf *qconf, > } > > /* main processing loop */ > -static int32_t > -main_loop(__attribute__((unused)) void *dummy) > +void > +ipsec_poll_mode_worker(void) > { > struct rte_mbuf *pkts[MAX_PKT_BURST]; > uint32_t lcore_id; > @@ -1137,7 +1161,7 @@ main_loop(__attribute__((unused)) void *dummy) > if (qconf->nb_rx_queue == 0) { > RTE_LOG(DEBUG, IPSEC, "lcore %u has nothing to do\n", > lcore_id); > - return 0; > + return; > } > > RTE_LOG(INFO, IPSEC, "entering main loop on lcore %u\n", lcore_id); > @@ -1150,7 +1174,7 @@ main_loop(__attribute__((unused)) void *dummy) > lcore_id, portid, queueid); > } > > - while (1) { > + while (!force_quit) { > cur_tsc = rte_rdtsc(); > > /* TX queue buffer drain */ > @@ -1277,6 +1301,10 @@ print_usage(const char *prgname) > " --config (port,queue,lcore)[,(port,queue,lcore)]" > " [--single-sa SAIDX]" > " [--cryptodev_mask MASK]" > + " [--transfer-mode MODE]" > + " [--schedule-type TYPE]" > + " [--process-mode MODE]" > + " [--process-dir DIR]" > " [--" CMD_LINE_OPT_RX_OFFLOAD " RX_OFFLOAD_MASK]" > " [--" CMD_LINE_OPT_TX_OFFLOAD " TX_OFFLOAD_MASK]" > " [--" CMD_LINE_OPT_REASSEMBLE " 
REASSEMBLE_TABLE_SIZE]" > @@ -1298,6 +1326,22 @@ print_usage(const char *prgname) > " bypassing the SP\n" > " --cryptodev_mask MASK: Hexadecimal bitmask of the crypto\n" > " devices to configure\n" > + " --transfer-mode MODE\n" > + " 0: Packet transfer via polling (default)\n" > + " 1: Packet transfer via eventdev\n" > + " --schedule-type TYPE queue schedule type, used only when\n" > + " transfer mode is set to eventdev\n" > + " 0: Ordered (default)\n" > + " 1: Atomic\n" > + " 2: Parallel\n" For the last two, why not something human-readable? E.g. --transfer-mode=(poll|event) or so. Same for schedule-type. > + " --process-mode MODE processing mode, used only when\n" > + " transfer mode is set to eventdev\n" > + " \"app\" : application mode (default)\n" > + " \"drv\" : driver mode\n" > + " --process-dir DIR processing direction, used only when\n" > + " transfer mode is set to eventdev\n" > + " \"out\" : outbound (default)\n" > + " \"in\" : inbound\n" Hmm, and why is it not possible to handle both inbound and outbound traffic in eventdev mode? Where is the limitation: eventdev framework/PMD/ipsec-secgw? > " --" CMD_LINE_OPT_RX_OFFLOAD > ": bitmask of the RX HW offload capabilities to enable/use\n" > " (DEV_RX_OFFLOAD_*)\n" > @@ -1433,7 +1477,89 @@ print_app_sa_prm(const struct app_sa_prm *prm) > } > > static int32_t > -parse_args(int32_t argc, char **argv) > +eh_parse_decimal(const char *str) > +{ > + unsigned long num; > + char *end = NULL; > + > + num = strtoul(str, &end, 10); > + if ((str[0] == '\0') || (end == NULL) || (*end != '\0')) > + return -EINVAL; > + > + return num; > +} There already exists parse_decimal(), why create a duplicate? 
> + > +static int > +parse_transfer_mode(struct eh_conf *conf, const char *optarg) > +{ > + int32_t parsed_dec; > + > + parsed_dec = eh_parse_decimal(optarg); > + if (parsed_dec != EH_PKT_TRANSFER_MODE_POLL && > + parsed_dec != EH_PKT_TRANSFER_MODE_EVENT) { > + printf("Unsupported packet transfer mode"); > + return -EINVAL; > + } > + conf->mode = parsed_dec; > + return 0; > +} > + > +static int > +parse_schedule_type(struct eh_conf *conf, const char *optarg) > +{ > + struct eventmode_conf *em_conf = NULL; > + int32_t parsed_dec; > + > + parsed_dec = eh_parse_decimal(optarg); > + if (parsed_dec != RTE_SCHED_TYPE_ORDERED && > + parsed_dec != RTE_SCHED_TYPE_ATOMIC && > + parsed_dec != RTE_SCHED_TYPE_PARALLEL) > + return -EINVAL; > + > + /* Get eventmode conf */ > + em_conf = (struct eventmode_conf *)(conf->mode_params); > + > + em_conf->ext_params.sched_type = parsed_dec; > + > + return 0; > +} > + > +static int > +parse_ipsec_mode(struct eh_conf *conf, const char *optarg) > +{ > + if (!strncmp(CMD_LINE_ARG_APP, optarg, strlen(CMD_LINE_ARG_APP)) && > + strlen(optarg) == strlen(CMD_LINE_ARG_APP)) Ugh, that's an ugly construction, why not just: if (strcmp(CMD_LINE_ARG_APP, optarg) == 0) ? 
> + conf->ipsec_mode = EH_IPSEC_MODE_TYPE_APP; > + else if (!strncmp(CMD_LINE_ARG_DRV, optarg, strlen(CMD_LINE_ARG_DRV)) && > + strlen(optarg) == strlen(CMD_LINE_ARG_DRV)) > + conf->ipsec_mode = EH_IPSEC_MODE_TYPE_DRIVER; > + else { > + printf("Unsupported ipsec mode\n"); > + return -EINVAL; > + } > + > + return 0; > +} > + > +static int > +parse_ipsec_dir(struct eh_conf *conf, const char *optarg) > +{ > + if (!strncmp(CMD_LINE_ARG_INB, optarg, strlen(CMD_LINE_ARG_INB)) && > + strlen(optarg) == strlen(CMD_LINE_ARG_INB)) > + conf->ipsec_dir = EH_IPSEC_DIR_TYPE_INBOUND; > + else if (!strncmp(CMD_LINE_ARG_OUT, optarg, strlen(CMD_LINE_ARG_OUT)) && > + strlen(optarg) == strlen(CMD_LINE_ARG_OUT)) > + conf->ipsec_dir = EH_IPSEC_DIR_TYPE_OUTBOUND; > + else { > + printf("Unsupported ipsec direction\n"); > + return -EINVAL; > + } > + > + return 0; > +} > + > +static int32_t > +parse_args(int32_t argc, char **argv, struct eh_conf *eh_conf) > { > int opt; > int64_t ret; > @@ -1536,6 +1662,43 @@ parse_args(int32_t argc, char **argv) > /* else */ > enabled_cryptodev_mask = ret; > break; > + > + case CMD_LINE_OPT_TRANSFER_MODE_NUM: > + ret = parse_transfer_mode(eh_conf, optarg); > + if (ret < 0) { > + printf("Invalid packet transfer mode\n"); > + print_usage(prgname); > + return -1; > + } > + break; > + > + case CMD_LINE_OPT_SCHEDULE_TYPE_NUM: > + ret = parse_schedule_type(eh_conf, optarg); > + if (ret < 0) { > + printf("Invalid queue schedule type\n"); > + print_usage(prgname); > + return -1; > + } > + break; > + > + case CMD_LINE_OPT_IPSEC_MODE_NUM: > + ret = parse_ipsec_mode(eh_conf, optarg); > + if (ret < 0) { > + printf("Invalid ipsec mode\n"); > + print_usage(prgname); > + return -1; > + } > + break; > + > + case CMD_LINE_OPT_IPSEC_DIR_NUM: > + ret = parse_ipsec_dir(eh_conf, optarg); > + if (ret < 0) { > + printf("Invalid ipsec direction\n"); > + print_usage(prgname); > + return -1; > + } > + break; > + > case CMD_LINE_OPT_RX_OFFLOAD_NUM: > ret = parse_mask(optarg, 
&dev_rx_offload); > if (ret != 0) { > @@ -2457,6 +2620,132 @@ create_default_ipsec_flow(uint16_t port_id, uint64_t rx_offloads) > return ret; > } > > +static struct eh_conf * > +eh_conf_init(void) > +{ > + struct eventmode_conf *em_conf = NULL; > + struct eh_conf *conf = NULL; > + unsigned int eth_core_id; > + uint32_t nb_bytes; > + void *mem = NULL; > + > + /* Allocate memory for config */ > + conf = calloc(1, sizeof(struct eh_conf)); > + if (conf == NULL) { > + printf("Failed to allocate memory for eventmode helper conf"); > + goto err; > + } > + > + /* Set default conf */ > + > + /* Packet transfer mode: poll */ > + conf->mode = EH_PKT_TRANSFER_MODE_POLL; > + conf->ipsec_mode = EH_IPSEC_MODE_TYPE_APP; > + conf->ipsec_dir = EH_IPSEC_DIR_TYPE_OUTBOUND; > + > + /* Keep all ethernet ports enabled by default */ > + conf->eth_portmask = -1; > + > + /* Allocate memory for event mode params */ > + conf->mode_params = > + calloc(1, sizeof(struct eventmode_conf)); > + if (conf->mode_params == NULL) { > + printf("Failed to allocate memory for event mode params"); > + goto err; > + } > + > + /* Get eventmode conf */ > + em_conf = (struct eventmode_conf *)(conf->mode_params); > + > + /* Allocate and initialize bitmap for eth cores */ > + nb_bytes = rte_bitmap_get_memory_footprint(RTE_MAX_LCORE); > + if (!nb_bytes) { > + printf("Failed to get bitmap footprint"); > + goto err; > + } > + > + mem = rte_zmalloc("event-helper-ethcore-bitmap", nb_bytes, > + RTE_CACHE_LINE_SIZE); > + if (!mem) { > + printf("Failed to allocate memory for eth cores bitmap\n"); > + goto err; > + } > + > + em_conf->eth_core_mask = rte_bitmap_init(RTE_MAX_LCORE, mem, nb_bytes); > + if (!em_conf->eth_core_mask) { > + printf("Failed to initialize bitmap"); > + goto err; > + } > + > + /* Schedule type: ordered */ > + em_conf->ext_params.sched_type = RTE_SCHED_TYPE_ORDERED; > + > + /* Set two cores as eth cores for Rx & Tx */ > + > + /* Use first core other than master core as Rx core */ > + eth_core_id = 
rte_get_next_lcore(0, /* curr core */ > + 1, /* skip master core */ > + 0 /* wrap */); > + > + rte_bitmap_set(em_conf->eth_core_mask, eth_core_id); > + > + /* Use next core as Tx core */ > + eth_core_id = rte_get_next_lcore(eth_core_id, /* curr core */ > + 1, /* skip master core */ > + 0 /* wrap */); > + > + rte_bitmap_set(em_conf->eth_core_mask, eth_core_id); > + > + return conf; > +err: > + rte_free(mem); > + free(em_conf); > + free(conf); > + return NULL; > +} > + > +static void > +eh_conf_uninit(struct eh_conf *conf) > +{ > + struct eventmode_conf *em_conf = NULL; > + > + /* Get eventmode conf */ > + em_conf = (struct eventmode_conf *)(conf->mode_params); > + > + /* Free evenmode configuration memory */ > + rte_free(em_conf->eth_core_mask); > + free(em_conf); > + free(conf); > +} > + > +static void > +signal_handler(int signum) > +{ > + if (signum == SIGINT || signum == SIGTERM) { > + uint16_t port_id; > + printf("\n\nSignal %d received, preparing to exit...\n", > + signum); > + force_quit = true; > + > + /* Destroy the default ipsec flow */ > + RTE_ETH_FOREACH_DEV(port_id) { > + if ((enabled_port_mask & (1 << port_id)) == 0) > + continue; > + if (flow_info_tbl[port_id].rx_def_flow) { > + struct rte_flow_error err; > + int ret; As we are going to call dev_stop(), etc. at force_quit below, is there any reason to call rte_flow_destroy() here? Just curious. 
> + ret = rte_flow_destroy(port_id, > + flow_info_tbl[port_id].rx_def_flow, > + &err); > + if (ret) > + RTE_LOG(ERR, IPSEC, > + "Failed to destroy flow for port %u, " > + "err msg: %s\n", port_id, err.message); > + } > + } > + } > +} > + > int32_t > main(int32_t argc, char **argv) > { > @@ -2466,6 +2755,7 @@ main(int32_t argc, char **argv) > uint8_t socket_id; > uint16_t portid; > uint64_t req_rx_offloads, req_tx_offloads; > + struct eh_conf *eh_conf = NULL; > size_t sess_sz; > > /* init EAL */ > @@ -2475,8 +2765,17 @@ main(int32_t argc, char **argv) > argc -= ret; > argv += ret; > > + force_quit = false; > + signal(SIGINT, signal_handler); > + signal(SIGTERM, signal_handler); > + > + /* initialize event helper configuration */ > + eh_conf = eh_conf_init(); > + if (eh_conf == NULL) > + rte_exit(EXIT_FAILURE, "Failed to init event helper config"); > + > /* parse application arguments (after the EAL ones) */ > - ret = parse_args(argc, argv); > + ret = parse_args(argc, argv, eh_conf); > if (ret < 0) > rte_exit(EXIT_FAILURE, "Invalid parameters\n"); > > @@ -2592,12 +2891,43 @@ main(int32_t argc, char **argv) > > check_all_ports_link_status(enabled_port_mask); > > + /* > + * Set the enabled port mask in helper config for use by helper > + * sub-system. This will be used while intializing devices using > + * helper sub-system. 
> + */ > + eh_conf->eth_portmask = enabled_port_mask; > + > + /* Initialize eventmode components */ > + ret = eh_devs_init(eh_conf); > + if (ret < 0) > + rte_exit(EXIT_FAILURE, "eh_devs_init failed, err=%d\n", ret); > + > /* launch per-lcore init on every lcore */ > - rte_eal_mp_remote_launch(main_loop, NULL, CALL_MASTER); > + rte_eal_mp_remote_launch(ipsec_launch_one_lcore, eh_conf, CALL_MASTER); > + > RTE_LCORE_FOREACH_SLAVE(lcore_id) { > if (rte_eal_wait_lcore(lcore_id) < 0) > return -1; > } > > + /* Uninitialize eventmode components */ > + ret = eh_devs_uninit(eh_conf); > + if (ret < 0) > + rte_exit(EXIT_FAILURE, "eh_devs_uninit failed, err=%d\n", ret); > + > + /* Free eventmode configuration memory */ > + eh_conf_uninit(eh_conf); > + > + RTE_ETH_FOREACH_DEV(portid) { > + if ((enabled_port_mask & (1 << portid)) == 0) > + continue; > + printf("Closing port %d...", portid); > + rte_eth_dev_stop(portid); > + rte_eth_dev_close(portid); > + printf(" Done\n"); > + } > + printf("Bye...\n"); > + > return 0; > } ^ permalink raw reply [flat|nested] 147+ messages in thread
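The review above asks for human-readable values for `--transfer-mode` instead of bare decimals. One common way to do that is a small table-driven parser mapping option strings to the mode enum, sketched below. The function name and the exact string values are illustrative assumptions (the enum names mirror the patch's `EH_PKT_TRANSFER_MODE_*`), not the final ipsec-secgw code.

```c
#include <string.h>

/* Transfer modes, mirroring EH_PKT_TRANSFER_MODE_* from the patch */
enum transfer_mode { TRANSFER_MODE_POLL = 0, TRANSFER_MODE_EVENT };

/*
 * Map a human-readable option value to a mode.
 * Returns 0 on success, -1 on unknown input.
 * Hypothetical helper for illustration only.
 */
static int
parse_transfer_mode_str(const char *arg, enum transfer_mode *mode)
{
	static const struct {
		const char *name;
		enum transfer_mode mode;
	} tbl[] = {
		{ "poll",  TRANSFER_MODE_POLL },
		{ "event", TRANSFER_MODE_EVENT },
	};
	size_t i;

	for (i = 0; i < sizeof(tbl) / sizeof(tbl[0]); i++) {
		if (strcmp(tbl[i].name, arg) == 0) {
			*mode = tbl[i].mode;
			return 0;
		}
	}
	return -1;	/* caller prints usage and bails out */
}
```

The same table-driven shape extends naturally to `--schedule-type` (ordered/atomic/parallel) without multiplying strcmp chains in parse_args().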
* Re: [dpdk-dev] [PATCH 09/14] examples/ipsec-secgw: add eventmode to ipsec-secgw 2019-12-23 16:43 ` Ananyev, Konstantin @ 2020-01-03 10:18 ` Anoob Joseph 2020-01-06 15:45 ` Ananyev, Konstantin 0 siblings, 1 reply; 147+ messages in thread From: Anoob Joseph @ 2020-01-03 10:18 UTC (permalink / raw) To: Ananyev, Konstantin, Akhil Goyal, Nicolau, Radu, Thomas Monjalon Cc: Lukas Bartosik, Jerin Jacob Kollanukkaran, Narayana Prasad Raju Athreya, Ankur Dwivedi, Archana Muniganti, Tejasree Kondoj, Vamsi Krishna Attunuru, dev Hi Konstantin, Please see inline. Thanks, Anoob > -----Original Message----- > From: dev <dev-bounces@dpdk.org> On Behalf Of Ananyev, Konstantin > Sent: Monday, December 23, 2019 10:13 PM > To: Anoob Joseph <anoobj@marvell.com>; Akhil Goyal > <akhil.goyal@nxp.com>; Nicolau, Radu <radu.nicolau@intel.com>; Thomas > Monjalon <thomas@monjalon.net> > Cc: Lukas Bartosik <lbartosik@marvell.com>; Jerin Jacob Kollanukkaran > <jerinj@marvell.com>; Narayana Prasad Raju Athreya > <pathreya@marvell.com>; Ankur Dwivedi <adwivedi@marvell.com>; > Archana Muniganti <marchana@marvell.com>; Tejasree Kondoj > <ktejasree@marvell.com>; Vamsi Krishna Attunuru > <vattunuru@marvell.com>; dev@dpdk.org > Subject: Re: [dpdk-dev] [PATCH 09/14] examples/ipsec-secgw: add > eventmode to ipsec-secgw > > > > > Add eventmode support to ipsec-secgw. This uses event helper to setup > > and use the eventmode capabilities. Add driver inbound worker. 
> > > > Example command: > > ./ipsec-secgw -c 0x1 -w 0002:02:00.0,ipsec_in_max_spi=100 -w > > 0002:07:00.0 -w 0002:0e:00.0 -w 0002:10:00.1 -- -P -p 0x3 -u 0x1 > > --config "(0,0,0),(1,0,0)" -f a-aes-gcm-msa.cfg --transfer-mode 1 > > --schedule-type 2 --process-mode drv --process-dir in > > > > Signed-off-by: Anoob Joseph <anoobj@marvell.com> > > Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> > > --- > > examples/ipsec-secgw/Makefile | 1 + > > examples/ipsec-secgw/event_helper.c | 3 + > > examples/ipsec-secgw/event_helper.h | 26 +++ > > examples/ipsec-secgw/ipsec-secgw.c | 344 > +++++++++++++++++++++++++++++++++++- > > examples/ipsec-secgw/ipsec.h | 7 + > > examples/ipsec-secgw/ipsec_worker.c | 180 +++++++++++++++++++ > > examples/ipsec-secgw/meson.build | 2 +- > > 7 files changed, 555 insertions(+), 8 deletions(-) create mode > > 100644 examples/ipsec-secgw/ipsec_worker.c > > > > diff --git a/examples/ipsec-secgw/Makefile > > b/examples/ipsec-secgw/Makefile index 09e3c5a..f6fd94c 100644 > > --- a/examples/ipsec-secgw/Makefile > > +++ b/examples/ipsec-secgw/Makefile > > @@ -15,6 +15,7 @@ SRCS-y += sa.c > > SRCS-y += rt.c > > SRCS-y += ipsec_process.c > > SRCS-y += ipsec-secgw.c > > +SRCS-y += ipsec_worker.c > > SRCS-y += event_helper.c > > > > CFLAGS += -gdwarf-2 > > diff --git a/examples/ipsec-secgw/event_helper.c > > b/examples/ipsec-secgw/event_helper.c > > index 6549875..44f997d 100644 > > --- a/examples/ipsec-secgw/event_helper.c > > +++ b/examples/ipsec-secgw/event_helper.c > > @@ -984,6 +984,9 @@ eh_find_worker(uint32_t lcore_id, struct eh_conf > *conf, > > else > > curr_conf.cap.burst = EH_RX_TYPE_NON_BURST; > > > > + curr_conf.cap.ipsec_mode = conf->ipsec_mode; > > + curr_conf.cap.ipsec_dir = conf->ipsec_dir; > > + > > /* Parse the passed list and see if we have matching capabilities */ > > > > /* Initialize the pointer used to traverse the list */ diff --git > > a/examples/ipsec-secgw/event_helper.h > > b/examples/ipsec-secgw/event_helper.h > > 
index 2895dfa..07849b0 100644 > > --- a/examples/ipsec-secgw/event_helper.h > > +++ b/examples/ipsec-secgw/event_helper.h > > @@ -74,6 +74,22 @@ enum eh_tx_types { > > EH_TX_TYPE_NO_INTERNAL_PORT > > }; > > > > +/** > > + * Event mode ipsec mode types > > + */ > > +enum eh_ipsec_mode_types { > > + EH_IPSEC_MODE_TYPE_APP = 0, > > + EH_IPSEC_MODE_TYPE_DRIVER > > +}; > > + > > +/** > > + * Event mode ipsec direction types > > + */ > > +enum eh_ipsec_dir_types { > > + EH_IPSEC_DIR_TYPE_OUTBOUND = 0, > > + EH_IPSEC_DIR_TYPE_INBOUND, > > +}; > > + > > /* Event dev params */ > > struct eventdev_params { > > uint8_t eventdev_id; > > @@ -183,6 +199,12 @@ struct eh_conf { > > */ > > void *mode_params; > > /**< Mode specific parameters */ > > + > > + /** Application specific params */ > > + enum eh_ipsec_mode_types ipsec_mode; > > + /**< Mode of ipsec run */ > > + enum eh_ipsec_dir_types ipsec_dir; > > + /**< Direction of ipsec processing */ > > }; > > > > /* Workers registered by the application */ @@ -194,6 +216,10 @@ > > struct eh_app_worker_params { > > /**< Specify status of rx type burst */ > > uint64_t tx_internal_port : 1; > > /**< Specify whether tx internal port is available */ > > + uint64_t ipsec_mode : 1; > > + /**< Specify ipsec processing level */ > > + uint64_t ipsec_dir : 1; > > + /**< Specify direction of ipsec */ > > }; > > uint64_t u64; > > } cap; > > diff --git a/examples/ipsec-secgw/ipsec-secgw.c > > b/examples/ipsec-secgw/ipsec-secgw.c > > index 7506922..c5d95b9 100644 > > --- a/examples/ipsec-secgw/ipsec-secgw.c > > +++ b/examples/ipsec-secgw/ipsec-secgw.c > > @@ -2,6 +2,7 @@ > > * Copyright(c) 2016 Intel Corporation > > */ > > > > +#include <stdbool.h> > > #include <stdio.h> > > #include <stdlib.h> > > #include <stdint.h> > > @@ -14,6 +15,7 @@ > > #include <sys/queue.h> > > #include <stdarg.h> > > #include <errno.h> > > +#include <signal.h> > > #include <getopt.h> > > > > #include <rte_common.h> > > @@ -41,12 +43,17 @@ > > #include <rte_jhash.h> > > 
#include <rte_cryptodev.h> > > #include <rte_security.h> > > +#include <rte_bitmap.h> > > +#include <rte_eventdev.h> > > #include <rte_ip.h> > > #include <rte_ip_frag.h> > > > > +#include "event_helper.h" > > #include "ipsec.h" > > #include "parser.h" > > > > +volatile bool force_quit; > > + > > #define RTE_LOGTYPE_IPSEC RTE_LOGTYPE_USER1 > > > > #define MAX_JUMBO_PKT_LEN 9600 > > @@ -133,12 +140,21 @@ struct flow_info > flow_info_tbl[RTE_MAX_ETHPORTS]; > > #define CMD_LINE_OPT_CONFIG "config" > > #define CMD_LINE_OPT_SINGLE_SA "single-sa" > > #define CMD_LINE_OPT_CRYPTODEV_MASK "cryptodev_mask" > > +#define CMD_LINE_OPT_TRANSFER_MODE "transfer-mode" > > +#define CMD_LINE_OPT_SCHEDULE_TYPE "schedule-type" > > +#define CMD_LINE_OPT_IPSEC_MODE "process-mode" > > +#define CMD_LINE_OPT_IPSEC_DIR "process-dir" > > #define CMD_LINE_OPT_RX_OFFLOAD "rxoffload" > > #define CMD_LINE_OPT_TX_OFFLOAD "txoffload" > > #define CMD_LINE_OPT_REASSEMBLE "reassemble" > > #define CMD_LINE_OPT_MTU "mtu" > > #define CMD_LINE_OPT_FRAG_TTL "frag-ttl" > > > > +#define CMD_LINE_ARG_APP "app" > > +#define CMD_LINE_ARG_DRV "drv" > > +#define CMD_LINE_ARG_INB "in" > > +#define CMD_LINE_ARG_OUT "out" > > + > > enum { > > /* long options mapped to a short option */ > > > > @@ -149,7 +165,11 @@ enum { > > CMD_LINE_OPT_CONFIG_NUM, > > CMD_LINE_OPT_SINGLE_SA_NUM, > > CMD_LINE_OPT_CRYPTODEV_MASK_NUM, > > + CMD_LINE_OPT_TRANSFER_MODE_NUM, > > + CMD_LINE_OPT_SCHEDULE_TYPE_NUM, > > CMD_LINE_OPT_RX_OFFLOAD_NUM, > > + CMD_LINE_OPT_IPSEC_MODE_NUM, > > + CMD_LINE_OPT_IPSEC_DIR_NUM, > > CMD_LINE_OPT_TX_OFFLOAD_NUM, > > CMD_LINE_OPT_REASSEMBLE_NUM, > > CMD_LINE_OPT_MTU_NUM, > > @@ -160,6 +180,10 @@ static const struct option lgopts[] = { > > {CMD_LINE_OPT_CONFIG, 1, 0, CMD_LINE_OPT_CONFIG_NUM}, > > {CMD_LINE_OPT_SINGLE_SA, 1, 0, > CMD_LINE_OPT_SINGLE_SA_NUM}, > > {CMD_LINE_OPT_CRYPTODEV_MASK, 1, 0, > > CMD_LINE_OPT_CRYPTODEV_MASK_NUM}, > > + {CMD_LINE_OPT_TRANSFER_MODE, 1, 0, > 
CMD_LINE_OPT_TRANSFER_MODE_NUM}, > > + {CMD_LINE_OPT_SCHEDULE_TYPE, 1, 0, > CMD_LINE_OPT_SCHEDULE_TYPE_NUM}, > > + {CMD_LINE_OPT_IPSEC_MODE, 1, 0, > CMD_LINE_OPT_IPSEC_MODE_NUM}, > > + {CMD_LINE_OPT_IPSEC_DIR, 1, 0, > CMD_LINE_OPT_IPSEC_DIR_NUM}, > > {CMD_LINE_OPT_RX_OFFLOAD, 1, 0, > CMD_LINE_OPT_RX_OFFLOAD_NUM}, > > {CMD_LINE_OPT_TX_OFFLOAD, 1, 0, > CMD_LINE_OPT_TX_OFFLOAD_NUM}, > > {CMD_LINE_OPT_REASSEMBLE, 1, 0, > CMD_LINE_OPT_REASSEMBLE_NUM}, @@ > > -1094,8 +1118,8 @@ drain_outbound_crypto_queues(const struct > > lcore_conf *qconf, } > > > > /* main processing loop */ > > -static int32_t > > -main_loop(__attribute__((unused)) void *dummy) > > +void > > +ipsec_poll_mode_worker(void) > > { > > struct rte_mbuf *pkts[MAX_PKT_BURST]; > > uint32_t lcore_id; > > @@ -1137,7 +1161,7 @@ main_loop(__attribute__((unused)) void > *dummy) > > if (qconf->nb_rx_queue == 0) { > > RTE_LOG(DEBUG, IPSEC, "lcore %u has nothing to do\n", > > lcore_id); > > - return 0; > > + return; > > } > > > > RTE_LOG(INFO, IPSEC, "entering main loop on lcore %u\n", lcore_id); > > @@ -1150,7 +1174,7 @@ main_loop(__attribute__((unused)) void > *dummy) > > lcore_id, portid, queueid); > > } > > > > - while (1) { > > + while (!force_quit) { > > cur_tsc = rte_rdtsc(); > > > > /* TX queue buffer drain */ > > @@ -1277,6 +1301,10 @@ print_usage(const char *prgname) > > " --config (port,queue,lcore)[,(port,queue,lcore)]" > > " [--single-sa SAIDX]" > > " [--cryptodev_mask MASK]" > > + " [--transfer-mode MODE]" > > + " [--schedule-type TYPE]" > > + " [--process-mode MODE]" > > + " [--process-dir DIR]" > > " [--" CMD_LINE_OPT_RX_OFFLOAD " RX_OFFLOAD_MASK]" > > " [--" CMD_LINE_OPT_TX_OFFLOAD " TX_OFFLOAD_MASK]" > > " [--" CMD_LINE_OPT_REASSEMBLE " > REASSEMBLE_TABLE_SIZE]" > > @@ -1298,6 +1326,22 @@ print_usage(const char *prgname) > > " bypassing the SP\n" > > " --cryptodev_mask MASK: Hexadecimal bitmask of the > crypto\n" > > " devices to configure\n" > > + " --transfer-mode MODE\n" > > + " 0: Packet 
transfer via polling (default)\n" > > + " 1: Packet transfer via eventdev\n" > > + " --schedule-type TYPE queue schedule type, used only > when\n" > > + " transfer mode is set to eventdev\n" > > + " 0: Ordered (default)\n" > > + " 1: Atomic\n" > > + " 2: Parallel\n" > > For last two, why not something human-readable? > I.E. == --transfer-mode=(poll|event) or so. > Same for schedule-type. [Anoob] Will do so in v2. > > > + " --process-mode MODE processing mode, used only > when\n" > > + " transfer mode is set to eventdev\n" > > + " \"app\" : application mode (default)\n" > > + " \"drv\" : driver mode\n" > > + " --process-dir DIR processing direction, used only when\n" > > + " transfer mode is set to eventdev\n" > > + " \"out\" : outbound (default)\n" > > + " \"in\" : inbound\n" > > Hmm, and why is it not possible in eventdev mode to handle both inbound > and outbound traffic? > Where is the limitation: eventdev framework/PMD/ipsec-secgw? [Anoob] It's not a limitation of any of the nodes. The current ipsec-secgw has a data path check of port to determine whether inbound or outbound processing needs to be done. In case of poll-mode, we have specific cores polling fixed eth port & queue. So the extra check involved doesn't cost much. But in case of event-mode, we will have both inbound & outbound packets ending up on the same core. So the penalty of running inbound & outbound at the same time (and relying on the data path check) is higher in case of event mode. For inline ipsec implementation, this impact isn't that much and we were able to minimize the perf degradation to 1%. I would expect lookaside crypto/protocol to have higher impacts. That said, I'm okay with removing the extra option and retaining the current behavior. If you think a single instance of ipsec-secgw should work bidirectionally, I can make the required changes and see the perf impact. 
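Konstantin's human-readable suggestion (which Anoob agreed to adopt for v2) would replace the numeric `--transfer-mode` values with keywords. A minimal standalone sketch, assuming stand-in enum values for the `EH_PKT_TRANSFER_MODE_*` constants in event_helper.h:

```c
#include <string.h>

/* Stand-ins for the EH_PKT_TRANSFER_MODE_* values in event_helper.h */
enum pkt_transfer_mode {
	TRANSFER_MODE_POLL = 0,
	TRANSFER_MODE_EVENT
};

/* Parse "--transfer-mode" as a keyword rather than a bare number. */
static int
parse_transfer_mode_str(const char *arg, enum pkt_transfer_mode *mode)
{
	if (strcmp(arg, "poll") == 0)
		*mode = TRANSFER_MODE_POLL;
	else if (strcmp(arg, "event") == 0)
		*mode = TRANSFER_MODE_EVENT;
	else
		return -1; /* unsupported keyword */
	return 0;
}
```

The same keyword pattern extends naturally to `--schedule-type=(ordered|atomic|parallel)`.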
> > > " --" CMD_LINE_OPT_RX_OFFLOAD > > ": bitmask of the RX HW offload capabilities to enable/use\n" > > " (DEV_RX_OFFLOAD_*)\n" > > @@ -1433,7 +1477,89 @@ print_app_sa_prm(const struct app_sa_prm > *prm) > > } > > > > static int32_t > > -parse_args(int32_t argc, char **argv) > > +eh_parse_decimal(const char *str) > > +{ > > + unsigned long num; > > + char *end = NULL; > > + > > + num = strtoul(str, &end, 10); > > + if ((str[0] == '\0') || (end == NULL) || (*end != '\0')) > > + return -EINVAL; > > + > > + return num; > > +} > > There already exists parse_decimal(), why to create a dup? [Anoob] Will this in v2. > > > + > > +static int > > +parse_transfer_mode(struct eh_conf *conf, const char *optarg) { > > + int32_t parsed_dec; > > + > > + parsed_dec = eh_parse_decimal(optarg); > > + if (parsed_dec != EH_PKT_TRANSFER_MODE_POLL && > > + parsed_dec != EH_PKT_TRANSFER_MODE_EVENT) { > > + printf("Unsupported packet transfer mode"); > > + return -EINVAL; > > + } > > + conf->mode = parsed_dec; > > + return 0; > > +} > > + > > +static int > > +parse_schedule_type(struct eh_conf *conf, const char *optarg) { > > + struct eventmode_conf *em_conf = NULL; > > + int32_t parsed_dec; > > + > > + parsed_dec = eh_parse_decimal(optarg); > > + if (parsed_dec != RTE_SCHED_TYPE_ORDERED && > > + parsed_dec != RTE_SCHED_TYPE_ATOMIC && > > + parsed_dec != RTE_SCHED_TYPE_PARALLEL) > > + return -EINVAL; > > + > > + /* Get eventmode conf */ > > + em_conf = (struct eventmode_conf *)(conf->mode_params); > > + > > + em_conf->ext_params.sched_type = parsed_dec; > > + > > + return 0; > > +} > > + > > +static int > > +parse_ipsec_mode(struct eh_conf *conf, const char *optarg) { > > + if (!strncmp(CMD_LINE_ARG_APP, optarg, > strlen(CMD_LINE_ARG_APP)) && > > + strlen(optarg) == strlen(CMD_LINE_ARG_APP)) > > Ugh, that's an ugly construction, why not just: > if (strcmp(CMD_LINE_ARG_APP, optarg) == 0) ? [Anoob] Will fix this in v2. 
> > > + conf->ipsec_mode = EH_IPSEC_MODE_TYPE_APP; > > + else if (!strncmp(CMD_LINE_ARG_DRV, optarg, > strlen(CMD_LINE_ARG_DRV)) && > > + strlen(optarg) == strlen(CMD_LINE_ARG_DRV)) > > + conf->ipsec_mode = EH_IPSEC_MODE_TYPE_DRIVER; > > + else { > > + printf("Unsupported ipsec mode\n"); > > + return -EINVAL; > > + } > > + > > + return 0; > > +} > > + > > +static int > > +parse_ipsec_dir(struct eh_conf *conf, const char *optarg) { > > + if (!strncmp(CMD_LINE_ARG_INB, optarg, > strlen(CMD_LINE_ARG_INB)) && > > + strlen(optarg) == strlen(CMD_LINE_ARG_INB)) > > + conf->ipsec_dir = EH_IPSEC_DIR_TYPE_INBOUND; > > + else if (!strncmp(CMD_LINE_ARG_OUT, optarg, > strlen(CMD_LINE_ARG_OUT)) && > > + strlen(optarg) == strlen(CMD_LINE_ARG_OUT)) > > + conf->ipsec_dir = EH_IPSEC_DIR_TYPE_OUTBOUND; > > + else { > > + printf("Unsupported ipsec direction\n"); > > + return -EINVAL; > > + } > > + > > + return 0; > > +} > > + > > +static int32_t > > +parse_args(int32_t argc, char **argv, struct eh_conf *eh_conf) > > { > > int opt; > > int64_t ret; > > @@ -1536,6 +1662,43 @@ parse_args(int32_t argc, char **argv) > > /* else */ > > enabled_cryptodev_mask = ret; > > break; > > + > > + case CMD_LINE_OPT_TRANSFER_MODE_NUM: > > + ret = parse_transfer_mode(eh_conf, optarg); > > + if (ret < 0) { > > + printf("Invalid packet transfer mode\n"); > > + print_usage(prgname); > > + return -1; > > + } > > + break; > > + > > + case CMD_LINE_OPT_SCHEDULE_TYPE_NUM: > > + ret = parse_schedule_type(eh_conf, optarg); > > + if (ret < 0) { > > + printf("Invalid queue schedule type\n"); > > + print_usage(prgname); > > + return -1; > > + } > > + break; > > + > > + case CMD_LINE_OPT_IPSEC_MODE_NUM: > > + ret = parse_ipsec_mode(eh_conf, optarg); > > + if (ret < 0) { > > + printf("Invalid ipsec mode\n"); > > + print_usage(prgname); > > + return -1; > > + } > > + break; > > + > > + case CMD_LINE_OPT_IPSEC_DIR_NUM: > > + ret = parse_ipsec_dir(eh_conf, optarg); > > + if (ret < 0) { > > + printf("Invalid ipsec 
direction\n"); > > + print_usage(prgname); > > + return -1; > > + } > > + break; > > + > > case CMD_LINE_OPT_RX_OFFLOAD_NUM: > > ret = parse_mask(optarg, &dev_rx_offload); > > if (ret != 0) { > > @@ -2457,6 +2620,132 @@ create_default_ipsec_flow(uint16_t port_id, > uint64_t rx_offloads) > > return ret; > > } > > > > +static struct eh_conf * > > +eh_conf_init(void) > > +{ > > + struct eventmode_conf *em_conf = NULL; > > + struct eh_conf *conf = NULL; > > + unsigned int eth_core_id; > > + uint32_t nb_bytes; > > + void *mem = NULL; > > + > > + /* Allocate memory for config */ > > + conf = calloc(1, sizeof(struct eh_conf)); > > + if (conf == NULL) { > > + printf("Failed to allocate memory for eventmode helper > conf"); > > + goto err; > > + } > > + > > + /* Set default conf */ > > + > > + /* Packet transfer mode: poll */ > > + conf->mode = EH_PKT_TRANSFER_MODE_POLL; > > + conf->ipsec_mode = EH_IPSEC_MODE_TYPE_APP; > > + conf->ipsec_dir = EH_IPSEC_DIR_TYPE_OUTBOUND; > > + > > + /* Keep all ethernet ports enabled by default */ > > + conf->eth_portmask = -1; > > + > > + /* Allocate memory for event mode params */ > > + conf->mode_params = > > + calloc(1, sizeof(struct eventmode_conf)); > > + if (conf->mode_params == NULL) { > > + printf("Failed to allocate memory for event mode params"); > > + goto err; > > + } > > + > > + /* Get eventmode conf */ > > + em_conf = (struct eventmode_conf *)(conf->mode_params); > > + > > + /* Allocate and initialize bitmap for eth cores */ > > + nb_bytes = rte_bitmap_get_memory_footprint(RTE_MAX_LCORE); > > + if (!nb_bytes) { > > + printf("Failed to get bitmap footprint"); > > + goto err; > > + } > > + > > + mem = rte_zmalloc("event-helper-ethcore-bitmap", nb_bytes, > > + RTE_CACHE_LINE_SIZE); > > + if (!mem) { > > + printf("Failed to allocate memory for eth cores bitmap\n"); > > + goto err; > > + } > > + > > + em_conf->eth_core_mask = rte_bitmap_init(RTE_MAX_LCORE, > mem, nb_bytes); > > + if (!em_conf->eth_core_mask) { > > + printf("Failed 
to initialize bitmap"); > > + goto err; > > + } > > + > > + /* Schedule type: ordered */ > > + em_conf->ext_params.sched_type = RTE_SCHED_TYPE_ORDERED; > > + > > + /* Set two cores as eth cores for Rx & Tx */ > > + > > + /* Use first core other than master core as Rx core */ > > + eth_core_id = rte_get_next_lcore(0, /* curr core */ > > + 1, /* skip master core */ > > + 0 /* wrap */); > > + > > + rte_bitmap_set(em_conf->eth_core_mask, eth_core_id); > > + > > + /* Use next core as Tx core */ > > + eth_core_id = rte_get_next_lcore(eth_core_id, /* curr core > */ > > + 1, /* skip master core */ > > + 0 /* wrap */); > > + > > + rte_bitmap_set(em_conf->eth_core_mask, eth_core_id); > > + > > + return conf; > > +err: > > + rte_free(mem); > > + free(em_conf); > > + free(conf); > > + return NULL; > > +} > > + > > +static void > > +eh_conf_uninit(struct eh_conf *conf) > > +{ > > + struct eventmode_conf *em_conf = NULL; > > + > > + /* Get eventmode conf */ > > + em_conf = (struct eventmode_conf *)(conf->mode_params); > > + > > + /* Free evenmode configuration memory */ > > + rte_free(em_conf->eth_core_mask); > > + free(em_conf); > > + free(conf); > > +} > > + > > +static void > > +signal_handler(int signum) > > +{ > > + if (signum == SIGINT || signum == SIGTERM) { > > + uint16_t port_id; > > + printf("\n\nSignal %d received, preparing to exit...\n", > > + signum); > > + force_quit = true; > > + > > + /* Destroy the default ipsec flow */ > > + RTE_ETH_FOREACH_DEV(port_id) { > > + if ((enabled_port_mask & (1 << port_id)) == 0) > > + continue; > > + if (flow_info_tbl[port_id].rx_def_flow) { > > + struct rte_flow_error err; > > + int ret; > > As we are going to call dev_stop(), etc. at force_quit below, is there any > reason to call rte_flow_destroy() here? > Just curious. [Anoob] dev_stop() should clear all the rte_flow entries. But doing it from the app as a good citizen. 😊 I can remove it since the same is not done for SA specific rte_flows created for inline crypto. 
> > > + ret = rte_flow_destroy(port_id, > > + flow_info_tbl[port_id].rx_def_flow, > > + &err); > > + if (ret) > > + RTE_LOG(ERR, IPSEC, > > + "Failed to destroy flow for port %u, " > > + "err msg: %s\n", port_id, > err.message); > > + } > > + } > > + } > > +} > > + > > int32_t > > main(int32_t argc, char **argv) > > { > > @@ -2466,6 +2755,7 @@ main(int32_t argc, char **argv) > > uint8_t socket_id; > > uint16_t portid; > > uint64_t req_rx_offloads, req_tx_offloads; > > + struct eh_conf *eh_conf = NULL; > > size_t sess_sz; > > > > /* init EAL */ > > @@ -2475,8 +2765,17 @@ main(int32_t argc, char **argv) > > argc -= ret; > > argv += ret; > > > > + force_quit = false; > > + signal(SIGINT, signal_handler); > > + signal(SIGTERM, signal_handler); > > + > > + /* initialize event helper configuration */ > > + eh_conf = eh_conf_init(); > > + if (eh_conf == NULL) > > + rte_exit(EXIT_FAILURE, "Failed to init event helper config"); > > + > > /* parse application arguments (after the EAL ones) */ > > - ret = parse_args(argc, argv); > > + ret = parse_args(argc, argv, eh_conf); > > if (ret < 0) > > rte_exit(EXIT_FAILURE, "Invalid parameters\n"); > > > > @@ -2592,12 +2891,43 @@ main(int32_t argc, char **argv) > > > > check_all_ports_link_status(enabled_port_mask); > > > > + /* > > + * Set the enabled port mask in helper config for use by helper > > + * sub-system. This will be used while intializing devices using > > + * helper sub-system. 
> > + */ > > + eh_conf->eth_portmask = enabled_port_mask; > > + > > + /* Initialize eventmode components */ > > + ret = eh_devs_init(eh_conf); > > + if (ret < 0) > > + rte_exit(EXIT_FAILURE, "eh_devs_init failed, err=%d\n", > ret); > > + > > /* launch per-lcore init on every lcore */ > > - rte_eal_mp_remote_launch(main_loop, NULL, CALL_MASTER); > > + rte_eal_mp_remote_launch(ipsec_launch_one_lcore, eh_conf, > > +CALL_MASTER); > > + > > RTE_LCORE_FOREACH_SLAVE(lcore_id) { > > if (rte_eal_wait_lcore(lcore_id) < 0) > > return -1; > > } > > > > + /* Uninitialize eventmode components */ > > + ret = eh_devs_uninit(eh_conf); > > + if (ret < 0) > > + rte_exit(EXIT_FAILURE, "eh_devs_uninit failed, err=%d\n", > ret); > > + > > + /* Free eventmode configuration memory */ > > + eh_conf_uninit(eh_conf); > > + > > + RTE_ETH_FOREACH_DEV(portid) { > > + if ((enabled_port_mask & (1 << portid)) == 0) > > + continue; > > + printf("Closing port %d...", portid); > > + rte_eth_dev_stop(portid); > > + rte_eth_dev_close(portid); > > + printf(" Done\n"); > > + } > > + printf("Bye...\n"); > > + > > return 0; > > } ^ permalink raw reply [flat|nested] 147+ messages in thread
* Re: [dpdk-dev] [PATCH 09/14] examples/ipsec-secgw: add eventmode to ipsec-secgw 2020-01-03 10:18 ` Anoob Joseph @ 2020-01-06 15:45 ` Ananyev, Konstantin 2020-01-09 6:17 ` Anoob Joseph 0 siblings, 1 reply; 147+ messages in thread From: Ananyev, Konstantin @ 2020-01-06 15:45 UTC (permalink / raw) To: Anoob Joseph, Akhil Goyal, Nicolau, Radu, Thomas Monjalon Cc: Lukas Bartosik, Jerin Jacob Kollanukkaran, Narayana Prasad Raju Athreya, Ankur Dwivedi, Archana Muniganti, Tejasree Kondoj, Vamsi Krishna Attunuru, dev > > > Add eventmode support to ipsec-secgw. This uses event helper to setup > > > and use the eventmode capabilities. Add driver inbound worker. > > > > > > Example command: > > > ./ipsec-secgw -c 0x1 -w 0002:02:00.0,ipsec_in_max_spi=100 -w > > > 0002:07:00.0 -w 0002:0e:00.0 -w 0002:10:00.1 -- -P -p 0x3 -u 0x1 > > > --config "(0,0,0),(1,0,0)" -f a-aes-gcm-msa.cfg --transfer-mode 1 > > > --schedule-type 2 --process-mode drv --process-dir in > > > > > > Signed-off-by: Anoob Joseph <anoobj@marvell.com> > > > Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> > > > --- > > > examples/ipsec-secgw/Makefile | 1 + > > > examples/ipsec-secgw/event_helper.c | 3 + > > > examples/ipsec-secgw/event_helper.h | 26 +++ > > > examples/ipsec-secgw/ipsec-secgw.c | 344 > > +++++++++++++++++++++++++++++++++++- > > > examples/ipsec-secgw/ipsec.h | 7 + > > > examples/ipsec-secgw/ipsec_worker.c | 180 +++++++++++++++++++ > > > examples/ipsec-secgw/meson.build | 2 +- > > > 7 files changed, 555 insertions(+), 8 deletions(-) create mode > > > 100644 examples/ipsec-secgw/ipsec_worker.c > > > > > > diff --git a/examples/ipsec-secgw/Makefile > > > b/examples/ipsec-secgw/Makefile index 09e3c5a..f6fd94c 100644 > > > --- a/examples/ipsec-secgw/Makefile > > > +++ b/examples/ipsec-secgw/Makefile > > > @@ -15,6 +15,7 @@ SRCS-y += sa.c > > > SRCS-y += rt.c > > > SRCS-y += ipsec_process.c > > > SRCS-y += ipsec-secgw.c > > > +SRCS-y += ipsec_worker.c > > > SRCS-y += event_helper.c > > > > > > 
CFLAGS += -gdwarf-2 > > > diff --git a/examples/ipsec-secgw/event_helper.c > > > b/examples/ipsec-secgw/event_helper.c > > > index 6549875..44f997d 100644 > > > --- a/examples/ipsec-secgw/event_helper.c > > > +++ b/examples/ipsec-secgw/event_helper.c > > > @@ -984,6 +984,9 @@ eh_find_worker(uint32_t lcore_id, struct eh_conf > > *conf, > > > else > > > curr_conf.cap.burst = EH_RX_TYPE_NON_BURST; > > > > > > + curr_conf.cap.ipsec_mode = conf->ipsec_mode; > > > + curr_conf.cap.ipsec_dir = conf->ipsec_dir; > > > + > > > /* Parse the passed list and see if we have matching capabilities */ > > > > > > /* Initialize the pointer used to traverse the list */ diff --git > > > a/examples/ipsec-secgw/event_helper.h > > > b/examples/ipsec-secgw/event_helper.h > > > index 2895dfa..07849b0 100644 > > > --- a/examples/ipsec-secgw/event_helper.h > > > +++ b/examples/ipsec-secgw/event_helper.h > > > @@ -74,6 +74,22 @@ enum eh_tx_types { > > > EH_TX_TYPE_NO_INTERNAL_PORT > > > }; > > > > > > +/** > > > + * Event mode ipsec mode types > > > + */ > > > +enum eh_ipsec_mode_types { > > > + EH_IPSEC_MODE_TYPE_APP = 0, > > > + EH_IPSEC_MODE_TYPE_DRIVER > > > +}; > > > + > > > +/** > > > + * Event mode ipsec direction types > > > + */ > > > +enum eh_ipsec_dir_types { > > > + EH_IPSEC_DIR_TYPE_OUTBOUND = 0, > > > + EH_IPSEC_DIR_TYPE_INBOUND, > > > +}; > > > + > > > /* Event dev params */ > > > struct eventdev_params { > > > uint8_t eventdev_id; > > > @@ -183,6 +199,12 @@ struct eh_conf { > > > */ > > > void *mode_params; > > > /**< Mode specific parameters */ > > > + > > > + /** Application specific params */ > > > + enum eh_ipsec_mode_types ipsec_mode; > > > + /**< Mode of ipsec run */ > > > + enum eh_ipsec_dir_types ipsec_dir; > > > + /**< Direction of ipsec processing */ > > > }; > > > > > > /* Workers registered by the application */ @@ -194,6 +216,10 @@ > > > struct eh_app_worker_params { > > > /**< Specify status of rx type burst */ > > > uint64_t tx_internal_port : 1; > > > /**< 
Specify whether tx internal port is available */ > > > + uint64_t ipsec_mode : 1; > > > + /**< Specify ipsec processing level */ > > > + uint64_t ipsec_dir : 1; > > > + /**< Specify direction of ipsec */ > > > }; > > > uint64_t u64; > > > } cap; > > > diff --git a/examples/ipsec-secgw/ipsec-secgw.c > > > b/examples/ipsec-secgw/ipsec-secgw.c > > > index 7506922..c5d95b9 100644 > > > --- a/examples/ipsec-secgw/ipsec-secgw.c > > > +++ b/examples/ipsec-secgw/ipsec-secgw.c > > > @@ -2,6 +2,7 @@ > > > * Copyright(c) 2016 Intel Corporation > > > */ > > > > > > +#include <stdbool.h> > > > #include <stdio.h> > > > #include <stdlib.h> > > > #include <stdint.h> > > > @@ -14,6 +15,7 @@ > > > #include <sys/queue.h> > > > #include <stdarg.h> > > > #include <errno.h> > > > +#include <signal.h> > > > #include <getopt.h> > > > > > > #include <rte_common.h> > > > @@ -41,12 +43,17 @@ > > > #include <rte_jhash.h> > > > #include <rte_cryptodev.h> > > > #include <rte_security.h> > > > +#include <rte_bitmap.h> > > > +#include <rte_eventdev.h> > > > #include <rte_ip.h> > > > #include <rte_ip_frag.h> > > > > > > +#include "event_helper.h" > > > #include "ipsec.h" > > > #include "parser.h" > > > > > > +volatile bool force_quit; > > > + > > > #define RTE_LOGTYPE_IPSEC RTE_LOGTYPE_USER1 > > > > > > #define MAX_JUMBO_PKT_LEN 9600 > > > @@ -133,12 +140,21 @@ struct flow_info > > flow_info_tbl[RTE_MAX_ETHPORTS]; > > > #define CMD_LINE_OPT_CONFIG "config" > > > #define CMD_LINE_OPT_SINGLE_SA "single-sa" > > > #define CMD_LINE_OPT_CRYPTODEV_MASK "cryptodev_mask" > > > +#define CMD_LINE_OPT_TRANSFER_MODE "transfer-mode" > > > +#define CMD_LINE_OPT_SCHEDULE_TYPE "schedule-type" > > > +#define CMD_LINE_OPT_IPSEC_MODE "process-mode" > > > +#define CMD_LINE_OPT_IPSEC_DIR "process-dir" > > > #define CMD_LINE_OPT_RX_OFFLOAD "rxoffload" > > > #define CMD_LINE_OPT_TX_OFFLOAD "txoffload" > > > #define CMD_LINE_OPT_REASSEMBLE "reassemble" > > > #define CMD_LINE_OPT_MTU "mtu" > > > #define 
CMD_LINE_OPT_FRAG_TTL "frag-ttl" > > > > > > +#define CMD_LINE_ARG_APP "app" > > > +#define CMD_LINE_ARG_DRV "drv" > > > +#define CMD_LINE_ARG_INB "in" > > > +#define CMD_LINE_ARG_OUT "out" > > > + > > > enum { > > > /* long options mapped to a short option */ > > > > > > @@ -149,7 +165,11 @@ enum { > > > CMD_LINE_OPT_CONFIG_NUM, > > > CMD_LINE_OPT_SINGLE_SA_NUM, > > > CMD_LINE_OPT_CRYPTODEV_MASK_NUM, > > > + CMD_LINE_OPT_TRANSFER_MODE_NUM, > > > + CMD_LINE_OPT_SCHEDULE_TYPE_NUM, > > > CMD_LINE_OPT_RX_OFFLOAD_NUM, > > > + CMD_LINE_OPT_IPSEC_MODE_NUM, > > > + CMD_LINE_OPT_IPSEC_DIR_NUM, > > > CMD_LINE_OPT_TX_OFFLOAD_NUM, > > > CMD_LINE_OPT_REASSEMBLE_NUM, > > > CMD_LINE_OPT_MTU_NUM, > > > @@ -160,6 +180,10 @@ static const struct option lgopts[] = { > > > {CMD_LINE_OPT_CONFIG, 1, 0, CMD_LINE_OPT_CONFIG_NUM}, > > > {CMD_LINE_OPT_SINGLE_SA, 1, 0, > > CMD_LINE_OPT_SINGLE_SA_NUM}, > > > {CMD_LINE_OPT_CRYPTODEV_MASK, 1, 0, > > > CMD_LINE_OPT_CRYPTODEV_MASK_NUM}, > > > + {CMD_LINE_OPT_TRANSFER_MODE, 1, 0, > > CMD_LINE_OPT_TRANSFER_MODE_NUM}, > > > + {CMD_LINE_OPT_SCHEDULE_TYPE, 1, 0, > > CMD_LINE_OPT_SCHEDULE_TYPE_NUM}, > > > + {CMD_LINE_OPT_IPSEC_MODE, 1, 0, > > CMD_LINE_OPT_IPSEC_MODE_NUM}, > > > + {CMD_LINE_OPT_IPSEC_DIR, 1, 0, > > CMD_LINE_OPT_IPSEC_DIR_NUM}, > > > {CMD_LINE_OPT_RX_OFFLOAD, 1, 0, > > CMD_LINE_OPT_RX_OFFLOAD_NUM}, > > > {CMD_LINE_OPT_TX_OFFLOAD, 1, 0, > > CMD_LINE_OPT_TX_OFFLOAD_NUM}, > > > {CMD_LINE_OPT_REASSEMBLE, 1, 0, > > CMD_LINE_OPT_REASSEMBLE_NUM}, @@ > > > -1094,8 +1118,8 @@ drain_outbound_crypto_queues(const struct > > > lcore_conf *qconf, } > > > > > > /* main processing loop */ > > > -static int32_t > > > -main_loop(__attribute__((unused)) void *dummy) > > > +void > > > +ipsec_poll_mode_worker(void) > > > { > > > struct rte_mbuf *pkts[MAX_PKT_BURST]; > > > uint32_t lcore_id; > > > @@ -1137,7 +1161,7 @@ main_loop(__attribute__((unused)) void > > *dummy) > > > if (qconf->nb_rx_queue == 0) { > > > RTE_LOG(DEBUG, IPSEC, "lcore %u has nothing to 
do\n", > > > lcore_id); > > > - return 0; > > > + return; > > > } > > > > > > RTE_LOG(INFO, IPSEC, "entering main loop on lcore %u\n", lcore_id); > > > @@ -1150,7 +1174,7 @@ main_loop(__attribute__((unused)) void > > *dummy) > > > lcore_id, portid, queueid); > > > } > > > > > > - while (1) { > > > + while (!force_quit) { > > > cur_tsc = rte_rdtsc(); > > > > > > /* TX queue buffer drain */ > > > @@ -1277,6 +1301,10 @@ print_usage(const char *prgname) > > > " --config (port,queue,lcore)[,(port,queue,lcore)]" > > > " [--single-sa SAIDX]" > > > " [--cryptodev_mask MASK]" > > > + " [--transfer-mode MODE]" > > > + " [--schedule-type TYPE]" > > > + " [--process-mode MODE]" > > > + " [--process-dir DIR]" > > > " [--" CMD_LINE_OPT_RX_OFFLOAD " RX_OFFLOAD_MASK]" > > > " [--" CMD_LINE_OPT_TX_OFFLOAD " TX_OFFLOAD_MASK]" > > > " [--" CMD_LINE_OPT_REASSEMBLE " > > REASSEMBLE_TABLE_SIZE]" > > > @@ -1298,6 +1326,22 @@ print_usage(const char *prgname) > > > " bypassing the SP\n" > > > " --cryptodev_mask MASK: Hexadecimal bitmask of the > > crypto\n" > > > " devices to configure\n" > > > + " --transfer-mode MODE\n" > > > + " 0: Packet transfer via polling (default)\n" > > > + " 1: Packet transfer via eventdev\n" > > > + " --schedule-type TYPE queue schedule type, used only > > when\n" > > > + " transfer mode is set to eventdev\n" > > > + " 0: Ordered (default)\n" > > > + " 1: Atomic\n" > > > + " 2: Parallel\n" > > > > For last two, why not something huma-readable? > > I.E. == --tranfer-mode=(poll|event) or so. > > Same for schedule-type. > > [Anoob] Will do so in v2. 
> > > > > > + " --process-mode MODE processing mode, used only > > when\n" > > > + " transfer mode is set to eventdev\n" > > > + " \"app\" : application mode (default)\n" > > > + " \"drv\" : driver mode\n" > > > + " --process-dir DIR processing direction, used only when\n" > > > + " transfer mode is set to eventdev\n" > > > + " \"out\" : outbound (default)\n" > > > + " \"in\" : inbound\n" > > > > Hmm and why in eventdev mode it is not possible to handle both inbound > > and outbound traffic? > > Where is the limitation: eventdev framework/PMD/ipsec-secgw? > > [Anoob] It's not a limitation of any of the nodes. The current ipsec-segcw has a data path check of port to determine whether inbound or > outbound processing need to be done. > In case of poll-mode, we have specific cores polling fixed eth port & queue. So the extra check > involved doesn't cost much. > But in case of event-mode, we will have both inbound & outbound packets ending up on same core. For poll mode we can have one core handling several ports. Some of them could be inbound, other outbound, so it is a switch based on port value. My thought was that the same switch based on port_id can be done in event-mode too. But might be I am missing something here. > So the penalty of running inbound & > outbound at the same time (and relying on data path check) is more in case of event mode. For inline ipsec implementation, this impact isn't > that much and we were able to minimize the perf degradation to 1%. I would expect lookaside crypto/protocol to have higher impacts. > > Said that, I'm okay with removing the extra option and retaining the current behavior. If you think single instance of ipsec-secgw should > work bidirectional, I can make the required changes and see the perf impact. I think it would be good if event-mode could work in bi-directional way (as poll mode does), but will leave final decision to you and other guys more familiar with event-dev details. 
> > > > > > " --" CMD_LINE_OPT_RX_OFFLOAD > > > ": bitmask of the RX HW offload capabilities to enable/use\n" > > > " (DEV_RX_OFFLOAD_*)\n" > > > @@ -1433,7 +1477,89 @@ print_app_sa_prm(const struct app_sa_prm > > *prm) > > > } > > > > > > static int32_t > > > -parse_args(int32_t argc, char **argv) > > > +eh_parse_decimal(const char *str) > > > +{ > > > + unsigned long num; > > > + char *end = NULL; > > > + > > > + num = strtoul(str, &end, 10); > > > + if ((str[0] == '\0') || (end == NULL) || (*end != '\0')) > > > + return -EINVAL; > > > + > > > + return num; > > > +} > > > > There already exists parse_decimal(), why to create a dup? > > [Anoob] Will this in v2. > > > > > > + > > > +static int > > > +parse_transfer_mode(struct eh_conf *conf, const char *optarg) { > > > + int32_t parsed_dec; > > > + > > > + parsed_dec = eh_parse_decimal(optarg); > > > + if (parsed_dec != EH_PKT_TRANSFER_MODE_POLL && > > > + parsed_dec != EH_PKT_TRANSFER_MODE_EVENT) { > > > + printf("Unsupported packet transfer mode"); > > > + return -EINVAL; > > > + } > > > + conf->mode = parsed_dec; > > > + return 0; > > > +} > > > + > > > +static int > > > +parse_schedule_type(struct eh_conf *conf, const char *optarg) { > > > + struct eventmode_conf *em_conf = NULL; > > > + int32_t parsed_dec; > > > + > > > + parsed_dec = eh_parse_decimal(optarg); > > > + if (parsed_dec != RTE_SCHED_TYPE_ORDERED && > > > + parsed_dec != RTE_SCHED_TYPE_ATOMIC && > > > + parsed_dec != RTE_SCHED_TYPE_PARALLEL) > > > + return -EINVAL; > > > + > > > + /* Get eventmode conf */ > > > + em_conf = (struct eventmode_conf *)(conf->mode_params); > > > + > > > + em_conf->ext_params.sched_type = parsed_dec; > > > + > > > + return 0; > > > +} > > > + > > > +static int > > > +parse_ipsec_mode(struct eh_conf *conf, const char *optarg) { > > > + if (!strncmp(CMD_LINE_ARG_APP, optarg, > > strlen(CMD_LINE_ARG_APP)) && > > > + strlen(optarg) == strlen(CMD_LINE_ARG_APP)) > > > > Ugh, that's an ugly construction, why not just: > > if 
(strcmp(CMD_LINE_ARG_APP, optarg) == 0) ? > > [Anoob] Will fix this in v2. > > > > > > + conf->ipsec_mode = EH_IPSEC_MODE_TYPE_APP; > > > + else if (!strncmp(CMD_LINE_ARG_DRV, optarg, > > strlen(CMD_LINE_ARG_DRV)) && > > > + strlen(optarg) == strlen(CMD_LINE_ARG_DRV)) > > > + conf->ipsec_mode = EH_IPSEC_MODE_TYPE_DRIVER; > > > + else { > > > + printf("Unsupported ipsec mode\n"); > > > + return -EINVAL; > > > + } > > > + > > > + return 0; > > > +} > > > + > > > +static int > > > +parse_ipsec_dir(struct eh_conf *conf, const char *optarg) { > > > + if (!strncmp(CMD_LINE_ARG_INB, optarg, > > strlen(CMD_LINE_ARG_INB)) && > > > + strlen(optarg) == strlen(CMD_LINE_ARG_INB)) > > > + conf->ipsec_dir = EH_IPSEC_DIR_TYPE_INBOUND; > > > + else if (!strncmp(CMD_LINE_ARG_OUT, optarg, > > strlen(CMD_LINE_ARG_OUT)) && > > > + strlen(optarg) == strlen(CMD_LINE_ARG_OUT)) > > > + conf->ipsec_dir = EH_IPSEC_DIR_TYPE_OUTBOUND; > > > + else { > > > + printf("Unsupported ipsec direction\n"); > > > + return -EINVAL; > > > + } > > > + > > > + return 0; > > > +} > > > + > > > +static int32_t > > > +parse_args(int32_t argc, char **argv, struct eh_conf *eh_conf) > > > { > > > int opt; > > > int64_t ret; > > > @@ -1536,6 +1662,43 @@ parse_args(int32_t argc, char **argv) > > > /* else */ > > > enabled_cryptodev_mask = ret; > > > break; > > > + > > > + case CMD_LINE_OPT_TRANSFER_MODE_NUM: > > > + ret = parse_transfer_mode(eh_conf, optarg); > > > + if (ret < 0) { > > > + printf("Invalid packet transfer mode\n"); > > > + print_usage(prgname); > > > + return -1; > > > + } > > > + break; > > > + > > > + case CMD_LINE_OPT_SCHEDULE_TYPE_NUM: > > > + ret = parse_schedule_type(eh_conf, optarg); > > > + if (ret < 0) { > > > + printf("Invalid queue schedule type\n"); > > > + print_usage(prgname); > > > + return -1; > > > + } > > > + break; > > > + > > > + case CMD_LINE_OPT_IPSEC_MODE_NUM: > > > + ret = parse_ipsec_mode(eh_conf, optarg); > > > + if (ret < 0) { > > > + printf("Invalid ipsec mode\n"); > > > 
+ print_usage(prgname); > > > + return -1; > > > + } > > > + break; > > > + > > > + case CMD_LINE_OPT_IPSEC_DIR_NUM: > > > + ret = parse_ipsec_dir(eh_conf, optarg); > > > + if (ret < 0) { > > > + printf("Invalid ipsec direction\n"); > > > + print_usage(prgname); > > > + return -1; > > > + } > > > + break; > > > + > > > case CMD_LINE_OPT_RX_OFFLOAD_NUM: > > > ret = parse_mask(optarg, &dev_rx_offload); > > > if (ret != 0) { > > > @@ -2457,6 +2620,132 @@ create_default_ipsec_flow(uint16_t port_id, > > uint64_t rx_offloads) > > > return ret; > > > } > > > > > > +static struct eh_conf * > > > +eh_conf_init(void) > > > +{ > > > + struct eventmode_conf *em_conf = NULL; > > > + struct eh_conf *conf = NULL; > > > + unsigned int eth_core_id; > > > + uint32_t nb_bytes; > > > + void *mem = NULL; > > > + > > > + /* Allocate memory for config */ > > > + conf = calloc(1, sizeof(struct eh_conf)); > > > + if (conf == NULL) { > > > + printf("Failed to allocate memory for eventmode helper > > conf"); > > > + goto err; > > > + } > > > + > > > + /* Set default conf */ > > > + > > > + /* Packet transfer mode: poll */ > > > + conf->mode = EH_PKT_TRANSFER_MODE_POLL; > > > + conf->ipsec_mode = EH_IPSEC_MODE_TYPE_APP; > > > + conf->ipsec_dir = EH_IPSEC_DIR_TYPE_OUTBOUND; > > > + > > > + /* Keep all ethernet ports enabled by default */ > > > + conf->eth_portmask = -1; > > > + > > > + /* Allocate memory for event mode params */ > > > + conf->mode_params = > > > + calloc(1, sizeof(struct eventmode_conf)); > > > + if (conf->mode_params == NULL) { > > > + printf("Failed to allocate memory for event mode params"); > > > + goto err; > > > + } > > > + > > > + /* Get eventmode conf */ > > > + em_conf = (struct eventmode_conf *)(conf->mode_params); > > > + > > > + /* Allocate and initialize bitmap for eth cores */ > > > + nb_bytes = rte_bitmap_get_memory_footprint(RTE_MAX_LCORE); > > > + if (!nb_bytes) { > > > + printf("Failed to get bitmap footprint"); > > > + goto err; > > > + } > > > + > > > + mem 
= rte_zmalloc("event-helper-ethcore-bitmap", nb_bytes, > > > + RTE_CACHE_LINE_SIZE); > > > + if (!mem) { > > > + printf("Failed to allocate memory for eth cores bitmap\n"); > > > + goto err; > > > + } > > > + > > > + em_conf->eth_core_mask = rte_bitmap_init(RTE_MAX_LCORE, > > mem, nb_bytes); > > > + if (!em_conf->eth_core_mask) { > > > + printf("Failed to initialize bitmap"); > > > + goto err; > > > + } > > > + > > > + /* Schedule type: ordered */ > > > + em_conf->ext_params.sched_type = RTE_SCHED_TYPE_ORDERED; > > > + > > > + /* Set two cores as eth cores for Rx & Tx */ > > > + > > > + /* Use first core other than master core as Rx core */ > > > + eth_core_id = rte_get_next_lcore(0, /* curr core */ > > > + 1, /* skip master core */ > > > + 0 /* wrap */); > > > + > > > + rte_bitmap_set(em_conf->eth_core_mask, eth_core_id); > > > + > > > + /* Use next core as Tx core */ > > > + eth_core_id = rte_get_next_lcore(eth_core_id, /* curr core > > */ > > > + 1, /* skip master core */ > > > + 0 /* wrap */); > > > + > > > + rte_bitmap_set(em_conf->eth_core_mask, eth_core_id); > > > + > > > + return conf; > > > +err: > > > + rte_free(mem); > > > + free(em_conf); > > > + free(conf); > > > + return NULL; > > > +} > > > + > > > +static void > > > +eh_conf_uninit(struct eh_conf *conf) > > > +{ > > > + struct eventmode_conf *em_conf = NULL; > > > + > > > + /* Get eventmode conf */ > > > + em_conf = (struct eventmode_conf *)(conf->mode_params); > > > + > > > + /* Free evenmode configuration memory */ > > > + rte_free(em_conf->eth_core_mask); > > > + free(em_conf); > > > + free(conf); > > > +} > > > + > > > +static void > > > +signal_handler(int signum) > > > +{ > > > + if (signum == SIGINT || signum == SIGTERM) { > > > + uint16_t port_id; > > > + printf("\n\nSignal %d received, preparing to exit...\n", > > > + signum); > > > + force_quit = true; > > > + > > > + /* Destroy the default ipsec flow */ > > > + RTE_ETH_FOREACH_DEV(port_id) { > > > + if ((enabled_port_mask & (1 << 
port_id)) == 0) > > > + continue; > > > + if (flow_info_tbl[port_id].rx_def_flow) { > > > + struct rte_flow_error err; > > > + int ret; > > > > As we are going to call dev_stop(), etc. at force_quit below, is there any > > reason to call rte_flow_destroy() here? > > Just curious. > > [Anoob] dev_stop() should clear all the rte_flow entries. But we are doing it from the app as a good citizen. 😊 > > I can remove it since the same is not done for SA specific rte_flows created for inline crypto. No need to remove. My question was just a stylistic one: why not do it at the same place where dev_stop()/dev_close() is done, to have everything in one place. > > > > > > + ret = rte_flow_destroy(port_id, > > > + flow_info_tbl[port_id].rx_def_flow, > > > + &err); > > > + if (ret) > > > + RTE_LOG(ERR, IPSEC, > > > + "Failed to destroy flow for port %u, " > > > + "err msg: %s\n", port_id, > > err.message); > > > + } > > > + } > > > + } > > > +} > > > + > > > int32_t > > > main(int32_t argc, char **argv) > > > { > > > @@ -2466,6 +2755,7 @@ main(int32_t argc, char **argv) > > > uint8_t socket_id; > > > uint16_t portid; > > > uint64_t req_rx_offloads, req_tx_offloads; > > > + struct eh_conf *eh_conf = NULL; > > > size_t sess_sz; > > > > > > /* init EAL */ > > > @@ -2475,8 +2765,17 @@ main(int32_t argc, char **argv) > > > argc -= ret; > > > argv += ret; > > > > > > + force_quit = false; > > > + signal(SIGINT, signal_handler); > > > + signal(SIGTERM, signal_handler); > > > + > > > + /* initialize event helper configuration */ > > > + eh_conf = eh_conf_init(); > > > + if (eh_conf == NULL) > > > + rte_exit(EXIT_FAILURE, "Failed to init event helper config"); > > > + > > > /* parse application arguments (after the EAL ones) */ > > > - ret = parse_args(argc, argv); > > > + ret = parse_args(argc, argv, eh_conf); > > > if (ret < 0) > > > rte_exit(EXIT_FAILURE, "Invalid parameters\n"); > > > > > > @@ -2592,12 +2891,43 @@ main(int32_t argc, char **argv) > > > > > > 
check_all_ports_link_status(enabled_port_mask); > > > > > > + /* > > > + * Set the enabled port mask in helper config for use by helper > > > + * sub-system. This will be used while intializing devices using > > > + * helper sub-system. > > > + */ > > > + eh_conf->eth_portmask = enabled_port_mask; > > > + > > > + /* Initialize eventmode components */ > > > + ret = eh_devs_init(eh_conf); > > > + if (ret < 0) > > > + rte_exit(EXIT_FAILURE, "eh_devs_init failed, err=%d\n", > > ret); > > > + > > > /* launch per-lcore init on every lcore */ > > > - rte_eal_mp_remote_launch(main_loop, NULL, CALL_MASTER); > > > + rte_eal_mp_remote_launch(ipsec_launch_one_lcore, eh_conf, > > > +CALL_MASTER); > > > + > > > RTE_LCORE_FOREACH_SLAVE(lcore_id) { > > > if (rte_eal_wait_lcore(lcore_id) < 0) > > > return -1; > > > } > > > > > > + /* Uninitialize eventmode components */ > > > + ret = eh_devs_uninit(eh_conf); > > > + if (ret < 0) > > > + rte_exit(EXIT_FAILURE, "eh_devs_uninit failed, err=%d\n", > > ret); > > > + > > > + /* Free eventmode configuration memory */ > > > + eh_conf_uninit(eh_conf); > > > + > > > + RTE_ETH_FOREACH_DEV(portid) { > > > + if ((enabled_port_mask & (1 << portid)) == 0) > > > + continue; > > > + printf("Closing port %d...", portid); > > > + rte_eth_dev_stop(portid); > > > + rte_eth_dev_close(portid); > > > + printf(" Done\n"); > > > + } > > > + printf("Bye...\n"); > > > + > > > return 0; > > > } ^ permalink raw reply [flat|nested] 147+ messages in thread
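The thread above agrees to move the default-flow destruction next to where dev_stop()/dev_close() are done, so all per-port teardown happens in one place. A minimal, self-contained sketch of that grouping — port_needs_teardown() and teardown_ports() are hypothetical helper names, not code from the patch, and the real DPDK calls appear only as comments:

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical helper mirroring the patch's
 * (enabled_port_mask & (1 << port_id)) checks. */
static inline bool
port_needs_teardown(uint32_t enabled_port_mask, uint16_t port_id)
{
	return (enabled_port_mask & (1u << port_id)) != 0;
}

/*
 * Sketch of the agreed teardown order, grouped per enabled port:
 *   1. rte_flow_destroy() on the default Rx flow (if one was created)
 *   2. rte_eth_dev_stop()
 *   3. rte_eth_dev_close()
 * The DPDK calls are left as comments so the sketch stays self-contained;
 * it returns the number of ports it would have closed.
 */
static int
teardown_ports(uint32_t enabled_port_mask, uint16_t nb_ports)
{
	int closed = 0;
	uint16_t port_id;

	for (port_id = 0; port_id < nb_ports; port_id++) {
		if (!port_needs_teardown(enabled_port_mask, port_id))
			continue;
		/*
		 * if (flow_info_tbl[port_id].rx_def_flow)
		 *	rte_flow_destroy(port_id,
		 *		flow_info_tbl[port_id].rx_def_flow, &err);
		 * rte_eth_dev_stop(port_id);
		 * rte_eth_dev_close(port_id);
		 */
		closed++;
	}
	return closed;
}
```

With this grouping the signal handler only needs to set force_quit, and the main thread performs the cleanup after the worker loops exit.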
* Re: [dpdk-dev] [PATCH 09/14] examples/ipsec-secgw: add eventmode to ipsec-secgw 2020-01-06 15:45 ` Ananyev, Konstantin @ 2020-01-09 6:17 ` Anoob Joseph 0 siblings, 0 replies; 147+ messages in thread From: Anoob Joseph @ 2020-01-09 6:17 UTC (permalink / raw) To: Ananyev, Konstantin, Akhil Goyal, Nicolau, Radu, Thomas Monjalon Cc: Lukas Bartosik, Jerin Jacob Kollanukkaran, Narayana Prasad Raju Athreya, Ankur Dwivedi, Archana Muniganti, Tejasree Kondoj, Vamsi Krishna Attunuru, dev Hi Konstantin, Please see inline. Thanks, Anoob > -----Original Message----- > From: dev <dev-bounces@dpdk.org> On Behalf Of Ananyev, Konstantin > Sent: Monday, January 6, 2020 9:15 PM > To: Anoob Joseph <anoobj@marvell.com>; Akhil Goyal > <akhil.goyal@nxp.com>; Nicolau, Radu <radu.nicolau@intel.com>; Thomas > Monjalon <thomas@monjalon.net> > Cc: Lukas Bartosik <lbartosik@marvell.com>; Jerin Jacob Kollanukkaran > <jerinj@marvell.com>; Narayana Prasad Raju Athreya > <pathreya@marvell.com>; Ankur Dwivedi <adwivedi@marvell.com>; > Archana Muniganti <marchana@marvell.com>; Tejasree Kondoj > <ktejasree@marvell.com>; Vamsi Krishna Attunuru > <vattunuru@marvell.com>; dev@dpdk.org > Subject: Re: [dpdk-dev] [PATCH 09/14] examples/ipsec-secgw: add > eventmode to ipsec-secgw > > > > > > Add eventmode support to ipsec-secgw. This uses event helper to > > > > setup and use the eventmode capabilities. Add driver inbound worker. 
> > > > > > > > Example command: > > > > ./ipsec-secgw -c 0x1 -w 0002:02:00.0,ipsec_in_max_spi=100 -w > > > > 0002:07:00.0 -w 0002:0e:00.0 -w 0002:10:00.1 -- -P -p 0x3 -u 0x1 > > > > --config "(0,0,0),(1,0,0)" -f a-aes-gcm-msa.cfg --transfer-mode 1 > > > > --schedule-type 2 --process-mode drv --process-dir in > > > > > > > > Signed-off-by: Anoob Joseph <anoobj@marvell.com> > > > > Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> > > > > --- > > > > examples/ipsec-secgw/Makefile | 1 + > > > > examples/ipsec-secgw/event_helper.c | 3 + > > > > examples/ipsec-secgw/event_helper.h | 26 +++ > > > > examples/ipsec-secgw/ipsec-secgw.c | 344 > > > +++++++++++++++++++++++++++++++++++- > > > > examples/ipsec-secgw/ipsec.h | 7 + > > > > examples/ipsec-secgw/ipsec_worker.c | 180 +++++++++++++++++++ > > > > examples/ipsec-secgw/meson.build | 2 +- > > > > 7 files changed, 555 insertions(+), 8 deletions(-) create mode > > > > 100644 examples/ipsec-secgw/ipsec_worker.c > > > > > > > > diff --git a/examples/ipsec-secgw/Makefile > > > > b/examples/ipsec-secgw/Makefile index 09e3c5a..f6fd94c 100644 > > > > --- a/examples/ipsec-secgw/Makefile > > > > +++ b/examples/ipsec-secgw/Makefile > > > > @@ -15,6 +15,7 @@ SRCS-y += sa.c > > > > SRCS-y += rt.c > > > > SRCS-y += ipsec_process.c > > > > SRCS-y += ipsec-secgw.c > > > > +SRCS-y += ipsec_worker.c > > > > SRCS-y += event_helper.c > > > > > > > > CFLAGS += -gdwarf-2 > > > > diff --git a/examples/ipsec-secgw/event_helper.c > > > > b/examples/ipsec-secgw/event_helper.c > > > > index 6549875..44f997d 100644 > > > > --- a/examples/ipsec-secgw/event_helper.c > > > > +++ b/examples/ipsec-secgw/event_helper.c > > > > @@ -984,6 +984,9 @@ eh_find_worker(uint32_t lcore_id, struct > > > > eh_conf > > > *conf, > > > > else > > > > curr_conf.cap.burst = EH_RX_TYPE_NON_BURST; > > > > > > > > + curr_conf.cap.ipsec_mode = conf->ipsec_mode; > > > > + curr_conf.cap.ipsec_dir = conf->ipsec_dir; > > > > + > > > > /* Parse the passed list and see if we 
have matching > > > > capabilities */ > > > > > > > > /* Initialize the pointer used to traverse the list */ diff > > > > --git a/examples/ipsec-secgw/event_helper.h > > > > b/examples/ipsec-secgw/event_helper.h > > > > index 2895dfa..07849b0 100644 > > > > --- a/examples/ipsec-secgw/event_helper.h > > > > +++ b/examples/ipsec-secgw/event_helper.h > > > > @@ -74,6 +74,22 @@ enum eh_tx_types { > > > > EH_TX_TYPE_NO_INTERNAL_PORT > > > > }; > > > > > > > > +/** > > > > + * Event mode ipsec mode types > > > > + */ > > > > +enum eh_ipsec_mode_types { > > > > + EH_IPSEC_MODE_TYPE_APP = 0, > > > > + EH_IPSEC_MODE_TYPE_DRIVER > > > > +}; > > > > + > > > > +/** > > > > + * Event mode ipsec direction types */ enum eh_ipsec_dir_types { > > > > + EH_IPSEC_DIR_TYPE_OUTBOUND = 0, > > > > + EH_IPSEC_DIR_TYPE_INBOUND, > > > > +}; > > > > + > > > > /* Event dev params */ > > > > struct eventdev_params { > > > > uint8_t eventdev_id; > > > > @@ -183,6 +199,12 @@ struct eh_conf { > > > > */ > > > > void *mode_params; > > > > /**< Mode specific parameters */ > > > > + > > > > + /** Application specific params */ > > > > + enum eh_ipsec_mode_types ipsec_mode; > > > > + /**< Mode of ipsec run */ > > > > + enum eh_ipsec_dir_types ipsec_dir; > > > > + /**< Direction of ipsec processing */ > > > > }; > > > > > > > > /* Workers registered by the application */ @@ -194,6 +216,10 @@ > > > > struct eh_app_worker_params { > > > > /**< Specify status of rx type burst */ > > > > uint64_t tx_internal_port : 1; > > > > /**< Specify whether tx internal port is available */ > > > > + uint64_t ipsec_mode : 1; > > > > + /**< Specify ipsec processing level */ > > > > + uint64_t ipsec_dir : 1; > > > > + /**< Specify direction of ipsec */ > > > > }; > > > > uint64_t u64; > > > > } cap; > > > > diff --git a/examples/ipsec-secgw/ipsec-secgw.c > > > > b/examples/ipsec-secgw/ipsec-secgw.c > > > > index 7506922..c5d95b9 100644 > > > > --- a/examples/ipsec-secgw/ipsec-secgw.c > > > > +++ 
b/examples/ipsec-secgw/ipsec-secgw.c > > > > @@ -2,6 +2,7 @@ > > > > * Copyright(c) 2016 Intel Corporation > > > > */ > > > > > > > > +#include <stdbool.h> > > > > #include <stdio.h> > > > > #include <stdlib.h> > > > > #include <stdint.h> > > > > @@ -14,6 +15,7 @@ > > > > #include <sys/queue.h> > > > > #include <stdarg.h> > > > > #include <errno.h> > > > > +#include <signal.h> > > > > #include <getopt.h> > > > > > > > > #include <rte_common.h> > > > > @@ -41,12 +43,17 @@ > > > > #include <rte_jhash.h> > > > > #include <rte_cryptodev.h> > > > > #include <rte_security.h> > > > > +#include <rte_bitmap.h> > > > > +#include <rte_eventdev.h> > > > > #include <rte_ip.h> > > > > #include <rte_ip_frag.h> > > > > > > > > +#include "event_helper.h" > > > > #include "ipsec.h" > > > > #include "parser.h" > > > > > > > > +volatile bool force_quit; > > > > + > > > > #define RTE_LOGTYPE_IPSEC RTE_LOGTYPE_USER1 > > > > > > > > #define MAX_JUMBO_PKT_LEN 9600 > > > > @@ -133,12 +140,21 @@ struct flow_info > > > flow_info_tbl[RTE_MAX_ETHPORTS]; > > > > #define CMD_LINE_OPT_CONFIG "config" > > > > #define CMD_LINE_OPT_SINGLE_SA "single-sa" > > > > #define CMD_LINE_OPT_CRYPTODEV_MASK "cryptodev_mask" > > > > +#define CMD_LINE_OPT_TRANSFER_MODE "transfer-mode" > > > > +#define CMD_LINE_OPT_SCHEDULE_TYPE "schedule-type" > > > > +#define CMD_LINE_OPT_IPSEC_MODE "process-mode" > > > > +#define CMD_LINE_OPT_IPSEC_DIR "process-dir" > > > > #define CMD_LINE_OPT_RX_OFFLOAD "rxoffload" > > > > #define CMD_LINE_OPT_TX_OFFLOAD "txoffload" > > > > #define CMD_LINE_OPT_REASSEMBLE "reassemble" > > > > #define CMD_LINE_OPT_MTU "mtu" > > > > #define CMD_LINE_OPT_FRAG_TTL "frag-ttl" > > > > > > > > +#define CMD_LINE_ARG_APP "app" > > > > +#define CMD_LINE_ARG_DRV "drv" > > > > +#define CMD_LINE_ARG_INB "in" > > > > +#define CMD_LINE_ARG_OUT "out" > > > > + > > > > enum { > > > > /* long options mapped to a short option */ > > > > > > > > @@ -149,7 +165,11 @@ enum { > > > > CMD_LINE_OPT_CONFIG_NUM, > > > 
> CMD_LINE_OPT_SINGLE_SA_NUM, > > > > CMD_LINE_OPT_CRYPTODEV_MASK_NUM, > > > > + CMD_LINE_OPT_TRANSFER_MODE_NUM, > > > > + CMD_LINE_OPT_SCHEDULE_TYPE_NUM, > > > > CMD_LINE_OPT_RX_OFFLOAD_NUM, > > > > + CMD_LINE_OPT_IPSEC_MODE_NUM, > > > > + CMD_LINE_OPT_IPSEC_DIR_NUM, > > > > CMD_LINE_OPT_TX_OFFLOAD_NUM, > > > > CMD_LINE_OPT_REASSEMBLE_NUM, > > > > CMD_LINE_OPT_MTU_NUM, > > > > @@ -160,6 +180,10 @@ static const struct option lgopts[] = { > > > > {CMD_LINE_OPT_CONFIG, 1, 0, CMD_LINE_OPT_CONFIG_NUM}, > > > > {CMD_LINE_OPT_SINGLE_SA, 1, 0, > > > CMD_LINE_OPT_SINGLE_SA_NUM}, > > > > {CMD_LINE_OPT_CRYPTODEV_MASK, 1, 0, > > > > CMD_LINE_OPT_CRYPTODEV_MASK_NUM}, > > > > + {CMD_LINE_OPT_TRANSFER_MODE, 1, 0, > > > CMD_LINE_OPT_TRANSFER_MODE_NUM}, > > > > + {CMD_LINE_OPT_SCHEDULE_TYPE, 1, 0, > > > CMD_LINE_OPT_SCHEDULE_TYPE_NUM}, > > > > + {CMD_LINE_OPT_IPSEC_MODE, 1, 0, > > > CMD_LINE_OPT_IPSEC_MODE_NUM}, > > > > + {CMD_LINE_OPT_IPSEC_DIR, 1, 0, > > > CMD_LINE_OPT_IPSEC_DIR_NUM}, > > > > {CMD_LINE_OPT_RX_OFFLOAD, 1, 0, > > > CMD_LINE_OPT_RX_OFFLOAD_NUM}, > > > > {CMD_LINE_OPT_TX_OFFLOAD, 1, 0, > > > CMD_LINE_OPT_TX_OFFLOAD_NUM}, > > > > {CMD_LINE_OPT_REASSEMBLE, 1, 0, > > > CMD_LINE_OPT_REASSEMBLE_NUM}, @@ > > > > -1094,8 +1118,8 @@ drain_outbound_crypto_queues(const struct > > > > lcore_conf *qconf, } > > > > > > > > /* main processing loop */ > > > > -static int32_t > > > > -main_loop(__attribute__((unused)) void *dummy) > > > > +void > > > > +ipsec_poll_mode_worker(void) > > > > { > > > > struct rte_mbuf *pkts[MAX_PKT_BURST]; > > > > uint32_t lcore_id; > > > > @@ -1137,7 +1161,7 @@ main_loop(__attribute__((unused)) void > > > *dummy) > > > > if (qconf->nb_rx_queue == 0) { > > > > RTE_LOG(DEBUG, IPSEC, "lcore %u has nothing to do\n", > > > > lcore_id); > > > > - return 0; > > > > + return; > > > > } > > > > > > > > RTE_LOG(INFO, IPSEC, "entering main loop on lcore %u\n", > > > > lcore_id); @@ -1150,7 +1174,7 @@ > main_loop(__attribute__((unused)) > > > > void > > > 
*dummy) > > > > lcore_id, portid, queueid); > > > > } > > > > > > > > - while (1) { > > > > + while (!force_quit) { > > > > cur_tsc = rte_rdtsc(); > > > > > > > > /* TX queue buffer drain */ > > > > @@ -1277,6 +1301,10 @@ print_usage(const char *prgname) > > > > " --config (port,queue,lcore)[,(port,queue,lcore)]" > > > > " [--single-sa SAIDX]" > > > > " [--cryptodev_mask MASK]" > > > > + " [--transfer-mode MODE]" > > > > + " [--schedule-type TYPE]" > > > > + " [--process-mode MODE]" > > > > + " [--process-dir DIR]" > > > > " [--" CMD_LINE_OPT_RX_OFFLOAD " RX_OFFLOAD_MASK]" > > > > " [--" CMD_LINE_OPT_TX_OFFLOAD " TX_OFFLOAD_MASK]" > > > > " [--" CMD_LINE_OPT_REASSEMBLE " > > > REASSEMBLE_TABLE_SIZE]" > > > > @@ -1298,6 +1326,22 @@ print_usage(const char *prgname) > > > > " bypassing the SP\n" > > > > " --cryptodev_mask MASK: Hexadecimal bitmask of the > > > crypto\n" > > > > " devices to configure\n" > > > > + " --transfer-mode MODE\n" > > > > + " 0: Packet transfer via polling (default)\n" > > > > + " 1: Packet transfer via eventdev\n" > > > > + " --schedule-type TYPE queue schedule type, used only > > > when\n" > > > > + " transfer mode is set to eventdev\n" > > > > + " 0: Ordered (default)\n" > > > > + " 1: Atomic\n" > > > > + " 2: Parallel\n" > > > > > > For the last two, why not something human-readable? > > > I.e. --transfer-mode=(poll|event) or so. > > > Same for schedule-type. > > > > [Anoob] Will do so in v2. > > > > > > > > > + " --process-mode MODE processing mode, used only > > > when\n" > > > > + " transfer mode is set to eventdev\n" > > > > + " \"app\" : application mode (default)\n" > > > > + " \"drv\" : driver mode\n" > > > > + " --process-dir DIR processing direction, used only when\n" > > > > + " transfer mode is set to eventdev\n" > > > > + " \"out\" : outbound (default)\n" > > > > + " \"in\" : inbound\n" > > > > > > Hmm and why in eventdev mode it is not possible to handle both > > > inbound and outbound traffic? 
> > > Where is the limitation: eventdev framework/PMD/ipsec-secgw? > > [Anoob] It's not a limitation of any of the nodes. The current > > ipsec-secgw has a data path check of port to determine whether inbound > or outbound processing needs to be done. > > In case of poll-mode, we have specific cores polling fixed eth port & > > queue. So the extra check involved doesn't cost much. > > > > But in case of event-mode, we will have both inbound & outbound packets > ending up on the same core. > For poll mode we can have one core handling several ports. > Some of them could be inbound, others outbound, so it is a switch based on > port value. > My thought was that the same switch based on port_id can be done in > event-mode too. > But maybe I am missing something here. [Anoob] Yes. You are right. Even in poll mode the same bidirectional processing on the same core is possible. > > > So the penalty of running inbound & > > outbound at the same time (and relying on the data path check) is more in > > case of event mode. For the inline ipsec implementation, this impact isn't that > much and we were able to minimize the perf degradation to 1%. I would > expect lookaside crypto/protocol to have higher impacts. > > > > That said, I'm okay with removing the extra option and retaining the > > current behavior. If you think a single instance of ipsec-secgw should work > bidirectionally, I can make the required changes and see the perf impact. > > I think it would be good if event-mode could work in a bi-directional way (as > poll mode does), but will leave the final decision to you and other guys more > familiar with event-dev details. [Anoob] Agreed. I'll have this reworked to have one thread. 
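The port-based direction switch discussed above could look like the following sketch. It assumes, as in poll mode, that inbound traffic is whatever arrives on an "unprotected" port given by the -u mask (e.g. -u 0x1 in the example command); port_direction() and the enum names are hypothetical, not code from the patch:

```c
#include <stdint.h>

enum ipsec_dir { DIR_INBOUND, DIR_OUTBOUND };

/*
 * Decide the processing direction from the Rx port alone, so a single
 * event worker can handle both directions without a --process-dir option.
 * Packets received on an unprotected port take the inbound path.
 */
static inline enum ipsec_dir
port_direction(uint32_t unprotected_port_mask, uint16_t rx_port)
{
	return (unprotected_port_mask & (1u << rx_port)) ?
			DIR_INBOUND : DIR_OUTBOUND;
}
```

The cost is one branch per received packet in the event path, which is the per-packet check whose performance impact is discussed above.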
> > > > > > > > > > " --" CMD_LINE_OPT_RX_OFFLOAD > > > > ": bitmask of the RX HW offload capabilities to enable/use\n" > > > > " (DEV_RX_OFFLOAD_*)\n" > > > > @@ -1433,7 +1477,89 @@ print_app_sa_prm(const struct app_sa_prm > > > *prm) > > > > } > > > > > > > > static int32_t > > > > -parse_args(int32_t argc, char **argv) > > > > +eh_parse_decimal(const char *str) { > > > > + unsigned long num; > > > > + char *end = NULL; > > > > + > > > > + num = strtoul(str, &end, 10); > > > > + if ((str[0] == '\0') || (end == NULL) || (*end != '\0')) > > > > + return -EINVAL; > > > > + > > > > + return num; > > > > +} > > > > > > There already exists parse_decimal(), why to create a dup? > > > > [Anoob] Will this in v2. > > > > > > > > > + > > > > +static int > > > > +parse_transfer_mode(struct eh_conf *conf, const char *optarg) { > > > > + int32_t parsed_dec; > > > > + > > > > + parsed_dec = eh_parse_decimal(optarg); > > > > + if (parsed_dec != EH_PKT_TRANSFER_MODE_POLL && > > > > + parsed_dec != EH_PKT_TRANSFER_MODE_EVENT) { > > > > + printf("Unsupported packet transfer mode"); > > > > + return -EINVAL; > > > > + } > > > > + conf->mode = parsed_dec; > > > > + return 0; > > > > +} > > > > + > > > > +static int > > > > +parse_schedule_type(struct eh_conf *conf, const char *optarg) { > > > > + struct eventmode_conf *em_conf = NULL; > > > > + int32_t parsed_dec; > > > > + > > > > + parsed_dec = eh_parse_decimal(optarg); > > > > + if (parsed_dec != RTE_SCHED_TYPE_ORDERED && > > > > + parsed_dec != RTE_SCHED_TYPE_ATOMIC && > > > > + parsed_dec != RTE_SCHED_TYPE_PARALLEL) > > > > + return -EINVAL; > > > > + > > > > + /* Get eventmode conf */ > > > > + em_conf = (struct eventmode_conf *)(conf->mode_params); > > > > + > > > > + em_conf->ext_params.sched_type = parsed_dec; > > > > + > > > > + return 0; > > > > +} > > > > + > > > > +static int > > > > +parse_ipsec_mode(struct eh_conf *conf, const char *optarg) { > > > > + if (!strncmp(CMD_LINE_ARG_APP, optarg, > > > 
strlen(CMD_LINE_ARG_APP)) && > > > > + strlen(optarg) == strlen(CMD_LINE_ARG_APP)) > > > > > > Ugh, that's an ugly construction, why not just: > > > if (strcmp(CMD_LINE_ARG_APP, optarg) == 0) ? > > > > [Anoob] Will fix this in v2. > > > > > > > > > + conf->ipsec_mode = EH_IPSEC_MODE_TYPE_APP; > > > > + else if (!strncmp(CMD_LINE_ARG_DRV, optarg, > > > strlen(CMD_LINE_ARG_DRV)) && > > > > + strlen(optarg) == strlen(CMD_LINE_ARG_DRV)) > > > > + conf->ipsec_mode = EH_IPSEC_MODE_TYPE_DRIVER; > > > > + else { > > > > + printf("Unsupported ipsec mode\n"); > > > > + return -EINVAL; > > > > + } > > > > + > > > > + return 0; > > > > +} > > > > + > > > > +static int > > > > +parse_ipsec_dir(struct eh_conf *conf, const char *optarg) { > > > > + if (!strncmp(CMD_LINE_ARG_INB, optarg, > > > strlen(CMD_LINE_ARG_INB)) && > > > > + strlen(optarg) == strlen(CMD_LINE_ARG_INB)) > > > > + conf->ipsec_dir = EH_IPSEC_DIR_TYPE_INBOUND; > > > > + else if (!strncmp(CMD_LINE_ARG_OUT, optarg, > > > strlen(CMD_LINE_ARG_OUT)) && > > > > + strlen(optarg) == strlen(CMD_LINE_ARG_OUT)) > > > > + conf->ipsec_dir = EH_IPSEC_DIR_TYPE_OUTBOUND; > > > > + else { > > > > + printf("Unsupported ipsec direction\n"); > > > > + return -EINVAL; > > > > + } > > > > + > > > > + return 0; > > > > +} > > > > + > > > > +static int32_t > > > > +parse_args(int32_t argc, char **argv, struct eh_conf *eh_conf) > > > > { > > > > int opt; > > > > int64_t ret; > > > > @@ -1536,6 +1662,43 @@ parse_args(int32_t argc, char **argv) > > > > /* else */ > > > > enabled_cryptodev_mask = ret; > > > > break; > > > > + > > > > + case CMD_LINE_OPT_TRANSFER_MODE_NUM: > > > > + ret = parse_transfer_mode(eh_conf, optarg); > > > > + if (ret < 0) { > > > > + printf("Invalid packet transfer mode\n"); > > > > + print_usage(prgname); > > > > + return -1; > > > > + } > > > > + break; > > > > + > > > > + case CMD_LINE_OPT_SCHEDULE_TYPE_NUM: > > > > + ret = parse_schedule_type(eh_conf, optarg); > > > > + if (ret < 0) { > > > > + printf("Invalid 
queue schedule type\n"); > > > > + print_usage(prgname); > > > > + return -1; > > > > + } > > > > + break; > > > > + > > > > + case CMD_LINE_OPT_IPSEC_MODE_NUM: > > > > + ret = parse_ipsec_mode(eh_conf, optarg); > > > > + if (ret < 0) { > > > > + printf("Invalid ipsec mode\n"); > > > > + print_usage(prgname); > > > > + return -1; > > > > + } > > > > + break; > > > > + > > > > + case CMD_LINE_OPT_IPSEC_DIR_NUM: > > > > + ret = parse_ipsec_dir(eh_conf, optarg); > > > > + if (ret < 0) { > > > > + printf("Invalid ipsec direction\n"); > > > > + print_usage(prgname); > > > > + return -1; > > > > + } > > > > + break; > > > > + > > > > case CMD_LINE_OPT_RX_OFFLOAD_NUM: > > > > ret = parse_mask(optarg, &dev_rx_offload); > > > > if (ret != 0) { > > > > @@ -2457,6 +2620,132 @@ create_default_ipsec_flow(uint16_t > > > > port_id, > > > uint64_t rx_offloads) > > > > return ret; > > > > } > > > > > > > > +static struct eh_conf * > > > > +eh_conf_init(void) > > > > +{ > > > > + struct eventmode_conf *em_conf = NULL; > > > > + struct eh_conf *conf = NULL; > > > > + unsigned int eth_core_id; > > > > + uint32_t nb_bytes; > > > > + void *mem = NULL; > > > > + > > > > + /* Allocate memory for config */ > > > > + conf = calloc(1, sizeof(struct eh_conf)); > > > > + if (conf == NULL) { > > > > + printf("Failed to allocate memory for eventmode helper > > > conf"); > > > > + goto err; > > > > + } > > > > + > > > > + /* Set default conf */ > > > > + > > > > + /* Packet transfer mode: poll */ > > > > + conf->mode = EH_PKT_TRANSFER_MODE_POLL; > > > > + conf->ipsec_mode = EH_IPSEC_MODE_TYPE_APP; > > > > + conf->ipsec_dir = EH_IPSEC_DIR_TYPE_OUTBOUND; > > > > + > > > > + /* Keep all ethernet ports enabled by default */ > > > > + conf->eth_portmask = -1; > > > > + > > > > + /* Allocate memory for event mode params */ > > > > + conf->mode_params = > > > > + calloc(1, sizeof(struct eventmode_conf)); > > > > + if (conf->mode_params == NULL) { > > > > + printf("Failed to allocate memory for event 
mode params"); > > > > + goto err; > > > > + } > > > > + > > > > + /* Get eventmode conf */ > > > > + em_conf = (struct eventmode_conf *)(conf->mode_params); > > > > + > > > > + /* Allocate and initialize bitmap for eth cores */ > > > > + nb_bytes = rte_bitmap_get_memory_footprint(RTE_MAX_LCORE); > > > > + if (!nb_bytes) { > > > > + printf("Failed to get bitmap footprint"); > > > > + goto err; > > > > + } > > > > + > > > > + mem = rte_zmalloc("event-helper-ethcore-bitmap", nb_bytes, > > > > + RTE_CACHE_LINE_SIZE); > > > > + if (!mem) { > > > > + printf("Failed to allocate memory for eth cores bitmap\n"); > > > > + goto err; > > > > + } > > > > + > > > > + em_conf->eth_core_mask = rte_bitmap_init(RTE_MAX_LCORE, > > > mem, nb_bytes); > > > > + if (!em_conf->eth_core_mask) { > > > > + printf("Failed to initialize bitmap"); > > > > + goto err; > > > > + } > > > > + > > > > + /* Schedule type: ordered */ > > > > + em_conf->ext_params.sched_type = RTE_SCHED_TYPE_ORDERED; > > > > + > > > > + /* Set two cores as eth cores for Rx & Tx */ > > > > + > > > > + /* Use first core other than master core as Rx core */ > > > > + eth_core_id = rte_get_next_lcore(0, /* curr core */ > > > > + 1, /* skip master core */ > > > > + 0 /* wrap */); > > > > + > > > > + rte_bitmap_set(em_conf->eth_core_mask, eth_core_id); > > > > + > > > > + /* Use next core as Tx core */ > > > > + eth_core_id = rte_get_next_lcore(eth_core_id, /* curr core > > > */ > > > > + 1, /* skip master core */ > > > > + 0 /* wrap */); > > > > + > > > > + rte_bitmap_set(em_conf->eth_core_mask, eth_core_id); > > > > + > > > > + return conf; > > > > +err: > > > > + rte_free(mem); > > > > + free(em_conf); > > > > + free(conf); > > > > + return NULL; > > > > +} > > > > + > > > > +static void > > > > +eh_conf_uninit(struct eh_conf *conf) { > > > > + struct eventmode_conf *em_conf = NULL; > > > > + > > > > + /* Get eventmode conf */ > > > > + em_conf = (struct eventmode_conf *)(conf->mode_params); > > > > + > > > > + /* Free 
evenmode configuration memory */ > > > > + rte_free(em_conf->eth_core_mask); > > > > + free(em_conf); > > > > + free(conf); > > > > +} > > > > + > > > > +static void > > > > +signal_handler(int signum) > > > > +{ > > > > + if (signum == SIGINT || signum == SIGTERM) { > > > > + uint16_t port_id; > > > > + printf("\n\nSignal %d received, preparing to exit...\n", > > > > + signum); > > > > + force_quit = true; > > > > + > > > > + /* Destroy the default ipsec flow */ > > > > + RTE_ETH_FOREACH_DEV(port_id) { > > > > + if ((enabled_port_mask & (1 << port_id)) == 0) > > > > + continue; > > > > + if (flow_info_tbl[port_id].rx_def_flow) { > > > > + struct rte_flow_error err; > > > > + int ret; > > > > > > As we are going to call dev_stop(), etc. at force_quit below, is > > > there any reason to call rte_flow_destroy() here? > > > Just curious. > > > > [Anoob] dev_stop() should clear all the rte_flow entries. But doing it > > from the app as a good citizen. 😊 > > > > I can remove it since the same is not done for SA specific rte_flows created > for inline crypto. > > No need to remove. > My question was just stylish one: > why not to do it at the same place where dev_stop()/dev_close() is done, to > have everything in one place. [Anoob] I misunderstood your query. Will have it moved close to dev_stop() etc. 
> > > > > > > > > > + ret = rte_flow_destroy(port_id, > > > > + flow_info_tbl[port_id].rx_def_flow, > > > > + &err); > > > > + if (ret) > > > > + RTE_LOG(ERR, IPSEC, > > > > + "Failed to destroy flow for port %u, " > > > > + "err msg: %s\n", port_id, > > > err.message); > > > > + } > > > > + } > > > > + } > > > > +} > > > > + > > > > int32_t > > > > main(int32_t argc, char **argv) > > > > { > > > > @@ -2466,6 +2755,7 @@ main(int32_t argc, char **argv) > > > > uint8_t socket_id; > > > > uint16_t portid; > > > > uint64_t req_rx_offloads, req_tx_offloads; > > > > + struct eh_conf *eh_conf = NULL; > > > > size_t sess_sz; > > > > > > > > /* init EAL */ > > > > @@ -2475,8 +2765,17 @@ main(int32_t argc, char **argv) > > > > argc -= ret; > > > > argv += ret; > > > > > > > > + force_quit = false; > > > > + signal(SIGINT, signal_handler); > > > > + signal(SIGTERM, signal_handler); > > > > + > > > > + /* initialize event helper configuration */ > > > > + eh_conf = eh_conf_init(); > > > > + if (eh_conf == NULL) > > > > + rte_exit(EXIT_FAILURE, "Failed to init event helper config"); > > > > + > > > > /* parse application arguments (after the EAL ones) */ > > > > - ret = parse_args(argc, argv); > > > > + ret = parse_args(argc, argv, eh_conf); > > > > if (ret < 0) > > > > rte_exit(EXIT_FAILURE, "Invalid parameters\n"); > > > > > > > > @@ -2592,12 +2891,43 @@ main(int32_t argc, char **argv) > > > > > > > > check_all_ports_link_status(enabled_port_mask); > > > > > > > > + /* > > > > + * Set the enabled port mask in helper config for use by helper > > > > + * sub-system. This will be used while intializing devices using > > > > + * helper sub-system. 
> > > > + */ > > > > + eh_conf->eth_portmask = enabled_port_mask; > > > > + > > > > + /* Initialize eventmode components */ > > > > + ret = eh_devs_init(eh_conf); > > > > + if (ret < 0) > > > > + rte_exit(EXIT_FAILURE, "eh_devs_init failed, err=%d\n", > > > ret); > > > > + > > > > /* launch per-lcore init on every lcore */ > > > > - rte_eal_mp_remote_launch(main_loop, NULL, CALL_MASTER); > > > > + rte_eal_mp_remote_launch(ipsec_launch_one_lcore, eh_conf, > > > > +CALL_MASTER); > > > > + > > > > RTE_LCORE_FOREACH_SLAVE(lcore_id) { > > > > if (rte_eal_wait_lcore(lcore_id) < 0) > > > > return -1; > > > > } > > > > > > > > + /* Uninitialize eventmode components */ > > > > + ret = eh_devs_uninit(eh_conf); > > > > + if (ret < 0) > > > > + rte_exit(EXIT_FAILURE, "eh_devs_uninit failed, err=%d\n", > > > ret); > > > > + > > > > + /* Free eventmode configuration memory */ > > > > + eh_conf_uninit(eh_conf); > > > > + > > > > + RTE_ETH_FOREACH_DEV(portid) { > > > > + if ((enabled_port_mask & (1 << portid)) == 0) > > > > + continue; > > > > + printf("Closing port %d...", portid); > > > > + rte_eth_dev_stop(portid); > > > > + rte_eth_dev_close(portid); > > > > + printf(" Done\n"); > > > > + } > > > > + printf("Bye...\n"); > > > > + > > > > return 0; > > > > } ^ permalink raw reply [flat|nested] 147+ messages in thread
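A v2-style parser accepting the human-readable option values suggested in the review, using plain strcmp() instead of the strncmp()+strlen() construction that was flagged, might look like this sketch. parse_transfer_mode_str() and the enum values are hypothetical stand-ins for the patch's parse_transfer_mode() and EH_PKT_TRANSFER_MODE_* names:

```c
#include <string.h>

/* Hypothetical stand-ins for the patch's EH_PKT_TRANSFER_MODE_* values. */
enum pkt_transfer_mode {
	TRANSFER_MODE_POLL = 0,
	TRANSFER_MODE_EVENT
};

/*
 * Accept human-readable --transfer-mode values ("poll"/"event") with a
 * plain strcmp(), which already compares the full string including the
 * terminating NUL. Returns 0 on success, -1 on an unsupported value.
 */
static int
parse_transfer_mode_str(const char *arg, enum pkt_transfer_mode *mode)
{
	if (strcmp(arg, "poll") == 0)
		*mode = TRANSFER_MODE_POLL;
	else if (strcmp(arg, "event") == 0)
		*mode = TRANSFER_MODE_EVENT;
	else
		return -1;
	return 0;
}
```

The same pattern would apply to --schedule-type ("ordered"/"atomic"/"parallel") and the existing "app"/"drv" and "in"/"out" arguments.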
* Re: [dpdk-dev] [PATCH 09/14] examples/ipsec-secgw: add eventmode to ipsec-secgw 2019-12-08 12:30 ` [dpdk-dev] [PATCH 09/14] examples/ipsec-secgw: add eventmode to ipsec-secgw Anoob Joseph 2019-12-23 16:43 ` Ananyev, Konstantin @ 2019-12-24 12:47 ` Ananyev, Konstantin 2020-01-03 10:20 ` Anoob Joseph 1 sibling, 1 reply; 147+ messages in thread From: Ananyev, Konstantin @ 2019-12-24 12:47 UTC (permalink / raw) To: Anoob Joseph, Akhil Goyal, Nicolau, Radu, Thomas Monjalon Cc: Lukasz Bartosik, Jerin Jacob, Narayana Prasad, Ankur Dwivedi, Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru, dev > Add eventmode support to ipsec-secgw. This uses event helper to setup > and use the eventmode capabilities. Add driver inbound worker. > > Example command: > ./ipsec-secgw -c 0x1 -w 0002:02:00.0,ipsec_in_max_spi=100 -w 0002:07:00.0 > -w 0002:0e:00.0 -w 0002:10:00.1 -- -P -p 0x3 -u 0x1 > --config "(0,0,0),(1,0,0)" -f a-aes-gcm-msa.cfg --transfer-mode 1 > --schedule-type 2 --process-mode drv --process-dir in As I can see, the new event mode is totally orthogonal to the existing poll mode. Event mode has its own data-path, and it doesn't reuse any part of the poll-mode data-path code. Plus, in event mode many poll-mode options: library/legacy mode, fragment/reassemble, replay-window, ESN, fall-back session, etc. are simply ignored. Also, as I read the current code, right now these modes can't be mixed and used together. The user has to use either only event-based or only poll-mode API/devices. If so, then at least we need a check (and report with error exit) for these mutually exclusive option variants. Probably even better would be to generate two separate binaries, let's say ipsec-secgw-event and ipsec-secgw-poll. We can still keep the same parent directory, makefile, common src files etc. for both. 
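The mutually exclusive option check suggested above could be sketched as follows. The struct and its flags are hypothetical placeholders (derived from whatever parse_args() ends up collecting), not part of the patch; only a couple of the poll-only options mentioned above are shown as examples:

```c
#include <stdbool.h>

/*
 * Hypothetical flags derived from the parsed command line; in
 * ipsec-secgw they would be filled in by parse_args().
 */
struct parsed_opts {
	bool event_mode;   /* --transfer-mode selected eventdev   */
	bool reassemble;   /* --reassemble given (poll-mode only) */
	bool library_mode; /* librte_ipsec mode (poll-mode only)  */
};

/*
 * Sketch of the suggested sanity check: fail early (the caller would
 * print usage and exit with an error) when poll-only options are
 * combined with event mode, instead of silently ignoring them.
 */
static int
check_mode_conflicts(const struct parsed_opts *o)
{
	if (o->event_mode && (o->reassemble || o->library_mode))
		return -1;
	return 0;
}
```

Calling such a check right after argument parsing would give the "report with error exit" behavior requested in the review without splitting the application into two binaries.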
> > Signed-off-by: Anoob Joseph <anoobj@marvell.com> > Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> > --- > examples/ipsec-secgw/Makefile | 1 + > examples/ipsec-secgw/event_helper.c | 3 + > examples/ipsec-secgw/event_helper.h | 26 +++ > examples/ipsec-secgw/ipsec-secgw.c | 344 +++++++++++++++++++++++++++++++++++- > examples/ipsec-secgw/ipsec.h | 7 + > examples/ipsec-secgw/ipsec_worker.c | 180 +++++++++++++++++++ > examples/ipsec-secgw/meson.build | 2 +- > 7 files changed, 555 insertions(+), 8 deletions(-) > create mode 100644 examples/ipsec-secgw/ipsec_worker.c > > diff --git a/examples/ipsec-secgw/Makefile b/examples/ipsec-secgw/Makefile > index 09e3c5a..f6fd94c 100644 > --- a/examples/ipsec-secgw/Makefile > +++ b/examples/ipsec-secgw/Makefile > @@ -15,6 +15,7 @@ SRCS-y += sa.c > SRCS-y += rt.c > SRCS-y += ipsec_process.c > SRCS-y += ipsec-secgw.c > +SRCS-y += ipsec_worker.c > SRCS-y += event_helper.c > > CFLAGS += -gdwarf-2 > diff --git a/examples/ipsec-secgw/event_helper.c b/examples/ipsec-secgw/event_helper.c > index 6549875..44f997d 100644 > --- a/examples/ipsec-secgw/event_helper.c > +++ b/examples/ipsec-secgw/event_helper.c > @@ -984,6 +984,9 @@ eh_find_worker(uint32_t lcore_id, struct eh_conf *conf, > else > curr_conf.cap.burst = EH_RX_TYPE_NON_BURST; > > + curr_conf.cap.ipsec_mode = conf->ipsec_mode; > + curr_conf.cap.ipsec_dir = conf->ipsec_dir; > + > /* Parse the passed list and see if we have matching capabilities */ > > /* Initialize the pointer used to traverse the list */ > diff --git a/examples/ipsec-secgw/event_helper.h b/examples/ipsec-secgw/event_helper.h > index 2895dfa..07849b0 100644 > --- a/examples/ipsec-secgw/event_helper.h > +++ b/examples/ipsec-secgw/event_helper.h > @@ -74,6 +74,22 @@ enum eh_tx_types { > EH_TX_TYPE_NO_INTERNAL_PORT > }; > > +/** > + * Event mode ipsec mode types > + */ > +enum eh_ipsec_mode_types { > + EH_IPSEC_MODE_TYPE_APP = 0, > + EH_IPSEC_MODE_TYPE_DRIVER > +}; > + > +/** > + * Event mode ipsec direction 
types > + */ > +enum eh_ipsec_dir_types { > + EH_IPSEC_DIR_TYPE_OUTBOUND = 0, > + EH_IPSEC_DIR_TYPE_INBOUND, > +}; > + > /* Event dev params */ > struct eventdev_params { > uint8_t eventdev_id; > @@ -183,6 +199,12 @@ struct eh_conf { > */ > void *mode_params; > /**< Mode specific parameters */ > + > + /** Application specific params */ > + enum eh_ipsec_mode_types ipsec_mode; > + /**< Mode of ipsec run */ > + enum eh_ipsec_dir_types ipsec_dir; > + /**< Direction of ipsec processing */ > }; > > /* Workers registered by the application */ > @@ -194,6 +216,10 @@ struct eh_app_worker_params { > /**< Specify status of rx type burst */ > uint64_t tx_internal_port : 1; > /**< Specify whether tx internal port is available */ > + uint64_t ipsec_mode : 1; > + /**< Specify ipsec processing level */ > + uint64_t ipsec_dir : 1; > + /**< Specify direction of ipsec */ > }; > uint64_t u64; > } cap; > diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c > index 7506922..c5d95b9 100644 > --- a/examples/ipsec-secgw/ipsec-secgw.c > +++ b/examples/ipsec-secgw/ipsec-secgw.c > @@ -2,6 +2,7 @@ > * Copyright(c) 2016 Intel Corporation > */ > > +#include <stdbool.h> > #include <stdio.h> > #include <stdlib.h> > #include <stdint.h> > @@ -14,6 +15,7 @@ > #include <sys/queue.h> > #include <stdarg.h> > #include <errno.h> > +#include <signal.h> > #include <getopt.h> > > #include <rte_common.h> > @@ -41,12 +43,17 @@ > #include <rte_jhash.h> > #include <rte_cryptodev.h> > #include <rte_security.h> > +#include <rte_bitmap.h> > +#include <rte_eventdev.h> > #include <rte_ip.h> > #include <rte_ip_frag.h> > > +#include "event_helper.h" > #include "ipsec.h" > #include "parser.h" > > +volatile bool force_quit; > + > #define RTE_LOGTYPE_IPSEC RTE_LOGTYPE_USER1 > > #define MAX_JUMBO_PKT_LEN 9600 > @@ -133,12 +140,21 @@ struct flow_info flow_info_tbl[RTE_MAX_ETHPORTS]; > #define CMD_LINE_OPT_CONFIG "config" > #define CMD_LINE_OPT_SINGLE_SA "single-sa" > #define 
CMD_LINE_OPT_CRYPTODEV_MASK "cryptodev_mask" > +#define CMD_LINE_OPT_TRANSFER_MODE "transfer-mode" > +#define CMD_LINE_OPT_SCHEDULE_TYPE "schedule-type" > +#define CMD_LINE_OPT_IPSEC_MODE "process-mode" > +#define CMD_LINE_OPT_IPSEC_DIR "process-dir" > #define CMD_LINE_OPT_RX_OFFLOAD "rxoffload" > #define CMD_LINE_OPT_TX_OFFLOAD "txoffload" > #define CMD_LINE_OPT_REASSEMBLE "reassemble" > #define CMD_LINE_OPT_MTU "mtu" > #define CMD_LINE_OPT_FRAG_TTL "frag-ttl" > > +#define CMD_LINE_ARG_APP "app" > +#define CMD_LINE_ARG_DRV "drv" > +#define CMD_LINE_ARG_INB "in" > +#define CMD_LINE_ARG_OUT "out" > + > enum { > /* long options mapped to a short option */ > > @@ -149,7 +165,11 @@ enum { > CMD_LINE_OPT_CONFIG_NUM, > CMD_LINE_OPT_SINGLE_SA_NUM, > CMD_LINE_OPT_CRYPTODEV_MASK_NUM, > + CMD_LINE_OPT_TRANSFER_MODE_NUM, > + CMD_LINE_OPT_SCHEDULE_TYPE_NUM, > CMD_LINE_OPT_RX_OFFLOAD_NUM, > + CMD_LINE_OPT_IPSEC_MODE_NUM, > + CMD_LINE_OPT_IPSEC_DIR_NUM, > CMD_LINE_OPT_TX_OFFLOAD_NUM, > CMD_LINE_OPT_REASSEMBLE_NUM, > CMD_LINE_OPT_MTU_NUM, > @@ -160,6 +180,10 @@ static const struct option lgopts[] = { > {CMD_LINE_OPT_CONFIG, 1, 0, CMD_LINE_OPT_CONFIG_NUM}, > {CMD_LINE_OPT_SINGLE_SA, 1, 0, CMD_LINE_OPT_SINGLE_SA_NUM}, > {CMD_LINE_OPT_CRYPTODEV_MASK, 1, 0, CMD_LINE_OPT_CRYPTODEV_MASK_NUM}, > + {CMD_LINE_OPT_TRANSFER_MODE, 1, 0, CMD_LINE_OPT_TRANSFER_MODE_NUM}, > + {CMD_LINE_OPT_SCHEDULE_TYPE, 1, 0, CMD_LINE_OPT_SCHEDULE_TYPE_NUM}, > + {CMD_LINE_OPT_IPSEC_MODE, 1, 0, CMD_LINE_OPT_IPSEC_MODE_NUM}, > + {CMD_LINE_OPT_IPSEC_DIR, 1, 0, CMD_LINE_OPT_IPSEC_DIR_NUM}, > {CMD_LINE_OPT_RX_OFFLOAD, 1, 0, CMD_LINE_OPT_RX_OFFLOAD_NUM}, > {CMD_LINE_OPT_TX_OFFLOAD, 1, 0, CMD_LINE_OPT_TX_OFFLOAD_NUM}, > {CMD_LINE_OPT_REASSEMBLE, 1, 0, CMD_LINE_OPT_REASSEMBLE_NUM}, > @@ -1094,8 +1118,8 @@ drain_outbound_crypto_queues(const struct lcore_conf *qconf, > } > > /* main processing loop */ > -static int32_t > -main_loop(__attribute__((unused)) void *dummy) > +void > +ipsec_poll_mode_worker(void) > { > struct 
rte_mbuf *pkts[MAX_PKT_BURST]; > uint32_t lcore_id; > @@ -1137,7 +1161,7 @@ main_loop(__attribute__((unused)) void *dummy) > if (qconf->nb_rx_queue == 0) { > RTE_LOG(DEBUG, IPSEC, "lcore %u has nothing to do\n", > lcore_id); > - return 0; > + return; > } > > RTE_LOG(INFO, IPSEC, "entering main loop on lcore %u\n", lcore_id); > @@ -1150,7 +1174,7 @@ main_loop(__attribute__((unused)) void *dummy) > lcore_id, portid, queueid); > } > > - while (1) { > + while (!force_quit) { > cur_tsc = rte_rdtsc(); > > /* TX queue buffer drain */ > @@ -1277,6 +1301,10 @@ print_usage(const char *prgname) > " --config (port,queue,lcore)[,(port,queue,lcore)]" > " [--single-sa SAIDX]" > " [--cryptodev_mask MASK]" > + " [--transfer-mode MODE]" > + " [--schedule-type TYPE]" > + " [--process-mode MODE]" > + " [--process-dir DIR]" > " [--" CMD_LINE_OPT_RX_OFFLOAD " RX_OFFLOAD_MASK]" > " [--" CMD_LINE_OPT_TX_OFFLOAD " TX_OFFLOAD_MASK]" > " [--" CMD_LINE_OPT_REASSEMBLE " REASSEMBLE_TABLE_SIZE]" > @@ -1298,6 +1326,22 @@ print_usage(const char *prgname) > " bypassing the SP\n" > " --cryptodev_mask MASK: Hexadecimal bitmask of the crypto\n" > " devices to configure\n" > + " --transfer-mode MODE\n" > + " 0: Packet transfer via polling (default)\n" > + " 1: Packet transfer via eventdev\n" > + " --schedule-type TYPE queue schedule type, used only when\n" > + " transfer mode is set to eventdev\n" > + " 0: Ordered (default)\n" > + " 1: Atomic\n" > + " 2: Parallel\n" > + " --process-mode MODE processing mode, used only when\n" > + " transfer mode is set to eventdev\n" > + " \"app\" : application mode (default)\n" > + " \"drv\" : driver mode\n" > + " --process-dir DIR processing direction, used only when\n" > + " transfer mode is set to eventdev\n" > + " \"out\" : outbound (default)\n" > + " \"in\" : inbound\n" > " --" CMD_LINE_OPT_RX_OFFLOAD > ": bitmask of the RX HW offload capabilities to enable/use\n" > " (DEV_RX_OFFLOAD_*)\n" > @@ -1433,7 +1477,89 @@ print_app_sa_prm(const struct app_sa_prm *prm) > 
} > > static int32_t > -parse_args(int32_t argc, char **argv) > +eh_parse_decimal(const char *str) > +{ > + unsigned long num; > + char *end = NULL; > + > + num = strtoul(str, &end, 10); > + if ((str[0] == '\0') || (end == NULL) || (*end != '\0')) > + return -EINVAL; > + > + return num; > +} > + > +static int > +parse_transfer_mode(struct eh_conf *conf, const char *optarg) > +{ > + int32_t parsed_dec; > + > + parsed_dec = eh_parse_decimal(optarg); > + if (parsed_dec != EH_PKT_TRANSFER_MODE_POLL && > + parsed_dec != EH_PKT_TRANSFER_MODE_EVENT) { > + printf("Unsupported packet transfer mode"); > + return -EINVAL; > + } > + conf->mode = parsed_dec; > + return 0; > +} > + > +static int > +parse_schedule_type(struct eh_conf *conf, const char *optarg) > +{ > + struct eventmode_conf *em_conf = NULL; > + int32_t parsed_dec; > + > + parsed_dec = eh_parse_decimal(optarg); > + if (parsed_dec != RTE_SCHED_TYPE_ORDERED && > + parsed_dec != RTE_SCHED_TYPE_ATOMIC && > + parsed_dec != RTE_SCHED_TYPE_PARALLEL) > + return -EINVAL; > + > + /* Get eventmode conf */ > + em_conf = (struct eventmode_conf *)(conf->mode_params); > + > + em_conf->ext_params.sched_type = parsed_dec; > + > + return 0; > +} > + > +static int > +parse_ipsec_mode(struct eh_conf *conf, const char *optarg) > +{ > + if (!strncmp(CMD_LINE_ARG_APP, optarg, strlen(CMD_LINE_ARG_APP)) && > + strlen(optarg) == strlen(CMD_LINE_ARG_APP)) > + conf->ipsec_mode = EH_IPSEC_MODE_TYPE_APP; > + else if (!strncmp(CMD_LINE_ARG_DRV, optarg, strlen(CMD_LINE_ARG_DRV)) && > + strlen(optarg) == strlen(CMD_LINE_ARG_DRV)) > + conf->ipsec_mode = EH_IPSEC_MODE_TYPE_DRIVER; > + else { > + printf("Unsupported ipsec mode\n"); > + return -EINVAL; > + } > + > + return 0; > +} > + > +static int > +parse_ipsec_dir(struct eh_conf *conf, const char *optarg) > +{ > + if (!strncmp(CMD_LINE_ARG_INB, optarg, strlen(CMD_LINE_ARG_INB)) && > + strlen(optarg) == strlen(CMD_LINE_ARG_INB)) > + conf->ipsec_dir = EH_IPSEC_DIR_TYPE_INBOUND; > + else if 
(!strncmp(CMD_LINE_ARG_OUT, optarg, strlen(CMD_LINE_ARG_OUT)) && > + strlen(optarg) == strlen(CMD_LINE_ARG_OUT)) > + conf->ipsec_dir = EH_IPSEC_DIR_TYPE_OUTBOUND; > + else { > + printf("Unsupported ipsec direction\n"); > + return -EINVAL; > + } > + > + return 0; > +} > + > +static int32_t > +parse_args(int32_t argc, char **argv, struct eh_conf *eh_conf) > { > int opt; > int64_t ret; > @@ -1536,6 +1662,43 @@ parse_args(int32_t argc, char **argv) > /* else */ > enabled_cryptodev_mask = ret; > break; > + > + case CMD_LINE_OPT_TRANSFER_MODE_NUM: > + ret = parse_transfer_mode(eh_conf, optarg); > + if (ret < 0) { > + printf("Invalid packet transfer mode\n"); > + print_usage(prgname); > + return -1; > + } > + break; > + > + case CMD_LINE_OPT_SCHEDULE_TYPE_NUM: > + ret = parse_schedule_type(eh_conf, optarg); > + if (ret < 0) { > + printf("Invalid queue schedule type\n"); > + print_usage(prgname); > + return -1; > + } > + break; > + > + case CMD_LINE_OPT_IPSEC_MODE_NUM: > + ret = parse_ipsec_mode(eh_conf, optarg); > + if (ret < 0) { > + printf("Invalid ipsec mode\n"); > + print_usage(prgname); > + return -1; > + } > + break; > + > + case CMD_LINE_OPT_IPSEC_DIR_NUM: > + ret = parse_ipsec_dir(eh_conf, optarg); > + if (ret < 0) { > + printf("Invalid ipsec direction\n"); > + print_usage(prgname); > + return -1; > + } > + break; > + > case CMD_LINE_OPT_RX_OFFLOAD_NUM: > ret = parse_mask(optarg, &dev_rx_offload); > if (ret != 0) { > @@ -2457,6 +2620,132 @@ create_default_ipsec_flow(uint16_t port_id, uint64_t rx_offloads) > return ret; > } > > +static struct eh_conf * > +eh_conf_init(void) > +{ > + struct eventmode_conf *em_conf = NULL; > + struct eh_conf *conf = NULL; > + unsigned int eth_core_id; > + uint32_t nb_bytes; > + void *mem = NULL; > + > + /* Allocate memory for config */ > + conf = calloc(1, sizeof(struct eh_conf)); > + if (conf == NULL) { > + printf("Failed to allocate memory for eventmode helper conf"); > + goto err; > + } > + > + /* Set default conf */ > + > + /* 
Packet transfer mode: poll */ > + conf->mode = EH_PKT_TRANSFER_MODE_POLL; > + conf->ipsec_mode = EH_IPSEC_MODE_TYPE_APP; > + conf->ipsec_dir = EH_IPSEC_DIR_TYPE_OUTBOUND; > + > + /* Keep all ethernet ports enabled by default */ > + conf->eth_portmask = -1; > + > + /* Allocate memory for event mode params */ > + conf->mode_params = > + calloc(1, sizeof(struct eventmode_conf)); > + if (conf->mode_params == NULL) { > + printf("Failed to allocate memory for event mode params"); > + goto err; > + } > + > + /* Get eventmode conf */ > + em_conf = (struct eventmode_conf *)(conf->mode_params); > + > + /* Allocate and initialize bitmap for eth cores */ > + nb_bytes = rte_bitmap_get_memory_footprint(RTE_MAX_LCORE); > + if (!nb_bytes) { > + printf("Failed to get bitmap footprint"); > + goto err; > + } > + > + mem = rte_zmalloc("event-helper-ethcore-bitmap", nb_bytes, > + RTE_CACHE_LINE_SIZE); > + if (!mem) { > + printf("Failed to allocate memory for eth cores bitmap\n"); > + goto err; > + } > + > + em_conf->eth_core_mask = rte_bitmap_init(RTE_MAX_LCORE, mem, nb_bytes); > + if (!em_conf->eth_core_mask) { > + printf("Failed to initialize bitmap"); > + goto err; > + } > + > + /* Schedule type: ordered */ > + em_conf->ext_params.sched_type = RTE_SCHED_TYPE_ORDERED; > + > + /* Set two cores as eth cores for Rx & Tx */ > + > + /* Use first core other than master core as Rx core */ > + eth_core_id = rte_get_next_lcore(0, /* curr core */ > + 1, /* skip master core */ > + 0 /* wrap */); > + > + rte_bitmap_set(em_conf->eth_core_mask, eth_core_id); > + > + /* Use next core as Tx core */ > + eth_core_id = rte_get_next_lcore(eth_core_id, /* curr core */ > + 1, /* skip master core */ > + 0 /* wrap */); > + > + rte_bitmap_set(em_conf->eth_core_mask, eth_core_id); > + > + return conf; > +err: > + rte_free(mem); > + free(em_conf); > + free(conf); > + return NULL; > +} > + > +static void > +eh_conf_uninit(struct eh_conf *conf) > +{ > + struct eventmode_conf *em_conf = NULL; > + > + /* Get 
eventmode conf */ > + em_conf = (struct eventmode_conf *)(conf->mode_params); > + > + /* Free eventmode configuration memory */ > + rte_free(em_conf->eth_core_mask); > + free(em_conf); > + free(conf); > +} > + > +static void > +signal_handler(int signum) > +{ > + if (signum == SIGINT || signum == SIGTERM) { > + uint16_t port_id; > + printf("\n\nSignal %d received, preparing to exit...\n", > + signum); > + force_quit = true; > + > + /* Destroy the default ipsec flow */ > + RTE_ETH_FOREACH_DEV(port_id) { > + if ((enabled_port_mask & (1 << port_id)) == 0) > + continue; > + if (flow_info_tbl[port_id].rx_def_flow) { > + struct rte_flow_error err; > + int ret; > + ret = rte_flow_destroy(port_id, > + flow_info_tbl[port_id].rx_def_flow, > + &err); > + if (ret) > + RTE_LOG(ERR, IPSEC, > + "Failed to destroy flow for port %u, " > + "err msg: %s\n", port_id, err.message); > + } > + } > + } > +} > + > int32_t > main(int32_t argc, char **argv) > { > @@ -2466,6 +2755,7 @@ main(int32_t argc, char **argv) > uint8_t socket_id; > uint16_t portid; > uint64_t req_rx_offloads, req_tx_offloads; > + struct eh_conf *eh_conf = NULL; > size_t sess_sz; > > /* init EAL */ > @@ -2475,8 +2765,17 @@ main(int32_t argc, char **argv) > argc -= ret; > argv += ret; > > + force_quit = false; > + signal(SIGINT, signal_handler); > + signal(SIGTERM, signal_handler); > + > + /* initialize event helper configuration */ > + eh_conf = eh_conf_init(); > + if (eh_conf == NULL) > + rte_exit(EXIT_FAILURE, "Failed to init event helper config"); > + > /* parse application arguments (after the EAL ones) */ > - ret = parse_args(argc, argv); > + ret = parse_args(argc, argv, eh_conf); > if (ret < 0) > rte_exit(EXIT_FAILURE, "Invalid parameters\n"); > > @@ -2592,12 +2891,43 @@ main(int32_t argc, char **argv) > > check_all_ports_link_status(enabled_port_mask); > > + /* > + * Set the enabled port mask in helper config for use by helper > + * sub-system.
This will be used while initializing devices using > + * helper sub-system. > + */ > + eh_conf->eth_portmask = enabled_port_mask; > + > + /* Initialize eventmode components */ > + ret = eh_devs_init(eh_conf); > + if (ret < 0) > + rte_exit(EXIT_FAILURE, "eh_devs_init failed, err=%d\n", ret); > + > /* launch per-lcore init on every lcore */ > - rte_eal_mp_remote_launch(main_loop, NULL, CALL_MASTER); > + rte_eal_mp_remote_launch(ipsec_launch_one_lcore, eh_conf, CALL_MASTER); > + > RTE_LCORE_FOREACH_SLAVE(lcore_id) { > if (rte_eal_wait_lcore(lcore_id) < 0) > return -1; > } > > + /* Uninitialize eventmode components */ > + ret = eh_devs_uninit(eh_conf); > + if (ret < 0) > + rte_exit(EXIT_FAILURE, "eh_devs_uninit failed, err=%d\n", ret); > + > + /* Free eventmode configuration memory */ > + eh_conf_uninit(eh_conf); > + > + RTE_ETH_FOREACH_DEV(portid) { > + if ((enabled_port_mask & (1 << portid)) == 0) > + continue; > + printf("Closing port %d...", portid); > + rte_eth_dev_stop(portid); > + rte_eth_dev_close(portid); > + printf(" Done\n"); > + } > + printf("Bye...\n"); > + > return 0; > } > diff --git a/examples/ipsec-secgw/ipsec.h b/examples/ipsec-secgw/ipsec.h > index 28ff07d..0b9fc04 100644 > --- a/examples/ipsec-secgw/ipsec.h > +++ b/examples/ipsec-secgw/ipsec.h > @@ -247,6 +247,13 @@ struct ipsec_traffic { > struct traffic_type ip6; > }; > > + > +void > +ipsec_poll_mode_worker(void); > + > +int > +ipsec_launch_one_lcore(void *args); > + > uint16_t > ipsec_inbound(struct ipsec_ctx *ctx, struct rte_mbuf *pkts[], > uint16_t nb_pkts, uint16_t len); > diff --git a/examples/ipsec-secgw/ipsec_worker.c b/examples/ipsec-secgw/ipsec_worker.c > new file mode 100644 > index 0000000..87c657b > --- /dev/null > +++ b/examples/ipsec-secgw/ipsec_worker.c > @@ -0,0 +1,180 @@ > +/* SPDX-License-Identifier: BSD-3-Clause > + * Copyright(c) 2010-2016 Intel Corporation > + * Copyright (C) 2019 Marvell International Ltd.
> + */ > +#include <stdio.h> > +#include <stdlib.h> > +#include <string.h> > +#include <stdint.h> > +#include <inttypes.h> > +#include <sys/types.h> > +#include <sys/queue.h> > +#include <netinet/in.h> > +#include <setjmp.h> > +#include <stdarg.h> > +#include <ctype.h> > +#include <stdbool.h> > + > +#include <rte_common.h> > +#include <rte_log.h> > +#include <rte_memcpy.h> > +#include <rte_atomic.h> > +#include <rte_cycles.h> > +#include <rte_prefetch.h> > +#include <rte_lcore.h> > +#include <rte_branch_prediction.h> > +#include <rte_event_eth_tx_adapter.h> > +#include <rte_ether.h> > +#include <rte_ethdev.h> > +#include <rte_eventdev.h> > +#include <rte_malloc.h> > +#include <rte_mbuf.h> > + > +#include "ipsec.h" > +#include "event_helper.h" > + > +extern volatile bool force_quit; > + > +static inline void > +ipsec_event_pre_forward(struct rte_mbuf *m, unsigned int port_id) > +{ > + /* Save the destination port in the mbuf */ > + m->port = port_id; > + > + /* Save eth queue for Tx */ > + rte_event_eth_tx_adapter_txq_set(m, 0); > +} > + > +/* > + * Event mode exposes various operating modes depending on the > + * capabilities of the event device and the operating mode > + * selected. 
> + */ > + > +/* Workers registered */ > +#define IPSEC_EVENTMODE_WORKERS 1 > + > +/* > + * Event mode worker > + * Operating parameters : non-burst - Tx internal port - driver mode - inbound > + */ > +static void > +ipsec_wrkr_non_burst_int_port_drvr_mode_inb(struct eh_event_link_info *links, > + uint8_t nb_links) > +{ > + unsigned int nb_rx = 0; > + struct rte_mbuf *pkt; > + unsigned int port_id; > + struct rte_event ev; > + uint32_t lcore_id; > + > + /* Check if we have links registered for this lcore */ > + if (nb_links == 0) { > + /* No links registered - exit */ > + goto exit; > + } > + > + /* Get core ID */ > + lcore_id = rte_lcore_id(); > + > + RTE_LOG(INFO, IPSEC, > + "Launching event mode worker (non-burst - Tx internal port - " > + "driver mode - inbound) on lcore %d\n", lcore_id); > + > + /* We have valid links */ > + > + /* Check if it's single link */ > + if (nb_links != 1) { > + RTE_LOG(INFO, IPSEC, > + "Multiple links not supported. Using first link\n"); > + } > + > + RTE_LOG(INFO, IPSEC, " -- lcoreid=%u event_port_id=%u\n", lcore_id, > + links[0].event_port_id); > + while (!force_quit) { > + /* Read packet from event queues */ > + nb_rx = rte_event_dequeue_burst(links[0].eventdev_id, > + links[0].event_port_id, > + &ev, /* events */ > + 1, /* nb_events */ > + 0 /* timeout_ticks */); > + > + if (nb_rx == 0) > + continue; > + > + port_id = ev.queue_id; > + pkt = ev.mbuf; > + > + rte_prefetch0(rte_pktmbuf_mtod(pkt, void *)); > + > + /* Process packet */ > + ipsec_event_pre_forward(pkt, port_id); > + > + /* > + * Since tx internal port is available, events can be > + * directly enqueued to the adapter and it would be > + * internally submitted to the eth device. 
> + */ > + rte_event_eth_tx_adapter_enqueue(links[0].eventdev_id, > + links[0].event_port_id, > + &ev, /* events */ > + 1, /* nb_events */ > + 0 /* flags */); > + } > + > +exit: > + return; > +} > + > +static uint8_t > +ipsec_eventmode_populate_wrkr_params(struct eh_app_worker_params *wrkrs) > +{ > + struct eh_app_worker_params *wrkr; > + uint8_t nb_wrkr_param = 0; > + > + /* Save workers */ > + wrkr = wrkrs; > + > + /* Non-burst - Tx internal port - driver mode - inbound */ > + wrkr->cap.burst = EH_RX_TYPE_NON_BURST; > + wrkr->cap.tx_internal_port = EH_TX_TYPE_INTERNAL_PORT; > + wrkr->cap.ipsec_mode = EH_IPSEC_MODE_TYPE_DRIVER; > + wrkr->cap.ipsec_dir = EH_IPSEC_DIR_TYPE_INBOUND; > + wrkr->worker_thread = ipsec_wrkr_non_burst_int_port_drvr_mode_inb; > + > + nb_wrkr_param++; > + return nb_wrkr_param; > +} > + > +static void > +ipsec_eventmode_worker(struct eh_conf *conf) > +{ > + struct eh_app_worker_params ipsec_wrkr[IPSEC_EVENTMODE_WORKERS] = { > + {{{0} }, NULL } }; > + uint8_t nb_wrkr_param; > + > + /* Populate ipsec_wrkr params */ > + nb_wrkr_param = ipsec_eventmode_populate_wrkr_params(ipsec_wrkr); > + > + /* > + * Launch correct worker after checking > + * the event device's capabilities.
> + */ > + eh_launch_worker(conf, ipsec_wrkr, nb_wrkr_param); > +} > + > +int ipsec_launch_one_lcore(void *args) > +{ > + struct eh_conf *conf; > + > + conf = (struct eh_conf *)args; > + > + if (conf->mode == EH_PKT_TRANSFER_MODE_POLL) { > + /* Run in poll mode */ > + ipsec_poll_mode_worker(); > + } else if (conf->mode == EH_PKT_TRANSFER_MODE_EVENT) { > + /* Run in event mode */ > + ipsec_eventmode_worker(conf); > + } > + return 0; > +} > diff --git a/examples/ipsec-secgw/meson.build b/examples/ipsec-secgw/meson.build > index 20f4064..ab40ca5 100644 > --- a/examples/ipsec-secgw/meson.build > +++ b/examples/ipsec-secgw/meson.build > @@ -10,5 +10,5 @@ deps += ['security', 'lpm', 'acl', 'hash', 'ip_frag', 'ipsec', 'eventdev'] > allow_experimental_apis = true > sources = files( > 'esp.c', 'ipsec.c', 'ipsec_process.c', 'ipsec-secgw.c', > - 'parser.c', 'rt.c', 'sa.c', 'sp4.c', 'sp6.c', 'event_helper.c' > + 'parser.c', 'rt.c', 'sa.c', 'sp4.c', 'sp6.c', 'event_helper.c', 'ipsec_worker.c' > ) > -- > 2.7.4
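The worker registration in ipsec_worker.c above relies on exact capability matching: each registered worker packs its requirements (burst type, Tx internal port, ipsec mode, direction) into bitfields backed by a single u64, and the event helper selects the entry whose packed value matches the runtime conditions. A minimal sketch of that idea follows; it assumes matching is a plain equality scan, which simplifies the real eh_find_worker() logic in event_helper.c, and the struct here is a trimmed stand-in for eh_app_worker_params.

```c
/* Simplified sketch of capability-based worker selection.
 * The union layout mirrors eh_app_worker_params in the patch; the
 * equality-scan match loop is an assumption, not the actual helper code. */
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

struct app_worker {
	union {
		struct {
			uint64_t burst : 1;
			uint64_t tx_internal_port : 1;
			uint64_t ipsec_mode : 1;
			uint64_t ipsec_dir : 1;
		};
		uint64_t u64; /* all capability bits viewed at once */
	} cap;
	void (*worker_thread)(void);
};

/* Return the first registered worker whose packed caps match exactly,
 * or NULL when no worker supports the current operating conditions. */
static struct app_worker *
find_worker(struct app_worker *wrkrs, size_t n, uint64_t curr_caps)
{
	size_t i;

	for (i = 0; i < n; i++)
		if (wrkrs[i].cap.u64 == curr_caps)
			return &wrkrs[i];
	return NULL;
}
```

Packing the capabilities into one u64 is what lets the helper compare a whole capability set in a single integer comparison instead of field-by-field checks.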
* Re: [dpdk-dev] [PATCH 09/14] examples/ipsec-secgw: add eventmode to ipsec-secgw 2019-12-24 12:47 ` Ananyev, Konstantin @ 2020-01-03 10:20 ` Anoob Joseph 2020-01-06 16:50 ` Ananyev, Konstantin 0 siblings, 1 reply; 147+ messages in thread From: Anoob Joseph @ 2020-01-03 10:20 UTC (permalink / raw) To: Ananyev, Konstantin, Akhil Goyal, Nicolau, Radu, Thomas Monjalon Cc: Lukas Bartosik, Jerin Jacob Kollanukkaran, Narayana Prasad Raju Athreya, Ankur Dwivedi, Archana Muniganti, Tejasree Kondoj, Vamsi Krishna Attunuru, dev Hi Konstantin, Please see inline. Thanks, Anoob > -----Original Message----- > From: Ananyev, Konstantin <konstantin.ananyev@intel.com> > Sent: Tuesday, December 24, 2019 6:18 PM > To: Anoob Joseph <anoobj@marvell.com>; Akhil Goyal > <akhil.goyal@nxp.com>; Nicolau, Radu <radu.nicolau@intel.com>; Thomas > Monjalon <thomas@monjalon.net> > Cc: Lukas Bartosik <lbartosik@marvell.com>; Jerin Jacob Kollanukkaran > <jerinj@marvell.com>; Narayana Prasad Raju Athreya > <pathreya@marvell.com>; Ankur Dwivedi <adwivedi@marvell.com>; > Archana Muniganti <marchana@marvell.com>; Tejasree Kondoj > <ktejasree@marvell.com>; Vamsi Krishna Attunuru > <vattunuru@marvell.com>; dev@dpdk.org > Subject: [EXT] RE: [PATCH 09/14] examples/ipsec-secgw: add eventmode to > ipsec-secgw > > Add eventmode support to ipsec-secgw. This uses the event helper to set up > > and use the eventmode capabilities. Add driver inbound worker. > > > > Example command: > > ./ipsec-secgw -c 0x1 -w 0002:02:00.0,ipsec_in_max_spi=100 -w > > 0002:07:00.0 -w 0002:0e:00.0 -w 0002:10:00.1 -- -P -p 0x3 -u 0x1 > > --config "(0,0,0),(1,0,0)" -f a-aes-gcm-msa.cfg --transfer-mode 1 > > --schedule-type 2 --process-mode drv --process-dir in > > As far as I can see, the new event mode is totally orthogonal to the existing poll mode. > Event mode has its own data-path, and it doesn't reuse any part of the poll-mode data-path code.
> Plus, in event mode many poll-mode options > (library/legacy mode, fragment/reassemble, replay-window, ESN, fall-back > session, etc.) > are simply ignored. [Anoob] These features are not supported in the initial version, but they are equally applicable to event mode and support is planned for the future. Also, fragment/reassemble, replay-window, ESN, fall-back session etc. are not applicable in non-library mode. We can follow the same logic and allow for an extra arg (which is --transfer-mode). > Also, as I read the current code, > right now these modes can't be mixed and used together. > The user has to use either only event-based or only poll-mode APIs/devices. [Anoob] This is the same as how we cannot mix library and non-library modes. > > If so, then at least we need a check (and report with error exit) for these > mutually exclusive option variants. [Anoob] Will do that. > Probably even better would be to generate two separate binaries. Let's say: > ipsec-secgw-event and ipsec-secgw-poll. > We can still keep the same parent directory, makefile, common src files etc. > for both. [Anoob] I would be inclined not to fork the current application. Do you see any issues if the same binary could run in both modes? The default behavior would be poll mode (with the existing behavior).
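The mutually exclusive option check agreed on above could look roughly like the following. This is a minimal sketch: the eh_conf struct and EH_PKT_TRANSFER_MODE_* values mirror the patch, but the check_event_mode_params() helper, the event_opt_seen flag, and the trimmed-down struct are hypothetical, not part of the posted series.

```c
/* Hypothetical sketch of the option-conflict check discussed above.
 * parse_args() would set event_opt_seen whenever an event-only option
 * (--schedule-type, --process-mode, --process-dir) is parsed, then call
 * this helper once all options have been consumed. */
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

enum eh_pkt_transfer_mode {
	EH_PKT_TRANSFER_MODE_POLL = 0,
	EH_PKT_TRANSFER_MODE_EVENT,
};

/* Trimmed stand-in for the patch's struct eh_conf */
struct eh_conf {
	enum eh_pkt_transfer_mode mode;
};

/* Reject event-only options when the transfer mode is poll. */
static int
check_event_mode_params(const struct eh_conf *conf, bool event_opt_seen)
{
	if (conf->mode == EH_PKT_TRANSFER_MODE_POLL && event_opt_seen) {
		printf("Event-mode options are invalid in poll mode\n");
		return -1; /* caller would print usage and exit */
	}
	return 0;
}
```

With a check like this, a non-zero result would make parse_args() fail, so the application exits with a usage error instead of silently ignoring event-only options in poll mode.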
> > > > > Signed-off-by: Anoob Joseph <anoobj@marvell.com> > > Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> > > --- > > examples/ipsec-secgw/Makefile | 1 + > > examples/ipsec-secgw/event_helper.c | 3 + > > examples/ipsec-secgw/event_helper.h | 26 +++ > > examples/ipsec-secgw/ipsec-secgw.c | 344 > +++++++++++++++++++++++++++++++++++- > > examples/ipsec-secgw/ipsec.h | 7 + > > examples/ipsec-secgw/ipsec_worker.c | 180 +++++++++++++++++++ > > examples/ipsec-secgw/meson.build | 2 +- > > 7 files changed, 555 insertions(+), 8 deletions(-) create mode > > 100644 examples/ipsec-secgw/ipsec_worker.c > > > > diff --git a/examples/ipsec-secgw/Makefile > > b/examples/ipsec-secgw/Makefile index 09e3c5a..f6fd94c 100644 > > --- a/examples/ipsec-secgw/Makefile > > +++ b/examples/ipsec-secgw/Makefile > > @@ -15,6 +15,7 @@ SRCS-y += sa.c > > SRCS-y += rt.c > > SRCS-y += ipsec_process.c > > SRCS-y += ipsec-secgw.c > > +SRCS-y += ipsec_worker.c > > SRCS-y += event_helper.c > > > > CFLAGS += -gdwarf-2 > > diff --git a/examples/ipsec-secgw/event_helper.c > > b/examples/ipsec-secgw/event_helper.c > > index 6549875..44f997d 100644 > > --- a/examples/ipsec-secgw/event_helper.c > > +++ b/examples/ipsec-secgw/event_helper.c > > @@ -984,6 +984,9 @@ eh_find_worker(uint32_t lcore_id, struct eh_conf > *conf, > > else > > curr_conf.cap.burst = EH_RX_TYPE_NON_BURST; > > > > + curr_conf.cap.ipsec_mode = conf->ipsec_mode; > > + curr_conf.cap.ipsec_dir = conf->ipsec_dir; > > + > > /* Parse the passed list and see if we have matching capabilities */ > > > > /* Initialize the pointer used to traverse the list */ diff --git > > a/examples/ipsec-secgw/event_helper.h > > b/examples/ipsec-secgw/event_helper.h > > index 2895dfa..07849b0 100644 > > --- a/examples/ipsec-secgw/event_helper.h > > +++ b/examples/ipsec-secgw/event_helper.h > > @@ -74,6 +74,22 @@ enum eh_tx_types { > > EH_TX_TYPE_NO_INTERNAL_PORT > > }; > > > > +/** > > + * Event mode ipsec mode types > > + */ > > +enum 
eh_ipsec_mode_types { > > + EH_IPSEC_MODE_TYPE_APP = 0, > > + EH_IPSEC_MODE_TYPE_DRIVER > > +}; > > + > > +/** > > + * Event mode ipsec direction types > > + */ > > +enum eh_ipsec_dir_types { > > + EH_IPSEC_DIR_TYPE_OUTBOUND = 0, > > + EH_IPSEC_DIR_TYPE_INBOUND, > > +}; > > + > > /* Event dev params */ > > struct eventdev_params { > > uint8_t eventdev_id; > > @@ -183,6 +199,12 @@ struct eh_conf { > > */ > > void *mode_params; > > /**< Mode specific parameters */ > > + > > + /** Application specific params */ > > + enum eh_ipsec_mode_types ipsec_mode; > > + /**< Mode of ipsec run */ > > + enum eh_ipsec_dir_types ipsec_dir; > > + /**< Direction of ipsec processing */ > > }; > > > > /* Workers registered by the application */ @@ -194,6 +216,10 @@ > > struct eh_app_worker_params { > > /**< Specify status of rx type burst */ > > uint64_t tx_internal_port : 1; > > /**< Specify whether tx internal port is available */ > > + uint64_t ipsec_mode : 1; > > + /**< Specify ipsec processing level */ > > + uint64_t ipsec_dir : 1; > > + /**< Specify direction of ipsec */ > > }; > > uint64_t u64; > > } cap; > > diff --git a/examples/ipsec-secgw/ipsec-secgw.c > > b/examples/ipsec-secgw/ipsec-secgw.c > > index 7506922..c5d95b9 100644 > > --- a/examples/ipsec-secgw/ipsec-secgw.c > > +++ b/examples/ipsec-secgw/ipsec-secgw.c > > @@ -2,6 +2,7 @@ > > * Copyright(c) 2016 Intel Corporation > > */ > > > > +#include <stdbool.h> > > #include <stdio.h> > > #include <stdlib.h> > > #include <stdint.h> > > @@ -14,6 +15,7 @@ > > #include <sys/queue.h> > > #include <stdarg.h> > > #include <errno.h> > > +#include <signal.h> > > #include <getopt.h> > > > > #include <rte_common.h> > > @@ -41,12 +43,17 @@ > > #include <rte_jhash.h> > > #include <rte_cryptodev.h> > > #include <rte_security.h> > > +#include <rte_bitmap.h> > > +#include <rte_eventdev.h> > > #include <rte_ip.h> > > #include <rte_ip_frag.h> > > > > +#include "event_helper.h" > > #include "ipsec.h" > > #include "parser.h" > > > > +volatile 
bool force_quit; > > + > > #define RTE_LOGTYPE_IPSEC RTE_LOGTYPE_USER1 > > > > #define MAX_JUMBO_PKT_LEN 9600 > > @@ -133,12 +140,21 @@ struct flow_info > flow_info_tbl[RTE_MAX_ETHPORTS]; > > #define CMD_LINE_OPT_CONFIG "config" > > #define CMD_LINE_OPT_SINGLE_SA "single-sa" > > #define CMD_LINE_OPT_CRYPTODEV_MASK "cryptodev_mask" > > +#define CMD_LINE_OPT_TRANSFER_MODE "transfer-mode" > > +#define CMD_LINE_OPT_SCHEDULE_TYPE "schedule-type" > > +#define CMD_LINE_OPT_IPSEC_MODE "process-mode" > > +#define CMD_LINE_OPT_IPSEC_DIR "process-dir" > > #define CMD_LINE_OPT_RX_OFFLOAD "rxoffload" > > #define CMD_LINE_OPT_TX_OFFLOAD "txoffload" > > #define CMD_LINE_OPT_REASSEMBLE "reassemble" > > #define CMD_LINE_OPT_MTU "mtu" > > #define CMD_LINE_OPT_FRAG_TTL "frag-ttl" > > > > +#define CMD_LINE_ARG_APP "app" > > +#define CMD_LINE_ARG_DRV "drv" > > +#define CMD_LINE_ARG_INB "in" > > +#define CMD_LINE_ARG_OUT "out" > > + > > enum { > > /* long options mapped to a short option */ > > > > @@ -149,7 +165,11 @@ enum { > > CMD_LINE_OPT_CONFIG_NUM, > > CMD_LINE_OPT_SINGLE_SA_NUM, > > CMD_LINE_OPT_CRYPTODEV_MASK_NUM, > > + CMD_LINE_OPT_TRANSFER_MODE_NUM, > > + CMD_LINE_OPT_SCHEDULE_TYPE_NUM, > > CMD_LINE_OPT_RX_OFFLOAD_NUM, > > + CMD_LINE_OPT_IPSEC_MODE_NUM, > > + CMD_LINE_OPT_IPSEC_DIR_NUM, > > CMD_LINE_OPT_TX_OFFLOAD_NUM, > > CMD_LINE_OPT_REASSEMBLE_NUM, > > CMD_LINE_OPT_MTU_NUM, > > @@ -160,6 +180,10 @@ static const struct option lgopts[] = { > > {CMD_LINE_OPT_CONFIG, 1, 0, CMD_LINE_OPT_CONFIG_NUM}, > > {CMD_LINE_OPT_SINGLE_SA, 1, 0, > CMD_LINE_OPT_SINGLE_SA_NUM}, > > {CMD_LINE_OPT_CRYPTODEV_MASK, 1, 0, > > CMD_LINE_OPT_CRYPTODEV_MASK_NUM}, > > + {CMD_LINE_OPT_TRANSFER_MODE, 1, 0, > CMD_LINE_OPT_TRANSFER_MODE_NUM}, > > + {CMD_LINE_OPT_SCHEDULE_TYPE, 1, 0, > CMD_LINE_OPT_SCHEDULE_TYPE_NUM}, > > + {CMD_LINE_OPT_IPSEC_MODE, 1, 0, > CMD_LINE_OPT_IPSEC_MODE_NUM}, > > + {CMD_LINE_OPT_IPSEC_DIR, 1, 0, > CMD_LINE_OPT_IPSEC_DIR_NUM}, > > {CMD_LINE_OPT_RX_OFFLOAD, 1, 0, > 
CMD_LINE_OPT_RX_OFFLOAD_NUM}, > > {CMD_LINE_OPT_TX_OFFLOAD, 1, 0, > CMD_LINE_OPT_TX_OFFLOAD_NUM}, > > {CMD_LINE_OPT_REASSEMBLE, 1, 0, > CMD_LINE_OPT_REASSEMBLE_NUM}, @@ > > -1094,8 +1118,8 @@ drain_outbound_crypto_queues(const struct > > lcore_conf *qconf, } > > > > /* main processing loop */ > > -static int32_t > > -main_loop(__attribute__((unused)) void *dummy) > > +void > > +ipsec_poll_mode_worker(void) > > { > > struct rte_mbuf *pkts[MAX_PKT_BURST]; > > uint32_t lcore_id; > > @@ -1137,7 +1161,7 @@ main_loop(__attribute__((unused)) void > *dummy) > > if (qconf->nb_rx_queue == 0) { > > RTE_LOG(DEBUG, IPSEC, "lcore %u has nothing to do\n", > > lcore_id); > > - return 0; > > + return; > > } > > > > RTE_LOG(INFO, IPSEC, "entering main loop on lcore %u\n", lcore_id); > > @@ -1150,7 +1174,7 @@ main_loop(__attribute__((unused)) void > *dummy) > > lcore_id, portid, queueid); > > } > > > > - while (1) { > > + while (!force_quit) { > > cur_tsc = rte_rdtsc(); > > > > /* TX queue buffer drain */ > > @@ -1277,6 +1301,10 @@ print_usage(const char *prgname) > > " --config (port,queue,lcore)[,(port,queue,lcore)]" > > " [--single-sa SAIDX]" > > " [--cryptodev_mask MASK]" > > + " [--transfer-mode MODE]" > > + " [--schedule-type TYPE]" > > + " [--process-mode MODE]" > > + " [--process-dir DIR]" > > " [--" CMD_LINE_OPT_RX_OFFLOAD " RX_OFFLOAD_MASK]" > > " [--" CMD_LINE_OPT_TX_OFFLOAD " TX_OFFLOAD_MASK]" > > " [--" CMD_LINE_OPT_REASSEMBLE " > REASSEMBLE_TABLE_SIZE]" > > @@ -1298,6 +1326,22 @@ print_usage(const char *prgname) > > " bypassing the SP\n" > > " --cryptodev_mask MASK: Hexadecimal bitmask of the > crypto\n" > > " devices to configure\n" > > + " --transfer-mode MODE\n" > > + " 0: Packet transfer via polling (default)\n" > > + " 1: Packet transfer via eventdev\n" > > + " --schedule-type TYPE queue schedule type, used only > when\n" > > + " transfer mode is set to eventdev\n" > > + " 0: Ordered (default)\n" > > + " 1: Atomic\n" > > + " 2: Parallel\n" > > + " --process-mode 
MODE processing mode, used only > when\n" > > + " transfer mode is set to eventdev\n" > > + " \"app\" : application mode (default)\n" > > + " \"drv\" : driver mode\n" > > + " --process-dir DIR processing direction, used only when\n" > > + " transfer mode is set to eventdev\n" > > + " \"out\" : outbound (default)\n" > > + " \"in\" : inbound\n" > > " --" CMD_LINE_OPT_RX_OFFLOAD > > ": bitmask of the RX HW offload capabilities to enable/use\n" > > " (DEV_RX_OFFLOAD_*)\n" > > @@ -1433,7 +1477,89 @@ print_app_sa_prm(const struct app_sa_prm > *prm) > > } > > > > static int32_t > > -parse_args(int32_t argc, char **argv) > > +eh_parse_decimal(const char *str) > > +{ > > + unsigned long num; > > + char *end = NULL; > > + > > + num = strtoul(str, &end, 10); > > + if ((str[0] == '\0') || (end == NULL) || (*end != '\0')) > > + return -EINVAL; > > + > > + return num; > > +} > > + > > +static int > > +parse_transfer_mode(struct eh_conf *conf, const char *optarg) { > > + int32_t parsed_dec; > > + > > + parsed_dec = eh_parse_decimal(optarg); > > + if (parsed_dec != EH_PKT_TRANSFER_MODE_POLL && > > + parsed_dec != EH_PKT_TRANSFER_MODE_EVENT) { > > + printf("Unsupported packet transfer mode"); > > + return -EINVAL; > > + } > > + conf->mode = parsed_dec; > > + return 0; > > +} > > + > > +static int > > +parse_schedule_type(struct eh_conf *conf, const char *optarg) { > > + struct eventmode_conf *em_conf = NULL; > > + int32_t parsed_dec; > > + > > + parsed_dec = eh_parse_decimal(optarg); > > + if (parsed_dec != RTE_SCHED_TYPE_ORDERED && > > + parsed_dec != RTE_SCHED_TYPE_ATOMIC && > > + parsed_dec != RTE_SCHED_TYPE_PARALLEL) > > + return -EINVAL; > > + > > + /* Get eventmode conf */ > > + em_conf = (struct eventmode_conf *)(conf->mode_params); > > + > > + em_conf->ext_params.sched_type = parsed_dec; > > + > > + return 0; > > +} > > + > > +static int > > +parse_ipsec_mode(struct eh_conf *conf, const char *optarg) { > > + if (!strncmp(CMD_LINE_ARG_APP, optarg, > strlen(CMD_LINE_ARG_APP)) 
&& > > + strlen(optarg) == strlen(CMD_LINE_ARG_APP)) > > + conf->ipsec_mode = EH_IPSEC_MODE_TYPE_APP; > > + else if (!strncmp(CMD_LINE_ARG_DRV, optarg, > strlen(CMD_LINE_ARG_DRV)) && > > + strlen(optarg) == strlen(CMD_LINE_ARG_DRV)) > > + conf->ipsec_mode = EH_IPSEC_MODE_TYPE_DRIVER; > > + else { > > + printf("Unsupported ipsec mode\n"); > > + return -EINVAL; > > + } > > + > > + return 0; > > +} > > + > > +static int > > +parse_ipsec_dir(struct eh_conf *conf, const char *optarg) { > > + if (!strncmp(CMD_LINE_ARG_INB, optarg, > strlen(CMD_LINE_ARG_INB)) && > > + strlen(optarg) == strlen(CMD_LINE_ARG_INB)) > > + conf->ipsec_dir = EH_IPSEC_DIR_TYPE_INBOUND; > > + else if (!strncmp(CMD_LINE_ARG_OUT, optarg, > strlen(CMD_LINE_ARG_OUT)) && > > + strlen(optarg) == strlen(CMD_LINE_ARG_OUT)) > > + conf->ipsec_dir = EH_IPSEC_DIR_TYPE_OUTBOUND; > > + else { > > + printf("Unsupported ipsec direction\n"); > > + return -EINVAL; > > + } > > + > > + return 0; > > +} > > + > > +static int32_t > > +parse_args(int32_t argc, char **argv, struct eh_conf *eh_conf) > > { > > int opt; > > int64_t ret; > > @@ -1536,6 +1662,43 @@ parse_args(int32_t argc, char **argv) > > /* else */ > > enabled_cryptodev_mask = ret; > > break; > > + > > + case CMD_LINE_OPT_TRANSFER_MODE_NUM: > > + ret = parse_transfer_mode(eh_conf, optarg); > > + if (ret < 0) { > > + printf("Invalid packet transfer mode\n"); > > + print_usage(prgname); > > + return -1; > > + } > > + break; > > + > > + case CMD_LINE_OPT_SCHEDULE_TYPE_NUM: > > + ret = parse_schedule_type(eh_conf, optarg); > > + if (ret < 0) { > > + printf("Invalid queue schedule type\n"); > > + print_usage(prgname); > > + return -1; > > + } > > + break; > > + > > + case CMD_LINE_OPT_IPSEC_MODE_NUM: > > + ret = parse_ipsec_mode(eh_conf, optarg); > > + if (ret < 0) { > > + printf("Invalid ipsec mode\n"); > > + print_usage(prgname); > > + return -1; > > + } > > + break; > > + > > + case CMD_LINE_OPT_IPSEC_DIR_NUM: > > + ret = parse_ipsec_dir(eh_conf, optarg); > > 
+ if (ret < 0) { > > + printf("Invalid ipsec direction\n"); > > + print_usage(prgname); > > + return -1; > > + } > > + break; > > + > > case CMD_LINE_OPT_RX_OFFLOAD_NUM: > > ret = parse_mask(optarg, &dev_rx_offload); > > if (ret != 0) { > > @@ -2457,6 +2620,132 @@ create_default_ipsec_flow(uint16_t port_id, > uint64_t rx_offloads) > > return ret; > > } > > > > +static struct eh_conf * > > +eh_conf_init(void) > > +{ > > + struct eventmode_conf *em_conf = NULL; > > + struct eh_conf *conf = NULL; > > + unsigned int eth_core_id; > > + uint32_t nb_bytes; > > + void *mem = NULL; > > + > > + /* Allocate memory for config */ > > + conf = calloc(1, sizeof(struct eh_conf)); > > + if (conf == NULL) { > > + printf("Failed to allocate memory for eventmode helper > conf"); > > + goto err; > > + } > > + > > + /* Set default conf */ > > + > > + /* Packet transfer mode: poll */ > > + conf->mode = EH_PKT_TRANSFER_MODE_POLL; > > + conf->ipsec_mode = EH_IPSEC_MODE_TYPE_APP; > > + conf->ipsec_dir = EH_IPSEC_DIR_TYPE_OUTBOUND; > > + > > + /* Keep all ethernet ports enabled by default */ > > + conf->eth_portmask = -1; > > + > > + /* Allocate memory for event mode params */ > > + conf->mode_params = > > + calloc(1, sizeof(struct eventmode_conf)); > > + if (conf->mode_params == NULL) { > > + printf("Failed to allocate memory for event mode params"); > > + goto err; > > + } > > + > > + /* Get eventmode conf */ > > + em_conf = (struct eventmode_conf *)(conf->mode_params); > > + > > + /* Allocate and initialize bitmap for eth cores */ > > + nb_bytes = rte_bitmap_get_memory_footprint(RTE_MAX_LCORE); > > + if (!nb_bytes) { > > + printf("Failed to get bitmap footprint"); > > + goto err; > > + } > > + > > + mem = rte_zmalloc("event-helper-ethcore-bitmap", nb_bytes, > > + RTE_CACHE_LINE_SIZE); > > + if (!mem) { > > + printf("Failed to allocate memory for eth cores bitmap\n"); > > + goto err; > > + } > > + > > + em_conf->eth_core_mask = rte_bitmap_init(RTE_MAX_LCORE, > mem, nb_bytes); > > + if 
(!em_conf->eth_core_mask) { > > + printf("Failed to initialize bitmap"); > > + goto err; > > + } > > + > > + /* Schedule type: ordered */ > > + em_conf->ext_params.sched_type = RTE_SCHED_TYPE_ORDERED; > > + > > + /* Set two cores as eth cores for Rx & Tx */ > > + > > + /* Use first core other than master core as Rx core */ > > + eth_core_id = rte_get_next_lcore(0, /* curr core */ > > + 1, /* skip master core */ > > + 0 /* wrap */); > > + > > + rte_bitmap_set(em_conf->eth_core_mask, eth_core_id); > > + > > + /* Use next core as Tx core */ > > + eth_core_id = rte_get_next_lcore(eth_core_id, /* curr core > */ > > + 1, /* skip master core */ > > + 0 /* wrap */); > > + > > + rte_bitmap_set(em_conf->eth_core_mask, eth_core_id); > > + > > + return conf; > > +err: > > + rte_free(mem); > > + free(em_conf); > > + free(conf); > > + return NULL; > > +} > > + > > +static void > > +eh_conf_uninit(struct eh_conf *conf) > > +{ > > + struct eventmode_conf *em_conf = NULL; > > + > > + /* Get eventmode conf */ > > + em_conf = (struct eventmode_conf *)(conf->mode_params); > > + > > + /* Free eventmode configuration memory */ > > + rte_free(em_conf->eth_core_mask); > > + free(em_conf); > > + free(conf); > > +} > > + > > +static void > > +signal_handler(int signum) > > +{ > > + if (signum == SIGINT || signum == SIGTERM) { > > + uint16_t port_id; > > + printf("\n\nSignal %d received, preparing to exit...\n", > > + signum); > > + force_quit = true; > > + > > + /* Destroy the default ipsec flow */ > > + RTE_ETH_FOREACH_DEV(port_id) { > > + if ((enabled_port_mask & (1 << port_id)) == 0) > > + continue; > > + if (flow_info_tbl[port_id].rx_def_flow) { > > + struct rte_flow_error err; > > + int ret; > > + ret = rte_flow_destroy(port_id, > > + flow_info_tbl[port_id].rx_def_flow, > > + &err); > > + if (ret) > > + RTE_LOG(ERR, IPSEC, > > + "Failed to destroy flow for port %u, " > > + "err msg: %s\n", port_id, > err.message); > > + } > > + } > > + } > > +} > > + > > int32_t > > main(int32_t argc, 
char **argv) > > { > > @@ -2466,6 +2755,7 @@ main(int32_t argc, char **argv) > > uint8_t socket_id; > > uint16_t portid; > > uint64_t req_rx_offloads, req_tx_offloads; > > + struct eh_conf *eh_conf = NULL; > > size_t sess_sz; > > > > /* init EAL */ > > @@ -2475,8 +2765,17 @@ main(int32_t argc, char **argv) > > argc -= ret; > > argv += ret; > > > > + force_quit = false; > > + signal(SIGINT, signal_handler); > > + signal(SIGTERM, signal_handler); > > + > > + /* initialize event helper configuration */ > > + eh_conf = eh_conf_init(); > > + if (eh_conf == NULL) > > + rte_exit(EXIT_FAILURE, "Failed to init event helper config"); > > + > > /* parse application arguments (after the EAL ones) */ > > - ret = parse_args(argc, argv); > > + ret = parse_args(argc, argv, eh_conf); > > if (ret < 0) > > rte_exit(EXIT_FAILURE, "Invalid parameters\n"); > > > > @@ -2592,12 +2891,43 @@ main(int32_t argc, char **argv) > > > > check_all_ports_link_status(enabled_port_mask); > > > > + /* > > + * Set the enabled port mask in helper config for use by helper > > + * sub-system. This will be used while initializing devices using > > + * helper sub-system. 
> > + */ > > + eh_conf->eth_portmask = enabled_port_mask; > > + > > + /* Initialize eventmode components */ > > + ret = eh_devs_init(eh_conf); > > + if (ret < 0) > > + rte_exit(EXIT_FAILURE, "eh_devs_init failed, err=%d\n", > ret); > > + > > /* launch per-lcore init on every lcore */ > > - rte_eal_mp_remote_launch(main_loop, NULL, CALL_MASTER); > > + rte_eal_mp_remote_launch(ipsec_launch_one_lcore, eh_conf, > > +CALL_MASTER); > > + > > RTE_LCORE_FOREACH_SLAVE(lcore_id) { > > if (rte_eal_wait_lcore(lcore_id) < 0) > > return -1; > > } > > > > + /* Uninitialize eventmode components */ > > + ret = eh_devs_uninit(eh_conf); > > + if (ret < 0) > > + rte_exit(EXIT_FAILURE, "eh_devs_uninit failed, err=%d\n", > ret); > > + > > + /* Free eventmode configuration memory */ > > + eh_conf_uninit(eh_conf); > > + > > + RTE_ETH_FOREACH_DEV(portid) { > > + if ((enabled_port_mask & (1 << portid)) == 0) > > + continue; > > + printf("Closing port %d...", portid); > > + rte_eth_dev_stop(portid); > > + rte_eth_dev_close(portid); > > + printf(" Done\n"); > > + } > > + printf("Bye...\n"); > > + > > return 0; > > } > > diff --git a/examples/ipsec-secgw/ipsec.h > > b/examples/ipsec-secgw/ipsec.h index 28ff07d..0b9fc04 100644 > > --- a/examples/ipsec-secgw/ipsec.h > > +++ b/examples/ipsec-secgw/ipsec.h > > @@ -247,6 +247,13 @@ struct ipsec_traffic { > > struct traffic_type ip6; > > }; > > > > + > > +void > > +ipsec_poll_mode_worker(void); > > + > > +int > > +ipsec_launch_one_lcore(void *args); > > + > > uint16_t > > ipsec_inbound(struct ipsec_ctx *ctx, struct rte_mbuf *pkts[], > > uint16_t nb_pkts, uint16_t len); > > diff --git a/examples/ipsec-secgw/ipsec_worker.c > > b/examples/ipsec-secgw/ipsec_worker.c > > new file mode 100644 > > index 0000000..87c657b > > --- /dev/null > > +++ b/examples/ipsec-secgw/ipsec_worker.c > > @@ -0,0 +1,180 @@ > > +/* SPDX-License-Identifier: BSD-3-Clause > > + * Copyright(c) 2010-2016 Intel Corporation > > + * Copyright (C) 2019 Marvell International Ltd. 
> > + */ > > +#include <stdio.h> > > +#include <stdlib.h> > > +#include <string.h> > > +#include <stdint.h> > > +#include <inttypes.h> > > +#include <sys/types.h> > > +#include <sys/queue.h> > > +#include <netinet/in.h> > > +#include <setjmp.h> > > +#include <stdarg.h> > > +#include <ctype.h> > > +#include <stdbool.h> > > + > > +#include <rte_common.h> > > +#include <rte_log.h> > > +#include <rte_memcpy.h> > > +#include <rte_atomic.h> > > +#include <rte_cycles.h> > > +#include <rte_prefetch.h> > > +#include <rte_lcore.h> > > +#include <rte_branch_prediction.h> > > +#include <rte_event_eth_tx_adapter.h> #include <rte_ether.h> #include > > +<rte_ethdev.h> #include <rte_eventdev.h> #include <rte_malloc.h> > > +#include <rte_mbuf.h> > > + > > +#include "ipsec.h" > > +#include "event_helper.h" > > + > > +extern volatile bool force_quit; > > + > > +static inline void > > +ipsec_event_pre_forward(struct rte_mbuf *m, unsigned int port_id) { > > + /* Save the destination port in the mbuf */ > > + m->port = port_id; > > + > > + /* Save eth queue for Tx */ > > + rte_event_eth_tx_adapter_txq_set(m, 0); } > > + > > +/* > > + * Event mode exposes various operating modes depending on the > > + * capabilities of the event device and the operating mode > > + * selected. 
> > + */ > > + > > +/* Workers registered */ > > +#define IPSEC_EVENTMODE_WORKERS 1 > > + > > +/* > > + * Event mode worker > > + * Operating parameters : non-burst - Tx internal port - driver mode > > +- inbound */ static void > > +ipsec_wrkr_non_burst_int_port_drvr_mode_inb(struct > eh_event_link_info *links, > > + uint8_t nb_links) > > +{ > > + unsigned int nb_rx = 0; > > + struct rte_mbuf *pkt; > > + unsigned int port_id; > > + struct rte_event ev; > > + uint32_t lcore_id; > > + > > + /* Check if we have links registered for this lcore */ > > + if (nb_links == 0) { > > + /* No links registered - exit */ > > + goto exit; > > + } > > + > > + /* Get core ID */ > > + lcore_id = rte_lcore_id(); > > + > > + RTE_LOG(INFO, IPSEC, > > + "Launching event mode worker (non-burst - Tx internal port - > " > > + "driver mode - inbound) on lcore %d\n", lcore_id); > > + > > + /* We have valid links */ > > + > > + /* Check if it's single link */ > > + if (nb_links != 1) { > > + RTE_LOG(INFO, IPSEC, > > + "Multiple links not supported. Using first link\n"); > > + } > > + > > + RTE_LOG(INFO, IPSEC, " -- lcoreid=%u event_port_id=%u\n", > lcore_id, > > + links[0].event_port_id); > > + while (!force_quit) { > > + /* Read packet from event queues */ > > + nb_rx = rte_event_dequeue_burst(links[0].eventdev_id, > > + links[0].event_port_id, > > + &ev, /* events */ > > + 1, /* nb_events */ > > + 0 /* timeout_ticks */); > > + > > + if (nb_rx == 0) > > + continue; > > + > > + port_id = ev.queue_id; > > + pkt = ev.mbuf; > > + > > + rte_prefetch0(rte_pktmbuf_mtod(pkt, void *)); > > + > > + /* Process packet */ > > + ipsec_event_pre_forward(pkt, port_id); > > + > > + /* > > + * Since tx internal port is available, events can be > > + * directly enqueued to the adapter and it would be > > + * internally submitted to the eth device. 
> > + */ > > + rte_event_eth_tx_adapter_enqueue(links[0].eventdev_id, > > + links[0].event_port_id, > > + &ev, /* events */ > > + 1, /* nb_events */ > > + 0 /* flags */); > > + } > > + > > +exit: > > + return; > > +} > > + > > +static uint8_t > > +ipsec_eventmode_populate_wrkr_params(struct > eh_app_worker_params > > +*wrkrs) { > > + struct eh_app_worker_params *wrkr; > > + uint8_t nb_wrkr_param = 0; > > + > > + /* Save workers */ > > + wrkr = wrkrs; > > + > > + /* Non-burst - Tx internal port - driver mode - inbound */ > > + wrkr->cap.burst = EH_RX_TYPE_NON_BURST; > > + wrkr->cap.tx_internal_port = EH_TX_TYPE_INTERNAL_PORT; > > + wrkr->cap.ipsec_mode = EH_IPSEC_MODE_TYPE_DRIVER; > > + wrkr->cap.ipsec_dir = EH_IPSEC_DIR_TYPE_INBOUND; > > + wrkr->worker_thread = > ipsec_wrkr_non_burst_int_port_drvr_mode_inb; > > + > > + nb_wrkr_param++; > > + return nb_wrkr_param; > > +} > > + > > +static void > > +ipsec_eventmode_worker(struct eh_conf *conf) { > > + struct eh_app_worker_params > ipsec_wrkr[IPSEC_EVENTMODE_WORKERS] = { > > + {{{0} }, NULL } }; > > + uint8_t nb_wrkr_param; > > + > > + /* Populate ipsec_wrkr params */ > > + nb_wrkr_param = > ipsec_eventmode_populate_wrkr_params(ipsec_wrkr); > > + > > + /* > > + * Launch correct worker after checking > > + * the event device's capabilities. 
> > + */ > > + eh_launch_worker(conf, ipsec_wrkr, nb_wrkr_param); } > > + > > +int ipsec_launch_one_lcore(void *args) { > > + struct eh_conf *conf; > > + > > + conf = (struct eh_conf *)args; > > + > > + if (conf->mode == EH_PKT_TRANSFER_MODE_POLL) { > > + /* Run in poll mode */ > > + ipsec_poll_mode_worker(); > > + } else if (conf->mode == EH_PKT_TRANSFER_MODE_EVENT) { > > + /* Run in event mode */ > > + ipsec_eventmode_worker(conf); > > + } > > + return 0; > > +} > > diff --git a/examples/ipsec-secgw/meson.build > > b/examples/ipsec-secgw/meson.build > > index 20f4064..ab40ca5 100644 > > --- a/examples/ipsec-secgw/meson.build > > +++ b/examples/ipsec-secgw/meson.build > > @@ -10,5 +10,5 @@ deps += ['security', 'lpm', 'acl', 'hash', > > 'ip_frag', 'ipsec', 'eventdev'] allow_experimental_apis = true > > sources = files( > > 'esp.c', 'ipsec.c', 'ipsec_process.c', 'ipsec-secgw.c', > > - 'parser.c', 'rt.c', 'sa.c', 'sp4.c', 'sp6.c', 'event_helper.c' > > + 'parser.c', 'rt.c', 'sa.c', 'sp4.c', 'sp6.c', 'event_helper.c', > 'ipsec_worker.c' > > ) > > -- > > 2.7.4 ^ permalink raw reply [flat|nested] 147+ messages in thread
* Re: [dpdk-dev] [PATCH 09/14] examples/ipsec-secgw: add eventmode to ipsec-secgw 2020-01-03 10:20 ` Anoob Joseph @ 2020-01-06 16:50 ` Ananyev, Konstantin 2020-01-07 6:56 ` Anoob Joseph 0 siblings, 1 reply; 147+ messages in thread From: Ananyev, Konstantin @ 2020-01-06 16:50 UTC (permalink / raw) To: Anoob Joseph, Akhil Goyal, Nicolau, Radu, Thomas Monjalon Cc: Lukas Bartosik, Jerin Jacob Kollanukkaran, Narayana Prasad Raju Athreya, Ankur Dwivedi, Archana Muniganti, Tejasree Kondoj, Vamsi Krishna Attunuru, dev > > > Add eventmode support to ipsec-secgw. This uses event helper to setup > > > and use the eventmode capabilities. Add driver inbound worker. > > > > > > Example command: > > > ./ipsec-secgw -c 0x1 -w 0002:02:00.0,ipsec_in_max_spi=100 -w > > > 0002:07:00.0 -w 0002:0e:00.0 -w 0002:10:00.1 -- -P -p 0x3 -u 0x1 > > > --config "(0,0,0),(1,0,0)" -f a-aes-gcm-msa.cfg --transfer-mode 1 > > > --schedule-type 2 --process-mode drv --process-dir in > > > > As I can see, the new event mode is totally orthogonal to the existing poll mode. > > Event mode has its own data-path, and it doesn't reuse any part of poll- > > mode data-path code. > > Plus in event mode many poll-mode options: > > library/legacy mode, fragment/reassemble, replay-window, ESN, fall-back > > session, etc. > > are simply ignored. > > [Anoob] The features are not supported with the initial version. But the features are equally applicable to eventmode and are planned for the > future. Also, fragment/reassemble, replay-window, ESN, fall-back session etc. are not applicable for non-library mode. True, but in poll mode the library mode supports all functionality that legacy mode does, plus some extra. Also I still hope that after the perf-problems evaluation with NXP we will be able to safely remove legacy poll-mode. >We can follow the > same logic and allow for an extra arg (which is --transfer-mode). > > > Also as I can read the current code - 
> > User has to use either only event based or poll mode API/devices. > > [Anoob] Same as how we cannot mix library and non-library modes. > > > > > If so, then at least we need a check (and report with error exit) for these > > mutually exclusive option variants. > > [Anoob] Will do that. Ok. > > Probably even better would be to generate two separate binaries. Let's say: > > ipsec-secgw-event and ipsec-secgw-poll. > > We can still keep the same parent directory, makefile, common src files etc. > > for both. > > [Anoob] I would be inclined to not fork the current application. Do you see any issues if the same binary could run in both modes? The > default behavior would be poll mode (with existing behavior). My main concern here is that there will be an overwhelming number of options (some of which are mutually exclusive) in the same app. So it will be really hard to maintain and use such an app. My thought was that it might be cleaner to have two different apps, each with its own set of options. ^ permalink raw reply [flat|nested] 147+ messages in thread
* Re: [dpdk-dev] [PATCH 09/14] examples/ipsec-secgw: add eventmode to ipsec-secgw 2020-01-06 16:50 ` Ananyev, Konstantin @ 2020-01-07 6:56 ` Anoob Joseph 2020-01-07 14:38 ` Ananyev, Konstantin 0 siblings, 1 reply; 147+ messages in thread From: Anoob Joseph @ 2020-01-07 6:56 UTC (permalink / raw) To: Ananyev, Konstantin, Akhil Goyal, Nicolau, Radu, Thomas Monjalon Cc: Lukas Bartosik, Jerin Jacob Kollanukkaran, Narayana Prasad Raju Athreya, Ankur Dwivedi, Archana Muniganti, Tejasree Kondoj, Vamsi Krishna Attunuru, dev Hi Konstantin, Please see inline. Thanks, Anoob > -----Original Message----- > From: Ananyev, Konstantin <konstantin.ananyev@intel.com> > Sent: Monday, January 6, 2020 10:21 PM > To: Anoob Joseph <anoobj@marvell.com>; Akhil Goyal <akhil.goyal@nxp.com>; > Nicolau, Radu <radu.nicolau@intel.com>; Thomas Monjalon > <thomas@monjalon.net> > Cc: Lukas Bartosik <lbartosik@marvell.com>; Jerin Jacob Kollanukkaran > <jerinj@marvell.com>; Narayana Prasad Raju Athreya > <pathreya@marvell.com>; Ankur Dwivedi <adwivedi@marvell.com>; Archana > Muniganti <marchana@marvell.com>; Tejasree Kondoj > <ktejasree@marvell.com>; Vamsi Krishna Attunuru <vattunuru@marvell.com>; > dev@dpdk.org > Subject: [EXT] RE: [PATCH 09/14] examples/ipsec-secgw: add eventmode to > ipsec-secgw > > External Email > > ---------------------------------------------------------------------- > > > > Add eventmode support to ipsec-secgw. This uses event helper to > > > > setup and use the eventmode capabilities. Add driver inbound worker. > > > > > > > > Example command: > > > > ./ipsec-secgw -c 0x1 -w 0002:02:00.0,ipsec_in_max_spi=100 -w > > > > 0002:07:00.0 -w 0002:0e:00.0 -w 0002:10:00.1 -- -P -p 0x3 -u 0x1 > > > > --config "(0,0,0),(1,0,0)" -f a-aes-gcm-msa.cfg --transfer-mode 1 > > > > --schedule-type 2 --process-mode drv --process-dir in > > > > > > As I can see new event mode is totally orthogonal to the existing poll mode. 
> > > Event mode has its own data-path, and it doesn't reuse any part of > > > poll- mode data-path code. > > > Plus in event mode many poll-mode options: > > > library/legacy mode, fragment/reassemble, replay-window, ESN, > > > fall-back session, etc. > > > are simply ignored. > > > > [Anoob] The features are not supported with the initial version. But > > the features are equally applicable to eventmode and are planned for the future. > Also, fragment/reassemble, replay-window, ESN, fall-back session etc. are not > applicable for non-library mode. > > True, but in poll mode the library mode supports all functionality that legacy mode > does, plus some extra. > > Also I still hope that after the perf-problems evaluation with NXP we will be able to > > safely remove legacy poll-mode. > > > > >We can follow the > > > same logic and allow for an extra arg (which is --transfer-mode). > > > > > > > Also as I can read the current code - right now these modes can't be > > > > mixed and used together. > > > > User has to use either only event based or poll mode API/devices. > > > > > > [Anoob] Same as how we cannot mix library and non-library modes. > > > > > > > > > > > If so, then at least we need a check (and report with error exit) > > > > for these mutually exclusive option variants. > > > > > > [Anoob] Will do that. > > > > Ok. > > > > > > Probably even better would be to generate two separate binaries. Let's say: > > > > ipsec-secgw-event and ipsec-secgw-poll. > > > > We can still keep the same parent directory, makefile, common src files etc. > > > > for both. > > > > > > [Anoob] I would be inclined to not fork the current application. Do > > > you see any issues if the same binary could run in both modes? The default > > behavior would be poll mode (with existing behavior). > > > > My main concern here is that there will be an overwhelming number of options (some > > of which are mutually exclusive) in the same app. > > So it will be really hard to maintain and use such an app. 
> My thought was that it might be cleaner to have two different apps each with its > own set of options. > [Anoob] Technically event mode would need only one extra argument. The one to specify "scheduling type". The direction can be removed (discussed in another thread) and app-mode can be merged with existing single_sa mode. And we do want the event-mode to be supporting all features supported by poll mode. Just that we will have to take it up gradually (because of the volume of code change). Thomas had opposed the idea of forking example applications for event mode. I also agree with him there. Event-mode just establishes an alternate way to receive and send packets. Entire IPsec processing can be maintained common. ^ permalink raw reply [flat|nested] 147+ messages in thread
* Re: [dpdk-dev] [PATCH 09/14] examples/ipsec-secgw: add eventmode to ipsec-secgw 2020-01-07 6:56 ` Anoob Joseph @ 2020-01-07 14:38 ` Ananyev, Konstantin 0 siblings, 0 replies; 147+ messages in thread From: Ananyev, Konstantin @ 2020-01-07 14:38 UTC (permalink / raw) To: Anoob Joseph, Akhil Goyal, Nicolau, Radu, Thomas Monjalon Cc: Lukas Bartosik, Jerin Jacob Kollanukkaran, Narayana Prasad Raju Athreya, Ankur Dwivedi, Archana Muniganti, Tejasree Kondoj, Vamsi Krishna Attunuru, dev > > > > > Add eventmode support to ipsec-secgw. This uses event helper to > > > > > setup and use the eventmode capabilities. Add driver inbound worker. > > > > > > > > > > Example command: > > > > > ./ipsec-secgw -c 0x1 -w 0002:02:00.0,ipsec_in_max_spi=100 -w > > > > > 0002:07:00.0 -w 0002:0e:00.0 -w 0002:10:00.1 -- -P -p 0x3 -u 0x1 > > > > > --config "(0,0,0),(1,0,0)" -f a-aes-gcm-msa.cfg --transfer-mode 1 > > > > > --schedule-type 2 --process-mode drv --process-dir in > > > > > > > > As I can see new event mode is totally orthogonal to the existing poll mode. > > > > Event mode has it is own data-path, and it doesn't reuse any part of > > > > poll- mode data-path code. > > > > Plus in event mode many poll-mode options: > > > > libirary/legacy mode, fragment/reassemble, replay-window, ESN, > > > > fall-back session, etc. > > > > are simply ignored. > > > > > > [Anoob] The features are not supported with the initial version. But > > > the features are equally applicable to eventmode and is planned for the future. > > Also, fragment/reassemble, replay-window, ESN, fall-back session etc are not > > applicable for non-library mode. > > > > True, but in poll-mode library-mode support all functionality that legacy-mode > > does, plus some extra. > > Also I still hope that after perf-problems evaluation with NXP we will be able to > > safely remove legacy poll-mode. > > > > >We can follow the > > > same logic and allow for an extra arg (which is --transfer-mode). 
> > > > > > > Also as I can read the current code - right now these modes can't be > > > > mixed and used together. > > > > User has to use either only event based or poll mode API/devices. > > > > > > [Anoob] Same like how we cannot mix library and non-library modes. > > > > > > > > > > > If so, then at least we need a check (and report with error exit) > > > > for these mutually exclusive option variants. > > > > > > [Anoob] Will do that. > > > > Ok. > > > > > > Probably even better would be to generate two separate binaries Let say: > > > > ipsec-secgw-event and ipsec-secgw-poll. > > > > We can still keep the same parent directory, makefile, common src files etc. > > > > for both. > > > > > > [Anoob] I would be inclined to not fork the current application. Do > > > you see any issues if the same binary could run in both modes. The default > > behavior would be poll mode (with existing behavior). > > > > My main concern here that there will be over-helming number of options (some > > of which are mutually exclusive) in the same app. > > So it will be really hard to maintain and use such app. > > My thought was that it might be cleaner to have two different apps each with its > > own set of options. > > > > [Anoob] Technically event mode would need only one extra argument. The one to specify "scheduling type". The direction can be > removed (discussed in another thread) and app-mode can be merged with existing single_sa mode. > > And we do want the event-mode to be supporting all features supported by poll mode. Just that we will have to take it up gradually > (because of the volume of code change). > > Thomas had opposed the idea of forking example applications for event mode. I also agree with him there. Event-mode just > establishes an alternate way to receive and send packets. Entire IPsec processing can be maintained common. I didn't talk about forking. 
I talked about something like that - keep all code in examples/ipsec-secgw Probably move event/poll specific code into examples/ipsec-secgw/poll, examples/ipsec-secgw/event. Make changes in Makefile, meson.build to generate 2 binaries. But ok, one extra event-mode specific option doesn't seem that much. Let's try to keep unified binary and see how it goes. Konstantin ^ permalink raw reply [flat|nested] 147+ messages in thread
* [dpdk-dev] [PATCH 10/14] examples/ipsec-secgw: add app inbound worker 2019-12-08 12:30 [dpdk-dev] [PATCH 00/14] add eventmode to ipsec-secgw Anoob Joseph ` (8 preceding siblings ...) 2019-12-08 12:30 ` [dpdk-dev] [PATCH 09/14] examples/ipsec-secgw: add eventmode to ipsec-secgw Anoob Joseph @ 2019-12-08 12:30 ` Anoob Joseph 2019-12-08 12:30 ` [dpdk-dev] [PATCH 11/14] examples/ipsec-secgw: add app processing code Anoob Joseph ` (4 subsequent siblings) 14 siblings, 0 replies; 147+ messages in thread From: Anoob Joseph @ 2019-12-08 12:30 UTC (permalink / raw) To: Akhil Goyal, Radu Nicolau, Thomas Monjalon Cc: Lukasz Bartosik, Jerin Jacob, Narayana Prasad, Ankur Dwivedi, Anoob Joseph, Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru, Konstantin Ananyev, dev From: Lukasz Bartosik <lbartosik@marvell.com> Add application inbound worker thread. Signed-off-by: Anoob Joseph <anoobj@marvell.com> Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> --- examples/ipsec-secgw/ipsec_worker.c | 85 ++++++++++++++++++++++++++++++++++++- 1 file changed, 84 insertions(+), 1 deletion(-) diff --git a/examples/ipsec-secgw/ipsec_worker.c b/examples/ipsec-secgw/ipsec_worker.c index 87c657b..fce274a 100644 --- a/examples/ipsec-secgw/ipsec_worker.c +++ b/examples/ipsec-secgw/ipsec_worker.c @@ -52,7 +52,7 @@ ipsec_event_pre_forward(struct rte_mbuf *m, unsigned int port_id) */ /* Workers registered */ -#define IPSEC_EVENTMODE_WORKERS 1 +#define IPSEC_EVENTMODE_WORKERS 2 /* * Event mode worker @@ -126,6 +126,79 @@ ipsec_wrkr_non_burst_int_port_drvr_mode_inb(struct eh_event_link_info *links, return; } +/* + * Event mode worker + * Operating parameters : non-burst - Tx internal port - app mode - inbound + */ +static void +ipsec_wrkr_non_burst_int_port_app_mode_inb(struct eh_event_link_info *links, + uint8_t nb_links) +{ + unsigned int nb_rx = 0; + unsigned int port_id; + struct rte_mbuf *pkt; + struct rte_event ev; + uint32_t lcore_id; + + /* Check if we have links registered for this lcore 
*/ + if (nb_links == 0) { + /* No links registered - exit */ + goto exit; + } + + /* We have valid links */ + + /* Get core ID */ + lcore_id = rte_lcore_id(); + + RTE_LOG(INFO, IPSEC, + "Launching event mode worker (non-burst - Tx internal port - " + "app mode - inbound) on lcore %d\n", lcore_id); + + /* Check if it's single link */ + if (nb_links != 1) { + RTE_LOG(INFO, IPSEC, + "Multiple links not supported. Using first link\n"); + } + + RTE_LOG(INFO, IPSEC, " -- lcoreid=%u event_port_id=%u\n", lcore_id, + links[0].event_port_id); + + while (!force_quit) { + /* Read packet from event queues */ + nb_rx = rte_event_dequeue_burst(links[0].eventdev_id, + links[0].event_port_id, + &ev, /* events */ + 1, /* nb_events */ + 0 /* timeout_ticks */); + + if (nb_rx == 0) + continue; + + port_id = ev.queue_id; + pkt = ev.mbuf; + + rte_prefetch0(rte_pktmbuf_mtod(pkt, void *)); + + /* Process packet */ + ipsec_event_pre_forward(pkt, port_id); + + /* + * Since tx internal port is available, events can be + * directly enqueued to the adapter and it would be + * internally submitted to the eth device. 
+ */ + rte_event_eth_tx_adapter_enqueue(links[0].eventdev_id, + links[0].event_port_id, + &ev, /* events */ + 1, /* nb_events */ + 0 /* flags */); + } + +exit: + return; +} + static uint8_t ipsec_eventmode_populate_wrkr_params(struct eh_app_worker_params *wrkrs) { @@ -142,6 +215,16 @@ ipsec_eventmode_populate_wrkr_params(struct eh_app_worker_params *wrkrs) wrkr->cap.ipsec_dir = EH_IPSEC_DIR_TYPE_INBOUND; wrkr->worker_thread = ipsec_wrkr_non_burst_int_port_drvr_mode_inb; + wrkr++; + nb_wrkr_param++; + + /* Non-burst - Tx internal port - app mode - inbound */ + wrkr->cap.burst = EH_RX_TYPE_NON_BURST; + wrkr->cap.tx_internal_port = EH_TX_TYPE_INTERNAL_PORT; + wrkr->cap.ipsec_mode = EH_IPSEC_MODE_TYPE_APP; + wrkr->cap.ipsec_dir = EH_IPSEC_DIR_TYPE_INBOUND; + wrkr->worker_thread = ipsec_wrkr_non_burst_int_port_app_mode_inb; + nb_wrkr_param++; return nb_wrkr_param; } -- 2.7.4 ^ permalink raw reply [flat|nested] 147+ messages in thread
* [dpdk-dev] [PATCH 11/14] examples/ipsec-secgw: add app processing code 2019-12-08 12:30 [dpdk-dev] [PATCH 00/14] add eventmode to ipsec-secgw Anoob Joseph ` (9 preceding siblings ...) 2019-12-08 12:30 ` [dpdk-dev] [PATCH 10/14] examples/ipsec-secgw: add app inbound worker Anoob Joseph @ 2019-12-08 12:30 ` Anoob Joseph 2019-12-23 16:49 ` Ananyev, Konstantin ` (2 more replies) 2019-12-08 12:30 ` [dpdk-dev] [PATCH 12/14] examples/ipsec-secgw: add driver outbound worker Anoob Joseph ` (3 subsequent siblings) 14 siblings, 3 replies; 147+ messages in thread From: Anoob Joseph @ 2019-12-08 12:30 UTC (permalink / raw) To: Akhil Goyal, Radu Nicolau, Thomas Monjalon Cc: Lukasz Bartosik, Jerin Jacob, Narayana Prasad, Ankur Dwivedi, Anoob Joseph, Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru, Konstantin Ananyev, dev From: Lukasz Bartosik <lbartosik@marvell.com> Add IPsec application processing code for event mode. Signed-off-by: Anoob Joseph <anoobj@marvell.com> Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> --- examples/ipsec-secgw/ipsec-secgw.c | 124 ++++++------------ examples/ipsec-secgw/ipsec-secgw.h | 81 ++++++++++++ examples/ipsec-secgw/ipsec.h | 37 +++--- examples/ipsec-secgw/ipsec_worker.c | 242 ++++++++++++++++++++++++++++++++++-- examples/ipsec-secgw/ipsec_worker.h | 39 ++++++ examples/ipsec-secgw/sa.c | 11 -- 6 files changed, 409 insertions(+), 125 deletions(-) create mode 100644 examples/ipsec-secgw/ipsec-secgw.h create mode 100644 examples/ipsec-secgw/ipsec_worker.h diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c index c5d95b9..2e7d4d8 100644 --- a/examples/ipsec-secgw/ipsec-secgw.c +++ b/examples/ipsec-secgw/ipsec-secgw.c @@ -50,12 +50,11 @@ #include "event_helper.h" #include "ipsec.h" +#include "ipsec_worker.h" #include "parser.h" volatile bool force_quit; -#define RTE_LOGTYPE_IPSEC RTE_LOGTYPE_USER1 - #define MAX_JUMBO_PKT_LEN 9600 #define MEMPOOL_CACHE_SIZE 256 @@ -70,8 +69,6 @@ volatile bool force_quit; 
#define BURST_TX_DRAIN_US 100 /* TX drain every ~100us */ -#define NB_SOCKETS 4 - /* Configure how many packets ahead to prefetch, when reading packets */ #define PREFETCH_OFFSET 3 @@ -79,8 +76,6 @@ volatile bool force_quit; #define MAX_LCORE_PARAMS 1024 -#define UNPROTECTED_PORT(port) (unprotected_port_mask & (1 << portid)) - /* * Configurable number of RX/TX ring descriptors */ @@ -89,29 +84,6 @@ volatile bool force_quit; static uint16_t nb_rxd = IPSEC_SECGW_RX_DESC_DEFAULT; static uint16_t nb_txd = IPSEC_SECGW_TX_DESC_DEFAULT; -#if RTE_BYTE_ORDER != RTE_LITTLE_ENDIAN -#define __BYTES_TO_UINT64(a, b, c, d, e, f, g, h) \ - (((uint64_t)((a) & 0xff) << 56) | \ - ((uint64_t)((b) & 0xff) << 48) | \ - ((uint64_t)((c) & 0xff) << 40) | \ - ((uint64_t)((d) & 0xff) << 32) | \ - ((uint64_t)((e) & 0xff) << 24) | \ - ((uint64_t)((f) & 0xff) << 16) | \ - ((uint64_t)((g) & 0xff) << 8) | \ - ((uint64_t)(h) & 0xff)) -#else -#define __BYTES_TO_UINT64(a, b, c, d, e, f, g, h) \ - (((uint64_t)((h) & 0xff) << 56) | \ - ((uint64_t)((g) & 0xff) << 48) | \ - ((uint64_t)((f) & 0xff) << 40) | \ - ((uint64_t)((e) & 0xff) << 32) | \ - ((uint64_t)((d) & 0xff) << 24) | \ - ((uint64_t)((c) & 0xff) << 16) | \ - ((uint64_t)((b) & 0xff) << 8) | \ - ((uint64_t)(a) & 0xff)) -#endif -#define ETHADDR(a, b, c, d, e, f) (__BYTES_TO_UINT64(a, b, c, d, e, f, 0, 0)) - #define ETHADDR_TO_UINT64(addr) __BYTES_TO_UINT64( \ (addr)->addr_bytes[0], (addr)->addr_bytes[1], \ (addr)->addr_bytes[2], (addr)->addr_bytes[3], \ @@ -123,18 +95,6 @@ static uint16_t nb_txd = IPSEC_SECGW_TX_DESC_DEFAULT; #define MTU_TO_FRAMELEN(x) ((x) + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN) -/* port/source ethernet addr and destination ethernet addr */ -struct ethaddr_info { - uint64_t src, dst; -}; - -struct ethaddr_info ethaddr_tbl[RTE_MAX_ETHPORTS] = { - { 0, ETHADDR(0x00, 0x16, 0x3e, 0x7e, 0x94, 0x9a) }, - { 0, ETHADDR(0x00, 0x16, 0x3e, 0x22, 0xa1, 0xd9) }, - { 0, ETHADDR(0x00, 0x16, 0x3e, 0x08, 0x69, 0x26) }, - { 0, ETHADDR(0x00, 
0x16, 0x3e, 0x49, 0x9e, 0xdd) } -}; - struct flow_info flow_info_tbl[RTE_MAX_ETHPORTS]; #define CMD_LINE_OPT_CONFIG "config" @@ -192,10 +152,16 @@ static const struct option lgopts[] = { {NULL, 0, 0, 0} }; +struct ethaddr_info ethaddr_tbl[RTE_MAX_ETHPORTS] = { + { 0, ETHADDR(0x00, 0x16, 0x3e, 0x7e, 0x94, 0x9a) }, + { 0, ETHADDR(0x00, 0x16, 0x3e, 0x22, 0xa1, 0xd9) }, + { 0, ETHADDR(0x00, 0x16, 0x3e, 0x08, 0x69, 0x26) }, + { 0, ETHADDR(0x00, 0x16, 0x3e, 0x49, 0x9e, 0xdd) } +}; + /* mask of enabled ports */ static uint32_t enabled_port_mask; static uint64_t enabled_cryptodev_mask = UINT64_MAX; -static uint32_t unprotected_port_mask; static int32_t promiscuous_on = 1; static int32_t numa_on = 1; /**< NUMA is enabled by default. */ static uint32_t nb_lcores; @@ -283,8 +249,6 @@ static struct rte_eth_conf port_conf = { }, }; -static struct socket_ctx socket_ctx[NB_SOCKETS]; - /* * Determine is multi-segment support required: * - either frame buffer size is smaller then mtu @@ -2828,47 +2792,10 @@ main(int32_t argc, char **argv) sa_check_offloads(portid, &req_rx_offloads, &req_tx_offloads); port_init(portid, req_rx_offloads, req_tx_offloads); - /* Create default ipsec flow for the ethernet device */ - ret = create_default_ipsec_flow(portid, req_rx_offloads); - if (ret) - printf("Cannot create default flow, err=%d, port=%d\n", - ret, portid); } cryptodevs_init(); - /* start ports */ - RTE_ETH_FOREACH_DEV(portid) { - if ((enabled_port_mask & (1 << portid)) == 0) - continue; - - /* - * Start device - * note: device must be started before a flow rule - * can be installed. - */ - ret = rte_eth_dev_start(portid); - if (ret < 0) - rte_exit(EXIT_FAILURE, "rte_eth_dev_start: " - "err=%d, port=%d\n", ret, portid); - /* - * If enabled, put device in promiscuous mode. - * This allows IO forwarding mode to forward packets - * to itself through 2 cross-connected ports of the - * target machine. 
- */ - if (promiscuous_on) { - ret = rte_eth_promiscuous_enable(portid); - if (ret != 0) - rte_exit(EXIT_FAILURE, - "rte_eth_promiscuous_enable: err=%s, port=%d\n", - rte_strerror(-ret), portid); - } - - rte_eth_dev_callback_register(portid, - RTE_ETH_EVENT_IPSEC, inline_ipsec_event_callback, NULL); - } - /* fragment reassemble is enabled */ if (frag_tbl_sz != 0) { ret = reassemble_init(); @@ -2889,8 +2816,6 @@ main(int32_t argc, char **argv) } } - check_all_ports_link_status(enabled_port_mask); - /* * Set the enabled port mask in helper config for use by helper * sub-system. This will be used while intializing devices using @@ -2903,6 +2828,39 @@ main(int32_t argc, char **argv) if (ret < 0) rte_exit(EXIT_FAILURE, "eh_devs_init failed, err=%d\n", ret); + /* Create default ipsec flow for each port and start each port */ + RTE_ETH_FOREACH_DEV(portid) { + if ((enabled_port_mask & (1 << portid)) == 0) + continue; + + ret = create_default_ipsec_flow(portid, req_rx_offloads); + if (ret) + printf("create_default_ipsec_flow failed, err=%d, " + "port=%d\n", ret, portid); + /* + * Start device + * note: device must be started before a flow rule + * can be installed. + */ + ret = rte_eth_dev_start(portid); + if (ret < 0) + rte_exit(EXIT_FAILURE, "rte_eth_dev_start: " + "err=%d, port=%d\n", ret, portid); + /* + * If enabled, put device in promiscuous mode. + * This allows IO forwarding mode to forward packets + * to itself through 2 cross-connected ports of the + * target machine. 
+ */ + if (promiscuous_on) + rte_eth_promiscuous_enable(portid); + + rte_eth_dev_callback_register(portid, + RTE_ETH_EVENT_IPSEC, inline_ipsec_event_callback, NULL); + } + + check_all_ports_link_status(enabled_port_mask); + /* launch per-lcore init on every lcore */ rte_eal_mp_remote_launch(ipsec_launch_one_lcore, eh_conf, CALL_MASTER); diff --git a/examples/ipsec-secgw/ipsec-secgw.h b/examples/ipsec-secgw/ipsec-secgw.h new file mode 100644 index 0000000..67e1193 --- /dev/null +++ b/examples/ipsec-secgw/ipsec-secgw.h @@ -0,0 +1,81 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2018 Cavium, Inc + */ +#ifndef _IPSEC_SECGW_H_ +#define _IPSEC_SECGW_H_ + +#include <rte_hash.h> + +#define MAX_PKT_BURST 32 + +#define RTE_LOGTYPE_IPSEC RTE_LOGTYPE_USER1 + +#define NB_SOCKETS 4 + +#define UNPROTECTED_PORT(port) (unprotected_port_mask & (1 << portid)) + +#if RTE_BYTE_ORDER != RTE_LITTLE_ENDIAN +#define __BYTES_TO_UINT64(a, b, c, d, e, f, g, h) \ + (((uint64_t)((a) & 0xff) << 56) | \ + ((uint64_t)((b) & 0xff) << 48) | \ + ((uint64_t)((c) & 0xff) << 40) | \ + ((uint64_t)((d) & 0xff) << 32) | \ + ((uint64_t)((e) & 0xff) << 24) | \ + ((uint64_t)((f) & 0xff) << 16) | \ + ((uint64_t)((g) & 0xff) << 8) | \ + ((uint64_t)(h) & 0xff)) +#else +#define __BYTES_TO_UINT64(a, b, c, d, e, f, g, h) \ + (((uint64_t)((h) & 0xff) << 56) | \ + ((uint64_t)((g) & 0xff) << 48) | \ + ((uint64_t)((f) & 0xff) << 40) | \ + ((uint64_t)((e) & 0xff) << 32) | \ + ((uint64_t)((d) & 0xff) << 24) | \ + ((uint64_t)((c) & 0xff) << 16) | \ + ((uint64_t)((b) & 0xff) << 8) | \ + ((uint64_t)(a) & 0xff)) +#endif + +#define ETHADDR(a, b, c, d, e, f) (__BYTES_TO_UINT64(a, b, c, d, e, f, 0, 0)) + +struct traffic_type { + const uint8_t *data[MAX_PKT_BURST * 2]; + struct rte_mbuf *pkts[MAX_PKT_BURST * 2]; + void *saptr[MAX_PKT_BURST * 2]; + uint32_t res[MAX_PKT_BURST * 2]; + uint32_t num; +}; + +struct ipsec_traffic { + struct traffic_type ipsec; + struct traffic_type ip4; + struct traffic_type ip6; +}; + 
+/* Fields optimized for devices without burst */ +struct traffic_type_nb { + const uint8_t *data; + struct rte_mbuf *pkt; + uint32_t res; + uint32_t num; +}; + +struct ipsec_traffic_nb { + struct traffic_type_nb ipsec; + struct traffic_type_nb ip4; + struct traffic_type_nb ip6; +}; + +/* port/source ethernet addr and destination ethernet addr */ +struct ethaddr_info { + uint64_t src, dst; +}; + +struct ethaddr_info ethaddr_tbl[RTE_MAX_ETHPORTS]; + +/* TODO: All var definitions need to be part of a .c file */ + +/* Port mask to identify the unprotected ports */ +uint32_t unprotected_port_mask; + +#endif /* _IPSEC_SECGW_H_ */ diff --git a/examples/ipsec-secgw/ipsec.h b/examples/ipsec-secgw/ipsec.h index 0b9fc04..0c5ee8a 100644 --- a/examples/ipsec-secgw/ipsec.h +++ b/examples/ipsec-secgw/ipsec.h @@ -13,11 +13,11 @@ #include <rte_flow.h> #include <rte_ipsec.h> -#define RTE_LOGTYPE_IPSEC RTE_LOGTYPE_USER1 +#include "ipsec-secgw.h" + #define RTE_LOGTYPE_IPSEC_ESP RTE_LOGTYPE_USER2 #define RTE_LOGTYPE_IPSEC_IPIP RTE_LOGTYPE_USER3 -#define MAX_PKT_BURST 32 #define MAX_INFLIGHT 128 #define MAX_QP_PER_LCORE 256 @@ -153,6 +153,17 @@ struct ipsec_sa { struct rte_security_session_conf sess_conf; } __rte_cache_aligned; +struct sa_ctx { + void *satbl; /* pointer to array of rte_ipsec_sa objects*/ + struct ipsec_sa sa[IPSEC_SA_MAX_ENTRIES]; + union { + struct { + struct rte_crypto_sym_xform a; + struct rte_crypto_sym_xform b; + }; + } xf[IPSEC_SA_MAX_ENTRIES]; +}; + struct ipsec_mbuf_metadata { struct ipsec_sa *sa; struct rte_crypto_op cop; @@ -233,26 +244,8 @@ struct cnt_blk { uint32_t cnt; } __attribute__((packed)); -struct traffic_type { - const uint8_t *data[MAX_PKT_BURST * 2]; - struct rte_mbuf *pkts[MAX_PKT_BURST * 2]; - void *saptr[MAX_PKT_BURST * 2]; - uint32_t res[MAX_PKT_BURST * 2]; - uint32_t num; -}; - -struct ipsec_traffic { - struct traffic_type ipsec; - struct traffic_type ip4; - struct traffic_type ip6; -}; - - -void -ipsec_poll_mode_worker(void); - -int 
-ipsec_launch_one_lcore(void *args); +/* Socket ctx */ +struct socket_ctx socket_ctx[NB_SOCKETS]; uint16_t ipsec_inbound(struct ipsec_ctx *ctx, struct rte_mbuf *pkts[], diff --git a/examples/ipsec-secgw/ipsec_worker.c b/examples/ipsec-secgw/ipsec_worker.c index fce274a..2af9475 100644 --- a/examples/ipsec-secgw/ipsec_worker.c +++ b/examples/ipsec-secgw/ipsec_worker.c @@ -15,6 +15,7 @@ #include <ctype.h> #include <stdbool.h> +#include <rte_acl.h> #include <rte_common.h> #include <rte_log.h> #include <rte_memcpy.h> @@ -29,12 +30,51 @@ #include <rte_eventdev.h> #include <rte_malloc.h> #include <rte_mbuf.h> +#include <rte_lpm.h> +#include <rte_lpm6.h> #include "ipsec.h" +#include "ipsec_worker.h" #include "event_helper.h" extern volatile bool force_quit; +static inline enum pkt_type +process_ipsec_get_pkt_type(struct rte_mbuf *pkt, uint8_t **nlp) +{ + struct rte_ether_hdr *eth; + + eth = rte_pktmbuf_mtod(pkt, struct rte_ether_hdr *); + if (eth->ether_type == rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4)) { + *nlp = RTE_PTR_ADD(eth, RTE_ETHER_HDR_LEN + + offsetof(struct ip, ip_p)); + if (**nlp == IPPROTO_ESP) + return PKT_TYPE_IPSEC_IPV4; + else + return PKT_TYPE_PLAIN_IPV4; + } else if (eth->ether_type == rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6)) { + *nlp = RTE_PTR_ADD(eth, RTE_ETHER_HDR_LEN + + offsetof(struct ip6_hdr, ip6_nxt)); + if (**nlp == IPPROTO_ESP) + return PKT_TYPE_IPSEC_IPV6; + else + return PKT_TYPE_PLAIN_IPV6; + } + + /* Unknown/Unsupported type */ + return PKT_TYPE_INVALID; +} + +static inline void +update_mac_addrs(struct rte_mbuf *pkt, uint16_t portid) +{ + struct rte_ether_hdr *ethhdr; + + ethhdr = rte_pktmbuf_mtod(pkt, struct rte_ether_hdr *); + memcpy(&ethhdr->s_addr, &ethaddr_tbl[portid].src, RTE_ETHER_ADDR_LEN); + memcpy(&ethhdr->d_addr, &ethaddr_tbl[portid].dst, RTE_ETHER_ADDR_LEN); +} + static inline void ipsec_event_pre_forward(struct rte_mbuf *m, unsigned int port_id) { @@ -45,6 +85,177 @@ ipsec_event_pre_forward(struct rte_mbuf *m, unsigned int port_id)
rte_event_eth_tx_adapter_txq_set(m, 0); } +static inline int +check_sp(struct sp_ctx *sp, const uint8_t *nlp, uint32_t *sa_idx) +{ + uint32_t res; + + if (unlikely(sp == NULL)) + return 0; + + rte_acl_classify((struct rte_acl_ctx *)sp, &nlp, &res, 1, + DEFAULT_MAX_CATEGORIES); + + if (unlikely(res == 0)) { + /* No match */ + return 0; + } + + if (res == DISCARD) + return 0; + else if (res == BYPASS) { + *sa_idx = 0; + return 1; + } + + *sa_idx = SPI2IDX(res); + if (*sa_idx < IPSEC_SA_MAX_ENTRIES) + return 1; + + /* Invalid SA IDX */ + return 0; +} + +static inline uint16_t +route4_pkt(struct rte_mbuf *pkt, struct rt_ctx *rt_ctx) +{ + uint32_t dst_ip; + uint16_t offset; + uint32_t hop; + int ret; + + offset = RTE_ETHER_HDR_LEN + offsetof(struct ip, ip_dst); + dst_ip = *rte_pktmbuf_mtod_offset(pkt, uint32_t *, offset); + dst_ip = rte_be_to_cpu_32(dst_ip); + + ret = rte_lpm_lookup((struct rte_lpm *)rt_ctx, dst_ip, &hop); + + if (ret == 0) { + /* We have a hit */ + return hop; + } + + /* else */ + return RTE_MAX_ETHPORTS; +} + +/* TODO: To be tested */ +static inline uint16_t +route6_pkt(struct rte_mbuf *pkt, struct rt_ctx *rt_ctx) +{ + uint8_t dst_ip[16]; + uint8_t *ip6_dst; + uint16_t offset; + uint32_t hop; + int ret; + + offset = RTE_ETHER_HDR_LEN + offsetof(struct ip6_hdr, ip6_dst); + ip6_dst = rte_pktmbuf_mtod_offset(pkt, uint8_t *, offset); + memcpy(&dst_ip[0], ip6_dst, 16); + + ret = rte_lpm6_lookup((struct rte_lpm6 *)rt_ctx, dst_ip, &hop); + + if (ret == 0) { + /* We have a hit */ + return hop; + } + + /* else */ + return RTE_MAX_ETHPORTS; +} + +static inline uint16_t +get_route(struct rte_mbuf *pkt, struct route_table *rt, enum pkt_type type) +{ + if (type == PKT_TYPE_PLAIN_IPV4 || type == PKT_TYPE_IPSEC_IPV4) + return route4_pkt(pkt, rt->rt4_ctx); + else if (type == PKT_TYPE_PLAIN_IPV6 || type == PKT_TYPE_IPSEC_IPV6) + return route6_pkt(pkt, rt->rt6_ctx); + + return RTE_MAX_ETHPORTS; +} + +static inline int +process_ipsec_ev_inbound(struct ipsec_ctx *ctx, 
struct route_table *rt, + struct rte_event *ev) +{ + struct ipsec_sa *sa = NULL; + struct rte_mbuf *pkt; + uint16_t port_id = 0; + enum pkt_type type; + uint32_t sa_idx; + uint8_t *nlp; + + /* Get pkt from event */ + pkt = ev->mbuf; + + /* Check the packet type */ + type = process_ipsec_get_pkt_type(pkt, &nlp); + + switch (type) { + case PKT_TYPE_PLAIN_IPV4: + if (pkt->ol_flags & PKT_RX_SEC_OFFLOAD) + sa = (struct ipsec_sa *) pkt->udata64; + + /* Check if we have a match */ + if (check_sp(ctx->sp4_ctx, nlp, &sa_idx) == 0) { + /* No valid match */ + goto drop_pkt_and_exit; + } + break; + + case PKT_TYPE_PLAIN_IPV6: + if (pkt->ol_flags & PKT_RX_SEC_OFFLOAD) + sa = (struct ipsec_sa *) pkt->udata64; + + /* Check if we have a match */ + if (check_sp(ctx->sp6_ctx, nlp, &sa_idx) == 0) { + /* No valid match */ + goto drop_pkt_and_exit; + } + break; + + default: + RTE_LOG(ERR, IPSEC, "Unsupported packet type = %d\n", type); + goto drop_pkt_and_exit; + } + + /* Check if the packet has to be bypassed */ + if (sa_idx == 0) + goto route_and_send_pkt; + + /* Else the packet has to be protected with SA */ + + /* If the packet was IPsec processed, then SA pointer should be set */ + if (sa == NULL) + goto drop_pkt_and_exit; + + /* SPI on the packet should match with the one in SA */ + if (unlikely(sa->spi != sa_idx)) + goto drop_pkt_and_exit; + +route_and_send_pkt: + port_id = get_route(pkt, rt, type); + if (unlikely(port_id == RTE_MAX_ETHPORTS)) { + /* no match */ + goto drop_pkt_and_exit; + } + /* else, we have a matching route */ + + /* Update mac addresses */ + update_mac_addrs(pkt, port_id); + + /* Update the event with the dest port */ + ipsec_event_pre_forward(pkt, port_id); + return 1; + +drop_pkt_and_exit: + RTE_LOG(ERR, IPSEC, "Inbound packet dropped\n"); + rte_pktmbuf_free(pkt); + ev->mbuf = NULL; + return 0; +} + /* * Event mode exposes various operating modes depending on the * capabilities of the event device and the operating mode @@ -134,11 +345,11 @@ static void 
ipsec_wrkr_non_burst_int_port_app_mode_inb(struct eh_event_link_info *links, uint8_t nb_links) { + struct lcore_conf_ev_tx_int_port_wrkr lconf; unsigned int nb_rx = 0; - unsigned int port_id; - struct rte_mbuf *pkt; struct rte_event ev; uint32_t lcore_id; + int32_t socket_id; /* Check if we have links registered for this lcore */ if (nb_links == 0) { @@ -151,6 +362,21 @@ ipsec_wrkr_non_burst_int_port_app_mode_inb(struct eh_event_link_info *links, /* Get core ID */ lcore_id = rte_lcore_id(); + /* Get socket ID */ + socket_id = rte_lcore_to_socket_id(lcore_id); + + /* Save routing table */ + lconf.rt.rt4_ctx = socket_ctx[socket_id].rt_ip4; + lconf.rt.rt6_ctx = socket_ctx[socket_id].rt_ip6; + lconf.inbound.sp4_ctx = socket_ctx[socket_id].sp_ip4_in; + lconf.inbound.sp6_ctx = socket_ctx[socket_id].sp_ip6_in; + lconf.inbound.sa_ctx = socket_ctx[socket_id].sa_in; + lconf.inbound.session_pool = socket_ctx[socket_id].session_pool; + lconf.outbound.sp4_ctx = socket_ctx[socket_id].sp_ip4_out; + lconf.outbound.sp6_ctx = socket_ctx[socket_id].sp_ip6_out; + lconf.outbound.sa_ctx = socket_ctx[socket_id].sa_out; + lconf.outbound.session_pool = socket_ctx[socket_id].session_pool; + RTE_LOG(INFO, IPSEC, "Launching event mode worker (non-burst - Tx internal port - " "app mode - inbound) on lcore %d\n", lcore_id); @@ -175,13 +401,11 @@ ipsec_wrkr_non_burst_int_port_app_mode_inb(struct eh_event_link_info *links, if (nb_rx == 0) continue; - port_id = ev.queue_id; - pkt = ev.mbuf; - - rte_prefetch0(rte_pktmbuf_mtod(pkt, void *)); - - /* Process packet */ - ipsec_event_pre_forward(pkt, port_id); + if (process_ipsec_ev_inbound(&lconf.inbound, + &lconf.rt, &ev) != 1) { + /* The pkt has been dropped */ + continue; + } /* * Since tx internal port is available, events can be diff --git a/examples/ipsec-secgw/ipsec_worker.h b/examples/ipsec-secgw/ipsec_worker.h new file mode 100644 index 0000000..fd18a2e --- /dev/null +++ b/examples/ipsec-secgw/ipsec_worker.h @@ -0,0 +1,39 @@ +/* 
SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2018 Cavium, Inc + */ +#ifndef _IPSEC_WORKER_H_ +#define _IPSEC_WORKER_H_ + +#include "ipsec.h" + +enum pkt_type { + PKT_TYPE_PLAIN_IPV4 = 1, + PKT_TYPE_IPSEC_IPV4, + PKT_TYPE_PLAIN_IPV6, + PKT_TYPE_IPSEC_IPV6, + PKT_TYPE_INVALID +}; + +struct route_table { + struct rt_ctx *rt4_ctx; + struct rt_ctx *rt6_ctx; +}; + +/* + * Conf required by event mode worker with tx internal port + */ +struct lcore_conf_ev_tx_int_port_wrkr { + struct ipsec_ctx inbound; + struct ipsec_ctx outbound; + struct route_table rt; +} __rte_cache_aligned; + +/* TODO + * + * Move this function to ipsec_worker.c + */ +void ipsec_poll_mode_worker(void); + +int ipsec_launch_one_lcore(void *args); + +#endif /* _IPSEC_WORKER_H_ */ diff --git a/examples/ipsec-secgw/sa.c b/examples/ipsec-secgw/sa.c index 7f046e3..9e17ba0 100644 --- a/examples/ipsec-secgw/sa.c +++ b/examples/ipsec-secgw/sa.c @@ -772,17 +772,6 @@ print_one_sa_rule(const struct ipsec_sa *sa, int inbound) printf("\n"); } -struct sa_ctx { - void *satbl; /* pointer to array of rte_ipsec_sa objects*/ - struct ipsec_sa sa[IPSEC_SA_MAX_ENTRIES]; - union { - struct { - struct rte_crypto_sym_xform a; - struct rte_crypto_sym_xform b; - }; - } xf[IPSEC_SA_MAX_ENTRIES]; -}; - static struct sa_ctx * sa_create(const char *name, int32_t socket_id) { -- 2.7.4 ^ permalink raw reply [flat|nested] 147+ messages in thread
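The new process_ipsec_get_pkt_type() in ipsec_worker.c classifies each packet by its EtherType and by the IP next-protocol byte (ip_p for IPv4, ip6_nxt for IPv6). The decision logic, detached from mbuf handling, can be sketched self-contained — note the real code compares against big-endian values via rte_cpu_to_be_16(), while this sketch uses host-order EtherType constants for simplicity:

```c
#include <assert.h>
#include <stdint.h>

enum pkt_type {
	PKT_TYPE_PLAIN_IPV4 = 1,
	PKT_TYPE_IPSEC_IPV4,
	PKT_TYPE_PLAIN_IPV6,
	PKT_TYPE_IPSEC_IPV6,
	PKT_TYPE_INVALID
};

#define ETHER_TYPE_IPV4 0x0800
#define ETHER_TYPE_IPV6 0x86DD
#define PROTO_ESP 50 /* IANA protocol number for ESP (IPPROTO_ESP) */

/* Classify from the host-order EtherType and the next-protocol byte */
static enum pkt_type classify(uint16_t ether_type, uint8_t next_proto)
{
	if (ether_type == ETHER_TYPE_IPV4)
		return next_proto == PROTO_ESP ?
			PKT_TYPE_IPSEC_IPV4 : PKT_TYPE_PLAIN_IPV4;
	if (ether_type == ETHER_TYPE_IPV6)
		return next_proto == PROTO_ESP ?
			PKT_TYPE_IPSEC_IPV6 : PKT_TYPE_PLAIN_IPV6;
	/* Unknown/unsupported type */
	return PKT_TYPE_INVALID;
}
```

The worker then branches on this type: plain packets go through SP lookup, ESP packets carry the SA pointer set by inline processing.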
* Re: [dpdk-dev] [PATCH 11/14] examples/ipsec-secgw: add app processing code 2019-12-08 12:30 ` [dpdk-dev] [PATCH 11/14] examples/ipsec-secgw: add app processing code Anoob Joseph @ 2019-12-23 16:49 ` Ananyev, Konstantin 2020-01-10 14:28 ` [dpdk-dev] [EXT] " Lukas Bartosik 2019-12-24 13:13 ` [dpdk-dev] " Ananyev, Konstantin 2019-12-25 15:18 ` [dpdk-dev] " Ananyev, Konstantin 2 siblings, 1 reply; 147+ messages in thread From: Ananyev, Konstantin @ 2019-12-23 16:49 UTC (permalink / raw) To: Anoob Joseph, Akhil Goyal, Nicolau, Radu, Thomas Monjalon Cc: Lukasz Bartosik, Jerin Jacob, Narayana Prasad, Ankur Dwivedi, Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru, dev > > Add IPsec application processing code for event mode. > > Signed-off-by: Anoob Joseph <anoobj@marvell.com> > Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> > --- > examples/ipsec-secgw/ipsec-secgw.c | 124 ++++++------------ > examples/ipsec-secgw/ipsec-secgw.h | 81 ++++++++++++ > examples/ipsec-secgw/ipsec.h | 37 +++--- > examples/ipsec-secgw/ipsec_worker.c | 242 ++++++++++++++++++++++++++++++++++-- > examples/ipsec-secgw/ipsec_worker.h | 39 ++++++ > examples/ipsec-secgw/sa.c | 11 -- > 6 files changed, 409 insertions(+), 125 deletions(-) > create mode 100644 examples/ipsec-secgw/ipsec-secgw.h > create mode 100644 examples/ipsec-secgw/ipsec_worker.h > > diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c > index c5d95b9..2e7d4d8 100644 > --- a/examples/ipsec-secgw/ipsec-secgw.c > +++ b/examples/ipsec-secgw/ipsec-secgw.c > @@ -50,12 +50,11 @@ > > #include "event_helper.h" > #include "ipsec.h" > +#include "ipsec_worker.h" > #include "parser.h" > > volatile bool force_quit; > > -#define RTE_LOGTYPE_IPSEC RTE_LOGTYPE_USER1 > - > #define MAX_JUMBO_PKT_LEN 9600 > > #define MEMPOOL_CACHE_SIZE 256 > @@ -70,8 +69,6 @@ volatile bool force_quit; > > #define BURST_TX_DRAIN_US 100 /* TX drain every ~100us */ > > -#define NB_SOCKETS 4 > - > /* Configure how many packets 
ahead to prefetch, when reading packets */ > #define PREFETCH_OFFSET 3 > > @@ -79,8 +76,6 @@ volatile bool force_quit; > > #define MAX_LCORE_PARAMS 1024 > > -#define UNPROTECTED_PORT(port) (unprotected_port_mask & (1 << portid)) > - > /* > * Configurable number of RX/TX ring descriptors > */ > @@ -89,29 +84,6 @@ volatile bool force_quit; > static uint16_t nb_rxd = IPSEC_SECGW_RX_DESC_DEFAULT; > static uint16_t nb_txd = IPSEC_SECGW_TX_DESC_DEFAULT; > > -#if RTE_BYTE_ORDER != RTE_LITTLE_ENDIAN > -#define __BYTES_TO_UINT64(a, b, c, d, e, f, g, h) \ > - (((uint64_t)((a) & 0xff) << 56) | \ > - ((uint64_t)((b) & 0xff) << 48) | \ > - ((uint64_t)((c) & 0xff) << 40) | \ > - ((uint64_t)((d) & 0xff) << 32) | \ > - ((uint64_t)((e) & 0xff) << 24) | \ > - ((uint64_t)((f) & 0xff) << 16) | \ > - ((uint64_t)((g) & 0xff) << 8) | \ > - ((uint64_t)(h) & 0xff)) > -#else > -#define __BYTES_TO_UINT64(a, b, c, d, e, f, g, h) \ > - (((uint64_t)((h) & 0xff) << 56) | \ > - ((uint64_t)((g) & 0xff) << 48) | \ > - ((uint64_t)((f) & 0xff) << 40) | \ > - ((uint64_t)((e) & 0xff) << 32) | \ > - ((uint64_t)((d) & 0xff) << 24) | \ > - ((uint64_t)((c) & 0xff) << 16) | \ > - ((uint64_t)((b) & 0xff) << 8) | \ > - ((uint64_t)(a) & 0xff)) > -#endif > -#define ETHADDR(a, b, c, d, e, f) (__BYTES_TO_UINT64(a, b, c, d, e, f, 0, 0)) > - > #define ETHADDR_TO_UINT64(addr) __BYTES_TO_UINT64( \ > (addr)->addr_bytes[0], (addr)->addr_bytes[1], \ > (addr)->addr_bytes[2], (addr)->addr_bytes[3], \ > @@ -123,18 +95,6 @@ static uint16_t nb_txd = IPSEC_SECGW_TX_DESC_DEFAULT; > > #define MTU_TO_FRAMELEN(x) ((x) + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN) > > -/* port/source ethernet addr and destination ethernet addr */ > -struct ethaddr_info { > - uint64_t src, dst; > -}; > - > -struct ethaddr_info ethaddr_tbl[RTE_MAX_ETHPORTS] = { > - { 0, ETHADDR(0x00, 0x16, 0x3e, 0x7e, 0x94, 0x9a) }, > - { 0, ETHADDR(0x00, 0x16, 0x3e, 0x22, 0xa1, 0xd9) }, > - { 0, ETHADDR(0x00, 0x16, 0x3e, 0x08, 0x69, 0x26) }, > - { 0, ETHADDR(0x00, 0x16, 
0x3e, 0x49, 0x9e, 0xdd) } > -}; > - > struct flow_info flow_info_tbl[RTE_MAX_ETHPORTS]; > > #define CMD_LINE_OPT_CONFIG "config" > @@ -192,10 +152,16 @@ static const struct option lgopts[] = { > {NULL, 0, 0, 0} > }; > > +struct ethaddr_info ethaddr_tbl[RTE_MAX_ETHPORTS] = { > + { 0, ETHADDR(0x00, 0x16, 0x3e, 0x7e, 0x94, 0x9a) }, > + { 0, ETHADDR(0x00, 0x16, 0x3e, 0x22, 0xa1, 0xd9) }, > + { 0, ETHADDR(0x00, 0x16, 0x3e, 0x08, 0x69, 0x26) }, > + { 0, ETHADDR(0x00, 0x16, 0x3e, 0x49, 0x9e, 0xdd) } > +}; > + > /* mask of enabled ports */ > static uint32_t enabled_port_mask; > static uint64_t enabled_cryptodev_mask = UINT64_MAX; > -static uint32_t unprotected_port_mask; > static int32_t promiscuous_on = 1; > static int32_t numa_on = 1; /**< NUMA is enabled by default. */ > static uint32_t nb_lcores; > @@ -283,8 +249,6 @@ static struct rte_eth_conf port_conf = { > }, > }; > > -static struct socket_ctx socket_ctx[NB_SOCKETS]; > - > /* > * Determine is multi-segment support required: > * - either frame buffer size is smaller then mtu > @@ -2828,47 +2792,10 @@ main(int32_t argc, char **argv) > > sa_check_offloads(portid, &req_rx_offloads, &req_tx_offloads); > port_init(portid, req_rx_offloads, req_tx_offloads); > - /* Create default ipsec flow for the ethernet device */ > - ret = create_default_ipsec_flow(portid, req_rx_offloads); > - if (ret) > - printf("Cannot create default flow, err=%d, port=%d\n", > - ret, portid); > } > > cryptodevs_init(); > > - /* start ports */ > - RTE_ETH_FOREACH_DEV(portid) { > - if ((enabled_port_mask & (1 << portid)) == 0) > - continue; > - > - /* > - * Start device > - * note: device must be started before a flow rule > - * can be installed. > - */ > - ret = rte_eth_dev_start(portid); > - if (ret < 0) > - rte_exit(EXIT_FAILURE, "rte_eth_dev_start: " > - "err=%d, port=%d\n", ret, portid); > - /* > - * If enabled, put device in promiscuous mode. 
> - * This allows IO forwarding mode to forward packets > - * to itself through 2 cross-connected ports of the > - * target machine. > - */ > - if (promiscuous_on) { > - ret = rte_eth_promiscuous_enable(portid); > - if (ret != 0) > - rte_exit(EXIT_FAILURE, > - "rte_eth_promiscuous_enable: err=%s, port=%d\n", > - rte_strerror(-ret), portid); > - } > - > - rte_eth_dev_callback_register(portid, > - RTE_ETH_EVENT_IPSEC, inline_ipsec_event_callback, NULL); > - } > - > /* fragment reassemble is enabled */ > if (frag_tbl_sz != 0) { > ret = reassemble_init(); > @@ -2889,8 +2816,6 @@ main(int32_t argc, char **argv) > } > } > > - check_all_ports_link_status(enabled_port_mask); > - > /* > * Set the enabled port mask in helper config for use by helper > * sub-system. This will be used while intializing devices using > @@ -2903,6 +2828,39 @@ main(int32_t argc, char **argv) > if (ret < 0) > rte_exit(EXIT_FAILURE, "eh_devs_init failed, err=%d\n", ret); > > + /* Create default ipsec flow for each port and start each port */ > + RTE_ETH_FOREACH_DEV(portid) { > + if ((enabled_port_mask & (1 << portid)) == 0) > + continue; > + > + ret = create_default_ipsec_flow(portid, req_rx_offloads); That doesn't look right. For more than one eth port in the system, req_rx_offloads will be overwritten by that moment. > + if (ret) > + printf("create_default_ipsec_flow failed, err=%d, " > + "port=%d\n", ret, portid); > + /* > + * Start device > + * note: device must be started before a flow rule > + * can be installed. > + */ > + ret = rte_eth_dev_start(portid); Moving that piece of code (dev_start) after sa_init() breaks ixgbe inline-crypto support. As I understand, because configured ipsec flows don't persist dev_start(). At least for ixgbe PMD. Any reason why to move that code at all? > + if (ret < 0) > + rte_exit(EXIT_FAILURE, "rte_eth_dev_start: " > + "err=%d, port=%d\n", ret, portid); > + /* > + * If enabled, put device in promiscuous mode. 
> + * This allows IO forwarding mode to forward packets > + * to itself through 2 cross-connected ports of the > + * target machine. > + */ > + if (promiscuous_on) > + rte_eth_promiscuous_enable(portid); > + > + rte_eth_dev_callback_register(portid, > + RTE_ETH_EVENT_IPSEC, inline_ipsec_event_callback, NULL); > + } > + > + check_all_ports_link_status(enabled_port_mask); > + > /* launch per-lcore init on every lcore */ > rte_eal_mp_remote_launch(ipsec_launch_one_lcore, eh_conf, CALL_MASTER); > ^ permalink raw reply [flat|nested] 147+ messages in thread
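On the req_rx_offloads point above: the first loop computes the offloads per port via sa_check_offloads(), but the result lives in a single local variable, so by the time the second loop installs the default flows only the last port's value remains. A minimal sketch of one possible fix — recording the value per port; the helper names here are hypothetical, not part of the patch:

```c
#include <assert.h>
#include <stdint.h>

#define MAX_PORTS 32 /* stand-in for RTE_MAX_ETHPORTS */

/* Per-port record of the rx offloads computed in the first loop,
 * so the second loop can retrieve each port's own value. */
static uint64_t req_rx_offloads_tbl[MAX_PORTS];

static void remember_rx_offloads(uint16_t portid, uint64_t offloads)
{
	req_rx_offloads_tbl[portid] = offloads;
}

static uint64_t recall_rx_offloads(uint16_t portid)
{
	return req_rx_offloads_tbl[portid];
}
```

With this shape, the flow-creation loop asks for the offloads of the port it is currently handling instead of reusing a stale scalar.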
* Re: [dpdk-dev] [EXT] RE: [PATCH 11/14] examples/ipsec-secgw: add app processing code 2019-12-23 16:49 ` Ananyev, Konstantin @ 2020-01-10 14:28 ` Lukas Bartosik 0 siblings, 0 replies; 147+ messages in thread From: Lukas Bartosik @ 2020-01-10 14:28 UTC (permalink / raw) To: Ananyev, Konstantin, Anoob Joseph, Akhil Goyal, Nicolau, Radu, Thomas Monjalon Cc: Jerin Jacob Kollanukkaran, Narayana Prasad Raju Athreya, Ankur Dwivedi, Archana Muniganti, Tejasree Kondoj, Vamsi Krishna Attunuru, dev Hi Konstantin, Please see inline. Thanks, Lukasz On 23.12.2019 17:49, Ananyev, Konstantin wrote: > External Email > > ---------------------------------------------------------------------- > > >> >> Add IPsec application processing code for event mode. >> >> Signed-off-by: Anoob Joseph <anoobj@marvell.com> >> Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> >> --- >> examples/ipsec-secgw/ipsec-secgw.c | 124 ++++++------------ >> examples/ipsec-secgw/ipsec-secgw.h | 81 ++++++++++++ >> examples/ipsec-secgw/ipsec.h | 37 +++--- >> examples/ipsec-secgw/ipsec_worker.c | 242 ++++++++++++++++++++++++++++++++++-- >> examples/ipsec-secgw/ipsec_worker.h | 39 ++++++ >> examples/ipsec-secgw/sa.c | 11 -- >> 6 files changed, 409 insertions(+), 125 deletions(-) >> create mode 100644 examples/ipsec-secgw/ipsec-secgw.h >> create mode 100644 examples/ipsec-secgw/ipsec_worker.h >> >> diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c >> index c5d95b9..2e7d4d8 100644 >> --- a/examples/ipsec-secgw/ipsec-secgw.c >> +++ b/examples/ipsec-secgw/ipsec-secgw.c >> @@ -50,12 +50,11 @@ >> >> #include "event_helper.h" >> #include "ipsec.h" >> +#include "ipsec_worker.h" >> #include "parser.h" >> >> volatile bool force_quit; >> >> -#define RTE_LOGTYPE_IPSEC RTE_LOGTYPE_USER1 >> - >> #define MAX_JUMBO_PKT_LEN 9600 >> >> #define MEMPOOL_CACHE_SIZE 256 >> @@ -70,8 +69,6 @@ volatile bool force_quit; >> >> #define BURST_TX_DRAIN_US 100 /* TX drain every ~100us */ >> >> -#define 
NB_SOCKETS 4 >> - >> /* Configure how many packets ahead to prefetch, when reading packets */ >> #define PREFETCH_OFFSET 3 >> >> @@ -79,8 +76,6 @@ volatile bool force_quit; >> >> #define MAX_LCORE_PARAMS 1024 >> >> -#define UNPROTECTED_PORT(port) (unprotected_port_mask & (1 << portid)) >> - >> /* >> * Configurable number of RX/TX ring descriptors >> */ >> @@ -89,29 +84,6 @@ volatile bool force_quit; >> static uint16_t nb_rxd = IPSEC_SECGW_RX_DESC_DEFAULT; >> static uint16_t nb_txd = IPSEC_SECGW_TX_DESC_DEFAULT; >> >> -#if RTE_BYTE_ORDER != RTE_LITTLE_ENDIAN >> -#define __BYTES_TO_UINT64(a, b, c, d, e, f, g, h) \ >> - (((uint64_t)((a) & 0xff) << 56) | \ >> - ((uint64_t)((b) & 0xff) << 48) | \ >> - ((uint64_t)((c) & 0xff) << 40) | \ >> - ((uint64_t)((d) & 0xff) << 32) | \ >> - ((uint64_t)((e) & 0xff) << 24) | \ >> - ((uint64_t)((f) & 0xff) << 16) | \ >> - ((uint64_t)((g) & 0xff) << 8) | \ >> - ((uint64_t)(h) & 0xff)) >> -#else >> -#define __BYTES_TO_UINT64(a, b, c, d, e, f, g, h) \ >> - (((uint64_t)((h) & 0xff) << 56) | \ >> - ((uint64_t)((g) & 0xff) << 48) | \ >> - ((uint64_t)((f) & 0xff) << 40) | \ >> - ((uint64_t)((e) & 0xff) << 32) | \ >> - ((uint64_t)((d) & 0xff) << 24) | \ >> - ((uint64_t)((c) & 0xff) << 16) | \ >> - ((uint64_t)((b) & 0xff) << 8) | \ >> - ((uint64_t)(a) & 0xff)) >> -#endif >> -#define ETHADDR(a, b, c, d, e, f) (__BYTES_TO_UINT64(a, b, c, d, e, f, 0, 0)) >> - >> #define ETHADDR_TO_UINT64(addr) __BYTES_TO_UINT64( \ >> (addr)->addr_bytes[0], (addr)->addr_bytes[1], \ >> (addr)->addr_bytes[2], (addr)->addr_bytes[3], \ >> @@ -123,18 +95,6 @@ static uint16_t nb_txd = IPSEC_SECGW_TX_DESC_DEFAULT; >> >> #define MTU_TO_FRAMELEN(x) ((x) + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN) >> >> -/* port/source ethernet addr and destination ethernet addr */ >> -struct ethaddr_info { >> - uint64_t src, dst; >> -}; >> - >> -struct ethaddr_info ethaddr_tbl[RTE_MAX_ETHPORTS] = { >> - { 0, ETHADDR(0x00, 0x16, 0x3e, 0x7e, 0x94, 0x9a) }, >> - { 0, ETHADDR(0x00, 0x16, 0x3e, 
0x22, 0xa1, 0xd9) }, >> - { 0, ETHADDR(0x00, 0x16, 0x3e, 0x08, 0x69, 0x26) }, >> - { 0, ETHADDR(0x00, 0x16, 0x3e, 0x49, 0x9e, 0xdd) } >> -}; >> - >> struct flow_info flow_info_tbl[RTE_MAX_ETHPORTS]; >> >> #define CMD_LINE_OPT_CONFIG "config" >> @@ -192,10 +152,16 @@ static const struct option lgopts[] = { >> {NULL, 0, 0, 0} >> }; >> >> +struct ethaddr_info ethaddr_tbl[RTE_MAX_ETHPORTS] = { >> + { 0, ETHADDR(0x00, 0x16, 0x3e, 0x7e, 0x94, 0x9a) }, >> + { 0, ETHADDR(0x00, 0x16, 0x3e, 0x22, 0xa1, 0xd9) }, >> + { 0, ETHADDR(0x00, 0x16, 0x3e, 0x08, 0x69, 0x26) }, >> + { 0, ETHADDR(0x00, 0x16, 0x3e, 0x49, 0x9e, 0xdd) } >> +}; >> + >> /* mask of enabled ports */ >> static uint32_t enabled_port_mask; >> static uint64_t enabled_cryptodev_mask = UINT64_MAX; >> -static uint32_t unprotected_port_mask; >> static int32_t promiscuous_on = 1; >> static int32_t numa_on = 1; /**< NUMA is enabled by default. */ >> static uint32_t nb_lcores; >> @@ -283,8 +249,6 @@ static struct rte_eth_conf port_conf = { >> }, >> }; >> >> -static struct socket_ctx socket_ctx[NB_SOCKETS]; >> - >> /* >> * Determine is multi-segment support required: >> * - either frame buffer size is smaller then mtu >> @@ -2828,47 +2792,10 @@ main(int32_t argc, char **argv) >> >> sa_check_offloads(portid, &req_rx_offloads, &req_tx_offloads); >> port_init(portid, req_rx_offloads, req_tx_offloads); >> - /* Create default ipsec flow for the ethernet device */ >> - ret = create_default_ipsec_flow(portid, req_rx_offloads); >> - if (ret) >> - printf("Cannot create default flow, err=%d, port=%d\n", >> - ret, portid); >> } >> >> cryptodevs_init(); >> >> - /* start ports */ >> - RTE_ETH_FOREACH_DEV(portid) { >> - if ((enabled_port_mask & (1 << portid)) == 0) >> - continue; >> - >> - /* >> - * Start device >> - * note: device must be started before a flow rule >> - * can be installed. 
>> - */ >> - ret = rte_eth_dev_start(portid); >> - if (ret < 0) >> - rte_exit(EXIT_FAILURE, "rte_eth_dev_start: " >> - "err=%d, port=%d\n", ret, portid); >> - /* >> - * If enabled, put device in promiscuous mode. >> - * This allows IO forwarding mode to forward packets >> - * to itself through 2 cross-connected ports of the >> - * target machine. >> - */ >> - if (promiscuous_on) { >> - ret = rte_eth_promiscuous_enable(portid); >> - if (ret != 0) >> - rte_exit(EXIT_FAILURE, >> - "rte_eth_promiscuous_enable: err=%s, port=%d\n", >> - rte_strerror(-ret), portid); >> - } >> - >> - rte_eth_dev_callback_register(portid, >> - RTE_ETH_EVENT_IPSEC, inline_ipsec_event_callback, NULL); >> - } >> - >> /* fragment reassemble is enabled */ >> if (frag_tbl_sz != 0) { >> ret = reassemble_init(); >> @@ -2889,8 +2816,6 @@ main(int32_t argc, char **argv) >> } >> } >> >> - check_all_ports_link_status(enabled_port_mask); >> - >> /* >> * Set the enabled port mask in helper config for use by helper >> * sub-system. This will be used while intializing devices using >> @@ -2903,6 +2828,39 @@ main(int32_t argc, char **argv) >> if (ret < 0) >> rte_exit(EXIT_FAILURE, "eh_devs_init failed, err=%d\n", ret); >> >> + /* Create default ipsec flow for each port and start each port */ >> + RTE_ETH_FOREACH_DEV(portid) { >> + if ((enabled_port_mask & (1 << portid)) == 0) >> + continue; >> + >> + ret = create_default_ipsec_flow(portid, req_rx_offloads); > > That doesn't look right. > For more than one eth port in the system, req_rx_offloads will be overwritten by that moment. [Lukasz] You're right. I will fix it in v2. > >> + if (ret) >> + printf("create_default_ipsec_flow failed, err=%d, " >> + "port=%d\n", ret, portid); >> + /* >> + * Start device >> + * note: device must be started before a flow rule >> + * can be installed. >> + */ >> + ret = rte_eth_dev_start(portid); > > Moving that piece of code (dev_start) after sa_init() breaks ixgbe inline-crypto support. 
> As I understand, because configured ipsec flows don't persist across dev_start(). > At least for ixgbe PMD. > Any reason to move that code at all? [Lukasz] We moved starting the eth port until after creation of the default ipsec flow in order to stop packets from temporarily bypassing inline processing (after the eth port is started but before the flow is created). This happens if traffic is already flowing when the ipsec-secgw app is started. However, moving eth_dev_start after sa_init is not necessary. I will revert this change and start the eth ports before sa_init. >> + if (ret < 0) >> + rte_exit(EXIT_FAILURE, "rte_eth_dev_start: " >> + "err=%d, port=%d\n", ret, portid); >> + /* >> + * If enabled, put device in promiscuous mode. >> + * This allows IO forwarding mode to forward packets >> + * to itself through 2 cross-connected ports of the >> + * target machine. >> + */ >> + if (promiscuous_on) >> + rte_eth_promiscuous_enable(portid); >> + >> + rte_eth_dev_callback_register(portid, >> + RTE_ETH_EVENT_IPSEC, inline_ipsec_event_callback, NULL); >> + } >> + >> + check_all_ports_link_status(enabled_port_mask); >> + >> /* launch per-lcore init on every lcore */ >> rte_eal_mp_remote_launch(ipsec_launch_one_lcore, eh_conf, CALL_MASTER); >> ^ permalink raw reply [flat|nested] 147+ messages in thread
* Re: [dpdk-dev] [PATCH 11/14] examples/ipsec-secgw: add app processing code 2019-12-08 12:30 ` [dpdk-dev] [PATCH 11/14] examples/ipsec-secgw: add app processing code Anoob Joseph 2019-12-23 16:49 ` Ananyev, Konstantin @ 2019-12-24 13:13 ` Ananyev, Konstantin 2020-01-10 14:36 ` [dpdk-dev] [EXT] " Lukas Bartosik 2019-12-25 15:18 ` [dpdk-dev] " Ananyev, Konstantin 2 siblings, 1 reply; 147+ messages in thread From: Ananyev, Konstantin @ 2019-12-24 13:13 UTC (permalink / raw) To: Anoob Joseph, Akhil Goyal, Nicolau, Radu, Thomas Monjalon Cc: Lukasz Bartosik, Jerin Jacob, Narayana Prasad, Ankur Dwivedi, Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru, dev > --- a/examples/ipsec-secgw/ipsec_worker.c > +++ b/examples/ipsec-secgw/ipsec_worker.c > @@ -15,6 +15,7 @@ > #include <ctype.h> > #include <stdbool.h> > > +#include <rte_acl.h> > #include <rte_common.h> > #include <rte_log.h> > #include <rte_memcpy.h> > @@ -29,12 +30,51 @@ > #include <rte_eventdev.h> > #include <rte_malloc.h> > #include <rte_mbuf.h> > +#include <rte_lpm.h> > +#include <rte_lpm6.h> > > #include "ipsec.h" > +#include "ipsec_worker.h" > #include "event_helper.h" > > extern volatile bool force_quit; > > +static inline enum pkt_type > +process_ipsec_get_pkt_type(struct rte_mbuf *pkt, uint8_t **nlp) > +{ > + struct rte_ether_hdr *eth; > + > + eth = rte_pktmbuf_mtod(pkt, struct rte_ether_hdr *); > + if (eth->ether_type == rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4)) { > + *nlp = RTE_PTR_ADD(eth, RTE_ETHER_HDR_LEN + > + offsetof(struct ip, ip_p)); > + if (**nlp == IPPROTO_ESP) > + return PKT_TYPE_IPSEC_IPV4; > + else > + return PKT_TYPE_PLAIN_IPV4; > + } else if (eth->ether_type == rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6)) { > + *nlp = RTE_PTR_ADD(eth, RTE_ETHER_HDR_LEN + > + offsetof(struct ip6_hdr, ip6_nxt)); > + if (**nlp == IPPROTO_ESP) > + return PKT_TYPE_IPSEC_IPV6; > + else > + return PKT_TYPE_PLAIN_IPV6; > + } > + > + /* Unknown/Unsupported type */ > + return PKT_TYPE_INVALID; > +} Looking though that file, 
it seems like you choose to create your own set of helper functions, instead of trying to reuse existing ones: process_ipsec_get_pkt_type() VS prepare_one_packet() update_mac_addrs() VS prepare_tx_pkt() check_sp() VS inbound_sp_sa() Obviously there is nothing good in code (and possible bugs) duplication. Any reason why you can't reuse existing functions and need to reinvent your own? > + > +static inline void > +update_mac_addrs(struct rte_mbuf *pkt, uint16_t portid) > +{ > + struct rte_ether_hdr *ethhdr; > + > + ethhdr = rte_pktmbuf_mtod(pkt, struct rte_ether_hdr *); > + memcpy(ðhdr->s_addr, ðaddr_tbl[portid].src, RTE_ETHER_ADDR_LEN); > + memcpy(ðhdr->d_addr, ðaddr_tbl[portid].dst, RTE_ETHER_ADDR_LEN); > +} > + > static inline void > ipsec_event_pre_forward(struct rte_mbuf *m, unsigned int port_id) > { > @@ -45,6 +85,177 @@ ipsec_event_pre_forward(struct rte_mbuf *m, unsigned int port_id) > rte_event_eth_tx_adapter_txq_set(m, 0); > } > > +static inline int > +check_sp(struct sp_ctx *sp, const uint8_t *nlp, uint32_t *sa_idx) > +{ > + uint32_t res; > + > + if (unlikely(sp == NULL)) > + return 0; > + > + rte_acl_classify((struct rte_acl_ctx *)sp, &nlp, &res, 1, > + DEFAULT_MAX_CATEGORIES); > + > + if (unlikely(res == 0)) { > + /* No match */ > + return 0; > + } > + > + if (res == DISCARD) > + return 0; > + else if (res == BYPASS) { > + *sa_idx = 0; > + return 1; > + } > + > + *sa_idx = SPI2IDX(res); > + if (*sa_idx < IPSEC_SA_MAX_ENTRIES) > + return 1; > + > + /* Invalid SA IDX */ > + return 0; > +} > + > +static inline uint16_t > +route4_pkt(struct rte_mbuf *pkt, struct rt_ctx *rt_ctx) > +{ > + uint32_t dst_ip; > + uint16_t offset; > + uint32_t hop; > + int ret; > + > + offset = RTE_ETHER_HDR_LEN + offsetof(struct ip, ip_dst); > + dst_ip = *rte_pktmbuf_mtod_offset(pkt, uint32_t *, offset); > + dst_ip = rte_be_to_cpu_32(dst_ip); > + > + ret = rte_lpm_lookup((struct rte_lpm *)rt_ctx, dst_ip, &hop); > + > + if (ret == 0) { > + /* We have a hit */ > + return hop; > + } 
> + > + /* else */ > + return RTE_MAX_ETHPORTS; > +} > + > +/* TODO: To be tested */ > +static inline uint16_t > +route6_pkt(struct rte_mbuf *pkt, struct rt_ctx *rt_ctx) > +{ > + uint8_t dst_ip[16]; > + uint8_t *ip6_dst; > + uint16_t offset; > + uint32_t hop; > + int ret; > + > + offset = RTE_ETHER_HDR_LEN + offsetof(struct ip6_hdr, ip6_dst); > + ip6_dst = rte_pktmbuf_mtod_offset(pkt, uint8_t *, offset); > + memcpy(&dst_ip[0], ip6_dst, 16); > + > + ret = rte_lpm6_lookup((struct rte_lpm6 *)rt_ctx, dst_ip, &hop); > + > + if (ret == 0) { > + /* We have a hit */ > + return hop; > + } > + > + /* else */ > + return RTE_MAX_ETHPORTS; > +} > + > +static inline uint16_t > +get_route(struct rte_mbuf *pkt, struct route_table *rt, enum pkt_type type) > +{ > + if (type == PKT_TYPE_PLAIN_IPV4 || type == PKT_TYPE_IPSEC_IPV4) > + return route4_pkt(pkt, rt->rt4_ctx); > + else if (type == PKT_TYPE_PLAIN_IPV6 || type == PKT_TYPE_IPSEC_IPV6) > + return route6_pkt(pkt, rt->rt6_ctx); > + > + return RTE_MAX_ETHPORTS; > +} > + > +static inline int > +process_ipsec_ev_inbound(struct ipsec_ctx *ctx, struct route_table *rt, > + struct rte_event *ev) > +{ > + struct ipsec_sa *sa = NULL; > + struct rte_mbuf *pkt; > + uint16_t port_id = 0; > + enum pkt_type type; > + uint32_t sa_idx; > + uint8_t *nlp; > + > + /* Get pkt from event */ > + pkt = ev->mbuf; > + > + /* Check the packet type */ > + type = process_ipsec_get_pkt_type(pkt, &nlp); > + > + switch (type) { > + case PKT_TYPE_PLAIN_IPV4: > + if (pkt->ol_flags & PKT_RX_SEC_OFFLOAD) > + sa = (struct ipsec_sa *) pkt->udata64; > + > + /* Check if we have a match */ > + if (check_sp(ctx->sp4_ctx, nlp, &sa_idx) == 0) { > + /* No valid match */ > + goto drop_pkt_and_exit; > + } > + break; > + > + case PKT_TYPE_PLAIN_IPV6: > + if (pkt->ol_flags & PKT_RX_SEC_OFFLOAD) > + sa = (struct ipsec_sa *) pkt->udata64; > + > + /* Check if we have a match */ > + if (check_sp(ctx->sp6_ctx, nlp, &sa_idx) == 0) { > + /* No valid match */ > + goto 
drop_pkt_and_exit; > + } > + break; > + > + default: > + RTE_LOG(ERR, IPSEC, "Unsupported packet type = %d\n", type); > + goto drop_pkt_and_exit; > + } > + > + /* Check if the packet has to be bypassed */ > + if (sa_idx == 0) > + goto route_and_send_pkt; > + > + /* Else the packet has to be protected with SA */ > + > + /* If the packet was IPsec processed, then SA pointer should be set */ > + if (sa == NULL) > + goto drop_pkt_and_exit; > + > + /* SPI on the packet should match with the one in SA */ > + if (unlikely(sa->spi != sa_idx)) > + goto drop_pkt_and_exit; > + > +route_and_send_pkt: > + port_id = get_route(pkt, rt, type); > + if (unlikely(port_id == RTE_MAX_ETHPORTS)) { > + /* no match */ > + goto drop_pkt_and_exit; > + } > + /* else, we have a matching route */ > + > + /* Update mac addresses */ > + update_mac_addrs(pkt, port_id); > + > + /* Update the event with the dest port */ > + ipsec_event_pre_forward(pkt, port_id); > + return 1; > + > +drop_pkt_and_exit: > + RTE_LOG(ERR, IPSEC, "Inbound packet dropped\n"); > + rte_pktmbuf_free(pkt); > + ev->mbuf = NULL; > + return 0; > +} > + > /* > * Event mode exposes various operating modes depending on the > * capabilities of the event device and the operating mode > @@ -134,11 +345,11 @@ static void > ipsec_wrkr_non_burst_int_port_app_mode_inb(struct eh_event_link_info *links, > uint8_t nb_links) > { > + struct lcore_conf_ev_tx_int_port_wrkr lconf; > unsigned int nb_rx = 0; > - unsigned int port_id; > - struct rte_mbuf *pkt; > struct rte_event ev; > uint32_t lcore_id; > + int32_t socket_id; > > /* Check if we have links registered for this lcore */ > if (nb_links == 0) { > @@ -151,6 +362,21 @@ ipsec_wrkr_non_burst_int_port_app_mode_inb(struct eh_event_link_info *links, > /* Get core ID */ > lcore_id = rte_lcore_id(); > > + /* Get socket ID */ > + socket_id = rte_lcore_to_socket_id(lcore_id); > + > + /* Save routing table */ > + lconf.rt.rt4_ctx = socket_ctx[socket_id].rt_ip4; > + lconf.rt.rt6_ctx = 
socket_ctx[socket_id].rt_ip6; > + lconf.inbound.sp4_ctx = socket_ctx[socket_id].sp_ip4_in; > + lconf.inbound.sp6_ctx = socket_ctx[socket_id].sp_ip6_in; > + lconf.inbound.sa_ctx = socket_ctx[socket_id].sa_in; > + lconf.inbound.session_pool = socket_ctx[socket_id].session_pool; > + lconf.outbound.sp4_ctx = socket_ctx[socket_id].sp_ip4_out; > + lconf.outbound.sp6_ctx = socket_ctx[socket_id].sp_ip6_out; > + lconf.outbound.sa_ctx = socket_ctx[socket_id].sa_out; > + lconf.outbound.session_pool = socket_ctx[socket_id].session_pool; > + > RTE_LOG(INFO, IPSEC, > "Launching event mode worker (non-burst - Tx internal port - " > "app mode - inbound) on lcore %d\n", lcore_id); > @@ -175,13 +401,11 @@ ipsec_wrkr_non_burst_int_port_app_mode_inb(struct eh_event_link_info *links, > if (nb_rx == 0) > continue; > > - port_id = ev.queue_id; > - pkt = ev.mbuf; > - > - rte_prefetch0(rte_pktmbuf_mtod(pkt, void *)); > - > - /* Process packet */ > - ipsec_event_pre_forward(pkt, port_id); > + if (process_ipsec_ev_inbound(&lconf.inbound, > + &lconf.rt, &ev) != 1) { > + /* The pkt has been dropped */ > + continue; > + } > > /* > * Since tx internal port is available, events can be > diff --git a/examples/ipsec-secgw/ipsec_worker.h b/examples/ipsec-secgw/ipsec_worker.h > new file mode 100644 > index 0000000..fd18a2e > --- /dev/null > +++ b/examples/ipsec-secgw/ipsec_worker.h > @@ -0,0 +1,39 @@ > +/* SPDX-License-Identifier: BSD-3-Clause > + * Copyright(c) 2018 Cavium, Inc > + */ > +#ifndef _IPSEC_WORKER_H_ > +#define _IPSEC_WORKER_H_ > + > +#include "ipsec.h" > + > +enum pkt_type { > + PKT_TYPE_PLAIN_IPV4 = 1, > + PKT_TYPE_IPSEC_IPV4, > + PKT_TYPE_PLAIN_IPV6, > + PKT_TYPE_IPSEC_IPV6, > + PKT_TYPE_INVALID > +}; > + > +struct route_table { > + struct rt_ctx *rt4_ctx; > + struct rt_ctx *rt6_ctx; > +}; > + > +/* > + * Conf required by event mode worker with tx internal port > + */ > +struct lcore_conf_ev_tx_int_port_wrkr { > + struct ipsec_ctx inbound; > + struct ipsec_ctx outbound; > + struct 
route_table rt; > +} __rte_cache_aligned; > + > +/* TODO > + * > + * Move this function to ipsec_worker.c > + */ > +void ipsec_poll_mode_worker(void); > + > +int ipsec_launch_one_lcore(void *args); > + > +#endif /* _IPSEC_WORKER_H_ */ > diff --git a/examples/ipsec-secgw/sa.c b/examples/ipsec-secgw/sa.c > index 7f046e3..9e17ba0 100644 > --- a/examples/ipsec-secgw/sa.c > +++ b/examples/ipsec-secgw/sa.c > @@ -772,17 +772,6 @@ print_one_sa_rule(const struct ipsec_sa *sa, int inbound) > printf("\n"); > } > > -struct sa_ctx { > - void *satbl; /* pointer to array of rte_ipsec_sa objects*/ > - struct ipsec_sa sa[IPSEC_SA_MAX_ENTRIES]; > - union { > - struct { > - struct rte_crypto_sym_xform a; > - struct rte_crypto_sym_xform b; > - }; > - } xf[IPSEC_SA_MAX_ENTRIES]; > -}; > - > static struct sa_ctx * > sa_create(const char *name, int32_t socket_id) > { > -- > 2.7.4 ^ permalink raw reply [flat|nested] 147+ messages in thread
* Re: [dpdk-dev] [EXT] RE: [PATCH 11/14] examples/ipsec-secgw: add app processing code 2019-12-24 13:13 ` [dpdk-dev] " Ananyev, Konstantin @ 2020-01-10 14:36 ` Lukas Bartosik 0 siblings, 0 replies; 147+ messages in thread From: Lukas Bartosik @ 2020-01-10 14:36 UTC (permalink / raw) To: Ananyev, Konstantin, Anoob Joseph, Akhil Goyal, Nicolau, Radu, Thomas Monjalon Cc: Jerin Jacob Kollanukkaran, Narayana Prasad Raju Athreya, Ankur Dwivedi, Archana Muniganti, Tejasree Kondoj, Vamsi Krishna Attunuru, dev Hi Konstantin, Please see inline. Thanks, Lukasz On 24.12.2019 14:13, Ananyev, Konstantin wrote: > External Email > > ---------------------------------------------------------------------- > >> --- a/examples/ipsec-secgw/ipsec_worker.c >> +++ b/examples/ipsec-secgw/ipsec_worker.c >> @@ -15,6 +15,7 @@ >> #include <ctype.h> >> #include <stdbool.h> >> >> +#include <rte_acl.h> >> #include <rte_common.h> >> #include <rte_log.h> >> #include <rte_memcpy.h> >> @@ -29,12 +30,51 @@ >> #include <rte_eventdev.h> >> #include <rte_malloc.h> >> #include <rte_mbuf.h> >> +#include <rte_lpm.h> >> +#include <rte_lpm6.h> >> >> #include "ipsec.h" >> +#include "ipsec_worker.h" >> #include "event_helper.h" >> >> extern volatile bool force_quit; >> >> +static inline enum pkt_type >> +process_ipsec_get_pkt_type(struct rte_mbuf *pkt, uint8_t **nlp) >> +{ >> + struct rte_ether_hdr *eth; >> + >> + eth = rte_pktmbuf_mtod(pkt, struct rte_ether_hdr *); >> + if (eth->ether_type == rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4)) { >> + *nlp = RTE_PTR_ADD(eth, RTE_ETHER_HDR_LEN + >> + offsetof(struct ip, ip_p)); >> + if (**nlp == IPPROTO_ESP) >> + return PKT_TYPE_IPSEC_IPV4; >> + else >> + return PKT_TYPE_PLAIN_IPV4; >> + } else if (eth->ether_type == rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6)) { >> + *nlp = RTE_PTR_ADD(eth, RTE_ETHER_HDR_LEN + >> + offsetof(struct ip6_hdr, ip6_nxt)); >> + if (**nlp == IPPROTO_ESP) >> + return PKT_TYPE_IPSEC_IPV6; >> + else >> + return PKT_TYPE_PLAIN_IPV6; >> + } >> + >> + /* 
Unknown/Unsupported type */ >> + return PKT_TYPE_INVALID; >> +} > > Looking though that file, it seems like you choose to create your own set of > helper functions, instead of trying to reuse existing ones: > > process_ipsec_get_pkt_type() VS prepare_one_packet() > update_mac_addrs() VS prepare_tx_pkt() > check_sp() VS inbound_sp_sa() > > Obviously there is nothing good in code (and possible bugs) duplication. > Any reason why you can't reuse existing functions and need to reinvent your own? [Lukasz] The prepare_one_packet() and prepare_tx_pkt() functions do much more than we need, so for performance reasons we crafted new functions. For example, process_ipsec_get_pkt_type() only returns nlp and whether the packet type is plain or IPsec. That's all. prepare_one_packet() processes packets in chunks and does much more - it adjusts mbuf and ipv4 lengths, then demultiplexes packets into plain and IPsec flows, and finally does inline checks. The same applies to update_mac_addrs() vs prepare_tx_pkt() and check_sp() vs inbound_sp_sa(): prepare_tx_pkt() and inbound_sp_sa() do more than we need in event mode.
> >> + >> +static inline void >> +update_mac_addrs(struct rte_mbuf *pkt, uint16_t portid) >> +{ >> + struct rte_ether_hdr *ethhdr; >> + >> + ethhdr = rte_pktmbuf_mtod(pkt, struct rte_ether_hdr *); >> + memcpy(ðhdr->s_addr, ðaddr_tbl[portid].src, RTE_ETHER_ADDR_LEN); >> + memcpy(ðhdr->d_addr, ðaddr_tbl[portid].dst, RTE_ETHER_ADDR_LEN); >> +} >> + >> static inline void >> ipsec_event_pre_forward(struct rte_mbuf *m, unsigned int port_id) >> { >> @@ -45,6 +85,177 @@ ipsec_event_pre_forward(struct rte_mbuf *m, unsigned int port_id) >> rte_event_eth_tx_adapter_txq_set(m, 0); >> } >> >> +static inline int >> +check_sp(struct sp_ctx *sp, const uint8_t *nlp, uint32_t *sa_idx) >> +{ >> + uint32_t res; >> + >> + if (unlikely(sp == NULL)) >> + return 0; >> + >> + rte_acl_classify((struct rte_acl_ctx *)sp, &nlp, &res, 1, >> + DEFAULT_MAX_CATEGORIES); >> + >> + if (unlikely(res == 0)) { >> + /* No match */ >> + return 0; >> + } >> + >> + if (res == DISCARD) >> + return 0; >> + else if (res == BYPASS) { >> + *sa_idx = 0; >> + return 1; >> + } >> + >> + *sa_idx = SPI2IDX(res); >> + if (*sa_idx < IPSEC_SA_MAX_ENTRIES) >> + return 1; >> + >> + /* Invalid SA IDX */ >> + return 0; >> +} >> + >> +static inline uint16_t >> +route4_pkt(struct rte_mbuf *pkt, struct rt_ctx *rt_ctx) >> +{ >> + uint32_t dst_ip; >> + uint16_t offset; >> + uint32_t hop; >> + int ret; >> + >> + offset = RTE_ETHER_HDR_LEN + offsetof(struct ip, ip_dst); >> + dst_ip = *rte_pktmbuf_mtod_offset(pkt, uint32_t *, offset); >> + dst_ip = rte_be_to_cpu_32(dst_ip); >> + >> + ret = rte_lpm_lookup((struct rte_lpm *)rt_ctx, dst_ip, &hop); >> + >> + if (ret == 0) { >> + /* We have a hit */ >> + return hop; >> + } >> + >> + /* else */ >> + return RTE_MAX_ETHPORTS; >> +} >> + >> +/* TODO: To be tested */ >> +static inline uint16_t >> +route6_pkt(struct rte_mbuf *pkt, struct rt_ctx *rt_ctx) >> +{ >> + uint8_t dst_ip[16]; >> + uint8_t *ip6_dst; >> + uint16_t offset; >> + uint32_t hop; >> + int ret; >> + >> + offset = 
RTE_ETHER_HDR_LEN + offsetof(struct ip6_hdr, ip6_dst); >> + ip6_dst = rte_pktmbuf_mtod_offset(pkt, uint8_t *, offset); >> + memcpy(&dst_ip[0], ip6_dst, 16); >> + >> + ret = rte_lpm6_lookup((struct rte_lpm6 *)rt_ctx, dst_ip, &hop); >> + >> + if (ret == 0) { >> + /* We have a hit */ >> + return hop; >> + } >> + >> + /* else */ >> + return RTE_MAX_ETHPORTS; >> +} >> + >> +static inline uint16_t >> +get_route(struct rte_mbuf *pkt, struct route_table *rt, enum pkt_type type) >> +{ >> + if (type == PKT_TYPE_PLAIN_IPV4 || type == PKT_TYPE_IPSEC_IPV4) >> + return route4_pkt(pkt, rt->rt4_ctx); >> + else if (type == PKT_TYPE_PLAIN_IPV6 || type == PKT_TYPE_IPSEC_IPV6) >> + return route6_pkt(pkt, rt->rt6_ctx); >> + >> + return RTE_MAX_ETHPORTS; >> +} >> + >> +static inline int >> +process_ipsec_ev_inbound(struct ipsec_ctx *ctx, struct route_table *rt, >> + struct rte_event *ev) >> +{ >> + struct ipsec_sa *sa = NULL; >> + struct rte_mbuf *pkt; >> + uint16_t port_id = 0; >> + enum pkt_type type; >> + uint32_t sa_idx; >> + uint8_t *nlp; >> + >> + /* Get pkt from event */ >> + pkt = ev->mbuf; >> + >> + /* Check the packet type */ >> + type = process_ipsec_get_pkt_type(pkt, &nlp); >> + >> + switch (type) { >> + case PKT_TYPE_PLAIN_IPV4: >> + if (pkt->ol_flags & PKT_RX_SEC_OFFLOAD) >> + sa = (struct ipsec_sa *) pkt->udata64; >> + >> + /* Check if we have a match */ >> + if (check_sp(ctx->sp4_ctx, nlp, &sa_idx) == 0) { >> + /* No valid match */ >> + goto drop_pkt_and_exit; >> + } >> + break; >> + >> + case PKT_TYPE_PLAIN_IPV6: >> + if (pkt->ol_flags & PKT_RX_SEC_OFFLOAD) >> + sa = (struct ipsec_sa *) pkt->udata64; >> + >> + /* Check if we have a match */ >> + if (check_sp(ctx->sp6_ctx, nlp, &sa_idx) == 0) { >> + /* No valid match */ >> + goto drop_pkt_and_exit; >> + } >> + break; >> + >> + default: >> + RTE_LOG(ERR, IPSEC, "Unsupported packet type = %d\n", type); >> + goto drop_pkt_and_exit; >> + } >> + >> + /* Check if the packet has to be bypassed */ >> + if (sa_idx == 0) >> + goto 
route_and_send_pkt; >> + >> + /* Else the packet has to be protected with SA */ >> + >> + /* If the packet was IPsec processed, then SA pointer should be set */ >> + if (sa == NULL) >> + goto drop_pkt_and_exit; >> + >> + /* SPI on the packet should match with the one in SA */ >> + if (unlikely(sa->spi != sa_idx)) >> + goto drop_pkt_and_exit; >> + >> +route_and_send_pkt: >> + port_id = get_route(pkt, rt, type); >> + if (unlikely(port_id == RTE_MAX_ETHPORTS)) { >> + /* no match */ >> + goto drop_pkt_and_exit; >> + } >> + /* else, we have a matching route */ >> + >> + /* Update mac addresses */ >> + update_mac_addrs(pkt, port_id); >> + >> + /* Update the event with the dest port */ >> + ipsec_event_pre_forward(pkt, port_id); >> + return 1; >> + >> +drop_pkt_and_exit: >> + RTE_LOG(ERR, IPSEC, "Inbound packet dropped\n"); >> + rte_pktmbuf_free(pkt); >> + ev->mbuf = NULL; >> + return 0; >> +} >> + >> /* >> * Event mode exposes various operating modes depending on the >> * capabilities of the event device and the operating mode >> @@ -134,11 +345,11 @@ static void >> ipsec_wrkr_non_burst_int_port_app_mode_inb(struct eh_event_link_info *links, >> uint8_t nb_links) >> { >> + struct lcore_conf_ev_tx_int_port_wrkr lconf; >> unsigned int nb_rx = 0; >> - unsigned int port_id; >> - struct rte_mbuf *pkt; >> struct rte_event ev; >> uint32_t lcore_id; >> + int32_t socket_id; >> >> /* Check if we have links registered for this lcore */ >> if (nb_links == 0) { >> @@ -151,6 +362,21 @@ ipsec_wrkr_non_burst_int_port_app_mode_inb(struct eh_event_link_info *links, >> /* Get core ID */ >> lcore_id = rte_lcore_id(); >> >> + /* Get socket ID */ >> + socket_id = rte_lcore_to_socket_id(lcore_id); >> + >> + /* Save routing table */ >> + lconf.rt.rt4_ctx = socket_ctx[socket_id].rt_ip4; >> + lconf.rt.rt6_ctx = socket_ctx[socket_id].rt_ip6; >> + lconf.inbound.sp4_ctx = socket_ctx[socket_id].sp_ip4_in; >> + lconf.inbound.sp6_ctx = socket_ctx[socket_id].sp_ip6_in; >> + lconf.inbound.sa_ctx = 
socket_ctx[socket_id].sa_in; >> + lconf.inbound.session_pool = socket_ctx[socket_id].session_pool; >> + lconf.outbound.sp4_ctx = socket_ctx[socket_id].sp_ip4_out; >> + lconf.outbound.sp6_ctx = socket_ctx[socket_id].sp_ip6_out; >> + lconf.outbound.sa_ctx = socket_ctx[socket_id].sa_out; >> + lconf.outbound.session_pool = socket_ctx[socket_id].session_pool; >> + >> RTE_LOG(INFO, IPSEC, >> "Launching event mode worker (non-burst - Tx internal port - " >> "app mode - inbound) on lcore %d\n", lcore_id); >> @@ -175,13 +401,11 @@ ipsec_wrkr_non_burst_int_port_app_mode_inb(struct eh_event_link_info *links, >> if (nb_rx == 0) >> continue; >> >> - port_id = ev.queue_id; >> - pkt = ev.mbuf; >> - >> - rte_prefetch0(rte_pktmbuf_mtod(pkt, void *)); >> - >> - /* Process packet */ >> - ipsec_event_pre_forward(pkt, port_id); >> + if (process_ipsec_ev_inbound(&lconf.inbound, >> + &lconf.rt, &ev) != 1) { >> + /* The pkt has been dropped */ >> + continue; >> + } >> >> /* >> * Since tx internal port is available, events can be >> diff --git a/examples/ipsec-secgw/ipsec_worker.h b/examples/ipsec-secgw/ipsec_worker.h >> new file mode 100644 >> index 0000000..fd18a2e >> --- /dev/null >> +++ b/examples/ipsec-secgw/ipsec_worker.h >> @@ -0,0 +1,39 @@ >> +/* SPDX-License-Identifier: BSD-3-Clause >> + * Copyright(c) 2018 Cavium, Inc >> + */ >> +#ifndef _IPSEC_WORKER_H_ >> +#define _IPSEC_WORKER_H_ >> + >> +#include "ipsec.h" >> + >> +enum pkt_type { >> + PKT_TYPE_PLAIN_IPV4 = 1, >> + PKT_TYPE_IPSEC_IPV4, >> + PKT_TYPE_PLAIN_IPV6, >> + PKT_TYPE_IPSEC_IPV6, >> + PKT_TYPE_INVALID >> +}; >> + >> +struct route_table { >> + struct rt_ctx *rt4_ctx; >> + struct rt_ctx *rt6_ctx; >> +}; >> + >> +/* >> + * Conf required by event mode worker with tx internal port >> + */ >> +struct lcore_conf_ev_tx_int_port_wrkr { >> + struct ipsec_ctx inbound; >> + struct ipsec_ctx outbound; >> + struct route_table rt; >> +} __rte_cache_aligned; >> + >> +/* TODO >> + * >> + * Move this function to ipsec_worker.c >> + */ 
>> +void ipsec_poll_mode_worker(void); >> + >> +int ipsec_launch_one_lcore(void *args); >> + >> +#endif /* _IPSEC_WORKER_H_ */ >> diff --git a/examples/ipsec-secgw/sa.c b/examples/ipsec-secgw/sa.c >> index 7f046e3..9e17ba0 100644 >> --- a/examples/ipsec-secgw/sa.c >> +++ b/examples/ipsec-secgw/sa.c >> @@ -772,17 +772,6 @@ print_one_sa_rule(const struct ipsec_sa *sa, int inbound) >> printf("\n"); >> } >> >> -struct sa_ctx { >> - void *satbl; /* pointer to array of rte_ipsec_sa objects*/ >> - struct ipsec_sa sa[IPSEC_SA_MAX_ENTRIES]; >> - union { >> - struct { >> - struct rte_crypto_sym_xform a; >> - struct rte_crypto_sym_xform b; >> - }; >> - } xf[IPSEC_SA_MAX_ENTRIES]; >> -}; >> - >> static struct sa_ctx * >> sa_create(const char *name, int32_t socket_id) >> { >> -- >> 2.7.4 > ^ permalink raw reply [flat|nested] 147+ messages in thread
* Re: [dpdk-dev] [PATCH 11/14] examples/ipsec-secgw: add app processing code 2019-12-08 12:30 ` [dpdk-dev] [PATCH 11/14] examples/ipsec-secgw: add app processing code Anoob Joseph 2019-12-23 16:49 ` Ananyev, Konstantin 2019-12-24 13:13 ` [dpdk-dev] " Ananyev, Konstantin @ 2019-12-25 15:18 ` Ananyev, Konstantin 2020-01-07 6:16 ` Anoob Joseph 2 siblings, 1 reply; 147+ messages in thread From: Ananyev, Konstantin @ 2019-12-25 15:18 UTC (permalink / raw) To: Anoob Joseph, Akhil Goyal, Nicolau, Radu, Thomas Monjalon Cc: Lukasz Bartosik, Jerin Jacob, Narayana Prasad, Ankur Dwivedi, Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru, dev > +static inline int > +process_ipsec_ev_inbound(struct ipsec_ctx *ctx, struct route_table *rt, > + struct rte_event *ev) > +{ > + struct ipsec_sa *sa = NULL; > + struct rte_mbuf *pkt; > + uint16_t port_id = 0; > + enum pkt_type type; > + uint32_t sa_idx; > + uint8_t *nlp; > + > + /* Get pkt from event */ > + pkt = ev->mbuf; > + > + /* Check the packet type */ > + type = process_ipsec_get_pkt_type(pkt, &nlp); > + > + switch (type) { > + case PKT_TYPE_PLAIN_IPV4: > + if (pkt->ol_flags & PKT_RX_SEC_OFFLOAD) > + sa = (struct ipsec_sa *) pkt->udata64; Shouldn't packets with PKT_RX_SEC_OFFLOAD_FAIL be handled somehow? Another question - as I can see from the code, right now event mode supports only inline-proto, correct? If so, then probably an error should be reported at startup, if in config file some other types of sessions were requested. 
> + > + /* Check if we have a match */ > + if (check_sp(ctx->sp4_ctx, nlp, &sa_idx) == 0) { > + /* No valid match */ > + goto drop_pkt_and_exit; > + } > + break; > + > + case PKT_TYPE_PLAIN_IPV6: > + if (pkt->ol_flags & PKT_RX_SEC_OFFLOAD) > + sa = (struct ipsec_sa *) pkt->udata64; > + > + /* Check if we have a match */ > + if (check_sp(ctx->sp6_ctx, nlp, &sa_idx) == 0) { > + /* No valid match */ > + goto drop_pkt_and_exit; > + } > + break; > + > + default: > + RTE_LOG(ERR, IPSEC, "Unsupported packet type = %d\n", type); > + goto drop_pkt_and_exit; > + } > + > + /* Check if the packet has to be bypassed */ > + if (sa_idx == 0) > + goto route_and_send_pkt; > + > + /* Else the packet has to be protected with SA */ > + > + /* If the packet was IPsec processed, then SA pointer should be set */ > + if (sa == NULL) > + goto drop_pkt_and_exit; > + > + /* SPI on the packet should match with the one in SA */ > + if (unlikely(sa->spi != sa_idx)) > + goto drop_pkt_and_exit; > + > +route_and_send_pkt: > + port_id = get_route(pkt, rt, type); > + if (unlikely(port_id == RTE_MAX_ETHPORTS)) { > + /* no match */ > + goto drop_pkt_and_exit; > + } > + /* else, we have a matching route */ > + > + /* Update mac addresses */ > + update_mac_addrs(pkt, port_id); > + > + /* Update the event with the dest port */ > + ipsec_event_pre_forward(pkt, port_id); > + return 1; > + > +drop_pkt_and_exit: > + RTE_LOG(ERR, IPSEC, "Inbound packet dropped\n"); > + rte_pktmbuf_free(pkt); > + ev->mbuf = NULL; > + return 0; > +} > + ^ permalink raw reply [flat|nested] 147+ messages in thread
* Re: [dpdk-dev] [PATCH 11/14] examples/ipsec-secgw: add app processing code 2019-12-25 15:18 ` [dpdk-dev] " Ananyev, Konstantin @ 2020-01-07 6:16 ` Anoob Joseph 0 siblings, 0 replies; 147+ messages in thread From: Anoob Joseph @ 2020-01-07 6:16 UTC (permalink / raw) To: Ananyev, Konstantin, Akhil Goyal, Nicolau, Radu, Thomas Monjalon Cc: Lukas Bartosik, Jerin Jacob Kollanukkaran, Narayana Prasad Raju Athreya, Ankur Dwivedi, Archana Muniganti, Tejasree Kondoj, Vamsi Krishna Attunuru, dev Hi Konstantin, Please see inline. Thanks, Anoob > -----Original Message----- > From: Ananyev, Konstantin <konstantin.ananyev@intel.com> > Sent: Wednesday, December 25, 2019 8:49 PM > To: Anoob Joseph <anoobj@marvell.com>; Akhil Goyal <akhil.goyal@nxp.com>; > Nicolau, Radu <radu.nicolau@intel.com>; Thomas Monjalon > <thomas@monjalon.net> > Cc: Lukas Bartosik <lbartosik@marvell.com>; Jerin Jacob Kollanukkaran > <jerinj@marvell.com>; Narayana Prasad Raju Athreya > <pathreya@marvell.com>; Ankur Dwivedi <adwivedi@marvell.com>; Archana > Muniganti <marchana@marvell.com>; Tejasree Kondoj > <ktejasree@marvell.com>; Vamsi Krishna Attunuru <vattunuru@marvell.com>; > dev@dpdk.org > Subject: [EXT] RE: [PATCH 11/14] examples/ipsec-secgw: add app processing > code > > External Email > > ---------------------------------------------------------------------- > > > +static inline int > > +process_ipsec_ev_inbound(struct ipsec_ctx *ctx, struct route_table *rt, > > + struct rte_event *ev) > > +{ > > + struct ipsec_sa *sa = NULL; > > + struct rte_mbuf *pkt; > > + uint16_t port_id = 0; > > + enum pkt_type type; > > + uint32_t sa_idx; > > + uint8_t *nlp; > > + > > + /* Get pkt from event */ > > + pkt = ev->mbuf; > > + > > + /* Check the packet type */ > > + type = process_ipsec_get_pkt_type(pkt, &nlp); > > + > > + switch (type) { > > + case PKT_TYPE_PLAIN_IPV4: > > + if (pkt->ol_flags & PKT_RX_SEC_OFFLOAD) > > + sa = (struct ipsec_sa *) pkt->udata64; > > > Shouldn't packets with PKT_RX_SEC_OFFLOAD_FAIL 
be handled somehow? [Anoob] Yes. Will fix this in v2. > Another question - as I can see from the code, right now event mode supports > only inline-proto, correct? > If so, then probably an error should be reported at startup, if in config file > some other types of sessions were requested. [Anoob] Okay. Will add this in v2. > > > + > > + /* Check if we have a match */ > > + if (check_sp(ctx->sp4_ctx, nlp, &sa_idx) == 0) { > > + /* No valid match */ > > + goto drop_pkt_and_exit; > > + } > > + break; > > + > > + case PKT_TYPE_PLAIN_IPV6: > > + if (pkt->ol_flags & PKT_RX_SEC_OFFLOAD) > > + sa = (struct ipsec_sa *) pkt->udata64; > > + > > + /* Check if we have a match */ > > + if (check_sp(ctx->sp6_ctx, nlp, &sa_idx) == 0) { > > + /* No valid match */ > > + goto drop_pkt_and_exit; > > + } > > + break; > > + > > + default: > > + RTE_LOG(ERR, IPSEC, "Unsupported packet type = %d\n", type); > > + goto drop_pkt_and_exit; > > + } > > + > > + /* Check if the packet has to be bypassed */ > > + if (sa_idx == 0) > > + goto route_and_send_pkt; > > + > > + /* Else the packet has to be protected with SA */ > > + > > + /* If the packet was IPsec processed, then SA pointer should be set */ > > + if (sa == NULL) > > + goto drop_pkt_and_exit; > > + > > + /* SPI on the packet should match with the one in SA */ > > + if (unlikely(sa->spi != sa_idx)) > > + goto drop_pkt_and_exit; > > + > > +route_and_send_pkt: > > + port_id = get_route(pkt, rt, type); > > + if (unlikely(port_id == RTE_MAX_ETHPORTS)) { > > + /* no match */ > > + goto drop_pkt_and_exit; > > + } > > + /* else, we have a matching route */ > > + > > + /* Update mac addresses */ > > + update_mac_addrs(pkt, port_id); > > + > > + /* Update the event with the dest port */ > > + ipsec_event_pre_forward(pkt, port_id); > > + return 1; > > + > > +drop_pkt_and_exit: > > + RTE_LOG(ERR, IPSEC, "Inbound packet dropped\n"); > > + rte_pktmbuf_free(pkt); > > + ev->mbuf = NULL; > > + return 0; > > +} > > + ^ permalink raw reply [flat|nested] 
* [dpdk-dev] [PATCH 12/14] examples/ipsec-secgw: add driver outbound worker 2019-12-08 12:30 [dpdk-dev] [PATCH 00/14] add eventmode to ipsec-secgw Anoob Joseph ` (10 preceding siblings ...) 2019-12-08 12:30 ` [dpdk-dev] [PATCH 11/14] examples/ipsec-secgw: add app processing code Anoob Joseph @ 2019-12-08 12:30 ` Anoob Joseph 2019-12-23 17:28 ` Ananyev, Konstantin 2019-12-08 12:30 ` [dpdk-dev] [PATCH 13/14] examples/ipsec-secgw: add app " Anoob Joseph ` (2 subsequent siblings) 14 siblings, 1 reply; 147+ messages in thread From: Anoob Joseph @ 2019-12-08 12:30 UTC (permalink / raw) To: Akhil Goyal, Radu Nicolau, Thomas Monjalon Cc: Ankur Dwivedi, Jerin Jacob, Narayana Prasad, Anoob Joseph, Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru, Lukasz Bartosik, Konstantin Ananyev, dev From: Ankur Dwivedi <adwivedi@marvell.com> This patch adds the driver outbound worker thread for ipsec-secgw. In this mode the security session is a fixed one and sa update is not done. Signed-off-by: Ankur Dwivedi <adwivedi@marvell.com> Signed-off-by: Anoob Joseph <anoobj@marvell.com> Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> --- examples/ipsec-secgw/ipsec-secgw.c | 12 +++++ examples/ipsec-secgw/ipsec.c | 9 ++++ examples/ipsec-secgw/ipsec_worker.c | 90 ++++++++++++++++++++++++++++++++++++- 3 files changed, 110 insertions(+), 1 deletion(-) diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c index 2e7d4d8..76719f2 100644 --- a/examples/ipsec-secgw/ipsec-secgw.c +++ b/examples/ipsec-secgw/ipsec-secgw.c @@ -2011,6 +2011,18 @@ cryptodevs_init(void) i++; } + /* + * Set the queue pair to at least the number of ethernet + * devices for inline outbound. 
+ */ + qp = RTE_MAX(rte_eth_dev_count_avail(), qp); + + /* + * The requested number of queues should never exceed + * the max available + */ + qp = RTE_MIN(qp, max_nb_qps); + if (qp == 0) continue; diff --git a/examples/ipsec-secgw/ipsec.c b/examples/ipsec-secgw/ipsec.c index e529f68..9ff8a63 100644 --- a/examples/ipsec-secgw/ipsec.c +++ b/examples/ipsec-secgw/ipsec.c @@ -141,6 +141,10 @@ create_lookaside_session(struct ipsec_ctx *ipsec_ctx, struct ipsec_sa *sa, return 0; } +uint16_t sa_no; +#define MAX_FIXED_SESSIONS 10 +struct rte_security_session *sec_session_fixed[MAX_FIXED_SESSIONS]; + int create_inline_session(struct socket_ctx *skt_ctx, struct ipsec_sa *sa, struct rte_ipsec_session *ips) @@ -401,6 +405,11 @@ create_inline_session(struct socket_ctx *skt_ctx, struct ipsec_sa *sa, ips->security.ol_flags = sec_cap->ol_flags; ips->security.ctx = sec_ctx; + if (sa_no < MAX_FIXED_SESSIONS) { + sec_session_fixed[sa_no] = + ipsec_get_primary_session(sa)->security.ses; + sa_no++; + } } set_cdev_id: diff --git a/examples/ipsec-secgw/ipsec_worker.c b/examples/ipsec-secgw/ipsec_worker.c index 2af9475..e202277 100644 --- a/examples/ipsec-secgw/ipsec_worker.c +++ b/examples/ipsec-secgw/ipsec_worker.c @@ -263,7 +263,7 @@ process_ipsec_ev_inbound(struct ipsec_ctx *ctx, struct route_table *rt, */ /* Workers registered */ -#define IPSEC_EVENTMODE_WORKERS 2 +#define IPSEC_EVENTMODE_WORKERS 3 /* * Event mode worker @@ -423,6 +423,84 @@ ipsec_wrkr_non_burst_int_port_app_mode_inb(struct eh_event_link_info *links, return; } +/* + * Event mode worker + * Operating parameters : non-burst - Tx internal port - driver mode - outbound + */ +extern struct rte_security_session *sec_session_fixed[]; +static void +ipsec_wrkr_non_burst_int_port_drvr_mode_outb(struct eh_event_link_info *links, + uint8_t nb_links) +{ + unsigned int nb_rx = 0; + struct rte_mbuf *pkt; + unsigned int port_id; + struct rte_event ev; + uint32_t lcore_id; + + /* Check if we have links registered for this lcore */ + 
if (nb_links == 0) { + /* No links registered - exit */ + goto exit; + } + + /* Get core ID */ + lcore_id = rte_lcore_id(); + + RTE_LOG(INFO, IPSEC, + "Launching event mode worker (non-burst - Tx internal port - " + "driver mode - outbound) on lcore %d\n", lcore_id); + + /* We have valid links */ + + /* Check if it's single link */ + if (nb_links != 1) { + RTE_LOG(INFO, IPSEC, + "Multiple links not supported. Using first link\n"); + } + + RTE_LOG(INFO, IPSEC, " -- lcoreid=%u event_port_id=%u\n", lcore_id, + links[0].event_port_id); + while (!force_quit) { + /* Read packet from event queues */ + nb_rx = rte_event_dequeue_burst(links[0].eventdev_id, + links[0].event_port_id, + &ev, /* events */ + 1, /* nb_events */ + 0 /* timeout_ticks */); + + if (nb_rx == 0) + continue; + + port_id = ev.queue_id; + pkt = ev.mbuf; + + rte_prefetch0(rte_pktmbuf_mtod(pkt, void *)); + + /* Process packet */ + ipsec_event_pre_forward(pkt, port_id); + + pkt->udata64 = (uint64_t) sec_session_fixed[port_id]; + + /* Mark the packet for Tx security offload */ + pkt->ol_flags |= PKT_TX_SEC_OFFLOAD; + + /* + * Since tx internal port is available, events can be + * directly enqueued to the adapter and it would be + * internally submitted to the eth device. 
+ */ + rte_event_eth_tx_adapter_enqueue(links[0].eventdev_id, + links[0].event_port_id, + &ev, /* events */ + 1, /* nb_events */ + 0 /* flags */); + } + +exit: + return; +} + static uint8_t ipsec_eventmode_populate_wrkr_params(struct eh_app_worker_params *wrkrs) { @@ -449,6 +527,16 @@ ipsec_eventmode_populate_wrkr_params(struct eh_app_worker_params *wrkrs) wrkr->cap.ipsec_dir = EH_IPSEC_DIR_TYPE_INBOUND; wrkr->worker_thread = ipsec_wrkr_non_burst_int_port_app_mode_inb; + wrkr++; + nb_wrkr_param++; + + /* Non-burst - Tx internal port - driver mode - outbound */ + wrkr->cap.burst = EH_RX_TYPE_NON_BURST; + wrkr->cap.tx_internal_port = EH_TX_TYPE_INTERNAL_PORT; + wrkr->cap.ipsec_mode = EH_IPSEC_MODE_TYPE_DRIVER; + wrkr->cap.ipsec_dir = EH_IPSEC_DIR_TYPE_OUTBOUND; + wrkr->worker_thread = ipsec_wrkr_non_burst_int_port_drvr_mode_outb; + nb_wrkr_param++; return nb_wrkr_param; } -- 2.7.4 ^ permalink raw reply [flat|nested] 147+ messages in thread
* Re: [dpdk-dev] [PATCH 12/14] examples/ipsec-secgw: add driver outbound worker 2019-12-08 12:30 ` [dpdk-dev] [PATCH 12/14] examples/ipsec-secgw: add driver outbound worker Anoob Joseph @ 2019-12-23 17:28 ` Ananyev, Konstantin 2020-01-04 10:58 ` Anoob Joseph 0 siblings, 1 reply; 147+ messages in thread From: Ananyev, Konstantin @ 2019-12-23 17:28 UTC (permalink / raw) To: Anoob Joseph, Akhil Goyal, Nicolau, Radu, Thomas Monjalon Cc: Ankur Dwivedi, Jerin Jacob, Narayana Prasad, Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru, Lukasz Bartosik, dev > This patch adds the driver outbound worker thread for ipsec-secgw. > In this mode the security session is a fixed one and sa update > is not done. > > Signed-off-by: Ankur Dwivedi <adwivedi@marvell.com> > Signed-off-by: Anoob Joseph <anoobj@marvell.com> > Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> > --- > examples/ipsec-secgw/ipsec-secgw.c | 12 +++++ > examples/ipsec-secgw/ipsec.c | 9 ++++ > examples/ipsec-secgw/ipsec_worker.c | 90 ++++++++++++++++++++++++++++++++++++- > 3 files changed, 110 insertions(+), 1 deletion(-) > > diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c > index 2e7d4d8..76719f2 100644 > --- a/examples/ipsec-secgw/ipsec-secgw.c > +++ b/examples/ipsec-secgw/ipsec-secgw.c > @@ -2011,6 +2011,18 @@ cryptodevs_init(void) > i++; > } > > + /* > + * Set the queue pair to at least the number of ethernet > + * devices for inline outbound. > + */ > + qp = RTE_MAX(rte_eth_dev_count_avail(), qp); Not sure, what for? Why we can't process packets from several eth devs on the same crypto-dev queue? 
> + > + /* > + * The requested number of queues should never exceed > + * the max available > + */ > + qp = RTE_MIN(qp, max_nb_qps); > + > if (qp == 0) > continue; > > diff --git a/examples/ipsec-secgw/ipsec.c b/examples/ipsec-secgw/ipsec.c > index e529f68..9ff8a63 100644 > --- a/examples/ipsec-secgw/ipsec.c > +++ b/examples/ipsec-secgw/ipsec.c > @@ -141,6 +141,10 @@ create_lookaside_session(struct ipsec_ctx *ipsec_ctx, struct ipsec_sa *sa, > return 0; > } > > +uint16_t sa_no; > +#define MAX_FIXED_SESSIONS 10 > +struct rte_security_session *sec_session_fixed[MAX_FIXED_SESSIONS]; > + > int > create_inline_session(struct socket_ctx *skt_ctx, struct ipsec_sa *sa, > struct rte_ipsec_session *ips) > @@ -401,6 +405,11 @@ create_inline_session(struct socket_ctx *skt_ctx, struct ipsec_sa *sa, > > ips->security.ol_flags = sec_cap->ol_flags; > ips->security.ctx = sec_ctx; > + if (sa_no < MAX_FIXED_SESSIONS) { > + sec_session_fixed[sa_no] = > + ipsec_get_primary_session(sa)->security.ses; > + sa_no++; > + } > } Totally lost what is the purpose of these changes... Why first 10 inline-proto are special and need to be saved inside global array (sec_session_fixed)? Why later, in ipsec_worker.c this array is referenced by eth port_id? What would happen if number of inline-proto sessions is less than number of eth ports? 
> set_cdev_id: > diff --git a/examples/ipsec-secgw/ipsec_worker.c b/examples/ipsec-secgw/ipsec_worker.c > index 2af9475..e202277 100644 > --- a/examples/ipsec-secgw/ipsec_worker.c > +++ b/examples/ipsec-secgw/ipsec_worker.c > @@ -263,7 +263,7 @@ process_ipsec_ev_inbound(struct ipsec_ctx *ctx, struct route_table *rt, > */ > > /* Workers registered */ > -#define IPSEC_EVENTMODE_WORKERS 2 > +#define IPSEC_EVENTMODE_WORKERS 3 > > /* > * Event mode worker > @@ -423,6 +423,84 @@ ipsec_wrkr_non_burst_int_port_app_mode_inb(struct eh_event_link_info *links, > return; > } > > +/* > + * Event mode worker > + * Operating parameters : non-burst - Tx internal port - driver mode - outbound > + */ > +extern struct rte_security_session *sec_session_fixed[]; > +static void > +ipsec_wrkr_non_burst_int_port_drvr_mode_outb(struct eh_event_link_info *links, > + uint8_t nb_links) > +{ > + unsigned int nb_rx = 0; > + struct rte_mbuf *pkt; > + unsigned int port_id; > + struct rte_event ev; > + uint32_t lcore_id; > + > + /* Check if we have links registered for this lcore */ > + if (nb_links == 0) { > + /* No links registered - exit */ > + goto exit; > + } > + > + /* Get core ID */ > + lcore_id = rte_lcore_id(); > + > + RTE_LOG(INFO, IPSEC, > + "Launching event mode worker (non-burst - Tx internal port - " > + "driver mode - outbound) on lcore %d\n", lcore_id); > + > + /* We have valid links */ > + > + /* Check if it's single link */ > + if (nb_links != 1) { > + RTE_LOG(INFO, IPSEC, > + "Multiple links not supported. 
Using first link\n"); > + } > + > + RTE_LOG(INFO, IPSEC, " -- lcoreid=%u event_port_id=%u\n", lcore_id, > + links[0].event_port_id); > + while (!force_quit) { > + /* Read packet from event queues */ > + nb_rx = rte_event_dequeue_burst(links[0].eventdev_id, > + links[0].event_port_id, > + &ev, /* events */ > + 1, /* nb_events */ > + 0 /* timeout_ticks */); > + > + if (nb_rx == 0) > + continue; > + > + port_id = ev.queue_id; > + pkt = ev.mbuf; > + > + rte_prefetch0(rte_pktmbuf_mtod(pkt, void *)); > + > + /* Process packet */ > + ipsec_event_pre_forward(pkt, port_id); > + > + pkt->udata64 = (uint64_t) sec_session_fixed[port_id]; > + > + /* Mark the packet for Tx security offload */ > + pkt->ol_flags |= PKT_TX_SEC_OFFLOAD; > + > + /* > + * Since tx internal port is available, events can be > + * directly enqueued to the adapter and it would be > + * internally submitted to the eth device. > + */ > + rte_event_eth_tx_adapter_enqueue(links[0].eventdev_id, > + links[0].event_port_id, > + &ev, /* events */ > + 1, /* nb_events */ > + 0 /* flags */); > + } > + > +exit: > + return; > +} > + > static uint8_t > ipsec_eventmode_populate_wrkr_params(struct eh_app_worker_params *wrkrs) > { > @@ -449,6 +527,16 @@ ipsec_eventmode_populate_wrkr_params(struct eh_app_worker_params *wrkrs) > wrkr->cap.ipsec_dir = EH_IPSEC_DIR_TYPE_INBOUND; > wrkr->worker_thread = ipsec_wrkr_non_burst_int_port_app_mode_inb; > > + wrkr++; > + nb_wrkr_param++; > + > + /* Non-burst - Tx internal port - driver mode - outbound */ > + wrkr->cap.burst = EH_RX_TYPE_NON_BURST; > + wrkr->cap.tx_internal_port = EH_TX_TYPE_INTERNAL_PORT; > + wrkr->cap.ipsec_mode = EH_IPSEC_MODE_TYPE_DRIVER; > + wrkr->cap.ipsec_dir = EH_IPSEC_DIR_TYPE_OUTBOUND; > + wrkr->worker_thread = ipsec_wrkr_non_burst_int_port_drvr_mode_outb; > + > nb_wrkr_param++; > return nb_wrkr_param; > } > -- > 2.7.4 ^ permalink raw reply [flat|nested] 147+ messages in thread
* Re: [dpdk-dev] [PATCH 12/14] examples/ipsec-secgw: add driver outbound worker 2019-12-23 17:28 ` Ananyev, Konstantin @ 2020-01-04 10:58 ` Anoob Joseph 2020-01-06 17:46 ` Ananyev, Konstantin 0 siblings, 1 reply; 147+ messages in thread From: Anoob Joseph @ 2020-01-04 10:58 UTC (permalink / raw) To: Ananyev, Konstantin, Akhil Goyal, Nicolau, Radu, Thomas Monjalon Cc: Ankur Dwivedi, Jerin Jacob Kollanukkaran, Narayana Prasad Raju Athreya, Archana Muniganti, Tejasree Kondoj, Vamsi Krishna Attunuru, Lukas Bartosik, dev Hi Konstantin, Please see inline. Thanks, Anoob > -----Original Message----- > From: Ananyev, Konstantin <konstantin.ananyev@intel.com> > Sent: Monday, December 23, 2019 10:58 PM > To: Anoob Joseph <anoobj@marvell.com>; Akhil Goyal > <akhil.goyal@nxp.com>; Nicolau, Radu <radu.nicolau@intel.com>; Thomas > Monjalon <thomas@monjalon.net> > Cc: Ankur Dwivedi <adwivedi@marvell.com>; Jerin Jacob Kollanukkaran > <jerinj@marvell.com>; Narayana Prasad Raju Athreya > <pathreya@marvell.com>; Archana Muniganti <marchana@marvell.com>; > Tejasree Kondoj <ktejasree@marvell.com>; Vamsi Krishna Attunuru > <vattunuru@marvell.com>; Lukas Bartosik <lbartosik@marvell.com>; > dev@dpdk.org > Subject: [EXT] RE: [PATCH 12/14] examples/ipsec-secgw: add driver > outbound worker > > External Email > > ---------------------------------------------------------------------- > > > This patch adds the driver outbound worker thread for ipsec-secgw. > > In this mode the security session is a fixed one and sa update is not > > done. 
> > > > Signed-off-by: Ankur Dwivedi <adwivedi@marvell.com> > > Signed-off-by: Anoob Joseph <anoobj@marvell.com> > > Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> > > --- > > examples/ipsec-secgw/ipsec-secgw.c | 12 +++++ > > examples/ipsec-secgw/ipsec.c | 9 ++++ > > examples/ipsec-secgw/ipsec_worker.c | 90 > > ++++++++++++++++++++++++++++++++++++- > > 3 files changed, 110 insertions(+), 1 deletion(-) > > > > diff --git a/examples/ipsec-secgw/ipsec-secgw.c > > b/examples/ipsec-secgw/ipsec-secgw.c > > index 2e7d4d8..76719f2 100644 > > --- a/examples/ipsec-secgw/ipsec-secgw.c > > +++ b/examples/ipsec-secgw/ipsec-secgw.c > > @@ -2011,6 +2011,18 @@ cryptodevs_init(void) > > i++; > > } > > > > + /* > > + * Set the queue pair to at least the number of ethernet > > + * devices for inline outbound. > > + */ > > + qp = RTE_MAX(rte_eth_dev_count_avail(), qp); > > > Not sure, what for? > Why we can't process packets from several eth devs on the same crypto-dev > queue? [Anoob] This is because of a limitation in our hardware. In our hardware, it's the crypto queue pair which would be submitting to the ethernet queue for Tx. But in DPDK spec, the security processing is done by the ethernet PMD Tx routine alone. We manage to do this by sharing the crypto queue internally. The crypto queues initialized during crypto_configure() gets mapped to various ethernet ports. Because of this, we need to have atleast as many crypto queues as the number of eth ports. The above change is required because here we limit the number of crypto qps based on the number of cores etc. So when tried on single core, the qps get limited to 1, which causes session_create() to fail for all ports other than the first one. 
> > > + > > + /* > > + * The requested number of queues should never exceed > > + * the max available > > + */ > > + qp = RTE_MIN(qp, max_nb_qps); > > + > > if (qp == 0) > > continue; > > > > diff --git a/examples/ipsec-secgw/ipsec.c > > b/examples/ipsec-secgw/ipsec.c index e529f68..9ff8a63 100644 > > --- a/examples/ipsec-secgw/ipsec.c > > +++ b/examples/ipsec-secgw/ipsec.c > > @@ -141,6 +141,10 @@ create_lookaside_session(struct ipsec_ctx > *ipsec_ctx, struct ipsec_sa *sa, > > return 0; > > } > > > > +uint16_t sa_no; > > +#define MAX_FIXED_SESSIONS 10 > > +struct rte_security_session *sec_session_fixed[MAX_FIXED_SESSIONS]; > > + > > int > > create_inline_session(struct socket_ctx *skt_ctx, struct ipsec_sa *sa, > > struct rte_ipsec_session *ips) > > @@ -401,6 +405,11 @@ create_inline_session(struct socket_ctx *skt_ctx, > > struct ipsec_sa *sa, > > > > ips->security.ol_flags = sec_cap->ol_flags; > > ips->security.ctx = sec_ctx; > > + if (sa_no < MAX_FIXED_SESSIONS) { > > + sec_session_fixed[sa_no] = > > + ipsec_get_primary_session(sa)- > >security.ses; > > + sa_no++; > > + } > > } > > Totally lost what is the purpose of these changes... > Why first 10 inline-proto are special and need to be saved inside global array > (sec_session_fixed)? > Why later, in ipsec_worker.c this array is referenced by eth port_id? > What would happen if number of inline-proto sessions is less than number of > eth ports? [Anoob] This is required for the outbound driver mode. The 'driver mode' is more like 'single_sa' mode of the existing application. The idea is to skip all the lookups etc done in the s/w and perform ipsec processing fully in h/w. In outbound, following is roughly what we should do for driver mode, pkt = rx_burst(); /* set_pkt_metadata() */ pkt-> udata64 = session; tx_burst(pkt); The session is created on eth ports. And so, if we have single SA, then the entire traffic will have to be forwarded on the same port. 
The above change is to make sure we could send traffic on all ports. Currently we just use the first 10 SAs and save it in the array. So the user has to set the conf properly and make sure the SAs are distributed such. Will update this to save the first parsed outbound SA for a port in the array. That way the size of the array will be RTE_MAX_ETHPORTS. Is the above approach fine? > > > set_cdev_id: > > diff --git a/examples/ipsec-secgw/ipsec_worker.c > > b/examples/ipsec-secgw/ipsec_worker.c > > index 2af9475..e202277 100644 > > --- a/examples/ipsec-secgw/ipsec_worker.c > > +++ b/examples/ipsec-secgw/ipsec_worker.c > > @@ -263,7 +263,7 @@ process_ipsec_ev_inbound(struct ipsec_ctx *ctx, > struct route_table *rt, > > */ > > > > /* Workers registered */ > > -#define IPSEC_EVENTMODE_WORKERS 2 > > +#define IPSEC_EVENTMODE_WORKERS 3 > > > > /* > > * Event mode worker > > @@ -423,6 +423,84 @@ > ipsec_wrkr_non_burst_int_port_app_mode_inb(struct eh_event_link_info > *links, > > return; > > } > > > > +/* > > + * Event mode worker > > + * Operating parameters : non-burst - Tx internal port - driver mode > > +- outbound */ extern struct rte_security_session > > +*sec_session_fixed[]; static void > > +ipsec_wrkr_non_burst_int_port_drvr_mode_outb(struct > eh_event_link_info *links, > > + uint8_t nb_links) > > +{ > > + unsigned int nb_rx = 0; > > + struct rte_mbuf *pkt; > > + unsigned int port_id; > > + struct rte_event ev; > > + uint32_t lcore_id; > > + > > + /* Check if we have links registered for this lcore */ > > + if (nb_links == 0) { > > + /* No links registered - exit */ > > + goto exit; > > + } > > + > > + /* Get core ID */ > > + lcore_id = rte_lcore_id(); > > + > > + RTE_LOG(INFO, IPSEC, > > + "Launching event mode worker (non-burst - Tx internal port - > " > > + "driver mode - outbound) on lcore %d\n", lcore_id); > > + > > + /* We have valid links */ > > + > > + /* Check if it's single link */ > > + if (nb_links != 1) { > > + RTE_LOG(INFO, IPSEC, > > + "Multiple links 
not supported. Using first link\n"); > > + } > > + > > + RTE_LOG(INFO, IPSEC, " -- lcoreid=%u event_port_id=%u\n", > lcore_id, > > + links[0].event_port_id); > > + while (!force_quit) { > > + /* Read packet from event queues */ > > + nb_rx = rte_event_dequeue_burst(links[0].eventdev_id, > > + links[0].event_port_id, > > + &ev, /* events */ > > + 1, /* nb_events */ > > + 0 /* timeout_ticks */); > > + > > + if (nb_rx == 0) > > + continue; > > + > > + port_id = ev.queue_id; > > + pkt = ev.mbuf; > > + > > + rte_prefetch0(rte_pktmbuf_mtod(pkt, void *)); > > + > > + /* Process packet */ > > + ipsec_event_pre_forward(pkt, port_id); > > + > > + pkt->udata64 = (uint64_t) sec_session_fixed[port_id]; > > + > > + /* Mark the packet for Tx security offload */ > > + pkt->ol_flags |= PKT_TX_SEC_OFFLOAD; > > + > > + /* > > + * Since tx internal port is available, events can be > > + * directly enqueued to the adapter and it would be > > + * internally submitted to the eth device. > > + */ > > + rte_event_eth_tx_adapter_enqueue(links[0].eventdev_id, > > + links[0].event_port_id, > > + &ev, /* events */ > > + 1, /* nb_events */ > > + 0 /* flags */); > > + } > > + > > +exit: > > + return; > > +} > > + > > static uint8_t > > ipsec_eventmode_populate_wrkr_params(struct > eh_app_worker_params > > *wrkrs) { @@ -449,6 +527,16 @@ > > ipsec_eventmode_populate_wrkr_params(struct eh_app_worker_params > *wrkrs) > > wrkr->cap.ipsec_dir = EH_IPSEC_DIR_TYPE_INBOUND; > > wrkr->worker_thread = > ipsec_wrkr_non_burst_int_port_app_mode_inb; > > > > + wrkr++; > > + nb_wrkr_param++; > > + > > + /* Non-burst - Tx internal port - driver mode - outbound */ > > + wrkr->cap.burst = EH_RX_TYPE_NON_BURST; > > + wrkr->cap.tx_internal_port = EH_TX_TYPE_INTERNAL_PORT; > > + wrkr->cap.ipsec_mode = EH_IPSEC_MODE_TYPE_DRIVER; > > + wrkr->cap.ipsec_dir = EH_IPSEC_DIR_TYPE_OUTBOUND; > > + wrkr->worker_thread = > ipsec_wrkr_non_burst_int_port_drvr_mode_outb; > > + > > nb_wrkr_param++; > > return nb_wrkr_param; > > } > 
> -- > > 2.7.4
* Re: [dpdk-dev] [PATCH 12/14] examples/ipsec-secgw: add driver outbound worker 2020-01-04 10:58 ` Anoob Joseph @ 2020-01-06 17:46 ` Ananyev, Konstantin 2020-01-07 4:32 ` Anoob Joseph 0 siblings, 1 reply; 147+ messages in thread From: Ananyev, Konstantin @ 2020-01-06 17:46 UTC (permalink / raw) To: Anoob Joseph, Akhil Goyal, Nicolau, Radu, Thomas Monjalon Cc: Ankur Dwivedi, Jerin Jacob Kollanukkaran, Narayana Prasad Raju Athreya, Archana Muniganti, Tejasree Kondoj, Vamsi Krishna Attunuru, Lukas Bartosik, dev > > > This patch adds the driver outbound worker thread for ipsec-secgw. > > > In this mode the security session is a fixed one and sa update is not > > > done. > > > > > > Signed-off-by: Ankur Dwivedi <adwivedi@marvell.com> > > > Signed-off-by: Anoob Joseph <anoobj@marvell.com> > > > Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> > > > --- > > > examples/ipsec-secgw/ipsec-secgw.c | 12 +++++ > > > examples/ipsec-secgw/ipsec.c | 9 ++++ > > > examples/ipsec-secgw/ipsec_worker.c | 90 > > > ++++++++++++++++++++++++++++++++++++- > > > 3 files changed, 110 insertions(+), 1 deletion(-) > > > > > > diff --git a/examples/ipsec-secgw/ipsec-secgw.c > > > b/examples/ipsec-secgw/ipsec-secgw.c > > > index 2e7d4d8..76719f2 100644 > > > --- a/examples/ipsec-secgw/ipsec-secgw.c > > > +++ b/examples/ipsec-secgw/ipsec-secgw.c > > > @@ -2011,6 +2011,18 @@ cryptodevs_init(void) > > > i++; > > > } > > > > > > + /* > > > + * Set the queue pair to at least the number of ethernet > > > + * devices for inline outbound. > > > + */ > > > + qp = RTE_MAX(rte_eth_dev_count_avail(), qp); > > > > > > Not sure, what for? > > Why we can't process packets from several eth devs on the same crypto-dev > > queue? > > [Anoob] This is because of a limitation in our hardware. In our hardware, it's the crypto queue pair which would be submitting to the > ethernet queue for Tx. But in DPDK spec, the security processing is done by the ethernet PMD Tx routine alone. 
We manage to do this by > sharing the crypto queue internally. The crypto queues initialized during crypto_configure() gets mapped to various ethernet ports. Because > of this, we need to have atleast as many crypto queues as the number of eth ports. Ok, but that breaks current behavior. Right now in poll-mode it is possible to map traffic from N eth-devs to M crypto-devs (N>= M, by using M lcores). Would prefer to keep this functionality in place. > > The above change is required because here we limit the number of crypto qps based on the number of cores etc. So when tried on single > core, the qps get limited to 1, which causes session_create() to fail for all ports other than the first one. > > > > > > + > > > + /* > > > + * The requested number of queues should never exceed > > > + * the max available > > > + */ > > > + qp = RTE_MIN(qp, max_nb_qps); > > > + > > > if (qp == 0) > > > continue; > > > > > > diff --git a/examples/ipsec-secgw/ipsec.c > > > b/examples/ipsec-secgw/ipsec.c index e529f68..9ff8a63 100644 > > > --- a/examples/ipsec-secgw/ipsec.c > > > +++ b/examples/ipsec-secgw/ipsec.c > > > @@ -141,6 +141,10 @@ create_lookaside_session(struct ipsec_ctx > > *ipsec_ctx, struct ipsec_sa *sa, > > > return 0; > > > } > > > > > > +uint16_t sa_no; > > > +#define MAX_FIXED_SESSIONS 10 > > > +struct rte_security_session *sec_session_fixed[MAX_FIXED_SESSIONS]; > > > + > > > int > > > create_inline_session(struct socket_ctx *skt_ctx, struct ipsec_sa *sa, > > > struct rte_ipsec_session *ips) > > > @@ -401,6 +405,11 @@ create_inline_session(struct socket_ctx *skt_ctx, > > > struct ipsec_sa *sa, > > > > > > ips->security.ol_flags = sec_cap->ol_flags; > > > ips->security.ctx = sec_ctx; > > > + if (sa_no < MAX_FIXED_SESSIONS) { > > > + sec_session_fixed[sa_no] = > > > + ipsec_get_primary_session(sa)- > > >security.ses; > > > + sa_no++; > > > + } > > > } > > > > Totally lost what is the purpose of these changes... 
> > Why first 10 inline-proto are special and need to be saved inside global array > > (sec_session_fixed)? > > Why later, in ipsec_worker.c this array is referenced by eth port_id? > > What would happen if number of inline-proto sessions is less than number of > > eth ports? > > [Anoob] This is required for the outbound driver mode. The 'driver mode' is more like 'single_sa' mode of the existing application. The idea > is to skip all the lookups etc done in the s/w and perform ipsec processing fully in h/w. In outbound, following is roughly what we should do > for driver mode, > > pkt = rx_burst(); > > /* set_pkt_metadata() */ > pkt-> udata64 = session; > > tx_burst(pkt); > > The session is created on eth ports. And so, if we have single SA, then the entire traffic will have to be forwarded on the same port. The > above change is to make sure we could send traffic on all ports. > > Currently we just use the first 10 SAs and save it in the array. So the user has to set the conf properly and make sure the SAs are distributed > such. Will update this to save the first parsed outbound SA for a port in the array. That way the size of the array will be > RTE_MAX_ETHPORTS. Ok, then if it is for a specific case (event-mode + single-sa mode) then in create_inline_session we probably shouldn't do it always, but only when this mode is selected. Also wouldn't it be better to reuse the current single-sa cmd-line option and logic? I.e. when event-mode and single-sa is selected, go through all eth-devs and for each do create_inline_session() with the SA that corresponds to single_sa_idx? Then, I think create_inline_session() can be kept intact.
> > > > > > set_cdev_id: > > > diff --git a/examples/ipsec-secgw/ipsec_worker.c > > > b/examples/ipsec-secgw/ipsec_worker.c > > > index 2af9475..e202277 100644 > > > --- a/examples/ipsec-secgw/ipsec_worker.c > > > +++ b/examples/ipsec-secgw/ipsec_worker.c > > > @@ -263,7 +263,7 @@ process_ipsec_ev_inbound(struct ipsec_ctx *ctx, > > struct route_table *rt, > > > */ > > > > > > /* Workers registered */ > > > -#define IPSEC_EVENTMODE_WORKERS 2 > > > +#define IPSEC_EVENTMODE_WORKERS 3 > > > > > > /* > > > * Event mode worker > > > @@ -423,6 +423,84 @@ > > ipsec_wrkr_non_burst_int_port_app_mode_inb(struct eh_event_link_info > > *links, > > > return; > > > } > > > > > > +/* > > > + * Event mode worker > > > + * Operating parameters : non-burst - Tx internal port - driver mode > > > +- outbound */ extern struct rte_security_session > > > +*sec_session_fixed[]; static void > > > +ipsec_wrkr_non_burst_int_port_drvr_mode_outb(struct > > eh_event_link_info *links, > > > + uint8_t nb_links) > > > +{ > > > + unsigned int nb_rx = 0; > > > + struct rte_mbuf *pkt; > > > + unsigned int port_id; > > > + struct rte_event ev; > > > + uint32_t lcore_id; > > > + > > > + /* Check if we have links registered for this lcore */ > > > + if (nb_links == 0) { > > > + /* No links registered - exit */ > > > + goto exit; > > > + } > > > + > > > + /* Get core ID */ > > > + lcore_id = rte_lcore_id(); > > > + > > > + RTE_LOG(INFO, IPSEC, > > > + "Launching event mode worker (non-burst - Tx internal port - > > " > > > + "driver mode - outbound) on lcore %d\n", lcore_id); > > > + > > > + /* We have valid links */ > > > + > > > + /* Check if it's single link */ > > > + if (nb_links != 1) { > > > + RTE_LOG(INFO, IPSEC, > > > + "Multiple links not supported. 
Using first link\n"); > > > + } > > > + > > > + RTE_LOG(INFO, IPSEC, " -- lcoreid=%u event_port_id=%u\n", > > lcore_id, > > > + links[0].event_port_id); > > > + while (!force_quit) { > > > + /* Read packet from event queues */ > > > + nb_rx = rte_event_dequeue_burst(links[0].eventdev_id, > > > + links[0].event_port_id, > > > + &ev, /* events */ > > > + 1, /* nb_events */ > > > + 0 /* timeout_ticks */); > > > + > > > + if (nb_rx == 0) > > > + continue; > > > + > > > + port_id = ev.queue_id; > > > + pkt = ev.mbuf; > > > + > > > + rte_prefetch0(rte_pktmbuf_mtod(pkt, void *)); > > > + > > > + /* Process packet */ > > > + ipsec_event_pre_forward(pkt, port_id); > > > + > > > + pkt->udata64 = (uint64_t) sec_session_fixed[port_id]; > > > + > > > + /* Mark the packet for Tx security offload */ > > > + pkt->ol_flags |= PKT_TX_SEC_OFFLOAD; > > > + > > > + /* > > > + * Since tx internal port is available, events can be > > > + * directly enqueued to the adapter and it would be > > > + * internally submitted to the eth device. 
> > > + */ > > > + rte_event_eth_tx_adapter_enqueue(links[0].eventdev_id, > > > + links[0].event_port_id, > > > + &ev, /* events */ > > > + 1, /* nb_events */ > > > + 0 /* flags */); > > > + } > > > + > > > +exit: > > > + return; > > > +} > > > + > > > static uint8_t > > > ipsec_eventmode_populate_wrkr_params(struct > > eh_app_worker_params > > > *wrkrs) { @@ -449,6 +527,16 @@ > > > ipsec_eventmode_populate_wrkr_params(struct eh_app_worker_params > > *wrkrs) > > > wrkr->cap.ipsec_dir = EH_IPSEC_DIR_TYPE_INBOUND; > > > wrkr->worker_thread = > > ipsec_wrkr_non_burst_int_port_app_mode_inb; > > > > > > + wrkr++; > > > + nb_wrkr_param++; > > > + > > > + /* Non-burst - Tx internal port - driver mode - outbound */ > > > + wrkr->cap.burst = EH_RX_TYPE_NON_BURST; > > > + wrkr->cap.tx_internal_port = EH_TX_TYPE_INTERNAL_PORT; > > > + wrkr->cap.ipsec_mode = EH_IPSEC_MODE_TYPE_DRIVER; > > > + wrkr->cap.ipsec_dir = EH_IPSEC_DIR_TYPE_OUTBOUND; > > > + wrkr->worker_thread = > > ipsec_wrkr_non_burst_int_port_drvr_mode_outb; > > > + > > > nb_wrkr_param++; > > > return nb_wrkr_param; > > > } > > > -- > > > 2.7.4 ^ permalink raw reply [flat|nested] 147+ messages in thread
* Re: [dpdk-dev] [PATCH 12/14] examples/ipsec-secgw: add driver outbound worker 2020-01-06 17:46 ` Ananyev, Konstantin @ 2020-01-07 4:32 ` Anoob Joseph 2020-01-07 14:30 ` Ananyev, Konstantin 0 siblings, 1 reply; 147+ messages in thread From: Anoob Joseph @ 2020-01-07 4:32 UTC (permalink / raw) To: Ananyev, Konstantin, Akhil Goyal, Nicolau, Radu, Thomas Monjalon Cc: Ankur Dwivedi, Jerin Jacob Kollanukkaran, Narayana Prasad Raju Athreya, Archana Muniganti, Tejasree Kondoj, Vamsi Krishna Attunuru, Lukas Bartosik, dev Hi Konstantin, Please see inline. Thanks, Anoob > -----Original Message----- > From: dev <dev-bounces@dpdk.org> On Behalf Of Ananyev, Konstantin > Sent: Monday, January 6, 2020 11:16 PM > To: Anoob Joseph <anoobj@marvell.com>; Akhil Goyal <akhil.goyal@nxp.com>; > Nicolau, Radu <radu.nicolau@intel.com>; Thomas Monjalon > <thomas@monjalon.net> > Cc: Ankur Dwivedi <adwivedi@marvell.com>; Jerin Jacob Kollanukkaran > <jerinj@marvell.com>; Narayana Prasad Raju Athreya > <pathreya@marvell.com>; Archana Muniganti <marchana@marvell.com>; > Tejasree Kondoj <ktejasree@marvell.com>; Vamsi Krishna Attunuru > <vattunuru@marvell.com>; Lukas Bartosik <lbartosik@marvell.com>; > dev@dpdk.org > Subject: Re: [dpdk-dev] [PATCH 12/14] examples/ipsec-secgw: add driver > outbound worker > > > > > This patch adds the driver outbound worker thread for ipsec-secgw. > > > > In this mode the security session is a fixed one and sa update is > > > > not done. 
> > > > > > > > Signed-off-by: Ankur Dwivedi <adwivedi@marvell.com> > > > > Signed-off-by: Anoob Joseph <anoobj@marvell.com> > > > > Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> > > > > --- > > > > examples/ipsec-secgw/ipsec-secgw.c | 12 +++++ > > > > examples/ipsec-secgw/ipsec.c | 9 ++++ > > > > examples/ipsec-secgw/ipsec_worker.c | 90 > > > > ++++++++++++++++++++++++++++++++++++- > > > > 3 files changed, 110 insertions(+), 1 deletion(-) > > > > > > > > diff --git a/examples/ipsec-secgw/ipsec-secgw.c > > > > b/examples/ipsec-secgw/ipsec-secgw.c > > > > index 2e7d4d8..76719f2 100644 > > > > --- a/examples/ipsec-secgw/ipsec-secgw.c > > > > +++ b/examples/ipsec-secgw/ipsec-secgw.c > > > > @@ -2011,6 +2011,18 @@ cryptodevs_init(void) > > > > i++; > > > > } > > > > > > > > + /* > > > > + * Set the queue pair to at least the number of ethernet > > > > + * devices for inline outbound. > > > > + */ > > > > + qp = RTE_MAX(rte_eth_dev_count_avail(), qp); > > > > > > > > > Not sure, what for? > > > Why we can't process packets from several eth devs on the same > > > crypto-dev queue? > > > > [Anoob] This is because of a limitation in our hardware. In our > > hardware, it's the crypto queue pair which would be submitting to the > > ethernet queue for Tx. But in DPDK spec, the security processing is > > done by the ethernet PMD Tx routine alone. We manage to do this by sharing > the crypto queue internally. The crypto queues initialized during > crypto_configure() get mapped to various ethernet ports. Because of this, we > need to have at least as many crypto queues as the number of eth ports. > > Ok, but that breaks current behavior. > Right now in poll-mode it is possible to map traffic from N eth-devs to M crypto- > devs (N>= M, by using M lcores). > Would prefer to keep this functionality in place. [Anoob] Understood. I don't think that functionality is broken.
If the number of qps available is lower than the number of eth devs, then only the ones available would be enabled. Inline protocol session for the other eth devs would fail for us. Currently, the app assumes that for one core, it needs only one qp (and for M core, M qp). Is there any harm in enabling all qps available? If such a change can be done, that would also work for us. > > > > > The above change is required because here we limit the number of > > crypto qps based on the number of cores etc. So when tried on single core, the > qps get limited to 1, which causes session_create() to fail for all ports other than > the first one. > > > > > > > > > + > > > > + /* > > > > + * The requested number of queues should never exceed > > > > + * the max available > > > > + */ > > > > + qp = RTE_MIN(qp, max_nb_qps); > > > > + > > > > if (qp == 0) > > > > continue; > > > > > > > > diff --git a/examples/ipsec-secgw/ipsec.c > > > > b/examples/ipsec-secgw/ipsec.c index e529f68..9ff8a63 100644 > > > > --- a/examples/ipsec-secgw/ipsec.c > > > > +++ b/examples/ipsec-secgw/ipsec.c > > > > @@ -141,6 +141,10 @@ create_lookaside_session(struct ipsec_ctx > > > *ipsec_ctx, struct ipsec_sa *sa, > > > > return 0; > > > > } > > > > > > > > +uint16_t sa_no; > > > > +#define MAX_FIXED_SESSIONS 10 > > > > +struct rte_security_session > > > > +*sec_session_fixed[MAX_FIXED_SESSIONS]; > > > > + > > > > int > > > > create_inline_session(struct socket_ctx *skt_ctx, struct ipsec_sa *sa, > > > > struct rte_ipsec_session *ips) > > > > @@ -401,6 +405,11 @@ create_inline_session(struct socket_ctx > > > > *skt_ctx, struct ipsec_sa *sa, > > > > > > > > ips->security.ol_flags = sec_cap->ol_flags; > > > > ips->security.ctx = sec_ctx; > > > > + if (sa_no < MAX_FIXED_SESSIONS) { > > > > + sec_session_fixed[sa_no] = > > > > + ipsec_get_primary_session(sa)- > > > >security.ses; > > > > + sa_no++; > > > > + } > > > > } > > > > > > Totally lost what is the purpose of these changes... 
> > > Why first 10 inline-proto are special and need to be saved inside > > > global array (sec_session_fixed)? > > > Why later, in ipsec_worker.c this array is referenced by eth port_id? > > > What would happen if number of inline-proto sessions is less than > > > number of eth ports? > > > > [Anoob] This is required for the outbound driver mode. The 'driver > > mode' is more like 'single_sa' mode of the existing application. The > > idea is to skip all the lookups etc done in the s/w and perform ipsec > > processing fully in h/w. In outbound, following is roughly what we > > should do for driver mode, > > > > pkt = rx_burst(); > > > > /* set_pkt_metadata() */ > > pkt-> udata64 = session; > > > > tx_burst(pkt); > > > > The session is created on eth ports. And so, if we have single SA, > > then the entire traffic will have to be forwarded on the same port. The above > change is to make sure we could send traffic on all ports. > > > > Currently we just use the first 10 SAs and save it in the array. So > > the user has to set the conf properly and make sure the SAs are > > distributed such. Will update this to save the first parsed outbound SA for a > port in the array. That way the size of the array will be RTE_MAX_ETHPORTS. > > Ok, then if it is for a specific case (event-mode + single-sa mode) then in > create_inline_session we probably shouldn't do it always, but only when this > mode is selected. [Anoob] Will make that change. > Also wouldn't it be better to reuse the current single-sa cmd-line option and logic? > I.E. when event-mode and single-sa is selected, go through all eth-devs and for > each do create_inline_session() for the sa that corresponds to single_sa_idx? > Then, I think create_inline_session() can be kept intact. [Anoob] No disagreement. Current single_sa uses single_sa universally. The driver mode intends to use single_sa per port. Technically, just single_sa (universally) will result in the eth port being the bottleneck.
So I can fix the single sa and we can use single_sa option in eventmode as you have described. > > > > > Is the above approach fine? > > > > > > > > > set_cdev_id: > > > > diff --git a/examples/ipsec-secgw/ipsec_worker.c > > > > b/examples/ipsec-secgw/ipsec_worker.c > > > > index 2af9475..e202277 100644 > > > > --- a/examples/ipsec-secgw/ipsec_worker.c > > > > +++ b/examples/ipsec-secgw/ipsec_worker.c > > > > @@ -263,7 +263,7 @@ process_ipsec_ev_inbound(struct ipsec_ctx > > > > *ctx, > > > struct route_table *rt, > > > > */ > > > > > > > > /* Workers registered */ > > > > -#define IPSEC_EVENTMODE_WORKERS 2 > > > > +#define IPSEC_EVENTMODE_WORKERS 3 > > > > > > > > /* > > > > * Event mode worker > > > > @@ -423,6 +423,84 @@ > > > ipsec_wrkr_non_burst_int_port_app_mode_inb(struct eh_event_link_info > > > *links, > > > > return; > > > > } > > > > > > > > +/* > > > > + * Event mode worker > > > > + * Operating parameters : non-burst - Tx internal port - driver > > > > +mode > > > > +- outbound */ extern struct rte_security_session > > > > +*sec_session_fixed[]; static void > > > > +ipsec_wrkr_non_burst_int_port_drvr_mode_outb(struct > > > eh_event_link_info *links, > > > > + uint8_t nb_links) > > > > +{ > > > > + unsigned int nb_rx = 0; > > > > + struct rte_mbuf *pkt; > > > > + unsigned int port_id; > > > > + struct rte_event ev; > > > > + uint32_t lcore_id; > > > > + > > > > + /* Check if we have links registered for this lcore */ > > > > + if (nb_links == 0) { > > > > + /* No links registered - exit */ > > > > + goto exit; > > > > + } > > > > + > > > > + /* Get core ID */ > > > > + lcore_id = rte_lcore_id(); > > > > + > > > > + RTE_LOG(INFO, IPSEC, > > > > + "Launching event mode worker (non-burst - Tx internal port - > > > " > > > > + "driver mode - outbound) on lcore %d\n", lcore_id); > > > > + > > > > + /* We have valid links */ > > > > + > > > > + /* Check if it's single link */ > > > > + if (nb_links != 1) { > > > > + RTE_LOG(INFO, IPSEC, > > > > + "Multiple 
links not supported. Using first link\n"); > > > > + } > > > > + > > > > + RTE_LOG(INFO, IPSEC, " -- lcoreid=%u event_port_id=%u\n", > > > lcore_id, > > > > + links[0].event_port_id); > > > > + while (!force_quit) { > > > > + /* Read packet from event queues */ > > > > + nb_rx = rte_event_dequeue_burst(links[0].eventdev_id, > > > > + links[0].event_port_id, > > > > + &ev, /* events */ > > > > + 1, /* nb_events */ > > > > + 0 /* timeout_ticks */); > > > > + > > > > + if (nb_rx == 0) > > > > + continue; > > > > + > > > > + port_id = ev.queue_id; > > > > + pkt = ev.mbuf; > > > > + > > > > + rte_prefetch0(rte_pktmbuf_mtod(pkt, void *)); > > > > + > > > > + /* Process packet */ > > > > + ipsec_event_pre_forward(pkt, port_id); > > > > + > > > > + pkt->udata64 = (uint64_t) sec_session_fixed[port_id]; > > > > + > > > > + /* Mark the packet for Tx security offload */ > > > > + pkt->ol_flags |= PKT_TX_SEC_OFFLOAD; > > > > + > > > > + /* > > > > + * Since tx internal port is available, events can be > > > > + * directly enqueued to the adapter and it would be > > > > + * internally submitted to the eth device. 
> > > > + */ > > > > + rte_event_eth_tx_adapter_enqueue(links[0].eventdev_id, > > > > + links[0].event_port_id, > > > > + &ev, /* events */ > > > > + 1, /* nb_events */ > > > > + 0 /* flags */); > > > > + } > > > > + > > > > +exit: > > > > + return; > > > > +} > > > > + > > > > static uint8_t > > > > ipsec_eventmode_populate_wrkr_params(struct > > > eh_app_worker_params > > > > *wrkrs) { @@ -449,6 +527,16 @@ > > > > ipsec_eventmode_populate_wrkr_params(struct eh_app_worker_params > > > *wrkrs) > > > > wrkr->cap.ipsec_dir = EH_IPSEC_DIR_TYPE_INBOUND; > > > > wrkr->worker_thread = > > > ipsec_wrkr_non_burst_int_port_app_mode_inb; > > > > > > > > + wrkr++; > > > > + nb_wrkr_param++; > > > > + > > > > + /* Non-burst - Tx internal port - driver mode - outbound */ > > > > + wrkr->cap.burst = EH_RX_TYPE_NON_BURST; > > > > + wrkr->cap.tx_internal_port = EH_TX_TYPE_INTERNAL_PORT; > > > > + wrkr->cap.ipsec_mode = EH_IPSEC_MODE_TYPE_DRIVER; > > > > + wrkr->cap.ipsec_dir = EH_IPSEC_DIR_TYPE_OUTBOUND; > > > > + wrkr->worker_thread = > > > ipsec_wrkr_non_burst_int_port_drvr_mode_outb; > > > > + > > > > nb_wrkr_param++; > > > > return nb_wrkr_param; > > > > } > > > > -- > > > > 2.7.4 ^ permalink raw reply [flat|nested] 147+ messages in thread
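The queue-pair sizing argued over in this message reduces to two clamps: raise the request to at least the number of eth devices (so each port can get a dedicated crypto qp on hardware that maps crypto qps to eth Tx queues), then cap it at what the crypto device supports. A minimal standalone sketch — `pick_nb_qp()` and its parameters are illustrative, with `nb_eth_devs` standing in for `rte_eth_dev_count_avail()`:

```c
#include <assert.h>
#include <stdint.h>

/* Mirrors the patch hunk:
 *   qp = RTE_MAX(rte_eth_dev_count_avail(), qp);
 *   qp = RTE_MIN(qp, max_nb_qps);
 */
static uint16_t
pick_nb_qp(uint16_t qp, uint16_t nb_eth_devs, uint16_t max_nb_qps)
{
	if (qp < nb_eth_devs)
		qp = nb_eth_devs;   /* at least one qp per eth port */
	if (qp > max_nb_qps)
		qp = max_nb_qps;    /* never exceed the device maximum */
	return qp;
}
```

Note the failure mode described above: without the first clamp, a single-lcore run ends up with qp == 1 and inline session creation fails for every port but the first.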
* Re: [dpdk-dev] [PATCH 12/14] examples/ipsec-secgw: add driver outbound worker 2020-01-07 4:32 ` Anoob Joseph @ 2020-01-07 14:30 ` Ananyev, Konstantin 2020-01-09 11:49 ` Anoob Joseph 0 siblings, 1 reply; 147+ messages in thread From: Ananyev, Konstantin @ 2020-01-07 14:30 UTC (permalink / raw) To: Anoob Joseph, Akhil Goyal, Nicolau, Radu, Thomas Monjalon Cc: Ankur Dwivedi, Jerin Jacob Kollanukkaran, Narayana Prasad Raju Athreya, Archana Muniganti, Tejasree Kondoj, Vamsi Krishna Attunuru, Lukas Bartosik, dev > > > > > This patch adds the driver outbound worker thread for ipsec-secgw. > > > > > In this mode the security session is a fixed one and sa update is > > > > > not done. > > > > > > > > > > Signed-off-by: Ankur Dwivedi <adwivedi@marvell.com> > > > > > Signed-off-by: Anoob Joseph <anoobj@marvell.com> > > > > > Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> > > > > > --- > > > > > examples/ipsec-secgw/ipsec-secgw.c | 12 +++++ > > > > > examples/ipsec-secgw/ipsec.c | 9 ++++ > > > > > examples/ipsec-secgw/ipsec_worker.c | 90 > > > > > ++++++++++++++++++++++++++++++++++++- > > > > > 3 files changed, 110 insertions(+), 1 deletion(-) > > > > > > > > > > diff --git a/examples/ipsec-secgw/ipsec-secgw.c > > > > > b/examples/ipsec-secgw/ipsec-secgw.c > > > > > index 2e7d4d8..76719f2 100644 > > > > > --- a/examples/ipsec-secgw/ipsec-secgw.c > > > > > +++ b/examples/ipsec-secgw/ipsec-secgw.c > > > > > @@ -2011,6 +2011,18 @@ cryptodevs_init(void) > > > > > i++; > > > > > } > > > > > > > > > > + /* > > > > > + * Set the queue pair to at least the number of ethernet > > > > > + * devices for inline outbound. > > > > > + */ > > > > > + qp = RTE_MAX(rte_eth_dev_count_avail(), qp); > > > > > > > > > > > > Not sure, what for? > > > > Why we can't process packets from several eth devs on the same > > > > crypto-dev queue? > > > > > > [Anoob] This is because of a limitation in our hardware. 
In our > > > hardware, it's the crypto queue pair which would be submitting to the > > > ethernet queue for Tx. But in DPDK spec, the security processing is > > > done by the ethernet PMD Tx routine alone. We manage to do this by sharing > > the crypto queue internally. The crypto queues initialized during > > crypto_configure() gets mapped to various ethernet ports. Because of this, we > > need to have atleast as many crypto queues as the number of eth ports. > > > > Ok, but that breaks current behavior. > > Right now in poll-mode it is possible to map traffic from N eth-devs to M crypto- > > devs (N>= M, by using M lcores). > > Would prefer to keep this functionality in place. > > [Anoob] Understood. I don't think that functionality is broken. If the number of qps available is lower than the number of eth devs, > then only the ones available would be enabled. Inline protocol session for the other eth devs would fail for us. > > Currently, the app assumes that for one core, it needs only one qp (and for M core, M qp). Is there any harm in enabling all qps > available? If such a change can be done, that would also work for us. Hmm, I suppose it could cause some problems with some corner-cases: if we'll have crypto-dev with really big number of max_queues. In that case it might require a lot of extra memory for cryptodev_configure/queue_pair_setup. Probably the easiest way to deal with it: - add req_queue_num parameter for cryptodevs_init() And then do: qp =RTE_MIN(max_nb_qps, RTE_MAX(req_queue_num, qp)); - for poll mode we'll call cryptodevs_init(0), for your case it could be cryptodevs_init(rte_eth_dev_count_avail()). Would it work for your case? > > > > > > > > The above change is required because here we limit the number of > > > crypto qps based on the number of cores etc. So when tried on single core, the > > qps get limited to 1, which causes session_create() to fail for all ports other than > > the first one. 
> > > > > > > > > > > > + > > > > > + /* > > > > > + * The requested number of queues should never exceed > > > > > + * the max available > > > > > + */ > > > > > + qp = RTE_MIN(qp, max_nb_qps); > > > > > + > > > > > if (qp == 0) > > > > > continue; > > > > > > > > > > diff --git a/examples/ipsec-secgw/ipsec.c > > > > > b/examples/ipsec-secgw/ipsec.c index e529f68..9ff8a63 100644 > > > > > --- a/examples/ipsec-secgw/ipsec.c > > > > > +++ b/examples/ipsec-secgw/ipsec.c > > > > > @@ -141,6 +141,10 @@ create_lookaside_session(struct ipsec_ctx > > > > *ipsec_ctx, struct ipsec_sa *sa, > > > > > return 0; > > > > > } > > > > > > > > > > +uint16_t sa_no; > > > > > +#define MAX_FIXED_SESSIONS 10 > > > > > +struct rte_security_session > > > > > +*sec_session_fixed[MAX_FIXED_SESSIONS]; > > > > > + > > > > > int > > > > > create_inline_session(struct socket_ctx *skt_ctx, struct ipsec_sa *sa, > > > > > struct rte_ipsec_session *ips) > > > > > @@ -401,6 +405,11 @@ create_inline_session(struct socket_ctx > > > > > *skt_ctx, struct ipsec_sa *sa, > > > > > > > > > > ips->security.ol_flags = sec_cap->ol_flags; > > > > > ips->security.ctx = sec_ctx; > > > > > + if (sa_no < MAX_FIXED_SESSIONS) { > > > > > + sec_session_fixed[sa_no] = > > > > > + ipsec_get_primary_session(sa)- > > > > >security.ses; > > > > > + sa_no++; > > > > > + } > > > > > } > > > > > > > > Totally lost what is the purpose of these changes... > > > > Why first 10 inline-proto are special and need to be saved inside > > > > global array (sec_session_fixed)? > > > > Why later, in ipsec_worker.c this array is referenced by eth port_id? > > > > What would happen if number of inline-proto sessions is less than > > > > number of eth ports? > > > > > > [Anoob] This is required for the outbound driver mode. The 'driver > > > mode' is more like 'single_sa' mode of the existing application. The > > > idea is to skip all the lookups etc done in the s/w and perform ipsec > > > processing fully in h/w. 
In outbound, following is roughly what we > > > should do for driver mode, > > > > > > pkt = rx_burst(); > > > > > > /* set_pkt_metadata() */ > > > pkt-> udata64 = session; > > > > > > tx_burst(pkt); > > > > > > The session is created on eth ports. And so, if we have single SA, > > > then the entire traffic will have to be forwarded on the same port. The above > > change is to make sure we could send traffic on all ports. > > > > > > Currently we just use the first 10 SAs and save it in the array. So > > > the user has to set the conf properly and make sure the SAs are > > > distributed such. Will update this to save the first parsed outbound SA for a > > port in the array. That way the size of the array will be RTE_MAX_ETHPORTS. > > > > Ok, then if it is for specific case (event-mode + sing-sa mode) then in > > create_inline_session we probably shouldn't do it always, but only when this > > mode is selected. > > [Anoob] Will make that change. > > > Also wouldn't it better to reuse current single-sa cmd-line option and logic? > > I.E. whe event-mode and single-sa is selected, go though all eth-devs and for > > each do create_inline_session() with for sa that corresponds to sing_sa_idx? > > Then, I think create_inline_session() can be kept intact. > > [Anoob] No disagreement. Current single_sa uses single_sa universally. The driver mode intends to use single_sa per port. > Technically, just single_sa (universally) will result in the eth port being the bottleneck. So I can fix the single sa and we can use > single_sa option in eventmode as you have described. > > > > > > > > > Is the above approach fine? 
> > > > > > > > > > > > set_cdev_id: > > > > > diff --git a/examples/ipsec-secgw/ipsec_worker.c > > > > > b/examples/ipsec-secgw/ipsec_worker.c > > > > > index 2af9475..e202277 100644 > > > > > --- a/examples/ipsec-secgw/ipsec_worker.c > > > > > +++ b/examples/ipsec-secgw/ipsec_worker.c > > > > > @@ -263,7 +263,7 @@ process_ipsec_ev_inbound(struct ipsec_ctx > > > > > *ctx, > > > > struct route_table *rt, > > > > > */ > > > > > > > > > > /* Workers registered */ > > > > > -#define IPSEC_EVENTMODE_WORKERS 2 > > > > > +#define IPSEC_EVENTMODE_WORKERS 3 > > > > > > > > > > /* > > > > > * Event mode worker > > > > > @@ -423,6 +423,84 @@ > > > > ipsec_wrkr_non_burst_int_port_app_mode_inb(struct eh_event_link_info > > > > *links, > > > > > return; > > > > > } > > > > > > > > > > +/* > > > > > + * Event mode worker > > > > > + * Operating parameters : non-burst - Tx internal port - driver > > > > > +mode > > > > > +- outbound */ extern struct rte_security_session > > > > > +*sec_session_fixed[]; static void > > > > > +ipsec_wrkr_non_burst_int_port_drvr_mode_outb(struct > > > > eh_event_link_info *links, > > > > > + uint8_t nb_links) > > > > > +{ > > > > > + unsigned int nb_rx = 0; > > > > > + struct rte_mbuf *pkt; > > > > > + unsigned int port_id; > > > > > + struct rte_event ev; > > > > > + uint32_t lcore_id; > > > > > + > > > > > + /* Check if we have links registered for this lcore */ > > > > > + if (nb_links == 0) { > > > > > + /* No links registered - exit */ > > > > > + goto exit; > > > > > + } > > > > > + > > > > > + /* Get core ID */ > > > > > + lcore_id = rte_lcore_id(); > > > > > + > > > > > + RTE_LOG(INFO, IPSEC, > > > > > + "Launching event mode worker (non-burst - Tx internal port - > > > > " > > > > > + "driver mode - outbound) on lcore %d\n", lcore_id); > > > > > + > > > > > + /* We have valid links */ > > > > > + > > > > > + /* Check if it's single link */ > > > > > + if (nb_links != 1) { > > > > > + RTE_LOG(INFO, IPSEC, > > > > > + "Multiple links not 
supported. Using first link\n"); > > > > > + } > > > > > + > > > > > + RTE_LOG(INFO, IPSEC, " -- lcoreid=%u event_port_id=%u\n", > > > > lcore_id, > > > > > + links[0].event_port_id); > > > > > + while (!force_quit) { > > > > > + /* Read packet from event queues */ > > > > > + nb_rx = rte_event_dequeue_burst(links[0].eventdev_id, > > > > > + links[0].event_port_id, > > > > > + &ev, /* events */ > > > > > + 1, /* nb_events */ > > > > > + 0 /* timeout_ticks */); > > > > > + > > > > > + if (nb_rx == 0) > > > > > + continue; > > > > > + > > > > > + port_id = ev.queue_id; > > > > > + pkt = ev.mbuf; > > > > > + > > > > > + rte_prefetch0(rte_pktmbuf_mtod(pkt, void *)); > > > > > + > > > > > + /* Process packet */ > > > > > + ipsec_event_pre_forward(pkt, port_id); > > > > > + > > > > > + pkt->udata64 = (uint64_t) sec_session_fixed[port_id]; > > > > > + > > > > > + /* Mark the packet for Tx security offload */ > > > > > + pkt->ol_flags |= PKT_TX_SEC_OFFLOAD; > > > > > + > > > > > + /* > > > > > + * Since tx internal port is available, events can be > > > > > + * directly enqueued to the adapter and it would be > > > > > + * internally submitted to the eth device. 
> > > > > + */ > > > > > + rte_event_eth_tx_adapter_enqueue(links[0].eventdev_id, > > > > > + links[0].event_port_id, > > > > > + &ev, /* events */ > > > > > + 1, /* nb_events */ > > > > > + 0 /* flags */); > > > > > + } > > > > > + > > > > > +exit: > > > > > + return; > > > > > +} > > > > > + > > > > > static uint8_t > > > > > ipsec_eventmode_populate_wrkr_params(struct > > > > eh_app_worker_params > > > > > *wrkrs) { @@ -449,6 +527,16 @@ > > > > > ipsec_eventmode_populate_wrkr_params(struct eh_app_worker_params > > > > *wrkrs) > > > > > wrkr->cap.ipsec_dir = EH_IPSEC_DIR_TYPE_INBOUND; > > > > > wrkr->worker_thread = > > > > ipsec_wrkr_non_burst_int_port_app_mode_inb; > > > > > > > > > > + wrkr++; > > > > > + nb_wrkr_param++; > > > > > + > > > > > + /* Non-burst - Tx internal port - driver mode - outbound */ > > > > > + wrkr->cap.burst = EH_RX_TYPE_NON_BURST; > > > > > + wrkr->cap.tx_internal_port = EH_TX_TYPE_INTERNAL_PORT; > > > > > + wrkr->cap.ipsec_mode = EH_IPSEC_MODE_TYPE_DRIVER; > > > > > + wrkr->cap.ipsec_dir = EH_IPSEC_DIR_TYPE_OUTBOUND; > > > > > + wrkr->worker_thread = > > > > ipsec_wrkr_non_burst_int_port_drvr_mode_outb; > > > > > + > > > > > nb_wrkr_param++; > > > > > return nb_wrkr_param; > > > > > } > > > > > -- > > > > > 2.7.4 ^ permalink raw reply [flat|nested] 147+ messages in thread
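Konstantin's counter-proposal in this message parameterizes the same sizing instead of hard-coding the eth-device count: callers pass a minimum queue requirement to `cryptodevs_init()`, with poll mode passing 0 (preserving current behaviour) and event mode passing `rte_eth_dev_count_avail()`. A sketch of the suggested expression, with a hypothetical helper name:

```c
#include <assert.h>
#include <stdint.h>

/* Suggested form: qp = RTE_MIN(max_nb_qps, RTE_MAX(req_queue_num, qp));
 * req_queue_num == 0 leaves the poll-mode request untouched. */
static uint16_t
pick_nb_qp_req(uint16_t qp, uint16_t req_queue_num, uint16_t max_nb_qps)
{
	uint16_t want = qp > req_queue_num ? qp : req_queue_num; /* RTE_MAX */
	return want < max_nb_qps ? want : max_nb_qps;            /* RTE_MIN */
}
```

This keeps the N-eth-devs-to-M-crypto-qps poll-mode mapping intact while still guaranteeing one qp per port when event mode asks for it, and avoids over-allocating on crypto devices with a very large max_queues.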
* Re: [dpdk-dev] [PATCH 12/14] examples/ipsec-secgw: add driver outbound worker 2020-01-07 14:30 ` Ananyev, Konstantin @ 2020-01-09 11:49 ` Anoob Joseph 0 siblings, 0 replies; 147+ messages in thread From: Anoob Joseph @ 2020-01-09 11:49 UTC (permalink / raw) To: Ananyev, Konstantin, Akhil Goyal, Nicolau, Radu, Thomas Monjalon Cc: Ankur Dwivedi, Jerin Jacob Kollanukkaran, Narayana Prasad Raju Athreya, Archana Muniganti, Tejasree Kondoj, Vamsi Krishna Attunuru, Lukas Bartosik, dev Hi Konstantin, Please see inline. Thanks, Anoob > -----Original Message----- > From: dev <dev-bounces@dpdk.org> On Behalf Of Ananyev, Konstantin > Sent: Tuesday, January 7, 2020 8:01 PM > To: Anoob Joseph <anoobj@marvell.com>; Akhil Goyal > <akhil.goyal@nxp.com>; Nicolau, Radu <radu.nicolau@intel.com>; Thomas > Monjalon <thomas@monjalon.net> > Cc: Ankur Dwivedi <adwivedi@marvell.com>; Jerin Jacob Kollanukkaran > <jerinj@marvell.com>; Narayana Prasad Raju Athreya > <pathreya@marvell.com>; Archana Muniganti <marchana@marvell.com>; > Tejasree Kondoj <ktejasree@marvell.com>; Vamsi Krishna Attunuru > <vattunuru@marvell.com>; Lukas Bartosik <lbartosik@marvell.com>; > dev@dpdk.org > Subject: Re: [dpdk-dev] [PATCH 12/14] examples/ipsec-secgw: add driver > outbound worker > > > > > > > This patch adds the driver outbound worker thread for ipsec-secgw. > > > > > > In this mode the security session is a fixed one and sa update > > > > > > is not done. 
> > > > > > > > > > > > Signed-off-by: Ankur Dwivedi <adwivedi@marvell.com> > > > > > > Signed-off-by: Anoob Joseph <anoobj@marvell.com> > > > > > > Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> > > > > > > --- > > > > > > examples/ipsec-secgw/ipsec-secgw.c | 12 +++++ > > > > > > examples/ipsec-secgw/ipsec.c | 9 ++++ > > > > > > examples/ipsec-secgw/ipsec_worker.c | 90 > > > > > > ++++++++++++++++++++++++++++++++++++- > > > > > > 3 files changed, 110 insertions(+), 1 deletion(-) > > > > > > > > > > > > diff --git a/examples/ipsec-secgw/ipsec-secgw.c > > > > > > b/examples/ipsec-secgw/ipsec-secgw.c > > > > > > index 2e7d4d8..76719f2 100644 > > > > > > --- a/examples/ipsec-secgw/ipsec-secgw.c > > > > > > +++ b/examples/ipsec-secgw/ipsec-secgw.c > > > > > > @@ -2011,6 +2011,18 @@ cryptodevs_init(void) > > > > > > i++; > > > > > > } > > > > > > > > > > > > + /* > > > > > > + * Set the queue pair to at least the number of > ethernet > > > > > > + * devices for inline outbound. > > > > > > + */ > > > > > > + qp = RTE_MAX(rte_eth_dev_count_avail(), qp); > > > > > > > > > > > > > > > Not sure, what for? > > > > > Why we can't process packets from several eth devs on the same > > > > > crypto-dev queue? > > > > > > > > [Anoob] This is because of a limitation in our hardware. In our > > > > hardware, it's the crypto queue pair which would be submitting to > > > > the ethernet queue for Tx. But in DPDK spec, the security > > > > processing is done by the ethernet PMD Tx routine alone. We manage > > > > to do this by sharing > > > the crypto queue internally. The crypto queues initialized during > > > crypto_configure() gets mapped to various ethernet ports. Because of > > > this, we need to have atleast as many crypto queues as the number of > eth ports. > > > > > > Ok, but that breaks current behavior. > > > Right now in poll-mode it is possible to map traffic from N eth-devs > > > to M crypto- devs (N>= M, by using M lcores). 
> > > Would prefer to keep this functionality in place. > > > > [Anoob] Understood. I don't think that functionality is broken. If the > > number of qps available is lower than the number of eth devs, then only > the ones available would be enabled. Inline protocol session for the other > eth devs would fail for us. > > > > Currently, the app assumes that for one core, it needs only one qp > > (and for M core, M qp). Is there any harm in enabling all qps available? If > such a change can be done, that would also work for us. > > Hmm, I suppose it could cause some problems with some corner-cases: > if we'll have crypto-dev with really big number of max_queues. > In that case it might require a lot of extra memory for > cryptodev_configure/queue_pair_setup. > Probably the easiest way to deal with it: > - add req_queue_num parameter for cryptodevs_init() > And then do: qp =RTE_MIN(max_nb_qps, RTE_MAX(req_queue_num, > qp)); > - for poll mode we'll call cryptodevs_init(0), for your case it could be > cryptodevs_init(rte_eth_dev_count_avail()). > > Would it work for your case? [Anoob] I tried investigating about this a bit more. The reason why we get limited by the number of cores is because of the logic in add_cdev_mapping() & add_mapping() functions. I've tried reworking it a bit and was able to make it equal to number of lcore params (core-port-queue mapping). Technically, we just need to match that. What do you think? I will submit a separate patch with the said rework. > > > > > > > > > > > > The above change is required because here we limit the number of > > > > crypto qps based on the number of cores etc. So when tried on > > > > single core, the > > > qps get limited to 1, which causes session_create() to fail for all > > > ports other than the first one. 
> > > > > > > > > > > > > > > + > > > > > > + /* > > > > > > + * The requested number of queues should never > exceed > > > > > > + * the max available > > > > > > + */ > > > > > > + qp = RTE_MIN(qp, max_nb_qps); > > > > > > + > > > > > > if (qp == 0) > > > > > > continue; > > > > > > > > > > > > diff --git a/examples/ipsec-secgw/ipsec.c > > > > > > b/examples/ipsec-secgw/ipsec.c index e529f68..9ff8a63 100644 > > > > > > --- a/examples/ipsec-secgw/ipsec.c > > > > > > +++ b/examples/ipsec-secgw/ipsec.c > > > > > > @@ -141,6 +141,10 @@ create_lookaside_session(struct ipsec_ctx > > > > > *ipsec_ctx, struct ipsec_sa *sa, > > > > > > return 0; > > > > > > } > > > > > > > > > > > > +uint16_t sa_no; > > > > > > +#define MAX_FIXED_SESSIONS 10 > > > > > > +struct rte_security_session > > > > > > +*sec_session_fixed[MAX_FIXED_SESSIONS]; > > > > > > + > > > > > > int > > > > > > create_inline_session(struct socket_ctx *skt_ctx, struct ipsec_sa > *sa, > > > > > > struct rte_ipsec_session *ips) @@ -401,6 +405,11 > @@ > > > > > > create_inline_session(struct socket_ctx *skt_ctx, struct > > > > > > ipsec_sa *sa, > > > > > > > > > > > > ips->security.ol_flags = sec_cap->ol_flags; > > > > > > ips->security.ctx = sec_ctx; > > > > > > + if (sa_no < MAX_FIXED_SESSIONS) { > > > > > > + sec_session_fixed[sa_no] = > > > > > > + ipsec_get_primary_session(sa)- > > > > > >security.ses; > > > > > > + sa_no++; > > > > > > + } > > > > > > } > > > > > > > > > > Totally lost what is the purpose of these changes... > > > > > Why first 10 inline-proto are special and need to be saved > > > > > inside global array (sec_session_fixed)? > > > > > Why later, in ipsec_worker.c this array is referenced by eth port_id? > > > > > What would happen if number of inline-proto sessions is less > > > > > than number of eth ports? > > > > > > > > [Anoob] This is required for the outbound driver mode. The 'driver > > > > mode' is more like 'single_sa' mode of the existing application. 
> > > > The idea is to skip all the lookups etc done in the s/w and > > > > perform ipsec processing fully in h/w. In outbound, following is > > > > roughly what we should do for driver mode, > > > > > > > > pkt = rx_burst(); > > > > > > > > /* set_pkt_metadata() */ > > > > pkt-> udata64 = session; > > > > > > > > tx_burst(pkt); > > > > > > > > The session is created on eth ports. And so, if we have single SA, > > > > then the entire traffic will have to be forwarded on the same > > > > port. The above > > > change is to make sure we could send traffic on all ports. > > > > > > > > Currently we just use the first 10 SAs and save it in the array. > > > > So the user has to set the conf properly and make sure the SAs are > > > > distributed such. Will update this to save the first parsed > > > > outbound SA for a > > > port in the array. That way the size of the array will be > RTE_MAX_ETHPORTS. > > > > > > Ok, then if it is for specific case (event-mode + sing-sa mode) then > > > in create_inline_session we probably shouldn't do it always, but > > > only when this mode is selected. > > > > [Anoob] Will make that change. > > > > > Also wouldn't it better to reuse current single-sa cmd-line option and > logic? > > > I.E. whe event-mode and single-sa is selected, go though all > > > eth-devs and for each do create_inline_session() with for sa that > corresponds to sing_sa_idx? > > > Then, I think create_inline_session() can be kept intact. > > > > [Anoob] No disagreement. Current single_sa uses single_sa universally. > The driver mode intends to use single_sa per port. > > Technically, just single_sa (universally) will result in the eth port > > being the bottleneck. So I can fix the single sa and we can use single_sa > option in eventmode as you have described. > > > > > > > > > > > > > Is the above approach fine? 
> > > > > > > > > > > > > > > set_cdev_id: > > > > > > diff --git a/examples/ipsec-secgw/ipsec_worker.c > > > > > > b/examples/ipsec-secgw/ipsec_worker.c > > > > > > index 2af9475..e202277 100644 > > > > > > --- a/examples/ipsec-secgw/ipsec_worker.c > > > > > > +++ b/examples/ipsec-secgw/ipsec_worker.c > > > > > > @@ -263,7 +263,7 @@ process_ipsec_ev_inbound(struct ipsec_ctx > > > > > > *ctx, > > > > > struct route_table *rt, > > > > > > */ > > > > > > > > > > > > /* Workers registered */ > > > > > > -#define IPSEC_EVENTMODE_WORKERS 2 > > > > > > +#define IPSEC_EVENTMODE_WORKERS 3 > > > > > > > > > > > > /* > > > > > > * Event mode worker > > > > > > @@ -423,6 +423,84 @@ > > > > > ipsec_wrkr_non_burst_int_port_app_mode_inb(struct > > > > > eh_event_link_info *links, > > > > > > return; > > > > > > } > > > > > > > > > > > > +/* > > > > > > + * Event mode worker > > > > > > + * Operating parameters : non-burst - Tx internal port - > > > > > > +driver mode > > > > > > +- outbound */ extern struct rte_security_session > > > > > > +*sec_session_fixed[]; static void > > > > > > +ipsec_wrkr_non_burst_int_port_drvr_mode_outb(struct > > > > > eh_event_link_info *links, > > > > > > + uint8_t nb_links) > > > > > > +{ > > > > > > + unsigned int nb_rx = 0; > > > > > > + struct rte_mbuf *pkt; > > > > > > + unsigned int port_id; > > > > > > + struct rte_event ev; > > > > > > + uint32_t lcore_id; > > > > > > + > > > > > > + /* Check if we have links registered for this lcore */ > > > > > > + if (nb_links == 0) { > > > > > > + /* No links registered - exit */ > > > > > > + goto exit; > > > > > > + } > > > > > > + > > > > > > + /* Get core ID */ > > > > > > + lcore_id = rte_lcore_id(); > > > > > > + > > > > > > + RTE_LOG(INFO, IPSEC, > > > > > > + "Launching event mode worker (non-burst - Tx > internal port > > > > > > +- > > > > > " > > > > > > + "driver mode - outbound) on lcore %d\n", lcore_id); > > > > > > + > > > > > > + /* We have valid links */ > > > > > > + > > > > > > + /* 
Check if it's single link */ > > > > > > + if (nb_links != 1) { > > > > > > + RTE_LOG(INFO, IPSEC, > > > > > > + "Multiple links not supported. Using first > link\n"); > > > > > > + } > > > > > > + > > > > > > + RTE_LOG(INFO, IPSEC, " -- lcoreid=%u event_port_id=%u\n", > > > > > lcore_id, > > > > > > + links[0].event_port_id); > > > > > > + while (!force_quit) { > > > > > > + /* Read packet from event queues */ > > > > > > + nb_rx = > rte_event_dequeue_burst(links[0].eventdev_id, > > > > > > + links[0].event_port_id, > > > > > > + &ev, /* events */ > > > > > > + 1, /* nb_events */ > > > > > > + 0 /* timeout_ticks */); > > > > > > + > > > > > > + if (nb_rx == 0) > > > > > > + continue; > > > > > > + > > > > > > + port_id = ev.queue_id; > > > > > > + pkt = ev.mbuf; > > > > > > + > > > > > > + rte_prefetch0(rte_pktmbuf_mtod(pkt, void *)); > > > > > > + > > > > > > + /* Process packet */ > > > > > > + ipsec_event_pre_forward(pkt, port_id); > > > > > > + > > > > > > + pkt->udata64 = (uint64_t) > sec_session_fixed[port_id]; > > > > > > + > > > > > > + /* Mark the packet for Tx security offload */ > > > > > > + pkt->ol_flags |= PKT_TX_SEC_OFFLOAD; > > > > > > + > > > > > > + /* > > > > > > + * Since tx internal port is available, events can be > > > > > > + * directly enqueued to the adapter and it would be > > > > > > + * internally submitted to the eth device. 
> > > > > > + */ > > > > > > + > rte_event_eth_tx_adapter_enqueue(links[0].eventdev_id, > > > > > > + links[0].event_port_id, > > > > > > + &ev, /* events */ > > > > > > + 1, /* nb_events */ > > > > > > + 0 /* flags */); > > > > > > + } > > > > > > + > > > > > > +exit: > > > > > > + return; > > > > > > +} > > > > > > + > > > > > > static uint8_t > > > > > > ipsec_eventmode_populate_wrkr_params(struct > > > > > eh_app_worker_params > > > > > > *wrkrs) { @@ -449,6 +527,16 @@ > > > > > > ipsec_eventmode_populate_wrkr_params(struct > > > > > > eh_app_worker_params > > > > > *wrkrs) > > > > > > wrkr->cap.ipsec_dir = EH_IPSEC_DIR_TYPE_INBOUND; > > > > > > wrkr->worker_thread = > > > > > ipsec_wrkr_non_burst_int_port_app_mode_inb; > > > > > > > > > > > > + wrkr++; > > > > > > + nb_wrkr_param++; > > > > > > + > > > > > > + /* Non-burst - Tx internal port - driver mode - outbound */ > > > > > > + wrkr->cap.burst = EH_RX_TYPE_NON_BURST; > > > > > > + wrkr->cap.tx_internal_port = EH_TX_TYPE_INTERNAL_PORT; > > > > > > + wrkr->cap.ipsec_mode = EH_IPSEC_MODE_TYPE_DRIVER; > > > > > > + wrkr->cap.ipsec_dir = EH_IPSEC_DIR_TYPE_OUTBOUND; > > > > > > + wrkr->worker_thread = > > > > > ipsec_wrkr_non_burst_int_port_drvr_mode_outb; > > > > > > + > > > > > > nb_wrkr_param++; > > > > > > return nb_wrkr_param; > > > > > > } > > > > > > -- > > > > > > 2.7.4 ^ permalink raw reply [flat|nested] 147+ messages in thread
* [dpdk-dev] [PATCH 13/14] examples/ipsec-secgw: add app outbound worker 2019-12-08 12:30 [dpdk-dev] [PATCH 00/14] add eventmode to ipsec-secgw Anoob Joseph ` (11 preceding siblings ...) 2019-12-08 12:30 ` [dpdk-dev] [PATCH 12/14] examples/ipsec-secgw: add driver outbound worker Anoob Joseph @ 2019-12-08 12:30 ` Anoob Joseph 2019-12-08 12:30 ` [dpdk-dev] [PATCH 14/14] examples/ipsec-secgw: add cmd line option for bufs Anoob Joseph 2020-01-20 13:45 ` [dpdk-dev] [PATCH v2 00/12] add eventmode to ipsec-secgw Anoob Joseph 14 siblings, 0 replies; 147+ messages in thread From: Anoob Joseph @ 2019-12-08 12:30 UTC (permalink / raw) To: Akhil Goyal, Radu Nicolau, Thomas Monjalon Cc: Ankur Dwivedi, Jerin Jacob, Narayana Prasad, Anoob Joseph, Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru, Lukasz Bartosik, Konstantin Ananyev, dev From: Ankur Dwivedi <adwivedi@marvell.com> This patch adds the app outbound worker thread. Signed-off-by: Ankur Dwivedi <adwivedi@marvell.com> Signed-off-by: Anoob Joseph <anoobj@marvell.com> Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> --- examples/ipsec-secgw/ipsec_worker.c | 193 +++++++++++++++++++++++++++++++++++- 1 file changed, 192 insertions(+), 1 deletion(-) diff --git a/examples/ipsec-secgw/ipsec_worker.c b/examples/ipsec-secgw/ipsec_worker.c index e202277..41d2264 100644 --- a/examples/ipsec-secgw/ipsec_worker.c +++ b/examples/ipsec-secgw/ipsec_worker.c @@ -256,6 +256,101 @@ process_ipsec_ev_inbound(struct ipsec_ctx *ctx, struct route_table *rt, return 0; } +static inline int +process_ipsec_ev_outbound(struct ipsec_ctx *ctx, struct route_table *rt, + struct rte_event *ev) +{ + struct rte_ipsec_session *sess; + struct sa_ctx *sa_ctx; + struct rte_mbuf *pkt; + uint16_t port_id = 0; + struct ipsec_sa *sa; + enum pkt_type type; + uint32_t sa_idx; + uint8_t *nlp; + + /* Get pkt from event */ + pkt = ev->mbuf; + + /* Check the packet type */ + type = process_ipsec_get_pkt_type(pkt, &nlp); + + switch (type) { + case 
PKT_TYPE_PLAIN_IPV4: + /* Check if we have a match */ + if (check_sp(ctx->sp4_ctx, nlp, &sa_idx) == 0) { + /* No valid match */ + goto drop_pkt_and_exit; + } + break; + case PKT_TYPE_PLAIN_IPV6: + /* Check if we have a match */ + if (check_sp(ctx->sp6_ctx, nlp, &sa_idx) == 0) { + /* No valid match */ + goto drop_pkt_and_exit; + } + break; + default: + /* + * Only plain IPv4 & IPv6 packets are allowed + * on protected port. Drop the rest. + */ + RTE_LOG(ERR, IPSEC, "Unsupported packet type = %d\n", type); + goto drop_pkt_and_exit; + } + + /* Check if the packet has to be bypassed */ + if (sa_idx == 0) { + port_id = get_route(pkt, rt, type); + if (unlikely(port_id == RTE_MAX_ETHPORTS)) { + /* no match */ + goto drop_pkt_and_exit; + } + /* else, we have a matching route */ + goto send_pkt; + } + + /* Else the packet has to be protected */ + + /* Get SA ctx*/ + sa_ctx = ctx->sa_ctx; + + /* Get SA */ + sa = &(sa_ctx->sa[sa_idx]); + + /* Get IPsec session */ + sess = ipsec_get_primary_session(sa); + + /* Allow only inline protocol for now */ + if (sess->type != RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL) { + RTE_LOG(ERR, IPSEC, "SA type not supported\n"); + goto drop_pkt_and_exit; + } + + if (sess->security.ol_flags & RTE_SECURITY_TX_OLOAD_NEED_MDATA) + pkt->udata64 = (uint64_t) sess->security.ses; + + /* Mark the packet for Tx security offload */ + pkt->ol_flags |= PKT_TX_SEC_OFFLOAD; + + /* Get the port to which this pkt need to be submitted */ + port_id = sa->portid; + +send_pkt: + /* Update mac addresses */ + update_mac_addrs(pkt, port_id); + + /* Update the event with the dest port */ + ipsec_event_pre_forward(pkt, port_id); + return 1; + +drop_pkt_and_exit: + RTE_LOG(ERR, IPSEC, "Outbound packet dropped\n"); + rte_pktmbuf_free(pkt); + ev->mbuf = NULL; + return 0; +} + /* * Event mode exposes various operating modes depending on the * capabilities of the event device and the operating mode @@ -263,7 +358,7 @@ process_ipsec_ev_inbound(struct ipsec_ctx *ctx, struct 
route_table *rt, */ /* Workers registered */ -#define IPSEC_EVENTMODE_WORKERS 3 +#define IPSEC_EVENTMODE_WORKERS 4 /* * Event mode worker @@ -501,6 +596,92 @@ ipsec_wrkr_non_burst_int_port_drvr_mode_outb(struct eh_event_link_info *links, return; } +/* + * Event mode worker + * Operating parameters : non-burst - Tx internal port - app mode - outbound + */ +static void +ipsec_wrkr_non_burst_int_port_app_mode_outb(struct eh_event_link_info *links, + uint8_t nb_links) +{ + struct lcore_conf_ev_tx_int_port_wrkr lconf; + unsigned int nb_rx = 0; + struct rte_event ev; + uint32_t lcore_id; + int32_t socket_id; + + /* Check if we have links registered for this lcore */ + if (nb_links == 0) { + /* No links registered - exit */ + goto exit; + } + + /* We have valid links */ + + /* Get core ID */ + lcore_id = rte_lcore_id(); + + /* Get socket ID */ + socket_id = rte_lcore_to_socket_id(lcore_id); + + /* Save routing table */ + lconf.rt.rt4_ctx = socket_ctx[socket_id].rt_ip4; + lconf.rt.rt6_ctx = socket_ctx[socket_id].rt_ip6; + lconf.inbound.sp4_ctx = socket_ctx[socket_id].sp_ip4_in; + lconf.inbound.sp6_ctx = socket_ctx[socket_id].sp_ip6_in; + lconf.inbound.sa_ctx = socket_ctx[socket_id].sa_in; + lconf.inbound.session_pool = socket_ctx[socket_id].session_pool; + lconf.outbound.sp4_ctx = socket_ctx[socket_id].sp_ip4_out; + lconf.outbound.sp6_ctx = socket_ctx[socket_id].sp_ip6_out; + lconf.outbound.sa_ctx = socket_ctx[socket_id].sa_out; + lconf.outbound.session_pool = socket_ctx[socket_id].session_pool; + + RTE_LOG(INFO, IPSEC, + "Launching event mode worker (non-burst - Tx internal port - " + "app mode - outbound) on lcore %d\n", lcore_id); + + /* Check if it's single link */ + if (nb_links != 1) { + RTE_LOG(INFO, IPSEC, + "Multiple links not supported. 
Using first link\n"); + } + + RTE_LOG(INFO, IPSEC, " -- lcoreid=%u event_port_id=%u\n", lcore_id, + links[0].event_port_id); + + while (!force_quit) { + /* Read packet from event queues */ + nb_rx = rte_event_dequeue_burst(links[0].eventdev_id, + links[0].event_port_id, + &ev, /* events */ + 1, /* nb_events */ + 0 /* timeout_ticks */); + + if (nb_rx == 0) + continue; + + if (process_ipsec_ev_outbound(&lconf.outbound, + &lconf.rt, &ev) != 1) { + /* The pkt has been dropped */ + continue; + } + + /* + * Since tx internal port is available, events can be + * directly enqueued to the adapter and it would be + * internally submitted to the eth device. + */ + rte_event_eth_tx_adapter_enqueue(links[0].eventdev_id, + links[0].event_port_id, + &ev, /* events */ + 1, /* nb_events */ + 0 /* flags */); + } + +exit: + return; +} + static uint8_t ipsec_eventmode_populate_wrkr_params(struct eh_app_worker_params *wrkrs) { @@ -537,6 +718,16 @@ ipsec_eventmode_populate_wrkr_params(struct eh_app_worker_params *wrkrs) wrkr->cap.ipsec_dir = EH_IPSEC_DIR_TYPE_OUTBOUND; wrkr->worker_thread = ipsec_wrkr_non_burst_int_port_drvr_mode_outb; + wrkr++; + nb_wrkr_param++; + + /* Non-burst - Tx internal port - app mode - outbound */ + wrkr->cap.burst = EH_RX_TYPE_NON_BURST; + wrkr->cap.tx_internal_port = EH_TX_TYPE_INTERNAL_PORT; + wrkr->cap.ipsec_mode = EH_IPSEC_MODE_TYPE_APP; + wrkr->cap.ipsec_dir = EH_IPSEC_DIR_TYPE_OUTBOUND; + wrkr->worker_thread = ipsec_wrkr_non_burst_int_port_app_mode_outb; + nb_wrkr_param++; return nb_wrkr_param; } -- 2.7.4 ^ permalink raw reply [flat|nested] 147+ messages in thread
* [dpdk-dev] [PATCH 14/14] examples/ipsec-secgw: add cmd line option for bufs 2019-12-08 12:30 [dpdk-dev] [PATCH 00/14] add eventmode to ipsec-secgw Anoob Joseph ` (12 preceding siblings ...) 2019-12-08 12:30 ` [dpdk-dev] [PATCH 13/14] examples/ipsec-secgw: add app " Anoob Joseph @ 2019-12-08 12:30 ` Anoob Joseph 2019-12-23 16:14 ` Ananyev, Konstantin 2020-01-20 13:45 ` [dpdk-dev] [PATCH v2 00/12] add eventmode to ipsec-secgw Anoob Joseph 14 siblings, 1 reply; 147+ messages in thread From: Anoob Joseph @ 2019-12-08 12:30 UTC (permalink / raw) To: Akhil Goyal, Radu Nicolau, Thomas Monjalon Cc: Lukasz Bartosik, Jerin Jacob, Narayana Prasad, Ankur Dwivedi, Anoob Joseph, Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru, Konstantin Ananyev, dev From: Lukasz Bartosik <lbartosik@marvell.com> Add command line option -s which can be used to configure number of buffers in a pool. Default number of buffers is 8192. Signed-off-by: Anoob Joseph <anoobj@marvell.com> Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> --- examples/ipsec-secgw/ipsec-secgw.c | 23 +++++++++++++++++++---- 1 file changed, 19 insertions(+), 4 deletions(-) diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c index 76719f2..f8e28d6 100644 --- a/examples/ipsec-secgw/ipsec-secgw.c +++ b/examples/ipsec-secgw/ipsec-secgw.c @@ -59,8 +59,6 @@ volatile bool force_quit; #define MEMPOOL_CACHE_SIZE 256 -#define NB_MBUF (32000) - #define CDEV_QUEUE_DESC 2048 #define CDEV_MAP_ENTRIES 16384 #define CDEV_MP_NB_OBJS 1024 @@ -167,6 +165,7 @@ static int32_t numa_on = 1; /**< NUMA is enabled by default. */ static uint32_t nb_lcores; static uint32_t single_sa; static uint32_t single_sa_idx; +static uint32_t nb_bufs_in_pool = 8192; /* * RX/TX HW offload capabilities to enable/use on ethernet ports. 
@@ -1261,6 +1260,7 @@ print_usage(const char *prgname) " [-w REPLAY_WINDOW_SIZE]" " [-e]" " [-a]" + " [-s NUMBER_OF_MBUFS_IN_PKT_POOL]" " -f CONFIG_FILE" " --config (port,queue,lcore)[,(port,queue,lcore)]" " [--single-sa SAIDX]" @@ -1284,6 +1284,7 @@ print_usage(const char *prgname) " size for each SA\n" " -e enables ESN\n" " -a enables SA SQN atomic behaviour\n" + " -s number of mbufs in packet pool (default 8192)\n" " -f CONFIG_FILE: Configuration file\n" " --config (port,queue,lcore): Rx queue configuration\n" " --single-sa SAIDX: Use single SA index for outbound traffic,\n" @@ -1534,7 +1535,7 @@ parse_args(int32_t argc, char **argv, struct eh_conf *eh_conf) argvopt = argv; - while ((opt = getopt_long(argc, argvopt, "aelp:Pu:f:j:w:", + while ((opt = getopt_long(argc, argvopt, "aelp:Pu:f:j:w:s:", lgopts, &option_index)) != EOF) { switch (opt) { @@ -1568,6 +1569,19 @@ parse_args(int32_t argc, char **argv, struct eh_conf *eh_conf) cfgfile = optarg; f_present = 1; break; + + case 's': + ret = parse_decimal(optarg); + if (ret < 0) { + printf("Invalid number of buffers in a pool: " + "%s\n", optarg); + print_usage(prgname); + return -1; + } + + nb_bufs_in_pool = ret; + break; + case 'j': ret = parse_decimal(optarg); if (ret < RTE_MBUF_DEFAULT_BUF_SIZE || @@ -2792,11 +2806,12 @@ main(int32_t argc, char **argv) if (socket_ctx[socket_id].mbuf_pool) continue; - pool_init(&socket_ctx[socket_id], socket_id, NB_MBUF); + pool_init(&socket_ctx[socket_id], socket_id, nb_bufs_in_pool); session_pool_init(&socket_ctx[socket_id], socket_id, sess_sz); session_priv_pool_init(&socket_ctx[socket_id], socket_id, sess_sz); } + printf("Number of mbufs in packet pool %d\n", nb_bufs_in_pool); RTE_ETH_FOREACH_DEV(portid) { if ((enabled_port_mask & (1 << portid)) == 0) -- 2.7.4 ^ permalink raw reply [flat|nested] 147+ messages in thread
* Re: [dpdk-dev] [PATCH 14/14] examples/ipsec-secgw: add cmd line option for bufs 2019-12-08 12:30 ` [dpdk-dev] [PATCH 14/14] examples/ipsec-secgw: add cmd line option for bufs Anoob Joseph @ 2019-12-23 16:14 ` Ananyev, Konstantin 2019-12-23 16:16 ` Ananyev, Konstantin 0 siblings, 1 reply; 147+ messages in thread From: Ananyev, Konstantin @ 2019-12-23 16:14 UTC (permalink / raw) To: Anoob Joseph, Akhil Goyal, Nicolau, Radu, Thomas Monjalon Cc: Lukasz Bartosik, Jerin Jacob, Narayana Prasad, Ankur Dwivedi, Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru, dev > > Add command line option -s which can be used to configure number > of buffers in a pool. Default number of buffers is 8192. > > Signed-off-by: Anoob Joseph <anoobj@marvell.com> > Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> > --- > examples/ipsec-secgw/ipsec-secgw.c | 23 +++++++++++++++++++---- > 1 file changed, 19 insertions(+), 4 deletions(-) > > diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c > index 76719f2..f8e28d6 100644 > --- a/examples/ipsec-secgw/ipsec-secgw.c > +++ b/examples/ipsec-secgw/ipsec-secgw.c > @@ -59,8 +59,6 @@ volatile bool force_quit; > > #define MEMPOOL_CACHE_SIZE 256 > > -#define NB_MBUF (32000) > - > #define CDEV_QUEUE_DESC 2048 > #define CDEV_MAP_ENTRIES 16384 > #define CDEV_MP_NB_OBJS 1024 > @@ -167,6 +165,7 @@ static int32_t numa_on = 1; /**< NUMA is enabled by default. */ > static uint32_t nb_lcores; > static uint32_t single_sa; > static uint32_t single_sa_idx; > +static uint32_t nb_bufs_in_pool = 8192; Why to change the default number (behavior) here? Why not to keep existing one as default? > > /* > * RX/TX HW offload capabilities to enable/use on ethernet ports. 
> @@ -1261,6 +1260,7 @@ print_usage(const char *prgname) > " [-w REPLAY_WINDOW_SIZE]" > " [-e]" > " [-a]" > + " [-s NUMBER_OF_MBUFS_IN_PKT_POOL]" > " -f CONFIG_FILE" > " --config (port,queue,lcore)[,(port,queue,lcore)]" > " [--single-sa SAIDX]" > @@ -1284,6 +1284,7 @@ print_usage(const char *prgname) > " size for each SA\n" > " -e enables ESN\n" > " -a enables SA SQN atomic behaviour\n" > + " -s number of mbufs in packet pool (default 8192)\n" > " -f CONFIG_FILE: Configuration file\n" > " --config (port,queue,lcore): Rx queue configuration\n" > " --single-sa SAIDX: Use single SA index for outbound traffic,\n" > @@ -1534,7 +1535,7 @@ parse_args(int32_t argc, char **argv, struct eh_conf *eh_conf) > > argvopt = argv; > > - while ((opt = getopt_long(argc, argvopt, "aelp:Pu:f:j:w:", > + while ((opt = getopt_long(argc, argvopt, "aelp:Pu:f:j:w:s:", > lgopts, &option_index)) != EOF) { > > switch (opt) { > @@ -1568,6 +1569,19 @@ parse_args(int32_t argc, char **argv, struct eh_conf *eh_conf) > cfgfile = optarg; > f_present = 1; > break; > + > + case 's': > + ret = parse_decimal(optarg); > + if (ret < 0) { > + printf("Invalid number of buffers in a pool: " > + "%s\n", optarg); > + print_usage(prgname); > + return -1; > + } > + > + nb_bufs_in_pool = ret; > + break; > + > case 'j': > ret = parse_decimal(optarg); > if (ret < RTE_MBUF_DEFAULT_BUF_SIZE || > @@ -2792,11 +2806,12 @@ main(int32_t argc, char **argv) > if (socket_ctx[socket_id].mbuf_pool) > continue; > > - pool_init(&socket_ctx[socket_id], socket_id, NB_MBUF); > + pool_init(&socket_ctx[socket_id], socket_id, nb_bufs_in_pool); > session_pool_init(&socket_ctx[socket_id], socket_id, sess_sz); > session_priv_pool_init(&socket_ctx[socket_id], socket_id, > sess_sz); > } > + printf("Number of mbufs in packet pool %d\n", nb_bufs_in_pool); > > RTE_ETH_FOREACH_DEV(portid) { > if ((enabled_port_mask & (1 << portid)) == 0) > -- > 2.7.4 ^ permalink raw reply [flat|nested] 147+ messages in thread
* Re: [dpdk-dev] [PATCH 14/14] examples/ipsec-secgw: add cmd line option for bufs 2019-12-23 16:14 ` Ananyev, Konstantin @ 2019-12-23 16:16 ` Ananyev, Konstantin 2020-01-03 5:42 ` Anoob Joseph 0 siblings, 1 reply; 147+ messages in thread From: Ananyev, Konstantin @ 2019-12-23 16:16 UTC (permalink / raw) To: Anoob Joseph, Akhil Goyal, Nicolau, Radu, Thomas Monjalon Cc: Lukasz Bartosik, Jerin Jacob, Narayana Prasad, Ankur Dwivedi, Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru, dev > > Add command line option -s which can be used to configure number > > of buffers in a pool. Default number of buffers is 8192. > > > > Signed-off-by: Anoob Joseph <anoobj@marvell.com> > > Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> > > --- > > examples/ipsec-secgw/ipsec-secgw.c | 23 +++++++++++++++++++---- > > 1 file changed, 19 insertions(+), 4 deletions(-) > > > > diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c > > index 76719f2..f8e28d6 100644 > > --- a/examples/ipsec-secgw/ipsec-secgw.c > > +++ b/examples/ipsec-secgw/ipsec-secgw.c > > @@ -59,8 +59,6 @@ volatile bool force_quit; > > > > #define MEMPOOL_CACHE_SIZE 256 > > > > -#define NB_MBUF (32000) > > - > > #define CDEV_QUEUE_DESC 2048 > > #define CDEV_MAP_ENTRIES 16384 > > #define CDEV_MP_NB_OBJS 1024 > > @@ -167,6 +165,7 @@ static int32_t numa_on = 1; /**< NUMA is enabled by default. */ > > static uint32_t nb_lcores; > > static uint32_t single_sa; > > static uint32_t single_sa_idx; > > +static uint32_t nb_bufs_in_pool = 8192; > > Why to change the default number (behavior) here? > Why not to keep existing one as default? Or, at least try to guess required number of mbufs (like l3fwd, etc., do)? > > > > > /* > > * RX/TX HW offload capabilities to enable/use on ethernet ports. 
> > @@ -1261,6 +1260,7 @@ print_usage(const char *prgname) > > " [-w REPLAY_WINDOW_SIZE]" > > " [-e]" > > " [-a]" > > + " [-s NUMBER_OF_MBUFS_IN_PKT_POOL]" > > " -f CONFIG_FILE" > > " --config (port,queue,lcore)[,(port,queue,lcore)]" > > " [--single-sa SAIDX]" > > @@ -1284,6 +1284,7 @@ print_usage(const char *prgname) > > " size for each SA\n" > > " -e enables ESN\n" > > " -a enables SA SQN atomic behaviour\n" > > + " -s number of mbufs in packet pool (default 8192)\n" > > " -f CONFIG_FILE: Configuration file\n" > > " --config (port,queue,lcore): Rx queue configuration\n" > > " --single-sa SAIDX: Use single SA index for outbound traffic,\n" > > @@ -1534,7 +1535,7 @@ parse_args(int32_t argc, char **argv, struct eh_conf *eh_conf) > > > > argvopt = argv; > > > > - while ((opt = getopt_long(argc, argvopt, "aelp:Pu:f:j:w:", > > + while ((opt = getopt_long(argc, argvopt, "aelp:Pu:f:j:w:s:", > > lgopts, &option_index)) != EOF) { > > > > switch (opt) { > > @@ -1568,6 +1569,19 @@ parse_args(int32_t argc, char **argv, struct eh_conf *eh_conf) > > cfgfile = optarg; > > f_present = 1; > > break; > > + > > + case 's': > > + ret = parse_decimal(optarg); > > + if (ret < 0) { > > + printf("Invalid number of buffers in a pool: " > > + "%s\n", optarg); > > + print_usage(prgname); > > + return -1; > > + } > > + > > + nb_bufs_in_pool = ret; > > + break; > > + > > case 'j': > > ret = parse_decimal(optarg); > > if (ret < RTE_MBUF_DEFAULT_BUF_SIZE || > > @@ -2792,11 +2806,12 @@ main(int32_t argc, char **argv) > > if (socket_ctx[socket_id].mbuf_pool) > > continue; > > > > - pool_init(&socket_ctx[socket_id], socket_id, NB_MBUF); > > + pool_init(&socket_ctx[socket_id], socket_id, nb_bufs_in_pool); > > session_pool_init(&socket_ctx[socket_id], socket_id, sess_sz); > > session_priv_pool_init(&socket_ctx[socket_id], socket_id, > > sess_sz); > > } > > + printf("Number of mbufs in packet pool %d\n", nb_bufs_in_pool); > > > > RTE_ETH_FOREACH_DEV(portid) { > > if ((enabled_port_mask & (1 << 
portid)) == 0) > > -- > > 2.7.4 ^ permalink raw reply [flat|nested] 147+ messages in thread
* Re: [dpdk-dev] [PATCH 14/14] examples/ipsec-secgw: add cmd line option for bufs 2019-12-23 16:16 ` Ananyev, Konstantin @ 2020-01-03 5:42 ` Anoob Joseph 2020-01-06 15:21 ` Ananyev, Konstantin 0 siblings, 1 reply; 147+ messages in thread From: Anoob Joseph @ 2020-01-03 5:42 UTC (permalink / raw) To: Ananyev, Konstantin, Akhil Goyal, Nicolau, Radu, Thomas Monjalon Cc: Lukas Bartosik, Jerin Jacob Kollanukkaran, Narayana Prasad Raju Athreya, Ankur Dwivedi, Archana Muniganti, Tejasree Kondoj, Vamsi Krishna Attunuru, dev Hi Konstantin, Please see inline. Thanks, Anoob > -----Original Message----- > From: dev <dev-bounces@dpdk.org> On Behalf Of Ananyev, Konstantin > Sent: Monday, December 23, 2019 9:47 PM > To: Anoob Joseph <anoobj@marvell.com>; Akhil Goyal > <akhil.goyal@nxp.com>; Nicolau, Radu <radu.nicolau@intel.com>; Thomas > Monjalon <thomas@monjalon.net> > Cc: Lukas Bartosik <lbartosik@marvell.com>; Jerin Jacob Kollanukkaran > <jerinj@marvell.com>; Narayana Prasad Raju Athreya > <pathreya@marvell.com>; Ankur Dwivedi <adwivedi@marvell.com>; > Archana Muniganti <marchana@marvell.com>; Tejasree Kondoj > <ktejasree@marvell.com>; Vamsi Krishna Attunuru > <vattunuru@marvell.com>; dev@dpdk.org > Subject: Re: [dpdk-dev] [PATCH 14/14] examples/ipsec-secgw: add cmd line > option for bufs > > > > > > Add command line option -s which can be used to configure number of > > > buffers in a pool. Default number of buffers is 8192. 
> > > > > > Signed-off-by: Anoob Joseph <anoobj@marvell.com> > > > Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> > > > --- > > > examples/ipsec-secgw/ipsec-secgw.c | 23 +++++++++++++++++++---- > > > 1 file changed, 19 insertions(+), 4 deletions(-) > > > > > > diff --git a/examples/ipsec-secgw/ipsec-secgw.c > > > b/examples/ipsec-secgw/ipsec-secgw.c > > > index 76719f2..f8e28d6 100644 > > > --- a/examples/ipsec-secgw/ipsec-secgw.c > > > +++ b/examples/ipsec-secgw/ipsec-secgw.c > > > @@ -59,8 +59,6 @@ volatile bool force_quit; > > > > > > #define MEMPOOL_CACHE_SIZE 256 > > > > > > -#define NB_MBUF (32000) > > > - > > > #define CDEV_QUEUE_DESC 2048 > > > #define CDEV_MAP_ENTRIES 16384 > > > #define CDEV_MP_NB_OBJS 1024 > > > @@ -167,6 +165,7 @@ static int32_t numa_on = 1; /**< NUMA is > enabled > > > by default. */ static uint32_t nb_lcores; static uint32_t > > > single_sa; static uint32_t single_sa_idx; > > > +static uint32_t nb_bufs_in_pool = 8192; > > > > Why to change the default number (behavior) here? > > Why not to keep existing one as default? > > Or, at least try to guess required number of mbufs (like l3fwd, etc., do)? [Anoob] Existing code sets the default number of mbufs to 32k, which is leading to higher cache misses on our platform. Also, other example applications have 8192 as the minimum. Hence the change. Do you see any perf issues with lowering the default value? Also, I'm fine with making the default one same as the ones in l2fwd & l3fwd. From l3fwd: /* * This expression is used to calculate the number of mbufs needed * depending on user input, taking into account memory for rx and * tx hardware rings, cache per lcore and mtable per port per lcore. 
* RTE_MAX is used to ensure that NB_MBUF never goes below a minimum * value of 8192 */ #define NB_MBUF(nports) RTE_MAX( \ (nports*nb_rx_queue*nb_rxd + \ nports*nb_lcores*MAX_PKT_BURST + \ nports*n_tx_queue*nb_txd + \ nb_lcores*MEMPOOL_CACHE_SIZE), \ (unsigned)8192) I do understand that we will have to rework the above logic a bit more to handle the in-flight packets in cryptodev. What's your suggestion? > > > > > > > > > /* > > > * RX/TX HW offload capabilities to enable/use on ethernet ports. > > > @@ -1261,6 +1260,7 @@ print_usage(const char *prgname) > > > " [-w REPLAY_WINDOW_SIZE]" > > > " [-e]" > > > " [-a]" > > > + " [-s NUMBER_OF_MBUFS_IN_PKT_POOL]" > > > " -f CONFIG_FILE" > > > " --config (port,queue,lcore)[,(port,queue,lcore)]" > > > " [--single-sa SAIDX]" > > > @@ -1284,6 +1284,7 @@ print_usage(const char *prgname) > > > " size for each SA\n" > > > " -e enables ESN\n" > > > " -a enables SA SQN atomic behaviour\n" > > > + " -s number of mbufs in packet pool (default 8192)\n" > > > " -f CONFIG_FILE: Configuration file\n" > > > " --config (port,queue,lcore): Rx queue configuration\n" > > > " --single-sa SAIDX: Use single SA index for outbound > traffic,\n" > > > @@ -1534,7 +1535,7 @@ parse_args(int32_t argc, char **argv, struct > > > eh_conf *eh_conf) > > > > > > argvopt = argv; > > > > > > - while ((opt = getopt_long(argc, argvopt, "aelp:Pu:f:j:w:", > > > + while ((opt = getopt_long(argc, argvopt, "aelp:Pu:f:j:w:s:", > > > lgopts, &option_index)) != EOF) { > > > > > > switch (opt) { > > > @@ -1568,6 +1569,19 @@ parse_args(int32_t argc, char **argv, struct > eh_conf *eh_conf) > > > cfgfile = optarg; > > > f_present = 1; > > > break; > > > + > > > + case 's': > > > + ret = parse_decimal(optarg); > > > + if (ret < 0) { > > > + printf("Invalid number of buffers in a pool: " > > > + "%s\n", optarg); > > > + print_usage(prgname); > > > + return -1; > > > + } > > > + > > > + nb_bufs_in_pool = ret; > > > + break; > > > + > > > case 'j': > > > ret = 
parse_decimal(optarg); > > > if (ret < RTE_MBUF_DEFAULT_BUF_SIZE || @@ - > 2792,11 +2806,12 @@ > > > main(int32_t argc, char **argv) > > > if (socket_ctx[socket_id].mbuf_pool) > > > continue; > > > > > > - pool_init(&socket_ctx[socket_id], socket_id, NB_MBUF); > > > + pool_init(&socket_ctx[socket_id], socket_id, > nb_bufs_in_pool); > > > session_pool_init(&socket_ctx[socket_id], socket_id, > sess_sz); > > > session_priv_pool_init(&socket_ctx[socket_id], socket_id, > > > sess_sz); > > > } > > > + printf("Number of mbufs in packet pool %d\n", nb_bufs_in_pool); > > > > > > RTE_ETH_FOREACH_DEV(portid) { > > > if ((enabled_port_mask & (1 << portid)) == 0) > > > -- > > > 2.7.4 ^ permalink raw reply [flat|nested] 147+ messages in thread
* Re: [dpdk-dev] [PATCH 14/14] examples/ipsec-secgw: add cmd line option for bufs 2020-01-03 5:42 ` Anoob Joseph @ 2020-01-06 15:21 ` Ananyev, Konstantin 0 siblings, 0 replies; 147+ messages in thread From: Ananyev, Konstantin @ 2020-01-06 15:21 UTC (permalink / raw) To: Anoob Joseph, Akhil Goyal, Nicolau, Radu, Thomas Monjalon Cc: Lukas Bartosik, Jerin Jacob Kollanukkaran, Narayana Prasad Raju Athreya, Ankur Dwivedi, Archana Muniganti, Tejasree Kondoj, Vamsi Krishna Attunuru, dev > > > > Add command line option -s which can be used to configure number of > > > > buffers in a pool. Default number of buffers is 8192. > > > > > > > > Signed-off-by: Anoob Joseph <anoobj@marvell.com> > > > > Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> > > > > --- > > > > examples/ipsec-secgw/ipsec-secgw.c | 23 +++++++++++++++++++---- > > > > 1 file changed, 19 insertions(+), 4 deletions(-) > > > > > > > > diff --git a/examples/ipsec-secgw/ipsec-secgw.c > > > > b/examples/ipsec-secgw/ipsec-secgw.c > > > > index 76719f2..f8e28d6 100644 > > > > --- a/examples/ipsec-secgw/ipsec-secgw.c > > > > +++ b/examples/ipsec-secgw/ipsec-secgw.c > > > > @@ -59,8 +59,6 @@ volatile bool force_quit; > > > > > > > > #define MEMPOOL_CACHE_SIZE 256 > > > > > > > > -#define NB_MBUF (32000) > > > > - > > > > #define CDEV_QUEUE_DESC 2048 > > > > #define CDEV_MAP_ENTRIES 16384 > > > > #define CDEV_MP_NB_OBJS 1024 > > > > @@ -167,6 +165,7 @@ static int32_t numa_on = 1; /**< NUMA is > > enabled > > > > by default. */ static uint32_t nb_lcores; static uint32_t > > > > single_sa; static uint32_t single_sa_idx; > > > > +static uint32_t nb_bufs_in_pool = 8192; > > > > > > Why to change the default number (behavior) here? > > > Why not to keep existing one as default? > > > > Or, at least try to guess required number of mbufs (like l3fwd, etc., do)? > > [Anoob] Existing code sets the default number of mbufs to 32k, which is leading to higher cache misses on our platform. 
Also, other > example applications have 8192 as the minimum. Hence the change. > > Do you see any perf issues with lowering the default value? 8K is not much at all. Ipsec-secgw uses 1K as RXD/TXD num per queue. So just 4RX+4TX queues will already bring it to the edge. With 8+ RX queues app simply wouldn't be able to start. Looks like a change in behavior. > Also, I'm fine with making the default one same as the ones in l2fwd & l3fwd. Ok. > > From l3fwd: > > /* > * This expression is used to calculate the number of mbufs needed > * depending on user input, taking into account memory for rx and > * tx hardware rings, cache per lcore and mtable per port per lcore. > * RTE_MAX is used to ensure that NB_MBUF never goes below a minimum > * value of 8192 > */ > #define NB_MBUF(nports) RTE_MAX( \ > (nports*nb_rx_queue*nb_rxd + \ > nports*nb_lcores*MAX_PKT_BURST + \ > nports*n_tx_queue*nb_txd + \ > nb_lcores*MEMPOOL_CACHE_SIZE), \ > (unsigned)8192) > > I do understand that we will have to rework the above logic a bit more to handle the in-flight packets in cryptodev. Yes, plus also will need to take into account size of fragmentation table. > What's your suggestion? I think the best way is to calculate mumber of required mbufs as discussed above, plus add ability to the user to overwrite this value (cmd-line option). > > > > > > > > /* > > > > * RX/TX HW offload capabilities to enable/use on ethernet ports. 
> > > > @@ -1261,6 +1260,7 @@ print_usage(const char *prgname) > > > > " [-w REPLAY_WINDOW_SIZE]" > > > > " [-e]" > > > > " [-a]" > > > > + " [-s NUMBER_OF_MBUFS_IN_PKT_POOL]" > > > > " -f CONFIG_FILE" > > > > " --config (port,queue,lcore)[,(port,queue,lcore)]" > > > > " [--single-sa SAIDX]" > > > > @@ -1284,6 +1284,7 @@ print_usage(const char *prgname) > > > > " size for each SA\n" > > > > " -e enables ESN\n" > > > > " -a enables SA SQN atomic behaviour\n" > > > > + " -s number of mbufs in packet pool (default 8192)\n" > > > > " -f CONFIG_FILE: Configuration file\n" > > > > " --config (port,queue,lcore): Rx queue configuration\n" > > > > " --single-sa SAIDX: Use single SA index for outbound > > traffic,\n" > > > > @@ -1534,7 +1535,7 @@ parse_args(int32_t argc, char **argv, struct > > > > eh_conf *eh_conf) > > > > > > > > argvopt = argv; > > > > > > > > - while ((opt = getopt_long(argc, argvopt, "aelp:Pu:f:j:w:", > > > > + while ((opt = getopt_long(argc, argvopt, "aelp:Pu:f:j:w:s:", > > > > lgopts, &option_index)) != EOF) { > > > > > > > > switch (opt) { > > > > @@ -1568,6 +1569,19 @@ parse_args(int32_t argc, char **argv, struct > > eh_conf *eh_conf) > > > > cfgfile = optarg; > > > > f_present = 1; > > > > break; > > > > + > > > > + case 's': > > > > + ret = parse_decimal(optarg); > > > > + if (ret < 0) { > > > > + printf("Invalid number of buffers in a pool: " > > > > + "%s\n", optarg); > > > > + print_usage(prgname); > > > > + return -1; > > > > + } > > > > + > > > > + nb_bufs_in_pool = ret; > > > > + break; > > > > + > > > > case 'j': > > > > ret = parse_decimal(optarg); > > > > if (ret < RTE_MBUF_DEFAULT_BUF_SIZE || @@ - > > 2792,11 +2806,12 @@ > > > > main(int32_t argc, char **argv) > > > > if (socket_ctx[socket_id].mbuf_pool) > > > > continue; > > > > > > > > - pool_init(&socket_ctx[socket_id], socket_id, NB_MBUF); > > > > + pool_init(&socket_ctx[socket_id], socket_id, > > nb_bufs_in_pool); > > > > session_pool_init(&socket_ctx[socket_id], socket_id, > > 
sess_sz); > > > > session_priv_pool_init(&socket_ctx[socket_id], socket_id, > > > > sess_sz); > > > > } > > > > + printf("Number of mbufs in packet pool %d\n", nb_bufs_in_pool); > > > > > > > > RTE_ETH_FOREACH_DEV(portid) { > > > > if ((enabled_port_mask & (1 << portid)) == 0) > > > > -- > > > > 2.7.4 ^ permalink raw reply [flat|nested] 147+ messages in thread
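The sizing rule discussed in this thread — the l3fwd NB_MBUF formula extended with cryptodev in-flight packets and the fragmentation table — can be sketched as plain arithmetic. The extra parameters (`nb_crypto_qp`, `crypto_qp_depth`, `frag_tbl_cap`) are assumptions made for illustration here, not actual ipsec-secgw symbols:

```c
#include <stdint.h>

/* Sketch of the mbuf-count logic discussed above: the l3fwd NB_MBUF
 * formula plus headroom for packets in flight in the cryptodev queue
 * pairs and entries held by the reassembly (fragmentation) table.
 * nb_crypto_qp, crypto_qp_depth and frag_tbl_cap are hypothetical
 * parameters for this sketch. */
#define MAX_PKT_BURST      32
#define MEMPOOL_CACHE_SIZE 256
#define MIN_NB_MBUF        8192u

static unsigned int
calc_nb_mbuf(unsigned int nports, unsigned int nb_rx_queue,
	     unsigned int nb_rxd, unsigned int n_tx_queue,
	     unsigned int nb_txd, unsigned int nb_lcores,
	     unsigned int nb_crypto_qp, unsigned int crypto_qp_depth,
	     unsigned int frag_tbl_cap)
{
	unsigned int nb = nports * nb_rx_queue * nb_rxd +
			  nports * nb_lcores * MAX_PKT_BURST +
			  nports * n_tx_queue * nb_txd +
			  nb_lcores * MEMPOOL_CACHE_SIZE +
			  nb_crypto_qp * crypto_qp_depth + /* in-flight in cryptodev */
			  frag_tbl_cap;                    /* reassembly table */

	/* Never go below the historical minimum of 8192 */
	return nb > MIN_NB_MBUF ? nb : MIN_NB_MBUF;
}
```

With 4 ports, 4 RX + 4 TX queues of 1024 descriptors each, 8 lcores, 8 crypto qps of depth 2048 and a 4096-entry frag table this yields 56320 mbufs — well above the old 8192 default, which illustrates why a cmd-line override is useful.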
* [dpdk-dev] [PATCH v2 00/12] add eventmode to ipsec-secgw 2019-12-08 12:30 [dpdk-dev] [PATCH 00/14] add eventmode to ipsec-secgw Anoob Joseph ` (13 preceding siblings ...) 2019-12-08 12:30 ` [dpdk-dev] [PATCH 14/14] examples/ipsec-secgw: add cmd line option for bufs Anoob Joseph @ 2020-01-20 13:45 ` Anoob Joseph 2020-01-20 13:45 ` [dpdk-dev] [PATCH v2 01/12] examples/ipsec-secgw: add default rte_flow for inline Rx Anoob Joseph ` (13 more replies) 14 siblings, 14 replies; 147+ messages in thread From: Anoob Joseph @ 2020-01-20 13:45 UTC (permalink / raw) To: Akhil Goyal, Radu Nicolau, Thomas Monjalon Cc: Anoob Joseph, Jerin Jacob, Narayana Prasad, Ankur Dwivedi, Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru, Lukasz Bartosik, Konstantin Ananyev, dev This series introduces event-mode additions to ipsec-secgw. This effort is parallel to the similar changes in l2fwd (l2fwd-event app) & l3fwd. With this series, ipsec-secgw would be able to run in eventmode. The worker thread (executing loop) would be receiving events and would be submitting them back to the eventdev after processing. This way, multicore scaling and h/w assisted scheduling is achieved by making use of the eventdev capabilities. Since the underlying event device will have varying capabilities, the worker thread could be drafted differently to maximize performance. This series introduces usage of multiple worker threads, among which the one to be used will be determined by the operating conditions and the underlying device capabilities. For example, if an event device - eth device pair has a Tx internal port, then the application can do tx_adapter_enqueue() instead of regular event_enqueue(). So a thread making an assumption that the device pair has an internal port will not be the right solution for another pair. 
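The capability-driven worker selection described above can be illustrated with a minimal capability-matching sketch. The flag values and the worker table below are hypothetical, not the actual ipsec-secgw implementation; they only demonstrate the "first worker whose required capabilities are all present" idea:

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical capability flags for this sketch */
#define CAP_TX_INTERNAL_PORT (1u << 0)
#define CAP_BURST_MODE       (1u << 1)

struct worker_variant {
	const char *name;
	uint32_t caps; /* capabilities this worker relies on */
};

/* Variants ordered from most to least demanding; a generic worker with
 * no requirements acts as the fallback. */
static const struct worker_variant workers[] = {
	{ "burst-internal-port",     CAP_TX_INTERNAL_PORT | CAP_BURST_MODE },
	{ "non-burst-internal-port", CAP_TX_INTERNAL_PORT },
	{ "non-burst-generic",       0 },
};

/* Return the index of the first worker whose required capabilities are
 * all offered by the device pair. */
static int
select_worker(uint32_t dev_caps)
{
	size_t i;

	for (i = 0; i < sizeof(workers) / sizeof(workers[0]); i++)
		if ((workers[i].caps & dev_caps) == workers[i].caps)
			return (int)i;
	return -1; /* unreachable: the generic worker always matches */
}
```

A device pair advertising only a Tx internal port would thus get the "non-burst-internal-port" variant, while one with no special capabilities falls back to the generic worker doing regular event_enqueue().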
The infrastructure added with these patches aims to help the application to have multiple worker threads, thereby extracting maximum performance from every device without affecting existing paths/use cases. The eventmode configuration is predefined. All packets reaching one eth port will hit one event queue. All event queues will be mapped to all event ports. So all cores will be able to receive traffic from all ports. When schedule_type is set as RTE_SCHED_TYPE_ORDERED/ATOMIC, the event device will ensure the ordering. Ordering would be lost when tried in PARALLEL. Following command line options are introduced, --transfer-mode: to choose between poll mode & event mode --schedule-type: to specify the scheduling type (RTE_SCHED_TYPE_ORDERED/ RTE_SCHED_TYPE_ATOMIC/ RTE_SCHED_TYPE_PARALLEL) Additionally the event mode introduces two modes of processing packets: Driver-mode: This mode will have bare minimum changes in the application to support ipsec. There wouldn't be any lookup etc done in the application. And for the inline-protocol use case, the thread would resemble l2fwd as the ipsec processing would be done entirely in the h/w. This mode can be used to benchmark the raw performance of the h/w. All the application side steps (like lookup) can be redone based on the requirement of the end user. Hence the need for a mode which would report the raw performance. App-mode: This mode will have all the features currently implemented with ipsec-secgw (non librte_ipsec mode). All the lookups etc would follow the existing methods and would report numbers that can be compared against regular ipsec-secgw benchmark numbers. The driver mode is selected with the existing --single-sa option (used also by poll mode). When the --single-sa option is used in conjunction with event mode then the index passed to --single-sa is ignored. 
Example commands to execute ipsec-secgw in various modes on OCTEON TX2 platform, #Inbound and outbound app mode ipsec-secgw -w 0002:02:00.0,ipsec_in_max_spi=128 -w 0002:03:00.0,ipsec_in_max_spi=128 -w 0002:0e:00.0 -w 0002:10:00.1 --log-level=8 -c 0x1 -- -P -p 0x3 -u 0x1 --config "(1,0,0),(0,0,0)" -f aes-gcm.cfg --transfer-mode event --schedule-type parallel #Inbound and outbound driver mode ipsec-secgw -w 0002:02:00.0,ipsec_in_max_spi=128 -w 0002:03:00.0,ipsec_in_max_spi=128 -w 0002:0e:00.0 -w 0002:10:00.1 --log-level=8 -c 0x1 -- -P -p 0x3 -u 0x1 --config "(1,0,0),(0,0,0)" -f aes-gcm.cfg --transfer-mode event --schedule-type parallel --single-sa 0 This series adds non-burst Tx internal port workers only. It provides infrastructure for non internal port workers, however does not define any. Also, only inline ipsec protocol mode is supported by the worker threads added. Following are planned features, 1. Add burst mode workers. 2. Add non internal port workers. 3. Verify support for Rx core (the support is added but we lack the h/w to verify it). 4. Add lookaside protocol support. Following are features that Marvell won't be attempting. 1. Inline crypto support. 2. Lookaside crypto support. For the features that Marvell won't be attempting, new workers can be introduced by the respective stakeholders. This series is tested on Marvell OCTEON TX2. Changes in v2: * Remove --process-dir option. Instead use existing unprotected port mask option (-u) to decide whether a port handles inbound or outbound traffic. * Remove --process-mode option. Instead use existing --single-sa option to select between app and driver modes. * Add handling of PKT_RX_SEC_OFFLOAD_FAIL result in app worker thread. * Fix passing of req_rx_offload flags to create_default_ipsec_flow(). * Move destruction of flows to a location where eth ports are stopped and closed. * Print error and exit when event mode --schedule-type option is used in poll mode. 
* Reduce number of goto statements replacing them with loop constructs. * Remove sec_session_fixed table and replace it with locally built table in driver worker thread. Table is indexed by port identifier and holds first inline session pointer found for a given port. * Print error and exit when sessions other than inline are configured in event mode. * When number of event queues is less than number of eth ports then map all eth ports to one event queue. * Cleanup and minor improvements in code as suggested by Konstantin Deferred to v3: * The final patch updates the hardcoded number of buffers in a pool. Also, there was a discussion on the update of the number of qp. Both the above can be handled properly if we can remove the logic which limits one core to only use one crypto qp. If we can allow one qp per lcore_param, every eth queue can have its own crypto qp and that would solve the requirements with OCTEON TX2 inline ipsec support as well. Patch with the mentioned change, http://patches.dpdk.org/patch/64408/ * Update ipsec-secgw documentation to describe the new options as well as event mode support. 
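The v2 change that replaces the --process-dir option with the existing unprotected port mask (-u) comes down to a bitmask test per receive port. The sketch below is an illustration of that idea, not the actual ipsec-secgw code: packets arriving on an unprotected port carry inbound (ESP) traffic, while protected ports carry outbound plaintext:

```c
#include <stdint.h>

/* Illustrative sketch: derive traffic direction from the unprotected
 * port mask (-u) instead of a dedicated --process-dir option.
 * enum names are hypothetical for this sketch. */
enum pkt_dir { DIR_OUTBOUND = 0, DIR_INBOUND = 1 };

static enum pkt_dir
port_direction(uint32_t unprotected_port_mask, uint16_t port_id)
{
	/* Unprotected ports receive encrypted traffic -> inbound path */
	return (unprotected_port_mask & (1u << port_id)) ?
		DIR_INBOUND : DIR_OUTBOUND;
}
```

With the example commands above (-u 0x1, -p 0x3), port 0 would be treated as the inbound (unprotected) side and port 1 as the outbound (protected) side.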
This series depends on the PMD changes submitted in the following set, http://patches.dpdk.org/project/dpdk/list/?series=8203 Ankur Dwivedi (1): examples/ipsec-secgw: add default rte_flow for inline Rx Anoob Joseph (5): examples/ipsec-secgw: add framework for eventmode helper examples/ipsec-secgw: add eventdev port-lcore link examples/ipsec-secgw: add Rx adapter support examples/ipsec-secgw: add Tx adapter support examples/ipsec-secgw: add routines to display config Lukasz Bartosik (6): examples/ipsec-secgw: add routines to launch workers examples/ipsec-secgw: add support for internal ports examples/ipsec-secgw: add eventmode to ipsec-secgw examples/ipsec-secgw: add driver mode worker examples/ipsec-secgw: add app mode worker examples/ipsec-secgw: add cmd line option for bufs examples/ipsec-secgw/Makefile | 2 + examples/ipsec-secgw/event_helper.c | 1714 +++++++++++++++++++++++++++++++++++ examples/ipsec-secgw/event_helper.h | 312 +++++++ examples/ipsec-secgw/ipsec-secgw.c | 502 ++++++++-- examples/ipsec-secgw/ipsec-secgw.h | 86 ++ examples/ipsec-secgw/ipsec.c | 7 + examples/ipsec-secgw/ipsec.h | 36 +- examples/ipsec-secgw/ipsec_worker.c | 656 ++++++++++++++ examples/ipsec-secgw/ipsec_worker.h | 39 + examples/ipsec-secgw/meson.build | 4 +- examples/ipsec-secgw/sa.c | 11 - 11 files changed, 3275 insertions(+), 94 deletions(-) create mode 100644 examples/ipsec-secgw/event_helper.c create mode 100644 examples/ipsec-secgw/event_helper.h create mode 100644 examples/ipsec-secgw/ipsec-secgw.h create mode 100644 examples/ipsec-secgw/ipsec_worker.c create mode 100644 examples/ipsec-secgw/ipsec_worker.h -- 2.7.4 ^ permalink raw reply [flat|nested] 147+ messages in thread
* [dpdk-dev] [PATCH v2 01/12] examples/ipsec-secgw: add default rte_flow for inline Rx 2020-01-20 13:45 ` [dpdk-dev] [PATCH v2 00/12] add eventmode to ipsec-secgw Anoob Joseph @ 2020-01-20 13:45 ` Anoob Joseph 2020-01-20 13:45 ` [dpdk-dev] [PATCH v2 02/12] examples/ipsec-secgw: add framework for eventmode helper Anoob Joseph ` (12 subsequent siblings) 13 siblings, 0 replies; 147+ messages in thread From: Anoob Joseph @ 2020-01-20 13:45 UTC (permalink / raw) To: Akhil Goyal, Radu Nicolau, Thomas Monjalon Cc: Ankur Dwivedi, Jerin Jacob, Narayana Prasad, Anoob Joseph, Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru, Lukasz Bartosik, Konstantin Ananyev, dev From: Ankur Dwivedi <adwivedi@marvell.com> The default flow created would enable security processing on all ESP packets. If the default flow is created, SA based rte_flow creation would be skipped. Signed-off-by: Ankur Dwivedi <adwivedi@marvell.com> Signed-off-by: Anoob Joseph <anoobj@marvell.com> --- examples/ipsec-secgw/ipsec-secgw.c | 61 +++++++++++++++++++++++++++++++++----- examples/ipsec-secgw/ipsec.c | 7 +++++ examples/ipsec-secgw/ipsec.h | 6 ++++ 3 files changed, 66 insertions(+), 8 deletions(-) diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c index 3b5aaf6..d5e8fe5 100644 --- a/examples/ipsec-secgw/ipsec-secgw.c +++ b/examples/ipsec-secgw/ipsec-secgw.c @@ -128,6 +128,8 @@ struct ethaddr_info ethaddr_tbl[RTE_MAX_ETHPORTS] = { { 0, ETHADDR(0x00, 0x16, 0x3e, 0x49, 0x9e, 0xdd) } }; +struct flow_info flow_info_tbl[RTE_MAX_ETHPORTS]; + #define CMD_LINE_OPT_CONFIG "config" #define CMD_LINE_OPT_SINGLE_SA "single-sa" #define CMD_LINE_OPT_CRYPTODEV_MASK "cryptodev_mask" @@ -2406,6 +2408,48 @@ reassemble_init(void) return rc; } +static void +create_default_ipsec_flow(uint16_t port_id, uint64_t rx_offloads) +{ + struct rte_flow_action action[2]; + struct rte_flow_item pattern[2]; + struct rte_flow_attr attr = {0}; + struct rte_flow_error err; + struct rte_flow *flow; + int ret; + 
+ if (!(rx_offloads & DEV_RX_OFFLOAD_SECURITY)) + return; + + /* Add the default rte_flow to enable SECURITY for all ESP packets */ + + pattern[0].type = RTE_FLOW_ITEM_TYPE_ESP; + pattern[0].spec = NULL; + pattern[0].mask = NULL; + pattern[0].last = NULL; + pattern[1].type = RTE_FLOW_ITEM_TYPE_END; + + action[0].type = RTE_FLOW_ACTION_TYPE_SECURITY; + action[0].conf = NULL; + action[1].type = RTE_FLOW_ACTION_TYPE_END; + action[1].conf = NULL; + + attr.ingress = 1; + + ret = rte_flow_validate(port_id, &attr, pattern, action, &err); + if (ret) + return; + + flow = rte_flow_create(port_id, &attr, pattern, action, &err); + if (flow == NULL) + return; + + flow_info_tbl[port_id].rx_def_flow = flow; + RTE_LOG(INFO, IPSEC, + "Created default flow enabling SECURITY for all ESP traffic on port %d\n", + port_id); +} + int32_t main(int32_t argc, char **argv) { @@ -2414,7 +2458,8 @@ main(int32_t argc, char **argv) uint32_t i; uint8_t socket_id; uint16_t portid; - uint64_t req_rx_offloads, req_tx_offloads; + uint64_t req_rx_offloads[RTE_MAX_ETHPORTS]; + uint64_t req_tx_offloads[RTE_MAX_ETHPORTS]; size_t sess_sz; /* init EAL */ @@ -2476,8 +2521,10 @@ main(int32_t argc, char **argv) if ((enabled_port_mask & (1 << portid)) == 0) continue; - sa_check_offloads(portid, &req_rx_offloads, &req_tx_offloads); - port_init(portid, req_rx_offloads, req_tx_offloads); + sa_check_offloads(portid, &req_rx_offloads[portid], + &req_tx_offloads[portid]); + port_init(portid, req_rx_offloads[portid], + req_tx_offloads[portid]); } cryptodevs_init(); @@ -2487,11 +2534,9 @@ main(int32_t argc, char **argv) if ((enabled_port_mask & (1 << portid)) == 0) continue; - /* - * Start device - * note: device must be started before a flow rule - * can be installed. 
- */ + /* Create flow before starting the device */ + create_default_ipsec_flow(portid, req_rx_offloads[portid]); + ret = rte_eth_dev_start(portid); if (ret < 0) rte_exit(EXIT_FAILURE, "rte_eth_dev_start: " diff --git a/examples/ipsec-secgw/ipsec.c b/examples/ipsec-secgw/ipsec.c index d4b5712..58f6e8c 100644 --- a/examples/ipsec-secgw/ipsec.c +++ b/examples/ipsec-secgw/ipsec.c @@ -261,6 +261,12 @@ create_inline_session(struct socket_ctx *skt_ctx, struct ipsec_sa *sa, unsigned int i; unsigned int j; + /* Don't create flow if default flow is created */ + if (flow_info_tbl[sa->portid].rx_def_flow) { + sa->cdev_id_qp = 0; + return 0; + } + ret = rte_eth_dev_info_get(sa->portid, &dev_info); if (ret != 0) { RTE_LOG(ERR, IPSEC, @@ -396,6 +402,7 @@ create_inline_session(struct socket_ctx *skt_ctx, struct ipsec_sa *sa, ips->security.ol_flags = sec_cap->ol_flags; ips->security.ctx = sec_ctx; } + sa->cdev_id_qp = 0; return 0; diff --git a/examples/ipsec-secgw/ipsec.h b/examples/ipsec-secgw/ipsec.h index 8e07521..28ff07d 100644 --- a/examples/ipsec-secgw/ipsec.h +++ b/examples/ipsec-secgw/ipsec.h @@ -81,6 +81,12 @@ struct app_sa_prm { extern struct app_sa_prm app_sa_prm; +struct flow_info { + struct rte_flow *rx_def_flow; +}; + +extern struct flow_info flow_info_tbl[RTE_MAX_ETHPORTS]; + enum { IPSEC_SESSION_PRIMARY = 0, IPSEC_SESSION_FALLBACK = 1, -- 2.7.4 ^ permalink raw reply [flat|nested] 147+ messages in thread
* [dpdk-dev] [PATCH v2 02/12] examples/ipsec-secgw: add framework for eventmode helper 2020-01-20 13:45 ` [dpdk-dev] [PATCH v2 00/12] add eventmode to ipsec-secgw Anoob Joseph 2020-01-20 13:45 ` [dpdk-dev] [PATCH v2 01/12] examples/ipsec-secgw: add default rte_flow for inline Rx Anoob Joseph @ 2020-01-20 13:45 ` Anoob Joseph 2020-01-20 13:45 ` [dpdk-dev] [PATCH v2 03/12] examples/ipsec-secgw: add eventdev port-lcore link Anoob Joseph ` (11 subsequent siblings) 13 siblings, 0 replies; 147+ messages in thread From: Anoob Joseph @ 2020-01-20 13:45 UTC (permalink / raw) To: Akhil Goyal, Radu Nicolau, Thomas Monjalon Cc: Anoob Joseph, Jerin Jacob, Narayana Prasad, Ankur Dwivedi, Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru, Lukasz Bartosik, Konstantin Ananyev, dev Add framework for eventmode helper. Event mode involves initialization of multiple devices such as eventdev and ethdev. Add routines to initialize and uninitialize the event device. Generate a default config for the event device if it is not specified in the configuration. Currently the event helper supports a single event device only. 
Signed-off-by: Anoob Joseph <anoobj@marvell.com> Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> --- examples/ipsec-secgw/Makefile | 1 + examples/ipsec-secgw/event_helper.c | 326 ++++++++++++++++++++++++++++++++++++ examples/ipsec-secgw/event_helper.h | 115 +++++++++++++ examples/ipsec-secgw/meson.build | 4 +- 4 files changed, 444 insertions(+), 2 deletions(-) create mode 100644 examples/ipsec-secgw/event_helper.c create mode 100644 examples/ipsec-secgw/event_helper.h diff --git a/examples/ipsec-secgw/Makefile b/examples/ipsec-secgw/Makefile index a4977f6..09e3c5a 100644 --- a/examples/ipsec-secgw/Makefile +++ b/examples/ipsec-secgw/Makefile @@ -15,6 +15,7 @@ SRCS-y += sa.c SRCS-y += rt.c SRCS-y += ipsec_process.c SRCS-y += ipsec-secgw.c +SRCS-y += event_helper.c CFLAGS += -gdwarf-2 diff --git a/examples/ipsec-secgw/event_helper.c b/examples/ipsec-secgw/event_helper.c new file mode 100644 index 0000000..82425de --- /dev/null +++ b/examples/ipsec-secgw/event_helper.c @@ -0,0 +1,326 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C) 2020 Marvell International Ltd. + */ +#include <rte_ethdev.h> +#include <rte_eventdev.h> + +#include "event_helper.h" + +static int +eh_set_default_conf_eventdev(struct eventmode_conf *em_conf) +{ + struct eventdev_params *eventdev_config; + struct rte_event_dev_info dev_info; + int lcore_count; + int nb_eventdev; + int nb_eth_dev; + int ret; + + /* Get the number of event devices */ + nb_eventdev = rte_event_dev_count(); + if (nb_eventdev == 0) { + EH_LOG_ERR("No event devices detected"); + return -EINVAL; + } + + if (nb_eventdev != 1) { + EH_LOG_ERR("Event mode does not support multiple event devices. 
" + "Please provide only one event device."); + return -EINVAL; + } + + /* Get the number of eth devs */ + nb_eth_dev = rte_eth_dev_count_avail(); + if (nb_eth_dev == 0) { + EH_LOG_ERR("No eth devices detected"); + return -EINVAL; + } + + /* Get the number of lcores */ + lcore_count = rte_lcore_count(); + + /* Read event device info */ + ret = rte_event_dev_info_get(0, &dev_info); + if (ret < 0) { + EH_LOG_ERR("Failed to read event device info %d", ret); + return ret; + } + + /* Check if enough ports are available */ + if (dev_info.max_event_ports < 2) { + EH_LOG_ERR("Not enough event ports available"); + return -EINVAL; + } + + /* Get the first event dev conf */ + eventdev_config = &(em_conf->eventdev_config[0]); + + /* Save number of queues & ports available */ + eventdev_config->eventdev_id = 0; + eventdev_config->nb_eventqueue = dev_info.max_event_queues; + eventdev_config->nb_eventport = dev_info.max_event_ports; + eventdev_config->ev_queue_mode = + RTE_EVENT_QUEUE_CFG_ALL_TYPES; + + /* Check if there are more queues than required */ + if (eventdev_config->nb_eventqueue > nb_eth_dev + 1) { + /* One queue is reserved for Tx */ + eventdev_config->nb_eventqueue = nb_eth_dev + 1; + } + + /* Check if there are more ports than required */ + if (eventdev_config->nb_eventport > lcore_count) { + /* One port per lcore is enough */ + eventdev_config->nb_eventport = lcore_count; + } + + /* Update the number of event devices */ + em_conf->nb_eventdev++; + + return 0; +} + +static int +eh_validate_conf(struct eventmode_conf *em_conf) +{ + int ret; + + /* + * Check if event devs are specified. 
Else probe the event devices + * and initialize the config with all ports & queues available + */ + if (em_conf->nb_eventdev == 0) { + ret = eh_set_default_conf_eventdev(em_conf); + if (ret != 0) + return ret; + } + + return 0; +} + +static int +eh_initialize_eventdev(struct eventmode_conf *em_conf) +{ + struct rte_event_queue_conf eventq_conf = {0}; + struct rte_event_dev_info evdev_default_conf; + struct rte_event_dev_config eventdev_conf; + struct eventdev_params *eventdev_config; + int nb_eventdev = em_conf->nb_eventdev; + uint8_t eventdev_id; + int nb_eventqueue; + uint8_t i, j; + int ret; + + for (i = 0; i < nb_eventdev; i++) { + + /* Get eventdev config */ + eventdev_config = &(em_conf->eventdev_config[i]); + + /* Get event dev ID */ + eventdev_id = eventdev_config->eventdev_id; + + /* Get the number of queues */ + nb_eventqueue = eventdev_config->nb_eventqueue; + + /* Reset the default conf */ + memset(&evdev_default_conf, 0, + sizeof(struct rte_event_dev_info)); + + /* Get default conf of eventdev */ + ret = rte_event_dev_info_get(eventdev_id, &evdev_default_conf); + if (ret < 0) { + EH_LOG_ERR( + "Error in getting event device info[devID:%d]", + eventdev_id); + return ret; + } + + memset(&eventdev_conf, 0, sizeof(struct rte_event_dev_config)); + eventdev_conf.nb_events_limit = + evdev_default_conf.max_num_events; + eventdev_conf.nb_event_queues = nb_eventqueue; + eventdev_conf.nb_event_ports = + eventdev_config->nb_eventport; + eventdev_conf.nb_event_queue_flows = + evdev_default_conf.max_event_queue_flows; + eventdev_conf.nb_event_port_dequeue_depth = + evdev_default_conf.max_event_port_dequeue_depth; + eventdev_conf.nb_event_port_enqueue_depth = + evdev_default_conf.max_event_port_enqueue_depth; + + /* Configure event device */ + ret = rte_event_dev_configure(eventdev_id, &eventdev_conf); + if (ret < 0) { + EH_LOG_ERR("Error in configuring event device"); + return ret; + } + + /* Configure event queues */ + for (j = 0; j < nb_eventqueue; j++) { + + 
memset(&eventq_conf, 0, + sizeof(struct rte_event_queue_conf)); + + /* Read the requested conf */ + + /* Per event dev queues can be ATQ or SINGLE LINK */ + eventq_conf.event_queue_cfg = + eventdev_config->ev_queue_mode; + /* + * All queues need to be set with sched_type as + * schedule type for the application stage. One queue + * would be reserved for the final eth tx stage. This + * will be an atomic queue. + */ + if (j == nb_eventqueue-1) { + eventq_conf.schedule_type = + RTE_SCHED_TYPE_ATOMIC; + } else { + eventq_conf.schedule_type = + em_conf->ext_params.sched_type; + } + + /* Set max atomic flows to 1024 */ + eventq_conf.nb_atomic_flows = 1024; + eventq_conf.nb_atomic_order_sequences = 1024; + + /* Setup the queue */ + ret = rte_event_queue_setup(eventdev_id, j, + &eventq_conf); + if (ret < 0) { + EH_LOG_ERR("Failed to setup event queue %d", + ret); + return ret; + } + } + + /* Configure event ports */ + for (j = 0; j < eventdev_config->nb_eventport; j++) { + ret = rte_event_port_setup(eventdev_id, j, NULL); + if (ret < 0) { + EH_LOG_ERR("Failed to setup event port %d", + ret); + return ret; + } + } + } + + /* Start event devices */ + for (i = 0; i < nb_eventdev; i++) { + + /* Get eventdev config */ + eventdev_config = &(em_conf->eventdev_config[i]); + + ret = rte_event_dev_start(eventdev_config->eventdev_id); + if (ret < 0) { + EH_LOG_ERR("Failed to start event device %d, %d", + i, ret); + return ret; + } + } + return 0; +} + +int32_t +eh_devs_init(struct eh_conf *conf) +{ + struct eventmode_conf *em_conf; + uint16_t port_id; + int ret; + + if (conf == NULL) { + EH_LOG_ERR("Invalid event helper configuration"); + return -EINVAL; + } + + if (conf->mode != EH_PKT_TRANSFER_MODE_EVENT) + return 0; + + if (conf->mode_params == NULL) { + EH_LOG_ERR("Invalid event mode parameters"); + return -EINVAL; + } + + /* Get eventmode conf */ + em_conf = conf->mode_params; + + /* Validate the requested config */ + ret = eh_validate_conf(em_conf); + if (ret < 0) { + 
EH_LOG_ERR("Failed to validate the requested config %d", ret); + return ret; + } + + /* Stop eth devices before setting up adapter */ + RTE_ETH_FOREACH_DEV(port_id) { + + /* Use only the ports enabled */ + if ((conf->eth_portmask & (1 << port_id)) == 0) + continue; + + rte_eth_dev_stop(port_id); + } + + /* Setup eventdev */ + ret = eh_initialize_eventdev(em_conf); + if (ret < 0) { + EH_LOG_ERR("Failed to initialize event dev %d", ret); + return ret; + } + + /* Start eth devices after setting up adapter */ + RTE_ETH_FOREACH_DEV(port_id) { + + /* Use only the ports enabled */ + if ((conf->eth_portmask & (1 << port_id)) == 0) + continue; + + ret = rte_eth_dev_start(port_id); + if (ret < 0) { + EH_LOG_ERR("Failed to start eth dev %d, %d", + port_id, ret); + return ret; + } + } + + return 0; +} + +int32_t +eh_devs_uninit(struct eh_conf *conf) +{ + struct eventmode_conf *em_conf; + uint16_t id; + int ret, i; + + if (conf == NULL) { + EH_LOG_ERR("Invalid event helper configuration"); + return -EINVAL; + } + + if (conf->mode != EH_PKT_TRANSFER_MODE_EVENT) + return 0; + + if (conf->mode_params == NULL) { + EH_LOG_ERR("Invalid event mode parameters"); + return -EINVAL; + } + + /* Get eventmode conf */ + em_conf = conf->mode_params; + + /* Stop and release event devices */ + for (i = 0; i < em_conf->nb_eventdev; i++) { + + id = em_conf->eventdev_config[i].eventdev_id; + rte_event_dev_stop(id); + + ret = rte_event_dev_close(id); + if (ret < 0) { + EH_LOG_ERR("Failed to close event dev %d, %d", id, ret); + return ret; + } + } + + return 0; +} diff --git a/examples/ipsec-secgw/event_helper.h b/examples/ipsec-secgw/event_helper.h new file mode 100644 index 0000000..7685987 --- /dev/null +++ b/examples/ipsec-secgw/event_helper.h @@ -0,0 +1,115 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C) 2020 Marvell International Ltd. 
+ */ +#ifndef _EVENT_HELPER_H_ +#define _EVENT_HELPER_H_ + +#ifdef __cplusplus +extern "C" { +#endif + +#include <rte_log.h> + +#define RTE_LOGTYPE_EH RTE_LOGTYPE_USER4 + +#define EH_LOG_ERR(...) \ + RTE_LOG(ERR, EH, \ + RTE_FMT("%s() line %u: " RTE_FMT_HEAD(__VA_ARGS__ ,) "\n", \ + __func__, __LINE__, RTE_FMT_TAIL(__VA_ARGS__ ,))) + +/* Max event devices supported */ +#define EVENT_MODE_MAX_EVENT_DEVS RTE_EVENT_MAX_DEVS + +/** + * Packet transfer mode of the application + */ +enum eh_pkt_transfer_mode { + EH_PKT_TRANSFER_MODE_POLL = 0, + EH_PKT_TRANSFER_MODE_EVENT, +}; + +/* Event dev params */ +struct eventdev_params { + uint8_t eventdev_id; + uint8_t nb_eventqueue; + uint8_t nb_eventport; + uint8_t ev_queue_mode; +}; + +/* Eventmode conf data */ +struct eventmode_conf { + int nb_eventdev; + /**< No of event devs */ + struct eventdev_params eventdev_config[EVENT_MODE_MAX_EVENT_DEVS]; + /**< Per event dev conf */ + union { + RTE_STD_C11 + struct { + uint64_t sched_type : 2; + /**< Schedule type */ + }; + uint64_t u64; + } ext_params; + /**< 64 bit field to specify extended params */ +}; + +/** + * Event helper configuration + */ +struct eh_conf { + enum eh_pkt_transfer_mode mode; + /**< Packet transfer mode of the application */ + uint32_t eth_portmask; + /**< + * Mask of the eth ports to be used. This portmask would be + * checked while initializing devices using helper routines. + */ + void *mode_params; + /**< Mode specific parameters */ +}; + +/** + * Initialize event mode devices + * + * Application can call this function to get the event devices, eth devices + * and eth rx & tx adapters initialized according to the default config or + * config populated using the command line args. + * + * Application is expected to initialize the eth devices and then the event + * mode helper subsystem will stop & start eth devices according to its + * requirement. Call to this function should be done after the eth devices + * are successfully initialized. 
+ * + * @param conf + * Event helper configuration + * @return + * - 0 on success. + * - (<0) on failure. + */ +int32_t +eh_devs_init(struct eh_conf *conf); + +/** + * Release event mode devices + * + * Application can call this function to release event devices, + * eth rx & tx adapters according to the config. + * + * Call to this function should be done before application stops + * and closes eth devices. This function will not close and stop + * eth devices. + * + * @param conf + * Event helper configuration + * @return + * - 0 on success. + * - (<0) on failure. + */ +int32_t +eh_devs_uninit(struct eh_conf *conf); + +#ifdef __cplusplus +} +#endif + +#endif /* _EVENT_HELPER_H_ */ diff --git a/examples/ipsec-secgw/meson.build b/examples/ipsec-secgw/meson.build index 9ece345..20f4064 100644 --- a/examples/ipsec-secgw/meson.build +++ b/examples/ipsec-secgw/meson.build @@ -6,9 +6,9 @@ # To build this example as a standalone application with an already-installed # DPDK instance, use 'make' -deps += ['security', 'lpm', 'acl', 'hash', 'ip_frag', 'ipsec'] +deps += ['security', 'lpm', 'acl', 'hash', 'ip_frag', 'ipsec', 'eventdev'] allow_experimental_apis = true sources = files( 'esp.c', 'ipsec.c', 'ipsec_process.c', 'ipsec-secgw.c', - 'parser.c', 'rt.c', 'sa.c', 'sp4.c', 'sp6.c' + 'parser.c', 'rt.c', 'sa.c', 'sp4.c', 'sp6.c', 'event_helper.c' ) -- 2.7.4 ^ permalink raw reply [flat|nested] 147+ messages in thread
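The default sizing performed by eh_set_default_conf_eventdev() in the patch above — one event queue per eth port plus one reserved for the final Tx stage, and one event port per lcore, both capped by the device limits — boils down to two min() operations, sketched here outside the DPDK types for clarity:

```c
/* Standalone sketch of the default-config sizing in
 * eh_set_default_conf_eventdev() above. */
struct evdev_defaults {
	int nb_eventqueue;
	int nb_eventport;
};

static struct evdev_defaults
evdev_default_sizes(int max_event_queues, int max_event_ports,
		    int nb_eth_dev, int lcore_count)
{
	struct evdev_defaults d;

	/* One queue per eth port, plus one queue reserved for Tx,
	 * but never more than the device supports */
	d.nb_eventqueue = max_event_queues;
	if (d.nb_eventqueue > nb_eth_dev + 1)
		d.nb_eventqueue = nb_eth_dev + 1;

	/* One event port per lcore is enough */
	d.nb_eventport = max_event_ports;
	if (d.nb_eventport > lcore_count)
		d.nb_eventport = lcore_count;

	return d;
}
```

For example, a device advertising 16 queues and 8 ports, used with 2 eth ports and 4 lcores, would be configured with 3 event queues (2 + 1 Tx) and 4 event ports.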
* [dpdk-dev] [PATCH v2 03/12] examples/ipsec-secgw: add eventdev port-lcore link 2020-01-20 13:45 ` [dpdk-dev] [PATCH v2 00/12] add eventmode to ipsec-secgw Anoob Joseph 2020-01-20 13:45 ` [dpdk-dev] [PATCH v2 01/12] examples/ipsec-secgw: add default rte_flow for inline Rx Anoob Joseph 2020-01-20 13:45 ` [dpdk-dev] [PATCH v2 02/12] examples/ipsec-secgw: add framework for eventmode helper Anoob Joseph @ 2020-01-20 13:45 ` Anoob Joseph 2020-01-20 13:45 ` [dpdk-dev] [PATCH v2 04/12] examples/ipsec-secgw: add Rx adapter support Anoob Joseph ` (10 subsequent siblings) 13 siblings, 0 replies; 147+ messages in thread From: Anoob Joseph @ 2020-01-20 13:45 UTC (permalink / raw) To: Akhil Goyal, Radu Nicolau, Thomas Monjalon Cc: Anoob Joseph, Jerin Jacob, Narayana Prasad, Ankur Dwivedi, Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru, Lukasz Bartosik, Konstantin Ananyev, dev Add event device port-lcore link and specify which event queues should be connected to the event port. Generate a default config for event port-lcore links if it is not specified in the configuration. This routine will check the number of available ports and then create links according to the number of cores available. This patch also adds a new entry in the eventmode conf to denote that all queues are to be linked with every port. This enables one core to receive packets from all ethernet ports. Signed-off-by: Anoob Joseph <anoobj@marvell.com> Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> --- examples/ipsec-secgw/event_helper.c | 126 ++++++++++++++++++++++++++++++++++++ examples/ipsec-secgw/event_helper.h | 33 ++++++++++ 2 files changed, 159 insertions(+) diff --git a/examples/ipsec-secgw/event_helper.c b/examples/ipsec-secgw/event_helper.c index 82425de..cf2dff0 100644 --- a/examples/ipsec-secgw/event_helper.c +++ b/examples/ipsec-secgw/event_helper.c @@ -1,11 +1,33 @@ /* SPDX-License-Identifier: BSD-3-Clause * Copyright (C) 2020 Marvell International Ltd. 
*/ +#include <rte_bitmap.h> #include <rte_ethdev.h> #include <rte_eventdev.h> +#include <rte_malloc.h> #include "event_helper.h" +static inline unsigned int +eh_get_next_active_core(struct eventmode_conf *em_conf, unsigned int prev_core) +{ + unsigned int next_core; + + /* Get next active core skipping cores reserved as eth cores */ + do { + /* Get the next core */ + next_core = rte_get_next_lcore(prev_core, 0, 0); + + /* Check if we have reached max lcores */ + if (next_core == RTE_MAX_LCORE) + return next_core; + + prev_core = next_core; + } while (rte_bitmap_get(em_conf->eth_core_mask, next_core)); + + return next_core; +} + static int eh_set_default_conf_eventdev(struct eventmode_conf *em_conf) { @@ -81,6 +103,71 @@ eh_set_default_conf_eventdev(struct eventmode_conf *em_conf) } static int +eh_set_default_conf_link(struct eventmode_conf *em_conf) +{ + struct eventdev_params *eventdev_config; + struct eh_event_link_info *link; + unsigned int lcore_id = -1; + int i, link_index; + + /* + * Create a 1:1 mapping from event ports to cores. If the number + * of event ports is lesser than the cores, some cores won't + * execute worker. If there are more event ports, then some ports + * won't be used. + * + */ + + /* + * The event queue-port mapping is done according to the link. Since + * we are falling back to the default link config, enabling + * "all_ev_queue_to_ev_port" mode flag. This will map all queues + * to the port. 
+ */ + em_conf->ext_params.all_ev_queue_to_ev_port = 1; + + /* Get first event dev conf */ + eventdev_config = &(em_conf->eventdev_config[0]); + + /* Loop through the ports */ + for (i = 0; i < eventdev_config->nb_eventport; i++) { + + /* Get next active core id */ + lcore_id = eh_get_next_active_core(em_conf, + lcore_id); + + if (lcore_id == RTE_MAX_LCORE) { + /* Reached max cores */ + return 0; + } + + /* Save the current combination as one link */ + + /* Get the index */ + link_index = em_conf->nb_link; + + /* Get the corresponding link */ + link = &(em_conf->link[link_index]); + + /* Save link */ + link->eventdev_id = eventdev_config->eventdev_id; + link->event_port_id = i; + link->lcore_id = lcore_id; + + /* + * Don't set eventq_id as by default all queues + * need to be mapped to the port, which is controlled + * by the operating mode. + */ + + /* Update number of links */ + em_conf->nb_link++; + } + + return 0; +} + +static int eh_validate_conf(struct eventmode_conf *em_conf) { int ret; @@ -95,6 +182,16 @@ eh_validate_conf(struct eventmode_conf *em_conf) return ret; } + /* + * Check if links are specified. Else generate a default config for + * the event ports used. 
+ */ + if (em_conf->nb_link == 0) { + ret = eh_set_default_conf_link(em_conf); + if (ret != 0) + return ret; + } + return 0; } @@ -106,6 +203,8 @@ eh_initialize_eventdev(struct eventmode_conf *em_conf) struct rte_event_dev_config eventdev_conf; struct eventdev_params *eventdev_config; int nb_eventdev = em_conf->nb_eventdev; + struct eh_event_link_info *link; + uint8_t *queue = NULL; uint8_t eventdev_id; int nb_eventqueue; uint8_t i, j; @@ -205,6 +304,33 @@ eh_initialize_eventdev(struct eventmode_conf *em_conf) } } + /* Make event queue - event port link */ + for (j = 0; j < em_conf->nb_link; j++) { + + /* Get link info */ + link = &(em_conf->link[j]); + + /* Get event dev ID */ + eventdev_id = link->eventdev_id; + + /* + * If "all_ev_queue_to_ev_port" params flag is selected, all + * queues need to be mapped to the port. + */ + if (em_conf->ext_params.all_ev_queue_to_ev_port) + queue = NULL; + else + queue = &(link->eventq_id); + + /* Link queue to port */ + ret = rte_event_port_link(eventdev_id, link->event_port_id, + queue, NULL, 1); + if (ret < 0) { + EH_LOG_ERR("Failed to link event port %d", ret); + return ret; + } + } + /* Start event devices */ for (i = 0; i < nb_eventdev; i++) { diff --git a/examples/ipsec-secgw/event_helper.h b/examples/ipsec-secgw/event_helper.h index 7685987..16b03b3 100644 --- a/examples/ipsec-secgw/event_helper.h +++ b/examples/ipsec-secgw/event_helper.h @@ -20,6 +20,13 @@ extern "C" { /* Max event devices supported */ #define EVENT_MODE_MAX_EVENT_DEVS RTE_EVENT_MAX_DEVS +/* Max event queues supported per event device */ +#define EVENT_MODE_MAX_EVENT_QUEUES_PER_DEV RTE_EVENT_MAX_QUEUES_PER_DEV + +/* Max event-lcore links */ +#define EVENT_MODE_MAX_LCORE_LINKS \ + (EVENT_MODE_MAX_EVENT_DEVS * EVENT_MODE_MAX_EVENT_QUEUES_PER_DEV) + /** * Packet transfer mode of the application */ @@ -36,17 +43,43 @@ struct eventdev_params { uint8_t ev_queue_mode; }; +/** + * Event-lcore link configuration + */ +struct eh_event_link_info { + uint8_t 
eventdev_id; + /**< Event device ID */ + uint8_t event_port_id; + /**< Event port ID */ + uint8_t eventq_id; + /**< Event queue to be linked to the port */ + uint8_t lcore_id; + /**< Lcore to be polling on this port */ +}; + /* Eventmode conf data */ struct eventmode_conf { int nb_eventdev; /**< No of event devs */ struct eventdev_params eventdev_config[EVENT_MODE_MAX_EVENT_DEVS]; /**< Per event dev conf */ + uint8_t nb_link; + /**< No of links */ + struct eh_event_link_info + link[EVENT_MODE_MAX_LCORE_LINKS]; + /**< Per link conf */ + struct rte_bitmap *eth_core_mask; + /**< Core mask of cores to be used for software Rx and Tx */ union { RTE_STD_C11 struct { uint64_t sched_type : 2; /**< Schedule type */ + uint64_t all_ev_queue_to_ev_port : 1; + /**< + * When enabled, all event queues need to be mapped to + * each event port + */ }; uint64_t u64; } ext_params; -- 2.7.4 ^ permalink raw reply [flat|nested] 147+ messages in thread
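The default link config above walks the enabled lcores, skipping the ones reserved as eth cores, and assigns one event port per remaining core. A minimal stand-alone sketch of that core-selection loop, with plain arrays standing in for `rte_get_next_lcore()` and the `em_conf->eth_core_mask` bitmap (all names and the example core layout below are illustrative, not DPDK APIs):

```c
#include <stdint.h>

/* Hypothetical stand-in for RTE_MAX_LCORE */
#define MAX_LCORE 8

/* Which lcores are enabled, and which are reserved as eth (Rx/Tx) cores */
static const uint8_t enabled[MAX_LCORE]  = { 1, 1, 1, 1, 0, 0, 1, 0 };
static const uint8_t eth_core[MAX_LCORE] = { 0, 1, 0, 0, 0, 0, 0, 0 };

/*
 * Next enabled lcore after prev that is not an eth core; MAX_LCORE
 * means "no more cores" (mirrors RTE_MAX_LCORE). Starting from
 * (unsigned int)-1 wraps to core 0, as eh_set_default_conf_link()
 * does with its initial lcore_id of -1.
 */
static unsigned int
next_active_core(unsigned int prev)
{
	unsigned int core;

	for (core = prev + 1; core < MAX_LCORE; core++)
		if (enabled[core] && !eth_core[core])
			return core;
	return MAX_LCORE;
}
```

With this layout, event ports 0..3 would land on worker cores 0, 2, 3 and 6: core 1 is skipped because it is reserved for eth duty, and cores 4 and 5 because they are not enabled.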
* [dpdk-dev] [PATCH v2 04/12] examples/ipsec-secgw: add Rx adapter support 2020-01-20 13:45 ` [dpdk-dev] [PATCH v2 00/12] add eventmode to ipsec-secgw Anoob Joseph ` (2 preceding siblings ...) 2020-01-20 13:45 ` [dpdk-dev] [PATCH v2 03/12] examples/ipsec-secgw: add eventdev port-lcore link Anoob Joseph @ 2020-01-20 13:45 ` Anoob Joseph 2020-01-20 13:45 ` [dpdk-dev] [PATCH v2 05/12] examples/ipsec-secgw: add Tx " Anoob Joseph ` (9 subsequent siblings) 13 siblings, 0 replies; 147+ messages in thread From: Anoob Joseph @ 2020-01-20 13:45 UTC (permalink / raw) To: Akhil Goyal, Radu Nicolau, Thomas Monjalon Cc: Anoob Joseph, Jerin Jacob, Narayana Prasad, Ankur Dwivedi, Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru, Lukasz Bartosik, Konstantin Ananyev, dev Add Rx adapter support. The event helper init routine will initialize the Rx adapter according to the configuration. If Rx adapter config is not present it will generate a default config. If there are enough event queues available it will map eth ports and event queues 1:1 (one eth port will be connected to one event queue). Otherwise it will map all eth ports to one event queue. 
Signed-off-by: Anoob Joseph <anoobj@marvell.com> Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> --- examples/ipsec-secgw/event_helper.c | 273 +++++++++++++++++++++++++++++++++++- examples/ipsec-secgw/event_helper.h | 29 ++++ 2 files changed, 301 insertions(+), 1 deletion(-) diff --git a/examples/ipsec-secgw/event_helper.c b/examples/ipsec-secgw/event_helper.c index cf2dff0..1d06a45 100644 --- a/examples/ipsec-secgw/event_helper.c +++ b/examples/ipsec-secgw/event_helper.c @@ -4,10 +4,58 @@ #include <rte_bitmap.h> #include <rte_ethdev.h> #include <rte_eventdev.h> +#include <rte_event_eth_rx_adapter.h> #include <rte_malloc.h> +#include <stdbool.h> #include "event_helper.h" +static int +eh_get_enabled_cores(struct rte_bitmap *eth_core_mask) +{ + int i, count = 0; + + RTE_LCORE_FOREACH(i) { + /* Check if this core is enabled in core mask*/ + if (rte_bitmap_get(eth_core_mask, i)) { + /* Found enabled core */ + count++; + } + } + return count; +} + +static inline unsigned int +eh_get_next_eth_core(struct eventmode_conf *em_conf) +{ + static unsigned int prev_core = -1; + unsigned int next_core; + + /* + * Make sure we have at least one eth core running, else the following + * logic would lead to an infinite loop. 
+ */ + if (eh_get_enabled_cores(em_conf->eth_core_mask) == 0) { + EH_LOG_ERR("No enabled eth core found"); + return RTE_MAX_LCORE; + } + + /* Only some cores are marked as eth cores, skip others */ + do { + /* Get the next core */ + next_core = rte_get_next_lcore(prev_core, 0, 1); + + /* Check if we have reached max lcores */ + if (next_core == RTE_MAX_LCORE) + return next_core; + + /* Update prev_core */ + prev_core = next_core; + } while (!(rte_bitmap_get(em_conf->eth_core_mask, next_core))); + + return next_core; +} + static inline unsigned int eh_get_next_active_core(struct eventmode_conf *em_conf, unsigned int prev_core) { @@ -168,6 +216,82 @@ eh_set_default_conf_link(struct eventmode_conf *em_conf) } static int +eh_set_default_conf_rx_adapter(struct eventmode_conf *em_conf) +{ + struct rx_adapter_connection_info *conn; + struct eventdev_params *eventdev_config; + struct rx_adapter_conf *adapter; + bool single_ev_queue = false; + int eventdev_id; + int nb_eth_dev; + int adapter_id; + int conn_id; + int i; + + /* Create one adapter with eth queues mapped to event queue(s) */ + + if (em_conf->nb_eventdev == 0) { + EH_LOG_ERR("No event devs registered"); + return -EINVAL; + } + + /* Get the number of eth devs */ + nb_eth_dev = rte_eth_dev_count_avail(); + + /* Use the first event dev */ + eventdev_config = &(em_conf->eventdev_config[0]); + + /* Get eventdev ID */ + eventdev_id = eventdev_config->eventdev_id; + adapter_id = 0; + + /* Get adapter conf */ + adapter = &(em_conf->rx_adapter[adapter_id]); + + /* Set adapter conf */ + adapter->eventdev_id = eventdev_id; + adapter->adapter_id = adapter_id; + adapter->rx_core_id = eh_get_next_eth_core(em_conf); + + /* + * Map all queues of eth device (port) to an event queue. If there + * are more event queues than eth ports then create 1:1 mapping. + * Otherwise map all eth ports to a single event queue. 
+ */ + if (nb_eth_dev > eventdev_config->nb_eventqueue) + single_ev_queue = true; + + for (i = 0; i < nb_eth_dev; i++) { + + /* Use only the ports enabled */ + if ((em_conf->eth_portmask & (1 << i)) == 0) + continue; + + /* Get the connection id */ + conn_id = adapter->nb_connections; + + /* Get the connection */ + conn = &(adapter->conn[conn_id]); + + /* Set mapping between eth ports & event queues */ + conn->ethdev_id = i; + conn->eventq_id = single_ev_queue ? 0 : i; + + /* Add all eth queues of the eth port to the event queue */ + conn->ethdev_rx_qid = -1; + + /* Update no of connections */ + adapter->nb_connections++; + + } + + /* We have set up one adapter */ + em_conf->nb_rx_adapter = 1; + + return 0; +} + +static int eh_validate_conf(struct eventmode_conf *em_conf) { int ret; @@ -192,6 +316,16 @@ eh_validate_conf(struct eventmode_conf *em_conf) return ret; } + /* + * Check if rx adapters are specified. Else generate a default config + * with one rx adapter and all eth queues mapped to event queues. 
+ */ + if (em_conf->nb_rx_adapter == 0) { + ret = eh_set_default_conf_rx_adapter(em_conf); + if (ret != 0) + return ret; + } + return 0; } @@ -347,6 +481,104 @@ eh_initialize_eventdev(struct eventmode_conf *em_conf) return 0; } +static int +eh_rx_adapter_configure(struct eventmode_conf *em_conf, + struct rx_adapter_conf *adapter) +{ + struct rte_event_eth_rx_adapter_queue_conf queue_conf = {0}; + struct rte_event_dev_info evdev_default_conf = {0}; + struct rte_event_port_conf port_conf = {0}; + struct rx_adapter_connection_info *conn; + uint8_t eventdev_id; + uint32_t service_id; + int ret; + int j; + + /* Get event dev ID */ + eventdev_id = adapter->eventdev_id; + + /* Get default configuration of event dev */ + ret = rte_event_dev_info_get(eventdev_id, &evdev_default_conf); + if (ret < 0) { + EH_LOG_ERR("Failed to get event dev info %d", ret); + return ret; + } + + /* Setup port conf */ + port_conf.new_event_threshold = 1200; + port_conf.dequeue_depth = + evdev_default_conf.max_event_port_dequeue_depth; + port_conf.enqueue_depth = + evdev_default_conf.max_event_port_enqueue_depth; + + /* Create Rx adapter */ + ret = rte_event_eth_rx_adapter_create(adapter->adapter_id, + adapter->eventdev_id, &port_conf); + if (ret < 0) { + EH_LOG_ERR("Failed to create rx adapter %d", ret); + return ret; + } + + /* Setup various connections in the adapter */ + for (j = 0; j < adapter->nb_connections; j++) { + /* Get connection */ + conn = &(adapter->conn[j]); + + /* Setup queue conf */ + queue_conf.ev.queue_id = conn->eventq_id; + queue_conf.ev.sched_type = em_conf->ext_params.sched_type; + queue_conf.ev.event_type = RTE_EVENT_TYPE_ETHDEV; + + /* Add queue to the adapter */ + ret = rte_event_eth_rx_adapter_queue_add(adapter->adapter_id, + conn->ethdev_id, conn->ethdev_rx_qid, + &queue_conf); + if (ret < 0) { + EH_LOG_ERR("Failed to add eth queue to rx adapter %d", + ret); + return ret; + } + } + + /* Get the service ID used by rx adapter */ + ret = 
rte_event_eth_rx_adapter_service_id_get(adapter->adapter_id, + &service_id); + if (ret != -ESRCH && ret < 0) { + EH_LOG_ERR("Failed to get service id used by rx adapter %d", + ret); + return ret; + } + + rte_service_set_runstate_mapped_check(service_id, 0); + + /* Start adapter */ + ret = rte_event_eth_rx_adapter_start(adapter->adapter_id); + if (ret < 0) { + EH_LOG_ERR("Failed to start rx adapter %d", ret); + return ret; + } + + return 0; +} + +static int +eh_initialize_rx_adapter(struct eventmode_conf *em_conf) +{ + struct rx_adapter_conf *adapter; + int i, ret; + + /* Configure rx adapters */ + for (i = 0; i < em_conf->nb_rx_adapter; i++) { + adapter = &(em_conf->rx_adapter[i]); + ret = eh_rx_adapter_configure(em_conf, adapter); + if (ret < 0) { + EH_LOG_ERR("Failed to configure rx adapter %d", ret); + return ret; + } + } + return 0; +} + int32_t eh_devs_init(struct eh_conf *conf) { @@ -370,6 +602,9 @@ eh_devs_init(struct eh_conf *conf) /* Get eventmode conf */ em_conf = conf->mode_params; + /* Eventmode conf would need eth portmask */ + em_conf->eth_portmask = conf->eth_portmask; + /* Validate the requested config */ ret = eh_validate_conf(em_conf); if (ret < 0) { @@ -394,6 +629,13 @@ eh_devs_init(struct eh_conf *conf) return ret; } + /* Setup Rx adapter */ + ret = eh_initialize_rx_adapter(em_conf); + if (ret < 0) { + EH_LOG_ERR("Failed to initialize rx adapter %d", ret); + return ret; + } + /* Start eth devices after setting up adapter */ RTE_ETH_FOREACH_DEV(port_id) { @@ -416,8 +658,8 @@ int32_t eh_devs_uninit(struct eh_conf *conf) { struct eventmode_conf *em_conf; + int ret, i, j; uint16_t id; - int ret, i; if (conf == NULL) { EH_LOG_ERR("Invalid event helper configuration"); @@ -435,6 +677,35 @@ eh_devs_uninit(struct eh_conf *conf) /* Get eventmode conf */ em_conf = conf->mode_params; + /* Stop and release rx adapters */ + for (i = 0; i < em_conf->nb_rx_adapter; i++) { + + id = em_conf->rx_adapter[i].adapter_id; + ret = rte_event_eth_rx_adapter_stop(id); + 
if (ret < 0) { + EH_LOG_ERR("Failed to stop rx adapter %d", ret); + return ret; + } + + for (j = 0; j < em_conf->rx_adapter[i].nb_connections; j++) { + + ret = rte_event_eth_rx_adapter_queue_del(id, + em_conf->rx_adapter[i].conn[j].ethdev_id, -1); + if (ret < 0) { + EH_LOG_ERR( + "Failed to remove rx adapter queues %d", + ret); + return ret; + } + } + + ret = rte_event_eth_rx_adapter_free(id); + if (ret < 0) { + EH_LOG_ERR("Failed to free rx adapter %d", ret); + return ret; + } + } + /* Stop and release event devices */ for (i = 0; i < em_conf->nb_eventdev; i++) { diff --git a/examples/ipsec-secgw/event_helper.h b/examples/ipsec-secgw/event_helper.h index 16b03b3..baf93e1 100644 --- a/examples/ipsec-secgw/event_helper.h +++ b/examples/ipsec-secgw/event_helper.h @@ -20,6 +20,12 @@ extern "C" { /* Max event devices supported */ #define EVENT_MODE_MAX_EVENT_DEVS RTE_EVENT_MAX_DEVS +/* Max Rx adapters supported */ +#define EVENT_MODE_MAX_RX_ADAPTERS RTE_EVENT_MAX_DEVS + +/* Max Rx adapter connections */ +#define EVENT_MODE_MAX_CONNECTIONS_PER_ADAPTER 16 + /* Max event queues supported per event device */ #define EVENT_MODE_MAX_EVENT_QUEUES_PER_DEV RTE_EVENT_MAX_QUEUES_PER_DEV @@ -57,12 +63,33 @@ struct eh_event_link_info { /**< Lcore to be polling on this port */ }; +/* Rx adapter connection info */ +struct rx_adapter_connection_info { + uint8_t ethdev_id; + uint8_t eventq_id; + int32_t ethdev_rx_qid; +}; + +/* Rx adapter conf */ +struct rx_adapter_conf { + int32_t eventdev_id; + int32_t adapter_id; + uint32_t rx_core_id; + uint8_t nb_connections; + struct rx_adapter_connection_info + conn[EVENT_MODE_MAX_CONNECTIONS_PER_ADAPTER]; +}; + /* Eventmode conf data */ struct eventmode_conf { int nb_eventdev; /**< No of event devs */ struct eventdev_params eventdev_config[EVENT_MODE_MAX_EVENT_DEVS]; /**< Per event dev conf */ + uint8_t nb_rx_adapter; + /**< No of Rx adapters */ + struct rx_adapter_conf rx_adapter[EVENT_MODE_MAX_RX_ADAPTERS]; + /**< Rx adapter conf */ uint8_t 
nb_link; /**< No of links */ struct eh_event_link_info @@ -70,6 +97,8 @@ struct eventmode_conf { /**< Per link conf */ struct rte_bitmap *eth_core_mask; /**< Core mask of cores to be used for software Rx and Tx */ + uint32_t eth_portmask; + /**< Mask of the eth ports to be used */ union { RTE_STD_C11 struct { -- 2.7.4 ^ permalink raw reply [flat|nested] 147+ messages in thread
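The queue-mapping decision in eh_set_default_conf_rx_adapter() reduces to a small rule: 1:1 when there are enough event queues, otherwise everything onto queue 0. A self-contained sketch of just that rule (the function name below is illustrative, not part of the patch):

```c
#include <stdbool.h>

/*
 * Default Rx adapter mapping: if there are at least as many event
 * queues as eth devices, connect eth port i to event queue i;
 * otherwise fall back to a single event queue (id 0) for all ports.
 * In the patch, the connection additionally sets ethdev_rx_qid = -1,
 * meaning "all Rx queues of that eth port".
 */
static int
default_rx_eventq(int nb_eth_dev, int nb_eventqueue, int port_id)
{
	bool single_ev_queue = nb_eth_dev > nb_eventqueue;

	return single_ev_queue ? 0 : port_id;
}
```

For example, 2 eth ports with 4 event queues yields port 1 → queue 1, while 5 eth ports with 3 event queues sends every port to queue 0.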
* [dpdk-dev] [PATCH v2 05/12] examples/ipsec-secgw: add Tx adapter support 2020-01-20 13:45 ` [dpdk-dev] [PATCH v2 00/12] add eventmode to ipsec-secgw Anoob Joseph ` (3 preceding siblings ...) 2020-01-20 13:45 ` [dpdk-dev] [PATCH v2 04/12] examples/ipsec-secgw: add Rx adapter support Anoob Joseph @ 2020-01-20 13:45 ` Anoob Joseph 2020-01-20 13:45 ` [dpdk-dev] [PATCH v2 06/12] examples/ipsec-secgw: add routines to display config Anoob Joseph ` (8 subsequent siblings) 13 siblings, 0 replies; 147+ messages in thread From: Anoob Joseph @ 2020-01-20 13:45 UTC (permalink / raw) To: Akhil Goyal, Radu Nicolau, Thomas Monjalon Cc: Anoob Joseph, Jerin Jacob, Narayana Prasad, Ankur Dwivedi, Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru, Lukasz Bartosik, Konstantin Ananyev, dev Add Tx adapter support. The event helper init routine will initialize the Tx adapter according to the configuration. If Tx adapter config is not present it will generate a default config. Signed-off-by: Anoob Joseph <anoobj@marvell.com> Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> --- examples/ipsec-secgw/event_helper.c | 313 ++++++++++++++++++++++++++++++++++++ examples/ipsec-secgw/event_helper.h | 48 ++++++ 2 files changed, 361 insertions(+) diff --git a/examples/ipsec-secgw/event_helper.c b/examples/ipsec-secgw/event_helper.c index 1d06a45..e6569c1 100644 --- a/examples/ipsec-secgw/event_helper.c +++ b/examples/ipsec-secgw/event_helper.c @@ -5,6 +5,7 @@ #include <rte_ethdev.h> #include <rte_eventdev.h> #include <rte_event_eth_rx_adapter.h> +#include <rte_event_eth_tx_adapter.h> #include <rte_malloc.h> #include <stdbool.h> @@ -76,6 +77,22 @@ eh_get_next_active_core(struct eventmode_conf *em_conf, unsigned int prev_core) return next_core; } +static struct eventdev_params * +eh_get_eventdev_params(struct eventmode_conf *em_conf, uint8_t eventdev_id) +{ + int i; + + for (i = 0; i < em_conf->nb_eventdev; i++) { + if (em_conf->eventdev_config[i].eventdev_id == eventdev_id) + break; + } + + 
/* No match */ + if (i == em_conf->nb_eventdev) + return NULL; + + return &(em_conf->eventdev_config[i]); +} static int eh_set_default_conf_eventdev(struct eventmode_conf *em_conf) { @@ -292,6 +309,95 @@ eh_set_default_conf_rx_adapter(struct eventmode_conf *em_conf) } static int +eh_set_default_conf_tx_adapter(struct eventmode_conf *em_conf) +{ + struct tx_adapter_connection_info *conn; + struct eventdev_params *eventdev_config; + struct tx_adapter_conf *tx_adapter; + int eventdev_id; + int adapter_id; + int nb_eth_dev; + int conn_id; + int i; + + /* + * Create one Tx adapter with all eth queues mapped to event queues + * 1:1. + */ + + if (em_conf->nb_eventdev == 0) { + EH_LOG_ERR("No event devs registered"); + return -EINVAL; + } + + /* Get the number of eth devs */ + nb_eth_dev = rte_eth_dev_count_avail(); + + /* Use the first event dev */ + eventdev_config = &(em_conf->eventdev_config[0]); + + /* Get eventdev ID */ + eventdev_id = eventdev_config->eventdev_id; + adapter_id = 0; + + /* Get adapter conf */ + tx_adapter = &(em_conf->tx_adapter[adapter_id]); + + /* Set adapter conf */ + tx_adapter->eventdev_id = eventdev_id; + tx_adapter->adapter_id = adapter_id; + + /* TODO: Tx core is required only when internal port is not present */ + tx_adapter->tx_core_id = eh_get_next_eth_core(em_conf); + + /* + * Application uses one event queue per adapter for submitting + * packets for Tx. Reserve the last queue available and decrement + * the total available event queues for this purpose. + */ + + /* Queue numbers start at 0 */ + tx_adapter->tx_ev_queue = eventdev_config->nb_eventqueue - 1; + + /* + * Map all Tx queues of the eth device (port) to the event device. + */ + + /* Set defaults for connections */ + + /* + * One eth device (port) is one connection. Map all Tx queues + * of the device to the Tx adapter. 
+ */ + + for (i = 0; i < nb_eth_dev; i++) { + + /* Use only the ports enabled */ + if ((em_conf->eth_portmask & (1 << i)) == 0) + continue; + + /* Get the connection id */ + conn_id = tx_adapter->nb_connections; + + /* Get the connection */ + conn = &(tx_adapter->conn[conn_id]); + + /* Add ethdev to connections */ + conn->ethdev_id = i; + + /* Add all eth tx queues to adapter */ + conn->ethdev_tx_qid = -1; + + /* Update no of connections */ + tx_adapter->nb_connections++; + } + + /* We have setup one adapter */ + em_conf->nb_tx_adapter = 1; + return 0; +} + +static int eh_validate_conf(struct eventmode_conf *em_conf) { int ret; @@ -326,6 +432,16 @@ eh_validate_conf(struct eventmode_conf *em_conf) return ret; } + /* + * Check if tx adapters are specified. Else generate a default config + * with one tx adapter. + */ + if (em_conf->nb_tx_adapter == 0) { + ret = eh_set_default_conf_tx_adapter(em_conf); + if (ret != 0) + return ret; + } + return 0; } @@ -579,6 +695,133 @@ eh_initialize_rx_adapter(struct eventmode_conf *em_conf) return 0; } +static int +eh_tx_adapter_configure(struct eventmode_conf *em_conf, + struct tx_adapter_conf *adapter) +{ + struct rte_event_dev_info evdev_default_conf = {0}; + struct rte_event_port_conf port_conf = {0}; + struct tx_adapter_connection_info *conn; + struct eventdev_params *eventdev_config; + uint8_t tx_port_id = 0; + uint8_t eventdev_id; + uint32_t service_id; + int ret, j; + + /* Get event dev ID */ + eventdev_id = adapter->eventdev_id; + + /* Get event device conf */ + eventdev_config = eh_get_eventdev_params(em_conf, eventdev_id); + + /* Create Tx adapter */ + + /* Get default configuration of event dev */ + ret = rte_event_dev_info_get(eventdev_id, &evdev_default_conf); + if (ret < 0) { + EH_LOG_ERR("Failed to get event dev info %d", ret); + return ret; + } + + /* Setup port conf */ + port_conf.new_event_threshold = + evdev_default_conf.max_num_events; + port_conf.dequeue_depth = + 
evdev_default_conf.max_event_port_dequeue_depth; + port_conf.enqueue_depth = + evdev_default_conf.max_event_port_enqueue_depth; + + /* Create adapter */ + ret = rte_event_eth_tx_adapter_create(adapter->adapter_id, + adapter->eventdev_id, &port_conf); + if (ret < 0) { + EH_LOG_ERR("Failed to create tx adapter %d", ret); + return ret; + } + + /* Setup various connections in the adapter */ + for (j = 0; j < adapter->nb_connections; j++) { + + /* Get connection */ + conn = &(adapter->conn[j]); + + /* Add queue to the adapter */ + ret = rte_event_eth_tx_adapter_queue_add(adapter->adapter_id, + conn->ethdev_id, conn->ethdev_tx_qid); + if (ret < 0) { + EH_LOG_ERR("Failed to add eth queue to tx adapter %d", + ret); + return ret; + } + } + + /* Setup Tx queue & port */ + + /* Get event port used by the adapter */ + ret = rte_event_eth_tx_adapter_event_port_get( + adapter->adapter_id, &tx_port_id); + if (ret) { + EH_LOG_ERR("Failed to get tx adapter port id %d", ret); + return ret; + } + + /* + * Tx event queue is reserved for Tx adapter. 
Unlink this queue + * from all other ports + * + */ + for (j = 0; j < eventdev_config->nb_eventport; j++) { + rte_event_port_unlink(eventdev_id, j, + &(adapter->tx_ev_queue), 1); + } + + /* Link Tx event queue to Tx port */ + ret = rte_event_port_link(eventdev_id, tx_port_id, + &(adapter->tx_ev_queue), NULL, 1); + if (ret != 1) { + EH_LOG_ERR("Failed to link event queue to port"); + return ret; + } + + /* Get the service ID used by Tx adapter */ + ret = rte_event_eth_tx_adapter_service_id_get(adapter->adapter_id, + &service_id); + if (ret != -ESRCH && ret < 0) { + EH_LOG_ERR("Failed to get service id used by tx adapter %d", + ret); + return ret; + } + + rte_service_set_runstate_mapped_check(service_id, 0); + + /* Start adapter */ + ret = rte_event_eth_tx_adapter_start(adapter->adapter_id); + if (ret < 0) { + EH_LOG_ERR("Failed to start tx adapter %d", ret); + return ret; + } + + return 0; +} + +static int +eh_initialize_tx_adapter(struct eventmode_conf *em_conf) +{ + struct tx_adapter_conf *adapter; + int i, ret; + + /* Configure Tx adapters */ + for (i = 0; i < em_conf->nb_tx_adapter; i++) { + adapter = &(em_conf->tx_adapter[i]); + ret = eh_tx_adapter_configure(em_conf, adapter); + if (ret < 0) { + EH_LOG_ERR("Failed to configure tx adapter %d", ret); + return ret; + } + } + return 0; +} + int32_t eh_devs_init(struct eh_conf *conf) { @@ -636,6 +879,13 @@ eh_devs_init(struct eh_conf *conf) return ret; } + /* Setup Tx adapter */ + ret = eh_initialize_tx_adapter(em_conf); + if (ret < 0) { + EH_LOG_ERR("Failed to initialize tx adapter %d", ret); + return ret; + } + /* Start eth devices after setting up adapter */ RTE_ETH_FOREACH_DEV(port_id) { @@ -719,5 +969,68 @@ eh_devs_uninit(struct eh_conf *conf) } } + /* Stop and release tx adapters */ + for (i = 0; i < em_conf->nb_tx_adapter; i++) { + + id = em_conf->tx_adapter[i].adapter_id; + ret = rte_event_eth_tx_adapter_stop(id); + if (ret < 0) { + EH_LOG_ERR("Failed to stop tx adapter %d", ret); + return ret; + } + + for 
(j = 0; j < em_conf->tx_adapter[i].nb_connections; j++) { + + ret = rte_event_eth_tx_adapter_queue_del(id, + em_conf->tx_adapter[i].conn[j].ethdev_id, -1); + if (ret < 0) { + EH_LOG_ERR( + "Failed to remove tx adapter queues %d", + ret); + return ret; + } + } + + ret = rte_event_eth_tx_adapter_free(id); + if (ret < 0) { + EH_LOG_ERR("Failed to free tx adapter %d", ret); + return ret; + } + } + return 0; } + +uint8_t +eh_get_tx_queue(struct eh_conf *conf, uint8_t eventdev_id) +{ + struct eventdev_params *eventdev_config; + struct eventmode_conf *em_conf; + + if (conf == NULL) { + EH_LOG_ERR("Invalid event helper configuration"); + return -EINVAL; + } + + if (conf->mode_params == NULL) { + EH_LOG_ERR("Invalid event mode parameters"); + return -EINVAL; + } + + /* Get eventmode conf */ + em_conf = conf->mode_params; + + /* Get event device conf */ + eventdev_config = eh_get_eventdev_params(em_conf, eventdev_id); + + if (eventdev_config == NULL) { + EH_LOG_ERR("Failed to read eventdev config"); + return -EINVAL; + } + + /* + * The last queue is reserved to be used as atomic queue for the + * last stage (eth packet tx stage) + */ + return eventdev_config->nb_eventqueue - 1; +} diff --git a/examples/ipsec-secgw/event_helper.h b/examples/ipsec-secgw/event_helper.h index baf93e1..e76d764 100644 --- a/examples/ipsec-secgw/event_helper.h +++ b/examples/ipsec-secgw/event_helper.h @@ -23,9 +23,15 @@ extern "C" { /* Max Rx adapters supported */ #define EVENT_MODE_MAX_RX_ADAPTERS RTE_EVENT_MAX_DEVS +/* Max Tx adapters supported */ +#define EVENT_MODE_MAX_TX_ADAPTERS RTE_EVENT_MAX_DEVS + /* Max Rx adapter connections */ #define EVENT_MODE_MAX_CONNECTIONS_PER_ADAPTER 16 +/* Max Tx adapter connections */ +#define EVENT_MODE_MAX_CONNECTIONS_PER_TX_ADAPTER 16 + /* Max event queues supported per event device */ #define EVENT_MODE_MAX_EVENT_QUEUES_PER_DEV RTE_EVENT_MAX_QUEUES_PER_DEV @@ -33,6 +39,9 @@ extern "C" { #define EVENT_MODE_MAX_LCORE_LINKS \ (EVENT_MODE_MAX_EVENT_DEVS * 
EVENT_MODE_MAX_EVENT_QUEUES_PER_DEV) +/* Max adapters that one Tx core can handle */ +#define EVENT_MODE_MAX_ADAPTERS_PER_TX_CORE EVENT_MODE_MAX_TX_ADAPTERS + /** * Packet transfer mode of the application */ @@ -80,6 +89,23 @@ struct rx_adapter_conf { conn[EVENT_MODE_MAX_CONNECTIONS_PER_ADAPTER]; }; +/* Tx adapter connection info */ +struct tx_adapter_connection_info { + uint8_t ethdev_id; + int32_t ethdev_tx_qid; +}; + +/* Tx adapter conf */ +struct tx_adapter_conf { + int32_t eventdev_id; + int32_t adapter_id; + uint32_t tx_core_id; + uint8_t nb_connections; + struct tx_adapter_connection_info + conn[EVENT_MODE_MAX_CONNECTIONS_PER_TX_ADAPTER]; + uint8_t tx_ev_queue; +}; + /* Eventmode conf data */ struct eventmode_conf { int nb_eventdev; @@ -90,6 +116,10 @@ struct eventmode_conf { /**< No of Rx adapters */ struct rx_adapter_conf rx_adapter[EVENT_MODE_MAX_RX_ADAPTERS]; /**< Rx adapter conf */ + uint8_t nb_tx_adapter; + /**< No of Tx adapters */ + struct tx_adapter_conf tx_adapter[EVENT_MODE_MAX_TX_ADAPTERS]; + /** Tx adapter conf */ uint8_t nb_link; /**< No of links */ struct eh_event_link_info @@ -170,6 +200,24 @@ eh_devs_init(struct eh_conf *conf); int32_t eh_devs_uninit(struct eh_conf *conf); +/** + * Get eventdev tx queue + * + * If the application uses event device which does not support internal port + * then it needs to submit the events to a Tx queue before final transmission. + * This Tx queue will be created internally by the eventmode helper subsystem, + * and application will need its queue ID when it runs the execution loop. + * + * @param mode_conf + * Event helper configuration + * @param eventdev_id + * Event device ID + * @return + * Tx queue ID + */ +uint8_t +eh_get_tx_queue(struct eh_conf *conf, uint8_t eventdev_id); + #ifdef __cplusplus } #endif -- 2.7.4 ^ permalink raw reply [flat|nested] 147+ messages in thread
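The Tx-queue reservation above is a simple convention: with N event queues numbered 0..N-1, the last one becomes the Tx adapter's input queue, and eh_get_tx_queue() reports the same id to the workers so they can target it in their final enqueue. A one-line sketch of that arithmetic (the helper name is illustrative):

```c
/*
 * With nb_eventqueue queues, ids run 0..nb_eventqueue-1; the default
 * Tx adapter config reserves the last id as tx_ev_queue. Workers set
 * ev.queue_id to this value before the final event enqueue, and the
 * adapter's event port is the only port left linked to it.
 */
static unsigned int
tx_event_queue(unsigned int nb_eventqueue)
{
	return nb_eventqueue - 1;
}
```

So an event device configured with 4 queues uses queue 3 for Tx, leaving queues 0..2 for the Rx adapter and the worker stages.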
* [dpdk-dev] [PATCH v2 06/12] examples/ipsec-secgw: add routines to display config 2020-01-20 13:45 ` [dpdk-dev] [PATCH v2 00/12] add eventmode to ipsec-secgw Anoob Joseph ` (4 preceding siblings ...) 2020-01-20 13:45 ` [dpdk-dev] [PATCH v2 05/12] examples/ipsec-secgw: add Tx " Anoob Joseph @ 2020-01-20 13:45 ` Anoob Joseph 2020-01-20 13:45 ` [dpdk-dev] [PATCH v2 07/12] examples/ipsec-secgw: add routines to launch workers Anoob Joseph ` (7 subsequent siblings) 13 siblings, 0 replies; 147+ messages in thread From: Anoob Joseph @ 2020-01-20 13:45 UTC (permalink / raw) To: Akhil Goyal, Radu Nicolau, Thomas Monjalon Cc: Anoob Joseph, Jerin Jacob, Narayana Prasad, Ankur Dwivedi, Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru, Lukasz Bartosik, Konstantin Ananyev, dev Add routines to display the eventmode configuration and provide an overview of the devices used. Signed-off-by: Anoob Joseph <anoobj@marvell.com> Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> --- examples/ipsec-secgw/event_helper.c | 207 ++++++++++++++++++++++++++++++++++++ examples/ipsec-secgw/event_helper.h | 14 +++ 2 files changed, 221 insertions(+) diff --git a/examples/ipsec-secgw/event_helper.c b/examples/ipsec-secgw/event_helper.c index e6569c1..883cb19 100644 --- a/examples/ipsec-secgw/event_helper.c +++ b/examples/ipsec-secgw/event_helper.c @@ -822,6 +822,210 @@ eh_initialize_tx_adapter(struct eventmode_conf *em_conf) return 0; } +static void +eh_display_operating_mode(struct eventmode_conf *em_conf) +{ + char sched_types[][32] = { + "RTE_SCHED_TYPE_ORDERED", + "RTE_SCHED_TYPE_ATOMIC", + "RTE_SCHED_TYPE_PARALLEL", + }; + EH_LOG_INFO("Operating mode:"); + + EH_LOG_INFO("\tScheduling type: \t%s", + sched_types[em_conf->ext_params.sched_type]); + + EH_LOG_INFO(""); +} + +static void +eh_display_event_dev_conf(struct eventmode_conf *em_conf) +{ + char queue_mode[][32] = { + "", + "ATQ (ALL TYPE QUEUE)", + "SINGLE LINK", + }; + char print_buf[256] = { 0 }; + int i; + + EH_LOG_INFO("Event 
Device Configuration:"); + + for (i = 0; i < em_conf->nb_eventdev; i++) { + sprintf(print_buf, + "\tDev ID: %-2d \tQueues: %-2d \tPorts: %-2d", + em_conf->eventdev_config[i].eventdev_id, + em_conf->eventdev_config[i].nb_eventqueue, + em_conf->eventdev_config[i].nb_eventport); + sprintf(print_buf + strlen(print_buf), + "\tQueue mode: %s", + queue_mode[em_conf->eventdev_config[i].ev_queue_mode]); + EH_LOG_INFO("%s", print_buf); + } + EH_LOG_INFO(""); +} + +static void +eh_display_rx_adapter_conf(struct eventmode_conf *em_conf) +{ + int nb_rx_adapter = em_conf->nb_rx_adapter; + struct rx_adapter_connection_info *conn; + struct rx_adapter_conf *adapter; + char print_buf[256] = { 0 }; + int i, j; + + EH_LOG_INFO("Rx adapters configured: %d", nb_rx_adapter); + + for (i = 0; i < nb_rx_adapter; i++) { + adapter = &(em_conf->rx_adapter[i]); + EH_LOG_INFO( + "\tRx adapter ID: %-2d\tConnections: %-2d\tEvent dev ID: %-2d" + "\tRx core: %-2d", + adapter->adapter_id, + adapter->nb_connections, + adapter->eventdev_id, + adapter->rx_core_id); + + for (j = 0; j < adapter->nb_connections; j++) { + conn = &(adapter->conn[j]); + + sprintf(print_buf, + "\t\tEthdev ID: %-2d", conn->ethdev_id); + + if (conn->ethdev_rx_qid == -1) + sprintf(print_buf + strlen(print_buf), + "\tEth rx queue: %-2s", "ALL"); + else + sprintf(print_buf + strlen(print_buf), + "\tEth rx queue: %-2d", + conn->ethdev_rx_qid); + + sprintf(print_buf + strlen(print_buf), + "\tEvent queue: %-2d", conn->eventq_id); + EH_LOG_INFO("%s", print_buf); + } + } + EH_LOG_INFO(""); +}
%-2d\tEvent dev ID: %-2d", + adapter->adapter_id, + adapter->nb_connections, + adapter->eventdev_id); + if (adapter->tx_core_id == (uint32_t)-1) + sprintf(print_buf + strlen(print_buf), + "\tTx core: %-2s", "[INTERNAL PORT]"); + else if (adapter->tx_core_id == RTE_MAX_LCORE) + sprintf(print_buf + strlen(print_buf), + "\tTx core: %-2s", "[NONE]"); + else + sprintf(print_buf + strlen(print_buf), + "\tTx core: %-2d,\tInput event queue: %-2d", + adapter->tx_core_id, adapter->tx_ev_queue); + + EH_LOG_INFO("%s", print_buf); + + for (j = 0; j < adapter->nb_connections; j++) { + conn = &(adapter->conn[j]); + + sprintf(print_buf, + "\t\tEthdev ID: %-2d", conn->ethdev_id); + + if (conn->ethdev_tx_qid == -1) + sprintf(print_buf + strlen(print_buf), + "\tEth tx queue: %-2s", "ALL"); + else + sprintf(print_buf + strlen(print_buf), + "\tEth tx queue: %-2d", + conn->ethdev_tx_qid); + EH_LOG_INFO("%s", print_buf); + } + } + EH_LOG_INFO(""); +} + +static void +eh_display_link_conf(struct eventmode_conf *em_conf) +{ + struct eh_event_link_info *link; + char print_buf[256] = { 0 }; + int i; + + EH_LOG_INFO("Links configured: %d", em_conf->nb_link); + + for (i = 0; i < em_conf->nb_link; i++) { + link = &(em_conf->link[i]); + + sprintf(print_buf, + "\tEvent dev ID: %-2d\tEvent port: %-2d", + link->eventdev_id, + link->event_port_id); + + if (em_conf->ext_params.all_ev_queue_to_ev_port) + sprintf(print_buf + strlen(print_buf), + "Event queue: %-2s\t", "ALL"); + else + sprintf(print_buf + strlen(print_buf), + "Event queue: %-2d\t", link->eventq_id); + + sprintf(print_buf + strlen(print_buf), + "Lcore: %-2d", link->lcore_id); + EH_LOG_INFO("%s", print_buf); + } + EH_LOG_INFO(""); +} + +void +eh_display_conf(struct eh_conf *conf) +{ + struct eventmode_conf *em_conf; + + if (conf == NULL) { + EH_LOG_ERR("Invalid event helper configuration"); + return; + } + + if (conf->mode != EH_PKT_TRANSFER_MODE_EVENT) + return; + + if (conf->mode_params == NULL) { + EH_LOG_ERR("Invalid event mode 
parameters"); + return; + } + + /* Get eventmode conf */ + em_conf = (struct eventmode_conf *)(conf->mode_params); + + /* Display user exposed operating modes */ + eh_display_operating_mode(em_conf); + + /* Display event device conf */ + eh_display_event_dev_conf(em_conf); + + /* Display Rx adapter conf */ + eh_display_rx_adapter_conf(em_conf); + + /* Display Tx adapter conf */ + eh_display_tx_adapter_conf(em_conf); + + /* Display event-lcore link */ + eh_display_link_conf(em_conf); +} + int32_t eh_devs_init(struct eh_conf *conf) { @@ -855,6 +1059,9 @@ eh_devs_init(struct eh_conf *conf) return ret; } + /* Display the current configuration */ + eh_display_conf(conf); + /* Stop eth devices before setting up adapter */ RTE_ETH_FOREACH_DEV(port_id) { diff --git a/examples/ipsec-secgw/event_helper.h b/examples/ipsec-secgw/event_helper.h index e76d764..d7191a6 100644 --- a/examples/ipsec-secgw/event_helper.h +++ b/examples/ipsec-secgw/event_helper.h @@ -17,6 +17,11 @@ extern "C" { RTE_FMT("%s() line %u: " RTE_FMT_HEAD(__VA_ARGS__ ,) "\n", \ __func__, __LINE__, RTE_FMT_TAIL(__VA_ARGS__ ,))) +#define EH_LOG_INFO(...) \ + RTE_LOG(INFO, EH, \ + RTE_FMT("%s() line %u: " RTE_FMT_HEAD(__VA_ARGS__ ,) "\n", \ + __func__, __LINE__, RTE_FMT_TAIL(__VA_ARGS__ ,))) + /* Max event devices supported */ #define EVENT_MODE_MAX_EVENT_DEVS RTE_EVENT_MAX_DEVS @@ -218,6 +223,15 @@ eh_devs_uninit(struct eh_conf *conf); uint8_t eh_get_tx_queue(struct eh_conf *conf, uint8_t eventdev_id); +/** + * Display event mode configuration + * + * @param conf + * Event helper configuration + */ +void +eh_display_conf(struct eh_conf *conf); + #ifdef __cplusplus } #endif -- 2.7.4 ^ permalink raw reply [flat|nested] 147+ messages in thread
* [dpdk-dev] [PATCH v2 07/12] examples/ipsec-secgw: add routines to launch workers 2020-01-20 13:45 ` [dpdk-dev] [PATCH v2 00/12] add eventmode to ipsec-secgw Anoob Joseph ` (5 preceding siblings ...) 2020-01-20 13:45 ` [dpdk-dev] [PATCH v2 06/12] examples/ipsec-secgw: add routines to display config Anoob Joseph @ 2020-01-20 13:45 ` Anoob Joseph 2020-01-20 13:45 ` [dpdk-dev] [PATCH v2 08/12] examples/ipsec-secgw: add support for internal ports Anoob Joseph ` (6 subsequent siblings) 13 siblings, 0 replies; 147+ messages in thread From: Anoob Joseph @ 2020-01-20 13:45 UTC (permalink / raw) To: Akhil Goyal, Radu Nicolau, Thomas Monjalon Cc: Lukasz Bartosik, Jerin Jacob, Narayana Prasad, Ankur Dwivedi, Anoob Joseph, Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru, Konstantin Ananyev, dev From: Lukasz Bartosik <lbartosik@marvell.com> In event mode, workers can be drafted differently according to the capabilities of the underlying event device. The added functions receive an array of such workers and probe the event mode properties to choose the appropriate worker. 
Signed-off-by: Anoob Joseph <anoobj@marvell.com> Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> --- examples/ipsec-secgw/event_helper.c | 336 ++++++++++++++++++++++++++++++++++++ examples/ipsec-secgw/event_helper.h | 48 ++++++ 2 files changed, 384 insertions(+) diff --git a/examples/ipsec-secgw/event_helper.c b/examples/ipsec-secgw/event_helper.c index 883cb19..95dc4e6 100644 --- a/examples/ipsec-secgw/event_helper.c +++ b/examples/ipsec-secgw/event_helper.c @@ -11,6 +11,8 @@ #include "event_helper.h" +static volatile bool eth_core_running; + static int eh_get_enabled_cores(struct rte_bitmap *eth_core_mask) { @@ -93,6 +95,16 @@ eh_get_eventdev_params(struct eventmode_conf *em_conf, uint8_t eventdev_id) return &(em_conf->eventdev_config[i]); } +static inline bool +eh_dev_has_burst_mode(uint8_t dev_id) +{ + struct rte_event_dev_info dev_info; + + rte_event_dev_info_get(dev_id, &dev_info); + return (dev_info.event_dev_cap & RTE_EVENT_DEV_CAP_BURST_MODE) ? + true : false; +} + static int eh_set_default_conf_eventdev(struct eventmode_conf *em_conf) { @@ -695,6 +707,257 @@ eh_initialize_rx_adapter(struct eventmode_conf *em_conf) return 0; } +static int32_t +eh_start_worker_eth_core(struct eventmode_conf *conf, uint32_t lcore_id) +{ + uint32_t service_id[EVENT_MODE_MAX_ADAPTERS_PER_RX_CORE]; + struct rx_adapter_conf *rx_adapter; + struct tx_adapter_conf *tx_adapter; + int service_count = 0; + int adapter_id; + int32_t ret; + int i; + + EH_LOG_INFO("Entering eth_core processing on lcore %u", lcore_id); + + /* + * Parse adapter config to check which of all Rx adapters need + * to be handled by this core. 
+ */ + for (i = 0; i < conf->nb_rx_adapter; i++) { + /* Check if we have exceeded the max allowed */ + if (service_count > EVENT_MODE_MAX_ADAPTERS_PER_RX_CORE) { + EH_LOG_ERR( + "Exceeded the max allowed adapters per rx core"); + break; + } + + rx_adapter = &(conf->rx_adapter[i]); + if (rx_adapter->rx_core_id != lcore_id) + continue; + + /* Adapter is handled by this core */ + adapter_id = rx_adapter->adapter_id; + + /* Get the service ID for the adapters */ + ret = rte_event_eth_rx_adapter_service_id_get(adapter_id, + &(service_id[service_count])); + + if (ret != -ESRCH && ret < 0) { + EH_LOG_ERR( + "Failed to get service id used by rx adapter"); + return ret; + } + + /* Update service count */ + service_count++; + } + + /* + * Parse adapter config to see which of all Tx adapters need + * to be handled by this core. + */ + for (i = 0; i < conf->nb_tx_adapter; i++) { + /* Check if we have exceeded the max allowed */ + if (service_count > EVENT_MODE_MAX_ADAPTERS_PER_TX_CORE) { + EH_LOG_ERR( + "Exceeded the max allowed adapters per tx core"); + break; + } + + tx_adapter = &conf->tx_adapter[i]; + if (tx_adapter->tx_core_id != lcore_id) + continue; + + /* Adapter is handled by this core */ + adapter_id = tx_adapter->adapter_id; + + /* Get the service ID for the adapters */ + ret = rte_event_eth_tx_adapter_service_id_get(adapter_id, + &(service_id[service_count])); + + if (ret != -ESRCH && ret < 0) { + EH_LOG_ERR( + "Failed to get service id used by tx adapter"); + return ret; + } + + /* Update service count */ + service_count++; + } + + eth_core_running = true; + + while (eth_core_running) { + for (i = 0; i < service_count; i++) { + /* Initiate adapter service */ + rte_service_run_iter_on_app_lcore(service_id[i], 0); + } + } + + return 0; +} + +static int32_t +eh_stop_worker_eth_core(void) +{ + if (eth_core_running) { + EH_LOG_INFO("Stopping eth cores"); + eth_core_running = false; + } + return 0; +} + +static struct eh_app_worker_params * +eh_find_worker(uint32_t 
lcore_id, struct eh_conf *conf, + struct eh_app_worker_params *app_wrkrs, uint8_t nb_wrkr_param) +{ + struct eh_app_worker_params curr_conf = { {{0} }, NULL}; + struct eh_event_link_info *link = NULL; + struct eh_app_worker_params *tmp_wrkr; + struct eventmode_conf *em_conf; + uint8_t eventdev_id; + int i; + + /* Get eventmode config */ + em_conf = conf->mode_params; + + /* + * Use event device from the first lcore-event link. + * + * Assumption: All lcore-event links tied to a core are using the + * same event device. In other words, one core would be polling on + * queues of a single event device only. + */ + + /* Get a link for this lcore */ + for (i = 0; i < em_conf->nb_link; i++) { + link = &(em_conf->link[i]); + if (link->lcore_id == lcore_id) + break; + } + + if (link == NULL) { + EH_LOG_ERR("No valid link found for lcore %d", lcore_id); + return NULL; + } + + /* Get event dev ID */ + eventdev_id = link->eventdev_id; + + /* Populate the curr_conf with the capabilities */ + + /* Check for burst mode */ + if (eh_dev_has_burst_mode(eventdev_id)) + curr_conf.cap.burst = EH_RX_TYPE_BURST; + else + curr_conf.cap.burst = EH_RX_TYPE_NON_BURST; + + /* Parse the passed list and see if we have matching capabilities */ + + /* Initialize the pointer used to traverse the list */ + tmp_wrkr = app_wrkrs; + + for (i = 0; i < nb_wrkr_param; i++, tmp_wrkr++) { + + /* Skip this if capabilities are not matching */ + if (tmp_wrkr->cap.u64 != curr_conf.cap.u64) + continue; + + /* If the checks pass, we have a match */ + return tmp_wrkr; + } + + return NULL; +} + +static int +eh_verify_match_worker(struct eh_app_worker_params *match_wrkr) +{ + /* Verify registered worker */ + if (match_wrkr->worker_thread == NULL) { + EH_LOG_ERR("No worker registered"); + return 0; + } + + /* Success */ + return 1; +} + +static uint8_t +eh_get_event_lcore_links(uint32_t lcore_id, struct eh_conf *conf, + struct eh_event_link_info **links) +{ + struct eh_event_link_info *link_cache; + struct 
eventmode_conf *em_conf = NULL; + struct eh_event_link_info *link; + uint8_t lcore_nb_link = 0; + size_t single_link_size; + size_t cache_size; + int index = 0; + int i; + + if (conf == NULL || links == NULL) { + EH_LOG_ERR("Invalid args"); + return -EINVAL; + } + + /* Get eventmode conf */ + em_conf = conf->mode_params; + + if (em_conf == NULL) { + EH_LOG_ERR("Invalid event mode parameters"); + return -EINVAL; + } + + /* Get the number of links registered */ + for (i = 0; i < em_conf->nb_link; i++) { + + /* Get link */ + link = &(em_conf->link[i]); + + /* Check if we have link intended for this lcore */ + if (link->lcore_id == lcore_id) { + + /* Update the number of links for this core */ + lcore_nb_link++; + + } + } + + /* Compute size of one entry to be copied */ + single_link_size = sizeof(struct eh_event_link_info); + + /* Compute size of the buffer required */ + cache_size = lcore_nb_link * sizeof(struct eh_event_link_info); + + /* Compute size of the buffer required */ + link_cache = calloc(1, cache_size); + + /* Get the number of links registered */ + for (i = 0; i < em_conf->nb_link; i++) { + + /* Get link */ + link = &(em_conf->link[i]); + + /* Check if we have link intended for this lcore */ + if (link->lcore_id == lcore_id) { + + /* Cache the link */ + memcpy(&link_cache[index], link, single_link_size); + + /* Update index */ + index++; + } + } + + /* Update the links for application to use the cached links */ + *links = link_cache; + + /* Return the number of cached links */ + return lcore_nb_link; +} + static int eh_tx_adapter_configure(struct eventmode_conf *em_conf, struct tx_adapter_conf *adapter) @@ -1208,6 +1471,79 @@ eh_devs_uninit(struct eh_conf *conf) return 0; } +void +eh_launch_worker(struct eh_conf *conf, struct eh_app_worker_params *app_wrkr, + uint8_t nb_wrkr_param) +{ + struct eh_app_worker_params *match_wrkr; + struct eh_event_link_info *links = NULL; + struct eventmode_conf *em_conf; + uint32_t lcore_id; + uint8_t nb_links; + + if 
(conf == NULL) { + EH_LOG_ERR("Invalid event helper configuration"); + return; + } + + if (conf->mode_params == NULL) { + EH_LOG_ERR("Invalid event mode parameters"); + return; + } + + /* Get eventmode conf */ + em_conf = (struct eventmode_conf *)(conf->mode_params); + + /* Get core ID */ + lcore_id = rte_lcore_id(); + + /* Check if this is eth core */ + if (rte_bitmap_get(em_conf->eth_core_mask, lcore_id)) { + eh_start_worker_eth_core(em_conf, lcore_id); + return; + } + + if (app_wrkr == NULL || nb_wrkr_param == 0) { + EH_LOG_ERR("Invalid args"); + return; + } + + /* + * This is a regular worker thread. The application registers + * multiple workers with various capabilities. Run worker + * based on the selected capabilities of the event + * device configured. + */ + + /* Get the first matching worker for the event device */ + match_wrkr = eh_find_worker(lcore_id, conf, app_wrkr, nb_wrkr_param); + if (match_wrkr == NULL) { + EH_LOG_ERR("Failed to match worker registered for lcore %d", + lcore_id); + goto clean_and_exit; + } + + /* Verify sanity of the matched worker */ + if (eh_verify_match_worker(match_wrkr) != 1) { + EH_LOG_ERR("Failed to validate the matched worker"); + goto clean_and_exit; + } + + /* Get worker links */ + nb_links = eh_get_event_lcore_links(lcore_id, conf, &links); + + /* Launch the worker thread */ + match_wrkr->worker_thread(links, nb_links); + + /* Free links info memory */ + free(links); + +clean_and_exit: + + /* Flag eth_cores to stop, if started */ + eh_stop_worker_eth_core(); +} + uint8_t eh_get_tx_queue(struct eh_conf *conf, uint8_t eventdev_id) { diff --git a/examples/ipsec-secgw/event_helper.h b/examples/ipsec-secgw/event_helper.h index d7191a6..31a158e 100644 --- a/examples/ipsec-secgw/event_helper.h +++ b/examples/ipsec-secgw/event_helper.h @@ -44,6 +44,9 @@ extern "C" { #define EVENT_MODE_MAX_LCORE_LINKS \ (EVENT_MODE_MAX_EVENT_DEVS * EVENT_MODE_MAX_EVENT_QUEUES_PER_DEV) +/* Max adapters that one Rx core can handle */ +#define 
EVENT_MODE_MAX_ADAPTERS_PER_RX_CORE EVENT_MODE_MAX_RX_ADAPTERS + /* Max adapters that one Tx core can handle */ #define EVENT_MODE_MAX_ADAPTERS_PER_TX_CORE EVENT_MODE_MAX_TX_ADAPTERS @@ -55,6 +58,14 @@ enum eh_pkt_transfer_mode { EH_PKT_TRANSFER_MODE_EVENT, }; +/** + * Event mode packet rx types + */ +enum eh_rx_types { + EH_RX_TYPE_NON_BURST = 0, + EH_RX_TYPE_BURST +}; + /* Event dev params */ struct eventdev_params { uint8_t eventdev_id; @@ -165,6 +176,22 @@ struct eh_conf { /**< Mode specific parameters */ }; +/* Workers registered by the application */ +struct eh_app_worker_params { + union { + RTE_STD_C11 + struct { + uint64_t burst : 1; + /**< Specify status of rx type burst */ + }; + uint64_t u64; + } cap; + /**< Capabilities of this worker */ + void (*worker_thread)(struct eh_event_link_info *links, + uint8_t nb_links); + /**< Worker thread */ +}; + /** * Initialize event mode devices * @@ -232,6 +259,27 @@ eh_get_tx_queue(struct eh_conf *conf, uint8_t eventdev_id); void eh_display_conf(struct eh_conf *conf); + +/** + * Launch eventmode worker + * + * The application can request the eventmode helper subsystem to launch the + * worker based on the capabilities of event device and the options selected + * while initializing the eventmode. + * + * @param conf + * Event helper configuration + * @param app_wrkr + * List of all the workers registered by application, along with its + * capabilities + * @param nb_wrkr_param + * Number of workers passed by the application + * + */ +void +eh_launch_worker(struct eh_conf *conf, struct eh_app_worker_params *app_wrkr, + uint8_t nb_wrkr_param); + #ifdef __cplusplus } #endif -- 2.7.4 ^ permalink raw reply [flat|nested] 147+ messages in thread
* [dpdk-dev] [PATCH v2 08/12] examples/ipsec-secgw: add support for internal ports 2020-01-20 13:45 ` [dpdk-dev] [PATCH v2 00/12] add eventmode to ipsec-secgw Anoob Joseph ` (6 preceding siblings ...) 2020-01-20 13:45 ` [dpdk-dev] [PATCH v2 07/12] examples/ipsec-secgw: add routines to launch workers Anoob Joseph @ 2020-01-20 13:45 ` Anoob Joseph 2020-01-20 13:45 ` [dpdk-dev] [PATCH v2 09/12] examples/ipsec-secgw: add eventmode to ipsec-secgw Anoob Joseph ` (5 subsequent siblings) 13 siblings, 0 replies; 147+ messages in thread From: Anoob Joseph @ 2020-01-20 13:45 UTC (permalink / raw) To: Akhil Goyal, Radu Nicolau, Thomas Monjalon Cc: Lukasz Bartosik, Jerin Jacob, Narayana Prasad, Ankur Dwivedi, Anoob Joseph, Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru, Konstantin Ananyev, dev From: Lukasz Bartosik <lbartosik@marvell.com> Add support for Rx and Tx internal ports. When internal ports are available, a packet can be received from an eth port and forwarded to an event queue by HW without any software intervention. The same applies on the Tx side, where a packet sent to an event queue can be forwarded by HW to an eth port without any software intervention. 
Signed-off-by: Anoob Joseph <anoobj@marvell.com> Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> --- examples/ipsec-secgw/event_helper.c | 179 +++++++++++++++++++++++++++++++----- examples/ipsec-secgw/event_helper.h | 11 +++ 2 files changed, 167 insertions(+), 23 deletions(-) diff --git a/examples/ipsec-secgw/event_helper.c b/examples/ipsec-secgw/event_helper.c index 95dc4e6..9719ab4 100644 --- a/examples/ipsec-secgw/event_helper.c +++ b/examples/ipsec-secgw/event_helper.c @@ -95,6 +95,39 @@ eh_get_eventdev_params(struct eventmode_conf *em_conf, uint8_t eventdev_id) return &(em_conf->eventdev_config[i]); } + +static inline bool +eh_dev_has_rx_internal_port(uint8_t eventdev_id) +{ + int j; + bool flag = true; + + RTE_ETH_FOREACH_DEV(j) { + uint32_t caps = 0; + + rte_event_eth_rx_adapter_caps_get(eventdev_id, j, &caps); + if (!(caps & RTE_EVENT_ETH_RX_ADAPTER_CAP_INTERNAL_PORT)) + flag = false; + } + return flag; +} + +static inline bool +eh_dev_has_tx_internal_port(uint8_t eventdev_id) +{ + int j; + bool flag = true; + + RTE_ETH_FOREACH_DEV(j) { + uint32_t caps = 0; + + rte_event_eth_tx_adapter_caps_get(eventdev_id, j, &caps); + if (!(caps & RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT)) + flag = false; + } + return flag; +} + static inline bool eh_dev_has_burst_mode(uint8_t dev_id) { @@ -179,6 +212,42 @@ eh_set_default_conf_eventdev(struct eventmode_conf *em_conf) return 0; } +static void +eh_do_capability_check(struct eventmode_conf *em_conf) +{ + struct eventdev_params *eventdev_config; + int all_internal_ports = 1; + uint32_t eventdev_id; + int i; + + for (i = 0; i < em_conf->nb_eventdev; i++) { + + /* Get the event dev conf */ + eventdev_config = &(em_conf->eventdev_config[i]); + eventdev_id = eventdev_config->eventdev_id; + + /* Check if event device has internal port for Rx & Tx */ + if (eh_dev_has_rx_internal_port(eventdev_id) && + eh_dev_has_tx_internal_port(eventdev_id)) { + eventdev_config->all_internal_ports = 1; + } else { + all_internal_ports = 0; 
+ } + } + + /* + * If Rx & Tx internal ports are supported by all event devices then + * eth cores won't be required. Override the eth core mask requested + * and decrement number of event queues by one as it won't be needed + * for Tx. + */ + if (all_internal_ports) { + rte_bitmap_reset(em_conf->eth_core_mask); + for (i = 0; i < em_conf->nb_eventdev; i++) + em_conf->eventdev_config[i].nb_eventqueue--; + } +} + static int eh_set_default_conf_link(struct eventmode_conf *em_conf) { @@ -250,7 +319,10 @@ eh_set_default_conf_rx_adapter(struct eventmode_conf *em_conf) struct rx_adapter_connection_info *conn; struct eventdev_params *eventdev_config; struct rx_adapter_conf *adapter; + bool rx_internal_port = true; bool single_ev_queue = false; + int nb_eventqueue; + uint32_t caps = 0; int eventdev_id; int nb_eth_dev; int adapter_id; @@ -280,14 +352,21 @@ eh_set_default_conf_rx_adapter(struct eventmode_conf *em_conf) /* Set adapter conf */ adapter->eventdev_id = eventdev_id; adapter->adapter_id = adapter_id; - adapter->rx_core_id = eh_get_next_eth_core(em_conf); + + /* + * If event device does not have internal ports for passing + * packets then reserved one queue for Tx path + */ + nb_eventqueue = eventdev_config->all_internal_ports ? + eventdev_config->nb_eventqueue : + eventdev_config->nb_eventqueue - 1; /* * Map all queues of eth device (port) to an event queue. If there * are more event queues than eth ports then create 1:1 mapping. * Otherwise map all eth ports to a single event queue. 
*/ - if (nb_eth_dev > eventdev_config->nb_eventqueue) + if (nb_eth_dev > nb_eventqueue) single_ev_queue = true; for (i = 0; i < nb_eth_dev; i++) { @@ -309,11 +388,24 @@ eh_set_default_conf_rx_adapter(struct eventmode_conf *em_conf) /* Add all eth queues eth port to event queue */ conn->ethdev_rx_qid = -1; + /* Get Rx adapter capabilities */ + rte_event_eth_rx_adapter_caps_get(eventdev_id, i, &caps); + if (!(caps & RTE_EVENT_ETH_RX_ADAPTER_CAP_INTERNAL_PORT)) + rx_internal_port = false; + /* Update no of connections */ adapter->nb_connections++; } + if (rx_internal_port) { + /* Rx core is not required */ + adapter->rx_core_id = -1; + } else { + /* Rx core is required */ + adapter->rx_core_id = eh_get_next_eth_core(em_conf); + } + /* We have setup one adapter */ em_conf->nb_rx_adapter = 1; @@ -326,6 +418,8 @@ eh_set_default_conf_tx_adapter(struct eventmode_conf *em_conf) struct tx_adapter_connection_info *conn; struct eventdev_params *eventdev_config; struct tx_adapter_conf *tx_adapter; + bool tx_internal_port = true; + uint32_t caps = 0; int eventdev_id; int adapter_id; int nb_eth_dev; @@ -359,18 +453,6 @@ eh_set_default_conf_tx_adapter(struct eventmode_conf *em_conf) tx_adapter->eventdev_id = eventdev_id; tx_adapter->adapter_id = adapter_id; - /* TODO: Tx core is required only when internal port is not present */ - tx_adapter->tx_core_id = eh_get_next_eth_core(em_conf); - - /* - * Application uses one event queue per adapter for submitting - * packets for Tx. Reserve the last queue available and decrement - * the total available event queues for this - */ - - /* Queue numbers start at 0 */ - tx_adapter->tx_ev_queue = eventdev_config->nb_eventqueue - 1; - /* * Map all Tx queues of the eth device (port) to the event device. 
*/ @@ -400,10 +482,30 @@ eh_set_default_conf_tx_adapter(struct eventmode_conf *em_conf) /* Add all eth tx queues to adapter */ conn->ethdev_tx_qid = -1; + /* Get Tx adapter capabilities */ + rte_event_eth_tx_adapter_caps_get(eventdev_id, i, &caps); + if (!(caps & RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT)) + tx_internal_port = false; + /* Update no of connections */ tx_adapter->nb_connections++; } + if (tx_internal_port) { + /* Tx core is not required */ + tx_adapter->tx_core_id = -1; + } else { + /* Tx core is required */ + tx_adapter->tx_core_id = eh_get_next_eth_core(em_conf); + + /* + * Use one event queue per adapter for submitting packets + * for Tx. Reserving the last queue available + */ + /* Queue numbers start at 0 */ + tx_adapter->tx_ev_queue = eventdev_config->nb_eventqueue - 1; + } + /* We have setup one adapter */ em_conf->nb_tx_adapter = 1; return 0; @@ -424,6 +526,9 @@ eh_validate_conf(struct eventmode_conf *em_conf) return ret; } + /* Perform capability check for the selected event devices */ + eh_do_capability_check(em_conf); + /* * Check if links are specified. Else generate a default config for * the event ports used. @@ -529,11 +634,13 @@ eh_initialize_eventdev(struct eventmode_conf *em_conf) eventdev_config->ev_queue_mode; /* * All queues need to be set with sched_type as - * schedule type for the application stage. One queue - * would be reserved for the final eth tx stage. This - * will be an atomic queue. + * schedule type for the application stage. One + * queue would be reserved for the final eth tx + * stage if event device does not have internal + * ports. This will be an atomic queue. 
*/ - if (j == nb_eventqueue-1) { + if (!eventdev_config->all_internal_ports && + j == nb_eventqueue-1) { eventq_conf.schedule_type = RTE_SCHED_TYPE_ATOMIC; } else { @@ -847,6 +954,12 @@ eh_find_worker(uint32_t lcore_id, struct eh_conf *conf, /* Populate the curr_conf with the capabilities */ + /* Check for Tx internal port */ + if (eh_dev_has_tx_internal_port(eventdev_id)) + curr_conf.cap.tx_internal_port = EH_TX_TYPE_INTERNAL_PORT; + else + curr_conf.cap.tx_internal_port = EH_TX_TYPE_NO_INTERNAL_PORT; + /* Check for burst mode */ if (eh_dev_has_burst_mode(eventdev_id)) curr_conf.cap.burst = EH_RX_TYPE_BURST; @@ -1018,6 +1131,16 @@ eh_tx_adapter_configure(struct eventmode_conf *em_conf, } } + /* + * Check if Tx core is assigned. If Tx core is not assigned then + * the adapter has internal port for submitting Tx packets and + * Tx event queue & port setup is not required + */ + if (adapter->tx_core_id == (uint32_t) (-1)) { + /* Internal port is present */ + goto skip_tx_queue_port_setup; + } + /* Setup Tx queue & port */ /* Get event port used by the adapter */ @@ -1057,6 +1180,7 @@ eh_tx_adapter_configure(struct eventmode_conf *em_conf, rte_service_set_runstate_mapped_check(service_id, 0); +skip_tx_queue_port_setup: /* Start adapter */ ret = rte_event_eth_tx_adapter_start(adapter->adapter_id); if (ret < 0) { @@ -1141,13 +1265,22 @@ eh_display_rx_adapter_conf(struct eventmode_conf *em_conf) for (i = 0; i < nb_rx_adapter; i++) { adapter = &(em_conf->rx_adapter[i]); - EH_LOG_INFO( - "\tRx adaper ID: %-2d\tConnections: %-2d\tEvent dev ID: %-2d" - "\tRx core: %-2d", + sprintf(print_buf, + "\tRx adaper ID: %-2d\tConnections: %-2d\tEvent dev ID: %-2d", adapter->adapter_id, adapter->nb_connections, - adapter->eventdev_id, - adapter->rx_core_id); + adapter->eventdev_id); + if (adapter->rx_core_id == (uint32_t)-1) + sprintf(print_buf + strlen(print_buf), + "\tRx core: %-2s", "[INTERNAL PORT]"); + else if (adapter->rx_core_id == RTE_MAX_LCORE) + sprintf(print_buf + 
strlen(print_buf), + "\tRx core: %-2s", "[NONE]"); + else + sprintf(print_buf + strlen(print_buf), + "\tRx core: %-2d", adapter->rx_core_id); + + EH_LOG_INFO("%s", print_buf); for (j = 0; j < adapter->nb_connections; j++) { conn = &(adapter->conn[j]); diff --git a/examples/ipsec-secgw/event_helper.h b/examples/ipsec-secgw/event_helper.h index 31a158e..15a7bd6 100644 --- a/examples/ipsec-secgw/event_helper.h +++ b/examples/ipsec-secgw/event_helper.h @@ -66,12 +66,21 @@ enum eh_rx_types { EH_RX_TYPE_BURST }; +/** + * Event mode packet tx types + */ +enum eh_tx_types { + EH_TX_TYPE_INTERNAL_PORT = 0, + EH_TX_TYPE_NO_INTERNAL_PORT +}; + /* Event dev params */ struct eventdev_params { uint8_t eventdev_id; uint8_t nb_eventqueue; uint8_t nb_eventport; uint8_t ev_queue_mode; + uint8_t all_internal_ports; }; /** @@ -183,6 +192,8 @@ struct eh_app_worker_params { struct { uint64_t burst : 1; /**< Specify status of rx type burst */ + uint64_t tx_internal_port : 1; + /**< Specify whether tx internal port is available */ }; uint64_t u64; } cap; -- 2.7.4 ^ permalink raw reply [flat|nested] 147+ messages in thread
* [dpdk-dev] [PATCH v2 09/12] examples/ipsec-secgw: add eventmode to ipsec-secgw 2020-01-20 13:45 ` [dpdk-dev] [PATCH v2 00/12] add eventmode to ipsec-secgw Anoob Joseph ` (7 preceding siblings ...) 2020-01-20 13:45 ` [dpdk-dev] [PATCH v2 08/12] examples/ipsec-secgw: add support for internal ports Anoob Joseph @ 2020-01-20 13:45 ` Anoob Joseph 2020-01-29 23:31 ` Ananyev, Konstantin 2020-01-20 13:45 ` [dpdk-dev] [PATCH v2 10/12] examples/ipsec-secgw: add driver mode worker Anoob Joseph ` (4 subsequent siblings) 13 siblings, 1 reply; 147+ messages in thread From: Anoob Joseph @ 2020-01-20 13:45 UTC (permalink / raw) To: Akhil Goyal, Radu Nicolau, Thomas Monjalon Cc: Lukasz Bartosik, Jerin Jacob, Narayana Prasad, Ankur Dwivedi, Anoob Joseph, Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru, Konstantin Ananyev, dev From: Lukasz Bartosik <lbartosik@marvell.com> Add eventmode support to ipsec-secgw. With the aid of event helper configure and use the eventmode capabilities. Signed-off-by: Anoob Joseph <anoobj@marvell.com> Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> --- examples/ipsec-secgw/event_helper.c | 4 +- examples/ipsec-secgw/event_helper.h | 14 ++ examples/ipsec-secgw/ipsec-secgw.c | 341 +++++++++++++++++++++++++++++++++++- examples/ipsec-secgw/ipsec.h | 11 ++ examples/ipsec-secgw/sa.c | 11 -- 5 files changed, 365 insertions(+), 16 deletions(-) diff --git a/examples/ipsec-secgw/event_helper.c b/examples/ipsec-secgw/event_helper.c index 9719ab4..54a98c9 100644 --- a/examples/ipsec-secgw/event_helper.c +++ b/examples/ipsec-secgw/event_helper.c @@ -966,6 +966,8 @@ eh_find_worker(uint32_t lcore_id, struct eh_conf *conf, else curr_conf.cap.burst = EH_RX_TYPE_NON_BURST; + curr_conf.cap.ipsec_mode = conf->ipsec_mode; + /* Parse the passed list and see if we have matching capabilities */ /* Initialize the pointer used to traverse the list */ @@ -1625,7 +1627,7 @@ eh_launch_worker(struct eh_conf *conf, struct eh_app_worker_params *app_wrkr, } /* Get eventmode 
conf */ - em_conf = (struct eventmode_conf *)(conf->mode_params); + em_conf = conf->mode_params; /* Get core ID */ lcore_id = rte_lcore_id(); diff --git a/examples/ipsec-secgw/event_helper.h b/examples/ipsec-secgw/event_helper.h index 15a7bd6..cf5d346 100644 --- a/examples/ipsec-secgw/event_helper.h +++ b/examples/ipsec-secgw/event_helper.h @@ -74,6 +74,14 @@ enum eh_tx_types { EH_TX_TYPE_NO_INTERNAL_PORT }; +/** + * Event mode ipsec mode types + */ +enum eh_ipsec_mode_types { + EH_IPSEC_MODE_TYPE_APP = 0, + EH_IPSEC_MODE_TYPE_DRIVER +}; + /* Event dev params */ struct eventdev_params { uint8_t eventdev_id; @@ -183,6 +191,10 @@ struct eh_conf { */ void *mode_params; /**< Mode specific parameters */ + + /** Application specific params */ + enum eh_ipsec_mode_types ipsec_mode; + /**< Mode of ipsec run */ }; /* Workers registered by the application */ @@ -194,6 +206,8 @@ struct eh_app_worker_params { /**< Specify status of rx type burst */ uint64_t tx_internal_port : 1; /**< Specify whether tx internal port is available */ + uint64_t ipsec_mode : 1; + /**< Specify ipsec processing level */ }; uint64_t u64; } cap; diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c index d5e8fe5..f1cc3fb 100644 --- a/examples/ipsec-secgw/ipsec-secgw.c +++ b/examples/ipsec-secgw/ipsec-secgw.c @@ -2,6 +2,7 @@ * Copyright(c) 2016 Intel Corporation */ +#include <stdbool.h> #include <stdio.h> #include <stdlib.h> #include <stdint.h> @@ -14,6 +15,7 @@ #include <sys/queue.h> #include <stdarg.h> #include <errno.h> +#include <signal.h> #include <getopt.h> #include <rte_common.h> @@ -41,12 +43,17 @@ #include <rte_jhash.h> #include <rte_cryptodev.h> #include <rte_security.h> +#include <rte_bitmap.h> +#include <rte_eventdev.h> #include <rte_ip.h> #include <rte_ip_frag.h> +#include "event_helper.h" #include "ipsec.h" #include "parser.h" +volatile bool force_quit; + #define RTE_LOGTYPE_IPSEC RTE_LOGTYPE_USER1 #define MAX_JUMBO_PKT_LEN 9600 @@ -133,12 +140,20 @@ struct 
flow_info flow_info_tbl[RTE_MAX_ETHPORTS]; #define CMD_LINE_OPT_CONFIG "config" #define CMD_LINE_OPT_SINGLE_SA "single-sa" #define CMD_LINE_OPT_CRYPTODEV_MASK "cryptodev_mask" +#define CMD_LINE_OPT_TRANSFER_MODE "transfer-mode" +#define CMD_LINE_OPT_SCHEDULE_TYPE "schedule-type" #define CMD_LINE_OPT_RX_OFFLOAD "rxoffload" #define CMD_LINE_OPT_TX_OFFLOAD "txoffload" #define CMD_LINE_OPT_REASSEMBLE "reassemble" #define CMD_LINE_OPT_MTU "mtu" #define CMD_LINE_OPT_FRAG_TTL "frag-ttl" +#define CMD_LINE_ARG_EVENT "event" +#define CMD_LINE_ARG_POLL "poll" +#define CMD_LINE_ARG_ORDERED "ordered" +#define CMD_LINE_ARG_ATOMIC "atomic" +#define CMD_LINE_ARG_PARALLEL "parallel" + enum { /* long options mapped to a short option */ @@ -149,6 +164,8 @@ enum { CMD_LINE_OPT_CONFIG_NUM, CMD_LINE_OPT_SINGLE_SA_NUM, CMD_LINE_OPT_CRYPTODEV_MASK_NUM, + CMD_LINE_OPT_TRANSFER_MODE_NUM, + CMD_LINE_OPT_SCHEDULE_TYPE_NUM, CMD_LINE_OPT_RX_OFFLOAD_NUM, CMD_LINE_OPT_TX_OFFLOAD_NUM, CMD_LINE_OPT_REASSEMBLE_NUM, @@ -160,6 +177,8 @@ static const struct option lgopts[] = { {CMD_LINE_OPT_CONFIG, 1, 0, CMD_LINE_OPT_CONFIG_NUM}, {CMD_LINE_OPT_SINGLE_SA, 1, 0, CMD_LINE_OPT_SINGLE_SA_NUM}, {CMD_LINE_OPT_CRYPTODEV_MASK, 1, 0, CMD_LINE_OPT_CRYPTODEV_MASK_NUM}, + {CMD_LINE_OPT_TRANSFER_MODE, 1, 0, CMD_LINE_OPT_TRANSFER_MODE_NUM}, + {CMD_LINE_OPT_SCHEDULE_TYPE, 1, 0, CMD_LINE_OPT_SCHEDULE_TYPE_NUM}, {CMD_LINE_OPT_RX_OFFLOAD, 1, 0, CMD_LINE_OPT_RX_OFFLOAD_NUM}, {CMD_LINE_OPT_TX_OFFLOAD, 1, 0, CMD_LINE_OPT_TX_OFFLOAD_NUM}, {CMD_LINE_OPT_REASSEMBLE, 1, 0, CMD_LINE_OPT_REASSEMBLE_NUM}, @@ -177,6 +196,7 @@ static int32_t numa_on = 1; /**< NUMA is enabled by default. */ static uint32_t nb_lcores; static uint32_t single_sa; static uint32_t single_sa_idx; +static uint32_t schedule_type; /* * RX/TX HW offload capabilities to enable/use on ethernet ports. 
@@ -1185,7 +1205,7 @@ main_loop(__attribute__((unused)) void *dummy) } static int32_t -check_params(void) +check_params(struct eh_conf *eh_conf) { uint8_t lcore; uint16_t portid; @@ -1220,6 +1240,14 @@ check_params(void) return -1; } } + + if (eh_conf->mode == EH_PKT_TRANSFER_MODE_POLL) { + if (schedule_type) { + printf("error: option --schedule-type applies only to event mode\n"); + return -1; + } + } + return 0; } @@ -1277,6 +1305,8 @@ print_usage(const char *prgname) " --config (port,queue,lcore)[,(port,queue,lcore)]" " [--single-sa SAIDX]" " [--cryptodev_mask MASK]" + " [--transfer-mode MODE]" + " [--schedule-type TYPE]" " [--" CMD_LINE_OPT_RX_OFFLOAD " RX_OFFLOAD_MASK]" " [--" CMD_LINE_OPT_TX_OFFLOAD " TX_OFFLOAD_MASK]" " [--" CMD_LINE_OPT_REASSEMBLE " REASSEMBLE_TABLE_SIZE]" @@ -1298,6 +1328,14 @@ print_usage(const char *prgname) " bypassing the SP\n" " --cryptodev_mask MASK: Hexadecimal bitmask of the crypto\n" " devices to configure\n" + " --transfer-mode MODE\n" + " \"poll\" : Packet transfer via polling (default)\n" + " \"event\" : Packet transfer via event device\n" + " --schedule-type TYPE queue schedule type, used only when\n" + " transfer mode is set to event\n" + " \"ordered\" : Ordered (default)\n" + " \"atomic\" : Atomic\n" + " \"parallel\" : Parallel\n" " --" CMD_LINE_OPT_RX_OFFLOAD ": bitmask of the RX HW offload capabilities to enable/use\n" " (DEV_RX_OFFLOAD_*)\n" @@ -1432,8 +1470,45 @@ print_app_sa_prm(const struct app_sa_prm *prm) printf("Frag TTL: %" PRIu64 " ns\n", frag_ttl_ns); } +static int +parse_transfer_mode(struct eh_conf *conf, const char *optarg) +{ + if (!strcmp(CMD_LINE_ARG_POLL, optarg)) + conf->mode = EH_PKT_TRANSFER_MODE_POLL; + else if (!strcmp(CMD_LINE_ARG_EVENT, optarg)) + conf->mode = EH_PKT_TRANSFER_MODE_EVENT; + else { + printf("Unsupported packet transfer mode\n"); + return -EINVAL; + } + + return 0; +} + +static int +parse_schedule_type(struct eh_conf *conf, const char *optarg) +{ + struct eventmode_conf *em_conf = 
NULL; + + /* Get eventmode conf */ + em_conf = conf->mode_params; + + if (!strcmp(CMD_LINE_ARG_ORDERED, optarg)) + em_conf->ext_params.sched_type = RTE_SCHED_TYPE_ORDERED; + else if (!strcmp(CMD_LINE_ARG_ATOMIC, optarg)) + em_conf->ext_params.sched_type = RTE_SCHED_TYPE_ATOMIC; + else if (!strcmp(CMD_LINE_ARG_PARALLEL, optarg)) + em_conf->ext_params.sched_type = RTE_SCHED_TYPE_PARALLEL; + else { + printf("Unsupported queue schedule type\n"); + return -EINVAL; + } + + return 0; +} + static int32_t -parse_args(int32_t argc, char **argv) +parse_args(int32_t argc, char **argv, struct eh_conf *eh_conf) { int opt; int64_t ret; @@ -1522,6 +1597,7 @@ parse_args(int32_t argc, char **argv) /* else */ single_sa = 1; single_sa_idx = ret; + eh_conf->ipsec_mode = EH_IPSEC_MODE_TYPE_DRIVER; printf("Configured with single SA index %u\n", single_sa_idx); break; @@ -1536,6 +1612,26 @@ parse_args(int32_t argc, char **argv) /* else */ enabled_cryptodev_mask = ret; break; + + case CMD_LINE_OPT_TRANSFER_MODE_NUM: + ret = parse_transfer_mode(eh_conf, optarg); + if (ret < 0) { + printf("Invalid packet transfer mode\n"); + print_usage(prgname); + return -1; + } + break; + + case CMD_LINE_OPT_SCHEDULE_TYPE_NUM: + ret = parse_schedule_type(eh_conf, optarg); + if (ret < 0) { + printf("Invalid queue schedule type\n"); + print_usage(prgname); + return -1; + } + schedule_type = 1; + break; + case CMD_LINE_OPT_RX_OFFLOAD_NUM: ret = parse_mask(optarg, &dev_rx_offload); if (ret != 0) { @@ -2450,16 +2546,176 @@ create_default_ipsec_flow(uint16_t port_id, uint64_t rx_offloads) port_id); } +static struct eh_conf * +eh_conf_init(void) +{ + struct eventmode_conf *em_conf = NULL; + struct eh_conf *conf = NULL; + unsigned int eth_core_id; + uint32_t nb_bytes; + void *mem = NULL; + + /* Allocate memory for config */ + conf = calloc(1, sizeof(struct eh_conf)); + if (conf == NULL) { + printf("Failed to allocate memory for eventmode helper conf"); + goto err; + } + + /* Set default conf */ + + /* Packet 
transfer mode: poll */ + conf->mode = EH_PKT_TRANSFER_MODE_POLL; + conf->ipsec_mode = EH_IPSEC_MODE_TYPE_APP; + + /* Keep all ethernet ports enabled by default */ + conf->eth_portmask = -1; + + /* Allocate memory for event mode params */ + conf->mode_params = calloc(1, sizeof(struct eventmode_conf)); + if (conf->mode_params == NULL) { + printf("Failed to allocate memory for event mode params"); + goto err; + } + + /* Get eventmode conf */ + em_conf = conf->mode_params; + + /* Allocate and initialize bitmap for eth cores */ + nb_bytes = rte_bitmap_get_memory_footprint(RTE_MAX_LCORE); + if (!nb_bytes) { + printf("Failed to get bitmap footprint"); + goto err; + } + + mem = rte_zmalloc("event-helper-ethcore-bitmap", nb_bytes, + RTE_CACHE_LINE_SIZE); + if (!mem) { + printf("Failed to allocate memory for eth cores bitmap\n"); + goto err; + } + + em_conf->eth_core_mask = rte_bitmap_init(RTE_MAX_LCORE, mem, nb_bytes); + if (!em_conf->eth_core_mask) { + printf("Failed to initialize bitmap"); + goto err; + } + + /* Schedule type: ordered */ + em_conf->ext_params.sched_type = RTE_SCHED_TYPE_ORDERED; + + /* Set two cores as eth cores for Rx & Tx */ + + /* Use first core other than master core as Rx core */ + eth_core_id = rte_get_next_lcore(0, /* curr core */ + 1, /* skip master core */ + 0 /* wrap */); + + rte_bitmap_set(em_conf->eth_core_mask, eth_core_id); + + /* Use next core as Tx core */ + eth_core_id = rte_get_next_lcore(eth_core_id, /* curr core */ + 1, /* skip master core */ + 0 /* wrap */); + + rte_bitmap_set(em_conf->eth_core_mask, eth_core_id); + + return conf; +err: + rte_free(mem); + free(em_conf); + free(conf); + return NULL; +} + +static void +eh_conf_uninit(struct eh_conf *conf) +{ + struct eventmode_conf *em_conf = NULL; + + /* Get eventmode conf */ + em_conf = conf->mode_params; + + /* Free evenmode configuration memory */ + rte_free(em_conf->eth_core_mask); + free(em_conf); + free(conf); +} + +static void +signal_handler(int signum) +{ + if (signum == 
SIGINT || signum == SIGTERM) { + printf("\n\nSignal %d received, preparing to exit...\n", + signum); + force_quit = true; + } +} + +static void +inline_sessions_free(struct sa_ctx *sa_ctx) +{ + struct rte_ipsec_session *ips; + struct ipsec_sa *sa; + int32_t i, ret; + + for (i = 0; i < IPSEC_SA_MAX_ENTRIES; i++) { + + sa = &sa_ctx->sa[i]; + if (!sa->spi) + continue; + + ips = ipsec_get_primary_session(sa); + if (ips->type != RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL && + ips->type != RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO) + continue; + + ret = rte_security_session_destroy( + rte_eth_dev_get_sec_ctx(sa->portid), + ips->security.ses); + if (ret) + RTE_LOG(ERR, IPSEC, "Failed to destroy security " + "session type %d, spi %d\n", + ips->type, sa->spi); + } +} + +static void +ev_mode_sess_verify(struct sa_ctx *sa_ctx) +{ + struct rte_ipsec_session *ips; + struct ipsec_sa *sa; + int32_t i; + + if (!sa_ctx) + return; + + for (i = 0; i < IPSEC_SA_MAX_ENTRIES; i++) { + + sa = &sa_ctx->sa[i]; + if (!sa->spi) + continue; + + ips = ipsec_get_primary_session(sa); + if (ips->type != RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL) + rte_exit(EXIT_FAILURE, "Event mode supports only " + "inline protocol sessions\n"); + } + +} + int32_t main(int32_t argc, char **argv) { int32_t ret; uint32_t lcore_id; + uint32_t cdev_id; uint32_t i; uint8_t socket_id; uint16_t portid; uint64_t req_rx_offloads[RTE_MAX_ETHPORTS]; uint64_t req_tx_offloads[RTE_MAX_ETHPORTS]; + struct eh_conf *eh_conf = NULL; size_t sess_sz; /* init EAL */ @@ -2469,8 +2725,17 @@ main(int32_t argc, char **argv) argc -= ret; argv += ret; + force_quit = false; + signal(SIGINT, signal_handler); + signal(SIGTERM, signal_handler); + + /* initialize event helper configuration */ + eh_conf = eh_conf_init(); + if (eh_conf == NULL) + rte_exit(EXIT_FAILURE, "Failed to init event helper config"); + /* parse application arguments (after the EAL ones) */ - ret = parse_args(argc, argv); + ret = parse_args(argc, argv, eh_conf); if (ret < 0) 
rte_exit(EXIT_FAILURE, "Invalid parameters\n"); @@ -2487,7 +2752,7 @@ main(int32_t argc, char **argv) rte_exit(EXIT_FAILURE, "Invalid unprotected portmask 0x%x\n", unprotected_port_mask); - if (check_params() < 0) + if (check_params(eh_conf) < 0) rte_exit(EXIT_FAILURE, "check_params failed\n"); ret = init_lcore_rx_queues(); @@ -2529,6 +2794,18 @@ main(int32_t argc, char **argv) cryptodevs_init(); + /* + * Set the enabled port mask in helper config for use by helper + * sub-system. This will be used while initializing devices using + * helper sub-system. + */ + eh_conf->eth_portmask = enabled_port_mask; + + /* Initialize eventmode components */ + ret = eh_devs_init(eh_conf); + if (ret < 0) + rte_exit(EXIT_FAILURE, "eh_devs_init failed, err=%d\n", ret); + /* start ports */ RTE_ETH_FOREACH_DEV(portid) { if ((enabled_port_mask & (1 << portid)) == 0) @@ -2576,6 +2853,18 @@ main(int32_t argc, char **argv) sp4_init(&socket_ctx[socket_id], socket_id); sp6_init(&socket_ctx[socket_id], socket_id); rt_init(&socket_ctx[socket_id], socket_id); + + /* + * Event mode currently supports only inline protocol + * sessions. If there are other types of sessions + * configured then exit with error. 
+ */ + if (eh_conf->mode == EH_PKT_TRANSFER_MODE_EVENT) { + ev_mode_sess_verify( + socket_ctx[socket_id].sa_in); + ev_mode_sess_verify( + socket_ctx[socket_id].sa_out); + } } } @@ -2583,10 +2872,54 @@ main(int32_t argc, char **argv) /* launch per-lcore init on every lcore */ rte_eal_mp_remote_launch(main_loop, NULL, CALL_MASTER); + RTE_LCORE_FOREACH_SLAVE(lcore_id) { if (rte_eal_wait_lcore(lcore_id) < 0) return -1; } + /* Uninitialize eventmode components */ + ret = eh_devs_uninit(eh_conf); + if (ret < 0) + rte_exit(EXIT_FAILURE, "eh_devs_uninit failed, err=%d\n", ret); + + /* Free eventmode configuration memory */ + eh_conf_uninit(eh_conf); + + /* Destroy inline inbound and outbound sessions */ + for (i = 0; i < NB_SOCKETS && i < rte_socket_count(); i++) { + socket_id = rte_socket_id_by_idx(i); + inline_sessions_free(socket_ctx[socket_id].sa_in); + inline_sessions_free(socket_ctx[socket_id].sa_out); + } + + for (cdev_id = 0; cdev_id < rte_cryptodev_count(); cdev_id++) { + printf("Closing cryptodev %d...", cdev_id); + rte_cryptodev_stop(cdev_id); + rte_cryptodev_close(cdev_id); + printf(" Done\n"); + } + + RTE_ETH_FOREACH_DEV(portid) { + if ((enabled_port_mask & (1 << portid)) == 0) + continue; + + printf("Closing port %d...", portid); + if (flow_info_tbl[portid].rx_def_flow) { + struct rte_flow_error err; + + ret = rte_flow_destroy(portid, + flow_info_tbl[portid].rx_def_flow, &err); + if (ret) + RTE_LOG(ERR, IPSEC, "Failed to destroy flow " + " for port %u, err msg: %s\n", portid, + err.message); + } + rte_eth_dev_stop(portid); + rte_eth_dev_close(portid); + printf(" Done\n"); + } + printf("Bye...\n"); + return 0; } diff --git a/examples/ipsec-secgw/ipsec.h b/examples/ipsec-secgw/ipsec.h index 28ff07d..0539aec 100644 --- a/examples/ipsec-secgw/ipsec.h +++ b/examples/ipsec-secgw/ipsec.h @@ -153,6 +153,17 @@ struct ipsec_sa { struct rte_security_session_conf sess_conf; } __rte_cache_aligned; +struct sa_ctx { + void *satbl; /* pointer to array of rte_ipsec_sa 
objects*/ + struct ipsec_sa sa[IPSEC_SA_MAX_ENTRIES]; + union { + struct { + struct rte_crypto_sym_xform a; + struct rte_crypto_sym_xform b; + }; + } xf[IPSEC_SA_MAX_ENTRIES]; +}; + struct ipsec_mbuf_metadata { struct ipsec_sa *sa; struct rte_crypto_op cop; diff --git a/examples/ipsec-secgw/sa.c b/examples/ipsec-secgw/sa.c index c75a5a1..2ec3e17 100644 --- a/examples/ipsec-secgw/sa.c +++ b/examples/ipsec-secgw/sa.c @@ -781,17 +781,6 @@ print_one_sa_rule(const struct ipsec_sa *sa, int inbound) printf("\n"); } -struct sa_ctx { - void *satbl; /* pointer to array of rte_ipsec_sa objects*/ - struct ipsec_sa sa[IPSEC_SA_MAX_ENTRIES]; - union { - struct { - struct rte_crypto_sym_xform a; - struct rte_crypto_sym_xform b; - }; - } xf[IPSEC_SA_MAX_ENTRIES]; -}; - static struct sa_ctx * sa_create(const char *name, int32_t socket_id) { -- 2.7.4 ^ permalink raw reply [flat|nested] 147+ messages in thread
* Re: [dpdk-dev] [PATCH v2 09/12] examples/ipsec-secgw: add eventmode to ipsec-secgw 2020-01-20 13:45 ` [dpdk-dev] [PATCH v2 09/12] examples/ipsec-secgw: add eventmode to ipsec-secgw Anoob Joseph @ 2020-01-29 23:31 ` Ananyev, Konstantin 2020-01-30 11:04 ` [dpdk-dev] [EXT] " Lukas Bartosik 0 siblings, 1 reply; 147+ messages in thread From: Ananyev, Konstantin @ 2020-01-29 23:31 UTC (permalink / raw) To: Anoob Joseph, Akhil Goyal, Nicolau, Radu, Thomas Monjalon Cc: Lukasz Bartosik, Jerin Jacob, Narayana Prasad, Ankur Dwivedi, Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru, dev > Add eventmode support to ipsec-secgw. With the aid of event helper > configure and use the eventmode capabilities. > > Signed-off-by: Anoob Joseph <anoobj@marvell.com> > Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> > --- > examples/ipsec-secgw/event_helper.c | 4 +- > examples/ipsec-secgw/event_helper.h | 14 ++ > examples/ipsec-secgw/ipsec-secgw.c | 341 +++++++++++++++++++++++++++++++++++- > examples/ipsec-secgw/ipsec.h | 11 ++ > examples/ipsec-secgw/sa.c | 11 -- > 5 files changed, 365 insertions(+), 16 deletions(-) > > diff --git a/examples/ipsec-secgw/event_helper.c b/examples/ipsec-secgw/event_helper.c > index 9719ab4..54a98c9 100644 > --- a/examples/ipsec-secgw/event_helper.c > +++ b/examples/ipsec-secgw/event_helper.c > @@ -966,6 +966,8 @@ eh_find_worker(uint32_t lcore_id, struct eh_conf *conf, > else > curr_conf.cap.burst = EH_RX_TYPE_NON_BURST; > > + curr_conf.cap.ipsec_mode = conf->ipsec_mode; > + > /* Parse the passed list and see if we have matching capabilities */ > > /* Initialize the pointer used to traverse the list */ > @@ -1625,7 +1627,7 @@ eh_launch_worker(struct eh_conf *conf, struct eh_app_worker_params *app_wrkr, > } > > /* Get eventmode conf */ > - em_conf = (struct eventmode_conf *)(conf->mode_params); > + em_conf = conf->mode_params; > > /* Get core ID */ > lcore_id = rte_lcore_id(); > diff --git a/examples/ipsec-secgw/event_helper.h 
b/examples/ipsec-secgw/event_helper.h > index 15a7bd6..cf5d346 100644 > --- a/examples/ipsec-secgw/event_helper.h > +++ b/examples/ipsec-secgw/event_helper.h > @@ -74,6 +74,14 @@ enum eh_tx_types { > EH_TX_TYPE_NO_INTERNAL_PORT > }; > > +/** > + * Event mode ipsec mode types > + */ > +enum eh_ipsec_mode_types { > + EH_IPSEC_MODE_TYPE_APP = 0, > + EH_IPSEC_MODE_TYPE_DRIVER > +}; > + > /* Event dev params */ > struct eventdev_params { > uint8_t eventdev_id; > @@ -183,6 +191,10 @@ struct eh_conf { > */ > void *mode_params; > /**< Mode specific parameters */ > + > + /** Application specific params */ > + enum eh_ipsec_mode_types ipsec_mode; > + /**< Mode of ipsec run */ > }; > > /* Workers registered by the application */ > @@ -194,6 +206,8 @@ struct eh_app_worker_params { > /**< Specify status of rx type burst */ > uint64_t tx_internal_port : 1; > /**< Specify whether tx internal port is available */ > + uint64_t ipsec_mode : 1; > + /**< Specify ipsec processing level */ > }; > uint64_t u64; > } cap; > diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c > index d5e8fe5..f1cc3fb 100644 > --- a/examples/ipsec-secgw/ipsec-secgw.c > +++ b/examples/ipsec-secgw/ipsec-secgw.c > @@ -2,6 +2,7 @@ > * Copyright(c) 2016 Intel Corporation > */ > > +#include <stdbool.h> > #include <stdio.h> > #include <stdlib.h> > #include <stdint.h> > @@ -14,6 +15,7 @@ > #include <sys/queue.h> > #include <stdarg.h> > #include <errno.h> > +#include <signal.h> > #include <getopt.h> > > #include <rte_common.h> > @@ -41,12 +43,17 @@ > #include <rte_jhash.h> > #include <rte_cryptodev.h> > #include <rte_security.h> > +#include <rte_bitmap.h> > +#include <rte_eventdev.h> > #include <rte_ip.h> > #include <rte_ip_frag.h> > > +#include "event_helper.h" > #include "ipsec.h" > #include "parser.h" > > +volatile bool force_quit; > + > #define RTE_LOGTYPE_IPSEC RTE_LOGTYPE_USER1 > > #define MAX_JUMBO_PKT_LEN 9600 > @@ -133,12 +140,20 @@ struct flow_info 
flow_info_tbl[RTE_MAX_ETHPORTS]; > #define CMD_LINE_OPT_CONFIG "config" > #define CMD_LINE_OPT_SINGLE_SA "single-sa" > #define CMD_LINE_OPT_CRYPTODEV_MASK "cryptodev_mask" > +#define CMD_LINE_OPT_TRANSFER_MODE "transfer-mode" > +#define CMD_LINE_OPT_SCHEDULE_TYPE "schedule-type" > #define CMD_LINE_OPT_RX_OFFLOAD "rxoffload" > #define CMD_LINE_OPT_TX_OFFLOAD "txoffload" > #define CMD_LINE_OPT_REASSEMBLE "reassemble" > #define CMD_LINE_OPT_MTU "mtu" > #define CMD_LINE_OPT_FRAG_TTL "frag-ttl" > > +#define CMD_LINE_ARG_EVENT "event" > +#define CMD_LINE_ARG_POLL "poll" > +#define CMD_LINE_ARG_ORDERED "ordered" > +#define CMD_LINE_ARG_ATOMIC "atomic" > +#define CMD_LINE_ARG_PARALLEL "parallel" > + > enum { > /* long options mapped to a short option */ > > @@ -149,6 +164,8 @@ enum { > CMD_LINE_OPT_CONFIG_NUM, > CMD_LINE_OPT_SINGLE_SA_NUM, > CMD_LINE_OPT_CRYPTODEV_MASK_NUM, > + CMD_LINE_OPT_TRANSFER_MODE_NUM, > + CMD_LINE_OPT_SCHEDULE_TYPE_NUM, > CMD_LINE_OPT_RX_OFFLOAD_NUM, > CMD_LINE_OPT_TX_OFFLOAD_NUM, > CMD_LINE_OPT_REASSEMBLE_NUM, > @@ -160,6 +177,8 @@ static const struct option lgopts[] = { > {CMD_LINE_OPT_CONFIG, 1, 0, CMD_LINE_OPT_CONFIG_NUM}, > {CMD_LINE_OPT_SINGLE_SA, 1, 0, CMD_LINE_OPT_SINGLE_SA_NUM}, > {CMD_LINE_OPT_CRYPTODEV_MASK, 1, 0, CMD_LINE_OPT_CRYPTODEV_MASK_NUM}, > + {CMD_LINE_OPT_TRANSFER_MODE, 1, 0, CMD_LINE_OPT_TRANSFER_MODE_NUM}, > + {CMD_LINE_OPT_SCHEDULE_TYPE, 1, 0, CMD_LINE_OPT_SCHEDULE_TYPE_NUM}, > {CMD_LINE_OPT_RX_OFFLOAD, 1, 0, CMD_LINE_OPT_RX_OFFLOAD_NUM}, > {CMD_LINE_OPT_TX_OFFLOAD, 1, 0, CMD_LINE_OPT_TX_OFFLOAD_NUM}, > {CMD_LINE_OPT_REASSEMBLE, 1, 0, CMD_LINE_OPT_REASSEMBLE_NUM}, > @@ -177,6 +196,7 @@ static int32_t numa_on = 1; /**< NUMA is enabled by default. */ > static uint32_t nb_lcores; > static uint32_t single_sa; > static uint32_t single_sa_idx; > +static uint32_t schedule_type; > > /* > * RX/TX HW offload capabilities to enable/use on ethernet ports. 
> @@ -1185,7 +1205,7 @@ main_loop(__attribute__((unused)) void *dummy) > } > > static int32_t > -check_params(void) > +check_params(struct eh_conf *eh_conf) > { > uint8_t lcore; > uint16_t portid; > @@ -1220,6 +1240,14 @@ check_params(void) > return -1; > } > } > + > + if (eh_conf->mode == EH_PKT_TRANSFER_MODE_POLL) { > + if (schedule_type) { > + printf("error: option --schedule-type applies only to event mode\n"); > + return -1; > + } > + } As a nit - might be better to keep check_params() intact, and put this new check into a separate function - check_eh_conf() or so? Another thing: it seems a clumsy construction to have a global var (schedule_type) just to figure out whether a particular option was present on the command line or not. Probably a simpler way to avoid it - initially set em_conf->ext_params.sched_type to some invalid value (-1 or so). Then after parsing the args you can check whether its value changed or not. As an alternative thought: wouldn't it be better to unite both --transfer-mode and --schedule-type options into one? Then the possible values for this united option would be: "poll", "event" (expands to "event-ordered"), "event-ordered", "event-atomic", "event-parallel". And the situation you are checking above simply wouldn't be possible. Again, probably easier/simpler for users.
> + > return 0; > } > > @@ -1277,6 +1305,8 @@ print_usage(const char *prgname) > " --config (port,queue,lcore)[,(port,queue,lcore)]" > " [--single-sa SAIDX]" > " [--cryptodev_mask MASK]" > + " [--transfer-mode MODE]" > + " [--schedule-type TYPE]" > " [--" CMD_LINE_OPT_RX_OFFLOAD " RX_OFFLOAD_MASK]" > " [--" CMD_LINE_OPT_TX_OFFLOAD " TX_OFFLOAD_MASK]" > " [--" CMD_LINE_OPT_REASSEMBLE " REASSEMBLE_TABLE_SIZE]" > @@ -1298,6 +1328,14 @@ print_usage(const char *prgname) > " bypassing the SP\n" > " --cryptodev_mask MASK: Hexadecimal bitmask of the crypto\n" > " devices to configure\n" > + " --transfer-mode MODE\n" > + " \"poll\" : Packet transfer via polling (default)\n" > + " \"event\" : Packet transfer via event device\n" > + " --schedule-type TYPE queue schedule type, used only when\n" > + " transfer mode is set to event\n" > + " \"ordered\" : Ordered (default)\n" > + " \"atomic\" : Atomic\n" > + " \"parallel\" : Parallel\n" > " --" CMD_LINE_OPT_RX_OFFLOAD > ": bitmask of the RX HW offload capabilities to enable/use\n" > " (DEV_RX_OFFLOAD_*)\n" > @@ -1432,8 +1470,45 @@ print_app_sa_prm(const struct app_sa_prm *prm) > printf("Frag TTL: %" PRIu64 " ns\n", frag_ttl_ns); > } > > +static int > +parse_transfer_mode(struct eh_conf *conf, const char *optarg) > +{ > + if (!strcmp(CMD_LINE_ARG_POLL, optarg)) > + conf->mode = EH_PKT_TRANSFER_MODE_POLL; > + else if (!strcmp(CMD_LINE_ARG_EVENT, optarg)) > + conf->mode = EH_PKT_TRANSFER_MODE_EVENT; > + else { > + printf("Unsupported packet transfer mode\n"); > + return -EINVAL; > + } > + > + return 0; > +} > + > +static int > +parse_schedule_type(struct eh_conf *conf, const char *optarg) > +{ > + struct eventmode_conf *em_conf = NULL; > + > + /* Get eventmode conf */ > + em_conf = conf->mode_params; > + > + if (!strcmp(CMD_LINE_ARG_ORDERED, optarg)) > + em_conf->ext_params.sched_type = RTE_SCHED_TYPE_ORDERED; > + else if (!strcmp(CMD_LINE_ARG_ATOMIC, optarg)) > + em_conf->ext_params.sched_type = RTE_SCHED_TYPE_ATOMIC; > + else if 
(!strcmp(CMD_LINE_ARG_PARALLEL, optarg)) > + em_conf->ext_params.sched_type = RTE_SCHED_TYPE_PARALLEL; > + else { > + printf("Unsupported queue schedule type\n"); > + return -EINVAL; > + } > + > + return 0; > +} > + > static int32_t > -parse_args(int32_t argc, char **argv) > +parse_args(int32_t argc, char **argv, struct eh_conf *eh_conf) > { > int opt; > int64_t ret; > @@ -1522,6 +1597,7 @@ parse_args(int32_t argc, char **argv) > /* else */ > single_sa = 1; > single_sa_idx = ret; > + eh_conf->ipsec_mode = EH_IPSEC_MODE_TYPE_DRIVER; > printf("Configured with single SA index %u\n", > single_sa_idx); > break; > @@ -1536,6 +1612,26 @@ parse_args(int32_t argc, char **argv) > /* else */ > enabled_cryptodev_mask = ret; > break; > + > + case CMD_LINE_OPT_TRANSFER_MODE_NUM: > + ret = parse_transfer_mode(eh_conf, optarg); > + if (ret < 0) { > + printf("Invalid packet transfer mode\n"); > + print_usage(prgname); > + return -1; > + } > + break; > + > + case CMD_LINE_OPT_SCHEDULE_TYPE_NUM: > + ret = parse_schedule_type(eh_conf, optarg); > + if (ret < 0) { > + printf("Invalid queue schedule type\n"); > + print_usage(prgname); > + return -1; > + } > + schedule_type = 1; > + break; > + > case CMD_LINE_OPT_RX_OFFLOAD_NUM: > ret = parse_mask(optarg, &dev_rx_offload); > if (ret != 0) { > @@ -2450,16 +2546,176 @@ create_default_ipsec_flow(uint16_t port_id, uint64_t rx_offloads) > port_id); > } > Wouldn't it be more natural to have these 2 functions below (eh_conf_init(), eh_conf_uninit()) defined inside event_helper.c? 
> +static struct eh_conf * > +eh_conf_init(void) > +{ > + struct eventmode_conf *em_conf = NULL; > + struct eh_conf *conf = NULL; > + unsigned int eth_core_id; > + uint32_t nb_bytes; > + void *mem = NULL; > + > + /* Allocate memory for config */ > + conf = calloc(1, sizeof(struct eh_conf)); > + if (conf == NULL) { > + printf("Failed to allocate memory for eventmode helper conf"); > + goto err; > + } > + > + /* Set default conf */ > + > + /* Packet transfer mode: poll */ > + conf->mode = EH_PKT_TRANSFER_MODE_POLL; > + conf->ipsec_mode = EH_IPSEC_MODE_TYPE_APP; > + > + /* Keep all ethernet ports enabled by default */ > + conf->eth_portmask = -1; > + > + /* Allocate memory for event mode params */ > + conf->mode_params = calloc(1, sizeof(struct eventmode_conf)); > + if (conf->mode_params == NULL) { > + printf("Failed to allocate memory for event mode params"); > + goto err; > + } > + > + /* Get eventmode conf */ > + em_conf = conf->mode_params; > + > + /* Allocate and initialize bitmap for eth cores */ > + nb_bytes = rte_bitmap_get_memory_footprint(RTE_MAX_LCORE); > + if (!nb_bytes) { > + printf("Failed to get bitmap footprint"); > + goto err; > + } > + > + mem = rte_zmalloc("event-helper-ethcore-bitmap", nb_bytes, > + RTE_CACHE_LINE_SIZE); > + if (!mem) { > + printf("Failed to allocate memory for eth cores bitmap\n"); > + goto err; > + } > + > + em_conf->eth_core_mask = rte_bitmap_init(RTE_MAX_LCORE, mem, nb_bytes); > + if (!em_conf->eth_core_mask) { > + printf("Failed to initialize bitmap"); > + goto err; > + } > + > + /* Schedule type: ordered */ > + em_conf->ext_params.sched_type = RTE_SCHED_TYPE_ORDERED; > + > + /* Set two cores as eth cores for Rx & Tx */ > + > + /* Use first core other than master core as Rx core */ > + eth_core_id = rte_get_next_lcore(0, /* curr core */ > + 1, /* skip master core */ > + 0 /* wrap */); > + > + rte_bitmap_set(em_conf->eth_core_mask, eth_core_id); > + > + /* Use next core as Tx core */ > + eth_core_id = 
rte_get_next_lcore(eth_core_id, /* curr core */ > + 1, /* skip master core */ > + 0 /* wrap */); > + > + rte_bitmap_set(em_conf->eth_core_mask, eth_core_id); > + > + return conf; > +err: > + rte_free(mem); > + free(em_conf); > + free(conf); > + return NULL; > +} > + > +static void > +eh_conf_uninit(struct eh_conf *conf) > +{ > + struct eventmode_conf *em_conf = NULL; > + > + /* Get eventmode conf */ > + em_conf = conf->mode_params; > + > + /* Free evenmode configuration memory */ > + rte_free(em_conf->eth_core_mask); > + free(em_conf); > + free(conf); > +} > + > +static void > +signal_handler(int signum) > +{ > + if (signum == SIGINT || signum == SIGTERM) { > + printf("\n\nSignal %d received, preparing to exit...\n", > + signum); > + force_quit = true; > + } > +} > + > +static void > +inline_sessions_free(struct sa_ctx *sa_ctx) > +{ > + struct rte_ipsec_session *ips; > + struct ipsec_sa *sa; > + int32_t i, ret; > + > + for (i = 0; i < IPSEC_SA_MAX_ENTRIES; i++) { > + > + sa = &sa_ctx->sa[i]; > + if (!sa->spi) > + continue; > + > + ips = ipsec_get_primary_session(sa); > + if (ips->type != RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL && > + ips->type != RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO) > + continue; > + > + ret = rte_security_session_destroy( > + rte_eth_dev_get_sec_ctx(sa->portid), > + ips->security.ses); > + if (ret) > + RTE_LOG(ERR, IPSEC, "Failed to destroy security " > + "session type %d, spi %d\n", > + ips->type, sa->spi); > + } > +} > + > +static void > +ev_mode_sess_verify(struct sa_ctx *sa_ctx) > +{ > + struct rte_ipsec_session *ips; > + struct ipsec_sa *sa; > + int32_t i; > + > + if (!sa_ctx) > + return; > + > + for (i = 0; i < IPSEC_SA_MAX_ENTRIES; i++) { > + > + sa = &sa_ctx->sa[i]; > + if (!sa->spi) > + continue; > + > + ips = ipsec_get_primary_session(sa); > + if (ips->type != RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL) > + rte_exit(EXIT_FAILURE, "Event mode supports only " > + "inline protocol sessions\n"); As I understand at that moment inline 
sessions already created on devices? For consistency wouldn't it be better to do this check at parsing cfg file, or straight after it? > + } > + > +} > + > int32_t > main(int32_t argc, char **argv) > { > int32_t ret; > uint32_t lcore_id; > + uint32_t cdev_id; > uint32_t i; > uint8_t socket_id; > uint16_t portid; > uint64_t req_rx_offloads[RTE_MAX_ETHPORTS]; > uint64_t req_tx_offloads[RTE_MAX_ETHPORTS]; > + struct eh_conf *eh_conf = NULL; > size_t sess_sz; > > /* init EAL */ > @@ -2469,8 +2725,17 @@ main(int32_t argc, char **argv) > argc -= ret; > argv += ret; > > + force_quit = false; > + signal(SIGINT, signal_handler); > + signal(SIGTERM, signal_handler); > + > + /* initialize event helper configuration */ > + eh_conf = eh_conf_init(); > + if (eh_conf == NULL) > + rte_exit(EXIT_FAILURE, "Failed to init event helper config"); > + > /* parse application arguments (after the EAL ones) */ > - ret = parse_args(argc, argv); > + ret = parse_args(argc, argv, eh_conf); > if (ret < 0) > rte_exit(EXIT_FAILURE, "Invalid parameters\n"); > > @@ -2487,7 +2752,7 @@ main(int32_t argc, char **argv) > rte_exit(EXIT_FAILURE, "Invalid unprotected portmask 0x%x\n", > unprotected_port_mask); > > - if (check_params() < 0) > + if (check_params(eh_conf) < 0) > rte_exit(EXIT_FAILURE, "check_params failed\n"); > > ret = init_lcore_rx_queues(); > @@ -2529,6 +2794,18 @@ main(int32_t argc, char **argv) > > cryptodevs_init(); > > + /* > + * Set the enabled port mask in helper config for use by helper > + * sub-system. This will be used while initializing devices using > + * helper sub-system. 
> + */ > + eh_conf->eth_portmask = enabled_port_mask; > + > + /* Initialize eventmode components */ > + ret = eh_devs_init(eh_conf); > + if (ret < 0) > + rte_exit(EXIT_FAILURE, "eh_devs_init failed, err=%d\n", ret); > + > /* start ports */ > RTE_ETH_FOREACH_DEV(portid) { > if ((enabled_port_mask & (1 << portid)) == 0) > @@ -2576,6 +2853,18 @@ main(int32_t argc, char **argv) > sp4_init(&socket_ctx[socket_id], socket_id); > sp6_init(&socket_ctx[socket_id], socket_id); > rt_init(&socket_ctx[socket_id], socket_id); > + > + /* > + * Event mode currently supports only inline protocol > + * sessions. If there are other types of sessions > + * configured then exit with error. > + */ > + if (eh_conf->mode == EH_PKT_TRANSFER_MODE_EVENT) { > + ev_mode_sess_verify( > + socket_ctx[socket_id].sa_in); > + ev_mode_sess_verify( > + socket_ctx[socket_id].sa_out); > + } > } > } > > @@ -2583,10 +2872,54 @@ main(int32_t argc, char **argv) > > /* launch per-lcore init on every lcore */ > rte_eal_mp_remote_launch(main_loop, NULL, CALL_MASTER); > + > RTE_LCORE_FOREACH_SLAVE(lcore_id) { > if (rte_eal_wait_lcore(lcore_id) < 0) > return -1; > } > > + /* Uninitialize eventmode components */ > + ret = eh_devs_uninit(eh_conf); > + if (ret < 0) > + rte_exit(EXIT_FAILURE, "eh_devs_uninit failed, err=%d\n", ret); > + > + /* Free eventmode configuration memory */ > + eh_conf_uninit(eh_conf); > + > + /* Destroy inline inbound and outbound sessions */ > + for (i = 0; i < NB_SOCKETS && i < rte_socket_count(); i++) { > + socket_id = rte_socket_id_by_idx(i); > + inline_sessions_free(socket_ctx[socket_id].sa_in); That causes a crash on a 2-socket system with a config that uses lcores only from the first socket.
> + inline_sessions_free(socket_ctx[socket_id].sa_out); > + } > + > + for (cdev_id = 0; cdev_id < rte_cryptodev_count(); cdev_id++) { > + printf("Closing cryptodev %d...", cdev_id); > + rte_cryptodev_stop(cdev_id); > + rte_cryptodev_close(cdev_id); > + printf(" Done\n"); > + } > + > + RTE_ETH_FOREACH_DEV(portid) { > + if ((enabled_port_mask & (1 << portid)) == 0) > + continue; > + > + printf("Closing port %d...", portid); > + if (flow_info_tbl[portid].rx_def_flow) { > + struct rte_flow_error err; > + > + ret = rte_flow_destroy(portid, > + flow_info_tbl[portid].rx_def_flow, &err); > + if (ret) > + RTE_LOG(ERR, IPSEC, "Failed to destroy flow " > + " for port %u, err msg: %s\n", portid, > + err.message); > + } > + rte_eth_dev_stop(portid); > + rte_eth_dev_close(portid); > + printf(" Done\n"); > + } > + printf("Bye...\n"); > + > return 0; > } > diff --git a/examples/ipsec-secgw/ipsec.h b/examples/ipsec-secgw/ipsec.h > index 28ff07d..0539aec 100644 > --- a/examples/ipsec-secgw/ipsec.h > +++ b/examples/ipsec-secgw/ipsec.h > @@ -153,6 +153,17 @@ struct ipsec_sa { > struct rte_security_session_conf sess_conf; > } __rte_cache_aligned; > > +struct sa_ctx { > + void *satbl; /* pointer to array of rte_ipsec_sa objects*/ > + struct ipsec_sa sa[IPSEC_SA_MAX_ENTRIES]; > + union { > + struct { > + struct rte_crypto_sym_xform a; > + struct rte_crypto_sym_xform b; > + }; > + } xf[IPSEC_SA_MAX_ENTRIES]; > +}; > + > struct ipsec_mbuf_metadata { > struct ipsec_sa *sa; > struct rte_crypto_op cop; > diff --git a/examples/ipsec-secgw/sa.c b/examples/ipsec-secgw/sa.c > index c75a5a1..2ec3e17 100644 > --- a/examples/ipsec-secgw/sa.c > +++ b/examples/ipsec-secgw/sa.c > @@ -781,17 +781,6 @@ print_one_sa_rule(const struct ipsec_sa *sa, int inbound) > printf("\n"); > } > > -struct sa_ctx { > - void *satbl; /* pointer to array of rte_ipsec_sa objects*/ > - struct ipsec_sa sa[IPSEC_SA_MAX_ENTRIES]; > - union { > - struct { > - struct rte_crypto_sym_xform a; > - struct rte_crypto_sym_xform b; > - 
}; > - } xf[IPSEC_SA_MAX_ENTRIES]; > -}; > - > static struct sa_ctx * > sa_create(const char *name, int32_t socket_id) > { > -- > 2.7.4 ^ permalink raw reply [flat|nested] 147+ messages in thread
* Re: [dpdk-dev] [EXT] RE: [PATCH v2 09/12] examples/ipsec-secgw: add eventmode to ipsec-secgw 2020-01-29 23:31 ` Ananyev, Konstantin @ 2020-01-30 11:04 ` Lukas Bartosik 2020-01-30 11:13 ` Ananyev, Konstantin 0 siblings, 1 reply; 147+ messages in thread From: Lukas Bartosik @ 2020-01-30 11:04 UTC (permalink / raw) To: Ananyev, Konstantin, Anoob Joseph, Akhil Goyal, Nicolau, Radu, Thomas Monjalon Cc: Jerin Jacob Kollanukkaran, Narayana Prasad Raju Athreya, Ankur Dwivedi, Archana Muniganti, Tejasree Kondoj, Vamsi Krishna Attunuru, dev Hi Konstantin, Please see inline. Thanks, Lukasz On 30.01.2020 00:31, Ananyev, Konstantin wrote: > External Email > > ---------------------------------------------------------------------- >> Add eventmode support to ipsec-secgw. With the aid of event helper >> configure and use the eventmode capabilities. >> >> Signed-off-by: Anoob Joseph <anoobj@marvell.com> >> Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> >> --- >> examples/ipsec-secgw/event_helper.c | 4 +- >> examples/ipsec-secgw/event_helper.h | 14 ++ >> examples/ipsec-secgw/ipsec-secgw.c | 341 +++++++++++++++++++++++++++++++++++- >> examples/ipsec-secgw/ipsec.h | 11 ++ >> examples/ipsec-secgw/sa.c | 11 -- >> 5 files changed, 365 insertions(+), 16 deletions(-) >> >> diff --git a/examples/ipsec-secgw/event_helper.c b/examples/ipsec-secgw/event_helper.c >> index 9719ab4..54a98c9 100644 >> --- a/examples/ipsec-secgw/event_helper.c >> +++ b/examples/ipsec-secgw/event_helper.c >> @@ -966,6 +966,8 @@ eh_find_worker(uint32_t lcore_id, struct eh_conf *conf, >> else >> curr_conf.cap.burst = EH_RX_TYPE_NON_BURST; >> >> + curr_conf.cap.ipsec_mode = conf->ipsec_mode; >> + >> /* Parse the passed list and see if we have matching capabilities */ >> >> /* Initialize the pointer used to traverse the list */ >> @@ -1625,7 +1627,7 @@ eh_launch_worker(struct eh_conf *conf, struct eh_app_worker_params *app_wrkr, >> } >> >> /* Get eventmode conf */ >> - em_conf = (struct eventmode_conf 
*)(conf->mode_params); >> + em_conf = conf->mode_params; >> >> /* Get core ID */ >> lcore_id = rte_lcore_id(); >> diff --git a/examples/ipsec-secgw/event_helper.h b/examples/ipsec-secgw/event_helper.h >> index 15a7bd6..cf5d346 100644 >> --- a/examples/ipsec-secgw/event_helper.h >> +++ b/examples/ipsec-secgw/event_helper.h >> @@ -74,6 +74,14 @@ enum eh_tx_types { >> EH_TX_TYPE_NO_INTERNAL_PORT >> }; >> >> +/** >> + * Event mode ipsec mode types >> + */ >> +enum eh_ipsec_mode_types { >> + EH_IPSEC_MODE_TYPE_APP = 0, >> + EH_IPSEC_MODE_TYPE_DRIVER >> +}; >> + >> /* Event dev params */ >> struct eventdev_params { >> uint8_t eventdev_id; >> @@ -183,6 +191,10 @@ struct eh_conf { >> */ >> void *mode_params; >> /**< Mode specific parameters */ >> + >> + /** Application specific params */ >> + enum eh_ipsec_mode_types ipsec_mode; >> + /**< Mode of ipsec run */ >> }; >> >> /* Workers registered by the application */ >> @@ -194,6 +206,8 @@ struct eh_app_worker_params { >> /**< Specify status of rx type burst */ >> uint64_t tx_internal_port : 1; >> /**< Specify whether tx internal port is available */ >> + uint64_t ipsec_mode : 1; >> + /**< Specify ipsec processing level */ >> }; >> uint64_t u64; >> } cap; >> diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c >> index d5e8fe5..f1cc3fb 100644 >> --- a/examples/ipsec-secgw/ipsec-secgw.c >> +++ b/examples/ipsec-secgw/ipsec-secgw.c >> @@ -2,6 +2,7 @@ >> * Copyright(c) 2016 Intel Corporation >> */ >> >> +#include <stdbool.h> >> #include <stdio.h> >> #include <stdlib.h> >> #include <stdint.h> >> @@ -14,6 +15,7 @@ >> #include <sys/queue.h> >> #include <stdarg.h> >> #include <errno.h> >> +#include <signal.h> >> #include <getopt.h> >> >> #include <rte_common.h> >> @@ -41,12 +43,17 @@ >> #include <rte_jhash.h> >> #include <rte_cryptodev.h> >> #include <rte_security.h> >> +#include <rte_bitmap.h> >> +#include <rte_eventdev.h> >> #include <rte_ip.h> >> #include <rte_ip_frag.h> >> >> +#include 
"event_helper.h" >> #include "ipsec.h" >> #include "parser.h" >> >> +volatile bool force_quit; >> + >> #define RTE_LOGTYPE_IPSEC RTE_LOGTYPE_USER1 >> >> #define MAX_JUMBO_PKT_LEN 9600 >> @@ -133,12 +140,20 @@ struct flow_info flow_info_tbl[RTE_MAX_ETHPORTS]; >> #define CMD_LINE_OPT_CONFIG "config" >> #define CMD_LINE_OPT_SINGLE_SA "single-sa" >> #define CMD_LINE_OPT_CRYPTODEV_MASK "cryptodev_mask" >> +#define CMD_LINE_OPT_TRANSFER_MODE "transfer-mode" >> +#define CMD_LINE_OPT_SCHEDULE_TYPE "schedule-type" >> #define CMD_LINE_OPT_RX_OFFLOAD "rxoffload" >> #define CMD_LINE_OPT_TX_OFFLOAD "txoffload" >> #define CMD_LINE_OPT_REASSEMBLE "reassemble" >> #define CMD_LINE_OPT_MTU "mtu" >> #define CMD_LINE_OPT_FRAG_TTL "frag-ttl" >> >> +#define CMD_LINE_ARG_EVENT "event" >> +#define CMD_LINE_ARG_POLL "poll" >> +#define CMD_LINE_ARG_ORDERED "ordered" >> +#define CMD_LINE_ARG_ATOMIC "atomic" >> +#define CMD_LINE_ARG_PARALLEL "parallel" >> + >> enum { >> /* long options mapped to a short option */ >> >> @@ -149,6 +164,8 @@ enum { >> CMD_LINE_OPT_CONFIG_NUM, >> CMD_LINE_OPT_SINGLE_SA_NUM, >> CMD_LINE_OPT_CRYPTODEV_MASK_NUM, >> + CMD_LINE_OPT_TRANSFER_MODE_NUM, >> + CMD_LINE_OPT_SCHEDULE_TYPE_NUM, >> CMD_LINE_OPT_RX_OFFLOAD_NUM, >> CMD_LINE_OPT_TX_OFFLOAD_NUM, >> CMD_LINE_OPT_REASSEMBLE_NUM, >> @@ -160,6 +177,8 @@ static const struct option lgopts[] = { >> {CMD_LINE_OPT_CONFIG, 1, 0, CMD_LINE_OPT_CONFIG_NUM}, >> {CMD_LINE_OPT_SINGLE_SA, 1, 0, CMD_LINE_OPT_SINGLE_SA_NUM}, >> {CMD_LINE_OPT_CRYPTODEV_MASK, 1, 0, CMD_LINE_OPT_CRYPTODEV_MASK_NUM}, >> + {CMD_LINE_OPT_TRANSFER_MODE, 1, 0, CMD_LINE_OPT_TRANSFER_MODE_NUM}, >> + {CMD_LINE_OPT_SCHEDULE_TYPE, 1, 0, CMD_LINE_OPT_SCHEDULE_TYPE_NUM}, >> {CMD_LINE_OPT_RX_OFFLOAD, 1, 0, CMD_LINE_OPT_RX_OFFLOAD_NUM}, >> {CMD_LINE_OPT_TX_OFFLOAD, 1, 0, CMD_LINE_OPT_TX_OFFLOAD_NUM}, >> {CMD_LINE_OPT_REASSEMBLE, 1, 0, CMD_LINE_OPT_REASSEMBLE_NUM}, >> @@ -177,6 +196,7 @@ static int32_t numa_on = 1; /**< NUMA is enabled by default. 
*/ >> static uint32_t nb_lcores; >> static uint32_t single_sa; >> static uint32_t single_sa_idx; >> +static uint32_t schedule_type; >> >> /* >> * RX/TX HW offload capabilities to enable/use on ethernet ports. >> @@ -1185,7 +1205,7 @@ main_loop(__attribute__((unused)) void *dummy) >> } >> >> static int32_t >> -check_params(void) >> +check_params(struct eh_conf *eh_conf) >> { >> uint8_t lcore; >> uint16_t portid; >> @@ -1220,6 +1240,14 @@ check_params(void) >> return -1; >> } >> } >> + >> + if (eh_conf->mode == EH_PKT_TRANSFER_MODE_POLL) { >> + if (schedule_type) { >> + printf("error: option --schedule-type applies only to event mode\n"); >> + return -1; >> + } >> + } > > As a nit - might be better to keep check_params() intact, > and put this new check above into a separate function? > check_eh_conf() or so? [Lukasz] I will put the check into a new check_eh_conf() function. > Another thing - it seems a bit clumsy to have a global var (schedule_type) > just to figure out whether a particular option was present on the command line or not. > Probably a simpler way to avoid it - set em_conf->ext_params.sched_type initially to > some invalid value (-1 or so). Then after parsing args you can check whether its value > changed or not. [Lukasz] I will change it in V3. > As an alternative thought: wouldn't it be better to unite both --transfer-mode > and --schedule-type options into one? > Then possible values for this united option would be: > "poll" > "event" (expands to "event-ordered") > "event-ordered" > "event-atomic" > "event-parallel" > And the situation you are checking above simply wouldn't be possible. > Again, probably easier/simpler for users. 
[Lukasz] I would rather not combine event mode parameters into one, for two reasons: - to be consistent with poll mode, where one configuration item is controlled with one option, - if we come up with a need to add a new event mode parameter in the future, then we will need to split event-ordered back into --transfer-mode and --schedule-type to be consistent with how we provide event mode command line options. > >> + >> return 0; >> } >> >> @@ -1277,6 +1305,8 @@ print_usage(const char *prgname) >> " --config (port,queue,lcore)[,(port,queue,lcore)]" >> " [--single-sa SAIDX]" >> " [--cryptodev_mask MASK]" >> + " [--transfer-mode MODE]" >> + " [--schedule-type TYPE]" >> " [--" CMD_LINE_OPT_RX_OFFLOAD " RX_OFFLOAD_MASK]" >> " [--" CMD_LINE_OPT_TX_OFFLOAD " TX_OFFLOAD_MASK]" >> " [--" CMD_LINE_OPT_REASSEMBLE " REASSEMBLE_TABLE_SIZE]" >> @@ -1298,6 +1328,14 @@ print_usage(const char *prgname) >> " bypassing the SP\n" >> " --cryptodev_mask MASK: Hexadecimal bitmask of the crypto\n" >> " devices to configure\n" >> + " --transfer-mode MODE\n" >> + " \"poll\" : Packet transfer via polling (default)\n" >> + " \"event\" : Packet transfer via event device\n" >> + " --schedule-type TYPE queue schedule type, used only when\n" >> + " transfer mode is set to event\n" >> + " \"ordered\" : Ordered (default)\n" >> + " \"atomic\" : Atomic\n" >> + " \"parallel\" : Parallel\n" >> " --" CMD_LINE_OPT_RX_OFFLOAD >> ": bitmask of the RX HW offload capabilities to enable/use\n" >> " (DEV_RX_OFFLOAD_*)\n" >> @@ -1432,8 +1470,45 @@ print_app_sa_prm(const struct app_sa_prm *prm) >> printf("Frag TTL: %" PRIu64 " ns\n", frag_ttl_ns); >> } >> >> +static int >> +parse_transfer_mode(struct eh_conf *conf, const char *optarg) >> +{ >> + if (!strcmp(CMD_LINE_ARG_POLL, optarg)) >> + conf->mode = EH_PKT_TRANSFER_MODE_POLL; >> + else if (!strcmp(CMD_LINE_ARG_EVENT, optarg)) >> + conf->mode = EH_PKT_TRANSFER_MODE_EVENT; >> + else { >> + printf("Unsupported packet transfer mode\n"); >> + return -EINVAL; >> + } >> + >> 
+ return 0; >> +} >> + >> +static int >> +parse_schedule_type(struct eh_conf *conf, const char *optarg) >> +{ >> + struct eventmode_conf *em_conf = NULL; >> + >> + /* Get eventmode conf */ >> + em_conf = conf->mode_params; >> + >> + if (!strcmp(CMD_LINE_ARG_ORDERED, optarg)) >> + em_conf->ext_params.sched_type = RTE_SCHED_TYPE_ORDERED; >> + else if (!strcmp(CMD_LINE_ARG_ATOMIC, optarg)) >> + em_conf->ext_params.sched_type = RTE_SCHED_TYPE_ATOMIC; >> + else if (!strcmp(CMD_LINE_ARG_PARALLEL, optarg)) >> + em_conf->ext_params.sched_type = RTE_SCHED_TYPE_PARALLEL; >> + else { >> + printf("Unsupported queue schedule type\n"); >> + return -EINVAL; >> + } >> + >> + return 0; >> +} >> + >> static int32_t >> -parse_args(int32_t argc, char **argv) >> +parse_args(int32_t argc, char **argv, struct eh_conf *eh_conf) >> { >> int opt; >> int64_t ret; >> @@ -1522,6 +1597,7 @@ parse_args(int32_t argc, char **argv) >> /* else */ >> single_sa = 1; >> single_sa_idx = ret; >> + eh_conf->ipsec_mode = EH_IPSEC_MODE_TYPE_DRIVER; >> printf("Configured with single SA index %u\n", >> single_sa_idx); >> break; >> @@ -1536,6 +1612,26 @@ parse_args(int32_t argc, char **argv) >> /* else */ >> enabled_cryptodev_mask = ret; >> break; >> + >> + case CMD_LINE_OPT_TRANSFER_MODE_NUM: >> + ret = parse_transfer_mode(eh_conf, optarg); >> + if (ret < 0) { >> + printf("Invalid packet transfer mode\n"); >> + print_usage(prgname); >> + return -1; >> + } >> + break; >> + >> + case CMD_LINE_OPT_SCHEDULE_TYPE_NUM: >> + ret = parse_schedule_type(eh_conf, optarg); >> + if (ret < 0) { >> + printf("Invalid queue schedule type\n"); >> + print_usage(prgname); >> + return -1; >> + } >> + schedule_type = 1; >> + break; >> + >> case CMD_LINE_OPT_RX_OFFLOAD_NUM: >> ret = parse_mask(optarg, &dev_rx_offload); >> if (ret != 0) { >> @@ -2450,16 +2546,176 @@ create_default_ipsec_flow(uint16_t port_id, uint64_t rx_offloads) >> port_id); >> } >> > > Wouldn't it be more natural to have these 2 functions below > (eh_conf_init(), 
eh_conf_uninit()) defined inside event_helper.c? > [Lukasz] I will move these functions to event_helper.c. >> +static struct eh_conf * >> +eh_conf_init(void) >> +{ >> + struct eventmode_conf *em_conf = NULL; >> + struct eh_conf *conf = NULL; >> + unsigned int eth_core_id; >> + uint32_t nb_bytes; >> + void *mem = NULL; >> + >> + /* Allocate memory for config */ >> + conf = calloc(1, sizeof(struct eh_conf)); >> + if (conf == NULL) { >> + printf("Failed to allocate memory for eventmode helper conf"); >> + goto err; >> + } >> + >> + /* Set default conf */ >> + >> + /* Packet transfer mode: poll */ >> + conf->mode = EH_PKT_TRANSFER_MODE_POLL; >> + conf->ipsec_mode = EH_IPSEC_MODE_TYPE_APP; >> + >> + /* Keep all ethernet ports enabled by default */ >> + conf->eth_portmask = -1; >> + >> + /* Allocate memory for event mode params */ >> + conf->mode_params = calloc(1, sizeof(struct eventmode_conf)); >> + if (conf->mode_params == NULL) { >> + printf("Failed to allocate memory for event mode params"); >> + goto err; >> + } >> + >> + /* Get eventmode conf */ >> + em_conf = conf->mode_params; >> + >> + /* Allocate and initialize bitmap for eth cores */ >> + nb_bytes = rte_bitmap_get_memory_footprint(RTE_MAX_LCORE); >> + if (!nb_bytes) { >> + printf("Failed to get bitmap footprint"); >> + goto err; >> + } >> + >> + mem = rte_zmalloc("event-helper-ethcore-bitmap", nb_bytes, >> + RTE_CACHE_LINE_SIZE); >> + if (!mem) { >> + printf("Failed to allocate memory for eth cores bitmap\n"); >> + goto err; >> + } >> + >> + em_conf->eth_core_mask = rte_bitmap_init(RTE_MAX_LCORE, mem, nb_bytes); >> + if (!em_conf->eth_core_mask) { >> + printf("Failed to initialize bitmap"); >> + goto err; >> + } >> + >> + /* Schedule type: ordered */ >> + em_conf->ext_params.sched_type = RTE_SCHED_TYPE_ORDERED; >> + >> + /* Set two cores as eth cores for Rx & Tx */ >> + >> + /* Use first core other than master core as Rx core */ >> + eth_core_id = rte_get_next_lcore(0, /* curr core */ >> + 1, /* skip master 
core */ >> + 0 /* wrap */); >> + >> + rte_bitmap_set(em_conf->eth_core_mask, eth_core_id); >> + >> + /* Use next core as Tx core */ >> + eth_core_id = rte_get_next_lcore(eth_core_id, /* curr core */ >> + 1, /* skip master core */ >> + 0 /* wrap */); >> + >> + rte_bitmap_set(em_conf->eth_core_mask, eth_core_id); >> + >> + return conf; >> +err: >> + rte_free(mem); >> + free(em_conf); >> + free(conf); >> + return NULL; >> +} >> + >> +static void >> +eh_conf_uninit(struct eh_conf *conf) >> +{ >> + struct eventmode_conf *em_conf = NULL; >> + >> + /* Get eventmode conf */ >> + em_conf = conf->mode_params; >> + >> + /* Free evenmode configuration memory */ >> + rte_free(em_conf->eth_core_mask); >> + free(em_conf); >> + free(conf); >> +} >> + >> +static void >> +signal_handler(int signum) >> +{ >> + if (signum == SIGINT || signum == SIGTERM) { >> + printf("\n\nSignal %d received, preparing to exit...\n", >> + signum); >> + force_quit = true; >> + } >> +} >> + >> +static void >> +inline_sessions_free(struct sa_ctx *sa_ctx) >> +{ >> + struct rte_ipsec_session *ips; >> + struct ipsec_sa *sa; >> + int32_t i, ret; >> + >> + for (i = 0; i < IPSEC_SA_MAX_ENTRIES; i++) { >> + >> + sa = &sa_ctx->sa[i]; >> + if (!sa->spi) >> + continue; >> + >> + ips = ipsec_get_primary_session(sa); >> + if (ips->type != RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL && >> + ips->type != RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO) >> + continue; >> + >> + ret = rte_security_session_destroy( >> + rte_eth_dev_get_sec_ctx(sa->portid), >> + ips->security.ses); >> + if (ret) >> + RTE_LOG(ERR, IPSEC, "Failed to destroy security " >> + "session type %d, spi %d\n", >> + ips->type, sa->spi); >> + } >> +} >> + >> +static void >> +ev_mode_sess_verify(struct sa_ctx *sa_ctx) >> +{ >> + struct rte_ipsec_session *ips; >> + struct ipsec_sa *sa; >> + int32_t i; >> + >> + if (!sa_ctx) >> + return; >> + >> + for (i = 0; i < IPSEC_SA_MAX_ENTRIES; i++) { >> + >> + sa = &sa_ctx->sa[i]; >> + if (!sa->spi) >> + continue; >> + >> + 
ips = ipsec_get_primary_session(sa); >> + if (ips->type != RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL) >> + rte_exit(EXIT_FAILURE, "Event mode supports only " >> + "inline protocol sessions\n"); > > As I understand at that moment inline sessions already created on devices? > For consistency wouldn't it be better to do this check at parsing cfg file, > or straight after it? > [Lukasz] I will move this check to be done after parsing cfg file into check_eh_conf() function. >> + } >> + >> +} >> + >> int32_t >> main(int32_t argc, char **argv) >> { >> int32_t ret; >> uint32_t lcore_id; >> + uint32_t cdev_id; >> uint32_t i; >> uint8_t socket_id; >> uint16_t portid; >> uint64_t req_rx_offloads[RTE_MAX_ETHPORTS]; >> uint64_t req_tx_offloads[RTE_MAX_ETHPORTS]; >> + struct eh_conf *eh_conf = NULL; >> size_t sess_sz; >> >> /* init EAL */ >> @@ -2469,8 +2725,17 @@ main(int32_t argc, char **argv) >> argc -= ret; >> argv += ret; >> >> + force_quit = false; >> + signal(SIGINT, signal_handler); >> + signal(SIGTERM, signal_handler); >> + >> + /* initialize event helper configuration */ >> + eh_conf = eh_conf_init(); >> + if (eh_conf == NULL) >> + rte_exit(EXIT_FAILURE, "Failed to init event helper config"); >> + >> /* parse application arguments (after the EAL ones) */ >> - ret = parse_args(argc, argv); >> + ret = parse_args(argc, argv, eh_conf); >> if (ret < 0) >> rte_exit(EXIT_FAILURE, "Invalid parameters\n"); >> >> @@ -2487,7 +2752,7 @@ main(int32_t argc, char **argv) >> rte_exit(EXIT_FAILURE, "Invalid unprotected portmask 0x%x\n", >> unprotected_port_mask); >> >> - if (check_params() < 0) >> + if (check_params(eh_conf) < 0) >> rte_exit(EXIT_FAILURE, "check_params failed\n"); >> >> ret = init_lcore_rx_queues(); >> @@ -2529,6 +2794,18 @@ main(int32_t argc, char **argv) >> >> cryptodevs_init(); >> >> + /* >> + * Set the enabled port mask in helper config for use by helper >> + * sub-system. This will be used while initializing devices using >> + * helper sub-system. 
>> + */ >> + eh_conf->eth_portmask = enabled_port_mask; >> + >> + /* Initialize eventmode components */ >> + ret = eh_devs_init(eh_conf); >> + if (ret < 0) >> + rte_exit(EXIT_FAILURE, "eh_devs_init failed, err=%d\n", ret); >> + >> /* start ports */ >> RTE_ETH_FOREACH_DEV(portid) { >> if ((enabled_port_mask & (1 << portid)) == 0) >> @@ -2576,6 +2853,18 @@ main(int32_t argc, char **argv) >> sp4_init(&socket_ctx[socket_id], socket_id); >> sp6_init(&socket_ctx[socket_id], socket_id); >> rt_init(&socket_ctx[socket_id], socket_id); >> + >> + /* >> + * Event mode currently supports only inline protocol >> + * sessions. If there are other types of sessions >> + * configured then exit with error. >> + */ >> + if (eh_conf->mode == EH_PKT_TRANSFER_MODE_EVENT) { >> + ev_mode_sess_verify( >> + socket_ctx[socket_id].sa_in); >> + ev_mode_sess_verify( >> + socket_ctx[socket_id].sa_out); >> + } >> } >> } >> >> @@ -2583,10 +2872,54 @@ main(int32_t argc, char **argv) >> >> /* launch per-lcore init on every lcore */ >> rte_eal_mp_remote_launch(main_loop, NULL, CALL_MASTER); >> + >> RTE_LCORE_FOREACH_SLAVE(lcore_id) { >> if (rte_eal_wait_lcore(lcore_id) < 0) >> return -1; >> } >> >> + /* Uninitialize eventmode components */ >> + ret = eh_devs_uninit(eh_conf); >> + if (ret < 0) >> + rte_exit(EXIT_FAILURE, "eh_devs_uninit failed, err=%d\n", ret); >> + >> + /* Free eventmode configuration memory */ >> + eh_conf_uninit(eh_conf); >> + >> + /* Destroy inline inbound and outbound sessions */ >> + for (i = 0; i < NB_SOCKETS && i < rte_socket_count(); i++) { >> + socket_id = rte_socket_id_by_idx(i); >> + inline_sessions_free(socket_ctx[socket_id].sa_in); > > That causes a crash on 2 socket system with the config that uses > lcores only from the first socket. > [Lukasz] I will fix it in V3. 
Thanks >> + inline_sessions_free(socket_ctx[socket_id].sa_out); >> + } >> + >> + for (cdev_id = 0; cdev_id < rte_cryptodev_count(); cdev_id++) { >> + printf("Closing cryptodev %d...", cdev_id); >> + rte_cryptodev_stop(cdev_id); >> + rte_cryptodev_close(cdev_id); >> + printf(" Done\n"); >> + } >> + >> + RTE_ETH_FOREACH_DEV(portid) { >> + if ((enabled_port_mask & (1 << portid)) == 0) >> + continue; >> + >> + printf("Closing port %d...", portid); >> + if (flow_info_tbl[portid].rx_def_flow) { >> + struct rte_flow_error err; >> + >> + ret = rte_flow_destroy(portid, >> + flow_info_tbl[portid].rx_def_flow, &err); >> + if (ret) >> + RTE_LOG(ERR, IPSEC, "Failed to destroy flow " >> + " for port %u, err msg: %s\n", portid, >> + err.message); >> + } >> + rte_eth_dev_stop(portid); >> + rte_eth_dev_close(portid); >> + printf(" Done\n"); >> + } >> + printf("Bye...\n"); >> + >> return 0; >> } >> diff --git a/examples/ipsec-secgw/ipsec.h b/examples/ipsec-secgw/ipsec.h >> index 28ff07d..0539aec 100644 >> --- a/examples/ipsec-secgw/ipsec.h >> +++ b/examples/ipsec-secgw/ipsec.h >> @@ -153,6 +153,17 @@ struct ipsec_sa { >> struct rte_security_session_conf sess_conf; >> } __rte_cache_aligned; >> >> +struct sa_ctx { >> + void *satbl; /* pointer to array of rte_ipsec_sa objects*/ >> + struct ipsec_sa sa[IPSEC_SA_MAX_ENTRIES]; >> + union { >> + struct { >> + struct rte_crypto_sym_xform a; >> + struct rte_crypto_sym_xform b; >> + }; >> + } xf[IPSEC_SA_MAX_ENTRIES]; >> +}; >> + >> struct ipsec_mbuf_metadata { >> struct ipsec_sa *sa; >> struct rte_crypto_op cop; >> diff --git a/examples/ipsec-secgw/sa.c b/examples/ipsec-secgw/sa.c >> index c75a5a1..2ec3e17 100644 >> --- a/examples/ipsec-secgw/sa.c >> +++ b/examples/ipsec-secgw/sa.c >> @@ -781,17 +781,6 @@ print_one_sa_rule(const struct ipsec_sa *sa, int inbound) >> printf("\n"); >> } >> >> -struct sa_ctx { >> - void *satbl; /* pointer to array of rte_ipsec_sa objects*/ >> - struct ipsec_sa sa[IPSEC_SA_MAX_ENTRIES]; >> - union { >> - struct { 
>> - struct rte_crypto_sym_xform a; >> - struct rte_crypto_sym_xform b; >> - }; >> - } xf[IPSEC_SA_MAX_ENTRIES]; >> -}; >> - >> static struct sa_ctx * >> sa_create(const char *name, int32_t socket_id) >> { >> -- >> 2.7.4 > ^ permalink raw reply [flat|nested] 147+ messages in thread
* Re: [dpdk-dev] [EXT] RE: [PATCH v2 09/12] examples/ipsec-secgw: add eventmode to ipsec-secgw 2020-01-30 11:04 ` [dpdk-dev] [EXT] " Lukas Bartosik @ 2020-01-30 11:13 ` Ananyev, Konstantin 2020-01-30 22:21 ` Ananyev, Konstantin 0 siblings, 1 reply; 147+ messages in thread From: Ananyev, Konstantin @ 2020-01-30 11:13 UTC (permalink / raw) To: Lukas Bartosik, Anoob Joseph, Akhil Goyal, Nicolau, Radu, Thomas Monjalon Cc: Jerin Jacob Kollanukkaran, Narayana Prasad Raju Athreya, Ankur Dwivedi, Archana Muniganti, Tejasree Kondoj, Vamsi Krishna Attunuru, dev Hi Lukasz, > >> > >> /* > >> * RX/TX HW offload capabilities to enable/use on ethernet ports. > >> @@ -1185,7 +1205,7 @@ main_loop(__attribute__((unused)) void *dummy) > >> } > >> > >> static int32_t > >> -check_params(void) > >> +check_params(struct eh_conf *eh_conf) > >> { > >> uint8_t lcore; > >> uint16_t portid; > >> @@ -1220,6 +1240,14 @@ check_params(void) > >> return -1; > >> } > >> } > >> + > >> + if (eh_conf->mode == EH_PKT_TRANSFER_MODE_POLL) { > >> + if (schedule_type) { > >> + printf("error: option --schedule-type applies only to event mode\n"); > >> + return -1; > >> + } > >> + } > > > > As a nit - might be better to keep check_params() intact, > > and put this new check above into a separate function? > > check_eh_conf() or so? > > [Lukasz] I will put the check into new check_eh_conf() function. > > > Another thing it seems a bit clumsy construction to have global var (scheduler_type) > > just to figure out was particular option present on command line or not. > > Probably simler way to avoid it - set initially em_conf->ext_params.sched_type to > > some invalid value (-1 or so). Then after parse args you can check did its value > > change or not. > > [Lukasz] I will change it in V3. > > > As alternative thought: wouldn't it be better to unite both --transfer-mode > > and --schedule-type options into one? 
> > Then possible values for this unite option would be: > > "poll" > > "event" (expands to "event-ordered") > > "event-ordered" > > "event-atomic" > > "event-parallel" > > And this situation you are checking above simply wouldn't be possible. > > Again probably would be easier/simpler for users. > > [Lukasz] I would rather not combine event mode parameters into one for two reason: > - to be consistent with poll where one configuration item is controlled with one option, > - if we come up with a need to add a new event mode parameter in future then we > we will need to split event-ordered back to --transfer-mode and --schedule-type > to be consistent with how with provide event mode command line options. I thought for future mods we can just keep adding new types here: "event-xxx", "poll-yyy", etc. But if you think separate ones is a better approach - I am fine. Konstantin ^ permalink raw reply [flat|nested] 147+ messages in thread
* Re: [dpdk-dev] [EXT] RE: [PATCH v2 09/12] examples/ipsec-secgw: add eventmode to ipsec-secgw 2020-01-30 11:13 ` Ananyev, Konstantin @ 2020-01-30 22:21 ` Ananyev, Konstantin 2020-01-31 1:09 ` Lukas Bartosik 0 siblings, 1 reply; 147+ messages in thread From: Ananyev, Konstantin @ 2020-01-30 22:21 UTC (permalink / raw) To: Lukas Bartosik, Anoob Joseph, Akhil Goyal, Nicolau, Radu, Thomas Monjalon Cc: Jerin Jacob Kollanukkaran, Narayana Prasad Raju Athreya, Ankur Dwivedi, Archana Muniganti, Tejasree Kondoj, Vamsi Krishna Attunuru, dev > -----Original Message----- > From: Ananyev, Konstantin > Sent: Thursday, January 30, 2020 11:13 AM > To: Lukas Bartosik <lbartosik@marvell.com>; Anoob Joseph <anoobj@marvell.com>; Akhil Goyal <akhil.goyal@nxp.com>; Nicolau, Radu > <radu.nicolau@intel.com>; Thomas Monjalon <thomas@monjalon.net> > Cc: Jerin Jacob Kollanukkaran <jerinj@marvell.com>; Narayana Prasad Raju Athreya <pathreya@marvell.com>; Ankur Dwivedi > <adwivedi@marvell.com>; Archana Muniganti <marchana@marvell.com>; Tejasree Kondoj <ktejasree@marvell.com>; Vamsi Krishna > Attunuru <vattunuru@marvell.com>; dev@dpdk.org > Subject: RE: [EXT] RE: [PATCH v2 09/12] examples/ipsec-secgw: add eventmode to ipsec-secgw > > Hi Lukasz, > > > >> > > >> /* > > >> * RX/TX HW offload capabilities to enable/use on ethernet ports. 
> > >> @@ -1185,7 +1205,7 @@ main_loop(__attribute__((unused)) void *dummy) > > >> } > > >> > > >> static int32_t > > >> -check_params(void) > > >> +check_params(struct eh_conf *eh_conf) > > >> { > > >> uint8_t lcore; > > >> uint16_t portid; > > >> @@ -1220,6 +1240,14 @@ check_params(void) > > >> return -1; > > >> } > > >> } > > >> + > > >> + if (eh_conf->mode == EH_PKT_TRANSFER_MODE_POLL) { > > >> + if (schedule_type) { > > >> + printf("error: option --schedule-type applies only to event mode\n"); > > >> + return -1; > > >> + } > > >> + } > > > > > > As a nit - might be better to keep check_params() intact, > > > and put this new check above into a separate function? > > > check_eh_conf() or so? > > > > [Lukasz] I will put the check into new check_eh_conf() function. > > > > > Another thing it seems a bit clumsy construction to have global var (scheduler_type) > > > just to figure out was particular option present on command line or not. > > > Probably simler way to avoid it - set initially em_conf->ext_params.sched_type to > > > some invalid value (-1 or so). Then after parse args you can check did its value > > > change or not. > > > > [Lukasz] I will change it in V3. > > > > > As alternative thought: wouldn't it be better to unite both --transfer-mode > > > and --schedule-type options into one? > > > Then possible values for this unite option would be: > > > "poll" > > > "event" (expands to "event-ordered") > > > "event-ordered" > > > "event-atomic" > > > "event-parallel" > > > And this situation you are checking above simply wouldn't be possible. > > > Again probably would be easier/simpler for users. 
> > > > [Lukasz] I would rather not combine event mode parameters into one for two reason: > > - to be consistent with poll where one configuration item is controlled with one option, > > - if we come up with a need to add a new event mode parameter in future then we > > we will need to split event-ordered back to --transfer-mode and --schedule-type > > to be consistent with how with provide event mode command line options. > > I thought for future mods we can just keep adding new types here: > "event-xxx", "poll-yyy", etc. > But if you think separate ones is a better approach - I am fine. Probably one extra suggestion - would it make sense to change name for that option to have 'event' inside? '--event-scheduler' or so. Will probably make things a bit more clear. ^ permalink raw reply [flat|nested] 147+ messages in thread
* Re: [dpdk-dev] [EXT] RE: [PATCH v2 09/12] examples/ipsec-secgw: add eventmode to ipsec-secgw 2020-01-30 22:21 ` Ananyev, Konstantin @ 2020-01-31 1:09 ` Lukas Bartosik 2020-02-02 23:00 ` Lukas Bartosik 0 siblings, 1 reply; 147+ messages in thread From: Lukas Bartosik @ 2020-01-31 1:09 UTC (permalink / raw) To: Ananyev, Konstantin, Anoob Joseph, Akhil Goyal, Nicolau, Radu, Thomas Monjalon Cc: Jerin Jacob Kollanukkaran, Narayana Prasad Raju Athreya, Ankur Dwivedi, Archana Muniganti, Tejasree Kondoj, Vamsi Krishna Attunuru, dev Hi Konstantin, On 30.01.2020 23:21, Ananyev, Konstantin wrote: > > >> -----Original Message----- >> From: Ananyev, Konstantin >> Sent: Thursday, January 30, 2020 11:13 AM >> To: Lukas Bartosik <lbartosik@marvell.com>; Anoob Joseph <anoobj@marvell.com>; Akhil Goyal <akhil.goyal@nxp.com>; Nicolau, Radu >> <radu.nicolau@intel.com>; Thomas Monjalon <thomas@monjalon.net> >> Cc: Jerin Jacob Kollanukkaran <jerinj@marvell.com>; Narayana Prasad Raju Athreya <pathreya@marvell.com>; Ankur Dwivedi >> <adwivedi@marvell.com>; Archana Muniganti <marchana@marvell.com>; Tejasree Kondoj <ktejasree@marvell.com>; Vamsi Krishna >> Attunuru <vattunuru@marvell.com>; dev@dpdk.org >> Subject: RE: [EXT] RE: [PATCH v2 09/12] examples/ipsec-secgw: add eventmode to ipsec-secgw >> >> Hi Lukasz, >> >>>>> >>>>> /* >>>>> * RX/TX HW offload capabilities to enable/use on ethernet ports. 
>>>>> @@ -1185,7 +1205,7 @@ main_loop(__attribute__((unused)) void *dummy) >>>>> } >>>>> >>>>> static int32_t >>>>> -check_params(void) >>>>> +check_params(struct eh_conf *eh_conf) >>>>> { >>>>> uint8_t lcore; >>>>> uint16_t portid; >>>>> @@ -1220,6 +1240,14 @@ check_params(void) >>>>> return -1; >>>>> } >>>>> } >>>>> + >>>>> + if (eh_conf->mode == EH_PKT_TRANSFER_MODE_POLL) { >>>>> + if (schedule_type) { >>>>> + printf("error: option --schedule-type applies only to event mode\n"); >>>>> + return -1; >>>>> + } >>>>> + } >>>> >>>> As a nit - might be better to keep check_params() intact, >>>> and put this new check above into a separate function? >>>> check_eh_conf() or so? >>> >>> [Lukasz] I will put the check into new check_eh_conf() function. >>> >>>> Another thing it seems a bit clumsy construction to have global var (scheduler_type) >>>> just to figure out was particular option present on command line or not. >>>> Probably simler way to avoid it - set initially em_conf->ext_params.sched_type to >>>> some invalid value (-1 or so). Then after parse args you can check did its value >>>> change or not. >>> >>> [Lukasz] I will change it in V3. >>> >>>> As alternative thought: wouldn't it be better to unite both --transfer-mode >>>> and --schedule-type options into one? >>>> Then possible values for this unite option would be: >>>> "poll" >>>> "event" (expands to "event-ordered") >>>> "event-ordered" >>>> "event-atomic" >>>> "event-parallel" >>>> And this situation you are checking above simply wouldn't be possible. >>>> Again probably would be easier/simpler for users. 
>>> >>> [Lukasz] I would rather not combine event mode parameters into one for two reason: >>> - to be consistent with poll where one configuration item is controlled with one option, >>> - if we come up with a need to add a new event mode parameter in future then we >>> we will need to split event-ordered back to --transfer-mode and --schedule-type >>> to be consistent with how with provide event mode command line options. >> >> I thought for future mods we can just keep adding new types here: >> "event-xxx", "poll-yyy", etc. >> But if you think separate ones is a better approach - I am fine. > > Probably one extra suggestion - would it make sense to change name for > that option to have 'event' inside? > '--event-scheduler' or so. > Will probably make things a bit more clear. [Lukasz] I will rename option --schedule-type to --event-scheduler ^ permalink raw reply [flat|nested] 147+ messages in thread
* Re: [dpdk-dev] [EXT] RE: [PATCH v2 09/12] examples/ipsec-secgw: add eventmode to ipsec-secgw 2020-01-31 1:09 ` Lukas Bartosik @ 2020-02-02 23:00 ` Lukas Bartosik 2020-02-03 7:50 ` Ananyev, Konstantin 0 siblings, 1 reply; 147+ messages in thread From: Lukas Bartosik @ 2020-02-02 23:00 UTC (permalink / raw) To: Ananyev, Konstantin, Anoob Joseph, Akhil Goyal, Nicolau, Radu, Thomas Monjalon Cc: Jerin Jacob Kollanukkaran, Narayana Prasad Raju Athreya, Ankur Dwivedi, Archana Muniganti, Tejasree Kondoj, Vamsi Krishna Attunuru, dev Hi Konstantin, On 31.01.2020 02:09, Lukasz Bartosik wrote: > Hi Konstantin, > > On 30.01.2020 23:21, Ananyev, Konstantin wrote: >> >> >>> -----Original Message----- >>> From: Ananyev, Konstantin >>> Sent: Thursday, January 30, 2020 11:13 AM >>> To: Lukas Bartosik <lbartosik@marvell.com>; Anoob Joseph <anoobj@marvell.com>; Akhil Goyal <akhil.goyal@nxp.com>; Nicolau, Radu >>> <radu.nicolau@intel.com>; Thomas Monjalon <thomas@monjalon.net> >>> Cc: Jerin Jacob Kollanukkaran <jerinj@marvell.com>; Narayana Prasad Raju Athreya <pathreya@marvell.com>; Ankur Dwivedi >>> <adwivedi@marvell.com>; Archana Muniganti <marchana@marvell.com>; Tejasree Kondoj <ktejasree@marvell.com>; Vamsi Krishna >>> Attunuru <vattunuru@marvell.com>; dev@dpdk.org >>> Subject: RE: [EXT] RE: [PATCH v2 09/12] examples/ipsec-secgw: add eventmode to ipsec-secgw >>> >>> Hi Lukasz, >>> >>>>>> >>>>>> /* >>>>>> * RX/TX HW offload capabilities to enable/use on ethernet ports. 
>>>>>> @@ -1185,7 +1205,7 @@ main_loop(__attribute__((unused)) void *dummy) >>>>>> } >>>>>> >>>>>> static int32_t >>>>>> -check_params(void) >>>>>> +check_params(struct eh_conf *eh_conf) >>>>>> { >>>>>> uint8_t lcore; >>>>>> uint16_t portid; >>>>>> @@ -1220,6 +1240,14 @@ check_params(void) >>>>>> return -1; >>>>>> } >>>>>> } >>>>>> + >>>>>> + if (eh_conf->mode == EH_PKT_TRANSFER_MODE_POLL) { >>>>>> + if (schedule_type) { >>>>>> + printf("error: option --schedule-type applies only to event mode\n"); >>>>>> + return -1; >>>>>> + } >>>>>> + } >>>>> >>>>> As a nit - might be better to keep check_params() intact, >>>>> and put this new check above into a separate function? >>>>> check_eh_conf() or so? >>>> >>>> [Lukasz] I will put the check into new check_eh_conf() function. >>>> >>>>> Another thing it seems a bit clumsy construction to have global var (scheduler_type) >>>>> just to figure out was particular option present on command line or not. >>>>> Probably simler way to avoid it - set initially em_conf->ext_params.sched_type to >>>>> some invalid value (-1 or so). Then after parse args you can check did its value >>>>> change or not. >>>> >>>> [Lukasz] I will change it in V3. >>>> >>>>> As alternative thought: wouldn't it be better to unite both --transfer-mode >>>>> and --schedule-type options into one? >>>>> Then possible values for this unite option would be: >>>>> "poll" >>>>> "event" (expands to "event-ordered") >>>>> "event-ordered" >>>>> "event-atomic" >>>>> "event-parallel" >>>>> And this situation you are checking above simply wouldn't be possible. >>>>> Again probably would be easier/simpler for users. 
>>>> >>>> [Lukasz] I would rather not combine event mode parameters into one for two reasons: >>>> - to be consistent with poll where one configuration item is controlled with one option, >>>> - if we come up with a need to add a new event mode parameter in future then we >>>> will need to split event-ordered back to --transfer-mode and --schedule-type >>>> to be consistent with how we provide event mode command line options. >>> >>> I thought for future mods we can just keep adding new types here: >>> "event-xxx", "poll-yyy", etc. >>> But if you think separate ones is a better approach - I am fine. >> >> Probably one extra suggestion - would it make sense to change the name of >> that option to have 'event' inside? >> '--event-scheduler' or so. >> Will probably make things a bit more clear. > [Lukasz] I will rename option --schedule-type to --event-scheduler > [Lukasz] After reconsideration my proposal is to change option --schedule-type to --event-schedule-type. Are you ok with that? ^ permalink raw reply [flat|nested] 147+ messages in thread
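[Editor's note] The sentinel-value approach discussed in the message above can be sketched in a few lines of C. This is only an illustrative sketch, not the actual ipsec-secgw code: `SCHED_TYPE_UNSET`, `struct em_ext_params` and `check_eh_conf` are placeholder names. The idea is to initialize the scheduling type to an invalid value before argument parsing, then reject `--schedule-type` in poll mode afterwards, without keeping a separate global flag:

```c
#include <stdint.h>
#include <stdio.h>

/* Placeholder sentinel; the real code would pick any value that is not
 * a valid RTE_SCHED_TYPE_* constant (those are small non-negative ints). */
#define SCHED_TYPE_UNSET (-1)

struct em_ext_params {
	int32_t sched_type; /* set by --schedule-type, else left at sentinel */
};

/* Reject --schedule-type when the application runs in poll mode. The
 * option was given iff sched_type moved away from the sentinel. */
static int
check_eh_conf(int poll_mode, const struct em_ext_params *p)
{
	if (poll_mode && p->sched_type != SCHED_TYPE_UNSET) {
		printf("error: option --schedule-type applies only to event mode\n");
		return -1;
	}
	return 0;
}
```

This keeps `check_params()` intact, as suggested, and needs no extra state beyond the config struct that the parser fills in anyway.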
* Re: [dpdk-dev] [EXT] RE: [PATCH v2 09/12] examples/ipsec-secgw: add eventmode to ipsec-secgw 2020-02-02 23:00 ` Lukas Bartosik @ 2020-02-03 7:50 ` Ananyev, Konstantin 0 siblings, 0 replies; 147+ messages in thread From: Ananyev, Konstantin @ 2020-02-03 7:50 UTC (permalink / raw) To: Lukas Bartosik, Anoob Joseph, Akhil Goyal, Nicolau, Radu, Thomas Monjalon Cc: Jerin Jacob Kollanukkaran, Narayana Prasad Raju Athreya, Ankur Dwivedi, Archana Muniganti, Tejasree Kondoj, Vamsi Krishna Attunuru, dev Hi Lukasz, > >>>>>> /* > >>>>>> * RX/TX HW offload capabilities to enable/use on ethernet ports. > >>>>>> @@ -1185,7 +1205,7 @@ main_loop(__attribute__((unused)) void *dummy) > >>>>>> } > >>>>>> > >>>>>> static int32_t > >>>>>> -check_params(void) > >>>>>> +check_params(struct eh_conf *eh_conf) > >>>>>> { > >>>>>> uint8_t lcore; > >>>>>> uint16_t portid; > >>>>>> @@ -1220,6 +1240,14 @@ check_params(void) > >>>>>> return -1; > >>>>>> } > >>>>>> } > >>>>>> + > >>>>>> + if (eh_conf->mode == EH_PKT_TRANSFER_MODE_POLL) { > >>>>>> + if (schedule_type) { > >>>>>> + printf("error: option --schedule-type applies only to event mode\n"); > >>>>>> + return -1; > >>>>>> + } > >>>>>> + } > >>>>> > >>>>> As a nit - might be better to keep check_params() intact, > >>>>> and put this new check above into a separate function? > >>>>> check_eh_conf() or so? > >>>> > >>>> [Lukasz] I will put the check into new check_eh_conf() function. > >>>> > >>>>> Another thing it seems a bit clumsy construction to have global var (scheduler_type) > >>>>> just to figure out was particular option present on command line or not. > >>>>> Probably simler way to avoid it - set initially em_conf->ext_params.sched_type to > >>>>> some invalid value (-1 or so). Then after parse args you can check did its value > >>>>> change or not. > >>>> > >>>> [Lukasz] I will change it in V3. > >>>> > >>>>> As alternative thought: wouldn't it be better to unite both --transfer-mode > >>>>> and --schedule-type options into one? 
> >>>>> Then possible values for this unite option would be: > >>>>> "poll" > >>>>> "event" (expands to "event-ordered") > >>>>> "event-ordered" > >>>>> "event-atomic" > >>>>> "event-parallel" > >>>>> And this situation you are checking above simply wouldn't be possible. > >>>>> Again probably would be easier/simpler for users. > >>>> > >>>> [Lukasz] I would rather not combine event mode parameters into one for two reason: > >>>> - to be consistent with poll where one configuration item is controlled with one option, > >>>> - if we come up with a need to add a new event mode parameter in future then we > >>>> we will need to split event-ordered back to --transfer-mode and --schedule-type > >>>> to be consistent with how with provide event mode command line options. > >>> > >>> I thought for future mods we can just keep adding new types here: > >>> "event-xxx", "poll-yyy", etc. > >>> But if you think separate ones is a better approach - I am fine. > >> > >> Probably one extra suggestion - would it make sense to change name for > >> that option to have 'event' inside? > >> '--event-scheduler' or so. > >> Will probably make things a bit more clear. > > [Lukasz] I will rename option --schedule-type to --event-scheduler > > > [Lukasz] After reconsideration my proposal is to change option --schedule-type to --event-schedule-type. Are you ok with that ? Yes, sounds ok to me. ^ permalink raw reply [flat|nested] 147+ messages in thread
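[Editor's note] For reference, the united option floated in the discussion above ("poll", "event", "event-ordered", ...) would parse into the existing (transfer mode, schedule type) pair with a simple lookup table. A hypothetical sketch, with illustrative enum names rather than the real ipsec-secgw symbols:

```c
#include <string.h>

enum xfer_mode { MODE_POLL, MODE_EVENT };
enum sched_type { SCHED_NONE, SCHED_ORDERED, SCHED_ATOMIC, SCHED_PARALLEL };

/* Map one combined mode string to (transfer mode, schedule type);
 * bare "event" expands to "event-ordered" as proposed in the thread. */
static int
parse_mode(const char *arg, enum xfer_mode *m, enum sched_type *s)
{
	static const struct {
		const char *name;
		enum xfer_mode m;
		enum sched_type s;
	} tbl[] = {
		{ "poll",           MODE_POLL,  SCHED_NONE     },
		{ "event",          MODE_EVENT, SCHED_ORDERED  },
		{ "event-ordered",  MODE_EVENT, SCHED_ORDERED  },
		{ "event-atomic",   MODE_EVENT, SCHED_ATOMIC   },
		{ "event-parallel", MODE_EVENT, SCHED_PARALLEL },
	};
	size_t i;

	for (i = 0; i < sizeof(tbl) / sizeof(tbl[0]); i++) {
		if (strcmp(arg, tbl[i].name) == 0) {
			*m = tbl[i].m;
			*s = tbl[i].s;
			return 0;
		}
	}
	return -1; /* unknown mode string */
}
```

With this shape the "poll mode plus --schedule-type" conflict cannot be expressed at all, which is the point Konstantin makes; the thread ultimately kept separate options (`--transfer-mode` and `--event-schedule-type`) instead.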
* [dpdk-dev] [PATCH v2 10/12] examples/ipsec-secgw: add driver mode worker 2020-01-20 13:45 ` [dpdk-dev] [PATCH v2 00/12] add eventmode to ipsec-secgw Anoob Joseph ` (8 preceding siblings ...) 2020-01-20 13:45 ` [dpdk-dev] [PATCH v2 09/12] examples/ipsec-secgw: add eventmode to ipsec-secgw Anoob Joseph @ 2020-01-20 13:45 ` Anoob Joseph 2020-01-29 22:22 ` Ananyev, Konstantin 2020-01-20 13:45 ` [dpdk-dev] [PATCH v2 11/12] examples/ipsec-secgw: add app " Anoob Joseph ` (3 subsequent siblings) 13 siblings, 1 reply; 147+ messages in thread From: Anoob Joseph @ 2020-01-20 13:45 UTC (permalink / raw) To: Akhil Goyal, Radu Nicolau, Thomas Monjalon Cc: Lukasz Bartosik, Jerin Jacob, Narayana Prasad, Ankur Dwivedi, Anoob Joseph, Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru, Konstantin Ananyev, dev From: Lukasz Bartosik <lbartosik@marvell.com> Add driver inbound and outbound worker threads for ipsec-secgw. In driver mode the application does as little as possible: it simply forwards packets back to the port from which traffic was received, instructing the HW to apply inline security processing using the first outbound SA configured for a given port. If a port does not have an SA configured, outbound traffic on that port will be silently dropped. The aim of this mode is to measure HW capabilities. Driver mode is selected with the single-sa option. The single-sa option accepts an SA index; in event mode, however, that index is ignored. 
Example command to run ipsec-secgw in driver mode: ipsec-secgw -w 0002:02:00.0,ipsec_in_max_spi=128 -w 0002:03:00.0,ipsec_in_max_spi=128 -w 0002:0e:00.0 -w 0002:10:00.1 --log-level=8 -c 0x1 -- -P -p 0x3 -u 0x1 --config "(1,0,0),(0,0,0)" -f aes-gcm.cfg --transfer-mode event --schedule-type parallel --single-sa 0 Signed-off-by: Anoob Joseph <anoobj@marvell.com> Signed-off-by: Ankur Dwivedi <adwivedi@marvell.com> Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> --- examples/ipsec-secgw/Makefile | 1 + examples/ipsec-secgw/ipsec-secgw.c | 36 +++--- examples/ipsec-secgw/ipsec-secgw.h | 17 +++ examples/ipsec-secgw/ipsec.h | 11 ++ examples/ipsec-secgw/ipsec_worker.c | 240 ++++++++++++++++++++++++++++++++++++ examples/ipsec-secgw/meson.build | 2 +- 6 files changed, 291 insertions(+), 16 deletions(-) create mode 100644 examples/ipsec-secgw/ipsec-secgw.h create mode 100644 examples/ipsec-secgw/ipsec_worker.c diff --git a/examples/ipsec-secgw/Makefile b/examples/ipsec-secgw/Makefile index 09e3c5a..f6fd94c 100644 --- a/examples/ipsec-secgw/Makefile +++ b/examples/ipsec-secgw/Makefile @@ -15,6 +15,7 @@ SRCS-y += sa.c SRCS-y += rt.c SRCS-y += ipsec_process.c SRCS-y += ipsec-secgw.c +SRCS-y += ipsec_worker.c SRCS-y += event_helper.c CFLAGS += -gdwarf-2 diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c index f1cc3fb..86215fb 100644 --- a/examples/ipsec-secgw/ipsec-secgw.c +++ b/examples/ipsec-secgw/ipsec-secgw.c @@ -70,8 +70,6 @@ volatile bool force_quit; #define BURST_TX_DRAIN_US 100 /* TX drain every ~100us */ -#define NB_SOCKETS 4 - /* Configure how many packets ahead to prefetch, when reading packets */ #define PREFETCH_OFFSET 3 @@ -79,8 +77,6 @@ volatile bool force_quit; #define MAX_LCORE_PARAMS 1024 -#define UNPROTECTED_PORT(port) (unprotected_port_mask & (1 << portid)) - /* * Configurable number of RX/TX ring descriptors */ @@ -190,12 +186,10 @@ static const struct option lgopts[] = { /* mask of enabled ports */ static uint32_t 
enabled_port_mask; static uint64_t enabled_cryptodev_mask = UINT64_MAX; -static uint32_t unprotected_port_mask; static int32_t promiscuous_on = 1; static int32_t numa_on = 1; /**< NUMA is enabled by default. */ static uint32_t nb_lcores; static uint32_t single_sa; -static uint32_t single_sa_idx; static uint32_t schedule_type; /* @@ -279,8 +273,6 @@ static struct rte_eth_conf port_conf = { }, }; -static struct socket_ctx socket_ctx[NB_SOCKETS]; - /* * Determine is multi-segment support required: * - either frame buffer size is smaller then mtu @@ -1114,8 +1106,8 @@ drain_outbound_crypto_queues(const struct lcore_conf *qconf, } /* main processing loop */ -static int32_t -main_loop(__attribute__((unused)) void *dummy) +void +ipsec_poll_mode_worker(void) { struct rte_mbuf *pkts[MAX_PKT_BURST]; uint32_t lcore_id; @@ -1157,7 +1149,7 @@ main_loop(__attribute__((unused)) void *dummy) if (qconf->nb_rx_queue == 0) { RTE_LOG(DEBUG, IPSEC, "lcore %u has nothing to do\n", lcore_id); - return 0; + return; } RTE_LOG(INFO, IPSEC, "entering main loop on lcore %u\n", lcore_id); @@ -1170,7 +1162,7 @@ main_loop(__attribute__((unused)) void *dummy) lcore_id, portid, queueid); } - while (1) { + while (!force_quit) { cur_tsc = rte_rdtsc(); /* TX queue buffer drain */ @@ -1324,8 +1316,10 @@ print_usage(const char *prgname) " -a enables SA SQN atomic behaviour\n" " -f CONFIG_FILE: Configuration file\n" " --config (port,queue,lcore): Rx queue configuration\n" - " --single-sa SAIDX: Use single SA index for outbound traffic,\n" - " bypassing the SP\n" + " --single-sa SAIDX: In poll mode use single SA index for\n" + " outbound traffic, bypassing the SP\n" + " In event mode selects driver mode,\n" + " SA index value is ignored\n" " --cryptodev_mask MASK: Hexadecimal bitmask of the crypto\n" " devices to configure\n" " --transfer-mode MODE\n" @@ -1980,6 +1974,18 @@ cryptodevs_init(void) i++; } + /* + * Set the queue pair to at least the number of ethernet + * devices for inline outbound. 
+ */ + qp = RTE_MAX(rte_eth_dev_count_avail(), qp); + + /* + * The requested number of queues should never exceed + * the max available + */ + qp = RTE_MIN(qp, max_nb_qps); + if (qp == 0) continue; @@ -2871,7 +2877,7 @@ main(int32_t argc, char **argv) check_all_ports_link_status(enabled_port_mask); /* launch per-lcore init on every lcore */ - rte_eal_mp_remote_launch(main_loop, NULL, CALL_MASTER); + rte_eal_mp_remote_launch(ipsec_launch_one_lcore, eh_conf, CALL_MASTER); RTE_LCORE_FOREACH_SLAVE(lcore_id) { if (rte_eal_wait_lcore(lcore_id) < 0) diff --git a/examples/ipsec-secgw/ipsec-secgw.h b/examples/ipsec-secgw/ipsec-secgw.h new file mode 100644 index 0000000..5b19e29 --- /dev/null +++ b/examples/ipsec-secgw/ipsec-secgw.h @@ -0,0 +1,17 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C) 2020 Marvell International Ltd. + */ +#ifndef _IPSEC_SECGW_H_ +#define _IPSEC_SECGW_H_ + +#define NB_SOCKETS 4 + +#define UNPROTECTED_PORT(portid) (unprotected_port_mask & (1 << portid)) + +/* Port mask to identify the unprotected ports */ +uint32_t unprotected_port_mask; + +/* Index of SA in single mode */ +uint32_t single_sa_idx; + +#endif /* _IPSEC_SECGW_H_ */ diff --git a/examples/ipsec-secgw/ipsec.h b/examples/ipsec-secgw/ipsec.h index 0539aec..65be2ac 100644 --- a/examples/ipsec-secgw/ipsec.h +++ b/examples/ipsec-secgw/ipsec.h @@ -13,6 +13,8 @@ #include <rte_flow.h> #include <rte_ipsec.h> +#include "ipsec-secgw.h" + #define RTE_LOGTYPE_IPSEC RTE_LOGTYPE_USER1 #define RTE_LOGTYPE_IPSEC_ESP RTE_LOGTYPE_USER2 #define RTE_LOGTYPE_IPSEC_IPIP RTE_LOGTYPE_USER3 @@ -258,6 +260,15 @@ struct ipsec_traffic { struct traffic_type ip6; }; +/* Socket ctx */ +struct socket_ctx socket_ctx[NB_SOCKETS]; + +void +ipsec_poll_mode_worker(void); + +int +ipsec_launch_one_lcore(void *args); + uint16_t ipsec_inbound(struct ipsec_ctx *ctx, struct rte_mbuf *pkts[], uint16_t nb_pkts, uint16_t len); diff --git a/examples/ipsec-secgw/ipsec_worker.c b/examples/ipsec-secgw/ipsec_worker.c new file 
mode 100644 index 0000000..876ec68 --- /dev/null +++ b/examples/ipsec-secgw/ipsec_worker.c @@ -0,0 +1,240 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2010-2016 Intel Corporation + * Copyright (C) 2020 Marvell International Ltd. + */ +#include <stdio.h> +#include <stdlib.h> +#include <string.h> +#include <stdint.h> +#include <inttypes.h> +#include <sys/types.h> +#include <sys/queue.h> +#include <netinet/in.h> +#include <setjmp.h> +#include <stdarg.h> +#include <ctype.h> +#include <stdbool.h> + +#include <rte_common.h> +#include <rte_log.h> +#include <rte_memcpy.h> +#include <rte_atomic.h> +#include <rte_cycles.h> +#include <rte_prefetch.h> +#include <rte_lcore.h> +#include <rte_branch_prediction.h> +#include <rte_event_eth_tx_adapter.h> +#include <rte_ether.h> +#include <rte_ethdev.h> +#include <rte_eventdev.h> +#include <rte_malloc.h> +#include <rte_mbuf.h> + +#include "event_helper.h" +#include "ipsec.h" +#include "ipsec-secgw.h" + +extern volatile bool force_quit; + +static inline void +ipsec_event_pre_forward(struct rte_mbuf *m, unsigned int port_id) +{ + /* Save the destination port in the mbuf */ + m->port = port_id; + + /* Save eth queue for Tx */ + rte_event_eth_tx_adapter_txq_set(m, 0); +} + +static inline void +prepare_out_sessions_tbl(struct sa_ctx *sa_out, + struct rte_security_session **sess_tbl, uint16_t size) +{ + struct rte_ipsec_session *pri_sess; + struct ipsec_sa *sa; + int i; + + for (i = 0; i < IPSEC_SA_MAX_ENTRIES; i++) { + + sa = &sa_out->sa[i]; + if (!sa->spi) + continue; + + pri_sess = ipsec_get_primary_session(sa); + if (pri_sess->type != + RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL) { + + RTE_LOG(ERR, IPSEC, "Invalid session type %d\n", + pri_sess->type); + continue; + } + + if (sa->portid >= size) { + RTE_LOG(ERR, IPSEC, + "Port id >= than table size %d, %d\n", + sa->portid, size); + continue; + } + + /* Use only first inline session found for a given port */ + if (sess_tbl[sa->portid]) + continue; + sess_tbl[sa->portid] = 
pri_sess->security.ses; + } +} + +/* + * Event mode exposes various operating modes depending on the + * capabilities of the event device and the operating mode + * selected. + */ + +/* Workers registered */ +#define IPSEC_EVENTMODE_WORKERS 1 + +/* + * Event mode worker + * Operating parameters : non-burst - Tx internal port - driver mode + */ +static void +ipsec_wrkr_non_burst_int_port_drv_mode(struct eh_event_link_info *links, + uint8_t nb_links) +{ + struct rte_security_session *sess_tbl[RTE_MAX_ETHPORTS] = { NULL }; + unsigned int nb_rx = 0; + struct rte_mbuf *pkt; + struct rte_event ev; + uint32_t lcore_id; + int32_t socket_id; + int16_t port_id; + + /* Check if we have links registered for this lcore */ + if (nb_links == 0) { + /* No links registered - exit */ + return; + } + + /* Get core ID */ + lcore_id = rte_lcore_id(); + + /* Get socket ID */ + socket_id = rte_lcore_to_socket_id(lcore_id); + + /* + * Prepare security sessions table. In outbound driver mode + * we always use first session configured for a given port + */ + prepare_out_sessions_tbl(socket_ctx[socket_id].sa_out, sess_tbl, + RTE_MAX_ETHPORTS); + + RTE_LOG(INFO, IPSEC, + "Launching event mode worker (non-burst - Tx internal port - " + "driver mode) on lcore %d\n", lcore_id); + + /* We have valid links */ + + /* Check if it's single link */ + if (nb_links != 1) { + RTE_LOG(INFO, IPSEC, + "Multiple links not supported. 
Using first link\n"); + } + + RTE_LOG(INFO, IPSEC, " -- lcoreid=%u event_port_id=%u\n", lcore_id, + links[0].event_port_id); + while (!force_quit) { + /* Read packet from event queues */ + nb_rx = rte_event_dequeue_burst(links[0].eventdev_id, + links[0].event_port_id, + &ev, /* events */ + 1, /* nb_events */ + 0 /* timeout_ticks */); + + if (nb_rx == 0) + continue; + + pkt = ev.mbuf; + port_id = pkt->port; + + rte_prefetch0(rte_pktmbuf_mtod(pkt, void *)); + + /* Process packet */ + ipsec_event_pre_forward(pkt, port_id); + + if (!UNPROTECTED_PORT(port_id)) { + + if (unlikely(!sess_tbl[port_id])) { + rte_pktmbuf_free(pkt); + continue; + } + + /* Save security session */ + pkt->udata64 = (uint64_t) sess_tbl[port_id]; + + /* Mark the packet for Tx security offload */ + pkt->ol_flags |= PKT_TX_SEC_OFFLOAD; + } + + /* + * Since tx internal port is available, events can be + * directly enqueued to the adapter and it would be + * internally submitted to the eth device. + */ + rte_event_eth_tx_adapter_enqueue(links[0].eventdev_id, + links[0].event_port_id, + &ev, /* events */ + 1, /* nb_events */ + 0 /* flags */); + } +} + +static uint8_t +ipsec_eventmode_populate_wrkr_params(struct eh_app_worker_params *wrkrs) +{ + struct eh_app_worker_params *wrkr; + uint8_t nb_wrkr_param = 0; + + /* Save workers */ + wrkr = wrkrs; + + /* Non-burst - Tx internal port - driver mode */ + wrkr->cap.burst = EH_RX_TYPE_NON_BURST; + wrkr->cap.tx_internal_port = EH_TX_TYPE_INTERNAL_PORT; + wrkr->cap.ipsec_mode = EH_IPSEC_MODE_TYPE_DRIVER; + wrkr->worker_thread = ipsec_wrkr_non_burst_int_port_drv_mode; + wrkr++; + + return nb_wrkr_param; +} + +static void +ipsec_eventmode_worker(struct eh_conf *conf) +{ + struct eh_app_worker_params ipsec_wrkr[IPSEC_EVENTMODE_WORKERS] = { + {{{0} }, NULL } }; + uint8_t nb_wrkr_param; + + /* Populate l2fwd_wrkr params */ + nb_wrkr_param = ipsec_eventmode_populate_wrkr_params(ipsec_wrkr); + + /* + * Launch correct worker after checking + * the event device's 
capabilities. + */ + eh_launch_worker(conf, ipsec_wrkr, nb_wrkr_param); +} + +int ipsec_launch_one_lcore(void *args) +{ + struct eh_conf *conf; + + conf = (struct eh_conf *)args; + + if (conf->mode == EH_PKT_TRANSFER_MODE_POLL) { + /* Run in poll mode */ + ipsec_poll_mode_worker(); + } else if (conf->mode == EH_PKT_TRANSFER_MODE_EVENT) { + /* Run in event mode */ + ipsec_eventmode_worker(conf); + } + return 0; +} diff --git a/examples/ipsec-secgw/meson.build b/examples/ipsec-secgw/meson.build index 20f4064..ab40ca5 100644 --- a/examples/ipsec-secgw/meson.build +++ b/examples/ipsec-secgw/meson.build @@ -10,5 +10,5 @@ deps += ['security', 'lpm', 'acl', 'hash', 'ip_frag', 'ipsec', 'eventdev'] allow_experimental_apis = true sources = files( 'esp.c', 'ipsec.c', 'ipsec_process.c', 'ipsec-secgw.c', - 'parser.c', 'rt.c', 'sa.c', 'sp4.c', 'sp6.c', 'event_helper.c' + 'parser.c', 'rt.c', 'sa.c', 'sp4.c', 'sp6.c', 'event_helper.c', 'ipsec_worker.c' ) -- 2.7.4 ^ permalink raw reply [flat|nested] 147+ messages in thread
* Re: [dpdk-dev] [PATCH v2 10/12] examples/ipsec-secgw: add driver mode worker 2020-01-20 13:45 ` [dpdk-dev] [PATCH v2 10/12] examples/ipsec-secgw: add driver mode worker Anoob Joseph @ 2020-01-29 22:22 ` Ananyev, Konstantin 0 siblings, 0 replies; 147+ messages in thread From: Ananyev, Konstantin @ 2020-01-29 22:22 UTC (permalink / raw) To: Anoob Joseph, Akhil Goyal, Nicolau, Radu, Thomas Monjalon Cc: Lukasz Bartosik, Jerin Jacob, Narayana Prasad, Ankur Dwivedi, Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru, dev > Add driver inbound and outbound worker thread for ipsec-secgw. In driver > mode application does as little as possible. It simply forwards packets > back to port from which traffic was received instructing HW to apply > inline security processing using first outbound SA configured for > a given port. If a port does not have SA configured outbound traffic > on that port will be silently dropped. The aim of this mode is to > measure HW capabilities. Driver mode is selected with single-sa option. > The single-sa option accepts SA index however in event mode the SA > index is ignored. 
> > Example command to run ipsec-secgw in driver mode: > ipsec-secgw -w 0002:02:00.0,ipsec_in_max_spi=128 > -w 0002:03:00.0,ipsec_in_max_spi=128 -w 0002:0e:00.0 -w 0002:10:00.1 > --log-level=8 -c 0x1 -- -P -p 0x3 -u 0x1 --config "(1,0,0),(0,0,0)" > -f aes-gcm.cfg --transfer-mode event --schedule-type parallel > --single-sa 0 > > Signed-off-by: Anoob Joseph <anoobj@marvell.com> > Signed-off-by: Ankur Dwivedi <adwivedi@marvell.com> > Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> > --- > examples/ipsec-secgw/Makefile | 1 + > examples/ipsec-secgw/ipsec-secgw.c | 36 +++--- > examples/ipsec-secgw/ipsec-secgw.h | 17 +++ > examples/ipsec-secgw/ipsec.h | 11 ++ > examples/ipsec-secgw/ipsec_worker.c | 240 ++++++++++++++++++++++++++++++++++++ > examples/ipsec-secgw/meson.build | 2 +- > 6 files changed, 291 insertions(+), 16 deletions(-) > create mode 100644 examples/ipsec-secgw/ipsec-secgw.h > create mode 100644 examples/ipsec-secgw/ipsec_worker.c > > diff --git a/examples/ipsec-secgw/Makefile b/examples/ipsec-secgw/Makefile > index 09e3c5a..f6fd94c 100644 > --- a/examples/ipsec-secgw/Makefile > +++ b/examples/ipsec-secgw/Makefile > @@ -15,6 +15,7 @@ SRCS-y += sa.c > SRCS-y += rt.c > SRCS-y += ipsec_process.c > SRCS-y += ipsec-secgw.c > +SRCS-y += ipsec_worker.c > SRCS-y += event_helper.c > > CFLAGS += -gdwarf-2 > diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c > index f1cc3fb..86215fb 100644 > --- a/examples/ipsec-secgw/ipsec-secgw.c > +++ b/examples/ipsec-secgw/ipsec-secgw.c > @@ -70,8 +70,6 @@ volatile bool force_quit; > > #define BURST_TX_DRAIN_US 100 /* TX drain every ~100us */ > > -#define NB_SOCKETS 4 > - > /* Configure how many packets ahead to prefetch, when reading packets */ > #define PREFETCH_OFFSET 3 > > @@ -79,8 +77,6 @@ volatile bool force_quit; > > #define MAX_LCORE_PARAMS 1024 > > -#define UNPROTECTED_PORT(port) (unprotected_port_mask & (1 << portid)) > - > /* > * Configurable number of RX/TX ring descriptors > */ 
> @@ -190,12 +186,10 @@ static const struct option lgopts[] = { > /* mask of enabled ports */ > static uint32_t enabled_port_mask; > static uint64_t enabled_cryptodev_mask = UINT64_MAX; > -static uint32_t unprotected_port_mask; > static int32_t promiscuous_on = 1; > static int32_t numa_on = 1; /**< NUMA is enabled by default. */ > static uint32_t nb_lcores; > static uint32_t single_sa; > -static uint32_t single_sa_idx; > static uint32_t schedule_type; > > /* > @@ -279,8 +273,6 @@ static struct rte_eth_conf port_conf = { > }, > }; > > -static struct socket_ctx socket_ctx[NB_SOCKETS]; > - > /* > * Determine is multi-segment support required: > * - either frame buffer size is smaller then mtu > @@ -1114,8 +1106,8 @@ drain_outbound_crypto_queues(const struct lcore_conf *qconf, > } > > /* main processing loop */ > -static int32_t > -main_loop(__attribute__((unused)) void *dummy) > +void > +ipsec_poll_mode_worker(void) > { > struct rte_mbuf *pkts[MAX_PKT_BURST]; > uint32_t lcore_id; > @@ -1157,7 +1149,7 @@ main_loop(__attribute__((unused)) void *dummy) > if (qconf->nb_rx_queue == 0) { > RTE_LOG(DEBUG, IPSEC, "lcore %u has nothing to do\n", > lcore_id); > - return 0; > + return; > } > > RTE_LOG(INFO, IPSEC, "entering main loop on lcore %u\n", lcore_id); > @@ -1170,7 +1162,7 @@ main_loop(__attribute__((unused)) void *dummy) > lcore_id, portid, queueid); > } > > - while (1) { > + while (!force_quit) { > cur_tsc = rte_rdtsc(); > > /* TX queue buffer drain */ > @@ -1324,8 +1316,10 @@ print_usage(const char *prgname) > " -a enables SA SQN atomic behaviour\n" > " -f CONFIG_FILE: Configuration file\n" > " --config (port,queue,lcore): Rx queue configuration\n" > - " --single-sa SAIDX: Use single SA index for outbound traffic,\n" > - " bypassing the SP\n" > + " --single-sa SAIDX: In poll mode use single SA index for\n" > + " outbound traffic, bypassing the SP\n" > + " In event mode selects driver mode,\n" > + " SA index value is ignored\n" > " --cryptodev_mask MASK: Hexadecimal 
bitmask of the crypto\n" > " devices to configure\n" > " --transfer-mode MODE\n" > @@ -1980,6 +1974,18 @@ cryptodevs_init(void) > i++; > } > > + /* > + * Set the queue pair to at least the number of ethernet > + * devices for inline outbound. > + */ > + qp = RTE_MAX(rte_eth_dev_count_avail(), qp); > + > + /* > + * The requested number of queues should never exceed > + * the max available > + */ > + qp = RTE_MIN(qp, max_nb_qps); > + Same comment as for v1: I still don't understand why we have to do it for unconditionally. For poll mode it seems to bring nothing but waste of resources. Konstantin > if (qp == 0) > continue; > > @@ -2871,7 +2877,7 @@ main(int32_t argc, char **argv) > check_all_ports_link_status(enabled_port_mask); > > /* launch per-lcore init on every lcore */ > - rte_eal_mp_remote_launch(main_loop, NULL, CALL_MASTER); > + rte_eal_mp_remote_launch(ipsec_launch_one_lcore, eh_conf, CALL_MASTER); > > RTE_LCORE_FOREACH_SLAVE(lcore_id) { > if (rte_eal_wait_lcore(lcore_id) < 0) > diff --git a/examples/ipsec-secgw/ipsec-secgw.h b/examples/ipsec-secgw/ipsec-secgw.h > new file mode 100644 > index 0000000..5b19e29 > --- /dev/null > +++ b/examples/ipsec-secgw/ipsec-secgw.h > @@ -0,0 +1,17 @@ > +/* SPDX-License-Identifier: BSD-3-Clause > + * Copyright (C) 2020 Marvell International Ltd. 
> + */ > +#ifndef _IPSEC_SECGW_H_ > +#define _IPSEC_SECGW_H_ > + > +#define NB_SOCKETS 4 > + > +#define UNPROTECTED_PORT(portid) (unprotected_port_mask & (1 << portid)) > + > +/* Port mask to identify the unprotected ports */ > +uint32_t unprotected_port_mask; > + > +/* Index of SA in single mode */ > +uint32_t single_sa_idx; > + > +#endif /* _IPSEC_SECGW_H_ */ > diff --git a/examples/ipsec-secgw/ipsec.h b/examples/ipsec-secgw/ipsec.h > index 0539aec..65be2ac 100644 > --- a/examples/ipsec-secgw/ipsec.h > +++ b/examples/ipsec-secgw/ipsec.h > @@ -13,6 +13,8 @@ > #include <rte_flow.h> > #include <rte_ipsec.h> > > +#include "ipsec-secgw.h" > + > #define RTE_LOGTYPE_IPSEC RTE_LOGTYPE_USER1 > #define RTE_LOGTYPE_IPSEC_ESP RTE_LOGTYPE_USER2 > #define RTE_LOGTYPE_IPSEC_IPIP RTE_LOGTYPE_USER3 > @@ -258,6 +260,15 @@ struct ipsec_traffic { > struct traffic_type ip6; > }; > > +/* Socket ctx */ > +struct socket_ctx socket_ctx[NB_SOCKETS]; > + > +void > +ipsec_poll_mode_worker(void); > + > +int > +ipsec_launch_one_lcore(void *args); > + > uint16_t > ipsec_inbound(struct ipsec_ctx *ctx, struct rte_mbuf *pkts[], > uint16_t nb_pkts, uint16_t len); ^ permalink raw reply [flat|nested] 147+ messages in thread
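[Editor's note] Konstantin's objection above is that the queue-pair bump is applied unconditionally, wasting crypto QPs in poll mode. One way to address it is to make the sizing depend on the transfer mode. This is a hedged sketch of that idea only; the helper name and parameters are illustrative, and the real fix would live inside `cryptodevs_init()`:

```c
#include <stdint.h>

/* Pick the number of crypto queue pairs: reserve at least one per eth
 * port only when inline outbound in event mode actually needs it, and
 * never exceed what the crypto device reports as its maximum. */
static uint16_t
pick_nb_qps(uint16_t qp, uint16_t max_nb_qps, uint16_t nb_eth_devs,
	    int event_mode)
{
	if (event_mode && nb_eth_devs > qp)
		qp = nb_eth_devs;	/* one QP per eth port for inline Tx */
	if (qp > max_nb_qps)
		qp = max_nb_qps;	/* clamp to device capability */
	return qp;
}
```

In poll mode the function leaves `qp` untouched, which is the behavior the review asks for; in event mode it reproduces the `RTE_MAX`/`RTE_MIN` clamping from the patch.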
* [dpdk-dev] [PATCH v2 11/12] examples/ipsec-secgw: add app mode worker 2020-01-20 13:45 ` [dpdk-dev] [PATCH v2 00/12] add eventmode to ipsec-secgw Anoob Joseph ` (9 preceding siblings ...) 2020-01-20 13:45 ` [dpdk-dev] [PATCH v2 10/12] examples/ipsec-secgw: add driver mode worker Anoob Joseph @ 2020-01-20 13:45 ` Anoob Joseph 2020-01-29 15:34 ` Ananyev, Konstantin 2020-01-20 13:45 ` [dpdk-dev] [PATCH v2 12/12] examples/ipsec-secgw: add cmd line option for bufs Anoob Joseph ` (2 subsequent siblings) 13 siblings, 1 reply; 147+ messages in thread From: Anoob Joseph @ 2020-01-20 13:45 UTC (permalink / raw) To: Akhil Goyal, Radu Nicolau, Thomas Monjalon Cc: Lukasz Bartosik, Jerin Jacob, Narayana Prasad, Ankur Dwivedi, Anoob Joseph, Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru, Konstantin Ananyev, dev From: Lukasz Bartosik <lbartosik@marvell.com> Add application inbound/outbound worker threads and IPsec application processing code for event mode. Example ipsec-secgw command in app mode: ipsec-secgw -w 0002:02:00.0,ipsec_in_max_spi=128 -w 0002:03:00.0,ipsec_in_max_spi=128 -w 0002:0e:00.0 -w 0002:10:00.1 --log-level=8 -c 0x1 -- -P -p 0x3 -u 0x1 --config "(1,0,0),(0,0,0)" -f aes-gcm.cfg --transfer-mode event --schedule-type parallel Signed-off-by: Anoob Joseph <anoobj@marvell.com> Signed-off-by: Ankur Dwivedi <adwivedi@marvell.com> Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> --- examples/ipsec-secgw/ipsec-secgw.c | 45 +--- examples/ipsec-secgw/ipsec-secgw.h | 69 ++++++ examples/ipsec-secgw/ipsec.h | 22 -- examples/ipsec-secgw/ipsec_worker.c | 418 +++++++++++++++++++++++++++++++++++- examples/ipsec-secgw/ipsec_worker.h | 39 ++++ 5 files changed, 533 insertions(+), 60 deletions(-) create mode 100644 examples/ipsec-secgw/ipsec_worker.h diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c index 86215fb..7d844bb 100644 --- a/examples/ipsec-secgw/ipsec-secgw.c +++ b/examples/ipsec-secgw/ipsec-secgw.c @@ -50,12 +50,11 @@ #include 
"event_helper.h" #include "ipsec.h" +#include "ipsec_worker.h" #include "parser.h" volatile bool force_quit; -#define RTE_LOGTYPE_IPSEC RTE_LOGTYPE_USER1 - #define MAX_JUMBO_PKT_LEN 9600 #define MEMPOOL_CACHE_SIZE 256 @@ -85,29 +84,6 @@ volatile bool force_quit; static uint16_t nb_rxd = IPSEC_SECGW_RX_DESC_DEFAULT; static uint16_t nb_txd = IPSEC_SECGW_TX_DESC_DEFAULT; -#if RTE_BYTE_ORDER != RTE_LITTLE_ENDIAN -#define __BYTES_TO_UINT64(a, b, c, d, e, f, g, h) \ - (((uint64_t)((a) & 0xff) << 56) | \ - ((uint64_t)((b) & 0xff) << 48) | \ - ((uint64_t)((c) & 0xff) << 40) | \ - ((uint64_t)((d) & 0xff) << 32) | \ - ((uint64_t)((e) & 0xff) << 24) | \ - ((uint64_t)((f) & 0xff) << 16) | \ - ((uint64_t)((g) & 0xff) << 8) | \ - ((uint64_t)(h) & 0xff)) -#else -#define __BYTES_TO_UINT64(a, b, c, d, e, f, g, h) \ - (((uint64_t)((h) & 0xff) << 56) | \ - ((uint64_t)((g) & 0xff) << 48) | \ - ((uint64_t)((f) & 0xff) << 40) | \ - ((uint64_t)((e) & 0xff) << 32) | \ - ((uint64_t)((d) & 0xff) << 24) | \ - ((uint64_t)((c) & 0xff) << 16) | \ - ((uint64_t)((b) & 0xff) << 8) | \ - ((uint64_t)(a) & 0xff)) -#endif -#define ETHADDR(a, b, c, d, e, f) (__BYTES_TO_UINT64(a, b, c, d, e, f, 0, 0)) - #define ETHADDR_TO_UINT64(addr) __BYTES_TO_UINT64( \ (addr)->addr_bytes[0], (addr)->addr_bytes[1], \ (addr)->addr_bytes[2], (addr)->addr_bytes[3], \ @@ -119,18 +95,6 @@ static uint16_t nb_txd = IPSEC_SECGW_TX_DESC_DEFAULT; #define MTU_TO_FRAMELEN(x) ((x) + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN) -/* port/source ethernet addr and destination ethernet addr */ -struct ethaddr_info { - uint64_t src, dst; -}; - -struct ethaddr_info ethaddr_tbl[RTE_MAX_ETHPORTS] = { - { 0, ETHADDR(0x00, 0x16, 0x3e, 0x7e, 0x94, 0x9a) }, - { 0, ETHADDR(0x00, 0x16, 0x3e, 0x22, 0xa1, 0xd9) }, - { 0, ETHADDR(0x00, 0x16, 0x3e, 0x08, 0x69, 0x26) }, - { 0, ETHADDR(0x00, 0x16, 0x3e, 0x49, 0x9e, 0xdd) } -}; - struct flow_info flow_info_tbl[RTE_MAX_ETHPORTS]; #define CMD_LINE_OPT_CONFIG "config" @@ -183,6 +147,13 @@ static const struct 
option lgopts[] = { {NULL, 0, 0, 0} }; +struct ethaddr_info ethaddr_tbl[RTE_MAX_ETHPORTS] = { + { 0, ETHADDR(0x00, 0x16, 0x3e, 0x7e, 0x94, 0x9a) }, + { 0, ETHADDR(0x00, 0x16, 0x3e, 0x22, 0xa1, 0xd9) }, + { 0, ETHADDR(0x00, 0x16, 0x3e, 0x08, 0x69, 0x26) }, + { 0, ETHADDR(0x00, 0x16, 0x3e, 0x49, 0x9e, 0xdd) } +}; + /* mask of enabled ports */ static uint32_t enabled_port_mask; static uint64_t enabled_cryptodev_mask = UINT64_MAX; diff --git a/examples/ipsec-secgw/ipsec-secgw.h b/examples/ipsec-secgw/ipsec-secgw.h index 5b19e29..926ce5d 100644 --- a/examples/ipsec-secgw/ipsec-secgw.h +++ b/examples/ipsec-secgw/ipsec-secgw.h @@ -4,10 +4,79 @@ #ifndef _IPSEC_SECGW_H_ #define _IPSEC_SECGW_H_ +#include <rte_hash.h> + +#define NB_SOCKETS 4 + +#define MAX_PKT_BURST 32 + +#define RTE_LOGTYPE_IPSEC RTE_LOGTYPE_USER1 + #define NB_SOCKETS 4 #define UNPROTECTED_PORT(portid) (unprotected_port_mask & (1 << portid)) +#if RTE_BYTE_ORDER != RTE_LITTLE_ENDIAN +#define __BYTES_TO_UINT64(a, b, c, d, e, f, g, h) \ + (((uint64_t)((a) & 0xff) << 56) | \ + ((uint64_t)((b) & 0xff) << 48) | \ + ((uint64_t)((c) & 0xff) << 40) | \ + ((uint64_t)((d) & 0xff) << 32) | \ + ((uint64_t)((e) & 0xff) << 24) | \ + ((uint64_t)((f) & 0xff) << 16) | \ + ((uint64_t)((g) & 0xff) << 8) | \ + ((uint64_t)(h) & 0xff)) +#else +#define __BYTES_TO_UINT64(a, b, c, d, e, f, g, h) \ + (((uint64_t)((h) & 0xff) << 56) | \ + ((uint64_t)((g) & 0xff) << 48) | \ + ((uint64_t)((f) & 0xff) << 40) | \ + ((uint64_t)((e) & 0xff) << 32) | \ + ((uint64_t)((d) & 0xff) << 24) | \ + ((uint64_t)((c) & 0xff) << 16) | \ + ((uint64_t)((b) & 0xff) << 8) | \ + ((uint64_t)(a) & 0xff)) +#endif + +#define ETHADDR(a, b, c, d, e, f) (__BYTES_TO_UINT64(a, b, c, d, e, f, 0, 0)) + +struct traffic_type { + const uint8_t *data[MAX_PKT_BURST * 2]; + struct rte_mbuf *pkts[MAX_PKT_BURST * 2]; + void *saptr[MAX_PKT_BURST * 2]; + uint32_t res[MAX_PKT_BURST * 2]; + uint32_t num; +}; + +struct ipsec_traffic { + struct traffic_type ipsec; + struct 
traffic_type ip4; + struct traffic_type ip6; +}; + +/* Fields optimized for devices without burst */ +struct traffic_type_nb { + const uint8_t *data; + struct rte_mbuf *pkt; + uint32_t res; + uint32_t num; +}; + +struct ipsec_traffic_nb { + struct traffic_type_nb ipsec; + struct traffic_type_nb ip4; + struct traffic_type_nb ip6; +}; + +/* port/source ethernet addr and destination ethernet addr */ +struct ethaddr_info { + uint64_t src, dst; +}; + +struct ethaddr_info ethaddr_tbl[RTE_MAX_ETHPORTS]; + +/* TODO: All var definitions need to be part of a .c file */ + /* Port mask to identify the unprotected ports */ uint32_t unprotected_port_mask; diff --git a/examples/ipsec-secgw/ipsec.h b/examples/ipsec-secgw/ipsec.h index 65be2ac..0c5ee8a 100644 --- a/examples/ipsec-secgw/ipsec.h +++ b/examples/ipsec-secgw/ipsec.h @@ -15,11 +15,9 @@ #include "ipsec-secgw.h" -#define RTE_LOGTYPE_IPSEC RTE_LOGTYPE_USER1 #define RTE_LOGTYPE_IPSEC_ESP RTE_LOGTYPE_USER2 #define RTE_LOGTYPE_IPSEC_IPIP RTE_LOGTYPE_USER3 -#define MAX_PKT_BURST 32 #define MAX_INFLIGHT 128 #define MAX_QP_PER_LCORE 256 @@ -246,29 +244,9 @@ struct cnt_blk { uint32_t cnt; } __attribute__((packed)); -struct traffic_type { - const uint8_t *data[MAX_PKT_BURST * 2]; - struct rte_mbuf *pkts[MAX_PKT_BURST * 2]; - void *saptr[MAX_PKT_BURST * 2]; - uint32_t res[MAX_PKT_BURST * 2]; - uint32_t num; -}; - -struct ipsec_traffic { - struct traffic_type ipsec; - struct traffic_type ip4; - struct traffic_type ip6; -}; - /* Socket ctx */ struct socket_ctx socket_ctx[NB_SOCKETS]; -void -ipsec_poll_mode_worker(void); - -int -ipsec_launch_one_lcore(void *args); - uint16_t ipsec_inbound(struct ipsec_ctx *ctx, struct rte_mbuf *pkts[], uint16_t nb_pkts, uint16_t len); diff --git a/examples/ipsec-secgw/ipsec_worker.c b/examples/ipsec-secgw/ipsec_worker.c index 876ec68..09c798d 100644 --- a/examples/ipsec-secgw/ipsec_worker.c +++ b/examples/ipsec-secgw/ipsec_worker.c @@ -15,6 +15,7 @@ #include <ctype.h> #include <stdbool.h> +#include 
<rte_acl.h> #include <rte_common.h> #include <rte_log.h> #include <rte_memcpy.h> @@ -29,13 +30,52 @@ #include <rte_eventdev.h> #include <rte_malloc.h> #include <rte_mbuf.h> +#include <rte_lpm.h> +#include <rte_lpm6.h> #include "event_helper.h" #include "ipsec.h" #include "ipsec-secgw.h" +#include "ipsec_worker.h" extern volatile bool force_quit; +static inline enum pkt_type +process_ipsec_get_pkt_type(struct rte_mbuf *pkt, uint8_t **nlp) +{ + struct rte_ether_hdr *eth; + + eth = rte_pktmbuf_mtod(pkt, struct rte_ether_hdr *); + if (eth->ether_type == rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4)) { + *nlp = RTE_PTR_ADD(eth, RTE_ETHER_HDR_LEN + + offsetof(struct ip, ip_p)); + if (**nlp == IPPROTO_ESP) + return PKT_TYPE_IPSEC_IPV4; + else + return PKT_TYPE_PLAIN_IPV4; + } else if (eth->ether_type == rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6)) { + *nlp = RTE_PTR_ADD(eth, RTE_ETHER_HDR_LEN + + offsetof(struct ip6_hdr, ip6_nxt)); + if (**nlp == IPPROTO_ESP) + return PKT_TYPE_IPSEC_IPV6; + else + return PKT_TYPE_PLAIN_IPV6; + } + + /* Unknown/Unsupported type */ + return PKT_TYPE_INVALID; +} + +static inline void +update_mac_addrs(struct rte_mbuf *pkt, uint16_t portid) +{ + struct rte_ether_hdr *ethhdr; + + ethhdr = rte_pktmbuf_mtod(pkt, struct rte_ether_hdr *); + memcpy(&ethhdr->s_addr, &ethaddr_tbl[portid].src, RTE_ETHER_ADDR_LEN); + memcpy(&ethhdr->d_addr, &ethaddr_tbl[portid].dst, RTE_ETHER_ADDR_LEN); +} + static inline void ipsec_event_pre_forward(struct rte_mbuf *m, unsigned int port_id) { @@ -83,6 +123,286 @@ prepare_out_sessions_tbl(struct sa_ctx *sa_out, } } +static inline int +check_sp(struct sp_ctx *sp, const uint8_t *nlp, uint32_t *sa_idx) +{ + uint32_t res; + + if (unlikely(sp == NULL)) + return 0; + + rte_acl_classify((struct rte_acl_ctx *)sp, &nlp, &res, 1, + DEFAULT_MAX_CATEGORIES); + + if (unlikely(res == 0)) { + /* No match */ + return 0; + } + + if (res == DISCARD) + return 0; + else if (res == BYPASS) { + *sa_idx = 0; + return 1; + } + + *sa_idx = SPI2IDX(res); + if (*sa_idx < 
IPSEC_SA_MAX_ENTRIES) + return 1; + + /* Invalid SA IDX */ + return 0; +} + +static inline uint16_t +route4_pkt(struct rte_mbuf *pkt, struct rt_ctx *rt_ctx) +{ + uint32_t dst_ip; + uint16_t offset; + uint32_t hop; + int ret; + + offset = RTE_ETHER_HDR_LEN + offsetof(struct ip, ip_dst); + dst_ip = *rte_pktmbuf_mtod_offset(pkt, uint32_t *, offset); + dst_ip = rte_be_to_cpu_32(dst_ip); + + ret = rte_lpm_lookup((struct rte_lpm *)rt_ctx, dst_ip, &hop); + + if (ret == 0) { + /* We have a hit */ + return hop; + } + + /* else */ + return RTE_MAX_ETHPORTS; +} + +/* TODO: To be tested */ +static inline uint16_t +route6_pkt(struct rte_mbuf *pkt, struct rt_ctx *rt_ctx) +{ + uint8_t dst_ip[16]; + uint8_t *ip6_dst; + uint16_t offset; + uint32_t hop; + int ret; + + offset = RTE_ETHER_HDR_LEN + offsetof(struct ip6_hdr, ip6_dst); + ip6_dst = rte_pktmbuf_mtod_offset(pkt, uint8_t *, offset); + memcpy(&dst_ip[0], ip6_dst, 16); + + ret = rte_lpm6_lookup((struct rte_lpm6 *)rt_ctx, dst_ip, &hop); + + if (ret == 0) { + /* We have a hit */ + return hop; + } + + /* else */ + return RTE_MAX_ETHPORTS; +} + +static inline uint16_t +get_route(struct rte_mbuf *pkt, struct route_table *rt, enum pkt_type type) +{ + if (type == PKT_TYPE_PLAIN_IPV4 || type == PKT_TYPE_IPSEC_IPV4) + return route4_pkt(pkt, rt->rt4_ctx); + else if (type == PKT_TYPE_PLAIN_IPV6 || type == PKT_TYPE_IPSEC_IPV6) + return route6_pkt(pkt, rt->rt6_ctx); + + return RTE_MAX_ETHPORTS; +} + +static inline int +process_ipsec_ev_inbound(struct ipsec_ctx *ctx, struct route_table *rt, + struct rte_event *ev) +{ + struct ipsec_sa *sa = NULL; + struct rte_mbuf *pkt; + uint16_t port_id = 0; + enum pkt_type type; + uint32_t sa_idx; + uint8_t *nlp; + + /* Get pkt from event */ + pkt = ev->mbuf; + + /* Check the packet type */ + type = process_ipsec_get_pkt_type(pkt, &nlp); + + switch (type) { + case PKT_TYPE_PLAIN_IPV4: + if (pkt->ol_flags & PKT_RX_SEC_OFFLOAD) { + if (unlikely(pkt->ol_flags & + PKT_RX_SEC_OFFLOAD_FAILED)) { + RTE_LOG(ERR, 
IPSEC, + "Inbound security offload failed\n"); + goto drop_pkt_and_exit; + } + sa = (struct ipsec_sa *) pkt->udata64; + } + + /* Check if we have a match */ + if (check_sp(ctx->sp4_ctx, nlp, &sa_idx) == 0) { + /* No valid match */ + goto drop_pkt_and_exit; + } + break; + + case PKT_TYPE_PLAIN_IPV6: + if (pkt->ol_flags & PKT_RX_SEC_OFFLOAD) { + if (unlikely(pkt->ol_flags & + PKT_RX_SEC_OFFLOAD_FAILED)) { + RTE_LOG(ERR, IPSEC, + "Inbound security offload failed\n"); + goto drop_pkt_and_exit; + } + sa = (struct ipsec_sa *) pkt->udata64; + } + + /* Check if we have a match */ + if (check_sp(ctx->sp6_ctx, nlp, &sa_idx) == 0) { + /* No valid match */ + goto drop_pkt_and_exit; + } + break; + + default: + RTE_LOG(ERR, IPSEC, "Unsupported packet type = %d\n", type); + goto drop_pkt_and_exit; + } + + /* Check if the packet has to be bypassed */ + if (sa_idx == 0) + goto route_and_send_pkt; + + /* Else the packet has to be protected with SA */ + + /* If the packet was IPsec processed, then SA pointer should be set */ + if (sa == NULL) + goto drop_pkt_and_exit; + + /* SPI on the packet should match with the one in SA */ + if (unlikely(sa->spi != sa_idx)) + goto drop_pkt_and_exit; + +route_and_send_pkt: + port_id = get_route(pkt, rt, type); + if (unlikely(port_id == RTE_MAX_ETHPORTS)) { + /* no match */ + goto drop_pkt_and_exit; + } + /* else, we have a matching route */ + + /* Update mac addresses */ + update_mac_addrs(pkt, port_id); + + /* Update the event with the dest port */ + ipsec_event_pre_forward(pkt, port_id); + return 1; + +drop_pkt_and_exit: + RTE_LOG(ERR, IPSEC, "Inbound packet dropped\n"); + rte_pktmbuf_free(pkt); + ev->mbuf = NULL; + return 0; +} + +static inline int +process_ipsec_ev_outbound(struct ipsec_ctx *ctx, struct route_table *rt, + struct rte_event *ev) +{ + struct rte_ipsec_session *sess; + struct sa_ctx *sa_ctx; + struct rte_mbuf *pkt; + uint16_t port_id = 0; + struct ipsec_sa *sa; + enum pkt_type type; + uint32_t sa_idx; + uint8_t *nlp; + + /* Get 
pkt from event */ + pkt = ev->mbuf; + + /* Check the packet type */ + type = process_ipsec_get_pkt_type(pkt, &nlp); + + switch (type) { + case PKT_TYPE_PLAIN_IPV4: + /* Check if we have a match */ + if (check_sp(ctx->sp4_ctx, nlp, &sa_idx) == 0) { + /* No valid match */ + goto drop_pkt_and_exit; + } + break; + case PKT_TYPE_PLAIN_IPV6: + /* Check if we have a match */ + if (check_sp(ctx->sp6_ctx, nlp, &sa_idx) == 0) { + /* No valid match */ + goto drop_pkt_and_exit; + } + break; + default: + /* + * Only plain IPv4 & IPv6 packets are allowed + * on protected port. Drop the rest. + */ + RTE_LOG(ERR, IPSEC, "Unsupported packet type = %d\n", type); + goto drop_pkt_and_exit; + } + + /* Check if the packet has to be bypassed */ + if (sa_idx == 0) { + port_id = get_route(pkt, rt, type); + if (unlikely(port_id == RTE_MAX_ETHPORTS)) { + /* no match */ + goto drop_pkt_and_exit; + } + /* else, we have a matching route */ + goto send_pkt; + } + + /* Else the packet has to be protected */ + + /* Get SA ctx*/ + sa_ctx = ctx->sa_ctx; + + /* Get SA */ + sa = &(sa_ctx->sa[sa_idx]); + + /* Get IPsec session */ + sess = ipsec_get_primary_session(sa); + + /* Allow only inline protocol for now */ + if (sess->type != RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL) { + RTE_LOG(ERR, IPSEC, "SA type not supported\n"); + goto drop_pkt_and_exit; + } + + if (sess->security.ol_flags & RTE_SECURITY_TX_OLOAD_NEED_MDATA) + pkt->udata64 = (uint64_t) sess->security.ses; + + /* Mark the packet for Tx security offload */ + pkt->ol_flags |= PKT_TX_SEC_OFFLOAD; + + /* Get the port to which this pkt need to be submitted */ + port_id = sa->portid; + +send_pkt: + /* Update mac addresses */ + update_mac_addrs(pkt, port_id); + + /* Update the event with the dest port */ + ipsec_event_pre_forward(pkt, port_id); + return 1; + +drop_pkt_and_exit: + RTE_LOG(ERR, IPSEC, "Outbound packet dropped\n"); + rte_pktmbuf_free(pkt); + ev->mbuf = NULL; + return 0; +} + /* * Event mode exposes various operating modes depending 
on the * capabilities of the event device and the operating mode @@ -90,7 +410,7 @@ prepare_out_sessions_tbl(struct sa_ctx *sa_out, */ /* Workers registered */ -#define IPSEC_EVENTMODE_WORKERS 1 +#define IPSEC_EVENTMODE_WORKERS 2 /* * Event mode worker @@ -187,6 +507,94 @@ ipsec_wrkr_non_burst_int_port_drv_mode(struct eh_event_link_info *links, } } +/* + * Event mode worker + * Operating parameters : non-burst - Tx internal port - app mode + */ +static void +ipsec_wrkr_non_burst_int_port_app_mode(struct eh_event_link_info *links, + uint8_t nb_links) +{ + struct lcore_conf_ev_tx_int_port_wrkr lconf; + unsigned int nb_rx = 0; + struct rte_event ev; + uint32_t lcore_id; + int32_t socket_id; + int ret; + + /* Check if we have links registered for this lcore */ + if (nb_links == 0) { + /* No links registered - exit */ + return; + } + + /* We have valid links */ + + /* Get core ID */ + lcore_id = rte_lcore_id(); + + /* Get socket ID */ + socket_id = rte_lcore_to_socket_id(lcore_id); + + /* Save routing table */ + lconf.rt.rt4_ctx = socket_ctx[socket_id].rt_ip4; + lconf.rt.rt6_ctx = socket_ctx[socket_id].rt_ip6; + lconf.inbound.sp4_ctx = socket_ctx[socket_id].sp_ip4_in; + lconf.inbound.sp6_ctx = socket_ctx[socket_id].sp_ip6_in; + lconf.inbound.sa_ctx = socket_ctx[socket_id].sa_in; + lconf.inbound.session_pool = socket_ctx[socket_id].session_pool; + lconf.outbound.sp4_ctx = socket_ctx[socket_id].sp_ip4_out; + lconf.outbound.sp6_ctx = socket_ctx[socket_id].sp_ip6_out; + lconf.outbound.sa_ctx = socket_ctx[socket_id].sa_out; + lconf.outbound.session_pool = socket_ctx[socket_id].session_pool; + + RTE_LOG(INFO, IPSEC, + "Launching event mode worker (non-burst - Tx internal port - " + "app mode) on lcore %d\n", lcore_id); + + /* Check if it's single link */ + if (nb_links != 1) { + RTE_LOG(INFO, IPSEC, + "Multiple links not supported. 
Using first link\n"); + } + + RTE_LOG(INFO, IPSEC, " -- lcoreid=%u event_port_id=%u\n", lcore_id, + links[0].event_port_id); + + while (!force_quit) { + /* Read packet from event queues */ + nb_rx = rte_event_dequeue_burst(links[0].eventdev_id, + links[0].event_port_id, + &ev, /* events */ + 1, /* nb_events */ + 0 /* timeout_ticks */); + + if (nb_rx == 0) + continue; + + if (UNPROTECTED_PORT(ev.mbuf->port)) + ret = process_ipsec_ev_inbound(&lconf.inbound, + &lconf.rt, &ev); + else + ret = process_ipsec_ev_outbound(&lconf.outbound, + &lconf.rt, &ev); + if (ret != 1) + /* The pkt has been dropped */ + continue; + + /* + * Since tx internal port is available, events can be + * directly enqueued to the adapter and it would be + * internally submitted to the eth device. + */ + rte_event_eth_tx_adapter_enqueue(links[0].eventdev_id, + links[0].event_port_id, + &ev, /* events */ + 1, /* nb_events */ + 0 /* flags */); + } +} + static uint8_t ipsec_eventmode_populate_wrkr_params(struct eh_app_worker_params *wrkrs) { @@ -202,6 +610,14 @@ ipsec_eventmode_populate_wrkr_params(struct eh_app_worker_params *wrkrs) wrkr->cap.ipsec_mode = EH_IPSEC_MODE_TYPE_DRIVER; wrkr->worker_thread = ipsec_wrkr_non_burst_int_port_drv_mode; wrkr++; + nb_wrkr_param++; + + /* Non-burst - Tx internal port - app mode */ + wrkr->cap.burst = EH_RX_TYPE_NON_BURST; + wrkr->cap.tx_internal_port = EH_TX_TYPE_INTERNAL_PORT; + wrkr->cap.ipsec_mode = EH_IPSEC_MODE_TYPE_APP; + wrkr->worker_thread = ipsec_wrkr_non_burst_int_port_app_mode; + nb_wrkr_param++; return nb_wrkr_param; } diff --git a/examples/ipsec-secgw/ipsec_worker.h b/examples/ipsec-secgw/ipsec_worker.h new file mode 100644 index 0000000..1b18b3c --- /dev/null +++ b/examples/ipsec-secgw/ipsec_worker.h @@ -0,0 +1,39 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C) 2020 Marvell International Ltd. 
+ */ +#ifndef _IPSEC_WORKER_H_ +#define _IPSEC_WORKER_H_ + +#include "ipsec.h" + +enum pkt_type { + PKT_TYPE_PLAIN_IPV4 = 1, + PKT_TYPE_IPSEC_IPV4, + PKT_TYPE_PLAIN_IPV6, + PKT_TYPE_IPSEC_IPV6, + PKT_TYPE_INVALID +}; + +struct route_table { + struct rt_ctx *rt4_ctx; + struct rt_ctx *rt6_ctx; +}; + +/* + * Conf required by event mode worker with tx internal port + */ +struct lcore_conf_ev_tx_int_port_wrkr { + struct ipsec_ctx inbound; + struct ipsec_ctx outbound; + struct route_table rt; +} __rte_cache_aligned; + +/* TODO + * + * Move this function to ipsec_worker.c + */ +void ipsec_poll_mode_worker(void); + +int ipsec_launch_one_lcore(void *args); + +#endif /* _IPSEC_WORKER_H_ */ -- 2.7.4 ^ permalink raw reply [flat|nested] 147+ messages in thread
* Re: [dpdk-dev] [PATCH v2 11/12] examples/ipsec-secgw: add app mode worker 2020-01-20 13:45 ` [dpdk-dev] [PATCH v2 11/12] examples/ipsec-secgw: add app " Anoob Joseph @ 2020-01-29 15:34 ` Ananyev, Konstantin 2020-01-29 17:18 ` Anoob Joseph 0 siblings, 1 reply; 147+ messages in thread From: Ananyev, Konstantin @ 2020-01-29 15:34 UTC (permalink / raw) To: Anoob Joseph, Akhil Goyal, Nicolau, Radu, Thomas Monjalon Cc: Lukasz Bartosik, Jerin Jacob, Narayana Prasad, Ankur Dwivedi, Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru, dev > Add application inbound/outbound worker thread and > IPsec application processing code for event mode. > > Example ipsec-secgw command in app mode: > ipsec-secgw -w 0002:02:00.0,ipsec_in_max_spi=128 > -w 0002:03:00.0,ipsec_in_max_spi=128 -w 0002:0e:00.0 -w 0002:10:00.1 > --log-level=8 -c 0x1 -- -P -p 0x3 -u 0x1 --config "(1,0,0),(0,0,0)" > -f aes-gcm.cfg --transfer-mode event --schedule-type parallel > > Signed-off-by: Anoob Joseph <anoobj@marvell.com> > Signed-off-by: Ankur Dwivedi <adwivedi@marvell.com> > Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> > --- > examples/ipsec-secgw/ipsec-secgw.c | 45 +--- > examples/ipsec-secgw/ipsec-secgw.h | 69 ++++++ > examples/ipsec-secgw/ipsec.h | 22 -- > examples/ipsec-secgw/ipsec_worker.c | 418 +++++++++++++++++++++++++++++++++++- > examples/ipsec-secgw/ipsec_worker.h | 39 ++++ > 5 files changed, 533 insertions(+), 60 deletions(-) > create mode 100644 examples/ipsec-secgw/ipsec_worker.h > > diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c > index 86215fb..7d844bb 100644 > --- a/examples/ipsec-secgw/ipsec-secgw.c > +++ b/examples/ipsec-secgw/ipsec-secgw.c > @@ -50,12 +50,11 @@ > > #include "event_helper.h" > #include "ipsec.h" > +#include "ipsec_worker.h" > #include "parser.h" > > volatile bool force_quit; > > -#define RTE_LOGTYPE_IPSEC RTE_LOGTYPE_USER1 > - > #define MAX_JUMBO_PKT_LEN 9600 > > #define MEMPOOL_CACHE_SIZE 256 > @@ -85,29 +84,6 @@ 
volatile bool force_quit; > static uint16_t nb_rxd = IPSEC_SECGW_RX_DESC_DEFAULT; > static uint16_t nb_txd = IPSEC_SECGW_TX_DESC_DEFAULT; > > -#if RTE_BYTE_ORDER != RTE_LITTLE_ENDIAN > -#define __BYTES_TO_UINT64(a, b, c, d, e, f, g, h) \ > - (((uint64_t)((a) & 0xff) << 56) | \ > - ((uint64_t)((b) & 0xff) << 48) | \ > - ((uint64_t)((c) & 0xff) << 40) | \ > - ((uint64_t)((d) & 0xff) << 32) | \ > - ((uint64_t)((e) & 0xff) << 24) | \ > - ((uint64_t)((f) & 0xff) << 16) | \ > - ((uint64_t)((g) & 0xff) << 8) | \ > - ((uint64_t)(h) & 0xff)) > -#else > -#define __BYTES_TO_UINT64(a, b, c, d, e, f, g, h) \ > - (((uint64_t)((h) & 0xff) << 56) | \ > - ((uint64_t)((g) & 0xff) << 48) | \ > - ((uint64_t)((f) & 0xff) << 40) | \ > - ((uint64_t)((e) & 0xff) << 32) | \ > - ((uint64_t)((d) & 0xff) << 24) | \ > - ((uint64_t)((c) & 0xff) << 16) | \ > - ((uint64_t)((b) & 0xff) << 8) | \ > - ((uint64_t)(a) & 0xff)) > -#endif > -#define ETHADDR(a, b, c, d, e, f) (__BYTES_TO_UINT64(a, b, c, d, e, f, 0, 0)) > - > #define ETHADDR_TO_UINT64(addr) __BYTES_TO_UINT64( \ > (addr)->addr_bytes[0], (addr)->addr_bytes[1], \ > (addr)->addr_bytes[2], (addr)->addr_bytes[3], \ > @@ -119,18 +95,6 @@ static uint16_t nb_txd = IPSEC_SECGW_TX_DESC_DEFAULT; > > #define MTU_TO_FRAMELEN(x) ((x) + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN) > > -/* port/source ethernet addr and destination ethernet addr */ > -struct ethaddr_info { > - uint64_t src, dst; > -}; > - > -struct ethaddr_info ethaddr_tbl[RTE_MAX_ETHPORTS] = { > - { 0, ETHADDR(0x00, 0x16, 0x3e, 0x7e, 0x94, 0x9a) }, > - { 0, ETHADDR(0x00, 0x16, 0x3e, 0x22, 0xa1, 0xd9) }, > - { 0, ETHADDR(0x00, 0x16, 0x3e, 0x08, 0x69, 0x26) }, > - { 0, ETHADDR(0x00, 0x16, 0x3e, 0x49, 0x9e, 0xdd) } > -}; > - > struct flow_info flow_info_tbl[RTE_MAX_ETHPORTS]; > > #define CMD_LINE_OPT_CONFIG "config" > @@ -183,6 +147,13 @@ static const struct option lgopts[] = { > {NULL, 0, 0, 0} > }; > > +struct ethaddr_info ethaddr_tbl[RTE_MAX_ETHPORTS] = { > + { 0, ETHADDR(0x00, 0x16, 0x3e, 
0x7e, 0x94, 0x9a) }, > + { 0, ETHADDR(0x00, 0x16, 0x3e, 0x22, 0xa1, 0xd9) }, > + { 0, ETHADDR(0x00, 0x16, 0x3e, 0x08, 0x69, 0x26) }, > + { 0, ETHADDR(0x00, 0x16, 0x3e, 0x49, 0x9e, 0xdd) } > +}; > + > /* mask of enabled ports */ > static uint32_t enabled_port_mask; > static uint64_t enabled_cryptodev_mask = UINT64_MAX; > diff --git a/examples/ipsec-secgw/ipsec-secgw.h b/examples/ipsec-secgw/ipsec-secgw.h > index 5b19e29..926ce5d 100644 > --- a/examples/ipsec-secgw/ipsec-secgw.h > +++ b/examples/ipsec-secgw/ipsec-secgw.h > @@ -4,10 +4,79 @@ > #ifndef _IPSEC_SECGW_H_ > #define _IPSEC_SECGW_H_ > > +#include <rte_hash.h> > + > +#define NB_SOCKETS 4 > + > +#define MAX_PKT_BURST 32 > + > +#define RTE_LOGTYPE_IPSEC RTE_LOGTYPE_USER1 > + > #define NB_SOCKETS 4 Duplicate, NB_SOCKETS already defined, see above. > > #define UNPROTECTED_PORT(portid) (unprotected_port_mask & (1 << portid)) As you are moving it anyway probably a good time to put portid param in (), or even make it a static inline function. 
> > +#if RTE_BYTE_ORDER != RTE_LITTLE_ENDIAN > +#define __BYTES_TO_UINT64(a, b, c, d, e, f, g, h) \ > + (((uint64_t)((a) & 0xff) << 56) | \ > + ((uint64_t)((b) & 0xff) << 48) | \ > + ((uint64_t)((c) & 0xff) << 40) | \ > + ((uint64_t)((d) & 0xff) << 32) | \ > + ((uint64_t)((e) & 0xff) << 24) | \ > + ((uint64_t)((f) & 0xff) << 16) | \ > + ((uint64_t)((g) & 0xff) << 8) | \ > + ((uint64_t)(h) & 0xff)) > +#else > +#define __BYTES_TO_UINT64(a, b, c, d, e, f, g, h) \ > + (((uint64_t)((h) & 0xff) << 56) | \ > + ((uint64_t)((g) & 0xff) << 48) | \ > + ((uint64_t)((f) & 0xff) << 40) | \ > + ((uint64_t)((e) & 0xff) << 32) | \ > + ((uint64_t)((d) & 0xff) << 24) | \ > + ((uint64_t)((c) & 0xff) << 16) | \ > + ((uint64_t)((b) & 0xff) << 8) | \ > + ((uint64_t)(a) & 0xff)) > +#endif > + > +#define ETHADDR(a, b, c, d, e, f) (__BYTES_TO_UINT64(a, b, c, d, e, f, 0, 0)) > + > +struct traffic_type { > + const uint8_t *data[MAX_PKT_BURST * 2]; > + struct rte_mbuf *pkts[MAX_PKT_BURST * 2]; > + void *saptr[MAX_PKT_BURST * 2]; > + uint32_t res[MAX_PKT_BURST * 2]; > + uint32_t num; > +}; > + > +struct ipsec_traffic { > + struct traffic_type ipsec; > + struct traffic_type ip4; > + struct traffic_type ip6; > +}; > + > +/* Fields optimized for devices without burst */ > +struct traffic_type_nb { > + const uint8_t *data; > + struct rte_mbuf *pkt; > + uint32_t res; > + uint32_t num; > +}; > + > +struct ipsec_traffic_nb { > + struct traffic_type_nb ipsec; > + struct traffic_type_nb ip4; > + struct traffic_type_nb ip6; > +}; > + > +/* port/source ethernet addr and destination ethernet addr */ > +struct ethaddr_info { > + uint64_t src, dst; > +}; > + > +struct ethaddr_info ethaddr_tbl[RTE_MAX_ETHPORTS]; > + > +/* TODO: All var definitions need to be part of a .c file */ Seems like that TODO wasn't done :) Probably a good thing to add extern for all global vars declarations here, and keep actual definitions in ipsec-secgw.c. 
Same story for: +struct socket_ctx socket_ctx[NB_SOCKETS]; in ipsec.h > + > /* Port mask to identify the unprotected ports */ > uint32_t unprotected_port_mask; > ^ permalink raw reply [flat|nested] 147+ messages in thread
* Re: [dpdk-dev] [PATCH v2 11/12] examples/ipsec-secgw: add app mode worker 2020-01-29 15:34 ` Ananyev, Konstantin @ 2020-01-29 17:18 ` Anoob Joseph 0 siblings, 0 replies; 147+ messages in thread From: Anoob Joseph @ 2020-01-29 17:18 UTC (permalink / raw) To: Ananyev, Konstantin, Akhil Goyal, Nicolau, Radu, Thomas Monjalon Cc: Lukas Bartosik, Jerin Jacob Kollanukkaran, Narayana Prasad Raju Athreya, Ankur Dwivedi, Archana Muniganti, Tejasree Kondoj, Vamsi Krishna Attunuru, dev Hi Konstantin, Please see inline. Thanks, Anoob > -----Original Message----- > From: Ananyev, Konstantin <konstantin.ananyev@intel.com> > Sent: Wednesday, January 29, 2020 9:05 PM > To: Anoob Joseph <anoobj@marvell.com>; Akhil Goyal <akhil.goyal@nxp.com>; > Nicolau, Radu <radu.nicolau@intel.com>; Thomas Monjalon > <thomas@monjalon.net> > Cc: Lukas Bartosik <lbartosik@marvell.com>; Jerin Jacob Kollanukkaran > <jerinj@marvell.com>; Narayana Prasad Raju Athreya > <pathreya@marvell.com>; Ankur Dwivedi <adwivedi@marvell.com>; Archana > Muniganti <marchana@marvell.com>; Tejasree Kondoj > <ktejasree@marvell.com>; Vamsi Krishna Attunuru <vattunuru@marvell.com>; > dev@dpdk.org > Subject: [EXT] RE: [PATCH v2 11/12] examples/ipsec-secgw: add app mode > worker > > External Email > > ---------------------------------------------------------------------- > > Add application inbound/outbound worker thread and IPsec application > > processing code for event mode. 
> > > > Exampple ipsec-secgw command in app mode: > > ipsec-secgw -w 0002:02:00.0,ipsec_in_max_spi=128 -w > > 0002:03:00.0,ipsec_in_max_spi=128 -w 0002:0e:00.0 -w 0002:10:00.1 > > --log-level=8 -c 0x1 -- -P -p 0x3 -u 0x1 --config "(1,0,0),(0,0,0)" > > -f aes-gcm.cfg --transfer-mode event --schedule-type parallel > > > > Signed-off-by: Anoob Joseph <anoobj@marvell.com> > > Signed-off-by: Ankur Dwivedi <adwivedi@marvell.com> > > Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> > > --- > > examples/ipsec-secgw/ipsec-secgw.c | 45 +--- > > examples/ipsec-secgw/ipsec-secgw.h | 69 ++++++ > > examples/ipsec-secgw/ipsec.h | 22 -- > > examples/ipsec-secgw/ipsec_worker.c | 418 > > +++++++++++++++++++++++++++++++++++- > > examples/ipsec-secgw/ipsec_worker.h | 39 ++++ > > 5 files changed, 533 insertions(+), 60 deletions(-) create mode > > 100644 examples/ipsec-secgw/ipsec_worker.h > > > > diff --git a/examples/ipsec-secgw/ipsec-secgw.c > > b/examples/ipsec-secgw/ipsec-secgw.c > > index 86215fb..7d844bb 100644 > > --- a/examples/ipsec-secgw/ipsec-secgw.c > > +++ b/examples/ipsec-secgw/ipsec-secgw.c > > @@ -50,12 +50,11 @@ > > > > #include "event_helper.h" > > #include "ipsec.h" > > +#include "ipsec_worker.h" > > #include "parser.h" > > > > volatile bool force_quit; > > > > -#define RTE_LOGTYPE_IPSEC RTE_LOGTYPE_USER1 > > - > > #define MAX_JUMBO_PKT_LEN 9600 > > > > #define MEMPOOL_CACHE_SIZE 256 > > @@ -85,29 +84,6 @@ volatile bool force_quit; static uint16_t nb_rxd = > > IPSEC_SECGW_RX_DESC_DEFAULT; static uint16_t nb_txd = > > IPSEC_SECGW_TX_DESC_DEFAULT; > > > > -#if RTE_BYTE_ORDER != RTE_LITTLE_ENDIAN -#define > __BYTES_TO_UINT64(a, > > b, c, d, e, f, g, h) \ > > - (((uint64_t)((a) & 0xff) << 56) | \ > > - ((uint64_t)((b) & 0xff) << 48) | \ > > - ((uint64_t)((c) & 0xff) << 40) | \ > > - ((uint64_t)((d) & 0xff) << 32) | \ > > - ((uint64_t)((e) & 0xff) << 24) | \ > > - ((uint64_t)((f) & 0xff) << 16) | \ > > - ((uint64_t)((g) & 0xff) << 8) | \ > > - ((uint64_t)(h) & 
(Note: "Exampple" in the quoted commit message above is a typo for "Example", carried over from the original patch.)
0xff)) > > -#else > > -#define __BYTES_TO_UINT64(a, b, c, d, e, f, g, h) \ > > - (((uint64_t)((h) & 0xff) << 56) | \ > > - ((uint64_t)((g) & 0xff) << 48) | \ > > - ((uint64_t)((f) & 0xff) << 40) | \ > > - ((uint64_t)((e) & 0xff) << 32) | \ > > - ((uint64_t)((d) & 0xff) << 24) | \ > > - ((uint64_t)((c) & 0xff) << 16) | \ > > - ((uint64_t)((b) & 0xff) << 8) | \ > > - ((uint64_t)(a) & 0xff)) > > -#endif > > -#define ETHADDR(a, b, c, d, e, f) (__BYTES_TO_UINT64(a, b, c, d, e, > > f, 0, 0)) > > - > > #define ETHADDR_TO_UINT64(addr) __BYTES_TO_UINT64( \ > > (addr)->addr_bytes[0], (addr)->addr_bytes[1], \ > > (addr)->addr_bytes[2], (addr)->addr_bytes[3], \ @@ -119,18 > +95,6 @@ > > static uint16_t nb_txd = IPSEC_SECGW_TX_DESC_DEFAULT; > > > > #define MTU_TO_FRAMELEN(x) ((x) + RTE_ETHER_HDR_LEN + > RTE_ETHER_CRC_LEN) > > > > -/* port/source ethernet addr and destination ethernet addr */ -struct > > ethaddr_info { > > - uint64_t src, dst; > > -}; > > - > > -struct ethaddr_info ethaddr_tbl[RTE_MAX_ETHPORTS] = { > > - { 0, ETHADDR(0x00, 0x16, 0x3e, 0x7e, 0x94, 0x9a) }, > > - { 0, ETHADDR(0x00, 0x16, 0x3e, 0x22, 0xa1, 0xd9) }, > > - { 0, ETHADDR(0x00, 0x16, 0x3e, 0x08, 0x69, 0x26) }, > > - { 0, ETHADDR(0x00, 0x16, 0x3e, 0x49, 0x9e, 0xdd) } > > -}; > > - > > struct flow_info flow_info_tbl[RTE_MAX_ETHPORTS]; > > > > #define CMD_LINE_OPT_CONFIG "config" > > @@ -183,6 +147,13 @@ static const struct option lgopts[] = { > > {NULL, 0, 0, 0} > > }; > > > > +struct ethaddr_info ethaddr_tbl[RTE_MAX_ETHPORTS] = { > > + { 0, ETHADDR(0x00, 0x16, 0x3e, 0x7e, 0x94, 0x9a) }, > > + { 0, ETHADDR(0x00, 0x16, 0x3e, 0x22, 0xa1, 0xd9) }, > > + { 0, ETHADDR(0x00, 0x16, 0x3e, 0x08, 0x69, 0x26) }, > > + { 0, ETHADDR(0x00, 0x16, 0x3e, 0x49, 0x9e, 0xdd) } }; > > + > > /* mask of enabled ports */ > > static uint32_t enabled_port_mask; > > static uint64_t enabled_cryptodev_mask = UINT64_MAX; diff --git > > a/examples/ipsec-secgw/ipsec-secgw.h > > b/examples/ipsec-secgw/ipsec-secgw.h > > index 
5b19e29..926ce5d 100644 > > --- a/examples/ipsec-secgw/ipsec-secgw.h > > +++ b/examples/ipsec-secgw/ipsec-secgw.h > > @@ -4,10 +4,79 @@ > > #ifndef _IPSEC_SECGW_H_ > > #define _IPSEC_SECGW_H_ > > > > +#include <rte_hash.h> > > + > > +#define NB_SOCKETS 4 > > + > > +#define MAX_PKT_BURST 32 > > + > > +#define RTE_LOGTYPE_IPSEC RTE_LOGTYPE_USER1 > > + > > #define NB_SOCKETS 4 > > Duplicate, NB_SOCKETS already defined, see above. [Anoob] Good catch. Will fix in v3. > > > > > #define UNPROTECTED_PORT(portid) (unprotected_port_mask & (1 << > > portid)) > > As you are moving it anyway probably a good time to put portid param in (), or > even make it a static inline function. [Anoob] I would prefer a static inline function. Shall I make this change in v3? > > > > > +#if RTE_BYTE_ORDER != RTE_LITTLE_ENDIAN #define > __BYTES_TO_UINT64(a, > > +b, c, d, e, f, g, h) \ > > + (((uint64_t)((a) & 0xff) << 56) | \ > > + ((uint64_t)((b) & 0xff) << 48) | \ > > + ((uint64_t)((c) & 0xff) << 40) | \ > > + ((uint64_t)((d) & 0xff) << 32) | \ > > + ((uint64_t)((e) & 0xff) << 24) | \ > > + ((uint64_t)((f) & 0xff) << 16) | \ > > + ((uint64_t)((g) & 0xff) << 8) | \ > > + ((uint64_t)(h) & 0xff)) > > +#else > > +#define __BYTES_TO_UINT64(a, b, c, d, e, f, g, h) \ > > + (((uint64_t)((h) & 0xff) << 56) | \ > > + ((uint64_t)((g) & 0xff) << 48) | \ > > + ((uint64_t)((f) & 0xff) << 40) | \ > > + ((uint64_t)((e) & 0xff) << 32) | \ > > + ((uint64_t)((d) & 0xff) << 24) | \ > > + ((uint64_t)((c) & 0xff) << 16) | \ > > + ((uint64_t)((b) & 0xff) << 8) | \ > > + ((uint64_t)(a) & 0xff)) > > +#endif > > + > > +#define ETHADDR(a, b, c, d, e, f) (__BYTES_TO_UINT64(a, b, c, d, e, > > +f, 0, 0)) > > + > > +struct traffic_type { > > + const uint8_t *data[MAX_PKT_BURST * 2]; > > + struct rte_mbuf *pkts[MAX_PKT_BURST * 2]; > > + void *saptr[MAX_PKT_BURST * 2]; > > + uint32_t res[MAX_PKT_BURST * 2]; > > + uint32_t num; > > +}; > > + > > +struct ipsec_traffic { > > + struct traffic_type ipsec; > > + struct 
traffic_type ip4; > > + struct traffic_type ip6; > > +}; > > + > > +/* Fields optimized for devices without burst */ struct > > +traffic_type_nb { > > + const uint8_t *data; > > + struct rte_mbuf *pkt; > > + uint32_t res; > > + uint32_t num; > > +}; > > + > > +struct ipsec_traffic_nb { > > + struct traffic_type_nb ipsec; > > + struct traffic_type_nb ip4; > > + struct traffic_type_nb ip6; > > +}; > > + > > +/* port/source ethernet addr and destination ethernet addr */ struct > > +ethaddr_info { > > + uint64_t src, dst; > > +}; > > + > > +struct ethaddr_info ethaddr_tbl[RTE_MAX_ETHPORTS]; > > + > > +/* TODO: All var definitions need to be part of a .c file */ > > Seems like that TODO wasn't done :) > Probably a good thing to add extern for all global vars declarations here, and > keep actual definitions in ipsec-secgw.c. > Same story for: > +struct socket_ctx socket_ctx[NB_SOCKETS]; > in ipsec.h [Anoob] Will do in v3. > > > + > > /* Port mask to identify the unprotected ports */ uint32_t > > unprotected_port_mask; > > ^ permalink raw reply [flat|nested] 147+ messages in thread
* [dpdk-dev] [PATCH v2 12/12] examples/ipsec-secgw: add cmd line option for bufs 2020-01-20 13:45 ` [dpdk-dev] [PATCH v2 00/12] add eventmode to ipsec-secgw Anoob Joseph ` (10 preceding siblings ...) 2020-01-20 13:45 ` [dpdk-dev] [PATCH v2 11/12] examples/ipsec-secgw: add app " Anoob Joseph @ 2020-01-20 13:45 ` Anoob Joseph 2020-01-29 14:40 ` Ananyev, Konstantin 2020-01-28 5:02 ` [dpdk-dev] [PATCH v2 00/12] add eventmode to ipsec-secgw Anoob Joseph 2020-02-04 13:58 ` [dpdk-dev] [PATCH v3 00/13] " Lukasz Bartosik 13 siblings, 1 reply; 147+ messages in thread From: Anoob Joseph @ 2020-01-20 13:45 UTC (permalink / raw) To: Akhil Goyal, Radu Nicolau, Thomas Monjalon Cc: Lukasz Bartosik, Jerin Jacob, Narayana Prasad, Ankur Dwivedi, Anoob Joseph, Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru, Konstantin Ananyev, dev From: Lukasz Bartosik <lbartosik@marvell.com> Add command line option -s which can be used to configure number of buffers in a pool. Default number of buffers is 8192. Signed-off-by: Anoob Joseph <anoobj@marvell.com> Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> --- examples/ipsec-secgw/ipsec-secgw.c | 23 +++++++++++++++++++---- 1 file changed, 19 insertions(+), 4 deletions(-) diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c index 7d844bb..a67ea0a 100644 --- a/examples/ipsec-secgw/ipsec-secgw.c +++ b/examples/ipsec-secgw/ipsec-secgw.c @@ -59,8 +59,6 @@ volatile bool force_quit; #define MEMPOOL_CACHE_SIZE 256 -#define NB_MBUF (32000) - #define CDEV_QUEUE_DESC 2048 #define CDEV_MAP_ENTRIES 16384 #define CDEV_MP_NB_OBJS 1024 @@ -162,6 +160,7 @@ static int32_t numa_on = 1; /**< NUMA is enabled by default. */ static uint32_t nb_lcores; static uint32_t single_sa; static uint32_t schedule_type; +static uint32_t nb_bufs_in_pool = 8192; /* * RX/TX HW offload capabilities to enable/use on ethernet ports. 
@@ -1264,6 +1263,7 @@ print_usage(const char *prgname) " [-w REPLAY_WINDOW_SIZE]" " [-e]" " [-a]" + " [-s NUMBER_OF_MBUFS_IN_PKT_POOL]" " -f CONFIG_FILE" " --config (port,queue,lcore)[,(port,queue,lcore)]" " [--single-sa SAIDX]" @@ -1285,6 +1285,7 @@ print_usage(const char *prgname) " size for each SA\n" " -e enables ESN\n" " -a enables SA SQN atomic behaviour\n" + " -s number of mbufs in packet pool (default 8192)\n" " -f CONFIG_FILE: Configuration file\n" " --config (port,queue,lcore): Rx queue configuration\n" " --single-sa SAIDX: In poll mode use single SA index for\n" @@ -1484,7 +1485,7 @@ parse_args(int32_t argc, char **argv, struct eh_conf *eh_conf) argvopt = argv; - while ((opt = getopt_long(argc, argvopt, "aelp:Pu:f:j:w:", + while ((opt = getopt_long(argc, argvopt, "aelp:Pu:f:j:w:s:", lgopts, &option_index)) != EOF) { switch (opt) { @@ -1518,6 +1519,19 @@ parse_args(int32_t argc, char **argv, struct eh_conf *eh_conf) cfgfile = optarg; f_present = 1; break; + + case 's': + ret = parse_decimal(optarg); + if (ret < 0) { + printf("Invalid number of buffers in a pool: " + "%s\n", optarg); + print_usage(prgname); + return -1; + } + + nb_bufs_in_pool = ret; + break; + case 'j': ret = parse_decimal(optarg); if (ret < RTE_MBUF_DEFAULT_BUF_SIZE || @@ -2753,11 +2767,12 @@ main(int32_t argc, char **argv) if (socket_ctx[socket_id].mbuf_pool) continue; - pool_init(&socket_ctx[socket_id], socket_id, NB_MBUF); + pool_init(&socket_ctx[socket_id], socket_id, nb_bufs_in_pool); session_pool_init(&socket_ctx[socket_id], socket_id, sess_sz); session_priv_pool_init(&socket_ctx[socket_id], socket_id, sess_sz); } + printf("Number of mbufs in packet pool %d\n", nb_bufs_in_pool); RTE_ETH_FOREACH_DEV(portid) { if ((enabled_port_mask & (1 << portid)) == 0) -- 2.7.4 ^ permalink raw reply [flat|nested] 147+ messages in thread
* Re: [dpdk-dev] [PATCH v2 12/12] examples/ipsec-secgw: add cmd line option for bufs 2020-01-20 13:45 ` [dpdk-dev] [PATCH v2 12/12] examples/ipsec-secgw: add cmd line option for bufs Anoob Joseph @ 2020-01-29 14:40 ` Ananyev, Konstantin 2020-01-29 17:14 ` Anoob Joseph 0 siblings, 1 reply; 147+ messages in thread From: Ananyev, Konstantin @ 2020-01-29 14:40 UTC (permalink / raw) To: Anoob Joseph, Akhil Goyal, Nicolau, Radu, Thomas Monjalon Cc: Lukasz Bartosik, Jerin Jacob, Narayana Prasad, Ankur Dwivedi, Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru, dev > > From: Lukasz Bartosik <lbartosik@marvell.com> > > Add command line option -s which can be used to configure number > of buffers in a pool. Default number of buffers is 8192. > > Signed-off-by: Anoob Joseph <anoobj@marvell.com> > Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> > --- > examples/ipsec-secgw/ipsec-secgw.c | 23 +++++++++++++++++++---- > 1 file changed, 19 insertions(+), 4 deletions(-) > > diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c > index 7d844bb..a67ea0a 100644 > --- a/examples/ipsec-secgw/ipsec-secgw.c > +++ b/examples/ipsec-secgw/ipsec-secgw.c > @@ -59,8 +59,6 @@ volatile bool force_quit; > > #define MEMPOOL_CACHE_SIZE 256 > > -#define NB_MBUF (32000) > - > #define CDEV_QUEUE_DESC 2048 > #define CDEV_MAP_ENTRIES 16384 > #define CDEV_MP_NB_OBJS 1024 > @@ -162,6 +160,7 @@ static int32_t numa_on = 1; /**< NUMA is enabled by default. */ > static uint32_t nb_lcores; > static uint32_t single_sa; > static uint32_t schedule_type; > +static uint32_t nb_bufs_in_pool = 8192; I still think it is not a good idea to change default number of mbufs. 8K is not that much: 1 core with 4 ports, or 1 port over 4 cores, and user might start to see unexpected failures. Now you added an option to allow user define number of mbufs in the app, which is a good thing, but default one I think should remain the same (to avoid any unexpected failures). 
Konstantin > > /* > * RX/TX HW offload capabilities to enable/use on ethernet ports. > @@ -1264,6 +1263,7 @@ print_usage(const char *prgname) > " [-w REPLAY_WINDOW_SIZE]" > " [-e]" > " [-a]" > + " [-s NUMBER_OF_MBUFS_IN_PKT_POOL]" > " -f CONFIG_FILE" > " --config (port,queue,lcore)[,(port,queue,lcore)]" > " [--single-sa SAIDX]" > @@ -1285,6 +1285,7 @@ print_usage(const char *prgname) > " size for each SA\n" > " -e enables ESN\n" > " -a enables SA SQN atomic behaviour\n" > + " -s number of mbufs in packet pool (default 8192)\n" > " -f CONFIG_FILE: Configuration file\n" > " --config (port,queue,lcore): Rx queue configuration\n" > " --single-sa SAIDX: In poll mode use single SA index for\n" > @@ -1484,7 +1485,7 @@ parse_args(int32_t argc, char **argv, struct eh_conf *eh_conf) > > argvopt = argv; > > - while ((opt = getopt_long(argc, argvopt, "aelp:Pu:f:j:w:", > + while ((opt = getopt_long(argc, argvopt, "aelp:Pu:f:j:w:s:", > lgopts, &option_index)) != EOF) { > > switch (opt) { > @@ -1518,6 +1519,19 @@ parse_args(int32_t argc, char **argv, struct eh_conf *eh_conf) > cfgfile = optarg; > f_present = 1; > break; > + > + case 's': > + ret = parse_decimal(optarg); > + if (ret < 0) { > + printf("Invalid number of buffers in a pool: " > + "%s\n", optarg); > + print_usage(prgname); > + return -1; > + } > + > + nb_bufs_in_pool = ret; > + break; > + > case 'j': > ret = parse_decimal(optarg); > if (ret < RTE_MBUF_DEFAULT_BUF_SIZE || > @@ -2753,11 +2767,12 @@ main(int32_t argc, char **argv) > if (socket_ctx[socket_id].mbuf_pool) > continue; > > - pool_init(&socket_ctx[socket_id], socket_id, NB_MBUF); > + pool_init(&socket_ctx[socket_id], socket_id, nb_bufs_in_pool); > session_pool_init(&socket_ctx[socket_id], socket_id, sess_sz); > session_priv_pool_init(&socket_ctx[socket_id], socket_id, > sess_sz); > } > + printf("Number of mbufs in packet pool %d\n", nb_bufs_in_pool); > > RTE_ETH_FOREACH_DEV(portid) { > if ((enabled_port_mask & (1 << portid)) == 0) > -- > 2.7.4 ^ permalink raw 
reply [flat|nested] 147+ messages in thread
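[Editor's note] Konstantin's sizing concern can be made concrete. Several DPDK example applications (l2fwd-event, for instance) size the packet pool from port/lcore counts and descriptor ring depths rather than a fixed constant. A hedged sketch of that style of calculation — the constants below are illustrative, not ipsec-secgw's actual values:

```c
#include <stdint.h>

#define MAX_PKT_BURST 32
#define MEMPOOL_CACHE_SIZE 256

/* Worst-case mbuf demand: every Rx/Tx descriptor of every port may
 * hold an mbuf, and every lcore may hold a burst in flight plus a
 * full mempool cache.  Clamp to a floor so tiny configs still work. */
static uint32_t
calc_nb_mbufs(uint16_t nb_ports, uint16_t nb_lcores,
	      uint16_t nb_rxd, uint16_t nb_txd, uint32_t floor)
{
	uint32_t nb;

	nb = (uint32_t)nb_ports * (nb_rxd + nb_txd + MAX_PKT_BURST) +
	     (uint32_t)nb_lcores * (MAX_PKT_BURST + MEMPOOL_CACHE_SIZE);

	return nb > floor ? nb : floor;
}
```

With 4 ports, 1 lcore and 1024-deep Rx/Tx rings this already comes to 4*(1024+1024+32) + 1*(32+256) = 8608 mbufs — above the proposed 8192 default, which is exactly the failure mode described in the review.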
* Re: [dpdk-dev] [PATCH v2 12/12] examples/ipsec-secgw: add cmd line option for bufs 2020-01-29 14:40 ` Ananyev, Konstantin @ 2020-01-29 17:14 ` Anoob Joseph 0 siblings, 0 replies; 147+ messages in thread From: Anoob Joseph @ 2020-01-29 17:14 UTC (permalink / raw) To: Ananyev, Konstantin, Akhil Goyal, Nicolau, Radu, Thomas Monjalon Cc: Lukas Bartosik, Jerin Jacob Kollanukkaran, Narayana Prasad Raju Athreya, Ankur Dwivedi, Archana Muniganti, Tejasree Kondoj, Vamsi Krishna Attunuru, dev Hi Konstantin, Please see inline. Thanks, Anoob > -----Original Message----- > From: Ananyev, Konstantin <konstantin.ananyev@intel.com> > Sent: Wednesday, January 29, 2020 8:11 PM > To: Anoob Joseph <anoobj@marvell.com>; Akhil Goyal <akhil.goyal@nxp.com>; > Nicolau, Radu <radu.nicolau@intel.com>; Thomas Monjalon > <thomas@monjalon.net> > Cc: Lukas Bartosik <lbartosik@marvell.com>; Jerin Jacob Kollanukkaran > <jerinj@marvell.com>; Narayana Prasad Raju Athreya > <pathreya@marvell.com>; Ankur Dwivedi <adwivedi@marvell.com>; Archana > Muniganti <marchana@marvell.com>; Tejasree Kondoj > <ktejasree@marvell.com>; Vamsi Krishna Attunuru <vattunuru@marvell.com>; > dev@dpdk.org > Subject: [EXT] RE: [PATCH v2 12/12] examples/ipsec-secgw: add cmd line option > for bufs > > External Email > > ---------------------------------------------------------------------- > > > > > From: Lukasz Bartosik <lbartosik@marvell.com> > > > > Add command line option -s which can be used to configure number of > > buffers in a pool. Default number of buffers is 8192. 
> > > > Signed-off-by: Anoob Joseph <anoobj@marvell.com> > > Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> > > --- > > examples/ipsec-secgw/ipsec-secgw.c | 23 +++++++++++++++++++---- > > 1 file changed, 19 insertions(+), 4 deletions(-) > > > > diff --git a/examples/ipsec-secgw/ipsec-secgw.c > > b/examples/ipsec-secgw/ipsec-secgw.c > > index 7d844bb..a67ea0a 100644 > > --- a/examples/ipsec-secgw/ipsec-secgw.c > > +++ b/examples/ipsec-secgw/ipsec-secgw.c > > @@ -59,8 +59,6 @@ volatile bool force_quit; > > > > #define MEMPOOL_CACHE_SIZE 256 > > > > -#define NB_MBUF (32000) > > - > > #define CDEV_QUEUE_DESC 2048 > > #define CDEV_MAP_ENTRIES 16384 > > #define CDEV_MP_NB_OBJS 1024 > > @@ -162,6 +160,7 @@ static int32_t numa_on = 1; /**< NUMA is enabled > > by default. */ static uint32_t nb_lcores; static uint32_t single_sa; > > static uint32_t schedule_type; > > +static uint32_t nb_bufs_in_pool = 8192; > > I still think it is not a good idea to change default number of mbufs. > 8K is not that much: 1 core with 4 ports, or 1 port over 4 cores, and user might > start to see unexpected failures. > Now you added an option to allow user define number of mbufs in the app, > which is a good thing, but default one I think should remain the same (to avoid > any unexpected failures). > Konstantin [Anoob] No disagreement. I had submitted this patch as is since I had some other ideas which could solve this better. I had mentioned this in the cover-letter. Deferred to v3: * The final patch updates the hardcoded number of buffers in a pool. Also, there was a discussion on the update of number of qp. Both the above can be handled properly, if we can remove the logic which limits one core to only use one crypto qp. If we can allow one qp per lcore_param, every eth queue can have it's own crypto qp and that would solve the requirements with OCTEON TX2 inline ipsec support as well. 
http://patches.dpdk.org/patch/64408/ The above patch requires a minor rework and I would be submitting a v2 soon. But the idea would be same. Please take a look at it and share your thoughts. Please do wait for v2 before running on h/w, though 😊. > > > > > > /* > > * RX/TX HW offload capabilities to enable/use on ethernet ports. > > @@ -1264,6 +1263,7 @@ print_usage(const char *prgname) > > " [-w REPLAY_WINDOW_SIZE]" > > " [-e]" > > " [-a]" > > + " [-s NUMBER_OF_MBUFS_IN_PKT_POOL]" > > " -f CONFIG_FILE" > > " --config (port,queue,lcore)[,(port,queue,lcore)]" > > " [--single-sa SAIDX]" > > @@ -1285,6 +1285,7 @@ print_usage(const char *prgname) > > " size for each SA\n" > > " -e enables ESN\n" > > " -a enables SA SQN atomic behaviour\n" > > + " -s number of mbufs in packet pool (default 8192)\n" > > " -f CONFIG_FILE: Configuration file\n" > > " --config (port,queue,lcore): Rx queue configuration\n" > > " --single-sa SAIDX: In poll mode use single SA index for\n" > > @@ -1484,7 +1485,7 @@ parse_args(int32_t argc, char **argv, struct > > eh_conf *eh_conf) > > > > argvopt = argv; > > > > - while ((opt = getopt_long(argc, argvopt, "aelp:Pu:f:j:w:", > > + while ((opt = getopt_long(argc, argvopt, "aelp:Pu:f:j:w:s:", > > lgopts, &option_index)) != EOF) { > > > > switch (opt) { > > @@ -1518,6 +1519,19 @@ parse_args(int32_t argc, char **argv, struct > eh_conf *eh_conf) > > cfgfile = optarg; > > f_present = 1; > > break; > > + > > + case 's': > > + ret = parse_decimal(optarg); > > + if (ret < 0) { > > + printf("Invalid number of buffers in a pool: " > > + "%s\n", optarg); > > + print_usage(prgname); > > + return -1; > > + } > > + > > + nb_bufs_in_pool = ret; > > + break; > > + > > case 'j': > > ret = parse_decimal(optarg); > > if (ret < RTE_MBUF_DEFAULT_BUF_SIZE || @@ - > 2753,11 +2767,12 @@ > > main(int32_t argc, char **argv) > > if (socket_ctx[socket_id].mbuf_pool) > > continue; > > > > - pool_init(&socket_ctx[socket_id], socket_id, NB_MBUF); > > + 
pool_init(&socket_ctx[socket_id], socket_id, nb_bufs_in_pool); > > session_pool_init(&socket_ctx[socket_id], socket_id, sess_sz); > > session_priv_pool_init(&socket_ctx[socket_id], socket_id, > > sess_sz); > > } > > + printf("Number of mbufs in packet pool %d\n", nb_bufs_in_pool); > > > > RTE_ETH_FOREACH_DEV(portid) { > > if ((enabled_port_mask & (1 << portid)) == 0) > > -- > > 2.7.4 ^ permalink raw reply [flat|nested] 147+ messages in thread
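[Editor's note] The qp-per-lcore_param idea referenced above can be illustrated with a small accounting sketch. The struct layout and function names here are hypothetical (the real `--config` tuples live in ipsec-secgw's lcore_params array); the point is only the difference in crypto qp counts between the two schemes:

```c
#include <stdbool.h>
#include <stdint.h>

/* One (port, queue, lcore) tuple, as given by --config */
struct lcore_param {
	uint16_t port_id;
	uint8_t queue_id;
	uint8_t lcore_id;
};

/* Current scheme: one crypto qp per distinct lcore. */
static uint16_t
nb_qps_per_lcore(const struct lcore_param *p, uint16_t n)
{
	bool seen[256] = {false};
	uint16_t i, cnt = 0;

	for (i = 0; i < n; i++) {
		if (!seen[p[i].lcore_id]) {
			seen[p[i].lcore_id] = true;
			cnt++;
		}
	}
	return cnt;
}

/* Proposed scheme: one crypto qp per lcore_param entry, i.e. every
 * eth queue owns a dedicated crypto qp. */
static uint16_t
nb_qps_per_param(const struct lcore_param *p, uint16_t n)
{
	(void)p;
	return n;
}
```

With `--config "(0,0,1),(0,1,1),(1,0,1)"` one lcore drives three eth queues: the current scheme allocates a single crypto qp, while the proposed one allocates three.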
* Re: [dpdk-dev] [PATCH v2 00/12] add eventmode to ipsec-secgw 2020-01-20 13:45 ` [dpdk-dev] [PATCH v2 00/12] add eventmode to ipsec-secgw Anoob Joseph ` (11 preceding siblings ...) 2020-01-20 13:45 ` [dpdk-dev] [PATCH v2 12/12] examples/ipsec-secgw: add cmd line option for bufs Anoob Joseph @ 2020-01-28 5:02 ` Anoob Joseph 2020-01-28 13:00 ` Ananyev, Konstantin 2020-02-04 13:58 ` [dpdk-dev] [PATCH v3 00/13] " Lukasz Bartosik 13 siblings, 1 reply; 147+ messages in thread From: Anoob Joseph @ 2020-01-28 5:02 UTC (permalink / raw) To: Akhil Goyal, konstantin.ananyev Cc: Jerin Jacob Kollanukkaran, Narayana Prasad Raju Athreya, Ankur Dwivedi, Archana Muniganti, Tejasree Kondoj, Vamsi Krishna Attunuru, Lukas Bartosik, Konstantin Ananyev, dev, Thomas Monjalon, Radu Nicolau, Anoob Joseph Hi Akhil, Konstantin, Do you have any further comments? Thanks, Anoob > -----Original Message----- > From: dev <dev-bounces@dpdk.org> On Behalf Of Anoob Joseph > Sent: Monday, January 20, 2020 7:15 PM > To: Akhil Goyal <akhil.goyal@nxp.com>; Radu Nicolau > <radu.nicolau@intel.com>; Thomas Monjalon <thomas@monjalon.net> > Cc: Anoob Joseph <anoobj@marvell.com>; Jerin Jacob Kollanukkaran > <jerinj@marvell.com>; Narayana Prasad Raju Athreya > <pathreya@marvell.com>; Ankur Dwivedi <adwivedi@marvell.com>; > Archana Muniganti <marchana@marvell.com>; Tejasree Kondoj > <ktejasree@marvell.com>; Vamsi Krishna Attunuru > <vattunuru@marvell.com>; Lukas Bartosik <lbartosik@marvell.com>; > Konstantin Ananyev <konstantin.ananyev@intel.com>; dev@dpdk.org > Subject: [dpdk-dev] [PATCH v2 00/12] add eventmode to ipsec-secgw > > This series introduces event-mode additions to ipsec-secgw. This effort is > parallel to the similar changes in l2fwd (l2fwd-event app) & l3fwd. > > With this series, ipsec-secgw would be able to run in eventmode. The worker > thread (executing loop) would be receiving events and would be submitting > it back to the eventdev after the processing. 
This way, multicore scaling and > h/w assisted scheduling is achieved by making use of the eventdev > capabilities. > > Since the underlying event device would be having varying capabilities, the > worker thread could be drafted differently to maximize performance. > This series introduces usage of multiple worker threads, among which the > one to be used will be determined by the operating conditions and the > underlying device capabilities. > > For example, if an event device - eth device pair has Tx internal port, then > application can do tx_adapter_enqueue() instead of regular > event_enqueue(). So a thread making an assumption that the device pair has > internal port will not be the right solution for another pair. The infrastructure > added with these patches aims to help application to have multiple worker > threads, there by extracting maximum performance from every device > without affecting existing paths/use cases. > > The eventmode configuration is predefined. All packets reaching one eth > port will hit one event queue. All event queues will be mapped to all event > ports. So all cores will be able to receive traffic from all ports. > When schedule_type is set as RTE_SCHED_TYPE_ORDERED/ATOMIC, event > device will ensure the ordering. Ordering would be lost when tried in > PARALLEL. > > Following command line options are introduced, > > --transfer-mode: to choose between poll mode & event mode > --schedule-type: to specify the scheduling type > (RTE_SCHED_TYPE_ORDERED/ > RTE_SCHED_TYPE_ATOMIC/ > RTE_SCHED_TYPE_PARALLEL) > > Additionally the event mode introduces two modes of processing packets: > > Driver-mode: This mode will have bare minimum changes in the application > to support ipsec. There woudn't be any lookup etc done in > the application. And for inline-protocol use case, the > thread would resemble l2fwd as the ipsec processing would be > done entirely in the h/w. This mode can be used to benchmark > the raw performance of the h/w. 
All the application side > steps (like lookup) can be redone based on the requirement > of the end user. Hence the need for a mode which would > report the raw performance. > > App-mode: This mode will have all the features currently implemented with > ipsec-secgw (non librte_ipsec mode). All the lookups etc > would follow the existing methods and would report numbers > that can be compared against regular ipsec-secgw benchmark > numbers. > > The driver mode is selected with existing --single-sa option (used also by poll > mode). When --single-sa option is used in conjution with event mode then > index passed to --single-sa is ignored. > > Example commands to execute ipsec-secgw in various modes on OCTEON > TX2 platform, > > #Inbound and outbound app mode > ipsec-secgw -w 0002:02:00.0,ipsec_in_max_spi=128 -w > 0002:03:00.0,ipsec_in_max_spi=128 -w 0002:0e:00.0 -w 0002:10:00.1 --log- > level=8 -c 0x1 -- -P -p 0x3 -u 0x1 --config "(1,0,0),(0,0,0)" -f aes-gcm.cfg -- > transfer-mode event --schedule-type parallel > > #Inbound and outbound driver mode > ipsec-secgw -w 0002:02:00.0,ipsec_in_max_spi=128 -w > 0002:03:00.0,ipsec_in_max_spi=128 -w 0002:0e:00.0 -w 0002:10:00.1 --log- > level=8 -c 0x1 -- -P -p 0x3 -u 0x1 --config "(1,0,0),(0,0,0)" -f aes-gcm.cfg -- > transfer-mode event --schedule-type parallel --single-sa 0 > > This series adds non burst tx internal port workers only. It provides > infrastructure for non internal port workers, however does not define any. > Also, only inline ipsec protocol mode is supported by the worker threads > added. > > Following are planned features, > 1. Add burst mode workers. > 2. Add non internal port workers. > 3. Verify support for Rx core (the support is added but lack of h/w to verify). > 4. Add lookaside protocol support. > > Following are features that Marvell won't be attempting. > 1. Inline crypto support. > 2. Lookaside crypto support. 
> > For the features that Marvell won't be attempting, new workers can be > introduced by the respective stake holders. > > This series is tested on Marvell OCTEON TX2. > > Changes in v2: > * Remove --process-dir option. Instead use existing unprotected port mask > option (-u) to decide wheter port handles inbound or outbound traffic. > * Remove --process-mode option. Instead use existing --single-sa option > to select between app and driver modes. > * Add handling of PKT_RX_SEC_OFFLOAD_FAIL result in app worker thread. > * Fix passing of req_rx_offload flags to create_default_ipsec_flow(). > * Move destruction of flows to a location where eth ports are stopped > and closed. > * Print error and exit when event mode --schedule-type option is used > in poll mode. > * Reduce number of goto statements replacing them with loop constructs. > * Remove sec_session_fixed table and replace it with locally build > table in driver worker thread. Table is indexed by port identifier > and holds first inline session pointer found for a given port. > * Print error and exit when sessions other than inline are configured > in event mode. > * When number of event queues is less than number of eth ports then > map all eth ports to one event queue. > * Cleanup and minor improvements in code as suggested by Konstantin > > Deferred to v3: > * The final patch updates the hardcoded number of buffers in a pool. > Also, there was a discussion on the update of number of qp. Both the > above can be handled properly, if we can remove the logic which limits > one core to only use one crypto qp. If we can allow one qp per > lcore_param, every eth queue can have it's own crypto qp and that would > solve the requirements with OCTEON TX2 inline ipsec support as well. 
> > Patch with the mentioned change, > https://urldefense.proofpoint.com/v2/url?u=http- > 3A__patches.dpdk.org_patch_64408_&d=DwIDAg&c=nKjWec2b6R0mOyPaz > 7xtfQ&r=BPcGOOudUMrTDQ9YbgKcOkO5ChYiUPPlPNIEvTOhjNE&m=rg71UQ > 1CwRYPFy30QuJQZd1Lam_kwYg15N2h5GN2iD4&s=yHzfRBRuunl4JWV97vufk > 7aycUc472ahPVnQ9Tt6SeY&e= > > * Update ipsec-secgw documentation to describe the new options as well as > event mode support. > > This series depends on the PMD changes submitted in the following set, > https://urldefense.proofpoint.com/v2/url?u=http- > 3A__patches.dpdk.org_project_dpdk_list_-3Fseries- > 3D8203&d=DwIDAg&c=nKjWec2b6R0mOyPaz7xtfQ&r=BPcGOOudUMrTDQ9Y > bgKcOkO5ChYiUPPlPNIEvTOhjNE&m=rg71UQ1CwRYPFy30QuJQZd1Lam_kwY > g15N2h5GN2iD4&s=g2wtO9tOQTYHa9os1ECz5uwgpz9JmjTlGbEl- > Cp6WAw&e= > > Ankur Dwivedi (1): > examples/ipsec-secgw: add default rte_flow for inline Rx > > Anoob Joseph (5): > examples/ipsec-secgw: add framework for eventmode helper > examples/ipsec-secgw: add eventdev port-lcore link > examples/ipsec-secgw: add Rx adapter support > examples/ipsec-secgw: add Tx adapter support > examples/ipsec-secgw: add routines to display config > > Lukasz Bartosik (6): > examples/ipsec-secgw: add routines to launch workers > examples/ipsec-secgw: add support for internal ports > examples/ipsec-secgw: add eventmode to ipsec-secgw > examples/ipsec-secgw: add driver mode worker > examples/ipsec-secgw: add app mode worker > examples/ipsec-secgw: add cmd line option for bufs > > examples/ipsec-secgw/Makefile | 2 + > examples/ipsec-secgw/event_helper.c | 1714 > +++++++++++++++++++++++++++++++++++ > examples/ipsec-secgw/event_helper.h | 312 +++++++ examples/ipsec- > secgw/ipsec-secgw.c | 502 ++++++++-- > examples/ipsec-secgw/ipsec-secgw.h | 86 ++ > examples/ipsec-secgw/ipsec.c | 7 + > examples/ipsec-secgw/ipsec.h | 36 +- > examples/ipsec-secgw/ipsec_worker.c | 656 ++++++++++++++ > examples/ipsec-secgw/ipsec_worker.h | 39 + > examples/ipsec-secgw/meson.build | 4 +- > 
examples/ipsec-secgw/sa.c | 11 - > 11 files changed, 3275 insertions(+), 94 deletions(-) create mode 100644 > examples/ipsec-secgw/event_helper.c > create mode 100644 examples/ipsec-secgw/event_helper.h > create mode 100644 examples/ipsec-secgw/ipsec-secgw.h > create mode 100644 examples/ipsec-secgw/ipsec_worker.c > create mode 100644 examples/ipsec-secgw/ipsec_worker.h > > -- > 2.7.4 ^ permalink raw reply [flat|nested] 147+ messages in thread
* Re: [dpdk-dev] [PATCH v2 00/12] add eventmode to ipsec-secgw 2020-01-28 5:02 ` [dpdk-dev] [PATCH v2 00/12] add eventmode to ipsec-secgw Anoob Joseph @ 2020-01-28 13:00 ` Ananyev, Konstantin 0 siblings, 0 replies; 147+ messages in thread From: Ananyev, Konstantin @ 2020-01-28 13:00 UTC (permalink / raw) To: Anoob Joseph, Akhil Goyal Cc: Jerin Jacob Kollanukkaran, Narayana Prasad Raju Athreya, Ankur Dwivedi, Archana Muniganti, Tejasree Kondoj, Vamsi Krishna Attunuru, Lukas Bartosik, dev, Thomas Monjalon, Nicolau, Radu > > Hi Akhil, Konstantin, > > Do you have any further comments? Will try to have a proper look today/tomorrow. Sorry for delay. Konstantin > > Thanks, > Anoob > > > -----Original Message----- > > From: dev <dev-bounces@dpdk.org> On Behalf Of Anoob Joseph > > Sent: Monday, January 20, 2020 7:15 PM > > To: Akhil Goyal <akhil.goyal@nxp.com>; Radu Nicolau > > <radu.nicolau@intel.com>; Thomas Monjalon <thomas@monjalon.net> > > Cc: Anoob Joseph <anoobj@marvell.com>; Jerin Jacob Kollanukkaran > > <jerinj@marvell.com>; Narayana Prasad Raju Athreya > > <pathreya@marvell.com>; Ankur Dwivedi <adwivedi@marvell.com>; > > Archana Muniganti <marchana@marvell.com>; Tejasree Kondoj > > <ktejasree@marvell.com>; Vamsi Krishna Attunuru > > <vattunuru@marvell.com>; Lukas Bartosik <lbartosik@marvell.com>; > > Konstantin Ananyev <konstantin.ananyev@intel.com>; dev@dpdk.org > > Subject: [dpdk-dev] [PATCH v2 00/12] add eventmode to ipsec-secgw > > > > This series introduces event-mode additions to ipsec-secgw. This effort is > > parallel to the similar changes in l2fwd (l2fwd-event app) & l3fwd. > > > > With this series, ipsec-secgw would be able to run in eventmode. The worker > > thread (executing loop) would be receiving events and would be submitting > > it back to the eventdev after the processing. This way, multicore scaling and > > h/w assisted scheduling is achieved by making use of the eventdev > > capabilities. 
> > > > Since the underlying event device would be having varying capabilities, the > > worker thread could be drafted differently to maximize performance. > > This series introduces usage of multiple worker threads, among which the > > one to be used will be determined by the operating conditions and the > > underlying device capabilities. > > > > For example, if an event device - eth device pair has Tx internal port, then > > application can do tx_adapter_enqueue() instead of regular > > event_enqueue(). So a thread making an assumption that the device pair has > > internal port will not be the right solution for another pair. The infrastructure > > added with these patches aims to help application to have multiple worker > > threads, there by extracting maximum performance from every device > > without affecting existing paths/use cases. > > > > The eventmode configuration is predefined. All packets reaching one eth > > port will hit one event queue. All event queues will be mapped to all event > > ports. So all cores will be able to receive traffic from all ports. > > When schedule_type is set as RTE_SCHED_TYPE_ORDERED/ATOMIC, event > > device will ensure the ordering. Ordering would be lost when tried in > > PARALLEL. > > > > Following command line options are introduced, > > > > --transfer-mode: to choose between poll mode & event mode > > --schedule-type: to specify the scheduling type > > (RTE_SCHED_TYPE_ORDERED/ > > RTE_SCHED_TYPE_ATOMIC/ > > RTE_SCHED_TYPE_PARALLEL) > > > > Additionally the event mode introduces two modes of processing packets: > > > > Driver-mode: This mode will have bare minimum changes in the application > > to support ipsec. There woudn't be any lookup etc done in > > the application. And for inline-protocol use case, the > > thread would resemble l2fwd as the ipsec processing would be > > done entirely in the h/w. This mode can be used to benchmark > > the raw performance of the h/w. 
All the application side > > steps (like lookup) can be redone based on the requirement > > of the end user. Hence the need for a mode which would > > report the raw performance. > > > > App-mode: This mode will have all the features currently implemented with > > ipsec-secgw (non librte_ipsec mode). All the lookups etc > > would follow the existing methods and would report numbers > > that can be compared against regular ipsec-secgw benchmark > > numbers. > > > > The driver mode is selected with existing --single-sa option (used also by poll > > mode). When --single-sa option is used in conjution with event mode then > > index passed to --single-sa is ignored. > > > > Example commands to execute ipsec-secgw in various modes on OCTEON > > TX2 platform, > > > > #Inbound and outbound app mode > > ipsec-secgw -w 0002:02:00.0,ipsec_in_max_spi=128 -w > > 0002:03:00.0,ipsec_in_max_spi=128 -w 0002:0e:00.0 -w 0002:10:00.1 --log- > > level=8 -c 0x1 -- -P -p 0x3 -u 0x1 --config "(1,0,0),(0,0,0)" -f aes-gcm.cfg -- > > transfer-mode event --schedule-type parallel > > > > #Inbound and outbound driver mode > > ipsec-secgw -w 0002:02:00.0,ipsec_in_max_spi=128 -w > > 0002:03:00.0,ipsec_in_max_spi=128 -w 0002:0e:00.0 -w 0002:10:00.1 --log- > > level=8 -c 0x1 -- -P -p 0x3 -u 0x1 --config "(1,0,0),(0,0,0)" -f aes-gcm.cfg -- > > transfer-mode event --schedule-type parallel --single-sa 0 > > > > This series adds non burst tx internal port workers only. It provides > > infrastructure for non internal port workers, however does not define any. > > Also, only inline ipsec protocol mode is supported by the worker threads > > added. > > > > Following are planned features, > > 1. Add burst mode workers. > > 2. Add non internal port workers. > > 3. Verify support for Rx core (the support is added but lack of h/w to verify). > > 4. Add lookaside protocol support. > > > > Following are features that Marvell won't be attempting. > > 1. Inline crypto support. > > 2. Lookaside crypto support. 
> > > > For the features that Marvell won't be attempting, new workers can be > > introduced by the respective stake holders. > > > > This series is tested on Marvell OCTEON TX2. > > > > Changes in v2: > > * Remove --process-dir option. Instead use existing unprotected port mask > > option (-u) to decide wheter port handles inbound or outbound traffic. > > * Remove --process-mode option. Instead use existing --single-sa option > > to select between app and driver modes. > > * Add handling of PKT_RX_SEC_OFFLOAD_FAIL result in app worker thread. > > * Fix passing of req_rx_offload flags to create_default_ipsec_flow(). > > * Move destruction of flows to a location where eth ports are stopped > > and closed. > > * Print error and exit when event mode --schedule-type option is used > > in poll mode. > > * Reduce number of goto statements replacing them with loop constructs. > > * Remove sec_session_fixed table and replace it with locally build > > table in driver worker thread. Table is indexed by port identifier > > and holds first inline session pointer found for a given port. > > * Print error and exit when sessions other than inline are configured > > in event mode. > > * When number of event queues is less than number of eth ports then > > map all eth ports to one event queue. > > * Cleanup and minor improvements in code as suggested by Konstantin > > > > Deferred to v3: > > * The final patch updates the hardcoded number of buffers in a pool. > > Also, there was a discussion on the update of number of qp. Both the > > above can be handled properly, if we can remove the logic which limits > > one core to only use one crypto qp. If we can allow one qp per > > lcore_param, every eth queue can have it's own crypto qp and that would > > solve the requirements with OCTEON TX2 inline ipsec support as well. 
> > > > Patch with the mentioned change, > > https://urldefense.proofpoint.com/v2/url?u=http- > > 3A__patches.dpdk.org_patch_64408_&d=DwIDAg&c=nKjWec2b6R0mOyPaz > > 7xtfQ&r=BPcGOOudUMrTDQ9YbgKcOkO5ChYiUPPlPNIEvTOhjNE&m=rg71UQ > > 1CwRYPFy30QuJQZd1Lam_kwYg15N2h5GN2iD4&s=yHzfRBRuunl4JWV97vufk > > 7aycUc472ahPVnQ9Tt6SeY&e= > > > > * Update ipsec-secgw documentation to describe the new options as well as > > event mode support. > > > > This series depends on the PMD changes submitted in the following set, > > https://urldefense.proofpoint.com/v2/url?u=http- > > 3A__patches.dpdk.org_project_dpdk_list_-3Fseries- > > 3D8203&d=DwIDAg&c=nKjWec2b6R0mOyPaz7xtfQ&r=BPcGOOudUMrTDQ9Y > > bgKcOkO5ChYiUPPlPNIEvTOhjNE&m=rg71UQ1CwRYPFy30QuJQZd1Lam_kwY > > g15N2h5GN2iD4&s=g2wtO9tOQTYHa9os1ECz5uwgpz9JmjTlGbEl- > > Cp6WAw&e= > > > > Ankur Dwivedi (1): > > examples/ipsec-secgw: add default rte_flow for inline Rx > > > > Anoob Joseph (5): > > examples/ipsec-secgw: add framework for eventmode helper > > examples/ipsec-secgw: add eventdev port-lcore link > > examples/ipsec-secgw: add Rx adapter support > > examples/ipsec-secgw: add Tx adapter support > > examples/ipsec-secgw: add routines to display config > > > > Lukasz Bartosik (6): > > examples/ipsec-secgw: add routines to launch workers > > examples/ipsec-secgw: add support for internal ports > > examples/ipsec-secgw: add eventmode to ipsec-secgw > > examples/ipsec-secgw: add driver mode worker > > examples/ipsec-secgw: add app mode worker > > examples/ipsec-secgw: add cmd line option for bufs > > > > examples/ipsec-secgw/Makefile | 2 + > > examples/ipsec-secgw/event_helper.c | 1714 > > +++++++++++++++++++++++++++++++++++ > > examples/ipsec-secgw/event_helper.h | 312 +++++++ examples/ipsec- > > secgw/ipsec-secgw.c | 502 ++++++++-- > > examples/ipsec-secgw/ipsec-secgw.h | 86 ++ > > examples/ipsec-secgw/ipsec.c | 7 + > > examples/ipsec-secgw/ipsec.h | 36 +- > > examples/ipsec-secgw/ipsec_worker.c | 656 ++++++++++++++ > > 
examples/ipsec-secgw/ipsec_worker.h | 39 + > > examples/ipsec-secgw/meson.build | 4 +- > > examples/ipsec-secgw/sa.c | 11 - > > 11 files changed, 3275 insertions(+), 94 deletions(-) create mode 100644 > > examples/ipsec-secgw/event_helper.c > > create mode 100644 examples/ipsec-secgw/event_helper.h > > create mode 100644 examples/ipsec-secgw/ipsec-secgw.h > > create mode 100644 examples/ipsec-secgw/ipsec_worker.c > > create mode 100644 examples/ipsec-secgw/ipsec_worker.h > > > > -- > > 2.7.4 ^ permalink raw reply [flat|nested] 147+ messages in thread
* [dpdk-dev] [PATCH v3 00/13] add eventmode to ipsec-secgw
  2020-01-20 13:45 ` [dpdk-dev] [PATCH v2 00/12] add eventmode to ipsec-secgw Anoob Joseph
                     ` (12 preceding siblings ...)
  2020-01-28  5:02 ` [dpdk-dev] [PATCH v2 00/12] add eventmode to ipsec-secgw Anoob Joseph
@ 2020-02-04 13:58 ` Lukasz Bartosik
  2020-02-04 13:58 ` [dpdk-dev] [PATCH v3 01/13] examples/ipsec-secgw: add default rte flow for inline Rx Lukasz Bartosik
                     ` (13 more replies)
  13 siblings, 14 replies; 147+ messages in thread
From: Lukasz Bartosik @ 2020-02-04 13:58 UTC (permalink / raw)
To: Akhil Goyal, Radu Nicolau, Thomas Monjalon
Cc: Jerin Jacob, Narayana Prasad, Ankur Dwivedi, Anoob Joseph,
    Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru,
    Konstantin Ananyev, dev

This series introduces event-mode additions to ipsec-secgw.

With this series, ipsec-secgw is able to run in event mode: the worker
thread (executing loop) receives events and submits them back to the
eventdev after processing. This way, multicore scaling and h/w assisted
scheduling are achieved by making use of the eventdev capabilities.

Since the underlying event devices have varying capabilities, the worker
thread can be drafted differently to maximize performance. This series
introduces usage of multiple worker threads, among which the one to be
used is determined by the operating conditions and the underlying device
capabilities.

For example, if an event device - eth device pair has a Tx internal
port, then the application can do tx_adapter_enqueue() instead of a
regular event_enqueue(). So a thread which assumes the device pair has
an internal port will not be the right solution for another pair. The
infrastructure added with these patches aims to help the application
have multiple worker threads, thereby extracting maximum performance
from every device without affecting existing paths/use cases.

The eventmode configuration is predefined.
All packets reaching one eth port will hit one event queue. All event
queues will be mapped to all event ports, so all cores will be able to
receive traffic from all ports. When schedule_type is set to
RTE_SCHED_TYPE_ORDERED or RTE_SCHED_TYPE_ATOMIC, the event device will
ensure ordering. Ordering is lost with RTE_SCHED_TYPE_PARALLEL.

The following command line options are introduced,

--transfer-mode: to choose between poll mode & event mode
--event-schedule-type: to specify the scheduling type
                       (RTE_SCHED_TYPE_ORDERED/
                        RTE_SCHED_TYPE_ATOMIC/
                        RTE_SCHED_TYPE_PARALLEL)

Additionally the event mode introduces two modes of processing packets:

Driver-mode: This mode will have bare minimum changes in the application
             to support ipsec. There wouldn't be any lookup etc. done in
             the application. And for the inline-protocol use case, the
             thread would resemble l2fwd as the ipsec processing would be
             done entirely in the h/w. This mode can be used to benchmark
             the raw performance of the h/w. All the application side
             steps (like lookup) can be redone based on the requirement
             of the end user. Hence the need for a mode which would
             report the raw performance.

App-mode: This mode will have all the features currently implemented
          with ipsec-secgw (non librte_ipsec mode). All the lookups etc.
          would follow the existing methods and would report numbers
          that can be compared against regular ipsec-secgw benchmark
          numbers.

The driver mode is selected with the existing --single-sa option (used
also by poll mode). When the --single-sa option is used in conjunction
with event mode, the index passed to --single-sa is ignored.
Example commands to execute ipsec-secgw in various modes on the OCTEON
TX2 platform,

#Inbound and outbound app mode
ipsec-secgw -w 0002:02:00.0,ipsec_in_max_spi=128 -w 0002:03:00.0,ipsec_in_max_spi=128 -w 0002:0e:00.0 -w 0002:10:00.1 --log-level=8 -c 0x1 -- -P -p 0x3 -u 0x1 --config "(1,0,0),(0,0,0)" -f aes-gcm.cfg --transfer-mode event --event-schedule-type parallel

#Inbound and outbound driver mode
ipsec-secgw -w 0002:02:00.0,ipsec_in_max_spi=128 -w 0002:03:00.0,ipsec_in_max_spi=128 -w 0002:0e:00.0 -w 0002:10:00.1 --log-level=8 -c 0x1 -- -P -p 0x3 -u 0x1 --config "(1,0,0),(0,0,0)" -f aes-gcm.cfg --transfer-mode event --event-schedule-type parallel --single-sa 0

This series adds non-burst Tx internal port workers only. It provides
infrastructure for non internal port workers, however it does not define
any. Also, only inline ipsec protocol mode is supported by the worker
threads added.

Following are planned features,
1. Add burst mode workers.
2. Add non internal port workers.
3. Verify support for Rx core (the support is added but lack of h/w to verify).
4. Add lookaside protocol support.

Following are features that Marvell won't be attempting,
1. Inline crypto support.
2. Lookaside crypto support.

For the features that Marvell won't be attempting, new workers can be
introduced by the respective stakeholders. This series is tested on
Marvell OCTEON TX2.

Deferred to v4:
* Update ipsec-secgw documentation to describe the new options as well
  as event mode support.

Changes in v3:
* Move eh_conf_init() and eh_conf_uninit() functions to event_helper.c
  including minor rework.
* Rename --schedule-type option to --event-schedule-type.
* Replace macro UNPROTECTED_PORT with static inline function
  is_unprotected_port().
* Move definitions of global variables used by multiple modules to .c
  files and add externs in .h headers.
* Add eh_check_conf() which validates ipsec-secgw configuration for
  event mode.
* Add dynamic calculation of number of buffers in a pool based on number of cores, ports and crypto queues.
* Fix segmentation fault in event mode driver worker which happens when there are no inline outbound sessions configured.
* Remove change related to updating number of crypto queues in cryptodevs_init(). The update of crypto queues will be handled in a separate patch.
* Fix compilation error on 32-bit platforms by using userdata instead of udata64 from rte_mbuf.

Changes in v2:
* Remove --process-dir option. Instead use the existing unprotected port mask option (-u) to decide whether a port handles inbound or outbound traffic.
* Remove --process-mode option. Instead use the existing --single-sa option to select between app and driver modes.
* Add handling of PKT_RX_SEC_OFFLOAD_FAIL result in app worker thread.
* Fix passing of req_rx_offload flags to create_default_ipsec_flow().
* Move destruction of flows to a location where eth ports are stopped and closed.
* Print error and exit when event mode --schedule-type option is used in poll mode.
* Reduce number of goto statements replacing them with loop constructs.
* Remove sec_session_fixed table and replace it with a locally built table in the driver worker thread. The table is indexed by port identifier and holds the first inline session pointer found for a given port.
* Print error and exit when sessions other than inline are configured in event mode.
* When the number of event queues is less than the number of eth ports then map all eth ports to one event queue. 
* Cleanup and minor improvements in code as suggested by Konstantin This series depends on the PMD changes submitted in the following set, http://patches.dpdk.org/project/dpdk/list/?series=8411 Ankur Dwivedi (1): examples/ipsec-secgw: add default rte flow for inline Rx Anoob Joseph (5): examples/ipsec-secgw: add framework for eventmode helper examples/ipsec-secgw: add eventdev port-lcore link examples/ipsec-secgw: add Rx adapter support examples/ipsec-secgw: add Tx adapter support examples/ipsec-secgw: add routines to display config Lukasz Bartosik (7): examples/ipsec-secgw: add routines to launch workers examples/ipsec-secgw: add support for internal ports examples/ipsec-secgw: add event helper config init/uninit examples/ipsec-secgw: add eventmode to ipsec-secgw examples/ipsec-secgw: add driver mode worker examples/ipsec-secgw: add app mode worker examples/ipsec-secgw: make number of buffers dynamic examples/ipsec-secgw/Makefile | 2 + examples/ipsec-secgw/event_helper.c | 1818 +++++++++++++++++++++++++++++++++++ examples/ipsec-secgw/event_helper.h | 335 +++++++ examples/ipsec-secgw/ipsec-secgw.c | 437 +++++++-- examples/ipsec-secgw/ipsec-secgw.h | 86 ++ examples/ipsec-secgw/ipsec.c | 7 + examples/ipsec-secgw/ipsec.h | 40 +- examples/ipsec-secgw/ipsec_worker.c | 659 +++++++++++++ examples/ipsec-secgw/ipsec_worker.h | 39 + examples/ipsec-secgw/meson.build | 4 +- examples/ipsec-secgw/sa.c | 19 +- 11 files changed, 3349 insertions(+), 97 deletions(-) create mode 100644 examples/ipsec-secgw/event_helper.c create mode 100644 examples/ipsec-secgw/event_helper.h create mode 100644 examples/ipsec-secgw/ipsec-secgw.h create mode 100644 examples/ipsec-secgw/ipsec_worker.c create mode 100644 examples/ipsec-secgw/ipsec_worker.h -- 2.7.4 ^ permalink raw reply [flat|nested] 147+ messages in thread
* [dpdk-dev] [PATCH v3 01/13] examples/ipsec-secgw: add default rte flow for inline Rx 2020-02-04 13:58 ` [dpdk-dev] [PATCH v3 00/13] " Lukasz Bartosik @ 2020-02-04 13:58 ` Lukasz Bartosik 2020-02-04 13:58 ` [dpdk-dev] [PATCH v3 02/13] examples/ipsec-secgw: add framework for eventmode helper Lukasz Bartosik ` (12 subsequent siblings) 13 siblings, 0 replies; 147+ messages in thread From: Lukasz Bartosik @ 2020-02-04 13:58 UTC (permalink / raw) To: Akhil Goyal, Radu Nicolau, Thomas Monjalon Cc: Ankur Dwivedi, Jerin Jacob, Narayana Prasad, Anoob Joseph, Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru, Konstantin Ananyev, dev From: Ankur Dwivedi <adwivedi@marvell.com> The default flow created would enable security processing on all ESP packets. If the default flow is created, SA based rte_flow creation would be skipped. Signed-off-by: Ankur Dwivedi <adwivedi@marvell.com> Signed-off-by: Anoob Joseph <anoobj@marvell.com> --- examples/ipsec-secgw/ipsec-secgw.c | 61 +++++++++++++++++++++++++++++++++----- examples/ipsec-secgw/ipsec.c | 7 +++++ examples/ipsec-secgw/ipsec.h | 6 ++++ 3 files changed, 66 insertions(+), 8 deletions(-) diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c index 3b5aaf6..d5e8fe5 100644 --- a/examples/ipsec-secgw/ipsec-secgw.c +++ b/examples/ipsec-secgw/ipsec-secgw.c @@ -128,6 +128,8 @@ struct ethaddr_info ethaddr_tbl[RTE_MAX_ETHPORTS] = { { 0, ETHADDR(0x00, 0x16, 0x3e, 0x49, 0x9e, 0xdd) } }; +struct flow_info flow_info_tbl[RTE_MAX_ETHPORTS]; + #define CMD_LINE_OPT_CONFIG "config" #define CMD_LINE_OPT_SINGLE_SA "single-sa" #define CMD_LINE_OPT_CRYPTODEV_MASK "cryptodev_mask" @@ -2406,6 +2408,48 @@ reassemble_init(void) return rc; } +static void +create_default_ipsec_flow(uint16_t port_id, uint64_t rx_offloads) +{ + struct rte_flow_action action[2]; + struct rte_flow_item pattern[2]; + struct rte_flow_attr attr = {0}; + struct rte_flow_error err; + struct rte_flow *flow; + int ret; + + if (!(rx_offloads & 
DEV_RX_OFFLOAD_SECURITY)) + return; + + /* Add the default rte_flow to enable SECURITY for all ESP packets */ + + pattern[0].type = RTE_FLOW_ITEM_TYPE_ESP; + pattern[0].spec = NULL; + pattern[0].mask = NULL; + pattern[0].last = NULL; + pattern[1].type = RTE_FLOW_ITEM_TYPE_END; + + action[0].type = RTE_FLOW_ACTION_TYPE_SECURITY; + action[0].conf = NULL; + action[1].type = RTE_FLOW_ACTION_TYPE_END; + action[1].conf = NULL; + + attr.ingress = 1; + + ret = rte_flow_validate(port_id, &attr, pattern, action, &err); + if (ret) + return; + + flow = rte_flow_create(port_id, &attr, pattern, action, &err); + if (flow == NULL) + return; + + flow_info_tbl[port_id].rx_def_flow = flow; + RTE_LOG(INFO, IPSEC, + "Created default flow enabling SECURITY for all ESP traffic on port %d\n", + port_id); +} + int32_t main(int32_t argc, char **argv) { @@ -2414,7 +2458,8 @@ main(int32_t argc, char **argv) uint32_t i; uint8_t socket_id; uint16_t portid; - uint64_t req_rx_offloads, req_tx_offloads; + uint64_t req_rx_offloads[RTE_MAX_ETHPORTS]; + uint64_t req_tx_offloads[RTE_MAX_ETHPORTS]; size_t sess_sz; /* init EAL */ @@ -2476,8 +2521,10 @@ main(int32_t argc, char **argv) if ((enabled_port_mask & (1 << portid)) == 0) continue; - sa_check_offloads(portid, &req_rx_offloads, &req_tx_offloads); - port_init(portid, req_rx_offloads, req_tx_offloads); + sa_check_offloads(portid, &req_rx_offloads[portid], + &req_tx_offloads[portid]); + port_init(portid, req_rx_offloads[portid], + req_tx_offloads[portid]); } cryptodevs_init(); @@ -2487,11 +2534,9 @@ main(int32_t argc, char **argv) if ((enabled_port_mask & (1 << portid)) == 0) continue; - /* - * Start device - * note: device must be started before a flow rule - * can be installed. 
- */ + /* Create flow before starting the device */ + create_default_ipsec_flow(portid, req_rx_offloads[portid]); + ret = rte_eth_dev_start(portid); if (ret < 0) rte_exit(EXIT_FAILURE, "rte_eth_dev_start: " diff --git a/examples/ipsec-secgw/ipsec.c b/examples/ipsec-secgw/ipsec.c index d4b5712..58f6e8c 100644 --- a/examples/ipsec-secgw/ipsec.c +++ b/examples/ipsec-secgw/ipsec.c @@ -261,6 +261,12 @@ create_inline_session(struct socket_ctx *skt_ctx, struct ipsec_sa *sa, unsigned int i; unsigned int j; + /* Don't create flow if default flow is created */ + if (flow_info_tbl[sa->portid].rx_def_flow) { + sa->cdev_id_qp = 0; + return 0; + } + ret = rte_eth_dev_info_get(sa->portid, &dev_info); if (ret != 0) { RTE_LOG(ERR, IPSEC, @@ -396,6 +402,7 @@ create_inline_session(struct socket_ctx *skt_ctx, struct ipsec_sa *sa, ips->security.ol_flags = sec_cap->ol_flags; ips->security.ctx = sec_ctx; } + sa->cdev_id_qp = 0; return 0; diff --git a/examples/ipsec-secgw/ipsec.h b/examples/ipsec-secgw/ipsec.h index 8e07521..28ff07d 100644 --- a/examples/ipsec-secgw/ipsec.h +++ b/examples/ipsec-secgw/ipsec.h @@ -81,6 +81,12 @@ struct app_sa_prm { extern struct app_sa_prm app_sa_prm; +struct flow_info { + struct rte_flow *rx_def_flow; +}; + +extern struct flow_info flow_info_tbl[RTE_MAX_ETHPORTS]; + enum { IPSEC_SESSION_PRIMARY = 0, IPSEC_SESSION_FALLBACK = 1, -- 2.7.4 ^ permalink raw reply [flat|nested] 147+ messages in thread
* [dpdk-dev] [PATCH v3 02/13] examples/ipsec-secgw: add framework for eventmode helper 2020-02-04 13:58 ` [dpdk-dev] [PATCH v3 00/13] " Lukasz Bartosik 2020-02-04 13:58 ` [dpdk-dev] [PATCH v3 01/13] examples/ipsec-secgw: add default rte flow for inline Rx Lukasz Bartosik @ 2020-02-04 13:58 ` Lukasz Bartosik 2020-02-04 13:58 ` [dpdk-dev] [PATCH v3 03/13] examples/ipsec-secgw: add eventdev port-lcore link Lukasz Bartosik ` (11 subsequent siblings) 13 siblings, 0 replies; 147+ messages in thread From: Lukasz Bartosik @ 2020-02-04 13:58 UTC (permalink / raw) To: Akhil Goyal, Radu Nicolau, Thomas Monjalon Cc: Anoob Joseph, Jerin Jacob, Narayana Prasad, Ankur Dwivedi, Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru, Konstantin Ananyev, dev From: Anoob Joseph <anoobj@marvell.com> Add framework for eventmode helper. Event mode involves initialization of multiple devices, such as eventdev and ethdev. Add routines to initialize and uninitialize the event device. Generate a default config for the event device if it is not specified in the configuration. Currently the event helper supports only a single event device. 
Signed-off-by: Anoob Joseph <anoobj@marvell.com> Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> --- examples/ipsec-secgw/Makefile | 1 + examples/ipsec-secgw/event_helper.c | 326 ++++++++++++++++++++++++++++++++++++ examples/ipsec-secgw/event_helper.h | 115 +++++++++++++ examples/ipsec-secgw/meson.build | 4 +- 4 files changed, 444 insertions(+), 2 deletions(-) create mode 100644 examples/ipsec-secgw/event_helper.c create mode 100644 examples/ipsec-secgw/event_helper.h diff --git a/examples/ipsec-secgw/Makefile b/examples/ipsec-secgw/Makefile index a4977f6..09e3c5a 100644 --- a/examples/ipsec-secgw/Makefile +++ b/examples/ipsec-secgw/Makefile @@ -15,6 +15,7 @@ SRCS-y += sa.c SRCS-y += rt.c SRCS-y += ipsec_process.c SRCS-y += ipsec-secgw.c +SRCS-y += event_helper.c CFLAGS += -gdwarf-2 diff --git a/examples/ipsec-secgw/event_helper.c b/examples/ipsec-secgw/event_helper.c new file mode 100644 index 0000000..82425de --- /dev/null +++ b/examples/ipsec-secgw/event_helper.c @@ -0,0 +1,326 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C) 2020 Marvell International Ltd. + */ +#include <rte_ethdev.h> +#include <rte_eventdev.h> + +#include "event_helper.h" + +static int +eh_set_default_conf_eventdev(struct eventmode_conf *em_conf) +{ + struct eventdev_params *eventdev_config; + struct rte_event_dev_info dev_info; + int lcore_count; + int nb_eventdev; + int nb_eth_dev; + int ret; + + /* Get the number of event devices */ + nb_eventdev = rte_event_dev_count(); + if (nb_eventdev == 0) { + EH_LOG_ERR("No event devices detected"); + return -EINVAL; + } + + if (nb_eventdev != 1) { + EH_LOG_ERR("Event mode does not support multiple event devices. 
" + "Please provide only one event device."); + return -EINVAL; + } + + /* Get the number of eth devs */ + nb_eth_dev = rte_eth_dev_count_avail(); + if (nb_eth_dev == 0) { + EH_LOG_ERR("No eth devices detected"); + return -EINVAL; + } + + /* Get the number of lcores */ + lcore_count = rte_lcore_count(); + + /* Read event device info */ + ret = rte_event_dev_info_get(0, &dev_info); + if (ret < 0) { + EH_LOG_ERR("Failed to read event device info %d", ret); + return ret; + } + + /* Check if enough ports are available */ + if (dev_info.max_event_ports < 2) { + EH_LOG_ERR("Not enough event ports available"); + return -EINVAL; + } + + /* Get the first event dev conf */ + eventdev_config = &(em_conf->eventdev_config[0]); + + /* Save number of queues & ports available */ + eventdev_config->eventdev_id = 0; + eventdev_config->nb_eventqueue = dev_info.max_event_queues; + eventdev_config->nb_eventport = dev_info.max_event_ports; + eventdev_config->ev_queue_mode = + RTE_EVENT_QUEUE_CFG_ALL_TYPES; + + /* Check if there are more queues than required */ + if (eventdev_config->nb_eventqueue > nb_eth_dev + 1) { + /* One queue is reserved for Tx */ + eventdev_config->nb_eventqueue = nb_eth_dev + 1; + } + + /* Check if there are more ports than required */ + if (eventdev_config->nb_eventport > lcore_count) { + /* One port per lcore is enough */ + eventdev_config->nb_eventport = lcore_count; + } + + /* Update the number of event devices */ + em_conf->nb_eventdev++; + + return 0; +} + +static int +eh_validate_conf(struct eventmode_conf *em_conf) +{ + int ret; + + /* + * Check if event devs are specified. 
Else probe the event devices + * and initialize the config with all ports & queues available + */ + if (em_conf->nb_eventdev == 0) { + ret = eh_set_default_conf_eventdev(em_conf); + if (ret != 0) + return ret; + } + + return 0; +} + +static int +eh_initialize_eventdev(struct eventmode_conf *em_conf) +{ + struct rte_event_queue_conf eventq_conf = {0}; + struct rte_event_dev_info evdev_default_conf; + struct rte_event_dev_config eventdev_conf; + struct eventdev_params *eventdev_config; + int nb_eventdev = em_conf->nb_eventdev; + uint8_t eventdev_id; + int nb_eventqueue; + uint8_t i, j; + int ret; + + for (i = 0; i < nb_eventdev; i++) { + + /* Get eventdev config */ + eventdev_config = &(em_conf->eventdev_config[i]); + + /* Get event dev ID */ + eventdev_id = eventdev_config->eventdev_id; + + /* Get the number of queues */ + nb_eventqueue = eventdev_config->nb_eventqueue; + + /* Reset the default conf */ + memset(&evdev_default_conf, 0, + sizeof(struct rte_event_dev_info)); + + /* Get default conf of eventdev */ + ret = rte_event_dev_info_get(eventdev_id, &evdev_default_conf); + if (ret < 0) { + EH_LOG_ERR( + "Error in getting event device info[devID:%d]", + eventdev_id); + return ret; + } + + memset(&eventdev_conf, 0, sizeof(struct rte_event_dev_config)); + eventdev_conf.nb_events_limit = + evdev_default_conf.max_num_events; + eventdev_conf.nb_event_queues = nb_eventqueue; + eventdev_conf.nb_event_ports = + eventdev_config->nb_eventport; + eventdev_conf.nb_event_queue_flows = + evdev_default_conf.max_event_queue_flows; + eventdev_conf.nb_event_port_dequeue_depth = + evdev_default_conf.max_event_port_dequeue_depth; + eventdev_conf.nb_event_port_enqueue_depth = + evdev_default_conf.max_event_port_enqueue_depth; + + /* Configure event device */ + ret = rte_event_dev_configure(eventdev_id, &eventdev_conf); + if (ret < 0) { + EH_LOG_ERR("Error in configuring event device"); + return ret; + } + + /* Configure event queues */ + for (j = 0; j < nb_eventqueue; j++) { + + 
memset(&eventq_conf, 0, + sizeof(struct rte_event_queue_conf)); + + /* Read the requested conf */ + + /* Per event dev queues can be ATQ or SINGLE LINK */ + eventq_conf.event_queue_cfg = + eventdev_config->ev_queue_mode; + /* + * All queues need to be set with sched_type as + * schedule type for the application stage. One queue + * would be reserved for the final eth tx stage. This + * will be an atomic queue. + */ + if (j == nb_eventqueue-1) { + eventq_conf.schedule_type = + RTE_SCHED_TYPE_ATOMIC; + } else { + eventq_conf.schedule_type = + em_conf->ext_params.sched_type; + } + + /* Set max atomic flows to 1024 */ + eventq_conf.nb_atomic_flows = 1024; + eventq_conf.nb_atomic_order_sequences = 1024; + + /* Setup the queue */ + ret = rte_event_queue_setup(eventdev_id, j, + &eventq_conf); + if (ret < 0) { + EH_LOG_ERR("Failed to setup event queue %d", + ret); + return ret; + } + } + + /* Configure event ports */ + for (j = 0; j < eventdev_config->nb_eventport; j++) { + ret = rte_event_port_setup(eventdev_id, j, NULL); + if (ret < 0) { + EH_LOG_ERR("Failed to setup event port %d", + ret); + return ret; + } + } + } + + /* Start event devices */ + for (i = 0; i < nb_eventdev; i++) { + + /* Get eventdev config */ + eventdev_config = &(em_conf->eventdev_config[i]); + + ret = rte_event_dev_start(eventdev_config->eventdev_id); + if (ret < 0) { + EH_LOG_ERR("Failed to start event device %d, %d", + i, ret); + return ret; + } + } + return 0; +} + +int32_t +eh_devs_init(struct eh_conf *conf) +{ + struct eventmode_conf *em_conf; + uint16_t port_id; + int ret; + + if (conf == NULL) { + EH_LOG_ERR("Invalid event helper configuration"); + return -EINVAL; + } + + if (conf->mode != EH_PKT_TRANSFER_MODE_EVENT) + return 0; + + if (conf->mode_params == NULL) { + EH_LOG_ERR("Invalid event mode parameters"); + return -EINVAL; + } + + /* Get eventmode conf */ + em_conf = conf->mode_params; + + /* Validate the requested config */ + ret = eh_validate_conf(em_conf); + if (ret < 0) { + 
EH_LOG_ERR("Failed to validate the requested config %d", ret); + return ret; + } + + /* Stop eth devices before setting up adapter */ + RTE_ETH_FOREACH_DEV(port_id) { + + /* Use only the ports enabled */ + if ((conf->eth_portmask & (1 << port_id)) == 0) + continue; + + rte_eth_dev_stop(port_id); + } + + /* Setup eventdev */ + ret = eh_initialize_eventdev(em_conf); + if (ret < 0) { + EH_LOG_ERR("Failed to initialize event dev %d", ret); + return ret; + } + + /* Start eth devices after setting up adapter */ + RTE_ETH_FOREACH_DEV(port_id) { + + /* Use only the ports enabled */ + if ((conf->eth_portmask & (1 << port_id)) == 0) + continue; + + ret = rte_eth_dev_start(port_id); + if (ret < 0) { + EH_LOG_ERR("Failed to start eth dev %d, %d", + port_id, ret); + return ret; + } + } + + return 0; +} + +int32_t +eh_devs_uninit(struct eh_conf *conf) +{ + struct eventmode_conf *em_conf; + uint16_t id; + int ret, i; + + if (conf == NULL) { + EH_LOG_ERR("Invalid event helper configuration"); + return -EINVAL; + } + + if (conf->mode != EH_PKT_TRANSFER_MODE_EVENT) + return 0; + + if (conf->mode_params == NULL) { + EH_LOG_ERR("Invalid event mode parameters"); + return -EINVAL; + } + + /* Get eventmode conf */ + em_conf = conf->mode_params; + + /* Stop and release event devices */ + for (i = 0; i < em_conf->nb_eventdev; i++) { + + id = em_conf->eventdev_config[i].eventdev_id; + rte_event_dev_stop(id); + + ret = rte_event_dev_close(id); + if (ret < 0) { + EH_LOG_ERR("Failed to close event dev %d, %d", id, ret); + return ret; + } + } + + return 0; +} diff --git a/examples/ipsec-secgw/event_helper.h b/examples/ipsec-secgw/event_helper.h new file mode 100644 index 0000000..7685987 --- /dev/null +++ b/examples/ipsec-secgw/event_helper.h @@ -0,0 +1,115 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C) 2020 Marvell International Ltd. 
+ */ +#ifndef _EVENT_HELPER_H_ +#define _EVENT_HELPER_H_ + +#ifdef __cplusplus +extern "C" { +#endif + +#include <rte_log.h> + +#define RTE_LOGTYPE_EH RTE_LOGTYPE_USER4 + +#define EH_LOG_ERR(...) \ + RTE_LOG(ERR, EH, \ + RTE_FMT("%s() line %u: " RTE_FMT_HEAD(__VA_ARGS__ ,) "\n", \ + __func__, __LINE__, RTE_FMT_TAIL(__VA_ARGS__ ,))) + +/* Max event devices supported */ +#define EVENT_MODE_MAX_EVENT_DEVS RTE_EVENT_MAX_DEVS + +/** + * Packet transfer mode of the application + */ +enum eh_pkt_transfer_mode { + EH_PKT_TRANSFER_MODE_POLL = 0, + EH_PKT_TRANSFER_MODE_EVENT, +}; + +/* Event dev params */ +struct eventdev_params { + uint8_t eventdev_id; + uint8_t nb_eventqueue; + uint8_t nb_eventport; + uint8_t ev_queue_mode; +}; + +/* Eventmode conf data */ +struct eventmode_conf { + int nb_eventdev; + /**< No of event devs */ + struct eventdev_params eventdev_config[EVENT_MODE_MAX_EVENT_DEVS]; + /**< Per event dev conf */ + union { + RTE_STD_C11 + struct { + uint64_t sched_type : 2; + /**< Schedule type */ + }; + uint64_t u64; + } ext_params; + /**< 64 bit field to specify extended params */ +}; + +/** + * Event helper configuration + */ +struct eh_conf { + enum eh_pkt_transfer_mode mode; + /**< Packet transfer mode of the application */ + uint32_t eth_portmask; + /**< + * Mask of the eth ports to be used. This portmask would be + * checked while initializing devices using helper routines. + */ + void *mode_params; + /**< Mode specific parameters */ +}; + +/** + * Initialize event mode devices + * + * Application can call this function to get the event devices, eth devices + * and eth rx & tx adapters initialized according to the default config or + * config populated using the command line args. + * + * Application is expected to initialize the eth devices and then the event + * mode helper subsystem will stop & start eth devices according to its + * requirement. Call to this function should be done after the eth devices + * are successfully initialized. 
+ * + * @param conf + * Event helper configuration + * @return + * - 0 on success. + * - (<0) on failure. + */ +int32_t +eh_devs_init(struct eh_conf *conf); + +/** + * Release event mode devices + * + * Application can call this function to release event devices, + * eth rx & tx adapters according to the config. + * + * Call to this function should be done before application stops + * and closes eth devices. This function will not close and stop + * eth devices. + * + * @param conf + * Event helper configuration + * @return + * - 0 on success. + * - (<0) on failure. + */ +int32_t +eh_devs_uninit(struct eh_conf *conf); + +#ifdef __cplusplus +} +#endif + +#endif /* _EVENT_HELPER_H_ */ diff --git a/examples/ipsec-secgw/meson.build b/examples/ipsec-secgw/meson.build index 9ece345..20f4064 100644 --- a/examples/ipsec-secgw/meson.build +++ b/examples/ipsec-secgw/meson.build @@ -6,9 +6,9 @@ # To build this example as a standalone application with an already-installed # DPDK instance, use 'make' -deps += ['security', 'lpm', 'acl', 'hash', 'ip_frag', 'ipsec'] +deps += ['security', 'lpm', 'acl', 'hash', 'ip_frag', 'ipsec', 'eventdev'] allow_experimental_apis = true sources = files( 'esp.c', 'ipsec.c', 'ipsec_process.c', 'ipsec-secgw.c', - 'parser.c', 'rt.c', 'sa.c', 'sp4.c', 'sp6.c' + 'parser.c', 'rt.c', 'sa.c', 'sp4.c', 'sp6.c', 'event_helper.c' ) -- 2.7.4 ^ permalink raw reply [flat|nested] 147+ messages in thread
* [dpdk-dev] [PATCH v3 03/13] examples/ipsec-secgw: add eventdev port-lcore link 2020-02-04 13:58 ` [dpdk-dev] [PATCH v3 00/13] " Lukasz Bartosik 2020-02-04 13:58 ` [dpdk-dev] [PATCH v3 01/13] examples/ipsec-secgw: add default rte flow for inline Rx Lukasz Bartosik 2020-02-04 13:58 ` [dpdk-dev] [PATCH v3 02/13] examples/ipsec-secgw: add framework for eventmode helper Lukasz Bartosik @ 2020-02-04 13:58 ` Lukasz Bartosik 2020-02-04 13:58 ` [dpdk-dev] [PATCH v3 04/13] examples/ipsec-secgw: add Rx adapter support Lukasz Bartosik ` (10 subsequent siblings) 13 siblings, 0 replies; 147+ messages in thread From: Lukasz Bartosik @ 2020-02-04 13:58 UTC (permalink / raw) To: Akhil Goyal, Radu Nicolau, Thomas Monjalon Cc: Anoob Joseph, Jerin Jacob, Narayana Prasad, Ankur Dwivedi, Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru, Konstantin Ananyev, dev From: Anoob Joseph <anoobj@marvell.com> Add event device port-lcore link and specify which event queues should be connected to the event port. Generate a default config for event port-lcore links if it is not specified in the configuration. This routine will check the number of available ports and then create links according to the number of cores available. This patch also adds a new entry in the eventmode conf to denote that all queues are to be linked with every port. This enables one core to receive packets from all ethernet ports. Signed-off-by: Anoob Joseph <anoobj@marvell.com> Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> --- examples/ipsec-secgw/event_helper.c | 126 ++++++++++++++++++++++++++++++++++++ examples/ipsec-secgw/event_helper.h | 33 ++++++++++ 2 files changed, 159 insertions(+) diff --git a/examples/ipsec-secgw/event_helper.c b/examples/ipsec-secgw/event_helper.c index 82425de..cf2dff0 100644 --- a/examples/ipsec-secgw/event_helper.c +++ b/examples/ipsec-secgw/event_helper.c @@ -1,11 +1,33 @@ /* SPDX-License-Identifier: BSD-3-Clause * Copyright (C) 2020 Marvell International Ltd. 
*/ +#include <rte_bitmap.h> #include <rte_ethdev.h> #include <rte_eventdev.h> +#include <rte_malloc.h> #include "event_helper.h" +static inline unsigned int +eh_get_next_active_core(struct eventmode_conf *em_conf, unsigned int prev_core) +{ + unsigned int next_core; + + /* Get next active core skipping cores reserved as eth cores */ + do { + /* Get the next core */ + next_core = rte_get_next_lcore(prev_core, 0, 0); + + /* Check if we have reached max lcores */ + if (next_core == RTE_MAX_LCORE) + return next_core; + + prev_core = next_core; + } while (rte_bitmap_get(em_conf->eth_core_mask, next_core)); + + return next_core; +} + static int eh_set_default_conf_eventdev(struct eventmode_conf *em_conf) { @@ -81,6 +103,71 @@ eh_set_default_conf_eventdev(struct eventmode_conf *em_conf) } static int +eh_set_default_conf_link(struct eventmode_conf *em_conf) +{ + struct eventdev_params *eventdev_config; + struct eh_event_link_info *link; + unsigned int lcore_id = -1; + int i, link_index; + + /* + * Create a 1:1 mapping from event ports to cores. If the number + * of event ports is lesser than the cores, some cores won't + * execute worker. If there are more event ports, then some ports + * won't be used. + * + */ + + /* + * The event queue-port mapping is done according to the link. Since + * we are falling back to the default link config, enabling + * "all_ev_queue_to_ev_port" mode flag. This will map all queues + * to the port. 
+ */ + em_conf->ext_params.all_ev_queue_to_ev_port = 1; + + /* Get first event dev conf */ + eventdev_config = &(em_conf->eventdev_config[0]); + + /* Loop through the ports */ + for (i = 0; i < eventdev_config->nb_eventport; i++) { + + /* Get next active core id */ + lcore_id = eh_get_next_active_core(em_conf, + lcore_id); + + if (lcore_id == RTE_MAX_LCORE) { + /* Reached max cores */ + return 0; + } + + /* Save the current combination as one link */ + + /* Get the index */ + link_index = em_conf->nb_link; + + /* Get the corresponding link */ + link = &(em_conf->link[link_index]); + + /* Save link */ + link->eventdev_id = eventdev_config->eventdev_id; + link->event_port_id = i; + link->lcore_id = lcore_id; + + /* + * Don't set eventq_id as by default all queues + * need to be mapped to the port, which is controlled + * by the operating mode. + */ + + /* Update number of links */ + em_conf->nb_link++; + } + + return 0; +} + +static int eh_validate_conf(struct eventmode_conf *em_conf) { int ret; @@ -95,6 +182,16 @@ eh_validate_conf(struct eventmode_conf *em_conf) return ret; } + /* + * Check if links are specified. Else generate a default config for + * the event ports used. 
+ */ + if (em_conf->nb_link == 0) { + ret = eh_set_default_conf_link(em_conf); + if (ret != 0) + return ret; + } + return 0; } @@ -106,6 +203,8 @@ eh_initialize_eventdev(struct eventmode_conf *em_conf) struct rte_event_dev_config eventdev_conf; struct eventdev_params *eventdev_config; int nb_eventdev = em_conf->nb_eventdev; + struct eh_event_link_info *link; + uint8_t *queue = NULL; uint8_t eventdev_id; int nb_eventqueue; uint8_t i, j; @@ -205,6 +304,33 @@ eh_initialize_eventdev(struct eventmode_conf *em_conf) } } + /* Make event queue - event port link */ + for (j = 0; j < em_conf->nb_link; j++) { + + /* Get link info */ + link = &(em_conf->link[j]); + + /* Get event dev ID */ + eventdev_id = link->eventdev_id; + + /* + * If "all_ev_queue_to_ev_port" params flag is selected, all + * queues need to be mapped to the port. + */ + if (em_conf->ext_params.all_ev_queue_to_ev_port) + queue = NULL; + else + queue = &(link->eventq_id); + + /* Link queue to port */ + ret = rte_event_port_link(eventdev_id, link->event_port_id, + queue, NULL, 1); + if (ret < 0) { + EH_LOG_ERR("Failed to link event port %d", ret); + return ret; + } + } + /* Start event devices */ for (i = 0; i < nb_eventdev; i++) { diff --git a/examples/ipsec-secgw/event_helper.h b/examples/ipsec-secgw/event_helper.h index 7685987..16b03b3 100644 --- a/examples/ipsec-secgw/event_helper.h +++ b/examples/ipsec-secgw/event_helper.h @@ -20,6 +20,13 @@ extern "C" { /* Max event devices supported */ #define EVENT_MODE_MAX_EVENT_DEVS RTE_EVENT_MAX_DEVS +/* Max event queues supported per event device */ +#define EVENT_MODE_MAX_EVENT_QUEUES_PER_DEV RTE_EVENT_MAX_QUEUES_PER_DEV + +/* Max event-lcore links */ +#define EVENT_MODE_MAX_LCORE_LINKS \ + (EVENT_MODE_MAX_EVENT_DEVS * EVENT_MODE_MAX_EVENT_QUEUES_PER_DEV) + /** * Packet transfer mode of the application */ @@ -36,17 +43,43 @@ struct eventdev_params { uint8_t ev_queue_mode; }; +/** + * Event-lcore link configuration + */ +struct eh_event_link_info { + uint8_t 
eventdev_id; + /**< Event device ID */ + uint8_t event_port_id; + /**< Event port ID */ + uint8_t eventq_id; + /**< Event queue to be linked to the port */ + uint8_t lcore_id; + /**< Lcore to be polling on this port */ +}; + /* Eventmode conf data */ struct eventmode_conf { int nb_eventdev; /**< No of event devs */ struct eventdev_params eventdev_config[EVENT_MODE_MAX_EVENT_DEVS]; /**< Per event dev conf */ + uint8_t nb_link; + /**< No of links */ + struct eh_event_link_info + link[EVENT_MODE_MAX_LCORE_LINKS]; + /**< Per link conf */ + struct rte_bitmap *eth_core_mask; + /**< Core mask of cores to be used for software Rx and Tx */ union { RTE_STD_C11 struct { uint64_t sched_type : 2; /**< Schedule type */ + uint64_t all_ev_queue_to_ev_port : 1; + /**< + * When enabled, all event queues need to be mapped to + * each event port + */ }; uint64_t u64; } ext_params; -- 2.7.4 ^ permalink raw reply [flat|nested] 147+ messages in thread
* [dpdk-dev] [PATCH v3 04/13] examples/ipsec-secgw: add Rx adapter support 2020-02-04 13:58 ` [dpdk-dev] [PATCH v3 00/13] " Lukasz Bartosik ` (2 preceding siblings ...) 2020-02-04 13:58 ` [dpdk-dev] [PATCH v3 03/13] examples/ipsec-secgw: add eventdev port-lcore link Lukasz Bartosik @ 2020-02-04 13:58 ` Lukasz Bartosik 2020-02-04 13:58 ` [dpdk-dev] [PATCH v3 05/13] examples/ipsec-secgw: add Tx " Lukasz Bartosik ` (9 subsequent siblings) 13 siblings, 0 replies; 147+ messages in thread From: Lukasz Bartosik @ 2020-02-04 13:58 UTC (permalink / raw) To: Akhil Goyal, Radu Nicolau, Thomas Monjalon Cc: Anoob Joseph, Jerin Jacob, Narayana Prasad, Ankur Dwivedi, Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru, Konstantin Ananyev, dev From: Anoob Joseph <anoobj@marvell.com> Add Rx adapter support. The event helper init routine will initialize the Rx adapter according to the configuration. If Rx adapter config is not present it will generate a default config. If there are enough event queues available it will map eth ports and event queues 1:1 (one eth port will be connected to one event queue). Otherwise it will map all eth ports to one event queue. 
Signed-off-by: Anoob Joseph <anoobj@marvell.com> Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> --- examples/ipsec-secgw/event_helper.c | 273 +++++++++++++++++++++++++++++++++++- examples/ipsec-secgw/event_helper.h | 29 ++++ 2 files changed, 301 insertions(+), 1 deletion(-) diff --git a/examples/ipsec-secgw/event_helper.c b/examples/ipsec-secgw/event_helper.c index cf2dff0..1d06a45 100644 --- a/examples/ipsec-secgw/event_helper.c +++ b/examples/ipsec-secgw/event_helper.c @@ -4,10 +4,58 @@ #include <rte_bitmap.h> #include <rte_ethdev.h> #include <rte_eventdev.h> +#include <rte_event_eth_rx_adapter.h> #include <rte_malloc.h> +#include <stdbool.h> #include "event_helper.h" +static int +eh_get_enabled_cores(struct rte_bitmap *eth_core_mask) +{ + int i, count = 0; + + RTE_LCORE_FOREACH(i) { + /* Check if this core is enabled in core mask*/ + if (rte_bitmap_get(eth_core_mask, i)) { + /* Found enabled core */ + count++; + } + } + return count; +} + +static inline unsigned int +eh_get_next_eth_core(struct eventmode_conf *em_conf) +{ + static unsigned int prev_core = -1; + unsigned int next_core; + + /* + * Make sure we have at least one eth core running, else the following + * logic would lead to an infinite loop. 
+ */ + if (eh_get_enabled_cores(em_conf->eth_core_mask) == 0) { + EH_LOG_ERR("No enabled eth core found"); + return RTE_MAX_LCORE; + } + + /* Only some cores are marked as eth cores, skip others */ + do { + /* Get the next core */ + next_core = rte_get_next_lcore(prev_core, 0, 1); + + /* Check if we have reached max lcores */ + if (next_core == RTE_MAX_LCORE) + return next_core; + + /* Update prev_core */ + prev_core = next_core; + } while (!(rte_bitmap_get(em_conf->eth_core_mask, next_core))); + + return next_core; +} + static inline unsigned int eh_get_next_active_core(struct eventmode_conf *em_conf, unsigned int prev_core) { @@ -168,6 +216,82 @@ eh_set_default_conf_link(struct eventmode_conf *em_conf) } static int +eh_set_default_conf_rx_adapter(struct eventmode_conf *em_conf) +{ + struct rx_adapter_connection_info *conn; + struct eventdev_params *eventdev_config; + struct rx_adapter_conf *adapter; + bool single_ev_queue = false; + int eventdev_id; + int nb_eth_dev; + int adapter_id; + int conn_id; + int i; + + /* Create one adapter with eth queues mapped to event queue(s) */ + + if (em_conf->nb_eventdev == 0) { + EH_LOG_ERR("No event devs registered"); + return -EINVAL; + } + + /* Get the number of eth devs */ + nb_eth_dev = rte_eth_dev_count_avail(); + + /* Use the first event dev */ + eventdev_config = &(em_conf->eventdev_config[0]); + + /* Get eventdev ID */ + eventdev_id = eventdev_config->eventdev_id; + adapter_id = 0; + + /* Get adapter conf */ + adapter = &(em_conf->rx_adapter[adapter_id]); + + /* Set adapter conf */ + adapter->eventdev_id = eventdev_id; + adapter->adapter_id = adapter_id; + adapter->rx_core_id = eh_get_next_eth_core(em_conf); + + /* + * Map all queues of eth device (port) to an event queue. If there + * are more event queues than eth ports then create 1:1 mapping. + * Otherwise map all eth ports to a single event queue. 
+ */ + if (nb_eth_dev > eventdev_config->nb_eventqueue) + single_ev_queue = true; + + for (i = 0; i < nb_eth_dev; i++) { + + /* Use only the ports enabled */ + if ((em_conf->eth_portmask & (1 << i)) == 0) + continue; + + /* Get the connection id */ + conn_id = adapter->nb_connections; + + /* Get the connection */ + conn = &(adapter->conn[conn_id]); + + /* Set mapping between eth ports & event queues*/ + conn->ethdev_id = i; + conn->eventq_id = single_ev_queue ? 0 : i; + + /* Add all eth queues eth port to event queue */ + conn->ethdev_rx_qid = -1; + + /* Update no of connections */ + adapter->nb_connections++; + + } + + /* We have setup one adapter */ + em_conf->nb_rx_adapter = 1; + + return 0; +} + +static int eh_validate_conf(struct eventmode_conf *em_conf) { int ret; @@ -192,6 +316,16 @@ eh_validate_conf(struct eventmode_conf *em_conf) return ret; } + /* + * Check if rx adapters are specified. Else generate a default config + * with one rx adapter and all eth queues - event queue mapped. 
+ */ + if (em_conf->nb_rx_adapter == 0) { + ret = eh_set_default_conf_rx_adapter(em_conf); + if (ret != 0) + return ret; + } + return 0; } @@ -347,6 +481,104 @@ eh_initialize_eventdev(struct eventmode_conf *em_conf) return 0; } +static int +eh_rx_adapter_configure(struct eventmode_conf *em_conf, + struct rx_adapter_conf *adapter) +{ + struct rte_event_eth_rx_adapter_queue_conf queue_conf = {0}; + struct rte_event_dev_info evdev_default_conf = {0}; + struct rte_event_port_conf port_conf = {0}; + struct rx_adapter_connection_info *conn; + uint8_t eventdev_id; + uint32_t service_id; + int ret; + int j; + + /* Get event dev ID */ + eventdev_id = adapter->eventdev_id; + + /* Get default configuration of event dev */ + ret = rte_event_dev_info_get(eventdev_id, &evdev_default_conf); + if (ret < 0) { + EH_LOG_ERR("Failed to get event dev info %d", ret); + return ret; + } + + /* Setup port conf */ + port_conf.new_event_threshold = 1200; + port_conf.dequeue_depth = + evdev_default_conf.max_event_port_dequeue_depth; + port_conf.enqueue_depth = + evdev_default_conf.max_event_port_enqueue_depth; + + /* Create Rx adapter */ + ret = rte_event_eth_rx_adapter_create(adapter->adapter_id, + adapter->eventdev_id, &port_conf); + if (ret < 0) { + EH_LOG_ERR("Failed to create rx adapter %d", ret); + return ret; + } + + /* Setup various connections in the adapter */ + for (j = 0; j < adapter->nb_connections; j++) { + /* Get connection */ + conn = &(adapter->conn[j]); + + /* Setup queue conf */ + queue_conf.ev.queue_id = conn->eventq_id; + queue_conf.ev.sched_type = em_conf->ext_params.sched_type; + queue_conf.ev.event_type = RTE_EVENT_TYPE_ETHDEV; + + /* Add queue to the adapter */ + ret = rte_event_eth_rx_adapter_queue_add(adapter->adapter_id, + conn->ethdev_id, conn->ethdev_rx_qid, + &queue_conf); + if (ret < 0) { + EH_LOG_ERR("Failed to add eth queue to rx adapter %d", + ret); + return ret; + } + } + + /* Get the service ID used by rx adapter */ + ret = 
rte_event_eth_rx_adapter_service_id_get(adapter->adapter_id, + &service_id); + if (ret != -ESRCH && ret < 0) { + EH_LOG_ERR("Failed to get service id used by rx adapter %d", + ret); + return ret; + } + + rte_service_set_runstate_mapped_check(service_id, 0); + + /* Start adapter */ + ret = rte_event_eth_rx_adapter_start(adapter->adapter_id); + if (ret < 0) { + EH_LOG_ERR("Failed to start rx adapter %d", ret); + return ret; + } + + return 0; +} + +static int +eh_initialize_rx_adapter(struct eventmode_conf *em_conf) +{ + struct rx_adapter_conf *adapter; + int i, ret; + + /* Configure rx adapters */ + for (i = 0; i < em_conf->nb_rx_adapter; i++) { + adapter = &(em_conf->rx_adapter[i]); + ret = eh_rx_adapter_configure(em_conf, adapter); + if (ret < 0) { + EH_LOG_ERR("Failed to configure rx adapter %d", ret); + return ret; + } + } + return 0; +} + int32_t eh_devs_init(struct eh_conf *conf) { @@ -370,6 +602,9 @@ eh_devs_init(struct eh_conf *conf) /* Get eventmode conf */ em_conf = conf->mode_params; + /* Eventmode conf would need eth portmask */ + em_conf->eth_portmask = conf->eth_portmask; + /* Validate the requested config */ ret = eh_validate_conf(em_conf); if (ret < 0) { @@ -394,6 +629,13 @@ eh_devs_init(struct eh_conf *conf) return ret; } + /* Setup Rx adapter */ + ret = eh_initialize_rx_adapter(em_conf); + if (ret < 0) { + EH_LOG_ERR("Failed to initialize rx adapter %d", ret); + return ret; + } + /* Start eth devices after setting up adapter */ RTE_ETH_FOREACH_DEV(port_id) { @@ -416,8 +658,8 @@ int32_t eh_devs_uninit(struct eh_conf *conf) { struct eventmode_conf *em_conf; + int ret, i, j; uint16_t id; - int ret, i; if (conf == NULL) { EH_LOG_ERR("Invalid event helper configuration"); @@ -435,6 +677,35 @@ eh_devs_uninit(struct eh_conf *conf) /* Get eventmode conf */ em_conf = conf->mode_params; + /* Stop and release rx adapters */ + for (i = 0; i < em_conf->nb_rx_adapter; i++) { + + id = em_conf->rx_adapter[i].adapter_id; + ret = rte_event_eth_rx_adapter_stop(id); + 
if (ret < 0) { + EH_LOG_ERR("Failed to stop rx adapter %d", ret); + return ret; + } + + for (j = 0; j < em_conf->rx_adapter[i].nb_connections; j++) { + + ret = rte_event_eth_rx_adapter_queue_del(id, + em_conf->rx_adapter[i].conn[j].ethdev_id, -1); + if (ret < 0) { + EH_LOG_ERR( + "Failed to remove rx adapter queues %d", + ret); + return ret; + } + } + + ret = rte_event_eth_rx_adapter_free(id); + if (ret < 0) { + EH_LOG_ERR("Failed to free rx adapter %d", ret); + return ret; + } + } + /* Stop and release event devices */ for (i = 0; i < em_conf->nb_eventdev; i++) { diff --git a/examples/ipsec-secgw/event_helper.h b/examples/ipsec-secgw/event_helper.h index 16b03b3..baf93e1 100644 --- a/examples/ipsec-secgw/event_helper.h +++ b/examples/ipsec-secgw/event_helper.h @@ -20,6 +20,12 @@ extern "C" { /* Max event devices supported */ #define EVENT_MODE_MAX_EVENT_DEVS RTE_EVENT_MAX_DEVS +/* Max Rx adapters supported */ +#define EVENT_MODE_MAX_RX_ADAPTERS RTE_EVENT_MAX_DEVS + +/* Max Rx adapter connections */ +#define EVENT_MODE_MAX_CONNECTIONS_PER_ADAPTER 16 + /* Max event queues supported per event device */ #define EVENT_MODE_MAX_EVENT_QUEUES_PER_DEV RTE_EVENT_MAX_QUEUES_PER_DEV @@ -57,12 +63,33 @@ struct eh_event_link_info { /**< Lcore to be polling on this port */ }; +/* Rx adapter connection info */ +struct rx_adapter_connection_info { + uint8_t ethdev_id; + uint8_t eventq_id; + int32_t ethdev_rx_qid; +}; + +/* Rx adapter conf */ +struct rx_adapter_conf { + int32_t eventdev_id; + int32_t adapter_id; + uint32_t rx_core_id; + uint8_t nb_connections; + struct rx_adapter_connection_info + conn[EVENT_MODE_MAX_CONNECTIONS_PER_ADAPTER]; +}; + /* Eventmode conf data */ struct eventmode_conf { int nb_eventdev; /**< No of event devs */ struct eventdev_params eventdev_config[EVENT_MODE_MAX_EVENT_DEVS]; /**< Per event dev conf */ + uint8_t nb_rx_adapter; + /**< No of Rx adapters */ + struct rx_adapter_conf rx_adapter[EVENT_MODE_MAX_RX_ADAPTERS]; + /**< Rx adapter conf */ uint8_t 
nb_link; /**< No of links */ struct eh_event_link_info @@ -70,6 +97,8 @@ struct eventmode_conf { /**< Per link conf */ struct rte_bitmap *eth_core_mask; /**< Core mask of cores to be used for software Rx and Tx */ + uint32_t eth_portmask; + /**< Mask of the eth ports to be used */ union { RTE_STD_C11 struct { -- 2.7.4 ^ permalink raw reply [flat|nested] 147+ messages in thread
* [dpdk-dev] [PATCH v3 05/13] examples/ipsec-secgw: add Tx adapter support 2020-02-04 13:58 ` [dpdk-dev] [PATCH v3 00/13] " Lukasz Bartosik ` (3 preceding siblings ...) 2020-02-04 13:58 ` [dpdk-dev] [PATCH v3 04/13] examples/ipsec-secgw: add Rx adapter support Lukasz Bartosik @ 2020-02-04 13:58 ` Lukasz Bartosik 2020-02-04 13:58 ` [dpdk-dev] [PATCH v3 06/13] examples/ipsec-secgw: add routines to display config Lukasz Bartosik ` (8 subsequent siblings) 13 siblings, 0 replies; 147+ messages in thread From: Lukasz Bartosik @ 2020-02-04 13:58 UTC (permalink / raw) To: Akhil Goyal, Radu Nicolau, Thomas Monjalon Cc: Anoob Joseph, Jerin Jacob, Narayana Prasad, Ankur Dwivedi, Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru, Konstantin Ananyev, dev From: Anoob Joseph <anoobj@marvell.com> Add Tx adapter support. The event helper init routine will initialize the Tx adapter according to the configuration. If Tx adapter config is not present it will generate a default config. Signed-off-by: Anoob Joseph <anoobj@marvell.com> Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> --- examples/ipsec-secgw/event_helper.c | 313 ++++++++++++++++++++++++++++++++++++ examples/ipsec-secgw/event_helper.h | 48 ++++++ 2 files changed, 361 insertions(+) diff --git a/examples/ipsec-secgw/event_helper.c b/examples/ipsec-secgw/event_helper.c index 1d06a45..e6569c1 100644 --- a/examples/ipsec-secgw/event_helper.c +++ b/examples/ipsec-secgw/event_helper.c @@ -5,6 +5,7 @@ #include <rte_ethdev.h> #include <rte_eventdev.h> #include <rte_event_eth_rx_adapter.h> +#include <rte_event_eth_tx_adapter.h> #include <rte_malloc.h> #include <stdbool.h> @@ -76,6 +77,22 @@ eh_get_next_active_core(struct eventmode_conf *em_conf, unsigned int prev_core) return next_core; } +static struct eventdev_params * +eh_get_eventdev_params(struct eventmode_conf *em_conf, uint8_t eventdev_id) +{ + int i; + + for (i = 0; i < em_conf->nb_eventdev; i++) { + if (em_conf->eventdev_config[i].eventdev_id == eventdev_id) + 
break; + } + + /* No match */ + if (i == em_conf->nb_eventdev) + return NULL; + + return &(em_conf->eventdev_config[i]); +} static int eh_set_default_conf_eventdev(struct eventmode_conf *em_conf) { @@ -292,6 +309,95 @@ eh_set_default_conf_rx_adapter(struct eventmode_conf *em_conf) } static int +eh_set_default_conf_tx_adapter(struct eventmode_conf *em_conf) +{ + struct tx_adapter_connection_info *conn; + struct eventdev_params *eventdev_config; + struct tx_adapter_conf *tx_adapter; + int eventdev_id; + int adapter_id; + int nb_eth_dev; + int conn_id; + int i; + + /* + * Create one Tx adapter with all eth queues mapped to event queues + * 1:1. + */ + + if (em_conf->nb_eventdev == 0) { + EH_LOG_ERR("No event devs registered"); + return -EINVAL; + } + + /* Get the number of eth devs */ + nb_eth_dev = rte_eth_dev_count_avail(); + + /* Use the first event dev */ + eventdev_config = &(em_conf->eventdev_config[0]); + + /* Get eventdev ID */ + eventdev_id = eventdev_config->eventdev_id; + adapter_id = 0; + + /* Get adapter conf */ + tx_adapter = &(em_conf->tx_adapter[adapter_id]); + + /* Set adapter conf */ + tx_adapter->eventdev_id = eventdev_id; + tx_adapter->adapter_id = adapter_id; + + /* TODO: Tx core is required only when internal port is not present */ + tx_adapter->tx_core_id = eh_get_next_eth_core(em_conf); + + /* + * Application uses one event queue per adapter for submitting + * packets for Tx. Reserve the last queue available and decrement + * the total available event queues for this + */ + + /* Queue numbers start at 0 */ + tx_adapter->tx_ev_queue = eventdev_config->nb_eventqueue - 1; + + /* + * Map all Tx queues of the eth device (port) to the event device. + */ + + /* Set defaults for connections */ + + /* + * One eth device (port) is one connection. Map all Tx queues + * of the device to the Tx adapter. 
+ */ + + for (i = 0; i < nb_eth_dev; i++) { + + /* Use only the ports enabled */ + if ((em_conf->eth_portmask & (1 << i)) == 0) + continue; + + /* Get the connection id */ + conn_id = tx_adapter->nb_connections; + + /* Get the connection */ + conn = &(tx_adapter->conn[conn_id]); + + /* Add ethdev to connections */ + conn->ethdev_id = i; + + /* Add all eth tx queues to adapter */ + conn->ethdev_tx_qid = -1; + + /* Update no of connections */ + tx_adapter->nb_connections++; + } + + /* We have setup one adapter */ + em_conf->nb_tx_adapter = 1; + return 0; +} + +static int eh_validate_conf(struct eventmode_conf *em_conf) { int ret; @@ -326,6 +432,16 @@ eh_validate_conf(struct eventmode_conf *em_conf) return ret; } + /* + * Check if tx adapters are specified. Else generate a default config + * with one tx adapter. + */ + if (em_conf->nb_tx_adapter == 0) { + ret = eh_set_default_conf_tx_adapter(em_conf); + if (ret != 0) + return ret; + } + return 0; } @@ -579,6 +695,133 @@ eh_initialize_rx_adapter(struct eventmode_conf *em_conf) return 0; } +static int +eh_tx_adapter_configure(struct eventmode_conf *em_conf, + struct tx_adapter_conf *adapter) +{ + struct rte_event_dev_info evdev_default_conf = {0}; + struct rte_event_port_conf port_conf = {0}; + struct tx_adapter_connection_info *conn; + struct eventdev_params *eventdev_config; + uint8_t tx_port_id = 0; + uint8_t eventdev_id; + uint32_t service_id; + int ret, j; + + /* Get event dev ID */ + eventdev_id = adapter->eventdev_id; + + /* Get event device conf */ + eventdev_config = eh_get_eventdev_params(em_conf, eventdev_id); + + /* Create Tx adapter */ + + /* Get default configuration of event dev */ + ret = rte_event_dev_info_get(eventdev_id, &evdev_default_conf); + if (ret < 0) { + EH_LOG_ERR("Failed to get event dev info %d", ret); + return ret; + } + + /* Setup port conf */ + port_conf.new_event_threshold = + evdev_default_conf.max_num_events; + port_conf.dequeue_depth = + 
evdev_default_conf.max_event_port_dequeue_depth; + port_conf.enqueue_depth = + evdev_default_conf.max_event_port_enqueue_depth; + + /* Create adapter */ + ret = rte_event_eth_tx_adapter_create(adapter->adapter_id, + adapter->eventdev_id, &port_conf); + if (ret < 0) { + EH_LOG_ERR("Failed to create tx adapter %d", ret); + return ret; + } + + /* Setup various connections in the adapter */ + for (j = 0; j < adapter->nb_connections; j++) { + + /* Get connection */ + conn = &(adapter->conn[j]); + + /* Add queue to the adapter */ + ret = rte_event_eth_tx_adapter_queue_add(adapter->adapter_id, + conn->ethdev_id, conn->ethdev_tx_qid); + if (ret < 0) { + EH_LOG_ERR("Failed to add eth queue to tx adapter %d", + ret); + return ret; + } + } + + /* Setup Tx queue & port */ + + /* Get event port used by the adapter */ + ret = rte_event_eth_tx_adapter_event_port_get( + adapter->adapter_id, &tx_port_id); + if (ret) { + EH_LOG_ERR("Failed to get tx adapter port id %d", ret); + return ret; + } + + /* + * Tx event queue is reserved for Tx adapter. 
Unlink this queue + * from all other ports + * + */ + for (j = 0; j < eventdev_config->nb_eventport; j++) { + rte_event_port_unlink(eventdev_id, j, + &(adapter->tx_ev_queue), 1); + } + + /* Link Tx event queue to Tx port */ + ret = rte_event_port_link(eventdev_id, tx_port_id, + &(adapter->tx_ev_queue), NULL, 1); + if (ret != 1) { + EH_LOG_ERR("Failed to link event queue to port"); + return ret; + } + + /* Get the service ID used by Tx adapter */ + ret = rte_event_eth_tx_adapter_service_id_get(adapter->adapter_id, + &service_id); + if (ret != -ESRCH && ret < 0) { + EH_LOG_ERR("Failed to get service id used by tx adapter %d", + ret); + return ret; + } + + rte_service_set_runstate_mapped_check(service_id, 0); + + /* Start adapter */ + ret = rte_event_eth_tx_adapter_start(adapter->adapter_id); + if (ret < 0) { + EH_LOG_ERR("Failed to start tx adapter %d", ret); + return ret; + } + + return 0; +} + +static int +eh_initialize_tx_adapter(struct eventmode_conf *em_conf) +{ + struct tx_adapter_conf *adapter; + int i, ret; + + /* Configure Tx adapters */ + for (i = 0; i < em_conf->nb_tx_adapter; i++) { + adapter = &(em_conf->tx_adapter[i]); + ret = eh_tx_adapter_configure(em_conf, adapter); + if (ret < 0) { + EH_LOG_ERR("Failed to configure tx adapter %d", ret); + return ret; + } + } + return 0; +} + int32_t eh_devs_init(struct eh_conf *conf) { @@ -636,6 +879,13 @@ eh_devs_init(struct eh_conf *conf) return ret; } + /* Setup Tx adapter */ + ret = eh_initialize_tx_adapter(em_conf); + if (ret < 0) { + EH_LOG_ERR("Failed to initialize tx adapter %d", ret); + return ret; + } + /* Start eth devices after setting up adapter */ RTE_ETH_FOREACH_DEV(port_id) { @@ -719,5 +969,68 @@ eh_devs_uninit(struct eh_conf *conf) } } + /* Stop and release tx adapters */ + for (i = 0; i < em_conf->nb_tx_adapter; i++) { + + id = em_conf->tx_adapter[i].adapter_id; + ret = rte_event_eth_tx_adapter_stop(id); + if (ret < 0) { + EH_LOG_ERR("Failed to stop tx adapter %d", ret); + return ret; + } + + for 
(j = 0; j < em_conf->tx_adapter[i].nb_connections; j++) { + + ret = rte_event_eth_tx_adapter_queue_del(id, + em_conf->tx_adapter[i].conn[j].ethdev_id, -1); + if (ret < 0) { + EH_LOG_ERR( + "Failed to remove tx adapter queues %d", + ret); + return ret; + } + } + + ret = rte_event_eth_tx_adapter_free(id); + if (ret < 0) { + EH_LOG_ERR("Failed to free tx adapter %d", ret); + return ret; + } + } + return 0; } + +uint8_t +eh_get_tx_queue(struct eh_conf *conf, uint8_t eventdev_id) +{ + struct eventdev_params *eventdev_config; + struct eventmode_conf *em_conf; + + if (conf == NULL) { + EH_LOG_ERR("Invalid event helper configuration"); + return -EINVAL; + } + + if (conf->mode_params == NULL) { + EH_LOG_ERR("Invalid event mode parameters"); + return -EINVAL; + } + + /* Get eventmode conf */ + em_conf = conf->mode_params; + + /* Get event device conf */ + eventdev_config = eh_get_eventdev_params(em_conf, eventdev_id); + + if (eventdev_config == NULL) { + EH_LOG_ERR("Failed to read eventdev config"); + return -EINVAL; + } + + /* + * The last queue is reserved to be used as atomic queue for the + * last stage (eth packet tx stage) + */ + return eventdev_config->nb_eventqueue - 1; +} diff --git a/examples/ipsec-secgw/event_helper.h b/examples/ipsec-secgw/event_helper.h index baf93e1..e76d764 100644 --- a/examples/ipsec-secgw/event_helper.h +++ b/examples/ipsec-secgw/event_helper.h @@ -23,9 +23,15 @@ extern "C" { /* Max Rx adapters supported */ #define EVENT_MODE_MAX_RX_ADAPTERS RTE_EVENT_MAX_DEVS +/* Max Tx adapters supported */ +#define EVENT_MODE_MAX_TX_ADAPTERS RTE_EVENT_MAX_DEVS + /* Max Rx adapter connections */ #define EVENT_MODE_MAX_CONNECTIONS_PER_ADAPTER 16 +/* Max Tx adapter connections */ +#define EVENT_MODE_MAX_CONNECTIONS_PER_TX_ADAPTER 16 + /* Max event queues supported per event device */ #define EVENT_MODE_MAX_EVENT_QUEUES_PER_DEV RTE_EVENT_MAX_QUEUES_PER_DEV @@ -33,6 +39,9 @@ extern "C" { #define EVENT_MODE_MAX_LCORE_LINKS \ (EVENT_MODE_MAX_EVENT_DEVS * 
EVENT_MODE_MAX_EVENT_QUEUES_PER_DEV) +/* Max adapters that one Tx core can handle */ +#define EVENT_MODE_MAX_ADAPTERS_PER_TX_CORE EVENT_MODE_MAX_TX_ADAPTERS + /** * Packet transfer mode of the application */ @@ -80,6 +89,23 @@ struct rx_adapter_conf { conn[EVENT_MODE_MAX_CONNECTIONS_PER_ADAPTER]; }; +/* Tx adapter connection info */ +struct tx_adapter_connection_info { + uint8_t ethdev_id; + int32_t ethdev_tx_qid; +}; + +/* Tx adapter conf */ +struct tx_adapter_conf { + int32_t eventdev_id; + int32_t adapter_id; + uint32_t tx_core_id; + uint8_t nb_connections; + struct tx_adapter_connection_info + conn[EVENT_MODE_MAX_CONNECTIONS_PER_TX_ADAPTER]; + uint8_t tx_ev_queue; +}; + /* Eventmode conf data */ struct eventmode_conf { int nb_eventdev; @@ -90,6 +116,10 @@ struct eventmode_conf { /**< No of Rx adapters */ struct rx_adapter_conf rx_adapter[EVENT_MODE_MAX_RX_ADAPTERS]; /**< Rx adapter conf */ + uint8_t nb_tx_adapter; + /**< No of Tx adapters */ + struct tx_adapter_conf tx_adapter[EVENT_MODE_MAX_TX_ADAPTERS]; + /** Tx adapter conf */ uint8_t nb_link; /**< No of links */ struct eh_event_link_info @@ -170,6 +200,24 @@ eh_devs_init(struct eh_conf *conf); int32_t eh_devs_uninit(struct eh_conf *conf); +/** + * Get eventdev tx queue + * + * If the application uses event device which does not support internal port + * then it needs to submit the events to a Tx queue before final transmission. + * This Tx queue will be created internally by the eventmode helper subsystem, + * and application will need its queue ID when it runs the execution loop. + * + * @param mode_conf + * Event helper configuration + * @param eventdev_id + * Event device ID + * @return + * Tx queue ID + */ +uint8_t +eh_get_tx_queue(struct eh_conf *conf, uint8_t eventdev_id); + #ifdef __cplusplus } #endif -- 2.7.4 ^ permalink raw reply [flat|nested] 147+ messages in thread
* [dpdk-dev] [PATCH v3 06/13] examples/ipsec-secgw: add routines to display config 2020-02-04 13:58 ` [dpdk-dev] [PATCH v3 00/13] " Lukasz Bartosik ` (4 preceding siblings ...) 2020-02-04 13:58 ` [dpdk-dev] [PATCH v3 05/13] examples/ipsec-secgw: add Tx " Lukasz Bartosik @ 2020-02-04 13:58 ` Lukasz Bartosik 2020-02-04 13:58 ` [dpdk-dev] [PATCH v3 07/13] examples/ipsec-secgw: add routines to launch workers Lukasz Bartosik ` (7 subsequent siblings) 13 siblings, 0 replies; 147+ messages in thread From: Lukasz Bartosik @ 2020-02-04 13:58 UTC (permalink / raw) To: Akhil Goyal, Radu Nicolau, Thomas Monjalon Cc: Anoob Joseph, Jerin Jacob, Narayana Prasad, Ankur Dwivedi, Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru, Konstantin Ananyev, dev From: Anoob Joseph <anoobj@marvell.com> Add routines to display the eventmode configuration and provide an overview of the devices used. Signed-off-by: Anoob Joseph <anoobj@marvell.com> Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> --- examples/ipsec-secgw/event_helper.c | 207 ++++++++++++++++++++++++++++++++++++ examples/ipsec-secgw/event_helper.h | 14 +++ 2 files changed, 221 insertions(+) diff --git a/examples/ipsec-secgw/event_helper.c b/examples/ipsec-secgw/event_helper.c index e6569c1..883cb19 100644 --- a/examples/ipsec-secgw/event_helper.c +++ b/examples/ipsec-secgw/event_helper.c @@ -822,6 +822,210 @@ eh_initialize_tx_adapter(struct eventmode_conf *em_conf) return 0; } +static void +eh_display_operating_mode(struct eventmode_conf *em_conf) +{ + char sched_types[][32] = { + "RTE_SCHED_TYPE_ORDERED", + "RTE_SCHED_TYPE_ATOMIC", + "RTE_SCHED_TYPE_PARALLEL", + }; + EH_LOG_INFO("Operating mode:"); + + EH_LOG_INFO("\tScheduling type: \t%s", + sched_types[em_conf->ext_params.sched_type]); + + EH_LOG_INFO(""); +} + +static void +eh_display_event_dev_conf(struct eventmode_conf *em_conf) +{ + char queue_mode[][32] = { + "", + "ATQ (ALL TYPE QUEUE)", + "SINGLE LINK", + }; + char print_buf[256] = { 0 }; + int i; + + 
EH_LOG_INFO("Event Device Configuration:"); + + for (i = 0; i < em_conf->nb_eventdev; i++) { + sprintf(print_buf, + "\tDev ID: %-2d \tQueues: %-2d \tPorts: %-2d", + em_conf->eventdev_config[i].eventdev_id, + em_conf->eventdev_config[i].nb_eventqueue, + em_conf->eventdev_config[i].nb_eventport); + sprintf(print_buf + strlen(print_buf), + "\tQueue mode: %s", + queue_mode[em_conf->eventdev_config[i].ev_queue_mode]); + EH_LOG_INFO("%s", print_buf); + } + EH_LOG_INFO(""); +} + +static void +eh_display_rx_adapter_conf(struct eventmode_conf *em_conf) +{ + int nb_rx_adapter = em_conf->nb_rx_adapter; + struct rx_adapter_connection_info *conn; + struct rx_adapter_conf *adapter; + char print_buf[256] = { 0 }; + int i, j; + + EH_LOG_INFO("Rx adapters configured: %d", nb_rx_adapter); + + for (i = 0; i < nb_rx_adapter; i++) { + adapter = &(em_conf->rx_adapter[i]); + EH_LOG_INFO( + "\tRx adaper ID: %-2d\tConnections: %-2d\tEvent dev ID: %-2d" + "\tRx core: %-2d", + adapter->adapter_id, + adapter->nb_connections, + adapter->eventdev_id, + adapter->rx_core_id); + + for (j = 0; j < adapter->nb_connections; j++) { + conn = &(adapter->conn[j]); + + sprintf(print_buf, + "\t\tEthdev ID: %-2d", conn->ethdev_id); + + if (conn->ethdev_rx_qid == -1) + sprintf(print_buf + strlen(print_buf), + "\tEth rx queue: %-2s", "ALL"); + else + sprintf(print_buf + strlen(print_buf), + "\tEth rx queue: %-2d", + conn->ethdev_rx_qid); + + sprintf(print_buf + strlen(print_buf), + "\tEvent queue: %-2d", conn->eventq_id); + EH_LOG_INFO("%s", print_buf); + } + } + EH_LOG_INFO(""); +} + +static void +eh_display_tx_adapter_conf(struct eventmode_conf *em_conf) +{ + int nb_tx_adapter = em_conf->nb_tx_adapter; + struct tx_adapter_connection_info *conn; + struct tx_adapter_conf *adapter; + char print_buf[256] = { 0 }; + int i, j; + + EH_LOG_INFO("Tx adapters configured: %d", nb_tx_adapter); + + for (i = 0; i < nb_tx_adapter; i++) { + adapter = &(em_conf->tx_adapter[i]); + sprintf(print_buf, + "\tTx adapter ID: 
%-2d\tConnections: %-2d\tEvent dev ID: %-2d", + adapter->adapter_id, + adapter->nb_connections, + adapter->eventdev_id); + if (adapter->tx_core_id == (uint32_t)-1) + sprintf(print_buf + strlen(print_buf), + "\tTx core: %-2s", "[INTERNAL PORT]"); + else if (adapter->tx_core_id == RTE_MAX_LCORE) + sprintf(print_buf + strlen(print_buf), + "\tTx core: %-2s", "[NONE]"); + else + sprintf(print_buf + strlen(print_buf), + "\tTx core: %-2d,\tInput event queue: %-2d", + adapter->tx_core_id, adapter->tx_ev_queue); + + EH_LOG_INFO("%s", print_buf); + + for (j = 0; j < adapter->nb_connections; j++) { + conn = &(adapter->conn[j]); + + sprintf(print_buf, + "\t\tEthdev ID: %-2d", conn->ethdev_id); + + if (conn->ethdev_tx_qid == -1) + sprintf(print_buf + strlen(print_buf), + "\tEth tx queue: %-2s", "ALL"); + else + sprintf(print_buf + strlen(print_buf), + "\tEth tx queue: %-2d", + conn->ethdev_tx_qid); + EH_LOG_INFO("%s", print_buf); + } + } + EH_LOG_INFO(""); +} + +static void +eh_display_link_conf(struct eventmode_conf *em_conf) +{ + struct eh_event_link_info *link; + char print_buf[256] = { 0 }; + int i; + + EH_LOG_INFO("Links configured: %d", em_conf->nb_link); + + for (i = 0; i < em_conf->nb_link; i++) { + link = &(em_conf->link[i]); + + sprintf(print_buf, + "\tEvent dev ID: %-2d\tEvent port: %-2d", + link->eventdev_id, + link->event_port_id); + + if (em_conf->ext_params.all_ev_queue_to_ev_port) + sprintf(print_buf + strlen(print_buf), + "Event queue: %-2s\t", "ALL"); + else + sprintf(print_buf + strlen(print_buf), + "Event queue: %-2d\t", link->eventq_id); + + sprintf(print_buf + strlen(print_buf), + "Lcore: %-2d", link->lcore_id); + EH_LOG_INFO("%s", print_buf); + } + EH_LOG_INFO(""); +} + +void +eh_display_conf(struct eh_conf *conf) +{ + struct eventmode_conf *em_conf; + + if (conf == NULL) { + EH_LOG_ERR("Invalid event helper configuration"); + return; + } + + if (conf->mode != EH_PKT_TRANSFER_MODE_EVENT) + return; + + if (conf->mode_params == NULL) { + EH_LOG_ERR("Invalid 
event mode parameters"); + return; + } + + /* Get eventmode conf */ + em_conf = (struct eventmode_conf *)(conf->mode_params); + + /* Display user exposed operating modes */ + eh_display_operating_mode(em_conf); + + /* Display event device conf */ + eh_display_event_dev_conf(em_conf); + + /* Display Rx adapter conf */ + eh_display_rx_adapter_conf(em_conf); + + /* Display Tx adapter conf */ + eh_display_tx_adapter_conf(em_conf); + + /* Display event-lcore link */ + eh_display_link_conf(em_conf); +} + int32_t eh_devs_init(struct eh_conf *conf) { @@ -855,6 +1059,9 @@ eh_devs_init(struct eh_conf *conf) return ret; } + /* Display the current configuration */ + eh_display_conf(conf); + /* Stop eth devices before setting up adapter */ RTE_ETH_FOREACH_DEV(port_id) { diff --git a/examples/ipsec-secgw/event_helper.h b/examples/ipsec-secgw/event_helper.h index e76d764..d7191a6 100644 --- a/examples/ipsec-secgw/event_helper.h +++ b/examples/ipsec-secgw/event_helper.h @@ -17,6 +17,11 @@ extern "C" { RTE_FMT("%s() line %u: " RTE_FMT_HEAD(__VA_ARGS__ ,) "\n", \ __func__, __LINE__, RTE_FMT_TAIL(__VA_ARGS__ ,))) +#define EH_LOG_INFO(...) \ + RTE_LOG(INFO, EH, \ + RTE_FMT("%s() line %u: " RTE_FMT_HEAD(__VA_ARGS__ ,) "\n", \ + __func__, __LINE__, RTE_FMT_TAIL(__VA_ARGS__ ,))) + /* Max event devices supported */ #define EVENT_MODE_MAX_EVENT_DEVS RTE_EVENT_MAX_DEVS @@ -218,6 +223,15 @@ eh_devs_uninit(struct eh_conf *conf); uint8_t eh_get_tx_queue(struct eh_conf *conf, uint8_t eventdev_id); +/** + * Display event mode configuration + * + * @param conf + * Event helper configuration + */ +void +eh_display_conf(struct eh_conf *conf); + #ifdef __cplusplus } #endif -- 2.7.4 ^ permalink raw reply [flat|nested] 147+ messages in thread
* [dpdk-dev] [PATCH v3 07/13] examples/ipsec-secgw: add routines to launch workers 2020-02-04 13:58 ` [dpdk-dev] [PATCH v3 00/13] " Lukasz Bartosik ` (5 preceding siblings ...) 2020-02-04 13:58 ` [dpdk-dev] [PATCH v3 06/13] examples/ipsec-secgw: add routines to display config Lukasz Bartosik @ 2020-02-04 13:58 ` Lukasz Bartosik 2020-02-04 13:58 ` [dpdk-dev] [PATCH v3 08/13] examples/ipsec-secgw: add support for internal ports Lukasz Bartosik ` (6 subsequent siblings) 13 siblings, 0 replies; 147+ messages in thread From: Lukasz Bartosik @ 2020-02-04 13:58 UTC (permalink / raw) To: Akhil Goyal, Radu Nicolau, Thomas Monjalon Cc: Jerin Jacob, Narayana Prasad, Ankur Dwivedi, Anoob Joseph, Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru, Konstantin Ananyev, dev In eventmode workers can be drafted differently according to the capabilities of the underlying event device. The added functions will receive an array of such workers and probe the eventmode properties to choose the worker. Signed-off-by: Anoob Joseph <anoobj@marvell.com> Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> --- examples/ipsec-secgw/event_helper.c | 336 ++++++++++++++++++++++++++++++++++++ examples/ipsec-secgw/event_helper.h | 48 ++++++ 2 files changed, 384 insertions(+) diff --git a/examples/ipsec-secgw/event_helper.c b/examples/ipsec-secgw/event_helper.c index 883cb19..d51be29 100644 --- a/examples/ipsec-secgw/event_helper.c +++ b/examples/ipsec-secgw/event_helper.c @@ -11,6 +11,8 @@ #include "event_helper.h" +static volatile bool eth_core_running; + static int eh_get_enabled_cores(struct rte_bitmap *eth_core_mask) { @@ -93,6 +95,16 @@ eh_get_eventdev_params(struct eventmode_conf *em_conf, uint8_t eventdev_id) return &(em_conf->eventdev_config[i]); } +static inline bool +eh_dev_has_burst_mode(uint8_t dev_id) +{ + struct rte_event_dev_info dev_info; + + rte_event_dev_info_get(dev_id, &dev_info); + return (dev_info.event_dev_cap & RTE_EVENT_DEV_CAP_BURST_MODE) ? 
+ true : false; +} + static int eh_set_default_conf_eventdev(struct eventmode_conf *em_conf) { @@ -695,6 +707,257 @@ eh_initialize_rx_adapter(struct eventmode_conf *em_conf) return 0; } +static int32_t +eh_start_worker_eth_core(struct eventmode_conf *conf, uint32_t lcore_id) +{ + uint32_t service_id[EVENT_MODE_MAX_ADAPTERS_PER_RX_CORE]; + struct rx_adapter_conf *rx_adapter; + struct tx_adapter_conf *tx_adapter; + int service_count = 0; + int adapter_id; + int32_t ret; + int i; + + EH_LOG_INFO("Entering eth_core processing on lcore %u", lcore_id); + + /* + * Parse adapter config to check which of all Rx adapters need + * to be handled by this core. + */ + for (i = 0; i < conf->nb_rx_adapter; i++) { + /* Check if we have exceeded the max allowed */ + if (service_count > EVENT_MODE_MAX_ADAPTERS_PER_RX_CORE) { + EH_LOG_ERR( + "Exceeded the max allowed adapters per rx core"); + break; + } + + rx_adapter = &(conf->rx_adapter[i]); + if (rx_adapter->rx_core_id != lcore_id) + continue; + + /* Adapter is handled by this core */ + adapter_id = rx_adapter->adapter_id; + + /* Get the service ID for the adapters */ + ret = rte_event_eth_rx_adapter_service_id_get(adapter_id, + &(service_id[service_count])); + + if (ret != -ESRCH && ret < 0) { + EH_LOG_ERR( + "Failed to get service id used by rx adapter"); + return ret; + } + + /* Update service count */ + service_count++; + } + + /* + * Parse adapter config to see which of all Tx adapters need + * to be handled by this core. 
+ */ + for (i = 0; i < conf->nb_tx_adapter; i++) { + /* Check if we have exceeded the max allowed */ + if (service_count > EVENT_MODE_MAX_ADAPTERS_PER_TX_CORE) { + EH_LOG_ERR( + "Exceeded the max allowed adapters per tx core"); + break; + } + + tx_adapter = &conf->tx_adapter[i]; + if (tx_adapter->tx_core_id != lcore_id) + continue; + + /* Adapter is handled by this core */ + adapter_id = tx_adapter->adapter_id; + + /* Get the service ID for the adapters */ + ret = rte_event_eth_tx_adapter_service_id_get(adapter_id, + &(service_id[service_count])); + + if (ret != -ESRCH && ret < 0) { + EH_LOG_ERR( + "Failed to get service id used by tx adapter"); + return ret; + } + + /* Update service count */ + service_count++; + } + + eth_core_running = true; + + while (eth_core_running) { + for (i = 0; i < service_count; i++) { + /* Initiate adapter service */ + rte_service_run_iter_on_app_lcore(service_id[i], 0); + } + } + + return 0; +} + +static int32_t +eh_stop_worker_eth_core(void) +{ + if (eth_core_running) { + EH_LOG_INFO("Stopping eth cores"); + eth_core_running = false; + } + return 0; +} + +static struct eh_app_worker_params * +eh_find_worker(uint32_t lcore_id, struct eh_conf *conf, + struct eh_app_worker_params *app_wrkrs, uint8_t nb_wrkr_param) +{ + struct eh_app_worker_params curr_conf = { {{0} }, NULL}; + struct eh_event_link_info *link = NULL; + struct eh_app_worker_params *tmp_wrkr; + struct eventmode_conf *em_conf; + uint8_t eventdev_id; + int i; + + /* Get eventmode config */ + em_conf = conf->mode_params; + + /* + * Use event device from the first lcore-event link. + * + * Assumption: All lcore-event links tied to a core are using the + * same event device. In other words, one core would be polling on + * queues of a single event device only. 
+ */ + + /* Get a link for this lcore */ + for (i = 0; i < em_conf->nb_link; i++) { + link = &(em_conf->link[i]); + if (link->lcore_id == lcore_id) + break; + } + + if (link == NULL) { + EH_LOG_ERR("No valid link found for lcore %d", lcore_id); + return NULL; + } + + /* Get event dev ID */ + eventdev_id = link->eventdev_id; + + /* Populate the curr_conf with the capabilities */ + + /* Check for burst mode */ + if (eh_dev_has_burst_mode(eventdev_id)) + curr_conf.cap.burst = EH_RX_TYPE_BURST; + else + curr_conf.cap.burst = EH_RX_TYPE_NON_BURST; + + /* Parse the passed list and see if we have matching capabilities */ + + /* Initialize the pointer used to traverse the list */ + tmp_wrkr = app_wrkrs; + + for (i = 0; i < nb_wrkr_param; i++, tmp_wrkr++) { + + /* Skip this if capabilities are not matching */ + if (tmp_wrkr->cap.u64 != curr_conf.cap.u64) + continue; + + /* If the checks pass, we have a match */ + return tmp_wrkr; + } + + return NULL; +} + +static int +eh_verify_match_worker(struct eh_app_worker_params *match_wrkr) +{ + /* Verify registered worker */ + if (match_wrkr->worker_thread == NULL) { + EH_LOG_ERR("No worker registered"); + return 0; + } + + /* Success */ + return 1; +} + +static uint8_t +eh_get_event_lcore_links(uint32_t lcore_id, struct eh_conf *conf, + struct eh_event_link_info **links) +{ + struct eh_event_link_info *link_cache; + struct eventmode_conf *em_conf = NULL; + struct eh_event_link_info *link; + uint8_t lcore_nb_link = 0; + size_t single_link_size; + size_t cache_size; + int index = 0; + int i; + + if (conf == NULL || links == NULL) { + EH_LOG_ERR("Invalid args"); + return -EINVAL; + } + + /* Get eventmode conf */ + em_conf = conf->mode_params; + + if (em_conf == NULL) { + EH_LOG_ERR("Invalid event mode parameters"); + return -EINVAL; + } + + /* Get the number of links registered */ + for (i = 0; i < em_conf->nb_link; i++) { + + /* Get link */ + link = &(em_conf->link[i]); + + /* Check if we have link intended for this lcore */ + if 
(link->lcore_id == lcore_id) { + + /* Update the number of links for this core */ + lcore_nb_link++; + + } + } + + /* Compute size of one entry to be copied */ + single_link_size = sizeof(struct eh_event_link_info); + + /* Compute size of the buffer required */ + cache_size = lcore_nb_link * sizeof(struct eh_event_link_info); + + /* Compute size of the buffer required */ + link_cache = calloc(1, cache_size); + + /* Get the number of links registered */ + for (i = 0; i < em_conf->nb_link; i++) { + + /* Get link */ + link = &(em_conf->link[i]); + + /* Check if we have link intended for this lcore */ + if (link->lcore_id == lcore_id) { + + /* Cache the link */ + memcpy(&link_cache[index], link, single_link_size); + + /* Update index */ + index++; + } + } + + /* Update the links for application to use the cached links */ + *links = link_cache; + + /* Return the number of cached links */ + return lcore_nb_link; +} + static int eh_tx_adapter_configure(struct eventmode_conf *em_conf, struct tx_adapter_conf *adapter) @@ -1208,6 +1471,79 @@ eh_devs_uninit(struct eh_conf *conf) return 0; } +void +eh_launch_worker(struct eh_conf *conf, struct eh_app_worker_params *app_wrkr, + uint8_t nb_wrkr_param) +{ + struct eh_app_worker_params *match_wrkr; + struct eh_event_link_info *links = NULL; + struct eventmode_conf *em_conf; + uint32_t lcore_id; + uint8_t nb_links; + + if (conf == NULL) { + EH_LOG_ERR("Invalid event helper configuration"); + return; + } + + if (conf->mode_params == NULL) { + EH_LOG_ERR("Invalid event mode parameters"); + return; + } + + /* Get eventmode conf */ + em_conf = conf->mode_params; + + /* Get core ID */ + lcore_id = rte_lcore_id(); + + /* Check if this is eth core */ + if (rte_bitmap_get(em_conf->eth_core_mask, lcore_id)) { + eh_start_worker_eth_core(em_conf, lcore_id); + return; + } + + if (app_wrkr == NULL || nb_wrkr_param == 0) { + EH_LOG_ERR("Invalid args"); + return; + } + + /* + * This is a regular worker thread. 
The application registers + * multiple workers with various capabilities. Run worker + * based on the selected capabilities of the event + * device configured. + */ + + /* Get the first matching worker for the event device */ + match_wrkr = eh_find_worker(lcore_id, conf, app_wrkr, nb_wrkr_param); + if (match_wrkr == NULL) { + EH_LOG_ERR("Failed to match worker registered for lcore %d", + lcore_id); + goto clean_and_exit; + } + + /* Verify sanity of the matched worker */ + if (eh_verify_match_worker(match_wrkr) != 1) { + EH_LOG_ERR("Failed to validate the matched worker"); + goto clean_and_exit; + } + + /* Get worker links */ + nb_links = eh_get_event_lcore_links(lcore_id, conf, &links); + + /* Launch the worker thread */ + match_wrkr->worker_thread(links, nb_links); + + /* Free links info memory */ + free(links); + +clean_and_exit: + + /* Flag eth_cores to stop, if started */ + eh_stop_worker_eth_core(); +} + uint8_t eh_get_tx_queue(struct eh_conf *conf, uint8_t eventdev_id) { diff --git a/examples/ipsec-secgw/event_helper.h b/examples/ipsec-secgw/event_helper.h index d7191a6..31a158e 100644 --- a/examples/ipsec-secgw/event_helper.h +++ b/examples/ipsec-secgw/event_helper.h @@ -44,6 +44,9 @@ extern "C" { #define EVENT_MODE_MAX_LCORE_LINKS \ (EVENT_MODE_MAX_EVENT_DEVS * EVENT_MODE_MAX_EVENT_QUEUES_PER_DEV) +/* Max adapters that one Rx core can handle */ +#define EVENT_MODE_MAX_ADAPTERS_PER_RX_CORE EVENT_MODE_MAX_RX_ADAPTERS + /* Max adapters that one Tx core can handle */ #define EVENT_MODE_MAX_ADAPTERS_PER_TX_CORE EVENT_MODE_MAX_TX_ADAPTERS @@ -55,6 +58,14 @@ enum eh_pkt_transfer_mode { EH_PKT_TRANSFER_MODE_EVENT, }; +/** + * Event mode packet rx types + */ +enum eh_rx_types { + EH_RX_TYPE_NON_BURST = 0, + EH_RX_TYPE_BURST +}; + /* Event dev params */ struct eventdev_params { uint8_t eventdev_id; @@ -165,6 +176,22 @@ struct eh_conf { /**< Mode specific parameters */ }; +/* Workers registered by the application */ +struct eh_app_worker_params { + union { + 
RTE_STD_C11 + struct { + uint64_t burst : 1; + /**< Specify status of rx type burst */ + }; + uint64_t u64; + } cap; + /**< Capabilities of this worker */ + void (*worker_thread)(struct eh_event_link_info *links, + uint8_t nb_links); + /**< Worker thread */ +}; + /** * Initialize event mode devices * @@ -232,6 +259,27 @@ eh_get_tx_queue(struct eh_conf *conf, uint8_t eventdev_id); void eh_display_conf(struct eh_conf *conf); + +/** + * Launch eventmode worker + * + * The application can request the eventmode helper subsystem to launch the + * worker based on the capabilities of event device and the options selected + * while initializing the eventmode. + * + * @param conf + * Event helper configuration + * @param app_wrkr + * List of all the workers registered by application, along with its + * capabilities + * @param nb_wrkr_param + * Number of workers passed by the application + * + */ +void +eh_launch_worker(struct eh_conf *conf, struct eh_app_worker_params *app_wrkr, + uint8_t nb_wrkr_param); + #ifdef __cplusplus } #endif -- 2.7.4 ^ permalink raw reply [flat|nested] 147+ messages in thread
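The worker selection in the patch above packs each worker's capability flags into a bitfield/`u64` union, so matching a registered worker against the probed device capabilities reduces to one 64-bit compare. A minimal standalone sketch of that idea (the `app_worker` and `find_worker` names are illustrative stand-ins for the patch's `eh_app_worker_params` and `eh_find_worker`, not the real API):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative mirror of eh_app_worker_params: capabilities live in a
 * union so the whole flag set can be compared as a single 64-bit word. */
struct app_worker {
	union {
		struct {
			uint64_t burst : 1;            /* device supports burst dequeue */
			uint64_t tx_internal_port : 1; /* Tx internal port available */
		};
		uint64_t u64;
	} cap;
	void (*worker_thread)(void);
};

/* Return the first registered worker whose capability word exactly
 * matches the capabilities probed from the event device. */
static struct app_worker *
find_worker(struct app_worker *wrkrs, size_t nb_wrkrs, uint64_t probed_caps)
{
	size_t i;

	for (i = 0; i < nb_wrkrs; i++)
		if (wrkrs[i].cap.u64 == probed_caps)
			return &wrkrs[i];

	return NULL; /* no worker drafted for this capability combination */
}
```

Because the match is an exact compare of the whole word, the application must register one worker variant per capability combination it intends to support, which is why the series drafts multiple worker threads.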
* [dpdk-dev] [PATCH v3 08/13] examples/ipsec-secgw: add support for internal ports 2020-02-04 13:58 ` [dpdk-dev] [PATCH v3 00/13] " Lukasz Bartosik ` (6 preceding siblings ...) 2020-02-04 13:58 ` [dpdk-dev] [PATCH v3 07/13] examples/ipsec-secgw: add routines to launch workers Lukasz Bartosik @ 2020-02-04 13:58 ` Lukasz Bartosik 2020-02-04 13:58 ` [dpdk-dev] [PATCH v3 09/13] examples/ipsec-secgw: add event helper config init/uninit Lukasz Bartosik ` (5 subsequent siblings) 13 siblings, 0 replies; 147+ messages in thread From: Lukasz Bartosik @ 2020-02-04 13:58 UTC (permalink / raw) To: Akhil Goyal, Radu Nicolau, Thomas Monjalon Cc: Jerin Jacob, Narayana Prasad, Ankur Dwivedi, Anoob Joseph, Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru, Konstantin Ananyev, dev Add support for Rx and Tx internal ports. When internal ports are available, a packet can be received from an eth port and forwarded to an event queue by HW without any software intervention. The same applies on the Tx side, where a packet sent to an event queue can be forwarded by HW to an eth port without any software intervention. 
Signed-off-by: Anoob Joseph <anoobj@marvell.com> Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> --- examples/ipsec-secgw/event_helper.c | 179 +++++++++++++++++++++++++++++++----- examples/ipsec-secgw/event_helper.h | 11 +++ 2 files changed, 167 insertions(+), 23 deletions(-) diff --git a/examples/ipsec-secgw/event_helper.c b/examples/ipsec-secgw/event_helper.c index d51be29..6b21884 100644 --- a/examples/ipsec-secgw/event_helper.c +++ b/examples/ipsec-secgw/event_helper.c @@ -95,6 +95,39 @@ eh_get_eventdev_params(struct eventmode_conf *em_conf, uint8_t eventdev_id) return &(em_conf->eventdev_config[i]); } + +static inline bool +eh_dev_has_rx_internal_port(uint8_t eventdev_id) +{ + int j; + bool flag = true; + + RTE_ETH_FOREACH_DEV(j) { + uint32_t caps = 0; + + rte_event_eth_rx_adapter_caps_get(eventdev_id, j, &caps); + if (!(caps & RTE_EVENT_ETH_RX_ADAPTER_CAP_INTERNAL_PORT)) + flag = false; + } + return flag; +} + +static inline bool +eh_dev_has_tx_internal_port(uint8_t eventdev_id) +{ + int j; + bool flag = true; + + RTE_ETH_FOREACH_DEV(j) { + uint32_t caps = 0; + + rte_event_eth_tx_adapter_caps_get(eventdev_id, j, &caps); + if (!(caps & RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT)) + flag = false; + } + return flag; +} + static inline bool eh_dev_has_burst_mode(uint8_t dev_id) { @@ -179,6 +212,42 @@ eh_set_default_conf_eventdev(struct eventmode_conf *em_conf) return 0; } +static void +eh_do_capability_check(struct eventmode_conf *em_conf) +{ + struct eventdev_params *eventdev_config; + int all_internal_ports = 1; + uint32_t eventdev_id; + int i; + + for (i = 0; i < em_conf->nb_eventdev; i++) { + + /* Get the event dev conf */ + eventdev_config = &(em_conf->eventdev_config[i]); + eventdev_id = eventdev_config->eventdev_id; + + /* Check if event device has internal port for Rx & Tx */ + if (eh_dev_has_rx_internal_port(eventdev_id) && + eh_dev_has_tx_internal_port(eventdev_id)) { + eventdev_config->all_internal_ports = 1; + } else { + all_internal_ports = 0; 
+ } + } + + /* + * If Rx & Tx internal ports are supported by all event devices then + * eth cores won't be required. Override the eth core mask requested + * and decrement number of event queues by one as it won't be needed + * for Tx. + */ + if (all_internal_ports) { + rte_bitmap_reset(em_conf->eth_core_mask); + for (i = 0; i < em_conf->nb_eventdev; i++) + em_conf->eventdev_config[i].nb_eventqueue--; + } +} + static int eh_set_default_conf_link(struct eventmode_conf *em_conf) { @@ -250,7 +319,10 @@ eh_set_default_conf_rx_adapter(struct eventmode_conf *em_conf) struct rx_adapter_connection_info *conn; struct eventdev_params *eventdev_config; struct rx_adapter_conf *adapter; + bool rx_internal_port = true; bool single_ev_queue = false; + int nb_eventqueue; + uint32_t caps = 0; int eventdev_id; int nb_eth_dev; int adapter_id; @@ -280,14 +352,21 @@ eh_set_default_conf_rx_adapter(struct eventmode_conf *em_conf) /* Set adapter conf */ adapter->eventdev_id = eventdev_id; adapter->adapter_id = adapter_id; - adapter->rx_core_id = eh_get_next_eth_core(em_conf); + + /* + * If event device does not have internal ports for passing + * packets then reserved one queue for Tx path + */ + nb_eventqueue = eventdev_config->all_internal_ports ? + eventdev_config->nb_eventqueue : + eventdev_config->nb_eventqueue - 1; /* * Map all queues of eth device (port) to an event queue. If there * are more event queues than eth ports then create 1:1 mapping. * Otherwise map all eth ports to a single event queue. 
*/ - if (nb_eth_dev > eventdev_config->nb_eventqueue) + if (nb_eth_dev > nb_eventqueue) single_ev_queue = true; for (i = 0; i < nb_eth_dev; i++) { @@ -309,11 +388,24 @@ eh_set_default_conf_rx_adapter(struct eventmode_conf *em_conf) /* Add all eth queues eth port to event queue */ conn->ethdev_rx_qid = -1; + /* Get Rx adapter capabilities */ + rte_event_eth_rx_adapter_caps_get(eventdev_id, i, &caps); + if (!(caps & RTE_EVENT_ETH_RX_ADAPTER_CAP_INTERNAL_PORT)) + rx_internal_port = false; + /* Update no of connections */ adapter->nb_connections++; } + if (rx_internal_port) { + /* Rx core is not required */ + adapter->rx_core_id = -1; + } else { + /* Rx core is required */ + adapter->rx_core_id = eh_get_next_eth_core(em_conf); + } + /* We have setup one adapter */ em_conf->nb_rx_adapter = 1; @@ -326,6 +418,8 @@ eh_set_default_conf_tx_adapter(struct eventmode_conf *em_conf) struct tx_adapter_connection_info *conn; struct eventdev_params *eventdev_config; struct tx_adapter_conf *tx_adapter; + bool tx_internal_port = true; + uint32_t caps = 0; int eventdev_id; int adapter_id; int nb_eth_dev; @@ -359,18 +453,6 @@ eh_set_default_conf_tx_adapter(struct eventmode_conf *em_conf) tx_adapter->eventdev_id = eventdev_id; tx_adapter->adapter_id = adapter_id; - /* TODO: Tx core is required only when internal port is not present */ - tx_adapter->tx_core_id = eh_get_next_eth_core(em_conf); - - /* - * Application uses one event queue per adapter for submitting - * packets for Tx. Reserve the last queue available and decrement - * the total available event queues for this - */ - - /* Queue numbers start at 0 */ - tx_adapter->tx_ev_queue = eventdev_config->nb_eventqueue - 1; - /* * Map all Tx queues of the eth device (port) to the event device. 
*/ @@ -400,10 +482,30 @@ eh_set_default_conf_tx_adapter(struct eventmode_conf *em_conf) /* Add all eth tx queues to adapter */ conn->ethdev_tx_qid = -1; + /* Get Tx adapter capabilities */ + rte_event_eth_tx_adapter_caps_get(eventdev_id, i, &caps); + if (!(caps & RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT)) + tx_internal_port = false; + /* Update no of connections */ tx_adapter->nb_connections++; } + if (tx_internal_port) { + /* Tx core is not required */ + tx_adapter->tx_core_id = -1; + } else { + /* Tx core is required */ + tx_adapter->tx_core_id = eh_get_next_eth_core(em_conf); + + /* + * Use one event queue per adapter for submitting packets + * for Tx. Reserving the last queue available + */ + /* Queue numbers start at 0 */ + tx_adapter->tx_ev_queue = eventdev_config->nb_eventqueue - 1; + } + /* We have setup one adapter */ em_conf->nb_tx_adapter = 1; return 0; @@ -424,6 +526,9 @@ eh_validate_conf(struct eventmode_conf *em_conf) return ret; } + /* Perform capability check for the selected event devices */ + eh_do_capability_check(em_conf); + /* * Check if links are specified. Else generate a default config for * the event ports used. @@ -529,11 +634,13 @@ eh_initialize_eventdev(struct eventmode_conf *em_conf) eventdev_config->ev_queue_mode; /* * All queues need to be set with sched_type as - * schedule type for the application stage. One queue - * would be reserved for the final eth tx stage. This - * will be an atomic queue. + * schedule type for the application stage. One + * queue would be reserved for the final eth tx + * stage if event device does not have internal + * ports. This will be an atomic queue. 
*/ - if (j == nb_eventqueue-1) { + if (!eventdev_config->all_internal_ports && + j == nb_eventqueue-1) { eventq_conf.schedule_type = RTE_SCHED_TYPE_ATOMIC; } else { @@ -847,6 +954,12 @@ eh_find_worker(uint32_t lcore_id, struct eh_conf *conf, /* Populate the curr_conf with the capabilities */ + /* Check for Tx internal port */ + if (eh_dev_has_tx_internal_port(eventdev_id)) + curr_conf.cap.tx_internal_port = EH_TX_TYPE_INTERNAL_PORT; + else + curr_conf.cap.tx_internal_port = EH_TX_TYPE_NO_INTERNAL_PORT; + /* Check for burst mode */ if (eh_dev_has_burst_mode(eventdev_id)) curr_conf.cap.burst = EH_RX_TYPE_BURST; @@ -1018,6 +1131,16 @@ eh_tx_adapter_configure(struct eventmode_conf *em_conf, } } + /* + * Check if Tx core is assigned. If Tx core is not assigned then + * the adapter has internal port for submitting Tx packets and + * Tx event queue & port setup is not required + */ + if (adapter->tx_core_id == (uint32_t) (-1)) { + /* Internal port is present */ + goto skip_tx_queue_port_setup; + } + /* Setup Tx queue & port */ /* Get event port used by the adapter */ @@ -1057,6 +1180,7 @@ eh_tx_adapter_configure(struct eventmode_conf *em_conf, rte_service_set_runstate_mapped_check(service_id, 0); +skip_tx_queue_port_setup: /* Start adapter */ ret = rte_event_eth_tx_adapter_start(adapter->adapter_id); if (ret < 0) { @@ -1141,13 +1265,22 @@ eh_display_rx_adapter_conf(struct eventmode_conf *em_conf) for (i = 0; i < nb_rx_adapter; i++) { adapter = &(em_conf->rx_adapter[i]); - EH_LOG_INFO( - "\tRx adaper ID: %-2d\tConnections: %-2d\tEvent dev ID: %-2d" - "\tRx core: %-2d", + sprintf(print_buf, + "\tRx adaper ID: %-2d\tConnections: %-2d\tEvent dev ID: %-2d", adapter->adapter_id, adapter->nb_connections, - adapter->eventdev_id, - adapter->rx_core_id); + adapter->eventdev_id); + if (adapter->rx_core_id == (uint32_t)-1) + sprintf(print_buf + strlen(print_buf), + "\tRx core: %-2s", "[INTERNAL PORT]"); + else if (adapter->rx_core_id == RTE_MAX_LCORE) + sprintf(print_buf + 
strlen(print_buf), + "\tRx core: %-2s", "[NONE]"); + else + sprintf(print_buf + strlen(print_buf), + "\tRx core: %-2d", adapter->rx_core_id); + + EH_LOG_INFO("%s", print_buf); for (j = 0; j < adapter->nb_connections; j++) { conn = &(adapter->conn[j]); diff --git a/examples/ipsec-secgw/event_helper.h b/examples/ipsec-secgw/event_helper.h index 31a158e..15a7bd6 100644 --- a/examples/ipsec-secgw/event_helper.h +++ b/examples/ipsec-secgw/event_helper.h @@ -66,12 +66,21 @@ enum eh_rx_types { EH_RX_TYPE_BURST }; +/** + * Event mode packet tx types + */ +enum eh_tx_types { + EH_TX_TYPE_INTERNAL_PORT = 0, + EH_TX_TYPE_NO_INTERNAL_PORT +}; + /* Event dev params */ struct eventdev_params { uint8_t eventdev_id; uint8_t nb_eventqueue; uint8_t nb_eventport; uint8_t ev_queue_mode; + uint8_t all_internal_ports; }; /** @@ -183,6 +192,8 @@ struct eh_app_worker_params { struct { uint64_t burst : 1; /**< Specify status of rx type burst */ + uint64_t tx_internal_port : 1; + /**< Specify whether tx internal port is available */ }; uint64_t u64; } cap; -- 2.7.4 ^ permalink raw reply [flat|nested] 147+ messages in thread
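The capability check added in this patch treats internal ports as usable only when every eth device behind the event device reports the INTERNAL_PORT capability; a single device without it forces a software (service) core, and without internal ports the last event queue is reserved for the Tx stage. A condensed sketch of that decision logic (the function names and the `CAP_INTERNAL_PORT` flag value are illustrative stand-ins, not the DPDK `RTE_EVENT_ETH_*_ADAPTER_CAP_INTERNAL_PORT` definitions):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define CAP_INTERNAL_PORT (1u << 0) /* illustrative stand-in flag */

/* Mirror of eh_dev_has_rx/tx_internal_port(): the capability counts
 * only if every connected eth device reports it. */
static bool
all_have_internal_port(const uint32_t *port_caps, size_t nb_ports)
{
	size_t i;

	for (i = 0; i < nb_ports; i++)
		if (!(port_caps[i] & CAP_INTERNAL_PORT))
			return false;

	return true;
}

/* Without internal ports the last event queue is reserved for the Tx
 * stage, so one fewer queue is available for Rx distribution. */
static uint8_t
nb_rx_event_queues(uint8_t nb_eventqueue, bool all_internal_ports)
{
	return all_internal_ports ? nb_eventqueue : nb_eventqueue - 1;
}
```

This all-or-nothing rule is why the patch can drop the eth core mask and decrement the queue count only when every event device in the configuration has both Rx and Tx internal ports.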
* [dpdk-dev] [PATCH v3 09/13] examples/ipsec-secgw: add event helper config init/uninit 2020-02-04 13:58 ` [dpdk-dev] [PATCH v3 00/13] " Lukasz Bartosik ` (7 preceding siblings ...) 2020-02-04 13:58 ` [dpdk-dev] [PATCH v3 08/13] examples/ipsec-secgw: add support for internal ports Lukasz Bartosik @ 2020-02-04 13:58 ` Lukasz Bartosik 2020-02-04 13:58 ` [dpdk-dev] [PATCH v3 10/13] examples/ipsec-secgw: add eventmode to ipsec-secgw Lukasz Bartosik ` (4 subsequent siblings) 13 siblings, 0 replies; 147+ messages in thread From: Lukasz Bartosik @ 2020-02-04 13:58 UTC (permalink / raw) To: Akhil Goyal, Radu Nicolau, Thomas Monjalon Cc: Jerin Jacob, Narayana Prasad, Ankur Dwivedi, Anoob Joseph, Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru, Konstantin Ananyev, dev Add eventmode helper eh_conf_init and eh_conf_uninit functions whose purpose is to initialize and uninitialize the eventmode helper configuration. Signed-off-by: Anoob Joseph <anoobj@marvell.com> Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> --- examples/ipsec-secgw/event_helper.c | 103 ++++++++++++++++++++++++++++++++++++ examples/ipsec-secgw/event_helper.h | 23 ++++++++ 2 files changed, 126 insertions(+) diff --git a/examples/ipsec-secgw/event_helper.c b/examples/ipsec-secgw/event_helper.c index 6b21884..423576d 100644 --- a/examples/ipsec-secgw/event_helper.c +++ b/examples/ipsec-secgw/event_helper.c @@ -1385,6 +1385,109 @@ eh_display_link_conf(struct eventmode_conf *em_conf) EH_LOG_INFO(""); } +struct eh_conf * +eh_conf_init(void) +{ + struct eventmode_conf *em_conf = NULL; + struct eh_conf *conf = NULL; + unsigned int eth_core_id; + void *bitmap = NULL; + uint32_t nb_bytes; + + /* Allocate memory for config */ + conf = calloc(1, sizeof(struct eh_conf)); + if (conf == NULL) { + EH_LOG_ERR("Failed to allocate memory for eventmode helper " + "config"); + return NULL; + } + + /* Set default conf */ + + /* Packet transfer mode: poll */ + conf->mode = EH_PKT_TRANSFER_MODE_POLL; + + /* Keep all ethernet 
ports enabled by default */ + conf->eth_portmask = -1; + + /* Allocate memory for event mode params */ + conf->mode_params = calloc(1, sizeof(struct eventmode_conf)); + if (conf->mode_params == NULL) { + EH_LOG_ERR("Failed to allocate memory for event mode params"); + goto free_conf; + } + + /* Get eventmode conf */ + em_conf = conf->mode_params; + + /* Allocate and initialize bitmap for eth cores */ + nb_bytes = rte_bitmap_get_memory_footprint(RTE_MAX_LCORE); + if (!nb_bytes) { + EH_LOG_ERR("Failed to get bitmap footprint"); + goto free_em_conf; + } + + bitmap = rte_zmalloc("event-helper-ethcore-bitmap", nb_bytes, + RTE_CACHE_LINE_SIZE); + if (!bitmap) { + EH_LOG_ERR("Failed to allocate memory for eth cores bitmap\n"); + goto free_em_conf; + } + + em_conf->eth_core_mask = rte_bitmap_init(RTE_MAX_LCORE, bitmap, + nb_bytes); + if (!em_conf->eth_core_mask) { + EH_LOG_ERR("Failed to initialize bitmap"); + goto free_bitmap; + } + + /* Set schedule type as not set */ + em_conf->ext_params.sched_type = SCHED_TYPE_NOT_SET; + + /* Set two cores as eth cores for Rx & Tx */ + + /* Use first core other than master core as Rx core */ + eth_core_id = rte_get_next_lcore(0, /* curr core */ + 1, /* skip master core */ + 0 /* wrap */); + + rte_bitmap_set(em_conf->eth_core_mask, eth_core_id); + + /* Use next core as Tx core */ + eth_core_id = rte_get_next_lcore(eth_core_id, /* curr core */ + 1, /* skip master core */ + 0 /* wrap */); + + rte_bitmap_set(em_conf->eth_core_mask, eth_core_id); + + return conf; + +free_bitmap: + rte_free(bitmap); +free_em_conf: + free(em_conf); +free_conf: + free(conf); + return NULL; +} + +void +eh_conf_uninit(struct eh_conf *conf) +{ + struct eventmode_conf *em_conf = NULL; + + if (!conf || !conf->mode_params) + return; + + /* Get eventmode conf */ + em_conf = conf->mode_params; + + /* Free evenmode configuration memory */ + rte_free(em_conf->eth_core_mask); + free(em_conf); + free(conf); +} + void eh_display_conf(struct eh_conf *conf) { diff --git 
a/examples/ipsec-secgw/event_helper.h b/examples/ipsec-secgw/event_helper.h index 15a7bd6..7ad975f 100644 --- a/examples/ipsec-secgw/event_helper.h +++ b/examples/ipsec-secgw/event_helper.h @@ -50,6 +50,9 @@ extern "C" { /* Max adapters that one Tx core can handle */ #define EVENT_MODE_MAX_ADAPTERS_PER_TX_CORE EVENT_MODE_MAX_TX_ADAPTERS +/* Used to indicate that queue schedule type is not set */ +#define SCHED_TYPE_NOT_SET 3 + /** * Packet transfer mode of the application */ @@ -204,6 +207,26 @@ struct eh_app_worker_params { }; /** + * Allocate memory for event helper configuration and initialize + * it with default values. + * + * @return + * - pointer to event helper configuration structure on success. + * - NULL on failure. + */ +struct eh_conf * +eh_conf_init(void); + +/** + * Uninitialize event helper configuration and release its memory +. * + * @param conf + * Event helper configuration + */ +void +eh_conf_uninit(struct eh_conf *conf); + +/** * Initialize event mode devices * * Application can call this function to get the event devices, eth devices -- 2.7.4 ^ permalink raw reply [flat|nested] 147+ messages in thread
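eh_conf_init() above reserves two default eth cores, Rx first and Tx next, by walking lcores with rte_get_next_lcore() (skipping the master core) and recording them in an rte_bitmap. A toy sketch of that core selection using a plain 64-bit mask (`next_worker_lcore` and `default_eth_core_mask` are hypothetical helpers that assume all cores are enabled; the real code also honors the EAL coremask):

```c
#include <assert.h>
#include <stdint.h>

/* Toy stand-in for rte_get_next_lcore(prev, skip_master, wrap): return
 * the lcore after 'prev', skipping the master core. Assumes every core
 * is enabled; the real EAL also consults the coremask and can wrap. */
static unsigned int
next_worker_lcore(unsigned int prev, unsigned int master)
{
	unsigned int i = prev + 1;

	if (i == master)
		i++;
	return i;
}

/* eh_conf_init() reserves two eth cores: the first non-master lcore for
 * the Rx adapter service and the following one for Tx. The real code
 * stores them in an rte_bitmap; a uint64_t mask stands in here. */
static uint64_t
default_eth_core_mask(unsigned int master)
{
	uint64_t mask = 0;
	unsigned int rx_core = next_worker_lcore(0, master);
	unsigned int tx_core = next_worker_lcore(rx_core, master);

	mask |= UINT64_C(1) << rx_core;
	mask |= UINT64_C(1) << tx_core;
	return mask;
}
```

These defaults matter only when internal ports are absent; the capability check in the previous patch clears the eth core mask again when every event device can deliver packets without a service core.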
* [dpdk-dev] [PATCH v3 10/13] examples/ipsec-secgw: add eventmode to ipsec-secgw 2020-02-04 13:58 ` [dpdk-dev] [PATCH v3 00/13] " Lukasz Bartosik ` (8 preceding siblings ...) 2020-02-04 13:58 ` [dpdk-dev] [PATCH v3 09/13] examples/ipsec-secgw: add event helper config init/uninit Lukasz Bartosik @ 2020-02-04 13:58 ` Lukasz Bartosik 2020-02-04 13:58 ` [dpdk-dev] [PATCH v3 11/13] examples/ipsec-secgw: add driver mode worker Lukasz Bartosik ` (3 subsequent siblings) 13 siblings, 0 replies; 147+ messages in thread From: Lukasz Bartosik @ 2020-02-04 13:58 UTC (permalink / raw) To: Akhil Goyal, Radu Nicolau, Thomas Monjalon Cc: Jerin Jacob, Narayana Prasad, Ankur Dwivedi, Anoob Joseph, Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru, Konstantin Ananyev, dev Add eventmode support to ipsec-secgw. With the aid of event helper configure and use the eventmode capabilities. Signed-off-by: Anoob Joseph <anoobj@marvell.com> Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> --- examples/ipsec-secgw/event_helper.c | 3 + examples/ipsec-secgw/event_helper.h | 14 ++ examples/ipsec-secgw/ipsec-secgw.c | 258 +++++++++++++++++++++++++++++++++++- examples/ipsec-secgw/ipsec.h | 17 +++ examples/ipsec-secgw/sa.c | 19 +-- 5 files changed, 294 insertions(+), 17 deletions(-) diff --git a/examples/ipsec-secgw/event_helper.c b/examples/ipsec-secgw/event_helper.c index 423576d..60bece7 100644 --- a/examples/ipsec-secgw/event_helper.c +++ b/examples/ipsec-secgw/event_helper.c @@ -966,6 +966,8 @@ eh_find_worker(uint32_t lcore_id, struct eh_conf *conf, else curr_conf.cap.burst = EH_RX_TYPE_NON_BURST; + curr_conf.cap.ipsec_mode = conf->ipsec_mode; + /* Parse the passed list and see if we have matching capabilities */ /* Initialize the pointer used to traverse the list */ @@ -1406,6 +1408,7 @@ eh_conf_init(void) /* Packet transfer mode: poll */ conf->mode = EH_PKT_TRANSFER_MODE_POLL; + conf->ipsec_mode = EH_IPSEC_MODE_TYPE_APP; /* Keep all ethernet ports enabled by default */ conf->eth_portmask 
= -1; diff --git a/examples/ipsec-secgw/event_helper.h b/examples/ipsec-secgw/event_helper.h index 7ad975f..2cdc0d1 100644 --- a/examples/ipsec-secgw/event_helper.h +++ b/examples/ipsec-secgw/event_helper.h @@ -77,6 +77,14 @@ enum eh_tx_types { EH_TX_TYPE_NO_INTERNAL_PORT }; +/** + * Event mode ipsec mode types + */ +enum eh_ipsec_mode_types { + EH_IPSEC_MODE_TYPE_APP = 0, + EH_IPSEC_MODE_TYPE_DRIVER +}; + /* Event dev params */ struct eventdev_params { uint8_t eventdev_id; @@ -186,6 +194,10 @@ struct eh_conf { */ void *mode_params; /**< Mode specific parameters */ + + /** Application specific params */ + enum eh_ipsec_mode_types ipsec_mode; + /**< Mode of ipsec run */ }; /* Workers registered by the application */ @@ -197,6 +209,8 @@ struct eh_app_worker_params { /**< Specify status of rx type burst */ uint64_t tx_internal_port : 1; /**< Specify whether tx internal port is available */ + uint64_t ipsec_mode : 1; + /**< Specify ipsec processing level */ }; uint64_t u64; } cap; diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c index d5e8fe5..7d7092d 100644 --- a/examples/ipsec-secgw/ipsec-secgw.c +++ b/examples/ipsec-secgw/ipsec-secgw.c @@ -2,6 +2,7 @@ * Copyright(c) 2016 Intel Corporation */ +#include <stdbool.h> #include <stdio.h> #include <stdlib.h> #include <stdint.h> @@ -14,6 +15,7 @@ #include <sys/queue.h> #include <stdarg.h> #include <errno.h> +#include <signal.h> #include <getopt.h> #include <rte_common.h> @@ -41,12 +43,17 @@ #include <rte_jhash.h> #include <rte_cryptodev.h> #include <rte_security.h> +#include <rte_bitmap.h> +#include <rte_eventdev.h> #include <rte_ip.h> #include <rte_ip_frag.h> +#include "event_helper.h" #include "ipsec.h" #include "parser.h" +volatile bool force_quit; + #define RTE_LOGTYPE_IPSEC RTE_LOGTYPE_USER1 #define MAX_JUMBO_PKT_LEN 9600 @@ -133,12 +140,20 @@ struct flow_info flow_info_tbl[RTE_MAX_ETHPORTS]; #define CMD_LINE_OPT_CONFIG "config" #define CMD_LINE_OPT_SINGLE_SA "single-sa" #define 
CMD_LINE_OPT_CRYPTODEV_MASK "cryptodev_mask" +#define CMD_LINE_OPT_TRANSFER_MODE "transfer-mode" +#define CMD_LINE_OPT_SCHEDULE_TYPE "event-schedule-type" #define CMD_LINE_OPT_RX_OFFLOAD "rxoffload" #define CMD_LINE_OPT_TX_OFFLOAD "txoffload" #define CMD_LINE_OPT_REASSEMBLE "reassemble" #define CMD_LINE_OPT_MTU "mtu" #define CMD_LINE_OPT_FRAG_TTL "frag-ttl" +#define CMD_LINE_ARG_EVENT "event" +#define CMD_LINE_ARG_POLL "poll" +#define CMD_LINE_ARG_ORDERED "ordered" +#define CMD_LINE_ARG_ATOMIC "atomic" +#define CMD_LINE_ARG_PARALLEL "parallel" + enum { /* long options mapped to a short option */ @@ -149,6 +164,8 @@ enum { CMD_LINE_OPT_CONFIG_NUM, CMD_LINE_OPT_SINGLE_SA_NUM, CMD_LINE_OPT_CRYPTODEV_MASK_NUM, + CMD_LINE_OPT_TRANSFER_MODE_NUM, + CMD_LINE_OPT_SCHEDULE_TYPE_NUM, CMD_LINE_OPT_RX_OFFLOAD_NUM, CMD_LINE_OPT_TX_OFFLOAD_NUM, CMD_LINE_OPT_REASSEMBLE_NUM, @@ -160,6 +177,8 @@ static const struct option lgopts[] = { {CMD_LINE_OPT_CONFIG, 1, 0, CMD_LINE_OPT_CONFIG_NUM}, {CMD_LINE_OPT_SINGLE_SA, 1, 0, CMD_LINE_OPT_SINGLE_SA_NUM}, {CMD_LINE_OPT_CRYPTODEV_MASK, 1, 0, CMD_LINE_OPT_CRYPTODEV_MASK_NUM}, + {CMD_LINE_OPT_TRANSFER_MODE, 1, 0, CMD_LINE_OPT_TRANSFER_MODE_NUM}, + {CMD_LINE_OPT_SCHEDULE_TYPE, 1, 0, CMD_LINE_OPT_SCHEDULE_TYPE_NUM}, {CMD_LINE_OPT_RX_OFFLOAD, 1, 0, CMD_LINE_OPT_RX_OFFLOAD_NUM}, {CMD_LINE_OPT_TX_OFFLOAD, 1, 0, CMD_LINE_OPT_TX_OFFLOAD_NUM}, {CMD_LINE_OPT_REASSEMBLE, 1, 0, CMD_LINE_OPT_REASSEMBLE_NUM}, @@ -1277,6 +1296,8 @@ print_usage(const char *prgname) " --config (port,queue,lcore)[,(port,queue,lcore)]" " [--single-sa SAIDX]" " [--cryptodev_mask MASK]" + " [--transfer-mode MODE]" + " [--event-schedule-type TYPE]" " [--" CMD_LINE_OPT_RX_OFFLOAD " RX_OFFLOAD_MASK]" " [--" CMD_LINE_OPT_TX_OFFLOAD " TX_OFFLOAD_MASK]" " [--" CMD_LINE_OPT_REASSEMBLE " REASSEMBLE_TABLE_SIZE]" @@ -1298,6 +1319,14 @@ print_usage(const char *prgname) " bypassing the SP\n" " --cryptodev_mask MASK: Hexadecimal bitmask of the crypto\n" " devices to configure\n" + " 
--transfer-mode MODE\n" + " \"poll\" : Packet transfer via polling (default)\n" + " \"event\" : Packet transfer via event device\n" + " --event-schedule-type TYPE queue schedule type, used only when\n" + " transfer mode is set to event\n" + " \"ordered\" : Ordered (default)\n" + " \"atomic\" : Atomic\n" + " \"parallel\" : Parallel\n" " --" CMD_LINE_OPT_RX_OFFLOAD ": bitmask of the RX HW offload capabilities to enable/use\n" " (DEV_RX_OFFLOAD_*)\n" @@ -1432,8 +1461,45 @@ print_app_sa_prm(const struct app_sa_prm *prm) printf("Frag TTL: %" PRIu64 " ns\n", frag_ttl_ns); } +static int +parse_transfer_mode(struct eh_conf *conf, const char *optarg) +{ + if (!strcmp(CMD_LINE_ARG_POLL, optarg)) + conf->mode = EH_PKT_TRANSFER_MODE_POLL; + else if (!strcmp(CMD_LINE_ARG_EVENT, optarg)) + conf->mode = EH_PKT_TRANSFER_MODE_EVENT; + else { + printf("Unsupported packet transfer mode\n"); + return -EINVAL; + } + + return 0; +} + +static int +parse_schedule_type(struct eh_conf *conf, const char *optarg) +{ + struct eventmode_conf *em_conf = NULL; + + /* Get eventmode conf */ + em_conf = conf->mode_params; + + if (!strcmp(CMD_LINE_ARG_ORDERED, optarg)) + em_conf->ext_params.sched_type = RTE_SCHED_TYPE_ORDERED; + else if (!strcmp(CMD_LINE_ARG_ATOMIC, optarg)) + em_conf->ext_params.sched_type = RTE_SCHED_TYPE_ATOMIC; + else if (!strcmp(CMD_LINE_ARG_PARALLEL, optarg)) + em_conf->ext_params.sched_type = RTE_SCHED_TYPE_PARALLEL; + else { + printf("Unsupported queue schedule type\n"); + return -EINVAL; + } + + return 0; +} + static int32_t -parse_args(int32_t argc, char **argv) +parse_args(int32_t argc, char **argv, struct eh_conf *eh_conf) { int opt; int64_t ret; @@ -1522,6 +1588,7 @@ parse_args(int32_t argc, char **argv) /* else */ single_sa = 1; single_sa_idx = ret; + eh_conf->ipsec_mode = EH_IPSEC_MODE_TYPE_DRIVER; printf("Configured with single SA index %u\n", single_sa_idx); break; @@ -1536,6 +1603,25 @@ parse_args(int32_t argc, char **argv) /* else */ enabled_cryptodev_mask = ret; 
break; + + case CMD_LINE_OPT_TRANSFER_MODE_NUM: + ret = parse_transfer_mode(eh_conf, optarg); + if (ret < 0) { + printf("Invalid packet transfer mode\n"); + print_usage(prgname); + return -1; + } + break; + + case CMD_LINE_OPT_SCHEDULE_TYPE_NUM: + ret = parse_schedule_type(eh_conf, optarg); + if (ret < 0) { + printf("Invalid queue schedule type\n"); + print_usage(prgname); + return -1; + } + break; + case CMD_LINE_OPT_RX_OFFLOAD_NUM: ret = parse_mask(optarg, &dev_rx_offload); if (ret != 0) { @@ -2450,16 +2536,116 @@ create_default_ipsec_flow(uint16_t port_id, uint64_t rx_offloads) port_id); } +static void +signal_handler(int signum) +{ + if (signum == SIGINT || signum == SIGTERM) { + printf("\n\nSignal %d received, preparing to exit...\n", + signum); + force_quit = true; + } +} + +static void +ev_mode_sess_verify(struct ipsec_sa *sa, int nb_sa) +{ + struct rte_ipsec_session *ips; + int32_t i; + + if (!sa || !nb_sa) + return; + + for (i = 0; i < nb_sa; i++) { + ips = ipsec_get_primary_session(sa); + if (ips->type != RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL) + rte_exit(EXIT_FAILURE, "Event mode supports only " + "inline protocol sessions\n"); + } + +} + +static int32_t +check_eh_conf(struct eh_conf *eh_conf) +{ + struct eventmode_conf *em_conf = NULL; + + if (!eh_conf || !eh_conf->mode_params) + return -EINVAL; + + /* Get eventmode conf */ + em_conf = eh_conf->mode_params; + + if (eh_conf->mode == EH_PKT_TRANSFER_MODE_POLL && + em_conf->ext_params.sched_type != SCHED_TYPE_NOT_SET) { + printf("error: option --event-schedule-type applies only to " + "event mode\n"); + return -EINVAL; + } + + if (eh_conf->mode != EH_PKT_TRANSFER_MODE_EVENT) + return 0; + + /* Set schedule type to ORDERED if it wasn't explicitly set by user */ + if (em_conf->ext_params.sched_type == SCHED_TYPE_NOT_SET) + em_conf->ext_params.sched_type = RTE_SCHED_TYPE_ORDERED; + + /* + * Event mode currently supports only inline protocol sessions. 
+ * If there are other types of sessions configured then exit with + * error. + */ + ev_mode_sess_verify(sa_in, nb_sa_in); + ev_mode_sess_verify(sa_out, nb_sa_out); + + return 0; +} + +static void +inline_sessions_free(struct sa_ctx *sa_ctx) +{ + struct rte_ipsec_session *ips; + struct ipsec_sa *sa; + int32_t i, ret; + + if (!sa_ctx) + return; + + for (i = 0; i < IPSEC_SA_MAX_ENTRIES; i++) { + + sa = &sa_ctx->sa[i]; + if (!sa->spi) + continue; + + ips = ipsec_get_primary_session(sa); + if (ips->type != RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL && + ips->type != RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO) + continue; + + if (!rte_eth_dev_is_valid_port(sa->portid)) + continue; + + ret = rte_security_session_destroy( + rte_eth_dev_get_sec_ctx(sa->portid), + ips->security.ses); + if (ret) + RTE_LOG(ERR, IPSEC, "Failed to destroy security " + "session type %d, spi %d\n", + ips->type, sa->spi); + } +} + int32_t main(int32_t argc, char **argv) { int32_t ret; uint32_t lcore_id; + uint32_t cdev_id; uint32_t i; uint8_t socket_id; uint16_t portid; uint64_t req_rx_offloads[RTE_MAX_ETHPORTS]; uint64_t req_tx_offloads[RTE_MAX_ETHPORTS]; + struct eh_conf *eh_conf = NULL; size_t sess_sz; /* init EAL */ @@ -2469,8 +2655,17 @@ main(int32_t argc, char **argv) argc -= ret; argv += ret; + force_quit = false; + signal(SIGINT, signal_handler); + signal(SIGTERM, signal_handler); + + /* initialize event helper configuration */ + eh_conf = eh_conf_init(); + if (eh_conf == NULL) + rte_exit(EXIT_FAILURE, "Failed to init event helper config"); + /* parse application arguments (after the EAL ones) */ - ret = parse_args(argc, argv); + ret = parse_args(argc, argv, eh_conf); if (ret < 0) rte_exit(EXIT_FAILURE, "Invalid parameters\n"); @@ -2490,6 +2685,9 @@ main(int32_t argc, char **argv) if (check_params() < 0) rte_exit(EXIT_FAILURE, "check_params failed\n"); + if (check_eh_conf(eh_conf) < 0) + rte_exit(EXIT_FAILURE, "check_eh_conf failed\n"); + ret = init_lcore_rx_queues(); if (ret < 0) 
rte_exit(EXIT_FAILURE, "init_lcore_rx_queues failed\n"); @@ -2529,6 +2727,18 @@ main(int32_t argc, char **argv) cryptodevs_init(); + /* + * Set the enabled port mask in helper config for use by helper + * sub-system. This will be used while initializing devices using + * helper sub-system. + */ + eh_conf->eth_portmask = enabled_port_mask; + + /* Initialize eventmode components */ + ret = eh_devs_init(eh_conf); + if (ret < 0) + rte_exit(EXIT_FAILURE, "eh_devs_init failed, err=%d\n", ret); + /* start ports */ RTE_ETH_FOREACH_DEV(portid) { if ((enabled_port_mask & (1 << portid)) == 0) @@ -2583,10 +2793,54 @@ main(int32_t argc, char **argv) /* launch per-lcore init on every lcore */ rte_eal_mp_remote_launch(main_loop, NULL, CALL_MASTER); + RTE_LCORE_FOREACH_SLAVE(lcore_id) { if (rte_eal_wait_lcore(lcore_id) < 0) return -1; } + /* Uninitialize eventmode components */ + ret = eh_devs_uninit(eh_conf); + if (ret < 0) + rte_exit(EXIT_FAILURE, "eh_devs_uninit failed, err=%d\n", ret); + + /* Free eventmode configuration memory */ + eh_conf_uninit(eh_conf); + + /* Destroy inline inbound and outbound sessions */ + for (i = 0; i < NB_SOCKETS && i < rte_socket_count(); i++) { + socket_id = rte_socket_id_by_idx(i); + inline_sessions_free(socket_ctx[socket_id].sa_in); + inline_sessions_free(socket_ctx[socket_id].sa_out); + } + + for (cdev_id = 0; cdev_id < rte_cryptodev_count(); cdev_id++) { + printf("Closing cryptodev %d...", cdev_id); + rte_cryptodev_stop(cdev_id); + rte_cryptodev_close(cdev_id); + printf(" Done\n"); + } + + RTE_ETH_FOREACH_DEV(portid) { + if ((enabled_port_mask & (1 << portid)) == 0) + continue; + + printf("Closing port %d...", portid); + if (flow_info_tbl[portid].rx_def_flow) { + struct rte_flow_error err; + + ret = rte_flow_destroy(portid, + flow_info_tbl[portid].rx_def_flow, &err); + if (ret) + RTE_LOG(ERR, IPSEC, "Failed to destroy flow " + " for port %u, err msg: %s\n", portid, + err.message); + } + rte_eth_dev_stop(portid); + rte_eth_dev_close(portid); + 
printf(" Done\n"); + } + printf("Bye...\n"); + return 0; } diff --git a/examples/ipsec-secgw/ipsec.h b/examples/ipsec-secgw/ipsec.h index 28ff07d..383a379 100644 --- a/examples/ipsec-secgw/ipsec.h +++ b/examples/ipsec-secgw/ipsec.h @@ -153,6 +153,17 @@ struct ipsec_sa { struct rte_security_session_conf sess_conf; } __rte_cache_aligned; +struct sa_ctx { + void *satbl; /* pointer to array of rte_ipsec_sa objects*/ + struct ipsec_sa sa[IPSEC_SA_MAX_ENTRIES]; + union { + struct { + struct rte_crypto_sym_xform a; + struct rte_crypto_sym_xform b; + }; + } xf[IPSEC_SA_MAX_ENTRIES]; +}; + struct ipsec_mbuf_metadata { struct ipsec_sa *sa; struct rte_crypto_op cop; @@ -247,6 +258,12 @@ struct ipsec_traffic { struct traffic_type ip6; }; +extern struct ipsec_sa sa_out[IPSEC_SA_MAX_ENTRIES]; +extern uint32_t nb_sa_out; + +extern struct ipsec_sa sa_in[IPSEC_SA_MAX_ENTRIES]; +extern uint32_t nb_sa_in; + uint16_t ipsec_inbound(struct ipsec_ctx *ctx, struct rte_mbuf *pkts[], uint16_t nb_pkts, uint16_t len); diff --git a/examples/ipsec-secgw/sa.c b/examples/ipsec-secgw/sa.c index c75a5a1..c097d67 100644 --- a/examples/ipsec-secgw/sa.c +++ b/examples/ipsec-secgw/sa.c @@ -132,11 +132,11 @@ const struct supported_aead_algo aead_algos[] = { } }; -static struct ipsec_sa sa_out[IPSEC_SA_MAX_ENTRIES]; -static uint32_t nb_sa_out; +struct ipsec_sa sa_out[IPSEC_SA_MAX_ENTRIES]; +uint32_t nb_sa_out; -static struct ipsec_sa sa_in[IPSEC_SA_MAX_ENTRIES]; -static uint32_t nb_sa_in; +struct ipsec_sa sa_in[IPSEC_SA_MAX_ENTRIES]; +uint32_t nb_sa_in; static const struct supported_cipher_algo * find_match_cipher_algo(const char *cipher_keyword) @@ -781,17 +781,6 @@ print_one_sa_rule(const struct ipsec_sa *sa, int inbound) printf("\n"); } -struct sa_ctx { - void *satbl; /* pointer to array of rte_ipsec_sa objects*/ - struct ipsec_sa sa[IPSEC_SA_MAX_ENTRIES]; - union { - struct { - struct rte_crypto_sym_xform a; - struct rte_crypto_sym_xform b; - }; - } xf[IPSEC_SA_MAX_ENTRIES]; -}; - static struct 
sa_ctx * sa_create(const char *name, int32_t socket_id) { -- 2.7.4 ^ permalink raw reply [flat|nested] 147+ messages in thread
* [dpdk-dev] [PATCH v3 11/13] examples/ipsec-secgw: add driver mode worker 2020-02-04 13:58 ` [dpdk-dev] [PATCH v3 00/13] " Lukasz Bartosik ` (9 preceding siblings ...) 2020-02-04 13:58 ` [dpdk-dev] [PATCH v3 10/13] examples/ipsec-secgw: add eventmode to ipsec-secgw Lukasz Bartosik @ 2020-02-04 13:58 ` Lukasz Bartosik 2020-02-04 13:58 ` [dpdk-dev] [PATCH v3 12/13] examples/ipsec-secgw: add app " Lukasz Bartosik ` (2 subsequent siblings) 13 siblings, 0 replies; 147+ messages in thread From: Lukasz Bartosik @ 2020-02-04 13:58 UTC (permalink / raw) To: Akhil Goyal, Radu Nicolau, Thomas Monjalon Cc: Jerin Jacob, Narayana Prasad, Ankur Dwivedi, Anoob Joseph, Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru, Konstantin Ananyev, dev Add driver inbound and outbound worker threads for ipsec-secgw. In driver mode the application does as little as possible: it simply forwards packets back to the port from which the traffic was received, instructing the HW to apply inline security processing using the first outbound SA configured for a given port. If a port does not have an SA configured, outbound traffic on that port is silently dropped. The aim of this mode is to measure HW capabilities. Driver mode is selected with the single-sa option. The single-sa option accepts an SA index; in event mode, however, that index is ignored. 
Example command to run ipsec-secgw in driver mode: ipsec-secgw -w 0002:02:00.0,ipsec_in_max_spi=128 -w 0002:03:00.0,ipsec_in_max_spi=128 -w 0002:0e:00.0 -w 0002:10:00.1 --log-level=8 -c 0x1 -- -P -p 0x3 -u 0x1 --config "(1,0,0),(0,0,0)" -f aes-gcm.cfg --transfer-mode event --event-schedule-type parallel --single-sa 0 Signed-off-by: Anoob Joseph <anoobj@marvell.com> Signed-off-by: Ankur Dwivedi <adwivedi@marvell.com> Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> --- examples/ipsec-secgw/Makefile | 1 + examples/ipsec-secgw/ipsec-secgw.c | 32 +++-- examples/ipsec-secgw/ipsec-secgw.h | 21 ++++ examples/ipsec-secgw/ipsec.h | 11 ++ examples/ipsec-secgw/ipsec_worker.c | 243 ++++++++++++++++++++++++++++++++++++ examples/ipsec-secgw/meson.build | 2 +- 6 files changed, 292 insertions(+), 18 deletions(-) create mode 100644 examples/ipsec-secgw/ipsec-secgw.h create mode 100644 examples/ipsec-secgw/ipsec_worker.c diff --git a/examples/ipsec-secgw/Makefile b/examples/ipsec-secgw/Makefile index 09e3c5a..f6fd94c 100644 --- a/examples/ipsec-secgw/Makefile +++ b/examples/ipsec-secgw/Makefile @@ -15,6 +15,7 @@ SRCS-y += sa.c SRCS-y += rt.c SRCS-y += ipsec_process.c SRCS-y += ipsec-secgw.c +SRCS-y += ipsec_worker.c SRCS-y += event_helper.c CFLAGS += -gdwarf-2 diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c index 7d7092d..acd7135 100644 --- a/examples/ipsec-secgw/ipsec-secgw.c +++ b/examples/ipsec-secgw/ipsec-secgw.c @@ -70,8 +70,6 @@ volatile bool force_quit; #define BURST_TX_DRAIN_US 100 /* TX drain every ~100us */ -#define NB_SOCKETS 4 - /* Configure how many packets ahead to prefetch, when reading packets */ #define PREFETCH_OFFSET 3 @@ -79,8 +77,6 @@ volatile bool force_quit; #define MAX_LCORE_PARAMS 1024 -#define UNPROTECTED_PORT(port) (unprotected_port_mask & (1 << portid)) - /* * Configurable number of RX/TX ring descriptors */ @@ -187,15 +183,15 @@ static const struct option lgopts[] = { {NULL, 0, 0, 0} }; +uint32_t 
unprotected_port_mask; +uint32_t single_sa_idx; /* mask of enabled ports */ static uint32_t enabled_port_mask; static uint64_t enabled_cryptodev_mask = UINT64_MAX; -static uint32_t unprotected_port_mask; static int32_t promiscuous_on = 1; static int32_t numa_on = 1; /**< NUMA is enabled by default. */ static uint32_t nb_lcores; static uint32_t single_sa; -static uint32_t single_sa_idx; /* * RX/TX HW offload capabilities to enable/use on ethernet ports. @@ -278,7 +274,7 @@ static struct rte_eth_conf port_conf = { }, }; -static struct socket_ctx socket_ctx[NB_SOCKETS]; +struct socket_ctx socket_ctx[NB_SOCKETS]; /* * Determine is multi-segment support required: @@ -997,12 +993,12 @@ process_pkts(struct lcore_conf *qconf, struct rte_mbuf **pkts, prepare_traffic(pkts, &traffic, nb_pkts); if (unlikely(single_sa)) { - if (UNPROTECTED_PORT(portid)) + if (is_unprotected_port(portid)) process_pkts_inbound_nosp(&qconf->inbound, &traffic); else process_pkts_outbound_nosp(&qconf->outbound, &traffic); } else { - if (UNPROTECTED_PORT(portid)) + if (is_unprotected_port(portid)) process_pkts_inbound(&qconf->inbound, &traffic); else process_pkts_outbound(&qconf->outbound, &traffic); @@ -1113,8 +1109,8 @@ drain_outbound_crypto_queues(const struct lcore_conf *qconf, } /* main processing loop */ -static int32_t -main_loop(__attribute__((unused)) void *dummy) +void +ipsec_poll_mode_worker(void) { struct rte_mbuf *pkts[MAX_PKT_BURST]; uint32_t lcore_id; @@ -1156,7 +1152,7 @@ main_loop(__attribute__((unused)) void *dummy) if (qconf->nb_rx_queue == 0) { RTE_LOG(DEBUG, IPSEC, "lcore %u has nothing to do\n", lcore_id); - return 0; + return; } RTE_LOG(INFO, IPSEC, "entering main loop on lcore %u\n", lcore_id); @@ -1169,7 +1165,7 @@ main_loop(__attribute__((unused)) void *dummy) lcore_id, portid, queueid); } - while (1) { + while (!force_quit) { cur_tsc = rte_rdtsc(); /* TX queue buffer drain */ @@ -1193,7 +1189,7 @@ main_loop(__attribute__((unused)) void *dummy) process_pkts(qconf, pkts, 
nb_rx, portid); /* dequeue and process completed crypto-ops */ - if (UNPROTECTED_PORT(portid)) + if (is_unprotected_port(portid)) drain_inbound_crypto_queues(qconf, &qconf->inbound); else @@ -1315,8 +1311,10 @@ print_usage(const char *prgname) " -a enables SA SQN atomic behaviour\n" " -f CONFIG_FILE: Configuration file\n" " --config (port,queue,lcore): Rx queue configuration\n" - " --single-sa SAIDX: Use single SA index for outbound traffic,\n" - " bypassing the SP\n" + " --single-sa SAIDX: In poll mode use single SA index for\n" + " outbound traffic, bypassing the SP\n" + " In event mode selects driver mode,\n" + " SA index value is ignored\n" " --cryptodev_mask MASK: Hexadecimal bitmask of the crypto\n" " devices to configure\n" " --transfer-mode MODE\n" @@ -2792,7 +2790,7 @@ main(int32_t argc, char **argv) check_all_ports_link_status(enabled_port_mask); /* launch per-lcore init on every lcore */ - rte_eal_mp_remote_launch(main_loop, NULL, CALL_MASTER); + rte_eal_mp_remote_launch(ipsec_launch_one_lcore, eh_conf, CALL_MASTER); RTE_LCORE_FOREACH_SLAVE(lcore_id) { if (rte_eal_wait_lcore(lcore_id) < 0) diff --git a/examples/ipsec-secgw/ipsec-secgw.h b/examples/ipsec-secgw/ipsec-secgw.h new file mode 100644 index 0000000..06995cf --- /dev/null +++ b/examples/ipsec-secgw/ipsec-secgw.h @@ -0,0 +1,21 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C) 2020 Marvell International Ltd. 
+ */ +#ifndef _IPSEC_SECGW_H_ +#define _IPSEC_SECGW_H_ + +#define NB_SOCKETS 4 + +/* Port mask to identify the unprotected ports */ +extern uint32_t unprotected_port_mask; + +/* Index of SA in single mode */ +extern uint32_t single_sa_idx; + +static inline uint8_t +is_unprotected_port(uint16_t port_id) +{ + return unprotected_port_mask & (1 << port_id); +} + +#endif /* _IPSEC_SECGW_H_ */ diff --git a/examples/ipsec-secgw/ipsec.h b/examples/ipsec-secgw/ipsec.h index 383a379..15360fb 100644 --- a/examples/ipsec-secgw/ipsec.h +++ b/examples/ipsec-secgw/ipsec.h @@ -13,6 +13,8 @@ #include <rte_flow.h> #include <rte_ipsec.h> +#include "ipsec-secgw.h" + #define RTE_LOGTYPE_IPSEC RTE_LOGTYPE_USER1 #define RTE_LOGTYPE_IPSEC_ESP RTE_LOGTYPE_USER2 #define RTE_LOGTYPE_IPSEC_IPIP RTE_LOGTYPE_USER3 @@ -258,6 +260,15 @@ struct ipsec_traffic { struct traffic_type ip6; }; +/* Socket ctx */ +extern struct socket_ctx socket_ctx[NB_SOCKETS]; + +void +ipsec_poll_mode_worker(void); + +int +ipsec_launch_one_lcore(void *args); + extern struct ipsec_sa sa_out[IPSEC_SA_MAX_ENTRIES]; extern uint32_t nb_sa_out; diff --git a/examples/ipsec-secgw/ipsec_worker.c b/examples/ipsec-secgw/ipsec_worker.c new file mode 100644 index 0000000..3f63ab0 --- /dev/null +++ b/examples/ipsec-secgw/ipsec_worker.c @@ -0,0 +1,243 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2010-2016 Intel Corporation + * Copyright (C) 2020 Marvell International Ltd. 
+ */ +#include <stdio.h> +#include <stdlib.h> +#include <string.h> +#include <stdint.h> +#include <inttypes.h> +#include <sys/types.h> +#include <sys/queue.h> +#include <netinet/in.h> +#include <setjmp.h> +#include <stdarg.h> +#include <ctype.h> +#include <stdbool.h> + +#include <rte_common.h> +#include <rte_log.h> +#include <rte_memcpy.h> +#include <rte_atomic.h> +#include <rte_cycles.h> +#include <rte_prefetch.h> +#include <rte_lcore.h> +#include <rte_branch_prediction.h> +#include <rte_event_eth_tx_adapter.h> +#include <rte_ether.h> +#include <rte_ethdev.h> +#include <rte_eventdev.h> +#include <rte_malloc.h> +#include <rte_mbuf.h> + +#include "event_helper.h" +#include "ipsec.h" +#include "ipsec-secgw.h" + +extern volatile bool force_quit; + +static inline void +ipsec_event_pre_forward(struct rte_mbuf *m, unsigned int port_id) +{ + /* Save the destination port in the mbuf */ + m->port = port_id; + + /* Save eth queue for Tx */ + rte_event_eth_tx_adapter_txq_set(m, 0); +} + +static inline void +prepare_out_sessions_tbl(struct sa_ctx *sa_out, + struct rte_security_session **sess_tbl, uint16_t size) +{ + struct rte_ipsec_session *pri_sess; + struct ipsec_sa *sa; + int i; + + if (!sa_out) + return; + + for (i = 0; i < IPSEC_SA_MAX_ENTRIES; i++) { + + sa = &sa_out->sa[i]; + if (!sa->spi) + continue; + + pri_sess = ipsec_get_primary_session(sa); + if (pri_sess->type != + RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL) { + + RTE_LOG(ERR, IPSEC, "Invalid session type %d\n", + pri_sess->type); + continue; + } + + if (sa->portid >= size) { + RTE_LOG(ERR, IPSEC, + "Port id >= than table size %d, %d\n", + sa->portid, size); + continue; + } + + /* Use only first inline session found for a given port */ + if (sess_tbl[sa->portid]) + continue; + sess_tbl[sa->portid] = pri_sess->security.ses; + } +} + +/* + * Event mode exposes various operating modes depending on the + * capabilities of the event device and the operating mode + * selected. 
+ */ + +/* Workers registered */ +#define IPSEC_EVENTMODE_WORKERS 1 + +/* + * Event mode worker + * Operating parameters : non-burst - Tx internal port - driver mode + */ +static void +ipsec_wrkr_non_burst_int_port_drv_mode(struct eh_event_link_info *links, + uint8_t nb_links) +{ + struct rte_security_session *sess_tbl[RTE_MAX_ETHPORTS] = { NULL }; + unsigned int nb_rx = 0; + struct rte_mbuf *pkt; + struct rte_event ev; + uint32_t lcore_id; + int32_t socket_id; + int16_t port_id; + + /* Check if we have links registered for this lcore */ + if (nb_links == 0) { + /* No links registered - exit */ + return; + } + + /* Get core ID */ + lcore_id = rte_lcore_id(); + + /* Get socket ID */ + socket_id = rte_lcore_to_socket_id(lcore_id); + + /* + * Prepare security sessions table. In outbound driver mode + * we always use first session configured for a given port + */ + prepare_out_sessions_tbl(socket_ctx[socket_id].sa_out, sess_tbl, + RTE_MAX_ETHPORTS); + + RTE_LOG(INFO, IPSEC, + "Launching event mode worker (non-burst - Tx internal port - " + "driver mode) on lcore %d\n", lcore_id); + + /* We have valid links */ + + /* Check if it's single link */ + if (nb_links != 1) { + RTE_LOG(INFO, IPSEC, + "Multiple links not supported. 
Using first link\n"); + } + + RTE_LOG(INFO, IPSEC, " -- lcoreid=%u event_port_id=%u\n", lcore_id, + links[0].event_port_id); + while (!force_quit) { + /* Read packet from event queues */ + nb_rx = rte_event_dequeue_burst(links[0].eventdev_id, + links[0].event_port_id, + &ev, /* events */ + 1, /* nb_events */ + 0 /* timeout_ticks */); + + if (nb_rx == 0) + continue; + + pkt = ev.mbuf; + port_id = pkt->port; + + rte_prefetch0(rte_pktmbuf_mtod(pkt, void *)); + + /* Process packet */ + ipsec_event_pre_forward(pkt, port_id); + + if (!is_unprotected_port(port_id)) { + + if (unlikely(!sess_tbl[port_id])) { + rte_pktmbuf_free(pkt); + continue; + } + + /* Save security session */ + pkt->udata64 = (uint64_t) sess_tbl[port_id]; + + /* Mark the packet for Tx security offload */ + pkt->ol_flags |= PKT_TX_SEC_OFFLOAD; + } + + /* + * Since tx internal port is available, events can be + * directly enqueued to the adapter and it would be + * internally submitted to the eth device. + */ + rte_event_eth_tx_adapter_enqueue(links[0].eventdev_id, + links[0].event_port_id, + &ev, /* events */ + 1, /* nb_events */ + 0 /* flags */); + } +} + +static uint8_t +ipsec_eventmode_populate_wrkr_params(struct eh_app_worker_params *wrkrs) +{ + struct eh_app_worker_params *wrkr; + uint8_t nb_wrkr_param = 0; + + /* Save workers */ + wrkr = wrkrs; + + /* Non-burst - Tx internal port - driver mode */ + wrkr->cap.burst = EH_RX_TYPE_NON_BURST; + wrkr->cap.tx_internal_port = EH_TX_TYPE_INTERNAL_PORT; + wrkr->cap.ipsec_mode = EH_IPSEC_MODE_TYPE_DRIVER; + wrkr->worker_thread = ipsec_wrkr_non_burst_int_port_drv_mode; + wrkr++; + nb_wrkr_param++; + + return nb_wrkr_param; +} + +static void +ipsec_eventmode_worker(struct eh_conf *conf) +{ + struct eh_app_worker_params ipsec_wrkr[IPSEC_EVENTMODE_WORKERS] = { + {{{0} }, NULL } }; + uint8_t nb_wrkr_param; + + /* Populate ipsec_wrkr params */ + nb_wrkr_param = ipsec_eventmode_populate_wrkr_params(ipsec_wrkr); + + /* + * Launch correct worker after checking + * the event device's 
capabilities. + */ + eh_launch_worker(conf, ipsec_wrkr, nb_wrkr_param); +} + +int ipsec_launch_one_lcore(void *args) +{ + struct eh_conf *conf; + + conf = (struct eh_conf *)args; + + if (conf->mode == EH_PKT_TRANSFER_MODE_POLL) { + /* Run in poll mode */ + ipsec_poll_mode_worker(); + } else if (conf->mode == EH_PKT_TRANSFER_MODE_EVENT) { + /* Run in event mode */ + ipsec_eventmode_worker(conf); + } + return 0; +} diff --git a/examples/ipsec-secgw/meson.build b/examples/ipsec-secgw/meson.build index 20f4064..ab40ca5 100644 --- a/examples/ipsec-secgw/meson.build +++ b/examples/ipsec-secgw/meson.build @@ -10,5 +10,5 @@ deps += ['security', 'lpm', 'acl', 'hash', 'ip_frag', 'ipsec', 'eventdev'] allow_experimental_apis = true sources = files( 'esp.c', 'ipsec.c', 'ipsec_process.c', 'ipsec-secgw.c', - 'parser.c', 'rt.c', 'sa.c', 'sp4.c', 'sp6.c', 'event_helper.c' + 'parser.c', 'rt.c', 'sa.c', 'sp4.c', 'sp6.c', 'event_helper.c', 'ipsec_worker.c' ) -- 2.7.4 ^ permalink raw reply [flat|nested] 147+ messages in thread
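The per-packet decision made by the driver-mode worker above can be condensed to a few branches: traffic from an unprotected port is forwarded as-is (inbound inline processing already happened in HW), while traffic from a protected port needs the first outbound inline session configured for that port, and is dropped when none exists. A minimal sketch of that decision, modeled without DPDK types (struct pkt, TX_SEC_OFFLOAD and the session table are stand-ins for the mbuf, PKT_TX_SEC_OFFLOAD and sess_tbl[]):

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define MAX_PORTS 4
#define TX_SEC_OFFLOAD 0x1ULL   /* stand-in for PKT_TX_SEC_OFFLOAD */

/* Minimal stand-in for the per-packet state the worker touches */
struct pkt {
	uint16_t port;      /* ingress port; driver mode echoes it back out */
	uint64_t ol_flags;
	void *session;      /* stand-in for pkt->udata64 session pointer */
};

enum verdict { FORWARD, DROP };

/*
 * Driver-mode decision: unprotected-port traffic is forwarded as-is;
 * protected-port traffic needs the port's preconfigured inline session,
 * otherwise it is silently dropped.
 */
static enum verdict
drv_mode_process(struct pkt *p, uint32_t unprotected_port_mask,
		 void *const sess_tbl[MAX_PORTS])
{
	if (unprotected_port_mask & (1u << p->port))
		return FORWARD;              /* inbound: HW already did IPsec */

	if (sess_tbl[p->port] == NULL)
		return DROP;                 /* no SA configured for this port */

	p->session = sess_tbl[p->port];      /* save security session */
	p->ol_flags |= TX_SEC_OFFLOAD;       /* request inline Tx processing */
	return FORWARD;
}
```

Everything else in the worker (dequeue one event, prefetch, enqueue via the Tx adapter's internal port) is transport around this decision, which is why driver mode is a good proxy for raw HW throughput.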
* [dpdk-dev] [PATCH v3 12/13] examples/ipsec-secgw: add app mode worker 2020-02-04 13:58 ` [dpdk-dev] [PATCH v3 00/13] " Lukasz Bartosik ` (10 preceding siblings ...) 2020-02-04 13:58 ` [dpdk-dev] [PATCH v3 11/13] examples/ipsec-secgw: add driver mode worker Lukasz Bartosik @ 2020-02-04 13:58 ` Lukasz Bartosik 2020-02-04 13:58 ` [dpdk-dev] [PATCH v3 13/13] examples/ipsec-secgw: make number of buffers dynamic Lukasz Bartosik 2020-02-20 8:01 ` [dpdk-dev] [PATCH v4 00/15] add eventmode to ipsec-secgw Lukasz Bartosik 13 siblings, 0 replies; 147+ messages in thread From: Lukasz Bartosik @ 2020-02-04 13:58 UTC (permalink / raw) To: Akhil Goyal, Radu Nicolau, Thomas Monjalon Cc: Jerin Jacob, Narayana Prasad, Ankur Dwivedi, Anoob Joseph, Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru, Konstantin Ananyev, dev Add application inbound/outbound worker threads and IPsec application processing code for event mode. Example ipsec-secgw command in app mode: ipsec-secgw -w 0002:02:00.0,ipsec_in_max_spi=128 -w 0002:03:00.0,ipsec_in_max_spi=128 -w 0002:0e:00.0 -w 0002:10:00.1 --log-level=8 -c 0x1 -- -P -p 0x3 -u 0x1 --config "(1,0,0),(0,0,0)" -f aes-gcm.cfg --transfer-mode event --event-schedule-type parallel Signed-off-by: Anoob Joseph <anoobj@marvell.com> Signed-off-by: Ankur Dwivedi <adwivedi@marvell.com> Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> --- examples/ipsec-secgw/ipsec-secgw.c | 31 +-- examples/ipsec-secgw/ipsec-secgw.h | 65 ++++++ examples/ipsec-secgw/ipsec.h | 22 -- examples/ipsec-secgw/ipsec_worker.c | 420 +++++++++++++++++++++++++++++++++++- examples/ipsec-secgw/ipsec_worker.h | 39 ++++ 5 files changed, 523 insertions(+), 54 deletions(-) create mode 100644 examples/ipsec-secgw/ipsec_worker.h diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c index acd7135..862a7f0 100644 --- a/examples/ipsec-secgw/ipsec-secgw.c +++ b/examples/ipsec-secgw/ipsec-secgw.c @@ -50,12 +50,11 @@ #include "event_helper.h" #include "ipsec.h" 
+#include "ipsec_worker.h" #include "parser.h" volatile bool force_quit; -#define RTE_LOGTYPE_IPSEC RTE_LOGTYPE_USER1 - #define MAX_JUMBO_PKT_LEN 9600 #define MEMPOOL_CACHE_SIZE 256 @@ -85,29 +84,6 @@ volatile bool force_quit; static uint16_t nb_rxd = IPSEC_SECGW_RX_DESC_DEFAULT; static uint16_t nb_txd = IPSEC_SECGW_TX_DESC_DEFAULT; -#if RTE_BYTE_ORDER != RTE_LITTLE_ENDIAN -#define __BYTES_TO_UINT64(a, b, c, d, e, f, g, h) \ - (((uint64_t)((a) & 0xff) << 56) | \ - ((uint64_t)((b) & 0xff) << 48) | \ - ((uint64_t)((c) & 0xff) << 40) | \ - ((uint64_t)((d) & 0xff) << 32) | \ - ((uint64_t)((e) & 0xff) << 24) | \ - ((uint64_t)((f) & 0xff) << 16) | \ - ((uint64_t)((g) & 0xff) << 8) | \ - ((uint64_t)(h) & 0xff)) -#else -#define __BYTES_TO_UINT64(a, b, c, d, e, f, g, h) \ - (((uint64_t)((h) & 0xff) << 56) | \ - ((uint64_t)((g) & 0xff) << 48) | \ - ((uint64_t)((f) & 0xff) << 40) | \ - ((uint64_t)((e) & 0xff) << 32) | \ - ((uint64_t)((d) & 0xff) << 24) | \ - ((uint64_t)((c) & 0xff) << 16) | \ - ((uint64_t)((b) & 0xff) << 8) | \ - ((uint64_t)(a) & 0xff)) -#endif -#define ETHADDR(a, b, c, d, e, f) (__BYTES_TO_UINT64(a, b, c, d, e, f, 0, 0)) - #define ETHADDR_TO_UINT64(addr) __BYTES_TO_UINT64( \ (addr)->addr_bytes[0], (addr)->addr_bytes[1], \ (addr)->addr_bytes[2], (addr)->addr_bytes[3], \ @@ -119,11 +95,6 @@ static uint16_t nb_txd = IPSEC_SECGW_TX_DESC_DEFAULT; #define MTU_TO_FRAMELEN(x) ((x) + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN) -/* port/source ethernet addr and destination ethernet addr */ -struct ethaddr_info { - uint64_t src, dst; -}; - struct ethaddr_info ethaddr_tbl[RTE_MAX_ETHPORTS] = { { 0, ETHADDR(0x00, 0x16, 0x3e, 0x7e, 0x94, 0x9a) }, { 0, ETHADDR(0x00, 0x16, 0x3e, 0x22, 0xa1, 0xd9) }, diff --git a/examples/ipsec-secgw/ipsec-secgw.h b/examples/ipsec-secgw/ipsec-secgw.h index 06995cf..2638c8f 100644 --- a/examples/ipsec-secgw/ipsec-secgw.h +++ b/examples/ipsec-secgw/ipsec-secgw.h @@ -4,8 +4,73 @@ #ifndef _IPSEC_SECGW_H_ #define _IPSEC_SECGW_H_ +#include <rte_hash.h> 
+ #define NB_SOCKETS 4 +#define MAX_PKT_BURST 32 + +#define RTE_LOGTYPE_IPSEC RTE_LOGTYPE_USER1 + +#if RTE_BYTE_ORDER != RTE_LITTLE_ENDIAN +#define __BYTES_TO_UINT64(a, b, c, d, e, f, g, h) \ + (((uint64_t)((a) & 0xff) << 56) | \ + ((uint64_t)((b) & 0xff) << 48) | \ + ((uint64_t)((c) & 0xff) << 40) | \ + ((uint64_t)((d) & 0xff) << 32) | \ + ((uint64_t)((e) & 0xff) << 24) | \ + ((uint64_t)((f) & 0xff) << 16) | \ + ((uint64_t)((g) & 0xff) << 8) | \ + ((uint64_t)(h) & 0xff)) +#else +#define __BYTES_TO_UINT64(a, b, c, d, e, f, g, h) \ + (((uint64_t)((h) & 0xff) << 56) | \ + ((uint64_t)((g) & 0xff) << 48) | \ + ((uint64_t)((f) & 0xff) << 40) | \ + ((uint64_t)((e) & 0xff) << 32) | \ + ((uint64_t)((d) & 0xff) << 24) | \ + ((uint64_t)((c) & 0xff) << 16) | \ + ((uint64_t)((b) & 0xff) << 8) | \ + ((uint64_t)(a) & 0xff)) +#endif + +#define ETHADDR(a, b, c, d, e, f) (__BYTES_TO_UINT64(a, b, c, d, e, f, 0, 0)) + +struct traffic_type { + const uint8_t *data[MAX_PKT_BURST * 2]; + struct rte_mbuf *pkts[MAX_PKT_BURST * 2]; + void *saptr[MAX_PKT_BURST * 2]; + uint32_t res[MAX_PKT_BURST * 2]; + uint32_t num; +}; + +struct ipsec_traffic { + struct traffic_type ipsec; + struct traffic_type ip4; + struct traffic_type ip6; +}; + +/* Fields optimized for devices without burst */ +struct traffic_type_nb { + const uint8_t *data; + struct rte_mbuf *pkt; + uint32_t res; + uint32_t num; +}; + +struct ipsec_traffic_nb { + struct traffic_type_nb ipsec; + struct traffic_type_nb ip4; + struct traffic_type_nb ip6; +}; + +/* port/source ethernet addr and destination ethernet addr */ +struct ethaddr_info { + uint64_t src, dst; +}; + +extern struct ethaddr_info ethaddr_tbl[RTE_MAX_ETHPORTS]; + /* Port mask to identify the unprotected ports */ extern uint32_t unprotected_port_mask; diff --git a/examples/ipsec-secgw/ipsec.h b/examples/ipsec-secgw/ipsec.h index 15360fb..447e936 100644 --- a/examples/ipsec-secgw/ipsec.h +++ b/examples/ipsec-secgw/ipsec.h @@ -15,11 +15,9 @@ #include "ipsec-secgw.h" 
-#define RTE_LOGTYPE_IPSEC RTE_LOGTYPE_USER1 #define RTE_LOGTYPE_IPSEC_ESP RTE_LOGTYPE_USER2 #define RTE_LOGTYPE_IPSEC_IPIP RTE_LOGTYPE_USER3 -#define MAX_PKT_BURST 32 #define MAX_INFLIGHT 128 #define MAX_QP_PER_LCORE 256 @@ -246,29 +244,9 @@ struct cnt_blk { uint32_t cnt; } __attribute__((packed)); -struct traffic_type { - const uint8_t *data[MAX_PKT_BURST * 2]; - struct rte_mbuf *pkts[MAX_PKT_BURST * 2]; - void *saptr[MAX_PKT_BURST * 2]; - uint32_t res[MAX_PKT_BURST * 2]; - uint32_t num; -}; - -struct ipsec_traffic { - struct traffic_type ipsec; - struct traffic_type ip4; - struct traffic_type ip6; -}; - /* Socket ctx */ extern struct socket_ctx socket_ctx[NB_SOCKETS]; -void -ipsec_poll_mode_worker(void); - -int -ipsec_launch_one_lcore(void *args); - extern struct ipsec_sa sa_out[IPSEC_SA_MAX_ENTRIES]; extern uint32_t nb_sa_out; diff --git a/examples/ipsec-secgw/ipsec_worker.c b/examples/ipsec-secgw/ipsec_worker.c index 3f63ab0..715774b 100644 --- a/examples/ipsec-secgw/ipsec_worker.c +++ b/examples/ipsec-secgw/ipsec_worker.c @@ -15,6 +15,7 @@ #include <ctype.h> #include <stdbool.h> +#include <rte_acl.h> #include <rte_common.h> #include <rte_log.h> #include <rte_memcpy.h> @@ -29,13 +30,52 @@ #include <rte_eventdev.h> #include <rte_malloc.h> #include <rte_mbuf.h> +#include <rte_lpm.h> +#include <rte_lpm6.h> #include "event_helper.h" #include "ipsec.h" #include "ipsec-secgw.h" +#include "ipsec_worker.h" extern volatile bool force_quit; +static inline enum pkt_type +process_ipsec_get_pkt_type(struct rte_mbuf *pkt, uint8_t **nlp) +{ + struct rte_ether_hdr *eth; + + eth = rte_pktmbuf_mtod(pkt, struct rte_ether_hdr *); + if (eth->ether_type == rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4)) { + *nlp = RTE_PTR_ADD(eth, RTE_ETHER_HDR_LEN + + offsetof(struct ip, ip_p)); + if (**nlp == IPPROTO_ESP) + return PKT_TYPE_IPSEC_IPV4; + else + return PKT_TYPE_PLAIN_IPV4; + } else if (eth->ether_type == rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6)) { + *nlp = RTE_PTR_ADD(eth, RTE_ETHER_HDR_LEN + 
+ offsetof(struct ip6_hdr, ip6_nxt)); + if (**nlp == IPPROTO_ESP) + return PKT_TYPE_IPSEC_IPV6; + else + return PKT_TYPE_PLAIN_IPV6; + } + + /* Unknown/Unsupported type */ + return PKT_TYPE_INVALID; +} + +static inline void +update_mac_addrs(struct rte_mbuf *pkt, uint16_t portid) +{ + struct rte_ether_hdr *ethhdr; + + ethhdr = rte_pktmbuf_mtod(pkt, struct rte_ether_hdr *); + memcpy(&ethhdr->s_addr, &ethaddr_tbl[portid].src, RTE_ETHER_ADDR_LEN); + memcpy(&ethhdr->d_addr, &ethaddr_tbl[portid].dst, RTE_ETHER_ADDR_LEN); +} + static inline void ipsec_event_pre_forward(struct rte_mbuf *m, unsigned int port_id) { @@ -86,6 +126,286 @@ prepare_out_sessions_tbl(struct sa_ctx *sa_out, } } +static inline int +check_sp(struct sp_ctx *sp, const uint8_t *nlp, uint32_t *sa_idx) +{ + uint32_t res; + + if (unlikely(sp == NULL)) + return 0; + + rte_acl_classify((struct rte_acl_ctx *)sp, &nlp, &res, 1, + DEFAULT_MAX_CATEGORIES); + + if (unlikely(res == 0)) { + /* No match */ + return 0; + } + + if (res == DISCARD) + return 0; + else if (res == BYPASS) { + *sa_idx = 0; + return 1; + } + + *sa_idx = SPI2IDX(res); + if (*sa_idx < IPSEC_SA_MAX_ENTRIES) + return 1; + + /* Invalid SA IDX */ + return 0; +} + +static inline uint16_t +route4_pkt(struct rte_mbuf *pkt, struct rt_ctx *rt_ctx) +{ + uint32_t dst_ip; + uint16_t offset; + uint32_t hop; + int ret; + + offset = RTE_ETHER_HDR_LEN + offsetof(struct ip, ip_dst); + dst_ip = *rte_pktmbuf_mtod_offset(pkt, uint32_t *, offset); + dst_ip = rte_be_to_cpu_32(dst_ip); + + ret = rte_lpm_lookup((struct rte_lpm *)rt_ctx, dst_ip, &hop); + + if (ret == 0) { + /* We have a hit */ + return hop; + } + + /* else */ + return RTE_MAX_ETHPORTS; +} + +/* TODO: To be tested */ +static inline uint16_t +route6_pkt(struct rte_mbuf *pkt, struct rt_ctx *rt_ctx) +{ + uint8_t dst_ip[16]; + uint8_t *ip6_dst; + uint16_t offset; + uint32_t hop; + int ret; + + offset = RTE_ETHER_HDR_LEN + offsetof(struct ip6_hdr, ip6_dst); + ip6_dst = rte_pktmbuf_mtod_offset(pkt, uint8_t *, offset); 
+ memcpy(&dst_ip[0], ip6_dst, 16); + + ret = rte_lpm6_lookup((struct rte_lpm6 *)rt_ctx, dst_ip, &hop); + + if (ret == 0) { + /* We have a hit */ + return hop; + } + + /* else */ + return RTE_MAX_ETHPORTS; +} + +static inline uint16_t +get_route(struct rte_mbuf *pkt, struct route_table *rt, enum pkt_type type) +{ + if (type == PKT_TYPE_PLAIN_IPV4 || type == PKT_TYPE_IPSEC_IPV4) + return route4_pkt(pkt, rt->rt4_ctx); + else if (type == PKT_TYPE_PLAIN_IPV6 || type == PKT_TYPE_IPSEC_IPV6) + return route6_pkt(pkt, rt->rt6_ctx); + + return RTE_MAX_ETHPORTS; +} + +static inline int +process_ipsec_ev_inbound(struct ipsec_ctx *ctx, struct route_table *rt, + struct rte_event *ev) +{ + struct ipsec_sa *sa = NULL; + struct rte_mbuf *pkt; + uint16_t port_id = 0; + enum pkt_type type; + uint32_t sa_idx; + uint8_t *nlp; + + /* Get pkt from event */ + pkt = ev->mbuf; + + /* Check the packet type */ + type = process_ipsec_get_pkt_type(pkt, &nlp); + + switch (type) { + case PKT_TYPE_PLAIN_IPV4: + if (pkt->ol_flags & PKT_RX_SEC_OFFLOAD) { + if (unlikely(pkt->ol_flags & + PKT_RX_SEC_OFFLOAD_FAILED)) { + RTE_LOG(ERR, IPSEC, + "Inbound security offload failed\n"); + goto drop_pkt_and_exit; + } + sa = pkt->userdata; + } + + /* Check if we have a match */ + if (check_sp(ctx->sp4_ctx, nlp, &sa_idx) == 0) { + /* No valid match */ + goto drop_pkt_and_exit; + } + break; + + case PKT_TYPE_PLAIN_IPV6: + if (pkt->ol_flags & PKT_RX_SEC_OFFLOAD) { + if (unlikely(pkt->ol_flags & + PKT_RX_SEC_OFFLOAD_FAILED)) { + RTE_LOG(ERR, IPSEC, + "Inbound security offload failed\n"); + goto drop_pkt_and_exit; + } + sa = pkt->userdata; + } + + /* Check if we have a match */ + if (check_sp(ctx->sp6_ctx, nlp, &sa_idx) == 0) { + /* No valid match */ + goto drop_pkt_and_exit; + } + break; + + default: + RTE_LOG(ERR, IPSEC, "Unsupported packet type = %d\n", type); + goto drop_pkt_and_exit; + } + + /* Check if the packet has to be bypassed */ + if (sa_idx == 0) + goto route_and_send_pkt; + + /* Else the packet has to 
be protected with SA */ + + /* If the packet was IPsec processed, then SA pointer should be set */ + if (sa == NULL) + goto drop_pkt_and_exit; + + /* SPI on the packet should match with the one in SA */ + if (unlikely(sa->spi != sa_idx)) + goto drop_pkt_and_exit; + +route_and_send_pkt: + port_id = get_route(pkt, rt, type); + if (unlikely(port_id == RTE_MAX_ETHPORTS)) { + /* no match */ + goto drop_pkt_and_exit; + } + /* else, we have a matching route */ + + /* Update mac addresses */ + update_mac_addrs(pkt, port_id); + + /* Update the event with the dest port */ + ipsec_event_pre_forward(pkt, port_id); + return 1; + +drop_pkt_and_exit: + RTE_LOG(ERR, IPSEC, "Inbound packet dropped\n"); + rte_pktmbuf_free(pkt); + ev->mbuf = NULL; + return 0; +} + +static inline int +process_ipsec_ev_outbound(struct ipsec_ctx *ctx, struct route_table *rt, + struct rte_event *ev) +{ + struct rte_ipsec_session *sess; + struct sa_ctx *sa_ctx; + struct rte_mbuf *pkt; + uint16_t port_id = 0; + struct ipsec_sa *sa; + enum pkt_type type; + uint32_t sa_idx; + uint8_t *nlp; + + /* Get pkt from event */ + pkt = ev->mbuf; + + /* Check the packet type */ + type = process_ipsec_get_pkt_type(pkt, &nlp); + + switch (type) { + case PKT_TYPE_PLAIN_IPV4: + /* Check if we have a match */ + if (check_sp(ctx->sp4_ctx, nlp, &sa_idx) == 0) { + /* No valid match */ + goto drop_pkt_and_exit; + } + break; + case PKT_TYPE_PLAIN_IPV6: + /* Check if we have a match */ + if (check_sp(ctx->sp6_ctx, nlp, &sa_idx) == 0) { + /* No valid match */ + goto drop_pkt_and_exit; + } + break; + default: + /* + * Only plain IPv4 & IPv6 packets are allowed + * on protected port. Drop the rest. 
+ */ + RTE_LOG(ERR, IPSEC, "Unsupported packet type = %d\n", type); + goto drop_pkt_and_exit; + } + + /* Check if the packet has to be bypassed */ + if (sa_idx == 0) { + port_id = get_route(pkt, rt, type); + if (unlikely(port_id == RTE_MAX_ETHPORTS)) { + /* no match */ + goto drop_pkt_and_exit; + } + /* else, we have a matching route */ + goto send_pkt; + } + + /* Else the packet has to be protected */ + + /* Get SA ctx*/ + sa_ctx = ctx->sa_ctx; + + /* Get SA */ + sa = &(sa_ctx->sa[sa_idx]); + + /* Get IPsec session */ + sess = ipsec_get_primary_session(sa); + + /* Allow only inline protocol for now */ + if (sess->type != RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL) { + RTE_LOG(ERR, IPSEC, "SA type not supported\n"); + goto drop_pkt_and_exit; + } + + if (sess->security.ol_flags & RTE_SECURITY_TX_OLOAD_NEED_MDATA) + pkt->userdata = sess->security.ses; + + /* Mark the packet for Tx security offload */ + pkt->ol_flags |= PKT_TX_SEC_OFFLOAD; + + /* Get the port to which this pkt need to be submitted */ + port_id = sa->portid; + +send_pkt: + /* Update mac addresses */ + update_mac_addrs(pkt, port_id); + + /* Update the event with the dest port */ + ipsec_event_pre_forward(pkt, port_id); + return 1; + +drop_pkt_and_exit: + RTE_LOG(ERR, IPSEC, "Outbound packet dropped\n"); + rte_pktmbuf_free(pkt); + ev->mbuf = NULL; + return 0; +} + /* * Event mode exposes various operating modes depending on the * capabilities of the event device and the operating mode @@ -93,7 +413,7 @@ prepare_out_sessions_tbl(struct sa_ctx *sa_out, */ /* Workers registered */ -#define IPSEC_EVENTMODE_WORKERS 1 +#define IPSEC_EVENTMODE_WORKERS 2 /* * Event mode worker @@ -171,7 +491,7 @@ ipsec_wrkr_non_burst_int_port_drv_mode(struct eh_event_link_info *links, } /* Save security session */ - pkt->udata64 = (uint64_t) sess_tbl[port_id]; + pkt->userdata = sess_tbl[port_id]; /* Mark the packet for Tx security offload */ pkt->ol_flags |= PKT_TX_SEC_OFFLOAD; @@ -190,6 +510,94 @@ 
ipsec_wrkr_non_burst_int_port_drv_mode(struct eh_event_link_info *links, } } +/* + * Event mode worker + * Operating parameters : non-burst - Tx internal port - app mode + */ +static void +ipsec_wrkr_non_burst_int_port_app_mode(struct eh_event_link_info *links, + uint8_t nb_links) +{ + struct lcore_conf_ev_tx_int_port_wrkr lconf; + unsigned int nb_rx = 0; + struct rte_event ev; + uint32_t lcore_id; + int32_t socket_id; + int ret; + + /* Check if we have links registered for this lcore */ + if (nb_links == 0) { + /* No links registered - exit */ + return; + } + + /* We have valid links */ + + /* Get core ID */ + lcore_id = rte_lcore_id(); + + /* Get socket ID */ + socket_id = rte_lcore_to_socket_id(lcore_id); + + /* Save routing table */ + lconf.rt.rt4_ctx = socket_ctx[socket_id].rt_ip4; + lconf.rt.rt6_ctx = socket_ctx[socket_id].rt_ip6; + lconf.inbound.sp4_ctx = socket_ctx[socket_id].sp_ip4_in; + lconf.inbound.sp6_ctx = socket_ctx[socket_id].sp_ip6_in; + lconf.inbound.sa_ctx = socket_ctx[socket_id].sa_in; + lconf.inbound.session_pool = socket_ctx[socket_id].session_pool; + lconf.outbound.sp4_ctx = socket_ctx[socket_id].sp_ip4_out; + lconf.outbound.sp6_ctx = socket_ctx[socket_id].sp_ip6_out; + lconf.outbound.sa_ctx = socket_ctx[socket_id].sa_out; + lconf.outbound.session_pool = socket_ctx[socket_id].session_pool; + + RTE_LOG(INFO, IPSEC, + "Launching event mode worker (non-burst - Tx internal port - " + "app mode) on lcore %d\n", lcore_id); + + /* Check if it's single link */ + if (nb_links != 1) { + RTE_LOG(INFO, IPSEC, + "Multiple links not supported. 
Using first link\n"); + } + + RTE_LOG(INFO, IPSEC, " -- lcoreid=%u event_port_id=%u\n", lcore_id, + links[0].event_port_id); + + while (!force_quit) { + /* Read packet from event queues */ + nb_rx = rte_event_dequeue_burst(links[0].eventdev_id, + links[0].event_port_id, + &ev, /* events */ + 1, /* nb_events */ + 0 /* timeout_ticks */); + + if (nb_rx == 0) + continue; + + if (is_unprotected_port(ev.mbuf->port)) + ret = process_ipsec_ev_inbound(&lconf.inbound, + &lconf.rt, &ev); + else + ret = process_ipsec_ev_outbound(&lconf.outbound, + &lconf.rt, &ev); + if (ret != 1) + /* The pkt has been dropped */ + continue; + + /* + * Since tx internal port is available, events can be + * directly enqueued to the adapter and it would be + * internally submitted to the eth device. + */ + rte_event_eth_tx_adapter_enqueue(links[0].eventdev_id, + links[0].event_port_id, + &ev, /* events */ + 1, /* nb_events */ + 0 /* flags */); + } +} + static uint8_t ipsec_eventmode_populate_wrkr_params(struct eh_app_worker_params *wrkrs) { @@ -205,6 +613,14 @@ ipsec_eventmode_populate_wrkr_params(struct eh_app_worker_params *wrkrs) wrkr->cap.ipsec_mode = EH_IPSEC_MODE_TYPE_DRIVER; wrkr->worker_thread = ipsec_wrkr_non_burst_int_port_drv_mode; wrkr++; + nb_wrkr_param++; + + /* Non-burst - Tx internal port - app mode */ + wrkr->cap.burst = EH_RX_TYPE_NON_BURST; + wrkr->cap.tx_internal_port = EH_TX_TYPE_INTERNAL_PORT; + wrkr->cap.ipsec_mode = EH_IPSEC_MODE_TYPE_APP; + wrkr->worker_thread = ipsec_wrkr_non_burst_int_port_app_mode; + nb_wrkr_param++; return nb_wrkr_param; } diff --git a/examples/ipsec-secgw/ipsec_worker.h b/examples/ipsec-secgw/ipsec_worker.h new file mode 100644 index 0000000..1b18b3c --- /dev/null +++ b/examples/ipsec-secgw/ipsec_worker.h @@ -0,0 +1,39 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C) 2020 Marvell International Ltd. 
+ */ +#ifndef _IPSEC_WORKER_H_ +#define _IPSEC_WORKER_H_ + +#include "ipsec.h" + +enum pkt_type { + PKT_TYPE_PLAIN_IPV4 = 1, + PKT_TYPE_IPSEC_IPV4, + PKT_TYPE_PLAIN_IPV6, + PKT_TYPE_IPSEC_IPV6, + PKT_TYPE_INVALID +}; + +struct route_table { + struct rt_ctx *rt4_ctx; + struct rt_ctx *rt6_ctx; +}; + +/* + * Conf required by event mode worker with tx internal port + */ +struct lcore_conf_ev_tx_int_port_wrkr { + struct ipsec_ctx inbound; + struct ipsec_ctx outbound; + struct route_table rt; +} __rte_cache_aligned; + +/* TODO + * + * Move this function to ipsec_worker.c + */ +void ipsec_poll_mode_worker(void); + +int ipsec_launch_one_lcore(void *args); + +#endif /* _IPSEC_WORKER_H_ */ -- 2.7.4 ^ permalink raw reply [flat|nested] 147+ messages in thread
* [dpdk-dev] [PATCH v3 13/13] examples/ipsec-secgw: make number of buffers dynamic 2020-02-04 13:58 ` [dpdk-dev] [PATCH v3 00/13] " Lukasz Bartosik ` (11 preceding siblings ...) 2020-02-04 13:58 ` [dpdk-dev] [PATCH v3 12/13] examples/ipsec-secgw: add app " Lukasz Bartosik @ 2020-02-04 13:58 ` Lukasz Bartosik 2020-02-05 13:42 ` Ananyev, Konstantin 2020-02-20 8:01 ` [dpdk-dev] [PATCH v4 00/15] add eventmode to ipsec-secgw Lukasz Bartosik 13 siblings, 1 reply; 147+ messages in thread From: Lukasz Bartosik @ 2020-02-04 13:58 UTC (permalink / raw) To: Akhil Goyal, Radu Nicolau, Thomas Monjalon Cc: Jerin Jacob, Narayana Prasad, Ankur Dwivedi, Anoob Joseph, Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru, Konstantin Ananyev, dev Make number of buffers in a pool nb_mbuf_in_pool dependent on number of ports, cores and crypto queues. Add command line option -s which when used overrides dynamic calculation of number of buffers in a pool. Signed-off-by: Anoob Joseph <anoobj@marvell.com> Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> --- examples/ipsec-secgw/ipsec-secgw.c | 59 +++++++++++++++++++++++++++++++------- 1 file changed, 48 insertions(+), 11 deletions(-) diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c index 862a7f0..f7acb52 100644 --- a/examples/ipsec-secgw/ipsec-secgw.c +++ b/examples/ipsec-secgw/ipsec-secgw.c @@ -59,8 +59,6 @@ volatile bool force_quit; #define MEMPOOL_CACHE_SIZE 256 -#define NB_MBUF (32000) - #define CDEV_QUEUE_DESC 2048 #define CDEV_MAP_ENTRIES 16384 #define CDEV_MP_NB_OBJS 1024 @@ -163,6 +161,7 @@ static int32_t promiscuous_on = 1; static int32_t numa_on = 1; /**< NUMA is enabled by default. */ static uint32_t nb_lcores; static uint32_t single_sa; +static uint32_t nb_bufs_in_pool; /* * RX/TX HW offload capabilities to enable/use on ethernet ports. 
@@ -1259,6 +1258,7 @@ print_usage(const char *prgname) " [-w REPLAY_WINDOW_SIZE]" " [-e]" " [-a]" + " [-s NUMBER_OF_MBUFS_IN_PKT_POOL]" " -f CONFIG_FILE" " --config (port,queue,lcore)[,(port,queue,lcore)]" " [--single-sa SAIDX]" @@ -1280,6 +1280,7 @@ print_usage(const char *prgname) " size for each SA\n" " -e enables ESN\n" " -a enables SA SQN atomic behaviour\n" + " -s number of mbufs in packet pool (default 8192)\n" " -f CONFIG_FILE: Configuration file\n" " --config (port,queue,lcore): Rx queue configuration\n" " --single-sa SAIDX: In poll mode use single SA index for\n" @@ -1479,7 +1480,7 @@ parse_args(int32_t argc, char **argv, struct eh_conf *eh_conf) argvopt = argv; - while ((opt = getopt_long(argc, argvopt, "aelp:Pu:f:j:w:", + while ((opt = getopt_long(argc, argvopt, "aelp:Pu:f:j:w:s:", lgopts, &option_index)) != EOF) { switch (opt) { @@ -1513,6 +1514,19 @@ parse_args(int32_t argc, char **argv, struct eh_conf *eh_conf) cfgfile = optarg; f_present = 1; break; + + case 's': + ret = parse_decimal(optarg); + if (ret < 0) { + printf("Invalid number of buffers in a pool: " + "%s\n", optarg); + print_usage(prgname); + return -1; + } + + nb_bufs_in_pool = ret; + break; + case 'j': ret = parse_decimal(optarg); if (ret < RTE_MBUF_DEFAULT_BUF_SIZE || @@ -1876,12 +1890,12 @@ check_cryptodev_mask(uint8_t cdev_id) return -1; } -static int32_t +static uint16_t cryptodevs_init(void) { struct rte_cryptodev_config dev_conf; struct rte_cryptodev_qp_conf qp_conf; - uint16_t idx, max_nb_qps, qp, i; + uint16_t idx, max_nb_qps, qp, total_nb_qps, i; int16_t cdev_id; struct rte_hash_parameters params = { 0 }; @@ -1909,6 +1923,7 @@ cryptodevs_init(void) printf("lcore/cryptodev/qp mappings:\n"); idx = 0; + total_nb_qps = 0; for (cdev_id = 0; cdev_id < rte_cryptodev_count(); cdev_id++) { struct rte_cryptodev_info cdev_info; @@ -1942,6 +1957,7 @@ cryptodevs_init(void) if (qp == 0) continue; + total_nb_qps += qp; dev_conf.socket_id = rte_cryptodev_socket_id(cdev_id); 
dev_conf.nb_queue_pairs = qp; dev_conf.ff_disable = RTE_CRYPTODEV_FF_ASYMMETRIC_CRYPTO; @@ -1974,7 +1990,7 @@ cryptodevs_init(void) printf("\n"); - return 0; + return total_nb_qps; } static void @@ -2607,16 +2623,18 @@ int32_t main(int32_t argc, char **argv) { int32_t ret; - uint32_t lcore_id; + uint32_t lcore_id, nb_txq, nb_rxq = 0; uint32_t cdev_id; uint32_t i; uint8_t socket_id; - uint16_t portid; + uint16_t portid, nb_crypto_qp, nb_ports = 0; uint64_t req_rx_offloads[RTE_MAX_ETHPORTS]; uint64_t req_tx_offloads[RTE_MAX_ETHPORTS]; struct eh_conf *eh_conf = NULL; size_t sess_sz; + nb_bufs_in_pool = 0; + /* init EAL */ ret = rte_eal_init(argc, argv); if (ret < 0) @@ -2665,6 +2683,26 @@ main(int32_t argc, char **argv) sess_sz = max_session_size(); + nb_crypto_qp = cryptodevs_init(); + + if (nb_bufs_in_pool == 0) { + RTE_ETH_FOREACH_DEV(portid) { + if ((enabled_port_mask & (1 << portid)) == 0) + continue; + nb_ports++; + nb_rxq += get_port_nb_rx_queues(portid); + } + + nb_txq = nb_lcores; + + nb_bufs_in_pool = RTE_MAX((nb_rxq*nb_rxd + + nb_ports*nb_lcores*MAX_PKT_BURST + + nb_ports*nb_txq*nb_txd + + nb_lcores*MEMPOOL_CACHE_SIZE + + nb_crypto_qp*CDEV_QUEUE_DESC), + 8192U); + } + for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) { if (rte_lcore_is_enabled(lcore_id) == 0) continue; @@ -2678,11 +2716,12 @@ main(int32_t argc, char **argv) if (socket_ctx[socket_id].mbuf_pool) continue; - pool_init(&socket_ctx[socket_id], socket_id, NB_MBUF); + pool_init(&socket_ctx[socket_id], socket_id, nb_bufs_in_pool); session_pool_init(&socket_ctx[socket_id], socket_id, sess_sz); session_priv_pool_init(&socket_ctx[socket_id], socket_id, sess_sz); } + printf("Number of mbufs in packet pool %d\n", nb_bufs_in_pool); RTE_ETH_FOREACH_DEV(portid) { if ((enabled_port_mask & (1 << portid)) == 0) @@ -2694,8 +2733,6 @@ main(int32_t argc, char **argv) req_tx_offloads[portid]); } - cryptodevs_init(); - /* * Set the enabled port mask in helper config for use by helper * sub-system. 
This will be used while initializing devices using -- 2.7.4 ^ permalink raw reply [flat|nested] 147+ messages in thread
* Re: [dpdk-dev] [PATCH v3 13/13] examples/ipsec-secgw: make number of buffers dynamic 2020-02-04 13:58 ` [dpdk-dev] [PATCH v3 13/13] examples/ipsec-secgw: make number of buffers dynamic Lukasz Bartosik @ 2020-02-05 13:42 ` Ananyev, Konstantin 2020-02-05 16:08 ` [dpdk-dev] [EXT] " Lukas Bartosik 0 siblings, 1 reply; 147+ messages in thread From: Ananyev, Konstantin @ 2020-02-05 13:42 UTC (permalink / raw) To: Lukasz Bartosik, Akhil Goyal, Nicolau, Radu, Thomas Monjalon Cc: Jerin Jacob, Narayana Prasad, Ankur Dwivedi, Anoob Joseph, Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru, dev Hi Lukasz, > Make number of buffers in a pool nb_mbuf_in_pool dependent on number > of ports, cores and crypto queues. Add command line option -s which > when used overrides dynamic calculation of number of buffers in a pool. > > Signed-off-by: Anoob Joseph <anoobj@marvell.com> > Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> > --- > examples/ipsec-secgw/ipsec-secgw.c | 59 +++++++++++++++++++++++++++++++------- > 1 file changed, 48 insertions(+), 11 deletions(-) > > diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c > index 862a7f0..f7acb52 100644 > --- a/examples/ipsec-secgw/ipsec-secgw.c > +++ b/examples/ipsec-secgw/ipsec-secgw.c > @@ -59,8 +59,6 @@ volatile bool force_quit; > > #define MEMPOOL_CACHE_SIZE 256 > > -#define NB_MBUF (32000) > - > #define CDEV_QUEUE_DESC 2048 > #define CDEV_MAP_ENTRIES 16384 > #define CDEV_MP_NB_OBJS 1024 > @@ -163,6 +161,7 @@ static int32_t promiscuous_on = 1; > static int32_t numa_on = 1; /**< NUMA is enabled by default. */ > static uint32_t nb_lcores; > static uint32_t single_sa; > +static uint32_t nb_bufs_in_pool; > > /* > * RX/TX HW offload capabilities to enable/use on ethernet ports. 
> @@ -1259,6 +1258,7 @@ print_usage(const char *prgname) > " [-w REPLAY_WINDOW_SIZE]" > " [-e]" > " [-a]" > + " [-s NUMBER_OF_MBUFS_IN_PKT_POOL]" > " -f CONFIG_FILE" > " --config (port,queue,lcore)[,(port,queue,lcore)]" > " [--single-sa SAIDX]" > @@ -1280,6 +1280,7 @@ print_usage(const char *prgname) > " size for each SA\n" > " -e enables ESN\n" > " -a enables SA SQN atomic behaviour\n" > + " -s number of mbufs in packet pool (default 8192)\n" > " -f CONFIG_FILE: Configuration file\n" > " --config (port,queue,lcore): Rx queue configuration\n" > " --single-sa SAIDX: In poll mode use single SA index for\n" > @@ -1479,7 +1480,7 @@ parse_args(int32_t argc, char **argv, struct eh_conf *eh_conf) > > argvopt = argv; > > - while ((opt = getopt_long(argc, argvopt, "aelp:Pu:f:j:w:", > + while ((opt = getopt_long(argc, argvopt, "aelp:Pu:f:j:w:s:", > lgopts, &option_index)) != EOF) { > > switch (opt) { > @@ -1513,6 +1514,19 @@ parse_args(int32_t argc, char **argv, struct eh_conf *eh_conf) > cfgfile = optarg; > f_present = 1; > break; > + > + case 's': > + ret = parse_decimal(optarg); > + if (ret < 0) { > + printf("Invalid number of buffers in a pool: " > + "%s\n", optarg); > + print_usage(prgname); > + return -1; > + } > + > + nb_bufs_in_pool = ret; > + break; > + > case 'j': > ret = parse_decimal(optarg); > if (ret < RTE_MBUF_DEFAULT_BUF_SIZE || > @@ -1876,12 +1890,12 @@ check_cryptodev_mask(uint8_t cdev_id) > return -1; > } > > -static int32_t > +static uint16_t > cryptodevs_init(void) > { > struct rte_cryptodev_config dev_conf; > struct rte_cryptodev_qp_conf qp_conf; > - uint16_t idx, max_nb_qps, qp, i; > + uint16_t idx, max_nb_qps, qp, total_nb_qps, i; > int16_t cdev_id; > struct rte_hash_parameters params = { 0 }; > > @@ -1909,6 +1923,7 @@ cryptodevs_init(void) > printf("lcore/cryptodev/qp mappings:\n"); > > idx = 0; > + total_nb_qps = 0; > for (cdev_id = 0; cdev_id < rte_cryptodev_count(); cdev_id++) { > struct rte_cryptodev_info cdev_info; > > @@ -1942,6 +1957,7 @@ 
cryptodevs_init(void) > if (qp == 0) > continue; > > + total_nb_qps += qp; > dev_conf.socket_id = rte_cryptodev_socket_id(cdev_id); > dev_conf.nb_queue_pairs = qp; > dev_conf.ff_disable = RTE_CRYPTODEV_FF_ASYMMETRIC_CRYPTO; > @@ -1974,7 +1990,7 @@ cryptodevs_init(void) > > printf("\n"); > > - return 0; > + return total_nb_qps; > } > > static void > @@ -2607,16 +2623,18 @@ int32_t > main(int32_t argc, char **argv) > { > int32_t ret; > - uint32_t lcore_id; > + uint32_t lcore_id, nb_txq, nb_rxq = 0; > uint32_t cdev_id; > uint32_t i; > uint8_t socket_id; > - uint16_t portid; > + uint16_t portid, nb_crypto_qp, nb_ports = 0; > uint64_t req_rx_offloads[RTE_MAX_ETHPORTS]; > uint64_t req_tx_offloads[RTE_MAX_ETHPORTS]; > struct eh_conf *eh_conf = NULL; > size_t sess_sz; > > + nb_bufs_in_pool = 0; > + > /* init EAL */ > ret = rte_eal_init(argc, argv); > if (ret < 0) > @@ -2665,6 +2683,26 @@ main(int32_t argc, char **argv) > > sess_sz = max_session_size(); > > + nb_crypto_qp = cryptodevs_init(); > + > + if (nb_bufs_in_pool == 0) { > + RTE_ETH_FOREACH_DEV(portid) { > + if ((enabled_port_mask & (1 << portid)) == 0) > + continue; > + nb_ports++; > + nb_rxq += get_port_nb_rx_queues(portid); > + } > + > + nb_txq = nb_lcores; > + > + nb_bufs_in_pool = RTE_MAX((nb_rxq*nb_rxd + > + nb_ports*nb_lcores*MAX_PKT_BURST + > + nb_ports*nb_txq*nb_txd + > + nb_lcores*MEMPOOL_CACHE_SIZE + > + nb_crypto_qp*CDEV_QUEUE_DESC), I think you forgot to take into account possible reassemble table: @@ -2699,7 +2699,9 @@ main(int32_t argc, char **argv) nb_ports*nb_lcores*MAX_PKT_BURST + nb_ports*nb_txq*nb_txd + nb_lcores*MEMPOOL_CACHE_SIZE + - nb_crypto_qp*CDEV_QUEUE_DESC), + nb_crypto_qp*CDEV_QUEUE_DESC + + nb_lcores * frag_tbl_sz * + FRAG_TBL_BUCKET_ENTRIES), 8192U); Also it might be worth for better readability to put code for nb_bufs_in_pool calculation in a separate function (and add spaces between '*' and its' operands). Apart from that - whole series LGTM. 
Konstantin > + 8192U); > + } > + > for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) { > if (rte_lcore_is_enabled(lcore_id) == 0) > continue; > @@ -2678,11 +2716,12 @@ main(int32_t argc, char **argv) > if (socket_ctx[socket_id].mbuf_pool) > continue; > > - pool_init(&socket_ctx[socket_id], socket_id, NB_MBUF); > + pool_init(&socket_ctx[socket_id], socket_id, nb_bufs_in_pool); > session_pool_init(&socket_ctx[socket_id], socket_id, sess_sz); > session_priv_pool_init(&socket_ctx[socket_id], socket_id, > sess_sz); > } > + printf("Number of mbufs in packet pool %d\n", nb_bufs_in_pool); > > RTE_ETH_FOREACH_DEV(portid) { > if ((enabled_port_mask & (1 << portid)) == 0) > @@ -2694,8 +2733,6 @@ main(int32_t argc, char **argv) > req_tx_offloads[portid]); > } > > - cryptodevs_init(); > - > /* > * Set the enabled port mask in helper config for use by helper > * sub-system. This will be used while initializing devices using > -- > 2.7.4 ^ permalink raw reply [flat|nested] 147+ messages in thread
* Re: [dpdk-dev] [EXT] RE: [PATCH v3 13/13] examples/ipsec-secgw: make number of buffers dynamic 2020-02-05 13:42 ` Ananyev, Konstantin @ 2020-02-05 16:08 ` Lukas Bartosik 0 siblings, 0 replies; 147+ messages in thread From: Lukas Bartosik @ 2020-02-05 16:08 UTC (permalink / raw) To: Ananyev, Konstantin, Akhil Goyal, Nicolau, Radu, Thomas Monjalon Cc: Jerin Jacob Kollanukkaran, Narayana Prasad Raju Athreya, Ankur Dwivedi, Anoob Joseph, Archana Muniganti, Tejasree Kondoj, Vamsi Krishna Attunuru, dev Hi Konstantin, Please see inline. Thanks, Lukasz On 05.02.2020 14:42, Ananyev, Konstantin wrote: > External Email > > ---------------------------------------------------------------------- > > Hi Lukasz, > >> Make number of buffers in a pool nb_mbuf_in_pool dependent on number >> of ports, cores and crypto queues. Add command line option -s which >> when used overrides dynamic calculation of number of buffers in a pool. >> >> Signed-off-by: Anoob Joseph <anoobj@marvell.com> >> Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> >> --- >> examples/ipsec-secgw/ipsec-secgw.c | 59 +++++++++++++++++++++++++++++++------- >> 1 file changed, 48 insertions(+), 11 deletions(-) >> >> diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c >> index 862a7f0..f7acb52 100644 >> --- a/examples/ipsec-secgw/ipsec-secgw.c >> +++ b/examples/ipsec-secgw/ipsec-secgw.c >> @@ -59,8 +59,6 @@ volatile bool force_quit; >> >> #define MEMPOOL_CACHE_SIZE 256 >> >> -#define NB_MBUF (32000) >> - >> #define CDEV_QUEUE_DESC 2048 >> #define CDEV_MAP_ENTRIES 16384 >> #define CDEV_MP_NB_OBJS 1024 >> @@ -163,6 +161,7 @@ static int32_t promiscuous_on = 1; >> static int32_t numa_on = 1; /**< NUMA is enabled by default. */ >> static uint32_t nb_lcores; >> static uint32_t single_sa; >> +static uint32_t nb_bufs_in_pool; >> >> /* >> * RX/TX HW offload capabilities to enable/use on ethernet ports. 
>> @@ -1259,6 +1258,7 @@ print_usage(const char *prgname) >> " [-w REPLAY_WINDOW_SIZE]" >> " [-e]" >> " [-a]" >> + " [-s NUMBER_OF_MBUFS_IN_PKT_POOL]" >> " -f CONFIG_FILE" >> " --config (port,queue,lcore)[,(port,queue,lcore)]" >> " [--single-sa SAIDX]" >> @@ -1280,6 +1280,7 @@ print_usage(const char *prgname) >> " size for each SA\n" >> " -e enables ESN\n" >> " -a enables SA SQN atomic behaviour\n" >> + " -s number of mbufs in packet pool (default 8192)\n" >> " -f CONFIG_FILE: Configuration file\n" >> " --config (port,queue,lcore): Rx queue configuration\n" >> " --single-sa SAIDX: In poll mode use single SA index for\n" >> @@ -1479,7 +1480,7 @@ parse_args(int32_t argc, char **argv, struct eh_conf *eh_conf) >> >> argvopt = argv; >> >> - while ((opt = getopt_long(argc, argvopt, "aelp:Pu:f:j:w:", >> + while ((opt = getopt_long(argc, argvopt, "aelp:Pu:f:j:w:s:", >> lgopts, &option_index)) != EOF) { >> >> switch (opt) { >> @@ -1513,6 +1514,19 @@ parse_args(int32_t argc, char **argv, struct eh_conf *eh_conf) >> cfgfile = optarg; >> f_present = 1; >> break; >> + >> + case 's': >> + ret = parse_decimal(optarg); >> + if (ret < 0) { >> + printf("Invalid number of buffers in a pool: " >> + "%s\n", optarg); >> + print_usage(prgname); >> + return -1; >> + } >> + >> + nb_bufs_in_pool = ret; >> + break; >> + >> case 'j': >> ret = parse_decimal(optarg); >> if (ret < RTE_MBUF_DEFAULT_BUF_SIZE || >> @@ -1876,12 +1890,12 @@ check_cryptodev_mask(uint8_t cdev_id) >> return -1; >> } >> >> -static int32_t >> +static uint16_t >> cryptodevs_init(void) >> { >> struct rte_cryptodev_config dev_conf; >> struct rte_cryptodev_qp_conf qp_conf; >> - uint16_t idx, max_nb_qps, qp, i; >> + uint16_t idx, max_nb_qps, qp, total_nb_qps, i; >> int16_t cdev_id; >> struct rte_hash_parameters params = { 0 }; >> >> @@ -1909,6 +1923,7 @@ cryptodevs_init(void) >> printf("lcore/cryptodev/qp mappings:\n"); >> >> idx = 0; >> + total_nb_qps = 0; >> for (cdev_id = 0; cdev_id < rte_cryptodev_count(); cdev_id++) { >> 
struct rte_cryptodev_info cdev_info; >> >> @@ -1942,6 +1957,7 @@ cryptodevs_init(void) >> if (qp == 0) >> continue; >> >> + total_nb_qps += qp; >> dev_conf.socket_id = rte_cryptodev_socket_id(cdev_id); >> dev_conf.nb_queue_pairs = qp; >> dev_conf.ff_disable = RTE_CRYPTODEV_FF_ASYMMETRIC_CRYPTO; >> @@ -1974,7 +1990,7 @@ cryptodevs_init(void) >> >> printf("\n"); >> >> - return 0; >> + return total_nb_qps; >> } >> >> static void >> @@ -2607,16 +2623,18 @@ int32_t >> main(int32_t argc, char **argv) >> { >> int32_t ret; >> - uint32_t lcore_id; >> + uint32_t lcore_id, nb_txq, nb_rxq = 0; >> uint32_t cdev_id; >> uint32_t i; >> uint8_t socket_id; >> - uint16_t portid; >> + uint16_t portid, nb_crypto_qp, nb_ports = 0; >> uint64_t req_rx_offloads[RTE_MAX_ETHPORTS]; >> uint64_t req_tx_offloads[RTE_MAX_ETHPORTS]; >> struct eh_conf *eh_conf = NULL; >> size_t sess_sz; >> >> + nb_bufs_in_pool = 0; >> + >> /* init EAL */ >> ret = rte_eal_init(argc, argv); >> if (ret < 0) >> @@ -2665,6 +2683,26 @@ main(int32_t argc, char **argv) >> >> sess_sz = max_session_size(); >> >> + nb_crypto_qp = cryptodevs_init(); >> + >> + if (nb_bufs_in_pool == 0) { >> + RTE_ETH_FOREACH_DEV(portid) { >> + if ((enabled_port_mask & (1 << portid)) == 0) >> + continue; >> + nb_ports++; >> + nb_rxq += get_port_nb_rx_queues(portid); >> + } >> + >> + nb_txq = nb_lcores; >> + >> + nb_bufs_in_pool = RTE_MAX((nb_rxq*nb_rxd + >> + nb_ports*nb_lcores*MAX_PKT_BURST + >> + nb_ports*nb_txq*nb_txd + >> + nb_lcores*MEMPOOL_CACHE_SIZE + >> + nb_crypto_qp*CDEV_QUEUE_DESC), > > I think you forgot to take into account possible reassemble table: > @@ -2699,7 +2699,9 @@ main(int32_t argc, char **argv) > nb_ports*nb_lcores*MAX_PKT_BURST + > nb_ports*nb_txq*nb_txd + > nb_lcores*MEMPOOL_CACHE_SIZE + > - nb_crypto_qp*CDEV_QUEUE_DESC), > + nb_crypto_qp*CDEV_QUEUE_DESC + > + nb_lcores * frag_tbl_sz * > + FRAG_TBL_BUCKET_ENTRIES), > 8192U); [Lukasz] I will add it in V4. 
> > > Also it might be worth for better readability to put code for nb_bufs_in_pool calculation > in a separate function (and add spaces between '*' and its' operands). > Apart from that - whole series LGTM. > Konstantin [Lukasz] Thank you for reviewing the changes. I will resolve your comment in V4. > > >> + 8192U); >> + } >> + >> for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) { >> if (rte_lcore_is_enabled(lcore_id) == 0) >> continue; >> @@ -2678,11 +2716,12 @@ main(int32_t argc, char **argv) >> if (socket_ctx[socket_id].mbuf_pool) >> continue; >> >> - pool_init(&socket_ctx[socket_id], socket_id, NB_MBUF); >> + pool_init(&socket_ctx[socket_id], socket_id, nb_bufs_in_pool); >> session_pool_init(&socket_ctx[socket_id], socket_id, sess_sz); >> session_priv_pool_init(&socket_ctx[socket_id], socket_id, >> sess_sz); >> } >> + printf("Number of mbufs in packet pool %d\n", nb_bufs_in_pool); >> >> RTE_ETH_FOREACH_DEV(portid) { >> if ((enabled_port_mask & (1 << portid)) == 0) >> @@ -2694,8 +2733,6 @@ main(int32_t argc, char **argv) >> req_tx_offloads[portid]); >> } >> >> - cryptodevs_init(); >> - >> /* >> * Set the enabled port mask in helper config for use by helper >> * sub-system. This will be used while initializing devices using >> -- >> 2.7.4 > ^ permalink raw reply [flat|nested] 147+ messages in thread
* [dpdk-dev] [PATCH v4 00/15] add eventmode to ipsec-secgw 2020-02-04 13:58 ` [dpdk-dev] [PATCH v3 00/13] " Lukasz Bartosik ` (12 preceding siblings ...) 2020-02-04 13:58 ` [dpdk-dev] [PATCH v3 13/13] examples/ipsec-secgw: make number of buffers dynamic Lukasz Bartosik @ 2020-02-20 8:01 ` Lukasz Bartosik 2020-02-20 8:01 ` [dpdk-dev] [PATCH v4 01/15] examples/ipsec-secgw: add default rte flow for inline Rx Lukasz Bartosik ` (17 more replies) 13 siblings, 18 replies; 147+ messages in thread From: Lukasz Bartosik @ 2020-02-20 8:01 UTC (permalink / raw) To: Akhil Goyal, Radu Nicolau, Thomas Monjalon Cc: Jerin Jacob, Narayana Prasad, Ankur Dwivedi, Anoob Joseph, Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru, Konstantin Ananyev, dev This series introduces event-mode additions to ipsec-secgw. With this series, ipsec-secgw would be able to run in eventmode. The worker thread (executing loop) would be receiving events and would be submitting them back to the eventdev after the processing. This way, multicore scaling and h/w assisted scheduling is achieved by making use of the eventdev capabilities. Since the underlying event device would have varying capabilities, the worker thread could be drafted differently to maximize performance. This series introduces usage of multiple worker threads, among which the one to be used will be determined by the operating conditions and the underlying device capabilities. For example, if an event device - eth device pair has a Tx internal port, then the application can do tx_adapter_enqueue() instead of regular event_enqueue(). So a thread making an assumption that the device pair has an internal port will not be the right solution for another pair. The infrastructure added with these patches aims to help the application have multiple worker threads, thereby extracting maximum performance from every device without affecting existing paths/use cases. The eventmode configuration is predefined.
All packets reaching one eth port will hit one event queue. All event queues will be mapped to all event ports. So all cores will be able to receive traffic from all ports. When schedule_type is set as RTE_SCHED_TYPE_ORDERED/ATOMIC, the event device will ensure the ordering. Ordering would be lost when tried in PARALLEL. The following command line options are introduced: --transfer-mode: to choose between poll mode & event mode --event-schedule-type: to specify the scheduling type (RTE_SCHED_TYPE_ORDERED/ RTE_SCHED_TYPE_ATOMIC/ RTE_SCHED_TYPE_PARALLEL) Additionally the event mode introduces two modes of processing packets: Driver-mode: This mode will have bare minimum changes in the application to support ipsec. There wouldn't be any lookup etc. done in the application. And for the inline-protocol use case, the thread would resemble l2fwd as the ipsec processing would be done entirely in the h/w. This mode can be used to benchmark the raw performance of the h/w. All the application side steps (like lookup) can be redone based on the requirement of the end user. Hence the need for a mode which would report the raw performance. App-mode: This mode will have all the features currently implemented with ipsec-secgw (non librte_ipsec mode). All the lookups etc. would follow the existing methods and would report numbers that can be compared against regular ipsec-secgw benchmark numbers. The driver mode is selected with the existing --single-sa option (used also by poll mode). When the --single-sa option is used in conjunction with event mode then the index passed to --single-sa is ignored.
Example commands to execute ipsec-secgw in various modes on the OCTEON TX2 platform: #Inbound and outbound app mode ipsec-secgw -w 0002:02:00.0,ipsec_in_max_spi=128 -w 0002:03:00.0,ipsec_in_max_spi=128 -w 0002:0e:00.0 -w 0002:10:00.1 --log-level=8 -c 0x1 -- -P -p 0x3 -u 0x1 --config "(1,0,0),(0,0,0)" -f aes-gcm.cfg --transfer-mode event --event-schedule-type parallel #Inbound and outbound driver mode ipsec-secgw -w 0002:02:00.0,ipsec_in_max_spi=128 -w 0002:03:00.0,ipsec_in_max_spi=128 -w 0002:0e:00.0 -w 0002:10:00.1 --log-level=8 -c 0x1 -- -P -p 0x3 -u 0x1 --config "(1,0,0),(0,0,0)" -f aes-gcm.cfg --transfer-mode event --event-schedule-type parallel --single-sa 0 This series adds non-burst Tx internal port workers only. It provides infrastructure for non internal port workers, but does not define any. Also, only inline ipsec protocol mode is supported by the worker threads added. The following features are planned: 1. Add burst mode workers. 2. Add non internal port workers. 3. Verify support for Rx core (the support is added but there is no h/w available to verify it). 4. Add lookaside protocol support. The following are features that Marvell won't be attempting: 1. Inline crypto support. 2. Lookaside crypto support. For the features that Marvell won't be attempting, new workers can be introduced by the respective stakeholders. This series is tested on Marvell OCTEON TX2. This series is targeted for the 20.05 release. Changes in v4: * Update ipsec-secgw documentation to describe the new options as well as event mode support. * In event mode reserve a number of crypto queues equal to the number of eth ports in order to meet inline protocol offload requirements. * Add calculate_nb_mbufs() function to calculate the number of mbufs in a pool and include the fragments table size in the calculation. * Move structures ipsec_xf and ipsec_sad to ipsec.h and remove the static keyword from sa_out, nb_sa_out, sa_in and nb_sa_in in sa.c.
* Update process_ipsec_ev_inbound(), process_ipsec_ev_outbound(), check_sp() and prepare_out_sessions_tbl() functions as a result of changes introduced by SAD feature. * Remove setting sa->cdev_id_qp to 0 in create_inline_session as sa_ctx is created with rte_zmalloc. * Minor cleanup enhancements: - In eh_set_default_conf_eventdev() function in event_helper.c put definition of int local vars in one line, remove invalid comment, put "eventdev_config->ev_queue_mode = RTE_EVENT_QUEUE_CFG_ALL_TYPES" in one line instead of two. - Remove extern "C" from event_helper.h. - Put local vars in reverse xmas tree order in eh_dev_has_rx_internal_port() and eh_dev_has_tx_internal_port() functions in event_helper.c. - Put #include <rte_bitmap.h> in alphabetical order in ipsec-secgw.c. - Move "extern volatile bool force_quit" and "#include <stdbool.h>" to ipsec-secgw.h, remove #include <rte_hash.h>. - Remove unneeded includes in ipsec_worker.c. - Remove an obsolete todo from ipsec_worker.h. Changes in v3: * Move eh_conf_init() and eh_conf_uninit() functions to event_helper.c including minor rework. * Rename --schedule-type option to --event-schedule-type. * Replace macro UNPROTECTED_PORT with static inline function is_unprotected_port(). * Move definitions of global variables used by multiple modules to .c files and add externs in .h headers. * Add eh_check_conf() which validates ipsec-secgw configuration for event mode. * Add dynamic calculation of number of buffers in a pool based on number of cores, ports and crypto queues. * Fix segmentation fault in event mode driver worker which happens when there are no inline outbound sessions configured. * Remove change related to updating number of crypto queues in cryptodevs_init(). The update of crypto queues will be handled in a separate patch. * Fix compilation error on 32-bit platforms by using userdata instead of udata64 from rte_mbuf. Changes in v2: * Remove --process-dir option.
Instead use existing unprotected port mask option (-u) to decide whether a port handles inbound or outbound traffic. * Remove --process-mode option. Instead use existing --single-sa option to select between app and driver modes. * Add handling of PKT_RX_SEC_OFFLOAD_FAIL result in app worker thread. * Fix passing of req_rx_offload flags to create_default_ipsec_flow(). * Move destruction of flows to a location where eth ports are stopped and closed. * Print error and exit when event mode --schedule-type option is used in poll mode. * Reduce number of goto statements replacing them with loop constructs. * Remove sec_session_fixed table and replace it with a locally built table in driver worker thread. The table is indexed by port identifier and holds the first inline session pointer found for a given port. * Print error and exit when sessions other than inline are configured in event mode. * When the number of event queues is less than the number of eth ports, map all eth ports to one event queue. * Cleanup and minor improvements in code as suggested by Konstantin Ankur Dwivedi (1): examples/ipsec-secgw: add default rte flow for inline Rx Anoob Joseph (5): examples/ipsec-secgw: add framework for eventmode helper examples/ipsec-secgw: add eventdev port-lcore link examples/ipsec-secgw: add Rx adapter support examples/ipsec-secgw: add Tx adapter support examples/ipsec-secgw: add routines to display config Lukasz Bartosik (9): examples/ipsec-secgw: add routines to launch workers examples/ipsec-secgw: add support for internal ports examples/ipsec-secgw: add event helper config init/uninit examples/ipsec-secgw: add eventmode to ipsec-secgw examples/ipsec-secgw: add driver mode worker examples/ipsec-secgw: add app mode worker examples/ipsec-secgw: make number of buffers dynamic doc: add event mode support to ipsec-secgw examples/ipsec-secgw: reserve crypto queues in event mode doc/guides/sample_app_ug/ipsec_secgw.rst | 138 ++- examples/ipsec-secgw/Makefile | 2 + 
examples/ipsec-secgw/event_helper.c | 1812 ++++++++++++++++++++++++++++++ examples/ipsec-secgw/event_helper.h | 327 ++++++ examples/ipsec-secgw/ipsec-secgw.c | 463 ++++++-- examples/ipsec-secgw/ipsec-secgw.h | 88 ++ examples/ipsec-secgw/ipsec.c | 5 +- examples/ipsec-secgw/ipsec.h | 53 +- examples/ipsec-secgw/ipsec_worker.c | 638 +++++++++++ examples/ipsec-secgw/ipsec_worker.h | 35 + examples/ipsec-secgw/meson.build | 6 +- examples/ipsec-secgw/sa.c | 21 +- examples/ipsec-secgw/sad.h | 5 - 13 files changed, 3464 insertions(+), 129 deletions(-) create mode 100644 examples/ipsec-secgw/event_helper.c create mode 100644 examples/ipsec-secgw/event_helper.h create mode 100644 examples/ipsec-secgw/ipsec-secgw.h create mode 100644 examples/ipsec-secgw/ipsec_worker.c create mode 100644 examples/ipsec-secgw/ipsec_worker.h -- 2.7.4 ^ permalink raw reply [flat|nested] 147+ messages in thread
* [dpdk-dev] [PATCH v4 01/15] examples/ipsec-secgw: add default rte flow for inline Rx 2020-02-20 8:01 ` [dpdk-dev] [PATCH v4 00/15] add eventmode to ipsec-secgw Lukasz Bartosik @ 2020-02-20 8:01 ` Lukasz Bartosik 2020-02-20 8:01 ` [dpdk-dev] [PATCH v4 02/15] examples/ipsec-secgw: add framework for eventmode helper Lukasz Bartosik ` (16 subsequent siblings) 17 siblings, 0 replies; 147+ messages in thread From: Lukasz Bartosik @ 2020-02-20 8:01 UTC (permalink / raw) To: Akhil Goyal, Radu Nicolau, Thomas Monjalon Cc: Ankur Dwivedi, Jerin Jacob, Narayana Prasad, Anoob Joseph, Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru, Konstantin Ananyev, dev From: Ankur Dwivedi <adwivedi@marvell.com> The default flow created would enable security processing on all ESP packets. If the default flow is created, SA based rte_flow creation would be skipped. Signed-off-by: Ankur Dwivedi <adwivedi@marvell.com> Signed-off-by: Anoob Joseph <anoobj@marvell.com> --- examples/ipsec-secgw/ipsec-secgw.c | 61 +++++++++++++++++++++++++++++++++----- examples/ipsec-secgw/ipsec.c | 5 +++- examples/ipsec-secgw/ipsec.h | 6 ++++ 3 files changed, 63 insertions(+), 9 deletions(-) diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c index 4799bc9..e1ee7c3 100644 --- a/examples/ipsec-secgw/ipsec-secgw.c +++ b/examples/ipsec-secgw/ipsec-secgw.c @@ -129,6 +129,8 @@ struct ethaddr_info ethaddr_tbl[RTE_MAX_ETHPORTS] = { { 0, ETHADDR(0x00, 0x16, 0x3e, 0x49, 0x9e, 0xdd) } }; +struct flow_info flow_info_tbl[RTE_MAX_ETHPORTS]; + #define CMD_LINE_OPT_CONFIG "config" #define CMD_LINE_OPT_SINGLE_SA "single-sa" #define CMD_LINE_OPT_CRYPTODEV_MASK "cryptodev_mask" @@ -2432,6 +2434,48 @@ reassemble_init(void) return rc; } +static void +create_default_ipsec_flow(uint16_t port_id, uint64_t rx_offloads) +{ + struct rte_flow_action action[2]; + struct rte_flow_item pattern[2]; + struct rte_flow_attr attr = {0}; + struct rte_flow_error err; + struct rte_flow *flow; + int ret; + + if 
(!(rx_offloads & DEV_RX_OFFLOAD_SECURITY)) + return; + + /* Add the default rte_flow to enable SECURITY for all ESP packets */ + + pattern[0].type = RTE_FLOW_ITEM_TYPE_ESP; + pattern[0].spec = NULL; + pattern[0].mask = NULL; + pattern[0].last = NULL; + pattern[1].type = RTE_FLOW_ITEM_TYPE_END; + + action[0].type = RTE_FLOW_ACTION_TYPE_SECURITY; + action[0].conf = NULL; + action[1].type = RTE_FLOW_ACTION_TYPE_END; + action[1].conf = NULL; + + attr.ingress = 1; + + ret = rte_flow_validate(port_id, &attr, pattern, action, &err); + if (ret) + return; + + flow = rte_flow_create(port_id, &attr, pattern, action, &err); + if (flow == NULL) + return; + + flow_info_tbl[port_id].rx_def_flow = flow; + RTE_LOG(INFO, IPSEC, + "Created default flow enabling SECURITY for all ESP traffic on port %d\n", + port_id); +} + int32_t main(int32_t argc, char **argv) { @@ -2440,7 +2484,8 @@ main(int32_t argc, char **argv) uint32_t i; uint8_t socket_id; uint16_t portid; - uint64_t req_rx_offloads, req_tx_offloads; + uint64_t req_rx_offloads[RTE_MAX_ETHPORTS]; + uint64_t req_tx_offloads[RTE_MAX_ETHPORTS]; size_t sess_sz; /* init EAL */ @@ -2502,8 +2547,10 @@ main(int32_t argc, char **argv) if ((enabled_port_mask & (1 << portid)) == 0) continue; - sa_check_offloads(portid, &req_rx_offloads, &req_tx_offloads); - port_init(portid, req_rx_offloads, req_tx_offloads); + sa_check_offloads(portid, &req_rx_offloads[portid], + &req_tx_offloads[portid]); + port_init(portid, req_rx_offloads[portid], + req_tx_offloads[portid]); } cryptodevs_init(); @@ -2513,11 +2560,9 @@ main(int32_t argc, char **argv) if ((enabled_port_mask & (1 << portid)) == 0) continue; - /* - * Start device - * note: device must be started before a flow rule - * can be installed. 
- */ + /* Create flow before starting the device */ + create_default_ipsec_flow(portid, req_rx_offloads[portid]); + ret = rte_eth_dev_start(portid); if (ret < 0) rte_exit(EXIT_FAILURE, "rte_eth_dev_start: " diff --git a/examples/ipsec-secgw/ipsec.c b/examples/ipsec-secgw/ipsec.c index 6e81207..d406571 100644 --- a/examples/ipsec-secgw/ipsec.c +++ b/examples/ipsec-secgw/ipsec.c @@ -275,6 +275,10 @@ create_inline_session(struct socket_ctx *skt_ctx, struct ipsec_sa *sa, unsigned int i; unsigned int j; + /* Don't create flow if default flow is created */ + if (flow_info_tbl[sa->portid].rx_def_flow) + return 0; + ret = rte_eth_dev_info_get(sa->portid, &dev_info); if (ret != 0) { RTE_LOG(ERR, IPSEC, @@ -410,7 +414,6 @@ create_inline_session(struct socket_ctx *skt_ctx, struct ipsec_sa *sa, ips->security.ol_flags = sec_cap->ol_flags; ips->security.ctx = sec_ctx; } - sa->cdev_id_qp = 0; return 0; } diff --git a/examples/ipsec-secgw/ipsec.h b/examples/ipsec-secgw/ipsec.h index 4f2fd61..8f5d382 100644 --- a/examples/ipsec-secgw/ipsec.h +++ b/examples/ipsec-secgw/ipsec.h @@ -87,6 +87,12 @@ struct app_sa_prm { extern struct app_sa_prm app_sa_prm; +struct flow_info { + struct rte_flow *rx_def_flow; +}; + +extern struct flow_info flow_info_tbl[RTE_MAX_ETHPORTS]; + enum { IPSEC_SESSION_PRIMARY = 0, IPSEC_SESSION_FALLBACK = 1, -- 2.7.4 ^ permalink raw reply [flat|nested] 147+ messages in thread
* [dpdk-dev] [PATCH v4 02/15] examples/ipsec-secgw: add framework for eventmode helper 2020-02-20 8:01 ` [dpdk-dev] [PATCH v4 00/15] add eventmode to ipsec-secgw Lukasz Bartosik 2020-02-20 8:01 ` [dpdk-dev] [PATCH v4 01/15] examples/ipsec-secgw: add default rte flow for inline Rx Lukasz Bartosik @ 2020-02-20 8:01 ` Lukasz Bartosik 2020-02-20 8:01 ` [dpdk-dev] [PATCH v4 03/15] examples/ipsec-secgw: add eventdev port-lcore link Lukasz Bartosik ` (15 subsequent siblings) 17 siblings, 0 replies; 147+ messages in thread From: Lukasz Bartosik @ 2020-02-20 8:01 UTC (permalink / raw) To: Akhil Goyal, Radu Nicolau, Thomas Monjalon Cc: Anoob Joseph, Jerin Jacob, Narayana Prasad, Ankur Dwivedi, Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru, Konstantin Ananyev, dev From: Anoob Joseph <anoobj@marvell.com> Add framework for eventmode helper. Event mode involves initialization of multiple devices, such as eventdev and ethdev. Add routines to initialize and uninitialize the event device. Generate a default config for the event device if one is not specified in the configuration. Currently the event helper supports a single event device only. 
Signed-off-by: Anoob Joseph <anoobj@marvell.com> Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> --- examples/ipsec-secgw/Makefile | 1 + examples/ipsec-secgw/event_helper.c | 320 ++++++++++++++++++++++++++++++++++++ examples/ipsec-secgw/event_helper.h | 107 ++++++++++++ examples/ipsec-secgw/meson.build | 4 +- 4 files changed, 430 insertions(+), 2 deletions(-) create mode 100644 examples/ipsec-secgw/event_helper.c create mode 100644 examples/ipsec-secgw/event_helper.h diff --git a/examples/ipsec-secgw/Makefile b/examples/ipsec-secgw/Makefile index ad83d79..66d05d4 100644 --- a/examples/ipsec-secgw/Makefile +++ b/examples/ipsec-secgw/Makefile @@ -16,6 +16,7 @@ SRCS-y += sad.c SRCS-y += rt.c SRCS-y += ipsec_process.c SRCS-y += ipsec-secgw.c +SRCS-y += event_helper.c CFLAGS += -gdwarf-2 diff --git a/examples/ipsec-secgw/event_helper.c b/examples/ipsec-secgw/event_helper.c new file mode 100644 index 0000000..0c38474 --- /dev/null +++ b/examples/ipsec-secgw/event_helper.c @@ -0,0 +1,320 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C) 2020 Marvell International Ltd. + */ +#include <rte_ethdev.h> +#include <rte_eventdev.h> + +#include "event_helper.h" + +static int +eh_set_default_conf_eventdev(struct eventmode_conf *em_conf) +{ + int lcore_count, nb_eventdev, nb_eth_dev, ret; + struct eventdev_params *eventdev_config; + struct rte_event_dev_info dev_info; + + /* Get the number of event devices */ + nb_eventdev = rte_event_dev_count(); + if (nb_eventdev == 0) { + EH_LOG_ERR("No event devices detected"); + return -EINVAL; + } + + if (nb_eventdev != 1) { + EH_LOG_ERR("Event mode does not support multiple event devices. 
" + "Please provide only one event device."); + return -EINVAL; + } + + /* Get the number of eth devs */ + nb_eth_dev = rte_eth_dev_count_avail(); + if (nb_eth_dev == 0) { + EH_LOG_ERR("No eth devices detected"); + return -EINVAL; + } + + /* Get the number of lcores */ + lcore_count = rte_lcore_count(); + + /* Read event device info */ + ret = rte_event_dev_info_get(0, &dev_info); + if (ret < 0) { + EH_LOG_ERR("Failed to read event device info %d", ret); + return ret; + } + + /* Check if enough ports are available */ + if (dev_info.max_event_ports < 2) { + EH_LOG_ERR("Not enough event ports available"); + return -EINVAL; + } + + /* Get the first event dev conf */ + eventdev_config = &(em_conf->eventdev_config[0]); + + /* Save number of queues & ports available */ + eventdev_config->eventdev_id = 0; + eventdev_config->nb_eventqueue = dev_info.max_event_queues; + eventdev_config->nb_eventport = dev_info.max_event_ports; + eventdev_config->ev_queue_mode = RTE_EVENT_QUEUE_CFG_ALL_TYPES; + + /* Check if there are more queues than required */ + if (eventdev_config->nb_eventqueue > nb_eth_dev + 1) { + /* One queue is reserved for Tx */ + eventdev_config->nb_eventqueue = nb_eth_dev + 1; + } + + /* Check if there are more ports than required */ + if (eventdev_config->nb_eventport > lcore_count) { + /* One port per lcore is enough */ + eventdev_config->nb_eventport = lcore_count; + } + + /* Update the number of event devices */ + em_conf->nb_eventdev++; + + return 0; +} + +static int +eh_validate_conf(struct eventmode_conf *em_conf) +{ + int ret; + + /* + * Check if event devs are specified. 
Else probe the event devices + * and initialize the config with all ports & queues available + */ + if (em_conf->nb_eventdev == 0) { + ret = eh_set_default_conf_eventdev(em_conf); + if (ret != 0) + return ret; + } + + return 0; +} + +static int +eh_initialize_eventdev(struct eventmode_conf *em_conf) +{ + struct rte_event_queue_conf eventq_conf = {0}; + struct rte_event_dev_info evdev_default_conf; + struct rte_event_dev_config eventdev_conf; + struct eventdev_params *eventdev_config; + int nb_eventdev = em_conf->nb_eventdev; + uint8_t eventdev_id; + int nb_eventqueue; + uint8_t i, j; + int ret; + + for (i = 0; i < nb_eventdev; i++) { + + /* Get eventdev config */ + eventdev_config = &(em_conf->eventdev_config[i]); + + /* Get event dev ID */ + eventdev_id = eventdev_config->eventdev_id; + + /* Get the number of queues */ + nb_eventqueue = eventdev_config->nb_eventqueue; + + /* Reset the default conf */ + memset(&evdev_default_conf, 0, + sizeof(struct rte_event_dev_info)); + + /* Get default conf of eventdev */ + ret = rte_event_dev_info_get(eventdev_id, &evdev_default_conf); + if (ret < 0) { + EH_LOG_ERR( + "Error in getting event device info[devID:%d]", + eventdev_id); + return ret; + } + + memset(&eventdev_conf, 0, sizeof(struct rte_event_dev_config)); + eventdev_conf.nb_events_limit = + evdev_default_conf.max_num_events; + eventdev_conf.nb_event_queues = nb_eventqueue; + eventdev_conf.nb_event_ports = + eventdev_config->nb_eventport; + eventdev_conf.nb_event_queue_flows = + evdev_default_conf.max_event_queue_flows; + eventdev_conf.nb_event_port_dequeue_depth = + evdev_default_conf.max_event_port_dequeue_depth; + eventdev_conf.nb_event_port_enqueue_depth = + evdev_default_conf.max_event_port_enqueue_depth; + + /* Configure event device */ + ret = rte_event_dev_configure(eventdev_id, &eventdev_conf); + if (ret < 0) { + EH_LOG_ERR("Error in configuring event device"); + return ret; + } + + /* Configure event queues */ + for (j = 0; j < nb_eventqueue; j++) { + + 
memset(&eventq_conf, 0, + sizeof(struct rte_event_queue_conf)); + + /* Per event dev queues can be ATQ or SINGLE LINK */ + eventq_conf.event_queue_cfg = + eventdev_config->ev_queue_mode; + /* + * All queues need to be set with sched_type as + * schedule type for the application stage. One queue + * would be reserved for the final eth tx stage. This + * will be an atomic queue. + */ + if (j == nb_eventqueue-1) { + eventq_conf.schedule_type = + RTE_SCHED_TYPE_ATOMIC; + } else { + eventq_conf.schedule_type = + em_conf->ext_params.sched_type; + } + + /* Set max atomic flows to 1024 */ + eventq_conf.nb_atomic_flows = 1024; + eventq_conf.nb_atomic_order_sequences = 1024; + + /* Setup the queue */ + ret = rte_event_queue_setup(eventdev_id, j, + &eventq_conf); + if (ret < 0) { + EH_LOG_ERR("Failed to setup event queue %d", + ret); + return ret; + } + } + + /* Configure event ports */ + for (j = 0; j < eventdev_config->nb_eventport; j++) { + ret = rte_event_port_setup(eventdev_id, j, NULL); + if (ret < 0) { + EH_LOG_ERR("Failed to setup event port %d", + ret); + return ret; + } + } + } + + /* Start event devices */ + for (i = 0; i < nb_eventdev; i++) { + + /* Get eventdev config */ + eventdev_config = &(em_conf->eventdev_config[i]); + + ret = rte_event_dev_start(eventdev_config->eventdev_id); + if (ret < 0) { + EH_LOG_ERR("Failed to start event device %d, %d", + i, ret); + return ret; + } + } + return 0; +} + +int32_t +eh_devs_init(struct eh_conf *conf) +{ + struct eventmode_conf *em_conf; + uint16_t port_id; + int ret; + + if (conf == NULL) { + EH_LOG_ERR("Invalid event helper configuration"); + return -EINVAL; + } + + if (conf->mode != EH_PKT_TRANSFER_MODE_EVENT) + return 0; + + if (conf->mode_params == NULL) { + EH_LOG_ERR("Invalid event mode parameters"); + return -EINVAL; + } + + /* Get eventmode conf */ + em_conf = conf->mode_params; + + /* Validate the requested config */ + ret = eh_validate_conf(em_conf); + if (ret < 0) { + EH_LOG_ERR("Failed to validate the 
requested config %d", ret); + return ret; + } + + /* Stop eth devices before setting up adapter */ + RTE_ETH_FOREACH_DEV(port_id) { + + /* Use only the ports enabled */ + if ((conf->eth_portmask & (1 << port_id)) == 0) + continue; + + rte_eth_dev_stop(port_id); + } + + /* Setup eventdev */ + ret = eh_initialize_eventdev(em_conf); + if (ret < 0) { + EH_LOG_ERR("Failed to initialize event dev %d", ret); + return ret; + } + + /* Start eth devices after setting up adapter */ + RTE_ETH_FOREACH_DEV(port_id) { + + /* Use only the ports enabled */ + if ((conf->eth_portmask & (1 << port_id)) == 0) + continue; + + ret = rte_eth_dev_start(port_id); + if (ret < 0) { + EH_LOG_ERR("Failed to start eth dev %d, %d", + port_id, ret); + return ret; + } + } + + return 0; +} + +int32_t +eh_devs_uninit(struct eh_conf *conf) +{ + struct eventmode_conf *em_conf; + uint16_t id; + int ret, i; + + if (conf == NULL) { + EH_LOG_ERR("Invalid event helper configuration"); + return -EINVAL; + } + + if (conf->mode != EH_PKT_TRANSFER_MODE_EVENT) + return 0; + + if (conf->mode_params == NULL) { + EH_LOG_ERR("Invalid event mode parameters"); + return -EINVAL; + } + + /* Get eventmode conf */ + em_conf = conf->mode_params; + + /* Stop and release event devices */ + for (i = 0; i < em_conf->nb_eventdev; i++) { + + id = em_conf->eventdev_config[i].eventdev_id; + rte_event_dev_stop(id); + + ret = rte_event_dev_close(id); + if (ret < 0) { + EH_LOG_ERR("Failed to close event dev %d, %d", id, ret); + return ret; + } + } + + return 0; +} diff --git a/examples/ipsec-secgw/event_helper.h b/examples/ipsec-secgw/event_helper.h new file mode 100644 index 0000000..040f977 --- /dev/null +++ b/examples/ipsec-secgw/event_helper.h @@ -0,0 +1,107 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C) 2020 Marvell International Ltd. + */ +#ifndef _EVENT_HELPER_H_ +#define _EVENT_HELPER_H_ + +#include <rte_log.h> + +#define RTE_LOGTYPE_EH RTE_LOGTYPE_USER4 + +#define EH_LOG_ERR(...) 
\ + RTE_LOG(ERR, EH, \ + RTE_FMT("%s() line %u: " RTE_FMT_HEAD(__VA_ARGS__ ,) "\n", \ + __func__, __LINE__, RTE_FMT_TAIL(__VA_ARGS__ ,))) + +/* Max event devices supported */ +#define EVENT_MODE_MAX_EVENT_DEVS RTE_EVENT_MAX_DEVS + +/** + * Packet transfer mode of the application + */ +enum eh_pkt_transfer_mode { + EH_PKT_TRANSFER_MODE_POLL = 0, + EH_PKT_TRANSFER_MODE_EVENT, +}; + +/* Event dev params */ +struct eventdev_params { + uint8_t eventdev_id; + uint8_t nb_eventqueue; + uint8_t nb_eventport; + uint8_t ev_queue_mode; +}; + +/* Eventmode conf data */ +struct eventmode_conf { + int nb_eventdev; + /**< No of event devs */ + struct eventdev_params eventdev_config[EVENT_MODE_MAX_EVENT_DEVS]; + /**< Per event dev conf */ + union { + RTE_STD_C11 + struct { + uint64_t sched_type : 2; + /**< Schedule type */ + }; + uint64_t u64; + } ext_params; + /**< 64 bit field to specify extended params */ +}; + +/** + * Event helper configuration + */ +struct eh_conf { + enum eh_pkt_transfer_mode mode; + /**< Packet transfer mode of the application */ + uint32_t eth_portmask; + /**< + * Mask of the eth ports to be used. This portmask would be + * checked while initializing devices using helper routines. + */ + void *mode_params; + /**< Mode specific parameters */ +}; + +/** + * Initialize event mode devices + * + * Application can call this function to get the event devices, eth devices + * and eth rx & tx adapters initialized according to the default config or + * config populated using the command line args. + * + * Application is expected to initialize the eth devices and then the event + * mode helper subsystem will stop & start eth devices according to its + * requirement. Call to this function should be done after the eth devices + * are successfully initialized. + * + * @param conf + * Event helper configuration + * @return + * - 0 on success. + * - (<0) on failure. 
+ */ +int32_t +eh_devs_init(struct eh_conf *conf); + +/** + * Release event mode devices + * + * Application can call this function to release event devices, + * eth rx & tx adapters according to the config. + * + * Call to this function should be done before application stops + * and closes eth devices. This function will not close and stop + * eth devices. + * + * @param conf + * Event helper configuration + * @return + * - 0 on success. + * - (<0) on failure. + */ +int32_t +eh_devs_uninit(struct eh_conf *conf); + +#endif /* _EVENT_HELPER_H_ */ diff --git a/examples/ipsec-secgw/meson.build b/examples/ipsec-secgw/meson.build index 6bd5b78..2415d47 100644 --- a/examples/ipsec-secgw/meson.build +++ b/examples/ipsec-secgw/meson.build @@ -6,9 +6,9 @@ # To build this example as a standalone application with an already-installed # DPDK instance, use 'make' -deps += ['security', 'lpm', 'acl', 'hash', 'ip_frag', 'ipsec'] +deps += ['security', 'lpm', 'acl', 'hash', 'ip_frag', 'ipsec', 'eventdev'] allow_experimental_apis = true sources = files( - 'esp.c', 'ipsec.c', 'ipsec_process.c', 'ipsec-secgw.c', + 'esp.c', 'event_helper.c', 'ipsec.c', 'ipsec_process.c', 'ipsec-secgw.c', 'parser.c', 'rt.c', 'sa.c', 'sad.c', 'sp4.c', 'sp6.c' ) -- 2.7.4 ^ permalink raw reply [flat|nested] 147+ messages in thread
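The sizing decisions in eh_set_default_conf_eventdev() above reduce to simple clamping arithmetic: one event queue per eth port plus one reserved for Tx, and one event port per lcore, each clamped to what the device reports. A standalone sketch of that logic, with plain int parameters standing in for the values DPDK would return (e.g. from rte_event_dev_info_get()):

```c
/* Default eventdev sizing in the spirit of eh_set_default_conf_eventdev().
 * max_event_queues/max_event_ports model the device limits; nb_eth_dev
 * and lcore_count model the runtime environment. */
static void
default_eventdev_sizing(int max_event_queues, int max_event_ports,
			int nb_eth_dev, int lcore_count,
			int *nb_eventqueue, int *nb_eventport)
{
	*nb_eventqueue = max_event_queues;
	if (*nb_eventqueue > nb_eth_dev + 1)
		*nb_eventqueue = nb_eth_dev + 1; /* one queue reserved for Tx */

	*nb_eventport = max_event_ports;
	if (*nb_eventport > lcore_count)
		*nb_eventport = lcore_count; /* one port per lcore is enough */
}
```

For example, with 2 eth ports, 4 lcores and a device advertising 64 queues and 32 ports, this yields 3 event queues (2 Rx stages + 1 Tx) and 4 event ports.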
* [dpdk-dev] [PATCH v4 03/15] examples/ipsec-secgw: add eventdev port-lcore link 2020-02-20 8:01 ` [dpdk-dev] [PATCH v4 00/15] add eventmode to ipsec-secgw Lukasz Bartosik 2020-02-20 8:01 ` [dpdk-dev] [PATCH v4 01/15] examples/ipsec-secgw: add default rte flow for inline Rx Lukasz Bartosik 2020-02-20 8:01 ` [dpdk-dev] [PATCH v4 02/15] examples/ipsec-secgw: add framework for eventmode helper Lukasz Bartosik @ 2020-02-20 8:01 ` Lukasz Bartosik 2020-02-20 8:01 ` [dpdk-dev] [PATCH v4 04/15] examples/ipsec-secgw: add Rx adapter support Lukasz Bartosik ` (14 subsequent siblings) 17 siblings, 0 replies; 147+ messages in thread From: Lukasz Bartosik @ 2020-02-20 8:01 UTC (permalink / raw) To: Akhil Goyal, Radu Nicolau, Thomas Monjalon Cc: Anoob Joseph, Jerin Jacob, Narayana Prasad, Ankur Dwivedi, Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru, Konstantin Ananyev, dev From: Anoob Joseph <anoobj@marvell.com> Add event device port-lcore link and specify which event queues should be connected to the event port. Generate a default config for event port-lcore links if it is not specified in the configuration. This routine will check the number of available ports and then create links according to the number of cores available. This patch also adds a new entry in the eventmode conf to denote that all queues are to be linked with every port. This enables one core to receive packets from all ethernet ports. Signed-off-by: Anoob Joseph <anoobj@marvell.com> Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> --- examples/ipsec-secgw/event_helper.c | 126 ++++++++++++++++++++++++++++++++++++ examples/ipsec-secgw/event_helper.h | 33 ++++++++++ 2 files changed, 159 insertions(+) diff --git a/examples/ipsec-secgw/event_helper.c b/examples/ipsec-secgw/event_helper.c index 0c38474..c90249f 100644 --- a/examples/ipsec-secgw/event_helper.c +++ b/examples/ipsec-secgw/event_helper.c @@ -1,11 +1,33 @@ /* SPDX-License-Identifier: BSD-3-Clause * Copyright (C) 2020 Marvell International Ltd. 
*/ +#include <rte_bitmap.h> #include <rte_ethdev.h> #include <rte_eventdev.h> +#include <rte_malloc.h> #include "event_helper.h" +static inline unsigned int +eh_get_next_active_core(struct eventmode_conf *em_conf, unsigned int prev_core) +{ + unsigned int next_core; + + /* Get next active core skipping cores reserved as eth cores */ + do { + /* Get the next core */ + next_core = rte_get_next_lcore(prev_core, 0, 0); + + /* Check if we have reached max lcores */ + if (next_core == RTE_MAX_LCORE) + return next_core; + + prev_core = next_core; + } while (rte_bitmap_get(em_conf->eth_core_mask, next_core)); + + return next_core; +} + static int eh_set_default_conf_eventdev(struct eventmode_conf *em_conf) { @@ -77,6 +99,71 @@ eh_set_default_conf_eventdev(struct eventmode_conf *em_conf) } static int +eh_set_default_conf_link(struct eventmode_conf *em_conf) +{ + struct eventdev_params *eventdev_config; + struct eh_event_link_info *link; + unsigned int lcore_id = -1; + int i, link_index; + + /* + * Create a 1:1 mapping from event ports to cores. If the number + * of event ports is lesser than the cores, some cores won't + * execute worker. If there are more event ports, then some ports + * won't be used. + * + */ + + /* + * The event queue-port mapping is done according to the link. Since + * we are falling back to the default link config, enabling + * "all_ev_queue_to_ev_port" mode flag. This will map all queues + * to the port. 
+ */ + em_conf->ext_params.all_ev_queue_to_ev_port = 1; + + /* Get first event dev conf */ + eventdev_config = &(em_conf->eventdev_config[0]); + + /* Loop through the ports */ + for (i = 0; i < eventdev_config->nb_eventport; i++) { + + /* Get next active core id */ + lcore_id = eh_get_next_active_core(em_conf, + lcore_id); + + if (lcore_id == RTE_MAX_LCORE) { + /* Reached max cores */ + return 0; + } + + /* Save the current combination as one link */ + + /* Get the index */ + link_index = em_conf->nb_link; + + /* Get the corresponding link */ + link = &(em_conf->link[link_index]); + + /* Save link */ + link->eventdev_id = eventdev_config->eventdev_id; + link->event_port_id = i; + link->lcore_id = lcore_id; + + /* + * Don't set eventq_id as by default all queues + * need to be mapped to the port, which is controlled + * by the operating mode. + */ + + /* Update number of links */ + em_conf->nb_link++; + } + + return 0; +} + +static int eh_validate_conf(struct eventmode_conf *em_conf) { int ret; @@ -91,6 +178,16 @@ eh_validate_conf(struct eventmode_conf *em_conf) return ret; } + /* + * Check if links are specified. Else generate a default config for + * the event ports used. 
+ */ + if (em_conf->nb_link == 0) { + ret = eh_set_default_conf_link(em_conf); + if (ret != 0) + return ret; + } + return 0; } @@ -102,6 +199,8 @@ eh_initialize_eventdev(struct eventmode_conf *em_conf) struct rte_event_dev_config eventdev_conf; struct eventdev_params *eventdev_config; int nb_eventdev = em_conf->nb_eventdev; + struct eh_event_link_info *link; + uint8_t *queue = NULL; uint8_t eventdev_id; int nb_eventqueue; uint8_t i, j; @@ -199,6 +298,33 @@ eh_initialize_eventdev(struct eventmode_conf *em_conf) } } + /* Make event queue - event port link */ + for (j = 0; j < em_conf->nb_link; j++) { + + /* Get link info */ + link = &(em_conf->link[j]); + + /* Get event dev ID */ + eventdev_id = link->eventdev_id; + + /* + * If "all_ev_queue_to_ev_port" params flag is selected, all + * queues need to be mapped to the port. + */ + if (em_conf->ext_params.all_ev_queue_to_ev_port) + queue = NULL; + else + queue = &(link->eventq_id); + + /* Link queue to port */ + ret = rte_event_port_link(eventdev_id, link->event_port_id, + queue, NULL, 1); + if (ret < 0) { + EH_LOG_ERR("Failed to link event port %d", ret); + return ret; + } + } + /* Start event devices */ for (i = 0; i < nb_eventdev; i++) { diff --git a/examples/ipsec-secgw/event_helper.h b/examples/ipsec-secgw/event_helper.h index 040f977..c8afc84 100644 --- a/examples/ipsec-secgw/event_helper.h +++ b/examples/ipsec-secgw/event_helper.h @@ -16,6 +16,13 @@ /* Max event devices supported */ #define EVENT_MODE_MAX_EVENT_DEVS RTE_EVENT_MAX_DEVS +/* Max event queues supported per event device */ +#define EVENT_MODE_MAX_EVENT_QUEUES_PER_DEV RTE_EVENT_MAX_QUEUES_PER_DEV + +/* Max event-lcore links */ +#define EVENT_MODE_MAX_LCORE_LINKS \ + (EVENT_MODE_MAX_EVENT_DEVS * EVENT_MODE_MAX_EVENT_QUEUES_PER_DEV) + /** * Packet transfer mode of the application */ @@ -32,17 +39,43 @@ struct eventdev_params { uint8_t ev_queue_mode; }; +/** + * Event-lcore link configuration + */ +struct eh_event_link_info { + uint8_t eventdev_id; + 
/**< Event device ID */ + uint8_t event_port_id; + /**< Event port ID */ + uint8_t eventq_id; + /**< Event queue to be linked to the port */ + uint8_t lcore_id; + /**< Lcore to be polling on this port */ +}; + /* Eventmode conf data */ struct eventmode_conf { int nb_eventdev; /**< No of event devs */ struct eventdev_params eventdev_config[EVENT_MODE_MAX_EVENT_DEVS]; /**< Per event dev conf */ + uint8_t nb_link; + /**< No of links */ + struct eh_event_link_info + link[EVENT_MODE_MAX_LCORE_LINKS]; + /**< Per link conf */ + struct rte_bitmap *eth_core_mask; + /**< Core mask of cores to be used for software Rx and Tx */ union { RTE_STD_C11 struct { uint64_t sched_type : 2; /**< Schedule type */ + uint64_t all_ev_queue_to_ev_port : 1; + /**< + * When enabled, all event queues need to be mapped to + * each event port + */ }; uint64_t u64; } ext_params; -- 2.7.4 ^ permalink raw reply [flat|nested] 147+ messages in thread
* [dpdk-dev] [PATCH v4 04/15] examples/ipsec-secgw: add Rx adapter support 2020-02-20 8:01 ` [dpdk-dev] [PATCH v4 00/15] add eventmode to ipsec-secgw Lukasz Bartosik ` (2 preceding siblings ...) 2020-02-20 8:01 ` [dpdk-dev] [PATCH v4 03/15] examples/ipsec-secgw: add eventdev port-lcore link Lukasz Bartosik @ 2020-02-20 8:01 ` Lukasz Bartosik 2020-02-20 8:01 ` [dpdk-dev] [PATCH v4 05/15] examples/ipsec-secgw: add Tx " Lukasz Bartosik ` (13 subsequent siblings) 17 siblings, 0 replies; 147+ messages in thread From: Lukasz Bartosik @ 2020-02-20 8:01 UTC (permalink / raw) To: Akhil Goyal, Radu Nicolau, Thomas Monjalon Cc: Anoob Joseph, Jerin Jacob, Narayana Prasad, Ankur Dwivedi, Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru, Konstantin Ananyev, dev From: Anoob Joseph <anoobj@marvell.com> Add Rx adapter support. The event helper init routine will initialize the Rx adapter according to the configuration. If Rx adapter config is not present it will generate a default config. If there are enough event queues available it will map eth ports and event queues 1:1 (one eth port will be connected to one event queue). Otherwise it will map all eth ports to one event queue. 
Signed-off-by: Anoob Joseph <anoobj@marvell.com> Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> --- examples/ipsec-secgw/event_helper.c | 273 +++++++++++++++++++++++++++++++++++- examples/ipsec-secgw/event_helper.h | 29 ++++ 2 files changed, 301 insertions(+), 1 deletion(-) diff --git a/examples/ipsec-secgw/event_helper.c b/examples/ipsec-secgw/event_helper.c index c90249f..2653e86 100644 --- a/examples/ipsec-secgw/event_helper.c +++ b/examples/ipsec-secgw/event_helper.c @@ -4,10 +4,58 @@ #include <rte_bitmap.h> #include <rte_ethdev.h> #include <rte_eventdev.h> +#include <rte_event_eth_rx_adapter.h> #include <rte_malloc.h> +#include <stdbool.h> #include "event_helper.h" +static int +eh_get_enabled_cores(struct rte_bitmap *eth_core_mask) +{ + int i, count = 0; + + RTE_LCORE_FOREACH(i) { + /* Check if this core is enabled in core mask*/ + if (rte_bitmap_get(eth_core_mask, i)) { + /* Found enabled core */ + count++; + } + } + return count; +} + +static inline unsigned int +eh_get_next_eth_core(struct eventmode_conf *em_conf) +{ + static unsigned int prev_core = -1; + unsigned int next_core; + + /* + * Make sure we have at least one eth core running, else the following + * logic would lead to an infinite loop. 
+ */ + if (eh_get_enabled_cores(em_conf->eth_core_mask) == 0) { + EH_LOG_ERR("No enabled eth core found"); + return RTE_MAX_LCORE; + } + + /* Only some cores are marked as eth cores, skip others */ + do { + /* Get the next core */ + next_core = rte_get_next_lcore(prev_core, 0, 1); + + /* Check if we have reached max lcores */ + if (next_core == RTE_MAX_LCORE) + return next_core; + + /* Update prev_core */ + prev_core = next_core; + } while (!(rte_bitmap_get(em_conf->eth_core_mask, next_core))); + + return next_core; +} + static inline unsigned int eh_get_next_active_core(struct eventmode_conf *em_conf, unsigned int prev_core) { @@ -164,6 +212,82 @@ eh_set_default_conf_link(struct eventmode_conf *em_conf) } static int +eh_set_default_conf_rx_adapter(struct eventmode_conf *em_conf) +{ + struct rx_adapter_connection_info *conn; + struct eventdev_params *eventdev_config; + struct rx_adapter_conf *adapter; + bool single_ev_queue = false; + int eventdev_id; + int nb_eth_dev; + int adapter_id; + int conn_id; + int i; + + /* Create one adapter with eth queues mapped to event queue(s) */ + + if (em_conf->nb_eventdev == 0) { + EH_LOG_ERR("No event devs registered"); + return -EINVAL; + } + + /* Get the number of eth devs */ + nb_eth_dev = rte_eth_dev_count_avail(); + + /* Use the first event dev */ + eventdev_config = &(em_conf->eventdev_config[0]); + + /* Get eventdev ID */ + eventdev_id = eventdev_config->eventdev_id; + adapter_id = 0; + + /* Get adapter conf */ + adapter = &(em_conf->rx_adapter[adapter_id]); + + /* Set adapter conf */ + adapter->eventdev_id = eventdev_id; + adapter->adapter_id = adapter_id; + adapter->rx_core_id = eh_get_next_eth_core(em_conf); + + /* + * Map all queues of eth device (port) to an event queue. If there + * are more event queues than eth ports then create 1:1 mapping. + * Otherwise map all eth ports to a single event queue. 
+ */ + if (nb_eth_dev > eventdev_config->nb_eventqueue) + single_ev_queue = true; + + for (i = 0; i < nb_eth_dev; i++) { + + /* Use only the ports enabled */ + if ((em_conf->eth_portmask & (1 << i)) == 0) + continue; + + /* Get the connection id */ + conn_id = adapter->nb_connections; + + /* Get the connection */ + conn = &(adapter->conn[conn_id]); + + /* Set mapping between eth ports & event queues */ + conn->ethdev_id = i; + conn->eventq_id = single_ev_queue ? 0 : i; + + /* Add all eth queues of the eth port to the event queue */ + conn->ethdev_rx_qid = -1; + + /* Update no of connections */ + adapter->nb_connections++; + + } + + /* We have set up one adapter */ + em_conf->nb_rx_adapter = 1; + + return 0; +} + +static int eh_validate_conf(struct eventmode_conf *em_conf) { int ret; @@ -188,6 +312,16 @@ eh_validate_conf(struct eventmode_conf *em_conf) return ret; } + /* + * Check if rx adapters are specified. Else generate a default config + * with one rx adapter and all eth queues mapped to event queues.
+ */ + if (em_conf->nb_rx_adapter == 0) { + ret = eh_set_default_conf_rx_adapter(em_conf); + if (ret != 0) + return ret; + } + return 0; } @@ -341,6 +475,104 @@ eh_initialize_eventdev(struct eventmode_conf *em_conf) return 0; } +static int +eh_rx_adapter_configure(struct eventmode_conf *em_conf, + struct rx_adapter_conf *adapter) +{ + struct rte_event_eth_rx_adapter_queue_conf queue_conf = {0}; + struct rte_event_dev_info evdev_default_conf = {0}; + struct rte_event_port_conf port_conf = {0}; + struct rx_adapter_connection_info *conn; + uint8_t eventdev_id; + uint32_t service_id; + int ret; + int j; + + /* Get event dev ID */ + eventdev_id = adapter->eventdev_id; + + /* Get default configuration of event dev */ + ret = rte_event_dev_info_get(eventdev_id, &evdev_default_conf); + if (ret < 0) { + EH_LOG_ERR("Failed to get event dev info %d", ret); + return ret; + } + + /* Setup port conf */ + port_conf.new_event_threshold = 1200; + port_conf.dequeue_depth = + evdev_default_conf.max_event_port_dequeue_depth; + port_conf.enqueue_depth = + evdev_default_conf.max_event_port_enqueue_depth; + + /* Create Rx adapter */ + ret = rte_event_eth_rx_adapter_create(adapter->adapter_id, + adapter->eventdev_id, &port_conf); + if (ret < 0) { + EH_LOG_ERR("Failed to create rx adapter %d", ret); + return ret; + } + + /* Setup various connections in the adapter */ + for (j = 0; j < adapter->nb_connections; j++) { + /* Get connection */ + conn = &(adapter->conn[j]); + + /* Setup queue conf */ + queue_conf.ev.queue_id = conn->eventq_id; + queue_conf.ev.sched_type = em_conf->ext_params.sched_type; + queue_conf.ev.event_type = RTE_EVENT_TYPE_ETHDEV; + + /* Add queue to the adapter */ + ret = rte_event_eth_rx_adapter_queue_add(adapter->adapter_id, + conn->ethdev_id, conn->ethdev_rx_qid, + &queue_conf); + if (ret < 0) { + EH_LOG_ERR("Failed to add eth queue to rx adapter %d", + ret); + return ret; + } + } + + /* Get the service ID used by rx adapter */ + ret = 
rte_event_eth_rx_adapter_service_id_get(adapter->adapter_id, + &service_id); + if (ret != -ESRCH && ret < 0) { + EH_LOG_ERR("Failed to get service id used by rx adapter %d", + ret); + return ret; + } + + rte_service_set_runstate_mapped_check(service_id, 0); + + /* Start adapter */ + ret = rte_event_eth_rx_adapter_start(adapter->adapter_id); + if (ret < 0) { + EH_LOG_ERR("Failed to start rx adapter %d", ret); + return ret; + } + + return 0; +} + +static int +eh_initialize_rx_adapter(struct eventmode_conf *em_conf) +{ + struct rx_adapter_conf *adapter; + int i, ret; + + /* Configure rx adapters */ + for (i = 0; i < em_conf->nb_rx_adapter; i++) { + adapter = &(em_conf->rx_adapter[i]); + ret = eh_rx_adapter_configure(em_conf, adapter); + if (ret < 0) { + EH_LOG_ERR("Failed to configure rx adapter %d", ret); + return ret; + } + } + return 0; +} + int32_t eh_devs_init(struct eh_conf *conf) { @@ -364,6 +596,9 @@ eh_devs_init(struct eh_conf *conf) /* Get eventmode conf */ em_conf = conf->mode_params; + /* Eventmode conf would need eth portmask */ + em_conf->eth_portmask = conf->eth_portmask; + /* Validate the requested config */ ret = eh_validate_conf(em_conf); if (ret < 0) { @@ -388,6 +623,13 @@ eh_devs_init(struct eh_conf *conf) return ret; } + /* Setup Rx adapter */ + ret = eh_initialize_rx_adapter(em_conf); + if (ret < 0) { + EH_LOG_ERR("Failed to initialize rx adapter %d", ret); + return ret; + } + /* Start eth devices after setting up adapter */ RTE_ETH_FOREACH_DEV(port_id) { @@ -410,8 +652,8 @@ int32_t eh_devs_uninit(struct eh_conf *conf) { struct eventmode_conf *em_conf; + int ret, i, j; uint16_t id; - int ret, i; if (conf == NULL) { EH_LOG_ERR("Invalid event helper configuration"); @@ -429,6 +671,35 @@ eh_devs_uninit(struct eh_conf *conf) /* Get eventmode conf */ em_conf = conf->mode_params; + /* Stop and release rx adapters */ + for (i = 0; i < em_conf->nb_rx_adapter; i++) { + + id = em_conf->rx_adapter[i].adapter_id; + ret = rte_event_eth_rx_adapter_stop(id); + 
if (ret < 0) { + EH_LOG_ERR("Failed to stop rx adapter %d", ret); + return ret; + } + + for (j = 0; j < em_conf->rx_adapter[i].nb_connections; j++) { + + ret = rte_event_eth_rx_adapter_queue_del(id, + em_conf->rx_adapter[i].conn[j].ethdev_id, -1); + if (ret < 0) { + EH_LOG_ERR( + "Failed to remove rx adapter queues %d", + ret); + return ret; + } + } + + ret = rte_event_eth_rx_adapter_free(id); + if (ret < 0) { + EH_LOG_ERR("Failed to free rx adapter %d", ret); + return ret; + } + } + /* Stop and release event devices */ for (i = 0; i < em_conf->nb_eventdev; i++) { diff --git a/examples/ipsec-secgw/event_helper.h b/examples/ipsec-secgw/event_helper.h index c8afc84..00ce14e 100644 --- a/examples/ipsec-secgw/event_helper.h +++ b/examples/ipsec-secgw/event_helper.h @@ -16,6 +16,12 @@ /* Max event devices supported */ #define EVENT_MODE_MAX_EVENT_DEVS RTE_EVENT_MAX_DEVS +/* Max Rx adapters supported */ +#define EVENT_MODE_MAX_RX_ADAPTERS RTE_EVENT_MAX_DEVS + +/* Max Rx adapter connections */ +#define EVENT_MODE_MAX_CONNECTIONS_PER_ADAPTER 16 + /* Max event queues supported per event device */ #define EVENT_MODE_MAX_EVENT_QUEUES_PER_DEV RTE_EVENT_MAX_QUEUES_PER_DEV @@ -53,12 +59,33 @@ struct eh_event_link_info { /**< Lcore to be polling on this port */ }; +/* Rx adapter connection info */ +struct rx_adapter_connection_info { + uint8_t ethdev_id; + uint8_t eventq_id; + int32_t ethdev_rx_qid; +}; + +/* Rx adapter conf */ +struct rx_adapter_conf { + int32_t eventdev_id; + int32_t adapter_id; + uint32_t rx_core_id; + uint8_t nb_connections; + struct rx_adapter_connection_info + conn[EVENT_MODE_MAX_CONNECTIONS_PER_ADAPTER]; +}; + /* Eventmode conf data */ struct eventmode_conf { int nb_eventdev; /**< No of event devs */ struct eventdev_params eventdev_config[EVENT_MODE_MAX_EVENT_DEVS]; /**< Per event dev conf */ + uint8_t nb_rx_adapter; + /**< No of Rx adapters */ + struct rx_adapter_conf rx_adapter[EVENT_MODE_MAX_RX_ADAPTERS]; + /**< Rx adapter conf */ uint8_t nb_link; /**< 
No of links */ struct eh_event_link_info @@ -66,6 +93,8 @@ struct eventmode_conf { /**< Per link conf */ struct rte_bitmap *eth_core_mask; /**< Core mask of cores to be used for software Rx and Tx */ + uint32_t eth_portmask; + /**< Mask of the eth ports to be used */ union { RTE_STD_C11 struct { -- 2.7.4 ^ permalink raw reply [flat|nested] 147+ messages in thread
* [dpdk-dev] [PATCH v4 05/15] examples/ipsec-secgw: add Tx adapter support 2020-02-20 8:01 ` [dpdk-dev] [PATCH v4 00/15] add eventmode to ipsec-secgw Lukasz Bartosik ` (3 preceding siblings ...) 2020-02-20 8:01 ` [dpdk-dev] [PATCH v4 04/15] examples/ipsec-secgw: add Rx adapter support Lukasz Bartosik @ 2020-02-20 8:01 ` Lukasz Bartosik 2020-02-20 8:01 ` [dpdk-dev] [PATCH v4 06/15] examples/ipsec-secgw: add routines to display config Lukasz Bartosik ` (12 subsequent siblings) 17 siblings, 0 replies; 147+ messages in thread From: Lukasz Bartosik @ 2020-02-20 8:01 UTC (permalink / raw) To: Akhil Goyal, Radu Nicolau, Thomas Monjalon Cc: Anoob Joseph, Jerin Jacob, Narayana Prasad, Ankur Dwivedi, Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru, Konstantin Ananyev, dev From: Anoob Joseph <anoobj@marvell.com> Add Tx adapter support. The event helper init routine will initialize the Tx adapter according to the configuration. If Tx adapter config is not present it will generate a default config. 
Signed-off-by: Anoob Joseph <anoobj@marvell.com> Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> --- examples/ipsec-secgw/event_helper.c | 313 ++++++++++++++++++++++++++++++++++++ examples/ipsec-secgw/event_helper.h | 48 ++++++ 2 files changed, 361 insertions(+) diff --git a/examples/ipsec-secgw/event_helper.c b/examples/ipsec-secgw/event_helper.c index 2653e86..fca1e08 100644 --- a/examples/ipsec-secgw/event_helper.c +++ b/examples/ipsec-secgw/event_helper.c @@ -5,6 +5,7 @@ #include <rte_ethdev.h> #include <rte_eventdev.h> #include <rte_event_eth_rx_adapter.h> +#include <rte_event_eth_tx_adapter.h> #include <rte_malloc.h> #include <stdbool.h> @@ -76,6 +77,22 @@ eh_get_next_active_core(struct eventmode_conf *em_conf, unsigned int prev_core) return next_core; } +static struct eventdev_params * +eh_get_eventdev_params(struct eventmode_conf *em_conf, uint8_t eventdev_id) +{ + int i; + + for (i = 0; i < em_conf->nb_eventdev; i++) { + if (em_conf->eventdev_config[i].eventdev_id == eventdev_id) + break; + } + + /* No match */ + if (i == em_conf->nb_eventdev) + return NULL; + + return &(em_conf->eventdev_config[i]); +} static int eh_set_default_conf_eventdev(struct eventmode_conf *em_conf) { @@ -288,6 +305,95 @@ eh_set_default_conf_rx_adapter(struct eventmode_conf *em_conf) } static int +eh_set_default_conf_tx_adapter(struct eventmode_conf *em_conf) +{ + struct tx_adapter_connection_info *conn; + struct eventdev_params *eventdev_config; + struct tx_adapter_conf *tx_adapter; + int eventdev_id; + int adapter_id; + int nb_eth_dev; + int conn_id; + int i; + + /* + * Create one Tx adapter with all eth queues mapped to event queues + * 1:1. 
+ */ + + if (em_conf->nb_eventdev == 0) { + EH_LOG_ERR("No event devs registered"); + return -EINVAL; + } + + /* Get the number of eth devs */ + nb_eth_dev = rte_eth_dev_count_avail(); + + /* Use the first event dev */ + eventdev_config = &(em_conf->eventdev_config[0]); + + /* Get eventdev ID */ + eventdev_id = eventdev_config->eventdev_id; + adapter_id = 0; + + /* Get adapter conf */ + tx_adapter = &(em_conf->tx_adapter[adapter_id]); + + /* Set adapter conf */ + tx_adapter->eventdev_id = eventdev_id; + tx_adapter->adapter_id = adapter_id; + + /* TODO: Tx core is required only when internal port is not present */ + tx_adapter->tx_core_id = eh_get_next_eth_core(em_conf); + + /* + * Application uses one event queue per adapter for submitting + * packets for Tx. Reserve the last queue available and decrement + * the total available event queues for this purpose. + */ + + /* Queue numbers start at 0 */ + tx_adapter->tx_ev_queue = eventdev_config->nb_eventqueue - 1; + + /* + * Map all Tx queues of the eth device (port) to the event device. + */ + + /* Set defaults for connections */ + + /* + * One eth device (port) is one connection. Map all Tx queues + * of the device to the Tx adapter. + */ + + for (i = 0; i < nb_eth_dev; i++) { + + /* Use only the ports enabled */ + if ((em_conf->eth_portmask & (1 << i)) == 0) + continue; + + /* Get the connection id */ + conn_id = tx_adapter->nb_connections; + + /* Get the connection */ + conn = &(tx_adapter->conn[conn_id]); + + /* Add ethdev to connections */ + conn->ethdev_id = i; + + /* Add all eth tx queues to adapter */ + conn->ethdev_tx_qid = -1; + + /* Update no of connections */ + tx_adapter->nb_connections++; + } + + /* We have set up one adapter */ + em_conf->nb_tx_adapter = 1; + return 0; +} + +static int eh_validate_conf(struct eventmode_conf *em_conf) { int ret; @@ -322,6 +428,16 @@ eh_validate_conf(struct eventmode_conf *em_conf) return ret; } + /* + * Check if tx adapters are specified.
Else generate a default config + * with one tx adapter. + */ + if (em_conf->nb_tx_adapter == 0) { + ret = eh_set_default_conf_tx_adapter(em_conf); + if (ret != 0) + return ret; + } + return 0; } @@ -573,6 +689,133 @@ eh_initialize_rx_adapter(struct eventmode_conf *em_conf) return 0; } +static int +eh_tx_adapter_configure(struct eventmode_conf *em_conf, + struct tx_adapter_conf *adapter) +{ + struct rte_event_dev_info evdev_default_conf = {0}; + struct rte_event_port_conf port_conf = {0}; + struct tx_adapter_connection_info *conn; + struct eventdev_params *eventdev_config; + uint8_t tx_port_id = 0; + uint8_t eventdev_id; + uint32_t service_id; + int ret, j; + + /* Get event dev ID */ + eventdev_id = adapter->eventdev_id; + + /* Get event device conf */ + eventdev_config = eh_get_eventdev_params(em_conf, eventdev_id); + + /* Create Tx adapter */ + + /* Get default configuration of event dev */ + ret = rte_event_dev_info_get(eventdev_id, &evdev_default_conf); + if (ret < 0) { + EH_LOG_ERR("Failed to get event dev info %d", ret); + return ret; + } + + /* Setup port conf */ + port_conf.new_event_threshold = + evdev_default_conf.max_num_events; + port_conf.dequeue_depth = + evdev_default_conf.max_event_port_dequeue_depth; + port_conf.enqueue_depth = + evdev_default_conf.max_event_port_enqueue_depth; + + /* Create adapter */ + ret = rte_event_eth_tx_adapter_create(adapter->adapter_id, + adapter->eventdev_id, &port_conf); + if (ret < 0) { + EH_LOG_ERR("Failed to create tx adapter %d", ret); + return ret; + } + + /* Setup various connections in the adapter */ + for (j = 0; j < adapter->nb_connections; j++) { + + /* Get connection */ + conn = &(adapter->conn[j]); + + /* Add queue to the adapter */ + ret = rte_event_eth_tx_adapter_queue_add(adapter->adapter_id, + conn->ethdev_id, conn->ethdev_tx_qid); + if (ret < 0) { + EH_LOG_ERR("Failed to add eth queue to tx adapter %d", + ret); + return ret; + } + } + + /* Setup Tx queue & port */ + + /* Get event port used by the adapter 
*/ + ret = rte_event_eth_tx_adapter_event_port_get( + adapter->adapter_id, &tx_port_id); + if (ret) { + EH_LOG_ERR("Failed to get tx adapter port id %d", ret); + return ret; + } + + /* + * Tx event queue is reserved for Tx adapter. Unlink this queue + * from all other ports + * + */ + for (j = 0; j < eventdev_config->nb_eventport; j++) { + rte_event_port_unlink(eventdev_id, j, + &(adapter->tx_ev_queue), 1); + } + + /* Link Tx event queue to Tx port */ + ret = rte_event_port_link(eventdev_id, tx_port_id, + &(adapter->tx_ev_queue), NULL, 1); + if (ret != 1) { + EH_LOG_ERR("Failed to link event queue to port"); + return ret; + } + + /* Get the service ID used by Tx adapter */ + ret = rte_event_eth_tx_adapter_service_id_get(adapter->adapter_id, + &service_id); + if (ret != -ESRCH && ret < 0) { + EH_LOG_ERR("Failed to get service id used by tx adapter %d", + ret); + return ret; + } + + rte_service_set_runstate_mapped_check(service_id, 0); + + /* Start adapter */ + ret = rte_event_eth_tx_adapter_start(adapter->adapter_id); + if (ret < 0) { + EH_LOG_ERR("Failed to start tx adapter %d", ret); + return ret; + } + + return 0; +} + +static int +eh_initialize_tx_adapter(struct eventmode_conf *em_conf) +{ + struct tx_adapter_conf *adapter; + int i, ret; + + /* Configure Tx adapters */ + for (i = 0; i < em_conf->nb_tx_adapter; i++) { + adapter = &(em_conf->tx_adapter[i]); + ret = eh_tx_adapter_configure(em_conf, adapter); + if (ret < 0) { + EH_LOG_ERR("Failed to configure tx adapter %d", ret); + return ret; + } + } + return 0; +} + int32_t eh_devs_init(struct eh_conf *conf) { @@ -630,6 +873,13 @@ eh_devs_init(struct eh_conf *conf) return ret; } + /* Setup Tx adapter */ + ret = eh_initialize_tx_adapter(em_conf); + if (ret < 0) { + EH_LOG_ERR("Failed to initialize tx adapter %d", ret); + return ret; + } + /* Start eth devices after setting up adapter */ RTE_ETH_FOREACH_DEV(port_id) { @@ -713,5 +963,68 @@ eh_devs_uninit(struct eh_conf *conf) } } + /* Stop and release tx adapters */ 
+ for (i = 0; i < em_conf->nb_tx_adapter; i++) { + + id = em_conf->tx_adapter[i].adapter_id; + ret = rte_event_eth_tx_adapter_stop(id); + if (ret < 0) { + EH_LOG_ERR("Failed to stop tx adapter %d", ret); + return ret; + } + + for (j = 0; j < em_conf->tx_adapter[i].nb_connections; j++) { + + ret = rte_event_eth_tx_adapter_queue_del(id, + em_conf->tx_adapter[i].conn[j].ethdev_id, -1); + if (ret < 0) { + EH_LOG_ERR( + "Failed to remove tx adapter queues %d", + ret); + return ret; + } + } + + ret = rte_event_eth_tx_adapter_free(id); + if (ret < 0) { + EH_LOG_ERR("Failed to free tx adapter %d", ret); + return ret; + } + } + return 0; } + +uint8_t +eh_get_tx_queue(struct eh_conf *conf, uint8_t eventdev_id) +{ + struct eventdev_params *eventdev_config; + struct eventmode_conf *em_conf; + + if (conf == NULL) { + EH_LOG_ERR("Invalid event helper configuration"); + return -EINVAL; + } + + if (conf->mode_params == NULL) { + EH_LOG_ERR("Invalid event mode parameters"); + return -EINVAL; + } + + /* Get eventmode conf */ + em_conf = conf->mode_params; + + /* Get event device conf */ + eventdev_config = eh_get_eventdev_params(em_conf, eventdev_id); + + if (eventdev_config == NULL) { + EH_LOG_ERR("Failed to read eventdev config"); + return -EINVAL; + } + + /* + * The last queue is reserved to be used as atomic queue for the + * last stage (eth packet tx stage) + */ + return eventdev_config->nb_eventqueue - 1; +} diff --git a/examples/ipsec-secgw/event_helper.h b/examples/ipsec-secgw/event_helper.h index 00ce14e..913b172 100644 --- a/examples/ipsec-secgw/event_helper.h +++ b/examples/ipsec-secgw/event_helper.h @@ -19,9 +19,15 @@ /* Max Rx adapters supported */ #define EVENT_MODE_MAX_RX_ADAPTERS RTE_EVENT_MAX_DEVS +/* Max Tx adapters supported */ +#define EVENT_MODE_MAX_TX_ADAPTERS RTE_EVENT_MAX_DEVS + /* Max Rx adapter connections */ #define EVENT_MODE_MAX_CONNECTIONS_PER_ADAPTER 16 +/* Max Tx adapter connections */ +#define EVENT_MODE_MAX_CONNECTIONS_PER_TX_ADAPTER 16 + /* Max 
event queues supported per event device */ #define EVENT_MODE_MAX_EVENT_QUEUES_PER_DEV RTE_EVENT_MAX_QUEUES_PER_DEV @@ -29,6 +35,9 @@ #define EVENT_MODE_MAX_LCORE_LINKS \ (EVENT_MODE_MAX_EVENT_DEVS * EVENT_MODE_MAX_EVENT_QUEUES_PER_DEV) +/* Max adapters that one Tx core can handle */ +#define EVENT_MODE_MAX_ADAPTERS_PER_TX_CORE EVENT_MODE_MAX_TX_ADAPTERS + /** * Packet transfer mode of the application */ @@ -76,6 +85,23 @@ struct rx_adapter_conf { conn[EVENT_MODE_MAX_CONNECTIONS_PER_ADAPTER]; }; +/* Tx adapter connection info */ +struct tx_adapter_connection_info { + uint8_t ethdev_id; + int32_t ethdev_tx_qid; +}; + +/* Tx adapter conf */ +struct tx_adapter_conf { + int32_t eventdev_id; + int32_t adapter_id; + uint32_t tx_core_id; + uint8_t nb_connections; + struct tx_adapter_connection_info + conn[EVENT_MODE_MAX_CONNECTIONS_PER_TX_ADAPTER]; + uint8_t tx_ev_queue; +}; + /* Eventmode conf data */ struct eventmode_conf { int nb_eventdev; @@ -86,6 +112,10 @@ struct eventmode_conf { /**< No of Rx adapters */ struct rx_adapter_conf rx_adapter[EVENT_MODE_MAX_RX_ADAPTERS]; /**< Rx adapter conf */ + uint8_t nb_tx_adapter; + /**< No of Tx adapters */ + struct tx_adapter_conf tx_adapter[EVENT_MODE_MAX_TX_ADAPTERS]; + /**< Tx adapter conf */ uint8_t nb_link; /**< No of links */ struct eh_event_link_info @@ -166,4 +196,22 @@ eh_devs_init(struct eh_conf *conf); int32_t eh_devs_uninit(struct eh_conf *conf); +/** + * Get eventdev tx queue + * + * If the application uses an event device which does not support internal port + * then it needs to submit the events to a Tx queue before final transmission. + * This Tx queue will be created internally by the eventmode helper subsystem, + * and application will need its queue ID when it runs the execution loop.
+ * + * @param conf + * Event helper configuration + * @param eventdev_id + * Event device ID + * @return + * Tx queue ID + */ +uint8_t +eh_get_tx_queue(struct eh_conf *conf, uint8_t eventdev_id); + #endif /* _EVENT_HELPER_H_ */ -- 2.7.4 ^ permalink raw reply [flat|nested] 147+ messages in thread
* [dpdk-dev] [PATCH v4 06/15] examples/ipsec-secgw: add routines to display config 2020-02-20 8:01 ` [dpdk-dev] [PATCH v4 00/15] add eventmode to ipsec-secgw Lukasz Bartosik ` (4 preceding siblings ...) 2020-02-20 8:01 ` [dpdk-dev] [PATCH v4 05/15] examples/ipsec-secgw: add Tx " Lukasz Bartosik @ 2020-02-20 8:01 ` Lukasz Bartosik 2020-02-20 8:01 ` [dpdk-dev] [PATCH v4 07/15] examples/ipsec-secgw: add routines to launch workers Lukasz Bartosik ` (11 subsequent siblings) 17 siblings, 0 replies; 147+ messages in thread From: Lukasz Bartosik @ 2020-02-20 8:01 UTC (permalink / raw) To: Akhil Goyal, Radu Nicolau, Thomas Monjalon Cc: Anoob Joseph, Jerin Jacob, Narayana Prasad, Ankur Dwivedi, Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru, Konstantin Ananyev, dev From: Anoob Joseph <anoobj@marvell.com> Add routines to display the eventmode configuration and provide an overview of the devices used. Signed-off-by: Anoob Joseph <anoobj@marvell.com> Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> --- examples/ipsec-secgw/event_helper.c | 207 ++++++++++++++++++++++++++++++++++++ examples/ipsec-secgw/event_helper.h | 14 +++ 2 files changed, 221 insertions(+) diff --git a/examples/ipsec-secgw/event_helper.c b/examples/ipsec-secgw/event_helper.c index fca1e08..d09bf7d 100644 --- a/examples/ipsec-secgw/event_helper.c +++ b/examples/ipsec-secgw/event_helper.c @@ -816,6 +816,210 @@ eh_initialize_tx_adapter(struct eventmode_conf *em_conf) return 0; } +static void +eh_display_operating_mode(struct eventmode_conf *em_conf) +{ + char sched_types[][32] = { + "RTE_SCHED_TYPE_ORDERED", + "RTE_SCHED_TYPE_ATOMIC", + "RTE_SCHED_TYPE_PARALLEL", + }; + EH_LOG_INFO("Operating mode:"); + + EH_LOG_INFO("\tScheduling type: \t%s", + sched_types[em_conf->ext_params.sched_type]); + + EH_LOG_INFO(""); +} + +static void +eh_display_event_dev_conf(struct eventmode_conf *em_conf) +{ + char queue_mode[][32] = { + "", + "ATQ (ALL TYPE QUEUE)", + "SINGLE LINK", + }; + char print_buf[256] = { 0 }; + 
int i; + + EH_LOG_INFO("Event Device Configuration:"); + + for (i = 0; i < em_conf->nb_eventdev; i++) { + sprintf(print_buf, + "\tDev ID: %-2d \tQueues: %-2d \tPorts: %-2d", + em_conf->eventdev_config[i].eventdev_id, + em_conf->eventdev_config[i].nb_eventqueue, + em_conf->eventdev_config[i].nb_eventport); + sprintf(print_buf + strlen(print_buf), + "\tQueue mode: %s", + queue_mode[em_conf->eventdev_config[i].ev_queue_mode]); + EH_LOG_INFO("%s", print_buf); + } + EH_LOG_INFO(""); +} + +static void +eh_display_rx_adapter_conf(struct eventmode_conf *em_conf) +{ + int nb_rx_adapter = em_conf->nb_rx_adapter; + struct rx_adapter_connection_info *conn; + struct rx_adapter_conf *adapter; + char print_buf[256] = { 0 }; + int i, j; + + EH_LOG_INFO("Rx adapters configured: %d", nb_rx_adapter); + + for (i = 0; i < nb_rx_adapter; i++) { + adapter = &(em_conf->rx_adapter[i]); + EH_LOG_INFO( + "\tRx adapter ID: %-2d\tConnections: %-2d\tEvent dev ID: %-2d" + "\tRx core: %-2d", + adapter->adapter_id, + adapter->nb_connections, + adapter->eventdev_id, + adapter->rx_core_id); + + for (j = 0; j < adapter->nb_connections; j++) { + conn = &(adapter->conn[j]); + + sprintf(print_buf, + "\t\tEthdev ID: %-2d", conn->ethdev_id); + + if (conn->ethdev_rx_qid == -1) + sprintf(print_buf + strlen(print_buf), + "\tEth rx queue: %-2s", "ALL"); + else + sprintf(print_buf + strlen(print_buf), + "\tEth rx queue: %-2d", + conn->ethdev_rx_qid); + + sprintf(print_buf + strlen(print_buf), + "\tEvent queue: %-2d", conn->eventq_id); + EH_LOG_INFO("%s", print_buf); + } + } + EH_LOG_INFO(""); +} + +static void +eh_display_tx_adapter_conf(struct eventmode_conf *em_conf) +{ + int nb_tx_adapter = em_conf->nb_tx_adapter; + struct tx_adapter_connection_info *conn; + struct tx_adapter_conf *adapter; + char print_buf[256] = { 0 }; + int i, j; + + EH_LOG_INFO("Tx adapters configured: %d", nb_tx_adapter); + + for (i = 0; i < nb_tx_adapter; i++) { + adapter = &(em_conf->tx_adapter[i]); + sprintf(print_buf, + "\tTx
adapter ID: %-2d\tConnections: %-2d\tEvent dev ID: %-2d", + adapter->adapter_id, + adapter->nb_connections, + adapter->eventdev_id); + if (adapter->tx_core_id == (uint32_t)-1) + sprintf(print_buf + strlen(print_buf), + "\tTx core: %-2s", "[INTERNAL PORT]"); + else if (adapter->tx_core_id == RTE_MAX_LCORE) + sprintf(print_buf + strlen(print_buf), + "\tTx core: %-2s", "[NONE]"); + else + sprintf(print_buf + strlen(print_buf), + "\tTx core: %-2d,\tInput event queue: %-2d", + adapter->tx_core_id, adapter->tx_ev_queue); + + EH_LOG_INFO("%s", print_buf); + + for (j = 0; j < adapter->nb_connections; j++) { + conn = &(adapter->conn[j]); + + sprintf(print_buf, + "\t\tEthdev ID: %-2d", conn->ethdev_id); + + if (conn->ethdev_tx_qid == -1) + sprintf(print_buf + strlen(print_buf), + "\tEth tx queue: %-2s", "ALL"); + else + sprintf(print_buf + strlen(print_buf), + "\tEth tx queue: %-2d", + conn->ethdev_tx_qid); + EH_LOG_INFO("%s", print_buf); + } + } + EH_LOG_INFO(""); +} + +static void +eh_display_link_conf(struct eventmode_conf *em_conf) +{ + struct eh_event_link_info *link; + char print_buf[256] = { 0 }; + int i; + + EH_LOG_INFO("Links configured: %d", em_conf->nb_link); + + for (i = 0; i < em_conf->nb_link; i++) { + link = &(em_conf->link[i]); + + sprintf(print_buf, + "\tEvent dev ID: %-2d\tEvent port: %-2d", + link->eventdev_id, + link->event_port_id); + + if (em_conf->ext_params.all_ev_queue_to_ev_port) + sprintf(print_buf + strlen(print_buf), + "Event queue: %-2s\t", "ALL"); + else + sprintf(print_buf + strlen(print_buf), + "Event queue: %-2d\t", link->eventq_id); + + sprintf(print_buf + strlen(print_buf), + "Lcore: %-2d", link->lcore_id); + EH_LOG_INFO("%s", print_buf); + } + EH_LOG_INFO(""); +} + +void +eh_display_conf(struct eh_conf *conf) +{ + struct eventmode_conf *em_conf; + + if (conf == NULL) { + EH_LOG_ERR("Invalid event helper configuration"); + return; + } + + if (conf->mode != EH_PKT_TRANSFER_MODE_EVENT) + return; + + if (conf->mode_params == NULL) { + 
EH_LOG_ERR("Invalid event mode parameters"); + return; + } + + /* Get eventmode conf */ + em_conf = (struct eventmode_conf *)(conf->mode_params); + + /* Display user exposed operating modes */ + eh_display_operating_mode(em_conf); + + /* Display event device conf */ + eh_display_event_dev_conf(em_conf); + + /* Display Rx adapter conf */ + eh_display_rx_adapter_conf(em_conf); + + /* Display Tx adapter conf */ + eh_display_tx_adapter_conf(em_conf); + + /* Display event-lcore link */ + eh_display_link_conf(em_conf); +} + int32_t eh_devs_init(struct eh_conf *conf) { @@ -849,6 +1053,9 @@ eh_devs_init(struct eh_conf *conf) return ret; } + /* Display the current configuration */ + eh_display_conf(conf); + /* Stop eth devices before setting up adapter */ RTE_ETH_FOREACH_DEV(port_id) { diff --git a/examples/ipsec-secgw/event_helper.h b/examples/ipsec-secgw/event_helper.h index 913b172..8eb5e25 100644 --- a/examples/ipsec-secgw/event_helper.h +++ b/examples/ipsec-secgw/event_helper.h @@ -13,6 +13,11 @@ RTE_FMT("%s() line %u: " RTE_FMT_HEAD(__VA_ARGS__ ,) "\n", \ __func__, __LINE__, RTE_FMT_TAIL(__VA_ARGS__ ,))) +#define EH_LOG_INFO(...) \ + RTE_LOG(INFO, EH, \ + RTE_FMT("%s() line %u: " RTE_FMT_HEAD(__VA_ARGS__ ,) "\n", \ + __func__, __LINE__, RTE_FMT_TAIL(__VA_ARGS__ ,))) + /* Max event devices supported */ #define EVENT_MODE_MAX_EVENT_DEVS RTE_EVENT_MAX_DEVS @@ -214,4 +219,13 @@ eh_devs_uninit(struct eh_conf *conf); uint8_t eh_get_tx_queue(struct eh_conf *conf, uint8_t eventdev_id); +/** + * Display event mode configuration + * + * @param conf + * Event helper configuration + */ +void +eh_display_conf(struct eh_conf *conf); + #endif /* _EVENT_HELPER_H_ */ -- 2.7.4 ^ permalink raw reply [flat|nested] 147+ messages in thread
* [dpdk-dev] [PATCH v4 07/15] examples/ipsec-secgw: add routines to launch workers 2020-02-20 8:01 ` [dpdk-dev] [PATCH v4 00/15] add eventmode to ipsec-secgw Lukasz Bartosik ` (5 preceding siblings ...) 2020-02-20 8:01 ` [dpdk-dev] [PATCH v4 06/15] examples/ipsec-secgw: add routines to display config Lukasz Bartosik @ 2020-02-20 8:01 ` Lukasz Bartosik 2020-02-20 8:02 ` [dpdk-dev] [PATCH v4 08/15] examples/ipsec-secgw: add support for internal ports Lukasz Bartosik ` (10 subsequent siblings) 17 siblings, 0 replies; 147+ messages in thread From: Lukasz Bartosik @ 2020-02-20 8:01 UTC (permalink / raw) To: Akhil Goyal, Radu Nicolau, Thomas Monjalon Cc: Jerin Jacob, Narayana Prasad, Ankur Dwivedi, Anoob Joseph, Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru, Konstantin Ananyev, dev In eventmode workers can be drafted differently according to the capabilities of the underlying event device. The added functions will receive an array of such workers and probe the eventmode properties to choose the worker. Signed-off-by: Anoob Joseph <anoobj@marvell.com> Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> --- examples/ipsec-secgw/event_helper.c | 336 ++++++++++++++++++++++++++++++++++++ examples/ipsec-secgw/event_helper.h | 48 ++++++ 2 files changed, 384 insertions(+) diff --git a/examples/ipsec-secgw/event_helper.c b/examples/ipsec-secgw/event_helper.c index d09bf7d..e3dfaf5 100644 --- a/examples/ipsec-secgw/event_helper.c +++ b/examples/ipsec-secgw/event_helper.c @@ -11,6 +11,8 @@ #include "event_helper.h" +static volatile bool eth_core_running; + static int eh_get_enabled_cores(struct rte_bitmap *eth_core_mask) { @@ -93,6 +95,16 @@ eh_get_eventdev_params(struct eventmode_conf *em_conf, uint8_t eventdev_id) return &(em_conf->eventdev_config[i]); } +static inline bool +eh_dev_has_burst_mode(uint8_t dev_id) +{ + struct rte_event_dev_info dev_info; + + rte_event_dev_info_get(dev_id, &dev_info); + return (dev_info.event_dev_cap & RTE_EVENT_DEV_CAP_BURST_MODE) ? 
+ true : false; +} + static int eh_set_default_conf_eventdev(struct eventmode_conf *em_conf) { @@ -689,6 +701,257 @@ eh_initialize_rx_adapter(struct eventmode_conf *em_conf) return 0; } +static int32_t +eh_start_worker_eth_core(struct eventmode_conf *conf, uint32_t lcore_id) +{ + uint32_t service_id[EVENT_MODE_MAX_ADAPTERS_PER_RX_CORE]; + struct rx_adapter_conf *rx_adapter; + struct tx_adapter_conf *tx_adapter; + int service_count = 0; + int adapter_id; + int32_t ret; + int i; + + EH_LOG_INFO("Entering eth_core processing on lcore %u", lcore_id); + + /* + * Parse adapter config to check which of all Rx adapters need + * to be handled by this core. + */ + for (i = 0; i < conf->nb_rx_adapter; i++) { + /* Check if we have exceeded the max allowed */ + if (service_count > EVENT_MODE_MAX_ADAPTERS_PER_RX_CORE) { + EH_LOG_ERR( + "Exceeded the max allowed adapters per rx core"); + break; + } + + rx_adapter = &(conf->rx_adapter[i]); + if (rx_adapter->rx_core_id != lcore_id) + continue; + + /* Adapter is handled by this core */ + adapter_id = rx_adapter->adapter_id; + + /* Get the service ID for the adapters */ + ret = rte_event_eth_rx_adapter_service_id_get(adapter_id, + &(service_id[service_count])); + + if (ret != -ESRCH && ret < 0) { + EH_LOG_ERR( + "Failed to get service id used by rx adapter"); + return ret; + } + + /* Update service count */ + service_count++; + } + + /* + * Parse adapter config to see which of all Tx adapters need + * to be handled by this core. 
+ */ + for (i = 0; i < conf->nb_tx_adapter; i++) { + /* Check if we have exceeded the max allowed */ + if (service_count > EVENT_MODE_MAX_ADAPTERS_PER_TX_CORE) { + EH_LOG_ERR( + "Exceeded the max allowed adapters per tx core"); + break; + } + + tx_adapter = &conf->tx_adapter[i]; + if (tx_adapter->tx_core_id != lcore_id) + continue; + + /* Adapter is handled by this core */ + adapter_id = tx_adapter->adapter_id; + + /* Get the service ID for the adapters */ + ret = rte_event_eth_tx_adapter_service_id_get(adapter_id, + &(service_id[service_count])); + + if (ret != -ESRCH && ret < 0) { + EH_LOG_ERR( + "Failed to get service id used by tx adapter"); + return ret; + } + + /* Update service count */ + service_count++; + } + + eth_core_running = true; + + while (eth_core_running) { + for (i = 0; i < service_count; i++) { + /* Initiate adapter service */ + rte_service_run_iter_on_app_lcore(service_id[i], 0); + } + } + + return 0; +} + +static int32_t +eh_stop_worker_eth_core(void) +{ + if (eth_core_running) { + EH_LOG_INFO("Stopping eth cores"); + eth_core_running = false; + } + return 0; +} + +static struct eh_app_worker_params * +eh_find_worker(uint32_t lcore_id, struct eh_conf *conf, + struct eh_app_worker_params *app_wrkrs, uint8_t nb_wrkr_param) +{ + struct eh_app_worker_params curr_conf = { {{0} }, NULL}; + struct eh_event_link_info *link = NULL; + struct eh_app_worker_params *tmp_wrkr; + struct eventmode_conf *em_conf; + uint8_t eventdev_id; + int i; + + /* Get eventmode config */ + em_conf = conf->mode_params; + + /* + * Use event device from the first lcore-event link. + * + * Assumption: All lcore-event links tied to a core are using the + * same event device. In other words, one core would be polling on + * queues of a single event device only. 
+ */ + + /* Get a link for this lcore */ + for (i = 0; i < em_conf->nb_link; i++) { + link = &(em_conf->link[i]); + if (link->lcore_id == lcore_id) + break; + } + + if (link == NULL) { + EH_LOG_ERR("No valid link found for lcore %d", lcore_id); + return NULL; + } + + /* Get event dev ID */ + eventdev_id = link->eventdev_id; + + /* Populate the curr_conf with the capabilities */ + + /* Check for burst mode */ + if (eh_dev_has_burst_mode(eventdev_id)) + curr_conf.cap.burst = EH_RX_TYPE_BURST; + else + curr_conf.cap.burst = EH_RX_TYPE_NON_BURST; + + /* Parse the passed list and see if we have matching capabilities */ + + /* Initialize the pointer used to traverse the list */ + tmp_wrkr = app_wrkrs; + + for (i = 0; i < nb_wrkr_param; i++, tmp_wrkr++) { + + /* Skip this if capabilities are not matching */ + if (tmp_wrkr->cap.u64 != curr_conf.cap.u64) + continue; + + /* If the checks pass, we have a match */ + return tmp_wrkr; + } + + return NULL; +} + +static int +eh_verify_match_worker(struct eh_app_worker_params *match_wrkr) +{ + /* Verify registered worker */ + if (match_wrkr->worker_thread == NULL) { + EH_LOG_ERR("No worker registered"); + return 0; + } + + /* Success */ + return 1; +} + +static uint8_t +eh_get_event_lcore_links(uint32_t lcore_id, struct eh_conf *conf, + struct eh_event_link_info **links) +{ + struct eh_event_link_info *link_cache; + struct eventmode_conf *em_conf = NULL; + struct eh_event_link_info *link; + uint8_t lcore_nb_link = 0; + size_t single_link_size; + size_t cache_size; + int index = 0; + int i; + + if (conf == NULL || links == NULL) { + EH_LOG_ERR("Invalid args"); + return -EINVAL; + } + + /* Get eventmode conf */ + em_conf = conf->mode_params; + + if (em_conf == NULL) { + EH_LOG_ERR("Invalid event mode parameters"); + return -EINVAL; + } + + /* Get the number of links registered */ + for (i = 0; i < em_conf->nb_link; i++) { + + /* Get link */ + link = &(em_conf->link[i]); + + /* Check if we have link intended for this lcore */ + if 
(link->lcore_id == lcore_id) { + + /* Update the number of links for this core */ + lcore_nb_link++; + + } + } + + /* Compute size of one entry to be copied */ + single_link_size = sizeof(struct eh_event_link_info); + + /* Compute size of the buffer required */ + cache_size = lcore_nb_link * sizeof(struct eh_event_link_info); + + /* Compute size of the buffer required */ + link_cache = calloc(1, cache_size); + + /* Get the number of links registered */ + for (i = 0; i < em_conf->nb_link; i++) { + + /* Get link */ + link = &(em_conf->link[i]); + + /* Check if we have link intended for this lcore */ + if (link->lcore_id == lcore_id) { + + /* Cache the link */ + memcpy(&link_cache[index], link, single_link_size); + + /* Update index */ + index++; + } + } + + /* Update the links for application to use the cached links */ + *links = link_cache; + + /* Return the number of cached links */ + return lcore_nb_link; +} + static int eh_tx_adapter_configure(struct eventmode_conf *em_conf, struct tx_adapter_conf *adapter) @@ -1202,6 +1465,79 @@ eh_devs_uninit(struct eh_conf *conf) return 0; } +void +eh_launch_worker(struct eh_conf *conf, struct eh_app_worker_params *app_wrkr, + uint8_t nb_wrkr_param) +{ + struct eh_app_worker_params *match_wrkr; + struct eh_event_link_info *links = NULL; + struct eventmode_conf *em_conf; + uint32_t lcore_id; + uint8_t nb_links; + + if (conf == NULL) { + EH_LOG_ERR("Invalid event helper configuration"); + return; + } + + if (conf->mode_params == NULL) { + EH_LOG_ERR("Invalid event mode parameters"); + return; + } + + /* Get eventmode conf */ + em_conf = conf->mode_params; + + /* Get core ID */ + lcore_id = rte_lcore_id(); + + /* Check if this is eth core */ + if (rte_bitmap_get(em_conf->eth_core_mask, lcore_id)) { + eh_start_worker_eth_core(em_conf, lcore_id); + return; + } + + if (app_wrkr == NULL || nb_wrkr_param == 0) { + EH_LOG_ERR("Invalid args"); + return; + } + + /* + * This is a regular worker thread. 
The application registers + * multiple workers with various capabilities. Run worker + * based on the selected capabilities of the event + * device configured. + */ + + /* Get the first matching worker for the event device */ + match_wrkr = eh_find_worker(lcore_id, conf, app_wrkr, nb_wrkr_param); + if (match_wrkr == NULL) { + EH_LOG_ERR("Failed to match worker registered for lcore %d", + lcore_id); + goto clean_and_exit; + } + + /* Verify sanity of the matched worker */ + if (eh_verify_match_worker(match_wrkr) != 1) { + EH_LOG_ERR("Failed to validate the matched worker"); + goto clean_and_exit; + } + + /* Get worker links */ + nb_links = eh_get_event_lcore_links(lcore_id, conf, &links); + + /* Launch the worker thread */ + match_wrkr->worker_thread(links, nb_links); + + /* Free links info memory */ + free(links); + +clean_and_exit: + + /* Flag eth_cores to stop, if started */ + eh_stop_worker_eth_core(); +} + uint8_t eh_get_tx_queue(struct eh_conf *conf, uint8_t eventdev_id) { diff --git a/examples/ipsec-secgw/event_helper.h b/examples/ipsec-secgw/event_helper.h index 8eb5e25..9a4dfab 100644 --- a/examples/ipsec-secgw/event_helper.h +++ b/examples/ipsec-secgw/event_helper.h @@ -40,6 +40,9 @@ #define EVENT_MODE_MAX_LCORE_LINKS \ (EVENT_MODE_MAX_EVENT_DEVS * EVENT_MODE_MAX_EVENT_QUEUES_PER_DEV) +/* Max adapters that one Rx core can handle */ +#define EVENT_MODE_MAX_ADAPTERS_PER_RX_CORE EVENT_MODE_MAX_RX_ADAPTERS + /* Max adapters that one Tx core can handle */ #define EVENT_MODE_MAX_ADAPTERS_PER_TX_CORE EVENT_MODE_MAX_TX_ADAPTERS @@ -51,6 +54,14 @@ enum eh_pkt_transfer_mode { EH_PKT_TRANSFER_MODE_EVENT, }; +/** + * Event mode packet rx types + */ +enum eh_rx_types { + EH_RX_TYPE_NON_BURST = 0, + EH_RX_TYPE_BURST +}; + /* Event dev params */ struct eventdev_params { uint8_t eventdev_id; @@ -161,6 +172,22 @@ struct eh_conf { /**< Mode specific parameters */ }; +/* Workers registered by the application */ +struct eh_app_worker_params { + union { + RTE_STD_C11 + struct { 
+ uint64_t burst : 1; + /**< Specify status of rx type burst */ + }; + uint64_t u64; + } cap; + /**< Capabilities of this worker */ + void (*worker_thread)(struct eh_event_link_info *links, + uint8_t nb_links); + /**< Worker thread */ +}; + /** * Initialize event mode devices * @@ -228,4 +255,25 @@ eh_get_tx_queue(struct eh_conf *conf, uint8_t eventdev_id); void eh_display_conf(struct eh_conf *conf); + +/** + * Launch eventmode worker + * + * The application can request the eventmode helper subsystem to launch the + * worker based on the capabilities of event device and the options selected + * while initializing the eventmode. + * + * @param conf + * Event helper configuration + * @param app_wrkr + * List of all the workers registered by application, along with its + * capabilities + * @param nb_wrkr_param + * Number of workers passed by the application + * + */ +void +eh_launch_worker(struct eh_conf *conf, struct eh_app_worker_params *app_wrkr, + uint8_t nb_wrkr_param); + #endif /* _EVENT_HELPER_H_ */ -- 2.7.4 ^ permalink raw reply [flat|nested] 147+ messages in thread
* [dpdk-dev] [PATCH v4 08/15] examples/ipsec-secgw: add support for internal ports 2020-02-20 8:01 ` [dpdk-dev] [PATCH v4 00/15] add eventmode to ipsec-secgw Lukasz Bartosik ` (6 preceding siblings ...) 2020-02-20 8:01 ` [dpdk-dev] [PATCH v4 07/15] examples/ipsec-secgw: add routines to launch workers Lukasz Bartosik @ 2020-02-20 8:02 ` Lukasz Bartosik 2020-02-20 8:02 ` [dpdk-dev] [PATCH v4 09/15] examples/ipsec-secgw: add event helper config init/uninit Lukasz Bartosik ` (9 subsequent siblings) 17 siblings, 0 replies; 147+ messages in thread From: Lukasz Bartosik @ 2020-02-20 8:02 UTC (permalink / raw) To: Akhil Goyal, Radu Nicolau, Thomas Monjalon Cc: Jerin Jacob, Narayana Prasad, Ankur Dwivedi, Anoob Joseph, Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru, Konstantin Ananyev, dev Add support for Rx and Tx internal ports. When internal ports are available, a packet can be received from an eth port and forwarded to an event queue by HW without any software intervention. The same applies to the Tx side, where a packet sent to an event queue can be forwarded by HW to an eth port without any software intervention. 

Signed-off-by: Anoob Joseph <anoobj@marvell.com> Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> --- examples/ipsec-secgw/event_helper.c | 179 +++++++++++++++++++++++++++++++----- examples/ipsec-secgw/event_helper.h | 11 +++ 2 files changed, 167 insertions(+), 23 deletions(-) diff --git a/examples/ipsec-secgw/event_helper.c b/examples/ipsec-secgw/event_helper.c index e3dfaf5..fe047ab 100644 --- a/examples/ipsec-secgw/event_helper.c +++ b/examples/ipsec-secgw/event_helper.c @@ -95,6 +95,39 @@ eh_get_eventdev_params(struct eventmode_conf *em_conf, uint8_t eventdev_id) return &(em_conf->eventdev_config[i]); } + +static inline bool +eh_dev_has_rx_internal_port(uint8_t eventdev_id) +{ + bool flag = true; + int j; + + RTE_ETH_FOREACH_DEV(j) { + uint32_t caps = 0; + + rte_event_eth_rx_adapter_caps_get(eventdev_id, j, &caps); + if (!(caps & RTE_EVENT_ETH_RX_ADAPTER_CAP_INTERNAL_PORT)) + flag = false; + } + return flag; +} + +static inline bool +eh_dev_has_tx_internal_port(uint8_t eventdev_id) +{ + bool flag = true; + int j; + + RTE_ETH_FOREACH_DEV(j) { + uint32_t caps = 0; + + rte_event_eth_tx_adapter_caps_get(eventdev_id, j, &caps); + if (!(caps & RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT)) + flag = false; + } + return flag; +} + static inline bool eh_dev_has_burst_mode(uint8_t dev_id) { @@ -175,6 +208,42 @@ eh_set_default_conf_eventdev(struct eventmode_conf *em_conf) return 0; } +static void +eh_do_capability_check(struct eventmode_conf *em_conf) +{ + struct eventdev_params *eventdev_config; + int all_internal_ports = 1; + uint32_t eventdev_id; + int i; + + for (i = 0; i < em_conf->nb_eventdev; i++) { + + /* Get the event dev conf */ + eventdev_config = &(em_conf->eventdev_config[i]); + eventdev_id = eventdev_config->eventdev_id; + + /* Check if event device has internal port for Rx & Tx */ + if (eh_dev_has_rx_internal_port(eventdev_id) && + eh_dev_has_tx_internal_port(eventdev_id)) { + eventdev_config->all_internal_ports = 1; + } else { + all_internal_ports = 0; 
+ } + } + + /* + * If Rx & Tx internal ports are supported by all event devices then + * eth cores won't be required. Override the eth core mask requested + * and decrement number of event queues by one as it won't be needed + * for Tx. + */ + if (all_internal_ports) { + rte_bitmap_reset(em_conf->eth_core_mask); + for (i = 0; i < em_conf->nb_eventdev; i++) + em_conf->eventdev_config[i].nb_eventqueue--; + } +} + static int eh_set_default_conf_link(struct eventmode_conf *em_conf) { @@ -246,7 +315,10 @@ eh_set_default_conf_rx_adapter(struct eventmode_conf *em_conf) struct rx_adapter_connection_info *conn; struct eventdev_params *eventdev_config; struct rx_adapter_conf *adapter; + bool rx_internal_port = true; bool single_ev_queue = false; + int nb_eventqueue; + uint32_t caps = 0; int eventdev_id; int nb_eth_dev; int adapter_id; @@ -276,14 +348,21 @@ eh_set_default_conf_rx_adapter(struct eventmode_conf *em_conf) /* Set adapter conf */ adapter->eventdev_id = eventdev_id; adapter->adapter_id = adapter_id; - adapter->rx_core_id = eh_get_next_eth_core(em_conf); + + /* + * If event device does not have internal ports for passing + * packets then reserved one queue for Tx path + */ + nb_eventqueue = eventdev_config->all_internal_ports ? + eventdev_config->nb_eventqueue : + eventdev_config->nb_eventqueue - 1; /* * Map all queues of eth device (port) to an event queue. If there * are more event queues than eth ports then create 1:1 mapping. * Otherwise map all eth ports to a single event queue. 
*/ - if (nb_eth_dev > eventdev_config->nb_eventqueue) + if (nb_eth_dev > nb_eventqueue) single_ev_queue = true; for (i = 0; i < nb_eth_dev; i++) { @@ -305,11 +384,24 @@ eh_set_default_conf_rx_adapter(struct eventmode_conf *em_conf) /* Add all eth queues eth port to event queue */ conn->ethdev_rx_qid = -1; + /* Get Rx adapter capabilities */ + rte_event_eth_rx_adapter_caps_get(eventdev_id, i, &caps); + if (!(caps & RTE_EVENT_ETH_RX_ADAPTER_CAP_INTERNAL_PORT)) + rx_internal_port = false; + /* Update no of connections */ adapter->nb_connections++; } + if (rx_internal_port) { + /* Rx core is not required */ + adapter->rx_core_id = -1; + } else { + /* Rx core is required */ + adapter->rx_core_id = eh_get_next_eth_core(em_conf); + } + /* We have setup one adapter */ em_conf->nb_rx_adapter = 1; @@ -322,6 +414,8 @@ eh_set_default_conf_tx_adapter(struct eventmode_conf *em_conf) struct tx_adapter_connection_info *conn; struct eventdev_params *eventdev_config; struct tx_adapter_conf *tx_adapter; + bool tx_internal_port = true; + uint32_t caps = 0; int eventdev_id; int adapter_id; int nb_eth_dev; @@ -355,18 +449,6 @@ eh_set_default_conf_tx_adapter(struct eventmode_conf *em_conf) tx_adapter->eventdev_id = eventdev_id; tx_adapter->adapter_id = adapter_id; - /* TODO: Tx core is required only when internal port is not present */ - tx_adapter->tx_core_id = eh_get_next_eth_core(em_conf); - - /* - * Application uses one event queue per adapter for submitting - * packets for Tx. Reserve the last queue available and decrement - * the total available event queues for this - */ - - /* Queue numbers start at 0 */ - tx_adapter->tx_ev_queue = eventdev_config->nb_eventqueue - 1; - /* * Map all Tx queues of the eth device (port) to the event device. 
*/ @@ -396,10 +478,30 @@ eh_set_default_conf_tx_adapter(struct eventmode_conf *em_conf) /* Add all eth tx queues to adapter */ conn->ethdev_tx_qid = -1; + /* Get Tx adapter capabilities */ + rte_event_eth_tx_adapter_caps_get(eventdev_id, i, &caps); + if (!(caps & RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT)) + tx_internal_port = false; + /* Update no of connections */ tx_adapter->nb_connections++; } + if (tx_internal_port) { + /* Tx core is not required */ + tx_adapter->tx_core_id = -1; + } else { + /* Tx core is required */ + tx_adapter->tx_core_id = eh_get_next_eth_core(em_conf); + + /* + * Use one event queue per adapter for submitting packets + * for Tx. Reserving the last queue available + */ + /* Queue numbers start at 0 */ + tx_adapter->tx_ev_queue = eventdev_config->nb_eventqueue - 1; + } + /* We have setup one adapter */ em_conf->nb_tx_adapter = 1; return 0; @@ -420,6 +522,9 @@ eh_validate_conf(struct eventmode_conf *em_conf) return ret; } + /* Perform capability check for the selected event devices */ + eh_do_capability_check(em_conf); + /* * Check if links are specified. Else generate a default config for * the event ports used. @@ -523,11 +628,13 @@ eh_initialize_eventdev(struct eventmode_conf *em_conf) eventdev_config->ev_queue_mode; /* * All queues need to be set with sched_type as - * schedule type for the application stage. One queue - * would be reserved for the final eth tx stage. This - * will be an atomic queue. + * schedule type for the application stage. One + * queue would be reserved for the final eth tx + * stage if event device does not have internal + * ports. This will be an atomic queue. 
*/ - if (j == nb_eventqueue-1) { + if (!eventdev_config->all_internal_ports && + j == nb_eventqueue-1) { eventq_conf.schedule_type = RTE_SCHED_TYPE_ATOMIC; } else { @@ -841,6 +948,12 @@ eh_find_worker(uint32_t lcore_id, struct eh_conf *conf, /* Populate the curr_conf with the capabilities */ + /* Check for Tx internal port */ + if (eh_dev_has_tx_internal_port(eventdev_id)) + curr_conf.cap.tx_internal_port = EH_TX_TYPE_INTERNAL_PORT; + else + curr_conf.cap.tx_internal_port = EH_TX_TYPE_NO_INTERNAL_PORT; + /* Check for burst mode */ if (eh_dev_has_burst_mode(eventdev_id)) curr_conf.cap.burst = EH_RX_TYPE_BURST; @@ -1012,6 +1125,16 @@ eh_tx_adapter_configure(struct eventmode_conf *em_conf, } } + /* + * Check if Tx core is assigned. If Tx core is not assigned then + * the adapter has internal port for submitting Tx packets and + * Tx event queue & port setup is not required + */ + if (adapter->tx_core_id == (uint32_t) (-1)) { + /* Internal port is present */ + goto skip_tx_queue_port_setup; + } + /* Setup Tx queue & port */ /* Get event port used by the adapter */ @@ -1051,6 +1174,7 @@ eh_tx_adapter_configure(struct eventmode_conf *em_conf, rte_service_set_runstate_mapped_check(service_id, 0); +skip_tx_queue_port_setup: /* Start adapter */ ret = rte_event_eth_tx_adapter_start(adapter->adapter_id); if (ret < 0) { @@ -1135,13 +1259,22 @@ eh_display_rx_adapter_conf(struct eventmode_conf *em_conf) for (i = 0; i < nb_rx_adapter; i++) { adapter = &(em_conf->rx_adapter[i]); - EH_LOG_INFO( - "\tRx adaper ID: %-2d\tConnections: %-2d\tEvent dev ID: %-2d" - "\tRx core: %-2d", + sprintf(print_buf, + "\tRx adaper ID: %-2d\tConnections: %-2d\tEvent dev ID: %-2d", adapter->adapter_id, adapter->nb_connections, - adapter->eventdev_id, - adapter->rx_core_id); + adapter->eventdev_id); + if (adapter->rx_core_id == (uint32_t)-1) + sprintf(print_buf + strlen(print_buf), + "\tRx core: %-2s", "[INTERNAL PORT]"); + else if (adapter->rx_core_id == RTE_MAX_LCORE) + sprintf(print_buf + 
strlen(print_buf), + "\tRx core: %-2s", "[NONE]"); + else + sprintf(print_buf + strlen(print_buf), + "\tRx core: %-2d", adapter->rx_core_id); + + EH_LOG_INFO("%s", print_buf); for (j = 0; j < adapter->nb_connections; j++) { conn = &(adapter->conn[j]); diff --git a/examples/ipsec-secgw/event_helper.h b/examples/ipsec-secgw/event_helper.h index 9a4dfab..25c8563 100644 --- a/examples/ipsec-secgw/event_helper.h +++ b/examples/ipsec-secgw/event_helper.h @@ -62,12 +62,21 @@ enum eh_rx_types { EH_RX_TYPE_BURST }; +/** + * Event mode packet tx types + */ +enum eh_tx_types { + EH_TX_TYPE_INTERNAL_PORT = 0, + EH_TX_TYPE_NO_INTERNAL_PORT +}; + /* Event dev params */ struct eventdev_params { uint8_t eventdev_id; uint8_t nb_eventqueue; uint8_t nb_eventport; uint8_t ev_queue_mode; + uint8_t all_internal_ports; }; /** @@ -179,6 +188,8 @@ struct eh_app_worker_params { struct { uint64_t burst : 1; /**< Specify status of rx type burst */ + uint64_t tx_internal_port : 1; + /**< Specify whether tx internal port is available */ }; uint64_t u64; } cap; -- 2.7.4 ^ permalink raw reply [flat|nested] 147+ messages in thread
* [dpdk-dev] [PATCH v4 09/15] examples/ipsec-secgw: add event helper config init/uninit 2020-02-20 8:01 ` [dpdk-dev] [PATCH v4 00/15] add eventmode to ipsec-secgw Lukasz Bartosik ` (7 preceding siblings ...) 2020-02-20 8:02 ` [dpdk-dev] [PATCH v4 08/15] examples/ipsec-secgw: add support for internal ports Lukasz Bartosik @ 2020-02-20 8:02 ` Lukasz Bartosik 2020-02-20 8:02 ` [dpdk-dev] [PATCH v4 10/15] examples/ipsec-secgw: add eventmode to ipsec-secgw Lukasz Bartosik ` (8 subsequent siblings) 17 siblings, 0 replies; 147+ messages in thread From: Lukasz Bartosik @ 2020-02-20 8:02 UTC (permalink / raw) To: Akhil Goyal, Radu Nicolau, Thomas Monjalon Cc: Jerin Jacob, Narayana Prasad, Ankur Dwivedi, Anoob Joseph, Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru, Konstantin Ananyev, dev Add eventmode helper functions eh_conf_init and eh_conf_uninit, whose purpose is to initialize and uninitialize the eventmode helper configuration. Signed-off-by: Anoob Joseph <anoobj@marvell.com> Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> --- examples/ipsec-secgw/event_helper.c | 103 ++++++++++++++++++++++++++++++++++++ examples/ipsec-secgw/event_helper.h | 23 ++++++++ 2 files changed, 126 insertions(+) diff --git a/examples/ipsec-secgw/event_helper.c b/examples/ipsec-secgw/event_helper.c index fe047ab..0854fc2 100644 --- a/examples/ipsec-secgw/event_helper.c +++ b/examples/ipsec-secgw/event_helper.c @@ -1379,6 +1379,109 @@ eh_display_link_conf(struct eventmode_conf *em_conf) EH_LOG_INFO(""); } +struct eh_conf * +eh_conf_init(void) +{ + struct eventmode_conf *em_conf = NULL; + struct eh_conf *conf = NULL; + unsigned int eth_core_id; + void *bitmap = NULL; + uint32_t nb_bytes; + + /* Allocate memory for config */ + conf = calloc(1, sizeof(struct eh_conf)); + if (conf == NULL) { + EH_LOG_ERR("Failed to allocate memory for eventmode helper " + "config"); + return NULL; + } + + /* Set default conf */ + + /* Packet transfer mode: poll */ + conf->mode = EH_PKT_TRANSFER_MODE_POLL; + + /* 
Keep all ethernet ports enabled by default */ + conf->eth_portmask = -1; + + /* Allocate memory for event mode params */ + conf->mode_params = calloc(1, sizeof(struct eventmode_conf)); + if (conf->mode_params == NULL) { + EH_LOG_ERR("Failed to allocate memory for event mode params"); + goto free_conf; + } + + /* Get eventmode conf */ + em_conf = conf->mode_params; + + /* Allocate and initialize bitmap for eth cores */ + nb_bytes = rte_bitmap_get_memory_footprint(RTE_MAX_LCORE); + if (!nb_bytes) { + EH_LOG_ERR("Failed to get bitmap footprint"); + goto free_em_conf; + } + + bitmap = rte_zmalloc("event-helper-ethcore-bitmap", nb_bytes, + RTE_CACHE_LINE_SIZE); + if (!bitmap) { + EH_LOG_ERR("Failed to allocate memory for eth cores bitmap\n"); + goto free_em_conf; + } + + em_conf->eth_core_mask = rte_bitmap_init(RTE_MAX_LCORE, bitmap, + nb_bytes); + if (!em_conf->eth_core_mask) { + EH_LOG_ERR("Failed to initialize bitmap"); + goto free_bitmap; + } + + /* Set schedule type as not set */ + em_conf->ext_params.sched_type = SCHED_TYPE_NOT_SET; + + /* Set two cores as eth cores for Rx & Tx */ + + /* Use first core other than master core as Rx core */ + eth_core_id = rte_get_next_lcore(0, /* curr core */ + 1, /* skip master core */ + 0 /* wrap */); + + rte_bitmap_set(em_conf->eth_core_mask, eth_core_id); + + /* Use next core as Tx core */ + eth_core_id = rte_get_next_lcore(eth_core_id, /* curr core */ + 1, /* skip master core */ + 0 /* wrap */); + + rte_bitmap_set(em_conf->eth_core_mask, eth_core_id); + + return conf; + +free_bitmap: + rte_free(bitmap); +free_em_conf: + free(em_conf); +free_conf: + free(conf); + return NULL; +} + +void +eh_conf_uninit(struct eh_conf *conf) +{ + struct eventmode_conf *em_conf = NULL; + + if (!conf || !conf->mode_params) + return; + + /* Get eventmode conf */ + em_conf = conf->mode_params; + + /* Free evenmode configuration memory */ + rte_free(em_conf->eth_core_mask); + free(em_conf); + free(conf); +} + void eh_display_conf(struct eh_conf 
*conf) { diff --git a/examples/ipsec-secgw/event_helper.h b/examples/ipsec-secgw/event_helper.h index 25c8563..e17cab1 100644 --- a/examples/ipsec-secgw/event_helper.h +++ b/examples/ipsec-secgw/event_helper.h @@ -46,6 +46,9 @@ /* Max adapters that one Tx core can handle */ #define EVENT_MODE_MAX_ADAPTERS_PER_TX_CORE EVENT_MODE_MAX_TX_ADAPTERS +/* Used to indicate that queue schedule type is not set */ +#define SCHED_TYPE_NOT_SET 3 + /** * Packet transfer mode of the application */ @@ -200,6 +203,26 @@ struct eh_app_worker_params { }; /** + * Allocate memory for event helper configuration and initialize + * it with default values. + * + * @return + * - pointer to event helper configuration structure on success. + * - NULL on failure. + */ +struct eh_conf * +eh_conf_init(void); + +/** + * Uninitialize event helper configuration and release its memory. + * + * @param conf + * Event helper configuration + */ +void +eh_conf_uninit(struct eh_conf *conf); + +/** * Initialize event mode devices * * Application can call this function to get the event devices, eth devices -- 2.7.4 ^ permalink raw reply [flat|nested] 147+ messages in thread
* [dpdk-dev] [PATCH v4 10/15] examples/ipsec-secgw: add eventmode to ipsec-secgw 2020-02-20 8:01 ` [dpdk-dev] [PATCH v4 00/15] add eventmode to ipsec-secgw Lukasz Bartosik ` (8 preceding siblings ...) 2020-02-20 8:02 ` [dpdk-dev] [PATCH v4 09/15] examples/ipsec-secgw: add event helper config init/uninit Lukasz Bartosik @ 2020-02-20 8:02 ` Lukasz Bartosik 2020-02-20 8:02 ` [dpdk-dev] [PATCH v4 11/15] examples/ipsec-secgw: add driver mode worker Lukasz Bartosik ` (7 subsequent siblings) 17 siblings, 0 replies; 147+ messages in thread From: Lukasz Bartosik @ 2020-02-20 8:02 UTC (permalink / raw) To: Akhil Goyal, Radu Nicolau, Thomas Monjalon Cc: Jerin Jacob, Narayana Prasad, Ankur Dwivedi, Anoob Joseph, Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru, Konstantin Ananyev, dev Add eventmode support to ipsec-secgw. With the aid of event helper configure and use the eventmode capabilities. Signed-off-by: Anoob Joseph <anoobj@marvell.com> Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> --- examples/ipsec-secgw/event_helper.c | 3 + examples/ipsec-secgw/event_helper.h | 14 ++ examples/ipsec-secgw/ipsec-secgw.c | 258 +++++++++++++++++++++++++++++++++++- examples/ipsec-secgw/ipsec.h | 24 ++++ examples/ipsec-secgw/sa.c | 21 +-- examples/ipsec-secgw/sad.h | 5 - 6 files changed, 301 insertions(+), 24 deletions(-) diff --git a/examples/ipsec-secgw/event_helper.c b/examples/ipsec-secgw/event_helper.c index 0854fc2..076f1f2 100644 --- a/examples/ipsec-secgw/event_helper.c +++ b/examples/ipsec-secgw/event_helper.c @@ -960,6 +960,8 @@ eh_find_worker(uint32_t lcore_id, struct eh_conf *conf, else curr_conf.cap.burst = EH_RX_TYPE_NON_BURST; + curr_conf.cap.ipsec_mode = conf->ipsec_mode; + /* Parse the passed list and see if we have matching capabilities */ /* Initialize the pointer used to traverse the list */ @@ -1400,6 +1402,7 @@ eh_conf_init(void) /* Packet transfer mode: poll */ conf->mode = EH_PKT_TRANSFER_MODE_POLL; + conf->ipsec_mode = EH_IPSEC_MODE_TYPE_APP; /* Keep all 
ethernet ports enabled by default */ conf->eth_portmask = -1; diff --git a/examples/ipsec-secgw/event_helper.h b/examples/ipsec-secgw/event_helper.h index e17cab1..b65b343 100644 --- a/examples/ipsec-secgw/event_helper.h +++ b/examples/ipsec-secgw/event_helper.h @@ -73,6 +73,14 @@ enum eh_tx_types { EH_TX_TYPE_NO_INTERNAL_PORT }; +/** + * Event mode ipsec mode types + */ +enum eh_ipsec_mode_types { + EH_IPSEC_MODE_TYPE_APP = 0, + EH_IPSEC_MODE_TYPE_DRIVER +}; + /* Event dev params */ struct eventdev_params { uint8_t eventdev_id; @@ -182,6 +190,10 @@ struct eh_conf { */ void *mode_params; /**< Mode specific parameters */ + + /** Application specific params */ + enum eh_ipsec_mode_types ipsec_mode; + /**< Mode of ipsec run */ }; /* Workers registered by the application */ @@ -193,6 +205,8 @@ struct eh_app_worker_params { /**< Specify status of rx type burst */ uint64_t tx_internal_port : 1; /**< Specify whether tx internal port is available */ + uint64_t ipsec_mode : 1; + /**< Specify ipsec processing level */ }; uint64_t u64; } cap; diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c index e1ee7c3..82915e2 100644 --- a/examples/ipsec-secgw/ipsec-secgw.c +++ b/examples/ipsec-secgw/ipsec-secgw.c @@ -2,6 +2,7 @@ * Copyright(c) 2016 Intel Corporation */ +#include <stdbool.h> #include <stdio.h> #include <stdlib.h> #include <stdint.h> @@ -14,9 +15,11 @@ #include <sys/queue.h> #include <stdarg.h> #include <errno.h> +#include <signal.h> #include <getopt.h> #include <rte_common.h> +#include <rte_bitmap.h> #include <rte_byteorder.h> #include <rte_log.h> #include <rte_eal.h> @@ -41,13 +44,17 @@ #include <rte_jhash.h> #include <rte_cryptodev.h> #include <rte_security.h> +#include <rte_eventdev.h> #include <rte_ip.h> #include <rte_ip_frag.h> +#include "event_helper.h" #include "ipsec.h" #include "parser.h" #include "sad.h" +volatile bool force_quit; + #define RTE_LOGTYPE_IPSEC RTE_LOGTYPE_USER1 #define MAX_JUMBO_PKT_LEN 9600 @@ -134,12 +141,20 @@ 
struct flow_info flow_info_tbl[RTE_MAX_ETHPORTS]; #define CMD_LINE_OPT_CONFIG "config" #define CMD_LINE_OPT_SINGLE_SA "single-sa" #define CMD_LINE_OPT_CRYPTODEV_MASK "cryptodev_mask" +#define CMD_LINE_OPT_TRANSFER_MODE "transfer-mode" +#define CMD_LINE_OPT_SCHEDULE_TYPE "event-schedule-type" #define CMD_LINE_OPT_RX_OFFLOAD "rxoffload" #define CMD_LINE_OPT_TX_OFFLOAD "txoffload" #define CMD_LINE_OPT_REASSEMBLE "reassemble" #define CMD_LINE_OPT_MTU "mtu" #define CMD_LINE_OPT_FRAG_TTL "frag-ttl" +#define CMD_LINE_ARG_EVENT "event" +#define CMD_LINE_ARG_POLL "poll" +#define CMD_LINE_ARG_ORDERED "ordered" +#define CMD_LINE_ARG_ATOMIC "atomic" +#define CMD_LINE_ARG_PARALLEL "parallel" + enum { /* long options mapped to a short option */ @@ -150,6 +165,8 @@ enum { CMD_LINE_OPT_CONFIG_NUM, CMD_LINE_OPT_SINGLE_SA_NUM, CMD_LINE_OPT_CRYPTODEV_MASK_NUM, + CMD_LINE_OPT_TRANSFER_MODE_NUM, + CMD_LINE_OPT_SCHEDULE_TYPE_NUM, CMD_LINE_OPT_RX_OFFLOAD_NUM, CMD_LINE_OPT_TX_OFFLOAD_NUM, CMD_LINE_OPT_REASSEMBLE_NUM, @@ -161,6 +178,8 @@ static const struct option lgopts[] = { {CMD_LINE_OPT_CONFIG, 1, 0, CMD_LINE_OPT_CONFIG_NUM}, {CMD_LINE_OPT_SINGLE_SA, 1, 0, CMD_LINE_OPT_SINGLE_SA_NUM}, {CMD_LINE_OPT_CRYPTODEV_MASK, 1, 0, CMD_LINE_OPT_CRYPTODEV_MASK_NUM}, + {CMD_LINE_OPT_TRANSFER_MODE, 1, 0, CMD_LINE_OPT_TRANSFER_MODE_NUM}, + {CMD_LINE_OPT_SCHEDULE_TYPE, 1, 0, CMD_LINE_OPT_SCHEDULE_TYPE_NUM}, {CMD_LINE_OPT_RX_OFFLOAD, 1, 0, CMD_LINE_OPT_RX_OFFLOAD_NUM}, {CMD_LINE_OPT_TX_OFFLOAD, 1, 0, CMD_LINE_OPT_TX_OFFLOAD_NUM}, {CMD_LINE_OPT_REASSEMBLE, 1, 0, CMD_LINE_OPT_REASSEMBLE_NUM}, @@ -1292,6 +1311,8 @@ print_usage(const char *prgname) " --config (port,queue,lcore)[,(port,queue,lcore)]" " [--single-sa SAIDX]" " [--cryptodev_mask MASK]" + " [--transfer-mode MODE]" + " [--event-schedule-type TYPE]" " [--" CMD_LINE_OPT_RX_OFFLOAD " RX_OFFLOAD_MASK]" " [--" CMD_LINE_OPT_TX_OFFLOAD " TX_OFFLOAD_MASK]" " [--" CMD_LINE_OPT_REASSEMBLE " REASSEMBLE_TABLE_SIZE]" @@ -1315,6 +1336,14 @@ print_usage(const 
char *prgname) " bypassing the SP\n" " --cryptodev_mask MASK: Hexadecimal bitmask of the crypto\n" " devices to configure\n" + " --transfer-mode MODE\n" + " \"poll\" : Packet transfer via polling (default)\n" + " \"event\" : Packet transfer via event device\n" + " --event-schedule-type TYPE queue schedule type, used only when\n" + " transfer mode is set to event\n" + " \"ordered\" : Ordered (default)\n" + " \"atomic\" : Atomic\n" + " \"parallel\" : Parallel\n" " --" CMD_LINE_OPT_RX_OFFLOAD ": bitmask of the RX HW offload capabilities to enable/use\n" " (DEV_RX_OFFLOAD_*)\n" @@ -1449,8 +1478,45 @@ print_app_sa_prm(const struct app_sa_prm *prm) printf("Frag TTL: %" PRIu64 " ns\n", frag_ttl_ns); } +static int +parse_transfer_mode(struct eh_conf *conf, const char *optarg) +{ + if (!strcmp(CMD_LINE_ARG_POLL, optarg)) + conf->mode = EH_PKT_TRANSFER_MODE_POLL; + else if (!strcmp(CMD_LINE_ARG_EVENT, optarg)) + conf->mode = EH_PKT_TRANSFER_MODE_EVENT; + else { + printf("Unsupported packet transfer mode\n"); + return -EINVAL; + } + + return 0; +} + +static int +parse_schedule_type(struct eh_conf *conf, const char *optarg) +{ + struct eventmode_conf *em_conf = NULL; + + /* Get eventmode conf */ + em_conf = conf->mode_params; + + if (!strcmp(CMD_LINE_ARG_ORDERED, optarg)) + em_conf->ext_params.sched_type = RTE_SCHED_TYPE_ORDERED; + else if (!strcmp(CMD_LINE_ARG_ATOMIC, optarg)) + em_conf->ext_params.sched_type = RTE_SCHED_TYPE_ATOMIC; + else if (!strcmp(CMD_LINE_ARG_PARALLEL, optarg)) + em_conf->ext_params.sched_type = RTE_SCHED_TYPE_PARALLEL; + else { + printf("Unsupported queue schedule type\n"); + return -EINVAL; + } + + return 0; +} + static int32_t -parse_args(int32_t argc, char **argv) +parse_args(int32_t argc, char **argv, struct eh_conf *eh_conf) { int opt; int64_t ret; @@ -1548,6 +1614,7 @@ parse_args(int32_t argc, char **argv) /* else */ single_sa = 1; single_sa_idx = ret; + eh_conf->ipsec_mode = EH_IPSEC_MODE_TYPE_DRIVER; printf("Configured with single SA index 
%u\n", single_sa_idx); break; @@ -1562,6 +1629,25 @@ parse_args(int32_t argc, char **argv) /* else */ enabled_cryptodev_mask = ret; break; + + case CMD_LINE_OPT_TRANSFER_MODE_NUM: + ret = parse_transfer_mode(eh_conf, optarg); + if (ret < 0) { + printf("Invalid packet transfer mode\n"); + print_usage(prgname); + return -1; + } + break; + + case CMD_LINE_OPT_SCHEDULE_TYPE_NUM: + ret = parse_schedule_type(eh_conf, optarg); + if (ret < 0) { + printf("Invalid queue schedule type\n"); + print_usage(prgname); + return -1; + } + break; + case CMD_LINE_OPT_RX_OFFLOAD_NUM: ret = parse_mask(optarg, &dev_rx_offload); if (ret != 0) { @@ -2476,16 +2562,117 @@ create_default_ipsec_flow(uint16_t port_id, uint64_t rx_offloads) port_id); } +static void +signal_handler(int signum) +{ + if (signum == SIGINT || signum == SIGTERM) { + printf("\n\nSignal %d received, preparing to exit...\n", + signum); + force_quit = true; + } +} + +static void +ev_mode_sess_verify(struct ipsec_sa *sa, int nb_sa) +{ + struct rte_ipsec_session *ips; + int32_t i; + + if (!sa || !nb_sa) + return; + + for (i = 0; i < nb_sa; i++) { + ips = ipsec_get_primary_session(&sa[i]); + if (ips->type != RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL) + rte_exit(EXIT_FAILURE, "Event mode supports only " + "inline protocol sessions\n"); + } + +} + +static int32_t +check_eh_conf(struct eh_conf *eh_conf) +{ + struct eventmode_conf *em_conf = NULL; + + if (!eh_conf || !eh_conf->mode_params) + return -EINVAL; + + /* Get eventmode conf */ + em_conf = eh_conf->mode_params; + + if (eh_conf->mode == EH_PKT_TRANSFER_MODE_POLL && + em_conf->ext_params.sched_type != SCHED_TYPE_NOT_SET) { + printf("error: option --event-schedule-type applies only to " + "event mode\n"); + return -EINVAL; + } + + if (eh_conf->mode != EH_PKT_TRANSFER_MODE_EVENT) + return 0; + + /* Set schedule type to ORDERED if it wasn't explicitly set by user */ + if (em_conf->ext_params.sched_type == SCHED_TYPE_NOT_SET) + em_conf->ext_params.sched_type = 
RTE_SCHED_TYPE_ORDERED; + + /* + * Event mode currently supports only inline protocol sessions. + * If there are other types of sessions configured then exit with + * error. + */ + ev_mode_sess_verify(sa_in, nb_sa_in); + ev_mode_sess_verify(sa_out, nb_sa_out); + + return 0; +} + +static void +inline_sessions_free(struct sa_ctx *sa_ctx) +{ + struct rte_ipsec_session *ips; + struct ipsec_sa *sa; + int32_t ret; + uint32_t i; + + if (!sa_ctx) + return; + + for (i = 0; i < sa_ctx->nb_sa; i++) { + + sa = &sa_ctx->sa[i]; + if (!sa->spi) + continue; + + ips = ipsec_get_primary_session(sa); + if (ips->type != RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL && + ips->type != RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO) + continue; + + if (!rte_eth_dev_is_valid_port(sa->portid)) + continue; + + ret = rte_security_session_destroy( + rte_eth_dev_get_sec_ctx(sa->portid), + ips->security.ses); + if (ret) + RTE_LOG(ERR, IPSEC, "Failed to destroy security " + "session type %d, spi %d\n", + ips->type, sa->spi); + } +} + int32_t main(int32_t argc, char **argv) { int32_t ret; uint32_t lcore_id; + uint32_t cdev_id; uint32_t i; uint8_t socket_id; uint16_t portid; uint64_t req_rx_offloads[RTE_MAX_ETHPORTS]; uint64_t req_tx_offloads[RTE_MAX_ETHPORTS]; + struct eh_conf *eh_conf = NULL; size_t sess_sz; /* init EAL */ @@ -2495,8 +2682,17 @@ main(int32_t argc, char **argv) argc -= ret; argv += ret; + force_quit = false; + signal(SIGINT, signal_handler); + signal(SIGTERM, signal_handler); + + /* initialize event helper configuration */ + eh_conf = eh_conf_init(); + if (eh_conf == NULL) + rte_exit(EXIT_FAILURE, "Failed to init event helper config"); + /* parse application arguments (after the EAL ones) */ - ret = parse_args(argc, argv); + ret = parse_args(argc, argv, eh_conf); if (ret < 0) rte_exit(EXIT_FAILURE, "Invalid parameters\n"); @@ -2516,6 +2712,9 @@ main(int32_t argc, char **argv) if (check_params() < 0) rte_exit(EXIT_FAILURE, "check_params failed\n"); + if (check_eh_conf(eh_conf) < 0) + 
rte_exit(EXIT_FAILURE, "check_eh_conf failed\n"); + ret = init_lcore_rx_queues(); if (ret < 0) rte_exit(EXIT_FAILURE, "init_lcore_rx_queues failed\n"); @@ -2555,6 +2754,18 @@ main(int32_t argc, char **argv) cryptodevs_init(); + /* + * Set the enabled port mask in helper config for use by helper + * sub-system. This will be used while initializing devices using + * helper sub-system. + */ + eh_conf->eth_portmask = enabled_port_mask; + + /* Initialize eventmode components */ + ret = eh_devs_init(eh_conf); + if (ret < 0) + rte_exit(EXIT_FAILURE, "eh_devs_init failed, err=%d\n", ret); + /* start ports */ RTE_ETH_FOREACH_DEV(portid) { if ((enabled_port_mask & (1 << portid)) == 0) @@ -2614,5 +2825,48 @@ main(int32_t argc, char **argv) return -1; } + /* Uninitialize eventmode components */ + ret = eh_devs_uninit(eh_conf); + if (ret < 0) + rte_exit(EXIT_FAILURE, "eh_devs_uninit failed, err=%d\n", ret); + + /* Free eventmode configuration memory */ + eh_conf_uninit(eh_conf); + + /* Destroy inline inbound and outbound sessions */ + for (i = 0; i < NB_SOCKETS && i < rte_socket_count(); i++) { + socket_id = rte_socket_id_by_idx(i); + inline_sessions_free(socket_ctx[socket_id].sa_in); + inline_sessions_free(socket_ctx[socket_id].sa_out); + } + + for (cdev_id = 0; cdev_id < rte_cryptodev_count(); cdev_id++) { + printf("Closing cryptodev %d...", cdev_id); + rte_cryptodev_stop(cdev_id); + rte_cryptodev_close(cdev_id); + printf(" Done\n"); + } + + RTE_ETH_FOREACH_DEV(portid) { + if ((enabled_port_mask & (1 << portid)) == 0) + continue; + + printf("Closing port %d...", portid); + if (flow_info_tbl[portid].rx_def_flow) { + struct rte_flow_error err; + + ret = rte_flow_destroy(portid, + flow_info_tbl[portid].rx_def_flow, &err); + if (ret) + RTE_LOG(ERR, IPSEC, "Failed to destroy flow " + " for port %u, err msg: %s\n", portid, + err.message); + } + rte_eth_dev_stop(portid); + rte_eth_dev_close(portid); + printf(" Done\n"); + } + printf("Bye...\n"); + return 0; } diff --git 
a/examples/ipsec-secgw/ipsec.h b/examples/ipsec-secgw/ipsec.h index 8f5d382..ec3d60b 100644 --- a/examples/ipsec-secgw/ipsec.h +++ b/examples/ipsec-secgw/ipsec.h @@ -159,6 +159,24 @@ struct ipsec_sa { struct rte_security_session_conf sess_conf; } __rte_cache_aligned; +struct ipsec_xf { + struct rte_crypto_sym_xform a; + struct rte_crypto_sym_xform b; +}; + +struct ipsec_sad { + struct rte_ipsec_sad *sad_v4; + struct rte_ipsec_sad *sad_v6; +}; + +struct sa_ctx { + void *satbl; /* pointer to array of rte_ipsec_sa objects*/ + struct ipsec_sad sad; + struct ipsec_xf *xf; + uint32_t nb_sa; + struct ipsec_sa sa[]; +}; + struct ipsec_mbuf_metadata { struct ipsec_sa *sa; struct rte_crypto_op cop; @@ -253,6 +271,12 @@ struct ipsec_traffic { struct traffic_type ip6; }; +extern struct ipsec_sa *sa_out; +extern uint32_t nb_sa_out; + +extern struct ipsec_sa *sa_in; +extern uint32_t nb_sa_in; + uint16_t ipsec_inbound(struct ipsec_ctx *ctx, struct rte_mbuf *pkts[], uint16_t nb_pkts, uint16_t len); diff --git a/examples/ipsec-secgw/sa.c b/examples/ipsec-secgw/sa.c index e75b687..29ea141 100644 --- a/examples/ipsec-secgw/sa.c +++ b/examples/ipsec-secgw/sa.c @@ -135,14 +135,14 @@ const struct supported_aead_algo aead_algos[] = { #define SA_INIT_NB 128 -static struct ipsec_sa *sa_out; +struct ipsec_sa *sa_out; +uint32_t nb_sa_out; static uint32_t sa_out_sz; -static uint32_t nb_sa_out; static struct ipsec_sa_cnt sa_out_cnt; -static struct ipsec_sa *sa_in; +struct ipsec_sa *sa_in; +uint32_t nb_sa_in; static uint32_t sa_in_sz; -static uint32_t nb_sa_in; static struct ipsec_sa_cnt sa_in_cnt; static const struct supported_cipher_algo * @@ -826,19 +826,6 @@ print_one_sa_rule(const struct ipsec_sa *sa, int inbound) printf("\n"); } -struct ipsec_xf { - struct rte_crypto_sym_xform a; - struct rte_crypto_sym_xform b; -}; - -struct sa_ctx { - void *satbl; /* pointer to array of rte_ipsec_sa objects*/ - struct ipsec_sad sad; - struct ipsec_xf *xf; - uint32_t nb_sa; - struct ipsec_sa sa[]; -}; - 
static struct sa_ctx * sa_create(const char *name, int32_t socket_id, uint32_t nb_sa) { diff --git a/examples/ipsec-secgw/sad.h b/examples/ipsec-secgw/sad.h index 55712ba..473aaa9 100644 --- a/examples/ipsec-secgw/sad.h +++ b/examples/ipsec-secgw/sad.h @@ -18,11 +18,6 @@ struct ipsec_sad_cache { RTE_DECLARE_PER_LCORE(struct ipsec_sad_cache, sad_cache); -struct ipsec_sad { - struct rte_ipsec_sad *sad_v4; - struct rte_ipsec_sad *sad_v6; -}; - int ipsec_sad_create(const char *name, struct ipsec_sad *sad, int socket_id, struct ipsec_sa_cnt *sa_cnt); -- 2.7.4 ^ permalink raw reply [flat|nested] 147+ messages in thread
* [dpdk-dev] [PATCH v4 11/15] examples/ipsec-secgw: add driver mode worker 2020-02-20 8:01 ` [dpdk-dev] [PATCH v4 00/15] add eventmode to ipsec-secgw Lukasz Bartosik ` (9 preceding siblings ...) 2020-02-20 8:02 ` [dpdk-dev] [PATCH v4 10/15] examples/ipsec-secgw: add eventmode to ipsec-secgw Lukasz Bartosik @ 2020-02-20 8:02 ` Lukasz Bartosik 2020-02-20 8:02 ` [dpdk-dev] [PATCH v4 12/15] examples/ipsec-secgw: add app " Lukasz Bartosik ` (6 subsequent siblings) 17 siblings, 0 replies; 147+ messages in thread From: Lukasz Bartosik @ 2020-02-20 8:02 UTC (permalink / raw) To: Akhil Goyal, Radu Nicolau, Thomas Monjalon Cc: Jerin Jacob, Narayana Prasad, Ankur Dwivedi, Anoob Joseph, Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru, Konstantin Ananyev, dev Add driver inbound and outbound worker threads for ipsec-secgw. In driver mode the application does as little as possible. It simply forwards packets back to the port on which traffic was received, instructing the HW to apply inline security processing using the first outbound SA configured for a given port. If a port does not have an SA configured, outbound traffic on that port will be silently dropped. The aim of this mode is to measure raw HW capabilities. Driver mode is selected with the single-sa option. The single-sa option accepts an SA index; however, in event mode the SA index is ignored. 
Example command to run ipsec-secgw in driver mode: ipsec-secgw -w 0002:02:00.0,ipsec_in_max_spi=128 -w 0002:03:00.0,ipsec_in_max_spi=128 -w 0002:0e:00.0 -w 0002:10:00.1 --log-level=8 -c 0x1 -- -P -p 0x3 -u 0x1 --config "(1,0,0),(0,0,0)" -f aes-gcm.cfg --transfer-mode event --event-schedule-type parallel --single-sa 0 Signed-off-by: Anoob Joseph <anoobj@marvell.com> Signed-off-by: Ankur Dwivedi <adwivedi@marvell.com> Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> --- examples/ipsec-secgw/Makefile | 1 + examples/ipsec-secgw/ipsec-secgw.c | 34 +++--- examples/ipsec-secgw/ipsec-secgw.h | 25 +++++ examples/ipsec-secgw/ipsec.h | 11 ++ examples/ipsec-secgw/ipsec_worker.c | 218 ++++++++++++++++++++++++++++++++++++ examples/ipsec-secgw/meson.build | 2 +- 6 files changed, 272 insertions(+), 19 deletions(-) create mode 100644 examples/ipsec-secgw/ipsec-secgw.h create mode 100644 examples/ipsec-secgw/ipsec_worker.c diff --git a/examples/ipsec-secgw/Makefile b/examples/ipsec-secgw/Makefile index 66d05d4..c4a272a 100644 --- a/examples/ipsec-secgw/Makefile +++ b/examples/ipsec-secgw/Makefile @@ -16,6 +16,7 @@ SRCS-y += sad.c SRCS-y += rt.c SRCS-y += ipsec_process.c SRCS-y += ipsec-secgw.c +SRCS-y += ipsec_worker.c SRCS-y += event_helper.c CFLAGS += -gdwarf-2 diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c index 82915e2..bebda38 100644 --- a/examples/ipsec-secgw/ipsec-secgw.c +++ b/examples/ipsec-secgw/ipsec-secgw.c @@ -71,8 +71,6 @@ volatile bool force_quit; #define BURST_TX_DRAIN_US 100 /* TX drain every ~100us */ -#define NB_SOCKETS 4 - /* Configure how many packets ahead to prefetch, when reading packets */ #define PREFETCH_OFFSET 3 @@ -80,8 +78,6 @@ volatile bool force_quit; #define MAX_LCORE_PARAMS 1024 -#define UNPROTECTED_PORT(port) (unprotected_port_mask & (1 << portid)) - /* * Configurable number of RX/TX ring descriptors */ @@ -188,15 +184,15 @@ static const struct option lgopts[] = { {NULL, 0, 0, 0} }; +uint32_t 
unprotected_port_mask; +uint32_t single_sa_idx; /* mask of enabled ports */ static uint32_t enabled_port_mask; static uint64_t enabled_cryptodev_mask = UINT64_MAX; -static uint32_t unprotected_port_mask; static int32_t promiscuous_on = 1; static int32_t numa_on = 1; /**< NUMA is enabled by default. */ static uint32_t nb_lcores; static uint32_t single_sa; -static uint32_t single_sa_idx; /* * RX/TX HW offload capabilities to enable/use on ethernet ports. @@ -282,7 +278,7 @@ static struct rte_eth_conf port_conf = { }, }; -static struct socket_ctx socket_ctx[NB_SOCKETS]; +struct socket_ctx socket_ctx[NB_SOCKETS]; /* * Determine is multi-segment support required: @@ -1003,12 +999,12 @@ process_pkts(struct lcore_conf *qconf, struct rte_mbuf **pkts, prepare_traffic(pkts, &traffic, nb_pkts); if (unlikely(single_sa)) { - if (UNPROTECTED_PORT(portid)) + if (is_unprotected_port(portid)) process_pkts_inbound_nosp(&qconf->inbound, &traffic); else process_pkts_outbound_nosp(&qconf->outbound, &traffic); } else { - if (UNPROTECTED_PORT(portid)) + if (is_unprotected_port(portid)) process_pkts_inbound(&qconf->inbound, &traffic); else process_pkts_outbound(&qconf->outbound, &traffic); @@ -1119,8 +1115,8 @@ drain_outbound_crypto_queues(const struct lcore_conf *qconf, } /* main processing loop */ -static int32_t -main_loop(__attribute__((unused)) void *dummy) +void +ipsec_poll_mode_worker(void) { struct rte_mbuf *pkts[MAX_PKT_BURST]; uint32_t lcore_id; @@ -1164,13 +1160,13 @@ main_loop(__attribute__((unused)) void *dummy) RTE_LOG(ERR, IPSEC, "SAD cache init on lcore %u, failed with code: %d\n", lcore_id, rc); - return rc; + return; } if (qconf->nb_rx_queue == 0) { RTE_LOG(DEBUG, IPSEC, "lcore %u has nothing to do\n", lcore_id); - return 0; + return; } RTE_LOG(INFO, IPSEC, "entering main loop on lcore %u\n", lcore_id); @@ -1183,7 +1179,7 @@ main_loop(__attribute__((unused)) void *dummy) lcore_id, portid, queueid); } - while (1) { + while (!force_quit) { cur_tsc = rte_rdtsc(); /* TX 
queue buffer drain */ @@ -1207,7 +1203,7 @@ main_loop(__attribute__((unused)) void *dummy) process_pkts(qconf, pkts, nb_rx, portid); /* dequeue and process completed crypto-ops */ - if (UNPROTECTED_PORT(portid)) + if (is_unprotected_port(portid)) drain_inbound_crypto_queues(qconf, &qconf->inbound); else @@ -1332,8 +1328,10 @@ print_usage(const char *prgname) " zero value disables the cache (default value: 128)\n" " -f CONFIG_FILE: Configuration file\n" " --config (port,queue,lcore): Rx queue configuration\n" - " --single-sa SAIDX: Use single SA index for outbound traffic,\n" - " bypassing the SP\n" + " --single-sa SAIDX: In poll mode use single SA index for\n" + " outbound traffic, bypassing the SP\n" + " In event mode selects driver submode,\n" + " SA index value is ignored\n" " --cryptodev_mask MASK: Hexadecimal bitmask of the crypto\n" " devices to configure\n" " --transfer-mode MODE\n" @@ -2819,7 +2817,7 @@ main(int32_t argc, char **argv) check_all_ports_link_status(enabled_port_mask); /* launch per-lcore init on every lcore */ - rte_eal_mp_remote_launch(main_loop, NULL, CALL_MASTER); + rte_eal_mp_remote_launch(ipsec_launch_one_lcore, eh_conf, CALL_MASTER); RTE_LCORE_FOREACH_SLAVE(lcore_id) { if (rte_eal_wait_lcore(lcore_id) < 0) return -1; diff --git a/examples/ipsec-secgw/ipsec-secgw.h b/examples/ipsec-secgw/ipsec-secgw.h new file mode 100644 index 0000000..a07a920 --- /dev/null +++ b/examples/ipsec-secgw/ipsec-secgw.h @@ -0,0 +1,25 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C) 2020 Marvell International Ltd. 
+ */ +#ifndef _IPSEC_SECGW_H_ +#define _IPSEC_SECGW_H_ + +#include <stdbool.h> + +#define NB_SOCKETS 4 + +/* Port mask to identify the unprotected ports */ +extern uint32_t unprotected_port_mask; + +/* Index of SA in single mode */ +extern uint32_t single_sa_idx; + +extern volatile bool force_quit; + +static inline uint8_t +is_unprotected_port(uint16_t port_id) +{ + return unprotected_port_mask & (1 << port_id); +} + +#endif /* _IPSEC_SECGW_H_ */ diff --git a/examples/ipsec-secgw/ipsec.h b/examples/ipsec-secgw/ipsec.h index ec3d60b..ad913bf 100644 --- a/examples/ipsec-secgw/ipsec.h +++ b/examples/ipsec-secgw/ipsec.h @@ -13,6 +13,8 @@ #include <rte_flow.h> #include <rte_ipsec.h> +#include "ipsec-secgw.h" + #define RTE_LOGTYPE_IPSEC RTE_LOGTYPE_USER1 #define RTE_LOGTYPE_IPSEC_ESP RTE_LOGTYPE_USER2 #define RTE_LOGTYPE_IPSEC_IPIP RTE_LOGTYPE_USER3 @@ -271,6 +273,15 @@ struct ipsec_traffic { struct traffic_type ip6; }; +/* Socket ctx */ +extern struct socket_ctx socket_ctx[NB_SOCKETS]; + +void +ipsec_poll_mode_worker(void); + +int +ipsec_launch_one_lcore(void *args); + extern struct ipsec_sa *sa_out; extern uint32_t nb_sa_out; diff --git a/examples/ipsec-secgw/ipsec_worker.c b/examples/ipsec-secgw/ipsec_worker.c new file mode 100644 index 0000000..b7a1ef9 --- /dev/null +++ b/examples/ipsec-secgw/ipsec_worker.c @@ -0,0 +1,218 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2010-2016 Intel Corporation + * Copyright (C) 2020 Marvell International Ltd. 
+ */ +#include <rte_event_eth_tx_adapter.h> + +#include "event_helper.h" +#include "ipsec.h" +#include "ipsec-secgw.h" + +static inline void +ipsec_event_pre_forward(struct rte_mbuf *m, unsigned int port_id) +{ + /* Save the destination port in the mbuf */ + m->port = port_id; + + /* Save eth queue for Tx */ + rte_event_eth_tx_adapter_txq_set(m, 0); +} + +static inline void +prepare_out_sessions_tbl(struct sa_ctx *sa_out, + struct rte_security_session **sess_tbl, uint16_t size) +{ + struct rte_ipsec_session *pri_sess; + struct ipsec_sa *sa; + uint32_t i; + + if (!sa_out) + return; + + for (i = 0; i < sa_out->nb_sa; i++) { + + sa = &sa_out->sa[i]; + if (!sa) + continue; + + pri_sess = ipsec_get_primary_session(sa); + if (!pri_sess) + continue; + + if (pri_sess->type != + RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL) { + + RTE_LOG(ERR, IPSEC, "Invalid session type %d\n", + pri_sess->type); + continue; + } + + if (sa->portid >= size) { + RTE_LOG(ERR, IPSEC, + "Port id >= than table size %d, %d\n", + sa->portid, size); + continue; + } + + /* Use only first inline session found for a given port */ + if (sess_tbl[sa->portid]) + continue; + sess_tbl[sa->portid] = pri_sess->security.ses; + } +} + +/* + * Event mode exposes various operating modes depending on the + * capabilities of the event device and the operating mode + * selected. 
+ */ + +/* Workers registered */ +#define IPSEC_EVENTMODE_WORKERS 1 + +/* + * Event mode worker + * Operating parameters : non-burst - Tx internal port - driver mode + */ +static void +ipsec_wrkr_non_burst_int_port_drv_mode(struct eh_event_link_info *links, + uint8_t nb_links) +{ + struct rte_security_session *sess_tbl[RTE_MAX_ETHPORTS] = { NULL }; + unsigned int nb_rx = 0; + struct rte_mbuf *pkt; + struct rte_event ev; + uint32_t lcore_id; + int32_t socket_id; + int16_t port_id; + + /* Check if we have links registered for this lcore */ + if (nb_links == 0) { + /* No links registered - exit */ + return; + } + + /* Get core ID */ + lcore_id = rte_lcore_id(); + + /* Get socket ID */ + socket_id = rte_lcore_to_socket_id(lcore_id); + + /* + * Prepare security sessions table. In outbound driver mode + * we always use first session configured for a given port + */ + prepare_out_sessions_tbl(socket_ctx[socket_id].sa_out, sess_tbl, + RTE_MAX_ETHPORTS); + + RTE_LOG(INFO, IPSEC, + "Launching event mode worker (non-burst - Tx internal port - " + "driver mode) on lcore %d\n", lcore_id); + + /* We have valid links */ + + /* Check if it's single link */ + if (nb_links != 1) { + RTE_LOG(INFO, IPSEC, + "Multiple links not supported. 
Using first link\n"); + } + + RTE_LOG(INFO, IPSEC, " -- lcoreid=%u event_port_id=%u\n", lcore_id, + links[0].event_port_id); + while (!force_quit) { + /* Read packet from event queues */ + nb_rx = rte_event_dequeue_burst(links[0].eventdev_id, + links[0].event_port_id, + &ev, /* events */ + 1, /* nb_events */ + 0 /* timeout_ticks */); + + if (nb_rx == 0) + continue; + + pkt = ev.mbuf; + port_id = pkt->port; + + rte_prefetch0(rte_pktmbuf_mtod(pkt, void *)); + + /* Process packet */ + ipsec_event_pre_forward(pkt, port_id); + + if (!is_unprotected_port(port_id)) { + + if (unlikely(!sess_tbl[port_id])) { + rte_pktmbuf_free(pkt); + continue; + } + + /* Save security session */ + pkt->udata64 = (uint64_t) sess_tbl[port_id]; + + /* Mark the packet for Tx security offload */ + pkt->ol_flags |= PKT_TX_SEC_OFFLOAD; + } + + /* + * Since tx internal port is available, events can be + * directly enqueued to the adapter and it would be + * internally submitted to the eth device. + */ + rte_event_eth_tx_adapter_enqueue(links[0].eventdev_id, + links[0].event_port_id, + &ev, /* events */ + 1, /* nb_events */ + 0 /* flags */); + } +} + +static uint8_t +ipsec_eventmode_populate_wrkr_params(struct eh_app_worker_params *wrkrs) +{ + struct eh_app_worker_params *wrkr; + uint8_t nb_wrkr_param = 0; + + /* Save workers */ + wrkr = wrkrs; + + /* Non-burst - Tx internal port - driver mode */ + wrkr->cap.burst = EH_RX_TYPE_NON_BURST; + wrkr->cap.tx_internal_port = EH_TX_TYPE_INTERNAL_PORT; + wrkr->cap.ipsec_mode = EH_IPSEC_MODE_TYPE_DRIVER; + wrkr->worker_thread = ipsec_wrkr_non_burst_int_port_drv_mode; + wrkr++; + + return nb_wrkr_param; +} + +static void +ipsec_eventmode_worker(struct eh_conf *conf) +{ + struct eh_app_worker_params ipsec_wrkr[IPSEC_EVENTMODE_WORKERS] = { + {{{0} }, NULL } }; + uint8_t nb_wrkr_param; + + /* Populate l2fwd_wrkr params */ + nb_wrkr_param = ipsec_eventmode_populate_wrkr_params(ipsec_wrkr); + + /* + * Launch correct worker after checking + * the event device's 
capabilities. + */ + eh_launch_worker(conf, ipsec_wrkr, nb_wrkr_param); +} + +int ipsec_launch_one_lcore(void *args) +{ + struct eh_conf *conf; + + conf = (struct eh_conf *)args; + + if (conf->mode == EH_PKT_TRANSFER_MODE_POLL) { + /* Run in poll mode */ + ipsec_poll_mode_worker(); + } else if (conf->mode == EH_PKT_TRANSFER_MODE_EVENT) { + /* Run in event mode */ + ipsec_eventmode_worker(conf); + } + return 0; +} diff --git a/examples/ipsec-secgw/meson.build b/examples/ipsec-secgw/meson.build index 2415d47..f9ba2a2 100644 --- a/examples/ipsec-secgw/meson.build +++ b/examples/ipsec-secgw/meson.build @@ -10,5 +10,5 @@ deps += ['security', 'lpm', 'acl', 'hash', 'ip_frag', 'ipsec', 'eventdev'] allow_experimental_apis = true sources = files( 'esp.c', 'event_helper.c', 'ipsec.c', 'ipsec_process.c', 'ipsec-secgw.c', - 'parser.c', 'rt.c', 'sa.c', 'sad.c', 'sp4.c', 'sp6.c' + 'ipsec_worker.c', 'parser.c', 'rt.c', 'sa.c', 'sad.c', 'sp4.c', 'sp6.c' ) -- 2.7.4 ^ permalink raw reply [flat|nested] 147+ messages in thread
* [dpdk-dev] [PATCH v4 12/15] examples/ipsec-secgw: add app mode worker 2020-02-20 8:01 ` [dpdk-dev] [PATCH v4 00/15] add eventmode to ipsec-secgw Lukasz Bartosik ` (10 preceding siblings ...) 2020-02-20 8:02 ` [dpdk-dev] [PATCH v4 11/15] examples/ipsec-secgw: add driver mode worker Lukasz Bartosik @ 2020-02-20 8:02 ` Lukasz Bartosik 2020-02-24 14:13 ` Akhil Goyal 2020-02-20 8:02 ` [dpdk-dev] [PATCH v4 13/15] examples/ipsec-secgw: make number of buffers dynamic Lukasz Bartosik ` (5 subsequent siblings) 17 siblings, 1 reply; 147+ messages in thread From: Lukasz Bartosik @ 2020-02-20 8:02 UTC (permalink / raw) To: Akhil Goyal, Radu Nicolau, Thomas Monjalon Cc: Jerin Jacob, Narayana Prasad, Ankur Dwivedi, Anoob Joseph, Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru, Konstantin Ananyev, dev Add application inbound/outbound worker thread and IPsec application processing code for event mode. Example ipsec-secgw command in app mode: ipsec-secgw -w 0002:02:00.0,ipsec_in_max_spi=128 -w 0002:03:00.0,ipsec_in_max_spi=128 -w 0002:0e:00.0 -w 0002:10:00.1 --log-level=8 -c 0x1 -- -P -p 0x3 -u 0x1 --config "(1,0,0),(0,0,0)" -f aes-gcm.cfg --transfer-mode event --event-schedule-type parallel Signed-off-by: Anoob Joseph <anoobj@marvell.com> Signed-off-by: Ankur Dwivedi <adwivedi@marvell.com> Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> --- examples/ipsec-secgw/ipsec-secgw.c | 31 +-- examples/ipsec-secgw/ipsec-secgw.h | 63 ++++++ examples/ipsec-secgw/ipsec.h | 16 -- examples/ipsec-secgw/ipsec_worker.c | 424 +++++++++++++++++++++++++++++++++++- examples/ipsec-secgw/ipsec_worker.h | 35 +++ 5 files changed, 521 insertions(+), 48 deletions(-) create mode 100644 examples/ipsec-secgw/ipsec_worker.h diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c index bebda38..c98620e 100644 --- a/examples/ipsec-secgw/ipsec-secgw.c +++ b/examples/ipsec-secgw/ipsec-secgw.c @@ -50,13 +50,12 @@ #include "event_helper.h" #include "ipsec.h" +#include 
"ipsec_worker.h" #include "parser.h" #include "sad.h" volatile bool force_quit; -#define RTE_LOGTYPE_IPSEC RTE_LOGTYPE_USER1 - #define MAX_JUMBO_PKT_LEN 9600 #define MEMPOOL_CACHE_SIZE 256 @@ -86,29 +85,6 @@ volatile bool force_quit; static uint16_t nb_rxd = IPSEC_SECGW_RX_DESC_DEFAULT; static uint16_t nb_txd = IPSEC_SECGW_TX_DESC_DEFAULT; -#if RTE_BYTE_ORDER != RTE_LITTLE_ENDIAN -#define __BYTES_TO_UINT64(a, b, c, d, e, f, g, h) \ - (((uint64_t)((a) & 0xff) << 56) | \ - ((uint64_t)((b) & 0xff) << 48) | \ - ((uint64_t)((c) & 0xff) << 40) | \ - ((uint64_t)((d) & 0xff) << 32) | \ - ((uint64_t)((e) & 0xff) << 24) | \ - ((uint64_t)((f) & 0xff) << 16) | \ - ((uint64_t)((g) & 0xff) << 8) | \ - ((uint64_t)(h) & 0xff)) -#else -#define __BYTES_TO_UINT64(a, b, c, d, e, f, g, h) \ - (((uint64_t)((h) & 0xff) << 56) | \ - ((uint64_t)((g) & 0xff) << 48) | \ - ((uint64_t)((f) & 0xff) << 40) | \ - ((uint64_t)((e) & 0xff) << 32) | \ - ((uint64_t)((d) & 0xff) << 24) | \ - ((uint64_t)((c) & 0xff) << 16) | \ - ((uint64_t)((b) & 0xff) << 8) | \ - ((uint64_t)(a) & 0xff)) -#endif -#define ETHADDR(a, b, c, d, e, f) (__BYTES_TO_UINT64(a, b, c, d, e, f, 0, 0)) - #define ETHADDR_TO_UINT64(addr) __BYTES_TO_UINT64( \ (addr)->addr_bytes[0], (addr)->addr_bytes[1], \ (addr)->addr_bytes[2], (addr)->addr_bytes[3], \ @@ -120,11 +96,6 @@ static uint16_t nb_txd = IPSEC_SECGW_TX_DESC_DEFAULT; #define MTU_TO_FRAMELEN(x) ((x) + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN) -/* port/source ethernet addr and destination ethernet addr */ -struct ethaddr_info { - uint64_t src, dst; -}; - struct ethaddr_info ethaddr_tbl[RTE_MAX_ETHPORTS] = { { 0, ETHADDR(0x00, 0x16, 0x3e, 0x7e, 0x94, 0x9a) }, { 0, ETHADDR(0x00, 0x16, 0x3e, 0x22, 0xa1, 0xd9) }, diff --git a/examples/ipsec-secgw/ipsec-secgw.h b/examples/ipsec-secgw/ipsec-secgw.h index a07a920..4b53cb5 100644 --- a/examples/ipsec-secgw/ipsec-secgw.h +++ b/examples/ipsec-secgw/ipsec-secgw.h @@ -8,6 +8,69 @@ #define NB_SOCKETS 4 +#define MAX_PKT_BURST 32 + +#define 
RTE_LOGTYPE_IPSEC RTE_LOGTYPE_USER1 + +#if RTE_BYTE_ORDER != RTE_LITTLE_ENDIAN +#define __BYTES_TO_UINT64(a, b, c, d, e, f, g, h) \ + (((uint64_t)((a) & 0xff) << 56) | \ + ((uint64_t)((b) & 0xff) << 48) | \ + ((uint64_t)((c) & 0xff) << 40) | \ + ((uint64_t)((d) & 0xff) << 32) | \ + ((uint64_t)((e) & 0xff) << 24) | \ + ((uint64_t)((f) & 0xff) << 16) | \ + ((uint64_t)((g) & 0xff) << 8) | \ + ((uint64_t)(h) & 0xff)) +#else +#define __BYTES_TO_UINT64(a, b, c, d, e, f, g, h) \ + (((uint64_t)((h) & 0xff) << 56) | \ + ((uint64_t)((g) & 0xff) << 48) | \ + ((uint64_t)((f) & 0xff) << 40) | \ + ((uint64_t)((e) & 0xff) << 32) | \ + ((uint64_t)((d) & 0xff) << 24) | \ + ((uint64_t)((c) & 0xff) << 16) | \ + ((uint64_t)((b) & 0xff) << 8) | \ + ((uint64_t)(a) & 0xff)) +#endif + +#define ETHADDR(a, b, c, d, e, f) (__BYTES_TO_UINT64(a, b, c, d, e, f, 0, 0)) + +struct traffic_type { + const uint8_t *data[MAX_PKT_BURST * 2]; + struct rte_mbuf *pkts[MAX_PKT_BURST * 2]; + void *saptr[MAX_PKT_BURST * 2]; + uint32_t res[MAX_PKT_BURST * 2]; + uint32_t num; +}; + +struct ipsec_traffic { + struct traffic_type ipsec; + struct traffic_type ip4; + struct traffic_type ip6; +}; + +/* Fields optimized for devices without burst */ +struct traffic_type_nb { + const uint8_t *data; + struct rte_mbuf *pkt; + uint32_t res; + uint32_t num; +}; + +struct ipsec_traffic_nb { + struct traffic_type_nb ipsec; + struct traffic_type_nb ip4; + struct traffic_type_nb ip6; +}; + +/* port/source ethernet addr and destination ethernet addr */ +struct ethaddr_info { + uint64_t src, dst; +}; + +extern struct ethaddr_info ethaddr_tbl[RTE_MAX_ETHPORTS]; + /* Port mask to identify the unprotected ports */ extern uint32_t unprotected_port_mask; diff --git a/examples/ipsec-secgw/ipsec.h b/examples/ipsec-secgw/ipsec.h index ad913bf..f8f29f9 100644 --- a/examples/ipsec-secgw/ipsec.h +++ b/examples/ipsec-secgw/ipsec.h @@ -15,11 +15,9 @@ #include "ipsec-secgw.h" -#define RTE_LOGTYPE_IPSEC RTE_LOGTYPE_USER1 #define 
RTE_LOGTYPE_IPSEC_ESP RTE_LOGTYPE_USER2 #define RTE_LOGTYPE_IPSEC_IPIP RTE_LOGTYPE_USER3 -#define MAX_PKT_BURST 32 #define MAX_INFLIGHT 128 #define MAX_QP_PER_LCORE 256 @@ -259,20 +257,6 @@ struct cnt_blk { uint32_t cnt; } __attribute__((packed)); -struct traffic_type { - const uint8_t *data[MAX_PKT_BURST * 2]; - struct rte_mbuf *pkts[MAX_PKT_BURST * 2]; - void *saptr[MAX_PKT_BURST * 2]; - uint32_t res[MAX_PKT_BURST * 2]; - uint32_t num; -}; - -struct ipsec_traffic { - struct traffic_type ipsec; - struct traffic_type ip4; - struct traffic_type ip6; -}; - /* Socket ctx */ extern struct socket_ctx socket_ctx[NB_SOCKETS]; diff --git a/examples/ipsec-secgw/ipsec_worker.c b/examples/ipsec-secgw/ipsec_worker.c index b7a1ef9..6313c98 100644 --- a/examples/ipsec-secgw/ipsec_worker.c +++ b/examples/ipsec-secgw/ipsec_worker.c @@ -2,11 +2,51 @@ * Copyright(c) 2010-2016 Intel Corporation * Copyright (C) 2020 Marvell International Ltd. */ +#include <rte_acl.h> #include <rte_event_eth_tx_adapter.h> +#include <rte_lpm.h> +#include <rte_lpm6.h> #include "event_helper.h" #include "ipsec.h" #include "ipsec-secgw.h" +#include "ipsec_worker.h" + +static inline enum pkt_type +process_ipsec_get_pkt_type(struct rte_mbuf *pkt, uint8_t **nlp) +{ + struct rte_ether_hdr *eth; + + eth = rte_pktmbuf_mtod(pkt, struct rte_ether_hdr *); + if (eth->ether_type == rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4)) { + *nlp = RTE_PTR_ADD(eth, RTE_ETHER_HDR_LEN + + offsetof(struct ip, ip_p)); + if (**nlp == IPPROTO_ESP) + return PKT_TYPE_IPSEC_IPV4; + else + return PKT_TYPE_PLAIN_IPV4; + } else if (eth->ether_type == rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6)) { + *nlp = RTE_PTR_ADD(eth, RTE_ETHER_HDR_LEN + + offsetof(struct ip6_hdr, ip6_nxt)); + if (**nlp == IPPROTO_ESP) + return PKT_TYPE_IPSEC_IPV6; + else + return PKT_TYPE_PLAIN_IPV6; + } + + /* Unknown/Unsupported type */ + return PKT_TYPE_INVALID; +} + +static inline void +update_mac_addrs(struct rte_mbuf *pkt, uint16_t portid) +{ + struct rte_ether_hdr *ethhdr; 
+ + ethhdr = rte_pktmbuf_mtod(pkt, struct rte_ether_hdr *); + memcpy(&ethhdr->s_addr, &ethaddr_tbl[portid].src, RTE_ETHER_ADDR_LEN); + memcpy(&ethhdr->d_addr, &ethaddr_tbl[portid].dst, RTE_ETHER_ADDR_LEN); +} static inline void ipsec_event_pre_forward(struct rte_mbuf *m, unsigned int port_id) @@ -61,6 +101,290 @@ prepare_out_sessions_tbl(struct sa_ctx *sa_out, } } +static inline int +check_sp(struct sp_ctx *sp, const uint8_t *nlp, uint32_t *sa_idx) +{ + uint32_t res; + + if (unlikely(sp == NULL)) + return 0; + + rte_acl_classify((struct rte_acl_ctx *)sp, &nlp, &res, 1, + DEFAULT_MAX_CATEGORIES); + + if (unlikely(res == 0)) { + /* No match */ + return 0; + } + + if (res == DISCARD) + return 0; + else if (res == BYPASS) { + *sa_idx = -1; + return 1; + } + + *sa_idx = res - 1; + return 1; +} + +static inline uint16_t +route4_pkt(struct rte_mbuf *pkt, struct rt_ctx *rt_ctx) +{ + uint32_t dst_ip; + uint16_t offset; + uint32_t hop; + int ret; + + offset = RTE_ETHER_HDR_LEN + offsetof(struct ip, ip_dst); + dst_ip = *rte_pktmbuf_mtod_offset(pkt, uint32_t *, offset); + dst_ip = rte_be_to_cpu_32(dst_ip); + + ret = rte_lpm_lookup((struct rte_lpm *)rt_ctx, dst_ip, &hop); + + if (ret == 0) { + /* We have a hit */ + return hop; + } + + /* else */ + return RTE_MAX_ETHPORTS; +} + +/* TODO: To be tested */ +static inline uint16_t +route6_pkt(struct rte_mbuf *pkt, struct rt_ctx *rt_ctx) +{ + uint8_t dst_ip[16]; + uint8_t *ip6_dst; + uint16_t offset; + uint32_t hop; + int ret; + + offset = RTE_ETHER_HDR_LEN + offsetof(struct ip6_hdr, ip6_dst); + ip6_dst = rte_pktmbuf_mtod_offset(pkt, uint8_t *, offset); + memcpy(&dst_ip[0], ip6_dst, 16); + + ret = rte_lpm6_lookup((struct rte_lpm6 *)rt_ctx, dst_ip, &hop); + + if (ret == 0) { + /* We have a hit */ + return hop; + } + + /* else */ + return RTE_MAX_ETHPORTS; +} + +static inline uint16_t +get_route(struct rte_mbuf *pkt, struct route_table *rt, enum pkt_type type) +{ + if (type == PKT_TYPE_PLAIN_IPV4 || type == PKT_TYPE_IPSEC_IPV4) + return
route4_pkt(pkt, rt->rt4_ctx); + else if (type == PKT_TYPE_PLAIN_IPV6 || type == PKT_TYPE_IPSEC_IPV6) + return route6_pkt(pkt, rt->rt6_ctx); + + return RTE_MAX_ETHPORTS; +} + +static inline int +process_ipsec_ev_inbound(struct ipsec_ctx *ctx, struct route_table *rt, + struct rte_event *ev) +{ + struct ipsec_sa *sa = NULL; + struct rte_mbuf *pkt; + uint16_t port_id = 0; + enum pkt_type type; + uint32_t sa_idx; + uint8_t *nlp; + + /* Get pkt from event */ + pkt = ev->mbuf; + + /* Check the packet type */ + type = process_ipsec_get_pkt_type(pkt, &nlp); + + switch (type) { + case PKT_TYPE_PLAIN_IPV4: + if (pkt->ol_flags & PKT_RX_SEC_OFFLOAD) { + if (unlikely(pkt->ol_flags & + PKT_RX_SEC_OFFLOAD_FAILED)) { + RTE_LOG(ERR, IPSEC, + "Inbound security offload failed\n"); + goto drop_pkt_and_exit; + } + sa = pkt->userdata; + } + + /* Check if we have a match */ + if (check_sp(ctx->sp4_ctx, nlp, &sa_idx) == 0) { + /* No valid match */ + goto drop_pkt_and_exit; + } + break; + + case PKT_TYPE_PLAIN_IPV6: + if (pkt->ol_flags & PKT_RX_SEC_OFFLOAD) { + if (unlikely(pkt->ol_flags & + PKT_RX_SEC_OFFLOAD_FAILED)) { + RTE_LOG(ERR, IPSEC, + "Inbound security offload failed\n"); + goto drop_pkt_and_exit; + } + sa = pkt->userdata; + } + + /* Check if we have a match */ + if (check_sp(ctx->sp6_ctx, nlp, &sa_idx) == 0) { + /* No valid match */ + goto drop_pkt_and_exit; + } + break; + + default: + RTE_LOG(ERR, IPSEC, "Unsupported packet type = %d\n", type); + goto drop_pkt_and_exit; + } + + /* Check if the packet has to be bypassed */ + if (sa_idx == BYPASS) + goto route_and_send_pkt; + + /* Validate sa_idx */ + if (sa_idx >= ctx->sa_ctx->nb_sa) + goto drop_pkt_and_exit; + + /* Else the packet has to be protected with SA */ + + /* If the packet was IPsec processed, then SA pointer should be set */ + if (sa == NULL) + goto drop_pkt_and_exit; + + /* SPI on the packet should match with the one in SA */ + if (unlikely(sa->spi != ctx->sa_ctx->sa[sa_idx].spi)) + goto drop_pkt_and_exit; + 
+route_and_send_pkt: + port_id = get_route(pkt, rt, type); + if (unlikely(port_id == RTE_MAX_ETHPORTS)) { + /* no match */ + goto drop_pkt_and_exit; + } + /* else, we have a matching route */ + + /* Update mac addresses */ + update_mac_addrs(pkt, port_id); + + /* Update the event with the dest port */ + ipsec_event_pre_forward(pkt, port_id); + return 1; + +drop_pkt_and_exit: + RTE_LOG(ERR, IPSEC, "Inbound packet dropped\n"); + rte_pktmbuf_free(pkt); + ev->mbuf = NULL; + return 0; +} + +static inline int +process_ipsec_ev_outbound(struct ipsec_ctx *ctx, struct route_table *rt, + struct rte_event *ev) +{ + struct rte_ipsec_session *sess; + struct sa_ctx *sa_ctx; + struct rte_mbuf *pkt; + uint16_t port_id = 0; + struct ipsec_sa *sa; + enum pkt_type type; + uint32_t sa_idx; + uint8_t *nlp; + + /* Get pkt from event */ + pkt = ev->mbuf; + + /* Check the packet type */ + type = process_ipsec_get_pkt_type(pkt, &nlp); + + switch (type) { + case PKT_TYPE_PLAIN_IPV4: + /* Check if we have a match */ + if (check_sp(ctx->sp4_ctx, nlp, &sa_idx) == 0) { + /* No valid match */ + goto drop_pkt_and_exit; + } + break; + case PKT_TYPE_PLAIN_IPV6: + /* Check if we have a match */ + if (check_sp(ctx->sp6_ctx, nlp, &sa_idx) == 0) { + /* No valid match */ + goto drop_pkt_and_exit; + } + break; + default: + /* + * Only plain IPv4 & IPv6 packets are allowed + * on protected port. Drop the rest. 
+ */ + RTE_LOG(ERR, IPSEC, "Unsupported packet type = %d\n", type); + goto drop_pkt_and_exit; + } + + /* Check if the packet has to be bypassed */ + if (sa_idx == BYPASS) { + port_id = get_route(pkt, rt, type); + if (unlikely(port_id == RTE_MAX_ETHPORTS)) { + /* no match */ + goto drop_pkt_and_exit; + } + /* else, we have a matching route */ + goto send_pkt; + } + + /* Validate sa_idx */ + if (sa_idx >= ctx->sa_ctx->nb_sa) + goto drop_pkt_and_exit; + + /* Else the packet has to be protected */ + + /* Get SA ctx*/ + sa_ctx = ctx->sa_ctx; + + /* Get SA */ + sa = &(sa_ctx->sa[sa_idx]); + + /* Get IPsec session */ + sess = ipsec_get_primary_session(sa); + + /* Allow only inline protocol for now */ + if (sess->type != RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL) { + RTE_LOG(ERR, IPSEC, "SA type not supported\n"); + goto drop_pkt_and_exit; + } + + if (sess->security.ol_flags & RTE_SECURITY_TX_OLOAD_NEED_MDATA) + pkt->userdata = sess->security.ses; + + /* Mark the packet for Tx security offload */ + pkt->ol_flags |= PKT_TX_SEC_OFFLOAD; + + /* Get the port to which this pkt need to be submitted */ + port_id = sa->portid; + +send_pkt: + /* Update mac addresses */ + update_mac_addrs(pkt, port_id); + + /* Update the event with the dest port */ + ipsec_event_pre_forward(pkt, port_id); + return 1; + +drop_pkt_and_exit: + RTE_LOG(ERR, IPSEC, "Outbound packet dropped\n"); + rte_pktmbuf_free(pkt); + ev->mbuf = NULL; + return 0; +} + /* * Event mode exposes various operating modes depending on the * capabilities of the event device and the operating mode @@ -68,7 +392,7 @@ prepare_out_sessions_tbl(struct sa_ctx *sa_out, */ /* Workers registered */ -#define IPSEC_EVENTMODE_WORKERS 1 +#define IPSEC_EVENTMODE_WORKERS 2 /* * Event mode worker @@ -146,7 +470,7 @@ ipsec_wrkr_non_burst_int_port_drv_mode(struct eh_event_link_info *links, } /* Save security session */ - pkt->udata64 = (uint64_t) sess_tbl[port_id]; + pkt->userdata = sess_tbl[port_id]; /* Mark the packet for Tx security offload 
*/ pkt->ol_flags |= PKT_TX_SEC_OFFLOAD; @@ -165,6 +489,94 @@ ipsec_wrkr_non_burst_int_port_drv_mode(struct eh_event_link_info *links, } } +/* + * Event mode worker + * Operating parameters : non-burst - Tx internal port - app mode + */ +static void +ipsec_wrkr_non_burst_int_port_app_mode(struct eh_event_link_info *links, + uint8_t nb_links) +{ + struct lcore_conf_ev_tx_int_port_wrkr lconf; + unsigned int nb_rx = 0; + struct rte_event ev; + uint32_t lcore_id; + int32_t socket_id; + int ret; + + /* Check if we have links registered for this lcore */ + if (nb_links == 0) { + /* No links registered - exit */ + return; + } + + /* We have valid links */ + + /* Get core ID */ + lcore_id = rte_lcore_id(); + + /* Get socket ID */ + socket_id = rte_lcore_to_socket_id(lcore_id); + + /* Save routing table */ + lconf.rt.rt4_ctx = socket_ctx[socket_id].rt_ip4; + lconf.rt.rt6_ctx = socket_ctx[socket_id].rt_ip6; + lconf.inbound.sp4_ctx = socket_ctx[socket_id].sp_ip4_in; + lconf.inbound.sp6_ctx = socket_ctx[socket_id].sp_ip6_in; + lconf.inbound.sa_ctx = socket_ctx[socket_id].sa_in; + lconf.inbound.session_pool = socket_ctx[socket_id].session_pool; + lconf.outbound.sp4_ctx = socket_ctx[socket_id].sp_ip4_out; + lconf.outbound.sp6_ctx = socket_ctx[socket_id].sp_ip6_out; + lconf.outbound.sa_ctx = socket_ctx[socket_id].sa_out; + lconf.outbound.session_pool = socket_ctx[socket_id].session_pool; + + RTE_LOG(INFO, IPSEC, + "Launching event mode worker (non-burst - Tx internal port - " + "app mode) on lcore %d\n", lcore_id); + + /* Check if it's single link */ + if (nb_links != 1) { + RTE_LOG(INFO, IPSEC, + "Multiple links not supported. 
Using first link\n"); + } + + RTE_LOG(INFO, IPSEC, " -- lcoreid=%u event_port_id=%u\n", lcore_id, + links[0].event_port_id); + + while (!force_quit) { + /* Read packet from event queues */ + nb_rx = rte_event_dequeue_burst(links[0].eventdev_id, + links[0].event_port_id, + &ev, /* events */ + 1, /* nb_events */ + 0 /* timeout_ticks */); + + if (nb_rx == 0) + continue; + + if (is_unprotected_port(ev.mbuf->port)) + ret = process_ipsec_ev_inbound(&lconf.inbound, + &lconf.rt, &ev); + else + ret = process_ipsec_ev_outbound(&lconf.outbound, + &lconf.rt, &ev); + if (ret != 1) + /* The pkt has been dropped */ + continue; + + /* + * Since tx internal port is available, events can be + * directly enqueued to the adapter and it would be + * internally submitted to the eth device. + */ + rte_event_eth_tx_adapter_enqueue(links[0].eventdev_id, + links[0].event_port_id, + &ev, /* events */ + 1, /* nb_events */ + 0 /* flags */); + } +} + static uint8_t ipsec_eventmode_populate_wrkr_params(struct eh_app_worker_params *wrkrs) { @@ -180,6 +592,14 @@ ipsec_eventmode_populate_wrkr_params(struct eh_app_worker_params *wrkrs) wrkr->cap.ipsec_mode = EH_IPSEC_MODE_TYPE_DRIVER; wrkr->worker_thread = ipsec_wrkr_non_burst_int_port_drv_mode; wrkr++; + nb_wrkr_param++; + + /* Non-burst - Tx internal port - app mode */ + wrkr->cap.burst = EH_RX_TYPE_NON_BURST; + wrkr->cap.tx_internal_port = EH_TX_TYPE_INTERNAL_PORT; + wrkr->cap.ipsec_mode = EH_IPSEC_MODE_TYPE_APP; + wrkr->worker_thread = ipsec_wrkr_non_burst_int_port_app_mode; + nb_wrkr_param++; return nb_wrkr_param; } diff --git a/examples/ipsec-secgw/ipsec_worker.h b/examples/ipsec-secgw/ipsec_worker.h new file mode 100644 index 0000000..87b4f22 --- /dev/null +++ b/examples/ipsec-secgw/ipsec_worker.h @@ -0,0 +1,35 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C) 2020 Marvell International Ltd. 
+ */ +#ifndef _IPSEC_WORKER_H_ +#define _IPSEC_WORKER_H_ + +#include "ipsec.h" + +enum pkt_type { + PKT_TYPE_PLAIN_IPV4 = 1, + PKT_TYPE_IPSEC_IPV4, + PKT_TYPE_PLAIN_IPV6, + PKT_TYPE_IPSEC_IPV6, + PKT_TYPE_INVALID +}; + +struct route_table { + struct rt_ctx *rt4_ctx; + struct rt_ctx *rt6_ctx; +}; + +/* + * Conf required by event mode worker with tx internal port + */ +struct lcore_conf_ev_tx_int_port_wrkr { + struct ipsec_ctx inbound; + struct ipsec_ctx outbound; + struct route_table rt; +} __rte_cache_aligned; + +void ipsec_poll_mode_worker(void); + +int ipsec_launch_one_lcore(void *args); + +#endif /* _IPSEC_WORKER_H_ */ -- 2.7.4 ^ permalink raw reply [flat|nested] 147+ messages in thread
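For readers following the header move above: the ETHADDR()/__BYTES_TO_UINT64() macros pack a MAC address into a uint64_t so that byte 'a' lands in the least significant position, i.e. the address stays in wire order when read back from the low byte up. A minimal standalone sketch of the little-endian branch (macro names shortened; this is an illustration, not the DPDK build):

```c
#include <assert.h>
#include <stdint.h>

/* Little-endian branch of __BYTES_TO_UINT64 from the patch:
 * byte 'a' ends up in the least significant position. */
#define BYTES_TO_U64(a, b, c, d, e, f, g, h) \
	(((uint64_t)((h) & 0xff) << 56) | ((uint64_t)((g) & 0xff) << 48) | \
	 ((uint64_t)((f) & 0xff) << 40) | ((uint64_t)((e) & 0xff) << 32) | \
	 ((uint64_t)((d) & 0xff) << 24) | ((uint64_t)((c) & 0xff) << 16) | \
	 ((uint64_t)((b) & 0xff) << 8)  | ((uint64_t)(a) & 0xff))

#define ETHADDR(a, b, c, d, e, f) (BYTES_TO_U64(a, b, c, d, e, f, 0, 0))

/* Extract the six MAC bytes, LSB first: this reproduces the wire
 * order that ETHADDR() packed, independent of host endianness. */
static void u64_to_mac(uint64_t v, uint8_t mac[6])
{
	for (int i = 0; i < 6; i++)
		mac[i] = (uint8_t)(v >> (8 * i));
}
```

This layout is what update_mac_addrs() relies on when it copies RTE_ETHER_ADDR_LEN bytes straight from ethaddr_tbl[] into the Ethernet header.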
* Re: [dpdk-dev] [PATCH v4 12/15] examples/ipsec-secgw: add app mode worker 2020-02-20 8:02 ` [dpdk-dev] [PATCH v4 12/15] examples/ipsec-secgw: add app " Lukasz Bartosik @ 2020-02-24 14:13 ` Akhil Goyal 2020-02-25 11:50 ` [dpdk-dev] [EXT] " Lukas Bartosik 0 siblings, 1 reply; 147+ messages in thread From: Akhil Goyal @ 2020-02-24 14:13 UTC (permalink / raw) To: Lukasz Bartosik, Anoob Joseph Cc: Jerin Jacob, Narayana Prasad, Ankur Dwivedi, Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru, Konstantin Ananyev, dev, Thomas Monjalon, Radu Nicolau Hi Lukasz/Anoob, > > Add application inbound/outbound worker thread and > IPsec application processing code for event mode. > > Example ipsec-secgw command in app mode: > ipsec-secgw -w 0002:02:00.0,ipsec_in_max_spi=128 > -w 0002:03:00.0,ipsec_in_max_spi=128 -w 0002:0e:00.0 -w 0002:10:00.1 > --log-level=8 -c 0x1 -- -P -p 0x3 -u 0x1 --config "(1,0,0),(0,0,0)" > -f aes-gcm.cfg --transfer-mode event --event-schedule-type parallel > > Signed-off-by: Anoob Joseph <anoobj@marvell.com> > Signed-off-by: Ankur Dwivedi <adwivedi@marvell.com> > Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> > --- ... 
> +static inline enum pkt_type > +process_ipsec_get_pkt_type(struct rte_mbuf *pkt, uint8_t **nlp) > +{ > + struct rte_ether_hdr *eth; > + > + eth = rte_pktmbuf_mtod(pkt, struct rte_ether_hdr *); > + if (eth->ether_type == rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4)) { > + *nlp = RTE_PTR_ADD(eth, RTE_ETHER_HDR_LEN + > + offsetof(struct ip, ip_p)); > + if (**nlp == IPPROTO_ESP) > + return PKT_TYPE_IPSEC_IPV4; > + else > + return PKT_TYPE_PLAIN_IPV4; > + } else if (eth->ether_type == rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6)) > { > + *nlp = RTE_PTR_ADD(eth, RTE_ETHER_HDR_LEN + > + offsetof(struct ip6_hdr, ip6_nxt)); > + if (**nlp == IPPROTO_ESP) > + return PKT_TYPE_IPSEC_IPV6; > + else > + return PKT_TYPE_PLAIN_IPV6; > + } > + > + /* Unknown/Unsupported type */ > + return PKT_TYPE_INVALID; > +} > + > +static inline void > +update_mac_addrs(struct rte_mbuf *pkt, uint16_t portid) > +{ > + struct rte_ether_hdr *ethhdr; > + > + ethhdr = rte_pktmbuf_mtod(pkt, struct rte_ether_hdr *); > + memcpy(ðhdr->s_addr, ðaddr_tbl[portid].src, > RTE_ETHER_ADDR_LEN); > + memcpy(ðhdr->d_addr, ðaddr_tbl[portid].dst, > RTE_ETHER_ADDR_LEN); > +} > > static inline void > ipsec_event_pre_forward(struct rte_mbuf *m, unsigned int port_id) > @@ -61,6 +101,290 @@ prepare_out_sessions_tbl(struct sa_ctx *sa_out, > } > } > > +static inline int > +check_sp(struct sp_ctx *sp, const uint8_t *nlp, uint32_t *sa_idx) > +{ > + uint32_t res; > + > + if (unlikely(sp == NULL)) > + return 0; > + > + rte_acl_classify((struct rte_acl_ctx *)sp, &nlp, &res, 1, > + DEFAULT_MAX_CATEGORIES); > + > + if (unlikely(res == 0)) { > + /* No match */ > + return 0; > + } > + > + if (res == DISCARD) > + return 0; > + else if (res == BYPASS) { > + *sa_idx = -1; > + return 1; > + } > + > + *sa_idx = res - 1; > + return 1; > +} > + > +static inline uint16_t > +route4_pkt(struct rte_mbuf *pkt, struct rt_ctx *rt_ctx) > +{ > + uint32_t dst_ip; > + uint16_t offset; > + uint32_t hop; > + int ret; > + > + offset = RTE_ETHER_HDR_LEN + 
offsetof(struct ip, ip_dst); > + dst_ip = *rte_pktmbuf_mtod_offset(pkt, uint32_t *, offset); > + dst_ip = rte_be_to_cpu_32(dst_ip); > + > + ret = rte_lpm_lookup((struct rte_lpm *)rt_ctx, dst_ip, &hop); > + > + if (ret == 0) { > + /* We have a hit */ > + return hop; > + } > + > + /* else */ > + return RTE_MAX_ETHPORTS; > +} > + > +/* TODO: To be tested */ > +static inline uint16_t > +route6_pkt(struct rte_mbuf *pkt, struct rt_ctx *rt_ctx) > +{ > + uint8_t dst_ip[16]; > + uint8_t *ip6_dst; > + uint16_t offset; > + uint32_t hop; > + int ret; > + > + offset = RTE_ETHER_HDR_LEN + offsetof(struct ip6_hdr, ip6_dst); > + ip6_dst = rte_pktmbuf_mtod_offset(pkt, uint8_t *, offset); > + memcpy(&dst_ip[0], ip6_dst, 16); > + > + ret = rte_lpm6_lookup((struct rte_lpm6 *)rt_ctx, dst_ip, &hop); > + > + if (ret == 0) { > + /* We have a hit */ > + return hop; > + } > + > + /* else */ > + return RTE_MAX_ETHPORTS; > +} > + > +static inline uint16_t > +get_route(struct rte_mbuf *pkt, struct route_table *rt, enum pkt_type type) > +{ > + if (type == PKT_TYPE_PLAIN_IPV4 || type == PKT_TYPE_IPSEC_IPV4) > + return route4_pkt(pkt, rt->rt4_ctx); > + else if (type == PKT_TYPE_PLAIN_IPV6 || type == PKT_TYPE_IPSEC_IPV6) > + return route6_pkt(pkt, rt->rt6_ctx); > + > + return RTE_MAX_ETHPORTS; > +} Is it not possible to use the existing functions for finding routes, checking packet types and checking security policies? It will be very difficult to manage two separate functions for the same work. I can see that the pkt->data_offs are not required to be updated in the inline case, but can we split the existing functions in two so that they can be called in the appropriate cases? As you have said in the cover note, lookaside protocol support is also to be added. I tried adding it as well, and it gets very difficult to manage separate functions for separate code paths.
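For context on the routing helpers being discussed: route4_pkt()/route6_pkt() are thin wrappers around DPDK's LPM (longest-prefix-match) lookup, returning the next-hop port on a hit and RTE_MAX_ETHPORTS on a miss. A toy, self-contained stand-in for the IPv4 contract (a linear scan instead of rte_lpm's DIR-24-8 tables; all names here are illustrative):

```c
#include <assert.h>
#include <stdint.h>

#define MAX_PORTS 32 /* stand-in for RTE_MAX_ETHPORTS */

struct rt4_entry {
	uint32_t prefix; /* network prefix, host byte order */
	uint8_t depth;   /* prefix length in bits */
	uint16_t port;   /* next-hop port id */
};

/* Toy longest-prefix-match: linear scan, keep the deepest hit.
 * Mirrors only the route4_pkt() contract: next hop on hit,
 * MAX_PORTS (sentinel) on miss. */
static uint16_t toy_route4(const struct rt4_entry *tbl, int n, uint32_t dst)
{
	int best = -1;

	for (int i = 0; i < n; i++) {
		uint32_t mask = tbl[i].depth ? ~0u << (32 - tbl[i].depth) : 0;

		if ((dst & mask) == (tbl[i].prefix & mask) &&
		    (best < 0 || tbl[i].depth > tbl[best].depth))
			best = i;
	}
	return best >= 0 ? tbl[best].port : MAX_PORTS;
}
```

The real rte_lpm_lookup() does the same classification in O(1) per packet, which is why both the poll-mode and event-mode paths funnel into it.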
> + > +static inline int > +process_ipsec_ev_inbound(struct ipsec_ctx *ctx, struct route_table *rt, > + struct rte_event *ev) > +{ > + struct ipsec_sa *sa = NULL; > + struct rte_mbuf *pkt; > + uint16_t port_id = 0; > + enum pkt_type type; > + uint32_t sa_idx; > + uint8_t *nlp; > + > + /* Get pkt from event */ > + pkt = ev->mbuf; > + > + /* Check the packet type */ > + type = process_ipsec_get_pkt_type(pkt, &nlp); > + > + switch (type) { > + case PKT_TYPE_PLAIN_IPV4: > + if (pkt->ol_flags & PKT_RX_SEC_OFFLOAD) { > + if (unlikely(pkt->ol_flags & > + PKT_RX_SEC_OFFLOAD_FAILED)) { > + RTE_LOG(ERR, IPSEC, > + "Inbound security offload failed\n"); > + goto drop_pkt_and_exit; > + } > + sa = pkt->userdata; > + } > + > + /* Check if we have a match */ > + if (check_sp(ctx->sp4_ctx, nlp, &sa_idx) == 0) { > + /* No valid match */ > + goto drop_pkt_and_exit; > + } > + break; > + > + case PKT_TYPE_PLAIN_IPV6: > + if (pkt->ol_flags & PKT_RX_SEC_OFFLOAD) { > + if (unlikely(pkt->ol_flags & > + PKT_RX_SEC_OFFLOAD_FAILED)) { > + RTE_LOG(ERR, IPSEC, > + "Inbound security offload failed\n"); > + goto drop_pkt_and_exit; > + } > + sa = pkt->userdata; > + } > + > + /* Check if we have a match */ > + if (check_sp(ctx->sp6_ctx, nlp, &sa_idx) == 0) { > + /* No valid match */ > + goto drop_pkt_and_exit; > + } > + break; > + > + default: > + RTE_LOG(ERR, IPSEC, "Unsupported packet type = %d\n", type); > + goto drop_pkt_and_exit; > + } > + > + /* Check if the packet has to be bypassed */ > + if (sa_idx == BYPASS) > + goto route_and_send_pkt; > + > + /* Validate sa_idx */ > + if (sa_idx >= ctx->sa_ctx->nb_sa) > + goto drop_pkt_and_exit; > + > + /* Else the packet has to be protected with SA */ > + > + /* If the packet was IPsec processed, then SA pointer should be set */ > + if (sa == NULL) > + goto drop_pkt_and_exit; > + > + /* SPI on the packet should match with the one in SA */ > + if (unlikely(sa->spi != ctx->sa_ctx->sa[sa_idx].spi)) > + goto drop_pkt_and_exit; > + > +route_and_send_pkt: > 
+ port_id = get_route(pkt, rt, type); > + if (unlikely(port_id == RTE_MAX_ETHPORTS)) { > + /* no match */ > + goto drop_pkt_and_exit; > + } > + /* else, we have a matching route */ > + > + /* Update mac addresses */ > + update_mac_addrs(pkt, port_id); > + > + /* Update the event with the dest port */ > + ipsec_event_pre_forward(pkt, port_id); > + return 1; > + > +drop_pkt_and_exit: > + RTE_LOG(ERR, IPSEC, "Inbound packet dropped\n"); > + rte_pktmbuf_free(pkt); > + ev->mbuf = NULL; > + return 0; > +} > + > +static inline int > +process_ipsec_ev_outbound(struct ipsec_ctx *ctx, struct route_table *rt, > + struct rte_event *ev) > +{ > + struct rte_ipsec_session *sess; > + struct sa_ctx *sa_ctx; > + struct rte_mbuf *pkt; > + uint16_t port_id = 0; > + struct ipsec_sa *sa; > + enum pkt_type type; > + uint32_t sa_idx; > + uint8_t *nlp; > + > + /* Get pkt from event */ > + pkt = ev->mbuf; > + > + /* Check the packet type */ > + type = process_ipsec_get_pkt_type(pkt, &nlp); > + > + switch (type) { > + case PKT_TYPE_PLAIN_IPV4: > + /* Check if we have a match */ > + if (check_sp(ctx->sp4_ctx, nlp, &sa_idx) == 0) { > + /* No valid match */ > + goto drop_pkt_and_exit; > + } > + break; > + case PKT_TYPE_PLAIN_IPV6: > + /* Check if we have a match */ > + if (check_sp(ctx->sp6_ctx, nlp, &sa_idx) == 0) { > + /* No valid match */ > + goto drop_pkt_and_exit; > + } > + break; > + default: > + /* > + * Only plain IPv4 & IPv6 packets are allowed > + * on protected port. Drop the rest. 
> + */ > + RTE_LOG(ERR, IPSEC, "Unsupported packet type = %d\n", type); > + goto drop_pkt_and_exit; > + } > + > + /* Check if the packet has to be bypassed */ > + if (sa_idx == BYPASS) { > + port_id = get_route(pkt, rt, type); > + if (unlikely(port_id == RTE_MAX_ETHPORTS)) { > + /* no match */ > + goto drop_pkt_and_exit; > + } > + /* else, we have a matching route */ > + goto send_pkt; > + } > + > + /* Validate sa_idx */ > + if (sa_idx >= ctx->sa_ctx->nb_sa) > + goto drop_pkt_and_exit; > + > + /* Else the packet has to be protected */ > + > + /* Get SA ctx*/ > + sa_ctx = ctx->sa_ctx; > + > + /* Get SA */ > + sa = &(sa_ctx->sa[sa_idx]); > + > + /* Get IPsec session */ > + sess = ipsec_get_primary_session(sa); > + > + /* Allow only inline protocol for now */ > + if (sess->type != RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL) { > + RTE_LOG(ERR, IPSEC, "SA type not supported\n"); > + goto drop_pkt_and_exit; > + } > + > + if (sess->security.ol_flags & RTE_SECURITY_TX_OLOAD_NEED_MDATA) > + pkt->userdata = sess->security.ses; > + > + /* Mark the packet for Tx security offload */ > + pkt->ol_flags |= PKT_TX_SEC_OFFLOAD; > + > + /* Get the port to which this pkt need to be submitted */ > + port_id = sa->portid; > + > +send_pkt: > + /* Update mac addresses */ > + update_mac_addrs(pkt, port_id); > + > + /* Update the event with the dest port */ > + ipsec_event_pre_forward(pkt, port_id); How is IP checksum getting updated for the processed packet. If the hardware is not updating it, should we add a fallback mechanism for SW based Checksum update. 
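On the checksum question raised above: a software fallback would recompute the IPv4 header checksum per RFC 1071 (one's-complement sum of the header's 16-bit words, with the checksum field treated as zero). A minimal sketch of such a helper (not part of this patch; DPDK itself provides rte_ipv4_cksum() for this purpose):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* RFC 1071 one's-complement checksum over an IPv4 header.
 * 'hdr' points at the header with its checksum field set to 0;
 * 'len' is the header length in bytes (a multiple of 4). */
static uint16_t ipv4_hdr_cksum(const uint8_t *hdr, size_t len)
{
	uint32_t sum = 0;

	for (size_t i = 0; i < len; i += 2)
		sum += (uint32_t)hdr[i] << 8 | hdr[i + 1];
	while (sum >> 16) /* fold carries back into the low 16 bits */
		sum = (sum & 0xffff) + (sum >> 16);
	return (uint16_t)~sum;
}
```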
> + return 1; It will be better to use some MACROS while returning Like #define PKT_FORWARD 1 #define PKT_DROPPED 0 #define PKT_POSTED 2 /*may be for lookaside cases */ > + > +drop_pkt_and_exit: > + RTE_LOG(ERR, IPSEC, "Outbound packet dropped\n"); > + rte_pktmbuf_free(pkt); > + ev->mbuf = NULL; > + return 0; > +} > + > /* > * Event mode exposes various operating modes depending on the > * capabilities of the event device and the operating mode > @@ -68,7 +392,7 @@ prepare_out_sessions_tbl(struct sa_ctx *sa_out, > */ > > /* Workers registered */ > -#define IPSEC_EVENTMODE_WORKERS 1 > +#define IPSEC_EVENTMODE_WORKERS 2 > > /* > * Event mode worker > @@ -146,7 +470,7 @@ ipsec_wrkr_non_burst_int_port_drv_mode(struct > eh_event_link_info *links, > } > > /* Save security session */ > - pkt->udata64 = (uint64_t) sess_tbl[port_id]; > + pkt->userdata = sess_tbl[port_id]; > > /* Mark the packet for Tx security offload */ > pkt->ol_flags |= PKT_TX_SEC_OFFLOAD; > @@ -165,6 +489,94 @@ ipsec_wrkr_non_burst_int_port_drv_mode(struct > eh_event_link_info *links, > } > } > > +/* > + * Event mode worker > + * Operating parameters : non-burst - Tx internal port - app mode > + */ > +static void > +ipsec_wrkr_non_burst_int_port_app_mode(struct eh_event_link_info *links, > + uint8_t nb_links) > +{ > + struct lcore_conf_ev_tx_int_port_wrkr lconf; > + unsigned int nb_rx = 0; > + struct rte_event ev; > + uint32_t lcore_id; > + int32_t socket_id; > + int ret; > + > + /* Check if we have links registered for this lcore */ > + if (nb_links == 0) { > + /* No links registered - exit */ > + return; > + } > + > + /* We have valid links */ > + > + /* Get core ID */ > + lcore_id = rte_lcore_id(); > + > + /* Get socket ID */ > + socket_id = rte_lcore_to_socket_id(lcore_id); > + > + /* Save routing table */ > + lconf.rt.rt4_ctx = socket_ctx[socket_id].rt_ip4; > + lconf.rt.rt6_ctx = socket_ctx[socket_id].rt_ip6; > + lconf.inbound.sp4_ctx = socket_ctx[socket_id].sp_ip4_in; > + lconf.inbound.sp6_ctx = 
socket_ctx[socket_id].sp_ip6_in; > + lconf.inbound.sa_ctx = socket_ctx[socket_id].sa_in; > + lconf.inbound.session_pool = socket_ctx[socket_id].session_pool; Session_priv_pool should also be added for both inbound and outbound > + lconf.outbound.sp4_ctx = socket_ctx[socket_id].sp_ip4_out; > + lconf.outbound.sp6_ctx = socket_ctx[socket_id].sp_ip6_out; > + lconf.outbound.sa_ctx = socket_ctx[socket_id].sa_out; > + lconf.outbound.session_pool = socket_ctx[socket_id].session_pool; > + > + RTE_LOG(INFO, IPSEC, > + "Launching event mode worker (non-burst - Tx internal port - " > + "app mode) on lcore %d\n", lcore_id); > + > + /* Check if it's single link */ > + if (nb_links != 1) { > + RTE_LOG(INFO, IPSEC, > + "Multiple links not supported. Using first link\n"); > + } > + > + RTE_LOG(INFO, IPSEC, " -- lcoreid=%u event_port_id=%u\n", lcore_id, > + links[0].event_port_id); > + > + while (!force_quit) { > + /* Read packet from event queues */ > + nb_rx = rte_event_dequeue_burst(links[0].eventdev_id, > + links[0].event_port_id, > + &ev, /* events */ > + 1, /* nb_events */ > + 0 /* timeout_ticks */); > + > + if (nb_rx == 0) > + continue; > + Event type should be checked here before dereferencing it. > + if (is_unprotected_port(ev.mbuf->port)) > + ret = process_ipsec_ev_inbound(&lconf.inbound, > + &lconf.rt, &ev); > + else > + ret = process_ipsec_ev_outbound(&lconf.outbound, > + &lconf.rt, &ev); > + if (ret != 1) > + /* The pkt has been dropped */ > + continue; > + > + /* > + * Since tx internal port is available, events can be > + * directly enqueued to the adapter and it would be > + * internally submitted to the eth device. 
> + */ > + rte_event_eth_tx_adapter_enqueue(links[0].eventdev_id, > + links[0].event_port_id, > + &ev, /* events */ > + 1, /* nb_events */ > + 0 /* flags */); > + } > +} > + > static uint8_t > ipsec_eventmode_populate_wrkr_params(struct eh_app_worker_params > *wrkrs) > { > @@ -180,6 +592,14 @@ ipsec_eventmode_populate_wrkr_params(struct > eh_app_worker_params *wrkrs) > wrkr->cap.ipsec_mode = EH_IPSEC_MODE_TYPE_DRIVER; > wrkr->worker_thread = ipsec_wrkr_non_burst_int_port_drv_mode; > wrkr++; > + nb_wrkr_param++; > + > + /* Non-burst - Tx internal port - app mode */ > + wrkr->cap.burst = EH_RX_TYPE_NON_BURST; > + wrkr->cap.tx_internal_port = EH_TX_TYPE_INTERNAL_PORT; > + wrkr->cap.ipsec_mode = EH_IPSEC_MODE_TYPE_APP; > + wrkr->worker_thread = ipsec_wrkr_non_burst_int_port_app_mode; > + nb_wrkr_param++; > > return nb_wrkr_param; > } > diff --git a/examples/ipsec-secgw/ipsec_worker.h b/examples/ipsec- > secgw/ipsec_worker.h > new file mode 100644 > index 0000000..87b4f22 > --- /dev/null > +++ b/examples/ipsec-secgw/ipsec_worker.h > @@ -0,0 +1,35 @@ > +/* SPDX-License-Identifier: BSD-3-Clause > + * Copyright (C) 2020 Marvell International Ltd. > + */ > +#ifndef _IPSEC_WORKER_H_ > +#define _IPSEC_WORKER_H_ > + > +#include "ipsec.h" > + > +enum pkt_type { > + PKT_TYPE_PLAIN_IPV4 = 1, > + PKT_TYPE_IPSEC_IPV4, > + PKT_TYPE_PLAIN_IPV6, > + PKT_TYPE_IPSEC_IPV6, > + PKT_TYPE_INVALID > +}; > + > +struct route_table { > + struct rt_ctx *rt4_ctx; > + struct rt_ctx *rt6_ctx; > +}; > + > +/* > + * Conf required by event mode worker with tx internal port > + */ > +struct lcore_conf_ev_tx_int_port_wrkr { > + struct ipsec_ctx inbound; > + struct ipsec_ctx outbound; > + struct route_table rt; > +} __rte_cache_aligned; > + > +void ipsec_poll_mode_worker(void); > + > +int ipsec_launch_one_lcore(void *args); > + > +#endif /* _IPSEC_WORKER_H_ */ > -- > 2.7.4 ^ permalink raw reply [flat|nested] 147+ messages in thread
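The worker registration being extended at the end of the patch follows a simple pattern: each worker advertises a capability tuple (burst type, Tx port type, IPsec mode), and the event helper launches the thread whose tuple matches the detected device capabilities and configured mode. A condensed, self-contained sketch of that matching idea (enum values and names are illustrative, not the event_helper API):

```c
#include <assert.h>
#include <stddef.h>

enum rx_type { RX_NON_BURST, RX_BURST };
enum tx_type { TX_INTERNAL_PORT, TX_EXTERNAL };
enum app_mode { MODE_DRIVER, MODE_APP };

typedef void (*worker_fn)(void);

struct worker_param {
	enum rx_type burst;
	enum tx_type tx;
	enum app_mode mode;
	worker_fn thread;
};

static void drv_worker(void) { } /* placeholder worker bodies */
static void app_worker(void) { }

/* Pick the first registered worker whose capability tuple matches
 * the runtime conditions; NULL when nothing fits. */
static worker_fn match_worker(const struct worker_param *w, size_t n,
			      enum rx_type rx, enum tx_type tx,
			      enum app_mode mode)
{
	for (size_t i = 0; i < n; i++)
		if (w[i].burst == rx && w[i].tx == tx && w[i].mode == mode)
			return w[i].thread;
	return NULL;
}
```

This is why the patch bumps IPSEC_EVENTMODE_WORKERS to 2: the driver-mode and app-mode workers differ only in the ipsec_mode field of their capability tuples.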
* Re: [dpdk-dev] [EXT] RE: [PATCH v4 12/15] examples/ipsec-secgw: add app mode worker 2020-02-24 14:13 ` Akhil Goyal @ 2020-02-25 11:50 ` Lukas Bartosik 2020-02-25 12:13 ` Anoob Joseph 2020-02-26 6:04 ` Akhil Goyal 0 siblings, 2 replies; 147+ messages in thread From: Lukas Bartosik @ 2020-02-25 11:50 UTC (permalink / raw) To: Akhil Goyal, Anoob Joseph Cc: Jerin Jacob Kollanukkaran, Narayana Prasad Raju Athreya, Ankur Dwivedi, Archana Muniganti, Tejasree Kondoj, Vamsi Krishna Attunuru, Konstantin Ananyev, dev, Thomas Monjalon, Radu Nicolau Hi Akhil, Please see my answers below. Thanks, Lukasz On 24.02.2020 15:13, Akhil Goyal wrote: > External Email > > ---------------------------------------------------------------------- > Hi Lukasz/Anoob, > >> >> Add application inbound/outbound worker thread and >> IPsec application processing code for event mode. >> >> Example ipsec-secgw command in app mode: >> ipsec-secgw -w 0002:02:00.0,ipsec_in_max_spi=128 >> -w 0002:03:00.0,ipsec_in_max_spi=128 -w 0002:0e:00.0 -w 0002:10:00.1 >> --log-level=8 -c 0x1 -- -P -p 0x3 -u 0x1 --config "(1,0,0),(0,0,0)" >> -f aes-gcm.cfg --transfer-mode event --event-schedule-type parallel >> >> Signed-off-by: Anoob Joseph <anoobj@marvell.com> >> Signed-off-by: Ankur Dwivedi <adwivedi@marvell.com> >> Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> >> --- > > ... 
> >> +static inline enum pkt_type >> +process_ipsec_get_pkt_type(struct rte_mbuf *pkt, uint8_t **nlp) >> +{ >> + struct rte_ether_hdr *eth; >> + >> + eth = rte_pktmbuf_mtod(pkt, struct rte_ether_hdr *); >> + if (eth->ether_type == rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4)) { >> + *nlp = RTE_PTR_ADD(eth, RTE_ETHER_HDR_LEN + >> + offsetof(struct ip, ip_p)); >> + if (**nlp == IPPROTO_ESP) >> + return PKT_TYPE_IPSEC_IPV4; >> + else >> + return PKT_TYPE_PLAIN_IPV4; >> + } else if (eth->ether_type == rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6)) >> { >> + *nlp = RTE_PTR_ADD(eth, RTE_ETHER_HDR_LEN + >> + offsetof(struct ip6_hdr, ip6_nxt)); >> + if (**nlp == IPPROTO_ESP) >> + return PKT_TYPE_IPSEC_IPV6; >> + else >> + return PKT_TYPE_PLAIN_IPV6; >> + } >> + >> + /* Unknown/Unsupported type */ >> + return PKT_TYPE_INVALID; >> +} >> + >> +static inline void >> +update_mac_addrs(struct rte_mbuf *pkt, uint16_t portid) >> +{ >> + struct rte_ether_hdr *ethhdr; >> + >> + ethhdr = rte_pktmbuf_mtod(pkt, struct rte_ether_hdr *); >> + memcpy(ðhdr->s_addr, ðaddr_tbl[portid].src, >> RTE_ETHER_ADDR_LEN); >> + memcpy(ðhdr->d_addr, ðaddr_tbl[portid].dst, >> RTE_ETHER_ADDR_LEN); >> +} >> >> static inline void >> ipsec_event_pre_forward(struct rte_mbuf *m, unsigned int port_id) >> @@ -61,6 +101,290 @@ prepare_out_sessions_tbl(struct sa_ctx *sa_out, >> } >> } >> >> +static inline int >> +check_sp(struct sp_ctx *sp, const uint8_t *nlp, uint32_t *sa_idx) >> +{ >> + uint32_t res; >> + >> + if (unlikely(sp == NULL)) >> + return 0; >> + >> + rte_acl_classify((struct rte_acl_ctx *)sp, &nlp, &res, 1, >> + DEFAULT_MAX_CATEGORIES); >> + >> + if (unlikely(res == 0)) { >> + /* No match */ >> + return 0; >> + } >> + >> + if (res == DISCARD) >> + return 0; >> + else if (res == BYPASS) { >> + *sa_idx = -1; >> + return 1; >> + } >> + >> + *sa_idx = res - 1; >> + return 1; >> +} >> + >> +static inline uint16_t >> +route4_pkt(struct rte_mbuf *pkt, struct rt_ctx *rt_ctx) >> +{ >> + uint32_t dst_ip; >> + uint16_t offset; 
>> + uint32_t hop; >> + int ret; >> + >> + offset = RTE_ETHER_HDR_LEN + offsetof(struct ip, ip_dst); >> + dst_ip = *rte_pktmbuf_mtod_offset(pkt, uint32_t *, offset); >> + dst_ip = rte_be_to_cpu_32(dst_ip); >> + >> + ret = rte_lpm_lookup((struct rte_lpm *)rt_ctx, dst_ip, &hop); >> + >> + if (ret == 0) { >> + /* We have a hit */ >> + return hop; >> + } >> + >> + /* else */ >> + return RTE_MAX_ETHPORTS; >> +} >> + >> +/* TODO: To be tested */ >> +static inline uint16_t >> +route6_pkt(struct rte_mbuf *pkt, struct rt_ctx *rt_ctx) >> +{ >> + uint8_t dst_ip[16]; >> + uint8_t *ip6_dst; >> + uint16_t offset; >> + uint32_t hop; >> + int ret; >> + >> + offset = RTE_ETHER_HDR_LEN + offsetof(struct ip6_hdr, ip6_dst); >> + ip6_dst = rte_pktmbuf_mtod_offset(pkt, uint8_t *, offset); >> + memcpy(&dst_ip[0], ip6_dst, 16); >> + >> + ret = rte_lpm6_lookup((struct rte_lpm6 *)rt_ctx, dst_ip, &hop); >> + >> + if (ret == 0) { >> + /* We have a hit */ >> + return hop; >> + } >> + >> + /* else */ >> + return RTE_MAX_ETHPORTS; >> +} >> + >> +static inline uint16_t >> +get_route(struct rte_mbuf *pkt, struct route_table *rt, enum pkt_type type) >> +{ >> + if (type == PKT_TYPE_PLAIN_IPV4 || type == PKT_TYPE_IPSEC_IPV4) >> + return route4_pkt(pkt, rt->rt4_ctx); >> + else if (type == PKT_TYPE_PLAIN_IPV6 || type == PKT_TYPE_IPSEC_IPV6) >> + return route6_pkt(pkt, rt->rt6_ctx); >> + >> + return RTE_MAX_ETHPORTS; >> +} > > Is it not possible to use the existing functions for finding routes, checking packet types and checking security policies? > It will be very difficult to manage two separate functions for the same work. I can see that the pkt->data_offs > are not required to be updated in the inline case, but can we split the existing functions in two so that they can be > called in the appropriate cases? > > As you have said in the cover note, lookaside protocol support is to be added as well. I also tried adding it, and it will get very > difficult to manage separate functions for separate code paths.
> [Lukasz] This was also Konstantin's comment during review of one of the previous revisions. The prepare_one_packet() and prepare_tx_pkt() functions do much more than we need, so for performance reasons we crafted new functions. For example, the process_ipsec_get_pkt_type() function returns nlp and whether the packet type is plain or IPsec. That's all. prepare_one_packet() processes packets in chunks and does much more: it adjusts the mbuf and packet length, then demultiplexes packets into plain and IPsec flows and finally does inline checks. The same applies to update_mac_addrs() vs prepare_tx_pkt() and check_sp() vs inbound_sp_sa(): prepare_tx_pkt() and inbound_sp_sa() do more than we need in event mode. I understand your concern from the perspective of code maintenance, but on the other hand we are concerned with performance. The current code is not optimized to support the multiple processing modes introduced with rte_security. We can work on common routines once the other modes are also added, so that we can come up with a better solution than what we have today.
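The lightweight classification Lukasz describes can be illustrated with a self-contained sketch (hypothetical, DPDK-free; the real code works on rte_mbuf/rte_ether_hdr and the RTE_ETHER_TYPE_* constants, but the contract is the same: return the packet type and point nlp at the next-layer-protocol byte):

```c
#include <stdint.h>
#include <string.h>
#include <arpa/inet.h> /* htons */

enum pkt_type {
	PKT_TYPE_PLAIN_IPV4 = 1,
	PKT_TYPE_IPSEC_IPV4,
	PKT_TYPE_PLAIN_IPV6,
	PKT_TYPE_IPSEC_IPV6,
	PKT_TYPE_INVALID
};

#define ETHER_HDR_LEN  14
#define ETHERTYPE_IPV4 0x0800
#define ETHERTYPE_IPV6 0x86DD
#define IP_PROTO_OFF   9  /* offsetof(struct ip, ip_p) */
#define IP6_NXT_OFF    6  /* offsetof(struct ip6_hdr, ip6_nxt) */
#define PROTO_ESP      50 /* IPPROTO_ESP */

/* Classify an Ethernet frame; on success *nlp points at the
 * next-layer-protocol byte, mirroring process_ipsec_get_pkt_type(). */
static enum pkt_type
get_pkt_type(const uint8_t *frame, const uint8_t **nlp)
{
	uint16_t ether_type;

	/* ether_type lives at bytes 12-13 of the frame, big endian */
	memcpy(&ether_type, frame + 12, sizeof(ether_type));
	if (ether_type == htons(ETHERTYPE_IPV4)) {
		*nlp = frame + ETHER_HDR_LEN + IP_PROTO_OFF;
		return (**nlp == PROTO_ESP) ?
			PKT_TYPE_IPSEC_IPV4 : PKT_TYPE_PLAIN_IPV4;
	} else if (ether_type == htons(ETHERTYPE_IPV6)) {
		*nlp = frame + ETHER_HDR_LEN + IP6_NXT_OFF;
		return (**nlp == PROTO_ESP) ?
			PKT_TYPE_IPSEC_IPV6 : PKT_TYPE_PLAIN_IPV6;
	}
	/* Unknown/Unsupported type */
	return PKT_TYPE_INVALID;
}
```

Unlike prepare_one_packet(), this does no mbuf/length adjustment and no demultiplexing into flows, which is exactly the performance argument being made.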
>> + >> +static inline int >> +process_ipsec_ev_inbound(struct ipsec_ctx *ctx, struct route_table *rt, >> + struct rte_event *ev) >> +{ >> + struct ipsec_sa *sa = NULL; >> + struct rte_mbuf *pkt; >> + uint16_t port_id = 0; >> + enum pkt_type type; >> + uint32_t sa_idx; >> + uint8_t *nlp; >> + >> + /* Get pkt from event */ >> + pkt = ev->mbuf; >> + >> + /* Check the packet type */ >> + type = process_ipsec_get_pkt_type(pkt, &nlp); >> + >> + switch (type) { >> + case PKT_TYPE_PLAIN_IPV4: >> + if (pkt->ol_flags & PKT_RX_SEC_OFFLOAD) { >> + if (unlikely(pkt->ol_flags & >> + PKT_RX_SEC_OFFLOAD_FAILED)) { >> + RTE_LOG(ERR, IPSEC, >> + "Inbound security offload failed\n"); >> + goto drop_pkt_and_exit; >> + } >> + sa = pkt->userdata; >> + } >> + >> + /* Check if we have a match */ >> + if (check_sp(ctx->sp4_ctx, nlp, &sa_idx) == 0) { >> + /* No valid match */ >> + goto drop_pkt_and_exit; >> + } >> + break; >> + >> + case PKT_TYPE_PLAIN_IPV6: >> + if (pkt->ol_flags & PKT_RX_SEC_OFFLOAD) { >> + if (unlikely(pkt->ol_flags & >> + PKT_RX_SEC_OFFLOAD_FAILED)) { >> + RTE_LOG(ERR, IPSEC, >> + "Inbound security offload failed\n"); >> + goto drop_pkt_and_exit; >> + } >> + sa = pkt->userdata; >> + } >> + >> + /* Check if we have a match */ >> + if (check_sp(ctx->sp6_ctx, nlp, &sa_idx) == 0) { >> + /* No valid match */ >> + goto drop_pkt_and_exit; >> + } >> + break; >> + >> + default: >> + RTE_LOG(ERR, IPSEC, "Unsupported packet type = %d\n", type); >> + goto drop_pkt_and_exit; >> + } >> + >> + /* Check if the packet has to be bypassed */ >> + if (sa_idx == BYPASS) >> + goto route_and_send_pkt; >> + >> + /* Validate sa_idx */ >> + if (sa_idx >= ctx->sa_ctx->nb_sa) >> + goto drop_pkt_and_exit; >> + >> + /* Else the packet has to be protected with SA */ >> + >> + /* If the packet was IPsec processed, then SA pointer should be set */ >> + if (sa == NULL) >> + goto drop_pkt_and_exit; >> + >> + /* SPI on the packet should match with the one in SA */ >> + if (unlikely(sa->spi != 
ctx->sa_ctx->sa[sa_idx].spi)) >> + goto drop_pkt_and_exit; >> + >> +route_and_send_pkt: >> + port_id = get_route(pkt, rt, type); >> + if (unlikely(port_id == RTE_MAX_ETHPORTS)) { >> + /* no match */ >> + goto drop_pkt_and_exit; >> + } >> + /* else, we have a matching route */ >> + >> + /* Update mac addresses */ >> + update_mac_addrs(pkt, port_id); >> + >> + /* Update the event with the dest port */ >> + ipsec_event_pre_forward(pkt, port_id); >> + return 1; >> + >> +drop_pkt_and_exit: >> + RTE_LOG(ERR, IPSEC, "Inbound packet dropped\n"); >> + rte_pktmbuf_free(pkt); >> + ev->mbuf = NULL; >> + return 0; >> +} >> + >> +static inline int >> +process_ipsec_ev_outbound(struct ipsec_ctx *ctx, struct route_table *rt, >> + struct rte_event *ev) >> +{ >> + struct rte_ipsec_session *sess; >> + struct sa_ctx *sa_ctx; >> + struct rte_mbuf *pkt; >> + uint16_t port_id = 0; >> + struct ipsec_sa *sa; >> + enum pkt_type type; >> + uint32_t sa_idx; >> + uint8_t *nlp; >> + >> + /* Get pkt from event */ >> + pkt = ev->mbuf; >> + >> + /* Check the packet type */ >> + type = process_ipsec_get_pkt_type(pkt, &nlp); >> + >> + switch (type) { >> + case PKT_TYPE_PLAIN_IPV4: >> + /* Check if we have a match */ >> + if (check_sp(ctx->sp4_ctx, nlp, &sa_idx) == 0) { >> + /* No valid match */ >> + goto drop_pkt_and_exit; >> + } >> + break; >> + case PKT_TYPE_PLAIN_IPV6: >> + /* Check if we have a match */ >> + if (check_sp(ctx->sp6_ctx, nlp, &sa_idx) == 0) { >> + /* No valid match */ >> + goto drop_pkt_and_exit; >> + } >> + break; >> + default: >> + /* >> + * Only plain IPv4 & IPv6 packets are allowed >> + * on protected port. Drop the rest. 
>> + */ >> + RTE_LOG(ERR, IPSEC, "Unsupported packet type = %d\n", type); >> + goto drop_pkt_and_exit; >> + } >> + >> + /* Check if the packet has to be bypassed */ >> + if (sa_idx == BYPASS) { >> + port_id = get_route(pkt, rt, type); >> + if (unlikely(port_id == RTE_MAX_ETHPORTS)) { >> + /* no match */ >> + goto drop_pkt_and_exit; >> + } >> + /* else, we have a matching route */ >> + goto send_pkt; >> + } >> + >> + /* Validate sa_idx */ >> + if (sa_idx >= ctx->sa_ctx->nb_sa) >> + goto drop_pkt_and_exit; >> + >> + /* Else the packet has to be protected */ >> + >> + /* Get SA ctx*/ >> + sa_ctx = ctx->sa_ctx; >> + >> + /* Get SA */ >> + sa = &(sa_ctx->sa[sa_idx]); >> + >> + /* Get IPsec session */ >> + sess = ipsec_get_primary_session(sa); >> + >> + /* Allow only inline protocol for now */ >> + if (sess->type != RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL) { >> + RTE_LOG(ERR, IPSEC, "SA type not supported\n"); >> + goto drop_pkt_and_exit; >> + } >> + >> + if (sess->security.ol_flags & RTE_SECURITY_TX_OLOAD_NEED_MDATA) >> + pkt->userdata = sess->security.ses; >> + >> + /* Mark the packet for Tx security offload */ >> + pkt->ol_flags |= PKT_TX_SEC_OFFLOAD; >> + >> + /* Get the port to which this pkt need to be submitted */ >> + port_id = sa->portid; >> + >> +send_pkt: >> + /* Update mac addresses */ >> + update_mac_addrs(pkt, port_id); >> + >> + /* Update the event with the dest port */ >> + ipsec_event_pre_forward(pkt, port_id); > > How is IP checksum getting updated for the processed packet. > If the hardware is not updating it, should we add a fallback mechanism for SW based > Checksum update. > [Lukasz] In case of outbound inline protocol checksum has to be calculated by HW as final packet is formed by crypto device. There is no need to calculate it in SW. 
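For reference, the SW checksum fallback raised in the review would amount to the standard Internet checksum over the IPv4 header; a minimal DPDK-free sketch follows (in the real application one would rather call DPDK's rte_ipv4_cksum() than hand-roll this):

```c
#include <stdint.h>
#include <stddef.h>

/* Internet checksum (RFC 1071) over an IPv4 header. hdr_len is in
 * bytes (a multiple of 4 for IPv4); the caller must zero the checksum
 * field (bytes 10-11) before computing it. */
static uint16_t
ipv4_hdr_cksum(const uint8_t *hdr, size_t hdr_len)
{
	uint32_t sum = 0;

	/* Sum the header as 16-bit big-endian words */
	for (size_t i = 0; i < hdr_len; i += 2)
		sum += (uint32_t)hdr[i] << 8 | hdr[i + 1];
	/* Fold the carries back into the low 16 bits */
	while (sum >> 16)
		sum = (sum & 0xffff) + (sum >> 16);
	return (uint16_t)~sum;
}
```

Verifying a received header is the same computation with the checksum field left in place: the result is 0 for a valid header.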
>> + return 1; > > It will be better to use some MACROS while returning > Like > #define PKT_FORWARD 1 > #define PKT_DROPPED 0 > #define PKT_POSTED 2 /*may be for lookaside cases */ > >> + >> +drop_pkt_and_exit: >> + RTE_LOG(ERR, IPSEC, "Outbound packet dropped\n"); >> + rte_pktmbuf_free(pkt); >> + ev->mbuf = NULL; >> + return 0; >> +} >> + >> /* >> * Event mode exposes various operating modes depending on the >> * capabilities of the event device and the operating mode >> @@ -68,7 +392,7 @@ prepare_out_sessions_tbl(struct sa_ctx *sa_out, >> */ >> >> /* Workers registered */ >> -#define IPSEC_EVENTMODE_WORKERS 1 >> +#define IPSEC_EVENTMODE_WORKERS 2 >> >> /* >> * Event mode worker >> @@ -146,7 +470,7 @@ ipsec_wrkr_non_burst_int_port_drv_mode(struct >> eh_event_link_info *links, >> } >> >> /* Save security session */ >> - pkt->udata64 = (uint64_t) sess_tbl[port_id]; >> + pkt->userdata = sess_tbl[port_id]; >> >> /* Mark the packet for Tx security offload */ >> pkt->ol_flags |= PKT_TX_SEC_OFFLOAD; >> @@ -165,6 +489,94 @@ ipsec_wrkr_non_burst_int_port_drv_mode(struct >> eh_event_link_info *links, >> } >> } >> >> +/* >> + * Event mode worker >> + * Operating parameters : non-burst - Tx internal port - app mode >> + */ >> +static void >> +ipsec_wrkr_non_burst_int_port_app_mode(struct eh_event_link_info *links, >> + uint8_t nb_links) >> +{ >> + struct lcore_conf_ev_tx_int_port_wrkr lconf; >> + unsigned int nb_rx = 0; >> + struct rte_event ev; >> + uint32_t lcore_id; >> + int32_t socket_id; >> + int ret; >> + >> + /* Check if we have links registered for this lcore */ >> + if (nb_links == 0) { >> + /* No links registered - exit */ >> + return; >> + } >> + >> + /* We have valid links */ >> + >> + /* Get core ID */ >> + lcore_id = rte_lcore_id(); >> + >> + /* Get socket ID */ >> + socket_id = rte_lcore_to_socket_id(lcore_id); >> + >> + /* Save routing table */ >> + lconf.rt.rt4_ctx = socket_ctx[socket_id].rt_ip4; >> + lconf.rt.rt6_ctx = socket_ctx[socket_id].rt_ip6; >> + 
lconf.inbound.sp4_ctx = socket_ctx[socket_id].sp_ip4_in; >> + lconf.inbound.sp6_ctx = socket_ctx[socket_id].sp_ip6_in; >> + lconf.inbound.sa_ctx = socket_ctx[socket_id].sa_in; >> + lconf.inbound.session_pool = socket_ctx[socket_id].session_pool; > > Session_priv_pool should also be added for both inbound and outbound > [Lukasz] I will add it in V5. >> + lconf.outbound.sp4_ctx = socket_ctx[socket_id].sp_ip4_out; >> + lconf.outbound.sp6_ctx = socket_ctx[socket_id].sp_ip6_out; >> + lconf.outbound.sa_ctx = socket_ctx[socket_id].sa_out; >> + lconf.outbound.session_pool = socket_ctx[socket_id].session_pool; >> + >> + RTE_LOG(INFO, IPSEC, >> + "Launching event mode worker (non-burst - Tx internal port - " >> + "app mode) on lcore %d\n", lcore_id); >> + >> + /* Check if it's single link */ >> + if (nb_links != 1) { >> + RTE_LOG(INFO, IPSEC, >> + "Multiple links not supported. Using first link\n"); >> + } >> + >> + RTE_LOG(INFO, IPSEC, " -- lcoreid=%u event_port_id=%u\n", lcore_id, >> + links[0].event_port_id); >> + >> + while (!force_quit) { >> + /* Read packet from event queues */ >> + nb_rx = rte_event_dequeue_burst(links[0].eventdev_id, >> + links[0].event_port_id, >> + &ev, /* events */ >> + 1, /* nb_events */ >> + 0 /* timeout_ticks */); >> + >> + if (nb_rx == 0) >> + continue; >> + > > Event type should be checked here before dereferencing it. > [Lukasz] I will add event type check in V5. >> + if (is_unprotected_port(ev.mbuf->port)) >> + ret = process_ipsec_ev_inbound(&lconf.inbound, >> + &lconf.rt, &ev); >> + else >> + ret = process_ipsec_ev_outbound(&lconf.outbound, >> + &lconf.rt, &ev); >> + if (ret != 1) >> + /* The pkt has been dropped */ >> + continue; >> + >> + /* >> + * Since tx internal port is available, events can be >> + * directly enqueued to the adapter and it would be >> + * internally submitted to the eth device. 
>> + */ >> + rte_event_eth_tx_adapter_enqueue(links[0].eventdev_id, >> + links[0].event_port_id, >> + &ev, /* events */ >> + 1, /* nb_events */ >> + 0 /* flags */); >> + } >> +} >> + >> static uint8_t >> ipsec_eventmode_populate_wrkr_params(struct eh_app_worker_params >> *wrkrs) >> { >> @@ -180,6 +592,14 @@ ipsec_eventmode_populate_wrkr_params(struct >> eh_app_worker_params *wrkrs) >> wrkr->cap.ipsec_mode = EH_IPSEC_MODE_TYPE_DRIVER; >> wrkr->worker_thread = ipsec_wrkr_non_burst_int_port_drv_mode; >> wrkr++; >> + nb_wrkr_param++; >> + >> + /* Non-burst - Tx internal port - app mode */ >> + wrkr->cap.burst = EH_RX_TYPE_NON_BURST; >> + wrkr->cap.tx_internal_port = EH_TX_TYPE_INTERNAL_PORT; >> + wrkr->cap.ipsec_mode = EH_IPSEC_MODE_TYPE_APP; >> + wrkr->worker_thread = ipsec_wrkr_non_burst_int_port_app_mode; >> + nb_wrkr_param++; >> >> return nb_wrkr_param; >> } >> diff --git a/examples/ipsec-secgw/ipsec_worker.h b/examples/ipsec- >> secgw/ipsec_worker.h >> new file mode 100644 >> index 0000000..87b4f22 >> --- /dev/null >> +++ b/examples/ipsec-secgw/ipsec_worker.h >> @@ -0,0 +1,35 @@ >> +/* SPDX-License-Identifier: BSD-3-Clause >> + * Copyright (C) 2020 Marvell International Ltd. >> + */ >> +#ifndef _IPSEC_WORKER_H_ >> +#define _IPSEC_WORKER_H_ >> + >> +#include "ipsec.h" >> + >> +enum pkt_type { >> + PKT_TYPE_PLAIN_IPV4 = 1, >> + PKT_TYPE_IPSEC_IPV4, >> + PKT_TYPE_PLAIN_IPV6, >> + PKT_TYPE_IPSEC_IPV6, >> + PKT_TYPE_INVALID >> +}; >> + >> +struct route_table { >> + struct rt_ctx *rt4_ctx; >> + struct rt_ctx *rt6_ctx; >> +}; >> + >> +/* >> + * Conf required by event mode worker with tx internal port >> + */ >> +struct lcore_conf_ev_tx_int_port_wrkr { >> + struct ipsec_ctx inbound; >> + struct ipsec_ctx outbound; >> + struct route_table rt; >> +} __rte_cache_aligned; >> + >> +void ipsec_poll_mode_worker(void); >> + >> +int ipsec_launch_one_lcore(void *args); >> + >> +#endif /* _IPSEC_WORKER_H_ */ >> -- >> 2.7.4 > ^ permalink raw reply [flat|nested] 147+ messages in thread
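The named return codes suggested in the review above could be sketched as follows (the names and the PKT_POSTED semantics are only the reviewer's proposal, not part of the patch; the stub merely stands in for process_ipsec_ev_inbound()/_outbound()):

```c
/* Hypothetical status codes for the worker's process functions,
 * following the review suggestion; names are illustrative only. */
enum pkt_status {
	PKT_DROPPED = 0, /* mbuf freed, nothing to enqueue         */
	PKT_FORWARD = 1, /* event updated, enqueue to Tx adapter   */
	PKT_POSTED  = 2  /* queued to a lookaside crypto device    */
};

/* Stub standing in for process_ipsec_ev_inbound()/_outbound() */
static enum pkt_status
process_stub(int drop)
{
	return drop ? PKT_DROPPED : PKT_FORWARD;
}
```

The worker loop would then read `if (ret != PKT_FORWARD) continue;` instead of comparing against a bare 1, which also leaves room for the lookaside case.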
* Re: [dpdk-dev] [EXT] RE: [PATCH v4 12/15] examples/ipsec-secgw: add app mode worker 2020-02-25 11:50 ` [dpdk-dev] [EXT] " Lukas Bartosik @ 2020-02-25 12:13 ` Anoob Joseph 2020-02-25 16:03 ` Ananyev, Konstantin 2020-02-26 6:04 ` Akhil Goyal 1 sibling, 1 reply; 147+ messages in thread From: Anoob Joseph @ 2020-02-25 12:13 UTC (permalink / raw) To: Akhil Goyal, Konstantin Ananyev Cc: Jerin Jacob Kollanukkaran, Narayana Prasad Raju Athreya, Ankur Dwivedi, Archana Muniganti, Tejasree Kondoj, Vamsi Krishna Attunuru, dev, Thomas Monjalon, Radu Nicolau, Lukas Bartosik Hi Akhil, Konstantin, One question below. Thanks, Anoob > -----Original Message----- > From: Lukas Bartosik <lbartosik@marvell.com> > Sent: Tuesday, February 25, 2020 5:21 PM > To: Akhil Goyal <akhil.goyal@nxp.com>; Anoob Joseph <anoobj@marvell.com> > Cc: Jerin Jacob Kollanukkaran <jerinj@marvell.com>; Narayana Prasad Raju > Athreya <pathreya@marvell.com>; Ankur Dwivedi <adwivedi@marvell.com>; > Archana Muniganti <marchana@marvell.com>; Tejasree Kondoj > <ktejasree@marvell.com>; Vamsi Krishna Attunuru <vattunuru@marvell.com>; > Konstantin Ananyev <konstantin.ananyev@intel.com>; dev@dpdk.org; Thomas > Monjalon <thomas@monjalon.net>; Radu Nicolau <radu.nicolau@intel.com> > Subject: Re: [EXT] RE: [PATCH v4 12/15] examples/ipsec-secgw: add app mode > worker > > Hi Akhil, > > Please see my answers below. > > Thanks, > Lukasz > > On 24.02.2020 15:13, Akhil Goyal wrote: > > External Email > > > > ---------------------------------------------------------------------- > > Hi Lukasz/Anoob, > > > >> > >> Add application inbound/outbound worker thread and IPsec application > >> processing code for event mode. 
> >> > >> Example ipsec-secgw command in app mode: > >> ipsec-secgw -w 0002:02:00.0,ipsec_in_max_spi=128 -w > >> 0002:03:00.0,ipsec_in_max_spi=128 -w 0002:0e:00.0 -w 0002:10:00.1 > >> --log-level=8 -c 0x1 -- -P -p 0x3 -u 0x1 --config "(1,0,0),(0,0,0)" > >> -f aes-gcm.cfg --transfer-mode event --event-schedule-type parallel > >> > >> Signed-off-by: Anoob Joseph <anoobj@marvell.com> > >> Signed-off-by: Ankur Dwivedi <adwivedi@marvell.com> > >> Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> > >> --- > > > > ... > > > >> +static inline enum pkt_type > >> +process_ipsec_get_pkt_type(struct rte_mbuf *pkt, uint8_t **nlp) { > >> + struct rte_ether_hdr *eth; > >> + > >> + eth = rte_pktmbuf_mtod(pkt, struct rte_ether_hdr *); > >> + if (eth->ether_type == rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4)) { > >> + *nlp = RTE_PTR_ADD(eth, RTE_ETHER_HDR_LEN + > >> + offsetof(struct ip, ip_p)); > >> + if (**nlp == IPPROTO_ESP) > >> + return PKT_TYPE_IPSEC_IPV4; > >> + else > >> + return PKT_TYPE_PLAIN_IPV4; > >> + } else if (eth->ether_type == > >> +rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6)) > >> { > >> + *nlp = RTE_PTR_ADD(eth, RTE_ETHER_HDR_LEN + > >> + offsetof(struct ip6_hdr, ip6_nxt)); > >> + if (**nlp == IPPROTO_ESP) > >> + return PKT_TYPE_IPSEC_IPV6; > >> + else > >> + return PKT_TYPE_PLAIN_IPV6; > >> + } > >> + > >> + /* Unknown/Unsupported type */ > >> + return PKT_TYPE_INVALID; > >> +} > >> + > >> +static inline void > >> +update_mac_addrs(struct rte_mbuf *pkt, uint16_t portid) { > >> + struct rte_ether_hdr *ethhdr; > >> + > >> + ethhdr = rte_pktmbuf_mtod(pkt, struct rte_ether_hdr *); > >> + memcpy(ðhdr->s_addr, ðaddr_tbl[portid].src, > >> RTE_ETHER_ADDR_LEN); > >> + memcpy(ðhdr->d_addr, ðaddr_tbl[portid].dst, > >> RTE_ETHER_ADDR_LEN); > >> +} > >> > >> static inline void > >> ipsec_event_pre_forward(struct rte_mbuf *m, unsigned int port_id) @@ > >> -61,6 +101,290 @@ prepare_out_sessions_tbl(struct sa_ctx *sa_out, > >> } > >> } > >> > >> +static inline int > >> 
+check_sp(struct sp_ctx *sp, const uint8_t *nlp, uint32_t *sa_idx) { > >> + uint32_t res; > >> + > >> + if (unlikely(sp == NULL)) > >> + return 0; > >> + > >> + rte_acl_classify((struct rte_acl_ctx *)sp, &nlp, &res, 1, > >> + DEFAULT_MAX_CATEGORIES); > >> + > >> + if (unlikely(res == 0)) { > >> + /* No match */ > >> + return 0; > >> + } > >> + > >> + if (res == DISCARD) > >> + return 0; > >> + else if (res == BYPASS) { > >> + *sa_idx = -1; > >> + return 1; > >> + } > >> + > >> + *sa_idx = res - 1; > >> + return 1; > >> +} > >> + > >> +static inline uint16_t > >> +route4_pkt(struct rte_mbuf *pkt, struct rt_ctx *rt_ctx) { > >> + uint32_t dst_ip; > >> + uint16_t offset; > >> + uint32_t hop; > >> + int ret; > >> + > >> + offset = RTE_ETHER_HDR_LEN + offsetof(struct ip, ip_dst); > >> + dst_ip = *rte_pktmbuf_mtod_offset(pkt, uint32_t *, offset); > >> + dst_ip = rte_be_to_cpu_32(dst_ip); > >> + > >> + ret = rte_lpm_lookup((struct rte_lpm *)rt_ctx, dst_ip, &hop); > >> + > >> + if (ret == 0) { > >> + /* We have a hit */ > >> + return hop; > >> + } > >> + > >> + /* else */ > >> + return RTE_MAX_ETHPORTS; > >> +} > >> + > >> +/* TODO: To be tested */ > >> +static inline uint16_t > >> +route6_pkt(struct rte_mbuf *pkt, struct rt_ctx *rt_ctx) { > >> + uint8_t dst_ip[16]; > >> + uint8_t *ip6_dst; > >> + uint16_t offset; > >> + uint32_t hop; > >> + int ret; > >> + > >> + offset = RTE_ETHER_HDR_LEN + offsetof(struct ip6_hdr, ip6_dst); > >> + ip6_dst = rte_pktmbuf_mtod_offset(pkt, uint8_t *, offset); > >> + memcpy(&dst_ip[0], ip6_dst, 16); > >> + > >> + ret = rte_lpm6_lookup((struct rte_lpm6 *)rt_ctx, dst_ip, &hop); > >> + > >> + if (ret == 0) { > >> + /* We have a hit */ > >> + return hop; > >> + } > >> + > >> + /* else */ > >> + return RTE_MAX_ETHPORTS; > >> +} > >> + > >> +static inline uint16_t > >> +get_route(struct rte_mbuf *pkt, struct route_table *rt, enum > >> +pkt_type type) { > >> + if (type == PKT_TYPE_PLAIN_IPV4 || type == PKT_TYPE_IPSEC_IPV4) > >> + return 
route4_pkt(pkt, rt->rt4_ctx); > >> + else if (type == PKT_TYPE_PLAIN_IPV6 || type == PKT_TYPE_IPSEC_IPV6) > >> + return route6_pkt(pkt, rt->rt6_ctx); > >> + > >> + return RTE_MAX_ETHPORTS; > >> +} > > > > Is it not possible to use the existing functions for finding routes, checking > packet types and checking security policies? > > It will be very difficult to manage two separate functions for the same > > work. I can see that the pkt->data_offs are not required to be updated > > in the inline case, but can we split the existing functions in two so that they can > be called in the appropriate cases? > > > > As you have said in the cover note, lookaside protocol > > support is to be added as well. I also tried adding it, and it will get very difficult to manage separate > functions for separate code paths. > > > > [Lukasz] This was also Konstantin's comment during review of one of the previous > revisions. > The prepare_one_packet() and prepare_tx_pkt() functions do much more than we need, > so for performance reasons we crafted new functions. For example, the > process_ipsec_get_pkt_type() function returns nlp and whether the packet type is > plain or IPsec. That's all. prepare_one_packet() processes packets in chunks and > does much more: it adjusts the mbuf and packet length, then demultiplexes packets > into plain and IPsec flows and finally does inline checks. The same applies to > update_mac_addrs() vs prepare_tx_pkt() and check_sp() vs inbound_sp_sa(): > prepare_tx_pkt() and inbound_sp_sa() do more than we need in event mode. > > I understand your concern from the perspective of code maintenance, but on the > other hand we are concerned with performance. > The current code is not optimized to support the multiple processing modes > introduced with rte_security. We can work on common routines once the > other modes are also added, so that we can come up with a better solution than > what we have today.
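The route4_pkt()/route6_pkt() helpers quoted above boil down to a longest-prefix match that returns RTE_MAX_ETHPORTS on a miss. A self-contained sketch of that contract (a naive linear LPM standing in for rte_lpm_lookup(), which uses DIR-24-8 tables internally; names and values here are illustrative):

```c
#include <stdint.h>

#define MAX_ETHPORTS 32 /* stand-in for RTE_MAX_ETHPORTS */

struct route {
	uint32_t prefix; /* host byte order */
	uint8_t  depth;  /* prefix length, 0..32 */
	uint16_t hop;    /* egress port */
};

/* Linear longest-prefix match over a small table; a miss returns
 * MAX_ETHPORTS, exactly the convention route4_pkt() relies on. */
static uint16_t
route4_lookup(const struct route *tbl, int n, uint32_t dst_ip)
{
	int best_depth = -1;
	uint16_t hop = MAX_ETHPORTS;

	for (int i = 0; i < n; i++) {
		uint32_t mask = tbl[i].depth == 0 ?
			0 : ~UINT32_C(0) << (32 - tbl[i].depth);
		if ((dst_ip & mask) == (tbl[i].prefix & mask) &&
		    tbl[i].depth > best_depth) {
			best_depth = tbl[i].depth;
			hop = tbl[i].hop;
		}
	}
	return hop;
}
```

The caller-side check `if (port_id == RTE_MAX_ETHPORTS) goto drop_pkt_and_exit;` in the patch is then just the miss branch of this lookup.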
> > >> + > >> +static inline int > >> +process_ipsec_ev_inbound(struct ipsec_ctx *ctx, struct route_table *rt, > >> + struct rte_event *ev) > >> +{ > >> + struct ipsec_sa *sa = NULL; > >> + struct rte_mbuf *pkt; > >> + uint16_t port_id = 0; > >> + enum pkt_type type; > >> + uint32_t sa_idx; > >> + uint8_t *nlp; > >> + > >> + /* Get pkt from event */ > >> + pkt = ev->mbuf; > >> + > >> + /* Check the packet type */ > >> + type = process_ipsec_get_pkt_type(pkt, &nlp); > >> + > >> + switch (type) { > >> + case PKT_TYPE_PLAIN_IPV4: > >> + if (pkt->ol_flags & PKT_RX_SEC_OFFLOAD) { > >> + if (unlikely(pkt->ol_flags & > >> + PKT_RX_SEC_OFFLOAD_FAILED)) { > >> + RTE_LOG(ERR, IPSEC, > >> + "Inbound security offload failed\n"); > >> + goto drop_pkt_and_exit; > >> + } > >> + sa = pkt->userdata; > >> + } > >> + > >> + /* Check if we have a match */ > >> + if (check_sp(ctx->sp4_ctx, nlp, &sa_idx) == 0) { > >> + /* No valid match */ > >> + goto drop_pkt_and_exit; > >> + } > >> + break; > >> + > >> + case PKT_TYPE_PLAIN_IPV6: > >> + if (pkt->ol_flags & PKT_RX_SEC_OFFLOAD) { > >> + if (unlikely(pkt->ol_flags & > >> + PKT_RX_SEC_OFFLOAD_FAILED)) { > >> + RTE_LOG(ERR, IPSEC, > >> + "Inbound security offload failed\n"); > >> + goto drop_pkt_and_exit; > >> + } > >> + sa = pkt->userdata; > >> + } > >> + > >> + /* Check if we have a match */ > >> + if (check_sp(ctx->sp6_ctx, nlp, &sa_idx) == 0) { > >> + /* No valid match */ > >> + goto drop_pkt_and_exit; > >> + } > >> + break; > >> + > >> + default: > >> + RTE_LOG(ERR, IPSEC, "Unsupported packet type = %d\n", type); > >> + goto drop_pkt_and_exit; > >> + } > >> + > >> + /* Check if the packet has to be bypassed */ > >> + if (sa_idx == BYPASS) > >> + goto route_and_send_pkt; > >> + > >> + /* Validate sa_idx */ > >> + if (sa_idx >= ctx->sa_ctx->nb_sa) > >> + goto drop_pkt_and_exit; > >> + > >> + /* Else the packet has to be protected with SA */ > >> + > >> + /* If the packet was IPsec processed, then SA pointer should be set */ > >> + if 
(sa == NULL) > >> + goto drop_pkt_and_exit; > >> + > >> + /* SPI on the packet should match with the one in SA */ > >> + if (unlikely(sa->spi != ctx->sa_ctx->sa[sa_idx].spi)) > >> + goto drop_pkt_and_exit; > >> + > >> +route_and_send_pkt: > >> + port_id = get_route(pkt, rt, type); > >> + if (unlikely(port_id == RTE_MAX_ETHPORTS)) { > >> + /* no match */ > >> + goto drop_pkt_and_exit; > >> + } > >> + /* else, we have a matching route */ > >> + > >> + /* Update mac addresses */ > >> + update_mac_addrs(pkt, port_id); > >> + > >> + /* Update the event with the dest port */ > >> + ipsec_event_pre_forward(pkt, port_id); > >> + return 1; > >> + > >> +drop_pkt_and_exit: > >> + RTE_LOG(ERR, IPSEC, "Inbound packet dropped\n"); > >> + rte_pktmbuf_free(pkt); > >> + ev->mbuf = NULL; > >> + return 0; > >> +} > >> + > >> +static inline int > >> +process_ipsec_ev_outbound(struct ipsec_ctx *ctx, struct route_table *rt, > >> + struct rte_event *ev) > >> +{ > >> + struct rte_ipsec_session *sess; > >> + struct sa_ctx *sa_ctx; > >> + struct rte_mbuf *pkt; > >> + uint16_t port_id = 0; > >> + struct ipsec_sa *sa; > >> + enum pkt_type type; > >> + uint32_t sa_idx; > >> + uint8_t *nlp; > >> + > >> + /* Get pkt from event */ > >> + pkt = ev->mbuf; > >> + > >> + /* Check the packet type */ > >> + type = process_ipsec_get_pkt_type(pkt, &nlp); > >> + > >> + switch (type) { > >> + case PKT_TYPE_PLAIN_IPV4: > >> + /* Check if we have a match */ > >> + if (check_sp(ctx->sp4_ctx, nlp, &sa_idx) == 0) { > >> + /* No valid match */ > >> + goto drop_pkt_and_exit; > >> + } > >> + break; > >> + case PKT_TYPE_PLAIN_IPV6: > >> + /* Check if we have a match */ > >> + if (check_sp(ctx->sp6_ctx, nlp, &sa_idx) == 0) { > >> + /* No valid match */ > >> + goto drop_pkt_and_exit; > >> + } > >> + break; > >> + default: > >> + /* > >> + * Only plain IPv4 & IPv6 packets are allowed > >> + * on protected port. Drop the rest. 
> >> + */ > >> + RTE_LOG(ERR, IPSEC, "Unsupported packet type = %d\n", type); > >> + goto drop_pkt_and_exit; > >> + } > >> + > >> + /* Check if the packet has to be bypassed */ > >> + if (sa_idx == BYPASS) { > >> + port_id = get_route(pkt, rt, type); > >> + if (unlikely(port_id == RTE_MAX_ETHPORTS)) { > >> + /* no match */ > >> + goto drop_pkt_and_exit; > >> + } > >> + /* else, we have a matching route */ > >> + goto send_pkt; > >> + } > >> + > >> + /* Validate sa_idx */ > >> + if (sa_idx >= ctx->sa_ctx->nb_sa) > >> + goto drop_pkt_and_exit; > >> + > >> + /* Else the packet has to be protected */ > >> + > >> + /* Get SA ctx*/ > >> + sa_ctx = ctx->sa_ctx; > >> + > >> + /* Get SA */ > >> + sa = &(sa_ctx->sa[sa_idx]); > >> + > >> + /* Get IPsec session */ > >> + sess = ipsec_get_primary_session(sa); > >> + > >> + /* Allow only inline protocol for now */ > >> + if (sess->type != RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL) { > >> + RTE_LOG(ERR, IPSEC, "SA type not supported\n"); > >> + goto drop_pkt_and_exit; > >> + } > >> + > >> + if (sess->security.ol_flags & RTE_SECURITY_TX_OLOAD_NEED_MDATA) > >> + pkt->userdata = sess->security.ses; > >> + > >> + /* Mark the packet for Tx security offload */ > >> + pkt->ol_flags |= PKT_TX_SEC_OFFLOAD; > >> + > >> + /* Get the port to which this pkt need to be submitted */ > >> + port_id = sa->portid; > >> + > >> +send_pkt: > >> + /* Update mac addresses */ > >> + update_mac_addrs(pkt, port_id); > >> + > >> + /* Update the event with the dest port */ > >> + ipsec_event_pre_forward(pkt, port_id); > > > > How is IP checksum getting updated for the processed packet. > > If the hardware is not updating it, should we add a fallback mechanism > > for SW based Checksum update. > > > > [Lukasz] In case of outbound inline protocol checksum has to be calculated by > HW as final packet is formed by crypto device. There is no need to calculate it in > SW. 
> > >> + return 1; > > > > It will be better to use some MACROS while returning Like > > #define PKT_FORWARD 1 > > #define PKT_DROPPED 0 > > #define PKT_POSTED 2 /*may be for lookaside cases */ > > > >> + > >> +drop_pkt_and_exit: > >> + RTE_LOG(ERR, IPSEC, "Outbound packet dropped\n"); > >> + rte_pktmbuf_free(pkt); > >> + ev->mbuf = NULL; > >> + return 0; > >> +} > >> + > >> /* > >> * Event mode exposes various operating modes depending on the > >> * capabilities of the event device and the operating mode @@ -68,7 > >> +392,7 @@ prepare_out_sessions_tbl(struct sa_ctx *sa_out, > >> */ > >> > >> /* Workers registered */ > >> -#define IPSEC_EVENTMODE_WORKERS 1 > >> +#define IPSEC_EVENTMODE_WORKERS 2 > >> > >> /* > >> * Event mode worker > >> @@ -146,7 +470,7 @@ ipsec_wrkr_non_burst_int_port_drv_mode(struct > >> eh_event_link_info *links, > >> } > >> > >> /* Save security session */ > >> - pkt->udata64 = (uint64_t) sess_tbl[port_id]; > >> + pkt->userdata = sess_tbl[port_id]; > >> > >> /* Mark the packet for Tx security offload */ > >> pkt->ol_flags |= PKT_TX_SEC_OFFLOAD; @@ -165,6 > +489,94 @@ > >> ipsec_wrkr_non_burst_int_port_drv_mode(struct > >> eh_event_link_info *links, > >> } > >> } > >> > >> +/* > >> + * Event mode worker > >> + * Operating parameters : non-burst - Tx internal port - app mode > >> +*/ static void ipsec_wrkr_non_burst_int_port_app_mode(struct > >> +eh_event_link_info *links, > >> + uint8_t nb_links) > >> +{ > >> + struct lcore_conf_ev_tx_int_port_wrkr lconf; > >> + unsigned int nb_rx = 0; > >> + struct rte_event ev; > >> + uint32_t lcore_id; > >> + int32_t socket_id; > >> + int ret; > >> + > >> + /* Check if we have links registered for this lcore */ > >> + if (nb_links == 0) { > >> + /* No links registered - exit */ > >> + return; > >> + } > >> + > >> + /* We have valid links */ > >> + > >> + /* Get core ID */ > >> + lcore_id = rte_lcore_id(); > >> + > >> + /* Get socket ID */ > >> + socket_id = rte_lcore_to_socket_id(lcore_id); > >> + > >> + /* 
Save routing table */ > >> + lconf.rt.rt4_ctx = socket_ctx[socket_id].rt_ip4; > >> + lconf.rt.rt6_ctx = socket_ctx[socket_id].rt_ip6; > >> + lconf.inbound.sp4_ctx = socket_ctx[socket_id].sp_ip4_in; > >> + lconf.inbound.sp6_ctx = socket_ctx[socket_id].sp_ip6_in; > >> + lconf.inbound.sa_ctx = socket_ctx[socket_id].sa_in; > >> + lconf.inbound.session_pool = socket_ctx[socket_id].session_pool; > > > > Session_priv_pool should also be added for both inbound and outbound > > > > [Lukasz] I will add it in V5. [Anoob] Actually, why do need both session_pool and private_pool? I think it's a remnant from the time we had session being created when the first packet arrives. @Konstantin, thoughts? > > >> + lconf.outbound.sp4_ctx = socket_ctx[socket_id].sp_ip4_out; > >> + lconf.outbound.sp6_ctx = socket_ctx[socket_id].sp_ip6_out; > >> + lconf.outbound.sa_ctx = socket_ctx[socket_id].sa_out; > >> + lconf.outbound.session_pool = socket_ctx[socket_id].session_pool; > >> + > >> + RTE_LOG(INFO, IPSEC, > >> + "Launching event mode worker (non-burst - Tx internal port - " > >> + "app mode) on lcore %d\n", lcore_id); > >> + > >> + /* Check if it's single link */ > >> + if (nb_links != 1) { > >> + RTE_LOG(INFO, IPSEC, > >> + "Multiple links not supported. Using first link\n"); > >> + } > >> + > >> + RTE_LOG(INFO, IPSEC, " -- lcoreid=%u event_port_id=%u\n", lcore_id, > >> + links[0].event_port_id); > >> + > >> + while (!force_quit) { > >> + /* Read packet from event queues */ > >> + nb_rx = rte_event_dequeue_burst(links[0].eventdev_id, > >> + links[0].event_port_id, > >> + &ev, /* events */ > >> + 1, /* nb_events */ > >> + 0 /* timeout_ticks */); > >> + > >> + if (nb_rx == 0) > >> + continue; > >> + > > > > Event type should be checked here before dereferencing it. > > > > [Lukasz] I will add event type check in V5. 
> > >> + if (is_unprotected_port(ev.mbuf->port)) > >> + ret = process_ipsec_ev_inbound(&lconf.inbound, > >> + &lconf.rt, &ev); > >> + else > >> + ret = process_ipsec_ev_outbound(&lconf.outbound, > >> + &lconf.rt, &ev); > >> + if (ret != 1) > >> + /* The pkt has been dropped */ > >> + continue; > >> + > >> + /* > >> + * Since tx internal port is available, events can be > >> + * directly enqueued to the adapter and it would be > >> + * internally submitted to the eth device. > >> + */ > >> + rte_event_eth_tx_adapter_enqueue(links[0].eventdev_id, > >> + links[0].event_port_id, > >> + &ev, /* events */ > >> + 1, /* nb_events */ > >> + 0 /* flags */); > >> + } > >> +} > >> + > >> static uint8_t > >> ipsec_eventmode_populate_wrkr_params(struct eh_app_worker_params > >> *wrkrs) > >> { > >> @@ -180,6 +592,14 @@ ipsec_eventmode_populate_wrkr_params(struct > >> eh_app_worker_params *wrkrs) > >> wrkr->cap.ipsec_mode = EH_IPSEC_MODE_TYPE_DRIVER; > >> wrkr->worker_thread = ipsec_wrkr_non_burst_int_port_drv_mode; > >> wrkr++; > >> + nb_wrkr_param++; > >> + > >> + /* Non-burst - Tx internal port - app mode */ > >> + wrkr->cap.burst = EH_RX_TYPE_NON_BURST; > >> + wrkr->cap.tx_internal_port = EH_TX_TYPE_INTERNAL_PORT; > >> + wrkr->cap.ipsec_mode = EH_IPSEC_MODE_TYPE_APP; > >> + wrkr->worker_thread = ipsec_wrkr_non_burst_int_port_app_mode; > >> + nb_wrkr_param++; > >> > >> return nb_wrkr_param; > >> } > >> diff --git a/examples/ipsec-secgw/ipsec_worker.h b/examples/ipsec- > >> secgw/ipsec_worker.h new file mode 100644 index 0000000..87b4f22 > >> --- /dev/null > >> +++ b/examples/ipsec-secgw/ipsec_worker.h > >> @@ -0,0 +1,35 @@ > >> +/* SPDX-License-Identifier: BSD-3-Clause > >> + * Copyright (C) 2020 Marvell International Ltd. 
> >> + */ > >> +#ifndef _IPSEC_WORKER_H_ > >> +#define _IPSEC_WORKER_H_ > >> + > >> +#include "ipsec.h" > >> + > >> +enum pkt_type { > >> + PKT_TYPE_PLAIN_IPV4 = 1, > >> + PKT_TYPE_IPSEC_IPV4, > >> + PKT_TYPE_PLAIN_IPV6, > >> + PKT_TYPE_IPSEC_IPV6, > >> + PKT_TYPE_INVALID > >> +}; > >> + > >> +struct route_table { > >> + struct rt_ctx *rt4_ctx; > >> + struct rt_ctx *rt6_ctx; > >> +}; > >> + > >> +/* > >> + * Conf required by event mode worker with tx internal port */ > >> +struct lcore_conf_ev_tx_int_port_wrkr { > >> + struct ipsec_ctx inbound; > >> + struct ipsec_ctx outbound; > >> + struct route_table rt; > >> +} __rte_cache_aligned; > >> + > >> +void ipsec_poll_mode_worker(void); > >> + > >> +int ipsec_launch_one_lcore(void *args); > >> + > >> +#endif /* _IPSEC_WORKER_H_ */ > >> -- > >> 2.7.4 > > ^ permalink raw reply [flat|nested] 147+ messages in thread
* Re: [dpdk-dev] [EXT] RE: [PATCH v4 12/15] examples/ipsec-secgw: add app mode worker 2020-02-25 12:13 ` Anoob Joseph @ 2020-02-25 16:03 ` Ananyev, Konstantin 2020-02-26 4:33 ` Anoob Joseph 0 siblings, 1 reply; 147+ messages in thread From: Ananyev, Konstantin @ 2020-02-25 16:03 UTC (permalink / raw) To: Anoob Joseph, Akhil Goyal Cc: Jerin Jacob Kollanukkaran, Narayana Prasad Raju Athreya, Ankur Dwivedi, Archana Muniganti, Tejasree Kondoj, Vamsi Krishna Attunuru, dev, Thomas Monjalon, Nicolau, Radu, Lukas Bartosik > > >> Add application inbound/outbound worker thread and IPsec application > > >> processing code for event mode. > > >> > > >> Example ipsec-secgw command in app mode: > > >> ipsec-secgw -w 0002:02:00.0,ipsec_in_max_spi=128 -w > > >> 0002:03:00.0,ipsec_in_max_spi=128 -w 0002:0e:00.0 -w 0002:10:00.1 > > >> --log-level=8 -c 0x1 -- -P -p 0x3 -u 0x1 --config "(1,0,0),(0,0,0)" > > >> -f aes-gcm.cfg --transfer-mode event --event-schedule-type parallel > > >> > > >> Signed-off-by: Anoob Joseph <anoobj@marvell.com> > > >> Signed-off-by: Ankur Dwivedi <adwivedi@marvell.com> > > >> Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> > > >> --- > > > > > > ... 
> > > > > >> +static inline enum pkt_type > > >> +process_ipsec_get_pkt_type(struct rte_mbuf *pkt, uint8_t **nlp) { > > >> + struct rte_ether_hdr *eth; > > >> + > > >> + eth = rte_pktmbuf_mtod(pkt, struct rte_ether_hdr *); > > >> + if (eth->ether_type == rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4)) { > > >> + *nlp = RTE_PTR_ADD(eth, RTE_ETHER_HDR_LEN + > > >> + offsetof(struct ip, ip_p)); > > >> + if (**nlp == IPPROTO_ESP) > > >> + return PKT_TYPE_IPSEC_IPV4; > > >> + else > > >> + return PKT_TYPE_PLAIN_IPV4; > > >> + } else if (eth->ether_type == > > >> +rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6)) > > >> { > > >> + *nlp = RTE_PTR_ADD(eth, RTE_ETHER_HDR_LEN + > > >> + offsetof(struct ip6_hdr, ip6_nxt)); > > >> + if (**nlp == IPPROTO_ESP) > > >> + return PKT_TYPE_IPSEC_IPV6; > > >> + else > > >> + return PKT_TYPE_PLAIN_IPV6; > > >> + } > > >> + > > >> + /* Unknown/Unsupported type */ > > >> + return PKT_TYPE_INVALID; > > >> +} > > >> + > > >> +static inline void > > >> +update_mac_addrs(struct rte_mbuf *pkt, uint16_t portid) { > > >> + struct rte_ether_hdr *ethhdr; > > >> + > > >> + ethhdr = rte_pktmbuf_mtod(pkt, struct rte_ether_hdr *); > > >> + memcpy(&ethhdr->s_addr, &ethaddr_tbl[portid].src, > > >> RTE_ETHER_ADDR_LEN); > > >> + memcpy(&ethhdr->d_addr, &ethaddr_tbl[portid].dst, > > >> RTE_ETHER_ADDR_LEN); > > >> +} > > >> > > >> static inline void > > >> ipsec_event_pre_forward(struct rte_mbuf *m, unsigned int port_id) @@ > > >> -61,6 +101,290 @@ prepare_out_sessions_tbl(struct sa_ctx *sa_out, > > >> } > > >> } > > >> > > >> +static inline int > > >> +check_sp(struct sp_ctx *sp, const uint8_t *nlp, uint32_t *sa_idx) { > > >> + uint32_t res; > > >> + > > >> + if (unlikely(sp == NULL)) > > >> + return 0; > > >> + > > >> + rte_acl_classify((struct rte_acl_ctx *)sp, &nlp, &res, 1, > > >> + DEFAULT_MAX_CATEGORIES); > > >> + > > >> + if (unlikely(res == 0)) { > > >> + /* No match */ > > >> + return 0; > > >> + } > > >> + > > >> + if (res == DISCARD) > > >> + return 0; > > >> + else if (res 
== BYPASS) { > > >> + *sa_idx = -1; > > >> + return 1; > > >> + } > > >> + > > >> + *sa_idx = res - 1; > > >> + return 1; > > >> +} > > >> + > > >> +static inline uint16_t > > >> +route4_pkt(struct rte_mbuf *pkt, struct rt_ctx *rt_ctx) { > > >> + uint32_t dst_ip; > > >> + uint16_t offset; > > >> + uint32_t hop; > > >> + int ret; > > >> + > > >> + offset = RTE_ETHER_HDR_LEN + offsetof(struct ip, ip_dst); > > >> + dst_ip = *rte_pktmbuf_mtod_offset(pkt, uint32_t *, offset); > > >> + dst_ip = rte_be_to_cpu_32(dst_ip); > > >> + > > >> + ret = rte_lpm_lookup((struct rte_lpm *)rt_ctx, dst_ip, &hop); > > >> + > > >> + if (ret == 0) { > > >> + /* We have a hit */ > > >> + return hop; > > >> + } > > >> + > > >> + /* else */ > > >> + return RTE_MAX_ETHPORTS; > > >> +} > > >> + > > >> +/* TODO: To be tested */ > > >> +static inline uint16_t > > >> +route6_pkt(struct rte_mbuf *pkt, struct rt_ctx *rt_ctx) { > > >> + uint8_t dst_ip[16]; > > >> + uint8_t *ip6_dst; > > >> + uint16_t offset; > > >> + uint32_t hop; > > >> + int ret; > > >> + > > >> + offset = RTE_ETHER_HDR_LEN + offsetof(struct ip6_hdr, ip6_dst); > > >> + ip6_dst = rte_pktmbuf_mtod_offset(pkt, uint8_t *, offset); > > >> + memcpy(&dst_ip[0], ip6_dst, 16); > > >> + > > >> + ret = rte_lpm6_lookup((struct rte_lpm6 *)rt_ctx, dst_ip, &hop); > > >> + > > >> + if (ret == 0) { > > >> + /* We have a hit */ > > >> + return hop; > > >> + } > > >> + > > >> + /* else */ > > >> + return RTE_MAX_ETHPORTS; > > >> +} > > >> + > > >> +static inline uint16_t > > >> +get_route(struct rte_mbuf *pkt, struct route_table *rt, enum > > >> +pkt_type type) { > > >> + if (type == PKT_TYPE_PLAIN_IPV4 || type == PKT_TYPE_IPSEC_IPV4) > > >> + return route4_pkt(pkt, rt->rt4_ctx); > > >> + else if (type == PKT_TYPE_PLAIN_IPV6 || type == PKT_TYPE_IPSEC_IPV6) > > >> + return route6_pkt(pkt, rt->rt6_ctx); > > >> + > > >> + return RTE_MAX_ETHPORTS; > > >> +} > > > > > > Is it not possible to use the existing functions for finding routes, checking > > 
packet types and checking security policies. > > > It will be very difficult to manage two separate functions for the same > > > work. I can see that the pkt->data_offs are not required to be updated > > > in the inline case, but can we split the existing functions in two so that they can > > be called in the appropriate cases. > > > > > > As you have said in the cover note as well, lookaside protocol > > > support is to be added. I also tried adding it, and it will get very difficult to manage separate > > functions for separate code paths. > > > > > > > [Lukasz] This was also Konstantin's comment during review of one of the previous > > revisions. > > The prepare_one_packet() and prepare_tx_pkt() do much more than we need > > and for performance reasons we crafted new functions. For example, the > > process_ipsec_get_pkt_type() function returns nlp and whether the packet type is > > plain or IPsec. That's all. prepare_one_packet() processes packets in chunks and > > does much more - it adjusts the mbuf and packet length, then it demultiplexes packets > > into plain and IPsec flows and finally does inline checks. This is similar for > > update_mac_addrs() vs prepare_tx_pkt() and check_sp() vs inbound_sp_sa(), in that > > prepare_tx_pkt() and inbound_sp_sa() do more than we need in event mode. > > > > I understand your concern from the perspective of code maintenance, but on the > > other hand we are concerned with performance. > > The current code is not optimized to support the multiple processing modes > > introduced with rte_security. We can work on common routines once we have > > the other modes added as well, so that we can come up with a better solution than > > what we have today. 
> > > > >> + > > >> +static inline int > > >> +process_ipsec_ev_inbound(struct ipsec_ctx *ctx, struct route_table *rt, > > >> + struct rte_event *ev) > > >> +{ > > >> + struct ipsec_sa *sa = NULL; > > >> + struct rte_mbuf *pkt; > > >> + uint16_t port_id = 0; > > >> + enum pkt_type type; > > >> + uint32_t sa_idx; > > >> + uint8_t *nlp; > > >> + > > >> + /* Get pkt from event */ > > >> + pkt = ev->mbuf; > > >> + > > >> + /* Check the packet type */ > > >> + type = process_ipsec_get_pkt_type(pkt, &nlp); > > >> + > > >> + switch (type) { > > >> + case PKT_TYPE_PLAIN_IPV4: > > >> + if (pkt->ol_flags & PKT_RX_SEC_OFFLOAD) { > > >> + if (unlikely(pkt->ol_flags & > > >> + PKT_RX_SEC_OFFLOAD_FAILED)) { > > >> + RTE_LOG(ERR, IPSEC, > > >> + "Inbound security offload failed\n"); > > >> + goto drop_pkt_and_exit; > > >> + } > > >> + sa = pkt->userdata; > > >> + } > > >> + > > >> + /* Check if we have a match */ > > >> + if (check_sp(ctx->sp4_ctx, nlp, &sa_idx) == 0) { > > >> + /* No valid match */ > > >> + goto drop_pkt_and_exit; > > >> + } > > >> + break; > > >> + > > >> + case PKT_TYPE_PLAIN_IPV6: > > >> + if (pkt->ol_flags & PKT_RX_SEC_OFFLOAD) { > > >> + if (unlikely(pkt->ol_flags & > > >> + PKT_RX_SEC_OFFLOAD_FAILED)) { > > >> + RTE_LOG(ERR, IPSEC, > > >> + "Inbound security offload failed\n"); > > >> + goto drop_pkt_and_exit; > > >> + } > > >> + sa = pkt->userdata; > > >> + } > > >> + > > >> + /* Check if we have a match */ > > >> + if (check_sp(ctx->sp6_ctx, nlp, &sa_idx) == 0) { > > >> + /* No valid match */ > > >> + goto drop_pkt_and_exit; > > >> + } > > >> + break; > > >> + > > >> + default: > > >> + RTE_LOG(ERR, IPSEC, "Unsupported packet type = %d\n", type); > > >> + goto drop_pkt_and_exit; > > >> + } > > >> + > > >> + /* Check if the packet has to be bypassed */ > > >> + if (sa_idx == BYPASS) > > >> + goto route_and_send_pkt; > > >> + > > >> + /* Validate sa_idx */ > > >> + if (sa_idx >= ctx->sa_ctx->nb_sa) > > >> + goto drop_pkt_and_exit; > > >> + > > >> + /* Else 
the packet has to be protected with SA */ > > >> + > > >> + /* If the packet was IPsec processed, then SA pointer should be set */ > > >> + if (sa == NULL) > > >> + goto drop_pkt_and_exit; > > >> + > > >> + /* SPI on the packet should match with the one in SA */ > > >> + if (unlikely(sa->spi != ctx->sa_ctx->sa[sa_idx].spi)) > > >> + goto drop_pkt_and_exit; > > >> + > > >> +route_and_send_pkt: > > >> + port_id = get_route(pkt, rt, type); > > >> + if (unlikely(port_id == RTE_MAX_ETHPORTS)) { > > >> + /* no match */ > > >> + goto drop_pkt_and_exit; > > >> + } > > >> + /* else, we have a matching route */ > > >> + > > >> + /* Update mac addresses */ > > >> + update_mac_addrs(pkt, port_id); > > >> + > > >> + /* Update the event with the dest port */ > > >> + ipsec_event_pre_forward(pkt, port_id); > > >> + return 1; > > >> + > > >> +drop_pkt_and_exit: > > >> + RTE_LOG(ERR, IPSEC, "Inbound packet dropped\n"); > > >> + rte_pktmbuf_free(pkt); > > >> + ev->mbuf = NULL; > > >> + return 0; > > >> +} > > >> + > > >> +static inline int > > >> +process_ipsec_ev_outbound(struct ipsec_ctx *ctx, struct route_table *rt, > > >> + struct rte_event *ev) > > >> +{ > > >> + struct rte_ipsec_session *sess; > > >> + struct sa_ctx *sa_ctx; > > >> + struct rte_mbuf *pkt; > > >> + uint16_t port_id = 0; > > >> + struct ipsec_sa *sa; > > >> + enum pkt_type type; > > >> + uint32_t sa_idx; > > >> + uint8_t *nlp; > > >> + > > >> + /* Get pkt from event */ > > >> + pkt = ev->mbuf; > > >> + > > >> + /* Check the packet type */ > > >> + type = process_ipsec_get_pkt_type(pkt, &nlp); > > >> + > > >> + switch (type) { > > >> + case PKT_TYPE_PLAIN_IPV4: > > >> + /* Check if we have a match */ > > >> + if (check_sp(ctx->sp4_ctx, nlp, &sa_idx) == 0) { > > >> + /* No valid match */ > > >> + goto drop_pkt_and_exit; > > >> + } > > >> + break; > > >> + case PKT_TYPE_PLAIN_IPV6: > > >> + /* Check if we have a match */ > > >> + if (check_sp(ctx->sp6_ctx, nlp, &sa_idx) == 0) { > > >> + /* No valid match */ > > >> 
+ goto drop_pkt_and_exit; > > >> + } > > >> + break; > > >> + default: > > >> + /* > > >> + * Only plain IPv4 & IPv6 packets are allowed > > >> + * on protected port. Drop the rest. > > >> + */ > > >> + RTE_LOG(ERR, IPSEC, "Unsupported packet type = %d\n", type); > > >> + goto drop_pkt_and_exit; > > >> + } > > >> + > > >> + /* Check if the packet has to be bypassed */ > > >> + if (sa_idx == BYPASS) { > > >> + port_id = get_route(pkt, rt, type); > > >> + if (unlikely(port_id == RTE_MAX_ETHPORTS)) { > > >> + /* no match */ > > >> + goto drop_pkt_and_exit; > > >> + } > > >> + /* else, we have a matching route */ > > >> + goto send_pkt; > > >> + } > > >> + > > >> + /* Validate sa_idx */ > > >> + if (sa_idx >= ctx->sa_ctx->nb_sa) > > >> + goto drop_pkt_and_exit; > > >> + > > >> + /* Else the packet has to be protected */ > > >> + > > >> + /* Get SA ctx */ > > >> + sa_ctx = ctx->sa_ctx; > > >> + > > >> + /* Get SA */ > > >> + sa = &(sa_ctx->sa[sa_idx]); > > >> + > > >> + /* Get IPsec session */ > > >> + sess = ipsec_get_primary_session(sa); > > >> + > > >> + /* Allow only inline protocol for now */ > > >> + if (sess->type != RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL) { > > >> + RTE_LOG(ERR, IPSEC, "SA type not supported\n"); > > >> + goto drop_pkt_and_exit; > > >> + } > > >> + > > >> + if (sess->security.ol_flags & RTE_SECURITY_TX_OLFLAG_NEED_MDATA) > > >> + pkt->userdata = sess->security.ses; > > >> + > > >> + /* Mark the packet for Tx security offload */ > > >> + pkt->ol_flags |= PKT_TX_SEC_OFFLOAD; > > >> + > > >> + /* Get the port to which this pkt needs to be submitted */ > > >> + port_id = sa->portid; > > >> + > > >> +send_pkt: > > >> + /* Update mac addresses */ > > >> + update_mac_addrs(pkt, port_id); > > >> + > > >> + /* Update the event with the dest port */ > > >> + ipsec_event_pre_forward(pkt, port_id); > > > > > > How is IP checksum getting updated for the processed packet. 
> > > If the hardware is not updating it, should we add a fallback mechanism > > > for SW based Checksum update. > > > > > > > [Lukasz] In case of outbound inline protocol checksum has to be calculated by > > HW as final packet is formed by crypto device. There is no need to calculate it in > > SW. > > > > >> + return 1; > > > > > > It will be better to use some MACROS while returning Like > > > #define PKT_FORWARD 1 > > > #define PKT_DROPPED 0 > > > #define PKT_POSTED 2 /*may be for lookaside cases */ > > > > > >> + > > >> +drop_pkt_and_exit: > > >> + RTE_LOG(ERR, IPSEC, "Outbound packet dropped\n"); > > >> + rte_pktmbuf_free(pkt); > > >> + ev->mbuf = NULL; > > >> + return 0; > > >> +} > > >> + > > >> /* > > >> * Event mode exposes various operating modes depending on the > > >> * capabilities of the event device and the operating mode @@ -68,7 > > >> +392,7 @@ prepare_out_sessions_tbl(struct sa_ctx *sa_out, > > >> */ > > >> > > >> /* Workers registered */ > > >> -#define IPSEC_EVENTMODE_WORKERS 1 > > >> +#define IPSEC_EVENTMODE_WORKERS 2 > > >> > > >> /* > > >> * Event mode worker > > >> @@ -146,7 +470,7 @@ ipsec_wrkr_non_burst_int_port_drv_mode(struct > > >> eh_event_link_info *links, > > >> } > > >> > > >> /* Save security session */ > > >> - pkt->udata64 = (uint64_t) sess_tbl[port_id]; > > >> + pkt->userdata = sess_tbl[port_id]; > > >> > > >> /* Mark the packet for Tx security offload */ > > >> pkt->ol_flags |= PKT_TX_SEC_OFFLOAD; @@ -165,6 > > +489,94 @@ > > >> ipsec_wrkr_non_burst_int_port_drv_mode(struct > > >> eh_event_link_info *links, > > >> } > > >> } > > >> > > >> +/* > > >> + * Event mode worker > > >> + * Operating parameters : non-burst - Tx internal port - app mode > > >> +*/ static void ipsec_wrkr_non_burst_int_port_app_mode(struct > > >> +eh_event_link_info *links, > > >> + uint8_t nb_links) > > >> +{ > > >> + struct lcore_conf_ev_tx_int_port_wrkr lconf; > > >> + unsigned int nb_rx = 0; > > >> + struct rte_event ev; > > >> + uint32_t lcore_id; > 
> >> + int32_t socket_id; > > >> + int ret; > > >> + > > >> + /* Check if we have links registered for this lcore */ > > >> + if (nb_links == 0) { > > >> + /* No links registered - exit */ > > >> + return; > > >> + } > > >> + > > >> + /* We have valid links */ > > >> + > > >> + /* Get core ID */ > > >> + lcore_id = rte_lcore_id(); > > >> + > > >> + /* Get socket ID */ > > >> + socket_id = rte_lcore_to_socket_id(lcore_id); > > >> + > > >> + /* Save routing table */ > > >> + lconf.rt.rt4_ctx = socket_ctx[socket_id].rt_ip4; > > >> + lconf.rt.rt6_ctx = socket_ctx[socket_id].rt_ip6; > > >> + lconf.inbound.sp4_ctx = socket_ctx[socket_id].sp_ip4_in; > > >> + lconf.inbound.sp6_ctx = socket_ctx[socket_id].sp_ip6_in; > > >> + lconf.inbound.sa_ctx = socket_ctx[socket_id].sa_in; > > >> + lconf.inbound.session_pool = socket_ctx[socket_id].session_pool; > > > > > > Session_priv_pool should also be added for both inbound and outbound > > > > > > > [Lukasz] I will add it in V5. > > [Anoob] Actually, why do need both session_pool and private_pool? I think it's a remnant from the time we had session being created when > the first packet arrives. > > @Konstantin, thoughts? I think we do need it for lksd sessions. See create_lookaside_session() in ipsec.c > > > > > >> + lconf.outbound.sp4_ctx = socket_ctx[socket_id].sp_ip4_out; > > >> + lconf.outbound.sp6_ctx = socket_ctx[socket_id].sp_ip6_out; > > >> + lconf.outbound.sa_ctx = socket_ctx[socket_id].sa_out; > > >> + lconf.outbound.session_pool = socket_ctx[socket_id].session_pool; > > >> + > > >> + RTE_LOG(INFO, IPSEC, > > >> + "Launching event mode worker (non-burst - Tx internal port - " > > >> + "app mode) on lcore %d\n", lcore_id); > > >> + > > >> + /* Check if it's single link */ > > >> + if (nb_links != 1) { > > >> + RTE_LOG(INFO, IPSEC, > > >> + "Multiple links not supported. 
Using first link\n"); > > >> + } > > >> + > > >> + RTE_LOG(INFO, IPSEC, " -- lcoreid=%u event_port_id=%u\n", lcore_id, > > >> + links[0].event_port_id); > > >> + > > >> + while (!force_quit) { > > >> + /* Read packet from event queues */ > > >> + nb_rx = rte_event_dequeue_burst(links[0].eventdev_id, > > >> + links[0].event_port_id, > > >> + &ev, /* events */ > > >> + 1, /* nb_events */ > > >> + 0 /* timeout_ticks */); > > >> + > > >> + if (nb_rx == 0) > > >> + continue; > > >> + > > > > > > Event type should be checked here before dereferencing it. > > > > > > > [Lukasz] I will add event type check in V5. > > > > >> + if (is_unprotected_port(ev.mbuf->port)) > > >> + ret = process_ipsec_ev_inbound(&lconf.inbound, > > >> + &lconf.rt, &ev); > > >> + else > > >> + ret = process_ipsec_ev_outbound(&lconf.outbound, > > >> + &lconf.rt, &ev); > > >> + if (ret != 1) > > >> + /* The pkt has been dropped */ > > >> + continue; > > >> + > > >> + /* > > >> + * Since tx internal port is available, events can be > > >> + * directly enqueued to the adapter and it would be > > >> + * internally submitted to the eth device. 
> > >> + */ > > >> + rte_event_eth_tx_adapter_enqueue(links[0].eventdev_id, > > >> + links[0].event_port_id, > > >> + &ev, /* events */ > > >> + 1, /* nb_events */ > > >> + 0 /* flags */); > > >> + } > > >> +} > > >> + > > >> static uint8_t > > >> ipsec_eventmode_populate_wrkr_params(struct eh_app_worker_params > > >> *wrkrs) > > >> { > > >> @@ -180,6 +592,14 @@ ipsec_eventmode_populate_wrkr_params(struct > > >> eh_app_worker_params *wrkrs) > > >> wrkr->cap.ipsec_mode = EH_IPSEC_MODE_TYPE_DRIVER; > > >> wrkr->worker_thread = ipsec_wrkr_non_burst_int_port_drv_mode; > > >> wrkr++; > > >> + nb_wrkr_param++; > > >> + > > >> + /* Non-burst - Tx internal port - app mode */ > > >> + wrkr->cap.burst = EH_RX_TYPE_NON_BURST; > > >> + wrkr->cap.tx_internal_port = EH_TX_TYPE_INTERNAL_PORT; > > >> + wrkr->cap.ipsec_mode = EH_IPSEC_MODE_TYPE_APP; > > >> + wrkr->worker_thread = ipsec_wrkr_non_burst_int_port_app_mode; > > >> + nb_wrkr_param++; > > >> > > >> return nb_wrkr_param; > > >> } > > >> diff --git a/examples/ipsec-secgw/ipsec_worker.h b/examples/ipsec- > > >> secgw/ipsec_worker.h new file mode 100644 index 0000000..87b4f22 > > >> --- /dev/null > > >> +++ b/examples/ipsec-secgw/ipsec_worker.h > > >> @@ -0,0 +1,35 @@ > > >> +/* SPDX-License-Identifier: BSD-3-Clause > > >> + * Copyright (C) 2020 Marvell International Ltd. 
> > >> + */ > > >> +#ifndef _IPSEC_WORKER_H_ > > >> +#define _IPSEC_WORKER_H_ > > >> + > > >> +#include "ipsec.h" > > >> + > > >> +enum pkt_type { > > >> + PKT_TYPE_PLAIN_IPV4 = 1, > > >> + PKT_TYPE_IPSEC_IPV4, > > >> + PKT_TYPE_PLAIN_IPV6, > > >> + PKT_TYPE_IPSEC_IPV6, > > >> + PKT_TYPE_INVALID > > >> +}; > > >> + > > >> +struct route_table { > > >> + struct rt_ctx *rt4_ctx; > > >> + struct rt_ctx *rt6_ctx; > > >> +}; > > >> + > > >> +/* > > >> + * Conf required by event mode worker with tx internal port */ > > >> +struct lcore_conf_ev_tx_int_port_wrkr { > > >> + struct ipsec_ctx inbound; > > >> + struct ipsec_ctx outbound; > > >> + struct route_table rt; > > >> +} __rte_cache_aligned; > > >> + > > >> +void ipsec_poll_mode_worker(void); > > >> + > > >> +int ipsec_launch_one_lcore(void *args); > > >> + > > >> +#endif /* _IPSEC_WORKER_H_ */ > > >> -- > > >> 2.7.4 > > > ^ permalink raw reply [flat|nested] 147+ messages in thread
* Re: [dpdk-dev] [EXT] RE: [PATCH v4 12/15] examples/ipsec-secgw: add app mode worker 2020-02-25 16:03 ` Ananyev, Konstantin @ 2020-02-26 4:33 ` Anoob Joseph 2020-02-26 5:55 ` Akhil Goyal 0 siblings, 1 reply; 147+ messages in thread From: Anoob Joseph @ 2020-02-26 4:33 UTC (permalink / raw) To: Ananyev, Konstantin, Akhil Goyal Cc: Jerin Jacob Kollanukkaran, Narayana Prasad Raju Athreya, Ankur Dwivedi, Archana Muniganti, Tejasree Kondoj, Vamsi Krishna Attunuru, dev, Thomas Monjalon, Nicolau, Radu, Lukas Bartosik Hi Konstantin, Please see inline. Thanks, Anoob > -----Original Message----- > From: Ananyev, Konstantin <konstantin.ananyev@intel.com> > Sent: Tuesday, February 25, 2020 9:34 PM > To: Anoob Joseph <anoobj@marvell.com>; Akhil Goyal > <akhil.goyal@nxp.com> > Cc: Jerin Jacob Kollanukkaran <jerinj@marvell.com>; Narayana Prasad Raju > Athreya <pathreya@marvell.com>; Ankur Dwivedi > <adwivedi@marvell.com>; Archana Muniganti <marchana@marvell.com>; > Tejasree Kondoj <ktejasree@marvell.com>; Vamsi Krishna Attunuru > <vattunuru@marvell.com>; dev@dpdk.org; Thomas Monjalon > <thomas@monjalon.net>; Nicolau, Radu <radu.nicolau@intel.com>; Lukas > Bartosik <lbartosik@marvell.com> > Subject: RE: [EXT] RE: [PATCH v4 12/15] examples/ipsec-secgw: add app > mode worker > > > > >> Add application inbound/outbound worker thread and IPsec > > > >> application processing code for event mode. > > > >> > > > >> Example ipsec-secgw command in app mode: > > > >> ipsec-secgw -w 0002:02:00.0,ipsec_in_max_spi=128 -w > > > >> 0002:03:00.0,ipsec_in_max_spi=128 -w 0002:0e:00.0 -w 0002:10:00.1 > > > >> --log-level=8 -c 0x1 -- -P -p 0x3 -u 0x1 --config "(1,0,0),(0,0,0)" > > > >> -f aes-gcm.cfg --transfer-mode event --event-schedule-type > > > >> parallel > > > >> > > > >> Signed-off-by: Anoob Joseph <anoobj@marvell.com> > > > >> Signed-off-by: Ankur Dwivedi <adwivedi@marvell.com> > > > >> Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> > > > >> --- > > > > > > > > ... 
> > > > > > > >> +static inline enum pkt_type > > > >> +process_ipsec_get_pkt_type(struct rte_mbuf *pkt, uint8_t **nlp) { > > > >> + struct rte_ether_hdr *eth; > > > >> + > > > >> + eth = rte_pktmbuf_mtod(pkt, struct rte_ether_hdr *); > > > >> + if (eth->ether_type == > rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4)) { > > > >> + *nlp = RTE_PTR_ADD(eth, RTE_ETHER_HDR_LEN + > > > >> + offsetof(struct ip, ip_p)); > > > >> + if (**nlp == IPPROTO_ESP) > > > >> + return PKT_TYPE_IPSEC_IPV4; > > > >> + else > > > >> + return PKT_TYPE_PLAIN_IPV4; > > > >> + } else if (eth->ether_type == > > > >> +rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6)) > > > >> { > > > >> + *nlp = RTE_PTR_ADD(eth, RTE_ETHER_HDR_LEN + > > > >> + offsetof(struct ip6_hdr, ip6_nxt)); > > > >> + if (**nlp == IPPROTO_ESP) > > > >> + return PKT_TYPE_IPSEC_IPV6; > > > >> + else > > > >> + return PKT_TYPE_PLAIN_IPV6; > > > >> + } > > > >> + > > > >> + /* Unknown/Unsupported type */ > > > >> + return PKT_TYPE_INVALID; > > > >> +} > > > >> + > > > >> +static inline void > > > >> +update_mac_addrs(struct rte_mbuf *pkt, uint16_t portid) { > > > >> + struct rte_ether_hdr *ethhdr; > > > >> + > > > >> + ethhdr = rte_pktmbuf_mtod(pkt, struct rte_ether_hdr *); > > > >> + memcpy(&ethhdr->s_addr, &ethaddr_tbl[portid].src, > > > >> RTE_ETHER_ADDR_LEN); > > > >> + memcpy(&ethhdr->d_addr, &ethaddr_tbl[portid].dst, > > > >> RTE_ETHER_ADDR_LEN); > > > >> +} > > > >> > > > >> static inline void > > > >> ipsec_event_pre_forward(struct rte_mbuf *m, unsigned int > > > >> port_id) @@ > > > >> -61,6 +101,290 @@ prepare_out_sessions_tbl(struct sa_ctx *sa_out, > > > >> } > > > >> } > > > >> > > > >> +static inline int > > > >> +check_sp(struct sp_ctx *sp, const uint8_t *nlp, uint32_t *sa_idx) { > > > >> + uint32_t res; > > > >> + > > > >> + if (unlikely(sp == NULL)) > > > >> + return 0; > > > >> + > > > >> + rte_acl_classify((struct rte_acl_ctx *)sp, &nlp, &res, 1, > > > >> + DEFAULT_MAX_CATEGORIES); > > > >> + > > > >> + if (unlikely(res == 0)) { > > > >> + 
/* No match */ > > > >> + return 0; > > > >> + } > > > >> + > > > >> + if (res == DISCARD) > > > >> + return 0; > > > >> + else if (res == BYPASS) { > > > >> + *sa_idx = -1; > > > >> + return 1; > > > >> + } > > > >> + > > > >> + *sa_idx = res - 1; > > > >> + return 1; > > > >> +} > > > >> + > > > >> +static inline uint16_t > > > >> +route4_pkt(struct rte_mbuf *pkt, struct rt_ctx *rt_ctx) { > > > >> + uint32_t dst_ip; > > > >> + uint16_t offset; > > > >> + uint32_t hop; > > > >> + int ret; > > > >> + > > > >> + offset = RTE_ETHER_HDR_LEN + offsetof(struct ip, ip_dst); > > > >> + dst_ip = *rte_pktmbuf_mtod_offset(pkt, uint32_t *, offset); > > > >> + dst_ip = rte_be_to_cpu_32(dst_ip); > > > >> + > > > >> + ret = rte_lpm_lookup((struct rte_lpm *)rt_ctx, dst_ip, > &hop); > > > >> + > > > >> + if (ret == 0) { > > > >> + /* We have a hit */ > > > >> + return hop; > > > >> + } > > > >> + > > > >> + /* else */ > > > >> + return RTE_MAX_ETHPORTS; > > > >> +} > > > >> + > > > >> +/* TODO: To be tested */ > > > >> +static inline uint16_t > > > >> +route6_pkt(struct rte_mbuf *pkt, struct rt_ctx *rt_ctx) { > > > >> + uint8_t dst_ip[16]; > > > >> + uint8_t *ip6_dst; > > > >> + uint16_t offset; > > > >> + uint32_t hop; > > > >> + int ret; > > > >> + > > > >> + offset = RTE_ETHER_HDR_LEN + offsetof(struct ip6_hdr, > ip6_dst); > > > >> + ip6_dst = rte_pktmbuf_mtod_offset(pkt, uint8_t *, offset); > > > >> + memcpy(&dst_ip[0], ip6_dst, 16); > > > >> + > > > >> + ret = rte_lpm6_lookup((struct rte_lpm6 *)rt_ctx, dst_ip, > &hop); > > > >> + > > > >> + if (ret == 0) { > > > >> + /* We have a hit */ > > > >> + return hop; > > > >> + } > > > >> + > > > >> + /* else */ > > > >> + return RTE_MAX_ETHPORTS; > > > >> +} > > > >> + > > > >> +static inline uint16_t > > > >> +get_route(struct rte_mbuf *pkt, struct route_table *rt, enum > > > >> +pkt_type type) { > > > >> + if (type == PKT_TYPE_PLAIN_IPV4 || type == > PKT_TYPE_IPSEC_IPV4) > > > >> + return route4_pkt(pkt, rt->rt4_ctx); > > > >> + 
else if (type == PKT_TYPE_PLAIN_IPV6 || type == > PKT_TYPE_IPSEC_IPV6) > > > >> + return route6_pkt(pkt, rt->rt6_ctx); > > > >> + > > > >> + return RTE_MAX_ETHPORTS; > > > >> +} > > > > > > > > Is it not possible to use the existing functions for finding > > > > routes, checking > > > packet types and checking security policies. > > > > It will be very difficult to manage two separate functions for > > > > same work. I can see that the pkt->data_offs Are not required to > > > > be updated in the inline case, but can we split the existing > > > > functions in two so that they can > > > be Called in the appropriate cases. > > > > > > > > As you have said in the cover note as well to add lookaside > > > > protocol support. I also tried adding it, and it will get very > > > > Difficult to manage separate > > > functions for separate code paths. > > > > > > > > > > [Lukasz] This was also Konstantin's comment during review of one of > > > previous revisions. > > > The prepare_one_packet() and prepare_tx_pkt() do much more than we > > > need and for performance reasons we crafted new functions. For > > > example, process_ipsec_get_pkt_type function returns nlp and whether > > > packet type is plain or IPsec. That's all. Prepare_one_packet() > > > process packets in chunks and does much more - it adjusts mbuf and > > > packet length then it demultiplex packets into plain and IPsec flows > > > and finally does inline checks. This is similar for > > > update_mac_addrs() vs prepare_tx_pkt() and check_sp() vs > > > inbound_sp_sa() that > > > prepare_tx_pkt() and inbound_sp_sa() do more that we need in event > mode. > > > > > > I understand your concern from the perspective of code maintenance > > > but on the other hand we are concerned with performance. > > > The current code is not optimized to support multiple mode > > > processing introduced with rte_security. 
We can work on a common > > > routines once we have other modes also added, so that we can come up > > > with a better solution than what we have today. > > > > > > >> + > > > >> +static inline int > > > >> +process_ipsec_ev_inbound(struct ipsec_ctx *ctx, struct route_table > *rt, > > > >> + struct rte_event *ev) > > > >> +{ > > > >> + struct ipsec_sa *sa = NULL; > > > >> + struct rte_mbuf *pkt; > > > >> + uint16_t port_id = 0; > > > >> + enum pkt_type type; > > > >> + uint32_t sa_idx; > > > >> + uint8_t *nlp; > > > >> + > > > >> + /* Get pkt from event */ > > > >> + pkt = ev->mbuf; > > > >> + > > > >> + /* Check the packet type */ > > > >> + type = process_ipsec_get_pkt_type(pkt, &nlp); > > > >> + > > > >> + switch (type) { > > > >> + case PKT_TYPE_PLAIN_IPV4: > > > >> + if (pkt->ol_flags & PKT_RX_SEC_OFFLOAD) { > > > >> + if (unlikely(pkt->ol_flags & > > > >> + PKT_RX_SEC_OFFLOAD_FAILED)) { > > > >> + RTE_LOG(ERR, IPSEC, > > > >> + "Inbound security offload > failed\n"); > > > >> + goto drop_pkt_and_exit; > > > >> + } > > > >> + sa = pkt->userdata; > > > >> + } > > > >> + > > > >> + /* Check if we have a match */ > > > >> + if (check_sp(ctx->sp4_ctx, nlp, &sa_idx) == 0) { > > > >> + /* No valid match */ > > > >> + goto drop_pkt_and_exit; > > > >> + } > > > >> + break; > > > >> + > > > >> + case PKT_TYPE_PLAIN_IPV6: > > > >> + if (pkt->ol_flags & PKT_RX_SEC_OFFLOAD) { > > > >> + if (unlikely(pkt->ol_flags & > > > >> + PKT_RX_SEC_OFFLOAD_FAILED)) { > > > >> + RTE_LOG(ERR, IPSEC, > > > >> + "Inbound security offload > failed\n"); > > > >> + goto drop_pkt_and_exit; > > > >> + } > > > >> + sa = pkt->userdata; > > > >> + } > > > >> + > > > >> + /* Check if we have a match */ > > > >> + if (check_sp(ctx->sp6_ctx, nlp, &sa_idx) == 0) { > > > >> + /* No valid match */ > > > >> + goto drop_pkt_and_exit; > > > >> + } > > > >> + break; > > > >> + > > > >> + default: > > > >> + RTE_LOG(ERR, IPSEC, "Unsupported packet type = > %d\n", type); > > > >> + goto drop_pkt_and_exit; > 
> > >> + } > > > >> + > > > >> + /* Check if the packet has to be bypassed */ > > > >> + if (sa_idx == BYPASS) > > > >> + goto route_and_send_pkt; > > > >> + > > > >> + /* Validate sa_idx */ > > > >> + if (sa_idx >= ctx->sa_ctx->nb_sa) > > > >> + goto drop_pkt_and_exit; > > > >> + > > > >> + /* Else the packet has to be protected with SA */ > > > >> + > > > >> + /* If the packet was IPsec processed, then SA pointer should > be set */ > > > >> + if (sa == NULL) > > > >> + goto drop_pkt_and_exit; > > > >> + > > > >> + /* SPI on the packet should match with the one in SA */ > > > >> + if (unlikely(sa->spi != ctx->sa_ctx->sa[sa_idx].spi)) > > > >> + goto drop_pkt_and_exit; > > > >> + > > > >> +route_and_send_pkt: > > > >> + port_id = get_route(pkt, rt, type); > > > >> + if (unlikely(port_id == RTE_MAX_ETHPORTS)) { > > > >> + /* no match */ > > > >> + goto drop_pkt_and_exit; > > > >> + } > > > >> + /* else, we have a matching route */ > > > >> + > > > >> + /* Update mac addresses */ > > > >> + update_mac_addrs(pkt, port_id); > > > >> + > > > >> + /* Update the event with the dest port */ > > > >> + ipsec_event_pre_forward(pkt, port_id); > > > >> + return 1; > > > >> + > > > >> +drop_pkt_and_exit: > > > >> + RTE_LOG(ERR, IPSEC, "Inbound packet dropped\n"); > > > >> + rte_pktmbuf_free(pkt); > > > >> + ev->mbuf = NULL; > > > >> + return 0; > > > >> +} > > > >> + > > > >> +static inline int > > > >> +process_ipsec_ev_outbound(struct ipsec_ctx *ctx, struct > route_table *rt, > > > >> + struct rte_event *ev) > > > >> +{ > > > >> + struct rte_ipsec_session *sess; > > > >> + struct sa_ctx *sa_ctx; > > > >> + struct rte_mbuf *pkt; > > > >> + uint16_t port_id = 0; > > > >> + struct ipsec_sa *sa; > > > >> + enum pkt_type type; > > > >> + uint32_t sa_idx; > > > >> + uint8_t *nlp; > > > >> + > > > >> + /* Get pkt from event */ > > > >> + pkt = ev->mbuf; > > > >> + > > > >> + /* Check the packet type */ > > > >> + type = process_ipsec_get_pkt_type(pkt, &nlp); > > > >> + > > > >> + 
switch (type) { > > > >> + case PKT_TYPE_PLAIN_IPV4: > > > >> + /* Check if we have a match */ > > > >> + if (check_sp(ctx->sp4_ctx, nlp, &sa_idx) == 0) { > > > >> + /* No valid match */ > > > >> + goto drop_pkt_and_exit; > > > >> + } > > > >> + break; > > > >> + case PKT_TYPE_PLAIN_IPV6: > > > >> + /* Check if we have a match */ > > > >> + if (check_sp(ctx->sp6_ctx, nlp, &sa_idx) == 0) { > > > >> + /* No valid match */ > > > >> + goto drop_pkt_and_exit; > > > >> + } > > > >> + break; > > > >> + default: > > > >> + /* > > > >> + * Only plain IPv4 & IPv6 packets are allowed > > > >> + * on protected port. Drop the rest. > > > >> + */ > > > >> + RTE_LOG(ERR, IPSEC, "Unsupported packet type = > %d\n", type); > > > >> + goto drop_pkt_and_exit; > > > >> + } > > > >> + > > > >> + /* Check if the packet has to be bypassed */ > > > >> + if (sa_idx == BYPASS) { > > > >> + port_id = get_route(pkt, rt, type); > > > >> + if (unlikely(port_id == RTE_MAX_ETHPORTS)) { > > > >> + /* no match */ > > > >> + goto drop_pkt_and_exit; > > > >> + } > > > >> + /* else, we have a matching route */ > > > >> + goto send_pkt; > > > >> + } > > > >> + > > > >> + /* Validate sa_idx */ > > > >> + if (sa_idx >= ctx->sa_ctx->nb_sa) > > > >> + goto drop_pkt_and_exit; > > > >> + > > > >> + /* Else the packet has to be protected */ > > > >> + > > > >> + /* Get SA ctx*/ > > > >> + sa_ctx = ctx->sa_ctx; > > > >> + > > > >> + /* Get SA */ > > > >> + sa = &(sa_ctx->sa[sa_idx]); > > > >> + > > > >> + /* Get IPsec session */ > > > >> + sess = ipsec_get_primary_session(sa); > > > >> + > > > >> + /* Allow only inline protocol for now */ > > > >> + if (sess->type != > RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL) { > > > >> + RTE_LOG(ERR, IPSEC, "SA type not supported\n"); > > > >> + goto drop_pkt_and_exit; > > > >> + } > > > >> + > > > >> + if (sess->security.ol_flags & > RTE_SECURITY_TX_OLOAD_NEED_MDATA) > > > >> + pkt->userdata = sess->security.ses; > > > >> + > > > >> + /* Mark the packet for Tx security 
offload */ > > > >> + pkt->ol_flags |= PKT_TX_SEC_OFFLOAD; > > > >> + > > > >> + /* Get the port to which this pkt need to be submitted */ > > > >> + port_id = sa->portid; > > > >> + > > > >> +send_pkt: > > > >> + /* Update mac addresses */ > > > >> + update_mac_addrs(pkt, port_id); > > > >> + > > > >> + /* Update the event with the dest port */ > > > >> + ipsec_event_pre_forward(pkt, port_id); > > > > > > > > How is IP checksum getting updated for the processed packet. > > > > If the hardware is not updating it, should we add a fallback > > > > mechanism for SW based Checksum update. > > > > > > > > > > [Lukasz] In case of outbound inline protocol checksum has to be > > > calculated by HW as final packet is formed by crypto device. There > > > is no need to calculate it in SW. > > > > > > >> + return 1; > > > > > > > > It will be better to use some MACROS while returning Like > > > > #define PKT_FORWARD 1 > > > > #define PKT_DROPPED 0 > > > > #define PKT_POSTED 2 /*may be for lookaside cases */ > > > > > > > >> + > > > >> +drop_pkt_and_exit: > > > >> + RTE_LOG(ERR, IPSEC, "Outbound packet dropped\n"); > > > >> + rte_pktmbuf_free(pkt); > > > >> + ev->mbuf = NULL; > > > >> + return 0; > > > >> +} > > > >> + > > > >> /* > > > >> * Event mode exposes various operating modes depending on the > > > >> * capabilities of the event device and the operating mode @@ > > > >> -68,7 > > > >> +392,7 @@ prepare_out_sessions_tbl(struct sa_ctx *sa_out, > > > >> */ > > > >> > > > >> /* Workers registered */ > > > >> -#define IPSEC_EVENTMODE_WORKERS 1 > > > >> +#define IPSEC_EVENTMODE_WORKERS 2 > > > >> > > > >> /* > > > >> * Event mode worker > > > >> @@ -146,7 +470,7 @@ > ipsec_wrkr_non_burst_int_port_drv_mode(struct > > > >> eh_event_link_info *links, > > > >> } > > > >> > > > >> /* Save security session */ > > > >> - pkt->udata64 = (uint64_t) sess_tbl[port_id]; > > > >> + pkt->userdata = sess_tbl[port_id]; > > > >> > > > >> /* Mark the packet for Tx security offload */ > > > >> 
pkt->ol_flags |= PKT_TX_SEC_OFFLOAD; @@ -165,6 > > > +489,94 @@ > > > >> ipsec_wrkr_non_burst_int_port_drv_mode(struct > > > >> eh_event_link_info *links, > > > >> } > > > >> } > > > >> > > > >> +/* > > > >> + * Event mode worker > > > >> + * Operating parameters : non-burst - Tx internal port - app > > > >> +mode */ static void > > > >> +ipsec_wrkr_non_burst_int_port_app_mode(struct > > > >> +eh_event_link_info *links, > > > >> + uint8_t nb_links) > > > >> +{ > > > >> + struct lcore_conf_ev_tx_int_port_wrkr lconf; > > > >> + unsigned int nb_rx = 0; > > > >> + struct rte_event ev; > > > >> + uint32_t lcore_id; > > > >> + int32_t socket_id; > > > >> + int ret; > > > >> + > > > >> + /* Check if we have links registered for this lcore */ > > > >> + if (nb_links == 0) { > > > >> + /* No links registered - exit */ > > > >> + return; > > > >> + } > > > >> + > > > >> + /* We have valid links */ > > > >> + > > > >> + /* Get core ID */ > > > >> + lcore_id = rte_lcore_id(); > > > >> + > > > >> + /* Get socket ID */ > > > >> + socket_id = rte_lcore_to_socket_id(lcore_id); > > > >> + > > > >> + /* Save routing table */ > > > >> + lconf.rt.rt4_ctx = socket_ctx[socket_id].rt_ip4; > > > >> + lconf.rt.rt6_ctx = socket_ctx[socket_id].rt_ip6; > > > >> + lconf.inbound.sp4_ctx = socket_ctx[socket_id].sp_ip4_in; > > > >> + lconf.inbound.sp6_ctx = socket_ctx[socket_id].sp_ip6_in; > > > >> + lconf.inbound.sa_ctx = socket_ctx[socket_id].sa_in; > > > >> + lconf.inbound.session_pool = > > > >> +socket_ctx[socket_id].session_pool; > > > > > > > > Session_priv_pool should also be added for both inbound and > > > > outbound > > > > > > > > > > [Lukasz] I will add it in V5. > > > > [Anoob] Actually, why do need both session_pool and private_pool? I > > think it's a remnant from the time we had session being created when the > first packet arrives. > > > > @Konstantin, thoughts? > > I think we do need it for lksd sessions. > See create_lookaside_session() in ipsec.c [Anoob] You are right. 
It seems that for lookaside we still create the session only when the first packet arrives; the fix was done only for inline. That said, do you think we should fix the same for lookaside as well? Session creation is usually treated as a control-path entity, and ipsec-secgw doesn't support changing sessions on the fly either, yet in ipsec-secgw we create sessions in the data path. Also, once we do this, both inline & lookaside will get similar treatment. Do you think there is any value in retaining the current behavior? If not, I can take this up following the merge. > > > > > > > > > >> + lconf.outbound.sp4_ctx = socket_ctx[socket_id].sp_ip4_out; > > > >> + lconf.outbound.sp6_ctx = socket_ctx[socket_id].sp_ip6_out; > > > >> + lconf.outbound.sa_ctx = socket_ctx[socket_id].sa_out; > > > >> + lconf.outbound.session_pool = > > > >> +socket_ctx[socket_id].session_pool; > > > >> + > > > >> + RTE_LOG(INFO, IPSEC, > > > >> + "Launching event mode worker (non-burst - Tx > internal port - " > > > >> + "app mode) on lcore %d\n", lcore_id); > > > >> + > > > >> + /* Check if it's single link */ > > > >> + if (nb_links != 1) { > > > >> + RTE_LOG(INFO, IPSEC, > > > >> + "Multiple links not supported. Using first > link\n"); > > > >> + } > > > >> + > > > >> + RTE_LOG(INFO, IPSEC, " -- lcoreid=%u event_port_id=%u\n", > lcore_id, > > > >> + links[0].event_port_id); > > > >> + > > > >> + while (!force_quit) { > > > >> + /* Read packet from event queues */ > > > >> + nb_rx = > rte_event_dequeue_burst(links[0].eventdev_id, > > > >> + links[0].event_port_id, > > > >> + &ev, /* events */ > > > >> + 1, /* nb_events */ > > > >> + 0 /* timeout_ticks */); > > > >> + > > > >> + if (nb_rx == 0) > > > >> + continue; > > > > > > > > Event type should be checked here before dereferencing it. > > > > > > > > > > [Lukasz] I will add event type check in V5.
> > > > > > >> + if (is_unprotected_port(ev.mbuf->port)) > > > >> + ret = > process_ipsec_ev_inbound(&lconf.inbound, > > > >> + &lconf.rt, > &ev); > > > >> + else > > > >> + ret = > process_ipsec_ev_outbound(&lconf.outbound, > > > >> + &lconf.rt, > &ev); > > > >> + if (ret != 1) > > > >> + /* The pkt has been dropped */ > > > >> + continue; > > > >> + > > > >> + /* > > > >> + * Since tx internal port is available, events can be > > > >> + * directly enqueued to the adapter and it would be > > > >> + * internally submitted to the eth device. > > > >> + */ > > > >> + > rte_event_eth_tx_adapter_enqueue(links[0].eventdev_id, > > > >> + links[0].event_port_id, > > > >> + &ev, /* events */ > > > >> + 1, /* nb_events */ > > > >> + 0 /* flags */); > > > >> + } > > > >> +} > > > >> + > > > >> static uint8_t > > > >> ipsec_eventmode_populate_wrkr_params(struct > eh_app_worker_params > > > >> *wrkrs) > > > >> { > > > >> @@ -180,6 +592,14 @@ > ipsec_eventmode_populate_wrkr_params(struct > > > >> eh_app_worker_params *wrkrs) > > > >> wrkr->cap.ipsec_mode = EH_IPSEC_MODE_TYPE_DRIVER; > > > >> wrkr->worker_thread = ipsec_wrkr_non_burst_int_port_drv_mode; > > > >> wrkr++; > > > >> + nb_wrkr_param++; > > > >> + > > > >> + /* Non-burst - Tx internal port - app mode */ > > > >> + wrkr->cap.burst = EH_RX_TYPE_NON_BURST; > > > >> + wrkr->cap.tx_internal_port = EH_TX_TYPE_INTERNAL_PORT; > > > >> + wrkr->cap.ipsec_mode = EH_IPSEC_MODE_TYPE_APP; > > > >> + wrkr->worker_thread = > ipsec_wrkr_non_burst_int_port_app_mode; > > > >> + nb_wrkr_param++; > > > >> > > > >> return nb_wrkr_param; > > > >> } > > > >> diff --git a/examples/ipsec-secgw/ipsec_worker.h > > > >> b/examples/ipsec- secgw/ipsec_worker.h new file mode 100644 > index > > > >> 0000000..87b4f22 > > > >> --- /dev/null > > > >> +++ b/examples/ipsec-secgw/ipsec_worker.h > > > >> @@ -0,0 +1,35 @@ > > > >> +/* SPDX-License-Identifier: BSD-3-Clause > > > >> + * Copyright (C) 2020 Marvell International Ltd. 
> > > >> + */ > > > >> +#ifndef _IPSEC_WORKER_H_ > > > >> +#define _IPSEC_WORKER_H_ > > > >> + > > > >> +#include "ipsec.h" > > > >> + > > > >> +enum pkt_type { > > > >> + PKT_TYPE_PLAIN_IPV4 = 1, > > > >> + PKT_TYPE_IPSEC_IPV4, > > > >> + PKT_TYPE_PLAIN_IPV6, > > > >> + PKT_TYPE_IPSEC_IPV6, > > > >> + PKT_TYPE_INVALID > > > >> +}; > > > >> + > > > >> +struct route_table { > > > >> + struct rt_ctx *rt4_ctx; > > > >> + struct rt_ctx *rt6_ctx; > > > >> +}; > > > >> + > > > >> +/* > > > >> + * Conf required by event mode worker with tx internal port */ > > > >> +struct lcore_conf_ev_tx_int_port_wrkr { > > > >> + struct ipsec_ctx inbound; > > > >> + struct ipsec_ctx outbound; > > > >> + struct route_table rt; > > > >> +} __rte_cache_aligned; > > > >> + > > > >> +void ipsec_poll_mode_worker(void); > > > >> + > > > >> +int ipsec_launch_one_lcore(void *args); > > > >> + > > > >> +#endif /* _IPSEC_WORKER_H_ */ > > > >> -- > > > >> 2.7.4 > > > > ^ permalink raw reply [flat|nested] 147+ messages in thread
* Re: [dpdk-dev] [EXT] RE: [PATCH v4 12/15] examples/ipsec-secgw: add app mode worker 2020-02-26 4:33 ` Anoob Joseph @ 2020-02-26 5:55 ` Akhil Goyal 2020-02-26 12:36 ` Ananyev, Konstantin 0 siblings, 1 reply; 147+ messages in thread From: Akhil Goyal @ 2020-02-26 5:55 UTC (permalink / raw) To: Anoob Joseph, Ananyev, Konstantin Cc: Jerin Jacob Kollanukkaran, Narayana Prasad Raju Athreya, Ankur Dwivedi, Archana Muniganti, Tejasree Kondoj, Vamsi Krishna Attunuru, dev, Thomas Monjalon, Nicolau, Radu, Lukas Bartosik Hi Anoob, > > > > >> + /* Get core ID */ > > > > >> + lcore_id = rte_lcore_id(); > > > > >> + > > > > >> + /* Get socket ID */ > > > > >> + socket_id = rte_lcore_to_socket_id(lcore_id); > > > > >> + > > > > >> + /* Save routing table */ > > > > >> + lconf.rt.rt4_ctx = socket_ctx[socket_id].rt_ip4; > > > > >> + lconf.rt.rt6_ctx = socket_ctx[socket_id].rt_ip6; > > > > >> + lconf.inbound.sp4_ctx = socket_ctx[socket_id].sp_ip4_in; > > > > >> + lconf.inbound.sp6_ctx = socket_ctx[socket_id].sp_ip6_in; > > > > >> + lconf.inbound.sa_ctx = socket_ctx[socket_id].sa_in; > > > > >> + lconf.inbound.session_pool = > > > > >> +socket_ctx[socket_id].session_pool; > > > > > > > > > > Session_priv_pool should also be added for both inbound and > > > > > outbound > > > > > > > > > > > > > [Lukasz] I will add it in V5. > > > > > > [Anoob] Actually, why do need both session_pool and private_pool? I > > > think it's a remnant from the time we had session being created when the > > first packet arrives. > > > > > > @Konstantin, thoughts? > > > > I think we do need it for lksd sessions. > > See create_lookaside_session() in ipsec.c > > [Anoob] You are right. It seems for lookaside, we still create session only when > first packet arrives. The fix was done only for inline. > > Said that, do you think we should fix the same for lookaside as well? Often, > session creation is treated as a control path entity, and ipsec-secgw doesn't > support changing sessions on the fly as well. 
But in ipsec-secgw, we create > sessions in the data path. Also, once we do this, both inline & lookaside will have > similar kind of treatment as well. > > Do you think there is any value in retaining the current behavior? If not I can > take this up following the merge. > Yes, we need that for the lookaside cases. And yes, session creation was added to the control path for the inline cases only. We can move that part for the lookaside cases as well. Earlier the patch was submitted for both but had issues in the lookaside cases, so Intel fixed only the inline cases, as that was where it was necessary (the first packet was getting dropped). Regards, Akhil
* Re: [dpdk-dev] [EXT] RE: [PATCH v4 12/15] examples/ipsec-secgw: add app mode worker 2020-02-26 5:55 ` Akhil Goyal @ 2020-02-26 12:36 ` Ananyev, Konstantin 0 siblings, 0 replies; 147+ messages in thread From: Ananyev, Konstantin @ 2020-02-26 12:36 UTC (permalink / raw) To: Akhil Goyal, Anoob Joseph Cc: Jerin Jacob Kollanukkaran, Narayana Prasad Raju Athreya, Ankur Dwivedi, Archana Muniganti, Tejasree Kondoj, Vamsi Krishna Attunuru, dev, Thomas Monjalon, Nicolau, Radu, Lukas Bartosik > > > > > >> + /* Get core ID */ > > > > > >> + lcore_id = rte_lcore_id(); > > > > > >> + > > > > > >> + /* Get socket ID */ > > > > > >> + socket_id = rte_lcore_to_socket_id(lcore_id); > > > > > >> + > > > > > >> + /* Save routing table */ > > > > > >> + lconf.rt.rt4_ctx = socket_ctx[socket_id].rt_ip4; > > > > > >> + lconf.rt.rt6_ctx = socket_ctx[socket_id].rt_ip6; > > > > > >> + lconf.inbound.sp4_ctx = socket_ctx[socket_id].sp_ip4_in; > > > > > >> + lconf.inbound.sp6_ctx = socket_ctx[socket_id].sp_ip6_in; > > > > > >> + lconf.inbound.sa_ctx = socket_ctx[socket_id].sa_in; > > > > > >> + lconf.inbound.session_pool = > > > > > >> +socket_ctx[socket_id].session_pool; > > > > > > > > > > > > Session_priv_pool should also be added for both inbound and > > > > > > outbound > > > > > > > > > > > > > > > > [Lukasz] I will add it in V5. > > > > > > > > [Anoob] Actually, why do need both session_pool and private_pool? I > > > > think it's a remnant from the time we had session being created when the > > > first packet arrives. > > > > > > > > @Konstantin, thoughts? > > > > > > I think we do need it for lksd sessions. > > > See create_lookaside_session() in ipsec.c > > > > [Anoob] You are right. It seems for lookaside, we still create session only when > > first packet arrives. The fix was done only for inline. > > > > Said that, do you think we should fix the same for lookaside as well? 
Often, > > session creation is treated as a control path entity, and ipsec-secgw doesn't > > support changing sessions on the fly as well. But in ipsec-secgw, we create > > sessions in the data path. Also, once we do this, both inline & lookaside will have > > similar kind of treatment as well. > > > > Do you think there is any value in retaining the current behavior? If not I can > > take this up following the merge. > > > > Yes we need that for lookaside cases. > > And yes session creation was added in control path for inline cases only. We can move that > Part for lookaside cases as well. > > Earlier the patch was submitted for both but had issues in lookaside cases, so Intel just fixed the > Inline cases as that was necessary for the inline cases(first packet was getting dropped). > Yep, as Akhil pointed out, there were some problems with the lksd-proto cases as I remember. I wouldn't object if we change the code to create all sessions at startup. Though I think we would probably still need a pool for the private sessions. Konstantin
* Re: [dpdk-dev] [EXT] RE: [PATCH v4 12/15] examples/ipsec-secgw: add app mode worker 2020-02-25 11:50 ` [dpdk-dev] [EXT] " Lukas Bartosik 2020-02-25 12:13 ` Anoob Joseph @ 2020-02-26 6:04 ` Akhil Goyal 2020-02-26 10:32 ` Lukas Bartosik 2020-02-27 12:07 ` Akhil Goyal 1 sibling, 2 replies; 147+ messages in thread From: Akhil Goyal @ 2020-02-26 6:04 UTC (permalink / raw) To: Lukas Bartosik, Anoob Joseph Cc: Jerin Jacob Kollanukkaran, Narayana Prasad Raju Athreya, Ankur Dwivedi, Archana Muniganti, Tejasree Kondoj, Vamsi Krishna Attunuru, Konstantin Ananyev, dev, Thomas Monjalon, Radu Nicolau Hi Lukasz, > > > > Is it not possible to use the existing functions for finding routes, checking > packet types and checking security policies. > > It will be very difficult to manage two separate functions for same work. I can > see that the pkt->data_offs > > Are not required to be updated in the inline case, but can we split the existing > functions in two so that they can be > > Called in the appropriate cases. > > > > As you have said in the cover note as well to add lookaside protocol support. I > also tried adding it, and it will get very > > Difficult to manage separate functions for separate code paths. > > > > [Lukasz] This was also Konstantin's comment during review of one of previous > revisions. > The prepare_one_packet() and prepare_tx_pkt() do much more than we need > and for performance reasons > we crafted new functions. For example, process_ipsec_get_pkt_type function > returns nlp and whether > packet type is plain or IPsec. That's all. Prepare_one_packet() process packets in > chunks and does much more - > it adjusts mbuf and packet length then it demultiplex packets into plain and IPsec > flows and finally does > inline checks. This is similar for update_mac_addrs() vs prepare_tx_pkt() and > check_sp() vs inbound_sp_sa() > that prepare_tx_pkt() and inbound_sp_sa() do more that we need in event mode. 
> > I understand your concern from the perspective of code maintenance but on the > other hand we are concerned with performance. > The current code is not optimized to support multiple mode processing > introduced with rte_security. We can work on a common > routines once we have other modes also added, so that we can come up with a > better solution than what we have today. > Yes that is correct, but we should split the existing functions so that the part which is common In both mode should stay common and we do not have duplicate code in the app. I believe we should take care of this when we add lookaside cases. We shall remove all duplicate Code. Ideally it should be part of this patchset. But we can postpone it to the lookaside case addition. > > >> + return 1; > > > > It will be better to use some MACROS while returning > > Like > > #define PKT_FORWARD 1 > > #define PKT_DROPPED 0 > > #define PKT_POSTED 2 /*may be for lookaside cases */ > > I think you missed this comment. > >> + > >> +drop_pkt_and_exit: > >> + RTE_LOG(ERR, IPSEC, "Outbound packet dropped\n"); > >> + rte_pktmbuf_free(pkt); > >> + ev->mbuf = NULL; > >> + return 0; > >> +} > >> + ^ permalink raw reply [flat|nested] 147+ messages in thread
* Re: [dpdk-dev] [EXT] RE: [PATCH v4 12/15] examples/ipsec-secgw: add app mode worker 2020-02-26 6:04 ` Akhil Goyal @ 2020-02-26 10:32 ` Lukas Bartosik 2020-02-27 12:07 ` Akhil Goyal 1 sibling, 0 replies; 147+ messages in thread From: Lukas Bartosik @ 2020-02-26 10:32 UTC (permalink / raw) To: Akhil Goyal, Anoob Joseph Cc: Jerin Jacob Kollanukkaran, Narayana Prasad Raju Athreya, Ankur Dwivedi, Archana Muniganti, Tejasree Kondoj, Vamsi Krishna Attunuru, Konstantin Ananyev, dev, Thomas Monjalon, Radu Nicolau Hi Akhil, Please see my answer below. Thanks, Lukasz On 26.02.2020 07:04, Akhil Goyal wrote: > Hi Lukasz, > >>> >>> Is it not possible to use the existing functions for finding routes, checking >> packet types and checking security policies. >>> It will be very difficult to manage two separate functions for same work. I can >> see that the pkt->data_offs >>> Are not required to be updated in the inline case, but can we split the existing >> functions in two so that they can be >>> Called in the appropriate cases. >>> >>> As you have said in the cover note as well to add lookaside protocol support. I >> also tried adding it, and it will get very >>> Difficult to manage separate functions for separate code paths. >>> >> >> [Lukasz] This was also Konstantin's comment during review of one of previous >> revisions. >> The prepare_one_packet() and prepare_tx_pkt() do much more than we need >> and for performance reasons >> we crafted new functions. For example, process_ipsec_get_pkt_type function >> returns nlp and whether >> packet type is plain or IPsec. That's all. Prepare_one_packet() process packets in >> chunks and does much more - >> it adjusts mbuf and packet length then it demultiplex packets into plain and IPsec >> flows and finally does >> inline checks. This is similar for update_mac_addrs() vs prepare_tx_pkt() and >> check_sp() vs inbound_sp_sa() >> that prepare_tx_pkt() and inbound_sp_sa() do more that we need in event mode. 
>> >> I understand your concern from the perspective of code maintenance but on the >> other hand we are concerned with performance. >> The current code is not optimized to support multiple mode processing >> introduced with rte_security. We can work on a common >> routines once we have other modes also added, so that we can come up with a >> better solution than what we have today. >> > > Yes that is correct, but we should split the existing functions so that the part which is common > In both mode should stay common and we do not have duplicate code in the app. > > I believe we should take care of this when we add lookaside cases. We shall remove all duplicate > Code. Ideally it should be part of this patchset. But we can postpone it to the lookaside case addition. > > >> >>>> + return 1; >>> >>> It will be better to use some MACROS while returning >>> Like >>> #define PKT_FORWARD 1 >>> #define PKT_DROPPED 0 >>> #define PKT_POSTED 2 /*may be for lookaside cases */ >>> > > I think you missed this comment. > [Lukasz] Thank you for pointing out that I missed the comment. I will use macros when returning instead of magic numbers. >>>> + >>>> +drop_pkt_and_exit: >>>> + RTE_LOG(ERR, IPSEC, "Outbound packet dropped\n"); >>>> + rte_pktmbuf_free(pkt); >>>> + ev->mbuf = NULL; >>>> + return 0; >>>> +} >>>> + ^ permalink raw reply [flat|nested] 147+ messages in thread
* Re: [dpdk-dev] [EXT] RE: [PATCH v4 12/15] examples/ipsec-secgw: add app mode worker 2020-02-26 6:04 ` Akhil Goyal 2020-02-26 10:32 ` Lukas Bartosik @ 2020-02-27 12:07 ` Akhil Goyal 2020-02-27 14:31 ` Lukas Bartosik 1 sibling, 1 reply; 147+ messages in thread From: Akhil Goyal @ 2020-02-27 12:07 UTC (permalink / raw) To: Lukas Bartosik, Anoob Joseph Cc: Jerin Jacob Kollanukkaran, Narayana Prasad Raju Athreya, Ankur Dwivedi, Archana Muniganti, Tejasree Kondoj, Vamsi Krishna Attunuru, Konstantin Ananyev, dev, Thomas Monjalon, Radu Nicolau > > Hi Lukasz, > > > > > > > Is it not possible to use the existing functions for finding routes, checking > > packet types and checking security policies. > > > It will be very difficult to manage two separate functions for same work. I > can > > see that the pkt->data_offs > > > Are not required to be updated in the inline case, but can we split the existing > > functions in two so that they can be > > > Called in the appropriate cases. > > > > > > As you have said in the cover note as well to add lookaside protocol support. > I > > also tried adding it, and it will get very > > > Difficult to manage separate functions for separate code paths. > > > > > > > [Lukasz] This was also Konstantin's comment during review of one of previous > > revisions. > > The prepare_one_packet() and prepare_tx_pkt() do much more than we need > > and for performance reasons > > we crafted new functions. For example, process_ipsec_get_pkt_type function > > returns nlp and whether > > packet type is plain or IPsec. That's all. Prepare_one_packet() process packets > in > > chunks and does much more - > > it adjusts mbuf and packet length then it demultiplex packets into plain and > IPsec > > flows and finally does > > inline checks. This is similar for update_mac_addrs() vs prepare_tx_pkt() and > > check_sp() vs inbound_sp_sa() > > that prepare_tx_pkt() and inbound_sp_sa() do more that we need in event > mode. 
> > > > I understand your concern from the perspective of code maintenance but on > the > > other hand we are concerned with performance. > > The current code is not optimized to support multiple mode processing > > introduced with rte_security. We can work on a common > > routines once we have other modes also added, so that we can come up with > a > > better solution than what we have today. > > > > Yes that is correct, but we should split the existing functions so that the part > which is common > In both mode should stay common and we do not have duplicate code in the app. > > I believe we should take care of this when we add lookaside cases. We shall > remove all duplicate > Code. Ideally it should be part of this patchset. But we can postpone it to the > lookaside case addition. > > I believe the route(4/6)_pkts and route(4/6)_pkt can be made uniform quite easily. Now you can call either send_single_pkt() or rte_event_eth_tx_adapter_enqueue() from the caller of route4_pkts. I don’t think this will impact the performance at all. Instead of having 3 for loops, now there will be only 2 and nothing else is getting changed for anybody. In fact we can reduce 1 more, if we can call send pkts from inside the route4_pkts. I think that can also be done, but it may increase the lookup duration as there may be cache miss. But that need to be experimented. What say?? static inline void route4_pkts(struct rt_ctx *rt_ctx, struct rte_mbuf *pkts[], uint32_t *hop[], uint8_t nb_pkts) { uint32_t dst_ip; uint16_t i, offset; if (nb_pkts == 0) return; /* Need to do an LPM lookup for non-inline packets. Inline packets will * have port ID in the SA */ for (i = 0; i < nb_pkts; i++) { if (!(pkts[i]->ol_flags & PKT_TX_SEC_OFFLOAD)) { /* Security offload not enabled. 
So an LPM lookup is * required to get the hop */ offset = offsetof(struct ip, ip_dst); dst_ip = *rte_pktmbuf_mtod_offset(pkts[i], uint32_t *, offset); dst_ip = rte_be_to_cpu_32(dst_ip); if (rte_lpm_lookup((struct rte_lpm *)rt_ctx, dst_ip, hop[i])) rte_pktmbuf_free(pkts[i]); } else { *hop[i] = get_hop_for_offload_pkt(pkts[i], 0); } } } ^ permalink raw reply [flat|nested] 147+ messages in thread
* Re: [dpdk-dev] [EXT] RE: [PATCH v4 12/15] examples/ipsec-secgw: add app mode worker 2020-02-27 12:07 ` Akhil Goyal @ 2020-02-27 14:31 ` Lukas Bartosik 0 siblings, 0 replies; 147+ messages in thread From: Lukas Bartosik @ 2020-02-27 14:31 UTC (permalink / raw) To: Akhil Goyal, Anoob Joseph Cc: Jerin Jacob Kollanukkaran, Narayana Prasad Raju Athreya, Ankur Dwivedi, Archana Muniganti, Tejasree Kondoj, Vamsi Krishna Attunuru, Konstantin Ananyev, dev, Thomas Monjalon, Radu Nicolau Hi Akhil, Please see my answer below. Thanks, Lukasz On 27.02.2020 13:07, Akhil Goyal wrote: >> >> Hi Lukasz, >> >>>> >>>> Is it not possible to use the existing functions for finding routes, checking >>> packet types and checking security policies. >>>> It will be very difficult to manage two separate functions for same work. I >> can >>> see that the pkt->data_offs >>>> Are not required to be updated in the inline case, but can we split the existing >>> functions in two so that they can be >>>> Called in the appropriate cases. >>>> >>>> As you have said in the cover note as well to add lookaside protocol support. >> I >>> also tried adding it, and it will get very >>>> Difficult to manage separate functions for separate code paths. >>>> >>> >>> [Lukasz] This was also Konstantin's comment during review of one of previous >>> revisions. >>> The prepare_one_packet() and prepare_tx_pkt() do much more than we need >>> and for performance reasons >>> we crafted new functions. For example, process_ipsec_get_pkt_type function >>> returns nlp and whether >>> packet type is plain or IPsec. That's all. Prepare_one_packet() process packets >> in >>> chunks and does much more - >>> it adjusts mbuf and packet length then it demultiplex packets into plain and >> IPsec >>> flows and finally does >>> inline checks. This is similar for update_mac_addrs() vs prepare_tx_pkt() and >>> check_sp() vs inbound_sp_sa() >>> that prepare_tx_pkt() and inbound_sp_sa() do more that we need in event >> mode. 
>>> >>> I understand your concern from the perspective of code maintenance but on >> the >>> other hand we are concerned with performance. >>> The current code is not optimized to support multiple mode processing >>> introduced with rte_security. We can work on a common >>> routines once we have other modes also added, so that we can come up with >> a >>> better solution than what we have today. >>> >> >> Yes that is correct, but we should split the existing functions so that the part >> which is common >> In both mode should stay common and we do not have duplicate code in the app. >> >> I believe we should take care of this when we add lookaside cases. We shall >> remove all duplicate >> Code. Ideally it should be part of this patchset. But we can postpone it to the >> lookaside case addition. >> >> > > I believe the route(4/6)_pkts and route(4/6)_pkt can be made uniform quite easily. > Now you can call either send_single_pkt() or rte_event_eth_tx_adapter_enqueue() from > the caller of route4_pkts. > I don’t think this will impact the performance at all. > Instead of having 3 for loops, now there will be only 2 and nothing else is getting changed for > anybody. In fact we can reduce 1 more, if we can call send pkts from inside the route4_pkts. > I think that can also be done, but it may increase the lookup duration as there may be cache miss. > But that need to be experimented. What say?? > > static inline void > route4_pkts(struct rt_ctx *rt_ctx, struct rte_mbuf *pkts[], uint32_t *hop[], > uint8_t nb_pkts) > { > uint32_t dst_ip; > uint16_t i, offset; > > if (nb_pkts == 0) > return; > > /* Need to do an LPM lookup for non-inline packets. Inline packets will > * have port ID in the SA > */ > > for (i = 0; i < nb_pkts; i++) { > if (!(pkts[i]->ol_flags & PKT_TX_SEC_OFFLOAD)) { > /* Security offload not enabled. 
So an LPM lookup is > * required to get the hop > */ > offset = offsetof(struct ip, ip_dst); > dst_ip = *rte_pktmbuf_mtod_offset(pkts[i], > uint32_t *, offset); > dst_ip = rte_be_to_cpu_32(dst_ip); > if (rte_lpm_lookup((struct rte_lpm *)rt_ctx, dst_ip, hop[i])) > rte_pktmbuf_free(pkts[i]); > } else { > *hop[i] = get_hop_for_offload_pkt(pkts[i], 0); > } > } > } > [Lukasz] Thank you for your suggestion. Looking at the proposed change, I have a major concern related to performance. The current route4_pkts() uses rte_lpm_lookup_bulk(), which can benefit from SIMD instructions; replacing it with per-packet rte_lpm_lookup() might introduce substantial performance degradation. I will start experimenting with the processing functions (routing packets, checking packet type, checking SP policies) to make them as common as possible between poll and event modes. As agreed, the plan is to make the processing functions common with the addition of lookaside event mode. In the meantime I will send the V5 event mode patches, which address your other comments.
* [dpdk-dev] [PATCH v4 13/15] examples/ipsec-secgw: make number of buffers dynamic 2020-02-20 8:01 ` [dpdk-dev] [PATCH v4 00/15] add eventmode to ipsec-secgw Lukasz Bartosik ` (11 preceding siblings ...) 2020-02-20 8:02 ` [dpdk-dev] [PATCH v4 12/15] examples/ipsec-secgw: add app " Lukasz Bartosik @ 2020-02-20 8:02 ` Lukasz Bartosik 2020-02-20 8:02 ` [dpdk-dev] [PATCH v4 14/15] doc: add event mode support to ipsec-secgw Lukasz Bartosik ` (4 subsequent siblings) 17 siblings, 0 replies; 147+ messages in thread From: Lukasz Bartosik @ 2020-02-20 8:02 UTC (permalink / raw) To: Akhil Goyal, Radu Nicolau, Thomas Monjalon Cc: Jerin Jacob, Narayana Prasad, Ankur Dwivedi, Anoob Joseph, Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru, Konstantin Ananyev, dev Make number of buffers in a pool nb_mbuf_in_pool dependent on number of ports, cores and crypto queues. Add command line option -s which when used overrides dynamic calculation of number of buffers in a pool. Signed-off-by: Anoob Joseph <anoobj@marvell.com> Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> --- examples/ipsec-secgw/ipsec-secgw.c | 71 ++++++++++++++++++++++++++++++++------ 1 file changed, 60 insertions(+), 11 deletions(-) diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c index c98620e..341e7b4 100644 --- a/examples/ipsec-secgw/ipsec-secgw.c +++ b/examples/ipsec-secgw/ipsec-secgw.c @@ -60,8 +60,6 @@ volatile bool force_quit; #define MEMPOOL_CACHE_SIZE 256 -#define NB_MBUF (32000) - #define CDEV_QUEUE_DESC 2048 #define CDEV_MAP_ENTRIES 16384 #define CDEV_MP_NB_OBJS 1024 @@ -164,6 +162,7 @@ static int32_t promiscuous_on = 1; static int32_t numa_on = 1; /**< NUMA is enabled by default. */ static uint32_t nb_lcores; static uint32_t single_sa; +static uint32_t nb_bufs_in_pool; /* * RX/TX HW offload capabilities to enable/use on ethernet ports. 
@@ -1274,6 +1273,7 @@ print_usage(const char *prgname) " [-e]" " [-a]" " [-c]" + " [-s NUMBER_OF_MBUFS_IN_PKT_POOL]" " -f CONFIG_FILE" " --config (port,queue,lcore)[,(port,queue,lcore)]" " [--single-sa SAIDX]" @@ -1297,6 +1297,9 @@ print_usage(const char *prgname) " -a enables SA SQN atomic behaviour\n" " -c specifies inbound SAD cache size,\n" " zero value disables the cache (default value: 128)\n" + " -s number of mbufs in packet pool, if not specified number\n" + " of mbufs will be calculated based on number of cores,\n" + " ports and crypto queues\n" " -f CONFIG_FILE: Configuration file\n" " --config (port,queue,lcore): Rx queue configuration\n" " --single-sa SAIDX: In poll mode use single SA index for\n" @@ -1496,7 +1499,7 @@ parse_args(int32_t argc, char **argv, struct eh_conf *eh_conf) argvopt = argv; - while ((opt = getopt_long(argc, argvopt, "aelp:Pu:f:j:w:c:", + while ((opt = getopt_long(argc, argvopt, "aelp:Pu:f:j:w:c:s:", lgopts, &option_index)) != EOF) { switch (opt) { @@ -1530,6 +1533,19 @@ parse_args(int32_t argc, char **argv, struct eh_conf *eh_conf) cfgfile = optarg; f_present = 1; break; + + case 's': + ret = parse_decimal(optarg); + if (ret < 0) { + printf("Invalid number of buffers in a pool: " + "%s\n", optarg); + print_usage(prgname); + return -1; + } + + nb_bufs_in_pool = ret; + break; + case 'j': ret = parse_decimal(optarg); if (ret < RTE_MBUF_DEFAULT_BUF_SIZE || @@ -1902,12 +1918,12 @@ check_cryptodev_mask(uint8_t cdev_id) return -1; } -static int32_t +static uint16_t cryptodevs_init(void) { struct rte_cryptodev_config dev_conf; struct rte_cryptodev_qp_conf qp_conf; - uint16_t idx, max_nb_qps, qp, i; + uint16_t idx, max_nb_qps, qp, total_nb_qps, i; int16_t cdev_id; struct rte_hash_parameters params = { 0 }; @@ -1935,6 +1951,7 @@ cryptodevs_init(void) printf("lcore/cryptodev/qp mappings:\n"); idx = 0; + total_nb_qps = 0; for (cdev_id = 0; cdev_id < rte_cryptodev_count(); cdev_id++) { struct rte_cryptodev_info cdev_info; @@ -1968,6 +1985,7 @@ 
cryptodevs_init(void) if (qp == 0) continue; + total_nb_qps += qp; dev_conf.socket_id = rte_cryptodev_socket_id(cdev_id); dev_conf.nb_queue_pairs = qp; dev_conf.ff_disable = RTE_CRYPTODEV_FF_ASYMMETRIC_CRYPTO; @@ -2000,7 +2018,7 @@ cryptodevs_init(void) printf("\n"); - return 0; + return total_nb_qps; } static void @@ -2630,20 +2648,36 @@ inline_sessions_free(struct sa_ctx *sa_ctx) } } +static uint32_t +calculate_nb_mbufs(uint16_t nb_ports, uint16_t nb_crypto_qp, uint32_t nb_rxq, + uint32_t nb_txq) +{ + return RTE_MAX((nb_rxq * nb_rxd + + nb_ports * nb_lcores * MAX_PKT_BURST + + nb_ports * nb_txq * nb_txd + + nb_lcores * MEMPOOL_CACHE_SIZE + + nb_crypto_qp * CDEV_QUEUE_DESC + + nb_lcores * frag_tbl_sz * + FRAG_TBL_BUCKET_ENTRIES), + 8192U); +} + int32_t main(int32_t argc, char **argv) { int32_t ret; - uint32_t lcore_id; + uint32_t lcore_id, nb_txq, nb_rxq = 0; uint32_t cdev_id; uint32_t i; uint8_t socket_id; - uint16_t portid; + uint16_t portid, nb_crypto_qp, nb_ports = 0; uint64_t req_rx_offloads[RTE_MAX_ETHPORTS]; uint64_t req_tx_offloads[RTE_MAX_ETHPORTS]; struct eh_conf *eh_conf = NULL; size_t sess_sz; + nb_bufs_in_pool = 0; + /* init EAL */ ret = rte_eal_init(argc, argv); if (ret < 0) @@ -2692,6 +2726,22 @@ main(int32_t argc, char **argv) sess_sz = max_session_size(); + nb_crypto_qp = cryptodevs_init(); + + if (nb_bufs_in_pool == 0) { + RTE_ETH_FOREACH_DEV(portid) { + if ((enabled_port_mask & (1 << portid)) == 0) + continue; + nb_ports++; + nb_rxq += get_port_nb_rx_queues(portid); + } + + nb_txq = nb_lcores; + + nb_bufs_in_pool = calculate_nb_mbufs(nb_ports, nb_crypto_qp, + nb_rxq, nb_txq); + } + for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) { if (rte_lcore_is_enabled(lcore_id) == 0) continue; @@ -2705,11 +2755,12 @@ main(int32_t argc, char **argv) if (socket_ctx[socket_id].mbuf_pool) continue; - pool_init(&socket_ctx[socket_id], socket_id, NB_MBUF); + pool_init(&socket_ctx[socket_id], socket_id, nb_bufs_in_pool); 
session_pool_init(&socket_ctx[socket_id], socket_id, sess_sz); session_priv_pool_init(&socket_ctx[socket_id], socket_id, sess_sz); } + printf("Number of mbufs in packet pool %d\n", nb_bufs_in_pool); RTE_ETH_FOREACH_DEV(portid) { if ((enabled_port_mask & (1 << portid)) == 0) @@ -2721,8 +2772,6 @@ main(int32_t argc, char **argv) req_tx_offloads[portid]); } - cryptodevs_init(); - /* * Set the enabled port mask in helper config for use by helper * sub-system. This will be used while initializing devices using -- 2.7.4 ^ permalink raw reply [flat|nested] 147+ messages in thread
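The sizing logic in the patch above can be read as: one mbuf for every Rx/Tx ring descriptor, every in-flight per-lcore burst, every mempool cache slot, every crypto queue descriptor and every fragment table entry, with a floor of 8192. Here is a standalone sketch of the same formula; `MAX_PKT_BURST`, `MEMPOOL_CACHE_SIZE` and `CDEV_QUEUE_DESC` match values visible in the diff, while the ring sizes (`nb_rxd`/`nb_txd` = 1024) and `FRAG_TBL_BUCKET_ENTRIES` = 4 are assumptions for illustration only:

```c
#include <assert.h>
#include <stdint.h>

#define MAX_PKT_BURST		32	/* as in ipsec-secgw */
#define MEMPOOL_CACHE_SIZE	256	/* from the diff above */
#define CDEV_QUEUE_DESC		2048	/* from the diff above */
#define FRAG_TBL_BUCKET_ENTRIES	4	/* assumed for this sketch */

#define RTE_MAX(a, b) ((a) > (b) ? (a) : (b))

static const uint32_t nb_rxd = 1024;	/* assumed Rx ring size */
static const uint32_t nb_txd = 1024;	/* assumed Tx ring size */
static const uint32_t frag_tbl_sz = 0;	/* reassembly disabled by default */

static uint32_t
calculate_nb_mbufs(uint16_t nb_ports, uint16_t nb_crypto_qp,
		   uint32_t nb_rxq, uint32_t nb_txq, uint32_t nb_lcores)
{
	return RTE_MAX(nb_rxq * nb_rxd +			/* Rx rings */
		       nb_ports * nb_lcores * MAX_PKT_BURST +	/* bursts */
		       nb_ports * nb_txq * nb_txd +		/* Tx rings */
		       nb_lcores * MEMPOOL_CACHE_SIZE +		/* caches */
		       nb_crypto_qp * CDEV_QUEUE_DESC +		/* crypto */
		       nb_lcores * frag_tbl_sz *
		       FRAG_TBL_BUCKET_ENTRIES,			/* frag tbl */
		       8192U);					/* floor */
}
```

With two ports, two lcores (so nb_txq = nb_lcores = 2), two Rx queues and two crypto queue pairs this yields 10880 mbufs under the assumed ring sizes — far below the previous fixed NB_MBUF of 32000 — and `-s` still overrides the calculation entirely.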
* [dpdk-dev] [PATCH v4 14/15] doc: add event mode support to ipsec-secgw 2020-02-20 8:01 ` [dpdk-dev] [PATCH v4 00/15] add eventmode to ipsec-secgw Lukasz Bartosik ` (12 preceding siblings ...) 2020-02-20 8:02 ` [dpdk-dev] [PATCH v4 13/15] examples/ipsec-secgw: make number of buffers dynamic Lukasz Bartosik @ 2020-02-20 8:02 ` Lukasz Bartosik 2020-02-20 8:02 ` [dpdk-dev] [PATCH v4 15/15] examples/ipsec-secgw: reserve crypto queues in event mode Lukasz Bartosik ` (3 subsequent siblings) 17 siblings, 0 replies; 147+ messages in thread From: Lukasz Bartosik @ 2020-02-20 8:02 UTC (permalink / raw) To: Akhil Goyal, Radu Nicolau, Thomas Monjalon Cc: Jerin Jacob, Narayana Prasad, Ankur Dwivedi, Anoob Joseph, Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru, Konstantin Ananyev, dev Document addition of event mode support to ipsec-secgw application. Signed-off-by: Anoob Joseph <anoobj@marvell.com> Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> --- doc/guides/sample_app_ug/ipsec_secgw.rst | 138 ++++++++++++++++++++++++++----- 1 file changed, 117 insertions(+), 21 deletions(-) diff --git a/doc/guides/sample_app_ug/ipsec_secgw.rst b/doc/guides/sample_app_ug/ipsec_secgw.rst index 5ec9b1e..fddd88c 100644 --- a/doc/guides/sample_app_ug/ipsec_secgw.rst +++ b/doc/guides/sample_app_ug/ipsec_secgw.rst @@ -1,5 +1,6 @@ .. SPDX-License-Identifier: BSD-3-Clause Copyright(c) 2016-2017 Intel Corporation. + Copyright (C) 2020 Marvell International Ltd. IPsec Security Gateway Sample Application ========================================= @@ -61,6 +62,44 @@ The Path for the IPsec Outbound traffic is: * Routing. * Write packet to port. +The application supports two modes of operation: poll mode and event mode. + +* In the poll mode a core receives packets from statically configured list + of eth ports and eth ports' queues. + +* In the event mode a core receives packets as events. After packet processing + is done core submits them back as events to an event device. 
This enables + multicore scaling and HW assisted scheduling by making use of the event device + capabilities. The event mode configuration is predefined. All packets reaching + a given eth port will arrive at the same event queue. All event queues are mapped + to all event ports. This allows all cores to receive traffic from all ports. + Since the underlying event device might have varying capabilities, the worker + threads can be drafted differently to maximize performance. For example, if an + event device - eth device pair has Tx internal port, then application can call + rte_event_eth_tx_adapter_enqueue() instead of regular rte_event_enqueue_burst(). + So a thread which assumes that the device pair has internal port will not be the + right solution for another pair. The infrastructure added for the event mode aims + to help the application to have multiple worker threads by maximizing performance from + every type of event device without affecting existing paths/use cases. The worker + to be used will be determined by the operating conditions and the underlying device + capabilities. **Currently the application provides non-burst, internal port worker + threads and supports inline protocol only.** It also provides infrastructure for + non-internal port, however it does not define any worker threads. + +Additionally the event mode introduces two submodes of processing packets: + +* Driver submode: This submode has bare minimum changes in the application to support + IPsec. There are no lookups, no routing done in the application. And for inline + protocol use case, the worker thread resembles l2fwd worker thread as the IPsec + processing is done entirely in HW. This mode can be used to benchmark the raw + performance of the HW. The driver submode is selected with --single-sa option + (used also by poll mode). When --single-sa option is used in conjunction with event + mode then the index passed to --single-sa is ignored.
+ +* App submode: This submode has all the features currently implemented with the + application (non librte_ipsec path). All the lookups, routing follows existing + methods and report numbers that can be compared against regular poll mode + benchmark numbers. Constraints ----------- @@ -94,13 +133,18 @@ The application has a number of command line options:: -p PORTMASK -P -u PORTMASK -j FRAMESIZE -l -w REPLAY_WINOW_SIZE -e -a -c SAD_CACHE_SIZE + -s NUMBER_OF_MBUFS_IN_PACKET_POOL + -f CONFIG_FILE_PATH --config (port,queue,lcore)[,(port,queue,lcore] --single-sa SAIDX + --cryptodev_mask MASK + --transfer-mode MODE + --event-schedule-type TYPE --rxoffload MASK --txoffload MASK - --mtu MTU --reassemble NUM - -f CONFIG_FILE_PATH + --mtu MTU + --frag-ttl FRAG_TTL_NS Where: @@ -138,12 +182,38 @@ Where: Zero value disables cache. Default value: 128. -* ``--config (port,queue,lcore)[,(port,queue,lcore)]``: determines which queues - from which ports are mapped to which cores. +* ``-s``: sets number of mbufs in packet pool, if not provided number of mbufs + will be calculated based on number of cores, eth ports and crypto queues. + +* ``-f CONFIG_FILE_PATH``: the full path of text-based file containing all + configuration items for running the application (See Configuration file + syntax section below). ``-f CONFIG_FILE_PATH`` **must** be specified. + **ONLY** the UNIX format configuration file is accepted. + +* ``--config (port,queue,lcore)[,(port,queue,lcore)]``: in poll mode determines + which queues from which ports are mapped to which cores. In event mode this + is required for eth ports initialization only. Afterwards packets are dynamically + scheduled to cores by HW. + +* ``--single-sa SAIDX``: in poll mode use a single SA for outbound traffic, + bypassing the SP on both Inbound and Outbound. This option is meant for + debugging/performance purposes. In event mode selects driver submode, SA index + value is ignored. 
-* ``--single-sa SAIDX``: use a single SA for outbound traffic, bypassing the SP - on both Inbound and Outbound. This option is meant for debugging/performance - purposes. +* ``--cryptodev_mask MASK``: hexadecimal bitmask of the crypto devices + to configure. + +* ``--transfer-mode MODE``: sets operating mode of the application + "poll" : packet transfer via polling (default) + "event" : Packet transfer via event device + +* ``--event-schedule-type TYPE``: queue schedule type, applies only when + --transfer-mode is set to event. + "ordered" : Ordered (default) + "atomic" : Atomic + "parallel" : Parallel + When --event-schedule-type is set as RTE_SCHED_TYPE_ORDERED/ATOMIC, event + device will ensure the ordering. Ordering will be lost when tried in PARALLEL. * ``--rxoffload MASK``: RX HW offload capabilities to enable/use on this port (bitmask of DEV_RX_OFFLOAD_* values). It is an optional parameter and @@ -155,6 +225,10 @@ Where: allows user to disable some of the TX HW offload capabilities. By default all HW TX offloads are enabled. +* ``--reassemble NUM``: max number of entries in reassemble fragment table. + Zero value disables reassembly functionality. + Default value: 0. + * ``--mtu MTU``: MTU value (in bytes) on all attached ethernet ports. Outgoing packets with length bigger then MTU will be fragmented. Incoming packets with length bigger then MTU will be discarded. @@ -167,26 +241,17 @@ Where: Should be lower for low number of reassembly buckets. Valid values: from 1 ns to 10 s. Default value: 10000000 (10 s). -* ``--reassemble NUM``: max number of entries in reassemble fragment table. - Zero value disables reassembly functionality. - Default value: 0. - -* ``-f CONFIG_FILE_PATH``: the full path of text-based file containing all - configuration items for running the application (See Configuration file - syntax section below). ``-f CONFIG_FILE_PATH`` **must** be specified. - **ONLY** the UNIX format configuration file is accepted. 
- The mapping of lcores to port/queues is similar to other l3fwd applications. -For example, given the following command line:: +For example, given the following command line to run application in poll mode:: ./build/ipsec-secgw -l 20,21 -n 4 --socket-mem 0,2048 \ - --vdev "crypto_null" -- -p 0xf -P -u 0x3 \ + --vdev "crypto_null" -- -p 0xf -P -u 0x3 \ --config="(0,0,20),(1,0,20),(2,0,21),(3,0,21)" \ - -f /path/to/config_file \ + -f /path/to/config_file --transfer-mode poll \ -where each options means: +where each option means: * The ``-l`` option enables cores 20 and 21. @@ -200,7 +265,7 @@ where each options means: * The ``-P`` option enables promiscuous mode. -* The ``-u`` option sets ports 1 and 2 as unprotected, leaving 2 and 3 as protected. +* The ``-u`` option sets ports 0 and 1 as unprotected, leaving 2 and 3 as protected. * The ``--config`` option enables one queue per port with the following mapping: @@ -228,6 +293,37 @@ where each options means: **note** the parser only accepts UNIX format text file. Other formats such as DOS/MAC format will cause a parse error. +* The ``--transfer-mode`` option selects poll mode for processing packets. + +Similarly for example, given the following command line to run application in +event app mode:: + + ./build/ipsec-secgw -c 0x3 -- -P -p 0x3 -u 0x1 \ + --config "(1,0,0),(0,0,1)" \ + -f /path/to/config_file --transfer-mode event \ + --event-schedule-type parallel \ + +where each option means: + +* The ``-c`` option selects cores 0 and 1 to run on. + +* The ``-P`` option enables promiscuous mode. + +* The ``-p`` option enables ports (detected) 0 and 1. + +* The ``-u`` option sets ports 0 as unprotected, leaving 1 as protected. + +* The ``--config`` option provides configuration for eth ports initialization + only. Afterwards packets are dynamically scheduled to cores by HW. + +* The ``-f /path/to/config_file`` option has the same behavior as in poll + mode example. 
+ +* The ``--transfer-mode`` option selects event mode for processing packets. + +* The ``--event-schedule-type`` option selects the parallel schedule type for + event queues, so packet ordering is not preserved. + + Refer to the *DPDK Getting Started Guide* for general information on running applications and the Environment Abstraction Layer (EAL) options. -- 2.7.4 ^ permalink raw reply [flat|nested] 147+ messages in thread
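The capability-matched worker selection the documentation describes happens once at startup, not per packet: the helper inspects the event device / eth device pair and installs the matching worker loop. A self-contained sketch of that dispatch — the flag and worker names here are illustrative stand-ins, not the application's identifiers (the real check is built on `rte_event_eth_tx_adapter_caps_get()` and the `RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT` capability bit):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative stand-in for RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT. */
#define CAP_TX_INTERNAL_PORT (1u << 0)

enum worker_id {
	WORKER_TX_INTERNAL_PORT,	/* hand packet to Tx adapter directly */
	WORKER_TX_ENQUEUE		/* enqueue event back to a Tx queue */
};

/*
 * Chosen once at init from the device pair's capabilities; the selected
 * worker then runs its own tight loop with no per-packet branching.
 */
static enum worker_id
select_worker(uint32_t caps)
{
	return (caps & CAP_TX_INTERNAL_PORT) ?
		WORKER_TX_INTERNAL_PORT : WORKER_TX_ENQUEUE;
}
```

This is why "a thread which assumes that the device pair has internal port will not be the right solution for another pair": the assumption is baked into the worker body, so a different pair needs a different worker, not a runtime branch.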
* [dpdk-dev] [PATCH v4 15/15] examples/ipsec-secgw: reserve crypto queues in event mode 2020-02-20 8:01 ` [dpdk-dev] [PATCH v4 00/15] add eventmode to ipsec-secgw Lukasz Bartosik ` (13 preceding siblings ...) 2020-02-20 8:02 ` [dpdk-dev] [PATCH v4 14/15] doc: add event mode support to ipsec-secgw Lukasz Bartosik @ 2020-02-20 8:02 ` Lukasz Bartosik 2020-02-24 5:20 ` [dpdk-dev] [PATCH v4 00/15] add eventmode to ipsec-secgw Anoob Joseph ` (2 subsequent siblings) 17 siblings, 0 replies; 147+ messages in thread From: Lukasz Bartosik @ 2020-02-20 8:02 UTC (permalink / raw) To: Akhil Goyal, Radu Nicolau, Thomas Monjalon Cc: Jerin Jacob, Narayana Prasad, Ankur Dwivedi, Anoob Joseph, Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru, Konstantin Ananyev, dev Reserve minimum number of crypto queues equal to number of ports. This is to fulfill inline protocol offload requirements. Signed-off-by: Anoob Joseph <anoobj@marvell.com> Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> --- examples/ipsec-secgw/ipsec-secgw.c | 14 ++++++++++++-- 1 file changed, 12 insertions(+), 2 deletions(-) diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c index 341e7b4..982d00b 100644 --- a/examples/ipsec-secgw/ipsec-secgw.c +++ b/examples/ipsec-secgw/ipsec-secgw.c @@ -1919,7 +1919,7 @@ check_cryptodev_mask(uint8_t cdev_id) } static uint16_t -cryptodevs_init(void) +cryptodevs_init(uint16_t req_queue_num) { struct rte_cryptodev_config dev_conf; struct rte_cryptodev_qp_conf qp_conf; @@ -1982,6 +1982,7 @@ cryptodevs_init(void) i++; } + qp = RTE_MIN(max_nb_qps, RTE_MAX(req_queue_num, qp)); if (qp == 0) continue; @@ -2726,7 +2727,16 @@ main(int32_t argc, char **argv) sess_sz = max_session_size(); - nb_crypto_qp = cryptodevs_init(); + /* + * In event mode request minimum number of crypto queues + * to be reserved equal to number of ports. 
+ */ + if (eh_conf->mode == EH_PKT_TRANSFER_MODE_EVENT) + nb_crypto_qp = rte_eth_dev_count_avail(); + else + nb_crypto_qp = 0; + + nb_crypto_qp = cryptodevs_init(nb_crypto_qp); if (nb_bufs_in_pool == 0) { RTE_ETH_FOREACH_DEV(portid) { -- 2.7.4 ^ permalink raw reply [flat|nested] 147+ messages in thread
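The one-line change in the patch above is just a clamp: each crypto device configures at least the requested reservation (the eth port count in event mode, 0 in poll mode) but never more than its hardware maximum. The arithmetic in isolation, with the same `RTE_MIN`/`RTE_MAX` semantics:

```c
#include <assert.h>
#include <stdint.h>

#define RTE_MAX(a, b) ((a) > (b) ? (a) : (b))
#define RTE_MIN(a, b) ((a) < (b) ? (a) : (b))

/*
 * qp:         queue pairs implied by the lcore/cryptodev mapping
 * req:        minimum queues to reserve (eth port count in event mode)
 * max_nb_qps: device limit reported via cryptodev info
 */
static uint16_t
clamp_qp(uint16_t qp, uint16_t req, uint16_t max_nb_qps)
{
	return RTE_MIN(max_nb_qps, RTE_MAX(req, qp));
}
```

Note that in the patch the clamp runs before the existing `if (qp == 0) continue;`, so in poll mode (req_queue_num = 0) an unmapped device is still skipped exactly as before.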
* Re: [dpdk-dev] [PATCH v4 00/15] add eventmode to ipsec-secgw 2020-02-20 8:01 ` [dpdk-dev] [PATCH v4 00/15] add eventmode to ipsec-secgw Lukasz Bartosik ` (14 preceding siblings ...) 2020-02-20 8:02 ` [dpdk-dev] [PATCH v4 15/15] examples/ipsec-secgw: reserve crypto queues in event mode Lukasz Bartosik @ 2020-02-24 5:20 ` Anoob Joseph 2020-02-24 13:40 ` Akhil Goyal 2020-02-27 16:18 ` [dpdk-dev] [PATCH v5 " Lukasz Bartosik 17 siblings, 0 replies; 147+ messages in thread From: Anoob Joseph @ 2020-02-24 5:20 UTC (permalink / raw) To: Konstantin Ananyev, Akhil Goyal Cc: Jerin Jacob Kollanukkaran, Narayana Prasad Raju Athreya, Ankur Dwivedi, Archana Muniganti, Tejasree Kondoj, Vamsi Krishna Attunuru, dev, Lukas Bartosik, Radu Nicolau, Thomas Monjalon Hi Konstantin, Akhil, Do you have any further comments on this series? Regarding the config parameter change which was discussed in the last submission, we have deferred it and went back to Konstantin's suggestion of updating qp only for our case. Hope now you can Ack the series. Thanks, Anoob > -----Original Message----- > From: Lukasz Bartosik <lbartosik@marvell.com> > Sent: Thursday, February 20, 2020 1:32 PM > To: Akhil Goyal <akhil.goyal@nxp.com>; Radu Nicolau > <radu.nicolau@intel.com>; Thomas Monjalon <thomas@monjalon.net> > Cc: Jerin Jacob Kollanukkaran <jerinj@marvell.com>; Narayana Prasad Raju > Athreya <pathreya@marvell.com>; Ankur Dwivedi <adwivedi@marvell.com>; > Anoob Joseph <anoobj@marvell.com>; Archana Muniganti > <marchana@marvell.com>; Tejasree Kondoj <ktejasree@marvell.com>; Vamsi > Krishna Attunuru <vattunuru@marvell.com>; Konstantin Ananyev > <konstantin.ananyev@intel.com>; dev@dpdk.org > Subject: [PATCH v4 00/15] add eventmode to ipsec-secgw > > This series introduces event-mode additions to ipsec-secgw. > > With this series, ipsec-secgw would be able to run in eventmode. 
The worker > thread (executing loop) would be receiving events and would be submitting it > back to the eventdev after the processing. This way, multicore scaling and h/w > assisted scheduling is achieved by making use of the eventdev capabilities. > > Since the underlying event device would be having varying capabilities, the > worker thread could be drafted differently to maximize performance. > This series introduces usage of multiple worker threads, among which the one to > be used will be determined by the operating conditions and the underlying > device capabilities. > > For example, if an event device - eth device pair has Tx internal port, then > application can do tx_adapter_enqueue() instead of regular event_enqueue(). So > a thread making an assumption that the device pair has internal port will not be > the right solution for another pair. The infrastructure added with these patches > aims to help application to have multiple worker threads, there by extracting > maximum performance from every device without affecting existing paths/use > cases. > > The eventmode configuration is predefined. All packets reaching one eth port > will hit one event queue. All event queues will be mapped to all event ports. So > all cores will be able to receive traffic from all ports. > When schedule_type is set as RTE_SCHED_TYPE_ORDERED/ATOMIC, event > device will ensure the ordering. Ordering would be lost when tried in PARALLEL. > > Following command line options are introduced, > > --transfer-mode: to choose between poll mode & event mode > --event-schedule-type: to specify the scheduling type > (RTE_SCHED_TYPE_ORDERED/ > RTE_SCHED_TYPE_ATOMIC/ > RTE_SCHED_TYPE_PARALLEL) > > Additionally the event mode introduces two modes of processing packets: > > Driver-mode: This mode will have bare minimum changes in the application > to support ipsec. There woudn't be any lookup etc done in > the application. 
And for inline-protocol use case, the > thread would resemble l2fwd as the ipsec processing would be > done entirely in the h/w. This mode can be used to benchmark > the raw performance of the h/w. All the application side > steps (like lookup) can be redone based on the requirement > of the end user. Hence the need for a mode which would > report the raw performance. > > App-mode: This mode will have all the features currently implemented with > ipsec-secgw (non librte_ipsec mode). All the lookups etc > would follow the existing methods and would report numbers > that can be compared against regular ipsec-secgw benchmark > numbers. > > The driver mode is selected with existing --single-sa option (used also by poll > mode). When --single-sa option is used in conjution with event mode then index > passed to --single-sa is ignored. > > Example commands to execute ipsec-secgw in various modes on OCTEON TX2 > platform, > > #Inbound and outbound app mode > ipsec-secgw -w 0002:02:00.0,ipsec_in_max_spi=128 -w > 0002:03:00.0,ipsec_in_max_spi=128 -w 0002:0e:00.0 -w 0002:10:00.1 --log- > level=8 -c 0x1 -- -P -p 0x3 -u 0x1 --config "(1,0,0),(0,0,0)" -f aes-gcm.cfg -- > transfer-mode event --event-schedule-type parallel > > #Inbound and outbound driver mode > ipsec-secgw -w 0002:02:00.0,ipsec_in_max_spi=128 -w > 0002:03:00.0,ipsec_in_max_spi=128 -w 0002:0e:00.0 -w 0002:10:00.1 --log- > level=8 -c 0x1 -- -P -p 0x3 -u 0x1 --config "(1,0,0),(0,0,0)" -f aes-gcm.cfg -- > transfer-mode event --event-schedule-type parallel --single-sa 0 > > This series adds non burst tx internal port workers only. It provides infrastructure > for non internal port workers, however does not define any. Also, only inline > ipsec protocol mode is supported by the worker threads added. > > Following are planned features, > 1. Add burst mode workers. > 2. Add non internal port workers. > 3. Verify support for Rx core (the support is added but lack of h/w to verify). > 4. Add lookaside protocol support. 
> > Following are features that Marvell won't be attempting. > 1. Inline crypto support. > 2. Lookaside crypto support. > > For the features that Marvell won't be attempting, new workers can be > introduced by the respective stake holders. > > This series is tested on Marvell OCTEON TX2. > This series is targeted for 20.05 release. > > Changes in v4: > * Update ipsec-secgw documentation to describe the new options as well as > event mode support. > * In event mode reserve number of crypto queues equal to number of eth ports > in order to meet inline protocol offload requirements. > * Add calculate_nb_mbufs() function to calculate number of mbufs in a pool > and include fragments table size into the calculation. > * Move structures ipsec_xf and ipsec_sad to ipsec.h and remove static keyword > from sa_out, nb_sa_out, sa_in and nb_sa_in in sa.c. > * Update process_ipsec_ev_inbound(), process_ipsec_ev_outbound(), > check_sp() > and prepare_out_sessions_tbl() functions as a result of changes introduced > by SAD feature. > * Remove setting sa->cdev_id_qp to 0 in create_inline_session as sa_ctx > is created with rte_zmalloc. > * Minor cleanup enhancements: > - In eh_set_default_conf_eventdev() function in event_helper.c put definition > of int local vars in one line, remove invalid comment, put > "eventdev_config->ev_queue_mode = RTE_EVENT_QUEUE_CFG_ALL_TYPES" > in one line > instead of two. > - Remove extern "C" from event_helper.h. > - Put local vars in reverse xmas tree order in eh_dev_has_rx_internal_port() and > eh_dev_has_tx_internal_port() functions in event_helper.c. > - Put #include <rte_bitmap.h> in alphabetical order in ipsec-secgw.c. > - Move "extern volatile bool force_quit" and "#include <stdbool.h>" to ipsec- > secgw.h, > remove #include <rte_hash.h>. > - Remove not needed includes in ipsec_worker.c. > - Remove expired todo from ipsec_worker.h. 
> > Changes in v3: > * Move eh_conf_init() and eh_conf_uninit() functions to event_helper.c > including minor rework. > * Rename --schedule-type option to --event-schedule-type. > * Replace macro UNPROTECTED_PORT with static inline function > is_unprotected_port(). > * Move definitions of global variables used by multiple modules > to .c files and add externs in .h headers. > * Add eh_check_conf() which validates ipsec-secgw configuration > for event mode. > * Add dynamic calculation of number of buffers in a pool based > on number of cores, ports and crypto queues. > * Fix segmentation fault in event mode driver worker which happens > when there are no inline outbound sessions configured. > * Remove change related to updating number of crypto queues > in cryptodevs_init(). The update of crypto queues will be handled > in a separate patch. > * Fix compilation error on 32-bit platforms by using userdata instead > of udata64 from rte_mbuf. > > Changes in v2: > * Remove --process-dir option. Instead use existing unprotected port mask > option (-u) to decide wheter port handles inbound or outbound traffic. > * Remove --process-mode option. Instead use existing --single-sa option > to select between app and driver modes. > * Add handling of PKT_RX_SEC_OFFLOAD_FAIL result in app worker thread. > * Fix passing of req_rx_offload flags to create_default_ipsec_flow(). > * Move destruction of flows to a location where eth ports are stopped > and closed. > * Print error and exit when event mode --schedule-type option is used > in poll mode. > * Reduce number of goto statements replacing them with loop constructs. > * Remove sec_session_fixed table and replace it with locally build > table in driver worker thread. Table is indexed by port identifier > and holds first inline session pointer found for a given port. > * Print error and exit when sessions other than inline are configured > in event mode. 
> * When number of event queues is less than number of eth ports then > map all eth ports to one event queue. > * Cleanup and minor improvements in code as suggested by Konstantin > > Ankur Dwivedi (1): > examples/ipsec-secgw: add default rte flow for inline Rx > > Anoob Joseph (5): > examples/ipsec-secgw: add framework for eventmode helper > examples/ipsec-secgw: add eventdev port-lcore link > examples/ipsec-secgw: add Rx adapter support > examples/ipsec-secgw: add Tx adapter support > examples/ipsec-secgw: add routines to display config > > Lukasz Bartosik (9): > examples/ipsec-secgw: add routines to launch workers > examples/ipsec-secgw: add support for internal ports > examples/ipsec-secgw: add event helper config init/uninit > examples/ipsec-secgw: add eventmode to ipsec-secgw > examples/ipsec-secgw: add driver mode worker > examples/ipsec-secgw: add app mode worker > examples/ipsec-secgw: make number of buffers dynamic > doc: add event mode support to ipsec-secgw > examples/ipsec-secgw: reserve crypto queues in event mode > > doc/guides/sample_app_ug/ipsec_secgw.rst | 138 ++- > examples/ipsec-secgw/Makefile | 2 + > examples/ipsec-secgw/event_helper.c | 1812 > ++++++++++++++++++++++++++++++ > examples/ipsec-secgw/event_helper.h | 327 ++++++ > examples/ipsec-secgw/ipsec-secgw.c | 463 ++++++-- > examples/ipsec-secgw/ipsec-secgw.h | 88 ++ > examples/ipsec-secgw/ipsec.c | 5 +- > examples/ipsec-secgw/ipsec.h | 53 +- > examples/ipsec-secgw/ipsec_worker.c | 638 +++++++++++ > examples/ipsec-secgw/ipsec_worker.h | 35 + > examples/ipsec-secgw/meson.build | 6 +- > examples/ipsec-secgw/sa.c | 21 +- > examples/ipsec-secgw/sad.h | 5 - > 13 files changed, 3464 insertions(+), 129 deletions(-) create mode 100644 > examples/ipsec-secgw/event_helper.c > create mode 100644 examples/ipsec-secgw/event_helper.h > create mode 100644 examples/ipsec-secgw/ipsec-secgw.h > create mode 100644 examples/ipsec-secgw/ipsec_worker.c > create mode 100644 examples/ipsec-secgw/ipsec_worker.h > > 
-- > 2.7.4 ^ permalink raw reply [flat|nested] 147+ messages in thread
* Re: [dpdk-dev] [PATCH v4 00/15] add eventmode to ipsec-secgw 2020-02-20 8:01 ` [dpdk-dev] [PATCH v4 00/15] add eventmode to ipsec-secgw Lukasz Bartosik ` (15 preceding siblings ...) 2020-02-24 5:20 ` [dpdk-dev] [PATCH v4 00/15] add eventmode to ipsec-secgw Anoob Joseph @ 2020-02-24 13:40 ` Akhil Goyal 2020-02-25 12:09 ` [dpdk-dev] [EXT] " Lukas Bartosik 2020-02-27 16:18 ` [dpdk-dev] [PATCH v5 " Lukasz Bartosik 17 siblings, 1 reply; 147+ messages in thread From: Akhil Goyal @ 2020-02-24 13:40 UTC (permalink / raw) To: Lukasz Bartosik, Radu Nicolau, Thomas Monjalon Cc: Jerin Jacob, Narayana Prasad, Ankur Dwivedi, Anoob Joseph, Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru, Konstantin Ananyev, dev Hi Anoob/Lukasz, > > This series introduces event-mode additions to ipsec-secgw. > > With this series, ipsec-secgw would be able to run in eventmode. The > worker thread (executing loop) would be receiving events and would be > submitting it back to the eventdev after the processing. This way, > multicore scaling and h/w assisted scheduling is achieved by making use > of the eventdev capabilities. > > Since the underlying event device would be having varying capabilities, > the worker thread could be drafted differently to maximize performance. > This series introduces usage of multiple worker threads, among which the > one to be used will be determined by the operating conditions and the > underlying device capabilities. > > For example, if an event device - eth device pair has Tx internal port, > then application can do tx_adapter_enqueue() instead of regular > event_enqueue(). So a thread making an assumption that the device pair > has internal port will not be the right solution for another pair. The > infrastructure added with these patches aims to help application to have > multiple worker threads, there by extracting maximum performance from > every device without affecting existing paths/use cases. > > The eventmode configuration is predefined. 
All packets reaching one eth > port will hit one event queue. All event queues will be mapped to all > event ports. So all cores will be able to receive traffic from all ports. > When schedule_type is set as RTE_SCHED_TYPE_ORDERED/ATOMIC, event > device > will ensure the ordering. Ordering would be lost when tried in PARALLEL. > > Following command line options are introduced, > > --transfer-mode: to choose between poll mode & event mode > --event-schedule-type: to specify the scheduling type > (RTE_SCHED_TYPE_ORDERED/ > RTE_SCHED_TYPE_ATOMIC/ > RTE_SCHED_TYPE_PARALLEL) > > Additionally the event mode introduces two modes of processing packets: > > Driver-mode: This mode will have bare minimum changes in the application > to support ipsec. There woudn't be any lookup etc done in > the application. And for inline-protocol use case, the > thread would resemble l2fwd as the ipsec processing would be > done entirely in the h/w. This mode can be used to benchmark > the raw performance of the h/w. All the application side > steps (like lookup) can be redone based on the requirement > of the end user. Hence the need for a mode which would > report the raw performance. > > App-mode: This mode will have all the features currently implemented with > ipsec-secgw (non librte_ipsec mode). All the lookups etc > would follow the existing methods and would report numbers > that can be compared against regular ipsec-secgw benchmark > numbers. > > The driver mode is selected with existing --single-sa option > (used also by poll mode). When --single-sa option is used > in conjution with event mode then index passed to --single-sa > is ignored. 
> > Example commands to execute ipsec-secgw in various modes on OCTEON TX2 > platform, > > #Inbound and outbound app mode > ipsec-secgw -w 0002:02:00.0,ipsec_in_max_spi=128 -w > 0002:03:00.0,ipsec_in_max_spi=128 -w 0002:0e:00.0 -w 0002:10:00.1 --log- > level=8 -c 0x1 -- -P -p 0x3 -u 0x1 --config "(1,0,0),(0,0,0)" -f aes-gcm.cfg -- > transfer-mode event --event-schedule-type parallel > What is the need of adding the port,queue,core mapping in case of event? In case of event, all queues are given to the eventdev and there is no need for specifying such a specific mapping. This was not done in l3fwd either. > #Inbound and outbound driver mode > ipsec-secgw -w 0002:02:00.0,ipsec_in_max_spi=128 -w > 0002:03:00.0,ipsec_in_max_spi=128 -w 0002:0e:00.0 -w 0002:10:00.1 --log- > level=8 -c 0x1 -- -P -p 0x3 -u 0x1 --config "(1,0,0),(0,0,0)" -f aes-gcm.cfg -- > transfer-mode event --event-schedule-type parallel --single-sa 0 > > This series adds non burst tx internal port workers only. It provides infrastructure > for non internal port workers, however does not define any. Also, only inline > ipsec > protocol mode is supported by the worker threads added. > > Following are planned features, > 1. Add burst mode workers. > 2. Add non internal port workers. > 3. Verify support for Rx core (the support is added but lack of h/w to verify). > 4. Add lookaside protocol support. > > Following are features that Marvell won't be attempting. > 1. Inline crypto support. > 2. Lookaside crypto support. > > For the features that Marvell won't be attempting, new workers can be > introduced by the respective stake holders. > > This series is tested on Marvell OCTEON TX2. > This series is targeted for 20.05 release. > > Changes in v4: > * Update ipsec-secgw documentation to describe the new options as well as > event mode support. > * In event mode reserve number of crypto queues equal to number of eth ports > in order to meet inline protocol offload requirements.
> * Add calculate_nb_mbufs() function to calculate number of mbufs in a pool > and include fragments table size into the calculation. > * Move structures ipsec_xf and ipsec_sad to ipsec.h and remove static keyword > from sa_out, nb_sa_out, sa_in and nb_sa_in in sa.c. > * Update process_ipsec_ev_inbound(), process_ipsec_ev_outbound(), check_sp() > and prepare_out_sessions_tbl() functions as a result of changes introduced > by SAD feature. > * Remove setting sa->cdev_id_qp to 0 in create_inline_session as sa_ctx > is created with rte_zmalloc. > * Minor cleanup enhancements: > - In eh_set_default_conf_eventdev() function in event_helper.c put definition > of int local vars in one line, remove invalid comment, put > "eventdev_config->ev_queue_mode = RTE_EVENT_QUEUE_CFG_ALL_TYPES" > in one line > instead of two. > - Remove extern "C" from event_helper.h. > - Put local vars in reverse xmas tree order in eh_dev_has_rx_internal_port() and > eh_dev_has_tx_internal_port() functions in event_helper.c. > - Put #include <rte_bitmap.h> in alphabetical order in ipsec-secgw.c. > - Move "extern volatile bool force_quit" and "#include <stdbool.h>" to ipsec- > secgw.h, > remove #include <rte_hash.h>. > - Remove not needed includes in ipsec_worker.c. > - Remove expired todo from ipsec_worker.h. > > Changes in v3: > * Move eh_conf_init() and eh_conf_uninit() functions to event_helper.c > including minor rework. > * Rename --schedule-type option to --event-schedule-type. > * Replace macro UNPROTECTED_PORT with static inline function > is_unprotected_port(). > * Move definitions of global variables used by multiple modules > to .c files and add externs in .h headers. > * Add eh_check_conf() which validates ipsec-secgw configuration > for event mode. > * Add dynamic calculation of number of buffers in a pool based > on number of cores, ports and crypto queues. > * Fix segmentation fault in event mode driver worker which happens > when there are no inline outbound sessions configured. 
> * Remove change related to updating number of crypto queues > in cryptodevs_init(). The update of crypto queues will be handled > in a separate patch. > * Fix compilation error on 32-bit platforms by using userdata instead > of udata64 from rte_mbuf. > > Changes in v2: > * Remove --process-dir option. Instead use existing unprotected port mask > option (-u) to decide wheter port handles inbound or outbound traffic. > * Remove --process-mode option. Instead use existing --single-sa option > to select between app and driver modes. > * Add handling of PKT_RX_SEC_OFFLOAD_FAIL result in app worker thread. > * Fix passing of req_rx_offload flags to create_default_ipsec_flow(). > * Move destruction of flows to a location where eth ports are stopped > and closed. > * Print error and exit when event mode --schedule-type option is used > in poll mode. > * Reduce number of goto statements replacing them with loop constructs. > * Remove sec_session_fixed table and replace it with locally build > table in driver worker thread. Table is indexed by port identifier > and holds first inline session pointer found for a given port. > * Print error and exit when sessions other than inline are configured > in event mode. > * When number of event queues is less than number of eth ports then > map all eth ports to one event queue. 
> * Cleanup and minor improvements in code as suggested by Konstantin > > Ankur Dwivedi (1): > examples/ipsec-secgw: add default rte flow for inline Rx > > Anoob Joseph (5): > examples/ipsec-secgw: add framework for eventmode helper > examples/ipsec-secgw: add eventdev port-lcore link > examples/ipsec-secgw: add Rx adapter support > examples/ipsec-secgw: add Tx adapter support > examples/ipsec-secgw: add routines to display config > > Lukasz Bartosik (9): > examples/ipsec-secgw: add routines to launch workers > examples/ipsec-secgw: add support for internal ports > examples/ipsec-secgw: add event helper config init/uninit > examples/ipsec-secgw: add eventmode to ipsec-secgw > examples/ipsec-secgw: add driver mode worker > examples/ipsec-secgw: add app mode worker > examples/ipsec-secgw: make number of buffers dynamic > doc: add event mode support to ipsec-secgw > examples/ipsec-secgw: reserve crypto queues in event mode > > doc/guides/sample_app_ug/ipsec_secgw.rst | 138 ++- > examples/ipsec-secgw/Makefile | 2 + > examples/ipsec-secgw/event_helper.c | 1812 > ++++++++++++++++++++++++++++++ > examples/ipsec-secgw/event_helper.h | 327 ++++++ > examples/ipsec-secgw/ipsec-secgw.c | 463 ++++++-- > examples/ipsec-secgw/ipsec-secgw.h | 88 ++ > examples/ipsec-secgw/ipsec.c | 5 +- > examples/ipsec-secgw/ipsec.h | 53 +- > examples/ipsec-secgw/ipsec_worker.c | 638 +++++++++++ > examples/ipsec-secgw/ipsec_worker.h | 35 + > examples/ipsec-secgw/meson.build | 6 +- > examples/ipsec-secgw/sa.c | 21 +- > examples/ipsec-secgw/sad.h | 5 - > 13 files changed, 3464 insertions(+), 129 deletions(-) > create mode 100644 examples/ipsec-secgw/event_helper.c > create mode 100644 examples/ipsec-secgw/event_helper.h > create mode 100644 examples/ipsec-secgw/ipsec-secgw.h > create mode 100644 examples/ipsec-secgw/ipsec_worker.c > create mode 100644 examples/ipsec-secgw/ipsec_worker.h > > -- > 2.7.4 ^ permalink raw reply [flat|nested] 147+ messages in thread
* Re: [dpdk-dev] [EXT] RE: [PATCH v4 00/15] add eventmode to ipsec-secgw 2020-02-24 13:40 ` Akhil Goyal @ 2020-02-25 12:09 ` Lukas Bartosik 0 siblings, 0 replies; 147+ messages in thread From: Lukas Bartosik @ 2020-02-25 12:09 UTC (permalink / raw) To: Akhil Goyal, Radu Nicolau, Thomas Monjalon Cc: Jerin Jacob Kollanukkaran, Narayana Prasad Raju Athreya, Ankur Dwivedi, Anoob Joseph, Archana Muniganti, Tejasree Kondoj, Vamsi Krishna Attunuru, Konstantin Ananyev, dev Hi Akhil, Please see my answer below. Thanks, Lukasz On 24.02.2020 14:40, Akhil Goyal wrote: > External Email > > ---------------------------------------------------------------------- > Hi Anoob/Lukasz, > >> >> This series introduces event-mode additions to ipsec-secgw. >> >> With this series, ipsec-secgw would be able to run in eventmode. The >> worker thread (executing loop) would be receiving events and would be >> submitting it back to the eventdev after the processing. This way, >> multicore scaling and h/w assisted scheduling is achieved by making use >> of the eventdev capabilities. >> >> Since the underlying event device would be having varying capabilities, >> the worker thread could be drafted differently to maximize performance. >> This series introduces usage of multiple worker threads, among which the >> one to be used will be determined by the operating conditions and the >> underlying device capabilities. >> >> For example, if an event device - eth device pair has Tx internal port, >> then application can do tx_adapter_enqueue() instead of regular >> event_enqueue(). So a thread making an assumption that the device pair >> has internal port will not be the right solution for another pair. The >> infrastructure added with these patches aims to help application to have >> multiple worker threads, there by extracting maximum performance from >> every device without affecting existing paths/use cases. >> >> The eventmode configuration is predefined. 
All packets reaching one eth >> port will hit one event queue. All event queues will be mapped to all >> event ports. So all cores will be able to receive traffic from all ports. >> When schedule_type is set as RTE_SCHED_TYPE_ORDERED/ATOMIC, event >> device >> will ensure the ordering. Ordering would be lost when tried in PARALLEL. >> >> Following command line options are introduced, >> >> --transfer-mode: to choose between poll mode & event mode >> --event-schedule-type: to specify the scheduling type >> (RTE_SCHED_TYPE_ORDERED/ >> RTE_SCHED_TYPE_ATOMIC/ >> RTE_SCHED_TYPE_PARALLEL) >> >> Additionally the event mode introduces two modes of processing packets: >> >> Driver-mode: This mode will have bare minimum changes in the application >> to support ipsec. There woudn't be any lookup etc done in >> the application. And for inline-protocol use case, the >> thread would resemble l2fwd as the ipsec processing would be >> done entirely in the h/w. This mode can be used to benchmark >> the raw performance of the h/w. All the application side >> steps (like lookup) can be redone based on the requirement >> of the end user. Hence the need for a mode which would >> report the raw performance. >> >> App-mode: This mode will have all the features currently implemented with >> ipsec-secgw (non librte_ipsec mode). All the lookups etc >> would follow the existing methods and would report numbers >> that can be compared against regular ipsec-secgw benchmark >> numbers. >> >> The driver mode is selected with existing --single-sa option >> (used also by poll mode). When --single-sa option is used >> in conjution with event mode then index passed to --single-sa >> is ignored. 
>> >> Example commands to execute ipsec-secgw in various modes on OCTEON TX2 >> platform, >> >> #Inbound and outbound app mode >> ipsec-secgw -w 0002:02:00.0,ipsec_in_max_spi=128 -w >> 0002:03:00.0,ipsec_in_max_spi=128 -w 0002:0e:00.0 -w 0002:10:00.1 --log- >> level=8 -c 0x1 -- -P -p 0x3 -u 0x1 --config "(1,0,0),(0,0,0)" -f aes-gcm.cfg -- >> transfer-mode event --event-schedule-type parallel >> > > What is the need of adding the port queue core mapping in case of event. > In case of event, all queues are given to eventdev and there is no need for specifying such specific mapping. In l3fwd also this was not done. > [Lukasz] You are right port,queue,core mapping is not needed in case of event mode. I will remove --config option from being used in event mode. Number of ports to be initialized will be derived from the port mask and nb_rx_queues per port will be set to 1. >> #Inbound and outbound driver mode >> ipsec-secgw -w 0002:02:00.0,ipsec_in_max_spi=128 -w >> 0002:03:00.0,ipsec_in_max_spi=128 -w 0002:0e:00.0 -w 0002:10:00.1 --log- >> level=8 -c 0x1 -- -P -p 0x3 -u 0x1 --config "(1,0,0),(0,0,0)" -f aes-gcm.cfg -- >> transfer-mode event --event-schedule-type parallel --single-sa 0 >> >> This series adds non burst tx internal port workers only. It provides infrastructure >> for non internal port workers, however does not define any. Also, only inline >> ipsec >> protocol mode is supported by the worker threads added. >> >> Following are planned features, >> 1. Add burst mode workers. >> 2. Add non internal port workers. >> 3. Verify support for Rx core (the support is added but lack of h/w to verify). >> 4. Add lookaside protocol support. >> >> Following are features that Marvell won't be attempting. >> 1. Inline crypto support. >> 2. Lookaside crypto support. >> >> For the features that Marvell won't be attempting, new workers can be >> introduced by the respective stake holders. >> >> This series is tested on Marvell OCTEON TX2. 
>> This series is targeted for 20.05 release. >> >> Changes in v4: >> * Update ipsec-secgw documentation to describe the new options as well as >> event mode support. >> * In event mode reserve number of crypto queues equal to number of eth ports >> in order to meet inline protocol offload requirements. >> * Add calculate_nb_mbufs() function to calculate number of mbufs in a pool >> and include fragments table size into the calculation. >> * Move structures ipsec_xf and ipsec_sad to ipsec.h and remove static keyword >> from sa_out, nb_sa_out, sa_in and nb_sa_in in sa.c. >> * Update process_ipsec_ev_inbound(), process_ipsec_ev_outbound(), check_sp() >> and prepare_out_sessions_tbl() functions as a result of changes introduced >> by SAD feature. >> * Remove setting sa->cdev_id_qp to 0 in create_inline_session as sa_ctx >> is created with rte_zmalloc. >> * Minor cleanup enhancements: >> - In eh_set_default_conf_eventdev() function in event_helper.c put definition >> of int local vars in one line, remove invalid comment, put >> "eventdev_config->ev_queue_mode = RTE_EVENT_QUEUE_CFG_ALL_TYPES" >> in one line >> instead of two. >> - Remove extern "C" from event_helper.h. >> - Put local vars in reverse xmas tree order in eh_dev_has_rx_internal_port() and >> eh_dev_has_tx_internal_port() functions in event_helper.c. >> - Put #include <rte_bitmap.h> in alphabetical order in ipsec-secgw.c. >> - Move "extern volatile bool force_quit" and "#include <stdbool.h>" to ipsec- >> secgw.h, >> remove #include <rte_hash.h>. >> - Remove not needed includes in ipsec_worker.c. >> - Remove expired todo from ipsec_worker.h. >> >> Changes in v3: >> * Move eh_conf_init() and eh_conf_uninit() functions to event_helper.c >> including minor rework. >> * Rename --schedule-type option to --event-schedule-type. >> * Replace macro UNPROTECTED_PORT with static inline function >> is_unprotected_port(). 
>> * Move definitions of global variables used by multiple modules >> to .c files and add externs in .h headers. >> * Add eh_check_conf() which validates ipsec-secgw configuration >> for event mode. >> * Add dynamic calculation of number of buffers in a pool based >> on number of cores, ports and crypto queues. >> * Fix segmentation fault in event mode driver worker which happens >> when there are no inline outbound sessions configured. >> * Remove change related to updating number of crypto queues >> in cryptodevs_init(). The update of crypto queues will be handled >> in a separate patch. >> * Fix compilation error on 32-bit platforms by using userdata instead >> of udata64 from rte_mbuf. >> >> Changes in v2: >> * Remove --process-dir option. Instead use existing unprotected port mask >> option (-u) to decide wheter port handles inbound or outbound traffic. >> * Remove --process-mode option. Instead use existing --single-sa option >> to select between app and driver modes. >> * Add handling of PKT_RX_SEC_OFFLOAD_FAIL result in app worker thread. >> * Fix passing of req_rx_offload flags to create_default_ipsec_flow(). >> * Move destruction of flows to a location where eth ports are stopped >> and closed. >> * Print error and exit when event mode --schedule-type option is used >> in poll mode. >> * Reduce number of goto statements replacing them with loop constructs. >> * Remove sec_session_fixed table and replace it with locally build >> table in driver worker thread. Table is indexed by port identifier >> and holds first inline session pointer found for a given port. >> * Print error and exit when sessions other than inline are configured >> in event mode. >> * When number of event queues is less than number of eth ports then >> map all eth ports to one event queue. 
>> * Cleanup and minor improvements in code as suggested by Konstantin >> >> Ankur Dwivedi (1): >> examples/ipsec-secgw: add default rte flow for inline Rx >> >> Anoob Joseph (5): >> examples/ipsec-secgw: add framework for eventmode helper >> examples/ipsec-secgw: add eventdev port-lcore link >> examples/ipsec-secgw: add Rx adapter support >> examples/ipsec-secgw: add Tx adapter support >> examples/ipsec-secgw: add routines to display config >> >> Lukasz Bartosik (9): >> examples/ipsec-secgw: add routines to launch workers >> examples/ipsec-secgw: add support for internal ports >> examples/ipsec-secgw: add event helper config init/uninit >> examples/ipsec-secgw: add eventmode to ipsec-secgw >> examples/ipsec-secgw: add driver mode worker >> examples/ipsec-secgw: add app mode worker >> examples/ipsec-secgw: make number of buffers dynamic >> doc: add event mode support to ipsec-secgw >> examples/ipsec-secgw: reserve crypto queues in event mode >> >> doc/guides/sample_app_ug/ipsec_secgw.rst | 138 ++- >> examples/ipsec-secgw/Makefile | 2 + >> examples/ipsec-secgw/event_helper.c | 1812 >> ++++++++++++++++++++++++++++++ >> examples/ipsec-secgw/event_helper.h | 327 ++++++ >> examples/ipsec-secgw/ipsec-secgw.c | 463 ++++++-- >> examples/ipsec-secgw/ipsec-secgw.h | 88 ++ >> examples/ipsec-secgw/ipsec.c | 5 +- >> examples/ipsec-secgw/ipsec.h | 53 +- >> examples/ipsec-secgw/ipsec_worker.c | 638 +++++++++++ >> examples/ipsec-secgw/ipsec_worker.h | 35 + >> examples/ipsec-secgw/meson.build | 6 +- >> examples/ipsec-secgw/sa.c | 21 +- >> examples/ipsec-secgw/sad.h | 5 - >> 13 files changed, 3464 insertions(+), 129 deletions(-) >> create mode 100644 examples/ipsec-secgw/event_helper.c >> create mode 100644 examples/ipsec-secgw/event_helper.h >> create mode 100644 examples/ipsec-secgw/ipsec-secgw.h >> create mode 100644 examples/ipsec-secgw/ipsec_worker.c >> create mode 100644 examples/ipsec-secgw/ipsec_worker.h >> >> -- >> 2.7.4 > ^ permalink raw reply [flat|nested] 147+ messages 
in thread
* [dpdk-dev] [PATCH v5 00/15] add eventmode to ipsec-secgw 2020-02-20 8:01 ` [dpdk-dev] [PATCH v4 00/15] add eventmode to ipsec-secgw Lukasz Bartosik ` (16 preceding siblings ...) 2020-02-24 13:40 ` Akhil Goyal @ 2020-02-27 16:18 ` Lukasz Bartosik 2020-02-27 16:18 ` [dpdk-dev] [PATCH v5 01/15] examples/ipsec-secgw: add default rte flow for inline Rx Lukasz Bartosik ` (16 more replies) 17 siblings, 17 replies; 147+ messages in thread From: Lukasz Bartosik @ 2020-02-27 16:18 UTC (permalink / raw) To: Akhil Goyal, Radu Nicolau, Thomas Monjalon Cc: Jerin Jacob, Narayana Prasad, Ankur Dwivedi, Anoob Joseph, Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru, Konstantin Ananyev, dev This series introduces event-mode additions to ipsec-secgw. With this series, ipsec-secgw would be able to run in eventmode. The worker thread (executing loop) would receive events and submit them back to the eventdev after processing. This way, multicore scaling and h/w assisted scheduling are achieved by making use of the eventdev capabilities. Since the underlying event devices have varying capabilities, the worker thread could be drafted differently to maximize performance. This series introduces usage of multiple worker threads, among which the one to be used will be determined by the operating conditions and the underlying device capabilities. For example, if an event device - eth device pair has a Tx internal port, then the application can do tx_adapter_enqueue() instead of regular event_enqueue(). So a thread making the assumption that the device pair has an internal port will not be the right solution for another pair. The infrastructure added with these patches aims to help the application have multiple worker threads, thereby extracting maximum performance from every device without affecting existing paths/use cases. The eventmode configuration is predefined. All packets reaching one eth port will hit one event queue.
All event queues will be mapped to all event ports. So all cores will be able to receive traffic from all ports. When schedule_type is set as RTE_SCHED_TYPE_ORDERED/ATOMIC, the event device will ensure the ordering. Ordering would be lost when tried in PARALLEL. The following command line options are introduced: --transfer-mode: to choose between poll mode & event mode --event-schedule-type: to specify the scheduling type (RTE_SCHED_TYPE_ORDERED/ RTE_SCHED_TYPE_ATOMIC/ RTE_SCHED_TYPE_PARALLEL) Additionally the event mode introduces two modes of processing packets: Driver-mode: This mode will have bare minimum changes in the application to support ipsec. There wouldn't be any lookup etc. done in the application. And for the inline-protocol use case, the thread would resemble l2fwd as the ipsec processing would be done entirely in the h/w. This mode can be used to benchmark the raw performance of the h/w. All the application side steps (like lookup) can be redone based on the requirement of the end user. Hence the need for a mode which would report the raw performance. App-mode: This mode will have all the features currently implemented with ipsec-secgw (non librte_ipsec mode). All the lookups etc. would follow the existing methods and would report numbers that can be compared against regular ipsec-secgw benchmark numbers. The driver mode is selected with the existing --single-sa option (used also by poll mode). When the --single-sa option is used in conjunction with event mode then the index passed to --single-sa is ignored.
Example commands to execute ipsec-secgw in various modes on OCTEON TX2 platform, #Inbound and outbound app mode ipsec-secgw -w 0002:02:00.0,ipsec_in_max_spi=128 -w 0002:03:00.0,ipsec_in_max_spi=128 -w 0002:0e:00.0 -w 0002:10:00.1 --log-level=8 -c 0x1 -- -P -p 0x3 -u 0x1 -f aes-gcm.cfg --transfer-mode event --event-schedule-type parallel #Inbound and outbound driver mode ipsec-secgw -w 0002:02:00.0,ipsec_in_max_spi=128 -w 0002:03:00.0,ipsec_in_max_spi=128 -w 0002:0e:00.0 -w 0002:10:00.1 --log-level=8 -c 0x1 -- -P -p 0x3 -u 0x1 -f aes-gcm.cfg --transfer-mode event --event-schedule-type parallel --single-sa 0 This series adds non-burst Tx internal port workers only. It provides infrastructure for non-internal-port workers, however it does not define any. Also, only inline ipsec protocol mode is supported by the worker threads added. Following are planned features, 1. Add burst mode workers. 2. Add non-internal-port workers. 3. Verify support for Rx core (the support is added but h/w to verify it is lacking). 4. Add lookaside protocol support. Following are features that Marvell won't be attempting. 1. Inline crypto support. 2. Lookaside crypto support. For the features that Marvell won't be attempting, new workers can be introduced by the respective stakeholders. This series is tested on Marvell OCTEON TX2. This series is targeted for the 20.05 release. Changes in v5: * Rename function check_params() to check_poll_mode_params() and check_eh_conf() to check_event_mode_params() in order to make their purpose clear. * Forbid usage of the --config option in event mode. * Replace magic numbers on return with enum values in process_ipsec_ev_inbound() and process_ipsec_ev_outbound() functions. * Add session_priv_pool for both inbound and outbound configuration in ipsec_wrkr_non_burst_int_port_app_mode worker. * Add check of event type in ipsec_wrkr_non_burst_int_port_app_mode worker. * Update description of the --config option in both ipsec-secgw help and documentation.
Changes in v4: * Update ipsec-secgw documentation to describe the new options as well as event mode support. * In event mode reserve number of crypto queues equal to number of eth ports in order to meet inline protocol offload requirements. * Add calculate_nb_mbufs() function to calculate number of mbufs in a pool and include fragments table size into the calculation. * Move structures ipsec_xf and ipsec_sad to ipsec.h and remove static keyword from sa_out, nb_sa_out, sa_in and nb_sa_in in sa.c. * Update process_ipsec_ev_inbound(), process_ipsec_ev_outbound(), check_sp() and prepare_out_sessions_tbl() functions as a result of changes introduced by SAD feature. * Remove setting sa->cdev_id_qp to 0 in create_inline_session as sa_ctx is created with rte_zmalloc. * Minor cleanup enhancements: - In eh_set_default_conf_eventdev() function in event_helper.c put definition of int local vars in one line, remove invalid comment, put "eventdev_config->ev_queue_mode = RTE_EVENT_QUEUE_CFG_ALL_TYPES" in one line instead of two. - Remove extern "C" from event_helper.h. - Put local vars in reverse xmas tree order in eh_dev_has_rx_internal_port() and eh_dev_has_tx_internal_port() functions in event_helper.c. - Put #include <rte_bitmap.h> in alphabetical order in ipsec-secgw.c. - Move "extern volatile bool force_quit" and "#include <stdbool.h>" to ipsec-secgw.h, remove #include <rte_hash.h>. - Remove not needed includes in ipsec_worker.c. - Remove expired todo from ipsec_worker.h. Changes in v3: * Move eh_conf_init() and eh_conf_uninit() functions to event_helper.c including minor rework. * Rename --schedule-type option to --event-schedule-type. * Replace macro UNPROTECTED_PORT with static inline function is_unprotected_port(). * Move definitions of global variables used by multiple modules to .c files and add externs in .h headers. * Add eh_check_conf() which validates ipsec-secgw configuration for event mode. 
* Add dynamic calculation of number of buffers in a pool based on number of cores, ports and crypto queues. * Fix segmentation fault in event mode driver worker which happens when there are no inline outbound sessions configured. * Remove change related to updating number of crypto queues in cryptodevs_init(). The update of crypto queues will be handled in a separate patch. * Fix compilation error on 32-bit platforms by using userdata instead of udata64 from rte_mbuf. Changes in v2: * Remove --process-dir option. Instead use existing unprotected port mask option (-u) to decide whether a port handles inbound or outbound traffic. * Remove --process-mode option. Instead use existing --single-sa option to select between app and driver modes. * Add handling of PKT_RX_SEC_OFFLOAD_FAIL result in app worker thread. * Fix passing of req_rx_offload flags to create_default_ipsec_flow(). * Move destruction of flows to a location where eth ports are stopped and closed. * Print error and exit when event mode --schedule-type option is used in poll mode. * Reduce number of goto statements replacing them with loop constructs. * Remove sec_session_fixed table and replace it with a locally built table in driver worker thread. The table is indexed by port identifier and holds the first inline session pointer found for a given port. * Print error and exit when sessions other than inline are configured in event mode. * When number of event queues is less than number of eth ports then map all eth ports to one event queue.
* Cleanup and minor improvements in code as suggested by Konstantin Ankur Dwivedi (1): examples/ipsec-secgw: add default rte flow for inline Rx Anoob Joseph (5): examples/ipsec-secgw: add framework for eventmode helper examples/ipsec-secgw: add eventdev port-lcore link examples/ipsec-secgw: add Rx adapter support examples/ipsec-secgw: add Tx adapter support examples/ipsec-secgw: add routines to display config Lukasz Bartosik (9): examples/ipsec-secgw: add routines to launch workers examples/ipsec-secgw: add support for internal ports examples/ipsec-secgw: add event helper config init/uninit examples/ipsec-secgw: add eventmode to ipsec-secgw examples/ipsec-secgw: add driver mode worker examples/ipsec-secgw: add app mode worker examples/ipsec-secgw: make number of buffers dynamic doc: add event mode support to ipsec-secgw examples/ipsec-secgw: reserve crypto queues in event mode doc/guides/sample_app_ug/ipsec_secgw.rst | 135 ++- examples/ipsec-secgw/Makefile | 2 + examples/ipsec-secgw/event_helper.c | 1812 ++++++++++++++++++++++++++++++ examples/ipsec-secgw/event_helper.h | 327 ++++++ examples/ipsec-secgw/ipsec-secgw.c | 506 +++++++-- examples/ipsec-secgw/ipsec-secgw.h | 88 ++ examples/ipsec-secgw/ipsec.c | 5 +- examples/ipsec-secgw/ipsec.h | 53 +- examples/ipsec-secgw/ipsec_worker.c | 649 +++++++++++ examples/ipsec-secgw/ipsec_worker.h | 41 + examples/ipsec-secgw/meson.build | 6 +- examples/ipsec-secgw/sa.c | 21 +- examples/ipsec-secgw/sad.h | 5 - 13 files changed, 3516 insertions(+), 134 deletions(-) create mode 100644 examples/ipsec-secgw/event_helper.c create mode 100644 examples/ipsec-secgw/event_helper.h create mode 100644 examples/ipsec-secgw/ipsec-secgw.h create mode 100644 examples/ipsec-secgw/ipsec_worker.c create mode 100644 examples/ipsec-secgw/ipsec_worker.h -- 2.7.4 ^ permalink raw reply [flat|nested] 147+ messages in thread
* [dpdk-dev] [PATCH v5 01/15] examples/ipsec-secgw: add default rte flow for inline Rx 2020-02-27 16:18 ` [dpdk-dev] [PATCH v5 " Lukasz Bartosik @ 2020-02-27 16:18 ` Lukasz Bartosik 2020-02-27 16:18 ` [dpdk-dev] [PATCH v5 02/15] examples/ipsec-secgw: add framework for eventmode helper Lukasz Bartosik ` (15 subsequent siblings) 16 siblings, 0 replies; 147+ messages in thread From: Lukasz Bartosik @ 2020-02-27 16:18 UTC (permalink / raw) To: Akhil Goyal, Radu Nicolau, Thomas Monjalon Cc: Ankur Dwivedi, Jerin Jacob, Narayana Prasad, Anoob Joseph, Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru, Konstantin Ananyev, dev From: Ankur Dwivedi <adwivedi@marvell.com> The default flow created would enable security processing on all ESP packets. If the default flow is created, SA based rte_flow creation would be skipped. Signed-off-by: Ankur Dwivedi <adwivedi@marvell.com> Signed-off-by: Anoob Joseph <anoobj@marvell.com> --- examples/ipsec-secgw/ipsec-secgw.c | 61 +++++++++++++++++++++++++++++++++----- examples/ipsec-secgw/ipsec.c | 5 +++- examples/ipsec-secgw/ipsec.h | 6 ++++ 3 files changed, 63 insertions(+), 9 deletions(-) diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c index 4799bc9..e1ee7c3 100644 --- a/examples/ipsec-secgw/ipsec-secgw.c +++ b/examples/ipsec-secgw/ipsec-secgw.c @@ -129,6 +129,8 @@ struct ethaddr_info ethaddr_tbl[RTE_MAX_ETHPORTS] = { { 0, ETHADDR(0x00, 0x16, 0x3e, 0x49, 0x9e, 0xdd) } }; +struct flow_info flow_info_tbl[RTE_MAX_ETHPORTS]; + #define CMD_LINE_OPT_CONFIG "config" #define CMD_LINE_OPT_SINGLE_SA "single-sa" #define CMD_LINE_OPT_CRYPTODEV_MASK "cryptodev_mask" @@ -2432,6 +2434,48 @@ reassemble_init(void) return rc; } +static void +create_default_ipsec_flow(uint16_t port_id, uint64_t rx_offloads) +{ + struct rte_flow_action action[2]; + struct rte_flow_item pattern[2]; + struct rte_flow_attr attr = {0}; + struct rte_flow_error err; + struct rte_flow *flow; + int ret; + + if (!(rx_offloads & 
DEV_RX_OFFLOAD_SECURITY)) + return; + + /* Add the default rte_flow to enable SECURITY for all ESP packets */ + + pattern[0].type = RTE_FLOW_ITEM_TYPE_ESP; + pattern[0].spec = NULL; + pattern[0].mask = NULL; + pattern[0].last = NULL; + pattern[1].type = RTE_FLOW_ITEM_TYPE_END; + + action[0].type = RTE_FLOW_ACTION_TYPE_SECURITY; + action[0].conf = NULL; + action[1].type = RTE_FLOW_ACTION_TYPE_END; + action[1].conf = NULL; + + attr.ingress = 1; + + ret = rte_flow_validate(port_id, &attr, pattern, action, &err); + if (ret) + return; + + flow = rte_flow_create(port_id, &attr, pattern, action, &err); + if (flow == NULL) + return; + + flow_info_tbl[port_id].rx_def_flow = flow; + RTE_LOG(INFO, IPSEC, + "Created default flow enabling SECURITY for all ESP traffic on port %d\n", + port_id); +} + int32_t main(int32_t argc, char **argv) { @@ -2440,7 +2484,8 @@ main(int32_t argc, char **argv) uint32_t i; uint8_t socket_id; uint16_t portid; - uint64_t req_rx_offloads, req_tx_offloads; + uint64_t req_rx_offloads[RTE_MAX_ETHPORTS]; + uint64_t req_tx_offloads[RTE_MAX_ETHPORTS]; size_t sess_sz; /* init EAL */ @@ -2502,8 +2547,10 @@ main(int32_t argc, char **argv) if ((enabled_port_mask & (1 << portid)) == 0) continue; - sa_check_offloads(portid, &req_rx_offloads, &req_tx_offloads); - port_init(portid, req_rx_offloads, req_tx_offloads); + sa_check_offloads(portid, &req_rx_offloads[portid], + &req_tx_offloads[portid]); + port_init(portid, req_rx_offloads[portid], + req_tx_offloads[portid]); } cryptodevs_init(); @@ -2513,11 +2560,9 @@ main(int32_t argc, char **argv) if ((enabled_port_mask & (1 << portid)) == 0) continue; - /* - * Start device - * note: device must be started before a flow rule - * can be installed. 
- */ + /* Create flow before starting the device */ + create_default_ipsec_flow(portid, req_rx_offloads[portid]); + ret = rte_eth_dev_start(portid); if (ret < 0) rte_exit(EXIT_FAILURE, "rte_eth_dev_start: " diff --git a/examples/ipsec-secgw/ipsec.c b/examples/ipsec-secgw/ipsec.c index 6e81207..d406571 100644 --- a/examples/ipsec-secgw/ipsec.c +++ b/examples/ipsec-secgw/ipsec.c @@ -275,6 +275,10 @@ create_inline_session(struct socket_ctx *skt_ctx, struct ipsec_sa *sa, unsigned int i; unsigned int j; + /* Don't create flow if default flow is created */ + if (flow_info_tbl[sa->portid].rx_def_flow) + return 0; + ret = rte_eth_dev_info_get(sa->portid, &dev_info); if (ret != 0) { RTE_LOG(ERR, IPSEC, @@ -410,7 +414,6 @@ create_inline_session(struct socket_ctx *skt_ctx, struct ipsec_sa *sa, ips->security.ol_flags = sec_cap->ol_flags; ips->security.ctx = sec_ctx; } - sa->cdev_id_qp = 0; return 0; } diff --git a/examples/ipsec-secgw/ipsec.h b/examples/ipsec-secgw/ipsec.h index 4f2fd61..8f5d382 100644 --- a/examples/ipsec-secgw/ipsec.h +++ b/examples/ipsec-secgw/ipsec.h @@ -87,6 +87,12 @@ struct app_sa_prm { extern struct app_sa_prm app_sa_prm; +struct flow_info { + struct rte_flow *rx_def_flow; +}; + +extern struct flow_info flow_info_tbl[RTE_MAX_ETHPORTS]; + enum { IPSEC_SESSION_PRIMARY = 0, IPSEC_SESSION_FALLBACK = 1, -- 2.7.4 ^ permalink raw reply [flat|nested] 147+ messages in thread
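The gating logic of this patch (install the default ESP flow only when the port was configured with the RX SECURITY offload, then skip per-SA rte_flow creation on ports that have it) can be modeled without DPDK as a small sketch. The flag value, the fake flow handle, and the function names `would_create_default_flow`/`need_per_sa_flow` are illustrative, not part of the patch:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative stand-in for the DPDK RX offload flag */
#define DEV_RX_OFFLOAD_SECURITY (1ULL << 15)

struct flow_info {
	void *rx_def_flow; /* non-NULL once the default ESP flow is installed */
};

static struct flow_info flow_info_tbl[4];

/* Mirrors create_default_ipsec_flow(): act only when the port has the
 * RX SECURITY offload. Returns 1 if a default flow would be created. */
static int
would_create_default_flow(uint16_t port_id, uint64_t rx_offloads)
{
	if (!(rx_offloads & DEV_RX_OFFLOAD_SECURITY))
		return 0;
	flow_info_tbl[port_id].rx_def_flow = (void *)1; /* fake handle */
	return 1;
}

/* Mirrors the check added to create_inline_session(): per-SA rte_flow
 * creation is skipped when the default flow already covers the port. */
static int
need_per_sa_flow(uint16_t port_id)
{
	return flow_info_tbl[port_id].rx_def_flow == NULL;
}
```

In the real patch the fake handle is the `struct rte_flow *` returned by `rte_flow_create()`, and the validate/create pair runs before `rte_eth_dev_start()`.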
* [dpdk-dev] [PATCH v5 02/15] examples/ipsec-secgw: add framework for eventmode helper 2020-02-27 16:18 ` [dpdk-dev] [PATCH v5 " Lukasz Bartosik 2020-02-27 16:18 ` [dpdk-dev] [PATCH v5 01/15] examples/ipsec-secgw: add default rte flow for inline Rx Lukasz Bartosik @ 2020-02-27 16:18 ` Lukasz Bartosik 2020-02-27 16:18 ` [dpdk-dev] [PATCH v5 03/15] examples/ipsec-secgw: add eventdev port-lcore link Lukasz Bartosik ` (14 subsequent siblings) 16 siblings, 0 replies; 147+ messages in thread From: Lukasz Bartosik @ 2020-02-27 16:18 UTC (permalink / raw) To: Akhil Goyal, Radu Nicolau, Thomas Monjalon Cc: Anoob Joseph, Jerin Jacob, Narayana Prasad, Ankur Dwivedi, Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru, Konstantin Ananyev, dev From: Anoob Joseph <anoobj@marvell.com> Add framework for eventmode helper. Event mode involves initialization of multiple devices such as eventdev and ethdev. Add routines to initialize and uninitialize the event device. Generate a default config for the event device if it is not specified in the configuration. Currently the event helper supports only a single event device. 
Signed-off-by: Anoob Joseph <anoobj@marvell.com> Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> --- examples/ipsec-secgw/Makefile | 1 + examples/ipsec-secgw/event_helper.c | 320 ++++++++++++++++++++++++++++++++++++ examples/ipsec-secgw/event_helper.h | 107 ++++++++++++ examples/ipsec-secgw/meson.build | 4 +- 4 files changed, 430 insertions(+), 2 deletions(-) create mode 100644 examples/ipsec-secgw/event_helper.c create mode 100644 examples/ipsec-secgw/event_helper.h diff --git a/examples/ipsec-secgw/Makefile b/examples/ipsec-secgw/Makefile index ad83d79..66d05d4 100644 --- a/examples/ipsec-secgw/Makefile +++ b/examples/ipsec-secgw/Makefile @@ -16,6 +16,7 @@ SRCS-y += sad.c SRCS-y += rt.c SRCS-y += ipsec_process.c SRCS-y += ipsec-secgw.c +SRCS-y += event_helper.c CFLAGS += -gdwarf-2 diff --git a/examples/ipsec-secgw/event_helper.c b/examples/ipsec-secgw/event_helper.c new file mode 100644 index 0000000..0c38474 --- /dev/null +++ b/examples/ipsec-secgw/event_helper.c @@ -0,0 +1,320 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C) 2020 Marvell International Ltd. + */ +#include <rte_ethdev.h> +#include <rte_eventdev.h> + +#include "event_helper.h" + +static int +eh_set_default_conf_eventdev(struct eventmode_conf *em_conf) +{ + int lcore_count, nb_eventdev, nb_eth_dev, ret; + struct eventdev_params *eventdev_config; + struct rte_event_dev_info dev_info; + + /* Get the number of event devices */ + nb_eventdev = rte_event_dev_count(); + if (nb_eventdev == 0) { + EH_LOG_ERR("No event devices detected"); + return -EINVAL; + } + + if (nb_eventdev != 1) { + EH_LOG_ERR("Event mode does not support multiple event devices. 
" + "Please provide only one event device."); + return -EINVAL; + } + + /* Get the number of eth devs */ + nb_eth_dev = rte_eth_dev_count_avail(); + if (nb_eth_dev == 0) { + EH_LOG_ERR("No eth devices detected"); + return -EINVAL; + } + + /* Get the number of lcores */ + lcore_count = rte_lcore_count(); + + /* Read event device info */ + ret = rte_event_dev_info_get(0, &dev_info); + if (ret < 0) { + EH_LOG_ERR("Failed to read event device info %d", ret); + return ret; + } + + /* Check if enough ports are available */ + if (dev_info.max_event_ports < 2) { + EH_LOG_ERR("Not enough event ports available"); + return -EINVAL; + } + + /* Get the first event dev conf */ + eventdev_config = &(em_conf->eventdev_config[0]); + + /* Save number of queues & ports available */ + eventdev_config->eventdev_id = 0; + eventdev_config->nb_eventqueue = dev_info.max_event_queues; + eventdev_config->nb_eventport = dev_info.max_event_ports; + eventdev_config->ev_queue_mode = RTE_EVENT_QUEUE_CFG_ALL_TYPES; + + /* Check if there are more queues than required */ + if (eventdev_config->nb_eventqueue > nb_eth_dev + 1) { + /* One queue is reserved for Tx */ + eventdev_config->nb_eventqueue = nb_eth_dev + 1; + } + + /* Check if there are more ports than required */ + if (eventdev_config->nb_eventport > lcore_count) { + /* One port per lcore is enough */ + eventdev_config->nb_eventport = lcore_count; + } + + /* Update the number of event devices */ + em_conf->nb_eventdev++; + + return 0; +} + +static int +eh_validate_conf(struct eventmode_conf *em_conf) +{ + int ret; + + /* + * Check if event devs are specified. 
Else probe the event devices + * and initialize the config with all ports & queues available + */ + if (em_conf->nb_eventdev == 0) { + ret = eh_set_default_conf_eventdev(em_conf); + if (ret != 0) + return ret; + } + + return 0; +} + +static int +eh_initialize_eventdev(struct eventmode_conf *em_conf) +{ + struct rte_event_queue_conf eventq_conf = {0}; + struct rte_event_dev_info evdev_default_conf; + struct rte_event_dev_config eventdev_conf; + struct eventdev_params *eventdev_config; + int nb_eventdev = em_conf->nb_eventdev; + uint8_t eventdev_id; + int nb_eventqueue; + uint8_t i, j; + int ret; + + for (i = 0; i < nb_eventdev; i++) { + + /* Get eventdev config */ + eventdev_config = &(em_conf->eventdev_config[i]); + + /* Get event dev ID */ + eventdev_id = eventdev_config->eventdev_id; + + /* Get the number of queues */ + nb_eventqueue = eventdev_config->nb_eventqueue; + + /* Reset the default conf */ + memset(&evdev_default_conf, 0, + sizeof(struct rte_event_dev_info)); + + /* Get default conf of eventdev */ + ret = rte_event_dev_info_get(eventdev_id, &evdev_default_conf); + if (ret < 0) { + EH_LOG_ERR( + "Error in getting event device info[devID:%d]", + eventdev_id); + return ret; + } + + memset(&eventdev_conf, 0, sizeof(struct rte_event_dev_config)); + eventdev_conf.nb_events_limit = + evdev_default_conf.max_num_events; + eventdev_conf.nb_event_queues = nb_eventqueue; + eventdev_conf.nb_event_ports = + eventdev_config->nb_eventport; + eventdev_conf.nb_event_queue_flows = + evdev_default_conf.max_event_queue_flows; + eventdev_conf.nb_event_port_dequeue_depth = + evdev_default_conf.max_event_port_dequeue_depth; + eventdev_conf.nb_event_port_enqueue_depth = + evdev_default_conf.max_event_port_enqueue_depth; + + /* Configure event device */ + ret = rte_event_dev_configure(eventdev_id, &eventdev_conf); + if (ret < 0) { + EH_LOG_ERR("Error in configuring event device"); + return ret; + } + + /* Configure event queues */ + for (j = 0; j < nb_eventqueue; j++) { + + 
memset(&eventq_conf, 0, + sizeof(struct rte_event_queue_conf)); + + /* Per event dev queues can be ATQ or SINGLE LINK */ + eventq_conf.event_queue_cfg = + eventdev_config->ev_queue_mode; + /* + * All queues need to be set with sched_type as + * schedule type for the application stage. One queue + * would be reserved for the final eth tx stage. This + * will be an atomic queue. + */ + if (j == nb_eventqueue-1) { + eventq_conf.schedule_type = + RTE_SCHED_TYPE_ATOMIC; + } else { + eventq_conf.schedule_type = + em_conf->ext_params.sched_type; + } + + /* Set max atomic flows to 1024 */ + eventq_conf.nb_atomic_flows = 1024; + eventq_conf.nb_atomic_order_sequences = 1024; + + /* Setup the queue */ + ret = rte_event_queue_setup(eventdev_id, j, + &eventq_conf); + if (ret < 0) { + EH_LOG_ERR("Failed to setup event queue %d", + ret); + return ret; + } + } + + /* Configure event ports */ + for (j = 0; j < eventdev_config->nb_eventport; j++) { + ret = rte_event_port_setup(eventdev_id, j, NULL); + if (ret < 0) { + EH_LOG_ERR("Failed to setup event port %d", + ret); + return ret; + } + } + } + + /* Start event devices */ + for (i = 0; i < nb_eventdev; i++) { + + /* Get eventdev config */ + eventdev_config = &(em_conf->eventdev_config[i]); + + ret = rte_event_dev_start(eventdev_config->eventdev_id); + if (ret < 0) { + EH_LOG_ERR("Failed to start event device %d, %d", + i, ret); + return ret; + } + } + return 0; +} + +int32_t +eh_devs_init(struct eh_conf *conf) +{ + struct eventmode_conf *em_conf; + uint16_t port_id; + int ret; + + if (conf == NULL) { + EH_LOG_ERR("Invalid event helper configuration"); + return -EINVAL; + } + + if (conf->mode != EH_PKT_TRANSFER_MODE_EVENT) + return 0; + + if (conf->mode_params == NULL) { + EH_LOG_ERR("Invalid event mode parameters"); + return -EINVAL; + } + + /* Get eventmode conf */ + em_conf = conf->mode_params; + + /* Validate the requested config */ + ret = eh_validate_conf(em_conf); + if (ret < 0) { + EH_LOG_ERR("Failed to validate the 
requested config %d", ret); + return ret; + } + + /* Stop eth devices before setting up adapter */ + RTE_ETH_FOREACH_DEV(port_id) { + + /* Use only the ports enabled */ + if ((conf->eth_portmask & (1 << port_id)) == 0) + continue; + + rte_eth_dev_stop(port_id); + } + + /* Setup eventdev */ + ret = eh_initialize_eventdev(em_conf); + if (ret < 0) { + EH_LOG_ERR("Failed to initialize event dev %d", ret); + return ret; + } + + /* Start eth devices after setting up adapter */ + RTE_ETH_FOREACH_DEV(port_id) { + + /* Use only the ports enabled */ + if ((conf->eth_portmask & (1 << port_id)) == 0) + continue; + + ret = rte_eth_dev_start(port_id); + if (ret < 0) { + EH_LOG_ERR("Failed to start eth dev %d, %d", + port_id, ret); + return ret; + } + } + + return 0; +} + +int32_t +eh_devs_uninit(struct eh_conf *conf) +{ + struct eventmode_conf *em_conf; + uint16_t id; + int ret, i; + + if (conf == NULL) { + EH_LOG_ERR("Invalid event helper configuration"); + return -EINVAL; + } + + if (conf->mode != EH_PKT_TRANSFER_MODE_EVENT) + return 0; + + if (conf->mode_params == NULL) { + EH_LOG_ERR("Invalid event mode parameters"); + return -EINVAL; + } + + /* Get eventmode conf */ + em_conf = conf->mode_params; + + /* Stop and release event devices */ + for (i = 0; i < em_conf->nb_eventdev; i++) { + + id = em_conf->eventdev_config[i].eventdev_id; + rte_event_dev_stop(id); + + ret = rte_event_dev_close(id); + if (ret < 0) { + EH_LOG_ERR("Failed to close event dev %d, %d", id, ret); + return ret; + } + } + + return 0; +} diff --git a/examples/ipsec-secgw/event_helper.h b/examples/ipsec-secgw/event_helper.h new file mode 100644 index 0000000..040f977 --- /dev/null +++ b/examples/ipsec-secgw/event_helper.h @@ -0,0 +1,107 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C) 2020 Marvell International Ltd. + */ +#ifndef _EVENT_HELPER_H_ +#define _EVENT_HELPER_H_ + +#include <rte_log.h> + +#define RTE_LOGTYPE_EH RTE_LOGTYPE_USER4 + +#define EH_LOG_ERR(...) 
\ + RTE_LOG(ERR, EH, \ + RTE_FMT("%s() line %u: " RTE_FMT_HEAD(__VA_ARGS__ ,) "\n", \ + __func__, __LINE__, RTE_FMT_TAIL(__VA_ARGS__ ,))) + +/* Max event devices supported */ +#define EVENT_MODE_MAX_EVENT_DEVS RTE_EVENT_MAX_DEVS + +/** + * Packet transfer mode of the application + */ +enum eh_pkt_transfer_mode { + EH_PKT_TRANSFER_MODE_POLL = 0, + EH_PKT_TRANSFER_MODE_EVENT, +}; + +/* Event dev params */ +struct eventdev_params { + uint8_t eventdev_id; + uint8_t nb_eventqueue; + uint8_t nb_eventport; + uint8_t ev_queue_mode; +}; + +/* Eventmode conf data */ +struct eventmode_conf { + int nb_eventdev; + /**< No of event devs */ + struct eventdev_params eventdev_config[EVENT_MODE_MAX_EVENT_DEVS]; + /**< Per event dev conf */ + union { + RTE_STD_C11 + struct { + uint64_t sched_type : 2; + /**< Schedule type */ + }; + uint64_t u64; + } ext_params; + /**< 64 bit field to specify extended params */ +}; + +/** + * Event helper configuration + */ +struct eh_conf { + enum eh_pkt_transfer_mode mode; + /**< Packet transfer mode of the application */ + uint32_t eth_portmask; + /**< + * Mask of the eth ports to be used. This portmask would be + * checked while initializing devices using helper routines. + */ + void *mode_params; + /**< Mode specific parameters */ +}; + +/** + * Initialize event mode devices + * + * Application can call this function to get the event devices, eth devices + * and eth rx & tx adapters initialized according to the default config or + * config populated using the command line args. + * + * Application is expected to initialize the eth devices and then the event + * mode helper subsystem will stop & start eth devices according to its + * requirement. Call to this function should be done after the eth devices + * are successfully initialized. + * + * @param conf + * Event helper configuration + * @return + * - 0 on success. + * - (<0) on failure. 
+ */ +int32_t +eh_devs_init(struct eh_conf *conf); + +/** + * Release event mode devices + * + * Application can call this function to release event devices, + * eth rx & tx adapters according to the config. + * + * Call to this function should be done before application stops + * and closes eth devices. This function will not close and stop + * eth devices. + * + * @param conf + * Event helper configuration + * @return + * - 0 on success. + * - (<0) on failure. + */ +int32_t +eh_devs_uninit(struct eh_conf *conf); + +#endif /* _EVENT_HELPER_H_ */ diff --git a/examples/ipsec-secgw/meson.build b/examples/ipsec-secgw/meson.build index 6bd5b78..2415d47 100644 --- a/examples/ipsec-secgw/meson.build +++ b/examples/ipsec-secgw/meson.build @@ -6,9 +6,9 @@ # To build this example as a standalone application with an already-installed # DPDK instance, use 'make' -deps += ['security', 'lpm', 'acl', 'hash', 'ip_frag', 'ipsec'] +deps += ['security', 'lpm', 'acl', 'hash', 'ip_frag', 'ipsec', 'eventdev'] allow_experimental_apis = true sources = files( - 'esp.c', 'ipsec.c', 'ipsec_process.c', 'ipsec-secgw.c', + 'esp.c', 'event_helper.c', 'ipsec.c', 'ipsec_process.c', 'ipsec-secgw.c', 'parser.c', 'rt.c', 'sa.c', 'sad.c', 'sp4.c', 'sp6.c' ) -- 2.7.4 ^ permalink raw reply [flat|nested] 147+ messages in thread
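The sizing decisions made by `eh_set_default_conf_eventdev()` in this patch (clamp the event queue count to one per eth port plus one reserved Tx queue, and the event port count to one per lcore) can be isolated as pure logic. This is a minimal, DPDK-free sketch; `pick_eventdev_defaults` and `struct evdev_defaults` are hypothetical names introduced here:

```c
#include <assert.h>

struct evdev_defaults {
	int nb_eventqueue;
	int nb_eventport;
};

/* Model of the default-config sizing in eh_set_default_conf_eventdev():
 * start from the device maxima reported by rte_event_dev_info_get()
 * and clamp to what the application can actually use. */
static struct evdev_defaults
pick_eventdev_defaults(int max_event_queues, int max_event_ports,
		       int nb_eth_dev, int lcore_count)
{
	struct evdev_defaults d;

	d.nb_eventqueue = max_event_queues;
	d.nb_eventport = max_event_ports;

	/* One queue per eth port is enough; one extra queue is for Tx */
	if (d.nb_eventqueue > nb_eth_dev + 1)
		d.nb_eventqueue = nb_eth_dev + 1;

	/* One event port per lcore is enough */
	if (d.nb_eventport > lcore_count)
		d.nb_eventport = lcore_count;

	return d;
}
```

In `eh_initialize_eventdev()` the last of these queues (index `nb_eventqueue - 1`) is then configured as the atomic Tx-stage queue.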
* [dpdk-dev] [PATCH v5 03/15] examples/ipsec-secgw: add eventdev port-lcore link 2020-02-27 16:18 ` [dpdk-dev] [PATCH v5 " Lukasz Bartosik 2020-02-27 16:18 ` [dpdk-dev] [PATCH v5 01/15] examples/ipsec-secgw: add default rte flow for inline Rx Lukasz Bartosik 2020-02-27 16:18 ` [dpdk-dev] [PATCH v5 02/15] examples/ipsec-secgw: add framework for eventmode helper Lukasz Bartosik @ 2020-02-27 16:18 ` Lukasz Bartosik 2020-02-27 16:18 ` [dpdk-dev] [PATCH v5 04/15] examples/ipsec-secgw: add Rx adapter support Lukasz Bartosik ` (13 subsequent siblings) 16 siblings, 0 replies; 147+ messages in thread From: Lukasz Bartosik @ 2020-02-27 16:18 UTC (permalink / raw) To: Akhil Goyal, Radu Nicolau, Thomas Monjalon Cc: Anoob Joseph, Jerin Jacob, Narayana Prasad, Ankur Dwivedi, Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru, Konstantin Ananyev, dev From: Anoob Joseph <anoobj@marvell.com> Add event device port-lcore link and specify which event queues should be connected to the event port. Generate a default config for event port-lcore links if it is not specified in the configuration. This routine will check the number of available ports and then create links according to the number of cores available. This patch also adds a new entry in the eventmode conf to denote that all queues are to be linked with every port. This enables one core to receive packets from all ethernet ports. Signed-off-by: Anoob Joseph <anoobj@marvell.com> Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> --- examples/ipsec-secgw/event_helper.c | 126 ++++++++++++++++++++++++++++++++++++ examples/ipsec-secgw/event_helper.h | 33 ++++++++++ 2 files changed, 159 insertions(+) diff --git a/examples/ipsec-secgw/event_helper.c b/examples/ipsec-secgw/event_helper.c index 0c38474..c90249f 100644 --- a/examples/ipsec-secgw/event_helper.c +++ b/examples/ipsec-secgw/event_helper.c @@ -1,11 +1,33 @@ /* SPDX-License-Identifier: BSD-3-Clause * Copyright (C) 2020 Marvell International Ltd. 
*/ +#include <rte_bitmap.h> #include <rte_ethdev.h> #include <rte_eventdev.h> +#include <rte_malloc.h> #include "event_helper.h" +static inline unsigned int +eh_get_next_active_core(struct eventmode_conf *em_conf, unsigned int prev_core) +{ + unsigned int next_core; + + /* Get next active core skipping cores reserved as eth cores */ + do { + /* Get the next core */ + next_core = rte_get_next_lcore(prev_core, 0, 0); + + /* Check if we have reached max lcores */ + if (next_core == RTE_MAX_LCORE) + return next_core; + + prev_core = next_core; + } while (rte_bitmap_get(em_conf->eth_core_mask, next_core)); + + return next_core; +} + static int eh_set_default_conf_eventdev(struct eventmode_conf *em_conf) { @@ -77,6 +99,71 @@ eh_set_default_conf_eventdev(struct eventmode_conf *em_conf) } static int +eh_set_default_conf_link(struct eventmode_conf *em_conf) +{ + struct eventdev_params *eventdev_config; + struct eh_event_link_info *link; + unsigned int lcore_id = -1; + int i, link_index; + + /* + * Create a 1:1 mapping from event ports to cores. If the number + * of event ports is lesser than the cores, some cores won't + * execute worker. If there are more event ports, then some ports + * won't be used. + * + */ + + /* + * The event queue-port mapping is done according to the link. Since + * we are falling back to the default link config, enabling + * "all_ev_queue_to_ev_port" mode flag. This will map all queues + * to the port. 
+ */ + em_conf->ext_params.all_ev_queue_to_ev_port = 1; + + /* Get first event dev conf */ + eventdev_config = &(em_conf->eventdev_config[0]); + + /* Loop through the ports */ + for (i = 0; i < eventdev_config->nb_eventport; i++) { + + /* Get next active core id */ + lcore_id = eh_get_next_active_core(em_conf, + lcore_id); + + if (lcore_id == RTE_MAX_LCORE) { + /* Reached max cores */ + return 0; + } + + /* Save the current combination as one link */ + + /* Get the index */ + link_index = em_conf->nb_link; + + /* Get the corresponding link */ + link = &(em_conf->link[link_index]); + + /* Save link */ + link->eventdev_id = eventdev_config->eventdev_id; + link->event_port_id = i; + link->lcore_id = lcore_id; + + /* + * Don't set eventq_id as by default all queues + * need to be mapped to the port, which is controlled + * by the operating mode. + */ + + /* Update number of links */ + em_conf->nb_link++; + } + + return 0; +} + +static int eh_validate_conf(struct eventmode_conf *em_conf) { int ret; @@ -91,6 +178,16 @@ eh_validate_conf(struct eventmode_conf *em_conf) return ret; } + /* + * Check if links are specified. Else generate a default config for + * the event ports used. 
+ */ + if (em_conf->nb_link == 0) { + ret = eh_set_default_conf_link(em_conf); + if (ret != 0) + return ret; + } + return 0; } @@ -102,6 +199,8 @@ eh_initialize_eventdev(struct eventmode_conf *em_conf) struct rte_event_dev_config eventdev_conf; struct eventdev_params *eventdev_config; int nb_eventdev = em_conf->nb_eventdev; + struct eh_event_link_info *link; + uint8_t *queue = NULL; uint8_t eventdev_id; int nb_eventqueue; uint8_t i, j; @@ -199,6 +298,33 @@ eh_initialize_eventdev(struct eventmode_conf *em_conf) } } + /* Make event queue - event port link */ + for (j = 0; j < em_conf->nb_link; j++) { + + /* Get link info */ + link = &(em_conf->link[j]); + + /* Get event dev ID */ + eventdev_id = link->eventdev_id; + + /* + * If "all_ev_queue_to_ev_port" params flag is selected, all + * queues need to be mapped to the port. + */ + if (em_conf->ext_params.all_ev_queue_to_ev_port) + queue = NULL; + else + queue = &(link->eventq_id); + + /* Link queue to port */ + ret = rte_event_port_link(eventdev_id, link->event_port_id, + queue, NULL, 1); + if (ret < 0) { + EH_LOG_ERR("Failed to link event port %d", ret); + return ret; + } + } + /* Start event devices */ for (i = 0; i < nb_eventdev; i++) { diff --git a/examples/ipsec-secgw/event_helper.h b/examples/ipsec-secgw/event_helper.h index 040f977..c8afc84 100644 --- a/examples/ipsec-secgw/event_helper.h +++ b/examples/ipsec-secgw/event_helper.h @@ -16,6 +16,13 @@ /* Max event devices supported */ #define EVENT_MODE_MAX_EVENT_DEVS RTE_EVENT_MAX_DEVS +/* Max event queues supported per event device */ +#define EVENT_MODE_MAX_EVENT_QUEUES_PER_DEV RTE_EVENT_MAX_QUEUES_PER_DEV + +/* Max event-lcore links */ +#define EVENT_MODE_MAX_LCORE_LINKS \ + (EVENT_MODE_MAX_EVENT_DEVS * EVENT_MODE_MAX_EVENT_QUEUES_PER_DEV) + /** * Packet transfer mode of the application */ @@ -32,17 +39,43 @@ struct eventdev_params { uint8_t ev_queue_mode; }; +/** + * Event-lcore link configuration + */ +struct eh_event_link_info { + uint8_t eventdev_id; + 
/**< Event device ID */ + uint8_t event_port_id; + /**< Event port ID */ + uint8_t eventq_id; + /**< Event queue to be linked to the port */ + uint8_t lcore_id; + /**< Lcore to be polling on this port */ +}; + /* Eventmode conf data */ struct eventmode_conf { int nb_eventdev; /**< No of event devs */ struct eventdev_params eventdev_config[EVENT_MODE_MAX_EVENT_DEVS]; /**< Per event dev conf */ + uint8_t nb_link; + /**< No of links */ + struct eh_event_link_info + link[EVENT_MODE_MAX_LCORE_LINKS]; + /**< Per link conf */ + struct rte_bitmap *eth_core_mask; + /**< Core mask of cores to be used for software Rx and Tx */ union { RTE_STD_C11 struct { uint64_t sched_type : 2; /**< Schedule type */ + uint64_t all_ev_queue_to_ev_port : 1; + /**< + * When enabled, all event queues need to be mapped to + * each event port + */ }; uint64_t u64; } ext_params; -- 2.7.4 ^ permalink raw reply [flat|nested] 147+ messages in thread
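The default link generation in `eh_set_default_conf_link()` (one event port per active worker lcore, skipping cores reserved for eth Rx/Tx, leaving surplus ports unused when cores run out) can be sketched as standalone logic. The array-based core model and the name `make_default_links` are stand-ins for `rte_get_next_lcore()` plus the `eth_core_mask` bitmap check:

```c
#include <assert.h>

#define MAX_LCORE 8 /* stand-in for RTE_MAX_LCORE */

/* Model of eh_set_default_conf_link(): walk the usable lcores and
 * assign one event port to each until ports or cores run out.
 * lcore_usable[i] is 1 when lcore i is enabled and not an eth core.
 * Fills port_to_lcore[] (indexed by event port) and returns the
 * number of links created. */
static int
make_default_links(const int lcore_usable[MAX_LCORE], int nb_eventport,
		   int port_to_lcore[])
{
	int nb_link = 0;
	int lcore = 0;
	int port;

	for (port = 0; port < nb_eventport; port++) {
		/* Get the next active, non-eth core */
		while (lcore < MAX_LCORE && !lcore_usable[lcore])
			lcore++;
		if (lcore == MAX_LCORE)
			break; /* fewer cores than ports: leave ports unused */
		port_to_lcore[port] = lcore++;
		nb_link++;
	}
	return nb_link;
}
```

Because the default config also sets `all_ev_queue_to_ev_port`, each generated link later passes `queue = NULL` to `rte_event_port_link()`, which maps all event queues to that port.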
* [dpdk-dev] [PATCH v5 04/15] examples/ipsec-secgw: add Rx adapter support 2020-02-27 16:18 ` [dpdk-dev] [PATCH v5 " Lukasz Bartosik ` (2 preceding siblings ...) 2020-02-27 16:18 ` [dpdk-dev] [PATCH v5 03/15] examples/ipsec-secgw: add eventdev port-lcore link Lukasz Bartosik @ 2020-02-27 16:18 ` Lukasz Bartosik 2020-02-27 16:18 ` [dpdk-dev] [PATCH v5 05/15] examples/ipsec-secgw: add Tx " Lukasz Bartosik ` (12 subsequent siblings) 16 siblings, 0 replies; 147+ messages in thread From: Lukasz Bartosik @ 2020-02-27 16:18 UTC (permalink / raw) To: Akhil Goyal, Radu Nicolau, Thomas Monjalon Cc: Anoob Joseph, Jerin Jacob, Narayana Prasad, Ankur Dwivedi, Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru, Konstantin Ananyev, dev From: Anoob Joseph <anoobj@marvell.com> Add Rx adapter support. The event helper init routine will initialize the Rx adapter according to the configuration. If Rx adapter config is not present it will generate a default config. If there are enough event queues available it will map eth ports and event queues 1:1 (one eth port will be connected to one event queue). Otherwise it will map all eth ports to one event queue. 
Signed-off-by: Anoob Joseph <anoobj@marvell.com> Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> --- examples/ipsec-secgw/event_helper.c | 273 +++++++++++++++++++++++++++++++++++- examples/ipsec-secgw/event_helper.h | 29 ++++ 2 files changed, 301 insertions(+), 1 deletion(-) diff --git a/examples/ipsec-secgw/event_helper.c b/examples/ipsec-secgw/event_helper.c index c90249f..2653e86 100644 --- a/examples/ipsec-secgw/event_helper.c +++ b/examples/ipsec-secgw/event_helper.c @@ -4,10 +4,58 @@ #include <rte_bitmap.h> #include <rte_ethdev.h> #include <rte_eventdev.h> +#include <rte_event_eth_rx_adapter.h> #include <rte_malloc.h> +#include <stdbool.h> #include "event_helper.h" +static int +eh_get_enabled_cores(struct rte_bitmap *eth_core_mask) +{ + int i, count = 0; + + RTE_LCORE_FOREACH(i) { + /* Check if this core is enabled in core mask*/ + if (rte_bitmap_get(eth_core_mask, i)) { + /* Found enabled core */ + count++; + } + } + return count; +} + +static inline unsigned int +eh_get_next_eth_core(struct eventmode_conf *em_conf) +{ + static unsigned int prev_core = -1; + unsigned int next_core; + + /* + * Make sure we have at least one eth core running, else the following + * logic would lead to an infinite loop. 
+ */ + if (eh_get_enabled_cores(em_conf->eth_core_mask) == 0) { + EH_LOG_ERR("No enabled eth core found"); + return RTE_MAX_LCORE; + } + + /* Only some cores are marked as eth cores, skip others */ + do { + /* Get the next core */ + next_core = rte_get_next_lcore(prev_core, 0, 1); + + /* Check if we have reached max lcores */ + if (next_core == RTE_MAX_LCORE) + return next_core; + + /* Update prev_core */ + prev_core = next_core; + } while (!(rte_bitmap_get(em_conf->eth_core_mask, next_core))); + + return next_core; +} + static inline unsigned int eh_get_next_active_core(struct eventmode_conf *em_conf, unsigned int prev_core) { @@ -164,6 +212,82 @@ eh_set_default_conf_link(struct eventmode_conf *em_conf) } static int +eh_set_default_conf_rx_adapter(struct eventmode_conf *em_conf) +{ + struct rx_adapter_connection_info *conn; + struct eventdev_params *eventdev_config; + struct rx_adapter_conf *adapter; + bool single_ev_queue = false; + int eventdev_id; + int nb_eth_dev; + int adapter_id; + int conn_id; + int i; + + /* Create one adapter with eth queues mapped to event queue(s) */ + + if (em_conf->nb_eventdev == 0) { + EH_LOG_ERR("No event devs registered"); + return -EINVAL; + } + + /* Get the number of eth devs */ + nb_eth_dev = rte_eth_dev_count_avail(); + + /* Use the first event dev */ + eventdev_config = &(em_conf->eventdev_config[0]); + + /* Get eventdev ID */ + eventdev_id = eventdev_config->eventdev_id; + adapter_id = 0; + + /* Get adapter conf */ + adapter = &(em_conf->rx_adapter[adapter_id]); + + /* Set adapter conf */ + adapter->eventdev_id = eventdev_id; + adapter->adapter_id = adapter_id; + adapter->rx_core_id = eh_get_next_eth_core(em_conf); + + /* + * Map all queues of eth device (port) to an event queue. If there + * are more event queues than eth ports then create 1:1 mapping. + * Otherwise map all eth ports to a single event queue. 
+ */ + if (nb_eth_dev > eventdev_config->nb_eventqueue) + single_ev_queue = true; + + for (i = 0; i < nb_eth_dev; i++) { + + /* Use only the ports enabled */ + if ((em_conf->eth_portmask & (1 << i)) == 0) + continue; + + /* Get the connection id */ + conn_id = adapter->nb_connections; + + /* Get the connection */ + conn = &(adapter->conn[conn_id]); + + /* Set mapping between eth ports & event queues*/ + conn->ethdev_id = i; + conn->eventq_id = single_ev_queue ? 0 : i; + + /* Add all eth queues eth port to event queue */ + conn->ethdev_rx_qid = -1; + + /* Update no of connections */ + adapter->nb_connections++; + + } + + /* We have setup one adapter */ + em_conf->nb_rx_adapter = 1; + + return 0; +} + +static int eh_validate_conf(struct eventmode_conf *em_conf) { int ret; @@ -188,6 +312,16 @@ eh_validate_conf(struct eventmode_conf *em_conf) return ret; } + /* + * Check if rx adapters are specified. Else generate a default config + * with one rx adapter and all eth queues - event queue mapped. 
+ */ + if (em_conf->nb_rx_adapter == 0) { + ret = eh_set_default_conf_rx_adapter(em_conf); + if (ret != 0) + return ret; + } + return 0; } @@ -341,6 +475,104 @@ eh_initialize_eventdev(struct eventmode_conf *em_conf) return 0; } +static int +eh_rx_adapter_configure(struct eventmode_conf *em_conf, + struct rx_adapter_conf *adapter) +{ + struct rte_event_eth_rx_adapter_queue_conf queue_conf = {0}; + struct rte_event_dev_info evdev_default_conf = {0}; + struct rte_event_port_conf port_conf = {0}; + struct rx_adapter_connection_info *conn; + uint8_t eventdev_id; + uint32_t service_id; + int ret; + int j; + + /* Get event dev ID */ + eventdev_id = adapter->eventdev_id; + + /* Get default configuration of event dev */ + ret = rte_event_dev_info_get(eventdev_id, &evdev_default_conf); + if (ret < 0) { + EH_LOG_ERR("Failed to get event dev info %d", ret); + return ret; + } + + /* Setup port conf */ + port_conf.new_event_threshold = 1200; + port_conf.dequeue_depth = + evdev_default_conf.max_event_port_dequeue_depth; + port_conf.enqueue_depth = + evdev_default_conf.max_event_port_enqueue_depth; + + /* Create Rx adapter */ + ret = rte_event_eth_rx_adapter_create(adapter->adapter_id, + adapter->eventdev_id, &port_conf); + if (ret < 0) { + EH_LOG_ERR("Failed to create rx adapter %d", ret); + return ret; + } + + /* Setup various connections in the adapter */ + for (j = 0; j < adapter->nb_connections; j++) { + /* Get connection */ + conn = &(adapter->conn[j]); + + /* Setup queue conf */ + queue_conf.ev.queue_id = conn->eventq_id; + queue_conf.ev.sched_type = em_conf->ext_params.sched_type; + queue_conf.ev.event_type = RTE_EVENT_TYPE_ETHDEV; + + /* Add queue to the adapter */ + ret = rte_event_eth_rx_adapter_queue_add(adapter->adapter_id, + conn->ethdev_id, conn->ethdev_rx_qid, + &queue_conf); + if (ret < 0) { + EH_LOG_ERR("Failed to add eth queue to rx adapter %d", + ret); + return ret; + } + } + + /* Get the service ID used by rx adapter */ + ret = 
rte_event_eth_rx_adapter_service_id_get(adapter->adapter_id, + &service_id); + if (ret != -ESRCH && ret < 0) { + EH_LOG_ERR("Failed to get service id used by rx adapter %d", + ret); + return ret; + } + + rte_service_set_runstate_mapped_check(service_id, 0); + + /* Start adapter */ + ret = rte_event_eth_rx_adapter_start(adapter->adapter_id); + if (ret < 0) { + EH_LOG_ERR("Failed to start rx adapter %d", ret); + return ret; + } + + return 0; +} + +static int +eh_initialize_rx_adapter(struct eventmode_conf *em_conf) +{ + struct rx_adapter_conf *adapter; + int i, ret; + + /* Configure rx adapters */ + for (i = 0; i < em_conf->nb_rx_adapter; i++) { + adapter = &(em_conf->rx_adapter[i]); + ret = eh_rx_adapter_configure(em_conf, adapter); + if (ret < 0) { + EH_LOG_ERR("Failed to configure rx adapter %d", ret); + return ret; + } + } + return 0; +} + int32_t eh_devs_init(struct eh_conf *conf) { @@ -364,6 +596,9 @@ eh_devs_init(struct eh_conf *conf) /* Get eventmode conf */ em_conf = conf->mode_params; + /* Eventmode conf would need eth portmask */ + em_conf->eth_portmask = conf->eth_portmask; + /* Validate the requested config */ ret = eh_validate_conf(em_conf); if (ret < 0) { @@ -388,6 +623,13 @@ eh_devs_init(struct eh_conf *conf) return ret; } + /* Setup Rx adapter */ + ret = eh_initialize_rx_adapter(em_conf); + if (ret < 0) { + EH_LOG_ERR("Failed to initialize rx adapter %d", ret); + return ret; + } + /* Start eth devices after setting up adapter */ RTE_ETH_FOREACH_DEV(port_id) { @@ -410,8 +652,8 @@ int32_t eh_devs_uninit(struct eh_conf *conf) { struct eventmode_conf *em_conf; + int ret, i, j; uint16_t id; - int ret, i; if (conf == NULL) { EH_LOG_ERR("Invalid event helper configuration"); @@ -429,6 +671,35 @@ eh_devs_uninit(struct eh_conf *conf) /* Get eventmode conf */ em_conf = conf->mode_params; + /* Stop and release rx adapters */ + for (i = 0; i < em_conf->nb_rx_adapter; i++) { + + id = em_conf->rx_adapter[i].adapter_id; + ret = rte_event_eth_rx_adapter_stop(id); + 
if (ret < 0) { + EH_LOG_ERR("Failed to stop rx adapter %d", ret); + return ret; + } + + for (j = 0; j < em_conf->rx_adapter[i].nb_connections; j++) { + + ret = rte_event_eth_rx_adapter_queue_del(id, + em_conf->rx_adapter[i].conn[j].ethdev_id, -1); + if (ret < 0) { + EH_LOG_ERR( + "Failed to remove rx adapter queues %d", + ret); + return ret; + } + } + + ret = rte_event_eth_rx_adapter_free(id); + if (ret < 0) { + EH_LOG_ERR("Failed to free rx adapter %d", ret); + return ret; + } + } + /* Stop and release event devices */ for (i = 0; i < em_conf->nb_eventdev; i++) { diff --git a/examples/ipsec-secgw/event_helper.h b/examples/ipsec-secgw/event_helper.h index c8afc84..00ce14e 100644 --- a/examples/ipsec-secgw/event_helper.h +++ b/examples/ipsec-secgw/event_helper.h @@ -16,6 +16,12 @@ /* Max event devices supported */ #define EVENT_MODE_MAX_EVENT_DEVS RTE_EVENT_MAX_DEVS +/* Max Rx adapters supported */ +#define EVENT_MODE_MAX_RX_ADAPTERS RTE_EVENT_MAX_DEVS + +/* Max Rx adapter connections */ +#define EVENT_MODE_MAX_CONNECTIONS_PER_ADAPTER 16 + /* Max event queues supported per event device */ #define EVENT_MODE_MAX_EVENT_QUEUES_PER_DEV RTE_EVENT_MAX_QUEUES_PER_DEV @@ -53,12 +59,33 @@ struct eh_event_link_info { /**< Lcore to be polling on this port */ }; +/* Rx adapter connection info */ +struct rx_adapter_connection_info { + uint8_t ethdev_id; + uint8_t eventq_id; + int32_t ethdev_rx_qid; +}; + +/* Rx adapter conf */ +struct rx_adapter_conf { + int32_t eventdev_id; + int32_t adapter_id; + uint32_t rx_core_id; + uint8_t nb_connections; + struct rx_adapter_connection_info + conn[EVENT_MODE_MAX_CONNECTIONS_PER_ADAPTER]; +}; + /* Eventmode conf data */ struct eventmode_conf { int nb_eventdev; /**< No of event devs */ struct eventdev_params eventdev_config[EVENT_MODE_MAX_EVENT_DEVS]; /**< Per event dev conf */ + uint8_t nb_rx_adapter; + /**< No of Rx adapters */ + struct rx_adapter_conf rx_adapter[EVENT_MODE_MAX_RX_ADAPTERS]; + /**< Rx adapter conf */ uint8_t nb_link; /**< 
No of links */ struct eh_event_link_info @@ -66,6 +93,8 @@ struct eventmode_conf { /**< Per link conf */ struct rte_bitmap *eth_core_mask; /**< Core mask of cores to be used for software Rx and Tx */ + uint32_t eth_portmask; + /**< Mask of the eth ports to be used */ union { RTE_STD_C11 struct { -- 2.7.4 ^ permalink raw reply [flat|nested] 147+ messages in thread
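The default Rx adapter config above maps each eth port enabled in `eth_portmask` to an event queue 1:1. A minimal self-contained sketch of that portmask-to-connection mapping (plain C, no DPDK dependency; the struct and function names here are illustrative, not the ones in event_helper.h):

```c
#include <assert.h>
#include <stdint.h>

#define MAX_CONNECTIONS 16

/* Illustrative mirror of rx_adapter_connection_info */
struct conn_info {
	uint8_t ethdev_id;
	int32_t eventq_id;
};

/* Map every eth port enabled in the portmask to one connection,
 * eth port i -> event queue i (the 1:1 scheme of the default Rx
 * adapter config). Returns the number of connections created. */
static int
map_enabled_ports(uint32_t portmask, int nb_eth_dev,
		  struct conn_info conn[MAX_CONNECTIONS])
{
	int i, nb_conn = 0;

	for (i = 0; i < nb_eth_dev && nb_conn < MAX_CONNECTIONS; i++) {
		if ((portmask & (1U << i)) == 0)
			continue; /* port not enabled */
		conn[nb_conn].ethdev_id = (uint8_t)i;
		conn[nb_conn].eventq_id = i; /* 1:1 port-to-queue */
		nb_conn++;
	}
	return nb_conn;
}
```

With portmask 0xB and four eth devs, ports 0, 1 and 3 get connections while port 2 is skipped, matching the `(eth_portmask & (1 << i))` filtering used throughout the helper.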
* [dpdk-dev] [PATCH v5 05/15] examples/ipsec-secgw: add Tx adapter support 2020-02-27 16:18 ` [dpdk-dev] [PATCH v5 " Lukasz Bartosik ` (3 preceding siblings ...) 2020-02-27 16:18 ` [dpdk-dev] [PATCH v5 04/15] examples/ipsec-secgw: add Rx adapter support Lukasz Bartosik @ 2020-02-27 16:18 ` Lukasz Bartosik 2020-02-27 16:18 ` [dpdk-dev] [PATCH v5 06/15] examples/ipsec-secgw: add routines to display config Lukasz Bartosik ` (11 subsequent siblings) 16 siblings, 0 replies; 147+ messages in thread From: Lukasz Bartosik @ 2020-02-27 16:18 UTC (permalink / raw) To: Akhil Goyal, Radu Nicolau, Thomas Monjalon Cc: Anoob Joseph, Jerin Jacob, Narayana Prasad, Ankur Dwivedi, Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru, Konstantin Ananyev, dev From: Anoob Joseph <anoobj@marvell.com> Add Tx adapter support. The event helper init routine will initialize the Tx adapter according to the configuration. If Tx adapter config is not present it will generate a default config. Signed-off-by: Anoob Joseph <anoobj@marvell.com> Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> --- examples/ipsec-secgw/event_helper.c | 313 ++++++++++++++++++++++++++++++++++++ examples/ipsec-secgw/event_helper.h | 48 ++++++ 2 files changed, 361 insertions(+) diff --git a/examples/ipsec-secgw/event_helper.c b/examples/ipsec-secgw/event_helper.c index 2653e86..fca1e08 100644 --- a/examples/ipsec-secgw/event_helper.c +++ b/examples/ipsec-secgw/event_helper.c @@ -5,6 +5,7 @@ #include <rte_ethdev.h> #include <rte_eventdev.h> #include <rte_event_eth_rx_adapter.h> +#include <rte_event_eth_tx_adapter.h> #include <rte_malloc.h> #include <stdbool.h> @@ -76,6 +77,22 @@ eh_get_next_active_core(struct eventmode_conf *em_conf, unsigned int prev_core) return next_core; } +static struct eventdev_params * +eh_get_eventdev_params(struct eventmode_conf *em_conf, uint8_t eventdev_id) +{ + int i; + + for (i = 0; i < em_conf->nb_eventdev; i++) { + if (em_conf->eventdev_config[i].eventdev_id == eventdev_id) + break; + } 
+ + /* No match */ + if (i == em_conf->nb_eventdev) + return NULL; + + return &(em_conf->eventdev_config[i]); +} static int eh_set_default_conf_eventdev(struct eventmode_conf *em_conf) { @@ -288,6 +305,95 @@ eh_set_default_conf_rx_adapter(struct eventmode_conf *em_conf) } static int +eh_set_default_conf_tx_adapter(struct eventmode_conf *em_conf) +{ + struct tx_adapter_connection_info *conn; + struct eventdev_params *eventdev_config; + struct tx_adapter_conf *tx_adapter; + int eventdev_id; + int adapter_id; + int nb_eth_dev; + int conn_id; + int i; + + /* + * Create one Tx adapter with all eth queues mapped to event queues + * 1:1. + */ + + if (em_conf->nb_eventdev == 0) { + EH_LOG_ERR("No event devs registered"); + return -EINVAL; + } + + /* Get the number of eth devs */ + nb_eth_dev = rte_eth_dev_count_avail(); + + /* Use the first event dev */ + eventdev_config = &(em_conf->eventdev_config[0]); + + /* Get eventdev ID */ + eventdev_id = eventdev_config->eventdev_id; + adapter_id = 0; + + /* Get adapter conf */ + tx_adapter = &(em_conf->tx_adapter[adapter_id]); + + /* Set adapter conf */ + tx_adapter->eventdev_id = eventdev_id; + tx_adapter->adapter_id = adapter_id; + + /* TODO: Tx core is required only when internal port is not present */ + tx_adapter->tx_core_id = eh_get_next_eth_core(em_conf); + + /* + * Application uses one event queue per adapter for submitting + * packets for Tx. Reserve the last queue available and decrement + * the total available event queues for this + */ + + /* Queue numbers start at 0 */ + tx_adapter->tx_ev_queue = eventdev_config->nb_eventqueue - 1; + + /* + * Map all Tx queues of the eth device (port) to the event device. + */ + + /* Set defaults for connections */ + + /* + * One eth device (port) is one connection. Map all Tx queues + * of the device to the Tx adapter. 
+ */ + + for (i = 0; i < nb_eth_dev; i++) { + + /* Use only the ports enabled */ + if ((em_conf->eth_portmask & (1 << i)) == 0) + continue; + + /* Get the connection id */ + conn_id = tx_adapter->nb_connections; + + /* Get the connection */ + conn = &(tx_adapter->conn[conn_id]); + + /* Add ethdev to connections */ + conn->ethdev_id = i; + + /* Add all eth tx queues to adapter */ + conn->ethdev_tx_qid = -1; + + /* Update no of connections */ + tx_adapter->nb_connections++; + } + + /* We have setup one adapter */ + em_conf->nb_tx_adapter = 1; + return 0; +} + +static int eh_validate_conf(struct eventmode_conf *em_conf) { int ret; @@ -322,6 +428,16 @@ eh_validate_conf(struct eventmode_conf *em_conf) return ret; } + /* + * Check if tx adapters are specified. Else generate a default config + * with one tx adapter. + */ + if (em_conf->nb_tx_adapter == 0) { + ret = eh_set_default_conf_tx_adapter(em_conf); + if (ret != 0) + return ret; + } + return 0; } @@ -573,6 +689,133 @@ eh_initialize_rx_adapter(struct eventmode_conf *em_conf) return 0; } +static int +eh_tx_adapter_configure(struct eventmode_conf *em_conf, + struct tx_adapter_conf *adapter) +{ + struct rte_event_dev_info evdev_default_conf = {0}; + struct rte_event_port_conf port_conf = {0}; + struct tx_adapter_connection_info *conn; + struct eventdev_params *eventdev_config; + uint8_t tx_port_id = 0; + uint8_t eventdev_id; + uint32_t service_id; + int ret, j; + + /* Get event dev ID */ + eventdev_id = adapter->eventdev_id; + + /* Get event device conf */ + eventdev_config = eh_get_eventdev_params(em_conf, eventdev_id); + + /* Create Tx adapter */ + + /* Get default configuration of event dev */ + ret = rte_event_dev_info_get(eventdev_id, &evdev_default_conf); + if (ret < 0) { + EH_LOG_ERR("Failed to get event dev info %d", ret); + return ret; + } + + /* Setup port conf */ + port_conf.new_event_threshold = + evdev_default_conf.max_num_events; + port_conf.dequeue_depth = + 
evdev_default_conf.max_event_port_dequeue_depth; + port_conf.enqueue_depth = + evdev_default_conf.max_event_port_enqueue_depth; + + /* Create adapter */ + ret = rte_event_eth_tx_adapter_create(adapter->adapter_id, + adapter->eventdev_id, &port_conf); + if (ret < 0) { + EH_LOG_ERR("Failed to create tx adapter %d", ret); + return ret; + } + + /* Setup various connections in the adapter */ + for (j = 0; j < adapter->nb_connections; j++) { + + /* Get connection */ + conn = &(adapter->conn[j]); + + /* Add queue to the adapter */ + ret = rte_event_eth_tx_adapter_queue_add(adapter->adapter_id, + conn->ethdev_id, conn->ethdev_tx_qid); + if (ret < 0) { + EH_LOG_ERR("Failed to add eth queue to tx adapter %d", + ret); + return ret; + } + } + + /* Setup Tx queue & port */ + + /* Get event port used by the adapter */ + ret = rte_event_eth_tx_adapter_event_port_get( + adapter->adapter_id, &tx_port_id); + if (ret) { + EH_LOG_ERR("Failed to get tx adapter port id %d", ret); + return ret; + } + + /* + * Tx event queue is reserved for Tx adapter. 
Unlink this queue + * from all other ports + * + */ + for (j = 0; j < eventdev_config->nb_eventport; j++) { + rte_event_port_unlink(eventdev_id, j, + &(adapter->tx_ev_queue), 1); + } + + /* Link Tx event queue to Tx port */ + ret = rte_event_port_link(eventdev_id, tx_port_id, + &(adapter->tx_ev_queue), NULL, 1); + if (ret != 1) { + EH_LOG_ERR("Failed to link event queue to port"); + return ret; + } + + /* Get the service ID used by Tx adapter */ + ret = rte_event_eth_tx_adapter_service_id_get(adapter->adapter_id, + &service_id); + if (ret != -ESRCH && ret < 0) { + EH_LOG_ERR("Failed to get service id used by tx adapter %d", + ret); + return ret; + } + + rte_service_set_runstate_mapped_check(service_id, 0); + + /* Start adapter */ + ret = rte_event_eth_tx_adapter_start(adapter->adapter_id); + if (ret < 0) { + EH_LOG_ERR("Failed to start tx adapter %d", ret); + return ret; + } + + return 0; +} + +static int +eh_initialize_tx_adapter(struct eventmode_conf *em_conf) +{ + struct tx_adapter_conf *adapter; + int i, ret; + + /* Configure Tx adapters */ + for (i = 0; i < em_conf->nb_tx_adapter; i++) { + adapter = &(em_conf->tx_adapter[i]); + ret = eh_tx_adapter_configure(em_conf, adapter); + if (ret < 0) { + EH_LOG_ERR("Failed to configure tx adapter %d", ret); + return ret; + } + } + return 0; +} + int32_t eh_devs_init(struct eh_conf *conf) { @@ -630,6 +873,13 @@ eh_devs_init(struct eh_conf *conf) return ret; } + /* Setup Tx adapter */ + ret = eh_initialize_tx_adapter(em_conf); + if (ret < 0) { + EH_LOG_ERR("Failed to initialize tx adapter %d", ret); + return ret; + } + /* Start eth devices after setting up adapter */ RTE_ETH_FOREACH_DEV(port_id) { @@ -713,5 +963,68 @@ eh_devs_uninit(struct eh_conf *conf) } } + /* Stop and release tx adapters */ + for (i = 0; i < em_conf->nb_tx_adapter; i++) { + + id = em_conf->tx_adapter[i].adapter_id; + ret = rte_event_eth_tx_adapter_stop(id); + if (ret < 0) { + EH_LOG_ERR("Failed to stop tx adapter %d", ret); + return ret; + } + + for 
(j = 0; j < em_conf->tx_adapter[i].nb_connections; j++) { + + ret = rte_event_eth_tx_adapter_queue_del(id, + em_conf->tx_adapter[i].conn[j].ethdev_id, -1); + if (ret < 0) { + EH_LOG_ERR( + "Failed to remove tx adapter queues %d", + ret); + return ret; + } + } + + ret = rte_event_eth_tx_adapter_free(id); + if (ret < 0) { + EH_LOG_ERR("Failed to free tx adapter %d", ret); + return ret; + } + } + return 0; } + +uint8_t +eh_get_tx_queue(struct eh_conf *conf, uint8_t eventdev_id) +{ + struct eventdev_params *eventdev_config; + struct eventmode_conf *em_conf; + + if (conf == NULL) { + EH_LOG_ERR("Invalid event helper configuration"); + return -EINVAL; + } + + if (conf->mode_params == NULL) { + EH_LOG_ERR("Invalid event mode parameters"); + return -EINVAL; + } + + /* Get eventmode conf */ + em_conf = conf->mode_params; + + /* Get event device conf */ + eventdev_config = eh_get_eventdev_params(em_conf, eventdev_id); + + if (eventdev_config == NULL) { + EH_LOG_ERR("Failed to read eventdev config"); + return -EINVAL; + } + + /* + * The last queue is reserved to be used as atomic queue for the + * last stage (eth packet tx stage) + */ + return eventdev_config->nb_eventqueue - 1; +} diff --git a/examples/ipsec-secgw/event_helper.h b/examples/ipsec-secgw/event_helper.h index 00ce14e..913b172 100644 --- a/examples/ipsec-secgw/event_helper.h +++ b/examples/ipsec-secgw/event_helper.h @@ -19,9 +19,15 @@ /* Max Rx adapters supported */ #define EVENT_MODE_MAX_RX_ADAPTERS RTE_EVENT_MAX_DEVS +/* Max Tx adapters supported */ +#define EVENT_MODE_MAX_TX_ADAPTERS RTE_EVENT_MAX_DEVS + /* Max Rx adapter connections */ #define EVENT_MODE_MAX_CONNECTIONS_PER_ADAPTER 16 +/* Max Tx adapter connections */ +#define EVENT_MODE_MAX_CONNECTIONS_PER_TX_ADAPTER 16 + /* Max event queues supported per event device */ #define EVENT_MODE_MAX_EVENT_QUEUES_PER_DEV RTE_EVENT_MAX_QUEUES_PER_DEV @@ -29,6 +35,9 @@ #define EVENT_MODE_MAX_LCORE_LINKS \ (EVENT_MODE_MAX_EVENT_DEVS * 
EVENT_MODE_MAX_EVENT_QUEUES_PER_DEV) +/* Max adapters that one Tx core can handle */ +#define EVENT_MODE_MAX_ADAPTERS_PER_TX_CORE EVENT_MODE_MAX_TX_ADAPTERS + /** * Packet transfer mode of the application */ @@ -76,6 +85,23 @@ struct rx_adapter_conf { conn[EVENT_MODE_MAX_CONNECTIONS_PER_ADAPTER]; }; +/* Tx adapter connection info */ +struct tx_adapter_connection_info { + uint8_t ethdev_id; + int32_t ethdev_tx_qid; +}; + +/* Tx adapter conf */ +struct tx_adapter_conf { + int32_t eventdev_id; + int32_t adapter_id; + uint32_t tx_core_id; + uint8_t nb_connections; + struct tx_adapter_connection_info + conn[EVENT_MODE_MAX_CONNECTIONS_PER_TX_ADAPTER]; + uint8_t tx_ev_queue; +}; + /* Eventmode conf data */ struct eventmode_conf { int nb_eventdev; @@ -86,6 +112,10 @@ struct eventmode_conf { /**< No of Rx adapters */ struct rx_adapter_conf rx_adapter[EVENT_MODE_MAX_RX_ADAPTERS]; /**< Rx adapter conf */ + uint8_t nb_tx_adapter; + /**< No of Tx adapters */ + struct tx_adapter_conf tx_adapter[EVENT_MODE_MAX_TX_ADAPTERS]; + /** Tx adapter conf */ uint8_t nb_link; /**< No of links */ struct eh_event_link_info @@ -166,4 +196,22 @@ eh_devs_init(struct eh_conf *conf); int32_t eh_devs_uninit(struct eh_conf *conf); +/** + * Get eventdev tx queue + * + * If the application uses event device which does not support internal port + * then it needs to submit the events to a Tx queue before final transmission. + * This Tx queue will be created internally by the eventmode helper subsystem, + * and application will need its queue ID when it runs the execution loop. + * + * @param mode_conf + * Event helper configuration + * @param eventdev_id + * Event device ID + * @return + * Tx queue ID + */ +uint8_t +eh_get_tx_queue(struct eh_conf *conf, uint8_t eventdev_id); + #endif /* _EVENT_HELPER_H_ */ -- 2.7.4 ^ permalink raw reply [flat|nested] 147+ messages in thread
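The convention behind `eh_get_tx_queue()` above is that the last event queue of each event device is reserved for the Tx adapter. A tiny stand-alone sketch of that lookup (plain C; the real helper reads `nb_eventqueue` from `eventdev_params` and validates the conf pointers first):

```c
#include <assert.h>
#include <stdint.h>

/* The last event queue of the device is reserved as the atomic
 * queue feeding the Tx adapter, so the queue a worker must target
 * for transmission is simply nb_eventqueue - 1. A worker would set
 * ev.queue_id to this value before the final enqueue. */
static uint8_t
tx_ev_queue_get(uint8_t nb_eventqueue)
{
	return nb_eventqueue - 1;
}
```

This is also why `eh_set_default_conf_tx_adapter()` decrements the usable queue count: queues 0..n-2 carry regular traffic stages, queue n-1 feeds the Tx adapter's event port.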
* [dpdk-dev] [PATCH v5 06/15] examples/ipsec-secgw: add routines to display config 2020-02-27 16:18 ` [dpdk-dev] [PATCH v5 " Lukasz Bartosik ` (4 preceding siblings ...) 2020-02-27 16:18 ` [dpdk-dev] [PATCH v5 05/15] examples/ipsec-secgw: add Tx " Lukasz Bartosik @ 2020-02-27 16:18 ` Lukasz Bartosik 2020-02-27 16:18 ` [dpdk-dev] [PATCH v5 07/15] examples/ipsec-secgw: add routines to launch workers Lukasz Bartosik ` (10 subsequent siblings) 16 siblings, 0 replies; 147+ messages in thread From: Lukasz Bartosik @ 2020-02-27 16:18 UTC (permalink / raw) To: Akhil Goyal, Radu Nicolau, Thomas Monjalon Cc: Anoob Joseph, Jerin Jacob, Narayana Prasad, Ankur Dwivedi, Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru, Konstantin Ananyev, dev From: Anoob Joseph <anoobj@marvell.com> Add routines to display the eventmode configuration and provide an overview of the devices used. Signed-off-by: Anoob Joseph <anoobj@marvell.com> Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> --- examples/ipsec-secgw/event_helper.c | 207 ++++++++++++++++++++++++++++++++++++ examples/ipsec-secgw/event_helper.h | 14 +++ 2 files changed, 221 insertions(+) diff --git a/examples/ipsec-secgw/event_helper.c b/examples/ipsec-secgw/event_helper.c index fca1e08..d09bf7d 100644 --- a/examples/ipsec-secgw/event_helper.c +++ b/examples/ipsec-secgw/event_helper.c @@ -816,6 +816,210 @@ eh_initialize_tx_adapter(struct eventmode_conf *em_conf) return 0; } +static void +eh_display_operating_mode(struct eventmode_conf *em_conf) +{ + char sched_types[][32] = { + "RTE_SCHED_TYPE_ORDERED", + "RTE_SCHED_TYPE_ATOMIC", + "RTE_SCHED_TYPE_PARALLEL", + }; + EH_LOG_INFO("Operating mode:"); + + EH_LOG_INFO("\tScheduling type: \t%s", + sched_types[em_conf->ext_params.sched_type]); + + EH_LOG_INFO(""); +} + +static void +eh_display_event_dev_conf(struct eventmode_conf *em_conf) +{ + char queue_mode[][32] = { + "", + "ATQ (ALL TYPE QUEUE)", + "SINGLE LINK", + }; + char print_buf[256] = { 0 }; + int i; + + 
EH_LOG_INFO("Event Device Configuration:"); + + for (i = 0; i < em_conf->nb_eventdev; i++) { + sprintf(print_buf, + "\tDev ID: %-2d \tQueues: %-2d \tPorts: %-2d", + em_conf->eventdev_config[i].eventdev_id, + em_conf->eventdev_config[i].nb_eventqueue, + em_conf->eventdev_config[i].nb_eventport); + sprintf(print_buf + strlen(print_buf), + "\tQueue mode: %s", + queue_mode[em_conf->eventdev_config[i].ev_queue_mode]); + EH_LOG_INFO("%s", print_buf); + } + EH_LOG_INFO(""); +} + +static void +eh_display_rx_adapter_conf(struct eventmode_conf *em_conf) +{ + int nb_rx_adapter = em_conf->nb_rx_adapter; + struct rx_adapter_connection_info *conn; + struct rx_adapter_conf *adapter; + char print_buf[256] = { 0 }; + int i, j; + + EH_LOG_INFO("Rx adapters configured: %d", nb_rx_adapter); + + for (i = 0; i < nb_rx_adapter; i++) { + adapter = &(em_conf->rx_adapter[i]); + EH_LOG_INFO( + "\tRx adapter ID: %-2d\tConnections: %-2d\tEvent dev ID: %-2d" + "\tRx core: %-2d", + adapter->adapter_id, + adapter->nb_connections, + adapter->eventdev_id, + adapter->rx_core_id); + + for (j = 0; j < adapter->nb_connections; j++) { + conn = &(adapter->conn[j]); + + sprintf(print_buf, + "\t\tEthdev ID: %-2d", conn->ethdev_id); + + if (conn->ethdev_rx_qid == -1) + sprintf(print_buf + strlen(print_buf), + "\tEth rx queue: %-2s", "ALL"); + else + sprintf(print_buf + strlen(print_buf), + "\tEth rx queue: %-2d", + conn->ethdev_rx_qid); + + sprintf(print_buf + strlen(print_buf), + "\tEvent queue: %-2d", conn->eventq_id); + EH_LOG_INFO("%s", print_buf); + } + } + EH_LOG_INFO(""); +} + +static void +eh_display_tx_adapter_conf(struct eventmode_conf *em_conf) +{ + int nb_tx_adapter = em_conf->nb_tx_adapter; + struct tx_adapter_connection_info *conn; + struct tx_adapter_conf *adapter; + char print_buf[256] = { 0 }; + int i, j; + + EH_LOG_INFO("Tx adapters configured: %d", nb_tx_adapter); + + for (i = 0; i < nb_tx_adapter; i++) { + adapter = &(em_conf->tx_adapter[i]); + sprintf(print_buf, + "\tTx adapter ID:
%-2d\tConnections: %-2d\tEvent dev ID: %-2d", + adapter->adapter_id, + adapter->nb_connections, + adapter->eventdev_id); + if (adapter->tx_core_id == (uint32_t)-1) + sprintf(print_buf + strlen(print_buf), + "\tTx core: %-2s", "[INTERNAL PORT]"); + else if (adapter->tx_core_id == RTE_MAX_LCORE) + sprintf(print_buf + strlen(print_buf), + "\tTx core: %-2s", "[NONE]"); + else + sprintf(print_buf + strlen(print_buf), + "\tTx core: %-2d,\tInput event queue: %-2d", + adapter->tx_core_id, adapter->tx_ev_queue); + + EH_LOG_INFO("%s", print_buf); + + for (j = 0; j < adapter->nb_connections; j++) { + conn = &(adapter->conn[j]); + + sprintf(print_buf, + "\t\tEthdev ID: %-2d", conn->ethdev_id); + + if (conn->ethdev_tx_qid == -1) + sprintf(print_buf + strlen(print_buf), + "\tEth tx queue: %-2s", "ALL"); + else + sprintf(print_buf + strlen(print_buf), + "\tEth tx queue: %-2d", + conn->ethdev_tx_qid); + EH_LOG_INFO("%s", print_buf); + } + } + EH_LOG_INFO(""); +} + +static void +eh_display_link_conf(struct eventmode_conf *em_conf) +{ + struct eh_event_link_info *link; + char print_buf[256] = { 0 }; + int i; + + EH_LOG_INFO("Links configured: %d", em_conf->nb_link); + + for (i = 0; i < em_conf->nb_link; i++) { + link = &(em_conf->link[i]); + + sprintf(print_buf, + "\tEvent dev ID: %-2d\tEvent port: %-2d", + link->eventdev_id, + link->event_port_id); + + if (em_conf->ext_params.all_ev_queue_to_ev_port) + sprintf(print_buf + strlen(print_buf), + "Event queue: %-2s\t", "ALL"); + else + sprintf(print_buf + strlen(print_buf), + "Event queue: %-2d\t", link->eventq_id); + + sprintf(print_buf + strlen(print_buf), + "Lcore: %-2d", link->lcore_id); + EH_LOG_INFO("%s", print_buf); + } + EH_LOG_INFO(""); +} + +void +eh_display_conf(struct eh_conf *conf) +{ + struct eventmode_conf *em_conf; + + if (conf == NULL) { + EH_LOG_ERR("Invalid event helper configuration"); + return; + } + + if (conf->mode != EH_PKT_TRANSFER_MODE_EVENT) + return; + + if (conf->mode_params == NULL) { + EH_LOG_ERR("Invalid 
event mode parameters"); + return; + } + + /* Get eventmode conf */ + em_conf = (struct eventmode_conf *)(conf->mode_params); + + /* Display user exposed operating modes */ + eh_display_operating_mode(em_conf); + + /* Display event device conf */ + eh_display_event_dev_conf(em_conf); + + /* Display Rx adapter conf */ + eh_display_rx_adapter_conf(em_conf); + + /* Display Tx adapter conf */ + eh_display_tx_adapter_conf(em_conf); + + /* Display event-lcore link */ + eh_display_link_conf(em_conf); +} + int32_t eh_devs_init(struct eh_conf *conf) { @@ -849,6 +1053,9 @@ eh_devs_init(struct eh_conf *conf) return ret; } + /* Display the current configuration */ + eh_display_conf(conf); + /* Stop eth devices before setting up adapter */ RTE_ETH_FOREACH_DEV(port_id) { diff --git a/examples/ipsec-secgw/event_helper.h b/examples/ipsec-secgw/event_helper.h index 913b172..8eb5e25 100644 --- a/examples/ipsec-secgw/event_helper.h +++ b/examples/ipsec-secgw/event_helper.h @@ -13,6 +13,11 @@ RTE_FMT("%s() line %u: " RTE_FMT_HEAD(__VA_ARGS__ ,) "\n", \ __func__, __LINE__, RTE_FMT_TAIL(__VA_ARGS__ ,))) +#define EH_LOG_INFO(...) \ + RTE_LOG(INFO, EH, \ + RTE_FMT("%s() line %u: " RTE_FMT_HEAD(__VA_ARGS__ ,) "\n", \ + __func__, __LINE__, RTE_FMT_TAIL(__VA_ARGS__ ,))) + /* Max event devices supported */ #define EVENT_MODE_MAX_EVENT_DEVS RTE_EVENT_MAX_DEVS @@ -214,4 +219,13 @@ eh_devs_uninit(struct eh_conf *conf); uint8_t eh_get_tx_queue(struct eh_conf *conf, uint8_t eventdev_id); +/** + * Display event mode configuration + * + * @param conf + * Event helper configuration + */ +void +eh_display_conf(struct eh_conf *conf); + #endif /* _EVENT_HELPER_H_ */ -- 2.7.4 ^ permalink raw reply [flat|nested] 147+ messages in thread
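The display routines above build each line with the `sprintf(print_buf + strlen(print_buf), ...)` append pattern into a fixed 256-byte buffer, which is unbounded. A hedged, self-contained sketch of a bounded variant (names are illustrative, not part of the patch) that keeps the same append ergonomics while letting `vsnprintf` enforce the remaining space:

```c
#include <assert.h>
#include <stdarg.h>
#include <stdio.h>
#include <string.h>

/* Append formatted text to buf without ever writing past buf_sz;
 * silently truncates instead of overflowing the buffer. */
static void
buf_append(char *buf, size_t buf_sz, const char *fmt, ...)
{
	size_t used = strlen(buf);
	va_list ap;

	if (used >= buf_sz)
		return; /* buffer already full */

	va_start(ap, fmt);
	vsnprintf(buf + used, buf_sz - used, fmt, ap);
	va_end(ap);
}
```

Swapping the raw `sprintf` appends for such a helper would make the config dump safe even if field widths or adapter counts grow beyond what a 256-byte line can hold.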
* [dpdk-dev] [PATCH v5 07/15] examples/ipsec-secgw: add routines to launch workers 2020-02-27 16:18 ` [dpdk-dev] [PATCH v5 " Lukasz Bartosik ` (5 preceding siblings ...) 2020-02-27 16:18 ` [dpdk-dev] [PATCH v5 06/15] examples/ipsec-secgw: add routines to display config Lukasz Bartosik @ 2020-02-27 16:18 ` Lukasz Bartosik 2020-02-27 16:18 ` [dpdk-dev] [PATCH v5 08/15] examples/ipsec-secgw: add support for internal ports Lukasz Bartosik ` (9 subsequent siblings) 16 siblings, 0 replies; 147+ messages in thread From: Lukasz Bartosik @ 2020-02-27 16:18 UTC (permalink / raw) To: Akhil Goyal, Radu Nicolau, Thomas Monjalon Cc: Jerin Jacob, Narayana Prasad, Ankur Dwivedi, Anoob Joseph, Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru, Konstantin Ananyev, dev In eventmode workers can be drafted differently according to the capabilities of the underlying event device. The added functions will receive an array of such workers and probe the eventmode properties to choose the worker. Signed-off-by: Anoob Joseph <anoobj@marvell.com> Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> --- examples/ipsec-secgw/event_helper.c | 336 ++++++++++++++++++++++++++++++++++++ examples/ipsec-secgw/event_helper.h | 48 ++++++ 2 files changed, 384 insertions(+) diff --git a/examples/ipsec-secgw/event_helper.c b/examples/ipsec-secgw/event_helper.c index d09bf7d..e3dfaf5 100644 --- a/examples/ipsec-secgw/event_helper.c +++ b/examples/ipsec-secgw/event_helper.c @@ -11,6 +11,8 @@ #include "event_helper.h" +static volatile bool eth_core_running; + static int eh_get_enabled_cores(struct rte_bitmap *eth_core_mask) { @@ -93,6 +95,16 @@ eh_get_eventdev_params(struct eventmode_conf *em_conf, uint8_t eventdev_id) return &(em_conf->eventdev_config[i]); } +static inline bool +eh_dev_has_burst_mode(uint8_t dev_id) +{ + struct rte_event_dev_info dev_info; + + rte_event_dev_info_get(dev_id, &dev_info); + return (dev_info.event_dev_cap & RTE_EVENT_DEV_CAP_BURST_MODE) ? 
+ true : false; + } + static int eh_set_default_conf_eventdev(struct eventmode_conf *em_conf) { @@ -689,6 +701,257 @@ eh_initialize_rx_adapter(struct eventmode_conf *em_conf) return 0; } +static int32_t +eh_start_worker_eth_core(struct eventmode_conf *conf, uint32_t lcore_id) +{ + uint32_t service_id[EVENT_MODE_MAX_ADAPTERS_PER_RX_CORE]; + struct rx_adapter_conf *rx_adapter; + struct tx_adapter_conf *tx_adapter; + int service_count = 0; + int adapter_id; + int32_t ret; + int i; + + EH_LOG_INFO("Entering eth_core processing on lcore %u", lcore_id); + + /* + * Parse adapter config to check which of all Rx adapters need + * to be handled by this core. + */ + for (i = 0; i < conf->nb_rx_adapter; i++) { + /* Check if we have exceeded the max allowed */ + if (service_count >= EVENT_MODE_MAX_ADAPTERS_PER_RX_CORE) { + EH_LOG_ERR( + "Exceeded the max allowed adapters per rx core"); + break; + } + + rx_adapter = &(conf->rx_adapter[i]); + if (rx_adapter->rx_core_id != lcore_id) + continue; + + /* Adapter is handled by this core */ + adapter_id = rx_adapter->adapter_id; + + /* Get the service ID for the adapters */ + ret = rte_event_eth_rx_adapter_service_id_get(adapter_id, + &(service_id[service_count])); + + if (ret != -ESRCH && ret < 0) { + EH_LOG_ERR( + "Failed to get service id used by rx adapter"); + return ret; + } + + /* Update service count */ + service_count++; + } + + /* + * Parse adapter config to see which of all Tx adapters need + * to be handled by this core.
+ */ + for (i = 0; i < conf->nb_tx_adapter; i++) { + /* Check if we have exceeded the max allowed */ + if (service_count >= EVENT_MODE_MAX_ADAPTERS_PER_TX_CORE) { + EH_LOG_ERR( + "Exceeded the max allowed adapters per tx core"); + break; + } + + tx_adapter = &conf->tx_adapter[i]; + if (tx_adapter->tx_core_id != lcore_id) + continue; + + /* Adapter is handled by this core */ + adapter_id = tx_adapter->adapter_id; + + /* Get the service ID for the adapters */ + ret = rte_event_eth_tx_adapter_service_id_get(adapter_id, + &(service_id[service_count])); + + if (ret != -ESRCH && ret < 0) { + EH_LOG_ERR( + "Failed to get service id used by tx adapter"); + return ret; + } + + /* Update service count */ + service_count++; + } + + eth_core_running = true; + + while (eth_core_running) { + for (i = 0; i < service_count; i++) { + /* Initiate adapter service */ + rte_service_run_iter_on_app_lcore(service_id[i], 0); + } + } + + return 0; +} + +static int32_t +eh_stop_worker_eth_core(void) +{ + if (eth_core_running) { + EH_LOG_INFO("Stopping eth cores"); + eth_core_running = false; + } + return 0; +} + +static struct eh_app_worker_params * +eh_find_worker(uint32_t lcore_id, struct eh_conf *conf, + struct eh_app_worker_params *app_wrkrs, uint8_t nb_wrkr_param) +{ + struct eh_app_worker_params curr_conf = { {{0} }, NULL}; + struct eh_event_link_info *link = NULL; + struct eh_app_worker_params *tmp_wrkr; + struct eventmode_conf *em_conf; + uint8_t eventdev_id; + int i; + + /* Get eventmode config */ + em_conf = conf->mode_params; + + /* + * Use event device from the first lcore-event link. + * + * Assumption: All lcore-event links tied to a core are using the + * same event device. In other words, one core would be polling on + * queues of a single event device only.
+ */ + + /* Get a link for this lcore */ + for (i = 0; i < em_conf->nb_link; i++) { + link = &(em_conf->link[i]); + if (link->lcore_id == lcore_id) + break; + } + + if (link == NULL) { + EH_LOG_ERR("No valid link found for lcore %d", lcore_id); + return NULL; + } + + /* Get event dev ID */ + eventdev_id = link->eventdev_id; + + /* Populate the curr_conf with the capabilities */ + + /* Check for burst mode */ + if (eh_dev_has_burst_mode(eventdev_id)) + curr_conf.cap.burst = EH_RX_TYPE_BURST; + else + curr_conf.cap.burst = EH_RX_TYPE_NON_BURST; + + /* Parse the passed list and see if we have matching capabilities */ + + /* Initialize the pointer used to traverse the list */ + tmp_wrkr = app_wrkrs; + + for (i = 0; i < nb_wrkr_param; i++, tmp_wrkr++) { + + /* Skip this if capabilities are not matching */ + if (tmp_wrkr->cap.u64 != curr_conf.cap.u64) + continue; + + /* If the checks pass, we have a match */ + return tmp_wrkr; + } + + return NULL; +} + +static int +eh_verify_match_worker(struct eh_app_worker_params *match_wrkr) +{ + /* Verify registered worker */ + if (match_wrkr->worker_thread == NULL) { + EH_LOG_ERR("No worker registered"); + return 0; + } + + /* Success */ + return 1; +} + +static uint8_t +eh_get_event_lcore_links(uint32_t lcore_id, struct eh_conf *conf, + struct eh_event_link_info **links) +{ + struct eh_event_link_info *link_cache; + struct eventmode_conf *em_conf = NULL; + struct eh_event_link_info *link; + uint8_t lcore_nb_link = 0; + size_t single_link_size; + size_t cache_size; + int index = 0; + int i; + + if (conf == NULL || links == NULL) { + EH_LOG_ERR("Invalid args"); + return -EINVAL; + } + + /* Get eventmode conf */ + em_conf = conf->mode_params; + + if (em_conf == NULL) { + EH_LOG_ERR("Invalid event mode parameters"); + return -EINVAL; + } + + /* Get the number of links registered */ + for (i = 0; i < em_conf->nb_link; i++) { + + /* Get link */ + link = &(em_conf->link[i]); + + /* Check if we have link intended for this lcore */ + if 
(link->lcore_id == lcore_id) { + + /* Update the number of links for this core */ + lcore_nb_link++; + + } + } + + /* Compute size of one entry to be copied */ + single_link_size = sizeof(struct eh_event_link_info); + + /* Compute size of the buffer required */ + cache_size = lcore_nb_link * sizeof(struct eh_event_link_info); + + /* Allocate memory to cache the links */ + link_cache = calloc(1, cache_size); + + /* Traverse the links again and cache the ones for this lcore */ + for (i = 0; i < em_conf->nb_link; i++) { + + /* Get link */ + link = &(em_conf->link[i]); + + /* Check if we have link intended for this lcore */ + if (link->lcore_id == lcore_id) { + + /* Cache the link */ + memcpy(&link_cache[index], link, single_link_size); + + /* Update index */ + index++; + } + } + + /* Update the links for application to use the cached links */ + *links = link_cache; + + /* Return the number of cached links */ + return lcore_nb_link; +} + static int eh_tx_adapter_configure(struct eventmode_conf *em_conf, struct tx_adapter_conf *adapter) @@ -1202,6 +1465,79 @@ eh_devs_uninit(struct eh_conf *conf) return 0; } +void +eh_launch_worker(struct eh_conf *conf, struct eh_app_worker_params *app_wrkr, + uint8_t nb_wrkr_param) +{ + struct eh_app_worker_params *match_wrkr; + struct eh_event_link_info *links = NULL; + struct eventmode_conf *em_conf; + uint32_t lcore_id; + uint8_t nb_links; + + if (conf == NULL) { + EH_LOG_ERR("Invalid event helper configuration"); + return; + } + + if (conf->mode_params == NULL) { + EH_LOG_ERR("Invalid event mode parameters"); + return; + } + + /* Get eventmode conf */ + em_conf = conf->mode_params; + + /* Get core ID */ + lcore_id = rte_lcore_id(); + + /* Check if this is eth core */ + if (rte_bitmap_get(em_conf->eth_core_mask, lcore_id)) { + eh_start_worker_eth_core(em_conf, lcore_id); + return; + } + + if (app_wrkr == NULL || nb_wrkr_param == 0) { + EH_LOG_ERR("Invalid args"); + return; + } + + /* + * This is a regular worker thread.
The application registers + * multiple workers with various capabilities. Run worker + * based on the selected capabilities of the event + * device configured. + */ + + /* Get the first matching worker for the event device */ + match_wrkr = eh_find_worker(lcore_id, conf, app_wrkr, nb_wrkr_param); + if (match_wrkr == NULL) { + EH_LOG_ERR("Failed to match worker registered for lcore %d", + lcore_id); + goto clean_and_exit; + } + + /* Verify sanity of the matched worker */ + if (eh_verify_match_worker(match_wrkr) != 1) { + EH_LOG_ERR("Failed to validate the matched worker"); + goto clean_and_exit; + } + + /* Get worker links */ + nb_links = eh_get_event_lcore_links(lcore_id, conf, &links); + + /* Launch the worker thread */ + match_wrkr->worker_thread(links, nb_links); + + /* Free links info memory */ + free(links); + +clean_and_exit: + + /* Flag eth_cores to stop, if started */ + eh_stop_worker_eth_core(); +} + uint8_t eh_get_tx_queue(struct eh_conf *conf, uint8_t eventdev_id) { diff --git a/examples/ipsec-secgw/event_helper.h b/examples/ipsec-secgw/event_helper.h index 8eb5e25..9a4dfab 100644 --- a/examples/ipsec-secgw/event_helper.h +++ b/examples/ipsec-secgw/event_helper.h @@ -40,6 +40,9 @@ #define EVENT_MODE_MAX_LCORE_LINKS \ (EVENT_MODE_MAX_EVENT_DEVS * EVENT_MODE_MAX_EVENT_QUEUES_PER_DEV) +/* Max adapters that one Rx core can handle */ +#define EVENT_MODE_MAX_ADAPTERS_PER_RX_CORE EVENT_MODE_MAX_RX_ADAPTERS + /* Max adapters that one Tx core can handle */ #define EVENT_MODE_MAX_ADAPTERS_PER_TX_CORE EVENT_MODE_MAX_TX_ADAPTERS @@ -51,6 +54,14 @@ enum eh_pkt_transfer_mode { EH_PKT_TRANSFER_MODE_EVENT, }; +/** + * Event mode packet rx types + */ +enum eh_rx_types { + EH_RX_TYPE_NON_BURST = 0, + EH_RX_TYPE_BURST +}; + /* Event dev params */ struct eventdev_params { uint8_t eventdev_id; @@ -161,6 +172,22 @@ struct eh_conf { /**< Mode specific parameters */ }; +/* Workers registered by the application */ +struct eh_app_worker_params { + union { + RTE_STD_C11 + struct { 
+ uint64_t burst : 1; + /**< Specify status of rx type burst */ + }; + uint64_t u64; + } cap; + /**< Capabilities of this worker */ + void (*worker_thread)(struct eh_event_link_info *links, + uint8_t nb_links); + /**< Worker thread */ +}; + /** * Initialize event mode devices * @@ -228,4 +255,25 @@ eh_get_tx_queue(struct eh_conf *conf, uint8_t eventdev_id); void eh_display_conf(struct eh_conf *conf); + +/** + * Launch eventmode worker + * + * The application can request the eventmode helper subsystem to launch the + * worker based on the capabilities of event device and the options selected + * while initializing the eventmode. + * + * @param conf + * Event helper configuration + * @param app_wrkr + * List of all the workers registered by application, along with its + * capabilities + * @param nb_wrkr_param + * Number of workers passed by the application + * + */ +void +eh_launch_worker(struct eh_conf *conf, struct eh_app_worker_params *app_wrkr, + uint8_t nb_wrkr_param); + #endif /* _EVENT_HELPER_H_ */ -- 2.7.4 ^ permalink raw reply [flat|nested] 147+ messages in thread
* [dpdk-dev] [PATCH v5 08/15] examples/ipsec-secgw: add support for internal ports 2020-02-27 16:18 ` [dpdk-dev] [PATCH v5 " Lukasz Bartosik ` (6 preceding siblings ...) 2020-02-27 16:18 ` [dpdk-dev] [PATCH v5 07/15] examples/ipsec-secgw: add routines to launch workers Lukasz Bartosik @ 2020-02-27 16:18 ` Lukasz Bartosik 2020-02-27 16:18 ` [dpdk-dev] [PATCH v5 09/15] examples/ipsec-secgw: add event helper config init/uninit Lukasz Bartosik ` (8 subsequent siblings) 16 siblings, 0 replies; 147+ messages in thread From: Lukasz Bartosik @ 2020-02-27 16:18 UTC (permalink / raw) To: Akhil Goyal, Radu Nicolau, Thomas Monjalon Cc: Jerin Jacob, Narayana Prasad, Ankur Dwivedi, Anoob Joseph, Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru, Konstantin Ananyev, dev Add support for Rx and Tx internal ports. When internal ports are available, a packet can be received from an eth port and forwarded to an event queue by HW without any software intervention. The same applies on the Tx side, where a packet sent to an event queue can be forwarded by HW to an eth port without any software intervention. 
Signed-off-by: Anoob Joseph <anoobj@marvell.com> Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> --- examples/ipsec-secgw/event_helper.c | 179 +++++++++++++++++++++++++++++++----- examples/ipsec-secgw/event_helper.h | 11 +++ 2 files changed, 167 insertions(+), 23 deletions(-) diff --git a/examples/ipsec-secgw/event_helper.c b/examples/ipsec-secgw/event_helper.c index e3dfaf5..fe047ab 100644 --- a/examples/ipsec-secgw/event_helper.c +++ b/examples/ipsec-secgw/event_helper.c @@ -95,6 +95,39 @@ eh_get_eventdev_params(struct eventmode_conf *em_conf, uint8_t eventdev_id) return &(em_conf->eventdev_config[i]); } + +static inline bool +eh_dev_has_rx_internal_port(uint8_t eventdev_id) +{ + bool flag = true; + int j; + + RTE_ETH_FOREACH_DEV(j) { + uint32_t caps = 0; + + rte_event_eth_rx_adapter_caps_get(eventdev_id, j, &caps); + if (!(caps & RTE_EVENT_ETH_RX_ADAPTER_CAP_INTERNAL_PORT)) + flag = false; + } + return flag; +} + +static inline bool +eh_dev_has_tx_internal_port(uint8_t eventdev_id) +{ + bool flag = true; + int j; + + RTE_ETH_FOREACH_DEV(j) { + uint32_t caps = 0; + + rte_event_eth_tx_adapter_caps_get(eventdev_id, j, &caps); + if (!(caps & RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT)) + flag = false; + } + return flag; +} + static inline bool eh_dev_has_burst_mode(uint8_t dev_id) { @@ -175,6 +208,42 @@ eh_set_default_conf_eventdev(struct eventmode_conf *em_conf) return 0; } +static void +eh_do_capability_check(struct eventmode_conf *em_conf) +{ + struct eventdev_params *eventdev_config; + int all_internal_ports = 1; + uint32_t eventdev_id; + int i; + + for (i = 0; i < em_conf->nb_eventdev; i++) { + + /* Get the event dev conf */ + eventdev_config = &(em_conf->eventdev_config[i]); + eventdev_id = eventdev_config->eventdev_id; + + /* Check if event device has internal port for Rx & Tx */ + if (eh_dev_has_rx_internal_port(eventdev_id) && + eh_dev_has_tx_internal_port(eventdev_id)) { + eventdev_config->all_internal_ports = 1; + } else { + all_internal_ports = 0; 
+ } + } + + /* + * If Rx & Tx internal ports are supported by all event devices then + * eth cores won't be required. Override the eth core mask requested + * and decrement number of event queues by one as it won't be needed + * for Tx. + */ + if (all_internal_ports) { + rte_bitmap_reset(em_conf->eth_core_mask); + for (i = 0; i < em_conf->nb_eventdev; i++) + em_conf->eventdev_config[i].nb_eventqueue--; + } +} + static int eh_set_default_conf_link(struct eventmode_conf *em_conf) { @@ -246,7 +315,10 @@ eh_set_default_conf_rx_adapter(struct eventmode_conf *em_conf) struct rx_adapter_connection_info *conn; struct eventdev_params *eventdev_config; struct rx_adapter_conf *adapter; + bool rx_internal_port = true; bool single_ev_queue = false; + int nb_eventqueue; + uint32_t caps = 0; int eventdev_id; int nb_eth_dev; int adapter_id; @@ -276,14 +348,21 @@ eh_set_default_conf_rx_adapter(struct eventmode_conf *em_conf) /* Set adapter conf */ adapter->eventdev_id = eventdev_id; adapter->adapter_id = adapter_id; - adapter->rx_core_id = eh_get_next_eth_core(em_conf); + + /* + * If event device does not have internal ports for passing + * packets then reserved one queue for Tx path + */ + nb_eventqueue = eventdev_config->all_internal_ports ? + eventdev_config->nb_eventqueue : + eventdev_config->nb_eventqueue - 1; /* * Map all queues of eth device (port) to an event queue. If there * are more event queues than eth ports then create 1:1 mapping. * Otherwise map all eth ports to a single event queue. 
*/ - if (nb_eth_dev > eventdev_config->nb_eventqueue) + if (nb_eth_dev > nb_eventqueue) single_ev_queue = true; for (i = 0; i < nb_eth_dev; i++) { @@ -305,11 +384,24 @@ eh_set_default_conf_rx_adapter(struct eventmode_conf *em_conf) /* Add all eth queues eth port to event queue */ conn->ethdev_rx_qid = -1; + /* Get Rx adapter capabilities */ + rte_event_eth_rx_adapter_caps_get(eventdev_id, i, &caps); + if (!(caps & RTE_EVENT_ETH_RX_ADAPTER_CAP_INTERNAL_PORT)) + rx_internal_port = false; + /* Update no of connections */ adapter->nb_connections++; } + if (rx_internal_port) { + /* Rx core is not required */ + adapter->rx_core_id = -1; + } else { + /* Rx core is required */ + adapter->rx_core_id = eh_get_next_eth_core(em_conf); + } + /* We have setup one adapter */ em_conf->nb_rx_adapter = 1; @@ -322,6 +414,8 @@ eh_set_default_conf_tx_adapter(struct eventmode_conf *em_conf) struct tx_adapter_connection_info *conn; struct eventdev_params *eventdev_config; struct tx_adapter_conf *tx_adapter; + bool tx_internal_port = true; + uint32_t caps = 0; int eventdev_id; int adapter_id; int nb_eth_dev; @@ -355,18 +449,6 @@ eh_set_default_conf_tx_adapter(struct eventmode_conf *em_conf) tx_adapter->eventdev_id = eventdev_id; tx_adapter->adapter_id = adapter_id; - /* TODO: Tx core is required only when internal port is not present */ - tx_adapter->tx_core_id = eh_get_next_eth_core(em_conf); - - /* - * Application uses one event queue per adapter for submitting - * packets for Tx. Reserve the last queue available and decrement - * the total available event queues for this - */ - - /* Queue numbers start at 0 */ - tx_adapter->tx_ev_queue = eventdev_config->nb_eventqueue - 1; - /* * Map all Tx queues of the eth device (port) to the event device. 
*/ @@ -396,10 +478,30 @@ eh_set_default_conf_tx_adapter(struct eventmode_conf *em_conf) /* Add all eth tx queues to adapter */ conn->ethdev_tx_qid = -1; + /* Get Tx adapter capabilities */ + rte_event_eth_tx_adapter_caps_get(eventdev_id, i, &caps); + if (!(caps & RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT)) + tx_internal_port = false; + /* Update no of connections */ tx_adapter->nb_connections++; } + if (tx_internal_port) { + /* Tx core is not required */ + tx_adapter->tx_core_id = -1; + } else { + /* Tx core is required */ + tx_adapter->tx_core_id = eh_get_next_eth_core(em_conf); + + /* + * Use one event queue per adapter for submitting packets + * for Tx. Reserving the last queue available + */ + /* Queue numbers start at 0 */ + tx_adapter->tx_ev_queue = eventdev_config->nb_eventqueue - 1; + } + /* We have setup one adapter */ em_conf->nb_tx_adapter = 1; return 0; @@ -420,6 +522,9 @@ eh_validate_conf(struct eventmode_conf *em_conf) return ret; } + /* Perform capability check for the selected event devices */ + eh_do_capability_check(em_conf); + /* * Check if links are specified. Else generate a default config for * the event ports used. @@ -523,11 +628,13 @@ eh_initialize_eventdev(struct eventmode_conf *em_conf) eventdev_config->ev_queue_mode; /* * All queues need to be set with sched_type as - * schedule type for the application stage. One queue - * would be reserved for the final eth tx stage. This - * will be an atomic queue. + * schedule type for the application stage. One + * queue would be reserved for the final eth tx + * stage if event device does not have internal + * ports. This will be an atomic queue. 
*/ - if (j == nb_eventqueue-1) { + if (!eventdev_config->all_internal_ports && + j == nb_eventqueue-1) { eventq_conf.schedule_type = RTE_SCHED_TYPE_ATOMIC; } else { @@ -841,6 +948,12 @@ eh_find_worker(uint32_t lcore_id, struct eh_conf *conf, /* Populate the curr_conf with the capabilities */ + /* Check for Tx internal port */ + if (eh_dev_has_tx_internal_port(eventdev_id)) + curr_conf.cap.tx_internal_port = EH_TX_TYPE_INTERNAL_PORT; + else + curr_conf.cap.tx_internal_port = EH_TX_TYPE_NO_INTERNAL_PORT; + /* Check for burst mode */ if (eh_dev_has_burst_mode(eventdev_id)) curr_conf.cap.burst = EH_RX_TYPE_BURST; @@ -1012,6 +1125,16 @@ eh_tx_adapter_configure(struct eventmode_conf *em_conf, } } + /* + * Check if Tx core is assigned. If Tx core is not assigned then + * the adapter has internal port for submitting Tx packets and + * Tx event queue & port setup is not required + */ + if (adapter->tx_core_id == (uint32_t) (-1)) { + /* Internal port is present */ + goto skip_tx_queue_port_setup; + } + /* Setup Tx queue & port */ /* Get event port used by the adapter */ @@ -1051,6 +1174,7 @@ eh_tx_adapter_configure(struct eventmode_conf *em_conf, rte_service_set_runstate_mapped_check(service_id, 0); +skip_tx_queue_port_setup: /* Start adapter */ ret = rte_event_eth_tx_adapter_start(adapter->adapter_id); if (ret < 0) { @@ -1135,13 +1259,22 @@ eh_display_rx_adapter_conf(struct eventmode_conf *em_conf) for (i = 0; i < nb_rx_adapter; i++) { adapter = &(em_conf->rx_adapter[i]); - EH_LOG_INFO( - "\tRx adaper ID: %-2d\tConnections: %-2d\tEvent dev ID: %-2d" - "\tRx core: %-2d", + sprintf(print_buf, + "\tRx adaper ID: %-2d\tConnections: %-2d\tEvent dev ID: %-2d", adapter->adapter_id, adapter->nb_connections, - adapter->eventdev_id, - adapter->rx_core_id); + adapter->eventdev_id); + if (adapter->rx_core_id == (uint32_t)-1) + sprintf(print_buf + strlen(print_buf), + "\tRx core: %-2s", "[INTERNAL PORT]"); + else if (adapter->rx_core_id == RTE_MAX_LCORE) + sprintf(print_buf + 
strlen(print_buf), + "\tRx core: %-2s", "[NONE]"); + else + sprintf(print_buf + strlen(print_buf), + "\tRx core: %-2d", adapter->rx_core_id); + + EH_LOG_INFO("%s", print_buf); for (j = 0; j < adapter->nb_connections; j++) { conn = &(adapter->conn[j]); diff --git a/examples/ipsec-secgw/event_helper.h b/examples/ipsec-secgw/event_helper.h index 9a4dfab..25c8563 100644 --- a/examples/ipsec-secgw/event_helper.h +++ b/examples/ipsec-secgw/event_helper.h @@ -62,12 +62,21 @@ enum eh_rx_types { EH_RX_TYPE_BURST }; +/** + * Event mode packet tx types + */ +enum eh_tx_types { + EH_TX_TYPE_INTERNAL_PORT = 0, + EH_TX_TYPE_NO_INTERNAL_PORT +}; + /* Event dev params */ struct eventdev_params { uint8_t eventdev_id; uint8_t nb_eventqueue; uint8_t nb_eventport; uint8_t ev_queue_mode; + uint8_t all_internal_ports; }; /** @@ -179,6 +188,8 @@ struct eh_app_worker_params { struct { uint64_t burst : 1; /**< Specify status of rx type burst */ + uint64_t tx_internal_port : 1; + /**< Specify whether tx internal port is available */ }; uint64_t u64; } cap; -- 2.7.4 ^ permalink raw reply [flat|nested] 147+ messages in thread
* [dpdk-dev] [PATCH v5 09/15] examples/ipsec-secgw: add event helper config init/uninit 2020-02-27 16:18 ` [dpdk-dev] [PATCH v5 " Lukasz Bartosik ` (7 preceding siblings ...) 2020-02-27 16:18 ` [dpdk-dev] [PATCH v5 08/15] examples/ipsec-secgw: add support for internal ports Lukasz Bartosik @ 2020-02-27 16:18 ` Lukasz Bartosik 2020-02-27 16:18 ` [dpdk-dev] [PATCH v5 10/15] examples/ipsec-secgw: add eventmode to ipsec-secgw Lukasz Bartosik ` (7 subsequent siblings) 16 siblings, 0 replies; 147+ messages in thread From: Lukasz Bartosik @ 2020-02-27 16:18 UTC (permalink / raw) To: Akhil Goyal, Radu Nicolau, Thomas Monjalon Cc: Jerin Jacob, Narayana Prasad, Ankur Dwivedi, Anoob Joseph, Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru, Konstantin Ananyev, dev Add eventmode helper eh_conf_init and eh_conf_uninit functions, whose purpose is to initialize and uninitialize the eventmode helper configuration. Signed-off-by: Anoob Joseph <anoobj@marvell.com> Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> --- examples/ipsec-secgw/event_helper.c | 103 ++++++++++++++++++++++++++++++++++++ examples/ipsec-secgw/event_helper.h | 23 ++++++++ 2 files changed, 126 insertions(+) diff --git a/examples/ipsec-secgw/event_helper.c b/examples/ipsec-secgw/event_helper.c index fe047ab..0854fc2 100644 --- a/examples/ipsec-secgw/event_helper.c +++ b/examples/ipsec-secgw/event_helper.c @@ -1379,6 +1379,109 @@ eh_display_link_conf(struct eventmode_conf *em_conf) EH_LOG_INFO(""); } +struct eh_conf * +eh_conf_init(void) +{ + struct eventmode_conf *em_conf = NULL; + struct eh_conf *conf = NULL; + unsigned int eth_core_id; + void *bitmap = NULL; + uint32_t nb_bytes; + + /* Allocate memory for config */ + conf = calloc(1, sizeof(struct eh_conf)); + if (conf == NULL) { + EH_LOG_ERR("Failed to allocate memory for eventmode helper " + "config"); + return NULL; + } + + /* Set default conf */ + + /* Packet transfer mode: poll */ + conf->mode = EH_PKT_TRANSFER_MODE_POLL; + + /* Keep all ethernet ports 
enabled by default */ + conf->eth_portmask = -1; + + /* Allocate memory for event mode params */ + conf->mode_params = calloc(1, sizeof(struct eventmode_conf)); + if (conf->mode_params == NULL) { + EH_LOG_ERR("Failed to allocate memory for event mode params"); + goto free_conf; + } + + /* Get eventmode conf */ + em_conf = conf->mode_params; + + /* Allocate and initialize bitmap for eth cores */ + nb_bytes = rte_bitmap_get_memory_footprint(RTE_MAX_LCORE); + if (!nb_bytes) { + EH_LOG_ERR("Failed to get bitmap footprint"); + goto free_em_conf; + } + + bitmap = rte_zmalloc("event-helper-ethcore-bitmap", nb_bytes, + RTE_CACHE_LINE_SIZE); + if (!bitmap) { + EH_LOG_ERR("Failed to allocate memory for eth cores bitmap\n"); + goto free_em_conf; + } + + em_conf->eth_core_mask = rte_bitmap_init(RTE_MAX_LCORE, bitmap, + nb_bytes); + if (!em_conf->eth_core_mask) { + EH_LOG_ERR("Failed to initialize bitmap"); + goto free_bitmap; + } + + /* Set schedule type as not set */ + em_conf->ext_params.sched_type = SCHED_TYPE_NOT_SET; + + /* Set two cores as eth cores for Rx & Tx */ + + /* Use first core other than master core as Rx core */ + eth_core_id = rte_get_next_lcore(0, /* curr core */ + 1, /* skip master core */ + 0 /* wrap */); + + rte_bitmap_set(em_conf->eth_core_mask, eth_core_id); + + /* Use next core as Tx core */ + eth_core_id = rte_get_next_lcore(eth_core_id, /* curr core */ + 1, /* skip master core */ + 0 /* wrap */); + + rte_bitmap_set(em_conf->eth_core_mask, eth_core_id); + + return conf; + +free_bitmap: + rte_free(bitmap); +free_em_conf: + free(em_conf); +free_conf: + free(conf); + return NULL; +} + +void +eh_conf_uninit(struct eh_conf *conf) +{ + struct eventmode_conf *em_conf = NULL; + + if (!conf || !conf->mode_params) + return; + + /* Get eventmode conf */ + em_conf = conf->mode_params; + + /* Free evenmode configuration memory */ + rte_free(em_conf->eth_core_mask); + free(em_conf); + free(conf); +} + void eh_display_conf(struct eh_conf *conf) { diff --git 
a/examples/ipsec-secgw/event_helper.h b/examples/ipsec-secgw/event_helper.h index 25c8563..e17cab1 100644 --- a/examples/ipsec-secgw/event_helper.h +++ b/examples/ipsec-secgw/event_helper.h @@ -46,6 +46,9 @@ /* Max adapters that one Tx core can handle */ #define EVENT_MODE_MAX_ADAPTERS_PER_TX_CORE EVENT_MODE_MAX_TX_ADAPTERS +/* Used to indicate that queue schedule type is not set */ +#define SCHED_TYPE_NOT_SET 3 + /** * Packet transfer mode of the application */ @@ -200,6 +203,26 @@ struct eh_app_worker_params { }; /** + * Allocate memory for event helper configuration and initialize + * it with default values. + * + * @return + * - pointer to event helper configuration structure on success. + * - NULL on failure. + */ +struct eh_conf * +eh_conf_init(void); + +/** + * Uninitialize event helper configuration and release its memory + * + * @param conf + * Event helper configuration + */ +void +eh_conf_uninit(struct eh_conf *conf); + +/** * Initialize event mode devices * * Application can call this function to get the event devices, eth devices -- 2.7.4 ^ permalink raw reply [flat|nested] 147+ messages in thread
* [dpdk-dev] [PATCH v5 10/15] examples/ipsec-secgw: add eventmode to ipsec-secgw 2020-02-27 16:18 ` [dpdk-dev] [PATCH v5 " Lukasz Bartosik ` (8 preceding siblings ...) 2020-02-27 16:18 ` [dpdk-dev] [PATCH v5 09/15] examples/ipsec-secgw: add event helper config init/uninit Lukasz Bartosik @ 2020-02-27 16:18 ` Lukasz Bartosik 2020-02-27 16:18 ` [dpdk-dev] [PATCH v5 11/15] examples/ipsec-secgw: add driver mode worker Lukasz Bartosik ` (6 subsequent siblings) 16 siblings, 0 replies; 147+ messages in thread From: Lukasz Bartosik @ 2020-02-27 16:18 UTC (permalink / raw) To: Akhil Goyal, Radu Nicolau, Thomas Monjalon Cc: Jerin Jacob, Narayana Prasad, Ankur Dwivedi, Anoob Joseph, Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru, Konstantin Ananyev, dev Add eventmode support to ipsec-secgw. With the aid of event helper configure and use the eventmode capabilities. Signed-off-by: Anoob Joseph <anoobj@marvell.com> Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> --- examples/ipsec-secgw/event_helper.c | 3 + examples/ipsec-secgw/event_helper.h | 14 ++ examples/ipsec-secgw/ipsec-secgw.c | 301 +++++++++++++++++++++++++++++++++++- examples/ipsec-secgw/ipsec.h | 24 +++ examples/ipsec-secgw/sa.c | 21 +-- examples/ipsec-secgw/sad.h | 5 - 6 files changed, 340 insertions(+), 28 deletions(-) diff --git a/examples/ipsec-secgw/event_helper.c b/examples/ipsec-secgw/event_helper.c index 0854fc2..076f1f2 100644 --- a/examples/ipsec-secgw/event_helper.c +++ b/examples/ipsec-secgw/event_helper.c @@ -960,6 +960,8 @@ eh_find_worker(uint32_t lcore_id, struct eh_conf *conf, else curr_conf.cap.burst = EH_RX_TYPE_NON_BURST; + curr_conf.cap.ipsec_mode = conf->ipsec_mode; + /* Parse the passed list and see if we have matching capabilities */ /* Initialize the pointer used to traverse the list */ @@ -1400,6 +1402,7 @@ eh_conf_init(void) /* Packet transfer mode: poll */ conf->mode = EH_PKT_TRANSFER_MODE_POLL; + conf->ipsec_mode = EH_IPSEC_MODE_TYPE_APP; /* Keep all ethernet ports enabled by 
default */ conf->eth_portmask = -1; diff --git a/examples/ipsec-secgw/event_helper.h b/examples/ipsec-secgw/event_helper.h index e17cab1..b65b343 100644 --- a/examples/ipsec-secgw/event_helper.h +++ b/examples/ipsec-secgw/event_helper.h @@ -73,6 +73,14 @@ enum eh_tx_types { EH_TX_TYPE_NO_INTERNAL_PORT }; +/** + * Event mode ipsec mode types + */ +enum eh_ipsec_mode_types { + EH_IPSEC_MODE_TYPE_APP = 0, + EH_IPSEC_MODE_TYPE_DRIVER +}; + /* Event dev params */ struct eventdev_params { uint8_t eventdev_id; @@ -182,6 +190,10 @@ struct eh_conf { */ void *mode_params; /**< Mode specific parameters */ + + /** Application specific params */ + enum eh_ipsec_mode_types ipsec_mode; + /**< Mode of ipsec run */ }; /* Workers registered by the application */ @@ -193,6 +205,8 @@ struct eh_app_worker_params { /**< Specify status of rx type burst */ uint64_t tx_internal_port : 1; /**< Specify whether tx internal port is available */ + uint64_t ipsec_mode : 1; + /**< Specify ipsec processing level */ }; uint64_t u64; } cap; diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c index e1ee7c3..0f692d7 100644 --- a/examples/ipsec-secgw/ipsec-secgw.c +++ b/examples/ipsec-secgw/ipsec-secgw.c @@ -2,6 +2,7 @@ * Copyright(c) 2016 Intel Corporation */ +#include <stdbool.h> #include <stdio.h> #include <stdlib.h> #include <stdint.h> @@ -14,9 +15,11 @@ #include <sys/queue.h> #include <stdarg.h> #include <errno.h> +#include <signal.h> #include <getopt.h> #include <rte_common.h> +#include <rte_bitmap.h> #include <rte_byteorder.h> #include <rte_log.h> #include <rte_eal.h> @@ -41,13 +44,17 @@ #include <rte_jhash.h> #include <rte_cryptodev.h> #include <rte_security.h> +#include <rte_eventdev.h> #include <rte_ip.h> #include <rte_ip_frag.h> +#include "event_helper.h" #include "ipsec.h" #include "parser.h" #include "sad.h" +volatile bool force_quit; + #define RTE_LOGTYPE_IPSEC RTE_LOGTYPE_USER1 #define MAX_JUMBO_PKT_LEN 9600 @@ -134,12 +141,20 @@ struct flow_info 
flow_info_tbl[RTE_MAX_ETHPORTS]; #define CMD_LINE_OPT_CONFIG "config" #define CMD_LINE_OPT_SINGLE_SA "single-sa" #define CMD_LINE_OPT_CRYPTODEV_MASK "cryptodev_mask" +#define CMD_LINE_OPT_TRANSFER_MODE "transfer-mode" +#define CMD_LINE_OPT_SCHEDULE_TYPE "event-schedule-type" #define CMD_LINE_OPT_RX_OFFLOAD "rxoffload" #define CMD_LINE_OPT_TX_OFFLOAD "txoffload" #define CMD_LINE_OPT_REASSEMBLE "reassemble" #define CMD_LINE_OPT_MTU "mtu" #define CMD_LINE_OPT_FRAG_TTL "frag-ttl" +#define CMD_LINE_ARG_EVENT "event" +#define CMD_LINE_ARG_POLL "poll" +#define CMD_LINE_ARG_ORDERED "ordered" +#define CMD_LINE_ARG_ATOMIC "atomic" +#define CMD_LINE_ARG_PARALLEL "parallel" + enum { /* long options mapped to a short option */ @@ -150,6 +165,8 @@ enum { CMD_LINE_OPT_CONFIG_NUM, CMD_LINE_OPT_SINGLE_SA_NUM, CMD_LINE_OPT_CRYPTODEV_MASK_NUM, + CMD_LINE_OPT_TRANSFER_MODE_NUM, + CMD_LINE_OPT_SCHEDULE_TYPE_NUM, CMD_LINE_OPT_RX_OFFLOAD_NUM, CMD_LINE_OPT_TX_OFFLOAD_NUM, CMD_LINE_OPT_REASSEMBLE_NUM, @@ -161,6 +178,8 @@ static const struct option lgopts[] = { {CMD_LINE_OPT_CONFIG, 1, 0, CMD_LINE_OPT_CONFIG_NUM}, {CMD_LINE_OPT_SINGLE_SA, 1, 0, CMD_LINE_OPT_SINGLE_SA_NUM}, {CMD_LINE_OPT_CRYPTODEV_MASK, 1, 0, CMD_LINE_OPT_CRYPTODEV_MASK_NUM}, + {CMD_LINE_OPT_TRANSFER_MODE, 1, 0, CMD_LINE_OPT_TRANSFER_MODE_NUM}, + {CMD_LINE_OPT_SCHEDULE_TYPE, 1, 0, CMD_LINE_OPT_SCHEDULE_TYPE_NUM}, {CMD_LINE_OPT_RX_OFFLOAD, 1, 0, CMD_LINE_OPT_RX_OFFLOAD_NUM}, {CMD_LINE_OPT_TX_OFFLOAD, 1, 0, CMD_LINE_OPT_TX_OFFLOAD_NUM}, {CMD_LINE_OPT_REASSEMBLE, 1, 0, CMD_LINE_OPT_REASSEMBLE_NUM}, @@ -1199,13 +1218,19 @@ main_loop(__attribute__((unused)) void *dummy) } static int32_t -check_params(void) +check_poll_mode_params(struct eh_conf *eh_conf) { uint8_t lcore; uint16_t portid; uint16_t i; int32_t socket_id; + if (!eh_conf) + return -EINVAL; + + if (eh_conf->mode != EH_PKT_TRANSFER_MODE_POLL) + return 0; + if (lcore_params == NULL) { printf("Error: No port/queue/core mappings\n"); return -1; @@ -1292,6 +1317,8 @@ 
print_usage(const char *prgname) " --config (port,queue,lcore)[,(port,queue,lcore)]" " [--single-sa SAIDX]" " [--cryptodev_mask MASK]" + " [--transfer-mode MODE]" + " [--event-schedule-type TYPE]" " [--" CMD_LINE_OPT_RX_OFFLOAD " RX_OFFLOAD_MASK]" " [--" CMD_LINE_OPT_TX_OFFLOAD " TX_OFFLOAD_MASK]" " [--" CMD_LINE_OPT_REASSEMBLE " REASSEMBLE_TABLE_SIZE]" @@ -1310,11 +1337,24 @@ print_usage(const char *prgname) " -c specifies inbound SAD cache size,\n" " zero value disables the cache (default value: 128)\n" " -f CONFIG_FILE: Configuration file\n" - " --config (port,queue,lcore): Rx queue configuration\n" + " --config (port,queue,lcore): Rx queue configuration. In poll\n" + " mode determines which queues from\n" + " which ports are mapped to which cores.\n" + " In event mode this option is not used\n" + " as packets are dynamically scheduled\n" + " to cores by HW.\n" " --single-sa SAIDX: Use single SA index for outbound traffic,\n" " bypassing the SP\n" " --cryptodev_mask MASK: Hexadecimal bitmask of the crypto\n" " devices to configure\n" + " --transfer-mode MODE\n" + " \"poll\" : Packet transfer via polling (default)\n" + " \"event\" : Packet transfer via event device\n" + " --event-schedule-type TYPE queue schedule type, used only when\n" + " transfer mode is set to event\n" + " \"ordered\" : Ordered (default)\n" + " \"atomic\" : Atomic\n" + " \"parallel\" : Parallel\n" " --" CMD_LINE_OPT_RX_OFFLOAD ": bitmask of the RX HW offload capabilities to enable/use\n" " (DEV_RX_OFFLOAD_*)\n" @@ -1449,8 +1489,45 @@ print_app_sa_prm(const struct app_sa_prm *prm) printf("Frag TTL: %" PRIu64 " ns\n", frag_ttl_ns); } +static int +parse_transfer_mode(struct eh_conf *conf, const char *optarg) +{ + if (!strcmp(CMD_LINE_ARG_POLL, optarg)) + conf->mode = EH_PKT_TRANSFER_MODE_POLL; + else if (!strcmp(CMD_LINE_ARG_EVENT, optarg)) + conf->mode = EH_PKT_TRANSFER_MODE_EVENT; + else { + printf("Unsupported packet transfer mode\n"); + return -EINVAL; + } + + return 0; +} + +static int 
+parse_schedule_type(struct eh_conf *conf, const char *optarg) +{ + struct eventmode_conf *em_conf = NULL; + + /* Get eventmode conf */ + em_conf = conf->mode_params; + + if (!strcmp(CMD_LINE_ARG_ORDERED, optarg)) + em_conf->ext_params.sched_type = RTE_SCHED_TYPE_ORDERED; + else if (!strcmp(CMD_LINE_ARG_ATOMIC, optarg)) + em_conf->ext_params.sched_type = RTE_SCHED_TYPE_ATOMIC; + else if (!strcmp(CMD_LINE_ARG_PARALLEL, optarg)) + em_conf->ext_params.sched_type = RTE_SCHED_TYPE_PARALLEL; + else { + printf("Unsupported queue schedule type\n"); + return -EINVAL; + } + + return 0; +} + static int32_t -parse_args(int32_t argc, char **argv) +parse_args(int32_t argc, char **argv, struct eh_conf *eh_conf) { int opt; int64_t ret; @@ -1548,6 +1625,7 @@ parse_args(int32_t argc, char **argv) /* else */ single_sa = 1; single_sa_idx = ret; + eh_conf->ipsec_mode = EH_IPSEC_MODE_TYPE_DRIVER; printf("Configured with single SA index %u\n", single_sa_idx); break; @@ -1562,6 +1640,25 @@ parse_args(int32_t argc, char **argv) /* else */ enabled_cryptodev_mask = ret; break; + + case CMD_LINE_OPT_TRANSFER_MODE_NUM: + ret = parse_transfer_mode(eh_conf, optarg); + if (ret < 0) { + printf("Invalid packet transfer mode\n"); + print_usage(prgname); + return -1; + } + break; + + case CMD_LINE_OPT_SCHEDULE_TYPE_NUM: + ret = parse_schedule_type(eh_conf, optarg); + if (ret < 0) { + printf("Invalid queue schedule type\n"); + print_usage(prgname); + return -1; + } + break; + case CMD_LINE_OPT_RX_OFFLOAD_NUM: ret = parse_mask(optarg, &dev_rx_offload); if (ret != 0) { @@ -2476,16 +2573,141 @@ create_default_ipsec_flow(uint16_t port_id, uint64_t rx_offloads) port_id); } +static void +signal_handler(int signum) +{ + if (signum == SIGINT || signum == SIGTERM) { + printf("\n\nSignal %d received, preparing to exit...\n", + signum); + force_quit = true; + } +} + +static void +ev_mode_sess_verify(struct ipsec_sa *sa, int nb_sa) +{ + struct rte_ipsec_session *ips; + int32_t i; + + if (!sa || !nb_sa) + return; 
+ + for (i = 0; i < nb_sa; i++) { + ips = ipsec_get_primary_session(&sa[i]); + if (ips->type != RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL) + rte_exit(EXIT_FAILURE, "Event mode supports only " + "inline protocol sessions\n"); + } + +} + +static int32_t +check_event_mode_params(struct eh_conf *eh_conf) +{ + struct eventmode_conf *em_conf = NULL; + struct lcore_params *params; + uint16_t portid; + + if (!eh_conf || !eh_conf->mode_params) + return -EINVAL; + + /* Get eventmode conf */ + em_conf = eh_conf->mode_params; + + if (eh_conf->mode == EH_PKT_TRANSFER_MODE_POLL && + em_conf->ext_params.sched_type != SCHED_TYPE_NOT_SET) { + printf("error: option --event-schedule-type applies only to " + "event mode\n"); + return -EINVAL; + } + + if (eh_conf->mode != EH_PKT_TRANSFER_MODE_EVENT) + return 0; + + /* Set schedule type to ORDERED if it wasn't explicitly set by user */ + if (em_conf->ext_params.sched_type == SCHED_TYPE_NOT_SET) + em_conf->ext_params.sched_type = RTE_SCHED_TYPE_ORDERED; + + /* + * Event mode currently supports only inline protocol sessions. + * If there are other types of sessions configured then exit with + * error. 
+ */ + ev_mode_sess_verify(sa_in, nb_sa_in); + ev_mode_sess_verify(sa_out, nb_sa_out); + + + /* Option --config does not apply to event mode */ + if (nb_lcore_params > 0) { + printf("error: option --config applies only to poll mode\n"); + return -EINVAL; + } + + /* + * In order to use the same port_init routine for both poll and event + * modes initialize lcore_params with one queue for each eth port + */ + lcore_params = lcore_params_array; + RTE_ETH_FOREACH_DEV(portid) { + if ((enabled_port_mask & (1 << portid)) == 0) + continue; + + params = &lcore_params[nb_lcore_params++]; + params->port_id = portid; + params->queue_id = 0; + params->lcore_id = 0; + } + + return 0; +} + +static void +inline_sessions_free(struct sa_ctx *sa_ctx) +{ + struct rte_ipsec_session *ips; + struct ipsec_sa *sa; + int32_t ret; + uint32_t i; + + if (!sa_ctx) + return; + + for (i = 0; i < sa_ctx->nb_sa; i++) { + + sa = &sa_ctx->sa[i]; + if (!sa->spi) + continue; + + ips = ipsec_get_primary_session(sa); + if (ips->type != RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL && + ips->type != RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO) + continue; + + if (!rte_eth_dev_is_valid_port(sa->portid)) + continue; + + ret = rte_security_session_destroy( + rte_eth_dev_get_sec_ctx(sa->portid), + ips->security.ses); + if (ret) + RTE_LOG(ERR, IPSEC, "Failed to destroy security " + "session type %d, spi %d\n", + ips->type, sa->spi); + } +} + int32_t main(int32_t argc, char **argv) { int32_t ret; uint32_t lcore_id; + uint32_t cdev_id; uint32_t i; uint8_t socket_id; uint16_t portid; uint64_t req_rx_offloads[RTE_MAX_ETHPORTS]; uint64_t req_tx_offloads[RTE_MAX_ETHPORTS]; + struct eh_conf *eh_conf = NULL; size_t sess_sz; /* init EAL */ @@ -2495,8 +2717,17 @@ main(int32_t argc, char **argv) argc -= ret; argv += ret; + force_quit = false; + signal(SIGINT, signal_handler); + signal(SIGTERM, signal_handler); + + /* initialize event helper configuration */ + eh_conf = eh_conf_init(); + if (eh_conf == NULL) + 
rte_exit(EXIT_FAILURE, "Failed to init event helper config"); + /* parse application arguments (after the EAL ones) */ - ret = parse_args(argc, argv); + ret = parse_args(argc, argv, eh_conf); if (ret < 0) rte_exit(EXIT_FAILURE, "Invalid parameters\n"); @@ -2513,8 +2744,11 @@ main(int32_t argc, char **argv) rte_exit(EXIT_FAILURE, "Invalid unprotected portmask 0x%x\n", unprotected_port_mask); - if (check_params() < 0) - rte_exit(EXIT_FAILURE, "check_params failed\n"); + if (check_poll_mode_params(eh_conf) < 0) + rte_exit(EXIT_FAILURE, "check_poll_mode_params failed\n"); + + if (check_event_mode_params(eh_conf) < 0) + rte_exit(EXIT_FAILURE, "check_event_mode_params failed\n"); ret = init_lcore_rx_queues(); if (ret < 0) @@ -2555,6 +2789,18 @@ main(int32_t argc, char **argv) cryptodevs_init(); + /* + * Set the enabled port mask in helper config for use by helper + * sub-system. This will be used while initializing devices using + * helper sub-system. + */ + eh_conf->eth_portmask = enabled_port_mask; + + /* Initialize eventmode components */ + ret = eh_devs_init(eh_conf); + if (ret < 0) + rte_exit(EXIT_FAILURE, "eh_devs_init failed, err=%d\n", ret); + /* start ports */ RTE_ETH_FOREACH_DEV(portid) { if ((enabled_port_mask & (1 << portid)) == 0) @@ -2614,5 +2860,48 @@ main(int32_t argc, char **argv) return -1; } + /* Uninitialize eventmode components */ + ret = eh_devs_uninit(eh_conf); + if (ret < 0) + rte_exit(EXIT_FAILURE, "eh_devs_uninit failed, err=%d\n", ret); + + /* Free eventmode configuration memory */ + eh_conf_uninit(eh_conf); + + /* Destroy inline inbound and outbound sessions */ + for (i = 0; i < NB_SOCKETS && i < rte_socket_count(); i++) { + socket_id = rte_socket_id_by_idx(i); + inline_sessions_free(socket_ctx[socket_id].sa_in); + inline_sessions_free(socket_ctx[socket_id].sa_out); + } + + for (cdev_id = 0; cdev_id < rte_cryptodev_count(); cdev_id++) { + printf("Closing cryptodev %d...", cdev_id); + rte_cryptodev_stop(cdev_id); + rte_cryptodev_close(cdev_id); 
+ printf(" Done\n"); + } + + RTE_ETH_FOREACH_DEV(portid) { + if ((enabled_port_mask & (1 << portid)) == 0) + continue; + + printf("Closing port %d...", portid); + if (flow_info_tbl[portid].rx_def_flow) { + struct rte_flow_error err; + + ret = rte_flow_destroy(portid, + flow_info_tbl[portid].rx_def_flow, &err); + if (ret) + RTE_LOG(ERR, IPSEC, "Failed to destroy flow " + " for port %u, err msg: %s\n", portid, + err.message); + } + rte_eth_dev_stop(portid); + rte_eth_dev_close(portid); + printf(" Done\n"); + } + printf("Bye...\n"); + return 0; } diff --git a/examples/ipsec-secgw/ipsec.h b/examples/ipsec-secgw/ipsec.h index 8f5d382..ec3d60b 100644 --- a/examples/ipsec-secgw/ipsec.h +++ b/examples/ipsec-secgw/ipsec.h @@ -159,6 +159,24 @@ struct ipsec_sa { struct rte_security_session_conf sess_conf; } __rte_cache_aligned; +struct ipsec_xf { + struct rte_crypto_sym_xform a; + struct rte_crypto_sym_xform b; +}; + +struct ipsec_sad { + struct rte_ipsec_sad *sad_v4; + struct rte_ipsec_sad *sad_v6; +}; + +struct sa_ctx { + void *satbl; /* pointer to array of rte_ipsec_sa objects*/ + struct ipsec_sad sad; + struct ipsec_xf *xf; + uint32_t nb_sa; + struct ipsec_sa sa[]; +}; + struct ipsec_mbuf_metadata { struct ipsec_sa *sa; struct rte_crypto_op cop; @@ -253,6 +271,12 @@ struct ipsec_traffic { struct traffic_type ip6; }; +extern struct ipsec_sa *sa_out; +extern uint32_t nb_sa_out; + +extern struct ipsec_sa *sa_in; +extern uint32_t nb_sa_in; + uint16_t ipsec_inbound(struct ipsec_ctx *ctx, struct rte_mbuf *pkts[], uint16_t nb_pkts, uint16_t len); diff --git a/examples/ipsec-secgw/sa.c b/examples/ipsec-secgw/sa.c index 4822d6b..0eb52d1 100644 --- a/examples/ipsec-secgw/sa.c +++ b/examples/ipsec-secgw/sa.c @@ -135,14 +135,14 @@ const struct supported_aead_algo aead_algos[] = { #define SA_INIT_NB 128 -static struct ipsec_sa *sa_out; +struct ipsec_sa *sa_out; +uint32_t nb_sa_out; static uint32_t sa_out_sz; -static uint32_t nb_sa_out; static struct ipsec_sa_cnt sa_out_cnt; -static 
struct ipsec_sa *sa_in; +struct ipsec_sa *sa_in; +uint32_t nb_sa_in; static uint32_t sa_in_sz; -static uint32_t nb_sa_in; static struct ipsec_sa_cnt sa_in_cnt; static const struct supported_cipher_algo * @@ -826,19 +826,6 @@ print_one_sa_rule(const struct ipsec_sa *sa, int inbound) printf("\n"); } -struct ipsec_xf { - struct rte_crypto_sym_xform a; - struct rte_crypto_sym_xform b; -}; - -struct sa_ctx { - void *satbl; /* pointer to array of rte_ipsec_sa objects*/ - struct ipsec_sad sad; - struct ipsec_xf *xf; - uint32_t nb_sa; - struct ipsec_sa sa[]; -}; - static struct sa_ctx * sa_create(const char *name, int32_t socket_id, uint32_t nb_sa) { diff --git a/examples/ipsec-secgw/sad.h b/examples/ipsec-secgw/sad.h index 55712ba..473aaa9 100644 --- a/examples/ipsec-secgw/sad.h +++ b/examples/ipsec-secgw/sad.h @@ -18,11 +18,6 @@ struct ipsec_sad_cache { RTE_DECLARE_PER_LCORE(struct ipsec_sad_cache, sad_cache); -struct ipsec_sad { - struct rte_ipsec_sad *sad_v4; - struct rte_ipsec_sad *sad_v6; -}; - int ipsec_sad_create(const char *name, struct ipsec_sad *sad, int socket_id, struct ipsec_sa_cnt *sa_cnt); -- 2.7.4 ^ permalink raw reply [flat|nested] 147+ messages in thread
* [dpdk-dev] [PATCH v5 11/15] examples/ipsec-secgw: add driver mode worker 2020-02-27 16:18 ` [dpdk-dev] [PATCH v5 " Lukasz Bartosik ` (9 preceding siblings ...) 2020-02-27 16:18 ` [dpdk-dev] [PATCH v5 10/15] examples/ipsec-secgw: add eventmode to ipsec-secgw Lukasz Bartosik @ 2020-02-27 16:18 ` Lukasz Bartosik 2020-02-27 16:18 ` [dpdk-dev] [PATCH v5 12/15] examples/ipsec-secgw: add app " Lukasz Bartosik ` (5 subsequent siblings) 16 siblings, 0 replies; 147+ messages in thread From: Lukasz Bartosik @ 2020-02-27 16:18 UTC (permalink / raw) To: Akhil Goyal, Radu Nicolau, Thomas Monjalon Cc: Jerin Jacob, Narayana Prasad, Ankur Dwivedi, Anoob Joseph, Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru, Konstantin Ananyev, dev Add driver inbound and outbound worker threads for ipsec-secgw. In driver mode the application does as little as possible. It simply forwards packets back to the port from which the traffic was received, instructing the HW to apply inline security processing using the first outbound SA configured for a given port. If a port does not have an SA configured, outbound traffic on that port will be silently dropped. The aim of this mode is to measure HW capabilities. Driver mode is selected with the single-sa option. The single-sa option accepts an SA index; in event mode, however, the SA index is ignored. 
Example command to run ipsec-secgw in driver mode: ipsec-secgw -w 0002:02:00.0,ipsec_in_max_spi=128 -w 0002:03:00.0,ipsec_in_max_spi=128 -w 0002:0e:00.0 -w 0002:10:00.1 --log-level=8 -c 0x1 -- -P -p 0x3 -u 0x1 -f aes-gcm.cfg --transfer-mode event --event-schedule-type parallel --single-sa 0 Signed-off-by: Anoob Joseph <anoobj@marvell.com> Signed-off-by: Ankur Dwivedi <adwivedi@marvell.com> Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> --- examples/ipsec-secgw/Makefile | 1 + examples/ipsec-secgw/ipsec-secgw.c | 34 +++--- examples/ipsec-secgw/ipsec-secgw.h | 25 +++++ examples/ipsec-secgw/ipsec.h | 11 ++ examples/ipsec-secgw/ipsec_worker.c | 218 ++++++++++++++++++++++++++++++++++++ examples/ipsec-secgw/meson.build | 2 +- 6 files changed, 272 insertions(+), 19 deletions(-) create mode 100644 examples/ipsec-secgw/ipsec-secgw.h create mode 100644 examples/ipsec-secgw/ipsec_worker.c diff --git a/examples/ipsec-secgw/Makefile b/examples/ipsec-secgw/Makefile index 66d05d4..c4a272a 100644 --- a/examples/ipsec-secgw/Makefile +++ b/examples/ipsec-secgw/Makefile @@ -16,6 +16,7 @@ SRCS-y += sad.c SRCS-y += rt.c SRCS-y += ipsec_process.c SRCS-y += ipsec-secgw.c +SRCS-y += ipsec_worker.c SRCS-y += event_helper.c CFLAGS += -gdwarf-2 diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c index 0f692d7..9ad4be5 100644 --- a/examples/ipsec-secgw/ipsec-secgw.c +++ b/examples/ipsec-secgw/ipsec-secgw.c @@ -71,8 +71,6 @@ volatile bool force_quit; #define BURST_TX_DRAIN_US 100 /* TX drain every ~100us */ -#define NB_SOCKETS 4 - /* Configure how many packets ahead to prefetch, when reading packets */ #define PREFETCH_OFFSET 3 @@ -80,8 +78,6 @@ volatile bool force_quit; #define MAX_LCORE_PARAMS 1024 -#define UNPROTECTED_PORT(port) (unprotected_port_mask & (1 << portid)) - /* * Configurable number of RX/TX ring descriptors */ @@ -188,15 +184,15 @@ static const struct option lgopts[] = { {NULL, 0, 0, 0} }; +uint32_t unprotected_port_mask; +uint32_t 
single_sa_idx; /* mask of enabled ports */ static uint32_t enabled_port_mask; static uint64_t enabled_cryptodev_mask = UINT64_MAX; -static uint32_t unprotected_port_mask; static int32_t promiscuous_on = 1; static int32_t numa_on = 1; /**< NUMA is enabled by default. */ static uint32_t nb_lcores; static uint32_t single_sa; -static uint32_t single_sa_idx; /* * RX/TX HW offload capabilities to enable/use on ethernet ports. @@ -282,7 +278,7 @@ static struct rte_eth_conf port_conf = { }, }; -static struct socket_ctx socket_ctx[NB_SOCKETS]; +struct socket_ctx socket_ctx[NB_SOCKETS]; /* * Determine is multi-segment support required: @@ -1003,12 +999,12 @@ process_pkts(struct lcore_conf *qconf, struct rte_mbuf **pkts, prepare_traffic(pkts, &traffic, nb_pkts); if (unlikely(single_sa)) { - if (UNPROTECTED_PORT(portid)) + if (is_unprotected_port(portid)) process_pkts_inbound_nosp(&qconf->inbound, &traffic); else process_pkts_outbound_nosp(&qconf->outbound, &traffic); } else { - if (UNPROTECTED_PORT(portid)) + if (is_unprotected_port(portid)) process_pkts_inbound(&qconf->inbound, &traffic); else process_pkts_outbound(&qconf->outbound, &traffic); @@ -1119,8 +1115,8 @@ drain_outbound_crypto_queues(const struct lcore_conf *qconf, } /* main processing loop */ -static int32_t -main_loop(__attribute__((unused)) void *dummy) +void +ipsec_poll_mode_worker(void) { struct rte_mbuf *pkts[MAX_PKT_BURST]; uint32_t lcore_id; @@ -1164,13 +1160,13 @@ main_loop(__attribute__((unused)) void *dummy) RTE_LOG(ERR, IPSEC, "SAD cache init on lcore %u, failed with code: %d\n", lcore_id, rc); - return rc; + return; } if (qconf->nb_rx_queue == 0) { RTE_LOG(DEBUG, IPSEC, "lcore %u has nothing to do\n", lcore_id); - return 0; + return; } RTE_LOG(INFO, IPSEC, "entering main loop on lcore %u\n", lcore_id); @@ -1183,7 +1179,7 @@ main_loop(__attribute__((unused)) void *dummy) lcore_id, portid, queueid); } - while (1) { + while (!force_quit) { cur_tsc = rte_rdtsc(); /* TX queue buffer drain */ @@ -1207,7 
+1203,7 @@ main_loop(__attribute__((unused)) void *dummy) process_pkts(qconf, pkts, nb_rx, portid); /* dequeue and process completed crypto-ops */ - if (UNPROTECTED_PORT(portid)) + if (is_unprotected_port(portid)) drain_inbound_crypto_queues(qconf, &qconf->inbound); else @@ -1343,8 +1339,10 @@ print_usage(const char *prgname) " In event mode this option is not used\n" " as packets are dynamically scheduled\n" " to cores by HW.\n" - " --single-sa SAIDX: Use single SA index for outbound traffic,\n" - " bypassing the SP\n" + " --single-sa SAIDX: In poll mode use single SA index for\n" + " outbound traffic, bypassing the SP\n" + " In event mode selects driver submode,\n" + " SA index value is ignored\n" " --cryptodev_mask MASK: Hexadecimal bitmask of the crypto\n" " devices to configure\n" " --transfer-mode MODE\n" @@ -2854,7 +2852,7 @@ main(int32_t argc, char **argv) check_all_ports_link_status(enabled_port_mask); /* launch per-lcore init on every lcore */ - rte_eal_mp_remote_launch(main_loop, NULL, CALL_MASTER); + rte_eal_mp_remote_launch(ipsec_launch_one_lcore, eh_conf, CALL_MASTER); RTE_LCORE_FOREACH_SLAVE(lcore_id) { if (rte_eal_wait_lcore(lcore_id) < 0) return -1; diff --git a/examples/ipsec-secgw/ipsec-secgw.h b/examples/ipsec-secgw/ipsec-secgw.h new file mode 100644 index 0000000..a07a920 --- /dev/null +++ b/examples/ipsec-secgw/ipsec-secgw.h @@ -0,0 +1,25 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C) 2020 Marvell International Ltd. 
+ */ +#ifndef _IPSEC_SECGW_H_ +#define _IPSEC_SECGW_H_ + +#include <stdbool.h> + +#define NB_SOCKETS 4 + +/* Port mask to identify the unprotected ports */ +extern uint32_t unprotected_port_mask; + +/* Index of SA in single mode */ +extern uint32_t single_sa_idx; + +extern volatile bool force_quit; + +static inline uint8_t +is_unprotected_port(uint16_t port_id) +{ + return unprotected_port_mask & (1 << port_id); +} + +#endif /* _IPSEC_SECGW_H_ */ diff --git a/examples/ipsec-secgw/ipsec.h b/examples/ipsec-secgw/ipsec.h index ec3d60b..ad913bf 100644 --- a/examples/ipsec-secgw/ipsec.h +++ b/examples/ipsec-secgw/ipsec.h @@ -13,6 +13,8 @@ #include <rte_flow.h> #include <rte_ipsec.h> +#include "ipsec-secgw.h" + #define RTE_LOGTYPE_IPSEC RTE_LOGTYPE_USER1 #define RTE_LOGTYPE_IPSEC_ESP RTE_LOGTYPE_USER2 #define RTE_LOGTYPE_IPSEC_IPIP RTE_LOGTYPE_USER3 @@ -271,6 +273,15 @@ struct ipsec_traffic { struct traffic_type ip6; }; +/* Socket ctx */ +extern struct socket_ctx socket_ctx[NB_SOCKETS]; + +void +ipsec_poll_mode_worker(void); + +int +ipsec_launch_one_lcore(void *args); + extern struct ipsec_sa *sa_out; extern uint32_t nb_sa_out; diff --git a/examples/ipsec-secgw/ipsec_worker.c b/examples/ipsec-secgw/ipsec_worker.c new file mode 100644 index 0000000..b7a1ef9 --- /dev/null +++ b/examples/ipsec-secgw/ipsec_worker.c @@ -0,0 +1,218 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2010-2016 Intel Corporation + * Copyright (C) 2020 Marvell International Ltd. 
+ */ +#include <rte_event_eth_tx_adapter.h> + +#include "event_helper.h" +#include "ipsec.h" +#include "ipsec-secgw.h" + +static inline void +ipsec_event_pre_forward(struct rte_mbuf *m, unsigned int port_id) +{ + /* Save the destination port in the mbuf */ + m->port = port_id; + + /* Save eth queue for Tx */ + rte_event_eth_tx_adapter_txq_set(m, 0); +} + +static inline void +prepare_out_sessions_tbl(struct sa_ctx *sa_out, + struct rte_security_session **sess_tbl, uint16_t size) +{ + struct rte_ipsec_session *pri_sess; + struct ipsec_sa *sa; + uint32_t i; + + if (!sa_out) + return; + + for (i = 0; i < sa_out->nb_sa; i++) { + + sa = &sa_out->sa[i]; + if (!sa) + continue; + + pri_sess = ipsec_get_primary_session(sa); + if (!pri_sess) + continue; + + if (pri_sess->type != + RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL) { + + RTE_LOG(ERR, IPSEC, "Invalid session type %d\n", + pri_sess->type); + continue; + } + + if (sa->portid >= size) { + RTE_LOG(ERR, IPSEC, + "Port id >= than table size %d, %d\n", + sa->portid, size); + continue; + } + + /* Use only first inline session found for a given port */ + if (sess_tbl[sa->portid]) + continue; + sess_tbl[sa->portid] = pri_sess->security.ses; + } +} + +/* + * Event mode exposes various operating modes depending on the + * capabilities of the event device and the operating mode + * selected. 
+ */ + +/* Workers registered */ +#define IPSEC_EVENTMODE_WORKERS 1 + +/* + * Event mode worker + * Operating parameters : non-burst - Tx internal port - driver mode + */ +static void +ipsec_wrkr_non_burst_int_port_drv_mode(struct eh_event_link_info *links, + uint8_t nb_links) +{ + struct rte_security_session *sess_tbl[RTE_MAX_ETHPORTS] = { NULL }; + unsigned int nb_rx = 0; + struct rte_mbuf *pkt; + struct rte_event ev; + uint32_t lcore_id; + int32_t socket_id; + int16_t port_id; + + /* Check if we have links registered for this lcore */ + if (nb_links == 0) { + /* No links registered - exit */ + return; + } + + /* Get core ID */ + lcore_id = rte_lcore_id(); + + /* Get socket ID */ + socket_id = rte_lcore_to_socket_id(lcore_id); + + /* + * Prepare security sessions table. In outbound driver mode + * we always use first session configured for a given port + */ + prepare_out_sessions_tbl(socket_ctx[socket_id].sa_out, sess_tbl, + RTE_MAX_ETHPORTS); + + RTE_LOG(INFO, IPSEC, + "Launching event mode worker (non-burst - Tx internal port - " + "driver mode) on lcore %d\n", lcore_id); + + /* We have valid links */ + + /* Check if it's single link */ + if (nb_links != 1) { + RTE_LOG(INFO, IPSEC, + "Multiple links not supported. 
Using first link\n"); + } + + RTE_LOG(INFO, IPSEC, " -- lcoreid=%u event_port_id=%u\n", lcore_id, + links[0].event_port_id); + while (!force_quit) { + /* Read packet from event queues */ + nb_rx = rte_event_dequeue_burst(links[0].eventdev_id, + links[0].event_port_id, + &ev, /* events */ + 1, /* nb_events */ + 0 /* timeout_ticks */); + + if (nb_rx == 0) + continue; + + pkt = ev.mbuf; + port_id = pkt->port; + + rte_prefetch0(rte_pktmbuf_mtod(pkt, void *)); + + /* Process packet */ + ipsec_event_pre_forward(pkt, port_id); + + if (!is_unprotected_port(port_id)) { + + if (unlikely(!sess_tbl[port_id])) { + rte_pktmbuf_free(pkt); + continue; + } + + /* Save security session */ + pkt->udata64 = (uint64_t) sess_tbl[port_id]; + + /* Mark the packet for Tx security offload */ + pkt->ol_flags |= PKT_TX_SEC_OFFLOAD; + } + + /* + * Since tx internal port is available, events can be + * directly enqueued to the adapter and it would be + * internally submitted to the eth device. + */ + rte_event_eth_tx_adapter_enqueue(links[0].eventdev_id, + links[0].event_port_id, + &ev, /* events */ + 1, /* nb_events */ + 0 /* flags */); + } +} + +static uint8_t +ipsec_eventmode_populate_wrkr_params(struct eh_app_worker_params *wrkrs) +{ + struct eh_app_worker_params *wrkr; + uint8_t nb_wrkr_param = 0; + + /* Save workers */ + wrkr = wrkrs; + + /* Non-burst - Tx internal port - driver mode */ + wrkr->cap.burst = EH_RX_TYPE_NON_BURST; + wrkr->cap.tx_internal_port = EH_TX_TYPE_INTERNAL_PORT; + wrkr->cap.ipsec_mode = EH_IPSEC_MODE_TYPE_DRIVER; + wrkr->worker_thread = ipsec_wrkr_non_burst_int_port_drv_mode; + wrkr++; + + return nb_wrkr_param; +} + +static void +ipsec_eventmode_worker(struct eh_conf *conf) +{ + struct eh_app_worker_params ipsec_wrkr[IPSEC_EVENTMODE_WORKERS] = { + {{{0} }, NULL } }; + uint8_t nb_wrkr_param; + + /* Populate l2fwd_wrkr params */ + nb_wrkr_param = ipsec_eventmode_populate_wrkr_params(ipsec_wrkr); + + /* + * Launch correct worker after checking + * the event device's 
capabilities. + */ + eh_launch_worker(conf, ipsec_wrkr, nb_wrkr_param); +} + +int ipsec_launch_one_lcore(void *args) +{ + struct eh_conf *conf; + + conf = (struct eh_conf *)args; + + if (conf->mode == EH_PKT_TRANSFER_MODE_POLL) { + /* Run in poll mode */ + ipsec_poll_mode_worker(); + } else if (conf->mode == EH_PKT_TRANSFER_MODE_EVENT) { + /* Run in event mode */ + ipsec_eventmode_worker(conf); + } + return 0; +} diff --git a/examples/ipsec-secgw/meson.build b/examples/ipsec-secgw/meson.build index 2415d47..f9ba2a2 100644 --- a/examples/ipsec-secgw/meson.build +++ b/examples/ipsec-secgw/meson.build @@ -10,5 +10,5 @@ deps += ['security', 'lpm', 'acl', 'hash', 'ip_frag', 'ipsec', 'eventdev'] allow_experimental_apis = true sources = files( 'esp.c', 'event_helper.c', 'ipsec.c', 'ipsec_process.c', 'ipsec-secgw.c', - 'parser.c', 'rt.c', 'sa.c', 'sad.c', 'sp4.c', 'sp6.c' + 'ipsec_worker.c', 'parser.c', 'rt.c', 'sa.c', 'sad.c', 'sp4.c', 'sp6.c' ) -- 2.7.4 ^ permalink raw reply [flat|nested] 147+ messages in thread
* [dpdk-dev] [PATCH v5 12/15] examples/ipsec-secgw: add app mode worker 2020-02-27 16:18 ` [dpdk-dev] [PATCH v5 " Lukasz Bartosik ` (10 preceding siblings ...) 2020-02-27 16:18 ` [dpdk-dev] [PATCH v5 11/15] examples/ipsec-secgw: add driver mode worker Lukasz Bartosik @ 2020-02-27 16:18 ` Lukasz Bartosik 2020-02-27 16:18 ` [dpdk-dev] [PATCH v5 13/15] examples/ipsec-secgw: make number of buffers dynamic Lukasz Bartosik ` (4 subsequent siblings) 16 siblings, 0 replies; 147+ messages in thread From: Lukasz Bartosik @ 2020-02-27 16:18 UTC (permalink / raw) To: Akhil Goyal, Radu Nicolau, Thomas Monjalon Cc: Jerin Jacob, Narayana Prasad, Ankur Dwivedi, Anoob Joseph, Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru, Konstantin Ananyev, dev Add application inbound/outbound worker thread and IPsec application processing code for event mode. Example ipsec-secgw command in app mode: ipsec-secgw -w 0002:02:00.0,ipsec_in_max_spi=128 -w 0002:03:00.0,ipsec_in_max_spi=128 -w 0002:0e:00.0 -w 0002:10:00.1 --log-level=8 -c 0x1 -- -P -p 0x3 -u 0x1 -f aes-gcm.cfg --transfer-mode event --event-schedule-type parallel Signed-off-by: Anoob Joseph <anoobj@marvell.com> Signed-off-by: Ankur Dwivedi <adwivedi@marvell.com> Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> --- examples/ipsec-secgw/ipsec-secgw.c | 31 +-- examples/ipsec-secgw/ipsec-secgw.h | 63 ++++++ examples/ipsec-secgw/ipsec.h | 16 -- examples/ipsec-secgw/ipsec_worker.c | 435 +++++++++++++++++++++++++++++++++++- examples/ipsec-secgw/ipsec_worker.h | 41 ++++ 5 files changed, 538 insertions(+), 48 deletions(-) create mode 100644 examples/ipsec-secgw/ipsec_worker.h diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c index 9ad4be5..a03958d 100644 --- a/examples/ipsec-secgw/ipsec-secgw.c +++ b/examples/ipsec-secgw/ipsec-secgw.c @@ -50,13 +50,12 @@ #include "event_helper.h" #include "ipsec.h" +#include "ipsec_worker.h" #include "parser.h" #include "sad.h" volatile bool force_quit; -#define 
RTE_LOGTYPE_IPSEC RTE_LOGTYPE_USER1 - #define MAX_JUMBO_PKT_LEN 9600 #define MEMPOOL_CACHE_SIZE 256 @@ -86,29 +85,6 @@ volatile bool force_quit; static uint16_t nb_rxd = IPSEC_SECGW_RX_DESC_DEFAULT; static uint16_t nb_txd = IPSEC_SECGW_TX_DESC_DEFAULT; -#if RTE_BYTE_ORDER != RTE_LITTLE_ENDIAN -#define __BYTES_TO_UINT64(a, b, c, d, e, f, g, h) \ - (((uint64_t)((a) & 0xff) << 56) | \ - ((uint64_t)((b) & 0xff) << 48) | \ - ((uint64_t)((c) & 0xff) << 40) | \ - ((uint64_t)((d) & 0xff) << 32) | \ - ((uint64_t)((e) & 0xff) << 24) | \ - ((uint64_t)((f) & 0xff) << 16) | \ - ((uint64_t)((g) & 0xff) << 8) | \ - ((uint64_t)(h) & 0xff)) -#else -#define __BYTES_TO_UINT64(a, b, c, d, e, f, g, h) \ - (((uint64_t)((h) & 0xff) << 56) | \ - ((uint64_t)((g) & 0xff) << 48) | \ - ((uint64_t)((f) & 0xff) << 40) | \ - ((uint64_t)((e) & 0xff) << 32) | \ - ((uint64_t)((d) & 0xff) << 24) | \ - ((uint64_t)((c) & 0xff) << 16) | \ - ((uint64_t)((b) & 0xff) << 8) | \ - ((uint64_t)(a) & 0xff)) -#endif -#define ETHADDR(a, b, c, d, e, f) (__BYTES_TO_UINT64(a, b, c, d, e, f, 0, 0)) - #define ETHADDR_TO_UINT64(addr) __BYTES_TO_UINT64( \ (addr)->addr_bytes[0], (addr)->addr_bytes[1], \ (addr)->addr_bytes[2], (addr)->addr_bytes[3], \ @@ -120,11 +96,6 @@ static uint16_t nb_txd = IPSEC_SECGW_TX_DESC_DEFAULT; #define MTU_TO_FRAMELEN(x) ((x) + RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN) -/* port/source ethernet addr and destination ethernet addr */ -struct ethaddr_info { - uint64_t src, dst; -}; - struct ethaddr_info ethaddr_tbl[RTE_MAX_ETHPORTS] = { { 0, ETHADDR(0x00, 0x16, 0x3e, 0x7e, 0x94, 0x9a) }, { 0, ETHADDR(0x00, 0x16, 0x3e, 0x22, 0xa1, 0xd9) }, diff --git a/examples/ipsec-secgw/ipsec-secgw.h b/examples/ipsec-secgw/ipsec-secgw.h index a07a920..4b53cb5 100644 --- a/examples/ipsec-secgw/ipsec-secgw.h +++ b/examples/ipsec-secgw/ipsec-secgw.h @@ -8,6 +8,69 @@ #define NB_SOCKETS 4 +#define MAX_PKT_BURST 32 + +#define RTE_LOGTYPE_IPSEC RTE_LOGTYPE_USER1 + +#if RTE_BYTE_ORDER != RTE_LITTLE_ENDIAN +#define 
__BYTES_TO_UINT64(a, b, c, d, e, f, g, h) \ + (((uint64_t)((a) & 0xff) << 56) | \ + ((uint64_t)((b) & 0xff) << 48) | \ + ((uint64_t)((c) & 0xff) << 40) | \ + ((uint64_t)((d) & 0xff) << 32) | \ + ((uint64_t)((e) & 0xff) << 24) | \ + ((uint64_t)((f) & 0xff) << 16) | \ + ((uint64_t)((g) & 0xff) << 8) | \ + ((uint64_t)(h) & 0xff)) +#else +#define __BYTES_TO_UINT64(a, b, c, d, e, f, g, h) \ + (((uint64_t)((h) & 0xff) << 56) | \ + ((uint64_t)((g) & 0xff) << 48) | \ + ((uint64_t)((f) & 0xff) << 40) | \ + ((uint64_t)((e) & 0xff) << 32) | \ + ((uint64_t)((d) & 0xff) << 24) | \ + ((uint64_t)((c) & 0xff) << 16) | \ + ((uint64_t)((b) & 0xff) << 8) | \ + ((uint64_t)(a) & 0xff)) +#endif + +#define ETHADDR(a, b, c, d, e, f) (__BYTES_TO_UINT64(a, b, c, d, e, f, 0, 0)) + +struct traffic_type { + const uint8_t *data[MAX_PKT_BURST * 2]; + struct rte_mbuf *pkts[MAX_PKT_BURST * 2]; + void *saptr[MAX_PKT_BURST * 2]; + uint32_t res[MAX_PKT_BURST * 2]; + uint32_t num; +}; + +struct ipsec_traffic { + struct traffic_type ipsec; + struct traffic_type ip4; + struct traffic_type ip6; +}; + +/* Fields optimized for devices without burst */ +struct traffic_type_nb { + const uint8_t *data; + struct rte_mbuf *pkt; + uint32_t res; + uint32_t num; +}; + +struct ipsec_traffic_nb { + struct traffic_type_nb ipsec; + struct traffic_type_nb ip4; + struct traffic_type_nb ip6; +}; + +/* port/source ethernet addr and destination ethernet addr */ +struct ethaddr_info { + uint64_t src, dst; +}; + +extern struct ethaddr_info ethaddr_tbl[RTE_MAX_ETHPORTS]; + /* Port mask to identify the unprotected ports */ extern uint32_t unprotected_port_mask; diff --git a/examples/ipsec-secgw/ipsec.h b/examples/ipsec-secgw/ipsec.h index ad913bf..f8f29f9 100644 --- a/examples/ipsec-secgw/ipsec.h +++ b/examples/ipsec-secgw/ipsec.h @@ -15,11 +15,9 @@ #include "ipsec-secgw.h" -#define RTE_LOGTYPE_IPSEC RTE_LOGTYPE_USER1 #define RTE_LOGTYPE_IPSEC_ESP RTE_LOGTYPE_USER2 #define RTE_LOGTYPE_IPSEC_IPIP RTE_LOGTYPE_USER3 -#define 
MAX_PKT_BURST 32 #define MAX_INFLIGHT 128 #define MAX_QP_PER_LCORE 256 @@ -259,20 +257,6 @@ struct cnt_blk { uint32_t cnt; } __attribute__((packed)); -struct traffic_type { - const uint8_t *data[MAX_PKT_BURST * 2]; - struct rte_mbuf *pkts[MAX_PKT_BURST * 2]; - void *saptr[MAX_PKT_BURST * 2]; - uint32_t res[MAX_PKT_BURST * 2]; - uint32_t num; -}; - -struct ipsec_traffic { - struct traffic_type ipsec; - struct traffic_type ip4; - struct traffic_type ip6; -}; - /* Socket ctx */ extern struct socket_ctx socket_ctx[NB_SOCKETS]; diff --git a/examples/ipsec-secgw/ipsec_worker.c b/examples/ipsec-secgw/ipsec_worker.c index b7a1ef9..5fde667 100644 --- a/examples/ipsec-secgw/ipsec_worker.c +++ b/examples/ipsec-secgw/ipsec_worker.c @@ -2,11 +2,51 @@ * Copyright(c) 2010-2016 Intel Corporation * Copyright (C) 2020 Marvell International Ltd. */ +#include <rte_acl.h> #include <rte_event_eth_tx_adapter.h> +#include <rte_lpm.h> +#include <rte_lpm6.h> #include "event_helper.h" #include "ipsec.h" #include "ipsec-secgw.h" +#include "ipsec_worker.h" + +static inline enum pkt_type +process_ipsec_get_pkt_type(struct rte_mbuf *pkt, uint8_t **nlp) +{ + struct rte_ether_hdr *eth; + + eth = rte_pktmbuf_mtod(pkt, struct rte_ether_hdr *); + if (eth->ether_type == rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4)) { + *nlp = RTE_PTR_ADD(eth, RTE_ETHER_HDR_LEN + + offsetof(struct ip, ip_p)); + if (**nlp == IPPROTO_ESP) + return PKT_TYPE_IPSEC_IPV4; + else + return PKT_TYPE_PLAIN_IPV4; + } else if (eth->ether_type == rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6)) { + *nlp = RTE_PTR_ADD(eth, RTE_ETHER_HDR_LEN + + offsetof(struct ip6_hdr, ip6_nxt)); + if (**nlp == IPPROTO_ESP) + return PKT_TYPE_IPSEC_IPV6; + else + return PKT_TYPE_PLAIN_IPV6; + } + + /* Unknown/Unsupported type */ + return PKT_TYPE_INVALID; +} + +static inline void +update_mac_addrs(struct rte_mbuf *pkt, uint16_t portid) +{ + struct rte_ether_hdr *ethhdr; + + ethhdr = rte_pktmbuf_mtod(pkt, struct rte_ether_hdr *); + memcpy(&ethhdr->s_addr, 
&ethaddr_tbl[portid].src, RTE_ETHER_ADDR_LEN); + memcpy(&ethhdr->d_addr, &ethaddr_tbl[portid].dst, RTE_ETHER_ADDR_LEN); +} static inline void ipsec_event_pre_forward(struct rte_mbuf *m, unsigned int port_id) @@ -61,6 +101,290 @@ prepare_out_sessions_tbl(struct sa_ctx *sa_out, } } +static inline int +check_sp(struct sp_ctx *sp, const uint8_t *nlp, uint32_t *sa_idx) +{ + uint32_t res; + + if (unlikely(sp == NULL)) + return 0; + + rte_acl_classify((struct rte_acl_ctx *)sp, &nlp, &res, 1, + DEFAULT_MAX_CATEGORIES); + + if (unlikely(res == 0)) { + /* No match */ + return 0; + } + + if (res == DISCARD) + return 0; + else if (res == BYPASS) { + *sa_idx = -1; + return 1; + } + + *sa_idx = res - 1; + return 1; +} + +static inline uint16_t +route4_pkt(struct rte_mbuf *pkt, struct rt_ctx *rt_ctx) +{ + uint32_t dst_ip; + uint16_t offset; + uint32_t hop; + int ret; + + offset = RTE_ETHER_HDR_LEN + offsetof(struct ip, ip_dst); + dst_ip = *rte_pktmbuf_mtod_offset(pkt, uint32_t *, offset); + dst_ip = rte_be_to_cpu_32(dst_ip); + + ret = rte_lpm_lookup((struct rte_lpm *)rt_ctx, dst_ip, &hop); + + if (ret == 0) { + /* We have a hit */ + return hop; + } + + /* else */ + return RTE_MAX_ETHPORTS; + } + +/* TODO: To be tested */ +static inline uint16_t +route6_pkt(struct rte_mbuf *pkt, struct rt_ctx *rt_ctx) +{ + uint8_t dst_ip[16]; + uint8_t *ip6_dst; + uint16_t offset; + uint32_t hop; + int ret; + + offset = RTE_ETHER_HDR_LEN + offsetof(struct ip6_hdr, ip6_dst); + ip6_dst = rte_pktmbuf_mtod_offset(pkt, uint8_t *, offset); + memcpy(&dst_ip[0], ip6_dst, 16); + + ret = rte_lpm6_lookup((struct rte_lpm6 *)rt_ctx, dst_ip, &hop); + + if (ret == 0) { + /* We have a hit */ + return hop; + } + + /* else */ + return RTE_MAX_ETHPORTS; +} + +static inline uint16_t +get_route(struct rte_mbuf *pkt, struct route_table *rt, enum pkt_type type) +{ + if (type == PKT_TYPE_PLAIN_IPV4 || type == PKT_TYPE_IPSEC_IPV4) + return route4_pkt(pkt, rt->rt4_ctx); + else if (type == 
PKT_TYPE_IPSEC_IPV6) + return route6_pkt(pkt, rt->rt6_ctx); + + return RTE_MAX_ETHPORTS; +} + +static inline int +process_ipsec_ev_inbound(struct ipsec_ctx *ctx, struct route_table *rt, + struct rte_event *ev) +{ + struct ipsec_sa *sa = NULL; + struct rte_mbuf *pkt; + uint16_t port_id = 0; + enum pkt_type type; + uint32_t sa_idx; + uint8_t *nlp; + + /* Get pkt from event */ + pkt = ev->mbuf; + + /* Check the packet type */ + type = process_ipsec_get_pkt_type(pkt, &nlp); + + switch (type) { + case PKT_TYPE_PLAIN_IPV4: + if (pkt->ol_flags & PKT_RX_SEC_OFFLOAD) { + if (unlikely(pkt->ol_flags & + PKT_RX_SEC_OFFLOAD_FAILED)) { + RTE_LOG(ERR, IPSEC, + "Inbound security offload failed\n"); + goto drop_pkt_and_exit; + } + sa = pkt->userdata; + } + + /* Check if we have a match */ + if (check_sp(ctx->sp4_ctx, nlp, &sa_idx) == 0) { + /* No valid match */ + goto drop_pkt_and_exit; + } + break; + + case PKT_TYPE_PLAIN_IPV6: + if (pkt->ol_flags & PKT_RX_SEC_OFFLOAD) { + if (unlikely(pkt->ol_flags & + PKT_RX_SEC_OFFLOAD_FAILED)) { + RTE_LOG(ERR, IPSEC, + "Inbound security offload failed\n"); + goto drop_pkt_and_exit; + } + sa = pkt->userdata; + } + + /* Check if we have a match */ + if (check_sp(ctx->sp6_ctx, nlp, &sa_idx) == 0) { + /* No valid match */ + goto drop_pkt_and_exit; + } + break; + + default: + RTE_LOG(ERR, IPSEC, "Unsupported packet type = %d\n", type); + goto drop_pkt_and_exit; + } + + /* Check if the packet has to be bypassed */ + if (sa_idx == BYPASS) + goto route_and_send_pkt; + + /* Validate sa_idx */ + if (sa_idx >= ctx->sa_ctx->nb_sa) + goto drop_pkt_and_exit; + + /* Else the packet has to be protected with SA */ + + /* If the packet was IPsec processed, then SA pointer should be set */ + if (sa == NULL) + goto drop_pkt_and_exit; + + /* SPI on the packet should match with the one in SA */ + if (unlikely(sa->spi != ctx->sa_ctx->sa[sa_idx].spi)) + goto drop_pkt_and_exit; + +route_and_send_pkt: + port_id = get_route(pkt, rt, type); + if (unlikely(port_id == 
RTE_MAX_ETHPORTS)) { + /* no match */ + goto drop_pkt_and_exit; + } + /* else, we have a matching route */ + + /* Update mac addresses */ + update_mac_addrs(pkt, port_id); + + /* Update the event with the dest port */ + ipsec_event_pre_forward(pkt, port_id); + return PKT_FORWARDED; + +drop_pkt_and_exit: + RTE_LOG(ERR, IPSEC, "Inbound packet dropped\n"); + rte_pktmbuf_free(pkt); + ev->mbuf = NULL; + return PKT_DROPPED; +} + +static inline int +process_ipsec_ev_outbound(struct ipsec_ctx *ctx, struct route_table *rt, + struct rte_event *ev) +{ + struct rte_ipsec_session *sess; + struct sa_ctx *sa_ctx; + struct rte_mbuf *pkt; + uint16_t port_id = 0; + struct ipsec_sa *sa; + enum pkt_type type; + uint32_t sa_idx; + uint8_t *nlp; + + /* Get pkt from event */ + pkt = ev->mbuf; + + /* Check the packet type */ + type = process_ipsec_get_pkt_type(pkt, &nlp); + + switch (type) { + case PKT_TYPE_PLAIN_IPV4: + /* Check if we have a match */ + if (check_sp(ctx->sp4_ctx, nlp, &sa_idx) == 0) { + /* No valid match */ + goto drop_pkt_and_exit; + } + break; + case PKT_TYPE_PLAIN_IPV6: + /* Check if we have a match */ + if (check_sp(ctx->sp6_ctx, nlp, &sa_idx) == 0) { + /* No valid match */ + goto drop_pkt_and_exit; + } + break; + default: + /* + * Only plain IPv4 & IPv6 packets are allowed + * on protected port. Drop the rest. 
+ */ + RTE_LOG(ERR, IPSEC, "Unsupported packet type = %d\n", type); + goto drop_pkt_and_exit; + } + + /* Check if the packet has to be bypassed */ + if (sa_idx == BYPASS) { + port_id = get_route(pkt, rt, type); + if (unlikely(port_id == RTE_MAX_ETHPORTS)) { + /* no match */ + goto drop_pkt_and_exit; + } + /* else, we have a matching route */ + goto send_pkt; + } + + /* Validate sa_idx */ + if (sa_idx >= ctx->sa_ctx->nb_sa) + goto drop_pkt_and_exit; + + /* Else the packet has to be protected */ + + /* Get SA ctx*/ + sa_ctx = ctx->sa_ctx; + + /* Get SA */ + sa = &(sa_ctx->sa[sa_idx]); + + /* Get IPsec session */ + sess = ipsec_get_primary_session(sa); + + /* Allow only inline protocol for now */ + if (sess->type != RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL) { + RTE_LOG(ERR, IPSEC, "SA type not supported\n"); + goto drop_pkt_and_exit; + } + + if (sess->security.ol_flags & RTE_SECURITY_TX_OLOAD_NEED_MDATA) + pkt->userdata = sess->security.ses; + + /* Mark the packet for Tx security offload */ + pkt->ol_flags |= PKT_TX_SEC_OFFLOAD; + + /* Get the port to which this pkt need to be submitted */ + port_id = sa->portid; + +send_pkt: + /* Update mac addresses */ + update_mac_addrs(pkt, port_id); + + /* Update the event with the dest port */ + ipsec_event_pre_forward(pkt, port_id); + return PKT_FORWARDED; + +drop_pkt_and_exit: + RTE_LOG(ERR, IPSEC, "Outbound packet dropped\n"); + rte_pktmbuf_free(pkt); + ev->mbuf = NULL; + return PKT_DROPPED; +} + /* * Event mode exposes various operating modes depending on the * capabilities of the event device and the operating mode @@ -68,7 +392,7 @@ prepare_out_sessions_tbl(struct sa_ctx *sa_out, */ /* Workers registered */ -#define IPSEC_EVENTMODE_WORKERS 1 +#define IPSEC_EVENTMODE_WORKERS 2 /* * Event mode worker @@ -146,7 +470,7 @@ ipsec_wrkr_non_burst_int_port_drv_mode(struct eh_event_link_info *links, } /* Save security session */ - pkt->udata64 = (uint64_t) sess_tbl[port_id]; + pkt->userdata = sess_tbl[port_id]; /* Mark the packet 
for Tx security offload */ pkt->ol_flags |= PKT_TX_SEC_OFFLOAD; @@ -165,6 +489,105 @@ ipsec_wrkr_non_burst_int_port_drv_mode(struct eh_event_link_info *links, } } +/* + * Event mode worker + * Operating parameters : non-burst - Tx internal port - app mode + */ +static void +ipsec_wrkr_non_burst_int_port_app_mode(struct eh_event_link_info *links, + uint8_t nb_links) +{ + struct lcore_conf_ev_tx_int_port_wrkr lconf; + unsigned int nb_rx = 0; + struct rte_event ev; + uint32_t lcore_id; + int32_t socket_id; + int ret; + + /* Check if we have links registered for this lcore */ + if (nb_links == 0) { + /* No links registered - exit */ + return; + } + + /* We have valid links */ + + /* Get core ID */ + lcore_id = rte_lcore_id(); + + /* Get socket ID */ + socket_id = rte_lcore_to_socket_id(lcore_id); + + /* Save routing table */ + lconf.rt.rt4_ctx = socket_ctx[socket_id].rt_ip4; + lconf.rt.rt6_ctx = socket_ctx[socket_id].rt_ip6; + lconf.inbound.sp4_ctx = socket_ctx[socket_id].sp_ip4_in; + lconf.inbound.sp6_ctx = socket_ctx[socket_id].sp_ip6_in; + lconf.inbound.sa_ctx = socket_ctx[socket_id].sa_in; + lconf.inbound.session_pool = socket_ctx[socket_id].session_pool; + lconf.inbound.session_priv_pool = + socket_ctx[socket_id].session_priv_pool; + lconf.outbound.sp4_ctx = socket_ctx[socket_id].sp_ip4_out; + lconf.outbound.sp6_ctx = socket_ctx[socket_id].sp_ip6_out; + lconf.outbound.sa_ctx = socket_ctx[socket_id].sa_out; + lconf.outbound.session_pool = socket_ctx[socket_id].session_pool; + lconf.outbound.session_priv_pool = + socket_ctx[socket_id].session_priv_pool; + + RTE_LOG(INFO, IPSEC, + "Launching event mode worker (non-burst - Tx internal port - " + "app mode) on lcore %d\n", lcore_id); + + /* Check if it's single link */ + if (nb_links != 1) { + RTE_LOG(INFO, IPSEC, + "Multiple links not supported. 
Using first link\n"); + } + + RTE_LOG(INFO, IPSEC, " -- lcoreid=%u event_port_id=%u\n", lcore_id, + links[0].event_port_id); + + while (!force_quit) { + /* Read packet from event queues */ + nb_rx = rte_event_dequeue_burst(links[0].eventdev_id, + links[0].event_port_id, + &ev, /* events */ + 1, /* nb_events */ + 0 /* timeout_ticks */); + + if (nb_rx == 0) + continue; + + if (unlikely(ev.event_type != RTE_EVENT_TYPE_ETHDEV)) { + RTE_LOG(ERR, IPSEC, "Invalid event type %u", + ev.event_type); + + continue; + } + + if (is_unprotected_port(ev.mbuf->port)) + ret = process_ipsec_ev_inbound(&lconf.inbound, + &lconf.rt, &ev); + else + ret = process_ipsec_ev_outbound(&lconf.outbound, + &lconf.rt, &ev); + if (ret != 1) + /* The pkt has been dropped */ + continue; + + /* + * Since tx internal port is available, events can be + * directly enqueued to the adapter and it would be + * internally submitted to the eth device. + */ + rte_event_eth_tx_adapter_enqueue(links[0].eventdev_id, + links[0].event_port_id, + &ev, /* events */ + 1, /* nb_events */ + 0 /* flags */); + } +} + static uint8_t ipsec_eventmode_populate_wrkr_params(struct eh_app_worker_params *wrkrs) { @@ -180,6 +603,14 @@ ipsec_eventmode_populate_wrkr_params(struct eh_app_worker_params *wrkrs) wrkr->cap.ipsec_mode = EH_IPSEC_MODE_TYPE_DRIVER; wrkr->worker_thread = ipsec_wrkr_non_burst_int_port_drv_mode; wrkr++; + nb_wrkr_param++; + + /* Non-burst - Tx internal port - app mode */ + wrkr->cap.burst = EH_RX_TYPE_NON_BURST; + wrkr->cap.tx_internal_port = EH_TX_TYPE_INTERNAL_PORT; + wrkr->cap.ipsec_mode = EH_IPSEC_MODE_TYPE_APP; + wrkr->worker_thread = ipsec_wrkr_non_burst_int_port_app_mode; + nb_wrkr_param++; return nb_wrkr_param; } diff --git a/examples/ipsec-secgw/ipsec_worker.h b/examples/ipsec-secgw/ipsec_worker.h new file mode 100644 index 0000000..5d85cf1 --- /dev/null +++ b/examples/ipsec-secgw/ipsec_worker.h @@ -0,0 +1,41 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C) 2020 Marvell International 
Ltd. + */ +#ifndef _IPSEC_WORKER_H_ +#define _IPSEC_WORKER_H_ + +#include "ipsec.h" + +enum pkt_type { + PKT_TYPE_PLAIN_IPV4 = 1, + PKT_TYPE_IPSEC_IPV4, + PKT_TYPE_PLAIN_IPV6, + PKT_TYPE_IPSEC_IPV6, + PKT_TYPE_INVALID +}; + +enum { + PKT_DROPPED = 0, + PKT_FORWARDED, + PKT_POSTED /* for lookaside case */ +}; + +struct route_table { + struct rt_ctx *rt4_ctx; + struct rt_ctx *rt6_ctx; +}; + +/* + * Conf required by event mode worker with tx internal port + */ +struct lcore_conf_ev_tx_int_port_wrkr { + struct ipsec_ctx inbound; + struct ipsec_ctx outbound; + struct route_table rt; +} __rte_cache_aligned; + +void ipsec_poll_mode_worker(void); + +int ipsec_launch_one_lcore(void *args); + +#endif /* _IPSEC_WORKER_H_ */ -- 2.7.4 ^ permalink raw reply [flat|nested] 147+ messages in thread
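The app mode worker's outbound path above reduces to a short decision chain on the SP lookup result. A minimal stand-alone sketch of that chain (the BYPASS sentinel value and the action names here are illustrative stand-ins, not the application's exact definitions):

```c
#include <assert.h>
#include <stdint.h>

#define BYPASS UINT32_MAX /* stand-in for the application's bypass sentinel */

enum out_action { OUT_DROP, OUT_BYPASS, OUT_PROTECT };

/*
 * Condensed decision chain of process_ipsec_ev_outbound(): a failed SP
 * lookup drops the packet, a BYPASS result routes it without IPsec, an
 * out-of-range SA index drops it, and anything else selects an SA for
 * inline protocol processing.
 */
static enum out_action
outbound_action(int sp_match, uint32_t sa_idx, uint32_t nb_sa)
{
	if (!sp_match)
		return OUT_DROP;     /* no SP entry */
	if (sa_idx == BYPASS)
		return OUT_BYPASS;   /* plain routing, no IPsec */
	if (sa_idx >= nb_sa)
		return OUT_DROP;     /* invalid SA index */
	return OUT_PROTECT;          /* protect with sa_ctx->sa[sa_idx] */
}
```

In the real worker the OUT_PROTECT branch additionally rejects any session type other than RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL before marking the mbuf for Tx security offload.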
* [dpdk-dev] [PATCH v5 13/15] examples/ipsec-secgw: make number of buffers dynamic 2020-02-27 16:18 ` [dpdk-dev] [PATCH v5 " Lukasz Bartosik ` (11 preceding siblings ...) 2020-02-27 16:18 ` [dpdk-dev] [PATCH v5 12/15] examples/ipsec-secgw: add app " Lukasz Bartosik @ 2020-02-27 16:18 ` Lukasz Bartosik 2020-02-27 16:18 ` [dpdk-dev] [PATCH v5 14/15] doc: add event mode support to ipsec-secgw Lukasz Bartosik ` (3 subsequent siblings) 16 siblings, 0 replies; 147+ messages in thread From: Lukasz Bartosik @ 2020-02-27 16:18 UTC (permalink / raw) To: Akhil Goyal, Radu Nicolau, Thomas Monjalon Cc: Jerin Jacob, Narayana Prasad, Ankur Dwivedi, Anoob Joseph, Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru, Konstantin Ananyev, dev Make number of buffers in a pool nb_mbuf_in_pool dependent on number of ports, cores and crypto queues. Add command line option -s which when used overrides dynamic calculation of number of buffers in a pool. Signed-off-by: Anoob Joseph <anoobj@marvell.com> Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> --- examples/ipsec-secgw/ipsec-secgw.c | 71 ++++++++++++++++++++++++++++++++------ 1 file changed, 60 insertions(+), 11 deletions(-) diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c index a03958d..5335c4c 100644 --- a/examples/ipsec-secgw/ipsec-secgw.c +++ b/examples/ipsec-secgw/ipsec-secgw.c @@ -60,8 +60,6 @@ volatile bool force_quit; #define MEMPOOL_CACHE_SIZE 256 -#define NB_MBUF (32000) - #define CDEV_QUEUE_DESC 2048 #define CDEV_MAP_ENTRIES 16384 #define CDEV_MP_NB_OBJS 1024 @@ -164,6 +162,7 @@ static int32_t promiscuous_on = 1; static int32_t numa_on = 1; /**< NUMA is enabled by default. */ static uint32_t nb_lcores; static uint32_t single_sa; +static uint32_t nb_bufs_in_pool; /* * RX/TX HW offload capabilities to enable/use on ethernet ports. 
@@ -1280,6 +1279,7 @@ print_usage(const char *prgname) " [-e]" " [-a]" " [-c]" + " [-s NUMBER_OF_MBUFS_IN_PKT_POOL]" " -f CONFIG_FILE" " --config (port,queue,lcore)[,(port,queue,lcore)]" " [--single-sa SAIDX]" @@ -1303,6 +1303,9 @@ print_usage(const char *prgname) " -a enables SA SQN atomic behaviour\n" " -c specifies inbound SAD cache size,\n" " zero value disables the cache (default value: 128)\n" + " -s number of mbufs in packet pool, if not specified number\n" + " of mbufs will be calculated based on number of cores,\n" + " ports and crypto queues\n" " -f CONFIG_FILE: Configuration file\n" " --config (port,queue,lcore): Rx queue configuration. In poll\n" " mode determines which queues from\n" @@ -1507,7 +1510,7 @@ parse_args(int32_t argc, char **argv, struct eh_conf *eh_conf) argvopt = argv; - while ((opt = getopt_long(argc, argvopt, "aelp:Pu:f:j:w:c:", + while ((opt = getopt_long(argc, argvopt, "aelp:Pu:f:j:w:c:s:", lgopts, &option_index)) != EOF) { switch (opt) { @@ -1541,6 +1544,19 @@ parse_args(int32_t argc, char **argv, struct eh_conf *eh_conf) cfgfile = optarg; f_present = 1; break; + + case 's': + ret = parse_decimal(optarg); + if (ret < 0) { + printf("Invalid number of buffers in a pool: " + "%s\n", optarg); + print_usage(prgname); + return -1; + } + + nb_bufs_in_pool = ret; + break; + case 'j': ret = parse_decimal(optarg); if (ret < RTE_MBUF_DEFAULT_BUF_SIZE || @@ -1913,12 +1929,12 @@ check_cryptodev_mask(uint8_t cdev_id) return -1; } -static int32_t +static uint16_t cryptodevs_init(void) { struct rte_cryptodev_config dev_conf; struct rte_cryptodev_qp_conf qp_conf; - uint16_t idx, max_nb_qps, qp, i; + uint16_t idx, max_nb_qps, qp, total_nb_qps, i; int16_t cdev_id; struct rte_hash_parameters params = { 0 }; @@ -1946,6 +1962,7 @@ cryptodevs_init(void) printf("lcore/cryptodev/qp mappings:\n"); idx = 0; + total_nb_qps = 0; for (cdev_id = 0; cdev_id < rte_cryptodev_count(); cdev_id++) { struct rte_cryptodev_info cdev_info; @@ -1979,6 +1996,7 @@ 
cryptodevs_init(void) if (qp == 0) continue; + total_nb_qps += qp; dev_conf.socket_id = rte_cryptodev_socket_id(cdev_id); dev_conf.nb_queue_pairs = qp; dev_conf.ff_disable = RTE_CRYPTODEV_FF_ASYMMETRIC_CRYPTO; @@ -2011,7 +2029,7 @@ cryptodevs_init(void) printf("\n"); - return 0; + return total_nb_qps; } static void @@ -2665,20 +2683,36 @@ inline_sessions_free(struct sa_ctx *sa_ctx) } } +static uint32_t +calculate_nb_mbufs(uint16_t nb_ports, uint16_t nb_crypto_qp, uint32_t nb_rxq, + uint32_t nb_txq) +{ + return RTE_MAX((nb_rxq * nb_rxd + + nb_ports * nb_lcores * MAX_PKT_BURST + + nb_ports * nb_txq * nb_txd + + nb_lcores * MEMPOOL_CACHE_SIZE + + nb_crypto_qp * CDEV_QUEUE_DESC + + nb_lcores * frag_tbl_sz * + FRAG_TBL_BUCKET_ENTRIES), + 8192U); +} + int32_t main(int32_t argc, char **argv) { int32_t ret; - uint32_t lcore_id; + uint32_t lcore_id, nb_txq, nb_rxq = 0; uint32_t cdev_id; uint32_t i; uint8_t socket_id; - uint16_t portid; + uint16_t portid, nb_crypto_qp, nb_ports = 0; uint64_t req_rx_offloads[RTE_MAX_ETHPORTS]; uint64_t req_tx_offloads[RTE_MAX_ETHPORTS]; struct eh_conf *eh_conf = NULL; size_t sess_sz; + nb_bufs_in_pool = 0; + /* init EAL */ ret = rte_eal_init(argc, argv); if (ret < 0) @@ -2727,6 +2761,22 @@ main(int32_t argc, char **argv) sess_sz = max_session_size(); + nb_crypto_qp = cryptodevs_init(); + + if (nb_bufs_in_pool == 0) { + RTE_ETH_FOREACH_DEV(portid) { + if ((enabled_port_mask & (1 << portid)) == 0) + continue; + nb_ports++; + nb_rxq += get_port_nb_rx_queues(portid); + } + + nb_txq = nb_lcores; + + nb_bufs_in_pool = calculate_nb_mbufs(nb_ports, nb_crypto_qp, + nb_rxq, nb_txq); + } + for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) { if (rte_lcore_is_enabled(lcore_id) == 0) continue; @@ -2740,11 +2790,12 @@ main(int32_t argc, char **argv) if (socket_ctx[socket_id].mbuf_pool) continue; - pool_init(&socket_ctx[socket_id], socket_id, NB_MBUF); + pool_init(&socket_ctx[socket_id], socket_id, nb_bufs_in_pool); 
session_pool_init(&socket_ctx[socket_id], socket_id, sess_sz); session_priv_pool_init(&socket_ctx[socket_id], socket_id, sess_sz); } + printf("Number of mbufs in packet pool %d\n", nb_bufs_in_pool); RTE_ETH_FOREACH_DEV(portid) { if ((enabled_port_mask & (1 << portid)) == 0) @@ -2756,8 +2807,6 @@ main(int32_t argc, char **argv) req_tx_offloads[portid]); } - cryptodevs_init(); - /* * Set the enabled port mask in helper config for use by helper * sub-system. This will be used while initializing devices using -- 2.7.4 ^ permalink raw reply [flat|nested] 147+ messages in thread
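The dynamic pool sizing introduced by this patch sums per-Rx/Tx-queue descriptors, per-lcore burst buffers and mempool cache, crypto queue descriptors, and fragment table entries, with a floor of 8192 mbufs. A self-contained sketch of the same arithmetic (the constants mirror ipsec-secgw's defaults and are shown only for illustration):

```c
#include <assert.h>
#include <stdint.h>

#define MAX_PKT_BURST 32            /* ipsec-secgw default burst size */
#define MEMPOOL_CACHE_SIZE 256      /* per-lcore mempool cache */
#define CDEV_QUEUE_DESC 2048        /* descriptors per crypto queue pair */
#define FRAG_TBL_BUCKET_ENTRIES 4   /* entries per reassembly bucket */

static inline uint32_t
max_u32(uint32_t a, uint32_t b) { return a > b ? a : b; }

/*
 * Mirror of calculate_nb_mbufs(): account for every place a packet
 * buffer can sit at once, then apply an 8192 floor.
 */
static uint32_t
calc_nb_mbufs(uint16_t nb_ports, uint16_t nb_crypto_qp, uint32_t nb_rxq,
	      uint32_t nb_txq, uint32_t nb_lcores, uint32_t nb_rxd,
	      uint32_t nb_txd, uint32_t frag_tbl_sz)
{
	return max_u32(nb_rxq * nb_rxd +
		       nb_ports * nb_lcores * MAX_PKT_BURST +
		       nb_ports * nb_txq * nb_txd +
		       nb_lcores * MEMPOOL_CACHE_SIZE +
		       nb_crypto_qp * CDEV_QUEUE_DESC +
		       nb_lcores * frag_tbl_sz * FRAG_TBL_BUCKET_ENTRIES,
		       8192U);
}
```

With no ports or queues configured the floor dominates, which is why small test setups still get a usable pool; the -s option simply bypasses this calculation.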
* [dpdk-dev] [PATCH v5 14/15] doc: add event mode support to ipsec-secgw 2020-02-27 16:18 ` [dpdk-dev] [PATCH v5 " Lukasz Bartosik ` (12 preceding siblings ...) 2020-02-27 16:18 ` [dpdk-dev] [PATCH v5 13/15] examples/ipsec-secgw: make number of buffers dynamic Lukasz Bartosik @ 2020-02-27 16:18 ` Lukasz Bartosik 2020-04-12 16:37 ` Thomas Monjalon 2020-02-27 16:18 ` [dpdk-dev] [PATCH v5 15/15] examples/ipsec-secgw: reserve crypto queues in event mode Lukasz Bartosik ` (2 subsequent siblings) 16 siblings, 1 reply; 147+ messages in thread From: Lukasz Bartosik @ 2020-02-27 16:18 UTC (permalink / raw) To: Akhil Goyal, Radu Nicolau, Thomas Monjalon Cc: Jerin Jacob, Narayana Prasad, Ankur Dwivedi, Anoob Joseph, Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru, Konstantin Ananyev, dev Document addition of event mode support to ipsec-secgw application. Signed-off-by: Anoob Joseph <anoobj@marvell.com> Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> --- doc/guides/sample_app_ug/ipsec_secgw.rst | 135 ++++++++++++++++++++++++++----- 1 file changed, 113 insertions(+), 22 deletions(-) diff --git a/doc/guides/sample_app_ug/ipsec_secgw.rst b/doc/guides/sample_app_ug/ipsec_secgw.rst index 5ec9b1e..038f593 100644 --- a/doc/guides/sample_app_ug/ipsec_secgw.rst +++ b/doc/guides/sample_app_ug/ipsec_secgw.rst @@ -1,5 +1,6 @@ .. SPDX-License-Identifier: BSD-3-Clause Copyright(c) 2016-2017 Intel Corporation. + Copyright (C) 2020 Marvell International Ltd. IPsec Security Gateway Sample Application ========================================= @@ -61,6 +62,44 @@ The Path for the IPsec Outbound traffic is: * Routing. * Write packet to port. +The application supports two modes of operation: poll mode and event mode. + +* In the poll mode a core receives packets from statically configured list + of eth ports and eth ports' queues. + +* In the event mode a core receives packets as events. After packet processing + is done core submits them back as events to an event device. 
This enables + multicore scaling and HW assisted scheduling by making use of the event device + capabilities. The event mode configuration is predefined. All packets reaching + given eth port will arrive at the same event queue. All event queues are mapped + to all event ports. This allows all cores to receive traffic from all ports. + Since the underlying event device might have varying capabilities, the worker + threads can be drafted differently to maximize performance. For example, if an + event device - eth device pair has Tx internal port, then application can call + rte_event_eth_tx_adapter_enqueue() instead of regular rte_event_enqueue_burst(). + So a thread which assumes that the device pair has internal port will not be the + right solution for another pair. The infrastructure added for the event mode aims + to help the application to have multiple worker threads by maximizing performance from + every type of event device without affecting existing paths/use cases. The worker + to be used will be determined by the operating conditions and the underlying device + capabilities. **Currently the application provides non-burst, internal port worker + threads and supports inline protocol only.** It also provides infrastructure for + non-internal port however does not define any worker threads. + +Additionally the event mode introduces two submodes of processing packets: + +* Driver submode: This submode has bare minimum changes in the application to support + IPsec. There are no lookups, no routing done in the application. And for inline + protocol use case, the worker thread resembles l2fwd worker thread as the IPsec + processing is done entirely in HW. This mode can be used to benchmark the raw + performance of the HW. The driver submode is selected with --single-sa option + (used also by poll mode). When --single-sa option is used in conjunction with event + mode then index passed to --single-sa is ignored. 
+ +* App submode: This submode has all the features currently implemented with the + application (non librte_ipsec path). All the lookups, routing follows existing + methods and report numbers that can be compared against regular poll mode + benchmark numbers. Constraints ----------- @@ -94,13 +133,18 @@ The application has a number of command line options:: -p PORTMASK -P -u PORTMASK -j FRAMESIZE -l -w REPLAY_WINOW_SIZE -e -a -c SAD_CACHE_SIZE - --config (port,queue,lcore)[,(port,queue,lcore] + -s NUMBER_OF_MBUFS_IN_PACKET_POOL + -f CONFIG_FILE_PATH + --config (port,queue,lcore)[,(port,queue,lcore)] --single-sa SAIDX + --cryptodev_mask MASK + --transfer-mode MODE + --event-schedule-type TYPE --rxoffload MASK --txoffload MASK - --mtu MTU --reassemble NUM - -f CONFIG_FILE_PATH + --mtu MTU + --frag-ttl FRAG_TTL_NS Where: @@ -138,12 +182,37 @@ Where: Zero value disables cache. Default value: 128. -* ``--config (port,queue,lcore)[,(port,queue,lcore)]``: determines which queues - from which ports are mapped to which cores. +* ``-s``: sets number of mbufs in packet pool, if not provided number of mbufs + will be calculated based on number of cores, eth ports and crypto queues. + +* ``-f CONFIG_FILE_PATH``: the full path of text-based file containing all + configuration items for running the application (See Configuration file + syntax section below). ``-f CONFIG_FILE_PATH`` **must** be specified. + **ONLY** the UNIX format configuration file is accepted. + +* ``--config (port,queue,lcore)[,(port,queue,lcore)]``: in poll mode determines + which queues from which ports are mapped to which cores. In event mode this + option is not used as packets are dynamically scheduled to cores by HW. -* ``--single-sa SAIDX``: use a single SA for outbound traffic, bypassing the SP - on both Inbound and Outbound. This option is meant for debugging/performance - purposes. 
+* ``--single-sa SAIDX``: in poll mode use a single SA for outbound traffic, + bypassing the SP on both Inbound and Outbound. This option is meant for + debugging/performance purposes. In event mode selects driver submode, SA index + value is ignored. + +* ``--cryptodev_mask MASK``: hexadecimal bitmask of the crypto devices + to configure. + +* ``--transfer-mode MODE``: sets operating mode of the application + "poll" : packet transfer via polling (default) + "event" : Packet transfer via event device + +* ``--event-schedule-type TYPE``: queue schedule type, applies only when + --transfer-mode is set to event. + "ordered" : Ordered (default) + "atomic" : Atomic + "parallel" : Parallel + When --event-schedule-type is set as RTE_SCHED_TYPE_ORDERED/ATOMIC, event + device will ensure the ordering. Ordering will be lost when tried in PARALLEL. * ``--rxoffload MASK``: RX HW offload capabilities to enable/use on this port (bitmask of DEV_RX_OFFLOAD_* values). It is an optional parameter and @@ -155,6 +224,10 @@ Where: allows user to disable some of the TX HW offload capabilities. By default all HW TX offloads are enabled. +* ``--reassemble NUM``: max number of entries in reassemble fragment table. + Zero value disables reassembly functionality. + Default value: 0. + * ``--mtu MTU``: MTU value (in bytes) on all attached ethernet ports. Outgoing packets with length bigger then MTU will be fragmented. Incoming packets with length bigger then MTU will be discarded. @@ -167,26 +240,17 @@ Where: Should be lower for low number of reassembly buckets. Valid values: from 1 ns to 10 s. Default value: 10000000 (10 s). -* ``--reassemble NUM``: max number of entries in reassemble fragment table. - Zero value disables reassembly functionality. - Default value: 0. - -* ``-f CONFIG_FILE_PATH``: the full path of text-based file containing all - configuration items for running the application (See Configuration file - syntax section below). ``-f CONFIG_FILE_PATH`` **must** be specified. 
- **ONLY** the UNIX format configuration file is accepted. - The mapping of lcores to port/queues is similar to other l3fwd applications. -For example, given the following command line:: +For example, given the following command line to run application in poll mode:: ./build/ipsec-secgw -l 20,21 -n 4 --socket-mem 0,2048 \ - --vdev "crypto_null" -- -p 0xf -P -u 0x3 \ + --vdev "crypto_null" -- -p 0xf -P -u 0x3 \ --config="(0,0,20),(1,0,20),(2,0,21),(3,0,21)" \ - -f /path/to/config_file \ + -f /path/to/config_file --transfer-mode poll \ -where each options means: +where each option means: * The ``-l`` option enables cores 20 and 21. @@ -200,7 +264,7 @@ where each options means: * The ``-P`` option enables promiscuous mode. -* The ``-u`` option sets ports 1 and 2 as unprotected, leaving 2 and 3 as protected. +* The ``-u`` option sets ports 0 and 1 as unprotected, leaving 2 and 3 as protected. * The ``--config`` option enables one queue per port with the following mapping: @@ -228,6 +292,33 @@ where each options means: **note** the parser only accepts UNIX format text file. Other formats such as DOS/MAC format will cause a parse error. +* The ``--transfer-mode`` option selects poll mode for processing packets. + +Similarly for example, given the following command line to run application in +event app mode:: + + ./build/ipsec-secgw -c 0x3 -- -P -p 0x3 -u 0x1 \ + -f /path/to/config_file --transfer-mode event \ + --event-schedule-type parallel \ + +where each option means: + +* The ``-c`` option selects cores 0 and 1 to run on. + +* The ``-P`` option enables promiscuous mode. + +* The ``-p`` option enables ports (detected) 0 and 1. + +* The ``-u`` option sets ports 0 as unprotected, leaving 1 as protected. + +* The ``-f /path/to/config_file`` option has the same behavior as in poll + mode example. + +* The ``--transfer-mode`` option selects event mode for processing packets. + +* The ``--event-schedule-type`` option selects parallel ordering of event queues. 
+ + Refer to the *DPDK Getting Started Guide* for general information on running applications and the Environment Abstraction Layer (EAL) options. -- 2.7.4 ^ permalink raw reply [flat|nested] 147+ messages in thread
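The -p and -u options documented above are plain hexadecimal bitmasks in which bit N covers port N, so the enabled/unprotected checks used throughout the application reduce to single-bit tests. Sketched in isolation (helper names are illustrative, not the application's exact symbols):

```c
#include <assert.h>
#include <stdint.h>

/* Is port `portid` selected by the -p PORTMASK option? */
static inline int
port_enabled(uint32_t enabled_port_mask, uint16_t portid)
{
	return (enabled_port_mask & (1u << portid)) != 0;
}

/* Is port `portid` marked unprotected by the -u PORTMASK option?
 * Unprotected ports carry inbound (encrypted) traffic; the rest
 * are protected ports handling outbound (plain) traffic. */
static inline int
port_unprotected(uint32_t unprotected_port_mask, uint16_t portid)
{
	return (unprotected_port_mask & (1u << portid)) != 0;
}
```

For the event-mode example above (`-p 0x3 -u 0x1`), ports 0 and 1 are enabled and only port 0 is unprotected.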
* Re: [dpdk-dev] [PATCH v5 14/15] doc: add event mode support to ipsec-secgw 2020-02-27 16:18 ` [dpdk-dev] [PATCH v5 14/15] doc: add event mode support to ipsec-secgw Lukasz Bartosik @ 2020-04-12 16:37 ` Thomas Monjalon 0 siblings, 0 replies; 147+ messages in thread From: Thomas Monjalon @ 2020-04-12 16:37 UTC (permalink / raw) To: dev, Anoob Joseph, Lukasz Bartosik Cc: Akhil Goyal, Radu Nicolau, Jerin Jacob, Narayana Prasad, Ankur Dwivedi, Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru, Konstantin Ananyev, dev 27/02/2020 17:18, Lukasz Bartosik: > Document addition of event mode support > to ipsec-secgw application. > > Signed-off-by: Anoob Joseph <anoobj@marvell.com> > Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> This patch does more than add event mode to the doc. It should have been split to distinguish the real additions, and the event mode changes could have been merged with the code patches. It is too late to change it now. Please keep this advice in mind for next time. ^ permalink raw reply [flat|nested] 147+ messages in thread
* [dpdk-dev] [PATCH v5 15/15] examples/ipsec-secgw: reserve crypto queues in event mode 2020-02-27 16:18 ` [dpdk-dev] [PATCH v5 " Lukasz Bartosik ` (13 preceding siblings ...) 2020-02-27 16:18 ` [dpdk-dev] [PATCH v5 14/15] doc: add event mode support to ipsec-secgw Lukasz Bartosik @ 2020-02-27 16:18 ` Lukasz Bartosik 2020-03-02 8:47 ` [dpdk-dev] [PATCH v5 00/15] add eventmode to ipsec-secgw Anoob Joseph 2020-03-03 18:00 ` Ananyev, Konstantin 16 siblings, 0 replies; 147+ messages in thread From: Lukasz Bartosik @ 2020-02-27 16:18 UTC (permalink / raw) To: Akhil Goyal, Radu Nicolau, Thomas Monjalon Cc: Jerin Jacob, Narayana Prasad, Ankur Dwivedi, Anoob Joseph, Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru, Konstantin Ananyev, dev Reserve minimum number of crypto queues equal to number of ports. This is to fulfill inline protocol offload requirements. Signed-off-by: Anoob Joseph <anoobj@marvell.com> Signed-off-by: Lukasz Bartosik <lbartosik@marvell.com> --- examples/ipsec-secgw/ipsec-secgw.c | 14 ++++++++++++-- 1 file changed, 12 insertions(+), 2 deletions(-) diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c index 5335c4c..ce36e6d 100644 --- a/examples/ipsec-secgw/ipsec-secgw.c +++ b/examples/ipsec-secgw/ipsec-secgw.c @@ -1930,7 +1930,7 @@ check_cryptodev_mask(uint8_t cdev_id) } static uint16_t -cryptodevs_init(void) +cryptodevs_init(uint16_t req_queue_num) { struct rte_cryptodev_config dev_conf; struct rte_cryptodev_qp_conf qp_conf; @@ -1993,6 +1993,7 @@ cryptodevs_init(void) i++; } + qp = RTE_MIN(max_nb_qps, RTE_MAX(req_queue_num, qp)); if (qp == 0) continue; @@ -2761,7 +2762,16 @@ main(int32_t argc, char **argv) sess_sz = max_session_size(); - nb_crypto_qp = cryptodevs_init(); + /* + * In event mode request minimum number of crypto queues + * to be reserved equal to number of ports. 
+ */ + if (eh_conf->mode == EH_PKT_TRANSFER_MODE_EVENT) + nb_crypto_qp = rte_eth_dev_count_avail(); + else + nb_crypto_qp = 0; + + nb_crypto_qp = cryptodevs_init(nb_crypto_qp); if (nb_bufs_in_pool == 0) { RTE_ETH_FOREACH_DEV(portid) { -- 2.7.4 ^ permalink raw reply [flat|nested] 147+ messages in thread
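The reservation added by this patch is a clamp: raise each cryptodev's queue pair count to the requested minimum (the number of eth ports, when running in event mode), but never above what the device supports. A stand-alone sketch of that clamp (helper names are illustrative stand-ins for RTE_MIN/RTE_MAX):

```c
#include <assert.h>
#include <stdint.h>

static inline uint16_t
min_u16(uint16_t a, uint16_t b) { return a < b ? a : b; }

static inline uint16_t
max_u16(uint16_t a, uint16_t b) { return a > b ? a : b; }

/*
 * Mirror of the per-device adjustment in cryptodevs_init():
 *   qp = RTE_MIN(max_nb_qps, RTE_MAX(req_queue_num, qp));
 * where qp is the number of queue pairs mapped from the lcore/cryptodev
 * configuration, req_queue_num is the reservation requested by the
 * caller, and max_nb_qps is the device limit.
 */
static uint16_t
reserve_qps(uint16_t max_nb_qps, uint16_t req_queue_num, uint16_t qp)
{
	return min_u16(max_nb_qps, max_u16(req_queue_num, qp));
}
```

In poll mode the caller passes req_queue_num = 0, so the clamp leaves the configured mapping untouched; in event mode the floor guarantees one queue pair per eth port for the inline protocol offload path.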
* Re: [dpdk-dev] [PATCH v5 00/15] add eventmode to ipsec-secgw 2020-02-27 16:18 ` [dpdk-dev] [PATCH v5 " Lukasz Bartosik ` (14 preceding siblings ...) 2020-02-27 16:18 ` [dpdk-dev] [PATCH v5 15/15] examples/ipsec-secgw: reserve crypto queues in event mode Lukasz Bartosik @ 2020-03-02 8:47 ` Anoob Joseph 2020-03-02 8:57 ` Akhil Goyal 2020-03-03 18:00 ` Ananyev, Konstantin 16 siblings, 1 reply; 147+ messages in thread From: Anoob Joseph @ 2020-03-02 8:47 UTC (permalink / raw) To: Akhil Goyal, Konstantin Ananyev Cc: Jerin Jacob Kollanukkaran, Narayana Prasad Raju Athreya, Ankur Dwivedi, Archana Muniganti, Tejasree Kondoj, Vamsi Krishna Attunuru, dev, Radu Nicolau, Thomas Monjalon, Lukas Bartosik Hi Akhil, Konstantin, Are there any more comments? Or can we have the patches merged? Thanks, Anoob > -----Original Message----- > From: Lukasz Bartosik <lbartosik@marvell.com> > Sent: Thursday, February 27, 2020 9:48 PM > To: Akhil Goyal <akhil.goyal@nxp.com>; Radu Nicolau > <radu.nicolau@intel.com>; Thomas Monjalon <thomas@monjalon.net> > Cc: Jerin Jacob Kollanukkaran <jerinj@marvell.com>; Narayana Prasad Raju > Athreya <pathreya@marvell.com>; Ankur Dwivedi <adwivedi@marvell.com>; > Anoob Joseph <anoobj@marvell.com>; Archana Muniganti > <marchana@marvell.com>; Tejasree Kondoj <ktejasree@marvell.com>; Vamsi > Krishna Attunuru <vattunuru@marvell.com>; Konstantin Ananyev > <konstantin.ananyev@intel.com>; dev@dpdk.org > Subject: [PATCH v5 00/15] add eventmode to ipsec-secgw > > This series introduces event-mode additions to ipsec-secgw. > > With this series, ipsec-secgw would be able to run in eventmode. The worker > thread (executing loop) would be receiving events and would be submitting it > back to the eventdev after the processing. This way, multicore scaling and h/w > assisted scheduling is achieved by making use of the eventdev capabilities. 
> > Since the underlying event device would be having varying capabilities, the > worker thread could be drafted differently to maximize performance. > This series introduces usage of multiple worker threads, among which the one to > be used will be determined by the operating conditions and the underlying > device capabilities. > > For example, if an event device - eth device pair has Tx internal port, then > application can do tx_adapter_enqueue() instead of regular event_enqueue(). So > a thread making an assumption that the device pair has internal port will not be > the right solution for another pair. The infrastructure added with these patches > aims to help the application to have multiple worker threads, thereby extracting > maximum performance from every device without affecting existing paths/use > cases. > > The eventmode configuration is predefined. All packets reaching one eth port > will hit one event queue. All event queues will be mapped to all event ports. So > all cores will be able to receive traffic from all ports. > When schedule_type is set as RTE_SCHED_TYPE_ORDERED/ATOMIC, event > device will ensure the ordering. Ordering would be lost when tried in PARALLEL. > > Following command line options are introduced, > > --transfer-mode: to choose between poll mode & event mode > --event-schedule-type: to specify the scheduling type > (RTE_SCHED_TYPE_ORDERED/ > RTE_SCHED_TYPE_ATOMIC/ > RTE_SCHED_TYPE_PARALLEL) > > Additionally the event mode introduces two modes of processing packets: > > Driver-mode: This mode will have bare minimum changes in the application > to support ipsec. There wouldn't be any lookup etc done in > the application. And for inline-protocol use case, the > thread would resemble l2fwd as the ipsec processing would be > done entirely in the h/w. This mode can be used to benchmark > the raw performance of the h/w. All the application side > steps (like lookup) can be redone based on the requirement > of the end user. 
Hence the need for a mode which would > report the raw performance. > > App-mode: This mode will have all the features currently implemented with > ipsec-secgw (non librte_ipsec mode). All the lookups etc > would follow the existing methods and would report numbers > that can be compared against regular ipsec-secgw benchmark > numbers. > > The driver mode is selected with existing --single-sa option (used also by poll > mode). When --single-sa option is used in conjunction with event mode then index > passed to --single-sa is ignored. > > Example commands to execute ipsec-secgw in various modes on OCTEON TX2 > platform, > > #Inbound and outbound app mode > ipsec-secgw -w 0002:02:00.0,ipsec_in_max_spi=128 -w > 0002:03:00.0,ipsec_in_max_spi=128 -w 0002:0e:00.0 -w 0002:10:00.1 --log- > level=8 -c 0x1 -- -P -p 0x3 -u 0x1 -f aes-gcm.cfg --transfer-mode event --event- > schedule-type parallel > > #Inbound and outbound driver mode > ipsec-secgw -w 0002:02:00.0,ipsec_in_max_spi=128 -w > 0002:03:00.0,ipsec_in_max_spi=128 -w 0002:0e:00.0 -w 0002:10:00.1 --log- > level=8 -c 0x1 -- -P -p 0x3 -u 0x1 -f aes-gcm.cfg --transfer-mode event --event- > schedule-type parallel --single-sa 0 > > This series adds non burst tx internal port workers only. It provides infrastructure > for non internal port workers, however does not define any. Also, only inline > ipsec protocol mode is supported by the worker threads added. > > Following are planned features, > 1. Add burst mode workers. > 2. Add non internal port workers. > 3. Verify support for Rx core (the support is added but lack of h/w to verify). > 4. Add lookaside protocol support. > > Following are features that Marvell won't be attempting. > 1. Inline crypto support. > 2. Lookaside crypto support. > > For the features that Marvell won't be attempting, new workers can be > introduced by the respective stakeholders. > > This series is tested on Marvell OCTEON TX2. > This series is targeted for 20.05 release. 
> > Changes in v5: > * Rename function check_params() to check_poll_mode_params() and > check_eh_conf() to check_event_mode_params() in order to make it clear > what is their purpose. > * Forbid usage of --config option in event mode. > * Replace magic numbers on return with enum values in > process_ipsec_ev_inbound() > and process_ipsec_ev_outbound() functions. > * Add session_priv_pool for both inbound and outbound configuration in > ipsec_wrkr_non_burst_int_port_app_mode worker. > * Add check of event type in ipsec_wrkr_non_burst_int_port_app_mode worker. > * Update description of --config option in both ipsec-secgw help and > documentation. > > Changes in v4: > * Update ipsec-secgw documentation to describe the new options as well as > event mode support. > * In event mode reserve number of crypto queues equal to number of eth ports > in order to meet inline protocol offload requirements. > * Add calculate_nb_mbufs() function to calculate number of mbufs in a pool > and include fragments table size into the calculation. > * Move structures ipsec_xf and ipsec_sad to ipsec.h and remove static keyword > from sa_out, nb_sa_out, sa_in and nb_sa_in in sa.c. > * Update process_ipsec_ev_inbound(), process_ipsec_ev_outbound(), > check_sp() > and prepare_out_sessions_tbl() functions as a result of changes introduced > by SAD feature. > * Remove setting sa->cdev_id_qp to 0 in create_inline_session as sa_ctx > is created with rte_zmalloc. > * Minor cleanup enhancements: > - In eh_set_default_conf_eventdev() function in event_helper.c put definition > of int local vars in one line, remove invalid comment, put > "eventdev_config->ev_queue_mode = RTE_EVENT_QUEUE_CFG_ALL_TYPES" > in one line > instead of two. > - Remove extern "C" from event_helper.h. > - Put local vars in reverse xmas tree order in eh_dev_has_rx_internal_port() and > eh_dev_has_tx_internal_port() functions in event_helper.c. > - Put #include <rte_bitmap.h> in alphabetical order in ipsec-secgw.c. 
> - Move "extern volatile bool force_quit" and "#include <stdbool.h>" to ipsec-secgw.h, > remove #include <rte_hash.h>. > - Remove unneeded includes in ipsec_worker.c. > - Remove expired todo from ipsec_worker.h. > > Changes in v3: > * Move eh_conf_init() and eh_conf_uninit() functions to event_helper.c > including minor rework. > * Rename --schedule-type option to --event-schedule-type. > * Replace macro UNPROTECTED_PORT with static inline function > is_unprotected_port(). > * Move definitions of global variables used by multiple modules > to .c files and add externs in .h headers. > * Add eh_check_conf() which validates ipsec-secgw configuration > for event mode. > * Add dynamic calculation of number of buffers in a pool based > on number of cores, ports and crypto queues. > * Fix segmentation fault in event mode driver worker which happens > when there are no inline outbound sessions configured. > * Remove change related to updating number of crypto queues > in cryptodevs_init(). The update of crypto queues will be handled > in a separate patch. > * Fix compilation error on 32-bit platforms by using userdata instead > of udata64 from rte_mbuf. > > Changes in v2: > * Remove --process-dir option. Instead use existing unprotected port mask > option (-u) to decide whether a port handles inbound or outbound traffic. > * Remove --process-mode option. Instead use existing --single-sa option > to select between app and driver modes. > * Add handling of PKT_RX_SEC_OFFLOAD_FAIL result in app worker thread. > * Fix passing of req_rx_offload flags to create_default_ipsec_flow(). > * Move destruction of flows to a location where eth ports are stopped > and closed. > * Print error and exit when event mode --schedule-type option is used > in poll mode. > * Reduce number of goto statements replacing them with loop constructs. > * Remove sec_session_fixed table and replace it with locally built > table in driver worker thread.
Table is indexed by port identifier > and holds first inline session pointer found for a given port. > * Print error and exit when sessions other than inline are configured > in event mode. > * When number of event queues is less than number of eth ports then > map all eth ports to one event queue. > * Cleanup and minor improvements in code as suggested by Konstantin > > Ankur Dwivedi (1): > examples/ipsec-secgw: add default rte flow for inline Rx > > Anoob Joseph (5): > examples/ipsec-secgw: add framework for eventmode helper > examples/ipsec-secgw: add eventdev port-lcore link > examples/ipsec-secgw: add Rx adapter support > examples/ipsec-secgw: add Tx adapter support > examples/ipsec-secgw: add routines to display config > > Lukasz Bartosik (9): > examples/ipsec-secgw: add routines to launch workers > examples/ipsec-secgw: add support for internal ports > examples/ipsec-secgw: add event helper config init/uninit > examples/ipsec-secgw: add eventmode to ipsec-secgw > examples/ipsec-secgw: add driver mode worker > examples/ipsec-secgw: add app mode worker > examples/ipsec-secgw: make number of buffers dynamic > doc: add event mode support to ipsec-secgw > examples/ipsec-secgw: reserve crypto queues in event mode > > doc/guides/sample_app_ug/ipsec_secgw.rst | 135 ++- > examples/ipsec-secgw/Makefile | 2 + > examples/ipsec-secgw/event_helper.c | 1812 > ++++++++++++++++++++++++++++++ > examples/ipsec-secgw/event_helper.h | 327 ++++++ > examples/ipsec-secgw/ipsec-secgw.c | 506 +++++++-- > examples/ipsec-secgw/ipsec-secgw.h | 88 ++ > examples/ipsec-secgw/ipsec.c | 5 +- > examples/ipsec-secgw/ipsec.h | 53 +- > examples/ipsec-secgw/ipsec_worker.c | 649 +++++++++++ > examples/ipsec-secgw/ipsec_worker.h | 41 + > examples/ipsec-secgw/meson.build | 6 +- > examples/ipsec-secgw/sa.c | 21 +- > examples/ipsec-secgw/sad.h | 5 - > 13 files changed, 3516 insertions(+), 134 deletions(-) create mode 100644 > examples/ipsec-secgw/event_helper.c > create mode 100644 
examples/ipsec-secgw/event_helper.h > create mode 100644 examples/ipsec-secgw/ipsec-secgw.h > create mode 100644 examples/ipsec-secgw/ipsec_worker.c > create mode 100644 examples/ipsec-secgw/ipsec_worker.h > > -- > 2.7.4 ^ permalink raw reply [flat|nested] 147+ messages in thread
* Re: [dpdk-dev] [PATCH v5 00/15] add eventmode to ipsec-secgw 2020-03-02 8:47 ` [dpdk-dev] [PATCH v5 00/15] add eventmode to ipsec-secgw Anoob Joseph @ 2020-03-02 8:57 ` Akhil Goyal 0 siblings, 0 replies; 147+ messages in thread From: Akhil Goyal @ 2020-03-02 8:57 UTC (permalink / raw) To: Anoob Joseph, Konstantin Ananyev Cc: Jerin Jacob Kollanukkaran, Narayana Prasad Raju Athreya, Ankur Dwivedi, Archana Muniganti, Tejasree Kondoj, Vamsi Krishna Attunuru, dev, Radu Nicolau, Thomas Monjalon, Lukas Bartosik Hi Anoob, I will merge this series this week. Regards, Akhil > -----Original Message----- > From: Anoob Joseph <anoobj@marvell.com> > Sent: Monday, March 2, 2020 2:17 PM > To: Akhil Goyal <akhil.goyal@nxp.com>; Konstantin Ananyev > <konstantin.ananyev@intel.com> > Cc: Jerin Jacob Kollanukkaran <jerinj@marvell.com>; Narayana Prasad Raju > Athreya <pathreya@marvell.com>; Ankur Dwivedi <adwivedi@marvell.com>; > Archana Muniganti <marchana@marvell.com>; Tejasree Kondoj > <ktejasree@marvell.com>; Vamsi Krishna Attunuru <vattunuru@marvell.com>; > dev@dpdk.org; Radu Nicolau <radu.nicolau@intel.com>; Thomas Monjalon > <thomas@monjalon.net>; Lukas Bartosik <lbartosik@marvell.com> > Subject: RE: [PATCH v5 00/15] add eventmode to ipsec-secgw > > Hi Akhil, Konstantin, > > Are there any more comments? Or can we have the patches merged?
> > Thanks, > Anoob > > > -----Original Message----- > > From: Lukasz Bartosik <lbartosik@marvell.com> > > Sent: Thursday, February 27, 2020 9:48 PM > > To: Akhil Goyal <akhil.goyal@nxp.com>; Radu Nicolau > > <radu.nicolau@intel.com>; Thomas Monjalon <thomas@monjalon.net> > > Cc: Jerin Jacob Kollanukkaran <jerinj@marvell.com>; Narayana Prasad Raju > > Athreya <pathreya@marvell.com>; Ankur Dwivedi <adwivedi@marvell.com>; > > Anoob Joseph <anoobj@marvell.com>; Archana Muniganti > > <marchana@marvell.com>; Tejasree Kondoj <ktejasree@marvell.com>; Vamsi > > Krishna Attunuru <vattunuru@marvell.com>; Konstantin Ananyev > > <konstantin.ananyev@intel.com>; dev@dpdk.org > > Subject: [PATCH v5 00/15] add eventmode to ipsec-secgw > > > > This series introduces event-mode additions to ipsec-secgw. > > > > With this series, ipsec-secgw would be able to run in eventmode. The worker > > thread (executing loop) would be receiving events and would be submitting it > > back to the eventdev after the processing. This way, multicore scaling and h/w > > assisted scheduling is achieved by making use of the eventdev capabilities. > > > > Since the underlying event device would be having varying capabilities, the > > worker thread could be drafted differently to maximize performance. > > This series introduces usage of multiple worker threads, among which the one to > > be used will be determined by the operating conditions and the underlying > > device capabilities. > > > > For example, if an event device - eth device pair has Tx internal port, then > > application can do tx_adapter_enqueue() instead of regular event_enqueue(). So > > a thread making an assumption that the device pair has internal port will not be > > the right solution for another pair. The infrastructure added with these patches > > aims to help application to have multiple worker threads, thereby extracting > > maximum performance from every device without affecting existing paths/use > > cases.
> > > > The eventmode configuration is predefined. All packets reaching one eth port > > will hit one event queue. All event queues will be mapped to all event ports. So > > all cores will be able to receive traffic from all ports. > > When schedule_type is set as RTE_SCHED_TYPE_ORDERED/ATOMIC, event > > device will ensure the ordering. Ordering would be lost when tried in PARALLEL. > > > > Following command line options are introduced, > > > > --transfer-mode: to choose between poll mode & event mode > > --event-schedule-type: to specify the scheduling type > > (RTE_SCHED_TYPE_ORDERED/ > > RTE_SCHED_TYPE_ATOMIC/ > > RTE_SCHED_TYPE_PARALLEL) > > > > Additionally the event mode introduces two modes of processing packets: > > > > Driver-mode: This mode will have bare minimum changes in the application > > to support ipsec. There wouldn't be any lookup etc done in > > the application. And for inline-protocol use case, the > > thread would resemble l2fwd as the ipsec processing would be > > done entirely in the h/w. This mode can be used to benchmark > > the raw performance of the h/w. All the application side > > steps (like lookup) can be redone based on the requirement > > of the end user. Hence the need for a mode which would > > report the raw performance. > > > > App-mode: This mode will have all the features currently implemented with > > ipsec-secgw (non librte_ipsec mode). All the lookups etc > > would follow the existing methods and would report numbers > > that can be compared against regular ipsec-secgw benchmark > > numbers. > > > > The driver mode is selected with existing --single-sa option (used also by poll > > mode). When --single-sa option is used in conjunction with event mode then the index > > passed to --single-sa is ignored.
> > > > Example commands to execute ipsec-secgw in various modes on OCTEON TX2 > > platform, > > > > #Inbound and outbound app mode > > ipsec-secgw -w 0002:02:00.0,ipsec_in_max_spi=128 -w > > 0002:03:00.0,ipsec_in_max_spi=128 -w 0002:0e:00.0 -w 0002:10:00.1 > > --log-level=8 -c 0x1 -- -P -p 0x3 -u 0x1 -f aes-gcm.cfg --transfer-mode event > > --event-schedule-type parallel > > > > #Inbound and outbound driver mode > > ipsec-secgw -w 0002:02:00.0,ipsec_in_max_spi=128 -w > > 0002:03:00.0,ipsec_in_max_spi=128 -w 0002:0e:00.0 -w 0002:10:00.1 > > --log-level=8 -c 0x1 -- -P -p 0x3 -u 0x1 -f aes-gcm.cfg --transfer-mode event > > --event-schedule-type parallel --single-sa 0 > > > > This series adds non-burst Tx internal port workers only. It provides infrastructure > > for non-internal-port workers, however does not define any. Also, only inline > > ipsec protocol mode is supported by the worker threads added. > > > > Following are planned features, > > 1. Add burst mode workers. > > 2. Add non-internal-port workers. > > 3. Verify support for Rx core (the support is added but lack of h/w to verify). > > 4. Add lookaside protocol support. > > > > Following are features that Marvell won't be attempting. > > 1. Inline crypto support. > > 2. Lookaside crypto support. > > > > For the features that Marvell won't be attempting, new workers can be > > introduced by the respective stakeholders. > > > > This series is tested on Marvell OCTEON TX2. > > This series is targeted for 20.05 release. > > > > Changes in v5: > > * Rename function check_params() to check_poll_mode_params() and > > check_eh_conf() to check_event_mode_params() in order to make it clear > > what is their purpose. > > * Forbid usage of --config option in event mode. > > * Replace magic numbers on return with enum values in > > process_ipsec_ev_inbound() > > and process_ipsec_ev_outbound() functions.
> > * Add session_priv_pool for both inbound and outbound configuration in > > ipsec_wrkr_non_burst_int_port_app_mode worker. > > * Add check of event type in ipsec_wrkr_non_burst_int_port_app_mode > worker. > > * Update description of --config option in both ipsec-secgw help and > > documentation. > > > > Changes in v4: > > * Update ipsec-secgw documentation to describe the new options as well as > > event mode support. > > * In event mode reserve number of crypto queues equal to number of eth > ports > > in order to meet inline protocol offload requirements. > > * Add calculate_nb_mbufs() function to calculate number of mbufs in a pool > > and include fragments table size into the calculation. > > * Move structures ipsec_xf and ipsec_sad to ipsec.h and remove static > keyword > > from sa_out, nb_sa_out, sa_in and nb_sa_in in sa.c. > > * Update process_ipsec_ev_inbound(), process_ipsec_ev_outbound(), > > check_sp() > > and prepare_out_sessions_tbl() functions as a result of changes introduced > > by SAD feature. > > * Remove setting sa->cdev_id_qp to 0 in create_inline_session as sa_ctx > > is created with rte_zmalloc. > > * Minor cleanup enhancements: > > - In eh_set_default_conf_eventdev() function in event_helper.c put definition > > of int local vars in one line, remove invalid comment, put > > "eventdev_config->ev_queue_mode = RTE_EVENT_QUEUE_CFG_ALL_TYPES" > > in one line > > instead of two. > > - Remove extern "C" from event_helper.h. > > - Put local vars in reverse xmas tree order in eh_dev_has_rx_internal_port() > and > > eh_dev_has_tx_internal_port() functions in event_helper.c. > > - Put #include <rte_bitmap.h> in alphabetical order in ipsec-secgw.c. > > - Move "extern volatile bool force_quit" and "#include <stdbool.h>" to ipsec- > > secgw.h, > > remove #include <rte_hash.h>. > > - Remove not needed includes in ipsec_worker.c. > > - Remove expired todo from ipsec_worker.h. 
> > > > Changes in v3: > > * Move eh_conf_init() and eh_conf_uninit() functions to event_helper.c > > including minor rework. > > * Rename --schedule-type option to --event-schedule-type. > > * Replace macro UNPROTECTED_PORT with static inline function > > is_unprotected_port(). > > * Move definitions of global variables used by multiple modules > > to .c files and add externs in .h headers. > > * Add eh_check_conf() which validates ipsec-secgw configuration > > for event mode. > > * Add dynamic calculation of number of buffers in a pool based > > on number of cores, ports and crypto queues. > > * Fix segmentation fault in event mode driver worker which happens > > when there are no inline outbound sessions configured. > > * Remove change related to updating number of crypto queues > > in cryptodevs_init(). The update of crypto queues will be handled > > in a separate patch. > > * Fix compilation error on 32-bit platforms by using userdata instead > > of udata64 from rte_mbuf. > > > > Changes in v2: > > * Remove --process-dir option. Instead use existing unprotected port mask > > option (-u) to decide whether a port handles inbound or outbound traffic. > > * Remove --process-mode option. Instead use existing --single-sa option > > to select between app and driver modes. > > * Add handling of PKT_RX_SEC_OFFLOAD_FAIL result in app worker thread. > > * Fix passing of req_rx_offload flags to create_default_ipsec_flow(). > > * Move destruction of flows to a location where eth ports are stopped > > and closed. > > * Print error and exit when event mode --schedule-type option is used > > in poll mode. > > * Reduce number of goto statements replacing them with loop constructs. > > * Remove sec_session_fixed table and replace it with locally built > > table in driver worker thread. Table is indexed by port identifier > > and holds first inline session pointer found for a given port. > > * Print error and exit when sessions other than inline are configured > > in event mode.
> > * When number of event queues is less than number of eth ports then > > map all eth ports to one event queue. > > * Cleanup and minor improvements in code as suggested by Konstantin > > > > Ankur Dwivedi (1): > > examples/ipsec-secgw: add default rte flow for inline Rx > > > > Anoob Joseph (5): > > examples/ipsec-secgw: add framework for eventmode helper > > examples/ipsec-secgw: add eventdev port-lcore link > > examples/ipsec-secgw: add Rx adapter support > > examples/ipsec-secgw: add Tx adapter support > > examples/ipsec-secgw: add routines to display config > > > > Lukasz Bartosik (9): > > examples/ipsec-secgw: add routines to launch workers > > examples/ipsec-secgw: add support for internal ports > > examples/ipsec-secgw: add event helper config init/uninit > > examples/ipsec-secgw: add eventmode to ipsec-secgw > > examples/ipsec-secgw: add driver mode worker > > examples/ipsec-secgw: add app mode worker > > examples/ipsec-secgw: make number of buffers dynamic > > doc: add event mode support to ipsec-secgw > > examples/ipsec-secgw: reserve crypto queues in event mode > > > > doc/guides/sample_app_ug/ipsec_secgw.rst | 135 ++- > > examples/ipsec-secgw/Makefile | 2 + > > examples/ipsec-secgw/event_helper.c | 1812 > > ++++++++++++++++++++++++++++++ > > examples/ipsec-secgw/event_helper.h | 327 ++++++ > > examples/ipsec-secgw/ipsec-secgw.c | 506 +++++++-- > > examples/ipsec-secgw/ipsec-secgw.h | 88 ++ > > examples/ipsec-secgw/ipsec.c | 5 +- > > examples/ipsec-secgw/ipsec.h | 53 +- > > examples/ipsec-secgw/ipsec_worker.c | 649 +++++++++++ > > examples/ipsec-secgw/ipsec_worker.h | 41 + > > examples/ipsec-secgw/meson.build | 6 +- > > examples/ipsec-secgw/sa.c | 21 +- > > examples/ipsec-secgw/sad.h | 5 - > > 13 files changed, 3516 insertions(+), 134 deletions(-) create mode 100644 > > examples/ipsec-secgw/event_helper.c > > create mode 100644 examples/ipsec-secgw/event_helper.h > > create mode 100644 examples/ipsec-secgw/ipsec-secgw.h > > create mode 100644 
examples/ipsec-secgw/ipsec_worker.c > > create mode 100644 examples/ipsec-secgw/ipsec_worker.h > > > > -- > > 2.7.4 ^ permalink raw reply [flat|nested] 147+ messages in thread
* Re: [dpdk-dev] [PATCH v5 00/15] add eventmode to ipsec-secgw 2020-02-27 16:18 ` [dpdk-dev] [PATCH v5 " Lukasz Bartosik ` (15 preceding siblings ...) 2020-03-02 8:47 ` [dpdk-dev] [PATCH v5 00/15] add eventmode to ipsec-secgw Anoob Joseph @ 2020-03-03 18:00 ` Ananyev, Konstantin 2020-03-12 5:32 ` Anoob Joseph 16 siblings, 1 reply; 147+ messages in thread From: Ananyev, Konstantin @ 2020-03-03 18:00 UTC (permalink / raw) To: Lukasz Bartosik, Akhil Goyal, Nicolau, Radu, Thomas Monjalon Cc: Jerin Jacob, Narayana Prasad, Ankur Dwivedi, Anoob Joseph, Archana Muniganti, Tejasree Kondoj, Vamsi Attunuru, dev > > This series introduces event-mode additions to ipsec-secgw. > > With this series, ipsec-secgw would be able to run in eventmode. The > worker thread (executing loop) would be receiving events and would be > submitting it back to the eventdev after the processing. This way, > multicore scaling and h/w assisted scheduling is achieved by making use > of the eventdev capabilities. > > Since the underlying event device would be having varying capabilities, > the worker thread could be drafted differently to maximize performance. > This series introduces usage of multiple worker threads, among which the > one to be used will be determined by the operating conditions and the > underlying device capabilities. > > For example, if an event device - eth device pair has Tx internal port, > then application can do tx_adapter_enqueue() instead of regular > event_enqueue(). So a thread making an assumption that the device pair > has internal port will not be the right solution for another pair. The > infrastructure added with these patches aims to help application to have > multiple worker threads, thereby extracting maximum performance from > every device without affecting existing paths/use cases. > > The eventmode configuration is predefined. All packets reaching one eth > port will hit one event queue. All event queues will be mapped to all > event ports.
So all cores will be able to receive traffic from all ports. > When schedule_type is set as RTE_SCHED_TYPE_ORDERED/ATOMIC, event device > will ensure the ordering. Ordering would be lost when tried in PARALLEL. > > Following command line options are introduced, > > --transfer-mode: to choose between poll mode & event mode > --event-schedule-type: to specify the scheduling type > (RTE_SCHED_TYPE_ORDERED/ > RTE_SCHED_TYPE_ATOMIC/ > RTE_SCHED_TYPE_PARALLEL) > > Additionally the event mode introduces two modes of processing packets: > > Driver-mode: This mode will have bare minimum changes in the application > to support ipsec. There wouldn't be any lookup etc done in > the application. And for inline-protocol use case, the > thread would resemble l2fwd as the ipsec processing would be > done entirely in the h/w. This mode can be used to benchmark > the raw performance of the h/w. All the application side > steps (like lookup) can be redone based on the requirement > of the end user. Hence the need for a mode which would > report the raw performance. > > App-mode: This mode will have all the features currently implemented with > ipsec-secgw (non librte_ipsec mode). All the lookups etc > would follow the existing methods and would report numbers > that can be compared against regular ipsec-secgw benchmark > numbers. > > The driver mode is selected with existing --single-sa option > (used also by poll mode). When --single-sa option is used > in conjunction with event mode then index passed to --single-sa > is ignored.
> > Example commands to execute ipsec-secgw in various modes on OCTEON TX2 platform, > > #Inbound and outbound app mode > ipsec-secgw -w 0002:02:00.0,ipsec_in_max_spi=128 -w 0002:03:00.0,ipsec_in_max_spi=128 -w 0002:0e:00.0 -w 0002:10:00.1 --log-level=8 > -c 0x1 -- -P -p 0x3 -u 0x1 -f aes-gcm.cfg --transfer-mode event --event-schedule-type parallel > > #Inbound and outbound driver mode > ipsec-secgw -w 0002:02:00.0,ipsec_in_max_spi=128 -w 0002:03:00.0,ipsec_in_max_spi=128 -w 0002:0e:00.0 -w 0002:10:00.1 --log-level=8 > -c 0x1 -- -P -p 0x3 -u 0x1 -f aes-gcm.cfg --transfer-mode event --event-schedule-type parallel --single-sa 0 > > This series adds non-burst Tx internal port workers only. It provides infrastructure > for non-internal-port workers, however does not define any. Also, only inline ipsec > protocol mode is supported by the worker threads added. > > Following are planned features, > 1. Add burst mode workers. > 2. Add non-internal-port workers. > 3. Verify support for Rx core (the support is added but lack of h/w to verify). > 4. Add lookaside protocol support. > > Following are features that Marvell won't be attempting. > 1. Inline crypto support. > 2. Lookaside crypto support. > > For the features that Marvell won't be attempting, new workers can be > introduced by the respective stakeholders. > > This series is tested on Marvell OCTEON TX2. > This series is targeted for 20.05 release. > > Changes in v5: > * Rename function check_params() to check_poll_mode_params() and > check_eh_conf() to check_event_mode_params() in order to make it clear > what is their purpose. > * Forbid usage of --config option in event mode. > * Replace magic numbers on return with enum values in process_ipsec_ev_inbound() > and process_ipsec_ev_outbound() functions. > * Add session_priv_pool for both inbound and outbound configuration in > ipsec_wrkr_non_burst_int_port_app_mode worker. > * Add check of event type in ipsec_wrkr_non_burst_int_port_app_mode worker.
> * Update description of --config option in both ipsec-secgw help and documentation. > > Changes in v4: > * Update ipsec-secgw documentation to describe the new options as well as > event mode support. > * In event mode reserve number of crypto queues equal to number of eth ports > in order to meet inline protocol offload requirements. > * Add calculate_nb_mbufs() function to calculate number of mbufs in a pool > and include fragments table size into the calculation. > * Move structures ipsec_xf and ipsec_sad to ipsec.h and remove static keyword > from sa_out, nb_sa_out, sa_in and nb_sa_in in sa.c. > * Update process_ipsec_ev_inbound(), process_ipsec_ev_outbound(), check_sp() > and prepare_out_sessions_tbl() functions as a result of changes introduced > by SAD feature. > * Remove setting sa->cdev_id_qp to 0 in create_inline_session as sa_ctx > is created with rte_zmalloc. > * Minor cleanup enhancements: > - In eh_set_default_conf_eventdev() function in event_helper.c put definition > of int local vars in one line, remove invalid comment, put > "eventdev_config->ev_queue_mode = RTE_EVENT_QUEUE_CFG_ALL_TYPES" in one line > instead of two. > - Remove extern "C" from event_helper.h. > - Put local vars in reverse xmas tree order in eh_dev_has_rx_internal_port() and > eh_dev_has_tx_internal_port() functions in event_helper.c. > - Put #include <rte_bitmap.h> in alphabetical order in ipsec-secgw.c. > - Move "extern volatile bool force_quit" and "#include <stdbool.h>" to ipsec-secgw.h, > remove #include <rte_hash.h>. > - Remove not needed includes in ipsec_worker.c. > - Remove expired todo from ipsec_worker.h. > > Changes in v3: > * Move eh_conf_init() and eh_conf_uninit() functions to event_helper.c > including minor rework. > * Rename --schedule-type option to --event-schedule-type. > * Replace macro UNPROTECTED_PORT with static inline function > is_unprotected_port(). 
> * Move definitions of global variables used by multiple modules > to .c files and add externs in .h headers. > * Add eh_check_conf() which validates ipsec-secgw configuration > for event mode. > * Add dynamic calculation of number of buffers in a pool based > on number of cores, ports and crypto queues. > * Fix segmentation fault in event mode driver worker which happens > when there are no inline outbound sessions configured. > * Remove change related to updating number of crypto queues > in cryptodevs_init(). The update of crypto queues will be handled > in a separate patch. > * Fix compilation error on 32-bit platforms by using userdata instead > of udata64 from rte_mbuf. > > Changes in v2: > * Remove --process-dir option. Instead use existing unprotected port mask > option (-u) to decide whether a port handles inbound or outbound traffic. > * Remove --process-mode option. Instead use existing --single-sa option > to select between app and driver modes. > * Add handling of PKT_RX_SEC_OFFLOAD_FAIL result in app worker thread. > * Fix passing of req_rx_offload flags to create_default_ipsec_flow(). > * Move destruction of flows to a location where eth ports are stopped > and closed. > * Print error and exit when event mode --schedule-type option is used > in poll mode. > * Reduce number of goto statements replacing them with loop constructs. > * Remove sec_session_fixed table and replace it with locally built > table in driver worker thread. Table is indexed by port identifier > and holds first inline session pointer found for a given port. > * Print error and exit when sessions other than inline are configured > in event mode. > * When number of event queues is less than number of eth ports then > map all eth ports to one event queue.
> * Cleanup and minor improvements in code as suggested by Konstantin > > Ankur Dwivedi (1): > examples/ipsec-secgw: add default rte flow for inline Rx > > Anoob Joseph (5): > examples/ipsec-secgw: add framework for eventmode helper > examples/ipsec-secgw: add eventdev port-lcore link > examples/ipsec-secgw: add Rx adapter support > examples/ipsec-secgw: add Tx adapter support > examples/ipsec-secgw: add routines to display config > > Lukasz Bartosik (9): > examples/ipsec-secgw: add routines to launch workers > examples/ipsec-secgw: add support for internal ports > examples/ipsec-secgw: add event helper config init/uninit > examples/ipsec-secgw: add eventmode to ipsec-secgw > examples/ipsec-secgw: add driver mode worker > examples/ipsec-secgw: add app mode worker > examples/ipsec-secgw: make number of buffers dynamic > doc: add event mode support to ipsec-secgw > examples/ipsec-secgw: reserve crypto queues in event mode > > doc/guides/sample_app_ug/ipsec_secgw.rst | 135 ++- > examples/ipsec-secgw/Makefile | 2 + > examples/ipsec-secgw/event_helper.c | 1812 ++++++++++++++++++++++++++++++ > examples/ipsec-secgw/event_helper.h | 327 ++++++ > examples/ipsec-secgw/ipsec-secgw.c | 506 +++++++-- > examples/ipsec-secgw/ipsec-secgw.h | 88 ++ > examples/ipsec-secgw/ipsec.c | 5 +- > examples/ipsec-secgw/ipsec.h | 53 +- > examples/ipsec-secgw/ipsec_worker.c | 649 +++++++++++ > examples/ipsec-secgw/ipsec_worker.h | 41 + > examples/ipsec-secgw/meson.build | 6 +- > examples/ipsec-secgw/sa.c | 21 +- > examples/ipsec-secgw/sad.h | 5 - > 13 files changed, 3516 insertions(+), 134 deletions(-) > create mode 100644 examples/ipsec-secgw/event_helper.c > create mode 100644 examples/ipsec-secgw/event_helper.h > create mode 100644 examples/ipsec-secgw/ipsec-secgw.h > create mode 100644 examples/ipsec-secgw/ipsec_worker.c > create mode 100644 examples/ipsec-secgw/ipsec_worker.h > > -- Have to say I didn't look extensively on event mode. 
My primary concern was poll-mode and common code changes. From that perspective - LGTM. Series Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com> > 2.7.4 ^ permalink raw reply [flat|nested] 147+ messages in thread
* Re: [dpdk-dev] [PATCH v5 00/15] add eventmode to ipsec-secgw 2020-03-03 18:00 ` Ananyev, Konstantin @ 2020-03-12 5:32 ` Anoob Joseph 2020-03-12 5:55 ` Akhil Goyal 0 siblings, 1 reply; 147+ messages in thread From: Anoob Joseph @ 2020-03-12 5:32 UTC (permalink / raw) To: Akhil Goyal Cc: Jerin Jacob Kollanukkaran, Narayana Prasad Raju Athreya, Ankur Dwivedi, Archana Muniganti, Tejasree Kondoj, Vamsi Krishna Attunuru, dev, Ananyev, Konstantin, Lukas Bartosik, Nicolau, Radu, Thomas Monjalon Hi Akhil, Reminder. Do you have any further review comments? Thanks, Anoob > -----Original Message----- > From: Ananyev, Konstantin <konstantin.ananyev@intel.com> > Sent: Tuesday, March 3, 2020 11:30 PM > To: Lukas Bartosik <lbartosik@marvell.com>; Akhil Goyal > <akhil.goyal@nxp.com>; Nicolau, Radu <radu.nicolau@intel.com>; Thomas > Monjalon <thomas@monjalon.net> > Cc: Jerin Jacob Kollanukkaran <jerinj@marvell.com>; Narayana Prasad Raju > Athreya <pathreya@marvell.com>; Ankur Dwivedi <adwivedi@marvell.com>; > Anoob Joseph <anoobj@marvell.com>; Archana Muniganti > <marchana@marvell.com>; Tejasree Kondoj <ktejasree@marvell.com>; Vamsi > Krishna Attunuru <vattunuru@marvell.com>; dev@dpdk.org > Subject: [EXT] RE: [PATCH v5 00/15] add eventmode to ipsec-secgw > > External Email > > ---------------------------------------------------------------------- > > > > > This series introduces event-mode additions to ipsec-secgw. > > > > With this series, ipsec-secgw would be able to run in eventmode. The > > worker thread (executing loop) would be receiving events and would be > > submitting it back to the eventdev after the processing. This way, > > multicore scaling and h/w assisted scheduling is achieved by making > > use of the eventdev capabilities. > > > > Since the underlying event device would be having varying > > capabilities, the worker thread could be drafted differently to maximize > performance. 
> > This series introduces usage of multiple worker threads, among which > > the one to be used will be determined by the operating conditions and > > the underlying device capabilities. > > > > For example, if an event device - eth device pair has Tx internal > > port, then application can do tx_adapter_enqueue() instead of regular > > event_enqueue(). So a thread making an assumption that the device pair > > has internal port will not be the right solution for another pair. The > > infrastructure added with these patches aims to help application to > > have multiple worker threads, thereby extracting maximum performance > > from every device without affecting existing paths/use cases. > > > > The eventmode configuration is predefined. All packets reaching one > > eth port will hit one event queue. All event queues will be mapped to > > all event ports. So all cores will be able to receive traffic from all ports. > > When schedule_type is set as RTE_SCHED_TYPE_ORDERED/ATOMIC, event > > device will ensure the ordering. Ordering would be lost when tried in > PARALLEL. > > > > Following command line options are introduced, > > > > --transfer-mode: to choose between poll mode & event mode > > --event-schedule-type: to specify the scheduling type > > (RTE_SCHED_TYPE_ORDERED/ > > RTE_SCHED_TYPE_ATOMIC/ > > RTE_SCHED_TYPE_PARALLEL) > > > > Additionally the event mode introduces two modes of processing packets: > > > > Driver-mode: This mode will have bare minimum changes in the application > > to support ipsec. There wouldn't be any lookup etc done in > > the application. And for inline-protocol use case, the > > thread would resemble l2fwd as the ipsec processing would be > > done entirely in the h/w. This mode can be used to benchmark > > the raw performance of the h/w. All the application side > > steps (like lookup) can be redone based on the requirement > > of the end user. Hence the need for a mode which would > > report the raw performance.
> > > > App-mode: This mode will have all the features currently implemented with > > ipsec-secgw (non librte_ipsec mode). All the lookups etc > > would follow the existing methods and would report numbers > > that can be compared against regular ipsec-secgw benchmark > > numbers. > > > > The driver mode is selected with existing --single-sa option (used > > also by poll mode). When --single-sa option is used in conjunction with > > event mode then index passed to --single-sa is ignored. > > > > Example commands to execute ipsec-secgw in various modes on OCTEON TX2 > > platform, > > > > #Inbound and outbound app mode > > ipsec-secgw -w 0002:02:00.0,ipsec_in_max_spi=128 -w > > 0002:03:00.0,ipsec_in_max_spi=128 -w 0002:0e:00.0 -w 0002:10:00.1 > > --log-level=8 -c 0x1 -- -P -p 0x3 -u 0x1 -f aes-gcm.cfg > > --transfer-mode event --event-schedule-type parallel > > > > #Inbound and outbound driver mode > > ipsec-secgw -w 0002:02:00.0,ipsec_in_max_spi=128 -w > > 0002:03:00.0,ipsec_in_max_spi=128 -w 0002:0e:00.0 -w 0002:10:00.1 > > --log-level=8 -c 0x1 -- -P -p 0x3 -u 0x1 -f aes-gcm.cfg > > --transfer-mode event --event-schedule-type parallel --single-sa 0 > > > > This series adds non-burst Tx internal port workers only. It provides > > infrastructure for non internal port workers, however does not define > > any. Also, only inline ipsec protocol mode is supported by the worker threads > added. > > > > Following are planned features, > > 1. Add burst mode workers. > > 2. Add non internal port workers. > > 3. Verify support for Rx core (the support is added but lack of h/w to verify). > > 4. Add lookaside protocol support. > > > > Following are features that Marvell won't be attempting. > > 1. Inline crypto support. > > 2. Lookaside crypto support. > > > > For the features that Marvell won't be attempting, new workers can be > > introduced by the respective stakeholders. > > > > This series is tested on Marvell OCTEON TX2. > > This series is targeted for 20.05 release. 
> > > > Changes in v5: > > * Rename function check_params() to check_poll_mode_params() and > > check_eh_conf() to check_event_mode_params() in order to make it clear > > what is their purpose. > > * Forbid usage of --config option in event mode. > > * Replace magic numbers on return with enum values in > process_ipsec_ev_inbound() > > and process_ipsec_ev_outbound() functions. > > * Add session_priv_pool for both inbound and outbound configuration in > > ipsec_wrkr_non_burst_int_port_app_mode worker. > > * Add check of event type in ipsec_wrkr_non_burst_int_port_app_mode > worker. > > * Update description of --config option in both ipsec-secgw help and > documentation. > > > > Changes in v4: > > * Update ipsec-secgw documentation to describe the new options as well as > > event mode support. > > * In event mode reserve number of crypto queues equal to number of eth > ports > > in order to meet inline protocol offload requirements. > > * Add calculate_nb_mbufs() function to calculate number of mbufs in a pool > > and include fragments table size into the calculation. > > * Move structures ipsec_xf and ipsec_sad to ipsec.h and remove static > keyword > > from sa_out, nb_sa_out, sa_in and nb_sa_in in sa.c. > > * Update process_ipsec_ev_inbound(), process_ipsec_ev_outbound(), > check_sp() > > and prepare_out_sessions_tbl() functions as a result of changes introduced > > by SAD feature. > > * Remove setting sa->cdev_id_qp to 0 in create_inline_session as sa_ctx > > is created with rte_zmalloc. > > * Minor cleanup enhancements: > > - In eh_set_default_conf_eventdev() function in event_helper.c put definition > > of int local vars in one line, remove invalid comment, put > > "eventdev_config->ev_queue_mode = RTE_EVENT_QUEUE_CFG_ALL_TYPES" > in one line > > instead of two. > > - Remove extern "C" from event_helper.h. > > - Put local vars in reverse xmas tree order in eh_dev_has_rx_internal_port() > and > > eh_dev_has_tx_internal_port() functions in event_helper.c. 
> > - Put #include <rte_bitmap.h> in alphabetical order in ipsec-secgw.c. > > - Move "extern volatile bool force_quit" and "#include <stdbool.h>" to ipsec-secgw.h, > > remove #include <rte_hash.h>. > > - Remove not needed includes in ipsec_worker.c. > > - Remove expired todo from ipsec_worker.h. > > > > Changes in v3: > > * Move eh_conf_init() and eh_conf_uninit() functions to event_helper.c > > including minor rework. > > * Rename --schedule-type option to --event-schedule-type. > > * Replace macro UNPROTECTED_PORT with static inline function > > is_unprotected_port(). > > * Move definitions of global variables used by multiple modules > > to .c files and add externs in .h headers. > > * Add eh_check_conf() which validates ipsec-secgw configuration > > for event mode. > > * Add dynamic calculation of number of buffers in a pool based > > on number of cores, ports and crypto queues. > > * Fix segmentation fault in event mode driver worker which happens > > when there are no inline outbound sessions configured. > > * Remove change related to updating number of crypto queues > > in cryptodevs_init(). The update of crypto queues will be handled > > in a separate patch. > > * Fix compilation error on 32-bit platforms by using userdata instead > > of udata64 from rte_mbuf. > > > > Changes in v2: > > * Remove --process-dir option. Instead use existing unprotected port mask > > option (-u) to decide whether port handles inbound or outbound traffic. > > * Remove --process-mode option. Instead use existing --single-sa option > > to select between app and driver modes. > > * Add handling of PKT_RX_SEC_OFFLOAD_FAIL result in app worker thread. > > * Fix passing of req_rx_offload flags to create_default_ipsec_flow(). > > * Move destruction of flows to a location where eth ports are stopped > > and closed. > > * Print error and exit when event mode --schedule-type option is used > > in poll mode. > > * Reduce number of goto statements replacing them with loop constructs. 
> > * Remove sec_session_fixed table and replace it with locally built > > table in driver worker thread. Table is indexed by port identifier > > and holds first inline session pointer found for a given port. > > * Print error and exit when sessions other than inline are configured > > in event mode. > > * When number of event queues is less than number of eth ports then > > map all eth ports to one event queue. > > * Cleanup and minor improvements in code as suggested by Konstantin > > > > Ankur Dwivedi (1): > > examples/ipsec-secgw: add default rte flow for inline Rx > > > > Anoob Joseph (5): > > examples/ipsec-secgw: add framework for eventmode helper > > examples/ipsec-secgw: add eventdev port-lcore link > > examples/ipsec-secgw: add Rx adapter support > > examples/ipsec-secgw: add Tx adapter support > > examples/ipsec-secgw: add routines to display config > > > > Lukasz Bartosik (9): > > examples/ipsec-secgw: add routines to launch workers > > examples/ipsec-secgw: add support for internal ports > > examples/ipsec-secgw: add event helper config init/uninit > > examples/ipsec-secgw: add eventmode to ipsec-secgw > > examples/ipsec-secgw: add driver mode worker > > examples/ipsec-secgw: add app mode worker > > examples/ipsec-secgw: make number of buffers dynamic > > doc: add event mode support to ipsec-secgw > > examples/ipsec-secgw: reserve crypto queues in event mode > > > > doc/guides/sample_app_ug/ipsec_secgw.rst | 135 ++- > > examples/ipsec-secgw/Makefile | 2 + > > examples/ipsec-secgw/event_helper.c | 1812 ++++++++++++++++++++++++++++++ > > examples/ipsec-secgw/event_helper.h | 327 ++++++ > > examples/ipsec-secgw/ipsec-secgw.c | 506 +++++++-- > > examples/ipsec-secgw/ipsec-secgw.h | 88 ++ > > examples/ipsec-secgw/ipsec.c | 5 +- > > examples/ipsec-secgw/ipsec.h | 53 +- > > examples/ipsec-secgw/ipsec_worker.c | 649 +++++++++++ > > examples/ipsec-secgw/ipsec_worker.h | 41 + > > examples/ipsec-secgw/meson.build | 6 +- > > examples/ipsec-secgw/sa.c | 21 +- > > 
examples/ipsec-secgw/sad.h | 5 - > > 13 files changed, 3516 insertions(+), 134 deletions(-) create mode > > 100644 examples/ipsec-secgw/event_helper.c > > create mode 100644 examples/ipsec-secgw/event_helper.h > > create mode 100644 examples/ipsec-secgw/ipsec-secgw.h > > create mode 100644 examples/ipsec-secgw/ipsec_worker.c > > create mode 100644 examples/ipsec-secgw/ipsec_worker.h > > > > -- > > Have to say I didn't look extensively on event mode. > My primary concern was poll-mode and common code changes. > From that perspective - LGTM. > > Series Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com> > > > 2.7.4 ^ permalink raw reply [flat|nested] 147+ messages in thread
* Re: [dpdk-dev] [PATCH v5 00/15] add eventmode to ipsec-secgw 2020-03-12 5:32 ` Anoob Joseph @ 2020-03-12 5:55 ` Akhil Goyal 2020-03-12 9:57 ` [dpdk-dev] [EXT] " Lukas Bartosik 0 siblings, 1 reply; 147+ messages in thread From: Akhil Goyal @ 2020-03-12 5:55 UTC (permalink / raw) To: Anoob Joseph Cc: Jerin Jacob Kollanukkaran, Narayana Prasad Raju Athreya, Ankur Dwivedi, Archana Muniganti, Tejasree Kondoj, Vamsi Krishna Attunuru, dev, Ananyev, Konstantin, Lukas Bartosik, Nicolau, Radu, Thomas Monjalon Hi Anoob, Please send a release note update as a reply to this mail. I will update it while merging the patchset. Regards, Akhil > > Hi Akhil, > > Reminder. > > Do you have any further review comments? > > Thanks, > Anoob > > > -----Original Message----- > > From: Ananyev, Konstantin <konstantin.ananyev@intel.com> > > Sent: Tuesday, March 3, 2020 11:30 PM > > To: Lukas Bartosik <lbartosik@marvell.com>; Akhil Goyal > > <akhil.goyal@nxp.com>; Nicolau, Radu <radu.nicolau@intel.com>; Thomas > > Monjalon <thomas@monjalon.net> > > Cc: Jerin Jacob Kollanukkaran <jerinj@marvell.com>; Narayana Prasad Raju > > Athreya <pathreya@marvell.com>; Ankur Dwivedi <adwivedi@marvell.com>; > > Anoob Joseph <anoobj@marvell.com>; Archana Muniganti > > <marchana@marvell.com>; Tejasree Kondoj <ktejasree@marvell.com>; Vamsi > > Krishna Attunuru <vattunuru@marvell.com>; dev@dpdk.org > > Subject: [EXT] RE: [PATCH v5 00/15] add eventmode to ipsec-secgw > > > > External Email > > > > ---------------------------------------------------------------------- > > > > > > > > This series introduces event-mode additions to ipsec-secgw. > > > > > > With this series, ipsec-secgw would be able to run in eventmode. The > > > worker thread (executing loop) would be receiving events and would be > > > submitting it back to the eventdev after the processing. This way, > > > multicore scaling and h/w assisted scheduling is achieved by making > > > use of the eventdev capabilities. 
> > > > > > Since the underlying event device would be having varying > > > capabilities, the worker thread could be drafted differently to maximize > > performance. > > > This series introduces usage of multiple worker threads, among which > > > the one to be used will be determined by the operating conditions and > > > the underlying device capabilities. > > > > > > For example, if an event device - eth device pair has Tx internal > > > port, then application can do tx_adapter_enqueue() instead of regular > > > event_enqueue(). So a thread making an assumption that the device pair > > > has internal port will not be the right solution for another pair. The > > > infrastructure added with these patches aims to help application to > > > have multiple worker threads, there by extracting maximum performance > > > from every device without affecting existing paths/use cases. > > > > > > The eventmode configuration is predefined. All packets reaching one > > > eth port will hit one event queue. All event queues will be mapped to > > > all event ports. So all cores will be able to receive traffic from all ports. > > > When schedule_type is set as RTE_SCHED_TYPE_ORDERED/ATOMIC, event > > > device will ensure the ordering. Ordering would be lost when tried in > > PARALLEL. > > > > > > Following command line options are introduced, > > > > > > --transfer-mode: to choose between poll mode & event mode > > > --event-schedule-type: to specify the scheduling type > > > (RTE_SCHED_TYPE_ORDERED/ > > > RTE_SCHED_TYPE_ATOMIC/ > > > RTE_SCHED_TYPE_PARALLEL) > > > > > > Additionally the event mode introduces two modes of processing packets: > > > > > > Driver-mode: This mode will have bare minimum changes in the application > > > to support ipsec. There woudn't be any lookup etc done in > > > the application. And for inline-protocol use case, the > > > thread would resemble l2fwd as the ipsec processing would be > > > done entirely in the h/w. 
This mode can be used to benchmark > > > the raw performance of the h/w. All the application side > > > steps (like lookup) can be redone based on the requirement > > > of the end user. Hence the need for a mode which would > > > report the raw performance. > > > > > > App-mode: This mode will have all the features currently implemented with > > > ipsec-secgw (non librte_ipsec mode). All the lookups etc > > > would follow the existing methods and would report numbers > > > that can be compared against regular ipsec-secgw benchmark > > > numbers. > > > > > > The driver mode is selected with existing --single-sa option (used > > > also by poll mode). When --single-sa option is used in conjution with > > > event mode then index passed to --single-sa is ignored. > > > > > > Example commands to execute ipsec-secgw in various modes on OCTEON > TX2 > > > platform, > > > > > > #Inbound and outbound app mode > > > ipsec-secgw -w 0002:02:00.0,ipsec_in_max_spi=128 -w > > > 0002:03:00.0,ipsec_in_max_spi=128 -w 0002:0e:00.0 -w 0002:10:00.1 > > > --log-level=8 -c 0x1 -- -P -p 0x3 -u 0x1 -f aes-gcm.cfg > > > --transfer-mode event --event-schedule-type parallel > > > > > > #Inbound and outbound driver mode > > > ipsec-secgw -w 0002:02:00.0,ipsec_in_max_spi=128 -w > > > 0002:03:00.0,ipsec_in_max_spi=128 -w 0002:0e:00.0 -w 0002:10:00.1 > > > --log-level=8 -c 0x1 -- -P -p 0x3 -u 0x1 -f aes-gcm.cfg > > > --transfer-mode event --event-schedule-type parallel --single-sa 0 > > > > > > This series adds non burst tx internal port workers only. It provides > > > infrastructure for non internal port workers, however does not define > > > any. Also, only inline ipsec protocol mode is supported by the worker threads > > added. > > > > > > Following are planned features, > > > 1. Add burst mode workers. > > > 2. Add non internal port workers. > > > 3. Verify support for Rx core (the support is added but lack of h/w to verify). > > > 4. Add lookaside protocol support. 
> > > > > > Following are features that Marvell won't be attempting. > > > 1. Inline crypto support. > > > 2. Lookaside crypto support. > > > > > > For the features that Marvell won't be attempting, new workers can be > > > introduced by the respective stake holders. > > > > > > This series is tested on Marvell OCTEON TX2. > > > This series is targeted for 20.05 release. > > > > > > Changes in v5: > > > * Rename function check_params() to check_poll_mode_params() and > > > check_eh_conf() to check_event_mode_params() in order to make it clear > > > what is their purpose. > > > * Forbid usage of --config option in event mode. > > > * Replace magic numbers on return with enum values in > > process_ipsec_ev_inbound() > > > and process_ipsec_ev_outbound() functions. > > > * Add session_priv_pool for both inbound and outbound configuration in > > > ipsec_wrkr_non_burst_int_port_app_mode worker. > > > * Add check of event type in ipsec_wrkr_non_burst_int_port_app_mode > > worker. > > > * Update description of --config option in both ipsec-secgw help and > > documentation. > > > > > > Changes in v4: > > > * Update ipsec-secgw documentation to describe the new options as well as > > > event mode support. > > > * In event mode reserve number of crypto queues equal to number of eth > > ports > > > in order to meet inline protocol offload requirements. > > > * Add calculate_nb_mbufs() function to calculate number of mbufs in a pool > > > and include fragments table size into the calculation. > > > * Move structures ipsec_xf and ipsec_sad to ipsec.h and remove static > > keyword > > > from sa_out, nb_sa_out, sa_in and nb_sa_in in sa.c. > > > * Update process_ipsec_ev_inbound(), process_ipsec_ev_outbound(), > > check_sp() > > > and prepare_out_sessions_tbl() functions as a result of changes introduced > > > by SAD feature. > > > * Remove setting sa->cdev_id_qp to 0 in create_inline_session as sa_ctx > > > is created with rte_zmalloc. 
> > > * Minor cleanup enhancements: > > > - In eh_set_default_conf_eventdev() function in event_helper.c put > definition > > > of int local vars in one line, remove invalid comment, put > > > "eventdev_config->ev_queue_mode = > RTE_EVENT_QUEUE_CFG_ALL_TYPES" > > in one line > > > instead of two. > > > - Remove extern "C" from event_helper.h. > > > - Put local vars in reverse xmas tree order in eh_dev_has_rx_internal_port() > > and > > > eh_dev_has_tx_internal_port() functions in event_helper.c. > > > - Put #include <rte_bitmap.h> in alphabetical order in ipsec-secgw.c. > > > - Move "extern volatile bool force_quit" and "#include <stdbool.h>" to ipsec- > > secgw.h, > > > remove #include <rte_hash.h>. > > > - Remove not needed includes in ipsec_worker.c. > > > - Remove expired todo from ipsec_worker.h. > > > > > > Changes in v3: > > > * Move eh_conf_init() and eh_conf_uninit() functions to event_helper.c > > > including minor rework. > > > * Rename --schedule-type option to --event-schedule-type. > > > * Replace macro UNPROTECTED_PORT with static inline function > > > is_unprotected_port(). > > > * Move definitions of global variables used by multiple modules > > > to .c files and add externs in .h headers. > > > * Add eh_check_conf() which validates ipsec-secgw configuration > > > for event mode. > > > * Add dynamic calculation of number of buffers in a pool based > > > on number of cores, ports and crypto queues. > > > * Fix segmentation fault in event mode driver worker which happens > > > when there are no inline outbound sessions configured. > > > * Remove change related to updating number of crypto queues > > > in cryptodevs_init(). The update of crypto queues will be handled > > > in a separate patch. > > > * Fix compilation error on 32-bit platforms by using userdata instead > > > of udata64 from rte_mbuf. > > > > > > Changes in v2: > > > * Remove --process-dir option. 
Instead use existing unprotected port mask > > > option (-u) to decide wheter port handles inbound or outbound traffic. > > > * Remove --process-mode option. Instead use existing --single-sa option > > > to select between app and driver modes. > > > * Add handling of PKT_RX_SEC_OFFLOAD_FAIL result in app worker thread. > > > * Fix passing of req_rx_offload flags to create_default_ipsec_flow(). > > > * Move destruction of flows to a location where eth ports are stopped > > > and closed. > > > * Print error and exit when event mode --schedule-type option is used > > > in poll mode. > > > * Reduce number of goto statements replacing them with loop constructs. > > > * Remove sec_session_fixed table and replace it with locally build > > > table in driver worker thread. Table is indexed by port identifier > > > and holds first inline session pointer found for a given port. > > > * Print error and exit when sessions other than inline are configured > > > in event mode. > > > * When number of event queues is less than number of eth ports then > > > map all eth ports to one event queue. 
> > > * Cleanup and minor improvements in code as suggested by Konstantin > > > > > > Ankur Dwivedi (1): > > > examples/ipsec-secgw: add default rte flow for inline Rx > > > > > > Anoob Joseph (5): > > > examples/ipsec-secgw: add framework for eventmode helper > > > examples/ipsec-secgw: add eventdev port-lcore link > > > examples/ipsec-secgw: add Rx adapter support > > > examples/ipsec-secgw: add Tx adapter support > > > examples/ipsec-secgw: add routines to display config > > > > > > Lukasz Bartosik (9): > > > examples/ipsec-secgw: add routines to launch workers > > > examples/ipsec-secgw: add support for internal ports > > > examples/ipsec-secgw: add event helper config init/uninit > > > examples/ipsec-secgw: add eventmode to ipsec-secgw > > > examples/ipsec-secgw: add driver mode worker > > > examples/ipsec-secgw: add app mode worker > > > examples/ipsec-secgw: make number of buffers dynamic > > > doc: add event mode support to ipsec-secgw > > > examples/ipsec-secgw: reserve crypto queues in event mode > > > > > > doc/guides/sample_app_ug/ipsec_secgw.rst | 135 ++- > > > examples/ipsec-secgw/Makefile | 2 + > > > examples/ipsec-secgw/event_helper.c | 1812 > > ++++++++++++++++++++++++++++++ > > > examples/ipsec-secgw/event_helper.h | 327 ++++++ > > > examples/ipsec-secgw/ipsec-secgw.c | 506 +++++++-- > > > examples/ipsec-secgw/ipsec-secgw.h | 88 ++ > > > examples/ipsec-secgw/ipsec.c | 5 +- > > > examples/ipsec-secgw/ipsec.h | 53 +- > > > examples/ipsec-secgw/ipsec_worker.c | 649 +++++++++++ > > > examples/ipsec-secgw/ipsec_worker.h | 41 + > > > examples/ipsec-secgw/meson.build | 6 +- > > > examples/ipsec-secgw/sa.c | 21 +- > > > examples/ipsec-secgw/sad.h | 5 - > > > 13 files changed, 3516 insertions(+), 134 deletions(-) create mode > > > 100644 examples/ipsec-secgw/event_helper.c > > > create mode 100644 examples/ipsec-secgw/event_helper.h > > > create mode 100644 examples/ipsec-secgw/ipsec-secgw.h > > > create mode 100644 examples/ipsec-secgw/ipsec_worker.c > > 
> create mode 100644 examples/ipsec-secgw/ipsec_worker.h > > > > > > -- > > > > Have to say I didn't look extensively on event mode. > > My primary concern was poll-mode and common code changes. > > From that perspective - LGTM. > > > > Series Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com> > > > > > 2.7.4 ^ permalink raw reply [flat|nested] 147+ messages in thread
* Re: [dpdk-dev] [EXT] RE: [PATCH v5 00/15] add eventmode to ipsec-secgw 2020-03-12 5:55 ` Akhil Goyal @ 2020-03-12 9:57 ` Lukas Bartosik 2020-03-12 13:25 ` Akhil Goyal 0 siblings, 1 reply; 147+ messages in thread From: Lukas Bartosik @ 2020-03-12 9:57 UTC (permalink / raw) To: Akhil Goyal, Anoob Joseph Cc: Jerin Jacob Kollanukkaran, Narayana Prasad Raju Athreya, Ankur Dwivedi, Archana Muniganti, Tejasree Kondoj, Vamsi Krishna Attunuru, dev, Ananyev, Konstantin, Nicolau, Radu, Thomas Monjalon Hi Akhil, This is the release note proposal for the event mode feature. diff --git a/doc/guides/rel_notes/release_20_05.rst b/doc/guides/rel_notes/release_20_05.rst index 2190eaf..f8deda7 100644 --- a/doc/guides/rel_notes/release_20_05.rst +++ b/doc/guides/rel_notes/release_20_05.rst @@ -56,6 +56,14 @@ New Features Also, make sure to start the actual text at the margin. ========================================================= +* **Added event mode to ipsec-secgw application** + + Added event mode to ipsec-secgw application. The ipsec-secgw worker thread(s) + would be receiving events and would be submitting them back to the event device after + the processing. This way, multicore scaling and HW assisted scheduling is achieved + by making use of the event device capabilities. The event mode currently supports + only inline IPsec protocol offload. + Removed Items ------------- Thanks, Lukasz On 12.03.2020 06:55, Akhil Goyal wrote: > External Email > > ---------------------------------------------------------------------- > Hi Anoob, > > Please send a release note update as a reply to this mail. I will update it while merging the patchset. > > Regards, > Akhil >> >> Hi Akhil, >> >> Reminder. >> >> Do you have any further review comments? 
>> >> Thanks, >> Anoob >> >>> -----Original Message----- >>> From: Ananyev, Konstantin <konstantin.ananyev@intel.com> >>> Sent: Tuesday, March 3, 2020 11:30 PM >>> To: Lukas Bartosik <lbartosik@marvell.com>; Akhil Goyal >>> <akhil.goyal@nxp.com>; Nicolau, Radu <radu.nicolau@intel.com>; Thomas >>> Monjalon <thomas@monjalon.net> >>> Cc: Jerin Jacob Kollanukkaran <jerinj@marvell.com>; Narayana Prasad Raju >>> Athreya <pathreya@marvell.com>; Ankur Dwivedi <adwivedi@marvell.com>; >>> Anoob Joseph <anoobj@marvell.com>; Archana Muniganti >>> <marchana@marvell.com>; Tejasree Kondoj <ktejasree@marvell.com>; Vamsi >>> Krishna Attunuru <vattunuru@marvell.com>; dev@dpdk.org >>> Subject: [EXT] RE: [PATCH v5 00/15] add eventmode to ipsec-secgw >>> >>> External Email >>> >>> ---------------------------------------------------------------------- >>> >>>> >>>> This series introduces event-mode additions to ipsec-secgw. >>>> >>>> With this series, ipsec-secgw would be able to run in eventmode. The >>>> worker thread (executing loop) would be receiving events and would be >>>> submitting it back to the eventdev after the processing. This way, >>>> multicore scaling and h/w assisted scheduling is achieved by making >>>> use of the eventdev capabilities. >>>> >>>> Since the underlying event device would be having varying >>>> capabilities, the worker thread could be drafted differently to maximize >>> performance. >>>> This series introduces usage of multiple worker threads, among which >>>> the one to be used will be determined by the operating conditions and >>>> the underlying device capabilities. >>>> >>>> For example, if an event device - eth device pair has Tx internal >>>> port, then application can do tx_adapter_enqueue() instead of regular >>>> event_enqueue(). So a thread making an assumption that the device pair >>>> has internal port will not be the right solution for another pair. 
The >>>> infrastructure added with these patches aims to help application to >>>> have multiple worker threads, there by extracting maximum performance >>>> from every device without affecting existing paths/use cases. >>>> >>>> The eventmode configuration is predefined. All packets reaching one >>>> eth port will hit one event queue. All event queues will be mapped to >>>> all event ports. So all cores will be able to receive traffic from all ports. >>>> When schedule_type is set as RTE_SCHED_TYPE_ORDERED/ATOMIC, event >>>> device will ensure the ordering. Ordering would be lost when tried in >>> PARALLEL. >>>> >>>> Following command line options are introduced, >>>> >>>> --transfer-mode: to choose between poll mode & event mode >>>> --event-schedule-type: to specify the scheduling type >>>> (RTE_SCHED_TYPE_ORDERED/ >>>> RTE_SCHED_TYPE_ATOMIC/ >>>> RTE_SCHED_TYPE_PARALLEL) >>>> >>>> Additionally the event mode introduces two modes of processing packets: >>>> >>>> Driver-mode: This mode will have bare minimum changes in the application >>>> to support ipsec. There woudn't be any lookup etc done in >>>> the application. And for inline-protocol use case, the >>>> thread would resemble l2fwd as the ipsec processing would be >>>> done entirely in the h/w. This mode can be used to benchmark >>>> the raw performance of the h/w. All the application side >>>> steps (like lookup) can be redone based on the requirement >>>> of the end user. Hence the need for a mode which would >>>> report the raw performance. >>>> >>>> App-mode: This mode will have all the features currently implemented with >>>> ipsec-secgw (non librte_ipsec mode). All the lookups etc >>>> would follow the existing methods and would report numbers >>>> that can be compared against regular ipsec-secgw benchmark >>>> numbers. >>>> >>>> The driver mode is selected with existing --single-sa option (used >>>> also by poll mode). 
When --single-sa option is used in conjution with >>>> event mode then index passed to --single-sa is ignored. >>>> >>>> Example commands to execute ipsec-secgw in various modes on OCTEON >> TX2 >>>> platform, >>>> >>>> #Inbound and outbound app mode >>>> ipsec-secgw -w 0002:02:00.0,ipsec_in_max_spi=128 -w >>>> 0002:03:00.0,ipsec_in_max_spi=128 -w 0002:0e:00.0 -w 0002:10:00.1 >>>> --log-level=8 -c 0x1 -- -P -p 0x3 -u 0x1 -f aes-gcm.cfg >>>> --transfer-mode event --event-schedule-type parallel >>>> >>>> #Inbound and outbound driver mode >>>> ipsec-secgw -w 0002:02:00.0,ipsec_in_max_spi=128 -w >>>> 0002:03:00.0,ipsec_in_max_spi=128 -w 0002:0e:00.0 -w 0002:10:00.1 >>>> --log-level=8 -c 0x1 -- -P -p 0x3 -u 0x1 -f aes-gcm.cfg >>>> --transfer-mode event --event-schedule-type parallel --single-sa 0 >>>> >>>> This series adds non burst tx internal port workers only. It provides >>>> infrastructure for non internal port workers, however does not define >>>> any. Also, only inline ipsec protocol mode is supported by the worker threads >>> added. >>>> >>>> Following are planned features, >>>> 1. Add burst mode workers. >>>> 2. Add non internal port workers. >>>> 3. Verify support for Rx core (the support is added but lack of h/w to verify). >>>> 4. Add lookaside protocol support. >>>> >>>> Following are features that Marvell won't be attempting. >>>> 1. Inline crypto support. >>>> 2. Lookaside crypto support. >>>> >>>> For the features that Marvell won't be attempting, new workers can be >>>> introduced by the respective stake holders. >>>> >>>> This series is tested on Marvell OCTEON TX2. >>>> This series is targeted for 20.05 release. >>>> >>>> Changes in v5: >>>> * Rename function check_params() to check_poll_mode_params() and >>>> check_eh_conf() to check_event_mode_params() in order to make it clear >>>> what is their purpose. >>>> * Forbid usage of --config option in event mode. 
>>>> * Replace magic numbers on return with enum values in process_ipsec_ev_inbound()
>>>>   and process_ipsec_ev_outbound() functions.
>>>> * Add session_priv_pool for both inbound and outbound configuration in
>>>>   ipsec_wrkr_non_burst_int_port_app_mode worker.
>>>> * Add check of event type in ipsec_wrkr_non_burst_int_port_app_mode worker.
>>>> * Update description of --config option in both ipsec-secgw help and
>>>>   documentation.
>>>>
>>>> Changes in v4:
>>>> * Update ipsec-secgw documentation to describe the new options as well as
>>>>   event mode support.
>>>> * In event mode reserve number of crypto queues equal to number of eth ports
>>>>   in order to meet inline protocol offload requirements.
>>>> * Add calculate_nb_mbufs() function to calculate number of mbufs in a pool
>>>>   and include fragments table size into the calculation.
>>>> * Move structures ipsec_xf and ipsec_sad to ipsec.h and remove static keyword
>>>>   from sa_out, nb_sa_out, sa_in and nb_sa_in in sa.c.
>>>> * Update process_ipsec_ev_inbound(), process_ipsec_ev_outbound(), check_sp()
>>>>   and prepare_out_sessions_tbl() functions as a result of changes introduced
>>>>   by the SAD feature.
>>>> * Remove setting sa->cdev_id_qp to 0 in create_inline_session as sa_ctx
>>>>   is created with rte_zmalloc.
>>>> * Minor cleanup enhancements:
>>>>   - In eh_set_default_conf_eventdev() function in event_helper.c put definition
>>>>     of int local vars in one line, remove invalid comment, put
>>>>     "eventdev_config->ev_queue_mode = RTE_EVENT_QUEUE_CFG_ALL_TYPES"
>>>>     in one line instead of two.
>>>>   - Remove extern "C" from event_helper.h.
>>>>   - Put local vars in reverse xmas tree order in eh_dev_has_rx_internal_port()
>>>>     and eh_dev_has_tx_internal_port() functions in event_helper.c.
>>>>   - Put #include <rte_bitmap.h> in alphabetical order in ipsec-secgw.c.
>>>>   - Move "extern volatile bool force_quit" and "#include <stdbool.h>" to
>>>>     ipsec-secgw.h, remove #include <rte_hash.h>.
>>>>   - Remove unneeded includes in ipsec_worker.c.
>>>>   - Remove expired todo from ipsec_worker.h.
>>>>
>>>> Changes in v3:
>>>> * Move eh_conf_init() and eh_conf_uninit() functions to event_helper.c
>>>>   including minor rework.
>>>> * Rename --schedule-type option to --event-schedule-type.
>>>> * Replace macro UNPROTECTED_PORT with static inline function
>>>>   is_unprotected_port().
>>>> * Move definitions of global variables used by multiple modules
>>>>   to .c files and add externs in .h headers.
>>>> * Add eh_check_conf() which validates ipsec-secgw configuration
>>>>   for event mode.
>>>> * Add dynamic calculation of number of buffers in a pool based
>>>>   on number of cores, ports and crypto queues.
>>>> * Fix segmentation fault in event mode driver worker which happens
>>>>   when there are no inline outbound sessions configured.
>>>> * Remove change related to updating number of crypto queues
>>>>   in cryptodevs_init(). The update of crypto queues will be handled
>>>>   in a separate patch.
>>>> * Fix compilation error on 32-bit platforms by using userdata instead
>>>>   of udata64 from rte_mbuf.
>>>>
>>>> Changes in v2:
>>>> * Remove --process-dir option. Instead use existing unprotected port mask
>>>>   option (-u) to decide whether a port handles inbound or outbound traffic.
>>>> * Remove --process-mode option. Instead use existing --single-sa option
>>>>   to select between app and driver modes.
>>>> * Add handling of PKT_RX_SEC_OFFLOAD_FAIL result in app worker thread.
>>>> * Fix passing of req_rx_offload flags to create_default_ipsec_flow().
>>>> * Move destruction of flows to a location where eth ports are stopped
>>>>   and closed.
>>>> * Print error and exit when event mode --schedule-type option is used
>>>>   in poll mode.
>>>> * Reduce number of goto statements, replacing them with loop constructs.
>>>> * Remove sec_session_fixed table and replace it with a locally built
>>>>   table in the driver worker thread. The table is indexed by port identifier
>>>>   and holds the first inline session pointer found for a given port.
>>>> * Print error and exit when sessions other than inline are configured
>>>>   in event mode.
>>>> * When number of event queues is less than number of eth ports then
>>>>   map all eth ports to one event queue.
>>>> * Cleanup and minor improvements in code as suggested by Konstantin.
>>>>
>>>> Ankur Dwivedi (1):
>>>>   examples/ipsec-secgw: add default rte flow for inline Rx
>>>>
>>>> Anoob Joseph (5):
>>>>   examples/ipsec-secgw: add framework for eventmode helper
>>>>   examples/ipsec-secgw: add eventdev port-lcore link
>>>>   examples/ipsec-secgw: add Rx adapter support
>>>>   examples/ipsec-secgw: add Tx adapter support
>>>>   examples/ipsec-secgw: add routines to display config
>>>>
>>>> Lukasz Bartosik (9):
>>>>   examples/ipsec-secgw: add routines to launch workers
>>>>   examples/ipsec-secgw: add support for internal ports
>>>>   examples/ipsec-secgw: add event helper config init/uninit
>>>>   examples/ipsec-secgw: add eventmode to ipsec-secgw
>>>>   examples/ipsec-secgw: add driver mode worker
>>>>   examples/ipsec-secgw: add app mode worker
>>>>   examples/ipsec-secgw: make number of buffers dynamic
>>>>   doc: add event mode support to ipsec-secgw
>>>>   examples/ipsec-secgw: reserve crypto queues in event mode
>>>>
>>>>  doc/guides/sample_app_ug/ipsec_secgw.rst |  135 ++-
>>>>  examples/ipsec-secgw/Makefile            |    2 +
>>>>  examples/ipsec-secgw/event_helper.c      | 1812 ++++++++++++++++++++++++++++++
>>>>  examples/ipsec-secgw/event_helper.h      |  327 ++++++
>>>>  examples/ipsec-secgw/ipsec-secgw.c       |  506 +++++++--
>>>>  examples/ipsec-secgw/ipsec-secgw.h       |   88 ++
>>>>  examples/ipsec-secgw/ipsec.c             |    5 +-
>>>>  examples/ipsec-secgw/ipsec.h             |   53 +-
>>>>  examples/ipsec-secgw/ipsec_worker.c      |  649 +++++++++++
>>>>  examples/ipsec-secgw/ipsec_worker.h      |   41 +
>>>>  examples/ipsec-secgw/meson.build         |    6 +-
>>>>  examples/ipsec-secgw/sa.c                |   21 +-
>>>>  examples/ipsec-secgw/sad.h               |    5 -
>>>>  13 files changed, 3516 insertions(+), 134 deletions(-)
>>>>  create mode 100644 examples/ipsec-secgw/event_helper.c
>>>>  create mode 100644 examples/ipsec-secgw/event_helper.h
>>>>  create mode 100644 examples/ipsec-secgw/ipsec-secgw.h
>>>>  create mode 100644 examples/ipsec-secgw/ipsec_worker.c
>>>>  create mode 100644 examples/ipsec-secgw/ipsec_worker.h
>>>>
>>>> --
>>>
>>> Have to say I didn't look extensively on event mode.
>>> My primary concern was poll-mode and common code changes.
>>> From that perspective - LGTM.
>>>
>>> Series Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
>>>
>>>> 2.7.4

^ permalink raw reply	[flat|nested] 147+ messages in thread
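[Editor's note: the v3 changelog above mentions replacing the UNPROTECTED_PORT macro with a static inline is_unprotected_port() helper, and the v2 changelog reuses the existing -u unprotected port mask to decide the traffic direction per port. A minimal sketch of what such a helper could look like, assuming -u is parsed into a port bitmask (the variable and mask value here are illustrative, not taken from the patch):]

```c
#include <assert.h>
#include <stdint.h>

/* Stand-in for the mask built from the -u command line option;
 * bit N set means eth port N carries unprotected (inbound) traffic. */
static uint32_t unprotected_port_mask = 0x5; /* ports 0 and 2 unprotected */

/* Type-safe replacement for an UNPROTECTED_PORT(port) macro. */
static inline int
is_unprotected_port(uint16_t port_id)
{
	return (unprotected_port_mask & (1U << port_id)) != 0;
}
```

A worker can then branch to inbound or outbound processing based on the Rx port of each packet, with no separate --process-dir option needed.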
* Re: [dpdk-dev] [EXT] RE: [PATCH v5 00/15] add eventmode to ipsec-secgw
  2020-03-12  9:57       ` [dpdk-dev] [EXT] " Lukas Bartosik
@ 2020-03-12 13:25         ` Akhil Goyal
  0 siblings, 0 replies; 147+ messages in thread
From: Akhil Goyal @ 2020-03-12 13:25 UTC (permalink / raw)
  To: Lukas Bartosik, Anoob Joseph
  Cc: Jerin Jacob Kollanukkaran, Narayana Prasad Raju Athreya,
	Ankur Dwivedi, Archana Muniganti, Tejasree Kondoj,
	Vamsi Krishna Attunuru, dev, Ananyev, Konstantin, Nicolau, Radu,
	Thomas Monjalon

> Hi Akhil,
>
> This is release note proposal for event mode feature.
>
> diff --git a/doc/guides/rel_notes/release_20_05.rst
> b/doc/guides/rel_notes/release_20_05.rst
> index 2190eaf..f8deda7 100644
> --- a/doc/guides/rel_notes/release_20_05.rst
> +++ b/doc/guides/rel_notes/release_20_05.rst
> @@ -56,6 +56,14 @@ New Features
>       Also, make sure to start the actual text at the margin.
>       =========================================================
>
> +* **Added event mode to ipsec-secgw application **
> +
> +  Added event mode to ipsec-secgw application. The ipsec-secgw worker thread(s)
> +  would be receiving events and would be submitting it back to the event device
> +  after the processing. This way, multicore scaling and HW assisted scheduling
> +  is achieved by making use of the event device capabilities. The event mode
> +  currently supports only inline IPsec protocol offload.
> +

Modified it as below.

+* **Added event mode to ipsec-secgw application.**
+
+  Updated ipsec-secgw application to add event based packet processing. The worker
+  thread(s) would receive events and submit them back to the event device after
+  the processing. This way, multicore scaling and HW assisted scheduling is achieved
+  by making use of the event device capabilities. The event mode currently supports
+  only inline IPsec protocol offload.
+

> >>> Series Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>

Series Acked-by: Akhil Goyal <akhil.goyal@nxp.com>

Applied to dpdk-next-crypto

I may do some minor changes later while submitting pull request to master.

Thanks.

^ permalink raw reply	[flat|nested] 147+ messages in thread
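[Editor's note: the v4 changelog above introduces a calculate_nb_mbufs() helper that sizes the mbuf pool dynamically and folds the fragments table size into the result. The exact formula is not reproduced in this thread; the following is a hedged sketch of the kind of calculation described (descriptor rings per port, per-core burst buffers, plus the reassembly table, clamped to a floor), with illustrative constants and parameter names:]

```c
#include <assert.h>
#include <stdint.h>

#define MAX_PKT_BURST	32	/* illustrative per-lcore burst size */
#define MIN_NB_MBUF	8192u	/* illustrative lower bound for the pool */

/* Sketch only: a real implementation would also account for crypto
 * queue depths and per-lcore mempool caches. */
static uint32_t
calculate_nb_mbufs(uint16_t nb_ports, uint16_t nb_cores,
		   uint32_t nb_rxd, uint32_t nb_txd, uint32_t frag_tbl_sz)
{
	uint32_t nb = (uint32_t)nb_ports * (nb_rxd + nb_txd) +
		      (uint32_t)nb_cores * MAX_PKT_BURST +
		      frag_tbl_sz;

	return nb > MIN_NB_MBUF ? nb : MIN_NB_MBUF;
}
```

For a small setup (1 port, 1 core, 1024-entry rings, no fragmentation) the floor dominates; with more ports and a fragments table the per-resource terms take over.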