From: Ajit Khaparde
Date: Thu, 8 Oct 2020 15:30:40 -0700
To: Andrey Vesnovaty
Cc: dpdk-dev, jer@marvell.com, Jerin Jacob, Thomas Monjalon,
 Ferruh Yigit, Stephen Hemminger, Bruce Richardson, Ori Kam,
 Slava Ovsiienko, andrey.vesnovaty@gmail.com, Ray Kinsella,
 Neil Horman, Samik Gupta, Andrew Rybchenko
In-Reply-To: <20201008115143.13208-2-andreyv@nvidia.com>
References: <20200702120511.16315-1-andreyv@mellanox.com>
 <20201008115143.13208-1-andreyv@nvidia.com>
 <20201008115143.13208-2-andreyv@nvidia.com>
Subject: Re: [dpdk-dev] [PATCH v7 1/2] ethdev: add flow shared action API
List-Id: DPDK patches and discussions

On Thu, Oct 8, 2020 at 4:51 AM Andrey Vesnovaty wrote:
>
> This commit introduces an extension of the DPDK flow action API that
> enables sharing a single rte_flow_action among multiple flows. The API
> is intended for PMDs, where multiple HW-offloaded flows can reuse the
> same HW essence/object representing a flow action, and where modifying
> such an essence/object affects all the rules using it.
>
> Motivation and example
> ===
>
> Adding or removing one or more queues to the RSS used by multiple flow
> rules imposes a per-rule toll with the current DPDK flow API; the
> scenario requires, for each flow rule sharing the cloned RSS action:
> - a call to `rte_flow_destroy()`
> - a call to `rte_flow_create()` with the modified RSS action
>
> An API for sharing an action and updating it in place brings two
> benefits:
> - reduced overhead when reconfiguring multiple RSS flow rules
> - optimized resource utilization by sharing the action across multiple
>   flows
>
> Change description
> ===
>
> Shared action
> ===
> In order to represent a flow action shared by multiple flows, the new
> action type RTE_FLOW_ACTION_TYPE_SHARED is introduced (see `enum
> rte_flow_action_type`).
> The introduced API decouples an action from any specific flow and
> enables sharing of a single action, by its handle, across multiple
> flows.
>
> Shared action create/use/destroy
> ===
> A shared action may be in use by several flow rules, or by none, at any
> given moment, i.e. a shared action resides outside of the context of
> any flow. A shared action represents the HW resources/objects used to
> implement action offloading.
> The API for shared action creation (see
> `rte_flow_shared_action_create()`):
> - should allocate the HW resources and perform the related
>   initializations required by the shared action implementation
> - should make the necessary preparations to maintain shared access to
>   the action's resources, configuration and state
> The API for shared action destruction (see
> `rte_flow_shared_action_destroy()`) should release the HW resources and
> perform the related cleanups required by the shared action
> implementation.
>
> In order to share a flow action, reuse the handle of type
> `struct rte_flow_shared_action` returned by
> rte_flow_shared_action_create() as the `conf` field of
> `struct rte_flow_action` (see the "example" section).
>
> If a shared action is not used by any flow rule, all resources
> allocated for it can be released by rte_flow_shared_action_destroy()
> (see the "example" section). The shared action handle passed as an
> argument to the destroy API should not be used any further, i.e. the
> result of such usage is undefined.
>
> Shared action re-configuration
> ===
> Shared action behavior, as defined by its configuration, can be updated
> via rte_flow_shared_action_update() (see the "example" section). The
> update operation modifies the HW-related resources/objects allocated at
> action creation. The number of operations performed by the update
> should not depend on the number of flows sharing the related action. On
> return from the update API, the action behaves according to the updated
> configuration for all flows sharing it.
>
> Shared action query
> ===
> A separate API is provided to query shared action state (see
> rte_flow_shared_action_query()). Taking a counter as an example: the
> query returns a value aggregating the counter increments across all
> flow rules sharing the counter. This API doesn't query the shared
> action configuration, since that is controlled by
> rte_flow_shared_action_create() and rte_flow_shared_action_update()
> and is not supposed to change by other means.
>
> PMD support
> ===
> Support for the introduced API is a purely PMD-specific design and
> responsibility for each action type (see struct rte_flow_ops).
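>
> A hypothetical sketch of the PMD-side wiring (the pmd_* names are
> illustrative assumptions, and the callback signatures below mirror the
> public API rather than quoting this patch's rte_flow_driver.h hunk):
>
>     #include <errno.h>
>     #include <rte_flow_driver.h>
>
>     /* Stand-in for a driver-specific HW object and its allocator. */
>     struct pmd_hw_object;
>     struct pmd_hw_object *pmd_hw_object_alloc(struct rte_eth_dev *dev,
>                     const struct rte_flow_shared_action_conf *conf,
>                     const struct rte_flow_action *action);
>
>     static struct rte_flow_shared_action *
>     pmd_shared_action_create(struct rte_eth_dev *dev,
>                              const struct rte_flow_shared_action_conf *conf,
>                              const struct rte_flow_action *action,
>                              struct rte_flow_error *error)
>     {
>             /* Allocate/program one HW object; every rule referencing
>              * the returned handle reuses that single object. */
>             struct pmd_hw_object *obj =
>                     pmd_hw_object_alloc(dev, conf, action);
>
>             if (obj == NULL) {
>                     rte_flow_error_set(error, ENOMEM,
>                                        RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
>                                        NULL, "cannot allocate HW object");
>                     return NULL;
>             }
>             return (struct rte_flow_shared_action *)obj;
>     }
>
>     static const struct rte_flow_ops pmd_flow_ops = {
>             /* ...existing callbacks... */
>             .shared_action_create = pmd_shared_action_create,
>             /* .shared_action_destroy / _update / _query wired similarly */
>     };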
>
> testpmd
> ===
> In order to utilize the introduced API, the testpmd CLI may implement
> the following extensions to create/update/destroy/query shared actions
> accordingly:
>
> flow shared_action (port) create {action_id (id)} (action) / end
> flow shared_action (port) update (id) (action) / end
> flow shared_action (port) destroy action_id (id) {action_id (id) [...]}
> flow shared_action (port) query (id)
>
> testpmd example
> ===
>
> configure rss to queues 1 & 2
>
> > flow shared_action 0 create action_id 100 rss queues 1 2 end / end
>
> create flow rule utilizing shared action
>
> > flow create 0 ingress \
>     pattern eth dst is 0c:42:a1:15:fd:ac / ipv6 / tcp / end \
>     actions shared 100 / end
>
> add 2 more queues
>
> > flow shared_action 0 modify 100 rss queues 1 2 3 4 end / end
>
> example
> ===
>
> struct rte_flow_action actions[2];
> struct rte_flow_shared_action_conf conf;
> struct rte_flow_action action;
> /* skipped: initialize conf and action */
> struct rte_flow_shared_action *handle =
>         rte_flow_shared_action_create(port_id, &conf, &action, &error);
> actions[0].type = RTE_FLOW_ACTION_TYPE_SHARED;
> actions[0].conf = handle;
> actions[1].type = RTE_FLOW_ACTION_TYPE_END;
> /* skipped: init attr0 & pattern0 args */
> struct rte_flow *flow0 = rte_flow_create(port_id, &attr0, pattern0,
>                                          actions, &error);
> /* create more rules reusing shared action */
> struct rte_flow *flow1 = rte_flow_create(port_id, &attr1, pattern1,
>                                          actions, &error);
> /* skipped: for flows 2 till N */
> struct rte_flow *flowN = rte_flow_create(port_id, &attrN, patternN,
>                                          actions, &error);
> /* update shared action */
> struct rte_flow_action updated_action;
> /*
>  * skipped: initialize updated_action according to desired action
>  * configuration change
>  */
> rte_flow_shared_action_update(port_id, handle, &updated_action, &error);
> /*
>  * from now on all flows 0 till N will act according to configuration of
>  * updated_action
>  */
> /* skipped: destroy all flows 0 till N */
> rte_flow_shared_action_destroy(port_id, handle, &error);
>
> Signed-off-by: Andrey Vesnovaty
> Acked-by: Ori Kam

Since this is an ethdev patch, the testpmd description is really not
required. Moreover, they are not in sync with the direction and other
changes you made in the testpmd patch. Also, there is a typo inline.
Other than that..
Acked-by: Ajit Khaparde

> ---
>  doc/guides/prog_guide/rte_flow.rst       |  19 +++
>  doc/guides/rel_notes/release_20_11.rst   |   9 ++
>  lib/librte_ethdev/rte_ethdev_version.map |   4 +
>  lib/librte_ethdev/rte_flow.c             |  84 +++++++++++
>  lib/librte_ethdev/rte_flow.h             | 169 ++++++++++++++++++++++-
>  lib/librte_ethdev/rte_flow_driver.h      |  23 +++
>  6 files changed, 307 insertions(+), 1 deletion(-)
>
[snip]

> +
> +/**
> + * RTE_FLOW_ACTION_TYPE_SHARED
> + *
> + * Opaque type returned after successfully creating a shared action.
> + *
> + * This handle can be used to manage and query the related action:
> + * - share it across multiple flow rules
> + * - update action configuration
> + * - query action data
> + * - destroy action
> + */
> +struct rte_flow_shared_action;
> +
>  /* Mbuf dynamic field offset for metadata. */
>  extern int32_t rte_flow_dynf_metadata_offs;
>
> @@ -3357,6 +3380,150 @@ int
>  rte_flow_get_aged_flows(uint16_t port_id, void **contexts,
>                          uint32_t nb_contexts, struct rte_flow_error *error);
>
> +/**
> + * Specify shared action configuration
> + */
> +struct rte_flow_shared_action_conf {
> +        /**
> +         * Flow direction for shared action configuration.
> +         *
> +         * Shred action should be valid at least for one flow direction,

s/Shred/Shared

> +         * otherwise it is invalid for both ingress and egress rules.
> +         */
> +        uint32_t ingress:1;
> +        /**< Action valid for rules applied to ingress traffic. */
> +        uint32_t egress:1;
> +        /**< Action valid for rules applied to egress traffic. */
> +};

[snip]
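To make the direction bits and the create/query split above concrete,
here is a minimal application-side sketch (assumptions: a counter action
can be created as shared on the given port, and the rule creation that
would reference the handle is skipped as in the commit message example):

    #include <stdio.h>
    #include <inttypes.h>
    #include <rte_flow.h>

    /* Sketch: create a shared counter valid for ingress rules only, then
     * query the value aggregated across all rules referencing it. */
    static void
    shared_counter_example(uint16_t port_id)
    {
            struct rte_flow_error error;
            struct rte_flow_shared_action_conf conf = {
                    .ingress = 1,
                    .egress = 0,
            };
            const struct rte_flow_action count_action = {
                    .type = RTE_FLOW_ACTION_TYPE_COUNT,
            };
            struct rte_flow_query_count stats = { .reset = 0 };
            struct rte_flow_shared_action *handle;

            handle = rte_flow_shared_action_create(port_id, &conf,
                                                   &count_action, &error);
            if (handle == NULL)
                    return; /* the PMD rejected it; see error.message */

            /* skipped: create rules referencing the handle through
             * RTE_FLOW_ACTION_TYPE_SHARED, as in the example above */

            if (rte_flow_shared_action_query(port_id, handle,
                                             &stats, &error) == 0)
                    printf("aggregated hits: %" PRIu64 "\n", stats.hits);

            rte_flow_shared_action_destroy(port_id, handle, &error);
    }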