From: Kevin Traynor
Organization: Red Hat
To: Adrien Mazarguil
Cc: dev@dpdk.org, Thomas Monjalon, Pablo de Lara, Olivier Matz,
 sugesh.chandran@intel.com
Date: Wed, 14 Dec 2016 16:11:15 +0000
Subject: Re: [dpdk-dev] [PATCH 01/22] ethdev: introduce generic flow API
In-Reply-To: <20161214135423.GZ10340@6wind.com>

On 12/14/2016 01:54 PM, Adrien Mazarguil wrote:
>>
>>>>>>> + * @param[out] error
>>>>>>> + *   Perform verbose error reporting if not NULL.
>>>>>>> + *
>>>>>>> + * @return
>>>>>>> + *   0 on success, a negative errno value otherwise and rte_errno is set.
>>>>>>> + */
>>>>>>> +int
>>>>>>> +rte_flow_query(uint8_t port_id,
>>>>>>> +               struct rte_flow *flow,
>>>>>>> +               enum rte_flow_action_type action,
>>>>>>> +               void *data,
>>>>>>> +               struct rte_flow_error *error);
>>>>>>> +
>>>>>>> +#ifdef __cplusplus
>>>>>>> +}
>>>>>>> +#endif
>>>>>>
>>>>>> I don't see a way to dump all the rules for a port out. I think this is
>>>>>> necessary for debugging. You could have a look through dpif.h in OVS
>>>>>> and see how dpif_flow_dump_next() is used, it might be a good reference.
>>>>>
>>>>> DPDK does not maintain flow rules and, depending on hardware capabilities
>>>>> and level of compliance, PMDs do not necessarily do it either,
>>>>> particularly since it requires space and applications probably have a
>>>>> better method to store these pointers for their own needs.
>>>>
>>>> understood
>>>>
>>>>>
>>>>> What you see here is only a PMD interface. Depending on application
>>>>> needs, generic helper functions built on top of these may be added to
>>>>> manage flow rules in the future.
>>>>
>>>> I'm thinking of the case where something goes wrong and I want to get a
>>>> dump of all the flow rules from hardware, not query the rules I think I
>>>> have. I don't see a way to do it or something to build a helper on top of?
>>>
>>> Generic helper functions would exist on top of this API and would likely
>>> maintain a list of flow rules themselves. The dump in that case would be
>>> entirely implemented in software. I think that recovering flow rules from
>>> HW may be complicated in many cases (even without taking storage
>>> allocation and rule conversion issues into account), therefore if there
>>> is really a need for it, we could perhaps add a dump() function that PMDs
>>> are free to implement later.
>>>
>>
>> ok. Maybe some more generic stats that can be retrieved from the hardware
>> would suffice to help debugging, like total flow rule hits/misses (i.e.
>> not on a per flow rule basis).
>>
>> You can get this from the software flow caches and it's widely used for
>> debugging. e.g.
>>
>> pmd thread numa_id 0 core_id 3:
>>         emc hits:0
>>         megaflow hits:0
>>         avg. subtable lookups per hit:0.00
>>         miss:0
>>
>
> Perhaps a rule such as the following could do the trick:
>
>  group: 42 (or priority 42)
>  pattern: void
>  actions: count / passthru
>
> Assuming useful flow rules are defined with higher priorities (using a
> lower group ID or priority level) and provide a terminating action, this
> one would count all packets that were not caught by them.
>
> That is one example to illustrate how "global" counters can be requested
> by applications.
>
> Otherwise you could just make sure all rules contain mark / flag actions,
> in which case mbufs would tell directly if they went through them or need
> additional SW processing.
>

ok, sounds like there are some options at least to work with on this, which
is good. thanks.
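
To make the suggested catch-all rule concrete, a sketch written against the
structures proposed in this series follows. The helper name is made up for
illustration, and an empty pattern (only an END item) stands in for
"pattern: void", since both match all traffic:

#include <rte_flow.h>

/* Hypothetical helper: install a lowest-priority match-all rule whose
 * only job is to count packets that no higher-priority rule terminated. */
static struct rte_flow *
catchall_counter_create(uint8_t port_id, struct rte_flow_error *error)
{
        struct rte_flow_attr attr = {
                .group = 42,    /* or .priority = 42 */
                .ingress = 1,
        };
        /* Empty pattern: matches every packet. */
        struct rte_flow_item pattern[] = {
                { .type = RTE_FLOW_ITEM_TYPE_END },
        };
        /* Count, then pass through so the rule is non-terminating and
         * packets continue to normal RX processing. */
        struct rte_flow_action actions[] = {
                { .type = RTE_FLOW_ACTION_TYPE_COUNT },
                { .type = RTE_FLOW_ACTION_TYPE_PASSTHRU },
                { .type = RTE_FLOW_ACTION_TYPE_END },
        };

        return rte_flow_create(port_id, &attr, pattern, actions, error);
}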
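
Reading the counter back would go through the rte_flow_query() prototype
quoted at the top of this message, together with the struct
rte_flow_query_count defined elsewhere in the series; again a sketch under
those assumptions, not part of the patch:

#include <inttypes.h>
#include <stdio.h>
#include <rte_flow.h>

/* Hypothetical usage: query the COUNT action of the catch-all rule to see
 * how many packets were missed by all terminating HW rules. */
static void
catchall_counter_print(uint8_t port_id, struct rte_flow *flow)
{
        struct rte_flow_query_count count = { .reset = 0 };
        struct rte_flow_error error;

        if (rte_flow_query(port_id, flow, RTE_FLOW_ACTION_TYPE_COUNT,
                           &count, &error) == 0 && count.hits_set)
                printf("packets missed by HW rules: %" PRIu64 "\n",
                       count.hits);
}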
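
For the mark / flag alternative, assuming these actions map to the mbuf
flags the API ended up documenting (MARK surfacing as PKT_RX_FDIR_ID with
the value in hash.fdir.hi, FLAG as PKT_RX_FDIR), the RX-side check could
look like this hypothetical helper:

#include <rte_mbuf.h>

/* Returns nonzero if the mbuf went through a HW rule carrying a mark or
 * flag action; stores the mark value when one is present. */
static inline int
pkt_hit_hw_rule(const struct rte_mbuf *m, uint32_t *mark)
{
        if (m->ol_flags & PKT_RX_FDIR_ID) {
                *mark = m->hash.fdir.hi; /* value from the MARK action */
                return 1;
        }
        return (m->ol_flags & PKT_RX_FDIR) != 0; /* FLAG action, no value */
}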