From: Dmitriy Stepanov
Date: Fri, 11 Mar 2022 14:10:33 +0300
Subject: Flow rules performance with ConnectX-6 Dx
To: users@dpdk.org

Hi, folks!

I'm using a Mellanox ConnectX-6 Dx EN adapter card (100GbE; dual-port QSFP56; PCIe 4.0/3.0 x16) with DPDK 21.11 on Ubuntu 20.04.

I want to drop particular packets in the NIC hardware using rte_flow. The flow configuration is rather straightforward: RTE_FLOW_ACTION_TYPE_DROP as the single action and RTE_FLOW_ITEM_TYPE_ETH / RTE_FLOW_ITEM_TYPE_IPV4 as the pattern items (I used the flow_filtering DPDK example as a starting point).

I'm using the following IPv4 pattern for the rte_flow drop rule: 0.0.0.0/0 as the source IP and 10.0.0.2/32 as the destination IP. So I want to drop all packets addressed to 10.0.0.2, regardless of source IP.
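In code, the rule I create is roughly the following simplified sketch (adapted from the flow_filtering example; the port ID is passed in and error handling is trimmed):

    #include <stdint.h>
    #include <rte_flow.h>
    #include <rte_ip.h>
    #include <rte_byteorder.h>

    /* Drop every IPv4 packet destined to 10.0.0.2/32, from any source. */
    static struct rte_flow *
    create_drop_rule(uint16_t port_id, struct rte_flow_error *error)
    {
        struct rte_flow_attr attr = { .ingress = 1 };

        struct rte_flow_item_ipv4 ip_spec = {
            .hdr.dst_addr = RTE_BE32(RTE_IPV4(10, 0, 0, 2)),
        };
        struct rte_flow_item_ipv4 ip_mask = {
            .hdr.dst_addr = RTE_BE32(0xffffffff), /* /32 on the destination */
            /* src_addr mask left at 0 => 0.0.0.0/0, i.e. any source */
        };

        struct rte_flow_item pattern[] = {
            { .type = RTE_FLOW_ITEM_TYPE_ETH },
            { .type = RTE_FLOW_ITEM_TYPE_IPV4,
              .spec = &ip_spec, .mask = &ip_mask },
            { .type = RTE_FLOW_ITEM_TYPE_END },
        };

        struct rte_flow_action actions[] = {
            { .type = RTE_FLOW_ACTION_TYPE_DROP },
            { .type = RTE_FLOW_ACTION_TYPE_END },
        };

        if (rte_flow_validate(port_id, &attr, pattern, actions, error) != 0)
            return NULL;
        return rte_flow_create(port_id, &attr, pattern, actions, error);
    }

(For a quick reproduction outside my application, the same rule should be expressible in testpmd as: flow create 0 ingress pattern eth / ipv4 dst is 10.0.0.2 / end actions drop / end.)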
To test this, I generate TCP packets with two different destination IP addresses, 10.0.0.1 and 10.0.0.2. Source IPs are generated randomly in the range 10.0.0.0-10.255.255.255. Half of the traffic should be blocked and the other half passed to my application.

If I generate 20 Mpps in total, I see that 10 Mpps is dropped by rte_flow and 10 Mpps is passed to my application, so everything is fine there. But if I increase the input traffic to 40/50/100/148 Mpps, at most 15 Mpps is passed to my application, regardless of the input rate; the rest of the traffic is dropped. I checked that my generator produces packets correctly: the destination IPs are equally distributed across the generated traffic. If I generate packets that don't match the rte_flow drop rule (e.g. with destination IPs 10.0.0.1 and 10.0.0.3), all traffic is passed to my application without problems.

Another example: if I generate traffic with three different destination IPs (10.0.0.1, 10.0.0.2, 10.0.0.3) at 60 Mpps (20 Mpps per destination, where 10.0.0.2 matches the rte_flow drop rule), only 30 Mpps in total is passed to my application (15 Mpps for each non-matching destination instead of 20 Mpps). If I replace 10.0.0.2 (which matches the drop rule) with 10.0.0.4, all 60 Mpps are passed to my application.

To summarize: if the generated traffic includes a destination IP that matches the rte_flow drop rule, each non-matching destination gets at most 15 Mpps. If the traffic doesn't include a destination IP that matches the drop rule, the 15 Mpps limit doesn't apply and all of the traffic is passed to my application.

Is there any explanation for such behavior, or am I doing something wrong? I haven't found any explanation in the mlx5 PMD documentation.

Thanks,
Dmitriy Stepanov