From mboxrd@z Thu Jan  1 00:00:00 1970
From: Christian Ehrhardt
Date: Tue, 12 Apr 2022 11:54:54 +0200
Subject: Re: [PATCH 19.11] net/mlx5: fix mark enabling for Rx
To: Raja Zidane
Cc: stable@dpdk.org, matan@nvidia.com
References: <20220412092739.10758-1-rzidane@nvidia.com>
In-Reply-To: <20220412092739.10758-1-rzidane@nvidia.com>
List-Id: patches for DPDK stable branches
Content-Type: text/plain; charset="UTF-8"

On Tue, Apr 12, 2022 at 11:27 AM Raja Zidane wrote:
>
> To optimize the datapath, the mlx5 PMD checked for the mark action on
> flow creation, flagged the possible destination rxqs (through queue/RSS
> actions), and then enabled the mark action logic only for the flagged rxqs.
>
> The mark action didn't work if no queue/RSS action was in the same flow,
> even when the user used multi-group logic to manage the flows.
> So, if the mark action was performed in group X and the packet was
> forwarded to the Rx queues in group Y > X, SW did not get the mark ID
> into the mbuf.
>
> Flag the Rx datapath to report the mark action for any queue when the
> driver detects the first mark action after the dev_start operation.

Thanks, enqueued for 19.11.13.

> Signed-off-by: Raja Zidane
> Acked-by: Matan Azrad
> ---
>  drivers/net/mlx5/mlx5.h      |  1 +
>  drivers/net/mlx5/mlx5_flow.c | 37 ++++++++++++++++++++++--------------
>  drivers/net/mlx5/mlx5_rxtx.h |  1 -
>  3 files changed, 24 insertions(+), 15 deletions(-)
>
> diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
> index 9f6b355182..e59662a99d 100644
> --- a/drivers/net/mlx5/mlx5.h
> +++ b/drivers/net/mlx5/mlx5.h
> @@ -739,6 +739,7 @@ struct mlx5_priv {
>  	unsigned int counter_fallback:1; /* Use counter fallback management. */
>  	unsigned int mtr_en:1; /* Whether support meter. */
>  	unsigned int mtr_reg_share:1; /* Whether support meter REG_C share. */
> +	uint32_t mark_enabled:1; /* If mark action is enabled on rxqs. */
>  	unsigned int root_verbs_drop_action; /* Root uses verbs drop action. */
>  	uint16_t domain_id; /* Switch domain identifier. */
>  	uint16_t vport_id; /* Associated VF vport index (if any). */
> diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
> index 0d73eebcfc..1a6f90912a 100644
> --- a/drivers/net/mlx5/mlx5_flow.c
> +++ b/drivers/net/mlx5/mlx5_flow.c
> @@ -728,12 +728,11 @@ flow_rxq_tunnel_ptype_update(struct mlx5_rxq_ctrl *rxq_ctrl)
>   *   Pointer to device flow structure.
>   */
>  static void
> -flow_drv_rxq_flags_set(struct rte_eth_dev *dev, struct mlx5_flow *dev_flow)
> +flow_drv_rxq_flags_set(struct rte_eth_dev *dev, struct mlx5_flow *dev_flow,
> +		       int mark)
>  {
>  	struct mlx5_priv *priv = dev->data->dev_private;
>  	struct rte_flow *flow = dev_flow->flow;
> -	const int mark = !!(dev_flow->actions &
> -			    (MLX5_FLOW_ACTION_FLAG | MLX5_FLOW_ACTION_MARK));
>  	const int tunnel = !!(dev_flow->layers & MLX5_FLOW_LAYER_TUNNEL);
>  	unsigned int i;
>
> @@ -752,10 +751,8 @@ flow_drv_rxq_flags_set(struct rte_eth_dev *dev, struct mlx5_flow *dev_flow)
>  		    priv->config.dv_xmeta_en != MLX5_XMETA_MODE_LEGACY &&
>  		    mlx5_flow_ext_mreg_supported(dev)) {
>  			rxq_ctrl->rxq.mark = 1;
> -			rxq_ctrl->flow_mark_n = 1;
>  		} else if (mark) {
>  			rxq_ctrl->rxq.mark = 1;
> -			rxq_ctrl->flow_mark_n++;
>  		}
>  		if (tunnel) {
>  			unsigned int j;
> @@ -774,6 +771,20 @@ flow_drv_rxq_flags_set(struct rte_eth_dev *dev, struct mlx5_flow *dev_flow)
>  	}
>  }
>
> +static void
> +flow_rxq_mark_flag_set(struct rte_eth_dev *dev)
> +{
> +	struct mlx5_priv *priv = dev->data->dev_private;
> +	struct mlx5_rxq_ctrl *rxq_ctrl;
> +
> +	if (priv->mark_enabled)
> +		return;
> +	LIST_FOREACH(rxq_ctrl, &priv->rxqsctrl, next) {
> +		rxq_ctrl->rxq.mark = 1;
> +	}
> +	priv->mark_enabled = 1;
> +}
> +
>  /**
>   * Set the Rx queue flags (Mark/Flag and Tunnel Ptypes) for a flow
>   *
> @@ -786,9 +797,14 @@ static void
>  flow_rxq_flags_set(struct rte_eth_dev *dev, struct rte_flow *flow)
>  {
>  	struct mlx5_flow *dev_flow;
> -
> +	int mark = 0;
>  	LIST_FOREACH(dev_flow, &flow->dev_flows, next)
> -		flow_drv_rxq_flags_set(dev, dev_flow);
> +		mark = mark | (!!(dev_flow->actions & (MLX5_FLOW_ACTION_FLAG |
> +						       MLX5_FLOW_ACTION_MARK)));
> +	if (mark)
> +		flow_rxq_mark_flag_set(dev);
> +	LIST_FOREACH(dev_flow, &flow->dev_flows, next)
> +		flow_drv_rxq_flags_set(dev, dev_flow, mark);
>  }
>
>  /**
> @@ -805,8 +821,6 @@ flow_drv_rxq_flags_trim(struct rte_eth_dev *dev, struct mlx5_flow *dev_flow)
>  {
>  	struct mlx5_priv *priv = dev->data->dev_private;
>  	struct rte_flow *flow = dev_flow->flow;
> -	const int mark = !!(dev_flow->actions &
> -			    (MLX5_FLOW_ACTION_FLAG | MLX5_FLOW_ACTION_MARK));
>  	const int tunnel = !!(dev_flow->layers & MLX5_FLOW_LAYER_TUNNEL);
>  	unsigned int i;
>
> @@ -821,10 +835,6 @@ flow_drv_rxq_flags_trim(struct rte_eth_dev *dev, struct mlx5_flow *dev_flow)
>  		    priv->config.dv_xmeta_en != MLX5_XMETA_MODE_LEGACY &&
>  		    mlx5_flow_ext_mreg_supported(dev)) {
>  			rxq_ctrl->rxq.mark = 1;
> -			rxq_ctrl->flow_mark_n = 1;
> -		} else if (mark) {
> -			rxq_ctrl->flow_mark_n--;
> -			rxq_ctrl->rxq.mark = !!rxq_ctrl->flow_mark_n;
>  		}
>  		if (tunnel) {
>  			unsigned int j;
> @@ -881,7 +891,6 @@ flow_rxq_flags_clear(struct rte_eth_dev *dev)
>  			continue;
>  		rxq_ctrl = container_of((*priv->rxqs)[i],
>  					struct mlx5_rxq_ctrl, rxq);
> -		rxq_ctrl->flow_mark_n = 0;
>  		rxq_ctrl->rxq.mark = 0;
>  		for (j = 0; j != MLX5_FLOW_TUNNEL; ++j)
>  			rxq_ctrl->flow_tunnels_n[j] = 0;
> diff --git a/drivers/net/mlx5/mlx5_rxtx.h b/drivers/net/mlx5/mlx5_rxtx.h
> index 34ec66a3ae..68a935ca66 100644
> --- a/drivers/net/mlx5/mlx5_rxtx.h
> +++ b/drivers/net/mlx5/mlx5_rxtx.h
> @@ -198,7 +198,6 @@ struct mlx5_rxq_ctrl {
>  	unsigned int socket; /* CPU socket ID for allocations. */
>  	unsigned int irq:1; /* Whether IRQ is enabled. */
>  	unsigned int dbr_umem_id_valid:1; /* dbr_umem_id holds a valid value. */
> -	uint32_t flow_mark_n; /* Number of Mark/Flag flows using this Queue. */
>  	uint32_t flow_tunnels_n[MLX5_FLOW_TUNNEL]; /* Tunnels counters. */
>  	uint32_t wqn; /* WQ number. */
>  	uint16_t dump_file_n; /* Number of dump files. */
> --
> 2.21.0
>

--
Christian Ehrhardt
Staff Engineer, Ubuntu Server
Canonical Ltd
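
For readers following the logic of the fix: the patch only decides when a
queue's "mark" flag gets set; it is the Rx burst path that consumes it. Below
is a minimal, self-contained sketch of that consuming side, showing why a
queue whose flag stays unset silently drops the mark ID (the multi-group case
the patch addresses). This is not the mlx5 datapath code; all struct, field,
and function names here (cqe, rxq, rx_fill_mark, PKT_RX_FDIR_ID value) are
hypothetical stand-ins.

#include <stdint.h>
#include <stdio.h>

struct cqe {                 /* simplified completion queue entry */
	uint32_t flow_mark;  /* mark ID written by the NIC, 0 = none */
};

struct mbuf {                /* simplified packet descriptor */
	uint64_t ol_flags;
	uint32_t hash_fdir;  /* where the mark ID is reported to the app */
};

struct rxq {
	unsigned int mark:1; /* set on all queues once any flow uses MARK */
};

#define PKT_RX_FDIR_ID (1ULL << 0) /* illustrative flag bit only */

static void
rx_fill_mark(const struct rxq *rxq, const struct cqe *cqe, struct mbuf *m)
{
	/*
	 * Without rxq->mark the ID is never copied into the mbuf, which was
	 * the reported bug: the marking flow lived in group X, the queue/RSS
	 * flow in group Y > X, so only group Y's queues had the flag set.
	 */
	if (rxq->mark && cqe->flow_mark != 0) {
		m->ol_flags |= PKT_RX_FDIR_ID;
		m->hash_fdir = cqe->flow_mark;
	}
}

int
main(void)
{
	struct rxq q_marked = { .mark = 1 }, q_unmarked = { .mark = 0 };
	struct cqe c = { .flow_mark = 42 };
	struct mbuf m1 = { 0 }, m2 = { 0 };

	rx_fill_mark(&q_marked, &c, &m1);
	rx_fill_mark(&q_unmarked, &c, &m2);
	printf("marked queue: id=%u, unmarked queue: id=%u\n",
	       m1.hash_fdir, m2.hash_fdir);
	return 0;
}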