From: David Marchand
Date: Thu, 15 Jun 2023 10:00:41 +0200
Subject: Re: [RFC PATCH] ethdev: advertise flow restore in mbuf
To: Slava Ovsiienko
Cc: dev@dpdk.org, Thomas Monjalon, i.maximets@ovn.org, Aman Singh, Yuying Zhang, Matan Azrad, Andrew Rybchenko, Ferruh Yigit, Ori Kam
List-Id: DPDK patches and discussions

On Wed, Jun 14, 2023 at 6:46 PM Slava Ovsiienko wrote:
>
> Hi, David
>
> It looks like a good application datapath optimization, as for me.
> But I see some concerns:
>
> 1. Are we sure the PMD should register the flag, not the application?
> IIRC, usually the application registers the needed flags/fields and PMDs just follow.

I agree, this should come from the application.
Maybe this flag should be resolved when the application calls
rte_eth_rx_metadata_negotiate().
For an unknown reason, mlx5 does not implement this op at the moment,
but I guess this would be a good place to check for the dv_xmeta_en
stuff, too.

> +	if (!sh->tunnel_hub && sh->config.dv_miss_info) {
> +		err = mlx5_flow_restore_info_register();
> +		if (err) {
> +			DRV_LOG(ERR, "Could not register mbuf dynflag for rte_flow_get_restore_info");
> +			goto error;
> +		}
>
> 2. This might have some general mlx5 Rx datapath performance impact (very minor though).
> It introduces an extra memory access (instead of an immediate value). We should test thoroughly.

> @@ -857,7 +857,7 @@ rxq_cq_to_mbuf(struct mlx5_rxq_data *rxq, struct rte_mbuf *pkt,
> 		if (MLX5_FLOW_MARK_IS_VALID(mark)) {
> 			pkt->ol_flags |= RTE_MBUF_F_RX_FDIR;
> 			if (mark != RTE_BE32(MLX5_FLOW_MARK_DEFAULT)) {
> -				pkt->ol_flags |= RTE_MBUF_F_RX_FDIR_ID;
> +				pkt->ol_flags |= rxq->mark_flag;
> 				pkt->hash.fdir.hi = mlx5_flow_mark_get(mark);
> 			}
> 		}

I am curious about the impact of such a change too.

> 3. RTE_MBUF_F_RX_FDIR_ID is also handled in the vectorized rx_burst() routines.
> Please, see:
> - mlx5_rxtx_vec_altivec.h
> - mlx5_rxtx_vec_neon.h
> - mlx5_rxtx_vec_sse.h
> This code must be updated, and it also might have some general performance impact
> (regardless of using tunnel offload - for all cases).

Indeed, I forgot to squash some later changes I was experimenting with.
Well, the numbers will tell us.
I'll send an RFC v2.

-- 
David Marchand