From: Cliff Burdick
Date: Wed, 24 Jul 2024 07:49:58 -0700
Subject: Re: ConnectX-7 400GbE wire-rate?
To: Yasuhiro Ohara
Cc: Thomas Monjalon <thomas@monjalon.net>, users@dpdk.org

To answer your original question, yes, I've hit 400Gbps many times,
provided there were enough queues/cores. You can also see that test 12.2
in the report here achieved line rate:
https://fast.dpdk.org/doc/perf/DPDK_23_11_NVIDIA_NIC_performance_report.pdf
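
As a rough sketch, not a definitive recipe: the core list, queue counts and
descriptor sizes below are placeholders, the PCIe address is the one from
your logs, and the exact flags depend on your DPDK version. A multi-queue
txonly run with testpmd looks something like this:

    # spread TX across 16 queues, one forwarding lcore per queue
    dpdk-testpmd -l 0-16 -n 4 -a 0000:01:00.0 -- \
        --nb-cores=16 --txq=16 --rxq=16 \
        --txd=2048 --rxd=2048 \
        --forward-mode=txonly --txonly-multi-flow \
        --txpkts=1500 --stats-period=1

The key point is spreading TX across enough queues, one lcore per queue;
a single queue is unlikely to get anywhere near 400Gbps.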

On Wed, Jul 24, 2024 at 4:58 AM Yasuhiro Ohara <yasu1976@gmail.com> wrote:

> Hi Thomas,
> Thank you for getting back to this.
>
> From what we have seen, it seemed to be a Mellanox firmware issue
> rather than a pktgen or DPDK issue.
> When we were using ConnectX-7 with fw ver 28.41.1000
> on a Core i9 machine, it limited its tx bandwidth to 100Gbps,
> though the physical max should be 400Gbps.
> If we lowered the fw version to 28.39.2048 or 28.39.3004,
> it could send 256Gbps on the same Core i9 machine.
>
> Our bandwidth demonstration event completed successfully,
> so we are fine with this limited understanding for now.
> I think my colleague can help if someone wants to debug
> this issue further.
>
> Thank you anyway.
>
> Regards,
> Yasu
>
> On Tue, Jul 23, 2024 at 2:37, Thomas Monjalon <thomas@monjalon.net> wrote:
> >
> > Hello,
> >
> > I see there is no answer.
> >
> > Did you try with testpmd?
> > I don't know whether it could be a limitation of dpdk-pktgen.
> >
> >
> > 29/05/2024 03:32, Yasuhiro Ohara:
> > > Hi. My colleague is trying to generate 400Gbps traffic using ConnectX-7,
> > > but with no luck.
> > >
> > > Has anyone succeeded in generating 400Gbps (frames larger than 1500B
> > > are fine), or are there any known issues?
> > >
> > > Using dpdk-pktgen, the link is successfully recognized as 400GbE,
> > > but when we start the traffic, it seems to be capped at 100Gbps.
> > >
> > > Some info follows.
> > >
> > > [    1.490256] mlx5_core 0000:01:00.0: firmware version: 28.41.1000
> > > [    1.492853] mlx5_core 0000:01:00.0: 504.112 Gb/s available PCIe
> > > bandwidth (32.0 GT/s PCIe x16 link)
> > > [    1.805419] mlx5_core 0000:01:00.0: Rate limit: 127 rates are
> > > supported, range: 0Mbps to 195312Mbps
> > > [    1.808477] mlx5_core 0000:01:00.0: E-Switch: Total vports 18, per
> > > vport: max uc(128) max mc(2048)
> > > [    1.827270] mlx5_core 0000:01:00.0: Port module event: module 0,
> > > Cable unplugged
> > > [    1.830317] mlx5_core 0000:01:00.0: mlx5_pcie_event:298:(pid 9):
> > > PCIe slot advertised sufficient power (75W).
> > > [    1.830610] mlx5_core 0000:01:00.0: MLX5E: StrdRq(1) RqSz(8)
> > > StrdSz(2048) RxCqeCmprss(0)
> > > [    1.929813] mlx5_core 0000:01:00.0: Supported tc offload range -
> > > chains: 4294967294, prios: 4294967295
> > >
> > > Device type:    ConnectX7
> > > Name:           MCX75310AAS-NEA_Ax
> > > Description:    NVIDIA ConnectX-7 HHHL Adapter card; 400GbE / NDR IB
> > > (default mode); Single-port OSFP; PCIe 5.0 x16; Crypto Disabled;
> > > Secure Boot Enabled;
> > > Device:         /dev/mst/mt4129_pciconf0
> > >
> > > Enabled Link Speed (Ext.)       : 0x00010000 (400G_4X)
> > > Supported Cable Speed (Ext.)    : 0x00010800 (400G_4X,100G_1X)
> > >
> > > pktgen result:
> > >
> > > Pkts/s Rx      :        0
> > >        TX      :  8059264
> > >
> > > MBits/s Rx/Tx  :  0/97742
> > >
> > >
> > > Any info is appreciated. Thanks.
> > >
> > > regards,
> > > Yasu
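
For anyone else comparing firmware versions on these cards: the running
firmware can be checked with the standard Mellanox firmware tools. A sketch,
assuming the mft package is installed and using the /dev/mst device path
from the thread above:

    # start the Mellanox Software Tools service to create /dev/mst devices
    mst start
    # query the card; the output includes the running FW version
    flint -d /dev/mst/mt4129_pciconf0 query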