From: Mukul Sinha
Date: Mon, 16 Dec 2024 16:34:00 +0530
Subject: Re: GCP cloud : Virtio-PMD performance Issue
To: Maxime Coquelin
Cc: Joshua Washington, chenbox@nvidia.com, Jeroen de Borst, Rushil Gupta, Srinivasa Srikanth Podila, Tathagat Priyadarshi, Samar Yadav, Varun LA, dev@dpdk.org
charset="UTF-8" Content-Transfer-Encoding: quoted-printable Thanks Maxime, We will analyse further and try pinpointing the regression commit between DPDK-21.11 & DPDK-21.08. Will get back with further queries once we have an update. On Fri, Dec 13, 2024 at 4:17=E2=80=AFPM Maxime Coquelin wrote: > (with DPDK ML that got removed) > > On 12/13/24 11:46, Maxime Coquelin wrote: > > > > > > On 12/13/24 11:21, Mukul Sinha wrote: > >> Thanks @joshwash@google.com @Maxime > >> Coquelin for the inputs. > >> > >> @Maxime Coquelin > >> I did code bisecting and was able to pin-point through test-pmd run > >> that *this issue we are starting to see since DPDK-21.11 version > >> onwards. Till DPDK-21.08 this issue is not seen.* > >> To remind the issue what we see is that while actual amount of > >> serviced traffic by the hypervisor remains almost same between the two > >> versions (implying the underlying GCP hypervisor is only capable of > >> handling that much) but in >=3Ddpdk-21.11 versions the virtio-PMD is > >> pushing almost 20x traffic compared to dpdk-21.08 (This humongous > >> traffic rate in >=3Ddpdk-21.11 versions leads to high packet drop rat= es > >> since the underlying hypervisor is only capable of max handling the > >> same load it was servicing in <=3Ddpdk-21.08) > >> The same pattern can be seen even if we run traffic for a longer > >> duration. > >> > >> *_Eg:_* > >> Testpmd traffic run (for packet-size=3D1518) for exact same > >> time-interval of 15 seconds: > >> > >> _*>=3D21.11 DPDK version*_ > >> ---------------------- Forward statistics for port 0 > >> ---------------------- > >> RX-packets: 2 RX-dropped: 0 RX-total: 2 > >> TX-packets: 19497570 *TX-dropped: 364674686 * TX-total: 38417225= 6 > >> > -------------------------------------------------------------------------= --- > >> _*Upto 21.08 DPDK version *_ > >> ---------------------- Forward statistics for port 0 > >> ---------------------- > >> RX-packets: 3 RX-dropped: 0 RX-total: 3 > >> TX-packets: 19480319 TX-dropped: 0 TX-total: > >> 19480319 > >> > -------------------------------------------------------------------------= --- > >> > >> As you can see > >> >=3Ddpdk-21.11 > >> Packets generated : 384 million Packets serviced : ~19.5 million : > >> Tx-dropped : 364 million > >> <=3Ddpdk-21.08 > >> Packets generated : ~19.5 million Packets serviced : ~19.5 million : > >> Tx-dropped : 0 > >> > >> > =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D= =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D > >> @Maxime Coquelin > >> I have run through all the commits made by virtio-team between > >> DPDK-21.11 and DPDK-21.08 as per the commit-logs available at > >> https://git.dpdk.org/dpdk/log/drivers/net/virtio > >> > >> I even tried undoing all the possible relevant commits (I could think > >> of) on a dpdk-21.11 workspace & then re-running testpmd in order to > >> track down which commit has introduced this regression but no luck. > >> Need your inputs further if you could glance through the commits made > >> in between these releases and let us know if there's any particular > >> commit of interest which you think can cause the behavior as seen > >> above (or if there's any commit not captured in the above git link; > >> maybe a commit checkin outside the virtio PMD code perhaps?). 
> >
> > As your issue seems 100% reproducible, using git bisect you should be
> > able to point to the commit introducing the regression.
> >
> > This is what I need to be able to help you.
> >
> > Regards,
> > Maxime
> >
> >>
> >> Thanks,
> >> Mukul
> >>
> >>
> >> On Mon, Dec 9, 2024 at 9:54 PM Joshua Washington wrote:
> >>
> >>     Hello,
> >>
> >>     Based on your VM shape (8 vCPU VM) and packet size (1518B packets),
> >>     what you are seeing is exactly expected. 8 vCPU Gen 2 VMs have a
> >>     default egress cap of 16 Gbps. This equates to roughly 1.3 Mpps when
> >>     using 1518B packets, including IFG. Over the course of 15 seconds,
> >>     19.5 million packets should be sent, which matches both cases. The
> >>     difference here seems to be what happens in DPDK, not GCP. I don't
> >>     believe that packet drops on the host NIC are captured in DPDK
> >>     stats; likely the descriptor ring just filled up because the egress
> >>     bandwidth cap was hit and queue servicing was throttled. This would
> >>     cause a TX burst to return fewer packets than the burst size. The
> >>     difference between 20.05 and 22.11 might have to do with this
> >>     reporting, or a change in testpmd logic for when to send new bursts
> >>     of traffic.
> >>
> >>     Best,
> >>     Josh
> >>
> >>
> >>     On Mon, Dec 9, 2024, 07:39 Mukul Sinha wrote:
> >>
> >>         GCP-dev team
> >>         @jeroendb@google.com @rushilg@google.com @joshwash@google.com
> >>         Can you please check the following email & get back?
> >>
> >>
> >>         On Fri, Dec 6, 2024 at 2:04 AM Mukul Sinha wrote:
> >>
> >>             Hi GCP & Virtio-PMD dev teams,
> >>             We are from the VMware NSX Advanced Load Balancer team. In
> >>             GCP-cloud (*custom-8-8192 VM instance type, 8 core 8 GB*)
> >>             we are triaging a TCP profile application throughput
> >>             performance issue with a single dispatcher core and a
> >>             single Rx/Tx queue (queue depth: 2048): the throughput we
> >>             get using the dpdk-22.11 virtio-PMD code is degraded
> >>             significantly compared to when using the dpdk-20.05 PMD.
> >>             We see a high Tx packet drop counter incrementing on the
> >>             virtio NIC, pointing to the GCP hypervisor side being
> >>             unable to drain the packets fast enough (no drops are seen
> >>             on the Rx side).
> >>             The behavior is like this:
> >>             _Using dpdk-22.11_
> >>             At 75% CPU usage itself we start seeing a huge number of
> >>             Tx packet drops reported (no Rx drops), causing TCP
> >>             retransmissions and eventually bringing down the effective
> >>             throughput numbers.
> >>             _Using dpdk-20.05_
> >>             Even at ~95% CPU usage, without any packet drops (neither
> >>             Rx nor Tx), we are able to get a much better throughput.
> >>
> >>             To improve performance numbers with dpdk-22.11 we have
> >>             tried increasing the queue depth to 4096, but that didn't
> >>             help.
> >>             If with dpdk-22.11 we move from single core Rx/Tx queue=1
> >>             to single core Rx/Tx queue=2, we are able to get slightly
> >>             better numbers (but still not matching the numbers obtained
> >>             using dpdk-20.05 with single core Rx/Tx queue=1). This
> >>             again corroborates the fact that the GCP hypervisor is the
> >>             bottleneck here.
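
To make the egress-cap arithmetic above explicit (a rough back-of-the-envelope
check, assuming ~20 B of preamble/IFG overhead per Ethernet frame):

  1518 B frame + ~20 B preamble/IFG = ~1538 B = ~12304 bits on the wire
  16,000,000,000 bits/s / 12304 bits per frame = ~1.3 Mpps
  1.3 Mpps x 15 s = ~19.5 million frames

which matches the ~19.5 million TX-packets actually transmitted in both the
older and newer DPDK runs quoted in this thread.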
> >>
> >>             To root-cause this issue we were able to replicate the
> >>             behavior using native DPDK testpmd as shown below (cmds
> >>             used):
> >>             Hugepage size: 2 MB
> >>             ./app/dpdk-testpmd -l 0-1 -n 1 -- -i --nb-cores=1
> >>             --txd=2048 --rxd=2048 --rxq=1 --txq=1 --portmask=0x3
> >>             set fwd mac
> >>             set fwd flowgen
> >>             set txpkts 1518
> >>             start
> >>             stop
> >>
> >>             Testpmd traffic run (for packet-size=1518) for the exact
> >>             same time-interval of 15 seconds:
> >>
> >>             _22.11_
> >>               ---------------------- Forward statistics for port 0 ----------------------
> >>               RX-packets: 2         RX-dropped: 0             RX-total: 2
> >>               TX-packets: 19497570  *TX-dropped: 364674686 *  TX-total: 384172256
> >>               ----------------------------------------------------------------------------
> >>             _20.05_
> >>               ---------------------- Forward statistics for port 0 ----------------------
> >>               RX-packets: 3         RX-dropped: 0             RX-total: 3
> >>               TX-packets: 19480319  TX-dropped: 0             TX-total: 19480319
> >>               ----------------------------------------------------------------------------
> >>
> >>             As you can see:
> >>             dpdk-22.11
> >>             Packets generated: 384 million; packets serviced: ~19.5
> >>             million; Tx-dropped: 364 million
> >>             dpdk-20.05
> >>             Packets generated: ~19.5 million; packets serviced: ~19.5
> >>             million; Tx-dropped: 0
> >>
> >>             Actual serviced traffic remains almost the same between
> >>             the two versions (implying the underlying GCP hypervisor
> >>             is only capable of handling that much), but in dpdk-22.11
> >>             the PMD is pushing almost 20x the traffic compared to
> >>             dpdk-20.05.
> >>             The same pattern can be seen even if we run traffic for a
> >>             longer duration.
> >>
> >>             ======================================================================
> >>
> >>             Following are our queries:
> >>             @ Virtio-dev team
> >>             1. Why, in dpdk-22.11 using the virtio PMD, is the testpmd
> >>             application able to pump 20 times the Tx traffic towards
> >>             the hypervisor compared to dpdk-20.05?
> >>             What has changed, either in the virtio-PMD or in the
> >>             virtio-PMD & underlying hypervisor communication, that
> >>             causes this behavior?
> >>             The actual traffic serviced by the hypervisor remains
> >>             almost on par with dpdk-20.05, but it is the humongous
> >>             packet drop count which can be overall detrimental for any
> >>             DPDK application running a TCP traffic profile.
> >>             Is there a way to slow down the number of packets sent
> >>             towards the hypervisor (through either a code change in
> >>             the virtio-PMD or a config setting) and make it on par
> >>             with dpdk-20.05 performance?
> >>             2. In the published Virtio performance report Release
> >>             22.11 we see no qualification of throughput numbers done
> >>             on GCP-cloud. Are there any internal performance benchmark
> >>             numbers you have for GCP-cloud, and if yes, can you please
> >>             share them with us so that we can check if there are any
> >>             configs/knobs/settings you used to get optimum performance?
> >>
> >>             @ GCP-cloud dev team
> >>             As we can see, any amount of traffic greater than what can
> >>             be successfully serviced by the GCP hypervisor is all
> >>             getting dropped, hence we need help from your side to
> >>             reproduce this issue in your in-house setup, preferably
> >>             using the same VM instance type as highlighted before.
> >>             We need further investigation by you from the GCP host
> >>             level to check parameters like running out of Tx buffers,
> >>             queue-full conditions for the virtio NIC, or the number of
> >>             NIC Rx/Tx kernel threads, to determine what is causing the
> >>             hypervisor to not keep up with the traffic load pumped in
> >>             dpdk-22.11.
> >>             Based on your debugging we would additionally need inputs
> >>             on what can be tweaked, or any knobs/settings that can be
> >>             configured at the GCP-VM level, to get better performance
> >>             numbers.
> >>
> >>             Please feel free to reach out to us for any further queries.
> >>
> >>             _Additional outputs for debugging:_
> >>             lspci | grep Eth
> >>             00:06.0 Ethernet controller: Red Hat, Inc. Virtio network device
> >>             root@dcg15-se-ecmyw:/home/admin/dpdk/build# ethtool -i eth0
> >>             driver: virtio_net
> >>             version: 1.0.0
> >>             firmware-version:
> >>             expansion-rom-version:
> >>             bus-info: 0000:00:06.0
> >>             supports-statistics: yes
> >>             supports-test: no
> >>             supports-eeprom-access: no
> >>             supports-register-dump: no
> >>             supports-priv-flags: no
> >>
> >>             testpmd> show port info all
> >>             ********************* Infos for port 0 *********************
> >>             MAC address: 42:01:0A:98:A0:0F
> >>             Device name: 0000:00:06.0
> >>             Driver name: net_virtio
> >>             Firmware-version: not available
> >>             Connect to socket: 0
> >>             memory allocation on the socket: 0
> >>             Link status: up
> >>             Link speed: Unknown
> >>             Link duplex: full-duplex
> >>             Autoneg status: On
> >>             MTU: 1500
> >>             Promiscuous mode: disabled
> >>             Allmulticast mode: disabled
> >>             Maximum number of MAC addresses: 64
> >>             Maximum number of MAC addresses of hash filtering: 0
> >>             VLAN offload:
> >>               strip off, filter off, extend off, qinq strip off
> >>             No RSS offload flow type is supported.
> >>             Minimum size of RX buffer: 64
> >>             Maximum configurable length of RX packet: 9728
> >>             Maximum configurable size of LRO aggregated packet: 0
> >>             Current number of RX queues: 1
> >>             Max possible RX queues: 2
> >>             Max possible number of RXDs per queue: 32768
> >>             Min possible number of RXDs per queue: 32
> >>             RXDs number alignment: 1
> >>             Current number of TX queues: 1
> >>             Max possible TX queues: 2
> >>             Max possible number of TXDs per queue: 32768
> >>             Min possible number of TXDs per queue: 32
> >>             TXDs number alignment: 1
> >>             Max segment number per packet: 65535
> >>             Max segment number per MTU/TSO: 65535
> >>             Device capabilities: 0x0( )
> >>             Device error handling mode: none
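
One more guest-side data point that may help localize where the excess
packets are being accounted is testpmd's per-port extended statistics,
captured during the same 15-second run on both DPDK versions (commands as in
the testpmd session above; the exact counters exposed depend on the virtio
PMD version in use):

  testpmd> clear port xstats all
  testpmd> start
  testpmd> stop
  testpmd> show port xstats 0
  testpmd> show fwd stats all

Comparing these counters between the dpdk-20.05/21.08 and dpdk-21.11/22.11
builds may show whether the extra packets are being counted as transmitted by
the PMD or dropped before ever reaching the ring.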