From: Hari Haran <info2hariharan@gmail.com>
Date: Tue, 29 Aug 2023 20:20:51 +0530
Subject: Re: Inflight value shown invalid in Event Dev Queue
To: "Van Haaren, Harry" <harry.van.haaren@intel.com>
Cc: "users@dpdk.org" <users@dpdk.org>

Hi Harry,

Thanks for your valuable time and for your response to the query. The
long-standing issue has now been fixed.

The issue was resolved by properly initializing the
'disable_implicit_release' field during event dev initialization. In our
case we send packets from the Rx core to the worker cores over an event
dev queue, and we expect implicit release of each event after a worker
core has dequeued it from the event dev queue.
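For reference, the worker port setup now looks roughly like this. This is
a minimal sketch simplified from our code: the device/port IDs are
placeholders, and on newer DPDK releases that have dropped this field the
equivalent (as far as I know) is the RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL
flag in event_port_cfg.

    #include <rte_eventdev.h>

    /* Sketch: configure a worker port with implicit release enabled, so
     * dequeued events are completed automatically on the next
     * rte_event_dequeue_burst() call. */
    static int
    setup_worker_port(uint8_t dev_id, uint8_t port_id)
    {
            struct rte_event_port_conf conf;

            /* Start from the device defaults rather than from an
             * uninitialized struct - the uninitialized struct was our
             * actual bug. */
            if (rte_event_port_default_conf_get(dev_id, port_id, &conf) < 0)
                    return -1;

            conf.dequeue_depth = 128;   /* matches the setup quoted below */
            conf.enqueue_depth = 4096;
            conf.disable_implicit_release = 0;  /* 0 = implicit release on */

            return rte_event_port_setup(dev_id, port_id, &conf);
    }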
After initializing 'disable_implicit_release', the event dev queue now
releases the inflight packets once the worker has dequeued them.

With a wide range of testing, all cases are now working fine. A sketch of
the NEW/FWD producer/worker pattern Harry recommends is appended after the
quoted thread below.

Regards,
Hariharan

On Wed, Jul 19, 2023 at 9:49 PM Van Haaren, Harry
<harry.van.haaren@intel.com> wrote:
> > From: Hari Haran
> > Sent: Wednesday, July 19, 2023 4:30 PM
> > To: Van Haaren, Harry
> > Cc: users@dpdk.org
> > Subject: Re: Inflight value shown invalid in Event Dev Queue
> >
> > Hi Harry Haaren (Yes :) )
> >
> > I have given more details below, please check this.
>
> Please reply "in-line"; it makes the conversation easier for future
> readers to follow and gives your replies context.
>
> > Device configuration:
> > Event dev queues: 1
> > Number of ports: 3
> >
> > Queue 0 depth: 32k
> > Ports 0, 1 and 2: enqueue depth 4096, dequeue depth 128
> >
> > Cores:
> > Rx core: 1
> > Worker cores: 2
> >
> > Port 2:
> > Used in the Rx core, to post packets from the Rx core to the worker
> > cores via the event dev queue. Port 2 only enqueues.
> > API used: rte_event_enqueue_burst()
> >
> > Ports 0 and 1 are linked to event dev queue 0 and only dequeue packets.
> > Port 0 is used in worker core 1, only to receive packets from the Rx core.
> > Port 1 is used in worker core 2, only to receive packets from the Rx core.
> > API used: rte_event_dequeue_burst()
> >
> > Expected behaviour:
> >
> > Port 2 enqueues packets to the event dev queue in the Rx core.
> > Ports 0 and 1 dequeue packets from the event dev queue in the two workers.
> >
> > The scheduler for event dev queue 0 schedules packets received on
> > port 2 to ports 0 and 1.
> >
> > Problem description:
> >
> > Port 0 only received 4096 packets through the event dev queue; after
> > that, no packets were available for it.
> > API used: rte_event_dequeue_burst()
> >
> > Port 2 successfully enqueued 32k packets through the event dev queue;
> > after that, enqueue failures were observed.
> > API used: rte_event_enqueue_burst()
> > It looks like the event dev queue stalled at this point.
> >
> > Also, why do the port 0 stats show inflight as 4096?
>
> This seems to be the problem - are you returning the events to the
> eventdev, or calling the rte_event_dequeue_burst() API again? (With the
> default "implicit release" behaviour, the next dequeue() call
> automatically "completes" the previously dequeued events, making the
> "inflights" count go down and allowing the eventdev to make forward
> progress.)
>
> Please ensure that new events are enqueued with the "NEW" type,
> and that the worker cores forward events with the "FWD" type.
>
> This ensures that the Rx/producer core is back-pressured first, and that
> the worker cores (which enqueue FWD-type events) can make progress while
> there is still space in the device.
> Typically, setting a "new_event_threshold" on the producer port
> (https://doc.dpdk.org/api/structrte__event__port__conf.html#a70bebdfb5211f97b81b46ff08594ddda)
> of 50% of the total capacity is a good starting point. The ideal NEW
> percentage depends on the workload itself, and on how often one NEW
> event turns into N NEW events.
>
> > Port 0 stats:
> >   rx 0  drop 0  tx 4096  inflight 4096
> >
> > All stats:
> > Dev=0 Port=1
> > EventDev todo-fix-name: ports 3, qids 1
> >   rx 32768
> >   drop 0
> >   tx 4096
> >   sched calls: 628945658
> >   sched cq/qid call: 628964843
> >   sched no IQ enq: 628926401
> >   sched no CQ enq: 628942982
> >   inflight 32768, credits: 0
> >
> > Port 0
> >   rx 0  drop 0  tx 4096  inflight 4096
> >   Max New: 32768  Avg cycles PP: 0  Credits: 0
> >   Receive burst distribution:
> >     0:100% 1-4:0.00% 5-8:0.00% 9-12:0.00%
> >   rx ring used:    0  free: 4096
> >   cq ring used:    0  free:  128
> > Port 1
> >   rx 0  drop 0  tx 0  inflight 0
> >   Max New: 32768  Avg cycles PP: 0  Credits: 0
> >   Receive burst distribution:
> >     0:100%
> >   rx ring used:    0  free: 4096
> >   cq ring used:    0  free:  128
> > Port 2
> >   rx 32768  drop 0  tx 0  inflight 0
> >   Max New: 32768  Avg cycles PP: 0  Credits: 0
> >   Receive burst distribution:
> >     0:-nan%
> >   rx ring used:    0  free: 4096
> >   cq ring used:    0  free:  128
> >
> > Queue 0 (Atomic)
> >   rx 32768  drop 0  tx 4096
> >   Per Port Stats:
> >     Port 0: Pkts: 4096  Flows: 1
> >     Port 1: Pkts: 0  Flows: 0
> >     Port 2: Pkts: 0  Flows: 0
> >     Port 3: Pkts: 0  Flows: 0
> >   iq 0: Used 28672
> >
> > Regards,
> > Hariharan
>
> Regards, -Harry
>
> <snip below older parts of conversation>
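For anyone finding this thread later: below is a minimal sketch of the
NEW/FWD producer/worker pattern Harry describes above. It is illustrative
only - the queue/port IDs, the 50% threshold and process_packet() are
placeholders, not our production code.

    #include <rte_common.h>
    #include <rte_eventdev.h>
    #include <rte_mbuf.h>
    #include <rte_pause.h>

    /* Application-defined packet handler; placeholder for this sketch. */
    static void process_packet(struct rte_mbuf *m);

    /* Producer port: cap NEW-event injection at ~50% of device capacity,
     * so the Rx core is back-pressured before the device fills up. */
    static int
    setup_producer_port(uint8_t dev_id, uint8_t port_id, int32_t capacity)
    {
            struct rte_event_port_conf conf;

            if (rte_event_port_default_conf_get(dev_id, port_id, &conf) < 0)
                    return -1;
            conf.new_event_threshold = capacity / 2;
            return rte_event_port_setup(dev_id, port_id, &conf);
    }

    /* Rx core: inject fresh work as NEW events. A real application would
     * also set ev.flow_id so atomic scheduling can spread flows across
     * the worker ports. */
    static void
    producer_enqueue(uint8_t dev_id, uint8_t port_id, struct rte_mbuf *m)
    {
            struct rte_event ev = {
                    .op = RTE_EVENT_OP_NEW,
                    .queue_id = 0,
                    .sched_type = RTE_SCHED_TYPE_ATOMIC,
                    .mbuf = m,
            };

            /* Enqueue fails once the NEW threshold is hit: retry here;
             * real code might buffer or drop instead. */
            while (rte_event_enqueue_burst(dev_id, port_id, &ev, 1) != 1)
                    rte_pause();
    }

    /* Worker core: with implicit release enabled, each dequeue call
     * completes the events returned by the previous call, so "inflight"
     * drains as long as the worker keeps calling dequeue. Events sent on
     * to another queue would be re-enqueued with op = RTE_EVENT_OP_FORWARD. */
    static void
    worker_loop(uint8_t dev_id, uint8_t port_id)
    {
            struct rte_event evs[32];

            for (;;) {
                    uint16_t n = rte_event_dequeue_burst(dev_id, port_id,
                                    evs, RTE_DIM(evs), 0);
                    for (uint16_t i = 0; i < n; i++)
                            process_packet(evs[i].mbuf);
            }
    }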