DPDK patches and discussions
* DPDK Secondary process not able to xmit packets with MLX5 VF
@ 2024-04-05 17:01 Samar Yadav
  2024-04-08  7:29 ` Slava Ovsiienko
  0 siblings, 1 reply; 3+ messages in thread
From: Samar Yadav @ 2024-04-05 17:01 UTC (permalink / raw)
  To: dev, users
  Cc: matan, viacheslavo, orika, suanmingm, Mukul Sinha,
	Tathagat Priyadarshi, Srinivasa Srikanth Srikanth Podila,
	Vipin PR


Hi all,

We are using two Mellanox VFs with DPDK v22.11 and are seeing an issue when
the DPDK rte_proc_secondary process tries to transmit packets. Please note
that the DPDK rte_proc_primary process is able to transmit packets
successfully. The issue seems to be in check_cqe(), which always returns
MLX5_CQE_STATUS_HW_OWN.




admin@10-50-54-244:~$ lspci | grep "Mellanox"
00:07.0 Ethernet controller: Mellanox Technologies MT27700 Family [ConnectX-4 Virtual Function]
00:08.0 Ethernet controller: Mellanox Technologies MT27700 Family [ConnectX-4 Virtual Function]


In our application:

proc0 -> DPDK rte_proc_primary, which initializes the necessary shared
memory data structures.

proc1 -> DPDK rte_proc_secondary, which attaches to the pre-initialized
shared memory.


proc0 (rte_proc_primary) uses port0 (00:07.0) to transmit packets - this
works as expected.

But proc1 (rte_proc_secondary) uses port1 (00:08.0) to transmit packets -
this does not work; the packets are never seen on the wire.
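
For reference, a minimal sketch of this per-process port split (hypothetical
code, not our actual application; the "mbuf_pool" name, the port numbering
and the single Tx queue are assumptions):

#include <rte_eal.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

static uint16_t
send_one(uint16_t port_id, struct rte_mempool *mp)
{
    struct rte_mbuf *m = rte_pktmbuf_alloc(mp);

    if (m == NULL)
        return 0;
    /* A real application would build a valid Ethernet frame here. */
    rte_pktmbuf_append(m, 64);
    uint16_t sent = rte_eth_tx_burst(port_id, 0, &m, 1);

    if (sent == 0)
        rte_pktmbuf_free(m);
    return sent;
}

int
main(int argc, char **argv)
{
    /* proc0 is launched with --proc-type=primary, proc1 with
     * --proc-type=secondary; device configuration, queue setup and
     * start are done by the primary before the secondary transmits. */
    if (rte_eal_init(argc, argv) < 0)
        return -1;

    uint16_t my_port =
        (rte_eal_process_type() == RTE_PROC_PRIMARY) ? 0 : 1;
    struct rte_mempool *mp = rte_mempool_lookup("mbuf_pool");

    if (mp == NULL)
        return -1;
    return send_one(my_port, mp) == 1 ? 0 : -1;
}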


Code snippets corresponding to the gdb output below:

mlx5_tx.c

180  */
181 void
182 mlx5_tx_handle_completion(struct mlx5_txq_data *__rte_restrict txq,
183               unsigned int olx __rte_unused)
184 {
185     unsigned int count = MLX5_TX_COMP_MAX_CQE;
186     volatile struct mlx5_cqe *last_cqe = NULL;
187     bool ring_doorbell = false;
188     int ret;
189
190     do {
191         volatile struct mlx5_cqe *cqe;
192
193         cqe = &txq->cqes[txq->cq_ci & txq->cqe_m];
194         ret = check_cqe(cqe, txq->cqe_s, txq->cq_ci);
195         if (unlikely(ret != MLX5_CQE_STATUS_SW_OWN)) {
196             if (likely(ret != MLX5_CQE_STATUS_ERR)) {
197                 /* No new CQEs in completion queue. */
198                 MLX5_ASSERT(ret == MLX5_CQE_STATUS_HW_OWN);
199                 break;
200             }


mlx5_common.h

195 static __rte_always_inline enum mlx5_cqe_status
196 check_cqe(volatile struct mlx5_cqe *cqe, const uint16_t cqes_n,
197       const uint16_t ci)
198 {
199     const uint16_t idx = ci & cqes_n;
200     const uint8_t op_own = cqe->op_own;
201     const uint8_t op_owner = MLX5_CQE_OWNER(op_own);
202     const uint8_t op_code = MLX5_CQE_OPCODE(op_own);
203
204     if (unlikely((op_owner != (!!(idx))) || (op_code == MLX5_CQE_INVALID)))
205         return MLX5_CQE_STATUS_HW_OWN;
206     rte_io_rmb();
207     if (unlikely(op_code == MLX5_CQE_RESP_ERR ||
208              op_code == MLX5_CQE_REQ_ERR))
209         return MLX5_CQE_STATUS_ERR;
210     return MLX5_CQE_STATUS_SW_OWN;
211 }

proc1 (non-working process): we have noticed that cq_ci remains 0 and never
increases.

Thread 1 "se_dp" hit Breakpoint 1, mlx5_tx_handle_completion
(txq=0x6000496c72c0, olx=127)
    at ../../../../../../service_engine/dpdk-2211/drivers/net/mlx5/mlx5_tx.c:184
184	in ../../../../../../service_engine/dpdk-2211/drivers/net/mlx5/mlx5_tx.c
(gdb) n
185	in ../../../../../../service_engine/dpdk-2211/drivers/net/mlx5/mlx5_tx.c
(gdb) n
186	in ../../../../../../service_engine/dpdk-2211/drivers/net/mlx5/mlx5_tx.c
(gdb) n
187	in ../../../../../../service_engine/dpdk-2211/drivers/net/mlx5/mlx5_tx.c
(gdb) n
193	in ../../../../../../service_engine/dpdk-2211/drivers/net/mlx5/mlx5_tx.c
(gdb) n
194	in ../../../../../../service_engine/dpdk-2211/drivers/net/mlx5/mlx5_tx.c
(gdb) n
195	in ../../../../../../service_engine/dpdk-2211/drivers/net/mlx5/mlx5_tx.c
(gdb) info locals
cqe = 0x60004962b000
count = 2
last_cqe = 0x0
ring_doorbell = false
ret = -2
(gdb) p *txq
$1 = {elts_head = 35, elts_tail = 0, elts_comp = 32, elts_s = 1024, elts_m = 1023, wqe_ci = 35,
  wqe_pi = 0, wqe_s = 4096, wqe_m = 4095, wqe_comp = 32, wqe_thres = 512, cq_ci = 0, cq_pi = 1,
  cqe_s = 64, cqe_m = 63, elts_n = 10, cqe_n = 6, wqe_n = 12, tso_en = 1, tunnel_en = 0, swp_en = 0,
  vlan_en = 0, db_nc = 0, db_heu = 0, rt_timestamp = 0, wait_on_time = 0, fast_free = 0,
  inlen_send = 18, inlen_empw = 0, inlen_mode = 18, qp_num_8s = 340992, offloads = 32815, mr_ctrl = {
    dev_gen_ptr = 0x60004c2d62b4, cur_gen = 0, mru = 0, head = 0, cache = {{start = 0, end = 0,
        lkey = 0}, {start = 0, end = 0, lkey = 0}, {start = 0, end = 0, lkey = 0}, {start = 0,
        end = 0, lkey = 0}, {start = 0, end = 0, lkey = 0}, {start = 0, end = 0, lkey = 0}, {
        start = 0, end = 0, lkey = 0}, {start = 0, end = 0, lkey = 0}}, cache_bh = {len = 1,
      size = 256, table = 0x6000496c5d40}}, wqes = 0x60004c255000, wqes_end = 0x60004c295000,
  fcqs = 0x60004c295dc0, cqes = 0x60004962b000, qp_db = 0x60004c295004, cq_db = 0x60004962c000,
  port_id = 1, idx = 0, rt_timemask = 0, ts_mask = 0, ts_offset = -1, sh = 0x60004b865880, stats = {
    opackets = 35, obytes = 2228, oerrors = 0}, stats_reset = {opackets = 0, obytes = 0, oerrors = 0},
  uar_data = {db = 0x0}, elts = 0x6000496c7448}


check_cqe() always returns MLX5_CQE_STATUS_HW_OWN:

(gdb)
194	in ../../../../../../service_engine/dpdk-2211/drivers/net/mlx5/mlx5_tx.c
(gdb) s
check_cqe (ci=0, cqes_n=64, cqe=0x60004962b000) at
../../../../../../service_engine/dpdk-2211/drivers/common/mlx5/mlx5_common.h:199
199	../../../../../../service_engine/dpdk-2211/drivers/common/mlx5/mlx5_common.h:
No such file or directory.
(gdb) n
200	in ../../../../../../service_engine/dpdk-2211/drivers/common/mlx5/mlx5_common.h
(gdb)
201	in ../../../../../../service_engine/dpdk-2211/drivers/common/mlx5/mlx5_common.h
(gdb)
202	in ../../../../../../service_engine/dpdk-2211/drivers/common/mlx5/mlx5_common.h
(gdb)
204	in ../../../../../../service_engine/dpdk-2211/drivers/common/mlx5/mlx5_common.h
(gdb) n
205	in ../../../../../../service_engine/dpdk-2211/drivers/common/mlx5/mlx5_common.h
(gdb) info locals
idx = 0
op_own = 241 '\361'
op_owner = 1 '\001'
op_code = 15 '\017'
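
Decoding the locals above as a sanity check (using the bit layout implied by
the printed op_owner/op_code values): op_own = 241 = 0xf1, so the owner bit
is 0xf1 & 0x1 = 1 and the opcode is 0xf1 >> 4 = 0xf. With cq_ci = 0 we get
idx = 0, so !!(idx) = 0 != op_owner, and the opcode 0xf (presumably
MLX5_CQE_INVALID) also indicates a CQE that has never been written. Either
condition makes check_cqe() return MLX5_CQE_STATUS_HW_OWN, i.e. the CQ entry
still looks like a freshly initialized one into which the NIC has never
written a Tx completion.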

Because check_cqe() returns MLX5_CQE_STATUS_HW_OWN, we break at line 199 in
mlx5_tx_handle_completion() and ring_doorbell remains false forever.

Below are the logs from mlx5_txq_devx_obj_new, which is called by
proc0 (rte_proc_primary) for port 1:

ppriv: 0x60004b8316c0, ppriv->uar_table: 0x60004b8316c8, txq_ctrl->uar_mmap_offset:0, ppriv->uar_table[txq_data->idx]:0x7f6b2d211800, txq_data->idx: 0, txq_data->db_nc:0

and the logs from txq_uar_init_secondary, which is called by
proc1 (rte_proc_secondary) for port 1:

priv: 0x60004b8352c0, priv->sh: 0x60004b865880, priv->sh->pppriv: 0x60004b8316c0

txq_ctrl:0x6000496c71c0 priv:0x60004b8352c0

primary_ppriv->uar_table: 0x60004b8316c8, uar_va:7f6b2d211800 offset:800 addr:0x7f6b3fe47800

ppriv:0x60004962a180 ppriv->uar_table[txq->idx]:0x7f6b3fe47800, txq->idx:0


Now, for the working case, all the counters increment as expected.

proc0 (rte_proc_primary - working case): cq_ci, cq_pi and the other
counters are as expected.

Thread 1 "se_dp" hit Breakpoint 1, mlx5_tx_handle_completion
(txq=0x60004b898940, olx=127) at
../../../../../../service_engine/dpdk-2211/drivers/net/mlx5/mlx5_tx.c:184
184	in ../../../../../../service_engine/dpdk-2211/drivers/net/mlx5/mlx5_tx.c
(gdb) n
185	in ../../../../../../service_engine/dpdk-2211/drivers/net/mlx5/mlx5_tx.c
(gdb) p *txq
$2 = {elts_head = 960, elts_tail = 931, elts_comp = 931, elts_s = 1024, elts_m = 1023, wqe_ci = 960, wqe_pi = 930, wqe_s = 4096, wqe_m = 4095, wqe_comp = 931, wqe_thres = 512, cq_ci = 28, cq_pi = 28, cqe_s = 64,
  cqe_m = 63, elts_n = 10, cqe_n = 6, wqe_n = 12, tso_en = 1, tunnel_en = 0, swp_en = 0, vlan_en = 0, db_nc = 0, db_heu = 0, rt_timestamp = 0, wait_on_time = 0, fast_free = 0, inlen_send = 18, inlen_empw = 0,
  inlen_mode = 18, qp_num_8s = 865280, offloads = 32815, mr_ctrl = {dev_gen_ptr = 0x600049a000f4, cur_gen = 0, mru = 0, head = 0, cache = {{start = 0, end = 0, lkey = 0}, {start = 0, end = 0, lkey = 0}, {
        start = 0, end = 0, lkey = 0}, {start = 0, end = 0, lkey = 0}, {start = 0, end = 0, lkey = 0}, {start = 0, end = 0, lkey = 0}, {start = 0, end = 0, lkey = 0}, {start = 0, end = 0, lkey = 0}}, cache_bh = {
      len = 1, size = 256, table = 0x60004b8973c0}}, wqes = 0x600049655000, wqes_end = 0x600049695000, fcqs = 0x600049697100, cqes = 0x600049696000, qp_db = 0x600049695004, cq_db = 0x600049697000, port_id = 0,
  idx = 0, rt_timemask = 0, ts_mask = 0, ts_offset = -1, sh = 0x60004be00c40, stats = {opackets = 960, obytes = 73222, oerrors = 0}, stats_reset = {opackets = 0, obytes = 0, oerrors = 0}, uar_data = {db = 0x0},
  elts = 0x60004b898ac8}
(gdb)


A few questions:

1. Why isn't the cq_ci counter increasing in proc1 (rte_proc_secondary)?
Does it mean that the MLX backend hardware is not consuming the packets?

2. Why is check_cqe stuck at MLX5_CQE_STATUS_HW_OWN in
proc1 (rte_proc_secondary)?


Thanks,

Samar




* RE: DPDK Secondary process not able to xmit packets with MLX5 VF
  2024-04-05 17:01 DPDK Secondary process not able to xmit packets with MLX5 VF Samar Yadav
@ 2024-04-08  7:29 ` Slava Ovsiienko
  2024-04-10  7:56   ` Samar Yadav
  0 siblings, 1 reply; 3+ messages in thread
From: Slava Ovsiienko @ 2024-04-08  7:29 UTC (permalink / raw)
  To: Samar Yadav, dev, users
  Cc: Matan Azrad, Ori Kam, Suanming Mou, Mukul Sinha,
	Tathagat Priyadarshi, Srinivasa Srikanth Srikanth Podila,
	Vipin PR


Hi, Samar


  *   Did you start the queues in the secondary process?
  *   Only one process at any given moment manages a queue; no sharing of queue data for sending between processes is allowed.
  *   From your description it looks like no completions are seen (in the CQ); I would recommend checking the UAR/doorbell mapping in the secondary process.

With best regards,
Slava



* Re: DPDK Secondary process not able to xmit packets with MLX5 VF
  2024-04-08  7:29 ` Slava Ovsiienko
@ 2024-04-10  7:56   ` Samar Yadav
  0 siblings, 0 replies; 3+ messages in thread
From: Samar Yadav @ 2024-04-10  7:56 UTC (permalink / raw)
  To: Slava Ovsiienko
  Cc: dev, users, Matan Azrad, Ori Kam, Suanming Mou, Mukul Sinha,
	Tathagat Priyadarshi, Srinivasa Srikanth Srikanth Podila,
	Vipin PR


Hi Slava,
Thanks so much for replying, appreciate it.

>> Did you start the queues in the secondary process?
Yes, tx_queue_state in the secondary process is RTE_ETH_QUEUE_STATE_STARTED.
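
For reference, a minimal sketch of how this can also be re-checked from
application code, assuming the PMD fills in queue_state for
rte_eth_tx_queue_info_get():

#include <stdio.h>
#include <rte_ethdev.h>

static void
print_txq_state(uint16_t port_id, uint16_t queue_id)
{
    struct rte_eth_txq_info qinfo;

    if (rte_eth_tx_queue_info_get(port_id, queue_id, &qinfo) != 0) {
        printf("port %u txq %u: info not available\n", port_id, queue_id);
        return;
    }
    printf("port %u txq %u state: %s\n", port_id, queue_id,
           qinfo.queue_state == RTE_ETH_QUEUE_STATE_STARTED ?
           "STARTED" : "not started");
}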

gdb output from the secondary process for port1 (non-working port, managed
by the secondary process):
(gdb) p rte_eth_devices[1]->data[0]
$3 = {name = "0000:00:08.0", '\000' <repeats 51 times>, rx_queues =
0x600177c28980, tx_queues = 0x600177c26900, nb_rx_queues = 1, nb_tx_queues =
1, sriov = {active = 0 '\000', nb_q_per_pool = 0 '\000', def_vmdq_idx = 0,
def_pool_q_idx = 0}, dev_private = 0x600177c6d7c0, dev_link = {
link_speed = 40000, link_duplex = 1, link_autoneg = 1, link_status = 1},
dev_conf = {link_speeds = 0, rxmode = {mq_mode = RTE_ETH_MQ_RX_NONE, mtu =
9000, max_lro_pkt_size = 0, offloads = 8193, reserved_64s = {0, 0},
reserved_ptrs = {0x0, 0x0}}, txmode = {mq_mode = RTE_ETH_MQ_TX_NONE,
offloads = 32815, pvid = 0, hw_vlan_reject_tagged = 0 '\000',
hw_vlan_reject_untagged = 0 '\000', hw_vlan_insert_pvid = 0 '\000',
reserved_64s = {0, 0}, reserved_ptrs = {0x0, 0x0}}, lpbk_mode = 0,
rx_adv_conf = {rss_conf = {rss_key = 0x0, rss_key_len = 0 '\000', rss_hf = 0},
vmdq_dcb_conf = {
nb_queue_pools = (unknown: 0), enable_default_pool = 0 '\000', default_pool
= 0 '\000', nb_pool_maps = 0 '\000', pool_map = {{vlan_id = 0, pools = 0} <
repeats 64 times>}, dcb_tc = "\000\000\000\000\000\000\000"}, dcb_rx_conf =
{nb_tcs = (unknown: 0),
dcb_tc = "\000\000\000\000\000\000\000"}, vmdq_rx_conf = {nb_queue_pools = (
unknown: 0), enable_default_pool = 0 '\000', default_pool = 0 '\000',
enable_loop_back = 0 '\000', nb_pool_maps = 0 '\000', rx_mode = 0, pool_map
= {{vlan_id = 0, pools = 0} <repeats 64 times>}}}, tx_adv_conf = {
vmdq_dcb_tx_conf = {nb_queue_pools = (unknown: 0), dcb_tc = "
\000\000\000\000\000\000\000"}, dcb_tx_conf = {nb_tcs = (unknown: 0), dcb_tc
= "\000\000\000\000\000\000\000"}, vmdq_tx_conf = {nb_queue_pools = (unknown:
0)}}, dcb_capability_en = 0, intr_conf = {lsc = 1, rxq = 0, rmv = 0}},
mtu = 1500, min_rx_buf_size = 4294967295, rx_mbuf_alloc_failed = 0,
mac_addrs = 0x600177c6d7e0, mac_pool_sel = {0 <repeats 128 times>},
hash_mac_addrs = 0x0, port_id = 1, promiscuous = 1 '\001', scattered_rx = 0
'\000', all_multicast = 1 '\001', dev_started = 1 '\001', lro = 0 '\000',
dev_configured = 1 '\001', flow_configured = 0 '\000', rx_queue_state = "
\001", '\000' <repeats 1022 times>, tx_queue_state = "\001", '\000' <repeats
1022 times>, dev_flags = 75, numa_node = -1, vlan_filter_conf = {ids = {0 <
repeats 64 times>}}, owner = {id = 0, name = '\000' <repeats 63 times>},
representor_id = 0, backer_port_id = 32, flow_ops_mutex = {__data = {__lock
= 0, __count = 0, __owner = 0, __nusers = 0, __kind = 0, __spins = 0,
__elision = 0, __list = {__prev = 0x0, __next = 0x0}}, __size = '\000' <
repeats 39 times>, __align = 0}}
(gdb) p rte_eth_devices[1]
$4 = {rx_pkt_burst = 0x5591d17ddfb2 <mlx5_rx_burst>, tx_pkt_burst = 0x5591d197268c <mlx5_tx_burst_full>,
  tx_pkt_prepare = 0x0, rx_queue_count = 0x0, rx_descriptor_status = 0x5591d17dc903 <mlx5_rx_descriptor_status>,
  tx_descriptor_status = 0x5591d17f8dd0 <mlx5_tx_descriptor_status>, data = 0x600177cfb500,
  process_private = 0x600179b1b500, dev_ops = 0x5591d20ac180 <mlx5_dev_sec_ops>, device = 0x5591d5a55be0,
  intr_handle = 0x5591d5a54dd0, link_intr_cbs = {tqh_first = 0x0, tqh_last = 0x5591d3c575d8 <rte_eth_devices+16600>},
  post_rx_burst_cbs = {0x600177c00180, 0x0 <repeats 1023 times>}, pre_tx_burst_cbs = {0x600179e52d80, 0x0 <repeats 1023 times>},
  state = RTE_ETH_DEV_ATTACHED, security_ctx = 0x0}

gdb output from the primary process for port1 (non-working port, managed by
the secondary process):
(gdb) p rte_eth_devices[1]
$3 = {rx_pkt_burst = 0x5591d17ddfb2 <mlx5_rx_burst>, tx_pkt_burst = 0x5591d197268c <mlx5_tx_burst_full>,
  tx_pkt_prepare = 0x0, rx_queue_count = 0x5591d17dcd20 <mlx5_rx_queue_count>,
  rx_descriptor_status = 0x5591d17dc903 <mlx5_rx_descriptor_status>,
  tx_descriptor_status = 0x5591d17f8dd0 <mlx5_tx_descriptor_status>, data = 0x600177cfb500,
  process_private = 0x600177c68540, dev_ops = 0x5591d20abde0 <mlx5_dev_ops>, device = 0x5591d6076440,
  intr_handle = 0x5591d60d0350, link_intr_cbs = {tqh_first = 0x0, tqh_last = 0x5591d3c575d8 <rte_eth_devices+16600>},
  post_rx_burst_cbs = {0x0 <repeats 1024 times>}, pre_tx_burst_cbs = {0x0 <repeats 1024 times>},
  state = RTE_ETH_DEV_ATTACHED, security_ctx = 0x0}
(gdb) p rte_eth_devices[1]->data[0]
$4 = {name = "0000:00:08.0", '\000' <repeats 51 times>, rx_queues =
0x600177c28980, tx_queues = 0x600177c26900, nb_rx_queues = 1, nb_tx_queues =
1, sriov = {active = 0 '\000', nb_q_per_pool = 0 '\000', def_vmdq_idx = 0,
def_pool_q_idx = 0}, dev_private = 0x600177c6d7c0, dev_link = {
link_speed = 40000, link_duplex = 1, link_autoneg = 1, link_status = 1},
dev_conf = {link_speeds = 0, rxmode = {mq_mode = RTE_ETH_MQ_RX_NONE, mtu =
9000, max_lro_pkt_size = 0, offloads = 8193, reserved_64s = {0, 0},
reserved_ptrs = {0x0, 0x0}}, txmode = {mq_mode = RTE_ETH_MQ_TX_NONE,
offloads = 32815, pvid = 0, hw_vlan_reject_tagged = 0 '\000',
hw_vlan_reject_untagged = 0 '\000', hw_vlan_insert_pvid = 0 '\000',
reserved_64s = {0, 0}, reserved_ptrs = {0x0, 0x0}}, lpbk_mode = 0,
rx_adv_conf = {rss_conf = {rss_key = 0x0, rss_key_len = 0 '\000', rss_hf = 0},
vmdq_dcb_conf = {
nb_queue_pools = (unknown: 0), enable_default_pool = 0 '\000', default_pool
= 0 '\000', nb_pool_maps = 0 '\000', pool_map = {{vlan_id = 0, pools = 0} <
repeats 64 times>}, dcb_tc = "\000\000\000\000\000\000\000"}, dcb_rx_conf =
{nb_tcs = (unknown: 0),
dcb_tc = "\000\000\000\000\000\000\000"}, vmdq_rx_conf = {nb_queue_pools = (
unknown: 0), enable_default_pool = 0 '\000', default_pool = 0 '\000',
enable_loop_back = 0 '\000', nb_pool_maps = 0 '\000', rx_mode = 0, pool_map
= {{vlan_id = 0, pools = 0} <repeats 64 times>}}}, tx_adv_conf = {
vmdq_dcb_tx_conf = {nb_queue_pools = (unknown: 0), dcb_tc = "
\000\000\000\000\000\000\000"}, dcb_tx_conf = {nb_tcs = (unknown: 0), dcb_tc
= "\000\000\000\000\000\000\000"}, vmdq_tx_conf = {nb_queue_pools = (unknown:
0)}}, dcb_capability_en = 0, intr_conf = {lsc = 1, rxq = 0, rmv = 0}},
mtu = 1500, min_rx_buf_size = 4294967295, rx_mbuf_alloc_failed = 0,
mac_addrs = 0x600177c6d7e0, mac_pool_sel = {0 <repeats 128 times>},
hash_mac_addrs = 0x0, port_id = 1, promiscuous = 1 '\001', scattered_rx = 0
'\000', all_multicast = 1 '\001', dev_started = 1 '\001', lro = 0 '\000',
dev_configured = 1 '\001', flow_configured = 0 '\000', rx_queue_state = "
\001", '\000' <repeats 1022 times>, tx_queue_state = "\001", '\000' <repeats
1022 times>, dev_flags = 75, numa_node = -1, vlan_filter_conf = {ids = {0 <
repeats 64 times>}}, owner = {id = 0, name = '\000' <repeats 63 times>},
representor_id = 0, backer_port_id = 32, flow_ops_mutex = {__data = {__lock
= 0, __count = 0, __owner = 0, __nusers = 0, __kind = 0, __spins = 0,
__elision = 0, __list = {__prev = 0x0, __next = 0x0}}, __size = '\000' <
repeats 39 times>, __align = 0}}
(gdb)

gdb output from the secondary process for port0 (working port, managed by
the primary process):
(gdb) p rte_eth_devices[0]
$1 = {rx_pkt_burst = 0x5591d17ddfb2 <mlx5_rx_burst>, tx_pkt_burst = 0x5591d197268c <mlx5_tx_burst_full>,
  tx_pkt_prepare = 0x0, rx_queue_count = 0x0, rx_descriptor_status = 0x5591d17dc903 <mlx5_rx_descriptor_status>,
  tx_descriptor_status = 0x5591d17f8dd0 <mlx5_tx_descriptor_status>, data = 0x600177cf9d00,
  process_private = 0x600179e49440, dev_ops = 0x5591d20ac180 <mlx5_dev_sec_ops>, device = 0x5591d5a55fa0,
  intr_handle = 0x5591d5a4dd20, link_intr_cbs = {tqh_first = 0x0, tqh_last = 0x5591d3c53558 <rte_eth_devices+88>},
  post_rx_burst_cbs = {0x0 <repeats 1024 times>}, pre_tx_burst_cbs = {0x0 <repeats 1024 times>},
  state = RTE_ETH_DEV_ATTACHED, security_ctx = 0x0}
(gdb) p rte_eth_devices[0]->data[0]
$2 = {name = "0000:00:07.0", '\000' <repeats 51 times>, rx_queues =
0x600176c9c980, tx_queues = 0x600176c9a900, nb_rx_queues = 1, nb_tx_queues =
1, sriov = {active = 0 '\000', nb_q_per_pool = 0 '\000', def_vmdq_idx = 0,
def_pool_q_idx = 0}, dev_private = 0x600177d29d80, dev_link = {
link_speed = 40000, link_duplex = 1, link_autoneg = 1, link_status = 1},
dev_conf = {link_speeds = 0, rxmode = {mq_mode = RTE_ETH_MQ_RX_NONE, mtu =
9000, max_lro_pkt_size = 0, offloads = 8193, reserved_64s = {0, 0},
reserved_ptrs = {0x0, 0x0}}, txmode = {mq_mode = RTE_ETH_MQ_TX_NONE,
offloads = 32815, pvid = 0, hw_vlan_reject_tagged = 0 '\000',
hw_vlan_reject_untagged = 0 '\000', hw_vlan_insert_pvid = 0 '\000',
reserved_64s = {0, 0}, reserved_ptrs = {0x0, 0x0}}, lpbk_mode = 0,
rx_adv_conf = {rss_conf = {rss_key = 0x0, rss_key_len = 0 '\000', rss_hf = 0},
vmdq_dcb_conf = {
nb_queue_pools = (unknown: 0), enable_default_pool = 0 '\000', default_pool
= 0 '\000', nb_pool_maps = 0 '\000', pool_map = {{vlan_id = 0, pools = 0} <
repeats 64 times>}, dcb_tc = "\000\000\000\000\000\000\000"}, dcb_rx_conf =
{nb_tcs = (unknown: 0),
dcb_tc = "\000\000\000\000\000\000\000"}, vmdq_rx_conf = {nb_queue_pools = (
unknown: 0), enable_default_pool = 0 '\000', default_pool = 0 '\000',
enable_loop_back = 0 '\000', nb_pool_maps = 0 '\000', rx_mode = 0, pool_map
= {{vlan_id = 0, pools = 0} <repeats 64 times>}}}, tx_adv_conf = {
vmdq_dcb_tx_conf = {nb_queue_pools = (unknown: 0), dcb_tc = "
\000\000\000\000\000\000\000"}, dcb_tx_conf = {nb_tcs = (unknown: 0), dcb_tc
= "\000\000\000\000\000\000\000"}, vmdq_tx_conf = {nb_queue_pools = (unknown:
0)}}, dcb_capability_en = 0, intr_conf = {lsc = 1, rxq = 0, rmv = 0}},
mtu = 1500, min_rx_buf_size = 4294967295, rx_mbuf_alloc_failed = 0,
mac_addrs = 0x600177d29da0, mac_pool_sel = {0 <repeats 128 times>},
hash_mac_addrs = 0x0, port_id = 0, promiscuous = 1 '\001', scattered_rx = 0
'\000', all_multicast = 1 '\001', dev_started = 1 '\001', lro = 0 '\000',
dev_configured = 1 '\001', flow_configured = 0 '\000', rx_queue_state = "
\001", '\000' <repeats 1022 times>, tx_queue_state = "\001", '\000' <repeats
1022 times>, dev_flags = 75, numa_node = -1, vlan_filter_conf = {ids = {0 <
repeats 64 times>}}, owner = {id = 0, name = '\000' <repeats 63 times>},
representor_id = 0, backer_port_id = 32, flow_ops_mutex = {__data = {__lock
= 0, __count = 0, __owner = 0, __nusers = 0, __kind = 0, __spins = 0,
__elision = 0, __list = {__prev = 0x0, __next = 0x0}}, __size = '\000' <
repeats 39 times>, __align = 0}}

gdb output from the primary process for port0 (working port, managed by the
primary process):
(gdb) p rte_eth_devices[0]
$1 = {rx_pkt_burst = 0x5591d17ddfb2 <mlx5_rx_burst>, tx_pkt_burst = 0x5591d197268c <mlx5_tx_burst_full>,
  tx_pkt_prepare = 0x0, rx_queue_count = 0x5591d17dcd20 <mlx5_rx_queue_count>,
  rx_descriptor_status = 0x5591d17dc903 <mlx5_rx_descriptor_status>,
  tx_descriptor_status = 0x5591d17f8dd0 <mlx5_tx_descriptor_status>, data = 0x600177cf9d00,
  process_private = 0x600176cdc600, dev_ops = 0x5591d20abde0 <mlx5_dev_ops>, device = 0x5591d6076800,
  intr_handle = 0x5591d606e580, link_intr_cbs = {tqh_first = 0x0, tqh_last = 0x5591d3c53558 <rte_eth_devices+88>},
  post_rx_burst_cbs = {0x600177c69100, 0x0 <repeats 1023 times>}, pre_tx_burst_cbs = {0x600179e52cc0, 0x0 <repeats 1023 times>},
  state = RTE_ETH_DEV_ATTACHED, security_ctx = 0x0}
(gdb) p rte_eth_devices[0]->data[0]
$2 = {name = "0000:00:07.0", '\000' <repeats 51 times>, rx_queues =
0x600176c9c980, tx_queues = 0x600176c9a900, nb_rx_queues = 1, nb_tx_queues =
1, sriov = {active = 0 '\000', nb_q_per_pool = 0 '\000', def_vmdq_idx = 0,
def_pool_q_idx = 0}, dev_private = 0x600177d29d80, dev_link = {
link_speed = 40000, link_duplex = 1, link_autoneg = 1, link_status = 1},
dev_conf = {link_speeds = 0, rxmode = {mq_mode = RTE_ETH_MQ_RX_NONE, mtu =
9000, max_lro_pkt_size = 0, offloads = 8193, reserved_64s = {0, 0},
reserved_ptrs = {0x0, 0x0}}, txmode = {mq_mode = RTE_ETH_MQ_TX_NONE,
offloads = 32815, pvid = 0, hw_vlan_reject_tagged = 0 '\000',
hw_vlan_reject_untagged = 0 '\000', hw_vlan_insert_pvid = 0 '\000',
reserved_64s = {0, 0}, reserved_ptrs = {0x0, 0x0}}, lpbk_mode = 0,
rx_adv_conf = {rss_conf = {rss_key = 0x0, rss_key_len = 0 '\000', rss_hf = 0},
vmdq_dcb_conf = {
nb_queue_pools = (unknown: 0), enable_default_pool = 0 '\000', default_pool
= 0 '\000', nb_pool_maps = 0 '\000', pool_map = {{vlan_id = 0, pools = 0} <
repeats 64 times>}, dcb_tc = "\000\000\000\000\000\000\000"}, dcb_rx_conf =
{nb_tcs = (unknown: 0),
dcb_tc = "\000\000\000\000\000\000\000"}, vmdq_rx_conf = {nb_queue_pools = (
unknown: 0), enable_default_pool = 0 '\000', default_pool = 0 '\000',
enable_loop_back = 0 '\000', nb_pool_maps = 0 '\000', rx_mode = 0, pool_map
= {{vlan_id = 0, pools = 0} <repeats 64 times>}}}, tx_adv_conf = {
vmdq_dcb_tx_conf = {nb_queue_pools = (unknown: 0), dcb_tc = "
\000\000\000\000\000\000\000"}, dcb_tx_conf = {nb_tcs = (unknown: 0), dcb_tc
= "\000\000\000\000\000\000\000"}, vmdq_tx_conf = {nb_queue_pools = (unknown:
0)}}, dcb_capability_en = 0, intr_conf = {lsc = 1, rxq = 0, rmv = 0}},
mtu = 1500, min_rx_buf_size = 4294967295, rx_mbuf_alloc_failed = 0,
mac_addrs = 0x600177d29da0, mac_pool_sel = {0 <repeats 128 times>},
hash_mac_addrs = 0x0, port_id = 0, promiscuous = 1 '\001', scattered_rx = 0
'\000', all_multicast = 1 '\001', dev_started = 1 '\001', lro = 0 '\000',
dev_configured = 1 '\001', flow_configured = 0 '\000', rx_queue_state = "
\001", '\000' <repeats 1022 times>, tx_queue_state = "\001", '\000' <repeats
1022 times>, dev_flags = 75, numa_node = -1, vlan_filter_conf = {ids = {0 <
repeats 64 times>}}, owner = {id = 0, name = '\000' <repeats 63 times>},
representor_id = 0, backer_port_id = 32, flow_ops_mutex = {__data = {__lock
= 0, __count = 0, __owner = 0, __nusers = 0, __kind = 0, __spins = 0,
__elision = 0, __list = {__prev = 0x0, __next = 0x0}}, __size = '\000' <
repeats 39 times>, __align = 0}}


>> Only one process at any given moment manages a queue; no sharing of queue
data for sending between processes is allowed.
Yes, we are using proc1 (secondary process) to manage port 1 and proc0
(primary process) to manage port 0. There is no sharing of queues between
processes for sending data.

>> From your description it looks like no completions are seen (in the CQ);
I would recommend checking the UAR/doorbell mapping in the secondary process.

Below are the logs from mlx5_txq_devx_obj_new, which is called by
proc0 (rte_proc_primary) for port 1:

Core:0 04/10/24 06:45:47.626580 UTC mlx5_net: Core 0
mlx5_txq_devx_obj_new 1563 ppriv: 0x600177c68540 ,ppriv->uar_table:
0x600177c68548, txq_ctrl->uar_mmap_offset:0,
ppriv->uar_table[txq_data->idx]:0x7f0411ae3800, txq_data->idx: 0,
txq_data->db_nc:0

and the logs from txq_uar_init_secondary, which is called by
proc1 (rte_proc_secondary) for port 1:

Core:1 04/10/24 06:45:47.767512 UTC mlx5_net:   priv: 0x600177c6d7c0,
priv->sh: 0x600176c25a00, priv->sh->pppriv: 0x600177c68540
Core:1 04/10/24 06:45:47.767528 UTC mlx5_net: Core 1
txq_uar_init_secondary 535 txq_ctrl:0x600177c04980 priv:0x600177c6d7c0
Core:1 04/10/24 06:45:47.767553 UTC mlx5_net: Core 1
txq_uar_init_secondary 562   primary_ppriv->uar_table: 0x600177c68548
,uar_va:7f0411ae3800 offset:800 addr:0x7f0425b18800
Core:1 04/10/24 06:45:47.767604 UTC mlx5_net: Core 1
txq_uar_init_secondary 566 port 1 of txq 0 ppriv:0x600179b1b500
ppriv->uar_table[txq->idx]:0x7f0425b18800, txq->idx:0
        , ppriv->uar_table[0]:0x7f0425b18800,ppriv->uar_table[1]:(nil)


@viacheslavo@nvidia.com I noticed that data->dev_private->sh->tx_uar.obj
has mmap_offset as 0 in the VF case but non-zero in the PF case. Is that
okay? This object is returned by the glue layer via
uar = mlx5_glue->devx_alloc_uar(cdev->ctx, uar_mapping);

VF:

    (gdb) p *(struct mlx5dv_devx_uar *) $7
    $9 = {reg_addr = 0x7f041b02d800, base_addr = 0x7f041b02d000, page_id = 26, mmap_off = 0, comp_mask = 0}

PF:

    txq_ctrl->uar_mmap_offset:9441280, ppriv->uar_table[txq_data->idx]:0x7ff88d2be800, txq_data->idx: 0, txq_data->db_nc:0
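
For context, below is a conceptual sketch (not the actual mlx5 PMD code) of
how a secondary process can remap a doorbell/UAR page when a non-zero mmap
offset is available; dev_fd, uar_mmap_offset and primary_uar_va are
hypothetical parameters. It is only meant to illustrate why we are asking
whether mmap_off = 0 for the VF is expected.

#include <stdint.h>
#include <sys/types.h>
#include <sys/mman.h>
#include <unistd.h>

static void *
remap_doorbell(int dev_fd, off_t uar_mmap_offset, uintptr_t primary_uar_va)
{
    size_t page = (size_t)sysconf(_SC_PAGESIZE);
    /* Keep the same offset within the page as in the primary process,
     * e.g. the "offset:800" seen in the txq_uar_init_secondary log. */
    uintptr_t in_page = primary_uar_va & (page - 1);
    void *base = mmap(NULL, page, PROT_WRITE, MAP_SHARED,
                      dev_fd, uar_mmap_offset);

    if (base == MAP_FAILED)
        return NULL;
    return (uint8_t *)base + in_page;
}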



Also, we are using MLNX_OFED_LINUX-5.8-2.0.3.0-ubuntu20.04-x86_64.

Please note that MLX PF RSS works fine for us: there we have 4 queues, with
q0 managed by the primary process (proc0) and q1, q2, q3 managed by the
secondary processes proc1, proc2, proc3 respectively. We are only seeing
the issue with the MLX VF.
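
For completeness, a hypothetical sketch of that working PF RSS layout (not
our actual code): a single port with 4 Tx queues, where each process only
ever passes its own queue index to the Tx burst call, so no Tx queue is
shared between processes.

#include <rte_ethdev.h>
#include <rte_mbuf.h>

static uint16_t
tx_on_my_queue(uint16_t port_id, uint16_t my_queue,
               struct rte_mbuf **pkts, uint16_t nb)
{
    /* queue 0 is driven by proc0 (primary), queues 1-3 by
     * proc1-proc3 (secondaries). */
    return rte_eth_tx_burst(port_id, my_queue, pkts, nb);
}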

If it works better for you, we can get on a Zoom call to discuss this.
Please let us know a convenient time.

Thanks,

Samar





Thread overview: 3+ messages
2024-04-05 17:01 DPDK Secondary process not able to xmit packets with MLX5 VF Samar Yadav
2024-04-08  7:29 ` Slava Ovsiienko
2024-04-10  7:56   ` Samar Yadav
