DPDK usage discussions
From: Peter Morrow <peter@graphiant.com>
To: "users@dpdk.org" <users@dpdk.org>
Cc: "joshwash@google.com" <joshwash@google.com>,
	"rushilg@google.com" <rushilg@google.com>,
	"jeroendb@google.com" <jeroendb@google.com>,
	"jungeng.guo@intel.com" <jungeng.guo@intel.com>
Subject: gve queue format on dpdk 22.11
Date: Tue, 29 Oct 2024 17:08:50 +0000	[thread overview]
Message-ID: <PH0PR17MB463927B84226A1A7CA579F66BD4B2@PH0PR17MB4639.namprd17.prod.outlook.com> (raw)


Hi Folks,

The docs for the 22.11 release clearly state that the DQO_RDA queue format is not yet supported:

https://doc.dpdk.org/guides-22.11/nics/gve.html

I'm attempting to bring up our software router on GCP (VM instance type c4-standard-8), where we are currently using dpdk 22.11 (via Debian, with a 6.1.0-26-amd64 kernel). Given the lack of support for DQO_RDA, I see the following expected messages when I start vpp (our dpdk application):

Oct 29 11:11:59 gcpt2 vnet[1919]: dpdk: gve_adminq_describe_device(): Driver is running with DQO RDA queue format.
Oct 29 11:11:59 gcpt2 vnet[1919]: dpdk: gve_adminq_describe_device(): MAC addr: 42:01:0A:07:00:0F
Oct 29 11:11:59 gcpt2 vnet[1919]: dpdk: gve_init_priv(): Max TX queues 2, Max RX queues 2
Oct 29 11:11:59 gcpt2 vnet[1919]: dpdk: gve_dev_init(): DQO_RDA is not implemented and will be added in the future

As a result, vpp segfaults shortly after:

(gdb) bt
#0  gve_adminq_create_tx_queue (queue_index=0, priv=0xbc03a59c0) at ../drivers/net/gve/base/gve_adminq.c:502
#1  gve_adminq_create_tx_queues (priv=0xbc03a59c0, num_queues=2) at ../drivers/net/gve/base/gve_adminq.c:516
#2  0x00007ffda6c9c5d6 in gve_dev_start (dev=0x7ffe1f4f7240 <rte_eth_devices>) at ../drivers/net/gve/gve_ethdev.c:181
#3  0x00007ffe1f489b6a in rte_eth_dev_start () from /lib/x86_64-linux-gnu/librte_ethdev.so.23
#4  0x00007ffff584aa14 in dpdk_device_start (xd=xd@entry=0x7ffe204fbac0) at ./src/plugins/dpdk/device/common.c:393
#5  0x00007ffff584c598 in dpdk_interface_admin_up_down (vnm=<optimized out>, hw_if_index=<optimized out>, flags=<optimized out>)
    at ./src/plugins/dpdk/device/device.c:485
#6  0x00007ffff6ab3ccd in vnet_sw_interface_set_flags_helper (vnm=vnm@entry=0x7ffff7f6b660 <vnet_main>, sw_if_index=<optimized out>,
    flags=VNET_SW_INTERFACE_FLAG_ADMIN_UP, helper_flags=0, helper_flags@entry=VNET_INTERFACE_SET_FLAGS_HELPER_WANT_REDISTRIBUTE)
    at ./src/vnet/interface.c:545
#7  0x00007ffff6ab489f in vnet_sw_interface_set_flags (vnm=vnm@entry=0x7ffff7f6b660 <vnet_main>, sw_if_index=<optimized out>,
    flags=<optimized out>) at ./src/vnet/interface.c:601
#8  0x00007ffff6accee9 in vl_api_sw_interface_set_flags_t_handler (mp=0x7ffe3f241ff8) at ./src/vnet/interface_api.c:100
#9  0x00007ffff7fa6e0d in msg_handler_internal (free_it=0, do_it=1, trace_it=<optimized out>, msg_len=<optimized out>,
    the_msg=0x7ffe3f241ff8, am=0x7ffff7fb8f40 <api_global_main>) at ./src/vlibapi/api_shared.c:580
#10 vl_msg_api_handler_no_free (the_msg=0x7ffe3f241ff8, msg_len=<optimized out>) at ./src/vlibapi/api_shared.c:652
#11 0x00007ffff7f8ed7f in vl_socket_process_api_msg (rp=<optimized out>, input_v=<optimized out>) at ./src/vlibmemory/socket_api.c:208
#12 0x00007ffff7f978d3 in vl_api_clnt_process (vm=<optimized out>, node=<optimized out>, f=<optimized out>)
    at ./src/vlibmemory/memclnt_api.c:429
#13 0x00007ffff6853966 in vlib_process_bootstrap (_a=<optimized out>) at ./src/vlib/main.c:1223
#14 0x00007ffff67ba03c in clib_calljmp () at /usr/src/packages/BUILD/src/vppinfra/longjmp.S:123
#15 0x00007ffe1edfcd90 in ?? ()
#16 0x00007ffff6855054 in vlib_process_startup (f=0x0, p=0x7ffe2078f780, vm=0x7ffe1fa007c0) at ./src/vlib/main.c:1248
#17 dispatch_process (vm=0x7ffe1fa007c0, p=<optimized out>, last_time_stamp=<optimized out>, f=0x0) at ./src/vlib/main.c:1304
#18 0x0000000000000000 in ?? ()
(gdb)

The segfault occurs because the txq->complq pointer dereferenced here is NULL:

    cmd.create_tx_queue.tx_comp_ring_addr =
      cpu_to_be64(txq->complq->tx_ring_phys_addr);
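
For what it's worth, a defensive check just above that dereference would turn the crash into a clean start-up failure. This is only an untested sketch (it assumes the base code's priv->queue_format / GVE_DQO_RDA_FORMAT naming and the PMD_DRV_LOG macro used elsewhere in gve_adminq.c), not a substitute for actual DQO_RDA support:

    /* Untested sketch: fail queue creation cleanly when the DQO
     * completion ring was never allocated, rather than dereferencing
     * the NULL complq pointer below. */
    if (priv->queue_format == GVE_DQO_RDA_FORMAT && txq->complq == NULL) {
        PMD_DRV_LOG(ERR, "TX queue %u has no completion ring (DQO_RDA not supported)",
                    queue_index);
        return -EINVAL;
    }

    cmd.create_tx_queue.tx_comp_ring_addr =
        cpu_to_be64(txq->complq->tx_ring_phys_addr);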

It may be possible to upgrade to a more recent version of dpdk, which should help me make progress, but before I do that I wondered whether there is any other way forward here. Specifically, is there a way to force a different queue format on the device? This is a c4-standard-8 VM running in GCE.

Thanks!
Peter.



Thread overview: 4+ messages
2024-10-29 17:08 Peter Morrow [this message]
2024-10-29 21:14 ` Rushil Gupta
2024-10-30 13:50   ` Peter Morrow
2024-11-01 19:21     ` Rushil Gupta
