* gve queue format on dpdk 22.11
@ 2024-10-29 17:08 Peter Morrow
2024-10-29 21:14 ` Rushil Gupta
0 siblings, 1 reply; 4+ messages in thread
From: Peter Morrow @ 2024-10-29 17:08 UTC (permalink / raw)
To: users; +Cc: joshwash, rushilg, jeroendb, jungeng.guo
Hi Folks,
The docs for the 22.11 release clearly state that the DQO_RDA queue format is not yet supported:
https://doc.dpdk.org/guides-22.11/nics/gve.html
I'm attempting to bring up our software router on GCP (VM instance type c4-standard-8), where we are currently using DPDK 22.11 (via Debian, with a 6.1.0-26-amd64 kernel). Given the lack of support for DQO_RDA, I see the following expected messages when I start VPP (our DPDK application):
Oct 29 11:11:59 gcpt2 vnet[1919]: dpdk: gve_adminq_describe_device(): Driver is running with DQO RDA queue format.
Oct 29 11:11:59 gcpt2 vnet[1919]: dpdk: gve_adminq_describe_device(): MAC addr: 42:01:0A:07:00:0F
Oct 29 11:11:59 gcpt2 vnet[1919]: dpdk: gve_init_priv(): Max TX queues 2, Max RX queues 2
Oct 29 11:11:59 gcpt2 vnet[1919]: dpdk: gve_dev_init(): DQO_RDA is not implemented and will be added in the future
As a result, VPP segfaults shortly after:
(gdb) bt
#0 gve_adminq_create_tx_queue (queue_index=0, priv=0xbc03a59c0) at ../drivers/net/gve/base/gve_adminq.c:502
#1 gve_adminq_create_tx_queues (priv=0xbc03a59c0, num_queues=2) at ../drivers/net/gve/base/gve_adminq.c:516
#2 0x00007ffda6c9c5d6 in gve_dev_start (dev=0x7ffe1f4f7240 <rte_eth_devices>) at ../drivers/net/gve/gve_ethdev.c:181
#3 0x00007ffe1f489b6a in rte_eth_dev_start () from /lib/x86_64-linux-gnu/librte_ethdev.so.23
#4 0x00007ffff584aa14 in dpdk_device_start (xd=xd@entry=0x7ffe204fbac0) at ./src/plugins/dpdk/device/common.c:393
#5 0x00007ffff584c598 in dpdk_interface_admin_up_down (vnm=<optimized out>, hw_if_index=<optimized out>, flags=<optimized out>)
at ./src/plugins/dpdk/device/device.c:485
#6 0x00007ffff6ab3ccd in vnet_sw_interface_set_flags_helper (vnm=vnm@entry=0x7ffff7f6b660 <vnet_main>, sw_if_index=<optimized out>,
flags=VNET_SW_INTERFACE_FLAG_ADMIN_UP, helper_flags=0, helper_flags@entry=VNET_INTERFACE_SET_FLAGS_HELPER_WANT_REDISTRIBUTE)
at ./src/vnet/interface.c:545
#7 0x00007ffff6ab489f in vnet_sw_interface_set_flags (vnm=vnm@entry=0x7ffff7f6b660 <vnet_main>, sw_if_index=<optimized out>,
flags=<optimized out>) at ./src/vnet/interface.c:601
#8 0x00007ffff6accee9 in vl_api_sw_interface_set_flags_t_handler (mp=0x7ffe3f241ff8) at ./src/vnet/interface_api.c:100
#9 0x00007ffff7fa6e0d in msg_handler_internal (free_it=0, do_it=1, trace_it=<optimized out>, msg_len=<optimized out>,
the_msg=0x7ffe3f241ff8, am=0x7ffff7fb8f40 <api_global_main>) at ./src/vlibapi/api_shared.c:580
#10 vl_msg_api_handler_no_free (the_msg=0x7ffe3f241ff8, msg_len=<optimized out>) at ./src/vlibapi/api_shared.c:652
#11 0x00007ffff7f8ed7f in vl_socket_process_api_msg (rp=<optimized out>, input_v=<optimized out>) at ./src/vlibmemory/socket_api.c:208
#12 0x00007ffff7f978d3 in vl_api_clnt_process (vm=<optimized out>, node=<optimized out>, f=<optimized out>)
at ./src/vlibmemory/memclnt_api.c:429
#13 0x00007ffff6853966 in vlib_process_bootstrap (_a=<optimized out>) at ./src/vlib/main.c:1223
#14 0x00007ffff67ba03c in clib_calljmp () at /usr/src/packages/BUILD/src/vppinfra/longjmp.S:123
#15 0x00007ffe1edfcd90 in ?? ()
#16 0x00007ffff6855054 in vlib_process_startup (f=0x0, p=0x7ffe2078f780, vm=0x7ffe1fa007c0) at ./src/vlib/main.c:1248
#17 dispatch_process (vm=0x7ffe1fa007c0, p=<optimized out>, last_time_stamp=<optimized out>, f=0x0) at ./src/vlib/main.c:1304
#18 0x0000000000000000 in ?? ()
(gdb)
The segfault occurs due to the complq pointer here being NULL:
cmd.create_tx_queue.tx_comp_ring_addr =
        cpu_to_be64(txq->complq->tx_ring_phys_addr);
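For anyone hitting this on 22.11 before they can upgrade, a guard along the following lines just before that assignment would turn the crash into a clean start failure. This is only a sketch against the 22.11 driver layout; the logging macro and error-handling convention shown are assumptions on my part rather than the exact gve_adminq.c code:

        /* Sketch only: refuse to build the create-tx-queue admin command when
         * the DQO completion ring was never allocated (the 22.11 PMD does not
         * implement DQO_RDA), instead of dereferencing a NULL complq. */
        if (txq->complq == NULL) {
                PMD_DRV_LOG(ERR,
                            "TX completion ring not set up; DQO_RDA is not supported by this PMD");
                return -EINVAL; /* assumed error convention; surfaces via rte_eth_dev_start() */
        }

        cmd.create_tx_queue.tx_comp_ring_addr =
                cpu_to_be64(txq->complq->tx_ring_phys_addr);

Failing earlier, in gve_dev_init() where the "DQO_RDA is not implemented" message is already logged, would arguably be the cleaner place for such a check.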
Upgrading to a more recent version of DPDK may well get me past this, but before I do that I wondered whether there is any other way to make progress. Specifically, is there a way to force a different queue format on the device? This is a c4-standard-8 VM running in GCE.
Thanks!
Peter.
* Re: gve queue format on dpdk 22.11
2024-10-29 17:08 gve queue format on dpdk 22.11 Peter Morrow
@ 2024-10-29 21:14 ` Rushil Gupta
2024-10-30 13:50 ` Peter Morrow
0 siblings, 1 reply; 4+ messages in thread
From: Rushil Gupta @ 2024-10-29 21:14 UTC (permalink / raw)
To: Peter Morrow; +Cc: users, joshwash, jeroendb, jungeng.guo
Hi Peter
DQO_RDA is supported from 23.07 onwards.
On Tue, Oct 29, 2024 at 10:09 AM Peter Morrow <peter@graphiant.com> wrote:
> Hi Folks,
>
> Reading the docs for the 22.11 release it clearly states that the
> DQO_RDA queue format is not yet supported:
>
> https://doc.dpdk.org/guides-22.11/nics/gve.html
>
> I'm attempting to bring up our software router on GCP (VM instance type
> c4-standard-8) where we are currently using dpdk 22.11 (via debian, with a
> 6.1.0-26-amd64 kernel), given the lack of support for DQO_RDA I see the
> following expected messages when I start vpp (our dpdk application):
>
> Oct 29 11:11:59 gcpt2 vnet[1919]: dpdk: gve_adminq_describe_device():
> Driver is running with DQO RDA queue format.
> Oct 29 11:11:59 gcpt2 vnet[1919]: dpdk: gve_adminq_describe_device(): MAC
> addr: 42:01:0A:07:00:0F
> Oct 29 11:11:59 gcpt2 vnet[1919]: dpdk: gve_init_priv(): Max TX queues 2,
> Max RX queues 2
> Oct 29 11:11:59 gcpt2 vnet[1919]: dpdk: gve_dev_init(): DQO_RDA is not
> implemented and will be added in the future
>
> As a result vpp segfaults shortly after:
>
> (gdb) bt
> #0 gve_adminq_create_tx_queue (queue_index=0, priv=0xbc03a59c0) at
> ../drivers/net/gve/base/gve_adminq.c:502
> #1 gve_adminq_create_tx_queues (priv=0xbc03a59c0, num_queues=2) at
> ../drivers/net/gve/base/gve_adminq.c:516
> #2 0x00007ffda6c9c5d6 in gve_dev_start (dev=0x7ffe1f4f7240
> <rte_eth_devices>) at ../drivers/net/gve/gve_ethdev.c:181
> #3 0x00007ffe1f489b6a in rte_eth_dev_start () from
> /lib/x86_64-linux-gnu/librte_ethdev.so.23
> #4 0x00007ffff584aa14 in dpdk_device_start (xd=xd@entry=0x7ffe204fbac0)
> at ./src/plugins/dpdk/device/common.c:393
> #5 0x00007ffff584c598 in dpdk_interface_admin_up_down (vnm=<optimized
> out>, hw_if_index=<optimized out>, flags=<optimized out>)
> at ./src/plugins/dpdk/device/device.c:485
> #6 0x00007ffff6ab3ccd in vnet_sw_interface_set_flags_helper (vnm=vnm@entry=0x7ffff7f6b660
> <vnet_main>, sw_if_index=<optimized out>,
> flags=VNET_SW_INTERFACE_FLAG_ADMIN_UP, helper_flags=0,
> helper_flags@entry=VNET_INTERFACE_SET_FLAGS_HELPER_WANT_REDISTRIBUTE)
> at ./src/vnet/interface.c:545
> #7 0x00007ffff6ab489f in vnet_sw_interface_set_flags (vnm=vnm@entry=0x7ffff7f6b660
> <vnet_main>, sw_if_index=<optimized out>,
> flags=<optimized out>) at ./src/vnet/interface.c:601
> #8 0x00007ffff6accee9 in vl_api_sw_interface_set_flags_t_handler
> (mp=0x7ffe3f241ff8) at ./src/vnet/interface_api.c:100
> #9 0x00007ffff7fa6e0d in msg_handler_internal (free_it=0, do_it=1,
> trace_it=<optimized out>, msg_len=<optimized out>,
> the_msg=0x7ffe3f241ff8, am=0x7ffff7fb8f40 <api_global_main>) at
> ./src/vlibapi/api_shared.c:580
> #10 vl_msg_api_handler_no_free (the_msg=0x7ffe3f241ff8, msg_len=<optimized
> out>) at ./src/vlibapi/api_shared.c:652
> #11 0x00007ffff7f8ed7f in vl_socket_process_api_msg (rp=<optimized out>,
> input_v=<optimized out>) at ./src/vlibmemory/socket_api.c:208
> #12 0x00007ffff7f978d3 in vl_api_clnt_process (vm=<optimized out>,
> node=<optimized out>, f=<optimized out>)
> at ./src/vlibmemory/memclnt_api.c:429
> #13 0x00007ffff6853966 in vlib_process_bootstrap (_a=<optimized out>) at
> ./src/vlib/main.c:1223
> #14 0x00007ffff67ba03c in clib_calljmp () at
> /usr/src/packages/BUILD/src/vppinfra/longjmp.S:123
> #15 0x00007ffe1edfcd90 in ?? ()
> #16 0x00007ffff6855054 in vlib_process_startup (f=0x0, p=0x7ffe2078f780,
> vm=0x7ffe1fa007c0) at ./src/vlib/main.c:1248
> #17 dispatch_process (vm=0x7ffe1fa007c0, p=<optimized out>,
> last_time_stamp=<optimized out>, f=0x0) at ./src/vlib/main.c:1304
> #18 0x0000000000000000 in ?? ()
> (gdb)
>
> The segfault occurs due to the complq pointer here being NULL:
>
> cmd.create_tx_queue.tx_comp_ring_addr =
> cpu_to_be64(txq->complq->tx_ring_phys_addr);
>
> It may be possible to upgrade to a more recent version of dpdk which
> should help me progress, though before I do that I wondered if there is any
> other way to make progress here? Specifically, is there a way to force a
> different queue format in the device? This is a c4-standard-8 VM running in
> GCE.
>
> Thanks!
> Peter.
>
>
* Re: gve queue format on dpdk 22.11
2024-10-29 21:14 ` Rushil Gupta
@ 2024-10-30 13:50 ` Peter Morrow
2024-11-01 19:21 ` Rushil Gupta
0 siblings, 1 reply; 4+ messages in thread
From: Peter Morrow @ 2024-10-30 13:50 UTC (permalink / raw)
To: Rushil Gupta; +Cc: users, joshwash, jeroendb, jungeng.guo
Thanks Rushil,
I tested with 23.11 and everything is good now. Out of interest, what determines the queue format on the device? Is there a way to change the queue format so that 22.11, where gve support first appeared, could be used?
Thanks,
Peter.
________________________________
From: Rushil Gupta <rushilg@google.com>
Sent: 29 October 2024 21:14
To: Peter Morrow <peter@graphiant.com>
Cc: users@dpdk.org; joshwash@google.com; jeroendb@google.com; jungeng.guo@intel.com
Subject: Re: gve queue format on dpdk 22.11
Hi Peter
DQO_RDA is supported 23.07 onwards.
On Tue, Oct 29, 2024 at 10:09 AM Peter Morrow <peter@graphiant.com> wrote:
Hi Folks,
Reading the docs for the 22.11 release it clearly states that the DQO_RDA queue format is not yet supported:
https://doc.dpdk.org/guides-22.11/nics/gve.html
I'm attempting to bring up our software router on GCP (VM instance type c4-standard-8) where we are currently using dpdk 22.11 (via debian, with a 6.1.0-26-amd64 kernel), given the lack of support for DQO_RDA I see the following expected messages when I start vpp (our dpdk application):
Oct 29 11:11:59 gcpt2 vnet[1919]: dpdk: gve_adminq_describe_device(): Driver is running with DQO RDA queue format.
Oct 29 11:11:59 gcpt2 vnet[1919]: dpdk: gve_adminq_describe_device(): MAC addr: 42:01:0A:07:00:0F
Oct 29 11:11:59 gcpt2 vnet[1919]: dpdk: gve_init_priv(): Max TX queues 2, Max RX queues 2
Oct 29 11:11:59 gcpt2 vnet[1919]: dpdk: gve_dev_init(): DQO_RDA is not implemented and will be added in the future
As a result vpp segfaults shortly after:
(gdb) bt
#0 gve_adminq_create_tx_queue (queue_index=0, priv=0xbc03a59c0) at ../drivers/net/gve/base/gve_adminq.c:502
#1 gve_adminq_create_tx_queues (priv=0xbc03a59c0, num_queues=2) at ../drivers/net/gve/base/gve_adminq.c:516
#2 0x00007ffda6c9c5d6 in gve_dev_start (dev=0x7ffe1f4f7240 <rte_eth_devices>) at ../drivers/net/gve/gve_ethdev.c:181
#3 0x00007ffe1f489b6a in rte_eth_dev_start () from /lib/x86_64-linux-gnu/librte_ethdev.so.23
#4 0x00007ffff584aa14 in dpdk_device_start (xd=xd@entry=0x7ffe204fbac0) at ./src/plugins/dpdk/device/common.c:393
#5 0x00007ffff584c598 in dpdk_interface_admin_up_down (vnm=<optimized out>, hw_if_index=<optimized out>, flags=<optimized out>)
at ./src/plugins/dpdk/device/device.c:485
#6 0x00007ffff6ab3ccd in vnet_sw_interface_set_flags_helper (vnm=vnm@entry=0x7ffff7f6b660 <vnet_main>, sw_if_index=<optimized out>,
flags=VNET_SW_INTERFACE_FLAG_ADMIN_UP, helper_flags=0, helper_flags@entry=VNET_INTERFACE_SET_FLAGS_HELPER_WANT_REDISTRIBUTE)
at ./src/vnet/interface.c:545
#7 0x00007ffff6ab489f in vnet_sw_interface_set_flags (vnm=vnm@entry=0x7ffff7f6b660 <vnet_main>, sw_if_index=<optimized out>,
flags=<optimized out>) at ./src/vnet/interface.c:601
#8 0x00007ffff6accee9 in vl_api_sw_interface_set_flags_t_handler (mp=0x7ffe3f241ff8) at ./src/vnet/interface_api.c:100
#9 0x00007ffff7fa6e0d in msg_handler_internal (free_it=0, do_it=1, trace_it=<optimized out>, msg_len=<optimized out>,
the_msg=0x7ffe3f241ff8, am=0x7ffff7fb8f40 <api_global_main>) at ./src/vlibapi/api_shared.c:580
#10 vl_msg_api_handler_no_free (the_msg=0x7ffe3f241ff8, msg_len=<optimized out>) at ./src/vlibapi/api_shared.c:652
#11 0x00007ffff7f8ed7f in vl_socket_process_api_msg (rp=<optimized out>, input_v=<optimized out>) at ./src/vlibmemory/socket_api.c:208
#12 0x00007ffff7f978d3 in vl_api_clnt_process (vm=<optimized out>, node=<optimized out>, f=<optimized out>)
at ./src/vlibmemory/memclnt_api.c:429
#13 0x00007ffff6853966 in vlib_process_bootstrap (_a=<optimized out>) at ./src/vlib/main.c:1223
#14 0x00007ffff67ba03c in clib_calljmp () at /usr/src/packages/BUILD/src/vppinfra/longjmp.S:123
#15 0x00007ffe1edfcd90 in ?? ()
#16 0x00007ffff6855054 in vlib_process_startup (f=0x0, p=0x7ffe2078f780, vm=0x7ffe1fa007c0) at ./src/vlib/main.c:1248
#17 dispatch_process (vm=0x7ffe1fa007c0, p=<optimized out>, last_time_stamp=<optimized out>, f=0x0) at ./src/vlib/main.c:1304
#18 0x0000000000000000 in ?? ()
(gdb)
The segfault occurs due to the complq pointer here being NULL:
cmd.create_tx_queue.tx_comp_ring_addr =
cpu_to_be64(txq->complq->tx_ring_phys_addr);
It may be possible to upgrade to a more recent version of dpdk which should help me progress, though before I do that I wondered if there is any other way to make progress here? Specifically, is there a way to force a different queue format in the device? This is a c4-standard-8 VM running in GCE.
Thanks!
Peter.
* Re: gve queue format on dpdk 22.11
2024-10-30 13:50 ` Peter Morrow
@ 2024-11-01 19:21 ` Rushil Gupta
0 siblings, 0 replies; 4+ messages in thread
From: Rushil Gupta @ 2024-11-01 19:21 UTC (permalink / raw)
To: Peter Morrow; +Cc: users, joshwash, jeroendb, jungeng.guo
Hi Peter
That's great!
The queue format is determined internally based on the VM instance type. All gen2 instances (n2, e2, g2, etc.) use the GQI_QPL format. There is no way for the customer to configure it.
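Roughly speaking, the format comes back from the device in the describe-device admin command response and the driver simply records it; there is no admin command for the guest to request a different format. A simplified sketch of that flow, with the enum as I recall it from the gve base code and a placeholder name for the descriptor field (so treat both as illustrative rather than the exact driver source):

        /* Queue formats negotiated by the gVNIC device (illustrative copy of
         * the enum in drivers/net/gve/base). */
        enum gve_queue_format {
                GVE_QUEUE_FORMAT_UNSPECIFIED = 0,
                GVE_GQI_RDA_FORMAT           = 1,
                GVE_GQI_QPL_FORMAT           = 2,   /* gen2 instances: n2, e2, g2, ... */
                GVE_DQO_RDA_FORMAT           = 3,   /* newer instances such as c4 */
        };

        /* Simplified: gve_adminq_describe_device() issues the describe-device
         * command and adopts whatever the device reports. 'reported_format' is
         * a placeholder for the relevant descriptor field, not its real name. */
        priv->queue_format = (enum gve_queue_format)reported_format;

So on a c4 instance the device reports DQO_RDA, and the way to run it is to use a release (23.07 or later) whose gve PMD implements that format.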
On Wed, Oct 30, 2024 at 6:50 AM Peter Morrow <peter@graphiant.com> wrote:
> Thanks Rushil,
>
> I tested with 23.11 and everything is good now. Out of interest, what
> determines the queue format on the device? Is there a way to change the
> queue format such that 22.11, where gve support first appeared, could be used?
>
> Thanks,
> Peter.
> ------------------------------
> *From:* Rushil Gupta <rushilg@google.com>
> *Sent:* 29 October 2024 21:14
> *To:* Peter Morrow <peter@graphiant.com>
> *Cc:* users@dpdk.org; joshwash@google.com; jeroendb@google.com;
> jungeng.guo@intel.com
> *Subject:* Re: gve queue format on dpdk 22.11
>
>
>
> Hi Peter
> DQO_RDA is supported 23.07 onwards.
>
> On Tue, Oct 29, 2024 at 10:09 AM Peter Morrow <peter@graphiant.com> wrote:
>
> Hi Folks,
>
> Reading the docs for the 22.11 release it clearly states that the
> DQO_RDA queue format is not yet supported:
>
> https://doc.dpdk.org/guides-22.11/nics/gve.html
>
> I'm attempting to bring up our software router on GCP (VM instance type
> c4-standard-8) where we are currently using dpdk 22.11 (via debian, with a
> 6.1.0-26-amd64 kernel), given the lack of support for DQO_RDA I see the
> following expected messages when I start vpp (our dpdk application):
>
> Oct 29 11:11:59 gcpt2 vnet[1919]: dpdk: gve_adminq_describe_device():
> Driver is running with DQO RDA queue format.
> Oct 29 11:11:59 gcpt2 vnet[1919]: dpdk: gve_adminq_describe_device(): MAC
> addr: 42:01:0A:07:00:0F
> Oct 29 11:11:59 gcpt2 vnet[1919]: dpdk: gve_init_priv(): Max TX queues 2,
> Max RX queues 2
> Oct 29 11:11:59 gcpt2 vnet[1919]: dpdk: gve_dev_init(): DQO_RDA is not
> implemented and will be added in the future
>
> As a result vpp segfaults shortly after:
>
> (gdb) bt
> #0 gve_adminq_create_tx_queue (queue_index=0, priv=0xbc03a59c0) at
> ../drivers/net/gve/base/gve_adminq.c:502
> #1 gve_adminq_create_tx_queues (priv=0xbc03a59c0, num_queues=2) at
> ../drivers/net/gve/base/gve_adminq.c:516
> #2 0x00007ffda6c9c5d6 in gve_dev_start (dev=0x7ffe1f4f7240
> <rte_eth_devices>) at ../drivers/net/gve/gve_ethdev.c:181
> #3 0x00007ffe1f489b6a in rte_eth_dev_start () from
> /lib/x86_64-linux-gnu/librte_ethdev.so.23
> #4 0x00007ffff584aa14 in dpdk_device_start (xd=xd@entry=0x7ffe204fbac0)
> at ./src/plugins/dpdk/device/common.c:393
> #5 0x00007ffff584c598 in dpdk_interface_admin_up_down (vnm=<optimized
> out>, hw_if_index=<optimized out>, flags=<optimized out>)
> at ./src/plugins/dpdk/device/device.c:485
> #6 0x00007ffff6ab3ccd in vnet_sw_interface_set_flags_helper (vnm=vnm@entry=0x7ffff7f6b660
> <vnet_main>, sw_if_index=<optimized out>,
> flags=VNET_SW_INTERFACE_FLAG_ADMIN_UP, helper_flags=0,
> helper_flags@entry=VNET_INTERFACE_SET_FLAGS_HELPER_WANT_REDISTRIBUTE)
> at ./src/vnet/interface.c:545
> #7 0x00007ffff6ab489f in vnet_sw_interface_set_flags (vnm=vnm@entry=0x7ffff7f6b660
> <vnet_main>, sw_if_index=<optimized out>,
> flags=<optimized out>) at ./src/vnet/interface.c:601
> #8 0x00007ffff6accee9 in vl_api_sw_interface_set_flags_t_handler
> (mp=0x7ffe3f241ff8) at ./src/vnet/interface_api.c:100
> #9 0x00007ffff7fa6e0d in msg_handler_internal (free_it=0, do_it=1,
> trace_it=<optimized out>, msg_len=<optimized out>,
> the_msg=0x7ffe3f241ff8, am=0x7ffff7fb8f40 <api_global_main>) at
> ./src/vlibapi/api_shared.c:580
> #10 vl_msg_api_handler_no_free (the_msg=0x7ffe3f241ff8, msg_len=<optimized
> out>) at ./src/vlibapi/api_shared.c:652
> #11 0x00007ffff7f8ed7f in vl_socket_process_api_msg (rp=<optimized out>,
> input_v=<optimized out>) at ./src/vlibmemory/socket_api.c:208
> #12 0x00007ffff7f978d3 in vl_api_clnt_process (vm=<optimized out>,
> node=<optimized out>, f=<optimized out>)
> at ./src/vlibmemory/memclnt_api.c:429
> #13 0x00007ffff6853966 in vlib_process_bootstrap (_a=<optimized out>) at
> ./src/vlib/main.c:1223
> #14 0x00007ffff67ba03c in clib_calljmp () at
> /usr/src/packages/BUILD/src/vppinfra/longjmp.S:123
> #15 0x00007ffe1edfcd90 in ?? ()
> #16 0x00007ffff6855054 in vlib_process_startup (f=0x0, p=0x7ffe2078f780,
> vm=0x7ffe1fa007c0) at ./src/vlib/main.c:1248
> #17 dispatch_process (vm=0x7ffe1fa007c0, p=<optimized out>,
> last_time_stamp=<optimized out>, f=0x0) at ./src/vlib/main.c:1304
> #18 0x0000000000000000 in ?? ()
> (gdb)
>
> The segfault occurs due to the complq pointer here being NULL:
>
> cmd.create_tx_queue.tx_comp_ring_addr =
> cpu_to_be64(txq->complq->tx_ring_phys_addr);
>
> It may be possible to upgrade to a more recent version of dpdk which
> should help me progress, though before I do that I wondered if there is any
> other way to make progress here? Specifically, is there a way to force a
> different queue format in the device? This is a c4-standard-8 VM running in
> GCE.
>
> Thanks!
> Peter.
>
>