* [dpdk-dev] distributor crash with user assigned flow ids
@ 2017-10-24 21:24 Suryanathan P
From: Suryanathan P @ 2017-10-24 21:24 UTC
  To: dev

Hi,

We see a crash in the DPDK packet distributor library when the packets are
assigned user-defined flow ids. I was able to narrow this issue down to
incorrect worker ids returned by the SSE implementation of find_match_vec().
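
For context, the flow ids are assigned on the application side roughly like
this before the burst is handed to the distributor (sketch only;
compute_flow_id() is a placeholder for our own tagging logic, the tag itself
is carried in the mbuf's hash.usr field):

    /* tag each received packet with a user-defined flow id, e.g. 9521 */
    for (i = 0; i < nb_rx; i++)
            bufs[i]->hash.usr = compute_flow_id(bufs[i]);

    rte_distributor_process(d, bufs, nb_rx);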

Debug:

Non-zero flow ids picked up from each mbuf in this burst:
(gdb) p flows
$63 = {9521, 9521, 9521, 9521, 9521, 9521, 9521, 9521}

There are eight workers:
(gdb) p d->num_workers
$64 = 8

And eight packets being processed in this burst:
(gdb) p pkts
$65 = 8

A call to the vector (SSE) implementation returns incorrect worker ids:

(gdb) call find_match_vec(d, &flows[0], &matches[0])
(gdb) p matches
$66 = {9, 9, 9, 9, 9, 9, 9, 9}

Whereas a call to the scalar implementation returns worker ids up to 8.
A comment in rte_distributor_process_v1705() says the matches array now
contains the intended worker ID (+1), so it makes sense to have worker ids
up to eight: (0-7)+1.

(gdb) call find_match_scalar(d, &flows[0], &matches[0])
(gdb) p matches
$67 = {1, 1, 1, 0, 8, 0, 0, 0}
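
For reference, the invariant I would expect on the output of either match
function is roughly the following (illustrative only; matches, pkts and d
as in rte_distributor_process_v1705()):

    /* each entry is either 0 (no match) or a worker id + 1,
     * so it should never exceed d->num_workers */
    for (j = 0; j < pkts; j++)
            RTE_ASSERT(matches[j] <= d->num_workers);

The SSE result above (all 9s with num_workers == 8) violates this; the
scalar result does not.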

Is this a bug in the SSE implementation of the find_match function?

The function segfaults when trying to access a non-existent worker's backlog
structure:

(gdb) p d->backlog[matches[j]-1]
$76 = {start = 0, count = 1, pkts = {0, 0, 0, 0, 0, 0, 0, 0}, tags = 0x0}
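
With matches[j] == 9 this indexes the backlog slot one past the last active
worker, which is uninitialized (note tags == 0x0), and presumably the later
access through tags is what faults. As a stop-gap we are considering clamping
the match before the backlog access in rte_distributor_process_v1705()
(local workaround sketch, not upstream code):

    /* treat an out-of-range match as "no match" rather than touching
     * an uninitialized backlog slot */
    if (matches[j] > d->num_workers)
            matches[j] = 0;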


Regards,
Suryanathan
