DPDK patches and discussions
From: Ferruh Yigit <ferruh.yigit@xilinx.com>
To: John Weston <john.weston@senetas.com>
Cc: <dev@dpdk.org>, Jakub Grajciar <jgrajcia@cisco.com>
Subject: Re: memif driver segfault memory corruption.
Date: Mon, 18 Jul 2022 09:17:20 +0100	[thread overview]
Message-ID: <0017fca5-e52e-d7c9-248d-376840ac3c5e@xilinx.com> (raw)
In-Reply-To: <CAGJEGKQ4vKATojn=_jCC4_Aw4vSbgqCDXQ6+fwdj5=hrOrhH-Q@mail.gmail.com>

On 7/16/2022 1:15 AM, John Weston wrote:
> Hi,
> 
> I have been working with the DPDK memif driver and encountered 
> the following issues. Please consider these patches for review.
> 
> Firstly, the zero-copy code in drivers/net/memif/rte_eth_memif.c can 
> readily overflow the ring buffer for mbufs. This patch clamps the 
> n_slots passed to rte_pktmbuf_alloc_bulk so the allocation does not 
> run off the end of the buffers array.
> 
> @@ -524,14 +1196,23 @@ refill:
>  	 */
>  	head = __atomic_load_n(&ring->head, __ATOMIC_RELAXED);
>  	n_slots = ring_size - head + mq->last_tail;
> 
> -	if (n_slots < 32)
> -		goto no_free_mbufs;
> +	/* Stay within the mask so rte_pktmbuf_alloc_bulk() does not
> +	 * overrun the buffers array.
> +	 */
> +	if ((head & mask) + n_slots > ring_size)
> +		n_slots = ring_size - (head & mask);
> +
> +	if (n_slots <= 0)
> +		goto no_free_mbufs;
> 
>  	ret = rte_pktmbuf_alloc_bulk(mq->mempool, &mq->buffers[head & mask],
>  				     n_slots);
>  	if (unlikely(ret < 0))
>  		goto no_free_mbufs;
> 
>  	while (n_slots--) {
>  		s0 = head++ & mask;
>  		if (n_slots > 0)
> 
> 
> Secondly, a segfault can occur in the stats routine during 
> initialisation due to a NULL pointer dereference.
> 
> @@ -1404,10 +2167,14 @@ memif_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
> 
>  	/* RX stats */
>  	for (i = 0; i < nq; i++) {
>  		mq = dev->data->rx_queues[i];
> -		stats->q_ipackets[i] = mq->n_pkts;
> -		stats->q_ibytes[i] = mq->n_bytes;
> -		stats->ipackets += mq->n_pkts;
> -		stats->ibytes += mq->n_bytes;
> +		/* The queue may not be initialised yet. */
> +		if (mq) {
> +			stats->q_ipackets[i] = mq->n_pkts;
> +			stats->q_ibytes[i] = mq->n_bytes;
> +			stats->ipackets += mq->n_pkts;
> +			stats->ibytes += mq->n_bytes;
> +		}
>  	}
> 
>  	tmp = (pmd->role == MEMIF_ROLE_CLIENT) ? pmd->run.num_s2c_rings :
> 
> tmp = (pmd->role == MEMIF_ROLE_CLIENT) ? pmd->run.num_s2c_rings :
> 
> This pattern occurs in several places in the stats code, and NULL 
> dereference guards are needed in all of them.
> 
> Hope this helps someone else.
> 
> John

Hi John,

Thanks for reporting. Cc'ed memif maintainer.

Meanwhile, can you please submit a defect to Bugzilla [1], to be sure 
the issue is recorded properly?

Also if you want to contribute the fix yourself, please check the 
contribution guide:
https://doc.dpdk.org/guides/contributing/patches.html


[1]
https://bugs.dpdk.org/

