From: Ferruh Yigit <ferruh.yigit@intel.com>
To: Yangchao Zhou <zhouyates@gmail.com>, dev@dpdk.org
Cc: stephen@networkplumber.org, sodey@rbbn.com,
Junxiao Shi <sunnylandh@gmail.com>
Subject: Re: [dpdk-dev] [PATCH v3] kni: fix possible kernel crash with va2pa
Date: Wed, 10 Jul 2019 21:09:44 +0100
Message-ID: <421b6eaa-beac-bed4-fe3e-6cf8647406e9@intel.com>
In-Reply-To: <20190625150414.11332-1-zhouyates@gmail.com>
On 6/25/2019 4:04 PM, Yangchao Zhou wrote:
> va2pa depends on the offset between the physical address and virtual
> address of the current mbuf. It may get the wrong physical address of the
> next mbuf if that mbuf is allocated in another hugepage segment.
>
> In rte_mempool_populate_default(), trying to allocate a whole block of
> contiguous memory can fail. In that case, memory is reserved in several
> memzones that have different physical address and virtual address
> offsets. rte_mempool_populate_default() is used by
> rte_pktmbuf_pool_create().
>
> Fixes: 8451269e6d7b ("kni: remove continuous memory restriction")
>
> Signed-off-by: Yangchao Zhou <zhouyates@gmail.com>
Overall looks good to me. Not caused by this patch, but can you please check
the comment below too.
Also there is a comment from Junxiao; let's clear it before the ack.
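For context, the offset trick the commit message describes works roughly as
below. This is only a simplified sketch of the KNI address-translation
helpers, not the exact lib/kni code; the point is that the
buf_addr/buf_physaddr offset of one mbuf is applied to an address that may
belong to another mbuf, which only holds while both mbufs come from the same
contiguous memzone:

    /* Minimal fields needed for the sketch (the real struct has more). */
    struct rte_kni_mbuf {
            void *buf_addr;         /* buffer VA in the DPDK process */
            uint64_t buf_physaddr;  /* buffer PA */
            /* ... */
    };

    /* Translate a user-space VA to a PA using the VA/PA offset of 'm'.
     * If 'va' belongs to an mbuf allocated from a different memzone than
     * 'm', the offset is wrong and the result points to the wrong memory. */
    static void *va2pa(void *va, struct rte_kni_mbuf *m)
    {
            return (void *)((unsigned long)va -
                            ((unsigned long)m->buf_addr -
                             (unsigned long)m->buf_physaddr));
    }

    /* Reverse translation, same offset assumption. */
    static void *pa2va(void *pa, struct rte_kni_mbuf *m)
    {
            return (void *)((unsigned long)pa +
                            ((unsigned long)m->buf_addr -
                             (unsigned long)m->buf_physaddr));
    }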
<...>
> @@ -396,7 +401,7 @@ kni_net_rx_lo_fifo(struct kni_dev *kni)
> uint32_t ret;
> uint32_t len;
> uint32_t i, num, num_rq, num_tq, num_aq, num_fq;
> - struct rte_kni_mbuf *kva;
> + struct rte_kni_mbuf *kva, *next_kva;
> void *data_kva;
> struct rte_kni_mbuf *alloc_kva;
> void *alloc_data_kva;
> @@ -439,6 +444,13 @@ kni_net_rx_lo_fifo(struct kni_dev *kni)
> data_kva = kva2data_kva(kva);
> kni->va[i] = pa2va(kni->pa[i], kva);
>
> + while (kva->next) {
> + next_kva = pa2kva(kva->next);
> + /* Convert physical address to virtual address */
> + kva->next = pa2va(kva->next, next_kva);
> + kva = next_kva;
> + }
Not addressed in this patch, but in 'kni_net_rx_lo_fifo()' the length is
calculated as
'len = kva->pkt_len;'
However, while copying 'data' to 'alloc_data' the segmentation is not taken
into account and the full 'len' is used:
memcpy(alloc_data_kva, data_kva, len);
This may overflow 'alloc_data_kva' for some 'pkt_len' values.
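One way to handle it would be to copy segment by segment and bound the total
by the destination buffer size, along these lines. This is only a sketch: it
assumes it runs before the loop above rewrites 'kva->next' to a user-space
VA, and 'alloc_buf_size' is a hypothetical name for the allocated mbuf's
capacity, not an existing variable in the function:

    /* Copy per segment instead of a single pkt_len-sized memcpy. */
    uint32_t copied = 0;
    uint32_t remaining = alloc_buf_size;  /* hypothetical dest capacity */

    while (kva != NULL && remaining > 0) {
            uint32_t seg_len = kva->data_len;

            if (seg_len > remaining)
                    seg_len = remaining;
            memcpy((char *)alloc_data_kva + copied,
                   kva2data_kva(kva), seg_len);
            copied += seg_len;
            remaining -= seg_len;
            kva = kva->next ? pa2kva(kva->next) : NULL;
    }
    alloc_kva->pkt_len = copied;
    alloc_kva->data_len = copied;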
Thread overview: 19+ messages
2019-02-28 7:30 [dpdk-dev] [PATCH] " Yangchao Zhou
2019-03-06 17:31 ` Ferruh Yigit
2019-06-14 18:41 ` Dey, Souvik
2019-03-12 9:22 ` [dpdk-dev] [PATCH v2] " Yangchao Zhou
2019-03-19 18:35 ` Ferruh Yigit
2019-03-19 18:35 ` Ferruh Yigit
2019-03-22 20:49 ` Ferruh Yigit
2019-03-22 20:49 ` Ferruh Yigit
2019-06-18 4:06 ` Dey, Souvik
2019-06-18 15:48 ` Stephen Hemminger
2019-06-25 15:04 ` [dpdk-dev] [PATCH v3] " Yangchao Zhou
2019-07-02 20:07 ` [dpdk-dev] [v3] " Junxiao Shi
2019-07-10 20:11 ` Ferruh Yigit
2019-07-10 20:40 ` yoursunny
2019-07-10 21:23 ` Ferruh Yigit
2019-07-10 23:52 ` yoursunny
2019-07-10 20:09 ` Ferruh Yigit [this message]
2019-07-11 7:46 ` [dpdk-dev] [PATCH v3] " Ferruh Yigit
2019-07-15 20:50 ` Thomas Monjalon