From: wangyunjian <wangyunjian@huawei.com>
To: Liron Himi <lironh@marvell.com>, "dev@dpdk.org" <dev@dpdk.org>
Cc: "zr@semihalf.com" <zr@semihalf.com>,
"Lilijun (Jerry)" <jerry.lilijun@huawei.com>,
xudingke <xudingke@huawei.com>,
"stable@dpdk.org" <stable@dpdk.org>
Subject: Re: [dpdk-stable] [EXT] [dpdk-dev] [PATCH] net/mvneta: check allocation in rx queue flush
Date: Mon, 7 Dec 2020 13:07:45 +0000 [thread overview]
Message-ID: <34EFBCA9F01B0748BEB6B629CE643AE60DB5CC89@DGGEMM533-MBX.china.huawei.com> (raw)
In-Reply-To: <DM5PR18MB2214E2C85EF0028FEB62BB1DC6CE0@DM5PR18MB2214.namprd18.prod.outlook.com>
> -----Original Message-----
> From: Liron Himi [mailto:lironh@marvell.com]
> Sent: Monday, December 7, 2020 8:38 PM
> To: wangyunjian <wangyunjian@huawei.com>; dev@dpdk.org
> Cc: zr@semihalf.com; Lilijun (Jerry) <jerry.lilijun@huawei.com>; xudingke
> <xudingke@huawei.com>; stable@dpdk.org; Liron Himi <lironh@marvell.com>
> Subject: RE: [EXT] [dpdk-dev] [PATCH] net/mvneta: check allocation in rx queue
> flush
>
> Hi,
>
> How about using 2 local arrays for descs & bufs instead of the malloc/free?
These two arrays each have MRVL_NETA_RXD_MAX (2048) entries. Wouldn't local arrays of that size be too large for the stack?
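To make the stack-size concern concrete, here is a minimal sketch. The per-entry struct sizes below are assumptions picked only to show the order of magnitude; the driver's real descriptor and buffer entry types may differ.

```c
#include <assert.h>
#include <stddef.h>

#define MRVL_NETA_RXD_MAX 2048   /* ring size used by mvneta_rx_queue_flush() */

/* Stand-in entry types; sizes are illustrative assumptions, not the
 * driver's actual struct layouts. */
struct desc_stub { char pad[32]; };
struct buf_stub  { void *cookie; unsigned long long addr; };

/* Total stack space two local arrays would consume in the flush routine. */
static size_t flush_stack_bytes(void)
{
	return MRVL_NETA_RXD_MAX * sizeof(struct desc_stub)
	     + MRVL_NETA_RXD_MAX * sizeof(struct buf_stub);
}
```

Even with these modest stub sizes the two arrays approach 100 KiB, which is why keeping them on the heap seems safer in a data-path helper.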
>
> Liron
>
>
> -----Original Message-----
> From: wangyunjian <wangyunjian@huawei.com>
> Sent: Monday, 7 December 2020 13:37
> To: dev@dpdk.org
> Cc: Liron Himi <lironh@marvell.com>; zr@semihalf.com;
> jerry.lilijun@huawei.com; xudingke@huawei.com; Yunjian Wang
> <wangyunjian@huawei.com>; stable@dpdk.org
> Subject: [EXT] [dpdk-dev] [PATCH] net/mvneta: check allocation in rx queue flush
>
> External Email
>
> ----------------------------------------------------------------------
> From: Yunjian Wang <wangyunjian@huawei.com>
>
> The function rte_malloc() can return NULL, so its return value needs to
> be checked.
>
> Fixes: ce7ea764597e ("net/mvneta: support Rx/Tx")
> Cc: stable@dpdk.org
>
> Signed-off-by: Yunjian Wang <wangyunjian@huawei.com>
> ---
> drivers/net/mvneta/mvneta_rxtx.c | 10 ++++++++++
> 1 file changed, 10 insertions(+)
>
> diff --git a/drivers/net/mvneta/mvneta_rxtx.c
> b/drivers/net/mvneta/mvneta_rxtx.c
> index 10b6f57584..dfa7ecc090 100644
> --- a/drivers/net/mvneta/mvneta_rxtx.c
> +++ b/drivers/net/mvneta/mvneta_rxtx.c
> @@ -872,7 +872,17 @@ mvneta_rx_queue_flush(struct mvneta_rxq *rxq)
> int ret, i;
>
> descs = rte_malloc("rxdesc", MRVL_NETA_RXD_MAX * sizeof(*descs), 0);
> + if (descs == NULL) {
> + MVNETA_LOG(ERR, "Failed to allocate descs.");
> + return;
> + }
> +
> bufs = rte_malloc("buffs", MRVL_NETA_RXD_MAX * sizeof(*bufs), 0);
> + if (bufs == NULL) {
> + MVNETA_LOG(ERR, "Failed to allocate bufs.");
> + rte_free(descs);
> + return;
> + }
>
> do {
> num = MRVL_NETA_RXD_MAX;
> --
> 2.23.0
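For reference, the error path the patch adds follows the usual two-allocation pattern: if the second allocation fails, the first must be released before returning. The sketch below models that pattern standalone with plain malloc/free; the sizes and names are illustrative, not the driver's real ones.

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* Standalone model of the patch's error handling: two heap allocations,
 * where failure of the second must free the first before bailing out. */
static int flush_alloc(size_t nb_desc, void **descs, void **bufs)
{
	*descs = malloc(nb_desc * 32);
	if (*descs == NULL)
		return -1;                 /* nothing allocated yet, just bail */

	*bufs = malloc(nb_desc * 16);
	if (*bufs == NULL) {
		free(*descs);              /* avoid leaking the first allocation */
		*descs = NULL;
		return -1;
	}
	return 0;
}
```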
Thread overview: 5+ messages
2020-12-07 11:37 [dpdk-stable] " wangyunjian
2020-12-07 12:37 ` [dpdk-stable] [EXT] " Liron Himi
2020-12-07 13:07 ` wangyunjian [this message]
2020-12-15 22:29 ` Liron Himi
2021-01-12 14:21 ` [dpdk-stable] " Jerin Jacob