From: Stephen Hemminger <stephen@networkplumber.org>
To: Tyler Retzlaff <roretzla@linux.microsoft.com>
Cc: dev@dpdk.org
Subject: Re: RFC acceptable handling of VLAs across toolchains
Date: Wed, 8 Nov 2023 08:51:54 -0800
Message-ID: <20231108085154.757719e4@hermes.local>
In-Reply-To: <20231107193220.GA15232@linuxonhyperv3.guj3yctzbm1etfxqx2vob5hsef.xx.internal.cloudapp.net>
On Tue, 7 Nov 2023 11:32:20 -0800
Tyler Retzlaff <roretzla@linux.microsoft.com> wrote:
> hi folks,
>
> i'm seeking advice. we have use of VLAs which are now optional in
> standard C. some toolchains provide a conformant implementation and msvc
> does not (and never will).
>
> we seem to have a few options, just curious about what people would
> prefer.
>
> * use alloca
>
> * use dynamically allocated storage
>
> * conditional compiled code where the msvc leg uses one of the previous
> two options
>
> i'll leave it simple for now, i'd like to hear input rather than provide
> a recommendation for now.
>
> thanks!
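To make the options concrete, the conditional-compilation leg could look
something like this. Sketch only: the RTE_VLA() helper name is invented
here for illustration and is not an existing DPDK macro.

#include <stdint.h>

/* Sketch only: keep the VLA where the toolchain supports it and fall
 * back to alloca() on MSVC. RTE_TOOLCHAIN_MSVC is the existing build
 * define; the RTE_VLA() helper name is made up for this example. */
#ifdef RTE_TOOLCHAIN_MSVC
#include <malloc.h>	/* alloca() on Windows */
#define RTE_VLA(type, name, n) type *name = alloca((n) * sizeof(type))
#else
#define RTE_VLA(type, name, n) type name[(n)]
#endif

static void
example(unsigned int n)
{
	RTE_VLA(uint32_t, tmp, n);	/* VLA or alloca(), per toolchain */
	unsigned int i;

	for (i = 0; i < n; i++)
		tmp[i] = i;
}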
As an experiment, I did a build of current DPDK with the -Wvla option.
It produced lots of warnings; some have obvious solutions, like:
../drivers/net/failsafe/failsafe_intr.c: In function ‘fs_rx_event_proxy_service_install’:
../drivers/net/failsafe/failsafe_intr.c:142:17: warning: ISO C90 forbids variable length array ‘service_core_list’ [-Wvla]
142 | uint32_t service_core_list[num_service_cores];
| ^~~~~~~~
This could just be RTE_MAX_LCORE.
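i.e. (untested sketch):

	/* untested sketch: num_service_cores is bounded by the number of
	 * lcores, so the existing build-time maximum can size the array */
	uint32_t service_core_list[RTE_MAX_LCORE];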
Others, like rte_metrics, should just use malloc(), as is already done
elsewhere in the same function.
../lib/metrics/rte_metrics_telemetry.c: In function ‘rte_metrics_tel_update_metrics_ethdev’:
../lib/metrics/rte_metrics_telemetry.c:140:9: warning: ISO C90 forbids variable length array ‘xstats_values’ [-Wvla]
140 | uint64_t xstats_values[num_xstats];
| ^~~~~~~~
../lib/metrics/rte_metrics_telemetry.c: In function ‘rte_metrics_tel_extract_data’:
../lib/metrics/rte_metrics_telemetry.c:384:9: warning: ISO C90 forbids variable length array ‘stat_names’ [-Wvla]
384 | const char *stat_names[num_stat_names];
| ^~~~~
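For the first of those, the heap-allocated version would be along these
lines (untested sketch; the -ENOMEM return value is an assumption):

	uint64_t *xstats_values;

	/* untested sketch: allocate on the heap instead of a VLA,
	 * matching the malloc() use already present in this function;
	 * the -ENOMEM return value is an assumption */
	xstats_values = malloc(num_xstats * sizeof(*xstats_values));
	if (xstats_values == NULL)
		return -ENOMEM;

	/* ... fill and use xstats_values as before ... */

	free(xstats_values);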
Others already have an implicit upper bound.
An example is rte_cuckoo_hash, where some fields use RTE_HASH_LOOKUP_BULK_MAX
and some use a VLA.
[170/2868] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
../lib/hash/rte_cuckoo_hash.c: In function ‘rte_hash_lookup_bulk_data’:
../lib/hash/rte_cuckoo_hash.c:2355:9: warning: ISO C90 forbids variable length array ‘positions’ [-Wvla]
2355 | int32_t positions[num_keys];
| ^~~~~~~
../lib/hash/rte_cuckoo_hash.c: In function ‘rte_hash_lookup_with_hash_bulk_data’:
../lib/hash/rte_cuckoo_hash.c:2471:9: warning: ISO C90 forbids variable length array ‘positions’ [-Wvla]
2471 | int32_t positions[num_keys];
| ^~~~~~~
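Since the bulk lookup API already requires num_keys to be at most
RTE_HASH_LOOKUP_BULK_MAX, both of these could simply use the fixed
bound (sketch):

	/* sketch: num_keys is already checked against
	 * RTE_HASH_LOOKUP_BULK_MAX by the bulk API, so a fixed-size
	 * array is sufficient */
	int32_t positions[RTE_HASH_LOOKUP_BULK_MAX];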
Would it make sense to have an rte_config.h value for the maximum burst size?
Lots of code is using nb_pkts.
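Something like the following, where the macro name and value are
invented just to illustrate the idea:

/* hypothetical rte_config.h entry; the name and value are invented */
#define RTE_MAX_BURST_SIZE 512

/* a driver or lib that today declares something like
 * "struct rte_mbuf *pkts[nb_pkts]" as a VLA could then use the
 * fixed bound instead */
struct rte_mbuf *pkts[RTE_MAX_BURST_SIZE];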
There are also some confusing ones, like:
../lib/mempool/rte_mempool.c: In function ‘mempool_cache_init’:
../lib/mempool/rte_mempool.c:751:50: warning: ISO C90 forbids array whose size cannot be evaluated [-Wvla]
751 | RTE_SIZEOF_FIELD(struct rte_mempool_cache, objs[0]));
| ^~~~~~~~~~~~~~~~~
../lib/eal/include/rte_common.h:498:65: note: in definition of macro ‘RTE_BUILD_BUG_ON’
498 | #define RTE_BUILD_BUG_ON(condition) ((void)sizeof(char[1 - 2*!!(condition)]))
| ^~~~~~~~~
../lib/mempool/rte_mempool.c:751:26: note: in expansion of macro ‘RTE_SIZEOF_FIELD’
751 | RTE_SIZEOF_FIELD(struct rte_mempool_cache, objs[0]));
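One way to avoid that whole class of warning, assuming a C11 toolchain
can be required, would be to build RTE_BUILD_BUG_ON on _Static_assert
instead of the negative-size-array trick (sketch):

/* sketch: C11 replacement for the sizeof(char[1 - 2*!!(c)]) trick that
 * -Wvla is tripping over; same convention, failing the build when the
 * condition is true */
#define RTE_BUILD_BUG_ON(condition) \
	_Static_assert(!(condition), "build-time check failed: " #condition)

Unlike the expression form, this is only usable where a declaration is
allowed, but RTE_BUILD_BUG_ON is already used as a statement.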