* Problem with rte_hash_hash when sharing the list between processes
@ 2025-02-15 14:06 Felipe Pereira
From: Felipe Pereira @ 2025-02-15 14:06 UTC (permalink / raw)
To: users
Hello,
This is my first email to a mailing list; if I did something wrong, please
tell me so I won't do it again.
I'm having a problem with a hash table shared between processes.
I have the following code to create the hash table on the primary
process and to look up the existing table from the secondary process:
static void
init_connections_hash(struct rte_hash **connections_hash, int max_connections)
{
    struct rte_hash_parameters hash_params = {
        .name = HASH_NAME,
        .entries = max_connections,
        .key_len = sizeof(struct connection),
        .hash_func = rte_hash_crc,
        .hash_func_init_val = 0,
        .socket_id = SOCKET_ID_ANY,
        .extra_flag = RTE_HASH_EXTRA_FLAGS_MULTI_WRITER_ADD |
                      RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY_LF
    };

    *connections_hash = rte_hash_create(&hash_params);
    if (*connections_hash == NULL) {
        rte_exit(EXIT_FAILURE,
                 "Failed to create TCP connections hash table\n");
    }
}
static void
get_existing_connections_hash(struct rte_hash **connections_hash)
{
    *connections_hash = rte_hash_find_existing(HASH_NAME);
    if (*connections_hash == NULL) {
        rte_exit(EXIT_FAILURE,
                 "Failed to get existing TCP connections hash table\n");
    }
}
And in main, when initializing the process:

if (proc_type == RTE_PROC_PRIMARY) {
    init_connections_hash(&connections_hash, MAX_CONNECTIONS_IN_LIST);
} else {
    get_existing_connections_hash(&connections_hash);
}
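(For context, proc_type here comes from the EAL; a minimal sketch of the
initialization, assuming the usual sequence:)

#include <rte_eal.h>
#include <rte_debug.h>

/* After rte_eal_init(), the EAL knows whether this instance created
 * the hugepage memory configuration (primary) or attached to an
 * existing one (secondary). */
if (rte_eal_init(argc, argv) < 0)
    rte_exit(EXIT_FAILURE, "Failed to initialize EAL\n");

enum rte_proc_type_t proc_type = rte_eal_process_type();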
What I think is happening is that when I create the table with
.hash_func = rte_hash_crc (I also tested with rte_jhash), it stores the
memory address of the function, which is valid in the primary process
but is not shared with the secondary process.
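(One quick way to check that hypothesis is to print the function's
address in both the primary and the secondary process; with a PIE binary
and ASLR enabled they will generally differ, so a pointer stored by one
process is meaningless in the other. A hypothetical diagnostic:)

#include <stdio.h>
#include <stdint.h>
#include <rte_hash_crc.h>

/* If the two processes print different addresses, the hash_func
 * pointer written by the primary cannot be called by the secondary. */
printf("rte_hash_crc @ %p\n", (void *)(uintptr_t)rte_hash_crc);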
So when I call rte_hash_lookup_data, it calls rte_hash_hash(h, key), and
for reference the function is:
hash_sig_t
rte_hash_hash(const struct rte_hash *h, const void *key)
{
    /* calc hash result by key */
    return h->hash_func(key, h->key_len, h->hash_func_init_val);
}
But h->hash_func holds an address that is valid only in the primary
process, and when it gets called in the secondary process I get a
segmentation fault.
I tried overwriting h->hash_func with rte_hash_crc in the secondary
process, and for my application it worked like a charm, so I really
think the problem is the function's address not being valid in the
secondary process.
This only happens with that function pointer. I added some items in the
primary process and iterated over the table in the secondary with
rte_hash_iterate, and I could read everything without any problem, so
the hash table itself was shared correctly between the processes.
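(Roughly what that test looks like; a minimal sketch, where
some_data_ptr stands in for whatever shared-memory value the
application actually stores:)

/* Primary side: insert an entry. */
struct connection conn = { 0 /* ... key fields ... */ };
rte_hash_add_key_data(connections_hash, &conn, some_data_ptr);

/* Secondary side: walk the table. rte_hash_iterate() never calls
 * h->hash_func, which is why it works fine across processes. */
const void *key = NULL;
void *data = NULL;
uint32_t iter = 0;
while (rte_hash_iterate(connections_hash, &key, &data, &iter) >= 0) {
    const struct connection *c = key;
    /* ... read c and data ... */
}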
The solution I found was to use rte_hash_lookup_with_hash_data instead
of rte_hash_lookup_data. It's working the way I intended, but then it
would be pointless to init the hash with .hash_func = somehashfunc when
using multiple processes.
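(A minimal sketch of that workaround: the caller computes the signature
itself, using the same function and init value that were passed at
creation time, so the stored h->hash_func pointer is never
dereferenced.)

struct connection key = { 0 /* ... fill in the lookup key ... */ };
void *data = NULL;

/* Same function (rte_hash_crc) and init value (0) as in hash_params,
 * but resolved in *this* process rather than via the stored pointer. */
hash_sig_t sig = rte_hash_crc(&key, sizeof(key), 0);

if (rte_hash_lookup_with_hash_data(connections_hash, &key, sig, &data) >= 0) {
    /* data is the value stored for this connection */
}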
So I would like to know whether I did something wrong when creating or
using the table, whether this is a bug in the library, or whether this
workaround is really the intended way to do lookups when using multiple
processes.
Thanks
* RE: Problem with rte_hash_hash when sharing the list between processes
From: Pathak, Pravin @ 2025-02-15 15:14 UTC (permalink / raw)
To: Felipe Pereira, users
Please refer to https://doc.dpdk.org/guides-1.8/prog_guide/multi_proc_support.html
You may have to disable ASLR to get the same address in the secondary process.
Quoting that guide:

"There are a number of limitations to what can be done when running DPDK multi-process applications. Some of these are documented below:

The multi-process feature requires that the exact same hugepage memory mappings be present in all applications. The Linux security feature - Address-Space Layout Randomization (ASLR) can interfere with this mapping, so it may be necessary to disable this feature in order to reliably run multi-process applications."
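(For example, ASLR can be disabled system-wide by running
"sysctl -w kernel.randomize_va_space=0" as root. Note that the same
guide also lists the use of function pointers between processes built
from different binaries as unsupported, and says this prevents
librte_hash from behaving properly in a multi-process instance, since
it keeps a pointer to the hash function internally. That matches what
you are seeing; computing the signature in each process, as you did
with rte_hash_lookup_with_hash_data, avoids the stored pointer
entirely.)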
> [original message quoted in full; trimmed]