From: Ferruh Yigit <ferruh.yigit@intel.com>
To: Thomas Monjalon <thomas@monjalon.net>
Cc: Michael Baum <michaelba@nvidia.com>, <dev@dpdk.org>,
Matan Azrad <matan@nvidia.com>,
Raslan Darawsheh <rasland@nvidia.com>,
"Viacheslav Ovsiienko" <viacheslavo@nvidia.com>,
David Marchand <david.marchand@redhat.com>,
Ray Kinsella <mdr@ashroe.eu>
Subject: Re: [PATCH v3 1/6] common/mlx5: consider local functions as internal
Date: Fri, 25 Feb 2022 19:13:48 +0000 [thread overview]
Message-ID: <d51b7634-b69c-58ae-6b0f-01c9b1e766e3@intel.com> (raw)
In-Reply-To: <3402514.V25eIC5XRa@thomas>
On 2/25/2022 6:38 PM, Thomas Monjalon wrote:
> 25/02/2022 19:01, Ferruh Yigit:
>> On 2/24/2022 11:25 PM, Michael Baum wrote:
>>> The functions which are not explicitly marked as internal
>>> were exported because the local catch-all rule was missing in the
>>> version script.
>>> After adding the missing rule, all local functions are hidden.
>>> The function mlx5_get_device_guid is used in another library,
>>> so it needs to be exported (as internal).
>>>
>>> Because the local functions were exported as non-internal
>>> in DPDK 21.11, any change in these functions would break the ABI.
>>> An ABI exception is added for this library, considering that all
>>> functions are either local or internal.
>>>
>>
>> When a function is not listed explicitly in .map file, it shouldn't
>> be exported at all.
>
> It seems we need local:* to achieve this behaviour.
> A few other libs are missing it. I plan to send a patch for them.
>
+1 for this patch, thanks.
>> So I am not sure this exception is required; did you get a
>> warning from the tool, or is this theoretical?
>
> It is not theoretical; you can check with objdump:
> objdump -T build/lib/librte_common_mlx5.so | sed -rn 's,^[[:xdigit:]]* g *(D[^0]*)[^ ]* *,\1,p'
>
> I did not check the ABI tool without the exception.
>
Yes, the tool complains with the change [1]; I will proceed with the original patch.
[1]
29 Removed functions:
[D] 'function int mlx5_auxiliary_get_pci_str(const rte_auxiliary_device*, char*, size_t)' {mlx5_auxiliary_get_pci_str}
[D] 'function void mlx5_common_auxiliary_init()' {mlx5_common_auxiliary_init}
[D] 'function int mlx5_common_dev_dma_map(rte_device*, void*, uint64_t, size_t)' {mlx5_common_dev_dma_map}
[D] 'function int mlx5_common_dev_dma_unmap(rte_device*, void*, uint64_t, size_t)' {mlx5_common_dev_dma_unmap}
[D] 'function int mlx5_common_dev_probe(rte_device*)' {mlx5_common_dev_probe}
[D] 'function int mlx5_common_dev_remove(rte_device*)' {mlx5_common_dev_remove}
[D] 'function void mlx5_common_driver_on_register_pci(mlx5_class_driver*)' {mlx5_common_driver_on_register_pci}
[D] 'function void mlx5_common_pci_init()' {mlx5_common_pci_init}
[D] 'function mlx5_mr* mlx5_create_mr_ext(void*, uintptr_t, size_t, int, mlx5_reg_mr_t)' {mlx5_create_mr_ext}
[D] 'function bool mlx5_dev_pci_match(const mlx5_class_driver*, const rte_device*)' {mlx5_dev_pci_match}
[D] 'function int mlx5_dev_to_pci_str(const rte_device*, char*, size_t)' {mlx5_dev_to_pci_str}
[D] 'function void mlx5_free_mr_by_addr(mlx5_mr_share_cache*, const char*, void*, size_t)' {mlx5_free_mr_by_addr}
[D] 'function ibv_device* mlx5_get_aux_ibv_device(const rte_auxiliary_device*)' {mlx5_get_aux_ibv_device}
[D] 'function void mlx5_glue_constructor()' {mlx5_glue_constructor}
[D] 'function void mlx5_malloc_mem_select(uint32_t)' {mlx5_malloc_mem_select}
[D] 'function void mlx5_mr_btree_dump(mlx5_mr_btree*)' {mlx5_mr_btree_dump}
[D] 'function int mlx5_mr_create_cache(mlx5_mr_share_cache*, int)' {mlx5_mr_create_cache}
[D] 'function void mlx5_mr_free(mlx5_mr*, mlx5_dereg_mr_t)' {mlx5_mr_free}
[D] 'function int mlx5_mr_insert_cache(mlx5_mr_share_cache*, mlx5_mr*)' {mlx5_mr_insert_cache}
[D] 'function mlx5_mr* mlx5_mr_lookup_list(mlx5_mr_share_cache*, mr_cache_entry*, uintptr_t)' {mlx5_mr_lookup_list}
[D] 'function void mlx5_mr_rebuild_cache(mlx5_mr_share_cache*)' {mlx5_mr_rebuild_cache}
[D] 'function void mlx5_mr_release_cache(mlx5_mr_share_cache*)' {mlx5_mr_release_cache}
[D] 'function int mlx5_nl_devlink_family_id_get(int)' {mlx5_nl_devlink_family_id_get}
[D] 'function int mlx5_nl_enable_roce_get(int, int, const char*, int*)' {mlx5_nl_enable_roce_get}
[D] 'function int mlx5_nl_enable_roce_set(int, int, const char*, int)' {mlx5_nl_enable_roce_set}
[D] 'function int mlx5_os_open_device(mlx5_common_device*, uint32_t)' {mlx5_os_open_device}
[D] 'function int mlx5_os_pd_create(mlx5_common_device*)' {mlx5_os_pd_create}
[D] 'function void mlx5_os_set_reg_mr_cb(mlx5_reg_mr_t*, mlx5_dereg_mr_t*)' {mlx5_os_set_reg_mr_cb}
[D] 'function void mlx5_set_context_attr(rte_device*, ibv_context*)' {mlx5_set_context_attr}
2 Removed variables:
[D] 'uint32_t atomic_sn' {atomic_sn}
[D] 'int mlx5_common_logtype' {mlx5_common_logtype}
1 Removed function symbol not referenced by debug info:
[D] mlx5_mr_dump_cache
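The ABI exception aside, the fix itself amounts to adding the missing catch-all to the library's version script. A sketch of the shape (the real drivers/common/mlx5/version.map lists many more internal symbols; only mlx5_get_device_guid is taken from the commit message above):

```
INTERNAL {
	global:

	mlx5_get_device_guid;
	/* ... the other symbols exported for use by sibling mlx5 libraries ... */

	local: *;
};
```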
Thread overview: 26+ messages
2022-02-22 21:04 [PATCH 0/6] mlx5: external RxQ support Michael Baum
2022-02-22 21:04 ` [PATCH 1/6] common/mlx5: glue device and PD importation Michael Baum
2022-02-22 21:04 ` [PATCH 2/6] common/mlx5: add remote PD and CTX support Michael Baum
2022-02-22 21:04 ` [PATCH 3/6] net/mlx5: optimize RxQ/TxQ control structure Michael Baum
2022-02-22 21:04 ` [PATCH 4/6] net/mlx5: add external RxQ mapping API Michael Baum
2022-02-22 21:04 ` [PATCH 5/6] net/mlx5: support queue/RSS action for external RxQ Michael Baum
2022-02-22 21:04 ` [PATCH 6/6] app/testpmd: add test " Michael Baum
2022-02-23 18:48 ` [PATCH v2 0/6] mlx5: external RxQ support Michael Baum
2022-02-23 18:48 ` [PATCH v2 1/6] common/mlx5: consider local functions as internal Michael Baum
2022-02-23 18:48 ` [PATCH v2 2/6] common/mlx5: glue device and PD importation Michael Baum
2022-02-23 18:48 ` [PATCH v2 3/6] common/mlx5: add remote PD and CTX support Michael Baum
2022-02-23 18:48 ` [PATCH v2 4/6] net/mlx5: optimize RxQ/TxQ control structure Michael Baum
2022-02-23 18:48 ` [PATCH v2 5/6] net/mlx5: add external RxQ mapping API Michael Baum
2022-02-23 18:48 ` [PATCH v2 6/6] net/mlx5: support queue/RSS action for external RxQ Michael Baum
2022-02-24 8:38 ` [PATCH v2 0/6] mlx5: external RxQ support Matan Azrad
2022-02-24 23:25 ` [PATCH v3 " Michael Baum
2022-02-24 23:25 ` [PATCH v3 1/6] common/mlx5: consider local functions as internal Michael Baum
2022-02-25 18:01 ` Ferruh Yigit
2022-02-25 18:38 ` Thomas Monjalon
2022-02-25 19:13 ` Ferruh Yigit [this message]
2022-02-24 23:25 ` [PATCH v3 2/6] common/mlx5: glue device and PD importation Michael Baum
2022-02-24 23:25 ` [PATCH v3 3/6] common/mlx5: add remote PD and CTX support Michael Baum
2022-02-24 23:25 ` [PATCH v3 4/6] net/mlx5: optimize RxQ/TxQ control structure Michael Baum
2022-02-24 23:25 ` [PATCH v3 5/6] net/mlx5: add external RxQ mapping API Michael Baum
2022-02-24 23:25 ` [PATCH v3 6/6] net/mlx5: support queue/RSS action for external RxQ Michael Baum
2022-02-25 17:39 ` [PATCH v3 0/6] mlx5: external RxQ support Thomas Monjalon