2025-09-30  net/mlx5e: Prevent entering switchdev mode with inconsistent netns
Jianbo Liu  (1 file changed, -0/+33)

When a PF enters switchdev mode, its netdevice becomes the uplink representor but remains in its current network namespace. All other representors (VFs, SFs) are created in the netns of the devlink instance. If the PF's netns has been moved and differs from the devlink's netns, enabling switchdev mode would create a state where the OVS control plane (ovs-vsctl) cannot manage the switch, because the PF uplink representor and the other representors are split across different namespaces.

To prevent this inconsistent configuration, block the request to enter switchdev mode if the PF netdevice's netns does not match the netns of its devlink instance. As part of this change, the PF's netns is first marked as immutable. This prevents race conditions where the netns could be changed after the check is performed but before the mode transition is complete, and it aligns the PF's behavior with that of the final uplink representor.

Signed-off-by: Jianbo Liu <jianbol@nvidia.com>
Reviewed-by: Cosmin Ratiu <cratiu@nvidia.com>
Reviewed-by: Jiri Pirko <jiri@nvidia.com>
Reviewed-by: Dragos Tatulea <dtatulea@nvidia.com>
Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Link: https://patch.msgid.link/1759094723-843774-3-git-send-email-tariqt@nvidia.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
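A minimal sketch of the guard this describes, assuming it sits at the switchdev mode-change entry point; the function name and error message are illustrative, not the actual mlx5 code:

    #include <linux/netdevice.h>
    #include <linux/netlink.h>
    #include <net/devlink.h>
    #include <net/net_namespace.h>

    /* Hypothetical helper: refuse switchdev mode if the PF netdev's netns
     * differs from the netns of its devlink instance, then pin the netns.
     */
    static int esw_check_netns_consistent(struct devlink *devlink,
                                          struct net_device *pf_netdev,
                                          struct netlink_ext_ack *extack)
    {
            if (!net_eq(dev_net(pf_netdev), devlink_net(devlink))) {
                    NL_SET_ERR_MSG_MOD(extack,
                                       "PF netdev and devlink are in different network namespaces");
                    return -EINVAL;
            }
            /* Assumption: the "mark immutable" step uses the netdev
             * netns_immutable flag so the netns cannot change between the
             * check and the completed mode transition.
             */
            pf_netdev->netns_immutable = true;
            return 0;
    }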
2025-09-30  net/mlx5: HWS, Generalize complex matchers
Vlad Dogaru  (6 files changed, -1199/+836)

The existing solution for complex matchers splits the match parameters across two, and exactly two, matchers. For some rather extreme cases (e.g. IPv6-in-IPv6 tunnels), even two matchers are not enough. Generalize complex matchers to up to 4 submatchers, and allow easy extension to more if needed. This resulted in rewriting a large part of the high-level complex matcher logic, but the original concepts were rock solid and still hold.

Key characteristics of the new implementation:

* Rework complex matchers to include multiple submatchers. All submatchers but the first are isolated, in keeping with the existing paradigm of handing off to specialized matchers that are not otherwise reachable by regular rules.

* Similarly, rework complex rules to allow splitting them into more than two simple rules. Rules continue to be refcounted to allow for multiple complex rules matching on identical parts of the match params.

* Rely on the match tag, as opposed to the entire match_param, to hash subrules. This results in lower memory usage.

* Prefer to split the original user-supplied match parameters rather than the internal field descriptors. This avoids the awkward transition back and forth between the two formats.

* Allow splitting multi-dword fields across matchers. The only restrictions the new implementation imposes are: (a) any fragment of an IP address must be accompanied by a match on the IP version; and (b) a single lower dword of an IPv6 address cannot be present in a submatcher, as it would be interpreted as an IPv4 address.

* Employ a greedy algorithm to split the match params, as opposed to a complete search (see the sketch after this entry). The results are not optimal, but the algorithm is now linear rather than exponential. Consequently, complex matcher creation time drops by two orders of magnitude in our tests.

Signed-off-by: Vlad Dogaru <vdogaru@nvidia.com>
Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com>
Reviewed-by: Mark Bloch <mbloch@nvidia.com>
Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Link: https://patch.msgid.link/1759094723-843774-2-git-send-email-tariqt@nvidia.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
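For a feel of the greedy split mentioned above, here is a schematic sketch using invented data structures; the real HWS code operates on match parameters and definers, and additionally enforces the IP-version restrictions listed above:

    #define MAX_SUBMATCHERS 4

    struct match_field {
            unsigned int dwords;    /* space this field needs in a definer */
    };

    /* Greedy, single-pass assignment of fields to at most MAX_SUBMATCHERS
     * submatchers: keep packing into the current one until it overflows.
     * Linear in the number of fields, unlike a complete search. Assumes no
     * single field exceeds the capacity. Returns the number of submatchers
     * used, or -1 if even MAX_SUBMATCHERS are not enough.
     */
    static int split_fields_greedy(const struct match_field *fields, int n,
                                   unsigned int capacity, int *assignment)
    {
            unsigned int used = 0;
            int sub = 0, i;

            for (i = 0; i < n; i++) {
                    if (used + fields[i].dwords > capacity) {
                            if (++sub == MAX_SUBMATCHERS)
                                    return -1;
                            used = 0;
                    }
                    assignment[i] = sub;
                    used += fields[i].dwords;
            }
            return sub + 1;
    }

As the entry notes, this trades optimality of the split for linear-time matcher creation.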
2025-09-30  net/mlx5: Improve write-combining test reliability for ARM64 Grace CPUs
Patrisious Haddad  (1 file changed, -2/+26)

Write combining (WC) is an optimization feature in CPUs that is frequently used by modern devices to generate 32 or 64 byte TLPs at the PCIe level. These large TLPs allow certain optimizations in the driver-to-HW communication that improve performance. As WC is unpredictable and optional, HW designs all tolerate cases where combining doesn't happen and simply experience a performance degradation.

Unfortunately many virtualization environments, on all architectures, have done things that completely disable WC inside the VM, with no generic way to detect this. For example, WC was fully blocked in ARM64 KVM until commit 8c47ce3e1d2c ("KVM: arm64: Set io memory s2 pte as normalnc for vfio pci device"). Trying to use WC when it is known not to work has a measurable performance cost (~5%).

Long ago mlx5 developed a boot-time algorithm to test whether WC is available by using unique mlx5 HW features to measure how many large TLPs the device is receiving. The SW generates a large number of combining opportunities, and if any succeed then WC is declared working. In mlx5 the WC optimization feature is never used by the kernel except for this boot-time test; WC is only used by userspace, in rdma-core.

Sadly, modern ARM CPUs, especially NVIDIA Grace, have a combining implementation that is very unreliable compared to pretty much everything prior. This is being fixed architecturally in new CPUs with a new ST64B instruction, but current shipping devices suffer from this problem. Unreliable means the SW can present thousands of combining opportunities and the HW will not combine for any of them, which creates a performance degradation and, critically, fails the mlx5 boot test. However, the CPU is very sensitive to the instruction sequence used, with the better options being sufficiently good that the performance loss on the unreliable CPU is not measurable.

Broadly there are several options, from worst to best:

1) A C loop doing a u64 memcpy. This was used prior to commit ef302283ddfc ("IB/mlx5: Use __iowrite64_copy() for write combining stores") and failed almost all the time on Grace CPUs.

2) ARM64 assembly with consecutive 8 byte stores. This was implemented as an arch-generic __iowriteXX_copy() family of functions suitable for performance use in drivers for WC. Commit ead79118dae6 ("arm64/io: Provide a WC friendly __iowriteXX_copy()") provided the ARM implementation.

3) ARM64 assembly with consecutive 16 byte stores. This was rejected for kernel use over fears of virtualization failures: common ARM VMMs will crash if STP is used against emulated memory.

4) A single NEON store instruction. Userspace has used this option for a very long time; it performs well.

5) For future silicon, the new ST64B instruction is guaranteed to generate a 64 byte TLP 100% of the time.

The past upgrade from #1 to #2 was thought to be sufficient to solve this problem. However, more testing on more systems shows that #2 is still problematic at a low frequency, and the kernel test fails. Thus, make mlx5 use the same instructions as userspace during the boot-time WC self-test. This way the WC test matches userspace and will properly detect the ability of the HW to support the WC workload that userspace will generate.

While #4 still has imperfect combining performance, it is substantially better than #2 and does actually give a performance win to applications. Self-test failures with #2 occur on roughly 3 out of 10 boots on some systems; #4 has never been seen to fail a boot.

There is no real general use case for a NEON-based WC flow in the kernel. It is not suitable for any performance-path work, as getting into/out of a NEON context is fairly expensive compared to the gain of WC. Future CPUs are going to fix this issue with a new ARM instruction, and __iowriteXX_copy() will be updated to use it automatically, probably via the ALTERNATIVES mechanism. Since this problem is constrained to mlx5's unique situation of needing a non-performance code path to duplicate what mlx5 userspace is doing as a matter of self-testing, implement it as a one-line inline assembly in the driver directly.

Lastly, this approach was concluded from the discussion with the ARM maintainers, which confirms that it is the best available solution: https://lore.kernel.org/r/aHqN_hpJl84T1Usi@arm.com

Signed-off-by: Patrisious Haddad <phaddad@nvidia.com>
Reviewed-by: Michael Guralnik <michaelgur@nvidia.com>
Reviewed-by: Moshe Shemesh <moshe@nvidia.com>
Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Link: https://patch.msgid.link/1759093688-841357-1-git-send-email-tariqt@nvidia.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
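Option #4 amounts to one 128-bit NEON store per 16-byte chunk. A hedged sketch of the instruction pattern (illustrative only, not the exact mlx5 code; in-kernel NEON use additionally requires kernel_neon_begin()/kernel_neon_end(), which is acceptable here because the self-test is not a performance path):

    #include <stdint.h>

    /* Copy 64 bytes to write-combining memory with NEON q-register stores,
     * one 16-byte store per instruction, maximizing the chance the CPU
     * merges them into a single 64-byte PCIe TLP.
     */
    static inline void wc_copy_64b_neon(volatile void *dst, const void *src)
    {
    #if defined(__aarch64__)
            const uint8_t *s = src;
            volatile uint8_t *d = dst;
            int i;

            for (i = 0; i < 64; i += 16)
                    asm volatile("ldr q0, [%0]\n\t"
                                 "str q0, [%1]"
                                 : : "r"(s + i), "r"(d + i)
                                 : "v0", "memory");
    #else
            (void)dst;
            (void)src;      /* non-arm64: not applicable to this sketch */
    #endif
    }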
2025-09-30  selftests/net: add tcp_port_share to .gitignore
Gopi Krishna Menon  (1 file changed, -0/+1)

Add the tcp_port_share test binary to .gitignore to avoid accidentally staging the build artifact.

Signed-off-by: Gopi Krishna Menon <krishnagopi487@gmail.com>
Reviewed-by: Simon Horman <horms@kernel.org>
Link: https://patch.msgid.link/20250929163140.122383-1-krishnagopi487@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2025-09-30  Revert "net/mlx5e: Update and set Xon/Xoff upon MTU set"
Jakub Kicinski  (2 files changed, -28/+1)

This reverts commit ceddedc969f0532b7c62ca971ee50d519d2bc0cb.

The commit in question breaks the mapping of PGs to pools for some SKUs. Specifically, multi-host NICs seem to be shipped with a custom buffer configuration which maps the lossy PG to pool 4, but the bad commit overrides this with pool 0, which does not have sufficient buffer space reserved, resulting in ~40% packet loss. The commit also breaks the BMC / OOB connection completely (100% packet loss).

Revert, similarly to commit 3fbfe251cc9f ("Revert "net/mlx5e: Update and set Xon/Xoff upon port speed set""). The breakage is exactly the same; the only difference is that the quoted commit would break the NIC immediately on boot, while the currently reverted commit breaks it only when the MTU is changed.

Note: "good" kernels do not restore the configuration, so a downgrade isn't enough to recover machines. A NIC power cycle seems to be necessary to return to a healthy state (or overriding the relevant registers using a custom patch).

Fixes: ceddedc969f0 ("net/mlx5e: Update and set Xon/Xoff upon MTU set")
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Link: https://patch.msgid.link/20250929181529.1848157-1-kuba@kernel.org
Signed-off-by: Paolo Abeni <pabeni@redhat.com>

2025-09-30  net: add NUMA awareness to skb_attempt_defer_free()
Eric Dumazet  (5 files changed, -22/+37)

Instead of sharing sd->defer_list & sd->defer_count with many cpus, add one pair for each NUMA node.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: Jason Xing <kerneljasonxing@gmail.com>
Reviewed-by: Kuniyuki Iwashima <kuniyu@google.com>
Link: https://patch.msgid.link/20250928084934.3266948-4-edumazet@google.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>

2025-09-30  net: use llist for sd->defer_list
Eric Dumazet  (3 files changed, -24/+17)

Get rid of sd->defer_lock and adopt llist operations. We optimize skb_attempt_defer_free() for the common case, where the packet is queued. Otherwise sd->defer_count keeps increasing until skb_defer_free_flush() clears it.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: Jason Xing <kerneljasonxing@gmail.com>
Reviewed-by: Kuniyuki Iwashima <kuniyu@google.com>
Link: https://patch.msgid.link/20250928084934.3266948-3-edumazet@google.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>

2025-09-30  net: make softnet_data.defer_count an atomic
Eric Dumazet  (3 files changed, -6/+4)

This is preparation work to remove the softnet_data.defer_lock, as it is contended on hosts with a large number of cores.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: Jason Xing <kerneljasonxing@gmail.com>
Reviewed-by: Kuniyuki Iwashima <kuniyu@google.com>
Link: https://patch.msgid.link/20250928084934.3266948-2-edumazet@google.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
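Taken together, the three softnet_data patches above converge on a shape roughly like the following sketch; field and function names here are illustrative, not the exact kernel ones:

    #include <linux/llist.h>
    #include <linux/atomic.h>
    #include <linux/skbuff.h>

    /* One lock-free defer list and counter per NUMA node, replacing the
     * single defer_lock-protected pair in each CPU's softnet_data.
     */
    struct skb_defer_node {
            struct llist_head defer_list;
            atomic_long_t     defer_count;
    } ____cacheline_aligned_in_smp;

    static void defer_free_enqueue(struct skb_defer_node *dn,
                                   struct sk_buff *skb)
    {
            /* Common case: a single lock-free push, no spinlock taken.
             * Assumes an llist_node embedded in sk_buff for this purpose.
             */
            llist_add(&skb->ll_node, &dn->defer_list);
            atomic_long_inc(&dn->defer_count);
    }

    static void defer_free_flush(struct skb_defer_node *dn)
    {
            struct llist_node *list = llist_del_all(&dn->defer_list);
            struct sk_buff *skb, *next;

            llist_for_each_entry_safe(skb, next, list, ll_node)
                    kfree_skb(skb); /* real code frees via the napi skb path */
            atomic_long_set(&dn->defer_count, 0);
    }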
2025-09-30  selftests: drv-net: psp: add tests for destroying devices
Jakub Kicinski  (5 files changed, -3/+68)

Add tests for making sure a device can disappear while associations exist. This is netdevsim-only, since destroying real devices is more tricky.

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Daniel Zahka <daniel.zahka@gmail.com>
Link: https://patch.msgid.link/20250927225420.1443468-9-kuba@kernel.org
Reviewed-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>

2025-09-30  selftests: drv-net: psp: add test for auto-adjusting TCP MSS
Jakub Kicinski  (1 file changed, -0/+52)

Test that the TCP MSS gets auto-adjusted. PSP adds an encapsulation overhead of 40B per packet when used in transport mode without any virtualization cookie or other optional PSP header fields. The kernel should adjust the MSS for a connection after the PSP tx state is reached.

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Daniel Zahka <daniel.zahka@gmail.com>
Link: https://patch.msgid.link/20250927225420.1443468-8-kuba@kernel.org
Reviewed-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>

2025-09-30  selftests: drv-net: psp: add connection breaking tests
Jakub Kicinski  (1 file changed, -1/+91)

Add tests checking conditions which lead to connections breaking: using a bad key, or the connection getting stuck after the device key is rotated twice.

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Daniel Zahka <daniel.zahka@gmail.com>
Link: https://patch.msgid.link/20250927225420.1443468-7-kuba@kernel.org
Reviewed-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>

2025-09-30  selftests: drv-net: psp: add association tests
Jakub Kicinski  (4 files changed, -4/+167)

Add tests exercising PSP associations for TCP sockets.

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Daniel Zahka <daniel.zahka@gmail.com>
Link: https://patch.msgid.link/20250927225420.1443468-6-kuba@kernel.org
Reviewed-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>

2025-09-30  selftests: drv-net: psp: add basic data transfer and key rotation tests
Jakub Kicinski  (1 file changed, -3/+191)

Add basic tests for sending data over PSP and making sure that key rotation toggles the MSB of the SPI. Deploy the PSP responder on the remote end. We also need a healthy dose of common helpers for setting up connections, assertions, and interrogating socket state on the Python side.

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Daniel Zahka <daniel.zahka@gmail.com>
Link: https://patch.msgid.link/20250927225420.1443468-5-kuba@kernel.org
Reviewed-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>

2025-09-30  selftests: drv-net: add PSP responder
Jakub Kicinski  (3 files changed, -0/+493)

PSP tests need the remote system to support PSP, and some PSP-capable application to exchange data with. Create a simple PSP responder app which we can build and deploy to the remote host. The tests themselves can be written in Python, but for ease of deployment the responder is in C (using C YNL).

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Daniel Zahka <daniel.zahka@gmail.com>
Link: https://patch.msgid.link/20250927225420.1443468-4-kuba@kernel.org
Reviewed-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>

2025-09-30  selftests: drv-net: base device access API test
Jakub Kicinski  (7 files changed, -3/+93)

A simple PSP test for getting info about PSP devices.

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Daniel Zahka <daniel.zahka@gmail.com>
Link: https://patch.msgid.link/20250927225420.1443468-3-kuba@kernel.org
Reviewed-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>

2025-09-30  netdevsim: a basic test PSP implementation
Jakub Kicinski  (5 files changed, -6/+294)

Provide a PSP implementation for netdevsim. Use psp_dev_encapsulate() and psp_dev_rcv() to do actual encapsulation and decapsulation on skbs, but perform no encryption or decryption. In order to make encryption with a bad key result in a drop on the peer's rx side, we stash our psd's generation number in the first byte of each key before handing it to the peer.

Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Co-developed-by: Daniel Zahka <daniel.zahka@gmail.com>
Signed-off-by: Daniel Zahka <daniel.zahka@gmail.com>
Link: https://patch.msgid.link/20250927225420.1443468-2-kuba@kernel.org
Reviewed-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>

2025-09-30  net: sfp: improve poll interval handling
Heiner Kallweit  (1 file changed, -18/+11)

The poll interval is a fixed value, so we don't need a static variable for it. The change also allows using the standard module_platform_driver macro, avoiding some boilerplate code.

Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Link: https://patch.msgid.link/b8079f96-6865-431c-a908-a0b9e9bd5379@gmail.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>

2025-09-30  net: sfp: don't include swphy.h
Heiner Kallweit  (1 file changed, -1/+0)

Nothing from swphy.h is used here, so don't include it.

Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Link: https://patch.msgid.link/19921899-f0a8-4752-a897-1b6d62ade6eb@gmail.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>

2025-09-30  net: phy: annotate linkmode initializers as not used after init phase
Heiner Kallweit  (3 files changed, -10/+10)

Code and data used only from phy_init() can be annotated accordingly.

Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Link: https://patch.msgid.link/5fb9c41b-bf44-4915-a3c3-f20952fce6de@gmail.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>

2025-09-30  net: phy: stop exporting phy_driver_unregister
Heiner Kallweit  (2 files changed, -7/+5)

After 42e2a9e11a1d ("net: phy: dp83640: improve phydev and driver removal handling") we can also stop exporting phy_driver_unregister().

Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Link: https://patch.msgid.link/2bab950e-4b70-4030-b997-03f48379586f@gmail.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>

2025-09-30  page_pool: Clamp pool size to max 16K pages
Dragos Tatulea  (1 file changed, -5/+1)

page_pool_init() returns E2BIG when the page_pool size goes above 32K pages. As some drivers configure the page_pool size according to the MTU and ring size, there are cases where this limit is exceeded and queue creation fails.

The page_pool size doesn't have to cover a full queue, especially for larger ring sizes. So clamp the size instead of returning an error, and do this in the core to avoid having each driver do the clamping. The current limit was deemed too high [1], so it was reduced to 16K to avoid page waste.

[1] https://lore.kernel.org/all/1758532715-820422-3-git-send-email-tariqt@nvidia.com/

Signed-off-by: Dragos Tatulea <dtatulea@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Link: https://patch.msgid.link/20250926131605.2276734-2-dtatulea@nvidia.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
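The clamp itself is essentially a one-liner at pool-init time; a minimal sketch, assuming a 16K-page cap (the constant name is invented here):

    #define PP_MAX_POOL_SIZE (16 * 1024)   /* pages; illustrative name */

    /* Sketch: clamp the requested pool size in the core instead of
     * returning -E2BIG, so drivers sizing pools from MTU and ring size
     * no longer see queue creation fail on large configurations.
     */
    static unsigned int pp_clamp_pool_size(unsigned int requested)
    {
            return requested > PP_MAX_POOL_SIZE ? PP_MAX_POOL_SIZE
                                                : requested;
    }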
2025-09-30  tipc: adjust tipc_nodeid2string() to return string length
Dmitry Antipov  (3 files changed, -10/+7)

Since the value returned by 'tipc_nodeid2string()' is not used, the function may be adjusted to return the length of the result, which makes it possible to drop a few calls to 'strlen()' in 'tipc_link_create()' and 'tipc_link_bc_create()'. Compile tested only.

Signed-off-by: Dmitry Antipov <dmantipov@yandex.ru>
Reviewed-by: Simon Horman <horms@kernel.org>
Link: https://patch.msgid.link/20250926074113.914399-1-dmantipov@yandex.ru
Signed-off-by: Paolo Abeni <pabeni@redhat.com>

2025-09-30  6pack: drop redundant locking and refcounting
Qingfang Deng  (1 file changed, -52/+5)

The TTY layer already serializes line discipline operations with tty->ldisc_sem, so the extra disc_data_lock and refcnt in 6pack are unnecessary. Removing them simplifies the code and also resolves a lockdep warning reported by syzbot. The warning did not indicate a real deadlock, since the write-side lock was only taken in process context with hardirqs disabled.

Reported-by: syzbot+5fd749c74105b0e1b302@syzkaller.appspotmail.com
Closes: https://lore.kernel.org/all/68c858b0.050a0220.3c6139.0d1c.GAE@google.com/
Signed-off-by: Qingfang Deng <dqfext@gmail.com>
Reviewed-by: Dan Carpenter <dan.carpenter@linaro.org>
Link: https://patch.msgid.link/20250925051059.26876-1-dqfext@gmail.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>

2025-09-30  Documentation: net: add flow control guide and document ethtool API
Oleksij Rempel  (8 files changed, -15/+453)

Introduce a new document, flow_control.rst, to provide a comprehensive guide on Ethernet flow control in Linux. The guide explains how flow control works, how autonegotiation resolves pause capabilities, and how to configure it using ethtool and Netlink.

In parallel, document the pause and pause-stat attributes in the ethtool.yaml netlink spec. This enables the ynl tool to generate kernel-doc comments for the corresponding enums in the UAPI header, making the C interface self-documenting.

Finally, replace the legacy flow control section in phy.rst with a reference to the new document, and add pointers in the relevant C source files.

Signed-off-by: Oleksij Rempel <o.rempel@pengutronix.de>
Link: https://patch.msgid.link/20250924120241.724850-1-o.rempel@pengutronix.de
Signed-off-by: Paolo Abeni <pabeni@redhat.com>

2025-09-29  dpll: zl3073x: Allow to configure phase offset averaging factor
Ivan Vecera  (4 files changed, -6/+107)

The DPLL phase measurement block uses an exponential moving average with a configurable averaging factor. Measurements are taken at approximately 40 Hz or at the reference frequency, whichever is lower. Currently, factor=2 is used to prioritize a fast response to dynamic phase changes. For applications needing a stable, precise average phase offset where rapid changes are unlikely, a higher factor is recommended. Implement the .phase_offset_avg_factor_get/set callbacks to allow a user to adjust this factor.

Signed-off-by: Ivan Vecera <ivecera@redhat.com>
Reviewed-by: Vadim Fedorenko <vadim.fedorenko@linux.dev>
Link: https://patch.msgid.link/20250927084912.2343597-4-ivecera@redhat.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2025-09-29  dpll: add phase_offset_avg_factor_get/set callback ops
Ivan Vecera  (2 files changed, -7/+65)

Add new callback operations for a dpll device:
- phase_offset_avg_factor_get(...) - to obtain the current phase offset averaging factor from the dpll device,
- phase_offset_avg_factor_set(...) - to set the phase offset averaging factor.

Obtain the factor value using the get callback and provide it to the user if the device driver implements this callback. Execute the set callback upon user request, if the driver implements it.

Signed-off-by: Ivan Vecera <ivecera@redhat.com>

v2:
* do not require 'set' callback to retrieve current value
* always call 'set' callback regardless of current value

Reviewed-by: Vadim Fedorenko <vadim.fedorenko@linux.dev>
Link: https://patch.msgid.link/20250927084912.2343597-3-ivecera@redhat.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
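For intuition on the two dpll entries above: an exponential moving average with factor f typically weights each new ~40 Hz sample by 1/2^f, so factor=2 tracks dynamic phase changes quickly while a larger factor yields a smoother, more precise long-run offset. A hedged sketch of the arithmetic and the callback wiring (the EMA formula is an assumption about the hardware's behavior, and the zl3073x_* implementation names are invented; only the ops names come from the patch):

    #include <stdint.h>

    /* EMA with averaging factor f: avg += (sample - avg) / 2^f.
     * Relies on arithmetic right shift of negative values, as the
     * kernel does.
     */
    static int64_t phase_offset_ema(int64_t avg, int64_t sample,
                                    unsigned int f)
    {
            return avg + ((sample - avg) >> f);
    }

    /* Wiring in the driver's dpll_device_ops (shown as a comment since
     * the exact callback signatures are not spelled out here):
     *
     *   static const struct dpll_device_ops zl3073x_dpll_device_ops = {
     *           ...
     *           .phase_offset_avg_factor_get = zl3073x_dpll_avg_factor_get,
     *           .phase_offset_avg_factor_set = zl3073x_dpll_avg_factor_set,
     *   };
     */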
2025-09-29  dpll: add phase-offset-avg-factor device attribute to netlink spec
Ivan Vecera  (4 files changed, -3/+27)

Add the dpll device level attribute DPLL_A_PHASE_OFFSET_AVG_FACTOR to allow control over the calculation of the reported phase offset value. The attribute is present if the driver provides this capability; otherwise the attribute shall not be present.

Signed-off-by: Ivan Vecera <ivecera@redhat.com>
Reviewed-by: Vadim Fedorenko <vadim.fedorenko@linux.dev>
Link: https://patch.msgid.link/20250927084912.2343597-2-ivecera@redhat.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>

2025-09-29  net: stmmac: remove stmmac_hw_setup() excess documentation parameter
Russell King (Oracle)  (1 file changed, -1/+0)

The kernel build bot reports:

    Warning: drivers/net/ethernet/stmicro/stmmac/stmmac_main.c:3438 Excess function parameter 'ptp_register' description in 'stmmac_hw_setup'

Fix it.

Reported-by: kernel test robot <lkp@intel.com>
Fixes: 98d8ea566b85 ("net: stmmac: move timestamping/ptp init to stmmac_hw_setup() caller")
Closes: https://lore.kernel.org/oe-kbuild-all/202509290927.svDd6xuw-lkp@intel.com/
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Link: https://patch.msgid.link/E1v38Y7-00000008UCQ-3w27@rmk-PC.armlinux.org.uk
Signed-off-by: Jakub Kicinski <kuba@kernel.org>