
spapr_numa.c: fix ibm,max-associativity-domains calculation
author     Daniel Henrique Barboza <danielhb413@gmail.com>
           Thu, 28 Jan 2021 17:42:13 +0000 (14:42 -0300)
committer  David Gibson <david@gibson.dropbear.id.au>
           Tue, 9 Feb 2021 23:43:50 +0000 (10:43 +1100)
commit     b01fec3659f7e595d5066fc052fb31a94a8a969b
tree       cc5001ea7c31eba5e479aea81f1b04a4c2fa7ba7
parent     6640706972c50aac4f620d7385d4e228a118e289
spapr_numa.c: fix ibm,max-associativity-domains calculation

The current logic for calculating 'maxdomain' makes it a sum of
numa_state->num_nodes with spapr->gpu_numa_id. spapr->gpu_numa_id is
used as an index to determine the next available NUMA id that a
given NVGPU can use.

The problem is that the initial value of gpu_numa_id, for any topology
that has more than one NUMA node, is equal to numa_state->num_nodes,
so maxdomain will always be at least twice the number of existing NUMA
nodes. A guest with 4 NUMA nodes, for example, ends up with the
following max-associativity-domains:

rtas/ibm,max-associativity-domains
                 00000004 00000008 00000008 00000008 00000008
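
To see where the 00000008 entries come from: with 4 NUMA nodes,
gpu_numa_id also starts at 4, so maxdomain becomes 4 + 4 = 8. Below is
a minimal, self-contained illustration of that arithmetic; the variable
names only mirror the QEMU fields and this is not the actual
spapr_numa.c code:

    #include <stdio.h>

    int main(void)
    {
        /* Values for a guest with 4 NUMA nodes and no NVGPUs. */
        unsigned int num_nodes = 4;           /* ms->numa_state->num_nodes */
        unsigned int gpu_numa_id = num_nodes; /* initial value when there
                                                 is more than one node */

        /* Pre-fix calculation: the regular NUMA nodes are counted twice. */
        unsigned int maxdomain = num_nodes + gpu_numa_id;

        printf("maxdomain = %u\n", maxdomain); /* prints 8 instead of 4 */
        return 0;
    }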

This overtuning of maxdomain doesn't go unnoticed in the guest. SLUB
detects it during boot:

 dmesg | grep SLUB
[    0.000000] SLUB: HWalign=128, Order=0-3, MinObjects=0, CPUs=4, Nodes=8

SLUB is detecting 8 total nodes, with 4 nodes being online.

This patch fixes ibm,max-associativity-domains by considering the
number of NVGPU NUMA nodes actually presented to the guest, instead of
just spapr->gpu_numa_id.
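
A sketch of the corrected arithmetic, assuming (as described above)
that gpu_numa_id starts at the first NVGPU NUMA id, i.e.
numa_state->num_nodes, and is bumped once per NVGPU node handed out.
The helper name initial_nvgpu_numa_id() is illustrative, not the
actual spapr_numa.c code:

    #include <stdio.h>

    /* Illustrative only: the first NVGPU NUMA id equals the number of
     * regular NUMA nodes, per the description above.
     */
    static unsigned int initial_nvgpu_numa_id(unsigned int num_nodes)
    {
        return num_nodes;
    }

    int main(void)
    {
        unsigned int num_nodes = 4;           /* regular NUMA nodes       */
        unsigned int gpu_numa_id = num_nodes; /* no NVGPU node handed out */

        /* Fixed calculation: only count NVGPU nodes actually presented. */
        unsigned int nvgpu_nodes = gpu_numa_id -
                                   initial_nvgpu_numa_id(num_nodes);
        unsigned int maxdomain = num_nodes + nvgpu_nodes;

        printf("maxdomain = %u\n", maxdomain); /* prints 4 for this guest */
        return 0;
    }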

Reported-by: Cédric Le Goater <clg@kaod.org>
Tested-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20210128174213.1349181-4-danielhb413@gmail.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
hw/ppc/spapr_numa.c