From 614c9466a238641480332b707a7a20a3593bdfb7 Mon Sep 17 00:00:00 2001
From: "Richard W.M. Jones"
Date: Mon, 9 Oct 2023 13:48:25 +0100
Subject: [PATCH] target/riscv: Use env_archcpu for better performance
MIME-Version: 1.0
Content-Type: text/plain; charset=utf8
Content-Transfer-Encoding: 8bit

RISCV_CPU(cs) uses a checked cast.  When QOM cast debugging is enabled
this adds about 5% total overhead when emulating RV64 on x86-64 host.

Using a RISC-V guest with 16 vCPUs, 16 GB of guest RAM, virtio-blk
disk.  The guest has a copy of the qemu source tree.  The test involves
compiling the qemu source tree with 'make clean; time make -j16'.

Before making this change the compile step took 449 & 447 seconds over
two consecutive runs.

After making this change: 428 & 421 seconds.  The saving is over 5%.

Thanks: Paolo Bonzini
Thanks: Philippe Mathieu-Daudé
Signed-off-by: Richard W.M. Jones
Reviewed-by: Alistair Francis
Reviewed-by: Philippe Mathieu-Daudé
Reviewed-by: Richard Henderson
Message-ID: <20231009124859.3373696-2-rjones@redhat.com>
Signed-off-by: Alistair Francis
---
 target/riscv/cpu_helper.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/target/riscv/cpu_helper.c b/target/riscv/cpu_helper.c
index 3a02079290..8c28241c18 100644
--- a/target/riscv/cpu_helper.c
+++ b/target/riscv/cpu_helper.c
@@ -65,8 +65,7 @@ int riscv_cpu_mmu_index(CPURISCVState *env, bool ifetch)
 void cpu_get_tb_cpu_state(CPURISCVState *env, vaddr *pc,
                           uint64_t *cs_base, uint32_t *pflags)
 {
-    CPUState *cs = env_cpu(env);
-    RISCVCPU *cpu = RISCV_CPU(cs);
+    RISCVCPU *cpu = env_archcpu(env);
     RISCVExtStatus fs, vs;
     uint32_t flags = 0;
 
-- 
2.11.0
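
Editor's note: the sketch below is illustrative only and is not QEMU code.
It shows, under simplified assumptions, why a container_of-style conversion
such as env_archcpu() can be cheaper than a checked QOM cast such as
RISCV_CPU() when cast debugging is enabled: the former is pure pointer
arithmetic over an embedded struct, while the latter performs a runtime type
check on every call.  All names here (CPUExampleState, ExampleCPU,
example_env_archcpu, example_checked_cast) are invented for the example, and
the strcmp-based check merely stands in for the real QOM type verification.

/* Illustrative sketch only; struct and function names are hypothetical. */
#include <assert.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

typedef struct {
    unsigned long pc;           /* stand-in for CPURISCVState fields */
} CPUExampleState;

typedef struct {
    const char *type_name;      /* stand-in for QOM type metadata */
    CPUExampleState env;        /* arch state embedded in the CPU struct */
} ExampleCPU;

/* container_of-style conversion: pointer arithmetic only, no runtime check. */
static inline ExampleCPU *example_env_archcpu(CPUExampleState *env)
{
    return (ExampleCPU *)((char *)env - offsetof(ExampleCPU, env));
}

/* Checked cast: verifies the type at run time, as QOM cast debugging does. */
static ExampleCPU *example_checked_cast(void *obj, const char *expected)
{
    ExampleCPU *cpu = obj;
    assert(strcmp(cpu->type_name, expected) == 0);  /* extra work per call */
    return cpu;
}

int main(void)
{
    ExampleCPU cpu = { .type_name = "example-cpu",
                       .env = { .pc = 0x80000000UL } };
    CPUExampleState *env = &cpu.env;

    /* Both paths recover the containing CPU structure... */
    ExampleCPU *a = example_env_archcpu(env);
    ExampleCPU *b = example_checked_cast(&cpu, "example-cpu");

    /* ...but only the checked cast pays for the runtime type comparison. */
    printf("same object: %d, pc: %#lx\n", a == b, a->pc);
    return 0;
}

In a hot path such as cpu_get_tb_cpu_state(), which runs for every
translation block, avoiding that per-call check is what accounts for the
measured ~5% difference reported in the commit message.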