From: Paolo Bonzini
Date: Wed, 13 Apr 2011 08:03:43 +0000 (+0200)
Subject: really fix -icount in the iothread case
X-Git-Url: http://git.osdn.net/view?a=commitdiff_plain;h=3b2319a30b5ae528787bf3769b1a28a863b53252;p=qmiga%2Fqemu.git

really fix -icount in the iothread case

The correct fix for -icount is to consider the biggest difference
between iothread and non-iothread modes.  In the traditional model,
CPUs run _before_ the iothread calls select (or WaitForMultipleObjects
for Win32).  In the iothread model, CPUs run while the iothread isn't
holding the mutex, i.e. _during_ those same calls.

So, the iothread should always block as long as possible to let the
CPUs run smoothly---the timeout might as well be infinite---and either
the OS or the CPU thread itself will let the iothread know when
something happens.  At this point, the iothread wakes up and interrupts
the CPU.

This is exactly the approach that this patch takes: when cpu_exec_all
returns in -icount mode, and it is because a vm_clock deadline has been
met, it wakes up the iothread to process the timers.  This is really
the "bulk" of fixing icount.

Signed-off-by: Paolo Bonzini
Tested-by: Edgar E. Iglesias
Signed-off-by: Edgar E. Iglesias
---

diff --git a/cpus.c b/cpus.c
index 41bec7cc56..cbeac7a40e 100644
--- a/cpus.c
+++ b/cpus.c
@@ -830,6 +830,9 @@ static void *qemu_tcg_cpu_thread_fn(void *arg)
     while (1) {
         cpu_exec_all();
+        if (use_icount && qemu_next_deadline() <= 0) {
+            qemu_notify_event();
+        }
         qemu_tcg_wait_io_event();
     }