Do some minor cleanup of recently added comments and a warning, and change
other instances of stack extension to work like what's now being done for x86.
The way flag bits were being set for microops in x86 ended up implicitly
calling the bitset constructor, which truncated any flags beyond the width of
an unsigned long. This change sets the bits in chunks which are always small
enough to avoid being truncated. On 64-bit machines this should reduce to the
same behavior as before, and on 32-bit machines it should work properly
without being unreasonably inefficient.
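A minimal sketch of the chunked approach, assuming a hypothetical 96-bit flag
vector and chunked input (the real microop flag layout differs); each 32-bit
chunk fits in an unsigned long on any host:

```cpp
#include <bitset>
#include <cstdint>

const int NumFlags = 96;

// Build the flag set in 32-bit chunks. Passing a full 64-bit value to
// the single-argument bitset constructor would silently truncate the
// high bits on hosts where unsigned long is 32 bits wide.
std::bitset<NumFlags>
buildFlags(const uint32_t *chunks, int numChunks)
{
    std::bitset<NumFlags> flags;
    for (int i = 0; i < numChunks; i++)
        flags |= std::bitset<NumFlags>(chunks[i]) << (i * 32);
    return flags;
}
```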
When an instruction is translated in the x86 TLB, a variable called
delayedResponse is passed back and forth which tracks whether a translation
could be completed immediately, or whether a callback will finish things up.
If a read was to the internal memory space (memory mapped registers used to
implement things like MSRs), the function hadn't yet reached the point where
delayedResponse was set to false, its default. That meant the value was never
set, and the TLB could start waiting for a callback that would never come.
This change simply moves the assignment above the point where control can
divert to translateInt().
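A toy reconstruction of the control flow, with stand-in names and signatures
(the real translate() carries requests, thread contexts, and faults):

```cpp
// Stand-in for the MSR-style internal access path.
bool translateInt() { return true; }

bool
translate(bool isInternalAccess, bool &delayedResponse)
{
    delayedResponse = false;  // the assignment, moved above the branch

    if (isInternalAccess) {
        // Early return for internal accesses. Before the fix, the
        // assignment above sat below this branch, so delayedResponse
        // was left unset on this path.
        return translateInt();
    }

    // Normal translation; a deferred walk would set
    // delayedResponse = true before returning.
    return true;
}
```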
Nothing big here, but when an address that is not in the page table is
requested to be allocated and it falls outside the maximum stack range, all
you get is a page fault and you don't know why. Add a little warn() to
explain it a bit. Also add some comments and alter the logic a little so that
the return value of checkAndAllocNextPage() isn't totally ignored.
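Roughly the shape of the change, with made-up names around the real
checkAndAllocNextPage():

```cpp
#include <cstdio>
#include <cstdint>

typedef uint64_t Addr;

// Toy stand-in: pretend anything past a fixed bound is beyond the
// maximum stack range (the real check lives in the Process code).
bool checkAndAllocNextPage(Addr vaddr) { return vaddr < 0x7fffffff; }

bool
fixupFault(Addr vaddr)
{
    // Use the return value instead of ignoring it, and warn so the
    // resulting page fault is no longer a mystery.
    if (!checkAndAllocNextPage(vaddr)) {
        std::fprintf(stderr, "warn: address %#llx is outside the maximum "
                     "stack range; faulting\n", (unsigned long long)vaddr);
        return false;
    }
    return true;
}
```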
Even though the code is safe, the compiler flags a warning here, and warnings
are treated as errors for fast/opt builds. I know the fix is redundant, but
it has no side effects and fixes the compile.
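The pattern looks something like this (the variable and branches are invented
for illustration):

```cpp
// A redundant, side-effect-free initialization silences a
// maybe-uninitialized style warning that -Werror promotes to a build
// failure in fast/opt builds.
int
pick(bool cond, int a, int b)
{
    int result = 0;  // redundant: both branches assign result below
    if (cond)
        result = a;
    else
        result = b;
    return result;
}
```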
In the current implementation of functional accesses, it's very hard to
implement broadcast or snooping protocols where the memory has no idea
whether it has exclusive access to a cache block or not. Without this
knowledge, making sure the RW vs. RO permissions are right is next to
impossible. So we add a new state called Backing_Store to convey that this is
the backing storage for a block, so that it can be written if it is the only
possibly RW block in the system, or written even if there is another RW block
in the system, without causing problems.
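A toy model of how the new state helps, not Ruby's actual code: a functional
access prefers a live Read_Write copy, and falls back to the Backing_Store
copy when no cache holds the block RW:

```cpp
#include <vector>
#include <cstddef>

enum AccessPermission { Invalid, Read_Only, Read_Write, Backing_Store };

// Return the index of the copy a functional access should use: a
// Read_Write copy wins if one exists; otherwise the backing store is
// known to be the authoritative copy. Returns -1 if neither is found.
int
pickSource(const std::vector<AccessPermission> &copies)
{
    int backingStore = -1;
    for (std::size_t i = 0; i < copies.size(); i++) {
        if (copies[i] == Read_Write)
            return (int)i;             // a live RW copy is authoritative
        if (copies[i] == Backing_Store)
            backingStore = (int)i;     // remember memory's copy
    }
    return backingStore;
}
```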
Also, a small change to actually set the m_name field for each Controller so
that debugging can be easier. Now you can get a controller's name just by
calling controller->getName().
There are a set of locations in the Linux kernel that are managed via
cache maintenance instructions until all processors enable their MMUs & TLBs.
Writes to these locations are manually flushed from the cache to main
memory when they occur so that cores operating without their MMU enabled
and only issuing uncached accesses can receive the correct data. Unfortunately,
gem5 doesn't support any kind of software-directed maintenance of the cache.
Until such time as that support exists, this patch marks the specific cache
blocks that need to be coherent as non-cacheable until all CPUs enable their
MMUs, and thus allows gem5 to boot MP systems with caches enabled (a
requirement for booting an O3 CPU and thus for an O3 CPU regression).
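A sketch of the idea with assumed names (the real change lives in gem5's ARM
translation code):

```cpp
#include <set>
#include <cstdint>

typedef uint64_t Addr;

// Force accesses to the kernel's cache-maintained bootstrap words to be
// uncacheable until every CPU has its MMU on, standing in for the
// software cache maintenance gem5 does not yet model; afterwards,
// normal caching resumes.
bool
forceUncacheable(Addr paddr, const std::set<Addr> &bootstrapWords,
                 int cpusWithMmuEnabled, int totalCpus)
{
    if (cpusWithMmuEnabled == totalCpus)
        return false;  // all MMUs on: resume normal caching
    return bootstrapWords.count(paddr) != 0;
}
```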
The driver can read the IDE config register as a 32-bit register since
some adapters use bit 18 as a disable channel bit. If the size isn't
set in a PRD it should be 64K according to the spec (and the driver), not
128K.
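The size fix reduces to something like this sketch:

```cpp
#include <cstdint>

// A zero byte-count field in a PRD means 64 KiB per the spec (and the
// driver), not 128 KiB.
uint32_t
prdByteCount(uint16_t byteCountField)
{
    return byteCountField ? byteCountField : 0x10000;  // 0 => 64K
}
```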
SEV instructions were originally implemented to cause asynchronous squashes
via the generateTCSquash() function in the O3 pipeline when updating the
SEV_MAILBOX miscReg. This caused race conditions between CPUs in an MP system
that would lead to a pipeline either going inactive indefinitely or being
unable to commit squashed instructions. Fixed SEV instructions to behave like
interrupts and cause synchronous squashes inside the pipeline, eliminating
the race conditions. Also fixed up the semantics of the WFE instruction to
behave as documented in the ARMv7 ISA description: don't sleep if
SEV_MAILBOX=1 or unmasked interrupts are pending.
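A sketch of the corrected WFE behavior under those semantics (the Core struct
and helpers are stand-ins, not gem5's interfaces):

```cpp
struct Core {
    int sevMailbox;                        // the SEV_MAILBOX miscReg
    bool unmaskedInterruptPending() const; // stand-in, not defined here
    void suspend();                        // stand-in, not defined here
};

void
wfe(Core &core)
{
    if (core.sevMailbox == 1)
        core.sevMailbox = 0;    // consume the pending event; stay awake
    else if (!core.unmaskedInterruptPending())
        core.suspend();         // woken later by an SEV or an interrupt
}
```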
Two issues are fixed in this patch:
1. The load and store PCs passed to the predictor were in reverse order.
2. The flag indicating that a barrier is in flight was never cleared when
the barrier was squashed rather than committed. This made all later loads
dependent on a non-existent in-flight barrier (see the sketch after this
list).
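A sketch of fix 2, with assumed names for the predictor state:

```cpp
#include <cstdint>

typedef uint64_t InstSeqNum;

struct MemDepState {
    bool barrierInFlight;
    InstSeqNum barrierSeqNum;

    // Called on a pipeline squash: everything younger than
    // youngestSurviving is thrown away, possibly the barrier itself.
    void squash(InstSeqNum youngestSurviving) {
        if (barrierInFlight && barrierSeqNum > youngestSurviving)
            barrierInFlight = false;  // previously cleared only at commit
    }
};
```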
Change the way instructions are squashed on memory ordering violations
to squash the violator and younger instructions, not all instructions
that are younger than the instruction it violated (no reason to throw
away valid work).
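In sketch form (the sequence numbers and helper are invented for
illustration):

```cpp
#include <cstdint>

typedef uint64_t InstSeqNum;

// Stand-in for the pipeline's squash machinery: removes every
// instruction with a sequence number >= seqNum.
void squashFrom(InstSeqNum seqNum) { /* elided */ }

void
handleOrderViolation(InstSeqNum violatedSeq, InstSeqNum violatorSeq)
{
    (void)violatedSeq;        // the old code squashed from here
    squashFrom(violatorSeq);  // new: only the violator and younger go
}
```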
Cortex-A9 processors can have a local timer and watchdog counter. These are
enabled by default in Linux, and up to this point we've had to disable
them since a model wasn't available. This change allows a default
MP ARM Linux configuration to boot.
Control register operands are set up so that writing to them is serialize
after, serialize before, and non-speculative. These are probably overkill,
but they should usually be safe. Unfortunately there are times when even
these aren't enough. If an instruction modifies state that affects fetch,
later serialized instructions which come after it might have already gone
through fetch and decode by the time it commits. These instructions may have
been translated or interpreted incorrectly and need to be destroyed. This
change modifies instructions which will or may have this behavior so that
they use the IsSquashAfter flag when necessary.
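As a sketch, the flag set for such an instruction ends up looking like this
(the enum and helper are illustrative; the flag names match the description):

```cpp
#include <bitset>

enum FlagBit { IsSerializeBefore, IsSerializeAfter, IsNonSpeculative,
               IsSquashAfter, NumFlagBits };

// Flag set for an instruction that modifies fetch-affecting state.
// IsSquashAfter makes commit squash and refetch everything younger,
// catching instructions fetched or decoded under stale state.
std::bitset<NumFlagBits>
fetchAffectingWriteFlags()
{
    std::bitset<NumFlagBits> flags;
    flags.set(IsSerializeBefore);
    flags.set(IsSerializeAfter);
    flags.set(IsNonSpeculative);
    flags.set(IsSquashAfter);
    return flags;
}
```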
It's possible (though until now very unlikely) for fetchAddr to get out of
sync with the actual PC of the current instruction. This change forcefully
resets fetchAddr at the end of every instruction.
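In sketch form (names and mask assumed), the reset just re-derives the fetch
address from the just-updated PC rather than letting it drift:

```cpp
#include <cstdint>

typedef uint64_t Addr;

// Recompute the fetch address from the current PC at the end of every
// instruction.
Addr
resetFetchAddr(Addr pcAddr, Addr pcMask)
{
    return pcAddr & pcMask;
}
```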
Until now, the only reason a macroop would be left was that it ended at a
microop marked as the last microop. In O3 with branch prediction, it's
possible for the branch predictor to have entries which originally came from
different instructions which happened to have the same RIP. This could
theoretically happen in many ways, but it was encountered specifically when
different programs in different address spaces ran one after the other in
X86_FS.
What would happen in that case was that the macroop would continue to be
looped over, with microops fetched from it until it reached the last microop,
even though the macro PC had moved out from under it. If things lined up
properly, this could mean that the end bytes of an instruction actually fell
into the instruction-sized block of memory after the one in the predecoder.
The fetch loop implicitly assumes that the last instruction-sized chunk of
memory processed was the last one needed for the instruction it just finished
executing. It would then tell the predecoder to move to an offset within the
bytes it was given that is beyond the end of those bytes, and that would trip
an assert in the x86 predecoder.
This change fixes the problem by making fetch stop processing the current
macroop if the address it should be fetching from changes when the PC is
updated. That happens when the last microop is reached, because the
instruction advances the PC properly, and it also catches the case where the
branch predictor makes fetch do a macro-level branch when it shouldn't.
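The new exit condition boils down to something like this (names assumed):

```cpp
#include <cstdint>

typedef uint64_t Addr;

// Leave the current macroop either when its last microop is reached or
// when the macro-level fetch address moved out from under it, e.g. via
// a predicted macro branch.
bool
doneWithMacroop(Addr curMacroPC, Addr nextMacroPC, bool isLastMicroop)
{
    return isLastMicroop || nextMacroPC != curMacroPC;
}
```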
The check of isLastMicroop is retained because otherwise, a macroop that
branches back to itself would act like a single, long macroop instead of
multiple instances of the same microop. There may be situations (which may
turn out to be purely hypothetical) where that matters.
This also fixes a relatively minor issue where the curMacroop variable would
be set to NULL as soon as a microop was seen to be the last one, but before
curMacroop was used to build the dyninst. The traceData structure would then
have a NULL pointer to the macroop for that microop.
Before this change, the commit stage would wait until the ROB and store queue
were empty before recognizing an interrupt. The fetch stage would stop
generating instructions at an appropriate point, and commit would then wait
for a valid time to interrupt the instruction stream. But instructions might
be in flight after fetch yet not in the ROB or store queue (in rename, for
instance), so this change makes commit wait until all in-flight instructions
are finished.
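In sketch form (the counter and names are assumptions):

```cpp
struct PipelineCounts {
    int instsInFlight;  // everything issued by fetch, in any stage
};

// Recognize an interrupt only when the whole pipeline has drained, not
// merely when the ROB and store queue are empty.
bool
canHandleInterrupt(const PipelineCounts &p)
{
    return p.instsInFlight == 0;  // previously: robEmpty && sqEmpty
}
```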
This patch replaces RUBY with PROTOCOL in all the SConscript files as
the environment variable that decides whether or not certain components
of the simulator are compiled.