Running with valgrind, I noticed a use-after-free originating from
simple_mem.cc. It looks like this is a known issue, and this additional call
site was missed in an earlier patch.
The DMA device sometimes calls the process() method on a completion
event directly instead of scheduling it on the current tick. This
breaks some devices that assume that the completion handler won't be
called until the current event handler has returned. Specifically, it
causes infinite recursion in the IdeDisk component because it does not
advance its chunk generator until after a dmaRead()/dmaWrite() has
returned. This changeset removes this micro-optimization and schedules
the event in the current tick instead. This way, the semantics of event
handling stay the same even when the delay is 0.
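A minimal sketch of the change (the event and member names here are
illustrative, not the exact gem5 code):

// Before: a zero delay took a shortcut and invoked the handler
// directly, re-entering device code that was not expecting it:
//
//     if (delay == 0)
//         completionEvent->process();
//     else
//         schedule(completionEvent, curTick() + delay);
//
// After: always schedule, even for a zero delay, so the completion
// handler never runs before the current event handler has returned.
schedule(completionEvent, curTick() + delay);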
The number of arguments specified when calling parse_int_args() in
do_exit() is incorrect. This leads to stack corruption since it causes
writes past the end of the ints array.
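A hedged sketch of the failure mode (the array size and argument counts
are illustrative):

// parse_int_args() fills one slot per requested argument, so asking
// for more arguments than the array holds writes past its end.
uint64_t ints[4];
parse_int_args(args, ints, 6);   // broken: requests 6 values, but ints
                                 // holds 4, so two writes smash the stack
parse_int_args(args, ints, sizeof(ints) / sizeof(ints[0]));  // correct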
Tick was not correctly wrapped for the stats system, and therefore it was not
possible to configure the stats dumping from the Python scripts without
defining Ticks as long long. This patch fixes the wrapping of Tick by copying
the typemap of uint64_t to Tick.
Clean up the serialization code for the simple CPUs and the O3 CPU. The
CPU-specific code has been replaced with a (un)serializeThread that
serializes the thread state / context of a specific thread. Assuming
that the CPU-specific thread state class uses the base thread state
serialization code, this allows us to restore a checkpoint with any of
the CPU models.
This changeset adds a set of tests that stress the CPU switching
code. It adds the following test configurations:
* tsunami-switcheroo-full -- Alpha system (atomic, timing, O3)
* realview-switcheroo-atomic -- ARM system (atomic<->atomic)
* realview-switcheroo-timing -- ARM system (timing<->timing)
* realview-switcheroo-o3 -- ARM system (O3<->O3)
* realview-switcheroo-full -- ARM system (atomic, timing, O3)
Reference data is provided for the 10.linux-boot test case. All of the
tests trigger a CPU switch once per millisecond during the boot
process.
The in-order CPU model was not included in any of the tests as it does
not support CPU handover.
This changeset inserts a TLB flush in BaseCPU::switchOut to prevent
stale translations when doing repeated switching. Additionally, the
TLB flushing functionality is exported to Python to make debugging
of switching/checkpointing easier.
A simulation script will typically use the TLB flushing functionality
to generate a reference trace. The following sequence can be used to
simulate a handover between identically configured CPU models (exactly
when this works depends on how drain is implemented, but it is
generally the case):
m5.drain(test_sys)
[ cpu.flushTLBs() for cpu in test_sys.cpu ]
m5.resume(test_sys)
The generated trace should normally be identical to a trace generated
when switching between identically configured CPU models or
checkpointing and resuming.
When the classic gem5 cache saw an uncacheable memory access, it used
to either ignore it or, in the case of a write, silently drop the cache
line. Normally, there shouldn't be any data in the cache belonging to
an uncacheable address range. However, since some architecture models
don't implement cache maintenance instructions, there might be some
dirty data in the cache that is discarded when this happens. The
reason it has mostly worked before is that such cache lines were
most likely evicted by normal memory activity before a TLB flush was
requested by the OS.
Previously, the cache model would invalidate cache lines when they
were accessed by an uncacheable write. This changeset alters this
behavior so all uncacheable memory accesses cause a cache flush with
an associated writeback if necessary. This is implemented by reusing
the cache flushing machinery used when draining the cache, which
implies that writebacks are performed using functional accesses.
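A hedged sketch of the new behavior (the block and tag interfaces are
assumptions based on this description):

// On an uncacheable access, flush a matching line instead of silently
// dropping it: write it back if dirty, then invalidate it. The
// writeback uses the same functional path as cache draining.
CacheBlk *blk = tags->findBlock(pkt->getAddr());
if (blk && blk->isValid()) {
    if (blk->isDirty())
        writebackBlock(blk);   // hypothetical helper: functional writeback
    tags->invalidate(blk);
}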
Previously, the O3 CPU could stop in the middle of a microcode
sequence. This patch makes sure that the pipeline stops when it has
committed a normal instruction or exited from a microcode
sequence. Additionally, it makes sure that the pipeline has no
instructions in flight when it is drained, which should make draining
more robust.
Draining is controlled in the commit stage, which checks if the next
PC after a committed instruction is in microcode. If this isn't the
case, it requests a squash of all instructions after the instruction
that just committed and immediately signals a drain stall
to the fetch stage. The CPU then continues to execute until the
pipeline and all associated buffers are empty.
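A hedged sketch of the commit-stage check (field and helper names are
assumptions); next_pc stands for the PC following the committed
instruction:

// Only act on a pending drain request if the next PC is outside
// microcode, i.e. on a proper instruction boundary.
if (drainPending && next_pc.microPC() == 0) {
    squashAfter(tid, head_inst);   // squash everything younger
    drainStall = true;             // hypothetical flag observed by fetch
}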
Currently, the atomic CPU can be in the middle of a microcode sequence
when it is drained. This leads to two problems:
* When switching to a hardware virtualized CPU, we obviously can't
execute gem5 microcode.
* Since curMacroStaticInst is populated when executing microcode,
repeated switching between CPUs executing microcode leads to
incorrect execution.
After applying this patch, the CPU will be on a proper instruction
boundary, which means that it is safe to switch to any CPU model
(including hardware virtualized ones). This changeset fixes a bug
where multiple switches to the same atomic CPU sometimes corrupt
the target state because of dangling pointers to the currently
executing microinstruction.
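A minimal sketch of the resulting boundary check (modeled on the
curMacroStaticInst state mentioned above; treat the exact form as an
assumption):

// The CPU is only safely drained when it is not in the middle of a
// macro-op, i.e. no microcode sequence is in progress.
bool drained() const
{
    return curMacroStaticInst == StaticInst::nullStaticInstPtr;
}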
Note: This changeset moves tick event descheduling from switchOut() to
drain(), which makes timing consistent between just draining a system
and draining /and/ switching between two atomic CPUs. This makes
debugging quite a lot easier (execution traces get the same timing),
but the latency of the last instruction before a drain will not be
accounted for correctly (it will always be 1 cycle).
Note 2: This changeset removes the so_state variable, the locked variable,
and the tickEvent from checkpoints since none of them contain state
that needs to be preserved across checkpoints. The so_state is made
redundant because we don't use the drain state variable anymore, the
locked variable should never be set when the system is drained, and the
tick event isn't scheduled.
Currently, the timing CPU can be in the middle of a microcode sequence
or a multicycle instruction (stayAtPC is true) when it is drained. This
leads to two problems:
* When switching to a hardware virtualized CPU, we obviously can't
execute gem5 microcode.
* If stayAtPC is true we might execute half of an instruction twice
when restoring a checkpoint or switching CPUs, which leads to an
incorrect execution.
After applying this patch, the CPU will be on a proper instruction
boundary, which means that it is safe to switch to any CPU model
(including hardware virtualized ones). This changeset also fixes a bug
where the timing CPU sometimes switches out while stayAtPC is
true, which corrupts the target state after a CPU switch or
checkpoint.
Note: This changeset removes the so_state variable from checkpoints
since the drain state isn't used anymore.
The thread context handover code used to break when multiple handovers
were performed during the same quiesce period. Previously, the thread
contexts would assign the TC pointer in the old quiesce event to the
new TC. This obviously broke in cases where multiple switches were
performed within the same quiesce period, in which case the TC pointer
in the quiesce event would point to an old CPU.
The new implementation deschedules pending quiesce events in the old
TC and schedules a new quiesce event in the new TC. The code has been
refactored to remove most of the code duplication.
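A hedged sketch of the new handover logic (helper names are
assumptions):

// Move a pending quiesce from the old context to the new one instead
// of patching the TC pointer inside the old event.
EndQuiesceEvent *old_event = old_tc->getQuiesceEvent();
if (old_event->scheduled()) {
    Tick resume = old_event->when();
    old_tc->getCpuPtr()->deschedule(old_event);
    new_tc->getCpuPtr()->schedule(new_tc->getQuiesceEvent(), resume);
}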
Currently, we invalidate the cached miscregs in
TLB::unserialize(). The intended use of the drainResume() method is to
invalidate cached state and prepare the system to resume after a CPU
handover or (un)serialization. This patch moves the TLB miscregs
invalidation code to the drainResume() method to avoid surprising
behavior.
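Roughly (the flag name is an assumption), the resume hook now does the
invalidation:

void TLB::drainResume()
{
    // Cached miscreg copies may be stale after a CPU handover or
    // unserialization; force them to be re-read lazily.
    miscRegValid = false;
}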
Since the page table walker only checks if a drain has completed in
doL1DescriptorWrapper() and doL2DescriptorWrapper(), it sometimes
loses track of a drain request if there is a squash. This changeset
adds a completeDrain() call after squashing requests in the pending
queue, which fixes this issue.
Commit can currently both commit and squash in the same cycle. This
confuses other stages since the signals coming from the commit stage
can only signal either a squash or a commit in a cycle. This changeset
changes the behavior of squashAfter so that it commits all
instructions, including the instruction that requested the squash, in
the first cycle and then starts to squash in the next cycle.
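A sketch of the split across cycles (member names are illustrative):

// Cycle N: commit everything up to and including the instruction that
// requested the squash; only record the request.
if (head_inst->isSquashAfter())
    squashAfterInst[tid] = head_inst;

// Cycle N+1: nothing commits alongside the squash, so the two signals
// never appear in the same cycle.
if (squashAfterInst[tid]) {
    squashAll(tid);
    squashAfterInst[tid] = NULL;
}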
The defer_registration parameter is used to prevent a CPU from
initializing at startup, leaving it in the "switched out" mode. The
name of this parameter (and the help string) is confusing. This patch
renames it to switched_out, which should be more descriptive.
In order to see all registers independent of the current CPU mode, the
ARM architecture model uses the magic MISCREG_CPSR_MODE register to
change the register mappings without actually updating the CPU
mode. This hack is no longer needed since the thread context now
provides a flat interface to the register file. This patch replaces
the CPSR_MODE hack with the flat register interface.
This patch introduces the following sanity checks when switching
between CPUs:
* Check that the set of new and old CPUs do not overlap. Having an
overlap between the set of new CPUs and the set of old CPUs is
currently not supported. Doing such a switch used to result in the
following assertion error:
BaseCPU::takeOverFrom(BaseCPU*): \
Assertion `!new_itb_port->isConnected()' failed.
* Check that all new CPUs are in the switched out state.
* Check that all old CPUs are in the switched in state.
This patch cleans up the CPU switching functionality by making sure
that CPU models consistently call the parent on switchOut() and
takeOverFrom(). This has the following implications that might alter
current functionality:
* The call to BaseCPU::switchOut() in the O3 CPU is moved from
signalDrained() (!) to switchOut().
* A call to BaseSimpleCPU::switchOut() is introduced in the simple
CPUs.
The O3 CPU used to copy its thread context to a SimpleThread in order
to do serialization. This was a bit of a hack involving two static
SimpleThread instances and a magic constructor that was only used by
the O3 CPU.
This patch moves the ThreadContext serialization code into two global
procedures that, in addition to the normal serialization parameters,
take a ThreadContext reference as a parameter. This allows us to reuse
the serialization code in all ThreadContext implementations.
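The resulting interface, roughly (treat the exact signatures as
assumptions based on this description):

// Global procedures usable with any ThreadContext implementation,
// replacing the per-CPU copies of this code.
void serialize(ThreadContext &tc, std::ostream &os);
void unserialize(ThreadContext &tc, Checkpoint *cp,
                 const std::string &section);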
The entire O3 pipeline used to be initialized from init(), which is
called before initState() or unserialize(). This causes the pipeline
to be initialized from an incorrect thread context. This doesn't
currently lead to correctness problems as instructions fetched from
the incorrect start PC will be squashed a few cycles after
initialization.
This patch will affect the regressions since the O3 CPU now issues its
first instruction fetch to the correct PC instead of 0x0.
Some architectures map registers differently depending on their mode
of operation. There is currently no architecture-independent way of
accessing all registers. This patch introduces a flat register
interface to the ThreadContext class. This interface is useful, for
example, when serializing or copying thread contexts.
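A hedged outline of the additions (names modeled on the description;
exact signatures are assumptions):

// Mode-independent accessors on ThreadContext: indices refer to the
// flat register file rather than the mode-dependent mapping.
virtual uint64_t readIntRegFlat(int idx) = 0;
virtual void setIntRegFlat(int idx, uint64_t val) = 0;
virtual TheISA::FloatReg readFloatRegFlat(int idx) = 0;
virtual void setFloatRegFlat(int idx, TheISA::FloatReg val) = 0;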
After making the ISA an independent SimObject, it is serialized
automatically by the Python world. Previously, this just resulted in
an empty ISA section. This patch moves the contents of the ISA to that
section and removes the explicit ISA serialization from the thread
contexts, which makes it behave like a normal SimObject during
serialization.
Note: This patch breaks checkpoint backwards compatibility! Use the
cpt_upgrader.py utility to upgrade old checkpoints to the new format.
This patch adds checks to all CPU models to make sure that the memory
system is in the correct mode at startup and when resuming after a
drain. Previously, we only checked that the memory system was in the
right mode when resuming. This was inadequate, since such a
configuration error should be detected at startup as well as when
resuming. Additionally, since the check was done using an assert, it
wasn't performed when NDEBUG was set (e.g., the fast target).
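A minimal sketch of such a check (the method and message are
hypothetical), using fatal() so it also fires in NDEBUG builds:

void AtomicSimpleCPU::verifyMemoryMode() const
{
    // Called both at startup and when resuming after a drain.
    if (!system->isAtomicMode())
        fatal("The atomic CPU requires the memory system to be in "
              "'atomic' mode.\n");
}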
This patch adds support for the memInvalidate() drain method. TLB
flushing is requested by calling the virtual flushAll() method on the
TLB.
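A sketch of the wiring (the loop structure is an assumption; flushAll()
is named in this patch):

// memInvalidate() asks every TLB to drop its translations.
void BaseCPU::memInvalidate()
{
    for (int tid = 0; tid < threadContexts.size(); ++tid) {
        threadContexts[tid]->getITBPtr()->flushAll();
        threadContexts[tid]->getDTBPtr()->flushAll();
    }
}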
Note: This patch renames invalidateAll() to flushAll() on x86 and
SPARC to make the interface consistent across all supported
architectures.
The IIC replacement policy seems to be unused and has probably
gathered too much bit rot to be useful. This patch removes the IIC and
its associated cache parameters.
This patch removes the intNum and clock from the serialized scalars as
these are set by the Python parameters and should not be part of the
checkpoint.
This patch checks that the compiler in use is either gcc >= 4.4 or
clang >= 2.9, and enables building with --std=c++0x in all cases. As a
consequence, we can tidy up the hashmap and always have static_assert
available. If anyone wants to use alternative compilers, icc for
example supports c++0x to a similar level and could be added if
needed.
This patch opens the door to more elaborate use of c++0x features that
are present in gcc 4.4 and clang 2.9, e.g. auto typed variables,
variadic templates, rvalues and move semantics, and strongly typed
enums. There will be no going back on this one...
This patch simply prunes the SUNCC and ICC compiler options as they
are both sufficiently stale that they would have to be re-written from
scratch anyhow. The patch serves to clean things up before shifting to
a build environment that enforces basic c++11 compliance as done in
the following patch.
This patch changes the NS gige controller to have a non-zero clock, and
sets the default to 500 MHz. The blocks that could previously be
by-passed with a zero clock are now always present, and the user is
left with the option of setting a very high clock frequency to achieve
a similar performance.
Scons normally removes all environment variables that aren't
whitelisted from the build environment. This messes up things like
ccache, distcc, and the clang static analyzer. This changeset adds the
DISTCC_, CCACHE_, and CCC_ prefixes to the environment variable
whitelist.
Fixed checkpointing of the framebuffer. Previously, the pixel size was not
considered in determining the size of the buffer to checkpoint. This patch
checkpoints the entire framebuffer instead of the first quarter.
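A hedged sketch of the corrected size computation (field names are
illustrative):

// The checkpointed byte count must account for the pixel size; the
// old code effectively assumed one byte per pixel.
uint32_t frame_bytes = width * height * bytesPerPixel;
SERIALIZE_SCALAR(frame_bytes);
arrayParamOut(os, "framebuffer", frameBuffer, frame_bytes);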
At least gcc 4.4.3 seems to get confused by the use of func both as a
template parameter and a member variable in the M5VarArgsFault
class. This causes the value of the member variable func to be
unpredictable in M5VarArgsFault objects. This changeset renames the
template parameter to remove this ambiguity.
This patch adds basic merging of address ranges when determining which
address ranges should be reported in the configuration table. By
performing this merging it is possible to distribute an address range
across many memory channels (controllers). This is essential to enable
address interleaving.
This patch adds support for merging a vector of interleaved address
ranges into a contiguous range. The functionality will be used in the
interconnect and the PhysicalMemory to transform interleaved memory
ranges to contiguous ranges before passing them on.
The actual use of the merging appears in future patches.
This patch adds support for interleaving bits for the address
ranges. What was previously just a start and an end address now has
three additional fields: the high bit and the number of bits to use
for interleaving, and a match value to compare against. If the number
of interleaving bits is set to zero, interleaving is effectively
disabled.
A number of convenience functions are added to the range to enquire
about the interleaving, its granularity and the number of stripes it
is part of.
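As a hedged sketch (field names are assumptions), matching an address
against an interleaved range looks like:

// An address matches if it lies inside [start, end] and, when
// interleaving is enabled, its selected bits equal the match value.
bool contains(const Addr &a) const
{
    if (a < _start || a > _end)
        return false;
    if (intlvBits == 0)
        return true;   // interleaving disabled
    Addr bits = (a >> (intlvHighBit - intlvBits + 1)) &
        ((ULL(1) << intlvBits) - 1);
    return bits == intlvMatch;
}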
This patch makes the all proxy traverse any potential list that is
encountered in the object hierarchy instead of only looking at
children that are SimObjects. An example of where this is useful is
when creating a multi-channel memory system as a list of controllers,
whilst ensuring that the memories are still visible in the system.
This patch cleans up the AddrRangeMap in preparation for the addition
of interleaving by removing unused code. The non-const versions of
find are never used, and hence the duplication is not needed.
This patch generalises the address range resolution for the I/O cache
and I/O bridge such that they do not assume a single memory. The patch
involves adding a parameter to the system which is then defined based
on the memories that are to be visible from the I/O subsystem, whether
behind a cache or a bridge.
The change is needed to allow interleaved memory controllers in the
system.
This patch tidies up a number of the bus DPRINTFs related to range
manipulation. In particular, it shifts the message about range changes
to the start of the member function, and also adds information about
when all ranges are received.
This patch makes the address mapper less stringent about checking the
before and after ranges, i.e. the original and remapped ranges. The
checks were not really necessary, and there are situations when the
previous checks were too strict.
This patch makes the start and end address private in a move to
prevent direct manipulation and matching of ranges based on these
fields. This is done so that a transition to ranges with interleaving
support is possible.
As a result of hiding the start and end, a number of member functions
are needed to perform the comparisons and manipulations that
previously took place directly on the members. An accessor function is
provided for the start address, and a function is added to test if an
address is within a range. As a result of the latter, the != and ==
operators are also removed in favour of the member function. A member
function that returns a string representation is also created to allow
debug printing.
In general, this patch does not add any functionality, but it does
take us closer to a situation where interleaving (and more cleverness)
can be added under the bonnet without exposing it to the user. More on
that in a later patch.
This patch temporarily removes the joining of ranges when creating the
backing store, to reserve this functionality for the interleaved
ranges that are about to be introduced.
When creating the mmaps for the backing store, there is no point in
creating larger contiguous chunks than what is necessary. The larger
chunks will only make life more difficult for the host.
Merging will be re-added later, but then only for interleaved ranges.