This changeset adds a standardized probe point type to monitor packets
in the memory system and adds two probe points to the CommMonitor
class. These probe points enable monitoring of successfully delivered
requests and successfully delivered responses.
Memory system probe listeners should use the BaseMemProbe base class
to provide a unified configuration interface and reuse listener
registration code. Unlike the ProbeListenerObject class, the
BaseMemProbe allows objects to be wired to multiple ProbeManager
instances as long as they use the same probe point name.
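
As a rough illustration of this registration pattern, here is a minimal
standalone C++ sketch. ProbeManagerSketch, PacketCounterSketch and the probe
point name "PktRequest" are illustrative stand-ins, not gem5's actual
classes or API.

#include <cstdint>
#include <functional>
#include <iostream>
#include <map>
#include <string>
#include <utility>
#include <vector>

using PacketInfo = std::string;  // stand-in for a real packet descriptor

// A probe manager keeps callbacks keyed by probe point name.
class ProbeManagerSketch
{
    std::map<std::string,
             std::vector<std::function<void(const PacketInfo &)>>> points;

  public:
    void
    addListener(const std::string &name,
                std::function<void(const PacketInfo &)> cb)
    {
        points[name].push_back(std::move(cb));
    }

    void
    notify(const std::string &name, const PacketInfo &pkt)
    {
        for (auto &cb : points[name])
            cb(pkt);
    }
};

// A BaseMemProbe-style listener: it can be wired to several managers, as
// long as they all expose the same probe point name.
class PacketCounterSketch
{
    std::string pointName;
    uint64_t count = 0;

  public:
    explicit PacketCounterSketch(std::string name)
        : pointName(std::move(name)) {}

    void
    regProbeListener(ProbeManagerSketch &mgr)
    {
        mgr.addListener(pointName,
                        [this](const PacketInfo &pkt) { observe(pkt); });
    }

    void
    observe(const PacketInfo &pkt)
    {
        ++count;
        std::cout << "saw packet: " << pkt << " (total " << count << ")\n";
    }
};

int
main()
{
    ProbeManagerSketch monitor1, monitor2;
    PacketCounterSketch counter("PktRequest");  // same name on both managers
    counter.regProbeListener(monitor1);
    counter.regProbeListener(monitor2);
    monitor1.notify("PktRequest", "read 0x1000");
    monitor2.notify("PktRequest", "write 0x2000");
    return 0;
}

The point of the pattern is that the listener is constructed once with a
probe point name and can then be attached to any number of managers that
expose a point with that name.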
Transaction Level Modeling (TLM 2.0) is widely used in industry for creating
virtual platforms (IEEE 1666 SystemC). This patch contains a
standard-compliant implementation of an external gem5 port, which enables
using gem5 as a TLM initiator component in SystemC-based virtual platforms.
Both TLM coding paradigms, loosely timed (b_transport) and approximately
timed (nb_transport), are supported.
Compared to the original patch, a TLM memory manager has been added.
Furthermore, the transaction object was removed; instead, for each TLM
payload a PacketPointer that points to the original gem5 packet is added as
a TLM extension. For event handling, single events are now created.
Committed by: Nilay Vaish <nilay@cs.wisc.edu>
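
As a sketch of the extension mechanism described above, a payload extension
carrying a pointer back to the originating gem5 packet could look roughly
like the following. It assumes a SystemC/TLM-2.0 installation; the class and
function names are illustrative, not the patch's actual code.

#include <tlm.h>

class Packet;  // forward declaration standing in for gem5's packet type

class Gem5PacketExtension : public tlm::tlm_extension<Gem5PacketExtension>
{
  public:
    Packet *pkt;

    explicit Gem5PacketExtension(Packet *p = nullptr) : pkt(p) {}

    // tlm_extension requires clone() and copy_from().
    tlm::tlm_extension_base *
    clone() const override
    {
        return new Gem5PacketExtension(pkt);
    }

    void
    copy_from(const tlm::tlm_extension_base &ext) override
    {
        pkt = static_cast<const Gem5PacketExtension &>(ext).pkt;
    }
};

// Initiator-side use (sketch): attach the extension so the original gem5
// packet can be recovered when the TLM response comes back.
inline void
attachPacket(tlm::tlm_generic_payload &trans, Packet *pkt)
{
    trans.set_extension(new Gem5PacketExtension(pkt));
}

A TLM memory manager, as added in this version of the patch, would normally
be responsible for releasing the payload and its extensions once the
transaction's reference count drops to zero.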
There are two problems with the existing checkpoint and restore code in Ruby.
The first is that when the event queue is altered by ruby during serialization,
some events that are currently scheduled cannot be found (e.g. the event to
stop simulation that always lives on the queue), causing a panic.
The second is that Ruby is sometimes serialized after the memory system,
meaning that the dirty data in its cache is flushed back to memory too late
and so isn't included in the checkpoint.
These are fixed by implementing memory writeback in Ruby, using the same
technique of hijacking the event queue, but first descheduling all events that
are currently on it. They are saved, along with their scheduled time, so that
the event queue can be faithfully reconstructed after writeback has finished.
Events with the AutoDelete flag set will delete themselves when they
are descheduled, causing an error when attempting to schedule them again.
This is fixed by simply not recording them when taking them off the queue.
Writeback is still implemented using flushing, so the cache recorder object,
which is created to generate the trace and manage flushing, is kept around
and used during serialization to write the trace to disk.
Committed by: Nilay Vaish <nilay@cs.wisc.edu>
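
A simplified standalone sketch of the save/deschedule/restore technique is
shown below; the classes are deliberately minimal and are not gem5's actual
EventQueue or Event API.

#include <cstdint>
#include <map>
#include <utility>
#include <vector>

using Tick = uint64_t;

struct Event
{
    bool autoDelete = false;   // such events free themselves when
                               // descheduled, so they are not recorded
};

class EventQueueSketch
{
    std::multimap<Tick, Event *> queue;

  public:
    void schedule(Event *ev, Tick when) { queue.emplace(when, ev); }

    // Take every event off the queue, remembering (event, time) pairs for
    // the events that are safe to put back later.
    std::vector<std::pair<Event *, Tick>>
    descheduleAll()
    {
        std::vector<std::pair<Event *, Tick>> saved;
        for (auto &entry : queue) {
            if (entry.second->autoDelete)
                delete entry.second;   // would delete itself in gem5
            else
                saved.emplace_back(entry.second, entry.first);
        }
        queue.clear();
        return saved;
    }

    // Faithfully rebuild the queue once writeback has finished.
    void
    restore(const std::vector<std::pair<Event *, Tick>> &saved)
    {
        for (const auto &p : saved)
            schedule(p.first, p.second);
    }
};

Descheduling into a saved list, performing the writeback, and then
rescheduling preserves both the set of pending events and their original
times, which is what allows the checkpoint to be taken without perturbing
the simulation.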
1. Eliminate state NP in L0 and L1 Caches: The two states 'NP' and 'I' both
mean that the cache block is not present in the cache. 'I' also means that the
cache entry has been allocated. This causes problems when we do not correctly
initialize the cache entry when it is re-used. Hence, this patch eliminates
the state NP altogether. Every time a new block comes into the cache, a
cache entry is allocated. Every time a block leaves, the corresponding
entry is deallocated.
2. Separate transient state for instruction fetches: purely for accounting
purposes.
3. Drop state IS_I in the L1 Cache and the message type STALE_DATA: when an
invalidation is received for a block in IS, the block used to be moved to
IS_I. This meant that the data that would arrive in the future would be used
but not stored, since the controller lost the permissions after gaining
them. This state is being dropped, and invalidation messages are now not
processed until the data has arrived. This also means that the STALE_DATA
type is no longer required.
The level 2 controller has a bug. In one particular action, the data block
was copied from a message irrespective of whether the block was dirty or
not. In cases where the L1 sends no data, the copied data value was
incorrect.
For many years the slicc symbol table has supported overloaded functions in
external classes. This patch extends that support to functions that are not
part of classes (a.k.a. no parent). For example, this support allows slicc
to understand that mapAddressToRange is overloaded and the NodeID is an
optional parameter.
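
A tiny C++ analogue of what this support means, with illustrative names
rather than SLICC's actual declarations:

#include <cstdint>

using Addr = uint64_t;
using NodeID = unsigned;

// Without the extra parameter...
int
mapAddressToRangeSketch(Addr addr)
{
    return static_cast<int>(addr >> 6) & 0x3;
}

// ...and with it: same name, different signature, so the trailing argument
// behaves like an optional parameter at the call site.
int
mapAddressToRangeSketch(Addr addr, NodeID node)
{
    return (static_cast<int>(addr >> 6) + static_cast<int>(node)) & 0x3;
}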
This patch changes the number of router pipeline stages from 4 to 2. The
canonical 4-stage router is conservative, while a lower-latency router with
look-ahead routing and speculative allocation is well established.
advance_stage now sets m_stage.second to its second parameter. Then, for
every place where advance_stage is called, a cycle is added to the argument
being passed.
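
An illustrative sketch of the two-parameter advance_stage idea follows; the
names mirror the description above but this is not the actual router code.

#include <cstdint>
#include <utility>

using Cycles = uint64_t;

// Illustrative pipeline stages.
enum FlitStage { I_, VA_, SA_, ST_, LT_ };

class FlitSketch
{
    // Pairs the current pipeline stage with the cycle at which the flit
    // becomes valid in that stage.
    std::pair<FlitStage, Cycles> m_stage{I_, 0};

  public:
    // advance_stage takes the target cycle explicitly and stores it in
    // m_stage.second.
    void
    advance_stage(FlitStage next, Cycles when)
    {
        m_stage = std::make_pair(next, when);
    }

    bool
    is_stage(FlitStage stage, Cycles now) const
    {
        return m_stage.first == stage && now >= m_stage.second;
    }
};

// A call site therefore passes "current cycle + 1", e.g. (illustrative):
//     flit->advance_stage(SA_, curCycle() + Cycles(1));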
Adds features to allow protocols to reschedule controllers when conditionally
stalling within inport logic or actions. Also ensures that resource and
protocol stalls are re-evaluated the next cycle.
This patch adds support that allows the replacement policy to identify each
cache block's access permission. This information can be useful when making
replacement decisions.
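
Purely as a hypothetical illustration of how a block's access permission
might feed into victim selection (this is not an actual gem5 replacement
policy):

#include <cstddef>
#include <vector>

enum class AccessPermission { Invalid, ReadOnly, ReadWrite };

struct BlockInfo
{
    AccessPermission perm;
    unsigned lastUsed;   // smaller value == older access
};

// Prefer evicting invalid blocks, then clean (read-only) ones, falling back
// to the oldest block overall. Assumes a non-empty set.
int
selectVictim(const std::vector<BlockInfo> &set)
{
    std::size_t victim = 0;
    auto better = [](const BlockInfo &a, const BlockInfo &b) {
        if (a.perm != b.perm)
            return a.perm < b.perm;      // Invalid < ReadOnly < ReadWrite
        return a.lastUsed < b.lastUsed;  // older block wins ties
    };
    for (std::size_t i = 1; i < set.size(); ++i)
        if (better(set[i], set[victim]))
            victim = i;
    return static_cast<int>(victim);
}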
The Ruby banked array resource checks (initiated from SLICC) did a check and
allocate at the same time. If a transition needs more than one resource, then
it might check/allocate resource #1, then fail to get resource #2. Another
transition might then try to get the same resources, but in reverse order.
Deadlock.
This patch separates resource checking and resource reservation into two
steps to avoid deadlock.
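
A standalone sketch of the check-then-reserve split, with illustrative names
rather than the actual Ruby classes:

#include <cstddef>
#include <vector>

class BankedArraySketch
{
    std::vector<bool> busy;

  public:
    explicit BankedArraySketch(int banks) : busy(banks, false) {}

    // Phase 1: purely a query, no side effects.
    bool tryAccess(int bank) const { return !busy[bank]; }

    // Phase 2: called only after every needed resource passed tryAccess().
    void reserve(int bank) { busy[bank] = true; }
};

// Attempt a transition that needs one bank from each of several arrays
// (arrays and banks are assumed to have the same length).
bool
tryTransition(std::vector<BankedArraySketch *> &arrays,
              const std::vector<int> &banks)
{
    // Check every resource first...
    for (std::size_t i = 0; i < arrays.size(); ++i)
        if (!arrays[i]->tryAccess(banks[i]))
            return false;   // stall without holding anything

    // ...and only then reserve them, so two transitions can never each hold
    // one resource while waiting on the other (the deadlock above).
    for (std::size_t i = 0; i < arrays.size(); ++i)
        arrays[i]->reserve(banks[i]);
    return true;
}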
It was previously possible for a stalled message to be reordered after an
incoming message. This patch ensures that any stalled message stays in its
original request order.
This patch adds a few helpful functions that allow .sm files to directly
invalidate all cache blocks using a trigger queue rather than rely on each
individual cache block to be invalidated via requests from the mandatory
queue.
This patch allows DPRINTFs to be used in SLICC state machines similar to how
they are used by the rest of gem5. Previously all DPRINTFs in the .sm files
had to use the RubySlicc flag.
This patch exposes the tag and data array latencies to the SLICC state
machines so that they can be used to determine the correct enqueue latency
for response messages.
To have multiple Entry types (e.g., a cache Entry type and
a directory Entry type), just declare one of them as a secondary
type by using the pair 'main="false"', e.g.:
structure(DirEntry, desc="...", interface="AbstractCacheEntry",
main="false") {
...and the primary type would be declared:
structure(Entry, desc="...", interface="AbstractCacheEntry") {
This patch fixes the type handling when prefix operations are used. Previously
prefix operators would assume a void return type, which made it impossible to
combine prefix operations with other expressions. This patch allows SLICC
programmers to use prefix operations more naturally.
This patch adds support for transitions of the form:
transition(START, EVENTS, *) { ACTIONS }
This allows a machine to collapse states that differ only in the next-state
transition into one, and can help shorten/simplify some protocols
significantly.
When * is encountered as an end state of a transition, the next state is
determined by calling the machine-specific getNextState function. The next
state is determined before any actions of the transition execute, and
therefore the next state calculation cannot depend on any of the transition
actions.
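
A rough C++ analogue of this rule, using illustrative names, shows why the
next state cannot depend on the transition's actions:

#include <functional>
#include <vector>

enum class State { START, S_A, S_B };

struct TransitionSketch
{
    bool wildcardEndState;               // true for the '*' case
    State explicitEndState;              // used when the end state is named
    std::vector<std::function<void()>> actions;
};

State
doTransition(const TransitionSketch &t,
             const std::function<State()> &getNextState)
{
    // The next state is fixed before any action runs, so it cannot depend
    // on what the actions do.
    State next = t.wildcardEndState ? getNextState() : t.explicitEndState;

    for (const auto &action : t.actions)
        action();

    return next;
}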
This patch allows SLICC protocols to use more than one message type with a
message buffer. For example, you can declare two in ports as such:
in_port(ResponseQueue_in, ResponseMsg, responseFromDir, rank=3) { ... }
in_port(tgtResponseQueue_in, TgtResponseMsg, responseFromDir, rank=2) { ... }
This patch was created by Bihn Pham during his internship at AMD.
There is no need to delay hit callback response messages by a cycle because
the response latency is already incurred in the Ruby protocol. This ensures
correct timing of memory instructions.
The Minor draining fixes perturb the timing slightly since they affect how
the simulator is drained. Update the reference statistics to reflect this
expected change.
The Minor CPU currently doesn't drain properly when it is switched
out. This happens because Fetch 1 expects to be in the FetchHalted
state when it is drained. However, because the CPU is switched out, it
is stuck in the FetchWaitingForPC state. Fix this by ignoring drain
requests and returning DrainState::Drained from MinorCPU::drain() if
the CPU is switched out. This is always safe since a switched out CPU,
by definition, doesn't have any instructions in flight.
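
A hedged sketch of the drain behaviour described above, using a simplified
standalone class rather than the real MinorCPU:

enum class DrainState { Running, Draining, Drained };

class MinorCPUSketch
{
    bool switchedOut_ = false;
    bool fetchHalted_ = false;   // stands in for Fetch 1 reaching FetchHalted

  public:
    void switchOut() { switchedOut_ = true; }

    DrainState
    drain()
    {
        // A switched-out CPU has, by definition, no instructions in flight,
        // so it is always safe to report it as already drained.
        if (switchedOut_)
            return DrainState::Drained;

        // Otherwise draining only completes once the fetch stage has
        // actually halted.
        return fetchHalted_ ? DrainState::Drained : DrainState::Draining;
    }
};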
Minor currently activates thread 0 in startup() to work around an
issue where activateContext() is called from LiveProcess before the
process entry point is known. When activateContext() is called, Minor
creates a branch instruction to the process's entry point. The first
time it is called, the branch points to an undefined location (0). The
call in startup() updates the branch to point to the actual entry
point.
When instantiating a switched out Minor CPU, it still tries to
activate thread 0. This is clearly incorrect since a switched out CPU
can't have any active threads. This changeset adds a check to ensure
that the thread is active before reactivating it.
The drain refactor patches introduced a couple of bugs in the way
Minor handles draining. This patch fixes an incorrect assert and a
case of infinite recursion when the CPU signals drain done.