This patch fixes the type handling for prefix operations. Previously, prefix
operators were assumed to return void, which made it impossible to
combine prefix operations with other expressions. This patch allows SLICC
programmers to use prefix operations more naturally.
This patch adds support for transitions of the form:
transition(START, EVENTS, *) { ACTIONS }
This allows a machine to collapse states that differ only in their next-state
transitions into a single state, which can help shorten/simplify some
protocols significantly.
When * is encountered as an end state of a transition, the next state is
determined by calling the machine-specific getNextState function. The next
state is determined before any actions of the transition execute, and
therefore the next state calculation cannot depend on any of the transition
actions.
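In generated-code terms, the ordering is roughly the following (a sketch with
assumed names, not the literal SLICC output):

// The wildcard end state is resolved up front via the machine-specific
// getNextState hook...
State next_state = getNextState(addr);

// ...and only then do the transition's actions run, so they cannot
// influence the next-state decision made above.
executeActions(addr);

setState(addr, next_state);
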
This patch allows SLICC protocols to use more than one message type with a
message buffer. For example, you can declare two in_ports as follows:
in_port(ResponseQueue_in, ResponseMsg, responseFromDir, rank=3) { ... }
in_port(tgtResponseQueue_in, TgtResponseMsg, responseFromDir, rank=2) { ... }
This patch was created by Bihn Pham during his internship at AMD.
There is no need to delay hit callback response messages by a cycle because
the response latency is already incurred in the Ruby protocol. This ensures
correct timing of memory instructions.
The Minor draining fixes perturb the timing slightly since they affect how
the simulator is drained. Update the reference statistics to reflect this
expected change.
The Minor CPU currently doesn't drain properly when it is switched
out. This happens because Fetch 1 expects to be in the FetchHalted
state when it is drained. However, because the CPU is switched out, it
is stuck in the FetchWaitingForPC state. Fix this by ignoring drain
requests and returning DrainState::Drained from MinorCPU::drain() if
the CPU is switched out. This is always safe since a switched out CPU,
by definition, doesn't have any instructions in flight.
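A minimal sketch of the fix (assuming the usual MinorCPU member names):

DrainState
MinorCPU::drain()
{
    // A switched-out CPU has no instructions in flight, so it can
    // report itself as drained right away and ignore the request.
    if (switchedOut())
        return DrainState::Drained;

    // ... otherwise drain the pipeline as before ...
}
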
Minor currently activates thread 0 in startup() to work around an
issue where activateContext() is called from LiveProcess before the
process entry point is known. When activateContext() is called, Minor
creates a branch instruction to the process's entry point. The first
time it is called, the branch points to an undefined location (0). The
call in startup() updates the branch to point to the actual entry
point.
When a switched-out Minor CPU is instantiated, it still tries to activate
thread 0. This is clearly incorrect since a switched-out CPU can't have any
active threads. This changeset adds a check to ensure that a thread is active
before reactivating it.
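The added check is conceptually along these lines (a sketch using the generic
thread-context accessors; the exact code may differ):

for (ThreadID tid = 0; tid < numThreads; tid++) {
    // Only wake threads that are actually marked active; a switched-out
    // CPU has none, so nothing is activated in that case.
    if (getContext(tid)->status() == ThreadContext::Active)
        activateContext(tid);
}
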
The drain refactor patches introduced a couple of bugs in the way
Minor handles draining. This patch fixes an incorrect assert and a
case of infinite recursion when the CPU signals drain done.
This patch removes RequestCause and simplifies how we schedule the sending of
packets through the memory-side port. The deassertion of bus requests is
removed, as it is not used.
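Scheduling a send now reduces to a single helper, roughly (a sketch; the
method and port names are assumptions):

void
BaseCache::schedMemSideSendEvent(Tick time)
{
    // No RequestCause is needed any more: simply ask the memory-side
    // port to send at the given tick.
    memSidePort->schedSendEvent(time);
}
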
This patch makes cache sets aware of the way number. This enables some nice
features, such as the ability to restrict way allocation. The implemented
mechanism allows setting a maximum number of ways that can be allocated, 'k',
which must satisfy 0 < k <= N (where N is the number of ways). More
sophisticated mechanisms can be implemented in the future.
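The restriction can be pictured roughly as follows (a sketch with assumed
names, not the literal implementation):

// Only ways [0, k) of a set are eligible for allocation, 0 < k <= assoc.
CacheBlk*
findRestrictedVictim(CacheSet &set, unsigned k, unsigned assoc)
{
    assert(k > 0 && k <= assoc);
    CacheBlk *victim = nullptr;
    for (unsigned way = 0; way < k; way++) {
        CacheBlk *blk = set.blks[way];
        if (!blk->isValid())
            return blk;                          // free way: use it
        if (!victim || blk->tickInserted < victim->tickInserted)
            victim = blk;                        // else evict the oldest
    }
    return victim;
}
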
This patch changes how writebacks communicate whether the line is
passed as modified or owned. Previously we relied on the
isSupplyExclusive mechanism, which was originally designed to avoid
unnecessary snoops.
For normal cache requests we use the sharedAsserted mechanism to
determine if a block should be marked writeable or not, and with this
patch we transition the writebacks to also use this
mechanism. Conceptually this is cleaner and more consistent.
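In sketch form (assuming the Packet shared-flag API of this code base):

// An owned (dirty but non-writable) line is written back with the
// shared flag asserted; a modified (writable) line is written back
// without it.
PacketPtr pkt = new Packet(req, MemCmd::Writeback);
if (!blk->isWritable())
    pkt->assertShared();
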
Some minor fixes and removal of dead code. Changing the flags to be
enums rather than static const (to avoid any linking issues caused by
the latter). Also adding a getBlockAddr member which hopefully can slowly
find its way into caches, snoop filters, etc.
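The new helper amounts to the following (a sketch of a Packet member):

// Return the address of the cache block this packet touches, given the
// block size.
Addr
getBlockAddr(unsigned int blk_size) const
{
    return getAddr() & ~(Addr(blk_size - 1));
}
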
This adds a vector register type, defined as a std::array of a fixed number
of uint64_ts. isa_parser.py has been modified to parse vector register
operands and generate the required code. The different CPU models now have
vector register files.
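Conceptually the new type boils down to something like this (a sketch; the
type name and element count are assumptions):

#include <array>
#include <cstdint>

// A vector register: a fixed number of 64-bit chunks.
constexpr std::size_t NumVecElems = 4;                 // assumed width
using VectorReg = std::array<uint64_t, NumVecElems>;
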
The Process class methods were using an improper naming style, which
subsequently bled into the system call code. The following regular
expressions should be helpful to anyone rebasing private system call patches
on top of these changesets:
s/alloc_fd/allocFD/
s/sim_fd(/simFD(/
s/sim_fd_obj/getFDEntry/
s/fix_file_offsets/fixFileOffsets/
s/find_file_offsets/findFileOffsets/
The patch clarifies whether file descriptors are host file descriptors or
target file descriptors in the system call code. (Host file descriptors
are file descriptors which have been allocated through real system calls
whereas target file descriptors are allocated from an array in the Process
class.)
This patch extends the previous patch's alterations around fd_map. It cleans
up some of the uglier code in the process file and replaces it with a more
concise C++11 version. As part of the changes, the FdMap class is pulled out
of the Process class and receives its own file.
This patch gets rid of the unused Process::dup_fd method and does minor
refactoring in the process class files. The file descriptor max has been
changed to be the number of file descriptors since this clarifies the loop
boundary condition and cleans up the code a bit. The fd_map field has been
altered to be dynamically allocated as opposed to being an array; the
intention here is to build on this in subsequent patches to allow processes
to share their file descriptors with the clone system call.
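One way to picture the fd_map change (a sketch with assumed names; whether a
shared_ptr is used at this stage or only in the later clone work is an
assumption):

// Before: a fixed array embedded in each Process.
// FDEntry fd_map[NUM_FDS];

// After: dynamically allocated, so a later clone() can let parent and
// child share the same table by sharing the pointer.
std::shared_ptr<std::array<FDEntry, NUM_FDS>> fd_map;
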
This patch updates the x86 decoder so that it can decode instructions with a
VEX prefix. It also updates the ISA with opcodes from VEX opcode maps 1, 2,
and 3. Note that none of the instructions have been implemented yet; the
implementations will be provided in due course.
Multi gem5 is an extension to gem5 to enable parallel simulation of a
distributed system (e.g. simulation of a pool of machines
connected by Ethernet links). A multi gem5 run consists of separate gem5
processes running in parallel (potentially on different hosts/slots on
a cluster). Each gem5 process executes the simulation of a component of the
simulated distributed system (e.g. a multi-core board with an Ethernet NIC).
The patch implements the "distributed" Ethernet link device
(dev/src/multi_etherlink.[hh,cc]). This device will send/receive
(simulated) Ethernet packets to/from peer gem5 processes. The interface
to talk to the peer gem5 processes is defined in dev/src/multi_iface.hh and
in tcp_iface.hh.
There is also a central message server process (util/multi/tcp_server.[hh,cc])
which acts like an Ethernet switch and transfers messages among the gem5 peers.
A multi gem5 simulation can be kicked off by the util/multi/gem5-multi.sh
wrapper script.
Checkpoints are supported by multi-gem5. The checkpoint must be
initiated by a single gem5 process. E.g., the gem5 process with rank 0
can take a checkpoint from the bootscript just before it invokes
'mpirun' to launch an MPI test. The message server process will notify
all the other peer gem5 processes and make them take a checkpoint, too
(after completing a global synchronisation to ensure that there are no
in-flight messages among the gem5 processes).
This is another step in the process of removing global variables
from Ruby to enable multiple RubySystem instances in a single simulation.
The list of abstract controllers is per-RubySystem and should be
represented that way, rather than as a global.
Since this is the last remaining Ruby global variable, the
src/mem/ruby/Common/Global.* files are also removed.
This is another step in the process of removing global variables
from Ruby to enable multiple RubySystem instances in a single simulation.
With possibly multiple RubySystem objects, we can no longer use a global
variable to find "the" RubySystem object. Instead, each Ruby component
has to carry a pointer to the RubySystem object to which it belongs.
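In sketch form, the pattern looks like this (the class name here is just an
illustration):

class RubyComponentExample
{
  public:
    RubyComponentExample(RubySystem *rs) : m_ruby_system(rs) {}

  protected:
    // The RubySystem instance this component belongs to, passed in at
    // construction instead of being looked up through a global.
    RubySystem *m_ruby_system;
};
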
This patch begins the process of removing global variables from the Ruby
source with the goal of eventually allowing users to create multiple Ruby
instances in a single simulation. Currently, users cannot do so because
several global variables and static members are referenced by the RubySystem
object in a way that assumes that there will only ever be a single RubySystem.
These need to be replaced with per-RubySystem equivalents.
This specific patch replaces the global variable g_ruby_start, which is used
to calculate throughput statistics for Throttles in simple networks and links
in Garnet networks, with a RubySystem instance variable, m_start_cycle.
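Throughput can then be computed per RubySystem instance, roughly (a sketch;
the accessor and counter names are assumptions):

// Cycles elapsed since this RubySystem started, replacing the old
// global g_ruby_start.
Cycles elapsed = curCycle() - m_ruby_system->getStartCycle();
double utilization = double(m_units_transferred) / double(elapsed);
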