This patch fixes an assert condition that does not hold at all
times. There are valid situations, arising in dual-core
dual-workload runs, where the assert condition is false. However, the
function call following the assert needs to be made only when the
condition is true (a block cannot be invalidated in the tags structure
if it has not been allocated in the structure, and the tempBlock is never
allocated). Hence the 'assert' has been replaced with an 'if'.
A couple of users observed a segmentation fault when the simulator tries to
register the statistical variable m_IncompleteTimes. It seems that there
is some problem with the initialization of these variables when they are
allocated in the constructor.
Currently, the interrupt controller in x86 is connected to the io bus
directly. Therefore the packets between the io devices and the interrupt
controller do not go through ruby. This patch changes the ruby port so that
these packets arrive at the ruby port first, which then routes them to their
destination. Note that the patch does not make these packets go through the
ruby network. That would happen in a subsequent patch.
This patch simplifies the retry logic in the RubyPort, avoiding
redundant attributes, and enforcing more stringent checks on the
interactions with the normal ports. The patch also simplifies the
routing done by the RubyPort, using the port identifiers instead of a
heavy-weight sender state.
The patch also fixes a bug in the sending of responses from PIO
ports. Previously these responses bypassed the queue in the queued
port, and ignored the return value, potentially leading to response
packets being lost.
Committed by: Nilay Vaish <nilay@cs.wisc.edu>
Code in two of the functions was exactly the same. This patch moves
this code to a new function which is called from the two functions
mentioned initially.
At several places, there are functions that take a cycle value as input
and perform some computation. Along with each such function, another
function was being defined that simply added one more cycle to the input and
computed the same function. This patch removes this second copy of the
function. Places where these functions were being called have been updated
to use the original function with the argument being the current cycle + 1.
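For illustration (not the actual gem5 code), a standalone sketch of the
removed pattern using hypothetical names:

    using Cycle = unsigned long;

    Cycle readyTime(Cycle start) { return start + 10; }   // original function, kept
    // Cycle readyTimeNextCycle(Cycle start)               // removed duplicate that
    // { return readyTime(start + 1); }                    // only added one cycle

    Cycle caller(Cycle curCycle)
    {
        return readyTime(curCycle + 1);   // call sites updated like this
    }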
Two files had been incorrectly named with a .cache suffix.
--HG--
rename : src/mem/protocol/MESI_Three_Level-L0.cache => src/mem/protocol/MESI_Three_Level-L0cache.sm
rename : src/mem/protocol/MESI_Three_Level-L1.cache => src/mem/protocol/MESI_Three_Level-L1cache.sm
This patch fixes a bug in how physical memory used to be mapped and
unmapped. Previously we unmapped and re-mapped if restoring from a
checkpoint. However, we never checked that the new mapping was
actually the same, it was just magically working as the OS seems to
fairly reliably give us the same chunk back. This patch fixes this
issue by relying entirely on the mmap call in the constructor.
This patch adds a filter to the cache to drop snoop requests that are
not for a range covered by the cache. This fixes an issue observed
when multiple caches are placed in parallel, covering different
address ranges. Without this patch, all the caches will forward the
snoop upwards, when only one should do so.
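For illustration (simplified types, not the actual cache code), a standalone
sketch of the filtering idea:

    #include <cstdint>
    #include <vector>

    struct AddrRange { std::uint64_t start, end; };   // [start, end)

    // A cache forwards a snoop upwards only if the snooped address falls in
    // a range it is responsible for; otherwise the snoop is dropped.
    bool coversAddr(const std::vector<AddrRange> &ranges, std::uint64_t addr)
    {
        for (const auto &r : ranges)
            if (addr >= r.start && addr < r.end)
                return true;
        return false;
    }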
This patch adds DRAMSim2 as a memory controller by wrapping the
external library and creating a subclass of AbstractMemory that bridges
between the semantics of gem5 and the DRAMSim2 interface.
The DRAMSim2 wrapper extracts the clock period from the config
file. There is no way of extracting this information from DRAMSim2
itself, so we simply read the same config file and get it from there.
To properly model the response queue, the wrapper keeps track of how
many transactions are in the actual controller, and how many are
stacking up waiting to be sent back as responses (in the wrapper). The
latter requires us to move away from the queued port and manage the
packets ourselves. This is due to DRAMSim2 not having any flow control
on the response path.
DRAMSim2 assumes that the transactions it is given match the
burst size of the chosen memory. The wrapper checks to ensure the
cache line size of the system matches the burst size of DRAMSim2, as
there are currently no provisions to split the system requests. In
theory we could allow a cache line size smaller than the burst size,
but that would lead to inefficient use of the DRAM, so for now we
fatal in this case as well.
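For illustration, a sketch of the check with assumed variable names (the real
wrapper obtains the burst size from the DRAMSim2 config file):

    // Refuse to run if system requests cannot map one-to-one onto DRAM bursts.
    if (cacheLineSize != burstSize) {
        fatal("DRAMSim2 burst size %d does not match cache line size %d\n",
              burstSize, cacheLineSize);
    }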
Forces the prefetcher to mispredict twice in a row before resetting the
confidence of prefetching. This helps cases where a load PC strides by a
constant factor but may operate on different arrays at times, and avoids
the cost of retraining. Primarily helps with small iteration loops.
Committed by: Nilay Vaish <nilay@cs.wisc.edu>
For systems with a tightly coupled L2, a stride-based prefetcher may observe
access requests from both instruction and data L1 caches. However, the PC
address of an instruction miss gives no relevant training information to the
stride-based prefetcher (there is no stride to train). In these cases, it is
better if the L2 stride prefetcher simply reverts to a simple N-block-ahead
prefetcher. This patch enables this option.
Committed by: Nilay Vaish <nilay@cs.wisc.edu>
This patch extends the classic prefetcher to work on non-block aligned
addresses. Because the existing prefetchers in gem5 mask off the lower
address bits of cache accesses, many predictable strides fail to be
detected. For example, if a load were to stride by 48 bytes, with 64 byte
cachelines, the current stride based prefetcher would see an access pattern
of 0, 64, 64, 128, 192... and thus fail to detect a constant stride pattern.
This patch fixes this by training the prefetcher on the access address and
not masking off the lower address bits.
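For illustration (plain C++, not the prefetcher code), the effect of masking
for a 48-byte stride starting at offset 16:

    #include <cstdio>

    int main()
    {
        const int blockSize = 64;
        for (int i = 0, addr = 16; i < 5; i++, addr += 48) {
            // raw addresses:  16, 64, 112, 160, 208 (constant stride of 48)
            // block-masked:    0, 64,  64, 128, 192 (no constant stride)
            std::printf("raw %3d  masked %3d\n", addr, addr & ~(blockSize - 1));
        }
        return 0;
    }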
It also adds the following configuration options:
1) Training/prefetching only on cache misses,
2) Training/prefetching only on data accesses,
3) Optionally tagging prefetches with a PC address.
#3 allows prefetchers to train off of prefetch requests in systems with
multiple cache levels and PC-based prefetchers present at multiple levels.
It also effectively allows a pipelining of prefetch requests (like in POWER4)
across multiple levels of cache hierarchy.
Improves performance on my gem5 configuration by 4.3% for SPECINT and 4.7% for SPECFP (geomean).
The patch
(1) removes the redundant writeback argument from findVictim()
(2) fixes the description of the access() function
Committed by: Nilay Vaish <nilay@cs.wisc.edu>
Adds very basic statistics on the number of tag and data accesses within the
cache, which is important for power modelling. For the tags, simply count
the associativity of the cache each time. For the data, this depends on
whether tags and data are accessed sequentially, which is given by a new
parameter. In the parallel case, all data blocks are accessed each time, but
with sequential accesses, a single data block is accessed only on a hit.
This patch enables tracking of cache occupancy per thread along with
ages (in buckets) per cache block. Cache occupancy stats are
recalculated on each stat dump.
Add some values and methods to the request object to track the translation
and access latency for a request and which level of the cache hierarchy responded
to the request.
The first two levels (L0, L1) are private to the core, the third level (L2) is
possibly shared. The protocol supports clustered designs. For example, one
can have two sets of two cores. Each core has an L0 and L1 cache. There are
two L2 controllers where each set accesses only one of the L2 controllers.
A cluster here means a set of controllers that can be accessed only by a
certain set of cores. For example, consider a two level hierarchy. Assume
there are 4 L1 controllers (private) and 2 L2 controllers. We can have two
different hierarchies here:
a. the address space is partitioned between the two L2 controllers. Each L1
controller accesses both the L2 controllers. In this case, each L1 controller
is a cluster in itself.
b. both the L2 controllers can cache any address. An L1 controller has access
to only one of the L2 controllers. In this case, each L2 controller
along with the L1 controllers that access it, form a cluster.
This patch allows for each controller to have a cluster ID, which is 0 by
default. By setting the cluster ID properly, one can instantiate hierarchies
with clusters. Note that the coherence protocol might have to be changed as
well.
This patch fixes a couple of bugs in the L2 controller of the mesi cmp
directory protocol.
1. The state MT_I was transitioning to NP on receiving a clean writeback
from the L1 controller. This patch makes it inform the directory controller
about the writeback.
2. The L2 controller was sending the dirty bit to the L1 controller and the
L2 controller used writeback from the L1 controller to update the dirty bit
unconditionally. Now, the L1 controller always assumes that the incoming
data is clean. The L2 controller updates the dirty bit only when the L1
controller writes to the block.
3. Certain unused functions and events are being removed.
This patch replaces max_in_port_rank with the number of inports. The use of
max_in_port_rank was causing spurious re-builds and incorrect initialization
of variables in ruby related regression tests. This was due to the variable
value being used across threads while compiling when it was not meant to be.
Since the number of inports is a state-machine-specific value, this problem
should be solved.
The directory controller should not have the sharer field since there is
only one level 2 cache. Anyway the field was not in use. The owner field
was being used to track the l2 cache version (in case of distributed l2) that
has the cache block under consideration. The information is not required
since the version of the level 2 cache can be obtained from a subset of the
address bits.
This patch fixes a number of stats accounting issues in the DRAM
controller. Most importantly, it separates the system interface and
DRAM interface so that it is clearer what the actual DRAM bandwidth
(and consequently utilisation) is.
This patch unifies the request selection across read and write queues
for FR-FCFS scheduling policy. It also fixes the request selection
code to prioritize the row hits present in the request queues over the
selection based on earliest bank availability.
This patch adds a basic adaptive version of the open-page policy that
guides the decision to keep open or close by looking at the contents
of the controller queues. If no row hits are found, and bank conflicts
are present, then the row is closed by means of an auto
precharge. This is a well-known technique that should improve
performance in most use-cases.
This patch removes the untimed while loop in the write scheduling
mechanism and now schedules commands taking into account the minimum
timing constraint. It also introduces an optimization to track the write
queue size and switch from writes to reads if the number of write
requests falls below the write low threshold.
This patch adds the tRRD parameter to the DRAM controller. With the
recent addition of the actAllowedAt member for each bank, this
addition is trivial.
This patch changes the tXAW constraint so that it is enforced per rank
rather than globally for all ranks in the channel. It also avoids
using the bank freeAt to enforce the activation limit, as doing so
also precludes performing any column or row command to the
DRAM. Instead the patch introduces a new variable actAllowedAt for the
banks and uses this to track when a potential activation can occur.
This patch fixes the controller when a write threshold of 100% is
used. Earlier, with a 100% write threshold, no data was written to memory
as writes never got triggered, since this corner case was not
considered.
This patch changes the FCFS bit of FR-FCFS such that requests that
target the earliest available bank are picked first (as suggested in
the original work on FR-FCFS by Rixner et al). To accommodate this we
add functionality to identify a bank through a one-dimensional
identifier (bank id). The member names of the DRAMPacket are also
updated to match the style guide.
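For illustration, a standalone sketch of such a one-dimensional identifier
(assumed bank layout, not necessarily the exact expression used):

    #include <cstdint>

    // Banks within a rank are contiguous; ranks are laid out consecutively.
    std::uint32_t bankId(std::uint32_t rank, std::uint32_t bank,
                         std::uint32_t banksPerRank)
    {
        return rank * banksPerRank + bank;
    }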
This patch changes the time the controller is woken up to take the
next scheduling decisions. tRAS is now handled in estimateLatency and
doDRAMAccess and we do not need to worry about it at scheduling
time. The earliest we need to wake up is to do a pre-charge, row
access and column access before the bus becomes free for use.
This patch adds an explicit tRAS parameter to the DRAM controller
model. Previously tRAS was, rather conservatively, assumed to be tRCD
+ tCL + tRP. The default values for tRAS are chosen to match the
previous behaviour and will be updated later.
This patch adds missing initializations of the SenderMachine field of
out_msg's when they are created in the L2 cache controller of the
MOESI_CMP_directory coherence protocol. When an out_msg is created and this
field is left uninitialized, it is set to the default value MachineType_NUM.
This causes a panic in the MachineType_to_string function when gem5 is
executed with the Ruby debug flag on and it tries to print the message.
Committed by: Nilay Vaish <nilay@cs.wisc.edu>
This patch fixes a problem where, in Garnet, the enqueue time in the
VCallocator and the SWallocator, which is of type Cycles, was being stored
inside a variable of int type.
This led to a known problem restoring checkpoints with garnet & the fixed
pipeline enabled. That value was really big and didn't fit in the variable,
overflowing it; therefore some conditions in the VC allocation stage & the
SW allocation stage were not met and the packets didn't advance through the
network, leading to a deadlock panic right after the checkpoint was restored.
Committed by: Nilay Vaish <nilay@cs.wisc.edu>
The CoherentBus eventually got virtual methods for its interface. The
"virtuality" of the CoherentBus, however, already comes from the virtual
interface of the bus' ports. There is no need to add another layer of virtual
functions here.
Get rid of non-deterministic "stats" in ruby.stats output
such as time & date of run, elapsed & CPU time used,
and memory usage. These values cause spurious
miscomparisons when looking at output diffs (though
they don't affect regressions, since the regressions
pass/fail status currently ignores ruby.stats entirely).
Most of this information is already captured in other
places (time & date in stdout, elapsed time & mem usage
in stats.txt), where the regression script is smart
enough to filter it out. It seems easier to get rid of
the redundant output rather than teaching the
regression tester to ignore the same information in
two different places.
ASI_BITS in the Request object were originally used to store a memory
request's ASI on SPARC. This is not the case any more since other ISAs
use the ASI bits to store architecture-dependent information. This
changeset renames the ASI_BITS to ARCH_BITS which better describes
their use. Additionally, the getAsi() accessor is renamed to
getArchFlags().
Using address bit 63 to identify generic IPRs caused problems on
SPARC, where IPRs are heavily used. This changeset redefines how
generic IPRs are identified. Instead of using bit 63, we now use a
separate flag (GENERIC_IPR) in the memory request.
This patch ensures that a dequeue event is not scheduled if the memory
controller is waiting for a retry already. Without this check it is
possible for the controller to attempt sending something whilst
already having one packet that is in retry, thus causing the bus to
have an assertion failure.
The Topology source sets up input and output buffers for each of the external
nodes of a topology by indexing on Ruby's generated controller unique IDs.
These unique IDs are found by adding the MachineType_base_number to the version
number of each controller (see any generated *_Controller.cc - init() calls
getToNetQueue and getFromNetQueue using m_version + base). However, the
Topology object used the cntrl_id - which is required to be unique across all
controllers - to index the controllers list as they are being connected to
their input and output buffers. If the cntrl_ids did not match the Ruby unique
ID, the throttles end up connected to incorrectly indexed nodes in the network,
resulting in packets traversing incorrect network paths. This patch fixes the
Topology indexing scheme by using the Ruby unique ID to match that of the
SimpleNetwork buffer vectors.
The previous changeset (9863:9483739f83ee) used STL vector containers to
dynamically allocate stats in the Ruby SimpleNetwork, Switch and Throttle. For
gcc versions before at least 4.6.3, this causes the standard vector allocator
to call Stats copy constructors (a no-no, since stats should be allocated in
the body of each SimObject instance). Since the size of these stats arrays is
known at compile time (NOTE: after code generation), this patch changes their
allocation to be static rather than using an STL vector.
This patch makes it possible to once again build gem5 without any
ISA. The main purpose is to enable work around the interconnect and
memory system without having to build any CPU models or device models.
The regress script is updated to include the NULL ISA target. Currently
no regressions make use of it, but all the testers could (and perhaps
should) transition to it.
--HG--
rename : build_opts/NOISA => build_opts/NULL
rename : src/arch/noisa/SConsopts => src/arch/null/SConsopts
rename : src/arch/noisa/cpu_dummy.hh => src/arch/null/cpu_dummy.hh
rename : src/cpu/intr_control.cc => src/cpu/intr_control_noisa.cc
This patch updates the stats to reflect the: 1) addition of the
internal queue in SimpleMemory, 2) moving of the memory class outside
FSConfig, 3) fixing up of the 2D vector printing format, 4) specifying
burst size and interface width for the DRAM instead of relying on
cache-line size, 5) performing merging in the DRAM controller write
buffer, and 6) fixing how idle cycles are counted in the atomic and
timing CPU models.
The main reason for bundling them up is to minimise the changeset
size.
This patch adds support for specifying multi-channel memory
configurations on the command line, e.g. 'se/fs.py
--mem-type=ddr3_1600_x64 --mem-channels=4'. To enable this, it
enhances the functionality of MemConfig and moves the existing
makeMultiChannel class method from SimpleDRAM to the support scripts.
The se/fs.py example scripts are updated to make use of the new
feature.
This patch changes the default parameter value of conf_table_reported
to match the common case. It also simplifies the regression and config
scripts to reflect this change.
This patch changes the data structure used for the DRAM read, write
and response queues from an STL list to a deque. This optimisation is
based on the observation that the size is small (and fixed), and that
the structures are frequently iterated over in a linear fashion.
This patch implements basic write merging in the DRAM to avoid
redundant bursts. When a new access is added to the queue it is
compared against the existing entries, and if it is either
intersecting or immediately succeeding/preceding an existing item it
is merged.
There is currently no attempt made at avoiding iterating over the
existing items in determining whether merging is possible or not.
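For illustration (simplified, not the controller code), the merge test for a
new write against one queued entry:

    #include <cstdint>

    bool canMerge(std::uint64_t newAddr, unsigned newSize,
                  std::uint64_t oldAddr, unsigned oldSize)
    {
        // intersecting, or immediately succeeding/preceding the existing entry
        return newAddr <= oldAddr + oldSize && oldAddr <= newAddr + newSize;
    }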
This patch gets rid of the bytesPerCacheLine parameter and makes the DRAM
configuration separate from the cache line size. Instead of
bytesPerCacheLine, we define a parameter for the DRAM called
burst_length. The burst_length parameter gives the length of a DRAM
device burst in bits. Also, lines_per_rowbuffer is replaced with
device_rowbuffer_size to improve code portability.
This patch adds a burst length in beats for each memory type, an
interface width for each memory type, and the memory controller model
is extended to reason about "system" packets vs "dram" packets and
assemble the responses properly. It means that system packets larger
than a full burst are split into multiple dram packets.
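For illustration (assumed arithmetic), the number of dram packets generated
for one system packet:

    // Ceiling division: e.g. a 64-byte system packet on a 32-byte burst
    // interface becomes two dram packets.
    unsigned dramPacketsFor(unsigned pktSize, unsigned burstSize)
    {
        return (pktSize + burstSize - 1) / burstSize;
    }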
This patch adds a packet queue in SimpleMemory to avoid using the
packet queue in the port (and thus have no involvement in the flow
control). The port queue was bound to 100 packets, and as the
SimpleMemory is modelling both a controller and an actual RAM, it
potentially has a large number of packets in flight. There is
currently no limit on the number of packets in the memory controller,
but this could easily be added in a follow-on patch.
As a result of the added internal storage, the functional access and
draining is updated. Some minor cleaning up and renaming has also been
done.
The memtest regression changes as a result of this patch and the stats
will be updated.
Some of the code in StateMachine.py file is added to all the controllers and
is independent of the controller definition. This code is being moved to the
AbstractController class which is the parent class of all controllers.
This patch removes the notion of a peer block size and instead sets
the cache line size on the system level.
Previously the size was set per cache, and communicated through the
interconnect. There were plenty of checks to ensure that everyone had the
same size specified, and these checks are now removed. Another benefit
that is not yet harnessed is that the cache line size is now known at
construction time, rather than after the port binding. Hence, the
block size can be locally stored and does not have to be queried every
time it is used.
A follow-on patch updates the configuration scripts accordingly.
This code seems not to be of any use now. There is no path in the simulator
that allows for reconfiguring the network. A better approach would be to
take a checkpoint and start the simulation from the checkpoint with the new
configuration.
This patch reorganizes the cache tags to allow more flexibility to
implement new replacement policies. The base tags class is now a
clocked object so that derived classes can use a clock if they need
one. Also, deriving from SimObject allows specialized Tag
classes to be swapped in/out in .py files.
The cache set is now templatized to allow it to contain customized
cache blocks with additional information. This involved moving code to
the .hh file and removing cacheset.cc.
The statistics belonging to the cache tags are now including ".tags"
in their name. Hence, the stats need an update to reflect the change
in naming.
This patch adds the notion of source- and derived-clock domains to the
ClockedObjects. As such, all clock information is moved to the clock
domain, and the ClockedObjects are grouped into domains.
The clock domains are either source domains, with a specific clock
period, or derived domains that have a parent domain and a divider
(potentially chained). For a piece of logic that runs at a derived clock
(a ratio of the clock its parent is running at) the necessary derived
clock domain is created from its corresponding parent clock
domain. For now, the derived clock domain only supports a divider,
thus ensuring a lower speed compared to its parent. Multiplier
functionality implies a PLL logic that has not been modelled yet
(create a separate clock instead).
The clock domains should be used as a mechanism to provide a
controllable clock source that affects clock for every clocked object
lying beneath it. The clock of the domain can (in a future patch) be
controlled by a handler responsible for dynamic frequency scaling of
the respective clock domains.
All the config scripts have been retro-fitted with clock domains. For
the System a default SrcClockDomain is created. For CPUs that run at a
different speed than the system, there is a separate clock domain
created. This domain incorporates the CPU and the associated
caches. As before, Ruby runs under its own clock domain.
The clock periods of all domains are pre-computed, such that no virtual
functions or multiplications are needed when calling
clockPeriod. Instead, the clock period is pre-computed when any
changes occur. For this to be possible, each clock domain tracks its
children.
This patch removes the explicit setting of the clock period for
certain instances of CoherentBus, NonCoherentBus and IOCache where the
specified clock is the same as the default value of the system clock. As
all the values used are the defaults, there are no performance
changes. There are similar cases where the toL2Bus is set to use the
parent CPU clock which is already the default behaviour.
The main motivation for these simplifications is to ease the
introduction of clock domains.
This patch does a bit of tidying up in the bridge code, adding const
where appropriate and also removing redundant checks and adding a few
new ones.
There are no changes to the behaviour of any regressions.
This patch fixes the CommMonitor local variable names, and also
introduces a variable to capture if it expects to see a response. The
latter check considers both needsResponse and memInhibitAsserted.
This patch fixes an outstanding issue in the cache timing calculations
where an atomic access returned a time in Cycles, but the port
forwarded it on as if it was in Ticks.
A separate patch will update the regression stats.
This patch changes the upwards snoop packet to avoid allocating and
later deleting it. As the code executes in 0 time and the lifetime of
the packet does not extend beyond the block there is no reason to heap
allocate it.
This patch adds separate actions for requests that missed in the local cache
and messages were sent out to get the requested line. These separate actions
are required for differentiating between the hit and miss latencies in the
statistics collected.
The patch started off with removing the global variables from the profiler for
profiling the miss latency of requests made to the cache. The corresponding
histograms have been moved to the Sequencer. These are combined together when
the histograms are printed. Separate histograms are now maintained for
tracking latency of all requests together, of hits only and of misses only.
A particular set of histograms used to use the type GenericMachineType defined
in one of the protocol files. This patch removes this type. Now, everything
that relied on this type uses MachineType instead. To do this, SLICC has
been changed so that multiple machine types can be declared by a controller
in its preamble.
This patch removes the following three files: RubySlicc_Profiler.sm,
RubySlicc_Profiler_interface.cc and RubySlicc_Profiler_interface.hh.
Only one function is prototyped in the file RubySlicc_Profiler.sm. The rest of
the code appearing in any of these files is not in use. Therefore, these files
are being removed.
That one single function, profileMsgDelay(), is being moved to the protocol
files where it is in use. If we need any of these deleted functions, I think
the right way to make them visible is to have the AbstractController class in
a .sm and let the controller state machine inherit from this class. The
AbstractController class can then have the prototypes of these profiling
functions in its definition.
The m_size variable attempted to track m_prio_heap.size(), but it did so
incorrectly due to the functions reanalyzeMessages and reanalyzeAllMessages().
Since this variable is intended to track m_prio_heap.size(), we can simply
replace instances where m_size is referenced with m_prio_heap.size(), which
has the added bonus of removing the need for m_size.
Note: This patch also removes an extraneous DPRINTF format string designator
from reanalyzeAllMessages()
Committed by: Nilay Vaish <nilay@cs.wisc.edu>
Previously, .sm files were allowed to use the same name for a type and a
variable. This is unnecessarily confusing and has some bad side effects, like
not being able to later declare variables of that type in the same scope.
This causes the compiler to complain and die on things like Address Address.
Committed by: Nilay Vaish <nilay@cs.wisc.edu>
Change all occurrences of Address as a variable name to instead use Addr.
Address is an allowed name in slicc even when Address is also being used as a
type, leading to declarations of "Address Address". While this works, it
prevents adding another field of type Address because the compiler then thinks
Address is a variable name, not type.
Committed by: Nilay Vaish <nilay@cs.wisc.edu>
Reuse the address finalization code in the TLB instead of replicating
it when handling MMIO. This patch also adds support for injecting
memory mapped IPR requests into the memory system.
This patch removes the per-processor cycle count, the histogram for filter
stats, the histogram for multicasts, the histogram for prefetch wait, and
some function prototypes that do not have definitions.
The Profiler class does not need an event for dumping statistics
periodically. This is because there is a method for dumping statistics
for all the sim objects periodically. Since Ruby is a sim object, its
statistics are also included.
This moves event and transition count statistics for cache controllers to
gem5's statistics. It does the same for the statistics associated with the
memory controller in ruby.
All the cache/directory/dma controllers individually collect the event and
transition counts. A callback function, collateStats(), has been added that
is invoked on the controller version 0 of each controller class. This
function adds all the individual controller statistics to vector
variables. All the code for registering the statistical variables and
collating them is generated by SLICC. The patch removes the files
*_Profiler.{cc,hh} and *_ProfileDumper.{cc,hh} which were earlier used for
collecting and dumping statistics respectively.
This patch changes the class names of the various DRAM configurations
to better reflect what memory they are based on. The speed and
interface width is now part of the name, and also the alias that is
used to select them on the command line.
Some minor changes are done to the actual parameters, to better
reflect the named configurations. As a result of these changes the
regressions change slightly and the stats will be bumped in a separate
patch.
This patch adds a histogram to track how many bytes are accessed in an
open row before it is closed. This metric is useful in characterising
a workload and the efficiency of the DRAM scheduler. For example, a
DDR3-1600 device requires 44 cycles (tRC) before it can activate
another row in the same bank. For an x32 interface (8 bytes per cycle)
that means 8 x 44 = 352 bytes must be transferred to hide the
preparation time.
This patch adds a frontend and backend static latency to the DRAM
controller by delaying the responses. Two parameters expressing the
frontend and backend contributions in absolute time are added to the
controller, and the appropriate latency is added to the responses when
adding them to the (infinite) queued port for sending.
For writes and reads that hit in the write buffer, only the frontend
latency is added. For reads that are serviced by the DRAM, the static
latency is the sum of the pipeline latencies of the entire frontend,
backend and PHY. The default values are chosen based on having roughly
10 pipeline stages in total at 500 MHz.
In the future, it would be sensible to make the controller use its
clock and convert these latencies (and a few of the DRAM timings) to
cycles.
This patch does some minor tidying up of the MSHR and MSHRQueue. The
clean up started as part of some ad-hoc tracing and debugging, but
seems worthwhile enough to go in as a separate patch.
The highlights of the changes are reduced scoping (making members private)
where possible, avoiding redundant new/delete, and constructor
initialisation to please static code analyzers.
This patch introduces a mirrored internal snoop port to facilitate
easy addition of flow control for the snoop responses that are turned
into normal responses on their return. To perform this, the slave
ports of the coherent bus are wrapped in internal master ports that
are passed as the source ports to the response layer in question.
As a result of this patch, there is more contention for the response
resources, and as such system performance will decrease slightly.
A consequence of the mirrored internal port is that the port the bus
tells to retry (the internal one) and the port actually retrying (the
mirrored) one are not the same. Thus, the existing check in tryTiming
is no longer correct. In fact, the test is redundant as the layer is
only in the retry state while calling sendRetry on the waiting port,
and if the latter does not immediately call the bus then the retry
state is left. Consequently the check is removed.
This patch makes the buses multi layered, and effectively creates a
crossbar structure with distributed contention ports at the
destination ports. Before this patch, a bus could have a single
request, response and snoop response in flight at any time, and with
these changes there can be as many requests as connected slaves (bus
master ports), and as many responses as connected masters (bus slave
ports).
Together with address interleaving, this patch enables us to create
high-throughput memory interconnects, e.g. 50+ GByte/s.
This patch makes the flow control and state updates of the coherent
bus more clear by separating the two cases, i.e. forward as a snoop
response, or turn it into a normal response.
With this change it is also more clear what resources are being
occupied, and that we effectively bypass the busy check for the second
case. As a result of the change in resource usage some stats change.
This patch does some minor housekeeping on the bus code, removing
redundant code, and moving the extraction of the destination id to the
top of the functions using it.
This patch adds a basic set of stats which are hard to impossible to
implement using only communication monitors, and are needed for
insight such as bus utilization, transactions through the bus etc.
Stats added include throughput and transaction distribution, and also
a two-dimensional vector capturing how many packets and how much data
is exchanged between the masters and slaves connected to the bus.
This patch changes the set used to track outstanding requests to an
unordered set (part of C++11 STL). There is no need to maintain the
order, and hopefully there might even be a small performance benefit.
This patch adds a typical (leaning towards fast) LPDDR3 configuration
based on publicly available data. As expected, it looks very similar
to the LPDDR2-S4 configuration, only with a slightly lower burst time.
This patch adapts the existing LPDDR2 configuration to make use of the
multi-channel functionality. Thus, to get a x64 interface two
controllers should be instantiated using the makeMultiChannel method.
The page size and ranks are also adapted to better suit a typical
LPDDR2 part.
This patch removes the explicit memset as it is redundant and causes
the simulator to touch the entire space, forcing the host system to
allocate the pages.
Anonymous pages are mapped on the first access, and the page-fault
handler is responsible for zeroing them. Thus, the pages are still
zeroed, but we avoid touching the entire allocated space which enables
us to use much larger memory sizes as long as not all the memory is
actually used.
This patch changes the way cache statistics are collected in ruby.
As of now, there is a separate entity called CacheProfiler which holds
statistical variables for caches. The CacheMemory class defines different
functions for accessing the CacheProfiler. These functions are then invoked
in the .sm files. I find this approach opaque and prone to error. Secondly,
we probably should not be paying the cost of a function call for recording
statistics.
Instead, this patch allows for accessing statistical variables in the
.sm files. The collection would become transparent. Secondly, it would happen
in place, so no function calls. The patch also removes the CacheProfiler class.
--HG--
rename : src/mem/slicc/ast/InfixOperatorExprAST.py => src/mem/slicc/ast/OperatorExprAST.py
The existing implementation can read uninitialized data or stale information
from the cached PageTable entries.
1) Add a valid bit for the cache entries. Simply using zero for the virtual
address to signify invalid entries is not sufficient. Speculative, wrong-path
accesses frequently access page zero. The current implementation would return
a uninitialized TLB entry when address zero was accessed and the PageTable
cache entry was invalid.
2) When unmapping/mapping/remapping a page, invalidate the corresponding
PageTable cache entry if one already exists.
Due to recent changes to clocking system in Ruby and the way Ruby restores
state from a checkpoint, garnet was failing to run from a checkpointed state.
The problem is that Ruby resets the time to zero while warming up the caches.
If any component records a local copy of the time (i.e., calls curCycle())
before the simulation has started, then that component will not operate until
that time is reached. In the context of this particular patch, the Garnet
Network class calls curCycle() at multiple places. Any non-operational
component can block requests in the memory system, which the system
interprets as a deadlock. This patch makes changes so that Garnet can
successfully run from checkpointed state.
It adds a globally visible time at which the actual execution started. This
time is initialized in the RubySystem::startup() function. This variable is only
meant for components within Ruby. This replaces the private variable that
was maintained within Garnet since it is not possible to figure out the
correct time when the value of this variable can be set.
The patch also does away with all cases where curCycle() is called within
some Ruby component before the system has actually started executing. This
is required due to the quirky manner in which ruby restores from a checkpoint.
This patch adds an address mapping scheme where the channel
interleaving takes place on a cache line granularity. It is similar to
the existing RaBaChCo that interleaves on a DRAM page, but should give
higher performance when there is less locality in the address
stream.
This patch changes the slightly ambiguous names used for the address
mapping scheme to be more descriptive, and actually spell out what
they do. With this patch we also open up for adding more flavours of
open- and close-type mappings, i.e. interleaving across channels with
the open map.
This patch adds a WideIO 200 MHz configuration that can be used as a
baseline to compare with DDRx and LPDDRx. Note that it is a single
channel and that it should be replicated 4 times. It is based on
publicly available information and attempts to capture an envisioned
8 Gbit single-die part (i.e. without TSVs).
This patch provides useful printouts throughout the memory system. This
includes pretty-printed cache tags and function call messages
(call-stack like).
This patch changes the SimpleTimingPort and RubyPort to panic on
inhibited requests as this should never happen in either of the
cases. The SimpleTimingPort is only used for the I/O devices' PIO ports
and the DMA devices' config ports and should thus never see an inhibited
request. Similarly, the SimpleTimingPort is also used for the
MessagePort in x86, and there should likewise be no cases where that
port sees an inhibited request.
Previously, nextCycle() could return the *current* cycle if the current tick was
already aligned with the clock edge. This behavior is not only confusing (not
quite what the function name implies), but also caused problems in the
drainResume() function. When exiting/re-entering the sim loop (e.g., to take
checkpoints), the CPUs will drain and resume. Due to the previous behavior of
nextCycle(), the CPU tick events were being rescheduled in the same ticks that
were already processed before draining. This caused divergence from runs that
did not exit/re-enter the sim loop. (Initially a cycle difference, but a
significant impact later on.)
This patch separates out the two behaviors (nextCycle() and clockEdge()),
uses nextCycle() in drainResume, and uses clockEdge() everywhere else.
Nothing (other than name) should change except for the drainResume timing.
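For illustration (simplified, not the gem5 implementation), the distinction
between the two:

    using Tick = unsigned long long;

    // Rounds up to the nearest edge; returns the *current* tick when already
    // aligned with a clock edge.
    Tick clockEdge(Tick now, Tick period)
    {
        return ((now + period - 1) / period) * period;
    }

    // Always returns a strictly later edge, as the name implies.
    Tick nextCycle(Tick now, Tick period)
    {
        return (now / period + 1) * period;
    }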
When using the o3 or inorder CPUs with many Ruby protocols, the caches may
need to forward invalidations to the CPUs. The RubyPort was instantiating a
packet to be sent to the CPUs to signal the eviction, but the packets were
not being freed by the CPUs. Consistent with the classic memory model, stack
allocate the packet and heap allocate the request, so that on
ruby_eviction_callback() completion the packet destructor is called and
deletes the request (*Note: stack allocating the request causes double
deletion, since it will be deleted in the packet destructor). This results in
the least memory allocations without memory errors.
When warming up caches in Ruby, the CacheRecorder sends fetch requests into
Ruby Sequencers with packet types that require responses. Since responses are
never generated for these CacheRecorder requests, the requests are not deleted
in the packet destructor called from the Ruby hit callback. Free the request.
This allows you to have (e.g.) an L2 cache that is not named "L2Cache"
but is still a GenericMachineType_L2Cache. This is particularly
helpful if the protocol has multiple L2 controllers.
When Ruby stats are printed for events and transitions, they include stats
for all of the controllers of the same type, but they are not necessarily
printed in order of the controller ID "version", because of the way the
profilers were added to the profiler vector. This patch fixes the push order
problem so that the stats are printed in ascending order 0->(# controllers),
so statistics parsers may correctly assume the controller to which the stats
belong.
When connecting message buffers between Ruby controllers, it is
easy to mistakenly connect multiple controllers to the same message
buffer. This patch prints a more descriptive fatal message than the
previous assert statement in order to facilitate easier debugging.
The cache trace variables are array allocated uint8_t* in the RubySystem and
the Ruby CacheRecorder, but the code used delete to free the memory, resulting
in Valgrind memory errors. Change these deletes to delete [] to get rid of the
errors.
Fixes a latency calculation bug for accesses during a cache line fill.
Under a cache miss, before the line is filled, accesses to the cache are
associated with a MSHR and marked as targets. Once the line fill completes,
MSHR target packets pay an additional latency of
"responseLatency + busSerializationLatency". However, the "whenReady"
field of the cache line is only set to an additional delay of
"busSerializationLatency". This lacks the responseLatency component of
the fill. It is possible for accesses that occur on the cycle of
(or briefly after) the line fill to respond without properly paying the
responseLatency. This also creates the situation where two accesses to the
same address may be serviced in an order opposite of how they were received
by the cache. For stores to the same address, this means that although the
cache performs the stores in the order they were received, acknowledgements
may be sent in a different order.
Adding the responseLatency component to the whenReady field preserves the
penalty that should be paid and prevents these ordering issues.
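For illustration (assumed names, not the actual cache code), the corrected
fill timing:

    using Tick = unsigned long long;

    Tick lineWhenReady(Tick fillStart, Tick responseLatency, Tick serializationLatency)
    {
        // was: fillStart + serializationLatency (responseLatency not paid)
        return fillStart + responseLatency + serializationLatency;
    }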
Committed by: Nilay Vaish <nilay@cs.wisc.edu>
This patch solves the corner case scenario where the sendRetryEvent could be
scheduled twice, when an io device stresses the IOcache in the system. This
should not be possible in the cache system.
This patch splits the retryList into a list of ports that are waiting
for the bus itself to become available, and a map that tracks the
ports where forwarding failed due to a peer not accepting the
packet. Thus, when a retry reaches the bus, it can be sent to the
appropriate port that initiated that transaction.
As a consequence of this patch, only ports that are really ready to go
will get a retry, thus reducing the amount of redundant failed
attempts. This patch also makes it easier to reason about the order of
servicing requests as the ports waiting for the bus are now clearly
FIFO and much easier to change if desired.
This patch introduces a variable to keep track of the retrying port
instead of relying on it being the front of the retryList.
Besides the improvement in readability, this patch is a step towards
separating out the two cases where a port is waiting for the bus to be
free, and where the forwarding did not succeed and the bus is waiting
for a retry to pass on to the original initiator of the transaction.
The changes made are currently such that the regressions are not
affected. This is ensured by always prioritizing the currently
retrying port and putting it back at the front of the retry list.
This patch adds an optional flags field to the packet trace to encode
the request flags that contain information about whether the request
is (un)cacheable, instruction fetch, prefetch etc.
A recent set of patches added support for multiple clock domains to ruby.
I had made some errors while writing those patches. The sender was using
the receiver side clock while enqueuing a message in the buffer. Those
errors became visible while creating (or restoring from) checkpoints. The
errors also become visible when a multi eventq scenario occurs.
The message buffer node used to keep time in terms of Cycles. Since the
sender and the receiver can have different clock periods, storing node
time in cycles requires some conversion. Instead store the time directly
in Ticks.
A set of patches was recently committed to allow multiple clock domains
in ruby. In those patches, I had inadvertently made an incorrect use of
the clocks. Suppose object A needs to schedule an event on object B. It
was possible that A accesses B's clock to schedule the event. This is not
possible in an actual system. Hence, changes are being made to the Consumer class
so as to avoid such happenings. Note that in a multi eventq simulation,
this can possibly lead to an incorrect simulation.
There are two functions in the Consumer class that are used for scheduling
events. The first function takes in the relative delay over the current time
as the argument and adds the current time to it for scheduling the event.
The second function takes in the absolute time (in ticks) for scheduling the
event. The first function is now being moved to protected section of the
class so that only objects of the derived classes can use it. All other
objects will have to specify absolute time while scheduling an event
for some consumer.
The histogram for tracking outstanding counts per cycle is maintained
in the profiler. For a parallel implementation of the memory system, we
need this histogram to be maintained locally. Hence it will now be
kept in the sequencer itself. The resulting histograms will be merged
when the stats are printed.
These functions are currently implemented in one of the files related to Slicc.
Since these are purely C++ functions, they are better suited to be in the base
class.
This patch modifies ruby so that two controllers can be connected to each
other with only message buffers in between. Before this patch, all the
controllers had to be connected to the network for them to communicate
with each other. With this patch, one can have protocols where a controller
is not connected to the network, but communicates with another controller
through a message buffer.
The Topology class in Ruby does not need to inherit from SimObject class.
This patch turns it into a regular class. The topology object is now created
in the constructor of the Network class. All the parameters for the topology
class have been moved to the network class.
The functional write code was assuming that all writes are block sized,
which may not be true for Ruby Requests. This bug can lead to a buffer
overflow.
Committed by: Nilay Vaish <nilay@cs.wisc.edu>
This check covers a case where a retry is called from the SimpleDRAM
causing a new request to appear before the DRAM itself schedules a
nextReqEvent. By adding this check, the event is not scheduled twice.
This patch adds a class method that allows easy creation of
channel-interleaved multi-channel DRAM configurations. It is enabled
by a class method to allow customisation of the class independent of
the channel configuration. For example, the user can create a MyDDR
subclass of e.g. SimpleDDR3, and then create a four-channel
configuration of the subclass by calling MyDDR.makeMultiChannel(4,
mem_start, mem_size).
This patch fixes a number of small cosmetic issues in the SimpleDRAM
module. The most important change is to move the accounting of
received packets to after the check is made if the packet should be
retried or not. Thus, packets are only counted if they are actually
accepted.
This patch adds support for multi-channel instances of the DRAM
controller model by stripping away the channel bits in the address
decoding. The patch relies on the availability of address
interleaving and, at this time, it is up to the user to configure the
interleaving appropriately. At the moment it is assumed that the
channel interleaving bits are immediately following the column bits
(smallest sensible interleaving). Convenience methods for building
multi-channel configurations will be added later.
This patch adds merging of interleaved ranges before creating the
backing stores. The backing stores are always a contiguous chunk of the
address space, and with this patch it is possible to have interleaved
memories in the system.
This patch adds basic merging of address ranges to the bus, such that
interleaved ranges are merged together before being passed on by the
bus. As such, the bus aggregates the address ranges of the connected
slave ports and then passes on the merged ranges through its master
ports. The bus thus hides the complexity of the interleaved ranges and
only exposes contiguous ranges to the surrounding system.
As part of this patch, the bus ranges are also cached for any future
queries.
The MESI CMP directory coherence protocol, while transitioning from SM to IM,
did not invalidate the lock that it might have taken on a cache line. This
patch adds an action for doing so.
The problem was found by Dibakar, but I was not happy with his proposed
solution. So I implemented a different solution.
Committed by: Nilay Vaish <nilay@cs.wisc.edu>
This patch fixes a newly introduced bug where the sender state was
popped before checking that it should be. Amazingly all regressions
pass, but Linux fails to boot on the detailed CPU with caches enabled.
This patch fixes the warnings that clang3.2svn emit due to the "-Wall"
flag. There is one case of an uninitialised value in the ARM neon ISA
description, and then a whole range of unused private fields that are
pruned.
This patch addresses the most important name shadowing warnings (as
produced when using gcc/clang with -Wshadow). There are many
locations where constructor parameters and function parameters shadow
local variables, but these are left unchanged.
This patch adds a check to ensure that the delay incurred by
the bus is not simply disregarded, but accounted for by someone. At
this point, all the modules do is to zero it out, and no additional
time is spent. This highlights where the bus timing is simply dropped
instead of being paid for.
As a follow up, the locations identified in this patch should add this
additional time to the packets in one way or another. For now it
simply acts as a sanity check and highlights where the delay is simply
ignored.
Since no time is added, all regressions remain the same.
This patch changes the names of the cache accessor functions to be in
line with those used by the ports. This is done to avoid confusion and
get closer to a one-to-one correspondence between the interface of the
memory object (the cache in this case) and the port itself.
The member function timingAccess has been split into a snoop/non-snoop
part to avoid branching on the isResponse() of the packet.
This patch changes the bus-related time accounting done in the packet
to be relative. Besides making it easier to align the cache timing to
cache clock cycles, it also makes it possible to connect a Last-Level
Cache (LLC) directly to a memory controller without a bus in between.
The bus is unique in that it does not ever make the packets wait to
reflect the time spent forwarding them. Instead, the cache is
currently responsible for making the packets wait. Thus, the bus
annotates the packets with the time needed for the first word to
appear, and also the last word. The cache then delays the packets in
its queues before passing them on. It is worth noting that every
object attached to a bus (devices, memories, bridges, etc) should be
doing this if we opt for keeping this way of accounting for the bus
timing.
This patch removes the time field from the packet as it was only used
by the prefetcher. Similar to the packet queue, the prefetcher now
wraps the packet in a deferred packet, which also has a tick
representing the absolute time when the packet should be sent.
This patch makes the clock member private to the ClockedObject and
forces all children to access it using clockPeriod(). This makes it
impossible to inadvertently change the clock, and also makes it easier
to transition to a situation where the clock is derived from e.g. a
clock domain, or through a multiplier.
This patch fixes a potential deadlock in the caches. This deadlock
could occur when more than one cache is used in a system, and
pkt->senderState is modified in between the two caches. This happened
as the caches relied on the senderState remaining unchanged, and used
it for instantaneous upstream communication with other caches.
This issue has been addressed by iterating over the linked list of
senderStates until we are either able to cast to a MSHR* or
senderState is NULL. If the cast is successful, we know that the
packet has previously passed through another cache, and therefore
update the downstreamPending flag accordingly. Otherwise, we do
nothing.
This patch adds a predecessor field to the SenderState base class to
make the process of linking them up more uniform, and enable a
traversal of the stack without knowing the specific type of the
subclasses.
There are a number of simplifications done as part of changing the
SenderState, particularly in the RubyTest.
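For illustration (simplified types, not the actual gem5 classes), the
traversal using the predecessor field described above:

    struct SenderState {
        SenderState *predecessor = nullptr;
        virtual ~SenderState() {}
    };

    struct MSHR : SenderState { bool downstreamPending = false; };

    // Walk the chain until an MSHR is found (the packet has already passed
    // through another cache) or the chain is exhausted.
    MSHR *findUpstreamMSHR(SenderState *state)
    {
        while (state) {
            if (auto *mshr = dynamic_cast<MSHR *>(state))
                return mshr;
            state = state->predecessor;
        }
        return nullptr;
    }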
This patch fixes a bug in the CommMonitor caused by the packet being
modified before it is captured in the trace. By recording the fields
before passing the packet on, and then putting these values in the
trace we ensure that even if the packet is modified the trace captures
what the CommMonitor saw.
This patch merely adopts a more strict use of const for the cache
member functions and variables, and also moves a large portion of the
member functions from public to protected.
Virtualized CPUs and the fastmem mode of the atomic CPU require direct
access to physical memory. We currently require caches to be disabled
when using them to prevent chaos. This is not ideal when switching
between hardware virtualized CPUs and other CPU models as it would
require a configuration change on each switch. This changeset
introduces a new version of the atomic memory mode,
'atomic_noncaching', where memory accesses are inserted into the
memory system as atomic accesses, but bypass caches.
To make memory mode tests cleaner, the following methods are added to
the System class:
* isAtomicMode() -- True if the memory mode is 'atomic' or 'direct'.
* isTimingMode() -- True if the memory mode is 'timing'.
* bypassCaches() -- True if caches should be bypassed.
The old getMemoryMode() and setMemoryMode() methods should never be
used from the C++ world anymore.
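For illustration, a hedged usage sketch of the new predicates (assuming the
usual System pointer available to memory-side objects):

    if (system->bypassCaches()) {
        // issue the access directly to physical memory
    } else if (system->isAtomicMode()) {
        // atomic access (covers both 'atomic' and 'direct')
    } else if (system->isTimingMode()) {
        // timing access through the memory system
    }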
The transition for state MII and event Store was found missing during testing.
The transition is being added. The controller will not stall the Store request
in state MII.
This patch allows ruby to have multiple clock domains. As I understand it,
with this patch controllers can have different frequencies. The entire
network needs to run at a single frequency.
The idea is that within an object, time is treated in terms of cycles.
But the messages that are passed from one entity to another should contain
the time in Ticks. As of now, this is only true for the message buffers,
but not for the links in the network. As I understand the code, all the
entities in different networks (simple, garnet-fixed, garnet-flexible) should
be clocked at the same frequency.
Another problem is that the directory controller has to operate at the same
frequency as the ruby system. This is because the memory controller does
not make use of the Message Buffer, and instead implements a buffer of its
own. So, it has no idea of the frequency at which the directory controller
is operating and uses ruby system's frequency for scheduling events.
This patch is as of now the final patch in the series of patches that replace
Time with Cycles. This patch further replaces Time with Cycles in the Sequencer,
Profiler, different protocols and related entities.
Though Time has not been completely removed, the places where it is in use
seem benign as of now.
The patch started off with replacing Time with Cycles in the Consumer class.
But to get ruby to compile, the rest of the changes had to be carried out.
Subsequent patches will further this process, till we completely replace
Time with Cycles.
This patch modifies the Histogram class' add() function so that it can add
linear histograms as well. The function assumes that the left end point of
the ranges of the two histograms are the same. It also assumes that when
the ranges of the two histograms are changed to accommodate an element not in
the range, the factor used in changing the range is the same for both the
histograms.
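Roughly, and with member names invented for illustration, the element-wise
addition looks like:

    #include <cassert>
    #include <cstdint>
    #include <vector>

    struct Histogram {
        std::vector<uint64_t> bins;   // per-bucket counts
        uint64_t count = 0;           // total number of samples

        // Add another histogram with the same left end point and (after
        // any range doubling) the same bucket boundaries.
        void add(const Histogram &other) {
            assert(bins.size() == other.bins.size());
            for (size_t i = 0; i < bins.size(); ++i)
                bins[i] += other.bins[i];
            count += other.count;
        }
    };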
This function is then used in removing one of the calls to the global
profiler*. The histograms for recording the delays incurred in processing
different requests are now maintained by the controllers. The profiler
adds these histograms when it needs to print the stats.
This patch does several things. First, the counter for fully busy cycles for a
controller is now kept within the controller, instead of being part of the profiler.
Second, the topology class no longer keeps an array of controllers which was only
used for printing stats. Instead, ruby system will now ask each controller to print
the stats. Thirdly, the statistical variable for recording how many different types
were created is being moved into the controller from the profiler. Note that for
printing, the profiler will collate results from different controllers.
The number of bits required for an address was set to floorLog2(memory size).
This is correct under the assumption that the memory size is a power of 2,
which is not always true. Hence, floorLog2 is being replaced with ceilLog2.
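As an example of the difference (using hypothetical helpers, not the gem5
implementations), a 3 GiB memory needs 32 address bits, but floorLog2 only
yields 31:

    #include <cassert>
    #include <cstdint>

    static int floorLog2(uint64_t x) {
        int n = -1;
        while (x != 0) { x >>= 1; ++n; }
        return n;
    }

    static int ceilLog2(uint64_t x) {
        // number of bits needed to address every byte in a range of size x
        return x <= 1 ? 0 : floorLog2(x - 1) + 1;
    }

    int main() {
        const uint64_t size = 3ULL << 30;   // 3 GiB, not a power of two
        assert(floorLog2(size) == 31);      // one bit too few
        assert(ceilLog2(size) == 32);       // covers the whole range
        return 0;
    }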
This patch moves the default DRAM parameters from the SimpleDRAM class
to two different subclasses, one for DDR3 and one for LPDDR2. More can
be added as we go forward.
The regressions that previously used the SimpleDRAM are now using
SimpleDDR3 as this is the most similar configuration.
This patch adds two additional scheduling constraints to the DRAM
controller model, to constrain the activation rate. The two metrics
determine the size of the activation window in terms of the number
of activates and the minimum time required for that number of
activates. This maps to current DDRx, LPDDRx and WIOx standards that
have either tFAW (4 activate window) or tTAW (2 activate window)
scheduling constraints.
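A hedged sketch of the constraint (all names illustrative): keep the times
of the most recent activates and do not issue a new one until the oldest of
them is at least tXAW in the past.

    #include <algorithm>
    #include <cstdint>
    #include <deque>

    using Tick = uint64_t;

    struct ActivationWindow {
        unsigned activationLimit;   // e.g. 4 for tFAW, 2 for tTAW
        Tick tXAW;                  // minimum time for that many activates
        std::deque<Tick> actTicks;  // times of the most recent activates

        // earliest tick at which the next ACT command may be issued
        Tick nextAllowedAct(Tick wantedAt) const {
            if (actTicks.size() < activationLimit)
                return wantedAt;
            return std::max(wantedAt, actTicks.front() + tXAW);
        }

        void recordAct(Tick when) {
            actTicks.push_back(when);
            if (actTicks.size() > activationLimit)
                actTicks.pop_front();
        }
    };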
This patch changes how the data bus busy time is calculated such that
it is delayed to the actual scheduling time of the request as opposed
to being done as soon as possible.
This patch changes a bunch of statistics, and the stats update is
bundled together with the introduction of tFAW/tTAW and the named DRAM
configurations like DDR3 and LPDDR2.
The cache drainManager is set but never cleared. This is because
the cache itself does not need to be drained and thus never
triggers a signalDrainDone(). Because the drainManager variable
is not used properly and does not appear to be necessary, it has
been removed with this patch.
The panic() function already prints the current tick value, so this call to
curCycle() is redundant. Moreover, since we are trying to move towards multiple
clock domains, this call would print a misleading time.
This patch converts the panic() print outs in the Sequencer::wakeup()
call from ruby cycles to Ticks(). This makes it easier to debug deadlocks
with the ProtocolTrace flag so the issue time indicated in the panic message
can be quickly searched for.
Committed by: Nilay Vaish <nilay@cs.wisc.edu>
This patch was initiated so as to remove reference to g_system_ptr,
the pointer to Ruby System that is used for getting the current time.
That simple change actually requires changing many things in slicc and
garnet. All these changes are related to how time is handled.
In most of the places, g_system_ptr has been replaced by another clock
object. The changes have been done under the assumption that all the
components in the memory system are on the same clock frequency, but the
actual clocks might be distributed.
Many Ruby structures inherit from the Consumer, which is used for scheduling
events. The Consumer used to rely on an Event Manager for scheduling events
and on g_system_ptr for time. With this patch, the Consumer will now use a
ClockedObject to schedule events and to query for current time. This resulted
in several structures being converted from SimObjects to ClockedObjects. Also,
the MessageBuffer class now requires a pointer to a ClockedObject so as to
query for time.
The current implementation in gem5 just keeps a list of locks per cacheline.
Due to this, a store to a non-overlapping portion of the cacheline can cause an
LL/SC pair to fail. This patch simply adds an address range to the lock
structure, so that the lock is only invalidated if the store overlaps the lock
range.
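A sketch of the extended lock record, with illustrative names: the
reservation now carries a byte range, and a store only clears it when the
two accesses overlap.

    #include <cstdint>

    using Addr = uint64_t;

    struct LockedAddr {
        int contextId;   // owner of the reservation
        Addr start;      // first byte covered by the load-linked
        Addr end;        // one past the last covered byte

        // a store only invalidates the lock if it overlaps this range
        bool overlaps(Addr addr, unsigned size) const {
            return addr < end && addr + size > start;
        }
    };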
Running with valgrind I noticed a use after free originating from
simple_mem.cc. It looks like this is a known issue and this additional call
site was missed in an earlier patch.
When the classic gem5 cache sees an uncacheable memory access, it used
to ignore it or silently drop the cache line in case of a
write. Normally, there shouldn't be any data in the cache belonging to
an uncacheable address range. However, since some architecture models
don't implement cache maintenance instructions, there might be some
dirty data in the cache that is discarded when this happens. The
reason it has mostly worked before is because such cache lines were
most likely evicted by normal memory activity before a TLB flush was
requested by the OS.
Previously, the cache model would invalidate cache lines when they
were accessed by an uncacheable write. This changeset alters this
behavior so all uncacheable memory accesses cause a cache flush with
an associated writeback if necessary. This is implemented by reusing
the cache flushing machinery used when draining the cache, which
implies that writebacks are performed using functional accesses.
The IIC replacement policy seems to be unused and has probably
gathered too much bit rot to be useful. This patch removes the IIC and
its associated cache parameters.
This patch adds basic merging of address ranges when determining which
address ranges should be reported in the configuration table. By
performing this merging it is possible to distribute an address range
across many memory channels (controllers). This is essential to enable
address interleaving.
This patch adds support for interleaving bits for the address
ranges. What was previously just a start and end address now has three
additional fields: the high bit and the number of bits to use for
interleaving, and a match value to compare against. If the number
of interleaving bits is set to zero it is effectively disabled.
A number of convenience functions are added to the range to enquire
about the interleaving, its granularity and the number of stripes it
is part of.
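A hedged sketch of how such a range might decide whether it owns an address
and report its interleaving, with field names following the description
above:

    #include <cstdint>

    using Addr = uint64_t;

    struct AddrRange {
        Addr start, end;    // the [start, end] address span
        int intlvHighBit;   // highest bit used for interleaving
        int intlvBits;      // number of bits used; 0 disables interleaving
        int intlvMatch;     // value the selected bits must match

        bool interleaved() const { return intlvBits != 0; }

        // number of stripes the range is split across
        uint64_t stripes() const { return 1ULL << intlvBits; }

        // bytes of contiguous data per stripe (interleaved ranges only)
        uint64_t granularity() const {
            return 1ULL << (intlvHighBit - intlvBits + 1);
        }

        bool contains(Addr a) const {
            if (a < start || a > end)
                return false;
            if (!interleaved())
                return true;
            int low = intlvHighBit - intlvBits + 1;
            Addr sel = (a >> low) & ((1ULL << intlvBits) - 1);
            return sel == static_cast<Addr>(intlvMatch);
        }
    };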
This patch cleans up the AddrRangeMap in preparation for the addition
of interleaving by removing unused code. The non-const versions of
find are never used, and hence the duplication is not needed.
This patch tidies up a number of the bus DPRINTFs related to range
manipulation. In particular, it shifts the message about range changes
to the start of the member function, and also adds information about
when all ranges are received.
This patch makes the address mapper less stringent about checking the
before and after ranges, i.e. the original and remapped ranges. The
checks were not really necessary, and there are situations when the
previous checks were too strict.
This patch makes the start and end address private in a move to
prevent direct manipulation and matching of ranges based on these
fields. This is done so that a transition to ranges with interleaving
support is possible.
As a result of hiding the start and end, a number of member functions
are needed to perform the comparisons and manipulations that
previously took place directly on the members. An accessor function is
provided for the start address, and a function is added to test if an
address is within a range. As a result of the latter the != and ==
operators are also removed in favour of the member function. A member
function that returns a string representation is also created to allow
debug printing.
In general, this patch does not add any functionality, but it does
take us closer to a situation where interleaving (and more cleverness)
can be added under the bonnet without exposing it to the user. More on
that in a later patch.
This patch temporarily removes the joining of ranges when creating the
backing store, to reserve this functionality for the interleaved
ranges that are about to be introduced.
When creating the mmaps for the backing store, there is no point in
creating larger contiguous chunks than what is necessary. The larger
chunks will only make life more difficult for the host.
Merging will be re-added later, but then only for interleaved ranges.
This patch adds packet tracing to the communication monitor using a
protobuf as the mechanism for creating the trace.
If no file is specified, then the tracing is disabled. If a file is
specified, then for every packet that is successfully sent, a protobuf
message is serialized to the file.
This patch adds a basic check to ensure that the packet queue does not
grow absurdly large. The queue should only be used to store packets
that were delayed due to blocking from the neighbouring port, and not
for actual storage. Thus, a limit of 100 has been chosen for now
(which is already quite substantial).
This patch addresses a warning related to the swig interface
generation for the Switch class. The cxx_header is now specified
correctly, and the header in question has got a few includes added to
make it all compile.
This patch fixes a bug that caused multiple systems to overwrite each
other physical memory. The system name is now included in the filename
such that this is avoided.
The patch adds support to slicc for recognizing arguments that should be
passed to the constructor of a class. I did not like the fact that an explicit
check was being carried out on the type 'TBETable' to figure out the arguments to
be passed to the constructor.
The patch also moves some of the member variables that are declared for all
the controllers to the base class AbstractController.
This patch adds a prefetcher for the ruby memory system. The prefetcher
is based on a prefetcher implemented by others (well, I don't know
who wrote the original). The prefetcher does stride-based prefetching,
both unit and non-unit. It observes the misses in the cache and trains on
these. After the training period is over, the prefetcher starts issuing
prefetch requests to the controller.
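Very roughly, and with all names and thresholds made up for illustration,
the training could look like:

    #include <cstdint>

    using Addr = uint64_t;

    struct StrideEntry {
        Addr lastAddr = 0;
        int64_t stride = 0;
        int confidence = 0;
    };

    // Observe a demand miss and decide whether to issue a prefetch for the
    // next address in the stream (unit or non-unit stride).
    bool observeMiss(StrideEntry &e, Addr missAddr, int trainThreshold,
                     Addr &prefetchAddr)
    {
        const int64_t newStride =
            static_cast<int64_t>(missAddr) - static_cast<int64_t>(e.lastAddr);
        if (newStride == e.stride && newStride != 0)
            ++e.confidence;     // the same stride was seen again
        else {
            e.stride = newStride;
            e.confidence = 0;   // restart training on a new stride
        }
        e.lastAddr = missAddr;

        if (e.confidence >= trainThreshold) {
            prefetchAddr = missAddr + e.stride;   // issue to the controller
            return true;
        }
        return false;
    }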
This patch adds a _curTick variable to an eventq. This variable is updated
whenever an event is serviced in function serviceOne(), or all events up to
a particular time are processed in function serviceEvents(). This change
helps when there are eventqs that do not make use of curTick for scheduling
events.
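A minimal sketch of the bookkeeping (the real EventQueue is considerably
richer):

    #include <cstdint>
    #include <map>

    using Tick = uint64_t;

    struct Event { virtual void process() = 0; virtual ~Event() { } };

    class EventQueue {
        std::multimap<Tick, Event *> events;
        Tick _curTick = 0;     // the queue's own notion of current time

      public:
        void schedule(Event *e, Tick when) { events.emplace(when, e); }
        bool empty() const { return events.empty(); }

        // service the earliest event and advance _curTick to its time
        void serviceOne() {
            auto it = events.begin();
            _curTick = it->first;
            Event *e = it->second;
            events.erase(it);
            e->process();
        }

        // service all events up to (and including) 'when'
        void serviceEvents(Tick when) {
            while (!empty() && events.begin()->first <= when)
                serviceOne();
            _curTick = when;
        }

        Tick getCurTick() const { return _curTick; }
    };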
Recent changes to functionalRead() in the memory system were not correct.
The change allowed for returning data from the first message found in
the buffers of the memory system. This is not correct since it is possible
that a timing message has data from an older state of the block.
The changes are being reverted.
This patch fixes the draining of the SimpleDRAM controller model. The
controller performs buffering of writes and normally there is no need
to ever empty the write buffer (if you have a fast on-chip memory,
then use it). The patch adds checks to ensure the write buffer is
drained when the controller is asked to do so.
This patch adds support to ruby so that the statistics maintained by ruby
are reset/dumped when the statistics for the rest of the system are
reset/dumped. For resetting the statistics, ruby now provides the
resetStats() function that a sim object can provide. As a consequence, the
clearStats() function has been removed from RubySystem. For dumping stats,
Ruby now adds a callback event to the dumpStatsQueue. The exit callback that
ruby used to add earlier is being removed.
Created by: Hamid Reza Khaleghzadeh.
Improved by: Lluc Alvarez, Nilay Vaish
Committed by: Nilay Vaish
This patch adds support for the following optional drain methods in
the classical memory system's cache model:
memWriteback() - Write back all dirty cache lines to memory using
functional accesses.
memInvalidate() - Invalidate all cache lines. Dirty cache lines
are lost unless a writeback is requested.
Since memWriteback() is called when checkpointing systems, this patch
adds support for checkpointing systems with caches. The serialization
code now checks whether there are any dirty lines in the cache. If
there are dirty lines in the cache, the checkpoint is flagged as bad
and a warning is printed.
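A hedged sketch of the serialization check, with the block layout and
warning mechanism simplified:

    #include <cstdio>
    #include <vector>

    struct CacheBlk {
        bool dirty = false;
        bool isDirty() const { return dirty; }
    };

    // Before serializing, see whether any dirty line remains; if so, the
    // checkpoint is flagged as bad.
    bool checkDirtyLines(const std::vector<CacheBlk> &blocks)
    {
        for (const CacheBlk &blk : blocks) {
            if (blk.isDirty()) {
                std::fprintf(stderr, "warning: cache contains dirty lines, "
                             "checkpoint flagged as bad\n");
                return true;
            }
        }
        return false;
    }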
This patch moves the draining interface from SimObject to a separate
class that can be used by any object needing draining. However,
objects not visible to the Python code (i.e., objects not deriving
from SimObject) still depend on their parents informing them when to
drain. This patch also gets rid of the CountedDrainEvent (which isn't
really an event) and replaces it with a DrainManager.
When casting objects in the generated SWIG interfaces, SWIG uses
classical C-style casts ( (Foo *)bar; ). In some cases, this can
degenerate into the equivalent of a reinterpret_cast (mainly if only a
forward declaration of the type is available). This usually works for
most compilers, but it is known to break if multiple inheritance is
used anywhere in the object hierarchy.
This patch introduces the cxx_header attribute to Python SimObject
definitions, which should be used to specify a header to include in
the SWIG interface. The header should include the declaration of the
wrapped object. We currently don't enforce the use of the
header attribute, but a warning will be generated for objects that do
not use it.
This patch enables dumping statistics and Linux process information on
context switch boundaries (__switch_to() calls) that are used for
Streamline integration (a graphical statistics viewer from ARM).
This patch changes the default system clock from 1THz to 1GHz. This
clock is used by all modules that do not override the default (parent
clock), and primarily affects the IO subsystem. Every DMA device uses
its clock to schedule the next transfer, and the change will thus
cause this inter-transfer delay to be longer.
The default clock of the bus is removed, as the clock inherited from
the system provides exactly the same value.
A follow-on patch will bump the stats.
I had forgotten to change the network test protocol while making changes to
ruby for supporting functional accesses. This patch updates the protocol so
that it can compile correctly.
This patch adds support to different entities in the ruby memory system
for more reliable functional read/write accesses. Only the simple network
has been augmented as of now. Later on Garnet will also support functional
accesses.
The patch adds functional access code to all the different types of messages
that protocols can send around. These messages are functionally accessed
by going through the buffers maintained by the network entities.
The patch also rectifies some of the bugs found in coherence protocols while
testing the patch.
With this patch applied, functional writes always succeed. But functional
reads can still fail.
Currently the Ruby System maintains pointer to only one of the memory
controllers. But there can be multiple controllers in the system. This
patch adds a vector of memory controllers.
This patch adds support for function definitions to appear in slicc structs.
This is required for supporting functional accesses for different types of
messages. Subsequent patches will build on this.
It seems unnecessary that the BankedArray class needs to schedule an event
to figure out when an access ends. Instead, only the time at which the access
ends needs to be tracked.
Ruby system was recently converted to a clocked object. Such objects maintain
state related to the time that has passed so far. During the cache warmup, Ruby
system changes its own time and the global time. Later on, the global time is
restored. So Ruby system also needs to reset its own time.
This patch adds an additional level of ports in the inheritance
hierarchy, separating out the protocol-specific and protocol-agnostic
parts. All the functionality related to the binding of ports is now
confined to use BaseMaster/BaseSlavePorts, and all the
protocol-specific parts stay in the Master/SlavePort. In the future it
will be possible to add other protocol-specific implementations.
The functions used in the binding of ports, i.e. getMaster/SlavePort
now use the base classes, and the index parameter is updated to use
the PortID typedef with the symbolic InvalidPortID as the default.
This patch moves all the memory backing store operations from the
independent memory controllers to the global physical memory. The main
reason for this patch is to allow address striping in a future set of
patches, but at this point it already provides some useful
functionality in that it is now possible to change the number of
memory controllers and their address mapping in combination with
checkpointing. Thus, the host and guest view of the memory backing
store are now completely separate.
With this patch, the individual memory controllers are far simpler as
all responsibility for serializing/unserializing is moved to the
physical memory. Currently, the functionality is more or less moved
from AbstractMemory to PhysicalMemory without any major
changes. However, in a future patch the physical memory will also
resolve any ranges that are interleaved and properly assign the
backing store to the memory controllers, and keep the host memory as a
single contiguous chunk per address range.
Functionality for future extensions involving CPU virtualization
also enables the host to get pointers to the backing store.
This patch changes the data structure used to keep track of ports that
should be told to retry. As the bus is doing this in an FCFS way,
there is no point having a list. A deque is a better match (and is at
least in theory a better choice from a performance point of view).
This patch addresses a number of smaller issues identified by the code
inspection utility cppcheck. There are a number of identified leaks in
the arm/linux/system.cc (although the function only gets called once
so it is not a major problem), a few deletes in dev/x86/i8042.cc that
were not array deletes, and sprintfs where the character array had one
element less than needed. In the IIC tags there was a function
allocating an array of longs which is in fact never used.
This patch changes the cache-related latencies from an absolute time
expressed in Ticks, to a number of cycles that can be scaled with the
clock period of the caches. Ultimately this patch serves to enable
future work that involves dynamic frequency scaling. As an immediate
benefit it also makes it more convenient to specify cache performance
without implicitly assuming a specific CPU core operating frequency.
The stat blocked_cycles, which actually counted in ticks, is now updated
to count in cycles.
As the timing is now rounded to the clock edges of the cache, there
are some regressions that change. Plenty of them have very minor
changes, whereas some regressions with a short run-time are perturbed
quite significantly. A follow-on patch updates all the statistics for
the regressions.
This patch transitions the bus to use the AddrRange operations instead
of directly accessing the start and end. The change facilitates the
move to a more elaborate AddrRange class that also supports address
striping in the bus by specifying interleaving bits in the ranges.
Two new functions are added to the AddrRange to determine if two
ranges intersect, and if one is a subset of another. The bus
propagation of address ranges is also tweaked such that an update is
only propagated if the bus received information from all the
downstream slave modules. This avoids the iteration and need for the
cycle-breaking scheme that was previously used.
This patch moves the block size computation from findBlockSize to
initialisation time, once all the neighbouring ports are connected.
There is no need to dynamically update the block size, and the caching
of the value effectively avoided that anyhow. This is very similar to
what was already in place, just with a slightly leaner implementation.
This patch removes the parts of slicc that were required for multi-chip
protocols. Going ahead, it seems multi-chip protocols would be implemented
by playing with the network itself.
This patch moves the code for functional accesses to ruby system. This is
because the subsequent patches add support for making functional accesses
to the messages in the interconnect. Making those accesses from the ruby port
would be cumbersome.
In the current caches the hit latency is paid twice on a miss. This patch lets
a configurable response latency be set for the cache's backward path.
Fix the drain functionality of the RubyPort to only call drain on child ports
during a system-wide drain process, instead of calling each time that a
ruby_hit_callback is executed.
This fixes the issue of the RubyPort ports being reawakened during the drain
simulation, possibly with work they didn't previously have to complete. If
they have new work, they may call process on the drain event that they had
not registered work for, causing an assertion failure when completing the
drain event.
Also, in RubyPort, set the drainEvent to NULL when there are no events
to be drained. If not set to NULL, the drain loop can result in stale
drainEvents being used.
This patch introduces a high-level model of a DRAM controller, with a
basic read/write buffer structure, a selectable and customisable
arbiter, a few address mapping options, and the basic DRAM timing
constraints. The parameters make it possible to turn this model into
any desired DDRx/LPDDRx/WideIOx memory controller.
The intention is not to be cycle accurate or capture every aspect of a
DDR DRAM interface, but rather to enable exploring of the high-level
knobs with a good simulation speed. Thus, contrary to e.g. DRAMSim
this module emphasizes simulation speed with a good-enough accuracy.
This module is merely a starting point, and there are plenty of additions
and improvements to come. A notable addition is the support for
address-striping in the bus to enable a multi-channel DRAM
controller. Also note that there are still a few "todo's" in the code
base that will be addressed as we go along.
A follow-up patch will add basic performance regressions that use the
traffic generator to exercise a few well-defined corner cases.
This patch removes the unused file parameter from the
AbstractMemory. The patch serves to make it easier to transition to a
separation of the actual contiguous host memory backing store, and the
gem5 memory controllers.
Without the file parameter it becomes easier to hide the creation of
the mmap in the PhysicalMemory, as there are no longer any reasons to
expose the actual contiguous ranges to the user.
To the best of my knowledge there is no use of the parameter, so the
change should not affect anyone.
This patch takes the final plunge and transitions from the templated
Range class to the more specific AddrRange. In doing so it changes the
obvious Range<Addr> to AddrRange, and also bumps the range_map to be
AddrRangeMap.
In addition to the obvious changes, including the removal of redundant
includes, this patch also does some housekeeping in preparing for the
introduction of address interleaving support in the ranges. The Range
class is also stripped of all the functionality that is never used.
--HG--
rename : src/base/range.hh => src/base/addr_range.hh
rename : src/base/range_map.hh => src/base/addr_range_map.hh
This patch removes the use of g_system_ptr for event scheduling. Each consumer
object now needs to specify upfront an EventManager object it would use for
scheduling events. This makes the ruby memory system more amenable for a
multi-threaded simulation.
This patch makes a minor addition to the SimpleMemory by enforcing a
maximum data rate. The bandwidth is configurable, and a reasonable
value (12.8GB/s) has been chosen as the default.
The changes do add some complexity to the SimpleMemory, but they
should definitely be justifiable as this enables a far more realistic
setup using even this simple memory controller.
The rate regulation is done for reads and writes combined to reflect
the bidirectional data busses used by most (if not all) relevant
memories. Moreover, the regulation is done per packet as opposed to
long term, as it is the short term data rate (data bus width times
frequency) that is the limiting factor.
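A sketch of the per-packet regulation, with illustrative names and units:
each packet occupies the shared data bus for its size divided by the
bandwidth, and later packets queue up behind it.

    #include <algorithm>
    #include <cstdint>

    using Tick = uint64_t;   // assume 1 tick = 1 ps

    struct RateRegulator {
        double bytesPerTick;   // e.g. 12.8 GB/s = 0.0128 bytes per ps
        Tick busyUntil = 0;    // reads and writes share this single resource

        // returns the tick at which this access completes
        Tick occupy(Tick now, unsigned pktSizeBytes) {
            Tick duration = static_cast<Tick>(pktSizeBytes / bytesPerTick);
            Tick start = std::max(now, busyUntil);
            busyUntil = start + duration;
            return busyUntil;
        }
    };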
A follow-up patch bumps the stats for the regressions.
This patch shifts the version of gcc for which we enable c++0x from
4.6 to 4.4. The longer term plan is to see what the c++0x features
can bring and what level of support would be enabled simply by bumping
the required version of gcc from 4.3 to 4.4.
A few minor things had to be fixed in the code base, most notably the
choice of a hashmap implementation. In the Ruby Sequencer there were
also a few minor issues that gcc 4.4 was not too happy about.
This patch addresses a few minor issues reported by the clang static
analyzer.
The analysis was run with:
scan-build -disable-checker deadcode \
-enable-checker experimental.core \
-disable-checker experimental.core.CastToStruct \
-enable-checker experimental.cplusplus
This separates the functionality to clear the state in a block into
blk.hh and the functionality to update the tag information into the
tags. This gets rid of the case where calling invalidateBlk on an
already-invalid block does something different than calling it on a
valid block, which was confusing.
If I understand correctly, this was put in place so that a debugger can be
attached when the protocol aborts. While this sounds useful, it is a problem
when the simulation is not being actively monitored. I think it is better to
remove this.
Despite gzwrite taking an unsigned for the length, it returns an int for the
number of bytes written, and the write fails if (int)len < 0. Because of this,
call gzwrite with len no larger than INT_MAX: write in blocks of INT_MAX if
the data to be written is larger than INT_MAX.
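A hedged sketch of the chunked write, with error handling simplified:

    #include <zlib.h>
    #include <climits>
    #include <cstdint>

    // Write 'len' bytes, never handing gzwrite more than INT_MAX at once.
    bool writeBlocks(gzFile file, const uint8_t *data, uint64_t len)
    {
        while (len > 0) {
            const unsigned chunk =
                len > static_cast<uint64_t>(INT_MAX)
                    ? static_cast<unsigned>(INT_MAX)
                    : static_cast<unsigned>(len);
            const int written = gzwrite(file, data, chunk);
            if (written <= 0)
                return false;   // gzwrite signals failure with <= 0
            data += written;
            len -= static_cast<uint64_t>(written);
        }
        return true;
    }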
This patch is a first step to using Cycles as a parameter type. The
main affected modules are the CPUs and the Ruby caches. There are
definitely plenty more places that are affected, but this patch serves
as a starting point to making the transition.
An important part of this patch is to actually enable parameters to be
specified as Param.Cycles which involves some changes to params.py.
The assignment operator (operator=) for the DataBlock class was incorrectly
interpreting the class member m_alloc. This variable records whether the memory
assigned to the data block needs to be freed by the class itself. It seems that
operator= interpreted the variable as whether memory had been assigned to the
data block at all. This wrong interpretation was causing values not to propagate
to RubySystem::m_mem_vec_ptr. This caused major issues with restoring from
checkpoints when using a protocol which verified that the cache data was
consistent with the backing store (i.e. MOESI-hammer).
This patch addresses the comments and feedback on the preceding patch
that reworks the clocks and now more clearly shows where cycles
(relative cycle counts) are used to express time.
Instead of bumping the existing patch I chose to make this a separate
patch, merely to try and focus the discussion around a smaller set of
changes. The two patches will be pushed together though.
The changes done as part of this patch mostly follow directly
from the introduction of the wrapper class, and change enough code to
make things compile and run again. There are definitely more places
where int/uint/Tick is still used to represent cycles, and it will
take some time to chase them all down. Similarly, a lot of parameters
should be changed from Param.Tick and Param.Unsigned to
Param.Cycles.
In addition, the use of curTick is questionable as there should not be
an absolute cycle. Potential solutions can be built on top of this
patch. There is a similar situation in the o3 CPU where
lastRunningCycle is currently counting in Cycles, and is still an
absolute time. More discussion to be had in other words.
An additional change that would be appropriate in the future is to
perform a similar wrapping of Tick and probably also introduce a
Ticks class along with suitable operators for all these classes.
This patch tightens up the semantics around port binding and checks
that the ports that are being bound are currently not connected, and
similarly connected before unbind is called.
The patch consequently also changes the order of the unbind and bind
for the switching of CPUs to ensure that the rules are adhered
to. Previously the ports would be "over-written" without any check.
There are no changes in behaviour due to this patch, and the only
place where the unbind functionality is used is in the CPU.
The memory size variable was a 32-bit int. This meant that the size of the
memory was limited to 4GB. This patch changes the type of the variable to
64-bit to support larger memory sizes. Thanks to Raghuraman Balasubramanian
for bringing this to notice.
This patch removes the NACK from the packet as there is no longer any
module in the system that issues them (the bridge was the only one and
the previous patch removes that).
The handling of NACKs was mostly avoided throughout the code base, by
using e.g. panic or assert false, but in a few locations the NACKs
were actually dealt with (although NACKs never occurred in any of the
regressions). Most notably, the DMA port will now never receive a NACK
and the backoff time is thus never changed. As a consequence, the
entire backoff mechanism (similar to a PCI bus) is now removed and the
DMA port entirely relies on the bus performing the arbitration and
issuing a retry when appropriate. This is more in line with e.g. PCIe.
Surprisingly, this patch has no impact on any of the regressions. As
mentioned in the patch that removes the NACK from the bridge, a
follow-up patch should change the request and response buffer size for
at least one regression to also verify that the system behaves as
expected when the bridge fills up.
This patch removes the NACKing in the bridge, as the split
request/response busses now ensure that protocol deadlocks do not
occur, i.e. the message-dependency chain is broken by always allowing
responses to make progress without being stalled by requests. The
NACKs had limited support in the system with most components ignoring
their use (with a suitable call to panic), and as the NACKs are no
longer needed to avoid protocol deadlocks, the cleanest way is to
simply remove them.
The bridge is the starting point as this is the only place where the
NACKs are created. A follow-up patch will remove the code that deals
with NACKs in the endpoints, e.g. the X86 table walker and DMA
port. Ultimately the type of packet can be completely removed (until
someone sees a need for modelling more complex protocols, which can
now be done in parts of the system since the port and interface is
split).
As a consequence of the NACK removal, the bridge now has to send a
retry to a master if the request or response queue was full on the
first attempt. This change also makes the bridge ports very similar to
QueuedPorts, and a later patch will change the bridge to use these. A
first step in this direction is taken by aligning the name of the
member functions, as done by this patch.
A bit of tidying up has also been done as part of the simplifications.
Surprisingly, this patch has no impact on any of the
regressions. Hence, there was never any NACKs issued. In a follow-up
patch I would suggest changing the size of the bridge buffers set in
FSConfig.py to also test the situation where the bridge fills up.
This patch extends the queued port interfaces with methods for
scheduling the transmission of a timing request/response. The methods
are named similarly to the corresponding sendTiming(Snoop)Req/Resp,
replacing the "send" with "sched". As the queues are currently
unbounded, the methods always succeed and hence do not return a value.
This functionality was previously provided in the subclasses by
calling PacketQueue::schedSendTiming with the appropriate
parameters. With this change, there is no need to introduce these
extra methods in the subclasses, and the use of the queued interface
is more uniform and explicit.
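In rough terms (signatures assumed from the description rather than copied
from the code), the new methods just forward to the queue:

    #include <cstdint>

    using Tick = uint64_t;
    struct Packet;
    using PacketPtr = Packet *;

    // Illustrative stand-ins; the real classes carry much more state.
    struct PacketQueue {
        void schedSendTiming(PacketPtr, Tick) { /* enqueue for later send */ }
    };

    struct QueuedMasterPort {
        PacketQueue &queue;
        // always succeeds since the queue is unbounded, hence no return value
        void schedTimingReq(PacketPtr pkt, Tick when) {
            queue.schedSendTiming(pkt, when);
        }
    };

    struct QueuedSlavePort {
        PacketQueue &queue;
        void schedTimingResp(PacketPtr pkt, Tick when) {
            queue.schedSendTiming(pkt, when);
        }
    };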
This patch allows packets to be enqueued in the same tick as they are
intended to be sent. This does not imply they actually are sent that
tick, although that is possible.
This change is useful for modules that use the queued ports primarily to
avoid handling the flow control involved in sending and retrying
packets.
This patch moves the clock of the CPU, bus, and numerous devices to
the new class ClockedObject, that sits in between the SimObject and
MemObject in the class hierarchy. Although there are currently a fair
amount of MemObjects that do not make use of the clock, they
potentially should do so, e.g. the caches should at some point have
the same clock as the CPU, potentially with a 1:n ratio. This patch
does not introduce any new clock objects or object hierarchies
(clusters, clock domains etc), but is still a step in the direction of
having a more structured approach to clock domains.
The most contentious part of this patch is the serialisation of clocks
that some of the modules (but not all) did previously. This
serialisation should not be needed as the clock is set through the
parameters even when restoring from the checkpoint. In other words,
the state is "stored" in the Python code that creates the modules.
The nextCycle methods are also simplified and the clock phase
parameter of the CPU is removed (this could be part of a clock object
once they are introduced).
This patch fixes some problems with the drain/switchout functionality
for the O3 cpu and for the ARM ISA and adds some useful debug print
statements.
This is an incremental fix as there are still a few bugs/mem leaks with the
switchout code. Particularly when switching from an O3CPU to a
TimingSimpleCPU. However, when switching from O3 to O3 cores with the ARM ISA
I haven't encountered any more assertion failures; now the kernel will
typically panic inside of simulation.
This patch moves instantiateTopology into Ruby.py and removes the
mem/ruby/network/topologies directory. It also adds some extra inheritance to
the topologies to clean up some issues in the existing topologies.
Off-by-one loop termination meant that we were stuffing
the terminating '\0' into the std::string value, which
makes for difficult-to-debug string comparison failures.
Removes the optimization that forwards an exclusive copy to a requester on a
read, but only for the i-cache. This optimization isn't necessary because we
typically won't be writing to the i-cache.
This patch changes the simple memory to have a single slave port
rather than a vector port. The simple memory makes no attempts at
modelling the contention between multiple ports, and any such
multiplexing and demultiplexing could be done in a bus (or crossbar)
outside the memory controller. This scenario also matches with the
ongoing work on a SimpleDRAM model, which will be a single-ported
single-channel controller that can be used in conjunction with a bus
(or crossbar) to create a multi-port multi-channel controller.
There are only very few regressions that make use of the vector port,
and these are all for functional accesses only. To facilitate these
cases, memtest and memtest-ruby have been updated to also have a
"functional" bus to perform the (de)multiplexing of the functional
memory accesses.
This patch removes printConfig() functions from all structures in Ruby.
Most of the information is already part of config.ini, and wherever it
is not, it will be in due course.
This patch models a cache as separate tag and data arrays. The patch exposes
the banked array as another resource that is checked by SLICC before a
transition is allowed to execute. This is similar to how TBE entries and slots
in output ports are modeled.
Updates to Ruby to support statistics counting of cache accesses. This feature
serves multiple purposes beyond simple stats collection. It provides the
foundation for ruby to model the cache tag and data arrays as physical
resources, as well as provide the necessary input data for McPAT power
modeling.
Instead of just passing a list of controllers to the makeTopology function
in src/mem/ruby/network/topologies/<Topo>.py we pass in a function pointer
which knows how to make the topology, possibly with some extra state set
in the configs/ruby/<protocol>.py file. Thus, we can move all of the files
from network/topologies to configs/topologies. A new class BaseTopology
is added which all topologies in configs/topologies must inherit from and
follow its API.
--HG--
rename : src/mem/ruby/network/topologies/Crossbar.py => configs/topologies/Crossbar.py
rename : src/mem/ruby/network/topologies/Mesh.py => configs/topologies/Mesh.py
rename : src/mem/ruby/network/topologies/MeshDirCorners.py => configs/topologies/MeshDirCorners.py
rename : src/mem/ruby/network/topologies/Pt2Pt.py => configs/topologies/Pt2Pt.py
rename : src/mem/ruby/network/topologies/Torus.py => configs/topologies/Torus.py
This patch makes the queue implementation in the SimpleTimingPort
private to avoid confusion with the protected member queue in the
QueuedSlavePort. The SimpleTimingPort provides the queue_impl to the
QueuedSlavePort and it can be accessed via the reference in the base
class. The use of the member name queue is thus no longer overloaded.
This patch is a first step to align the port names used in the Python
world and the C++ world. Ultimately it serves to make the use of
config.json together with output from the simulation easier, including
post-processing of statistics.
Most notably, the CPU, cache, and bus are addressed in this patch, and
there might be other ports that should be updated accordingly. The
dash name separator has also been replaced with a "." which is what is
used to concatenate the names in python, and a separation is made
between the master and slave port in the bus.
This patch changes the default bus width to a more sensible 8 bytes
(64 bits), which is in line with most on-chip buses. Although there
are cases where a wider or narrower bus is useful, the 8 bytes is a
good compromise to serve as the default.
This patch changes essentially all statistics, and will be bundled
with the outstanding changes to the bus.
This patch splits the existing buses into multiple layers. The
non-coherent bus is split into a request and a response layer, and the
coherent bus adds an additional layer for the snoop responses. The
layer is modified to be templatised on the port type, such that the
different layers can have retryLists with either master or slave
ports. This patch also removes the dynamic cast from the retry, as
previously promised when moving the recvRetry from the port base class
to the master/slave port respectively.
Overall, the split bus more closely reflects any modern on-chip bus
and should be a step in the right direction. From this point, it
would be reasonably straightforward to add separate layers (and thus
contention points and arbitration) for each port and thus create a
true crossbar.
The regressions all produce the correct output, but have varying
degrees of changes to their statistics. A separate patch will be
pushed with the updates to the reference statistics.
This patch moves all flow control, arbitration and state information
into a bus layer. The layer is thus responsible for all the state
transitions, and for keeping hold of the retry list. Consequently the
layer is also responsible for the draining.
With this change, the non-coherent and coherent bus are given a single
layer to avoid changing any temporal behaviour, but the patch opens up
for adding more layers.
This patch adds a state enum and member variable in the bus, tracking
the bus state, thus eliminating the need for tickNextIdle and inRetry,
and fixing an issue that allowed the bus to be occupied by multiple
packets at once (hopefully it also makes it easier to understand the
code).
The bus, in its current form, uses tickNextIdle and inRetry to keep
track of the state of the bus. However, it only updates tickNextIdle
_after_ forwarding a packet using sendTiming, and the result is that
the bus is still seen as idle, and a module that receives the packet
and starts transmitting new packets in zero time will still see the
bus as idle (and this is done by a number of DMA devices). The issue
can also be seen in isOccupied where the bus calls reschedule on an
event instead of schedule.
This patch addresses the problem by marking the bus as _not_ idle
already by the time we conclude that the bus is not occupied and we
will deal with the packet.
As a result of not allowing multiple packets to occupy the bus, some
regressions have slight changes in their statistics. A separate patch
updates these accordingly.
Further ahead, a follow-on patch will introduce a separate state
variable for request/responses/snoop responses, and thus implement a
split request/response bus with separate flow control for the
different message types (even further ahead it will introduce a
multi-layer bus).
This patch makes getAddrRanges const throughout the code base. There
is no reason why it should not be, and making it const prevents adding
any unintentional side-effects.
This patch adds getAddrRanges to the master port, and thus avoids
going through getSlavePort to be able to ask the slave. Similar to the
previous patch that added isSnooping to the SlavePort, this patch aims
to introduce an additional level of hierarchy in the ports (base port
being protocol-agnostic) and getSlave/MasterPort will return port
pointers to these base classes.
The function is named getAddrRanges also on the master port, but does
nothing besides asking the connected slave port. The slave port, as
before, has to provide an implementation and actually produce a list
of address ranges. The initial design used the name getSlaveAddrRanges
for the new function, but the more verbose name was later changed.
This patch adds isSnooping to the slave port, and thus avoids going
through getMasterPort to be able to ask the master. Over the course of
the next few patches, all getMasterPort/getSlavePort in Port and
MemObject are to be protocol agnostic, and the snooping is part of the
protocol layer.
The function is already present on the master port, where it is
implemented by the module itself, e.g. a cache. On the slave side, it
is merely asking the connected master port. The same name is used by
both functions despite their difference in behaviour. The initial
design used isMasterSnooping on the slave port side, but the more
verbose function name was later changed.
This patch is the last part of moving all protocol-related
functionality out of the Port base class. All the send/recv functions
are already moved, and the retry (which still governs all the timing
transport functions) is the only part that remained in the base class.
The only point where this currently causes a bit of inconvenience is
in the bus where the retry list is global and holds Port pointers (not
Master/SlavePort). This is about to change with the split into a
request/response bus and will soon be removed anyway.
The patch has no impact on any regressions.
This patch is the result of static analysis identifying a number of
memory leaks. The leaks are all benign as they are a result of not
deallocating memory in the destructor. The fix still has value as it
removes false positives in the static analysis.
The LRU policy always evicted the least recently touched way, even if it
contained valid data and another way was invalid, as can happen if a block has
been invalidated by coherence. This can result in caches never warming up even
though they are replacing blocks. This modifies the LRU policy to move blocks
to LRU position on invalidation.
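A hedged sketch of the change (the real tags code keeps an intrusive list
per set): on invalidation the block is pushed to the LRU end rather than
left where it was.

    #include <list>

    struct CacheBlk { bool valid = false; /* tag, data, ... */ };

    // One set, most-recently-used at the front, LRU victim at the back.
    struct CacheSet {
        std::list<CacheBlk *> lruOrder;

        void touch(CacheBlk *blk) {
            lruOrder.remove(blk);
            lruOrder.push_front(blk);   // MRU position
        }

        void invalidate(CacheBlk *blk) {
            blk->valid = false;
            lruOrder.remove(blk);
            lruOrder.push_back(blk);    // becomes the next victim
        }

        CacheBlk *victim() { return lruOrder.back(); }
    };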
Currently when multiple CPUs perform a load-linked/store-conditional sequence,
the loads all create a list of reservations which is then scanned when the
stores occur. A reservation matching the context and address of the store is
sought, BUT all reservations matching the address are also erased at this point.
The upshot is that a store-conditional will remove all reservations even if the
store itself does not succeed. A livelock was observed using 7-8 CPUs where a
thread would erase the reservations of other threads, not succeed, loop and put
its own reservation in again only to have it blown by another thread that
unsuccessfully now tries to store-conditional -- no forward progress was made,
hanging the system.
The correct way to do this is to only blow a reservation when a store
(conditional or not) actually /occurs/ to its address. One thread always wins
(the one that does the store-conditional first).
This patch is a temporary fix until Andreas' four-phase patches
get reviewed and committed. Removing FastAlloc seems to have exposed
an issue which previously was reasonably rare in which packets are freed
before the sending cache is done with them. This change puts incoming packets
on a pendingDelete queue, from which they are deleted at the start of the next call, and
thus breaks the dependency between when the caller returns true and when the
packet is actually used by the sending cache.
Running valgrind on a multi-core linux boot and the memtester results in no
valgrind warnings.
While FastAlloc provides a small performance increase (~1.5%) over regular malloc, it isn't thread-safe.
After removing FastAlloc and using tcmalloc I've seen a performance increase of 12% over libc malloc
when running twolf for ARM.
This patch introduces a class hierarchy of buses, a non-coherent one,
and a coherent one, splitting the existing bus functionality. By doing
so it also enables further specialisation of the two types of buses.
A non-coherent bus connects a number of non-snooping masters and
slaves, and routes the request and response packets based on the
address. The request packets issued by the master connected to a
non-coherent bus could still snoop in caches attached to a coherent
bus, as is the case with the I/O bus and memory bus in most system
configurations. No snoops will, however, reach any master on the
non-coherent bus itself. The non-coherent bus can be used as a
template for modelling PCI, PCIe, and non-coherent AMBA and OCP buses,
and is typically used for the I/O buses.
A coherent bus connects a number of (potentially) snooping masters and
slaves, and routes the request and response packets based on the
address, and also forwards all requests to the snoopers and deals with
the snoop responses. The coherent bus can be used as a template for
modelling QPI, HyperTransport, ACE and coherent OCP buses, and is
typically used for the L1-to-L2 buses and as the main system
interconnect.
The configuration scripts are updated to use a NoncoherentBus for all
peripheral and I/O buses.
A bit of minor tidying up has also been done.
--HG--
rename : src/mem/bus.cc => src/mem/coherent_bus.cc
rename : src/mem/bus.hh => src/mem/coherent_bus.hh
rename : src/mem/bus.cc => src/mem/noncoherent_bus.cc
rename : src/mem/bus.hh => src/mem/noncoherent_bus.hh
This patch merely removes the Packet* from the isOccupied member
function. Historically this was used to check if the packet was an
express snoop, but this is now done outside this function (where
relevant).
The main aim of this patch is to arrive at a suitable port interface
for vector ports, including both the packet and the port id. This
patch changes the bus transport functions
(recvFunctional/Atomic/Timing) to require a PortId parameter
indicating the source port. Previously this information was passed by
setting the source field of the packet, and this is only required in
the case of a timing request.
With this patch, the use of the source and destination field is also
more restrictive, as they are only needed for timing accesses. The
modifications to these fields for atomic snoops is now removed
entirely, also making minor modifications to the cache.
This patch removes the Packet::NodeID typedef and unifies it with the
Port::PortId. The src and dest fields in the packet are used to hold a
port id (e.g. in the bus), and thus the two should actually be the
same.
The typedef PortID is now global (in base/types.hh) and aligned with
the ThreadID in terms of capitalisation and naming of the
InvalidPortID constant.
Before this patch, two flags were used for valid destination and
source, rather than relying on a named value (InvalidPortID), and
this is now redundant, as the src and dest field themselves are
sufficient to tell whether the current value is a valid port
identifier or not. Consequently, the VALID_SRC and VALID_DST are
removed.
As part of the cleaning up, a number of int parameters and local
variables are updated to use PortID.
Note that Ruby still has its own NodeID typedef. Furthermore, the
MemObject getMaster/SlavePort still has an int idx parameter with a
default value of -1 which should eventually change to PortID idx =
InvalidPortID.
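The resulting definitions are roughly as follows (the exact integer width is
assumed):

    #include <cstdint>

    // In base/types.hh, roughly:
    typedef int16_t PortID;
    const PortID InvalidPortID = (PortID)-1;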
This patch updates the comments for the src and dest fields to reflect
their actual use. Due to a number of patches (e.g. removing the
Broadcast flag), the old comments are no longer indicative of the
current usage.
This patch splits the PacketBuffer class into a RequestState and a
DeferredRequest and DeferredResponse. Only the requests need a
SenderState, and the deferred requests and responses only need an
associated point in time for the request and the response queue.
Besides the cleaning up, the goal is to simplify the transition to a
new port handshake, and with these changes, the two packet queues are
starting to look very similar to the generic packet queue, but
currently they do a few unique things relating to the NACK and
counting of requests/responses, such that the packet queue cannot be
conveniently used. This will be addressed in a later patch.
This patch removes unused commands and attributes from the packet to
avoid any confusion. It is part of an effort to clear up how and where
different commands and attributes are used.
The scheduling of the deadlock check event was being done incorrectly as the
clock was not being multiplied, so as to convert the time into ticks. This
patch removes that bug.
This patch adds a communication monitor MemObject that can be inserted
between a master and slave port to provide a range of statistics about
the communication passing through it. The communication monitor is
non-invasive and does not change any properties or timing of the
packets, with the exception of adding a sender state to be able to
track latency. The statistics are only collected in timing mode (not
atomic) to avoid slowing down any fast forwarding.
An example of the statistics captured by the monitor are: read/write
burst lengths, bandwidth, request-response latency, outstanding
transactions, inter transaction time, transaction count, and address
distribution. The monitor can be used in combination with periodic
resetting and dumping of stats (through schedStatEvent) to study the
behaviour over time.
In future patches, a selection of convenience scripts will be added to
aid in visualising the statistics collected by the monitor.
This patch adds a guarding if-statement to avoid forwarding
uncacheable requests (or rather their corresponding request packets)
to bus snoopers. These packets should never have any effect on the
caches, and thus there is no need to forward them to the snoopers.
This patch fixes a bug that caused snoop requests to be placed in a
packet queue. Instead, the packet is now sent immediately using
sendTimingSnoopReq, thus bypassing the packet queue and any normal
responses waiting to be sent.
This patch moves send/recvTiming and send/recvTimingSnoop from the
Port base class to the MasterPort and SlavePort, and also splits them
into separate member functions for requests and responses:
send/recvTimingReq, send/recvTimingResp, and send/recvTimingSnoopReq,
send/recvTimingSnoopResp. A master port sends requests and receives
responses, and also receives snoop requests and sends snoop
responses. A slave port has the reciprocal behaviour as it receives
requests and sends responses, and sends snoop requests and receives
snoop responses.
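In outline (a pure sketch, not the exact declarations), the two port classes
end up with reciprocal sets of member functions:

    struct Packet;
    using PacketPtr = Packet *;

    struct MasterPort {
        // sends requests and snoop responses...
        bool sendTimingReq(PacketPtr pkt);
        bool sendTimingSnoopResp(PacketPtr pkt);
        // ...and receives responses and snoop requests
        virtual bool recvTimingResp(PacketPtr pkt) = 0;
        virtual void recvTimingSnoopReq(PacketPtr pkt) = 0;
    };

    struct SlavePort {
        // sends responses and snoop requests (the latter cannot be refused,
        // hence no return value)...
        bool sendTimingResp(PacketPtr pkt);
        void sendTimingSnoopReq(PacketPtr pkt);
        // ...and receives requests and snoop responses
        virtual bool recvTimingReq(PacketPtr pkt) = 0;
        virtual bool recvTimingSnoopResp(PacketPtr pkt) = 0;
    };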
For all MemObjects that have only master ports or slave ports (but not
both), e.g. a CPU, or a PIO device, this patch merely adds more
clarity to what kind of access is taking place. For example, a CPU
port used to call sendTiming, and will now call
sendTimingReq. Similarly, a response previously came back through
recvTiming, which is now recvTimingResp. For the modules that have
both master and slave ports, e.g. the bus, the behaviour was
previously relying on branches based on pkt->isRequest(), and this is
now replaced with a direct call to the appropriate member function
depending on the type of access. Please note that send/recvRetry is
still shared by all the timing accessors and remains in the Port base
class for now (to maintain the current bus functionality and avoid
changing the statistics of all regressions).
The packet queue is split into a MasterPort and SlavePort version to
facilitate the use of the new timing accessors. All uses of the
PacketQueue are updated accordingly.
With this patch, the type of packet (request or response) is now well
defined for each type of access, and asserts on pkt->isRequest() and
pkt->isResponse() are now moved to the appropriate send member
functions. It is also worth noting that sendTimingSnoopReq no longer
returns a boolean, as the semantics do not allow snoop requests to be
rejected or stalled. All these assumptions are now explicitly part of
the port interface itself.
This patch makes some rather trivial simplifications to the bus in
that it changes the use of BusMasterPort and BusSlavePort pointers to
simply use MasterPort and SlavePort (iterators are also updated
accordingly).
This change is a step towards a future patch that introduces a
separation of the interface and the structural port itself.
This patch introduces the PortId type, moves the definition of
INVALID_PORT_ID to the Port class, and also gives every port an id to
reflect the fact that each element in a vector port has an
identifier/index.
Previously the bus and Ruby testers (and potentially other users of
the vector ports) added the id field in their port subclasses, and now
this functionality is always present as it is moved to the base class.
This patch simplifies the packet by removing the broadcast flag and
instead more firmly relying on (and enforcing) the semantics of
transactions in the classic memory system, i.e. request packets are
routed from a master to a slave based on the address, and when they
are created they have neither a valid source, nor destination. On
their way to the slave, the request packet is updated with a source
field for all modules that multiplex packets from multiple masters
(e.g. a bus). When a request packet is turned into a response packet
(at the final slave), it moves the potentially populated source field
to the destination field, and the response packet is routed through
any multiplexing components back to the master based on the
destination field.
Modules that connect multiplexing components, such as caches and
bridges store any existing source and destination field in the sender
state as a stack (just as before).
The packet constructor is simplified in that there is no longer a need
to pass the Packet::Broadcast as the destination (this was always the
case for the classic memory system). In the case of Ruby, rather than
using the parameter to the constructor we now rely on setDest, as
there is already another three-argument constructor in the packet
class.
In many places where the packet information was printed as part of
DPRINTFs, request packets would be printed with a numeric "dest" that
would always be -1 (Broadcast) and that field is now removed from the
printing.
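For illustration only, a hypothetical sketch (made-up field and function
names, not the actual Packet interface) of how the source and destination
fields are used for routing:

    #include <cstdio>

    const int INVALID_PORT_ID = -1;

    // Hypothetical, stripped-down packet: only the routing fields matter here.
    struct Pkt {
        int src;    // filled in by multiplexing components (e.g. a bus)
        int dest;   // filled in when the request is turned into a response
    };

    // A bus-like multiplexer records which of its ports the request came in on.
    void busForwardRequest(Pkt &p, int inputPortId)
    {
        p.src = inputPortId;
    }

    // The final slave moves the populated source field to the destination field.
    void makeResponse(Pkt &p)
    {
        p.dest = p.src;
        p.src = INVALID_PORT_ID;
    }

    int main()
    {
        Pkt p = {INVALID_PORT_ID, INVALID_PORT_ID};  // requests start with neither
        busForwardRequest(p, 3);      // request enters the bus on port 3
        makeResponse(p);              // the slave turns it into a response
        std::printf("route response back to port %d\n", p.dest);  // prints 3
        return 0;
    }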
This patch introduces port access methods that separates snoop
request/responses from normal memory request/responses. The
differentiation is made for functional, atomic and timing accesses and
builds on the introduction of master and slave ports.
Before the introduction of this patch, the packets belonging to the
different phases of the protocol (request -> [forwarded snoop request
-> snoop response]* -> response) all use the same port access
functions, even though the snoop packets flow in the opposite
direction to the normal packet. That is, a coherent master sends
normal request and receives responses, but receives snoop requests and
sends snoop responses (vice versa for the slave). These two distinct
phases now use different access functions, as described below.
Starting with the functional access, a master sends a request to a
slave through sendFunctional, and the request packet is turned into a
response before the call returns. In a system without cache coherence,
this is all that is needed from the functional interface. For the
cache-coherent scenario, a slave also sends snoop requests to coherent
masters through sendFunctionalSnoop, with responses returned within
the same packet pointer. This is currently used by the bus and caches,
and the LSQ of the O3 CPU. The send/recvFunctional and
send/recvFunctionalSnoop are moved from the Port super class to the
appropriate subclass.
Atomic accesses follow the same flow as functional accesses, with
request being sent from master to slave through sendAtomic. In the
case of cache-coherent ports, a slave can send snoop requests to a
master through sendAtomicSnoop. Just as for the functional access
methods, the atomic send and receive member functions are moved to the
appropriate subclasses.
The timing access methods are different from the functional and atomic
in that requests and responses are separated in time and
send/recvTiming are used for both directions. Hence, a master uses
sendTiming to send a request to a slave, and a slave uses sendTiming
to send a response back to a master, at a later point in time. Snoop
requests and responses travel in the opposite direction, similar to
what happens in functional and atomic accesses. With the introduction
of this patch, it is possible to determine the direction of packets in
the bus, and no longer necessary to look for both a master and a slave
port with the requested port id.
In contrast to the normal recvFunctional, recvAtomic and recvTiming
that are pure virtual functions, the recvFunctionalSnoop,
recvAtomicSnoop and recvTimingSnoop have a default implementation that
calls panic. This is to allow non-coherent master and slave ports to
not implement these functions.
This patch addresses a number of minor issues that cause problems when
compiling with clang >= 3.0 and gcc >= 4.6. Most importantly, it
avoids using the deprecated ext/hash_map and instead uses
unordered_map (and similarly so for the hash_set). To make use of the
new STL containers, g++ and clang have to be invoked with "-std=c++0x",
and this is now added for all gcc versions >= 4.6, and for clang >=
3.0. For gcc >= 4.3 and <= 4.5 and clang <= 3.0 we use the tr1
unordered_map to avoid the deprecation warning.
The addition of c++0x in turn causes a few problems, as the
compiler is more stringent and adds a number of new warnings. Below,
the most important issues are enumerated:
1) the use of namespaces is more strict, e.g. for isnan, and all
headers opening the entire namespace std are now fixed.
2) another issue caused by the more stringent compiler is the
narrowing of the embedded Python code, which used to be stored in a char
array and is now unsigned char since there were values larger than 128.
3) a particularly odd issue that arose with the new c++0x behaviour is
found in range.hh, where the operator< causes gcc to complain about
the template type parsing (the "<" is interpreted as the beginning
of a template argument), and the problem seems to be related to the
begin/end members introduced for the range-type iteration, which is
a new feature in c++11.
As a minor update, this patch also fixes the build flags for the clang
debug target that used to be shared with gcc and incorrectly used
"-ggdb".
This patch fixes a bug in Ruby that caused non-deterministic
simulation when changing the underlying hash map implementation. The
reason is order-dependent behaviour in combination with iteration over
the hash map contents. The two locations where a sorted container is
assumed are now changed to make use of a std::map instead of the
unordered hash map.
With this change, the stats changes slightly and the follow-on
changeset will update the relevant statistics.
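As a small standalone illustration (not Ruby code) of why the container
choice matters: iterating a std::map is deterministic, while the visit order
of an unordered hash map depends on the hash implementation.

    #include <cstdio>
    #include <map>

    int main()
    {
        // Keys are always visited in sorted order, regardless of insertion
        // order or of the standard library in use, so any behaviour that
        // depends on iteration order stays reproducible.
        std::map<int, const char *> sorted;
        sorted[3] = "c";
        sorted[1] = "a";
        sorted[2] = "b";
        for (std::map<int, const char *>::const_iterator it = sorted.begin();
             it != sorted.end(); ++it)
            std::printf("%d -> %s\n", it->first, it->second);  // always 1, 2, 3
        return 0;
    }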
Fixes checkpointing with respect to lost events after swapping event queues.
Also adds DPRINTFs to better understand what's going on when Ruby serializes
and unserializes.
This patch removes the assumption of having a single instance of
PhysicalMemory, and enables a distributed memory where the individual
memories in the system are each responsible for a single contiguous
address range.
All memories inherit from an AbstractMemory that encompasses the basic
behaviour of a random access memory, and provides untimed access
methods. What was previously called PhysicalMemory is now
SimpleMemory, and a subclass of AbstractMemory. All future types of
memory controllers should inherit from AbstractMemory.
To enable e.g. the atomic CPU and RubyPort to access the now
distributed memory, the system has a wrapper class, called
PhysicalMemory that is aware of all the memories in the system and
their associated address ranges. This class thus acts as an
infinitely-fast bus and performs address decoding for these "shortcut"
accesses. Each memory can specify that it should not be part of the
global address map (used e.g. by the functional memories by some
testers). Moreover, each memory can be configured to be reported to
the OS configuration table, useful for populating ATAG structures, and
any potential ACPI tables.
Checkpointing support currently assumes that all memories have the
same size and organisation when creating and resuming from the
checkpoint. A future patch will enable a more flexible
re-organisation.
--HG--
rename : src/mem/PhysicalMemory.py => src/mem/AbstractMemory.py
rename : src/mem/PhysicalMemory.py => src/mem/SimpleMemory.py
rename : src/mem/physical.cc => src/mem/abstract_mem.cc
rename : src/mem/physical.hh => src/mem/abstract_mem.hh
rename : src/mem/physical.cc => src/mem/simple_mem.cc
rename : src/mem/physical.hh => src/mem/simple_mem.hh
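For illustration only, a much-simplified standalone sketch (hypothetical
names) of the address decoding done by the PhysicalMemory wrapper for the
shortcut accesses:

    #include <cstdio>
    #include <vector>

    // Each memory covers one contiguous address range.
    struct AbstractMemSketch {
        long start, size;
        bool inRange(long addr) const { return addr >= start && addr < start + size; }
        void access(long addr) const { std::printf("memory at %#lx serves %#lx\n", start, addr); }
    };

    // Acts like an infinitely fast bus: decode the address, hand off directly.
    struct PhysicalMemSketch {
        std::vector<const AbstractMemSketch *> memories;

        void functionalAccess(long addr) const {
            for (size_t i = 0; i < memories.size(); ++i)
                if (memories[i]->inRange(addr)) { memories[i]->access(addr); return; }
            std::printf("no memory matches %#lx\n", addr);
        }
    };

    int main()
    {
        AbstractMemSketch m0 = {0x0, 0x10000}, m1 = {0x10000, 0x10000};
        PhysicalMemSketch phys;
        phys.memories.push_back(&m0);
        phys.memories.push_back(&m1);
        phys.functionalAccess(0x14000);   // decoded to the second memory
        return 0;
    }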
This patch removes the DRAM memory class in preparation for updates to
the memory system, with the first one introducing an abstract memory
class, and removing the assumption of a single physical memory.
This patch removes the physMemPort from the RubySequencer and instead
uses the system pointer to access the physmem. The system already
keeps track of the physmem and the valid memory address ranges, and
with this patch we merely make use of that existing functionality. The
memory is modified so that it is possible to call the access functions
(atomic and functional) without going through the port, and the memory
is allowed to be unconnected, i.e. have no ports (since Ruby does not
attach it like the conventional memory system).
This patch introduces the notion of a master and slave port in the C++
code, thus bringing the previous classification from the Python
classes into the corresponding simulation objects and memory objects.
The patch enables us to classify behaviours into the two bins and add
assumptions and enforce compliance, also simplifying the two
interfaces. As a starting point, isSnooping is confined to master
ports, and getAddrRanges to slave ports. More of these specialisations
are to come in later patches.
The getPort function is now getMasterPort and getSlavePort, and
returns a port reference rather than a pointer as NULL would never be
a valid return value. The default implementation of these two
functions is placed in MemObject, and calls fatal.
The one drawback with this specific patch is that it requires some
code duplication, e.g. QueuedPort becomes QueuedMasterPort and
QueuedSlavePort, and BusPort becomes BusMasterPort and BusSlavePort
(avoiding multiple inheritance). With the later introduction of the
port interfaces, moving the functionality outside the port itself, a
lot of the duplicated code will disappear again.
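A rough sketch of the default behaviour (hypothetical class names; the real
MemObject uses gem5's fatal(), which is only mimicked here):

    #include <cstdio>
    #include <cstdlib>
    #include <string>

    struct MasterPort {};
    struct SlavePort {};

    struct MemObjectSketch {
        // References cannot be NULL, so an object that does not have the
        // requested port must fail loudly rather than return a null pointer.
        virtual MasterPort &getMasterPort(const std::string &if_name) {
            std::fprintf(stderr, "fatal: no master port named %s\n", if_name.c_str());
            std::exit(1);
        }
        virtual SlavePort &getSlavePort(const std::string &if_name) {
            std::fprintf(stderr, "fatal: no slave port named %s\n", if_name.c_str());
            std::exit(1);
        }
        virtual ~MemObjectSketch() {}
    };

    // A CPU-like object only has master ports, so it only overrides getMasterPort.
    struct CpuSketch : MemObjectSketch {
        MasterPort dataPort;
        virtual MasterPort &getMasterPort(const std::string &) { return dataPort; }
    };

    int main()
    {
        CpuSketch cpu;
        MasterPort &p = cpu.getMasterPort("dcache_port");
        (void)p;
        return 0;
    }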
This patch unifies the recvFunctional, recvAtomic and recvTiming to
all be based on a similar structure: 1) extract information about the
incoming packet, 2) send it out to the appropriate snoopers, 3)
determine where it is going, and 4) forward it to the right
destination. The naming of variables across the different access
functions is now consistent as well.
Additionally, the patch introduces the member functions releaseBus and
retryWaiting to better distinguish between the two cases when we
should tell a sender to retry. The first case is when the bus goes
from busy to idle, and the second case is when it receives a retry
from a destination that did not immediately accept a packet.
As a very minor change, the MMU debug flag is no longer used in the bus.
This patch decouples the queueing and the port interactions to
simplify the introduction of the master and slave ports. By separating
the queueing functionality from the port itself, it becomes much
easier to distinguish between master and slave ports, and still retain
the queueing ability for both (without code duplication).
As part of the split into a PacketQueue and a port, there is now also
a hierarchy of two port classes, QueuedPort and SimpleTimingPort. The
QueuedPort is useful for ports that want to leave the packet
transmission of outgoing packets to the queue and is used by both
master and slave ports. The SimpleTimingPort inherits from the
QueuedPort and adds the implementation of recvTiming and recvFunctional
through recvAtomic.
The PioPort and MessagePort are cleaned up as part of the changes.
--HG--
rename : src/mem/tport.cc => src/mem/packet_queue.cc
rename : src/mem/tport.hh => src/mem/packet_queue.hh
This patch removes the overriding of "-Werror" in a handful of
cases. The code compiles with gcc 4.6.3 and clang 3.0 without any
warnings, and thus without any errors. There are no functional changes
introduced by this patch. In the future, rather than bypassing
"-Werror", address the warnings.
1. --implicit-cache behavior is default.
2. makeEnv in src/SConscript is conditionally called.
3. decider set to MD5-timestamp
4. NO_HTML build option changed to SLICC_HTML (defaults to False)
This patch renames the sendTiming member function in the RubyPort to
avoid inadvertently hiding Port::sendTiming (discovered through some
rather painful debugging). The RubyPort does, in fact, rely on the
functionality of the queued port and the implementation merely
schedules a send the next cycle. The new name for the member function
is sendNextCycle to better reflect this behaviour.
In the unlikely event that we ever shift to using C++11 the member
functions in Port should have a "final" identifier to prevent any
overriding in derived classes.
The block is never inserted because it's the one extra block in the cache, but
it can be invalidated twice in a row. In that case the block doesn't have a
new master id (because it was never inserted); however, it is still valid and
the accounting goes wrong at that point.
This is a trivial patch that merely makes all the member functions of
the port proxies const. There is no good reason why they should not
be, and this change only serves to make it explicit that they are not
modified through their use.
This patch splits the two cache ports into a master (memory-side) and
slave (cpu-side) subclass of port with slightly different
functionality. For example, it is only the CPU-side port that blocks
incoming requests, and only the memory-side port that schedules send
events outside of what the transmit list dictates.
This patch simplifies the two classes by relying further on
SimpleTimingPort and also generalises the latter to better accommodate
the changes (introducing trySendTiming and scheduleSend). The
memory-side cache port overrides sendDeferredPacket to be able to not
only send responses from the transmit list, but also send requests
based on the MSHRs.
A follow on patch further simplifies the SimpleTimingPort and the
cache ports.
This patch simplifies the mport in preparation for a split into a
master and slave role for the message ports. In particular,
sendMessageAtomic was only used in a single location, and similarly
sendMessageTiming. The affected interrupt device is updated
accordingly.
This patch moves the readBlob/writeBlob/memsetBlob from the Port class
to the PortProxy class, thus making a clear separation of the basic
port functionality (recv/send functional/atomic/timing), and the
higher-level functional accessors available on the port proxies.
There are only a few places in the code base where the blob functions
were used on ports, and they are all for peeking into the memory
system without making a normal memory access (in the memtest, and the
malta and tsunami pchip). The memtest also exemplifies how easy it is
to create a non-translating proxy if desired. The malta and tsunami
pchip used a slave port to perform a functional read, and this is now
changed to rely on the physProxy of the system (to which they already
have a pointer).
This patch is adding a clearer design intent to all objects that would
not be complete without a port proxy by making the proxies members
rather than dynamically allocated. In essence, if NULL would not be a
valid value for the proxy, then we avoid using a pointer to make this
clear.
The same approach is used for the methods using these proxies, such as
loadSections, that now use references rather than pointers to better
reflect the fact that NULL would not be an acceptable value (in fact
the code would break and that is how this patch started out).
Overall the concept of "using a reference to express unconditional
composition where a NULL pointer is never valid" could be done on a
much broader scale throughout the code base, but for now it is only
done in the locations affected by the proxies.
This patch moves all port creation from the getPort method to be
consistently done in the MemObject's constructor. This is possible
thanks to the Swig interface passing the length of the vector ports.
Previously there was a mix of: 1) creating the ports as members (at
object construction time) and using getPort for the name resolution,
or 2) dynamically creating the ports in the getPort call. This is now
uniform. Furthermore, objects that would not be complete without a
port have these ports as members rather than having pointers to
dynamically allocated ports.
This patch also enables an elaboration-time enumeration of all the
ports in the system which can be used to determine the masterId.
This patch continues the unification of how the different CPU models
create and share their instruction and data ports. Most importantly,
it forces every CPU to have an instruction and a data port, and gives
these ports explicit getters in the BaseCPU (getDataPort and
getInstPort). The patch helps simplify the code, makes
assumptions more explicit, and further eases future patches related to
the CPU ports.
The biggest changes are in the in-order model (that was not modified
in the previous unification patch), which now moves the ports from the
CacheUnit to the CPU. It also distinguishes the instruction fetch and
load-store unit from the rest of the resources, and avoids the use of
indices and casting in favour of keeping track of these two units
explicitly (since they are always there anyway). The atomic, timing
and O3 model simply return references to their already existing ports.
This patch adds a check in the findPort method to ensure that an
invalid port id is never returned. Previously this could happen if no
default port was set, and no address matched the request, in which
case -1 was returned causing a SEGFAULT when using the id to index in
the port array. To clean things up further a symbolic name is added
for the invalid port id.
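A hypothetical sketch of the added check (simplified names; the real bus
keeps a range-based map and uses gem5's panic()):

    #include <cstdio>
    #include <cstdlib>
    #include <map>

    const int INVALID_PORT_ID = -1;

    struct BusSketch {
        std::map<long, int> portMap;            // address -> port id
        int defaultPortId;

        BusSketch() : defaultPortId(INVALID_PORT_ID) {}

        int findPort(long addr) const {
            std::map<long, int>::const_iterator it = portMap.find(addr);
            if (it != portMap.end())
                return it->second;
            if (defaultPortId != INVALID_PORT_ID)
                return defaultPortId;
            // Previously -1 leaked out here and was later used to index the
            // port array; now we fail immediately instead.
            std::fprintf(stderr, "panic: no port found for address %#lx\n", addr);
            std::exit(1);
        }
    };

    int main()
    {
        BusSketch bus;
        bus.portMap[0x1000] = 2;
        std::printf("port %d\n", bus.findPort(0x1000));
        return 0;
    }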
This patch classifies all ports in Python as either Master or Slave
and enforces a binding of master to slave. Conceptually, a master (such
as a CPU or DMA port) issues requests, and receives responses, and
conversely, a slave (such as a memory or a PIO device) receives
requests and sends back responses. Currently there is no
differentiation between coherent and non-coherent masters and slaves.
The classification as master/slave also involves splitting the dual
role port of the bus into a master and slave port and updating all the
system assembly scripts to use the appropriate port. Similarly, the
interrupt devices have to have their int_port split into a master and
slave port. The intdev and its children have minimal changes to
facilitate the extra port.
Note that this patch does not enforce any port typing in the C++
world, it merely ensures that the Python objects have a notion of the
port roles and are connected in an appropriate manner. This check is
carried out when two ports are connected, e.g. bus.master =
memory.port. The following patches will make use of the
classifications and specialise the C++ ports into masters and slaves.
This patch fixes the cache stats to use the new request ids.
Cache stats also display the requestor names in the vector subnames.
Most cache stats now include "nozero" and "nonan" flags to reduce the
amount of excessive cache stat dump. Also, simplified
incMissCount()/incHitCount() functions.
This change adds a master id to each request object which can be
used to identify every device in the system that is capable of issuing a request.
This is part of the way to removing the numCpus+1 stats in the cache and
replacing them with the master ids. This is one of a series of changes
that make way for the stats output to be changed to python.
This patch removes the calls to isTagPresent() from Sequencer.cc. These
calls are made just for setting the cache block to have been most recently
used. The calls have been folded in to the function setMRU().
This patch adds support for stalling the requests queued up at different
controllers for the MESI CMP directory protocol. Earlier the controllers
would recycle the requests using some fixed latency. This results in
younger requests getting serviced first at times, and can result in
starvation. Instead all the requests that need a particular block to be
in a stable state are moved to a separate queue, where they wait till
that block returns to a stable state and then they are processed.
This patch removes the onRetryList field from the BusPort class and
entirely relies on the retryList which holds all ports that are
waiting to retry. The onRetryList field and the retryList were
previously used with overloaded functionalities and only one is really
needed (there were also checks to assert they held the same
information). After this patch the bus ports will be split into master
and slave ports and this simplifies that transition.
This patch adds the necessary flags to the SConstruct and SConscript
files for compiling using clang 2.9 and later (on Ubuntu et al and OSX
XCode 4.2), and also cleans up a bunch of compiler warnings found by
clang. Most of the warnings are related to hidden virtual functions,
comparisons with unsigneds >= 0, and if-statements with empty
bodies. A number of mismatches between struct and class are also
fixed. clang 2.8 is not working as it has problems with class names
that occur in multiple namespaces (e.g. Statistics in
kernel_stats.hh).
clang has a bug (http://llvm.org/bugs/show_bug.cgi?id=7247) which
causes confusion between the container std::set and the function
Packet::set, and this is currently addressed by not including the
entire namespace std, but rather selecting e.g. "using std::vector" in
the appropriate places.
This patch is a very straight-forward simplification, removing the
unnecessary otherPort pointer from the cache port. The pointer was only
used to forward range changes, and the address range is fixed for the
cache. Removing the pointer simplifies the transition to master/slave
ports.
Brings the CheckerCPU back to life to allow FS and SE checking of the
O3CPU. These changes have only been tested with the ARM ISA. Other
ISAs potentially require modification.
This patch makes the physMemPort of the RubyPort a PioPort rather than
an M5Port. This reflects the fact that the M5Port and PioPort have
different roles. The M5Port is really a coherent slave that is
connected to the CPUs and other coherent masters of the system,
e.g. DMA ports. The PioPort, on the other hand, is a master port that
is connected to the memory and other slaves, for example the pio
devices.
This simplifies future changes into master/slave ports and is
consistent with the port roles throughout the system.
CopyStringOut() used an improper index when setting the null
character, which would result in zeroing a random byte
of memory after (out of bounds) the character array.
This patch implements the functionality for forwarding invalidations and
replacements from the L1 cache of the Ruby memory system to the O3 CPU. The
implementation adds a list of ports to RubyPort. Whenever a replacement or an
invalidation is performed, the L1 cache forwards this to all the ports, which
is the LSQ in case of the O3 CPU.
This command will be sent from the memory system (Ruby) to the LSQ of
an O3 CPU so that the LSQ, if it needs to, invalidates the address in
the request packet.
This patch removes the idiosyncratic nature of the default bus port
and makes it yet another port in the list of interfaces. Rather than
having a specific pointer to the default port we merely track the
identifier of this port. This change makes future port diversification
easier and overall cleans up the bus code.
This patch makes the bus bridge uni-directional and specialises the
bus ports to be a master port and a slave port. This greatly
simplifies the assumptions on both sides as either port only has to
deal with requests or responses. The following patches introduce the
notion of master and slave ports, and would not be possible without
this split of responsibilities.
In making the bridge unidirectional, the address range mechanism of
the bridge is also changed. For the cases where communication is
taking place both ways, an additional bridge is needed. This causes
issues with the existing mechanism, as the buses cannot determine
when to stop iterating the address updates from the two bridges. To
avoid this issue, and also greatly simplify the specification, the
bridge now has a fixed set of address ranges, specified at creation
time.
The functional ports are no longer used and this patch cleans up the
legacy that is still present in buses, memories, CPUs etc. Note that
this does not refer to the class FunctionalPort (already removed), but
rather ports with the name (and use) functional.
This patch simplifies the address-range determination mechanism and
also unifies the naming across ports and devices. It further splits
the queries for determining if a port is snooping and what address
ranges it responds to (aiming towards a separation of
cache-maintenance ports and pure memory-mapped ports). Default
behaviours are such that most ports do not have to define isSnooping,
and master ports need not implement getAddrRanges.
This patch removes the default port and instead relies on the peer
being set to NULL initially. The binding check (i.e. is a port
connected or not) will eventually be moved to the init function of the
modules.
This patch removes the inheritance of EventManager from the ports and
moves all responsibility for event queues to the owner. Eventually the
event manager should be the interface block, which could either be the
structural owner or a subblock like a LSQ in the O3 CPU for example.
Port proxies are used to replace non-structural ports, and thus enable
all ports in the system to correspond to a structural entity. This has
the advantage of accessing memory through the normal memory subsystem
and thus allowing any constellation of distributed memories, address
maps, etc. Most accesses are done through the "system port" that is
used for loading binaries, debugging etc. For the entities that belong
to the CPU, e.g. threads and thread contexts, they wrap the CPU data
port in a port proxy.
The following replacements are made:
FunctionalPort > PortProxy
TranslatingPort > SETranslatingPortProxy
VirtualPort > FSTranslatingPortProxy
--HG--
rename : src/mem/vport.cc => src/mem/fs_translating_port_proxy.cc
rename : src/mem/vport.hh => src/mem/fs_translating_port_proxy.hh
rename : src/mem/translating_port.cc => src/mem/se_translating_port_proxy.cc
rename : src/mem/translating_port.hh => src/mem/se_translating_port_proxy.hh
This patch changes the access permission for the WB_E_W state from
Busy to Read_Write to avoid having issues in follow-on patches with
functional accesses going through Ruby. This change was made after
consultation with all involved parties and is more of a work-around
than a fix.
This patch changes the functionalAccess member function in the cache
model such that it is aware of what port the access came from, i.e. if
it came from the CPU side or from the memory side. By adding this
information, it is possible to respect the 'forwardSnoops' flag for
snooping requests coming from the memory side and not forward
them. This fixes an outstanding issue with the IO bus getting accesses
that have no valid destination port and also cleans up future changes
to the bus model.
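A minimal sketch of the idea (hypothetical names, not the actual cache code):
the access carries which side it arrived on, and forwardSnoops is honoured
for memory-side accesses.

    #include <cstdio>

    struct PacketSketch { long addr; };

    struct CacheSketch {
        bool forwardSnoops;

        void functionalAccess(PacketSketch *pkt, bool fromCpuSide) {
            // ... check the cache's own tags first (omitted) ...
            if (fromCpuSide) {
                std::printf("forward %#lx towards memory\n", pkt->addr);
            } else if (forwardSnoops) {
                std::printf("forward snoop %#lx towards the CPU side\n", pkt->addr);
            } else {
                // Memory-side snoops stop here, so the IO bus never sees
                // accesses that have no valid destination port.
                std::printf("drop snoop %#lx\n", pkt->addr);
            }
        }
    };

    int main()
    {
        CacheSketch c = {false};
        PacketSketch p = {0x2000};
        c.functionalAccess(&p, true);   // CPU-side access: forwarded downwards
        c.functionalAccess(&p, false);  // memory-side snoop: dropped
        return 0;
    }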
The definition of the class CacheMsg was removed long back. A declaration
had still survived, which was recently removed. Since the PerfectCacheMemory
class relied on this particular declaration, its absence led to compilation
breaking down. Hence this patch.
This patch resurrects Ruby's cache warmup capability. It essentially
makes use of all the infrastructure that was added to the controllers,
memories and the cache recorder.
This patch adds a function to the Sparse Memory so that the blocks can be
recorded in a cache trace. The blocks are added to the cache recorder
which can later write them into a file.
This patch adds functions to the memory vector class that can be used for
collating memory pages to raw trace and for populating pages from a raw
trace.
The SparseMemEntry structure includes just one void* pointer. It seems
unnecessary that we have a structure for this. The patch removes the
structure and makes use of a typedef on void* instead.
This adds the derived class FunctionalPacket to fix a long standing
deficiency in the Packet class where it was unable to handle finding data to
partially satisfy a functional access. Made this a derived class as
functional accesses are used only in certain contexts and to not add any
additional overhead to the existing Packet class.
This constant is currently in System.hh, but is only used in Set.hh. It
is being moved to Set.hh to remove this artificial dependence of Set.hh
on System.hh.
--HG--
extra : rebase_source : 683c43a5eeaec4f5f523b3ea32953a07f65cfee7
This patch removes calls to uu_ProfileMiss from transitions where the request
is satisfied by the L2 cache controller.
--HG--
extra : rebase_source : e59fe7c6cd5795c0019cf178dd3b062d73cc2ff5
This patch adds and removes included files from some of the files so as to
organize the includes, remove some false dependencies, and include some files
directly instead of transitively.
--HG--
extra : rebase_source : 09b482ee9ae00b3a204ace0c63550bc3ca220134
SLICC uses pointers for cache and TBE entries but not for directory entries.
This patch changes the protocols, SLICC and Ruby memory system so that even
directory entries are referenced using pointers.
--HG--
extra : rebase_source : abeb4ac78033d003153751f216fd1948251fcfad
This patch changes the implementation of Ruby's recvTiming() function so
that it pushes a packet in to the Sequencer instead of a RubyRequest. This
requires changes in the Sequencer's makeRequest() and issueRequest()
functions, as they also need to operate on a Packet instead of RubyRequest.
This patch adds a fault model, which provides the probability of a number of
architectural faults in the interconnection network (e.g., data corruption,
misrouting). These probabilities can be used to realistically inject faults
in GARNET and faithfully evaluate the effectiveness of novel resilient NoC
architectures.
This patch removes some of the unused typedefs. It also moves
some of the typedefs from Global.hh to TypeDefines.hh. The patch
also eliminates the file NodeID.hh.
In RubySlicc_ComponentMapping.hh, certain '#define's have been used for
mapping MachineType to GenericMachineType. These '#define's are being
eliminated and the code will now be generated by SLICC instead. Some of
the unused functions from RubySlicc_ComponentMapping.sm are also being
eliminated.
PageTable supported an allocate() call that called back
through the Process to allocate memory, but did not have
a method to map addresses without allocating new pages.
It makes more sense for Process to do the allocation, so
this method was renamed allocateMem() and moved to Process,
and uses a new map() call on PageTable.
The remaining uses of the process pointer in PageTable
were only to get the name and the PID, so by passing these
in directly in the constructor, we can make PageTable
completely independent of Process.
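For illustration, a much-simplified sketch (hypothetical names and page size)
of the resulting split of responsibilities:

    #include <cstdio>
    #include <map>
    #include <string>

    // The page table only needs a name and a pid, passed in at construction,
    // plus a map() call that installs a translation without allocating anything.
    struct PageTableSketch {
        std::string name;
        int pid;
        std::map<long, long> vtop;   // virtual page -> physical page

        PageTableSketch(const std::string &n, int p) : name(n), pid(p) {}

        void map(long vaddr, long paddr) { vtop[vaddr] = paddr; }
    };

    // The Process decides to allocate backing memory and then records the
    // mappings through the page table.
    struct ProcessSketch {
        PageTableSketch pTable;
        long nextPhysPage;

        ProcessSketch() : pTable("process.ptable", 100), nextPhysPage(0) {}

        void allocateMem(long vaddr, int npages) {
            for (int i = 0; i < npages; ++i)
                pTable.map(vaddr + i * 4096L, (nextPhysPage++) * 4096L);
        }
    };

    int main()
    {
        ProcessSketch proc;
        proc.allocateMem(0x400000, 2);
        std::printf("mapped %d pages\n", (int)proc.pTable.vtop.size());
        return 0;
    }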
Initialize flags via the Event constructor instead of calling
setFlags() in the body of the derived class's constructor. I
forget exactly why, but this made life easier when implementing
multi-queue support.
Also rename Event::getFlags() to isFlagSet() to better match
common usage, and get rid of some unused Event methods.
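A standalone sketch of the pattern only (made-up flag values; the real Event
class lives in src/sim):

    #include <cstdio>

    typedef unsigned Flags;
    const Flags AutoDelete = 0x1;
    const Flags Scheduled  = 0x2;

    class EventSketch {
        Flags flags;
      public:
        explicit EventSketch(Flags f = 0) : flags(f) {}
        bool isFlagSet(Flags f) const { return (flags & f) != 0; }
        virtual ~EventSketch() {}
    };

    class CleanupEventSketch : public EventSketch {
      public:
        // Flags go in through the base-class constructor instead of a
        // setFlags() call in the body of the derived constructor.
        CleanupEventSketch() : EventSketch(AutoDelete) {}
    };

    int main()
    {
        CleanupEventSketch ev;
        std::printf("auto-delete: %d, scheduled: %d\n",
                    (int)ev.isFlagSet(AutoDelete), (int)ev.isFlagSet(Scheduled));
        return 0;
    }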
Check that we're not currently writing back an address the prefetcher is trying
to prefetch before issuing it. We previously checked the mshrQueue and the cache
itself, but forgot to check the writeBuffer. This fixes a memory corruption
issue with an L2 prefetcher.
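A simplified sketch (hypothetical names) of the check the prefetcher path now
performs before issuing an address:

    #include <algorithm>
    #include <cstdio>
    #include <vector>

    struct CacheSketch {
        std::vector<long> tags, mshrQueue, writeBuffer;

        static bool contains(const std::vector<long> &v, long addr) {
            return std::find(v.begin(), v.end(), addr) != v.end();
        }

        // The address must not be in the cache, the MSHRs, or the write
        // buffer; the last check is the one that was previously missing.
        bool canIssuePrefetch(long addr) const {
            return !contains(tags, addr) &&
                   !contains(mshrQueue, addr) &&
                   !contains(writeBuffer, addr);
        }
    };

    int main()
    {
        CacheSketch c;
        c.writeBuffer.push_back(0x80);
        std::printf("prefetch 0x80 allowed: %d\n", (int)c.canIssuePrefetch(0x80)); // 0
        std::printf("prefetch 0x40 allowed: %d\n", (int)c.canIssuePrefetch(0x40)); // 1
        return 0;
    }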
Do some minor cleanup of some recently added comments, a warning, and change
other instances of stack extension to be like what's now being done for x86.
Even though the code is safe, the compiler flags a warning here, which is
treated as an error for fast/opt builds. I know it's redundant, but it has no
side effects and fixes the compile.
In the current implementation of Functional Accesses, it's very hard to
implement broadcast or snooping protocols where the memory has no idea if it
has exclusive access to a cache block or not. Without this knowledge, making
sure the RW vs. RO permissions are right is next to impossible. So we add a
new state called Backing_Store to enable the conveyance that this is the backup
storage for a block, so that it can be written if it is the only possibly RW
block in the system, or written even if there is another RW block in the
system, without causing problems.
Also, a small change to actually set the m_name field for each Controller so
that debugging can be easier. Now you can access a controller's name just by
controller->getName().
This patch replaces RUBY with PROTOCOL in all the SConscript files as
the environment variable that decides whether or not certain components
of the simulator are compiled.
This patch drops RUBY as a compile time option. Instead the PROTOCOL option
is used to figure out whether or not to build Ruby. If the specified protocol
is 'None', then Ruby is not compiled.
Currently, functions associated with a controller go into separate files.
This patch puts all the functions in the controller's .cc file. This should
hopefully take away some time from compilation.
Prefetch requests issued from the L2 or below wouldn't check if valid data is
present higher in the system. If a prefetch into the L2 occurred at the same
time as a writeback from a higher-level cache, the dirty data could be replaced
by unmodified data from memory.
This makes it possible to use the grammar multiple times and use the multiple
instances concurrently. This makes implementing an include statement as part
of a grammar possible.
Addition of functional access support to Ruby necessitated some changes to
the way coherence protocols are written. I had forgotten to update the
Network_test protocol. This patch makes those updates.
This patch provides functional access support in Ruby. Currently only
the M5Port of RubyPort supports functional accesses. Support for
functional accesses through the PioPort will be added as a separate patch.
The code for Set class was written under the assumption that
std::numeric_limits<long>::digits returns the number of bits used for
data type long, which was presumed to be either 32 or 64. But the return value
is actually one less, that is, it is either 31 or 63. The value is now
being incremented by 1 so as to correctly set it.
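A standalone example of the off-by-one being corrected (plain C++, not the
Set class itself):

    #include <climits>
    #include <cstdio>
    #include <limits>

    int main()
    {
        // digits counts only the value bits of a signed type, so it is 31 or
        // 63, not 32 or 64; adding one gives the full width of long.
        const int value_bits = std::numeric_limits<long>::digits;
        const int total_bits = value_bits + 1;
        std::printf("digits = %d, bits in long = %d, CHAR_BIT*sizeof(long) = %d\n",
                    value_bits, total_bits, (int)(CHAR_BIT * sizeof(long)));
        return 0;
    }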
The access permissions for the directory entries are not being set correctly.
This is because pointers are not used for handling directory entries.
Get and set functions for access permissions have been added to the
Controller state machine. The changePermission() function provided by the
AbstractEntry and AbstractCacheEntry classes has been exposed to SLICC
code once again. The set_permission() functionality has been removed.
NOTE: Each protocol will have to define these get and set functions in order
to compile successfully.
Currently, the machine name is appended before any of the functions
defined within the sm files. This is not necessary, and it also
means that these functions cannot be used outside the sm files.
This patch does away with the prefixes. Note that the generated
C++ files in which the code for these functions is present are
still named such that the machine name is the prefix.
Re-enabling implicit parenting (see previous patch) causes current
Ruby config scripts to create some strange hierarchies and generate
several warnings. This patch makes three general changes to address
these issues.
1. The order of object creation in the ruby config files makes the L1
caches children of the sequencer rather than the controller; these
config files are rewritten to assign the L1 caches to the
controller first.
2. The assignment of the sequencer list to system.ruby.cpu_ruby_ports
causes the sequencers to be children of system.ruby, generating
warnings because they are already parented to their respective
controllers. Changing this attribute to _cpu_ruby_ports fixes this
because the leading underscore means this is now treated as a plain
Python attribute rather than a child assignment. As a result, the
configuration hierarchy changes such that, e.g.,
system.ruby.cpu_ruby_ports0 becomes system.l1_cntrl0.sequencer.
3. In the topology classes, the routers become children of some random
internal link node rather than direct children of the topology.
The topology classes are rewritten to assign the routers to the
topology object first.
The virtual channels within "response" vnets are made buffers_per_data_vc
deep (default=4), while virtual channels within other vnets are made
buffers_per_ctrl_vc deep (default = 1). This is for accurate power estimates.
Identifying response vnets versus other vnets will allow garnet to
determine which vnets will carry data packets, and which will carry
ctrl packets, and use appropriate buffer sizes (since data packets are larger
than ctrl packets). This in turn allows the orion power model to accurately
estimate buffer power.
Renamed (message) class to vnet for consistency with rest of ruby.
Moved some parameters specific to fixed/flexible garnet networks into their
corresponding py files.
The RubyMemory flag wasn't used in the code, creating large gaps in trace
output. This patch replaces cprintfs with DPRINTFs using RubyMemory in the
memory controller. It also deprecates the usage of the setDebug() pure virtual
function in the AbstractMemoryOrCache class, as well as the m_debug/cprintf
functions in MemoryControl.hh/cc.
The simple network's endpoint bandwidth value is used to adjust the overall
bandwidth of the network. Specifically, the ratio between the endpoint bandwidth
and the MESSAGE_SIZE_MULTIPLIER determines the increase. By setting the value
to 1000, the bandwidth factor specified in the links translates to
the link bandwidth in bytes. Previously, it was increasing that value by 10.
This patch will likely require a reset of the ruby regression tester stats.
Moved the buffer_size, endpoint_bandwidth, and adaptive_routing params out of
the top-level parent network object and to only those networks that actually
use those parameters.
This patch ensures that both Garnet and the simple networks use the bw value
specified in the topology. To do so, the patch generalizes the specification
of bw for basic links. This value is then translated to the specific value
used by the simple and Garnet networks. Since Garnet does not support
non-uniform link bandwidth, the patch also adds a check to ensure all link
bandwidths are equal.
--HG--
rename : src/mem/ruby/network/BasicLink.cc => src/mem/ruby/network/simple/SimpleLink.cc
rename : src/mem/ruby/network/BasicLink.hh => src/mem/ruby/network/simple/SimpleLink.hh
rename : src/mem/ruby/network/BasicLink.py => src/mem/ruby/network/simple/SimpleLink.py
This patch converts links and switches from second class simobjects that were
virtually ignored by the networks (both simple and Garnet) to first class
simobjects that directly correspond to C++ objects manipulated by the
topology and network classes. This is especially true for Garnet, where the
links and switches directly correspond to specific C++ objects.
By making this change, many aspects of the Topology class were simplified.
--HG--
rename : src/mem/ruby/network/Network.cc => src/mem/ruby/network/BasicLink.cc
rename : src/mem/ruby/network/Network.hh => src/mem/ruby/network/BasicLink.hh
rename : src/mem/ruby/network/Network.cc => src/mem/ruby/network/garnet/fixed-pipeline/GarnetLink_d.cc
rename : src/mem/ruby/network/Network.hh => src/mem/ruby/network/garnet/fixed-pipeline/GarnetLink_d.hh
rename : src/mem/ruby/network/garnet/fixed-pipeline/GarnetNetwork_d.py => src/mem/ruby/network/garnet/fixed-pipeline/GarnetLink_d.py
rename : src/mem/ruby/network/garnet/fixed-pipeline/GarnetNetwork_d.py => src/mem/ruby/network/garnet/fixed-pipeline/GarnetRouter_d.py
rename : src/mem/ruby/network/Network.cc => src/mem/ruby/network/garnet/flexible-pipeline/GarnetLink.cc
rename : src/mem/ruby/network/Network.hh => src/mem/ruby/network/garnet/flexible-pipeline/GarnetLink.hh
rename : src/mem/ruby/network/garnet/fixed-pipeline/GarnetNetwork_d.py => src/mem/ruby/network/garnet/flexible-pipeline/GarnetLink.py
rename : src/mem/ruby/network/garnet/fixed-pipeline/GarnetNetwork_d.py => src/mem/ruby/network/garnet/flexible-pipeline/GarnetRouter.py
Moved the Topology class to the top network directory because it is shared by
both the simple and Garnet networks.
--HG--
rename : src/mem/ruby/network/simple/Topology.cc => src/mem/ruby/network/Topology.cc
rename : src/mem/ruby/network/simple/Topology.hh => src/mem/ruby/network/Topology.hh
At the same time, rename the trace flags to debug flags since they
have broader usage than simply tracing. This means that
--trace-flags is now --debug-flags and --trace-help is now --debug-help
Fixed an error regarding DMA for uniprocessor systems. Basically removed an
overly aggressive optimization that led to inconsistent state between the
cache and the directory.
This function duplicates the functionality of allocate() exactly, except that
it does not return a value. In protocols where you just want to allocate a block
but do not want that block to be your implicitly passed cache_entry, use this function.
Otherwise, SLICC will complain if you do not consume the pointer returned by allocate(),
and if you do a dummy assignment Entry foo := cache.allocate(address), the C++
compiler will complain of an unused variable. This is kind of a hack to get around
those issues, but suggestions welcome.
Before this changeset, all local variables of type Entry and TBE were considered
to be pointers, but an immediate use of said variables would not be automatically
dereferenced in SLICC-generated code. Instead, dereferences occurred when such
variables were passed to functions, and were automatically dereferenced in
the bodies of the functions (e.g. the implicitly passed cache_entry).
This is a more general way to do it, which leaves in place the
assumption that parameters to functions and local variables of type AbstractCacheEntry
and TBE are always pointers, but instead of dereferencing to access member variables
on a contextual basis, the dereferencing automatically occurs on a type basis at the
moment a member is being accessed. So, now, things you can do that you couldn't before
include:
Entry foo := getCacheEntry(address);
cache_entry.DataBlk := foo.DataBlk;
or
cache_entry.DataBlk := getCacheEntry(address).DataBlk;
or even
cache_entry.DataBlk := static_cast(Entry, pointer, cache.lookup(address)).DataBlk;
This is a substitute for MessageBuffers between controllers where you don't
want messages to actually go through the Network, because requests/responses can
always get reordered with respect to one another (even if you turn off Randomization and turn on Ordered)
because you are, after all, going through a network with contention. For systems where you model
multiple controllers that are very tightly coupled and do not actually go through a network,
it is a pain to have to write a coherence protocol to account for mixed up request/response orderings
despite the fact that it's completely unrealistic. This is *not* meant as a substitute for real
MessageBuffers when messages do in fact go over a network.
It is useful for Ruby to understand from whence request packets came.
This has all request packets going into Ruby pass the contextId value, if
it exists. This supplants the old libruby proc_id value passed around in
all the Messages, so I've also removed the unused unsigned proc_id; member
generated by SLICC for all Message types.
The goal of the patch is to do away with the CacheMsg class currently in use
in coherence protocols. In place of CacheMsg, the RubyRequest class will be used.
This class is already present in slicc_interface/RubyRequest.hh. In fact,
objects of class CacheMsg are generated by copying values from a RubyRequest
object.
The tester code is in testers/networktest.
The tester can be invoked by configs/example/ruby_network_test.py.
A dummy coherence protocol called Network_test is also added for network-only
simulations and testing. The protocol takes in messages from the tester and
just pushes them into the network in the appropriate vnet, without storing
any state.
I had recently committed a patch that removed the WakeUp*.py files from the
slicc/ast directory. I had forgotten to remove the import calls for these
files from slicc/ast/__init__.py. This resulted in errors while running
regressions on zizzer. This patch does the needful.
This patch fixes the problem where Ruby would fail to call sendRetry on ports
after it nacked the port. This patch is particularly helpful for bursty dma
requests which often include several packets.
In SLICC, in order to define a data type for which it should not
generate any code, the keyword external_type is used. For those data types for
which code should be generated, the keyword structure is used. This patch
eliminates the use of the keyword external_type for defining structures. The
structure keyword can now have an optional attribute external, which is used
for figuring out whether or not to generate the code for the structure. Also,
structures can now have functions as well as data members in them.
In order to add stall and wait facility for protocols, a keyword
wake_up_dependents was introduced. This patch removes the keyword,
instead this functionality is now implemented as a function call.
In order to add stall and wait facility for protocols, a keyword
wake_up_all_dependents was introduced. This patch removes the keyword,
instead this functionality is now implemented as a function call.
This change fixes the problem for all the cases we actively use. If you want to try
more creative I/O device attachments (e.g. sharing an L2), this won't work. You
would need another level of caching between the I/O device and the cache
(which you actually need anyway with our current code to make sure writes
propagate). This is required so that you can mark the cache in between as
top level and it won't try to send ownership of a block to the I/O device.
Asserts have been added that should catch any issues.
None of the code in the ruby tester directory is compiled or referred to
outside of that directory. This change eliminates it. If it's needed in the
future, it can be revived from the history. In the mean time, this removes
clutter and the only use of the GEMS_ROOT scons variable.
There may not be a formally correct spelling for the past tense of mmap, but
mmapped is the spelling Google doesn't try to autocorrect. This makes sense
because it mirrors the past tense of map->mapped and not the past tense of
cape->caped.
--HG--
rename : src/arch/alpha/mmaped_ipr.hh => src/arch/alpha/mmapped_ipr.hh
rename : src/arch/arm/mmaped_ipr.hh => src/arch/arm/mmapped_ipr.hh
rename : src/arch/mips/mmaped_ipr.hh => src/arch/mips/mmapped_ipr.hh
rename : src/arch/power/mmaped_ipr.hh => src/arch/power/mmapped_ipr.hh
rename : src/arch/sparc/mmaped_ipr.hh => src/arch/sparc/mmapped_ipr.hh
rename : src/arch/x86/mmaped_ipr.hh => src/arch/x86/mmapped_ipr.hh
At a couple of places in PerfectSwitch.cc and MessageBuffer.cc, DPRINTF()
has not been provided with the correct number of arguments. The patch fixes these
bugs.
This patch removes the store buffer from Ruby. It is not in use currently.
Since libruby is being removed and the store buffer makes calls to libruby, it
is not possible to maintain the store buffer until substantial changes are made.
This patch changes Address.hh so that it is not dependent on RubySystem.
This dependence seems unnecessary. All those functions that depend on
RubySystem have been moved to Address.cc file.
This patch changes DataBlock.hh so that it is not dependent on RubySystem.
This dependence seems unnecessary. All those functions that depend on
RubySystem have been moved to DataBlock.cc file.
This patch integrates permissions with cache and memory states, and then
automates the setting of permissions within the generated code. No longer
does one need to manually set the permissions within the setState function.
This patch will facilitate easier functional access support by always correctly
setting permissions for both cache and memory states.
--HG--
rename : src/mem/slicc/ast/EnumDeclAST.py => src/mem/slicc/ast/StateDeclAST.py
rename : src/mem/slicc/ast/TypeFieldEnumAST.py => src/mem/slicc/ast/TypeFieldStateAST.py
"executing" isnt a very descriptive debug message and in going through the
output you get multiple messages that say "executing" but nothing to help
you parse through the code/execution.
So instead, at least print out the name of the action that is taking
place in these functions.
Overall, continue to progress Ruby debug messages to more of the normal M5
debug message style
- add a name() to the Ruby Throttle & PerfectSwitch objects so that the debug output
isn't littered w/"global:" everywhere.
- clean up messages that print over multiple lines when possible
- clean up duplicate prints in the message buffer