This patch makes the buses multi-layered, effectively creating a
crossbar structure with distributed contention points at the
destination ports. Before this patch, a bus could have only a single
request, response, and snoop response in flight at any time; with
these changes there can be as many requests in flight as connected
slaves (bus master ports), and as many responses as connected masters
(bus slave ports).
Together with address interleaving, this patch enables us to create
high-throughput memory interconnects, e.g. 50+ GByte/s.
This patch does some minor housekeeping on the bus code, removing
redundant code, and moving the extraction of the destination id to the
top of the functions using it.
This patch adds a basic set of stats which are hard, if not
impossible, to implement using only communication monitors, and which
are needed for insight such as bus utilization, the number of
transactions through the bus, etc.
Stats added include throughput and transaction distribution, as well
as a two-dimensional vector capturing the number of packets and the
amount of data exchanged between the masters and slaves connected to
the bus.
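As a rough illustration, the two-dimensional stat could be registered
along these lines. This is a minimal sketch following gem5's Stats
framework; the stat names (pktCount, pktSize) and the assumed
slavePorts/masterPorts member vectors are illustrative assumptions,
not the exact code:

    // Hypothetical excerpt from the bus' regStats(); names are
    // illustrative only.
    using namespace Stats;

    pktCount
        .init(slavePorts.size(), masterPorts.size()) // slave x master
        .name(name() + ".pkt_count")
        .desc("Packet count per connected master and slave")
        .flags(total | nozero | nonan);

    pktSize
        .init(slavePorts.size(), masterPorts.size())
        .name(name() + ".tot_pkt_size")
        .desc("Cumulative packet size per connected master and slave")
        .flags(total | nozero | nonan);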
This patch splits the retryList into a list of ports that are waiting
for the bus itself to become available, and a map that tracks the
ports where forwarding failed due to a peer not accepting the
packet. Thus, when a retry reaches the bus, it can be sent to the
appropriate port that initiated that transaction.
As a consequence of this patch, only ports that are actually ready to
go will get a retry, thus reducing the number of redundant failed
attempts. This patch also makes it easier to reason about the order of
servicing requests as the ports waiting for the bus are now clearly
FIFO and much easier to change if desired.
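A minimal sketch of the resulting bookkeeping (simplified types; the
member names used here are assumptions):

    #include <deque>
    #include <map>

    class Port;
    typedef short PortID;

    // Ports queued for the bus itself, served strictly in FIFO order.
    std::deque<Port*> retryList;

    // Ports whose packet was not accepted by the peer at the given
    // destination; each is retried only when that peer sends a retry
    // back, rather than every time the bus frees up.
    std::map<PortID, Port*> outstandingRetry;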
This patch adds a check to ensure that the delay incurred by
the bus is not simply disregarded, but accounted for by someone. At
this point, all the modules do is zero it out, and no additional
time is spent. This highlights where the bus timing is simply dropped
instead of being paid for.
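Sketched below is what the zeroing looks like in a receiving module;
the field names are an assumption for illustration, not necessarily
the ones used in the tree:

    // In a module's recvTimingReq()/recvTimingResp(): consume the
    // delay annotated by the bus. For now it is merely zeroed; a
    // check downstream can then flag any module that forwards a
    // packet without having accounted for the annotated time.
    pkt->busFirstWordDelay = 0;
    pkt->busLastWordDelay = 0;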
As a follow-up, the locations identified in this patch should add
this additional time to the packets in one way or another. For now it
acts purely as a sanity check, highlighting where the delay is
ignored.
Since no time is added, all regressions remain the same.
This patch changes the bus-related time accounting done in the packet
to be relative. Besides making it easier to align the cache timing to
cache clock cycles, it also makes it possible to connect a Last-Level
Cache (LLC) directly to a memory controller without a bus in between.
The bus is unique in that it does not ever make the packets wait to
reflect the time spent forwarding them. Instead, the cache is
currently responsible for making the packets wait. Thus, the bus
annotates the packets with the time needed for the first word to
appear, and for the last word. The cache then delays the packets in
its queues before passing them on. It is worth noting that every
object attached to a bus (devices, memories, bridges, etc.) should be
doing this if we opt for keeping this way of accounting for the bus
timing.
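The annotation and the corresponding delay in the cache could look
roughly like this; the field names (busFirstWordDelay,
busLastWordDelay) and helpers (dataCycles, readyTime) are assumed for
the sketch:

    // Bus side: accumulate this hop's latency on top of whatever the
    // packet already carries (relative, not absolute, time).
    pkt->busFirstWordDelay += clockPeriod();              // first word
    pkt->busLastWordDelay  += dataCycles * clockPeriod(); // last word

    // Cache side: pay the annotated delay by holding the packet in a
    // queue, then clear it so the next hop starts from zero again.
    Tick readyTime = curTick() + pkt->busLastWordDelay;
    pkt->busFirstWordDelay = pkt->busLastWordDelay = 0;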
This patch makes the clock member private to the ClockedObject and
forces all children to access it using clockPeriod(). This makes it
impossible to inadvertently change the clock, and also makes it easier
to transition to a situation where the clock is derived from e.g. a
clock domain, or through a multiplier.
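A minimal standalone sketch of the idea (simplified; not the actual
gem5 class):

    #include <cstdint>
    typedef uint64_t Tick;

    class ClockedObject
    {
      private:
        Tick clock; // the period is now private to ClockedObject

      public:
        explicit ClockedObject(Tick period) : clock(period) {}

        // Children read the period through this accessor only; it is
        // the single place to change when the clock later comes from
        // a clock domain or a multiplier instead of a plain member.
        Tick clockPeriod() const { return clock; }
    };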
This patch moves the draining interface from SimObject to a separate
class that can be used by any object needing draining. However,
objects not visible to the Python code (i.e., objects not deriving
from SimObject) still depend on their parents informing them when to
drain. This patch also gets rid of the CountedDrainEvent (which isn't
really an event) and replaces it with a DrainManager.
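Roughly, the separated interface takes the following shape (a sketch;
the exact signatures are approximations of the patch):

    class DrainManager;

    class Drainable
    {
      public:
        // Returns how many objects still need to drain; each of them
        // later calls signalDrainDone() on the manager when done.
        virtual unsigned int drain(DrainManager *dm) = 0;
        virtual ~Drainable() {}
    };

    // Replaces CountedDrainEvent: plain counting, no event semantics.
    class DrainManager
    {
      public:
        void setCount(int count) { _count = count; }
        void signalDrainDone() { if (--_count == 0) allDrained(); }
      private:
        void allDrained() {} // e.g. resume or checkpoint from here
        int _count;
    };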
This patch is a first step to align the port names used in the Python
world and the C++ world. Ultimately it serves to make the use of
config.json together with output from the simulation easier, including
post-processing of statistics.
Most notably, the CPU, cache, and bus are addressed in this patch, and
there might be other ports that should be updated accordingly. The
dash name separator has also been replaced with a ".", which is what
is used to concatenate the names in Python, and a separation is made
between the master and slave ports in the bus.
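As a hypothetical illustration of the renaming (the names below are
made up for the example, not taken from the patch):

    system.membus-master[1]  becomes  system.membus.master[1]
    system.membus-slave[0]   becomes  system.membus.slave[0]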
This patch splits the existing buses into multiple layers. The
non-coherent bus is split into a request and a response layer, and the
coherent bus adds an additional layer for the snoop responses. The
layer is modified to be templatised on the port type, such that the
different layers can have retryLists with either master or slave
ports. This patch also removes the dynamic cast from the retry, as
previously promised when moving the recvRetry from the port base class
to the master/slave port respectively.
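In outline, the templatised layer looks something like this
(simplified sketch; the member and method names are approximations):

    #include <cstdint>
    #include <deque>

    class MasterPort;
    class SlavePort;
    typedef uint64_t Tick;

    template <typename PortClass>
    class Layer
    {
      public:
        // Grant the layer if idle; otherwise queue the port for a
        // retry once the layer frees up.
        bool tryTiming(PortClass *port);
        void succeededTiming(Tick busy_time);
        void failedTiming(PortClass *src_port, Tick busy_time);

      private:
        std::deque<PortClass*> retryList; // master OR slave ports
    };

    // The request layer queues slave ports (where requests arrive),
    // and the response layer queues master ports.
    Layer<SlavePort> reqLayer;
    Layer<MasterPort> respLayer;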
Overall, the split bus more closely reflects any modern on-chip bus
and should be a step in the right direction. From this point, it
would be reasonably straightforward to add separate layers (and thus
contention points and arbitration) for each port and thus create a
true crossbar.
The regressions all produce the correct output, but have varying
degrees of changes to their statistics. A separate patch will be
pushed with the updates to the reference statistics.
This patch moves all flow control, arbitration and state information
into a bus layer. The layer is thus responsible for all the state
transitions, and for keeping hold of the retry list. Consequently, the
layer is also responsible for the draining.
With this change, the non-coherent and coherent bus are given a single
layer to avoid changing any temporal behaviour, but the patch opens
up the possibility of adding more layers.
This patch adds a state enum and member variable in the bus, tracking
the bus state, thus eliminating the need for tickNextIdle and inRetry,
and fixing an issue that allowed the bus to be occupied by multiple
packets at once (hopefully it also makes it easier to understand the
code).
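A minimal sketch of the new bookkeeping; the enum mirrors the
description above, and the function body is simplified:

    enum State { IDLE, BUSY, RETRY };
    State state = IDLE;

    bool tryTiming()
    {
        if (state != IDLE)
            return false; // occupied: caller goes on the retry list
        // Claim the bus *before* forwarding the packet, so a peer
        // that reacts in zero time sees the bus as occupied and
        // cannot slip a second packet in.
        state = BUSY;
        return true;
    }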
The bus, in its current form, uses tickNextIdle and inRetry to keep
track of the state of the bus. However, it only updates tickNextIdle
_after_ forwarding a packet using sendTiming, and the result is that
the bus is still seen as idle, and a module that receives the packet
and starts transmitting new packets in zero time will still see the
bus as idle (and this is done by a number of DMA devices). The issue
can also be seen in isOccupied where the bus calls reschedule on an
event instead of schedule.
This patch addresses the problem by marking the bus as _not_ idle
already by the time we conclude that the bus is not occupied and we
will deal with the packet.
As a result of not allowing multiple packets to occupy the bus, some
regressions have slight changes in their statistics. A separate patch
updates these accordingly.
Further ahead, a follow-on patch will introduce a separate state
variable for requests/responses/snoop responses, and thus implement a
split request/response bus with separate flow control for the
different message types (even further ahead it will introduce a
multi-layer bus).
This patch introduces a class hierarchy of buses, a non-coherent one,
and a coherent one, splitting the existing bus functionality. By doing
so it also enables further specialisation of the two types of buses.
A non-coherent bus connects a number of non-snooping masters and
slaves, and routes the request and response packets based on the
address. The request packets issued by the master connected to a
non-coherent bus could still snoop in caches attached to a coherent
bus, as is the case with the I/O bus and memory bus in most system
configurations. No snoops will, however, reach any master on the
non-coherent bus itself. The non-coherent bus can be used as a
template for modelling PCI, PCIe, and non-coherent AMBA and OCP buses,
and is typically used for the I/O buses.
A coherent bus connects a number of (potentially) snooping masters and
slaves, and routes the request and response packets based on the
address, and also forwards all requests to the snoopers and deals with
the snoop responses. The coherent bus can be used as a template for
modelling QPI, HyperTransport, ACE and coherent OCP buses, and is
typically used for the L1-to-L2 buses and as the main system
interconnect.
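The resulting hierarchy, in outline (the base class is stubbed and
the bodies reduced to comments; only the shape is sketched):

    class MemObject { /* stub for gem5's memory-object base class */ };

    class BaseBus : public MemObject
    {
        // shared functionality: address-based routing, port
        // bookkeeping, stats
    };

    class NoncoherentBus : public BaseBus
    {
        // routes requests and responses only; no snooping on this bus
    };

    class CoherentBus : public BaseBus
    {
        // additionally forwards requests to the snoopers and deals
        // with the snoop responses
    };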
The configuration scripts are updated to use a NoncoherentBus for all
peripheral and I/O buses.
A bit of minor tidying up has also been done.
--HG--
rename : src/mem/bus.cc => src/mem/coherent_bus.cc
rename : src/mem/bus.hh => src/mem/coherent_bus.hh
rename : src/mem/bus.cc => src/mem/noncoherent_bus.cc
rename : src/mem/bus.hh => src/mem/noncoherent_bus.hh