This prevents redundant prefetches from being issued, solving the
occasional 'needsExclusive && !blk->isWritable()' assertion failure
in cache_impl.hh that several people have run into.
Eliminates "prefetch_cache_check_push" flag, neither setting of
which really solved the problem.
This is simply a translation of the C++ slicc into python with very minimal
reorganization of the code. The output can be verified as nearly identical
by doing a "diff -wBur".
Slicc can easily be run manually by using util/slicc
Get rid of misc.py and just stick misc things in __init__.py
Move utility functions out of SCons files and into m5.util
Move utility-type code from m5/__init__.py to m5/util/__init__.py
Remove buildEnv from m5 and allow access only from m5.defines
Rename AddToPath to addToPath while we're moving it to m5.util
Rename read_command to readCommand while we're moving it
Rename compare_versions to compareVersions while we're moving it.
--HG--
rename : src/python/m5/convert.py => src/python/m5/util/convert.py
rename : src/python/m5/smartdict.py => src/python/m5/util/smartdict.py
This changeset contains a lot of different changes that are too
mingled to separate. They are:
1. Added MOESI_CMP_directory
I made the changes necessary to bring back MOESI_CMP_directory,
including adding a DMA controller. I got rid of MOESI_CMP_directory_m
and made MOESI_CMP_directory use a memory controller. Added a new
configuration for two-level protocols in general, and
MOESI_CMP_directory in particular.
2. DMA Sequencer uses a generic SequencerMsg
I will eventually make the cache Sequencer use this type as well. It
doesn't contain an offset field, just a physical address and a length.
MI_example has been updated to deal with this.
3. Parameterized Controllers
SLICC controllers can now take custom parameters to use for mapping,
latencies, etc. Currently, only int parameters are supported.
The inconsistency was causing a subtle bug with some of the
constructors where the params had the same name as the fields.
This is also a first step to switching the accessors over to
our new "standard", e.g., getVaddr() -> vaddr().
Caches are now responsible for their own statistic gathering. This
requires a direct callback from the protocol on misses, and so all
future protocols need to take this into account.
The DMASequencer was still using a parameter from the old RubyConfig,
causing an offset error when the requested data wasn't block aligned.
This changeset also includes a fix to MI_example for a similar bug.
2. Reintroduced RMW_Read and RMW_Write
3. Defined -2 in the Sequencer, and made a note about the mandatory queue
Did not address the issues in the slicc, since the atomics will be remade
altogether to allow multiple processors to issue atomic requests at once.
This also includes a change to the default Ruby random seed, which was
previously set using the wall clock. It is now set to 1234 so that
the stat files don't change for the regression tester.
This was done with an automated process, so there could be things that were
done in this tree in the past that didn't make it. One known regression
is that atomic memory operations do not seem to work properly anymore.
This changeset also includes a lot of work from Derek Hower <drh5@cs.wisc.edu>
RubyMemory is now both a driver for Ruby and a port for M5. Changed
makeRequest/hitCallback interface. Brought packets (superficially)
into the sequencer. Modified tester infrastructure to be packet based.
M5 and Ruby can now be used together through the example ruby_se.py
script. SPARC parallel applications work, and the timing *seems* right
from combined M5/Ruby debug traces. To run:
% build/ALPHA_SE/m5.debug configs/example/ruby_se.py -c
tests/test-progs/hello/bin/alpha/linux/hello -n 4 -t
1. removed checks from tester files
2. removed the else clause in Sequencer and DirectoryMemory; the else
clause is needed by the tester, so it is up to Derek to revive it
elsewhere when he gets to it
Also:
1. Changed m_entries in DirectoryMemory to a map
2. Replaced SIMICS_read_physical_memory with a call to the now-dummy
readPhysMem function, which Derek is to fill in
Add the PROTOCOL sticky option, which sets the coherence protocol that slicc
will parse and therefore ruby will use. This whole process was made
difficult by the fact that the set of files that are output by slicc
are not easily known ahead of time. The easiest thing wound up being
to write a parser for slicc that would tell me. Incidentally this
means we now have a slicc grammar written in python.
This basically means changing all #include statements and changing
autogenerated code so that it generates the correct paths. Because
slicc generates #includes, I had to hard code the include paths to
mem/protocol.
1) Removing files from the ruby build left some unresolved
symbols. Those have been fixed.
2) Most of the dependencies on Simics data types and the simics
interface files have been removed.
3) Almost all mention of opal is gone.
4) Huge chunks of LogTM are now gone.
5) Handling 1-4 left ~hundreds of unresolved references, which were
fixed, yielding a snowball effect (and the massive size of this
delta).
I did the macro cleanup because I was worried that the SCons scanner
would get confused. This code will hopefully go away soon anyway.
--HG--
rename : src/mem/ruby/config/config.include => src/mem/ruby/config/config.hh
Previously there was one per bus, which caused some coherence problems
when more than one decided to respond. Now there is just one on
the main memory bus. The default bus responder on all other buses
is now the downstream cache's cpu_side port. Caches no longer need
to do address range filtering; instead, we just have a simple flag
to prevent snoops from propagating to the I/O bus.
This frees up needed space for more public flags. Also:
- remove unused Request accessor methods
- make Packet use public Request accessors, so it need not be a friend
Apparently we broke it with the cache rewrite and never noticed.
Thanks to Bao Yungang <baoyungang@gmail.com> for a significant part
of these changes (and for inspiring me to work on the rest).
Some other overdue cleanup on the prefetch code too.
Bogus calls to ChunkGenerator with negative size were triggering
a new assertion that was added there.
Also did a little renaming and cleanup in the process.
I think readData() and writeData() were used for Erik's compression
work, but that code is gone, these aren't called anymore, and they
don't even really do what their names imply.
I did some of the flags and assertions wrong. Thanks to Brad Beckmann
for pointing this out. I should have run the opt regressions instead
of the fast. I also screwed up some of the logical functions in the Flags
class.
The primary identifier for a hardware context should be contextId(). The
concept of threads within a CPU remains, in the form of threadId(), because
sometimes you need to know which context within a CPU to manipulate.
Since the early days of M5, an event needed to know which event queue
it was on, and that data was required at the time of construction of
the event object. In the future parallelized M5, this sort of
requirement does not work well since the proper event queue will not
always be known at the time of construction of an event. Now, events
are created, and the EventQueue itself has the schedule function,
e.g. eventq->schedule(event, when). To simplify the syntax, I created
a class called EventManager which holds a pointer to an EventQueue and
provides the schedule interface that is a proxy for the EventQueue.
The intent is that objects that frequently schedule events can be
derived from EventManager and then they have the schedule interface.
SimObject and Port are examples of objects that will become
EventManagers. The end result is that any SimObject can just call
schedule(event, when) and it will just call that SimObject's
eventq->schedule function. Of course, some objects may have more than
one EventQueue, so this interface might not be perfect for those, but
they should be relatively few.
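A minimal self-contained sketch of that proxy arrangement (simplified
stand-ins, not the actual M5 classes):

    typedef unsigned long long Tick;

    class Event { /* base class for schedulable events */ };

    class EventQueue
    {
      public:
        // the queue itself now owns the schedule operation
        void schedule(Event *event, Tick when) { /* insert event at 'when' */ }
    };

    // EventManager holds a queue pointer and proxies schedule() to it.
    class EventManager
    {
      protected:
        EventQueue *eventq;

      public:
        EventManager(EventQueue *q) : eventq(q) {}

        void schedule(Event *event, Tick when) { eventq->schedule(event, when); }
    };

    // A SimObject derived from EventManager can then simply write
    //     schedule(event, when);
    // and the call lands on that object's eventq.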
This appears to work, but I don't want to commit it until it gets tested a lot more.
I haven't deleted the functionality in this patch that will come later, but one question
is how to enforce (or at least encourage) that objects calling getVirtPort()
do not cache the virtual port, since if the CPU changes out from under them it
will be worse than useless. Perhaps a null function like delVirtPort() is
still useful in that case.
It turns out that if a MemObject turns around and does a send in its
receive callback, and there are other sends already scheduled, then
it could observe a state where it's not at the head of the list but
the bus's sendEvent is not scheduled (because we're still in the
middle of processing the prior sendEvent).
I was asserting that the only reason you would defer targets is if
a write came in while you had an outstanding read miss, but there's
another case where you could get a read access after you've snooped
an invalidation and buffered it because it applies to a prior
outstanding miss.
Make OutputDirectory::resolve() private and change the functions using
resolve() to instead use create().
--HG--
extra : convert_revision : 36d4be629764d0c4c708cec8aa712cd15f966453
if a prior write miss arrived while an even earlier
read miss was still outstanding.
--HG--
extra : convert_revision : 4924e145829b2ecf4610b88d33f4773510c6801a
where we defer a response to a read from a far-away cache A, then later
defer a ReadExcl from a cache B on the same bus as us. We'll assert
MemInhibit in both cases, but in the latter case MemInhibit will keep
the invalidation from reaching cache A. This special response tells
cache A that it gets the block to satisfy its read, but must immediately
invalidate it.
--HG--
extra : convert_revision : f85c8b47bb30232da37ac861b50a6539dc81161b
Don't mark upstream MSHR as pending if downstream MSHR is already in service.
--HG--
extra : convert_revision : e1c135ff00217291db58ce8a06ccde34c403d37f
Not so much noise on failed sends, and more complete
info when grepping a trace using an address.
--HG--
extra : convert_revision : 05a8261c9452072ca08b906200c6322b33e2b9f1
SimObjects not yet updated:
- Process and subclasses
- BaseCPU and subclasses
The SimObject(const std::string &name) constructor was removed. Subclasses
that still rely on that behavior must call the parent initializer as
: SimObject(makeParams(name))
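For example, a hypothetical subclass that still wants name-only construction
would look roughly like this (only the SimObject(makeParams(name)) call comes
from the change above):

    class MyObject : public SimObject
    {
      public:
        MyObject(const std::string &name)
            : SimObject(makeParams(name))  // build a Params object from the name
        {
            // subclass-specific construction
        }
    };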
--HG--
extra : convert_revision : d6faddde76e7c3361ebdbd0a7b372a40941c12ed
The page table now stores actual page table entries. It is still a templated
class here, but this will be corrected in the near future.
--HG--
extra : convert_revision : 804dcc6320414c2b3ab76a74a15295bd24e1d13d
Make sure not to keep processing functional accesses
after they've been responded to.
Also use checkFunctional() return value instead of checking
packet command field where possible, mostly just for consistency.
--HG--
extra : convert_revision : 29fc76bc18731bd93a4ed05a281297827028ef75
Creation and initialization now happen in python. Parameter objects
are generated and initialized by python. The .ini file is now solely for
debugging purposes and is not used in construction of the objects in any
way.
--HG--
extra : convert_revision : 7e722873e417cb3d696f2e34c35ff488b7bff4ed
Turns out DeferredSnoop isn't quite the right bit of info
we needed... see new comment in cache_impl.hh.
--HG--
extra : convert_revision : a38de8c1677a37acafb743b7074ef88b21d3b7be
If the invalidation beats the upgrade at a lower level
then the upgrade must be converted to a read exclusive
"in the field".
Restructure target list & deferred target list to
factor out some common code.
--HG--
extra : convert_revision : 7bab4482dd6c48efdb619610f0d3778c60ff777a
- Add "deferred snoop" flag to Packet so upper-level caches
can distinguish whether lower-level cache request was
in-service or not at the time of the original snoop.
- Revamp response handling to properly handle deferred snoops
on non-cache-fill requests (i.e. upgrades).
- Make sure forwarded writebacks are kept in write buffer at
lower-level caches so they get snooped properly.
--HG--
extra : convert_revision : 17f8a3772a1ae31a16991a53f8225ddf54d31fc9
Move check for loops outside, since half the call sites
end up working around it anyway. Return integer port ID
instead of port object pointer.
--HG--
extra : convert_revision : 4c31fe9930f4d1aa4919e764efb7c50d43792ea3
Note that we should *not* print pointer values in DPRINTFs as
these needlessly clutter tracediff output.
--HG--
extra : convert_revision : 25a448f1b3ac8d453a717a104ad6dc0112fb30bb
src/cpu/simple/timing.cc:
Fix another SC problem.
src/mem/cache/cache_impl.hh:
Forgot to call makeTimingResponse() on uncached timing responses.
--HG--
extra : convert_revision : 5a5a58ca2053e4e8de2133205bfd37de15eb4209
Stats pretty much line up with old code, except:
- bug in old code included L1 latency in L2 miss time, making it too high
- UniCoherence did cache-to-cache transfers even from non-owner caches,
so occasionally the icache would get a block from the dcache not the L2
- L2 can now receive ReadExReq from L1 since L1s have coherence
--HG--
extra : convert_revision : 5052c1a1767b5a662f30a88f16012165a73b791c
Change target overflow from assertion to warning.
src/mem/cache/cache_impl.hh:
Change target overflow from assertion to warning.
--HG--
extra : convert_revision : ceca990ed916bbf96dedd4836c40df522803f173
src/mem/cache/cache_impl.hh:
Handle grants with no packet.
src/mem/cache/miss/mshr.cc:
Fix MSHR snoop hit handling.
--HG--
extra : convert_revision : f365283afddaa07cb9e050b2981ad6a898c14451
sure we don't re-request bus prematurely. Use callback to
avoid calling sendRetry() recursively within recvTiming.
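A minimal sketch of the callback idea (simplified stand-ins, not the actual
bus code): schedule an event that issues the retry after recvTiming() has
unwound, rather than calling sendRetry() from inside it.

    // 'Event' is M5's event base class; process() runs when the event fires.
    class BusPort { public: void sendRetry() { /* ask the peer to retry */ } };

    class RetryEvent : public Event
    {
        BusPort *port;
      public:
        RetryEvent(BusPort *p) : port(p) {}
        void process() { port->sendRetry(); }  // runs after recvTiming() returns
    };

    // inside recvTiming(), instead of calling port->sendRetry() directly:
    //     schedule(new RetryEvent(port), curTick);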
--HG--
extra : convert_revision : a907a2781b4b00aa8eb1ea7147afc81d6b424140
src/cpu/memtest/memtest.cc:
Need to set packet source field so that response from cache
doesn't run into assertion failure when copying source to dest.
src/mem/packet.hh:
Copy source field when copying packets.
Assert that source is valid before copying it to dest
when turning packets around.
--HG--
extra : convert_revision : 09e3cfda424aa89fe170e21e955b295746832bf8
supposed to and make sure parameters have the right type.
Also make sure that any object that should be an intermediate
type has the right options set.
--HG--
extra : convert_revision : d56910628d9a067699827adbc0a26ab629d11e93
into vm1.(none):/home/stever/bk/newmem-cache2
configs/example/memtest.py:
Hand merge redundant changes.
--HG--
extra : convert_revision : a2e36be254bf052024f37bcb23b5209f367d37e1
More major reorg. Seems to work for atomic mode now,
timing mode still broken.
configs/example/memtest.py:
Revamp options.
src/cpu/memtest/memtest.cc:
No need for memory initialization.
No need to make atomic response... memory system should do that now.
src/cpu/memtest/memtest.hh:
MemTest really doesn't want to snoop.
src/mem/bridge.cc:
checkFunctional() cleanup.
src/mem/bus.cc:
src/mem/bus.hh:
src/mem/cache/base_cache.cc:
src/mem/cache/base_cache.hh:
src/mem/cache/cache.cc:
src/mem/cache/cache.hh:
src/mem/cache/cache_blk.hh:
src/mem/cache/cache_builder.cc:
src/mem/cache/cache_impl.hh:
src/mem/cache/coherence/coherence_protocol.cc:
src/mem/cache/coherence/coherence_protocol.hh:
src/mem/cache/coherence/simple_coherence.hh:
src/mem/cache/miss/SConscript:
src/mem/cache/miss/mshr.cc:
src/mem/cache/miss/mshr.hh:
src/mem/cache/miss/mshr_queue.cc:
src/mem/cache/miss/mshr_queue.hh:
src/mem/cache/prefetch/base_prefetcher.cc:
src/mem/cache/tags/fa_lru.cc:
src/mem/cache/tags/fa_lru.hh:
src/mem/cache/tags/iic.cc:
src/mem/cache/tags/iic.hh:
src/mem/cache/tags/lru.cc:
src/mem/cache/tags/lru.hh:
src/mem/cache/tags/split.cc:
src/mem/cache/tags/split.hh:
src/mem/cache/tags/split_lifo.cc:
src/mem/cache/tags/split_lifo.hh:
src/mem/cache/tags/split_lru.cc:
src/mem/cache/tags/split_lru.hh:
src/mem/packet.cc:
src/mem/packet.hh:
src/mem/physical.cc:
src/mem/physical.hh:
src/mem/tport.cc:
More major reorg. Seems to work for atomic mode now,
timing mode still broken.
--HG--
extra : convert_revision : 7e70dfc4a752393b911880ff028271433855ae87
using a divide in order to not loop forever after resuming from a checkpoint
--HG--
extra : convert_revision : 4bbc70b1be4e5c4ed99d4f88418ab620d5ce475a
Makes page table cache scheme actually work
src/mem/page_table.cc:
src/mem/page_table.hh:
fix caching scheme to actually work and improve performance
--HG--
extra : convert_revision : 443a8d8acbee540b26affcfdfbf107b8e735d1bd
Oops... forgot to update call site after changing
function argument semantics.
src/mem/tport.cc:
Oops... forgot to update call site after changing
function argument semantics.
--HG--
extra : convert_revision : 9234b991dc678f062d268ace73c71b3d13dd17dc
- factor out checkFunctional() code so it can be
called from derived classes
- use EventWrapper for sendEvent, move event handling
code from event to port where it belongs
- make sendEvent a pointer so derived classes can
override it (see the sketch after this list)
- replace std::pair with new class for readability
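Roughly the shape of the sendEvent changes (simplified stand-ins; the real
code uses M5's EventWrapper rather than the hand-rolled SendEvent shown here):

    // 'Event' here is M5's event base class; process() is its callback.
    class SimplePort
    {
      protected:
        // send logic now lives in the port itself
        virtual void processSendEvent() { /* try to send the transmit list head */ }

        struct SendEvent : public Event
        {
            SimplePort *port;
            SendEvent(SimplePort *p) : port(p) {}
            void process() { port->processSendEvent(); }
        };

        // held by pointer so a derived port can install its own event type
        Event *sendEvent;

      public:
        SimplePort() : sendEvent(new SendEvent(this)) {}
        virtual ~SimplePort() { delete sendEvent; }
    };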
--HG--
extra : convert_revision : 5709de2daacfb751a440144ecaab5f9fc02e6b7a
src/mem/cache/cache_impl.hh:
src/mem/cache/coherence/simple_coherence.hh:
Get rid of old invalidate propagation logic in preparation
for new multilevel snoop protocol.
src/mem/cache/coherence/coherence_protocol.cc:
L2 cache now has protocol, so protocol must handle ReadExReq
coming in from the CPU side.
src/mem/cache/miss/mshr_queue.cc:
Assertion is failing, so let's take it out for now.
src/mem/packet.cc:
src/mem/packet.hh:
Add WritebackAck command.
Reorganize enum to put responses next to corresponding requests.
Get rid of unused WriteReqNoAck.
--HG--
extra : convert_revision : 24c519846d161978123f9aa029ae358a41546c73
Compiles but doesn't work... committing just so I can merge
(stupid bk!).
src/mem/bridge.cc:
Get rid of SNOOP_COMMIT.
src/mem/bus.cc:
src/mem/packet.hh:
Get rid of SNOOP_COMMIT & two-pass snoop.
First bits of EXPRESS_SNOOP support.
src/mem/cache/base_cache.cc:
src/mem/cache/base_cache.hh:
src/mem/cache/cache.hh:
src/mem/cache/cache_impl.hh:
src/mem/cache/miss/blocking_buffer.cc:
src/mem/cache/miss/miss_queue.cc:
src/mem/cache/prefetch/base_prefetcher.cc:
Big reorg of ports and port-related functions & events.
src/mem/cache/cache.cc:
src/mem/cache/cache_builder.cc:
src/mem/cache/coherence/SConscript:
Get rid of UniCoherence object.
--HG--
extra : convert_revision : 7672434fa3115c9b1c94686f497e57e90413b7c3
port. It would be better to move this to python IMO but for
now I'll stick in a compatibility hack.
--HG--
extra : convert_revision : a81a29cbd43becd0e485559eb7b2a31f7a0b082d
configs/example/memtest.py:
PhysicalMemory has vector of uniform ports instead of one special one.
Other updates to fix obsolete brokenness.
src/mem/physical.cc:
src/mem/physical.hh:
src/python/m5/objects/PhysicalMemory.py:
Have vector of uniform ports instead of one special one.
src/python/swig/pyobject.cc:
Add comment.
--HG--
extra : convert_revision : a4a764dcdcd9720bcd07c979d0ece311fc8cb4f1
cache blocks that get DMAed ARE NOT marked invalid in the caches, so it's a performance issue here
src/mem/bridge.cc:
src/mem/bridge.hh:
hopefully the final hacky change to make the bus bridge work ok
--HG--
extra : convert_revision : 62cbc65c74d1a84199f0a376546ec19994c5899c
src/dev/io_device.cc:
extra printing and assertions
src/mem/bridge.hh:
deal with packets only satisfying part of a request by making many requests
src/mem/cache/cache_impl.hh:
make the cache try to satisfy a functional request from the cache above it before checking itself
--HG--
extra : convert_revision : 1df52ab61d7967e14cc377c560495430a6af266a
set the latency parameter in terms of a latency
add caches to tsunami-simple configs
configs/common/Caches.py:
tests/configs/memtest.py:
tests/configs/o3-timing-mp.py:
tests/configs/o3-timing.py:
tests/configs/simple-atomic-mp.py:
tests/configs/simple-timing-mp.py:
tests/configs/simple-timing.py:
set the latency parameter in terms of a latency
configs/common/FSConfig.py:
give the bridge a default latency too
src/mem/cache/cache_builder.cc:
src/python/m5/objects/BaseCache.py:
remove hit_latency and make latency do the right thing
tests/configs/tsunami-simple-atomic-dual.py:
tests/configs/tsunami-simple-atomic.py:
tests/configs/tsunami-simple-timing-dual.py:
tests/configs/tsunami-simple-timing.py:
add caches to tsunami-simple configs
--HG--
extra : convert_revision : 37bef7c652e97c8cdb91f471fba62978f89019f1
add separate response buffers and request queue sizes in bus bridge
add delay to respond to a nack in the bus bridge
src/dev/i8254xGBe.cc:
src/dev/ide_ctrl.cc:
src/dev/ns_gige.cc:
src/dev/pcidev.hh:
src/dev/sinic.cc:
add backoff delay parameters
src/dev/io_device.cc:
src/dev/io_device.hh:
add a backoff algorithm when nacks are received (a sketch follows this file list)
src/mem/bridge.cc:
src/mem/bridge.hh:
add separate response buffers and request queue sizes
add a new parameter to specify how long before a nack is ready to go after a packet that needs to be nacked is received
src/mem/cache/cache_impl.hh:
assert on the
src/mem/tport.cc:
add a friendly assert to make sure the packet was inserted into the list
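A minimal sketch of the kind of backoff meant above for the I/O devices (the
names and the doubling policy are illustrative assumptions, not the actual
io_device code):

    #include <algorithm>

    typedef unsigned long long Tick;

    // Grow the retry delay on every nack; reset it once a send succeeds.
    class NackBackoff
    {
        Tick backoff;
        const Tick minBackoff, maxBackoff;

      public:
        NackBackoff(Tick min, Tick max)
            : backoff(min), minBackoff(min), maxBackoff(max) {}

        // delay to wait before retrying after a nack
        Tick nextDelay()
        {
            Tick delay = backoff;
            backoff = std::min(backoff * 2, maxBackoff);
            return delay;
        }

        void sendSucceeded() { backoff = minBackoff; }
    };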
--HG--
extra : convert_revision : 3595ad932015a4ce2bb72772da7850ad91bd09b1
fix the timing cpu to handle receiving a nacked packet
src/cpu/simple/timing.cc:
make the timing cpu handle receiving a nacked packet
src/mem/bridge.cc:
src/mem/bridge.hh:
the bridge never returns false when recvTiming() is called on its ports now; it always returns true and nacks the packet if there isn't sufficient buffer space
--HG--
extra : convert_revision : 5e12d0cf6ce985a5f72bcb7ce26c83a76c34c50a
figure out the block size from the devices attached to the bus; otherwise use a default block size when no devices that care are attached (a sketch follows the file list below)
configs/common/FSConfig.py:
src/mem/bridge.cc:
src/mem/bridge.hh:
src/python/m5/objects/Bridge.py:
fix partial writes with a functional memory hack
src/mem/bus.cc:
src/mem/bus.hh:
src/python/m5/objects/Bus.py:
figure out the block size from the devices attached to the bus; otherwise use a default block size when no devices that care are attached
src/mem/packet.cc:
fix WriteInvalidateResp so it is not marked as a request that needs a response, since it isn't
src/mem/port.hh:
by default return 0 for deviceBlockSize instead of panicking. This makes finding the block size the bus should use easier
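Roughly the selection the bus can then do (a self-contained sketch; the names
are stand-ins for the real Port/Bus code, and resolving differing sizes by
taking the maximum is an assumption):

    #include <algorithm>
    #include <vector>

    // Stand-in: deviceBlockSize() returns 0 for ports that don't care.
    struct PortLike
    {
        virtual int deviceBlockSize() const { return 0; }
        virtual ~PortLike() {}
    };

    int
    pickBusBlockSize(const std::vector<PortLike *> &ports, int defaultBlkSize)
    {
        int blkSize = 0;
        for (size_t i = 0; i < ports.size(); ++i)
            blkSize = std::max(blkSize, ports[i]->deviceBlockSize());
        // nobody attached cared, so fall back to the default
        return blkSize != 0 ? blkSize : defaultBlkSize;
    }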
--HG--
extra : convert_revision : 3fcfe95f9f392ef76f324ee8bd1d7f6de95c1a64
In this way a MemoryObject can keep a functional port around and give it to anyone who wants to do functional accesses rather
than creating a new one each time.
src/mem/bus.cc:
src/mem/bus.hh:
src/mem/cache/cache_impl.hh:
only keep around one functional port, which we give to anyone who wants it.
Otherwise we can run out of port ids reasonably quickly if a lot of
functional accesses are happening (e.g. remote debugging, dprintk, etc.)
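The idea, as a simplified sketch (names are stand-ins, not the actual
MemObject interface):

    // Stand-in for a functional-access port.
    struct FunctionalPort { /* ... */ };

    class MemObjectLike
    {
        FunctionalPort *funcPort;  // created once, shared by every caller

      public:
        MemObjectLike() : funcPort(0) {}
        ~MemObjectLike() { delete funcPort; }

        // Hand out the same functional port every time instead of
        // allocating a new one (and a new port id) per access.
        FunctionalPort *getFunctionalPort()
        {
            if (!funcPort)
                funcPort = new FunctionalPort();
            return funcPort;
        }
    };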
--HG--
extra : convert_revision : 6a9e3e96f51cedaab6de1b36cf317203899a3716