Port proxies are used to replace non-structural ports, so that every
port in the system corresponds to a structural entity. The advantage is
that memory is accessed through the normal memory subsystem, which
allows any constellation of distributed memories, address maps, etc.
Most accesses go through the "system port", which is used for loading
binaries, debugging, etc. Entities that belong to the CPU, e.g. threads
and thread contexts, instead wrap the CPU data port in a port proxy.
The following replacements are made:
FunctionalPort -> PortProxy
TranslatingPort -> SETranslatingPortProxy
VirtualPort -> FSTranslatingPortProxy
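Config scripts mostly see this change through the system port connection. A minimal sketch, assuming a gem5 tree of roughly this era; the attribute and port names (system_port, membus.port) are assumptions and may differ:

# Minimal sketch: the bus is assumed to still expose a single vector
# port named 'port', and System is assumed to expose 'system_port'.
from m5.objects import System, Bus, PhysicalMemory, AddrRange

system = System()
system.membus = Bus()
system.physmem = PhysicalMemory(range=AddrRange('128MB'))

# Ordinary memory traffic.
system.physmem.port = system.membus.port

# The system port is the structural port behind the generic PortProxy:
# binary loading, debugger accesses, etc. are funnelled through it and
# therefore see the same address map as everything else.
system.system_port = system.membus.port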
--HG--
rename : src/mem/vport.cc => src/mem/fs_translating_port_proxy.cc
rename : src/mem/vport.hh => src/mem/fs_translating_port_proxy.hh
rename : src/mem/translating_port.cc => src/mem/se_translating_port_proxy.cc
rename : src/mem/translating_port.hh => src/mem/se_translating_port_proxy.hh
This change fixes the problem for all the cases we actively use. If you want to try
more creative I/O device attachments (e.g. sharing an L2), this won't work. You
would need another level of caching between the I/O device and the cache
(which you actually need anyway with our current code to make sure writes
propagate). The extra cache is required so that the cache in between can be
marked as top level and won't try to send ownership of a block to the I/O device.
Asserts have been added that should catch any issues.
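As a rough illustration of the workaround described above, a hedged config sketch that places a small cache between the I/O bus and the memory bus; the parameter and port names (is_top_level, cpu_side, mem_side) are assumptions, not taken from this changeset:

# Hedged sketch: an extra cache between the I/O devices and the rest of
# the hierarchy, so writes propagate and block ownership is never handed
# to a device.  is_top_level and the port names are assumed, not verbatim.
from m5.objects import BaseCache, Bus

iobus = Bus()
membus = Bus()

iocache = BaseCache(size='1kB', assoc=8, block_size=64,
                    latency='10ns', mshrs=20, tgts_per_mshr=12)
iocache.is_top_level = True        # assumed name for the "top level" marker
iocache.cpu_side = iobus.port      # requests coming from I/O devices
iocache.mem_side = membus.port     # towards the shared memory system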
On the config end, if a shared L2 is created for the system, it is
parameterized to have n sharers as defined by options.num_cpus. In addition to
making the cache sharing-aware, so that discriminating tag policies can use
context_ids to make decisions, I added an occupancy AverageStat and an occupancy
percentage stat to each cache, so you can see how much cache each context
occupies on average, both in blocks and as a percentage. Note that since
devices have context_id -1, an array of occupancy stats indexed by
context_id would break here, so in FS mode I add an extra bucket for device
blocks. This bucket is explicitly not added in SE mode, both to avoid clutter
in the stats.txt file and to avoid broken stats (some formulas break when a
bucket is 0).
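A hedged sketch of the config side: a single L2 shared by options.num_cpus cores, with the sharer count handed to the cache so the occupancy stats get one bucket per context. The cache-side parameter name (num_cpus here) is an assumption based on the description above:

# Hedged sketch: one L2 shared by all CPUs, parameterized with the number
# of sharers so per-context occupancy stats can be sized.
from m5.objects import BaseCache, Bus

ncpus = 4                              # stands in for options.num_cpus
l2bus = Bus()
membus = Bus()

l2 = BaseCache(size='2MB', assoc=8, block_size=64, latency='10ns',
               mshrs=20, tgts_per_mshr=12,
               num_cpus=ncpus)          # assumed parameter name; one
                                        # occupancy bucket per context
l2.cpu_side = l2bus.port                # every CPU's L1 mem_side also
                                        # connects to l2bus.port
l2.mem_side = membus.port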
configs/example/memtest.py:
PhysicalMemory has vector of uniform ports instead of one special one.
Other updates to fix obsolete brokenness.
src/mem/physical.cc:
src/mem/physical.hh:
src/python/m5/objects/PhysicalMemory.py:
Have vector of uniform ports instead of one special one.
src/python/swig/pyobject.cc:
Add comment.
--HG--
extra : convert_revision : a4a764dcdcd9720bcd07c979d0ece311fc8cb4f1
set the latency parameter in terms of a latency
add caches to tsunami-simple configs
configs/common/Caches.py:
tests/configs/memtest.py:
tests/configs/o3-timing-mp.py:
tests/configs/o3-timing.py:
tests/configs/simple-atomic-mp.py:
tests/configs/simple-timing-mp.py:
tests/configs/simple-timing.py:
set the latency parameter in terms of a latency
configs/common/FSConfig.py:
give the bridge a default latency too
src/mem/cache/cache_builder.cc:
src/python/m5/objects/BaseCache.py:
remove hit_latency and make latency do the right thing
tests/configs/tsunami-simple-atomic-dual.py:
tests/configs/tsunami-simple-atomic.py:
tests/configs/tsunami-simple-timing-dual.py:
tests/configs/tsunami-simple-timing.py:
add caches to tsunami-simple configs
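To make "in terms of a latency" concrete: the cache latency and the bridge delay become time values rather than cycle counts, roughly as below (class and parameter names follow the Caches.py/FSConfig.py of that era but are not verbatim):

# Hedged sketch: latencies given as times, not cycle counts.
from m5.objects import BaseCache, Bridge

class L1Cache(BaseCache):
    assoc = 2
    block_size = 64
    latency = '1ns'         # a latency, replacing the old hit_latency cycles
    mshrs = 10
    tgts_per_mshr = 5

bridge = Bridge(delay='50ns')   # the bridge gets a default latency too
                                # (parameter name assumed)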
--HG--
extra : convert_revision : 37bef7c652e97c8cdb91f471fba62978f89019f1
Caveats:
- Even though memtest is ISA-independent, it will only
run for the Alpha builds, since there's no way to specify
ISA-independent reference files, and I didn't want to commit
three copies given that I'm not sure we want to run it for all
the different ISAs anyway.
- Reference outputs were generated on my laptop,
so performance numbers will be low compared to zizzer.
--HG--
extra : convert_revision : 210fe4abecc19fbab0b15402c6cb4863650bab66
I need to move over to using the fixPacket function so I don't have to make the same changes everywhere.
There is still a functional access bug somewhere in timing mode that I need to track down.
src/mem/cache/base_cache.cc:
src/mem/cache/cache_impl.hh:
Fix corner case on assertion
tests/configs/memtest.py:
Updated memtester with uncacheable addresses and functional accesses
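A hedged sketch of the kind of memtest.py update described: testers exercising uncacheable addresses and functional accesses through percentage knobs. The parameter names (percent_uncacheable, percent_functional) are taken to exist on the MemTest object of that era but are assumptions here:

# Hedged sketch: a tester mixing cached, uncacheable and functional
# accesses.  Parameter names are assumptions.
from m5.objects import MemTest

tester = MemTest(max_loads=1000000,
                 percent_reads=65,
                 percent_uncacheable=10,   # exercise the uncacheable path
                 percent_functional=50)    # cross-check via functional accesses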
--HG--
extra : convert_revision : e6fa851621700ff9227b83cc5cac20af4fc8444f
Get over 500,000 reads on each of 8 testers before memory leak becomes large.
tests/configs/memtest.py:
Update test to be more interesting
--HG--
extra : convert_revision : 4258b798fbeeed2a376f1bfac100a109eb05620e
src/mem/cache/base_cache.cc:
Fix a bug about not having a request to send
src/mem/cache/base_cache.hh:
Fix a bug with the blocking code
src/mem/cache/cache.hh:
Fix a bug with snoop hits in WB buffer
src/mem/cache/cache_impl.hh:
Fix a bug with snoop hits in WB buffer
Also, add better DPRINTF's
src/mem/cache/miss/miss_queue.cc:
Fix a bug with upgrades (Need to clean it up later)
src/mem/cache/miss/mshr.cc:
Fix a memory leak bug; some leaks remain where writebacks are not deleted
src/mem/cache/miss/mshr_queue.cc:
Fix a bug about upgrades (need to clean up later)
src/mem/packet.hh:
Fix for newly added cmd attribute for upgrades
tests/configs/memtest.py:
More interesting testcase
--HG--
extra : convert_revision : fcb4f17dd58b537bb4f67a8c835f50e455e8c688
src/mem/physical.cc:
Update comment to match memtest use
src/python/m5/objects/PhysicalMemory.py:
Make memtester have a way to connect functionally
tests/configs/memtest.py:
Properly create 8 memtesters and connect them to the memory system
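A hedged sketch of the topology this sets up: eight MemTest objects whose test ports go through the memory system under test, while their functional ports attach directly to a second PhysicalMemory (funcmem) acting as the reference copy. Object and port names follow the memtest configs of that era but are assumptions here:

# Hedged sketch: eight testers, timing traffic via the bus to physmem,
# functional verification directly against funcmem.
from m5.objects import MemTest, PhysicalMemory, Bus, AddrRange, System

cpus = [MemTest(max_loads=500000) for i in range(8)]

system = System(cpu=cpus,
                physmem=PhysicalMemory(range=AddrRange('64MB')),
                funcmem=PhysicalMemory(range=AddrRange('64MB')),
                membus=Bus())

system.physmem.port = system.membus.port
for cpu in cpus:
    cpu.test = system.membus.port          # path under test
    cpu.functional = system.funcmem.port   # reference connection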
--HG--
extra : convert_revision : e5a2dd9c8918d58051b553b5c6a14785d48b34ca
src/cpu/SConscript:
Add memtester to the compilation environment.
Someone who knows this better should make MemTest a CPU model parameter.
For now it is attached to the build of the o3 CPU.
src/cpu/memtest/memtest.cc:
src/cpu/memtest/memtest.hh:
Update Memtest for new mem system
src/python/m5/objects/MemTest.py:
Update memtest python description
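A hedged sketch of what the SConscript hook might look like, guarding the memtest sources behind the o3 CPU build as the note above describes; the variable and list names here are assumptions, not the actual SConscript contents:

# Hedged sketch of the SConscript change: only compile MemTest when the
# o3 CPU model is part of the build.  env['CPU_MODELS'] and 'sources'
# are assumed names.
Import('env')

sources = []
if 'O3CPU' in env['CPU_MODELS']:
    sources += ['memtest/memtest.cc']

Return('sources')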
--HG--
extra : convert_revision : d6a63e08fda0975a7abfb23814a86a0caf53e482