tests/SConscript:
add a new configuration for two-system tests (atomic simple only)
--HG--
extra : convert_revision : 16c260ab16f38779fe17b1cab18f36d5c7a70846
change /netperf/netperf to /netperf-bin/netperf
nat-netperf-maerts-client.rcS:
bad comment that went with the file - accidentally committed, but it probably doesn't matter; I just eliminated an ivlb in the script.
configs/boot/nat-netperf-maerts-client.rcS:
replace netperf/netperf with netperf-bin/netperf
configs/boot/netperf-maerts-client.rcS:
change /netperf/netperf to /netperf-bin/netperf
--HG--
extra : convert_revision : 32fed0042e267f315d3e688ebc4b66d7002b85f0
Right now this introduces a minor memory leak, as old physPorts and virtPorts are not deleted when new ones are created. A flyspray task has been created for this issue. It cannot be resolved until we determine how the bus will handle giving out IDs to functional ports that may be deleted.
src/cpu/o3/cpu.cc:
src/cpu/simple/atomic.cc:
src/cpu/simple/timing.cc:
Change the setup of the physPort and virtPort to instead happen every time the CPU has a context activated. This adds a little overhead, but it keeps things working correctly when the CPU does not have a physical memory attached to it until it switches in (as is the case for switch CPUs).
src/cpu/o3/thread_context.hh:
Change function from being called at init() to just being called whenever the memory ports need to be connected.
src/cpu/o3/thread_context_impl.hh:
Update this to not delete the port if it's the same as the virtPort.
src/cpu/thread_context.hh:
Change function from being called at init() to whenever the memory ports need to be connected.
src/cpu/thread_state.cc:
Instead of initializing the ports, simply connect them, deleting any old ports that might exist (see the sketch below). This allows these functions to be called multiple times.
src/cpu/thread_state.hh:
Ports are no longer initialized, but rather connected at context activation time.
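Loosely, the pattern looks like the following minimal C++ sketch; the types and the connectMemPorts name are simplified stand-ins, not the real m5 classes. The point is that stale ports are deleted before new ones are created, so the routine is safe to call on every context activation.

    // Hypothetical, simplified sketch of "connect at activation, delete stale ports".
    struct FunctionalPort { };          // stand-in for the real port type

    struct ThreadStateSketch {
        FunctionalPort *physPort = nullptr;
        FunctionalPort *virtPort = nullptr;

        void connectMemPorts() {        // hypothetical name
            if (virtPort && virtPort != physPort)
                delete virtPort;        // avoid double-deleting a shared port
            delete physPort;
            physPort = new FunctionalPort;
            virtPort = new FunctionalPort;
        }
    };

    int main() {
        ThreadStateSketch ts;
        ts.connectMemPorts();           // first activation
        ts.connectMemPorts();           // re-activation after a CPU switch
    }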
--HG--
extra : convert_revision : e399ce5dfbd6ad658c953a7c9c7b69b89a70219e
Fix a small writeback bug when missing in the L2 in atomic mode
src/mem/bus.cc:
Fix a comment to make sense
src/mem/cache/cache_impl.hh:
Do a functional access to the levels above on a read, as a temporary solution for L2s in FS (see the sketch after this file list).
Also fix a small writeback-miss issue in the L2.
src/mem/cache/coherence/simple_coherence.hh:
src/mem/cache/coherence/uni_coherence.cc:
src/mem/cache/coherence/uni_coherence.hh:
Do a functional access to the levels above on a read, as a temporary solution for L2s in FS
tests/quick/00.hello/ref/alpha/linux/o3-timing/m5stats.txt:
tests/quick/00.hello/ref/alpha/linux/simple-timing/m5stats.txt:
tests/quick/01.hello-2T-smt/ref/alpha/linux/o3-timing/m5stats.txt:
Update refs for writeback changes
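As a rough illustration of the temporary FS workaround (hypothetical, heavily simplified types, not the real cache code): on a read seen by the L2, functionally probe the caches above it so any dirty data they hold is observed.

    #include <vector>

    struct Packet { bool isRead = true; int data = 0; };

    struct CacheAbove {                       // stand-in for an L1
        int dirtyData = 42;
        void doFunctionalAccess(Packet &pkt) { pkt.data = dirtyData; }
    };

    struct L2Sketch {
        std::vector<CacheAbove *> above;      // caches between us and the CPU
        void handleRead(Packet &pkt) {
            if (pkt.isRead)
                for (CacheAbove *c : above)   // temporary: ask the levels above
                    c->doFunctionalAccess(pkt);
            // ... normal read handling would continue here ...
        }
    };

    int main() {
        CacheAbove l1;
        L2Sketch l2;
        l2.above.push_back(&l1);
        Packet pkt;
        l2.handleRead(pkt);                   // pkt.data now reflects the L1's copy
    }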
--HG--
extra : convert_revision : 937febd577b16b7fd97a5a68acaf53541828a251
src/cpu/o3/alpha/cpu_impl.hh:
Handle the PhysicalPort and VirtualPort in the ThreadState.
src/cpu/o3/cpu.cc:
Initialize the thread context.
src/cpu/o3/thread_context.hh:
Add new function to initialize thread context.
src/cpu/o3/thread_context_impl.hh:
Use code now put into function.
src/cpu/simple_thread.cc:
Move code to ThreadState and use the new helper function.
src/cpu/simple_thread.hh:
Remove init() in this derived class; use init() from ThreadState base class.
src/cpu/thread_state.cc:
Move the setting up of the Physical and Virtual ports here. Change getMemFuncPort() to connectToMemFunc(), which connects a port to a functional port of the memory object below the CPU (sketched below).
src/cpu/thread_state.hh:
Update functions.
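A hypothetical sketch of what the rename implies, with simplified stand-in types and a guessed signature: rather than returning a functional port, connectToMemFunc() takes a port and wires it to the functional port of the memory object below the CPU.

    struct Port {
        Port *peer = nullptr;
        void setPeer(Port *p) { peer = p; }
    };

    struct MemObject {                       // stand-in for the object below the CPU
        Port funcPort;
        Port *getFunctionalPort() { return &funcPort; }
    };

    // Connect both ends so either side can initiate functional accesses.
    void connectToMemFunc(Port *cpuSide, MemObject *mem) {   // hypothetical signature
        Port *memSide = mem->getFunctionalPort();
        cpuSide->setPeer(memSide);
        memSide->setPeer(cpuSide);
    }

    int main() {
        Port physPort;
        MemObject memBelow;
        connectToMemFunc(&physPort, &memBelow);
    }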
--HG--
extra : convert_revision : ff254715ef0b259dc80d08f13543b63e4024ca8d
src/cpu/simple/timing.cc:
Various updates to delete requests more properly.
The major change is moving the deletion of the fetch request/packet to after the instruction has executed and completed. This should fix a few bugs, because Ron's memory system didn't expect a functional access call while a timing access was being processed.
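Roughly, with hypothetical simplified types (not the real timing CPU code), the deferred deletion looks like this: the fetch request/packet stays alive through execution, so a functional access made during post-processing can still be referenced by the memory system.

    struct Request { };
    struct Packet { Request *req; explicit Packet(Request *r) : req(r) {} };

    struct TimingCpuSketch {
        void completeIfetch(Packet *pkt) {
            // execute the fetched instruction first; it may trigger a
            // functional access that the memory system ties to this packet
            executeInstruction();
            delete pkt->req;          // only now is it safe to free them
            delete pkt;
        }
        void executeInstruction() { /* ... */ }
    };

    int main() {
        TimingCpuSketch cpu;
        Request *req = new Request;
        cpu.completeIfetch(new Packet(req));
    }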
--HG--
extra : convert_revision : c7cf114bb1ff3cdaa7b0a40ed4c5302dc9d3a522
src/mem/bus.cc:
Make it so that invalidates sent up from the responder don't call back into the responder, but also don't panic.
src/mem/packet.hh:
If we don't have data in the packet, don't call deleteData().
Example: InvalidateRequests never have data.
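A tiny sketch of the guard, with a simplified stand-in for the real Packet class (the hasData() check here is illustrative, not the exact interface): only packets that actually carry data free it.

    #include <cstdint>

    struct PacketSketch {
        uint8_t *data = nullptr;
        bool hasData() const { return data != nullptr; }
        void deleteData() { delete [] data; data = nullptr; }
        ~PacketSketch() {
            if (hasData())      // e.g. an invalidate never allocated data
                deleteData();
        }
    };

    int main() {
        PacketSketch invalidate;            // no data: destructor skips deleteData()
        PacketSketch read;
        read.data = new uint8_t[64];        // read response: data is freed
    }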
--HG--
extra : convert_revision : 18766bc9f3bb4d852ac651d094254d347abd1634
make a corresponding updateIntrInfo for SPARC that's blank so it doesn't break the build
src/arch/sparc/interrupts.hh:
make a corresponding updateIntrInfo for SPARC that's blank so it doesn't break the build
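For illustration only, the kind of blank stub meant here, with a simplified signature (the real declaration lives in src/arch/sparc/interrupts.hh):

    class ThreadContext;                    // forward declaration is enough

    class Interrupts {
      public:
        void updateIntrInfo(ThreadContext *) { }   // intentionally a no-op on SPARC
    };

    int main() { Interrupts i; i.updateIntrInfo(nullptr); }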
--HG--
extra : convert_revision : 5f469d0cf897479b42703104cd801a8ef923fcae
src/mem/bridge.cc:
Update bridges, now that snoop addresses are properly forwarded.
The bus bridge should only handle snoops in the second phase (SNOOP_COMMIT).
src/mem/bus.cc:
src/mem/bus.hh:
Make sure that if a busBridge has access both to things that snoop and to things that respond, it only takes the request once.
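A simplified sketch of the two-phase rule; SNOOP_COMMIT comes from the message above, the other names are made up for the example. The bridge ignores the first snoop phase and only acts when the bus commits.

    enum SnoopPhase { SNOOP_CHECK, SNOOP_COMMIT };

    struct Packet { };

    struct BridgePortSketch {
        bool handled = false;
        void recvSnoop(Packet *pkt, SnoopPhase phase) {
            if (phase != SNOOP_COMMIT)
                return;                 // first phase: do nothing
            handled = true;             // second phase: actually handle the snoop
        }
    };

    int main() {
        BridgePortSketch bridge;
        Packet pkt;
        bridge.recvSnoop(&pkt, SNOOP_CHECK);    // ignored
        bridge.recvSnoop(&pkt, SNOOP_COMMIT);   // handled once
    }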
--HG--
extra : convert_revision : 26cc9ee4429be45d4476fa435e0e9a54843c2509
src/mem/cache/base_cache.cc:
Sometimes a functional access arrives while an outstanding packet is waiting to be sent.
This can happen because the timing CPU does some post-processing in recvTiming that sends a functional access.
Either the CPU should leave the pkt/req around (so they can be referenced in the memory system), or the memory
system should remove them from its outstanding lists and reinsert them if they fail in sendTiming.
I did the latter; eventually we should consider doing the former if that is the correct behavior.
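A hypothetical sketch of the chosen option, with simplified types: the packet is taken off the outstanding list before the send is attempted and reinserted if sendTiming fails, so a functional access arriving in between won't find it there.

    #include <list>

    struct Packet { };

    struct CachePortSketch {
        std::list<Packet *> outstanding;

        bool sendTiming(Packet *) { return false; }   // pretend the bus is busy

        void trySend() {
            if (outstanding.empty())
                return;
            Packet *pkt = outstanding.front();
            outstanding.pop_front();          // not visible to functional accesses now
            if (!sendTiming(pkt))
                outstanding.push_front(pkt);  // failed: reinsert and retry later
        }
    };

    int main() {
        CachePortSketch port;
        port.outstanding.push_back(new Packet);
        port.trySend();                       // send fails, packet goes back on the list
    }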
--HG--
extra : convert_revision : be41e0d2632369dca9d7c15e96e5576d7583fe6a
src/mem/bus.cc:
Only call snoop once per port; we still need to fix it so that overlapping snoop ranges aren't added to the list.
Functional accesses that call snoop and go up to a higher bus may change the src, so reset it after each snoop.
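A simplified sketch of both fixes, using made-up types: snoop each port only once even if overlapping ranges list it more than once, and restore the packet's src after each functional snoop in case a higher bus rewrote it.

    #include <set>
    #include <vector>

    struct Packet { int src = 0; };

    struct PortSketch {
        int id;
        void sendFunctional(Packet &pkt) { pkt.src = id; }  // a higher bus may rewrite src
    };

    void functionalSnoop(std::vector<PortSketch *> &snoopPorts, Packet &pkt) {
        std::set<PortSketch *> visited;
        for (PortSketch *p : snoopPorts) {
            if (!visited.insert(p).second)
                continue;                   // overlapping ranges: snoop each port once
            int origSrc = pkt.src;
            p->sendFunctional(pkt);
            pkt.src = origSrc;              // undo any change made above us
        }
    }

    int main() {
        PortSketch a{1}, b{2};
        std::vector<PortSketch *> ports{&a, &a, &b};   // duplicate entry from overlap
        Packet pkt;
        functionalSnoop(ports, pkt);
    }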
--HG--
extra : convert_revision : 7276059c798a85cb9d138ccc5531298ecd055c13
src/mem/bus.cc:
Actually return the snoop list when asked for it.
Don't get stuck in infinite functional loops
--HG--
extra : convert_revision : 8e6dafbd10b30d48d28b6b5d4b464e8e8f6a3ddc
configs/splash2/run.py:
Fix MaxTick for splash configs
configs/splash2/cluster.py:
Add a config that allows clusters of CPUs to be attached to a single L1
--HG--
extra : convert_revision : 1bb0a0c5f4889316940a9858be90ae2eaa849f1a