Fix a small writeback bug when missing in the L2 in atomic mode
src/mem/bus.cc:
Fix a comment to make sense
src/mem/cache/cache_impl.hh:
Do a functional access to the levels above on a read, as a temporary solution for L2s in FS (see the sketch after the file list)
Also fix a small issue with writeback misses in the L2
src/mem/cache/coherence/simple_coherence.hh:
src/mem/cache/coherence/uni_coherence.cc:
src/mem/cache/coherence/uni_coherence.hh:
Do a functional access to the levels above on a read, as a temporary solution for L2s in FS
tests/quick/00.hello/ref/alpha/linux/o3-timing/m5stats.txt:
tests/quick/00.hello/ref/alpha/linux/simple-timing/m5stats.txt:
tests/quick/01.hello-2T-smt/ref/alpha/linux/o3-timing/m5stats.txt:
Update refs for the writeback changes
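A hedged sketch of the temporary workaround in src/mem/cache/cache_impl.hh, using stand-in types (Packet, UpperLevel, L2Cache are simplified, not the real M5 classes): on a read serviced by the L2, the levels above are also queried functionally so any newer copy they hold ends up in the response.

    #include <cstdint>
    #include <vector>

    struct Packet {                      // stand-in for the real Packet class
        uint64_t addr = 0;
        std::vector<uint8_t> data;
    };

    struct UpperLevel {                  // stand-in for a cache above the L2
        // Returns true if it held the block and copied its data into pkt.
        virtual bool functionalRead(Packet &pkt) = 0;
        virtual ~UpperLevel() = default;
    };

    struct L2Cache {
        std::vector<UpperLevel *> above;

        // Temporary FS workaround: after servicing a read normally, ask the
        // caches above for their (possibly newer) copy via a functional access.
        void serviceRead(Packet &pkt) {
            readFromStorage(pkt);             // normal lookup/fill path
            for (UpperLevel *u : above)
                if (u->functionalRead(pkt))   // newer data replaces pkt.data
                    break;
        }

        void readFromStorage(Packet &) { /* normal tag/data array access */ }
    };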
--HG--
extra : convert_revision : 937febd577b16b7fd97a5a68acaf53541828a251
src/cpu/o3/alpha/cpu_impl.hh:
Handle the PhysicalPort and VirtualPort in the ThreadState.
src/cpu/o3/cpu.cc:
Initialize the thread context.
src/cpu/o3/thread_context.hh:
Add a new function to initialize the thread context.
src/cpu/o3/thread_context_impl.hh:
Use the code that is now in the helper function.
src/cpu/simple_thread.cc:
Move code to ThreadState and use the new helper function.
src/cpu/simple_thread.hh:
Remove init() from this derived class; use init() from the ThreadState base class.
src/cpu/thread_state.cc:
Move the setup of the Physical and Virtual ports here. Change getMemFuncPort() to connectToMemFunc(), which connects a port to a functional port of the memory object below the CPU (see the sketch after the file list).
src/cpu/thread_state.hh:
Update functions.
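A minimal sketch of the connectToMemFunc() idea; Port, MemObject and the port names here are simplified stand-ins rather than the real M5 interfaces.

    #include <cassert>
    #include <string>

    struct Port {
        Port *peer = nullptr;
        void setPeer(Port *p) { peer = p; }
    };

    struct MemObject {
        // The memory object below the CPU hands out functional ports by name.
        virtual Port *getFunctionalPort(const std::string &name) = 0;
        virtual ~MemObject() = default;
    };

    struct ThreadState {
        Port physPort;   // physical-address functional port
        Port virtPort;   // virtual-address functional port

        // connectToMemFunc(): hook one of this thread's ports up to a
        // functional port of the memory object below the CPU, instead of
        // handing the raw port back to the caller (old getMemFuncPort()).
        void connectToMemFunc(Port &port, MemObject &mem,
                              const std::string &name) {
            Port *memPort = mem.getFunctionalPort(name);
            assert(memPort);
            port.setPeer(memPort);
            memPort->setPeer(&port);
        }

        // init() now sets up both ports in one place for derived classes.
        void init(MemObject &mem) {
            connectToMemFunc(physPort, mem, "phys_port");
            connectToMemFunc(virtPort, mem, "virt_port");
        }
    };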
--HG--
extra : convert_revision : ff254715ef0b259dc80d08f13543b63e4024ca8d
src/cpu/simple/timing.cc:
Various updates for deleting requests more properly.
The major change is moving the deletion of the fetch request/packet to after the instruction has executed and completed. This should fix a few bugs, because Ron's memory system didn't expect a functional access to be requested while a timing access was still being processed.
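A hedged sketch of the reordering, with toy Request/Packet types rather than the real ones: the fetch request/packet is only freed after the instruction has executed, so a functional access triggered during execution can still find them.

    struct Request { /* ... */ };
    struct Packet  { Request *req; explicit Packet(Request *r) : req(r) {} };

    struct TimingCPU {
        Packet *ifetchPkt = nullptr;

        void completeIfetch(Packet *pkt) {
            ifetchPkt = pkt;          // do NOT delete yet
            executeInstruction();     // may issue functional accesses that
                                      // expect the outstanding fetch to exist
            // Only now is it safe to free the fetch request and packet.
            delete ifetchPkt->req;
            delete ifetchPkt;
            ifetchPkt = nullptr;
        }

        void executeInstruction() { /* decode + execute, maybe a mem access */ }
    };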
--HG--
extra : convert_revision : c7cf114bb1ff3cdaa7b0a40ed4c5302dc9d3a522
src/mem/bus.cc:
Make it so that invalidates being sent up from the responder don't call back into the responder, but also don't panic.
src/mem/packet.hh:
If the packet doesn't have data, don't call deleteData(). For example, InvalidateRequests never carry data.
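The guard in src/mem/packet.hh could look roughly like this (simplified Packet; hasData() is an assumed helper, not necessarily the real accessor): only free the data pointer when the packet actually carries data.

    #include <cstdint>

    struct Packet {
        uint8_t *data = nullptr;
        bool hasData() const { return data != nullptr; }

        void deleteData() {
            if (!hasData())        // e.g. an InvalidateReq: nothing to free
                return;
            delete [] data;
            data = nullptr;
        }

        ~Packet() { deleteData(); }
    };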
--HG--
extra : convert_revision : 18766bc9f3bb4d852ac651d094254d347abd1634
Make a matching blank updateIntrInfo() for SPARC so that it doesn't break the build
src/arch/sparc/interrupts.hh:
Make a matching blank updateIntrInfo() for SPARC so that it doesn't break the build
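The SPARC stub amounts to an empty method; the sketch below is a simplified approximation, not the exact M5 signature.

    namespace SparcISA {

    class Interrupts {
      public:
        void updateIntrInfo(/* ThreadContext *tc */)
        {
            // Intentionally blank: SPARC interrupt bookkeeping isn't
            // implemented yet; this just keeps shared code compiling.
        }
    };

    } // namespace SparcISA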
--HG--
extra : convert_revision : 5f469d0cf897479b42703104cd801a8ef923fcae
src/mem/bridge.cc:
Update bridges, now that snoop addresses are properly forwarded.
The bus bridge should only handle snoops during the second phase (SNOOP_COMMIT).
src/mem/bus.cc:
src/mem/bus.hh:
Make sure that if a bus bridge has access both to things that snoop and to things that respond, it only takes the request once.
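A sketch of both bridge-related changes, with toy Bus/Port types standing in for the real ones: the bridge ignores everything except SNOOP_COMMIT, and the bus hands a packet to a port only once even when that port both snoops and responds.

    #include <set>
    #include <vector>

    struct Packet { };
    enum SnoopPhase { SNOOP_CHECK, SNOOP_COMMIT };

    struct Port {
        virtual void recvSnoop(Packet &pkt, SnoopPhase phase) = 0;
        virtual void recvRequest(Packet &pkt) = 0;
        virtual ~Port() = default;
    };

    struct BridgePort : Port {
        void recvSnoop(Packet &pkt, SnoopPhase phase) override {
            if (phase != SNOOP_COMMIT)
                return;               // ignore the first snoop phase entirely
            forwardAcross(pkt);       // push the snoop over the bridge
        }
        void recvRequest(Packet &pkt) override { forwardAcross(pkt); }
        void forwardAcross(Packet &) { /* ... */ }
    };

    struct Bus {
        std::vector<Port *> snoopers;   // ports whose snoop range matches
        Port *responder = nullptr;      // port whose response range matches

        void deliver(Packet &pkt, SnoopPhase phase) {
            std::set<Port *> seen;      // each port sees the packet at most once
            for (Port *p : snoopers)
                if (seen.insert(p).second)
                    p->recvSnoop(pkt, phase);
            // A bridge can appear as both a snooper and the responder; don't
            // hand it the request a second time.
            if (phase == SNOOP_COMMIT && responder && !seen.count(responder))
                responder->recvRequest(pkt);
        }
    };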
--HG--
extra : convert_revision : 26cc9ee4429be45d4476fa435e0e9a54843c2509
src/mem/cache/base_cache.cc:
Sometimes a functional access arrives while we are waiting on an outstanding packet to be sent.
This can happen because the timing CPU does some post-processing in recvTiming that sends a functional access.
Either the CPU should leave the pkt/req around (so they can be referenced in the memory system), or the memory
system should remove them from the outstanding lists and reinsert them if they fail in sendTiming.
I did the latter; eventually we should consider doing the former if that is the correct behavior.
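A sketch of the chosen (latter) approach, with stand-in Port/Packet types: the packet is pulled off the outstanding list before sendTiming() and reinserted only if the send is rejected, so a functional access arriving in between doesn't trip over it.

    #include <deque>

    struct Packet { /* ... */ };

    struct Port {
        bool sendTiming(Packet *) { return true; }   // false == retry later
    };

    struct BaseCache {
        Port memSidePort;
        std::deque<Packet *> outstanding;            // packets waiting to go out

        void trySend() {
            if (outstanding.empty())
                return;
            Packet *pkt = outstanding.front();
            outstanding.pop_front();                 // remove *before* sending
            if (!memSidePort.sendTiming(pkt))
                outstanding.push_front(pkt);         // rejected: put it back
        }
    };

The alternative mentioned above, keeping the pkt/req alive in the CPU, would move this bookkeeping out of the cache instead.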
--HG--
extra : convert_revision : be41e0d2632369dca9d7c15e96e5576d7583fe6a
src/mem/bus.cc:
Only call snoop once per port; still need to fix it so that overlapping snoop ranges aren't added to the list.
Functional accesses that snoop and reach a higher bus may change the packet's src, so reset it after each snoop.
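A sketch of the src restore, with simplified Packet/Port/Bus types: the original src is saved and put back after every snoop, since a snoop that climbs to a higher bus may overwrite it.

    #include <vector>

    struct Packet { short src = -1; };

    struct Port {
        virtual void sendFunctionalSnoop(Packet &pkt) = 0;
        virtual ~Port() = default;
    };

    struct Bus {
        std::vector<Port *> snoopPorts;

        void functionalSnoop(Packet &pkt) {
            short origSrc = pkt.src;
            for (Port *p : snoopPorts) {
                p->sendFunctionalSnoop(pkt);   // may recurse into a higher bus
                pkt.src = origSrc;             // undo any src change it made
            }
        }
    };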
--HG--
extra : convert_revision : 7276059c798a85cb9d138ccc5531298ecd055c13
src/mem/bus.cc:
Actually return the snoop list when asked for it.
Don't get stuck in infinite functional loops.
--HG--
extra : convert_revision : 8e6dafbd10b30d48d28b6b5d4b464e8e8f6a3ddc
configs/splash2/run.py:
Fix MaxTick for splash configs
configs/splash2/cluster.py:
Add a config that allows clusters of CPUs to be attached to a single L1
--HG--
extra : convert_revision : 1bb0a0c5f4889316940a9858be90ae2eaa849f1a
src/cpu/o3/fetch_impl.hh:
Fetch needs to make sure it isn't waiting on an Icache access.
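A minimal sketch of the guard, with a hypothetical icacheAccessPending flag: fetch skips its tick while an instruction-cache access is still outstanding instead of issuing another one.

    struct Fetch {
        bool icacheAccessPending = false;   // set when a fetch request is sent

        void tick() {
            if (icacheAccessPending)
                return;                     // wait for completeIfetch() first
            issueFetchRequest();
        }

        void issueFetchRequest() { icacheAccessPending = true; /* send pkt */ }
        void completeIfetch()    { icacheAccessPending = false; }
    };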
--HG--
extra : convert_revision : b53eb58b9e5a00bdb394134586d1f84f84d1c6e1
src/arch/alpha/interrupts.hh:
No need for this now that the ThreadContext is being used to set these IPRs in interrupts.
Also split up the interrupt checking from the updating of the IPL and interrupt summary.
src/arch/alpha/tlb.cc:
Check whether the PC is in PAL mode, not the address.
src/cpu/o3/alpha/cpu.hh:
Split up getting the interrupt from actually processing the interrupt.
src/cpu/o3/alpha/cpu_impl.hh:
Split up the processing of interrupts (see the sketch after the file list).
src/cpu/o3/commit_impl.hh:
Update for ISA-oriented interrupt changes.
src/cpu/o3/fetch_impl.hh:
Fix a broken if statement from the PcPAL updates, and properly populate the request fields.
Also add more debugging output.
src/cpu/ozone/cpu_impl.hh:
Updates for ISA-oriented interrupt stuff.
src/cpu/ozone/front_end_impl.hh:
Populate request fields properly.
src/cpu/simple/base.cc:
Update for interrupt stuff.
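A sketch of the check/process split, with simplified Interrupts/CPU/Fault types and assumed method names: checkInterrupts() and getInterrupt() only report what's pending, and updateIntrInfo() touches the IPL and interrupt summary only once the CPU actually takes the interrupt.

    struct Fault {
        int id = 0;
        bool isNone() const { return id == 0; }
    };

    struct Interrupts {
        int pendingIpl = 0;
        int currentIpl = 0;

        // Step 1: just report, without touching any state.
        bool checkInterrupts() const { return pendingIpl > currentIpl; }
        Fault getInterrupt() const { Fault f; f.id = pendingIpl; return f; }

        // Step 2: runs only once the CPU commits to taking the interrupt.
        void updateIntrInfo() { currentIpl = pendingIpl; /* set summary IPRs */ }
    };

    struct CPU {
        Interrupts interrupts;

        void commitTick() {
            if (!interrupts.checkInterrupts())
                return;
            Fault f = interrupts.getInterrupt();   // which interrupt to take
            if (f.isNone())
                return;
            interrupts.updateIntrInfo();           // now update IPL + summary
            trap(f);
        }

        void trap(const Fault &) { /* vector into the handler */ }
    };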
--HG--
extra : convert_revision : 9bac3f9ffed4948ee788699b2fa8419bc1ca647c