Inorder expects eaComp to be visible through the StaticInst object. This mirrors a similar change
to ALPHA... Needs to be done for SPARC and whatever other ISAs want to use InOrderCPU.
TLBUnit is no longer used, so we also get rid of the memAccSize and memAccFlags functions added to the ISA and StaticInst,
since the TLB is not a separate resource to acquire. Instead, TLB access is done before any read/write to memory
and the result is checked before the request is sent out to memory.
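As a rough illustration, here is a minimal C++ sketch of that flow, using invented names (RequestSketch, translate, readMemory) rather than the real InOrderCPU/TLB interfaces: the translation is done first, and the memory access only goes out if no fault came back.

    #include <cstdint>
    #include <iostream>

    struct Fault { bool isFault; const char *name; };
    static const Fault NoFault{false, "NoFault"};

    struct RequestSketch { uint64_t vaddr; uint64_t paddr; unsigned size; };

    // Stand-in for a TLB translate call; pretend the low 1 MB maps 1:1.
    Fault translate(RequestSketch &req)
    {
        if (req.vaddr < 0x100000) { req.paddr = req.vaddr; return NoFault; }
        return Fault{true, "DTB page fault"};
    }

    void readMemory(const RequestSketch &req)
    {
        std::cout << "access paddr 0x" << std::hex << req.paddr << "\n";
    }

    int main()
    {
        RequestSketch req{0x1000, 0, 4};
        Fault f = translate(req);   // TLB lookup happens first ...
        if (!f.isFault)
            readMemory(req);        // ... and only then does the access go out.
        else
            std::cout << "fault: " << f.name << "\n";
    }
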
* * *
inorder was incorrectly storing FP values and confusing the integer/fp storage view of floating-point operations. A big issue was trying to infer whether we were doing a single- or double-precision access,
because that determines the size of the value to store (32 or 64 bits). This isn't exactly straightforward, since Alpha uses all 64-bit regs while MIPS/SPARC use a dual-reg view. By getting this value from
the actual floating-point register file, the model can figure out what it needs to store.
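A small hedged sketch of the idea, with hypothetical names (FloatRegFileSketch, storeFpReg) standing in for the real register file: the store size comes from the register file's view of the register, not from guessing the precision from the instruction.

    #include <cstdint>
    #include <cstring>
    #include <iostream>

    // Hypothetical regfile: each register reports whether it currently holds
    // a 32-bit (single) or 64-bit (double) quantity.
    struct FloatRegFileSketch {
        double   regs[32] = {};
        unsigned widthBits[32];

        FloatRegFileSketch() { for (unsigned &w : widthBits) w = 64; }
        unsigned width(int idx) const { return widthBits[idx]; }
        double   read(int idx)  const { return regs[idx]; }
    };

    // Store the register using the width the regfile reports.
    unsigned storeFpReg(const FloatRegFileSketch &rf, int idx, uint8_t *mem)
    {
        if (rf.width(idx) == 32) {
            float f = static_cast<float>(rf.read(idx));
            std::memcpy(mem, &f, sizeof(f));
            return 4;                   // MIPS/SPARC-style 32-bit view
        }
        double d = rf.read(idx);
        std::memcpy(mem, &d, sizeof(d));
        return 8;                       // Alpha-style full 64-bit register
    }

    int main()
    {
        FloatRegFileSketch rf;
        rf.regs[2] = 3.5;
        uint8_t buf[8];
        std::cout << "stored " << storeFpReg(rf, 2, buf) << " bytes\n";
    }
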
Remove the eaComp/memAcc sub-instructions since they are unused in the CPU models. Instead, create an eaComp that is visible from the StaticInst object. This gives the InOrder model the capability of generating the effective address without actually initiating the access.
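A minimal sketch of what that split buys the pipeline, assuming hypothetical StaticInstSketch/ExecContextSketch classes rather than the real M5 ones: the address-generation stage can call eaComp() without touching memory, and initiateAcc() can be issued later.

    #include <cstdint>
    #include <iostream>

    struct ExecContextSketch { uint64_t intRegs[32] = {}; };

    struct StaticInstSketch {
        virtual ~StaticInstSketch() = default;
        // Compute the effective address only; no memory request is made.
        virtual uint64_t eaComp(const ExecContextSketch &xc) const = 0;
        // Actually start the memory access (omitted here).
        virtual void initiateAcc(ExecContextSketch &xc) const = 0;
    };

    struct LoadDispSketch : StaticInstSketch {
        int base; int64_t disp;
        LoadDispSketch(int b, int64_t d) : base(b), disp(d) {}
        uint64_t eaComp(const ExecContextSketch &xc) const override
        { return xc.intRegs[base] + disp; }
        void initiateAcc(ExecContextSketch &) const override { /* issue request */ }
    };

    int main()
    {
        ExecContextSketch xc; xc.intRegs[5] = 0x2000;
        LoadDispSketch ld(5, 0x10);
        // The address-generation stage can now do this on its own:
        std::cout << std::hex << ld.eaComp(xc) << "\n";   // prints 2010
    }
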
* * *
Changes so that InOrder can work for a non-delay-slot ISA like Alpha. Typically, the changes have to do with handling misspeculated branches at different points in the pipeline.
Edit AlphaISA to support the inorder model. Mostly alternate constructor functions and a few skeleton multithreaded-support functions.
* * *
Remove the namespace from the header file; it causes compiler issues that are hard to find.
* * *
Separate the TLB from the CPU and allow it to live in the TLBUnit resource. Give the CPU accessor functions for TLB access and bind the TLB at construction time.
* * *
Expose memory access size and flags through the instruction object
(temporarily as memAccSize and memFlags, to get the TLB code working).
This changeset also includes a lot of work from Derek Hower <drh5@cs.wisc.edu>
RubyMemory is now both a driver for Ruby and a port for M5. Changed
the makeRequest/hitCallback interface. Brought packets (superficially)
into the sequencer. Modified the tester infrastructure to be packet based.
M5 and Ruby can now be used together through the example ruby_se.py
script. SPARC parallel applications work, and the timing *seems* right
from combined M5/Ruby debug traces. To run,
% build/ALPHA_SE/m5.debug configs/example/ruby_se.py -c tests/test-progs/hello/bin/alpha/linux/hello -n 4 -t
1. removed checks from tester files
2. removed the else clause in Sequencer and DirectoryMemory; the else
clause is needed by the tester, so it is up to Derek to revive it
elsewhere when he gets to it
Also:
1. Changed m_entries in DirectoryMemory to a map
2. Replaced SIMICS_read_physical_memory with a call to the now-dummy
readPhysMem function, which Derek is to flesh out
Add the PROTOCOL sticky option, which sets the coherence protocol that slicc
will parse and therefore Ruby will use. This whole process was made
difficult by the fact that the set of files that are output by slicc
are not easily known ahead of time. The easiest thing wound up being
to write a parser for slicc that would tell me. Incidentally this
means we now have a slicc grammar written in python.
This basically means changing all #include statements and changing
autogenerated code so that it generates the correct paths. Because
slicc generates #includes, I had to hard code the include paths to
mem/protocol.
1) Removing files from the ruby build left some unresolved
symbols. Those have been fixed.
2) Most of the dependencies on Simics data types and the simics
interface files have been removed.
3) Almost all mention of opal is gone.
4) Huge chunks of LogTM are now gone.
5) Handling 1-4 left ~hundreds of unresolved references, which were
fixed, yielding a snowball effect (and the massive size of this
delta).
I did the macro cleanup because I was worried that the SCons scanner
would get confused. This code will hopefully go away soon anyway.
--HG--
rename : src/mem/ruby/config/config.include => src/mem/ruby/config/config.hh
This was double-scheduling itself (once in the constructor and once in the CPU code). Also add support for stopping/starting
progress events through a repeatEvent flag, and for changing the interval of the progress event.
Start by turning all of the *Source functions into classes
so we can do more calculations and more easily collect the data we need.
Add parameters to the new classes for indicating what sorts of flags the
objects should be compiled with, so we can allow certain files to be compiled
without Werror, for example.
Lowest-priority interrupts are now delivered based on a rotating offset into
the list of potential recipients. There could be pathological cases where a
processor gets picked on and ends up at that rotating offset all the time, but
it's much more likely that the group will stay consistent and the pain will be
distributed evenly.
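A hedged sketch of the rotating-offset arbitration, with an invented name (LowestPriorityPicker) and none of the actual APIC plumbing: each delivery starts one slot further along the candidate list, so no single CPU is always the first choice.

    #include <cstddef>
    #include <iostream>
    #include <vector>

    struct LowestPriorityPicker {
        std::size_t offset = 0;  // rotates after every delivery

        // Pick a recipient from the eligible local APIC IDs.
        int pick(const std::vector<int> &candidates)
        {
            if (candidates.empty())
                return -1;
            int chosen = candidates[offset % candidates.size()];
            ++offset;            // the next interrupt starts at the following slot
            return chosen;
        }
    };

    int main()
    {
        LowestPriorityPicker picker;
        std::vector<int> apics{0, 1, 2, 3};
        for (int i = 0; i < 6; ++i)
            std::cout << picker.pick(apics) << " ";   // 0 1 2 3 0 1
        std::cout << "\n";
    }
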
This is a hack so that the IO APIC can figure out information about the local
APICs. The local APICs still have no way to find out about each other.
Ideally, when the local APICs update state that's relevant to somebody else,
they'd send an update to everyone. Without being able to do a broadcast, that
would still require knowing who else there is to notify. Other broadcasts are
implemented using assumptions that may not always be true.
The ID as exposed to software can be changed. Tracking those changes in M5
would be cumbersome, especially since there's no guarantee the IDs will remain
unique.
Previously there was one per bus, which caused some coherence problems
when more than one decided to respond. Now there is just one on
the main memory bus. The default bus responder on all other buses
is now the downstream cache's cpu_side port. Caches no longer need
to do address range filtering; instead, we just have a simple flag
to prevent snoops from propagating to the I/O bus.
This patch adds limited multithreading support in syscall-emulation
mode, by using the clone system call. The clone system call works
for Alpha, SPARC and x86, and multithreaded applications run
correctly in Alpha and SPARC.
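For reference, a minimal example of the kind of workload this enables. It is an ordinary pthreads program (an example workload, not simulator code); pthread_create() ultimately issues the clone system call that SE mode now handles. It would typically be cross-compiled statically for the target ISA and run on a multi-CPU SE configuration.

    #include <cstdio>
    #include <pthread.h>

    static void *worker(void *arg)
    {
        std::printf("hello from thread %ld\n", reinterpret_cast<long>(arg));
        return nullptr;
    }

    int main()
    {
        pthread_t threads[4];
        // Each pthread_create() maps onto a clone() syscall under the hood.
        for (long i = 0; i < 4; ++i)
            pthread_create(&threads[i], nullptr, worker,
                           reinterpret_cast<void *>(i));
        for (pthread_t &t : threads)
            pthread_join(t, nullptr);
        return 0;
    }
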
This frees up needed space for more public flags. Also:
- remove unused Request accessor methods
- make Packet use public Request accessors, so it need not be a friend
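The shape of the change, in a generic hedged sketch (simplified names, not the actual Packet/Request definitions): once the accessors are public, the friend declaration can simply be deleted.

    #include <cstdint>

    class RequestSketch {
        uint64_t paddr_;
        uint32_t flags_;
        // No "friend class PacketSketch;" needed any more.
      public:
        RequestSketch(uint64_t paddr, uint32_t flags)
            : paddr_(paddr), flags_(flags) {}
        uint64_t getPaddr() const { return paddr_; }
        uint32_t getFlags() const { return flags_; }
    };

    class PacketSketch {
        const RequestSketch &req_;
      public:
        explicit PacketSketch(const RequestSketch &req) : req_(req) {}
        // Uses the public accessors rather than poking at private members.
        uint64_t getAddr() const { return req_.getPaddr(); }
    };

    int main()
    {
        RequestSketch req(0x1000, 0);
        PacketSketch pkt(req);
        return pkt.getAddr() == 0x1000 ? 0 : 1;
    }
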
This microop does a store and unlocks the requested address. The RISC86
microop ISA doesn't seem to have an equivalent, so I'm guessing that
the store following an ldstl is automatically unlocking there. We don't do it
that way, for performance reasons, since the behavior is the same either way.
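A hypothetical sketch of the semantics described above, with invented names (storeUnlock, LockTableSketch); the real microop works through the memory system rather than a lock table, but the two explicit steps are the point.

    #include <cstdint>
    #include <unordered_set>

    struct LockTableSketch {
        std::unordered_set<uint64_t> lockedAddrs;
        void lock(uint64_t addr)   { lockedAddrs.insert(addr); }
        void unlock(uint64_t addr) { lockedAddrs.erase(addr); }
    };

    struct MemSketch {
        void write(uint64_t addr, uint64_t data) { (void)addr; (void)data; }
    };

    // The microop does both steps explicitly rather than relying on an
    // ordinary store to unlock implicitly.
    void storeUnlock(MemSketch &mem, LockTableSketch &locks,
                     uint64_t addr, uint64_t data)
    {
        mem.write(addr, data);   // the store itself
        locks.unlock(addr);      // release the lock on the requested address
    }

    int main()
    {
        MemSketch mem;
        LockTableSketch locks;
        locks.lock(0x40);
        storeUnlock(mem, locks, 0x40, 0xdeadbeef);
        return static_cast<int>(locks.lockedAddrs.count(0x40));   // 0: released
    }
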
For some reason o3 FS init() only called initCPU if the thread state
was Suspended, which was no longer the case. There's no apparent
reason to check, so I whacked the test completely rather than
changing the check to Halted.
The inorder init() was also updated to be symmetric, though the
previous code was just a fancy no-op.
This situation can arise now on the first fetch cycle after
the last active thread is halted. It seems easy enough to
deal with when it happens rather than trying to avoid it.
This provides a common initial status for all threads independent
of CPU model (unlike the prior situation where CPUs initialized
threads to inconsistent states).
This mostly matters for SE mode; in FS mode, ISA-specific startupCPU()
methods generally handle boot-time initialization of thread contexts
(since the right thing to do is ISA-dependent).
Basically merge it in with Halted.
Also had to get rid of a few other functions that
called ThreadContext::deallocate(), including:
- InOrderCPU's setThreadRescheduleCondition.
- ThreadContext::exit(). This function was there to avoid terminating
simulation when one thread out of a multi-thread workload exits, but we
need to find a better (non-cpu-centric) way.
Enable more or less takes the place of check, but also allows stats to
do some other configuration. Prepare moves all of the code that readies
a stat for dumping into a separate function in preparation for supporting
serialization of certain pieces of statistics data.
While we're at it, clean up the visitor code and some of the python code.
This basically works by taking advantage of the curiously recurring template
pattern in an intelligent way so as to reduce the number of lines of code
and hopefully make things a little bit clearer.
This provides an easy way to provide the callbacks into the data side
of things from the info side of things. Rename Wrap to DataWrap so it
is more easily distinguishable from InfoWrap
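A rough sketch of how the curiously recurring template pattern serves that purpose; the class bodies here are illustrative (only the DataWrap/InfoWrap naming comes from the change itself). The wrapper knows the concrete stat type at compile time, so the info side can call back into the data side without virtual dispatch or duplicated glue code.

    #include <iostream>
    #include <string>

    template <class Derived>
    struct DataWrapSketch {
        std::string name;
        Derived &self() { return static_cast<Derived &>(*this); }

        // Callbacks from the "info" side into the concrete "data" side.
        void enable()  { self().enableImpl(); }
        void prepare() { self().prepareImpl(); }
    };

    struct ScalarStatSketch : DataWrapSketch<ScalarStatSketch> {
        double value = 0;
        void enableImpl()  { /* final configuration before simulation */ }
        void prepareImpl() { /* ready the value for dumping */ }
    };

    int main()
    {
        ScalarStatSketch s;
        s.name = "sim_ticks";
        s.enable();
        s.prepare();
        std::cout << s.name << " = " << s.value << "\n";
    }
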