This change fixes the problem for all the cases we actively use. If you want to try
more creative I/O device attachments (e.g., sharing an L2), this won't work. You
would need another level of caching between the I/O device and the cache
(which you actually need anyway with our current code to make sure writes
propagate). This is required so that you can mark the cache in between as
top level and it won't try to send ownership of a block to the I/O device.
Asserts have been added that should catch any issues.
None of the code in the ruby tester directory is compiled or referred to
outside of that directory. This change eliminates it. If it's needed in the
future, it can be revived from the history. In the meantime, this removes
clutter and the only use of the GEMS_ROOT scons variable.
There may not be a formally correct spelling for the past tense of mmap, but
mmapped is the spelling Google doesn't try to autocorrect. This makes sense
because it mirrors the past tense of map->mapped and not the past tense of
cape->caped.
--HG--
rename : src/arch/alpha/mmaped_ipr.hh => src/arch/alpha/mmapped_ipr.hh
rename : src/arch/arm/mmaped_ipr.hh => src/arch/arm/mmapped_ipr.hh
rename : src/arch/mips/mmaped_ipr.hh => src/arch/mips/mmapped_ipr.hh
rename : src/arch/power/mmaped_ipr.hh => src/arch/power/mmapped_ipr.hh
rename : src/arch/sparc/mmaped_ipr.hh => src/arch/sparc/mmapped_ipr.hh
rename : src/arch/x86/mmaped_ipr.hh => src/arch/x86/mmapped_ipr.hh
At a couple of places in PerfectSwitch.cc and MessageBuffer.cc, DPRINTF()
was not provided with the correct number of arguments. The patch fixes these
bugs.
This patch removes the store buffer from Ruby. It is not in use currently.
Since libruby is being removed and the store buffer makes calls to libruby, it
is not possible to maintain it until substantial changes are made.
This patch changes Address.hh so that it is not dependent on RubySystem.
This dependence seems unnecessary. All those functions that depend on
RubySystem have been moved to the Address.cc file.
This patch changes DataBlock.hh so that it is not dependent on RubySystem.
This dependence seems unnecessary. All those functions that depend on
RubySystem have been moved to the DataBlock.cc file.
This patch integrates permissions with cache and memory states, and then
automates the setting of permissions within the generated code. No longer
does one need to manually set the permissions within the setState function.
This patch will facilitate easier functional access support by always correctly
setting permissions for both cache and memory states.
--HG--
rename : src/mem/slicc/ast/EnumDeclAST.py => src/mem/slicc/ast/StateDeclAST.py
rename : src/mem/slicc/ast/TypeFieldEnumAST.py => src/mem/slicc/ast/TypeFieldStateAST.py
"executing" isnt a very descriptive debug message and in going through the
output you get multiple messages that say "executing" but nothing to help
you parse through the code/execution.
So instead, at least print out the name of the action that is taking
place in these functions.
Overall, this continues moving Ruby debug messages toward the normal M5
debug message style.
- add a name() to the Ruby Throttle & PerfectSwitch objects so that the debug output
isn't littered w/"global:" everywhere.
- clean up messages that print over multiple lines when possible
- clean up duplicate prints in the message buffer
In certain actions of the L1 cache controller, while creating an outgoing
message, the machine type was not being set. This resulted in a
segmentation fault when a trace was collected. Joseph Pusudesris provided
his patch for fixing this issue.
Currently the wakeup function for the PerfectSwitch contains three loops:
- a loop on the number of virtual networks
- a loop on the number of incoming links
- a loop till all messages for this (link, network) pair have been routed
With an 8 processor mesh network and the Hammer protocol, about 11-12% of the
total execution time was observed to have been spent in this function, the
highest amongst all the functions. It was found that the innermost loop is executed
about 45 times per invocation of the wakeup function, when each invocation
of the wakeup function processes just about one message.
The patch tries to do away with the redundant executions of the innermost
loop. Counters have been added for each virtual network that record the
number of messages that need to be routed for that virtual network. The
inner loops are only executed when the number of messages for that particular
virtual network > 0. This does away with almost 80% of the executions of the
innermost loop. The function now consumes about 5-6% of the total execution
time.
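As a rough C++ sketch of the scheme (illustrative only; the class and helper
names below are hypothetical, not the actual PerfectSwitch code):

    #include <vector>

    class PerfectSwitchSketch
    {
      public:
        PerfectSwitchSketch(int vnets, int inLinks)
            : m_pending(vnets, 0), m_vnets(vnets), m_inLinks(inLinks) {}

        // Bumped whenever a message is enqueued for a virtual network.
        void messageEnqueued(int vnet) { ++m_pending[vnet]; }

        void wakeup()
        {
            for (int vnet = 0; vnet < m_vnets; ++vnet) {
                // Skip the per-link and per-message loops entirely when no
                // messages are pending for this virtual network.
                if (m_pending[vnet] == 0)
                    continue;
                for (int link = 0; link < m_inLinks; ++link) {
                    // ... route every ready message for (link, vnet) here,
                    // decrementing m_pending[vnet] once per routed message.
                }
            }
        }

      private:
        std::vector<int> m_pending;  // one counter per virtual network
        int m_vnets;
        int m_inLinks;
    };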
The patch changes the order in which L1 dcache and icache are looked up when
a request comes in. Earlier, if a request came in for instruction fetch, the
dcache was looked up before the icache, to correctly handle self-modifying
code. But, in the common case, dcache is going to report a miss and the
subsequent icache lookup is going to report a hit. Given the invariant that
caches under the same controller keep track of disjoint sets of cache blocks,
we can move the icache lookup before the dcache lookup. In case of a hit in
the icache, using our invariant, we know that the dcache would have reported
a miss. In case of a miss in the icache, we know that icache would have
missed even if the dcache was looked up before looking up the icache.
Effectively, we are doing the same thing as before, though in the common case,
we expect a reduction in the number of lookups. This was empirically confirmed
for MOESI hammer. The ratio of lookups to access requests is now about 1.1 to 1.
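In outline, the fetch path now probes the caches in this order (a simplified
C++ sketch with stand-in types; the real logic lives in the generated cache
controller code):

    #include <cstddef>

    // Stand-ins for the real cache and entry types.
    struct Entry { };
    struct Cache {
        Entry *lookup(unsigned long long addr) { (void)addr; return NULL; }
    };

    // Because the icache and dcache under one controller hold disjoint sets
    // of blocks, an icache hit implies a dcache miss, so instruction fetches
    // probe the icache first and fall back to the dcache only on a miss
    // (the dcache probe still covers the self-modifying-code case).
    Entry *
    lookupForFetch(Cache &icache, Cache &dcache, unsigned long long addr)
    {
        Entry *entry = icache.lookup(addr);   // common case: hit here
        if (entry == NULL)
            entry = dcache.lookup(addr);      // rare: self-modifying code
        return entry;
    }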
The TBE pointer in the MESI CMP implementation was not being set to NULL
when a TBE was deallocated. This resulted in a segmentation fault when testing
the protocol with ProtocolTrace switched on.
The code for Orion 2.0 makes use of printf() at several places where there was
an error in the configuration of the model. These have been replaced with fatal().
By stalling and waiting on the mandatory queue instead of recycling it, one can
ensure that no incoming messages are starved when the mandatory queue puts
significant pressure on the L1 cache controller (i.e. the ruby memtester).
--HG--
rename : src/mem/slicc/ast/WakeUpDependentsStatementAST.py => src/mem/slicc/ast/WakeUpAllDependentsStatementAST.py
The packet now identifies whether static or dynamic data has been allocated and
is used by Ruby to determine whether to copy the data pointer into the ruby
request. Subsequently, Ruby can be told not to update phys memory when
receiving packets.
Separate data VCs and ctrl VCs in garnet, as ctrl VCs have 1 buffer per VC,
while data VCs have > 1 buffers per VC. This is for correct power estimations.
The purpose of this patch is to change the way CacheMemory interfaces with
coherence protocols. Currently, whenever a cache controller (defined in the
protocol under consideration) needs to carry out any operation on a cache
block, it looks up the tag hash map and figures out whether or not the block
exists in the cache. In case it does exist, the operation is carried out
(which requires another lookup). As observed through profiling of different
protocols, multiple such lookups take place for a given cache block. It was
noted that the tag lookup takes anything from 10% to 20% of the simulation
time. In order to reduce this time, this patch is being posted.
I have to acknowledge that many of the thoughts that went into this
patch belong to Brad.
Changes to CacheMemory, TBETable and AbstractCacheEntry classes:
1. The lookup function belonging to the CacheMemory class now returns a pointer
to a cache block entry, instead of a reference. The pointer is NULL in case
the block being looked up is not present in the cache (a C++ sketch follows
this list). A similar change has been carried out in the lookup function of
the TBETable class.
2. Functions for setting and getting the access permission of a cache block have
been moved from the CacheMemory class to the AbstractCacheEntry class.
3. The allocate function in the CacheMemory class now returns a pointer to the
allocated cache entry.
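A minimal C++ sketch of the new lookup contract (the map-based tag store and
type names here are stand-ins, not the real CacheMemory code):

    #include <cstddef>
    #include <map>

    typedef unsigned long long Addr;

    struct CacheEntrySketch {
        int permission;  // per-entry access permission now lives on the entry
    };

    struct CacheMemorySketch {
        std::map<Addr, CacheEntrySketch> m_tags;

        // Returns NULL when the block is absent, so callers probe the tag
        // store once instead of checking presence and then looking up again.
        CacheEntrySketch *lookup(Addr addr)
        {
            std::map<Addr, CacheEntrySketch>::iterator it = m_tags.find(addr);
            return (it == m_tags.end()) ? NULL : &(it->second);
        }

        // allocate() likewise hands back a pointer to the new entry.
        CacheEntrySketch *allocate(Addr addr)
        {
            return &m_tags[addr];
        }
    };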
Changes to SLICC:
1. Each action now has implicit variables - cache_entry and tbe. cache_entry,
if != NULL, must point to the cache entry for the address on which the action
is being carried out. Similarly, tbe should also point to the transaction
buffer entry of the address on which the action is being carried out.
2. If a cache entry or a transaction buffer entry is passed on as an
argument to a function, it is presumed that a pointer is being passed on.
3. The cache entry and the tbe pointers received __implicitly__ by the
actions, are passed __explicitly__ to the trigger function.
4. While performing an action, set/unset_cache_entry, set/unset_tbe are to
be used for setting / unsetting cache entry and tbe pointers respectively.
5. is_valid() and is_invalid() have been made available for testing whether
a given pointer 'is not NULL' or 'is NULL', respectively.
6. Local variables are now available, but they are assumed to be pointers
always.
7. It is now possible for an object of the derived class to make calls to
a function defined in the interface.
8. An OOD token has been introduced in SLICC. It is the same as the NULL token
used in C/C++. If you are wondering, OOD stands for Out Of Domain.
9. static_cast can now take an optional parameter that asks for casting the
given variable to a pointer of the given type.
10. Functions can be annotated with 'return_by_pointer=yes' to return a
pointer.
11. StateMachine has two new variables, EntryType and TBEType. EntryType is
set to the type which inherits from 'AbstractCacheEntry'. There can only be
one such type in the machine. TBEType is set to the type for which 'TBE' is
used as the name.
All the protocols have been modified to conform with the new interface.
This patch changes the manner in which data is copied from L1 to L2 cache in
the implementation of the Hammer's cache coherence protocol. Earlier, data was
copied directly from one cache entry to another. This has been broken into
two parts. First, the data is copied from the source cache entry to a
transaction buffer entry. Then, data is copied from the transaction buffer
entry to the destination cache entry.
This has been done to maintain the invariant that, at any given instant, multiple
caches under a controller are exclusive with respect to each other.
Ran all the source files through 'perl -pi' with this script:
s|\s*(};?\s*)?/\*\s*(end\s*)?namespace\s*(\S+)\s*\*/(\s*})?|} // namespace $3|;
s|\s*};?\s*//\s*(end\s*)?namespace\s*(\S+)\s*|} // namespace $2\n|;
s|\s*};?\s*//\s*(\S+)\s*namespace\s*|} // namespace $1\n|;
Also did a little manual editing on some of the arch/*/isa_traits.hh files
and src/SConscript.
Two functions in src/mem/ruby/system/PerfectCacheMemory.hh, tryCacheAccess()
and cacheProbe(), end with calls to panic(). Both of these functions have
return types other than void. Any file that includes this header file fails
to compile because of the missing return statement. This patch adds dummy
values so as to avoid the compiler warnings.
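The pattern amounts to the following (a generic C++ sketch, not the exact
PerfectCacheMemory.hh code; panicSketch() stands in for gem5's panic()):

    #include <cstdio>
    #include <cstdlib>

    // Stand-in for panic(), which never returns.
    static void panicSketch(const char *msg)
    {
        std::fprintf(stderr, "panic: %s\n", msg);
        std::abort();
    }

    // Control can never fall through panicSketch(), but the dummy return
    // value keeps the compiler from complaining about a non-void function
    // with no return statement.
    static bool tryCacheAccessSketch()
    {
        panicSketch("tryCacheAccess not implemented");
        return false;  // dummy value, never reached
    }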
This diff is for changing the way ASSERT is handled in Ruby. m5.fast
compiles out the assert statements by using the macro NDEBUG. Ruby uses the
macro RUBY_NO_ASSERT to do so. This macro has been removed and NDEBUG has
been put in its place.
These flags were being used to identify what alignment a request needed, but
the same information is available using the request size. This change also
eliminates the isMisaligned function. If more complicated alignment checks are
needed, they can be signaled using the ASI_BITS space in the flags vector like
is currently done with ARM.
If we write back an exclusive copy, we now mark it
as such, so the cache receiving the writeback can
mark its copy as exclusive. This avoids some
unnecessary upgrade requests when a cache later
tries to re-acquire exclusive access to the block.
Also move the "Fault" reference counted pointer type into a separate file,
sim/fault.hh. It would be better to name this less similarly to sim/faults.hh
to reduce confusion, but fault.hh matches the name of the type. We could change
Fault to FaultPtr to match other pointer types, and then changing the name of
the file would make more sense.
Corrects an oversight in cset f97b62be544f. The fix there only
failed queued SCUpgradeReq packets that encountered an
invalidation, which meant that the upgrade had to reach the L2
cache. To handle pending requests in the L1 we must similarly
fail StoreCondReq packets too.
Allow lower-level caches (e.g., L2 or L3) to pass exclusive
copies to higher levels (e.g., L1). This eliminates a lot
of unnecessary upgrade transactions on read-write sequences
to non-shared data.
Also some cleanup of MSHR coherence handling and multiple
bug fixes.
This patch allows messages to be stalled in their input buffers and wait
until a corresponding address changes state. In order to make this work,
all in_ports must be ranked in order of dependence, and those in_ports that
may unblock an address must wake up the stalled messages. A lot of this
complexity is handled in slicc and the specification files simply
annotate the in_ports.
--HG--
rename : src/mem/slicc/ast/CheckAllocateStatementAST.py => src/mem/slicc/ast/StallAndWaitStatementAST.py
rename : src/mem/slicc/ast/CheckAllocateStatementAST.py => src/mem/slicc/ast/WakeUpDependentsStatementAST.py
Patch allows each individual message buffer to have different recycle latencies
and allows the overall recycle latency to be specified at the cmd line. The
patch also adds profiling info to make sure no one processor's requests are
recycled too much.
The main purpose for clearing stats in the unserialize process is so
that the profiler can correctly set its start time to the unserialized
value of curTick.
This patch allows one to disable migratory sharing for those cache blocks that
are accessed by atomic requests. While the implementations are different
between the token and hammer protocols, the motivation is the same. For
Alpha, LLSC semantics expect that normal loads do not unlock cache blocks that
have been locked by LL accesses. Therefore, locked blocks should not transfer
write permissions when responding to these load requests. Instead, they only
transfer read permissions so that the subsequent SC access can possibly
succeed.
This patch fixes several bugs related to previous inconsistent assumptions on
how many tokens the Owner had. Mike Marty should have fixed these bugs years
ago. :)
Previously, the MOESI_hammer protocol calculated the same latency for L1 and
L2 hits. This was because the protocol was written using the old ruby
assumption that L1 hits used the sequencer fast path. Since ruby no longer
uses the fast-path, the protocol delays L2 hits by placing them on the
trigger queue.
The previous slower ruby latencies created a mismatch between the faster M5
cpu models and the much slower ruby memory system. Specifically, smp
interrupts were much slower and less frequent, as was the movement of cpus
in and out of spin locks. The result was that many cpus were idle for large
periods of time.
These changes fix the latency mismatch.
This patch adds back to ruby the capability to understand the response time
for messages that hit in different levels of the cache hierarchy.
Specifically add support for the MI_example, MOESI_hammer, and MOESI_CMP_token
protocols.
This patch adds DMA testing to the Memtester and inherits many changes from
Polina's old tester_dma_extension patch. Since Ruby does not work in atomic
mode, the atomic mode options are removed.
Clean up some minor things left over from the default responder
change in rev 9af6fb59752f. Mostly renaming the 'responder_set'
param to 'use_default_range' to actually reflect what it does...
old name wasn't that descriptive in the first place, but now
it really doesn't make sense at all.
Also got rid of the bogus obsolete assignment to 'bus.responder'
which used to be a parameter but now is interpreted as an
implicit child assignment, and which was giving me problems in
the config restructuring to come. (A good argument for not
allowing implicit child assignments, IMO, but that's water under
the bridge, I'm afraid.)
Also moved the Bus constructor to the .cc file since that's
where it should have been all along.
Requires new "SCUpgradeReq" message that marks upgrades
for store conditionals, so downstream caches can fail
these when they run into invalidations.
See http://www.m5sim.org/flyspray/task/197
Only set the dirty bit when we actually write to a block
(not if we thought we might but didn't, as in a failed
SC or CAS). This requires making sure the dirty bit
stays set when we get an exclusive (writable) copy
in a cache-to-cache transfer from another owner, which
in turn requires copying the mem-inhibit flag from
timing-mode requests to their associated responses.
One big difference is that PrioHeap puts the smallest element at the
top of the heap, whereas stl puts the largest element on top, so I
changed all comparisons so they did the right thing.
Some usage of PrioHeap was simply changed to a std::vector, using sort
at the right time; other usage had me just use the various heap functions
in the stl.
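For instance, the STL heap functions give the old smallest-on-top behavior
when handed a reversed comparator (a generic illustration, not the converted
Ruby code):

    #include <algorithm>
    #include <functional>
    #include <vector>

    // std::push_heap/std::pop_heap build a max-heap by default (largest on
    // top); std::greater<int>() flips the comparison so the smallest element
    // sits on top, matching what PrioHeap used to do.
    void pushMin(std::vector<int> &heap, int value)
    {
        heap.push_back(value);
        std::push_heap(heap.begin(), heap.end(), std::greater<int>());
    }

    int popMin(std::vector<int> &heap)
    {
        std::pop_heap(heap.begin(), heap.end(), std::greater<int>());
        int value = heap.back();
        heap.pop_back();
        return value;
    }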
This was somewhat tricky because the RefCnt API was somewhat odd. The
biggest confusion was that the RefCnt object's constructor that
took a TYPE& cloned the object. I created an explicit virtual clone()
function for things that took advantage of this version of the
constructor. I was conservative and used clone() when I was in doubt
of whether or not it was necessary. I still think that there are
probably too many instances of clone(), but hopefully not too many.
I converted several instances of const MsgPtr & to a simple MsgPtr.
If the function wants to avoid the overhead of creating another
reference, then it should just use a regular pointer instead of a ref
counting ptr.
There were a couple of instances where refcounted objects were created
on the stack. This seems pretty dangerous since if you ever
accidentally make a reference to that object with a ref counting
pointer, bad things are bound to happen.
Further cleanup should probably be done to make this class non-Ruby
specific and put it in src/base.
There are probably several cases where this class is used in which std::bitset
could be used instead.
In addition to obvious changes, this required a slight change to the slicc
grammar to allow types with :: in them. Otherwise slicc barfs on std::string
which we need for the headers that slicc generates.
Previously, the set size was set to 4. This was mostly due to the fact that a
crazy graduate student used to create networks with 256 l2 cache banks. Now it
is far more likely that users will create systems with fewer than 64 of any
particular controller type. Therefore Ruby should be optimized for a set size
of 1.
On the config end, if a shared L2 is created for the system, it is
parameterized to have n sharers as defined by option.num_cpus. In addition to
making the cache sharing aware so that discriminating tag policies can make use
of context_ids to make decisions, I added an occupancy AverageStat and an occ %
stat to each cache so that you could know which contexts are occupying how much
cache on average, both in terms of blocks and percentage. Note that since
devices have context_id -1, having an array of occ stats that correspond to
each context_id will break here, so in FS mode I add an extra bucket for device
blocks. This bucket is explicitly not added in SE mode in order to not only
avoid ugliness in the stats.txt file, but to avoid broken stats (some formulas
break when a bucket is 0).
This patch includes the necessary regression updates to test the new ruby
configuration system. The patch includes support for multiple ruby protocols
and adds the ruby random tester. The patch removes the atomic mode test for
ruby since ruby does not support atomic mode accesses. These tests can be
added back in when ruby supports atomic mode for real.
--HG--
rename : tests/quick/50.memtest/test.py => tests/quick/60.rubytest/test.py
Removed the dummy power function implementations so that Orion can implement
them correctly. Since Orion lacks a modular design, this patch simply enables
scons to compile it. There are no python configuration changes in this patch.
Renamed the MESI directory file to be consistent with all other protocols.
--HG--
rename : src/mem/protocol/MESI_CMP_directory-mem.sm => src/mem/protocol/MESI_CMP_directory-dir.sm
Cleaned up the ruby profilers by moving the memory controller profiling code
out of the main profiler object and into a separate object similar to the
current CacheProfiler. Both the CacheProfiler and MemCntrlProfiler are
specific to a particular Ruby object, CacheMemory and MemoryControl
respectively. Therefore, these profilers should not be SimObjects created by
the python configuration system, but should instead be private objects. This
simplifies the creation of these profilers.
Reorganized ruby python configuration so that protocol and ruby memory system
configuration code can be shared by multiple front-end configuration files
(i.e. memory tester, full system, and hopefully the regression tester). This
code works for the memory tester, but fs mode has not been tested.
Modified ruby's tracing support to no longer rely on the RubySystem map
to convert a sequencer string name to a sequencer pointer. As a
temporary solution, the code uses the sim_object find function.
Eventually, we should develop a better fix.
This patch includes a rather substantial change to the memory controller
profiler in order to work with the new configuration system. Most
notably, the mem_cntrl_profiler no longer uses a string map, but instead
a vector. Eventually this support should be removed from the main
profiler and go into a separate object. Each memory controller should have
a pointer to that new mem_cntrl profile object.
This patch includes the necessary changes to connect ruby objects using
the python configuration system. Mainly it consists of removing
unnecessary ruby object pointers and connecting the necessary object
pointers using the generated param objects. This patch includes the
slicc changes necessary to connect generated ruby objects together using
the python configuration system.
The necessary companion conversion of the Ruby objects generated by SLICC
into M5 SimObjects is done in the following patch, so this patch
alone does not compile.
Conversion of Garnet network models is also handled in a separate
patch; that code is temporarily disabled from compiling to allow
testing of interim code.
Though OutPort's message type is not used to generate code, this fix checks
that the programmer's intent is correct. Eventually, we may want to
remove the message type from the OutPort declaration statement.
1) Move alpha-specific code out of page_table.cc:serialize().
2) Begin serializing M5_pid and unserializing it, but adding a function to do an optional paramIn so that old checkpoints don't need to be fixed up.
3) Fix up alpha startup code so that the unserialized M5_pid value is properly written to DTB_IPR_ASN.
4) Fix the memory unserialize that I forgot somehow in the last changeset.
5) Add in an agg_se.py to handle aggregated checkpoints. --bench foo-bar plus positional arguments foo bar are the only changes in usage from se.py.
Note this aggregation stuff has only been tested for Alpha and nothing else, though it should take a very minimal amount of work to get it to work with another ISA.
This patch changes the way that Ruby handles atomic RMW instructions. This implementation, unlike the prior one, is protocol independent. It works by locking an address from the sequencer immediately after the read portion of an RMW completes. When that address is locked, the coherence controller will only satisfy requests coming from one port (e.g., the mandatory queue) and will ignore all others. After the write portion completes, the line is unlocked. This should also work with multi-line atomics, as long as the blocks are always acquired in the same order.
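In outline, the sequencer-side bookkeeping is something like this (a simplified
sketch with hypothetical names, not the actual Sequencer code):

    #include <set>

    typedef unsigned long long Addr;

    class RMWLockSketch
    {
      public:
        // Once the read half of an RMW completes, lock the line.
        void readPortionDone(Addr lineAddr) { m_locked.insert(lineAddr); }

        // While a line is locked, the controller services only the one
        // chosen port (e.g. the mandatory queue) and ignores all others.
        bool isLocked(Addr lineAddr) const
        {
            return m_locked.count(lineAddr) != 0;
        }

        // Once the write half of the RMW completes, unlock the line.
        void writePortionDone(Addr lineAddr) { m_locked.erase(lineAddr); }

      private:
        std::set<Addr> m_locked;
    };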
Added error messages when:
- a state does not exist in a machine's list of known states.
- an event does not exist in a machine
- the actions of a certain machine have not been declared
Connects M5 cpu and dma ports directly to ruby sequencers and dma
sequencers. Rubymem also includes a pio port so that pio requests
can be forwarded to a special pio bus connecting to device pio
ports.
Right now .cc and .hh files are handled separately, but then
they're just munged together at the end by scons, so it
doesn't buy us anything. Might as well munge from the start
since we'll eventually be adding generated Python files
to the list too.
This mostly was a matter of changing the license owner to Princeton
which is as it should have been. The code was originally licensed
under the GPL but was relicensed as BSD by Li-Shiuan Peh on July 27,
2009. This relicensing was in an explicit e-mail to Nathan Binkert,
Brad Beckmann, Mark Hill, David Wood, and Steve Reinhardt.
This prevents redundant prefetches from being issued, solving the
occasional 'needsExclusive && !blk->isWritable()' assertion failure
in cache_impl.hh that several people have run into.
Eliminates "prefetch_cache_check_push" flag, neither setting of
which really solved the problem.
This is simply a translation of the C++ slicc into python with very minimal
reorganization of the code. The output can be verified as nearly identical
by doing a "diff -wBur".
Slicc can easily be run manually by using util/slicc
Get rid of misc.py and just stick misc things in __init__.py
Move utility functions out of SCons files and into m5.util
Move utility type stuff from m5/__init__.py to m5/util/__init__.py
Remove buildEnv from m5 and allow access only from m5.defines
Rename AddToPath to addToPath while we're moving it to m5.util
Rename read_command to readCommand while we're moving it
Rename compare_versions to compareVersions while we're moving it.
--HG--
rename : src/python/m5/convert.py => src/python/m5/util/convert.py
rename : src/python/m5/smartdict.py => src/python/m5/util/smartdict.py
This changeset contains a lot of different changes that are too
mingled to separate. They are:
1. Added MOESI_CMP_directory
I made the changes necessary to bring back MOESI_CMP_directory,
including adding a DMA controller. I got rid of MOESI_CMP_directory_m
and made MOESI_CMP_directory use a memory controller. Added a new
configuration for two level protocols in general, and
MOESI_CMP_directory in particular.
2. DMA Sequencer uses a generic SequencerMsg
I will eventually make the cache Sequencer use this type as well. It
doesn't contain an offset field, just a physical address and a length.
MI_example has been updated to deal with this.
3. Parameterized Controllers
SLICC controllers can now take custom parameters to use for mapping,
latencies, etc. Currently, only int parameters are supported.
The inconsistency was causing a subtle bug with some of the
constructors where the params had the same name as the fields.
This is also a first step to switching the accessors over to
our new "standard", e.g., getVaddr() -> vaddr().
Caches are now responsible for their own statistic gathering. This
requires a direct callback from the protocol on misses, and so all
future protocols need to take this into account.
The DMASequencer was still using a parameter from the old RubyConfig,
causing an offset error when the requested data wasn't block aligned.
This changeset also includes a fix to MI_example for a similar bug.
2. Reintroduced RMW_Read and RMW_Write
3. Defined -2 in the Sequencer as well as made a note about mandatory queue
Did not address the issues in the slicc because the atomics are being remade
altogether to allow multiple processors to issue atomic requests at once
This also includes a change to the default Ruby random seed, which was
previously set using the wall clock. It is now set to 1234 so that
the stat files don't change for the regression tester.
This was done with an automated process, so there could be things that were
done in this tree in the past that didn't make it. One known regression
is that atomic memory operations do not seem to work properly anymore.
This changeset also includes a lot of work from Derek Hower <drh5@cs.wisc.edu>
RubyMemory is now both a driver for Ruby and a port for M5. Changed
makeRequest/hitCallback interface. Brought packets (superficially)
into the sequencer. Modified tester infrastructure to be packet based.
M5 and Ruby can be used together through the example ruby_se.py
script. SPARC parallel applications work, and the timing *seems* right
from combined M5/Ruby debug traces. To run,
% build/ALPHA_SE/m5.debug configs/example/ruby_se.py -c
tests/test-progs/hello/bin/alpha/linux/hello -n 4 -t
1. removed checks from tester files
2. removed the else clause in Sequencer and DirectoryMemory; the else clause is
needed by the tester, and it is up to Derek to revive it elsewhere when he
gets to it
Also:
1. Changed m_entries in DirectoryMemory to a map
2. And replaced SIMICS_read_physical_memory with a call to the now-dummy
readPhysMem function that Derek is to fill in
Add the PROTOCOL sticky option, which sets the coherence protocol that slicc
will parse and therefore ruby will use. This whole process was made
difficult by the fact that the set of files that are output by slicc
are not easily known ahead of time. The easiest thing wound up being
to write a parser for slicc that would tell me. Incidentally this
means we now have a slicc grammar written in python.
This basically means changing all #include statements and changing
autogenerated code so that it generates the correct paths. Because
slicc generates #includes, I had to hard code the include paths to
mem/protocol.
1) Removing files from the ruby build left some unresolved
symbols. Those have been fixed.
2) Most of the dependencies on Simics data types and the simics
interface files have been removed.
3) Almost all mention of opal is gone.
4) Huge chunks of LogTM are now gone.
5) Handling 1-4 left ~hundreds of unresolved references, which were
fixed, yielding a snowball effect (and the massive size of this
delta).
I did the macro cleanup because I was worried that the SCons scanner
would get confused. This code will hopefully go away soon anyway.
--HG--
rename : src/mem/ruby/config/config.include => src/mem/ruby/config/config.hh
Previously there was one per bus, which caused some coherence problems
when more than one decided to respond. Now there is just one on
the main memory bus. The default bus responder on all other buses
is now the downstream cache's cpu_side port. Caches no longer need
to do address range filtering; instead, we just have a simple flag
to prevent snoops from propagating to the I/O bus.
This frees up needed space for more public flags. Also:
- remove unused Request accessor methods
- make Packet use public Request accessors, so it need not be a friend
Apparently we broke it with the cache rewrite and never noticed.
Thanks to Bao Yungang <baoyungang@gmail.com> for a significant part
of these changes (and for inspiring me to work on the rest).
Some other overdue cleanup on the prefetch code too.
Bogus calls to ChunkGenerator with negative size were triggering
a new assertion that was added there.
Also did a little renaming and cleanup in the process.
I think readData() and writeData() were used for Erik's compression
work, but that code is gone, these aren't called anymore, and they
don't even really do what their names imply.
I did some of the flags and assertions wrong. Thanks to Brad Beckmann
for pointing this out. I should have run the opt regressions instead
of the fast. I also screwed up some of the logical functions in the Flags
class.
the primary identifier for a hardware context should be contextId(). The
concept of threads within a CPU remains, in the form of threadId() because
sometimes you need to know which context within a cpu to manipulate.
Since the early days of M5, an event needed to know which event queue
it was on, and that data was required at the time of construction of
the event object. In the future parallelized M5, this sort of
requirement does not work well since the proper event queue will not
always be known at the time of construction of an event. Now, events
are created, and the EventQueue itself has the schedule function,
e.g. eventq->schedule(event, when). To simplify the syntax, I created
a class called EventManager which holds a pointer to an EventQueue and
provides the schedule interface that is a proxy for the EventQueue.
The intent is that objects that frequently schedule events can be
derived from EventManager and then they have the schedule interface.
SimObject and Port are examples of objects that will become
EventManagers. The end result is that any SimObject can just call
schedule(event, when) and it will just call that SimObject's
eventq->schedule function. Of course, some objects may have more than
one EventQueue, so this interface might not be perfect for those, but
they should be relatively few.
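In rough outline (simplified declarations, not the real eventq code):

    typedef unsigned long long Tick;

    class Event;  // opaque here

    class EventQueue
    {
      public:
        // The queue, rather than the event, now owns scheduling.
        void schedule(Event *event, Tick when)
        {
            // insert 'event' into this queue at time 'when' (elided)
            (void)event; (void)when;
        }
    };

    // EventManager just wraps an EventQueue pointer and proxies schedule(),
    // so anything derived from it (e.g. SimObject, Port) can simply call
    // schedule(event, when).
    class EventManager
    {
      public:
        explicit EventManager(EventQueue *q) : eventq(q) {}

        void schedule(Event *event, Tick when) { eventq->schedule(event, when); }

      protected:
        EventQueue *eventq;
    };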
This appears to work, but I don't want to commit it until it gets tested a lot more.
I haven't deleted the functionality in this patch that will come later, but one question
is how to encourage objects that call getVirtPort() to not cache the virtual port,
since if the CPU changes out from under them it will be worse than useless. Perhaps a null
function like delVirtPort() is still useful in that case.
It turns out that if a MemObject turns around and does a send in its
receive callback, and there are other sends already scheduled, then
it could observe a state where it's not at the head of the list but
the bus's sendEvent is not scheduled (because we're still in the
middle of processing the prior sendEvent).
I was asserting that the only reason you would defer targets is if
a write came in while you had an outstanding read miss, but there's
another case where you could get a read access after you've snooped
an invalidation and buffered it because it applies to a prior
outstanding miss.
Make OutputDirectory::resolve() private and change the functions using
resolve() to instead use create().
--HG--
extra : convert_revision : 36d4be629764d0c4c708cec8aa712cd15f966453
if a prior write miss arrived while an even earlier
read miss was still outstanding.
--HG--
extra : convert_revision : 4924e145829b2ecf4610b88d33f4773510c6801a
where we defer a response to a read from a far-away cache A, then later
defer a ReadExcl from a cache B on the same bus as us. We'll assert
MemInhibit in both cases, but in the latter case MemInhibit will keep
the invalidation from reaching cache A. This special response tells
cache A that it gets the block to satisfy its read, but must immediately
invalidate it.
--HG--
extra : convert_revision : f85c8b47bb30232da37ac861b50a6539dc81161b
Don't mark upstream MSHR as pending if downstream MSHR is already in service.
--HG--
extra : convert_revision : e1c135ff00217291db58ce8a06ccde34c403d37f
Not so much noise on failed sends, and more complete
info when grepping a trace using an address.
--HG--
extra : convert_revision : 05a8261c9452072ca08b906200c6322b33e2b9f1
SimObjects not yet updated:
- Process and subclasses
- BaseCPU and subclasses
The SimObject(const std::string &name) constructor was removed. Subclasses
that still rely on that behavior must call the parent initializer as
: SimObject(makeParams(name))
--HG--
extra : convert_revision : d6faddde76e7c3361ebdbd0a7b372a40941c12ed
The page table now stores actual page table entries. It is still a templated
class here, but this will be corrected in the near future.
--HG--
extra : convert_revision : 804dcc6320414c2b3ab76a74a15295bd24e1d13d
Make sure not to keep processing functional accesses
after they've been responded to.
Also use checkFunctional() return value instead of checking
packet command field where possible, mostly just for consistency.
--HG--
extra : convert_revision : 29fc76bc18731bd93a4ed05a281297827028ef75
creation and initialization now happens in python. Parameter objects
are generated and initialized by python. The .ini file is now solely for
debugging purposes and is not used in construction of the objects in any
way.
--HG--
extra : convert_revision : 7e722873e417cb3d696f2e34c35ff488b7bff4ed
Turns out DeferredSnoop isn't quite the right bit of info
we needed... see new comment in cache_impl.hh.
--HG--
extra : convert_revision : a38de8c1677a37acafb743b7074ef88b21d3b7be
If the invalidation beats the upgrade at a lower level
then the upgrade must be converted to a read exclusive
"in the field".
Restructure target list & deferred target list to
factor out some common code.
--HG--
extra : convert_revision : 7bab4482dd6c48efdb619610f0d3778c60ff777a
- Add "deferred snoop" flag to Packet so upper-level caches
can distinguish whether lower-level cache request was
in-service or not at the time of the original snoop.
- Revamp response handling to properly handle deferred snoops
on non-cache-fill requests (i.e. upgrades).
- Make sure forwarded writebacks are kept in write buffer at
lower-level caches so they get snooped properly.
--HG--
extra : convert_revision : 17f8a3772a1ae31a16991a53f8225ddf54d31fc9
Move check for loops outside, since half the call sites
end up working around it anyway. Return integer port ID
instead of port object pointer.
--HG--
extra : convert_revision : 4c31fe9930f4d1aa4919e764efb7c50d43792ea3
Note that we should *not* print pointer values in DPRINTFs as
these needlessly clutter tracediff output.
--HG--
extra : convert_revision : 25a448f1b3ac8d453a717a104ad6dc0112fb30bb
src/cpu/simple/timing.cc:
Fix another SC problem.
src/mem/cache/cache_impl.hh:
Forgot to call makeTimingResponse() on uncached timing responses.
--HG--
extra : convert_revision : 5a5a58ca2053e4e8de2133205bfd37de15eb4209
Stats pretty much line up with old code, except:
- bug in old code included L1 latency in L2 miss time, making it too high
- UniCoherence did cache-to-cache transfers even from non-owner caches,
so occasionally the icache would get a block from the dcache not the L2
- L2 can now receive ReadExReq from L1 since L1s have coherence
--HG--
extra : convert_revision : 5052c1a1767b5a662f30a88f16012165a73b791c
Change target overflow from assertion to warning.
src/mem/cache/cache_impl.hh:
Change target overflow from assertion to warning.
--HG--
extra : convert_revision : ceca990ed916bbf96dedd4836c40df522803f173
src/mem/cache/cache_impl.hh:
Handle grants with no packet.
src/mem/cache/miss/mshr.cc:
Fix MSHR snoop hit handling.
--HG--
extra : convert_revision : f365283afddaa07cb9e050b2981ad6a898c14451
sure we don't re-request bus prematurely. Use callback to
avoid calling sendRetry() recursively within recvTiming.
--HG--
extra : convert_revision : a907a2781b4b00aa8eb1ea7147afc81d6b424140
src/cpu/memtest/memtest.cc:
Need to set packet source field so that response from cache
doesn't run into assertion failure when copying source to dest.
src/mem/packet.hh:
Copy source field when copying packets.
Assert that source is valid before copying it to dest
when turning packets around.
--HG--
extra : convert_revision : 09e3cfda424aa89fe170e21e955b295746832bf8
supposed to and make sure parameters have the right type.
Also make sure that any object that should be an intermediate
type has the right options set.
--HG--
extra : convert_revision : d56910628d9a067699827adbc0a26ab629d11e93
into vm1.(none):/home/stever/bk/newmem-cache2
configs/example/memtest.py:
Hand merge redundant changes.
--HG--
extra : convert_revision : a2e36be254bf052024f37bcb23b5209f367d37e1
timing mode still broken.
configs/example/memtest.py:
Revamp options.
src/cpu/memtest/memtest.cc:
No need for memory initialization.
No need to make atomic response... memory system should do that now.
src/cpu/memtest/memtest.hh:
MemTest really doesn't want to snoop.
src/mem/bridge.cc:
checkFunctional() cleanup.
src/mem/bus.cc:
src/mem/bus.hh:
src/mem/cache/base_cache.cc:
src/mem/cache/base_cache.hh:
src/mem/cache/cache.cc:
src/mem/cache/cache.hh:
src/mem/cache/cache_blk.hh:
src/mem/cache/cache_builder.cc:
src/mem/cache/cache_impl.hh:
src/mem/cache/coherence/coherence_protocol.cc:
src/mem/cache/coherence/coherence_protocol.hh:
src/mem/cache/coherence/simple_coherence.hh:
src/mem/cache/miss/SConscript:
src/mem/cache/miss/mshr.cc:
src/mem/cache/miss/mshr.hh:
src/mem/cache/miss/mshr_queue.cc:
src/mem/cache/miss/mshr_queue.hh:
src/mem/cache/prefetch/base_prefetcher.cc:
src/mem/cache/tags/fa_lru.cc:
src/mem/cache/tags/fa_lru.hh:
src/mem/cache/tags/iic.cc:
src/mem/cache/tags/iic.hh:
src/mem/cache/tags/lru.cc:
src/mem/cache/tags/lru.hh:
src/mem/cache/tags/split.cc:
src/mem/cache/tags/split.hh:
src/mem/cache/tags/split_lifo.cc:
src/mem/cache/tags/split_lifo.hh:
src/mem/cache/tags/split_lru.cc:
src/mem/cache/tags/split_lru.hh:
src/mem/packet.cc:
src/mem/packet.hh:
src/mem/physical.cc:
src/mem/physical.hh:
src/mem/tport.cc:
More major reorg. Seems to work for atomic mode now,
timing mode still broken.
--HG--
extra : convert_revision : 7e70dfc4a752393b911880ff028271433855ae87
using a divide in order to not loop forever after resuming from a checkpoint
--HG--
extra : convert_revision : 4bbc70b1be4e5c4ed99d4f88418ab620d5ce475a
Makes page table cache scheme actually work
src/mem/page_table.cc:
src/mem/page_table.hh:
fix caching scheme to actually work and improve performance
--HG--
extra : convert_revision : 443a8d8acbee540b26affcfdfbf107b8e735d1bd
Oops... forgot to update call site after changing
function argument semantics.
src/mem/tport.cc:
Oops... forgot to update call site after changing
function argument semantics.
--HG--
extra : convert_revision : 9234b991dc678f062d268ace73c71b3d13dd17dc
- factor out checkFunctional() code so it can be
called from derived classes
- use EventWrapper for sendEvent, move event handling
code from event to port where it belongs
- make sendEvent a pointer so derived classes can
override it
- replace std::pair with new class for readability
--HG--
extra : convert_revision : 5709de2daacfb751a440144ecaab5f9fc02e6b7a
src/mem/cache/cache_impl.hh:
src/mem/cache/coherence/simple_coherence.hh:
Get rid of old invalidate propagation logic in preparation
for new multilevel snoop protocol.
src/mem/cache/coherence/coherence_protocol.cc:
L2 cache now has protocol, so protocol must handle ReadExReq
coming in from the CPU side.
src/mem/cache/miss/mshr_queue.cc:
Assertion is failing, so let's take it out for now.
src/mem/packet.cc:
src/mem/packet.hh:
Add WritebackAck command.
Reorganize enum to put responses next to corresponding requests.
Get rid of unused WriteReqNoAck.
--HG--
extra : convert_revision : 24c519846d161978123f9aa029ae358a41546c73
Compiles but doesn't work... committing just so I can merge
(stupid bk!).
src/mem/bridge.cc:
Get rid of SNOOP_COMMIT.
src/mem/bus.cc:
src/mem/packet.hh:
Get rid of SNOOP_COMMIT & two-pass snoop.
First bits of EXPRESS_SNOOP support.
src/mem/cache/base_cache.cc:
src/mem/cache/base_cache.hh:
src/mem/cache/cache.hh:
src/mem/cache/cache_impl.hh:
src/mem/cache/miss/blocking_buffer.cc:
src/mem/cache/miss/miss_queue.cc:
src/mem/cache/prefetch/base_prefetcher.cc:
Big reorg of ports and port-related functions & events.
src/mem/cache/cache.cc:
src/mem/cache/cache_builder.cc:
src/mem/cache/coherence/SConscript:
Get rid of UniCoherence object.
--HG--
extra : convert_revision : 7672434fa3115c9b1c94686f497e57e90413b7c3
port. It would be better to move this to python IMO but for
now I'll stick in a compatibility hack.
--HG--
extra : convert_revision : a81a29cbd43becd0e485559eb7b2a31f7a0b082d
configs/example/memtest.py:
PhysicalMemory has vector of uniform ports instead of one special one.
Other updates to fix obsolete brokenness.
src/mem/physical.cc:
src/mem/physical.hh:
src/python/m5/objects/PhysicalMemory.py:
Have vector of uniform ports instead of one special one.
src/python/swig/pyobject.cc:
Add comment.
--HG--
extra : convert_revision : a4a764dcdcd9720bcd07c979d0ece311fc8cb4f1
cache blocks that get dmaed ARE NOT marked invalid in the caches so it's a performance issue here
src/mem/bridge.cc:
src/mem/bridge.hh:
hopefully the final hacky change to make the bus bridge work ok
--HG--
extra : convert_revision : 62cbc65c74d1a84199f0a376546ec19994c5899c
src/dev/io_device.cc:
extra printing and assertions
src/mem/bridge.hh:
deal with packets only satisfying part of a request by making many requests
src/mem/cache/cache_impl.hh:
make the cache try to satisfy a functional request from the cache above it before checking itself
--HG--
extra : convert_revision : 1df52ab61d7967e14cc377c560495430a6af266a
set the latency parameter in terms of a latency
add caches to tsunami-simple configs
configs/common/Caches.py:
tests/configs/memtest.py:
tests/configs/o3-timing-mp.py:
tests/configs/o3-timing.py:
tests/configs/simple-atomic-mp.py:
tests/configs/simple-timing-mp.py:
tests/configs/simple-timing.py:
set the latency parameter in terms of a latency
configs/common/FSConfig.py:
give the bridge a default latency too
src/mem/cache/cache_builder.cc:
src/python/m5/objects/BaseCache.py:
remove hit_latency and make latency do the right thing
tests/configs/tsunami-simple-atomic-dual.py:
tests/configs/tsunami-simple-atomic.py:
tests/configs/tsunami-simple-timing-dual.py:
tests/configs/tsunami-simple-timing.py:
add caches to tsunami-simple configs
--HG--
extra : convert_revision : 37bef7c652e97c8cdb91f471fba62978f89019f1
add separate response buffers and request queue sizes in the bus bridge
add delay to respond to a nack in the bus bridge
src/dev/i8254xGBe.cc:
src/dev/ide_ctrl.cc:
src/dev/ns_gige.cc:
src/dev/pcidev.hh:
src/dev/sinic.cc:
add backoff delay parameters
src/dev/io_device.cc:
src/dev/io_device.hh:
add a backoff algorithm when nacks are received.
src/mem/bridge.cc:
src/mem/bridge.hh:
add separate response buffers and request queue sizes
add a new parameter to specify how long before a nack is ready to go after a packet that needs to be nacked is received
src/mem/cache/cache_impl.hh:
assert on the
src/mem/tport.cc:
add a friendly assert to make sure the packet was inserted into the list
--HG--
extra : convert_revision : 3595ad932015a4ce2bb72772da7850ad91bd09b1
fix the timing cpu to handle receiving a nacked packet
src/cpu/simple/timing.cc:
make the timing cpu handle receiving a nacked packet
src/mem/bridge.cc:
src/mem/bridge.hh:
the bridge never returns false when recvTiming() is called on its ports now; it always returns true and nacks the packet if there isn't sufficient buffer space
--HG--
extra : convert_revision : 5e12d0cf6ce985a5f72bcb7ce26c83a76c34c50a
figure out the block size from devices attached to the bus otherwise use a default block size when no devices that care are attached
configs/common/FSConfig.py:
src/mem/bridge.cc:
src/mem/bridge.hh:
src/python/m5/objects/Bridge.py:
fix partial writes with a functional memory hack
src/mem/bus.cc:
src/mem/bus.hh:
src/python/m5/objects/Bus.py:
figure out the block size from devices attached to the bus otherwise use a default block size when no devices that care are attached
src/mem/packet.cc:
fix WriteInvalidateResp to not be a request that needs a response since it isn't
src/mem/port.hh:
by default return 0 for deviceBlockSize instead of panicing. This makes finding the block size the bus should use easier
--HG--
extra : convert_revision : 3fcfe95f9f392ef76f324ee8bd1d7f6de95c1a64
In this way a MemoryObject can keep a functional port around and give it to anyone who wants to do functional accesses rather
than creating a new one each time.
src/mem/bus.cc:
src/mem/bus.hh:
src/mem/cache/cache_impl.hh:
only keep around one func port we give to anyone who wants it. Otherwise we can run out of port ids reasonably quickly if
a lot of functional accesses are happening (e.g. remote debugging, dprintk, etc)
--HG--
extra : convert_revision : 6a9e3e96f51cedaab6de1b36cf317203899a3716
into zamp.eecs.umich.edu:/z/ktlim2/clean/tmp/clean2
src/cpu/base_dyn_inst.hh:
Hand merge. Line is no longer needed because it's handled in the ISA.
--HG--
extra : convert_revision : 0be4067aa38759a5631c6940f0167d48fde2b680
1. Update packet's flags properly when a snoop happens
2. Don't allow accesses to read a block's data if the block has outstanding MSHRs. This avoids a RAW hazard in MP systems that the memory system was not detecting properly earlier (a write required a block to upgrade, and while the upgrade was outstanding, a read came along and read old data).
3. Update MSHR's request upon a response being handled. If the MSHR has more targets than it can respond to in one cycle, then its request must be properly updated to the new head of the targets list.
src/mem/bus.cc:
Update packet's flags properly upon snoop.
src/mem/cache/cache_impl.hh:
Be sure to not allow accesses to a block with outstanding MSHRs.
src/mem/cache/miss/miss_queue.cc:
Update MSHR's request upon a response being handled.
--HG--
extra : convert_revision : 76a9abc610ca3f1904f075ad21637148a41982d6
src/cpu/memtest/memtest.cc:
Add the [] to a delete to make it work correctly
src/mem/cache/cache_impl.hh:
Fix one of the memory leaks
--HG--
extra : convert_revision : 64c7465c68a084efe38a62419205518b24d852a7
automatic. The point is that now a subdirectory can be added
to the build process just by creating a SConscript file in it.
The process has two passes. On the first pass, all subdirs
of the root of the tree are searched for SConsopts files.
These files contain any command line options that ought to be
added for a particular subdirectory. On the second pass,
all subdirs of the src directory are searched for SConscript
files. These files describe how to build any given subdirectory.
I have added a Source() function. Any file (relative to the
directory in which the SConscript resides) passed to that
function is added to the list of files to be built. Clean up
everything to take advantage of Source().
--HG--
extra : convert_revision : 103f6b490d2eb224436688c89cdc015211c4fd30
1. Make sure connectMemPorts() only gets called when the CPU's peer gets changed. This is done by making setPeer() virtual, and overriding it in the CPU's ports. When it gets called on a CPU's port (dcache specifically), it calls the normal setPeer() function, and also connectMemPorts().
2. Consolidate redundant code that handles switching in a CPU.
src/cpu/base.cc:
Move common code of switching over peers to base CPU.
src/cpu/base.hh:
Move common code of switching over peers to BaseCPU.
src/cpu/o3/cpu.cc:
Add in function that updates thread context's ports.
Also use updated function to takeOverFrom() in BaseCPU. This gets rid of some repeated code.
src/cpu/o3/cpu.hh:
Include function to update thread context's memory ports.
src/cpu/o3/lsq.hh:
Add function to dcache port that will update the memory ports upon getting a new peer.
Also include a function that will tell the CPU to update those memory ports.
src/cpu/o3/lsq_impl.hh:
Add function that will update the memory ports upon getting a new peer.
src/cpu/simple/atomic.cc:
src/cpu/simple/timing.cc:
Add function that will update thread context's memory ports upon getting a new peer.
Also use the new BaseCPU's take over from function.
src/cpu/simple/atomic.hh:
Add in function (and dcache port) that will allow the dcache to update memory ports when it gets assigned a new peer.
src/cpu/simple/timing.hh:
Add function that will update thread context's memory ports upon getting a new peer.
src/mem/port.hh:
Make setPeer virtual so that other classes can override it.
--HG--
extra : convert_revision : 2050f1241dd2e83875d281cfc5ad5c6c8705fdaf
don't create a new physPort/virtPort every time activateContext() is called
add the ability to tell a memory object to delete its reference to a port and a method to have a port call deletePortRefs()
on the port owner as well as delete its peer
still need to stop calling connectMemPorts() every time activateContext() is called or we'll overflow the bus id and panic
src/cpu/thread_state.cc:
if we have a (phys|virt)Port, don't create a new one; have it delete its peer and then reuse it
src/mem/bus.cc:
src/mem/bus.hh:
add the ability to delete a port by using a hash_map instead of an array to store port ids
add a function to do deleting
src/mem/cache/cache.hh:
src/mem/cache/cache_impl.hh:
src/mem/mem_object.cc:
src/mem/mem_object.hh:
add a function to delete port references from a memory object
src/mem/port.cc:
src/mem/port.hh:
add a removeConn function that tells the owner to delete any references to the port and then deletes its peer
--HG--
extra : convert_revision : 272f0c8f80e1cf1ab1750d8be5a6c9aa110b06a4
directly configured by python. Move stuff from root.(cc|hh) to
core.(cc|hh) since it really belongs there now.
In the process, simplify how ticks are used in the python code.
--HG--
extra : convert_revision : cf82ee1ea20f9343924f30bacc2a38d4edee8df3
src/arch/alpha/vtophys.cc:
src/arch/alpha/vtophys.hh:
src/arch/sparc/arguments.hh:
move Copy* to vport since it's generic for all the ISAs
src/arch/sparc/isa_traits.hh:
the Solaris kernel sets up a virtual-> real mapping for all memory starting at SegKPMBase
src/arch/sparc/pagetable.hh:
add a class for getting bits out of the TteTag
src/arch/sparc/remote_gdb.cc:
add 32bit support kinda.... If its 32 bit
src/arch/sparc/remote_gdb.hh:
Add 32bit register offsets too.
src/arch/sparc/tlb.cc:
cleanup generation of tsb pointers
src/arch/sparc/tlb.hh:
add function to return tsb pointers for an address
make lookup public so vtophys can use it
src/arch/sparc/vtophys.cc:
src/arch/sparc/vtophys.hh:
write vtophys for sparc
src/base/bitfield.hh:
return a mask of bits first->last
src/mem/vport.cc:
src/mem/vport.hh:
move Copy* here since it's ISA generic
--HG--
extra : convert_revision : c42c331e396c0d51a2789029d8e232fe66995d0f
Add support for a twin 64 bit int load
Add Memory barrier and write barrier flags as appropriate
Make atomic memory ops atomic
src/arch/alpha/isa/mem.isa:
src/arch/alpha/locked_mem.hh:
src/cpu/base_dyn_inst.hh:
src/mem/cache/cache_blk.hh:
src/mem/cache/cache_impl.hh:
rename store conditional stuff as extra data so it can be used for conditional swaps as well
src/arch/alpha/types.hh:
src/arch/mips/types.hh:
src/arch/sparc/types.hh:
add a largest read data type for statically allocating read buffers in atomic simple cpu
src/arch/isa_parser.py:
Add support for a twin 64 bit int load
src/arch/sparc/isa/decoder.isa:
Make atomic memory ops atomic
Add Memory barrier and write barrier flags as appropriate
src/arch/sparc/isa/formats/mem/basicmem.isa:
add post access code block and define a twinload format for twin loads
src/arch/sparc/isa/formats/mem/blockmem.isa:
remove old microcoded twin load code
src/arch/sparc/isa/formats/mem/mem.isa:
swap.isa replaces the code in loadstore.isa
src/arch/sparc/isa/formats/mem/util.isa:
add a post access code block
src/arch/sparc/isa/includes.isa:
need bigint.hh for Twin64_t
src/arch/sparc/isa/operands.isa:
add a twin 64 int type
src/cpu/simple/atomic.cc:
src/cpu/simple/atomic.hh:
src/cpu/simple/base.hh:
src/cpu/simple/timing.cc:
add support for twinloads
add support for swap and conditional swap instructions
rename store conditional stuff as extra data so it can be used for conditional swaps as well
src/mem/packet.cc:
src/mem/packet.hh:
Add support for atomic swap memory commands
src/mem/packet_access.hh:
Add endian conversion function for Twin64_t type
src/mem/physical.cc:
src/mem/physical.hh:
src/mem/request.hh:
Add support for atomic swap memory commands
Rename sc code to extradata
--HG--
extra : convert_revision : 69d908512fb34a4e28b29a6e58b807fb1a6b1656
Created MemCmd class to wrap enum and provide handy methods to
check attributes, convert to string/int, etc.
--HG--
extra : convert_revision : 57f147ad893443e3a2040c6d5b4cdb1a8033930b
SConstruct:
src/SConscript:
Add flags for Intel CC while i'm at it
src/base/compiler.hh:
the _Pragma stuff needs to be called this way unless someone happens to have a cleaner way
src/base/cprintf_formats.hh:
add std:: where appropriate
src/base/statistics.hh:
use this->map since icc was getting confused about std::map vs the locally defined map
src/cpu/static_inst.hh:
Add some more dummy returns where needed
src/mem/packet.hh:
add more dummy returns where needed
src/sim/host.hh:
use limits to come up with max tick
--HG--
extra : convert_revision : 08e9f7898b29fb9d063136529afb9b6abceab60c
pretty close to compiling w/ suns compiler
briefly:
add dummy return after panic()/fatal()
split out flags by compiler vendor
include cstring and cmath where appropriate
use std namespace for string ops
SConstruct:
Add code to detect compiler and choose cflags based on detected compiler
Fix zlib check to work with suncc
src/SConscript:
split out flags by compiler vendor
src/arch/sparc/isa/decoder.isa:
use correct namespace for sqrt
src/arch/sparc/isa/formats/basic.isa:
add dummy return around panic
src/arch/sparc/isa/formats/integerop.isa:
use correct namespace for stringops
src/arch/sparc/isa/includes.isa:
include cstring and cmath where appropriate
src/arch/sparc/isa_traits.hh:
remove dangling comma
src/arch/sparc/system.cc:
dummy return to make sun cc front end happy
src/arch/sparc/tlb.cc:
src/base/compression/lzss_compression.cc:
use std namespace for string ops
src/arch/sparc/utility.hh:
no reason to declare something as "unsigned unsigned int"
src/base/compression/null_compression.hh:
dummy returns for the suncc front end
src/base/cprintf.hh:
use standard variadic argument syntax instead of gnuc-specific renaming
src/base/hashmap.hh:
don't need to define hash for suncc
src/base/hostinfo.cc:
need stdio.h for sprintf
src/base/loader/object_file.cc:
munmap is in the std namespace, not the global one
src/base/misc.hh:
use M5 generic noreturn macros
use standard variadic macro __VA_ARGS__ (see the sketch at the end of this entry)
src/base/pollevent.cc:
we need file.h for file flags
src/base/random.cc:
mess with include files to make suncc happy
src/base/remote_gdb.cc:
malloc memory for the buffer instead of using a non-constant array size
src/base/statistics.hh:
use std namespace for floor
src/base/stats/text.cc:
include math.h for rint (cmath won't work)
src/base/time.cc:
use suncc version of ctime_r
src/base/time.hh:
change macro to work with both gcc and suncc
src/base/timebuf.hh:
include cstring for memset and use std::
src/base/trace.hh:
change variadic macros to be normal format
src/cpu/SConscript:
add dummy returns where appropriate
src/cpu/activity.cc:
include cstring for memset
src/cpu/exetrace.hh:
include cstring for memcpy
src/cpu/simple/base.hh:
add dummy return for panic
src/dev/baddev.cc:
src/dev/pciconfigall.cc:
src/dev/platform.cc:
src/dev/sparc/t1000.cc:
add dummy return where appropriate
src/dev/ide_atareg.h:
make define work for both gnuc and suncc
src/dev/io_device.hh:
add dummy returns where appropriate
src/dev/pcidev.hh:
src/mem/cache/cache_impl.hh:
src/mem/cache/miss/blocking_buffer.cc:
src/mem/cache/tags/lru.hh:
src/mem/cache/tags/split.hh:
src/mem/cache/tags/split_lifo.hh:
src/mem/cache/tags/split_lru.hh:
src/mem/dram.cc:
src/mem/packet.cc:
src/mem/port.cc:
include cstring for string ops
src/dev/sparc/mm_disk.cc:
add dummy return where appropriate
include cstring for string ops
src/mem/cache/miss/blocking_buffer.hh:
src/mem/port.hh:
Add dummy return where appropriate
src/mem/cache/tags/iic.cc:
cast hashSets to double for log() call
src/mem/physical.cc:
cast pmemAddr to char* for munmap
src/sim/byteswap.hh:
make define work for suncc and gnuc
--HG--
extra : convert_revision : ef8a1f1064e43b6c39838a85c01aee4f795497bd
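The variadic-macro change mentioned for misc.hh and trace.hh can be sketched like
this: the GNU-only named form "args..." is replaced by standard __VA_ARGS__, which
both gcc and suncc accept. The macro name here is made up.

    #include <cstdio>

    // GNU-only spelling (non-portable):
    //     #define warn_gnu(fmt, args...)  std::printf(fmt, ##args)
    //
    // Standard spelling:
    #define warn_std(...)  std::printf(__VA_ARGS__)

    int main()
    {
        warn_std("%s returned %d\n", "example", 42);
        return 0;
    }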
don't regenerate address from block in cache so that tags can
turn around and use address to look up block again.
--HG--
extra : convert_revision : 171018aa6e331d98399c4e5ef24e173c95eaca28
and push those into derived Cache template class to
eliminate a few layers of virtual functions and
conditionals ("if (isCpuSide) { ... }" etc.).
--HG--
extra : convert_revision : cb1b88246c95b36aa0cf26d534127d3714ddb774
getting touched.
configs/common/FSConfig.py:
Physical memory on the T1 starts at 1MB; the first megabyte is unmapped to catch bugs
src/arch/isa_parser.py:
we should use readMiscRegWithEffect, not readMiscReg
src/arch/sparc/asi.cc:
Fix AsiIsNucleus spelling with respect to header file
Add ASI_LSU_CONTROL_REG to AsiSiMmu
src/arch/sparc/asi.hh:
Fix spelling of two ASIs
src/arch/sparc/isa/decoder.isa:
switch back to defaults letting the isa_parser insert readMiscRegWithEffect
src/arch/sparc/isa/formats/mem/util.isa:
Flesh out priviledgedString with hypervisor checks
Make load alternate set the flags correctly
src/arch/sparc/miscregfile.cc:
insert some forgotten break statements
src/arch/sparc/miscregfile.hh:
Add some comments to make it easier to find which misc register is which number
src/arch/sparc/tlb.cc:
flesh out the tlb memory mapped registers a lot more
src/base/traceflags.py:
add an IPR traceflag
src/mem/request.hh:
Fix a bad assert() in request
--HG--
extra : convert_revision : 1e11aa004e8f42c156e224c1d30d49479ebeed28
into zower.eecs.umich.edu:/eecshome/m5/newmemmid
src/arch/sparc/isa_traits.hh:
src/arch/sparc/miscregfile.hh:
hand merge
--HG--
extra : convert_revision : 34f50dc5e6e22096cb2c08b5888f2b0fcd418f3e
src/arch/SConscript:
add mmaped_ipr.hh to switch headers
src/arch/sparc/asi.hh:
make ASI_IMPLICT=0 so by default nothing needs to be done
src/arch/sparc/miscregfile.hh:
miscregfile no longer needs to include asi.hh
src/arch/sparc/tlb.cc:
src/arch/sparc/tlb.hh:
implement mmaped ipr reads as panics for now
src/cpu/simple/atomic.cc:
add a check for mmaped iprs and handle them if present
src/mem/request.hh:
allocate space in the flags for mmaped iprs. Put it in the first 8 bits so that by default it's fast; move the other flags up 8 bits (see the sketch below)
--HG--
extra : convert_revision : 31255b0494588c4d06a727fe35241121d741b115
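A sketch of the flag layout described for request.hh: the low 8 bits are reserved
for mmaped-IPR use so the common case is a cheap test of one byte, and the
pre-existing flags move up by 8 bits. Names and exact bit positions below are
assumptions, not the real values.

    #include <cstdint>

    typedef uint32_t RequestFlags;

    const RequestFlags MMAPED_IPR  = 1 << 0;   // somewhere in bits 0-7
    const RequestFlags UNCACHEABLE = 1 << 8;   // old flags shifted up by 8
    const RequestFlags PHYSICAL    = 1 << 9;

    // by default (no IPR bits set) this is a single cheap test
    inline bool isMmapedIpr(RequestFlags f) { return (f & 0xFF) != 0; }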
src/arch/sparc/SConscript:
Add code to serialize/unserialize tlb entries
src/arch/sparc/asi.cc:
src/arch/sparc/asi.hh:
update asi names for how they're listed in the supplement
add asis
add more asi functions
src/arch/sparc/isa_traits.hh:
move the interrupt stuff and some basic address space stuff into isa traits
src/arch/sparc/miscregfile.cc:
src/arch/sparc/miscregfile.hh:
add mmu registers to tlb
get rid of implicit asi stuff... the tlb will handle it
src/arch/sparc/regfile.hh:
make instAsid/dataAsid return ints, not ASIs
src/arch/sparc/tlb.cc:
src/arch/sparc/tlb.hh:
first cut at sparc tlb
src/arch/sparc/vtophys.hh:
pagetable needs to be included here
src/mem/request.hh:
add the asi, and whether the request is to a memory mapped register, to the request object
src/sim/host.hh:
fix incorrect definition of LL
--HG--
extra : convert_revision : 6c85cd1681c62c8cd8eab04f70b1f15a034b0aa3
Fix a small writeback bug when missing in the L2 in atomic mode
src/mem/bus.cc:
Fix a comment to make sense
src/mem/cache/cache_impl.hh:
Do a functional access to levels above on a read as a temporary solution for L2's in FS
Also fix a small issue with writeback misses in the L2
src/mem/cache/coherence/simple_coherence.hh:
src/mem/cache/coherence/uni_coherence.cc:
src/mem/cache/coherence/uni_coherence.hh:
Do a functional access to levels above on a read as a temporary solution for L2's in FS
tests/quick/00.hello/ref/alpha/linux/o3-timing/m5stats.txt:
tests/quick/00.hello/ref/alpha/linux/simple-timing/m5stats.txt:
tests/quick/01.hello-2T-smt/ref/alpha/linux/o3-timing/m5stats.txt:
Update refs for writeback changes
--HG--
extra : convert_revision : 937febd577b16b7fd97a5a68acaf53541828a251
src/mem/bus.cc:
Make it so that invalidates sent upward from the responder don't call back into
the responder, but also don't panic.
src/mem/packet.hh:
If we don't have data in the packet, don't call deleteData.
Example: InvalidateRequests never have data (see the sketch below).
--HG--
extra : convert_revision : 18766bc9f3bb4d852ac651d094254d347abd1634
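A sketch of the deleteData guard, with a simplified stand-in for the real Packet:
only free the data buffer when the packet actually carries data (an InvalidateReq,
for example, never allocates one).

    #include <cstdint>

    struct PacketStub
    {
        uint8_t *data;
        bool dataPresent;

        void safeDeleteData()
        {
            if (!dataPresent)
                return;            // nothing was ever allocated
            delete [] data;
            data = 0;
            dataPresent = false;
        }
    };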
src/mem/bridge.cc:
Update bridges, now that snoop addresses are properly forwarded.
Bus bridge should only handle snoops on the second phase (SNOOP_COMMIT)
src/mem/bus.cc:
src/mem/bus.hh:
Make sure that if a busBridge has access to both things that snoop and things that respond, it only takes the request once
--HG--
extra : convert_revision : 26cc9ee4429be45d4476fa435e0e9a54843c2509
src/mem/cache/base_cache.cc:
Sometimes a functional access arrives while an outstanding packet is waiting to be sent.
This can happen because the timing CPU does some post-processing in recvTiming that
issues a functional access. Either the CPU should leave the pkt/req around (so they can
be referenced in the mem system), or the mem system should remove them from its
outstanding lists and reinsert them if they fail in sendTiming.
I did the latter; eventually we should consider doing the former if that is the correct behavior.
--HG--
extra : convert_revision : be41e0d2632369dca9d7c15e96e5576d7583fe6a
src/mem/bus.cc:
Only call snoop once per port; still need to fix it so overlapping snoop ranges aren't added to the list.
Functional accesses that call snoop may reach a higher bus and change the src, so reset it after each snoop (see the sketch below).
--HG--
extra : convert_revision : 7276059c798a85cb9d138ccc5531298ecd055c13
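A sketch of the "reset the src after each snoop" fix: a functional snoop that
climbs to a higher bus can overwrite the packet's source id, so the bus restores
it before snooping the next port. The types below are minimal stand-ins, not the
real Packet/Port classes.

    #include <cstddef>
    #include <vector>

    struct PktStub { short src; };

    struct SnoopPort
    {
        virtual ~SnoopPort() {}
        virtual void sendFunctional(PktStub *pkt) = 0;
    };

    inline void
    functionalSnoopAll(PktStub &pkt, const std::vector<SnoopPort *> &ports)
    {
        short origSrc = pkt.src;
        for (std::size_t i = 0; i < ports.size(); ++i) {
            ports[i]->sendFunctional(&pkt);
            pkt.src = origSrc;     // a higher-level bus may have changed it
        }
    }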
src/mem/bus.cc:
Actually return the snoop list when asked for it.
Don't get stuck in infinite functional loops
--HG--
extra : convert_revision : 8e6dafbd10b30d48d28b6b5d4b464e8e8f6a3ddc
Fixes for Mem Leak associated with Writebacks.
src/mem/cache/miss/mshr_queue.cc:
Fixes for Mem Leak associated with Writebacks. (Double Delete removed)
--HG--
extra : convert_revision : 7a52ddd57da35995896f2c4438a58aa53f762416
src/mem/cache/cache_impl.hh:
When upgrades change to readEx, make sure to allocate the block
Fix dprintf
--HG--
extra : convert_revision : 8700a7e47ad042c8708302620b907849c4bfdded
src/mem/cache/base_cache.cc:
On a delayed response, be sure to call the fixPacket wrapper to toggle hasData flag.
src/mem/packet.cc:
src/mem/packet.hh:
Create a wrapper to toggle the hasData flag on delayed responses
--HG--
extra : convert_revision : 1ced8d4e3dc12a059fb7636d59e429cd3dd46901
Working on that now.
src/mem/cache/base_cache.cc:
Keep a list of the responders so we can search them on functional accesses.
src/mem/cache/base_cache.hh:
Properly put things on a list for responses so we can search the list.
Also, be sure to check the outgoing ports lists on a functional access (factor some common code out there)
src/mem/cache/cache_impl.hh:
Properly return when the first read hits on a functional access.
Make sure to call to check the other ports list of packets before forwarding it out.
--HG--
extra : convert_revision : 1d21cb55ff29c15716617efc48441329707c088a
src/mem/packet.cc:
Make sure to copy the whole data (we were one byte short)
src/mem/tport.cc:
Fix for the proper semantics of fixPacket
--HG--
extra : convert_revision : 215e05db9099d427afd4994f5b29079354c847d8
Since we don't have a platform yet, you need to comment out the default responder stuff in Bus.py to make it work.
SConstruct:
Add TARGET_ISA to the list of environment variables that end up in the build_env for python
configs/common/FSConfig.py:
add a simple SPARC system to begin testing with; you'll need to change makeLinuxAlphaSystem to makeSparcSystem in fs.py for now
src/SConscript:
add a raw file object, at least until we get more info about how to compile openboot properly
src/arch/sparc/system.cc:
src/arch/sparc/system.hh:
add parameters for ROM files (OBP/Reset/Hypervisor), a ROM, load files into ROM
src/base/loader/object_file.cc:
src/base/loader/object_file.hh:
add option to try raw when nothing works
src/cpu/exetrace.cc:
cleanup lockstep printing a little bit
src/cpu/m5legion_interface.h:
change the instruction to be 32 bits because it is
src/mem/physical.cc:
fix assert that doesn't work if memory starts somewhere above 0
src/python/m5/objects/BaseCPU.py:
Add if statement to choose between sparc tlbs and alpha tlbs
src/python/m5/objects/System.py:
Add a sparc system that sets the rom addresses correctly
src/python/m5/params.py:
add the ability to add Addr() together
--HG--
extra : convert_revision : bbbd8a56134f2dda2728091f740e2f7119b0c4af
src/cpu/o3/cpu.cc:
Handle draining properly when CPU isn't actually being used.
src/cpu/simple/atomic.cc:
Be sure to set status properly when draining.
src/mem/bus.cc:
Fix for draining.
--HG--
extra : convert_revision : d9796e6693e974f022159029fc9743c49a970c8f
src/mem/bus.cc:
Fix up draining to work properly.
src/mem/bus.hh:
Initialize drainEvent to NULL.
src/mem/cache/base_cache.cc:
src/mem/cache/base_cache.hh:
Add draining to the caches.
--HG--
extra : convert_revision : 3082220a75d50876f10909f9f99bec535889f818
src/mem/bus.cc:
src/mem/bus.hh:
The bus will now be set up with a default responder unless the user overrides it. This default responder returns BadAddress if no matching port is found (see the sketch below).
src/python/m5/objects/Bus.py:
Bus now has a default responder for FS mode if the user doesn't override it. It returns BadAddress if no matching port is found.
src/python/m5/objects/Tsunami.py:
Add bad address device. Also record when the user has specified their own default responder.
--HG--
extra : convert_revision : 59070477ae313ee711b2d59baa2369c9a91c5b85
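In outline, the default-responder lookup might look like the sketch below: route
to the port whose range claims the address, otherwise fall back to the default
responder, which answers with BadAddress instead of panicking. All names are
illustrative stand-ins for the real Bus/Port code.

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    typedef uint64_t Addr;

    struct PortRange { Addr start, end; int portId; };

    inline int
    findPort(const std::vector<PortRange> &ranges, Addr addr, int defaultPortId)
    {
        for (std::size_t i = 0; i < ranges.size(); ++i)
            if (addr >= ranges[i].start && addr <= ranges[i].end)
                return ranges[i].portId;
        return defaultPortId;      // default responder will return BadAddress
    }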
src/mem/cache/base_cache.cc:
Have caches return a new functional port whenever asked for one. I'm pretty sure this is the desired behavior. Ron can correct me if it's not.
--HG--
extra : convert_revision : e1fadf895a7d714968128ff900d10e86fde53387
into zamp.eecs.umich.edu:/z/ktlim2/clean/newmem-busfix
configs/example/fs.py:
configs/example/se.py:
src/mem/tport.hh:
Hand merge.
--HG--
extra : convert_revision : b9df95534d43b3b311f24ae24717371d03d615bf
src/cpu/simple/atomic.hh:
Port now takes in the MemObject that owns it.
src/cpu/simple/timing.hh:
Port now takes in MemObject that owns it.
src/dev/io_device.cc:
src/mem/bus.hh:
Ports now take in the MemObject that owns them.
src/mem/cache/base_cache.cc:
Ports now take in the MemObject that owns them.
src/mem/port.hh:
src/mem/tport.hh:
Ports now optionally take in the MemObject that owns them (see the sketch below).
--HG--
extra : convert_revision : 890a72a871795987c2236c65937e06973412d349
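The port/owner change above can be sketched as a constructor that records a
pointer to the owning MemObject, so generic code can get back to the owning
object. The class shapes are simplified stand-ins for the real port.hh code.

    #include <string>

    class MemObject;   // the real class lives in src/mem/mem_object.hh

    class Port
    {
      protected:
        std::string portName;
        MemObject *owner;   // may be NULL for ports that don't pass an owner

      public:
        Port(const std::string &name, MemObject *o = NULL)
            : portName(name), owner(o)
        { }

        MemObject *getOwner() const { return owner; }
    };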
not necessarily 100% there yet.
src/mem/cache/cache_impl.hh:
Generate response packet on failed store conditional.
src/mem/packet.hh:
Clear packet flags when reinitializing.
(SATISFIED in particular is one we don't want to leave set.)
--HG--
extra : convert_revision : 29207c8a09afcbce43f41c480ad0c1b21d47454f
Fix fixPacket assert function.
Stop timing port from forwarding the request if a response was found in its queue on a read.
src/cpu/memtest/memtest.cc:
src/cpu/memtest/memtest.hh:
src/python/m5/objects/MemTest.py:
Add parameter to configure what percentage of mem accesses are functional
src/mem/cache/base_cache.cc:
src/mem/cache/cache_impl.hh:
Use the fixPacket function
src/mem/packet.cc:
Fix an assert that was checking the wrong thing
src/mem/tport.cc:
Properly detect if we need to do the access to the functional device
--HG--
extra : convert_revision : 447cc1a9a65ddd2a41e937fb09dc0e7c74e9c75e
I need to move over to using the fixPacket function so I don't have to make the same changes everywhere.
Still a functional access bug someplace I need to track down in timing mode.
src/mem/cache/base_cache.cc:
src/mem/cache/cache_impl.hh:
Fix corner case on assertion
tests/configs/memtest.py:
Updated memtester with uncacheable addresses and functional accesses
--HG--
extra : convert_revision : e6fa851621700ff9227b83cc5cac20af4fc8444f
src/cpu/memtest/memtest.cc:
Fix memtest to do functional accesses
src/mem/cache/cache_impl.hh:
Fix cache to handle functional accesses properly based on memtester changes
Still need to fix functional accesses in timing mode now that the memtester can test it.
--HG--
extra : convert_revision : a6dbca4dc23763ca13560fbf5d41a23ddf021113
?? doesn't compile in warn statements
Should have been false, where I had a true.
src/cpu/o3/lsq_impl.hh:
Apparently you can't have ?? in a warn statement (something about trigraphs; see the example below)
src/mem/cache/cache_impl.hh:
Forgot to signal atomic mode in snoopProbe
--HG--
extra : convert_revision : c75cb76e193e852284564993440c8ea39e6de426
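The "??" issue is a real C/C++ quirk: "??" followed by certain characters forms a
trigraph (for example "??)" means "]"), so compilers may warn about it, and with
trigraph translation enabled the string literal would silently change. Escaping
one of the question marks breaks up the sequence without changing the output:

    #include <cstdio>

    int main()
    {
        std::printf("huh?\?)\n");   // prints: huh??)
        return 0;
    }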
Now to try L2 caches in FS.
src/mem/cache/base_cache.hh:
Fix uni-coherence for atomic accesses in coherence protocol access to port
src/mem/cache/cache_impl.hh:
Properly handle uni-coherence
src/mem/cache/coherence/simple_coherence.hh:
Properly forward invalidates (not done for MSI+ protocols; assumed top level for now)
src/mem/cache/coherence/uni_coherence.cc:
src/mem/cache/coherence/uni_coherence.hh:
Properly forward invalidates in atomic/timing uni-coherence
--HG--
extra : convert_revision : f0f11315e8e7f32c19d92287f6f9c27b079c96f7
src/mem/cache/cache_impl.hh:
Get the read data from the highest level of cache on a functional access
--HG--
extra : convert_revision : 7437ac46fb40f3ea3b42197a1aa8aec62af60181
configs/example/fs.py:
Add MOESI protocol to caches (uni coherence not quite working w/FS yet).
--HG--
extra : convert_revision : 7bef7d9c5b24bf7241cc810df692408837b06b86
Still a bug in atomic uni-coherence in FS.
src/cpu/o3/fetch_impl.hh:
src/cpu/o3/lsq_impl.hh:
src/cpu/simple/atomic.cc:
src/cpu/simple/timing.cc:
Make CPU models handle coherence requests
src/mem/cache/base_cache.cc:
Properly signal coherence CSHRs
src/mem/cache/coherence/uni_coherence.cc:
Only deallocate once
--HG--
extra : convert_revision : c4533de421c371c5532ee505e3ecd451511f5c99
Still need to rework upgrades into this system, but works for now.
src/mem/cache/base_cache.cc:
Reorder code to be more readable
src/mem/cache/base_cache.hh:
Be sure to delete the copy on a bus block
src/mem/cache/cache_impl.hh:
Be sure to remove the copy on a writeback success
src/mem/cache/miss/mshr_queue.cc:
Apply De Morgan's law to make the condition easier to understand (see the example below)
src/mem/tport.cc:
Delete writebacks
--HG--
extra : convert_revision : 9519fb37b46ead781d340de29bb342a322a6a92e
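The "De Morgan" note is just flipping a negated compound condition into a form
that is easier to read; the variable names below are made up, only the
transformation matters.

    #include <cassert>

    // !(a && b) is equivalent to (!a || !b); the second form is usually easier
    // to read when the operands are long expressions.
    int main()
    {
        for (int a = 0; a <= 1; ++a)
            for (int b = 0; b <= 1; ++b)
                assert(!(a && b) == (!a || !b));
        return 0;
    }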
Still need to fix upgrades to use this path
src/mem/cache/base_cache.cc:
Copy the pkt to the MSHR before issuing the sendTiming where it may be changed/consumed
src/mem/cache/cache_impl.hh:
Use copy of packet, because sendTiming may have changed the pkt
Also, delete the copy when the time comes
--HG--
extra : convert_revision : 635cde6b4f08d010affde310c46b1caf50fbe424
Fix CSHR's for flow control.
Fix for Bus Bridges reusing packets (clean flags up)
Now both timing/atomic caches with MOESI in UP fail at the same point.
src/dev/io_device.hh:
DMA's should send WriteInvalidates
src/mem/bridge.cc:
When reusing a packet, clean up the flags in the packet that were set by the bus.
src/mem/cache/base_cache.cc:
src/mem/cache/base_cache.hh:
src/mem/cache/cache.hh:
src/mem/cache/cache_impl.hh:
src/mem/cache/coherence/simple_coherence.hh:
src/mem/cache/coherence/uni_coherence.cc:
src/mem/cache/coherence/uni_coherence.hh:
Fix CSHR's for flow control.
src/mem/packet.hh:
Make a writeInvalidateResp, since the DMA expects responses to its writes
--HG--
extra : convert_revision : 59fd6658bcc0d076f4b143169caca946472a86cd
fixPacket() should be used anywhere a functional packet and a timing packet are found to have the same address (see the sketch below).
--HG--
extra : convert_revision : 783ec438271b24ddb0ae742b4efd1ed7d6be93f3
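Very roughly, the idea behind fixPacket() is that a functional access overlapping
a packet still queued in the timing world has to observe (or update) that
in-flight data rather than bypass it. The real fixPacket() in src/mem/packet.cc
handles partial overlaps and writes in both directions; the sketch below, with
made-up fields, only shows the simplest case of a functional read fully contained
in a queued timing write.

    #include <cstdint>
    #include <cstring>

    struct Pkt { uint64_t addr; unsigned size; bool isRead; uint8_t *data; };

    bool
    serviceFunctionalRead(Pkt &func, const Pkt &timing)
    {
        if (func.isRead && !timing.isRead &&
            func.addr >= timing.addr &&
            func.addr + func.size <= timing.addr + timing.size) {
            std::memcpy(func.data,
                        timing.data + (func.addr - timing.addr), func.size);
            return true;   // functional read satisfied from the queued write
        }
        return false;
    }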
src/mem/cache/base_cache.hh:
Remove top level param from cache
src/mem/cache/coherence/uni_coherence.cc:
Remove top level parameters from the cache
--HG--
extra : convert_revision : 4437aeedc20866869de7f9ab123dfa7baeebedf0
implement fixPacket and add the ability to print a packet to an ostream (see the sketch below)
remove tabs in packet.hh (Could people stop inserting them??!?!?!)
mark const functions in packet.hh as such
src/base/traceflags.py:
add a traceflag for functional accesses
src/mem/packet.cc:
implement fixPacket and add the ability to print a packet to an ostream
src/mem/packet.hh:
add the ability to print a packet to an ostream
remove tabs in file
mark const functions as such
--HG--
extra : convert_revision : 4297bce5e1d3abbab48be5bd9eb9e982b751fc7c
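The ostream-printing addition can be sketched as an operator<< so debug code can
stream a packet directly; PacketStub below is a stand-in, the real operator works
on the Packet class in src/mem/packet.hh.

    #include <iostream>
    #include <string>

    struct PacketStub
    {
        std::string cmdName;
        unsigned long long addr;
        unsigned size;
    };

    std::ostream &
    operator<<(std::ostream &os, const PacketStub &pkt)
    {
        return os << pkt.cmdName << " [0x" << std::hex << pkt.addr
                  << std::dec << ", size " << pkt.size << "]";
    }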
The response queue is not tying up an MSHR, should we change that or assume infinite storage for responses?
src/mem/cache/base_cache.cc:
src/mem/tport.cc:
Add functional checking of packets queued for retry.
--HG--
extra : convert_revision : 0cb40b3a96d37a5e9eec95312d660ec6a9ce526a
src/base/traceflags.py:
src/mem/physical.cc:
Add debug flags for physical memory accesses
src/mem/cache/cache_impl.hh:
Snoops to uncacheable blocks should not happen
src/mem/cache/miss/miss_queue.cc:
Set the size properly on uncacheable accesses
--HG--
extra : convert_revision : fc78192863afb11fc7c591fba169021b9e127d16