Added options to Options.py for FS mode, with backward compatibility. It is
useful to be able to specify the disk image and the memory size from the
command line, since many disk images are created to support different
benchmark suites as well as per-user needs, and a change in program also
changes the memory requirements. These options provide a command-line
interface for specifying both the disk image and the memory size, giving more
flexibility.
This patch removes the assumption of having a single instance of
PhysicalMemory, and enables a distributed memory where the individual
memories in the system are each responsible for a single contiguous
address range.
All memories inherit from an AbstractMemory that encompasses the basic
behaviour of a random access memory, and provides untimed access
methods. What was previously called PhysicalMemory is now
SimpleMemory, and a subclass of AbstractMemory. All future types of
memory controllers should inherit from AbstractMemory.
To enable e.g. the atomic CPU and RubyPort to access the now
distributed memory, the system has a wrapper class, called
PhysicalMemory that is aware of all the memories in the system and
their associated address ranges. This class thus acts as an
infinitely-fast bus and performs address decoding for these "shortcut"
accesses. Each memory can specify that it should not be part of the
global address map (used e.g. by the functional memories of some
testers). Moreover, each memory can be configured to be reported to
the OS configuration table, useful for populating ATAG structures, and
any potential ACPI tables.
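As a rough sketch of what this enables in a config script (the AddrRange form and the in_addr_map / conf_table_reported parameter names here are assumptions based on the description above, not a definitive interface):
    from m5.objects import *

    system = System()
    # Two memories, each responsible for its own contiguous range.
    system.mem0 = SimpleMemory(range=AddrRange(0, size='256MB'))
    system.mem1 = SimpleMemory(range=AddrRange('256MB', size='256MB'))
    # Keep the second memory out of the OS configuration table (ATAG/ACPI).
    system.mem1.conf_table_reported = False
    # A tester's functional memory that stays out of the global address map.
    system.funcmem = SimpleMemory(in_addr_map=False)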
Checkpointing support currently assumes that all memories have the
same size and organisation when creating and resuming from the
checkpoint. A future patch will enable a more flexible
re-organisation.
--HG--
rename : src/mem/PhysicalMemory.py => src/mem/AbstractMemory.py
rename : src/mem/PhysicalMemory.py => src/mem/SimpleMemory.py
rename : src/mem/physical.cc => src/mem/abstract_mem.cc
rename : src/mem/physical.hh => src/mem/abstract_mem.hh
rename : src/mem/physical.cc => src/mem/simple_mem.cc
rename : src/mem/physical.hh => src/mem/simple_mem.hh
With recent changes to the memory system, a port cannot be assigned a peer
port twice. While making use of the Ruby memory system in FS mode, DMA
ports were assigned a peer twice, once for the classic memory system
and once for the Ruby memory system. This patch removes this double
assignment of peer ports.
This patch removes the physmem_port from the Atomic CPU and instead
uses the system pointer to access the physmem when using the fastmem
option. The system already keeps track of the physmem and the valid
memory address ranges, and with this patch we merely make use of that
existing functionality. As a result of this change, the overloaded
getMasterPort in the Atomic CPU can be removed, thus unifying the CPUs.
This patch removes the physMemPort from the RubySequencer and instead
uses the system pointer to access the physmem. The system already
keeps track of the physmem and the valid memory address ranges, and
with this patch we merely make use of that existing functionality. The
memory is modified so that it is possible to call the access functions
(atomic and functional) without going through the port, and the memory
is allowed to be unconnected, i.e. have no ports (since Ruby does not
attach it like the conventional memory system).
I am not too happy with the way options are added in files se.py and fs.py
currently. This patch moves all the options to the file Options.py, functions
from which are called when required.
With the SE/FS merge, the interrupt controller is created irrespective of the
mode. This patch creates the interrupt controller when Ruby is used and
connects its ports.
Enables the CheckerCPU to be selected at runtime with the --checker option
from the configs/example/fs.py and configs/example/se.py configuration
files. Also merges with the SE/FS changes.
This patch cleans up a number of remaining uses of bus.port which
is now split into bus.master and bus.slave. The only non-trivial change
is the memtest where the level building now has to be aware of the role
of the ports used in the previous level.
This patch merely removes the use of the num_cpus cache parameter
which no longer exists after the introduction of the masterIds. The
affected scripts fail when trying to set the parameter. Note that this
patch does not update the regression stats.
This patch classifies all ports in Python as either Master or Slave
and enforces a binding of master to slave. Conceptually, a master (such
as a CPU or DMA port) issues requests, and receives responses, and
conversely, a slave (such as a memory or a PIO device) receives
requests and sends back responses. Currently there is no
differentiation between coherent and non-coherent masters and slaves.
The classification as master/slave also involves splitting the dual
role port of the bus into a master and slave port and updating all the
system assembly scripts to use the appropriate port. Similarly, the
interrupt devices have to have their int_port split into a master and
slave port. The intdev and its children have minimal changes to
facilitate the extra port.
Note that this patch does not enforce any port typing in the C++
world, it merely ensures that the Python objects have a notion of the
port roles and are connected in an appropriate manner. This check is
carried out when two ports are connected, e.g. bus.master =
memory.port. The following patches will make use of the
classifications and specialise the C++ ports into masters and slaves.
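A minimal sketch of the resulting binding rules in a config script (the object names are illustrative and assume the CPU, cache, bus and memory have already been constructed):
    # Every connection binds exactly one master port to one slave port.
    system.cpu.icache_port = system.icache.cpu_side   # CPU port (master) -> cache port (slave)
    system.icache.mem_side = system.membus.slave      # cache mem side (master) -> bus slave port
    system.membus.master = system.physmem.port        # bus master port -> memory port (slave)
    # Binding two masters (or two slaves) together is rejected at configuration time.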
This patch moves the connection of the system port to create_system in
Ruby.py. Thereby it allows the failing Ruby test (and other Ruby
systems) to run again.
This patch fixes the currently broken fs.py by specifying the size of
the bridge range rather than the end address. This effectively
subtracts one when determining the address range for the IO bridge
(from IO bus to membus), and thus avoids the overlapping ranges.
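The arithmetic behind the fix, shown as a small illustration (the addresses are hypothetical):
    io_base = 0x8000000000            # hypothetical start of the IO window
    io_size = 0x8000000000            # size of the window, as now given to fs.py
    io_end = io_base + io_size - 1    # inclusive end address; the "- 1" is what
                                      # keeps this range from overlapping its neighbour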
This patch implements the functionality for forwarding invalidations and
replacements from the L1 cache of the Ruby memory system to the O3 CPU. The
implementation adds a list of ports to RubyPort. Whenever a replacement or an
invalidation is performed, the L1 cache forwards this to all the ports, which
in the case of the O3 CPU is the LSQ.
In preparation for the introduction of Master and Slave ports, this
patch removes the default port parameter in the Python port and thus
forces the argument list of the Port to contain only the
description. The drawback at this point is that the config port and
dma port of PCI and DMA devices have to be connected explicitly. This
is key for future diversification as the pio and config port are
slaves, but the dma port is a master.
This patch makes the bus bridge uni-directional and specialises the
bus ports to be a master port and a slave port. This greatly
simplifies the assumptions on both sides as either port only has to
deal with requests or responses. The following patches introduce the
notion of master and slave ports, and would not be possible without
this split of responsibilities.
In making the bridge unidirectional, the address range mechanism of
the bridge is also changed. For the cases where communication is
taking place both ways, an additional bridge is needed. This causes
issues with the existing mechanism, as the busses cannot determine
when to stop iterating the address updates from the two bridges. To
avoid this issue, and also greatly simplify the specification, the
bridge now has a fixed set of address ranges, specified at creation
time.
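A hedged sketch of what two-way communication looks like with two uni-directional bridges and fixed ranges (the Bridge parameter and port names, and the ranges, are illustrative):
    # Bridge carrying CPU-side requests from the memory bus to the IO bus.
    system.iobridge = Bridge(ranges=[AddrRange(0x10000000, size='256MB')])
    system.iobridge.slave = system.membus.master
    system.iobridge.master = system.iobus.slave

    # A second bridge for DMA traffic flowing back towards memory.
    system.dmabridge = Bridge(ranges=[AddrRange(0, size='512MB')])
    system.dmabridge.slave = system.iobus.master
    system.dmabridge.master = system.membus.slave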
Port proxies are used to replace non-structural ports, and thus enable
all ports in the system to correspond to a structural entity. This has
the advantage of accessing memory through the normal memory subsystem
and thus allowing any constellation of distributed memories, address
maps, etc. Most accesses are done through the "system port" that is
used for loading binaries, debugging etc. The entities that belong
to the CPU, e.g. threads and thread contexts, wrap the CPU data
port in a port proxy.
The following replacements are made:
FunctionalPort > PortProxy
TranslatingPort > SETranslatingPortProxy
VirtualPort > FSTranslatingPortProxy
--HG--
rename : src/mem/vport.cc => src/mem/fs_translating_port_proxy.cc
rename : src/mem/vport.hh => src/mem/fs_translating_port_proxy.hh
rename : src/mem/translating_port.cc => src/mem/se_translating_port_proxy.cc
rename : src/mem/translating_port.hh => src/mem/se_translating_port_proxy.hh
Currently there is an assumption that restoration from a checkpoint will
happen by first restoring to an atomic CPU and then switching to a timing
CPU. This patch adds support for directly restoring to a timing CPU. It
adds a new option, '--restore-with-cpu', which is used to specify the type
of CPU to which the checkpoint should be restored. It defaults to
'atomic', which was the case before.
This patch adds a new option for cpu type. This option is of type 'choice'
which is similar to a C++ enum, except that it takes string values as
possible choices. The following options are removed: detailed, timing, and
inorder.
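A hedged sketch of what a 'choice' option looks like with optparse (the option name and the set of choices are illustrative):
    from optparse import OptionParser

    parser = OptionParser()
    parser.add_option("--cpu-type", type="choice", default="atomic",
                      choices=["atomic", "timing", "detailed", "inorder"],
                      help="type of cpu to run with")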
--HG--
extra : rebase_source : 58885e2e8a88b6af8e6ff884a5922059dbb1a6cb
When a change in the frame buffer from the VNC server is detected, the new
frame is stored out to the m5out/frames_*/ directory. Specify the flag
"--frame-capture" when running configs/example/fs.py to enable this behavior.
--HG--
extra : rebase_source : d4e08e83f4fa6ff79f3dc9c433fc1f0487e057fc
There are two lines in O3CPU.py that set the dcache and icache
tgts_per_mshr to 20, ignoring any pre-configured value of tgts_per_mshr.
This patch removes these hardcoded lines from O3CPU.py and sets the default
L1 cache mshr targets to 20.
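A minimal sketch of where the default now lives instead, assuming an L1 cache class along the lines of the ones in configs/common/Caches.py:
    from m5.objects import BaseCache

    class L1Cache(BaseCache):
        # Default MSHR targets for L1 caches; previously O3CPU.py overwrote
        # this value with a hardcoded 20 regardless of the configuration.
        tgts_per_mshr = 20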
--HG--
extra : rebase_source : 6f92d950e90496a3102967442814e97dc84db08b
This patch adds a fault model, which provides the probability of a number of
architectural faults in the interconnection network (e.g., data corruption,
misrouting). These probabilities can be used to realistically inject faults
in GARNET and faithfully evaluate the effectiveness of novel resilient NoC
architectures.
This patch drops RUBY as a compile time option. Instead the PROTOCOL option
is used to figure out whether or not to build Ruby. If the specified protocol
is 'None', then Ruby is not compiled.
The patch on Ruby functional accesses made changes to the process of
instantiating controllers and sequencers. The DMA controller and
sequencer were not updated, hence this patch.
Addition of functional access support to Ruby necessitated some changes to
the way coherence protocols are written. I had forgotten to update the
Network_test protocol. This patch makes those updates.
This patch provides functional access support in Ruby. Currently only
the M5Port of RubyPort supports functional accesses. The support for
functional accesses through the PioPort will be added as a separate patch.
A significant contributor to the need for adoptOrphanParams()
is the practice of appending to SimObjectVectors which have
already been assigned as children. This practice sidesteps the
assignment operation for those appended SimObjects, which is
where parent/child relationships are typically established.
This patch reworks the config scripts that use append() on
SimObjectVectors, which all happen to be in the x86 system
configuration. At some point in the future, I hope to make
SimObjectVectors immutable (by deriving from tuple rather than
list), at which time this patch will be necessary for correct
operation. For now, it just avoids some of the warning
messages that get printed in adoptOrphanParams().
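A hedged sketch contrasting the two styles (the helper and attribute names are hypothetical):
    # Before: appending sidesteps the assignment hook that establishes parenting.
    #   system.io_devices = [make_device(0)]
    #   system.io_devices.append(make_device(1))   # appended object is never "adopted"

    # After: build the full list first, then assign it once so every element
    # becomes a child through the normal assignment operation.
    system.io_devices = [make_device(i) for i in range(2)]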
Re-enabling implicit parenting (see previous patch) causes current
Ruby config scripts to create some strange hierarchies and generate
several warnings. This patch makes three general changes to address
these issues.
1. The order of object creation in the ruby config files makes the L1
caches children of the sequencer rather than the controller; these
config files are rewritten to assign the L1 caches to the
controller first (see the sketch below).
2. The assignment of the sequencer list to system.ruby.cpu_ruby_ports
causes the sequencers to be children of system.ruby, generating
warnings because they are already parented to their respective
controllers. Changing this attribute to _cpu_ruby_ports fixes this
because the leading underscore means this is now treated as a plain
Python attribute rather than a child assignment. As a result, the
configuration hierarchy changes such that, e.g.,
system.ruby.cpu_ruby_ports0 becomes system.l1_cntrl0.sequencer.
3. In the topology classes, the routers become children of some random
internal link node rather than direct children of the topology.
The topology classes are rewritten to assign the routers to the
topology object first.
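A rough sketch of the ordering and naming conventions from points 1 and 2 (the controller class and attribute names are illustrative):
    # Point 1: give the L1 caches to the controller before creating the sequencer,
    # so the caches are parented to the controller rather than the sequencer.
    l1_cntrl = L1Cache_Controller(version=i, L1IcacheMemory=l1i_cache,
                                  L1DcacheMemory=l1d_cache)
    l1_cntrl.sequencer = RubySequencer(icache=l1i_cache, dcache=l1d_cache)

    # Point 2: the leading underscore makes this a plain Python attribute, so the
    # sequencers remain children of their controllers instead of system.ruby.
    system.ruby._cpu_ruby_ports = cpu_sequencers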
A recent patch broke the Ruby network tester by adding -p inside Options.py,
which conflicts with the -p inside ruby_network_test.py. The -p option has
therefore been removed from ruby_network_test.py.
The network tester terminates after injecting for sim_cycles
(default=1000), instead of having to explicitly pass --maxticks from the
command line as before. If fixed_pkts is enabled, the tester only
injects maxpackets packets; otherwise it keeps injecting until sim_cycles.
The tester also works with zero command line arguments now.
This patch ensures that both Garnet and the simple networks use the bw value
specified in the topology. To do so, the patch generalizes the specification
of bw for basic links. This value is then translated to the specific value
used by the simple and Garnet networks. Since Garnet does not support
non-uniform link bandwidth, the patch also adds a check to ensure all bws are
equal.
--HG--
rename : src/mem/ruby/network/BasicLink.cc => src/mem/ruby/network/simple/SimpleLink.cc
rename : src/mem/ruby/network/BasicLink.hh => src/mem/ruby/network/simple/SimpleLink.hh
rename : src/mem/ruby/network/BasicLink.py => src/mem/ruby/network/simple/SimpleLink.py
This patch converts links and switches from second class simobjects that were
virtually ignored by the networks (both simple and Garnet) to first class
simobjects that directly correspond to C++ objects manipulated by the
topology and network classes. This is especially true for Garnet, where the
links and switches directly correspond to specific C++ objects.
By making this change, many aspects of the Topology class were simplified.
--HG--
rename : src/mem/ruby/network/Network.cc => src/mem/ruby/network/BasicLink.cc
rename : src/mem/ruby/network/Network.hh => src/mem/ruby/network/BasicLink.hh
rename : src/mem/ruby/network/Network.cc => src/mem/ruby/network/garnet/fixed-pipeline/GarnetLink_d.cc
rename : src/mem/ruby/network/Network.hh => src/mem/ruby/network/garnet/fixed-pipeline/GarnetLink_d.hh
rename : src/mem/ruby/network/garnet/fixed-pipeline/GarnetNetwork_d.py => src/mem/ruby/network/garnet/fixed-pipeline/GarnetLink_d.py
rename : src/mem/ruby/network/garnet/fixed-pipeline/GarnetNetwork_d.py => src/mem/ruby/network/garnet/fixed-pipeline/GarnetRouter_d.py
rename : src/mem/ruby/network/Network.cc => src/mem/ruby/network/garnet/flexible-pipeline/GarnetLink.cc
rename : src/mem/ruby/network/Network.hh => src/mem/ruby/network/garnet/flexible-pipeline/GarnetLink.hh
rename : src/mem/ruby/network/garnet/fixed-pipeline/GarnetNetwork_d.py => src/mem/ruby/network/garnet/flexible-pipeline/GarnetLink.py
rename : src/mem/ruby/network/garnet/fixed-pipeline/GarnetNetwork_d.py => src/mem/ruby/network/garnet/flexible-pipeline/GarnetRouter.py
Frame buffer and boot linux:
./build/ARM_FS/m5.opt configs/example/fs.py --benchmark=ArmLinuxFrameBuf --kernel=vmlinux.touchkit
Linux from a CF card:
./build/ARM_FS/m5.opt configs/example/fs.py --benchmark=ArmLinuxCflash --kernel=vmlinux.touchkit
Run Android:
./build/ARM_FS/m5.opt configs/example/fs.py --benchmark=ArmAndroid --kernel=vmlinux.android
Run MP:
./build/ARM_FS/m5.opt configs/example/fs.py --benchmark=ArmLinuxCflash --kernel=vmlinux.mp-2.6.38
This patch moves the assignment of testsys.switch_cpus, testsys.switch_cpus_1,
switch_cpu_list, and switch_cpu_list1 outside of the for loop so they are
assigned only once, after switch_cpus and switch_cpus_1 are constructed.
The tester code is in testers/networktest.
The tester can be invoked by configs/example/ruby_network_test.py.
A dummy coherence protocol called Network_test is also added for network-only
simulations and testing. The protocol takes in messages from the tester and
just pushes them into the network in the appropriate vnet, without storing any
state.
Now, instead of --bench benchname, you can do --bench bench1-bench2-bench3 and
it will set up a simulation that instantiates those three workloads. The only
caveat is that, for sanity checking, your -n X must now match the number of
benchmarks in the list.
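A hedged sketch of the parsing and sanity check this implies (variable names are illustrative):
    from m5.util import fatal

    benchmarks = options.bench.split('-')
    if len(benchmarks) != options.num_cpus:
        fatal("Number of benchmarks (%d) must match -n %d",
              len(benchmarks), options.num_cpus)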
This change fixes the problem for all the cases we actively use. If you want to try
more creative I/O device attachments (e.g. sharing an L2), this won't work. You
would need another level of caching between the I/O device and the cache
(which you actually need anyway with our current code to make sure writes
propagate). This is required so that you can mark the cache in between as
top level and it won't try to send ownership of a block to the I/O device.
Asserts have been added that should catch any issues.
makeArmSystem creates both bare-metal and Linux systems more cleanly.
machine_type was never optional though listed as an optional argument; a system
such as "RealView_PBX" must now be explicitly specified. Now that it is a
required argument, the placement of the arguments has changed slightly
requiring some changes to calls that create ARM systems.
It's confusing (especially to new users) when you set some standard parameters
(as defined in Options.py) and they aren't reflected in the simulation, so we
might as well link the settings in CacheConfig.py to those in Options.py.
This way, things that don't care about the work-count options, and/or aren't
called by something that has those command-line options set up, don't have to
build a fake object to carry in inert values.
This makes sure that the address ranges requested for caches and uncached ports
don't conflict with each other, and that accesses which are always uncached
(message signaled interrupts for instance) don't waste time passing through
caches.
The disk image to use was always being forced to a particular value. This
change selects the default disk image based on the architecture being
built. In the future, a more sophisticated system might select a path based
on certain rules instead of relying on one-off file names.
M5 skips over any simulated time where it doesn't have any work to do. When
the simulation is active, the time skipped is short and the work done at any
point in time is relatively substantial. If the time between events is long
and/or the work to do at each event is small, it's possible for simulated time
to pass faster than real time. When running a benchmark that can be good
because it means the simulation will finish sooner in real time. When
interacting with the real world through, for instance, a serial terminal or
bridge to a real network, this can be a problem. Human or network response time
could be greatly exaggerated from the perspective of the simulation and make
simulated events happen "too soon" from an external perspective.
This change adds the capability to force the simulation to run no faster than
real time. It does so by scheduling a periodic event that checks to see if
its simulated period is shorter than its real period. If it is, it stalls the
simulation until they're equal. This is called time syncing.
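Conceptually, the periodic check works along these lines (a pure Python illustration of the idea, not the actual implementation):
    import time

    def time_sync_check(sim_period_sec, last_wallclock):
        """Stall if the simulated period passed faster than real time."""
        elapsed_real = time.time() - last_wallclock
        if sim_period_sec > elapsed_real:
            time.sleep(sim_period_sec - elapsed_real)   # wait for real time to catch up
        return time.time()                              # wallclock for the next period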
A future change could add pseudo instructions which turn time syncing on and
off from within the simulation. That would allow time syncing to be used for
the interactive parts of a session but then turned off when running a
benchmark using the m5 utility program inside a script. Time syncing would
probably not happen anyway while running a benchmark because there would be
plenty of work for M5 to do, but the event overhead could be avoided.
This patch adds an option to the script Ruby.py for setting the parameter
m_random_seed used for randomizing delays in the memory system. The option
can be specified as "--random_seed <seed value>".
Most of the messages in the config scripts that report a time value already
print "@ tick" followed by the current tick value, but a few were printing
"@ cycle". Since this is a distinction that's frequently confusing to new
users, this changes those messages to the more accurate and consistent "@ tick".
This patch allows each individual message buffer to have a different recycle
latency and allows the overall recycle latency to be specified on the command
line. The
patch also adds profiling info to make sure no one processor's requests are
recycled too much.
This patch allows one to disable migratory sharing for those cache blocks that
are accessed by atomic requests. While the implementations are different
between the token and hammer protocols, the motivation is the same. For
Alpha, LLSC semantics expect that normal loads do not unlock cache blocks that
have been locked by LL accesses. Therefore, locked blocks should not transfer
write permissions when responding to these load requests. Instead, they only
transfer read permissions so that the subsequent SC access can possibly
succeed.
The previous, slower Ruby latencies created a mismatch between the faster M5
CPU models and the much slower Ruby memory system. Specifically, SMP
interrupts were much slower and more infrequent, as was the movement of CPUs
in and out of spin locks. The result was that many CPUs were idle for large
periods of time. These changes fix the latency mismatch.
This patch adds DMA testing to the Memtester and inherits many changes from
Polina's old tester_dma_extension patch. Since Ruby does not work in atomic
mode, the atomic mode options are removed.
This patch attaches Ruby objects to the system before the topology is
created, so that their SimObject names reflect their meaningful variable
names instead of their topology names.
The separate restoreCheckpoint() call is gone; just pass
the checkpoint dir as an optional arg to instantiate().
This change is a precursor to some more extensive
reworking of the startup code.
Enforce that the Python Root SimObject is instantiated only
once. The C++ Root object already panics if more than one is
created. This change avoids the need to track what the root
object is, since it's available from Root.getInstance() (if it
exists). It's now redundant to have the user pass the root
object to functions like instantiate(), checkpoint(), and
restoreCheckpoint(), so that arg is gone. Users who use
configs/common/Simulate.py should not notice.
The patch creates a specific mesh network where directories are at the corners.
The patch is a good example of how to create an arbitrary network, similar to
the old file-specified network, while leveraging scripts and loops when
possible.
Most of these frontend configurations share cache configuration code, so pull
it out so that changes to caches don't require changing multiple config files.
On the config end, if a shared L2 is created for the system, it is
parameterized to have n sharers as defined by option.num_cpus. In addition to
making the cache sharing aware so that discriminating tag policies can make use
of context_ids to make decisions, I added an occupancy AverageStat and an occ %
stat to each cache so that you could know which contexts are occupying how much
cache on average, both in terms of blocks and percentage. Note that since
devices have context_id -1, having an array of occ stats that correspond to
each context_id will break here, so in FS mode I add an extra bucket for device
blocks. This bucket is explicitly not added in SE mode in order to not only
avoid ugliness in the stats.txt file, but to avoid broken stats (some formulas
break when a bucket is 0).
Based on Steve's suggestion, the ugly if-elif statement and multiple protocol
module import calls are removed and replaced with exec statements using the
protocol string.
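A hedged sketch of the pattern (the create_system signature is illustrative):
    from m5.defines import buildEnv

    protocol = buildEnv['PROTOCOL']
    exec("import %s" % protocol)
    ruby_config = eval("%s.create_system(options, system, piobus, dma_devices)"
                       % protocol)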
Cleaned up the ruby profilers by moving the memory controller profiling code
out of the main profiler object and into a separate object similar to the
current CacheProfiler. Both the CacheProfiler and MemCntrlProfiler are
specific to a particular Ruby object, CacheMemory and MemoryControl
respectively. Therefore, these profilers should not be SimObjects created by
the python configuration system, but instead private objects. This
simplifies the creation of these profilers.
Reorganized ruby python configuration so that protocol and ruby memory system
configuration code can be shared by multiple front-end configuration files
(i.e. memory tester, full system, and hopefully the regression tester). This
code works for the memory tester, but FS mode has not been tested.
This patch includes a rather substantial change to the memory controller
profiler in order to work with the new configuration system. Most
notably, the mem_cntrl_profiler no longer uses a string map, but instead
a vector. Eventually this support should be removed from the main
profiler and go into a separate object. Each memory controller should have
a pointer to that new mem_cntrl profile object.
This patch includes the necessary changes to connect ruby objects using
the python configuration system. Mainly it consists of removing
unnecessary ruby object pointers and connecting the necessary object
pointers using the generated param objects. This patch includes the
slicc changes necessary to connect generated ruby objects together using
the python configuration system.
The necessary companion conversion of Ruby objects generated by SLICC
are converted to M5 SimObjects in the following patch, so this patch
alone does not compile.
Conversion of Garnet network models is also handled in a separate
patch; that code is temporarily disabled from compiling to allow
testing of interim code.
Connects M5 cpu and dma ports directly to ruby sequencers and dma
sequencers. Rubymem also includes a pio port so that pio requests
can be forwarded to a special pio bus connecting to device pio
ports.
Get rid of misc.py and just stick misc things in __init__.py
Move utility functions out of SCons files and into m5.util
Move utility type stuff from m5/__init__.py to m5/util/__init__.py
Remove buildEnv from m5 and allow access only from m5.defines
Rename AddToPath to addToPath while we're moving it to m5.util
Rename read_command to readCommand while we're moving it
Rename compare_versions to compareVersions while we're moving it.
--HG--
rename : src/python/m5/convert.py => src/python/m5/util/convert.py
rename : src/python/m5/smartdict.py => src/python/m5/util/smartdict.py
Fix a bug with 'numThreads=len(workloads)', which was counting the characters
of the command line rather than the number of threads as intended.
Update numThreads for inorder/o3 cases and default to 1 for all other cases.
- Add an option to allow threads to run to max_inst_any_thread, which in a lot
of cases is more useful/quicker than always having to figure out what tick to
run your simulation to.
This changeset also includes a lot of work from Derek Hower <drh5@cs.wisc.edu>
RubyMemory is now both a driver for Ruby and a port for M5. Changed
makeRequest/hitCallback interface. Brought packets (superficially)
into the sequencer. Modified tester infrastructure to be packet based.
M5 and Ruby can be used together through the example ruby_se.py
script. SPARC parallel applications work, and the timing *seems* right
from combined M5/Ruby debug traces. To run:
% build/ALPHA_SE/m5.debug configs/example/ruby_se.py -c
tests/test-progs/hello/bin/alpha/linux/hello -n 4 -t
This was double-scheduling itself (once in the constructor and once in the CPU
code). Also add support for stopping/starting progress events through the
repeatEvent flag, and for changing the interval of the progress event.
Previously there was one per bus, which caused some coherence problems
when more than one decided to respond. Now there is just one on
the main memory bus. The default bus responder on all other buses
is now the downstream cache's cpu_side port. Caches no longer need
to do address range filtering; instead, we just have a simple flag
to prevent snoops from propagating to the I/O bus.
- Add the option of redirecting stderr to a file. With the old
behaviour, stderr would follow stdout if stdout was to a file, but
stderr went to the host stderr if stdout went to the host stdout. The
new default maintains stdout and stderr going to the host. Now the
two can specify different files, but they will share a file descriptor
if the name of the files is the same.
- Add --output and --errout options to se.py to go with --input.
into vm1.(none):/home/stever/bk/newmem-cache2
configs/example/memtest.py:
Hand merge redundant changes.
--HG--
extra : convert_revision : a2e36be254bf052024f37bcb23b5209f367d37e1
More major reorg. Seems to work for atomic mode now,
timing mode still broken.
configs/example/memtest.py:
Revamp options.
src/cpu/memtest/memtest.cc:
No need for memory initialization.
No need to make atomic response... memory system should do that now.
src/cpu/memtest/memtest.hh:
MemTest really doesn't want to snoop.
src/mem/bridge.cc:
checkFunctional() cleanup.
src/mem/bus.cc:
src/mem/bus.hh:
src/mem/cache/base_cache.cc:
src/mem/cache/base_cache.hh:
src/mem/cache/cache.cc:
src/mem/cache/cache.hh:
src/mem/cache/cache_blk.hh:
src/mem/cache/cache_builder.cc:
src/mem/cache/cache_impl.hh:
src/mem/cache/coherence/coherence_protocol.cc:
src/mem/cache/coherence/coherence_protocol.hh:
src/mem/cache/coherence/simple_coherence.hh:
src/mem/cache/miss/SConscript:
src/mem/cache/miss/mshr.cc:
src/mem/cache/miss/mshr.hh:
src/mem/cache/miss/mshr_queue.cc:
src/mem/cache/miss/mshr_queue.hh:
src/mem/cache/prefetch/base_prefetcher.cc:
src/mem/cache/tags/fa_lru.cc:
src/mem/cache/tags/fa_lru.hh:
src/mem/cache/tags/iic.cc:
src/mem/cache/tags/iic.hh:
src/mem/cache/tags/lru.cc:
src/mem/cache/tags/lru.hh:
src/mem/cache/tags/split.cc:
src/mem/cache/tags/split.hh:
src/mem/cache/tags/split_lifo.cc:
src/mem/cache/tags/split_lifo.hh:
src/mem/cache/tags/split_lru.cc:
src/mem/cache/tags/split_lru.hh:
src/mem/packet.cc:
src/mem/packet.hh:
src/mem/physical.cc:
src/mem/physical.hh:
src/mem/tport.cc:
More major reorg. Seems to work for atomic mode now,
timing mode still broken.
--HG--
extra : convert_revision : 7e70dfc4a752393b911880ff028271433855ae87
Make clocks more reasonable.
Fix bug in sense of options.timing flag.
configs/example/memtest.py:
Fix bug in sense of options.timing flag.
configs/example/memtest.py:
Make clocks more reasonable.
--HG--
extra : convert_revision : 3715697988c56e92a4da129b42026d0623f5e85e
configs/example/memtest.py:
PhysicalMemory has vector of uniform ports instead of one special one.
Other updates to fix obsolete brokenness.
src/mem/physical.cc:
src/mem/physical.hh:
src/python/m5/objects/PhysicalMemory.py:
Have vector of uniform ports instead of one special one.
src/python/swig/pyobject.cc:
Add comment.
--HG--
extra : convert_revision : a4a764dcdcd9720bcd07c979d0ece311fc8cb4f1
Cache blocks that get DMAed ARE NOT marked invalid in the caches, so it's a
performance issue here.
src/mem/bridge.cc:
src/mem/bridge.hh:
hopefully the final hacky change to make the bus bridge work ok
--HG--
extra : convert_revision : 62cbc65c74d1a84199f0a376546ec19994c5899c
set the latency parameter in terms of a latency
add caches to tsunami-simple configs
configs/common/Caches.py:
tests/configs/memtest.py:
tests/configs/o3-timing-mp.py:
tests/configs/o3-timing.py:
tests/configs/simple-atomic-mp.py:
tests/configs/simple-timing-mp.py:
tests/configs/simple-timing.py:
set the latency parameter in terms of a latency
configs/common/FSConfig.py:
give the bridge a default latency too
src/mem/cache/cache_builder.cc:
src/python/m5/objects/BaseCache.py:
remove hit_latency and make latency do the right thing
tests/configs/tsunami-simple-atomic-dual.py:
tests/configs/tsunami-simple-atomic.py:
tests/configs/tsunami-simple-timing-dual.py:
tests/configs/tsunami-simple-timing.py:
add caches to tsunami-simple configs
--HG--
extra : convert_revision : 37bef7c652e97c8cdb91f471fba62978f89019f1
figure out the block size from the devices attached to the bus; otherwise use a default block size when no devices that care are attached
configs/common/FSConfig.py:
src/mem/bridge.cc:
src/mem/bridge.hh:
src/python/m5/objects/Bridge.py:
fix partial writes with a functional memory hack
src/mem/bus.cc:
src/mem/bus.hh:
src/python/m5/objects/Bus.py:
figure out the block size from the devices attached to the bus; otherwise use a default block size when no devices that care are attached
src/mem/packet.cc:
fix WriteInvalidateResp to not be a request that needs a response since it isn't
src/mem/port.hh:
by default return 0 for deviceBlockSize instead of panicking. This makes finding the block size the bus should use easier
--HG--
extra : convert_revision : 3fcfe95f9f392ef76f324ee8bd1d7f6de95c1a64
fix script to reflect new benchmark directory structure
configs/boot/spec-surge-client.rcS:
fix script to reflect new benchmark directory structure
--HG--
extra : convert_revision : 45f9d8aebabd1f3f8d1e826e07840e2365511a35
directly configured by python. Move stuff from root.(cc|hh) to
core.(cc|hh) since it really belongs there now.
In the process, simplify how ticks are used in the python code.
--HG--
extra : convert_revision : cf82ee1ea20f9343924f30bacc2a38d4edee8df3
configs/common/FSConfig.py:
add an attachOnChipIO to force people to think about where "onchip" i/o should be connected in their hierarchy
--HG--
extra : convert_revision : cf79a9a00760b7daf28063f407a04bd38b956843
configs/common/FSConfig.py:
Use binaries we've compiled instead of the ones that come with Legion
src/arch/alpha/interrupts.hh:
get rid of post(int int_type) and add a get_vec function that gets the interrupt vector for an interrupt number
src/arch/sparc/asi.cc:
Add AsiIsInterrupt() to AsiIsMmu()
src/arch/sparc/faults.cc:
src/arch/sparc/faults.hh:
Add InterruptVector type
src/arch/sparc/interrupts.hh:
rework interrupts. They are no longer cleared when created... An I/O or ASI read/write needs to happen before they are cleared
src/arch/sparc/isa_traits.hh:
Add the "interrupt" trap types to isa traits
src/arch/sparc/miscregfile.cc:
add names for all the misc registers and possibly post an interrupt when TL is changed.
src/arch/sparc/miscregfile.hh:
Add a helper function to post an interrupt when pil < some set softint
src/arch/sparc/regfile.cc:
src/arch/sparc/regfile.hh:
InterruptLevel shouldn't really live here, moved to interrupt.hh
src/arch/sparc/tlb.cc:
Add interrupt ASIs to TLB
src/arch/sparc/ua2005.cc:
Add checkSoftInt to check if a softint needs to be posted
Check that a tickCompare isn't scheduled before scheduling one
Post and clear interrupts on queue writes and what not
src/base/bitfield.hh:
Add a helper function to return the msb that is set
src/cpu/base.cc:
src/cpu/base.hh:
get rid of post_interrupt(type) since it's no longer needed. Add a way to see what interrupts are pending
src/cpu/intr_control.cc:
src/cpu/intr_control.hh:
src/dev/alpha/tsunami_cchip.cc:
src/python/m5/objects/IntrControl.py:
Make IntrControl have a system pointer rather than using a cpu pointer to get one
src/dev/sparc/SConscript:
add iob to SConscript
tests/quick/10.linux-boot/ref/alpha/linux/tsunami-simple-atomic-dual/config.ini:
tests/quick/10.linux-boot/ref/alpha/linux/tsunami-simple-atomic-dual/config.out:
tests/quick/10.linux-boot/ref/alpha/linux/tsunami-simple-atomic/config.ini:
tests/quick/10.linux-boot/ref/alpha/linux/tsunami-simple-atomic/config.out:
tests/quick/10.linux-boot/ref/alpha/linux/tsunami-simple-timing-dual/config.ini:
tests/quick/10.linux-boot/ref/alpha/linux/tsunami-simple-timing-dual/config.out:
tests/quick/10.linux-boot/ref/alpha/linux/tsunami-simple-timing/config.ini:
tests/quick/10.linux-boot/ref/alpha/linux/tsunami-simple-timing/config.out:
tests/quick/80.netperf-stream/ref/alpha/linux/twosys-tsunami-simple-atomic/config.ini:
tests/quick/80.netperf-stream/ref/alpha/linux/twosys-tsunami-simple-atomic/config.out:
update config.ini/out for intrcntrl not having a cpu pointer anymore
--HG--
extra : convert_revision : 38614f6b9ffc8f3c93949a94ff04b7d2987168dd
relevant code directly into the SimConsole object. Now,
you can easily turn off the listen port by just specifying
0 as the port.
--HG--
extra : convert_revision : c8937fa45b429d8a0728e6c720a599e38972aaf0