Not all objects need a platform pointer, and having one creates a dependence
on there being a platform object. This change removes the platform pointer
from the base device object and moves it into the subclasses that actually
need it.
In order for a system object to work in both SE mode and FS mode, it has to
either always require a platform object even in SE mode, or get rid of the
requirement altogether. Making SE mode carry around unnecessary/unused bits
of FS seems less than ideal, so I decided to go with the second option. The
platform pointer in the System class was used for exactly one purpose: a path
for the Alpha Linux system object to get to the real time clock and read its
frequency so that it could shortcut the loops_per_jiffy calculation. There
was also a copy-and-pasted implementation in MIPS, but since it was only there
because it was there in Alpha, I still count that as one use.
This change reverses the mechanism that communicates the RTC frequency so that
the Tsunami platform object pushes it up to the AlphaSystem object. This is
slightly less specific than it could be because really only the
AlphaLinuxSystem uses it. Because the intrFrequency function on the Platform
class was no longer necessary (and unimplemented on anything but Alpha) it was
eliminated.
After this change, a platform will need to have a system, but a system won't
have to have a platform.
These faults take varargs to their constructors which they print into a string
and pass to the M5DebugFault base class. They are basically faults wrapped
around panics, faults, warns, and warn-onces so that they happen only at
commit.
All of the classes will now be available in both modes, and only
GenericPageTableFault will continue to check the mode for conditional
compilation. It uses a process object to handle the fault in SE mode, and
for now those aren't available in FS mode.
By using an underscore, the "." is still available and can unambiguously be
used to refer to members of a structure if an operand is a structure, class,
etc. This change mostly just replaces the appropriate "."s with "_"s, but
there were also a few places where the ISA descriptions were handling the
extensions themselves and had their own regular expressions to update. The
regular expressions in the isa parser were updated as well. It also now
looks specifically for one of the defined type extensions after the
connecting "_", where before it would look for any sequence of characters
after a "." following an operand name and try to use it as the extension.
This helps disambiguate cases where a "_" may legitimately be part of an
operand name rather than separating the name from the type suffix.
Because leaving the "_" and suffix on the variable name still leaves a valid
C++ identifier and all extensions need to be consistent in a given context, I
considered leaving them on as a breadcrumb that would show what the intended
type was for that operand. Unfortunately the operands can be referred to in
code templates, the Mem operand in particular, and since the exact type of Mem
can be different for different uses of the same template, that broke things.
This patch makes O3 CPU work along with the Ruby memory model. Ruby
overwrites the senderState pointer with another pointer. The pointer
is restored only when Ruby is done with the packet. The LSQ makes use of
senderState just after sendTiming() returns, but the dynamic_cast returns
a NULL pointer since Ruby's senderState points to an object of a different
class. Storing the senderState pointer before calling sendTiming() avoids
the problem.
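A minimal sketch of the idea, using simplified stand-ins for the packet,
port, and LSQ sender-state types (names and members here are illustrative,
not the actual gem5 code):

    struct SenderState { virtual ~SenderState() {} };
    struct LSQSenderState : SenderState { bool noWB = false; };

    struct Packet {
        SenderState *senderState = nullptr;
    };

    struct Port {
        // Stand-in for Ruby: it may replace pkt->senderState with its own
        // pointer and only restore it when the access completes later.
        bool sendTiming(Packet *pkt) { pkt->senderState = nullptr; return true; }
    };

    void
    issueRequest(Port &port, Packet *pkt)
    {
        // Save our state *before* sendTiming(); doing the dynamic_cast
        // afterwards can return NULL because the pointer may then belong
        // to Ruby's sender-state class instead of ours.
        LSQSenderState *state = dynamic_cast<LSQSenderState *>(pkt->senderState);
        if (port.sendTiming(pkt) && state)
            state->noWB = true;
    }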
This constant will have the same value as FULL_SYSTEM but will not be usable
by the preprocessor. It can be substituted into places where FULL_SYSTEM is
used in a C++ context, and will make it easier to find (using grep) which
parts of the simulator still use FULL_SYSTEM with the preprocessor.
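A minimal sketch of what such a constant looks like (the name FullSystem is
illustrative; FULL_SYSTEM is normally supplied by the build system and is
stubbed out here so the snippet stands alone):

    // Stub for the build-system macro, only to keep this sketch self-contained.
    #ifndef FULL_SYSTEM
    #define FULL_SYSTEM 1
    #endif

    // Same value as the macro, but an ordinary C++ constant, so it is
    // invisible to the preprocessor.
    static const bool FullSystem = FULL_SYSTEM;

    // "#if FULL_SYSTEM" blocks can then be rewritten as plain C++:
    //     if (FullSystem) { ... }
    // leaving "grep FULL_SYSTEM" to find only the remaining preprocessor uses.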
There was a change a while ago that refactored some scons stuff which got rid
of cpu_models.py but also accidentally got rid of the ISA parser as a source
for its target files. That meant that changes which affected the parser
wouldn't cause a rebuild unless they also changed one of the description
files. This change fixes that.
Translating MSR addresses into MSR register indices took a lot of space in the
TLB source and made looking around in that file awkward. This change moves
the lookup into its own file to get it out of the way. It also changes it from
a switch statement to a hash map which should hopefully be a little more
efficient.
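A hedged sketch of the table-based lookup (the MSR addresses and register
names below are just examples; the real table is much longer and gem5's
types differ):

    #include <cstdint>
    #include <unordered_map>

    typedef uint64_t Addr;
    enum MiscRegIndex { MISCREG_TSC, MISCREG_APIC_BASE };

    bool
    msrAddrToIndex(MiscRegIndex &reg_num, Addr msr_addr)
    {
        // One hash lookup instead of a giant switch statement.
        static const std::unordered_map<Addr, MiscRegIndex> msrMap = {
            { 0x10, MISCREG_TSC },
            { 0x1b, MISCREG_APIC_BASE },
        };

        auto it = msrMap.find(msr_addr);
        if (it == msrMap.end())
            return false;
        reg_num = it->second;
        return true;
    }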
Initialize flags via the Event constructor instead of calling
setFlags() in the body of the derived class's constructor. I
forget exactly why, but this made life easier when implementing
multi-queue support.
Also rename Event::getFlags() to isFlagSet() to better match
common usage, and get rid of some unused Event methods.
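A minimal sketch of the two styles (class and flag names are illustrative,
not the actual gem5 event code):

    class Event
    {
      public:
        typedef unsigned Flags;
        enum { AutoDelete = 0x1 };

        Event(Flags f = 0) : flags(f) {}

        // Renamed from getFlags(): answers "is this flag set?".
        bool isFlagSet(Flags f) const { return flags & f; }

      protected:
        void setFlags(Flags f) { flags |= f; }

      private:
        Flags flags;
    };

    // Before: flags set in the body of the derived constructor.
    struct OldStyleEvent : public Event
    {
        OldStyleEvent() { setFlags(AutoDelete); }
    };

    // After: flags passed up through the base constructor.
    struct NewStyleEvent : public Event
    {
        NewStyleEvent() : Event(AutoDelete) {}
    };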
Use exitSimLoop() function instead of explicitly scheduling
on mainEventQueue (which won't work once we go to multiple
event queues). Also introduced a local params variable to
shorten a lot of expressions.
Print IpAddress params in dot notation for readability.
Properly compare IpAddress objects (by value and not object identity).
Also fix up derived param classes (IpNetmask and IpWithPort)
similarly.
This change is a significant reorganization of the MIPS fault code that gets
rid of duplication, fixes some bugs, doubtlessly introduces others, and adds
names for the exception code constants.
Pass in a bool to indicate if the fault is from a store instead of having two
different classes. The classes were also misleadingly named since loads are
also processed by the DTB but should return ITB faults since they aren't
stores. The TLB may be returning the wrong fault in this case, but I haven't
looked at it closely.
Get rid of Fault classes left over from when this file was copied from Alpha,
and rename ArithmeticOverflowFault to be IntegerOverflowFault and get rid of
the old IntegerOverflowFault stub. The Integer version is what's actually in
the manual, but the Arithmetic version had the implementation.
It was technically possible but clumsy to determine what endianness a guest
was configured with using the state in byteswap.hh. This change makes that
information available more directly.
Also get rid of unused (and mildly redundant) ByteOrderDiffers constant.
The decoder now checks the value of FULL_SYSTEM in a switch statement to
decide whether to return a real syscall instruction or one that triggers
syscall emulation (or a panic in FS mode). The switch statement should devolve
into an if, and also should be optimized out since it's based on constant
input.
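A hedged sketch of the pattern (the instruction class names and types are
stand-ins, and FULL_SYSTEM is stubbed out so the snippet stands alone):

    #include <cstdint>
    #include <memory>

    #ifndef FULL_SYSTEM
    #define FULL_SYSTEM 0   // normally supplied by the build system
    #endif

    typedef uint64_t ExtMachInst;
    struct StaticInst { virtual ~StaticInst() {} };
    typedef std::shared_ptr<StaticInst> StaticInstPtr;

    struct EmulatedSyscall : StaticInst { EmulatedSyscall(ExtMachInst) {} };
    struct FsSyscall : StaticInst { FsSyscall(ExtMachInst) {} };

    StaticInstPtr
    decodeSyscall(ExtMachInst mach_inst)
    {
        // FULL_SYSTEM is a compile-time constant, so this switch should be
        // reduced to a single, statically selected return.
        switch (FULL_SYSTEM) {
          case 0:
            return std::make_shared<EmulatedSyscall>(mach_inst);
          default:
            return std::make_shared<FsSyscall>(mach_inst);
        }
    }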
In FS mode the syscall function will panic, but the interface will be
consistent and code which calls syscall can be compiled in. This will allow,
for instance, instructions that use syscall to be built unconditionally but
then not returned by the decoder.
Some DPRINTFs were printing uninitialized values because the DPRINTFs were
always being printed, even when the features they described weren't being
used. This change moves the DPRINTFs into the appropriate if blocks and
initializes the state variables correctly.
There was also a case, now fixed, where the offset into the packet could be
calculated incorrectly during a DMA.
Check that we're not currently writing back an address the prefetcher is trying
to prefetch before issuing it. We previously checked the mshrQueue and the cache
itself, but forgot to check the writeBuffer. This fixes a memory corruption
issue with an L2 prefetcher.
Only create a memory ordering violation when the value could have changed
between two subsequent loads, instead of just when loads go out-of-order
to the same address. While not very common in the case of Alpha, with
an architecture with a hardware table walker this can happen reasonably
frequently because a translation will miss and start a table walk, and
before the CPU re-schedules the faulting instruction another one will
pass it to the same address (or cache block, depending on the dependency
checking).
This patch has been tested with a couple of self-checking hand crafted
programs to stress ordering between two cores.
The performance improvement on SPEC benchmarks can be substantial (2-10%).
This lets a MIPS cross-gdb connect to gem5 (MIPS_SE) and do some remote
debugging.
Testing:
Build gem5 for MIPS_SE and make gem5 wait at beginning:
modify "rgdb_wait = -1" to "rgdb_wait = 0" in src/sim/system.cc;
scons build/MIPS_SE/gem5.opt CPU_MODELS=O3CPU
----
Build GDB-7.3 mips-cross:
./configure --target=mips-linux-gnu --prefix=xxx/gdb-7.3-install/
make
make install
----
Run:
./build/MIPS_SE/gem5.opt configs/example/se.py --detailed --caches
./mips-linux-gnu-gdb xxx/gem5/tests/test-progs/hello/bin/mips/linux/hello
(gdb) target remote :7000
(gdb) info registers
(gdb) disassemble
(gdb) si
(gdb) break main
(gdb) c
(gdb) quit
Testing done.
Having two StaticInst classes, one nominally ISA dependent and the other ISA
independent, has not been historically useful and makes the StaticInst class
more complicated than it needs to be. This change merges StaticInstBase into
StaticInst.
This change pulls the instruction decoding machinery (including caches) out of
the StaticInst class and puts it into its own class. This has a few intrinsic
benefits. First, the StaticInst code, which has gotten to be quite large, gets
simpler. Second, the code that handles decode caching is now separated out
into its own component and can be looked at in isolation, making it easier to
understand. I took the opportunity to restructure the code a bit which will
hopefully also help.
Beyond that, this change also lays some ground work for each ISA to have its
own, potentially stateful decode object. We'd be able to include less
contextualizing information in the ExtMachInst objects since that context
would be applied at the decoder. Also, the decoder could "know" ahead of time
that all the instructions it's going to see are going to be, for instance, 64
bit mode, and it will have one less thing to check when it decodes them.
Because the decode caching mechanism has been separated out, it's now possible
to have multiple caches which correspond to different types of decoding
context. Having one cache for each element of the cross product of different
configurations may become prohibitive, so it may be desirable to clear out the
cache when relatively static state changes and not to have one for each
setting.
Because the decode function is no longer universally accessible as a static
member of the StaticInst class, a new function was added to the ThreadContexts
that returns the applicable decode object.
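A minimal sketch of the separation, with the gem5 types reduced to
stand-ins (ExtMachInst, StaticInst, and the decoder interface are all
simplified here):

    #include <cstdint>
    #include <memory>
    #include <unordered_map>

    typedef uint64_t ExtMachInst;

    struct StaticInst { /* execute(), flags, etc. omitted */ };
    typedef std::shared_ptr<StaticInst> StaticInstPtr;

    class Decoder
    {
      public:
        StaticInstPtr
        decode(ExtMachInst mach_inst)
        {
            // The cache lives in the decoder object rather than in static
            // members of StaticInst, so each ISA (or decoding context) can
            // keep its own and clear it independently.
            auto it = decodeCache.find(mach_inst);
            if (it != decodeCache.end())
                return it->second;

            StaticInstPtr si = decodeInst(mach_inst);
            decodeCache[mach_inst] = si;
            return si;
        }

      protected:
        // Stand-in for the ISA-generated decode function.
        StaticInstPtr
        decodeInst(ExtMachInst)
        {
            return std::make_shared<StaticInst>();
        }

        std::unordered_map<ExtMachInst, StaticInstPtr> decodeCache;
    };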
Do some minor cleanup of some recently added comments, a warning, and change
other instances of stack extension to be like what's now being done for x86.
The way flag bits were being set for microops in x86 ended up implicitly
calling the bitset constructor which was truncating flags beyond the width of
an unsigned long. This change sets the bits in chunks which are always small
enough to avoid being truncated. On 64 bit machines this should reduce to be
the same as before, and on 32 bit machines it should work properly and not be
unreasonably inefficient.
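A minimal sketch of the chunked approach, assuming the flag container is a
std::bitset wider than 64 bits (the width and names are illustrative; the
truncation originally came from the pre-C++11 bitset(unsigned long)
constructor):

    #include <bitset>
    #include <cstdint>

    const int NumFlags = 96;                 // assumed to be wider than a long
    typedef std::bitset<NumFlags> FlagsType;

    FlagsType
    buildFlags(uint64_t mask)
    {
        // Set the bits in 32-bit chunks; each chunk fits in an unsigned long
        // even on 32-bit hosts, so nothing is silently truncated.
        FlagsType flags(mask & 0xffffffffULL);
        flags |= FlagsType((mask >> 32) & 0xffffffffULL) << 32;
        return flags;
    }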
When an instruction is translated in the x86 TLB, a variable called
delayedResponse is passed back and forth which tracks whether a translation
could be completed immediately, or if there's going to be a callback that will
finish things up. If a read was to the internal memory space (memory mapped
registers used to implement things like MSRs), the function hadn't yet gotten
to where delayedResponse was set to false, its default. That meant that the
value was never set, and the TLB could start waiting for a callback that would
never come. This change simply moves the assignment above the point where
control can divert to translateInt().
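A minimal sketch of the fix, with the TLB reduced to a stand-alone function
(the names follow the description above rather than the exact gem5 code):

    #include <cstdint>

    typedef uint64_t Addr;
    enum Fault { NoFault, SomeFault };

    // Stand-in for translateInt(): handles memory mapped registers and never
    // touches delayedResponse.
    static Fault translateInt(Addr) { return NoFault; }

    Fault
    translate(Addr vaddr, bool is_internal_access, bool &delayedResponse)
    {
        // Set the default before any path that can return early; otherwise
        // the caller may read an uninitialized value and wait for a callback
        // that never comes.
        delayedResponse = false;

        if (is_internal_access)
            return translateInt(vaddr);

        // ... normal translation, which may set delayedResponse = true ...
        return NoFault;
    }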
Nothing big here, but when you have an address that is not in the page table
and is requested to be allocated, if it falls outside of the maximum stack
range all you get is a page fault and you don't know why. Add a little warn()
to explain it a bit. Also add some comments and alter the logic a little so
that the return value of checkAndAllocNextPage() isn't totally ignored.
Even though the code is safe, the compiler flags a warning here, which is
treated as an error for fast/opt builds. I know it's redundant, but it has no
side effects and fixes the compile.
In the current implementation of Functional Accesses, it's very hard to
implement broadcast or snooping protocols where the memory has no idea if it
has exclusive access to a cache block or not. Without this knowledge, making
sure the RW vs. RO permissions are right is next to impossible. So we add a
new state called Backing_Store to convey that this is the backing storage for
a block, so that it can be written if it is the only possibly RW block in the
system, or written even if there is another RW block in the system, without
causing problems.
Also, a small change to actually set the m_name field for each Controller so
that debugging can be easier. Now you can access a controller's name just by
controller->getName().
There are a set of locations in the Linux kernel that are managed via
cache maintenance instructions until all processors enable their MMUs & TLBs.
Writes to these locations are manually flushed from the cache to main
memory when they occur so that cores operating without their MMU enabled
and only issuing uncached accesses can receive the correct data. Unfortunately,
gem5 doesn't support any kind of software-directed maintenance of the cache.
Until such time as that support exists, this patch marks the specific cache
blocks that need to be coherent as non-cacheable until all CPUs enable their
MMUs, and thus allows gem5 to boot MP systems with caches enabled (a
requirement for booting an O3 CPU and thus an O3 CPU regression).
The driver can read the IDE config register as a 32 bit register since
some adapters use bit 18 as a disable channel bit. If the size isn't
set in a PRD it should be 64K according to the SPEC (and driver) not
128K.
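A minimal sketch of the byte-count rule (field and function names are
illustrative):

    #include <cstdint>

    // In a PRD, a byte count of zero means the maximum transfer of 64 KiB,
    // not 128 KiB.
    inline uint32_t
    prdByteCount(uint16_t byte_count_field)
    {
        return byte_count_field ? byte_count_field : 64 * 1024;
    }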
SEV instructions were originally implemented to cause asynchronous squashes
via the generateTCSquash() function in the O3 pipeline when updating the
SEV_MAILBOX miscReg. This caused race conditions between CPUs in an MP system
that would lead to a pipeline either going inactive indefinitely or not being
able to commit squashed instructions. Fixed SEV instructions to behave like
interrupts and cause synchronous squashes inside the pipeline, eliminating
the race conditions. Also fixed up the semantics of the WFE instruction to
behave as documented in the ARMv7 ISA description to not sleep if SEV_MAILBOX=1
or unmasked interrupts are pending.
Two issues are fixed in this patch:
1. The load and store PCs passed to the predictor were passed in reverse order.
2. The flag indicating that a barrier is in flight was never cleared when
the barrier was squashed instead of committed. This made all load insts
dependent on a non-existent in-flight barrier.
Change the way instructions are squashed on memory ordering violations
to squash the violator and younger instructions, not all instructions
that are younger than the instruction they violated (no reason to throw
away valid work).
Cortex-A9 processors can have a local timer and watchdog counter. These
are enabled by default in Linux, and up to this point we've had to disable
them since a model wasn't available. This change allows a default
MP ARM Linux configuration to boot.
Control register operands are set up so that writing to them is serialize
after, serialize before, and non-speculative. These are probably overboard,
but they should usually be safe. Unfortunately there are times when even these
aren't enough. If an instruction modifies state that affects fetch, later
serialized instructions which come after it might have already gone through
fetch and decode by the time it commits. These instructions may have been
translated incorrectly or interpreted incorrectly and need to be destroyed.
This change modifies instructions which will or may have this behavior so that
they use the IsSquashAfter flag when necessary.
It's possible (though until now very unlikely) for fetchAddr to get out of
sync with the actual PC of the current instruction. This change forcefully
resets fetchAddr at the end of every instruction.
Until now, the only reason a macroop would be left was because it ended at a
microop marked as the last microop. In O3 with branch prediction, it's
possible for the branch predictor to have entries which originally came from
different instructions which happened to have the same RIP. This could
theoretically happen in many ways, but it was encountered specifically when
different programs in different address spaces ran one after the other in
X86_FS.
What would happen in that case was that the macroop would continue to be
looped over and microops fetched from it until it reached the last microop
even though the macropc had moved out from under it. If things lined up
properly, this could mean that the end bytes of an instruction actually fell
into the instruction sized block of memory after the one in the predecoder.
The fetch loop implicitly assumes that the last instruction sized chunk of
memory processed was the last one needed for the instruction it just finished
executing. It would then tell the predecoder to move to an offset within the
bytes it was given that is larger than those bytes, and that would trip an
assert in the x86 predecoder.
This change fixes this problem by making fetch stop processing the current
macroop if the address it should be fetching from changed when the PC is
updated. That happens when the last microop was reached because the instruction
handled it properly, and it also catches the case where the branch predictor
makes fetch do a macro level branch when it shouldn't.
The check of isLastMicroop is retained because otherwise, a macroop that
branches back to itself would act like a single, long macroop instead of
multiple instances of the same microop. There may be situations (which may
turn out to be purely hypothetical) where that matters.
This also fixes a relatively minor issue where the curMacroop variable would
be set to NULL immediately after seeing that a microop was the last one before
curMacroop was used to build the dyninst. The traceData structure would have a
NULL pointer to the macroop for that microop.
Before this change, the commit stage would wait until the ROB and store queue
were empty before recognizing an interrupt. The fetch stage would stop
generating instructions at an appropriate point, so commit would then wait
until a valid time to interrupt the instruction stream. Instructions might be
in flight after fetch but not yet in the ROB or store queue (in rename, for
instance), so this change makes commit wait until all in flight instructions
are finished.
This patch replaces RUBY with PROTOCOL in all the SConscript files as
the environment variable that decides whether or not certain components
of the simulator are compiled.
This constructor assumes that the ExtMachInst can be decoded directly into a
StaticInst that's useful to execute. With the advent of microcoded
instructions that's no longer true.
This patch drops RUBY as a compile time option. Instead the PROTOCOL option
is used to figure out whether or not to build Ruby. If the specified protocol
is 'None', then Ruby is not compiled.
When fetching from the microcode ROM, if the PC is set so that it isn't in the
cache block that's been fetched the CPU will get stuck. The fetch stage
notices that it's in the ROM so it doesn't try to fetch from the current PC.
It then later notices that it's outside of the current cache block so it skips
generating instructions expecting to continue once the right bytes have been
fetched. This change lets the fetch stage attempt to generate instructions,
and only checks if the bytes it's going to use are valid if it's really going
to use them.
Currently, functions associated with a controller go into separate files.
This patch puts all the functions in the controller's .cc file. This should
hopefully reduce compilation time somewhat.
Prefetch requests issued from the L2 or below wouldn't check if valid data is
present higher in the system. If a prefetch into the L2 occurred at the same
time as a writeback from a higher-level cache, the dirty data could be replaced
by unmodified data from memory.
Implemented a pipeline activity viewer as a python script (util/o3-pipeview.py)
and modified O3 code base to support an extra trace flag (O3PipeView) for
generating traces to be used as inputs by the tool.
SWP and SWPB now throw an undefined instruction exception if
SCTLR.SW == 0. This also required the MIDR to be changed
slightly so programs can correctly determine that gem5 supports
the ARM v7 behavior of SWP/SWPB (in ARM v6, SWP/SWPB were
deprecated, but not disabled at CPU startup).
Adds MISCREG_ID_MMFR2 and removes break on access to MISCREG_CLIDR. Both
registers now return values that are consistent with current ARM
implementations.
This patch implements the copyRegs() function for the x86 architecture.
The patch assumes that no side effects other than TLB invalidation need
to be considered while copying the registers. This may not hold true in the
future.
Branch predictor could not predict a branch in a nested loop because:
1. The global history was not updated after a mispredict squash.
2. The global history was updated in the fetch stage. The choice predictors
that were updated used the changed global history. This is incorrect, as
it incorporates the state of global history after the branch is
encountered. Fixed the update to the choice predictor to use the global
history state from before the branch happened.
3. The global predictor table was also updated using the global history state
before the branch happened as above.
Additionally, parameters to initialize ctr and history size were reversed.
Fixed up the patch from Yasuko Watanabe that enabled pipelining of fetch
accesses to the icache to work with recent changes to the main repository.
Also added the ability for the fetch stage to delay issuing the fault-carrying
nop when a pipeline fetch causes a fault and no fetch bandwidth is available
until the next cycle.
change hwrei back to being a non-control instruction so O3-FS mode will work
add squash in inorder that will catch a hwrei (or any other generic instruction)
that isn't a control inst but changes the PC. Additional testing still needs to be done
for inorder-FS mode but this change will free O3 development back up in the interim
This makes it possible to use the grammar multiple times and use the multiple
instances concurrently. This makes implementing an include statement as part
of a grammar possible.
This change simplifies the code surrounding operand type handling and makes it
depend only on the ctype that goes with each operand type. Future changes will
allow defining operand types by their ctypes directly, convert the ISAs over
to that style of definition, and then remove support for the old style. These
changes are to make it easier to use non-builtin types like classes or
structures as the type for operands.
Addition of functional access support to Ruby necessitated some changes to
the way coherence protocols are written. I had forgotten to update the
Network_test protocol. This patch makes those updates.
readBytes and writeBytes had the word "bytes" in their names because they
accessed blobs of bytes. This distinguished them from the read and write
functions which handled higher level data types. Because those functions don't
exist any more, this change renames readBytes and writeBytes to more general
names, readMem and writeMem, which reflect the fact that they are how you read
and write memory. This also makes their names more consistent with the
register reading/writing functions, although those are still read and set for
some reason.
This patch provides functional access support in Ruby. Currently only
the M5Port of RubyPort supports functional accesses. The support for
functional accesses through the PioPort will be added as a separate patch.
The DTB expects the correct PC in the ThreadContext
but what if the memory accesses are speculative? Shouldn't
we send along the requestor's PC to the translate functions?
if a faulting instruction reaches an execution unit,
then ignore it and pass it through the pipeline.
Once we recognize the fault in the graduation unit,
don't allow a second fault to creep in on the same cycle.
Before graduating an instruction, explicitly check for a fault
by making the fault check its own separate command
that can be put on an instruction schedule.
this always changes the PC and is basically an impromptu branch instruction. why
not speculate on this instead of always being forced to mispredict/squash after the
hwrei gets resolved?
The InOrder model needs this marked as "isControl" so it knows to update the PC
after the ALU executes it. If this isn't marked as control, then it's going to
force the model to check the PC of every instruction at commit (what O3 does?),
and that would be a wasteful check for a very high percentage of instructions.
remove events in the resource pool that can be called from the CPU event, since the CPU
event is scheduled at the same time as the resource pool event.
----
Also, match the resPool event function names to the cpu event function names
----
only update BTB on a taken branch and update branch predictor w/pcstate from instruction
----
only pay attention to branch predictor updates if the inst. is in fact a branch
formerly, this was implicit when you accessed the execution unit
or the use-def unit but it's better that this just be something
that a user can specify.
Architectures like SPARC need to read the window pointer
in order to figure out its register dependence. However,
this may not get updated until after an instruction gets
executed, so now we lazily detect the register dependence
in the EXE stage (execution unit or use_def). This
makes sure we get the mapping after the most current change.
Instead of clearing the entire TLB on initialization and flush, the code was
clearing only one element. This patch corrects the memsets in the init and
flush routines.
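A minimal sketch of the bug and the fix, assuming the TLB is a simple array
of entries (names are illustrative):

    #include <cstring>

    struct TlbEntry { /* fields omitted */ };

    void
    flushAll(TlbEntry *table, int size)
    {
        // Buggy version cleared only the first entry:
        //     std::memset(table, 0, sizeof(TlbEntry));
        // Corrected version clears the whole table:
        std::memset(table, 0, size * sizeof(TlbEntry));
    }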
The code for Set class was written under the assumption that
std::numeric_limits<long>::digits returns the number of bits used for
data type long, which was presumed to be either 32 or 64. But the return value
is actually one less, that is, either 31 or 63. The value is now
being incremented by 1 so as to correctly set it.
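A small illustration of the off-by-one: digits excludes the sign bit for
signed types, so it has to be incremented to get the full bit width.

    #include <limits>

    int
    bitsPerLong()
    {
        // For a 64-bit long, digits is 63; adding 1 gives the 64-bit width.
        return std::numeric_limits<long>::digits + 1;
    }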
If there's a problem when reading the section names from a supposed ELF file,
this change makes gem5 print an error message as returned by libelf and die.
Previously these sorts of errors would make gem5 segfault when it tried to
access the section name through a NULL pointer.
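A hedged sketch of the check, assuming libelf's elf_strptr() and
elf_errmsg() interfaces (the surrounding loader code is heavily simplified):

    #include <cstdio>
    #include <cstdlib>
    #include <gelf.h>
    #include <libelf.h>

    const char *
    sectionName(Elf *elf, size_t shstrndx, const GElf_Shdr &shdr)
    {
        const char *name = elf_strptr(elf, shstrndx, shdr.sh_name);
        if (!name) {
            // Report libelf's own error message instead of dereferencing NULL.
            std::fprintf(stderr, "error reading section name: %s\n",
                         elf_errmsg(-1));
            std::exit(1);
        }
        return name;
    }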
this flag is only used for early branch resolution in the O3 model (of pc-relative branches)
but this isn't cleanly working even when the branch target code is added for sparc. For now,
we'll ignore this optimization and add a todo in the SPARC ISA for future developers
Add a few constants and functions that the InOrder model wants for SPARC.
* * *
sparc: add eaComp function
InOrder separates the address generation from the actual access so give
Sparc that functionality
* * *
sparc: add control flags for branches
branch predictors and other cpu model functions need to know specific information
about branches, so add the necessary flags here
The access permissions for the directory entries were not being set correctly
because pointers are not used for handling directory entries. Get and set
functions for access permissions have been added to the Controller state
machine. The changePermission() function provided by the AbstractEntry and
AbstractCacheEntry classes has been exposed to SLICC code once again. The
set_permission() functionality has been removed.
NOTE: Each protocol will have to define these get and set functions in order
to compile successfully.
The regular expressions matching filenames in the ##include directives and the
internally generated ##newfile directives were only looking for filenames
composed of alphanumeric characters, periods, and dashes. In Unix/Linux, the
rules for what characters can be in a filename are much looser than that. This
change replaces those expressions with ones that look for anything other than
a quote character. Technically quote characters are allowed as well so we
should allow escaping them somehow, but the additional complexity probably
isn't worth it.
Currently, the machine name is prepended to the names of the functions
defined within the sm files. This is not necessary, and it also
means that these functions cannot be used outside the sm files.
This patch does away with the prefixes. Note that the generated
C++ files in which the code for these functions is present are
still named such that the machine name is the prefix.
The default generated binary is now gem5.<type> instead of m5.<type>.
The latter does still work but gem5.<type> will be generated first and
then m5.<type> will be hard linked to it.
The end of the COPYING file was generated with:
% python ./util/find_copyrights.py configs src system tests util
Update -C command line option to spit out COPYING file
We were getting a spurious warning in the regressions that turned
out to be due to having the wrong value for TGT_MAP_ANONYMOUS for
Power Linux, but in the process of tracking it down I ended up
doing some cleanup of the mmap handling in general.
A significant contributor to the need for adoptOrphanParams()
is the practice of appending to SimObjectVectors which have
already been assigned as children. This practice sidesteps the
assignment operation for those appended SimObjects, which is
where parent/child relationships are typically established.
This patch reworks the config scripts that use append() on
SimObjectVectors, which all happen to be in the x86 system
configuration. At some point in the future, I hope to make
SimObjectVectors immutable (by deriving from tuple rather than
list), at which time this patch will be necessary for correct
operation. For now, it just avoids some of the warning
messages that get printed in adoptOrphanParams().
Re-enabling implicit parenting (see previous patch) causes current
Ruby config scripts to create some strange hierarchies and generate
several warnings. This patch makes three general changes to address
these issues.
1. The order of object creation in the ruby config files makes the L1
caches children of the sequencer rather than the controller; these
config files are rewritten to assign the L1 caches to the
controller first.
2. The assignment of the sequencer list to system.ruby.cpu_ruby_ports
causes the sequencers to be children of system.ruby, generating
warnings because they are already parented to their respective
controllers. Changing this attribute to _cpu_ruby_ports fixes this
because the leading underscore means this is now treated as a plain
Python attribute rather than a child assignment. As a result, the
configuration hierarchy changes such that, e.g.,
system.ruby.cpu_ruby_ports0 becomes system.l1_cntrl0.sequencer.
3. In the topology classes, the routers become children of some random
internal link node rather than direct children of the topology.
The topology classes are rewritten to assign the routers to the
topology object first.
Last summer's big rewrite of the initialization code (in
particular cset 6efc3672733b) got rid of the implicit parenting
that used to occur when an unparented SimObject was assigned as
a parameter value to another SimObject. The idea was that the
new adoptOrphanParams() step would catch these anyway so it was
unnecessary.
Unfortunately it turns out that adoptOrphanParams() has some
inherent instability in that the parent that does the adoption
depends on the config tree traversal order. Even making this
order deterministic (e.g., by traversing children in
alphabetical order) can introduce unwanted and unexpected
hierarchy changes between similar configs (e.g., when adding a
switch_cpu in place of a cpu), causing problems when trying to
restore checkpoints across similar configs. The hierarchy
created by implicit parenting is more stable and more
controllable, so this patch turns that behavior back on.
This patch also cleans up some long-standing holes regarding
parenting of SimObjects that are created in class definitions
(either in the body of the class, or as default parameters).
To avoid breaking some existing config files, this necessitated
changing the error on reparenting children to a warning. This
change fixes another bug where attempting to print the prior
error message would fail on reparenting SimObjectVectors
because they lack a _parent attribute. Some further issues
with SimObjectVectors were cleaned up by replacing the
get_parent() call (which could cause errors with some
SimObjectVectors where there was no single parent to return)
with has_parent() (since all the uses of get_parent() were just
boolean tests anyway).
Finally, since the adoptOrphanParams() step turned out to be so
problematic, we now issue a warning when it actually has to do
an adoption. Future cleanup of config files will get rid of
current warnings.
Calculation of offset to copy from storeQueue[idx].data structure for load to
store forwarding fixed to be difference in bytes between store and load virtual
addresses. The previous method would induce a bug where a load would index
into the buffer at the wrong location.
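A minimal sketch of the corrected offset calculation (names are
illustrative; the load is assumed to be fully contained within the store's
data):

    #include <cstdint>
    #include <cstring>

    void
    forwardStoreData(const uint8_t *store_data, uint64_t store_vaddr,
                     uint8_t *load_data, uint64_t load_vaddr, int load_size)
    {
        // The shift is the difference in bytes between the two virtual
        // addresses, not an element index into the store queue entry.
        int shift = load_vaddr - store_vaddr;
        std::memcpy(load_data, store_data + shift, load_size);
    }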
If a split load fails on a blocked cache, wbOutstanding can be decremented
twice if the first part of the split load succeeds and the second part fails.
Condition the decrementing on not having completed the first part of the load.
This patch fixes two problems with the O3 cpu model. The first is an issue
with an instruction fetch causing a fault on the next address while the
current macro-op is being issued. This happens when the micro-ops exceed
the fetch bandwidth and then, on the next cycle, the fetch stage attempts
to issue a request to the next line while it still has micro-ops to issue.
If the next line faults, the fault is attached to a micro-op in the currently
executing macro-op rather than to a "nop" from the next instruction block.
This leads to an instruction incorrectly faulting on fetch when
it had no reason to fault.
A similar problem occurs with interrupts. When an interrupt occurs the
fetch stage nominally stops issuing instructions immediately. This is incorrect
in the case of a macro-op, as the current location might not be interruptible.
The virtual channels within "response" vnets are made buffers_per_data_vc
deep (default=4), while virtual channels within other vnets are made
buffers_per_ctrl_vc deep (default = 1). This is for accurate power estimates.
Identifying response vnets versus other vnets will allow garnet to
determine which vnets will carry data packets, and which will carry
ctrl packets, and use appropriate buffer sizes (since data packets are larger
than ctrl packets). This in turn allows the orion power model to accurately
estimate buffer power.