/*
 * Copyright (c) 2006 The Regents of The University of Michigan
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions are
 * met: redistributions of source code must retain the above copyright
 * notice, this list of conditions and the following disclaimer;
 * redistributions in binary form must reproduce the above copyright
 * notice, this list of conditions and the following disclaimer in the
 * documentation and/or other materials provided with the distribution;
 * neither the name of the copyright holders nor the names of its
 * contributors may be used to endorse or promote products derived from
 * this software without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
 * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
 * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
 * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
 * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
 * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
 * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
 * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
 * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
 * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 *
 * Authors: Kevin Lim
 */

#ifndef __CPU_OZONE_LW_LSQ_HH__
#define __CPU_OZONE_LW_LSQ_HH__

#include <algorithm>
#include <list>
#include <map>
#include <queue>

#include "arch/types.hh"
#include "base/hashmap.hh"
#include "config/the_isa.hh"
#include "cpu/inst_seq.hh"
#include "mem/packet.hh"
#include "mem/port.hh"
//#include "mem/page_table.hh"
#include "sim/debug.hh"
#include "sim/fault_fwd.hh"
#include "sim/sim_object.hh"

class MemObject;

/**
 * Class that implements the actual LQ and SQ for each specific thread.
 * Both are circular queues; load entries are freed upon committing, while
 * store entries are freed once they writeback. The LSQUnit tracks if there
 * are memory ordering violations, and also detects partial load to store
 * forwarding cases (a store only has part of a load's data) that require
 * the load to wait until the store writes back. In the former case it
 * holds onto the instruction until the dependence unit looks at it, and
 * in the latter it stalls the LSQ until the store writes back. At that
 * point the load is replayed.
 */
template <class Impl>
class OzoneLWLSQ {
  public:
    typedef typename Impl::Params Params;
    typedef typename Impl::OzoneCPU OzoneCPU;
    typedef typename Impl::BackEnd BackEnd;
    typedef typename Impl::DynInstPtr DynInstPtr;
    typedef typename Impl::IssueStruct IssueStruct;

    typedef TheISA::IntReg IntReg;

    typedef typename std::map<InstSeqNum, DynInstPtr>::iterator LdMapIt;

  public:
    /** Constructs an LSQ unit. init() must be called prior to use. */
    OzoneLWLSQ();

    /** Initializes the LSQ unit with the specified number of entries. */
    void init(Params *params, unsigned maxLQEntries,
              unsigned maxSQEntries, unsigned id);

    /** Returns the name of the LSQ unit. */
    std::string name() const;

    void regStats();

    /** Sets the CPU pointer. */
    void setCPU(OzoneCPU *cpu_ptr);

    /** Sets the back-end stage pointer. */
    void setBE(BackEnd *be_ptr)
    { be = be_ptr; }

    Port *getDcachePort() { return &dcachePort; }

    /** Ticks the LSQ unit, which in this case only resets the number of
     * used cache ports.
     * @todo: Move the number of used ports up to the LSQ level so it can
     * be shared by all LSQ units.
     */
    void tick() { usedPorts = 0; }

    /** Inserts an instruction. */
    void insert(DynInstPtr &inst);
    /** Inserts a load instruction. */
    void insertLoad(DynInstPtr &load_inst);
    /** Inserts a store instruction. */
    void insertStore(DynInstPtr &store_inst);

    /** Executes a load instruction. */
    Fault executeLoad(DynInstPtr &inst);

    /** Executes a store instruction. */
    Fault executeStore(DynInstPtr &inst);

    /** Commits the head load. */
    void commitLoad();
    /** Commits loads older than a specific sequence number. */
    void commitLoads(InstSeqNum &youngest_inst);

    /** Commits stores older than a specific sequence number. */
    void commitStores(InstSeqNum &youngest_inst);

    /** Writes back stores. */
    void writebackStores();

    /** Completes the data access that has been returned from the
     * memory system. */
    void completeDataAccess(PacketPtr pkt);

    // @todo: Include stats in the LSQ unit.
    //void regStats();

    /** Clears all the entries in the LQ. */
    void clearLQ();

    /** Clears all the entries in the SQ. */
    void clearSQ();

    /** Resizes the LQ to a given size. */
    void resizeLQ(unsigned size);

    /** Resizes the SQ to a given size. */
    void resizeSQ(unsigned size);

    /** Squashes all instructions younger than a specific sequence number. */
    void squash(const InstSeqNum &squashed_num);

    /** Returns if there is a memory ordering violation. Value is reset upon
     * call to getMemDepViolator().
     */
    bool violation() { return memDepViolator; }

    /** Returns the memory ordering violator. */
    DynInstPtr getMemDepViolator();

    /** Returns if a load became blocked due to the memory system. The flag
     * is cleared by clearLoadBlocked().
     */
    bool loadBlocked()
    { return isLoadBlocked; }

    void clearLoadBlocked()
    { isLoadBlocked = false; }

    bool isLoadBlockedHandled()
    { return loadBlockedHandled; }

    void setLoadBlockedHandled()
    { loadBlockedHandled = true; }

    /** Returns the number of free entries (min of free LQ and SQ entries). */
    unsigned numFreeEntries();

    /** Returns the number of loads ready to execute. */
    int numLoadsReady();

    /** Returns the number of loads in the LQ. */
    int numLoads() { return loads; }

    /** Returns the number of stores in the SQ. */
    int numStores() { return stores + storesInFlight; }

    /** Returns if either the LQ or SQ is full. */
    bool isFull() { return lqFull() || sqFull(); }

    /** Returns if the LQ is full. */
    bool lqFull() { return loads >= (LQEntries - 1); }

    /** Returns if the SQ is full. */
    bool sqFull() { return (stores + storesInFlight) >= (SQEntries - 1); }

    /** Debugging function to dump instructions in the LSQ. */
    void dumpInsts();

    /** Returns the number of instructions in the LSQ. */
    unsigned getCount() { return loads + stores; }

    /** Returns if there are any stores to writeback. */
    bool hasStoresToWB() { return storesToWB; }

    /** Returns the number of stores to writeback. */
    int numStoresToWB() { return storesToWB; }

    /** Returns if the LSQ unit will writeback on this cycle. */
    bool willWB() { return storeQueue.back().canWB &&
                           !storeQueue.back().completed &&
                           !isStoreBlocked; }
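
    /** Switches out the LSQ unit. */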
    void switchOut();
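
    /** Takes over state when switching in, using the old thread context if
     * one is provided. */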
    void takeOverFrom(ThreadContext *old_tc = NULL);

    bool isSwitchedOut() { return switchedOut; }
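
    /** Whether or not the LSQ unit is switched out. */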
    bool switchedOut;

  private:
    /** Writes back the instruction, sending it to the back end. */
    void writeback(DynInstPtr &inst, PacketPtr pkt);

    /** Handles completing the send of a store to memory. */
    void storePostSend(PacketPtr pkt, DynInstPtr &inst);

    /** Completes the given store. */
    void completeStore(DynInstPtr &inst);
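
    /** Removes the store at the given SQ index. */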
    void removeStore(int store_idx);

    /** Handles doing the retry. */
    void recvRetry();

  private:
    /** Pointer to the CPU. */
    OzoneCPU *cpu;

    /** Pointer to the back-end stage. */
    BackEnd *be;

    class DcachePort : public MasterPort
    {
      protected:
        OzoneLWLSQ *lsq;

      public:
        DcachePort(OzoneLWLSQ *_lsq)
            : lsq(_lsq)
        { }

      protected:
        virtual Tick recvAtomic(PacketPtr pkt);

        virtual void recvFunctional(PacketPtr pkt);

        /**
         * Is a snooper due to LSQ maintenance
         */
        virtual bool isSnooping() const { return true; }

        virtual bool recvTiming(PacketPtr pkt);

        virtual void recvRetry();
    };

    /** D-cache port. */
    DcachePort dcachePort;

  public:
    struct SQEntry {
        /** Constructs an empty store queue entry. */
        SQEntry()
            : inst(NULL), req(NULL), size(0), data(0),
              canWB(0), committed(0), completed(0), lqIt(NULL)
        { }

        /** Constructs a store queue entry for a given instruction. */
        SQEntry(DynInstPtr &_inst)
            : inst(_inst), req(NULL), size(0), data(0),
              canWB(0), committed(0), completed(0), lqIt(NULL)
        { }

        /** The store instruction. */
        DynInstPtr inst;
        /** The memory request for the store. */
        RequestPtr req;
        /** The size of the store. */
        int size;
        /** The store data. */
        IntReg data;
        /** Whether or not the store can writeback. */
        bool canWB;
        /** Whether or not the store is committed. */
        bool committed;
        /** Whether or not the store is completed. */
        bool completed;
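
        /** Iterator into the load queue. */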
        typename std::list<DynInstPtr>::iterator lqIt;
    };

    /** Derived class to hold any sender state the LSQ needs. */
    class LSQSenderState : public Packet::SenderState
    {
      public:
        /** Default constructor. */
        LSQSenderState()
            : noWB(false)
        { }

        /** Instruction who initiated the access to memory. */
        DynInstPtr inst;
        /** Whether or not it is a load. */
        bool isLoad;
        /** The LQ/SQ index of the instruction. */
        int idx;
        /** Whether or not the instruction should skip writeback. */
        bool noWB;
    };

    /** Writeback event, specifically for when stores forward data to loads. */
    class WritebackEvent : public Event {
      public:
        /** Constructs a writeback event. */
        WritebackEvent(DynInstPtr &_inst, PacketPtr pkt, OzoneLWLSQ *lsq_ptr);

        /** Processes the writeback event. */
        void process();

        /** Returns the description of this event. */
        const char *description() const;

      private:
        /** Instruction whose results are being written back. */
        DynInstPtr inst;

        /** The packet that would have been sent to memory. */
        PacketPtr pkt;

        /** The pointer to the LSQ unit that issued the store. */
        OzoneLWLSQ<Impl> *lsqPtr;
    };

    enum Status {
        Running,
        Idle,
        DcacheMissStall,
        DcacheMissSwitch
    };

  private:
    /** The OzoneLWLSQ thread id. */
    unsigned lsqID;

    /** The status of the LSQ unit. */
    Status _status;

    /** The store queue. */
    std::list<SQEntry> storeQueue;
    /** The load queue. */
    std::list<DynInstPtr> loadQueue;

    typedef typename std::list<SQEntry>::iterator SQIt;
    typedef typename std::list<DynInstPtr>::iterator LQIt;
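
    /** Hash functor used for the LQ/SQ index-to-iterator maps below. */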
    struct HashFn {
        size_t operator() (const int a) const
        {
            unsigned hash = (((a >> 14) ^ ((a >> 2) & 0xffff))) & 0x7FFFFFFF;

            return hash;
        }
    };
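
    /** Maps from an LQ/SQ index to the list iterator for the corresponding
     * entry, along with queues of indices available for newly inserted
     * instructions.
     */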
    m5::hash_map<int, SQIt, HashFn> SQItHash;
    std::queue<int> SQIndices;
    m5::hash_map<int, LQIt, HashFn> LQItHash;
    std::queue<int> LQIndices;

    typedef typename m5::hash_map<int, LQIt, HashFn>::iterator LQHashIt;
    typedef typename m5::hash_map<int, SQIt, HashFn>::iterator SQHashIt;

    // Consider making these 16 bits
    /** The number of LQ entries. */
    unsigned LQEntries;
    /** The number of SQ entries. */
    unsigned SQEntries;

    /** The number of load instructions in the LQ. */
    int loads;
    /** The number of store instructions in the SQ (excludes those waiting to
     * writeback).
     */
    int stores;
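
    /** The number of stores in the SQ waiting to write back. */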
    int storesToWB;

  public:
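    /** The number of stores that have issued to memory but not yet completed. */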
    int storesInFlight;

  private:
    /// @todo Consider moving to a more advanced model with write vs read ports
    /** The number of cache ports available each cycle. */
    int cachePorts;

    /** The number of used cache ports in this cycle. */
    int usedPorts;

    //list<InstSeqNum> mshrSeqNums;

    /** Total number of memory ordering violations. */
    Stats::Scalar lsqMemOrderViolation;

    //Stats::Scalar dcacheStallCycles;
    Counter lastDcacheStall;

    // Make these per thread?
    /** Whether or not the LSQ is stalled. */
    bool stalled;
    /** The store that causes the stall due to partial store to load
     * forwarding.
     */
    InstSeqNum stallingStoreIsn;
    /** Iterator pointing to the load stalled by the above store. */
    LQIt stallingLoad;

    /** The packet that needs to be retried. */
    PacketPtr retryPkt;

    /** Whether or not a store is blocked due to the memory system. */
    bool isStoreBlocked;

    /** Whether or not a load is blocked due to the memory system. It is
     * cleared via clearLoadBlocked().
     */
    bool isLoadBlocked;
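
    /** Whether or not the blocked load has been handled. */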
    bool loadBlockedHandled;
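
    /** The sequence number of the load that was blocked. */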
    InstSeqNum blockedLoadSeqNum;

    /** The oldest faulting load instruction. */
    DynInstPtr loadFaultInst;
    /** The oldest faulting store instruction. */
    DynInstPtr storeFaultInst;

    /** The oldest load that caused a memory ordering violation. */
    DynInstPtr memDepViolator;

    // Will also need how many read/write ports the Dcache has. Or keep track
    // of that in stage that is one level up, and only call executeLoad/Store
    // the appropriate number of times.

  public:
    /** Executes the load at the given index. */
    template <class T>
    Fault read(RequestPtr req, T &data, int load_idx);

    /** Executes the store at the given index. */
    template <class T>
    Fault write(RequestPtr req, T &data, int store_idx);

    /** Returns the sequence number of the head load instruction. */
    InstSeqNum getLoadHeadSeqNum()
    {
        if (!loadQueue.empty()) {
            return loadQueue.back()->seqNum;
        } else {
            return 0;
        }
    }

    /** Returns the sequence number of the head store instruction. */
    InstSeqNum getStoreHeadSeqNum()
    {
        if (!storeQueue.empty()) {
            return storeQueue.back().inst->seqNum;
        } else {
            return 0;
        }
    }

    /** Returns whether or not the LSQ unit is stalled. */
    bool isStalled() { return stalled; }
};

template <class Impl>
template <class T>
Fault
OzoneLWLSQ<Impl>::read(RequestPtr req, T &data, int load_idx)
{
    // Depending on issue2execute delay a squashed load could
    // execute if it is found to be squashed in the same
    // cycle it is scheduled to execute
    typename m5::hash_map<int, LQIt, HashFn>::iterator
        lq_hash_it = LQItHash.find(load_idx);
    assert(lq_hash_it != LQItHash.end());
    DynInstPtr inst = (*(*lq_hash_it).second);

    // Make sure this isn't an uncacheable access
    // A bit of a hackish way to get uncached accesses to work only if they're
    // at the head of the LSQ and are ready to commit (at the head of the ROB
    // too).
    // @todo: Fix uncached accesses.
    if (req->isUncacheable() &&
        (inst != loadQueue.back() || !inst->isAtCommit())) {
        DPRINTF(OzoneLSQ, "[sn:%lli] Uncached load and not head of "
                "commit/LSQ!\n",
                inst->seqNum);
        be->rescheduleMemInst(inst);
        return TheISA::genMachineCheckFault();
    }

    // Check the SQ for any previous stores that might lead to forwarding
    SQIt sq_it = storeQueue.begin();
    int store_size = 0;

    DPRINTF(OzoneLSQ, "Read called, load idx: %i addr: %#x\n",
            load_idx, req->getPaddr());

    while (sq_it != storeQueue.end() && (*sq_it).inst->seqNum > inst->seqNum)
        ++sq_it;

    while (1) {
        // End once we've reached the top of the LSQ
        if (sq_it == storeQueue.end()) {
            break;
        }

        assert((*sq_it).inst);

        store_size = (*sq_it).size;

        if (store_size == 0 || (*sq_it).committed) {
            sq_it++;
            continue;
        }

        // Check if the store data is within the lower and upper bounds of
        // addresses that the request needs.
        bool store_has_lower_limit =
            req->getVaddr() >= (*sq_it).inst->effAddr;
        bool store_has_upper_limit =
            (req->getVaddr() + req->getSize()) <= ((*sq_it).inst->effAddr +
                                                   store_size);
        bool lower_load_has_store_part =
            req->getVaddr() < ((*sq_it).inst->effAddr +
                               store_size);
        bool upper_load_has_store_part =
            (req->getVaddr() + req->getSize()) > (*sq_it).inst->effAddr;

        // If the store's data has all of the data needed, we can forward.
        if (store_has_lower_limit && store_has_upper_limit) {
            int shift_amt = req->getVaddr() & (store_size - 1);
            // Assumes byte addressing
            shift_amt = shift_amt << 3;

            // Cast this to type T?
            data = (*sq_it).data >> shift_amt;

            assert(!inst->memData);
            inst->memData = new uint8_t[64];

            memcpy(inst->memData, &data, req->getSize());

            DPRINTF(OzoneLSQ, "Forwarding from store [sn:%lli] to load to "
                    "[sn:%lli] addr %#x, data %#x\n",
                    (*sq_it).inst->seqNum, inst->seqNum, req->getVaddr(),
                    *(inst->memData));

            PacketPtr data_pkt = new Packet(req, Packet::ReadReq);
            data_pkt->dataStatic(inst->memData);

            WritebackEvent *wb = new WritebackEvent(inst, data_pkt, this);

            // We'll say this has a 1 cycle load-store forwarding latency
            // for now.
            // @todo: Need to make this a parameter.
            wb->schedule(curTick());

            // Should keep track of stat for forwarded data
            return NoFault;
        } else if ((store_has_lower_limit && lower_load_has_store_part) ||
                   (store_has_upper_limit && upper_load_has_store_part) ||
                   (lower_load_has_store_part && upper_load_has_store_part)) {
            // This is the partial store-load forwarding case where a store
            // has only part of the load's data.

            // If it's already been written back, then don't worry about
            // stalling on it.
            if ((*sq_it).completed) {
                sq_it++;
                break;
            }

            // Must stall load and force it to retry, so long as it's the
            // oldest load that needs to do so.
            if (!stalled ||
                (stalled &&
                 inst->seqNum <
                 (*stallingLoad)->seqNum)) {
                stalled = true;
                stallingStoreIsn = (*sq_it).inst->seqNum;
                stallingLoad = (*lq_hash_it).second;
            }

            // Tell IQ/mem dep unit that this instruction will need to be
            // rescheduled eventually
            be->rescheduleMemInst(inst);

            DPRINTF(OzoneLSQ, "Load-store forwarding mis-match. "
                    "Store [sn:%lli] to load addr %#x\n",
                    (*sq_it).inst->seqNum, req->getVaddr());

            return NoFault;
        }
        sq_it++;
    }

    // If there's no forwarding case, then go access memory
    DPRINTF(OzoneLSQ, "Doing functional access for inst PC %#x\n",
            inst->readPC());

    assert(!inst->memData);
    inst->memData = new uint8_t[64];

    ++usedPorts;

    DPRINTF(OzoneLSQ, "Doing timing access for inst PC %#x\n",
            inst->readPC());

    PacketPtr data_pkt =
        new Packet(req,
                   (req->isLLSC() ?
                    MemCmd::LoadLockedReq : Packet::ReadReq));
    data_pkt->dataStatic(inst->memData);

    LSQSenderState *state = new LSQSenderState;
    state->isLoad = true;
    state->idx = load_idx;
    state->inst = inst;
    data_pkt->senderState = state;

    // if we have a cache, do cache access too
    if (!dcachePort.sendTiming(data_pkt)) {
        // There's an older load that's already going to squash.
        if (isLoadBlocked && blockedLoadSeqNum < inst->seqNum)
            return NoFault;

        // Record that the load was blocked due to memory. This
        // load will squash all instructions after it, be
        // refetched, and re-executed.
        isLoadBlocked = true;
        loadBlockedHandled = false;
        blockedLoadSeqNum = inst->seqNum;
        // No fault occurred, even though the interface is blocked.
        return NoFault;
    }

    if (req->isLLSC()) {
        cpu->lockFlag = true;
    }

    return NoFault;
}

template <class Impl>
template <class T>
Fault
OzoneLWLSQ<Impl>::write(RequestPtr req, T &data, int store_idx)
{
    SQHashIt sq_hash_it = SQItHash.find(store_idx);
    assert(sq_hash_it != SQItHash.end());

    SQIt sq_it = (*sq_hash_it).second;
    assert((*sq_it).inst);

    DPRINTF(OzoneLSQ, "Doing write to store idx %i, addr %#x data %#x"
            " | [sn:%lli]\n",
            store_idx, req->getPaddr(), data, (*sq_it).inst->seqNum);

    (*sq_it).req = req;
    (*sq_it).size = sizeof(T);
    (*sq_it).data = data;
/*
    assert(!req->data);
    req->data = new uint8_t[64];
    memcpy(req->data, (uint8_t *)&(*sq_it).data, req->size);
*/

    // This function only writes the data to the store queue, so no fault
    // can happen here.
    return NoFault;
}

#endif // __CPU_OZONE_LW_LSQ_HH__