gem5/arch/alpha/alpha_memory.hh
Steve Reinhardt d53c6c168a Get software prefetching to work in full-system mode.
Mostly a matter of keeping prefetches to invalid addrs
from messing up VM IPRs.  Also discovered that wh64s were
not being treated as prefetches, when they really should be
(for the most part, anyway).

arch/alpha/alpha_memory.cc:
arch/alpha/alpha_memory.hh:
    - Get rid of intrlock flag for locking VM fault regs (a la EV5);
    instead, just don't update regs on VPTE loads (a la EV6).
    - Add NO_FAULT MemReq flag to indicate references that should not
    cause page faults (i.e., prefetches).
arch/alpha/ev5.cc:
    - Get rid of intrlock flag for locking VM fault regs (a la EV5);
    instead, just don't update regs on VPTE loads (a la EV6).
    - Add Fault trace flag.
arch/alpha/isa_desc:
    - Add NO_FAULT MemReq flag to indicate references that should not
    cause page faults (i.e., prefetches).
    - Mark wh64 as a "data prefetch" instruction so it gets controlled
    properly by the FullCPU data prefetch control switch.
    - Align wh64 EA in decoder so issue stage doesn't need to worry about it.
arch/alpha/isa_traits.hh:
    - Get rid of intrlock flag for locking VM fault regs (a la EV5);
    instead, just don't update regs on VPTE loads (a la EV6).
base/traceflags.py:
    - Add Fault trace flag.
cpu/simple_cpu/simple_cpu.hh:
    - Pass MemReq flags to writeHint() operation.
cpu/static_inst.hh:
    Update comment re: prefetches.

--HG--
extra : convert_revision : 62e466b0f4c0ff9961796270fa2e371ec24bcbb6
2004-06-15 10:48:08 -07:00
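
The mechanism is easiest to see in code. The following is a minimal, hypothetical sketch of the miss path described above, not the actual m5 implementation; the names (SketchReq, SketchIprs, NO_FAULT, VPTE, DTB_Fault) are illustrative assumptions. References flagged NO_FAULT (prefetches, wh64) and VPTE loads skip the VM fault IPR update; the fault is still returned so the CPU can squash the prefetch instead of trapping.

// Hypothetical, simplified sketch of the NO_FAULT / VPTE handling described
// in the commit message above. All names here are stand-ins, not the real
// m5 declarations.
#include <cstdint>

enum SketchFault { No_Fault, DTB_Fault };

struct SketchReq {
    uint64_t vaddr;                            // virtual address of the access
    uint32_t flags;
    static const uint32_t NO_FAULT = 0x1;      // reference must not page-fault
    static const uint32_t VPTE     = 0x2;      // nested (virtual PTE) load
};

struct SketchIprs {                            // stand-in for the VM fault IPRs
    uint64_t va;                               // faulting virtual address
    uint64_t mm_stat;                          // memory-management fault status
};

// Record fault state only for references that are allowed to fault.
void recordDtbFault(const SketchReq &req, SketchIprs &ipr, uint64_t status)
{
    if (req.flags & (SketchReq::NO_FAULT | SketchReq::VPTE))
        return;    // prefetches/wh64 and VPTE loads leave the IPRs untouched
    ipr.va = req.vaddr;
    ipr.mm_stat = status;
}

SketchFault translateOnMiss(const SketchReq &req, SketchIprs &ipr)
{
    recordDtbFault(req, ipr, /* status = */ 1);
    // The fault is still reported; the CPU squashes it for NO_FAULT
    // (prefetch) references instead of taking a trap.
    return DTB_Fault;
}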

/*
 * Copyright (c) 2003 The Regents of The University of Michigan
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions are
 * met: redistributions of source code must retain the above copyright
 * notice, this list of conditions and the following disclaimer;
 * redistributions in binary form must reproduce the above copyright
 * notice, this list of conditions and the following disclaimer in the
 * documentation and/or other materials provided with the distribution;
 * neither the name of the copyright holders nor the names of its
 * contributors may be used to endorse or promote products derived from
 * this software without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
 * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
 * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
 * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
 * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
 * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
 * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
 * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
 * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
 * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 */

#ifndef __ALPHA_MEMORY_HH__
#define __ALPHA_MEMORY_HH__

#include <map>

#include "mem/mem_req.hh"
#include "sim/sim_object.hh"
#include "base/statistics.hh"

class ExecContext;

class AlphaTLB : public SimObject
{
  protected:
    typedef std::multimap<Addr, int> PageTable;
    PageTable lookupTable;      // Quick lookup into page table

    AlphaISA::PTE *table;       // the Page Table
    int size;                   // TLB Size
    int nlu;                    // not last used entry (for replacement)

    void nextnlu() { if (++nlu >= size) nlu = 0; }
    AlphaISA::PTE *lookup(Addr vpn, uint8_t asn) const;

  public:
    AlphaTLB(const std::string &name, int size);
    virtual ~AlphaTLB();

    int getsize() const { return size; }

    AlphaISA::PTE &index(bool advance = true);
    void insert(Addr vaddr, AlphaISA::PTE &pte);

    void flushAll();
    void flushProcesses();
    void flushAddr(Addr addr, uint8_t asn);

    // static helper functions... really EV5 VM traits
    static bool validVirtualAddress(Addr vaddr) {
        // unimplemented bits must be all 0 or all 1
        Addr unimplBits = vaddr & VA_UNIMPL_MASK;
        return (unimplBits == 0) || (unimplBits == VA_UNIMPL_MASK);
    }

    static void checkCacheability(MemReqPtr &req);

    // Checkpointing
    virtual void serialize(std::ostream &os);
    virtual void unserialize(Checkpoint *cp, const std::string &section);
};

class AlphaITB : public AlphaTLB
{
  protected:
    mutable Stats::Scalar<> hits;
    mutable Stats::Scalar<> misses;
    mutable Stats::Scalar<> acv;
    mutable Stats::Formula accesses;

  protected:
    void fault(Addr pc, ExecContext *xc) const;

  public:
    AlphaITB(const std::string &name, int size);
    virtual void regStats();

    Fault translate(MemReqPtr &req) const;
};

class AlphaDTB : public AlphaTLB
{
  protected:
    mutable Stats::Scalar<> read_hits;
    mutable Stats::Scalar<> read_misses;
    mutable Stats::Scalar<> read_acv;
    mutable Stats::Scalar<> read_accesses;
    mutable Stats::Scalar<> write_hits;
    mutable Stats::Scalar<> write_misses;
    mutable Stats::Scalar<> write_acv;
    mutable Stats::Scalar<> write_accesses;

    Stats::Formula hits;
    Stats::Formula misses;
    Stats::Formula acv;
    Stats::Formula accesses;

  protected:
    void fault(MemReqPtr &req, uint64_t flags) const;

  public:
    AlphaDTB(const std::string &name, int size);
    virtual void regStats();

    Fault translate(MemReqPtr &req, bool write) const;
};

#endif // __ALPHA_MEMORY_HH__
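
As a further illustration, the standalone sketch below (hypothetical, simplified types; not the AlphaTLB implementation itself) shows how a multimap keyed by virtual page number, with an ASN/ASMA check on lookup and a round-robin "not last used" (nlu) victim pointer on insert, fits together in the way the class above suggests.

// Hypothetical sketch of a multimap-based TLB lookup with round-robin
// ("not last used") replacement, in the spirit of AlphaTLB above.
// SketchPte and its field names are assumptions for illustration.
#include <cstdint>
#include <map>
#include <vector>

struct SketchPte {
    uint64_t vpn;       // virtual page number
    uint64_t pfn;       // physical frame number
    uint8_t  asn;       // address space number
    bool     asma;      // address-space-match (global) bit
    bool     valid;
};

class SketchTlb {
    std::multimap<uint64_t, int> lookupTable;  // VPN -> index into table
    std::vector<SketchPte> table;
    int nlu = 0;                               // next victim (round robin)

  public:
    explicit SketchTlb(int size) : table(size) {}

    // Several entries may share a VPN (different ASNs), hence the multimap.
    SketchPte *lookup(uint64_t vpn, uint8_t asn) {
        auto range = lookupTable.equal_range(vpn);
        for (auto i = range.first; i != range.second; ++i) {
            SketchPte &pte = table[i->second];
            if (pte.valid && (pte.asma || pte.asn == asn))
                return &pte;
        }
        return nullptr;    // TLB miss
    }

    void insert(uint64_t vpn, const SketchPte &newPte) {
        SketchPte &victim = table[nlu];
        if (victim.valid) {
            // Drop the victim's stale lookup-table entry.
            auto range = lookupTable.equal_range(victim.vpn);
            for (auto i = range.first; i != range.second; ++i) {
                if (i->second == nlu) { lookupTable.erase(i); break; }
            }
        }
        victim = newPte;
        victim.vpn = vpn;
        victim.valid = true;
        lookupTable.emplace(vpn, nlu);
        if (++nlu >= (int)table.size())        // advance "not last used" pointer
            nlu = 0;
    }
};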