gem5/src/mem/cache/tags
Sophiane Senni ce2722cdd9 mem: Split the hit_latency into tag_latency and data_latency
If the cache access mode is parallel, i.e. the "sequential_access"
parameter is set to "False", tags and data are accessed in parallel;
the hit_latency is therefore the maximum of tag_latency and
data_latency. If the cache access mode is sequential, i.e. the
"sequential_access" parameter is set to "True", tags and data are
accessed one after the other; the hit_latency is therefore the sum of
tag_latency and data_latency.
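The rule above can be sketched in a few lines of Python. This is an
illustrative sketch, not gem5 source; the function name and the way the
parameters are passed are assumptions for the example only:

```python
# Hedged sketch of the hit-latency rule described in the commit message.
# sequential_access mirrors the cache parameter of the same name.
def hit_latency(tag_latency, data_latency, sequential_access):
    if sequential_access:
        # Sequential mode: tags are checked first, then data is read,
        # so the two latencies add up.
        return tag_latency + data_latency
    # Parallel mode: tag and data arrays are probed at the same time,
    # so the slower of the two dominates.
    return max(tag_latency, data_latency)

print(hit_latency(2, 3, sequential_access=False))  # parallel -> 3
print(hit_latency(2, 3, sequential_access=True))   # sequential -> 5
```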

Signed-off-by: Jason Lowe-Power <jason@lowepower.com>
2016-11-30 17:10:27 -05:00
base.cc mem: Split the hit_latency into tag_latency and data_latency 2016-11-30 17:10:27 -05:00
base.hh mem: Split the hit_latency into tag_latency and data_latency 2016-11-30 17:10:27 -05:00
base_set_assoc.cc mem: fix headers include order in the cache related classes 2016-05-26 11:56:24 +01:00
base_set_assoc.hh mem: Split the hit_latency into tag_latency and data_latency 2016-11-30 17:10:27 -05:00
cacheset.hh mem: fix headers include order in the cache related classes 2016-05-26 11:56:24 +01:00
fa_lru.cc mem: Split the hit_latency into tag_latency and data_latency 2016-11-30 17:10:27 -05:00
fa_lru.hh mem: Split the hit_latency into tag_latency and data_latency 2016-11-30 17:10:27 -05:00
lru.cc mem: fix headers include order in the cache related classes 2016-05-26 11:56:24 +01:00
lru.hh mem: Remove templates in cache model 2015-05-05 03:22:21 -04:00
random_repl.cc mem: fix headers include order in the cache related classes 2016-05-26 11:56:24 +01:00
random_repl.hh mem: Remove templates in cache model 2015-05-05 03:22:21 -04:00
SConscript mem: refactor LRU cache tags and add random replacement tags 2014-07-28 12:23:23 -04:00
Tags.py mem: Split the hit_latency into tag_latency and data_latency 2016-11-30 17:10:27 -05:00