. helps debugging output; you can see the difference
between parent and child easily (it's sometimes
confusing to see an expected endpoint number with
an unexpected name, i.e. before exec())
. when a process crashes after fork and before exec, it's
an instant hint that that's what's going on, rather than
it being the parent (endpoint numbers don't usually convey
this)
. name returns to 'normal' after exec(), so *F isn't visible
normally at all. (Except for RS, which forks, apparently.)
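
A minimal sketch of how such name tagging could work; the proc layout,
PROC_NAME_LEN and the helper names are assumptions for illustration, not
the actual MINIX code:

    #include <stdio.h>
    #include <string.h>

    #define PROC_NAME_LEN 16            /* assumed size of the name field */

    struct proc { char p_name[PROC_NAME_LEN]; };

    /* fork: tag the child's inherited name so it stands out until exec */
    static void tag_forked_name(struct proc *child)
    {
        size_t len = strlen(child->p_name);
        if (len + 2 < PROC_NAME_LEN)
            strcpy(child->p_name + len, "*F");
    }

    /* exec: the name is overwritten with the new binary's name, which is
     * why the "*F" suffix disappears again */
    static void set_exec_name(struct proc *p, const char *name)
    {
        strncpy(p->p_name, name, PROC_NAME_LEN - 1);
        p->p_name[PROC_NAME_LEN - 1] = '\0';
    }

    int main(void)
    {
        struct proc child = { "sh" };
        tag_forked_name(&child);
        printf("%s\n", child.p_name);   /* "sh*F": forked, not yet execed */
        set_exec_name(&child, "ls");
        printf("%s\n", child.p_name);   /* "ls": back to a normal name */
        return 0;
    }
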
Headers that will be shared between the old includes and the NetBSD-like
includes are moved into the common/include tree. They are still copied
into /usr/include by 'make includes', so compilation and programs aren't
affected.
- the kernel maintains a cpu_info array which holds various
information about each cpu, filled in as each cpu boots
- the information contains identification, features, etc.
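
A sketch of what such a per-cpu record might contain; the field names are
illustrative, not the real layout:

    #include <stdint.h>

    #define CONFIG_MAX_CPUS 8           /* assumed build-time limit */

    struct cpu_info {
        uint8_t  vendor;                    /* manufacturer id */
        uint8_t  family, model, stepping;   /* identification from CPUID */
        uint32_t freq_mhz;                  /* measured while the cpu boots */
        uint32_t flags;                     /* feature bits */
    };

    /* one slot per cpu, filled in by each cpu on its way up */
    static struct cpu_info cpu_info[CONFIG_MAX_CPUS];
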
- flush a process' TLB entries only if its page tables have been
changed and they are already loaded on this cpu, which means that
there might be stale entries in the TLB. Until now SMP always
flushed the TLB to make sure everything was consistent.
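
A sketch of the check, assuming a per-process bitmap of cpus that may hold
stale entries; the names are hypothetical:

    #include <stdint.h>

    #define CPU_BIT(c) (1u << (c))

    struct proc {
        uint32_t p_stale_tlb;       /* cpus that may cache stale entries */
    };

    static unsigned cpuid;          /* id of the cpu running this code */
    static void refresh_tlb(void) { /* reload cr3 / invalidate as needed */ }

    /* page tables changed: mark every cpu that has them loaded */
    void pagetable_changed(struct proc *p, uint32_t cpus_using_pt)
    {
        p->p_stale_tlb |= cpus_using_pt;
    }

    /* entering the process' address space: flush only when needed */
    void switch_address_space(struct proc *p)
    {
        if (p->p_stale_tlb & CPU_BIT(cpuid)) {
            refresh_tlb();
            p->p_stale_tlb &= ~CPU_BIT(cpuid);
        }
    }
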
- accidentally this wasn't part of the SMP merge and the
implementation remained incomplete, with the timer still ticking
periodically
- the APIC timer is set for a single shot and restarted every time
it expires. This way we can keep the APs truly idle
- the timer is restarted shortly before returning to userspace
- LAPIC_TIMER_ICR is written before LAPIC_LVTTR so that the newest
value is used
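
A sketch of re-arming the one-shot timer in that order; the register
offsets are the standard local APIC ones, while the vector and the
lapic_write() accessor are stand-ins:

    #include <stdint.h>

    #define LAPIC_LVTTR       0x320     /* LVT timer register */
    #define LAPIC_TIMER_ICR   0x380     /* initial count register */
    #define APIC_TIMER_VECTOR 0xf0      /* assumed interrupt vector */

    static volatile uint32_t *lapic;    /* mapped local APIC base */

    static void lapic_write(uint32_t reg, uint32_t val)
    {
        lapic[reg / 4] = val;
    }

    /* restart the timer for a single shot; the initial count is written
     * first so the newest value is the one in effect */
    void lapic_oneshot_restart(uint32_t ticks)
    {
        lapic_write(LAPIC_TIMER_ICR, ticks);
        lapic_write(LAPIC_LVTTR, APIC_TIMER_VECTOR);  /* one-shot mode */
    }
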
- fixed spurious and error interrupt handlers
- so as not to hog the system, the warning isn't reported every time,
just once every 100 times; similarly for the spurious PIC interrupts
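
A sketch of such rate limiting:

    #include <stdio.h>

    /* report only every 100th occurrence so a misbehaving interrupt
     * line cannot hog the system with console output */
    void spurious_irq_warn(int irq)
    {
        static unsigned long seen;

        if (seen++ % 100 == 0)
            printf("warning: spurious IRQ %d (%lu so far)\n", irq, seen);
    }
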
- a different set of MSRs and performance counters is used on AMD
- when initializing the NMI watchdog, the test for the Intel
architecture performance counters feature now only applies to Intel cpus
- NMI is enabled if the CPU belongs to a family which has the
performance counters that we use
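
A sketch of the vendor-dependent decision; the family cutoff and the
stubbed detection results are assumptions:

    #include <stdbool.h>

    enum cpu_vendor { VENDOR_INTEL, VENDOR_AMD, VENDOR_OTHER };

    /* stubs standing in for the boot-time detection results */
    static enum cpu_vendor vendor = VENDOR_INTEL;
    static unsigned family = 6;
    static bool has_arch_perfmon = true;  /* CPUID arch perfmon feature */

    bool nmi_watchdog_supported(void)
    {
        switch (vendor) {
        case VENDOR_INTEL:
            /* the arch-perfmon feature test only makes sense on Intel */
            return has_arch_perfmon;
        case VENDOR_AMD:
            /* AMD uses a different MSR set; assumed family cutoff */
            return family >= 15;
        default:
            return false;
        }
    }
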
- sometimes the system needs to know precisely what type of cpu it
is running on. The cpu type is detected during arch-specific
initialization and kept in the machine structure for later use.
- as a side-effect the information is exported to userland
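
For instance (the field name here is an assumption), the detected type
could sit next to the other machine attributes that userland can already
query:

    struct machine {
        unsigned processors_count;
        unsigned cpu_type;      /* assumed: id detected during arch init */
        /* ... */
    };
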
- the Intel architecture cycle counter (performance counter) does not
count when the CPU is idle, therefore we use a busy loop instead of
halting the cpu when there is nothing to schedule
- the downside is that handling interrupts may be accounted as idle
time if a sample is taken before we get out of the nested trap and
pick a new process
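
A sketch of such an idle loop, using the x86 pause hint; the run-queue
flag is illustrative:

    #include <stdbool.h>

    static volatile bool runnable_available;   /* set by the scheduler */

    /* spin instead of hlt: the cycle counter keeps counting, so idle
     * time still shows up in the profile */
    void idle_busy_loop(void)
    {
        while (!runnable_available)
            __asm__ volatile ("pause");
    }
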
- when profiling is compiled in, the kernel includes a 64M buffer for
samples
- 64M is the default the profile tool uses for its own buffer
- when using NMI profiling it is not possible to always copy samples
straight to userland, as the NMI may (and does) happen at bad moments
- this reduces sampling overhead, as samples are copied out only when
profiling stops
- if run as 'profile --nmi', the kernel uses NMI-watchdog-based
sampling driven by the Intel architecture performance counters
- using NMI makes profiling the kernel itself possible
- watchdog kernel-lockup detection is disabled while sampling, as we
may get unpredictable interrupts in the kernel and thus possibly many
false positives
- if the watchdog is not enabled at boot time, profiling enables it
and turns it off again when done
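
Sketched as start/stop hooks; the state variables are illustrative:

    #include <stdbool.h>

    static bool watchdog_on;            /* watchdog currently running */
    static bool lockup_check_on;        /* lockup detection active */

    /* returns whether we turned the watchdog on ourselves */
    bool nmi_profile_start(void)
    {
        bool enabled_by_us = !watchdog_on;
        watchdog_on = true;             /* need its NMIs for sampling */
        lockup_check_on = false;        /* avoid false positives */
        return enabled_by_us;
    }

    void nmi_profile_stop(bool enabled_by_us)
    {
        lockup_check_on = true;
        if (enabled_by_us)
            watchdog_on = false;        /* turn it off again when done */
    }
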
- when the kernel profiles a process for the first time it saves an
entry describing the process [endpoint|name]
- every subsequent profile sample is only [endpoint|pc]
- the profile utility creates a table of endpoint <-> name relations,
translates the endpoints of samples into names, and writes out the
results in the format the processing tools expect
- "task" endpoints like KERNEL are negative, thus we must cast them
to unsigned when hashing
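
A sketch of the two record shapes and of the unsigned cast when hashing;
the struct and constant names are illustrative:

    #include <stdint.h>
    #include <stdio.h>

    typedef int32_t endpoint_t;         /* endpoints are signed ids */
    #define PROC_NAME_LEN 16
    #define HASH_BUCKETS  1024

    /* written once, the first time a process appears in the samples */
    struct sprof_proc { endpoint_t ep; char name[PROC_NAME_LEN]; };

    /* written for every sample afterwards */
    struct sprof_sample { endpoint_t ep; void *pc; };

    /* task endpoints such as KERNEL are negative; hash the unsigned
     * representation to keep the bucket index in range */
    static unsigned ep_hash(endpoint_t ep)
    {
        return (unsigned)ep % HASH_BUCKETS;
    }

    int main(void)
    {
        endpoint_t kernel = -5;         /* illustrative task endpoint */
        printf("bucket %u\n", ep_hash(kernel));
        return 0;
    }
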
- contributed by Bjorn Swift
- adds process accounting, for example counting the number of messages
sent, how often the process was preempted and how much time it spent
in the run queue. These statistics, along with the current cpu load,
are sent back to the user-space scheduler in the Out Of Quantum
message.
- the user-space scheduler may choose to make use of these statistics
when making scheduling decisions. For instance, the cpu load becomes
especially useful when scheduling on multiple cores.
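
A sketch of the kind of per-process counters involved and one way a
scheduler might use the per-cpu load; all names are illustrative:

    #include <stdint.h>

    struct proc_accounting {
        uint64_t msgs_sent;         /* messages sent */
        uint64_t preempted;         /* times preempted */
        uint64_t queue_ticks;       /* time spent in the run queue */
    };

    /* e.g. place an out-of-quantum process on the least loaded cpu */
    unsigned pick_cpu(const uint64_t *cpu_load, unsigned ncpus)
    {
        unsigned best = 0;
        for (unsigned c = 1; c < ncpus; c++)
            if (cpu_load[c] < cpu_load[best])
                best = c;
        return best;
    }
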