With this change, suggested by Gautam Tirumala, the ports for pkgin and
pkg_install are cleaner and thus easier to upstream. Presumably other
ports will be smoother too.
There doesn't seem to be a reason SSIZE_MAX was so small to begin with.
- regions were previously stored in a linked list, as 'normally'
there are just 2 or 3 (text, data, stack), but that is slow
if lots of regions are created with mmap()
- measurable performance improvement with gcc and clang
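
To illustrate the cost being removed, here is a minimal sketch of a linear-scan
lookup over a linked list of regions; the types and the lookup function are
hypothetical, not the actual VM code, but they show why thousands of mmap()ed
regions make every lookup O(n):

    #include <stddef.h>

    struct region {
        unsigned long start, end;    /* virtual address range */
        struct region *next;         /* old representation: singly linked list */
    };

    /* find the region containing addr: fine for 2-3 regions, slow for many */
    struct region *find_region(struct region *head, unsigned long addr)
    {
        struct region *r;
        for (r = head; r != NULL; r = r->next)   /* O(n) walk per lookup */
            if (addr >= r->start && addr < r->end)
                return r;
        return NULL;
    }

A structure with logarithmic lookup (e.g. a balanced tree, or a sorted array
searched with bsearch()) avoids this scan; which structure the VM code actually
uses is not stated here.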
This is a fix for, e.g., the situation where lots of processes die
at once: PM has to send an async message to VFS for each of them, and
panics if there are too many. There are likely more situations in
which this table should scale with the number of processes.
Reported by pikpik on #minix3.
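
A minimal sketch of the sizing idea, with made-up names (NR_PROCS_GUESS and
struct amsg are illustrative, not the real PM/VFS definitions): if every
process may have one in-flight asynchronous message, the table needs one slot
per process.

    #include <stdio.h>

    #define NR_PROCS_GUESS 256              /* stand-in for the system's process limit */

    struct amsg { int dst; int type; };     /* simplified async message slot */

    /* one slot per possible process, so a burst of exits cannot overflow it */
    static struct amsg atable[NR_PROCS_GUESS];

    int main(void)
    {
        printf("async message slots: %zu\n", sizeof(atable) / sizeof(atable[0]));
        return 0;
    }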
Before, the 'main thread' of a process was never taken into account anywhere in
the library, so mutexes did not work properly (and, consequently, neither did
the condition variables). For example, when the 'main thread' (that is, the
thread started at the beginning of a process, as opposed to a thread spawned by
the library) locked a mutex, the mutex wasn't actually locked.
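
The following toy sketch (not the libmthread implementation) shows the idea:
give the main thread a real identifier so a mutex can record it as owner like
any spawned thread; without such an entry, a lock taken by the main thread has
no owner and is effectively a no-op.

    #include <assert.h>

    typedef int thread_t;
    #define MAIN_THREAD ((thread_t) 0)      /* the initial thread gets a real id */

    typedef struct { int locked; thread_t owner; } mutex_t;

    static thread_t current = MAIN_THREAD;  /* thread running right now */

    static void mutex_lock(mutex_t *m)
    {
        assert(!m->locked);                 /* no contention in this toy example */
        m->locked = 1;
        m->owner = current;                 /* main thread recorded like any other */
    }

    int main(void)
    {
        mutex_t m = { 0, -1 };
        mutex_lock(&m);
        assert(m.locked && m.owner == MAIN_THREAD);
        return 0;
    }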
- a different set of MSRs and performance counters is used on AMD
- when initializing the NMI watchdog, the test for the Intel architecture
performance counters feature now applies only to Intel CPUs
- NMI is enabled if the CPU belongs to a family which has the
performance counters that we use
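
As a hedged sketch of the vendor split (the MSR numbers below are the commonly
documented ones for Intel IA32_PERFEVTSEL0/IA32_PMC0 and AMD
PERF_EVT_SEL0/PERF_CTR0; the kernel's actual tables may differ):

    #include <stdint.h>
    #include <stdio.h>

    struct pmc_msrs { uint32_t evtsel; uint32_t ctr; };

    static const struct pmc_msrs intel_msrs = { 0x186,      0xC1 };
    static const struct pmc_msrs amd_msrs   = { 0xC0010000, 0xC0010004 };

    /* pick the MSR pair for the watchdog counter based on the CPU vendor */
    static const struct pmc_msrs *watchdog_msrs(int is_amd)
    {
        return is_amd ? &amd_msrs : &intel_msrs;
    }

    int main(void)
    {
        printf("AMD event-select MSR: %#x\n", (unsigned) watchdog_msrs(1)->evtsel);
        return 0;
    }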
- sometimes the system needs to know precisely what type of CPU it is
running on. The CPU type is detected during arch-specific
initialization and kept in the machine structure for later use.
- as a side-effect the information is exported to userland
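
Illustratively (field and constant names are made up for this sketch, not the
real machine structure): the type is determined once during arch init and then
simply read wherever it is needed, including by userland through an info call.

    #include <stdio.h>

    enum cpu_type { CPU_UNKNOWN, CPU_INTEL, CPU_AMD };

    struct machine_info {
        enum cpu_type cpu;          /* filled in once during arch-specific init */
        int apic_enabled;
    };

    static struct machine_info machine;

    static void arch_detect_cpu(void)
    {
        machine.cpu = CPU_INTEL;    /* would come from CPUID in real code */
    }

    int main(void)
    {
        arch_detect_cpu();
        printf("cpu type id: %d\n", (int) machine.cpu);
        return 0;
    }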
- the Intel architecture cycle counter (performance counter) does not
count when the CPU is idle, therefore we use a busy loop instead of
halting the CPU when there is nothing to schedule
- the downside is that handling interrupts may be accounted as idle
time if a sample is taken before we get out of the nested trap and
pick a new process
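
A sketch of the trade-off with hypothetical helpers (not the real idle code):
while profiling, the idle loop spins so the cycle counter keeps advancing;
otherwise the CPU is halted as usual.

    #include <stdbool.h>

    static volatile bool runnable;          /* set from interrupt context */

    static void halt_cpu(void) { /* would execute HLT; the cycle counter stops */ }

    static void idle(bool profiling_active)
    {
        if (profiling_active) {
            while (!runnable)
                ;                           /* busy wait: counter keeps counting */
        } else {
            while (!runnable)
                halt_cpu();                 /* normal, power-friendly idle */
        }
    }

    int main(void)
    {
        runnable = true;                    /* so the sketch returns immediately */
        idle(true);
        return 0;
    }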
- when profiling is compiled in, the kernel includes a 64M buffer for
samples
- 64M is the default buffer size used by the profile tool
- when using NMI profiling it is not always possible to copy a sample
straight to userland, as the NMI may (and does) happen at bad moments
- reduces sampling overhead as samples are copied out only when
profiling stops
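
A hedged sketch of the deferred copy-out (sizes and names are illustrative):
the sampling handler only appends to a kernel-resident buffer, which is safe
even from an NMI, and the buffer is handed to userland in one go when profiling
stops.

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    #define SAMPLE_BUF_SIZE (64 * 1024 * 1024)     /* matches the 64M default */

    struct sample { int endpoint; uintptr_t pc; }; /* one sample: [endpoint|pc] */

    static char sprof_buf[SAMPLE_BUF_SIZE];
    static size_t sprof_used;

    /* called from the sampling interrupt: never touches user memory */
    static void record_sample(const struct sample *s)
    {
        if (sprof_used + sizeof(*s) <= sizeof(sprof_buf)) {
            memcpy(sprof_buf + sprof_used, s, sizeof(*s));
            sprof_used += sizeof(*s);
        }
    }

    /* called when profiling stops: one bulk copy instead of one per sample */
    static size_t copy_out(char *user_buf, size_t user_len)
    {
        size_t n = sprof_used < user_len ? sprof_used : user_len;
        memcpy(user_buf, sprof_buf, n);
        return n;
    }

    int main(void)
    {
        struct sample s = { -1, 0x1234 };
        char out[sizeof(struct sample)];
        record_sample(&s);
        return copy_out(out, sizeof(out)) == sizeof(s) ? 0 : 1;
    }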
- with profile --nmi, the kernel uses NMI-watchdog-based sampling built
on the Intel architecture performance counters
- using NMI makes kernel profiling possible
- the watchdog's kernel lockup detection is disabled while sampling, as
we may get unpredictable interrupts in the kernel and thus many
false positives
- if the watchdog is not enabled at boot time, profiling enables it and
turns it off again when done
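
A sketch of that bracketing, with hypothetical helper names: lockup detection
is suppressed for the duration of the run, and a watchdog that was off at boot
is switched on only while sampling.

    #include <stdbool.h>

    static bool watchdog_on;                 /* true if enabled at boot */
    static bool lockup_check_enabled = true;
    static bool enabled_for_profiling;

    static void nmi_profile_start(void)
    {
        if (!watchdog_on) {                  /* borrow the watchdog just for sampling */
            watchdog_on = true;
            enabled_for_profiling = true;
        }
        lockup_check_enabled = false;        /* kernel samples would look like lockups */
    }

    static void nmi_profile_stop(void)
    {
        lockup_check_enabled = true;
        if (enabled_for_profiling) {         /* restore the boot-time state */
            watchdog_on = false;
            enabled_for_profiling = false;
        }
    }

    int main(void)
    {
        nmi_profile_start();
        nmi_profile_stop();
        return watchdog_on ? 1 : 0;          /* back to the boot-time (off) state */
    }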
- profile --nmi | --rtc sets the profiling mode
- --rtc is the default; it uses the BIOS RTC, cannot profile the kernel,
and only the preset frequency values apply
- --nmi is only available in APIC mode as it uses the NMI watchdog; -f
allows any frequency in Hz
- both modes use compatible data structures
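
As an illustrative invocation (the exact option syntax of the profile tool may
differ): profile --nmi -f 1000 would sample at 1000 Hz via the NMI watchdog and
can attribute samples to the kernel, while the default --rtc mode uses the BIOS
RTC, is limited to the preset frequencies, and cannot profile the kernel.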
- when the kernel profiles a process for the first time it saves an entry
describing the process [endpoint|name]
- every profile sample is only [endpoint|pc]
- the profile utility builds a table of endpoint <-> name relations,
translates the endpoints of samples into names, and writes out the
results in the format the processing tools expect
- "task" endpoints like KERNEL are negative thus we must cast it to
unsigned when hashing
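
A minimal sketch of that hashing concern (table size and names are
illustrative): casting the endpoint to unsigned first keeps negative task
endpoints mapping to a valid bucket index.

    #include <stdio.h>

    #define HASH_BUCKETS 1024

    /* endpoint -> bucket; negative values like KERNEL's must not yield a
     * negative remainder, hence the cast before the modulo */
    static unsigned hash_endpoint(int endpoint)
    {
        return (unsigned) endpoint % HASH_BUCKETS;
    }

    int main(void)
    {
        printf("bucket for endpoint -4: %u\n", hash_endpoint(-4));
        return 0;
    }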