* Removed startup code patches in lib/csu regarding kernel to userland
ABI.
* Aligned the stack layout with the NetBSD stack layout.
* Generate valid stack pointers instead of offsets by taking into account
_minix_kerninfo->kinfo->user_sp.
* Refactored stack generation by moving part of execve() into two
functions, minix_stack_params() and minix_stack_fill(), and using
them in execve(), rs and vm.
* Changed load offset of rtld (ld.so) to:
execi.args.stack_high - execi.args.stack_size - 0xa00000
which is 10MB below the main executable stack.
Change-Id: I839daf3de43321cded44105634102d419cb36cec
The Cycle CouNTer on ARM cannot be used reliably as it wraps around
rather quickly and can be altered by user space (on Minix). Furthermore,
it's buggy when wrapping and is not implemented at all on the Linaro
Beagleboard emulator.
This patch programs GPTIMER10 as a free-running clock at 1.625 MHz (it
doesn't generate interrupts). It's memory-mapped into every process,
which enables libsys to provide micro_delay().
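As a rough illustration of why such a counter is enough for micro_delay(),
here is a minimal user-space sketch; the pointer, mapping and helper names
are hypothetical, only the 1.625 MHz rate and the memory-mapped,
free-running counter come from this change:

    #include <stdint.h>

    /* Hypothetical: rate of the free-running timer and a pointer to its
     * memory-mapped counter register (mapped into every process). */
    #define FRCLOCK_HZ 1625000UL
    static volatile uint32_t *frclock_counter;

    /* Busy-wait for at least 'micros' microseconds; the unsigned
     * subtraction keeps working across counter wrap-around. */
    static void my_micro_delay(unsigned long micros)
    {
        uint32_t start = *frclock_counter;
        uint32_t ticks = (uint32_t)((micros * FRCLOCK_HZ) / 1000000UL);

        while ((uint32_t)(*frclock_counter - start) < ticks)
            ;   /* spin */
    }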
Change-Id: Iba004c6c62976762fe154ea390d69e518eec1531
. add cpufeature detection of both SYSENTER and SYSCALL (see the
sketch after this list)
. use them for both ipc and kernelcall traps, passing the call number
in a register
. SYSENTER/SYSCALL do not save any context, so userland has to save it
. to accommodate multiple kernel entry/exit types, the entry
type is recorded in the process struct. hitherto all types
were interrupts (soft int, exception, hard int); SYSENTER/SYSCALL
is new, with the difference that context is not fully restored
from the proc struct when running the process again; this can't be
done as some information is missing.
. complication: cases in which the kernel has to fully change a
process's context (e.g. sigreturn). in that case the exit type
is changed from SYSENTER/SYSEXIT to soft-int (i.e. iret) and
context is fully restored from the proc struct. this does mean
the PC and SP must change, as the sysenter/sysexit userland code
would otherwise try to restore its own saved context; that is
indeed what happens in the sigreturn case.
. override all usage by setting libc_ipc=1
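For reference, the kind of cpufeature check involved can be sketched in
plain C with GCC's <cpuid.h>; the FEAT_* names and the standalone program
are made up, but the CPUID bits (leaf 1 EDX bit 11 for SYSENTER/SYSEXIT,
extended leaf 0x80000001 EDX bit 11 for SYSCALL/SYSRET) are the
architectural ones:

    #include <cpuid.h>
    #include <stdio.h>

    /* Hypothetical feature flags; MINIX has its own cpufeature interface. */
    #define FEAT_SYSENTER 0x1   /* CPUID.1:EDX bit 11 (SEP) */
    #define FEAT_SYSCALL  0x2   /* CPUID.0x80000001:EDX bit 11 */

    static int detect_fast_syscall(void)
    {
        unsigned int eax, ebx, ecx, edx;
        int feats = 0;

        if (__get_cpuid(1, &eax, &ebx, &ecx, &edx) && (edx & (1u << 11)))
            feats |= FEAT_SYSENTER;
        if (__get_cpuid(0x80000001, &eax, &ebx, &ecx, &edx) && (edx & (1u << 11)))
            feats |= FEAT_SYSCALL;
        return feats;
    }

    int main(void)
    {
        int f = detect_fast_syscall();
        printf("SYSENTER: %s, SYSCALL: %s\n",
            (f & FEAT_SYSENTER) ? "yes" : "no",
            (f & FEAT_SYSCALL) ? "yes" : "no");
        return 0;
    }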
Coverity was flagging a recursive include between kernel.h and
cpulocals.h. As cpulocals.h also included proc.h, we can move that
include statement into kernel.h, and clean up the source files'
include statements accordingly.
. map all objects named usermapped_*.o with globally visible
pages; usermapped_glo_*.o with the VM 'global' bit on, i.e.
permanently in tlb (very scarce resource!)
. added kinfo, machine, kmessages and loadinfo for a start
. modified log and tty to make use of the shared messages struct
. some strncpy/strcpy to strlcpy conversions (see the sketch after
this list)
. new <minix/param.h> to avoid including other minix headers
that have colliding definitions with library and commands code,
causing parse warnings
. removed some dead code / assignments
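The strncpy/strcpy to strlcpy conversions follow the usual pattern below
(function and buffer names are made up for illustration); strlcpy()
always NUL-terminates and reports truncation, which strncpy() does not:

    #include <string.h>   /* strlcpy() is in <string.h> on NetBSD/MINIX libc */

    void set_label(char *dst, size_t dstsize, const char *src)
    {
        /* before:
         *     strncpy(dst, src, dstsize);
         *     dst[dstsize - 1] = '\0';
         */
        if (strlcpy(dst, src, dstsize) >= dstsize) {
            /* src was truncated; dst still holds a NUL-terminated prefix */
        }
    }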
adjust the smp booting procedure for segmentless operation. changes are
mostly due to the gdt/idt depending on paging, because of their high
location, and paging therefore being enabled much sooner.
also smaller fixes: redefine DESC_SIZE, fix a kernel makefile variable
name (cross-compiling), some null pointer checks that now trap because
of a sparser pagetable, acpi sanity checking
This commit removes all traces of Minix segments (the text/data/stack
memory map abstraction in the kernel) and significance of Intel segments
(hardware segments like CS, DS that add offsets to all addressing before
page table translation). This ultimately simplifies the memory layout
and addressing and makes the same layout possible on non-Intel
architectures.
There are only two types of addresses in the world now: virtual
and physical; even the kernel and processes have the same virtual
address space. Kernel and user processes can be distinguished at a
glance as processes won't use 0xF0000000 and above.
No static pre-allocated memory sizes exist any more.
Changes to booting:
. pre_init.c leaves the kernel and modules in physical memory exactly
as they were left by the bootloader
. The kernel starts running using physical addressing,
loaded at a fixed location given in its linker script by the
bootloader. All code and data in this phase are linked to
this fixed low location.
. It makes a bootstrap pagetable to map itself to a
fixed high location (also in linker script) and jumps to
the high address. All code and data then use this high addressing.
. All code/data symbols linked at the low addresses are prefixed by
an objcopy step with __k_unpaged_*, so that the low code cannot
reference high-linked symbols (which aren't valid yet) or vice
versa (symbols that aren't valid any more).
. The two addressing modes are separated in the linker script by
collecting the unpaged_*.o objects and linking them with low
addresses, and linking the rest high. Some objects are linked
twice, once low and once high.
. The bootstrap phase passes a lot of information (e.g. free memory
list, physical location of the modules, etc.) using the kinfo
struct.
. After this bootstrap the low-linked part is freed.
. The kernel maps VM into the bootstrap page table so that VM can
begin executing. Its first job is to make page tables for all other
boot processes. So VM runs before RS, and RS gets a fully dynamic,
VM-managed address space. VM gets its privilege info from RS as usual
but that happens after RS starts running.
. Both the kernel loading VM and VM organizing boot processes happen
using the libexec logic. This removes the last reason for VM to
still know much about exec() and vm/exec.c is gone.
Further Implementation:
. All segments are based at 0 and have a 4 GB limit.
. The kernel is mapped in at the top of the virtual address
space so as not to constrain the user processes.
. Processes do not use segments from the LDT at all; there are
no segments in the LDT any more, so no LLDT is needed.
. The Minix segments T/D/S are gone and so none of the
user-space or in-kernel copy functions use them. The copy
functions use a process endpoint of NONE to indicate a physical
address, and treat anything else as virtual (see the sketch after
this list).
. The umap call only makes sense to translate a virtual address
to a physical address now.
. Segments-related calls like newmap and alloc_segments are gone.
. All segments-related translation in VM is gone (vir2map etc).
. Initialization in VM is simpler as no moving around is necessary.
. VM and all other boot processes can be linked wherever they wish
and will be mapped in at the right location by the kernel and VM
respectively.
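A minimal sketch of that physical-vs-virtual convention (all names here
are hypothetical stand-ins for the kernel's real copy routines, and the
NONE value is illustrative):

    #include <stddef.h>

    typedef unsigned long vir_bytes;      /* stub address types for the sketch */
    typedef unsigned long phys_bytes;
    #define NONE (-1)                     /* 'no process' endpoint */

    struct copy_addr {
        int endpoint;                     /* NONE => addr is already physical */
        vir_bytes addr;
    };

    phys_bytes virtual_to_phys(int endpoint, vir_bytes addr);    /* page-table lookup */
    int phys_copy(phys_bytes src, phys_bytes dst, size_t bytes); /* raw physical copy */

    /* resolve one side: an endpoint of NONE needs no translation */
    static phys_bytes resolve(struct copy_addr a)
    {
        if (a.endpoint == NONE)
            return (phys_bytes) a.addr;
        return virtual_to_phys(a.endpoint, a.addr);
    }

    int generic_copy(struct copy_addr src, struct copy_addr dst, size_t bytes)
    {
        return phys_copy(resolve(src), resolve(dst), bytes);
    }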
Other changes:
. The multiboot code is less special: it does not use mb_print
for its diagnostics any more but uses printf() as normal, saving
the output into the diagnostics buffer, only printing to the
screen using the direct print functions if a panic() occurs.
. The multiboot code uses the flexible 'free memory map list'
style to receive the list of free memory if available.
. The kernel determines the memory layout of the processes to
a degree: it tells VM where the kernel starts and ends and
where the kernel wants the top of the process to be. VM then
uses this entire range, i.e. the stack is right at the top,
and mmap()ped bits of memory are placed below that downwards,
and the break grows upwards.
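As a rough illustration of what that layout policy amounts to (the
struct, the helper and all names are hypothetical; only the ordering of
stack, mmap()ed memory and break comes from the description above):

    typedef unsigned long vir_bytes;

    struct vm_layout {
        vir_bytes proc_top;    /* top of the process, as told by the kernel */
        vir_bytes stack_base;  /* stack sits right below proc_top */
        vir_bytes mmap_hint;   /* mmap()ed regions placed below, growing down */
        vir_bytes brk;         /* break grows upward from the loaded image */
    };

    void init_layout(struct vm_layout *l, vir_bytes proc_top,
                     vir_bytes stack_size, vir_bytes image_end)
    {
        l->proc_top   = proc_top;
        l->stack_base = proc_top - stack_size;
        l->mmap_hint  = l->stack_base;   /* first mmap goes just below the stack */
        l->brk        = image_end;       /* brk starts at the end of the image */
    }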
Other Consequences:
. Every process gets its own page table as address spaces
can't be separated any more by segments.
. As all segments are 0-based, there is no distinction between
virtual and linear addresses, nor between userspace and
kernel addresses.
. Less work is done when context switching, leading to a net
performance increase. (8% faster on my machine for 'make servers'.)
. The layout and configuration of the GDT makes sysenter and syscall
possible.
. new mode for sys_memset: include a process endpoint so the memset
can be done in a physical or a virtual address space.
. add a mode to mmap() that lets a process allocate uninitialized
memory.
. this allows an exec()er (RS, VFS, etc.) to request uninitialized
memory from VM and selectively clear the ranges that don't come
from a file, so that no uninitialized memory is left for the
process to see.
. use callbacks for clearing the process, clearing memory in the
process, and copying into the process, so that the libexec code
can be used from rs, vfs, and in the future, kernel (to load vm)
and vm (to load boot-time processes)
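A possible shape of that callback interface, with made-up names (the
real libexec structure in MINIX may differ), is:

    #include <sys/types.h>

    typedef unsigned long vir_bytes;
    struct exec_info;                    /* opaque per-exec state */

    /* primitives supplied by the caller (RS, VFS, kernel, VM ...) so the
     * generic loader never touches the target address space directly */
    struct exec_callbacks {
        /* wipe the whole target address space before loading */
        int (*clearproc)(struct exec_info *execi);
        /* zero a range in the target, e.g. BSS or the uninitialized tail
         * of a mapping obtained with the new mmap() mode */
        int (*clearmem)(struct exec_info *execi, vir_bytes start, size_t len);
        /* copy file/image data into the target */
        int (*copymem)(struct exec_info *execi, off_t offset,
                       vir_bytes dst, size_t len);
    };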
. readbios call is now a physical copy with a range check in the
kernel call, instead of BIOS_SEG+umap_bios
. requires all access to physical memory in the BIOS range to go
through sys_readbios (see the sketch below)
. drivers/dpeth: wasn't using it
. adjusted printer
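Usage then looks like the hedged sketch below; the prototype shown is a
hypothetical rendering of the kernel call (buffer, physical BIOS address,
length), and the 0x40E EBDA segment pointer is just a classic example of
an address in the BIOS data area:

    typedef unsigned long phys_bytes;

    /* kernel call: copy 'size' bytes from physical 'address' (checked by
     * the kernel to lie in the BIOS range) into 'buf' */
    int sys_readbios(phys_bytes address, void *buf, unsigned long size);

    /* example: fetch the 16-bit EBDA segment pointer from the BIOS data area */
    int read_ebda_segment(unsigned short *seg)
    {
        return sys_readbios(0x40E, seg, sizeof(*seg));
    }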
. remove some call cycles caused by low-level functions invoking
printf(); e.g. send_sig() now returns a value that the caller should
check
. reason: a very-early-phase printf() would trigger another printf(),
causing infinite recursion -> GPF
. move serial initialization a little earlier so DEBUG_EXTRA works for
serial earlier (e.g. its first instance, for "cstart")
. closes tracker item 583:
System Fails to Complete Startup with Verbose 2 and 3 Boot Parameters,
reported by Stephen Hatton / pikpik.
- when the kernel copies from userspace, it must be sure that the TLB
entries are not stale and thus that the referenced memory is correct
- every time we change a process's address space we set its p_stale_tlb
bits for all CPUs
- whenever a cpu finds its bit set when it wants to access the
process's memory, it refreshes the TLB
- this is more conservative than it needs to be, but it has lower
overhead than checking precisely
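A minimal sketch of the scheme, with hypothetical helper names (only
p_stale_tlb and the lazy-refresh idea come from the change itself):

    struct proc {
        unsigned int p_stale_tlb;          /* one bit per CPU */
        /* ... */
    };

    void refresh_tlb(void);                /* e.g. reload CR3 on x86 */

    /* called whenever the process's page tables change */
    void mark_tlb_stale(struct proc *p)
    {
        p->p_stale_tlb = ~0u;              /* conservatively flag every CPU */
    }

    /* called on 'cpu' before the kernel touches the process's memory */
    void ensure_tlb_fresh(struct proc *p, unsigned int cpu)
    {
        if (p->p_stale_tlb & (1u << cpu)) {
            refresh_tlb();
            p->p_stale_tlb &= ~(1u << cpu);
        }
    }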
- with profile --nmi the kernel uses NMI-watchdog-based sampling built
on the Intel architectural performance counters
- using NMI makes kernel profiling possible
- watchdog kernel lockup detection is disabled while sampling as we
may get unpredictable interrupts in kernel and thus possibly many
false positives
- if the watchdog is not enabled at boot time, profiling enables it and
turns it off again when done
- cross-address space copies use these slots to map user memory for
the kernel. This avoids any collisions between CPUs
- for now only a single CPU runs at a time; this is just to
be safe for the future
- APs configure local timers
- while configuring the local APIC timer the CPUs fiddle with the
interrupt handlers. As the interrupt table is shared, the BSP must
not run at that time
- kernel detects CPUs by searching ACPI tables for local apic nodes
- each CPU has its own TSS that points to its own stack. All cpus boot
on the same boot stack (in sequence) but switch to their private
stacks as soon as they can.
- final booting code in main() placed in bsp_finish_booting() which is
executed only after the BSP switches to its final stack
- apic functions to send startup interrupts
- assembler functions to handle CPU features not needed for single-cpu
mode, like memory barriers, HT detection, etc.
- new files kernel/smp.[ch], kernel/arch/i386/arch_smp.c and
kernel/arch/i386/include/arch_smp.h
- 16-bit trampoline code for the APs. It is executed by each AP after
receiving the startup IPIs; it brings the CPU up to 32-bit mode and
lets it spin in an infinite loop so it doesn't do any damage
- implementation of a kernel spinlock (sketched below)
- CONFIG_SMP and CONFIG_MAX_CPUS set by the build system
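The spinlock part can be sketched portably with GCC atomic builtins (the
real MINIX lock is written in assembly; names here are illustrative):

    typedef struct { volatile int locked; } spinlock_t;

    static inline void spinlock_lock(spinlock_t *l)
    {
        /* atomically set 'locked' and get the old value; loop until we
         * were the one to change it from 0 to 1 */
        while (__sync_lock_test_and_set(&l->locked, 1)) {
            while (l->locked)
                ;   /* spin on the cached value to reduce bus traffic */
        }
    }

    static inline void spinlock_unlock(spinlock_t *l)
    {
        __sync_lock_release(&l->locked);   /* write 0 with release semantics */
    }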
- most global variables carry information which is specific to the
local CPU and each CPU must have its own copy
- cpu local variables must be declared in cpulocal.h between the
DECLARE_CPULOCAL_START and DECLARE_CPULOCAL_END markers using the
DECLARE_CPULOCAL macro
- to access the cpu local data, the provided macros must be used:
  get_cpu_var(cpu, name)
  get_cpu_var_ptr(cpu, name)
  get_cpulocal_var(name)
  get_cpulocal_var_ptr(name)
- using these macros makes future changes to the implementation
possible
- switching to ELF will make the declaration of cpu local data much
simpler, e.g.
CPULOCAL int blah;
anywhere in the kernel source code
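One way these macros can fit together, sketched with an array indexed by
CPU id; this layout is an assumption about the implementation (the whole
point of the macros is that it may change, e.g. to the ELF per-CPU
declaration mentioned above), and 'cpuid' is assumed to evaluate to the
executing CPU's id:

    #define CONFIG_MAX_CPUS 8                     /* set by the build system */

    #define DECLARE_CPULOCAL_START  struct cpu_local_vars {
    #define DECLARE_CPULOCAL(type, name)  type name;
    #define DECLARE_CPULOCAL_END    } cpu_local_vars[CONFIG_MAX_CPUS];

    /* declarations, cpulocal.h-style */
    DECLARE_CPULOCAL_START
    DECLARE_CPULOCAL(struct proc *, proc_ptr)   /* currently running process */
    DECLARE_CPULOCAL(int, cpu_is_idle)
    DECLARE_CPULOCAL_END

    /* accessors hide the per-CPU storage behind a fixed interface */
    #define get_cpu_var(cpu, name)      (cpu_local_vars[cpu].name)
    #define get_cpu_var_ptr(cpu, name)  (&cpu_local_vars[cpu].name)
    #define get_cpulocal_var(name)      get_cpu_var(cpuid, name)
    #define get_cpulocal_var_ptr(name)  get_cpu_var_ptr(cpuid, name)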
- kernel turns on IO APICs if no_apic is _not_ set or is set to 0
- the pci driver must use the acpi driver to set up IRQ routing,
otherwise the system cannot work correctly, except on systems like
KVM that use only legacy (E)ISA IRQs 0-15
- kernel exports DSDP (the root pointer where ACPI parsing starts) and
apic_enabled in the machine structure.
- ACPI driver uses DSDP to locate ACPI in memory. acpi_enabled tells
the PCI driver to query ACPI for IRQ routing information.
- the ability for the kernel to use ACPI tables to detect IO APICs. It
is the bare minimum the kernel needs to know about ACPI tables.
- it will be used to find out about processors, as the MPS tables are
deprecated by ACPI and not all vendors provide them.