Commit graph

20 commits

Author SHA1 Message Date
Ben Gras
50e2064049 No more intel/minix segments.
This commit removes all traces of Minix segments (the text/data/stack
memory map abstraction in the kernel) and significance of Intel segments
(hardware segments like CS, DS that add offsets to all addressing before
page table translation). This ultimately simplifies the memory layout
and addressing and makes the same layout possible on non-Intel
architectures.

There are only two types of addresses in the world now: virtual
and physical; even the kernel and processes have the same virtual
address space. Kernel and user processes can be distinguished at a
glance as processes won't use 0xF0000000 and above.

No static pre-allocated memory sizes exist any more.

Changes to booting:
        . pre_init.c leaves the kernel and modules in physical memory
          exactly where the bootloader put them
        . The kernel starts running using physical addressing, loaded
          by the bootloader at a fixed location given in its linker
          script.  All code and data in this phase are linked to this
          fixed low location.
        . It makes a bootstrap pagetable to map itself to a
          fixed high location (also in the linker script) and jumps to
          the high address; a sketch of this mapping step follows this
          list. All code and data then use this high addressing.
        . All code/data symbols linked at the low addresses are prefixed by
          an objcopy step with __k_unpaged_*, so that the low code cannot
          reference high-linked symbols (which aren't valid yet) or vice
          versa (symbols that aren't valid any more).
        . The two addressing modes are separated in the linker script by
          collecting the unpaged_*.o objects and linking them with low
          addresses, and linking the rest high. Some objects are linked
          twice, once low and once high.
        . The bootstrap phase passes a lot of information (e.g. free memory
          list, physical location of the modules, etc.) using the kinfo
          struct.
        . After this bootstrap the low-linked part is freed.
        . The kernel maps in VM into the bootstrap page table so that VM can
          begin executing. Its first job is to make page tables for all other
          boot processes. So VM runs before RS, and RS gets a fully dynamic,
          VM-managed address space. VM gets its privilege info from RS as usual
          but that happens after RS starts running.
        . Both the kernel loading VM and VM organizing boot processes happen
	  using the libexec logic. This removes the last reason for VM to
	  still know much about exec() and vm/exec.c is gone.
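
A minimal sketch of the bootstrap mapping step referenced above, assuming
PSE-style 4 MB page directory entries; KERN_PHYS_BASE, KERN_VIRT_BASE and
build_bootstrap_pagedir() are hypothetical names, not the actual
pre_init/linker-script code:

    #include <stdint.h>

    #define PDE_PRESENT    0x001
    #define PDE_WRITE      0x002
    #define PDE_4MB        0x080           /* PS bit: map a 4 MB page */
    #define KERN_PHYS_BASE 0x00400000u     /* assumed: where the bootloader loaded us */
    #define KERN_VIRT_BASE 0xF0000000u     /* assumed: high base from the linker script */

    static uint32_t bootstrap_pagedir[1024] __attribute__((aligned(4096)));

    void build_bootstrap_pagedir(uint32_t kernel_bytes)
    {
            uint32_t off, flags = PDE_PRESENT | PDE_WRITE | PDE_4MB;

            for (off = 0; off < kernel_bytes; off += 0x400000) {
                    uint32_t phys = KERN_PHYS_BASE + off;
                    /* identity entry: the low-linked code keeps running
                     * while paging is switched on */
                    bootstrap_pagedir[phys >> 22] = phys | flags;
                    /* high entry: where the high-linked kernel will run */
                    bootstrap_pagedir[(KERN_VIRT_BASE + off) >> 22] = phys | flags;
            }
            /* the caller then loads this into %cr3, sets CR4.PSE and CR0.PG,
             * and jumps to the high address */
    }
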

Further Implementation:
        . All segments are based at 0 and have a 4 GB limit.
        . The kernel is mapped in at the top of the virtual address
          space so as not to constrain the user processes.
        . Processes do not use segments from the LDT at all; there are
          no segments in the LDT any more, so no LLDT is needed.
        . The Minix segments T/D/S are gone and so none of the
          user-space or in-kernel copy functions use them. The copy
          functions treat a process endpoint of NONE as meaning a
          physical address, and any other endpoint as a virtual address
          in that process; see the sketch below.
        . The umap call only makes sense to translate a virtual address
          to a physical address now.
        . Segments-related calls like newmap and alloc_segments are gone.
        . All segments-related translation in VM is gone (vir2map etc).
        . Initialization in VM is simpler as no moving around is necessary.
        . VM and all other boot processes can be linked wherever they wish
          and will be mapped in at the right location by the kernel and VM
          respectively.
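
A rough sketch of the physical-vs-virtual convention above, with simplified
stand-ins (generic_copy, do_phys_copy, do_virtual_copy and struct vir_addr are
hypothetical; only the NONE-means-physical rule comes from the text):

    #include <stddef.h>

    typedef int endpoint_t;
    #define NONE ((endpoint_t) -1)

    struct vir_addr { endpoint_t proc_e; unsigned long addr; };

    /* stubs standing in for the real copy primitives */
    static int do_phys_copy(unsigned long s, unsigned long d, size_t n)
    { (void)s; (void)d; (void)n; return 0; }
    static int do_virtual_copy(const struct vir_addr *s,
            const struct vir_addr *d, size_t n)
    { (void)s; (void)d; (void)n; return 0; }

    int generic_copy(const struct vir_addr *src, const struct vir_addr *dst,
            size_t bytes)
    {
            /* endpoint NONE marks the address as physical: no translation */
            if (src->proc_e == NONE && dst->proc_e == NONE)
                    return do_phys_copy(src->addr, dst->addr, bytes);

            /* otherwise each address is virtual and is translated through
             * the page tables of the owning process before copying */
            return do_virtual_copy(src, dst, bytes);
    }
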

Other changes:
        . The multiboot code is less special: it does not use mb_print
          for its diagnostics any more but uses printf() as normal, saving
          the output into the diagnostics buffer, only printing to the
          screen using the direct print functions if a panic() occurs.
        . The multiboot code uses the flexible 'free memory map list'
          style to receive the list of free memory if available.
        . The kernel determines the memory layout of the processes to
          a degree: it tells VM where the kernel starts and ends and
          where the kernel wants the top of the process to be. VM then
          uses this entire range, i.e. the stack is right at the top,
          and mmap()ped bits of memory are placed below that downwards,
          and the break grows upwards.

Other Consequences:
        . Every process gets its own page table as address spaces
          can't be separated any more by segments.
        . As all segments are 0-based, there is no distinction between
          virtual and linear addresses, nor between userspace and
          kernel addresses.
        . Less work is done when context switching, leading to a net
          performance increase. (8% faster on my machine for 'make servers'.)
        . The layout and configuration of the GDT make sysenter and syscall
          possible; see the sketch below.
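
A minimal sketch of the GDT/sysenter relationship from the last item:
SYSENTER/SYSEXIT derive the kernel SS and the user CS/SS from fixed offsets
off IA32_SYSENTER_CS, so those descriptors must sit consecutively in the GDT.
The MSR numbers are architectural; the selector value and the function names
are assumptions:

    #include <stdint.h>

    #define MSR_SYSENTER_CS   0x174
    #define MSR_SYSENTER_ESP  0x175
    #define MSR_SYSENTER_EIP  0x176

    /* assumed GDT order: kernel CS, kernel SS (+8), user CS (+16),
     * user SS (+24) -- the layout SYSENTER/SYSEXIT require */
    #define KERN_CS_SELECTOR  0x08

    static inline void wrmsr(uint32_t msr, uint64_t val)
    {
            __asm__ volatile("wrmsr" : : "c"(msr),
                    "a"((uint32_t) val), "d"((uint32_t)(val >> 32)));
    }

    void enable_sysenter(uint32_t kernel_stack_top, uint32_t kernel_entry)
    {
            wrmsr(MSR_SYSENTER_CS, KERN_CS_SELECTOR);   /* anchors the group */
            wrmsr(MSR_SYSENTER_ESP, kernel_stack_top);  /* kernel stack on entry */
            wrmsr(MSR_SYSENTER_EIP, kernel_entry);      /* kernel entry point */
    }
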
2012-07-15 22:30:15 +02:00
Ben Gras
8c4cdbd3c5 import genassym and use it for sconst.h in kernel 2012-03-31 15:29:53 +02:00
Ben Gras
b984fa41df Revert "print kernel stacktrace for exceptions in kernel"
This reverts commit eff1369cab.

This was in a working branch and I only intended to commit
exception.c. But I committed the exact inverse. Sorry.
2011-07-22 15:01:44 +02:00
Ben Gras
eff1369cab print kernel stacktrace for exceptions in kernel
fpu alignment check feature, checksum feature
2011-07-22 11:03:45 +00:00
Tomas Hruby
62c666566e SMP - We boot APs
- kernel detects CPUs by searching ACPI tables for local apic nodes

- each CPU has its own TSS that points to its own stack. All CPUs boot
  on the same boot stack (in sequence) but switch to their private
  stacks as soon as they can.

- final booting code in main() placed in bsp_finish_booting() which is
  executed only after the BSP switches to its final stack

- apic functions to send startup interrupts

- assembler functions to handle CPU features not needed in single-CPU
  mode, like memory barriers, HT detection, etc.

- new files kernel/smp.[ch], kernel/arch/i386/arch_smp.c and
  kernel/arch/i386/include/arch_smp.h

- 16-bit trampoline code for the APs. It is executed by each AP after
  receiving the startup IPIs; it brings the CPU up to 32-bit mode and
  lets it spin in an infinite loop so it doesn't do any damage.

- implementation of a kernel spinlock (sketched below)

- CONFIG_SMP and CONFIG_MAX_CPUS set by the build system
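
A minimal sketch of a test-and-set kernel spinlock of the kind the list above
mentions, using atomic xchg (the real smp/arch_smp code has more to it):

    typedef struct { volatile unsigned int locked; } spinlock_t;

    static inline void spin_lock(spinlock_t *l)
    {
            unsigned int old;

            do {
                    old = 1;
                    /* atomically swap 1 into the lock word; reading back 0
                     * means we acquired the lock */
                    __asm__ volatile("xchgl %0, %1"
                            : "+r"(old), "+m"(l->locked) : : "memory");
            } while (old != 0);
    }

    static inline void spin_unlock(spinlock_t *l)
    {
            __asm__ volatile("" : : : "memory");  /* barrier before release */
            l->locked = 0;
    }
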
2010-09-15 14:09:52 +00:00
Tomas Hruby
cbc9586c13 Lazy FPU
- FPU context is saved only on a conflict between two FPU users, or
  when exporting the context of a process to userspace while it is the
  active user of the FPU

- FPU has its owner (fpu_owner) which points to the process whose
  state is currently loaded in FPU

- the FPU exception is only turned on when scheduling a process which
  is not the owner of the FPU (see the sketch below)

- FPU state is restored for the process that generated the FPU
  exception. This process runs immediately, without letting the
  scheduler pick a new process, to resolve the FPU conflict asap and
  minimize FPU thrashing and FPU exception handler execution

- all non-FPU-exception kernel entries are faster, as FPU state is
  neither checked nor saved

- removed MF_USED_FPU flag, only MF_FPU_INITIALIZED remains to signal
  that a process has used FPU in the past
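
A rough sketch of the lazy-FPU scheme described above, assuming FXSAVE/FXRSTOR
hardware; struct proc, fpu_schedule() and fpu_exception() are simplified,
hypothetical stand-ins for the real kernel code:

    #include <stdint.h>
    #include <stddef.h>

    struct proc {
            uint8_t fpu_state[512] __attribute__((aligned(16))); /* FXSAVE area */
            int     fpu_initialized;        /* has used the FPU in the past */
    };

    static struct proc *fpu_owner;  /* whose state the FPU currently holds */

    static inline void set_ts(void)
    {
            /* set CR0.TS so the next FPU instruction raises #NM */
            __asm__ volatile("mov %%cr0, %%eax; orl $0x8, %%eax; mov %%eax, %%cr0"
                    : : : "eax");
    }
    static inline void clts(void) { __asm__ volatile("clts"); }

    /* called when 'next' is about to be scheduled */
    void fpu_schedule(struct proc *next)
    {
            if (next != fpu_owner)
                    set_ts();       /* any FPU use by 'next' will trap */
            else
                    clts();         /* the owner may use the FPU freely */
    }

    /* #NM handler: the current process touched the FPU but does not own it */
    void fpu_exception(struct proc *current)
    {
            clts();
            if (fpu_owner != NULL)
                    __asm__ volatile("fxsave %0" : "=m"(fpu_owner->fpu_state));
            if (current->fpu_initialized)
                    __asm__ volatile("fxrstor %0" : : "m"(current->fpu_state));
            else {
                    __asm__ volatile("fninit");
                    current->fpu_initialized = 1;
            }
            fpu_owner = current;    /* current keeps running immediately */
    }
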
2010-06-07 07:43:17 +00:00
Ben Gras
2f892aca91 kernel fpu context switching: fix race condition
There seems to have been a broken assumption in the fpu context
restoring code.  It restores the context of the running process, without
guarantee that the current process is the one that will be scheduled.
This caused fpu saving for a different process to be triggered without
fpu hardware being enabled, causing an fpu exception in the kernel. This
practically only shows up with DEBUG_RACE on. Fix by thruby+me.

The fix
 . is to only set the fpu-in-use-by-this-process flag in the
   exception handler, and then take care of fpu restoring when
   actually returning to userspace

And the patch
 . translates fpu saving and restoring to C in arch_system.c,
   getting rid of a juicy chunk of assembly
 . makes osfxsr_feature private to arch_system.c
 . removes most of the arch dependent code from do_sigsend
2010-06-03 11:32:22 +00:00
Arun Thomas
b0159ad168 Buildsystem changes for GCC
-Makefile updates
-Update mkdep
-Build fixes/warning cleanups for some programs
-Restore leading underscores on global syms in kernel asm files
-Increase ramdisk size
2010-05-19 13:24:15 +00:00
Ben Gras
c5c25e7abc kernel/vm: change pde table info from single buffer to explicit per-process.
makes code in the kernel more readable, and allows better sanity checking
when using the pde info.
2010-05-12 08:31:05 +00:00
Arun Thomas
4ed3a0cf3a Convert kernel over to bsdmake 2010-04-01 22:22:33 +00:00
Tomas Hruby
1dd6f5573a Direction flag
- ack assumes that the direction flag in eflags is clear when
  assigning one structure to another. The assignment is implemented by
  a call to a built-in function which is like memcpy but needs the flag
  to be clear, otherwise rubbish is copied. This patch fixes the kernel
  entries (illustrated below).
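
A tiny illustration of the assumption being fixed (hedged, not the actual
entry code): structure assignment compiles to a forward block copy, which only
works if EFLAGS.DF is clear, so the kernel entry paths execute cld.

    /* what the kernel entry stubs must guarantee before C code runs */
    #define CLEAR_DF() __asm__ volatile("cld")

    struct message { int m_type; char payload[60]; };

    void deliver(struct message *dst, const struct message *src)
    {
            CLEAR_DF();     /* in reality done once, at kernel entry */
            *dst = *src;    /* the compiler emits a memcpy-like builtin; with
                             * DF set it would copy backwards and produce
                             * rubbish */
    }
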
2010-03-26 12:29:52 +00:00
Tomas Hruby
8451a86f0a Interrupt handling while idle
- When the cpu halts, interrupts are enabled so the cpu may be woken
  up. When the interrupt handler returns but another interrupt is
  pending, it is also serviced immediately. This is not a problem
  per se. It only slightly breaks time accounting, as the kernel time
  spent in the interrupt handler is accounted as idle.

- As the big kernel lock is locked/unlocked in the smp branch in the
  time accounting functions (as they are called exactly at the places
  we need to take the lock) this leads to a deadlock.

- we make sure that once the interrupt handler returns from the nested
  trap, the interrupts are disabled. This means that only one
  interrupt is serviced after idle is interrupted.

- this requires the loop in apic timer calibration to keep reenabling
  the interrupts. I admit it is a little bit hackish (one line),
  however, this code is a stupid corner case at boot time.
  Hopefully it does not matter too much.
2010-03-23 13:35:01 +00:00
Tomas Hruby
5efa92f754 NMI watchdog is an awesome feature for debugging locked up kernels.
There is not that much use for it on a single CPU; however, a deadlock
between the kernel and the system task, or a runaway loop, can be detected.

If the kernel gets locked up, timer interrupts don't occur (as all
interrupts are disabled in kernel mode). The only chance is to
interrupt the kernel with a non-maskable interrupt.

This patch generates NMIs using performance counters. It uses the most
widely available performance counters. As the performance counters are
highly model-specific, this patch is not guaranteed to work on every
machine.  Unfortunately this is also true for KVM :-/ On the other
hand, adding this feature for other models is not extremely difficult
and the framework hopefully makes it easy enough. (A sketch of the
mechanism follows at the end of this message.)

Depending on the frequency of the CPU, an NMI is generated at most
about every 0.5s. If the cpu's speed is less than 2 GHz it is generated
at most every 1s. In general an NMI is generated much less often, as
the performance counter counts down only while the cpu is not idle.
Therefore the overhead of this feature is fairly minimal even if the
load is high.

Upon detecting that the kernel is locked up, it dumps the state of
the kernel registers and panics.

Local APIC must be enabled for the watchdog to work.

The code is _always_ compiled in; however, it is only enabled if
watchdog=<non-zero> is set in the boot monitor.

One corner case is serial console debugging. As dumping a lot of stuff
to the serial link may take a lot of time, the watchdog does not
detect lockups during this time, as that would result in too many
false positives. 10 NMIs have to be handled before a lockup is
detected, which means something between roughly 5s and 10s.

Another corner case is that the watchdog is enabled only after paging
is enabled, as it would be pure madness to try to get it right earlier.
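
The sketch mentioned above, for one common flavour of Intel counters (event
0x3C, unhalted core cycles): the local APIC's performance-counter LVT entry is
set to NMI delivery and the counter is preloaded with the negative of the
desired period so it overflows after that many non-idle cycles.  The MSR
numbers and LVT offset are architectural; the names, the lapic pointer and the
period handling are assumptions:

    #include <stdint.h>

    #define MSR_PERFEVTSEL0   0x186
    #define MSR_PMC0          0x0C1
    #define LAPIC_LVT_PERF    0x340          /* perf-counter LVT register offset */
    #define APIC_DM_NMI       0x00000400     /* delivery mode: NMI */

    #define EVTSEL_CYCLES     0x3C           /* unhalted core cycles */
    #define EVTSEL_USR        (1u << 16)     /* count in user mode */
    #define EVTSEL_OS         (1u << 17)     /* count in kernel mode */
    #define EVTSEL_INT        (1u << 20)     /* interrupt (here: NMI) on overflow */
    #define EVTSEL_EN         (1u << 22)

    extern volatile uint32_t *lapic;         /* assumed: mapped local APIC base */

    static inline void wrmsr(uint32_t msr, uint64_t v)
    {
            __asm__ volatile("wrmsr" : : "c"(msr),
                    "a"((uint32_t) v), "d"((uint32_t)(v >> 32)));
    }

    void nmi_watchdog_arm(uint32_t cycles_until_nmi)
    {
            /* route counter overflow to the CPU as a non-maskable interrupt */
            lapic[LAPIC_LVT_PERF / 4] = APIC_DM_NMI;

            /* the counter counts up and fires on overflow, so preload its
             * negative; the low 32 bits are sign-extended to the counter
             * width on the models this targets */
            wrmsr(MSR_PMC0, (uint32_t) -cycles_until_nmi);
            wrmsr(MSR_PERFEVTSEL0, EVTSEL_CYCLES | EVTSEL_USR | EVTSEL_OS
                    | EVTSEL_INT | EVTSEL_EN);
    }
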
2010-01-16 20:53:55 +00:00
Ben Gras
bd42705433 FPU context switching support by Evgeniy Ivanov. 2009-12-02 13:01:48 +00:00
Tomas Hruby
8a44a44cb9 Local APIC
- local APIC timer used as the source of time

- PIC is still used as the hw interrupt controller as we don't have
  enough info without ACPI or MPS to set up IO APICs

- remapping of the APIC when switching paging on uses the new mechanism
  to tell VM what phys areas to map in the kernel's virtual space

- one more step to SMP

based on code by Arun C.
2009-11-16 21:41:44 +00:00
Tomas Hruby
ae75f9d4e5 Removal of the executable flag from files that cannot be executed
- 755 -> 644
2009-11-09 10:26:00 +00:00
Tomas Hruby
ebbce7507b Complete overhaul of mode switching code
- after a trap to kernel, the code automatically switches to kernel
  stack, in the future local to the CPU

- k_reenter variable replaced by a test of whether the CS is the
  kernel CS or not. The information is passed further if needed.
  Removes a global variable which would need to be cpu local

- no need for global variables describing the exception or trap
  context. This information is kept on stack and a pointer to this
  structure is passed to the C code as a single structure

- removed loadedcr3 variable and its use replaced by reading the %cr3
  register

- no need to redisable interrupts in restart() as they are already
  disabled.

- unified handling of traps that push and don't push an error code

- removed save() function as the process context is not saved directly
  to process table but saved as required by the trap code. Essentially
  it means that save() code is inlined everywhere not only in the
  exception handling routine

- returning from a syscall is more arch independent - it sets the
  return register in C

- the top of the x86 stack contains the current CPU id and a pointer to
  the currently scheduled process (the one that was just interrupted) so
  the mode switch code can find where to save the context without
  needing to use proc_ptr, which will be cpu local in the future and
  therefore difficult to access in assembler and expensive to access in
  general (see the sketch after this list)

- some more clean up of level0 code. No need to read back the argument
  passed in %eax from the proc structure. The mode switch code does not
  clobber the general registers and hence we can just call what is in
  %eax

- many assembly macros in sconst.h as they will be reused by the apic
  assembly
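
A sketch of the stack-top idea from the list above: the last words of each
per-CPU kernel stack hold the CPU id and a pointer to the just-interrupted
process, so the assembly mode-switch code can reach them at a fixed offset
instead of going through proc_ptr.  The names and the exact layout are
assumptions:

    #include <stdint.h>

    struct proc;                            /* kernel process slot */

    /* lives at the very top of each per-CPU kernel stack */
    struct stacktop {
            struct proc *proc_ptr;          /* process that was just interrupted */
            uint32_t     cpu;               /* id of the CPU owning this stack */
    };

    /* constant-offset access from the stack base, usable from assembly too */
    #define STACKTOP(base, size) \
            ((struct stacktop *)((char *)(base) + (size) - sizeof(struct stacktop)))

    void setup_cpu_stack(void *stack_base, uint32_t stack_size, uint32_t cpu_id)
    {
            struct stacktop *st = STACKTOP(stack_base, stack_size);

            st->cpu      = cpu_id;
            st->proc_ptr = 0;               /* filled in by the mode-switch code */
    }
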
2009-11-06 09:08:26 +00:00
Tomas Hruby
403764c538 Conversion of kernel assembly from ACK to GNU
- .s files removed and replaced by .S, as .S is the standard extension for assembly that needs preprocessing
2009-10-30 16:00:44 +00:00
Ben Gras
c078ec0331 Basic VM and other minor improvements.
Not complete, probably not fully debugged or optimized.
2008-11-19 12:26:10 +00:00
Ben Gras
6f77685609 Split of architecture-dependent and -independent functions for i386,
mainly in the kernel and headers. This split based on work by
Ingmar Alting <iaalting@cs.vu.nl> done for his Minix PowerPC architecture
port.

 . the kernel no longer programs the interrupt controller directly,
   performs other architecture-dependent operations, or contains
   assembly, but uses architecture-dependent functions in arch/$(ARCH)/.
 . architecture-dependent constants and types defined in arch/$(ARCH)/include.
 . <ibm/portio.h> moved to <minix/portio.h>, as they have become, for now,
   architecture-independent functions.
 . int86, sdevio, readbios, and iopenable are now i386-specific kernel calls
   and live in arch/i386/do_* now.
 . i386 arch now supports even less 86 code; e.g. mpx86.s and klib86.s have
   gone, and 'machine.protected' is gone (and always taken to be 1 in i386).
   If 86 support is to return, it should be a new architecture.
 . prototypes for the architecture-dependent functions defined in
   kernel/arch/$(ARCH)/*.c but used in kernel/ are in kernel/proto.h
 . /etc/make.conf included in makefiles and shell scripts that need to
   know the building architecture; it defines ARCH=<arch>, currently only
   i386.
 . some basic per-architecture build support outside of the kernel (lib)
 . in clock.c, only dequeue a process if it was ready
 . fixes for new include files

files deleted:
 . mpx/klib.s - only for choosing between mpx/klib86 and -386
 . klib86.s - only for 86

i386-specific files moved (or arch-dependent stuff moved) to arch/i386/:
 . mpx386.s (entry point)
 . klib386.s
 . sconst.h
 . exception.c
 . protect.c
 . protect.h
 . i8259.c
2006-12-22 15:22:27 +00:00
Renamed from kernel/sconst.h