Commit graph

29 commits

Ben Gras
50e2064049 No more intel/minix segments.
This commit removes all traces of Minix segments (the text/data/stack
memory map abstraction in the kernel) and the significance of Intel segments
(hardware segments like CS, DS that add offsets to all addressing before
page table translation). This ultimately simplifies the memory layout
and addressing, and makes the same layout possible on non-Intel
architectures.

There are only two types of addresses in the world now: virtual
and physical; even the kernel and processes have the same virtual
address space. Kernel and user processes can be distinguished at a
glance as processes won't use 0xF0000000 and above.

No static pre-allocated memory sizes exist any more.

Changes to booting:
        . pre_init.c leaves the kernel and modules in physical memory
          exactly as they were left by the bootloader
        . The kernel starts running with physical addressing, loaded by
          the bootloader at a fixed location given in its linker script.
          All code and data in this phase are linked at this fixed low
          location.
        . It makes a bootstrap page table to map itself to a
          fixed high location (also in the linker script) and jumps to
          the high address. All code and data then use this high addressing
          (a sketch of such a bootstrap mapping follows this list).
        . All code/data symbols linked at the low addresses are prefixed by
          an objcopy step with __k_unpaged_*, so that that code cannot
          reference symbols linked at the high addresses (which aren't valid
          yet) or vice versa (symbols that aren't valid any more).
        . The two addressing modes are separated in the linker script by
          collecting the unpaged_*.o objects and linking them with low
          addresses, and linking the rest high. Some objects are linked
          twice, once low and once high.
        . The bootstrap phase passes a lot of information (e.g. free memory
          list, physical location of the modules, etc.) using the kinfo
          struct.
        . After this bootstrap the low-linked part is freed.
        . The kernel maps VM into the bootstrap page table so that VM can
          begin executing. Its first job is to make page tables for all other
          boot processes. So VM runs before RS, and RS gets a fully dynamic,
          VM-managed address space. VM gets its privilege info from RS as
          usual, but that happens after RS starts running.
        . Both the kernel loading VM and VM organizing the boot processes
          happen using the libexec logic. This removes the last reason for
          VM to still know much about exec(), and vm/exec.c is gone.
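
As an illustration of the bootstrap-mapping step above, a minimal, hedged
sketch in C follows. The constants, names and the 4 MB page shortcut are
assumptions made for this example only, not the actual MINIX code:

    #include <stdint.h>

    #define PAGE_DIR_ENTRIES 1024
    #define BIG_PAGE_SIZE    0x400000u    /* 4 MB pages via PSE (sketch)   */
    #define KERN_PHYS_BASE   0x00400000u  /* assumed physical load address */
    #define KERN_VIRT_BASE   0xF0000000u  /* assumed high virtual base     */
    #define PDE_PRESENT      0x001u
    #define PDE_RW           0x002u
    #define PDE_BIGPAGE      0x080u       /* PS bit: 4 MB page             */

    /* Bootstrap page directory; lives in the low, physically addressed image. */
    static uint32_t pagedir[PAGE_DIR_ENTRIES] __attribute__((aligned(4096)));

    /* Map the kernel twice: identity-mapped at its load address, so the
     * currently executing low-linked code keeps working the moment paging
     * is switched on, and at the high virtual base where the rest of the
     * kernel is linked. */
    void build_bootstrap_pagedir(uint32_t kernel_size)
    {
        uint32_t off;
        for (off = 0; off < kernel_size; off += BIG_PAGE_SIZE) {
            uint32_t pde = (KERN_PHYS_BASE + off)
                         | PDE_PRESENT | PDE_RW | PDE_BIGPAGE;
            pagedir[(KERN_PHYS_BASE + off) / BIG_PAGE_SIZE] = pde; /* low  */
            pagedir[(KERN_VIRT_BASE + off) / BIG_PAGE_SIZE] = pde; /* high */
        }
        /* The caller would then load CR3 with the physical address of
         * pagedir, enable PSE and PG, and jump to the high-linked entry
         * point. */
    }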

Further Implementation:
        . All segments are based at 0 and have a 4 GB limit.
        . The kernel is mapped in at the top of the virtual address
          space so as not to constrain the user processes.
        . Processes do not use segments from the LDT at all; there are
          no segments in the LDT any more, so no LLDT is needed.
        . The Minix segments T/D/S are gone, and so none of the
          user-space or in-kernel copy functions use them. The copy
          functions use a process endpoint of NONE to indicate a physical
          address, and treat the address as virtual otherwise (see the
          sketch after this list).
        . The umap call only makes sense to translate a virtual address
          to a physical address now.
        . Segments-related calls like newmap and alloc_segments are gone.
        . All segments-related translation in VM is gone (vir2map etc).
        . Initialization in VM is simpler as no moving around is necessary.
        . VM and all other boot processes can be linked wherever they wish
          and will be mapped in at the right location by the kernel and VM
          respectively.
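
A hedged sketch of the NONE convention mentioned above; the types and the
copy helpers here are invented stand-ins for the example, not the kernel's
real interfaces:

    #include <stddef.h>
    #include <stdint.h>

    typedef uint32_t vir_bytes;
    typedef uint32_t phys_bytes;
    typedef int endpoint_t;

    #define NONE ((endpoint_t) -1)   /* assumed sentinel for "no process" */

    /* Invented stand-ins for the real low-level copy primitives. */
    int copy_physical(phys_bytes src, phys_bytes dst, size_t len);
    int copy_virtual(endpoint_t src_ep, vir_bytes src,
                     endpoint_t dst_ep, vir_bytes dst, size_t len);

    /* One address: if proc_ep == NONE the offset is a physical address,
     * otherwise it is a virtual address in that process' address space. */
    struct addr_sketch {
        endpoint_t proc_ep;
        vir_bytes  offset;
    };

    int generic_copy(const struct addr_sketch *src,
                     const struct addr_sketch *dst, size_t len)
    {
        if (src->proc_ep == NONE && dst->proc_ep == NONE)
            return copy_physical(src->offset, dst->offset, len);

        /* Mixed cases would translate the physical side first; omitted. */
        return copy_virtual(src->proc_ep, src->offset,
                            dst->proc_ep, dst->offset, len);
    }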

Other changes:
        . The multiboot code is less special: it does not use mb_print
          for its diagnostics any more but uses printf() as normal, saving
          the output into the diagnostics buffer, only printing to the
          screen using the direct print functions if a panic() occurs.
        . The multiboot code uses the flexible 'free memory map list'
          style to receive the list of free memory if available.
        . The kernel determines the memory layout of the processes to
          a degree: it tells VM where the kernel starts and ends and
          where the kernel wants the top of the process to be. VM then
          uses this entire range, i.e. the stack is right at the top,
          mmap()ped bits of memory are placed below that growing downwards,
          and the break grows upwards (see the sketch after this list).
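
A small sketch of the resulting single-range layout; the constants below are
placeholders, as the real values are handed to VM by the kernel:

    #include <stdint.h>

    typedef uint32_t vir_bytes;

    #define USER_TOP       0xF0000000u  /* placeholder: kernel-reported top */
    #define STACK_RESERVE  0x01000000u  /* placeholder: room kept for stack */

    /* The stack sits right at the top of the range, mmap()ed regions are
     * carved out below it growing downwards, and the break (heap) grows
     * upwards from the end of the static data. */
    struct as_layout {
        vir_bytes stack_top;     /* == USER_TOP                          */
        vir_bytes mmap_ceiling;  /* first address below the stack area   */
        vir_bytes brk_start;     /* end of static data; break grows up   */
    };

    struct as_layout make_layout(vir_bytes data_end)
    {
        struct as_layout l;
        l.stack_top    = USER_TOP;
        l.mmap_ceiling = USER_TOP - STACK_RESERVE;
        l.brk_start    = data_end;
        return l;
    }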

Other Consequences:
        . Every process gets its own page table as address spaces
          can't be separated any more by segments.
        . As all segments are 0-based, there is no distinction between
          virtual and linear addresses, nor between userspace and
          kernel addresses.
        . Less work is done when context switching, leading to a net
          performance increase. (8% faster on my machine for 'make servers'.)
        . The layout and configuration of the GDT make sysenter and syscall
          possible (see the sketch below).
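
For context on that last point, a generic x86 sketch of the SYSENTER MSR
setup that such a GDT layout permits; the selector and entry-point values
are assumptions, and the real kernel also has to handle SYSCALL and the
return path:

    #include <stdint.h>

    #define MSR_SYSENTER_CS   0x174   /* IA32_SYSENTER_CS  */
    #define MSR_SYSENTER_ESP  0x175   /* IA32_SYSENTER_ESP */
    #define MSR_SYSENTER_EIP  0x176   /* IA32_SYSENTER_EIP */

    static inline void wrmsr(uint32_t msr, uint64_t val)
    {
        __asm__ volatile("wrmsr" : : "c"(msr),
                         "a"((uint32_t)val), "d"((uint32_t)(val >> 32)));
    }

    /* SYSENTER derives its segments from one base selector: CS comes from
     * IA32_SYSENTER_CS, SS is that value + 8, and SYSEXIT uses +16/+24 for
     * the user CS/SS -- which is why the GDT ordering matters here. */
    void sysenter_setup(uint16_t kernel_cs, uint32_t kstack_top,
                        uint32_t entry_point)
    {
        wrmsr(MSR_SYSENTER_CS,  kernel_cs);
        wrmsr(MSR_SYSENTER_ESP, kstack_top);
        wrmsr(MSR_SYSENTER_EIP, entry_point);
    }
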
2012-07-15 22:30:15 +02:00
Ben Gras
7336a67dfe retire PUBLIC, PRIVATE and FORWARD 2012-03-25 21:58:14 +02:00
Tomas Hruby
758d788bbe SMP - async send SMP safe
- we must not deliver messages from/to unstable address spaces.
  In such a case, we must postpone the delivery. To make sure
  that a process which is expecting an asynchronous message does
  not starve, we must remember that we skipped delivery of some
  messages and try to deliver them again once the source
  address space is stable again.
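
A minimal sketch of the postpone-and-retry idea; the structure and field
names below are invented for illustration, not the kernel's real ones:

    #include <stdbool.h>

    /* Invented stand-ins for the real process structure and flags. */
    struct proc_sketch {
        bool vm_unstable;       /* VM is changing this address space now */
        bool delayed_asynmsg;   /* an async delivery was skipped for it  */
    };

    int deliver_async(struct proc_sketch *src, struct proc_sketch *dst);

    /* Try to deliver; if either address space is unstable, remember that a
     * delivery was skipped so it is retried once the space is stable again,
     * instead of letting the receiver starve. */
    int try_deliver_async(struct proc_sketch *src, struct proc_sketch *dst)
    {
        if (src->vm_unstable || dst->vm_unstable) {
            src->delayed_asynmsg = true;  /* re-scan this source's table later */
            return 0;                     /* postponed, not an error           */
        }
        return deliver_async(src, dst);
    }

    /* Called once VM reports the address space stable again. */
    void address_space_stable(struct proc_sketch *p)
    {
        if (p->delayed_asynmsg) {
            p->delayed_asynmsg = false;
            /* walk the pending async message table and deliver */
        }
    }
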
2012-01-13 11:30:01 +00:00
Tomas Hruby
8fa95abae4 SMP - fixed usage of stale TLB entries
- when the kernel copies from userspace, it must be sure that the TLB
  entries are not stale and thus that the referenced memory is correct

- every time we change a process' address space, we set the p_stale_tlb
  bits for all CPUs.

- Whenever a cpu finds its bit set when it wants to access the
  process' memory, it refreshes the TLB

- it is more conservative than it needs to be, but it has lower
  overhead than checking precisely
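
A sketch of the per-CPU staleness bits described here; the field name
follows the commit message, the rest is simplified for illustration:

    #include <stdint.h>

    #define NR_CPUS 8   /* arbitrary for the sketch */

    struct proc_sketch {
        uint32_t p_stale_tlb;   /* bit i set: CPU i may hold stale entries */
    };

    void refresh_tlb(void);     /* arch-specific: reload CR3 / invalidate  */

    /* Whenever the process' address space changes, mark it stale on every
     * CPU -- conservative, but the check below is cheap. */
    void mark_tlb_stale(struct proc_sketch *p)
    {
        p->p_stale_tlb = (uint32_t)((1u << NR_CPUS) - 1);
    }

    /* Before this CPU touches the process' memory (e.g. a copy from
     * userspace), refresh the TLB if our bit is set. */
    void ensure_tlb_fresh(struct proc_sketch *p, unsigned cpu)
    {
        if (p->p_stale_tlb & (1u << cpu)) {
            refresh_tlb();
            p->p_stale_tlb &= ~(1u << cpu);
        }
    }
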
2012-01-13 11:30:00 +00:00
Tomas Hruby
9e01a83636 SMP - reduced TLB flushing
- flush the TLB of a process only if its page tables have been changed and
  the page tables of this process are already loaded on this cpu, which
  means that there might be stale entries in the TLB. Until now SMP was
  always flushing the TLB to make sure everything is consistent.
2010-10-25 16:21:23 +00:00
Tomas Hruby
6513d20744 SMP - Process is stopped when VM modifies the page tables
- the RTS_VMINHIBIT flag is used to stop a process while VM is fiddling with
  its pagetables

- more generic way of sending synchronous scheduling events among cpus

- do the x-cpu smp sched calls only if the target process is runnable.
  If it is not, it cannot be running, and it cannot become runnable
  as long as this CPU holds the BKL
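
A hedged sketch of the inhibit/resume idea; the real kernel does this
through the RTS_SET/RTS_UNSET machinery, and the names and flag value below
are simplified for the example:

    #include <stdbool.h>

    #define RTS_VMINHIBIT 0x080   /* value chosen for the sketch only */

    struct proc_sketch {
        unsigned p_rts_flags;     /* runnable iff all flags are clear */
    };

    void smp_sched_stop_proc(struct proc_sketch *p);  /* x-cpu sched call */

    /* Before VM touches this process' page tables, take it off the CPUs. */
    void vm_inhibit(struct proc_sketch *p)
    {
        bool was_runnable = (p->p_rts_flags == 0);

        p->p_rts_flags |= RTS_VMINHIBIT;
        if (was_runnable)
            smp_sched_stop_proc(p);  /* only a runnable process can be
                                      * running elsewhere; it cannot become
                                      * runnable while we hold the BKL */
    }

    /* Once the page tables are consistent again, let it run. */
    void vm_uninhibit(struct proc_sketch *p)
    {
        p->p_rts_flags &= ~RTS_VMINHIBIT;
    }
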
2010-09-15 14:11:12 +00:00
Tomas Hruby
ce4fd0c0fb Enable paging - some more code reshuffling 2010-09-15 14:09:41 +00:00
Tomas Hruby
40f440b8cd KCall methods do not depend on m_source and m_type fields
- replaced the use of the m_source message field with
  caller->p_endpoint in kernel calls. It is the same information, just
  passed more intuitively.
  
- the last dependency on the m_type field is removed.
  
- do_unused() is substituted by a check for NULL.

- this pretty much removes the dependency of kernel calls on the general
  message format. In the future this may be used to pass the kcall
  arguments in a different structure or in registers (x86-64, ARM?). The
  kcall number may be passed in a register already.
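
A sketch of the resulting handler shape; names and types are simplified for
the example:

    typedef int endpoint_t;

    struct proc_sketch { endpoint_t p_endpoint; };
    typedef struct { int m_type; endpoint_t m_source; int payload; } message;

    /* Before: a handler would read m->m_source to learn who called it.
     * After: the kernel passes the caller's proc entry explicitly, so the
     * handler depends neither on m_source nor on m_type (the call number
     * already selected the handler). */
    int do_example(struct proc_sketch *caller, message *m)
    {
        endpoint_t who = caller->p_endpoint;  /* same info as m_source,
                                               * passed more directly   */
        (void)who;
        (void)m;
        return 0;  /* OK */
    }
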
2010-06-01 08:54:31 +00:00
Erik van der Kouwe
1f11a57141 Oops, last commit included more than was intended 2010-05-20 08:07:47 +00:00
Erik van der Kouwe
5f15ec05b2 More system processes; this was not enough for the release script to run on some configurations 2010-05-20 08:05:07 +00:00
Arun Thomas
4ed3a0cf3a Convert kernel over to bsdmake 2010-04-01 22:22:33 +00:00
Kees van Reeuwijk
fc7dced1fa Fix printfs with too few or too many parms, remove unused vars, fix incorrect flag tests, other code cleanup. 2010-04-01 13:25:05 +00:00
Tomas Hruby
12ef495cac atomicity fix when enabling paging
- before enabling paging, VM asks the kernel to resize its segments. This
  may cause the kernel to segfault if the APIC is used and an interrupt
  happens between this call and paging being enabled. As these are 2
  separate vmctl calls, the sequence is not atomic. This patch fixes the
  problem: VM no longer asks the kernel to resize the segments in a
  separate call; the new segment limit is part of the "enable paging"
  call instead. It generalizes this call in such a way that more
  information can be passed as need be, or the information may be
  completely different if another architecture requires this.
2010-03-22 07:42:52 +00:00
Ben Gras
0937d6c367 re-establish kernel assert()s.
use the regular <assert.h> assert() instead of vmassert() in
kernel. throw out some #if 0 code. fix a few assert() conditions.
enable by default.
2010-03-10 13:00:05 +00:00
Ben Gras
35a108b911 panic() cleanup.
this change
   - makes panic() variadic, doing full printf() formatting -
     no more NO_NUM, and no more separate printf() statements
     needed to print extra info (or something in hex) before panicking
   - unifies panic() - same panic() name and usage for everyone -
     vm, kernel and rest have different names/syntax currently
     in order to implement their own luxuries, but no longer
   - throws out the 1st argument, to make the source less noisy.
     the panic() in syslib retrieves the server name from the kernel
     so it should be clear enough who is panicking; e.g.
         panic("sigaction failed: %d", errno);
     looks like:
         at_wini(73130): panic: sigaction failed: 0
         syslib:panic.c: stacktrace: 0x74dc 0x2025 0x100a
   - throws out report() - printf() is more convenient and powerful
   - harmonizes/fixes the use of panic() - there were a few places
     that used printf-style formatting (which didn't work) and newlines
     (which mess up the formatting) in panic()
   - throws out a few per-server panic() functions
   - cleans up a tie-in of tty with panic()

merging printf() and panic() statements is to be done incrementally.
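
The variadic shape described above, sketched as user-level C; the real
syslib version additionally fetches the server name from the kernel and
prints a stack trace:

    #include <stdarg.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Full printf()-style formatting in panic() itself, so callers need
     * neither NO_NUM nor a separate printf() before panicking. */
    void panic(const char *fmt, ...)
    {
        va_list ap;

        fputs("panic: ", stderr);
        va_start(ap, fmt);
        vfprintf(stderr, fmt, ap);
        va_end(ap);
        fputc('\n', stderr);

        /* syslib would print "<server>(<endpoint>): panic: ..." and a
         * stack trace here. */
        abort();
    }

    /* Usage, as in the example above:
     *     panic("sigaction failed: %d", errno);
     */
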
2010-03-05 15:05:11 +00:00
Ben Gras
e6cb76a2e2 no more kprintf - kernel uses libsys printf now, only kputc is special
to the kernel.
2010-03-03 15:45:01 +00:00
Tomas Hruby
c6fec6866f No locking in kernel code
- No locking in RTS_(UN)SET macros

- No lock_notify()

- Removed unused lock_send()

- No lock/unlock macros anymore
2010-02-09 15:26:58 +00:00
Tomas Hruby
728f0f0c49 Removal of the system task
* Userspace change to use the new kernel calls

	- _taskcall(SYSTASK...) changed to _kernel_call(...)

	- int 32 reused for the kernel calls

	- _do_kernel_call() to make the trap to kernel

        - kernel_call() to make the actual kernel call from C using
          _do_kernel_call()

        - unlike an ipc call, the kernel call always succeeds as the kernel
          is always available; however, the kernel may return an error

* Kernel side implementation of kernel calls

        - the SYSTEM task does not run, only the proc table entry is
          preserved

        - every data_copy(SYSTEM is now data_copy(KERNEL

	- "locking" is an empty operation now as everything runs in
	  kernel

        - sys_task() is replaced by kernel_call(), which copies the
          message into the kernel, dispatches the call to its handler and
          finishes by either copying the results back to userspace (if
          need be) or by suspending the process because of VM (a sketch
          of this flow follows below)

        - suspended processes are later made runnable once the memory
          issue is resolved, picked up by the scheduler, and only at
          this time is the call resumed (in fact restarted), which does
          not need to copy the message from userspace as the message
          is already saved in the process structure.

        - no need for the vmrestart queue, the scheduler will restart
          the system calls

	- no special case in do_vmctl(), all requests remove the
	  RTS_VMREQUEST flag
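
A hedged sketch of that dispatch flow; the table, constants and copy
helpers are simplified stand-ins, not the real kernel code:

    #include <stddef.h>

    typedef struct { int m_type; int args[6]; } message;
    struct proc_sketch { message p_savedmsg; int p_suspended; };

    #define KERNEL_CALL_BASE 0x600   /* assumed base of kernel call numbers */
    #define NR_SYS_CALLS     64
    #define EBADREQUEST      (-1)
    #define VMSUSPEND        (-2)    /* handler must wait for VM            */

    typedef int (*kcall_handler_t)(struct proc_sketch *caller, message *m);
    static kcall_handler_t call_vec[NR_SYS_CALLS];  /* NULL == unused call  */

    int copy_msg_from_user(struct proc_sketch *caller, message *m);
    int copy_msg_to_user(struct proc_sketch *caller, const message *m);

    /* Copy the request in, dispatch it, then either copy the reply back or
     * park the caller until VM has resolved the memory issue; the saved
     * message lets the call be restarted without another userspace copy. */
    void kernel_call(struct proc_sketch *caller)
    {
        message m;
        int result, nr;

        if (copy_msg_from_user(caller, &m) != 0)
            return;                       /* caller's message unreadable */

        caller->p_savedmsg = m;           /* kept in case of a restart   */

        nr = m.m_type - KERNEL_CALL_BASE;
        if (nr < 0 || nr >= NR_SYS_CALLS || call_vec[nr] == NULL)
            result = EBADREQUEST;
        else
            result = call_vec[nr](caller, &m);

        if (result == VMSUSPEND) {
            caller->p_suspended = 1;      /* resumed (restarted) later   */
        } else {
            m.m_type = result;
            copy_msg_to_user(caller, &m);
        }
    }
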
2010-02-09 15:20:09 +00:00
Tomas Hruby
cca24d06d8 This patch removes the global variables who_p and who_e from the
kernel (sys task).  The main reason is that these would have to become
cpu-local variables on SMP.  Once the system task is not a task but a
genuine part of the kernel, there is even less reason to have these
extra variables as proc_ptr will already contain all necessary
information. In addition, converting who_e to the process pointer and
back again all the time is avoided.

Although proc_ptr will contain all important information, accessing it
as a cpu-local variable will be fairly expensive, hence the value
would be assigned to some on-stack local variable anyway. Therefore it is
better to add the 'caller' argument to the syscall handlers to pass
the value on the stack explicitly. It also clearly denotes on whose
behalf the syscall is being executed.

This patch also ANSIfies the syscall function headers.

Last but not least, it also fixes a potential bug in virtual_copy_f()
in case the check is disabled. So far, in case of a failure, the
function could possibly reuse an old who_p if it had not been called
from the system task.

virtual_copy_f() takes the caller as a parameter too. In case the
checking is disabled, the caller must be NULL, and non-NULL if it is
enabled, as we must be able to suspend the caller (see the sketch below).
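
The caller-parameter convention from the last paragraph, sketched with
simplified types; only the NULL/non-NULL rule is the point here, the rest
is invented for the example:

    #include <assert.h>
    #include <stddef.h>

    struct proc_sketch { int p_endpoint; };
    struct addr_sketch { int proc_ep; unsigned long offset; };

    #define OK      0
    #define EFAULT  (-14)

    int do_copy(const struct addr_sketch *src,
                const struct addr_sketch *dst, size_t len);
    void suspend_for_vm(struct proc_sketch *caller);

    /* With checking disabled, caller must be NULL and a failed copy is just
     * an error; with checking enabled, caller must be non-NULL because the
     * copy may have to suspend that caller until VM fixes up the memory.
     * No global who_p is consulted, so no stale value can be reused. */
    int virtual_copy_sketch(struct proc_sketch *caller,
                            const struct addr_sketch *src,
                            const struct addr_sketch *dst,
                            size_t len, int checked)
    {
        int r;

        assert(checked ? caller != NULL : caller == NULL);

        r = do_copy(src, dst, len);
        if (r != OK && checked)
            suspend_for_vm(caller);
        return r;
    }
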
2010-02-03 09:04:48 +00:00
Kees van Reeuwijk
a7cee5bec4 Removed unused symbols.
Minor cleanups.
2010-01-22 22:01:08 +00:00
Cristiano Giuffrida
c5b309ff07 Merge of Wu's GSOC 09 branch (src.20090525.r4372.wu)
Main changes:
- COW optimization for safecopy.
- safemap, a grant-based interface for sharing memory regions between processes.
- Integration with safemap and complete rework of DS, supporting new data types
  natively (labels, memory ranges, memory mapped ranges).
- For further information:
  http://wiki.minix3.org/en/SummerOfCode2009/MemoryGrants

Additional changes not included in Wu's original branch:
- Fixed unhandled case in VM when using COW optimization for safecopy in case
  of a block that has already been shared as SMAP.
- Better interface and naming scheme for sys_saferevmap and ds_retrieve_map
  calls.
- Better input checking in syslib: check for page alignment when creating
  memory mapping grants.
- DS notifies subscribers when an entry is deleted.
- Documented the behavior of indirect grants in case of memory mapping.
- Test suite in /usr/src/test/safeperf|safecopy|safemap|ds/* reworked
  and extended.
- Minor fixes and general cleanup.
- TO-DO: Grant ids should be generated and managed the way endpoints are, to
  make sure grant slots are never reused incorrectly.
2010-01-14 15:24:16 +00:00
David van Moolenbroek
ac9a5829a2 suppress kernel/VM memory debugging information 2009-12-29 21:35:12 +00:00
Tomas Hruby
b3b0a18403 allow kernel to tell VM extra physical addresses it wants mapped in.
used in the future for mapping in local APIC memory.
2009-11-11 12:07:06 +00:00
Tomas Hruby
a972f4bacc All macros defining rts flags are prefixed with RTS_
- macros used with the RTS_SET group of macros to define struct proc
  p_rts_flags are now prefixed with RTS_ to make things clear
2009-11-10 09:11:13 +00:00
Ben Gras
cd8b915ed9 Primary goal for these changes is:
- no longer have the kernel have its own page table that is loaded
    on every kernel entry (trap, interrupt, exception); the primary
    purpose is to reduce the number of required reloads.
Result:
  - kernel can only access memory of process that was running when
    kernel was entered
  - kernel must be mapped into every process page table, so traps to
    kernel keep working
Problem:
  - kernel must often access memory of arbitrary processes (e.g. send
    arbitrary processes messages); this can't happen directly any more;
    usually because that process' page table isn't loaded at all, sometimes
    because that memory isn't mapped in at all, sometimes because it isn't
    mapped in read-write.
So:
  - kernel must be able to map in memory of any process, in its own
    address space.
Implementation:
  - VM and kernel share a range of memory in which addresses of
    all page tables of all processes are available. This has two purposes:
      . Kernel has to know what data to copy in order to map in a range
      . Kernel has to know where to write the data in order to map it in
    That last point is because kernel has to write in the currently loaded
    page table.
  - Processes and kernel are separated through segments; kernel segments
    haven't changed.
  - The kernel keeps the process whose page table is currently loaded
    in 'ptproc.'
  - If it wants to map in a range of memory, it writes the value of the
    page directory entry for that range into the page directory entry
    in the currently loaded map. There is a slot reserved for such
    purposes. The kernel can then access this memory directly.
  - In order to do this, its segment has been increased (and the
    segments of processes start where it ends).
  - In the pagefault handler, detect if the kernel is doing
    'trappable' memory access (i.e. a pagefault isn't a fatal
     error) and if so,
       - set the saved instruction pointer to phys_copy_fault,
	 breaking out of phys_copy
       - set the saved eax register to the address of the page
	 fault, both for sanity checking and for checking in
	 which of the two ranges that phys_copy was called
	 with the fault occurred
  - Some boot-time processes do not have their own page table,
    and are mapped in with the kernel, and separated with
    segments. The kernel detects this using HASPT. If such a
    process has to be scheduled, any page table will work and
    no page table switch is done.

Major changes in kernel are
  - When accessing user processes' memory, the kernel no longer
    explicitly checks beforehand whether that memory is OK.
    It simply makes the mapping (if necessary), tries to do the
    operation, and traps the pagefault if that memory isn't present;
    if that happens, the copy function returns EFAULT (see the sketch
    at the end of this message).
    So all of the CHECKRANGE_OR_SUSPEND macros are gone.
  - Kernel no longer has to copy/read and parse page tables.
  - A message copying optimisation: when messages are copied, and
    the recipient isn't mapped in, they are copied into a buffer
    in the kernel. This is done in QueueMess. The next time
    the recipient is scheduled, this message is copied into
    its memory. This happens in schedcheck().
    This eliminates the mapping/copying step for messages, and makes
    it easier to deliver messages. This eliminates soft_notify.
  - Kernel no longer creates a page table at all, so the vm_setbuf
    and pagetable writing in memory.c is gone.

Minor changes in kernel are
  - ipc_stats thrown out, wasn't used
  - misc flags all renamed to MF_*
  - NOREC_* macros to enter and leave functions that should not
    be called recursively; just sanity checks really
  - code to fully decode segment selectors and descriptors
    to print on exceptions
  - lots of vmassert()s added, only executed if DEBUG_VMASSERT is 1
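
A user-level sketch of the 'try the copy, trap the fault' pattern described
under the major changes above; setjmp/longjmp stands in for what the kernel
does by rewriting the saved instruction pointer to phys_copy_fault, and all
names here are illustrative:

    #include <setjmp.h>
    #include <stddef.h>

    #define OK      0
    #define EFAULT  (-14)

    static jmp_buf copy_catch;               /* where to resume on a fault */
    static volatile unsigned long fault_addr;
    static volatile int catching;            /* faults are trappable now   */

    /* What the pagefault handler would do when 'catching' is set: record
     * the faulting address (the kernel uses the saved eax for this) and
     * break out of the copy instead of treating the fault as fatal. */
    void pagefault_during_copy(unsigned long addr)
    {
        fault_addr = addr;
        longjmp(copy_catch, 1);
    }

    /* Map in the range if needed, try the copy, and return EFAULT instead
     * of panicking when the memory turns out not to be present. */
    int trappable_copy(void *dst, const void *src, size_t len)
    {
        size_t i;

        if (setjmp(copy_catch) != 0) {
            catching = 0;
            return EFAULT;                   /* fault_addr says which range */
        }
        catching = 1;
        for (i = 0; i < len; i++)            /* the "phys_copy" step */
            ((char *)dst)[i] = ((const char *)src)[i];
        catching = 0;
        return OK;
    }
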
2009-09-21 14:31:52 +00:00
Ben Gras
c628f24bc2 moved stacktrace to sysctl, as vmctl is very privileged so can't
be used outside VM. IS code cleanup. added stacktrace feature to IS.
2009-01-27 12:54:33 +00:00
Ben Gras
f0000078c3 make the kernel leave a page-sized gap in its code and data that is not
mapped in, if so configured. 2008-12-18 14:30:55 +00:00
2008-12-18 14:30:55 +00:00
Ben Gras
5db1a042c2 stacktrace feature. 2008-12-11 15:33:43 +00:00
Ben Gras
c078ec0331 Basic VM and other minor improvements.
Not complete, probably not fully debugged or optimized.
2008-11-19 12:26:10 +00:00