Commit graph

148 commits

Author SHA1 Message Date
Tomas Hruby fac5fbfdbf SMP - CPU local run queues
- each CPU has its own runqueues

- processes on the BSP are put on the run queues later, after the switch
  to the final stack when cpuid works, to avoid special cases

- enqueue() and dequeue() use the run queues of the cpu the process is
  assigned to

- pick_proc() uses the local run queues

- printing of per-CPU run queues ('2') on serial console
2010-09-15 14:10:18 +00:00
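
A minimal sketch of the per-CPU run queue handling described in this commit; the struct proc layout, the queue and CPU counts, and the head-insert enqueue below are simplified assumptions, not the actual kernel definitions.

    #include <stddef.h>

    #define NR_SCHED_QUEUES 16      /* priority queues per CPU (assumed) */
    #define MAX_CPUS         8      /* stand-in for CONFIG_MAX_CPUS */

    struct proc {
            struct proc *p_nextready;   /* next process on the run queue */
            unsigned     p_cpu;         /* CPU this process is assigned to */
            int          p_priority;    /* index into the priority queues */
    };

    /* each CPU has its own set of run queue heads */
    static struct proc *rdy_head[MAX_CPUS][NR_SCHED_QUEUES];

    /* enqueue() uses the run queues of the CPU the process is assigned to */
    static void enqueue(struct proc *rp)
    {
            struct proc **q = &rdy_head[rp->p_cpu][rp->p_priority];

            rp->p_nextready = *q;       /* head insert, for brevity */
            *q = rp;
    }

    /* pick_proc() only consults the local CPU's run queues */
    static struct proc *pick_proc(unsigned cpu)
    {
            int q;

            for (q = 0; q < NR_SCHED_QUEUES; q++)
                    if (rdy_head[cpu][q] != NULL)
                            return rdy_head[cpu][q];
            return NULL;                /* nothing runnable: idle this CPU */
    }
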
Tomas Hruby 62c666566e SMP - We boot APs
- kernel detects CPUs by searching ACPI tables for local apic nodes

- each CPU has its own TSS that points to its own stack. All CPUs boot
  on the same boot stack (in sequence) but switch to their private
  stacks as soon as they can.

- final booting code in main() placed in bsp_finish_booting() which is
  executed only after the BSP switches to its final stack

- apic functions to send startup interrupts

- assembler functions to handle CPU features not needed in single-CPU
  mode, like memory barriers, HT detection, etc.

- new files kernel/smp.[ch], kernel/arch/i386/arch_smp.c and
  kernel/arch/i386/include/arch_smp.h

- 16-bit trampoline code for the APs. It is executed by each AP after
  receiving the startup IPIs; it brings the CPUs up to 32-bit mode and
  lets them spin in an infinite loop so they don't do any damage.

- implementation of kernel spinlock

- CONFIG_SMP and CONFIG_MAX_CPUS set by the build system
2010-09-15 14:09:52 +00:00
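
The kernel spinlock mentioned in this commit can, in outline, look like the test-and-set loop below; this sketch uses GCC atomic builtins, whereas the actual MINIX code uses its own assembler routines, so treat the names as illustrative only.

    typedef struct { volatile unsigned int val; } spinlock_t;

    #define SPINLOCK_INIT { 0 }

    static inline void spinlock_lock(spinlock_t *l)
    {
            /* spin until we atomically flip 0 -> 1 */
            while (__sync_lock_test_and_set(&l->val, 1))
                    while (l->val != 0)
                            ;           /* spin on a plain read while locked */
    }

    static inline void spinlock_unlock(spinlock_t *l)
    {
            __sync_lock_release(&l->val);   /* store 0 with release semantics */
    }
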
Tomas Hruby 13a0d5fa5e SMP - Cpu local variables
- most global variables carry information which is specific to the
  local CPU and each CPU must have its own copy

- a cpu local variable must be declared in cpulocal.h between the
  DECLARE_CPULOCAL_START and DECLARE_CPULOCAL_END markers using the
  DECLARE_CPULOCAL macro

- to access the cpu local data the provided macros must be used

	get_cpu_var(cpu, name)
	get_cpu_var_ptr(cpu, name)

	get_cpulocal_var(name)
	get_cpulocal_var_ptr(name)

- using these macros makes future changes in the implementation
  possible

- switching to ELF will make the declaration of cpu local data much
  simpler, e.g.

  CPULOCAL int blah;

  anywhere in the kernel source code
2010-09-15 14:09:46 +00:00
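
A rough sketch of how such declaration and accessor macros can be backed by a per-CPU array; the actual cpulocal.h differs in detail, and the cpuid used by the get_cpulocal_* variants (the id of the CPU we are running on) is assumed here.

    struct proc;                        /* forward declaration, for the example */

    #define CONFIG_MAX_CPUS 8

    #define DECLARE_CPULOCAL_START          struct cpu_local {
    #define DECLARE_CPULOCAL(type, name)    type name;
    #define DECLARE_CPULOCAL_END            } cpu_local_vars[CONFIG_MAX_CPUS];

    DECLARE_CPULOCAL_START
    DECLARE_CPULOCAL(struct proc *, proc_ptr)       /* currently running proc */
    DECLARE_CPULOCAL(struct proc *, fpu_owner)      /* who owns the FPU */
    DECLARE_CPULOCAL_END

    /* all call sites go through these, so the layout can change later */
    #define get_cpu_var(cpu, name)          (cpu_local_vars[cpu].name)
    #define get_cpu_var_ptr(cpu, name)      (&cpu_local_vars[cpu].name)
    #define get_cpulocal_var(name)          get_cpu_var(cpuid, name)
    #define get_cpulocal_var_ptr(name)      get_cpu_var_ptr(cpuid, name)
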
Tomas Hruby 2a2a19e542 proc_init()
- code that initializes proc.c structures removed from main() and placed in
  proc_init() function
2010-09-15 14:09:43 +00:00
Ben Gras b9cea27497 kernel: deadlock test with endpoints instead of slot numbers, slightly cleaner 2010-07-28 14:14:06 +00:00
Ben Gras 7f343ed574 kernel: clear MF_CONTEXT_SET on kernel exit. 2010-07-20 17:13:44 +00:00
Ben Gras b05c989298 kernel - prettier output for ipc errors, call names instead of trap numbers 2010-07-16 15:36:29 +00:00
Tomas Hruby 67fa273d00 MF_REPLY_PEND should be removed when sendrec finishes 2010-06-28 08:32:49 +00:00
Kees van Reeuwijk 5eb6f6e922 Fixed a type declaration inconsistency. 2010-06-26 21:13:36 +00:00
Erik van der Kouwe fe07e7c984 Optional IPC logging 2010-06-24 13:31:40 +00:00
Tomas Hruby 76708e9bf4 mini_receive() clean up
- for better readability xpp is substituted by sender

- makes sure that the dequeued sender has p_q_link == NULL and that
  this condition holds when enqueuing the sender again. This is a
  sanity check to make sure that the new sender is not enqueued
  already

- Before this change the dequeued sender's p_q_link may not have been
  NULL and it was only set to NULL when enqueued again
2010-06-23 10:36:19 +00:00
Tomas Hruby 360de619c0 No linear addresses in message delivery
- removes p_delivermsg_lin item from the process structure and code
  related to it

- like the send part, the receive part does not need to use the
  PHYS_COPY_CATCH() and umap_local() pair.

- The address space of the target process is installed before
  delivermsg() is called.

- unlike the linear address, the virtual address does not change when
  paging is turned on nor after fork().
2010-06-11 08:16:10 +00:00
Tomas Hruby cbc9586c13 Lazy FPU
- FPU context is stored only on a conflict between 2 FPU users, or when
  exporting the context of a process to userspace while it is the
  active user of the FPU

- FPU has its owner (fpu_owner) which points to the process whose
  state is currently loaded in FPU

- the FPU exception is only turned on when scheduling a process which
  is not the owner of FPU

- FPU state is restored for the process that generated the FPU
  exception. This process runs immediately, without letting the
  scheduler pick a new process, to resolve the FPU conflict asap and
  minimize FPU thrashing and FPU exception handler execution

- all non-FPU-exception kernel entries are faster, as the FPU state is
  neither checked nor saved

- removed MF_USED_FPU flag, only MF_FPU_INITIALIZED remains to signal
  that a process has used FPU in the past
2010-06-07 07:43:17 +00:00
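
In outline, the lazy-FPU policy described above can be sketched as below; fpu_trap_enable()/fpu_trap_disable() stand in for toggling CR0.TS, and the other helpers and fields are assumptions rather than the actual kernel interfaces.

    struct proc {
            int  fpu_initialized;       /* has used the FPU in the past */
            char fpu_state[512];        /* FXSAVE area (simplified) */
    };

    static struct proc *fpu_owner;      /* whose state is loaded in the FPU */

    void fpu_trap_enable(void);         /* set CR0.TS: next FPU insn traps */
    void fpu_trap_disable(void);        /* clear CR0.TS */
    void fpu_save(struct proc *p);      /* fxsave into p->fpu_state */
    void fpu_restore(struct proc *p);   /* fxrstor from p->fpu_state */

    /* when switching to user: never save or restore state here */
    void lazy_fpu_on_switch(struct proc *next)
    {
            if (next == fpu_owner)
                    fpu_trap_disable(); /* owner may use the FPU directly */
            else
                    fpu_trap_enable();  /* any FPU use raises an exception */
    }

    /* FPU-not-available exception: resolve the conflict, rerun the process */
    void do_fpu_exception(struct proc *caller)
    {
            if (fpu_owner != NULL && fpu_owner != caller)
                    fpu_save(fpu_owner);        /* only now is state stored */
            if (caller->fpu_initialized)
                    fpu_restore(caller);
            fpu_owner = caller;
            fpu_trap_disable();         /* let the faulting instruction retry */
    }
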
Ben Gras 2f892aca91 kernel fpu context switching: fix race condition
There seems to have been a broken assumption in the fpu context
restoring code.  It restores the context of the running process, without
guarantee that the current process is the one that will be scheduled.
This caused fpu saving for a different process to be triggered without
the fpu hardware being enabled, causing an fpu exception in the kernel.
This practically only shows up with DEBUG_RACE on. Fix by thruby+me.

The fix
 . is to only set the fpu-in-use-by-this-process flag in the
   exception handler, and then take care of fpu restoring when
   actually returning to userspace

And the patch
 . translates fpu saving and restoring to c in arch_system.c,
   getting rid of a juicy chunk of assembly
 . makes osfxsr_feature private to arch_system.c
 . removes most of the arch dependent code from do_sigsend
2010-06-03 11:32:22 +00:00
Kees van Reeuwijk 36e12d5bd8 Use endpoint_t for the destination of mini_send and _syscall, and the
source of mini_receive.

Also some small cleanup.
2010-06-02 21:51:32 +00:00
Tomas Hruby ebbd319ac0 do_safecopy split
- removes dependency of do_safecopy() on the m_type field of the kcall
  messages.

- instead of do_safecopy() figuring out what action is requested, the
  correct safecopy method is called right away.
2010-06-01 08:51:37 +00:00
Tomas Hruby 451a6890d6 scheduling - time quantum in milliseconds
- Currently the cpu time quantum is timer-tick based. Thus the
  remaining quantum is decreased only if the process is interrupted
  by a timer tick. As processes block a lot, this typically does not
  happen for normal user processes. Also the quantum depends on the
  frequency of the timer.

- This change makes the quantum milliseconds based. Internally the
  milliseconds are translated into cpu cycles. Every time userspace
  execution is interrupted by the kernel, the cycles just consumed by
  the current process are deducted from the remaining quantum.

- It makes the quantum system timer frequency independent.

- The boot processes' quantum is loosely derived from the tick-based
  quanta and the 60Hz timer, and is subject to future change

- the 64bit arithmetic is a little ugly; it will be changed once we
  have compiler support for 64bit integers (soon)
2010-05-25 08:06:14 +00:00
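
A minimal sketch of the milliseconds-to-cycles accounting described above, assuming a cpu_hz variable holding the measured CPU frequency; the real kernel works with its own 64-bit helpers and process fields.

    #include <stdint.h>

    static uint64_t cpu_hz;             /* measured CPU frequency in Hz */

    /* convert a quantum given in milliseconds into CPU cycles */
    static uint64_t quantum_ms_to_cycles(unsigned ms)
    {
            return (cpu_hz / 1000) * ms;
    }

    /* on every kernel entry, charge the cycles consumed since the last
     * return to userspace against the interrupted process's quantum */
    static int charge_quantum(uint64_t *cycles_left,
                              uint64_t tsc_now, uint64_t tsc_at_user_entry)
    {
            uint64_t used = tsc_now - tsc_at_user_entry;

            if (used >= *cycles_left) {
                    *cycles_left = 0;
                    return 1;           /* out of quantum: needs rescheduling */
            }
            *cycles_left -= used;
            return 0;
    }
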
Kees van Reeuwijk ac14a989b3 Fixed some inconsistent strict typing declarations.
Better strict typing.
2010-05-25 07:23:24 +00:00
Tomas Hruby b90c2d7026 rename of mode/context switching functions
- this patch only renames schedcheck() to switch_to_user(),
  cycles_accounting_stop() to context_stop() and restart() to
  restore_user_context()

- the motivation is that since the introduction of schedcheck() it has
  been abused for many things. It deserves a better name.  It should
  express the fact that from the moment we call the function we are in
  the process of switching to user.

- cycles_accounting_stop() was originally a single-purpose function.
  As this function is called at very convenient places, it is used for
  other things too, e.g. (un)locking the kernel. Thus it deserves a
  better name too.

- using the old names, restart() did not call schedcheck(); now calls
  to restart() are replaced by calls to schedcheck() [switch_to_user],
  which in turn calls restart() [restore_user_context]
2010-05-18 13:00:39 +00:00
Ben Gras a1636b85b7 kernel: new DEBUG_RACE option. try to provoke race conditions between processes.
it does this by 
  - making all processes interruptible by running out of quantum
  - giving all processes a single tick of quantum
  - picking a random runnable process instead of in order, and
    from a single pool of runnable processes (no priorities)

This together with very high HZ values currently provokes some race conditions
seen earlier only when running with SMP.
2010-05-08 18:00:03 +00:00
Tomas Hruby 4f962b4798 A small mini_receive() cleanup
- this patch substitutes *xpp for sender to increase readability of
  mini_receive().

- makes sure that the dequeued sender has p_q_link == NULL and that
  this condition holds when enqueuing the sender again. 

- it is a sanity check to make sure that the new sender is not
  enqueued already. Before this change the dequeued sender's p_q_link
  may not have been NULL and it was only set to NULL when enqueued again.
2010-05-07 11:22:49 +00:00
Tomas Hruby ec56479675 deadlock() - more info
- deadlock() is more verbose in case of a detected deadlock. First, it
  lists all processes in the deadlock group. Then it prints the extra
  proc info, not only the stack trace and register dump
2010-05-03 17:38:54 +00:00
Cristiano Giuffrida 0f353411d7 Set IPC status code only for RECEIVE 2010-04-26 14:43:59 +00:00
Tomas Hruby 9fdb773cdb A simpler test whether to use kernel's default scheduling
- this is a small addition to the userspace scheduling.
  proc_kernel_scheduler() tests whether to use the default scheduling
  policy in the kernel. It is true if the process' scheduler is NULL
  _or_ the process itself. Previously none of the tests was complete.
2010-04-10 15:19:25 +00:00
Tomas Hruby 987b87e2ad Small fixes
- do_sync_ipc() is private

- fixed typo in a comment
2010-04-06 11:29:31 +00:00
Tomas Hruby a774cc832f do_ipc() rearrangements
this patch does not add or change any functionality of do_ipc(), it
only makes things a little cleaner (hopefully).

Until now do_ipc() was responsible for handling all ipc calls. The
catch is that SENDA is fairly different, which results in some ugly
code, like the typecasting below, and variable names that do not make
much sense for SENDA and make the code hard to read.

result = mini_senda(caller_ptr, (asynmsg_t *)m_ptr, (size_t)src_dst_e);

As it is called directly from assembly, the new do_ipc() takes as
input the values of 3 registers in reg_t variables (it used to be 4,
however, bit_map wasn't used so I removed it), does the checks common
to all ipc calls and calls the appropriate handler, either
do_sync_ipc() (all except SENDA) or mini_senda() (for SENDA), while
typecasting the reg_t values correctly. As a result, handling SENDA
differences in do_sync_ipc() is no longer needed. Also the code that
uses the msg_size variable is improved a little bit.

arch_do_syscall() is simplified too.
2010-04-06 11:24:26 +00:00
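
The resulting dispatch can be pictured as below; this is only a sketch using the kernel's type and handler names as described in the message, with the common checks elided and the register-to-argument mapping assumed, so it is not standalone code.

    /* do_ipc() gets raw register values and defers the typecasts until it
     * knows whether the call is SENDA or one of the synchronous calls */
    reg_t do_ipc(reg_t r1, reg_t r2, reg_t r3)
    {
            struct proc *const caller_ptr = proc_ptr;   /* running process */
            int call_nr = (int) r1;

            /* ... checks common to all ipc calls go here ... */

            switch (call_nr) {
            case SENDA:
                    /* asynchronous send: a table of messages and its size */
                    return mini_senda(caller_ptr, (asynmsg_t *) r3, (size_t) r2);
            default:
                    /* SEND, RECEIVE, SENDREC, NOTIFY, ...: an endpoint and a
                     * message pointer (bad call numbers are rejected inside) */
                    return do_sync_ipc(caller_ptr, call_nr,
                                       (endpoint_t) r2, (message *) r3);
            }
    }
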
Kees van Reeuwijk 4865e3f4f9 More use of endpoint_t. Other code cleanup. 2010-03-30 14:07:15 +00:00
Tomas Hruby 62203ec287 NOREC_ENTER and NOREC_RETURN checks removed
- the reasons for these checks no longer exist

- these checks are problematic on SMP
2010-03-29 11:43:10 +00:00
Tomas Hruby 5b52c5aa02 A reliable way for userspace to check if a msg is from kernel
- the IPC_FLG_MSG_FROM_KERNEL status flag is returned to userspace if
  the receive was satisfied by a message which was sent by the kernel
  on behalf of a process. This is perfectly reliable information.

- MF_SENDING_FROM_KERNEL flag added to processes to be able to set
  IPC_FLG_MSG_FROM_KERNEL when finishing receive if the receiver
  wasn't ready to receive immediately.

- PM is changed to use this information to confirm that the scheduling
  messages are indeed from the kernel and not faked by a process.

  PM uses sef_receive_status()

- get_work() is removed from PM to make the changes simpler
2010-03-29 11:25:01 +00:00
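
On the receiving side, the check PM performs could look roughly like this sketch; the message-type name and the dispatch around it are only indicative of the idea, not the actual PM code.

    /* userland sketch: only trust scheduling messages that the kernel
     * itself sent on a process's behalf */
    static void pm_get_work(void)
    {
            message m;
            int ipc_status;

            if (sef_receive_status(ANY, &m, &ipc_status) != OK)
                    panic("sef_receive_status failed");

            if (m.m_type == SCHEDULING_NO_QUANTUM &&
                !(ipc_status & IPC_FLG_MSG_FROM_KERNEL)) {
                    printf("PM: spoofed scheduling message from %d\n",
                           m.m_source);
                    return;             /* ignore the faked message */
            }
            /* ... dispatch m as usual ... */
    }
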
Tomas Hruby 2521cc6bdf Slightly faster IPC
- there are cycles wasted in the IPC call due to a fairly complicated
  way of copying messages from userland to kernel. Sometimes this
  complicated (though generic) way is used even for copying within the
  kernel address space, sometimes it is used in cases where _no_
  copying is necessary at all. The goal of this patch is to improve
  this a little bit.

- the places that copy from user to kernel now use
  copy_msg_from_user(), kernel-to-kernel copies are turned into
  assignments, and BuildNotifyMessage uses the delivery buffers to
  avoid copying.

- copy_msg_from_user() was introduced when removing the system task
  and is about 2/3 faster than the current mechanism (phys_copy). It
  also avoids the PHYS_COPY_CATCH macro. Assignment is also faster,
  and no copy is the fastest ;-) so perhaps there will be some hardly
  noticeable performance gain besides the clean up.
2010-03-29 11:16:37 +00:00
Tomas Hruby b4cf88a04f Userspace scheduling
- contributed by Bjorn Swift

- In this first phase, scheduling is moved from the kernel to the PM
  server. The next steps are to a) move scheduling to its own server
  and b) include useful information in the "out of quantum" message,
  so that the scheduler can make use of this information.

- The kernel process table now keeps record of who is responsible for
  scheduling each process (p_scheduler). When this pointer is NULL,
  the process will be scheduled by the kernel. If such a process runs
  out of quantum, the kernel will simply renew its quantum and
  requeue it.

- When PM loads, it will take over scheduling of all running
  processes, except system processes, using sys_schedctl().
  Essentially, this only results in taking over init. As children
  inherit a scheduler from their parent, user space programs forked by
  init will inherit PM (for now) as their scheduler.

- Once a process has been assigned a scheduler, and runs out of
  quantum, its RTS_NO_QUANTUM flag will be set and the process
  dequeued. The kernel will send a message to the scheduler, on the
  process' behalf, informing the scheduler that it has run out of
  quantum. The scheduler can take whatever action it pleases, based
  on its policy, and then reschedule the process using the
  sys_schedule() system call.

- Queue balancing does not work as before. While the old in-kernel
  function used to renew the quantum of processes in the highest
  priority run queue, the user-space implementation only acts on
  processes that have been bumped down to a lower priority queue.
  This approach reacts slower to changes than the old one, but saves
  us sending a sys_schedule message for each process every time we
  balance the queues. Currently, when processes are moved up a
  priority queue, their quantum is also renewed, but this can be
  fiddled with.

- do_nice has been removed from the kernel. PM answers get- and
  setpriority calls, updates its own nice variable as well as the
  max_run_queue. This will be refactored once scheduling is moved to a
  separate server. We will probably have PM update its local nice
  value and then send a message to whoever is scheduling the process.

- changes to fix an issue in do_fork() where processes could run out
  of quantum while bypassing the code path that handles it correctly.
  The future plan is to remove the policy from do_fork() and implement
  it in userspace too.
2010-03-29 11:07:20 +00:00
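
The out-of-quantum round trip can be sketched as follows from the scheduler's side; the policy shown (drop the priority one queue and renew the quantum) and the sys_schedule() argument order are illustrative assumptions.

    /* hypothetical handler in the user-space scheduler (PM for now) */
    static void handle_no_quantum(endpoint_t proc_ep, int prio, int quantum_ms)
    {
            int new_prio = prio + 1;            /* bump down one queue */

            if (new_prio > MIN_USER_Q)
                    new_prio = MIN_USER_Q;      /* clamp at the lowest queue */

            /* put the process back on a run queue with a fresh quantum */
            if (sys_schedule(proc_ep, new_prio, quantum_ms) != OK)
                    printf("sched: sys_schedule failed for %d\n", proc_ep);
    }
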
Tomas Hruby a3ffc0f7ad Removed NIL_SYS_PROC and NIL_PROC
- NIL_PROC replaced by simple NULLs
2010-03-28 09:54:32 +00:00
Kees van Reeuwijk 98493805fd Lots of const correctness. 2010-03-27 14:31:00 +00:00
Cristiano Giuffrida 9192dbecc9 Preserve the order of IPC messages between two parties.
Currently a sequence of messages between a sender A and a receiver B of the
form: A.asynsend(M1, B); A.send(M2, B) may result in the receiver receiving
M1 first and then M2, or vice versa. This patch makes sure that the original
order M1, M2 is always preserved.

Note that the order of a hypothetical sequence A.asynsend(M1, B);
A.asynsend(M2, B) is already guaranteed by the implementation of
asynsend by design. Other senda-based wrappers can define their own
semantics.
2010-03-27 00:09:22 +00:00
Cristiano Giuffrida bde2109b7c IPC status code for receive().
IPC changes:
- receive() is changed to take an additional parameter, which is a pointer to
a status code.
- The status code is filled in by the kernel to provide additional information
to the caller. For now, the kernel only fills in the IPC call used by the
sender.

Syslib changes:
- sef_receive() has been split into sef_receive() (with the original semantics)
and sef_receive_status() which exposes the status code to userland.
- Ideally, every sys process should gradually switch to sef_receive_status()
and use is_ipc_notify() as a dependable way to check for notify.
- SEF has been modified to use is_ipc_notify() and demonstrate how to use the
new status code.
2010-03-23 00:09:11 +00:00
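
A typical main loop using the new interface might look like this sketch; handle_notify() and handle_request() are hypothetical service routines, not part of syslib.

    static void main_loop(void)
    {
            message m;
            int ipc_status;

            for (;;) {
                    if (sef_receive_status(ANY, &m, &ipc_status) != OK)
                            continue;

                    if (is_ipc_notify(ipc_status)) {
                            handle_notify(m.m_source);  /* notification */
                            continue;
                    }
                    handle_request(&m);         /* ordinary request message */
            }
    }
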
Ben Gras 0937d6c367 re-establish kernel assert()s.
use the regular <assert.h> assert() instead of vmassert() in
kernel. throw out some #if 0 code. fix a few assert() conditions.
enable by default.
2010-03-10 13:00:05 +00:00
Ben Gras 35a108b911 panic() cleanup.
this change
   - makes panic() variadic, doing full printf() formatting -
     no more NO_NUM, and no more separate printf() statements
     needed to print extra info (or something in hex) before panicking
   - unifies panic() - same panic() name and usage for everyone -
     vm, kernel and rest have different names/syntax currently
     in order to implement their own luxuries, but no longer
   - throws out the 1st argument, to make source less noisy.
     the panic() in syslib retrieves the server name from the kernel
     so it should be clear enough who is panicking; e.g.
         panic("sigaction failed: %d", errno);
     looks like:
         at_wini(73130): panic: sigaction failed: 0
         syslib:panic.c: stacktrace: 0x74dc 0x2025 0x100a
   - throws out report() - printf() is more convenient and powerful
   - harmonizes/fixes the use of panic() - there were a few places
     that used printf-style formatting (didn't work) and newlines
     (messes up the formatting) in panic()
   - throws out a few per-server panic() functions
   - cleans up a tie-in of tty with panic()

merging printf() and panic() statements to be done incrementally.
2010-03-05 15:05:11 +00:00
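
A minimal sketch of the variadic panic() interface described above; the real syslib version also prints the server name and a stack trace, so this only shows the shape of the call.

    #include <stdarg.h>
    #include <stdio.h>

    void panic(const char *fmt, ...)
    {
            va_list ap;

            printf("panic: ");
            va_start(ap, fmt);
            vprintf(fmt, ap);           /* full printf() formatting, no NO_NUM */
            va_end(ap);
            printf("\n");

            for (;;)
                    ;                   /* halt; the real code aborts/traps */
    }
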
Ben Gras e6cb76a2e2 no more kprintf - kernel uses libsys printf now, only kputc is special
to the kernel.
2010-03-03 15:45:01 +00:00
Ben Gras 18924ea563 New P_BLOCKEDON for kernel - a macro that encodes the "who is this
process waiting for" logic, which is duplicated a few times in the
kernel. (For a new feature for top.)

Introducing it and throwing out ESRCDIED and EDSTDIED (replaced by
EDEADSRCDST, so we don't have to care which part of the blocking is
failing in system.c) simplifies, a fair bit, some code in the kernel
and in callers that check for E{DEADSRCDST,ESRCDIED,EDSTDIED} but don't
care about the difference, and more significantly it no longer
duplicates the 'blocked-on' logic.
2010-03-03 15:32:26 +00:00
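
The idea behind P_BLOCKEDON can be sketched as a single expression over the process's RTS flags; the field and flag names follow MINIX conventions, but the actual macro in proc.h handles a few more corner cases.

    /* sketch: on which endpoint is this process blocked, if any? */
    #define P_BLOCKEDON(p)                                          \
            (((p)->p_rts_flags & RTS_SENDING) ? (p)->p_sendto_e :   \
             (((p)->p_rts_flags & RTS_RECEIVING) ? (p)->p_getfrom_e \
                                                 : NONE))
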
Kees van Reeuwijk 1ba0936619 Fix some uses of uninitialized variables. 2010-02-19 10:41:02 +00:00
Kees van Reeuwijk 97c169b93a Remove some unused #include.
Remove some unused variables and computations on them.
2010-02-17 20:24:42 +00:00
Tomas Hruby 1b56fdb33c Time accounting based on TSC
- as there are still KERNEL and IDLE entries, time accounting for
  kernel and idle time works the same as for any other process

- every time we stop accounting for the currently running process,
  kernel or idle, we read the TSC counter and increment the p_cycles
  entry.

- the process cycles inherently include some of the kernel cycles as
  we can stop accounting for the process only after we save its
  context and we start accounting just before we restore its context

- this assumes that the system does not scale the CPU frequency which
  will be true for ... long time ;-)
2010-02-10 15:36:54 +00:00
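
The accounting step can be sketched as below; read_tsc() here is a hypothetical helper wrapping the RDTSC instruction, and the bookkeeping is simplified compared to the kernel's own code.

    #include <stdint.h>

    static inline uint64_t read_tsc(void)
    {
            uint32_t lo, hi;

            __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
            return ((uint64_t)hi << 32) | lo;
    }

    static uint64_t last_tsc;           /* when the current owner got the CPU */

    /* stop accounting for the current owner (a process, KERNEL or IDLE)
     * and charge it the cycles elapsed since it was given the CPU */
    static void account_cycles(uint64_t *p_cycles)
    {
            uint64_t now = read_tsc();

            *p_cycles += now - last_tsc;
            last_tsc = now;
    }
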
Tomas Hruby c9da61022b intr_disabled() tests removed
- we don't need to test this in the kernel as we always have interrupts
  disabled

- if interrupts are enabled in kernel, it is only at very carefully
  chosen places. There are no such places now.
2010-02-09 15:29:58 +00:00
Tomas Hruby c6fec6866f No locking in kernel code
- No locking in RTS_(UN)SET macros

- No lock_notify()

- Removed unused lock_send()

- No lock/unlock macros anymore
2010-02-09 15:26:58 +00:00
Tomas Hruby 728f0f0c49 Removal of the system task
* Userspace change to use the new kernel calls

	- _taskcall(SYSTASK...) changed to _kernel_call(...)

	- int 32 reused for the kernel calls

	- _do_kernel_call() to make the trap to kernel

	- kernel_call() to make the actual kernel call from C using
	  _do_kernel_call()

	- unlike an ipc call, the kernel call always succeeds as the
	  kernel is always available; however, the kernel may return an
	  error

* Kernel side implementation of kernel calls

	- the SYSTEM task does not run, only the proc table entry is
	  preserved

	- every data_copy(SYSTEM, ...) is now data_copy(KERNEL, ...)

	- "locking" is an empty operation now as everything runs in
	  kernel

	- sys_task() is replaced by kernel_call() which copies the
	  message into the kernel, dispatches the call to its handler and
	  finishes by either copying the results back to userspace (if
	  need be) or by suspending the process because of VM

	- suspended processes are later made runnable once the memory
	  issue is resolved and picked up by the scheduler; only then is
	  the call resumed (in fact restarted), which does not need to
	  copy the message from userspace again as the message is
	  already saved in the process structure.

	- no need for the vmrestart queue, the scheduler will restart
	  the system calls

	- no special case in do_vmctl(), all requests remove the
	  RTS_VMREQUEST flag
2010-02-09 15:20:09 +00:00
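
The kernel-side flow can be sketched roughly as below; dispatch_kernel_call(), copy_msg_to_user() and the suspend handling are simplified assumptions standing in for the actual kernel code.

    /* sketch: copy the request in, dispatch it, and either copy the result
     * back or suspend the caller until VM resolves the memory issue */
    static void kernel_call_sketch(message *m_user, struct proc *caller)
    {
            message msg;
            int result;

            if (copy_msg_from_user(m_user, &msg) != 0)
                    return;                     /* bad user pointer: give up */

            msg.m_source = caller->p_endpoint;
            result = dispatch_kernel_call(&msg, caller);    /* hypothetical */

            if (result == VMSUSPEND)
                    return;     /* restarted later from the saved message */

            msg.m_type = result;
            copy_msg_to_user(&msg, m_user);     /* deliver the result */
    }
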
Tomas Hruby ad9ba944d1 Early address space switch
- switch_address_space() implements a switch of the user address space
  for the destination process

- this makes the memory of this process easily accessible, e.g. a
  pointer valid in userspace can be used with little complexity to
  access the process's memory

- the switch no longer happens only just before we return to
  userspace; it happens right after we know which process we are going
  to schedule. This happens before we start processing the misc flags
  of this process, so its memory is available

- if the process becomes unrunnable while processing the misc flags,
  we pick a new process and switch the address space again, which
  possibly introduces a little more overhead; however, it is hopefully
  outweighed by the reduced overhead when we actually access the
  memory
2010-02-09 15:13:52 +00:00
Tomas Hruby b14a86ca5c Sys calls are called ipc calls now
- the syscalls are pretty much just ipc calls, however, sendrec() is
  used to implement system task (sys) calls

- sendrec() won't be used anymore for this, therefore ipc calls will
  become pure ipc calls
2010-02-09 15:13:07 +00:00
Kees van Reeuwijk c8a11b5453 Fixed some type inconsistencies in the kernel. 2010-01-26 12:26:06 +00:00
Kees van Reeuwijk b67f788eea Removed a number of useless #includes 2010-01-26 10:59:01 +00:00
Tomas Hruby 0cfbe936ce Removed bunch of unused variables in kernel/proc.c 2010-01-22 16:14:57 +00:00