Commit graph

172 commits

Author SHA1 Message Date
Ben Gras
a77c2973b3 fix clang warnings -R in kernel/ and servers/ 2011-06-09 16:09:13 +02:00
Tomas Hruby
423be1545c Fix for SPROFILE == 0
- contributed by Antoine Leca
2011-05-25 09:42:11 +02:00
Tomas Hruby
bed9e48c12 APIC timer needs rearming before halting the cpu
- time stops if there is no activity and the timer expired before
  we halted the cpu

- restart_local_timer() checks if the timer has expired and if so it
  restarts it

- we do the same when switching back to userspace
2011-05-11 11:54:23 +02:00
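
A minimal sketch of the mechanism the commit above describes. restart_local_timer() is named in the commit; lapic_timer_expired(), set_local_timer() and halt() below are hypothetical stand-ins for the real LAPIC accessors.

    /* Hypothetical helpers standing in for the real LAPIC accessors. */
    static int lapic_timer_expired(void) { return 1; /* e.g. current count == 0 */ }
    static void set_local_timer(unsigned ticks) { (void)ticks; /* reprogram LAPIC */ }
    static void halt(void) { /* hlt */ }

    /* Re-arm the one-shot timer if it already fired, so time does not
     * stop while the cpu sleeps; the same check runs before switching
     * back to userspace. */
    static void restart_local_timer(void)
    {
        if (lapic_timer_expired())
            set_local_timer(1);
    }

    static void halt_this_cpu(void)
    {
        restart_local_timer();
        halt();
    }
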
David van Moolenbroek
46cee00ad8 Kernel: try_async/try_one fixes
- skip processes that are not asynsending to the target
- do not clear whole asynsend table upon IPC permission error
- be more accepting when one table entry is bogus later on
2011-04-18 22:56:34 +00:00
Thomas Veerman
7457cbe62f Enable sending a notification when the sending of an asynchronous message has
completed (successfully or not). AMF_NOTIFY_ERR can be used if the sender
only wishes to be notified in case of an error (e.g., EDEADSRCDST). A new
endpoint ASYNCM will be the sender of the notification.
2011-04-08 15:14:48 +00:00
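
A sketch of the completion check the commit above describes. AMF_NOTIFY_ERR and the ASYNCM endpoint come from the commit itself; the AMF_NOTIFY flag, the flag values and notify_from_asyncm() are assumptions for illustration.

    #define OK              0
    #define AMF_NOTIFY      0x04    /* assumed value: always notify on completion */
    #define AMF_NOTIFY_ERR  0x08    /* assumed value: notify only on error */

    /* Hypothetical helper: deliver a notification with ASYNCM as source. */
    static void notify_from_asyncm(int sender_endpt) { (void)sender_endpt; }

    static void asyn_delivery_done(int sender_endpt, int flags, int result)
    {
        if ((flags & AMF_NOTIFY) ||
            ((flags & AMF_NOTIFY_ERR) && result != OK))
            notify_from_asyncm(sender_endpt);   /* e.g. result == EDEADSRCDST */
    }
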
Thomas Veerman
16e0e9370e Use a bitmap for pending asynchronous messages instead of a global flag.
That way it works similarly to pending notifications.
2011-04-08 15:03:33 +00:00
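
A sketch of the per-destination pending bitmap the commit above describes, analogous to how pending notifications are tracked. NR_SYS_PROCS and the p_asyn_pending field name are assumptions.

    #include <limits.h>

    #define NR_SYS_PROCS    32      /* assumed number of system processes */
    #define CHUNK_BITS      (sizeof(unsigned) * CHAR_BIT)
    #define CHUNKS(n)       (((n) + CHUNK_BITS - 1) / CHUNK_BITS)

    struct proc {
        unsigned p_asyn_pending[CHUNKS(NR_SYS_PROCS)];  /* one bit per sender */
    };

    static void mark_asyn_pending(struct proc *dst, unsigned sender_id)
    {
        dst->p_asyn_pending[sender_id / CHUNK_BITS] |= 1u << (sender_id % CHUNK_BITS);
    }

    static int asyn_pending(const struct proc *dst, unsigned sender_id)
    {
        return !!(dst->p_asyn_pending[sender_id / CHUNK_BITS] &
            (1u << (sender_id % CHUNK_BITS)));
    }
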
Thomas Veerman
9db2311b22 Reduce the number of copies done by the kernel when handling retrieval and
delivery of asynchronous messages.
2011-04-08 14:53:55 +00:00
David van Moolenbroek
61ad5f9b94 Kernel: small comment fix (Bug#562, reported by Ryan Riley) 2011-02-25 16:46:30 +00:00
Ben Gras
07bfb4f4e4 kernel - account for kernel cpu time (ipc, kcalls) in caller 2011-02-08 13:58:32 +00:00
Ben Gras
b2d1109737 kernel - change print*() functions for ipc to generic ipc hook functions.
- used to implement ipc stats tracking code
2011-02-08 13:54:33 +00:00
David van Moolenbroek
efca30b081 Kernel: fix notification delivery to non-ANY receivers 2011-01-07 17:04:43 +00:00
Tomas Hruby
9e01a83636 SMP - reduced TLB flushing
- flush TLB of processes only if the page tables have been changed and
  the page tables of this process are already loaded on this cpu which
  means that there might be stale entries in TLB. Until now SMP was
  always flushing TLB to make sure everything is consistent.
2010-10-25 16:21:23 +00:00
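
A sketch of the flushing condition described above. The p_stale_tlb bitmap, the ptproc argument (the process whose page tables are loaded on this cpu) and refresh_tlb() are illustrative names, not necessarily the real ones.

    struct proc {
        unsigned p_stale_tlb;   /* bit per cpu: tables changed since last flush */
    };

    static void refresh_tlb(void) { /* reload CR3 on this cpu */ }

    static void smp_maybe_flush_tlb(struct proc *p, unsigned cpu, struct proc *ptproc)
    {
        /* Flush only if p's tables changed AND they are loaded on this cpu;
         * otherwise there cannot be stale entries here. */
        if ((p->p_stale_tlb & (1u << cpu)) && p == ptproc) {
            refresh_tlb();
            p->p_stale_tlb &= ~(1u << cpu);
        }
    }
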
Tomas Hruby
a1eefc013e single shot timer interrupts fix
- accidentally this wasn't part of the SMP merge and the implementation
  remained incomplete, with the timer still ticking periodically

- APIC timer is set for a single shot and restarted every time it
  expires. This way we can keep the APs truly idle

- the timer is restarted a little later before leaving to userspace

- LAPIC_TIMER_ICR is written before LAPIC_LVTTR so the newest value is
  used
2010-10-21 17:07:01 +00:00
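
A sketch of the one-shot programming with the write ordering the commit above describes (LAPIC_TIMER_ICR before LAPIC_LVTTR). The register offsets, the vector and lapic_write() are placeholders, not the real values.

    #define LAPIC_TIMER_ICR     0x380   /* placeholder offset: initial count */
    #define LAPIC_LVTTR         0x320   /* placeholder offset: timer LVT entry */
    #define APIC_TIMER_VECTOR   0xf0    /* placeholder interrupt vector */

    static void lapic_write(unsigned reg, unsigned val) { (void)reg; (void)val; }

    static void lapic_set_timer_one_shot(unsigned ticks)
    {
        lapic_write(LAPIC_TIMER_ICR, ticks);            /* newest count first ... */
        lapic_write(LAPIC_LVTTR, APIC_TIMER_VECTOR);    /* ... then the one-shot LVT */
    }
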
Tomas Hruby
ef92583c3a Busy idle loop when profiling
- the Intel architecture cycle counter (performance counter) does not
  count when the CPU is idle, therefore we use a busy loop instead of
  halting the cpu when there is nothing to schedule

- the downside is that handling interrupts may be accounted as idle
  time if a sample is taken before we get out of the nested trap and
  pick a new process
2010-09-23 10:49:52 +00:00
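
A sketch of the idle path with the busy loop the commit above describes; sprofiling, runnable_process_exists() and halt_cpu() are illustrative names.

    static volatile int sprofiling;         /* statistical profiling active? */
    static int runnable_process_exists(void) { return 0; }
    static void halt_cpu(void) { /* hlt until the next interrupt */ }

    static void idle(void)
    {
        if (sprofiling) {
            /* Busy wait: the performance counter does not advance in hlt,
             * so idle time would otherwise be invisible to the profiler. */
            while (!runnable_process_exists())
                ;
        } else {
            halt_cpu();
        }
    }
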
Tomas Hruby
2d1c8849d8 Remove unnecessary TLB flushes
- this should be only for SMP
2010-09-22 08:01:36 +00:00
Tomas Hruby
a665ae3de1 Userspace scheduling - exporting stats
- contributed by Bjorn Swift

- adds process accounting, for example counting the number of messages
  sent, how often the process was preempted and how much time it spent
  in the run queue. These statistics, along with the current cpu load,
  are sent back to the user-space scheduler in the Out Of Quantum
  message.

- the user-space scheduler may choose to make use of these statistics
  when making scheduling decisions. For instance the cpu load becomes
  especially useful when scheduling on multiple cores.
2010-09-19 15:52:12 +00:00
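
A sketch of the accounting data that could travel back to the user-space scheduler in the Out Of Quantum message, per the commit above; all field names here are illustrative.

    struct proc_accounting {
        unsigned long acnt_ipc_sync;    /* synchronous messages sent */
        unsigned long acnt_ipc_async;   /* asynchronous messages sent */
        unsigned long acnt_preempted;   /* times the process was preempted */
        unsigned long acnt_queue_time;  /* time spent on the run queue */
        unsigned long acnt_cpu_load;    /* load of the cpu it last ran on */
    };
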
Tomas Hruby
5b8b623765 SMP - lazy FPU
- when a process is migrated to a different CPU it may have an active
  FPU context in the processor registers. We must save it and migrate
  it together with the process.
2010-09-15 14:11:25 +00:00
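
A sketch of the migration path: if the process being moved is the one whose state currently lives in this cpu's FPU registers, save it first. fpu_owner appears in a later commit; in the real kernel it is a cpu-local variable, here it is a plain static, and save_fpu() and p_cpu are illustrative.

    #include <stddef.h>

    struct proc { unsigned p_cpu; /* ... FPU save area ... */ };

    static struct proc *fpu_owner;      /* per-cpu in the real kernel */
    static void save_fpu(struct proc *p) { (void)p; /* fxsave into p's area */ }

    static void migrate_proc(struct proc *p, unsigned dst_cpu)
    {
        if (fpu_owner == p) {
            save_fpu(p);        /* the context must travel with the process */
            fpu_owner = NULL;
        }
        p->p_cpu = dst_cpu;
    }
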
Tomas Hruby
e4283176ae SMP - Force TLB flush before scheduling a process
- this makes sure that each process always runs with an updated TLB

- this is the simplest way to achieve consistency. As it means
  significant performance degradation when not required, this is not
  the final solution and will be refined
2010-09-15 14:11:17 +00:00
Tomas Hruby
e2701da5a9 SMP - Single shot local timer
- APIC timer always reprogrammed if expired

- timer tick never happens while in the kernel => never an immediate
  return from userspace to the kernel because of a buffered interrupt

- renamed argument to lapic_set_timer_one_shot()

- removed arch_ prefix from timer functions
2010-09-15 14:11:06 +00:00
Tomas Hruby
e87d29171f SMP - Compiles for both single and multi processor again
- this patch adds various fixes as some of the previous patches broke
  compilation without CONFIG_SMP being set
2010-09-15 14:11:03 +00:00
Tomas Hruby
454589debd SMP - Print cpu of the process
- adds '4' to print processes assigned to each cpu without printing
  the process it is blocked on (a lightweight '1')
2010-09-15 14:11:01 +00:00
Tomas Hruby
0ac9b6d4cf SMP - truly idle APs
- any cpu can use smp_schedule() to tell another cpu to reschedule

- if an AP is idle, it turns off its timer as there is nothing to
  preempt; no need to wake up just to go back to sleep again

- if a cpu makes a process runnable on an idle cpu, it must wake it up
  to reschedule
2010-09-15 14:10:57 +00:00
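
A sketch of the wake-up rule from the commit above: after making a process runnable on another cpu, kick that cpu if it is idle (its timer is off, so it will not notice on its own). smp_schedule() is named in the commit; the rest is illustrative.

    struct proc { unsigned p_cpu; };

    static unsigned cpuid;              /* id of the executing cpu */
    static void enqueue(struct proc *p) { (void)p; }
    static int cpu_is_idle(unsigned cpu) { (void)cpu; return 1; }
    static void smp_schedule(unsigned cpu) { (void)cpu; /* send reschedule IPI */ }

    static void make_runnable(struct proc *p)
    {
        enqueue(p);                     /* on the run queues of p->p_cpu */
        if (p->p_cpu != cpuid && cpu_is_idle(p->p_cpu))
            smp_schedule(p->p_cpu);     /* wake it so it can reschedule */
    }
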
Tomas Hruby
67f039540c SMP - proc_ptr and bill_ptr initialization
- they should point somewhere
2010-09-15 14:10:24 +00:00
Tomas Hruby
865e21b884 SMP - CPU local idle stub
- each CPU has its own pseudo idle process and its structure

- idle cycles accounting is aggregated when exporting to userspace
2010-09-15 14:10:21 +00:00
Tomas Hruby
fac5fbfdbf SMP - CPU local run queues
- each CPU has its own runqueues

- processes on the BSP are put on the run queues later, after the
  switch to the final stack when cpuid works, to avoid special cases

- enqueue() and dequeue() use the run queues of the cpu the process is
  assigned to

- pick_proc() uses the local run queues

- printing of per-CPU run queues ('2') on serial console
2010-09-15 14:10:18 +00:00
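
A sketch of per-cpu run queues in the spirit of the commit above: enqueue() uses the queues of the cpu the process is assigned to, pick_proc() only consults the local ones. The layout is simplified (one process per priority instead of a linked list) and the names are illustrative.

    #include <stddef.h>

    #define NR_SCHED_QUEUES 16
    #define MAX_CPUS        8

    struct proc { unsigned p_cpu; unsigned p_priority; };

    static struct proc *run_q[MAX_CPUS][NR_SCHED_QUEUES];
    static unsigned cpuid;              /* id of the executing cpu */

    static void enqueue(struct proc *p)
    {
        /* real code appends to a linked list per priority */
        run_q[p->p_cpu][p->p_priority] = p;
    }

    static struct proc *pick_proc(void)
    {
        unsigned q;
        for (q = 0; q < NR_SCHED_QUEUES; q++)   /* local queues only */
            if (run_q[cpuid][q] != NULL)
                return run_q[cpuid][q];
        return NULL;                            /* nothing runnable: idle */
    }
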
Tomas Hruby
62c666566e SMP - We boot APs
- kernel detects CPUs by searching ACPI tables for local apic nodes

- each CPU has its own TSS that points to its own stack. All cpus boot
  on the same boot stack (in sequence) but switch to their private
  stacks as soon as they can.

- final booting code in main() placed in bsp_finish_booting() which is
  executed only after the BSP switches to its final stack

- apic functions to send startup interrupts

- assembler functions to handle CPU features not needed for single cpu
  mode like memory barriers, HT detection etc.

- new files kernel/smp.[ch], kernel/arch/i386/arch_smp.c and
  kernel/arch/i386/include/arch_smp.h

- 16-bit trampoline code for the APs. It is executed by each AP after
  receiving the startup IPIs; it brings the CPUs up to 32-bit mode and
  lets them spin in an infinite loop so they don't do any damage.

- implementation of kernel spinlock

- CONFIG_SMP and CONFIG_MAX_CPUS set by the build system
2010-09-15 14:09:52 +00:00
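
A sketch of the kernel spinlock mentioned above, written with GCC atomic builtins instead of the real assembler routine.

    typedef struct { volatile unsigned val; } spinlock_t;

    static void spinlock_lock(spinlock_t *l)
    {
        while (__sync_lock_test_and_set(&l->val, 1))    /* atomic exchange */
            while (l->val)
                ;                                       /* spin on a plain read */
    }

    static void spinlock_unlock(spinlock_t *l)
    {
        __sync_lock_release(&l->val);                   /* release the lock */
    }
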
Tomas Hruby
13a0d5fa5e SMP - Cpu local variables
- most global variables carry information which is specific to the
  local CPU and each CPU must have its own copy

- cpu local variables must be declared in cpulocal.h between
  DECLARE_CPULOCAL_START and DECLARE_CPULOCAL_END markers using
  DECLARE_CPULOCAL macro

- to access the cpu local data the provided macros must be used

	get_cpu_var(cpu, name)
	get_cpu_var_ptr(cpu, name)

	get_cpulocal_var(name)
	get_cpulocal_var_ptr(name)

- using these macros makes future changes in the implementation
  possible

- switching to ELF will make the declaration of cpu local data much
  simpler, e.g.

  CPULOCAL int blah;

  anywhere in the kernel source code
2010-09-15 14:09:46 +00:00
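
A sketch of how the cpu-local macros above might map onto a per-cpu array of structures; the macro names come from the commit, the layout below is an assumption.

    #define CONFIG_MAX_CPUS 8
    struct proc;

    /* Normally placed between DECLARE_CPULOCAL_START and
     * DECLARE_CPULOCAL_END in cpulocal.h. */
    #define DECLARE_CPULOCAL(type, name)    type name

    struct cpu_local_vars {
        DECLARE_CPULOCAL(struct proc *, proc_ptr);      /* currently running */
        DECLARE_CPULOCAL(struct proc *, bill_ptr);      /* billed for cpu time */
    };

    static struct cpu_local_vars cpu_locals[CONFIG_MAX_CPUS];
    static unsigned cpuid;                              /* id of the executing cpu */

    #define get_cpu_var(cpu, name)          (cpu_locals[cpu].name)
    #define get_cpu_var_ptr(cpu, name)      (&cpu_locals[cpu].name)
    #define get_cpulocal_var(name)          get_cpu_var(cpuid, name)
    #define get_cpulocal_var_ptr(name)      get_cpu_var_ptr(cpuid, name)
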
Tomas Hruby
2a2a19e542 proc_init()
- code that initializes proc.c structures removed from main() and placed in
  proc_init() function
2010-09-15 14:09:43 +00:00
Ben Gras
b9cea27497 kernel: deadlock test with endpoints instead of slot numbers, slightly cleaner 2010-07-28 14:14:06 +00:00
Ben Gras
7f343ed574 kernel: clear MF_CONTEXT_SET on kernel exit. 2010-07-20 17:13:44 +00:00
Ben Gras
b05c989298 kernel - prettier output for ipc errors, call names instead of trap numbers 2010-07-16 15:36:29 +00:00
Tomas Hruby
67fa273d00 MF_REPLY_PEND should be removed when sendrec finishes 2010-06-28 08:32:49 +00:00
Kees van Reeuwijk
5eb6f6e922 Fixed a type declaration inconsistency. 2010-06-26 21:13:36 +00:00
Erik van der Kouwe
fe07e7c984 Optional IPC logging 2010-06-24 13:31:40 +00:00
Tomas Hruby
76708e9bf4 mini_receive() clean up
- for better readability xpp is substituted by sender

- makes sure that the dequeued sender has p_q_link == NULL and that
  this condition holds when enqueuing the sender again. This is a
  sanity check to make sure that the new sender is not enqueued
  already

- Before this change the dequeued sender's p_q_link may not be NULL
  and it was only set to NULL when enqueued again
2010-06-23 10:36:19 +00:00
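
A sketch of the sanity check on p_q_link described above: clear it at dequeue time and assert it is clean before enqueuing again.

    #include <assert.h>
    #include <stddef.h>

    struct proc { struct proc *p_q_link; };

    static struct proc *dequeue_sender(struct proc **xpp)
    {
        struct proc *sender = *xpp;
        *xpp = sender->p_q_link;
        sender->p_q_link = NULL;        /* cleared here, not when enqueued again */
        return sender;
    }

    static void enqueue_sender(struct proc **head, struct proc *sender)
    {
        assert(sender->p_q_link == NULL);       /* must not already be queued */
        sender->p_q_link = *head;               /* real code appends at the tail */
        *head = sender;
    }
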
Tomas Hruby
360de619c0 No linear addresses in message delivery
- removes p_delivermsg_lin item from the process structure and code
  related to it

- like the send part, the receive does not need to use the
  PHYS_COPY_CATCH() and umap_local() couple.

- The address space of the target process is installed before
  delivermsg() is called.

- unlike the linear address, the virtual address does not change when
  paging is turned on nor after fork().
2010-06-11 08:16:10 +00:00
Tomas Hruby
cbc9586c13 Lazy FPU
- FPU context is stored only if there is a conflict between 2 FPU
  users or while exporting the context of a process to userspace while
  it is the active user of the FPU

- FPU has its owner (fpu_owner) which points to the process whose
  state is currently loaded in FPU

- the FPU exception is only turned on when scheduling a process which
  is not the owner of FPU

- FPU state is restored for the process that generated the FPU
  exception. This process runs immediately, without letting the
  scheduler pick a new process, to resolve the FPU conflict asap, to
  minimize FPU thrashing and FPU exception handler execution

- all non-FPU-exception kernel entries are faster as the FPU state is
  neither checked nor saved

- removed MF_USED_FPU flag, only MF_FPU_INITIALIZED remains to signal
  that a process has used FPU in the past
2010-06-07 07:43:17 +00:00
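
A sketch of the lazy-FPU policy described above: the FPU exception is armed only when the scheduled process is not the current fpu_owner, and the handler resolves a conflict by saving the old owner and restoring the faulting process. fpu_owner is named in the commit (per-cpu in the real kernel); the other helpers are illustrative.

    #include <stddef.h>

    struct proc;

    static struct proc *fpu_owner;              /* per-cpu in the real kernel */
    static void enable_fpu_exception(void)  { /* set CR0.TS */ }
    static void disable_fpu_exception(void) { /* clear CR0.TS */ }
    static void save_fpu(struct proc *p)    { (void)p; }
    static void restore_fpu(struct proc *p) { (void)p; }

    static void fpu_prepare_switch(struct proc *next)
    {
        if (fpu_owner != next)
            enable_fpu_exception();     /* first FPU use will trap */
        else
            disable_fpu_exception();    /* owner keeps the live registers */
    }

    static void do_fpu_exception(struct proc *current)
    {
        if (fpu_owner != NULL && fpu_owner != current)
            save_fpu(fpu_owner);        /* resolve the conflict */
        restore_fpu(current);           /* current runs again immediately */
        fpu_owner = current;
        disable_fpu_exception();
    }
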
Ben Gras
2f892aca91 kernel fpu context switching: fix race condition
There seems to have been a broken assumption in the fpu context
restoring code.  It restores the context of the running process, without
a guarantee that the current process is the one that will be scheduled.
This caused fpu saving for a different process to be triggered without
the fpu hardware being enabled, causing an fpu exception in the kernel.
This practically only shows up with DEBUG_RACE on. Fix by thruby+me.

The fix
 . is to only set the fpu-in-use-by-this-process flag in the
   exception handler, and then take care of fpu restoring when
   actually returning to userspace

And the patch
 . translates fpu saving and restoring to c in arch_system.c,
   getting rid of a juicy chunk of assembly
 . makes osfxsr_feature private to arch_system.c
 . removes most of the arch dependent code from do_sigsend
2010-06-03 11:32:22 +00:00
Kees van Reeuwijk
36e12d5bd8 Use endpoint_t for the destination of mini_send and _syscall, and the
source of mini_receive.

Also some small cleanup.
2010-06-02 21:51:32 +00:00
Tomas Hruby
ebbd319ac0 do_safecopy split
- removes dependency of do_safecopy() on the m_type field of the kcall
  messages.

- instead of do_safecopy() figuring out what action is requested, the
  correct safecopy method is called right away.
2010-06-01 08:51:37 +00:00
Tomas Hruby
451a6890d6 scheduling - time quantum in milliseconds
- Currently the cpu time quantum is timer-ticks based. Thus the
  remaining quantum is decreased only if the process is interrupted
  by a timer tick. As processes block a lot this typically does not
  happen for normal user processes. Also the quantum depends on the
  frequency of the timer.

- This change makes the quantum milliseconds based. Internally the
  milliseconds are translated into cpu cycles. Every time userspace
  execution is interrupted by the kernel the cycles just consumed by
  the current process are deducted from the remaining quantum.

- It makes the quantum independent of the system timer frequency.

- The boot processes' quantum is loosely derived from the tick-based
  quanta and the 60Hz timer and is subject to future change

- the 64-bit arithmetic is a little ugly, it will be changed once we
  have compiler support for 64-bit integers (soon)
2010-05-25 08:06:14 +00:00
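
A sketch of the millisecond quantum described above: convert milliseconds to cycles once, then deduct the cycles consumed since the last kernel entry every time userspace is interrupted. The names are illustrative, and the real code of that era used explicit 64-bit helper routines rather than native 64-bit arithmetic.

    typedef unsigned long long u64;

    struct proc { u64 p_cpu_time_left; };   /* remaining quantum in cycles */

    static u64 cpu_hz;                      /* measured at boot */

    static u64 ms_to_cycles(unsigned ms)
    {
        return (cpu_hz / 1000) * ms;        /* timer-frequency independent */
    }

    static void charge_cycles(struct proc *p, u64 cycles_used)
    {
        if (cycles_used >= p->p_cpu_time_left)
            p->p_cpu_time_left = 0;         /* out of quantum: scheduler decides */
        else
            p->p_cpu_time_left -= cycles_used;
    }
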
Kees van Reeuwijk
ac14a989b3 Fixed some inconsistent strict typing declarations.
Better strict typing.
2010-05-25 07:23:24 +00:00
Tomas Hruby
b90c2d7026 rename of mode/context switching functions
- this patch only renames schedcheck() to switch_to_user(),
  cycles_accounting_stop() to context_stop() and restart() to
  restore_user_context()

- the motivation is that since the introduction of schedcheck() it has
  been abused for many things. It deserves a better name.  It should
  express the fact that from the moment we call the function we are in
  the process of switching to user.

- cycles_accounting_stop() was originally a single purpose function.
  As this function is called at convenient places, it is used for
  other things too, e.g. (un)locking the kernel. Thus it deserves
  a better name too.

- with the old names, restart() did not call schedcheck(); however,
  calls to restart() are now replaced by calls to schedcheck()
  [switch_to_user], which in turn calls restart() [restore_user_context]
2010-05-18 13:00:39 +00:00
Ben Gras
a1636b85b7 kernel: new DEBUG_RACE option. try to provoke race conditions between processes.
it does this by 
  - making all processes interruptible by running out of quantum
  - giving all processes a single tick of quantum
  - picking a random runnable process instead of in order, and
    from a single pool of runnable processes (no priorities)

This together with very high HZ values currently provokes some race conditions
seen earlier only when running with SMP.
2010-05-08 18:00:03 +00:00
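
A sketch of the DEBUG_RACE picking policy described above: a single pool of runnable processes, no priorities, and a random choice. The pool layout and random source are illustrative.

    #include <stdlib.h>

    struct proc;

    static struct proc *pick_proc_debug_race(struct proc **pool, int nr_runnable)
    {
        if (nr_runnable == 0)
            return NULL;                        /* nothing to run: idle */
        return pool[rand() % nr_runnable];      /* priorities ignored on purpose */
    }
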
Tomas Hruby
4f962b4798 A small mini_receive() cleanup
- this patch substitutes *xpp for sender to increase readability of
  mini_receive().

- makes sure that the dequeued sender has p_q_link == NULL and that
  this condition holds when enqueuing the sender again. 

- it is a sanity check to make sure that the new sender is not
  enqueued already. Before this change the dequeued sender's p_q_link
  may not be NULL and it was only set to NULL when enqueued again.
2010-05-07 11:22:49 +00:00
Tomas Hruby
ec56479675 deadlock() - more info
- deadlock() is more verbose in case of a detected deadlock. First, it
  lists all processes in the deadlock group. Then it prints the proc
  extra info, not only the stack trace and register dump
2010-05-03 17:38:54 +00:00
Cristiano Giuffrida
0f353411d7 Set IPC status code only for RECEIVE 2010-04-26 14:43:59 +00:00
Tomas Hruby
9fdb773cdb A simpler test whether to use kernel's default scheduling
- this is a small addition to the userspace scheduling.
  proc_kernel_scheduler() tests whether to use the default scheduling
  policy in the kernel. It is true if the process' scheduler is NULL
  _or_ the process itself. Until now none of the tests was complete.
2010-04-10 15:19:25 +00:00
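
A sketch of the completed test the commit above describes: the kernel's default policy applies when the process's scheduler is NULL or the process itself.

    #include <stddef.h>

    struct proc { struct proc *p_scheduler; };

    static int proc_kernel_scheduler(const struct proc *p)
    {
        return p->p_scheduler == NULL || p->p_scheduler == p;
    }
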
Tomas Hruby
987b87e2ad Small fixes
- do_sync_ipc() is private

- fixed typo in a comment
2010-04-06 11:29:31 +00:00
Tomas Hruby
a774cc832f do_ipc() rearrangements
this patch does not add or change any functionality of do_ipc(), it
only makes things a little cleaner (hopefully).

Until now do_ipc() was responsible for handling all ipc calls. The
catch is that SENDA is fairly different, which results in some ugly
code like this typecasting and variable naming, which does not make
much sense for SENDA and makes the code hard to read.

result = mini_senda(caller_ptr, (asynmsg_t *)m_ptr, (size_t)src_dst_e);

As it is called directly from assembly, the new do_ipc() takes as
input the values of 3 registers in reg_t variables (it used to be 4,
however, bit_map wasn't used so I removed it), does the checks common
to all ipc calls and calls the appropriate handler, either
do_sync_ipc() (all except SENDA) or mini_senda() (for SENDA), while
typecasting the reg_t values correctly. As a result, handling SENDA
differences in do_sync_ipc() is no longer needed. Also the code that
uses the msg_size variable is improved a little bit.

arch_do_syscall() is simplified too.
2010-04-06 11:24:26 +00:00
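
A sketch of the dispatch described above: do_ipc() receives three register values, performs the checks common to all ipc calls, and hands off to mini_senda() for SENDA or to do_sync_ipc() for everything else, casting the reg_t values only where the types are meaningful. The kernel types (reg_t, message, asynmsg_t, endpoint_t) and the exact do_sync_ipc() signature are taken on assumption here.

    /* Assumes the usual kernel headers for reg_t, message, asynmsg_t,
     * endpoint_t, SENDA and the cpu-local proc_ptr. */
    reg_t do_ipc(reg_t r1, reg_t r2, reg_t r3)
    {
        struct proc *const caller_ptr = get_cpulocal_var(proc_ptr);
        int call_nr = (int) r1;

        /* ... checks common to all ipc calls (trap mask, kernel state) ... */

        if (call_nr == SENDA)   /* asynchronous: a table of messages */
            return mini_senda(caller_ptr, (asynmsg_t *) r2, (size_t) r3);

        /* synchronous calls: r2 is the peer endpoint, r3 the message */
        return do_sync_ipc(caller_ptr, call_nr, (endpoint_t) r2, (message *) r3);
    }
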