Commit graph

20 commits

Author SHA1 Message Date
Ben Gras
07bfb4f4e4 kernel - account for kernel cpu time (ipc, kcalls) in caller 2011-02-08 13:58:32 +00:00
Arun Thomas
aaaad89244 Use int64 functions consistently
Instead of manipulating the u64_t type directly, use the
ex64hi()/ex64lo()/make64() functions.
2010-11-07 23:35:29 +00:00
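A minimal sketch of the style this commit asks for, composing and
splitting u64_t values only through the named helpers; the wrapper
function and the <minix/u64.h> header path are assumptions:

    #include <minix/u64.h>

    /* Build a 64-bit value from two 32-bit halves and take it apart
     * again without touching the u64_t representation directly. */
    void example(unsigned long lo, unsigned long hi)
    {
        u64_t v = make64(lo, hi);   /* compose */

        lo = ex64lo(v);             /* low 32 bits  */
        hi = ex64hi(v);             /* high 32 bits */
    }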
Tomas Hruby
c9bfb13cdb Kernel keeps information about each cpu
- kernel maintains a cpu_info array which contains various
  information about each cpu, filled in as each cpu boots

- the information contains identification, features, etc. (see the
  sketch below)
2010-10-26 21:07:27 +00:00
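An illustrative sketch of such a per-CPU record; the field names and
the CONFIG_MAX_CPUS value are assumptions, not the actual MINIX layout:

    #define CONFIG_MAX_CPUS 8          /* assumed build-time constant */

    struct cpu_info {
        unsigned char vendor;          /* CPU vendor, from CPUID */
        unsigned char family;
        unsigned char model;
        unsigned char stepping;
        unsigned int  freq;            /* core frequency in MHz */
        unsigned int  flags[2];        /* feature bits reported by CPUID */
    };

    /* One entry per CPU, filled in by each CPU as it boots. */
    struct cpu_info cpu_info[CONFIG_MAX_CPUS];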
Tomas Hruby
a1eefc013e single shot timer interrupts fix
- accidentally this wasn't part of the SMP merge and the
  implementation remained incomplete, with the timer still ticking
  periodically

- APIC timer is set for a single shot and restarted every time it
  expires. This way we can keep the APs truly idle

- the timer is restarted a little later, before leaving to userspace

- LAPIC_TIMER_ICR is written before LAPIC_LVTTR so that the newest
  value is used (see the sketch below)
2010-10-21 17:07:01 +00:00
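A sketch of the restart sequence, assuming a mapped LAPIC MMIO base
and an illustrative vector number; the register offsets are the
architectural ones, but the helper names are not MINIX's:

    #include <stdint.h>

    #define LAPIC_TIMER_ICR   0x380    /* timer initial-count register */
    #define LAPIC_LVTTR       0x320    /* LVT timer register */
    #define APIC_TIMER_VECTOR 0xf0     /* assumed interrupt vector */

    extern volatile uint8_t *lapic_base;   /* assumed to be mapped elsewhere */

    static void lapic_write(uint32_t off, uint32_t val)
    {
        *(volatile uint32_t *)(lapic_base + off) = val;
    }

    /* Restart the timer for a single shot; called again every time it
     * expires, shortly before returning to userspace. */
    void lapic_restart_one_shot(uint32_t count)
    {
        lapic_write(LAPIC_TIMER_ICR, count);          /* newest count first */
        lapic_write(LAPIC_LVTTR, APIC_TIMER_VECTOR);  /* one-shot: periodic bit clear */
    }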
Tomas Hruby
2419ab589d Fixed BKL statistics 2010-10-19 17:07:11 +00:00
Tomas Hruby
ef92583c3a Busy idle loop when profiling
- the Intel architecture cycle counter (performance counter) does not
  count when the CPU is idle, therefore we use a busy loop instead of
  halting the cpu when there is nothing to schedule (see the sketch
  below)

- the downside is that handling interrupts may be accounted as idle
  time if a sample is taken before we get out of the nested trap and
  pick a new process
2010-09-23 10:49:52 +00:00
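A sketch of the idea, assuming an sprofiling flag and illustrative
helpers; it is not the actual MINIX idle code:

    extern int  sprofiling;            /* nonzero while statistical profiling runs */
    extern int  queue_empty(void);     /* nothing runnable on this CPU */
    extern void halt_cpu(void);        /* enable interrupts and hlt */

    static void idle(void)
    {
        if (sprofiling) {
            /* Busy loop: the cycle counter does not advance while halted,
             * so spinning lets idle time show up in the profile. */
            while (queue_empty())
                ;
        } else {
            halt_cpu();
        }
    }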
Tomas Hruby
e9ecba9fc7 fix - forgotten debug print 2010-09-19 15:54:31 +00:00
Tomas Hruby
a665ae3de1 Userspace scheduling - exporting stats
- contributed by Bjorn Swift

- adds process accounting, for example counting the number of messages
  sent, how often the process was preempted and how much time it spent
  in the run queue. These statistics, along with the current cpu load,
  are sent back to the user-space scheduler in the Out Of Quantum
  message (see the sketch below).

- the user-space scheduler may choose to make use of these statistics
  when making scheduling decisions. For instance, the cpu load becomes
  especially useful when scheduling on multiple cores.
2010-09-19 15:52:12 +00:00
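An illustrative sketch of such an accounting record; the field names
are assumptions, not the structures contributed in this commit:

    #include <stdint.h>

    struct proc_accounting {
        uint64_t      enqueue_tsc;     /* TSC value when last put on the run queue */
        uint64_t      time_in_queue;   /* cycles spent runnable but not running */
        unsigned long ipc_sync;        /* synchronous messages sent/received */
        unsigned long ipc_async;       /* asynchronous messages */
        unsigned long preempted;       /* times the process was preempted */
        unsigned long dequeues;        /* times taken off the run queue */
    };

    /* On an out-of-quantum event the kernel would copy counters like these,
     * together with the current cpu load, into the message sent to the
     * user-space scheduler. */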
Tomas Hruby
e2701da5a9 SMP - Single shot local timer
- APIC timer always reprogrammed if expired

- a timer tick never happens while in the kernel => never an immediate
  return from userspace to the kernel because of a buffered interrupt

- renamed argument to lapic_set_timer_one_shot()

- removed arch_ prefix from timer functions
2010-09-15 14:11:06 +00:00
Tomas Hruby
e87d29171f SMP - Compiles for both single and multi processor again
- this patch adds various fixes, as some of the previous patches break
  compilation when CONFIG_SMP is not set
2010-09-15 14:11:03 +00:00
Tomas Hruby
0ac9b6d4cf SMP - truly idle APs
- any cpu can use smp_schedule() to tell another cpu to reschedule
  (see the sketch below)

- if an AP is idle, it turns off its timer as there is nothing to
  preempt; no need to wake up just to go back to sleep again

- if a cpu makes a process runnable on an idle cpu, it must wake it up
  to reschedule
2010-09-15 14:10:57 +00:00
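A sketch of the wake-up path, assuming an IPI helper and vector
number; only smp_schedule() itself is named by the commit:

    extern unsigned cpuid;                        /* id of the executing CPU */
    extern void apic_send_ipi(unsigned vector, unsigned cpu);

    #define SCHED_IPI_VECTOR 0xf1                 /* assumed vector number */

    /* Ask another CPU to reschedule. An idle AP has its timer switched off,
     * so making a process runnable there must explicitly wake it up. */
    void smp_schedule(unsigned cpu)
    {
        if (cpu == cpuid)
            return;                               /* local CPU reschedules anyway */
        apic_send_ipi(SCHED_IPI_VECTOR, cpu);
    }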
Tomas Hruby
c554aef0e1 SMP - BKL statistics
- pressing 'B' on the serial console prints the BKL statistics per cpu

- 'b' resets the counters

- it presents the number of cycles each CPU spends in the kernel and
  how many cycles it spends spinning while waiting for the BKL (see
  the sketch below)

- it shows an optimistic estimate of how many times we get the lock
  immediately without spinning. As the test is not atomic, the lock
  may already be held by some other cpu before we actually try to
  acquire it.
2010-09-15 14:10:37 +00:00
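A sketch of how such counters could be kept, assuming an rdtsc()
helper and illustrative lock primitives; none of these names come from
the commit itself:

    #include <stdint.h>

    #define CONFIG_MAX_CPUS 8                  /* assumed build-time constant */

    extern uint64_t rdtsc(void);               /* read the cycle counter */
    extern int      bkl_is_locked(void);       /* non-atomic peek at the lock word */
    extern void     bkl_lock(void);            /* spin until the BKL is held */

    uint64_t bkl_spin_cycles[CONFIG_MAX_CPUS]; /* cycles spent spinning, per CPU */
    uint64_t bkl_tries[CONFIG_MAX_CPUS];       /* acquisition attempts, per CPU */
    uint64_t bkl_succ[CONFIG_MAX_CPUS];        /* lock looked free on arrival */

    void bkl_lock_accounted(unsigned cpu)
    {
        int looked_free = !bkl_is_locked();    /* optimistic: not atomic */
        uint64_t before = rdtsc();

        bkl_lock();

        bkl_spin_cycles[cpu] += rdtsc() - before;
        bkl_tries[cpu]++;
        if (looked_free)
            bkl_succ[cpu]++;
    }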
Tomas Hruby
d37b7ebc0b SMP - CPU local cycles accounting
- tsc_ctr_switch is made cpu local

- although an x86 specific variable, it must be declared globally as
  the cpulocal implementation does not allow otherwise (see the sketch
  below)
2010-09-15 14:10:27 +00:00
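A generic sketch of the pattern, not MINIX's actual cpulocal
machinery: the per-CPU variable is kept as a globally declared array
indexed by CPU id, with an accessor macro hiding the indexing:

    #include <stdint.h>

    #define CONFIG_MAX_CPUS 8                  /* assumed build-time constant */

    /* TSC value recorded at the last context switch, one copy per CPU. */
    uint64_t tsc_ctr_switch[CONFIG_MAX_CPUS];

    /* Illustrative accessor; real cpulocal implementations typically hide
     * the per-CPU indexing behind a macro like this. */
    #define get_cpu_var(cpu, name) ((name)[(cpu)])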
Tomas Hruby
6aa26565e6 SMP - Big kernel lock (BKL)
- to isolate execution inside the kernel we use a big kernel lock
  implemented as a spinlock (see the sketch below)

- the lock is acquired asap after entering kernel mode and released as
  late as possible. Only one CPU at a time can execute the core kernel
  code

- measurements on real hw show that the overhead of this lock is close
  to 0% of kernel time for the current system

- the overhead of this lock may be as high as 45% of kernel time in
  virtual machines, depending on the ratio between physical CPUs
  available and emulated CPUs. The performance degradation is
  significant
2010-09-15 14:10:03 +00:00
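A minimal test-and-set spinlock sketch of the kind described; the
names and the use of GCC builtins are illustrative, not the actual
MINIX implementation:

    typedef struct { volatile unsigned val; } spinlock_t;

    static inline void spinlock_lock(spinlock_t *l)
    {
        /* Spin until the atomic exchange observes the lock as free. */
        while (__sync_lock_test_and_set(&l->val, 1))
            while (l->val)
                ;                      /* read-only spin to reduce bus traffic */
    }

    static inline void spinlock_unlock(spinlock_t *l)
    {
        __sync_lock_release(&l->val);
    }

    /* The BKL is taken as early as possible after entering kernel mode and
     * released as late as possible before returning to userspace, so only
     * one CPU at a time executes core kernel code. */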
Tomas Hruby
62c666566e SMP - We boot APs
- kernel detects CPUs by searching ACPI tables for local apic nodes

- each CPU has its own TSS that points to its own stack. All cpus boot
  on the same boot stack (in sequence) but switch to their private
  stacks as soon as they can.

- final booting code in main() placed in bsp_finish_booting() which is
  executed only after the BSP switches to its final stack

- apic functions to send startup interrupts

- assembler functions to handle CPU features not needed for single cpu
  mode like memory barriers, HT detection, etc.

- new files kernel/smp.[ch], kernel/arch/i386/arch_smp.c and
  kernel/arch/i386/include/arch_smp.h

- 16-bit trampoline code for the APs. It is executed by each AP after
  receiving the startup IPIs; it brings the CPUs up to 32-bit mode and
  lets them spin in an infinite loop so they don't do any damage.

- implementation of kernel spinlock

- CONFIG_SMP and CONFIG_MAX_CPUS set by the build system
2010-09-15 14:09:52 +00:00
Kees van Reeuwijk
ed0b81c25c Removed some unused variables and functions. 2010-06-02 19:41:38 +00:00
Tomas Hruby
24764ff47a Fixed ms-based scheduling for legacy timer 2010-05-26 08:20:29 +00:00
Tomas Hruby
451a6890d6 scheduling - time quantum in milliseconds
- Currently the cpu time quantum is timer-ticks based. Thus the
  remaining quantum is decreased only if the process is interrupted
  by a timer tick. As processes block a lot this typically does not
  happen for normal user processes. Also the quantum depends on the
  frequency of the timer.

- This change makes the quantum milliseconds based. Internally the
  milliseconds are translated into cpu cycles. Every time userspace
  execution is interrupted by the kernel, the cycles just consumed by
  the current process are deducted from the remaining quantum (see the
  sketch below).

- It makes the quantum independent of the system timer frequency.

- The boot processes' quanta are loosely derived from the tick-based
  quanta and the 60Hz timer and are subject to future change

- the 64bit arithmetic is a little ugly; it will be changed once we
  have compiler support for 64bit integers (soon)
2010-05-25 08:06:14 +00:00
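A sketch of the conversion and accounting, assuming a tsc_per_ms value
measured at boot and an rdtsc() helper; the field and function names
are illustrative, not the actual MINIX identifiers:

    #include <stdint.h>

    extern uint64_t tsc_per_ms;        /* cycles per millisecond, measured at boot */
    extern uint64_t rdtsc(void);

    struct sched_state {
        uint64_t quantum_left;         /* remaining quantum, in cycles */
        uint64_t tsc_at_switch;        /* TSC value when the process got the CPU */
    };

    /* Give a process a fresh quantum expressed in milliseconds. */
    void set_quantum_ms(struct sched_state *s, unsigned ms)
    {
        s->quantum_left = (uint64_t)ms * tsc_per_ms;
    }

    /* Called whenever userspace execution is interrupted by the kernel:
     * deduct the cycles the process just consumed. */
    void charge_cycles(struct sched_state *s)
    {
        uint64_t now  = rdtsc();
        uint64_t used = now - s->tsc_at_switch;

        s->quantum_left = used < s->quantum_left ? s->quantum_left - used : 0;
        s->tsc_at_switch = now;
        /* quantum_left == 0 => send the out-of-quantum message to the scheduler */
    }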
Tomas Hruby
b90c2d7026 rename of mode/context switching functions
- this patch only renames schedcheck() to switch_to_user(),
  cycles_accounting_stop() to context_stop() and restart() to
  restore_user_context()

- the motivation is that since the introduction of schedcheck() it has
  been abused for many things. It deserves a better name.  It should
  express the fact that from the moment we call the function we are in
  the process of switching to user.

- cycles_accounting_stop() was originally a single purpose function.
  As this function is called at very convenient places, it is used
  for other things too, e.g. (un)locking the kernel. Thus it deserves
  a better name too.

- with the old names, restart() does not call schedcheck(); instead,
  calls to restart() are replaced by calls to schedcheck()
  [switch_to_user], which in turn calls restart() [restore_user_context]
2010-05-18 13:00:39 +00:00
Arun Thomas
4ed3a0cf3a Convert kernel over to bsdmake 2010-04-01 22:22:33 +00:00
Renamed from kernel/arch/i386/clock.c