Mach
-
What is it:
-
Very large micro-kernel
-
Very flexible
-
Multi-CPU
-
Built for research; production use just happened.
-
Where is it used
-
Some people run it
-
Is in NeXTSTEP (from NeXT, later acquired by Apple)
-
Is the basis of Mac OS X
-
Reference port for OSF/1.
-
Platforms
-
Can handle UMA, NUMA, and NORMA (uniform, non-uniform, and no remote memory access).
-
Features
-
Can have multiple personalities: BSD, OS/2, DOS, Macintosh
-
Small number of kernel abstractions. (Did they succeed??)
-
Distributed objects...
-
Easy porting
-
Multi-CPU -- including process migration
-
Basic Abstractions
-
Task -- what we call a process
-
Thread -- what we call a thread
-
Port -- what we would call a socket.
-
Port Right -- a capability to send to a port.
-
Port Set -- a group of ports having a common receive queue. A
read from a port set gets the first message sent to any port. This
is how they implement select.
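-
A minimal sketch of the port-set idea, assuming invented names (this is not the real Mach API): member ports all enqueue onto one shared FIFO, so a single receive gets the first message sent to any of them -- the select() effect.

```c
#include <stddef.h>

#define QCAP 16

/* A port set is just a common receive queue shared by its member ports. */
typedef struct {
    int msgs[QCAP];
    size_t head, tail;
} port_set;

typedef struct {
    port_set *set;   /* membership = a pointer to the common queue */
    int id;
} port;

/* Sending to any member port lands in the set's shared queue. */
void port_send(port *p, int msg) {
    port_set *s = p->set;
    s->msgs[s->tail++ % QCAP] = msg;
}

/* One receive call returns the first message sent to ANY member port. */
int set_receive(port_set *s) {
    return s->msgs[s->head++ % QCAP];
}
```

Messages arrive in global send order regardless of which port they were sent to, which is exactly what a select()-style "wake me for whichever is ready first" wants.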
-
Message -- a message
-
Memory object -- a region of virtual memory.
-
Message Passing
-
In large part, message-passing speed defines the cost of a micro-kernel
vs. a macro-kernel.
-
Speed of EVERYTHING depends on this!!
-
Pass large messages by remapping address spaces
-
Use Copy-On-Write for large messages that are not read only.
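-
A hypothetical sketch of the copy-on-write trick (names invented; real Mach does this at the VM-page level): "sending" a large message only bumps a reference count, and a physical copy happens only if one side later writes.

```c
#include <stdlib.h>
#include <string.h>

#define PAGE_SIZE 4096

typedef struct {
    char data[PAGE_SIZE];
    int refs;
} page;

/* Sender "transfers" the page: no copy, just another reference. */
page *cow_share(page *p) {
    p->refs++;
    return p;
}

/* Before writing, break the share if anyone else still references it. */
page *cow_write(page *p, size_t off, char byte) {
    if (p->refs > 1) {
        page *copy = malloc(sizeof *copy);
        memcpy(copy->data, p->data, PAGE_SIZE);
        copy->refs = 1;
        p->refs--;
        p = copy;
    }
    p->data[off] = byte;
    return p;
}
```

A read-only receiver never pays for a copy at all; a writer pays only for the pages it actually touches.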
-
Doesn't (as far as I know) physically copy memory around.
-
How Virtual Memory is implemented
-
Each memory region has a port associated with it.
-
That port goes to the paging server.
-
The paging server gets requests like pagein, pageout, etc.
-
Kernel does standard LRU-like replacement, but uses these requests to move
the data.
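-
A minimal sketch of an external pager's request loop, under invented names (real Mach uses memory_object_* messages on the region's port): the kernel asks the server to supply a page on a fault and to save it on eviction.

```c
/* Requests the kernel sends over the memory region's port. */
typedef enum { REQ_PAGEIN, REQ_PAGEOUT } req_kind;

typedef struct {
    req_kind kind;
    unsigned long offset;   /* which page of the memory object */
} pager_req;

/* Backing store shrunk to one byte per "page" for illustration. */
typedef struct {
    char store[9];
} pager;

/* Handle one kernel request; 0 on success. */
int pager_handle(pager *pg, pager_req r, char *frame) {
    switch (r.kind) {
    case REQ_PAGEIN:             /* page fault: supply the data */
        *frame = pg->store[r.offset];
        return 0;
    case REQ_PAGEOUT:            /* eviction: save the data */
        pg->store[r.offset] = *frame;
        return 0;
    }
    return -1;
}
```

Because the policy lives in a user-level server, swapping in a database-aware or network pager just means pointing the region's port at a different server.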
-
Can use special servers for special needs (e.g., databases).
-
Can use remote servers for diskless clients
-
Can use standard server if you don't want to worry about it.
-
With message passing, local servers just get a pointer to the RAM,
not a copy of the RAM.
-
Can migrate processes easily.
-
Process management
-
Schedule threads, not processes
-
Can hog CPU with multiple threads
-
Priority goes from 0 to 127, based on a time-decayed average of CPU time
used.
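-
A rough sketch of a time-decayed priority, with an assumed decay factor and scaling (not Mach's actual constants): past CPU usage is halved each interval, recent usage is added, and the result pushes the priority toward the bad end of 0..127.

```c
/* Recompute a thread's priority from its decayed CPU usage.
 * usage carries state across intervals; lower result = better priority. */
int decayed_priority(int base, int *usage, int ticks_used) {
    *usage = *usage / 2 + ticks_used;   /* decayed average of CPU time */
    int pri = base + *usage / 4;        /* more CPU used => worse priority */
    if (pri > 127) pri = 127;
    if (pri < 0) pri = 0;
    return pri;
}
```

A thread that stops using the CPU sees its penalty halve every interval, so it drifts back toward its base priority instead of being punished forever.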
-
Different from VMS system
-
Different threads can have different priorities
-
Threads go on a global run queue made up of 32 priority queues.
-
Each CPU has a local run queue.
-
Schedule from the local run queue first (for device drivers, etc.), then
the global run queue, in priority order.
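-
The dispatch order above can be sketched as follows (a toy model: queues are just counts of runnable threads per priority level, and the global-queue lock is omitted):

```c
#define NQUEUES 32
#define NONE    (-1)

typedef struct {
    int local[NQUEUES];    /* this CPU's private run queues */
    int global[NQUEUES];   /* shared run queues (locking omitted) */
} runqs;

/* Pick the next priority level to run from: local first, then global.
 * Sets *from_local and returns NONE if both are empty (CPU goes idle). */
int pick_next(runqs *rq, int *from_local) {
    for (int i = 0; i < NQUEUES; i++)
        if (rq->local[i] > 0) { rq->local[i]--; *from_local = 1; return i; }
    for (int i = 0; i < NQUEUES; i++)
        if (rq->global[i] > 0) { rq->global[i]--; *from_local = 0; return i; }
    return NONE;
}
```

Note the consequence: a locally bound thread runs before a globally queued one even when the global thread has a numerically better priority.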
-
Vary time quantum based on contention.
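-
One plausible shape for contention-scaled quanta (the constants here are assumptions, not Mach's real values): with threads at most one per CPU there is no contention and the full quantum is used; beyond that the quantum shrinks toward a floor.

```c
/* Time quantum in ms as a function of contention (runnable threads per CPU). */
int time_quantum_ms(int ncpus, int nrunnable) {
    int base = 100;                   /* assumed default quantum, ms */
    if (nrunnable <= ncpus)
        return base;                  /* no contention: full quantum */
    int q = base * ncpus / nrunnable; /* shrink as contention grows */
    return q < 10 ? 10 : q;           /* assumed floor to bound overhead */
}
```

The trade-off: long quanta amortize context-switch cost when the machine is quiet, short quanta keep response time tolerable when many threads compete.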
-
Queues must be locked before modification.
-
Imagine implementing this when some systems have remote memory access
and others have none.
-
Have list of idle CPUs for quick dispatching. Alternative would
suck!