Operating system

Inter-Process Communication


Producer-Consumer


• Consumer sends N empty messages (i.e., slots)

• Producer picks up an empty message/slot and sends back an item it has produced

• Consumer receives an item and sends back an empty message (i.e., a slot)

• Messages sent but not yet received are buffered by the OS

Characteristics

Naming: Direct Communication

• Sender/receiver refer to each other, as seen before

• Properties of communication link

– Link is established automatically between communicating processes

– Link is associated with exactly two processes

– Exactly one link exists for every pair of processes

• Communication is symmetric (as above) or asymmetric (see the sketch below)

– send(P,m) // send a message to P

– receive(&id, m) // receive from any process, set id to sender
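
A minimal sketch of how these primitives might be used, assuming hypothetical send()/receive() calls, a message type, and placeholder helpers build_request()/handle_request() (none of these are a real OS API):

/* Hypothetical direct-naming primitives (not a real system API):
 *   send(P, &m)      -- name the receiving process explicitly
 *   receive(P, &m)   -- symmetric: receive only from process P
 *   receive(&id, &m) -- asymmetric: receive from any process; id is set to the sender
 */
void client(int server_id)
{
    message m;
    build_request(&m);
    send(server_id, &m);        /* symmetric: the receiver is named explicitly   */
    receive(server_id, &m);     /* wait for the reply from that same process     */
}

void server(void)
{
    int sender;
    message m;
    while (TRUE) {
        receive(&sender, &m);   /* asymmetric: accept a request from any process */
        handle_request(&m);
        send(sender, &m);       /* reply to whoever sent the request             */
    }
}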

Fig.: Direct Naming

Naming: Indirect Communication

• Communication via mailboxes (or ports)

• Processes communicate by putting and taking messages in/from mailboxes

– send(A, m) and receive(A, m) (see the sketch below)

• Properties of communication link

– A link is established between two processes if they share a mailbox

– A link may be associated with more than two processes

– A number of different links may exist between any pair of processes, each one corresponding to a separate mailbox
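
As a concrete illustration, here is a minimal sketch of indirect communication using POSIX message queues, where a named queue plays the role of the shared mailbox A (the queue name and message text are arbitrary; link with -lrt on older glibc):

#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* Open (or create) the mailbox; any process that opens the same name shares the link. */
    mqd_t mbox = mq_open("/mailbox_A", O_CREAT | O_RDWR, 0600, NULL);
    if (mbox == (mqd_t)-1) { perror("mq_open"); return 1; }

    /* send(A, m) */
    const char *m = "an item";
    mq_send(mbox, m, strlen(m) + 1, 0);

    /* receive(A, m) -- read back from the same mailbox (normally done by another process) */
    char buf[8192];                      /* must be at least the queue's mq_msgsize */
    if (mq_receive(mbox, buf, sizeof(buf), NULL) >= 0)
        printf("received: %s\n", buf);

    mq_close(mbox);
    mq_unlink("/mailbox_A");
    return 0;
}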

Fig.: Indirect Naming


Inter-Process Communication


Mailboxes

If two processes wait to receive from the same mailbox, which one gets the message? It depends on the design:

• At most one process may execute a receive operation at a time

• The system arbitrarily selects one as the receiver

• Message is retrieved by both (a form of multicast)

Ownership

• Either of the communicating processes can own the mailbox

• Alternatively, the OS can own it, and/or the creating process can own it


Sender/Receiver Synchronization

• Message passing may be blocking or non-blocking (synchronous or asynchronous)

• Blocking send – the sender is blocked until the message is received by the receiver (or by the mailbox)

• Non-blocking send – the sending process resumes operation right after sending

• Blocking receive – the receiver blocks until a message is available

• Non-blocking receive – the receiver retrieves a valid message or returns an error code

• Any combination of the above send/receive semantics is possible (see the sketch below)
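
For example, with POSIX message queues the blocking/non-blocking choice for a descriptor is made with the O_NONBLOCK flag; this sketch shows a non-blocking receive that returns an error code (EAGAIN) instead of waiting (the queue name is arbitrary):

#include <errno.h>
#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <sys/types.h>

int main(void)
{
    /* O_NONBLOCK: receive returns immediately with EAGAIN instead of blocking. */
    mqd_t mbox = mq_open("/demo_q", O_CREAT | O_RDONLY | O_NONBLOCK, 0600, NULL);
    if (mbox == (mqd_t)-1) { perror("mq_open"); return 1; }

    char buf[8192];
    ssize_t n = mq_receive(mbox, buf, sizeof(buf), NULL);
    if (n == -1 && errno == EAGAIN)
        printf("no message available (non-blocking receive)\n");
    else if (n >= 0)
        printf("received %zd bytes\n", n);

    mq_close(mbox);
    mq_unlink("/demo_q");
    return 0;
}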

Fig.: Blocking vs. non-blocking send

Buffering

• The mechanism that buffers messages (a.k.a. the queue) may have the following properties

– Zero capacity: the queue has length 0; no messages can be outstanding on the link; the sender blocks until the message is exchanged

– Bounded capacity: the queue has length N; up to N messages can be in the queue at any point in time; the sender blocks if the queue is full, otherwise it may continue to execute (see the sketch after this list)

– Unbounded capacity: the queue has infinite length; the sender never blocks
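
With POSIX message queues, the bounded-capacity case corresponds to the mq_maxmsg attribute set at creation time; a blocking sender waits as soon as that many messages are outstanding (the queue name and values below are arbitrary):

#include <fcntl.h>
#include <mqueue.h>

mqd_t open_bounded_mailbox(void)
{
    struct mq_attr attr;
    attr.mq_flags   = 0;
    attr.mq_maxmsg  = 10;    /* bounded capacity: at most 10 messages queued at once */
    attr.mq_msgsize = 128;   /* maximum size of one message, in bytes                */
    attr.mq_curmsgs = 0;

    /* A blocking mq_send() on this queue waits once 10 messages are outstanding. */
    return mq_open("/bounded_q", O_CREAT | O_RDWR, 0600, &attr);
}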

Message Passing in Practice

• The Message Passing Interface (MPI) is used in parallel programming systems

– MPI is a standard; Open MPI is one widely used implementation

• Unix’s IPC – Inter-process communication facilities

– Semaphores

– Pipes, FIFOs (see the pipe sketch after this list)

– Message queues

– Sockets

– Shared memory, memory mapped files
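
As one concrete example from this list, a minimal sketch of parent-to-child communication over an unnamed pipe:

#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fd[2];                              /* fd[0] = read end, fd[1] = write end */
    if (pipe(fd) == -1) { perror("pipe"); return 1; }

    if (fork() == 0) {                      /* child: reads the message */
        char buf[64];
        close(fd[1]);
        ssize_t n = read(fd[0], buf, sizeof(buf) - 1);
        if (n > 0) { buf[n] = '\0'; printf("child received: %s\n", buf); }
        close(fd[0]);
        return 0;
    }

    close(fd[0]);                           /* parent: writes the message */
    const char *msg = "hello via pipe";
    write(fd[1], msg, strlen(msg));
    close(fd[1]);
    wait(NULL);
    return 0;
}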

Characteristics of Message Passing

• Addressing

– Each process has a unique name

– Introduce the concept of a mailbox

• With sender, with receiver, with both, with OS

• Buffering

– No buffering: a send blocks until the receive happens (or vice versa), then the message is copied (this is called a rendezvous)

Copy-on-write

• Message passing for process-to-process communication can be quite inefficient due to context switching, copying, etc.

• Copy-on-write first sets a pointer in the receiver's address space pointing to the received data in the sender's address space

• Only if the sender or receiver starts writing to this data will the OS copy the actual data from sender to receiver (see the fork() sketch below)

• Otherwise, no copying of the data is performed
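
Copy-on-write is easiest to observe with fork(): the child initially shares the parent's pages, and a page is physically copied only when one side writes to it. A minimal sketch of that behaviour (the variable and values are arbitrary):

#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int value = 42;                              /* the page holding 'value' is shared after fork() */

    if (fork() == 0) {
        value = 99;                              /* first write: the kernel copies the page for the child */
        printf("child sees %d\n", value);        /* prints 99 */
        return 0;
    }

    wait(NULL);
    printf("parent still sees %d\n", value);     /* prints 42: the child's write only touched its own copy */
    return 0;
}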


Inter-Process Communication


Mutual Exclusion Based on Message Passing


Mutual Exclusion Based on Message Passing Code

P1:

send(Resource Controller, "Give me resource");
receive(Resource Controller, Message);      // block while waiting for a reply from the Resource Controller
{
    ... Critical Section ...
}
send(Resource Controller, "Done with resource");
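
The Resource Controller itself is not shown above; a minimal sketch of its loop, using the same hypothetical send/receive primitives, might look like this:

// Resource Controller (hypothetical primitives, matching the P1 code above):
// it grants the resource to one requester at a time, so at most one process
// is ever inside its critical section.
void resource_controller(void)
{
    int requester;
    message request, grant, done;

    while (TRUE) {
        receive(&requester, &request);   // wait for "Give me resource" from any process
        send(requester, &grant);         // reply: the requester may now enter its critical section
        receive(requester, &done);       // block until that same process sends "Done with resource"
    }
}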

Bounded Buffer Problem (no shared memory)


void consumer(void)
{
    int item, i;
    message m;

    for (i = 0; i < N; i++) send(producer, &m);   // initialization: give the producer N empty slots
    while (TRUE) {
        receive(producer, &m);                    // receive a message carrying an item
        item = extract_item(&m);
        send(producer, &m);                       // send back an empty slot
        consume_item(item);
    }
}


void producer(void)
{
    int item;
    message m;

    while (TRUE) {
        item = produce_item();         // produce the next item
        receive(consumer, &m);         // wait for an empty slot from the consumer
        build_message(&m, item);       // construct a message containing the item
        send(consumer, &m);            // send the item to the consumer
    }
}


Memory Management


Memory Allocation (Contiguous Memory Allocation)

Main memory usually has two partitions:

• Low memory — the operating system resides in this memory.

• High memory — user processes are held in high memory.

The operating system uses the following memory allocation mechanisms:


a) SINGLE-PARTITION ALLOCATION

The operating system resides in low memory, and the user processes execute in high memory. The operating system's code and data must be protected from changes by the user processes, and the user processes must also be protected from one another. We can provide this protection by using a relocation register together with a limit register.

The relocation register contains the value of the smallest physical address; the limit register contains the range of logical addresses (for example, relocation = 100,040 and limit = 74,600).

With relocation and limit registers, each logical address must be less than the value in the limit register.

The MMU (Memory Management Unit) maps the logical address dynamically by adding the value in the relocation register. This mapped address is sent to memory.

The relocation-register scheme provides an effective way to allow the operating-system size to change dynamically.
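
A minimal sketch of the check-and-add logic the MMU performs on every reference, using the example values above (relocation = 100,040 and limit = 74,600):

#include <stdio.h>
#include <stdlib.h>

#define RELOCATION 100040u   /* smallest physical address of the partition */
#define LIMIT       74600u   /* size of the logical address range          */

/* Model of the MMU: the logical address must be below the limit register;
 * if so, the relocation register is added to form the physical address.   */
unsigned translate(unsigned logical)
{
    if (logical >= LIMIT) {
        fprintf(stderr, "trap: addressing error (logical %u >= limit %u)\n", logical, LIMIT);
        exit(1);
    }
    return logical + RELOCATION;
}

int main(void)
{
    printf("logical 0     -> physical %u\n", translate(0));       /* 100040 */
    printf("logical 74599 -> physical %u\n", translate(74599));   /* 174639 */
    translate(80000);                                             /* traps to the OS */
    return 0;
}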


Fig.: Hardware support for relocation and limit registers

b) MULTIPLE-PARTITION ALLOCATION

One of the simplest schemes for memory allocation is to divide memory into a number of fixed-sized partitions. Each partition may contain exactly one process.

Thus the degree of multiprogramming is bounded by the number of partitions. When a partition is free, a process is selected from the input queue and loaded into the free partition. When the process terminates, the partition becomes available for another process.

The operating system keeps a table indicating which parts of memory are available and which are occupied.

Initially, all memory is available for user processes and is considered one large block of available memory, a hole.

When a process arrives and needs memory, the operating system searches for a hole large enough for this process.


When a process arrives and needs memory, we search this set for a hole that is large enough for the process. If the hole is too large, it is split into two: one part is allocated to the arriving process; the other is returned to the set of holes. When a process terminates, it releases its block of memory, which is then placed back in the set of holes. If the new hole is adjacent to other holes, we merge these adjacent holes to form one larger hole.

This procedure is a particular instance of the general dynamic storage-allocation problem, which is how to satisfy a request of size n from a list of free holes. There are many solutions to this problem.

The set of holes is searched to determine which hole is best to allocate; first-fit, best-fit, and worst-fit are the most common strategies used to select a free hole from the set of available holes.

First-fit:

Allocate the first hole that is big enough. Searching can start either at the beginning of the set of holes or where the previous first-fit search ended. We can stop searching as soon as we find a free hole that is large enough.

Best-fit:

Allocate the smallest hole that is big enough. We must search the entire list, unless the list is kept ordered by size. This strategy produces the smallest leftover hole.

Worst-fit:

Allocate the largest hole. Again, we must search the entire list unless it is sorted by size. This strategy produces the largest leftover hole, which may be more useful than the smaller leftover hole from a best-fit approach.
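
A minimal sketch of the three selection strategies over a list of free-hole sizes (the hole sizes and the request size are made up for illustration):

#include <stdio.h>

/* Each function returns the index of the chosen hole for a request of 'req' units, or -1 if none fits. */

int first_fit(const int holes[], int n, int req)
{
    for (int i = 0; i < n; i++)
        if (holes[i] >= req) return i;              /* take the first hole that is big enough */
    return -1;
}

int best_fit(const int holes[], int n, int req)
{
    int best = -1;
    for (int i = 0; i < n; i++)                     /* must scan the whole list */
        if (holes[i] >= req && (best == -1 || holes[i] < holes[best]))
            best = i;                               /* smallest hole that still fits */
    return best;
}

int worst_fit(const int holes[], int n, int req)
{
    int worst = -1;
    for (int i = 0; i < n; i++)                     /* must scan the whole list */
        if (holes[i] >= req && (worst == -1 || holes[i] > holes[worst]))
            worst = i;                              /* largest hole that fits */
    return worst;
}

int main(void)
{
    int holes[] = {100, 500, 200, 300, 600};        /* free hole sizes (illustrative) */
    int req = 212;
    printf("first-fit -> hole index %d\n", first_fit(holes, 5, req));  /* 1 (size 500) */
    printf("best-fit  -> hole index %d\n", best_fit(holes, 5, req));   /* 3 (size 300) */
    printf("worst-fit -> hole index %d\n", worst_fit(holes, 5, req));  /* 4 (size 600) */
    return 0;
}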

Posted by: Vissicomp Technology Pvt. Ltd.

Website: http://www.vissicomp.com