Big-Oh Notation

This idea of an upper bound can be stated more formally. The formal statement of an upper bound is called Big-Oh notation. The "Oh" refers to the Greek letter Omicron, which is used to express an upper bound on the cost (time or space) of an algorithm.

As computer programmers, our number one concern is how our programs will perform when we have large amounts of data, in this case a very large value of n. In terms of memory and CPU time, we want to know how a program behaves as the number of statements it must execute grows. We found that the second algorithm completes its calculation in the same amount of time irrespective of the value of n (where n is the count of the first integers to be added). Let us represent the size of the data set by a variable n, and let the average time to compute the sum for input of size n be f(n). One can then state the following.

O(g(n)) = { f | ∃ d > 0 and n0 ∈ Z+ such that 0 ≤ f(n) ≤ d*g(n) for all n ≥ n0 }

In plain English, this mathematical statement says: the class of functions O(g(n)) consists of all functions f for which there exist a constant d greater than 0 and a positive integer n0 such that 0 ≤ f(n) ≤ d*g(n) for all n greater than or equal to n0. Graphically, once n passes n0, the curve d*g(n) lies on or above f(n).


If f is an element of O(g(n)), then we say f(n) is O(g(n)), and the function g is called an asymptotic upper bound for f. Stated in English, the set O(g(n)) consists of all functions f(n) that have an upper bound of d*g(n) as n approaches infinity; this is the meaning of the word asymptotic. The idea of an asymptotic bound is that for some small values of n the value of f(n) might be bigger than d*g(n), but once n gets big enough (i.e., bigger than n0), it will always be true that f(n) is less than or equal to d*g(n) for all bigger n.

Mathematically, if f(n) is O(g(n)), then there exist constants d > 0 and n0 such that f(n) ≤ d*g(n) for all n ≥ n0. However, the relation does not hold the other way around: f(n) being O(g(n)) does not imply that g(n) is O(f(n)).

Consider the following example. If f(n) = log n and g(n) = n, we say log n is O(n), but n is not O(log n). One can easily prove this: log n ≤ n for all n ≥ 1 (take d = 1 and n0 = 1), while no constant d can make n ≤ d*log n hold for all large n, because n / log n grows without bound.

The scaling factor d and the threshold n0 are interdependent, and both differ from one particular function f(n) in O(g(n)) to another.

  • The notations f(n) = O(g(n)) and "f(n) is O(g(n))" actually mean f(n) ∈ O(g(n)).
  • The notation f(n) ∈ O(g(n)) indicates that f(n) is a member of the set O(g(n)) of functions.
  • All the functions in the set O(g(n)) grow at the same rate as, or a lesser rate than, g(n) as n → ∞.

Because the time consumed by the second algorithm does not depend on n, the size of the instance, one can take g(n) = 1. So the average time for the second algorithm to complete the summation of size n is O(1). If it never takes longer than 1 µs in Python, then a good choice for d would be 1. According to the definition above, it must then be the case that f(n) is less than or equal to 1 µs once n gets big enough.
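The post compares two ways of summing the first n integers but does not show them; the following Java sketch reconstructs what they presumably look like (a loop versus the closed-form formula n(n+1)/2), making the O(n) versus O(1) contrast concrete:

class SumDemo {
    // first algorithm: the work grows linearly with n, so its time is O(n)
    static long sumLoop(long n) {
        long total = 0;
        for (long i = 1; i <= n; i++) total += i;
        return total;
    }

    // second algorithm: the same few operations regardless of n, so its time is O(1)
    static long sumFormula(long n) {
        return n * (n + 1) / 2;   // closed-form sum 1 + 2 + ... + n
    }

    public static void main(String[] args) {
        System.out.println(sumLoop(1000));    // 500500
        System.out.println(sumFormula(1000)); // 500500, computed in constant time
    }
}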

Rate of Growth

The choice of g(n) = 1 in computing the complexity of an algorithm is arbitrary. One could have taken g(n) = 2; if g(n) = 2 were chosen, d might be taken as 5 instead of 1. But since we are only concerned with the overall growth of the function g, the choice between 1 and 2 is irrelevant, and the simplest function is chosen, in this case O(1). In English, when an operation or program is O(1), it is said to be a constant-time operation or program: its cost does not depend on the size n. Plotted against n, this behavior is a horizontal line.

The behavior of an algorithm is represented as the rate of growth of its execution time (CPU time) as a function of the size n of the input problem instance. Characterizing an algorithm's performance in this way is a common abstraction that ignores numerous details, and using this measure properly requires an awareness of the details it hides. Every program is run on a computing platform, a general term covering:

  • The computer on which the program is run, its CPU, data cache, floating-point unit (FPU), and other on-chip features
  • The programming language in which the program is written, along with the compiler/interpreter and optimization settings for generated code
  • The operating system
  • Other processes being run in the background

Changing the platform will change the execution time of a program (an algorithm) by roughly a constant factor. It is therefore feasible to ignore platform differences when comparing algorithms asymptotically.

Analysis in the Best, Average, and Worst Cases

One question to ask is whether the results of the previous section hold for all input problem instances. The most studied algorithms in computer engineering are searching and sorting. How will the behavior of different sorting algorithms change with different input problem instances of the same size?

  • The data could contain large runs of elements already in sorted order
  • The input could contain duplicate values
  • Regardless of the size n of the input set, the elements could be drawn from a much smaller set and thus contain a significant number of duplicate values

The conclusion to draw is that for many problems, no single optimal algorithm exists. Choosing an algorithm depends on understanding the problem being solved and the underlying probability distribution of the instances likely to be treated, as well as the behavior of the algorithms being considered.

To provide some guidance, algorithms are typically presented with three common cases in mind:

Worst case

Defines a class of problem instances for which an algorithm exhibits its worst runtime behavior. Instead of trying to identify the specific input, algorithm designers typically describe  properties of the input that prevent an algorithm from running efficiently.  For a given program and a given value n, the worst-case execution time is the maximum execution time, where the maximum is taken over all instances of size n.

Best case

Defines a class of problem instances for which an algorithm exhibits its best runtime behavior. For these instances, the algorithm does the least work. In reality, the best case rarely occurs.

Average case

Defines the expected behavior when executing the algorithm on random problem instances. While some instances will take longer to complete because of special cases, the vast majority will not. This measure describes the CPU time an average user of the algorithm should expect.

Therefore, in describing the rate of growth of work or time, we consistently ignore constants. So when we say that Linear Search of n elements takes on average

(n + 1) / 2

comparisons (subject to our earlier assumptions), then by convention, and subject to those assumptions, we expect the Linear Search algorithm's average time complexity to be O(n).

Consider the Linear Search algorithm applied to a huge list in Python. If the search item is the last one in the list, and the instance size n is very large, the computer will have to make n comparisons before it finds the item. This is the worst-case scenario, and it is O(n): make sure you understand why.

If the item to be searched for is first in the list, the CPU makes only one comparison irrespective of the value of n. This is the best-case scenario. For Linear Search, the average case works out to the arithmetic average of the worst and best cases, and the time complexity is again O(n). Plotted against n, O(n) behaviour is a straight line.
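A minimal Java sketch of Linear Search (an illustration, not code from the original post), with the best and worst cases marked:

class LinearSearchDemo {
    // returns the index of key in a, or -1 if the key is absent
    static int linearSearch(int[] a, int key) {
        for (int i = 0; i < a.length; i++) {
            if (a[i] == key) return i;   // found after i + 1 comparisons
        }
        return -1;                       // absent: all n comparisons were used
    }

    public static void main(String[] args) {
        int[] a = {7, 3, 9, 4, 1};
        System.out.println(linearSearch(a, 7)); // best case: first element, 1 comparison, O(1)
        System.out.println(linearSearch(a, 1)); // worst case: last element, n comparisons, O(n)
    }
}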



Written and edited by Dr. Ashwin I Mehta


Threads in Java

Java is a multi-threaded programming language, which means a programmer can develop multi-threaded programs using Java. A multi-threaded program contains two or more parts that can run simultaneously (concurrently), each executing different code at the same time and making maximum use of the available resources. Resources here means available CPU time, memory, or both.

Process vs. Thread 

To understand concurrency and multithreading, one must first understand the differences between a process and a thread.

A process runs independently and isolated from other processes. It cannot directly access shared data in other processes. The resources of the process, e.g. memory and CPU time, are allocated to it by the operating system.

A thread is a so-called lightweight process. It has its own call stack (the order in which various methods are called) but can access shared data of other threads in the same process. Every thread has its own memory cache; if a thread reads shared data, it stores this data in its own cache. Many threads run within a single process, which reduces the consumption of resources.

By definition, multitasking is when multiple processes share common processing resources such as CPU time. Multi-threading extends the idea of multitasking into applications: you can subdivide specific operations within a single application into individual threads, each of which can run in parallel. Multi-threading enables you to write programs in which multiple activities proceed concurrently.

The idea of a thread is that of a lightweight unit of execution—smaller than a process, but still capable of executing arbitrary Java code. The usual way that this is implemented is for each thread to be a fully fledged unit of execution to the operating system but to belong to a process, with the address space of the process being shared between all threads comprising that process. This means that each thread can be scheduled independently and has its own stack and program counter but shares memory and objects with other threads in the same process.

Life Cycle Of a Thread

A thread goes through various stages in its life cycle. For example, a thread is born, started, runs, and then dies. The following diagram shows the complete life cycle of a thread.

Following are the stages of the life cycle −

  • New − A new thread begins its life cycle in the new state. The thread has been created (its constructor has been called). It remains in this state until the program starts the thread.
  • Runnable − Once the thread's start() method has been called, the thread becomes runnable. The thread is available to be run (execute code) by the thread scheduler.
  • Waiting − The thread has been temporarily removed from the runnable state. It transitions back to the runnable state only when another thread signals the waiting thread to continue executing.
  • Timed Waiting − A runnable thread can enter the timed waiting state for a specified interval of time, for example by calling the sleep() method with a time interval as its argument. A thread in this state transitions back to the runnable state when that interval expires or when the event it is waiting for, such as completion of I/O, occurs.
  • Terminated (Dead) − A runnable thread enters the terminated state when it finishes executing its run() method. It remains in this state until the application ends.

 

The diagram below depicts the various states of a thread and shows the methods that transform a thread from one state to another. Here the Ready state is also called the Runnable state.


Figure 1. Life cycle representing all phases of a thread.
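These transitions can also be observed programmatically through Thread.getState(); the following minimal Java sketch (an illustration, not code from the original post) prints the state at each phase:

class LifeCycleDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread t = new Thread(() -> {
            try {
                Thread.sleep(100);            // moves the thread into TIMED_WAITING
            } catch (InterruptedException e) { }
        });
        System.out.println(t.getState());     // NEW: created but not yet started
        t.start();
        System.out.println(t.getState());     // RUNNABLE (typically, just after start())
        Thread.sleep(50);
        System.out.println(t.getState());     // TIMED_WAITING: inside sleep()
        t.join();                             // wait for the thread to finish
        System.out.println(t.getState());     // TERMINATED: run() has completed
    }
}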

A multithreading environment in Java can be implemented in two ways, as listed below.

  1. Create a (user-defined) class that extends Java's built-in Thread class.
  2. Create a (user-defined) class that implements Java's built-in Runnable interface.

Classes and interfaces to create threads


Thread (class): the class that creates and defines a thread. This class inherits from the Object class and implements the Runnable interface.

Runnable (interface): the only method in this interface is the run() method.

Object: wait(), notify() and  notifyAll() are methods of this class used for threading.

Example of running a single thread by creating a class that extends the Thread class:

 

class SinglethreadDemo extends Thread {

    // run() is the method of the Runnable interface, which Thread implements
    public void run() {
        System.out.println("My single thread is in running state");
    }

    public static void main(String[] args) {
        SinglethreadDemo obj = new SinglethreadDemo();
        obj.start();   // calling the start() method of the Thread class schedules the thread, which then runs run()
    }
}
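For comparison, here is the second approach: a (user-defined) class that implements the Runnable interface. A minimal sketch along the same lines as the example above:

class RunnableDemo implements Runnable {

    // the only method declared by the Runnable interface
    public void run() {
        System.out.println("My runnable thread is in running state");
    }

    public static void main(String[] args) {
        // a Runnable cannot start itself; it must be wrapped in a Thread
        Thread t = new Thread(new RunnableDemo());
        t.start();
    }
}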



 

Written and edited by Dr. Ashwin I Mehta, Vissicomp Technology

RASPBERRY-PI

Raspberry Pi is a credit-card-sized computer designed and manufactured in the United Kingdom by the Raspberry Pi Foundation, with the intention of teaching basic computer science to school students and to anyone else interested in computer hardware, programming and DIY (do-it-yourself) projects.

The Raspberry Pi is manufactured in three board configurations through licensed manufacturing deals with Newark element14 (Premier Farnell), RS Components and Egoman, all of which sell the Raspberry Pi online. Egoman produces a version for distribution solely in China and Taiwan, distinguishable from other Pis by its red coloring and lack of FCC/CE marks.

The hardware is the same across all manufacturers. The Raspberry Pi has a Broadcom BCM2835 system on a chip (SoC), which includes an ARM1176JZF-S 700 MHz processor and a VideoCore IV GPU, and it originally shipped with 256 megabytes of RAM, later upgraded (Model B and Model B+) to 512 MB. It does not include a built-in hard disk or solid-state drive; instead it uses an SD card for booting and persistent storage, with the Model B+ using a MicroSD.

The Foundation provides Debian (a Linux distribution) and Arch Linux ARM distributions for download. Tools are available for Python as the main programming language, with support for BBC BASIC (via the RISC OS image or the Brandy Basic clone for Linux), C, Java and Perl.

Before you plug anything into your Raspberry Pi, make sure that you have all the equipment you need:

  • A monitor with the correct cable and adapter
  • A micro USB power supply
  • A wired keyboard and mouse, or a wireless keyboard and mouse with a Bluetooth adapter
  • A micro SD card
  • A Raspberry Pi


To get started with Raspberry Pi, you also need an operating system.

Plugging in your Raspberry Pi

  1. Begin by placing your SD card into the SD card slot on the Raspberry Pi. It will only fit one way.
  2. Next, plug your keyboard and mouse into the USB ports on the Raspberry Pi.
  3. Make sure that your monitor or TV is turned on, and that you have selected the right input (e.g. HDMI 1, DVI, etc).
  4. Connect your HDMI cable from your Raspberry Pi to your monitor or TV.
  5. If you intend to connect your Raspberry Pi to the internet, plug an Ethernet cable into the Ethernet port, or connect a WiFi dongle to one of the USB ports (unless you have a Raspberry Pi 3).
  6. When you’re happy that you have plugged all the cables and SD card in correctly, connect the micro USB power supply. This action will turn on and boot your Raspberry Pi.

Raspberry Pi networking

You’ll probably want to connect your Raspberry Pi to your local network or the internet. You can use any of the following options to do this:

Connecting via Ethernet

The Raspberry Pi has an Ethernet port, alongside the USB ports. If your Raspberry Pi is situated close to a router, access point, or switch, you can connect to a network using an Ethernet cable.


Once you’ve plugged the Ethernet cable into the Raspberry Pi and the other end into an access point, your Raspberry Pi will automatically connect to the network.

Connecting via WiFi

If you have a Raspberry Pi 3, then there is built-in WiFi. If you’re using an earlier version of the Raspberry Pi, then you’ll need a USB WiFi dongle.

Some WiFi dongles, when used with the Raspberry Pi, are simple plug-and-play devices. Others require specific drivers and may not be compatible with the Raspberry Pi. Make sure you read the device manufacturer's documentation before making a purchase.

You can buy the official Raspberry Pi WiFi dongle from The Pi Hut or Farnell.

Steps of OS Installation:

1. Download Wheezy from the Raspberry Pi site and extract it into a new folder; extracting it gives you an image file with a .img extension.

2. Download SD Card Formatter from SourceForge onto your local desktop.

3. Insert your SD card into a card reader, connect it to your desktop, and format the card.

4. Make the SD card bootable using SD Card Formatter.

5. Download Win32 Disk Imager to write the OS.

6. Open Win32 Disk Imager.

7. Use the "browse" option on the imager to locate the .img file, click the "write" button, and exit. The OS will be installed on the SD card.

8. Insert the SD card into the Raspberry Pi.

9. Connect all hardware accessories to your Raspberry Pi kit.

10. Turn on the Raspberry Pi kit; screens full of boot messages will scroll past on your display.

It will ask you for an id and password; the default id is pi and the password is raspberry. You will then get the Raspberry Pi desktop.


Written by Mrs. Jyoti Khandagale; edited by Dr. Ashwin Mehta

 

Inter-Process Communication


Producer-Consumer


• Consumer sends N empty messages (i.e., slots)

• Producer picks up an empty message/slot and sends back an item it has produced

• Consumer receives an item and sends back an empty message (i.e., a slot)

• Messages sent but not yet received are buffered by the OS

Characteristics

Naming: Direct Communication

• Sender/receiver refer to each other, as seen before

• Properties of communication link

– Link is established automatically between communicating processes

– Link is associated with exactly two processes

– There is exactly one link for every pair of processes

• Communication is symmetric (as above) or asymmetric

– send(P, m) // send a message to process P

– receive(&id, m) // receive from any process; set id to the sender

Direct Naming


Naming: Indirect Communication

• Communication via mailboxes (or ports)

• Processes communicate by putting and taking messages in/from mailboxes

– send(A, m) and receive(A, m)

• Properties of communication link

– A link is established between two processes if they share a mailbox

– A link may be associated with more than two processes

– A number of different links may exist between any pair of processes, each one being a separate mailbox
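Within a single JVM, a shared BlockingQueue behaves much like a mailbox: any thread holding a reference to it can send to it or receive from it, and several senders and receivers can share one link. A minimal Java sketch of indirect communication (purely illustrative; the names are mine):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

class MailboxDemo {
    public static void main(String[] args) {
        // the "mailbox": a bounded queue of messages shared by all parties
        BlockingQueue<String> mailbox = new ArrayBlockingQueue<>(10);

        // two senders and one receiver communicate indirectly via the mailbox
        new Thread(() -> { try { mailbox.put("hello from A"); } catch (InterruptedException e) { } }).start();
        new Thread(() -> { try { mailbox.put("hello from B"); } catch (InterruptedException e) { } }).start();
        new Thread(() -> {
            try {
                System.out.println(mailbox.take()); // arrives from whichever sender ran first
                System.out.println(mailbox.take());
            } catch (InterruptedException e) { }
        }).start();
    }
}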

Indirect Naming



Inter-Process Communication


Mailboxes

Which process gets the message when more than one process tries to receive from the same mailbox? It depends on the design of the system:

• At most one process may execute a receive operation at a time

• The system arbitrarily selects one as the receiver

• The message is retrieved by both (a form of multicast)

Ownership

• Either process can own the mailbox

• OS can own it and/or creator owns it


Sender/Receiver Synchronization

• Message passing may be blocking or non-blocking (synchronous or asynchronous)

• Blocking send – the sender is blocked until the message is received by the receiver (or by the mailbox)

• Non-blocking send – the sending process resumes operation right after sending

• Blocking receive – the receiver blocks until a message is available

• Non-blocking receive – the receiver retrieves a valid message or returns an error code

• Any combination of the above send/receive semantics is possible

Blocking vs. non-blocking send

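Java's BlockingQueue offers all four flavours in one API, which makes a handy illustration of the distinction (this mapping onto Java is mine, not the original post's):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

class SendReceiveDemo {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> channel = new ArrayBlockingQueue<>(1); // capacity 1

        channel.put("m1");                    // blocking send: waits while the queue is full
        boolean sent = channel.offer("m2");   // non-blocking send: returns false rather than waiting
        System.out.println(sent);             // false: the queue is already full

        String m = channel.take();            // blocking receive: waits while the queue is empty
        String maybe = channel.poll();        // non-blocking receive: returns null rather than waiting
        System.out.println(m + " " + maybe);  // prints "m1 null"
    }
}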

Buffering

• The mechanism that buffers messages (a.k.a. the queue) may have the following properties:

– Zero capacity: the queue has length 0; no messages can be outstanding on the link; the sender blocks until the message is exchanged

– Bounded capacity: the queue has length N; N messages can be in the queue at any point in time; the sender blocks if the queue is full, otherwise it may continue to execute

– Unbounded capacity: the queue has infinite length; the sender never blocks
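These three disciplines have direct counterparts among Java's concurrent queues, which may help make them concrete (again, the mapping onto Java is an illustration of mine):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.SynchronousQueue;

class BufferingDemo {
    public static void main(String[] args) {
        // zero capacity: every put() blocks until a matching take() arrives (a rendezvous)
        BlockingQueue<Integer> zero = new SynchronousQueue<>();

        // bounded capacity: at most 8 messages outstanding; put() blocks when full
        BlockingQueue<Integer> bounded = new ArrayBlockingQueue<>(8);

        // "unbounded" capacity: put() never blocks (limited only by memory)
        BlockingQueue<Integer> unbounded = new LinkedBlockingQueue<>();
    }
}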

Message Passing in Practice

• Message Passing Interface (MPI) used in parallel programming systems

• Open MPI, an open-source implementation of the MPI standard

• Unix’s IPC – Inter-process communication facilities

– Semaphores

– Pipes, FIFO

– Message queues

– Sockets

– Shared memory, memory mapped files

Characteristics of Message Passing

• Addressing

– Each process has a unique name

– Introduce the concept of a mailbox

• With sender, with receiver, with both, with OS

• Buffering

– No buffering, send blocks until receive happens or vice versa, then copy message (called a rendezvous)

Copy-on-write

• Message passing for process-to-process communication can be quite inefficient due to context switching, copying, etc.

• Copy-on-write first sets a pointer in the receiver's address space pointing to the received data in the sender's address space

• Only if the sender or receiver starts writing to this data will the OS copy the actual data from sender to receiver

• Otherwise, no copying of the data is performed


Inter-Process Communication


Mutual Exclusion Based on Message Passing


Mutual Exclusion Based on Message Passing Code

P1:

send(ResourceController, "Give me resource");
receive(ResourceController, Message);   // block while waiting for a reply from RC

{
    … Critical Section …
}

send(ResourceController, "Done with resource");
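The controller process itself is not shown in the post. A minimal Java sketch of the whole scheme, with BlockingQueues standing in for the message channels (all names here are illustrative assumptions, and only one client is shown; serving several clients would require sender ids in the messages):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

class ResourceControllerDemo {
    static BlockingQueue<String> toController = new ArrayBlockingQueue<>(10);
    static BlockingQueue<String> toClient = new ArrayBlockingQueue<>(10);

    public static void main(String[] args) {
        // controller: grants the resource to one requester at a time
        Thread controller = new Thread(() -> {
            try {
                while (true) {
                    toController.take();       // wait for "Give me resource"
                    toClient.put("granted");   // reply: requester may enter its critical section
                    toController.take();       // wait for "Done with resource"
                }
            } catch (InterruptedException e) { }
        });
        controller.setDaemon(true);            // let the demo JVM exit when the client finishes
        controller.start();

        // client P1, following the pseudocode above
        new Thread(() -> {
            try {
                toController.put("Give me resource");
                toClient.take();               // blocks until the controller replies
                System.out.println("in critical section");
                toController.put("Done with resource");
            } catch (InterruptedException e) { }
        }).start();
    }
}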

Bounded Buffer Problem (no shared memory)


void consumer(void)
{
    int item, i;
    message m;

    // initialization loop: give the producer N empty slots
    for (i = 0; i < N; i++) send(producer, &m);

    while (TRUE) {
        receive(producer, &m);     // receive a message containing an item
        item = extract_item(&m);
        send(producer, &m);        // send back an empty slot
        consume_item(item);
    }
}


void producer(void)
{
    int item;
    message m;

    while (TRUE) {
        item = Produce_Item();
        receive(consumer, &m);     // wait for an empty slot
        build_message(&m, item);
        send(consumer, &m);        // send the item
    }
}


Inter-Process Communication


Semaphores & Monitors

Monitors are high-level programming language concepts

– They make mutual exclusion over critical sections "automatic" and therefore less error-prone

– They require compiler support

– They are not widely available in modern languages (available, e.g., in Modula-3)

– They are available as an "approximation" in Java (synchronized methods and blocks)

Semaphores are lower-level

– Require OS support

– Applications can be layered on top of them, i.e., no compiler support required

• Both concepts require some kind of shared memory

• How about synchronization problems in a distributed system (no shared memory)?
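Before turning to the distributed case, here is a minimal Java sketch of both shared-memory mechanisms side by side: a synchronized block (the "approximation" of a monitor mentioned above) and java.util.concurrent.Semaphore:

import java.util.concurrent.Semaphore;

class MutexDemo {
    private static final Object lock = new Object();        // monitor-style lock
    private static final Semaphore sem = new Semaphore(1);  // binary semaphore
    private static int counter = 0;

    static void incrementWithMonitor() {
        synchronized (lock) {  // acquire and release are paired automatically
            counter++;
        }
    }

    static void incrementWithSemaphore() throws InterruptedException {
        sem.acquire();         // pairing acquire/release is our job: lower-level, more error-prone
        try {
            counter++;
        } finally {
            sem.release();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        incrementWithMonitor();
        incrementWithSemaphore();
        System.out.println(counter);  // 2
    }
}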

Conclusion About Monitors and Semaphores

• Monitors are high-level but not widely supported, and therefore not too useful in practice

• Semaphores are rather low-level

• None of the mechanisms discussed so far allows for information exchange between distributed processes, so we need something else!

Message Passing

• An inter-process communication mechanism

• Based on two primitives

– send(destination, &message)

• Sends a message to a destination

– receive(source, &message)

• Receives a message from a source (any source)

• They are system calls, not language constructs

• Blocking and non-blocking versions are available.


• Naming of sender / receiver

• Number of interacting entities

• Reliability of link

• Capacity of link

Design Issues:

There are various design concerns here that were not applicable in the semaphore and monitor discussion:

• Networked system, i.e., messages may be lost (network is unreliable)

• Sender/receiver agree to use acknowledgements and retransmission timers

• Message received, but acknowledgement lost

– The receiver will get the same message twice

– Use consecutive sequence numbers to detect duplicates

– These are classic concerns from computer networks


• Naming of processes, for unambiguous send/receive

• Authentication: how can a client know it is communicating with the proper server?

• Tuning for performance: if sender and receiver are on the same machine, a semaphore is usually cheaper
