
Analysis Of Algorithms

Evolution of Algorithms
 The word “algorithm” derives from al-Khwarizmi, the mathematician who used the concepts of zero and the decimal positional number system; Ada Lovelace later wrote the first algorithm intended to be carried out by a machine.
 During the 1940s and 1950s, research was oriented toward building efficient computer systems so that they could be used for scientific, commercial, and engineering problems.
 Structured programming came into existence after Alan Turing introduced the idea of an effective procedure in 1936.
 Donald Knuth is regarded as the father of the analysis of algorithms.
What is an algorithm?
 An algorithm is a sequence of unambiguous instructions for solving a problem, i.e., for
obtaining a required output for any legitimate input in a finite amount of time.

[Figure: Problem → Algorithm; Input → Computer (executing the algorithm) → Output]
 Following are the features which an algorithm has:
1. Finiteness: terminates after a finite number of steps.
2. Definiteness: rigorously and unambiguously specified.
3. Clearly specified input: valid inputs are clearly specified.
4. Clearly specified/expected output: can be proved to produce the correct output given a valid input.
5. Effectiveness: steps are sufficiently simple and basic.
Why study algorithms?
 Theoretical importance
 the core of computer science
 Practical importance
 a practitioner’s toolkit of known algorithms
 a framework for designing and analyzing algorithms for new problems

Scientific Approach For Algorithmic Problem Solving
Issues in the Study of Algorithms
 How to create an algorithm.
 How to validate an algorithm.
 How to analyze an algorithm.
 How to test a program.
Cont’d …
1. How to create an algorithm: To create an algorithm we have the following design techniques:
 a) Divide & Conquer
 b) Greedy Method
 c) Dynamic Programming
 d) Branch & Bound
 e) Backtracking

2. How to validate an algorithm: Once an algorithm is created, it is necessary to show that it computes the correct output for all possible legal inputs; this process is called algorithm validation.

3. How to analyze an algorithm: Analysis of an algorithm, or performance analysis, refers to the task of determining how much computing time and storage an algorithm requires.
a) Time complexity: estimated with the frequency or step-count method (illustrated below).
b) Space complexity: determined from the amount of memory the algorithm needs for its inputs and any additional variables.
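For example, applying the step-count method to a simple summation loop (the sum method below is only an illustrative sketch):

// Step-count illustration: summing the elements of an array.
public int sum(int[] a) {
    int s = 0;                            // 1 step
    for (int i = 0; i < a.length; i++) {  // 1 init + (n + 1) tests + n increments
        s += a[i];                        // n additions and assignments
    }
    return s;                             // 1 step
}
// Total: T(n) = 3n + c for a small constant c, i.e., O(n) time;
// space complexity is O(1) beyond the input array itself.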

4. How to test a program: To test a program we need the following:
a) Debugging: the process of executing a program on sample data sets to determine whether faulty results occur and, if so, to correct them.
b) Profiling (performance measurement): the process of executing a correct program on data sets and measuring the time and space it takes to compute the results, as sketched below.
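A minimal sketch of profiling, timing a run with System.nanoTime (here the JDK’s Arrays.sort stands in for the program under measurement):

import java.util.Arrays;

public class Profiler {
    public static void main(String[] args) {
        int[] data = new int[100_000];
        for (int i = 0; i < data.length; i++)
            data[i] = (int) (Math.random() * 1_000_000);  // random sample data set

        long start = System.nanoTime();   // timestamp before the run
        Arrays.sort(data);                // the program under measurement
        long elapsed = System.nanoTime() - start;

        System.out.println("Elapsed: " + elapsed / 1_000_000.0 + " ms");
    }
}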
Example of Quicksort Algorithm Using Divide-and-Conquer
 Divide: Partition (separate) the array A[p..r] into two (possibly empty) subarrays A[p..q−1] and A[q+1..r].
 Each element in A[p..q−1] ≤ A[q].
 A[q] ≤ each element in A[q+1..r].
 Index q is computed as part of the partitioning procedure.
 Conquer: Sort the two subarrays by recursive calls to quicksort.
 Combine: The subarrays are sorted in place, so no work is needed to combine them.
Quicksort
 The key process in quicksort is partition(). The goal of partitioning is to place the pivot (any element can be chosen as the pivot) at its correct position in the sorted array, putting all smaller elements to the left of the pivot and all greater elements to the right.
 Partitioning is then applied recursively on each side of the pivot after the pivot is placed in its correct position, which finally sorts the array.

Choosing a Pivot
• Always pick the first element as the pivot.
• Always pick the last element as the pivot.
• Pick a random element as the pivot.
• Pick the middle element as the pivot.
 Consider arr[] = {10, 80, 30, 90, 40}, taking the last element, 40, as the pivot.
• Compare 10 with the pivot; as it is less than the pivot, arrange it accordingly.
• Compare 80 with the pivot; it is greater than the pivot.
• Compare 30 with the pivot; it is less than the pivot, so arrange it accordingly.
• Compare 90 with the pivot; it is greater than the pivot.
• Arrange the pivot in its correct position.
As the partition process is done recursively, it keeps putting each pivot in its actual position in the sorted array. Repeatedly putting pivots in their actual positions makes the array sorted.
 Step 1 − Choose the value at the highest index as the pivot.
 Step 2 − Take two variables, left and right, pointing into the list excluding the pivot.
 Step 3 − left points to the low index.
 Step 4 − right points to the high index.
 Step 5 − While the value at left is less than the pivot, move left rightward.
 Step 6 − While the value at right is greater than the pivot, move right leftward.
 Step 7 − If neither step 5 nor step 6 applies, swap the values at left and right.
 Step 8 − When left ≥ right, the point where they meet is the pivot’s final position.
 Pseudocode:

procedure quickSort(left, right)
   if right - left <= 0
      return
   else
      pivot = A[right]
      partition = partitionFunc(left, right, pivot)
      quickSort(left, partition - 1)
      quickSort(partition + 1, right)
   end if
end procedure
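A minimal runnable Java sketch of this scheme, using the Lomuto partition (which always picks the last element as the pivot); the names quickSort and partitionFunc mirror the pseudocode above:

public class QuickSort {
    // Sorts a[left..right] in place, mirroring the pseudocode above.
    static void quickSort(int[] a, int left, int right) {
        if (right - left <= 0) return;          // zero or one element: already sorted
        int p = partitionFunc(a, left, right);  // pivot lands at its final index p
        quickSort(a, left, p - 1);              // sort the smaller-than-pivot side
        quickSort(a, p + 1, right);             // sort the greater-than-pivot side
    }

    // Lomuto partition: pivot = a[right]; returns the pivot's final index.
    static int partitionFunc(int[] a, int left, int right) {
        int pivot = a[right];
        int i = left - 1;                       // end of the "less than pivot" region
        for (int j = left; j < right; j++) {
            if (a[j] < pivot) {                 // grow the smaller region, swap a[j] in
                i++;
                int t = a[i]; a[i] = a[j]; a[j] = t;
            }
        }
        int t = a[i + 1]; a[i + 1] = a[right]; a[right] = t;  // place the pivot
        return i + 1;
    }

    public static void main(String[] args) {
        int[] arr = {10, 80, 30, 90, 40};       // the example array from these slides
        quickSort(arr, 0, arr.length - 1);
        System.out.println(java.util.Arrays.toString(arr));  // [10, 30, 40, 80, 90]
    }
}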
Complexity Analysis of Quicksort:
 Time Complexity:
 Best Case: Ω(N log N)
The best case for quicksort occurs when the pivot chosen at each step divides the array into roughly equal halves. In this case the algorithm makes balanced partitions, leading to efficient sorting.
 Average Case: Θ(N log N)
Quicksort’s average-case performance is usually very good in practice, making it one of the fastest sorting algorithms.
 Worst Case: O(N^2)
The worst case for quicksort occurs when the pivot chosen at each step consistently results in highly unbalanced partitions.
 Auxiliary Space: O(1) if we do not count the recursion stack; counting the recursion stack, quicksort requires O(N) space in the worst case.
Mergesort
 Merge sort is a sorting technique based on the divide-and-conquer technique. With a worst-case time complexity of O(n log n), it is one of the most widely used sorting algorithms.
 The MergeSort function keeps splitting the array into two halves until a subarray of size 1 is reached, i.e., p == r.
 It then combines the individually sorted subarrays into larger sorted arrays until the whole array is merged.
 ALGORITHM MERGE-SORT(A, p, r)
 1. if p < r
 2. then q ← ⌊(p + r)/2⌋
 3. MERGE-SORT(A, p, q)
 4. MERGE-SORT(A, q + 1, r)
 5. MERGE(A, p, q, r)
 We call MERGE-SORT(A, 0, length(A) − 1) to sort the complete array.
Procedure
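A minimal runnable Java sketch of the full procedure (the merge implementation here is an illustrative assumption, since the slides only name MERGE):

import java.util.Arrays;

public class MergeSort {
    // Sorts A[p..r], mirroring MERGE-SORT(A, p, r) above.
    static void mergeSort(int[] A, int p, int r) {
        if (p < r) {
            int q = (p + r) / 2;        // q = floor((p + r) / 2)
            mergeSort(A, p, q);         // sort the left half
            mergeSort(A, q + 1, r);     // sort the right half
            merge(A, p, q, r);          // combine the two sorted halves
        }
    }

    // Merges the sorted subarrays A[p..q] and A[q+1..r].
    static void merge(int[] A, int p, int q, int r) {
        int[] left = Arrays.copyOfRange(A, p, q + 1);
        int[] right = Arrays.copyOfRange(A, q + 1, r + 1);
        int i = 0, j = 0, k = p;
        while (i < left.length && j < right.length)      // take the smaller head element
            A[k++] = (left[i] <= right[j]) ? left[i++] : right[j++];
        while (i < left.length) A[k++] = left[i++];      // drain any leftovers
        while (j < right.length) A[k++] = right[j++];
    }

    public static void main(String[] args) {
        int[] A = {38, 27, 43, 3, 9, 82, 10};
        mergeSort(A, 0, A.length - 1);
        System.out.println(Arrays.toString(A));          // [3, 9, 10, 27, 38, 43, 82]
    }
}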
What is a recurrence relation?
 A recurrence relation, T(n), is a recursive function of an integer variable n.
 Like all recursive functions, it has one or more recursive cases and one or more base cases.
 Example: T(0) = a; T(n) = b + T(n − 1) for n > 0.
 The portion of the definition that does not contain T is called the base case of the recurrence relation; the portion that contains T is called the recurrent or recursive case.
 Recurrence relations are useful for expressing the running times (i.e., the number of basic operations executed) of recursive algorithms.
Forming Recurrence Relations
 For a given recursive method, the base case and the recursive case of its recurrence relation
correspond directly to the base case and the recursive case of the method.
 Example 1: Write the recurrence relation for the following method:
public void f(int n) {
    if (n > 0) {
        System.out.println(n);
        f(n - 1);
    }
}

 The base case is reached when n == 0. The method performs one comparison. Thus, the number of operations when n == 0, T(0), is some constant a.
 When n > 0, the method performs two basic operations and then calls itself, using ONE recursive call, with a parameter n − 1.
 Therefore the recurrence relation is:
T(0) = a for some constant a
T(n) = b + T(n − 1) for some constant b
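Unrolling this recurrence by repeated substitution confirms a linear running time:
T(n) = b + T(n − 1)
     = 2b + T(n − 2)
     = ...
     = nb + T(0)
     = nb + a = O(n)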
• In general, T(n) is usually a sum of various choices of T(m), the cost of the recursive subproblems, plus the cost of the work done outside the recursive calls:
 T(n) = aT(f(n)) + bT(g(n)) + ... + c(n)
 where a and b are the numbers of subproblems, f(n) and g(n) are the subproblem sizes, and c(n) is the cost of the work done outside the recursive calls. [Note: c(n) may be a constant.]
Example
 Example 2: Write the recurrence relation for the following method:

public int g(int n) {
    if (n == 1)
        return 2;
    else
        return 3 * g(n / 2) + g(n / 2) + 5;
}

 The base case is reached when n == 1. The method performs one comparison and one return statement. Therefore, T(1) is some constant c.
 When n > 1, the method performs TWO recursive calls, each with the parameter n / 2, and some constant number of basic operations.
 Hence, the recurrence relation is:
T(1) = c for some constant c
T(n) = b + 2T(n / 2) for some constant b
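Unrolling this recurrence shows the method runs in linear time (each level of the recursion tree doubles the number of subproblems while halving their size):
T(n) = 2T(n/2) + b
     = 4T(n/4) + 2b + b
     = ...
     = nT(1) + (n − 1)b
     = cn + (n − 1)b = Θ(n)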
Different types of recurrence relations
 Type 1: Divide and Conquer Recurrence Relations:
 Following are some examples of recurrence relations based on divide and conquer:
 T(n) = 2T(n/2) + cn
 T(n) = 2T(n/2) + √n
 These types of recurrence relations can be easily solved using the Master Method.
 For the recurrence T(n) = 2T(n/2) + cn, the values are a = 2, b = 2 and k = 1. Here log_b(a) = log_2(2) = 1 = k. Therefore, the complexity will be Θ(n log n). Similarly, for the recurrence T(n) = 2T(n/2) + √n, the values are a = 2, b = 2 and k = 1/2. Here log_b(a) = log_2(2) = 1 > k. Therefore, the complexity will be Θ(n).
 Type 2: Linear Recurrence Relations:
 Following are some examples of recurrence relations based on linear recurrence:
 T(n) = T(n − 1) + n for n > 0, with T(0) = 1
 These types of recurrence relations can be easily solved using the substitution method.
 For example:
 T(n) = T(n − 1) + n
      = T(n − 2) + (n − 1) + n
      = T(n − k) + (n − (k − 1)) + ... + (n − 1) + n
 Substituting k = n, we get:
 T(n) = T(0) + 1 + 2 + ... + n = 1 + n(n + 1)/2 = O(n^2)
 Type 3: Value Substitution Before Solving:
 Sometimes recurrence relations can’t be directly solved using techniques like substitution, recursion tree, or the master method. We then need to convert the recurrence relation into a suitable form before solving it. For example:
 T(n) = T(√n) + 1
 To solve this type of recurrence, substitute n = 2^m; since √(2^m) = 2^(m/2):
 T(2^m) = T(2^(m/2)) + 1
 Let T(2^m) = S(m); then
 S(m) = S(m/2) + 1
 Solving by the master method, we get:
 S(m) = Θ(log m)
 As n = 2^m, i.e., m = log_2(n):
 T(n) = T(2^m) = S(m) = Θ(log m) = Θ(log log n)
Telescoping in DAA
 Telescoping is a technique used in the analysis of algorithms to simplify the computation of sums and integrals. It is particularly useful when the summands or integrands have a recursive structure.
 Creative telescoping is a powerful computer algebra paradigm for dealing with definite integrals and sums with parameters.
 The complexity of telescoping is a topic in mathematics that deals with the computational complexity of the creative telescoping method for definite integration of special functions. The goal is to obtain fast algorithms and implementations for definite integration of general special functions, from a complexity-driven perspective.
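As a short worked example of telescoping applied to a running-time recurrence, take T(n) = T(n − 1) + n with T(0) = 1 (the Type 2 recurrence above). Writing the difference T(k) − T(k − 1) = k for k = 1, ..., n and summing, every intermediate term cancels:
T(n) − T(0) = [T(n) − T(n−1)] + [T(n−1) − T(n−2)] + ... + [T(1) − T(0)]
            = n + (n − 1) + ... + 1 = n(n + 1)/2
T(n) = 1 + n(n + 1)/2 = O(n^2)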
Master Theorem (Master Method)
 The master method provides an estimate of the growth rate of the solution for recurrences of the form:
T(n) = aT(n/b) + f(n)
where a ≥ 1, b > 1 and the overhead function f(n) > 0.

It is used for the calculation of time complexity. Only specific recurrence relations can be solved using the master theorem. When f(n) = Θ(n^c), the theorem distinguishes three cases:
Case 1: if a > b^c, then T(n) is Θ(n^(log_b a))
Case 2: if a = b^c, then T(n) is Θ(n^c log n)
Case 3: if a < b^c, then T(n) is Θ(n^c)

 If T(n) is interpreted as the number of steps needed to execute an algorithm for an input of size n, this recurrence corresponds to a divide-and-conquer algorithm, in which a problem of size n is divided into a subproblems of size n/b, where a, b are positive constants.
 Example 1: Find the big-Oh running time of the following recurrence. Use the Master Theorem:
T(1) = 1
T(n) = 2T(n/2) + n
Solution: a = 2, b = 2, c = 1, so a = b^c → Case 2.
Hence T(n) is O(n log n).
 Example 2: Find the big-Oh running time of the following recurrence. Use the Master Theorem:
T(1) = 1
T(n) = 4T(n/2) + kn^3 + h, where k ≥ 1 and h is a constant
Solution: a = 4, b = 2, c = 3, so a < b^c → Case 3.
Hence T(n) is O(n^3).
