You should be able to implement all of the functions declared in the `functions.hpp`, `string.hpp`, and `sorts.hpp` files.
Although there are some functions that may not be solvable recursively, you should be able to solve the vast majority both iteratively and recursively.
For solutions, consult the various `.cpp` files.
You should be able to analyze and discuss all algorithms and functions we have implemented in class, along with analyzing some fictitious pseudocode algorithms.
- What are the required components of recursive programming?
- A base case, which handles simple inputs that can be easily solved.
- A recursive call to the function itself, with arguments that move closer to the base case.
- What happens when you create a recursive function that does not reduce the problem size on each call?
- A recursive function that doesn't reduce the size of its problem will recurse infinitely, which results in a stack overflow, since each call requires a separate stack frame.
Provide recursive definitions, recurrence relations, and Big-Oh notation for the following functions:
Definition: func(x) = func(abs(x) - 1) + x; func(0) = 0
Relation: T(n) = T(n - 1) + 1; T(0) = 0
O(n), Omega(1)
function func(int x):
    return (x) ? func(abs(x) - 1) + x : 0
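The pseudocode above translates directly into runnable C++ (a minimal sketch; the function name matches the pseudocode):

```cpp
#include <cstdlib> // std::abs

// func(0) = 0; func(x) = func(abs(x) - 1) + x otherwise.
// Each call moves |x| one step closer to the base case at 0.
int func(int x) {
    if (x == 0) return 0;              // base case: simple input, easily solved
    return func(std::abs(x) - 1) + x;  // recursive call on a smaller problem
}
```

For a non-negative x this sums x + (x-1) + ... + 1, e.g. func(3) = 6.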
Definition: func(array, n) = func(array + 1, n - 2) after swapping array[0] and array[n - 1]; func([x], 1) = [x]; func([], 0) = [] (the function reverses the array in place)
Relation: T(n) = T(n - 2) + 2; T(0) = T(1) = 1
O(n), Omega(1)
function func(int[] array, int n):
    if n <= 1: return
    else:
        swap(array[0], array[n-1])
        func(array + 1, n - 2)
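This is a recursive array reversal; a runnable C++ version of the pseudocode (a sketch, with the name `reverse` substituted for clarity):

```cpp
#include <utility> // std::swap

// Recursively reverse an array: swap the outermost pair,
// then recurse on the inner n - 2 elements.
void reverse(int* array, int n) {
    if (n <= 1) return;                 // base case: 0 or 1 elements are already reversed
    std::swap(array[0], array[n - 1]);  // swap the ends
    reverse(array + 1, n - 2);          // recurse on the middle
}
```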
- What does it mean for a sorting algorithm to be stable?
- An algorithm that is stable will maintain the relative order of the objects it is sorting.
- What does it mean for a sorting algorithm to be in-place?
- An in-place algorithm is one that sorts the elements within the memory in which they were originally stored, i.e., it requires only $O(1)$ extra space.
- What does it mean for a sorting algorithm to be adaptive?
- An adaptive algorithm is one that will take advantage of an array that is already partially sorted.
- If you were given input conditions that the arrays would always be almost completely sorted, what algorithm would you choose?
- Insertion sort will be the best algorithm here as it runs in O(kn) where k is the upper bound on the distance each element is from where it should be.
- Given input constraints that every input array would be smaller than 10 elements, which algorithm would you choose? Does it matter?
- I would choose insertion sort, though it doesn't necessarily matter for such a small number of elements, as the differences in runtime are extremely unlikely to make a difference.
Calculate the value for n at which you would choose algorithm A over algorithm B to sort a given list, or state that you would always choose algorithm B.
Algorithms: A: T1(n) = 27n; B: T2(n) = n!
Answer: n = 6. You can set the equations equal to one another and find the n such that T1(n-1) > T2(n-1) but T1(n) < T2(n).
- 27n = n!
- 27 = n!/n = (n-1)!
- Think about the first few factorials:
- 2! = 2, 3! = 6, 4! = 24, 5! = 120, 6! = 720
- Since 4! = 24 < 27 < 120 = 5!, we have n-1 = 5, or n = 6.
Algorithms: A: T1(n) = 10n log n; B: T2(n) = 2^n
Answer: n = 8. Here, trial and error is easier than solving algebraically. Since log n is easy to evaluate at powers of 2, try each power of 2 starting from 1.
- T1(1) = 10*1 * log 1 = 10 * 0 = 0; T2(1) = 2^1 = 2
- T1(1) < T2(1)
- We might think we are done, but since we have log 1 = 0 in T1(1), we can't be certain this is the solution
- T1(2) = 10*2 * log 2 = 20 * 1 = 20; T2(2) = 2^2 = 4.
- T1(2) > T2(2)
- T1(4) = 10*4 * log 4 = 40 * 2 = 80; T2(4) = 2^4 = 16.
- T1(4) > T2(4)
- T1(8) = 10*8 * log 8 = 80 * 3 = 240; T2(8) = 2^8 = 256.
- T1(8) < T2(8)
- We could then check T1(7) and T2(7) with a calculator, and see that T1(7) > T2(7)
- Since T2 continues to increase more quickly than T1 after n = 8, our solution is n = 8.
Algorithms: A: T1(n) = 5n + n^2 - 6; B: T2(n) = n^3 / 2
Answer: n = 4. Again, rather than attempting to solve a cubic polynomial, we can do simple trial and error starting at n = 1.
- T1(1) = 5*1 + 1^2 - 6 = 0; T2(1) = 1^3 / 2 = 1/2.
- T1(1) < T2(1).
- Again, we have to be careful, because 1^x is 1 for all x.
- T1(2) = 5*2 + 2^2 - 6 = 8; T2(2) = 2^3 / 2 = 4.
- T1(2) > T2(2).
- T1(3) = 5*3 + 3^2 - 6 = 18; T2(3) = 3^3 / 2 = 13.5.
- T1(3) > T2(3).
- T1(4) = 5*4 + 4^2 - 6 = 30; T2(4) = 4^3 / 2 = 32.
- T1(4) < T2(4).
- Since T2 continues to increase more quickly than T1 after n = 4, our solution is n = 4.
Algorithms: A: T1(n) = n^60; B: T2(n) = 100n
Answer: choose B for all n >= 2 (A wins only at n = 1). It may seem like the answer is "Always B", since even just 2^60 is much greater than 200, but since 1^x is 1 for all x, T1(1) = 1 < T2(1) = 100, so A is the better choice at n = 1. For every n >= 2, n^60 >= 2^60 is far greater than 100n, so from n = 2 onward you would choose B.
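These crossover points can be checked mechanically. A small sketch (the helper names `crossover` and `factorial` are mine) that finds the smallest n at which A first beats B, matching the trial-and-error work above:

```cpp
#include <cmath>

// Iterative factorial as a double, so large values don't overflow.
double factorial(int n) {
    double f = 1;
    for (int i = 2; i <= n; ++i) f *= i;
    return f;
}

// Smallest n >= 2 such that a(n) < b(n) while a(n-1) >= b(n-1),
// i.e. the point where A becomes the better choice. Returns -1 if
// no such crossover exists below the limit ("always B" for n >= 2).
template <typename F, typename G>
int crossover(F a, G b, int limit = 64) {
    for (int n = 2; n <= limit; ++n)
        if (a(n) < b(n) && a(n - 1) >= b(n - 1)) return n;
    return -1;
}
```

For example, `crossover([](int n){ return 27.0 * n; }, [](int n){ return factorial(n); })` reproduces the n = 6 answer above.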
- Provide the upper and lower bounds for best and worst cases for Bubble Sort.
- Best: $\Omega(n)$ and $O(n)$; Worst: $\Omega(n^2)$ and $O(n^2)$
- Is Bubble Sort stable? Adaptive? In-Place?
- Bubble sort is stable and in-place, and with an early-exit check it is also adaptive.
- Are there any advantages to Bubble Sort?
- It is adaptive (a pass with no swaps means the array is sorted, so it can stop early) and in-place.
- What is the space-complexity of Bubble Sort?
$O(1)$
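A minimal sketch of bubble sort with the early-exit flag that makes it adaptive (assuming int arrays, as in the class examples):

```cpp
#include <utility> // std::swap

// Bubble sort: repeatedly bubble the largest remaining element to the end.
// The `swapped` flag gives the O(n) best case on already-sorted input.
void bubbleSort(int* array, int n) {
    for (int i = 0; i < n - 1; ++i) {
        bool swapped = false;
        for (int j = 0; j < n - 1 - i; ++j) {
            if (array[j] > array[j + 1]) {      // strict > keeps equal elements in order (stable)
                std::swap(array[j], array[j + 1]);
                swapped = true;
            }
        }
        if (!swapped) break; // no swaps in a full pass: already sorted
    }
}
```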
- Provide the upper and lower bounds for the best and worst cases for Insertion Sort.
- Best: $\Omega(n)$ and $O(n)$; Worst: $\Omega(n^2)$ and $O(n^2)$
- Is Insertion Sort stable? Adaptive? In-Place?
- Insertion sort is adaptive, stable, and in-place.
- What are the advantages to Insertion Sort?
- It runs in $O(kn)$ for partially sorted arrays, and is in-place and stable.
- What is the space-complexity of Insertion Sort?
$O(1)$
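A sketch of insertion sort; the inner while loop is what gives the $O(kn)$ bound, since each element shifts at most k positions when the array is nearly sorted:

```cpp
// Insertion sort: grow a sorted prefix, shifting larger elements right
// to make room for each new element.
void insertionSort(int* array, int n) {
    for (int i = 1; i < n; ++i) {
        int key = array[i];
        int j = i - 1;
        while (j >= 0 && array[j] > key) { // strict > preserves stability
            array[j + 1] = array[j];       // shift right
            --j;
        }
        array[j + 1] = key;
    }
}
```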
- Provide the upper and lower bounds for the best and worst cases for Selection Sort.
- Best: $\Omega(n^2)$ and $O(n^2)$; Worst: $\Omega(n^2)$ and $O(n^2)$
- Is Selection Sort stable? Adaptive? In-Place?
- In-place, can be made stable, not adaptive.
- Are there any advantages to Selection Sort?
- Not many; its one redeeming quality is that it performs only $O(n)$ swaps, which can matter when writes are expensive.
- What is the space-complexity of Selection Sort?
$O(1)$
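A sketch of selection sort; the long-range swap at the end of each pass is what breaks stability in this common form:

```cpp
#include <utility> // std::swap

// Selection sort: repeatedly select the minimum of the unsorted suffix
// and swap it into place. Always Theta(n^2) comparisons, but only O(n) swaps.
void selectionSort(int* array, int n) {
    for (int i = 0; i < n - 1; ++i) {
        int min = i;
        for (int j = i + 1; j < n; ++j)
            if (array[j] < array[min]) min = j;
        std::swap(array[i], array[min]);
    }
}
```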
- Provide the upper and lower bounds for the best and worst cases for MergeSort.
- Best: $\Omega(n \log n)$ and $O(n \log n)$; Worst: $\Omega(n \log n)$ and $O(n \log n)$
- Is MergeSort stable? Adaptive? In-Place?
- Stable, but neither adaptive nor in-place: a typical implementation needs $O(n)$ auxiliary space.
- What is the purpose of the `merge` function in most MergeSort implementations? What about the `MergeSort` function?
- `MergeSort` splits the array into smaller arrays, while `merge` combines the sorted halves back together.
- What is the space complexity of MergeSort?
$O(n)$
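A sketch of the split/merge division of labor described above, on half-open ranges [lo, hi); the $O(n)$ scratch vector in `merge` is the source of the space complexity:

```cpp
#include <vector>

// merge: combine two adjacent sorted runs [lo, mid) and [mid, hi).
void merge(int* array, int lo, int mid, int hi) {
    std::vector<int> tmp; // O(n) auxiliary space
    int i = lo, j = mid;
    while (i < mid && j < hi)
        tmp.push_back(array[i] <= array[j] ? array[i++] : array[j++]); // <= keeps it stable
    while (i < mid) tmp.push_back(array[i++]);
    while (j < hi)  tmp.push_back(array[j++]);
    for (int k = 0; k < (int)tmp.size(); ++k) array[lo + k] = tmp[k];
}

// mergeSort: split in half, sort each half, then merge.
void mergeSort(int* array, int lo, int hi) {
    if (hi - lo <= 1) return; // 0 or 1 elements: already sorted
    int mid = lo + (hi - lo) / 2;
    mergeSort(array, lo, mid);
    mergeSort(array, mid, hi);
    merge(array, lo, mid, hi);
}
```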
- Provide the upper and lower bounds for the best and worst cases for QuickSort.
- Best: $\Omega(n \log n)$ and $O(n \log n)$; Worst: $\Omega(n^2)$ and $O(n^2)$
- Is QuickSort stable? Adaptive? In-Place?
- In-place; it is not stable, and typical implementations are not adaptive.
- What is the purpose of the `partition` function in quicksort? What about the `quicksort` function?
- `partition` places elements smaller than the pivot to its left and larger elements to its right, returning the pivot's final index; `quicksort` calls `partition` and then recurses on the two resulting halves.
- What partitioning scheme did you use in class for your `partition` function?
- EX: The Lomuto partitioning scheme, which selects the element in the last position of the array as the pivot.
- Find an array that would produce the worst case runtime for this partitioning scheme.
- EX: For Lomuto, an array that is already sorted (or in reverse order) produces the worst case, since the last-element pivot is always the maximum (or minimum) and the partitions are maximally unbalanced.
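A sketch of Lomuto partitioning and the quicksort driver, on inclusive ranges [lo, hi] (assuming int arrays, as elsewhere in this guide):

```cpp
#include <utility> // std::swap

// Lomuto partition: pivot = last element; returns the pivot's final index.
// Everything left of that index is < pivot, everything right is >= pivot.
int partition(int* array, int lo, int hi) {
    int pivot = array[hi];
    int i = lo; // boundary of the "smaller than pivot" region
    for (int j = lo; j < hi; ++j)
        if (array[j] < pivot) std::swap(array[i++], array[j]);
    std::swap(array[i], array[hi]); // place the pivot between the two regions
    return i;
}

// quicksort: partition, then recurse on the two halves around the pivot.
void quicksort(int* array, int lo, int hi) {
    if (lo >= hi) return;
    int p = partition(array, lo, hi);
    quicksort(array, lo, p - 1);
    quicksort(array, p + 1, hi);
}
```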
Solve the following recurrence relations:
T(0) = 1 ; T(n) = 2T(n - 1) + 1
T(1) = 1 ; T(n) = 2T(n / 2) + n
T(0) = 1 ; T(n) = T(n - 1) + n
T(1) = 1 ; T(n) = T(n / 3) + 1
T(0) = 1 ; T(n) = T(n - 4) + 1
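For reference, unrolling each relation gives the following closed forms (a sketch, assuming n reaches the base case exactly, e.g. n a power of 2 or 3, or a multiple of 4; worth re-deriving yourself):

$$
\begin{aligned}
T(n) &= 2T(n-1) + 1 \;\Rightarrow\; T(n) = 2^{n+1} - 1 \in \Theta(2^n)\\
T(n) &= 2T(n/2) + n \;\Rightarrow\; T(n) = n\log_2 n + n \in \Theta(n \log n)\\
T(n) &= T(n-1) + n \;\Rightarrow\; T(n) = \tfrac{n(n+1)}{2} + 1 \in \Theta(n^2)\\
T(n) &= T(n/3) + 1 \;\Rightarrow\; T(n) = \log_3 n + 1 \in \Theta(\log n)\\
T(n) &= T(n-4) + 1 \;\Rightarrow\; T(n) = \tfrac{n}{4} + 1 \in \Theta(n)
\end{aligned}
$$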