Algorithms Analysis
Involves implementing the method or the algorithm and measuring the elapsed time, which is the time the algorithm spends between its start and its end.
[Java]
final long startTime = System.currentTimeMillis();
/* algorithm code here */
final long endTime = System.currentTimeMillis();
final long elapsedTime = endTime - startTime;
/* algorithm return */

Ref : https://docs.oracle.com/javase/9/troubleshoot/time-zone-settings-jre.htm#JSTGD362
Using CPU clocks : [C/C++]
#include <iostream>
#include <ctime>
/* Directives */
double getCPUClock();
void algorithm();
/* Definitions */
double getCPUClock() {
    /* read the current processor time from the clock */
    return (double) clock();
}
void algorithm() {
    const double startTime = getCPUClock();
    /* algorithm code */
    const double endTime = getCPUClock();
    /* algorithm return */
    const double elapsedTime = endTime - startTime;
    std::cout << "elapsedTime in seconds = " << elapsedTime / CLOCKS_PER_SEC;
}
int main() {
    algorithm();
    return 0;
}

Ref : https://www.ibm.com/docs/en/zos/2.4.0?topic=functions-clock-determine-processor-time
[Python]
# importing time module
import time

def algorithm():
    # algorithm start
    startTime = time.perf_counter()  # time.clock() was removed in Python 3.8
    # algorithm code goes here
    endTime = time.perf_counter()
    # algorithm returns
    elapsedTime = endTime - startTime

- Easy to manipulate in code, even when the complexity type (function type) of the source code is unknown beforehand.
- Keeps track of the current system state while running the algorithm (as it calculates the actual time the method spends).
- Shows the difference when running the same algorithm on different operating systems and/or on different environments.
- Not so accurate in some cases, especially when the CPU is under heavy load.
- Different code may be needed when changing the environment.
Time complexity is defined as a function of the number and type of the algorithm's inputs, ignoring other conditions such as the software environment and the CPU/hardware vendor.
- No need to implement the algorithm in order to study it.
- Effectively compares algorithms based on the number of inputs.
- Doesn't take the environment and the CPU type/units into account when calculating the algorithm's elapsed time.
- It can be hard to find the representative time function of an algorithm.
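One way to find a representative time function without timing anything is to count primitive operations. The sketch below (illustrative names, not code from this repo) counts the operations of a simple maximum-finding loop; the count grows linearly with the input size, so the representative function is f(n) = c * n.

```python
# Counting primitive operations to derive a time function, instead of timing.
def find_max(arr):
    ops = 0
    maximum = arr[0]
    ops += 1                    # one assignment
    for x in arr[1:]:
        ops += 1                # one comparison per element
        if x > maximum:
            maximum = x
            ops += 1            # one assignment (worst case)
    return maximum, ops

# The operation count grows linearly with n, so f(n) = c * n.
for n in (10, 100, 1000):
    _, ops = find_max(list(range(n)))
    print(n, ops)
```

On a sorted input of size n the count is 2n - 1: one initial assignment, then one comparison and one assignment per remaining element.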
These are the seven most common types of time complexity functions that can be used to describe algorithms in terms of (n), which stands for the number of inputs :
- The constant function, f(n) = c, where n is the number or the size of the input.
- The logarithmic function, or the inverse of the exponential function, f(n) = log_b(n), where n is the size of the input and b is the exponential function's base.
- The linear function.
- The N-Log-N function.
- The quadratic function.
- The cubic function and other polynomials.
- The exponential function.

Let's talk about each of these separately and illustrate them with the algorithms stated in this repo.
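To make the different growth rates concrete before going through them one by one, this small sketch (illustrative only, using Python's math module) evaluates each of the seven functions at a few input sizes:

```python
import math

# Illustrative growth-rate comparison of the seven common time functions.
functions = {
    "constant":    lambda n: 1,
    "logarithmic": lambda n: math.log2(n),      # default base 2
    "linear":      lambda n: n,
    "n-log-n":     lambda n: n * math.log2(n),
    "quadratic":   lambda n: n ** 2,
    "cubic":       lambda n: n ** 3,
    "exponential": lambda n: 2 ** n,
}

for name, f in functions.items():
    print(f"{name:12}", [round(f(n), 1) for n in (2, 8, 16)])
```

Even at n = 16 the spread is dramatic: the logarithmic function returns 4 while the exponential one returns 65536.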
Complexity = f(n) = c, where n is the number of inputs and c is a constant value representing the time complexity.
Most basic operations are represented by f(n) = 1.
The simplest form of time functions, used to describe the time for the basic computer operations :
- Adding.
- Subtracting (which is adding the 2's complement of a number).
- Assigning a value to a variable.
- Comparing 2 numbers.
- etc...
Graph View : a straight line parallel to the x-axis.
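The basic operations listed above can be pictured as follows (an illustrative sketch; each statement costs the same regardless of how large the inputs are):

```python
# Each of these basic operations costs f(n) = c, independent of input size.
def basic_operations(a, b):
    s = a + b         # adding
    d = a - b         # subtracting (adding the 2's complement)
    x = a             # assigning a value to a variable
    equal = (a == b)  # comparing 2 numbers
    return s, d, x, equal

# The cost is the same whether called once on small or large numbers.
print(basic_operations(3, 5))
```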
Complexity = f(n) = n, or more generally f(n) = c * g(n), where n is the number of inputs; the function's coordinates here are (n1, n1), (n2, n2), (n3, n3), ..., in which the time complexity is equal to the number of inputs.
Considered the best time we can hope for when dealing with data structure algorithms.
It represents multiple basic operations, for example :
- Comparing a number x to each element in an array of size n.
- Matrix addition and subtraction operations,
Complexity = 1 (fundamental complexity) * M1-Size * M2-Size.
- Adding a scalar to a vector,
Complexity = f(n) = c * g(n) = f(V-Size) = c * g(V-Size) = 1 (fundamental complexity) * V-Size.
Graph View : a diagonal straight line emerging from the origin and rising in proportion to n (equal growth along x and y).
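The first example above (comparing a number x to each element of an array of size n) is exactly a linear search; a minimal sketch with illustrative names:

```python
# Linear search: compares x to each element once -> f(n) = n comparisons (worst case).
def linear_search(arr, x):
    for i, element in enumerate(arr):
        if element == x:   # one comparison per element
            return i
    return -1

print(linear_search([4, 8, 15, 16, 23, 42], 23))  # index 4
```

In the worst case (x absent, or in the last position) all n elements are compared, which is where f(n) = n comes from.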
Complexity = f(n) = log_b(n), where b is the base of its exponential function and n is the number of inputs.
- Represents the inverse of the exponential function.
- Default base is 2 for computer systems.
- The most ubiquitous (widespread) time function ever.
- Grows less rapidly than the linear function (f(n) = log(n) is better than f(n) = n).
Graph View : the inverse of f(n) = b ^ n.
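A classic log_2(n) algorithm is binary search, which halves the remaining input on every step; the sketch below is illustrative and also counts the halving steps:

```python
# Binary search on a sorted array: halves the range each step -> f(n) = log2(n).
def binary_search(arr, x):
    low, high = 0, len(arr) - 1
    steps = 0
    while low <= high:
        steps += 1
        mid = (low + high) // 2
        if arr[mid] == x:
            return mid, steps
        if arr[mid] < x:
            low = mid + 1
        else:
            high = mid - 1
    return -1, steps

# 1024 elements need at most log2(1024) = 10 halvings.
idx, steps = binary_search(list(range(1024)), 777)
print(idx, steps)
```

Doubling the array size adds only one extra step, which is the defining property of logarithmic growth.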
Complexity = f(n) = n * log_b(n), where b is the base of its exponential function and n is the number of inputs. This time, the algorithm represented by log_b(n) is performed n times; this form of time complexity takes place when we apply a logarithmic algorithm to each of the n elements of an array.
- Represents the multiplicity of the logarithmic function.
- Default base is 2.
- Grows more rapidly than the logarithmic and linear functions, but less rapidly than the quadratic function.
Graph View : the log_b(n) function scaled by a linear factor (n).
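Merge sort is the textbook n * log_2(n) example: the recursion is log_2(n) levels deep and each level does roughly n units of merging work. A minimal sketch:

```python
# Merge sort: log2(n) recursion levels, ~n merge work per level -> f(n) = n * log2(n).
def merge_sort(arr):
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])
    right = merge_sort(arr[mid:])
    # merge the two sorted halves (linear work)
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 9, 1, 7, 3]))  # [1, 2, 3, 5, 7, 9]
```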
Complexity = f(n) = n * n = n ^ 2, where n is the number of the algorithm's inputs.
- Represents the multiplicity of the linear function.
- Examples : nested loops; when we build an algorithm that uses 2 loops, each of which has a linear complexity (f(n) = n), the anticipated time = n * n = n ^ 2.
Graph View : polynomial growth of power 2 (a parabola).
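The nested-loop example above can be sketched as follows (illustrative pair-counting code):

```python
# Two nested loops over n elements -> n * n iterations -> f(n) = n ^ 2.
def count_pairs(arr):
    iterations = 0
    for a in arr:          # outer loop: n passes
        for b in arr:      # inner loop: n passes each
            iterations += 1
    return iterations

print(count_pairs(list(range(8))))  # 8 * 8 = 64
```

Doubling the input size quadruples the iteration count, which is the signature of quadratic growth.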