Algorithm Analysis

Gabriel Dantas - Sep 17 - Dev Community

Algorithm analysis is a way to measure an algorithm's cost in solving a problem, whether in terms of time or space. To start, we can talk about what algorithms solve: problems.

Every computational problem has a set of parameters, and an assignment of concrete values to those parameters is called an instance (you can also think of an instance as "a case" of the problem). An instance is specified when we know the values of the parameters. Not every problem has an efficient solution described by a known algorithm; a famous family of such problems is the "non-deterministic polynomial-time complete" problems, or NP-complete problems. They can be solved, but not in polynomial time by any known algorithm. In many cases, however, we can solve any instance of a problem with a known efficient algorithm, so we will focus on these cases.

For these cases, when there is a known algorithm that solves them, we can calculate the time spent by the algorithm, or at least the equation that describes that amount of time. There are mathematical definitions for classifying an algorithm this way, like Big O notation.

Big O notation is a way to describe the asymptotic efficiency of an algorithm. In other words, it is a simplified way to measure the time consumption of an algorithm: it expresses how the running time grows as the instance size n grows, ignoring very small values of n and focusing on large instances. When we evaluate the time spent by an algorithm, we can call that amount of time "t". So if a computer spends t to solve a problem using algorithm "a", the exact same algorithm solving the exact same problem on a computer that is twice as slow will spend 2t, and on a computer that is twice as fast will spend t/2. Asymptotic notation deliberately ignores these constant factors.
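To make this concrete, here is a small illustrative sketch (my own example, not from the original text): it counts the "units of time" a nested-loop algorithm performs on an instance of size n. Doubling the instance size roughly quadruples the work, and that ratio holds no matter how fast or slow the machine is.

```python
def quadratic_steps(n):
    """Count the work units of a hypothetical nested-loop algorithm."""
    steps = 0
    for i in range(n):
        for j in range(n):
            steps += 1  # one unit of work per inner iteration
    return steps

# Doubling n quadruples the step count: growth, not machine speed,
# is what asymptotic analysis captures.
print(quadratic_steps(10))  # 100
print(quadratic_steps(20))  # 400
```

A twice-as-fast computer halves the wall-clock time of every call, but the step counts, and therefore their ratio, stay the same.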

Asymptotic notation also provides definitions for the upper bound and lower bound of an algorithm, represented by O and the Greek letter Ω respectively. So when we say that an algorithm has a time complexity of O(n²), we are saying: "the time spent by this algorithm grows at most proportionally to n²," i.e., in the worst case our algorithm will spend on the order of n² units of time, never asymptotically worse than that.
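A classic example of an O(n²) upper bound is bubble sort, sketched below (an illustrative example of mine, with an instrumented comparison counter). Whatever the input of size n, the number of comparisons never exceeds n², and on friendly inputs it can be far below that bound.

```python
import random

def bubble_sort(items):
    """Bubble sort with early exit; returns (sorted list, comparison count)."""
    a = list(items)
    comparisons = 0
    for i in range(len(a) - 1):
        swapped = False
        for j in range(len(a) - 1 - i):
            comparisons += 1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swapped = True
        if not swapped:
            break  # already sorted: stops well below the upper bound
    return a, comparisons

n = 50
data = [random.randrange(1000) for _ in range(n)]
result, count = bubble_sort(data)
assert result == sorted(data)
assert count <= n * n  # the O(n^2) bound is never exceeded
```

Note that O(n²) only promises a ceiling: an already-sorted input finishes after a single pass, far cheaper than the bound suggests.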

On the other hand, when we say "the time spent by this algorithm is Ω(n²)," that is the lower bound: our algorithm will spend on the order of n² units of time in the best case, and it can be worse than that, but never asymptotically better.
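The gap between a best-case lower bound and a worst-case upper bound is easy to see with insertion sort. The sketch below (again my own illustrative example) counts comparisons: on sorted input it does only n − 1 of them (its best case is linear, Ω(n)), while on reversed input it does n(n − 1)/2 (quadratic).

```python
def insertion_sort_comparisons(items):
    """Insertion sort; returns only the number of comparisons made."""
    a = list(items)
    comparisons = 0
    for i in range(1, len(a)):
        j = i
        while j > 0:
            comparisons += 1
            if a[j - 1] > a[j]:
                a[j - 1], a[j] = a[j], a[j - 1]
                j -= 1
            else:
                break  # element already in place
    return comparisons

n = 100
best = insertion_sort_comparisons(list(range(n)))         # already sorted
worst = insertion_sort_comparisons(list(range(n, 0, -1)))  # reversed

assert best == n - 1              # linear in the best case
assert worst == n * (n - 1) // 2  # quadratic in the worst case
```

Because its best and worst cases grow at different rates, insertion sort is O(n²) but only Ω(n): no single Θ bound covers all inputs.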

You might be thinking right now, "Ok, but what about the case where the algorithm grows exactly proportionally to the instance size?" In this case, we have the Greek letter Θ to denote the time spent by the algorithm. For example, if an algorithm spends Θ(n²), I know the time spent by the algorithm is proportional to n², where n is the instance size: it is bounded above and below by n², so it is both O(n²) and Ω(n²).
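Selection sort is a good Θ(n²) example, sketched below (an illustrative example of mine): it performs exactly n(n − 1)/2 comparisons on every input of size n, so its best and worst cases coincide.

```python
def selection_sort_comparisons(items):
    """Selection sort; returns the number of comparisons made."""
    a = list(items)
    comparisons = 0
    for i in range(len(a)):
        smallest = i
        for j in range(i + 1, len(a)):
            comparisons += 1
            if a[j] < a[smallest]:
                smallest = j
        a[i], a[smallest] = a[smallest], a[i]
    return comparisons

n = 40
# The comparison count is identical for every input of size n:
# Theta(n^2) behaviour, regardless of how the data is arranged.
assert selection_sort_comparisons(list(range(n))) == n * (n - 1) // 2
assert selection_sort_comparisons(list(range(n, 0, -1))) == n * (n - 1) // 2
```

The inner loop always scans the whole unsorted suffix, which is why the count cannot vary with the input.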

This article is a kind of test to summarize what I've learned in the last few days/weeks. I will keep writing about my studies, I don't know how often, but like this one they will be short texts about specific topics in computer science. I also simplified some concepts in this text for the sake of education and brevity, so feel free to expand on them in the comments. And please, if I wrote something wrong about ANY concept, let me know. Thank you all, see you!
