time complexity

time complexity, a description of how much computer time is required to run an algorithm. In computer science, time complexity is one of two commonly discussed kinds of computational complexity, the other being space complexity (the amount of memory used to run an algorithm). Understanding the time complexity of an algorithm allows programmers to select the algorithm best suited for their needs, as a fast algorithm that is good enough is often preferable to a slow algorithm that performs better along other metrics.

Estimates of time complexity use mathematical models to estimate how many operations a computer will need to perform to execute an algorithm. Because the actual time an algorithm takes to run may vary with the specifics of the application (e.g., whether 100 or 1 million records are being searched), computer scientists define time complexity in terms of the size of the input to the algorithm. Time complexity is typically written as T(n), where n is a variable related to the size of the input. To describe T(n), big-O notation is used to refer to the order, or kind, of growth the function experiences as the input size n increases.
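
As a concrete illustration (not part of the original article), the Python sketch below counts the operations performed by a simple summation loop; tallying operations this way is what T(n) formalizes, and big-O keeps only the dominant term.

```python
def sum_list(values):
    total = 0            # 1 operation
    for v in values:     # loop body runs n times
        total += v       # n operations
    return total         # 1 operation

# T(n) = n + 2 for this function; big-O discards the constants
# and keeps only the dominant term, so sum_list runs in O(n) time.
print(sum_list([1, 2, 3, 4]))  # 10
```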

Constant time, or O(1), is the time complexity of an algorithm that always uses the same number of operations, regardless of the number of elements being operated on. For example, an algorithm to return the length of a list may use a single operation to return the index number of the final element in the list. O(1) does not imply that only one operation is used but rather that the number of operations is always constant.
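
A minimal Python sketch of the list-length example above. In CPython, a list stores its own size, so len() is a single constant-time lookup regardless of how many elements the list holds.

```python
def list_length(items):
    # len() reads a stored size field rather than walking the list,
    # so the cost is the same for 1 element or 1 million.
    return len(items)

print(list_length([7]))                  # 1
print(list_length(list(range(10**6))))   # 1000000, same O(1) cost
```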

Linear time, or O(n), indicates that the time it takes to run an algorithm grows in a linear fashion as n increases. In linear time, searching a list of 1,000 records should take roughly 10 times as long as searching a list of 100 records, which in turn should take roughly 10 times as long as searching a list of 10 records.
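
The sketch below shows one hypothetical way to write such a search in Python; the function name is illustrative.

```python
def linear_search(records, target):
    """Scan each record in turn; the worst case touches all n records."""
    for i, record in enumerate(records):
        if record == target:
            return i   # found after i + 1 comparisons
    return -1          # n comparisons when the target is absent

# A miss in a 1,000-record list costs roughly 10 times the comparisons
# of a miss in a 100-record list, matching the linear growth described above.
print(linear_search([4, 8, 15, 16, 23, 42], 23))  # 4
```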

Logarithmic time, or O(log n), indicates that the time needed to run an algorithm grows as a logarithm of n. For example, in a binary search on a sorted list, the list is searched by repeatedly dividing it in half until the desired element is found. The number of divisions needed grows with the base-2 logarithm of n rather than in proportion to n. O(log n) is a slower growth rate than O(n); thus, such algorithms have lower time complexity than linear-time algorithms.
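
A minimal Python sketch of the binary search described above, assuming the input list is already sorted:

```python
def binary_search(sorted_items, target):
    """Repeatedly halve the search range; at most ~log2(n) iterations."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            lo = mid + 1   # discard the lower half
        else:
            hi = mid - 1   # discard the upper half
    return -1  # not found

# A sorted list of 1,000,000 items needs at most ~20 halvings (2^20 > 10^6).
print(binary_search(list(range(0, 100, 2)), 42))  # 21
```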

Quadratic time, or O(n²), indicates that the time it takes to run an algorithm grows as the square of n. For example, in a selection sort algorithm, a list is sorted by repeatedly finding the minimum value in the unsorted part of the list and placing this value at the beginning. Because the number of operations needed to find the minimum value in the list grows with the length n of the list, and the number of values that must be sorted also grows with n, the total number of operations grows with n². The running time of such algorithms grows much faster than that of linear-time algorithms.
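
A minimal Python sketch of the selection sort described above; the nested loops make the quadratic growth visible.

```python
def selection_sort(values):
    """Repeatedly move the minimum of the unsorted tail to the front."""
    n = len(values)
    for i in range(n):                 # n passes...
        min_index = i
        for j in range(i + 1, n):      # ...each scanning the unsorted part
            if values[j] < values[min_index]:
                min_index = j
        values[i], values[min_index] = values[min_index], values[i]
    return values

# Total comparisons: (n-1) + (n-2) + ... + 1 = n(n-1)/2, which grows as n².
print(selection_sort([5, 2, 9, 1]))  # [1, 2, 5, 9]
```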

Big-O notation can be used to describe many different orders of time complexity with varying degrees of specificity. For example, T(n) might be expressed as O(n log n), O(n⁷), O(n!), or O(2ⁿ). The O value of a particular algorithm may also depend upon the specifics of the problem, and so it is sometimes analyzed for best-case, worst-case, and average scenarios. For example, the quicksort algorithm has an average time complexity of O(n log n), but in the worst case it can have O(n²) complexity.
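
The sketch below shows one common quicksort formulation (using a naive first-element pivot for illustration; production implementations choose pivots more carefully) and notes how the worst case arises.

```python
def quicksort(values):
    """Textbook quicksort (not in-place). Average case O(n log n)."""
    if len(values) <= 1:
        return values
    pivot = values[0]  # naive pivot choice, kept simple for illustration
    smaller = [v for v in values[1:] if v < pivot]
    larger = [v for v in values[1:] if v >= pivot]
    return quicksort(smaller) + [pivot] + quicksort(larger)

# With this first-element pivot, an already-sorted input is the worst case:
# each call strips off only one element, giving O(n²) behavior.
print(quicksort([3, 6, 1, 8, 2]))  # [1, 2, 3, 6, 8]
```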

Problems that can be solved in polynomial time (that is, problems where the time complexity can be expressed as a polynomial function of n) are considered efficient, while problems that grow in exponential time (problems where the time required grows exponentially with n) are said to be intractable, meaning they are impractical for computers to solve. For example, an algorithm with time complexity O(2ⁿ) quickly becomes useless at even relatively low values of n. Suppose a computer can perform 10¹⁸ operations per second, and it runs an algorithm that grows in O(2ⁿ) time. If n = 10, the algorithm will run in less than a second. However, if n = 100, the algorithm will take more than 40,000 years to run.
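
The arithmetic behind these figures can be checked directly; the short sketch below assumes the hypothetical machine described above, performing 10¹⁸ operations per second.

```python
SECONDS_PER_YEAR = 365 * 24 * 60 * 60   # ~3.15 x 10^7 seconds
OPS_PER_SECOND = 10**18                  # hypothetical machine from the text

for n in (10, 100):
    seconds = 2**n / OPS_PER_SECOND
    years = seconds / SECONDS_PER_YEAR
    print(f"n = {n}: {seconds:.3g} s (~{years:.3g} years)")

# n = 10:  ~1.02e-15 s, far less than a second.
# n = 100: ~1.27e+12 s, roughly 40,000 years.
```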

Stephen Eldridge