Asymptotic analysis evaluates the computational [[complexity]] of an [[algorithm]] in terms of time and space complexity, where space refers to the memory required. In practice, we focus on time complexity; for this reason, asymptotic analysis is sometimes also referred to as **running time analysis**.

Asymptotic analysis calculates the **order of growth** (or rate of growth) of an algorithm, which describes its running time as the input size gets large. Only the leading term of the running time is considered, ignoring lower-order terms and the leading term's constant coefficient (e.g., for $an^2 + bn + c$ the order of growth is $n^2$). Order of growth is denoted using [[asymptotic notation]].

In asymptotic analysis, we are most concerned with the worst case complexity of an algorithm, that is, the input that requires the maximum amount of time (and/or space). For example, a list in reverse sorted order is the worst case for some implementations of [[sort]], such as insertion sort. The average case running time is often nearly as bad as the worst case running time. However, when the worst case is especially bad and also unlikely, the average case running time can be found using probabilistic analysis. Algorithms that rely on [[randomness]] are examples where the average case running time is more relevant than the worst case.

The cost model of asymptotic analysis assigns a cost (e.g., in units of time) to each operation in the algorithm. Changing the cost model changes the constant coefficients in the cost function for an algorithm. At the [[limit]], these constant factors can be ignored. In worst case analysis, we care only about large input sizes, for which the constant factors are not significant.

Analysis of algorithmic complexity is different from [[performance analysis]], where we compare two implementations of an algorithm or program. Various [[classes of algorithm complexity]] exist to compare algorithms.

## amortized analysis

Amortized analysis is a method of analyzing algorithmic complexity across a sequence of $n$ operations in the worst case.

### accounting method

Using the accounting method, imagine pre-paying $1 for all future operations in case they happen (i.e., in the worst case all possible future operations do happen). For example, every time you push to a list, pay $1 for the push and deposit an additional $1 to pay for a later pop. The amortized cost of a push is then $2, so $n$ pushes cost at most $2n$ in total. In practice, this method can show that the running time of an algorithm is lower than what a naive worst case analysis would suggest.

Consider a dynamic array. In the worst case, adding an element causes the array to double in size and costs $O(n)$, so a naive bound on the cost of $n$ insertions is $O(n^2)$. However, this is a gross overestimation, since not every insertion can trigger a doubling. Using the accounting method, we pay $1 for the insertion itself and pre-pay $2 to cover copying the element and one of the preceding elements at the next resize (adding 8 elements to an array of capacity 16 banks $16, enough to pay for copying all 16 elements when the array doubles). The cost of each resize is thus pre-paid, and the complexity of $n$ insertions is the cost of the insertions plus the pre-paid resizes, or $\Theta(n + 2n) = \Theta(3n) = \Theta(n)$. This is considerably better than $O(n^2)$.
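A minimal Python sketch of the accounting argument (the `DynamicArray` class, its cost counters, and the fixed charge of 3 units per append are illustrative choices, not any standard API): each append is charged 3 units, resizes are paid out of the accumulated credit, the credit never goes negative, and so the total actual cost of $n$ appends is at most $3n$.

```python
# Accounting-method sketch for a doubling dynamic array:
# charge 3 units per append -- 1 pays for the insertion itself,
# 2 are banked to pay for future copies at a resize.

class DynamicArray:
    def __init__(self):
        self.capacity = 1
        self.items = [None] * self.capacity
        self.size = 0
        self.actual_cost = 0   # elementary writes actually performed
        self.bank = 0          # credit accumulated by the accounting scheme

    def append(self, value):
        self.bank += 3                      # amortized charge for this append
        if self.size == self.capacity:      # array full: double and copy
            new_items = [None] * (2 * self.capacity)
            for i in range(self.size):
                new_items[i] = self.items[i]
            self.actual_cost += self.size   # one unit per element copied
            self.bank -= self.size          # copies are paid from the bank
            self.items = new_items
            self.capacity *= 2
        self.items[self.size] = value
        self.size += 1
        self.actual_cost += 1               # the insertion itself
        self.bank -= 1
        assert self.bank >= 0               # credit never runs out

arr = DynamicArray()
n = 1_000
for i in range(n):
    arr.append(i)

# The total actual cost is bounded by the total amortized charge of 3n.
print(arr.actual_cost, "<=", 3 * n, ":", arr.actual_cost <= 3 * n)
```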
### potential method

Potential functions offer a mathematically rigorous way of performing amortized analysis and an alternative to the accounting method. The amortized cost of an operation is the actual cost of the operation plus the change in potential of the data structure. The potential function $\Phi(D)$ is defined for any data structure instance $D$, and the amortized cost of the $i$-th operation is $\hat c_i = c_i + \Phi(D_{i+1}) - \Phi(D_i)$, where $D_i$ is the state of the data structure before the operation and $D_{i+1}$ is the state after it. For a dynamic array, the potential function is $\Phi(T) = 2k - n$, where $k$ is the number of elements stored and $n$ is the current capacity of the array. This is like the $2 pre-paid by each of the $k$ elements, less the cost already paid to resize the array to capacity $n$ in the last step.
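A worked check of the two insertion cases (a sketch, assuming the array doubles exactly when it is full, with $k$ and $n$ as above) shows why this potential gives a constant amortized cost. If the insertion does not trigger a resize ($k < n$), then $c_i = 1$ and

$$\hat c_i = 1 + \bigl(2(k + 1) - n\bigr) - \bigl(2k - n\bigr) = 1 + 2 = 3.$$

If the array is full ($k = n$), the insertion copies $k$ elements into an array of capacity $2n$, so $c_i = k + 1$ and

$$\hat c_i = (k + 1) + \bigl(2(k + 1) - 2n\bigr) - \bigl(2k - n\bigr) = (k + 1) + (2 - n) = 3,$$

since $k = n$. In both cases the amortized cost is a constant, so $n$ insertions cost $\Theta(n)$, in agreement with the accounting method.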