The formula for the derivative of x log x is given as follows: d(x log x)/dx = 1 + log x. Here, the differentiation has been taken with respect to x, and log denotes the natural logarithm. In the next two sections below, we will evaluate the derivative of x log x using the product rule and the first principle of derivatives (i.e., the limit definition of derivatives).

Derivative of x log x by Product Rule

As x log x is a product of the two functions x and log x, we can find the derivative of x log x by the product rule of differentiation. We will use the derivatives of x and log x given below: d(x)/dx = 1 and d(log x)/dx = 1/x. Now, by the product rule of derivatives, the differentiation of x log x is equal to

d(x log x)/dx = x · d(log x)/dx + log x · d(x)/dx = x · (1/x) + log x · 1 = 1 + log x.

Derivative of x log x by First Principle

From the limit definition,

d(x log x)/dx = lim (h→0) [(x + h) log(x + h) − x log x] / h = lim (h→0) [x log(1 + h/x)/h + log(x + h)] = 1 + log x,

where the first term tends to 1 because the limit of log(1 + x)/x is 1 when x → 0. So the derivative of x log x by the first principle is equal to 1 + log x.

Some related derivatives:

Derivative of x cos x: The derivative of x cos x is cos x − x sin x.
Derivative of xe^x: The derivative of xe^x is e^x(1 + x).
Derivative of 1: The derivative of 1 is zero.
Derivative of 1/x: The derivative of 1/x is −1/x^2.

Shrinking by a Square Root

As mentioned in the answer to the linked question, a common way for an algorithm to have time complexity O(log n) is for that algorithm to work by repeatedly cutting the size of the input down by some constant factor on each iteration. If this is the case, the algorithm must terminate after O(log n) iterations, because after doing O(log n) divisions by a constant, the algorithm must shrink the problem size down to 0 or 1. This is why, for example, binary search has complexity O(log n).

Interestingly, there is a similar way of shrinking down the size of a problem that yields runtimes of the form O(log log n). Instead of dividing the input in half at each layer, what happens if we take the square root of the size at each layer?

For example, let's take the number 65,536. How many times do we have to divide this by 2 until we get down to 1? If we do this, we get

65,536 → 32,768 → 16,384 → … → 2 → 1

This process takes 16 steps, and it's also the case that 65,536 = 2^16. But, if we take the square root at each level, we get

65,536 → 256 → 16 → 4 → 2

Notice that it only takes four steps to get all the way down to 2. Why is this? First, an intuitive explanation. How many digits are there in the numbers n and √n? There are approximately log n digits in the number n, and approximately log(√n) = log(n^(1/2)) = (1/2) log n digits in √n. This means that, each time you take a square root, you're roughly halving the number of digits in the number. Because you can only halve a quantity k O(log k) times before it drops down to a constant (say, 2), this means you can only take square roots O(log log n) times before you've reduced the number down to some constant (say, 2).

Now, let's do some math to make this rigorous. Let's rewrite the above sequence in terms of powers of two:

2^16 → 2^8 → 2^4 → 2^2 → 2^1

Notice that on each iteration, we cut the exponent of the power of two in half. That's interesting, because this connects back to what we already know - you can only divide the number k in half O(log k) times before it drops to zero.

So take any number n and write it as n = 2^k. Each time you take the square root of n, you halve the exponent in this equation. Therefore, there can be only O(log k) square roots applied before k drops to 1 or lower (in which case n drops to 2 or lower). Since n = 2^k, this means that k = log_2 n, and therefore the number of square roots taken is O(log k) = O(log log n). Therefore, if there is an algorithm that works by repeatedly reducing the problem to a subproblem whose size is the square root of the original problem size, that algorithm will terminate after O(log log n) steps.
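The square-root shrinking argument can be sketched in a few lines of Python. This is a minimal illustration, not code from the original answer; the helper name `sqrt_steps` is ours:

```python
from math import isqrt

def sqrt_steps(n: int) -> int:
    """Count how many times we can take an integer square root
    of n before it drops to 2 or lower."""
    steps = 0
    while n > 2:
        n = isqrt(n)  # the subproblem size is the square root of the original
        steps += 1
    return steps

# 65,536 = 2^16 takes four square roots to reach 2, matching the
# sequence 65,536 -> 256 -> 16 -> 4 -> 2 discussed in the text.
print(sqrt_steps(65_536))   # 4
print(sqrt_steps(2 ** 32))  # 5
print(sqrt_steps(2 ** 64))  # 6
```

Note how squaring the input (doubling the exponent k) adds only one step: that is the O(log log n) growth in action.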
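The digit-halving intuition can also be checked directly: the integer square root of a 41-digit number has about 21 digits. A quick sketch, with `digits` a hypothetical helper of our own:

```python
from math import isqrt

def digits(n: int) -> int:
    """Number of decimal digits in a positive integer n."""
    return len(str(n))

n = 10 ** 40      # a 41-digit number
r = isqrt(n)      # its integer square root, 10^20
# Taking a square root roughly halves the digit count: 41 -> 21.
print(digits(n), digits(r))
```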
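Returning to the derivative of x log x discussed earlier, the closed form d(x log x)/dx = 1 + log x can be sanity-checked numerically with a central finite difference. This is an illustrative sketch, not part of the original note; the step size h = 1e-6 is an arbitrary choice:

```python
from math import log

def f(x: float) -> float:
    return x * log(x)  # x log x, with log the natural logarithm

def numeric_derivative(g, x: float, h: float = 1e-6) -> float:
    """Central finite-difference approximation of g'(x)."""
    return (g(x + h) - g(x - h)) / (2 * h)

x = 3.0
approx = numeric_derivative(f, x)
exact = 1 + log(x)  # the closed form derived above
print(abs(approx - exact) < 1e-6)  # True
```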