Integer square root

Greatest integer less than or equal to square root

In number theory, the integer square root (isqrt) of a non-negative integer n is the non-negative integer m that is the greatest integer less than or equal to the square root of n:

isqrt(n) = ⌊√n⌋.

For example, isqrt(27) = ⌊√27⌋ = ⌊5.19615242270663...⌋ = 5.

Introductory remark

Let y and k be non-negative integers.

Algorithms that compute (the decimal representation of) √y run forever on each input y which is not a perfect square.[note 1]

Algorithms that compute ⌊√y⌋ do not run forever. They are nevertheless capable of computing √y up to any desired accuracy k.

Choose any k and compute ⌊√(y × 100^k)⌋.

For example (setting y = 2):

k = 0: ⌊√(2 × 100^0)⌋ = ⌊√2⌋ = 1
k = 1: ⌊√(2 × 100^1)⌋ = ⌊√200⌋ = 14
k = 2: ⌊√(2 × 100^2)⌋ = ⌊√20000⌋ = 141
k = 3: ⌊√(2 × 100^3)⌋ = ⌊√2000000⌋ = 1414
  ⋮
k = 8: ⌊√(2 × 100^8)⌋ = ⌊√20000000000000000⌋ = 141421356
  ⋮

Compare the results with √2 = 1.41421356237309504880168872420969807856967187537694...

It appears that multiplying the input by 100^k gives an accuracy of k decimal digits.[note 2]

To compute the (entire) decimal representation of √y, one can execute isqrt(y) an infinite number of times, increasing y by a factor of 100 at each pass.

Assume that in the next program (sqrtForever) the procedure isqrt(y) is already defined and, for the sake of the argument, that all variables can hold integers of unlimited magnitude.

Then sqrtForever(y) will print the entire decimal representation of √y.[note 3]

// Print sqrt(y), without halting
void sqrtForever(unsigned int y)
{
	unsigned int result = isqrt(y);
	printf("%u.", result);	// print result, followed by a decimal point

	while (true)	// repeat forever ...
	{
		y = y * 100;	// theoretical example: overflow is ignored
		result = isqrt(y);
		printf("%u", result % 10);	// print last digit of result
	}
}

The conclusion is that algorithms which compute isqrt() are computationally equivalent to algorithms which compute sqrt().[1]

Basic algorithms

The integer square root of a non-negative integer y can be defined as

⌊√y⌋ = x  such that  x² ≤ y < (x + 1)²,  x ∈ ℕ.

For example, isqrt(27) = ⌊√27⌋ = 5 because 5² ≤ 27 and 6² > 27.
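This defining inequality is easy to check mechanically; a minimal Python sketch using the standard library's math.isqrt:

```python
from math import isqrt

# Verify the defining inequality x*x <= y < (x + 1)*(x + 1) for small inputs.
for y in range(10_000):
    x = isqrt(y)
    assert x * x <= y < (x + 1) * (x + 1)

print(isqrt(27))  # 5, since 5*5 = 25 <= 27 < 36 = 6*6
```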

Algorithm using linear search

The following C programs are straightforward implementations.

// Integer square root
// (using linear search, ascending)
unsigned int isqrt(unsigned int y)
{
	// initial underestimate, L <= isqrt(y)
	unsigned int L = 0;

	while ((L + 1) * (L + 1) <= y)
		L = L + 1;

	return L;
}

// Integer square root
// (using linear search, descending)
unsigned int isqrt(unsigned int y)
{
	// initial overestimate, isqrt(y) <= R
	unsigned int R = y;

	while (R * R > y)
		R = R - 1;

	return R;
}

Linear search using addition

In the program above (linear search, ascending) one can replace multiplication by addition, using the equivalence

(L + 1)² = L² + 2L + 1 = L² + 1 + ∑_{i=1}^{L} 2.

// Integer square root
// (linear search, ascending) using addition
unsigned int isqrt(unsigned int y)
{
	unsigned int L = 0;
	unsigned int a = 1;
	unsigned int d = 3;

	while (a <= y)
	{
		a = a + d;	// (a + 1) ^ 2
		d = d + 2;
		L = L + 1;
	}

	return L;
}

Algorithm using binary search

Linear search sequentially checks every value until it hits the smallest x where x² > y.

A speed-up is achieved by using binary search instead. The following C program is an implementation.

// Integer square root (using binary search)
unsigned int isqrt(unsigned int y)
{
	unsigned int L = 0;
	unsigned int M;
	unsigned int R = y + 1;

	while (L != R - 1)
	{
		M = (L + R) / 2;

		if (M * M <= y)
			L = M;
		else
			R = M;
	}

	return L;
}

Numerical example

For example, if one computes isqrt(2000000) using binary search, one obtains the sequence of [L, R] intervals

[0, 2000001] → [0, 1000000] → [0, 500000] → [0, 250000] → [0, 125000] → [0, 62500] → [0, 31250] → [0, 15625]
→ [0, 7812] → [0, 3906] → [0, 1953] → [976, 1953] → [976, 1464] → [1220, 1464] → [1342, 1464] → [1403, 1464]
→ [1403, 1433] → [1403, 1418] → [1410, 1418] → [1414, 1418] → [1414, 1416] → [1414, 1415]

This computation takes 21 iteration steps, whereas linear search (ascending, starting from 0) needs 1414 steps.
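The interval sequence can be reproduced with a short Python sketch of the same bisection (isqrt_bisect is an illustrative name, not part of the C code above):

```python
def isqrt_bisect(y):
    """Integer square root by binary search; also return the [L, R] intervals."""
    L, R = 0, y + 1
    intervals = [(L, R)]
    while L != R - 1:
        M = (L + R) // 2
        if M * M <= y:
            L = M       # M is still a valid underestimate
        else:
            R = M       # M is an overestimate
        intervals.append((L, R))
    return L, intervals

root, steps = isqrt_bisect(2000000)
print(root, len(steps) - 1)  # 1414 after 21 halving steps
```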

Algorithm using Newton's method

One way of calculating √n and isqrt(n) is to use Heron's method, a special case of Newton's method, to find a solution of the equation x² − n = 0, giving the iterative formula

x_{k+1} = (x_k + n/x_k) / 2,  k ≥ 0,  x_0 > 0.

The sequence {x_k} converges quadratically to √n as k → ∞.

Stopping criterion

One can prove[citation needed] that c = 1 is the largest possible number for which the stopping criterion

|x_{k+1} − x_k| < c

ensures ⌊x_{k+1}⌋ = ⌊√n⌋ in the algorithm above.

In implementations which use number formats that cannot represent all rational numbers exactly (for example, floating point), a stopping constant less than 1 should be used to protect against round-off errors.
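The hazard is concrete: assuming IEEE-754 double-precision arithmetic (as used by Python floats), merely converting a large integer to a float can round it past a perfect square, so flooring a floating-point square root gives the wrong answer:

```python
import math

n = 2**54 - 1                       # isqrt(n) = 2**27 - 1 = 134217727
exact = math.isqrt(n)               # exact integer method
naive = math.floor(math.sqrt(n))    # float(n) rounds up to 2.0**54, whose sqrt is exactly 2**27
print(exact, naive)  # 134217727 134217728: the naive result is one too high
```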

Domain of computation

Although √n is irrational for many n, the sequence {x_k} contains only rational terms when x_0 is rational. Thus, with this method it is unnecessary to exit the field of rational numbers in order to calculate isqrt(n), a fact which has some theoretical advantages.
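This can be illustrated with exact rational arithmetic; a small sketch using Python's fractions.Fraction (heron_rational is an illustrative helper, not from the article):

```python
from fractions import Fraction
from math import floor

def heron_rational(n, steps, x0=1):
    """Heron's iteration x <- (x + n/x) / 2, carried out entirely in Q."""
    x = Fraction(x0)
    for _ in range(steps):
        x = (x + Fraction(n) / x) / 2
    return x

x = heron_rational(27, 6)
print(floor(x))  # 5 = isqrt(27); every intermediate iterate was an exact rational
```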

Using only integer division

For computing ⌊√n⌋ for very large integers n, one can use the quotient of Euclidean division for both of the division operations. This has the advantage of only using integers for each intermediate value, thus making the use of floating-point representations of large numbers unnecessary. It is equivalent to using the iterative formula

x_{k+1} = ⌊(x_k + ⌊n/x_k⌋) / 2⌋,  k ≥ 0,  x_0 > 0,  x_0 ∈ ℤ.

By using the fact that

⌊(x_k + ⌊n/x_k⌋) / 2⌋ = ⌊(x_k + n/x_k) / 2⌋,

one can show that this will reach ⌊√n⌋ within a finite number of iterations.

In the original version, one has x_k ≥ √n for k ≥ 1, and x_k > x_{k+1} whenever x_k > √n. So in the integer version, one has ⌊x_k⌋ ≥ ⌊√n⌋ and x_k ≥ ⌊x_k⌋ > x_{k+1} ≥ ⌊x_{k+1}⌋ until the final solution x_s is reached. For the final solution x_s, one has ⌊√n⌋ ≤ ⌊x_s⌋ ≤ √n and ⌊x_{s+1}⌋ ≥ ⌊x_s⌋, so the stopping criterion is ⌊x_{k+1}⌋ ≥ ⌊x_k⌋.

However, ⌊√n⌋ is not necessarily a fixed point of the above iterative formula. Indeed, it can be shown that ⌊√n⌋ is a fixed point if and only if n + 1 is not a perfect square. If n + 1 is a perfect square, the sequence ends up in a period-two cycle between ⌊√n⌋ and ⌊√n⌋ + 1 instead of converging.
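The two-cycle is easy to observe. For n = 3 (so that n + 1 = 4 is a perfect square), the integer iteration never settles on ⌊√3⌋ = 1 (a minimal sketch):

```python
def newton_step(x, n):
    """One integer Newton step: x <- floor((x + floor(n/x)) / 2)."""
    return (x + n // x) // 2

n = 3
x, seq = 2, [2]
for _ in range(6):
    x = newton_step(x, n)
    seq.append(x)
print(seq)  # [2, 1, 2, 1, 2, 1, 2]: a period-two cycle between 1 and 2
```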

Example implementation in C

// Square root of integer
unsigned int int_sqrt(unsigned int s)
{
	// Zero yields zero, one yields one
	if (s <= 1)
		return s;

	// Initial estimate (an overestimate for s >= 4;
	// for s = 2 and s = 3 it already equals the result)
	unsigned int x0 = s / 2;

	// Update
	unsigned int x1 = (x0 + s / x0) / 2;

	while (x1 < x0)	// Bound check
	{
		x0 = x1;
		x1 = (x0 + s / x0) / 2;
	}
	return x0;
}

Numerical example

For example, if one computes the integer square root of 2000000 using the algorithm above, one obtains the sequence

1000000 → 500001 → 250002 → 125004 → 62509 → 31270 → 15666 → 7896 → 4074 → 2282 → 1579 → 1422 → 1414 → 1414

In total 13 iteration steps are needed. Although Heron's method converges quadratically close to the solution, less than one bit of precision per iteration is gained at the beginning. This means that the choice of the initial estimate is critical for the performance of the algorithm.

When a fast computation of the integer part of the binary logarithm or of the bit length is available (e.g. std::bit_width in C++20), it is better to start at

x_0 = 2^(⌊(log_2 n)/2⌋ + 1),

which is the least power of two bigger than √n. In the example of the integer square root of 2000000, ⌊log_2 n⌋ = 20, x_0 = 2^11 = 2048, and the resulting sequence is

2048 → 1512 → 1417 → 1414 → 1414.

In this case only four iteration steps are needed.
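In Python the bit length is available as int.bit_length(); a sketch of the integer Newton iteration started from this power-of-two estimate (the iteration counter is added here for illustration):

```python
def isqrt_newton(n):
    """Integer Newton's method with x0 = 2**(floor(log2(n))//2 + 1)."""
    if n < 2:
        return n, 0
    x0 = 1 << ((n.bit_length() - 1) // 2 + 1)  # least power of two above sqrt(n)
    x1 = (x0 + n // x0) // 2
    steps = 1
    while x1 < x0:
        x0, x1 = x1, (x1 + n // x1) // 2
        steps += 1
    return x0, steps

print(isqrt_newton(2000000))  # (1414, 4): four iteration steps, as in the text
```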

Digit-by-digit algorithm

The traditional pen-and-paper algorithm for computing the square root √n works from higher digit places to lower, choosing at each position the largest digit that still yields a square ≤ n. Stopping after the ones place gives the integer square root.

Using bitwise operations

If working in base 2, the choice of digit is simplified to that between 0 (the "small candidate") and 1 (the "large candidate"), and digit manipulations can be expressed in terms of binary shift operations. With * being multiplication, << being left shift, and >> being logical right shift, a recursive algorithm to find the integer square root of any natural number is:

def integer_sqrt(n: int) -> int:
    assert n >= 0, "sqrt works for only non-negative inputs"
    if n < 2:
        return n

    # Recursive call:
    small_cand = integer_sqrt(n >> 2) << 1
    large_cand = small_cand + 1
    if large_cand * large_cand > n:
        return small_cand
    else:
        return large_cand


# equivalently:
def integer_sqrt_iter(n: int) -> int:
    assert n >= 0, "sqrt works for only non-negative inputs"
    if n < 2:
        return n

    # Find the shift amount: the bit length of n rounded up to an even number.
    # See also "find first set".
    shift = 2
    while (n >> shift) != 0:
        shift += 2

    # Unroll the bit-setting loop.
    result = 0
    while shift >= 0:
        result = result << 1
        large_cand = (
            result + 1
        )  # Same as result ^ 1 (xor), because the last bit is always 0.
        if large_cand * large_cand <= n >> shift:
            result = large_cand
        shift -= 2

    return result

Traditional pen-and-paper presentations of the digit-by-digit algorithm include various optimizations not present in the code above, in particular the trick of pre-subtracting the square of the previous digits which makes a general multiplication step unnecessary. See Methods of computing square roots § Binary numeral system (base 2) for an example.[2]

In programming languages

Some programming languages provide a dedicated operation for the integer square root in addition to the general square root, or can be extended by libraries to this end.
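For example, Python's standard library has provided math.isqrt since version 3.8, and Common Lisp specifies ISQRT; in Python:

```python
import math

print(math.isqrt(27))          # 5
print(math.isqrt(2 * 100**8))  # 141421356
```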

Notes

  1. ^ The square roots of the perfect squares (e.g., 0, 1, 4, 9, 16) are integers. In all other cases, the square roots of positive integers are irrational numbers.
  2. ^ It is no surprise that the repeated multiplication by 100 is a feature in Jarvis (2006).
  3. ^ The fractional part of square roots of perfect squares is rendered as 000....

References

  1. ^ See 'Methods of computing square roots'.
  2. ^ Woo, C (June 1985). "Square root by abacus algorithm (archived)". Archived from the original on 2012-03-06.
  3. ^ LispWorks Ltd (1996). "CLHS: Function SQRT, ISQRT". www.lispworks.com.
  4. ^ Python Software Foundation (2001). "Mathematical functions". Python Standard Library documentation. (since version 3.8).

External links

  • Jarvis, Ashley Frazer (2006). "Square roots by subtraction" (PDF). Mathematical Spectrum. 37: 119–122.
  • Minsky, Marvin (1967). "9. The Computable Real Numbers". Computation: Finite and Infinite Machines. Prentice-Hall. ISBN 0-13-165563-9. OCLC 0131655639.
  • "A geometric view of the square root algorithm".