The Miller–Rabin test is a probabilistic method for testing whether an integer is prime or composite. Wikipedia says:
The more bases a we test, the better the accuracy of the test. It can be shown that for any odd composite n, at least 3/4 of the bases a are witnesses for the compositeness of n. The Miller–Rabin test is strictly stronger than the Solovay–Strassen primality test in the sense that for every composite n, the set of strong liars for n is a subset of the set of Euler liars for n, and for many n, the subset is proper. If n is composite then the Miller–Rabin primality test declares n probably prime with a probability at most 4^−k. On the other hand, the Solovay–Strassen primality test declares n probably prime with a probability at most 2^−k.
On average the probability that a composite number is declared probably prime is significantly smaller than 4^−k. Damgård, Landrock and Pomerance compute some explicit bounds. Such bounds can, for example, be used to generate primes; however, they should not be used to verify primes with unknown origin, since in cryptographic applications an adversary might try to send you a pseudoprime in a place where a prime number is required. In such cases, only the error bound of 4^−k can be relied upon.
Right above this, k is the number of iterations that you go through.
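To be explicit about what one iteration does, here is a minimal Python sketch (my own, not from the article; the function names are mine): write n − 1 = 2^s · d with d odd, then check whether the base a proves n composite.

    import random

    def is_strong_probable_prime(n, a):
        """One Miller-Rabin iteration: does odd n > 2 look prime to base a?
        Returns False only when a is a witness, i.e. n is provably composite."""
        # Write n - 1 = 2^s * d with d odd.
        d, s = n - 1, 0
        while d % 2 == 0:
            d //= 2
            s += 1
        x = pow(a, d, n)            # a^d mod n
        if x == 1 or x == n - 1:
            return True
        for _ in range(s - 1):      # square up to s - 1 more times
            x = pow(x, 2, n)
            if x == n - 1:
                return True
        return False                # a witnesses that n is composite

    def miller_rabin(n, k):
        """Run k iterations with random bases; a composite n slips through
        with probability at most 4^-k."""
        if n < 2:
            return False
        for p in (2, 3, 5, 7):      # handle small primes and even n
            if n % p == 0:
                return n == p
        return all(is_strong_probable_prime(n, random.randrange(2, n - 1))
                   for _ in range(k))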
So does this mean that if you do 100 iterations, the odds of it getting a false positive are 4^-100?
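My back-of-the-envelope arithmetic, assuming each iteration picks its base independently so the worst-case bounds just multiply:

    >>> (1/4) ** 100   # worst-case false-positive bound after k = 100 iterations
    6.223015277861142e-61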
And according to the second link, with a fixed set of bases the test is actually deterministic below certain bounds:
Interactively demonstrates the Miller-Rabin Primality Test on a step-by-step basis. This implementation is deterministic for odd integers smaller than 341,550,071,728,321.
Note that the test is actually known to be deterministic for all values below 10^16, i.e., it will find any of the 279,238,341,033,925 primes below this value.
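For what it's worth, the base set behind that 341,550,071,728,321 figure appears to be the first seven primes: Jaeschke computed that number as the smallest composite that is a strong pseudoprime to all of the bases 2 through 17. A sketch of the deterministic variant, reusing is_strong_probable_prime from above:

    def is_prime_deterministic(n):
        """Deterministic for n < 341,550,071,728,321 (Jaeschke's bound):
        no composite below that limit fools all seven bases."""
        if n < 2:
            return False
        bases = (2, 3, 5, 7, 11, 13, 17)
        for p in bases:             # the bases themselves are prime
            if n % p == 0:
                return n == p
        return all(is_strong_probable_prime(n, a) for a in bases)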
So how did they determine the bound below which it is deterministic? Did they just run the test with those fixed bases against every number and see when the first pseudoprime popped up?
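To make my guess concrete, I picture something like this brute-force search (entirely my own sketch, again reusing is_strong_probable_prime): find the smallest odd composite that fools every base in a given set.

    def smallest_strong_pseudoprime(bases, limit):
        """Smallest odd composite below `limit` that passes the test
        for every base in `bases`, or None if there is none."""
        for n in range(9, limit, 2):
            if all(is_strong_probable_prime(n, a) for a in bases):
                # n passed every base; is it actually composite?
                if any(n % d == 0 for d in range(3, int(n ** 0.5) + 1, 2)):
                    return n
        return None

With bases=(2,) and a large enough limit this returns 2047 = 23 × 89, the smallest strong pseudoprime to base 2, so at least in principle the published bounds could come from exactly this kind of exhaustive (though surely far more optimized) search.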