Differences for blog/your-bisection-search-is-wrong between revisions #10 and #25

There are no differences between the titles.

Differences between the content:

Let's choose f(x) = x^3^ - c. f is continuous, and when f(x) = 0, x is the cube root of c. Thus, we can apply the bisection method.
== code(python) |
def cubicRoot(c):
  lo, hi = 0.0, c
  while lo * lo * lo != c:
    mid = (lo + hi) / 2
    if mid * mid * mid < c:
      lo = mid
    else:
      hi = mid
  return lo
==
What else? The code doesn’t work for negative values of c. This is easily fixable:
== code(python) |
def cubicRoot(c):
  if c < 0:
    return -cubicRoot(-c)
  lo, hi = 0.0, c
  while lo * lo * lo != c:
    mid = (lo + hi) / 2
    if mid * mid * mid < c:
      lo = mid
    else:
      hi = mid
  return lo
==
Instead of:
==code(python) |
while lo * lo * lo != c:
==
switch to
==code(python) |
while abs(c - lo * lo * lo) > 1e-3:
==
This doesn't work either. For large numbers like 1e37 the code goes into an infinite loop. Try to figure out why, and discuss it in the comment section. Let's try using the *relative error* (|(a - b) / b| < eps). There are some weird cases when a and b are close or equal to 0. Can the code be cleaner?
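As a sketch (not from the original post), a relative-error stopping criterion might look like this; the `max(abs(c), 1.0)` guard is my own assumption, added to sidestep the weird near-zero cases mentioned above:

```python
def cubic_root_rel(c, eps=1e-12):
    """Bisection with a relative-error stop; illustrative sketch only."""
    if c < 0:
        return -cubic_root_rel(-c, eps)
    lo, hi = 0.0, max(1.0, c)
    # Stop when lo**3 is within a relative eps of c; the max(..., 1.0)
    # guard (an assumption of this sketch) avoids dividing the tolerance
    # down to nothing when c is close or equal to 0.
    while abs(c - lo * lo * lo) > eps * max(abs(c), 1.0):
        mid = (lo + hi) / 2
        if mid * mid * mid < c:
            lo = mid
        else:
            hi = mid
    return lo
```

Unlike the absolute 1e-3 threshold, this tolerance scales with c, so 1e37 no longer loops forever; the guard is what keeps c = 0 from demanding impossible precision.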
After each while loop iteration we learn something new about x's range. A double has only 64 bits of precision, so instead of a tricky floating point stopping criterion we can run the loop a fixed number of times, making the interval as small as the precision of our floating point numbers allows.
 
*Tomek Czajka(of topcoder fame) pointed out that my final version was buggy as well. I chose the number of iterations to be 120 but that’s way too small. It doesn't work for c = 1e60.*
 
A double is represented by a mantissa of 52 bits and a signed exponent of 11 bits. One loop iteration either decreases the exponent of our interval by 1 or finds a new bit of the mantissa. The maximum value of the exponent is 2^10^ and the mantissa has 52 bits, so we need about 1100 steps to figure out the answer.
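The arithmetic behind that bound, spelled out:

```python
# Halving an interval as wide as [0, 2**1024) until its width drops below
# the answer's last mantissa bit: up to 1024 halvings to bring the
# exponent down, plus 52 more to pin down each mantissa bit.
EXPONENT_STEPS = 2 ** 10   # 1024 possible exponent values
MANTISSA_BITS = 52
print(EXPONENT_STEPS + MANTISSA_BITS)  # 1076, rounded up to ~1100
```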
==code(python) |
def _cubicRoot(c):
    lo, hi = 0.0, max(1, c)
    for _ in range(1100):
        mid = lo + (hi - lo) / 2
        if mid * mid * mid < c:
            lo = mid
        else:
            hi = mid
    return lo

def cubicRoot(c):
    if c < 0:
        return -_cubicRoot(-c)
    else:
        return _cubicRoot(c)
==
 
No more epsilon! But now, because the iteration count is sized for the largest exponents, the code runs pretty slowly on every input. An idea is to stop as soon as the lo..hi interval stops shrinking, instead of doing a constant number of iterations. So here's a faster version:
 
==code(python) |
def _cubicRoot(c):
    lo, hi = 0.0, max(1, c)
    prev_range_size = hi - lo
    while True:
        mid = lo + (hi - lo) / 2
        if mid * mid * mid < c:
            lo = mid
        else:
            hi = mid
        if hi - lo == prev_range_size:
            break
        prev_range_size = hi - lo
    return lo

def cubicRoot(c):
    if c < 0:
        return -_cubicRoot(-c)
    else:
        return _cubicRoot(c)
==
Done! No more epsilon! The result is as accurate as a double representation allows.
This is still slow when we're dealing with large numbers. To address it one can first binary search on the exponent, or get close to the real exponent by dividing the original exponent by 3. Try it out in the comment section.
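As a sketch of the second idea (dividing the exponent by 3) — the helper names here are mine, and `math.frexp`/`math.ldexp` are only used to read off and rebuild powers of two:

```python
import math

def cubic_root_seeded(c):
    """Seed the bisection interval from c's binary exponent; sketch only."""
    if c < 0:
        return -cubic_root_seeded(-c)
    if c == 0:
        return 0.0
    _, e = math.frexp(c)                # c = m * 2**e with 0.5 <= m < 1
    lo = math.ldexp(1.0, e // 3 - 1)    # 2**(e//3 - 1) <= cbrt(c)
    hi = math.ldexp(1.0, e // 3 + 1)    # cbrt(c) <  2**(e//3 + 1)
    prev_range_size = hi - lo
    while True:                         # same fixed-point stop as above
        mid = lo + (hi - lo) / 2
        if mid * mid * mid < c:
            lo = mid
        else:
            hi = mid
        if hi - lo == prev_range_size:
            break
        prev_range_size = hi - lo
    return lo
```

Since the starting interval is only two powers of two wide, the loop needs roughly fifty halvings instead of up to 1100, regardless of how large c is.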
bq. I'm curious, did your solution have any of these problems?
notes:
* Thanks Tomek for pointing out the iteration problem and possible efficient solutions.
* When c is large, mid * mid * mid will overflow but the algorithm still works.
* We’ve addressed negative numbers, numbers in (0, 1), overflow problems, absolute and relative error problems.
* Some tests for your own code: -1, 0, 1, 2, 8, 0.001, 1e30, 1e60
* In practice faster methods like the Newton-Raphson method are used.
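The overflow note above can be checked directly; Python floats follow IEEE 754 doubles, so a product past ~1.8e308 becomes inf instead of raising:

```python
mid = 1e150
cube = mid * mid * mid   # exceeds the double range, becomes inf
print(cube)              # inf
# inf < c is False, so the bisection takes the else branch and lowers hi:
print(cube < 1e60)       # False
```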

Differences between security settings:

public
protected

The forum topic was not changed.