Implementation limitations of float.as_integer_ratio()

























Recently, a correspondent mentioned float.as_integer_ratio(), new in Python 2.6, noting that typical floating-point implementations are essentially rational approximations of real numbers. Intrigued, I had to try π:



>>> import math
>>> float.as_integer_ratio(math.pi)
(884279719003555L, 281474976710656L)


I was mildly surprised not to see the more accurate result due to Arima:



(428224593349304L, 136308121570117L)


For comparison, this code:



#!/usr/bin/env python
# Compare the two ratios against a 36-digit reference value of pi.
from decimal import Decimal, getcontext
getcontext().prec = 36
print "python: ",Decimal(884279719003555) / Decimal(281474976710656)
print "Arima: ",Decimal(428224593349304) / Decimal(136308121570117)
print "Wiki: 3.14159265358979323846264338327950288"


produces this output:




python: 3.14159265358979311599796346854418516
Arima: 3.14159265358979323846264338327569743
Wiki: 3.14159265358979323846264338327950288


Certainly, the result is correct given the precision afforded by 64-bit floating-point numbers, but it leads me to ask: How can I find out more about the implementation limitations of as_integer_ratio()? Thanks for any guidance.
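For a quick look at where those integers come from, the float's hexadecimal form exposes the binary significand and exponent that as_integer_ratio() merely rewrites as an integer fraction. A minimal sketch, shown in Python 3 rather than the Python 2.6 session above:

import math

print(math.pi.hex())                # 0x1.921fb54442d18p+1: a 53-bit significand scaled by 2**1
print(math.pi.as_integer_ratio())   # (884279719003555, 281474976710656), i.e. n / 2**48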



Additional links: Stern-Brocot tree and Python source.










python math

asked Jan 16 '10 at 5:19 – trashgod
edited Feb 9 '18 at 17:09





    The accepted answer is misleading. The as_integer_ratio method returns the numerator and denominator of a fraction whose value exactly matches the value of the floating-point number passed to it. If you want a perfectly accurate representation of your float as a fraction, use as_integer_ratio. If you want a simplified approximation with smaller denominator and numerator, look into fractions.Fraction.limit_denominator. IOW, math.pi is an approximation to π. But 884279719003555/281474976710656 is not an approximation to math.pi; it's exactly equal to it.

    – Mark Dickinson
    Feb 11 '18 at 14:37












  • @MarkDickinson: Your point is well-taken; it clarifies this related answer. Although the accepted answer could use some maintenance, it helped me see where my thinking had gone awry.

    – trashgod
    Feb 12 '18 at 2:18
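A minimal Python 3 sketch of the distinction drawn in the comments above, using nothing beyond the standard library:

import math
from fractions import Fraction

exact = Fraction(*math.pi.as_integer_ratio())
print(exact == Fraction(math.pi))   # True: the ratio loses nothing relative to the float
print(float(exact) == math.pi)      # True: it round-trips to the very same float

approx = Fraction(math.pi).limit_denominator(10)
print(approx)                       # Fraction(22, 7): a deliberately simplified approximation
print(float(approx) == math.pi)     # False: 22/7 is merely close, not equal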
















3 Answers
































The algorithm used by as_integer_ratio only considers powers of 2 in the denominator. Here is a (probably) better algorithm.
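For reference, a short Python 3 check of that power-of-two claim (in Python 3 the call is usually spelled math.pi.as_integer_ratio(), and the L suffixes disappear):

import math

num, den = math.pi.as_integer_ratio()
print(num, den)               # 884279719003555 281474976710656
print(den == 2 ** 48)         # True: the denominator is always a power of two
print(num / den == math.pi)   # True: the ratio reproduces the float exactly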






answered Jan 16 '10 at 5:24 – Victor Liu
edited Nov 13 '18 at 19:46 – mirh

























  • Aha, 281474976710656 = 2^48. Now I see where the values came from. Interesting to compare implementations: svn.python.org/view/python/trunk/Objects/…

    – trashgod
    Jan 16 '10 at 7:20











    Saying the algorithm is not accurate is a flawed explanation. float.as_integer_ratio() simply returns you a (numerator, denominator) pair which is rigorously equal to the floating-point number in question (that's why the denominator is a power of two, since standard floating-point numbers have a base-2 exponent). The loss in accuracy comes from the floating-point representation itself, not from float.as_integer_ratio() which is actually lossless.

    – Antoine P.
    Jan 16 '10 at 12:10











  • IIUC, the algorithm is sufficiently accurate for a given floating-point precision. The genesis of the denominator is what puzzled me. The algorithm would never produce Arima's unique result, and there would be no point given the required precision.

    – trashgod
    Jan 16 '10 at 18:52











    This really illustrates why link only (or near link only) answers are discouraged, both links are now broken

    – Chris_Rands
    Feb 9 '18 at 14:26
































May I recommend gmpy's implementation of the Stern-Brocot tree:



>>> import gmpy
>>> import math
>>> gmpy.mpq(math.pi)
mpq(245850922,78256779)
>>> x=_
>>> float(x)
3.1415926535897931
>>>


again, the result is "correct within the precision of 64-bit floats" (53-bit "so-called" mantissas;-), but:



>>> 245850922 + 78256779
324107701
>>> 884279719003555 + 281474976710656
1165754695714211L
>>> 428224593349304L + 136308121570117
564532714919421L


...gmpy's precision is obtained so much cheaper (in terms of sum of numerator and denominator values) than Arima's, much less Python 2.6's!-)
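If gmpy is not available, a rough standard-library sketch of the same idea is to grow a denominator bound until the simplified fraction converts back to the original float. The helper below is only an illustration, not gmpy's actual algorithm:

import math
from fractions import Fraction

def simplest_roundtrip(x):
    # Illustrative only: double the denominator bound until the closest
    # fraction under that bound round-trips to the exact float x.
    limit = 1
    while True:
        candidate = Fraction(x).limit_denominator(limit)
        if float(candidate) == x:
            return candidate
        limit *= 2

ratio = simplest_roundtrip(math.pi)
print(ratio)                    # a compact fraction, comparable in size to mpq(245850922, 78256779)
print(float(ratio) == math.pi)  # True: it still converts back to the same float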






answered Jan 16 '10 at 5:28 – Alex Martelli























  • I see the benefit. I've used GMP from Ada before, so gmpy will be handy. code.google.com/p/adabindinggmpmpfr

    – trashgod
    Jan 16 '10 at 7:13
































You get better approximations using



fractions.Fraction.from_float(math.pi).limit_denominator()


The fractions module has been in the standard library since Python 2.6 (and 3.0).
However, math.pi doesn't have enough accuracy to support a 30-digit approximation; a 64-bit double carries only about 16 significant decimal digits.
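For instance, a small Python 3 sketch (recent versions also accept a float directly in the Fraction constructor, so from_float is optional):

import math
from fractions import Fraction

approx = Fraction(math.pi).limit_denominator(1000)
print(approx)          # Fraction(355, 113): the best ratio with denominator <= 1000
print(float(approx))   # about 3.1415929, agreeing with pi to roughly 7 significant digits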






answered Jan 16 '10 at 9:54 – fesno





















