Pandas read_sql_query returning None for all values in some columns



I am using pandas read_sql_query to read data from a MySQL database table into a pandas dataframe. Some columns in this table have all NULL values. For those columns the pandas dataframe contains None in every row. For all other columns the dataframe contains NaN where there was a NULL value. Can anyone explain why None is returned for the all-NULL columns? And how do I make sure I get NaN everywhere, hopefully without doing manual conversions?
I should add that two of the columns causing this problem are of type float and the third is of type double.



EDIT



Here is an example. The columns pef and fer contain all NULLs in the database.



from sqlalchemy import create_engine
import pandas as pd
import math

querystr = "SELECT * FROM dbname.mytable"
engine = create_engine('mysql+pymysql://username:password@localhost/dbname')
df = pd.read_sql_query(querystr, engine)
df.head()

sys dias pef fer
0 NaN NaN None None
1 159.0 92.666 None None
2 NaN NaN None None
3 NaN NaN None None
4 102.0 63.333 None None


In the MySQL database these columns are defined as:



Columns: 
sys float
dias float
pef float
fer float


I would expect the columns pef and fer to contain NaN in each row, not None.










  • Can you add a minimal example of how your data looks like in your database, how they look when you parse them using Pandas and how you expect them to appear? Just edit your question to include those + any code you are using currently.

    – Rafael
    Nov 15 '18 at 8:24



















python pandas sqlalchemy nan nonetype






edited Nov 16 '18 at 0:47

asked Nov 15 '18 at 8:18 by panda












1 Answer






The problem is an open pandas issue and is explained here: https://github.com/pandas-dev/pandas/issues/14314




read_sql_query just gets result sets back, without any column type
information. If you use the read_sql_table functions, there it uses
the column type information through SQLAlchemy.




It seems that read_sql_query only checks the first 3 values returned in a column to determine the type of the column. So if the first 3 values are NULL it cannot determine the type of the column and so returns None.
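This inference behavior can be reproduced without a database at all (a minimal standalone sketch; the column names mirror the question's example):

```python
import pandas as pd

# Simulate rows returned by the DB driver: sys has some values,
# pef is NULL (None) in every row.
rows = [(None, None), (159.0, None), (None, None)]
df = pd.DataFrame(rows, columns=["sys", "pef"])

# sys contains at least one float, so pandas infers float64 and the
# missing entries become NaN; pef is all None, so pandas falls back
# to object dtype and keeps the raw None values.
print(df.dtypes)   # sys: float64, pef: object
```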



So a partial workaround is to use read_sql_table. I changed my code to use read_sql_table and it returns NaN as expected, even for the all-NULL columns. But in my real application I really need read_sql_query, so I now replace any None values with NaN as soon as the results are returned:



import numpy as np

df.replace([None], np.nan, inplace=True)
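For example (a small standalone sketch; note that an all-None column is stored as object dtype, so an explicit cast to float64 may also be wanted after the replace to match the other numeric columns):

```python
import numpy as np
import pandas as pd

# One normally inferred float column and one all-None (object) column.
df = pd.DataFrame({"sys": [1.0, np.nan], "pef": [None, None]})

df = df.replace([None], np.nan)   # None -> NaN everywhere
df = df.astype("float64")         # object column -> float64

print(df.dtypes)
```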





edited Nov 16 '18 at 1:51

answered Nov 16 '18 at 1:20 by panda




























