Adding LSTM layers in Keras produces input error









First of all, apologies if I word this wrong; I'm relatively new to TensorFlow.



I am designing a model for simple classification of a dataset: each column is an attribute and the final column is the class. I split these and generate a dataframe in the usual way.



If I generate a model with only Dense layers, it works great:



def baseline_model():
    # create model
    model = Sequential()
    model.add(Dense(30, input_dim=len(dataframe.columns)-2, activation='sigmoid'))
    model.add(Dense(20, activation='sigmoid'))
    model.add(Dense(unique, activation='softmax'))

    # Compile model
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    model.summary()
    return model


If I were to add, say, an LSTM layer to the model:



def baseline_model():
    # create model
    model = Sequential()
    model.add(Dense(30, input_dim=len(dataframe.columns)-2, activation='sigmoid'))

    # this bit here >
    model.add(LSTM(20, return_sequences=True))
    model.add(Dense(20, activation='sigmoid'))
    model.add(Dense(unique, activation='softmax'))

    # Compile model
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    model.summary()
    return model


I get the following error when I execute the code:



ValueError: Input 0 is incompatible with layer lstm_1: expected ndim=3, found ndim=2


I'm not sure where these dimensions are coming from - maybe it's the classes? I have three classes of data ('POSITIVE', 'NEGATIVE', 'NEUTRAL') mapped to sets of well over 2,000 attribute values - they're statistical extractions of time-windowed EEG brainwave data from multiple electrodes, classed by emotional state.



Note:
The 'input_dim=len(dataframe.columns)-2' produces the number of attributes (inputs). I do this because I'd like the script to work with CSV datasets of different sizes on the fly.



Also, the code is properly indented in my actual script and runs fine; any formatting oddities here are just from pasting.



The full code is pasted here for reference: https://pastebin.com/1aXp9uDA. Apologies in advance for the terrible practices! This is just an initial project; I do plan on cleaning it all up later on.










Tags: python, tensorflow, keras, deep-learning






asked Nov 9 at 12:50 by Jordan
1 Answer

















In your original code the input has 2 dimensions, shaped (batch, feature). When you add an LSTM, you're telling Keras you want to do the classification given the last N timesteps, so you need an input shaped (batch, timestep, feature). It's easy to assume an LSTM will look back across all inputs in the batch, but unfortunately it doesn't; you have to organize your data manually so that the timestep elements are presented together.



To split up your data, you generally use a sliding window of length N (where N is how many values you want the LSTM to look back across). You can slide the window N steps each time, meaning there is no overlap between windows, or you can slide it one sample at a time, meaning each sample appears in multiple windows. There are numerous blog posts on how to do this; take a look at this one: How to Reshape Input Data for LSTM.
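
To make that concrete, here is a minimal NumPy sketch of one way to do the windowing, assuming you slide one sample at a time and label each window with the class of its last sample. The names (features, labels, window_size) are placeholders for your own variables, not anything from your code:

import numpy as np

def make_windows(features, labels, window_size):
    # features: 2-D array shaped (samples, n_features)
    # labels:   one label per sample (1-D, or 2-D if one-hot encoded)
    # Returns X shaped (windows, window_size, n_features) and
    # y holding the label of the last sample in each window.
    X, y = [], []
    for start in range(len(features) - window_size + 1):
        end = start + window_size
        X.append(features[start:end])
        y.append(labels[end - 1])
    return np.array(X), np.array(y)

# e.g. X_train, y_train = make_windows(train_features, train_labels, window_size=10)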



You also have one other issue: for your LSTM, you probably want return_sequences=False. With it set to True, you would need an output "Y" value for each element of the timestep. You probably want your "Y" value to represent the next value in your time series. Keep this in mind when organizing your data.
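
Under those assumptions, the LSTM version of the model might look roughly like this. It is only a sketch, not a drop-in fix: window_size, n_features and n_classes are hypothetical names (n_classes plays the role of your 'unique'), and the imports assume plain Keras rather than tensorflow.keras:

from keras.models import Sequential   # or: from tensorflow.keras...
from keras.layers import Dense, LSTM

def lstm_model(window_size, n_features, n_classes):
    model = Sequential()
    # The LSTM now receives 3-D input: (batch, window_size, n_features)
    model.add(LSTM(20, input_shape=(window_size, n_features), return_sequences=False))
    model.add(Dense(20, activation='sigmoid'))
    model.add(Dense(n_classes, activation='softmax'))
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    model.summary()
    return model

Because return_sequences=False, the LSTM emits a single vector per window, so a single class label per window is all you need.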



The link above provides some nice examples, or you can search for more in-depth ones. If you follow them, it should be clear how to reorganize things for an LSTM.






answered Nov 9 at 13:48, edited Nov 9 at 13:57, by bivouac0



























                     
