Regarding error using Keras functional API
I have a regression dataset:
X_train (float64) Size = (1616, 3) -> i.e. 3 predictors
Y_train (float64) Size = (1616, 2) -> i.e. 2 targets
I am trying to run a Hyperas hyperparameter search with the Keras functional API (my main reason for using the functional API is the loss_weights option when compiling):
inputs1 = Input(shape=(X_train.shape[0], X_train.shape[1]))
x = Dense(choice([np.power(2,1), np.power(2,2), np.power(2,3), np.power(2,4), np.power(2,5)]), activation=choice(['tanh', 'relu', 'sigmoid']))(inputs1)
x = Dropout(uniform(0, 1))(x)
x = Dense(choice([np.power(2,1), np.power(2,2), np.power(2,3), np.power(2,4), np.power(2,5)]), activation=choice(['tanh', 'relu', 'sigmoid']))(x)
x = Dropout(uniform(0, 1))(x)
x = Dense(choice([np.power(2,1), np.power(2,2), np.power(2,3), np.power(2,4), np.power(2,5)]), activation=choice(['tanh', 'relu', 'sigmoid']))(x)
x = Dropout(uniform(0, 1))(x)
if conditional(choice(['three', 'four'])) == 'four':
    x = Dense(choice([np.power(2,1), np.power(2,2), np.power(2,3), np.power(2,4), np.power(2,5)]), activation=choice(['tanh', 'relu', 'sigmoid']))(x)
    x = Dropout(uniform(0, 1))(x)
output1 = Dense(1, activation='linear')(x)
output2 = Dense(1, activation='linear')(x)

model = Model(inputs=inputs1, outputs=[output1, output2])

adam = keras.optimizers.Adam(lr=choice([10**-3, 10**-2, 10**-1]))
rmsprop = keras.optimizers.RMSprop(lr=choice([10**-3, 10**-2, 10**-1]))
sgd = keras.optimizers.SGD(lr=choice([10**-3, 10**-2, 10**-1]))

choiceval = choice(['adam', 'rmsprop', 'sgd'])
if choiceval == 'adam':
    optimizer = adam
elif choiceval == 'rmsprop':
    optimizer = rmsprop
else:
    optimizer = sgd

model.compile(loss='mae', metrics=['mae'], optimizer=optimizer, loss_weights=[0.5, 0.5])

earlyStopping = EarlyStopping(monitor='val_loss', min_delta=0, patience=50, verbose=0, mode='auto')
checkpoint = ModelCheckpoint(filepath, monitor='val_acc', verbose=2, save_best_only=True, mode='max')
lr_reducer = ReduceLROnPlateau(monitor='val_loss', factor=0.5, cooldown=1, patience=10, min_lr=1e-4, verbose=2)
callbacks_list = [earlyStopping, checkpoint, lr_reducer]

history = model.fit(X_train, Y_train,
                    batch_size=choice([16, 32, 64, 128]),
                    epochs=choice([20000]),
                    verbose=2,
                    validation_data=(X_val, Y_val),
                    callbacks=callbacks_list)
However, upon running it, I get the following error:
ValueError: Error when checking input: expected input_1 to have 3 dimensions, but got array with shape (1616, 3)
I would greatly appreciate it if someone could point me in the direction of what is going wrong here. I suspect the inputs (i.e. X_train, Y_train) and also the Input shape might be at fault. Any help would be appreciated.
UPDATE
The fault was indeed in the Input line. I changed it to:
inputs1 = Input(shape=(X_train.shape[1],))
However, now I received another error:
ValueError: Error when checking model target: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 2 array(s), but instead got the following list of 1 arrays: [array([[0.19204772, 0.04878049],
[0.20226056, 0. ],
[0.12029842, 0.04878049],
...,
[0.45188627, 0.14634146],
[0.26942276, 0.02439024],
[0.12942418, 0....
Tags: python, machine-learning, keras, hyperas
asked Nov 15 '18 at 14:49 by Corse (edited Nov 15 '18 at 14:57 by today)
1 Answer
Since your model has two output layers, you need to pass a list of two arrays as the true targets (i.e. y) when calling the fit() method. For example:
model.fit(X_train, [Y_train[:, 0:1], Y_train[:, 1:]], ...)
answered Nov 15 '18 at 14:53 by today
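Note that validation_data is checked against the same two outputs, so the validation targets presumably need the same split. A minimal sketch, assuming Y_val has the same (n_samples, 2) layout as Y_train (the batch size and epoch count below are illustrative placeholders, not the values from the original search space):
# Sketch only: one target array per output head, for both training and validation.
history = model.fit(
    X_train, [Y_train[:, 0:1], Y_train[:, 1:]],
    validation_data=(X_val, [Y_val[:, 0:1], Y_val[:, 1:]]),
    batch_size=32,   # illustrative value
    epochs=100,      # illustrative value
    verbose=2)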
Thanks, I did that and I got this: Epoch 1/20000 - 1s - loss: 0.2504 - dense_4_loss: 0.3083 - dense_5_loss: 0.1925 - dense_4_mean_absolute_error: 0.3083 - dense_5_mean_absolute_error: 0.1925 - val_loss: 0.1225 - val_dense_4_loss: 0.1793 - val_dense_5_loss: 0.0657 - val_dense_4_mean_absolute_error: 0.1793 - val_dense_5_mean_absolute_error: 0.065
– Corse, Nov 15 '18 at 15:01
Why are there so many losses?
– Corse, Nov 15 '18 at 15:01
OK, it's the losses for the combined (total) objective and for the 2 output layers.
– Corse, Nov 15 '18 at 15:05
@Corse The combined loss, the losses for each output layer, and the metric values for each output layer. However, since you are using mae as the loss function as well, you can remove it as a metric (or use a different metric instead). The same thing applies to validation as well.
– today, Nov 15 '18 at 15:05
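Purely as an illustration of that suggestion (not code from the thread), dropping the redundant metric would leave the compile call as roughly:
model.compile(loss='mae', optimizer=optimizer, loss_weights=[0.5, 0.5])  # per-output MAE is already reported as each output's loss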
By the way, I'm assuming I should do this as well: score, acc = model.evaluate(X_val, [epidist_train, mw_train], verbose=2). I got this strange error: ValueError: Input arrays should have the same number of samples as target arrays.
– Corse, Nov 15 '18 at 15:06
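A hedged reading of that last error, based only on the shapes given in the question: evaluate() is being given validation inputs (X_val) together with training-sized target arrays, so the sample counts disagree. A sketch of the presumably intended call, assuming Y_val holds the two validation targets:
# Sketch only: evaluate on validation inputs with matching validation targets.
scores = model.evaluate(X_val, [Y_val[:, 0:1], Y_val[:, 1:]], verbose=2)
# With metrics set, evaluate() on a two-output model returns a list (total loss, per-output losses,
# per-output metrics), so unpacking it into exactly two names like `score, acc` would typically also fail.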