
Getting the weights of an intermediate layer in Keras



I have an image dataset with 376 classes; each class has 15 pictures and corresponds to one person. I would like to get the feature vector that corresponds to each person.



What I have done is, after compiling the model, use this link
as a reference to get the weights of the last convolutional layer. However, when I do this, I get the error:



InvalidArgumentError: You must feed a value for placeholder tensor 'conv_layer' with dtype float and shape [?,19,19,360]


How can I resolve this issue?



Here is the code that I have done so far:



import numpy as np
import keras
from keras import backend as K
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense

train_data = np.array(train_data, dtype=np.float32)
test_data = np.array(test_data, dtype=np.float32)
train_data = train_data / 180  # to make the array values between 0-1
test_data = test_data / 180
train_label = keras.utils.to_categorical(train_label, 376)
test_label = keras.utils.to_categorical(test_label, 376)

# CNN MODEL
model = Sequential()
model.add(Conv2D(180, (3, 3), padding='same', input_shape=(180, 180, 3),
                 activation="relu"))  # 180 is the number of filters
model.add(Conv2D(180, (3, 3), activation="relu"))
model.add(MaxPooling2D(pool_size=(3, 3)))
model.add(Dropout(0.25))
model.add(Conv2D(360, (3, 3), padding='same', activation="relu"))
model.add(Conv2D(360, (3, 3), activation="relu"))
conv_layer = model.add(MaxPooling2D(pool_size=(3, 3)))
model.add(Dropout(0.25))
flatten_layer = model.add(Flatten())
model.add(Dense(496, activation="relu"))
model.add(Dropout(0.5))
dense_layer = model.add(Dense(376, activation="softmax"))

# compiling the model
model.compile(
    loss='categorical_crossentropy',
    optimizer='adam',
    metrics=['accuracy']
)
model.fit(
    train_data,
    train_label,
    batch_size=32,
    epochs=40,
    verbose=2,
    validation_split=0.1,
    shuffle=True)

# getting intermediate layer weights
get_layer_output = K.function([model.layers[0].input],
                              [model.layers[11].output])
layer_output = get_layer_output([conv_layer])[0]




















  • Which layer's output are you expecting to keep as face feature vectors?
    – Kiritee Gak, Mar 24 at 14:09










  • @KiriteeGak The last convolutional layer; in this example, the 7th.
    – Alfaisal Albakri, Mar 24 at 14:39
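As an aside, the layer indices discussed above can be checked by enumerating the model's layers. A minimal sketch using standard Keras attributes, assuming the model from the question has been built:

for i, layer in enumerate(model.layers):
    # prints index, layer name and output shape; the second pooling layer
    # should report an output shape of (None, 19, 19, 360), matching the error
    print(i, layer.name, layer.output_shape)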















Tags: machine-learning, deep-learning, keras, cnn, image-recognition






asked Mar 24 at 12:47 by Alfaisal Albakri; edited Mar 24 at 16:14 by Ethan
1 Answer













The easiest way to get a truncated output from a network is to create a sub-network of it and apply the weights of your trained network. The following example is a modification of what you have shown above, but it should guide you.



The network you originally want to train:




import numpy as np
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense

model = Sequential()
model.add(Conv2D(10, (3, 3), padding='same', input_shape=(60, 60, 3),
                 activation="relu"))
model.add(Conv2D(10, (3, 3), activation="relu"))
model.add(MaxPooling2D(pool_size=(3, 3)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(5, activation="softmax"))
model.compile(
    loss='categorical_crossentropy',
    optimizer='adam',
    metrics=['accuracy'])

model.fit(
    train_data,
    train_label)


Now create a sub-network from which you want the outputs, following the example above:




model_new = Sequential()
model_new.add(Conv2D(10, (3, 3), padding='same', input_shape=(60, 60, 3),
                     activation="relu"))
model_new.add(Conv2D(10, (3, 3), activation="relu"))
model_new.add(MaxPooling2D(pool_size=(3, 3)))
model_new.add(Dropout(0.25))
model_new.add(Flatten())

model_new.compile(
    loss='categorical_crossentropy',
    optimizer='adam',
    metrics=['mse'])

# You need to fit on randomly created targets, just to initialise the
# weights. You will be replacing them with the original ones from above anyway.
model_new.fit(train_data, y=np.random.rand(40, 3610))


Now take the weights from the first, trained network and use them to replace the weights of the second network:




model_new.set_weights(weights=model.get_weights())


You can check whether the weights actually changed in the step above by adding these checks:




print("Are arrays equal before fit - ",
any([np.array_equal(a1, a2) for a1, a2 in zip(model_new.get_weights(), model.get_weights()[:4])]))

model_new.set_weights(weights=model.get_weights())
print("Are arrays equal after applying weights - ",
all([np.array_equal(a1, a2) for a1, a2 in zip(model_new.get_weights(), model.get_weights()[:4])]))


This should yield:




Are arrays equal before fit - False
Are arrays equal after applying weights - True
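Once the weights are in place, the feature vectors come from running the sub-network in inference mode. A minimal sketch, assuming train_data holds the preprocessed images and labels holds the plain (non-one-hot) class index of each image; labels is an assumption and is not defined in the original posts:

import numpy as np

# Feature vector per image: the Flatten layer's output of the truncated network
features = model_new.predict(train_data, batch_size=32)  # shape: (num_images, 3610)

# One representative vector per person, averaging that person's images
# (15 images per class in the original question)
person_vectors = {
    person: features[np.flatnonzero(labels == person)].mean(axis=0)
    for person in np.unique(labels)
}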


Hope this helps.
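For comparison, a sketch of how the K.function approach attempted in the question could be made to work: it needs to be fed an actual batch of images rather than the conv_layer variable (which is None, since Sequential.add returns nothing). Assuming the model from the question, take the output of the second pooling layer (index 6, the reported shape (?, 19, 19, 360)) and pass the learning-phase flag so dropout runs in test mode; images here is a hypothetical float32 array of shape (n, 180, 180, 3), preprocessed like the training data:

from keras import backend as K

get_conv_output = K.function([model.layers[0].input, K.learning_phase()],
                             [model.layers[6].output])
conv_features = get_conv_output([images, 0])[0]  # 0 = test mode; shape (n, 19, 19, 360)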






answered Mar 24 at 16:34 by Kiritee Gak; edited Mar 24 at 18:26

  • Works perfectly, thanks. One more question: how do I know which array corresponds to which image class?
    – Alfaisal Albakri, Mar 24 at 18:13










  • What do you mean by array? The output of a filter? You cannot accurately trace it. Remember, after flattening you have a huge vector, and all of its values are mapped with some weight onto a lower dimension by the dense layers. So any of the values from the filters could have contributed to a class weight.
    – Kiritee Gak, Mar 24 at 18:25
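In other words, class membership is read off the full model's softmax output rather than from any single intermediate value. A minimal sketch, assuming test_data has been preprocessed the same way as the training data:

import numpy as np

probs = model.predict(test_data)             # shape: (num_images, num_classes), one probability per person
predicted_class = np.argmax(probs, axis=1)   # index of the most likely person for each image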










