Preface:
Keras provides, by default, a way to obtain the output of a particular node of a particular layer, but it does not provide a direct interface for obtaining the output of an arbitrary layer. So when we need a layer's output, we have to write a little code ourselves. Given how high-level Keras's API is, this turns out to be very simple. This article presents two common implementations, based on the model and code of the previous article: keras custom callback function to view training loss and accuracy.
1. Loading the model and viewing each layer's information
As defined in the previous article, the model has 8 network layers in total, defined as follows:
model.add(Convolution2D(filters=6, kernel_size=(5, 5), padding='valid', input_shape=(img_rows, img_cols, 1), activation='tanh'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Convolution2D(filters=16, kernel_size=(5, 5), padding='valid', activation='tanh'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(120, activation='tanh'))
model.add(Dense(84, activation='tanh'))
model.add(Dense(n_classes, activation='softmax'))
No layer is given a name here (in practice it is better to name each layer), so we access each layer by its index, as follows:
for index in range(8):
    layer = model.get_layer(index=index)
    # layer = model.layers[index]  # an equivalent way to get each layer
    print(layer)

'''The running results are as follows:
<keras.layers.convolutional.Conv2D object at 0x0000012A4F232E10>
<keras.layers.pooling.MaxPooling2D object at 0x0000012A4F232E10>
<keras.layers.convolutional.Conv2D object at 0x0000012A4F232E10>
<keras.layers.pooling.MaxPooling2D object at 0x0000012A4F232E10>
<keras.layers.core.Flatten object at 0x0000012A4F232E10>
<keras.layers.core.Dense object at 0x0000012A4F232E10>
<keras.layers.core.Dense object at 0x0000012A4F232E10>
<keras.layers.core.Dense object at 0x0000012A4F232E10>
'''
Of course, since model.layers is a list, all layer information can be printed out at once:
print(model.layers)  # print out all layers
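Incidentally, if you want readable identifiers rather than bare indices, every layer constructor accepts a name argument, and model.summary() gives a tabular overview. A minimal sketch (the name 'conv1' is purely illustrative, not from the original model):

model.add(Convolution2D(filters=6, kernel_size=(5, 5), padding='valid',
                        input_shape=(img_rows, img_cols, 1),
                        activation='tanh', name='conv1'))  # hypothetical layer name

# Print index, name, and output shape for every layer
for index, layer in enumerate(model.layers):
    print(index, layer.name, layer.output_shape)

model.summary()  # tabular overview: layer name, type, output shape, parameter count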
2. Preparing the data and loading the model
Prepare test data
# Training parameters
learning_rate = 0.001
epochs = 10
batch_size = 128
n_classes = 10

# Image dimensions for the reshape
img_rows, img_cols = 28, 28

# Load the mnist dataset bundled with keras: 60,000 training samples and 10,000 test samples
(x_train, y_train), (x_test, y_test) = mnist.load_data()

# Convert images into (samples, width, height, channels) format
x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)

# Convert x_train, x_test to float32
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')

# Normalize x_train, x_test to the 0-1 range
x_train /= 255
x_test /= 255

# Convert the 0-9 labels to one-hot form
y_train = np_utils.to_categorical(y_train, n_classes)
y_test = np_utils.to_categorical(y_test, n_classes)
Model loading
from keras.models import load_model
model = load_model('./models/lenet5_weight.h5')
Notes:
Every Keras layer has an input and an output attribute, but they only work for single-node layers; this limitation is the reason we sometimes have to write the output-fetching code ourselves.
If a layer has only a single node (i.e. it is not a shared layer), you can get its input tensor, output tensor, input shape, and output shape directly:
layer.input
layer.output
layer.input_shape
layer.output_shape
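For example, on the LeNet-5 model loaded above, the first convolutional layer is single-node, so these attributes can be read directly; a minimal sketch (the shapes follow from a 5x5 'valid' convolution with 6 filters on 28x28x1 inputs):

layer = model.get_layer(index=0)  # the first Conv2D layer
print(layer.input)         # input tensor of the layer
print(layer.output)        # output tensor of the layer
print(layer.input_shape)   # (None, 28, 28, 1)
print(layer.output_shape)  # (None, 24, 24, 6)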
If the layer has multiple nodes (see: the concept of layer nodes and shared layers), you can use the following functions:
layer.get_input_at(node_index)
layer.get_output_at(node_index)
layer.get_input_shape_at(node_index)
layer.get_output_shape_at(node_index)
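To make the node concept concrete, here is a minimal sketch of a shared layer (the names shared_dense, a, and b are chosen here only for illustration): calling the same Dense instance on two different inputs creates two nodes, which are then addressed by node_index.

from keras.layers import Input, Dense

shared = Dense(8, name='shared_dense')  # a single layer instance
a = Input(shape=(4,))
b = Input(shape=(4,))
out_a = shared(a)  # first call creates node 0
out_b = shared(b)  # second call creates node 1

print(shared.get_output_at(0))        # output tensor at node 0
print(shared.get_output_at(1))        # output tensor at node 1
print(shared.get_output_shape_at(0))  # (None, 8)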
3. Defining a method to obtain the output of a certain layer
3.1 The first implementation method
import numpy as np
from keras import backend as K

def get_output_function(model, output_layer_index):
    '''
    model: the loaded model
    output_layer_index: the index of the layer whose output we want
    '''
    vector_function = K.function([model.layers[0].input],
                                 [model.layers[output_layer_index].output])
    def inner(input_data):
        vector = vector_function([input_data])[0]
        return vector
    return inner

# Now test on a single image: pick one from the test set (index 1)
x = np.expand_dims(x_test[1], axis=0)  # shape is [1, 28, 28, 1]
get_feature = get_output_function(model, 6)  # the return value is itself a function that fetches the output of the layer at index 6 (the 7th layer)
feature = get_feature(x)  # equivalent to calling the inner function defined above
print(feature)
'''The result of the operation is
[[-0.99986297 -0.9988328 -0.9273474 0.9101525 -0.9054705 -0.95798373
 0.9911243 0.78576803 0.99676156 0.39356467 -0.9724135 -0.74534595
 0.8527011 -0.9968267 -0.9420816 -0.32765102 -0.41667578 0.99942905
 0.92333794 0.7565034 -0.38416263 -0.994241 0.3781617 0.9621943
 0.9443946 0.9671554 -0.01000021 -0.9984282 -0.96650964 -0.9925837
 -0.48193568 -0.9749565 -0.79769516 0.9651831 0.9678705 -0.9444472
 0.9405674 0.97538495 -0.12366439 -0.9973782 0.05803521 0.9159217
 -0.9627071 0.99898154 0.99429387 -0.985909 0.5787794 -0.9789403
 -0.94316894 0.9999644 0.9156823 0.46314353 -0.01582102 0.98359734
 0.5586145 -0.97360635 0.99058044 0.9995654 -0.9800733 0.99942625
 0.8786553 -0.9992093 0.99916387 -0.5141877 0.99970615 0.28427476
 0.86589384 0.7649907 -0.9986046 0.9999706 -0.9892468 0.99854743
 -0.86872625 -0.9997323 0.98981035 -0.87805724 -0.9999373 -0.7842255
 -0.97456616 -0.97237325 -0.729563 0.98718935 0.9992022 -0.5294769 ]]
'''
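As a side note, K.function is not limited to one output: it can fetch several layers' activations in a single call. A minimal sketch reusing the model and x from above (the choice of indices 4 and 6 is just an example):

from keras import backend as K

fetch = K.function([model.layers[0].input],
                   [model.layers[4].output, model.layers[6].output])
flat_feature, dense_feature = fetch([x])  # one array per requested output
print(flat_feature.shape, dense_feature.shape)  # for this LeNet-5: (1, 256) and (1, 84)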
However, the implementation above is not as simple as it could be; there is an even easier way. In Keras's design, an entire model can itself be treated as a layer, which leads to the following implementation.
3.2 The second implementation method
import keras
import numpy as np
from keras.datasets import mnist
from keras.models import Model

model = keras.models.load_model('./models/lenet5_weight.h5')

# Pick one image from the test set (index 1)
x = np.expand_dims(x_test[1], axis=0)  # shape is [1, 28, 28, 1]

# Treat the model as a layer: build a new model whose output is the output of the 7th layer (index 6)
layer_model = Model(inputs=model.input, outputs=model.layers[6].output)
feature = layer_model.predict(x)
print(feature)
'''The result of the operation is:
[[-0.99986297 -0.9988328 -0.9273474 0.9101525 -0.9054705 -0.95798373
 0.9911243 0.78576803 0.99676156 0.39356467 -0.9724135 -0.74534595
 0.8527011 -0.9968267 -0.9420816 -0.32765102 -0.41667578 0.99942905
 0.92333794 0.7565034 -0.38416263 -0.994241 0.3781617 0.9621943
 0.9443946 0.9671554 -0.01000021 -0.9984282 -0.96650964 -0.9925837
 -0.48193568 -0.9749565 -0.79769516 0.9651831 0.9678705 -0.9444472
 0.9405674 0.97538495 -0.12366439 -0.9973782 0.05803521 0.9159217
 -0.9627071 0.99898154 0.99429387 -0.985909 0.5787794 -0.9789403
 -0.94316894 0.9999644 0.9156823 0.46314353 -0.01582102 0.98359734
 0.5586145 -0.97360635 0.99058044 0.9995654 -0.9800733 0.99942625
 0.8786553 -0.9992093 0.99916387 -0.5141877 0.99970615 0.28427476
 0.86589384 0.7649907 -0.9986046 0.9999706 -0.9892468 0.99854743
 -0.86872625 -0.9997323 0.98981035 -0.87805724 -0.9999373 -0.7842255
 -0.97456616 -0.97237325 -0.729563 0.98718935 0.9992022 -0.5294769 ]]
'''
It can be seen that the result is the same as above.
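The same trick also scales to every layer at once: a Model may take a list of outputs, so a single predict call can return all intermediate activations. A minimal sketch under the same setup (activations_model is a name chosen here for illustration):

from keras.models import Model

activations_model = Model(inputs=model.input,
                          outputs=[layer.output for layer in model.layers])
activations = activations_model.predict(x)  # a list with one array per layer
for index, activation in enumerate(activations):
    print(index, activation.shape)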
Summary:
Because a Keras model can itself be treated as a layer, a very flexible way of obtaining layer outputs is available. The second method is the recommended one for obtaining the output of a certain layer. It boils down to the following four main steps:
import keras
import numpy as np
from keras.datasets import mnist
from keras.models import Model

# Step 1: prepare the input data
x = np.expand_dims(x_test[1], axis=0)  # shape is [1, 28, 28, 1]

# Step 2: load the trained model
model = keras.models.load_model('./models/lenet5_weight.h5')

# Step 3: treat the model as a layer and build a new model that outputs the 7th layer's output
layer_model = Model(inputs=model.input, outputs=model.layers[6].output)

# Step 4: call the new model's predict method to get the desired output
feature = layer_model.predict(x)
print(feature)
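If you named your layers when building the model (as suggested in section 1), step 3 can also select the layer by name instead of index; a minimal sketch, assuming a layer was created with name='features':

layer_model = Model(inputs=model.input,
                    outputs=model.get_layer('features').output)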
That's all for this Keras tip on obtaining the output of a network layer. I hope it gives you a useful reference, and I hope you will continue to support me.