Keras makes it straightforward to use custom functions: you write the function yourself, then reference it in the model's compile() call. In the example below, focal_loss and fbeta_score are two functions we defined ourselves before adding them, while 'accuracy' in metrics is a metric function that ships with Keras.
```python
def focal_loss(): ... return xx

def fbeta_score(): ... return yy

model.compile(optimizer=Adam(lr=0.0001), loss=[focal_loss], metrics=['accuracy', fbeta_score])
```
After training, loading the model also needs one extra step: pass the two custom functions to load_model through the custom_objects argument, in the form of a dictionary, and the model will load normally.
```python
weight_path = './weights.h5'
model = load_model(weight_path, custom_objects={'focal_loss': focal_loss, 'fbeta_score': fbeta_score})
```
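To see the whole round trip in one place, here is a minimal, self-contained sketch, assuming TensorFlow 2.x with tf.keras; the tiny model, this fbeta_score implementation, and the file name are all made up for illustration, not taken from the article:

```python
import numpy as np
import tensorflow as tf

def fbeta_score(y_true, y_pred, beta=1.0):
    # Illustrative batch-wise F-beta metric (hypothetical implementation).
    y_true = tf.cast(y_true, tf.float32)
    y_pred = tf.cast(tf.round(y_pred), tf.float32)
    tp = tf.reduce_sum(y_true * y_pred)
    precision = tp / (tf.reduce_sum(y_pred) + 1e-7)
    recall = tp / (tf.reduce_sum(y_true) + 1e-7)
    bb = beta ** 2
    return (1 + bb) * precision * recall / (bb * precision + recall + 1e-7)

# A throwaway one-layer model, just to have something to save and reload.
inputs = tf.keras.Input(shape=(4,))
outputs = tf.keras.layers.Dense(1, activation='sigmoid')(inputs)
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer='adam', loss='binary_crossentropy',
              metrics=['accuracy', fbeta_score])
model.fit(np.random.rand(8, 4), np.random.randint(0, 2, (8, 1)),
          epochs=1, verbose=0)

model.save('./demo_custom_metric.h5')
# Loading without custom_objects would fail to resolve 'fbeta_score';
# mapping the saved name back to the function fixes it:
restored = tf.keras.models.load_model(
    './demo_custom_metric.h5',
    custom_objects={'fbeta_score': fbeta_score})
```

The restored model produces the same predictions as the original, confirming the weights survived the round trip.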
Additional knowledge: how Keras uses customized loss and evaluation functions for training and prediction
1. Sometimes when training a model, the built-in loss and evaluation functions are not enough to train and evaluate it scientifically, and you need to define your own, such as a focal loss function and a dice evaluation function for training a U-Net.
2. Import the custom loss and evaluation functions when building and compiling the model.
Add the customized loss and evaluation functions at model compilation time:

```python
model.compile(optimizer=Adam(lr=1e-4), loss=[binary_focal_loss()], metrics=['accuracy', dice_coef])
```

The two customized functions are defined as follows:

```python
import tensorflow as tf
from keras import backend as K

def binary_focal_loss(gamma=2, alpha=0.25):
    """
    Binary form of focal loss, for binary classification problems:
        focal_loss(p_t) = -alpha_t * (1 - p_t)**gamma * log(p_t)
    where p = sigmoid(x), and p_t = p or 1 - p depending on whether the
    label is 1 or 0, respectively.
    References: /pdf/1708.
    Usage:
        model.compile(loss=[binary_focal_loss(alpha=.25, gamma=2)],
                      metrics=["accuracy"], optimizer=adam)
    """
    alpha = tf.constant(alpha, dtype=tf.float32)
    gamma = tf.constant(gamma, dtype=tf.float32)

    def binary_focal_loss_fixed(y_true, y_pred):
        """
        y_true shape needs to be (None, 1);
        y_pred needs to be computed after sigmoid.
        """
        y_true = tf.cast(y_true, tf.float32)
        alpha_t = y_true * alpha + (K.ones_like(y_true) - y_true) * (1 - alpha)
        p_t = y_true * y_pred + (K.ones_like(y_true) - y_true) * (K.ones_like(y_true) - y_pred) + K.epsilon()
        focal_loss = - alpha_t * K.pow((K.ones_like(y_true) - p_t), gamma) * K.log(p_t)
        return K.mean(focal_loss)

    return binary_focal_loss_fixed


# The smooth parameter prevents the denominator from being 0
def dice_coef(y_true, y_pred, smooth=1):
    intersection = K.sum(y_true * y_pred, axis=[1, 2, 3])
    union = K.sum(y_true, axis=[1, 2, 3]) + K.sum(y_pred, axis=[1, 2, 3])
    return K.mean((2. * intersection + smooth) / (union + smooth), axis=0)
```
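Because the focal-loss formula is easy to get wrong, it helps to check it numerically. The plain-NumPy mirror of binary_focal_loss_fixed below (the helper and the test values are illustrative, not from the article) exploits the fact that with gamma = 0 and alpha = 0.5 the focal loss collapses to half the ordinary binary cross-entropy:

```python
import numpy as np

def binary_focal_loss_np(y_true, y_pred, gamma=2.0, alpha=0.25, eps=1e-7):
    # Plain-NumPy mirror of binary_focal_loss_fixed, for sanity checking.
    y_true = np.asarray(y_true, dtype=np.float64)
    y_pred = np.asarray(y_pred, dtype=np.float64)
    alpha_t = y_true * alpha + (1.0 - y_true) * (1.0 - alpha)
    p_t = y_true * y_pred + (1.0 - y_true) * (1.0 - y_pred) + eps
    return np.mean(-alpha_t * (1.0 - p_t) ** gamma * np.log(p_t))

# With gamma=0 the modulating factor vanishes, and alpha=0.5 scales every
# term by one half, so the result should equal 0.5 * binary cross-entropy:
y_true = np.array([1.0, 0.0, 1.0])
y_pred = np.array([0.9, 0.1, 0.8])
bce = np.mean(-(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred)))
fl = binary_focal_loss_np(y_true, y_pred, gamma=0.0, alpha=0.5)
```

With the default gamma=2, a confidently wrong prediction (p_t small) is penalized far more heavily than an easy one, which is the whole point of focal loss.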
Note the name of the loss function that gets recorded when the model is saved. Guess which one it is:
a:binary_focal_loss()
b:binary_focal_loss_fixed
3. When the model predicts, you must also load the customized loss and evaluation functions, or an error will be reported.
Time to reveal the answer: the loss name saved in the model is binary_focal_loss_fixed. When defining the custom_objects dictionary for prediction, each key must match the name saved in the model, otherwise the loss function will not be found. So when writing custom functions, try to avoid this kind of nested function definition, lest it bring some unexpected trouble.
```python
model = load_model('./unet_' + label + '_20.h5',
                   custom_objects={'binary_focal_loss_fixed': binary_focal_loss(), 'dice_coef': dice_coef})
```
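If matching the saved inner-function name feels fragile, a common alternative (assuming tf.keras in TensorFlow 2.x) is to load with compile=False, which skips restoring the loss and metrics entirely and therefore needs no custom_objects; you then recompile by hand before evaluating or resuming training. A self-contained sketch with a throwaway model and a simplified, illustrative dice_coef:

```python
import numpy as np
import tensorflow as tf

def dice_coef(y_true, y_pred, smooth=1.0):
    # Illustrative global dice coefficient (simplified for this sketch).
    y_true = tf.cast(y_true, tf.float32)
    intersection = tf.reduce_sum(y_true * y_pred)
    union = tf.reduce_sum(y_true) + tf.reduce_sum(y_pred)
    return (2.0 * intersection + smooth) / (union + smooth)

# A throwaway model compiled with the custom metric, then saved.
inputs = tf.keras.Input(shape=(3,))
outputs = tf.keras.layers.Dense(1, activation='sigmoid')(inputs)
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=[dice_coef])
model.save('./demo_compile_false.h5')

# compile=False skips the loss/metric lookup, so no custom_objects is needed:
restored = tf.keras.models.load_model('./demo_compile_false.h5', compile=False)
# Recompile manually before further training or evaluation:
restored.compile(optimizer='adam', loss='binary_crossentropy', metrics=[dice_coef])
```

For pure prediction, the restored model can even be used without recompiling at all.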
That is all I have to share about writing custom loss functions in Keras and loading the resulting models. I hope it gives you a useful reference, and I hope for your continued support.