
Python implementation of deep learning neural network regression based on TensorFlow interface

1 Preface

1. This article implements a deep learning network with TensorFlow's Estimator interface, not with the Keras interface commonly used in TensorFlow 2.0; deep learning regression with the Keras interface will be covered in the next blog post.

2. The code in this article takes DNNRegressor regression as an example; and since deep learning regression with this interface is broadly similar to classification, understanding one readily carries over to the other.

3. The second part of this article presents a step-by-step breakdown of the code, and the third part presents the complete code.

4. Relevant version information:

  • Python version: 3.8.5
  • TensorFlow version: 2.4.1
  • IDE: Spyder 4.1.5

2 Code Breakdown

2.1 Preparations

First, we need to import the relevant libraries and packages.

import os
import openpyxl
import numpy as np
import pandas as pd
import tensorflow as tf
import scipy.stats as stats
import matplotlib.pyplot as plt
from sklearn import metrics
from sklearn.model_selection import train_test_split

Secondly, TensorFlow-based code tends to output a large number of log messages, which gets in the way of following the execution of our own code. The log messages output by the code come in four levels, ordered from least to most severe: INFO (notification) < WARNING (warning) < ERROR (error) < FATAL (fatal). We can restrict the log messages that TensorFlow outputs with the following code:

os.environ['TF_CPP_MIN_LOG_LEVEL']='3'

Here, 3 means that only FATAL messages are output. Note, however, that this line needs to be placed before import tensorflow:

import os
os.environ['TF_CPP_MIN_LOG_LEVEL']='3'
import openpyxl
import numpy as np
import pandas as pd
import tensorflow as tf
import scipy.stats as stats
import matplotlib.pyplot as plt
from sklearn import metrics
from sklearn.model_selection import train_test_split

2.2 Parameter Configuration

One of the main characteristics of deep learning code is its large number of manually defined parameters. To avoid scrolling up and down while tuning them, we can gather the main parameters in one place, which makes later adjustment more convenient.

The meaning of each parameter is described in detail in the subsequent sections of this article.

# Define all types of variables in one place, which is very beneficial for code with a lot of variables such as machine learning
MyModelPath="G:/CropYield/03_DL/02_DNNModle" # Determine where the model from each training session is saved
MyDataPath="G:/CropYield/03_DL/00_Data/" # Determine the location of input data
MyResultSavePath="G:/CropYield/03_DL/03_OtherResult/" # Determine where model accuracy results (RMSE, etc.) are saved with model parameters
TestSize=0.2 # Determine what percentage of the data is in the test set
RandomSeed=np.random.randint(low=24,high=25) # Determine the random seed used to split the training and test sets
OptMethod='Adam' # Determine the optimization method used for the model
LearningRate=0.01 # Determine the learning rate
DecayStep=200 # Determine the number of steps after which the learning rate decays
DecayRate=0.96 # Determine the decay rate of the learning rate
HiddenLayer=[64,128] # Determine the number of hidden layers and the number of neurons in each layer
ActFun='tf.nn.relu' # Determine the activation function
Dropout=0.3 # Determine the value of Dropout
LossReduction='tf.compat.v1.ReductionV2.SUM_OVER_BATCH_SIZE' # Specify the reduction method for each batch's training error
BatchNorm='False' # Determine whether to use Batch Normalization
TrainBatchSize=110 # Determine the size of a Batch of training data
TrainStep=3000 # Determine the number of Steps for training data
EvalBatchSize=1 # Determine the size of a Batch of validation data
PredictBatchSize=1 # Determine the size of a Batch of predictive data (i.e. test set)

2.3 Deleting the Old Model

Each run of DNNRegressor saves the resulting model to the specified path. To make sure the next saved model is not mixed up with the results of the previous run, we can first delete all files in the model folder.

# DeleteOldModel function, deletes the model saved in the last run
def DeleteOldModel(ModelPath):
    AllFileName=os.listdir(ModelPath) # Get all files and folders in the ModelPath path
    for i in AllFileName:
        NewPath=os.path.join(ModelPath,i) # Join the fetched file or folder name with the ModelPath path
        if os.path.isdir(NewPath): # If the combined new path is a folder
            DeleteOldModel(NewPath) # Recursively call the DeleteOldModel function
        else:
            os.remove(NewPath) # If it is not a folder but a file, delete it

# Call the DeleteOldModel function to delete the model saved from the last run
DeleteOldModel(MyModelPath)

Note that the above code only deletes the files in the specified path, not the folders. If you also need to delete the folders, you can modify the last few lines of the function, for example along the lines of the sketch below.
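The following is only my own minimal sketch (the function name DeleteOldModelAndFolders is hypothetical and not part of the original code): once a subfolder has been emptied recursively, os.rmdir can remove the now-empty folder itself.

# Sketch only: also remove emptied subfolders (hypothetical variant, not in the original code)
def DeleteOldModelAndFolders(ModelPath):
    for i in os.listdir(ModelPath): # Get all files and folders in the ModelPath path
        NewPath=os.path.join(ModelPath,i)
        if os.path.isdir(NewPath):
            DeleteOldModelAndFolders(NewPath) # Empty the subfolder first
            os.rmdir(NewPath) # Then remove the (now empty) subfolder itself
        else:
            os.remove(NewPath) # Delete files directly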

2.4 Data Import and Data Segmentation

My data is already saved in a .csv file, so it can be read directly with pd.read_csv.

Each column of the data is a feature, and each row is one sample, made up of the full set of features together with the dependent variable (Yield below).

# LoadData function to load all the data
def LoadData(DataPath):
    MyData=pd.read_csv(DataPath,names=['EVI0610','EVI0626','EVI0712','EVI0728','EVI0813','EVI0829',
                                       'EVI0914','EVI0930','EVI1016','Lrad06','Lrad07','Lrad08',
                                       'Lrad09','Lrad10','Prec06','Prec07','Prec08','Prec09',
                                       'Prec10','Pres06','Pres07','Pres08','Pres09','Pres10',
                                       'SIF161','SIF177','SIF193','SIF209','SIF225','SIF241',
                                       'SIF257','SIF273','SIF289','Shum06','Shum07','Shum08',
                                       'Shum09','Shum10','SoilType','Srad06','Srad07','Srad08',
                                       'Srad09','Srad10','Temp06','Temp07','Temp08','Temp09',
                                       'Temp10','Wind06','Wind07','Wind08','Wind09','Wind10',
                                       'Yield'],header=0) # Load the data specified by the DataPath path, the contents of the names are the names of the columns
    return MyData

# Initial data processing
AllXY=LoadData(MyDataPath) # Call the LoadData function to get the data
Label={"Yield":("Yield")} # Extract the dependent variable from the full set of data
AllX,AllY=AllXY,((Label)) # Separate independent variables from dependent variables

# Delineate data training and test sets
TrainX,TestX,TrainY,TestY=train_test_split(AllX,
                                           AllY,
                                           test_size=TestSize, # Specify the percentage of test sets in the data
                                           random_state=RandomSeed # Specify the random number seed that divides the training and test sets
                                           )

2.5 Feature Columns Definition

Feature Columns act as a bridge between your raw data and the model. You can think of them as a list that the model carries to your data (i.e., the data imported in Section 2.4 of this article) and searches through column by column; if a column name in the raw data appears in the Feature Columns, the model takes all of that column's data over to itself for training.

Since we want to import all of the features in the data, we can simply loop over the columns of the independent-variable data and add every feature name to the Feature Columns.

It is important to note here that only continuous numeric variables can be handled by tf.feature_column.numeric_column; a categorical variable needs to be one-hot encoded (or otherwise converted) first, as in the sketch below.
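As a hedged illustration only (this is not part of the original code, and the vocabulary values are made up; I merely reuse the column name SoilType as a plausible candidate), a categorical feature column might be declared roughly like this:

# Sketch only: handling a categorical column (illustrative, not used in this article's code)
SoilTypeColumn=tf.feature_column.categorical_column_with_vocabulary_list(
    key='SoilType', # Hypothetically treating 'SoilType' as a categorical variable
    vocabulary_list=[1,2,3] # The set of possible category values (made-up example)
    )
SoilTypeOneHot=tf.feature_column.indicator_column(SoilTypeColumn) # One-hot encode the categorical column
# FeatureColumn.append(SoilTypeOneHot) # It could then be appended to FeatureColumn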

# The model in the estimator interface needs to use the "Feature columns" object as input data, so that the model knows what data to read.
FeatureColumn=[] # Define a new "Feature columns" object.
for key in AllX.keys():
    FeatureColumn.append(tf.feature_column.numeric_column(key=key)) # Import all independent-variable data (all must be continuous)

2.6 Model optimization method construction and model structure construction

The model optimization method is the optimizer of the model. Sometimes, however, the optimization method is relatively complex (for example, it introduces learning rate decay), which makes it inconvenient to configure while building the model. We therefore construct the optimization method first.

# Define model optimization methods
# Optimizer=OptMethod # Optimize the method with the method specified by OptMethod
Optimizer=lambda:tf.keras.optimizers.Adam(
    learning_rate=tf.compat.v1.train.exponential_decay(learning_rate=LearningRate, # Initial learning rate
                                                       global_step=tf.compat.v1.train.get_global_step(),
                                                       # Global step used to compute the decayed learning rate;
                                                       # get_global_step() automatically fetches the current number of executed steps
                                                       decay_steps=DecayStep, # Number of steps after which one decay of the learning rate is completed
                                                       decay_rate=DecayRate # Decay rate
                                                       ) # Adam with an exponentially decaying learning rate is chosen, which helps reduce the risk of overfitting
                                                         # This function returns the learning rate for each training step
    )

The above code contains two Optimizer= lines. The first is the straightforward approach of simply entering the name of the optimization method; the available names are 'Adagrad', 'Adam', 'Ftrl', 'RMSProp' and 'SGD', and the default is 'Adagrad'.

The second adds further configuration on top of the chosen optimization method; here it represents the Adam method with an exponentially decaying learning rate. In it, exponential_decay can be regarded as a function that computes the learning rate for each training step, returning the corresponding value each time. If that is hard to picture, look at the formula: its return value is learning_rate * decay_rate ^ (global_step / decay_steps).
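As a quick sanity check of that formula (my own illustration, using the parameter values configured above: LearningRate=0.01, DecayRate=0.96, DecayStep=200), the decayed learning rate can be computed by hand:

# Illustrative calculation only (not part of the original code)
Step=1000 # Suppose 1000 global steps have been executed
DecayedLR=LearningRate*DecayRate**(Step/DecayStep) # 0.01*0.96**(1000/200)
print(DecayedLR) # Approximately 0.00815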

We choose the second optimization method and therefore comment out the first one.

Subsequently, we define the structure of the model.

# Build deep learning models based on DNNRegressor
DNNModel=tf.estimator.DNNRegressor(feature_columns=FeatureColumn, # Specify the "Feature columns" object used by the model
                                   hidden_units=HiddenLayer, # Specify the number of hidden layers and the number of neurons in each layer
                                   optimizer=Optimizer, # Specify the optimization method used by the model
                                   activation_fn=eval(ActFun), # Specify the activation function
                                   dropout=Dropout, # Specify the value of Dropout
                                   label_dimension=1, # Dimension of the output data, i.e. the number of dependent variables
                                   model_dir=MyModelPath, # Specify where the model from each training session is stored
                                   # loss_reduction=eval(LossReduction), # Specify the reduction method for each batch's training error
                                   batch_norm=eval(BatchNorm) # Specify whether to use Batch Normalization
                                   )

The construction of the model is easier to follow with the help of the comments in the code. Among other things, I commented out loss_reduction because it kept raising an error, probably due to the TensorFlow version, so the default value is used instead. The last argument, batch_norm, determines whether Batch Normalization is applied; Batch Normalization keeps the distribution of each layer of the deep neural network consistent, which speeds up convergence and improves the robustness of the network.

You can refer to the official website for other parameters, so I won't repeat them here for now.

2.7 Model training

The model training part is, I think, actually somewhat harder to understand than the model construction. Let's look at the code first:

# Train models based on training data
DNNModel.train(input_fn=lambda:InputFun(TrainX,
                                        TrainY,
                                        True,
                                        TrainBatchSize
                                        ), # Call the InputFun function; InputFun returns a "tf.data.Dataset" object that the
                                           # train function can recognize and feed into the model; since InputFun returns BatchSize pieces of data at a time,
                                           # it needs to be executed repeatedly, hence the preceding lambda
               steps=TrainStep # Specify the number of training steps
               )

We can understand it this way: the train function has only one argument, input_fn, and what gets passed to this argument is itself a new function. This new function is the famous input function.

It looks like this:

# InputFun function, Input function used for training data and validation data
def InputFun(Features,Labels,Training,BatchSize):
    Datasets=tf.data.Dataset.from_tensor_slices((dict(Features),Labels)) # Load the data
    if Training:
        Datasets=Datasets.shuffle(1000).repeat() # For training data, shuffle and repeat are needed
    return Datasets.batch(BatchSize) # Output the processed data BatchSize samples at a time

So let's first look at the input function, that is, the code of the InputFun function. Its purpose is actually quite simple: in the words of the official documentation, it supplies data in the types the model supports; only after being processed by the input function can the data be recognized by DNNRegressor. Sounds impressive, but how does it actually work?

It's simple: it just converts the initial data into a specific format, namely a tuple with two elements:

The first is features, a dictionary. Each key of this dictionary is the name of a feature; for example, if we used plant characteristics to distinguish flower species, then "leaf length", "leaf thickness" and so on would be the feature names, i.e., the "keys" here. The value of this dictionary is an array of the values of that feature for all samples.

The second is label, the labels of all samples, i.e., the dependent variable.

In case that is not clear, let's take a simple example. Suppose we use the temperature and precipitation of two places to predict their crop yield: the temperatures are 10 °C and 20 °C, the precipitation is 15 mm and 25 mm, and the crop yields are 100 kg per hectare and 150 kg per hectare. The tuple then consists of two parts:

tuple=(features,label)
features={'Temperature':np.array([10,20]),'Precipitation':np.array([15,25])}
label=np.array([100,150])

How's that? Did you get it?

Having understood that, let's move on to the InputFun function. First, tf.data.Dataset.from_tensor_slices is used to load the input data and convert it into Dataset form. Then, if we are in the training state, the data is shuffled with .shuffle(1000), which is essentially reshuffling the deck to prevent the initial data from carrying a particular ordering. For example, suppose we were doing classification and the first 80% of the initial data belonged to the first class and the last 20% to the second class; without shuffling, the model would be trained only on first-class samples (i.e., it would only learn to recognize the first class), and on the remaining 20% used for testing it would predict the first class every time. That is why we shuffle. The 1000 here is the buffer_size parameter, which should be larger than the number of samples in your data. As for the inner workings of .shuffle(), I have not fully figured them out; you can look into it further if you are interested.

.repeat() duplicates the dataset. This is necessary because we need to train for several rounds over the full data (i.e., several Epochs), so the initial data is repeated.

Finally, the .batch() function outputs BatchSize pieces of data at a time, that is, one batch; BatchSize is the number of samples in each batch.

That is the InputFun function. Now look back at the train function: as you can see, InputFun outputs one batch (BatchSize samples) at a time, and during training we have to keep feeding in data batch after batch. This explains why InputFun is preceded by a lambda: InputFun has to deliver the processed data to train over multiple calls.
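To make this more concrete, here is a small standalone toy sketch of my own (not part of the original code; the made-up data reuses the temperature/precipitation example above) that shows what shuffle, repeat and batch do, and how the lambda turns a call with arguments into the zero-argument callable that train expects:

# Toy illustration of the tf.data pipeline used in InputFun (assumption: run on its own, outside the main script)
ToyFeatures={'Temperature':np.array([10,20,30,40]),'Precipitation':np.array([15,25,35,45])}
ToyLabels=np.array([100,150,200,250])
ToyDatasets=tf.data.Dataset.from_tensor_slices((dict(ToyFeatures),ToyLabels)) # 4 samples
ToyDatasets=ToyDatasets.shuffle(4).repeat().batch(2) # Shuffle, repeat indefinitely, then group into batches of 2
for FeatureBatch,LabelBatch in ToyDatasets.take(3): # Look at the first 3 batches only
    print(FeatureBatch['Temperature'].numpy(),LabelBatch.numpy()) # Each batch holds 2 samples
# The lambda in train simply wraps the call so that train receives a zero-argument callable:
# input_fn=lambda:InputFun(TrainX,TrainY,True,TrainBatchSize)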

2.8 Model Validation and Testing

With the above understood, the next step becomes much easier. We now need to carry out validation and testing. In fact, validation here also uses the test-set data; the reason I additionally perform testing is that I want to obtain the test-set predictions themselves, in order to get a more intuitive sense of the model's accuracy.

# InputFunPredict function, Input function used for test data
def InputFunPredict(Features,BatchSize):
    return tf.data.Dataset.from_tensor_slices(dict(Features)).batch(BatchSize) # Load the data and output it BatchSize samples at a time

# Validate the model and save the validation results
EvalResult=DNNModel.evaluate(input_fn=lambda:InputFun(TestX,
                                                      TestY,
                                                      False,
                                                      EvalBatchSize
                                                      )
                             )
# Print validation results
print('ev:{}'.format(EvalResult))

# Test model accuracy results based on test data
PredictValues=DNNModel.predict(input_fn=lambda:InputFunPredict(TestX,
                                                               PredictBatchSize
                                                               )
                               )

Note that the InputFun function used by .evaluate during validation is the same one used for the training set, except that validation does not need the shuffle (.shuffle(1000)) and repeat (.repeat()) operations; the input function used by .predict during testing, on the other hand, is a new one (InputFunPredict), because it only needs the independent variables and not the dependent variable.
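As a side note (my own exploratory sketch, not in the original code), each element yielded by .predict is a dictionary whose 'predictions' entry holds the predicted value, which is why the accuracy function in the next section reads k.get('predictions')[0]. If you want to peek at the format, something like the following works, but bear in mind that it consumes the first element of the generator, so it should not be mixed into the main script:

# Exploratory sketch only: inspect the format of one prediction (this consumes one element of the generator)
FirstPrediction=next(iter(PredictValues)) # e.g. {'predictions': array([123.4], dtype=float32)} (illustrative values)
print(FirstPrediction.get('predictions')[0]) # The predicted Yield of the first test sample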

2.9 Accuracy Assessment, Fit Plotting, and Saving of Model Parameters and Accuracy Results

There is not much to say about the accuracy assessment and the fitted plot. At the end, it is best to save the model parameters and accuracy results to an Excel file, which makes subsequent parameter tuning much easier. I will not go through this part line by line; you can follow the comments in the code.

# AccuracyVerification function for calculation and plotting of accuracy verification metrics
def AccuracyVerification(PredictLabels,TestLabels):
    value=0
    PredictValuesList=[]
    for k in PredictLabels:
        value=k.get('predictions')[0]
        PredictValuesList.append(value)
    TestLabels=TestLabels.values.tolist()
    TestYList=sum(TestLabels,[])
    # The above obtains the dependent variable of the test data and the dependent variable predicted by the model
    Pearsonr=stats.pearsonr(TestYList,PredictValuesList) # Calculate the Pearson correlation coefficient
    R2=metrics.r2_score(TestYList,PredictValuesList) # Calculate the R square
    RMSE=metrics.mean_squared_error(TestYList,PredictValuesList)**0.5 # Calculate the RMSE
    plt.cla()
    plt.plot(TestYList,PredictValuesList,'r*')
    plt.xlabel('Actual Values')
    plt.ylabel('Predicted Values')
    # Above is plotting the fitted image
    print('Pearson correlation coefficient is {0}, and RMSE is {1}.'.format(Pearsonr[0],RMSE))
    return (Pearsonr[0],R2,RMSE,PredictValuesList)

# WriteAccuracy function that saves the parameters involved in the model with the final accuracy result
def WriteAccuracy(*WriteVar):
    ExcelData=openpyxl.load_workbook(WriteVar[0])
    SheetName=ExcelData.get_sheet_names() # Get all Sheets
    WriteSheet=ExcelData.get_sheet_by_name(SheetName[0]) # Get the specified Sheet
    WriteSheet=ExcelData.active # Activate the specified Sheet
    MaxRowNum=WriteSheet.max_row # Get the first blank row of the specified Sheet
    for i in range(len(WriteVar)-1):
        exec("WriteSheet.cell(MaxRowNum+1,i+1).value=WriteVar[i+1]") # Execute the statement with exec to write the information
    ExcelData.save(WriteVar[0]) # Save the document

# Call AccuracyVerification function for accuracy verification metrics calculation and plotting
AccuracyResult=AccuracyVerification(PredictValues,TestY)
PearsonR,R2,RMSE,PredictY=AccuracyResult[0],AccuracyResult[1],AccuracyResult[2],AccuracyResult[3]

# Call the WriteAccuracy function to save the parameters involved in the model with the final accuracy results
WriteAccuracy(MyResultSavePath,PearsonR,R2,RMSE,TestSize,RandomSeed,OptMethod,LearningRate,DecayStep,
              DecayRate,','.join('%s' %i for i in HiddenLayer),ActFun,Dropout,LossReduction,
              BatchNorm,TrainBatchSize,TrainStep,EvalBatchSize,PredictBatchSize)

With that, the entire code breakdown is finished!

3 Complete Code

# -*- coding: utf-8 -*-
"""
Created on Tue Feb 23 16:13:21 2021

@author: Chutj
"""

# Load necessary libraries, packages, etc.
import os
os.environ['TF_CPP_MIN_LOG_LEVEL']='3'
import openpyxl
import numpy as np
import pandas as pd
import tensorflow as tf
import scipy.stats as stats
import matplotlib.pyplot as plt
from sklearn import metrics
from sklearn.model_selection import train_test_split

# ===============*** Function declaration area ***===============

# DeleteOldModel function, deletes the model saved in the last run
def DeleteOldModel(ModelPath):
    AllFileName=os.listdir(ModelPath) # Get all files and folders in the ModelPath path
    for i in AllFileName:
        NewPath=os.path.join(ModelPath,i) # Join the fetched file or folder name with the ModelPath path
        if os.path.isdir(NewPath): # If the combined new path is a folder
            DeleteOldModel(NewPath) # Recursively call the DeleteOldModel function
        else:
            os.remove(NewPath) # If it is not a folder but a file, delete it

# LoadData function to load all the data
def LoadData(DataPath):
    MyData=pd.read_csv(DataPath,names=['EVI0610','EVI0626','EVI0712','EVI0728','EVI0813','EVI0829',
                                       'EVI0914','EVI0930','EVI1016','Lrad06','Lrad07','Lrad08',
                                       'Lrad09','Lrad10','Prec06','Prec07','Prec08','Prec09',
                                       'Prec10','Pres06','Pres07','Pres08','Pres09','Pres10',
                                       'SIF161','SIF177','SIF193','SIF209','SIF225','SIF241',
                                       'SIF257','SIF273','SIF289','Shum06','Shum07','Shum08',
                                       'Shum09','Shum10','SoilType','Srad06','Srad07','Srad08',
                                       'Srad09','Srad10','Temp06','Temp07','Temp08','Temp09',
                                       'Temp10','Wind06','Wind07','Wind08','Wind09','Wind10',
                                       'Yield'],header=0) # Load the data specified by the DataPath path, the contents of the names are the names of the columns
    return MyData

# InputFun function, Input function used for training data and validation data
def InputFun(Features,Labels,Training,BatchSize):
    Datasets=tf.data.Dataset.from_tensor_slices((dict(Features),Labels)) # Load the data
    if Training:
        Datasets=Datasets.shuffle(1000).repeat() # For training data, shuffle and repeat are needed
    return Datasets.batch(BatchSize) # Output the processed data BatchSize samples at a time

# InputFunPredict function, Input function used for test data
def InputFunPredict(Features,BatchSize):
    return tf.data.Dataset.from_tensor_slices(dict(Features)).batch(BatchSize) # Load the data and output it BatchSize samples at a time

# AccuracyVerification function for calculation and plotting of accuracy verification metrics
def AccuracyVerification(PredictLabels,TestLabels):
    value=0
    PredictValuesList=[]
    for k in PredictLabels:
        value=k.get('predictions')[0]
        PredictValuesList.append(value)
    TestLabels=TestLabels.values.tolist()
    TestYList=sum(TestLabels,[])
    # The above obtains the dependent variable of the test data and the dependent variable predicted by the model
    Pearsonr=stats.pearsonr(TestYList,PredictValuesList) # Calculate the Pearson correlation coefficient
    R2=metrics.r2_score(TestYList,PredictValuesList) # Calculate the R square
    RMSE=metrics.mean_squared_error(TestYList,PredictValuesList)**0.5 # Calculate the RMSE
    plt.cla()
    plt.plot(TestYList,PredictValuesList,'r*')
    plt.xlabel('Actual Values')
    plt.ylabel('Predicted Values')
    # Above is plotting the fitted image
    print('Pearson correlation coefficient is {0}, and RMSE is {1}.'.format(Pearsonr[0],RMSE))
    return (Pearsonr[0],R2,RMSE,PredictValuesList)

# WriteAccuracy function that saves the parameters involved in the model with the final accuracy result
def WriteAccuracy(*WriteVar):
    ExcelData=openpyxl.load_workbook(WriteVar[0])
    SheetName=ExcelData.get_sheet_names() # Get all Sheets
    WriteSheet=ExcelData.get_sheet_by_name(SheetName[0]) # Get the specified Sheet
    WriteSheet=ExcelData.active # Activate the specified Sheet
    MaxRowNum=WriteSheet.max_row # Get the first blank row of the specified Sheet
    for i in range(len(WriteVar)-1):
        exec("WriteSheet.cell(MaxRowNum+1,i+1).value=WriteVar[i+1]") # Execute the statement with exec to write the information
    ExcelData.save(WriteVar[0]) # Save the document


# ===============*** Code execution starts here ***===============
# ++++++++++ --- Suggested starting point from here --- ++++++++++

# Define all types of variables in one place, which is very beneficial for code with a lot of variables such as machine learning
MyModelPath="G:/CropYield/03_DL/02_DNNModle" # Determine where the model from each training session is saved
MyDataPath="G:/CropYield/03_DL/00_Data/" # Determine the location of input data
MyResultSavePath="G:/CropYield/03_DL/03_OtherResult/" # Determine where model accuracy results (RMSE, etc.) are saved with model parameters
TestSize=0.2 # Determine what percentage of the data is in the test set
RandomSeed=np.random.randint(low=24,high=25) # Determine the random seed used to split the training and test sets
OptMethod='Adam' # Determine the optimization method used for the model
LearningRate=0.01 # Determine the learning rate
DecayStep=200 # Determine the number of steps after which the learning rate decays
DecayRate=0.96 # Determine the decay rate of the learning rate
HiddenLayer=[64,128] # Determine the number of hidden layers and the number of neurons in each layer
ActFun='tf.nn.relu' # Determine the activation function
Dropout=0.3 # Determine the value of Dropout
LossReduction='tf.compat.v1.ReductionV2.SUM_OVER_BATCH_SIZE' # Specify the reduction method for each batch's training error
BatchNorm='False' # Determine whether to use Batch Normalization
TrainBatchSize=110 # Determine the size of a Batch of training data
TrainStep=3000 # Determine the number of Steps for training data
EvalBatchSize=1 # Determine the size of a Batch of validation data
PredictBatchSize=1 # Determine the size of a Batch of predictive data (i.e. test set)

# Call the DeleteOldModel function to delete the model saved from the last run
DeleteOldModel(MyModelPath)

# Initial data processing
AllXY=LoadData(MyDataPath) # Call the LoadData function to get the data
Label={"Yield":("Yield")} # Extract the dependent variable from the full set of data
AllX,AllY=AllXY,((Label)) # Separate independent variables from dependent variables

# Delineate data training and test sets
TrainX,TestX,TrainY,TestY=train_test_split(AllX,
                                           AllY,
                                           test_size=TestSize, # Specify the percentage of test sets in the data
                                           random_state=RandomSeed # Specify the random number seed that divides the training and test sets
                                           )

# The model in the estimator interface needs to use the "Feature columns" object as input data, so that the model knows what data to read.
FeatureColumn=[] # Define a new "Feature columns" object.
for key in AllX.keys():
    FeatureColumn.append(tf.feature_column.numeric_column(key=key)) # Import all independent-variable data (all must be continuous)

# Define model optimization methods
# Optimizer=OptMethod # Optimize the method with the method specified by OptMethod
Optimizer=lambda:tf.keras.optimizers.Adam(
    learning_rate=tf.compat.v1.train.exponential_decay(learning_rate=LearningRate, # Initial learning rate
                                                       global_step=tf.compat.v1.train.get_global_step(),
                                                       # Global step used to compute the decayed learning rate;
                                                       # get_global_step() automatically fetches the current number of executed steps
                                                       decay_steps=DecayStep, # Number of steps after which one decay of the learning rate is completed
                                                       decay_rate=DecayRate # Decay rate
                                                       ) # Adam with an exponentially decaying learning rate is chosen, which helps reduce the risk of overfitting
                                                         # This function returns the learning rate for each training step
    )


# Build deep learning models based on DNNRegressor
DNNModel=tf.estimator.DNNRegressor(feature_columns=FeatureColumn, # Specify the "Feature columns" object used by the model
                                   hidden_units=HiddenLayer, # Specify the number of hidden layers and the number of neurons in each layer
                                   optimizer=Optimizer, # Specify the optimization method used by the model
                                   activation_fn=eval(ActFun), # Specify the activation function
                                   dropout=Dropout, # Specify the value of Dropout
                                   label_dimension=1, # Dimension of the output data, i.e. the number of dependent variables
                                   model_dir=MyModelPath, # Specify where the model from each training session is stored
                                   # loss_reduction=eval(LossReduction), # Specify the reduction method for each batch's training error
                                   batch_norm=eval(BatchNorm) # Specify whether to use Batch Normalization
                                   )

# tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.INFO) # Display INFO level log messages on the screen

# Train models based on training data
DNNModel.train(input_fn=lambda:InputFun(TrainX,
                                        TrainY,
                                        True,
                                        TrainBatchSize
                                        ), # Call the InputFun function; InputFun returns a "tf.data.Dataset" object that the
                                           # train function can recognize and feed into the model; since InputFun returns BatchSize pieces of data at a time,
                                           # it needs to be executed repeatedly, hence the preceding lambda
               steps=TrainStep # Specify the number of training steps
               )

# Validate the model and save the validation results
EvalResult=DNNModel.evaluate(input_fn=lambda:InputFun(TestX,
                                                      TestY,
                                                      False,
                                                      EvalBatchSize
                                                      )
                             )
# Print validation results
print('ev:{}'.format(EvalResult))

# Test model accuracy results based on test data
PredictValues=DNNModel.predict(input_fn=lambda:InputFunPredict(TestX,
                                                               PredictBatchSize
                                                               )
                               )

# Call AccuracyVerification function for accuracy verification metrics calculation and plotting
AccuracyResult=AccuracyVerification(PredictValues,TestY)
PearsonR,R2,RMSE,PredictY=AccuracyResult[0],AccuracyResult[1],AccuracyResult[2],AccuracyResult[3]

# Call the WriteAccuracy function to save the parameters involved in the model with the final accuracy results
WriteAccuracy(MyResultSavePath,PearsonR,R2,RMSE,TestSize,RandomSeed,OptMethod,LearningRate,DecayStep,
              DecayRate,','.join('%s' %i for i in HiddenLayer),ActFun,Dropout,LossReduction,
              BatchNorm,TrainBatchSize,TrainStep,EvalBatchSize,PredictBatchSize)

The above covers the details of implementing deep learning neural network regression in Python with the TensorFlow interface. For more on Python and TensorFlow deep learning neural network regression, please check out my other related articles!