
About the usage of Sequential() in TensorFlow

Sequential() usage

The Sequential() method is a container that describes the network structure of a neural network. The layers are listed, from the input layer to the output layer, in the argument passed to Sequential():

model = tf.keras.models.Sequential([network structure])  # describe each layer of the network
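For instance, a minimal sketch of what such a container can look like (the specific layers and sizes below are illustrative assumptions, not part of the original):

import tensorflow as tf

model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(),                       # flatten the input features into a 1-D vector
    tf.keras.layers.Dense(128, activation="relu"),   # hidden fully connected layer (illustrative size)
    tf.keras.layers.Dense(10, activation="softmax")  # output layer with 10 classes (illustrative)
])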

Examples of network structures

Flatten layer

tf.keras.layers.Flatten()  # The Flatten layer reshapes the tensor: it flattens the input features into a one-dimensional array and has no trainable parameters.
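As a quick illustration of this behavior, a hedged sketch (the 28x28 input shape is an arbitrary assumption):

import tensorflow as tf

flatten = tf.keras.layers.Flatten()
x = tf.ones([1, 28, 28])   # a batch of one 28x28 feature map (illustrative shape)
print(flatten(x).shape)    # (1, 784): the features are flattened into a one-dimensional array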

Fully connected (Dense) layer

tf.keras.layers.Dense(number of neurons,
                      activation = "activation function",
                      kernel_regularizer = regularization method)

Where:

  • activation can be relu, softmax, sigmoid, tanh, etc.
  • kernel_regularizer can be tf.keras.regularizers.l1() or tf.keras.regularizers.l2()
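For example, a minimal sketch of a Dense layer (the 128 neurons, relu activation, and 784-feature input are arbitrary choices for illustration):

import tensorflow as tf

dense = tf.keras.layers.Dense(128,
                              activation="relu",
                              kernel_regularizer=tf.keras.regularizers.l2())
print(dense(tf.ones([1, 784])).shape)  # (1, 128): 784 input features mapped to 128 neurons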

Convolutional layer

tf.keras.layers.Conv2D(filters = number of convolution kernels,
                       kernel_size = convolution kernel size,
                       strides = convolution stride,
                       padding = "valid" or "same")
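As an illustration, a hedged sketch of a Conv2D layer (the 16 kernels, 3x3 kernel size, and input shape are arbitrary assumptions):

import tensorflow as tf

conv = tf.keras.layers.Conv2D(filters=16,
                              kernel_size=(3, 3),
                              strides=1,
                              padding="same")
x = tf.ones([1, 28, 28, 1])  # one 28x28 single-channel image (illustrative shape)
print(conv(x).shape)         # (1, 28, 28, 16): "same" padding keeps the spatial size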

LSTM layer

tf.keras.layers.LSTM()
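A minimal sketch of an LSTM layer (the 64 units and the sequence shape are illustrative assumptions):

import tensorflow as tf

lstm = tf.keras.layers.LSTM(64)
x = tf.ones([1, 10, 8])  # one sequence of 10 time steps with 8 features each (illustrative)
print(lstm(x).shape)     # (1, 64): by default the layer returns only its last hidden state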

Sample code

# Step 1: import the modules
import tensorflow as tf        # import TensorFlow
from sklearn import datasets   # import datasets from sklearn
import numpy as np             # import the scientific computing module

# Step 2: prepare the training data
x_train = datasets.load_iris().data    # inputs of the iris dataset
y_train = datasets.load_iris().target  # labels of the iris dataset

np.random.seed(120)         # set the random seed so the shuffle is reproducible
np.random.shuffle(x_train)  # shuffle the inputs x_train with np.random.shuffle()
np.random.seed(120)         # reset the same seed so the labels are shuffled in the same order
np.random.shuffle(y_train)  # shuffle the labels y_train with np.random.shuffle()
tf.random.set_seed(120)     # set TensorFlow's random seed to 120

# Step 3: Sequential()
model = tf.keras.models.Sequential([  # build the neural network with Sequential()
    tf.keras.layers.Dense(3, activation="softmax",
                          kernel_regularizer=tf.keras.regularizers.l2())  # fully connected layer: 3 neurons, softmax activation, l2 regularization
])

# Step 4: compile()
model.compile(  # configure the training method with compile()
    optimizer=tf.keras.optimizers.SGD(learning_rate=0.1),  # SGD optimizer with a learning rate of 0.1
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False),  # configure the loss function
    metrics=['sparse_categorical_accuracy']  # evaluation metric for the network
)

# Step 5: fit()
model.fit(  # run the training process with fit()
    x_train, y_train,      # training inputs and labels
    batch_size=32,         # each batch contains 32 samples
    epochs=500,            # number of training epochs
    validation_split=0.2,  # use 20% of the training data for validation
    validation_freq=20     # validate every 20 epochs
)

# Step 6: summary()
model.summary()  # print the network structure and count the parameters
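After training, the model can be used for inference. A small hedged sketch, assuming the code above has already been run so that model, x_train, and y_train exist:

probs = model.predict(x_train[:5])  # class probabilities for the first five samples
print(np.argmax(probs, axis=1))     # predicted class indices
print(y_train[:5])                  # compare with the true labels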

Summary

The above is based on my personal experience; I hope it gives you a useful reference, and I hope you will continue to support us.