This article describes the basic use of TensorFlow in Python, shared for your reference as follows:
1. Basic concepts
Tensor represents a tensor, a data structure of multidimensional arrays. Flow represents flow: the process of transforming between tensors through computation. TensorFlow expresses a program as a computational graph, in which data flows between nodes; after being processed by one node, it flows on to the next.
A computational graph is a directed graph with the following components: nodes, each of which represents an operation; and edges, which represent data transfer and control dependencies between nodes. A solid line represents a data-transfer relationship between two nodes, and a dashed line represents a control dependency between them.
A tensor is the representation of all data and can be thought of as a multidimensional array. A zero-order tensor is a scalar, which represents a single number; a first-order tensor is a one-dimensional array, i.e., a vector; and an n-order tensor is an n-dimensional array. A tensor does not hold specific numbers; it holds the computational process.
The following example adds the values of node1 and node2 to get node3.
import tensorflow as tf
node1 = tf.constant(3.0, tf.float32, name='node1')  # create floating-point constant nodes
node2 = tf.constant(4.0, tf.float32, name='node2')
node3 = tf.add(node1, node2)  # node3 performs a summing operation on node1 and node2
ses = tf.Session()
print(node3)           # prints the tensor structure: Tensor("Add_3:0", shape=(), dtype=float32)
print(ses.run(node3))  # run node3 through the session, adding node1 and node2; output: 7.0
ses.close()            # close the session when it is no longer needed
Printing node3 directly does not output a concrete value but a tensor structure, because creating the computational graph only builds the computational model; only executing run() in a session produces the concrete result.
In Tensor("Add_3:0", shape=(), dtype=float32), Add_3 is the node name (Add_3 means it is the fourth Add operation in the graph), and the 0 after the colon means this is the node's first output. shape gives the dimension information of the tensor, with () representing a scalar. dtype gives the type of the tensor; each tensor's type is unique, and an error is reported on a mismatch. The default type for values without a decimal point is int32, and the default for values with a decimal point is float32. The following example shows a more complex tensor type:
tensor1 = tf.constant([[[1,1,1],[1,2,1]],
                       [[2,1,1],[2,2,1]],
                       [[3,1,1],[3,2,1]],
                       [[4,1,1],[4,2,1]]], name='tensor1')
print(tensor1)
ss = tf.Session()
print(ss.run(tensor1)[3,0,0])  # access a specific element of tensor1
# Output:
# Tensor("tensor1:0", shape=(4, 2, 3), dtype=int32)
# 4
Here shape=(4,2,3) means that tensor1's outermost layer contains 4 arrays, each array contains 2 subarrays, and each subarray consists of 3 numbers. Specific elements can be accessed with multidimensional-array indexing: [3,0,0] is the first element of the first subarray in the fourth array, namely 4.
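Since a tensor's value is just a multidimensional array (and run() in fact returns a NumPy array), the same shape, order, and indexing rules can be checked with plain NumPy. This is only an illustration of the array semantics, not TensorFlow itself:

```python
import numpy as np

# The same 4 x 2 x 3 data as tensor1 above
arr = np.array([[[1,1,1],[1,2,1]],
                [[2,1,1],[2,2,1]],
                [[3,1,1],[3,2,1]],
                [[4,1,1],[4,2,1]]])

print(arr.shape)     # (4, 2, 3): 4 arrays, each with 2 subarrays of 3 numbers
print(arr.ndim)      # 3: a third-order tensor is a 3-dimensional array
print(arr[3, 0, 0])  # 4: first element of the first subarray of the fourth array
```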
The computational graph also contains nodes that represent operations, such as addition, subtraction, multiplication, division, and assigning initial values. Operations have their own attributes, which must be determined when the graph is created, and there are sequential and other dependencies between operations, which are made visible by the edges of the graph.
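The idea of nodes, edges, and dependencies can be sketched as a toy graph in plain Python. This is only a conceptual illustration, not TensorFlow's implementation: evaluating one node recursively triggers the evaluation of every node it depends on, which is also why running a single node later (such as the update operation below) pulls values through the whole network.

```python
class Node:
    """A toy graph node: an operation plus edges to its input nodes."""
    def __init__(self, op, *inputs, name=''):
        self.op = op          # the operation this node performs
        self.inputs = inputs  # incoming edges: the nodes this one depends on
        self.name = name

    def run(self):
        # Evaluating a node first evaluates all of its dependencies
        return self.op(*(n.run() for n in self.inputs))

# Constants are nodes whose operation takes no inputs
node1 = Node(lambda: 3.0, name='node1')
node2 = Node(lambda: 4.0, name='node2')
node3 = Node(lambda a, b: a + b, node1, node2, name='add')

print(node3.run())  # 7.0: running node3 pulls values through the whole graph
```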
2. Operations
Sessions
A Session owns and manages all of TensorFlow's resources. Run computations through the Session to obtain results, and remember to close the session to reclaim resources when the computation is done. The flow of using a Session is as follows:
# Define the computational graph
tensor1 = tf.constant([1, 2, 3])
# Create a session
ss = tf.Session()
# Use the session to perform the computation
print(ss.run(tensor1))
# Close the session
ss.close()
A Session can also be used through a Python context manager, which automatically closes the session and frees resources when the context is exited.
tensor1 = tf.constant([1, 2, 3])
with tf.Session() as ss:  # context manager
    print(ss.run(tensor1))
It is also possible to get the value of the tensor using eval() by specifying a default session:
tensor1 = tf.constant([1, 2, 3])
ss = tf.Session()
with ss.as_default():  # specify the default session
    print(tensor1.eval())
In interactive environments, a session created via InteractiveSession() is automatically set as the default:
tensor1 = tf.constant([1, 2, 3])
ss = tf.InteractiveSession()  # automatically registered as the default session
print(tensor1.eval())
ss.close()
Variables and constants
TensorFlow defines constants through the constant function, which can be given an initial value and a name.
a = tf.constant(10, name='int_a')
Variables, by contrast, must not only be defined but also initialized before use, and the initialization operation itself must be both defined and executed:
node1 = tf.Variable(3.0, name='node1')  # define variables
node2 = tf.Variable(4.0, name='node2')
res = tf.add(node1, node2, name='res')
ss = tf.Session()
init = tf.global_variables_initializer()  # define the op that initializes all variables
ss.run(init)  # execute the initialization
print(ss.run(res))
ss.close()
TensorFlow variables generally do not need to be assigned manually, because the system adjusts them automatically during training. If you do not want the model to assign them automatically, specify the attribute trainable=False when defining them and assign them manually via the assign function.
var1 = tf.Variable(0, name='var')
one = tf.constant(1)
var2 = tf.add(var1, one)        # var1 plus 1 gives var2
update = tf.assign(var1, var2)  # define an update operation that assigns var2 to var1
init = tf.global_variables_initializer()
ss = tf.Session()
ss.run(init)
for _ in range(10):
    ss.run(update)  # perform the update operation
    print(ss.run(var1))
ss.close()
# Output: 1 2 3 4 5 6 7 8 9 10
When ss.run(update) is executed, update depends on var1 and var2, and obtaining var2 requires performing the add operation, so running update alone triggers the entire computational network.
Placeholders
Sometimes when defining a variable you do not know its specific value and only supply the value at run time, yet variable definitions in TensorFlow require initial values. In that case, use a placeholder to define it and pass in the concrete value during computation. A simple usage example:
node1 = tf.placeholder(tf.float32, name='node1')  # define placeholders, specifying type, shape, and name
node2 = tf.placeholder(tf.float32, name='node2')
m = tf.multiply(node1, node2, name='multinode')
ss = tf.Session()
res = ss.run(m, feed_dict={node1: 1.2, node2: 3.4})  # assign values to placeholders at run time via feed_dict
print(res)
ss.close()
It is also possible to run multiple operations with a single feed:
node1 = tf.placeholder(tf.float32, [3], name='node1')  # the second argument gives the shape: a 3-element vector
node2 = tf.placeholder(tf.float32, [3], name='node2')
m = tf.multiply(node1, node2, name='multinode')
s = tf.subtract(node1, node2, name='subnode')
ss = tf.Session()
# Run the two operations m and s together; two results are returned
resm, ress = ss.run([m, s], feed_dict={node1: [1.0, 2.0, 4.0], node2: [3.0, 5.0, 6.0]})
print(resm)  # Output: [ 3. 10. 24.]
ss.close()
3. TensorBoard
TensorBoard is TensorFlow's visualization tool: it presents TensorFlow's running state visually from the log files the program writes. It is used as follows:
node1 = tf.Variable(3.0, name='node1')
node2 = tf.Variable(4.0, name='node2')
res = tf.add(node1, node2, name='res')
ss = tf.Session()
init = tf.global_variables_initializer()
ss.run(init)
print(ss.run(res))
ss.close()

# Clear the default graph and its nodes
tf.reset_default_graph()
# Define the default path where logs are stored
logdir = 'D:/Temp/TensorLog'
# Generate a writer that writes the current computational graph to the log
writer = tf.summary.FileWriter(logdir, tf.get_default_graph())
writer.close()
TensorBoard is installed with Anaconda. First open the Anaconda Prompt and change to the directory where the log files are stored, then enter tensorboard --logdir=D:\Temp\TensorLog to set the log path. Once it starts, open localhost:6006 in a browser to see TensorBoard; the default port 6006 can be changed with the --port option.
To display images in TensorBoard, the formatted image data is shown through tf.summary.image(). Its input image_input is four-dimensional: the first dimension indicates how many rows of data are fed at a time, with -1 meaning undetermined, and 28, 28, 1 indicate that each image is 28×28 with a single color channel.
Histogram data can be displayed via tf.summary.histogram(), and scalar data via tf.summary.scalar(). After all summaries have been defined, a merge operation can be defined via tf.summary.merge_all() to aggregate them.
Define a writer for log-file writing after creating the session; during training, run the merge operation for each batch and write the result to the log.
The following builds on the earlier example of solving the MNIST handwriting-recognition problem with a multi-layer neural network, displaying some of its data through TensorBoard:
# TensorBoard usage
# Define where logs are kept
log_dir = 'D:/Temp/MachineLearning/TensorLog'
# Display images
image_input = tf.reshape(x, [-1, 28, 28, 1])
tf.summary.image('input', image_input, 10)  # maximum number of images displayed at one time: 10
# Show a histogram
tf.summary.histogram('Y', Y3)
# Show scalars: loss and accuracy
tf.summary.scalar('loss', loss_function)
tf.summary.scalar('accuracy', accuracy)
# Define the merged summary operation
merge_op = tf.summary.merge_all()
ss = tf.Session()
ss.run(tf.global_variables_initializer())
# Define the writer
writer = tf.summary.FileWriter(log_dir, ss.graph)
for epoch in range(train_epochs):
    for batch in range(batch_num):
        # Read data in batches for training
        xs, ys = mnist.train.next_batch(batch_size)
        ss.run(optimizer, feed_dict={x: xs, y: ys})
        # Execute the summary operation and write the result to the log file
        summary_str = ss.run(merge_op, feed_dict={x: xs, y: ys})
        writer.add_summary(summary_str, epoch)
    loss, acc = ss.run([loss_function, accuracy],
                       feed_dict={x: mnist.validation.images, y: mnist.validation.labels})
    print('Round %2d of training: loss is: %9f, accuracy: %.4f' % (epoch + 1, loss, acc))
The results of the run show, respectively, the input pictures, the accuracy and loss scalar plots, and the Y1 histogram with its corresponding distribution plot:
I hope this article is helpful to you in your Python programming.