
Detailed introduction to using tensorflow in Python

TensorFlow is the second-generation deep learning framework open-sourced by Google in November 2015; it is an improved version of the first-generation framework DistBelief.

TensorFlow supports Python and C/C++, can run computations on CPU or GPU, and supports packaging and deployment using virtualenv or Docker.

Define variables

To use TensorFlow, we first need to import it:

import tensorflow as tf

We then create a symbolic variable for the input:

x = tf.placeholder(tf.float32, [None, 784])

Here x is not a specific value; it is just a placeholder. Later, when we ask TensorFlow to run a computation, we will feed a value in as the input.
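As a tiny standalone illustration (separate from the model above, with made-up values), a placeholder only receives its value when the graph is run, through feed_dict:

a = tf.placeholder(tf.float32, [None, 2]) # no value yet, just a description of the input
doubled = a * 2 # an op that depends on the placeholder

with tf.Session() as sess:
  print(sess.run(doubled, feed_dict = {a: [[1.0, 2.0], [3.0, 4.0]]})) # the value is fed here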

The model needs weights and biases, which we define using Variable. A Variable can be modified throughout the computation.

w = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))

When creating a Variable we also give it an initial value (here, zeros). We can then define the model:

y = tf.nn.softmax(tf.matmul(x, w) + b)

With that, we have implemented our model.

Train the model

We use cross-entropy as our cost function

H_{y'}(y) = -\sum_i y'_i \log(y_i)

where y is the probability distribution we predict and y' is the true probability distribution.
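As a quick worked example of the formula (values chosen only for illustration), computed with numpy:

import numpy as np

y_true = np.array([0., 0., 1.]) # y': the one-hot true distribution
y_pred = np.array([0.1, 0.2, 0.7]) # y: the predicted probabilities

cross_entropy = -np.sum(y_true * np.log(y_pred)) # -log(0.7), about 0.357
print(cross_entropy)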

To implement cross-entropy, we need a new placeholder for the correct answers:

y_ = tf.placeholder(tf.float32, [None, 10])
cross_entropy = -tf.reduce_sum(y_ * tf.log(y))

We then optimize the model with gradient descent:

learning_rate = 0.01 # example step size; choose as appropriate
train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy)

The last thing we need to do before using the model is to initialize the variables and start a session:

init = tf.initialize_all_variables()
with tf.Session() as sess:
  sess.run(init)

Now we can train this model with 1,000 iterations. ☺️

for i in xrange(1000):
  batch_xs, batch_ys = mnist.train.next_batch(100) # mnist: the MNIST dataset object (see below)
  sess.run(train_step, feed_dict = {x: batch_xs, y_: batch_ys})

Training on small batches of randomly selected data like this is called stochastic training.
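The mnist object used in the loop above is the MNIST dataset. Assuming the tutorial helper bundled with TensorFlow at the time, it would typically be loaded like this:

from tensorflow.examples.tutorials.mnist import input_data

# Downloads the data on first use and returns train/validation/test splits
mnist = input_data.read_data_sets('MNIST_data/', one_hot = True)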

Model evaluation

First, we compare the label predicted by the model y with the true label in y_:

correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))

This will return a boolean list, such as [True, False, True, True]

accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print(sess.run(accuracy, feed_dict = {x: mnist.test.images, y_: mnist.test.labels})) # evaluate on the MNIST test set

Finally, the accuracy on the test set is obtained from the calculations above.

Get started

TensorFlow is not purely a neural network framework; it is a framework for numerical computation based on data flow graphs.

TensorFlow represents a computation as a directed graph. The nodes of the graph are called ops (operations) and represent the processing of data; the edges of the graph describe how data flows between nodes.

Computation in the framework amounts to processing a flow of tensors, which is where the name TensorFlow comes from.

TensorFlow uses tensors to represent data. A tensor is a high-dimensional array, represented in Python as a numpy ndarray.

TensorFlow uses a Session to execute graphs and a Variable to maintain state. A constant is an op that only produces output and is often used as a data source.
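As a small sketch of how a Variable maintains state across runs (the classic counter example; the names here are illustrative):

import tensorflow as tf

state = tf.Variable(0, name = 'counter') # a Variable holds state between runs
one = tf.constant(1) # a constant only produces output
update = tf.assign(state, tf.add(state, one)) # write state + 1 back into state

with tf.Session() as sess:
  sess.run(tf.initialize_all_variables())
  print(sess.run(state)) # 0
  for _ in range(3):
    sess.run(update)
    print(sess.run(state)) # 1, 2, 3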

Let's build a simple graph that takes two constants as input and multiplies them as matrices:

from tensorflow import Session, device, constant, matmul

'''Build a simple graph with only two constants as input and then a matrix multiplication:'''

# If you do not use the with Session() statement, you need to call session.close() manually.
# with device specifies the device to perform the calculation:
# "/cpu:0": the CPU of the machine.
# "/gpu:0": the first GPU of the machine, if any.
# "/gpu:1": the second GPU of the machine, and so on.
with Session() as session: # Create the context for executing the graph
  with device('/cpu:0'): # Specify the computing device
    mat1 = constant([[3, 3]]) # Create source nodes
    mat2 = constant([[2], [2]])
    product = matmul(mat1, mat2) # Connect the source nodes, building the graph
    result = session.run(product) # Perform the calculation
    print(result)

That is all the content of this article. I hope it is helpful for your study, and I hope you will continue to support us.