
PyTorch Zero-Basics Getting Started: Logistic Regression

Learning Summary

(1) Training proceeds much like the previous (linear) model: a sigmoid is added on top of the linear model, and the loss function is changed from MSE to binary cross-entropy (BCE) (other loss functions would also work). In addition, the labels y_data change from numerical values to the classes 0 and 1 (this is a binary classification problem); note that x_data and y_data are both given in matrix form.

I. The sigmoid function

The logistic function is one kind of sigmoid (S-shaped) function; there are others, but because it is so widely used, PyTorch simply calls the logistic function sigmoid. Several sigmoid-shaped functions are shown below:

[Figure: examples of sigmoid-shaped functions]
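For reference, the logistic function itself is $\sigma(x) = \frac{1}{1 + e^{-x}}$; it squashes any real input into (0, 1), which is what lets the output be read as a probability. A quick check with PyTorch's built-in torch.sigmoid (a minimal sketch):

import torch

# torch.sigmoid computes the logistic function 1 / (1 + exp(-x))
x = torch.tensor([-2.0, 0.0, 2.0])
print(torch.sigmoid(x))  # tensor([0.1192, 0.5000, 0.8808])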

II. Differences from the Linear Model

The difference between the logistic regression unit and the linear unit is shown below:

[Figure: comparison of the linear unit and the logistic regression unit]
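In code, the only structural difference is that the logistic unit passes the linear output through a sigmoid; a minimal sketch of the two forward passes:

import torch
import torch.nn as nn

linear = nn.Linear(1, 1)
x = torch.Tensor([[2.0]])

y_linear = linear(x)                   # linear unit: unbounded real output
y_logistic = torch.sigmoid(linear(x))  # logistic unit: output squashed into (0, 1)
print(y_linear, y_logistic)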

The sigmoid function has no parameters, so it needs no initialization (it can simply be called).
The loss function is also changed from MSE to binary cross-entropy (BCE), which pushes the predicted distribution as close as possible to the true classification.

[Figure: the BCE loss formula]
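Written out, the BCE loss for a single sample with label $y \in \{0, 1\}$ and predicted probability $\hat{y} \in (0, 1)$ is

$\text{loss} = -\left[ y \log \hat{y} + (1 - y) \log(1 - \hat{y}) \right]$

Only one of the two terms is active for a given sample: for $y = 1$ the loss reduces to $-\log \hat{y}$, and for $y = 0$ to $-\log(1 - \hat{y})$.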

As shown in the table on the right of the figure below, the closer $\hat{y}$ gets to $y$, the smaller the BCE loss.

[Figure: table of BCE loss values for predictions $\hat{y}$ near and far from the label $y$]
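A quick numeric check of this, as a minimal sketch using PyTorch's built-in torch.nn.functional.binary_cross_entropy:

import torch
import torch.nn.functional as F

y = torch.tensor([1.0])
# A prediction close to the label gives a small loss, a distant one a large loss
print(F.binary_cross_entropy(torch.tensor([0.9]), y))  # -log(0.9) ≈ 0.105
print(F.binary_cross_entropy(torch.tensor([0.1]), y))  # -log(0.1) ≈ 2.303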

III. Logistic Regression (Classification): PyTorch Implementation

# -*- coding: utf-8 -*-
"""
Created on Mon Oct 18 08:35:00 2021

@author: 86493
"""
import torch
import torch.nn as nn
import matplotlib.pyplot as plt
import torch.nn.functional as F
import numpy as np

# Prepare data
x_data = torch.Tensor([[1.0], [2.0], [3.0]])
y_data = torch.Tensor([[0], [0], [1]])


losslst = []

class LogisticRegressionModel(nn.Module):
    def __init__(self):
        super(LogisticRegressionModel, self).__init__()
        self.linear = nn.Linear(1, 1)
        
    def forward(self, x):
        # The only difference from the linear model's network is this line:
        # the linear output is passed through a sigmoid
        y_predict = torch.sigmoid(self.linear(x))
        return y_predict
    
model = LogisticRegressionModel()

# Use binary cross entropy as the loss function;
# reduction='sum' replaces the deprecated size_average=False
criterion = nn.BCELoss(reduction='sum')
optimizer = torch.optim.SGD(model.parameters(),
                            lr=0.01)

# Training
for epoch in range(1000):
    y_predict = model(x_data)
    loss = criterion(y_predict, y_data)
    # .item() extracts the Python number from the scalar loss tensor
    print(epoch, loss.item())
    losslst.append(loss.item())
    # Backpropagation after gradient clearing
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Plot the loss curve
plt.plot(range(1000), losslst)
plt.ylabel('Loss')
plt.xlabel('epoch')
plt.show()


# Test
# Hours of study per week, from 0 to 10, sampled at 200 points
x = np.linspace(0, 10, 200)
x_t = torch.Tensor(x).view((200, 1))
y_t = model(x_t)
y = y_t.data.numpy()
plt.plot(x, y)
# Draw a red horizontal line at probability of pass = 0.5
plt.plot([0, 10], [0.5, 0.5], c='r')
plt.xlabel('Hours')
plt.ylabel('Probability of Pass')
plt.grid()
plt.show()

[Figure: probability of pass vs. hours of study, with the red 0.5 threshold line]

It can be seen that the dividing line between passing and failing lies at Hours = 2.5.
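This boundary can also be read straight off the learned parameters: since $\sigma(0) = 0.5$, the model predicts probability 0.5 exactly where $wx + b = 0$, i.e. at $x = -b/w$. A minimal sketch, reusing the trained model from above:

# sigmoid(0) = 0.5, so the decision boundary lies where w*x + b = 0
w = model.linear.weight.item()
b = model.linear.bias.item()
print('Decision boundary at Hours =', -b / w)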

[Figure: training loss curve]


This concludes this article on getting started with logistic regression in PyTorch from zero. For more on logistic regression in PyTorch, please search my earlier articles or continue browsing the related articles below, and I hope you will continue to support me in the future!