Ascending-descending order
This method uses the default sort order in the subquery and the reversed order in the main query. The principle is as follows:
The code is as follows:
DECLARE @temp TABLE (
    PK /* PK Type */ NOT NULL PRIMARY KEY
)

INSERT INTO @temp
SELECT TOP (@PageSize) PK FROM (
    SELECT TOP (@StartRow + @PageSize)
        PK,
        SortColumn /* If the sort column differs from the PK, SortColumn must
                      be fetched as well; otherwise the PK alone is enough */
    FROM Table
    ORDER BY SortColumn /* default order – typically ASC */
) AS t
ORDER BY SortColumn /* reversed default order – typically DESC */

SELECT ... FROM Table JOIN @temp temp ON Table.PK = temp.PK
ORDER BY SortColumn /* default order */
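The double-TOP trick can be illustrated outside SQL. This sketch (my illustration, not the author's code; a Python list stands in for the table, with the list values standing in for SortColumn) mimics the three steps: take the first StartRow + PageSize rows ascending, keep the last PageSize of them by re-sorting descending, then restore the ascending order.

```python
# Simulate the ascending-descending paging trick on a plain list.
def page_asc_desc(rows, start_row, page_size):
    # Step 1: TOP (@StartRow + @PageSize) in the default (ascending) order
    top_outer = sorted(rows)[:start_row + page_size]
    # Step 2: TOP (@PageSize) in the reversed (descending) order,
    # which keeps exactly the last page_size rows of step 1
    top_inner = sorted(top_outer, reverse=True)[:page_size]
    # Step 3: the final SELECT re-sorts in the default order
    return sorted(top_inner)

rows = list(range(1, 101))          # 100 rows with sort keys 1..100
print(page_asc_desc(rows, 20, 10))  # skips 20 rows, returns rows 21..30
```

The key point is that step 2 never scans past row StartRow + PageSize, which is what makes the method cheap for pages near the start of the result set.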
RowCount
The basic logic of this method relies on SQL Server's SET ROWCOUNT statement, which makes it possible to skip the unnecessary rows and fetch only the required ones:
The code is as follows:
DECLARE @Sort /* the type of the sorting column */
SET ROWCOUNT @StartRow
SELECT @Sort = SortColumn FROM Table ORDER BY SortColumn
SET ROWCOUNT @PageSize
SELECT ... FROM Table WHERE SortColumn >= @Sort ORDER BY SortColumn
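The ROWCOUNT logic can be modeled the same way (again my own sketch, assuming the sort column has unique values; with duplicates the >= filter could pull in rows from the previous page). Here start_row is the 1-based number of the first row of the page, matching the SQL, where @Sort receives the value of row @StartRow.

```python
def page_rowcount(rows, start_row, page_size):
    ordered = sorted(rows)
    # SET ROWCOUNT @StartRow; SELECT @Sort = SortColumn ...:
    # the variable ends up holding the sort value of row start_row (1-based)
    sort_key = ordered[start_row - 1]
    # SET ROWCOUNT @PageSize; SELECT ... WHERE SortColumn >= @Sort:
    # the seek starts at that value and stops after page_size rows
    return [r for r in ordered if r >= sort_key][:page_size]

rows = list(range(1, 101))
print(page_rowcount(rows, 21, 10))  # rows 21..30
```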
Subquery
There are two other methods I considered, from different sources. The first is the well-known triple-query (or self-query) method. A more thorough treatment is described in the following article:
SQL Server server-side pagination
Although it requires a subscription, you can download a zip file containing the definition of the subquery stored procedure. Listing 4, the SELECT_WITH_PAGING stored procedure file, contains complete, generic dynamic SQL. In this article I use similar generic logic in all the other stored procedures as well. The principle is the same throughout; I made some reductions to the original code, since the record count is not needed in my tests.
The code is as follows:
SELECT ... FROM Table WHERE PK IN
    (SELECT TOP (@PageSize) PK FROM Table WHERE PK NOT IN
        (SELECT TOP (@StartRow) PK FROM Table ORDER BY SortColumn)
    ORDER BY SortColumn)
ORDER BY SortColumn
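The triple-query logic can be sketched in the same simulated style: the innermost TOP picks the keys to skip, NOT IN excludes them, and the middle TOP takes the page.

```python
def page_subquery(rows, start_row, page_size):
    ordered = sorted(rows)
    # innermost: SELECT TOP (@StartRow) PK ... ORDER BY SortColumn
    skipped = set(ordered[:start_row])
    # middle: TOP (@PageSize) of the keys NOT IN the skipped set
    page_keys = [r for r in ordered if r not in skipped][:page_size]
    # outer: SELECT ... WHERE PK IN (...) ORDER BY SortColumn
    return sorted(page_keys)

rows = list(range(1, 101))
print(page_subquery(rows, 20, 10))  # rows 21..30
```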
Cursor
While browsing a Google discussion group, I found the last method; you can click here to view the original post. This method uses a server-side dynamic cursor. Many people avoid cursors because, being sequential and contrary to the set-based relational model, they are inefficient. Looking back, though, paging is in fact an ordered task: whichever method you use, you have to get to the start row. The previous methods select all the rows before the start row plus the required rows, and then discard the preceding ones. A dynamic cursor, with its FETCH RELATIVE option, accomplishes this jump in one magical step. The basic logic is as follows:
The code is as follows:
DECLARE @PK /* PK Type */
DECLARE @tblPK TABLE (
PK /* PK Type */ NOT NULL PRIMARY KEY
)
DECLARE PagingCursor CURSOR DYNAMIC READ_ONLY FOR
SELECT PK FROM Table ORDER BY SortColumn
OPEN PagingCursor
FETCH RELATIVE @StartRow FROM PagingCursor INTO @PK
WHILE @PageSize > 0 AND @@FETCH_STATUS = 0
BEGIN
INSERT @tblPK(PK) VALUES(@PK)
FETCH NEXT FROM PagingCursor INTO @PK
SET @PageSize = @PageSize - 1
END
CLOSE PagingCursor
DEALLOCATE PagingCursor
SELECT ... FROM Table JOIN @tblPK temp ON Table.PK = temp.PK
ORDER BY SortColumn
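In procedural terms, FETCH RELATIVE jumps straight to the start row and the WHILE loop then collects PageSize keys. A minimal analogue (my illustration) using a Python iterator, where islice plays the role of the relative fetch plus the fetch loop:

```python
from itertools import islice

def page_cursor(rows, start_row, page_size):
    # DECLARE ... CURSOR DYNAMIC ... FOR SELECT PK FROM Table ORDER BY SortColumn
    cursor = iter(sorted(rows))
    # FETCH RELATIVE @StartRow positions on row start_row (1-based);
    # the WHILE loop then collects page_size keys from there on
    return list(islice(cursor, start_row - 1, start_row - 1 + page_size))

rows = list(range(1, 101))
print(page_cursor(rows, 21, 10))  # rows 21..30
```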
Generalization of complex queries
I pointed out earlier that all the stored procedures are generalized with dynamic SQL, so in theory they can work with any kind of complex query. Below is an example of a complex query against the Northwind database.
The code is as follows:
SELECT Customers.ContactName AS Customer,
    Customers.Address + ', ' + Customers.City + ', ' + Customers.Country
        AS Address,
    SUM([Order Details].UnitPrice * [Order Details].Quantity) AS
        [Total money spent]
FROM Customers
    INNER JOIN Orders ON Customers.CustomerID = Orders.CustomerID
    INNER JOIN [Order Details] ON Orders.OrderID = [Order Details].OrderID
WHERE Customers.Country <> 'USA' AND Customers.Country <> 'Mexico'
GROUP BY Customers.ContactName, Customers.Address, Customers.City, Customers.Country
HAVING SUM([Order Details].UnitPrice * [Order Details].Quantity) > 1000
ORDER BY Customer DESC, Address DESC
The paging stored procedure call that returns the second page looks like this:
EXEC ProcedureName
/* Tables */
    'Customers
    INNER JOIN Orders ON Customers.CustomerID = Orders.CustomerID
    INNER JOIN [Order Details] ON Orders.OrderID = [Order Details].OrderID',
/* PK */
    'Customers.CustomerID',
/* ORDER BY */
    'Customers.ContactName DESC, Customers.Address DESC',
/* PageNumber */
    2,
/* Page Size */
    10,
/* Fields */
    'Customers.ContactName AS Customer,
    Customers.Address + '', '' + Customers.City + '', '' + Customers.Country
    AS Address,
    SUM([Order Details].UnitPrice * [Order Details].Quantity) AS [Total money spent]',
/* Filter */
    'Customers.Country <> ''USA'' AND Customers.Country <> ''Mexico''',
/* Group By */
    'Customers.ContactName, Customers.Address, Customers.City, Customers.Country
    HAVING SUM([Order Details].UnitPrice * [Order Details].Quantity) > 1000'
It is worth noting that the original query uses column aliases in the ORDER BY clause, but you had better avoid this in the paging stored procedure, because skipping rows that way is time-consuming. There are many possible implementations, but the principle is not to fetch all the fields at the start: only the primary key column is needed (equivalent to the sort column in the RowCount method), which speeds up the task considerably. Only for the requested page are all the required fields fetched. Moreover, the final query must not use field aliases, and the row-skipping queries must use the index columns.
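The keys-first principle described above can be sketched as a two-step fetch (my illustration, with dicts standing in for rows): the skip phase touches only the indexed key column, and the wide columns are read only for the one page of keys that survives.

```python
# Two-step paging: skip using only the (indexed) key column, then fetch
# the full rows for just one page of keys.
def page_keys_first(table, start_row, page_size, key=lambda row: row["pk"]):
    # Phase 1: narrow scan over keys only -- cheap even when rows are wide
    keys = sorted(key(r) for r in table)[start_row:start_row + page_size]
    wanted = set(keys)
    # Phase 2: fetch the full rows for exactly one page of keys
    return sorted((r for r in table if key(r) in wanted), key=key)

table = [{"pk": i, "payload": "x" * 100} for i in range(1, 101)]
page = page_keys_first(table, 20, 10)
print([r["pk"] for r in page])  # pks 21..30
```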
There is another problem with the RowCount stored procedure: to stay generic, it allows only one column in the ORDER BY clause. This is also a limitation of the ascending-descending and cursor methods: although they can sort on several columns, the primary key must consist of a single column. More dynamic SQL could presumably work around this, but in my opinion it is not worth it. Such cases are possible but do not occur very often; usually you can handle them with dedicated paging stored procedures built on the principles above.
Performance Testing
In my tests I used all four methods, and I would be very interested to hear if you have a better one. In any case, I needed to compare the methods and evaluate their performance. My first idea was to write a test application with a paginated DataGrid and measure the page results; of course, that would not reflect the real response time of the stored procedures, so a console application seemed more suitable. I also included a web application, not for performance testing but as an example of DataGrid custom paging and the stored procedures working together. Both applications can be found in the Paging Test Solution.
In the tests I used an automatically generated large table with about 500,000 rows inserted. If you do not have such a table to experiment with, you can click here to download a table design and a stored procedure script for generating the data. Instead of an identity primary key column, I used a uniqueidentifier to identify the records. If you use the script mentioned above, you might consider adding an identity column after generating the table; the data would then be sorted numerically by the primary key, meaning you intend to retrieve the current page with a paging stored procedure sorted on the primary key.
For the performance test, I called a given stored procedure repeatedly in a loop and computed the average response time. Because of caching, and to model the actual situation more accurately (times measured by fetching the same page repeatedly are not suitable for evaluation), the page number requested on each call to the same stored procedure should be random. Of course, we must assume a fixed set of page numbers, 10-20 of them, so each page number may be requested many times, but in random order.
One thing is easy to notice: the response time is determined by the distance of the page to be retrieved from the start of the result set, since the farther from the start, the more records must be skipped. That is why I did not include the first 20 pages in my random sequence; instead I used pages at powers of 2, and the loop size is the number of different pages required × 1000, so each page is fetched nearly 1000 times (there will certainly be some deviation due to randomness).
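The request schedule described above might be generated like this. This is my reconstruction of the described setup, not the author's actual harness; the exact set of page numbers (which powers of 2 to include) is an assumption.

```python
import random

def build_request_schedule(max_power, repeats=1000, seed=42):
    # Page numbers at powers of 2: 2, 4, 8, ..., 2**max_power
    # (assumed page set; the article excludes the first pages from the mix)
    pages = [2 ** n for n in range(1, max_power + 1)]
    # Loop size = number of different pages * repeats, so each page
    # is requested `repeats` times overall
    schedule = pages * repeats
    # Shuffling the order defeats page-level caching bias between calls
    random.Random(seed).shuffle(schedule)
    return schedule

sched = build_request_schedule(5)
print(len(sched), sorted(set(sched)))  # 5000 requests over pages 2,4,8,16,32
```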