
Summary of practical experience

DataSet and DataReader

When designing an application, consider the level of functionality the application requires in order to decide whether to use a DataSet or a DataReader.

Use a DataSet if the application needs to do any of the following:

1) Navigate among multiple discrete tables of results.

2) Manipulate data from multiple data sources (for example, mixed data from multiple databases, an XML file, and a spreadsheet).

3) Exchange data between layers or use XML Web services. Unlike DataReader, DataSet can be passed to remote clients.

4) Reuse the same set of records, achieving a performance improvement by caching them (for example, when sorting, searching, or filtering the data).

5) Perform a large amount of processing per record. Extended processing of each row returned by a DataReader ties up the connection serving the DataReader for longer than necessary, which hurts performance.

6) Manipulate data using XML operations, such as Extensible Stylesheet Language Transformations (XSLT) or XPath queries.

Use DataReader in your application for the following situations:

1) No need to cache data.

2) The result set to be processed is too large to fit in memory.

3) You need quick, one-time access to data in a forward-only, read-only manner.

Note: When populating a DataSet, the DataAdapter uses a DataReader. Therefore, the performance gained by using a DataReader instead of a DataSet is the memory the DataSet would occupy, plus the cycles required to fill the DataSet. Generally speaking, this performance gain is only nominal, so design decisions should be based on the functionality required.

Benefits of using strongly typed DataSet

Another benefit of the DataSet is that it can be inherited to create a strongly typed DataSet. The benefits of a strongly typed DataSet include design-time type checking and Microsoft Visual Studio .NET statement completion for typed DataSet statements. If the schema or relational structure of the DataSet is known, you can create a strongly typed DataSet that exposes rows and columns as properties of objects rather than as items in collections. For example, instead of exposing the Name column of a row in the Customers table, you expose the Name property of a Customer object. A typed DataSet derives from the DataSet class, so no DataSet functionality is sacrificed; that is, typed DataSets can still be accessed remotely and can serve as the data source for data-binding controls such as the DataGrid. If the schema is not known in advance, you can still benefit from the functionality of a generic DataSet, but you forgo the additional features of a strongly typed DataSet.
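
As a rough sketch of the difference, assuming a generated typed DataSet class named CustomersDataSet with a Customers table exposing a Name column (CustomersDataSet and GetFilledDataSet are hypothetical names):

//C#
// Typed access: table and column names are checked at compile time.
CustomersDataSet typedDs = new CustomersDataSet();
string typedName = typedDs.Customers[0].Name;

// Untyped access (assuming GetFilledDataSet returns a DataSet containing a
// filled Customers table): name mistakes surface only at run time.
DataSet untypedDs = GetFilledDataSet();
string untypedName = (string)untypedDs.Tables["Customers"].Rows[0]["Name"];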

Handle null references in strongly typed DataSets

When using a strongly typed DataSet, you can annotate the DataSet's XML Schema Definition language (XSD) schema to ensure that the typed DataSet handles null references correctly. The nullValue annotation lets you substitute a specified value for DBNull, keep the null reference, or throw an exception. Which option you choose depends on the context of the application; by default, an exception is thrown when a null reference is encountered.

Refresh data in DataSet

If you want to refresh the values in the DataSet with updated values from the server, use the DataAdapter.Fill method. If a primary key is defined on the DataTable, new rows are matched by primary key, and the server values are applied to changed existing rows. The RowState of refreshed rows is set to Unchanged even if they were modified before the refresh. Note that if no primary key is defined for the DataTable, Fill may add new rows whose primary key values duplicate those of existing rows.

If you want to refresh a table with current values from the server while keeping any changes made to rows in the table, you must first fill a new DataTable, and then merge that DataTable into the DataSet with a preserveChanges value of true.
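
A minimal sketch of this pattern, assuming a SqlDataAdapter named da and a DataSet named ds:

//C#
// Fill a fresh table from the server, then merge it in while keeping local edits.
DataTable fresh = new DataTable("Customers");
da.Fill(fresh);
ds.Merge(fresh, true, MissingSchemaAction.Ignore);  // preserveChanges = true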

Search for data in DataSet

When querying a DataSet for rows that match particular criteria, you can use index-based lookups to improve search performance. Assigning a PrimaryKey value to a DataTable creates an index, and creating a DataView for a DataTable also creates an index. Here are a few tips for using index-based searches.

1) If the query is on the columns that make up the DataTable's PrimaryKey, use DataTable.Rows.Find instead of DataTable.Select.

2) For queries involving non-primary-key columns, you can use a DataView to improve performance across multiple queries of the data. Applying a sort order to the DataView builds an index used for searching. The DataView exposes Find and FindRows methods for querying data in the underlying DataTable (see the sketch after this list).

3) If you do not need a sorted view of the table, you can still take advantage of index-based lookups by creating a DataView for the DataTable. Note that this pays off only if multiple query operations are performed on the data; if you perform only a single query, the processing required to create the index outweighs the benefit of using it.
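
A minimal sketch of both lookups, with hypothetical table, column, and key values:

//C#
// Primary-key lookup via the index created by setting PrimaryKey.
DataTable orders = ds.Tables["Orders"];
orders.PrimaryKey = new DataColumn[] { orders.Columns["OrderID"] };
DataRow hit = orders.Rows.Find(10248);

// Non-key lookups: a sorted DataView builds its index once; Find/FindRows use it.
DataView byCustomer = new DataView(orders, "", "CustomerID", DataViewRowState.CurrentRows);
DataRowView[] hits = byCustomer.FindRows("VINET");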

DataView construction

If a DataView is created and its Sort, RowFilter, or RowStateFilter properties are modified afterward, the DataView rebuilds its index over the data in the DataTable. When creating a DataView object, use the DataView constructor that takes the Sort, RowFilter, and RowStateFilter values as arguments (along with the underlying DataTable). The result is an index that is built once. Creating an "empty" DataView and then setting the Sort, RowFilter, or RowStateFilter properties causes the index to be built at least twice.
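
For example (table and column names are hypothetical):

//C#
// Index built once, inside the constructor.
DataView view = new DataView(orders, "CustomerID = 'VINET'", "OrderDate DESC",
                             DataViewRowState.CurrentRows);
// Avoid: new DataView(orders) followed by separate Sort/RowFilter assignments,
// which rebuilds the index for each property that is set.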

Pagination

You can explicitly control what data is returned from the data source and how much of it is cached locally in the DataSet. There is no single answer to paginating query results, but here are some tips to consider when designing your application.

1) Avoid using the Fill overload that takes startRecord and maxRecords values. When a DataSet is populated this way, only the number of records specified by the maxRecords parameter (starting from the record identified by the startRecord parameter) is used to populate the DataSet, but the full result set is returned by the query regardless. This causes unnecessary processing to read past the "unwanted" records, and it uses up server resources returning records that will never be used.

2) One technique for returning only a page of records at a time is to build a SQL statement that combines a WHERE clause, an ORDER BY clause, and the TOP predicate. This technique depends on having a way to uniquely identify each row. When browsing to the next page of records, modify the WHERE clause to include all records whose unique identifier is greater than the last unique identifier on the current page. When browsing to the previous page, modify the WHERE clause to return all records whose unique identifier is less than the first unique identifier on the current page. Both queries return only a TOP page of records. When browsing to the previous page, the results must be sorted in descending order; this effectively returns the query's last page (you may need to reorder the results before displaying them). A sketch of the next-page query appears below.
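
A minimal sketch of the next-page query, with hypothetical names, page size, and key type:

//C#
string sql = "SELECT TOP 10 * FROM Customers " +
             "WHERE CustomerId > @lastId ORDER BY CustomerId ASC";
SqlCommand cmd = new SqlCommand(sql, myConnection);
cmd.Parameters.Add("@lastId", SqlDbType.Int).Value = lastIdOnCurrentPage;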

3) Another technique for returning only a page of records at a time is to build a SQL statement that combines the TOP predicate with an embedded SELECT statement. This technique does not depend on having a way to uniquely identify each row. The first step is to multiply the desired page number by the page size, and pass the result to the TOP predicate of a SQL query sorted in ascending order. That query is then embedded in another query that selects a TOP page-size worth of rows from the embedded result, sorted in descending order. Essentially, the last page of the embedded query is returned. For example, to return the third page of a query result (with a page size of 10), you would write the command as follows:

SELECT TOP 10 * FROM
(SELECT TOP 30 * FROM Customers ORDER BY Id ASC) AS Table1
ORDER BY Id DESC

Note: The page of results returned by this query is in descending order. If needed, reorder it before display.

4) If the data does not change frequently, you can maintain a local cache of records in the DataSet to improve performance. For example, you might store 10 pages of useful data in a local DataSet and query new data from the data source only when the user browses beyond the first or last page of the cache.

Fill DataSet with Schema

When populating a DataSet with data, the DataAdapter.Fill method uses the DataSet's existing schema and populates it with the data returned by the SelectCommand. If no table name in the DataSet matches the name of the table to be filled, the Fill method creates a table. By default, Fill defines only the columns and column types.

You can override the default behavior of Fill by setting the MissingSchemaAction property of the DataAdapter. For example, to have Fill create the table schema including primary key information, unique constraints, column properties, nullability, maximum column lengths, read-only columns, and auto-increment columns, specify MissingSchemaAction.AddWithKey. Alternatively, before calling Fill, you can call FillSchema to ensure that the schema is in place when the DataSet is populated.
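
A minimal sketch of both options, assuming a SqlDataAdapter named da and a DataSet named ds:

//C#
// Option 1: have Fill add key and constraint information as it creates the schema.
da.MissingSchemaAction = MissingSchemaAction.AddWithKey;
da.Fill(ds, "Customers");

// Option 2: retrieve the schema first (this costs an extra round trip).
da.FillSchema(ds, SchemaType.Source, "Customers");
da.Fill(ds, "Customers");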

A call to FillSchema generates an extra round trip to the server to retrieve the additional schema information. For best performance, specify the DataSet's schema before calling Fill, or set the DataAdapter's MissingSchemaAction.

Best practices for using CommandBuilder

Assuming the SelectCommand executes a single-table SELECT, the CommandBuilder automatically generates the DataAdapter's InsertCommand, UpdateCommand, and DeleteCommand properties based on its SelectCommand property. Here are some tips for getting the best performance out of the CommandBuilder.

1) Limit the use of the CommandBuilder to design time or ad-hoc scenarios. The processing required to generate the DataAdapter command properties hurts performance. If you know the contents of your INSERT/UPDATE/DELETE statements in advance, set them explicitly. A better design approach is to create stored procedures for your INSERT/UPDATE/DELETE commands and explicitly configure the DataAdapter command properties to use them.

2) The CommandBuilder uses the DataAdapter's SelectCommand property to determine the values of the other command properties. If the DataAdapter's SelectCommand itself changes, be sure to call RefreshSchema to update the command properties (see the sketch after this list).

3) The CommandBuilder generates a command only for a DataAdapter command property that is null (the command properties are null by default). The CommandBuilder does not overwrite a command property that has been set explicitly. If you want the CommandBuilder to generate a command for a property that was previously set, set that property back to null.
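
A minimal ad-hoc sketch, assuming an open SqlConnection named myConnection:

//C#
SqlDataAdapter da = new SqlDataAdapter("SELECT * FROM Customers", myConnection);
SqlCommandBuilder cb = new SqlCommandBuilder(da);

// If the SelectCommand changes later, regenerate the derived commands:
da.SelectCommand.CommandText = "SELECT CustomerId, CompanyName FROM Customers";
cb.RefreshSchema();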

Batch SQL statements

Many databases allow you to combine, or batch, multiple commands into a single command for execution. For example, SQL Server lets you separate commands with a semicolon (;). Combining multiple commands into one reduces the number of round trips to the server and can improve application performance. For example, you can store all pending deletions locally in the application and then issue them in a single batched command call against the data source.

While this does improve performance, it can add complexity when you manage the data updates in the DataSet. To keep things simple, you may want to create one DataAdapter for each DataTable in the DataSet.

Fill DataSet with multiple tables

If multiple tables are retrieved and the DataSet is populated with a batched SQL statement, the first table is named using the table name passed to the Fill method. Subsequent tables are named by appending a number to that name, starting at 1 and incrementing by 1. For example, if you run the following code:

'Visual Basic
Dim da As SqlDataAdapter = New SqlDataAdapter("SELECT * FROM Customers; SELECT * FROM Orders;", myConnection)
Dim ds As DataSet = New DataSet()
da.Fill(ds, "Customers")

//C#
SqlDataAdapter da = new SqlDataAdapter("SELECT * FROM Customers; SELECT * FROM Orders;", myConnection);
DataSet ds = new DataSet();
da.Fill(ds, "Customers");

Data from the Customers table is placed in a DataTable named "Customers". The data from the Orders table is placed in a DataTable named "Customers1".

After the DataSet is filled, you could simply change the TableName property of the "Customers1" table to "Orders". However, any subsequent fill would repopulate the "Customers" table, ignore the "Orders" table, and create another "Customers1" table. To remedy this, create a DataTableMapping that maps "Customers1" to "Orders", and add table mappings for any other subsequent tables. For example:

'Visual Basic
Dim da As SqlDataAdapter = New SqlDataAdapter("SELECT * FROM Customers; SELECT * FROM Orders;", myConnection)
da.TableMappings.Add("Customers1", "Orders")
Dim ds As DataSet = New DataSet()
da.Fill(ds, "Customers")

//C#
SqlDataAdapter da = new SqlDataAdapter("SELECT * FROM Customers; SELECT * FROM Orders;", myConnection);
da.TableMappings.Add("Customers1", "Orders");
DataSet ds = new DataSet();
da.Fill(ds, "Customers");

Using DataReader

Here are some tips for getting the best performance from a DataReader, along with answers to some common questions about using a DataReader.

1) The DataReader must be closed before you access any output parameters of the associated Command.

2) Always close the DataReader when you have finished reading the data. If the Connection is used only to return the DataReader, close it immediately after closing the DataReader.

An alternative to explicitly closing the Connection is to pass CommandBehavior.CloseConnection to the ExecuteReader method, which ensures that the associated connection is closed when the DataReader is closed. This is especially useful when you return a DataReader from a method and have no control over when the DataReader or its connection is closed.
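
A minimal sketch of that pattern, assuming a SqlCommand named cmd:

//C#
SqlDataReader dr = cmd.ExecuteReader(CommandBehavior.CloseConnection);
try
{
    while (dr.Read()) { /* process the row */ }
}
finally
{
    dr.Close();  // also closes the associated connection
}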

3) The DataReader cannot be accessed remotely across tiers. It is designed for connected data access.

4) When accessing column data, use the typed accessors, such as GetString and GetInt32. This avoids the processing required to cast the Object returned by GetValue to a specific type (see the sketch after this list).

5) Only one DataReader can be open at a time against a given connection. In ADO, if you opened a single connection and requested two recordsets that used forward-only, read-only cursors, ADO implicitly opened a second, unpooled connection to the data store for the lifetime of the cursor and then implicitly closed it. ADO.NET does very little "under the covers"; if you want two DataReaders open at the same time against the same data store, you must explicitly create two connections, one for each DataReader. This gives you more control over how pooled connections are used.

6) By default, the DataReader loads the entire row into memory on every Read. This allows random access to the columns of the current row. If this random access is not needed, pass CommandBehavior.SequentialAccess to the ExecuteReader call to improve performance. This changes the DataReader's default behavior to load data only as it is requested. Note that SequentialAccess requires you to access the returned columns in order; once you have read past a column, you can no longer read its value.

7) If you have finished reading from the DataReader but a large number of results are still pending, call the Command's Cancel method before calling the DataReader's Close method. Closing the DataReader causes the pending results to be retrieved and the stream to be drained before the cursor is closed; calling the Command's Cancel method discards the results on the server, so the DataReader does not have to read them when it is closed. If you are returning output parameters from the Command, calling Cancel discards them as well. If you need to read any output parameters, do not call the Command's Cancel method; just call the DataReader's Close method.
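
A minimal sketch of item 4, with hypothetical column ordinals and types:

//C#
while (dr.Read())
{
    // Typed accessors avoid casting the Object returned by GetValue.
    int orderId = dr.GetInt32(0);
    string customer = dr.GetString(1);
}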

Binary large object (BLOB)

When retrieving a binary large object (BLOB) with a DataReader, pass CommandBehavior.SequentialAccess to the ExecuteReader method call. Because the default behavior of the DataReader is to load the entire row into memory on every Read, and because BLOB values can be very large, a single BLOB could otherwise consume a large amount of memory. SequentialAccess makes the DataReader load only the data that is requested. You can then use GetBytes or GetChars to control how much data is loaded at a time.

Remember that with SequentialAccess you must access the fields returned by the DataReader in order. That is, if the query returns three columns and the third is a BLOB, and you want the data from the first two columns, you must read the first column's value, then the second, before reading the BLOB data. Data is now returned sequentially, and once the DataReader has read past it, the data is no longer available.
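
A minimal sketch of streaming a BLOB in chunks (the column layout, buffer size, and command are hypothetical):

//C#
SqlDataReader dr = cmd.ExecuteReader(CommandBehavior.SequentialAccess);
byte[] buffer = new byte[8192];
while (dr.Read())
{
    int id = dr.GetInt32(0);   // read leading columns first, in order
    long offset = 0;
    long bytesRead;
    while ((bytesRead = dr.GetBytes(1, offset, buffer, 0, buffer.Length)) > 0)
    {
        offset += bytesRead;
        // write buffer[0..bytesRead) to a file or stream here
    }
}
dr.Close();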

Use commands

ADO.NET provides several methods of command execution, as well as options for optimizing it. Here are some tips on choosing the best execution method and improving the performance of command execution.

Best Practices for Using OleDbCommand

Command execution is as standardized as possible across the .NET Framework data providers, but differences remain. Here are some tips for fine-tuning command execution with the .NET Framework Data Provider for OLE DB.

1) Call stored procedures using ODBC CALL syntax with CommandType.Text. Using CommandType.StoredProcedure just generates the ODBC CALL syntax under the covers.

2) Be sure to set the OleDbParameter type, size (if applicable), and precision and scale (if the parameter type is numeric or decimal). Note that if you do not supply the parameter information explicitly, OleDbCommand re-creates the OLE DB parameter accessor on every execution of the command.
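
A minimal sketch, with a hypothetical stored procedure and parameter:

//C#
// ODBC CALL syntax with fully specified parameter metadata.
OleDbCommand cmd = new OleDbCommand("{ call CustOrdersOrders(?) }", oleDbConnection);
OleDbParameter p = cmd.Parameters.Add("@CustomerID", OleDbType.Char, 5);
p.Value = "ALFKI";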

Best Practices for Using SqlCommand

A quick tip for executing stored procedures with SqlCommand: if you are calling a stored procedure, set the CommandType property of the SqlCommand to CommandType.StoredProcedure. By explicitly identifying the command as a stored procedure, there is no need to parse the command before executing it.
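
For example (procedure and parameter names are hypothetical):

//C#
SqlCommand cmd = new SqlCommand("CustOrderHist", myConnection);
cmd.CommandType = CommandType.StoredProcedure;
cmd.Parameters.Add("@CustomerID", SqlDbType.NChar, 5).Value = "ALFKI";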

Using the Prepare method

The Command.Prepare method can improve the performance of parameterized commands that are executed repeatedly against the data source. Prepare instructs the data source to optimize the specified command for multiple calls. Using Prepare effectively requires a thorough understanding of how the data source responds to the Prepare call. For some data sources, such as SQL Server 2000, commands are implicitly optimized and calling Prepare is unnecessary; for others, such as SQL Server 7.0, Prepare is quite effective.
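
A minimal sketch, with hypothetical table and column names:

//C#
SqlCommand cmd = new SqlCommand(
    "INSERT INTO Dept (DeptName) VALUES (@name)", myConnection);
cmd.Parameters.Add("@name", SqlDbType.NVarChar, 50);
cmd.Prepare();  // ask the data source to optimize for repeated execution

foreach (string name in deptNames)
{
    cmd.Parameters["@name"].Value = name;
    cmd.ExecuteNonQuery();
}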

Explicitly specify schema and metadata

Many ADO.NET objects infer metadata information whenever the user does not specify it. Some examples:

1) The DataAdapter.Fill method, which creates the tables and columns in a DataSet if they do not already exist.

2) The CommandBuilder, which generates the DataAdapter command properties for single-table SELECT commands.

3) CommandBuilder.DeriveParameters, which populates the Parameters collection of a Command object.

However, each time these features are used there is a performance cost. Use them primarily at design time and in ad-hoc applications. Wherever possible, specify the schema and metadata explicitly: define the tables and columns in the DataSet, define the Command properties of the DataAdapter, and define the Parameter information for the Command.

ExecuteScalar and ExecuteNonQuery

If you want to return a single value, such as the result of Count(*), Sum(Price), or Avg(Quantity), use ExecuteScalar. ExecuteScalar returns the value of the first column of the first row, delivering the result set as a scalar value. Because it does this in a single step, ExecuteScalar both simplifies the code and improves performance; doing the same thing with a DataReader requires two steps (ExecuteReader plus reading the value).

When using SQL statements that do not return rows, such as data-modification commands (INSERT, UPDATE, or DELETE) or commands that return only output parameters or return values, use ExecuteNonQuery. This avoids the unnecessary work of creating an empty DataReader.
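
A minimal sketch of both methods, with hypothetical table and column names:

//C#
SqlCommand countCmd = new SqlCommand("SELECT COUNT(*) FROM Customers", myConnection);
int customers = (int)countCmd.ExecuteScalar();   // single value, single step

SqlCommand updCmd = new SqlCommand(
    "UPDATE Customers SET CompanyName = @name WHERE CustomerId = @id", myConnection);
updCmd.Parameters.Add("@name", SqlDbType.NVarChar, 40).Value = "New Name";
updCmd.Parameters.Add("@id", SqlDbType.Int).Value = 42;
int rowsAffected = updCmd.ExecuteNonQuery();     // no DataReader created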

Test Null

If a column in a database table allows nulls, you cannot test a parameter value for "equality" with null. Instead, write the WHERE clause to test whether both the column and the parameter are null. The following SQL statement returns rows where the LastName column equals the value of the @LastName parameter, or where both the LastName column and the @LastName parameter are null.

SELECT * FROM Customers
WHERE ((LastName = @LastName) OR (LastName IS NULL AND @LastName IS NULL))

Pass Null as parameter value

When sending a null value as a parameter value in a command to the database, you cannot use null (Nothing in Visual Basic .NET). Use DBNull.Value instead. For example:

'Visual Basic
Dim param As SqlParameter = New SqlParameter("@Name", SqlDbType.NVarChar, 20)
param.Value = DBNull.Value

//C#
SqlParameter param = new SqlParameter("@Name", SqlDbType.NVarChar, 20);
param.Value = DBNull.Value;

Execute transactions

The transaction model changed in ADO.NET. In ADO, once StartTransaction was called, any update after the call was considered part of the transaction. In ADO.NET, however, calling Connection.BeginTransaction returns a Transaction object that must be associated with the Transaction property of each Command. This design makes it possible to run multiple root transactions on a single connection. If a Command's Transaction property is not set to a Transaction started on the associated Connection, the Command fails and an exception is thrown.

An upcoming release of the .NET Framework will let you manually enlist in an existing distributed transaction. This is ideal for an object-pooling scenario in which a pooled object opens a connection once but participates in multiple separate transactions. This capability is not available in the .NET Framework 1.0 release.

Use connection

High-performance applications keep connections to the data source to the minimum needed and take advantage of performance-enhancing techniques such as connection pooling. The following topics provide tips for getting better performance from connections to a data source.

Connection pool

The .NET Framework data providers for SQL Server, OLE DB, and ODBC pool connections implicitly. You can control connection pooling behavior by specifying different attribute values in the connection string.
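
A sketch of pooling-related keywords in a SqlConnection connection string (the values shown are illustrative only):

//C#
string connString =
    "Data Source=localhost;Integrated Security=SSPI;Initial Catalog=Northwind;" +
    "Pooling=true;Min Pool Size=5;Max Pool Size=100;Connection Lifetime=120;";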

Optimize connections with DataAdapter

The DataAdapter's Fill and Update methods automatically open the connection specified by the relevant command property if it is closed. If Fill or Update opens the connection, it also closes it when the operation completes. For best performance, keep connections to the database open only while they are needed, and reduce the number of times connections are opened and closed across multiple operations.

If you are executing only a single Fill or Update call, let Fill or Update open and close the connection implicitly. If you are making many Fill and Update calls, open the connection explicitly, make the calls, and then close the connection explicitly.

Additionally, when executing a transaction, open the connection explicitly before beginning the transaction and close it after you commit. For example:

'Visual Basic
Public Sub RunSqlTransaction(da As SqlDataAdapter, myConnection As SqlConnection, ds As DataSet)
  myConnection.Open()
  Dim myTrans As SqlTransaction = myConnection.BeginTransaction()
  ' Each command the adapter executes inside the transaction must be associated with it.
  da.UpdateCommand.Transaction = myTrans

  Try
    da.Update(ds)
    myTrans.Commit()
    Console.WriteLine("Update successful.")
  Catch e As Exception
    Try
      myTrans.Rollback()
    Catch ex As SqlException
      If Not myTrans.Connection Is Nothing Then
        Console.WriteLine("An exception of type " & ex.GetType().ToString() & _
          " was encountered while attempting to roll back the transaction.")
      End If
    End Try

    Console.WriteLine("An exception of type " & e.GetType().ToString() & " was encountered.")
    Console.WriteLine("Update failed.")
  End Try
  myConnection.Close()
End Sub

//C#
public void RunSqlTransaction(SqlDataAdapter da, SqlConnection myConnection, DataSet ds)
{
  myConnection.Open();
  SqlTransaction myTrans = myConnection.BeginTransaction();
  // Each command the adapter executes inside the transaction must be associated with it.
  da.UpdateCommand.Transaction = myTrans;

  try
  {
    da.Update(ds);
    myTrans.Commit();
    Console.WriteLine("Update successful.");
  }
  catch (Exception e)
  {
    try
    {
      myTrans.Rollback();
    }
    catch (SqlException ex)
    {
      if (myTrans.Connection != null)
      {
        Console.WriteLine("An exception of type " + ex.GetType() +
          " was encountered while attempting to roll back the transaction.");
      }
    }

    Console.WriteLine("An exception of type " + e.GetType() + " was encountered.");
    Console.WriteLine("Update failed.");
  }
  myConnection.Close();
}

Always close Connection and DataReader

Always close Connection and DataReader objects explicitly when you are finished with them. Although garbage collection eventually cleans up objects and thereby frees connections and other managed resources, it runs only when needed, so it remains your responsibility to release valuable resources explicitly. Also, Connections that are not closed explicitly may not be returned to the pool. For example, a connection that has gone out of scope without being closed is returned to the connection pool only if the pool has reached its maximum size and the connection is still valid.

Note: Do not call Close or Dispose on a Connection, DataReader, or any other managed object in the Finalize method of your class. In a finalizer, release only the unmanaged resources that your class owns directly. If your class does not own any unmanaged resources, do not include a Finalize method in the class definition.

Using "Using" statement in C#

For C# programmers, a convenient way to guarantee that Connection and DataReader objects are always closed is the using statement. When the using statement leaves its scope, it automatically calls Dispose on the object being "used". For example:

//C#
string connString = "Data Source=localhost;Integrated Security=SSPI;Initial Catalog=Northwind;";

using (SqlConnection conn = new SqlConnection(connString))
{
  SqlCommand cmd = conn.CreateCommand();
  cmd.CommandText = "SELECT CustomerId, CompanyName FROM Customers";

  conn.Open();

  using (SqlDataReader dr = cmd.ExecuteReader())
  {
    while (dr.Read())
      Console.WriteLine("{0}\t{1}", dr.GetString(0), dr.GetString(1));
  }
}

The using statement is not available in Microsoft Visual Basic .NET.

Avoid checking the State property

If the connection has been opened, the State property of an OleDbConnection makes a native OLE DB call to retrieve the DBPROP_CONNECTIONSTATUS property from the DATASOURCEINFO property set, which may mean a round trip to the data source. In other words, checking the State property can be expensive, so check it only when you must. If you need to check it frequently, your application may perform better if you listen to the OleDbConnection's StateChange event instead.
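
A minimal sketch of listening to the event, assuming an OleDbConnection named conn:

//C#
conn.StateChange += delegate(object sender, StateChangeEventArgs e)
{
    Console.WriteLine("Connection state: {0} -> {1}", e.OriginalState, e.CurrentState);
};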

Integration with XML

ADO.NET offers extensive XML integration in the DataSet and exposes some of the XML functionality provided by SQL Server 2000 and later. You can also use SQLXML 3.0 for broad access to the XML features of SQL Server 2000 and later. Here are tips and information on using XML with ADO.NET.

DataSet and XML

DataSet is tightly integrated with XML and provides the following functions:

1) Load the schema, or relational structure, of a DataSet from an XSD schema.

2) Load the content of the DataSet from XML.

3) If no schema is provided, the schema of DataSet can be inferred from the content of the XML document.

4) Write the schema of a DataSet as an XSD schema.

5) Write the content of the DataSet as XML.

6) Access the data synchronously through both a relational representation, using the DataSet, and a hierarchical representation, using the XmlDataDocument.

Note: This synchronization can be used to apply XML functionality (for example, XPath queries and XSLT transformations) to the data in a DataSet, or to provide a relational view of all or part of an XML document while preserving the original XML fidelity.
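
A minimal sketch of items 1, 2, and 5 above, with hypothetical file names:

//C#
DataSet ds = new DataSet();
ds.ReadXmlSchema("customers.xsd");   // load the schema from an XSD file
ds.ReadXml("customers.xml");         // load the contents from XML
ds.WriteXml("customers_out.xml", XmlWriteMode.WriteSchema);  // data plus inline schema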

Schema inference

When loading a DataSet from an XML file, you can load the DataSet's schema from an XSD schema, or you can predefine the tables and columns before loading the data. If no XSD schema is available and you do not know which tables and columns to define for the contents of the XML file, you can infer the schema from the structure of the XML document.

Schema inference is useful as a migration tool, but it should be limited to design-time applications, because the inference process has the following limitations.

1) Inferring the schema introduces additional processing that hurts application performance.

2) All inferred columns are of type string.

3) The inference process is not deterministic. It is based on the contents of the XML file, not on an intended schema. As a result, two XML files with the same intended schema can yield two completely different inferred schemas because their contents differ.

SQL Server FOR XML queries

If you are returning query results from SQL Server 2000 FOR XML, you can have the .NET Framework Data Provider for SQL Server create an XmlReader directly by calling the SqlCommand.ExecuteXmlReader method.
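
A minimal sketch (table and column names are hypothetical):

//C#
SqlCommand cmd = new SqlCommand(
    "SELECT CustomerId, CompanyName FROM Customers FOR XML AUTO", myConnection);
System.Xml.XmlReader xr = cmd.ExecuteXmlReader();
xr.MoveToContent();
while (!xr.EOF)
{
    if (xr.NodeType == System.Xml.XmlNodeType.Element)
        Console.WriteLine(xr.ReadOuterXml());  // advances past the element
    else
        xr.Read();
}
xr.Close();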

SQLXML Managed Classes

The .NET Framework includes classes that expose the XML functionality of SQL Server 2000. These classes are found in the Microsoft.Data.SqlXml namespace, and they add the ability to execute XPath queries and XML template files, and to apply XSLT transformations to data.

The SQLXML Managed Classes are included in XML for Microsoft SQL Server 2000 Web Release 2 (SQLXML 2.0).

More useful tips

Here are some common tips for writing code.

Avoid auto-increment value conflicts

Like most data sources, the DataSet lets you identify columns that auto-increment their values when new rows are added. When auto-increment columns in a DataSet correspond to auto-increment columns at the data source, you need to avoid numbering conflicts between rows added locally to the DataSet and rows added at the data source.

For example, consider a table whose primary key column, CustomerID, auto-increments. Two new customer rows are added to the table and receive auto-incremented CustomerID values of 1 and 2. Then only the second customer row is passed to the DataAdapter's Update method; the newly added row receives an auto-incremented CustomerID value of 1 at the data source, which does not match the value 2 in the DataSet. When the DataAdapter writes the returned value back to the second row, a constraint violation occurs because the first customer row already uses the CustomerID value 1.

To avoid this behavior, when working with auto-increment columns in a DataSet alongside auto-increment columns at the data source, create the DataSet column with an AutoIncrementStep value of -1 and an AutoIncrementSeed value of 0, and ensure that the data source generates auto-increment identity values starting at 1 with a positive step. The DataSet then generates negative numbers for its auto-increment values, which never conflict with the positive values generated by the data source. Another option is to use columns of type GUID instead of auto-increment columns; the algorithm that generates GUID values should never produce the same GUID at the data source as in the DataSet.

If the auto-increment column is used merely as a unique value and its actual number carries no meaning, consider using GUIDs instead of auto-increment columns. They are unique, and they avoid the extra work needed to make auto-increment columns safe.
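
A minimal sketch of the negative-step approach, with hypothetical table and column names:

//C#
DataColumn idColumn = ds.Tables["Customers"].Columns["CustomerID"];
idColumn.AutoIncrement = true;
idColumn.AutoIncrementSeed = 0;
idColumn.AutoIncrementStep = -1;   // locally added rows get 0, -1, -2, ...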

Check for optimistic concurrency violations

Because the DataSet is, by design, disconnected from the data source, an application needs to avoid conflicts when multiple clients update data at the data source under an optimistic concurrency model.

There are several techniques for testing for optimistic concurrency violations. One involves including a timestamp column in the table. Another is to verify, in the WHERE clause of the SQL statement, that the original values of all the columns in a row still match the values found in the database.
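
A minimal sketch of the WHERE-clause technique (column names are hypothetical; in practice you test every column and also account for NULLs, as in the Test Null section above):

//C#
string sql =
    "UPDATE Customers SET CompanyName = @newName " +
    "WHERE CustomerId = @id AND CompanyName = @origName";
SqlCommand cmd = new SqlCommand(sql, myConnection);
cmd.Parameters.Add("@newName", SqlDbType.NVarChar, 40).Value = "New Name";
cmd.Parameters.Add("@id", SqlDbType.Int).Value = 42;
cmd.Parameters.Add("@origName", SqlDbType.NVarChar, 40).Value = "Old Name";
if (cmd.ExecuteNonQuery() == 0)
{
    // Zero rows affected: another client changed the row; handle the conflict.
}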

Multithreaded programming

ADO.NET is optimized for performance, throughput, and scalability. As a result, its objects do not lock resources and must be used only on a single thread. The one exception is the DataSet, which is thread-safe for multiple readers; however, you need to lock the DataSet during writes.
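
A minimal sketch, assuming a DataSet named sharedDataSet that is visible to several threads:

//C#
// Multiple readers are safe; writers must serialize access.
lock (sharedDataSet)
{
    sharedDataSet.Tables["Orders"].Rows.Add(1, "VINET");
}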

Access ADO using COM Interop only when needed

ADO.NET is designed to be the best solution for a wide range of applications. However, some applications require capabilities that are available only through ADO objects, such as ADO Multidimensional (ADOMD). In those cases the application can access ADO through COM Interop. Note that using COM Interop to access data through ADO carries a performance penalty. When designing an application, first determine whether ADO.NET meets your design needs before implementing a design that accesses ADO through COM Interop.

That is all the content of this article. I hope it is helpful to everyone's study, and I hope you will continue to support us.