This export-website function means triggering the ashx handler from front-end JavaScript, which copies a folder (including its subfolders and files) on the server to another location on the server; the folder itself is a website. Therefore, besides the JavaScript trigger, the most important part of exporting a website is the folder-copy operation in the C# ashx file.
The following code calls the handler through the JavaScript Request function, passing two parameters: the sub-path of the folder to be copied and the sub-path of the location it is copied to. The getWebList function is a back-end function, and you can ignore it here. The getBack function needs to be written by you, and the result is obtained through it. Of course, the Webside_load function also needs to be triggered by an onclick event, so not everything is listed here.
The following is a reference JavaScript snippet that triggers the export-website function:
//Webside_load export website
function Webside_load(sID, iWebTemplateID) {
    //alert(0); //alert(sID); alert(iWebTemplateID);
    //Source directory: the folder and files under the template ID
    sTartDir = "/uploadfile/webTemplate/" + iWebTemplateID;
    //Target directory: under the work ID
    sEndDir = "/uploadfile/showweb/" + sID + "/";
    //alert(sourceDir); alert(targetDir);
    var variable = ["sTartDir", "sEndDir"];
    var value = [sTartDir, sEndDir];
    //alert(value);
    Request("getWebList", variable, value, getBack, WebUrl + "/", svrNamespace);
}

function getBack() {
    var xmlhttp = xmlHttpRequest;
    var Result = xmlhttp.responseText; //read the handler's response text
    alert(Result);
}
Through the above JavaScript you can call the back end and receive its result; the actual copying is done by the copy handler file below, which traverses the folder and its subfolders.
The following is the quoted snippet of the copy handler (ashx) file:
<%@ WebHandler Language="C#" Class="copy" %>

using System;
using System.Web;
using System.IO;

public class copy : IHttpHandler
{
    //Recursively traverse all files in the folder and its subfolders and copy them.
    public void ProcessRequest(HttpContext context)
    {
        HttpRequest Request = context.Request;
        HttpResponse Response = context.Response;
        HttpServerUtility Server = context.Server;
        //Specify the output header and encoding
        Response.ContentType = "text/html";
        Response.Charset = "utf-8";
        HttpFileCollection fs = Request.Files;
        string sTartDir = Request["sTartDir"];
        string sEndDir = Request["sEndDir"];
        sTartDir = Server.MapPath(sTartDir);
        sEndDir = Server.MapPath(sEndDir);
        //Test
        //string sTartDir = Server.MapPath("../uploadfile/webTemplate/2");
        //string sEndDir = Server.MapPath("../uploadfile/showweb/2012082700000001/");
        MyDirectory_Copy(sTartDir, sEndDir);
        Response.Write("Exported successfully!");
    }

    static void MyDirectory_Copy(string sTartDir, string sEndDir)
    {
        //Judge whether both directories exist
        if (!Directory.Exists(sTartDir)) return;
        if (!Directory.Exists(sEndDir)) return;
        //Get the folder name: strip the parent path and the separator from the full path
        string sTarteFolderName = sTartDir.Replace(Directory.GetParent(sTartDir).ToString(), "").Replace(Path.DirectorySeparatorChar.ToString(), "");
        //Judge whether the folder has already been copied to the target
        if (sTartDir == sEndDir + sTarteFolderName) return;
        //The path to copy to
        string endPath = sEndDir + Path.DirectorySeparatorChar + sTarteFolderName;
        if (Directory.Exists(endPath))
        {
            Directory.Delete(endPath, true);
        }
        Directory.CreateDirectory(endPath);
        //Copy the files
        string[] files = Directory.GetFiles(sTartDir);
        for (int i = 0; i < files.Length; i++)
        {
            File.Copy(files[i], endPath + Path.DirectorySeparatorChar + Path.GetFileName(files[i]));
        }
        //Copy the subdirectories recursively
        string[] dires = Directory.GetDirectories(sTartDir);
        for (int j = 0; j < dires.Length; j++)
        {
            MyDirectory_Copy(dires[j], endPath);
        }
    }

    public bool IsReusable
    {
        get { return false; }
    }
}
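If you want to exercise the handler without going through the front-end Request helper, it can also be called directly. Below is a minimal C# test sketch that is not part of the original code: the host name, handler path, and sample IDs are assumptions for illustration only. It relies on the fact that Request["sTartDir"] in the handler reads query-string (or form) values.

using System;
using System.Net.Http;
using System.Threading.Tasks;

class ExportWebsiteTestClient
{
    static async Task Main()
    {
        using (var client = new HttpClient())
        {
            // The same two values the JavaScript Request call would send (assumed sample values).
            string url = "http://localhost/copy.ashx"
                + "?sTartDir=" + Uri.EscapeDataString("/uploadfile/webTemplate/2")
                + "&sEndDir=" + Uri.EscapeDataString("/uploadfile/showweb/2012082700000001/");

            // A plain GET is enough, because the handler reads the parameters from the request collection.
            string result = await client.GetStringAsync(url);
            Console.WriteLine(result); // expected output: Exported successfully!
        }
    }
}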
Obtain the sTartDir source directory and the sEndDir target directory from the foreground and convert them to absolute paths with Server.MapPath. Then execute the MyDirectory_Copy function: get the folder name of the source directory, append that folder name to the absolute path of the target directory to form the new target directory, check it, and carry out the copy through a recursive loop.
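To make the path handling concrete, here is a small stand-alone sketch (the physical paths are assumed example values, as if produced by Server.MapPath on a Windows server) showing how the folder name is extracted and how the new target directory endPath is built at the first level of the recursion:

using System;
using System.IO;

class PathTrace
{
    static void Main()
    {
        // Assumed absolute paths, as Server.MapPath might return them.
        string sTartDir = @"C:\site\uploadfile\webTemplate\2";
        string sEndDir = @"C:\site\uploadfile\showweb\2012082700000001";

        // Same idea as in MyDirectory_Copy: strip the parent path and the separator,
        // leaving only the folder name of the source directory.
        string folderName = sTartDir
            .Replace(Directory.GetParent(sTartDir).ToString(), "")
            .Replace(Path.DirectorySeparatorChar.ToString(), "");
        Console.WriteLine(folderName); // prints: 2

        // The new target directory: target root + separator + folder name.
        string endPath = sEndDir + Path.DirectorySeparatorChar + folderName;
        Console.WriteLine(endPath);    // prints: C:\site\uploadfile\showweb\2012082700000001\2
    }
}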
This method is similar to the C# folder traversal mentioned earlier, but here the traversal copies files and folders as it goes, so it is not exactly the same as simply traversing the file system.
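For comparison, a traversal that only lists the contents without copying anything could look like the sketch below (the sample path is an assumption; the recursive structure mirrors MyDirectory_Copy, minus the Directory.CreateDirectory and File.Copy calls):

using System;
using System.IO;

class ListOnly
{
    // Recursively print every file and subfolder under dir, indented by depth.
    static void ListAll(string dir, int depth)
    {
        foreach (string file in Directory.GetFiles(dir))
            Console.WriteLine(new string(' ', depth * 2) + Path.GetFileName(file));

        foreach (string sub in Directory.GetDirectories(dir))
        {
            Console.WriteLine(new string(' ', depth * 2) + Path.GetFileName(sub) + @"\");
            ListAll(sub, depth + 1);
        }
    }

    static void Main()
    {
        ListAll(@"C:\site\uploadfile\webTemplate\2", 0); // assumed sample path
    }
}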
The above is the full introduction to the C# export-website function. I hope it is helpful to everyone's learning.