You are probably reading this because you have created a DotNetNuke module and want to deploy it as a private assembly without doing all of the work of creating a package by hand. Therefore, you probably are keenly aware of what DotNetNuke is, so I won't go into any detail about it except to say that it is a great platform upon which to build powerful web applications. For those who are here for any other reason, you can learn more about it at the DotNetNuke web site.
I built the DNN Module Packager for two reasons. The first was that it seemed that no other freely available tool really went all the way toward making it possible to generate a deployable private assembly out of the box (or even got close without a significant amount of attention to the package) straight from my development environment. There are some applications that will generate a PA shell that you can build from, but that's not really what I wanted either.
The second reason was that the only other applications that seem to do what I want (I say seem because I wasn't willing to pay money to find out) cost money. And that seems silly for something that is so basic to creating modules in DotNetNuke (keep in mind I'm a much better philanthropist than a business man). I wanted a tool that would create a private assembly from a module I had been working on in my locally installed DotNetNuke development environment. The basic idea was to simply be able to build the module focusing on its functionality and letting another application handle creating the private assembly package.
To develop this application, I had several choices. I opted to build a desktop GUI based application for the following reasons:
If you're not like me, feel free to rip into the code and make it into a module or anything else. Just remember that the license is GPL.
I started out creating an application that was more like an MDI with a place to edit the .dnn file once it had been generated, but I felt that was overkill and didn't necessarily even make sense. What I decided would be better was to use a wizard style interface since all I am intending to do is walk the end-user through the process of creating a Private Assembly for a custom DNN module. There are several wizard controls out there, but I found Al Gardener's Designer centric Wizard control to really be the best. It's great because his event handling is at the page level. This means that you can shuffle pages around in any order you want even after you've done some work on them, and the logic won't be affected in any way. I highly recommend his control.
I should also make one final note. I have used several different tools in this application all of which are open-source. For zipping, I use the ICSharpCode.SharpZipLib assembly from the Sharp Develop guys, and of course, as I already mentioned, I am using Al Gardener's Designer centric Wizard control. The project links in this article only contain binaries for these libraries, however, you can get the source for each of them at the links I've provided in this introduction.
They always say that you should never assume because it just makes an "ass" out of "u" and "me". Well, in this case, it's the only way I could get this application to work, so if I've made you feel like an ass, I apologize. In all seriousness though, the beauty of these types of assumptions is that some of them can be changed if a better way emerges. I leave it to the community to help make that determination. OK, so on to the "ass" "u" "me" ptions.
The application assumes that:
I access the IIS Metabase through directory services, so it must be available. And I generated a Primary Interop Assembly (PIA) for SQLDMO so that I could access it through managed code. This PIA is included in the project and in the installer, however, it won't work unless SQLDMO is actually available on your machine.
I will discuss the pertinent parts of each of these steps in detail with code, however, it will help a bit to explain now at a high level how I thought about the process and what each of the steps are. When I considered the problem, I felt that what I wanted was a way to tell the application where to find a few pieces of information and have it handle the rest. To me, the logical starting place was allowing the user to select the web application (a.k.a. virtual directory) in which the custom module in question is implemented. Here is the basic flow:

1. Select the web application (virtual directory) that hosts your DotNetNuke installation.
2. Select the desktop module definition you want to package.
3. Select the assemblies from the site's bin directory that belong to the module.
4. Provide the database object prefix so the module's tables and stored procedures can be scripted out.
5. Let the application generate the SQL script and the .dnn manifest, optionally auto-detect any assembly dependencies you missed, and zip everything into the private assembly package.
Once the process has completed, if you specified that you wanted the application to keep the temporary files, you will find a ZIP file in the directory you specified for the ZIP file's output, as well as a directory with the module name that contains all of the loose files. Also, if you asked the application to auto-detect any assembly dependencies you may have missed and it found any, you will see a report of what it found on the final page.
(For all you Neal Stephenson fans, I keep wanting to call this thing the Metaverse.) In order to obtain a list of all available web applications, it seemed that the simplest way was to access the Metabase. From the Microsoft web site, in a nutshell, "The metabase is a hierarchical store of configuration information and schema that are used to configure IIS". For all intents and purposes, the metabase is an XML based configuration file (and schema). For more information, check out the site where the above definition came from. With a quick Internet search, I was able to find some basic information on how to programmatically access the Metabase. After seeing a few examples on the web and playing around with the MetaEdit utility (download it now to understand more clearly), which allowed me to see the schema hierarchy for the data, I had everything I needed to write the following code:
public Hashtable GetSitePaths()
{
    Hashtable tmp = new Hashtable();
    DirectoryEntry root = new DirectoryEntry( "IIS://localhost/w3svc/1/root" );
    foreach( DirectoryEntry e in root.Children )
    {
        tmp[ e.Name ] = e.Path;
    }
    return tmp;
}
I used the System.DirectoryServices namespace to obtain a list of all of the directory entries for the server root. Notice the line DirectoryEntry root = new DirectoryEntry("IIS://localhost/w3svc/1/root");. That path gives us a list of all of the local web applications (virtual directories). If you look at the MetaEdit utility, you can see this graphically:
When the GetSitePaths method returns, the hashtable is populated with a list of virtual directory names and their corresponding virtual directory paths. This hashtable is used to populate the combo box and as a lookup for the path to the virtual directory once the selection has been made.
Note about the use of Hashtables: This may be a common practice for you as a programmer, but I'm going to explain this a little bit for others who do not necessarily use this technique. I use hashtables for many of my data collections for one or both of the following reasons: first, they give me lookup by a meaningful key (for example, mapping a name shown in a combo box to its underlying path or ID); second, keys are unique, so adding the same key twice simply overwrites the existing entry, which makes de-duplication automatic.
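As a small standalone illustration (not taken from the packager source; the names are hypothetical), both properties are easy to see in isolation:

```csharp
using System;
using System.Collections;

class HashtableDemo
{
    static void Main()
    {
        // Reason 1: lookup by a meaningful key -- e.g. map a friendly
        // name (shown in a combo box) to its virtual directory path.
        Hashtable sites = new Hashtable();
        sites["DNNSB"] = "IIS://localhost/w3svc/1/root/DNNSB";
        Console.WriteLine( sites["DNNSB"] );

        // Reason 2: keys are unique -- adding the same key twice just
        // overwrites the entry, so duplicates collapse automatically.
        Hashtable assemblies = new Hashtable();
        assemblies["SkyeRoad.ProductCatalog.dll"] = 0;
        assemblies["SkyeRoad.ProductCatalog.dll"] = 1;
        Console.WriteLine( assemblies.Count ); // 1
    }
}
```

The second behavior is exactly what the dependency-detection code later relies on to keep each assembly in the list only once.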
When the user selects a web application, the SelectedIndexChanged event fires. At this point, I am able to convert the virtual path associated with that selection into a local file system path, and simply locate the Web.config file for that application. Once I have the Web.config file, I can extract the connection string. I use that connection string for any database purposes in the remaining steps in the process. Here is the code I use to obtain the connection string:
private void cmbVirtualDirectories_SelectedIndexChanged( object sender, System.EventArgs e )
{
    // Make sure our combo box has a valid selection
    if( this.cmbVirtualDirectories.Text.Length > 0 )
    {
        // Translate the web path into a local filesystem path
        sitePath = new VirtualDirectoryUtility().GetVirtualDirLocalPath(
            (string)this.sites[ this.cmbVirtualDirectories.Text ] ) + "\\";
        if( sitePath.Length > 0 )
        {
            // Check to see if a web.config file exists
            if( !File.Exists( sitePath + "Web.config" ) )
            {
                // If not then fail and let the user know
                // they need to make a different selection
                MessageBox.Show( this,
                    "The web directory you selected has no Web.config file. " +
                    "Please select a different web directory",
                    "Invalid Web Directory",
                    MessageBoxButtons.OK, MessageBoxIcon.Exclamation );
                return;
            }

            // Load the web config into an XML document
            XmlDocument config = new XmlDocument();
            config.Load( sitePath + "Web.config" );

            // First check to see whether we're using DNN 2 or 3
            XmlNode siteSqlServer =
                config["configuration"]["appSettings"].SelectSingleNode(
                    "add[@key = \"SiteSqlServer\"]" );
            if( siteSqlServer != null )
                this.version = DNN_VERSION.DNN3;

            // Find out which data provider type we are using
            XmlNode dataNode = config["configuration"]["dotnetnuke"]["data"];

            // If both of these are null, then we're not
            // finding what we need to continue --
            // must go back and try again
            if( dataNode == null && siteSqlServer == null )
            {
                // This node is required. If it's not there,
                // then we need to error out.
                MessageBox.Show( this,
                    "The Web.config file found does not" +
                    " have the proper XML format. " +
                    "Please check to make sure that you" +
                    " are using a valid DotNetNuke installation. " +
                    "Please select a different web directory",
                    "Invalid Web.config File",
                    MessageBoxButtons.OK, MessageBoxIcon.Exclamation );
                return;
            }

            if( dataNode != null && siteSqlServer == null )
                this.version = DNN_VERSION.DNN2;

            // Guard against a missing data node (e.g. a DNN 3 site
            // without a dotnetnuke/data section)
            if( dataNode != null )
                this.dataProviderType = dataNode.Attributes["defaultProvider"].Value;

            // Find the correct key field
            switch( this.version )
            {
                case DNN_VERSION.DNN2:
                    XmlNode providerNode =
                        config["configuration"]["dotnetnuke"]["data"]["providers"]
                            .SelectSingleNode( "add[@name=\"" + dataProviderType + "\"]" );

                    // Set the connection string instance variable
                    // for use throughout the rest of the process.
                    if( dataProviderType.Equals( "SqlDataProvider" ) )
                    {
                        connectionString =
                            providerNode.Attributes["connectionString"].Value;
                    }
                    else
                    {
                        // Have to attach the data source to the
                        // connection string manually or we won't be
                        // able to connect to it. We assume that it is in the
                        // (root)/Providers/DataProviders/AccessDataProvider directory
                        string fileName =
                            providerNode.Attributes["databaseFilename"].Value;
                        connectionString =
                            providerNode.Attributes["connectionString"].Value +
                            "Data Source=" + sitePath +
                            @"Providers\DataProviders\AccessDataProvider\" + fileName;
                    }
                    break;
                case DNN_VERSION.DNN3:
                    connectionString = siteSqlServer.Attributes["value"].Value;
                    break;
            }

            // Enable the next button now that we have a valid selection.
            this.wizardMain.NextEnabled = true;
        }
    }
}
In the line sitePath = new VirtualDirectoryUtility().GetVirtualDirLocalPath( (string)this.sites[ this.cmbVirtualDirectories.Text ] ) + "\\";, I am calling out to a utility to provide me with a translation of the virtual directory path to the actual file system path. To gain a better understanding of this, let's say, for instance, that I select a DotNetNuke installation on my local system called DNNSB. The virtual path associated with the DNNSB virtual directory as loaded by my GetSitePaths() method (see above) would be "IIS://localhost/w3svc/1/root/DNNSB". Now, the utility function code to convert the virtual path to a file system path looks like this:
public string GetVirtualDirLocalPath( string directoryEntry )
{
    DirectoryEntry virtualPath = new DirectoryEntry( directoryEntry );
    return virtualPath.Properties["Path"].Value.ToString();
}
This simply opens the "IIS://localhost/w3svc/1/root/DNNSB" directory entry and looks up the "Path" property. "Path" contains the full path to the file system directory.
Note: If you would like to know what other properties are available in the DirectoryEntry.Properties collection when loading a virtual directory, open the Metabase with the MetaEdit utility and look at the properties under the Schema node. Remember that the MetaEdit utility is just a simple tool that provides a tree view of the Metabase XML and schema files. If you're not familiar with XML, just remember that the schema is a definition of the data, and what you see under the LM node is the actual implementation of the data. Drill down in the MetaEdit utility under the LM node to a specific virtual directory under "IIS://localhost/w3svc/1/root" and you will see what properties are actually implemented on the virtual directory you select.
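If you'd rather enumerate those properties programmatically than browse them in MetaEdit, a quick sketch like this (not part of the packager; the virtual directory path is an example from this article) will dump every property name and value for a virtual directory:

```csharp
using System;
using System.DirectoryServices;

class MetabaseProperties
{
    static void Main()
    {
        // Adjust the path to point at one of your own virtual directories.
        DirectoryEntry vdir =
            new DirectoryEntry( "IIS://localhost/w3svc/1/root/DNNSB" );

        // Each entry in the Properties collection corresponds to a
        // property the Metabase schema defines for this node type.
        foreach( string propName in vdir.Properties.PropertyNames )
        {
            Console.WriteLine( "{0} = {1}",
                propName, vdir.Properties[propName].Value );
        }
    }
}
```

You should see "Path" in the output along with the rest of the implemented properties, which is the property the GetVirtualDirLocalPath utility reads.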
With the file system path known, I simply append "Web.config" to the path and load it into an XML document. I can then search through the XML nodes until I find the appropriate token for either DNN2 or DNN3. Notice that the method supports grabbing the connection string for both SQL Server and Microsoft Access. Once I have a valid connection string, I can obtain a list of module definitions found in the database. The code to do that looks like this:
private void LoadModuleDefs()
{
    try
    {
        // Ensure that we have a valid connection string
        if( connectionString.Length <= 0 )
        {
            // If not, warn the user and go back to the
            // web application selection page in order
            // to select a valid application.
            MessageBox.Show( this,
                "The connection string was not found while" +
                " parsing the web.config file for the current web" +
                " application. Please go back and select" +
                " a different web application.",
                "Bad Connection String",
                MessageBoxButtons.OK, MessageBoxIcon.Error );
            this.wizardMain.BackTo( this.wpSiteSelect );
            return;
        }

        Cursor.Current = Cursors.WaitCursor;

        // Connection to SQL Server
        if( this.dataProviderType.Equals( "SqlDataProvider" ) )
        {
            // Connect to the DB
            SqlConnection connection = new SqlConnection( connectionString );
            connection.Open();

            // Obtain all Desktop Module definitions in the selected DB
            SqlCommand command = new SqlCommand(
                "SELECT DesktopModuleID, FriendlyName FROM DesktopModules",
                connection );
            SqlDataReader reader = command.ExecuteReader();

            // Instantiate the modules instance variable
            modules = new Hashtable();
            while( reader.Read() )
            {
                // Use the "FriendlyName" as the key and
                // the "DesktopModuleID" as the value when
                // adding to our modules hashtable instance variable
                modules[reader[1].ToString()] = reader[0].ToString();

                // Add the "FriendlyName" to the list of module definitions
                this.cmbModuleDefinitions.Items.Add( reader[1].ToString() );
            }
            reader.Close();
            connection.Close();
        }
        // Connection to Access
        else
        {
            // Connect to the DB
            OleDbConnection connection = new OleDbConnection( connectionString );
            connection.Open();

            // Obtain all Desktop Module definitions in the selected DB
            OleDbCommand command = new OleDbCommand(
                "SELECT DesktopModuleID, FriendlyName FROM DotNetNuke_DesktopModules",
                connection );
            OleDbDataReader reader = command.ExecuteReader();

            // Instantiate the modules instance variable
            modules = new Hashtable();
            while( reader.Read() )
            {
                // Use the "FriendlyName" as the key and
                // the "DesktopModuleID" as the value when
                // adding to our modules hashtable instance variable
                modules[reader[1].ToString()] = reader[0].ToString();

                // Add the "FriendlyName" to the list of module definitions
                this.cmbModuleDefinitions.Items.Add( reader[1].ToString() );
            }
            reader.Close();
            connection.Close();
        }

        Cursor.Current = Cursors.Default;
    }
    catch( Exception ex )
    {
        // Something went wrong. Notify the user and go back to the
        // web application selection page.
        MessageBox.Show( this, ex.Message, "Exception Caught" );
        this.wizardMain.BackTo( this.wpSiteSelect );
        return;
    }
}
Notice once again that I have provided support for both SQL Server and Microsoft Access. Now that the module definitions are loaded into the combo box, the user can select the module they would like to package up and then click Next.
I toyed with the idea of just auto-detecting the module dependencies exclusively, but it seemed to me that, at best, it was probably only slightly more efficient to do so, and at worst (e.g., if it's not very accurate at detecting them), you would have to select your assemblies manually anyhow. The path I chose was to enable the user to select the assemblies used by the module manually from a list of assemblies found in the site bin directory. Then, later in the process, I give the user the ability to allow the auto-detection feature to try to find anything that may be missing. (I discuss the auto-detection feature in greater depth later--turns out that auto-detection works pretty well.) The list box gets populated with all of the assemblies found in the bin directory (I just append "\bin\" to the path found in the Metabase to find this) of the virtual directory currently selected. The list box allows multiple selections. Users simply select the assemblies associated with their module and click Next.
The next step allows the user to provide a prefix that is used to locate database objects related to the module being packaged.
Please read the "Assumptions" section at the beginning of this article so you know what to expect when it comes to creating your database objects with a prefix. This part simply uses SQLDMO to script out the objects that it finds according to the prefix provided. I created another utility class to accomplish this; you will find the code below. Keep in mind that although I'm showing this code now, it does not actually run at this point in the wizard; processing doesn't start until you reach the progress screen at the end. I am only collecting the prefix at this stage, however, this seems like a good place in the article to discuss what the application will do with the prefix once processing has actually begun. Here is the code:
public static string GetScript( string hostName, string dbName,
    string username, string password, string prefix )
{
    // Instantiate the SQL Server object
    SQLDMO.SQLServer srv = new SQLDMO.SQLServer();

    // If there is no username provided, we assume that we're going to use
    // a trusted connection
    if( username.Length <= 0 )
    {
        srv.LoginSecure = true;
        srv.Connect( hostName, "", "" );
    }
    // Otherwise we log in with the specified credentials
    else
        srv.Connect( hostName, username, password );

    // We have to set some scripting parameters before we start
    SQLDMO.SQLDMO_SCRIPT_TYPE param =
        SQLDMO.SQLDMO_SCRIPT_TYPE.SQLDMOScript_Default |
        // Script out indexes
        SQLDMO.SQLDMO_SCRIPT_TYPE.SQLDMOScript_Indexes |
        // Script out drop statements
        SQLDMO.SQLDMO_SCRIPT_TYPE.SQLDMOScript_Drops |
        // Prefix the object name with the database owner
        SQLDMO.SQLDMO_SCRIPT_TYPE.SQLDMOScript_OwnerQualify;

    string script = "";
    foreach( SQLDMO.Database db in srv.Databases )
    {
        // Have to iterate through the list of databases to find the one that
        // was specified
        if( db.Name != null && db.Name.ToLower().Equals( dbName.ToLower() ) )
        {
            // First search through all of the tables and locate the ones
            // with the prefix provided (case insensitive)
            foreach( SQLDMO.Table table in db.Tables )
            {
                if( table.Name.ToLower().StartsWith( prefix.ToLower() ) )
                {
                    // Append the script for the current table
                    script += table.Script( param, null, null,
                        SQLDMO.SQLDMO_SCRIPT2_TYPE.SQLDMOScript2_Default );
                }
            }

            // Next search through all of the stored procedures and locate
            // the ones with the prefix provided (case insensitive)
            foreach( SQLDMO.StoredProcedure proc in db.StoredProcedures )
            {
                if( proc.Name.ToLower().StartsWith( prefix.ToLower() ) )
                {
                    // Append the script for the current stored procedure
                    script += proc.Script( param, null,
                        SQLDMO.SQLDMO_SCRIPT2_TYPE.SQLDMOScript2_Default );
                }
            }
            break;
        }
    }
    return script;
}
You can see that I specify several parameters for the SQLDMO scripting mechanism to use, including using default settings, creating indexes, creating drop statements, and prefixing the object names with the database owner name. When we return from this method, we have a complete script in hand. To achieve "out-of-the-box" usability, we now need to replace the database owner prefix (which must be "[dbo].") with the {databaseOwner} token that the DotNetNuke module installer expects. Here is what that calling code looks like:
private void CreateSqlScript()
{
    try
    {
        // Get the script according to the specified connection string and
        // db prefix
        string script = DbScriptingUtility.GetScript( connectionString, this.dbPrefix );

        // Replace the db owner with the token expected by the DNN Module
        // Installer
        script = script.Replace( "[dbo].", "{databaseOwner}" );

        // Set the version information
        string ver = ( this.manifestCreator.Version.Length > 0 ) ?
            this.manifestCreator.Version : "01.00.00";

        // Write out the file to the temporary directory where our other
        // files are being copied.
        StreamWriter writer = File.CreateText( tempDirPath + "\\" +
            TEMP_DIR_NAME + "\\" + ver + ".SqlDataProvider" );
        writer.Write( script );
        writer.Close();
    }
    catch( Exception ex )
    {
        MessageBox.Show( this, ex.Message, "Exception Caught" );
    }
}
For right now, you can ignore the code where we are using the manifestCreator. We will be discussing that in greater depth in the next section. Suffice it to say that once these methods have run, we will have a database script that the DNN module installer can use to create our objects when the private assembly is installed through the DNN file manager.
The manifest creator is really the heart and soul of the DNN Module Packager. It generates our .dnn manifest file with all of the information we have provided. It begins by getting information about the module from the DesktopModules table in the database. From there, we acquire the module name, description, and version. Next, we select module control data by joining against the ModuleDefinitions and ModuleControls tables. Once this data is available, we simply add it to the XML .dnn file in the appropriate order with the appropriate tags. Here is the ManifestCreator's Generate method:
public XmlDocument Generate( string connectionString, string desktopModuleId,
    string dnnVersion )
{
    // Set the instance variables
    this.connectionString = connectionString;
    this.desktopModuleId = desktopModuleId;

    // connectionString and desktopModuleId cannot be null or empty
    Debug.Assert( this.connectionString.Length > 0,
        "ManifestCreator.connectionString must be set before generating." );
    Debug.Assert( this.desktopModuleId.Length > 0,
        "ManifestCreator.desktopModuleId must be set before generating." );

    // Create our XML document and its top level node
    doc = new XmlDocument();
    XmlElement topElement = doc.CreateElement( "dotnetnuke" );

    // Add some attributes to the top level node
    topElement.SetAttribute( "version", dnnVersion );
    topElement.SetAttribute( "type", "Module" );
    doc.AppendChild( topElement );

    // Create the folders tag
    XmlElement folders = doc.CreateElement( "folders" );
    topElement.AppendChild( folders );

    // Create the folder tag
    XmlElement folder = doc.CreateElement( "folder" );
    folders.AppendChild( folder );

    // Call virtual method to load the module definition
    // information. Overridden child class version is called.
    // See SqlServerManifestCreator or AccessManifestCreator for details.
    LoadModuleDefinitionInfo();

    // Create XML elements to hold the data found in the db
    XmlElement name = doc.CreateElement( "name" );
    XmlElement desc = doc.CreateElement( "description" );
    XmlElement version = doc.CreateElement( "version" );

    // Set the XML inner text to reflect the data found in the db
    name.InnerText = friendlyName.Replace( " ", "_" );
    desc.InnerText = description;
    if( ver == null || ver.Length <= 0 )
        ver = "01.00.00";
    version.InnerText = ver;

    // Append these nodes to the folder node
    folder.AppendChild( name );
    folder.AppendChild( desc );
    folder.AppendChild( version );

    // Create the modules node.
    XmlElement modules = doc.CreateElement( "modules" );
    folder.AppendChild( modules );

    // Call virtual method to load the list of modules.
    // Overridden child class version is called.
    // See SqlServerManifestCreator or AccessManifestCreator for details.
    LoadModules();

    // For each of the modules we found,
    // we have to create a module XML node as well
    // as each of the control nodes.
    Hashtable files = new Hashtable();
    bool exPathWasSet = false;
    foreach( string key in friendlyNames.Keys )
    {
        // Create the module node
        XmlElement mod = doc.CreateElement( "module" );

        // Create the friendlyname node for the module node
        XmlElement fName = doc.CreateElement( "friendlyname" );

        // Append the nodes to the document
        modules.AppendChild( mod );
        mod.AppendChild( fName );

        // Set the friendly name equal to the key name
        fName.InnerText = key;

        // Create the controls node to hold all of the control nodes
        XmlElement controls = doc.CreateElement( "controls" );
        mod.AppendChild( controls );

        foreach( object[] row in items )
        {
            // If this is the module we're currently working on
            if( row[8].ToString().Equals( key ) )
            {
                // Create a control node
                XmlElement control = doc.CreateElement( "control" );
                controls.AppendChild( control );

                // 2 - key, 3 - title, 4 - src, 5 - iconfile, 6 - controltype

                // Create the control key node
                if( row[2].ToString().Length > 0 )
                {
                    XmlElement itemKey = doc.CreateElement( "key" );
                    itemKey.InnerText = row[2].ToString();
                    control.AppendChild( itemKey );
                }

                // Create the control title node
                if( row[3].ToString().Length > 0 )
                {
                    XmlElement itemTitle = doc.CreateElement( "title" );
                    itemTitle.InnerText = row[3].ToString();
                    control.AppendChild( itemTitle );
                }

                // Create the control src node
                if( row[4].ToString().Length > 0 )
                {
                    XmlElement itemSrc = doc.CreateElement( "src" );
                    string source = row[4].ToString();
                    itemSrc.InnerText =
                        source.Substring( source.LastIndexOf( "/" ) + 1 );

                    // Keep a list of all of the files found
                    // while iterating to keep from
                    // having to run through the dataset again.
                    files[source] = 1;
                    control.AppendChild( itemSrc );

                    if( !exPathWasSet )
                    {
                        this.extendedPath =
                            source.Substring( 0, source.LastIndexOf( "/" ) );
                        exPathWasSet = true;
                    }
                }

                // Create the control iconfile node
                if( row[5].ToString().Length > 0 )
                {
                    XmlElement itemIconFile = doc.CreateElement( "iconfile" );
                    itemIconFile.InnerText = row[5].ToString();

                    // Keep a list of all of the files found
                    // while iterating to keep from
                    // having to run through the dataset again.
                    files[row[5].ToString()] = 1;
                    control.AppendChild( itemIconFile );
                }

                // Create the control type node and set the correct type name
                if( row[6].ToString().Length > 0 )
                {
                    XmlElement itemType = doc.CreateElement( "type" );
                    switch( row[6].ToString() )
                    {
                        case "-2": itemType.InnerText = "Skin Object"; break;
                        case "-1": itemType.InnerText = "Anonymous";   break;
                        case "0":  itemType.InnerText = "View";        break;
                        case "1":  itemType.InnerText = "Edit";        break;
                        case "2":  itemType.InnerText = "Admin";       break;
                        case "3":  itemType.InnerText = "Host";        break;
                    }
                    control.AppendChild( itemType );
                }
            }
        }
    }

    // Create the files node
    xfiles = doc.CreateElement( "files" );
    folder.AppendChild( xfiles );

    int fileCount = 0;

    // contentFiles can be accessed as a property
    // once the Generate method has run. This
    // is where we populate it with a list of all of the files
    contentFiles = new string[files.Keys.Count];
    foreach( string key in files.Keys )
    {
        // Add the filename to the contentFiles array
        contentFiles[fileCount] = key;

        // Create the file node for the current file
        XmlElement file = doc.CreateElement( "file" );
        xfiles.AppendChild( file );

        // Create a name node to be appended to the file node
        XmlElement fileName = doc.CreateElement( "name" );

        // We only need the filename for the manifest file,
        // so we extract it from the path
        string source = key.Substring( key.LastIndexOf( "/" ) + 1 );

        // Set the name node's inner text
        fileName.InnerText = source;
        file.AppendChild( fileName );
        fileCount++;
    }

    this.hasBeenGenerated = true;
    return doc;
}
There are a significant number of comments in this code to try to explain what exactly is going on, so read through those thoroughly. Don't let the code overwhelm you though. If you have ever generated an XML document tag by tag before, then you'll recognize that there really isn't anything earth shattering going on here. It is just taking the data we got from the database, along with a list of the content files, assembly files, and the SQL script file, and generating an XML based .dnn manifest file.
If you do not have a version set for your module, the version used by the application will automatically default to 01.00.00. If you don't want this to happen, then you will need to change your module's version in the database before running the packager. The module version can be set in the DesktopModules table manually using Enterprise Manager. In order to check and see if you have set the module version, log into your DotNetNuke site as the host account and select Host > Module Definitions. Now click on the edit pencil next to your module definition. You will see a screen like the one below. Notice that the version field is not editable (which is the reason you have to set it manually in the database).
If the version is not set in your module, then go into the DesktopModules database table in Enterprise Manager and locate your module's record. Set your version number in the Version column in the format xx.xx.xx. This version is used for both the .dnn file info and for your SqlDataProvider SQL file which is named according to the value found (as in 01.00.00.SqlDataProvider).
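For example, if your module's friendly name were "Product Catalog" (a hypothetical name; substitute your own), a one-line update run from Query Analyzer would set the version:

```sql
-- Hypothetical module name; use the FriendlyName of your own module.
-- The xx.xx.xx format is what the packager expects (e.g. 01.00.00).
UPDATE DesktopModules
SET    Version = '01.00.00'
WHERE  FriendlyName = 'Product Catalog'
```

The same value then drives both the .dnn manifest and the name of the generated 01.00.00.SqlDataProvider file.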
Detecting module dependencies was definitely one of the more interesting parts of this project. Again, I had to make several assumptions in order to make it work. The basic process of dependency detection is done by obtaining a list of all of the content files and reading through the header lines to find the Inherits attribute of the Control directive and the Assembly attribute of any Register directives.
There are two lines that I look for. The first is the Control tag. It looks something like this:
<%@ Control Language="c#" AutoEventWireup="false" Codebehind="EditProduct.ascx.cs" Inherits="SkyeRoad.ProductCatalog.EditProduct" TargetSchema="http://schemas.microsoft.com/intellisense/ie5"%>
I search for the Inherits tag and grab that value. I assume that the assembly name that references this object will be the full class namespace minus the class name. So in the example code above, I am assuming that the assembly (DLL) we're looking for is SkyeRoad.ProductCatalog.dll since the full class namespace is SkyeRoad.ProductCatalog.EditProduct.
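That assumption reduces to a simple string operation. A minimal standalone sketch (not the packager's exact code) using the Inherits value from the example above:

```csharp
using System;

class AssemblyNameFromInherits
{
    static void Main()
    {
        // Assumption: the assembly is named after the full namespace,
        // i.e. everything before the last "." in the Inherits value.
        string inherits = "SkyeRoad.ProductCatalog.EditProduct";
        string assembly =
            inherits.Substring( 0, inherits.LastIndexOf( "." ) ) + ".dll";
        Console.WriteLine( assembly ); // SkyeRoad.ProductCatalog.dll
    }
}
```

Note that this assumption breaks down if a module's root namespace does not match its assembly name, which is one reason manual assembly selection remains available in the wizard.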
I also look for lines that contain information about an external control. It looks something like this:
<%@ Register TagPrefix="SRSWC" Namespace="SkyeRoad.WebControls" Assembly="SkyeRoad.WebControls"%>
I search for the Assembly tag and grab that value and append .dll to it. In this case, it isn't necessary to assume what the assembly name might be since it is clearly spelled out for us.
A third way I detect dependencies is by calling the Assembly.GetReferencedAssemblies() method on each primary assembly. I refer to any assembly found in the Control tag of a content file as a primary assembly.
Once again, I created a utility class to handle the work of locating dependencies. Below is the method FindAssemblies(). Notice that this class uses regular expressions to determine if a line contains the tags we're looking for.
public static string[] FindAssemblies( string[] contentFiles, string sitePath )
{
    // We create a hashtable to hold the names of the assemblies we find. Since
    // the default behaviour of a hashtable is to store unique keys only, it
    // provides a simple shortcut to ensure that we only have one of each
    // assembly found. Since we're iterating through multiple content files,
    // it is possible that we'll find the same assembly name twice so this
    // ensures that it will only be in the final list once.
    Hashtable assemblies = new Hashtable();

    foreach( string contentFilepath in contentFiles )
    {
        StreamReader reader = File.OpenText( contentFilepath );
        string line = "";
        while( (line = reader.ReadLine()) != null )
        {
            if( line.IndexOf( "<%@" ) > -1 )
            {
                // If we find an Assembly key, then we have some sort of
                // other dependency on the page, probably a web control of
                // some sort. We will need to list this as a dependency
                Regex regex = new Regex(
                    "Assembly\\s*=\\s*\"?(?<assembly>[^\"]+)\"",
                    RegexOptions.IgnoreCase );
                Match match = regex.Match( line );
                if( match.Success )
                    assemblies[match.Groups["assembly"].Value + ".dll"] = 0;

                // The Inherits tag indicates what namespace the main assembly
                // is in for this particular content file. We need to grab
                // this as well
                regex = new Regex(
                    "Inherits\\s*=\\s*\"?(?<namespace>[^\"]+)\"",
                    RegexOptions.IgnoreCase );
                match = regex.Match( line );
                if( match.Success )
                {
                    // Because the Inherits tag gets us a full namespace, we
                    // need to leave off the name of the actual object in
                    // order to know the assembly name.
                    string nameSpace = match.Groups["namespace"].Value.Substring(
                        0, match.Groups["namespace"].Value.LastIndexOf( "." ) );
                    assemblies[nameSpace + ".dll"] = 1;
                }
            }
        }
        reader.Close();

        ArrayList dependencies = new ArrayList();

        // Iterate through our assemblies list and use reflection to
        // determine any dependencies we may have missed.
        foreach( string key in assemblies.Keys )
        {
            // If it was set to 1 then we know that it is our primary assembly
            if( (int)assemblies[key] == 1 )
            {
                if( File.Exists( sitePath + "\\bin\\" + key ) )
                {
                    AssemblyName[] assemblyNames =
                        Assembly.LoadFrom( sitePath + "\\bin\\" + key )
                            .GetReferencedAssemblies();
                    foreach( AssemblyName aName in assemblyNames )
                    {
                        // Make sure they aren't framework assemblies,
                        // mscorlib, or the DotNetNuke assembly itself
                        if( !aName.Name.ToLower().StartsWith( "system" ) &&
                            !aName.Name.ToLower().StartsWith( "microsoft" ) &&
                            !aName.Name.ToLower().StartsWith( "mscorlib" ) &&
                            !aName.Name.ToLower().StartsWith( "dotnetnuke" ) )
                        {
                            dependencies.Add( aName.Name + ".dll" );
                        }
                    }
                }
            }
        }

        foreach( string dep in dependencies )
        {
            assemblies[dep] = 1;
        }
    }

    // Convert our hashtable to an array to pass back to the caller
    string[] retArray = new string[ assemblies.Count ];
    int index = 0;
    foreach( string key in assemblies.Keys )
    {
        retArray[index++] = key;
    }

    // Return the array
    return retArray;
}
I start out by opening the content file and reading through it. If a line contains the string "<%@", then I know I have found one of the lines I am interested in. Next, I check the line to see if it has the Assembly or Inherits tags. If it does, I extract the value for each of those and determine what assembly I need from there according to the assumptions I stated earlier in this section. Then, I add the assembly to a hashtable and set its value to 1 if it is a primary assembly and a 0 if it is not (see above for the definition of "primary assembly").
The next thing I do is iterate through all of the assemblies I have found, and if they have a 1 assigned to them, I use reflection to obtain a list of dependencies through the GetReferencedAssemblies() call. You'll notice that I have chosen to exclude certain assemblies from the detection process. We don't need .NET framework assemblies to be included in our module, nor do we need MSCorLib or the DotNetNuke assembly. There may be others that I've overlooked, but I will only know for sure over time and with a lot of use.
The final step before returning is to convert the contents from a hashtable to an array. This creates some extra overhead, but I prefer to return a regular array rather than a hashtable in this instance. This is really a personal preference more than anything; there is nothing special about doing it this way.