Tuesday, November 30, 2010

Recycle application pools using appcmd.exe in IIS 7

While hacking along on some deployment scripts today I encountered an issue when I needed to recycle specific application pools via code.

In IIS 7, appcmd.exe was added to enable a programmatic way of interacting with IIS. On Technet, the syntax to recycle an app pool is given as follows:

appcmd recycle apppool /apppool.name: string
This will however result in the following error message:

ERROR ( message:The attribute "apppool.name" is not supported in the current command usage. )
This can feel a bit strange. I then noticed that the article on Technet has a typo: the space between /apppool.name: and the name of the application pool should not be there. Without it, the command executes perfectly.

As an addition, application pools with spaces in their names can be recycled by simply wrapping the name in quotes, like so:

appcmd recycle apppool /apppool.name:"My Application Pool Name" 
I added a comment about the issue to the Technet article for others running into the same problem. As far as I can tell, there is no way to submit corrections to articles other than through comments?

Friday, November 19, 2010

Calling stored procedures from BizTalk (and other applications) and the FMTONLY flag

Most developers will run into the issue of not being able to generate metadata from a stored procedure even though it is perfectly valid and can be accessed and run without a hitch from SQL Server Management Studio or directly from code. A lot of the time there will be an error message along the lines of

Error while retrieving or generating the WSDL. Adapter message: Retrieval of Operation Metadata has failed while building WSDL at 'TypedProcedure/dbo/FetchTestData'

Microsoft.ServiceModel.Channels.Common.MetadataException: Retrieval of Operation Metadata has failed while building WSDL at 'TypedProcedure/dbo/FetchTestData' ---> System.Data.SqlClient.SqlException: Invalid object name '#temp01'.
at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection)
at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection)

In some cases there will be no error at all (SSIS is prone to this for example).

The above error message gives us something to go on though. Reviewing the stored procedure, we can see that a temporary table is created and filled with data. Later on, this table is used in a select in order to return data to the client. Nothing unusual, yet it still fails. The stored procedure looks like this (never mind the necessity of the temp table; it is just a demo of the issue):

ALTER PROCEDURE [dbo].[FetchTestData]
(@a4 varchar(4))
AS
SELECT t1, t2, t3 INTO #temp01 FROM Table_1
SELECT t1, t2, t3 FROM #temp01

The next step is then to run profiler during a run of the generate wizard to see what is actually happening in the background. How does the metadata get generated by the wizard and why does it fail?

When running the generation, we can see the following bits in the trace

exec [dbo].[FetchTestData] @a4=NULL

Now we are getting somewhere. Before executing the procedure, the adapter issues SET FMTONLY ON, a setting used to skip the processing of rows and return only response metadata to the client. However, our stored procedure uses a temporary table, which FMTONLY=ON prevents from ever being created, since no statements that modify data are actually executed. The select on the temporary table then fails because the table never existed, causing the error messages mentioned above.
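The behavior can be reproduced directly in SQL Server Management Studio by issuing the same commands the adapter does (a sketch, using the demo procedure above):

```sql
-- Simulate what the metadata wizard does: with FMTONLY ON,
-- no rows are processed and SELECT INTO never creates #temp01,
-- so the second select in the procedure fails with
-- "Invalid object name '#temp01'".
SET FMTONLY ON
EXEC [dbo].[FetchTestData] @a4 = NULL
SET FMTONLY OFF
```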

There is a way around this issue though. Since we know what is happening, we can revert the SET FMTONLY ON that the adapter issues before executing the procedure. We should however not simply add SET FMTONLY OFF at the beginning of our procedure: that would cause the entire procedure to actually execute during metadata generation, which might not be a good choice. If the procedure only selects data it is fine, but if it also includes insert, update and delete statements, these would run as well.

Instead, we check for the FMTONLY flag early on and, if it is set, switch it off where needed and then switch it back on again. Our modified and metadata-generation-safe procedure now looks like this:

ALTER PROCEDURE [dbo].[FetchTestData]
(@a4 varchar(4))
AS
DECLARE @FmtOnlyIsSet bit = 0
IF (1=0) BEGIN SET @FmtOnlyIsSet = 1 END
IF @FmtOnlyIsSet = 1 SET FMTONLY OFF

SELECT t1, t2, t3 INTO #temp01 FROM Table_1

IF @FmtOnlyIsSet = 1 SET FMTONLY ON

SELECT t1, t2, t3 FROM #temp01


What magic is done here?

First, we declare a variable that can hold the current setting of the FMTONLY flag. We call it @FmtOnlyIsSet and set it to false by default.

The IF (1=0) bit may look a bit off, but is in fact quite clever. When FMTONLY is set to ON, all conditional checks are ignored but their contents are still executed, since every possible return path has to be examined. By checking for an impossible condition (1=0) we can be sure that the statement inside the IF runs only when FMTONLY is ON. Hence we set our flag to true there.

Then we simply check whether our flag is true and, if so, switch off the FMTONLY setting before creating the temporary table. Afterwards we do the same check and switch it back on. This last part is important for the reason mentioned above: if we don't switch FMTONLY back on, all remaining statements will run just as in a normal execution of the procedure, which might not be wanted.

It is worth knowing that the FMTONLY setting is used not only during metadata generation at development time, but also when the procedure is actually called from the application. I noticed this when using typed datasets in BizTalk Server with the WCF-SQL adapter. I couldn't alter the stored procedure, so I instead created a mock procedure to generate schemas from. I then assumed that I could safely call the original procedure from BizTalk, but I still got the invalid object error message. A quick look in Profiler showed that the adapter makes two calls to the stored procedure: first one with FMTONLY set to ON, and then one without to actually execute the code.

My guess is that the adapter is smart enough to check that the signature of the procedure matches the previously generated and deployed metadata before executing code that could change data in the database. If the returned dataset didn't match the schema, we would know before any code had been executed.

I have only seen this behavior when using typed datasets, which makes sense. With the technique described above it isn't an issue at all, and I rather like the idea that the contract is checked before the procedure executes. Nor does it pose much of a performance hit: the result of the metadata extraction is cached, so only the first call needs to fetch metadata. I am still not sure for how long the metadata is cached before being refreshed, but it seems to be held for quite a while.

Thursday, November 11, 2010

BizTalk in the cloud - Integration as a service

I totally missed it, but roughly two weeks ago it was announced on the BizTalk Server Team Blog that the future is in the cloud.

When I attended the European BizTalk Conference and the sessions based on the book Applied Architecture Patterns on the Microsoft Platform, I got the feeling that Azure is something I should start working with. Now I'm sure of it, especially since the BizTalk/Azure hybrid will be released as a CTP sometime during the spring/summer of 2011.

As I concluded during the conference, what we have in front of us at this point (and in the near future) is not a replacement, but an addition of tools to build solutions with. The same point is made by Daniel Probert in his blog post on the subject.

I'm also happy to see that the cloud is not viewed as the "solution to everything". The future integration platform from Microsoft will be offered as an on-premises product based on AppFabric. I'm looking forward to this since I believe it will solve a lot of problems I'm facing today regarding complex low-latency processes that will work extremely well in an AppFabric on-site platform.

So regarding the future of integration, we now have a pretty clear direction to head in, and after reading the announcement from Microsoft I believe it is the right path. Since most of the platform is moving to the cloud, so should the integration, while still having the option to keep things off-cloud if security, performance or other requirements dictate so.

Tuesday, November 9, 2010

Error when importing bindings: "Failed to update binding information."

When importing bindings into a BizTalk application, the following error message might appear:

TITLE: Import Bindings
Failed to update binding information. (mscorlib)
Cannot update send port "MoAGSendPort". (Microsoft.BizTalk.Deployment)

Cannot update transport information (address "C:\temp\SHS\ut\1.mb510i1_%SourceFileName%"). (Microsoft.BizTalk.Deployment)

The following items could not be matched up to hosts due to name and/or trust level mismatches:
Item: 'FILE' Host: 'SendHost' Trust level: 'Untrusted'
You must do one of the following:
1) Create hosts with these names and trust levels and try again
2) Re-export the MSI without the binding files and have a post import script apply a suitable binding file. (Microsoft.BizTalk.Deployment)

While the message might be correct regarding the host name or trust level, a more common reason for the failure is that the host doesn't have an adapter handler specified that matches the bindings.

In the Admin Console, browse to BizTalk Server 2009 Administration > BizTalk Group > Platform Settings > Adapters and look at the adapter mentioned in the message. In my case it says "Item: 'FILE'", so the File adapter is where I'm heading, and it is indeed missing a send handler for the SendHost host.

To add a send handler, right-click on the adapter (or in the detail view of the window) and select New > Send Handler...

Then select the host that needs the send handler and click OK.

Restart the host instance in question and then try to import the binding again.

WCF exception Could not establish trust relationship for the SSL/TLS secure channel with authority 'server:port'

I was recently working with some WCF services using the wsHttpBinding and therefore calling them over SSL. I had a certificate set up, but when trying to reach the service from my test client, I couldn't browse the metadata. In the event log, I found the following error messages.

Exception Information Type[SecurityNegotiationException] Source[mscorlib] Message[Could not establish trust relationship for the SSL/TLS secure channel with authority 'server:port'.]

Exception Information Type[WebException] Source[System] Message[The underlying connection was closed: Could not establish trust relationship for the SSL/TLS secure channel.]

Exception Information Type[AuthenticationException] Source[System] Message[The remote certificate is invalid according to the validation procedure.]

The next step was to try to browse the WSDL in Internet Explorer, which warned that the certificate was issued for a different website's address.

Now it made sense. I had tried to browse the service using an endpoint URL of https://localhost:11001/path, while the certificate I was using was issued to the actual server name, as can be seen both in Internet Explorer when checking the certificate information and in the MMC Certificates snap-in.

In other words, even if localhost and my server's full name can be used interchangeably in most cases, that is not so when it comes to security certificates, where the server name is quite vital. After switching to the correct endpoint URL, in my case https://server1.domain.com:11001/path, it worked as expected.

It should be noted that there is a way of bypassing the certificate verification in the client by setting the ServerCertificateValidationCallback property as below:

using System.Net;
using System.Net.Security;
using System.Security.Cryptography.X509Certificates;

ServicePointManager.ServerCertificateValidationCallback +=
    delegate(object sender, X509Certificate cert, X509Chain chain, SslPolicyErrors sslError)
    {
        // Accept any certificate, regardless of validation errors
        return true;
    };
This is however quite dangerous and should not be used in production code. If code like this is used, a good practice is to wrap it in #if DEBUG directives to keep it from reaching production (but then you risk having all tests pass without problems while a hard-to-find error lurks in the production environment that cannot be replicated in test).

Monday, November 8, 2010

WCF exception PlainXmlWriter+MaxSizeExceededException

When logging and tracing are switched on for a WCF service, the following exception might be thrown:

A message was not logged.
Exception: System.InvalidOperationException: There was an error generating the XML document. ---> System.ServiceModel.Diagnostics.PlainXmlWriter+MaxSizeExceededException: Exception of type 'System.ServiceModel.Diagnostics.PlainXmlWriter+MaxSizeExceededException' was thrown.
The reason is that the maxSizeOfMessageToLog configuration attribute is set to a value lower than the size of the message that was being logged.

<messageLogging logEntireMessage="true"
                logMalformedMessages="true"
                logMessagesAtServiceLevel="true"
                logMessagesAtTransportLevel="true"
                maxMessagesToLog="30000"
                maxSizeOfMessageToLog="200000" />
And while on the subject, it can be worth checking out the MSDN recommended settings for tracing and message logging at http://msdn.microsoft.com/en-us/library/aa702726.aspx.
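For reference, the messageLogging element lives under the diagnostics section of system.serviceModel in the service configuration file; a minimal sketch using the values from the example above:

```xml
<system.serviceModel>
  <diagnostics>
    <messageLogging logEntireMessage="true"
                    logMalformedMessages="true"
                    logMessagesAtServiceLevel="true"
                    logMessagesAtTransportLevel="true"
                    maxMessagesToLog="30000"
                    maxSizeOfMessageToLog="200000" />
  </diagnostics>
</system.serviceModel>
```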

Friday, November 5, 2010

T-SQL Select any row, but only one per key value

A colleague of mine asked me for help with a database query. The table in question was of the usual kind, with an id column and several columns of data. However, the id column did not hold unique ids; the same id could appear several times, with different data for each occurrence. The task was to select only one row per id, and it could be any one of the available rows.

My example table has three columns: the first column, t1, holds the to-be-distinct ids, while t2 and t3 contain random data.

A common solution to this problem seems to be to use cursors or temporary tables. I couldn't see why it shouldn't be possible to do such a select without them, so after some thinking I came up with the following:

SELECT t1, t2, t3
FROM (
    SELECT t1, t2, t3,
           ROW_NUMBER() OVER (PARTITION BY t1 ORDER BY t1) AS rowrank
    FROM Table_1
) temp_a
WHERE rowrank = 1

Basically, the inner select creates a ranking value for each occurrence of an id. The outer select then keeps only the rows with a rank of 1, giving us just the first occurrence per id.

So for every occurrence of an id in the key column t1, only one row is selected. By modifying the ORDER BY in the inner query, a choice can be made of which of the rows is deemed more interesting than the others (in this case it didn't matter).
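To illustrate with some made-up sample data (the table definition and values here are hypothetical, just to show the shape of the result):

```sql
-- Hypothetical sample data for Table_1
CREATE TABLE Table_1 (t1 int, t2 varchar(10), t3 varchar(10))
INSERT INTO Table_1 VALUES (1, 'a', 'x')
INSERT INTO Table_1 VALUES (1, 'b', 'y')
INSERT INTO Table_1 VALUES (2, 'c', 'z')

-- The query then returns two rows: one for t1 = 1
-- (either of the two candidate rows) and one for t1 = 2.
```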