I'm running a Raspberry Pi at home for various tasks and access it remotely over SSH for all maintenance. This means that the HDMI output on the Pi is unused and can be switched off; it may not give better performance, but it does give a lower operating temperature.
When logged in, you can switch the HDMI off by running
tvservice -o
However, this setting resets at boot time, so it needs to be reapplied on every startup. I did it the quick way by adding the command to an init script:
sudo vi /etc/init.d/customboot.sh
Add the following:
#!/bin/sh
### BEGIN INIT INFO
# Provides: customboot
# Required-Start: networking
# Required-Stop: networking
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: Custom boot commands
# Description: Custom boot commands
### END INIT INFO
echo "Turning off tvservice"
/opt/vc/bin/tvservice -o
Then make the script executable and register it in the boot sequence:
sudo chmod +x /etc/init.d/customboot.sh
sudo update-rc.d customboot.sh defaults
By doing this, I lowered the idle temperature of my Pi (as reported by vcgencmd measure_temp) by about two degrees Celsius.
Thursday, July 3, 2014
Slow performance in PowerShell Get-ChildItem over UNC paths
I got a ticket submitted to me that proved to be quite interesting. The gist was that a specific maintenance script had been running for ages on a server, causing some issues.
The script itself was quite simple: it did a recursive gci on a path and performed an action on each item matching a where clause that identified empty folders. Nothing wrong there. However, even though the specified path contained a massive number of folders and files, it still shouldn't have been running for days (literally), which was what caused the ticket to be created.
When looking into the matter, I found this blog post on the Windows PowerShell blog that goes into detail on why Get-ChildItem has slow performance at times. The main culprits are the .NET APIs, which are too chatty and cause a lot of overhead traffic over the network when querying for files. This was fixed in PowerShell 3.0, which uses new APIs.
A quick test using a folder with 700 subfolders and a total of 40,000 files, where I execute the script line
(gci d:\temp -Recurse | Where-Object {$_.PSIsContainer}) | Where-Object {($_.GetFiles().Count + $_.GetDirectories().Count) -eq 0} | Out-File d:\dir.txt
reveals the following execution time numbers:
Using PowerShell 2.0 and accessing the path as a UNC path: 100s
Using PowerShell 3.0 and accessing the path as a UNC path: 33s
Using PowerShell 2.0 and accessing the path locally: 6s
Using PowerShell 3.0 and accessing the path locally: 5s
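For reference, timings like these can be captured with Measure-Command. Here is a minimal sketch of the UNC case, assuming a hypothetical path \\fileserver01\d$\temp:
Measure-Command {
    # fileserver01 and the share path are placeholders for the real server
    (gci \\fileserver01\d$\temp -Recurse | Where-Object {$_.PSIsContainer}) |
        Where-Object {($_.GetFiles().Count + $_.GetDirectories().Count) -eq 0} |
        Out-File d:\dir.txt
} | Select-Object TotalSeconds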
In my case, the server was both running PowerShell 2.0 and accessing the path as a UNC path, causing a major performance hit. The solution is simply to run the script locally on the server where the task has to be performed, and to upgrade to at least PowerShell 3.0. As can be seen from my quick test, the biggest gain is clearly from running the script locally, giving about 17 times better performance.
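If the script has to be triggered from another machine, one option is to push the work to the file server with PowerShell remoting so the path is still accessed locally. A minimal sketch, assuming remoting is enabled and fileserver01 is a hypothetical server name:
Invoke-Command -ComputerName fileserver01 -ScriptBlock {
    # Runs on the file server itself, so d:\temp is read locally instead of over SMB
    (gci d:\temp -Recurse | Where-Object {$_.PSIsContainer}) |
        Where-Object {($_.GetFiles().Count + $_.GetDirectories().Count) -eq 0} |
        Out-File d:\dir.txt
}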
While it is no rocket science that accessing tons of files locally has to be faster than doing it over the network, it is still fairly common to see scripts executed on application servers performing tasks on file shares.
This also proves that it is important in our business to stay on top of new versions of tools and frameworks in order to catch these types of improvements. I am certain that there are quite a number of people out there bashing gci heavily for its slow performance when in fact it has improved a lot between PowerShell 2.0 and 3.0.
Tuesday, June 24, 2014
Base64 encode/decode part of messages in map - part 2
Back in November I wrote a post about encoding messages as base64 strings in a BizTalk map. I never added the explicit implementation of how to decode the message though. However, I was asked to provide an example of it, so here is part 2: how to decode a base64 string from an XML message and output it in BizTalk.
Consider the scenario to be the opposite of what we did in part 1.
We have a message on the input side that has two fields, one with an id as a string, and another string element that holds the entire base64 encoded string.
<ns0:Root xmlns:ns0="http://BizTalk_Server_Project6.Schema1">
<SomeId>1</SomeId>
<EncodedData>PG5zMDpSb290IHhtbG5zOm5zMD0iaHR0cDovL0JpelRhbGtfU2VydmVyX1Byb2plY3Q2LlNjaGVtYTIiPg0KICA8RGF0YT5EYXRhXzA8L0RhdGE+IA0KICA8RGF0YTI+RGF0YTJfMDwvRGF0YTI+IA0KPC9uczA6Um9vdD4=</EncodedData>
</ns0:Root>
On the output side, we want to have a message that conforms to the schema we have defined. The message is the entire contents of the base64 encoded string in the input.
If we parse the encoded string, we get:
<ns0:Root xmlns:ns0="http://BizTalk_Server_Project6.Schema2">
<Data>Data_0</Data>
<Data2>Data2_0</Data2>
</ns0:Root>
This conforms fully to the schema we defined.
So, the task is to extract the contents from the EncodedData element, decode them, and use the full string as the output message. All in a single BizTalk map.
The map will look like this to begin with, with the two schemas chosen:
Similar to when we encoded, we first add a scripting functoid that has no connections and paste the code for the decode function into it:
public string Base64DecodeData(string param1)
{
    // Convert the base64 string back to its raw bytes, then interpret the bytes as UTF-8 text
    byte[] decodedBytes = Convert.FromBase64String(param1);
    return System.Text.UTF8Encoding.UTF8.GetString(decodedBytes);
}
We simply take a string as an input parameter, decode it, and return the decoded string.
Then we create a new scripting functoid, set it to Inline XSLT, and paste this code into it:
<xsl:variable name="data">
<xsl:value-of select="//EncodedData" />
</xsl:variable>
<xsl:value-of select="userCSharp:Base64DecodeData($data)" disable-output-escaping="yes" />
This functoid is connected to the output root node giving us a map that looks like this:
When executing this map with the input message above, we will get the output message properly formatted.
Two tricks are used here. First off, we use the same pattern as before, calling a predefined function that lives in another scripting functoid in order to call a simple C# function from our XSLT code. Then, in the XSLT, we first extract the string from our input message and store it in a variable. This is passed into our C# function and we get a string back. However, if we do not specify disable-output-escaping="yes" in our value-of select, the string would be entity encoded. With this extra attribute set, the string is output as is, the way we want it.
The same technique can of course easily be used to just output part of a message by simply connecting the scripting functoid to the node you want to populate (if for instance you have a schema that has a node defined as xs:any that you want to use).
Monday, June 2, 2014
Error 5644 when trying to enable a SQL Server notification receive location in BizTalk Server
When using the SQL Server broker functionality to have SQL Server push notifications to BizTalk instead of polling a table, I've found that the following error is quite common to encounter:
The Messaging Engine failed to add a receive location "Event1" with URL "mssql://localhost//EventDB" to the adapter "WCF-Custom". Reason: "Microsoft.ServiceModel.Channels.Common.TargetSystemException: The notification callback returned an error. Info=Invalid. Source=Statement. Type=Subscribe."
The error message tells you that the statement entered is invalid, but not in what way. There are a lot of rules to comply with, all of which are listed on MSDN: http://msdn.microsoft.com/en-us/library/ms181122.aspx
Besides the "normal" ones, such as "do not use aggregate functions" and "do not use an asterisk to define the result set", the one that constantly haunts me is "table names must be qualified with two-part names", meaning that you have to prefix all tables with the schema name, such as "dbo.EventTable". Just using
select blah from EventTable where processed=0
will generate the error above, while the two-part variant select blah from dbo.EventTable where processed=0 will pass the validation.
Thursday, May 29, 2014
SQL Server: Invalid prefix or suffix characters error message
When trying to do different operations in SQL Server Management Studio such as "Edit Top 200 Rows", you might get the error message "Invalid prefix or suffix characters. (MS Visual Database Tools)".
This is most likely due to the fact that the Management Studio tool is an older version than the database you are connected to and trying to perform the operation on. For instance, using SQL Server Management Studio for SQL Server 2008R2 and connecting to a SQL Server 2012 database will render the above error message when trying to perform the "Edit Top 200 Rows" operation on a table or view.
The solution is to use the same version of Management Studio as the database in question.
Friday, April 4, 2014
BizTalk with codeplex SFTP adapter, doing a file move on AfterGet
A client has a setup with the Codeplex SFTP adapter, reading files from a site and then deleting them when read, using the Delete parameter in the AfterGet property of the receive location.
A need arose where the files would instead need to be moved to an archive folder on the SFTP site rather than being deleted.
There is no mention of this in the documentation of the adapter; however, by checking the underlying code one can see how the move can be done.
The OnBatchComplete method, located in BatchHandler.cs, will be called when a message has been sent successfully to BizTalk and the file hence should be removed/renamed on the SFTP server. The line
string renameFileName = CommonFunctions.CombinePath(Path.GetDirectoryName(fileName), batchMessage.AfterGetFilename);
is the interesting part. We see that when the Rename option is chosen in the adapter properties, the CombinePath method is called to create the new full path for the file, based on the directory where it currently resides and a new filename. This can be utilized by simply setting the new filename to include a relative path in addition to the filename, thereby turning the rename into a move (while keeping the rename itself optional), since the macro replacements are done after the full path has been created.
Suppose we read files located in /home/marcus/btstest/ and, after reading, these should be moved to /home/marcus/btstest/archive/. In this case, you would set the AfterGet File Name property to archive/%SourceFileName%.
If you instead want to move the files to a directory outside of your current path, you can do so by specifying the parent by the usual two dots (..) like for instance ../btstestarchive/%SourceFileName% for the directory located at /home/marcus/btstestarchive/.
One issue can occur though if you use this technique to move files to an archive folder in order to keep a log on the FTP server side of all files that have been processed. If the filename already exists in the archive folder and the adapter tries to move yet another file with the same name to this folder, it will fail. It will be logged in the event log as "Method: Blogical.Shared.Adapters.Sftp.SharpSsh.Sftp.Rename
Error: Unable to rename /home/marcus/btstest/testfile.xml to /home/marcus/btstest/archive/testfile.xml", but the original file will be left in the pickup folder on the SFTP site.
BizTalk will not try to process this file again, until you restart the host that is. Then it will be picked up once more, sent to the MessageBox, and the rename will fail again. You can make it a bit safer by using the %DateTime% or %UniversalDateTime% macros that are also available besides %SourceFileName%. Yet another option is to extend BatchHandler.cs with a %MessageID% macro and create a GUID to add to the filename in order to make it truly unique.
Monday, March 31, 2014
Fix for jerky mouse movements in remote desktop/Citrix sessions to Windows Server 2012
When connecting with Citrix to a Windows Server 2012 machine, you might experience that the mouse pointer lags severely, causing jerky movement and making it nearly impossible to get any proper work done on the server.
If this is the case, you should make sure that the Enable pointer shadow checkbox in the Mouse Properties dialog is unchecked. This feature has been the issue on many machines I've encountered lately, and unchecking it has made the UI more enjoyable.
Friday, March 7, 2014
Recreating a BizTalk Active Directory group will cause access to BizTalk to fail
When setting up and configuring a BizTalk environment you will also add the necessary Active Directory groups to allow access for different accounts to the platform.
In most cases, you will not touch these groups again, but there might come a time when you remove and add the groups with the same name within Active Directory. Doing so will change the SID of the group and it will no longer match the SID stored in SQL Server for the Login.
Trying to access the BizTalk Server Administration Console as a BizTalk Server Operator will for instance yield the following error when the corresponding AD group has been recreated:
You will get an error saying "BizTalk Server cannot access SQL Server. [...] Access permissions have been denied to the current user. [...] Login failed for user [...]".
You will also notice a logged error in SQL Server saying "Login failed for user [...] Reason: Could not find a login matching the name [...]".
When looking at both the Active Directory accounts/groups and the SQL Server security settings, everything will look OK.
In order to correct this, you will have to remove and then add the corresponding Login in SQL Server. This might also require you to delete and then add the corresponding User in the different BizTalk databases (in general BizTalkMgmtDb, BizTalkDTADb and BizTalkMsgBoxDb); SQL Server will notify you about this when removing the Login.
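As a rough sketch of the SQL Server side (the group name and instance are hypothetical and must be adapted; Invoke-Sqlcmd requires the SQL Server PowerShell module):
$group = 'MYDOMAIN\BizTalk Server Operators'   # hypothetical AD group name
$instance = 'localhost'                        # hypothetical SQL Server instance
# Drop and recreate the Windows login so SQL Server picks up the group's new SID
Invoke-Sqlcmd -ServerInstance $instance -Query "DROP LOGIN [$group]; CREATE LOGIN [$group] FROM WINDOWS;"
# If SQL Server complains about existing users in the BizTalk databases, drop and recreate
# those users as well (DROP USER / CREATE USER ... FOR LOGIN) and re-add their role memberships.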
After doing these remove/adds in SQL Server, you should be able to access the resources as expected.
Note that this is documented briefly on the Troubleshooting BizTalk Server Administration page on MSDN, but you more or less have to know where, and for what, to look in order to find it.
Friday, February 28, 2014
The difference between single and double quotes in PowerShell
I had a colleague ask me the other day whether there is a difference between using single or double quotes when defining a string variable in PowerShell. For that specific question, the answer was "no". However, there is an important distinction to make between the two.
With double quotes, the text within the quotes will be parsed. For instance, the following code will output "Hello Marcus":
$name = 'Marcus'
$output = "Hello $name"
write-host $output
However, when changing the second line to using single quotes as so:
$output = 'Hello $name'
The output will instead be "Hello $name".
The parser's handling of the different quote types also means that if you want to use escape characters, for instance `n, you have to put the string within double quotes for the escape characters to work. If you want to use a double quote within such a string, you can write it using the escape character:
$mystring = "This is a variable called `"mystring`""
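A quick illustration of the escape-character behaviour (the example strings are made up):
write-host "Line one`nLine two"    # `n is expanded to a newline, so this prints two lines
write-host 'Line one`nLine two'    # with single quotes, the backtick and n are printed as-is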
Having the two different quote types also allows for creating strings that contain one of the two quote characters, for instance when building query strings:
$name = 'Marcus'
$query = "SELECT Id, City from Customers WHERE Name LIKE '%$name%'"
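Printing the variable shows that the single quotes inside the double-quoted string pass through literally while $name is expanded:
write-host $query
# Output: SELECT Id, City from Customers WHERE Name LIKE '%Marcus%'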
Knowing this difference between single and double quotes is quite crucial for building PowerShell scripts without getting into a mess when dynamically executing other scripts or programs that require parameters to be passed to them.