Wednesday, November 1, 2017

Collect MSDTC trace logs, access denied

During a support case, I needed to collect MSDTC trace logs. However, I ran into the issue of not having access to the c:\windows\system32\msdtc folder.

The way to access this folder and the trace files collected is to use an elevated command prompt. Running cmd as administrator gives you access to the folder and lets you copy the needed trace files.
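For example, from the elevated prompt you can copy the trace files (which in my experience end up in a Trace subfolder, so adjust the paths to your setup) to a location your normal account can read:

xcopy c:\windows\system32\msdtc\trace c:\temp\msdtctrace\ /s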

Friday, October 13, 2017

Bulk modify items in Visual Studio Team Services

I just had a need to add around 500 work items for a new project in Visual Studio Team Services. We use VSTS for keeping track of our projects and use the Scrum template. This case was a bit different from a normal project since 98% of the work items were defined beforehand and most of the priorities were already set. I had a list of all the work items and needed to populate the scrum board with them in a quick way. Still, what I learned helped me tremendously in this case and can be very useful in other projects as well.

I had heard about the export/import functionality to Excel earlier, but was curious about how to do it in VSTS today. The documentation found online was a bit off and didn't quite tell me what I wanted; most pages and articles pointed me to old versions of TFS and Visual Studio. This post is for me (and others) to use as a reference in the future.


Start out by creating a custom query in VSTS that returns all the work items you want, in a tree view format. In my case I want absolutely everything, but you could always filter out specifics to work with if you'd like.


Then make sure to enable the Team ribbon add-in in Excel. I had several versions installed (most likely due to multiple installations of Visual Studio) so I enabled the one with the highest version number.

Go to Options and then Add-ins in the left menu bar. At the bottom you'll find a drop-down where you can select COM Add-ins, then click Go.



If you don't have the Team Foundation add-in in the list, you can get it either by installing Visual Studio or by installing the Team Foundation Server Office Integration from here (a bit down in the downloads list).

Then create a new Excel workbook and connect to VSTS via the newly enabled Team ribbon: select New List, connect to VSTS, and then select the new "All items" query.

Add those columns you are interested in. In my case it was Remaining Work and Effort, since I already had those numbers.


Also, in order to be able to work with items on different levels, you need to make sure that you have Title columns that correspond to the levels. In this case, Title 1 is used for the Backlog Item, and I have to add a level for the Tasks. Click Add Tree Level in the ribbon and you will get a new Title 2 column that will contain the Task title.




Then start to populate the table in the normal Excel way. Don't miss out on utilizing the filtering functionality in the table to quickly select items and set properties on them. One example is to filter out everything with "deploy" in the title and set the Activity property to Deployment using drag-and-fill. This is WAY faster than doing it in the web GUI, even though you can select and edit multiple items at once there.



Then it is just a matter of publishing to VSTS to populate the board with the items, so click Publish in the ribbon.


Jump over to VSTS and refresh the page to get your updated backlog.


Just be wary of others updating the board while you work. Changes may be overwritten and/or messed up.

Tuesday, September 26, 2017

Getting notifications from a mouse trap using RuuviTag, Raspberry Pi 3, Windows IoT Core (and a string)

I was part of a Kickstarter campaign that ended up with me receiving five RuuviTags. They are small battery powered tags that communicate via Bluetooth Low Energy (BLE) and are equipped with a sensor chip. They are of course extensible and can be flashed with custom firmware. All in all, fun toys to tinker with when time permits.

Since I have a few mouse traps scattered around the house in locations that I rarely visit, I'd like to get a notification when one of them has trapped a mouse. Since acceleration data is part of the data available from the RuuviTag's sensors (the others are temperature, atmospheric pressure, humidity and battery voltage), I thought this could be used to check whether a mouse trap has been triggered or not.



My basic setup is this:

- A mouse trap with the RuuviTag attached. The tag runs stock Ruuvi firmware in RAW mode, meaning that it broadcasts 20 bytes of raw sensor data via Bluetooth at a 1 Hz interval.
- A Raspberry Pi 3 running Windows IoT Core in my house that intercepts the signal. When a change in the acceleration data is identified, it is interpreted as the mouse trap having been activated (or flipped over, or in any way tampered with).
- If so, a REST call is made to Pushover, which in turn sends a notification to my cell phone (a sketch of that call follows below). I already use Pushover for other notifications in my home, such as notifying me that tool batteries are fully charged, or that the car heater hasn't been switched off (if, for instance, it was activated in the morning but we never took the car to work).
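The Pushover call is just a form-encoded POST to their REST endpoint. A minimal sketch using System.Net.Http (the application token and user key are placeholders, and error handling is omitted):

private static readonly HttpClient client = new HttpClient();

private async Task SendPushoverNotificationAsync(string message)
{
    // Pushover expects an application token, a user key and the message itself
    var content = new FormUrlEncodedContent(new Dictionary<string, string>
    {
        { "token", "APP_TOKEN" },  // placeholder for my application token
        { "user", "USER_KEY" },    // placeholder for my user key
        { "message", message }
    });

    await client.PostAsync("https://api.pushover.net/1/messages.json", content);
}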

Now, one could argue that the tag itself should be able to identify a change in the acceleration data and send a notification about it, instead of offloading the processing to the RPi3. However, since transmissions from the tags, while having extremely good Bluetooth range, can be missed by the RPi3, the current setup ensures that I always catch a change in acceleration data. Even if I miss five transmits from the tag directly after a change in acceleration data, the sixth transmit will differ from the last one received, and hence trigger the logic.

I will, however, look into turning down the transmit rate, since 1 Hz is more often than needed here. I guess I can prolong the battery life a bit by doing so.

On the plus side, since I have at least one tag in the garage, I can utilize the other data sent by the tag. The temperature can be used as input to the car heater logic so that the time for the car heater will differ depending on the temperature inside the garage. Right now, I use outdoor temperature, but since the car is inside, it is not 100% correct for optimizing performance/cost.

The code is not advanced at all. I briefly looked into pairing, GATT profiles and other stuff before realizing that I didn't need any of that.

I simply add a BluetoothLEAdvertisementWatcher to catch any BLE signals received.

// Listen for all BLE advertisements; filtering is done in the event handler
BluetoothLEAdvertisementWatcher watcher = new BluetoothLEAdvertisementWatcher();
watcher.Received += OnAdvertisementReceived;
watcher.Start();

I also have a small class to hold info regarding my tags and the received data.

bleTags.Add(new BLETag { BTAddress = 230418796132391, IDNumber = 5, Name = "Mouse trap garage" });
bleTags.Add(new BLETag { BTAddress = 241760085512663, IDNumber = 4, Name = "Mouse trap garage #2" });
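A minimal shape for these classes, consistent with the snippets in this post (property names beyond the ones already shown are my own), could be:

public class BLETag
{
    public ulong BTAddress { get; set; }  // 48-bit Bluetooth address as a number
    public int IDNumber { get; set; }
    public string Name { get; set; }

    // Latest parsed values plus the previous ones, used to detect movement
    public SensorData SensorData { get; set; }
    public SensorData PreviousSensorData { get; set; }
}

public class SensorData
{
    public double Humidity { get; set; }
    public double Temperature { get; set; }
    public double Pressure { get; set; }
    public double AccelerationX { get; set; }
    public double AccelerationY { get; set; }
    public double AccelerationZ { get; set; }
    public double BatteryVoltage { get; set; }
}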

Then, in my OnAdvertisementReceived event handler, I check the received Bluetooth address against the tags in my list to see if the advertisement comes from a tag I am interested in (there is quite a lot of Bluetooth traffic in my house).

int tagIndex = bleTags.FindIndex(x => x.BTAddress.Equals(eventArgs.BluetoothAddress));
if (tagIndex < 0) return; // not one of my tags

Then I need to get the second data section from the advertisement since that is where the raw sensor data is stored.

// The raw sensor bytes live in the second data section of the advertisement
BluetoothLEAdvertisementDataSection BLEDataSection2 = eventArgs.Advertisement.DataSections[1];
var dataReader = DataReader.FromBuffer(BLEDataSection2.Data);
byte[] fileContent = new byte[dataReader.UnconsumedBufferLength];
dataReader.ReadBytes(fileContent);

The byte array is then parsed byte by byte according to the specification to get each sensor value.
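Treat the following as a sketch rather than gospel: the offsets follow my reading of the RAW mode (data format 3) specification, and depending on how you slice the advertisement, the payload may be prefixed by the two-byte manufacturer ID, which shifts all offsets accordingly:

private SensorData ParseRawSensorData(byte[] data)
{
    // Temperature: integer part in byte 2 (sign bit in the MSB), fraction (1/100) in byte 3
    double temperature = (data[2] & 0x7F) + data[3] / 100.0;
    if ((data[2] & 0x80) != 0)
        temperature = -temperature;

    return new SensorData
    {
        Humidity = data[1] * 0.5,                              // 0.5 % increments
        Temperature = temperature,                             // degrees C
        Pressure = ((data[4] << 8) | data[5]) + 50000,         // Pa, offset by 50 kPa
        AccelerationX = (short)((data[6] << 8) | data[7]),     // mG, signed 16-bit big-endian
        AccelerationY = (short)((data[8] << 8) | data[9]),     // mG
        AccelerationZ = (short)((data[10] << 8) | data[11]),   // mG
        BatteryVoltage = ((data[12] << 8) | data[13]) / 1000.0 // volts
    };
}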

To check if the orientation has changed, I currently just check if any of the acceleration values has changed by more than 200 mG, which gives me enough slack not to trigger on normal sensor fluctuation, but still catch any movement of the sensor.

public bool OrientationHasChanged()
{
    bool retval = false;

    if (PreviousSensorData != null)
    {
        double xdiff = Math.Abs(PreviousSensorData.AccelerationX - SensorData.AccelerationX);
        double ydiff = Math.Abs(PreviousSensorData.AccelerationY - SensorData.AccelerationY);
        double zdiff = Math.Abs(PreviousSensorData.AccelerationZ - SensorData.AccelerationZ);

        double difflimit = 200; // breaking limit in mG
        if (xdiff > difflimit || ydiff > difflimit || zdiff > difflimit)
            retval = true;
    }

    return retval;
}

The tag, in turn, is currently fastened to a spring-loaded arm on the mouse trap using a string, so that when the trap is moved ever so slightly, or when it is triggered, the tag moves enough to change the acceleration data by more than the 200 mG needed to indicate a change.

When testing it out, it has proven to work very well. From a mouse meeting its creator to me getting a push notification takes 1-5 seconds. This is due to the sensor broadcasting at 1 Hz, not all advertisements reaching the RPi, and finally the push notification having to make it around the world and back to my phone.

The code is most likely going to change. Altering the filtering of advertisements, trying to increase the quality of transmissions so that none are dropped and similar things are in the pipeline.

Tuesday, December 8, 2015

Using the XMLTransmit pipeline to write your own custom XML declaration

A question in the MSDN BizTalk Forums caught my eye a few days ago. It was about wanting to set the attribute standalone="yes" in the XML declaration on an outbound message.

I have never had the need to set this attribute before, so I wasn't sure whether the map properties or any other setting could make it be written. The other responses in the thread told me that it couldn't. It was then I thought of an old post I wrote about using the XMLTransmit pipeline properties to write an xml-stylesheet reference into a message. Surely this had to be able to write a custom XML declaration as well. And it did.

What you basically do is set the property AddXmlDeclaration to false, and then write your full custom XML declaration in the property XmlAsmProcessingInstructions. This will write out your custom string as the XML declaration. Or in reality, it will not be written as the declaration, but since we omit the real one, it will take its place.
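For the standalone="yes" case from the forum thread, the XMLTransmit pipeline properties would be set along these lines:

AddXmlDeclaration: False
XmlAsmProcessingInstructions: <?xml version="1.0" encoding="utf-8" standalone="yes"?>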



Just make sure to have the other properties correctly set (the charset, for example), or the declaration might not match the document itself, which may or may not cause havoc on the receiving end.

Friday, October 9, 2015

Faster filtering in PowerShell than using Where-Object

In a project I'm using a PowerShell script to read in a lot of .csv files and then do some lookups between these in order to get the wanted output. This all works well, but has been a bit slow lately due to a lot more data in the files.

I narrowed the speed issue down to a few lines where I iterate over some files in order to find a few rows that I need. Basically, the Where-Object cmdlet is the culprit.

A simplified example is below:

$mycsvfile = Import-Csv .\mydata.csv
$dataiwant = $mycsvfile | Where-Object {$_.idno -eq 5}

I had a few similar lines in the script, making the time to select the data add up to 9 seconds, which was far too long in this case.

I found this post that shows a few variants of filtering collections, and by simply using the PowerShell v4 .Where() method to do the same thing, I could bring this down to a single second.

$mycsvfile = Import-Csv .\mydata.csv
$dataiwant = $mycsvfile.Where({$_.idno -eq 5})

So lesson learned: PowerShell is evolving quickly, and what I thought was a nice way to do something might very well be just fine, but there might also be a quicker way just around the corner. In this example, I'm sacrificing the streaming capabilities but gaining a lot of performance, just by changing a few characters.

Tuesday, September 29, 2015

XSD in Visual Studio and the warning message "Request for the permission of type 'System.Security.Permissions.FileIOPermission"

When working in a project in Visual Studio that had a schema repository added, the warning message "Request for the permission of type 'System.Security.Permissions.FileIOPermission" showed up in various places where I had schema imports. The warning did not show up everywhere though and the project built just fine.


It turned out that the reason was that some of the xsd files were downloaded or copied from a network share and were therefore blocked in Windows, leading to the warning message.


The warnings could be removed by simply unblocking the offending files and restarting Visual Studio.

This also led me to a quick way to unblock a lot of files at once. Simply use PowerShell to iterate over the files and pipe them to the Unblock-File cmdlet:
gci -r | Unblock-File

Tuesday, November 4, 2014

Raspberry Pi, kill the tvservice if you are not using HDMI

I'm running a Raspberry Pi at home for various tasks and access it remotely using SSH for all maintenance. This means that the HDMI output on the Pi is unused and hence can be switched off, maybe not for better performance, but at least for a lower working temperature.

When logged in, you can switch the HDMI off by running
tvservice -o
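(To check the current state, run tvservice -s, and to turn the HDMI output back on, run tvservice -p.)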

However, this will reset at boot time. So one way to handle this is to simply add the command to the boot scripts.

I did it the quick way by adding the command to the boot sequence:
sudo vi /etc/init.d/customboot.sh

Add
#!/bin/sh
### BEGIN INIT INFO
# Provides:          customboot
# Required-Start:    networking
# Required-Stop:     networking
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Custom boot commands
# Description:       Custom boot commands
### END INIT INFO

echo "Turning off tvservice"
/opt/vc/bin/tvservice -o

Then add permissions and add the script to the boot sequence
sudo chmod +x /etc/init.d/customboot.sh
sudo update-rc.d customboot.sh defaults


By doing this, I lowered the idle temperature of my Pi by about two degrees C:


Thursday, July 3, 2014

Slow performance in PowerShell Get-ChildItem over UNC paths

I got a ticket submitted to me that proved to be quite interesting. The basics were that a specific maintenance script had been running for ages on a server, causing some issues.

The script itself was quite simple: it did a recursive gci on a path and performed an action on each item matching a where clause that identified empty folders. Nothing wrong there. However, even if the specified path contained a massive amount of folders and files, it still shouldn't have been running for days (literally), which was what caused the ticket to be created.

When looking into the matter, I found this blog post on the Windows PowerShell blog that goes into detail about why Get-ChildItem has slow performance at times. The main culprits are the .NET APIs, which are too chatty and cause a lot of overhead traffic over the network when querying for files. This was fixed in PowerShell 3.0, which uses new APIs.

A quick test using a folder with 700 subfolders and a total of 40 000 files where I execute the script line

(gci d:\temp -Recurse | where-object {$_.PSIsContainer}) | Where-object {($_.GetFiles().Count + $_.GetDirectories().Count) -eq 0} | out-file d:\dir.txt

reveals the following execution time numbers:

Using PowerShell 2.0 and accessing the path as an UNC: 100s
Using PowerShell 3.0 and accessing the path as an UNC: 33s
Using PowerShell 2.0 and accessing the path locally: 6s
Using PowerShell 3.0 and accessing the path locally: 5s
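If you want to reproduce this kind of measurement, wrapping the line in Measure-Command is an easy way to get the elapsed time (the UNC path below is a placeholder for your own share):

Measure-Command {
    (gci \\server\share\temp -Recurse | Where-Object {$_.PSIsContainer}) |
        Where-Object {($_.GetFiles().Count + $_.GetDirectories().Count) -eq 0} |
        Out-File d:\dir.txt
} | Select-Object TotalSeconds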

In my case, the server was both running PowerShell 2.0 and accessing the path as a UNC path, causing a major performance hit. The solution is simply to run the script locally on the server where the task has to be performed, and to upgrade to at least PowerShell 3.0. As can be seen from my quick test, the biggest gain clearly comes from running the script locally, giving about 17 times better performance.

While it is no rocket science that accessing tons of files locally has to be faster than doing it over the network, it is still fairly common to see scripts executed on application servers performing tasks on file shares.

This also proves that it is important in our business to stay on top of new versions of tools and frameworks in order to catch these types of improvements. I can be certain that there are quite a number of people out there bashing gci heavily for its slow performance when in fact it has improved a lot between PowerShell 2.0 and 3.0.

Tuesday, June 24, 2014

Base64 encode/decode part of messages in map - part 2

Back in November I wrote a post about encoding messages as base64 strings in a BizTalk map. I never added the explicit implementation of how to decode the message though. However, I was asked to provide an example of it, so here is part 2: how to decode a base64 string from an XML message and output it in BizTalk.

Consider the scenario to be the opposite of what we did in part 1.

We have a message on the input side that has two fields, one with an id as a string, and another string element that holds the entire base64 encoded string.

<ns0:Root xmlns:ns0="http://BizTalk_Server_Project6.Schema1">
 
<SomeId>1</SomeId>
 
<EncodedData>PG5zMDpSb290IHhtbG5zOm5zMD0iaHR0cDovL0JpelRhbGtfU2VydmVyX1Byb2plY3Q2LlNjaGVtYTIiPg0KICA8RGF0YT5EYXRhXzA8L0RhdGE+IA0KICA8RGF0YTI+RGF0YTJfMDwvRGF0YTI+IA0KPC9uczA6Um9vdD4=</EncodedData>

</ns0:Root>

On the output side, we want to have a message that conforms to the schema we have defined. The message is the entire contents of the base64 encoded string in the input.

If we parse the encoded string, we get:
<ns0:Root xmlns:ns0="http://BizTalk_Server_Project6.Schema2">
 
<Data>Data_0</Data>
 
<Data2>Data2_0</Data2>
</ns0:Root>

Which conforms fully to the schema defined.

So, the task is to extract the contents from the EncodedData element, decode them, and use the full string as the output message. All in a single BizTalk map.

The map will look like this to begin with, with the two schemas chosen:


Similar to when we encoded, we first add a scripting functoid that has no connections, and paste the code for the decode function into it:

public string Base64DecodeData(string param1)
{
    byte[] decodedBytes = Convert.FromBase64String(param1);
    return System.Text.UTF8Encoding.UTF8.GetString(decodedBytes);
}


We simply take a string as an input parameter, decode it, and return the decoded string.

Then we create a new scripting functoid, set it to Inline XSLT, and paste this code into it:

<xsl:variable name="data">
 
<xsl:value-of select="//EncodedData" />
</xsl:variable>
<xsl:value-of select="userCSharp:Base64DecodeData($data)" disable-output-escaping="yes" />

This functoid is connected to the output root node giving us a map that looks like this:


When executing this map with the input message above, we will get the output message properly formatted.

There are two tricks used here. First off, we use the same pattern as before of calling a predefined function in another scripting functoid, in order to call a simple C# function from our XSLT code. Then, in the XSLT, we first extract the string from our input message and store it in a variable. This is passed into our C# function and we get a string back. However, if we did not specify disable-output-escaping="yes" in our value-of select, the string would be output entity encoded. With this extra attribute set, the string is output as-is, the way we want it.

The same technique can of course easily be used to output just part of a message, by simply connecting the scripting functoid to the node you want to populate (if, for instance, you have a schema with a node defined as xs:any that you want to use).

Monday, June 2, 2014

Error 5644 when trying to enable a SQL Server notification receive location in BizTalk Server

When using the SQL Server broker functionality in order to have SQL Server push notifications to BizTalk instead of polling a table, I've found that the following error is quite common to encounter.

 The Messaging Engine failed to add a receive location "Event1" with URL "mssql://localhost//EventDB" to the adapter "WCF-Custom". Reason: "Microsoft.ServiceModel.Channels.Common.TargetSystemException: The notification callback returned an error. Info=Invalid. Source=Statement. Type=Subscribe.
The error message tells you that the statement entered is invalid, but not in which way. There are a lot of rules to comply with, all of which are available on MSDN: http://msdn.microsoft.com/en-us/library/ms181122.aspx

Besides the "normal" ones, such as "do not use aggregate functions" and "do not use an asterisk to define the resultset", one that constantly haunts me is "table names must be qualified with two-part names", meaning that you have to prefix all tables with the schema name, such as "dbo.EventTable". Just using

select blah from EventTable where processed=0

will generate the error above.
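The fix is simply to qualify the table with its schema (names here are from the example above):

-- Generates the Invalid/Statement/Subscribe error
select blah from EventTable where processed=0

-- Passes the notification rules
select blah from dbo.EventTable where processed=0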