Friday, December 28, 2012

Solved: "Prefix '..' does not map to a namespace" when binding to a custom dependency property

I want to quickly share a strange error I got when binding to one of my dependency properties, and how I solved it.

The error is a TypeNameParserException with the following additional information: Prefix 'dp' does not map to a namespace.

TypeNameParserException

I got this when binding to one of my dependency properties with namespace, like this:

<ContextMenu x:Key="contextMenu" Tag="{Binding PlacementTarget.(dp:MyClass.MyDependencyProperty), RelativeSource={RelativeSource Self}, Mode=OneWay}">

As you can see, the element is a context menu (defined in a Resources section; I don't know if this matters). I am binding to the dependency property dp:MyClass.MyDependencyProperty of the context menu's PlacementTarget (the element the context menu was invoked for).

This binding caused the problem: as soon as I opened the context menu, I got the TypeNameParserException. When I removed the binding, the error went away.

So this led to the error:
Binding PlacementTarget.(dp:MyClass.MyDependencyProperty)

(Note: there were no compiler warnings, no squiggly lines, everything seemed valid.)
After some fiddling and googling I found the solution.

This solved the error:
Binding Path=PlacementTarget.(dp:MyClass.MyDependencyProperty)

It's simple: you have to specify the Path part explicitly! See the difference? Let's highlight it a bit more:

Binding Path=PlacementTarget.(dp:MyClass.MyDependencyProperty)
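
For context, here is roughly how the working binding looks in place. This is only a sketch; the CLR namespace and assembly behind the dp: prefix are placeholders for your own:

  <Window x:Class="MyApp.MainWindow"
          xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
          xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
          xmlns:dp="clr-namespace:MyApp.DependencyProperties;assembly=MyApp">
    <Window.Resources>
      <!-- Note the explicit Path= ; without it the parser chokes on the dp: prefix -->
      <ContextMenu x:Key="contextMenu"
                   Tag="{Binding Path=PlacementTarget.(dp:MyClass.MyDependencyProperty),
                                 RelativeSource={RelativeSource Self}, Mode=OneWay}" />
    </Window.Resources>
  </Window>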

Maybe this helps you when you get strange namespace-related errors.

Sunday, December 16, 2012

A Better ULS Log File Viewer

While analyzing SharePoint ULS logs I felt the need for a better ULS log viewer, with more capabilities than the existing ones. I don't like them: they are either old, look ugly, or don't provide the capabilities I need.

Time to create a better one. Time for Heu's ULS Viewer.

Update - Download Heu's ULS Viewer here.

I will provide more information soon. Until then, here are some teaser screenshots:

Main View
Main view of Heu's ULS Viewer

Features:
  1. Support for multiple log files
    • File contents are concatenated
    • Drag & drop support
    • Fast log file parsing
  2. Extended filtering capabilities
    • Decide in which of the columns you want to filter
    • Two filter types: you can choose to include items matching your search term, or to exclude items matching it
  3. Grid view of all log entries
    • Sort by clicking a column header
    • Fast sorting even for large data sets
    • Select multiple rows and copy them to the clipboard by pressing Ctrl+C (good for quickly pasting them into a mail or support ticket)
  4. Detailed display of the selected log entry
    • Multi-line log entries are merged into one
    • Proper formatting of stack traces
Visualization

Visualization of message count over time

Visualizing SharePoint ULS log messages gives you an overview you don't easily get from looking at the raw log data. For now there are two series: one for the overall log message count (in blue) and one for log messages with level "Unexpected" (in red).

The diagrams respect your current filters.

That's all for today. I hope you are as excited as I am. Stay tuned for more!

Saturday, December 15, 2012

Getting Shell Integration for TFS when using TFS Cloud Services and Visual Studio Express 2012

I recently started using Team Foundation Service 2012 in the cloud together with Visual Studio Express 2012 for Windows Desktop. The cloud service allows me to easily put my sources under version control and schedule automatic builds without having to set up any servers.

From TortoiseSVN I am used to accessing source control features from the context menu of files and folders. For instance, I want to be able to add files to source control directly from the right-click menu.

This is possible. The snappily named Microsoft Visual Studio Team Foundation Server 2012 Update 1 Power Tools contain a shell extension which does exactly that. Cool!

But my first attempt to install the Power Tools failed. I thought having Visual Studio Express installed would be enough, but apparently you first have to install Team Explorer for Microsoft Visual Studio 2012.

Having done this, the Power Tools installation succeeds and you get the full context menu goodness:

TFS shell integration



Monday, November 5, 2012

Remember: How to delete a Crawled Property

Deleting a crawled property from SharePoint is easy but sometimes requires a little preparation.
Summary | This post shows how to delete a crawled property and how to prepare it for deletion.
Deleting a crawled property might be necessary if
  • you are developing a custom indexing connector and are still testing it; obsolete properties will accumulate over time
  • your BDC changes from time to time and every change leads to new crawled properties being created with the next full crawl
  • $otherReason 
The list of all crawled properties can be found at Central Administration -> Search Service Application -> Queries and Results -> Metadata Properties -> Crawled Properties.

Deleting unused crawled properties

In SharePoint 2010 deleting a single crawled property is not possible. But you can delete all unused crawled properties of a category.

Go to Metadata Properties and click Categories:

In the context menu of a category click Edit Category:
Check the option Delete all unmapped crawled properties and click OK:
All unused crawled properties should be gone after clicking OK. But if the one you want to delete is still present, the next section is for you.

Unmapping and unindexing 

You have to remove all mappings from the property and you also have to remove it from the search index.

Click Edit/Map Property in the context menu of a crawled property:
Edit crawled property
The Edit Crawled Property page opens.
Edit Crawled Property
Make sure you
  • remove all mappings for this property
  • do not include values for this property in the search index
Repeat this for every property you want deleted, then delete the unused properties of the category again as described above.

Note: Some OOB SharePoint crawled properties cannot be deleted.
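
If you have to do this regularly, the same steps can be scripted. Below is a minimal sketch using the SharePoint 2010 search administration cmdlets; the property name "Order.Customer" and the category name "Business Data" are placeholders for your own values:

  $ssa = Get-SPEnterpriseSearchServiceApplication

  # find the crawled property by name
  $cp = Get-SPEnterpriseSearchMetadataCrawledProperty -SearchApplication $ssa -Name "Order.Customer"

  # remove all of its mappings to managed properties
  $mappings = Get-SPEnterpriseSearchMetadataMapping -SearchApplication $ssa -CrawledProperty $cp
  foreach ($m in $mappings)
  {
    Remove-SPEnterpriseSearchMetadataMapping -SearchApplication $ssa -Identity $m -Confirm:$false
  }

  # stop including the property's values in the search index
  Set-SPEnterpriseSearchMetadataCrawledProperty -SearchApplication $ssa -Identity $cp -IsMappedToContents $false

  # delete all unmapped crawled properties of the category
  $category = Get-SPEnterpriseSearchMetadataCategory -SearchApplication $ssa -Identity "Business Data"
  $category.DeleteUnmappedProperties()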

Wednesday, October 31, 2012

Translation of SharePoint Terms from English to German


When writing technical documentation for SharePoint I frequently need to switch between the English and German terms for SharePoint's user interface. The German translation tends to produce constructs which are often funny and very long - and hard to remember. So I decided to assemble some vocabulary for quick reference.
Summary | This blog post provides you with SharePoint terms in English and their German translation. The post will be extended over time.

General

English | Deutsch | Version
Site | Website | 2007
Site Collection | Websitesammlung | 2007
Farm | Farm | 2007
Web Application | Webanwendung | 2007
Page | Seite | 2007
Site Actions | Websiteaktionen | 2007
Site Settings | Websiteeinstellungen | 2007
People and Groups | Benutzer und Gruppen | 2007
Site collection administrators | Websitesammlungsadministratoren | 2007
Advanced permissions | Erweiterte Berechtigungen | 2007
Look and Feel | Aussehen und Verhalten | 2007
Save site as template | Website als Vorlage speichern | 2007
Site features | Websitefeatures | 2007
Template | Vorlage | 2007

Permissions

English | Deutsch | Version
Permission Levels | Berechtigungsstufen | 2007
Limited Access | Beschränkter Zugriff | 2007
Read | Lesen | 2007
Contribute | Teilnehmen | 2007
Design | Entwerfen | 2007
Full Control | Vollzugriff | 2007
List Permissions | Listenberechtigungen | 2007
Manage Lists | Listen verwalten | 2007
Override Check Out | Auschecken außer Kraft setzen | 2007
Add Items | Elemente hinzufügen | 2007
Edit Items | Elemente bearbeiten | 2007
Delete Items | Elemente löschen | 2007
View Items | Elemente anzeigen | 2007
Approve Items | Elemente genehmigen | 2007
Open Items | Elemente öffnen | 2007
View Versions | Versionen anzeigen | 2007
Delete Versions | Versionen löschen | 2007
Create Alerts | Benachrichtigungen erstellen | 2007
View Application Pages | Anwendungsseiten anzeigen | 2007

Search

English | Deutsch | Version
Enterprise Search Center | Unternehmenssuchcenter | 2010
Basic Search Center | Basissuchcenter | 2010
Search Results Action Links | Links für Suchaktionen | 2010
Search Action Links | Hyperlinks für Suchergebnisaktionen | 2010
Search Core Results | Kernergebnisse der Suche | 2010
Refinement Panel | Verfeinerungsbereich | 2010
Refinement Panel Caption | Überschrift des Einschränkungsbereichs | 2010
Result Query Options | Abfrageoptionen für Ergebnisse | 2010
Refinement | Einschränkung | 2010
Accuracy Index | Genauigkeitsindex | 2010
Number of Categories to Display | Anzahl der anzuzeigenden Kategorien | 2010
Number of Characters to Display | Anzahl der anzuzeigenden Zeichen | 2010

Search Alerts

English | Deutsch | Version
New Alert | Neue Benachrichtigung | 2010
Alert Title | Benachrichtigungstitel | 2010

Central Administration

English | Deutsch | Version
Application Management | Anwendungsverwaltung | 2010
Manage web applications | Webanwendungen verwalten | 2010
Manage service applications | Dienstanwendungen verwalten | 2010
Manage content databases | Inhaltsdatenbanken verwalten | 2010

Search Service Application

English | Deutsch | Version
Queries and Results | Abfragen und Ergebnisse | 2010
Authoritative Pages | Autorisierende Seiten | 2010
Federated Locations | Partnerspeicherorte | 2010
Metadata Properties | Metadateneigenschaften | 2010
Scopes | Bereiche | 2010
Search Result Removal | Suchergebnisse entfernen | 2010
Search Federation | Partnersuche | 2010
New location | Neuer Speicherort | 2010
Federated Location | Partnerspeicherort | 2010


Sunday, October 28, 2012

Upgrade journey from Windows XP 32-bit to Windows 8 64-bit

Now is the time to upgrade to Windows 8! Microsoft offers its new version of Windows for a price you can't say "No" to, but only until the end of January 2013.

Summary | This blog post shows how to upgrade from 32-bit Windows XP to 64-bit Windows 8 with a purchased downloadable upgrade. This is possible without buying the physical DVD, despite reports to the contrary.

While downloading your new Windows 8 is the quickest and cheapest option, it has its pitfalls. Have a close look at the bitness of your systems! Especially if you are upgrading from 32-bit to 64-bit you have to take care not to get the wrong upgrade version.

Upgrading to Windows 8 (32-bit and 64-bit) from Windows XP (32-bit)

Downloading installation files - easy

I chose to upgrade an old XP (32-bit) installation of mine. I thought 30 € for a download sounded like a good deal, so I chose this option. After purchasing you have to download and start the setup file Windows8-Setup.exe. It downloads about 3 GB of data (2.05 GB for the 32-bit version, 2.62 GB for the 64-bit version):

Then it asks you how to proceed. Among the available options should be one to create a medium to install from. This would create an ISO file you can burn to DVD. The setup screen should look like this:

But it didn't.

So the lesson here is:
If you are running Windows XP 32-bit you will not get the option to create an installation medium and you will not be able to create an ISO file.
At least the option was missing for me.

Getting an ISO on Windows XP - not so easy

I wanted an ISO file I could burn, so what now? I decided to try running the setup on another PC running Windows 7, so I needed to find the files downloaded by the setup and copy them.

On Windows XP the Windows8-Setup.exe downloads files to a hidden directory ESD on the system disk, e.g. C:\ESD. I copied the folder to the Windows 7 (64-bit) machine (also to C:\ESD) and started the Windows8-Setup.exe there. That looked much better. It recognized the copied ESD folder and offered me the option to create an installation medium.

So I clicked "Install by creating media" and it did as told. It created an ISO file I could use to install Windows 8. 32-bit.

Oops. I got an installation medium for 32-bit Windows 8 but needed 64-bit!

Getting a 64-bit ISO - easy again. But remember those old preview versions!

So the second lesson was:
Starting Windows8-Setup.exe from a 32-bit system will provide you with a 32-bit version of Windows 8.
To get the 64-bit version I obviously had to start the Windows8-Setup.exe and the download on a 64-bit system. On my Windows 7 machine I deleted the ESD folder I had previously copied and started the setup again. That worked. It started downloading, and the amount of data was now larger than before. I again chose "Install by creating media" and got an ISO bigger than before (2.05 GB for 32-bit vs. 2.63 GB for 64-bit). This was it.
To get 64-bit Windows 8 run Windows8-Setup.exe on a 64-bit system.
I omitted one detail in the above description: I had to remove the folder C:\Users\<username>\AppData\Local\Microsoft\WebSetup as it still contained content from a previously installed Windows 8 Consumer Preview. I think this is why the Windows 8 setup once showed the wrong registration key (I assume this was the evaluation key for the consumer preview) and tried to read from the folder C:\WindowsESD which still contained the Consumer Preview installation files. Such errors looked like this:

Removing said directory solved these issues.

Third lesson:
Remove old files from Windows 8 Consumer Preview setup.

I now have ISOs for both 32-bit and 64-bit Windows 8. It took a bit longer than expected, but finally it all worked out.

Bonus Lesson - match your licenses and ISO files

You are allowed to upgrade up to 5 of your old PCs. As this is a nice way of getting rid of old machines I took this offer and bought two upgrades, both delivered digitally. Because both upgrades were to Windows 8 64-bit I burned only one DVD. Windows 8 is Windows 8, right?

Wrong!

One license/product key is associated with one Windows8-Setup.exe which will install (or create an ISO for) a Windows 8 that can only be activated using this exact same product key.
Apparently some licensing information is already embedded in the downloaded files. I found this out when I installed Windows 8 from the DVD burned after buying the first upgrade but tried to activate it using the second product key. This didn't work. So I really had to create two installation media for the two product keys. Out of curiosity I calculated CRC checksums of both ISO files, and they were indeed different.

To spare you the hassle you should burn one DVD per upgrade license.

Thursday, October 25, 2012

PowerShell HowTo: Creating and deleting crawl rules

The last posts covered creating and modifying content sources; now it's time to fine-tune the actual crawling. Crawl rules specify which content will be crawled and how.

Setting the rules...

Let's say you want to crawl all URLs which contain the directory pages/. An appropriate crawl rule would look like this:

It's an inclusion rule and the path is http://*pages/*. So all URLs matching this pattern will be crawled.

...in code

Here's the code to create such a crawl rule:
  $Path = "http://*pages/*"
  $SearchApp = Get-SPEnterpriseSearchServiceApplication
  # check if crawl rule already exists; if yes: delete
  if ((Get-SPEnterpriseSearchCrawlRule -SearchApplication $SearchApp -Identity $Path -EA SilentlyContinue)) 
  {
    # remove crawl rule; "-confirm:$false" disables confirmation dialog which would otherwise pop up
    Remove-SPEnterpriseSearchCrawlRule -SearchApplication $SearchApp -Identity $Path -confirm:$false
  }

  $Rule = New-SPEnterpriseSearchCrawlRule -SearchApplication $SearchApp -Path $Path -Type InclusionRule -CrawlAsHttp 0 -FollowComplexUrls 0
  $Rule.CaseSensitiveUrl = 1
  $Rule.Update()

This code first checks if the crawl rule already exists and deletes it if it does. (You could also display an error message instead.) Then the crawl rule is created.

Some parameters can only be specified when creating the rule with New-SPEnterpriseSearchCrawlRule (like Type), some can only be set afterwards on the existing rule (like CaseSensitiveUrl).
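
An exclusion rule works the same way, just with a different Type. A quick sketch (the path pattern is a made-up example):

  $SearchApp = Get-SPEnterpriseSearchServiceApplication
  # exclude all application pages under _layouts from crawling
  New-SPEnterpriseSearchCrawlRule -SearchApplication $SearchApp -Path "http://*/_layouts/*" -Type ExclusionRule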

Monday, October 22, 2012

PowerShell HowTo: Setting start addresses of an existing content source

In a previous post we created a completely new content source. Now we are going to modify an existing content source.

The root of all crawling

It's a simple cmdlet, Set-SPEnterpriseSearchCrawlContentSource, taking a reference to the search application, the access protocol and the content source. The latter is identified by its name. Here's the code:
     
  $SearchApp = Get-SPEnterpriseSearchServiceApplication
  $StartAddresses = "protocol1://localhost/?entity=sheep,protocol1://localhost/?entity=bird,protocol1://localhost/?entity=fish"
  Set-SPEnterpriseSearchCrawlContentSource -SearchApplication $SearchApp -Identity "Content Source Name" -StartAddresses $StartAddresses -CustomProtocol "protocol1"
The above code sets three start addresses for a single content source:
  • protocol1://localhost/?entity=sheep
  • protocol1://localhost/?entity=bird
  • protocol1://localhost/?entity=fish
In code they are separated by commas. Noteworthy detail: in the configuration page for the content source these addresses will appear in reverse order.

Accessing a content source's properties

If you want to modify a content source, e.g. to add start addresses instead of replacing them all, you first have to get the content source. Get-SPEnterpriseSearchCrawlContentSource is the cmdlet to use:
     
  $SearchApp = Get-SPEnterpriseSearchServiceApplication
  $ContentSource = Get-SPEnterpriseSearchCrawlContentSource -SearchApplication $SearchApp "Content Source Name"
The collection $ContentSource.StartAddresses contains all currently configured addresses. Combine them with yours and use the first script to update the configuration, as sketched below.
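
A minimal sketch of that combination; it assumes StartAddresses contains System.Uri entries, and the appended address is just an example:

  $SearchApp = Get-SPEnterpriseSearchServiceApplication
  $ContentSource = Get-SPEnterpriseSearchCrawlContentSource -SearchApplication $SearchApp "Content Source Name"

  # collect the addresses that are already configured
  $existing = $ContentSource.StartAddresses | ForEach-Object { $_.AbsoluteUri }

  # append a new address and write everything back as one comma-separated string
  $all = (@($existing) + "protocol1://localhost/?entity=cow") -join ","
  Set-SPEnterpriseSearchCrawlContentSource -SearchApplication $SearchApp -Identity "Content Source Name" -StartAddresses $all -CustomProtocol "protocol1"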

Monday, October 15, 2012

PowerShell HowTo: Creating managed properties and adding crawled properties

In the previous posts I showed how you deploy a Custom Indexing Connector (or Search Connector) to SharePoint and how you create a content source that makes use of it.

In this post I will give a quick introduction to Crawled Properties and explain how to create a new Managed Property using PowerShell as well as how to add crawled properties to it.

Where do crawled properties come from?

I won't go into the definition of Managed Properties and Crawled Properties; you can bing that for yourself. But I will show you where crawled properties come from (with a custom indexing connector in mind).

Why do I want to know?

You need crawled and managed properties to improve the search experience for your users.

After setting up the indexing connector you need to start a crawl on your external system (if you haven't already done so: start it now and check the Crawl Log for success). The connector will index the content it finds (so your users can search for it) and it will create Crawled Properties. These come from your BDC.

So let's keep in mind that we want to improve the search experience. This can be done by creating Search Refiners centered around BDC entities, and we ultimately need those crawled properties to do it.

It's in the BDC

Let's assume you are crawling a financial LOB system and your BDC model file contains an entity Order which has the following structure:
<TypeDescriptor Name="Order" TypeName="PurchasingConnector.Entities.Order, PurchasingConnector, Version=1.0.0.0, Culture=neutral, PublicKeyToken=0000000000000000">
  <TypeDescriptors>
    <TypeDescriptor Name="OrderNo" TypeName="System.String" IdentifierEntityNamespace="Purchasing" IdentifierEntityName="Order" IdentifierName="OrderNo" />
    <TypeDescriptor Name="Customer" TypeName="System.String" />
    <TypeDescriptor Name="OrderDate" TypeName="System.DateTime" />
  </TypeDescriptors>
</TypeDescriptor>
After the first full crawl of your external system your list of crawled properties should have grown. Have a look - go to Central Administration -> Search Service Application -> Queries and Results -> Metadata Properties:

Search Service Application - Metadata Properties

In the top menu click Crawled Properties:

This will list all crawled properties, including those from your BDC:
  • Order.OrderNo(Text)
  • Order.Customer(Text)
  • Order.OrderDate(Date and Time)
(You probably have to search a bit as they hide among the masses of OOB crawled properties.)

Refiners the way we want them

Ultimately, we want to create a refiner allowing us to filter for the customer of an order.
Search Refiner

To do this we need a managed property (as the tutorials on creating search refiners will tell you). And that is exactly what we're going to create now using PowerShell: a managed property.

Time to create Managed Properties and add Crawled Properties

To stick with the above example we create a managed property named Customer:
$ssaGlobal = Get-SPEnterpriseSearchServiceApplication
$schemaGlobal = New-Object -TypeName Microsoft.Office.Server.Search.Administration.Schema -ArgumentList $ssaGlobal

# check if the property already exists - in this case we cancel the operation
$property = $schemaGlobal.AllManagedProperties | Where-Object { $_.Name -eq "Customer" }
if ($property)
{
    Write-Host -f Red "Cannot create managed property because it already exists"
    exit
}

# create managed property with name "Customer" of type "Text"
$property = $schemaGlobal.AllManagedProperties.Create("Customer", "Text")
# set description; there are other properties you could set here
$property.Description = "Customer of Order (Managed Property created from PowerShell)"
That's basically it. But the managed property is still empty. We need to add a mapping for our crawled property. Here is how:
# this is the "Property Set GUID"; it is also used by the custom indexing connector so this is where you need to get it from
$propSetGuidGlobal = "{00000000-0815-0000-0000-000000000000}"
$textVariantType = 31
$mappings = $property.GetMappings();

# try to map the crawled property - if it doesn't exist nothing bad happens, the mapping will simply be ignored
$mapping = New-Object -TypeName Microsoft.Office.Server.Search.Administration.Mapping -ArgumentList $propSetGuidGlobal, "Order.Customer", $textVariantType, $property.PID
$mappings.Add($mapping)
 
$property.SetMappings($mappings)
$property.Update()
Note that you need to specify the Property Set GUID of the Property Set the crawled property is contained in. For the OOB crawled properties in SharePoint these are documented (somewhere). For a custom indexing connector this ID is defined inside the connector, so it should be included in the connector's documentation for ease of use.

Also note the type of the crawled property, 31, which means Text. This and more variant type identifiers are listed in this blog post.

Now go and crawl!

After creating or modifying managed properties you have to do a full crawl. Otherwise your managed property won't work as expected.
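
The full crawl can be started from PowerShell as well. A small sketch; the content source name is a placeholder:

  $ssa = Get-SPEnterpriseSearchServiceApplication
  $cs = Get-SPEnterpriseSearchCrawlContentSource -SearchApplication $ssa -Identity "My Content Source"
  # queue a full crawl; check $cs.CrawlStatus to watch its progress
  $cs.StartFullCrawl()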

Saturday, October 13, 2012

Remember: Publish Search Results page after modifying it or your Search Refiners might not show up for other users!

When you edit the Search Results Page of your Search Center:

Search Results Page - Site Actions - Edit Page

To modify the Refinement Panel Web Part to add Custom Refiners:

Search Results Page - Refinement Panel - Edit Web PartSearch Results Page - Refinement Panel - Edit Properties - Filter Category Definition

And you save the page after you are done:

Search Results Page - Save and Close
Don't forget to publish the page!

Search Results Page - Publish
(Or, if you don't have Publishing enabled: click Check In right next to Save & Close.)

Otherwise other users might not see the search refiner. You will probably notice this when testing with another user's account: your new search refiners, which so far worked flawlessly, suddenly do not appear at all.

So if your search refiner shows up under the account you created it with but is not visible anymore when logging in with another user's account, you might have forgotten to publish the page.



Thursday, October 11, 2012

SharePoint 2010 SP1 changes the account used for indexing content. Not.

After updating SharePoint 2010 from RTM state (14.0.4763.1000) to Service Pack 1 (SP1, 14.0.6029.1000) a content source of the Search Service Application (SSA) suddenly stopped indexing content.

The content source in question was of type CustomRepository and used a custom indexing connector to access an external system. It downloaded data via a web service. And this suddenly seemed to fail.

What was going on?

Somebody cannot access something

A look into the ULS log revealed errors which happened every time the connector tried to access the web service. The crawl history showed a single top level error and the crawl log had the following entry:

"Error while crawling LOB contents. ( Credentials were not found for the current user within the target application '...'. please set the credentials for the current user. )"
The error message pointed in one direction: the Secure Store. All credentials for accessing the external web service were saved in the secure store, and one account was allowed to retrieve them. The message suggested that an account other than the allowed one was trying to get the credentials.

But what is the "current" user? Shouldn't the user be the Default Content Access Account of the SSA as configured in the Crawl Rules?

Identity crisis

After looking into the task manager I decided to give credential access to one account: the account mssdmn.exe runs under, which is the account of the SharePoint Server Search 14 service.

And it seemed like

  • before updating to SP1 the Default Content Access Account (as configured in the Crawl Rules) was used to access the secure store credentials
  • after updating to SP1 this account apparently changed to the account of the SharePoint Server Search 14 service.

So the solution was simple, yet mysterious: I changed the account allowed to access the web service credentials. And it worked.

But stop!

Resolution? Confusion.

After a few days I deleted the previously affected content source and added it again. And the indexing stopped again. Same error as before: "Credentials were not found for the current user within the target application '...'. please set the credentials for the current user." What was going on this time?

The account used by Search to access the secure store credentials changed again. To what was set prior to installing SP1: the Default Content Access Account of the SSA. As one would expect.

Strange.

Monday, October 8, 2012

PowerShell HowTo: Deploying a Custom Indexing Connector to SharePoint - Part 2

Deploying a custom indexing connector in SharePoint requires two steps:
  • Adding a protocol used by SharePoint to call into the indexing connector
  • Registering the indexing connector with SharePoint
In an earlier post we already covered step one and added a protocol to SharePoint. Now it's time to tell SharePoint about our indexing connector. The connector and the protocol are associated with each other via the Business Data Catalog (BDC) model file.

Register the indexing connector with SharePoint

The cmdlet used to register the connector is New-SPEnterpriseSearchCrawlCustomConnector, which takes our previously registered protocol as a parameter, as well as the path to the BDC model file. The model file describes the structure of the external system's data. It also contains information about where to find our indexing connector. The connector will be used to handle URLs starting with the given protocol.
     
Function RegisterCustomConnector
{
    param ([Microsoft.Office.Server.Search.Administration.SearchServiceApplication] $ssa, [string] $protocol, [string] $displayName, [string] $modelFilePath) 
    Write-Host "Registering custom connector: $displayName"
    $connector = New-SPEnterpriseSearchCrawlCustomConnector -SearchApplication $ssa -protocol $protocol -ModelFilePath $modelFilePath -Name $displayName
    if ($connector)
    {
      Write-Host -f Green "Successfully registered custom connector: $displayName"
    } else
    {
      throw "Registering custom connector failed: $displayName"
    }
}

$ssa = Get-SPEnterpriseSearchServiceApplication
RegisterCustomConnector -ssa $ssa -protocol "protocol1" -displayName "Connector" -modelFilePath "MyBDC.xml"
The documentation states that the protocol must have the format "protocol1://", but using just "protocol1" (without the trailing "://") works just fine.

Example of a BDC model file where you can see the assembly and classes specified:
     
<Model name="MyModel" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://schemas.microsoft.com/windows/2007/BusinessDataCatalog">
  <LobSystems>
    <LobSystem name="ContosoSystem" type="Custom">
      <Properties>
        <Property name="SystemUtilityTypeName" type="System.String">ConnectorNamespace.ContosoSystemUtility, ConnectorAssembly, Version=1.0.0.0, Culture=neutral, PublicKeyToken=0000000000000000</Property>
        <Property name="InputUriProcessor" type="System.String">ConnectorNamespace.ContosoLobUri, ConnectorAssembly, Version=1.0.0.0, Culture=neutral, PublicKeyToken=0000000000000000</Property>
        <Property name="OutputUriProcessor" type="System.String">ConnectorNamespace.ContosoNamingContainer, ConnectorAssembly, Version=1.0.0.0, Culture=neutral, PublicKeyToken=0000000000000000</Property>
      </Properties>
      <!-- More content here -->
    </LobSystem>
  <!-- and here -->
  </LobSystems>
<!-- and here -->
</Model>
The assembly here is "ConnectorAssembly", which you have to deploy to the Global Assembly Cache (GAC).
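
The GAC deployment can be scripted too. One way is the Publish class from System.EnterpriseServices; this is a sketch, and the DLL path is a placeholder:

  Add-Type -AssemblyName "System.EnterpriseServices"
  $publish = New-Object System.EnterpriseServices.Internal.Publish
  # install the connector assembly into the GAC (path is a placeholder)
  $publish.GacInstall("C:\Deploy\ConnectorAssembly.dll")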

(More about attributes used in the BDC model file can be found in the MSDN: "Search Properties for BDC model files".)

Wednesday, October 3, 2012

PowerShell HowTo: Deploying a Custom Indexing Connector to SharePoint - Part 1

Deploying a custom indexing connector in SharePoint requires two steps:
  • Adding a protocol used by SharePoint to call into the indexing connector
  • Registering the indexing connector with SharePoint
In this post I will show how to accomplish the first step via PowerShell.

Both steps are also described in the MSDN, but we go one step further and automate them completely using PowerShell.

Adding protocol and handler to the registry

Based on the protocol of a content source's start address (e.g. http or bdc3) SharePoint decides which indexing connector should crawl it. The protocols known to SharePoint are stored in the Registry on the server where the crawling will take place.

So, the protocol used by our indexing connector also needs to be added to the registry. The following script adds the protocol protocol1:
        
Function RegisterProtocolHandler
{
    param ([string] $protocol)
    
    $path = "HKLM:\SOFTWARE\Microsoft\Office Server\14.0\Search\Setup\ProtocolHandlers\"
    Write-Host "Adding protocol handler to registry: $protocol" 
    # creates the property if not present, otherwise updates it
    Set-ItemProperty -path $path -name $protocol -value "OSearch14.ConnectorProtocolHandler.1" -ErrorAction Stop
    Write-Host -f Green "Successfully added protocol handler to registry: $protocol"
} 

RegisterProtocolHandler -protocol "protocol1" 
If you look at the registry afterwards you will see your protocol among the already registered ones:
HKLM:\SOFTWARE\Microsoft\Office Server\14.0\Search\Setup\ProtocolHandlers
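
You can also read the key back from PowerShell to verify the registration:

  # list all registered protocol handlers, including the one we just added
  Get-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\Office Server\14.0\Search\Setup\ProtocolHandlers\"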
But where do you get the name of the protocol in the first place? It has to be included in the documentation of the indexing connector you want to deploy. The name is decided by the creator of the indexing connector and could basically be anything, like protocol1, abc or helloworld.

Sunday, September 30, 2012

PowerShell HowTo: Create Content Source of Type CustomRepository


In this post I explain how to create a SharePoint content source of type CustomRepository via PowerShell. There are some pitfalls which I will highlight.

CustomRepository

Content sources of type CustomRepository are used to index external data sources which aren't supported by any of the built-in indexing connectors. These content sources are to be used in conjunction with Custom Indexing Connectors, which use a custom protocol to access the external system.

The property page of an already created content source of type CustomRepository is shown in Figure 1.


Edit Content Source of Type CustomRepository
Figure 1: Content source; properties relating to type CustomRepository are highlighted

You can see that a custom connector named Custom Protocol with the scheme protocol1 is used to access the external system. How to register the custom connector will be the topic of another blog post.

Scripting

You can use PowerShell to script the creation of this type of content source. It is pretty straightforward if you know about the pitfalls.

Use the New-SPEnterpriseSearchCrawlContentSource cmdlet. Here is a working example:
        
  $contentSource = New-SPEnterpriseSearchCrawlContentSource -SearchApplication $ssa -Type CustomRepository -Name "My Content Source" -CustomProtocol "protocol1" -StartAddresses "protocol1://localhost"

Pitfalls

The StartAddresses parameter is not really optional

Don't forget to specify the StartAddresses parameter! The cmdlet's documentation states that this parameter is optional. Well, this is true in the sense that skipping it won't stop the content source from being created. But the content source will be broken:

Error: The custom connector used by the content source has been removed or undeployed.
Figure 2: "The custom connector used by the content source has been removed or undeployed."
The error message hints that the custom protocol used by the indexing connector is not available anymore. This is misleading. The correct error message would be "There are no start addresses provided and I am freaking out about it for no reason.".

There is also no way to correct this. The radio button next to Custom Protocol cannot be selected, and the OK button of the dialog is disabled and won't be enabled again. The only way to fix the content source is to delete it.

So remember to provide a start address when creating a content source via cmdlet. 

Don't use start addresses with empty host part

Using a start address like "protocol1://" (with an empty host part) can also lead to the above error. I say "can" because this sometimes seemed to work, sometimes not.

So to be sure you should always specify a host in your start address.