Nintex 2016, SharePoint 2016, non-compliant roles

You may run into an issue with your Nintex Services not starting on your non-central administration servers as in this image:

You can attempt to browse the web and Nintex support pages, but they will be of little help:

It turns out that the services did not get installed as part of the solution deployment.  In other words, the Nintex services are missing on your SharePoint server.

In this case, you have to install the services manually in order to start them and have your server be compliant. The three services are:

  • Nintex Connector Workflow Queue Service – (C:\Program Files\Common Files\microsoft shared\Web Server Extensions\16\BIN\NintexWorkflow\Nintex.Workflow.Connector.QueueService.exe)
  • Nintex External Relay Service – (C:\Program Files\Common Files\microsoft shared\Web Server Extensions\16\BIN\ExternalPlatform\Nintex.External.RelayService.exe)
  • Nintex Workflow Start Service – (C:\Program Files\Common Files\microsoft shared\Web Server Extensions\16\BIN\NintexWorkflowStart\Nintex.Workflow.Start.Service.exe)

You can manually install these services on the other servers by running the .NET InstallUtil utility (except for two of them, which require the "sc" tool instead).  Note that there are two versions of InstallUtil, 32-bit and 64-bit.  If you use the wrong one, you will get the dreaded "System.BadImageFormatException", and even when InstallUtil does run successfully, the service won't be visible to SharePoint due to Nintex "programming practices".  The install tool is located in:

  • C:\Windows\Microsoft.NET\Framework64\v4.0.30319\InstallUtil.exe

The series of commands in an administrator command (cmd.exe) window would be:

  • sc create "Nintex Connector Workflow Queue Service" binPath= "C:\Program Files\Common Files\microsoft shared\Web Server Extensions\16\BIN\NintexWorkflow\Nintex.Workflow.Connector.QueueService.exe" DisplayName= "Nintex Connector Workflow Queue Service" start= auto
  • C:\Windows\Microsoft.NET\Framework64\v4.0.30319\InstallUtil.exe "C:\Program Files\Common Files\microsoft shared\Web Server Extensions\16\BIN\ExternalPlatform\Nintex.External.RelayService.exe"
  • sc create "Nintex Workflow Start Service" binPath= "C:\Program Files\Common Files\microsoft shared\Web Server Extensions\16\BIN\NintexWorkflowStart\Nintex.Workflow.Start.Service.exe" DisplayName= "Nintex Workflow Start Service" start= auto
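
If you need to repeat this across several servers, the two "sc" commands can be generated from a small table rather than hand-edited.  A minimal sketch (assuming the default 16-hive paths shown above; adjust for your farm):

```python
# Sketch: build the "sc create" command lines for the two services
# that need sc.exe. Paths assume the default 16-hive location.
BIN = r"C:\Program Files\Common Files\microsoft shared\Web Server Extensions\16\BIN"

services = {
    "Nintex Connector Workflow Queue Service":
        BIN + r"\NintexWorkflow\Nintex.Workflow.Connector.QueueService.exe",
    "Nintex Workflow Start Service":
        BIN + r"\NintexWorkflowStart\Nintex.Workflow.Start.Service.exe",
}

def sc_create(name: str, exe: str) -> str:
    # Note: sc.exe requires a space AFTER each option's equals sign.
    return (f'sc create "{name}" binPath= "{exe}" '
            f'DisplayName= "{name}" start= auto')

for name, exe in services.items():
    print(sc_create(name, exe))
```

Run the printed commands in an elevated cmd.exe on each affected server.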

Why do you have to use "sc create"?  Because the internal service name of the Queue Service is actually "Nintex Connector Workflow Queue Service Recycle", and for the Start Service the name ends up being "NWStart"…SharePoint doesn't like it when service names don't match what it expects!

Once you run the install commands, the services will display in your Services applet:

Once they are installed, switch back to Central Administration and click "Start" for each server.  The services should now start without error and your servers will be in "compliance"!


Why Healthcare is so expensive!

You've always wondered why, but then you were never in the healthcare industry, or working at an exec level, to know how the tech systems work at a typical hospital…

Here’s a quick education into the healthcare disaster! 

When you go into the hospital, a chart is written up based on your comments and the doctor's evaluation and actions (draw blood, tap your knee, whatever).  This chart is then translated into a series of medical codes so the hospital knows how to charge you or your insurance.  The charts are evaluated by a series of medical "coders"…no, they don't know C# or Java or JavaScript, they just match the words to "codes" (ICD-9/ICD-10).

Each chart is "coded" to a specific set of codes.  The codes are then sent off to the billing system.  The billing system matches the codes to the patient's insurance (or lack thereof) such that the base charge rate is multiplied by a discount (or, again, a lack thereof) or a "forced" charge amount for that particular insurance contract (Blue Cross, Medicaid, Medicare, etc.).

Most insurance companies want a "discount" on your particular services/codes.  They can negotiate or force (depending on how big they are) just about anything for a set of codes.

The crazy part is that the base rate the hospital charges (the "charge master" database system) is pretty much "let's run a SQL query against the database and increase everything by 7%"…EVERY YEAR (or some other rate adjustment, possibly matching inflation or exec/board discretion).
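
To put numbers to that blanket increase, here's a quick sketch of what compounding 7% a year does to a price (the figures are illustrative, not from any real charge master):

```python
# Sketch: compound a blanket annual percentage increase on a
# charge-master price. Illustrative numbers only.
def chargemaster_price(base: float, rate: float, years: int) -> float:
    """Price after `years` of across-the-board `rate` increases."""
    return base * (1 + rate) ** years

# A $10 item today, bumped 7% a year for a decade...
price = chargemaster_price(10.00, 0.07, 10)
print(f"${price:.2f}")  # -> $19.67, i.e. the price roughly doubles
```

That's how a $10 item becomes a $20 item with nobody ever re-costing it.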

You should be saying…”wait what”?

Yup, every year, a non-qualified individual will increase the pricing of everything because the execs tell them to.  Remember the scene in "Independence Day" about what the government really pays for a hammer…yeah…$400 for a hammer.

Aka, $20 for a bandaid!

Why?  Because the charge master simply gets incremented by a percentage every year in a blanket fashion!

Ok…so the charge master is broken.  Yup.  That's where the government comes in.  Obama actually kicked some serious ass with this: he forced certain codes to be charged at a base rate that is set by the government (and you have to follow it for any ACA insurance policies).  If you send in a bill with a code priced over the base: REJECTED!  It forced everyone to reevaluate their charge masters (at least for government stuff).

I applaud him for that.

But…here's the kicker: the "coders" that "code" your chart.  Those people are instructed to interpret the words in such a way that they can set the chart to match the most expensive codes!  Even though those codes may not really match at all!  It's up to the patient and the insurance company to review the worded chart to see if the codes have been applied appropriately!

This review happens very rarely.

Now that I know how this works, believe me…every time I go into the doctor’s office I ask for my chart and the codes that were sent for it.  I actually got $250 back for this last visit!

What else happens?  Oh, execs and doctors are "spiffed" for overdoses of various items.  Example: you get bit by a snake and need the anti-venom.  Guess what, you only need one dose…but how many do you actually get?  Anywhere from 1-5 doses!  Each dose (back in the day) could be up to $20K.  Where does the money go?  The doc, the hospital, and the execs get a kickback!   What?!?!  Yup…there is shadiness that occurs for all kinds of things.

And that is your 10-minute rundown of how corrupt the medical system in America is, and why we don't have universal health care: everyone has a different charge master, and no one can agree on the price of anything!


SharePoint eDiscovery OR Records Management, but not both!

NOTE:  This blog post is specific to the current state (9/2017) of on-premises SharePoint 2013 and 2016 environments and has no relation to any of the new, evolving data labeling or retention features in Office 365.  It has come to my attention that some very significant features will be announced at Ignite around filling the traditional Records Management feature gaps.

Stay tuned for a post on why you should upgrade/migrate to the latest and greatest!

You see many blog posts by many of my peers: SharePoint eDiscovery is great, Records Management is good (yeah, only good, if not bad).  But what you rarely hear is the truth of the matter.

Just like in the Highlander series…"there can be only one".  As new features are added to any product, they start to collide.  The collisions are becoming more frequent and the regression test matrix ever larger.

That all being said, if you could only choose one, eDiscovery or Records Management, which would you pick?

eDiscovery is pretty awesome when taken as a single entity: the ability to find and lock down (aka Hold) items across Exchange and SharePoint, for the purposes of litigation for example.

Records Management (again, as a single entity) is great in that you can prevent users from modifying declared records.  Albeit only in the sense that the records live in the Records Center, or that you have enabled the "In-Place Records Management" feature in a site.

So why can you only have one?  Hmm, let’s get to the meat of it shall we?

eDiscovery holds are great in that they allow you to target content such that it does not get modified.  It does this through the somewhat well-known Preservation Hold Library.  That's all fine and dandy, but the reality most people don't get is that when you add a "site" as a source and then create the hold, it holds the entire site by default.  Admins (not end users) can get around this by enabling the "hidden" feature of query-based preservation.  But when you do, it adds an entirely new performance issue to the whole conversation…we'll leave that out here.

Ok, so where does the choice come in?

Well, if you want to put a hold on your entire SharePoint farm, you have to select all the sites as sources for the eDiscovery case.  It's doable…painful, but doable.   So what does that do?  It of course makes it such that any time a user modifies a document, a copy goes into the Preservation Hold Library.

Ok, so what happens when you need to archive a document to the Records Center?  You aren't destroying it…you aren't modifying it (per se), you are following basic procedures to create a record.  Well, unfortunately my friends, the reality is that you will get an ugly error: the document has in fact moved to the Records Center, but a link could not be created:

This is so very bad.  You now have a document sitting in the Records Center, yet it is also still sitting in the source site.  What happens if you send it again, which is of course what a user will try to do?  Yup, another version is created, and unless you have versioning turned on, you get the crazy FileName_ZUOXUDF filename.  The original document will sit there…forever…or until the hold is lifted, by which point the data owner may have left the company and voila…an orphaned record that no one really knows what to do with!


SharePoint has never truly been a records management platform (content organizer and rules are a complete failure).

Because of this, it’s the reason four different companies formed (RecordPoint, RecordLion, Gimmal, Collabware) to solve the problems that SharePoint has had for quite some time.  I’ll skip the inadequacies of those products for now, but likely you’ll get my full opinion on them later in life…

This particular instance of features colliding though…it really pushes organizations over the proverbial limit of understanding and patience.

In reality, the way things should go is: are you sending something to a trusted Records Center?  Yes?  Ok, cool…I'll let that action occur, because I know it's going someplace good and monitored and approved.

Not…fail no matter what because one feature overrides another!

Pick one…but only one.


Do you trust O365’s trust of others?

So I’m renewing my CISSP and will be making a push into the security space pretty hard in the next few months.   Part of that will be doing things like in this post.

I am deep into identity and auth flows and have been doing a ton with ADFS/OAuth with Intune etc.

A few days ago it hit me that a person has the ability to modify the claims that ADFS sends to O365.  That got me thinking…

What will happen if O365 does its realm redirect where a user logs in as one person yet the claim for the UPN is different than the original?  Will it work and if so, what are the ramifications of it?

So, here's my general thinking of what I wanted to do (it didn't end up being how it had to be done…keep reading):

  • Setup ADFS and a federated domain in O365
  • Modify the O365 ADFS claims to be someone\something other than what the actual login implied.
    • IE…I login as, but the claim that is sent back is actually

  • Share a site with
  • See if it all works…

So…first part: ADFS.  When you set up the O365 relying party trust, it adds in its own claim rules so that O365 gets what it is expecting.  The general set of claims that get sent are:


In addition to the other basic claims as seen here:

Ok, cool.  So here’s my goal.  Set it such that no matter who logs in, the identity and claims that get sent to O365 are really someone else!  That requires a bit of claim manipulation.  Here’s an example:

See what I'm doing?  I'm setting the email to be something other than the original value from the authenticated AD user!  I do this for all the email claim fields.  The thought is that Azure AD will utilize the email/UPN as its "source of truth" for who the user is.  Ok, great.  So let's try it out!
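
For illustration only, a rule in the ADFS claim rule language that unconditionally issues a static email claim looks roughly like this (the address is a made-up placeholder; the actual rule set on the O365 relying party trust varies per tenant):

```
=> issue(Type = "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress",
         Value = "someoneelse@contoso.com");
```

A rule with no condition on the left of the arrow fires for every authenticated user, which is exactly the "no matter who logs in" behavior described above.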

Going to the faithful, I enter a set of credentials for my federated domain and get redirected to ADFS.  I log in to the ADFS page, and the first go-around, I got this error:

After removing the competing claim rules and retrying the login, this is the beauty you end up with:

It worked…kinda.  Notice that id.  That is what I thought was the valid ObjectGUID.  Nope, it's a bit different than that.  So how the heck do I view the claims that ADFS is sending?

Well, good luck with that with the out-of-box logging of ADFS.  You can attempt to follow this post to turn on all the verbose logging, but it's only helpful for resolving the competing-claims issue, or the fact that I did not have the SAML consumer set up on the relying party.

So, what do you do?  You swing over to and you set up your ADFS with them; then you can use all their cool debugging tools to see what is actually coming across!  Awesome…

So, after looking at the logs and seeing the original claim rule applied, I see that the ObjectGUID is base64 encoded.  So I copy it and paste it into my claim rule.
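
That encoding is the same one Azure AD uses for its ImmutableID: the base64 of the GUID's little-endian byte layout.  A small sketch of the round trip (the GUID below is a made-up example):

```python
import base64
import uuid

def immutable_id(object_guid: str) -> str:
    """Base64-encode an AD ObjectGUID the way Azure AD expects it
    (the GUID's little-endian byte layout, aka the ImmutableID)."""
    return base64.b64encode(uuid.UUID(object_guid).bytes_le).decode("ascii")

def to_guid(immutable: str) -> str:
    """Reverse the encoding, for sanity-checking a captured claim."""
    return str(uuid.UUID(bytes_le=base64.b64decode(immutable)))

guid = "0f8fad5b-d9cb-469f-a165-70867728950e"  # made-up example GUID
print(immutable_id(guid))
```

Handy when you want to verify that the base64 blob in a captured claim really is the target user's ObjectGUID.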

Ok, let's try this again…I sign on, type a username and password, ADFS does its thing, I get redirected back to the site, and voila…I'm in as the other user!  Holy shizer balls, it worked…

It wasn't what I thought would happen, in that I had to find the ObjectGUID; that is what Azure AD goes off of, not the email claims.

So what does this mean?  It means whoever has access to ADFS for the remote federated domain can open up the ADSI Edit tool, find the ObjectGUID for a user (or even use basic PowerShell such as "Get-ADUser username -Properties ObjectGUID | Select *"), paste in some rules, and BAM…they are in as that user at the O365 application layer.

They do NOT have to be a domain admin to query Active Directory for the ObjectGUID, nor do they have to be a domain admin to manage ADFS.  They can simply be an ADFS admin.

This is significant because when a domain admin changes a password or an AD object, those changes are very likely audited and red flags thrown!

In my experience, a change to ADFS claim rules is very rarely audited or monitored, and if it is, it is unlikely to throw up any major red flags.  Any ADFS admin can make a change and log in at any time, and bam, you can be anyone, anytime.  Which leads to: log your ADFS 510 events:

Hope you trust whoever is running the target federated auth server (whatever it is).  Or that you trust whoever you are sharing things with, if they are doing federation and not Azure AD directly!

I think it would be nice to be able to set in the configuration settings that I don’t want to allow my users to share data with a domain that has federation enabled and that it must be managed by Azure AD!  Maybe even be able to set it at a very specific level such as Site/Web.

Free SharePoint Apps for Everyone!

I've had this in my back pocket for a while.  I just checked to see if it still works and sure enough…it does.  You can see I tore deep into the SP App Store design with this 4-year-old post.  What I didn't show was how one can get the app packages for free and bypass paying for the apps.

This only works with apps that have a "trial".  Ones that you have to pay for will not be open to this hack, as you would have to buy them first.  But technically, once you have bought an app, you could use this same hack to post its app package on the internet.

How does it work, you ask?  If you read through the steps of the post referenced above, one of the abstracted back-end parts is downloading the app package so that it can be deployed to the target web.  The app package is not put into the database and is required to be downloaded each and every time you request it.  In the App Mgmt database you will find the RawXMLEntitlementToken.

This token is generated when you click through the Microsoft billing portion of the app install.  All apps (Free, Trial, Paid) have to run through this in order to get the token.  Once you have this token, you can use it to download the app package by simply pasting this into your browser window:{96ae724f-5d59-49b3-8fcd-79191c3e1728}

This will download the .cab file for the target app.  Once you have this, you don’t need to pay for the App when the trial expires.  Just install it to your app catalog or directly into the site.


Source file could not be replaced with a link – SharePoint

You may run across this in your dealings with SharePoint 2013 or 2016 when attempting to move things to the Records Center and leave a link behind.

It used to be because the file was checked out, but with all the new features of SharePoint these days, you get many components stepping on each other's toes.

In this case, you will find the actual error is not bubbling up from the lower levels of the stack.  A quick search in the ULS logs will point you to this gem:

“Version of this item cannot be deleted because it is on hold”

Ah ha…remove the hold that is placed on the document (if you were just testing anyway) and then you can continue the move process.  The unfortunate thing is that the document does successfully get submitted to the Records Center, so a resubmission will trigger the whole versioning or _ASDSF randomness.  Such is life in SharePoint when you have endpoints calling other endpoints that live outside one another's thread space.

Action of Microsoft.Office.Project.Server.Database.Extension.Upgrade.PDEUpgradeSequence failed – SharePoint 2016 Upgrade

What a crazy error. 

There are a few posts about this, but very few that tell you what is going on.  You can hit it when you attempt to upgrade the farm with the following command (after applying a public update, or even after the 2016 upgrade):

PSConfig.exe -cmd upgrade -inplace b2b -wait -force -cmd applicationcontent -install -cmd installfeatures -cmd secureresources

You could get the above error in one of the final upgrade steps/actions.  PSConfig.exe is attempting to upgrade all your service application and content databases to your current binary level.  This particular error will show up if someone in your organization was "smart" enough to decide to make a content database a Project Server database.  You can confirm this by looking at the tables in the database: if you see any that are related to Project Server, then you know someone did something really dumb.
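
The table check can be sketched as a simple name filter — Project Server tables conventionally carry the MSP_ prefix (a heuristic, not an official detection method; pull the real table list from the database yourself):

```python
# Sketch: flag a content database whose table list suggests Project
# Server schema has been mixed in. Table names here are illustrative.
def has_project_server_tables(table_names):
    """Project Server tables conventionally start with 'MSP_'."""
    return [t for t in table_names if t.upper().startswith("MSP_")]

tables = ["AllDocs", "AllUserData", "MSP_PROJECTS", "MSP_TASKS"]
suspects = has_project_server_tables(tables)
print(suspects)  # ['MSP_PROJECTS', 'MSP_TASKS'] -> someone did something dumb
```

If the list comes back non-empty, detach that database before rerunning PSConfig.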

You can get your farm to upgrade by simply removing the offending database (aka detaching it), then running the command to upgrade everything else.

The real resolution would be to remove all the Project Server tables (make a backup, of course) and then try your upgrade again.  You could also attempt to tell SharePoint that the database is a "Project Server" database and upgrade it via Project Server, but there's no guarantee that will work, especially if the databases are from the 2007/2010 days.  You would need a Project Server environment to upgrade all the way from the old version to the latest.


sprocsSchemaVersion must not be null – SharePoint 2016 Upgrade

You may run across this error.  I inadvertently did.  It happens when the content database upgrade process fails unexpectedly and then Mount-ContentDatabase won't execute the upgrade again because it thinks the database is already upgraded.  Unfortunately, this is a catastrophic failure and will require you to restore your content database and rerun the content database upgrade.

How did I run into this?  Well, I opened many PowerShell windows to be "multi-threaded" in my upgrade, and the upgrade code didn't like that at all.  It complained about a shared log file and killed off half the threads.  I guess you can't fire off more than one instance like you could in 2013 and 2010!

Be sure you always backup your databases before you upgrade in case you need to rollback!

Hybrid Search Results not displaying

Help!  My Hybrid Search isn't working!  What could it be?!?

  1. So you have successfully run the hybrid scripts here.
  2. You have ensured that the results are flowing to your Cloud Search Service Application via the Cloud SSA crawl logs.
  3. You go to do a search using your Azure AD cloud sync'd account and you get…nothing…what!??!
  4. You look things over and over again…maybe I didn't do this, maybe I didn't do that…no, it all looks good!

The possible cause:

  1. You didn't set up a User Profile Service Application (kinda rare, I know)
  2. You view your "sync'd" site collection user's profile and notice that the local "email address" does not match your cloud email address…doh!
    1. vs
  3. You change the values to match using the edit list item feature of the site users list
  4. You re-run the cloud ssa crawl
  5. Go back to the cloud search center…ta-da…results!  Just another reminder that UPNs have to match for the results to process, and this value comes from the site collection users list when you have no UPS!
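
The mismatch in step 2 boils down to a case-insensitive comparison of the local site-collection email against the cloud UPN — something you can sketch for a batch of users (the addresses below are made-up examples, not a real profile schema):

```python
# Sketch: flag users whose local site-collection email does not match
# their Azure AD UPN (compared case-insensitively).
def upn_mismatches(users):
    """users: list of (local_email, cloud_upn) pairs."""
    return [(local, cloud) for local, cloud in users
            if local.strip().lower() != cloud.strip().lower()]

users = [
    ("jsmith@contoso.local", "jsmith@contoso.com"),  # mismatch -> no results
    ("adavis@contoso.com",  "ADavis@contoso.com"),   # fine, case ignored
]
print(upn_mismatches(users))  # [('jsmith@contoso.local', 'jsmith@contoso.com')]
```

Anyone the check flags is a candidate for the edit-list-item fix in step 3.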

Hope this helps someone!
