Copied from my twitter rant here: https://twitter.com/ip1/status/1039668421615529985

This may be edited and added to at a later time.

A rant about servicing stack updates (SSUs) and why Microsoft really needs to work out something better for this:
When the major monthly cumulative update requires the SSU as a prerequisite, it creates major pain points for managed deployment. If you are running “unmanaged” and just let clients go to Microsoft directly to get whatever is applicable, then you probably aren’t worried about this.

Read the rest of this entry »

I found a couple of servers that were reporting they had failed the Configuration Manager Client Health Evaluation. No problem, how about I just manually run the Health Eval scheduled task…
Ah, there’s your problem. “Computer says NO”

[Screenshot: health-fail2]

Checking the client’s CCMEVALTASK.LOG shows lots of lovely red for “Failed to create client evaluation task” and “Failed to delete task Configuration Manager Health Evaluation (0x80070002)”. I already know this is going to be similar to other scheduled task creation problems.

[Screenshot: health-fail1]

Read the rest of this entry »

NOTE: Usual warnings apply. Do a backup before making any changes. If you are unsure about anything in this post, ask or look for more information before attempting it.

Over time WSUS accumulates update metadata that can create performance issues for clients. In large environments this can become a significant problem.

There is a script Microsoft often provides during Premier Support calls to clean up this update metadata; however, it has a few issues:

  • The query can take a *really* long time to run if there are a lot of updates to clean up. In some cases it can take *days*
  • You need to stop all the WSUS services while it runs
  • If it fails for whatever reason, it has to start all over again because it doesn’t commit the changes until it completes successfully
  • While it runs, TEMPDB and the transaction logs will grow quite significantly until the data is committed
  • It gives no useful information on progress

There is a TechNet article (essential reading, with lots of important detail) and a forum post where an improved version was written that reports progress during the cleanup; however, it doesn’t address the TEMPDB/transaction log growth or the run time. To that end I have applied my very rudimentary SQL scripting skills; a rough sketch of the general approach is below.
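As a rough illustration, the core of the cleanup boils down to getting the list of obsolete updates and then deleting them one at a time, which is what allows progress to be reported and keeps each delete small. The sketch below is a minimal example of that pattern using the SUSDB stored procedures spGetObsoleteUpdatesToCleanup and spDeleteUpdate; it is not the full script from the post, so treat it as a starting point only (and back up SUSDB first).

USE SUSDB
GO
-- Gather the obsolete updates once, then delete them individually so that
-- progress can be reported and each delete commits on its own.
DECLARE @work TABLE (LocalUpdateID INT)
DECLARE @total INT, @done INT = 0, @updateId INT

INSERT INTO @work (LocalUpdateID)
EXEC spGetObsoleteUpdatesToCleanup

SELECT @total = COUNT(*) FROM @work

WHILE EXISTS (SELECT 1 FROM @work)
BEGIN
    SELECT TOP 1 @updateId = LocalUpdateID FROM @work
    EXEC spDeleteUpdate @localUpdateID = @updateId
    DELETE FROM @work WHERE LocalUpdateID = @updateId
    SET @done = @done + 1
    RAISERROR('Deleted %d of %d obsolete updates', 0, 1, @done, @total) WITH NOWAIT
END
GO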

Read the rest of this entry »

NOTE: The configuration suggestions I mention in this post won’t fix the underlying issue, but depending on the size of your environment they may be enough to get things working for you again. Microsoft is currently working on releasing a hotfix that I have tested and found to resolve this problem.

 

Microsoft has released the WSUS server hotfix; details here: https://blogs.technet.microsoft.com/configurationmgr/2017/08/18/high-cpuhigh-memory-in-wsus-following-update-tuesdays/

NOTE2: It turns out there is a new issue from the August 2017 updates that “clears” the update history on a computer, which triggers a full client scan again. This will also cause high load on your WSUS server, although for slightly different reasons; however, the suggestions here and the coming updates will help to resolve the load from that problem as well.

Microsoft has updated the August cumulative updates to resolve this issue; details here: https://support.microsoft.com/en-us/help/4039396/windows-10-update-kb4039396

 

NOTE3: Microsoft has now published some additional official guidance here: https://blogs.technet.microsoft.com/askcore/2017/08/18/high-cpuhigh-memory-in-wsus-following-update-tuesdays/

This issue is one I first encountered on only a couple of our WSUS servers (2 or 3 of 15) in November 2016, after the new cumulative update process was introduced for patching. At first I assumed it was a failure on my part to do more regular cleanup, or a result of the recent upgrade to ConfigMgr 1610, or an “end of year” rush of activity on the network. That isn’t unusual for the environment I currently manage (education, with approx. 370,000+ devices).

At first I looked at server bottlenecks (we run everything in VMware) and even SQL database corruption. I tried WSUS resets, even recreating the database (a last resort in a large environment). I then thought maybe it was a Server 2012 WSUS issue, as we had other Server 2012 related cases open with Microsoft. To test, I rebuilt one server as 2012 R2, but the problems persisted. Given it was only happening on a couple of servers, I assumed it was an issue with those servers in particular and didn’t suspect a larger problem.

Over the Christmas holidays things went quiet, so there was nothing more I could do until school returned the following February.

Then everything basically exploded.

The first patch cycle we ran saw the WSUS server rocket to 100% CPU and stay there. Nothing I did could stop it recurring. I found ways to bring things under control for a few hours at a time. Endpoint definitions started falling behind because clients couldn’t scan for updates. Then it started happening on a couple more of the servers. At this point I conceded defeat and called in Microsoft. Unfortunately it was another six months before they finally identified that it was a “function” of WSUS causing the grief, not the configuration or size of our environment.

The Problem

The most obvious symptoms are clients failing to scan for updates and very high CPU on the WSUS server (the w3wp.exe IIS worker process). Some clients get through; many will fail. The main cause is Windows 10 clients and the way WSUS has to process their cumulative updates.
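As a very rough way to gauge how much update metadata a WSUS server is carrying, you can look at the size of the metadata tables in SUSDB. The table names below (tbUpdate, tbRevision, tbXml) are the standard SUSDB ones as I understand them; treat this as a read-only diagnostic sketch.

USE SUSDB
GO
-- Rough gauge of metadata volume: update and revision counts, plus the size
-- of the table holding the update XML metadata.
SELECT (SELECT COUNT(*) FROM dbo.tbUpdate)   AS Updates,
       (SELECT COUNT(*) FROM dbo.tbRevision) AS Revisions
EXEC sp_spaceused 'dbo.tbXml'
GO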

Read the rest of this entry »

Creating, or recreating, a SQL user account, I forgot to untick the “must change password” option. Damn.

If you try to disable the password policy options, you get a message saying “The CHECK_POLICY and CHECK_EXPIRATION options cannot be turned OFF when MUST_CHANGE is ON. (Microsoft SQL Server, Error: 15128)”

Rather than recreate the account, you can reset the password (which clears the MUST_CHANGE flag) and then disable the options with a script:

Source: http://www.webofwood.com/2009/01/29/fix-a-sql-server-login-which-has-must_change-set-to-on/

USE Master
GO
-- Resetting the password (even to the same value) clears the MUST_CHANGE flag
ALTER LOGIN [username] WITH PASSWORD = 'samepassword'
GO
-- With MUST_CHANGE cleared, the policy options can now be turned off
ALTER LOGIN [username] WITH
      CHECK_POLICY = OFF,
      CHECK_EXPIRATION = OFF;
GO
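To confirm both flags are now off, you can check sys.sql_logins afterwards (a quick verification query; substitute your own login name for 'username'):

SELECT name, is_policy_checked, is_expiration_checked
FROM sys.sql_logins
WHERE name = 'username';

Both columns should return 0 once the ALTER LOGIN statements have run.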

 

 

When trying to open a OneDrive folder on the computer you get a “Catastrophic failure” message. This is related to the “Offline” (O) attribute having been set on that folder, possibly by some other program or action performed in the past.

[Screenshot: onedrivecatastrophicfailure]

The same folder can be opened in Windows Explorer when selected from the left (navigation) pane, but the message appears when opening the folder from the right pane.

You can check the folder location by right-clicking the folder and selecting “Properties”.

[Screenshot: onedrivepicturesproperties]

Open a CMD prompt, CD to the OneDrive folder location found in the properties, and use the ATTRIB command to clear the Offline attribute on the folder:

ATTRIB -O "Pictures" /s /d

 

[Screenshot: onedriveattribreset]

If a lot of folders are showing this issue, the same command can be run against the whole OneDrive folder instead:

ATTRIB -O /s /d

Yes, we skipped 1602 and went to 1606 in our Dev environment, but due to various change freezes, conflicts with other projects and change management delays we have decided we will be going from 1511 directly to 1610 in Prod.

The Dev upgrade went OK (using the “FastRing” PowerShell script), however it was then announced that some additional bugs had been found and a newer 1610 install was released. The updates, though, have not yet been posted for the people who ran the original 1610 install…

Read the rest of this entry »

A quick reference for the error codes when you get an activation error in Windows:

http://windows.microsoft.com/en-us/windows-10/activation-errors-windows-10

 

A problem recently encountered was causing major headaches. There was a runbook somewhere in the system with an action running under the security credentials of a user who had left some time ago. Their account had recently been disabled, and Orchestrator was logging thousands of errors, causing the Orchestrator database to grow at a massive rate. OBJECTINSTANCEDATA was growing by thousands of rows a second and hit 50 million rows and 16GB in size after only a few days.
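While tracking down the offending runbook, it is handy to keep an eye on how fast the table is actually growing. A simple read-only check against the Orchestrator database (a sketch only; the schema is assumed to be the standard dbo one):

-- Size and row count of the runaway table
EXEC sp_spaceused 'dbo.OBJECTINSTANCEDATA'

-- Approximate row count without scanning the whole table
SELECT SUM(p.rows) AS ApproxRows
FROM sys.partitions p
WHERE p.object_id = OBJECT_ID('dbo.OBJECTINSTANCEDATA')
  AND p.index_id IN (0, 1);

Running this a few minutes apart gives a rough rows-per-second growth rate.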

Read the rest of this entry »

With each new update of ConfigMgr I will start a new entry for anything specific to that version that I want to test or make notes on.

Read the rest of this entry »