-
Managing endpoint policies for the enterprise
Introduction
This one goes out to Tay, also known as SwiftOnSecurity. We had a discussion in late 2024 around policy management in the enterprise, so here we are.
I have been deploying OS, app, device, and whatever-else-comes-along policies in mid-to-large enterprises for most of my career. They say the best way to learn is through failure, and I have had my fair share of outage-causing failures, with many a lesson along the way.
My first big lesson was always testing scripts before deploying them. This is a simple lesson that is known by most at this point, but valid nonetheless. Mix equal parts confidence, excitement, lack of experience, and a missing parameter and boom, you wipe out ~300 nurse managers' user profiles trying to clean up some disk space. This was one of my first big deployments.
My next big lesson was not to modify active production deployments. I was helping a co-worker troubleshoot a ConfigMgr collection that didn’t have any members. Easy, right? Until you realize the collection you just fixed had a required task sequence deployment with multiple restarts in it. What did the collection membership jump to? Only about 10,000 workstations and servers. After the unstoppable 6-hour outage, I was terrified explaining that one to our VP. This outage led to the split of our single ConfigMgr environment into five separate workstation and server environments later that year, which I am thankful for today.
These lessons, along with many others, have shaped the robust set of practices outlined below. Our teams apply these practices across multiple endpoint management tools to minimize risk, avoid impacting user experiences, and prevent outages. Whether we’re using Group Policy, Intune, ConfigMgr, JAMF, or other tools, these guidelines help keep everything running smoothly.
Summary
This is a very lengthy post, but I did not want to separate it into multiple posts, so here is a summary of the major points.
- Always test scripts and avoid modifying active production deployments to prevent outages.
- Use lower environments that closely mirror production for safe testing.
- Deploy policies gradually using pilots, rings, and maintenance windows; prioritize critical endpoints last.
- Apply least privilege and separation of duties to minimize risk from human error.
- Document all processes and metrics to ensure awareness and enforce standards.
- Use change control (e.g., ITIL) to manage and simplify changes.
- Automate repetitive or error-prone steps to reduce mistakes.
- Inventory and analyze your environment before making changes; use multiple tools for complete visibility.
- Only implement changes when truly necessary, using data to guide decisions.
- Prepare all scripts, paperwork, and communications before implementation.
- Validate changes thoroughly, ideally by a separate person, and use tools to confirm success across all targeted devices.
Environment setup
Lower environments
- Test, Dev, Staging, Int, QA - Whatever you want to call it, set up a lower environment. It is up to you whether you need one or many.
- Your lower environment should mirror Production as best as possible. Are you really testing if your environment doesn’t match?
Everyone has a test environment, some are just in Production.
Deployment targets
- Set up pilots, rings, exclusions, and segregation of critical endpoints. Examples below.
- You should “slow-roll” all deployments to reduce risk. This ensures impact is minimized for any unknowns. Start small and ramp up deployments to more endpoints. Even if we are highly confident there is no impact, the risk of a single night’s deployment is generally too high.
- If you have endpoints in lower environments, target those first. You should also prioritize any dedicated testing endpoints like test servers.
- Virtual machines are very handy here. One of our common test targets is reclaimed/unused VMs that are set to be decommissioned. Well-used machines with no user impact? Yes please!
- Set up testers for your most critical applications.
- In our environment, we have a group of non-IT volunteers that test changes and new technologies early to ensure no impacts. These are production users on production workstations. I cannot express enough the value we get from this. It is the closest we can get to ensuring there are no impacts.
- You should move your critical endpoints to the end of your deployment schedule, even if that means ramping back down at the end, like 500, 1000, 5000, then 250.
- Develop maintenance windows where necessary. Deploy to workstations during non-business hours. Deploy to servers during lowest usage hours. Work with the owners of your endpoints to determine the best time to deploy (a ConfigMgr sketch follows this list).
- In our server environment, server owners can actually choose their maintenance windows directly with the patching team. They absolutely love having this freedom.
- Reuse pilot groups where possible. Generally, you will always have deployments that need to target all devices. This is a great way to have pre-populated pilot groups that you can reuse for these deployments. Your first pilot can include those volunteers and critical app testers so that you ensure any impacts are identified early on.
- Separate endpoints into logical buckets to eliminate points of failure. A simple example is splitting workstations and servers out. You could separate by configuration, network connection, department. Use what works for your environment.
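Where maintenance windows are defined depends on your tooling. As a rough sketch (not a definitive recipe), here is what creating a weekly window on a ConfigMgr collection might look like; the collection name, day, and times are placeholders you would replace with whatever the endpoint owners agree to.
# Sketch: create a recurring maintenance window on a ConfigMgr collection.
# Run from a machine with the ConfigMgr console installed; names and times below are placeholders.
Import-Module "$($env:SMS_ADMIN_UI_PATH)\..\ConfigurationManager.psd1"
Set-Location "$((Get-PSDrive -PSProvider CMSite).Name):"
# Every Saturday at 02:00 for 4 hours
$schedule = New-CMSchedule -DayOfWeek Saturday -RecurCount 1 -Start (Get-Date '02:00') -DurationInterval Hours -DurationCount 4
New-CMMaintenanceWindow -CollectionName 'SRV - SQL Servers' -Name 'Saturday 2am patch window' -Schedule $schedule -ApplyTo Any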
Least privilege and separation of duties
- Follow the principles of least privilege. Accidents happen, we are all human. Restrict those who should not have permission to deploy to all devices, all users, all servers, etc.
- For our critical Production deployment systems, we have a two-step process as part of onboarding a new engineer.
- Access to the lower environment first, for two weeks of training/shadowing.
- A test/quiz requiring an 80% score, with a meeting to review results, before Prod access is granted.
- Newly onboarded engineers should not handle critical deployments for a period of time until they are familiar and comfortable with that level of potential impact.
- Use separation of duties where possible. Only experienced engineers should handle the most critical deployments. A common mistake is a new engineer joining a team, immediately getting far-reaching privileges, and breaking something. This isn’t the engineer’s fault, but the fault of the process that allowed it to happen.
- The person handling implementation should not be the one performing validation. Have the requestor perform validation or at least a separate member of the deployment team.
Standards and change control
Documentation
In order for these practices to be effective, they need to be documented so that the value is well known and they can be enforced. Awareness is your friend, especially in urgent and emergency circumstances. If your requestors and leaders are not aware of these processes, they generally won’t be happy when they want something to happen now.
- Formalize all of your processes. You can make an official Operational Standard using your enterprise-wide Standards process if you have one. If you do not need that level of formality, at least put the process in writing in your document or knowledge management system.
- Develop metrics for barriers and safety nets. Include these metrics in your standards and documentation for more awareness. Some examples below.
- What constitutes a high risk change vs. medium or low?
- What is the maximum number of devices you can deploy to in a single night?
- How many test devices must be successful to move to production?
Change control
Another method to enforce these standards is through a change control process. ITIL is a popular choice. I don’t think anyone likes change control, but it is a necessary evil. In many organizations, change control is an enterprise requirement, so you don’t always have to be the bad guy.
- One benefit is that once your process is mature enough, you can start to simplify. This is where Standard changes come in. They usually have less paperwork, a template, or automatic approvals.
- In our environment we use standard changes for our Windows in-place upgrades, monthly patching, driver deployments, policy changes that do not impact user experience (as opposed to those that do), and software deployments to less than ~5% of our workstation fleet (~5000 devices). All of these were possible because we had already completed hundreds of each of these changes using the Normal change process. The process was proven and rock solid so we were allowed leniency.
Preparation, implementation, and validation
Automation
One of the most disappointing feelings is when you miss a simple checkbox, skip a step, or make a syntax error. The simple things. Where it makes sense, you should automate these steps. Even just a simple script to make sure all boxes were checked can be a lifesaver one day. It only takes one time for a script to catch something to prove its worth.
Here are a few examples of easily missed implementation steps where automation can help.
- ConfigMgr - Editing task sequences, especially large ones. Run Command Line steps, Run PowerShell steps, adding conditions. Very easy to typo here.
- Group Policy - New linked policies automatically target Authenticated Users. Yep, that is everyone (see the sketch at the end of this section).
- Intune - Targeting the All Devices virtual group, but forgetting to add your filter. Yep, that is also everyone.
For mature environments, don’t touch Prod directly; use Continuous Integration/Continuous Deployment (CI/CD) pipelines. Humans make the changes in the lower environment, and scripts promote them to Prod. There are some great tools to help with this, like IntuneCD (Tobias Almén) and M365DSC (Microsoft).
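As one example of a cheap safety net tied to the Group Policy item above, here is a hedged sketch that flags any GPO where Authenticated Users still holds the Apply permission, i.e. a policy that would hit everyone once linked. It assumes the GroupPolicy RSAT module is available and you have read rights on the GPOs.
# Sketch: flag GPOs where Authenticated Users still has GpoApply (the GPO applies to everyone).
Import-Module GroupPolicy
Get-GPO -All | ForEach-Object {
    $gpo = $_
    $appliesToEveryone = Get-GPPermission -Guid $gpo.Id -All |
        Where-Object { $_.Trustee.Name -eq 'Authenticated Users' -and $_.Permission -eq 'GpoApply' }
    if ($appliesToEveryone) {
        Write-Warning "$($gpo.DisplayName) still applies to Authenticated Users - scope it before linking!"
    }
}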
Implementation preparation
Data and visibility are key to helping eliminate unknowns and reduce risk. If you need to change software or configurations, first inventory your environment to see what it looks like currently. Where possible, do not rely on assumptions or accept unknowns.
There are many tools that can provide visibility into your environment and help decision making. Use them. Many environments have the tools in place but they are highly underutilized.
- Endpoint Management tools - ConfigMgr, Intune, JAMF, Workspace One, and others
- Inventory of software, files, settings, and more
- Software metering and usage to see what software is in use
- Real-time capabilities with CMPivot, Run scripts, Platform scripts, etc.
- Digital Employee Experience (DEX) tools - Nexthink, 1E, Lakeside, etc.
- Inventory of software, files, settings, and more
- Real-time capabilities with remote actions and more
- Endpoint Detection and Response tools - Defender, Crowdstrike, and the like
- Inventory from a security perspective like software, patching, vulnerabilities
- Real-time capabilities with running scripts
- Advanced event logging
- Security Information and Event Management (SIEM) tools - Sentinel, Splunk, Sysmon, and more
- Indirect inventory through logging and usage
- Advanced event logging
- Observability platforms - Solarwinds, Dynatrace, Datadog, etc
- Indirect inventory and usage through monitoring
- Scripting - PowerShell, Bash, Python, etc
- They may not be able to get you the full picture, especially at scale, but some visibility is better than nothing. Get a subset of data and extrapolate from that. Just make sure not to break anything.
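When none of the tools above cover a particular setting, even a quick sampling script can give you directional data. The sketch below is only that, a sketch; the machine list and registry path are hypothetical, and it assumes WinRM is open to the targets.
# Sketch: sample a subset of machines for a setting, then extrapolate.
# The machine list and registry path are hypothetical placeholders.
$computers = Get-Content 'C:\temp\sample-machines.txt' | Select-Object -First 200
$results = Invoke-Command -ComputerName $computers -ErrorAction SilentlyContinue -ScriptBlock {
    [pscustomobject]@{
        ComputerName = $env:COMPUTERNAME
        Version      = (Get-ItemProperty 'HKLM:\SOFTWARE\Contoso\Agent' -ErrorAction SilentlyContinue).Version
    }
}
$results | Group-Object Version | Select-Object Name, Count # extrapolate from the sample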
Sometimes data in one tool might be incomplete or even inconsistent; cross-reference tools if you have to. Here are a few examples.
- I need to see users who have used the Chrome browser. I query software metering with ConfigMgr and cross-reference it with Defender + Sysmon for process execution.
- I need to see users who visited a website to build an accurate user list. I can use Nexthink for site visits, but I need page details. I can get that from Dynatrace. Using both paints the complete picture.
- I need to see plug and play events for USB devices. Event log data in Splunk has some details, but Defender exposes more events.
Determining the need for change
What is better than covering all your bases and having a flawless implementation? Not needing to implement that change in the first place. Use the tools above to make the most educated decisions. You may not need to introduce as much risk or potential impact as you expected. I have had many direct experiences where data saved the day and prevented a change. For well-planned changes this is helpful, but it can also save the day in emergency situations. Below are a couple of examples.
- Review software installations and usage. How many users have this installed? Of that, how many actually use it? This data alone can help determine the need for change.
- e.g.) We recently did an exercise to reduce the software load in our base image. We found a few of the applications were used by less than 5% of our users, even though business leaders claimed these applications were critical and had high usage. The data won out.
- Use tools at your disposal to query registry, WMI, files and folders, and many other things to determine the need for change.
- e.g.) A few years back we were looking to make a browser change. I used CMPivot to query the specific registry key and found over 2/3 of our environment already had it set because Microsoft had turned it on by default. Now we knew there was little risk to implement on the remaining 1/3.
Preparing for implementation
When you are ready to move forward with implementing something new, save future you some trouble and do as much preparation as possible before you implement. This means preparing the scripts, the paperwork, and the change itself beforehand. If you are scheduled to implement at 7pm, do not sign on at 7pm and start getting everything prepped. Do it the day before or the day of, so that implementation can be just that and nothing more.
- Schedule deployments beforehand, where possible. Most endpoint management tools nowadays allow you to do this (a sketch follows this list).
- Prepare all paperwork beforehand, such as documentation, change controls, and communications to end users or stakeholders.
- Manage your time. If you sign off at 5pm, do you have a reminder or calendar entry to alert you to sign back on at 7pm? This one is more important the more extreme the implementation time is. Some organizations require changes at 2am or on Saturdays, or even at variable times based on lowest usage points.
- Ensure you have all the support you need. Should a co-worker be there? Do you have a separate person handling validation? Should the vendor be on the call? Early on, when we used to do our ConfigMgr upgrades, we always had our Microsoft PFE/DSE/CSE on with us in case of issues.
- If you are automating certain steps, please write that script up before implementation starts. You never know when that 5 minute task will turn into a 60 minute ball of frustration.
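For the first bullet, most tools let you stage the deployment ahead of time with a future availability and deadline. A hedged ConfigMgr example, created the day before the change window (application, collection, and times are placeholders):
# Sketch: stage a deployment the day before, scheduled for the change window.
# Run from a ConfigMgr PowerShell drive; the application, collection, and times are placeholders.
New-CMApplicationDeployment -Name 'Contoso Agent 2.1' -CollectionName 'Pilot Ring 1' -DeployAction Install -DeployPurpose Required -AvailableDateTime (Get-Date '2025-01-15 19:00') -DeadlineDateTime (Get-Date '2025-01-15 21:00') -TimeBaseOn LocalTime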
Validation
Validation is ensuring that your change was successful. Successful means that it did what you expected it to. You can successfully install software and also have unforeseen negative impacts.
- Validation is always best done by a separate person. Ideally, the requestor for the change should handle validation. They are the ones that must be happy in the end. This will depend on whether they are another person in IT, have permissions to validate, and other environmental circumstances.
- Validate not only that the deployment status was successful, but the actual change on the endpoint is reflected such as software being installed or registry being changed.
- Use the tools listed above to perform validation. Something as simple as querying for the appropriate registry key after making a GPO or Intune change goes a very long way.
- Query all your targeted devices. If you are deploying to 500 devices, how comfortable are you that the 2 you manually checked reflect the other 498? Again, use the tools available to you.
- For all of our AD/EntraID deployment pilot groups for GPO/Intune changes, we create collections in ConfigMgr that sync the members so we can easily query them after each change using real-time tools like CMPivot.
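Putting those points together, a rough validation sketch for a ConfigMgr deployment might check the deployment summary first and then spot-check the endpoints themselves. The collection, software name, and registry path below are placeholders.
# Sketch: validate the deployment status AND the actual change on the endpoints.
# 1. Deployment status - how many of the targeted devices report success?
Get-CMDeployment -CollectionName 'Pilot Ring 1' -SoftwareName 'Contoso Agent 2.1' | Select-Object SoftwareName, NumberTargeted, NumberSuccess, NumberErrors
# 2. Spot-check the change itself with an approved Run Script against the same collection,
#    where the Run Script body is something like:
#    (Get-ItemProperty 'HKLM:\SOFTWARE\Policies\Contoso' -ErrorAction SilentlyContinue).SettingName
Invoke-CMScript -CollectionId '<CollectionId>' -ScriptGuid '<scriptguid>'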
Conclusion
All the practices outlined above are lessons learned through trial and error. Some of them I may not have been the biggest supporter of at first, but over time I came to see the benefit, which worked out in both my favor and my organization’s. You may not need all of these, but consider what makes sense in your environment.
What are some of the ways you have reduced risk in your environment?
P.S. A thank you to the many folks that have helped build these processes along the way: Jerry Boehnlein, Jim Parris, Mike Cook, Darren Chinnon, Scott Forshey, Jason Luckey, Matt Wright, Jason Mattingly, Nick Combs, Tim Robinson, Scott Hublar, Rex Ference, Sam Himburg, Paul Humphrey, Danny Hutchinson, Shaun Poland, and probably a few more that I cannot remember.
-
Automating Wireshark in Windows
This post is the second of three for automating common debugging tools on Windows endpoints.
From the prior post:
Earlier this year I came across a scenario of an application dropping connections. This was occurring sporadically across many hundreds of users. Typically, I would attempt to recreate the issue so I could debug, but that was not possible here. I needed a way to be ready for the drop to occur and have all debugging tools set up proactively across a large number of users.
We use ConfigMgr to run scripts on workstations from a central location and it worked well in this scenario.
Initial setup process
This is where running this with ConfigMgr becomes convenient. Just target a collection with the software push and the script.
- Package up Wireshark in ConfigMgr to be pushed silently
- Add a Run Script to ConfigMgr to be run on devices, Start-WiresharkCapture
- Add the column for Script GUID to the ConfigMgr console and copy it out

- Create collection of target devices
- Trigger the Run script each morning at a specified time on target workstations to create and start the capture using the Invoke-CMScript cmdlet. I used Azure Automation to trigger this daily and re-run hourly to ensure Wireshark was always running. You can use any automation solution you prefer.
Invoke-CMScript -CollectionId <CollectionId> -ScriptGuid <scriptguid>
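The wrapper around that cmdlet will vary with your automation platform. Here is a minimal sketch of what the scheduled job might run, assuming a hybrid worker (or any scheduled host) with the ConfigMgr console installed; the site code, site server, collection ID, and script GUID are placeholders.
# Sketch: hourly wrapper around Invoke-CMScript, e.g. from an Azure Automation hybrid worker.
# 'ABC', the site server, collection ID, and script GUID are placeholders.
Import-Module "$($env:SMS_ADMIN_UI_PATH)\..\ConfigurationManager.psd1"
if (-not (Get-PSDrive -Name 'ABC' -ErrorAction SilentlyContinue)) {
    New-PSDrive -Name 'ABC' -PSProvider CMSite -Root 'cm01.contoso.com' | Out-Null
}
Set-Location 'ABC:'
Invoke-CMScript -CollectionId '<CollectionId>' -ScriptGuid '<scriptguid>'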
The script
This automation was not difficult to set up. The programmatic capabilities of Wireshark made it fairly easy. The biggest challenge was cleanly ending the Wireshark processes and restarting, then ensuring only one instance was running at a time so that a capture was taken at all times.
Again, special thanks to my co-workers Darren Chinnon and Raul Colunga for the help on this one.
Wireshark parameters
- Start Wireshark trace
- RingBuffer: 4
- File size: 100 MB
- Start capture immediately
- Save .pcapng to c:\temp\capture
Customize the Wireshark parameters on lines 53 and 68.
Script anatomy
- Check for Wireshark.exe in default path
- Check for c:\temp\capture path and create if necessary
- Check if Wireshark.exe is running already
- If Wireshark is not running, start it
- Gather the network interface alias of the interface running DHCP and connected
- Concatenate the Wireshark process with the network interface alias
- Start Wireshark with the identified alias
- If Wireshark is running, check process count >=2
- If multiple processes running, kill processes
- Start Wireshark following the logic above in step #4
Partial Preview
if (Test-Path 'C:\Program Files\Wireshark\Wireshark.exe') { # Check if Wireshark is installed
    if (!(Test-Path c:\temp\capture)) { # Check if c:\temp\capture exists
        New-Item -Path c:\temp -Name capture -ItemType Directory -Force # Create capture directory
    }
    $WSProcess = (Get-Process -Name Wireshark -ErrorAction SilentlyContinue)
    if ($null -eq $WSProcess) {
        # Gather the interface number
        $IntAlias = Get-NetIPInterface -AddressFamily IPv4 -ConnectionState Connected -Dhcp Enabled | Select-Object -ExpandProperty InterfaceAlias
        $IntListraw = & "C:\Program Files\Wireshark\Wireshark.exe" -D | Out-String
        $IntList = ($IntListraw.Split("`n"))
        $wsIntName = $IntList | Select-String -SimpleMatch ('(' + $IntAlias + ')')
        $wsIntNumber = $wsIntName.ToString()[0]
        Write-Output 'Wireshark not running, starting Wireshark.'
        # Start the capture
        & "C:\Program Files\Wireshark\Wireshark.exe" -i $wsIntNumber -b filesize:100000 -k -w "C:\temp\capture\$($env:username)-$($env:computername).pcapng"
    } elseif ($WSProcess.count -ge 2) {
        Write-Warning 'Multiple Wireshark processes running, killing, and starting Wireshark!'
        Stop-Process -Name dumpcap -Force
        Start-Sleep -Seconds 5
        Stop-Process -Name wireshark -Force
        # Gather the interface number
        $IntAlias = Get-NetIPInterface -AddressFamily IPv4 -ConnectionState Connected -Dhcp Enabled | Select-Object -ExpandProperty InterfaceAlias
        $IntListraw = & "C:\Program Files\Wireshark\Wireshark.exe" -D | Out-String
        $IntList = ($IntListraw.Split("`n"))
        $wsIntName = $IntList | Select-String -SimpleMatch ('(' + $IntAlias + ')')
        $wsIntNumber = $wsIntName.ToString()[0]
        # Start the capture
        & "C:\Program Files\Wireshark\Wireshark.exe" -i $wsIntNumber -b filesize:100000 -k -w "C:\temp\capture\$($env:username)-$($env:computername).pcapng"
    } else {
        Write-Warning 'Wireshark is already running, not starting a new instance!'
    }
}
GitHub link: Start-WiresharkCapture.ps1
Output
Starting Wireshark for the first time

Starting Wireshark when it is already running or running multiple instances

Closing
This script was a bit more of a challenge than the Perfmon script, but allowed Wireshark to capture all network traffic at all times. This led to the resolution of the issue we were experiencing, so it paid off.
-
Automating Performance Monitor in Windows
This post is the first of three for automating common debugging tools on Windows endpoints.
Earlier this year I came across a scenario of an application dropping connections. This was occurring sporadically across many hundreds of users. Typically, I would attempt to recreate the issue so I could debug, but that was not possible here. I needed a way to be ready for the drop to occur and have all debugging tools set up proactively across a large number of users.
We use ConfigMgr to run scripts on workstations from a central location and it worked well in this scenario.
Initial setup process
- In Performance Monitor (perfmon.msc), create the necessary Data Collector Set, henceforth abbreviated DCS
- Be sure to specify the directory you want your output in
- Export the DCS you created. Right-click and Save template to .xml.
- Download Start-PerfmonCapture.ps1 linked below
- Copy the contents of the .xml into the script below so no artifacts are necessary outside of the script; it is self-contained.
- Add a Run Script to ConfigMgr to be run on devices, Start-PerfmonCapture
- Add the column for Script GUID to the ConfigMgr console and copy it out

- Create collection of target devices
- Trigger the Run script each morning at a specified time on target workstations to create and start the DCS using the Invoke-CMScript cmdlet. I used Azure Automation to trigger this daily and re-run hourly to ensure the DCS was always running. You can use any automation solution you prefer.
Invoke-CMScript -CollectionId <CollectionId> -ScriptGuid <scriptguid>
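If you would rather script the export than right-click in the console, logman can dump the same template, which you then paste into the script as a here-string so everything stays self-contained. The DCS name and output path below are placeholders.
# Sketch: export the Data Collector Set you built in perfmon.msc from the command line.
logman export 'PerfMonExample' -xml C:\temp\PerfMonExample.xml
# ...then paste the XML into the script as a here-string so it stays self-contained
$DCSTemplate = @'
<DataCollectorSet>
    <!-- contents of the exported .xml go here -->
</DataCollectorSet>
'@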
The script
This script is fairly basic; it mostly just took figuring out how to interact with a Data Collector Set using logman.exe and making sure it didn’t clobber an existing DCS that is already running.
Special thanks to Aussie Rob SQL, Jonathan Medd, and Rabi Achrafi for the example scripts I found online. References are in the script help text. Also thanks to my co-workers Darren Chinnon and Raul Colunga who helped put this together.
Script anatomy
- Populate the XML for the Data Collector Set
- Specify name of DCS
- Query if DCS already exists with specified name
- If DCS found, check if running
- If running, exit
- If not running, start it
- If DCS not found, create it and start it
Edit the lines below to personalize as needed
- Line 27 through 210 - Your custom XML
- Line 212 - DCS name
Partial Preview
$DCSName = 'PerfMonExample'
$DCSCheck = & logman query $DCSName # Query if DCS already exists
if ($DCSCheck[1] -like "*$($DCSName)") {
    Write-Output 'DCS found!'
    if ($DCSCheck[2] -like '*Running') {
        Write-Output 'Trace running, exiting...'
    } else {
        Write-Output 'Trace not running, starting...'
        & logman start $DCSName
    }
} else {
    Write-Output 'DCS not found, creating...'
    # Create the Data Collector Set
    $DCS = New-Object -COM Pla.DataCollectorSet
    $DCS.DisplayName = $DCSName
    $DCS.SetXml($DCSTemplate)
    $DCS.Commit("$DCSName" , $null , 0x0003)
    # Start the data collection
    Write-Output 'Starting the DCS!'
    $DCS.start($false)
}
GitHub link: Start-PerfmonCapture.ps1
Output
The script gives a little output, though not much. This is mostly for validation during testing.
Starting the script, there may be additional output from logman.exe for the initial run

Starting the script when DCS is already running

Starting the script when DCS is not running

Closing
Overall, this process worked well and met the need. It wasn’t the first time I had to use Perfmon and it won’t be the last. Up next, Wireshark.
-
Intune missing capabilities for the ConfigMgr administrator
Even if you haven’t been paying attention to recent development (or lack thereof) for Microsoft Configuration Manager, or to any of the threads on Twitter/X, Reddit, or other major social media platforms, you probably still know the writing is on the wall for ConfigMgr. Nearly all focus in Contosoland has been devoted to Intune.
This post outlines my personal running list of gaps that Intune doesn’t quite cover for the seasoned ConfigMgr administrator.
My friend and banter extraordinaire, Bryan Dam, recently posted a quote that describes this list well.
#ConfigMgr gave you 250% of what you need. #Intune gives you 90%, we’ll get it to 100% … eventually.
Much of this list may be in the last 150%, but that doesn’t change a lot of organizations’ dependency on these capabilities. Make your own determination how critical these capabilities are for your organization.
Kim Oppalfens has the best write-up I have seen to date of these gaps.
- Realtime scripts
- Realtime application installation
- CMPivot
- Task Sequences with Autopilot
- Custom inventory
- Flexible targeting
- Configuration baselines
- Software metering
- Customizable reporting
- Package distribution
My list includes a few more technical gaps ranging from critical to minor technical details.
Software installation
For the majority of your software installations, Intune should cover your needs. But the following requirements may pose an issue.
- Sequencing complex installs together - Need to deploy multiple installs in a certain order or tie in installs, scripts, and restarts at once? Complex installs like Citrix VDA or sometimes Windows Feature Upgrades are easily handled with ConfigMgr task sequences. No equivalent in Intune without complex PowerShell scripting.
- 30GB maximum package size - This isn’t even that new. This capability was bumped from 8GB to 30GB around the start of 2024. This may pose an issue for things like Visual Studio, AutoCAD, SAS, and other very large applications. Your best option is to compress the install files into a .zip or .wim file and extract them at install time.
- IME runs as 32-bit not 64-bit - For the majority of software installs, this is no issue. Windows Installer is intelligent enough to install 64-bit to 64-bit, even when called from a 32-bit process. However, if you use PowerShell to wrap your installs, such as the ever popular PowerShell Application Deployment Toolkit, you have to take extra steps to trigger the process as 64-bit. Not ideal, especially if you wrap all things in PSADT.
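One common workaround for the last bullet (a sketch, not the only approach) is to have your wrapper script relaunch itself from the sysnative path when it detects it is running as a 32-bit process on 64-bit Windows.
# Sketch: relaunch a deployment script as 64-bit when it was started by a 32-bit process.
if ($env:PROCESSOR_ARCHITEW6432 -eq 'AMD64') {
    # We are a 32-bit process on 64-bit Windows - hand off to the 64-bit PowerShell host
    & "$env:WINDIR\SysNative\WindowsPowerShell\v1.0\powershell.exe" -NoProfile -ExecutionPolicy Bypass -File $PSCommandPath @args
    exit $LASTEXITCODE
}
# ...64-bit install logic continues here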
Shared device scenarios
- Non-persistent VDI scenarios - Want to go full modern Entra ID only + Intune only? Not supported, hard stop. To be fair, most orgs are probably not using ConfigMgr in non-persistent scenarios, but they are using GPO. So going Entra ID only is a killer here. Not exactly an Intune gap, but a gap for modern endpoint management as a whole.
- No maintenance windows - There are lots of uses for admin-controlled hard coded timeframes when maintenance can occur. Known as Maintenance Windows in ConfigMgr, they are very valuable for highly critical devices, shared devices, and devices running on shared infrastructure like VDI. Based on recent conversations this does seem to be top of mind for Microsoft.
Real time capabilities
- 8 hour policy check-in (ConfigRefresh is 90 minutes) - ConfigMgr, formerly known as “slow moving software,” is arguably much faster than Intune. Just check Reddit for comments about Intune being slow. Simpler? Maybe, but the admin can’t exactly control when things will happen. Recent improvements to ConfigRefresh have helped here, but they still can’t beat a 60-minute policy refresh in ConfigMgr for all actions on the endpoint. What does our org do? We have a Run Script in ConfigMgr to trigger an MDM sync using the enrollment scheduled task (a sketch follows at the end of this section). The forced sync from the Intune portal isn’t reliable enough and cannot be performed in bulk.
- Expire/disable deployments - It’s 11pm, you just deployed something to thousands of endpoints, and your helpdesk’s call volume starts to rise. Big red button time. In ConfigMgr, you never want to delete the deployment, because you lose all history and record of potentially impacted devices. You instead expire or disable the deployment. In Intune, there is no such option; you remove the assignment and find another way to identify impacted endpoints. Can you parse the IME log, event log, Sysmon, MDE, or other means to see what occurred on the endpoints locally? Sure, but that is way more complex and time consuming when leadership has you on an outage bridge asking for impacts now.
If you are a large organization, you may also hit Intune’s 200 maximum remediations limit. Our organization has well over 200 ConfigMgr baselines, so we are keeping this workload in ConfigMgr for now.
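For reference, the sort of Run Script we use for that forced sync looks roughly like the sketch below. It assumes the device is MDM enrolled and that the enrollment client’s standard scheduled tasks (PushLaunch) exist; adjust for your environment.
# Sketch: trigger an Intune/MDM sync by starting the enrollment client's scheduled task.
$task = Get-ScheduledTask | Where-Object { $_.TaskPath -like '\Microsoft\Windows\EnterpriseMgmt\*' -and $_.TaskName -eq 'PushLaunch' }
if ($task) {
    $task | Start-ScheduledTask
    Write-Output 'MDM sync triggered.'
} else {
    Write-Warning 'No enrollment task found - is this device MDM enrolled?'
}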
Targeting
We all know that for modern endpoint management you generally want to target users, not devices. There is a lot of value targeting users, especially as Intune is designed to work better this way. However, you lose out on certain deployment abilities that ConfigMgr delivers beautifully with collections today. Good news though, coming soon to Intune is device inventory! This seems to be the first step in opening up more targeting capabilities besides Entra ID groups and virtual groups + filters.
- Targeting based off installed software - This is our most commonly used scenario. Nearly every software deployment we do follows this template: a collection of target devices excluding devices with X software installed. Build pilot groups off that collection. When your collection hits 0, you are done. It is a combination of targeting + inventory to maximize success of your software deployments (a sketch follows at the end of this section). Our organization averages over 3000 software deployments a year, and every little software install success matters.
- Targeting based off installed software versions - Same logic as above, but mainly for upgrades from version X to version Y.
- Targeting based off software usage/metering - Very valuable for software audits, license reclamation, and other software lifecycle scenarios.
- Targeting based off registry keys - How many software applications do you have that store relevant info in the registry? Zscaler Private Access, Digital Guardian, lots of others. Inventory the registry keys, build a collection to target.
- Targeting based off WMI properties - We have almost 100 custom inventory items we store in WMI to solve all our targeting wildest dreams. Drivers, some user profile specific registry key, custom branding, the list goes on.
- Targeting based off management properties (domain, co-managed workload, etc) - This is mostly used for limiting deployments to relevant endpoints. Only want to target your Entra ID joined devices? Only hybrid joined? Only devices in a certain domain? All very valuable scenarios to ensure you limit exposure to devices that should not be targeted.
- Targeting null data such as software not installed - This is valuable for cleaning up your environment. Which devices are missing a required configuration, or should have X software installed but don’t?
- Targeting based off policies (compliant, non-compliant, success, error, etc) - If a software install fails, it will reattempt. What if you need to target a script to a device that failed to apply a config policy successfully? What if you need to apply a script to a non-compliant device?
- Targeting based off user state (user logged on, primary user set, etc) - One of the easiest ways to ensure you are not impacting an end user is if they are not logged into their device. This is great for overnight implementations and critical cleanup tasks. Being able to see the primary user of a device, like with user device affinity in ConfigMgr, is very valuable here. It is great info for cross-checking asset management systems and targeting users with multiple devices or accounts that log into multiple users’ devices.
Ironically, if you do need some of these dynamic capabilities for targeting you can use ConfigMgr to get them in Intune. Check out collection sync. Thanks Cristopher Alaya for the mention!
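To make the “targeting based off installed software” pattern concrete, here is a hedged sketch of building that kind of collection in ConfigMgr. The collection names and DisplayName filter are placeholders, and it assumes you are connected to a CMSite drive; your inventory classes may differ.
# Sketch: a collection of devices that do NOT yet have the software, built from Add/Remove Programs inventory.
$wql = @'
select SMS_R_System.ResourceId, SMS_R_System.Name
from SMS_R_System
where SMS_R_System.ResourceId not in (
    select SMS_G_System_ADD_REMOVE_PROGRAMS.ResourceID
    from SMS_G_System_ADD_REMOVE_PROGRAMS
    where SMS_G_System_ADD_REMOVE_PROGRAMS.DisplayName like 'Contoso Agent%'
)
'@
New-CMDeviceCollection -Name 'Deploy - Contoso Agent - Missing' -LimitingCollectionName 'All Workstations'
Add-CMDeviceCollectionQueryMembershipRule -CollectionName 'Deploy - Contoso Agent - Missing' -RuleName 'Contoso Agent not installed' -QueryExpression $wql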
Closing
This is a lengthy list, and I have been keeping it since we co-managed all our devices at the start of the pandemic in 2020. The good news is that even just a year ago, this list had 5 more items. Many of these items are dropping off with every monthly Intune release, and eventually Microsoft will get that last 10%. I personally expect co-management will still be necessary for the next 5 years though; we shall see.
-
Pretty permalinks using IIS 8/IIS 8.5
When I threw this blog together, I had chosen one of the default Permalink options:
http://domain/index.php/%postname%
I wanted to get that index.php out of there and that is where my journey began.
I am running WordPress on IIS 8.5 and Windows Server 2012 R2. First, I just went in and attempted to edit the Permalinks under settings, but when I went to save I got the below error.

-
Blog intro
Welcome
This is my first post on my new blog. I migrated from WordPress to GitHub Pages with Jekyll. Just being able to post straight from Visual Studio Code will be a pleasure.
My old blog was self-hosted on WordPress at potentengineer.com.
-Daniel