This rule queries the performance counter "DFS Replicated Folders\Staging Bytes Cleaned up" on monitored computers every 15 minutes.
DFS Replication uses staging folders for each replicated folder to act as caches for new and changed files that are ready to be replicated from sending members to receiving members. These files are stored under the local path of the replicated folder in the DfsrPrivate\Staging folder. By default, the quota size of each staging folder is 4,096 MB. The size of each folder on a member is cumulative per volume, so if there are multiple replicated folders on a member, DFS Replication creates multiple staging folders, each with its own quota. The staging quota for DFS Replication is not a hard limit, and it can grow over its configured size. When the quota is reached, DFS Replication deletes old files from the staging folder to reduce the disk usage under the quota. The staging folder does not reserve hard disk space, and it only consumes as much disk space as is currently needed.
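The cleanup behavior described above can be illustrated with a small model. This is a sketch only: the real DFS Replication service uses its own internal heuristics for selecting which staged files to purge, and the 90-percent low-water mark used here is an assumption for illustration, not a documented value.

```python
# Illustrative model of staging-folder cleanup: when staged data exceeds
# the quota, the oldest staged files are purged until usage drops below
# a low-water mark. The actual service's selection logic and thresholds
# differ; only the overall behavior (purge old files when over quota,
# no hard limit, no reserved disk space) mirrors the documentation.
def cleanup_staging(staged_files, quota_bytes, low_water=0.9):
    """staged_files: list of (age, size_bytes) tuples, oldest first.

    Returns (remaining_files, bytes_cleaned). bytes_cleaned corresponds
    to what the "Staging Bytes Cleaned up" counter would accumulate.
    """
    usage = sum(size for _, size in staged_files)
    remaining = list(staged_files)
    cleaned = 0
    if usage <= quota_bytes:
        return remaining, 0  # under quota: nothing to clean
    target = quota_bytes * low_water  # assumed low-water mark
    while remaining and usage > target:
        _, size = remaining.pop(0)  # drop the oldest staged file
        usage -= size
        cleaned += size
    return remaining, cleaned
```

For example, with 4,500 bytes staged against a 3,000-byte quota, the model purges the single oldest file and reports 2,000 bytes cleaned up.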
This performance counter tracks the amount of data that the DFS Replication service has cleaned up from each replicated folder's staging area. Monitoring this performance counter enables administrators to understand how each replicated folder's staging area is used and to determine whether the staging quotas need to be increased.
Because the staging folder acts as a cache for new and changed files during replication, it is important to size each staging folder's quota appropriately for the workload being replicated.
Optimize the size of staging folders
Although you can adjust the size of each staging folder, you must take the following factors into account while doing so:
Increase the staging folder quota when you must replicate multiple large files that change frequently.
If possible, increase the staging folder quota on hub members that have many replication partners.
If a staging folder quota is configured to be too small, DFS Replication might consume additional CPU and disk resources to regenerate the staged files. Replication might also slow down because the lack of staging space can effectively limit the number of concurrent transfers with partners.
For the initial replication of existing data on the primary member, it is important that you size the staging folder quota large enough so that if multiple large files are blocked in staging due to partners not being able to download the files, the remaining files can continue replicating. To properly size the staging folder for initial replication, you must take into account the size of the files to be replicated. At a minimum, the staging folder quota should be twice the size of the largest file in the replicated folder. For increased performance, the staging folder quota should be increased to the size of the four largest files in the replicated folder on spoke members and to the size of the sixteen largest files in the replicated folder on hub members.
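The sizing rules above can be expressed as a small helper. This is a sketch: the 2x-largest-file minimum and the 4-largest (spoke) / 16-largest (hub) recommendations come from this section; the function name and signature are illustrative.

```python
def recommended_staging_quota(file_sizes, role="spoke"):
    """Recommend a staging quota (in bytes) from the sizes of the files
    in a replicated folder, per the guidance above:
      - minimum: twice the size of the largest file
      - spoke members: the combined size of the 4 largest files
      - hub members: the combined size of the 16 largest files
    Returns whichever is larger.
    """
    if not file_sizes:
        return 0
    largest_first = sorted(file_sizes, reverse=True)
    minimum = 2 * largest_first[0]
    top_n = 16 if role == "hub" else 4
    recommended = sum(largest_first[:top_n])
    return max(minimum, recommended)
```

For a folder dominated by one very large file, the 2x-largest minimum wins; for folders with many similarly sized large files, the top-4 or top-16 sum wins.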
If free disk space is a concern, you might need to configure the staging quota to be lower than the default quota when several replicated folders share staging space on the same volume. Remember that in such configurations, the service will likely spend considerable time performing staging cleanup whenever it runs out of space in the staging area.
During normal operation, if the event that indicates the staging quota is over its configured size (event ID 4208 in the DFS Replication event log) is logged multiple times in an hour, increase the staging quota by 20 percent.
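The adjustment rule above amounts to a simple check. A minimal sketch, assuming you already have a count of event 4208 occurrences over the last hour (the function name and the integer-MB rounding are illustrative):

```python
def adjust_staging_quota(current_quota_mb, event_4208_count_last_hour):
    """Grow the staging quota by 20 percent when event 4208
    (staging quota exceeded) was logged multiple times in the
    last hour; otherwise leave it unchanged."""
    if event_4208_count_last_hour > 1:
        return int(current_quota_mb * 1.2)
    return current_quota_mb
```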
To improve input/output (I/O) throughput, locate staging folders and replicated folders on different physical disks. This can be done by editing the path of the staging folder.
Property | Value |
---|---|
Target | Microsoft.Windows.DfsReplication.Service |
Category | PerformanceCollection |
Enabled | True |
Instance Name | DFS Replicated Folders |
Counter Name | Staging Bytes Cleaned up |
Frequency | 900 |
Alert Generate | False |
Remotable | True |
ID | Module Type | TypeId | RunAs |
---|---|---|---|
DS | DataSource | System.Performance.OptimizedDataProvider | System.PrivilegedMonitoringAccount |
WriteToDB | WriteAction | Microsoft.SystemCenter.CollectPerformanceData | Default |
WriteToDW | WriteAction | Microsoft.SystemCenter.DataWarehouse.PublishPerformanceData | Default |
```xml
<Rule ID="Microsoft.Windows.DfsReplication.StagingBytesCleanedup" Enabled="true" Target="Microsoft.Windows.DfsReplication.Service" ConfirmDelivery="false" Remotable="true" Priority="Normal" DiscardLevel="100">
  <Category>PerformanceCollection</Category>
  <DataSources>
    <DataSource ID="DS" RunAs="System!System.PrivilegedMonitoringAccount" TypeID="Performance!System.Performance.OptimizedDataProvider">
      <ComputerName>$Target/Host/Property[Type="Windows!Microsoft.Windows.Computer"]/NetworkName$</ComputerName>
      <CounterName>Staging Bytes Cleaned up</CounterName>
      <ObjectName>DFS Replicated Folders</ObjectName>
      <InstanceName/>
      <AllInstances>true</AllInstances>
      <Frequency>900</Frequency>
      <Tolerance>0</Tolerance>
      <ToleranceType>Absolute</ToleranceType>
      <MaximumSampleSeparation>1</MaximumSampleSeparation>
    </DataSource>
  </DataSources>
  <WriteActions>
    <WriteAction ID="WriteToDB" TypeID="SC!Microsoft.SystemCenter.CollectPerformanceData"/>
    <WriteAction ID="WriteToDW" TypeID="SCDW!Microsoft.SystemCenter.DataWarehouse.PublishPerformanceData"/>
  </WriteActions>
</Rule>
```