Forwarder Performance Tuning

The Stairwell forwarder is designed to have minimal impact on host performance during steady-state operation. However, the initial deployment phase involves a full-disk backscan that can generate noticeable disk I/O, and certain high-churn environments require configuration adjustments to prevent resource contention.


Understanding the backscan

When a forwarder is first installed, it performs a one-time backscan that traverses the file system and uploads files that Stairwell has not previously seen in your environment. This is the most resource-intensive phase of forwarder operation.

What to expect:

  • CPU usage is typically low during backscan -- the forwarder is I/O-bound, not CPU-bound.
  • Disk read I/O will be elevated for the duration of the backscan. On a standard endpoint with a typical file inventory, this takes 2--8 hours. On dense file servers, it may take longer.
  • Network upload activity depends on how many unique files the forwarder encounters. Because Stairwell deduplicates across your environment, later-deployed assets upload far fewer files than the first.
  • After backscan completes, resource usage drops to near-zero during normal operation. The forwarder reacts to new and modified files only.
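If you need to verify that elevated read I/O on a host is attributable to the backscan, the kernel's per-process I/O accounting can confirm it. A minimal sketch for Linux; the process name used here is an assumption, so substitute the actual forwarder binary name from your installation (or pass a PID as the first argument):

```shell
# Sample a process's cumulative disk-read counter from /proc to confirm it
# is the source of elevated read I/O. "stairwell-forwarder" is an ASSUMED
# process name -- substitute the real binary name from your installation.
PID="${1:-$(pgrep -o stairwell-forwarder)}"
for _ in 1 2 3; do
    # read_bytes counts bytes actually fetched from the storage layer
    grep '^read_bytes' "/proc/$PID/io"
    sleep 5
done
```

Steadily rising read_bytes between samples while CPU stays low is consistent with the I/O-bound behavior described above. Note that reading another user's /proc/&lt;pid&gt;/io requires root.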

Reducing backscan impact:

  • Deploy the forwarder during off-hours or maintenance windows on sensitive systems.
  • Use deny paths (see below) to exclude directories unlikely to contain security-relevant files before deployment. This reduces both the backscan duration and ongoing upload volume.

Configuring deny paths

Deny paths tell the forwarder to skip specific directories entirely -- no collection, no upload, no scanning. Use them to exclude:

  • Build output directories (/build, /dist, node_modules, Maven/Gradle caches)
  • Media and asset libraries (video, audio, large binary archives)
  • Backup staging directories
  • Virtual machine disk image directories
  • Log directories containing only text files
⚠️ Deny paths reduce Stairwell's visibility. Only exclude directories where you are confident no security-relevant executables or libraries will be written. Overly broad exclusions create blind spots.

Deny paths are configured in the Stairwell platform under Settings > Policies > Exclusions and applied to assets via policy assignment. See Exclusions for configuration steps.
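Before configuring exclusions, it can help to survey which directory trees dominate disk usage on a representative asset, since large trees of build output, media, or caches are the usual deny-path candidates. A sketch using standard Linux tools (adjust the root path and depth for your layout):

```shell
# List the largest directory trees under a root of interest.
# -x keeps du on a single filesystem; sizes are reported in KiB.
du -x -d 2 /var 2>/dev/null | sort -rn | head -15
```

Weigh every candidate this surfaces against the visibility warning above before adding it to a policy.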


Developer workstations and high-churn systems

Developer machines present a specific challenge: they continuously generate large volumes of new files (compiled binaries, test artifacts, downloaded packages), and their users are often quick to notice any agent that increases build times or fan noise.

Recommended approach:

  1. Before deployment, work with the development team to identify build output directories and package caches. Add these as deny paths in a developer-specific policy.
  2. Deploy to a pilot group first. Select 2--3 volunteer developers who can give direct feedback on any observed impact. Adjust deny paths based on their feedback before broader rollout.
  3. Set expectations. The initial backscan on a developer machine with years of accumulated build artifacts can take longer than on a standard endpoint. Communicate this in advance.
  4. Package manager caches (npm, pip, Maven, NuGet, Cargo, etc.) are generally safe to exclude, since they contain known upstream packages rather than internally built binaries -- but consult your security team before excluding them.
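Package-cache locations vary by tool. The commands below print the common ones so they can be reviewed as deny-path candidates; this is a sketch assuming each tool's documented default location, any of which may be overridden by local configuration:

```shell
# Print common package-cache locations for review as deny-path candidates.
# Lines are skipped when a tool is not installed; the hard-coded paths are
# the tools' documented defaults and may be overridden locally.
command -v npm >/dev/null 2>&1 && echo "npm:   $(npm config get cache)"
command -v pip >/dev/null 2>&1 && echo "pip:   $(pip cache dir)"
echo "maven: ${HOME}/.m2/repository (default)"
echo "nuget: ${HOME}/.nuget/packages (default)"
echo "cargo: ${CARGO_HOME:-$HOME/.cargo}/registry (default)"
```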

Recommended pilot sequencing

The order in which you deploy forwarders across your environment affects both performance and collection quality.

Recommended sequence:

  1. Golden image or lab machine -- Start with a clean baseline. This primes Stairwell's knowledge of your standard software stack and makes all subsequent deployments faster (the forwarder won't re-upload files already known from the golden image).
  2. File servers and shared drives -- Deploying here early captures a wide breadth of files quickly. Note that the forwarder ignores mapped network drives on individual endpoints, so file servers should be instrumented directly.
  3. Organizationally similar endpoint cohorts -- Deploy in batches of similar machines (e.g., all finance workstations, then all engineering workstations). Similar machines share most of their files, so each batch reduces the upload burden of the next.
  4. High-churn and sensitive systems -- Developer workstations, build servers, and politically sensitive systems last, with deny paths pre-configured.

This sequencing minimizes total upload volume and ensures each deployment cohort benefits from what the previous cohorts already uploaded.


Monitoring deployment health

Track the following indicators during rollout:

  • Asset check-in rate -- All deployed assets should show a recent last-seen timestamp in Stairwell. Stale assets indicate connectivity or service issues.
  • Sighting volume -- Expect elevated sighting volume during the backscan phase, tapering off to a steady baseline after completion.
  • Backscan status -- Monitor via the Stairwell platform or by querying the local state file on each asset (see Linux Troubleshooting or Windows Troubleshooting for how to check locally).
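The local checks can be scripted for spot-checking assets during rollout. A minimal sketch for Linux assets; the service unit name and state file path below are assumptions, so substitute the values given in the Linux Troubleshooting guide for your installation:

```shell
# Spot-check one asset's forwarder health. SERVICE and STATE are ASSUMED
# names -- replace them with the values from your installation's docs.
SERVICE="stairwell_forwarder"           # assumed systemd unit name
STATE="/var/lib/stairwell/state.json"   # assumed local state file path

if systemctl is-active --quiet "$SERVICE"; then
    echo "service: running"
else
    echo "service: NOT running -- check connectivity and service logs"
fi

if [ -f "$STATE" ]; then
    echo "state file present: $STATE"
else
    echo "state file missing: $STATE"
fi
```

An asset that reports a running service but a stale last-seen timestamp in the platform usually indicates a connectivity issue rather than a service failure.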