Sizing Guidelines

1. PURPOSE

This guide provides sizing recommendations for organizations planning to deploy the SIRP platform, based on real-world customer usage patterns and lab simulations. The recommendations represent the minimum configurations required to maintain acceptable application response time during normal operation, which reduces the risk of performance bottlenecks during periods of heightened activity. They cover the number of processors (CPU cores), the amount of memory, and the amount of disk space. Administrators may adjust resources as necessary based on workload.

2. CONFIGURATION RECOMMENDATION

When deployed, the SIRP platform requires a VM with at least 2 CPUs (cores), 8GB of RAM, and a 200GB disk, regardless of the number of users or workload. As you add users or increase SIRP platform usage, you may need to add resources to maintain acceptable response times. The recommendations are based on three types of workloads: Low, Medium, and High. Baseline guidelines for each type of workload are listed in the following tables. The workloads are described in more detail later in this document.

The workloads used to generate the data for the recommendations in this guide are generalized. You should consider these recommendations as a starting point and adjust resources as necessary, depending on workload expectations.
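As an illustration only (this is not a SIRP utility, and the function name is hypothetical), the following Python sketch checks a proposed VM against the 2-CPU, 8GB RAM, 200GB disk minimum described above.

```
# Illustrative check (not a SIRP utility): does a proposed VM meet the
# absolute minimum of 2 CPUs, 8GB of RAM, and a 200GB disk?

MIN_CPUS, MIN_RAM_GB, MIN_DISK_GB = 2, 8, 200

def meets_minimum(cpus: int, ram_gb: int, disk_gb: int) -> bool:
    """Return True if the proposed VM meets the baseline minimum."""
    return cpus >= MIN_CPUS and ram_gb >= MIN_RAM_GB and disk_gb >= MIN_DISK_GB

print(meets_minimum(4, 16, 200))  # True  - matches the Low-workload 1-50 row
print(meets_minimum(2, 4, 200))   # False - not enough RAM
```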

2.1. LOW WORKLOAD

A Low workload is an average of 2 requests/sec for every 50 users.

USERS     | CPUS (CORES) | RAM   | DISK (INITIAL) | DISK CONSUMED/YEAR
1-50      | 4            | 16GB  | 200GB          | up to 30GB
51-100    | 6            | 16GB  | 200GB          | up to 40GB
101-150   | 6            | 16GB  | 200GB          | up to 50GB
151-200   | 8            | 32GB  | 200GB          | up to 70GB
201-250   | 8            | 32GB  | 200GB          | up to 95GB
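For convenience, the Low-workload table above can be treated as a simple lookup. The following Python sketch is illustrative only (the function and data structure are hypothetical, not part of the SIRP platform) and returns the baseline row for a given user count; the same pattern applies to the Medium and High tables.

```
# Illustrative helper (not part of SIRP): look up the Low-workload baseline
# from the table above for a given number of users.

LOW_WORKLOAD_BASELINE = [
    # (max_users, cpus, ram_gb, initial_disk_gb, disk_consumed_per_year_gb_up_to)
    (50,  4, 16, 200, 30),
    (100, 6, 16, 200, 40),
    (150, 6, 16, 200, 50),
    (200, 8, 32, 200, 70),
    (250, 8, 32, 200, 95),
]

def low_workload_sizing(users: int) -> dict:
    """Return the table row that covers the given user count (1-250)."""
    for max_users, cpus, ram_gb, disk_gb, disk_year_gb in LOW_WORKLOAD_BASELINE:
        if users <= max_users:
            return {"cpus": cpus, "ram_gb": ram_gb,
                    "initial_disk_gb": disk_gb, "disk_per_year_gb": disk_year_gb}
    raise ValueError("User counts above 250 are outside the published guidance")

print(low_workload_sizing(120))
# {'cpus': 6, 'ram_gb': 16, 'initial_disk_gb': 200, 'disk_per_year_gb': 50}
```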

2.2. MEDIUM WORKLOAD

A Medium workload is an average of 4 requests/sec for every 50 users.

USERS     | CPUS (CORES) | RAM   | DISK (INITIAL) | DISK CONSUMED/YEAR
1-50      | 4            | 16GB  | 200GB          | up to 50GB
51-100    | 8            | 16GB  | 200GB          | up to 65GB
101-150   | 8            | 32GB  | 200GB          | up to 85GB
151-200   | 10           | 32GB  | 200GB          | up to 120GB
201-250   | 10           | 32GB  | 200GB          | up to 140GB

2.3. HIGH WORKLOAD

A High workload is an average of 6 requests/sec for every 50 users.

USERS     | CPUS (CORES) | RAM   | DISK (INITIAL) | DISK CONSUMED/YEAR
1-50      | –            | 32GB  | 200GB          | up to 65GB
51-100    | 8            | 32GB  | 200GB          | up to 100GB
101-150   | 8            | 48GB  | 200GB          | up to 135GB
151-200   | 10           | 48GB  | 200GB          | up to 170GB
201-250   | 12           | 32GB  | 200GB          | up to 200GB

2.4. ADDITIONAL CONSIDERATIONS

Disk space consumption can vary substantially, depending on workload. For example, an organization that also uses the risk management and vulnerability management modules can consume disk space at a much greater rate than one that only uses the incident management module. Make sure to account for this when configuring storage size. Automated playbooks can also affect your workload: the resource utilization they add can vary substantially and depends on several factors, including the actions a playbook initiates, how frequently it runs, and how many playbooks run concurrently.
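As a rough planning aid, the sketch below estimates how many years the initial disk will last at a given consumption rate. It is illustrative only: the 1.5x uplift for additional modules and playbooks is an assumed placeholder, not a SIRP-published figure, and should be replaced with the growth observed in your own environment.

```
# Rough, illustrative disk-headroom estimate (not official SIRP guidance).

def years_of_headroom(initial_disk_gb: float,
                      consumed_per_year_gb: float,
                      extra_module_factor: float = 1.0) -> float:
    """Estimate how many years the initial disk will last.

    extra_module_factor is a hypothetical multiplier (e.g. 1.5) to account
    for additional modules such as risk or vulnerability management and for
    automated playbooks; replace it with growth measured in your environment.
    """
    return initial_disk_gb / (consumed_per_year_gb * extra_module_factor)

# Example: 200GB initial disk, Medium workload with 101-150 users
# (up to 85GB/year), with an assumed 1.5x uplift for extra modules.
print(round(years_of_headroom(200, 85, 1.5), 1))  # ~1.6 years
```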

3. METHODOLOGY

3.1. WORKLOAD DEFINITION

To help define a realistic workload, several customers provided client access logs that were utilized to characterize a typical customer workload. Analysis indicated that a typical SIRP user spends a significant amount of time creating new incidents, reviewing existing incident information, and updating incidents periodically until resolution. Actions that users perform most frequently include: creating incidents, opening existing incidents, completing tasks, adding notes, and adding artifacts. During the time that a user is reading or updating incidents, the browser polls the server for newsfeed, tasks, notification updates, and other information.

NOTE: 50-user simulations that leveraged the Low, Medium, and High workloads generated an average of 2, 4, and 6 requests/sec, respectively.
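To put these rates in perspective, the quick calculation below restates the quoted figures on a per-user basis (this is simple arithmetic based on the numbers above, not additional measurement data).

```
# Back-of-the-envelope arithmetic using the rates quoted above.

RATE_PER_50_USERS = {"low": 2, "medium": 4, "high": 6}  # requests/sec per 50 users

def expected_requests_per_sec(users: int, workload: str) -> float:
    """Scale the quoted per-50-user rate linearly to a given user count."""
    return RATE_PER_50_USERS[workload] * users / 50

# A 50-user Low workload averages 2 req/sec, i.e. 0.04 req/sec per user,
# or roughly 1,150 requests per user over an 8-hour day (most of this is
# presumably background polling rather than explicit user actions).
print(expected_requests_per_sec(50, "low"))      # 2.0
print(round(0.04 * 8 * 3600))                    # 1152
print(expected_requests_per_sec(150, "medium"))  # 12.0
```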

LOW WORKLOAD

Log analysis indicated that, on average, a low-workload user performed the following actions each day:

  • Create 2 incidents

  • Create 2 vulnerabilities

  • Create 2 threat intelligence feeds

  • Create 2 risks

  • Complete 4 tasks

  • Add 3 notes

  • Add 3 artifacts

  • Run 20 playbooks

  • Poll for updates while reading existing cases

MEDIUM WORKLOAD

The Medium workload simulated a user performing the same tasks as the Low workload, but with twice as many actions over an 8-hour period.

  • Create 4 incidents

  • Create 4 vulnerabilities

  • Create 4 threat intelligence feeds

  • Create 4 risks

  • Resolve 8 tasks

  • Add 6 notes

  • Add 6 artifacts

  • Run 50 playbooks

  • Log in twice as often as Low-workload users and generate twice as many polling requests

HIGH WORKLOAD

The High workload also consisted of the same request distribution as the Low workload but at three times the rate.

  • Create 6 incidents

  • Create 6 vulnerabilities

  • Create 6 threat intelligence feeds

  • Create 6 risks

  • Resolve 12 tasks

  • Add 9 notes

  • Add 9 artifacts

  • Run 100 playbooks

  • Log in three times as often as Low-workload users and generate three times as many polling requests
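For reference, the three daily per-user action profiles above can be restated in a compact form, as in the sketch below (the structure and names are hypothetical, not part of the SIRP platform). Note that playbook runs (20, 50, and 100) grow faster than the roughly 2x/3x scaling of the other actions.

```
# Compact restatement of the three daily per-user action profiles above.

DAILY_ACTIONS = {
    "low":    dict(incidents=2, vulnerabilities=2, ti_feeds=2, risks=2,
                   tasks=4, notes=3, artifacts=3, playbook_runs=20),
    "medium": dict(incidents=4, vulnerabilities=4, ti_feeds=4, risks=4,
                   tasks=8, notes=6, artifacts=6, playbook_runs=50),
    "high":   dict(incidents=6, vulnerabilities=6, ti_feeds=6, risks=6,
                   tasks=12, notes=9, artifacts=9, playbook_runs=100),
}

# Medium and High roughly double and triple the Low counts, except playbook
# runs (20 -> 50 -> 100); login and polling frequency also scale 2x and 3x,
# as described above.
for name, actions in DAILY_ACTIONS.items():
    print(name, sum(actions.values()), "recorded actions/day (excluding polling)")
```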

4. CONCLUSION

An organization that sends requests with a distribution similar to the Low workload can support up to 100 users on the minimum recommended configuration (2 CPUs, 8GB RAM, 200GB disk) without significant risk of a performance bottleneck. As the number of users and/or the frequency of requests increases, so does the response time. If response time degrades to undesirable levels, administrators can add CPUs and/or memory to bring response time back to expected levels, using this document as a guideline.

Variations in workload distribution and/or frequency affect resource utilization. Some requests are more resource-intensive than others, so resource utilization depends not only on the number of users or requests but also on the types of requests being sent to the server. Administrators should therefore treat the information provided here as a starting point, continually monitor performance, and adjust resources as necessary.
