Appliances 3
BeyondTrust UVM20 3
BeyondTrust UVMV20 4
BeyondTrust UVM50 5
Retina 651 5
BeyondInsight (BI) 6
Base Installation 6
Analytics and Reporting (A&R) 7
PowerBroker 7
PowerBroker Unix Linux (PBUL) and PowerBroker Sudo (PBSudo) 7
PowerBroker Password Safe (PBPS) 21
PowerBroker Identity Services (PBIS) 22
PowerBroker for Windows (PBW) 38
PowerBroker for Mac (PBMac) 46
PowerBroker Endpoint Protection Platform (PBEPP) 47
PowerBroker Auditing and Security Suite (PBASS) 47
Retina 48
Retina Network Security Scanner (RNSS) 48
Retina Host Security Scanner (RHSS) 50
Retina CS (RCS, BeyondInsight) 50
Retina Protection Agent (RPA) 50
Infrastructure 51
Enterprise Update Server (EUS) 51
Event Collector Role (EC) 51
Worker Node Role (WN) 52
Conclusion 52
About BeyondTrust 55
APPLIANCES
BeyondTrust offers a full line of integrated IT risk management appliances (UVM) dedicated to
vulnerability management, privileged account management, endpoint protection, configuration
compliance, patch management, and regulatory compliance management. These appliances
provide multi-platform network discovery, automated vulnerability and risk assessment,
centralized policy enforcement, least privilege reporting, and powerful compliance and
regulatory audit capabilities.
For capacity planning purposes, BeyondTrust assumes the following hardware and software for all environments based on the UVM series of appliances:
BeyondTrust UVM20
The BeyondTrust UVM20 Security Management Appliance delivers pre-installed and pre-
configured vulnerability and privileged account management capabilities, combining
BeyondTrust’s Retina Network Security Scanner, PowerBroker® Endpoint Protection Platform,
PowerBroker for Windows, and PowerBroker for Mac under the BeyondInsight™ centralized
management, reporting and analytics console.
Appliance Capacity
Maximum Active Assets: 20,000
Maximum Administrative Console Concurrent Users: 10
Default Normalized Data Retention: 90 Days
Default Raw Data Retention: 7 Days
Maximum Events per Day: 250,000
Maximum PowerBroker or Retina Agents: 5,000
BeyondTrust UVMV20
The BeyondTrust UVMv20 Security Management Appliance is a pre-installed and pre-
configured virtual appliance for vulnerability and privileged account management based on all
the capabilities of the UVM20 physical appliance. This virtual appliance is fully licensed and is
available for Microsoft Hyper-V and VMware ESXi and NSX.
Operating System: Microsoft Windows Server 2012 R2, Open Volume License (OVL) or Retail License Package (depending on localization)
Database: Microsoft SQL Server 2014 Standard Edition, Open Volume License (OVL) or Retail License Package (depending on localization)
Processor: Dual CPU / 4 cores maximum, per Microsoft licensing limitation
Memory: 32 GB minimum (required), 4 TB maximum
Form Factor: Virtual image: VMware VMDK for ESXi 5.0 or higher, or Microsoft VHD for Hyper-V (2008 R2 and 2012)
Appliance Capacity
Maximum Active Assets: 20,000
Maximum Administrative Console Concurrent Users: 10
Default Normalized Data Retention: 90 Days
Default Raw Data Retention: 7 Days
Maximum Events per Day: 250,000
Maximum PowerBroker or Retina Agents: 5,000
BeyondTrust UVM50
Appliance Capacity
Maximum Active Assets: 50,000
Maximum Administrative Console Concurrent Users: 25
Default Normalized Data Retention: 90 Days
Default Raw Data Retention: 7 Days
Maximum Events per Day: 500,000
Maximum PowerBroker or Retina Agents: 12,500
Retina 651
The Retina Security Management Appliance 651 is designed to provide complete coverage for
vulnerability assessment and asset discovery for any size network environment.
Appliance Capacity
Absolute Maximum Assets per Discovery Scan: Class A
Recommended Maximum Assets per Vulnerability Scan Job: 10,000
Maximum Concurrent Administrative Users: 1
Default Number of Simultaneous Scan Targets: 24
Maximum Number of Simultaneous Scan Targets: 128
Multiple UVM appliances can be connected together using Roles to scale beyond the per-appliance specifications. This scalability can be extended even further using software versions of the solution and remote installations of Microsoft SQL Server.
BEYONDINSIGHT (BI)
BeyondInsight enables enterprise-services for privileged account management, distributed
vulnerability assessment, and remediation (patch management). The solution can be deployed to meet virtually any operational requirement, including distinct silo structures and air-gapped environments, and can roll summaries up to a global security view using distributed components and Roles. Capacity planning requirements need to be considered at each tier of an architecture, from the bottom up.
Base Installation
BeyondInsight can be implemented fully self-contained or distributed. For a basic installation,
fully self-contained in a single appliance (or matching software installation), these metrics
should be used for capacity planning:
POWERBROKER
PowerBroker is a family of solutions from BeyondTrust that delivers comprehensive compliance and privileged access management (PAM) capabilities. Each solution can operate standalone or be centrally managed with BeyondInsight. For capacity planning, the modules below address both deployment models where feasible.
Policy Server
The Policy Server is the host where command requests are received and compared with policy to determine whether to accept or reject a privileged command request. Policy Servers accept these requests on TCP port 24345 by default. Policy complexity and the number of concurrent requests have a bearing on the sizing of these servers. Servers are typically deployed in pairs to provide high availability, with both Policy Servers hosting identical policies for processing.
IO Log Server
The Log Server is responsible for recording session logs, forwarding events to the Solr server for indexing, and processing session replay requests when contacted by BeyondInsight. When the Policy Server authorizes a command, a separate process is started between the Log Server and the Run Host to capture and record all session activity.
Memory: 4 GB RAM
Form Factor: Varies by customer environment – host can be physical or virtual
Firewall Ports: 24345, 24346, 24347, and 24348 must be open to the PBUL infrastructure
Capacity Planning
The efficiency and capacity of PBUL infrastructure varies widely based on a number of factors.
Network capacity, network port usage, number of concurrent sessions, which sessions are
logged, load balancing (whether using built-in randomization, or a load balancer in front of the
infrastructure), and server I/O are factors in capacity and will vary largely by environment.
A default installation of PBUL configures traffic to transit the customer network on TCP/IP ports
24345, 24346, and 24347, and hosts are called serially as they are listed in the settings file on
the client. This implementation satisfies the requirements of installations with fewer than 10,000 clients, where concurrent requests number fewer than 5,000 at peak.
Capacity can be controlled in two parts in PBUL.
1. Server and Network optimization
2. Policy Optimization
Depending on customer requirements, both should be considered when optimizing an environment. There is no single perfect optimization, as conditions vary greatly from customer to customer; however, some combination of the suggestions below will let a customer tune their environment and improve performance significantly.
Load Balancing
Balancing network traffic and requests across the PBUL infrastructure is critical to improving performance of the PBUL software. Techniques similar to those used when balancing requests across a busy web complex can be used to optimize requests to the PBUL infrastructure. This will prevent any single server from becoming overwhelmed by requests. Load balancing can be achieved in two ways: using the built-in randomization features, or using a customer-installed load balancer that is available within the network.
Built-in randomization
PBUL has keywords in the pb.settings file that provide randomization among the Policy and Log Servers listed.
Setting the randomizesubmitmasters keyword to “Yes” will cause the PBUL client to randomly select one of the Policy Servers listed in the submitmasters line when a request is made.
Setting the randomizelogservers keyword to “Yes” will cause the PBUL client to randomly select one of the Log Servers listed in the logservers line when a request is made.
These keywords should be set to “No” when using the alternative load balancing solution outlined below.
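As a sketch, a pb.settings fragment enabling built-in randomization might look like the following (the host names are illustrative assumptions, not defaults):

```
# pb.settings fragment (illustrative host names)
submitmasters           pbmaster1.example.com pbmaster2.example.com
logservers              pblog1.example.com pblog2.example.com
randomizesubmitmasters  yes
randomizelogservers     yes
```

With randomization disabled, the hosts on each line are tried serially in the order listed.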
Customer installed Load Balancer
The PBUL infrastructure can be placed behind a Virtual IP (VIP) on a load balancing
system in the customer environment, and that system can select hosts methodically.
Most load balancing software can monitor host load, and will balance network traffic,
sending it to the host that has the most available capacity to handle the request
efficiently. Configuration of the customer installed load balancer should ensure that
requests are balanced as evenly as possible across the available PBUL infrastructure. An
Capacity Planning Guide © November 2016. BeyondTrust Software, Inc.
10
additional benefit to load balancing in this manner is that servers can be taken out of rotation for maintenance without incurring the ‘fail-over’ delay when a request is received.
It is important to note that corresponding firewall rules must be updated to permit traffic on
these ports if they are enabled.
Policy Optimization
PowerBroker policy language is an extremely powerful tool that processes policy to determine whether to accept or reject privileged command requests. PBUL policy consists of configuration files with functions and procedures designed to make decisions based on environmental data (user, group membership, host, command, etc.). Policy files reside on the Policy Server, and when requests are received, policy is read and assessed to arrive at an accept or reject. As with all code, policies can be optimized to make the decision process more efficient.
All policy decisions boil down to “Who”, “Where”, and “What” in most environments, and
writing policy with that in mind will help build policy structure that is efficient. There are some
best practice considerations that can be made to optimize policies for the most efficient
processing possible.
Policy is read from top to bottom – Placing the most important policies at the top of the
policy will ensure that these requests are processed first. For instance, system
administrators often require immediate access to privileged commands to resolve
system issues. Placing the policy that governs their activities uppermost in the policy
will ensure that SA requests for privileged access will be processed first.
Make decisions efficiently – Placing an ‘If’ statement at the beginning of a configuration
file that defines the conditions under which that policy segment will be processed will
allow the policy determination to be made quickly. If a condition is placed at the top of
the policy file that defines a specific condition, and that condition is not met, policy will
continue on to the next configuration file rapidly. Winding down through policy with
broad, or poorly defined conditions will consume processing power and cycles,
degrading the efficiency of policy processing.
Smaller is better – Large files that are processed by policy will slow policy processing
significantly. A rule of thumb is that reading a file larger than ~3,000 lines will slow
processing. This has to do with the way that Unix processes information, and has little
to do with PBUL itself. Recognizing this, and writing code in such a way that you avoid
that condition will help speed processing significantly. Wherever possible, use group membership rather than individual users, and use regular expressions to define host groups.
Avoid Hard-Coding variable information - If possible, branch out of policy to read a list in
a small file after a condition has been met, rather than hard-coding variable information
in policy. Information such as users who may come or go from an environment should
never be coded into policy. Rather, it is preferable to assess authorization based on
membership in groups, or lists of hosts defined in small files. Group provisioning is
usually handled by a centralized provisioning system, and this will eliminate the
possibility of artifact memberships or legacy policy authorization.
Leverage other Identity and Access Management systems – Most enterprises use many
tools to provision or control the identities within their environment. Most modern
provisioning tools have some type of API that can be queried, or place data into a
database that can be queried easily. When building policy, consider how volatile
information is within an environment. Typically, group memberships do not change
frequently once they are established, so rather than make an LDAP or AD call with every
policy request, give some thought to collecting the data periodically, caching, and
reading it locally. For example, making one LDAP or AD call every 30 minutes, and
storing group membership information locally will spare the LDAP/AD server from
potentially thousands of requests. Retrieve and store the information once, use it
multiple times, and refresh it periodically to reduce the burden on the PBUL
infrastructure, and other systems in the customer environment. Informing users that
following group membership provisioning it will take as much as 30 minutes for
privileges to be available is often acceptable in larger environments.
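The periodic-caching approach described above can be sketched as follows. This is a minimal illustration, not a PBUL API: fetch_fn stands in for whatever LDAP/AD query the environment uses, and the 30-minute TTL matches the example in the text.

```python
import time

class GroupCache:
    """Serve group membership from a local cache, refreshing it only
    when the entry is older than the TTL (30 minutes by default)."""

    def __init__(self, fetch_fn, ttl_seconds=1800):
        self.fetch_fn = fetch_fn      # the real LDAP/AD lookup (assumption)
        self.ttl = ttl_seconds
        self.cache = {}               # group -> (timestamp, members)

    def members(self, group):
        now = time.time()
        entry = self.cache.get(group)
        if entry is None or now - entry[0] > self.ttl:
            # Only this path touches the directory server.
            self.cache[group] = (now, self.fetch_fn(group))
        return self.cache[group][1]
```

Repeated lookups within the TTL window are served locally, sparing the LDAP/AD server from per-request traffic.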
Use the Operating System – PowerBroker policy language provides unrestricted access
to the Policy Server host operating system. Branching out of policy to execute a small
shell script to create smaller files for processing can optimize policy significantly. For
instance, using a shell script to select a small group of servers from a large list, and
placing it in a specific location for reading, and processing will reduce the number of
extraneous items that have to be reviewed, thus speeding policy processing. The
temporary file can be deleted using policy language immediately following processing to
keep systems clean.
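The guide suggests branching out to a small shell script for this; the same idea is sketched here in Python. The file names and the prefix-based selection rule are illustrative assumptions, not part of PBUL itself.

```python
def filter_hosts(master_list_path, prefix, out_path):
    """Copy only the hosts matching `prefix` from a large master list
    into a small temporary file for policy to read."""
    with open(master_list_path) as src, open(out_path, "w") as dst:
        for line in src:
            host = line.strip()
            if host and host.startswith(prefix):
                dst.write(host + "\n")
```

Policy would then read the small file and delete it immediately after processing, as described above.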
Session logging consumes resources – PBUL policy will always record metadata for events, such as which commands were requested, when, and by whom, and whether they were accepted or rejected; these are stored in the PBUL events file. Some events, such
as defined processes that are executed by service or application ids that require
privilege are documented and defined for audit purposes. Recording those sessions
may not be worthwhile, particularly if those processes occur multiple times per day.
Auditors may be satisfied with the process definition document, and assurance that the
process operates unchanged as defined within that document. Reducing the number of
sessions that are logged will reduce the burden on log servers, and will reduce the
amount of space required to store logs. Customers should explore their logging
requirements with their audit team to see where they can save cycles by not logging
sessions.
Take a step back, and review – PBUL policy evolves over time, and regular reviews of
policy should be undertaken. A master policy definition document that is regularly
updated will assist auditors, as well as give the team that supports PAM the opportunity
to review policy for gaps, or to remove deprecated policies. It is not uncommon for a
host or application to be decommissioned, and policy remains that defines activity on
that host. If possible, customers should integrate PAM with other systems in their
environment to dynamically update host lists, or user lists. Housekeeping in this way
will prevent artifacts from building up.
The lab machines used for the stress testing of high-volume concurrent requests were as follows:
Customer Environment
Cores  RAM (GB)  Concurrent Requests  Daemon     Server Type    Comment
20     128       5,000                pbmasterd  Policy Master  One master, spawning two log requests for each pbmasterd
20     128       5,000                pblogd     Session Logs
20     128       5,000                pblogd     Event Logs
Again, it is important to remember that there are many factors that can positively or negatively
affect performance and results, such as network, hardware, encryption, policy and logging.
During BeyondTrust’s stress testing, a simplified, basic policy was used. In addition, using an automated tool to simulate load typically results in minimal session log data being generated; even so, the generation of large session log files (iolog files) in a short period of time should be considered when planning log servers, disk size, disk speed, and network throughput.
Before performing stress testing against PowerBroker for Unix & Linux 9.4.1, the following optimizations were made to the lab machines, in addition to the normal configuration optimizations outlined throughout this document.
/etc/sysctl.conf
# System default settings live in /usr/lib/sysctl.d/00-system.conf.
# To override those settings, enter new settings here, or in an
# /etc/sysctl.d/<name>.conf file
#
# For more information, see sysctl.conf(5) and sysctl.d(5).
net.core.wmem_max = 12582912
net.core.rmem_max = 12582912
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_sack = 1
net.ipv4.tcp_no_metrics_save = 1
net.core.netdev_max_backlog = 5000
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_fin_timeout = 15
net.core.somaxconn = 1024
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv4.tcp_keepalive_probes = 9
net.ipv4.tcp_keepalive_intvl = 75
net.ipv4.tcp_retries2 = 15
net.ipv4.tcp_orphan_retries = 1
net.ipv4.tcp_reordering = 3
net.ipv4.tcp_retrans_collapse = 1
The attached results are included in the ‘PBUL Sizing & Stress Test Results.xlsx’ spreadsheet
available from BeyondTrust Professional Services.
Database growth
Growth is determined by the number of password changes, the number of session requests, and the keystrokes logged. Keystroke figures are approximate and representative of average RDP/SSH sessions; in practice, there is a vast difference between simple SSH commands such as nslookup and SFTP sessions that generate vast quantities of screen output.
Password Change
Each change: 8KB
Example: 100K password changes per year = 800MB
Requests
Per request: 0.25KB
Per approval: 0.13KB
Per session recording: 0.35KB
Example: (assume session length of 2 hours in 18hr working day = 9 sessions per
concurrent user per day x 100 concurrent users = 900 sessions per day).
o 900 x 0.73KB = 0.66MB per day for requests with approvals.
o 900 x 0.6KB = 0.54MB per day for auto-approved requests.
Session Recording Files
RDP session file (on disk): ~350KB per minute of recording
SSH session file (on disk): ~25KB per minute of recording
Example: (assume 900 sessions per day x 120 mins = 108,000 minutes of
recording).
o SSH: 108,000 x 25KB = 2.7 GB per day
o RDP: 108,000 x 350KB = 37.8 GB per day
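The request and session-recording arithmetic above can be checked with a short calculation, using the example figures from the text (100 concurrent users, 2-hour sessions in an 18-hour working day):

```python
# 9 sessions per concurrent user per day x 100 concurrent users.
sessions_per_day = 9 * 100                        # 900 sessions/day

# Requests: request + approval + session-recording records per session.
per_session_kb = 0.25 + 0.13 + 0.35               # 0.73 KB
daily_request_mb = sessions_per_day * per_session_kb / 1000   # ~0.66 MB/day
daily_auto_mb = sessions_per_day * (0.25 + 0.35) / 1000       # ~0.54 MB/day

# Session recording files on disk.
minutes_per_day = sessions_per_day * 120          # 108,000 minutes
ssh_gb = minutes_per_day * 25 / 1_000_000         # 2.7 GB/day
rdp_gb = minutes_per_day * 350 / 1_000_000        # 37.8 GB/day
```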
PowerBroker Identity Services (PBIS)
PBIS joins UNIX and Linux hosts to Active Directory, which provides several advantages:
Faster user id provisioning and removal – Leveraging the power of Active Directory, customers can provision an id in one place, and it will cascade to all authenticated systems. User id removal will similarly cascade throughout the environment.
Centralized Identity Mapping – User ids can be mapped to specific roles or functions
using native Active Directory User Controls (ADUC).
Kerberos SSO features – Active Directory identity management offers customers the
ability to incorporate Kerberos (strong authentication) into their UNIX environment.
The health of Active Directory is critical for the successful deployment of PBIS. PBIS is an AD client and relies on the same infrastructure being in place for native Microsoft Windows assets
and their Active Directory domain. In a pure Windows environment, many problems can be
masked by the myriad technologies that Windows has incorporated over the years. For
example, WINS and NTLM authentication often cover up problems with DNS and Kerberos.
Because underlying AD problems may not be causing day-to-day issues in a customer’s environment, it is necessary to take a step back and ensure that AD is functioning optimally before proceeding with the installation of a product that extends its native functionality. Problems with AD, or poorly implemented architectures, must be resolved before the installation of PBIS, as they can become more pronounced if left unresolved.
Account Collection
When consolidating to a central directory source, such as Active Directory, it is necessary to
consider all existing account databases in a customer environment. There are often multiple
account databases in a UNIX environment that contain different, and often conflicting, sets of
account data. Examples of such databases include: NIS, LDAP, winbind (samba), and local
group/passwd file pairs. It is important to collect and review all data sources to consider them
for reconciliation. Users and groups could exist with different names (or more commonly
UID/GIDs) on multiple servers, NIS Domains, or LDAP directories.
Since the goal of PBIS is to leverage the power of AD to centralize these accounts, it is
important to collect and reconcile all user and group information for administration and
reporting. To this end, it is also necessary to collect account information from AD to be used for
mapping the account sources. Additionally, in some cases, you may have already performed a previous AD migration with PBIS and will have to utilize Active Directory as another source for UNIX account information.
The account collection stage is performed by the customer, who must gather account and group data from local server databases, NIS domains, LDAP, and AD databases. BeyondTrust has several tools and methods that will assist the customer in gathering this information. At this point, account data should be considered “frozen,” in that any accounts created after Account Collection may not be considered in the following Account Reconciliation stages. Although data can still be added manually before the Account Provisioning stage, you will need to keep careful track of changes to account sources and ensure that no further conflicts are created.
The table below summarizes the various database types as well as the mechanisms which can
be used to retrieve information from them:
Account Database Sources: Local System files; LDAP; NIS Domain (single); NIS Domain (multiple); NIS with netgroups; Winbind; Active Directory (UNIX Info); Active Directory (AD Info)
Retrieval Mechanisms: passwd/group files; getent; LDIF; get-nis-maps.sh; ypcat; CSVDE; Retina Discovery; PowerBroker DART
The following sections detail the tools involved in collecting the user and group accounts spread throughout the other account databases within an environment.
Account Retrieval Mechanisms
passwd/group
The output for the user and group accounts will already exist under the /output/reports
directory in the form of passwdlines.txt and grouplines.txt, respectively. These were generated by viewhealthstatus.py during the Initial Audit stage.
getent
The Linux getent command (or PS tool getent.pl) can be run with the group or passwd
parameters. This command pulls information from the local system passwd/group files as
well as any NIS domains and/or LDAP databases configured via the system’s nsswitch
configuration. This is often the quickest way to get the entire cross-section of all account
domains on a single box. However, as this output does not delineate which accounts come from which sources, it is not the best method when looking to assign priority to an account source.
LDIF
LDIF can be used to pull information from an LDAP database. The LDIF file format must be
converted to standard passwd/group format after it is collected. This can be done with the ldif2passwd.pl tool, detailed below:
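The conversion itself maps RFC 2307 LDIF attributes onto passwd(5) fields. As a minimal Python illustration of that idea (this is an illustration only, not the ldif2passwd.pl tool itself):

```python
def ldif_to_passwd(ldif_text):
    """Convert RFC 2307 posixAccount LDIF entries to passwd(5) lines."""
    passwd_lines = []
    # LDIF entries are separated by blank lines.
    for block in ldif_text.strip().split("\n\n"):
        attrs = {}
        for line in block.splitlines():
            if ": " in line:
                key, value = line.split(": ", 1)
                attrs[key] = value
        # Only entries carrying UNIX account attributes are converted.
        if "uidNumber" in attrs:
            passwd_lines.append(":".join([
                attrs.get("uid", ""), "x",
                attrs.get("uidNumber", ""), attrs.get("gidNumber", ""),
                attrs.get("gecos", ""), attrs.get("homeDirectory", ""),
                attrs.get("loginShell", ""),
            ]))
    return passwd_lines
```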
Location:
/deployments/account-mapping/
ypcat
ypcat pulls the passwd and group maps directly from a NIS domain. Example:
/opt/pbis/bin/ypcat passwd;
/opt/pbis/bin/ypcat group;
CSVDE
The purpose of CSVDE in the Account Collection process differs slightly from the previous mechanisms detailed above. All of the previous methods are used to collect UNIX account information from various sources (local, NIS, LDAP, etc.). CSVDE, on the other hand, is used to collect Windows account information from Active Directory. The extracted information is used during the Account Mapping stage of Account Reconciliation, discussed later in this document.
CSVDE is a native Windows command, and should be run from one Domain Controller in each AD domain hosting user accounts. The following command can be used exactly to extract the relevant information:
Retina Discovery
BeyondTrust’s Retina Vulnerability Management Solution is available to all PowerBroker users
for asset intelligence and discovery scanning. The solution will discover all user accounts on
Windows, most Unix and Linux platforms, and infrastructure such as Cisco devices, and display the results in BeyondInsight. Once PBIS is implemented, BeyondInsight and Retina can report on PBIS deployments as well. The solution:
Finds and counts user accounts, local accounts, SSH keys, Windows and Linux groups,
default and hard-coded passwords, and more, reducing risk and ensuring that no
accounts, users, or assets are left unsecured.
Displays high-level metrics on password strength, age and other key indicators of risk in
a dashboard format for fast review and analysis by stakeholders.
Generates an easy-to-read, customized HTML or Excel-based report, helping security
leaders make informed decisions on how to immediately reduce the risks to privileged
access.
Provides the ability to export data via XML into PowerBroker Password Safe and
PowerBroker Identity Services for complete privileged account management.
Account Reconciliation
Once all the data from the Account Collection process has been gathered and placed into the
appropriate staging folders, the process to reconcile the data can begin. The following sections
detail the steps involved to reconcile the groups and users found in the customer’s UNIX
environment.
1. Initial Merge - This phase consolidates all the collected account information into a single, merged database of user accounts for importation into Active Directory. There will likely be changes to make to the merge based on the toolset and desired results.
2. Account Mapping - Maps serve to correlate the gathered UNIX account information to
Active Directory accounts. The toolsets can aid in the map creation process, but a
manual review is strongly recommended to ensure that all accounts are matched
properly.
3. Merging Account Data - During this phase, the data is reviewed and processed to ensure
it is consistent with migration goals. The data may be reprocessed iteratively multiple
times, each time fine-tuning individual accounts and data source priority.
4. Final Merge - The final merge can be viewed as the last iterative cycle in the data
massage process. Its resultant output is a definitive set of files which will be used in the
Account Provisioning process.
Account Provisioning
Once the Account Reconciliation stage has completed, it is time to pull all the reconciled data
into Active Directory. This process may involve the creation of new AD users and/or groups to correspond to their UNIX counterparts. Additionally, the key task of provisioning each AD user and group with their UNIX account information will be performed.
The Account Provisioning stage can be broken down into several key phases:
Each Domain Controller in a Domain with provisioned users will receive a full copy of the
AD database for its Domain. This means that every attribute added for a provisioned
user will be replicated to each DC in the Domain.
The uid, uidNumber, gidNumber, and displayName attributes are replicated to the
Global Catalog (GC). This means that these 4 values, once populated, will be replicated
and stored to every GC in the Forest.
Starting with PBISE 8.0, the loginShell, unixHomeDirectory, and gecos attributes are also replicated to the GC, like the attributes above.
AD does not replicate indexes between Domain Controllers, even those from the same
Domain. Indexes are rebuilt with the appropriate data on each Domain Controller.
While it is not possible to provide estimates on replication traffic due to the specific
configuration settings and bandwidth consideration of each AD environment, it is possible to
provide estimates of AD database growth.
Per Microsoft (http://technet.microsoft.com/en-us/library/cc961779.aspx), individual
attributes require approximately 100 bytes in the AD database. Based on internal testing,
BeyondTrust has qualified this as a relative approximation of the actual size used by each
object. Additional space, however, will also be used for the indexing of the appropriate attributes.
On average, each provisioned user object will require approximately 0.8 KB of additional space within the AD database. Of this 0.8 KB, approximately 0.7 KB is utilized for attribute storage, with the remaining 0.1 KB used for indexing.
Since group objects utilize only one of the RFC 2307 attributes (gidNumber) and the optional Alias setting (displayName), the space required per provisioned group should be approximately 0.27 KB, of which approximately 0.015 KB is used for indexing.
Therefore, to estimate growth of the AD database for PBIS provisioning, use the following
formulas:
((# of Users x 0.8 KB) + (# of Groups x 0.27 KB)) = total KB of growth
e.g.
((100,000 x 0.8 KB) + (50,000 x 0.27 KB)) = 93,500 KB ≈ 93.5 MB of growth
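The growth formula can be expressed directly, using the per-object sizes from the text (~0.8 KB per provisioned user, ~0.27 KB per provisioned group):

```python
def ad_growth_kb(users, groups):
    """Estimated AD database growth in KB for PBIS provisioning."""
    return users * 0.8 + groups * 0.27

# The worked example above: 100,000 users and 50,000 groups.
growth_kb = ad_growth_kb(100_000, 50_000)   # 93,500 KB, roughly 93 MB
```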
The resulting output should provide a good estimate of the database growth required. The
actual value may vary by 10-20% in either direction depending on the specific attributes
provisioned (see Testing Methodology below).
NOTE: Capturing *all* syslog events at error level and above requires ~2-3 MB / day / agent.
Agents can be queried directly for their events, or configured to forward them on to the Reporting Database (SQL backend) via a Collector Server.
Events are held for 24 hours on each Collector after being sent to the SQL server. The SQLite database on the Collector utilizes ~10 MB / agent under normal operating conditions. Providing 2-3x this amount as a cushion of space may be preferred for periods when the Collector server is unable to communicate with the SQL database (network outage, server downtime, etc.).
For example, a Collector server which services 400 agents will require a minimum of 4 GB of
space (400 * 10 MB) under normal operations. Increasing this to 12 GB (4 GB * 3) allows for ~3
days of logging to be stored on the Collector.
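The collector sizing example above reduces to a simple calculation:

```python
# Collector disk sizing: ~10 MB per agent under normal operation,
# with a 3x cushion to ride out communication outages.
agents = 400
baseline_mb = agents * 10            # 4,000 MB, i.e. the 4 GB minimum
cushioned_mb = baseline_mb * 3       # 12,000 MB, i.e. 12 GB (~3 days of logging)
```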
Events can be recovered up to their maximum retention period on either the individual agents
or the Collectors. In the event of extended downtimes or total system failures, the events can
be resent from the agents to the Collectors and then forwarded back to the SQL backend.
Agent settings can be adjusted to throttle how frequently connections are made to Collectors
and how many events (data) are transferred per connection period.
NOTE: The default location of the collector database is
c:\Program Files (x86)\BeyondTrust\PBIS\Enterprise
Audit Data
The auditing system utilizes 1 MB per 1,000 events in the audit database. Each agent reporting through the collector servers will generate an average of 400-500 PBIS and authentication events per day. Systems with high utilization (logins, regular cron jobs, non-standard logging options) may generate substantially more events.
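These figures give a simple daily-growth estimate. The agent count below is an illustrative assumption; the per-agent event rate is the midpoint of the 400-500 range above:

```python
# Audit database growth: ~1 MB per 1,000 events.
agents = 1000
events_per_agent = 450                        # midpoint of 400-500 events/day
daily_mb = agents * events_per_agent / 1000   # 450 MB of audit data per day
```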
NOTE: MSDE and SQL Express versions (including the version provided on the installation
CD) are not supported in production environments due to the file size limits on their databases.
Audit Reports are run directly against the database from the BeyondTrust Management
Console (BMC). If required, ensure hardware requirements are sufficient to produce adequate
performance when running reports during production hours.
SQL Database Security
To strengthen the security of the PBISE reporting environment, grant users and service
accounts only the rights required to perform their specific actions. Below is a general
guideline for securing the Reporting components.
Create the following Active Directory groups, granting each the listed database rights:
PBISE_DB_Administrators
o dbo
PBISE_Collectors
o Collectors: insert, select, update
o CollectorsStat: insert, select, update, delete
o Events: insert
o CollectorsView: select
PBISE_DB_Archive_Administrators
o Archives: insert, select, update, delete
o Events: select, delete
PBISE_Report_Viewers
o All tables: select
PBISE_LDBUpdate
o dbo
3. Modify the connection string to match the string used for the Collector configuration
above.
4. Navigate to the "Collectors" tab. If the Collector service and security have been properly
configured, each Collector should be listed. Right-click the desired Collector and
choose "Set collector parameters".
6. Change the Remote ACL via the “Set Permissions…” button to grant the following access
rights:
PBISE_DB_Administrators: Full Control - Required to modify the Collector
database via the btcollector-cli.exe command.
Domain Computers: Write Events – Required for agents to write events to the
Collector database.
NOTE: The default permissions on the Collector Remote ACL are defined by the following SDDL string:
O:LSG:BAD:PAR(A;;CCDCRP;;;BA)(A;;CCDCRP;;;DA)(A;;CC;;;DC)
SDDL_PROTECTED (P) - Inheritance from containers higher in the folder hierarchy is blocked.
(A;;CCDCRP;;;BA) - ACE 1: Allow Built-in Administrators
(A;;CCDCRP;;;DA) - ACE 2: Allow Domain Administrators
(A;;CC;;;DC) - ACE 3: Allow Domain Computers
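As a sanity check, that default SDDL string can be decomposed programmatically. This sketch simply splits each ACE into its semicolon-delimited fields; the trustee aliases (BA, DA, DC) are the standard SDDL abbreviations:

```python
import re

SDDL = "O:LSG:BAD:PAR(A;;CCDCRP;;;BA)(A;;CCDCRP;;;DA)(A;;CC;;;DC)"
TRUSTEES = {"BA": "Built-in Administrators",
            "DA": "Domain Administrators",
            "DC": "Domain Computers"}

# Each ACE has the form (type;flags;rights;object_guid;inherit_guid;trustee).
for ace in re.findall(r"\(([^)]*)\)", SDDL):
    ace_type, _, rights, _, _, trustee = ace.split(";")
    label = "Allow" if ace_type == "A" else ace_type
    print(f"{label} {TRUSTEES.get(trustee, trustee)}: {rights}")
```

Running this prints the same three Allow entries listed above, one per ACE.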
Example:
"C:\Program Files\BeyondTrust\PBIS\enterprise\ldbupdate.exe" --transaction -f
dc=mydomain,dc=com -c "Data Source=SPP-VP-SQL01\PBISREPORTING;Initial
Catalog=LikewiseEnterprise;Integrated Security=true;Connection Timeout=30" -v
Using the syntax above, schedule a recurring task to run as a user in the PBISE_LDBUpdate
group on a server with the BMC Tools installed.
Capacity Testing (Methodology)
Testing was performed using a two-domain forest with a single Windows 2008 R2 domain
controller in each domain. Tests were performed prior to PBISE 8.0, so the observed index
growth is slightly smaller than in current releases because not all of the listed attributes
were indexed.
User Testing
Users were provisioned with the following attributes set:
The growth values observed in the NTDS.DIT database for 200,000 users can be seen as follows:
INDEX_00150001 (uid): 636 pages * 8 KB = 5088 KB = 0.02544 KB /user
INDEX_00250000 (uidNumber): 405 pages * 8 KB = 3240 KB = 0.0162 KB /user
INDEX_00250001 (gidNumber): 360 pages * 8 KB = 2880 KB = 0.0144 KB /user
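The per-user figures above follow directly from the 8 KB database page size. A quick reproduction of the arithmetic (the page counts are the observed values from the test above; the variable names are illustrative):

```python
PAGE_KB = 8        # NTDS.DIT database page size in KB
USERS = 200_000    # users provisioned during testing

index_pages = {"uid": 636, "uidNumber": 405, "gidNumber": 360}  # observed pages
for attr, pages in index_pages.items():
    total_kb = pages * PAGE_KB
    print(f"{attr}: {total_kb} KB total, {total_kb / USERS:.5f} KB/user")
```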
PBIS can integrate within an existing AD environment in a flexible and unobtrusive way.
Required modifications pose little or no risk to an existing environment and should negligibly
impact existing AD databases and replication cycles while increasing performance for account
queries and authentication.
Publisher
Path, Hash
Folder
MSI Path
Capacity Planning Guide © November 2016. BeyondTrust Software, Inc.
MSI Folder
ActiveX, Shell
CD/DVD
UAC
This section provides guidance on which rule types are most applicable to each situation, to
minimize the number of rules deployed.
PUBLISHER Rule
Used to target a digitally signed file by any element of that signature. Since their
introduction in PBW v5.0, Publisher rules have become almost as common as Path rules. They
provide the highest value in single-rule coverage for elevating applications while minimizing
rule count.
Pros:
Effectively targets a specific application (with or without version information)
Is location agnostic, application can be launched from a local path, UNC, DFS or a
Junction Point
Can target signed .EXE, .MSI, or .MSP files
Cons:
Application must be signed by a trusted signature
Best Use Case:
Whenever you are targeting a digitally signed application
When choosing to Blacklist or Whitelist an application’s execution
PATH Rule
Used to target an application or process based on its location. Path rules have historically been
the most common rule type in use within a company’s rule set.
Pros:
Path location can easily be typed into the rule’s properties
Can use wildcards and environment variables within the Path and Arguments fields
Can be used to target any process in a folder or specific extensions within a folder
Easy to troubleshoot if the intended policy isn’t triggering when the targeted process
is launched
You do not need access to the process to create a rule for it
Cons:
Can be limiting when applying a policy to an application whose location can be
different on different machines
Does not protect against a user replacing another application and renaming it to
what the rule is targeting
Best Use Case:
When you are new to PowerBroker Desktops
When choosing a rule from the Built-In template library
When applying a rule to an MS Internet Explorer URL, Scripts or Registry Merges
HASH Rule
Used to target a specific version of an application regardless of its location. This is the
highest-cost rule type, since it requires one rule per application.
Pros:
Targets based on an exact match of a file including its version
Does not require the application to be digitally signed
Is location agnostic, application can be launched from a local path, UNC, DFS or a
Junction Point
Is not reliant on the name of the application
Cons:
Any update to the application binary (e.g., v1 to v1.1) will prevent a Hash rule
from applying
Hash rules may require more maintenance, and more rules for the same
application, than other rule types
Poorly scoped hash rules can noticeably impact the end user, since the solution
must calculate the hash for each application launch that could potentially be a
match.
Best Use Case:
When targeting an application not digitally signed in a location the logged on user
has write/modify access to
When targeting a batch (.bat) file a user could edit. This requires a manual entry of
the .bat file name since the UI defaults to .exe files.
FOLDER Rule
Used to target all applications in a specific folder. The Folder rule type is considered nearly
obsolete since the introduction of wildcards in PBW v5.0. To target a folder with a Path rule
instead, create the rule like this: Path: ‘C:\DirectoryOne\DirectoryTwo\*’
MSI PATH Rule
Used to target MSIEXEC.EXE (32 or 64 bit) while still targeting individual Windows Installer
Packages (MSI Files).
Pros:
Consistent rule properties regardless of the bit of the MSI file
Only the Windows Installer Package location/name needs to be specified in the rule
properties, the rule assumes MSIEXEC.EXE
Can use wildcards and environment variables in the Arguments field
Can be used to target all Windows Installer Packages in a folder (and/or sub-folders)
Cons:
May not apply if the .MSI is called with additional arguments, as can be the case
when using Transforms
Best Use Case:
When you want to target one or more .MSIs with a PBW elevation or blacklisting
rule
Any time you want to elevate an approved Windows Installation File
To target a shared folder of approved software installs
MSI FOLDER Rule
Used to target MSIEXEC.EXE (32- or 64-bit) for all Windows Installer Packages (MSI files) in a
folder. The MSI Folder rule type is considered nearly obsolete since the introduction of
wildcards in PBW v5.0. To target a folder with an MSI Path rule instead, create the rule like
this: Package: ‘C:\DirectoryOne\DirectoryTwo\*’
ACTIVE X Rule
Used to target installation of ActiveX controls or installations initiated by MS Internet Explorer.
Pros:
Allows you to elevate the installation of an ActiveX control without elevating the
web page it was initiated from.
You can use any combination of a control’s Source, Name, or ID in addition to its
version to target a control.
Can be used in conjunction with a PBW User Message to better inform the end user
why a given control/page isn’t displaying information properly.
Cons:
May conflict with Java applet calls if the rule is not targeted well enough
May not complete the install of the control if IE is the controlling process. Use a URL
Path rule instead.
Best Use Case:
When a user requires the installation of an approved ActiveX control
SHELL Rule
Provides the user with a Right-Click Context Menu option allowing them to elevate an
application not otherwise targeted with a PBW rule. This is the lowest cost rule in the set since
it can be applied to almost any application. This rule should only be given to trusted users and
may represent a security threat if given to inexperienced or untrusted users.
Pros:
Allows users who normally would be stuck due to rights to continue their business
tasks without involvement or waiting for help desk staff
CD/DVD Rule
Pros:
Allows you to create rules on a specified CD or DVD without creating a rule for
anything executed from a Drive Letter
Creating a rule for a specific CD/DVD will also apply to any full copy made from that
CD/DVD
Cons:
You must have access to the CD/DVD to create a rule for it
Best Use Case:
When creating a CD/DVD with updates or installs that need to be shared with a large
number of end users in different locations
UAC Rule
Used to target an application that triggers a UAC prompt.
Pros:
Can apply a rule to applications seamlessly, without the user being presented
with a UAC prompt
Can be used to effectively track what applications require elevated rights within the
PBW Auto Rule Creation tool.
Can target a subset of applications that trigger UAC while allowing UAC to prompt
for non-approved software.
Cons:
Additional controls may be needed to limit what the UAC rule applies to
Must be running on a UAC enabled OS to apply
Best Use Case:
In combination with the Challenge Response feature to allow Help Desk and Remote
staff to run processes requiring elevated rights
In combination with PowerBroker Logging to assist in the discovery of applications
requiring elevated rights
Use Cases
Based on the Rule Types defined in the previous section, the following use cases apply.
Read each row as: when modifying the permissions and privileges described in the first
column, the best Rule Type to use is listed in the second column.

When modifying permissions and privileges of...             Use a...
Elevate the permission level for restricted users           Path rule or Hash rule
performing a common Windows task or running an
application requiring higher privileges
Elevate the permission level for restricted users running   Path rule with trailing wildcard
any applications in a specific folder
Reduce the permissions for administrators when using        Path rule or Hash rule
applications such as Internet Explorer and Outlook
Provide a self-service software installation point for      Path rule for executables and MSI
restricted users                                            Path rule for MSI packages (both
                                                            with trailing wildcard)
Enable restricted users to use the Add Hardware wizard,     Path rule
or prevent users from using the wizard
Enable restricted users to add or remove plug and play      Path rule
hardware, or prevent users from adding plug and play
hardware
Exchange
Events Per Minute    With Agent                Performance Range
100 Changes          CPU & Memory 1% higher    0% - 1% (5 Runs)
500 Changes          CPU & Memory 4% higher    2% - 4% (5 Runs)
1000 Changes         CPU & Memory 4% higher    2% - 4% (5 Runs)
5000 Changes         CPU & Memory 5% higher    3% - 4% (5 Runs)
*Testing was limited to 5,000 events because data consistency varied beyond this point
File Systems
Events Per Minute    With Agent                Performance Range
100 Changes          CPU & Memory 0% higher    0% (5 Runs)
500 Changes          CPU & Memory 0% higher    0% (5 Runs)
1000 Changes         CPU & Memory 2% higher    2% - 6% (5 Runs)
5000 Changes         CPU & Memory 3% higher    3% - 7% (5 Runs)
10,000 Changes       CPU & Memory 4% higher    4% - 7% (5 Runs)
15,000 Changes       CPU & Memory 4% higher    4% - 7% (5 Runs)
Microsoft SQL
Events Per Minute    With Agent                Performance Range
100 Changes          CPU & Memory 0% higher    0% (5 Runs)
500 Changes          CPU & Memory 2% higher    1% - 4% (5 Runs)
1000 Changes         CPU & Memory 2% higher    2% - 6% (5 Runs)
5000 Changes         CPU & Memory 5% higher    3% - 4% (5 Runs)
10,000 Changes       CPU & Memory 6% higher    4% - 7% (5 Runs)
15,000 Changes       CPU & Memory 6% higher    4% - 7% (5 Runs)
RETINA
Retina is a family of solutions from BeyondTrust that delivers complete coverage for
Vulnerability Management (VM), from vulnerability and configuration assessments to patch
management.
* Increasing the number of simultaneous scan targets increases RAM utilization on the
scanner. Include an additional 1 GB of RAM for 48 simultaneous targets and an additional
2 GB of RAM for 64 targets.
Retina Host Security Scanner (RHSS)
The Retina Host Security Scanner is a standalone, headless version of the Retina Network
Security Scanner designed as an agent-based solution for vulnerability assessment. It
requires BeyondInsight (Retina CS) or PowerShell for command and control. Capacity planning
requirements are not governed by the agent, since it only assesses the host on which it is
installed. Metrics for performance and BeyondInsight management are listed below:
INFRASTRUCTURE
BeyondInsight contains several additional modules for performing updates, scalable
architectures, and routing of policies and password management. Each module has its own
capacity metrics for a typical deployment.
CONCLUSION
This document detailed the capacity planning requirements per module for the PowerBroker
and Retina families of solutions by BeyondTrust. It covered each module individually and,
where applicable, modules operating together. Special considerations have been noted for basic installations
and capacity planning when architecting for scalability, throughput, and distributed storage
requirements. This Capacity Planning Guide provides the foundation for designing and planning
for the life of your BeyondTrust solutions.
A SQL 2014 AlwaysOn HA pair for PowerBroker for Windows (PBW) only
SQL 2008 R2 standalone server for PowerBroker Unix & Linux (PBUL), PowerBroker Password Safe (PBPS), Retina Network
Security Scanner (RNSS), and PowerBroker Endpoint Protection Platform (PBEPP)
Each set of data was averaged across at least 3 runs of the respective tools. The column named "IDLE" is the baseline for each
environment. Each dataset was 10 minutes long; datasets where the job being profiled was not running for at least 90% of the
log were discarded. The test criteria included:
*All values are in Kilobytes, per Microsoft Performance Monitor. IOPs combine read and write. All disk stats combine log and
database.
ABOUT BEYONDTRUST
BeyondTrust® is a global security company that believes preventing data breaches requires the
right visibility to enable control over internal and external risks.
We give you the visibility to confidently reduce risks and the control to take proactive, informed
action against data breach threats. And because threats can come from anywhere, we built a
platform that unifies the most effective technologies for addressing both internal and external
risk: Privileged Account Management and Vulnerability Management. Our solutions grow with
your needs, making sure you maintain control no matter where your organization goes.
BeyondTrust's security solutions are trusted by over 4,000 customers worldwide, including over
half of the Fortune 100. To learn more about BeyondTrust, please visit www.beyondtrust.com.