The Snort IPS engine has changed substantially over the last ten years. Packet processing speed has improved, IP defragmentation and stream reassembly functions have evolved, and the connection and state tracking engine has matured, but one thing keeps getting left behind: custom rule-sets. Each revision of Snort adds features that enhance the detection capability and improve the packet processing performance of the Snort engine. Those enhancements not only open new avenues for detecting the latest bad stuff out there, they create an opportunity to improve the performance of older legacy rules you may have created many years ago. Unless your rules make good use of the current Snort language and are kept up-to-date, what once used to be a good rule could in fact turn bad.
2.16.2010 - 1
A Snort rule is made up of two parts: a rule header and a rule body. The rule body follows the rule header and is surrounded by parentheses. The header is pretty easy to understand as it reads close to natural language; it consists of an action, a protocol specification, and the traffic that is to be inspected. The rule body is made up of a selection of rule options. A rule option consists of a keyword followed by one or more arguments. For example, in the rule below there is a content keyword with an argument of "0xdeadbeefbadfoo".

Listing 1. An example bad Snort rule.

    alert tcp $HOME_NET any -> $EXTERNAL_NET 80 \
        (msg:"0xdeadbeefbadfoo Detected"; \
        flow:established,from_client; \
        content:"0xdeadbeefbadfoo"; \
        rev:1; sid:1000001;)

This rule instructs Snort to look for the text 0xdeadbeefbadfoo in all packets flowing out of the network to TCP port 80 that are part of an established TCP session.
Well, after the analyst has likely scratched their head for a moment and wondered what on earth 0xdeadbeefbadfoo is, they will probably end up Googling for it in an effort to understand what this alert means to them. Is this a serious event? Is it somebody else's problem? Should I start panicking? It is common to have one group of people researching and writing rules and a different group dealing with the security events those rules raise, and if this isn't the case for you today, it may well be in the future. At the time of rule creation, only the rule writer really knows what he or she is looking for, and the implications to the network if the traffic is found.
It is therefore critical for this information to be passed on to the event analyst within the rule itself. Unless a rule is correctly explained, how can a writer expect an analyst to be able to react accordingly? Let's expand on my simple 0xdeadbeefbadfoo example from earlier by providing some more theoretical scenario information (Listing 2).

Listing 2. The bad rule, now improved with far more information to help an analyst.
    alert tcp $HOME_NET any -> $EXTERNAL_NET 80 \
        (msg:"0xdeadbeefbadfoo Detected"; \
        content:"0xdeadbeefbadfoo"; \
        classtype:trojan-activity; \
        priority:3; \
        reference:cve,2010-99999; \
        reference:url,http://mycompany.com/myproduct; \
        sid:1000001; rev:2;)
Note the addition of three new rule options: a classification type, an overriding priority qualification, and a couple of useful references. With these extra rule options added, an analyst dealing with the event now knows that 0xdeadbeefbadfoo is in fact a low-priority Trojan, associated with CVE-2010-99999 and related to a specific company's product. These seemingly minor additions pay massive returns in the security event analysis and remediation process. Sometimes the simplest changes provide the greatest value.
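The classtype names and their default priorities come from Snort's classification.config file. The line below reflects the trojan-activity entry as shipped in recent Snort releases (check your own deployment, as these files are often customized); note that the priority keyword in a rule overrides the classification's default for that one rule:

```
# Format: config classification: shortname,short description,default priority
config classification: trojan-activity,A Network Trojan was detected,1
```

So without the priority:3 option in Listing 2, the event would have inherited trojan-activity's default priority of 1.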
Identifying and optimizing slow rules that are wasting your CPU cycles.
So while fixing the analyst information problem is pretty simple, identifying suboptimal rules in terms of computational overhead is a little more of a technical process. To make this challenge possible, Snort can kindly provide us feedback on how the system functions in relation to the current configuration and the network traffic being inspected. There are a couple of useful configuration lines that can be added to your snort.conf to provide feedback about how the detection engine is performing. Today I will focus on the output provided by profile_rules.
config profile_rules: print 10, sort total_ticks
Adding this profile_rules configuration directive to your snort.conf will enable performance profiling of your Snort rule-set. At exit, Snort will output to STDOUT a list of the top N (specified here as ten) worst-performing rules, ranked by the total time taken to check packets against them. This data can also be written to a text file of your choice, and many other sort methods are available; check the Snort manual for full details. Note: Snort must be compiled with --enable-perfprofiling to enable the performance profiling capability.

Before starting to inspect the performance output, it is vital to understand that all of the data we see is dependent on two distinct variables:

- The current configuration running (including the rule-set)
- The network traffic that is inspected by the detection engine

When testing and tweaking anything as complex as an IPS rule-set for performance, I find it imperative to isolate and work on only a single variable at a time. By focusing my tests on a large sample of network traffic stored in PCAP files that is representative of where the sensor operates, I can tweak my rules for performance against this static data-set. When I think I have optimally tweaked any rules, I can then move to test against live traffic. An example of rule profiling output is shown in Listing 3, and each data column is explained below.
Num: This column reflects the rule's position number in regard to how badly the rule performs. Here the top (number 1) reflects the rule that is responsible for consuming the most processing time (total_ticks).

SID, GID, Rev: The Snort ID, Generator ID, and Revision number of the rule. This is shown to help us identify the rule in question in our rule-set.

Checks: The number of times rule options were checked after the fast_pattern match process (yes, that bit is emphasized because it is important).

Matches: The number of times all rule options matched, and therefore traffic matching the rule was found.

Alerts: The number of times the rule generated an alert. Note that this value can be different from Matches due to other configuration options such as alert suppression.

Microsecs: Total time taken processing this rule against the network traffic.

Avg/Check: Average time taken to check each packet against this rule.

Avg/Match: Average time taken to check each packet for which all options matched (the rule could have generated an alert).

Avg/Nonmatch: Average time taken to check each packet where an event was not generated (the amount of time spent checking a clean packet for bad stuff).

The two values a rule writer has some level of control over are the number of checks and how long it took to perform those checks. Ideally we would like to have low figures in all columns, but decreasing the Checks count is the first important part of rule performance tuning. To be able to tweak our rule to affect this value, we first need to understand exactly what Checks represents.

Listing 3. Sample Snort rule profiling output.
    Rule Profile Statistics (all rules) total sort
    ===============================================================================================
    Num  SID  GID  Rev  Checks  Matches  Alerts  Microsecs  Avg/Check  Avg/Match  Avg/Nonmatch
    ===  ===  ===  ===  ======  =======  ======  =========  =========  =========  ============
      1  112    1    1     208       69      69        187        0.9        2.0           0.3
      2  111    1    1     208      208     208        151        0.7        0.7           0.0
      3  113    1    3      69       69      69         27        0.4        0.4           0.0
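The relationship between these columns can be sanity-checked with a few lines of code. The short sketch below (my own illustration, not part of Snort) recomputes Avg/Check from the Microsecs and Checks columns for the three sample rows above:

```python
# Recompute the per-check average from Snort rule profiling data.
# The sample rows mirror Listing 3 above.

rows = [
    # (Num, SID, GID, Rev, Checks, Matches, Alerts, Microsecs)
    (1, 112, 1, 1, 208, 69, 69, 187),
    (2, 111, 1, 1, 208, 208, 208, 151),
    (3, 113, 1, 3, 69, 69, 69, 27),
]

def avg_per_check(microsecs: int, checks: int) -> float:
    """Average microseconds spent per check (0.0 if the rule was never checked)."""
    return round(microsecs / checks, 1) if checks else 0.0

for num, sid, gid, rev, checks, matches, alerts, usecs in rows:
    print(f"SID {sid}: {avg_per_check(usecs, checks)} usec/check")
```

Running this reproduces the Avg/Check column (0.9, 0.7, and 0.4), which is simply total time divided by the number of checks.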
For example, if inbound traffic is destined to arrive at TCP:80 (HTTP), there isn't much point in running it through the rules associated with SMTP (TCP:25). The packet is assessed against the rules in the TCP:80 bucket. The same decision is also made based on the source port and service metadata. Snort also has an extra rule bucket for the "any any" rules: those that use the value any as both the source and destination port numbers. All packets are also checked against the "any any" rules after being assessed against their particular port/service-based rule bucket.

2) Packet content (fast_pattern check)

After identifying which rule bucket(s) a packet should be assessed against, a pre-screening content check known as the fast_pattern match is applied for all rules in the bucket(s). For any Snort rule to raise an event, all rule options in that rule must match. Applying a fast_pattern check allows Snort to quickly test packets for the presence of a static content string (a single content: value) required to generate an event. The goal of this test is to quickly identify all packets that have any possibility of alerting after all of the rule options are tested. If a packet doesn't match the fast_pattern check, there is absolutely no point in running more computationally intense checks against it: because the fast_pattern match has failed, we know that at least one of the rule options will not match, and an alert will never be generated.
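This pre-screening logic can be sketched in a few lines of Python. This is a toy model of the idea, not Snort's actual implementation: a cheap substring test gates the expensive per-option checks, so only packets that could possibly alert are inspected in depth:

```python
# Toy model of fast_pattern pre-screening: a cheap substring test first,
# full rule-option evaluation only for packets that pass it.

def fast_pattern_check(payload: bytes, pattern: bytes) -> bool:
    # Cheap test: is the static content string present at all?
    return pattern in payload

def full_rule_match(payload: bytes, options) -> bool:
    # Stand-in for the expensive per-option checks (pcre, byte_test, ...)
    return all(opt(payload) for opt in options)

def inspect(packets, pattern, options):
    checks = 0   # packets that survived pre-screening ("Checks" in Listing 3)
    alerts = []
    for payload in packets:
        if not fast_pattern_check(payload, pattern):
            continue   # no chance of alerting, skip the expensive checks
        checks += 1
        if full_rule_match(payload, options):
            alerts.append(payload)
    return checks, alerts

packets = [b"GET / HTTP/1.1", b"0xdeadbeefbadfoo here", b"clean traffic"]
options = [lambda p: p.endswith(b"here")]
checks, alerts = inspect(packets, b"0xdeadbeefbadfoo", options)
print(checks, len(alerts))  # prints: 1 1
```

Of three packets, only one contains the fast-pattern string, so only one is ever run through the full option checks: exactly the effect that keeps the Checks count low.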
Number of Checks
This brings us back to the important Checks value. The number of checks is the number of times a rule is assessed after both the protocol/port/service identification and fast_pattern processes are complete. The more irrelevant packets that we can exclude with these two steps, the lower the number of checks will be, the more optimal and focused the rule will be, and the less time will be wasted performing in-depth assessment of packets that will never generate an event.
Looking at the rule in Listing 4, "Example.com" is the longest content check (eleven characters), and by default it will be used for the fast_pattern check. The other content, "CST-ID-001", is however less likely to be found in network traffic, especially if your company name just so happened to be Example.com. It is therefore wise to tell Snort to use this better value for the fast_pattern check with the fast_pattern modifier keyword.
    content:"CST-ID-001"; nocase; fast_pattern;

Following the fast_pattern check, each rule option is tested in the order that it appears in the rule. Finally, the source and destination IP addresses are tested to check whether they match those defined in the rule header. Only if every check made is successful is the rule action (such as alert) taken. If any of the tests fails to match, no further checks are made on the packet against that rule; it is therefore advisable to place quick tests, such as flowbits:isset, early in your rule options.
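As an illustration of that ordering advice, consider the hypothetical rule pair below (SIDs, contents, and the flowbit name are all made up for this sketch). The second rule tests the flowbit before its content match, so packets from flows where no login was ever seen are discarded almost for free:

```
# Set a flowbit when an FTP login is seen (sketch; details are illustrative)
alert tcp $EXTERNAL_NET any -> $HOME_NET 21 \
    (msg:"FTP login seen"; content:"USER "; \
    flowbits:set,ftp.login; flowbits:noalert; sid:1000010; rev:1;)

# The cheap flowbits:isset test is placed before the more expensive content check
alert tcp $EXTERNAL_NET any -> $HOME_NET 21 \
    (msg:"Command after FTP login"; flowbits:isset,ftp.login; \
    content:"DELE "; sid:1000011; rev:1;)
```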
Summary
The Snort rule language is simple to pick up, and as with any other language it is easy to fall into some bad habits. Hopefully this article has introduced some simple suggestions that will improve your in-house IPS rules in respect to their performance and usefulness.
References:
http://snort.org
http://vrt-sourcefire.blogspot.com/
http://leonward.wordpress.com/dumbpig/
Leon Ward
Leon is a Senior Security Engineer for Sourcefire based in the UK. He has been using and abusing Snort and other network detection technologies for about ten years and hates referring to himself in the third person. Thanks go to Alex Kirk and Dave Venman for sanity checking this document.