
You'll find two Splunk Server processes running on your host: splunkd and splunkweb.

-Splunkd is a distributed C/C++ server that accesses, processes and indexes streaming IT data
and also handles search requests. splunkd processes and indexes your data by streaming it
through a series of pipelines, each made up of a series of processors.

-Splunkweb is a Python-based application server providing the Splunk Web user interface. It allows users to search and navigate IT data stored by Splunk servers and to manage the Splunk deployment through a browser interface. splunkweb communicates with your web browser via REST and communicates with splunkd via SOAP.

Splunk Servers can communicate with one another via Splunk-2-Splunk, a TCP-based protocol,
to forward data from one server to another and to distribute searches across multiple servers.
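As a rough sketch of how Splunk-2-Splunk forwarding is configured, the stanzas below show the two sides of the connection; the host name indexer.example.com and port 9997 are illustrative assumptions, not values taken from this document.

# outputs.conf on the sending Splunk server
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = indexer.example.com:9997

# inputs.conf on the receiving Splunk server (indexer)
[splunktcp://9997]
disabled = false

With these stanzas in place, the sender streams its events to the indexer over TCP, and the receiver listens for that traffic on the assumed port.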

IT operations with Splunk


Proxy logs = these logs are good for C2 analysis of files, domains, and downloads of DLL/EXE files (see the example search after this list)
Anti-virus logs = these logs are good for analysis of malware, vulnerabilities of hosts, laptops, and servers, and monitoring for suspicious file paths
Server Operating System logs = these logs are good for analysis of server activities such as users, runaway services, and security events
Firewall logs = logs of network traffic by source/destination IP address, port, and protocol
Mail logs = logs of inbound/outbound mail for malicious links, targeted recipients, unauthorized outbound files, data loss, and bad attachments
Custom app logs = these logs can be analyzed for possible buffer overflow, code injection, and SQL injection
Intrusion Prevention System logs = capture these logs to alert on signatures firing, COTS signatures, and threat analysis of bad network packets
Intrusion Detection System logs = capture these logs to alert on signatures firing, custom signatures, and bad network packets
Database logs = capture these logs for authorized access to critical data tables, authorized logons, open ports, and admin accounts
Virtual Private Network (VPN) logs = capture these logs to analyze users coming into the network for situational awareness, monitoring of foreign IP subnets, and compliance monitoring of browsers/apps on connected hosts
Authentication logs = monitor authorized/unauthorized users, times of day of connection, how often, logons/logoffs, and BIOS analysis
Vulnerability Scan Data = import data about assets, vulnerabilities, patch data, etc.
Web Application logs = external-facing logs to monitor suspicious SQL keywords, text patterns, and regex matches for threats coming in through the browser
DNS logs = to correlate which IPs map to which domains at the client level
DHCP logs = monitor which systems are assigned which IP addresses, for how long, and how often
Active Directory/Domain Controller logs = monitor user accounts for AD admins, privileged accounts, remote access, multiple admins across the domain, new account creation, and event IDs
Badge Access logs = capture these logs to correlate insider threat and situational awareness, and to correlate the data with authentication logs
Router/Switch data (NetFlow) = capture this critical data source for APT monitoring, network monitoring, data exfiltration, and flow analysis; this is a very important data source
Packet Capture logs (PCAP) = capture this very critical data source for APT, data exfiltration awareness, packet analysis, deep packet inspection, malware analysis, etc.
FW + AV = will help detect and respond to viruses and worm propagation
IPS + AV + FW = detect/alert on network-based attacks such as buffer overflow, reconnaissance scans, and code injection
PROXY = monitor the majority of web-based/application-layer attacks such as cross-site scripting, session hijacking, and browser redirects
AV + PROXY = monitor/detect/respond to downloads of bad files, remote code execution, and other web-based attacks
FW + PROXY = detect outbound data exfiltration and potentially misconfigured firewall rules
IPS + FW = monitor all network packet signature threats
AD Server = monitor all user/group modifications, deletions, and updates for administrators
AD + PROXY = monitor/detect/alert on post-compromise activity and lateral movement
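
As referenced in the proxy logs item above, here is a minimal sketch of the kind of search such data supports; the index name proxy, the sourcetype, and the field names (url, src_ip, dest_ip) are assumptions for illustration and will differ per deployment.

index=proxy sourcetype=squid:access (url="*.exe" OR url="*.dll")
| stats count values(dest_ip) as dest_ips by src_ip, url
| sort - count

A similar pattern, swapping in the relevant index and fields, covers most of the correlations listed above, such as pairing proxy results with Active Directory logons to look for lateral movement.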
Splunk configuration files
alert_actions.conf
app.conf
audit.conf
authentication.conf
authorize.conf
commands.conf
crawl.conf
default.meta.conf
deploymentclient.conf
distsearch.conf
eventdiscoverer.conf
event_renderers.conf
eventtypes.conf
fields.conf
indexes.conf
inputs.conf
instance.cfg.conf
limits.conf
literals.conf
macros.conf
multikv.conf
outputs.conf
pdf_server.conf
procmon-filters.conf
props.conf
pubsub.conf
restmap.conf
savedsearches.conf
searchbnf.conf
segmenters.conf
server.conf
serverclass.conf
serverclass.seed.xml.conf
source-classifier.conf
sourcetypes.conf
tags.conf
tenants.conf
times.conf
transactiontypes.conf
transforms.conf
user-seed.conf
viewstates.conf
web.conf
wmi.conf
workflow_actions.conf
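
To give a feel for what these files contain, here is a minimal props.conf stanza for a hypothetical sourcetype; the sourcetype name and every attribute value are assumptions chosen for illustration.

# props.conf -- hypothetical sourcetype, illustrative values only
[custom:app:log]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 19

Each .conf file follows the same stanza/attribute layout, and settings in an app's local directory override those shipped in default.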

The Story of Splunk


Splunk is a powerful platform for analyzing machine data, data that machines
emit in great volumes but which is seldom used effectively. Machine
data is already important in the world of technology and is becoming
increasingly important in the world of business.

Splunk begins with indexing, which means gathering all the data
from diverse locations and combining it into centralized indexes.
Before Splunk, system administrators would have had to log in to many different machines to gain access to all the data using far less powerful tools.
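
As a small sketch of that gathering step, a monitor input points Splunk at a file or directory and routes what it reads into an index; the path, index, and sourcetype below are assumed examples.

# inputs.conf -- assumed file monitor input
[monitor:///var/log/messages]
index = os_logs
sourcetype = syslog

Once indexed, those events become searchable alongside data gathered from every other input.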

Using the indexes, Splunk can quickly search the logs from all
servers and hone in on when the problem occurred. With its speed,
scale, and usability, Splunk makes determining when a problem occurred
that much faster.

Splunk can then drill down into the time period when the problem
first occurred to determine its root cause. Alerts can then be created
to head the issue off in the future.
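
One way to express such an alert is a scheduled search in savedsearches.conf; this is a sketch under assumed names, with the index, threshold, schedule, and email address all invented for illustration.

# savedsearches.conf -- assumed scheduled alert
[Web errors spiking]
search = index=web_logs sourcetype=access_combined status>=500 | stats count
dispatch.earliest_time = -15m
dispatch.latest_time = now
enableSched = 1
cron_schedule = */15 * * * *
alert_type = number of events
alert_comparator = greater than
alert_threshold = 10
actions = email
action.email.to = ops@example.com

The same alert could equally be created through Splunk Web rather than by editing the file directly.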

Security analysts use Splunk to sniff out security vulnerabilities and attacks. System analysts use Splunk to discover inefficiencies
and bottlenecks in complex applications. Network analysts use
Splunk to find the cause of network outages and bandwidth bottlenecks.
Splunk does something that no other product can: efficiently capture and
analyze massive amounts of unstructured, time-series textual machine
data. Although IT departments generally start out using Splunk to solve
technically esoteric problems, they quickly gain insights valuable elsewhere
in their business.

The timestamp (_time) field is special because Splunk indexers use it to order events, enabling Splunk to efficiently retrieve events within a time range.
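
A small illustration of that time-range retrieval, with an assumed index name: the earliest and latest modifiers bound the search to a window, and the results can then be ordered by _time.

index=web_logs earliest=-24h latest=now
| sort 0 _time
| table _time host source sourcetype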
