Test Plan
Version 1.0
MD5 Team
Paul Cho
Jeff Gordy
Dana Stevenson
Wayne Fischer
Aaron Toren
Jorge Silva
Table of Contents
Executive Summary
Unit Tests
Overview
Task Unit Tests
Task 1.1 Retrieve Token From Header
Task 1.2 and 1.3 Validate Token
Task 1.3.1 Request 2FA
Task 1.3.2 Validate 2FA
Task 2.1 Log Status Event
Task 2.2 Send Event to Threat Determination
Task 3.1 Receive Threat Data
Task 3.2 Flag Threat Data
Task 3.3 Send Data to Reporting Mechanisms
Task 4.1 Receive Report Event
Task 4.2 Alert Response Event
Task 4.3 Execute Alert Action Event
Task 4.4 Log Alert Action Status
Task 5.1 Capture Session Details (login, IP, timestamp)
Task 5.2 Push Record to Fusion Engine
Task 6.1 Execute Report Query
Task 6.2 Produce Report Data
Task 6.3 Format and Delivery
Task 6.4 Log Batch Report Execution Status
Task 7.1 Retrieve Token From Header
Task 7.2 Validate Token
Task 7.3 Request 2FA
Task 7.4 Validate 2FA
Task 8.1 Log User Access
Task 8.2 Wait For Request
Task 8.3 Log User Request
Task 9.1 Retrieve Search Filters
Task 9.2 Retrieve Data from Fusion Engine
Task 9.3 Output Report
Task 9.4 Log Report
Functional Tests
Core Functions
Usability Functions
Accessibility Functions
Exception/Systematic Event Handling
Executive Summary
The Reports and Alerts Engine Test Plan is a comprehensive plan to test the features and functionality of the alerts and reports generated in the Supply Chain Risk Management (SCRM) system. While the information in this document is presented in a sequential waterfall format, part of the test plan uses agile testing, which is outlined in further detail in the Mitigation Plan section.
Unit testing of individual components and functions is conducted first, as the code is being developed. Once a unit is deemed functional, it is then tested for full functionality using black box methodology and sanity tests. After the system is deemed functional, it is fully regression tested, checking for any errors or defects that might have occurred during the integration process. Lastly, we perform Verification and Validation testing to verify that the code is ready and then validate it in the runtime environment.
The mitigation strategy to prevent defects and errors in the code revolves around an iterative approach (agile testing) that heavily weights risk identification, assessment, analysis, and the implementation of remediation procedures to minimize and eliminate risks and vulnerabilities. Earned Value Management (EVM) will also be included in the risk management practice so that planning managers can budget risk management costs more tightly while gaining insight into the risk management system's performance.
Testing is not limited to these methodologies, and this document should serve as a baseline rather than the definitive set of processes and procedures for testing the Reports and Alerts Engine, as the product will evolve and change with the SCRM system as a whole.
Unit Tests
Overview
The Reporting and Alerts Module uses Node.js, Express.js, and MuleSoft for the custom in-house software components and for the interaction between microservices. Unit tests are written using the Chai Assertion Library, a unit testing framework that supports behavior-driven development (BDD). Behavior-driven development allows chainable getters that improve the readability of assertions. Chai supports language chains including to, be, been, is, that, which, and, has, have, with, at, of, and same.
These getters allow us to use more natural, readable language in a test, as shown in the following assertion:
expect(tokenIsPresentInHeader).to.be.true;
Chai also supports other special chain elements. An example is the .not element, which can be placed anywhere in a chain; everything following the .not is negated. As an example, if we want to assert that a particular function called MD5Team() does not throw an exception, we can use the following Chai assertion:
expect(() => MD5Team()).to.not.throw();
This type of syntax allows for readable, behavior-based unit testing. Chai also supports traditional TDD-style assertion methods, including assert.equal(actual, expected), assert.notEqual(actual, expected), assert.isTrue(value), assert.isFalse(value), and others.
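As a concrete illustration, the following sketch contrasts the two styles on the same checks; the extractToken() helper and the header shape are hypothetical names used only for this example.

const { expect, assert } = require('chai');

// Hypothetical helper: pulls a bearer token out of the Authorization header, if present.
function extractToken(headers) {
  const auth = headers.authorization || '';
  return auth.startsWith('Bearer ') ? auth.slice(7) : null;
}

// BDD (expect) style: chainable getters read as natural language.
const token = extractToken({ authorization: 'Bearer abc123' });
expect(token).to.be.a('string');
expect(token).to.equal('abc123');
expect(extractToken({})).to.be.null;

// TDD (assert) style: the same checks with classic assertions.
assert.isString(token);
assert.equal(token, 'abc123');
assert.isNull(extractToken({}));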
The MD5 Team uses a standard test naming convention to quickly bring new team members up to speed and to allow a geographically diverse development team to maintain a unified code base. The method naming convention followed is MethodName_StateUnderTest_ExpectedBehavior(). As an example, the following two method names are acceptable for the user token extraction function tests:
● isAuthorizedUser_ValidToken_Pass()
● isAuthorizedUser_InvalidToken_Fail()
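A minimal sketch of how this convention looks in a Mocha test file; the isAuthorizedUser() function and its module path are assumptions made for this example:

const { expect } = require('chai');
// Hypothetical module under test; isAuthorizedUser() returns true only for a valid token.
const { isAuthorizedUser } = require('../src/auth');

describe('isAuthorizedUser', function () {
  it('isAuthorizedUser_ValidToken_Pass', function () {
    expect(isAuthorizedUser('valid-test-token')).to.be.true;
  });

  it('isAuthorizedUser_InvalidToken_Fail', function () {
    expect(isAuthorizedUser('expired-or-malformed-token')).to.be.false;
  });
});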
Functional Tests
The goal of functional testing is to validate the basic functionality of the system's core features and functions. Each function is tested with the prescribed test procedure, and the expected outcome is compared with the actual outcome.
Core Functions
Function | Test Procedure | Expected Outcome | Actual Outcome
Usability Functions
Function | Test Procedure | Expected Outcome | Actual Outcome
Accessibility Functions
Function: User login – A user is able to reach the login page using a standard web browser and is required to authenticate prior to access.
Test Procedure:
1. Obtain the URL used to reach the Reporting and Alerts Engine UI.
2. Obtain a valid login credential for the test environment.
3. Attempt to log in with the test credential using the following browsers:
a. Google Chrome
b. Firefox
c. Microsoft Internet Explorer
4. Record any unsuccessful attempts.
Expected Outcome: Login attempts are partially successful from all three browsers, and the user is asked for a second factor (refer to the next test).
Actual Outcome: (recorded at test execution)

Function: User logging – All user access events are captured in the audit log.
Test Procedure:
1. Request a user log from the system administrator.
2. Verify that all of the previous login attempts, page navigation, and report access are visible in the log and accurately recorded.
3. Using multiple browsers, load multiple pages simultaneously and ensure that the log reflects both access events.
4. Verify that the account unlock performed by the user administrator is captured in the log.
Expected Outcome:
1. All user access events are captured in the audit log, including all successful and failed login attempts.
2. All account administration events are captured in the log.
Actual Outcome: (recorded at test execution)
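Where practical, these manual checks can be backed by automated functional tests. The sketch below assumes a hypothetical Express app exported from ../src/app and a /login route that answers a valid first factor with a 2FA challenge; both names are illustrative, not part of this plan.

const request = require('supertest');
const { expect } = require('chai');
const app = require('../src/app'); // hypothetical Express application under test

describe('User login', function () {
  it('Login_ValidCredential_Returns2FAChallenge', async function () {
    const res = await request(app)
      .post('/login')
      .send({ username: 'test-user', password: 'test-pass' });
    // A valid first factor should not grant access outright;
    // the user is asked for a second factor instead.
    expect(res.status).to.equal(200);
    expect(res.body.challenge).to.equal('2fa-required');
  });

  it('Login_InvalidCredential_Fail', async function () {
    const res = await request(app)
      .post('/login')
      .send({ username: 'test-user', password: 'wrong' });
    expect(res.status).to.equal(401);
  });
});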
Regression Tests
Regression testing techniques generally involve prioritizing high-impact test cases, selecting test cases that are impacted by changes in the code, and, lastly, considering the retest-all condition.
In particular, we are concerned with safety hazards, delays in processing, and custom user-generated events (particularly those related to safety).
Unit test cases that fulfill this regression testing requirement are Task 2.x through Task 5.x. Special attention is given to Task 3.2, as this is the data flagging for routing throughout the rest of the alerts and reports system.
System and user authentication test cases also carry equal weight with the real-time alerts, as only authenticated events and authorized users should be in use in the system.
Unit test cases that fulfill this regression testing requirement are Tasks 1.1, 1.2, 1.3, 1.3.1, and 1.3.2, and Tasks 7.1, 7.2, 7.3, and 7.4.
Secondary high-impact consideration is given to report generation. Reports can be broken down into two categories, regulatory and audit, and two types, system generated and user generated. Unit test cases that fulfill this regression testing requirement are Task 6.x and Task 7.x through Task 9.x; a lightweight way to run these groups selectively is sketched below.
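If test descriptions embed the task identifiers, as in the naming convention above, Mocha's --grep filter gives a lightweight way to run only a prioritized group; the patterns below are illustrative and assume suite names such as 'Task 3.2 Flag Threat Data':

npx mocha --grep "Task 3.2"      # threat-flagging regression suite only
npx mocha --grep "Task (1|7)\."  # authentication suites (Tasks 1.x and 7.x)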
Reusable test cases are defined as cases that can be reused in regression testing cycles. The unit testing and functional testing cases for the Reports and Alerts Engine fall into this category, as well as any custom test cases defined in the future for new functionality and/or code modification verification.
Obsolete test cases are defined as test cases which will not be recycled in the future. Performance-fix and defect-fix test cases would be determined and then placed into this category.
Retest All
This test scenario involves rerunning all existing tests in the test bucket and should only be run if regression testing has confirmed a defect that could not be isolated to a specific code change.
● No code changes are allowed during the regression test phase; regression test code must be kept immune to developer changes.
● The database(s) used for regression testing must be isolated, and no database changes are allowed, so that the results of the regression test are produced in a controlled and constant environment.
Verification
The verification phase ensures that the code and testing meet the intended results and the tenets of the Reporting and Alerts Engine. Output analysis will be conducted on each of the tests, and verification will be conducted on Task Unit Tests (security and malformed data), Functionality Tests (customer experience), Regression Tests (fixes), and Test Case Selection (quality assurance). Tests will leverage tools that monitor application behavior, corruption, user privilege issues, and other critical areas such as security. Verification ensures that all potential issues, vulnerabilities, defects, and customer experience concerns are addressed. Anything that is identified will be reviewed, fixed, and then re-tested to ensure that quality is built into the Reporting and Alerts Engine and that it is capable of performing all intended functions for the customer. The subsequent Validation phase then confirms the verified code in the runtime environment.
Validation
Discrepancies are noted and compared to the requirements. If any are valid, they are compared to the contract and a change request is submitted to make the change to the code base. Once Change Control Board review occurs, the correction is released in a minor release version.
Validation confirms the following runtime requirements:
1. The Fusion Engine provides alert notifications and generates reports in real time.
2. Alerts are sent to endpoints within thirty seconds of detection (a soft real-time constraint; see the sketch after this list).
3. Key performance indicator changes are tracked and alerted upon when they exceed their constraints.
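A minimal sketch of how the thirty-second constraint in item 2 could be exercised; the subscribeToAlerts() and injectThreatEvent() test helpers are hypothetical and do not appear elsewhere in this plan.

const { expect } = require('chai');
// Hypothetical test helpers: subscribeToAlerts() resolves when an alert arrives,
// injectThreatEvent() feeds a synthetic threat into the system under test.
const { subscribeToAlerts, injectThreatEvent } = require('./helpers/alerts');

describe('Alert delivery latency', function () {
  it('AlertDelivery_ThreatDetected_Within30Seconds', async function () {
    this.timeout(35000); // allow the soft real-time window plus margin
    const received = subscribeToAlerts();
    const sentAt = Date.now();
    await injectThreatEvent({ severity: 'high' });
    await received;
    const elapsedMs = Date.now() - sentAt;
    expect(elapsedMs).to.be.below(30000);
  });
});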
Test data will be generated and fed into the system to review system performance and to confirm that customers accept the system functions. Any feedback will be reviewed by architects and compared to the requirements. Any requested changes that lie outside the scope of the original requirements may be formally documented and are subject to additional contracting costs and scheduling upon formal acceptance by the company and the government contracting office.
Usability Test
End-users will follow user manual procedures covering alert creation, alert review, report creation, report analysis, account creation, and role-based assignments and modifications, as well as REST API usability. During the usability test, end-users are encouraged to provide feedback on user-interface configuration; input buttons, links, and the placement of active system elements; and overall system design. If design elements are found not to meet the requirement specification, they will be refactored and are subject to regression testing. Additional element changes are subject to contract negotiation between the company and the government.
Mitigation Plan
Our plan is to develop an actionable risk mitigation strategy that includes an iterative approach and methodology, with a set of defined handling options and with methods and procedures for risk monitoring, reduction, and remediation. For the Supply Chain Risk Management (SCRM) system, the Reports and Alerts Engine will have ongoing monitoring of vulnerabilities, software errors, and other risks identified during operations. The identified risks will result in actionable follow-up items once reviewed within our risk model.
Vulnerabilities
● Insecure storage of data at rest that is not encrypted
● Security misconfiguration for access control
● Source data compromised before it reaches the Fusion Engine (data in transit)
● Regulatory reports delivered unencrypted via IoT or mobile devices
● Remote administration
● Site injection used to manipulate the data
Software Errors
● Improper neutralization of SQL database commands or OS commands
● Programming language bugs or interoperability failures in the input handling for reports and alerts (cross-site scripting – XSS)
● Improper validation of an array index (index out of bounds; see the sketch after this list)
● Database objects with no security on referenced objects, exposing them externally to filesystem path traversal (missing authorization)
● Broken authentication and session failure due to interrupted network connections or latency timeouts
● Bad data formats, including integer overflow
● Incorrect report display formatting
● Alerts with missing initialization of variables
● Concurrent execution using a shared resource without correct synchronization
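Several of these error classes can be caught early with negative unit tests. The sketch below shows the pattern for the array-index and bad-data cases; the formatReportRow() function is a hypothetical stand-in defined only for this example.

const { expect } = require('chai');

// Hypothetical function under test: returns the formatted report row
// at the given index and rejects bad input loudly.
function formatReportRow(rows, index) {
  if (!Number.isInteger(index) || index < 0 || index >= rows.length) {
    throw new RangeError('report row index out of bounds');
  }
  return String(rows[index]).trim();
}

describe('formatReportRow', function () {
  it('FormatReportRow_IndexOutOfBounds_Throws', function () {
    expect(() => formatReportRow(['a', 'b'], 5)).to.throw(RangeError);
  });

  it('FormatReportRow_NegativeIndex_Throws', function () {
    expect(() => formatReportRow(['a', 'b'], -1)).to.throw(RangeError);
  });

  it('FormatReportRow_ValidIndex_Pass', function () {
    expect(formatReportRow([' a ', 'b'], 0)).to.equal('a');
  });
});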
Figure 1: Iterative Risk Management Model for Vulnerabilities and Software Errors
Handling Options:
● Assume/Accept: Acknowledge that the risk exists and make a deliberate decision to accept it without using capabilities to control it. (This requires Program Leader approval.)
● Avoid: Adjust the program requirements or constraints to reduce the risk, for example through a change in funding or in the technical requirements. (This requires Program Leader approval.)
● Control: Implement actions to minimize the impact of the risk.
● Transfer: Reassign the accountability, responsibility, and authority to another organization or stakeholder that is capable of accepting and managing the risk.
● Watch/Monitor: Continuously monitor the environment for adjustments or changes that would directly influence the identified risks.