Copyright 2015
Pegasystems Inc., Cambridge, MA
All rights reserved.
This document describes products and services of Pegasystems Inc. It may contain trade secrets and proprietary information.
The document and product are protected by copyright and distributed under licenses restricting their use, copying,
distribution, or transmittal in any form without prior written authorization of Pegasystems Inc.
This document is current as of the date of publication only. Changes in the document may be made from time to time at the
discretion of Pegasystems. This document remains the property of Pegasystems and must be returned to it upon request.
This document does not imply any commitment to offer or deliver the products or services provided.
This document may include references to Pegasystems product features that have not been licensed by your company. If you
have questions about whether a particular capability is included in your installation, please consult your Pegasystems service
consultant.
PegaRULES, Process Commander, SmartBPM and the Pegasystems logo are trademarks or registered trademarks of
Pegasystems Inc. All other product names, logos and symbols may be registered trademarks of their respective owners.
Although Pegasystems Inc. strives for accuracy in its publications, any publication may contain inaccuracies or typographical
errors. This document or Help System could contain technical inaccuracies or typographical errors. Changes are periodically
added to the information herein. Pegasystems Inc. may make improvements and/or changes in the information described
herein at any time.
This document is the property of:
Pegasystems Inc.
1 Rogers Street
Cambridge, MA 02142
Phone: (617) 374-9600
Fax: (617) 374-9620
www.pega.com
Document Name: SSA_716_StudentGuide_20150211.pdf
Date: 20150211
Table of Contents
Orientation
Application Design
    Starting an Application
    Introduction to Rulesets
    Rule Resolution
    Circumstancing
Case Design
    Case Hierarchy
    Screen Flows
    Work Status
    Work Parties
Data Model
User Experience
    Introduction to UI Architecture
    Styling Applications
    Data Transforms
    Activities
    Pega Pulse
    Case Attachments
    Routing
    Correspondence
    Tickets
    Declarative Processing
    Declarative Rules
    Automating Decisioning
    Validation
Reporting
    Configuring Reports
Integration
    Introduction to Integration
Architecture
Administration
    System Debugging
    Migrating an Application
This lesson group includes the following lessons:
Welcome
Completing the Exercises
Username: admin@sae
Password: rules
The exercise application is called HRServices and has the Candidate selection and Onboarding
processes for new employees. Use the Case Designer as a starting point to familiarize yourself with the
processes.
The purpose of this application is to serve as the basis for the exercises. The application is incomplete
and contains components in draft mode. It was not built according to best practices and intentionally
contains design flaws that you will correct as you complete the exercises.
The exercise system has an open ruleset version, HRServices:01-01-05, for you to enter your rules. This is the default choice when you save rules.
When you need to modify an existing rule, save a copy into HRServices:01-01-05. This can be done by using the Save As menu and then clicking Specialize by class or ruleset.
When working in the Case Designer, you will notice a message at the top prompting you to create a copy. Click the copy button to copy the rule into 01-01-05.
3. Add the solution branch to the application record using the Add Branch button. The name of the
branch is the same as the name of the solution file. For example, if the name of the solution file is
CaseLifeCycle.zip the name of the branch is CaseLifeCycle.
Introduction to RuleSets
Rule Resolution
Circumstancing
Create an application
Extend an application
We should always create a framework if we are not sure whether our application will require reuse at the application level.
We use the Framework Only option when we want to start off by creating a framework and later add
implementations using the Implementation Only options.
Or we can use the Framework and Implementation option if we want to create both the framework and
an implementation straight away. Additional implementations can be added later using the
Implementation Only option.
If we have a framework, start by configuring it. Make sure the framework runs on its own. If we have only
one implementation then build out the application in the framework and create an implementation that
contains no application logic but simply calls the framework.
If we have more than one implementation, then build out a default application in the framework and
customize it as required in the implementations.
Create an Application
Let's start by looking at how an application can be created using Application Express. As an example, we'll show how the exercise application was created.
In the first step of Application Express essential application settings, such as application structure and
organization, are entered.
We recommend that the application name be fairly short. If the name is longer than 10 characters, it is truncated to 8 characters.
A truncated name can be changed, but its maximum length is limited to 14 characters.
The name of the exercise application is HRMS.
We don't want to build on a framework, so we select PegaRULES to build on base PRPC. The exercise application is not meant to be extended and will not require reuse at the application layer, so we select Implementation Only.
It is a best practice to keep the organization short. The organization gets truncated to four characters if it
is longer.
The truncated name can be updated, but is limited to 6 characters.
For the exercise application we want to create an organization called LP7. We're ready to move to step two now.
The business objective for the exercise application is to provide a platform for the exercises.
In the third step we add the case types.
The case design is set up after the application structure has been created.
In step four we can add data types. A data type can either be created in the implementation or the
enterprise layer.
In the exercise application we want to add data types for employee and address to the enterprise reuse
layer.
Let's have a look at the rules and data instances to be generated using the preview.
The organization reuse layer contains the top-level class, the base data class, the data types added for the enterprise reuse layer, and the organization integration class. Note that the trailing dash for the Data and Int classes has been dropped in Pega 7.
The implementation layer looks almost the same but contains the application
and cases in addition. The Other section contains organizational data instances created to support the
application structure.
An organization with division and unit will be created, and sample operators with access groups and roles as well as a default work group and workbasket. Let's close the preview and create the application.
Extend an Application
In this section we'll look at how we can extend a framework or application. Use the Application Overview
landing page to explore and better understand the framework or application we want to extend. Examine
the case types and DCO assets, such as requirements and specifications. Also, run through the case
types to use in the implementation.
We want to extend the Purchase Application framework with implementations for each geographical
region.
We start off by creating an implementation for North America.
The built on application is the Purchase Application.
We only want to create an implementation so we select Implementation Only.
The organization is Pco.com
The business objectives from the built on application are shown. We can edit these to fit our
implementation.
Select case types to include in the implementation layer. New case types can easily be added after the application structure has been created. Only top-level case types and case types that are available for manual creation are shown in the list.
If we are extending an old framework with work types that inherit from Work-, Application Express adds a case type rule. This provides partial case support, with the main limitation being that the case can't have subcases.
Select the data types we want to use in the implementation layer. Data types can easily be included after
the application structure has been created.
Let's have a look at the preview.
The Organization Reuse Layer is inherited from the built on application.
We can see that the case types were added to the implementation layer.
Click create to create the application.
We've now seen how we can use Application Express to extend an existing application. Let's have a look at the advanced configuration options.
We can specify the project methodology to use. This setting can easily be updated in the application
record after the application has been generated.
It is also possible to provide a framework name and version. By default the framework name is the
application name followed by FW and the version is 01.01.01.
In the class structure we can provide a name for the class group in addition to the application layer.
In the organizational settings we can specify a division and unit in addition to the organization.
Selecting Generate Reusable Division Records creates a division layer in the class structure.
If a reusable division layer is used a reusable unit layer can be introduced by selecting Generate
Reusable Unit Records.
The class name is limited to four characters and is truncated if it is longer.
We select Generate Test Operators if we want sample operators. This option is selected by default.
Let's have a look at the preview.
We can see the division and unit layers.
Note that there are separate RuleSets for the division and unit layers.
Wrap-Up
Thanks for attending the lesson on Starting a Pega 7 Application. I hope this lesson provided you with a
good understanding of the Application Express, which has been enhanced and replaces the Application
Accelerator in Pega 7.
The idea in Pega 7 was to provide a simple wizard that requires few decisions upfront, in combination with tools that make it easy to manage the application once it has been created.
Introduction to Rulesets
A ruleset defines a major subset of rules, because all rules belong to a ruleset. A ruleset is a major aspect in access control, grouping interrelated rules, managing rules, and moving applications. The term ruleset sometimes informally refers to the content of the ruleset.
This lesson covers the fundamentals of rulesets and the main features and best practices.
At the end of this lesson, you should be able to:
Describe a Ruleset
Utilize the Check-out feature
Explain Ruleset prerequisites
Explain Ruleset lists
Describe Best Practices
Manage Rulesets
Describe a Ruleset
A ruleset is a container or an organizational construct that is used to identify, store, and manage a set of rules that define an application or a major part of an application. Every instance of every rule type belongs to a ruleset. A ruleset's primary function is to group rules together for deployment.
Operators receive access to an application through an access group, which references the application.
An application is comprised of a list of rulesets. The application rulesets are typically created by the New Application Wizard (also known as Application Express).
The purchasing application displayed here has an integration ruleset (SupplierIntegration) in addition to
the implementation and organizational rulesets.
Rulesets have versions. The version has a three-part key, for example 01-02-03, where 01 is the major version, 02 is the minor version, and 03 is the patch version.
References to version numbers sometimes omit the patch number. A reference to version 03-07, for
example, means that all the versions between 03-07-01 and 03-07-99 are included.
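The three-part key lends itself to a simple tuple comparison. The following Python sketch is our own illustration (the function names are not Pega APIs) of matching a full version against a two-part reference such as 03-07:

```python
# Illustrative sketch (not Pega APIs): parse ruleset version keys of the
# form "MM-mm-pp" and test whether a version falls within a two-part
# reference such as "03-07" (i.e. 03-07-01 through 03-07-99).

def parse_version(key):
    """Split '01-02-03' into the tuple (1, 2, 3)."""
    return tuple(int(part) for part in key.split("-"))

def in_reference(version, reference):
    """True if a full version matches a two-part major-minor reference."""
    major, minor = (int(p) for p in reference.split("-"))
    v_major, v_minor, _patch = parse_version(version)
    return (v_major, v_minor) == (major, minor)

print(in_reference("03-07-42", "03-07"))  # True
print(in_reference("03-08-01", "03-07"))  # False
```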
A few rule types belong to a ruleset but not a ruleset version:
Application
Class
Access Deny
Library
Enable Check-out
Select Use check-out? on the Security tab on the ruleset to enable the check-out facility.
11
Operators need to have the Allow Rule Check out selected on the Security tab to be able to update
rules in rulesets that require check out.
Check-out
The check-out button appears when the ruleset the rule belongs to is unlocked, uses check-out and the
operator is allowed to check out rules.
If a developer checks out a rule no one else can check-out that rule and make changes to it until the
developer checks it in.
The check out button does not display if the rule is checked out by someone else. We can click the lock
icon to see who has checked out the rule.
If a rule is not available for check-out because it is checked out by someone else or because it is in a
locked ruleset version the private edit button appears instead of the check out button.
Private edit is a special case of the standard check out. It allows a developer to prototype or test changes
to a rule that is not available for standard check out.
To be able to do a private edit a developer needs to have the pxAllowPrivateCheckout privilege. The
standard access role PegaRULES:SysAdm4 provides this privilege.
When we check out a rule, we are making a private copy of the rule in our personal ruleset. The same is true for private edits.
We can view our checkouts and private edits in the Private Explorer or by using the checkmark icon in the
header.
Check-in
Checking in a checked out rule replaces the original base rule. A comment describing the changes to the rule is required. The check-in comments can be viewed on the rule's History tab.
Private edits can also be checked in. We need to select an unlocked ruleset version in which we want to check in the private edit.
Note that private edits cannot be checked in in bulk.
Enforcing prerequisites during development helps ensure that rule references across rulesets are correct
during rule resolution at runtime. For example, assume that during development we create and test a rule
in ruleset Alpha:01-01-01 that depends upon a rule in ruleset Beta:01-01-01. If, at runtime in a production
system, a user executes the rule in Alpha:01-01-01 that references the rule in ruleset Beta:01-01-01, but
ruleset Beta:01-01-01 is not present on the production system (or is not in the user's ruleset list), the
Alpha rule could fail or possibly run and produce an incorrect result.
Rule validation is performed against the prerequisites. When we save a rule, the system assembles a
complete required ruleset version list from:
All lower-numbered ruleset versions of this ruleset and the rulesets included in the same major
version
Therefore, if you save a rule into Delta:01-01-01 which depends on Alpha:02-01-01, and Alpha:02-01-01
depends on Beta:02-01-01, only list Alpha:02-01-01 as a required ruleset for Delta:01-01-01. There's no
need to list Beta:02-01-01 because it is already listed in Alpha:02-01-01.
Note that rules in versions below the major version are not visible to rule resolution. For example, if you list Alpha:02-01-01, which depends on Beta:02-01-01, Alpha:02-01-01 won't see rules in Beta:01-XX-XX.
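The Delta/Alpha/Beta example amounts to a transitive closure over declared prerequisites. A minimal Python sketch with hypothetical data structures (Pega's actual assembly logic is internal to the platform):

```python
# Illustrative sketch (not the Pega implementation): assemble the complete
# required ruleset version list by following declared prerequisites
# transitively, as in the Delta -> Alpha -> Beta example above.

PREREQUISITES = {          # declared prerequisites per ruleset version
    "Delta:01-01-01": ["Alpha:02-01-01"],
    "Alpha:02-01-01": ["Beta:02-01-01"],
    "Beta:02-01-01": [],
}

def required_rulesets(ruleset_version):
    """Collect every ruleset version reachable through prerequisites."""
    required, stack = set(), [ruleset_version]
    while stack:
        current = stack.pop()
        for prereq in PREREQUISITES.get(current, []):
            if prereq not in required:
                required.add(prereq)
                stack.append(prereq)
    return required

# Beta is picked up through Alpha without being listed on Delta directly:
print(sorted(required_rulesets("Delta:01-01-01")))
# ['Alpha:02-01-01', 'Beta:02-01-01']
```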
If your ruleset only depends on the PRPC product, enter the Pega-ProcessCommander ruleset as a prerequisite.
The product includes a 99 patch version of the Pega-ProcessCommander ruleset. Use that version as a prerequisite in your ruleset to avoid having to update the ruleset after product updates.
The Pega-ProcessCommander ruleset lists all product rulesets, so we don't need to list any product rulesets below Pega-ProcessCommander.
The prerequisite information is also validated when importing rulesets. A ruleset can only be imported to a
target system that already contains the ruleset and versions that are listed in its prerequisites or if the
prerequisite rulesets are also contained in the import file.
The order of the rulesets is important as it is used by the rule resolution algorithm. We generally refer to
the rulesets with higher precedence as being on top of those with lower precedence.
The list is assembled during login. The process starts by finding the versioned application rule referenced
on the access group of the operator. Note that in rare configurations the access group can actually come
from the requestor definition, organization or division record.
For most applications, the ruleset list is primarily comprised of rulesets referenced on the application form.
The built-on applications are recursively processed until the PegaRULES application is found.
The applications are then processed bottom-up, adding each application's ruleset list on top of the previously added rulesets. For example, if application A has two rulesets and is built on application B, which has two rulesets and is in turn built on PegaRULES, we end up with a ruleset stack with the rulesets of application A on top of those of application B, on top of the Pega rulesets.
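The bottom-up assembly described above can be sketched as a simple recursion. The application names and rulesets below are hypothetical, and this sketch ignores the branch, production, and personal rulesets discussed elsewhere in this lesson:

```python
# Illustrative sketch (hypothetical names, not Pega APIs): build the
# ruleset stack by walking built-on applications recursively and placing
# each application's rulesets on top of those of the application beneath.

APPLICATIONS = {
    "A": {"rulesets": ["A:01-01", "A_Int:01-01"], "built_on": "B"},
    "B": {"rulesets": ["B:02-01", "B_Int:02-01"], "built_on": "PegaRULES"},
    "PegaRULES": {"rulesets": ["Pega-ProcessCommander:07-10"], "built_on": None},
}

def ruleset_stack(app_name):
    """Return rulesets ordered top (highest precedence) to bottom."""
    app = APPLICATIONS[app_name]
    below = ruleset_stack(app["built_on"]) if app["built_on"] else []
    return app["rulesets"] + below  # this app's rulesets go on top

print(ruleset_stack("A"))
# ['A:01-01', 'A_Int:01-01', 'B:02-01', 'B_Int:02-01', 'Pega-ProcessCommander:07-10']
```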
There are a few situations in which rulesets are added on top of the standard ruleset stack. If the application has branches, the corresponding branch rulesets are added on top of the stack. Next, if there are production rulesets defined on the application and the operator's access group, they are added on top of the stack. Finally, if the operator is allowed to check out rules, a personal ruleset is added at the very top of the list. The personal ruleset has the name of the operator ID.
We can use the Lock and Roll feature to lock and roll (increment) versions in a single step. Before using Lock and Roll we need to make sure that there aren't any rules checked out.
If the last release built was promoted to testing or production, it might be worth skipping a few patch numbers, reserving those for emergency fixes.
For example, consider the following situation. Ruleset version 01-02-04 was promoted to QA and then
production. The next release with a few new features is planned to go into production in two weeks. The
development of the next release is done in 01-02-10, reserving 01-02-05 to 01-02-09 for emergency fixes.
After the release went into production, two defects were found. The decision was made to push the fix for one defect immediately, but the other required some additional testing. The first defect was fixed in 01-02-05 and the second one in 01-02-06. This is a snapshot of the environments after the first fix was promoted to production while the second fix is still being tested in QA.
Manage Rulesets
There are several tools to help us manage rulesets and ruleset versions. The ruleset refactoring tools are
found under DesignerStudio > System > Tools > Refactor RuleSets.
Use the Copy/Merge Ruleset tool to copy rulesets and ruleset versions into new versions or to merge
several ruleset versions into one version.
The Delete a Ruleset tool allows us to delete an entire ruleset or a specific version of a ruleset. Consider the effect before deleting rulesets that have been promoted to production.
The Skim a Ruleset tool collects the highest version of every rule in the ruleset and copies them to a
new major or minor version of that ruleset on the same system, with patch version 01. Consider a skim for
each major application release.
Use the Rulebase Compare tool to identify differences in the rules on two different systems.
Conclusion
In this lesson, we looked at rulesets. A ruleset is a fundamental building block of an application. We
looked at their purpose and impact on the application as well as some features and best practices.
Now, you should understand what a ruleset is and how rulesets are configured. You should also know
how rulesets are used during development vs runtime. Finally, you should be able to enable and use the
checkout feature and utilize ruleset best practices.
Setup a Team
Create a Branch
Merge Branches
Setup a Team
First we need to create the team application, which is built on the main application. The team application is typically named in a way that reflects the base application, team name, or focus of the team.
For example, we could call the application Purchasing_TeamAlpha or Purchasing_InventorySelection.
The built-on application and version need to be updated to reference the main base application. The Include parent checkbox should be selected. We do not configure any application rulesets at this stage.
Create an access group that references the team application. The typical name of the access group uses
the application name plus the standard user type, in our case
Purchasing_InventorySelection:Administrators.
Create a Branch
The development branches are created using the Add Branch button on the Definition tab of the
application rule.
A branch name must start with a letter and contain only alphanumeric and dash characters, up to a maximum of 16 characters. It is a best practice to relate the branch name to the planned development to be done in that branch. In this case we'll call it InventorySel.
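The naming rule can be captured in a short validation sketch. The regex below is our own illustration of the stated constraints, not a published Pega validator:

```python
import re

# Illustrative sketch of the branch-naming rule described above: the name
# starts with a letter, contains only alphanumeric and dash characters,
# and is at most 16 characters long.
BRANCH_NAME = re.compile(r"[A-Za-z][A-Za-z0-9-]{0,15}")

def is_valid_branch_name(name):
    """True if the name satisfies the stated branch-naming constraints."""
    return BRANCH_NAME.fullmatch(name) is not None

print(is_valid_branch_name("InventorySel"))  # True
print(is_valid_branch_name("9Lives"))        # False: starts with a digit
print(is_valid_branch_name("A" * 17))        # False: longer than 16 characters
```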
The check out feature (on the Security tab) is enabled by default for branch rulesets, since it is a best practice to use check-out when doing team development.
We can use the Actions menu to delete a branch from the system or remove it from the application. Deleting the branch from the system will delete all the rulesets and rules belonging to the branch.
As a team works with rules from the base ruleset in its branches, other teams are working on rules from
the base rulesets in their branches. If a rule has been changed in the base ruleset since it was copied to
the branch a conflict message appears as shown below.
If the rule is copied into two different branches a merge warning message appears since the rules will
need to be merged later.
To create a new rule that is needed to implement the enhancement, save it directly in the branch ruleset
that is branched from the base ruleset that will contain the new rules when development is complete.
Merge Branches
The branch's contents are usually merged into the base rulesets when the development in the branch is complete and stable. Start the merge process by selecting Merge from the Actions menu.
All rules in the branch rulesets need to be checked-in to complete the merge. It is a best practice to lock
the branch rulesets before merging the branch. Use the lock option in the actions menu to lock the branch
ruleset. We can use the package option if we want to move the branch to another environment.
Select Create New Version to create a new version of the base ruleset for the branch. Alternatively select
the ruleset version you want to merge into.
Click on the conflicts and warnings to open the notices window that displays each rule that the wizard has identified as having a conflict or warning.
If a rule in the branch ruleset has been updated in the base ruleset, a merge conflict is shown. It is not possible to complete a merge if such conflicts exist. The branch and base versions of the rules need to be merged manually before proceeding.
More than one team might have branched the same rulesets and introduced changes to rules that conflict with each other. These types of conflicts show up as warnings, but do not prevent a merge. The team merging their branch first can do so without any problems since the base rule has not changed. However, after the first team has merged their changes the second team will get a conflict that must be resolved, since the base rule was updated by the first team.
Click the compare button to view the differences between the branch copy of the rule and the one in the base RuleSet. If the changes are minor, open the branch rule and apply the same changes to it so that it matches the base rule. If the changes are more complex, or if they conflict with the logic in the branch rule, contact the individual who modified the base rule and negotiate what the final set of changes should be and the procedure for testing them.
When the changes to the branch rule are completed and tested, select Conflict resolved merge this rule.
Click OK when all conflicts reported in the notices window are marked resolved and return to the Merge Branch Rulesets wizard to continue with the merge process.
We have the option to keep the source rules and rulesets after the merge. It is a best practice to provide a password to lock the target ruleset after the merge. By default, the branch and branch rulesets are deleted after a successful merge.
Conclusion
In this lesson, we looked at the branching feature that supports parallel development in a shared rule
base. This is a powerful feature that is often used for medium and large sized projects with several
development teams.
Now, you should understand how to set up applications and access groups for development teams. You should also know how to develop with and merge branches. Finally, you should understand common issues in parallel development and how to address them.
Rule Resolution
Welcome to the lesson on Rule Resolution. This lesson covers the Rule Resolution process, which is the
backbone of how the system ensures the correct rules are executed at the correct time.
At the end of this lesson, you should be able to:
c. Circumstance
d. Circumstance Date
e. Date/Time
Remove all candidates that are withdrawn or hidden by other withdrawn candidates. Select the first
default rule encountered and discard the candidates in the remainder of the list.
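These discard steps can be sketched in a few lines. The sketch below is a simplified reading of our own, not the engine code; it assumes the candidates are already sorted by precedence, and it simplifies the handling of rules hidden by withdrawn candidates:

```python
# Simplified sketch (not the full algorithm): given candidates already
# sorted by precedence, skip unavailable and withdrawn entries, then keep
# everything up to and including the first default (non-circumstanced)
# rule and discard the remainder of the list.

def trim_candidates(candidates):
    kept = []
    for cand in candidates:
        if cand["availability"] == "No":
            continue                      # unavailable: skip entirely
        if cand["availability"] == "Withdrawn":
            continue                      # withdrawn: simplified handling
        kept.append(cand)
        if cand.get("qualifier") is None:
            break                         # first default rule ends the list
    return kept

candidates = [
    {"version": "02-01-07", "availability": "No", "qualifier": None},
    {"version": "02-01-05", "availability": "Yes", "qualifier": "Circumstance"},
    {"version": "02-01-05", "availability": "Yes", "qualifier": None},
    {"version": "02-01-01", "availability": "Yes", "qualifier": None},
]
print([c["version"] for c in trim_candidates(candidates)])
# ['02-01-05', '02-01-05'] -- the circumstanced rule plus the first default
```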
Example:
Let's walk through how the system finds the section CreateRequest shown below while performing a PurchaseRequest:
[CreateRequestSection.png]
First the system checks the cache for a previous Rule Resolution of the Section rule CreateRequest that
used the same parameters as this current request. Since this is our first time running this rule, the rule is
not found in the cache so the system proceeds to the next step.
Example:
The system needs to find all instances of a Rule-HTML-Section named CreateRequest. It doesn't do any other filtering at this time, which still provides us a pretty lengthy list:
Class | Ruleset | Version | Availability | Qualifier
ADV-Purchasing-Work-PurchaseRequest | Purchasing | 02-02-10 | No |
ADV-Purchasing-Work-PurchaseRequest | Purchasing | 02-02-01 | Yes |
ADV-Purchasing-Work-PurchaseRequest | Purchasing | 02-01-10 | Withdrawn |
ADV-Purchasing-Work-PurchaseRequest | Purchasing | 02-01-05 | Yes |
ADV-Purchasing-Work-PurchaseRequest | Purchasing | 02-01-01 | Yes |
ADV-Purchasing-Work-PurchaseRequest | Purchasing | 01-01-01 | Blocked |
ADV-Purchasing-Work-PurchaseOrder | Purchasing | 02-01-01 | Yes |
ADV-Purchasing-Work-PurchaseOrder | Purchasing | 01-01-01 | Yes |
ADV-Purchasing-Work | Purchasing | 02-01-07 | No |
ADV-Purchasing-Work | Purchasing | 02-01-05 | Yes | Yes (Circumstance)
ADV-Purchasing-Work | Purchasing | 02-01-05 | Yes | Yes (Date)
ADV-Purchasing-Work | Purchasing | 02-01-05 | Yes |
ADV-Purchasing-Work | Purchasing | 02-01-01 | Yes |
ADV | Purchasing | 01-01-01 | Yes |
ADV | ADV | 03-01-01 | Yes |
ADV | ADV | 02-10-01 | Yes | Yes (Date)
ADV | ADV | 02-10-01 | Yes |
ADV | ADV | 02-01-01 | Yes |
ADV | ADV | 01-01-01 | Yes |
XYZ-Quoting-Work-EnterQuote | Quoting | 01-01-06 | Yes |
XYZ-Quoting-Work-EnterQuote | Quoting | 01-01-01 | No |
XYZ-Quoting-Work | Quoting | 01-01-01 | Yes |
XYZ | XYZ | 01-01-01 | Yes |
Example:
Continuing with our example above, there are only three candidates that have their Availability set to No, as shown in the Availability column below.
Class | RuleSet | Version | Availability | Qualifier
ADV-Purchasing-Work-PurchaseRequest | Purchasing | 02-02-10 | No |
ADV-Purchasing-Work-PurchaseRequest | Purchasing | 02-02-01 | Yes |
ADV-Purchasing-Work-PurchaseRequest | Purchasing | 02-01-10 | Withdrawn |
ADV-Purchasing-Work-PurchaseRequest | Purchasing | 02-01-05 | Yes |
ADV-Purchasing-Work-PurchaseRequest | Purchasing | 02-01-01 | Yes |
ADV-Purchasing-Work-PurchaseRequest | Purchasing | 01-01-01 | Blocked |
ADV-Purchasing-Work-PurchaseOrder | Purchasing | 02-01-01 | Yes |
ADV-Purchasing-Work-PurchaseOrder | Purchasing | 01-01-01 | Yes |
ADV-Purchasing-Work | Purchasing | 02-01-07 | No |
ADV-Purchasing-Work | Purchasing | 02-01-05 | Yes | Yes (Circumstance)
ADV-Purchasing-Work | Purchasing | 02-01-05 | Yes | Yes (Date)
ADV-Purchasing-Work | Purchasing | 02-01-05 | Yes |
ADV-Purchasing-Work | Purchasing | 02-01-01 | Yes |
ADV | Purchasing | 01-01-01 | Yes |
ADV | ADV | 03-01-01 | Yes |
ADV | ADV | 02-10-01 | Yes | Yes (Date)
ADV | ADV | 02-10-01 | Yes |
ADV | ADV | 02-01-01 | Yes |
ADV | ADV | 01-01-01 | Yes |
XYZ-Quoting-Work-EnterQuote | Quoting | 01-01-06 | Yes |
XYZ-Quoting-Work-EnterQuote | Quoting | 01-01-01 | No |
XYZ-Quoting-Work | Quoting | 01-01-01 | Yes |
XYZ | XYZ | 01-01-01 | Yes |
Example:
In our example, our operator's ruleset list is defined as:
Purchasing: 02-01
ADV: 03-01
This removes eleven candidates from our list, marked with * below:
Class | Ruleset | Version | Availability | Qualifier
ADV-Purchasing-Work-PurchaseRequest | Purchasing | 02-02-01 | Yes | *
ADV-Purchasing-Work-PurchaseRequest | Purchasing | 02-01-10 | Withdrawn |
ADV-Purchasing-Work-PurchaseRequest | Purchasing | 02-01-05 | Yes |
ADV-Purchasing-Work-PurchaseRequest | Purchasing | 02-01-01 | Yes |
ADV-Purchasing-Work-PurchaseRequest | Purchasing | 01-01-01 | Blocked | *
ADV-Purchasing-Work-PurchaseOrder | Purchasing | 02-01-01 | Yes |
ADV-Purchasing-Work-PurchaseOrder | Purchasing | 01-01-01 | Yes | *
ADV-Purchasing-Work | Purchasing | 02-01-05 | Yes | Yes (Circumstance)
ADV-Purchasing-Work | Purchasing | 02-01-05 | Yes | Yes (Date)
ADV-Purchasing-Work | Purchasing | 02-01-05 | Yes |
ADV-Purchasing-Work | Purchasing | 02-01-01 | Yes |
ADV | Purchasing | 01-01-01 | Yes | *
ADV | ADV | 03-01-01 | Yes |
ADV | ADV | 02-10-01 | Yes | Yes (Date) *
ADV | ADV | 02-10-01 | Yes | *
ADV | ADV | 02-01-01 | Yes | *
ADV | ADV | 01-01-01 | Yes | *
XYZ-Quoting-Work-EnterQuote | Quoting | 01-01-06 | Yes | *
XYZ-Quoting-Work | Quoting | 01-01-01 | Yes | *
XYZ | XYZ | 01-01-01 | Yes | *
(* = removed in this step)
The three candidates from XYZ and Quoting are removed because their rulesets are not defined in our
ruleset stack. This makes sense because they belong to a completely different application.
Within Purchasing, the three candidates from the 01-01-01 versions have been removed because they
don't match our current Major version of 02. The candidate from 02-02-01 matches the current Major
version, but is still removed since its Minor version of 02 is higher than our current Minor version of 01.
Within ADV, only one candidate matched our current Major version of 03. All the others have been
discarded.
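The availability and ruleset-stack filtering described above can be sketched as a simple filter. This is an illustrative sketch only, not PRPC's actual implementation; the candidate fields and the stack format are assumptions invented for the example:

```python
# Illustrative sketch: drop unavailable candidates, then drop candidates whose
# ruleset version is not covered by the operator's ruleset stack.
candidates = [
    {"cls": "ADV-Purchasing-Work", "ruleset": "Purchasing", "version": "02-01-05", "availability": "Yes"},
    {"cls": "ADV-Purchasing-Work", "ruleset": "Purchasing", "version": "02-01-07", "availability": "No"},
    {"cls": "XYZ-Quoting-Work",    "ruleset": "Quoting",    "version": "01-01-01", "availability": "Yes"},
]

# Operator's ruleset list: ruleset name -> highest usable Major-Minor version
ruleset_stack = {"Purchasing": "02-01", "ADV": "03-01"}

def in_stack(candidate):
    entry = ruleset_stack.get(candidate["ruleset"])
    if entry is None:
        return False                      # ruleset is not in the stack at all
    major, minor = entry.split("-")
    v_major, v_minor, _ = candidate["version"].split("-")
    # Major version must match exactly; Minor version must not be higher
    return v_major == major and v_minor <= minor

filtered = [c for c in candidates
            if c["availability"] != "No" and in_stack(c)]
# Only the Purchasing 02-01-05 candidate survives both checks
```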
Step 5: Discard all candidates not defined on a class in the ancestor tree.
The ancestor tree refers to a rule's inheritance. Inheritance is defined in more detail in another lesson.
For this lesson, it is only important to remember that a class can inherit from other classes, which allows
for the reuse of rules.
This step is the first time that the system examines the Applies To class of the candidates. If the
Applies To class does not match the current class, or one of the current class's parents, then those
candidates are discarded.
Example:
In our example, we are currently executing a Purchase Request. There is only one candidate that is not in
the ancestor tree of the Purchase Request, marked with * below:
Class | Ruleset | Version | Availability | Qualifier
ADV-Purchasing-Work-PurchaseRequest | Purchasing | 02-01-10 | Withdrawn |
ADV-Purchasing-Work-PurchaseRequest | Purchasing | 02-01-05 | Yes |
ADV-Purchasing-Work-PurchaseRequest | Purchasing | 02-01-01 | Yes |
ADV-Purchasing-Work-PurchaseOrder | Purchasing | 02-01-01 | Yes | *
ADV-Purchasing-Work | Purchasing | 02-01-05 | Yes | Yes (Circumstance)
ADV-Purchasing-Work | Purchasing | 02-01-05 | Yes | Yes (Date)
ADV-Purchasing-Work | Purchasing | 02-01-05 | Yes |
ADV-Purchasing-Work | Purchasing | 02-01-01 | Yes |
ADV | ADV | 03-01-01 | Yes |
Therefore, the candidate defined on ADV-Purchasing-Work-PurchaseOrder gets removed from the list.
1. The remaining candidates are ranked in the following order:
a. Class
b. Ruleset
c. Circumstance
d. Circumstance Date
e. Date/Time Range
f. Version
The first two ranks, Class and Ruleset, provide the basics of Rule Resolution. How close the
candidate is to the class where it executes, and how high it is in the ruleset stack, determine
which is the default rule.
The next three ranks, Circumstance, Circumstance Date, and Date/Time Range, are qualifiers
to those basics. They allow us to specialize even further to address all the possible outlier
cases. The exact details of these qualifiers and an in-depth look at this portion of Rule
Resolution are covered in the Circumstancing lesson in this course.
Finally, the last rank, Version, ranks the candidates by the ruleset version that contains them.
This ensures that circumstanced rules are not automatically overridden if the base rule is
updated in a more recent ruleset version.
2. During the ranking process, the system evaluates any of the candidates that have their
Availability set to Withdrawn. Withdrawn is a special Availability that lets us skip over some
candidates. When a rule is marked as Withdrawn, it is removed from the list of candidates, as
well as any additional candidates that match all of the following:
the same Applies To class
the same rule name and Purpose
the same RuleSet and Major version
the same qualifiers
This is the definition of how to match a Withdrawn rule. However, during Rule Resolution, the
incompatible Major versions and Purposes have already been discarded in the preceding steps.
At this point, we're only concerned with the rule being in the same class, in the same RuleSet, and
having the same qualifiers.
3. The last thing the system does is determine the default candidate. A default candidate is the
first (highest-ranked) candidate that has no qualifiers. This default candidate is the last possible
rule to be executed, as it will always match any additional requests for this rule. Additional
candidates ranked below this one are discarded.
Once these three steps of ranking are complete, each remaining candidate will either have some kind
of qualifier or be the single non-qualified default candidate.
Example:
Continuing our example, we'll first rank the results. Conveniently, our list is already in the correct order
in which it was ranked.
Class | RuleSet | Version | Availability | Qualifier
ADV-Purchasing-Work-PurchaseRequest | Purchasing | 02-01-10 | Withdrawn |
ADV-Purchasing-Work-PurchaseRequest | Purchasing | 02-01-05 | Yes |
ADV-Purchasing-Work-PurchaseRequest | Purchasing | 02-01-01 | Yes |
ADV-Purchasing-Work | Purchasing | 02-01-05 | Yes | Yes (Circumstance)
ADV-Purchasing-Work | Purchasing | 02-01-05 | Yes | Yes (Date)
ADV-Purchasing-Work | Purchasing | 02-01-05 | Yes |
ADV-Purchasing-Work | Purchasing | 02-01-01 | Yes |
ADV | ADV | 03-01-01 | Yes |
Next, the system evaluates the Withdrawn candidate. The rule at version 02-01-10 is marked
Withdrawn, so it removes itself and the matching candidates in the same class and RuleSet with the
same qualifiers, marked with * below:
Class | RuleSet | Version | Availability | Qualifier
ADV-Purchasing-Work-PurchaseRequest | Purchasing | 02-01-10 | Withdrawn | *
ADV-Purchasing-Work-PurchaseRequest | Purchasing | 02-01-05 | Yes | *
ADV-Purchasing-Work-PurchaseRequest | Purchasing | 02-01-01 | Yes | *
ADV-Purchasing-Work | Purchasing | 02-01-05 | Yes | Yes (Circumstance)
ADV-Purchasing-Work | Purchasing | 02-01-05 | Yes | Yes (Date)
ADV-Purchasing-Work | Purchasing | 02-01-05 | Yes |
ADV-Purchasing-Work | Purchasing | 02-01-01 | Yes |
ADV | ADV | 03-01-01 | Yes |
(* = removed by Withdrawn processing)
Once those have been removed, we look for the first candidate that does not have any qualifiers. We find
that ADV-Purchasing-Work in the Purchasing ruleset, version 02-01-05, meets these criteria, so the
remainder of the list, marked with * below, is removed.
Class | RuleSet | Version | Availability | Qualifier
ADV-Purchasing-Work | Purchasing | 02-01-05 | Yes | Yes (Circumstance)
ADV-Purchasing-Work | Purchasing | 02-01-05 | Yes | Yes (Date)
ADV-Purchasing-Work | Purchasing | 02-01-05 | Yes |
ADV-Purchasing-Work | Purchasing | 02-01-01 | Yes | *
ADV | ADV | 03-01-01 | Yes | *
(* = ranked below the default candidate and removed)
This leaves us with only three candidates: two that have a qualifier and the default candidate.
Example:
In our example, the result is that all the candidates are from ADV-Purchasing-Work.
Class | RuleSet | Version | Availability | Qualifier
ADV-Purchasing-Work | Purchasing | 02-01-05 | Yes | Yes (Circumstance)
ADV-Purchasing-Work | Purchasing | 02-01-05 | Yes | Yes (Date)
ADV-Purchasing-Work | Purchasing | 02-01-05 | Yes |
However, since we were executing a Purchase Request, the key used when putting this list in the cache
is ADV-Purchasing-Work-PurchaseRequest.
Example:
In our example, we have three possible candidates that are evaluated.
Class | RuleSet | Version | Availability | Qualifier
ADV-Purchasing-Work | Purchasing | 02-01-05 | Yes | Yes (Circumstance)
ADV-Purchasing-Work | Purchasing | 02-01-05 | Yes | Yes (Date)
ADV-Purchasing-Work | Purchasing | 02-01-05 | Yes |
First, PRPC evaluates whether the condition for the Circumstance was met. Let's say the Circumstance
was Supplier=Restricted. If the value of Supplier equals "Restricted" then the system uses the first
candidate. Our Supplier is set to "Open", so we don't match and the system moves to the next
evaluation.
Now it evaluates a Date/Time range. If the range on the candidate is specified as "Before June 1st, 2000"
and we're executing this rule now, we don't match the date range, so the system moves to the next
evaluation.
The last candidate does not have any qualifiers, so the system automatically chooses this candidate. We
now know which rule to execute.
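The runtime walk through the ranked candidates can be sketched as follows. The candidate and qualifier structures here are invented for illustration; they are not PRPC data structures:

```python
from datetime import datetime

# Illustrative sketch: evaluate ranked candidates until one's qualifier matches.
ranked_candidates = [
    {"qualifier": ("circumstance", ("Supplier", "Restricted"))},
    {"qualifier": ("date_range", (None, datetime(2000, 6, 1)))},  # before June 1st, 2000
    {"qualifier": None},                                          # unqualified default
]

def choose(case_values, now):
    for candidate in ranked_candidates:
        qualifier = candidate["qualifier"]
        if qualifier is None:
            return candidate              # the default candidate always matches
        kind, spec = qualifier
        if kind == "circumstance":
            prop, expected = spec
            if case_values.get(prop) == expected:
                return candidate
        elif kind == "date_range":
            start, end = spec
            if (start is None or now >= start) and (end is None or now <= end):
                return candidate

# Supplier is "Open" and the date range is in the past, so the default wins
chosen = choose({"Supplier": "Open"}, datetime(2015, 2, 11))
```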
Yes = "I'm OK to be executed."
Withdrawn = "I'm not OK to be executed, and neither are any of my earlier versions. Pick
someone else."
Blocked = "I'm not OK to be executed, but do not pick anyone else. Just don't execute."
Example:
In our example, the chosen candidate has Availability set to Yes, so this rule is not blocked and is
available to run.
Class | Ruleset | Version | Availability | Qualifier
ADV-Purchasing-Work | Purchasing | 02-01-05 | Yes |
If you remember, we did have a blocked rule in the original list of candidates, but that candidate was
discarded in Step 4.
Step 10: Security. Verify that the user is authorized to see the rule.
The final piece of Rule Resolution is to check that the user has the correct authorization to execute the
rule. If the user does not have the correct authorization, they are prevented from executing it. The features
of security and how to configure them are covered in a different lesson in this course.
Example:
In our example, the user has the correct authorization, so the section displays.
Conclusion
The Rule Resolution process is the backbone of the entire PRPC application. This process defines how the
system ensures the correct rules are being run at the correct time.
Introduction
Welcome to the Enterprise Class Structure (ECS) lesson. In this lesson we'll learn how to plan our class
structure with designs for future growth and some best practices for achieving maximum reuse.
The most common of these settings generates four layers in our ECS and they are the:
Organization Layer
Division Layer
Framework Layer
Implementation Layer
Security (for example, LDAP access, how users are granted rights, etc.)
Some Integration Points (Note that these should only be placed here if they are used organization-wide.
Access to a customer database is a good example of a potential organization-wide
integration. However, a list of automotive repair centers would not be a good choice, as it is most
likely only applicable to a single line of business and would not be used organization-wide.)
Division-wide letterheads, signature lines, office locations, telephone numbers, etc.
Automotive, Home, Life, Health, etc. It doesn't matter which kind of insurance; a claim still needs to
follow the same basic steps. This is what a framework provides. Frameworks should not be tied to a
single line of business, as this limits their reuse, but instead should always look at the broad picture
across the whole organization.
The kinds of rules most often encountered here are:
Case types
Flows
Flow actions
Sections
Class groups
SLAs
Routers
One key aspect of the implementation layer is that this is where any of our work classes should be
instantiated. This allows the class to be the most specific, ensuring it leverages the appropriate rule in
all instances. It also keeps the work for a particular division tied to that one division and avoids possible
cross-division contamination of work.
This is an example of a typical organization that has been building and using PRPC applications for a little
while. Now is when this organization is reaping the benefits of its ECS. By thinking ahead, it was able to
plan frameworks and divisions that enabled it to build five different applications with relative ease
and reuse.
There's no magic number for the balance of rules in any one of these layers. Each and every application
is going to have its own unique balance. The key to planning is keeping the purpose of the rule in mind
when building it. If the rule is related to the process, it goes in the framework. If it's a business standard,
it goes in the division or organization layer, as appropriate. If it's specific to this one application, it goes
in the implementation.
[Figure: two example classes, Things That Fly (with properties such as Airspeed) and Living Creatures
(with properties such as Number of Offspring), and a Swallow class that could inherit from both]
If the Swallow could inherit from both Things That Fly and Living Creatures, it would be able to use all four of
the listed properties. Thankfully, in PRPC we can do that. PRPC has the ability to define two kinds of
inheritance: Pattern and Directed. By providing two paths of inheritance, we can achieve multiple reuse
patterns.
Pattern Inheritance
Pattern inheritance is defined by the name of the class. For example, let's look at the class:
ADV-Purchasing-Work-PurchaseOrder
Everything before the last "-" is considered the class's parent. Pattern inheritance continues to chain all the
way up to the very first class referenced in the chain. So for our class, the chain of parents is:
ADV-Purchasing-Work
ADV-Purchasing
ADV
Our class can use any of the rules defined on any of these three parents.
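The pattern-inheritance chain can be derived mechanically from the class name, as this small sketch illustrates (the function is ours, invented for illustration, not a PRPC API):

```python
def pattern_parents(class_name):
    """Return the pattern-inheritance parents of a class, nearest first,
    by repeatedly stripping the segment after the last '-'."""
    parts = class_name.split("-")
    return ["-".join(parts[:i]) for i in range(len(parts) - 1, 0, -1)]

parents = pattern_parents("ADV-Purchasing-Work-PurchaseOrder")
# ['ADV-Purchasing-Work', 'ADV-Purchasing', 'ADV']
```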
Directed Inheritance
Directed inheritance is how we explicitly specify an alternative parent for a class. We specify the directed
inheritance on the class's rule form.
For our example, the directed parent is Work-Cover-. Directed parents also chain, so if we were to open
each of the class definitions, we'd find that this class's parents are:
Work-Cover-
Work-
@baseclass
So our class is also able to use any of the rules in these classes.
This opens the Inheritance Viewer. The Inheritance Viewer allows us to see all the parents of the class,
as shown here:
The parents are ranked according to the order in which they're checked during rule resolution. When a rule is
requested, the system first looks in the child class, then works its way through the parents starting from 1.
Example:
We've requested to execute "Rule A". Looking at all the available "Rule A"s in the system, we find it
on:
1. ADV
2. Work-
3. ADV-Purchasing-Work
4. @baseclass
When the system ranks these, it selects the copy of the rule in #3, since it follows the order shown in the
Inheritance Viewer.
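This lookup can be sketched as a walk down the ordered parent list. The ordering below (pattern parents before directed parents) and the rule registry are assumptions reconstructed from this lesson's example, not PRPC internals:

```python
# Assumed Inheritance Viewer order for ADV-Purchasing-Work-PurchaseOrder
inheritance_order = [
    "ADV-Purchasing-Work-PurchaseOrder",   # the class itself is checked first
    "ADV-Purchasing-Work", "ADV-Purchasing", "ADV",
    "Work-Cover-", "Work-", "@baseclass",
]

# Classes on which "Rule A" happens to be defined (taken from the example)
rule_definitions = {"Rule A": {"ADV", "Work-", "ADV-Purchasing-Work", "@baseclass"}}

def find_rule(name):
    for cls in inheritance_order:
        if cls in rule_definitions.get(name, set()):
            return cls                     # first hit in ranked order wins

found = find_rule("Rule A")
# 'ADV-Purchasing-Work', the #3 entry, as in the example
```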
Conclusion
In today's applications, the enterprise class structure is what drives the reuse model. A well-planned
class structure should incorporate the needs for reuse, both process-oriented and business-oriented.
Circumstancing
Introduction
Welcome to the lesson on Circumstancing. In this lesson, we'll discuss how we can use circumstancing
to provide flexibility to a PRPC system to address the complex demands of outlier business rules.
At the end of this lesson, you should be able to:
Define circumstancing
Define Circumstancing
Circumstancing is a way we can create specialized copies of rules to use based on the evaluation of a
condition. This helps us handle the outlier cases that go beyond the basic process. For example, we
might need the system to apply a different tax calculation if a person is retired. This is an easy case, and
potentially could have been handled directly in the tax calculation. But what if we have a more complex
situation? Let's say we need to apply a different tax calculation if a person is retired, they retired after Jan
1st, 2000, and they have no next of kin. Now what do we do?
These are the kinds of outlier situations that exist in businesses today. Complex business rules vary from
state to state, by time, by demographic group, and by any and all possible combinations. Trying to account
for all of these possible combinations in different decisions and paths in the flows leads to a complex nest
of possibilities that becomes harder and harder to maintain.
By using circumstancing, we can keep the processes streamlined, and allow the system to determine
when to do things differently based on the information available at the time the rule executes. There are
four types of circumstancing, but they can be summed up as being based on data, on time, or both.
Creating a circumstance
So, how do we create a circumstanced rule? Easy: first we select our base rule, then we select
"Specialize by circumstance" to create a new rule.
In this lesson we are discussing the options for "Specialize by circumstance". The other option,
"Specialize by class or ruleset", is described in the Reuse and Specialization lesson.
Which rules can be circumstanced?
So this means we can circumstance any and every rule, right? Not so fast. Only certain rule types can be
circumstanced. In order for a rule to be circumstanced, that type of rule must have at least one of these
two options enabled.
Allow rules that are only valid for a certain period of time?
We use this option for circumstancing based on the time the rule executes.
Below are the types of circumstancing we can use. Let's take a look at each of them in more detail; in
each one, we'll evaluate a different part of our original example.
Single Property
As of Date Processing
Multivariate
Time-Qualified
Single Property
We use single property circumstancing when we look at the data available.
When we use single property circumstancing, the property needs to be a single value. Lists and groups
are not compatible with single property circumstancing. The value of the property entered on the left is
compared to the entered value on the right. This comparison looks for an exact match and is case
sensitive. For this reason, single property circumstancing is not recommended for customer-entered
data.
Example
A single property circumstance is perfect for evaluating the first of our conditions. We can create a
circumstanced rule based on an IsRetired property. If the value of IsRetired equals "Yes" then this rule
is chosen. If the value is "No" then the rule is not chosen. Since the comparison is case sensitive, if the
value was "yes" then this rule is not chosen. This is why it is very important to always have control over
the values that are being compared.
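The case-sensitive exact match boils down to a one-line comparison, sketched here for illustration (this is not the actual PRPC matcher):

```python
def single_property_matches(actual, circumstance_value):
    # Exact, case-sensitive string comparison: "Yes" matches, "yes" does not
    return actual == circumstance_value

single_property_matches("Yes", "Yes")   # matches
single_property_matches("yes", "Yes")   # does not match: case matters
```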
As of Date
As of Date is very similar to Single Property, except that it:
1. Can only be used with date/time type properties.
2. Looks for circumstances where the rule is past the specified date instead of an exact match.
As of Date first looks for all circumstances where the value in the property is past the specified date or
time. It then chooses the circumstance that is closest to the value in the property. To illustrate, let's say
we had three circumstances with the following dates defined:
Rule A.1: September 1st, 2012
Rule A.2: June 1st, 2012
Rule A.3: July 1st, 2012
If the system needs to pick one of these, and the property value is August 15th, 2012, then the system
would discard Rule A.1 because August 15th, 2012 is not past September 1st, 2012. It would keep the
other circumstances, Rule A.2 and Rule A.3, because our date is past the dates of those circumstances.
Between A.2 and A.3, it looks for whichever circumstance is closer to the value in the property. This
is A.3, since July 1st, 2012 is later than June 1st, 2012, which makes it closest to our value of August
15th, 2012.
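The selection just described can be sketched in a few lines (an illustration of the rule, not PRPC code):

```python
from datetime import date

def pick_as_of_date(value, circumstance_dates):
    """Keep only circumstance dates the property value is past,
    then choose the one closest to (i.e. latest before) the value."""
    eligible = [d for d in circumstance_dates if value > d]
    return max(eligible) if eligible else None   # None -> fall back to the base rule

chosen = pick_as_of_date(
    date(2012, 8, 15),                                       # the property value
    [date(2012, 9, 1), date(2012, 6, 1), date(2012, 7, 1)],  # Rules A.1, A.2, A.3
)
# date(2012, 7, 1): Rule A.3, as in the example
```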
We'll cover the exact process rule resolution uses to make these determinations a little later in this lesson.
Example
This kind of circumstancing is perfect for the second of our conditions. In this case, we would create a
circumstance based on a RetiredDate property and specify Jan 1st, 2000 as the Date Value. This rule
is chosen anytime the RetiredDate is after Jan 1st, 2000. If the RetiredDate is before this date, for
instance Dec 31st, 1999, then the system falls back to the default rule that has no circumstances defined.
Multivariate
But what if we need to evaluate more than one property? This is where multivariate circumstancing is
used. Multivariate circumstancing requires two additional rule types: Circumstance Templates and
Circumstance Definitions. Using these two rules, we can define a circumstance that spans multiple
properties and multiple values.
In PRPC, Circumstance Definition and Circumstance Template rules are part of the Technical category
and are instances of the Rule-Circumstance-Definition and Rule-Circumstance-Template rule types,
respectively.
The Circumstance Template defines which properties take part in determining if the circumstance is valid.
What we define here becomes the list of properties that a Circumstance Definition can leverage.
The Circumstance Definition uses the Circumstance Template to set up the columns of valid values in
this Definition. The definition can be any number of rows and can contain any value, or a range of
values, for each property in that row.
Bringing this all together, with a Circumstance Template we can state that we want to evaluate multiple
properties. A Circumstance Definition then extends that Template to let us evaluate multiple values for
those multiple properties. A Circumstance Definition cannot be created without its corresponding
Circumstance Template and a Circumstance Template cannot be leveraged without at least one
Circumstance Definition being defined.
Once the Template and at least one Definition have been created, we can create a multivariate
circumstance by choosing to specialize by Template. On the new form, we can then specify the already
created Template and Definition to use for this rule.
Example
This kind of circumstancing is used for our third condition, since we need to check if the person is
retired and if they have no next of kin. So, first we create a Circumstance Template to define that we're
circumstancing on the IsRetired and NextOfKin properties. Once we've created the Circumstance
Template, we can create the Circumstance Definition that says we're looking for someone who is retired
(IsRetired = True) and has no next of kin (NextOfKin = None). Finally, we specialize by Template and
select the template and definition we just created.
So that gives us four rules in the system to handle this special circumstance:
1. The base rule
2. The Circumstance Template
3. The Circumstance Definition
4. The circumstanced rule
Whenever this rule needs to be executed, the system uses the Circumstance Definition and Template to
determine if the circumstance is valid. If yes, it runs the circumstanced version of the rule. If no, it runs
the base rule.
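Conceptually, the Template and Definition work like the sketch below: the Template names the properties, and each Definition row lists acceptable values. All names and structures here are invented for illustration, not PRPC internals:

```python
# Circumstance Template: which properties take part in the evaluation
template = ["IsRetired", "NextOfKin"]

# Circumstance Definition: rows of valid values for those properties
definition_rows = [
    {"IsRetired": "True", "NextOfKin": "None"},
]

def circumstance_applies(case_values, template, definition_rows):
    # The circumstance is valid when some row matches every templated property
    return any(all(case_values.get(prop) == row[prop] for prop in template)
               for row in definition_rows)

circumstance_applies({"IsRetired": "True", "NextOfKin": "None"},
                     template, definition_rows)   # valid -> run the circumstanced rule
circumstance_applies({"IsRetired": "False", "NextOfKin": "None"},
                     template, definition_rows)   # not valid -> run the base rule
```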
Time-Qualified
When specializing by date, only two additional values can be specified: the Start Date and the End Date.
Only one of these is required, though both can be, and often are, specified. The system executes this
version of the rule if the current time is after the start date but before the end date. If either the Start
Date or End Date is missing, then this becomes an open-ended circumstance, which applies at all times after a
specified start date or before a specified end date.
Example
This kind of circumstance satisfies our fourth condition, determining whether or not we're in tax season.
To create this circumstance, we start with the SLA rule that specifies the turnaround time. We save this
rule to a new copy and provide a Start Date of Jan 1st and an End Date of Apr 15th. The system then chooses
this circumstance whenever we're between those dates.
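The date-range check amounts to the comparison below (a sketch; the tax-season dates come from this example):

```python
from datetime import date

def time_qualified(today, start=None, end=None):
    # The rule applies if today is after the start date (if any)
    # and before the end date (if any); a missing bound is open-ended
    return (start is None or today >= start) and (end is None or today <= end)

# Tax-season SLA circumstance: Jan 1st through Apr 15th
time_qualified(date(2015, 3, 1), date(2015, 1, 1), date(2015, 4, 15))   # applies
time_qualified(date(2015, 6, 1), date(2015, 1, 1), date(2015, 4, 15))   # does not apply
```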
The ranking process orders by:
1. Override
2. Class
3. Ruleset
4. Circumstance Value, in alphabetical order. (Multivariate circumstances rank by the order in which
the properties are listed in the Template.)
5. Circumstance Date, in descending order.
6. Time-Qualified by end date, in ascending order.
7. Time-Qualified by start date, in descending order.
8. Non-Qualified
9. Version
Each of these sort orders is applied within the previous one. Override, Class, Ruleset, and Version are covered
as part of the Rule Resolution lesson, so we'll just focus on how the circumstanced rules are ranked.
Using our previous examples, if we were to account for every one of the possible circumstance
combinations, we'd have a large number of rules to maintain, as shown in the table below.
[Table: the ten possible combinations of the IsRetired and NextOfKin circumstance values, the
RetiredDate circumstance date (Jan 1st, 2000), and the Start/End Date time qualifiers, ranked 1
through 10, with rank 10 as the unqualified default]
In most cases, though, the business doesn't need to account for every possible combination. Often one or
more of these rules supersedes another. Clarify with your Business Architect (BA) or Subject Matter
Expert (SME) the intent of these specializations to ensure the correct number of rules is being created. If
we were to create exactly the specializations requested, we'd only have 5 variations, as shown in the table
below.
Rank Order | Circumstance Value 1 (IsRetired) | Circumstance Value 2 (NextOfKin) | Circumstance Date (RetiredDate) | End Date | Start Date
1 | True | None | | |
2 | True | | | |
3 | | | Jan 1st, 2000 | |
4 | | | | Apr 15th | Jan 1st
5 (Default) | | | | |
This looks much easier to maintain, and the system can now be relied upon to choose the correct rule to
execute based on the ranking order.
This table is written to the cache and evaluated as part of every rule resolution request. This gives the
cache the maximum amount of reuse, since a single cache entry covers every circumstance variation. Every
execution then confirms against this table the exact circumstanced version that needs to be executed for
the exact values held at runtime.
What if the base changes from 3 days to 4 days? This should be relatively easy; we just save the rule
into a new version and update it. But what about the circumstanced versions of the rules? Do we need
to save those too? No, because the system ranks version as less important than the circumstance. So it
is possible to have a circumstanced rule in version 01-01-01 and the base rule in 01-01-15. At run time, if
it matches the circumstance then it executes the rule from 01-01-01. Otherwise it will execute the one
from 01-01-15.
So how do we get rid of a circumstance? If the system always honors the old circumstanced rules, is
there no way to remove them? Thankfully, there is a way. Any version of any rule that can be circumstanced
can also be designated as a base rule. This is done by checking the Base Rule flag while setting the
rule's availability.
Checking this flag designates that this version of the rule is now considered the rule's base, and any
previous circumstances no longer apply. Let's take a look at the following list of all variations of a rule:
Version | Circumstance
01-01-01 | None
01-01-01 | .Dept = Accounting
01-01-01 | .Dept = Engineering
01-01-15 | None
01-01-20 | .Dept = Engineering
01-01-25 | None (Base Rule checked)
01-01-30 | .Dept = Accounting
01-01-35 | None
Given this list, if we were to execute this rule when .Dept = Accounting, we get the 7th rule (ver. 01-01-30).
If we were to execute this rule when .Dept = Engineering, we get the 8th rule (ver. 01-01-35).
This is because the 6th rule (ver. 01-01-25) has the Base Rule flag checked. So, all the rules previous
to this version (rules 1 through 5) are no longer applicable to our ranking. When we look at only
those rules that are available for ranking, we can see that Engineering is not an applicable circumstance,
so the system chooses the highest version of the rule with no circumstances defined.
Version | Circumstance
01-01-25 | None (Base Rule checked)
01-01-30 | .Dept = Accounting
01-01-35 | None
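The effect of the Base Rule flag on this list can be sketched as follows, using the versions above (an illustration; these structures are not PRPC internals):

```python
# (version, circumstance, base_rule_flag), ordered lowest version first
rule_versions = [
    ("01-01-01", None,          False),
    ("01-01-01", "Accounting",  False),
    ("01-01-01", "Engineering", False),
    ("01-01-15", None,          False),
    ("01-01-20", "Engineering", False),
    ("01-01-25", None,          True),   # Base Rule checked: earlier rows no longer apply
    ("01-01-30", "Accounting",  False),
    ("01-01-35", None,          False),
]

def resolve(dept):
    # Only rows at or after the highest version with the Base Rule flag count
    base_index = max((i for i, r in enumerate(rule_versions) if r[2]), default=0)
    eligible = rule_versions[base_index:]
    # Highest-version matching circumstance wins, else the highest-version default
    for version, circumstance, _ in reversed(eligible):
        if circumstance == dept:
            return version
    for version, circumstance, _ in reversed(eligible):
        if circumstance is None:
            return version

resolve("Accounting")    # '01-01-30', the 7th rule
resolve("Engineering")   # '01-01-35', the 8th rule
```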
Conclusion
Circumstancing helps to empower the rule resolution process, allowing the system to determine the best
rule, for every possible outlier, all while providing the ease of reuse and maintenance.
Choose the right layer in the enterprise class structure for a given use case
Parameterization
When we first start building an application, it's important to take a step back and broadly identify which
components are generic and which are highly specialized.
In an insurance application, for example, the ability to record customer information, coverage, and the
concepts of claims and deductibles are generic: they should apply to any part of an insurance
organization.
On the other hand, the functionality specific to a location, or to a particular insurance product, would be
considered more specialized.
We should plan ahead. With this in mind, assume that businesses will grow, and the application will
someday be used beyond the current use case. Don't box it in: assume, and build for, change.
Also, assume that other people will be managing and extending the application. These folks may be
completely new to the project, and perhaps we will be on a different project, or out of town, or on that
dream vacation across the world. Regardless of where we are, it is imperative that someone else, given
appropriate access, can understand how the rules work.
The goal here is to achieve specialization without sacrificing reusability.
These two concepts are often pitted against each other: many applications that are specialized for one
purpose cannot be reused for another.
However, with the powerful layering technology of PRPC, coupled with proper design and planning, we
can achieve both quite handily.
Sections
Flows
Data Transforms
Activities
Functions
Messages
Collections
Let's take a look at a typical parameter table. This is a parameter table for data transforms; it looks
similar for other rule types.
First, there is the name of the parameter. This is, well, the name of the parameter.
Next is the description. Please always fill this out, as it is the best way to communicate with
someone else who would like to use our parameterized rule.
The data type can be a string (which is the default), Boolean, integer, or page name.
The last two columns, Type for SmartPrompt and Validate As, are advanced and are used
more for internal PRPC purposes. They are typically used for referencing rules.
Persistence, like security, can also be controlled only with class-based specialization. If two different
variants of work objects must be stored in different tables, they must be in different classes.
The next question to ask is if the rules should be managed or promoted to production differently based on
the specialization. Only RuleSet specialization offers this level of control.
The next question to ask is if the set of features we are developing, while designed for a particular
purpose, is reusable for almost any application. For example, let's say we are developing a customer
complaint component for our insurance application. This is likely to be reusable in other applications as
well, throughout our organization. For this situation, RuleSet specialization is ideal, as a RuleSet can be
plugged into any application by including it in the RuleSet stack.
We should also ask if an operator should be able to call more than one variant in a given session. For
example, if a European institution must process requests differently for each country, must it be possible
for a single operator to handle work for both Germany and France in the same session? Or is each
country fundamentally a different application? RuleSets are associated with the application, which is tied
to the operators session. Therefore, if an operator must handle multiple variants in a session, either
Class or Circumstancing must be used.
The next question is whether the specialization depends on customer-specific data, rather than some
predefined organizational construct. If insurance premiums depend on a combination of a customer's age
and address, it is best to set up circumstanced rules to handle this specific combination of values.
Finally, if the specialization depends on dates, or is in some way temporary, use Circumstancing. These
are the key use cases for which circumstancing was designed.
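The questions above can be distilled into a rough decision sketch. The criteria names and the priority order here are our own simplification of this lesson's guidance, not an official algorithm:

```python
def specialization_technique(needs_security_or_persistence=False,
                             managed_or_deployed_separately=False,
                             reusable_across_applications=False,
                             multiple_variants_per_session=False,
                             depends_on_customer_data=False,
                             date_driven_or_temporary=False):
    # Security and persistence can only be separated with classes
    if needs_security_or_persistence:
        return "Class"
    # Data- or date-driven specialization points to circumstancing
    if depends_on_customer_data or date_driven_or_temporary:
        return "Circumstance"
    # Separate management/promotion or broad reuse points to RuleSets...
    if managed_or_deployed_separately or reusable_across_applications:
        # ...unless one operator must handle several variants in a session
        return "Class or Circumstance" if multiple_variants_per_session else "RuleSet"
    return "Any technique may fit; revisit the questions above"

specialization_technique(needs_security_or_persistence=True)   # 'Class' (the YourCo case)
specialization_technique(date_driven_or_temporary=True)        # 'Circumstance'
```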
Now, let's take these questions and put them to the test with a series of examples.
In our first example, lets see how Lines of Business should be represented in PRPC. MyCo is an
Insurance Company with two lines of business: Life, and Home.
There is a significant amount of functionality (around 30%) that needs to be specialized along these two
products. We also need to separate these Lines of Business by security.
It's also likely that some case managers are able to handle both home and life transactions in the same application. Therefore, considering these three factors, MyCo's Business Line needs are most likely met through Class specialization.
In our next example, YourCo is an insurance brokerage firm that wishes to specialize functionality
across brokers.
Each broker must be separate from the other; seeing one another's rules is NOT allowed.
Further, because of this security requirement, it is critical that work objects for each broker be stored in
dedicated tables. Class-based specialization is the only technique that can meet the security and
persistence requirements of YourCo.
Our next example is an insurance company called HerCo. HerCo's application must be specialized across locales, in this case, states.
Most locales have the same requirements; only a couple of states deviate, and these are considered edge cases.
It is also imperative that a single operator can run workflows for different locales in the same application. As such, HerCo's requirements are met through the use of Circumstance specialization.
Our final example is HisCo, an Insurance company that has different requirements across European
Countries, each of which represents a different business unit.
Each business unit must manage their rules separately; for example, the German business unit will
deploy their rules to production at a different time than the French business unit.
As such, designers are tempted to implement specialization with RuleSets only.
However, despite the fact that these business units are currently separate, it is believed that HisCo will
grow to a point where some operators will have to handle work for multiple business units in the same
application. It is very important to design the application not just for the current need, but for the future as
well.
There is no one specialization technique that can handle all of these requirements. RuleSet
specialization must be used to handle the deployment requirements, but what else? Class, or
Circumstance?
This is where it helps to consider how rule resolution works. Let's say we use a combination of RuleSet and Circumstance specialization. We have four rules: a base rule and circumstance pair, in each RuleSet.
The RuleSet stack is a fixed hierarchy: in a given application, the stack does not change. Let's say that the France RuleSet is highest in the stack.
Let's say the value of the country property is France. RuleSet is considered before circumstancing; as such, the rules in the France RuleSet will be found first, and then the circumstanced rule will be chosen.
If, instead, the value of the country property is Germany, the rules in the France RuleSet will still be
found first, since RuleSet is considered before circumstancing.
Then, in this case, the base rule is selected.
Therefore, the combination of RuleSet and Circumstancing specialization will not work for HisCo's
requirements.
If, instead, we use the combination of RuleSet and Class specialization, again we may have four rules,
but this time, the business unit specific rules are in dedicated classes.
If the country is Germany, even when the France RuleSet is higher in the stack, the appropriate rule will
be found immediately, because class is considered first during rule resolution.
Therefore, HisCo's requirements would be met through a combination of RuleSet and Class
specialization.
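The precedence reasoning above can be pictured with a toy resolver. This is a deliberate simplification (real PRPC rule resolution also weighs versions, availability and more); all names here are illustrative, not Pega APIs.

```python
# Toy model of the precedence discussed above: a candidate in a more
# specific class wins before RuleSet stack order, and RuleSet stack
# order wins before circumstance matching.

def resolve(candidates, class_hierarchy, ruleset_stack, circumstance_value):
    """Pick the rule a request would find first.

    candidates: list of dicts with keys 'class', 'ruleset', 'circumstance'
                ('circumstance' is None for a base rule).
    class_hierarchy: classes ordered most specific first.
    ruleset_stack: RuleSets ordered highest first.
    """
    def rank(rule):
        return (
            class_hierarchy.index(rule["class"]),        # 1. class specificity
            ruleset_stack.index(rule["ruleset"]),        # 2. RuleSet stack position
            0 if rule["circumstance"] == circumstance_value else 1,  # 3. circumstance
        )
    # Discard circumstanced rules whose condition does not match at all.
    viable = [r for r in candidates
              if r["circumstance"] in (None, circumstance_value)]
    return min(viable, key=rank)

# HisCo with RuleSet + Circumstance only: one class, France RuleSet on top.
candidates = [
    {"class": "Work-", "ruleset": "France",  "circumstance": "France"},
    {"class": "Work-", "ruleset": "France",  "circumstance": None},
    {"class": "Work-", "ruleset": "Germany", "circumstance": "Germany"},
    {"class": "Work-", "ruleset": "Germany", "circumstance": None},
]
hit = resolve(candidates, ["Work-"], ["France", "Germany"], "Germany")
# The base rule in the France RuleSet wins; the German circumstanced rule
# is never reached, which is why RuleSet + Circumstance fails here.

# With Class specialization instead, the Germany rule lives in a more
# specific (hypothetical) class, so it wins regardless of RuleSet order.
class_candidates = [
    {"class": "MyCo-Germany", "ruleset": "Germany", "circumstance": None},
    {"class": "Work-",        "ruleset": "France",  "circumstance": None},
]
hit2 = resolve(class_candidates, ["MyCo-Germany", "Work-"],
               ["France", "Germany"], None)
```

With the Germany value, `hit` is the France base rule (the unwanted outcome) while `hit2` is the Germany class rule, mirroring the two scenarios in the text.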
The HisCo example illustrates an important point: RuleSets are really meant to facilitate
deployment. They are not meant for specialization within a given application.
They do help specialize, but the specialization is very broad, to differentiate one application from another.
To specialize within an application, use either class or circumstancing.
Finally, it is important to remember that Class-based specialization provides the most options, and should
likely be considered first when assessing a particular requirement.
Choose the Right Layer in the Enterprise Class Structure for a Given
Use Case
The enterprise class structure is an out-of-box structure to help facilitate reuse and specialization along
class lines.
There are four layers, each dedicated to a particular kind of reuse. The Organizational layer is for rules
that are to be used across the organization or company.
Case Hierarchy
Screen Flows
Work Status
Work Parties
Understand the Case Designer, and the Process View of the Case Designer
Understand which rules are associated with cases and how to open those rules from the Case
Designer
Now let's look at each tab in more detail. The Stages & Processes tab shows the stages of a case.
Under each stage, we can click on the Configure Process Detail link to open the Process view of the Case Designer, or simply the Process View.
The whole section is the Process View. We can click on the Back to Stages button to get back to the Stages & Processes tab of the Case Designer. In the Process View, all the steps of the stage appear on the left in the form of a tree-like view.
We use the Case Designer Details tab to configure the case. Shown below is the Case Designer Details tab for a top-most parent case, for a subcase which has its own subcases, and for a subcase which does not have any subcases, as shown in 1, 2 and 3 respectively. As we can see, the main differences are that the top-most parent case only has the Email Instantiation configuration; any case that has subcases has the Data Propagation configuration item, which is used to propagate data declaratively from a case to its subcases; and all subcases also have the Instantiation configuration.
The Case Designer Specifications tab is used to document the business requirements related to the case.
Understand the Rules Behind the Case Designer and Process View
Understanding the rules being used is essential for the Senior System Architect, so that we can ensure the correct rules are in an unlocked ruleset version before we make any changes.
We can open the rule behind the case designer configuration from the Case Explorer itself, by right-clicking on the case and selecting the Open menu option.
This opens the Case Type rule, pyDefault. Let's look at this in detail, so that we can understand where the Case Designer configurations are stored in this rule. This rule is also used to configure some items that are not part of the Case Designer. Here, we see that the rule is not locked, as there is no lock icon or check out button. The Save button is there to save the changes. We can make the Case Designer changes in this rule, but we recommend that you make the configuration changes in the Designer instead of in the rule.
The Processes tab of the Case Type rule stores the configuration information we saw on the Details tab of the case designer. This includes the work parties, data propagation, case wide supporting processes, case wide local actions and case match, which is for duplicate search. For now, just understand the rule type and name of the rule where these configuration settings are stored and how to open that rule. In this tab, we also see the list of subcases that are part of this case, under coverable work types, and the starting processes. The starting process is the first process that runs when users click on Create from the Create menu in the portal.
Let's open the starting process by clicking on the magnifying glass next to the starting process rule name. The starting flow usually consists of only one utility shape (pzInitializeStage) if the case utilizes stages.
The Stages tab of this rule stores the Stages & Processes tab information of the case designer. We can see the primary stages, the configuration of each stage and the configuration of each step as shown below. This tab also stores the information related to alternate stages, very similar to the primary stages information, as shown below.
The Calculation tab stores the calculations that are configured in the Details tab of the Case Designer. We can use the Attachment Categories tab in the Case Type rule to add new categories associated with the case on which we are working. This tab does not store any information from the Case Designer.
The Advanced tab does not store any information from the case designer either. We can use the Advanced tab to publish as a remote case type, which is required for Federated Case Management.
The locking configuration in the Case Designer and the locking configuration here in the Advanced tab of the Case Type rule are different. The Case Designer locking configuration is valid for cases that are instantiated under a case hierarchy. The locking configuration here is valid for cases that are instantiated as stand-alone case instances.
So far, we have seen the pyDefault Case Type rule, which stores the Case Designer and Stage Designer configuration details.
Now, let's look at the rule associated with any step in any stage. We can open the flow rule associated with each step from the Open menu item of the Step Options menu.
Conclusion
Now, you should understand what the Case Designer and Process View offer us. Case Designer configuration details are stored in the pyDefault Case Type rule. Each step under a stage is a flow rule, which can be configured in the Process View.
These rules should be unlocked before making changes in these designers and the process view. These rules can also be opened from the Case Designer. There can be multiple starting processes for a case, and the starting process can be any flow from any class in the whole direct inheritance tree.
Remember that the pyDefault Case Type rule contains more configuration details than just the Case Designer configuration details.
Stage Configuration
Remember that a case defines the work we want to complete. A case is then broken up into a set of
stages; stages are logical milestones that work will transition through. Stages comprise a set of actions
that can be a single step, multiple steps, or the launch of another case. PRPC handles moving the work from one step to the next, then one stage to the next. How does PRPC know when to start a stage?
You configure stage behavior in the Case Explorer.
We configure the stage behavior with the Configure Stage Behavior menu option.
Click the Configure Stage Behavior, to open the Stage configuration dialog box.
Stage Name We can change the name of the stage here also, if we want to.
Stage service level agreement Applies a service level rule to the stage. The service level starts when
the case enters the stage, and ends when the case leaves the stage or the process otherwise stops.
Skip stage when Use this option to skip a stage if a when rule resolves to true. For example, suppose
a customer is prequalified for a home loan, therefore a background check is no longer needed. A stage
used to conduct a background check can then be skipped.
Is this a resolution stage? By checking the box we identify the current stage as a resolution stage. A
resolution stage indicates a stage where you expect to resolve a case. When selected the stage name is
underlined in red, in the Case Explorer. When a stage is marked as a resolution stage, a case is not
automatically resolved upon completion of that stage. Resolving the case needs to be done from within
the application. The benefit of marking a stage as a resolution stage is that it gives business users a
visual indication of which stages are expected to complete a case.
When all stage processes are completed or skipped
Transition to the next stage - when all the steps are processed, the case instance transitions
automatically to the next stage in the primary stages.
Stay in the current stage - tools such as the Change Stage smart shape or the Change Stage
flow action allows the system or the end user to move to another stage.
OPTIONAL PROCESSES We can specify optional processes, which are implemented as flow rules.
These optional processes appear to end users in the Perform Harness, under the Other actions menu.
Users can run optional processes as needed. For example, suppose we select stay in the current stage,
and provide users different processes which they can select to move to a more appropriate stage. We are
allowing users to decide which stage to go to, in this scenario. We are enabling the users to make a
manual decision as needed. One concrete example: when a shipped item is received, the users can select to go to the pay invoice stage or the return shipment stage.
In addition to stage level optional processes we have configured here, we can also have case wide
supporting processes which we configure on the Case Designer Details tab. Both configurations
appear in the processes section under the action menu for the end users. The difference is that case
designer supporting processes appear throughout the case processing and the stage level optional
processes only appear when the case instance is in that specific stage. If both are configured, both
processes appear without any duplicates.
OPTIONAL ACTIONS We can have optional actions, which are implemented as local flow actions.
These optional actions appear to end users in the perform harness, under the Other actions menu.
A local action, when processed, does not move the case instance from the assignment shape. A flow action, when processed, moves the case instance to the next shape in the flow. AttachFile is a standard local action, which end users can use optionally to attach a file.
Local actions can be:
Assignment specific - configured on the assignment shape
Flow wide - configured on the flow rule
Stage wide
Case wide
We can also use a validate rule to determine whether the case enters the stage or not, by selecting
Configure entry validation from the menu. This provides us with two validation options:
Stage entry validation - a specified validate rule runs against the case.
Attachment validation - the case must contain an attachment of the specified type.
If either validation fails, the case returns to the previous stage with the appropriate validation error
message.
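Several of the stage options above are predicates over case data; the "Skip stage when" behavior in particular can be pictured as a simple guard evaluated on stage entry. The sketch below is conceptual, not Pega's engine, and the stage names and `prequalified` flag are invented for the home-loan example.

```python
# Conceptual sketch of "Skip stage when": each stage carries an optional
# when-style predicate over the case data; a true result skips the stage.

def run_stages(case, stages):
    """stages: list of (name, skip_when) where skip_when is a predicate
    over the case data, or None. Returns the stages actually entered."""
    entered = []
    for name, skip_when in stages:
        if skip_when is not None and skip_when(case):
            continue  # predicate is true: the stage is skipped entirely
        entered.append(name)
    return entered

stages = [
    ("Qualification", None),
    # Hypothetical when rule: skip the background check for
    # prequalified customers, as in the home-loan example above.
    ("Background Check", lambda c: c["prequalified"]),
    ("Approval", None),
]
print(run_stages({"prequalified": True}, stages))
# → ['Qualification', 'Approval']
```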
Step Configuration
Once the stages and steps are added, the steps can be configured to meet different business needs. Remember that each step creates a flow rule. We can configure the step's intended purpose, and whether the steps run in sequence or in parallel. A step can be reused in any stage.
We can configure the step behavior with the menu option Configure step behaviors.
When we select Configure step behaviors, the Step configuration dialog box below appears.
Single Step Assignment creates a flow rule with a single assignment shape and start and end shapes.
Case creates a flow rule with a single Create Case(s) smart shape and start and end shapes.
Approval creates a flow rule with a single subprocess shape. The subprocess shape calls the standard pzApprovalFlowWrapper flow. Approval steps can be configured for processing by a single operator, or a cascading series of operators based upon either the existing reporting structure or an authority matrix configured by a decision table. The cascading approval option behaves similarly to the Cascading Approval smart shape, and can be configured in the same manner.
Attachment creates a flow rule with a single subprocess shape. The subprocess shape calls
the standard pzAttachFile flow, which itself consists of a single assignment that calls the
pxAttachContent flow action.
Multi Step Process creates a flow rule with two assignment shapes. The flow rule can be
edited to add more shapes or to remove shapes. In essence, any step that does not qualify as an
Assignment, Case, Approval or Attachment step is a multi-step process step.
Start Step The Start Step section allows us to determine when, if at all, in the stage the user is allowed to perform the step. When the case enters the stage, it automatically attempts to prompt a user to perform the first step. Subsequent steps can either occur in parallel or in sequence.
Upon stage entry steps are processed concurrently when the case instance enters this specific
stage.
Upon completion or skip of previous step steps are processed sequentially when the case
instance enters this specific stage.
The first step of a stage must always have upon stage entry set to true. For any subsequent steps, we
can choose either concurrent or sequential evaluation. Each step configured to start upon stage entry
begins a new step sequence, and each sequence is indicated with a double line colored blue.
and when We can have a when rule in this field. If the when rule is false, the step is skipped. For example, if the purchase request contains hardware or software items, we need to get approval from an IT Manager. We can have a when rule IsITApprovalReqd for the IT Manager Approval step.
When a specific step of a stage is processed, the subsequent step in that stage is automatically kicked off if it is marked to start upon completion or skip of the previous step. If, in that previous step, we change to a previous stage using the Change Stage smart shape, another step in the previous stage is also started. If this effect of two steps across two stages being kicked off is not desired, we can use the standard when rule pxIsInCurrentStage in the subsequent step.
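The start settings above amount to a small grouping rule: each "upon stage entry" step opens a new concurrent sequence, and each "upon completion or skip of previous step" step joins the current one. A minimal sketch of that grouping (the "Notify Requestor" step is a made-up example; the two approval steps follow the Purchase Request runtime example later in this lesson):

```python
# Sketch of how step start settings partition a stage into sequences:
# on_entry=True opens a new concurrent sequence; on_entry=False appends
# the step to the sequence started most recently.

def step_sequences(steps):
    """steps: list of (name, on_entry) tuples in stage order.
    Returns the concurrent sequences implied by the start settings."""
    sequences = []
    for name, on_entry in steps:
        if on_entry or not sequences:
            sequences.append([name])   # starts a new sequence on stage entry
        else:
            sequences[-1].append(name) # runs after the previous step completes
    return sequences

steps = [
    ("Approval to Sales Manager", True),
    ("Approval to Purchasing Manager", True),  # second concurrent sequence
    ("Notify Requestor", False),               # sequential, after the previous step
]
print(step_sequences(steps))
# → [['Approval to Sales Manager'], ['Approval to Purchasing Manager', 'Notify Requestor']]
```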
Launch on re-entry? Once a stage is processed, it can be re-entered from another stage by the change stage process or manually by the end user. Only the steps that have this option selected are started automatically when the stage is re-entered.
Runtime User Interface of the Stages
We need to understand that how we configure the stages and steps has an impact at runtime. When a case that is designed with stages is instantiated, we can see what stage the case instance is in. All the primary stages show in blue and the stage we are in appears in black. Alternate stages appear between the primary stages where the case instance deviates from the happy path. For example, if the case instance is rejected right in the initiation stage itself, the Rejection alternate stage shows up in between the initiation and review primary stages. If it is rejected during the review stage, the Rejection alternate stage shows up in between the review and fulfillment primary stages.
The example below shows multiple steps configured to run concurrently. This is a different application to show a Purchase Request case. Each of the areas identified in the screen shot is described below.
1. Here the approval stage has been configured to have two steps that run upon stage entry: approval to sales manager and approval to purchasing manager. Both steps run simultaneously and create two assignments. Remember, not all the steps have to create an assignment, though. A step can be a Single Step Assignment, or Multi Step Process, or Case. Single Step Assignment steps always have one assignment. A multi-step process may have none, one or more assignments.
2. Under the other actions menu, there are the flow actions for the assignments. In this example, the operator has the privileges of both the Sales Manager and Purchasing Manager roles, so we see the flow actions for both of the assignments. Under the flow actions, local actions appear if they are configured. Under the main menu of other actions, optional processes appear if they are configured. Just a quick review here: local actions can be assignment specific, flow wide (step), stage wide or case wide. Optional processes can be stage wide or case wide. Based on the configurations, these will be available to the end users when they are at a specific assignment or step or stage or case.
Clicking the number pulls a standard report as shown below.
Conclusion
Now, you should understand how to configure stages and steps. Stages and steps not only help break the complex case workflow into manageable tasks, but also allow for reuse of the processes. Stages in the case designer show the overall view of the case, instead of having to open the starting flow and follow it through the whole thread to understand the business process.
At runtime, we can see what stage a specific case instance is in, and also a summary of all the cases and the stages that they are in.
Case Hierarchy
During case decomposition into smaller tasks, we identify stages and steps. In the process, if we see that the tasks are individual transactions that have their own lifecycle independent of the parent case, then we create them as subcases. The parent case and subcases form a case hierarchy. While designing our Purchase Request case, we identify two more subcases, Purchase Order and Inventory Selection. The subcases are initiated at the appropriate step of the parent case and are eventually processed to resolution with their own lifecycle.
When creating the subcases in the case hierarchy, understanding how to configure the subcases' inheritance helps architects maximize reuse of the rules and understand where the rules are pulled from when rule resolution executes. Understanding how to instantiate the subcases also helps the architect instantiate the case at the right time and only as needed. This lesson covers these topics, the advanced settings of the creation of subcases, and the different ways that we can instantiate subcases.
When subcases are instantiated, the subcase might need data from the parent case. And we might need
to aggregate the data from several subcase instances to the parent case. This lesson covers data
propagation to and from the parent case and explains how to use the calculation feature to aggregate
properties in the subcases. Architects need to decide whether the cases are to be locked when opening
the cases or only when submitting the cases. We also need to decide whether the parent case is to be
locked when the subcase is being worked on. We will also cover different locking configurations that
impact parent and subcases.
At the end of this lesson, you should be able to:
Instantiate subcases in the case hierarchy and choose the best way of instantiation
Use the different locking configurations related to subcases and parent case
Understand the key properties that relate to the parent case and subcases
Derives From (Directed) We use this field to change the directed inheritance chain. All the cases are typically derived from Work-Cover- or one of its descendants, so that we can take advantage of several standard features available in the Work-Cover- abstract class and its super classes.
Derives From (Pattern) We use this field to change the pattern inheritance chain. Choosing the appropriate class from the dropdown maximizes reuse and lets us indicate where we want the rule resolution algorithm to pull the rules from. We can choose the same workpool class as that of the parent, the class of the parent case, the class of a sibling case or a class of any case in the application on which we are working.
Ruleset and Version Use this field to select the ruleset and version where the Case Type rule for the case is going to be saved.
Remote Case Type Use this field to select the Remote Case type to publish this case as a remote case for another application. This is only required for Federated Case Management. Federated Case Management will be covered in a different lesson in an advanced course.
Create Starting Process Use this field to create the pyStartCase flow rule, with a default short description of Create case name. Always select this option when creating cases because this creates a starting flow pyStartCase which has one utility shape pzInitializeStage that initializes the stages. This starting flow rule can be edited later to change the short description for the create menu and/or to change the flow rule, if necessary.
If we need to change any of these settings after the creation of the subcase, just update the Class Group and Parent class fields in the class definition form.
If we need to change the ruleset or ruleset version, save the pyDefault Case Type rule under the appropriate ruleset and/or version. Then we can add more starting flows in the Processes tab of the pyDefault Case Type rule. We can also select or deselect Publish as Remote case type in the Advanced tab of the same Case Type rule.
Subcase Case Designer
Step Configuration in the Parent Case Designer
Create Case Smart Shape in any Flow rule
Using the Subcase Case Designer
Using the Subcase Case Designer, we can instantiate a subcase automatically or manually. To instantiate a subcase instance automatically and/or manually by our end users, select Case Designer > Details Table and click the Edit link by Instantiation.
Let's begin by learning how we configure a subcase to instantiate automatically. We may want to instantiate a subcase automatically if we have a dependency condition that has to be met.
For example, we have a Catalog Maintenance subcase that needs to be instantiated when the Inventory Selection sibling subcase completes pulling items from inventory and the parent case reaches the fulfilled status.
Note: We can also instantiate the subcase Purchase Order automatically under the Purchase Request parent case, by selecting the check box Automatically by system and selecting the radio button when the parent case starts. Or we can use a When rule, and when it evaluates to true, the subcase gets instantiated.
The following screen shot shows the instantiation of the Purchase Order subcase when the Purchase Request parent case has a status of Pending-Fulfillment and the Inventory Selection sibling case has started.
We can select Manually by user when we want to use a When rule; then, when the rule evaluates to true, our users can create a subcase manually.
For example, our customer service representatives (CSRs) modify the customer address information infrequently. Therefore, there is no need to instantiate the Address Change case all the time. It is better to have the CSR manually instantiate the case.
At runtime, end users can manually create the subcase from the parent case context as shown below. If the subcase is configured to be instantiated manually, then users can create the subcase Purchase Order from the parent Purchase Request case. From the Other actions menu, users select the Add Work menu and select Create Purchase Order Case to create the subcase manually. This lets our users control the instantiation process and instantiate the subcase as needed.
Using the Step Configuration in the Parent Case Designer
We can use this configuration option whenever we want a subcase to be instantiated automatically by the system when a step of a stage is reached in the parent case. This method gives us more options for instantiating multiple case instances or a top level case.
For example, we create a purchase order for a laptop and our purchase request case is created in the Purchase Request system.
In the case designer of the parent case, we can add a step in an appropriate stage and we can configure it to instantiate a subcase instance. We can also use an optional When rule, and the subcase instance is instantiated only when that rule evaluates to true.
If we choose the Create a top case option instead, the case is instantiated as a top level case; that is, it is not a subcase of a parent case. Any case can be instantiated as a stand-alone top level case. We can also create another parent case, if the situation warrants. A Purchase Request parent case can create another Purchase Request parent case instance, or it can create a Purchase Order top level case, even though Purchase Order is a subcase in the case hierarchy.
For example, now we want to add another item to our purchase request for a laptop. We found out that our laptop doesn't come with a bag, so we create another purchase request case for the bag. The laptop bag can be added as another line item in the same request or it can be added as another purchase request case in a specific step. How the flow gets designed depends on the business requirements. Here we see that we can instantiate the parent or any of the subcases as a top level case.
Using the Create Case Smart Shape in any Flow Rule
We can use this configuration option whenever a subcase is to be instantiated from a flow rule of any case, regardless of whether it is the parent, a sibling or the case itself. This method gives us more options when instantiating multiple case instances or a top level case.
For example, a healthcare insurance company gets a file from their providers and they want to create claims cases after certain steps are done and certain conditions are met. A flow that is part of a case, such as the file processor case, parses the file record, and at the end of the file record a decision is made whether or not to create a claim case.
A subcase can be instantiated in any flow rule by adding the Create Case(s) smart shape. To add the Create Case(s) smart shape to the flow rule, select the Shapes menu > Smart Shapes. Once the shape is added to the flow rule, the shape can be configured just like a step in the Case step type configuration.
Getting Data from the Parent Case to the Subcase(s)
When a subcase is instantiated as part of the parent case, we might require some data that is already part of the parent case.
In the Case Designer of the parent case, we can use the Data Propagation option to set the values in the subcase(s). The Data Propagation option can be found on the Details tab of the Case Designer. To propagate data from the parent case to the subcase, we click the edit link next to Data Propagation to open the Case Designer: Data Propagation dialog box.
We can propagate the data from the parent case Purchase Request into the subcases, Purchase Order and Inventory Selection. If we are simply taking the data without making any changes, we can use the data propagation option; we do not have to select Also apply Data Transform. If we are propagating data from the parent to the subcase conditionally, or looping through a page list, or if we need reuse, we can use the Also apply Data Transform option as well. We will learn about using data transform rules in another lesson.
Using a data transform rule, we can propagate data from the parent case to the subcase in the step configuration, in the case designer of the parent case.
If the data from the parent case changes after the data has been propagated to the subcase during the instantiation of the subcase, the data change is not reflected in the subcase. If the latest value of a property of the parent case is needed, then we use a data transform rule to get the value just before the processing of the step/flow/shape of the subcase.
Calculating a Parent Case Property from Subcase Properties
If we want to track the total cost of something, we would need to aggregate the calculation in the parent case from the properties of all the subcases that have been instantiated within the parent case. For example, when Purchase Request subcases are created out of a Program Fund parent case, we want to keep track of the total cost of the purchase requests made, so that we don't go over the funds allocated in the Program Fund parent case. So, we aggregate the total cost of purchase request subcases using a calculation in the Program Fund parent case.
In the Case Designer of the parent case, we can use the Calculations option to set the values in the subcase(s). The Calculations option can be found on the Details tab of the Case Designer.
To see how the Calculations option works, let's calculate the Total Actual Cost of the Purchase Request parent case. We will need to find the sum of the Invoice Amount properties of all the Purchase Order subcase instances and the Inventory Cost of all Inventory Selection subcase instances. Calculation configuration information is stored in the Processes tab of the Case Type rule of the parent case.
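The roll-up the Calculations option performs can be pictured as a simple sum over the subcase instances. This sketch is not Pega code; the subcase type names and property names mirror the example above but the data structures are hypothetical.

```python
# Conceptual sketch (not Pega code): rolling subcase values up into a
# parent-case total, like the Calculations option computing Total Actual Cost.

def total_actual_cost(subcases):
    """Sum InvoiceAmount from Purchase Order subcases and InventoryCost
    from Inventory Selection subcases (names are from the lesson example)."""
    total = 0.0
    for sub in subcases:
        if sub["type"] == "PurchaseOrder":
            total += sub["InvoiceAmount"]
        elif sub["type"] == "InventorySelection":
            total += sub["InventoryCost"]
    return total

subcases = [
    {"type": "PurchaseOrder", "InvoiceAmount": 250.0},
    {"type": "PurchaseOrder", "InvoiceAmount": 100.0},
    {"type": "InventorySelection", "InventoryCost": 40.0},
]
print(total_actual_cost(subcases))  # 390.0
```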
Locking of the Parent Case and the Subcases
Case instance and work object locking has been enhanced in Pega 7. Users can now select one of two configuration options: Default locking or Optimistic locking. The business can make a decision on whether to lock the cases when opening them or only when submitting them.
Default locking locks the case when an operator opens it. If a second operator tries to open the same case, they get a message that the case is locked by the first operator. The second operator is able to open the case in review mode only; the lock was acquired when the case was opened by the first operator, and no actions can be taken by the second operator.
Optimistic locking enables both operators to open the case. No lock is obtained when the case is opened; the lock occurs when the user clicks Submit. When two operators are working on the same case, the changes to the case are made by whoever submits the case first. The second operator receives a message with the first operator's name and the time of the change. There is also a refresh button that allows the second operator to get the new changes made by the first operator. The second operator's changes are not applied until they click refresh and submit their action after the refresh.
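The two strategies can be contrasted in a small sketch. This is not Pega code: default locking is modeled as lock-on-open, and optimistic locking as a version check at submit time, which is the standard way a "first submit wins, second must refresh" behavior is implemented.

```python
# Conceptual sketch (not Pega code) of the two locking strategies. With
# default locking the first opener holds the lock; with optimistic locking
# a version check at submit time detects a conflicting earlier submit.

class Case:
    def __init__(self):
        self.data, self.version, self.locked_by = {}, 0, None

    # Default locking: acquire the lock on open
    def open_default(self, operator):
        if self.locked_by and self.locked_by != operator:
            return "review-only"          # second operator gets read access
        self.locked_by = operator
        return "editable"

    # Optimistic locking: no lock on open, conflict detected on submit
    def open_optimistic(self):
        return self.version               # remember the version we saw

    def submit(self, seen_version, changes):
        if seen_version != self.version:
            return False                  # someone submitted first; refresh needed
        self.data.update(changes)
        self.version += 1
        return True

case = Case()
assert case.open_default("alice") == "editable"
assert case.open_default("bob") == "review-only"

case2 = Case()
v_a = case2.open_optimistic()
v_b = case2.open_optimistic()
assert case2.submit(v_a, {"Status": "Approved"}) is True
assert case2.submit(v_b, {"Status": "Rejected"}) is False  # must refresh first
```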
In the Case Designer of the parent case, we can use the Locking option to set locking on the topmost parent case. If the subcases are instantiated as part of the parent case, they have the same locking settings as the parent. The Locking option can be found on the Details tab of the Case Designer.
The default locking timeout is 30 minutes, but it can easily be changed to a custom timeout as shown below.
If a second operator tries to open the same case, they get the message that the case is locked by the first operator, and they can open the case in review mode only, as shown below.
Remember, if the parent case has default or optimistic locking and the subcase is instantiated under the parent case hierarchy, then the subcase also has default or optimistic locking, respectively. But for the default locking configuration, we can change the custom timeout for the subcase, irrespective of the timeout settings in the parent case.
We can also set parent case locking in the subcase locking configuration. The default option is to lock the parent case when the subcase is being worked on. We can change this, but the recommended approach for most use cases is to lock the parent when the subcase is being worked on. If the Do not lock option is selected, the parent case is not locked when the subcase is being worked on. The advantage of using the Do not lock option is that both the parent and subcases can be processed simultaneously. The main disadvantage is that, since the parent case is not locked, any properties related to the subcases, such as the count of open subcases, are not updated in the parent case. If this is not important and simultaneous processing is preferred, then select the Do not lock check box.
Any time we change to optimistic locking or change the lock timeout in default locking, we are using custom lock settings. These locking settings are persisted on the Case Type record in the following properties: pyLockingMode and pyLockTimeout. If the locking mode is default locking without a custom timeout specified, then the lock timeout is obtained from the system-wide Data-Admin-System setting.
Locking Standalone Cases
The locking process for a standalone case is a little different. Any subcase can be created as a standalone case. That is, we can create an inventory selection case as a standalone case, not as a subcase of the purchase request parent case.
If a subcase is instantiated as a standalone case, the locking configuration is done on the Advanced tab of the Case Type rule for that specific subcase. We can select Default locking and set a custom timeout, or we can select Optimistic locking.
Conclusion
We learned how to create subcases and how subcases become instantiated. Instantiation can be done in a variety of ways, and we should know which configuration option is best suited for the different situations.
We also learned that we can propagate data from the parent to subcases. This is useful when we need the parent case data in the subcases. We should know how to aggregate the values of the subcases' data to the parent case. This is needed if we want to roll up the data from the instances of the subcases to the parent case.
We can have optimistic locking or default locking in the case hierarchy and in stand-alone cases. Optimistic locking allows multiple users to open the same case instance. Default locking locks the case when the case instance is used. There are benefits in both locking mechanisms. We can lock the parent case when a subcase is being processed. Finally, we should know the key properties that are used for the parent-subcase relationship.
Create flows
Edit flows
Creation of Flows
When a step is added in a stage in the Case Designer, a flow rule is created for that step. This is the
recommended and easiest way to create a flow.
A step can represent a simple task, such as review and approve a purchase request. It can also
represent a complex process that involves multiple parties, coordinated with some processing logic, such
as requesting a quote from a vendor. It can even represent another case, one that can be processed in
parallel, such as creating a purchase order case, from the purchase request case. These are achieved
using the following types of steps, Single Step Assignment, Multi Step Process or Case.
Sometimes we have to create a flow rule first before it can be added to a step. For example, a screen
flow, which captures data entry, has to be created first and then it can be added later as a step in any
stage. We will learn about creating screen flows in another lesson. If the flow belongs to a data class,
then we need to create the flow but not as a step in a case. For example, in a purchase request, we want
to get the quote for each of the line items to be purchased from a preferred vendor. The line item may be
a data class for the purchase request case and the Request Quote may be a flow in that class. This flow
is not represented as a step in the Purchase Request case. Instead, we call this flow as a subflow from
another flow associated with a step in the Purchase Request case.
If a flow needs to be created as an internal process for technical reasons, such as parsing a temporary
file and doing some logic behind the scenes, we create the flow, not as a step in a case. The business users
may not be interested in knowing the details of these flows. These flows may not have to be listed as
steps in the Case Designer.
We can create a flow rule from the Case, Data and App explorers using the Create menu as shown
below.
Editing Flows
Once the flows are created, we can edit the flows to meet the business needs by adding more shapes or removing existing shapes.
If the flow is listed as a step in the Case Designer, we click the Configure process detail link under a specific stage in the Case Designer to bring the Process Modeler into the process view. We can add a basic shape, a Smart Shape or an Advanced Shape from the shapes menu as shown below. To delete an existing shape, click on the shape and click the delete button, or right-click on the shape and select delete from the menu option.
Calling Flow(s) From a Flow
When we are editing a flow, we can add a host of basic, Smart and Advanced shapes. Below we see the shapes that can be used to call other flows from a flow. Later we will learn the business reasons for why and when we would use a specific shape to call other flows.
We can call other flows by using one of these three shapes: the SubProcess shape in the basic shapes, and the Split Join and Split For Each shapes in the advanced shapes.
With the Split For Each shape, we can re-run the same flow for a set of records, as stored in a page list or page group. With the items in the LineItems page list, we can request a quote from the vendor for each line item.
The join conditions are All, Any, Some or Iterate. The main flow can continue only when the quote requests from any, all or some of the vendors are complete. If we use the Some join condition, the exit iteration can be based on a when condition or on a count. If we use the Iterate join setting, then the exit iteration can also be based on a when condition or on a count. We configure the conditions of when to process the subflows under the Page Group Iteration Settings tab. This tab is only visible if the Iterate join condition is selected. For example, let's say radon testing is only required as part of home inspection for the state of California. Then, in the home inspection flow, we can call the subflow of Radon Testing with CA for California listed in the subscript order as shown below. If we want to use this flow for multiple states, we can just enter C without the EXACT MATCH option selected and we get California, Connecticut and Colorado, or we can add more items to the list by clicking the Add Item link.
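The join semantics can be summarized in a short sketch. This is not Pega code; it only illustrates the idea of running the same subflow per item and gating the main flow on All, Any or Some completed results. The quote example and item names are hypothetical.

```python
# Conceptual sketch (not Pega code) of Split-For-Each join semantics:
# run the same subflow per item and decide when the main flow may continue.

def run_subflows(items, subflow):
    return [subflow(item) for item in items]   # each returns True/False

def may_continue(results, join, count=None):
    if join == "All":
        return all(results)
    if join == "Any":
        return any(results)
    if join == "Some":                          # continue once `count` completed
        return sum(results) >= count
    raise ValueError(join)

quotes = run_subflows(["monitor", "keyboard", "mouse"],
                      lambda item: item != "mouse")  # mouse quote still pending
print(may_continue(quotes, "All"))            # False
print(may_continue(quotes, "Any"))            # True
print(may_continue(quotes, "Some", count=2))  # True
```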
Configuration of Wait Shape
There may be times when we want a flow to wait before processing continues. For example, we may want a purchase request to be put on hold until more funds are available. So, we need to check to see if enough funds are available to purchase the requested items. If not, instead of rejecting the case, we make the case instance wait using the Wait shape and send a notification to the appropriate manager indicating that the case is in a wait status because there are not enough funds available. In another example, suppose we want to spread the work more evenly amongst our operators. So, when an operator reaches a certain quota processing scheduled tasks, the next task assigned to that operator can be put into a wait status until Monday of the following week.
The Wait shape can be configured for the Timer wait type, and the waiting period can be a future date/time. Let's say that we always have a process that is meant for auditing at the beginning of the year. We can select 1/1/yyyy for the future date, and we can also set the time, if needed. The other option is Time Interval, as shown below. This can be any combination of minutes, hours, days and so on. We might use this option whenever a peer review is requested. We can say that the waiting period is one week, so that the Manager can make the necessary business adjustments within a week and the peer review can be scheduled after the waiting period. We can enter one week or seven days.
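The two timer options amount to two date computations: a fixed future date (such as the "Monday of the following week" example) versus a relative interval (such as one week). A minimal sketch, not Pega code, using the standard library:

```python
# Conceptual sketch of the two Wait-shape timer options: a fixed future
# date ("Monday of the following week") versus a time interval (one week).
from datetime import date, timedelta

def next_monday(today):
    """Date of the Monday in the week after `today` (weekday() is 0 for Monday)."""
    return today + timedelta(days=7 - today.weekday())

def interval_wait(start, days=7):
    """Relative wait: resume `days` after `start`."""
    return start + timedelta(days=days)

today = date(2015, 2, 11)                    # a Wednesday
print(next_monday(today))                    # 2015-02-16
print(interval_wait(today))                  # 2015-02-18
```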
Conclusion
We should now know how to create and edit flows. Most of the time, we create a flow by adding a step in the Case Designer. But in some instances, when a screen flow is required, when the flow belongs to a data class, or when the flow should not be a step, we create the flows using the Create menu in any one of the explorers or the New menu in the App explorer.
Editing the flows can be done using the embedded process modeler in the process view of the Case
Designer. If the flow is not listed as a step, editing can be done using the embedded process modeler of
the flow rule. In either case, security, design and other configurations are done through different tabs of
the flow rule.
SubProcess shapes can be used to call any other flow, a subflow. We can use the Spin Off configuration,
if the main flow and the subflow can run in parallel. Note that we may not need to use this shape at all,
since we can create the configuration using the Case Designer.
We use the Split-Join shape to call any number of different sub flows on any class, Case or data class.
The main flow is configured to wait for the results of any or all or some flows to complete.
We use the Split-For-Each shape to call the same sub flow for each object in a list or a group. The main
flow can be configured to wait for the results of any or all or some or specific iteration flows to be
complete.
And we use the Wait shape to configure the current flow of a case to wait for a certain time, such as the beginning of the quarter or a future date, or for a dependency on another case reaching a specific status, such as an item being shipped from the inventory so that the catalog can be corrected.
Advanced Flow Processing
During Case Management Design, we can create and edit flows to add different shapes based on business requirements. In this lesson we will see some smart shapes that are useful for advanced processes, such as getting multiple levels of approvals for a purchase request. Another requirement might require a duplicate search, to check whether the case instance already exists before a new instance is created. Let's see how we can meet these two requirements with the help of smart shapes.
This lesson covers the usage of three smart shapes: cascading approval, duplicate search, and persisting a temporary case. Along with the use of the smart shape, Case Designer configuration is required for duplicate search.
At the end of this lesson, you should be able to understand how to use:
Cascading Approval Smart Shape
Duplicate Search Smart Shape
We then configure the Cascading Approval so that we can get multiple approvals, by right-clicking on the shape and selecting View Properties. Let's look at approvals based on the Authority Matrix.
First, we need to create a decision table to capture the requirement. This table evaluates multiple rows based on the cost of the purchase request and returns one or more values. This differs from the default decision table configuration, which only returns the result for the first condition satisfied. More information about this configuration can be found in the Automating Decisions lesson.
The value(s) returned by the decision table need to be stored in a property in a page list. Here the property is ApproverID on the ApproverList page list. At runtime, when the case instance reaches this step in the stage, multiple assignments are created one by one to get the approval from each one of the approvers.
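The mechanics above can be sketched in a few lines. This is not Pega code: the thresholds and approver titles are invented for illustration, but the two ideas are from the lesson, a table that returns every matching row rather than the first, and a loop that creates one approval assignment per returned approver.

```python
# Conceptual sketch (not Pega code) of an authority matrix: a decision
# table that evaluates ALL rows and returns every matching approver, and
# an approval loop that creates one assignment per approver in turn.

AUTHORITY_MATRIX = [             # (cost threshold, approver) - hypothetical
    (0,     "Manager"),
    (1000,  "Director"),
    (10000, "VP Finance"),
]

def approver_list(cost):
    """Unlike a default decision table (first match wins), every row whose
    condition holds contributes a result."""
    return [who for threshold, who in AUTHORITY_MATRIX if cost >= threshold]

def cascade(cost, decide):
    """Create assignments one by one; any rejection stops the cascade."""
    for approver in approver_list(cost):
        if not decide(approver):
            return "Rejected"
    return "Approved"

print(approver_list(2500))                   # ['Manager', 'Director']
print(cascade(2500, lambda who: True))       # Approved
```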
Search for Duplicate Cases
Duplicate search is a useful functionality that prevents the creation of redundant case instances. Duplicate search functionality can be configured with the use of a smart shape and the Case Designer configuration. When case instances with similar characteristics are created, one of them gets resolved with the status resolved-duplicate. We can be proactive and prompt the end users about potential duplicates using the duplicate search functionality.
When two Quality Analysts find a bug at almost the same time, two bugs with different bug IDs are created. Later, when the duplicate is found, one of the bugs is closed with a status of resolved-duplicate. We can avoid this situation using the duplicate search functionality. When the duplicate search is used, the second quality analyst is prompted with a message saying that there was another bug created with similar characteristics. The analyst can then make a decision to continue and open a bug, or resolve the bug with the status of resolved-duplicate.
From the Case Designer Details tab, we can configure duplicate search. Configuring this creates a Case Match rule with the default name pyDefaultCaseMatch. If we want, we can create a different case match rule by editing the Case Match field in the Processes tab of the Case Type rule. If duplicate search has been configured before in the Case Designer, this rule should be unlocked to edit the duplicate search configuration in the Case Designer.
1. Here is the sample for identifying duplicate bug cases. Must Match conditions, as the name suggests, must match between the current case instance that we are creating and past instances. The left side column is for the existing case instances and the right side column is for the current case. In the example shown above, BugArea, the area of the application where the bug is found, of the potential duplicates should be the same as in the current case. pyStatusWork, the status of the potential duplicates, does not contain Resolved, that is, the status of the duplicates is not resolved. And the description of the duplicate cases contains the description of the current case.
2. Weighted Match Conditions are used next; if the conditions are matching, then weights are assigned for each of the matching conditions as configured. Note, the current case and existing case instances can be in either column. Current case properties have to be referenced with the keyword Primary, as mentioned in the configuration dialog box itself.
3. Potential duplicates are identified based on the sum of the weighted conditions. In this example, if any two of the new bug's property values, Severity, Priority, and Type, match with the existing cases, they are identified as potential duplicates.
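The three steps above, must-match filtering, weighted scoring, and a threshold, can be sketched as follows. This is not Pega code; the weights and threshold are hypothetical values chosen only to illustrate the evaluation order.

```python
# Conceptual sketch (not Pega code) of Case Match evaluation: Must Match
# conditions filter candidates, then Weighted Match conditions are summed
# and compared against a threshold. Weights and threshold are hypothetical.

WEIGHTS = {"Severity": 30, "Priority": 30, "Type": 40}
THRESHOLD = 60

def is_potential_duplicate(new_bug, existing):
    # Must Match: same application area, existing bug not resolved
    if existing["BugArea"] != new_bug["BugArea"]:
        return False
    if existing["Status"].startswith("Resolved"):
        return False
    # Weighted Match: sum weights of matching properties
    score = sum(w for prop, w in WEIGHTS.items()
                if existing[prop] == new_bug[prop])
    return score >= THRESHOLD

new = {"BugArea": "Checkout", "Status": "New",
       "Severity": 1, "Priority": 2, "Type": "Crash"}
old = {"BugArea": "Checkout", "Status": "Open",
       "Severity": 1, "Priority": 3, "Type": "Crash"}
print(is_potential_duplicate(new, old))      # True (30 + 40 >= 60)
```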
Next, we need to check for the duplicates in an appropriate step of a specific stage of the case. Typically, we check the duplicates after the initial screen of gathering basic data. It can be anywhere in the case stages though. Here is a sample for a Program Fund case. The checking is being done after the initial screen.
When the flow is associated with a step, we can edit the flow to include the Duplicate Search smart shape. Inclusion of the smart shape gives users the ability to continue or to resolve the case as a duplicate. For cases identified as not duplicate, any other shapes can be added after the duplicate search smart shape in this flow, for more functionality.
At runtime, if potential duplicates are found, users are prompted. If we know that the new case is not a duplicate, we don't need to select any from the list of the potential duplicates and can click Continue. The case moves to the next step in the current stage or in the subsequent stage. Or we can select one case from the list and click Resolve as Duplicate to resolve the current case as Resolved-Duplicate.
Persist a Case
We can create temporary case instances and then make them permanent once we know that the case instance is needed. This case instance acts as a placeholder. Let's say that a customer calls on the phone and inquires about a product that they are interested in purchasing. When we create a sale case, we can create it as a temporary case with the details of the customer, the product and the product purchase quoted. When the actual sale happens, the temporary case can be converted to a permanent case.
In our example of duplicate bugs, without the duplicate search configuration, we are requiring the end users to close the duplicate bug manually. With the duplicate search configuration, we prompt the users to identify whether the new bug could be a duplicate. In either case, the new bug is created with a bug ID and is resolved with a status of resolved-duplicate. We can improve this process with the creation of a temporary case when the new bug is created. No bug ID is identified and nothing is saved in the database; all the data is transient. Once the user identifies that the bug is indeed not a duplicate, the temporary bug case can be made a permanent case.
This involves two steps. First, we mark the case instance creation as a temporary object. Then we persist the case to make it permanent.
In the starting flow of the case, we can select the Temporary object option under the Process tab. The case is not actually created in the database if this option is selected.
Conclusion
As we saw, we can use the Cascading Approval, Duplicate Search and Persist Case smart shapes to make the process of getting multiple approvals, identifying potential duplicates and converting temporary cases into permanent case instances much easier, with some simple configurations.
For the duplicate search, we need to configure a Case Match rule through the Case Designer as well.
Cascading approval can be based on an authority matrix which can be captured in a Decision Table rule
or it can be based on the reports to or group manager structure.
Whenever there is no need for a permanent case in the beginning of the case flow, we can create it as a
temporary object and once it is confirmed we can make it a permanent object by persisting the case.
Screen Flows
A screen flow is a type of flow rule that is specifically used to break up data entry so that related fields
appear on a form and in a sequence of forms. For example, a credit card application form can be broken
into a set of forms: one for personal information, one for financial information, one for employment
information, one for contact and security information, and one for any additional information.
We can configure how we want the forms of the screen flow rules to be presented to our users: tree
navigation, tabbed navigation or in the form of bread crumbs.
We can decide if we want our users to go through each form of the screen flow rules in sequence or if we
want them to be able to jump around from one step or screen to another screen either forwards or
backwards.
We can decide if we want the data to be saved on each form or only on the last form. Since the data
being captured is usually for a single person, the whole screen flow is routed to one operator.
At the end of this lesson, you should be able to:
Explain the purpose of the screen flows and how to create them
Differentiate screen flows and regular flows and how some configurations are set differently or not
needed at all.
By default, the rule name is the concatenation of the short description. If we want, we can change this default rule name using the Edit link in the Identifier field.
We click the Create and Open button to create and open the screen flow rule. In the Design tab of the flow rule, we can see that the Category is ScreenFlow. Regular flow rules have the category FlowStandard in this tab.
We know that we want this screen flow to have three distinct forms to capture the data we need: Personal Information, who it is for and other relevant data; Line Items, what items are needed for this request; and Address Information, where we need to verify the shipping address and edit it if a change in the address is needed.
As we can see, each form is an assignment shape in the flow rule. But compared to regular flow rules, these are not truly assignments. The whole screen flow is an assignment and is assigned to one user.
In regular flows, we configure where the assignment will be routed and, on the connectors, we configure the flow actions we want users to perform on the assignment.
Configuration of User Interface Options
We use the harness rule to define the appearance and processing of the forms we are going to present to the user. In a regular flow, for each assignment, we can have a different harness rule for different user interface displays. Since the screen flow is an assignment as a whole and is routed to only one user, the harness rule setting is configured on the start shape for the whole flow.
If the harness rule is set to TreeNavigation7, the user interface shows the steps of the screen flow as a tree navigation. This user interface shows the current step of the screen flow highlighted in blue and shows all the steps in the navigation tree. Users can navigate using the back and next buttons or by clicking on a menu item in the tree navigation. When clicking on the tree navigation, users can jump multiple steps backwards or forwards.
If the harness rule is set to TabbedScreenFlow7, the user interface shows the steps of the screen flow as tabbed navigation. This user interface shows the current step of the screen flow highlighted in blue and shows all the steps in the tab navigation. The user can navigate using the back and next buttons or by clicking on a tab in the tab navigation. Users can jump multiple steps forward or backward by clicking on a tab in the tab navigation.
If the harness rule is set to PerformScreenFlow, the user interface shows the steps of the screen flow as bread crumbs. This user interface shows the current step of the screen flow and the previous steps in the horizontal bread crumb display; it does not show the future steps. Users can navigate using the back and next buttons to go back and forth. With the bread crumbs, they can only go back, not forward. So, clicking on a bread crumb in the navigation, users can only jump multiple steps backwards.
The business can decide which user interface option they want to present to their users. There is no difference between these three as far as functionality, except in the Perform Screen Flow, where users cannot jump multiple steps forward. The default sequencing options can also be configured to behave differently, as we'll see in the next section.
Configuration of Sequencing and Post-Processing Options
We can configure a screen flow so that steps are run in a strictly enforced sequence; that is, step 2 is executed after step 1, step 3 executes after step 2, and so on. On the other hand, the screen flow can be set up so that steps are run in any order, enabling users to jump ahead from step 1 to step 5. It's also possible to customize the sequence so that the behavior is somewhere in between.
One of the simplest ways to enforce a strict sequence is to use the PerformScreenFlow harness, as explained above. This user interface option shows a breadcrumb trail that does not include future steps, so there is simply nothing to click to jump to a future step; instead, the Next button must be used. For example, we cannot collect credit card information unless the shipping address information step is processed.
The Tabbed and Tree Navigation screen flows show a representation of the entire flow, and users can click the future steps to jump to them. This gives users the flexibility to fill in any step. This is handy for an application in which the operator is entering data provided by a customer on the phone. The customer may provide the information in an unsystematic way, requiring the operator to jump around and enter data as provided.
We can customize the Tabbed and Tree Navigation screen flows for the sequence on a per-step basis, using the Enable link if using breadcrumb trail checkbox. This configures the step as an entry point, and determines whether or not the step appears in the flow viewer. By default, this option is enabled for each assignment; it must be enabled for at least one assignment in the screen flow. Let's uncheck it for the Add Line Items step in the screen flow to observe the effect on flow processing.
In this example, we've disabled the link for the Add Line Items step, and as such, it does not appear in the navigation view. The only way to get to the step is by clicking the Next button on the previous step, Enter Purchase Request, as shown below.
When we configure a step as an entry point, and the step is represented in the navigation view, we can configure whether end users can jump forward to it or not. This is done by selecting the Only going back checkbox. When this is set, the step is disabled when the user is on a previous step. Users must progress naturally to the step using the Next button.
In this example, we've enabled both the Enable link if using breadcrumb trail and Only going back options for the Add Line Items step. Now it appears in the navigation view, but it is grayed out. The only way to get to the step is by clicking the Next button on the previous step, Enter Purchase Request, as shown below.
Without the purchase request information, such as when it is needed and other relevant information, we cannot add line items to this purchase request. However, if the user is on a subsequent step, he or she is allowed to go back to this step from the navigation tree.
Let's review the key points regarding use of the Enable link if using breadcrumb trail option:

When Enable link if using breadcrumb trail is not selected, the step does not show up in the navigation flow listing, whether we go forward or backward in the flow. Hence, we cannot jump to this specific step; the only way is to use the back and next buttons. This is true for all three standard harness configurations.
If we enable both Enable link if using breadcrumb trail and Only going back, the step shows up in the navigation flow listing. The step is initially greyed out from a previous step, and the only way we can go to the step is using the next button. Once the step is processed and we have moved beyond it, we can jump back to this specific step from any forward steps.
In the flow action AddLineItems, the validation rules are listed in the Validation tab. This validation may validate for the presence of at least one line item and, for each line item, the presence of required fields. The validation rules run only if Perform post-processing when navigating back is selected for the assignment shape of the screen flow.
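The kind of check such a validation performs can be sketched outside of PRPC. This is a conceptual illustration only; the field names (ProductID, Quantity, UnitPrice) are assumptions for the example, not actual property names from the AddLineItems flow action.

```python
def validate_line_items(line_items):
    """Collect validation messages for a purchase request's line items.

    Mirrors the checks described above: at least one line item must be
    present, and each line item must have its required fields filled in.
    (Field names are illustrative, not actual PRPC property names.)
    """
    errors = []
    if not line_items:
        errors.append("At least one line item is required.")
    required_fields = ("ProductID", "Quantity", "UnitPrice")
    for index, item in enumerate(line_items, start=1):
        for field in required_fields:
            if not item.get(field):
                errors.append(f"Line item {index}: {field} is required.")
    return errors
```

In PRPC itself, this logic would live in validation rules attached to the flow action rather than in code.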
Configuration of Persistence and Routing Options
The start shape has a Save on Last Step property option. When selected, the screen flow is not saved until the last step. This is often used in tandem with the Allow Errors option, which when selected does not prevent processing even though errors are raised, until the screen flow is complete. This is handy for an application in which the operator is entering data provided by a customer on the phone. The customer may provide the information in an unsystematic way, requiring the operator to jump around and enter data as provided. It is only at the end that the system prevents the operator from continuing if errors were flagged throughout the screen flow.
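The interaction between these two options can be sketched as follows. This is a conceptual model of the behavior described above, not PRPC's actual implementation:

```python
class ScreenFlowSession:
    """Sketch of the Save on Last Step / Allow Errors behavior:
    errors flagged mid-flow do not block progress, the case is only
    persisted at the end, and outstanding errors block completion."""

    def __init__(self, allow_errors=True):
        self.allow_errors = allow_errors
        self.errors = []    # errors flagged throughout the flow
        self.saved = False  # with Save on Last Step, saved only at the end

    def submit_step(self, step_errors, is_last_step=False):
        """Return True if the user may continue past this step."""
        self.errors.extend(step_errors)
        if not is_last_step:
            # With Allow Errors, flagged errors do not block progress.
            return self.allow_errors or not step_errors
        # On the last step, outstanding errors prevent completion.
        if self.errors:
            return False
        self.saved = True  # Save on Last Step: persist only now
        return True
```

The key design point this models: validation feedback is deferred, not skipped, so the operator can enter data in any order the customer provides it.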
Differences between Screen Flows and Regular Flows
We have learned about some of the differences already. Now, let's look at all the key differences between screen flows and regular process flows, so that we can understand why some configuration is done differently and why some configuration is not needed at all.
Screen flows use an assignment shape for each step of the screen flow. But these are not truly assignments, because we are not assigning different tasks to be completed by different end users as we do in a regular flow. We can consider the whole screen flow as one task to be completed, and hence it is routed to one user. So, we set the routing on the start shape, whereas in a regular flow, we set the routing on each assignment shape.
In a regular flow, from each assignment, the end users can take different actions, such as approve or deny or hold the purchase request. So, in a regular flow, flow actions are defined on the connectors, and likelihood is configured on the connector. In a screen flow, there is no need to take multiple actions; users just need to go back and forth between multiple forms for data entry. Hence, the flow action is defined on the assignment shape itself. Only one connector is possible from the assignment shape, and so no likelihood can be set.
In a regular flow, we can configure harnesses for each assignment shape, as there could be a
need to show different relevant data for end users to make decisions when they get the
assignment. In screen flows, we configure the harness on the start shape for the whole screen flow.
In the screen flow, the assignment shape does not have a ticket, an SLA or notify configurations,
as there is no need for any of them.
A screen flow has a limited number of shapes. Since the screen flow is executed by a single
user, external systems cannot be involved. Hence, there are no integrator shapes, and no
assignment service shapes.
Conclusions
We learned the purpose of the screen flow rules and how to create them. Screen flows are meant for data
gathering forms.
Screen flows can have tabbed, tree, or breadcrumb navigation. We can configure screen flows to process the steps in sequence, to let users jump around freely, or as a hybrid. Sequencing options are configured on the assignment shape of the flow rule, which represents a step in a screen flow rule.
The whole flow is considered one assignment for an operator and hence the configuration of the harness,
persistence and routing are set in the start shape of the screen flow, compared to settings in the
assignment shape for a regular flow.
Work Status
As a case instance (or work item) is instantiated and as it progresses towards resolution, a work status
property tracks its state, whether open or resolved. The standard property for the case instance status is
Work-.pyStatusWork. Its values are restricted and controlled. This is also called work item status. Do not
confuse case instance status with assignment status which is a different property and not in the scope of
this lesson.
At the end of this lesson, you should be able to describe the standard work status values and explain how they are set and used.
A New work status indicates that the case instance has just been created and has not been reviewed or
qualified for processing.
The Open status indicates that the case instance is being processed by the organization which is
supposed to process it.
The Pending status indicates that the responsibility of processing the case instance is currently with an
external organization. So the processing of the case instance is suspended until the external organization
or group has done its part.
A Resolved status is generally the final status for a case instance as it indicates the completion of the
work. Usually, a case instance in a resolved status is not modified by any later processing. Resolved
items are stored in the database and no longer appear on a work list or in a workbasket. Users can work
on a resolved work item; but first it must be reopened.
A few standard values are defined in each category. To access them, click Designer Studio > Process &
Rules > Processes > Status Values.
Enter a label and identifier, which is the rule instance name. Here we are creating an Open-Fulfillment Field Value rule instance under the SAE-HRServices-Work class. Enter or select a value for the Field name. This should be the property name. In our case, the property name is .pyStatusWork. Once this field value is created, we have a new status value to use in our application. It should now appear in our Status Values list.
Use PRPC Standards to Update Work Status
Most flow shapes, including Start, Assignment, End and Smart shapes, provide a means to set or update work status. To set or update the property Work-.pyStatusWork value, do not use the property-set activity method or a data transform rule to set a value for this property; instead, use the Properties panel of the flow shapes. The Status tab on the panel allows us to define the new work status value by providing a value in the Work Status field. This is the preferred way.
Here is a sample screenshot where we are setting the work status to Open in the assignment shape.
When the case instance advances to a shape that sets the status, PRPC automatically updates the status of the work item to the value defined for that shape. The appropriate flag indicator appears next to the work status value.
The End shape provides two fields: Flow Result and Work Status. Make sure to use Work Status for this purpose. If the flow belongs to the last step of the resolution stage, the work status is one of the Resolved status field values, such as Resolved-Completed, Resolved-Withdrawn, Resolved-Canceled, etc.
When we set the work status of a case, PRPC uses the standard activity Work-.UpdateStatus to update the value of the property Work-.pyStatusWork.
Note: Activities are covered in more detail in the Activities lesson. The brief discussion that follows explains how PRPC uses the UpdateStatus activity to update the work status for a case.
An activity is a sequence of structured steps, designed to automate some aspect of case processing. The activity Work-.UpdateStatus, which can also be called from a flow utility shape, changes the property directly and calls some standard activities, such as Work-.Resolve if the status value is one that starts with the word Resolved.
There are also a few standard rules which call the Work-.UpdateStatus activity. Activities such as Work-.WorkBasket and Work-.Worklist, which place an assignment in a workbasket or worklist respectively, also call Work-.UpdateStatus.
Key standard functionalities which leverage the Work Status
When a case instance/work item status is first updated to a Resolved status, PRPC calls the Resolve activity automatically. This activity activates the Status-Resolved ticket. Another particular standard ticket is AllCoveredResolved; if a subcase status becomes Resolved, the immediate parent case is checked. These standard tickets are covered in the Tickets lesson.
PRPC also automatically maintains three standard properties in the Work- class. They are pyElapsedStatusNew, pyElapsedStatusOpen and pyElapsedStatusPending. These properties contain the cumulative time in seconds that a case instance has had a status value that started with the word New, Open or Pending, respectively.
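The accumulation these three properties perform can be sketched as follows. The tuple-based status history below is an illustrative input format for the sketch, not how PRPC stores case history:

```python
def elapsed_by_status_prefix(status_history):
    """Accumulate seconds spent in statuses starting with New, Open or
    Pending, in the spirit of pyElapsedStatusNew/Open/Pending described
    above. status_history is a list of (status, start_ts, end_ts)
    tuples with timestamps in epoch seconds."""
    totals = {"New": 0, "Open": 0, "Pending": 0}
    for status, start, end in status_history:
        for prefix in totals:
            # A prefix match covers values like Open-Fulfillment too.
            if status.startswith(prefix):
                totals[prefix] += end - start
    return totals
```

Note that the match is on the leading word, so a custom status such as Open-Fulfillment still contributes to the Open total.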
Conclusion
The work status conveys important information about the progress of the work item towards completion,
such as whether the work item is New, Open, Pending or Resolved. The work status can be set on any
shape in a flow. When the case instance reaches that shape at runtime, its pyStatusWork property is
updated with the status value from the shape.
If the status of the case instance needs to be updated outside of the flow rules, we can use the standard
activity Work-.UpdateStatus.
Some standard tickets and properties leverage the work status value.
Work Parties
A Work Party is a person, organization, or business involved or interested in some way in the work item or in a case instance. Therefore, they are kept informed as the case instance progresses to completion. In PRPC, we use correspondence to inform the interested parties. Correspondence rules, which include email, mail or phone text, are ONLY sent to entities identified as work parties.
A case instance may have multiple work parties. As an example, let's consider a case instance created because of a customer complaint. The customer and her spouse, the customer's lawyer and the organization's lawyer could all be involved or interested in the complaint. In that case, they are defined as work parties for the customer complaint case instance or work item.
In PRPC, Work Parties are part of the Process category and are instances of the Rule-Obj-WorkParties rule type.
At the end of this lesson, you should be able to:
Explain how to configure a Work Party
Describe the Work Party Data Structure
Add a Work Party to a case instance
Work Party Configuration
A work party represents an entity that needs to be notified about the progress or status of work.
When a case is created as part of a new application through the Application Express wizard, or added to an existing application through the case addition menu in the case explorer, the system creates a default Work Parties rule for the newly created case. This rule can easily be configured from the Details tab of the Case Designer.
Clicking on the edit link displays the Parties form for us to edit or delete existing roles or add a new role.
By default, PRPC adds three roles, Customer, Owner, and Interested, with the appropriate default property values in the Work Parties rule instance for the case on which we are working. This can be modified to suit our business needs. Shortly, we will see what each field means and how these can be modified.
The Description is the unique name for each of the roles we see displayed in the user interface. The Display on Creation checkbox can be selected if we want the work party to appear at runtime when the case instance/work item entry form first appears.
Opening the Work Parties rule, we see the following information that we have configured in the parties form earlier:
Though we can edit, add or delete role entries in this rule form directly, it is easier to use the Case Designer.
Work Party Data Structure
The Party Type on the configuration form of the Case Designer can be one of the following: Party/Company, Party/Government, Party/Operator, Party/Non-profit, Party/Person, or it can be any other custom party that you create.
Party Types are defined in PRPC as instances of the Data-Party class or one of its sub-classes. Process Commander organizes the Data-Party data into meaningful categories.
Data-Party-Com is for business organizations, and the Party/Company Party Type roles are instances of this class.
Data-Party-Gov is for government organizations, and the Party/Government Party Type roles are instances of this class.
Data-Party-Operator is for PRPC application users with an Operator ID record, and the Party/Operator Party Type roles are instances of this class.
Data-Party-Org is reserved for non-profit organizations, and the Party/Non-profit Party Type roles are instances of this class.
Data-Party-Person is designed for any person who is not a PRPC application user, and the Party/Person Party Type roles are instances of this class.
The drop down for the model shows the data transform rules. The data transform rules defined in the class related to the Party Type field, or in one of its parent classes, show up as entries in the Model drop down. The standard data transforms corresponding to the entries in the Model drop down appear in the Data-Party class.
A substantial set of work party properties are delivered with PRPC. A few properties are required, such as:
pxPartyRole, which identifies the role of the party.
pyWorkPartyUri, which uniquely identifies the party.
pyEmail1, which Pega 7 uses to send notifications to the party. If we don't provide a value for pyWorkPartyUri, Pega 7 uses pyEmail1 to determine a value when validating the work party using the standard activity Data-Party-.Validate.
Some commonly used, though not required, properties include pyLastName, pyFullName, and pyAddress.
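The fallback described above — deriving pyWorkPartyUri from pyEmail1 when it is missing — can be sketched as follows. This is illustrative logic only, not the actual Data-Party-.Validate activity:

```python
def validate_party(party):
    """Sketch of work party validation: if pyWorkPartyUri is missing,
    fall back to pyEmail1; fail if neither is available. The dict keys
    mirror the standard property names discussed above."""
    if not party.get("pyWorkPartyUri"):
        email = party.get("pyEmail1")
        if not email:
            raise ValueError("Work party needs pyWorkPartyUri or pyEmail1")
        # Use the email address as the party's unique identifier.
        party["pyWorkPartyUri"] = email
    return party
```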
In the CurrentOperator Data Transform rule, the property pyWorkPartyURI is set to the current operator, using the pxRequestor property pyUserIdentifier. Data transform rules are covered in detail in a different lesson.
Add a Work Party to a Case Instance
Once a work party rule is created and configured for a case, we have multiple ways to add it to a case instance.
In the parties configuration of the Case Designer, if we marked a Party Role as Display on Creation (or Required, which automatically marks Display on Creation), the work party is created when the case instance is created.
In the example shown for the Case Designer configuration, the Owner Role is marked as Display on Creation. The model selected for this role is the CurrentOperator data transform rule. It sets the pyWorkPartyUri as the current operator's userID. We can verify that in the clipboard, on the pyWorkParty page created under pyWorkPage.
We can see the Work Parties information in the standard perform harnesses, on the right hand side, at runtime of the case instance creation.
Here, Owner is the Work Party role name. We can click the Administrator link to look at the work party details.
In the case type rule, starter process flows are listed under the Processes tab.
If we uncheck the Skip create harness option, then when the case instance is instantiated, the new harness displays first, before the actual case is instantiated.
We can edit existing work parties or add new work parties in this new harness as well, just as in the perform harness.
If the perform harness is customized or some other harness is used, and we don't have the capability to add a work party at runtime, we can either include the Work-.pyWorkPartiesWrapper section rule wherever we want, or we can use the AddParty flow action as a local action at the assignment level, the flow level, the stage level or the case level. Below is the configuration at the assignment level local action.
At runtime, from the other actions menu, we can click on the Add a party link, and this brings up the same section to edit existing parties or to add new parties, just as we saw before in the new harness and the perform harness.
Conclusion
The Work Parties rule is an instance of Rule-Obj-WorkParties, while the Work Party Role listed in the rule is an instance of the Data-Party class or one of its sub-classes. There are standard roles available that we can add to the work party rule. PRPC's flexibility allows us to extend Data-Party and create our own work party roles as needed.
A work party role can be added to any case instance by configuring the parties configuration in the Case Designer, by using the standard perform harness or new harness, by including the pyWorkPartiesWrapper section rule, or by using the AddParty flow action.
A work party is any entity that is interested in the progress of the case. They can be notified and/or they can be involved in processing an assignment.
To illustrate this point, we have a class Person with three properties defined: FirstName, LastName and MiddleInitial. This is the definition of a Person, not an instance of one. The pages Person1 and Person2 are both instances of the class type Person. Each can have different values and set a different subset of the defined properties.
Since these topics are so important to building PRPC applications, let's review the concepts from the top down. The clipboard is a per-user-session area of memory containing top level pages. Each page is an instance of a class. The class defines the properties that can be set on the page. A property can be a single value (for example, FirstName = Joe), a page of a specific class (for example, AddressInfo), or represent a list of items (for example, LineItems()).
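The class/page distinction above can be sketched with plain data structures. This is a conceptual model, not how the clipboard is implemented; the Person property names come from the example above:

```python
# The class definition lists which properties *can* be set, while each
# page (instance) carries its own subset of values.
PERSON_CLASS = {"FirstName", "LastName", "MiddleInitial"}

def set_property(page, name, value, class_def=PERSON_CLASS):
    """Set a property on a page, allowing only defined properties."""
    if name not in class_def:
        raise ValueError(f"{name} is not defined on this class")
    page[name] = value

person1, person2 = {}, {}
set_property(person1, "FirstName", "Joe")
set_property(person1, "LastName", "Smith")
set_property(person2, "FirstName", "Ann")  # a different subset of values
```

Person1 and Person2 here share one definition but hold independent values, just like two pages of the same class on the clipboard.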
To persist these pages, we can define them as concrete and not belonging to a class group. Next we complete the History tab.
We can see that we now have a new class, but no properties defined on it.
Now we create properties in two different ways. You can create a single property by right clicking and selecting New | Data Model | Property.
Here we can fill out the information to create a new property. However, when creating a new class it is often faster to use the Define Properties wizard to bulk create properties. Here we can quickly add three new properties and set their type. We will add three new single value properties.
After clicking Finish, PRPC will create the properties for us. Here we can see our new properties in the
Application Explorer.
We now have a new class and properties defined, however our work types have not defined a
relationship to our new class.
What we need to do is create a property to reference our newly created class.
We will call the property OrderInformation. Since we only need one order information page per work
object we choose the page property mode and set the class to our newly created OrderInfo class. Now,
we can see the OrderInformation page and the properties we defined.
In many situations it will be very useful to quickly understand the data model of an application that is in
development or already developed. PRPC has two landing pages that provide a more complete view than
the Application Explorer.
Click on the Pega button, then select Data Model > Classes & Properties > Property Tree. The Property
Tree tab shows the properties defined and their type for a given Applies To class, here our general work
class. We can expand OrderInformation to see the properties of the OrderInfo class as well.
A number of other options also exist in order to help customize the view to show the appropriate
properties for a given application.
The Class Relationship tab shows the relationship of just the page, page list and page group properties.
This can be helpful as a high level relationship model.
Now that we have created the classes and property definitions we also must understand how to actually
create pages of the classes and instantiate properties on those pages. In most cases PRPC will manage
the creation of pages for us. Pages can be created to represent new objects, for example when we create
new work, or to represent an already persisted object, such as opening existing work. PRPC also creates
embedded pages for us whenever the property is referenced.
From a more programmatic standpoint we can explicitly create new pages using Page-New or bring an
already existing instance onto the clipboard as a page using Obj-Open. There are many other ways
pages get created but these are some of the most common ones.
Property instantiation is even easier than pages. PRPC creates properties as they are used or
referenced.
There is no constructor or Property-New.As business rules, UI forms or other rules such as transforms
reference properties PRPC automatically creates them.
Unlike many other programming languages, PRPC does not allocate memory for defined properties until
they are used; this allows us to inherit many properties without the associated cost.
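The lazy instantiation described above can be sketched with a self-referential defaultdict. This is only a conceptual model of "properties cost nothing until referenced", not PRPC's implementation; the page and property names are invented for the illustration:

```python
from collections import defaultdict

def new_page():
    """A page whose embedded pages spring into existence on first
    reference, much as PRPC creates embedded pages when a rule
    references them."""
    return defaultdict(new_page)

work_page = new_page()
# No memory is used for OrderInformation until this first reference:
work_page["OrderInformation"]["OrderNumber"] = "ORD-100"
```

Properties that are defined but never referenced (say, a ShippingInfo page) simply never appear on the page, which is the cost saving the paragraph above describes.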
In conclusion, the PRPC data model is very powerful at representing the structures that our application
and business will need to interact with. Understanding how to take full advantage of the PRPC data
model is one of the first steps to building a great system.
Deselecting a data type removes it from the Data Explorer, but the underlying rules are not deleted.
Select view all to display all the data types available.
Select Create New to add a new data type to the application. We want to add a data type address.
The wizard creates the class specified in the ID field using the given details.
In the next step we can add properties to the data type.
A property can be defined as single value, value list, or value group in which case the type must be
specified or as a page, page list, or page group in which case we need to specify the page class.
In the next step we define how we want to display the properties.
In the final screen an overview of the properties to be created displays. Click Submit to create the
properties.
Properties can also be created directly from the Application Explorer. Use New link to create a single
property or the Create properties link to create several properties in a similar way that we created them
using the Data Explorer.
In many situations it is very useful to quickly understand the data model of an application that is in
development or already developed. There are two landing pages (DesignerStudio > Data Model >
Classes & Properties) that provide a more complete view than the Application Explorer.
The Property Tree tab shows the properties defined and their type for a given Applies To class, here we
see our general work class.
The Class Relationship tab shows the relationship of just the page, page list and page group properties.
This can be helpful as a high level relationship model.
Now that we have created the classes and property definitions we must also understand how to actually
create pages of the classes and instantiate properties on those pages. In most cases PRPC manages the
creation of pages for us. Pages can be created to represent new objects, for example when we create
new work, or to represent an already persisted object, such as opening existing work. PRPC also creates
embedded pages for us whenever the property is referenced.
From a more programmatic standpoint we can explicitly create new pages using Page-New or bring an
already existing instance onto the clipboard as a page using Obj-Open.
There are many other ways pages get created but these are some of the most common ones.
Property instantiation is even easier than pages. Properties are created as they are used or referenced.
Conclusion
Understanding and taking full advantage of the data model is essential in building an application. Take
business requirements, such as reporting and case persistence but also flexibility and scalability into
account when building the data model.
Now, we understand the fundamentals of data modelling. We understand the role properties have and
how the data model is related to the class structure and inheritance. Finally we know the best practices
and how to effectively construct a new data structure.
The most straightforward mode is Single Value, which is used to represent simple, scalar values, for
example, a price.
The value list and value group modes represent collections of single value properties. The value list is an
ordered list subscripted by a number and the group is subscripted by a string term and is unordered.
The page type allows a single property to represent a collection of properties as a page of a specific
class. This is very powerful as it allows us to embed and reuse complex data structures in various parts of
an application. For example an Account structure could be used by multiple work types.
Page List and Page Group are similar to value lists and groups in how they are subscripted but their
instances represent pages rather than single values. Page Lists and Page Groups are generally
preferred over Value Lists and Value Groups because they are more flexible.
There are also a number of Java related types; these are used when interfacing with external java
libraries or beans. They are not covered in this lesson.
Each property type has additional options associated with it; let's look at Single Value first. A property of
mode Single Value also has a type. The type represents the kind of data that is valid for that property.
Common types include Text, which can store strings, a variety of numeric types such as Integer, Decimal
and Double as well as temporal properties such as DateTime, Date and TimeOfDay. Each type is well
documented in PRPC help and can be reviewed there.
The Value List and Value Group modes have the same options as Single Value, which makes sense since Value Lists and Value Groups are just collections of single values.
Now let's look at the options for the Page mode. First we see that instead of defining a type, like text or integer, we instead define a Page class. The class determines what this page will represent.
For example, if we enter our supplier class then our property will be a supplier. We can reference properties of the supplier class, for example SupplierID, using the dot notation. For example,
.Supplier.SupplierID.
Like the Page mode, the PageList mode requires a Page Definition. This property now represents a list
of Line Items rather than a single line item. The list can be referenced using the dot notations with a
numeric subscript, for example .LineItems(1).ProductID.
The PageGroup mode is similar to the PageList except that the subscript is a text string and the lists
order is not guaranteed. PageGroups can be referenced using a string subscript, for example .Vendors
(VendorA).VendorID. PageGroups are useful when a specific item in the list needs to be retrieved based
on a key.
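The three reference styles above can be sketched with a small resolver over nested dicts and lists. This is an illustrative toy, not PRPC's actual reference parser; the property names come from the examples above:

```python
import re

def resolve(page, reference):
    """Resolve a dot-notation reference such as .LineItems(1).ProductID
    or .Vendors(VendorA).VendorID against nested dicts/lists."""
    for name, subscript in re.findall(r"\.(\w+)(?:\((\w+)\))?", reference):
        page = page[name]
        if subscript:
            # Numeric subscripts index a Page List (1-based in PRPC);
            # string subscripts key into a Page Group.
            page = page[int(subscript) - 1] if subscript.isdigit() else page[subscript]
    return page
```

Note how the same syntax covers a Page (no subscript), a Page List (numeric subscript) and a Page Group (string subscript), which is what makes the dot notation uniform across modes.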
Single Value
For a single value type property there are two data access options: Manual and Automatic reference to class instances.
class instances.
Select Manual if the user adds data to this property through the UI or if data transforms or other rules
may be required to manipulate the value.
Select Automatic reference to class instances (linked) to establish a direct relationship to a single
instance of a different concrete class also known as the target.
At runtime an instance of the specified class is retrieved in read-only mode, hence without a lock. In this
example we are showing the standard pxCreateOperator property.
If there is only one key, the value of the source property becomes the key used to open the object at runtime. If there is more than one key, the key properties become input fields where users may enter values, or properties that contain the appropriate values at run time.
We can use the linked property as a Page property in property references. For example, we can reference the name of the create operator as .pxCreateOperator.pyFullName in a case type.
Help contains a list of standard linked properties, see Atlas Standard linked properties.
Page
For the page type property there are three data access options: Manual, Refer to a data page, and Copy
data from a data page.
Select Manual if the user adds data to this page property through the UI or if data transforms or other
rules may be required to manipulate the value.
Use Refer to a data page to point to a data page. The data is not persisted with the case, but instead is
always fetched when needed. We use this setting if we want the most up to date information, such as a
customers address.
Use Copy data from a data page to copy data from a data page to the property. The data is not reloaded unless one of the data page parameters changes. The data is persisted with the case. We use this setting if we want a snapshot of the data, such as the details of an insurance policy.
How to configure a property to use a Data Page is covered in more detail in one of the Data Modelling
lessons.
The control listed on the property form defines how the property displays in the UI if Inherit from
property is selected.
The Table Type field provides validation against a list of valid values. The list is determined by the table
type. This validation occurs both at runtime and design time when saving business rules.
Local List allows us to define a simple list of strings that define the valid values.
Prompt List uses two values: a standard value, which is the value stored on the clipboard and in the database, and a prompt value, which is the value shown to users.
For example, we can enter Approve for the Standard value and Approve Order for the Prompt Value.
This allows us to use Approve in our rules but have the user see a more descriptive label. This provides
flexibility in separating the display from the rule logic.
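The standard/prompt split can be sketched as a simple mapping. The Approve/Reject entries are illustrative values in the spirit of the example above, not a shipped list:

```python
# Rules work against the stored standard value, while users see the
# prompt value; the same table also drives validation.
PROMPT_LIST = {"Approve": "Approve Order", "Reject": "Reject Order"}

def display_label(standard_value):
    """Return the label shown to users for a stored value."""
    return PROMPT_LIST[standard_value]

def is_valid(standard_value):
    """Table-type validation: only listed standard values are allowed."""
    return standard_value in PROMPT_LIST
```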
Local Lists and Prompt Lists are useful for simple cases; however, they do not allow for reuse, as the list is associated with a single property.
Class Key Value allows us to specify a class whose instances become the allowed values. The class
key value requires us to enter a validation class, which is the class whose instances are used for
validation.
For example, if our property was to represent a workbasket defined in the system, we'd enter Data-Admin-WorkBasket. Clicking preview shows us how this will be rendered. This is the list of defined workbaskets. The actual field displayed is the Key property for the class specified, which is why the table is called Class Key Value.
The Subset Name field is used when the class has a multipart key and we wish to use only a subset of
values. For more details review the help file.
The Display Only (Not For Validation) option allows us not to validate against these values; this is commonly used with localization.
The last table type we will review is Field Value. This table type allows us to utilize field value rules, which have the added benefit of being fully localizable.
Since field values are rule-resolved, we supply the Class first. Here we are going to use the pyChannel field values defined at @baseclass.
You may have also noticed a table type called Remote List; this table type is deprecated and should not be used.
All of the table types can be used with the Dropdown control. Select As defined on property as the list
source type.
Another powerful aspect of using table edits is validation in certain business rules that might be
delegated, such as decision tables.
Max Length allows us to set the maximum length, in characters, of the field. Any attempt to set the string
to a longer value results in a validation error.
Expected Length is used by some UI controls to set the width of a text field. No warning or validation
error occurs if more text is entered.
Override Sort Function allows us to specify a custom sort function to be used when sorting a list.
The Access When setting is used for encrypted text to determine when the clear text value can be accessed.
Edit Input allows us to set the Rule-Edit-Input rule that applies to this property. An edit input rule is used to format a value when it is entered on a user screen. This formatting occurs on the server side.
Use Validate specifies a Rule-Edit-Validate rule that is applied when user input from a screen is sent to
the server. This rule is applied after the edit-input and can add validation messages to the property.
Column Inclusion provides guidance to database administrators (DBAs) as to whether a column should
be exposed for direct reporting. This field does not have direct impact on the runtime behavior of the
system.
If we want the value of this property to be omitted when a page containing the property is committed to
the database we can select Do not save property data. Marking appropriate properties improves
performance by reducing the size of the Storage Stream property value.
Select Cannot be Declarative Target to prevent a property from being used as a declarative target. This
is helpful to indicate to other architects that a property should not be used as a declarative target.
Select Cannot be included as Input Field to prevent users from directly entering a value for this
property in an HTML form. This can be useful as an additional security measure for critical properties.
Select Allow use as Reference Property in Activities to make this property a reference property that
can link to a source property.
Select Cannot be localized in UI controls to prevent this property from being localized.
A qualifier is essentially metadata about a property. There are a few standard qualifiers, such as
pyDecimalPrecision.
Page Properties share many of the advanced options with the single value property, but also have a few
additional ones.
Select Validate embedded page to validate this page's data, even though it is embedded. In almost all cases this should be left checked.
The Java page option is specific to working with Java objects.
Conclusion
A property defines and labels a value that is associated with a class. The first and most important part of defining a property is to select the appropriate property mode and type. However, property definitions can also be used to load data, and to define consistent presentation, access, and validation of data across an application or even an enterprise.
Now, we understand the fundamentals of data modelling. We understand the role properties have and how the data model relates to the class structure and inheritance. Finally, we know the best practices and how to effectively construct a new data structure.
Node should be chosen. Requestor scope allows us to share data pages for a given user session and is often used when the data page contains data associated with the logged-in operator. Using Pega Academy as an example, a Requestor level data page might contain the list of enrolled and completed courses for a student. A Thread level data page would contain the specifics about a particular course. The Thread level would allow for multiple courses, each being sourced from a separate data page instance.
The node option makes a single data page instance accessible by all users of the application and other
applications running on a given node. On a multinode system, each Java Virtual Machine instance has
one copy of the node level data page. Node level pages reduce the memory footprint by storing only a
single copy of the data available to all users and are an excellent option for storing common, typically
more static, reference data.
Now we are ready to configure the data source.
Data Pages can load their data using a variety of options, including a Connector, Data Transform, Report Definition, or a Load Activity.
For page structures, the Lookup data source replaces Report Definition as a source.
Let's look at each in more detail. Since most applications rely heavily on integrating with other systems, data pages can obtain their data from another application using integration connectors.
There are many different connector protocols available, including the newly added REST connector.
Each of these connector types requires a Request Data Transform rule which sets the properties or
parameter values that are used in the connect rules.
If the connector class is the same or inherits from the class of the data page, a Response Data Transform
is optional.
Otherwise, a response data transform is required to map the different properties from one class instance
to another.
Other data sources, such as Lookup, which retrieves an instance of a class by its key values, include a data transform as well.
Finally, we can also use a load activity, although it's a best practice to avoid sourcing data pages via an activity since they are typically more difficult to maintain.
In addition to the variety of data sources available to a data page, it's possible to conditionally use a variety of sources based on a when rule. Once you add a second data source, a when rule appears to conditionally execute the data source. If multiple sources are present, the data page uses the data source for the first when rule that evaluates to true. If all return false, the final otherwise data source is used.
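A minimal sketch of that evaluation order (the when rule and source names here are purely illustrative, not from the demo):

```
Sources are evaluated top to bottom:
  when IsExternalSystemAvailable  -> load via Connector
  when UseLocalCache              -> load via Report Definition
  otherwise                       -> load via Data Transform
```

The first when rule that returns true selects the source; the otherwise entry guarantees the page can always be loaded.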
All of these data sources have a simulation option so that we can test our application in the event the source data does not yet exist or has no data. Selecting this option allows us to choose an alternative data source while preserving our intended one. This way, when the real data source becomes available, we can disable the simulation and the data page automatically switches to the non-simulated data source.
At the bottom of the page definition is an option to use a post-load processing activity. This is useful if we want to supplement our data page with additional data or check for errors during the load process.
Even if our data page has been configured to be read-only, the post-load processing activity is an opportunity to make necessary data changes before the page is available to the application.
OK, now that we have talked about all the options, let's confirm the configuration to load the Product Catalog. We expect a list of results of type Pco-FW-PurchaseFW-Data-VendorItems.
We'll leave the Edit Mode as read-only and the scope at Node, since other users can share the vendor product catalog.
Remember that node level pages require an access group, so we'll go to the Load Management tab and select an Access Group containing the RuleSets to execute this page.
Next we select the source of the data to be a report definition called dataTableListReport which in this
case points to a local data table containing some product information.
In this case, the class of the report definition and the page structure are the same, so we don't require a Response Data Transform.
In reality, this data would likely come from an external source. That's it. Now all we need to do is save it and our data page is ready to go.
Remember that a data page is a declarative rule so it does not load the data until it is referenced. We
can cause that reference to occur by manually running the page from the Actions menu.
We have the option of removing any instances already on the clipboard.
When we click execute, the source data is loaded into the page.
We get an XML confirmation page containing much of the page's metadata in addition to the list of results.
Let's check the clipboard for the presence of this page.
Since the page's scope is node, we can find it under Data Pages, Node; the name of the data page in this case is D_ProductCatalog.
There are 10 returned results. Here is the first returned product.
OK, we've confirmed the page returns the expected data. While this data page is now available to be used by our purchase order application, let's first understand the lifecycle of data pages.
Similarly, the when rule option to reload is also removed, as we would not want an individual user's context to determine the result of the when rule. If that is needed, then a Thread or Requestor level page is the appropriate choice.
If no refresh strategy is selected, we may choose to clear pages after non-use. This causes the system
to remove Node level pages that are unused for a period of 24 hours. Subsequent access after this 24
hour period causes the page to be reloaded. This 24 hour interval can be configured in the prconfig.xml
file or specified using a dynamic system setting.
Let's use a data source of Lookup and point it to the VendorItems class.
Now we need to use the incoming parameter for this data page as the class lookup criteria. To do that,
we click on the parameters link below the class name and provide a value for the class key which in this
case is ItemID. Since we want to use the parameter value as the criteria we select param.ItemID.
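As a hedged aside, other rules would then reference this parameterized page by supplying a value for the parameter in brackets; something like the following (the ItemID value shown is purely illustrative):

```
D_ProductCatalog[ItemID:"I-1001"].ItemName
```

The bracketed name/value pair supplies the ItemID parameter, so the reference loads, or finds on the clipboard, the page instance for that specific item.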
Let's save it so we can test the new addition of a parameter by running our page. Notice that we are now prompted to provide an ItemID parameter value.
After seeing the XML confirmation of the result, we'll repeat this a few more times for two other ItemIDs.
We executed this data page with three different parameters, so we should see three distinct instances of D_ProductCatalog. Let's confirm this by looking at the clipboard. Since our page is a thread level page, we first need to select the appropriate Thread, which is Standard when testing from the Designer Studio.
If we had configured the data page to limit it to a single page instance, then only one instance, the most recent one, would be on the clipboard; a reference to a previously called page would cause it to be reloaded, overwriting the current one.
Scrolling down exposes the List Source where we can reference the data page.
Next enter .ItemID as the property for value and the .ItemName as the display value.
That should do it!
Now let's save it and preview the results, and we see that we are able to select from a list of products.
In this demo example we have only 10 products, but imagine the usability impact if we had a long list of products. We can improve the user experience by changing the data page to return only the results for the chosen vendor.
To accomplish this we'll need to change not only the data page's definition but also the dropdown control that calls the page, because we'll need to supply a parameter value. Let's start with the data page.
We'll go to the parameters tab and enter a vendor parameter.
When adding a parameter, we must apply it in some way. In this case we want the parameter to affect the sourcing of the data, which is a report definition.
So we'll do the same in the DataTableListReport by adding a parameter there.
Now back to the query tab, where we'll use the new parameter in a filter condition to limit the results by vendor.
After clicking save, we can return to the data page.
Here, we must make sure that the data page takes the parameter value it receives and passes it on to the report definition. Refreshing the data page, the parameters link picks up the parameter recently added to the report definition, allowing us to enter the pass-through parameter value.
Another setting we should change on the data page is its scope. Since the page now contains vendor-specific products as determined by a particular case, a better scope is Thread.
We'll also need to reload the page if the user selects a different vendor, so we'll modify the refresh strategy to Reload once per interaction.
After saving, we need to return to the section to edit the dropdown control so that we can provide a value for the new parameter we just defined. First, let's refresh the section so that the updated data page definition is recognized.
Now, by going back into the dropdown control's properties, we see the addition of a parameter value prompt for vendor once we reselect the data page.
We want to provide as a value the selection made from the vendor dropdown which is bound to the
VendorList property so we enter it here.
Now we're done, so we click OK, save, and return to the case to see if it all worked.
So now if we select Amazon as the vendor, it yields only Amazon products.
By changing the vendor to Microsoft, the items shown are limited to Microsoft products.
returns the customer details for the provided customer ID and is persisted to the case. No further access to the data page is made even if the data on the data page or in the database changes. Notice here that the address on the customer data page has changed, but that change is not reflected in the case because it was a copy. This is important to understand. Even if this Balance Inquiry case later requires a decision based on address, or is later routed and opened by another operator, once the page copy for a given parameter value has occurred, it does not re-access the data page regardless of the refresh strategy.
The other option is to point to, or refer to, the data page. This works differently in that the case never actually obtains or persists the data but instead always goes to the data page to determine the value of the data. This setting is particularly useful when we want the most up-to-date information from another system of record. Take for example a case where business decisions are made based on stock prices. When a case requests pricing information after supplying a stock symbol, the data page pointer is triggered and causes the data page to load the stock's information from the database. Any rendering, processing, or decisions made on that data are done directly from the data page. This way, any changes to the data are automatically seen by the case throughout its lifecycle, since it never stores its own copy of the data.
does not come into play as long as the parameter value remains the same for each page instance.
If we didn't have the Load each page in this page list individually option selected, then the entire line items page list would be filled at once as soon as the configured parameter value changed.
If that is the case, the Data Page would have to be configured to return a page list and use a parameter
value that is not a property contained within the page list itself. Both of these requirements are enforced
at design time as we can see when we click save.
By the way, this option is only available for a page list property. Otherwise all the data access settings
are identical for both page and page list properties.
When the data page access method is set to copy, an additional configuration option is available that is worth mentioning: the Optional Data Mapping option.
The reason it is optional is that the data page itself already has a response Data transform which is used
to map the data.
So why do we even need this?
It's helpful if we think in terms of reuse. We might have three different case types that can each use the same data page, provided they each customize the data mapping just slightly. If that option didn't exist, we would have little choice but to create three specific versions of the data page.
Now, as we've already shown conceptually, the other method of referencing a data page from a property is by pointing to the data page, in other words, using the refer to a data page option.
Again to summarize, the difference with this option is that the data does not get saved with the case. If
we close and reopen the case, the data page would again be referenced to obtain the data and
depending on its scope and refresh strategy, it may still be fresh on the clipboard or need to be reloaded.
The configuration settings with this option are very similar to the Copy option.
There are two differences. One, the Optional Data mapping field does not exist for the Refer To option
since we are directly pointing to the data page as it is defined.
The other is the Save parameters with this property for access on reopen check box which only
appears on the Refer To option.
It gives us the choice to persist the parameter value with the case so that if it is later reopened, it
automatically reloads the data page using the persisted parameter value. If not checked, then the data
page is not referenced until the parameter is set to some value as a result of further processing of the
case.
The ExchangeRate page property is the key link here, as it is configured to refer to the ExchangeRates data page. Let's look at this property's configuration.
We can see it references the ExchangeRates data page.
This data page has been defined with a BaseCurrency parameter and a ToCurrencyCode key.
Let's just verify this on the data page definition.
Sure enough, this page has been configured with keyed page access using the ToCurrencyCode key property. Now let's check the parameter.
Yes, it has indeed been defined with a BaseCurrency parameter. This parameter is used to limit the data
source results to just a single base currency.
By the way, for the purpose of this demo, we are using a simple data transform simulation to hard-code a list of exchange rates rather than using a realistic data source as would be expected in a production system.
OK, let's return to the ExchangeRate page property and tie it all together.
As a user selects a different BaseCurrency, a different parameter is used which loads the list of exchange
rates for that given base currency.
When a ToCurrencyCode is selected, it becomes the key that is used to find the correct instance in that
currency list.
To better understand what happens behind the scenes, let's look at the clipboard.
We can see the Exchange Rates data page. It contains a unique page instance for each unique parameter: US Dollars, Euros, and Japanese Yen.
For each of these pages, the various exchange rates are contained in pxResults. Here, US Dollars to British Pounds, to Euros, and to Japanese Yen. The currency key value is used to find the desired instance in the list of results.
OK, let's conclude by returning to the UI and recognizing that the exchange rate field is bound to the ExchangeRate page, which in turn automatically points to the ExchangeRates data page using the BaseCurrency as the parameter and the target currency as the key.
Once a data page has two or more parameters, we must use the full syntax since we need to explicitly
state which parameter values correspond with each of the parameters. Notice in the last example, the
first parameter value uses the result of a function while the second hard codes the Include Image
parameter value to true.
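To illustrate, a multi-parameter reference of the kind just described might look like this (the page name, function, and property below are hypothetical, not from the demo):

```
D_ProductImages[ProductID:@toUpperCase(.SelectedProduct), IncludeImage:"true"]
```

Each name/value pair explicitly binds a value to a parameter by name, which is why the full syntax becomes necessary once there is more than one parameter.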
Now let's look at one example of this syntax in the context of a when rule. It checks to see if the Product's ItemCategory value retrieved from the data page is equal to General Purchase. Since the parameter value supplied here is equal to the current ProductID property value, this when rule evaluates the product currently in context.
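As a sketch, the condition being described might be expressed along these lines (the exact page and parameter names may differ in the demo application):

```
D_ProductCatalog[ItemID:.ProductID].ItemCategory = "General Purchase"
```

Because the parameter value is the current ProductID property, the expression always evaluates against the data page instance for the product in context.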
Notice how the when rule references a specific property on the data page. This can be done for all the
other rule types and various syntaxes we just covered.
Just keep in mind that the entire data page is loaded and not just the individual referenced property.
Conclusion
The new data management paradigm is extremely powerful and flexible and can be applied to a number
of different use cases.
The first and most common use case is where a case has reference data that it does not manage, but is
needed throughout its process. If the data changes we want that reflected in a timely manner. Think of a
vendor object used by a purchase order case. The vendor does not belong to the purchase order, but is
referenced. If the vendor data changes we want to see those changes.
A related use case is the situation where the data to be referenced by the case needs to be a snapshot in time. We may need a snapshot of a policy for a claim being processed. If that policy changes after the claim is created, we don't want the claim's copy to be updated.
A third and very common use case is to use a data page to store a list of items that are used to populate a dropdown. This list can be shared across the entire node, improving performance.
In another use case, a single data page can be used as a list, for example a list of currency codes, while also providing direct access to a specific page in the list using the keyed page access feature.
Now, we can describe and define data pages. We know when to use data pages in our application and
how to reference them. We also know how to improve performance using asynchronous loading.
Introduction to UI Architecture
Building the UI
Introduction to Responsive UI
Building Dynamic UI
Ergonomics
The examples extend beyond calculations. Anything that makes the job easier for a user should be considered to give the user a better overall experience. This could entail anything from pulling up the history of a caller in a call center to see their past transactions, to using analytics to determine the next best action a user should take.
The ability to present users with the correct, relevant information is a shortcoming of many of the green screen applications in use today. These outdated approaches of always presenting all the data about a customer at all times often lead users to spend extra time reviewing irrelevant information and hunting for the information that is relevant. By focusing on the intent at any one point in the process, the user spends less time searching for the relevant information and thereby becomes more productive.
Ergonomics
This consideration is all about making the system easier to use. We'll want to eliminate as many clicks and as many mouse movements as possible to make the user more efficient in their use of the system.
Let's take the example of asking users to agree to some terms of service. We've all seen them, and we've all seen them implemented multiple ways, but two approaches seem to be the most prominent: asking users to click a link to view the terms, or placing the terms directly on the screen.
Ask yourself which of these is friendlier to a user? By placing the terms of service directly on the screen,
we can save users the unnecessary clicks of opening a modal window and then subsequently closing that
window.
Ergonomics is more than just eliminating clicks, though. It's about smart placement of items within the screen: using tabs to navigate a multiple-input form instead of having the user reach for a mouse; grouping selections nearby, so the user doesn't have to move the mouse over a large area; breaking the screen up so users don't have to scroll, especially horizontally.
That's actually one of the recommended best practices: never present a user with horizontal scrolling, and try to minimize the necessity of vertical scrolling.
Portal
A portal is really a user workspace. In PRPC, the term "Portal" indicates not only the user workspace, but
can also indicate the role the user has. For example, the User, Case Manager, CSR, Developer, and
Business Architect are all names of PRPC portals that are geared to particular users.
It's important to think about what types of functions and tasks our user groups need access to in order to determine the type of portal that is needed.
Harness
A harness can really be thought of as an HTML page. It is a top-level component within the Pega UI. There are a number of different types of harnesses within PRPC.
The Perform and Review Harnesses are the most popular ones to use in an application. These allow
users to enter, edit and review data within a PRPC application.
Section
Sections are exactly what they sound like. Within a harness, there can be a single or multiple sections.
These sections are the building blocks of the application UI. There are hundreds of standard section rules
that are shipped with the product that allow easy configuration of the user interface.
Layout
Layouts allow us to specify the way we want our form data to be presented. Every section contains at
least one layout. PRPC and Pega 7 in particular provide several different layouts to get us started with
building our UI and we always have the option to add more layouts as needed.
We should note that Layouts are the only UI component that is not its own rule in PRPC.
Controls
Controls are the most granular of the UI components. These allow us to present data, take user input, make decisions, and interact with the user. They include things like text boxes, buttons, selectable lists, etc.
Pega 7 in particular provides several new controls aimed at making UI development even easier, more dynamic, and more robust.
The UI Kit is delivered as a locked ruleset. To customize the rules provided by the UI Kit, copy them to an
application ruleset first.
Conclusion
The overall user experience is so much more than the user interface, but the user interface is still an integral part of the user's experience. Well-designed user interfaces coupled with planned user experiences lead to user adoption and the success of our application.
Introduction to UI Architecture
Introduction
Welcome to the lesson on User Interface architecture, portal, harness, and flow action. What do all these
rules mean? We already know that Section rules provide the building blocks of a user interface but when
do we use all these other rules? How do we put together a user interface?
That's what we're here to discuss today. In this lesson we will cover which rules make up the user interface and how they interact with each other.
At the end of this lesson, you should be able to:
Parts of a Portal
Below is a typical portal a user will use while performing their work, and it is a best practice to follow this same general layout when building our own portals.
The top pane is set up as a header that provides access to search for or create new work. It also lets the user log off and switch applications if access to multiple applications has been granted.
The left-hand pane is referred to as the navigation pane. This allows the user to switch between screens of the application. From here, they can access their recent work, their worklist, and a variety of reports.
The main area is known as the Work Pane. This is where a user interacts with each individual piece of work.
Creating a portal
When working with portals, we need to open the portal rule form. In most cases, we will only be dealing
with two tabs.
Skins
The Skins tab allows us to specify three different options. These options tell the system how to display
this portal.
The first option is role. Role can be either user or developer. When developer is specified, the system
includes additional scripts necessary to support development actions, such as check-in and check-out.
These additional scripts cause the system to leverage more memory, so it is a best practice to specify any
new portal as user and to only use the existing developer portals for development.
The second option is the type of portal. Type can be one of four possible values:
Fixed: this type of portal is provided to support backwards compatibility with applications created prior to version 5.5. It cannot be used in new development.
Custom: this type of portal determines the user interface from an activity. It is an advanced topic that is discussed in the Lead System Architect course.
Mobile: this type of portal supports displays on smartphones and tablets. It is an advanced topic that is discussed in the Pega Mobile course.
Composite: this portal uses harnesses to define its layout and is the recommended portal type to use for all new development. For the rest of this lesson we will only reference settings for this type of portal.
The last option is the skin rule. In most cases, we want to default to the Application Skin option. This allows the system to inherit all the same settings for the entire application. Alternatively, we can specify another skin to use, but this does not provide for reuse and should only be done when it's unavoidable.
Spaces
When using Composite portals, the other tab we need to configure is the Spaces tab. In most cases, we
only need to specify a single space called Work and provide the associated harness. The Work space
must always be present even if we specify any additional spaces.
The harness we specify is a special kind of harness that uses a screen layout. Screen layouts allow us to specify which sections to display to the user. If we open the harness associated with the pyCaseManager portal, we find a screen layout that reflects the same top, left, and main panes we've seen when viewing the rendered end user portal.
More combinations can be defined in the skin rule, but the majority of portals leverage the Header Left
configuration.
Each region in a screen layout can only contain a single section, though that section can then in turn
contain multiple embedded sections. The main region is typically a section that provides a single
Dynamic Container. Dynamic Containers are necessary to display work objects.
Standard Harnesses
Out of the box there are several different harnesses we can use as starting points. Most of them can be
grouped into four categories that share a similar purpose:
New: this encompasses any harness that is displayed when the work object is being created. These harnesses are presented to the user after a work object has been instantiated on the clipboard but before it is persisted to the database. They provide the option to set initial values that should be present during creation. As a best practice, a New harness should only be used for this first step. Examples of a New harness are New, NewCovered, and NewSample.
Perform: this encompasses any harness where the user is performing their assignments. These harnesses are what allow the user to interact with the work. There are several different forms of the Perform harness that provide different interactions, but all fundamentally allow the user to access and submit a flow action to move the process on from this assignment. Examples of a Perform harness are Perform, Perform_Buttons, and Perform_Step.
Review: this encompasses any harness where the user is viewing read-only data and not interacting with a flow action. These harnesses are used to present the user with information about the work object and are most often used to view all the assignments associated with a particular piece of work. Examples of the Review harness are Review and ReviewSample.
Confirm: this encompasses any harness where the user has submitted the work and does not own the next assignment. They are used to present final information to the user, but the user can no longer interact with the piece of work. Examples of a Confirm harness are Confirm and SimpleConfirm. In addition, there is a special confirm harness called AutoClose that can be used if we don't need to display that final piece of information.
The Design tab is where the bulk of the development on a harness is completed. It's here that we use the Designer Canvas to specify the layouts and sections we want this harness to display. The Designer Canvas is described in the Building the UI lesson.
Many people focus all their efforts on this tab alone and forget that there are some other essential elements that should be configured beyond these layouts. These are all described in the help files in detail, but let's take a high-level look at the features each tab offers.
182
The Display Options tab provides options for how errors are communicated to the user, some control over
the presentation to the user, and an audit feature that allows the system to track every time this harness
is displayed to a user.
The Scripts and Styles tab is used to provide form-level JavaScript to the system. The best practice is not to leverage these, and instead to provide any required actions or events in the skin rule. They are provided here for backwards compatibility.
The Advanced tab provides options on how the HTML is generated from this rule to be presented to the
user.
The other two tabs, Pages & Classes and History, are similar to all other rules that support these forms.
The Layout tab allows us to specify which section will provide the user interface for this flow action.
The Validation tab allows us to specify which validation rule (Rule-Obj-Validate) to execute when this flow
action is submitted. We can also use this validation rule to conditionally update the work object's status if the validation is successful.
The Action tab is the second most important tab of a flow action. It is on this tab that we define how the
system behaves when performing this flow action. At this point we have the option to take actions just
before the screen renders, such as to update some of our data or to prepare some calculations. We can
specify which actions to take when the flow action is submitted, such as performing calculations or
updating a back end system. We can also configure how the system interacts with the user for the next
step of the flow such as whether to show the next step of the process or to display a confirmation to the
user.
The Help Setup tab is underutilized in most systems. This tab allows a developer to specify information a
user can then leverage to gain additional insight about this flow action.
The Security tab is used to control who can perform this flow action. Privileges, and security in general, are discussed in the Security lesson.
The HTML tab lets us specify how this flow action will affect the rendering of the user interface. In most
cases, we will want to specify to use a Reference Section. In some scenarios, we may want to leverage
the No HTML to indicate there is no user input beyond the selection. This is often used in conjunction
with the Perform_Buttons harness to provide an easy means of allowing a user to choose between
branches of the flow, such as Accept or Reject without requiring an additional user interface. The
Reference HTML choice is provided for backwards compatibility and should not be used for new
development.
The other two tabs, Pages & Classes and History, are similar to all other rules that support these forms.
Conclusion
When it comes to user interfaces, there are only a handful of rules that need to be configured.
Flow and Local Actions define how the user interacts with the work.
Sections are the building blocks of these various user interface rule types.
Designer Canvas
The Designer Canvas helps us quickly build user interfaces without writing lines and lines of code.
The Designer Canvas is accessible in harnesses and sections.
The top part has a toolbar which provides various icons to perform edit operations such as cut, copy,
paste, add/delete rows or columns, merge rows or columns.
In addition to these operations the toolbar also provides control groups (Layout, Basic and Advanced) for
us to add layouts and other controls in the section. The following represents the entire list of layout types,
as well as the most commonly used basic and advanced controls.
Layout group:
This group allows us to add any of these layouts to the section. We'll learn about adding these layouts in this lesson. They can be added by selecting one of them and dragging it into the section.
Layout Types
There are various types of layouts supported in Pega for us to build our user interfaces.
Column Layout: as the name suggests, this creates a layout with columns.
Smart and Free Form layouts: these exist in the product for backward compatibility.
Dynamic Layouts
Dynamic layouts are a newer layout type supported in Pega 7 and are the default layout type in all newer applications. They use HTML5 tags and support responsive behavior.
When a new layout is added, we need to select the layout type.
When adding a new Dynamic layout in a section, we also need to configure the format. The format
controls the presentation in terms of number of columns each row can have, the spacing between
columns, the width of the cell, text alignment, cell padding, label positioning, and so on. The format is
defined in the skin rule. PRPC ships with a set of standard layout formats that we can use in the
application skin. We can also add more application specific formats if required. This is the list of all
formats supplied in the standard skin.
Column Layouts
Column layouts allow us to split the work area into columns. They are mostly used in user portals; Designer Studio and the Case Manager portal make heavy use of columns. Column layouts are typically used when we need to present different content in different columns.
For example, if we are looking at a website such as Amazon.com, the product we are looking for is
presented in the main section while a second column displays actions such as adding the item to the cart
or wish list.
Column layouts can be added in sections, and the presentation is configured in the skin rule. PRPC ships with several formats. To get the structure shown in the Amazon example, we can use the Two column (Main-Sidebar) format. Look up the link named Using Column Layouts on the PDN to see how column layouts are added.
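The Two column (Main-Sidebar) structure described above corresponds to very simple CSS. The sketch below is illustrative only; Pega generates its own selectors from the skin rule, and these class names are hypothetical:

```css
/* Hypothetical sketch of a Main-Sidebar column layout.
   The skin rule generates the real CSS; these class names are made up. */
.main-column {
  float: left;
  width: 75%;   /* primary content, e.g. the product being viewed */
}
.sidebar-column {
  float: left;
  width: 25%;   /* secondary content, e.g. cart and wish list actions */
}
```

Because the widths are percentages, the two columns keep their proportions as the browser window resizes.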
Repeating Layouts
Dynamic and Column layouts can also be extended using the repeating mode. To add a repeating layout,
we select Repeating in the Set Layout Type dialog and then select the type we want.
The Grid is a commonly used choice, displaying the repeating group in a spreadsheet format of rows and columns, with each row representing a new item. A column header row displays to indicate what each column represents. Grids can be styled using formats, which are saved in the skin rule.
Unlike other layouts, the repeating layouts require us to identify a source. Grids can display a page list, or
a data page (with list structure), or the results of a report definition.
Dynamic repeating layouts can be sourced using either a page list or a data page. The repeating layout displays a single cell in which we include a section. The section is repeated for each item of the page list, and it can be designed to use any dynamic layouts, similar to how we set one up for a single-level property.
Column repeating layouts are similar to grids except that the items are repeated across columns instead of rows. Column repeating layouts can be sourced by a page list or a report definition. In the example below, each column represents one item of the Vendors page list.
At runtime, the properties VendorName, Address, City and State appear in each row inside the column repeating layout.
Configuring layouts
As we saw in our screenshots earlier, layouts play a critical role in separating content and presentation.
The presentation is configured in the skin rule while the content is configured directly in the section where
the layout is added. Data elements are added into the layout using control rules. The Designer Canvas provides access to controls; we select a control and drag it into a cell inside the layout to add the property.
This adds a tab layout with a single tab. We can add another layout or a section inside this tabbed layout as a new tab. To do this, we select Layout or Section in the Layout group and then drop it next to the existing tab. While doing this, the system displays an orange indicator to show where we can drop it.
To delete a single tab, we click the tab and then use the delete icon in the canvas. Clicking the delete-a-row icon removes the entire tabbed layout (including all tabs).
A dynamic layout requires us to select a format. This format determines whether the layout is inline or inline grid and, if the latter, how many columns each row has. Similarly, a column layout requires us to select how many sidebars to use and the width of each sidebar. If we look closely at this mockup, it would be hard or even impossible to build with a single layout without a lot of customization. When we look at the underlying section rule that renders this page, we can see it uses a combination of dynamic and column layouts to render this screen.
Floats
Another powerful feature of a dynamic layout is that it can be floated, aligning the layout to the left or to the right. The Presentation tab contains other fields that are very important when using floats. In this example, these are all enabled; however, by default only Self-Clear is enabled.
Set layout width to auto: this flag allows a layout to use only the width that it requires.
Self-Clear: this field is enabled by default and is used to account for the height of floated layouts. Do not disable this field except in cases like the one shown below.
Clear floated layouts: this field clears other floated layouts so that they don't appear on the same line.
Let's look at a short video that shows how the float option works in conjunction with these settings. Click the link named Using Floats in the Related links content to see how floats work.
Example of a float: the Pega 7 header bar uses two layouts (one floating left and another floating right). The section rendering this header bar looks like the screen shot below.
So, how do they appear on the same line? Besides the float setting we enabled, the Set layout width to auto field makes sure each layout takes up only the space it requires. This helps fit both layouts on one single line.
Notice the other fields are disabled. Self-Clear does not apply since there is no other layout in the section beyond these two. If the Clear floated layouts flag were enabled, the two layouts would float on two separate rows.
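The header behavior described above maps closely onto standard CSS float semantics. As a rough sketch (the class names are hypothetical; the skin generates the actual CSS):

```css
/* Two layouts sharing one line: one floated left, one floated right. */
.header-left {
  float: left;
  width: auto;   /* "Set layout width to auto": use only the space needed */
}
.header-right {
  float: right;
  width: auto;
}
/* Enabling "Clear floated layouts" would roughly correspond to adding
   clear: both, which pushes the layout down onto its own row. */
```

Because both layouts shrink to their content width, they can share one line; a cleared float could not.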
Grids allow various types of edit modes. Inline allows rows to be edited directly in the grid, while Master-detail allows users to edit a row in a modal dialog, by expanding the row, or in a separate panel outside the grid. Master-detail is useful if the grid displays high-level information and clicking a row then provides additional details. The content presented as additional details comes from a flow action rule.
All three modes (None, Inline and Master-Detail) support grid operations. Grid operations allow us to configure sorting and filtering of the records, in addition to other settings. When enabled, sorting is column-specific, so we can enable it only for the columns where we require sorting.
Filtering in grids can be configured to use either a value or a range. The options vary based on the data type of the column: if it's numeric, the range uses min and max values; if it's a string, it searches on a set of characters.
Conclusion
Sections are primarily used in building user interfaces. When a case is being processed, flow actions control which sections are rendered. In each section, the content can be formatted and presented using layouts. Layouts help separate the content and presentation layers, enabling the UI development team to quickly build applications.
Dynamic and column layouts are useful in rendering data elements, and we can nest these layouts to present a complex user interface. For repeating groups, we use a Repeating layout, which can present the items in rows (grid or dynamic), columns or tabs.
2. Harness rule: in the harness rule, on the Advanced tab, we select Inherit from application rule for the document type, and the application renders in standards mode.
If our application was built prior to Pega 7, we can upgrade to Standards mode by using the HTML5
readiness gadget. Refer to the PDN article (Upgrading an application to render in HTML5 Document
Type) available from the related content area. It provides step by step instructions on how to upgrade to
HTML5.
When using IE as the browser, we need to disable compatibility view for the standards mode to work.
Using Standards mode, we can add newer layout types such as dynamic layouts, column layouts and
repeating dynamic layouts. The newer layout types use the <DIV> tag instead of the <TABLE> tag which
makes it much more flexible to resize the information based on screen resolution.
The main factor to remember when designing user interfaces for multiple form factors, such as tablets and mobile phones, is to avoid horizontal scrolling. Setting widths in percentages helps us achieve this goal.
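In CSS terms, this guidance amounts to sizing with relative units rather than fixed pixels. A minimal sketch with hypothetical class names:

```css
/* A fixed pixel width forces horizontal scrolling on narrow screens: */
.rigid-layout {
  width: 960px;
}

/* A percentage width (optionally capped with max-width) lets the layout
   shrink with the viewport, avoiding the horizontal scrollbar: */
.fluid-layout {
  width: 100%;
  max-width: 960px;
}
```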
Floats
We learned about floats in the Building the UI lesson. Floats are useful in aligning a layout to the left or the right, which helps when rendering multiple layouts on the same line. The header in Designer Studio actually uses two dynamic layouts, with the second dynamic layout aligned to the right.
Log in to Designer Studio and reduce the screen size. Notice that the layout on the right always stays to the right, and the spacing between the layouts decreases to fit both on one line. Using floats helps us achieve responsiveness without the need for any additional configuration.
Now let's look at a video to learn how to set responsive breakpoints to achieve responsiveness.
In the demonstration we also noticed that when the screen size was reduced, the left navigation in Case
Manager and in the Designer Studio portals moved to the left.
The Case Manager portal uses a screen layout; the screen layout does not display the format it is using here. To see the format, let's add a screen layout on top of it.
The alert indicates that the existing panels will be reused if we select a layout using the same panels.
The screen layout used in the section closely resembles the Header Left screen layout, so let's select that.
Great, it was the same, so we did not lose any part of the panel.
The format is configured in the skin rule, and we see that this one has a header on the top, a sidebar on the left and a main area.
The responsive breakpoint is enabled on the left header. At 210 pixels, the panel becomes accessible from an icon in the header.
Repeating Grids
Repeating grids also support responsive behavior. Let's see the various choices they offer.
Let's modify the default format and enable responsive breakpoints. There are three choices in repeating grids.
We have configured two breakpoints here. The first breakpoint occurs at 1024 pixels, and the choice selected is Drop columns with importance other. Let's see what impact this has at runtime to understand it better.
The second breakpoint occurs at 768 pixels, and the choice selected is Transform to List. There are additional fields such as List Item, Primary cell and Secondary cell; for now let's leave them as-is. Again, we will look at the behavior at runtime.
The third choice is not configured, but as the name suggests, at a specified width it hides the whole grid.
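Conceptually, these responsive breakpoints behave like ordinary CSS media queries. The following is only a sketch of the two breakpoints configured above, with hypothetical class names rather than Pega's generated selectors:

```css
/* Breakpoint 1 (1024px): drop columns whose importance is "Other". */
@media screen and (max-width: 1024px) {
  .importance-other {
    display: none;
  }
}

/* Breakpoint 2 (768px): transform the grid into a list by stacking each
   row's cells vertically, primary cell first. */
@media screen and (max-width: 768px) {
  .grid-row,
  .grid-cell {
    display: block;
    width: 100%;
  }
}
```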
Let's look at a section which uses a grid layout. Unlike other layouts, some configuration must be done in the grid repeat layout for responsiveness. If we look at the Presentation tab, the grid is using the default format, and notice there is a flag to enable responsiveness. In addition to this, we also need to configure the columns.
If we open the properties panel for the Category column, we see the importance is set to Other. As per the configuration in the skin rule, this field will be hidden at the first breakpoint. Similarly, if we open the properties panel for Product, we have configured the importance of this field as Primary. We will look at how this is presented at runtime in a little bit. All the other columns in the grid are configured with an importance of Secondary.
Now let's look at a case. This shows a grid of three rows with all five columns; it is the same section we saw a little earlier.
Before we shrink the screen, pay close attention to the Category column. When the responsive breakpoint is reached, this column is removed.
Now let's reduce it further to reach breakpoint 2. There we go: the grid is displayed as a list with the primary column at the top and all secondary columns under it.
Layout Group
In addition to setting responsive breakpoints, layouts can be made responsive using Layout groups.
Layout group is a way to group different layouts such as dynamic layouts, column layouts, repeating
dynamic layouts and present them together. Layout group can also include another layout group.
How is a Layout group presented in user interfaces? PRPC supports four formats by default:
1. Tab
2. Accordion
3. Stacked
4. Menu
Breakpoint definitions allow a layout group to be rendered in Tab format on a laptop, in Accordion format on a tablet and as a Menu on a mobile phone. Layout groups require that applications be rendered via the HTML5 Document Type (standards mode).
Conclusion
Applications are accessed using modern browsers at different screen resolutions. PRPC applications use the HTML5 Document Type to render user interfaces in standards mode. HTML5 allows us to use the newer layout types, such as dynamic, column and screen layouts, which can use responsive breakpoints to change how information is presented to users based on screen size.
Styling Applications
Introduction
Welcome to the lesson on Styling Applications. Styling applications is all about controlling the look and
feel of the application. This is often required to meet corporate palette and formatting standards.
At the end of this lesson, you should be able to:
Styling an Application
Intro to Styling
One of the major focuses of user interfaces in Pega 7 is to ensure a definitive separation between content and presentation. That's why the improvements to the skin rule are so interesting. New features like formats and mixins now alleviate the need for inline styles and custom CSS. Using these features, we'll cover how to improve the overall look and feel of an application and ensure there's a consistent user experience throughout.
Prior to PRPC 6.3, this was handled with the branding wizard. That was a great tool, but in Pega 7 skin rules make it even better. The skin rule is now a one-stop shop for configuring the presentation of our application. The tools in the skin give us the ability to achieve branding and styling consistency throughout our application. The skin generates the entire CSS for our application.
Skin Rules
A skin rule is broken down into three parts: Mixins, Components and CSS.
We'll take a closer look at each part as we get deeper into this lesson. But first, you're probably thinking, "How do we use it?" Skin rules are only referenced in a couple of places throughout the application. The most important spot is the application rule; this specifies the main skin for the entire application. The other spot is in portal rules. Normally we'd want our portal rule to use the skin specified for our application, but we have the ability to override the default skin for cases where we want to display a different look and feel, such as for external vs. internal users.
Operators can also set up a preference for a particular skin. This preference lets developers override the skin of Designer Studio when previewing or testing an application. Ideally, we want to keep this preference referring to the same skin as our application.
Mixins
A mixin defines values for a group of style attributes. Mixins allow for efficient and clean style reuse, as well as an easy way to update our styling. A collection of mixins forms the palette for the skin, such that one or more mixins can be used to style different components in the skin. Let's say we want to use red to highlight all important text. We might leverage a mixin when we want to use the exact same color of red every time something is considered important.
There are three kinds of mixins that can be defined: Typography, which covers anything related to text; Backgrounds, which covers, well, backgrounds; and Borders, which relates to borders and shading effects. There's a fourth category called Combinations, which is any mixin that affects more than one of the other categories. An example of a combination is Header Text, because this mixin affects both Background and Text.
Skins created prior to Pega 7.1 are upgraded to leverage mixins upon opening of the skin. Pega 7 evaluates any style presets that were defined in the old skin and converts them into appropriate mixins in the new skin.
Let's cover how to create a new mixin. To start off, we click Add. We want this mixin to represent anything we deem important, so let's call this one Important. And since we want important text to be red, let's change the color to red. We now have a mixin called Important that we can use to apply the same red text every time it's leveraged.
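In plain CSS terms, a mixin acts like a shared declaration block that the skin folds into every format that references it. A rough illustration (the skin generates the real CSS; these selectors are invented for the example):

```css
/* The "Important" mixin contributes its declarations, e.g. color: red,
   to every component format that references it: */
.button-important {
  color: red;   /* from the Important mixin */
}
.text-important {
  color: red;   /* same mixin, so the same red everywhere */
}
/* Changing the mixin's color later updates every referencing format
   the next time the skin's CSS is generated. */
```

This is why mixins keep styling consistent: the color is defined once, in the mixin, rather than repeated by hand in each format.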
Formats
The Components tab is where we define styles for all the different components. Each component can have multiple formats. We can think of a format as a "type of", so that we can have a format of button (or type of button) called Primary, and another called Secondary. A format determines the styling of the component once it is associated with a user interface element in a section. Once these formats are defined in the skin, we can associate them with user interface elements that we drop in a section. This association gives the user interface element its look and style.
There are four categories of components: General, Layouts, Controls and Reports. Every component in the system can have one or more formats defined. These formats are then used throughout the application to control the user experience. All components have a default format, called either Standard or Default, that is supplied out-of-the-box. Many components, such as this button example, also have additional formats already defined.
Formats are where we use mixins. By using mixins, we apply the same style to all component formats where that mixin is used. We also have the option of overriding part of a mixin. Let's say we wanted our buttons to use bold text. We'd select Mixin Overrides, click Add mixin override, select font weight and set the weight to bold.
Using our important red text approach, if we create a new button format, we can mark that button to use the Important mixin. In fact, let's do that now. First, we select the Add a format link. We need to give our format a name; let's stick with Important. Then we change the Mixin from General to the one we created called Important, which is at the bottom of the list. We now have a style for our buttons, called Important, that gives our buttons red text.
Using formats
Leveraging a format is a simple process. In any user interface screen, just select the component to edit, such as this button, open the properties panel and, on the Presentation tab, select a different format. Once we click OK, our button reflects the style defined by the new format.
CSS
The CSS portion of the skin can largely be ignored. This tab exists to support backwards compatibility with applications that already use custom CSS. For any new applications, it's better to leverage the format and mixin features. In fact, if style sheets are used, a warning message is added to remind us that they should be avoided.
Under the advanced section we have the option to automatically generate a css/workform file, similar to the ones used in previous versions. Again, this is only for backwards compatibility and is not recommended, so let's disable it.
Conclusion
As we have seen, through the use of a skin rule and its embedded features of mixins and formats, we have complete control over the entire look and feel of an application, allowing us to meet corporate palette and formatting standards.
Controls are auto-generated, giving us the flexibility to use them in various browsers and easier configuration options (eliminating the need to write JSP, JavaScript, CSS or HTML tags). The product does ship with several hundred non-auto-generated controls that are still available for selection, but they exist only for backward compatibility.
Most of the commonly used controls are available for selection in the Designer Canvas under the Basic and Advanced control groups. This is the list of all auto-generated controls shipped in the product:
pxAutoComplete
pxButton
pxCheckBox
pxCurrency
pxDateTime
pxDropdown
pxIcon
pxInteger
pxLink
pxNumber
pxPassword
pxPercentage
pxRadioButtons
pxRichTextEditor
pxTextArea
pxTextInput
Auto-generated controls can be configured to serve most requirements; however, some of the non-auto-generated controls that we still use are Chart, SmartInfo, and the Menu Bar.
Configuring Controls
Controls are a rule type that is defined as part of the UI Category.
2. Options/List Source: the choices here vary with the type of UI element we previously selected. For all controls that are not list based, the options that display allow us to select the size, style and additional properties, depending on the UI element. For date fields it also lets us configure the Date Picker. Notice that some values are already populated; these values are used as-is unless we customize them.
For list-based controls, the List Source and Presentation sections appear. The list source determines where the listing comes from and how the list is presented. Usually these are not configured in the control itself, since they depend on the application.
3. Format: the Format section is useful when configuring the presentation. Shown below, we can see how the TextInput UI element is configured to present a currency symbol. In this example, more changes apply to how the currency is presented than when the currency is rendered read-only. Format appears in the Presentation tab at runtime for us to configure or override the default selections made in the control rule. We need to remember that presentation in terms of font, color choice, size, width and other such configurations must be applied in the skin rule.
4. Behavior: this allows us to configure how a change in value on this control affects other data elements. That leads us to the next topic, where we learn more about configuring behavior.
Most of our applications use standard auto-generated controls, so we can configure the options and formats in the sections where the control is added. Behaviors are also configured in the section where we add the control. Practice adding controls in a section to see how you can configure them using three tabs: General, Presentation and Actions.
The General tab consists of the property the control is associated with, the label, visibility settings and so on. The List Source is also configured in the General tab.
The Presentation tab allows us to configure the items in the Options and Format sections.
The Actions tab is used for setting the behavior, which we will learn about next.
Other Events
In addition to the Change event, there are other events. These are classified as Mouse or Keyboard events.
Actions
When clicking Add an action, the system lists all the common actions. We can use the All actions link to
see the list of all possible actions.
We can add an additional event to the same action set. When we do, the actions are performed when either of the events occurs. This is useful when we need to add a keyboard event corresponding to a mouse event in accessibility-friendly applications.
If we need different events to perform different actions, we create a new action set.
Configuring Auto-complete
The auto-complete control provides a lot of configuration choices in addition to those of the standard controls. Look at the UI Gallery for the many examples available to us.
1. Listing Source: similar to other list-based controls such as the dropdown, we can select the source for the list. We can select a data page, a clipboard page or a report definition.
2. Search Results Configuration: once the source is selected, the system provides us with the option to select the fields that can be used as search criteria. This section allows us to select which fields appear in the auto-complete (using the Show flag), whether the value in the field is stored in another property (using the SetValue flag) and whether the field is used in the search (Use for Search). In the example below, both name and employee type are shown in the auto-complete, but only name is used in the search. Once the user selects a choice, the value of the Name field in that row is saved in the associated property.
An auto-complete looks like a grid if it uses more than one column, except that all of the options are inside the text box.
1. Search Method: auto-complete can be configured to search using StartsWith or Contains; this is configured using the Match start of string option.
2. Search Results: search results can be categorized using a specific field or can be configured to display best bets. Best bets display the most popular of the list of options at the top.
3. Number of Options: we can also configure the minimum number of characters a user needs to enter before seeing the list of choices, and, for performance reasons, the maximum number of results that can display.
The product field, which is dependent on this field, is configured as shown in the screenshot below. Notice it passes the category selected in the previous field as a parameter.
To make this work we need one more change: in the category field we need to configure an action set. The Post Value action triggers saving the category selection in the case.
PRPC supports cascading relationships across multiple fields as well. For example, if we select a country, then the state, then the county and then the city, each of these fields accepts the selected value of the field it depends on as a parameter for its listing source.
The example above involved a dropdown control, but we can use other controls as well. For example, country might use an auto-complete, while state might use a dropdown, and city might display in a repeating grid layout alongside other fields.
Conclusion
Controls are very useful in formatting data elements, especially their values, whether presented as read-only or editable. They are referenced in the property as well as in the section where they are added. The same property can be presented in different formats using different controls. Controls can also be used to restrict the values that can be selected when used for user input, or the actions performed when a button control is clicked.
Apply visible when and refresh when to create dynamic user interfaces
Benefits
We do not need to clutter the user interface with unnecessary information. User forms are no longer lengthy; they display only the information that is relevant to the user.
We can refresh only a part of the screen, even a single cell, so that changes appear seamless. Users need not see a whole-page refresh every time dynamic content changes.
If the application uses a static model, the screen must be submitted to the server for the change in value
to take effect. This model forces us to build many screens, with many submit actions and potentially a
performance impact. All of this will surely lead to poor customer satisfaction especially since the user
experience is not good.
This concept is very powerful in PRPC: there is a wide variety of supported events as well as a wide variety of supported actions.
Events
Events are used to trigger an action that makes user forms dynamic. As we said earlier, there is a wide variety of events available in PRPC. Let's look at a few examples, where the user:
1. Clicks a control such as a button, link or icon
2. Double-clicks a row in a grid
3. Right-clicks on the entire grid
4. Presses the Esc key on the keyboard
5. Selects MA in the state dropdown
6. Clicks a checkbox to enable or disable it
7. Enters 5 in the quantity field
That's a little of everything. So now let's see where we can associate events.
Event Association
If we look more closely at the examples, we can easily identify where each event is associated.
1. Controls – In most cases, we associate events with controls. The main purpose of a control is to decide the presentation of a data element on the user interface screen. Events are not defined in the actual control rule; they are defined in the cell where the control is added. So for the checkbox example, we associate the event on the properties panel of the cell where the checkbox is added.
2. Grids – The events are configured by opening the properties panel of the grid layout.
3. Expression Calculation – The event occurs when a declare expression computes the value of a property. This configuration is enabled by default in all newer PRPC versions, and is found on the flow action rule form.
Actions
Now that we know where and how we can identify events, let's take a look at the other part. OK, so we identified the event; what happens when that event occurs? That's defined in actions. Using the list of examples we saw earlier, let's see what types of actions can occur.
1. When the user clicks a control such as a button, link or icon, it opens a new window.
2. When the user double-clicks a row in the grid, it opens the row in edit mode.
3. When the user right-clicks on the entire grid, it shows a menu.
4. When the user presses the Esc key on the keyboard, it closes the assignment and goes back to the home page.
5. When the user selects MA in the state dropdown, it updates the list of counties.
6. When the user clicks the checkbox to enable or disable it, it unmasks the password.
7. When the user enters 5 in the quantity field, it displays a message that the user can order a maximum of 1.
Actions Association
Actions are usually configured along with the events using an action set. An action set can be defined both on controls and on grids. Refer to the Understanding Available Controls lesson for more information on how we can define actions on controls. We will learn about setting action sets on grids later in this lesson.
Some specific actions can also be associated outside the context of an action set. They are directly configured on the part of the screen where we want the action to take effect. These actions can be to:
1. Apply a Visible When condition on a cell, section or layout to hide or show its elements based on a condition.
2. Apply a Refresh When condition to refresh a cell, section or layout.
3. Apply a Disable When condition to disable the cell.
4. Apply a Read Only condition to make the cell read-only.
Lastly, actions can also be associated directly in a Navigation rule. The Navigation rule belongs to the user interface category, and we will learn more about this rule in the last part of this lesson.
Now let's look at how to create an action set in a grid layout.
After a grid is added, an action set can be configured via the properties panel. The action set is initialized based on edit operations; below is the default set for a modal dialog.
Actions can be customized on grids; however, the list of common actions is a little different from that of controls. Again, the All actions link opens the complete list of actions.
Visible When
Visible when is the main mechanism that's used in PRPC to toggle whether or not some part of the page
is visible. Visible when can be applied on sections included in another section, on layouts, and on cells.
Visible when uses a condition.
Three Visible when options exist for dynamic layouts and sections.
Visibility conditions can also be added on grid layouts. We can configure them to show or hide the entire grid or a specific row.
Conditions
The visibility options are:
1. Always – there is no visibility condition on this field, layout or section.
2. If not blank – visible if the value of the field is not blank.
3. If not zero – visible if the value of the field is not zero.
4. Condition (expression) – uses a condition to determine visibility; visible when the condition is true.
5. Condition (when rule) – uses a when rule to determine visibility; visible if the when rule returns true.
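The five options above boil down to a small decision rule. As a rough sketch (in Python, since PRPC evaluates these inside the rendered section, not in user code; the function name and option strings are illustrative only):

```python
def is_visible(option, value=None, condition=None):
    """Evaluate a field's visibility according to the configured option:
    'always', 'if-not-blank', 'if-not-zero', or 'condition'
    (a callable standing in for an expression or when rule)."""
    if option == "always":
        return True                       # no visibility condition at all
    if option == "if-not-blank":
        return value not in (None, "")    # hidden only when blank
    if option == "if-not-zero":
        return value != 0                 # hidden only when zero
    if option == "condition":
        return bool(condition())          # expression / when rule result
    raise ValueError("unknown visibility option: %s" % option)
```

For example, `is_visible("if-not-blank", "")` is false, so the field stays hidden until a value is entered.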
Condition builder
When we choose Condition (Expression) in the visibility field, the system provides the option of using a condition builder to write expressions.
In the condition builder we can use an expression and add another expression using AND or OR. If we need to write more than two expressions, it's better to write them as a when rule. The condition builder can also be configured to reference a when rule; when we choose a when rule, it provides the option to select true or false.
In addition, when we use an expression, the system offers another key option: whether the condition runs on the client.
What does this field mean? If it is enabled, the visibility condition fires on the client side; at runtime, the section code includes markup to hide or show part of the screen based on the condition. If the field is disabled, the section code does not contain the hidden part, and a round trip to the server is required to fetch it when the condition is satisfied.
Server communication is initiated by refreshing the section, which leads us to the next topic.
Refresh When
Refresh When allows us to have a portion of a page return to the server to refresh its content when a
specified condition is true. Refresh When can be configured to refresh either an included section or a
specific layout. Similar to Visible When, we can use the condition builder to add conditions.
We can configure the properties to respond to a condition, with the option to use AND and OR to combine more than one condition. In addition, there are two keywords:
1. Changes – indicates that the refresh occurs when the property's value changes; in this example, the section refreshes when the discount is changed.
2. AddDelete – applies only to repeating groups, so the refresh occurs when, for example, a line item is added to or deleted from the purchase request.
We can also configure refresh of a specific row within grids. By default, the row never refreshes; we can configure it to refresh on a specific condition.
Unlike Visible When, a Refresh When condition can also be configured using an action set. The action set allows refreshing the current section or another section by its reference.
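The Changes keyword above amounts to change detection on a tracked property. Here is a minimal Python sketch of that idea (the class name and page representation are illustrative, not PRPC internals):

```python
class RefreshWhen:
    """Track one property and report when the section should refresh
    because that property's value changed (the 'Changes' keyword)."""

    def __init__(self, prop):
        self.prop = prop
        self.last = None
        self.primed = False   # becomes True after the first observation

    def should_refresh(self, page):
        current = page.get(self.prop)
        changed = self.primed and current != self.last
        self.last, self.primed = current, True
        return changed

watcher = RefreshWhen("Discount")
watcher.should_refresh({"Discount": 10})   # first observation: no refresh
watcher.should_refresh({"Discount": 15})   # value changed: refresh
```

A real implementation would be driven by the browser's change events rather than polling, but the comparison logic is the same.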
Read-only conditions can be applied only on a control, and the choices are not very different. Auto means editable in most cases, while read-only can be applied as an expression or a when rule.
A visibility condition can run on the client only when:
The condition does not reference a when rule; when rules can only be processed on the server.
The condition contains no more than four (4) properties and constants, combined by Boolean operators.
When the condition is configured to run on the client, the server sends all of the possible layouts to the
client, and the client determines which layouts to display and keep hidden.
The choice of client-side or server-side processing can have a significant impact on how the UI behaves
for end users, as outlined in the following table:
[Table: Client-Side Event conditions vs. Server-Side Event conditions]
This rule essentially has one tab (Editor) to configure. We add nodes using the icons in the header to form a tree, typically a parent-child relationship. In this example, UI is the top level, which has child nodes such as UI Gallery, Skins & Portals, and Tools; Tools in turn has child nodes of its own. This rule is used by the main navigation rule to list UI as a child in Designer Studio. (For an example, look at the navigation rule named pxMainMenu.)
Other than setting the hierarchy, the next important part is to define the action. Actions can be added by double-clicking the row to open the properties panel. We can also modify the Type field; Item is the most commonly used. An Item List can reference a page list, which is useful when presenting a dynamic list of options stored as part of a page list. Reference is useful for referring to another navigation rule.
How does the navigation rule get referenced? There are multiple options:
1. We can reference it in an action set, either on a control (such as a button or link) or on a grid.
2. In the menu bar control. (Refer to the PDN article listed in the related content for this.)
3. Using the Reference keyword in another navigation rule.
To access a navigation rule, we need to define an action set with an event (click, double-click, right-click, etc.) that invokes a specific action (Display Menu).
Conclusion
Dynamic user interface is a very powerful feature that comes in handy when implementing smart-looking user interfaces that react to user choices. Presenting content dynamically gives us flexibility in defining layouts, section includes, and cells that can be hidden and made visible only when required. Dynamic UIs are also useful for seamlessly presenting updated property values populated by declare expressions.
Data Transforms
Activities
PegaPulse
Case Attachments
Routing
Correspondence
Ticketing
Declarative Processing
Declarative Rules
Automating Decisions
Validation
Data Transforms
Introduction
The Data Transform rule is a powerful tool in Pega 7 that makes data manipulation on properties and pages easy and efficient. Data transforms map data from a source to a target, performing any required conversions or transformations on that data. The source could be one case, such as a purchase request case, and the target could be another case, such as a purchase order case. To create the purchase order, we might require data about the vendor from whom we bought the items on the purchase request.
In PRPC, Data Transforms are part of the Data Model category and are instances of the Rule-Obj-Model
rule type.
At the end of this lesson, you should be able to:
Avoid using a data transform when:
A property value should be set declaratively rather than procedurally; to set a property value declaratively, use a declare expression rule instead.
Defining source data for a data page; depending on the source of the data, a report definition, database lookup, or activity may be more appropriate.
Updating data in a database; data transforms do not provide any means to write data to a database.
First, let's look at the Data Transform rule form itself (shown below). The Definition tab is where we define the actions to be taken; they are presented as a sequence of rows in a tree grid. We will discuss the structure in detail shortly. The next tab is the Parameters tab, where we list the variables that input data to, or return data from, the data transform. Variables defined here are referenced on the Definition tab using the notation Param.ParamName. We use the Pages & Classes tab to specify the pages referenced in the fields of the Definition tab. These two tabs are standard tabs for most rules.
Let's look at this example to review the capabilities of the Data Transform rule.
In the first row, we delete the vendorlist page from the purchase request.
In the second row, we iterate through each of the line items that comprise the LineItems page list. The rows indented under the second row are performed for each page of the page list.
In the third row, we check whether the vendor listed in the specific line item is unique or not, using
the when condition IsDistinctVendor. Any rows indented under this row execute only when the
condition is true. Otherwise, the data transform skips the indented row(s).
In the fourth row, the unique vendor information is copied from the source line items to the target
vendor list whenever the vendor is considered unique, as determined by the when condition in
the third row.
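The four rows above can be sketched as ordinary code. This is a hedged Python illustration of the same logic (the property names mirror the example; the function name and the way "distinct vendor" is tested are assumptions, since the real IsDistinctVendor when rule is not shown):

```python
def copy_distinct_vendors(line_items):
    """For each line-item page, copy the vendor to the target vendor list
    only when it has not been seen before (the IsDistinctVendor check)."""
    vendor_list, seen = [], set()
    for item in line_items:            # 'For Each Page In' the LineItems list
        vendor = item["Vendor"]
        if vendor not in seen:         # when condition true -> nested Set runs
            seen.add(vendor)
            vendor_list.append({"VendorName": vendor})
        # when condition false -> the indented row is skipped for this page
    return vendor_list
```

Given three line items from vendors Acme, Acme, and Globex, the resulting vendor list holds one page each for Acme and Globex.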
Data transforms can be chained with other data transforms defined in their parent classes. To do this, we
need to enable the Call superclass data transform? checkbox, and the data transforms must share the
same name.
PRPC checks for the same name in all the parents of this class, following both pattern and directed inheritance. It then applies the data transforms starting with the deepest ancestor and continuing down through each child.
From the pyDefault data transform for the Purchase Request case, we can use the open icon next to the checkbox to open the data transform of the parent class, and keep going until we find the deepest ancestor. From the chain, we can see that nothing is set in @baseclass; pyWorkIDPrefix is set to W- in Work-. When the data transform in Work-Cover- is applied, it sets the same property to C-. Finally, the data transform in the Purchase Request class is applied, so the work ID prefix is PR- when a new purchase request instance is created. This is a powerful feature for setting default values for properties at the appropriate level.
Action – This is where we identify what the step is for. An action is required for each step. We will look at the different possible actions shortly.
Target – Most actions require a target. Where a target is not required, we cannot select one. The smart prompt box acts both as a select and a text box: we can select a property from the list or type text.
Relation – This is an optional column. The typical relationship between source and target is equal to. A few actions use other relationships.
Source – This becomes selectable only if the action requires a source. We can specify literal values, properties or expressions. We can, for example, set the target property Employment Type to the value Full Time, set the target property Full Name to the concatenation of the two properties First Name and Last Name, and set the target property Rate to an expression, namely the sum of the source properties BaseRate, LoanRateAdjustment, and RiskRateAdjustment.
To add rows, we can use one of two options: click the Add a row icon, or right-click on any row to access the context menu. The right-click menu is context sensitive, so the choices depend on where we click. In some cases, adding a child creates a tree-like nested structure. To delete a row, we can use the Delete this row icon, or right-click and select Delete from the menu.
As a best practice, and to improve readability, do not create more than 25 steps. If we need to define
more than 25 steps, we can group some of the actions and define them in another data transform rule.
We will see how we can reference another data transform rule from one data transform rule shortly.
Set is used to set the target from a source. We can set a value to a Single Value property which exists in
the top-level page such as the pyWorkIDPrefix or in an embedded page such as the LoanType in the
LoanInfo page.
Remove is used to delete the target and any associated values from the clipboard.
We can also use Update Page to set the properties defined on an embedded page. When we use
Update Page we need to set individual properties using the Set action with the nested rows after the row
with Update Page. In fact, we have the option of selecting any of the actions shown above for the
nested rows below the Update Page row.
We can reference another data transform rule from our existing data transform rule. This might occur if
we are going over 25 steps and want to break the rule into smaller manageable data transforms. Or we
might have a reusable data transform, such as one for initializing a case with some default values.
Whenever we need to invoke a Data Transform from another data transform rule, we use Apply Data
Transform.
Data transforms execute all of their actions in the order defined. However, a few conditional actions are available to add logic and perform steps based on a condition.
Otherwise When and Otherwise provide actions for the alternative to the When actions.
We can also iterate over a pagelist using For Each Page In action. Using this action we are able to apply
the change for all the child nested rows of the page list. We have the option of selecting any of the actions
for the child nested rows, such as Update Page. Update Page is primarily for a single page, while For
Each Page In is for a page list. We can use the Append to action to copy a page from the source to
the target. For instance, if we want to add a new page to the Assets page list, we can select new page.
We can also add another existing page or copy all the pages from another pagelist, by selecting the
appropriate values in the dropdown in the Relation column. Append and Map to is used to map individual properties in the page list. When we select this action, at least one nested child row is used with the Set action.
If we don't want to loop through all the pages in the page list, we can use the Exit For Each condition to exit the loop, or we can exit from processing all the remaining steps with the Exit Data Transform condition. Typically, these steps follow a when condition.
As we iterate through the list of the line items page list, <CURRENT> represents the current index of the
iteration.
Here is the list of symbolic indexes that we can use when we loop through the iteration:
<CURRENT> - Identifies the index of the current iteration
<APPEND> - Inserts the element at a new position at the end
<LAST> - Retrieves the highest index
<PREPEND> - Inserts the element at the top
<INSERT> # - Inserts the element at a specific position that is indicated by the number
Param.pyForEachCount – Same as <CURRENT>; identifies the index of the current iteration.
This can be used for the Page index and in the expression as well, while <CURRENT> can be used only
for the Page index.
For Data Transform rules, we cannot use the <APPEND> keyword in the For Each Page In
action. Instead, we need to follow the For Each Page action with an Append to action or Append and Map
to action. We can use <APPEND> for Update Page action in the Data Transform and also for looping
through in the steps of an Activity rule.
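The symbolic indexes above map naturally onto list operations. The following Python sketch models them with a 0-based list standing in for a 1-based page list; the function name is illustrative and the real keywords are resolved by the rules engine, not by user code:

```python
def apply_symbolic(page_list, index, element):
    """Mimic the page-list symbolic indexes: <APPEND>, <PREPEND>,
    and <INSERT> n (1-based position, modeled on a Python list)."""
    if index == "<APPEND>":
        page_list.append(element)          # new element at the end
    elif index == "<PREPEND>":
        page_list.insert(0, element)       # new element at the top
    elif index.startswith("<INSERT>"):
        pos = int(index.split()[1])        # e.g. "<INSERT> 2"
        page_list.insert(pos - 1, element)
    return page_list

items = ["A", "B"]
apply_symbolic(items, "<APPEND>", "C")     # A, B, C
apply_symbolic(items, "<PREPEND>", "Z")    # Z, A, B, C
apply_symbolic(items, "<INSERT> 2", "Y")   # Z, Y, A, B, C
last = len(items)                          # <LAST> retrieves the highest index
```

<CURRENT> has no direct analogue here; it is simply the loop variable's position during a For Each Page In iteration.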
While we are discussing symbolic indexes, let's look at two more keywords that are useful for accessing the correct pages. These are used in a host of rules, wherever pages are referenced.
Top – Refers to the topmost page containing the embedded page.
Parent – Refers to the parent page of the embedded page. If the embedded page has only one parent, and no grandparents in the hierarchy, Top and Parent refer to the same page.
Whenever we create a new case, either through an explorer or when creating a new application through the Application Creation wizard, PRPC creates pyDefault. We recommend always using the same name for all starting processes to take advantage of the chaining. The main purpose of a data transform referenced on the Process tab of the starter flow rule is to initialize the properties of the case instance when it is instantiated.
Here is an example where we initialize the properties of a Purchase Request case. When a case instance is created, it is initialized with who it is requested for, the cost center, and the currency.
In the case hierarchy, we use the data propagation configuration to propagate the data from the parent
case to the subcases when the subcases are instantiated.
We can propagate the data from the parent case Purchase Request into the subcase, Purchase Order. If
we are simply taking the data without making any changes, we can use the data propagation option and
we do not need to select Also apply Data Transform. If we are using the data conditionally, looping through a page list, or reusing the propagated data in other rules, we can select the Also apply Data Transform option and use a Data Transform rule.
We can also use a Data Transform rule in a step where a subcase is instantiated from the Case Designer of the parent case. We propagate data this way if we don't want to use the Case Designer's data propagation configuration. Let's say we are creating an Inventory Selection step in two steps of different stages of the Purchase Request case, and in those steps the data that needs to be propagated is different. In this scenario, we need to reference different data transform rules in the step configurations, instead of relying on the one data transform rule referenced in the data propagation settings on the Details tab of the Case Designer of the parent case. The example shown here uses the CopyLineItemsPO data transform when creating multiple subcase instances of the Purchase Order case, in the step configuration of the Create Orders step of the Purchase Request case's stage design.
For each step, there is an appropriate flow rule associated with it. A step of step type Case uses the
Create Case(s) smart shape in the flow rule for that step. Referencing a data transform on step
configuration is the same as setting the properties on the smart shape in the flow rule. We can modify it in
either place and it is reflected in the other place.
To avoid accidentally overwriting data when a subcase is instantiated, it is important to understand the
order in which an application evaluates these Data Transforms:
1. First, the application applies the Data Transform on the starting flow of the subcase
2. Next, the application applies the data propagation and Data Transform configured on parent case
type rule (defined on Case Designer Details tab)
3. Finally, the application applies the Data Transform defined on the case instantiation step in the
Case Designer of the parent case, or the Data Transform defined in the Create Case(s) smart
shape in the create case flow rule of the Parent case.
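Because each later transform can overwrite what an earlier one set, the three-step order above is worth simulating. This Python sketch is purely illustrative (the property names and values are invented to show the overwrite behavior, not taken from any shipped rule):

```python
def instantiate_subcase():
    """Apply the three data-transform stages in the documented order.
    A property set by a later stage overwrites the earlier value, which
    is exactly why the evaluation order matters."""
    page = {}
    # 1. Data transform on the subcase's starting flow
    page.update({"Priority": "Low", "Source": "starting-flow"})
    # 2. Data propagation / transform configured on the parent case type
    page.update({"CostCenter": "CC-100", "Source": "propagation"})
    # 3. Transform on the instantiation step (or Create Case(s) shape)
    page.update({"Source": "instantiation-step"})
    return page
```

Only the last writer of Source survives, while values that no later stage touches (Priority, CostCenter) pass through untouched. That is the "accidental overwrite" the text warns about.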
In flows, on any connector between two shapes, we can set properties or reference data transform rules. If it is a simple setting of one or two properties, we can use Set Properties. But if we are using data conditionally, looping through a page list, or reusing the data transform in other places, we can use the Also apply Data Transform option and use a Data Transform rule. For example, when a flow passes through a decision shape, we can use the appropriate data transform rule on each connector, based on the decision made by the end user.
In flow actions, we can specify a data transform on the flow action's Action tab. To populate properties from another page, we can invoke a data transform in the before this action area. If we want to copy the values submitted in the user session to another page, we can invoke one after the action is submitted.
We can reference a data transform in a section rule directly for the change event of any cell, and we can also reference a data transform if we select Refresh This Section for the client change event. This is handy when we change a value in a dropdown or select a radio button. Let's say a business requirement states that the state tax for Texas is 0% and the state tax for California is 6%, and so on. We can use a data transform rule so that when the state is selected from the dropdown, the user interface refreshes to show the tax to be deducted based on the selected state.
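The state-tax scenario can be sketched as the data transform that would run on the dropdown's change event. This is a hypothetical Python illustration; the rates for TX and CA come from the text, while the default rate, property names, and function name are assumptions:

```python
STATE_TAX = {"TX": 0.0, "CA": 0.06}   # rates from the example requirement

def on_state_change(page, state):
    """Sketch of the data transform invoked by the dropdown's change
    event: set the rate for the chosen state, recompute the tax, and
    let the refreshed section display the new amount."""
    page["State"] = state
    page["TaxRate"] = STATE_TAX.get(state, 0.05)   # assumed default rate
    page["Tax"] = round(page["Subtotal"] * page["TaxRate"], 2)
    return page

order = {"Subtotal": 200.0}
on_state_change(order, "CA")   # Tax becomes 12.0; refresh shows it
```

The division of labor mirrors the text: the data transform computes the value, and Refresh This Section makes the recomputed tax visible without a full-page submit.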
We can reference a data transform from another data transform rule with Apply Data Transform action,
as we described earlier in this lesson. We can also reference a data transform rule from an activity rule,
which we will discuss in the Activities lesson.
Conclusion
We learned that data transforms process their steps in sequence.
Data transform rules can be used for setting and manipulating data, such as:
To append properties from one page list property to another page list property.
To iterate over pages in a page list property to map data on each page.
We can reference Data Transform rules in a host of other rules wherever we need to set and manipulate
data as in the list above.
Activities
Introduction
Activity rules provide us with one way to automate the processing of work in PRPC-based applications using a program-like approach. An activity consists of a sequence of steps executed procedurally.
In PRPC, Activity rules are part of the Technical category and are instances of the Rule-Obj-Activity rule type.
Each step can call a PRPC method, transfer control to another activity, or execute custom inline Java code. As a programming tool, activities also provide features such as iterations and conditions. While activities can appear to some as an easy and flexible way to automate work processing, they can quickly become complex to analyze, execute, debug, and maintain. Consequently, if writing an activity is our only option, we must keep the following best practices in mind: keep activities short (no more than 25 steps), and avoid inline hand-coded Java as much as possible by using library functions instead.
At the end of this lesson, you should be able to:
To perform case-related functions, such as creating a case instance, routing the case, or updating the work status, as part of certain operations such as parsing an input file.
Let's take a look at the Activity rule form, starting with the standard activity UpdateLocaleSettings. The Steps tab is where we define the steps to be processed. They are presented as a sequence of rows in a tree grid. We will discuss its structure in detail shortly.
Before writing an activity, it is important to understand the three common page types we will interact with. They are:
Primary pages
Step pages
Parameter pages
A Primary page is a clipboard page which has the same class as the Applies To class of the activity and
is designated when the activity is called. This page is the default location of properties referenced with a
dot and no preceding page name. For greater clarity, we can reference a property on the primary page
using the keyword Primary followed by a dot and the property name.
When a Branch or Call instruction executes as defined in a step in the Steps tab, the page in the Step
Page column of the step becomes the primary page of the called activity. If the Step Page column is
blank, the primary page of the current activity becomes the primary page of the called or branched-to
activity. That is, the primary page of an activity becomes the step page of each step, except for steps
where the Step Page column is not blank. The step page becomes the primary page for the duration of
this step's execution.
A parameter page contains parameter names and values, as listed in the parameters tab. It has no name
or associated class, and is not visible through the Clipboard tool. However, we can display the contents of
the parameter page with the Tracer tool.
On the Pages and Classes tab, we list the pages used in the steps of the activity along with their
classes.
The Security tab has a few settings that allow us to set who can access the activity and how.
The Allow direct invocation from the client or a service check box indicates whether user input processing can start the activity, or whether it must be called only from another activity.
The Authenticate checkbox, when selected, allows only authenticated users to run the activity.
The Privilege Class and Name identify a privilege a user must hold in order to be allowed to execute the activity.
The Usage type determines whether and how the activity can be referenced in other rules. Select
one usage type depending on the intent of the activity.
The Label provides an identifier for the step that can be referenced from other steps. The label name is
used in the When and Jump conditions, which well look at later. We can also put two slash characters as
a step label to indicate to Process Commander not to execute the step. Such steps can also be used as
comments.
The Loop allows us to set up iteration through the elements of a Value List, Page List, Value Group or Page Group, and to perform the provided activity method on each value or each embedded page.
As we iterate through the loop, we can select the For Each Page option to sequence through all pages
of a specified class or classes and perform a method or instruction for each page. Leave the Step Page
field blank. Each time the step repeats, the step's page changes internally automatically.
Use the For Each Embedded Page option to apply a method or instruction to each embedded page in a
Page List or Page Group property. Identify the property containing the embedded pages in the Step Page
field.
For the optional Valid Classes (Only loop for certain classes as shown below) parameter for these two
Repeat conditions, we can enter a class or classes. We can click the Add Row icon to add more than
one class. When valid classes are populated, iteration processing only processes pages of the valid
class list and the ones derived from them and skips over the pages of classes that are not in the list.
We can select the For Each Element in a Value List option to repeat the step for each element in a
Value List property. When we select this iteration form, a Property Name field appears. Identify the Value
List property in the Property Name field.
We can select the For Each Element in a Value Group option to repeat the step for each element in a
Value Group property. When we select this iteration form, a Property Name field appears. Identify the
Value Group property in the Property Name field.
We select the For Loop option to repeat the step a number of times determined by the values of integer
constants or integer properties. Enter integer constant values or integer property references for the Start,
Stop, and Increment fields. The Increment must be a positive integer.
To add a child step to an iteration, right-click the iteration step and select Add Child.
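The Valid Classes filter described above can be modeled as a simple skip during iteration. This Python sketch is illustrative only: the function name and page shape are invented, and for brevity it matches class names exactly rather than also including classes derived from them, as the real engine does:

```python
def for_each_embedded_page(pages, valid_classes=None):
    """Iterate a page list; when a valid-classes list is given, process
    only pages whose class appears in it and skip the rest.
    (Exact-match simplification: real PRPC also accepts derived classes.)"""
    processed = []
    for page in pages:
        if valid_classes and page["pxObjClass"] not in valid_classes:
            continue                       # class not valid: skip this page
        processed.append(page["pxObjClass"])   # stand-in for the step method
    return processed
```

With no valid-classes list, every embedded page is processed; with one, only the listed classes are.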
The When allows us to define a precondition that controls whether the step is executed or skipped.
We need to check Enable conditions before this action to display the row where we can enter a condition or reference a when rule. Based on whether the rule returns true or false, we can select one of these conditions:
Continue Whens
Skip Whens
Jump to a later step, identified by the step label noted in the param field
Skip Step
Exit Iteration – skips the rest of the current iteration and moves to the next iteration of the same step
Exit Activity
The Method indicates which Process Commander method the step will execute. We will look at some
common methods that we can use, later in this lesson.
The Step Page, as mentioned earlier, identifies a page to be used as the context for properties referenced within the step.
The Description is text that explains to other developers what the step does. As a best practice, always provide a comment for each activity step.
The Jump condition, or post-condition, is similar to the When precondition. After the step executes, a condition is evaluated; based on the result, the activity may perform different actions, such as jumping to another step or exiting the activity. There is also an option specifying what to do if an exception occurs during step processing.
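Preconditions and post-conditions together form a small control-flow machine. The following Python sketch is a toy interpreter for that flow; the step structure, key names, and lambdas are all illustrative, not PRPC internals:

```python
def run_activity(steps, page):
    """Tiny interpreter for activity step flow: a 'when' precondition can
    skip a step, and a 'jump' post-condition can move to a labeled step."""
    labels = {s["label"]: i for i, s in enumerate(steps) if s.get("label")}
    i = 0
    while i < len(steps):
        step = steps[i]
        if "when" in step and not step["when"](page):
            i += 1                         # precondition false: Skip Step
            continue
        step["method"](page)               # execute the step's method
        if "jump" in step and step["jump"][0](page):
            i = labels[step["jump"][1]]    # post-condition true: jump to label
            continue
        i += 1
    return page

steps = [
    {"method": lambda p: p.update(Count=1),
     "jump": (lambda p: p["Count"] == 1, "END")},   # jumps over the next step
    {"when": lambda p: False,
     "method": lambda p: p.update(Skipped=True)},   # precondition skips this
    {"label": "END", "method": lambda p: p.update(Done=True)},
]
result = run_activity(steps, {})
```

After running, the page shows Count and Done but never Skipped: the jump bypassed the middle step, and its precondition would have skipped it anyway.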
Property-Set – used to set the value of one or more properties. If this were the only step, we would use a data transform rule instead of an activity rule. When setting the properties, we can use any of the symbolic indexes described earlier in this lesson.
Call – used to find and execute another activity. After the called activity completes, the calling activity resumes processing.
Page-Remove – used to delete one or more named pages from the clipboard. If no page is specified, the page referenced by the step page is removed.
Conclusion
Best practice is to use data transform rules, declarative rules, report definitions, and so on, in place of
activity rules. All the actions provided by a Data Transform rule are also possible through an activity.
However, using a Data Transform rule speeds up development as its rule form is easier to read than the
activity rule form. In addition, a Data Transform rule provides runtime performance improvements over an
activity.
In some instances an activity rule cannot be avoided; often activities are mandated, for example, for Service Rule Request processing.
261
Pega Pulse
Introduction
PegaPulse is a rich social platform that adds social activity streaming capabilities to the user interface.
We can collaborate and converse with other application users or developers.
By default, a PegaPulse section is included in the standard portals, where we can share messages, URLs, and files with others in our work group. It is included in the standard perform harness as well; hence, when processing cases, we can share messages, URLs, and files. Here we can set the shared content to be viewed in either public mode or private mode. In private mode, only those who have access to the case can view the content. In public mode, anyone who has access to the case, as well as users in the work group, can view the content. Work group users can view the content in the standard portal interface.
A Social User Profile is the profile for any user that includes their name, position, phone, and so on. When we click the link of any user's profile, not only can we see that information, but we can also send a PegaPulse posting to only that user.
PegaPulse is limited to attaching conversations, URLs, and files, compared to Case Attachments, where we can attach screen shots, scanned documents, and so on. In PegaPulse, we can take actions such as creating a task, which we cannot do with Case Attachments. We can use the Post to Pulse smart shape to have a PegaPulse posting added to case instances. Similarly, we can use the Attach Content smart shape to add attachments to case instances. The attachment types are limited in PegaPulse, while case management lets us add more attachment types. For more information on Case Attachments, see the Case Attachments lesson in this course.
We can view the details of PegaPulse in the following video.
At the end of this lesson, you should be able to:
Determine how to view Social User Profile and send individual messages.
262
The User posting drop-down list lets us select the user under whom the system adds the update to the
contextual activity stream. By default, posts are attributed to the assigned operator (Current Operator).
We can also select Other, then select the operator ID we want to use.
The Message field contains the content of the automated post.
The Message pertains to the current case and Make secure post checkboxes allow us to configure
the availability of the post. The Make secure post option is available only when Message pertains to the
current case has been selected.
263
Conclusion
PegaPulse is a rich social platform built into PRPC that can easily be integrated into Pega applications for the purposes of collaboration, document sharing, and conversation among application developers and end users.
PegaPulse content, whether a message, a file, or a URL, can be shared among work group users only, with only those involved in a case instance, or with any individual user directly.
If PegaPulse needs to be included in other harnesses, or in any user interface rule, simply embed a
section rule, @baseclass.pxActivityStream.
264
Case Attachments
Introduction
During the normal sequence of processing a case instance, we may have to add attachments to enable
users to take the appropriate actions and also for future reference. When we process an insurance claim
for an automobile accident, we can attach the police report to identify who is at fault.
A case attachment can be a file, screen shot capture, URL or text note. By default, case instance
attachments are stored in the pc_data_workattach table as instances of concrete classes derived from
the Data-WorkAttach- class. They are not stored directly with the case instances in the same table.
We learn how to add attachments manually and automatically. Suppose we want to provide access to certain users so they can add attachments, and we want to grant rights to delete attachments only to managers. Using the Attachment Category rule type, we can enable security and restrict access for certain attachment operations.
At the end of this lesson, you should be able to:
265
While processing a case, end users can add attachments to a case, for future reference. We can do so
easily with the standard perform and review harnesses. On the right hand side, we can use the Case
Attachments section to add attachments. We can use the Add or Advanced link to add attachments.
We can attach a File or a URL using the Add link.
When we click the Advanced link, an attachment listing window pops up and it has an Add dropdown
button. Using the button, we can attach a Note, File, URL, Screenshot and a Scanned Document.
The attachment user interface is slightly different for each of these choices, so let's quickly look at how they work.
When we click Attach a File from either menu, we get the Attach a File dialog box, where we can drag
and drop a file or browse to the file folder and select a file to attach it.
When we click Attach a URL from either menu, we get the Attach a Link dialog box, where we can
enter a subject for a description or why we are attaching the link, and the URL.
When we click the Attach a Note link in the advanced menu, we see the Attachment dialog box and
we can enter a subject and the text for our note.
266
When we click the Attach a Screen Shot link in the advanced menu, we see the Attachment dialog box
and we can select a window name from the drop down and enter a note as description/reason for the
attachment.
When we click the Scan and Attach a Document link in the advanced menu, we see the Attach Scan
Document dialog. We first need to scan a document before we can attach it. So, from the Select Source
button, select a scanner. Next select the Scan button and we will see the scanned document in the
viewer.
As mentioned earlier, there is also the Enterprise Content Management attachment type. We can configure our application to store attachments of type file in an external enterprise content management (ECM) system, such as Microsoft SharePoint or IBM FileNet, using the CMIS protocol. We can enable this option by selecting a checkbox on the Integration tab of the application rule.
267
Two more fields are needed to ensure the attached files are properly stored in the ECM.
We use the Connector Name field to specify the Connect CMIS rule used to connect to the external content management system.
If the Connect CMIS rule has not been created before, we can click the magnifying glass icon to create the rule now. The second field, CMIS Folder, provides a Browse button to select the directory location on the content management system in which we are going to store work item attachments for this application.
Anyone who has access to this case can view the list of the attachments and can also view the contents
of the attachments. In the Attachments section, we can see the list of attachments associated with this
case. To view the contents of an attachment, just click on the link.
In the Advanced attachments window, attachments are grouped by type. Here we get to see the timing,
description, and who attached it as well. If permitted, we can also remove the attachment if the
attachment is not required. For quick viewing, we can use the attachments section in the harnesses. To
see other details and/or to remove the attachments, we can use the advanced attachments window.
If the perform or review standard harnesses are customized, or some other standard or custom harness is
used, the case attachment section may not be available to end users to add attachments. In that
268
scenario, the system architects can provide a local action to the end users; end users can select the local action from the Other actions menu. All attachment types can be added as local actions; for every attachment type, there is one flow action. AddMultipleAttachments, though, provides the ability to add multiple file attachments, with a category for each, in a single action.
AttachAFile: as its name implies, used to attach a file to the work item, including an explanatory note but no attachment category.
AttachANote: prompts users to type in a text note that becomes an attachment to the case instance.
AttachAScreenShot: used to select one window from a list of windows on the Windows desktop, and then capture and attach a screen shot JPG image to the case instance with an explanatory note.
AttachAUrl: the flow action you might use to attach a URL reference to the work object with an explanatory note.
AttachFromScanner: allows us to start the Image Viewer tool to scan, view, or update a TIFF file from a scanner and attach it to the case instance.
269
The Attach a URL and Attach a Note options require a URL and description, and a note and description, respectively.
As part of attaching a file automatically, we can generate a PDF file and attach it to the case. To
accomplish this, we can use the smart shape Create PDF. For example, a list of the purchase request
items can be generated as a PDF file and attached to the case instance. We provide the section name
that has the data that needs to be included in the PDF file plus the description. The description is used for
the PDF file name.
270
Once the PDF file is generated and attached, it can be viewed the same as other attachments.
271
In the Access Control List by Privilege section, we can select a privilege rule in the Privilege Name field. The system uses the Applies to class of the attachment category to validate the privilege name. We can select any of the following checkboxes that apply. If the checkbox next to an option is not selected, the qualified user cannot perform that operation.
Create: grants users the ability to add attachments of this category if they have the privilege in the Privilege Name field.
Edit: grants users the ability to edit attachments of this category if they have the privilege. Permission to edit implies permission to view.
View: grants users the ability to view attachments of this category if they have the privilege.
Delete Own: grants users the ability to delete attachments of this category that they added earlier if they have the privilege.
Delete Any: grants users the ability to delete any attachments of this category if they have the privilege. Permission to delete any implies permission to delete own.
We can add multiple privileges by using the Add a row icon. When we have multiple rows, users must
hold at least one privilege to gain access. The order of rows in this section is not significant.
We can also define a list of When rules to control access to the attachments. In this case though, all
When rules must evaluate to true for a qualified user to be granted access.
In the Access Control List by When section, we select a When rule in the Rule field. The system uses
the Applies to class of the attachment category rule to find the When rule. Next select any of the
operation checkboxes that apply. If an operation checkbox is not selected, the user cannot perform that
operation. The operations are similar to control by Privilege with the difference that they are now
controlled by a When rule rather than a Privilege rule.
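The two lists combine differently: a user needs at least one of the listed privileges, but all of the listed When rules must return true. This can be sketched as follows (an illustrative Python model; the category structure and names are assumptions, not PRPC internals):

```python
# Sketch of how an attachment category decides whether a user may perform an
# operation: privilege rows are OR'ed (any one suffices), while When rules
# are AND'ed (every rule must evaluate to true).

def can_perform(operation, user_privileges, category):
    """category maps an operation to its privilege rows and When rules."""
    priv_rows = category.get("privileges", {}).get(operation, [])
    when_rules = category.get("whens", {}).get(operation, [])
    # At least one matching privilege row is needed, if any rows are defined.
    if priv_rows and not any(p in user_privileges for p in priv_rows):
        return False
    # Every When rule must return True (all() of an empty list is True).
    return all(rule() for rule in when_rules)

# Hypothetical category: deleting requires the "Mgr" privilege plus a When rule.
category = {"privileges": {"Delete": ["Mgr"]},
            "whens": {"Delete": [lambda: True]}}
```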
The Enable Attachment Level Security option allows the operator who attaches an attachment of this category at runtime to identify one or more work groups that have access to the attachment.
272
When enabled, this attachment-level restriction operates in addition to and independently of any
restrictions defined on this tab for the category.
The next tab, Availability, provides a list of the attachment types. We select the checkboxes for the
attachment types that are valid for the category and the work type identified by the Applies to class of
this attachment category rule.
In this example, only File attachments are valid for this category.
If no types are selected, the category does not become inaccessible. Instead, a default category rule in the Work- class is used for the respective type. The default category has no security restrictions.
When adding an attachment to the work item, users have the ability to choose the category to which the
attachment belongs.
Conclusion
End users can add attachments to case instances manually as needed, or we can add attachments automatically at a certain point in the flow. We can view the added attachments easily through the section included in standard harnesses.
We can restrict who can add, view and remove attachments through the Attachment Category rule type.
We can reuse the standard attachment categories or create a new attachment category. When adding an
attachment, the end users can select an appropriate category for the attachment to ensure that the
attachments are secure.
273
Routing
Introduction
Routing is a critical component of business process management as it determines who can actually
perform the work required to complete a process.
As we learned in the introductory courses, an assignment of a case can be routed to an operator's worklist or to a shared queue known as a workbasket. Assignments can be pushed to or pulled by an individual operator. In this lesson, we will look at the push versus pull concepts in more detail.
Sometimes we want to assign a task to an external person who is outside of the organization and may not have access to PRPC. For example, when we are processing a purchase request, we may want to get quotes from multiple preferred vendors. The quoting task assignment is sent to the vendors, and they can respond to the assignment through the web. This feature is called Directed Web Access (DWA). In this lesson, we will learn how to configure DWA and how external users can respond to the assignments.
We have seen some basic standard routers, such as ToWorkList and ToWorkBasket, in the introductory courses. Now we will learn about standard routers for advanced requirements.
At the end of this lesson, you should be able to:
Explain the concept of push versus pull routing and how to use each one effectively
Identify some advanced standard routers and when and how to use them
274
Pull routing, on the other hand, routes to a workbasket that is shared across multiple users. Ownership does not occur until an operator takes action on the assignment. Most applications use GetNextWork logic to select the most urgent assignment from multiple workbaskets. When this happens, the assignment is removed from the workbasket and is assigned to the operator's worklist. GetNextWork logic is covered in more detail in another lesson. In most of the standard portals, we can pull the next assignment to work on with Get Next Work logic by clicking the Next Assignment button at the top of the portal.
Let's take a look at how routing is incorporated into a typical user experience, especially from a work group manager perspective.
The case manager portal, typically for work group managers and other managers, shows a list of
operators in their work group on the right side of the dashboard.
275
As a work group manager, he or she has the privilege to view individual members' worklists. Selecting a particular operator (in this case, the operator named Purchasing Manager) shows all the items that have been pushed to his or her worklist. There are at least three that have not been processed for a long time.
The work group managers and all the other operators can see the list of workbaskets to which they have
access, on their portal, on the bottom right hand side of the dashboard.
Selecting a workbasket shows the assignments in a particular workbasket. They are waiting to be pulled
by an operator.
276
An operator can select one of these items directly, which promotes it to his or her worklist, or use the GetNextWork function, which selects an item for them. GetNextWork queries both worklist and workbasket items. GetNextWork is a great way to ensure that the most appropriate item gets the most attention. This way, we can prevent an operator from cherry-picking the items they want to work on from the workbasket while ignoring the items in the worklist.
277
When the case reaches the request-for-quote external-type assignment in the flow, PRPC sends an email to the vendor. The vendor gets an email as shown below, with a link.
278
When the recipient at the vendor organization clicks the link, they get the quote request flow action in a web page. When they fill in the requested data and submit the flow action, the assignment and the flow action are complete, and the flow moves on to the next shape.
We need to configure the URL that is sent in the emails. We can configure this through Designer Studio > System > Settings > URLs > Public Link URL. The Public Link URL is typically set in the form http://nodename:portnumber/PRServlet, as shown below.
279
Most of these routers are for push-based routing, which routes to the worklist of a particular
operator. Most of the routers are parameterized and we can enter the appropriate parameters
when we select a router.
ToWorkParty: routes to a work party. The actual operator depends on the work object; for example, if the work party is Initiator, the actual operator is different for every work object. Work parties are covered in more detail in another lesson.
ToSkilledGroup: also routes to an operator in a work group, and takes required skills into account.
We should not get confused by the naming of some of these routers. ToWorkgroup, ToSkilledGroup, and
ToLeveledGroup do NOT route to a group; they route to an operator, as is the case with all push routing.
These routers simply use the workgroup as a criterion for selecting the operator.
A couple of workbasket routers are worth noting. Workbasket routers are used with pull-based routing.
280
ToSkilledWorkbasket: also routes to a particular workbasket. We will look at the details of this router later in this lesson.
281
Operators are associated with any number of these skills, and the appropriate rating is supplied. The assignment is then associated with skills as well, depending on the type of router selected.
If the operator has a rating that is equal to or above the level required by the assignment, that operator can be chosen by the router. Note that a skill can be set to be required, or not. If a skill is not set as required, the system treats it as a nice-to-have: it is used in ranking the choice, but is not considered an absolute requirement.
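That matching rule can be sketched as follows. This is an illustrative Python model of the idea only, not Pega's actual selection algorithm, and the operator and skill names are invented:

```python
# Sketch of skill-based operator selection: an operator qualifies only if
# every REQUIRED skill is held at or above the needed rating; non-required
# skills are "nice to have" and merely improve the ranking.

def qualifies(operator_skills, needed):
    """needed: list of (skill, rating, required) tuples."""
    return all(operator_skills.get(skill, 0) >= rating
               for skill, rating, required in needed if required)

def rank(operator_skills, needed):
    """Score used to prefer operators who also hold the optional skills."""
    return sum(1 for skill, rating, required in needed
               if not required and operator_skills.get(skill, 0) >= rating)

def pick_operator(operators, needed):
    """operators: {name: {skill: rating}}. Returns the best qualifying name."""
    candidates = [n for n, s in operators.items() if qualifies(s, needed)]
    return max(candidates, key=lambda n: rank(operators[n], needed),
               default=None)
```

For example, with a required Java skill at level 5 and an optional SQL skill, an operator lacking SQL can still be chosen, but one who also holds SQL ranks higher.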
282
Let's look at a few out-of-the-box routers that incorporate skills into their algorithm.
For skill-based push routing, an operator is selected for a specified group and required skill. For the
ToSkilledGroup router, this selection is random.
For pull-based routing, the ToSkilledWorkbasket router adds an assignment to a workbasket, and also
associates a skill with that assignment. Subsequently, GetNextWork will take these skills into account
when finding an assignment for an operator to work on.
Please do not be confused by the term ToSkilledWorkbasket. There is no such thing as a skilled
workbasket. This router merely sends an assignment to a workbasket and marks the assignment as
requiring a particular set of skills.
For the ToLeveledGroup router, the selection is prioritized by load and desired skills. A high load, in this
case, refers to when an operator has a high number of assignments that are past deadline.
Conclusion
As we saw, work can be pushed to a worklist or routed to a workbasket, and workbasket assignments can be pulled to create a worklist assignment. We can send assignments to external users, and external users can process the assignment over the web by clicking the link in the email.
There are a variety of standard routers available to us and we can choose the appropriate one based on
our need. We saw how some of them relate to load and skill based routing.
283
284
On the other hand, the Get Most Urgent button may also appear at the bottom of the user forms on the confirmation harness, allowing users to access the next available task right after completing one.
When users click the Next Assignment link or the Get Most Urgent button, PRPC starts the standard
activity Work-.GetNextWork. This activity calls the final activity Work-.getNextWorkObject.
285
When Get from workbaskets first is checked, the activity Work-.findAssignmentinWorkbasket is called. Otherwise, PRPC calls Work-.findAssignmentinWorklist before examining the workbaskets.
The Work-.findAssignmentinWorkbasket activity uses the standard list view rule Assign-Workbasket.GetNextWork.ALL, which returns a list of tasks sorted in decreasing order of urgency.
On the same operator ID rule form, the Work Settings tab has another check box, Merge workbaskets?,
which is identified with number (2) in the screen shot above. It indicates whether or not all tasks in all the
workbaskets listed should be combined for the operator in a single list before sorting them. If this check
box is not checked, the task assigned could come from the first workbasket, even though there could be a
task with a higher urgency in subsequent workbaskets listed in the work tab for this operator.
If the Merge workbaskets? checkbox is checked, PRPC displays another checkbox, Use all workbasket assignments in the user's work group, which is identified with number (3) in the screen shot above. This checkbox, when selected, indicates to PRPC to consider tasks only from workbaskets whose work group field (on the workbasket rule form) is the same as the work group field (identified with number (4)) of this operator.
The next step is to filter the list to get to the most appropriate task. To do that, PRPC applies the decision
tree Assign-.GetNextWorkCriteria to each task in the sorted list.
286
This decision tree rule first checks to see if the task is ready to be worked on. This means that the
pyActionTime property of the task is NOT in the future.
If it is ready to be worked on, the decision tree checks if the current worker has already worked on or
updated the task earlier that day.
If the current worker did not work on the task, Assign-.GetNextWorkCriteria then examines whether the worker has the required skills to work on this task. The worker's skills are recorded on the Work tab of the operator ID rule form. Note that skills are covered in detail in the Routing lesson of this course.
The search ends when PRPC finds the first surviving task (if any) that meets all the criteria.
The GetNextWork activity creates and populates the newAssignPage page and locks the case instance. If the System Setting rule GetNextWork_MoveAssignmentToWorklist is set to true, the selected task is moved to the operator's worklist.
If no task is found in the workbasket-related operations, PRPC repeats the process but uses the Work-.findAssignmentinWorklist activity and the list view Assign-Worklist.GetNextWork.ALL.
Thus the default Get Next Work functionality can be summarized into these steps:
1. Users click either the Next Assignment link or the Get Most Urgent button.
2. The Work-.GetNextWork activity starts.
287
5. The Work-.findAssignmentinWorkbasket activity starts and uses the list view Assign-Workbasket.GetNextWork.ALL.
The list of assignments is sorted in decreasing order by assignment urgency (property Assign.pxUrgencyAssign).
o If the Merge workbaskets? checkbox is NOT checked, assignments are taken from the workbaskets in the order listed on the Work tab.
o If Merge workbaskets? is checked, assignments from all workbaskets listed on the Work tab are assembled into a single list.
o If Use all workbasket assignments in the user's work group is also checked, the list instead contains assignments from all workbaskets with the same work group as this user.
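Conceptually, the whole selection amounts to a sort-then-filter pass. The sketch below is a simplified Python approximation of the behavior described above; the real implementation uses the list views and decision tree named earlier, and the field names here are illustrative:

```python
import datetime

# Simplified sketch of Get Next Work: take the candidate assignments
# (already gathered from worklist/workbaskets), sort by urgency descending,
# then return the first one that survives the eligibility checks: ready to
# work on, not already touched by this operator today, and skills met.

def get_next_work(assignments, operator, now=None):
    now = now or datetime.datetime.now()
    ranked = sorted(assignments, key=lambda a: a["urgency"], reverse=True)
    for a in ranked:
        if a.get("action_time") and a["action_time"] > now:
            continue  # pyActionTime is in the future: not yet ready
        if operator["id"] in a.get("touched_today_by", ()):
            continue  # this operator already worked on it today
        if not all(s in operator["skills"] for s in a.get("skills", ())):
            continue  # operator lacks a required skill
        return a      # first survivor in urgency order wins
    return None
```

Note how the most urgent assignment is not necessarily the one returned: it is the most urgent assignment that also passes every filter.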
Conclusion
The Next Assignment link and the Get Most Urgent button use the Get Next Work functionality. This
functionality can be customized for each operator using the settings in the Work tab of the operator ID
data instance. The next assignment to be worked on can come from a workbasket or first from a worklist.
All the workbaskets listed in the tab can be merged before pulling the next assignment. Either the workbaskets listed in the tab are considered, or workbaskets belonging to the operator's work group are considered.
Your PRPC application selects and provides the best, most appropriate task to operators when they click
the Next Assignment link or the Get Most Urgent button.
288
Correspondence
Introduction
Correspondence refers to letters, email messages, faxes, and SMS text messages that PRPC applications send to interested parties while a case instance is being processed. For example, we may want to notify the case originator and others about its progress, request needed information, or obtain a signature. The interested parties may be PRPC operators or people who are external to the system.
In order to generate correspondence during flow processing, we need to define who to send the correspondence to and what to send. We must also select one of the standard correspondence types: email, fax, mail, or phone text (SMS). A PRPC application can provide this information by prompting users to specify values or can provide these values programmatically at the appropriate place in the flow.
In PRPC, correspondence is part of the Process category and is an instance of the Rule-Obj-Corr rule
type.
In this lesson, we will look at email correspondence, also known as email notifications.
At the end of this lesson, you should be able to:
289
An email is sent from a sender to one or more recipients, and uses a four-step model.
1. We need very little information for recipients: an email address, and perhaps the recipients' names.
2. The sender, in addition to needing an address and name, requires account information, and perhaps a provider, server, and port. If an email client program is used, these are typically set once, early on in the process.
3. The body of an email contains the actual message that needs to be communicated to the recipient.
4. In between the message and the recipient, let's add a delivery step. Gone are the times when every email requires someone to click a send button. We still send these ad-hoc messages, but our inboxes fill up with other emails as well, such as task-triggered emails, like "You were recently sent a new credit card. Call to activate." We also get (friendly) reminder emails, like "Your credit card is now two weeks past due. Pay now or expect a large fee."
Let's look at the PRPC rules and data that fit into this four-step model.
1. The recipients of the emails are represented as PRPC work parties, or as an email address itself, which can be typed in certain configurations. If the recipient is internal to the system, the email address is stored in the operator record.
2. The sender information is stored in an Email Account rule. This rule holds account information, and, as with life outside PRPC, a single account is used to send multiple emails.
3. The message itself is created from a Correspondence rule, which dictates the actual content and layout of the email body. Correspondence rules can also contain smaller, reusable Correspondence Fragment rules and/or Paragraph rules.
4. On the delivery front, task-driven notification is set up from the PRPC representation of tasks: assignments. Assignments point to a special subset of activities, called notification activities. Or they can be set up with the Send Email smart shape after the assignment shape in the flow rules. Reminders are set at the SLA level. That is, when the goal or deadline of a service level agreement has been reached, a notification email is sent out. This is done using an escalation action. Ad-hoc emails are typically sent using the out-of-the-box Send Correspondence
290
flow action, which can be configured as a local, flow-wide, stage-wide, or case-wide flow action.
In the wizard, change the drop-down value to Configure an email account. Note that this account is shared for a workpool. Multiple cases, even multiple assignments for a case type, share the same email account record; therefore, we only need to do this once per workpool. When creating an email account using the wizard, when we select a workpool name, PRPC makes the Email Account rule name the same as the workpool name and implicitly associates this account rule with the subsequent notifications.
Use the form below to provide the account information. To help us get started, click the Select Email
Provider button to pre-populate the form with information from a well-known email provider.
291
We then fill out other critical information, such as the address, host, user ID, and password. It's possible to use different account information for sending and receiving. For now, we'll use the same.
292
In the correspondence rule form dialog, we need to specify the type of correspondence by selecting one of the standard correspondence types: Email, Fax, Mail, or PhoneText. We also need to make sure the context (application layer, Applies to class, and ruleset name) is accurate.
A correspondence rule is a template that contains the standard body of the message that we wish to
send. We can insert PRPC properties and include logic as well as text in this content. The
correspondence rule provides a Rich Text editor with a wide variety of formatting tools to change the
appearance of our correspondence. The toolbar includes selectors for bold, italics, fonts, font size, color,
spell checking, list formatting, alignment and including images and graphic elements.
PRPC properties are marked using double angle brackets (<< >>); these are replaced with the actual values at runtime before the correspondence is sent to the recipient.
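The substitution behaves like simple template interpolation. Below is a minimal illustrative sketch in Python using the re module; the property references shown are examples, and the real correspondence engine does considerably more (rule includes, formatting, and so on):

```python
import re

# Minimal sketch of correspondence-style substitution: occurrences of
# <<PropertyReference>> in the template body are replaced with clipboard
# values at send time. The property names below are illustrative only.

def resolve(template, values):
    """Replace each <<reference>> with its value (empty string if missing)."""
    return re.sub(r"<<(.+?)>>",
                  lambda m: str(values.get(m.group(1).strip(), "")),
                  template)

body = "Dear <<FirstName>>, your case <<.pyID>> has been resolved."
```

Calling resolve(body, {"FirstName": "Sara", ".pyID": "PR-42"}) would fill in both placeholders before the message is sent.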
293
We can also include larger fragments of text, including paragraphs, sections, other correspondence rules,
and correspondence fragments, by clicking the Include a Rule button, when we are editing the message
contents of a correspondence rule.
Correspondence Fragment rules are part of the Process category, and Paragraph and Section are part of
the User Interface category.
294
Instead of the smart shape, we can also use a utility shape and call a correspondence activity. PRPC includes several standard correspondence activities. The two main ones are CorrNew, which is typically used when generating correspondence from a utility, and CorrQuickStart, a simpler correspondence generation activity with fewer options than CorrNew. Direct use of CorrNew has greatly declined, as it is the activity used behind the scenes by the Send Email smart shape, which offers easier configuration.
In a flow rule, instead of using the Send Email smart shape or a utility shape with a call to a correspondence activity, we can configure the Notify tab of the assignment to send correspondence. Assignment shape correspondence is generally used to send automatic notification messages to the users who have the assignments, alerting them that more work has arrived in their worklist, or acknowledging to some other party that the specific assignment has happened. For example, in an UnderWriteLoan flow of an insurance application, an assignment can alert the loan officer that a new loan is ready for his or her review. In the purchase request application, the Ship Purchase Items assignment can notify the requestor that the items requested are about to be shipped.
295
Note that, since notification is now set up on the Notify tab, a mail icon appears on the assignment shape, similar to the Send Email smart shape. We can enter one of the activity rules Notify, NotifyAll, NotifyAllAssignees, NotifyAssignee, or NotifyParty in the Notify field, and enter the appropriate parameters.
NotifyAssignee sends a message to the assignee, or to the first contact of the workbasket for workbasket assignments.
Some of these rules can optionally check for urgency, and only send email when the urgency is equal to
or above this value.
If the service level for an assignment is not met, we can send correspondence to the appropriate
recipients. Previously we learned that service level rules are referenced on the Assignment tab of the
assignment shape in a flow.
In a service level rule, we can define one or more escalation actions to run if the goal or deadline is
not met. The list of possible actions is shown below.
Notify actions can be used to send correspondence. We can see what activities are called when using the
Notify Assignee and Notify Manager activities.
Whenever an email is sent, either automatically as part of processing a case instance or manually (ad
hoc), a copy of the email is attached to the case instance, and we can view it from the attachments section
in the perform harness.
Clicking the attachment link opens up the copy of the email with the contents and identifies who sent it.
Ad-hoc Notifications
We've looked at use cases for automated emails: a smart shape in a flow, on an assignment, and triggered
from an SLA. Now, let's look at ad-hoc messaging. Imagine a situation in which a manager wants to send
the initiator a question about the current purchase request. He doesn't want to approve the request until
he gets his answer. He could just email the requestor directly, but prefers doing it in PRPC for better
record keeping. The assignment is set with the local flow action SendCorrespondence. This could also
be set as a flow-wide, stage-wide or case-wide flow action.
At runtime, from other actions menu, we can click the Send Correspondence link and this enables us to
send ad-hoc notifications.
When we click the link, the correspondence action is initiated and we see the work parties dropdown.
The Owner work party is configured as the operator who initiated the purchase request case.
Now, we select from a list of out-of-the-box correspondence template rules. We can select
SampleLetter, which allows us to edit the correspondence before sending it.
We can click the Next button to finish, and the correspondence is sent back to the requestor. Please note
that all emails sent from PRPC are included as attachments in the case instance, just as we have seen.
Conclusion
Correspondence can be in the form of email, mail, fax or SMS text. This lesson focused on email
correspondence, which is critical to timely and fluid execution of a business process. We looked at the
high level components of PRPC communication, and learned which PRPC rules are mapped to each of
these components, and how they can be configured.
Notifications can be sent through the Send Email smart shape in a flow rule (or a utility shape, though
that is rarely needed), through the notification configuration on an assignment shape, or from an SLA
rule. Ad-hoc notification can be enabled with a local action on an assignment shape. In all these
configurations, notifications are sent to the work parties, and in some, such as the Send Email smart
shape, we can also type in an email address directly.
Once the correspondence is sent, it is attached to the work item and is a permanent record that cannot be
deleted.
Tickets
Introduction
During the normal sequence of processing a case instance, an event may be considered a business
exception. When that event occurs, a ticket may redirect the processing to another point in the processing
of the case.
In PRPC, tickets are part of the Process category and are instances of the Rule-Obj-Ticket rule type.
We raise the ticket when the business exception happens, and we reference the ticket at a point where
we want the flow of the case to resume again. In this lesson, we will learn how to raise and reference
tickets.
At the end of this lesson, you should be able to:
Using Tickets
With the introduction of alternate stages in Pega 7, the use of tickets has diminished. An alternate stage is
meant for exception flows and processes that deviate from the happy-path primary flow. Let's look at one
example. During the processing of a purchase request, anyone can reject the request for any reason.
Whenever the request is rejected, we have a different starting point for the special processing of the
rejection. This can involve sending a notification to the requestor, returning the funds allocated for the
purchase request, and so on. This special processing can be done with tickets or with alternate stages;
we recommend alternate stages. Whenever the request is rejected, we change the stage to an alternate
stage, such as Rejection, containing the steps to notify the requestor, return the funds, and so on.
Alternate stages are discussed in detail in the Case Lifecycle Management lesson group.
In another example, we want to hold the purchase request case from processing further until all the
subcase instances of purchase orders are resolved. This can be done either with tickets or with the
advanced Wait shape, using Case Dependency as the wait type. We recommend the Wait shape, which
is covered in a different lesson.
But, we still might have instances where using a ticket is the easiest or only way to satisfy a requirement.
For example, when a purchased item is delivered to the requestor, they can confirm delivery or they can
return the purchase if the item is defective or did not meet their expectations. So, in the purchase order
subcase, a ticket is raised whenever the purchased item is returned. It is referenced in the purchase
request parent case to restart the purchase request process.
A ticket rule only defines a name and description and not the processing to occur in the case of an
exception.
Raising a Ticket
In the flows for a case, we define when an event may be considered a business exception by raising a
ticket.
PRPC provides two distinct ways to raise a ticket. One way is to give users the option of
selecting the Work-.SetTicket or @baseclass.SetTicket flow action when processing an assignment. If
an assignment presents a case instance with a condition that meets the business exception criteria,
users can select this flow action to trigger the exception processing. As shown below, users select a
ticket from the dropdown during the processing of the case. An HTML rule, ActionSetTicket, displays
the list of tickets. These standard flow actions require users to have the ActionSetTicket privilege.
The second way to raise a ticket is by calling the standard activity @baseclass.SetTicket, which takes
the ticket name as a parameter and turns the ticket on or off. Instead of using the standard SetTicket
flow action, we call this activity from the flow action in which the business exception occurs. In the
return-to-purchasing business exception flow action, we raise the ToPurchasing ticket that we created
earlier, passing the name of the ticket as the parameter to the activity.
Referencing a Ticket
In the flows for a case, we also need to define the starting point of the processing that needs to occur
once a ticket is raised. We do this by referencing the ticket.
Once a ticket is raised, flow processing stops and resumes at the point in the flow where the ticket is
referenced.
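The stop-and-resume behavior described above can be sketched in plain Python. This is a hypothetical illustration, not Pega's implementation: the step names, the `run_flow` helper, and the ticket name are all made up for the example; the point is that a raised ticket acts as a named jump target where processing resumes.

```python
def run_flow(steps, ticket_refs):
    """steps: ordered list of (name, fn); fn returns a ticket name to raise, or None.
    ticket_refs maps a ticket name to the step where that ticket is referenced."""
    visited = []
    i = 0
    while i < len(steps):
        name, fn = steps[i]
        visited.append(name)
        ticket = fn()
        if ticket is not None:
            # flow processing stops here and resumes at the referencing step
            i = next(k for k, (n, _) in enumerate(steps) if n == ticket_refs[ticket])
            continue
        i += 1
    return visited

steps = [
    ("receive order", lambda: None),
    ("ship item",     lambda: "ToPurchasing"),  # business exception: item returned
    ("confirm",       lambda: None),
    ("restart order", lambda: None),            # ToPurchasing is referenced here
]
path = run_flow(steps, {"ToPurchasing": "restart order"})
# the "confirm" step is skipped; processing jumps straight to "restart order"
```

The sketch shows why a ticket rule needs only a name: the behavior lives entirely in where the ticket is raised and where it is referenced.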
In the process modeler, specific shapes have the ticket functionality built into the Properties panel.
Open the Tickets tab of the Properties panel on the shape from which we want processing to resume
when the ticket is raised.
The following shapes provide a Tickets tab in their properties panel:
Assignment
Decision
Subprocess
Utility
End
Split Join
Integrator
Assignment Service
The shape has a ticket indicator with the description next to it as shown below.
We can use the Tickets landing page, available by selecting the Pega button > Process and Rules > Work
Management > Tickets, to verify which ticket rules are in active use in our current application, and by
which flows or activities.
The Work-.Withdraw ticket is similar, but is raised when the work item is withdrawn. If this needs to be
handled in a special way in our flow, we can reference the ticket to indicate the starting point of the
special processing.
The Work-Cover-.AllCoveredResolved ticket is raised by PRPC when all covered work items
are resolved. The ticket alerts the cover work item, and any special processing referenced by this ticket
is triggered. The parent Purchase Request case waits in a workbasket, and when all the subcases are
resolved, the ticket is raised. Here it is referenced in the Update Status utility shape to resolve the
status of the parent case automatically.
Conclusion
Tickets are a powerful feature that helps us handle business exceptions easily and efficiently when
alternate stages or the Wait shape and case dependencies cannot be used. Turning tickets on and off is
simple, and they are a great tool for building flexibility into our applications.
The Ticket rule itself is just a name and description. We raise the ticket where the business exception is
happening and we reference the ticket where we want the flow of a case instance to resume.
There are some standard tickets available for us to reference in our flows and they are automatically
raised by PRPC.
Declarative Processing
Introduction
Traditional programming models involve procedural execution of logic; in a BPM application, that logic is
tied to the business process. In certain scenarios this programming model poses significant challenges,
especially when it comes to adapting to change quickly. Declarative programming specifies what
computation should be performed without explicitly stating when to compute it.
Let's look at an example: a customer is filling out a questionnaire during the enrollment process that is
used to determine the quote for his insurance plan. There are several factors to consider here. The
insurance calculation depends on a variety of factors, the execution of the calculation should not dictate
the order in which the questions are answered, and the model should be flexible enough to handle
new or changing factors. PRPC offers a powerful declarative engine that can compute declarative
calculations. These declaratives work in tandem with the business process and do not require any explicit
reference in the business process to be invoked.
At the end of this lesson, you should be able to:
Declarative rules support the features that apply to other rules, such as class specialization, ruleset
and rule specialization, circumstancing, and effective dates.
When developing PRPC applications, we sometimes need to decide whether logic should be written as a
declarative rule or as a procedural rule. In most cases, the answer is to write it as a declarative rule.
For example, the unit total for a selected line item is the product of the line item's price and quantity
(UnitTotal = LineItemPrice * LineItemQuantity). This logic could be written using either a data transform
or a declare expression. In this case it should be written as a declare expression, so it is easier to
maintain and modify.
Another benefit of declarative processing is performance. Well-written declarative rules offer better
application performance than procedural rules. In a purchase request application, the customer can
order any number of items; the declarative calculation of each line item total can automatically
trigger the calculation of the grand total for the purchase request. Declarative rules allow chaining to
any number of levels, which creates a declarative network.
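The chaining idea can be sketched with a toy dependency network. This is an illustrative model only, not how Pega's declarative engine is implemented: the `Declaratives` class and the property names are invented for the example. Changing a source property recomputes every expression that depends on it, cascading through the network.

```python
# Minimal sketch of a forward-chaining declarative network (hypothetical,
# not Pega's engine): each declared target recomputes when a source changes,
# and computed targets can themselves be sources for further expressions.

class Declaratives:
    def __init__(self):
        self.values = {}
        self.exprs = {}          # target -> (sources, fn)

    def declare(self, target, sources, fn):
        self.exprs[target] = (sources, fn)

    def set(self, prop, value):
        self.values[prop] = value
        self._chain(prop)

    def _chain(self, changed):
        for target, (sources, fn) in self.exprs.items():
            if changed in sources and all(s in self.values for s in sources):
                self.values[target] = fn(*[self.values[s] for s in sources])
                self._chain(target)   # cascade: a target can feed other targets

d = Declaratives()
d.declare("LineItemTotal", ["UnitPrice", "Quantity"], lambda p, q: p * q)
d.declare("GrandTotal", ["LineItemTotal", "Tax"], lambda t, x: t + x)
d.set("Tax", 5)
d.set("Quantity", 3)
d.set("UnitPrice", 10)   # LineItemTotal recomputes, then GrandTotal cascades
```

Note that no step explicitly asks for GrandTotal; the change to UnitPrice ripples through on its own, which is exactly the "no explicit reference in the business process" property described above.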
The target property is what is being calculated or computed by these declaratives, and the source
properties (in some cases there is more than one) are the ones used as inputs.
Forward Chaining
Forward chaining executes when the value of any of the source properties changes. Most declarative
rules follow forward chaining.
In the case of Declare expressions, the expression is calculated whenever any of the source properties
change. In our example, the subtotal gets calculated whenever the quantity or the unit price changes.
Declare Expressions are configured in Forward Chaining Mode when we select the option Whenever
inputs change.
In the case of Constraints, when the value changes, the constraint is executed to verify that the value
conforms to the conditions expressed. Constraints can have any number of rows and the order in which
they are listed is not significant. The message text can be entered directly in the rule form or saved as a
message rule which can be referenced as shown below.
Declare OnChange can track several properties that have been saved as part of the case. When one or
more of these properties change, it invokes an activity that contains the logic to be performed. The
declarative engine invokes the activity only once, even if more than one tracked property changes.
A Declare Trigger executes when instances of its classes are created, saved or deleted in the
database. On the database event, it runs an activity that contains the logic to be performed. Triggers
can be used when we want to capture the history of a property's values over the case lifecycle. For
example, suppose the audit trail must record the different values of the discount. Whenever the
discount value changes, the trigger fires a standard activity that writes an entry to the audit trail.
A Declare Index executes when the values in a page list or page group change, requiring the
system to update the records saved in the index table (corresponding to the indexed class). This is
primarily used in reporting, where we expose page list properties to improve performance. Whenever a
page in the list is created or deleted, or when one or more of its values change, the Declare Index
automatically creates or updates the records stored in the index table.
Backward Chaining
Backward Chaining mode executes when the target property is referenced. Backward chaining is
supported only for declare expressions.
Declare expressions provide three options for Backward chaining.
1. When used, if no value is present
2. When used, if property is missing
3. Whenever used
The target property is considered used when it is referenced by name, such as in a UI rule, in a
decision tree or in a data transform.
The "if no value is present" and "if property is missing" options ensure the expression is calculated only
once in the case lifecycle, unless the property is removed from the clipboard.
When we use "Whenever used", the system throws a performance warning, indicating that the expression
fires every time the target property is referenced.
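The difference between the caching options and "Whenever used" can be sketched as lazy evaluation with and without a cache. Again this is an illustrative model, not Pega's implementation; the class and property names are invented.

```python
# Hedged sketch of backward chaining: the target is computed only when it is
# read. With cache=True (like "when used, if no value is present"), the result
# is computed once and reused; with cache=False (like "whenever used"), it
# recomputes on every reference, which is why Pega warns about performance.

class BackwardChained:
    def __init__(self, fn, cache=True):
        self.fn = fn
        self.cache = cache
        self.value = None

    def get(self, *sources):
        if self.cache and self.value is not None:
            return self.value            # already computed once; reuse it
        self.value = self.fn(*sources)
        return self.value

subtotal = BackwardChained(lambda price, qty: price * qty, cache=True)
first = subtotal.get(10, 3)    # computed on first reference
second = subtotal.get(99, 9)   # cached: new source values are ignored
```

The second call returning the cached value mirrors the "only once in the case lifecycle" behavior described above: the sources may have changed, but the target is not recomputed unless the cached value is removed.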
Conclusion
In this lesson we learned about declarative processing and how it provides an alternative to traditional,
procedural processing. The use of declarative rules can reduce errors and maintenance effort, simplify
development, and improve performance when used properly.
We also learned about forward chaining and backward chaining, and how PRPC controls the execution
mode of declarative calculations using these modes. We will learn more about writing declare
expressions in another lesson. The Lead System Architect (LSA) course provides more detail on
advanced rules such as Declare Triggers and Declare OnChange.
Declarative Rules
Introduction
Business calculations involving user-defined fields should be implemented using declaratives. Some
calculations use simple mathematical operations such as add, subtract, multiply or divide, while some
advanced operations might involve creating a Java function.
Business calculations may span multiple layers that together form a deep network. For example, the total
price of an order is calculated as the sum of the subtotals. The subtotal is calculated as the sum of all line
item totals minus any discounts. The line item total is calculated as quantity * unit price.
Discounts may involve other calculations depending on the customer type, geography, the time the order
is placed, and so on.
Declare expressions provide a great way to implement this functionality. We could use activities or
data transforms to do the same, but using expressions offers greater benefits. Let's look at
declare expressions now to learn more.
At the end of this lesson, you should be able to:
If the target property is in a page list or page group, we need to identify the Page Context.
Declare expressions have a number of prebuilt calculation types. The list differs by the type of the
target property. In this case UnitPrice is a decimal, so we are presented with the list of calculation types
used for decimals. To set a scalar value we use Value Of, and then select the free-form expression option,
which allows us to write any expression. There are three other choices that apply to decimals.
Other calculation types for decimals and integers include sum of, min/max of, average of, count of and the
two index calculations. Each of these loops over a list and performs its respective calculation.
The Result of decision tree, decision table and map value options allow us to call other business rules
and use their return value to set the target property.
The Value of first matching property in parent pages option allows a declare expression to reflect
the value of a property of the same name on a parent page. The class of the target page must equal,
or be a descendent of, the class of the declare expression rule.
Let's look at examples of declare expressions that can be used to calculate these values. The line item
total is calculated as the product of quantity and unit price, while the total price is calculated as the
sum of all line item totals.
When the target property is a string, the available functions differ; the only selections are Value Of and
the Results of Decision rules. We can also apply a condition to evaluate which expression to pick.
Similarly, a date property presents a different list of date functions. We can also apply a condition in
the In Which field, which evaluates the condition for each item in a page list or page group.
A function is a rule type in Pega that is used in various other rules. Functions allow us to extend PRPC
capabilities with custom Java code. Functions belong to the Technical category (Rule-Utility-Function)
and are grouped by library.
The Library rule is used primarily for grouping related functions. PRPC ships with various libraries, each
containing functions for us to use as-is. The expression builder allows us to select a library, which in
turn provides the list of functions available in that category. In most cases we can use these functions
as they are; only in rare situations do we need to write new ones.
In the expression builder, we first select the library; if we are not sure which one the function belongs
to, we can select All.
When a function is selected, the parameters panel opens. Functions may require one or more parameters
to be passed as inputs. After adding the parameters, we click Insert to add the function. The expression
builder provides functionality to validate that the expression is valid.
Functions are also used in other rule forms like when, decision trees, etc. We will learn more about these
in other lessons.
The other important concept we need to learn about declare expressions is their context execution
behavior. There are two general types of context settings: context sensitive, which is set by selecting the
first choice, and context free, which is set by choosing the "regardless of any pages it is contained in"
option. The third option is a hybrid in which we explicitly list the classes for which the expression
behaves as context free. In practice this option has limited appeal, as maintaining such a list reduces
reusability.
The context sensitive option is the default for expressions. Context sensitive means that the path defined
by the page context plus the target property must be complete from the top-level page that derives
from the Applies To class.
In the examples we saw earlier, TotalPrice, which uses the sum of all line item totals, fires only when the
Applies To class is the work class to which it belongs (in this case ADV-Purchasing-Work-PurchaseRequest).
The LineItemTotal, or subtotal, calculation that we saw earlier is used to calculate the total of each
line item. This expression is also context sensitive. The only difference is that it uses .LineItems() as the
page context property, so the expression fires based on the page list property LineItems defined in the
work class.
Context free expressions allow us to define an expression without taking into account where the page it
applies to is embedded. This option is best used for calculations that are ALWAYS necessary and the
same, regardless of where the object is embedded.
In our base application we implemented the unit price using a context free expression. In this rule, we
copy the unit price from a page property named SelectedProduct. SelectedProduct is auto-populated by
a data page, so the unit price is copied from the data page irrespective of where the line item is added.
Instead of defining the Applies To class on the work class and supplying a page context, we define the
expression directly on the data class and set it to context free.
The landing page shows us the topmost expressions. The expressions used by these top expressions
are not shown here; think of this as showing the final results rather than the intermediate calculations.
We can open our Total Price expression for further testing by clicking the highlighted icon.
Here we can see that the total price uses the subtotal, which is itself an expression that uses quantity and
unit price. Unit Price is an expression that uses another unit price.
For complex networks we can zoom out (or in) to see different components of the network. PRPC also
provides alternative views of the network; here we can see the basic tree view as well as the org tree
view.
Another useful feature for more complex networks is the ability to expand the network one level at a time.
Here we can see that, starting at level one and expanding one level at a time, we can clearly see the
properties that factor in to each calculation.
As we can see, all the dependent properties needed to determine the total have been identified. Each
property is editable, so we can test directly from here.
To enter a value, we click a property that does not have fx to its left; the ones with fx are calculated by
expressions. We then enter the value in the text box and click Update.
Now, if we update our Unit Price to 10, the discount expression fires and automatically updates the Unit
Price. Since we have already defined a value for Quantity, the subtotal and total price are also
calculated.
For properties that are lists, such as our LineItemList, we can also add additional pages directly from this
screen. The expression testing tool has a number of other features that can be very helpful when
reviewing calculations with other developers or members of the business team. This form can also
be accessed directly from the declare expression rule by using the Actions menu and clicking Run.
Conclusion
Declare expressions are the most popular of all the declarative rules, and we learned how to define them.
Declare expressions are useful when we want to calculate the total price of a purchase request or derive
the quote for an insurance policy. Some applications require complex business calculations, which require
function rules. Remember that function rules are grouped using libraries.
We also looked at examples of how to write context free and context sensitive declare expressions.
Finally, we learned about the declarative network analysis tool for exploring the nested levels of
declarative rules, and how to unit test a declare expression to make sure it works.
Delegation of Rules
One of the key benefits of PRPC applications is that they are very adaptive to change. A well-designed
PRPC application should involve business users, not just by capturing their requirements in the product,
but also by giving them the option to change rules in real time.
Some of the changes are related to data, for example the list of products that a user can order or a list of
accounts owned by the customer or the list of transactions made by the customer. This data comes from
either internal or external sources and is usually updated outside of the application. Pega 7 helps us in
designing our application to separate data from the business logic so that changes in data can be
handled easily without impacting the application.
In this lesson, we will look at some changes that impact case processing logic.
1. The bank decides that it wants to automatically approve a credit card dispute if it is less than 30
dollars and if the customer has had the card for more than 2 years.
2. When fulfilling the order, the invoice amount should apply the tax rate based on the state in which
the customer lives.
In the first scenario, the bank may later decide to increase the auto-approval amount for the credit card
dispute from 30 to 40 dollars, reduce it to 25 dollars, or even add another factor, such as customer type,
on which the amount is based. By delegating these rules, these types of changes can be made by
business users without the need for a developer to be involved.
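The value of keeping such thresholds as data rather than code can be sketched as follows. This is an illustrative sketch only; the function name, parameter names, and default values are assumptions taken from the example above, not Pega rules.

```python
# Hypothetical sketch of the auto-approval decision. Keeping the thresholds
# as named parameters means the kind of change described above (30 -> 40
# dollars, or a different tenure requirement) is a data edit, not a code change.

AUTO_APPROVE_LIMIT = 30   # dollars; a delegated user could raise this to 40
MIN_TENURE_YEARS = 2      # how long the customer must have held the card

def auto_approve_dispute(amount, tenure_years,
                         limit=AUTO_APPROVE_LIMIT,
                         min_tenure=MIN_TENURE_YEARS):
    """True when the dispute is small and the card is long-held."""
    return amount < limit and tenure_years > min_tenure

small_old_card = auto_approve_dispute(25, 3)   # small dispute, card held 3 years
small_new_card = auto_approve_dispute(25, 1)   # card held under two years
```

In a delegated decision rule the same two thresholds would live in a rule form the business user edits directly; the sketch just makes the separation of logic and parameters concrete.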
Suggested steps:
1. Identify the group of delegated users. These users must be part of a specific access group, and
the rules should be in a specific production ruleset that these users can access. These users
should also be able to access the delegated rules from their portal.
2. Identify the rules that can be delegated. Typically, any of the decisioning rules can be delegated.
This makes it easier to make changes, and PRPC supports additional configuration of decisioning
rules that makes it easier to update them.
3. Improve readability and customization options to make it easier for users to make their updates.
Let's take a detailed look at how to implement these steps.
If the rules are delegated to more than one user, we should enable the check-out option so that changes
are not overwritten.
The ruleset should then be added as a production ruleset on the application rule.
The access group of the users should also be modified to include the production ruleset.
This gives us the choice of whether to add the rule as a favorite for a specific user or for an access group.
The last two choices appear only after we add the rule to a production ruleset.
Once added, the delegated rules appear in the user portal for users to access.
1. For users of the Designer Studio, these rules can be seen in the Favorites Explorer.
2. For users of the case manager portal, these rules are accessible in the operator menu.
Improving Readability
Decision trees usually look complex, with multiple if-then-else statements, but we can make them more
user friendly in several ways. We recommend following these guidelines when delegating rules to
business users, so the rules are easier for them to configure.
1. Display Label: The Display Label button shows the short description saved in each property
instead of the property name, so meaningful short descriptions make the decision tree more
readable. Shown below is how the decision tree originally looks. Note that it uses the property
identifier (.PropertyName or Page.PropertyName), which makes it harder to read.
When we click the Display Label button, the same tree looks like the screenshot below.
2. Function Alias: Using a function alias adds more meaning to the condition than using
the actual function signature. A function signature looks like this:
@(Pega-RULES:DateTime).isWithinDaysOfNow({theDate}, {days}).
We can modify that by using a function alias rule, and the tree then looks like this.
In this case the function requires two parameters: the date and the number of days from that date. We
set the first parameter to the case creation date, so this alias is used only in cases where we would like
to select a number of days from the case creation date.
A function alias helps make decision trees very readable, simulating if-then-else logic in plain
language.
This can be enabled both in trees and in tables, and the choices can be added in one of two ways.
We can add them directly in the Allowed Results table, or the results can be stored in a property that
we then reference in the Allowed Values Property field.
We can also set another property when a result is chosen. These property sets are hidden from users,
and we can set one or more properties for each result.
Decision Trees
The options fields must be carefully selected on rules that are extended to business users. In most cases,
the ability to change functions, call decisions or take actions should be disabled. If users do need to
change functions, we should add the list of functions in the Functions Allowed field; users can then
select from that list rather than the entire list.
Similarly, if users need to take actions, we use the Allowed Action Functions field to list the action
functions they may use.
Decision Tables
Allowed results in decision tables work similarly to how they do in decision trees, except that in the
decision table rule form the options are listed as Delegation Options. Only the highlighted flags impact
how users can make changes to delegated rules.
1. Allow to update row layout: When this field is disabled, business users can only modify
the values in existing rows. They can neither add new rows nor delete existing
rows.
2. Allow to update column layout: When this field is disabled, business users can only
change the values in existing columns. They cannot add or delete columns in the table,
nor change the property referenced in a column.
3. Allow to build expressions: This field enables end users to use expressions in
the cells of the decision table. If it is disabled, users can enter only a constant or a property in
a cell.
Map Values
Map values offer two options for delegated users: they can change the matrix layout or use expressions
in the cells.
Conclusion
Applications require rules to be delegated to business users so that they can update business
logic without requiring a developer to make the change. Though the benefits are well known, delegation
comes with some challenges. To make it easier for business users, the development team should
configure the delegation options so that the choices business users have to make are restricted.
Automating Decisioning
Introduction
Applications become powerful when we automate their decision-making capabilities. Decisioning can be
applied to the action paths taken by a case, or to assign a specific value based on a set of specific
conditions. Let's look at a couple of examples:
1. A customer calls in to dispute a transaction that appears on his statement. The company would
like to automatically approve all disputes that are smaller than a specific amount since it is not
cost effective for the company to process the dispute as a claim. In some cases, we may need to
include other factors like the number of such claims filed by the customer in the past year to make
the decision.
2. When completing a purchase request, the system determines the discount for which the requestor
qualifies. The discount is determined using a set of specific conditions such as the department,
customer type (platinum, gold, silver), the time period in which the order is made, and so on.
Pega provides decision rules that can be used to implement these conditions and automate the process. In
many cases these rules are delegated to business users so they have the ability to change the
currency threshold or the factors that are used to determine the discount.
At the end of this lesson, you should be able to:
Check decision rules for consistency and unit test them to verify the results
Decision Rules
Decision rules play a key role in any enterprise application that involves business processes. They
represent the decisions and policies that drive business processes and case management.
Decision rules can be invoked using a decision shape in the flow, using a declare expression or using a
method in an activity step. The referencing of the decision rule depends on the context in which we need
them.
We use flows in case processing to determine which path the case goes down. Using decision rules in
our flows automates the case processing. For example, a decision table can decide if the case requires
approval from another manager or if it can move to the next step in the business process.
Declare expressions can use decision trees or decision tables to get a specific value based on a set of
conditions. For example, if the customer lives in Massachusetts, has a credit score above 720, and is
paying at least 20% as a down payment, then the APR for the car loan is 1.7%.
Decision trees are used in activities when we want the activity to decide whom to route the case to or
what the escalation activity on the SLA should perform.
There are four types of decision rules that we can write to evaluate decisions:
When
Decision trees
Decision tables
Map Value
When Rules
When rules belong to the Decision category; however, they can return only one of two values: true or false.
When rules are used in processes to decide which one of two paths the case can take. For example,
if the purchase request total is less than 500 dollars, then the purchase request does not require approval.
When rules are also used in other rules such as UI rules, data transforms, other decision rules, and declare
expressions to evaluate a condition. Though most of these rules support adding the condition directly, we
recommend using a when rule so the condition is entered once in the when rule and reused
everywhere.
When rules can use a Boolean expression, a function rule that is part of the library shipped with the product, or a
custom function defined as part of the application. When rules can involve any number of conditions, which
can be combined using AND, OR, and NOT.
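To make the idea concrete, here is a minimal sketch of a when-rule-style condition expressed as a Python predicate. The function name, the 500-dollar threshold (taken from the example above), and the preferred-vendor flag are illustrative assumptions, not actual PRPC APIs.

```python
# Hypothetical sketch of a "when" rule as a boolean predicate.
# The threshold mirrors the 500-dollar example in the text; the
# vendor flag is an invented second condition for illustration.

APPROVAL_THRESHOLD = 500  # dollars

def needs_approval(total: float, is_preferred_vendor: bool = False) -> bool:
    """Return True when the purchase request requires manager approval.

    Conditions combine with and/not, mirroring how a when rule
    combines any number of conditions using AND, OR, and NOT.
    """
    return total >= APPROVAL_THRESHOLD and not is_preferred_vendor

print(needs_approval(300))        # under the threshold: no approval needed
print(needs_approval(750))        # over the threshold: approval required
print(needs_approval(750, True))  # preferred vendors bypass approval
```

Because the predicate lives in one place, every caller (a flow, a data transform, a declare expression) evaluates the same condition, which is exactly the reuse benefit described above.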
Decision Trees
Decision Trees are useful when applying the If-Then-Else construct. Decision trees can be constructed
with many branches, each of which evaluates different properties to determine a decision. Decision trees can
also use function rules or when rules to build the conditions.
One of the main reasons to use decision trees is that they allow nesting, which means a decision tree can invoke another
decision tree or other decision rules such as a decision table or a map value.
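The nesting idea can be sketched as nested if/else logic in which one tree delegates to another. All names and branch values below are hypothetical, loosely based on the car-loan example used elsewhere in this lesson.

```python
# Illustrative sketch: a decision tree as nested if/else, with one
# tree invoking another (nesting). Inputs and results are made up.

def down_payment_tree(down_payment_pct: float) -> str:
    # A nested "tree" invoked by the main tree below.
    return "low-risk" if down_payment_pct >= 20 else "standard"

def loan_decision(state: str, credit_score: int, down_payment_pct: float) -> str:
    if state == "MA":
        if credit_score > 720:
            return down_payment_tree(down_payment_pct)  # nesting
        return "standard"
    return "manual-review"  # the "otherwise" (else) branch
```

Each branch either returns a result or hands off to another decision, which is the same shape a nested decision tree takes in the rule form.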
Decision Tables
Decision Tables are useful for presenting a set of conditions in a tabular structure. This is very user
friendly for managers since it resembles a spreadsheet with rows and columns. Decision tables are
suited to cases where we use a set of properties to arrive at a decision.
Some of the main reasons we use Decision Tables are that they:
1. Give us the option to evaluate all rows to arrive at a decision.
2. Let us increment values on a specified condition, which is useful in implementing scoring.
3. Can invoke another decision table.
Decision Table
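The default evaluation of a decision table can be sketched as a list of rows, each pairing conditions with a result, where the first matching row wins. The column names, customer types, and discount values below are hypothetical.

```python
# Sketch of a decision table: each row pairs column conditions with a
# result; evaluation returns the first matching row (the default
# behavior when "Evaluate all rows" is off). Values are illustrative.

ROWS = [
    # (customer_type, min_total, max_total) -> discount
    (("platinum", 300, 1000), 0.15),
    (("gold",     300, 1000), 0.10),
    (("silver",   300, 1000), 0.05),
]

def discount_for(customer_type: str, total: float) -> float:
    for (ctype, lo, hi), result in ROWS:
        if customer_type == ctype and lo <= total <= hi:
            return result  # first match wins
    return 0.0  # the "otherwise" row
```

Note how the range condition (a total between 300 and 1000) maps to one cell per column, just as in the rule form.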
Map Values
Map Value rules let us determine a specific value based on one or two properties. Think of this as a
map determining a location based on latitude and longitude.
Map Values are usually used in circumstances where the values of one or two factors decide
the outcome; for example, when we want to determine the interest rates for bank accounts and customer
types.
If we have five different types of customers and eight different types of accounts, we need forty rows in a
decision table to represent this, while a map value can do it in a 5x8 matrix. Map values can also be chained to other
map values if we need more than two properties to determine the outcome.
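A map value is essentially a two-dimensional lookup: one axis per property, one cell per combination. The sketch below uses hypothetical customer types, account types, and rates to show why a 5x8 matrix replaces forty table rows.

```python
# Sketch of a map value as a two-dimensional lookup. Every
# (customer type, account type) pair maps to exactly one cell.
# The rate numbers are invented for illustration.

INTEREST_RATES = {
    "retail":   {"checking": 0.1, "savings": 1.2, "cd": 2.0},
    "business": {"checking": 0.2, "savings": 1.0, "cd": 1.8},
}

def rate_for(customer_type: str, account_type: str) -> float:
    return INTEREST_RATES[customer_type][account_type]
```

With five customer types and eight account types, the same structure holds 40 cells without ever spelling out 40 rows of conditions.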
Map Value
Decision Trees
The decision tree can use a direct expression such as .pyStatusWork = "Resolved-Completed",
reference a when rule named StatusIsResolved, or use a function rule.
Decision trees usually return a value; these values can then be used in the calling rule, such as a
declare expression or a flow.
The decision tree can also be configured to set a value on properties; this is done using the Take
Action option in the menu. Continue is used to check additional conditions, and Otherwise is used
as the else condition.
Decision trees can also be configured to check the values of a specific property and decide the course of
action accordingly. Evaluate is used in such scenarios.
A new tab, Input, appears when the Evaluate option is selected, where we enter the actual property that is
evaluated.
Though the Decision tab is where we configure the decision, developers need to use the entries in the
Results tab to control how decision trees can be used.
The Options area has two settings: Basic (the default) and Advanced. Selecting Advanced enables all the
checkboxes in the Options area. Besides these options, the Functions Allowed section
determines which function rules are available for selection. Similarly, we can configure which options are
available for Take Actions in the Allowed Action Functions area.
The decision tree can also have restricted return values; in this case, the decision tree can return either
true or false. We can also enter a property in the Allowed Values field, in which case the tree uses the values
saved as part of the local list in the property definition. Decision trees should be configured with all of these
selections if they are delegated to business users. This helps users select only the results that are
allowed, use the functions that are required, and take only the appropriate actions.
Decision Tables
Decision tables are easier to delegate to business users and, in most cases, are the preferred decision rule
when the same logic can be written using either a tree or a table. A decision table resembles tools like MS Excel, so
most developers also feel comfortable using them. Decision tables are best suited
when we are using a smaller number of unique properties to evaluate the decisions. Since we use tables,
each property gets a column, and it makes sense to add rows in a way that each of these columns
has a value.
The table columns can use properties to specify a set of conditions. The property can be compared
against a single value, like Customer Type = Retail, or against a range, like Total price greater than or
equal to 300 and less than or equal to 1000.
Decision tables also offer additional capabilities to add OR conditions. Suppose we want to introduce a
new column for Supplier, and for two of the three customer types the values in the other columns do not
change. We could use OR to split a cell instead of adding a new row. In the example below, we have a
single cell using OR for suppliers (Named Vendor, New Vendor) for the first two rows.
We can return values, or enable the property set field to set properties directly; using other icons, we can
return more than a single property based on the condition.
The Results tab provides a set of options similar to those we saw in decision trees; we can restrict the results
returned using a property or a set of values. The main difference between the tree and the table is the first
option, Evaluate all rows. If this is selected, the decision table continues evaluating the remaining conditions even
after one condition is satisfied. This behavior is not enabled by default and is used only in specific cases.
We will review one such case later in this lesson. When this setting is enabled, the decision table cannot
return values, and hence we cannot execute the decision rule from a declare expression.
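The Evaluate all rows behavior can be sketched as a scoring loop: instead of stopping at the first match, every satisfied row contributes to a running total. The conditions and point values below are invented for illustration.

```python
# Sketch of "Evaluate all rows": every matching row increments a
# score rather than short-circuiting on the first match. The
# conditions and point values are hypothetical.

SCORING_ROWS = [
    (lambda req: req["total"] > 1000,      25),  # large purchase
    (lambda req: req["vendor"] == "new",   10),  # unproven vendor
    (lambda req: req["department"] == "IT", 5),  # extra review for IT
]

def risk_score(req: dict) -> int:
    score = 0
    for condition, points in SCORING_ROWS:
        if condition(req):      # each satisfied row adds its points
            score += points
    return score
```

This is why an evaluate-all-rows table implements scoring naturally, and also why it cannot simply "return a value" the way a first-match table can.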
The delegation options apply only for the delegated user. Refer to Pega 7 Help for the details of each of
these options.
Conclusion
Decision rules play a key role in automating business processes by evaluating a set of specified
conditions. The decisions made can affect how the case is processed in terms of which path it takes, how
calculations change based on decisions, and what is presented to users based on a set of
conditions.
Pega offers decision trees, decision tables, when and map value rules to handle these requirements.
We'll learn more about how to reference decision rules in the Process Modeling and UI lessons.
Validation
Validation of data ensures the quality of the information being used in the application and in the business
process.
In general, data validation involves examining incoming values to ensure that they meet the application's
requirements. The values coming into the application from a user or an external source are compared
against pre-defined criteria. If the values do not meet the criteria, the system raises an error, and further
action can be taken, such as prompting for the correct information or rejecting the incoming values
entirely.
Validation of data input by users has been covered in detail in the System Architect Essentials (SAE) I
and II courses, as well as in the user experience lessons in this course. In this lesson, we will look at the
overall validation process. We will review, at a high level, some of the concepts that have already been covered,
and we will explain new concepts in detail.
In PRPC,
Property rules are part of the Data Model category and are instances of the Rule-Obj-Property
rule type.
Control rules are part of the User Interface category and are instances of the Rule-HTML-Property rule type.
Validate rules are part of the Process category and are instances of the Rule-Obj-Validate rule
type.
Edit Validate rules are part of the Data Model category and are instances of the Rule-Edit-Validate rule type.
Constraints rules are part of the Decision category and are instances of the Rule-Declare-Constraints rule type.
Dictionary validation: Part of this level of validation occurs automatically, and another part
occurs when it is built into the application, at specific times during process execution.
Object validation occurs only when explicitly designed into the application.
Another validation level, which is optional and occurs only when built into the application, is
Constraint validation. Based on declarative constraints rules, it is evaluated automatically by the
system each time a property identified in a constraints rule changes.
Mode validation leverages the property mode setting. A property mode is identified on the General tab
of the property rule form and is combined with the property type, as shown below. Mode validation
enforces the property mode when a value is being assigned to a property. For example, you cannot
assign a page value to a property defined with Single Value mode.
The Dictionary validation examines a property value in the context of the corresponding property rule. It
includes multiple validation tests. The first validation test is to ensure that the property value is compatible
with its type. For example, a property of Integer type cannot contain strings as a value. The list of
available standard property modes and types is shown above.
The next validation test is related to the maximum length assigned to the property. On the Advanced
tab of the Property rule form, Pega 7 allows us to specify a character limit for Value mode properties of
type Password, Text or Identifier. The system uses any value specified in this field to restrict the property
value to a specific maximum number of characters. If the length of the input string exceeds this limit, the
clipboard keeps the longer value, but the dictionary validation adds an associated error message.
We may also reference an Edit Validate rule, which defines a Java routine that tests the validity of an
input value. Normally, if user input fails such processing, the input is rejected and a red X appears next to
the input field in error, along with messages that may convey more about the error and the suggested
remedy. Users can change the input and resubmit the form. Edit Validate validation is only possible
with properties of mode Single Value, Value List, or Value Group. Any architect who has Java
knowledge can build new edit validate rules, or we can use any of the standard rules that are available to
us.
In this example, we used a standard edit validate rule, USZipCode, which checks whether the pattern
is 5 digits.
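The actual USZipCode rule is implemented in Java; the sketch below merely mimics the 5-digit pattern check it is described as performing, using a Python regular expression for illustration.

```python
# Illustrative sketch of the kind of single-value test an edit
# validate rule like USZipCode performs (the real rule is Java).
import re

def is_valid_us_zip(value: str) -> bool:
    # Exactly five digits, nothing more, nothing less.
    return re.fullmatch(r"\d{5}", value) is not None
```

Like an edit validate rule, this tests one input value in isolation; it knows nothing about other fields on the form.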
User input entered on an HTML form is placed on the clipboard.
An Edit Input rule and the Property-Validate method (in an activity rule) are applied to the property. [An
edit input rule provides a conversion facility. We can use edit input rules to convert data entered
by a user (or received from an external system) from a format that our application doesn't use into
another format. We can reference an edit input rule in the Edit Input Value field on the Advanced
tab of the Property rule.]
Constraint Validation
Constraints rules provide an automatic form of property validation every time the property's value
changes, in addition to the validation provided by the property rule or other means. The technique used is
called forward chaining, as in many other declarative rule types.
The system automatically adds a message to any property that is present on the clipboard and fails a
constraint. No other rules explicitly reference constraints rules. When we save a constraints rule, Process
Commander enforces it immediately and thereafter.
We can create the constraints rule using any explorer. In the Constraints rule, use the Constraints tab to
record the configurations that constrain the property values.
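The forward-chaining behavior can be sketched as a property that re-checks its constraint on every assignment and attaches a message when the check fails. The class and property names below are hypothetical illustrations, not PRPC APIs.

```python
# Sketch of forward chaining for constraints: the check fires
# automatically whenever the watched value changes, and a message
# is attached to the property. All names are hypothetical.

class ConstrainedProperty:
    def __init__(self, name, constraint, message):
        self.name, self.constraint, self.message = name, constraint, message
        self.value, self.errors = None, []

    def set(self, value):
        self.value = value
        self.errors = []                       # re-evaluate on every change
        if not self.constraint(value):
            self.errors.append(self.message)   # message added to the property

qty = ConstrainedProperty("Quantity", lambda v: v > 0, "Quantity must be positive")
qty.set(-2)
print(qty.errors)  # ['Quantity must be positive']
```

No caller ever invokes the constraint explicitly; setting the value is enough, which parallels how no other rule references a constraints rule directly.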
Case level Validation
We learned in other courses how to use validation rules in flow actions in a flow rule. In the Details tab of
the Case Designer, we can configure the validation rules to run when a case is instantiated and/or
whenever the case is saved in the database. The advantage of having validation at the case level is that,
whenever we need to validate certain data every time the case instances are saved, we can configure it in
the Case Designer once instead of needing to call the validate rule in multiple flow actions. When a
purchase request is made, we can validate whether the parent case has enough funds to process this
request before adding the case to the Program Fund parent case.
Clicking on these two links opens the Work-Cover-.OnAdd and Work-Cover-.Validate rules. These can be
specialized by putting the rules in the work class of the work pool or a specific case. The validation rules
can be configured like any other validation rule that we used in the SAE I and SAE II courses.
Usage of Validation rules
Most of the validations explained in the previous sections, such as property mode, property type, control
(such as pxDateTime), expected length, required field, and table values, are self-explanatory, and it is easy to
understand how they are used. But will these be enough for validation? Let's look at an example. We
want to restrict users so that they cannot enter a date in the future for the date of birth field, since that is not
valid. By defining date of birth as a date property and using the calendar control, users cannot choose
anything other than a date value; however, users can still select a future date. This is where Validate, Edit
Validate, and Constraints rules come into the picture.
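The date-of-birth check described above can be sketched as a small validate-style function using only the standard library; the function name and message text are assumptions for illustration.

```python
# Sketch of the date-of-birth check a validate rule would express:
# reject any date in the future. Names and messages are hypothetical.
from datetime import date

def validate_dob(dob, today=None):
    """Return a list of error messages (empty when the value is valid).

    `today` can be injected for testing; it defaults to the current date.
    """
    today = today or date.today()
    errors = []
    if dob > today:
        errors.append("Date of birth cannot be in the future")
    return errors
```

Returning a list of messages mirrors how failed validation attaches messages to the clipboard page rather than raising an exception.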
Let's do a comparison of the first two to understand when to use them appropriately.
The edit validate rule tests the validity of a single input value, and the tests are built using Java. It is
specified in the property rule for the tested property. The system has a number of standard edit validate
rules that we can use in our application.
In contrast, the validate rule can test the validity of multiple input values at once, and the tests are built
using easier-to-understand "If-And-Or" logical expressions rather than Java code.
It is specified on the flow action rule that presents the form to the user, and multiple flow actions can use
the same validate rule. It can also be applied as case level or stage level validation.
In general, it is a best practice to use validate rules over edit validate rules, because:
Validate rules are easier for non-programmers to design and understand. Edit validate rules
require Java programming skills.
Validate rules simplify application design, because one validate rule can verify multiple input
values at once.
Conclusion
In this lesson, we discussed how to help users enter valid property values into an HTML form in PRPC
and how to validate data received from another system or source. When a value fails to pass the validation
test, the system adds a message to the clipboard page containing the value, which displays to end users.
The validation error message invalidates the page, preventing it from being persisted. A set of configuration
items and rules allows different types and levels of validation.
Validation can be configured on property rules and section rules. Validation rules can be referenced on
flow actions, property rules, and in the Case Designer. Validation can be server side or client side.
Configuring Reports
Let's take a step back and learn about some of the key terms we need to know to better understand
PRPC's reporting capabilities.
Reporting Terminology
Ad-hoc reports are accessible in the Report Browser. The Report Browser is included as part of the
default Case Manager portal. When designing end user portals, we highly recommend that you customize
how the reports are presented. PRPC ships with a wide variety of standard reports, so it's absolutely
essential to decide which of these reports are necessary for users.
Next are the Report Categories. There are two types of categories: Private and Public (with standard
and shared types). Private categories are specific to an operator, so the reports in that category can be
accessed only by that operator. Public categories include all standard and shared reports.
There are two types of standard reports: reports that are shipped with the product and reports in categories
created by developers. When a developer creates a category, in this example ADV Reports, it is
accessible to all users who can view the Report Browser.
Public categories also include shared reports. When a manager creates a new category, PRPC saves it
as a shared category, and only users belonging to the same access group can access the reports in it.
The category is saved as a rule, but when we open the rule we can see that it only has a short
description. The category rule does not indicate the category type: Public (Shared or Standard) or
Private (Personal).
That leads us to Shortcuts. A shortcut is a link that displays the report title in the Report Browser. When
we click on a category, for example, Monitor Assignments, we see all the shortcuts that are defined in that
category.
Shortcuts serve two purposes: they provide a pointer to the report definition rule that is used in presenting
the results, and they identify the category in which the shortcut appears.
The shortcut rule has three parts:
The shortcut is displayed as a link and, when we click it, we see the report results in the Report Viewer.
The Report Viewer also allows us to manipulate the results. There is a toolbar on the top with options to
export the results to PDF or Excel, or to send them to a printer.
The Filter criteria can be edited by clicking on the link and modifying the condition.
In addition, the Report Viewer supports a wider set of commands, which can be accessed
by right-clicking on a column header.
We can control what users can do in the Report Viewer by configuring these actions in the report definition
rule. This allows us to make the available actions specific to each report instead of using the same set for
all reports the managers are using.
We can also restrict the report from appearing in the report browser if the report is not to be used as an
ad-hoc report.
For making additional edits, we also have the Report Editor, which lets us edit a report by adding new
columns and making a variety of other changes. The Report Editor is very powerful and enables us to build
a report using a visual interface rather than the report definition rule form.
The report definition rule form is accessible only to developers; however, developers can access the
Report Editor by using the edit link on the report definition rule or by using the reporting landing page.
After creating the category, we can create a report and select which category it belongs to. When
creating a new report, we need to select the following:
1. The type of report (list or summary), and
2. The data that we want to run the report against; it can be one of the case types or, if we need to
run it against all the case types, the work pool. We can also report on assignments
to track case processing.
Clicking OK opens up the report in Report Editor for the manager to complete it. The system actually
creates a report before the manager makes any changes. To do this, it uses a template rule named
pyDefaultReport. We can modify the template rule to include additional columns or new filter criteria.
The Report Editor provides more options than the Report Viewer: we can add new columns for filtering,
add new columns to the report, delete the columns that are currently in the report, and so on. The Report
Editor is also available to developers, in addition to their being able to work on the report definition rule
form directly. In most cases, we can build a report with all the features without ever opening the report
definition rule form.
After making changes, use the save link (at the top of the Report Editor) to save the rule.
When a manager creates a report, it must be saved in the private category (in this case, My Reports).
The system also creates a shortcut rule, which provides access to the report definition rule.
After defining the report, the manager can keep it private until it is complete. To share it with
other users, the manager uses the Take action on report icon to access this menu.
The manager then has to select the category where they want the report to be saved.
The copy action creates only a new shortcut rule and, when opened, we see that the report is in the
Manager Reports category accessible by all operators (A) in the Purchasing:Managers access group.
When copied, it creates a shortcut rule and is still referencing the same report definition rule that was
created earlier.
When we open the Records explorer, we can see all the instances of shortcuts. The OpenCases report has
two shortcuts: one in the Manager Reports category, which is shared with all purchasing managers in
the Purchasing:Managers access group, while the other is in the My Reports category, accessible only to
Manager@LES as a personal report. Both of these shortcuts point to the same rule and, at this point,
the manager has the option to delete the shortcut that is in his private category.
Once we click Schedule, we get options to configure the report schedule. We can select whether it
runs once or as a recurring task that repeats after a specific time period. We can
also configure the format of the report results and the email message, which is sent automatically to a set
of users.
Production RuleSets
Before concluding this lesson, let's quickly run through another important concept that developers need to
implement for managers to create reports. When applications require business users to make
modifications, we need to make sure that the report definition rule allows modifications even in a
production environment.
To implement this functionality, we need to define a RuleSet and then add it as a production RuleSet in
the application. Production RuleSets remain unlocked in production so that the rules belonging to that
RuleSet can be modified by users. However, we need to ensure that we are allowing this for a specific
group of people and not to everyone.
For example, if we want managers to work on performance metric reports, we create a new ruleset and
add it as the design-time configuration ruleset on the manager access group so only they get access to
these rules.
Often, developers create reports in a private ruleset, such as a branch or a personal ruleset, so that
they can test their reports before making them available to all users. By branching the production ruleset,
we can also allow business users to create and configure reports in a private environment without
disrupting the production environment.
Conclusion
Business users require the power and flexibility to create or modify reports, and giving them the option to
do this increases their productivity, since they can build reports directly in the system instead of waiting for
developers to create them.
In this lesson, we learned how PRPC offers toolsets like the Report Editor, the Report Viewer and the
Report Browser that let end users quickly create, modify, and access reports. We also learned about
the scheduling and subscription feature, where users can receive report results directly in their inbox as an
email attachment.
Configuring Reports
Introduction
Reports generated by PRPC sometimes display a lot of data that does not serve any purpose. We will
learn how to split a long-running report into pages, the available options to query a limited set
of records, and how to avoid compromising application performance.
Cases capture a lot of information; instead of presenting all of it as columns, we may need to present
the primary data as columns and supplement it with options to look up additional details using
sections.
Enterprise applications usually involve a lot of data. Reporting on such applications is not always as
simple as displaying columns in a list or grouping them on a specific field. There may be cases where
we would like to manipulate the data before presenting it to the user. Let's take a look
at a couple of such requirements:
The database stores the feedback score as numbers like 3, 9, and 6. It makes more sense to
present the results to the user as Low, Exceptional, and Average.
We want to get a list of all purchase requests that are not waiting for any user action. This report
involves two separate entities: cases and assignments.
How should we design reports to handle these requirements? In this lesson, we will introduce concepts
such as SQL Functions and Sub Reports that help us build these reports.
At the end of this lesson, you should be able to:
The options in the Report Viewer and Data Access tabs can be configured only in the report definition
rule form. In the Report Viewer tab, we decide which options we want to make available to business
users when they are in the Report Viewer; for example, whether users can change filter
criteria and/or access the command menu by right-clicking on a column, and which options should be
made available in the toolbar.
We will learn about the Data Access tab in the Creating Sub Reports topic later in this lesson.
Other features require us to open the report definition rule form. Let's look at a few of the
most common ones.
The Paging options allow us to customize the pagination features such as the Page size (number of rows
per page) and Page mode (the pagination format).
When paging is configured, the system retrieves only the rows that are required for the current page, and it
retrieves the results for the next page only when that page is accessed. Paging, in addition to filtering,
helps limit the number of records that we query from the database.
In addition, PRPC allows setting threshold values for cases when paging cannot be enabled.
In such cases, these threshold values help limit the number of rows accessed by the system. We can
limit the maximum number of rows that can be retrieved; by default it is set to 500. We can also set the
maximum elapsed time in seconds; by default it is set to 30 seconds. The system stops at the time limit
and displays a message indicating that the report is taking a long time to run.
If the application is configured to export the results, we can set thresholds for the maximum number of
records and the maximum amount of elapsed time to avoid performance issues.
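The row-count and elapsed-time thresholds can be sketched as two stop conditions on a fetch loop. The function name is hypothetical; the 500-row and 30-second defaults mirror the values mentioned above.

```python
# Sketch of row-count and elapsed-time thresholds applied while
# fetching query results. Defaults mirror the text (500 rows, 30 s).
import time

MAX_ROWS, MAX_SECONDS = 500, 30

def fetch_with_thresholds(cursor_rows):
    start, results = time.monotonic(), []
    for row in cursor_rows:
        if len(results) >= MAX_ROWS:
            break  # stop at the row limit
        if time.monotonic() - start > MAX_SECONDS:
            break  # stop at the time limit; report a long-running query
        results.append(row)
    return results
```

Either limit alone is enough to cap the cost of an unbounded report, which is why both defaults exist.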
Let's take a look at two of the most important reasons sections are applied to a report definition rule.
The first reason is when we need a custom section for filtering. When we define filters for a report, PRPC
automatically opens the section in a new dialog; however, we can override that by using a custom section.
We typically use a custom filter section when we want to give business users some help in selecting a filter
value or customizing the filters.
The second reason is to include Smart Info. Reports display data in a grid structure of rows and columns:
each record displays as a row, and the columns provide values of other fields specific to that row. For
example, a purchase request case collects data for twenty different properties, but in our report we include
only three columns to show the case type, case ID, and case status.
To see the additional information, users can click the icon in each row.
The feature that we use to configure this behavior is called Smart Info. First we select the Enable Smart
Info flag in the Report Viewer tab and then click the options next to Enable Smart Info. This opens up a
dialog where we enter getSmartInfo in the Content field. That is the name of the activity rule shipped in
PRPC to render a Smart Info box. The activity references a section named SmartInfo to get the contents.
To display the information that is specific to our application, we can create a section rule named
SmartInfo in our application.
In addition, we can change the appearance of the report header by configuring the corresponding
section, pyReportEditorHeader, in the Code-Pega-List class. Unlike the other sections we've discussed,
we can customize this section by copying it into an application ruleset and configuring it as needed.
For this configuration to work properly, all of the data used by the report must be present in the alternate
database. This includes any tables that are required by JOIN operations.
SQL Functions
Reports display results that are stored in a specific database table. Tables might store codes, which
require us to format the values so the results are easier to interpret.
Let's look at a couple of requirements:
The first requirement is to have the report always display the number of days it took to resolve a
case from the date of its creation.
The second requirement is to have the report format the feedback rating received from users
(who enter the rating on a scale from 1 to 10) and convert it into labels (Excellent, Good, Fair or
Poor).
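The rating-to-label conversion that a custom SQL function would perform can be sketched in Python. The band boundaries below are assumptions for illustration; the text specifies only the 1-10 scale and the four labels.

```python
# Sketch of the rating-to-label formatting a custom SQL function
# would do at the database level, expressed in Python for clarity.
# The band boundaries (9+, 7+, 4+) are assumed, not from the text.

def rating_label(score: int) -> str:
    if score >= 9:
        return "Excellent"
    if score >= 7:
        return "Good"
    if score >= 4:
        return "Fair"
    return "Poor"
```

In the real implementation this logic would be pushed into the query (for example as a SQL CASE expression) so the database, not the application, formats each row.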
SQL functions are helpful in addressing these requirements. SQL functions are applied at the database level, so the function is passed directly in the query that is executed against the table.
SQL functions apply to a class named Embed-UserFunction and are saved as Function Alias rules. PRPC ships with many standard SQL functions that we can use without making any changes.
The standard SQL functions are grouped by data type: String (Concatenate, Replace, Substring, Length, Upper Case, Lower Case, etc.), Number, Date, Conversion and SLA based.
The first requirement can be addressed by using the Difference between days function.
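What the Difference between days function computes is ordinary date arithmetic, pushed into the SQL query rather than done in the application. A minimal sketch of the equivalent logic:

```python
from datetime import datetime

def days_to_resolve(create_time, resolve_time):
    # Whole days between creation and resolution; the SQL function
    # performs the same arithmetic inside the database query.
    return (resolve_time - create_time).days

days_to_resolve(datetime(2015, 2, 1), datetime(2015, 2, 11))  # 10
```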
To add a function, we need to add a column and then click the fx button to open the calculation builder.
To pick a function from the list, enter a few characters of the function name and then select the one we want.
A Function Alias, if created properly, provides instructions that allow managers to add the function to their reports themselves.
However, for the Rating score field we need to write a new function. We can access the list of all SQL functions from the landing page, which has a button to create a new SQL function. The Function Alias rule must be defined in the Embed-UserFunction class.
The Source field is where we enter the code implementing the logic. Creating a Function Alias usually requires knowledge of SQL, so they are typically created by developers who are familiar with SQL. Because the implementation is SQL, we may have to create platform-specific rules if the SQL syntax varies with the database we use. Most of the time we will be using one database; however, when defining new SQL functions it makes sense to define them for other databases as well so that any future migration is smoother. We can use circumstancing to create Function Alias rules for different databases.
Once defined, functions can be added to reports directly by managers. Functions are available in the Report Editor under the Calculations tab.
Conclusion
PRPC offers a wide variety of tools (report browser, report viewer, report editor) to create and configure reports easily. In this lesson we looked at various reasons why we may need to open the underlying report definition rule form to make these changes.
Reports may need to reference a sub report to filter data or to include aggregate calculations on each
row. We know we need to use a sub report when we need to query data from a table and use it to filter
the data in another table. For example, if we want to display the date on which the first subcase was
resolved in the report.
Lastly, we looked at how to format codes saved in the database into meaningful text. What does a feedback rating of 7 really mean? The underlying database might save the score as 7, but we want the report to display Good instead of 7 to make it more meaningful. We learned that we can use SQL functions to display this information rather than creating another property to save the labels. We also looked at how to use the standard SQL functions and how to create a new one.
Which CSRs resolved the most cases during the past three quarters?
How many resolved cases during the past two months had exceptions and required a specialized CSR to intervene to solve the problem?
Having this information helps managers improve their processes and their customer service.
PRPC saves data across multiple tables when a case is being processed. When designing reports, it is
important for us to know which table has the data we need.
In this lesson we will learn about the three most important data sets we typically report on: case-related information, assignment-related information (who is currently working on the case) and the audit trail for the case. We will learn which database tables are used to store this information.
We will see how a PRPC concept like a class, property or the case corresponds to a database concept
such as table, column and row.
In addition, we will learn how to join or associate database tables so that a single report can display
columns from more than one database table.
We will conclude this lesson by learning about the need for optimization and how it helps to enhance the
performance of our reports.
At the end of this lesson, you should be able to:
For example, when users create a new purchase request, the system assigns a case ID and it is saved in
a new row in the database table named pc_ADV_Purchasing_Work. When a sub case is created for the
purchase request, such as a purchase order, PRPC adds another row in the same database table.
Each case is treated as a separate entity and is saved as a new row in the database table. So, how do we know which database table a class is mapped to? PRPC uses two records to manage the database mapping: DB Name and DB Table.
A DB Name record defines the database configuration details and can be configured to use either a JNDI or JDBC URL for the database connection details. Two DB Name records ship with the product:
1. The PegaRULES record, which maps to the database where all the rules are saved.
2. The PegaDATA record, which maps to the database where all data instances are saved.
The second record type is DB Table, which defines the class name, the database table name and the DB Name record to use.
As mentioned earlier, we may need to write reports on varied data sets, so lets look at the three most
important data sets that we would most likely want to report on.
pxUpdateOpName: the name of the operator who last updated the case
Additionally, we may also need properties that are used after the case is resolved, such as:
Additional properties for resolved user work group, org, unit or division
There are also properties to determine the time period between goal and resolution
(pyElapsedPastGoal) as well as deadline and resolution (pyElapsedPastDeadline). We can also look at
properties such as pyEffortEstimate and pyEffortActual for the effort in completing the case.
Lastly, we can also see how much time was spent at each status - pyElapsedStatusNew,
pyElapsedStatusOpen and pyElapsedStatusPending.
When we define subcases, we need to know about a few additional properties if we are creating reports with data from multiple classes.
The most important property to know is pzInsKey, the internal identifier PRPC uses for each case. A subcase of a purchase request, say a PurchaseOrder, has a property named pxCoverInsKey (the property used to identify the parent case) which holds the value of the pzInsKey of the Purchase Request.
So, let's identify a few properties that are specific to cases, as you might find these helpful with reporting:
Assignment Reports
When a case is being processed, it gets assigned to a user if it requires a user action. Each time a case
gets assigned PRPC also creates an assignment object. Once the assignment is resolved the case
moves on and a new assignment is created when another assignment shape is encountered. If the
assignment is routed to an operator, it gets saved in the database table named pc_assign_worklist and
if the assignment is routed to a workbasket it gets saved in a database table named
pc_assign_workbasket.
Let's take a look at some assignment-specific properties that you might find useful when creating reports:
One other important property to know about is pxRefObjectKey, which is mapped to pzInsKey so that cases and assignments can be related to each other.
F: flow action
L: local action
M: memo
pxTimeCreated: the time when the audit trail entry is recorded, which is after each action is submitted
History tables can also be used when designing performance-based reports and we want to capture
process metrics to see how to improve the business process.
pyPerformAssignmentTime: total time spent on that assignment, including all local actions on that assignment
These times, along with a few others, give an indication as to why an assignment takes longer to complete. If more than one operator works on the case, then we need to know about these properties as well.
pxAssignmentElapsedTime: total time elapsed after the assignment is routed; this indicates how long the assignment was waiting for user action
pyPerformTaskTime: total time spent by all users; applicable when the assignment is routed to multiple users. This value equals pyPerformAssignmentTime if only one user works on the case.
If pyPerformTaskTime is significantly lower than pxTaskElapsedTime then the assignment has been idle
for a long time.
Using these properties in a report, we can find out how each operator is performing. Note that the times returned by these columns are in seconds, so we need to use functions to convert them into hours or days. Functions are discussed in detail in a separate lesson in this course.
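As a quick illustration of the conversion those functions perform (the property names above come from the lesson; the arithmetic itself is plain unit conversion):

```python
def seconds_to_hours(seconds):
    # Assignment timing properties are stored in seconds.
    return seconds / 3600.0

def seconds_to_days(seconds):
    return seconds / 86400.0

# e.g. a pxAssignmentElapsedTime of 172800 seconds is 2 days
```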
Join:
A class join is configured in the report definition rule form. For example, here we have the OpenCases report defined in the Purchase Request class. To configure a class join, we define a prefix (PO) that joins the ADV-Purchasing-Work-PurchaseOrder class to the Purchase Request class. PRPC allows both inner and outer joins: inner joins return only records present in both tables, while outer joins return all the matched records from one table plus all records (including unmatched ones) from the other table. Selecting "only include matching rows" configures an inner join. There are two options for configuring an outer join, depending on whether we require all rows from PurchaseOrder or from Purchase Request.
After defining the prefix, we define the join condition; in this case we match the pzInsKey of the purchase request with the pxCoverInsKey of the purchase order.
To display any property belonging to PurchaseOrder class we need to use the prefix (PO) that we defined
earlier while the properties belonging to PurchaseRequest class can be referenced without any prefix.
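The inner/outer distinction and the pzInsKey-to-pxCoverInsKey match can be sketched outside SQL. The illustrative function below mimics what the configured join produces; the property names follow the lesson, while the row shape is an assumption made for the example.

```python
def join_cases(requests, orders, outer=False):
    """Pair each purchase request with its purchase orders by matching
    the request's pzInsKey to the order's pxCoverInsKey.

    outer=False mimics "only include matching rows" (inner join);
    outer=True also keeps unmatched requests (one outer-join variant).
    """
    rows = []
    for req in requests:
        matches = [o for o in orders if o["pxCoverInsKey"] == req["pzInsKey"]]
        if matches:
            rows.extend((req, order) for order in matches)
        elif outer:
            rows.append((req, None))  # unmatched request survives the outer join
    return rows
```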
Association:
We can also use associations to join multiple classes. PRPC comes with standard associations that connect commonly used classes, such as the assignment, case and case history tables.
This is an example of a standard association rule that comes with PRPC to associate workbaskets and cases.
In the report definition rule, we can access the association by entering part of the association's name, such as Assignment, which gives us access to the properties in the associated class. The association is automatically referenced on the Data Access tab once we add a column. If we need to associate other tables, we define the association rule and then reference it as shown below.
Optimization of Properties
As we learned earlier, PRPC uses a specialized BLOB structure to save property values. To run reports we can query against this BLOB; however, if the dataset involves thousands of records, the BLOB processing time becomes a real cost.
To avoid this, PRPC supports optimizing properties; all of the standard properties, such as work status, creation date and time, and creation operator, are already exposed as columns by the product.
How do we decide which properties to optimize? We can add unoptimized properties as columns or filter criteria in a report definition, but the system displays a warning. By default, PRPC restricts adding unoptimized properties in the Report Editor because they impact application performance. However, we can enable a flag in the report definition so that unoptimized properties can also be added in the Report Editor.
Optimization is done easily by running the Optimization wizard, which is accessible by right-clicking a property in the Application Explorer.
PRPC supports varied property structures: single value, page, and page list or page group.
When optimizing a single value or page property, the Optimization wizard does the following:
1. Tags the property rule form to indicate it is optimized.
2. Creates a new column in the same table.
3. Runs a query that takes the values from existing cases and populates this column.
When we use a page list, we cannot create a single column, since we do not know in advance how many rows we will need. For example, purchase request P-1 might have two items while purchase request P-3 has four. To handle this, PRPC creates an index table. When optimizing a page list, the wizard creates the following:
1. A declarative index class.
2. A set of all indexed properties in the declarative index class.
3. A declarative index rule that maps the properties in the declarative index class to the
corresponding properties in the main class.
4. A database table that corresponds to the declarative index class.
5. Database columns for all of the indexed properties in this table.
6. A query to populate this table with data taken from existing cases.
So in the example above, P-1 will have two entries in the indexed database table and P-3 will have four entries.
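The effect of the index table can be sketched as follows. This shows only the illustrative shape; real declarative index tables carry additional PRPC-specific key columns not modeled here.

```python
def index_rows(case_id, line_items):
    """Expand a page list into one index-table row per element, the way
    a declarative index stores page list data for reporting.

    The column names (caseID, rowNo) are assumptions for illustration.
    """
    return [
        {"caseID": case_id, "rowNo": i, **item}
        for i, item in enumerate(line_items, start=1)
    ]
```

With this shape, P-1 with two line items yields two index rows and P-3 with four items yields four, matching the example in the text.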
The primary objective of the declarative index is to support reporting on page list properties. After the declarative index is created, a report is usually written in the declarative index class to display the entire list. This report is then added as a drill-down report in the other reports defined for the case, so when we run the report we can drill down to see the page list.
Conclusion
Understanding the data model, knowing where to find the class mapping, and knowing the key standard properties most commonly used in reporting helps us design reports that suit the business requirements.
Reports in PRPC are usually created to see the data saved as part of the case, information on assignments, or how things were processed using the audit trail. Associating classes enables us to get data from various database tables and display that information in one report.
Optimizing properties, and creating declarative indexes when optimizing a page list property, avoids expensive BLOB processing and improves report performance.
Data Visualization
Introduction
Most of this lesson is on charts; the chart capabilities have been enhanced tremendously in Pega 7. We'll also have a look at the new property security feature for reports. Choosing which chart to use depends on the message you want to convey. A confusing chart, or one that directs attention away from the important information in the report, can be worse than no chart at all. The three charts shown all draw on the same data, and we have to use our knowledge of the message and our audience to decide whether one of these, or some other chart, would be the one to use. Experiment with charts for your report until you find the chart type that best conveys the report's information. Then adjust colors, labels, and other settings until the resulting chart is both pleasing and compelling.
When reading a chart, we can zoom in and out to see the big picture or a critical section of the chart. We can also drill down to see the specific data contributing to a particular point, column, or area of the chart. It is possible to change the filter conditions for the report. Charts can be shown in 2D or 3D. Report definitions generate list- and summary-type reports; only summary-type reports can include charts. A summary-type report has at least one column with a numeric aggregate value, such as count, total, or average, and at least one unaggregated group-by column, such as status, the day/month when the items were created or, as in this case, work type.
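That definition of a summary-type report, one aggregate plus one group-by column, can be illustrated with a minimal count-by-status sketch. The pyStatusWork property name follows Pega convention; the data values are invented for the example.

```python
from collections import Counter

def summarize(cases, group_by):
    # One group-by column plus one aggregate (here, a count) is the
    # minimal shape of a summary-type report.
    return Counter(case[group_by] for case in cases)

summarize(
    [{"pyStatusWork": "Open"}, {"pyStatusWork": "Open"}, {"pyStatusWork": "Resolved"}],
    "pyStatusWork",
)  # Counter({'Open': 2, 'Resolved': 1})
```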
columns, etc. If there are fewer colors in the palette than the chart needs, the chart selects additional
colors from the standard palette.
Conditional Colors allow us to define colors that are conditionally applied to chart elements. The condition
can be based on either numeric values or Group By column values displayed on the chart. We can use
Threshold Colors to define colored lines or regions to display on the chart grid. For example, on a line or
column chart we can display a red line parallel to the x-axis or specify a range of values.
Use General Settings for additional options. Select Format Options to specify the default font size and
weight for all labels and the size of the chart.
Legend and slider options are also available under General settings. Select Enable User Commands to
specify commands available to end users when the chart displays. The commands available depend on
the selected chart type.
When a chart is added to a report it displays above the report data by default. To display just the chart,
open the Report Definition and select Hide summary data display on the Report Viewer tab.
Configure a map
Pega 7 includes hundreds of maps that we can use to illustrate the data in a Report Definition, allowing
report users to see graphically how data values are distributed and then compare across geographic
areas.
Map charts can't be defined in the Report Editor; they must be defined using the Charts tab on the Report Definition. Let's have a look at how we can show the number of addresses per country in our address data table using a map chart. We need to associate the data in the report with the regions in the map, in this case the countries. We do that on the Maps landing page, found under Reporting > Settings > Maps. By default, the name of the country is used for the property value; if a country code is used instead, it can be mapped here.
We use the map World By Country for this report. The columns can be dragged directly into the chart. We can define colors based on how many addresses a country has. It is possible to set up drill-down maps so that users can drill down from the world view to a country view to see the same data by state or province within a country.
Configure the drill-down map on the Report Viewer tab. When providing drill-down from a map such as World By Country to another map on a drill-down report, it is not possible to know in advance which country a user may click. To solve this problem, we select a generic map type that refers not to a single map but to a set of maps that may be used.
When a user clicks a country to drill down, the map for the selected country is shown. This means that we need to map all state maps for the countries returned by the report.
Let's try it out.
Conclusion
Charts or graphs are useful when we want to illustrate data in a way that is easier for business users to interpret. Some reports, such as sales across various geographical regions, are better presented as a map to highlight which regions performed well.
We learned how to create reports that display their results in charts. When these reports are run in the report browser, managers see the chart, with the data usually displayed below it.
Charts can also be embedded directly in the dashboard of a user portal; trend, timeliness and productivity reports are common candidates. When the portal rule is created, we can include a chart control which references the report definition rule. When charts are rendered in the dashboard, we recommend disabling the data display below the chart so users see only the chart.
Introduction to Integration
Introduction
Integration enables your application to interact with other systems, which is a common requirement in
most applications. A wide variety of integrations are supported in Pega 7.
This lesson covers the fundamentals of the integration processing model and how integration fits into the
application architecture.
At the end of this lesson, you should be able to:
Invoke a Connector
Connectors let our application call out to: CMIS, .Net, EJB, File, HTTP, Java, JCA, JMS, MQ, REST, SAP, SAPJCo (SAP Java Connector), SOAP, and SQL applications.
Services let external systems call into our application via: COM, .Net, EJB, Email, File, HTTP, Java, JMS, JSR94, MQ, Portlet, REST, SAP, SAPJCo, and SOAP.
Integration Wizards
The development of most integrations is supported by wizards, which guide us through creating connectors and services and dramatically accelerate development.
The connector wizards use metadata, such as a WSDL file, EJB class, or a data table definition to create
the connector and mapping rules.
Classes and properties used for mapping the request and response data are typically created as
well.
The service wizard lets us create a service for our application, allowing external systems to send a
request to our application.
The wizard creates service and deployment records as well as listeners if they are required for the
specific service.
Invoke a Connector
A connector is invoked from either a data page or an activity. If you call a connector from a flow, you can use the Integrator shape to reference the activity.
Use the connector with a data page if you want to fetch data from the service, and with an activity if you want to push data to the service.
Let's have a brief look at the configuration for a data page.
We can specify the type of connector to use, as well as the request and response data transforms that map the request and response. The data page can be used on its own, or referenced by or copied to a property.
If we use an activity we can explicitly call a connector. We use data transforms to map application data to
the request and from the response.
For example, let's assume we have an application in which we want to update a supplier, which is held in
an external system.
In this case we could use a data page with the connector for the GetSupplier service and configure a
property to copy the data from the data page.
We would then modify the data in the property and use an activity to invoke the connector for the
UpdateSupplier service to update the external system.
1. Before the connector is invoked from a data page or activity a data transform is used to
map data between our application and integration clipboard pages.
2. Then the connect rule is invoked. Connect rules are protocol specific and implement the
interface to the remote service. They specify the target service, and provide data
mapping configuration details for outbound and inbound content.
3. The service client is initialized based on the connect rule type.
4. The request data is mapped into the format required by this type of connector, using the mapping rules specified in the connect rule. Don't confuse this mapping with the data transforms; this mapping is between the clipboard and the format required by this type of connection.
5. The request is sent to the external system.
6. When the response is received, the response data is mapped using the mapping rules
specified in the connect rule.
7. The service client is finalized, and control returns to the data page or activity.
8. Finally a data transform is used to copy all or part of the response from the integration
clipboard data structure to our application.
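The eight steps above can be condensed into a short sketch. All four callables here are placeholders standing in for PRPC's data transforms and connect rule, not real PRPC APIs.

```python
def invoke_connector(app_data, request_dt, connect, response_dt):
    """Condensed connector pipeline.

    request_dt:  data transform, application data -> request page (step 1)
    connect:     protocol-specific connect rule + remote call (steps 2-7)
    response_dt: data transform, response page -> application data (step 8)
    """
    request_page = request_dt(app_data)
    response_page = connect(request_page)
    return response_dt(response_page)
```

For instance, a GetSupplier-style call could be simulated by passing three small functions for the transforms and the remote call.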
1. The service listener is responsible for sensing that an incoming request has arrived. This
functionality is sometimes provided by the underlying Web or Application Server, and
sometimes provided by a PRPC Listener.
2. The service listener receives the request and instantiates the Service API to provide
communication with PRPC. Then, via the Service API, control is handed to PRPC.
3. PRPC looks up the service package and related service rule, using the access group that is
specified in the service package.
4. It then establishes the service requestor and optionally performs authentication based on security credentials passed in the request. Once authenticated, service processing continues using the authenticated user's access group, not the access group contained in the service package.
5. The request is mapped onto the clipboard according to the specifications contained in the
service rule.
6. Control is passed to the service activity, which provides the logic for the service.
7. When your service activity completes, control is returned to the Service API.
8. The service rule maps the clipboard data to form the response data.
9. The Service API then cleans up the requestor that it previously created, and returns control to
the service listener.
10. Finally, the service listener sends the response.
Let's take a closer look at the records involved in a service, working backward from the business logic.
The service activity is the primary activity executed by the service. It contains the business logic, takes the request as input, and creates the clipboard structure that represents the response.
The service rule is the first step toward exposing our service activity as a service. It specifies the service
activity, defines the external API of our service (the signature that external applications use to
communicate with this service), and specifies the data mapping for the request and response.
The service package is data, not a rule. It specifies the access group used to locate the service rule, and provides service requestor authentication, pooling and deployment options.
The job of the service listener is to sense when a request has arrived. In many Integration Types, this
functionality is provided by the underlying Web or Application Server.
In those Types where this is not provided, PRPC provides Listener data classes that allow us to specify
the details about listener initialization.
The data instances should be associated with our application's ruleset, since it is our application that provides the service. There are no special considerations with regard to the enterprise class structure when creating a service. Note that it is a best practice to use authentication.
Conclusion
Understanding the integration model and capabilities is necessary when integrating an application into an
existing system landscape.
Now, you should have an understanding of the integration capabilities. You should also understand the processing model for connectors and services. Finally, you should know when to use a data page versus an activity to invoke a connector.
Transient
Permanent
Transient errors typically don't last very long and correct themselves over time. For example, consider a situation where the remote host of the application or system the connector is accessing is restarted or temporarily unavailable.
Permanent errors can also occur. These errors are typically due to a configuration error or an error in the
remote application logic. For example, consider a situation in which an invalid SOAP request is sent to a
SOAP service. In this case, the error persists until the SOAP request message is fixed.
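A common way to cope with transient errors is to retry the call a few times before giving up, while letting permanent errors surface immediately, since retrying cannot fix a bad request. This is an illustrative pattern, not a built-in PRPC mechanism:

```python
import time

class TransientError(Exception):
    """Marker for errors expected to correct themselves (hypothetical type)."""

def call_with_retry(fn, attempts=3, delay=0.0):
    # Retry only on transient errors; any other exception (a permanent
    # error such as a malformed request) propagates immediately.
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except TransientError:
            if attempt == attempts:
                raise
            time.sleep(delay)  # back off before the next attempt
```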
Depending on how the connector is invoked there are different mechanisms for determining if there has
been an error. Each of these mechanisms is described below.
1. If the connector is invoked from a data page, we can use the Post Load Processing Activity to check for and handle errors.
2. If an error occurs while loading the data page, a message is added to the page. Check for messages in the Post Load Processing Activity and handle the error.
3. If the connector is invoked from an activity, use the transition step status to check whether the connector failed and, if so, handle the error.
In addition to checking for and handling errors directly after the connector is invoked, it is also possible to use the Error Handling Flow feature available for most connectors.
Note that the Error Handling Flow is not executed if the error was handled in a transition in the activity or if the connector is invoked from a data page.
The Error Handling Flow feature is enabled by default and is triggered if the exception is not caught elsewhere.
Simulate a Connector
Connector simulations are very useful in situations where the data source is not available or when we want to dictate the response returned. Connector simulations can be set up for most connector types. The Integration landing page (DesignerStudio > Integration) gives us an overview of the available connectors and their simulators.
The landing page shows which connector types have simulations available and whether they are active. Connector simulations can be temporarily enabled and disabled from the landing page using the Enable/Disable Simulations button. It is also possible to permanently disable all active simulators in one easy step using the Clear Simulations button. Note that SQL connectors cannot be simulated and that no connector simulations are available out-of-the-box.
The connector simulation configuration can be accessed directly from the landing page by clicking the X Simulations link or from the Simulations button on the connector rule.
Simulation activities can be defined at the global or user session level. If global is selected, the connector is simulated for all users on the system; if user is selected, the connector is simulated for the current user only.
It is possible to define several simulation activities for each connector, but only one can be active per level. Hence, a connector can have one simulation activity active as a global simulation and another one active for the user session. In that case, the user session simulation overrides the global simulation.
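The override rule, user session beats global, amounts to a two-level lookup. A minimal sketch, with the dictionary inputs assumed for illustration:

```python
def resolve_simulation(connector, user_sims, global_sims):
    """Return the simulation activity to run for a connector, or None.

    user_sims / global_sims map connector names to simulation activity
    names; a user-session entry overrides the global one.
    """
    return user_sims.get(connector) or global_sims.get(connector)
```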
If we're using our connector with a data page, we have the option to simulate the data source instead of the connector.
Let's have a look at the connector simulation activity for the SOAP connector called GetSupplier.
Place the connector simulation activity in the same class as the connector it simulates. It is worth considering placing connector simulation activities in a separate ruleset that is not moved to the production system.
The step page of the connector simulation activity is the service page of the connector it simulates, so it is easy to set response properties. Note that the properties are set directly on the service page; the stream and parse rules that some connector types define on the service page are not used in simulation mode.
We can use pre-conditions to return different responses based on things such as values in the request.
Other service types allow a service error condition to be defined on the Response tab.
When the service encounters a processing error and a condition evaluates to true, an error message as defined is returned to the calling application. The following options are available for defining an error response message.
Queue When: if the specified When rule returns true, the request is queued and a PRPC-specific SOAP fault that includes the queue ID of the request is returned. Asynchronous processing is covered in another lesson.
Mapping Error: an error occurred while mapping incoming data from the request message to the clipboard.
If the mapping, security, and service errors are not defined, the system returns standard exceptions.
Tracer
We can use the Tracer to capture session information about the connector's actions from the moment the session starts. The Tracer can monitor any active requestor session; when using it for a connector, we start it for our requestor session before running the connector activity.
A service, however, runs in a new requestor session, and the processing is usually so quick that it can be hard to catch the event to trace it. Therefore, the Trace option available in the Run menu is more convenient for tracing service rules. Using this option we can trace a specific rule. This Trace option can be used when unit testing the service using the Run option in the Actions menu, and to trace real service requests invoked from an external client application.
In the Tracer, we can use the following options when tracing services.
The services option adds steps when the service invocation begins and ends. Nested within
those steps are entries that show when the inbound and outbound data mapping begins and
ends.
The parse rules option adds steps when the parse rules begin and end their processing.
The stream rules option indicates when HTML and XML stream rules begin and end their
processing.
Clipboard
We can use the Clipboard tool to examine property values and messages associated with them.
For connectors, it is easy to examine the clipboard since connectors typically run in our requestor context.
You can create or select a work item, move it through the flow and examine the Clipboard before the
connector is invoked and after it obtains a response.
The session data of a service requestor is also accessible on the Clipboard. However, because the
duration of a service request is so short, it is nearly impossible to examine the Clipboard if it is invoked
externally and therefore not run in our requestor context. To see the Clipboard of a service rule, we must
invoke the service using the Run option in the Actions menu.
Log File
A message is written to the Alert Log file when some processing events exceed their threshold. Alert
messages are available for both services and connectors.
During a service execution, operations such as data parsing can contribute to a long transaction time.
If one of these time thresholds is exceeded, a PEGA0011 alert message is reported in the alert log.
Similarly, a PEGA0020 alert message is reported when a connector call to an external system has
exceeded an elapsed time threshold.
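The alert mechanism is essentially elapsed-time bookkeeping against a configured threshold. A minimal sketch in Python; the threshold values and the function are illustrative, not Pega's actual implementation:

```python
# Illustrative thresholds in milliseconds; real values are set in the
# system configuration, not hard-coded like this.
SERVICE_THRESHOLD_MS = 500     # PEGA0011-style: service took too long
CONNECTOR_THRESHOLD_MS = 1000  # PEGA0020-style: connector call took too long

def alert_if_slow(elapsed_ms, threshold_ms, alert_code):
    """Return an alert message when elapsed time exceeds the threshold."""
    if elapsed_ms > threshold_ms:
        return "%s: elapsed %d ms exceeds threshold %d ms" % (
            alert_code, elapsed_ms, threshold_ms)
    return None

print(alert_if_slow(1500, CONNECTOR_THRESHOLD_MS, "PEGA0020"))
```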
Besides the Alert Log, we have the Pega Log in which system errors, exceptions, and debug statements are
gathered. While testing our integration interactions, we can increase log level settings, add more loggers
and then examine the results in the log file. Use the DesignerStudio > System > Tools > Logs > Logging
Level Settings landing page to update the log level settings.
For connectors, we need to set a logging level for the Invoke activity.
Below is a list of Java classes with their matching service types. We can set logging levels for the classes
appropriate for the service we are testing.
Service Type | Classes
All | com.pega.pegarules.services
SOAP or .Net | com.pega.pegarules.services.soap.SOAPService, com.pega.pegarules.services.soap.SOAPResponseHandler, com.pega.pegarules.services.soap.SOAPUtils
EJB | com.pega.pegarules.services.jsr94
Java | com.pega.pegarules.services.jsr94
JMS | com.pega.pegarules.services.JMSListener
MQ | com.pega.pegarules.services.MQListener
File | com.pega.pegarules.services.file
Email | com.pega.pegarules.services.EmailListener
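These per-class loggers behave like any hierarchical logging framework: a level set on a more specific logger overrides the level inherited from its parent. Python's logging module follows the same model, so the idea can be sketched as follows (the logger names come from the table above; the levels are illustrative):

```python
import logging

# Set a baseline level on the parent logger, then raise verbosity for
# just the SOAP service class, as we would in the logging settings.
logging.getLogger("com.pega.pegarules.services").setLevel(logging.WARNING)
soap = logging.getLogger("com.pega.pegarules.services.soap.SOAPService")
soap.setLevel(logging.DEBUG)

# Siblings without an explicit level inherit the parent's WARNING level.
jms = logging.getLogger("com.pega.pegarules.services.JMSListener")
print(soap.getEffectiveLevel(), jms.getEffectiveLevel())
```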
Conclusion
It is very important to make time to think through the error handling approach for your integration projects.
Your application will never be robust if you neglect this part.
Now, you should understand the options for error handling for both services and connectors. You should
also be able to debug and unit test a connector and service. Finally, you should now know how to
simulate connectors.
Keep in mind that some integrations have specific error handling, which we will cover in other lessons.
We can provide a URL indicating where the WSDL is hosted. The system prompts us for credentials if the
server hosting the WSDL requires authentication.
Alternatively, we can upload the WSDL as a file. Many WSDLs do not stand alone, but reference other
WSDLs or schemas defining shared data models. The wizard only allows a single file, so WSDLs that
don't stand alone cannot be uploaded as a file; they need to be referenced via a URL.
Next we have to select the operations that we want to be able to call in our application.
If the service requires authentication, click the Edit Authentication link to configure it. It is possible to
configure a single Authentication Profile for the service or different profiles for each selected operation.
We can specify an existing profile or provide the required information such as authentication scheme,
user name, and password to have an Authentication Profile generated by the wizard.
You can use the Test button if you want to test the connection for an operation.
Click Next to display the Final screen where we can configure several key components.
In the Integration field, enter the name of this integration service. The short name is a unique identifier
and defaults into the Description field. It is also used in the class, RuleSet, and authentication profile
names created.
The Reuse Layer determines where this connector can be reused.
Global Integrations indicates that the integration can be used across organizations and
applications.
Implementation Integration Class indicates that the integration can be used within this
application only.
Organization Integration Class indicates that the integration can be reused across applications
within the organization.
If implementation or organization integration is selected we can either create a new RuleSet for this
integration above the reuse layer or use the integration RuleSet of the reuse layer.
The setting we use for the reuse layer determines the base class and ruleset in which the connector rules
are created.
Reuse Layer | Baseclass | RuleSet Option | Connector RuleSet | RuleSet Prerequisite
Implementation | Org-App-Int-Supplier | New | SupplierIntegration | AppInt
Implementation | Org-App-Int-Supplier | Reuse Layer | AppInt | OrgInt
Organization | Org-Int-Supplier | New | SupplierIntegration | OrgInt
Organization | Org-Int-Supplier | Reuse Layer | OrgInt | -
Global | Int-Supplier | N/A | SupplierIntegration | OrgInt has SupplierIntegration as a prerequisite
If the integration already exists at the reuse layer, the message "This integration already exists at the
reuse layer specified. If you proceed the integrations will be merged" appears, and proceeding creates a
new RuleSet version of the integration RuleSet.
Click Preview to view a summary of records the wizard will create for this integration.
Click Create to create the integration. Depending on what RuleSet options we selected and the security
settings in use for the application and its rulesets, a password (or passwords) may be required to
proceed.
We can use the Undo Generation button to remove the records generated.
Select DesignerStudio > Application > Tools > All Wizards to see a list of all wizards. This allows us to
complete a wizard in progress or undo generation of a completed wizard.
XML Stream and Parse XML rules for request and response mapping
In addition to the above rules, an Authentication Profile record is created if we configured it in the wizard.
Below the base class, which is named after the integration (Int-Supplier in this case), there is another
class with the same name holding the request and response properties for the operations in the service
as well as the connect and mapping rules.
In this particular WSDL the requests and responses were all modelled as named complex types and are
represented as classes, with the complex type names used as the class names.
Connect SOAP
Let's have a look at the Connect SOAP rule.
Most of the fields on the service tab were populated from the WSDL file.
If the operation requires authentication and it was configured in the wizard, an Authentication Profile will
be referenced in the Authentication section.
The Service Endpoint URL (shown in the Connection section) varies depending on the connector's
environment (development, QA, or production). The Global Resource Settings feature allows us to specify
the endpoint URL with a system variable, rather than using a rule that may be maintained in a locked
ruleset.
The wizard defaults the response timeout to 120,000 milliseconds. This means that a connection attempt
will wait up to 2 minutes before timing out with a failure message. Typically we are going to want to adjust
this to a much lower value, depending on the SLA of the service. In particular, if this service is invoked as
part of a user's interactive session, this value should be reduced to a few seconds.
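The timeout behaviour can be sketched generically. This is not the Axis client's API, just the concept of giving up on a call that does not answer within the configured window:

```python
import time
from concurrent.futures import ThreadPoolExecutor, TimeoutError

def call_with_timeout(call, timeout_s):
    """Invoke `call`, returning 'TIMEOUT' if no response arrives in time."""
    with ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(call)
        try:
            return future.result(timeout=timeout_s)
        except TimeoutError:
            return "TIMEOUT"

fast = lambda: "response"
slow = lambda: time.sleep(0.3) or "late response"
print(call_with_timeout(fast, 2.0), call_with_timeout(slow, 0.05))
```

For an interactive user a short window such as a few seconds is sensible; the 120,000 ms default only makes sense for background work.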
The error handling properties hold status and exception information returned by the connector and the
processing options are advanced settings that allow us to setup the connector to operate asynchronously.
Entries in the request headers specify any data that needs to be included in the SOAP envelope header.
Entries in the request parameters specify the data that will be mapped into the body of the SOAP request
message. In this particular case the body is mapped using an XML stream rule.
We use the target property if we want to store the entire original SOAP response with the case. This
might be useful or required for auditing or history purposes.
Response header entries are used to map the incoming SOAP envelope header.
The response parameters allow us to specify how to map the data from the body of the reply message to
Clipboard properties. In this particular case the body is mapped using an XML parse rule.
A SOAP fault is an error in a SOAP communication resulting from an incorrect message format, header-processing problems, or incompatibility between applications. When a SOAP fault occurs, a special
message is generated that contains data indicating where the error originated and what caused it. The
properties on the faults tab allow us to map the fault's details returned and handle the error appropriately.
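A fault arrives as a structured element inside the SOAP body, which is why its details can be mapped to properties. A minimal SOAP 1.1 fault and how its parts could be read out; the fault code and string values here are illustrative:

```python
import xml.etree.ElementTree as ET

# A minimal SOAP 1.1 fault envelope; real faults carry whatever
# detail the remote service reports.
FAULT = """\
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
  <soapenv:Body>
    <soapenv:Fault>
      <faultcode>soapenv:Client</faultcode>
      <faultstring>Invalid purchase request ID</faultstring>
    </soapenv:Fault>
  </soapenv:Body>
</soapenv:Envelope>"""

NS = {"soapenv": "http://schemas.xmlsoap.org/soap/envelope/"}
fault = ET.fromstring(FAULT).find(".//soapenv:Fault", NS)
# In SOAP 1.1, faultcode and faultstring are unqualified child elements.
print(fault.findtext("faultcode"), "/", fault.findtext("faultstring"))
```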
The SOAP connector architecture uses an Axis client to make SOAP calls and process the response. The
client properties allow us to specify settings for the Axis client.
If the connector requires security we can configure it on the Advanced tab.
Compensating actions are intended for use when the connector succeeds, but a subsequent step in the
process determines that the action of the connector should be reversed. Typically the compensating
action sends a message to the external system by executing another connector that undoes the action of
the original connector. Compensating actions are not intended to help recover from system failures.
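The pattern is an explicit reversing call rather than a transactional rollback. A sketch with illustrative names, not an actual Pega API:

```python
# Sketch of a compensating action: the forward call succeeded, but a
# later business decision requires an explicit reversing call.
calls = []

def reserve_stock(item):       # forward connector call
    calls.append(("reserve", item))

def cancel_reservation(item):  # compensating connector call
    calls.append(("cancel", item))

reserve_stock("laptop")
order_approved = False         # a subsequent step rejects the order
if not order_approved:
    cancel_reservation("laptop")

print(calls)
```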
XML Stream
The XML stream rule maps data from the Clipboard to the SOAP request.
Here, we map the supplier to the update supplier service request as we can see in the tree structure.
The XML tab defines how the properties will be mapped into the XML message structure.
Parse XML
The XML parse rule maps the response to the Clipboard.
Mapping Tab
Here, the list of suppliers is mapped from the get supplier list service response.
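Conceptually, the parse rule walks the repeating elements of the response and maps each one to clipboard properties. A sketch of that mapping; the element names are illustrative, the real ones come from the WSDL:

```python
import xml.etree.ElementTree as ET

# Illustrative response body with a repeating <Supplier> element.
RESPONSE = """\
<GetSupplierListResponse>
  <Supplier><ID>1</ID><Name>Acme</Name></Supplier>
  <Supplier><ID>2</ID><Name>Globex</Name></Supplier>
</GetSupplierListResponse>"""

root = ET.fromstring(RESPONSE)
# Map each repeating element to a page-like dict, as the Parse XML
# rule maps elements to clipboard properties.
suppliers = [{"ID": s.findtext("ID"), "Name": s.findtext("Name")}
             for s in root.findall("Supplier")]
print(suppliers)
```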
Conclusion
The Create a SOAP Integration wizard walks you through the process of integrating with an external
SOAP service, accelerating the development considerably.
Now, we can use the SOAP Integration wizard to create a SOAP connector. We also understand the options
available and the effect they have on the records created.
Let's start by reviewing the requirement for our service. We want to create a service that returns the
details of a purchase request with a given ID.
The information returned for each purchase request should include:
Requestor
Date
Line items
Status
Total Amount
Use of authentication
Choose Create and manage work to use the service to create a new case and perform one or more
processing action(s) upon it. When selecting this option, the wizard generates two activities for use by the
service.
svcAddWorkObject, which creates the case and starts the specified flow. This is the standard
service activity for services that create new cases.
svcPerformFlowAction, which identifies the case and then calls the specified flow action. This is
the standard service activity for services that perform a flow action on a case.
Select Invoke existing activity rules to use an existing activity to manipulate an existing case.
Select Process input and output data to map data to or from the clipboard, such as to update the
contents of a data table. The wizard creates the service rule and a new empty activity. This activity can be
configured to use the mapped input data to perform an action, and to populate the clipboard properties to
be returned in the mapped output fields.
This lesson focuses on creating a service to invoke an existing activity rule. Information on the remaining
options can be found on the PDN.
Next we need to specify the activity class.
In our case the class is the purchase request case type. Next we need to select the activity to use.
If the activity has parameters those can be used as inputs or outputs to the service by selecting them on
the next screen.
Our activity has no parameters so there is nothing to do.
In the next step the input and output properties for the service are defined based on the properties
available in the service class.
In our case we want to take the purchase request ID as an input parameter and return the line items and
total price.
Only properties defined directly in the class (i.e., not inherited ones) are shown. We need to take care of
inherited properties outside the wizard.
Our input parameter, the purchase request ID, is inherited so it is not shown.
In many cases, we can use the clipboard to map data between our application and the other system. If we
want to include a more complex structure, such as a repeating list, in the input or output we need to use
XML for data mapping. In our case we want the line items in the response so we need to use XML for
data mapping. Some of the properties in the response are not inherited so we select those.
Next we need to select ruleset and service package.
Typically we want to create the service rules in the application ruleset. The service package defines the
deployment of the service. A service package typically represents a WSDL and all services belonging to
the service package will show up as operations in the WSDL.
If we select to create a new service package we need to provide details in the next step. The wizard
provides us with a default name.
It is a best practice to have a dedicated access group for services, which has minimum privileges required
to run the service activities. If our application has services that have individual security policies, we may
have multiple service packages and access groups.
Select requires authentication to authenticate the service client at runtime. If this option is selected the
client needs to provide a valid operator and password when calling the service.
Selecting Suppress Show-HTML causes the system to skip over any activity step that calls the Show-HTML
method in the service activities that execute through service rules that reference this service
package instance. This feature lets us reuse or share an activity that supports both interactive users and
services.
The last step provides an overview of the rules to be created.
Click Finish to create the records.
In the final screen the created records are listed and the WSDL URL provided.
SOAP Service
Service Package
The service tab defines the service page and activity for this service. The primary page holds the data
mapped to the request and response. Optionally you can specify a data transform that is applied
immediately after it creates the page. The service activity provides the processing for the service. Let's
have a look at the activity.
In our case the activity takes the input parameter (purchase request ID) from the service page and loads
the corresponding purchase request directly into the service page. However, the service activity could
create a case or perform a flow action on an existing case or do just about anything.
The request tab contains the mapping configuration for the request message.
We can see that the pyID property is mapped directly from the service page. We needed to add the
request parameter manually since pyID, as an inherited property, didn't show up in the wizard. If we had
selected use XML data mapping for the request in the wizard, an XML parse rule would have been
created and configured here.
The response tab contains the mapping configuration for the response message.
The response is mapped from the service page with an XML stream rule since we selected use XML for
data mapping in the wizard. Let's have a look at it.
In addition to the properties mapped we also wanted to map the requestor (pxCreateOperator) and the
date (pxCreateDateTime) of the purchase request. These properties didn't show up in the wizard because
they are inherited. We also want to rename the pyID element to ID and the LineItems element to LineItem
since the element itself represents a single line item.
This is how the XML stream rule looks after we've made our changes.
The fault tab allows us to configure when to return a SOAP fault.
We want to return a SOAP fault if the purchase request with the given ID isn't found. We assume that if
the pxCreateOperator isn't available the purchase request wasn't found, and that was because the ID
given was invalid. In a real-world application, we would want to provide additional error-handling details.
The context and pooling tabs are populated from the information we provided in the wizard. The methods
tab specifies the service type and lists the methods that are part of the service package.
Use the deployment tab to get the WSDL.
Select the service class of interest and click the deployment tab. Typically there is only one service class
per service package. However, there can be several, making it possible to have several WSDLs for a single
service package.
If the check-out feature is used, we need to ensure that all rules related to the service are checked in before
using the deployment tab, since rules in private rulesets are ignored.
Our initial test should use the Use current requestor context option to test the service activity and
mapping rules.
We have the option of entering the purchase request ID for our request as either an individual value, or
entering it directly into the SOAP request envelope.
Enter a valid purchase request ID, and click Execute. We see that the overall result for our first test was a
success.
We can see that the response is the purchase request for the given ID.
Now let's test what happens if we give an invalid purchase request ID.
A SOAP fault is returned, just as expected. Our service activity, mapping rules, and error handling seem
to be working well.
When using the current requestor context some processing steps are not attempted:
Initialize Requestor
Perform Authentication
In order to fully test our service, we want to include the above. Select Initialize service requester
context, and enter the user ID and password that we're going to use to authenticate.
The final test is to invoke the service from an external application.
Conclusion
The Service Wizard walks you through the process of creating a service, accelerating the development
considerably. In this lesson we used it to create a SOAP service.
Now, we can use the Service Wizard to create a SOAP service. We understand the options available and
the effect they have on the records created. We also understand the records generated so that we can
tune the configuration.
Let's start by reviewing the requirement for our service. In our purchasing application there is a data table
with suppliers. We want to create a file service that takes a CSV (comma separated value) file with
supplier information and updates the table.
Each row in the CSV file contains the following information:
ID
Supplier Name
Contact Name
Contact Title
Address
City
Region
Postal Code
Country
Phone
In this lesson, we discuss how to configure a file listener and service. Though the configuration
information is valid only for file listeners and services, we can apply the general concepts to any
listener/service combination.
Choose Create and manage work to use the service to create a new case and perform some processing
action(s) upon it. When selecting this option, the wizard generates two activities for use by the service.
svcAddWorkObject, which creates the case and starts the specified flow. This is the standard
service activity for services that create new cases.
svcPerformFlowAction, which identifies the case and then calls the specified flow action. This is
the standard service activity for services that perform a flow action on a case.
If you already have an activity you want to use to manipulate an existing case, select Invoke existing
activity rules.
Select Process input and output data to map data to or from the clipboard, such as to update the
contents of a data table. The wizard creates the service rule and a new empty activity. This activity
can be configured to use the mapped input data to perform an action, and to populate the clipboard
properties to be returned in the mapped output fields.
In this case, we choose Process input and output data and set the service type to File, so we need to
specify the class and the name of the service.
In our case we specify the supplier class of the data table and we name the service UpdateSupplier.
In the next step the input and output properties for the service are defined based on the properties
available in the service class.
In our case we want to use a Parse Delimited rule, which can't be configured in the wizard, so we need to
configure that manually after the wizard is complete. The wizard requires that at least one input and one
output property be selected, so we need to select something.
Next we need to select ruleset and service package.
Typically we want to create the service rules in the application ruleset. If we select Configure a new
service package and/or Configure a new file listener, we need to provide details in the next step.
The wizard provides us with a default name. It is a best practice to have a dedicated access group for
services, which has minimum privileges required to run the service activities. If our application has
services that have individual security policies, we may have multiple service packages and access
groups.
Enable the Requires Authentication option to authenticate the service client at runtime. If this option is
selected the client needs to provide a valid operator and password when calling the service.
Enabling the Suppress Show-HTML option causes the system to skip over any activity step that calls the
Show-HTML method in the service activities that execute through service rules that reference this service
package instance. This feature lets us reuse or share an activity that supports both interactive users and
services.
The Source Location field identifies the directory in which the listener looks for the input files. The
listener requires read and write access to this directory. Note that this field might change between
environments and supports the Global Resource Settings syntax.
The Source Name Mask field allows us to enter a mask used to select the files in the Source Location.
We can use an asterisk as a wildcard match. For example, *.CSV causes the system to process all files
with extension CSV. Note that case is significant in matching files on UNIX systems. If we leave this field
blank the listener selects every file in the directory identified in the Source Location.
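The mask behaves like ordinary shell-style globbing. A sketch of the selection logic using Python's fnmatch module; the case-sensitive variant mirrors the UNIX behaviour noted above:

```python
import fnmatch

files = ["suppliers.CSV", "archive.csv", "notes.txt"]

# fnmatchcase matches case-sensitively, so *.CSV does not pick up
# archive.csv; an empty mask would mean selecting every file.
selected = [f for f in files if fnmatch.fnmatchcase(f, "*.CSV")]
print(selected)
```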
The last step provides an overview of the rules to be created.
File Service
Activity
Service Package
File Listener
Service Activity, which is the primary activity that is executed by the service.
Parsing Rule, which parses the file content and creates the service clipboard page which is used
in the service activity.
Service File, which specifies the parsing rule and service activity.
Service Package, which specifies the access group as well as authentication and requestor
options.
A File Listener which encapsulates details about listener initialization and processing options.
The wizard created the service file, service package, file listener, and an empty activity for us. Let's have
a look at the records created by the wizard and how we can configure the file service to parse our CSV
file and update the suppliers.
Let's start by looking at our service activity.
Service Activity
In the service activity we open and lock the supplier with the given ID.
Then we set the values from the CSV file and save the supplier. If an error occurs when the supplier is
opened or saved the activity simply exits.
Parse Delimited
The service file rules support three types of parse rules:
Parse Delimited rules, which are used when a comma, tab, quote, or other character separates
fields within each input record.
Parse Structured rules, which are used to import fixed-format flat files.
Parse XML rules, which are used to map data from XML documents.
In our case we have a CSV file so we need to use a Parse Delimited rule.
The field format is set to comma-separated values (CSV) and the values in the record are mapped to
properties.
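The mapping itself is positional: each value in a record lands in a named property. A sketch with Python's csv module; the property names below are illustrative stand-ins for the actual supplier properties:

```python
import csv, io

# One record of the supplier CSV; the column order follows the field
# list given earlier. Property names here are illustrative.
FIELDS = ["ID", "SupplierName", "ContactName", "ContactTitle", "Address",
          "City", "Region", "PostalCode", "Country", "Phone"]
LINE = "42,Acme,Jane Doe,Buyer,1 Main St,Boston,MA,02142,USA,555-0100\n"

record = next(csv.reader(io.StringIO(LINE)))
# Map positional values to named properties, as the Parse Delimited
# rule maps record values to clipboard properties.
supplier = dict(zip(FIELDS, record))
print(supplier["SupplierName"], supplier["City"])
```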
Service File
The service tab defines the service page and the activity for this service.
The service page holds the data mapped from the request. The processing options are used to configure
asynchronous processing, which can increase the throughput of the file service. In the advanced courses
there is a separate lesson describing the configuration of asynchronous file service.
The Method tab describes how the system detects fields and records in the input files.
Record at a time parses a record at a time. Use the Record Terminator or Record Length fields
to identify the record.
By record type extracts a record type from the record. Use the Offset and Length fields to
identify the record type and the Record Terminator or Record Length to identify the record.
Specify the Data Mode and Character Encoding of the input file. This choice affects which methods can
be used in parsing the record.
The Request tab defines the processing and data mapping for each file or each record.
It is possible to specify an Initial Activity that is run before processing each input file and a Final activity
that is run after the entire file is processed.
How the information configured in the Parse Segments is used depends on the processing method
specified in the Methods tab.
The Record Type is used to identify which parse segment to apply to a record parsed from the
file. This option is only relevant if the processing method on the method tab is record type.
Select Once to apply the parse segment only once. This option is useful for files that contain a
header record.
The Map To field defines the action to take in the parse step. In our case we want to use a parse
delimited rule, so enter the corresponding key for the Map To field in the Map to Key field. In this
case it is the name of the Parse Delimited rule we want to use.
If Auth? is selected, the system uses two parameters, pyServiceUser and pyServicePassword, from
the parameter page as authentication credentials for a requestor. These two parameters can be
set in earlier rows in the parse segments.
Use the Activity field to specify an activity to run after the data is processed by this parse
segment. We want to update the supplier after each record in the file so we specify our service
activity here.
Select Keep Page? to cause the primary page of the activity to be retained after processing of
this row is complete. Clear to cause the primary page to be reinitialized after this row is
processed.
The Frequency field allows us to specify a number of records to process before committing database
changes. When this field is not blank, a final commit operation also occurs after all records are
processed. If this field is blank, no automatic commit operations are performed. In our case we want to
commit after every record so we set the frequency to 1.
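The commit-frequency behaviour described above can be sketched as a counter around a commit callback; this is an illustration of the described semantics, not Pega's code:

```python
def process_file(records, frequency, commit):
    """Commit every `frequency` records, plus a final commit (sketch)."""
    for count, record in enumerate(records, start=1):
        # ... the service activity would process the record here ...
        if frequency and count % frequency == 0:
            commit()
    if frequency:
        commit()  # final commit after the whole file is processed

commits = []
process_file(["r1", "r2", "r3", "r4", "r5"], frequency=2,
             commit=lambda: commits.append("commit"))
print(len(commits))
```

With a frequency of 1, as in our case, every record is followed by a commit.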
Use the Success Criteria to specify a when rule to be evaluated after each record is processed. If this
rule evaluates to false a database rollback occurs backing out of the most recent commit operation and
the processing of the file is abandoned.
Service Package
The context and pooling tabs are populated from the information we provided in the wizard.
The methods tab specifies the service type and lists the methods that are part of the service package. That's
where we can find our file service.
The Deployment tab is not relevant for file services.
File Listener
The Properties tab identifies where the listener is to run, where it looks for input files, and the Service File
rule.
Run on all nodes: The listener runs on all nodes, in other words on all servers in a cluster.
Node based startup: The listener is started on the specified nodes.
Host based startup: The listener is started on a specified number of nodes on specific servers
within the cluster.
If node or host based startup is selected, additional fields appear for the configuration of the
node, or the host name and node count.
The Reset Startup button deletes all instances from the class Log-Connect-AssignedNodes so that
listeners can be started.
The Source Location and Source Name Mask are populated from the wizard. The listener creates
sub-directories within the source directory:
Work: Original files are copied to the Work directory for recovery attempts.
Completed: Files for which processing completed successfully are copied here.
The Concurrent Threads field allows us to specify the number of threads per server node this listener
requestor creates when it starts. Each thread operates on a single file; in other words, multiple concurrent
threads have no benefit unless multiple files are available at the same time for processing.
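That one-file-per-thread behaviour can be sketched with a small worker pool; the file names and handler are illustrative:

```python
from concurrent.futures import ThreadPoolExecutor

def handle_file(name):
    # Stand-in for the file service processing one whole input file.
    return "processed " + name

files = ["a.CSV", "b.CSV", "c.CSV"]
# Two worker threads only pay off because three files are waiting;
# with a single pending file the extra thread would sit idle.
with ThreadPoolExecutor(max_workers=2) as pool:
    results = list(pool.map(handle_file, files))
print(results)
```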
Each file listener links to a single Service File rule. Service Package is the first key part, Service Class
is the second key part and Service Method is the third key part of the Service File rule.
We can use the Test Connectivity button to verify that the listener thread can be created on the current
node and access the device and directory.
The Requestor Login fields define the user name and password if the service package requires
requestors to be authenticated.
The Diagnostics section allows us to configure remote logging for this service file rule. For more
information on remote logging see the help topic called How to install and use remote logging.
Select Blocked in the Startup Status to prevent this listener from being started by any means. If cleared,
this listener starts with system startup, or can be started using the System Management Application.
The Process tab contains runtime configuration for this file listener.
The Polling Interval field defines the number of seconds this file listener waits before it checks for newly
arrived files.
Select Ignore duplicate file names? to prevent processing of a second or later file that has the same
name as a previously processed file. If a match is found, the listener bypasses normal processing, copies
the file to the work directory, and saves it with .dub as the file extension.
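The duplicate check amounts to remembering previously seen names and routing repeats aside. A sketch of that behaviour (the routing values are illustrative):

```python
seen = set()

def route(filename):
    """Sketch of 'Ignore duplicate file names?': repeats are set aside."""
    if filename in seen:
        return filename + ".dub"  # set aside instead of reprocessed
    seen.add(filename)
    return "process"

print(route("suppliers.CSV"), route("suppliers.CSV"))
```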
Selecting Lock temporary file names? causes each file listener to lock the temporary files it creates to
avoid name collisions when multiple copies of the listener may be running. Select this only when
necessary as locking adds processing time that can affect performance.
If Generate report file? is selected the listener creates a report file in the source directory using the
content of the Source Property in the page specified in the Source Page Name. The name of the file is
the original file name suffixed with the Target File Extension defined.
Select Persist Log-Service-File instances? to save instances of the Log-Service-File class, which
record the processing of a listener, in the PegaRULES database for reporting and debugging.
The Idle Processing section lets us define an activity to be called after the processing of a file is
complete.
The File Operation field lets us decide what we want to happen to the file if it is successfully processed.
It can either be kept or deleted. Choosing keep moves it to the completed folder.
The Error tab defines what processing occurs at runtime when requestors based on this listener
encounter errors.
Some errors are technical and related to the infrastructure and hence might be temporary. It might be
worth attempting to recover from such errors. If we select Attempt recovery? the system leaves the
files containing errors in the Work folder for recovery purposes.
Max Recovery Attempts defines the maximum number of times the system attempts recovery.
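Bounded recovery is a standard retry loop: re-run the work until it succeeds or the attempt limit is reached. A sketch, with an illustrative flaky task standing in for a transient infrastructure failure:

```python
def run_with_recovery(task, max_attempts):
    """Retry `task` up to max_attempts times before giving up (sketch)."""
    for attempt in range(1, max_attempts + 1):
        try:
            return task()
        except IOError:
            if attempt == max_attempts:
                raise

attempts = []
def flaky_read():
    attempts.append(1)
    if len(attempts) < 3:
        raise IOError("transient infrastructure error")
    return "file contents"

print(run_with_recovery(flaky_read, max_attempts=5), len(attempts))
```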
In the Cleanup section we can define what we want to happen to the file when an error occurs. It can
either be deleted or renamed. If you choose to have it renamed the file extension is renamed to the
specified value. The file name itself remains the same.
Let's select the node on which we want to start or stop the file listener and then select Listener
Management. Select the file listener in the drop-down.
Click Start to start the listener. We get a message that tells us that it was successfully started and we can
now see it in the list of running listeners.
Select the listener and click Stop to stop it.
For our initial testing, let's select Use current requestor context to test the service activity and mapping
rules.
We have the option of entering the file content in a text area or upload a file.
Enter valid file content, and click execute. We see that the overall result for our first test was successful.
When using the current requestor context some processing steps are not attempted:
Initialize Requestor
Perform Authentication
If the file service requires authentication, we can provide the authentication details when using Run in the
Actions menu, or provide the pyServiceUser and pyServicePassword parameters and select
authentication in the parse segments.
In order to fully test our service, we need to place a file in the configured directory.
Conclusion
We can use file services to read in and process the data in files. The data files may be exported from
another system, or created by users as text files in a wide variety of formats.
A file listener monitors the file directory and calls the file service when files arrive. The file service uses a
parse rule (XML, structured, or delimited) to open and read the file, evaluate each input record, divide the
records into fields, and then write the fields to the clipboard. The service activity can then process the
data.
Now, we can use the Service Wizard to create a file service. We understand the options available in the
wizard and the effect they have on the records created. We also understand how to use the additional
configuration options in the records created so that we can fine tune the configuration.
A Database Table instance maps classes and class groups to relational database tables or views in a
database. Before creating a Database Table instance it is important to confirm that the table or view is
present in the database schema.
It is not necessary to create a Database Table instance for every concrete class. At runtime, the system
uses pattern inheritance and class groups to locate the appropriate Database Table instance for an
object. If none is found, it uses the table pr_other as default.
However, each class representing a table in an external database must have a Database Table instance.
The External Mapping tab on the class form is primarily used for external classes. Typically, this tab is
completed by the Connector and Metadata Wizard or the External Database Table Class Mapping
gadget.
Each external class has an associated database table instance and can therefore not be part of a class
group.
The Column Name is the name of a column in the external table. The Property Name is the name of the
property in the external class that represents the table column named in the Column Name field.
Declare expression, validate, and trigger rules do operate on Clipboard property values for external
classes.
The external class generated can be used together with a data page and the lookup to fetch an instance
or with a report definition to fetch a list of instances. Alternatively, it can also be used in an activity with
the Obj-methods to open, save, and remove rows from the table.
Use the Obj-Refresh-and-Lock method if the page is already available on the clipboard and we're not
sure whether it is current and a lock is held. If the object is already locked, Obj-Refresh-and-Lock has no
effect. However, if the object is not locked, a lock is acquired, and if the page is stale it is replaced with
the most current version.
Use Obj-Save to persist the instance in the database.
Obj-Save doesn't cause the object to be written immediately to the database unless the WriteNow
parameter is selected. Every Obj-Save without the WriteNow parameter checked becomes a deferred
save. A deferred save adds a deferred operation to an internal list of operations to be performed on the
next commit. The internal list is also known as the deferred operations list, or deferred list.
Until the next commit has occurred, either explicit or automatic, changes are not reflected in the
database. Automatic commits are performed, for example, when a work object is updated, resolved, or
reopened after processing for a flow action is completed.
We can make an explicit commit in an activity using the Commit method as shown above.
There is not necessarily a 1-to-1 relationship between deferred saves and the list of deferred operations.
When we have multiple Obj-Save methods for the same clipboard page, there is just one operation in the
deferred list, and that operation tracks all changes. So, there is one operation in the deferred list for each
clipboard page on which Obj-Save has been called, no matter how many times Obj-Save has been called
on the page.
When Commit is called and any of the operations fail, a rollback is performed, ensuring data integrity.
Consequently, it is not recommended to call the Commit method in an activity that is executed within a
flow, since a failure may occur after the first Commit. In that case, only the operations written to the
deferred list since that Commit can be rolled back, compromising data integrity.
Another potential problem involves using the Obj-Save method with the WriteNow parameter checked.
In this activity, for example, the first Property-Set method at step 2 sets the pyNote property to "Ready for
Review". The Obj-Save then adds this to the internal list of deferred saves. Following this, at step 4,
another Property-Set method sets the same property, pyNote, to "Review Completed".
The next step is an Obj-Save, now with WriteNow set to true. This causes a write to the database with
pyNote set to the new value. A Commit follows. The result is that the Commit method causes all of the
deferred saves to also be submitted, which now resets the value of pyNote to the initial value, "Ready for
Review".
So avoid using the WriteNow parameter unless it is absolutely necessary.
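The deferred-list behavior and the WriteNow pitfall just described can be illustrated with a small model. This is a sketch for reasoning only, not Pega code; the Database class, page name, and property values are made up to mirror the example:

```python
# Illustrative model (not the Pega API) of deferred saves: one pending
# operation per clipboard page, captured at Obj-Save time and flushed
# on Commit. WriteNow bypasses the deferred list entirely.

class Database:
    def __init__(self):
        self.rows = {}       # committed state, keyed by instance name
        self.deferred = {}   # at most one pending operation per page

    def obj_save(self, name, page, write_now=False):
        if write_now:
            self.rows[name] = dict(page)      # written immediately
        else:
            self.deferred[name] = dict(page)  # replaces any earlier entry

    def commit(self):
        self.rows.update(self.deferred)  # deferred saves submitted now
        self.deferred.clear()

db = Database()
page = {"pyNote": "Ready for Review"}
db.obj_save("MyCase", page)                  # deferred save
page["pyNote"] = "Review Completed"          # Property-Set
db.obj_save("MyCase", page, write_now=True)  # immediate write
db.commit()                                  # deferred save re-applied
print(db.rows["MyCase"]["pyNote"])           # -> Ready for Review
```

Tracing the model shows the committed value reverting to "Ready for Review", which is exactly why WriteNow should be avoided unless absolutely necessary.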
Use the Obj-Delete or Obj-Delete-By-Handle methods to remove an instance from the database.
There are keywords and notations to describe mapping to the external database. This lesson only covers
parts of the syntax; see the "Connect SQL form: Data mapping" topic in the Developer Help for a
complete overview.
The Open tab is used to retrieve a single row in the database and copy its column values as properties
onto the clipboard page.
Let's fetch a city and region for a postal code. We can map the columns in the select statement to
properties on the clipboard page using the as keyword.
It is a best practice not to hardcode table names in the SQL statement. Instead, we can use the class
keyword, which uses the Database Table instance configured for the class to determine the table name.
In the where clause we can then reference the ID property in the step page.
Use the characters /* and */ to surround comments within the SQL statement.
To include SQL debugging and return status information from the database software, enter a line at the
top of the SQL code defining the name of the page on which to record the messages.
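Putting those pieces together, the Open tab for this example might look roughly like the following. This is a sketch only: the class, table columns, and property names are hypothetical, and the exact brace notation should be verified against the Connect SQL data mapping help topic.

```sql
/* Hypothetical Open tab: fetch city and region for a postal code.
   All names here are illustrative. */
SELECT CITY   AS City,     /* "as" maps this column to the City property  */
       REGION AS Region    /* ...and this one to the Region property      */
FROM   {Class:MyOrg-Data-PostalCode}  /* class keyword resolves the table */
WHERE  POSTAL_CODE = {.PostalCode}    /* property reference on step page  */
```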
Use the Delete tab for DELETE, TRUNCATE, or DROP statements and the Save tab for INSERT,
UPDATE, or CREATE statements. The Browse tab is used for queries that return more than one result.
Here we are searching on the city. The Asis keyword is used to prevent the system from placing spaces
or quotes around the value, which it does for standard property references. This way we can use wildcard
characters.
In addition to SELECT it is also possible to enter EXECUTE or CALL statements for stored procedures in
the Browse tab.
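A Browse tab query using Asis might be sketched as follows, again with hypothetical names, and with the notation to be checked against the help topic. Because Asis inserts the value verbatim, a parameter value such as '%ville%' keeps its wildcards.

```sql
/* Hypothetical Browse tab query returning multiple rows. */
SELECT POSTAL_CODE AS PostalCode,
       CITY        AS City
FROM   {Class:MyOrg-Data-PostalCode}
WHERE  CITY LIKE {Asis:Param.CityPattern}  /* value inserted as-is */
```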
Connect SQL rules can be invoked from a data page or, alternatively, from an activity using the RDB-
methods (RDB-Open, RDB-Save, RDB-Delete, and RDB-List).
Conclusion
In this lesson, we explained features and tools available to accelerate the integration with external
databases.
Now, we understand the options for interacting with an external database and when which approach is
appropriate. We also understand the characteristics of external classes and know how to configure
Connect SQL rules.
Standard Agents
If the user's credentials pass, the activity is responsible for creating or updating a Data-Admin-Operator-ID
instance to represent the user. Even though the operator ID is not used for authentication, it is still
required, as it is used by PRPC for work routing, reporting, and other functionality.
We may need to customize this activity to add additional business logic, such as mapping the LDAP
attributes to those known by our application.
By default, the operator is created using the model operator defined on the organization unit as a
template. To use this approach, we must map the appropriate org unit fields.
The timeout activity is used to display the authentication challenge screen and re-authenticate the user.
The sample activity can often be used with little modification.
Again, the authentication wizard handles a lot of this for us. Let's look at an example.
Use Authentication Accelerator to Implement LDAP Authentication
To get started, let's review some information about the LDAP system to which we are going to connect.
We need the LDAP server URL for PRPC to connect to.
We also need the Bind Distinguished Name and corresponding password. This is the account we initially
connect to LDAP with, not the account of the user being authenticated.
The LDAP administrator also needs to provide the search context, which is the location in the directory to
search for this user as well as the Search Filter used to find the user.
Finally, it is good to have the ID of a test account that is representative of the users of the application.
This test account's attributes are used by the wizard in the data mapping step. It is also important to
understand the attributes of each entry and how they map to our application.
Here we see an example entry for a test user and their attributes.
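The values gathered above fit a standard bind-search-rebind flow. The sketch below illustrates that flow in Python, outside of PRPC; the connection object, the filter template "(uid={0})", and the account names are all hypothetical:

```python
# Illustrative bind-search-rebind LDAP flow; not PRPC code.

def build_search_filter(template, user_id):
    """Substitute the login ID into the configured search filter."""
    return template.replace("{0}", user_id)

def authenticate(conn, bind_dn, bind_password,
                 search_context, filter_template,
                 user_id, user_password):
    # 1. Bind as the service account (the Bind Distinguished Name),
    #    not as the user being authenticated.
    if not conn.bind(bind_dn, bind_password):
        raise RuntimeError("service bind failed")
    # 2. Search for the user's entry under the search context.
    entries = conn.search(search_context,
                          build_search_filter(filter_template, user_id))
    if not entries:
        return None                   # user not in the directory
    # 3. Re-bind as the user's own DN to verify the supplied password.
    if not conn.bind(entries[0].dn, user_password):
        return None                   # valid user, invalid password
    return entries[0].attributes      # drives operator ID creation/mapping

print(build_search_filter("(uid={0})", "test.user1@les.com"))
# -> (uid=test.user1@les.com)
```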
Now we are ready to run the wizard.
We can start the wizard by clicking the Pega button and selecting Integration > Tools > Authentication
wizard.
The Authentication wizard creates an authentication service already configured to one of three predefined
servlet definitions. This eliminates the need for us to create a new servlet or modify web.xml.
Next we select the LDAP server type. For anything but the WebSphere and WebLogic built-in servers,
the Sun option is a safe choice. This sets the context factory for us.
Next we set the directory URL which was provided.
Since we are not using SSL we can skip the trust store.
Next we complete the bind name and password, also values that would be provided by our LDAP
administrator.
The search context defines where we look in our sample LDAP directory. The search filter is UID, as that
is the field that holds the email address with which we are authenticating.
Finally, we enter the test user ID which is used to help us in the mapping step.
The mapping screen has two columns: the attributes found for our test user, and the PRPC properties (on
the operator ID) to map to.
The wizard initially shows the org, division, and unit properties, as these properties must be mapped
when using the default activities. This is because the org unit is where the model operator is defined.
We can select the attributes that map to our properties. If these properties don't map exactly, we can
perform additional mapping by customizing the authentication activity.
We can also add additional mappings to set things like the user name and user identifier. These
mappings overwrite the values in the model operator.
We can now click Next and fill in any of the additional descriptions we want to set and then click Finish to
generate our authentication service.
On the confirmation screen we can see the name of the AuthService that was generated, WebLDAP2.
We can also see the URL for users who are going to use this authentication service.
Note that running the wizard doesn't disable other authentication services or servlets. It is possible to
have LDAP and PRBasic authentication running at the same time. If we wish to disable PRBasic
authentication, we can do so via web.xml.
We can click the Open button to see the generated authentication service.
Here we can see the values we entered as well as the names of the default authentication and timeout
activities being used.
We can also see the mapping values we entered.
Now let's see our LDAP authentication in action.
First we must connect to our LDAP URL.
We can see that this URL is using our PRWebLDAP2 servlet rather than the standard PRServlet. This
causes PRPC to use the WebLDAP2 authentication service.
Let's see what happens if we try to log in as a user who is not in our directory.
We can see that an appropriate error message is shown.
Now we'll try a valid user with an invalid password.
Again we see the appropriate error message.
Now we can enter the valid user ID and password.
At this point PRPC has validated the user ID against LDAP and automatically created the operator ID
instance using the model operator.
By clicking the profile we can see that this user was assigned to the sales orgunit based on their LDAP
attributes.
Now let's log out and go back to our standard PRServlet URL.
From our PRServlet URL we can try to log in as our newly created operator.
PRPC prevents users created via external authentication from logging in via PRBasic authentication.
This is a good thing, as we can still keep PRBasic authentication for developers or administrators.
Now we'll log in as an architect to see what was created.
First we'll look at our operator list. Here we can see that we now have a user, test.user1@les.com.
On the operator form we can see that some fields, for example full name, were mapped from LDAP.
Other fields, such as the access group, did not come from LDAP but from the model operator.
The model operator comes from the organization unit that was mapped from LDAP.
This test operator was part of the sales unit. We can see that the sales org unit has a model operator
defined. This model operator's data is merged with the data from LDAP to create the new operator ID.
The data from LDAP takes precedence over the model operator. So in our case the full name was
replaced with the LDAP data, but the access group was not, since it was not mapped in our authentication
service.
Now that we've seen the standard behavior of the authentication activity, it's important to understand that
this activity is a sample and is often customized in some way.
Some common customizations include changing the messages shown to the user when authentication
fails, perhaps even localizing the messages.
Another common customization is to add mapping logic that uses decision rules to determine the
proper model operator or access group based on a number of different LDAP attributes.
Yet another is to add logic that uses LDAP attributes to explicitly deny access to certain users.
Conclusion
Using LDAP as an authentication service for PRPC is straightforward and can be accomplished quickly
by using the accelerator to get started. This speed does not come at the cost of flexibility, as the
authentication activities are designed to be overridden and extended to meet our application's
requirements.
Now, we understand the underlying custom authentication model and how it can be used with LDAP. We
also know how to use the Authentication Wizard to implement LDAP authentication.
Each access group also lists a series of roles. A role lists a set of allowed capabilities, and is an instance
of Rule-Access-Role-Name. Unlike the access groups listed on an operator ID, all of the roles listed on an
access group are in effect at the same time. The ability to divide capabilities into multiple roles improves
reusability: an access group for managers can include both a User role and a Manager role, thus
granting the capabilities assigned to each role to managers, without copying the user role capabilities to
the manager role. By combining the capabilities of each role, the access group describes a composite
role for its constituent users.
The access controls and permissions listed in a role are managed at a class level by the Access of Role
to Object rule, commonly referred to as an ARO. Each ARO is an instance of the Rule-Access-Role-Obj
rule type, and applies to a specific class. The ARO defines the capabilities that apply to that class and the
instances of that class. For example, an ARO might allow a specific role to open cases, but not to run
reports.
Each ARO can also include a set of privileges, which allows an application to permit or deny access to
specific rules and tools. Think of a privilege as a token. By itself, the token doesn't do anything, but other
rules can require that a user possess the token to allow an action. For example, the flow action Approve
may require a user to be granted the privilege CanApprove. So, if a user has been granted the
CanApprove privilege, they can access the Approve flow action rule to perform an approval, while a
user who lacks the privilege cannot perform an approval.
Privileges can be added to many rule types, and can be checked programmatically by utility functions.
Privileges can also be conditionally granted by system level or through the use of Access When rules.
By combining all of these types of rules, an application developer can model complex application
requirements to create a powerful authorization model.
Each action can be set to either full availability (Full Access), no availability (No Access), or conditional
availability (Conditional Access).
Open, Modify, and Delete indicate whether an operator can open, edit, and delete instances of the case
type. Perform indicates if the operator can perform other operators assignments for the case type. That
is, if No Access is specified, the worklist for each operator in that role contains only his or her own
assignments. Run Report indicates if the role can run reports defined for the case type. View History
indicates if users can view the history for the case. Most actions can be governed for each case type, but
View History is specified at the class group level, so the setting is the same for all case types in a class group.
In this example, the PurchaseFW:Managers access group consists of three roles: Purchase Requestor,
Purchase Manager, and PegaRULES:Feedback. For each operation, the Access Manager displays the
access rights for each role. In some cases, the permissions for an operation differ between the roles. For
example, the Purchase Requestor role has permission to open a Program Fund case, while the
PegaRULES:Feedback role does not. When this occurs, the system uses the most permissive role in the
access group to determine the level of authorization for the operation. Since the Purchase Requestor
role has full access, this permission governs the operation for the access group, overriding the no
access setting for the PegaRULES:Feedback role.
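The "most permissive role wins" combination can be sketched in a few lines. The 0-5 levels follow the ARO permission-level convention (0 = no access, 5 = full access); the role names are taken from the example above:

```python
# Sketch of how an access group combines its roles for one operation:
# the highest (most permissive) level among the roles wins.
NO_ACCESS, FULL_ACCESS = 0, 5

def effective_level(role_levels):
    """Access level the access group grants for an operation."""
    return max(role_levels)

# Purchase Requestor grants full access to open a Program Fund case;
# PegaRULES:Feedback grants none. The access group still permits it.
print(effective_level([FULL_ACCESS, NO_ACCESS]))  # -> 5
```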
Conditional access (the yellow circle) generally indicates the use of an Access When rule to determine
the circumstances in which a role is to be granted access. In the following example, an Access When rule
named RequestedByOrForMe has been added to the Delete operation for the Purchase Request case
type, to ensure that requestors can only delete their own purchase requests, but not requests for other
operators.
The Process Flows and Flow Actions sections indicate the flows and flow actions that an operator can
access and therefore perform. By default, the Access Manager only lists the starting flow for the case
type, and the flow actions referenced by that flow. To configure permissions for flows and flow actions
(along with any other rule type that allows permissions), we can create and apply the permissions
ourselves.
When the yellow circle appears next to the case type, it means that more than one permission level is
applied to the actions listed for that case type. So, if the permission level for Run Reports is set to No
Access, and all other actions are set to Full Access, the indicator applied to the case type is the yellow
circle, to indicate that a mix of permission levels is in effect for the case type.
Also, members of the Purchasing department create purchase orders as part of the process of fulfilling
purchase requests. We want to ensure that members of the Purchasing department can complete
purchase request tasks assigned to their coworkers. A department is modeled as a workgroup, so we can
apply the standard Access When rule pxAssignedToMyWorkGroup to configure conditional access to
the Perform action.
If we look at the AROs, we can see how the Access Manager performed its configuration.
On the Purchase Request ARO for the Purchase Requestor role, Modify instances has been set to a
permission level of 0, which means no access. Also, Delete instances has been configured for
conditional access by adding the Access When rule RequestedByOrForMe.
On the Purchase Orders ARO for the Purchasing role, a Perform privilege was added to the Privileges
tab, and the Access When rule pxAssignedToMyWorkGroup was used to provide access conditionally.
The L: prefix indicates conditional, or limited, access.
Whenever possible, the permissions for one role should not overlap those for other roles. This best
practice simplifies maintenance of the authorization model. Remember that by combining roles, we can
use conditional permissions and override permissions on a different role to customize the resulting
composite model.
Access control for attachments can be configured on the attachment category rule form. The access
control model for attachment category rules uses privilege rules and when rules (rather than access when
rules) to determine whether users can create, edit, view, or delete attachments from a case. In addition,
the Enable attachment-level security option allows an operator to identify one or more workgroups that
can access the attachment.
Then, we want to add the privilege to the purchase requestor role. We can open the
PurchaseFW:PurchaseRequestor ARO for the Purchase Request class. On the Privileges tab, we can
add the privilege we just created, and set the level to match our desired permission level. A level of five
indicates full access, while a level of zero indicates no access. To prevent access for users, we set the
level to 0.
Next, we need to configure the Purchase Manager role to allow conditional access to the flow action.
Remember that the roles are combined at the access group level, and the most permissive authorization
is applied. So far, we have denied access for requestors. Since we haven't applied any privilege for
managers, the Managers access group can only apply the permission setting from the Purchase
Requestors role. To ensure that managers can approve purchase requests other than their own, we must
add the privilege to the manager ARO and configure conditional access with an Access When rule,
prefixing the rule name with an L:. In our example, we can create an Access When rule named
NotRequestedByOrForMe and apply it to the ARO. This rule tests both the Create Operator and
Requested For properties. If neither one matches the current operator, the rule returns a true result and
the system grants access.
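The boolean logic of this Access When rule can be sketched as a plain function; the operator IDs in the example calls are hypothetical:

```python
def not_requested_by_or_for_me(create_operator, requested_for,
                               current_operator):
    # Grant access only when the case was neither created by nor
    # requested for the current operator, so managers cannot approve
    # their own purchase requests.
    return (create_operator != current_operator
            and requested_for != current_operator)

# A manager reviewing someone else's request is granted access:
print(not_requested_by_or_for_me("jsmith", "jsmith", "mmanager"))   # -> True
# ...but not a request created by or for the manager personally:
print(not_requested_by_or_for_me("mmanager", "jsmith", "mmanager")) # -> False
```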
Finally, we can open the flow action and apply the privilege on the Security tab.
This successfully secures access to the approval flow action. When the application calls the flow action,
the system checks the Security tab for any necessary privileges. If a privilege is required, the system
checks the appropriate ARO to determine the level for the privilege, evaluating any Access When rule to
do so. For users with the Purchase Manager role, the more permissive conditional access setting
overrides the permission specified for the Purchase Requestor role (no access).
Conclusion
The Access Manager provides a great overview of who can do what in the application. Consider the
Access Manager a view to your authorization model, rather than a means to construct it. We need to
create access roles and access groups, and define privileges independently of the Access Manager.
Remember that only the application scope is shown. Always review the underlying roles to ensure that
the system meets the security requirements.
Now, we understand the underlying authorization model. We know when and how to use the Access
Manager to configure access to our application. We also understand how to configure privileges for
specific rules.
System Debugging
Migrating an Application
System Debugging
Introduction
Welcome to the lesson on System Debugging. Let's face it, every system is going to have bugs, so it's
imperative to know how to track them down and get rid of them. This lesson focuses on the tools
available for locating bugs and tracking them down to a root cause.
To access the clipboard, click on Clipboard in the tools area at the bottom of the Designer Studio. This
launches the clipboard in a second window.
The clipboard is one of the most useful tools for debugging the system. It allows us to access the entire
page structure for the system, one thread at a time. Typically, the clipboard tool is used when developing
and debugging to examine property values and messages associated with them and to find the
information necessary to reference a property, such as the page names and the property name.
Now that we know how to access it, lets discuss the structure of the Clipboard tool.
In order to switch to a different PRThread, we click the name of the current PRThread in the upper left
corner of the clipboard tool. This shows us a list of other PRThreads we can select. Once we select one,
the clipboard tool refreshes to show us the pages associated with that PRThread.
The next thing to point out is that the clipboard tool is divided into two panels. The left panel displays the
entire clipboard for the currently selected PRThread as a tree. The first pages in the tree are the top-level
pages. In this screen, pyPortal would be a top level page. Selecting any one of these pages refreshes
the right panel, where we can see all Single Value properties, sorted by property name.
Within a top-level page there can be additional embedded pages, lists, or groups, as shown in the tree by
a slight indent. These are labeled with an icon indicating the property mode: Page List, Page Group,
Value List, or Value Group.
Properties of mode Java Object, Java Object List, and Java Object Group are not shown on the clipboard.
When we select any of these pages, the clipboard displays the properties of that page in the right panel.
Note that in the case of a Value List or Value Group, the values of all the elements in that list or group
are shown instead, as seen here.
The clipboard tool arranges the top-level pages into four categories.
User Pages - this contains the top-level pages created by the requestor during normal
processing, sorted alphabetically by page name. For example, an execution of a Page-New
method in an activity results in a new page being added to this category.
Data Pages - this contains all the data pages instantiated in the system. See the lessons on
data pages for information on their usage.
Linked Property Pages - this contains all the pages created by a linked property. See the
lesson on properties for information on their usage.
System Pages - this contains special, reserved pages the system relies on to function. Within
this category, three pages are always present.
pxRequestor - this contains the information about the requestor and HTTP protocol
parameters.
pxProcess - this contains information about the server's Java Virtual Machine and the
operating system.
pxThread - this contains information about the current PRThread.
Additional pages that contain information on the access group, application rule, operator ID,
organization, org division, and org unit may appear as required. Guest users do not have these
pages, as they are created as part of the authentication of a named user.
The menu provides the commonly used options of Edit and Refresh as separate buttons. The rest of the
options are available by clicking on the Actions button.
Edit - allows us to update values on the page. This is useful to prepopulate data during testing
that the system might not currently process. For example, if we're calculating a full name out of
the user's first and last names, we can enter those values here to test our calculation instead of
waiting for the screen to be built that lets the user enter them. Of course, our testing shouldn't
end here. Don't forget to also test once the screen has been created!
Refresh - gets the latest values from the system. Since the clipboard opens in a separate
window, it is possible for the system to continue processing even though the local copy we have
on our desktop does not match the one at the server. As a best practice, if the clipboard is left
open and the system has moved on, we should use the Refresh button to update our view of
the clipboard before examining any values.
Show XML - provides a raw XML view of the entire page. This is useful to see the entire page,
including the embedded pages, on one screen. It also has the added benefit of displaying any of
the properties that start with pz, which are reserved by the system and therefore not shown on
the clipboard.
Execute Activity - allows us to run an activity using this page as its step page. This provides the
same functionality as the Run button available when an activity is open in the Designer Studio.
This toolbar provides us with a search box to locate values on any clipboard page, and a tools menu with
the options Analyze and Collect Details. These are advanced tools used to track approximate sizes of
pages on the clipboard.
Access to the clipboard tool is limited to users who hold one of these access roles:
PegaRULES:SysAdm4
PegaRULES:SysArch4
PegaRULES:ProArch4
any <application name>:Administrator access roles created when we run Application Express
Generic users should not be granted any of these roles if they do not require access to the clipboard.
To access the tracer, we click on the Tracer in the tools area at the bottom of the Designer Studio. This
launches the tracer in a second window.
The tracer displays an area for the results and a toolbar. We're going to take a look at the toolbar first.
Pause/Play - this icon starts or stops the tracer. Pause displays when the tracer is currently
capturing events, and Play is displayed when the tracer is not.
Clear - this icon removes the results visible in the window. This is useful when restarting the
tracer.
Settings - this icon allows us to configure the options for the tracer. We'll review the options a
little later in this lesson.
Breakpoints - this icon allows us to define a breakpoint for the tracer. This is useful when we
want to automatically pause the execution at a predefined point in the process to closely examine
the system.
Watch - similar to breakpoints, this allows us to define when to automatically pause the tracer.
With a watch, we define a pause for when the value of a property changes, instead of at a
predefined point. This is useful to track down where a value gets set if we are unfamiliar with the
process setting that value.
Remote Tracer - this allows us to select other requestors to trace. This is useful for debugging
from one system while logged in as a test user in another system. We'll discuss more about this
option a little later in this lesson.
Save - this saves a copy of the tracer results for analysis in a separate application called Tracer
Viewer. This application will be discussed later in this lesson.
The results displayed in the tracer can be overwhelming, but once we understand how the tracer
displays its data, it becomes quite easy to read through a process. One of the first things to realize is that
the most recent event is at the top of the tracer. So, working from top to bottom, we are working
backwards in time, from the most recent to earlier events. The line numbers also indicate this backwards
nature and serve to remind us of this fact.
Most events in a tracer have both a Begin and an End. These two lines form the box around the event, with the lines in-between representing sub-events that took place during this event. Take a look at the screen above and identify the Activity Begin on line 21 and the Activity End on line 44. So, between lines 21 and 44 we were running the same activity.
When tracing Data Transforms and Activities, we will also see a Step Begin and a Step End for each step
in the data transform or activity. These steps indicate the method being executed, provide a link to the
step page of that step, identify the status of that step, and much more. The help files contain information
about each one of these elements in detail.
Clicking directly on the step page opens a view to the properties on that page, similar to the clipboard.
Clicking directly on the rule name opens the rule instance in the Designer Studio. Otherwise, clicking anywhere else on the row opens a screen that shows more information for that event.
This screen provides a few additional details, such as the full timestamp and the workpool, that are not visible in the single row. The real benefit of this screen, though, is the ability to view the parameter page. Clicking on the name of the parameter page, which is often "unnamed", opens a view similar to the step page, where we can view the parameters and their associated values.
Configuring Tracer
The tracer provides us with many options to control its output. The first thing to note is that the Tracer Settings are broken into several categories. These are all described in detail in the help files, but let's take a high-level look at some of the more commonly used ones.
The events to trace and break conditions category provides us with the ability to select some of the more critical events to trace, such as data transforms and when rules. Unless we need the data, we recommend using the defaults as selected.
The next set of options is the Break Conditions. These cause the tracer to automatically pause if one of them is encountered. Again, unless needed, it is recommended that we use the defaults as selected.
The last set of options can affect the performance of the tracer. The tracer by itself can greatly impact system performance.
Abbreviate Events: this option reduces the performance impact but sacrifices the clipboard detail. When this is selected, the step pages are not available and we cannot view the values associated with their properties.
Expand Java Pages: this is only useful if the system contains any properties of mode Java Page. This option should be turned off if such properties are not in use.
Local Variables: this option provides the means to view local variables, which are not available from the parameter page. It should only be turned on when we specifically need to debug an issue with a local variable.
The next category is event types. This category provides additional, optional events that we can trace, such as Flow, Declare Expression or DB Query. By selecting one of these, an event is written to the tracer anytime the associated rule type is executed. The event type Stream Rules encompasses all UI rules, such as a Harness or a Section.
At the top of the list, we have the option to add additional event types. Since the most common types are already provided, we should not need to use this capability except as directed by our Lead System Architect.
The next big category is the rulesets to trace. This category allows us to eliminate some rulesets from the tracer output to improve its performance. The entire ruleset stack is available to us, but as a general rule we shouldn't need to trace any of the Pega* rulesets. Our primary concern when debugging is our own rules, so by removing these rulesets we can potentially save thousands of lines of results.
By default, the tracer only provides the step page and the parameter page for each event it traces. We have the option of tracking additional named pages when necessary. This capability is very powerful but comes at a cost to performance. When it is enabled, the system requires much more memory to keep a record of each additional page at every step of every event. It should only be used when there are interactions between multiple pages that cannot be traced separately. This option is not available if Abbreviate Events is selected.
The last option is the number of lines visible in the tracer window. By default, the system displays 500 lines. This can be increased (or decreased) as necessary. It's important to note that increasing this value affects the tracer's performance, as it requires a larger amount of memory.
Clicking Save downloads an XML file of all the tracer events captured during this tracer session. Once the file has finished downloading, we can launch the Tracer Viewer tool. By default the tool opens to a blank screen, so we'll need to open the file we downloaded.
Once we've selected a file, the Tracer Viewer processes the file and presents the results in a tree view.
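The tree view the Tracer Viewer builds follows the Begin/End nesting described earlier. As a rough illustration of that idea, the sketch below parses a tiny XML fragment into indented events. The element and attribute names ("Event", "EventType", "RuleName") are assumptions for illustration, not the documented tracer export schema.

```python
# Sketch: reading a saved tracer XML export into (depth, event) tuples.
# The element/attribute names here are illustrative assumptions.
import xml.etree.ElementTree as ET

sample = """<Trace>
  <Event EventType="Activity Begin" RuleName="ProcessOrder">
    <Event EventType="Step Begin" RuleName="ProcessOrder"/>
    <Event EventType="Step End" RuleName="ProcessOrder"/>
  </Event>
  <Event EventType="Activity End" RuleName="ProcessOrder"/>
</Trace>"""

def walk(elem, depth=0):
    """Yield (depth, event type, rule name) for each traced event."""
    for child in elem:
        yield depth, child.get("EventType"), child.get("RuleName")
        yield from walk(child, depth + 1)

events = list(walk(ET.fromstring(sample)))
for depth, etype, rule in events:
    print("  " * depth + f"{etype}: {rule}")
```

Running it prints the events indented by nesting depth, mirroring how the Tracer Viewer presents a trace.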
Note that processing the file may take some time, depending on the number of events traced. More
information about this tool can be found in the same PDN article we used to download the tool.
Within the Designer Studio, the UI Inspector is only useful for identifying parts of the studio itself, or when previewing a UI rule. The real power of the UI Inspector comes from running it in the user portal.
To run the UI Inspector in a user portal, we need to launch the user's portal from the Launch button in the Designer Studio. After the user portal opens, a floating button allows us to toggle the UI Inspector on or off.
While the UI Inspector is active, as our mouse hovers over any user interface element, the selected element is highlighted in red and an information panel is displayed for that element.
The left panel displays the type and name of this element and the name of its immediate container. The right panel displays the entire UI structure, showing all the parent containers for this element.
In this case, we can see we're inspecting a section named pxDeadlineTime that is an element of the section pyUserWorkList. We can also see that this section is contained in another section, which is in turn nested in several more sections, finally contained in a harness.
Hovering over any of these parent sections outlines that element with a blue dashed line.
The standard access roles are listed below; all provide access to this privilege:
PegaRULES:SysAdm4
PegaRULES:SysArch4
any <application name>:Administrator access roles created when we run Application Express
Generic users should not be granted any of these roles if they do not require access to the UI Inspector.
To access the logs, we select Designer Studio > System > Tools > Logs and Log Files. Note that log files can also be obtained from the System Management Application.
Opening this displays a new window listing all the available logs.
There are several logs available to us, but the important one for debugging is the one called PEGA.
The other logs apply more to performance and are addressed in that lesson.
Each of the logs gives us the option to download a zip or the raw text of the log. Downloaded files can be
opened in any text editor. Clicking on the name of the file directly opens the log.
By default, the log is filtered to entries that match our current operator ID. To view other entries, expand
the Options and change or remove the filter by option.
Remote Logging
Via the System Management Application, we have the ability to perform remote logging. Remote logging establishes a connection between the server and a standalone application running on our desktop. To run the remote logger, we first access the SMA and select Remote Logging under the Logging and Tracing category.
From this page, we download and extract the log4j socket server, available at the bottom of the page. Once it has been extracted and run, we're presented with the remote logger user interface.
Once this is running, we can establish the connection by returning to the SMA and specifying the host as our system, the port as shown in the first line of the logger (the default is 8887), and any filters. The filters work the same way as in the Log File tool in the Designer Studio.
A really nice feature of the remote logger is that it is updated in near real-time, without us having to repeatedly download the log files. This allows us to watch the log files as we're stepping through a process and identify when certain exceptions or events occur. Remember that this is only near real-time, as the log4j appenders work in the background, so it's important not to jump through our steps too fast, to allow the remote logger to keep up with us.
If you want to see a sample log file, please download the log file from the Stack Trace Sample Log link
in the related content section.
Let's take a look at this sample stack trace. In this trace, the first two lines tell us about the exception. Breaking these lines down, we can identify the following information:
ERROR: this is the log level. Other log levels, as defined in the help, might be displayed instead.
This is then followed by a series of lines starting with "at". Each of these lines walks us backwards through the calling process. The very first "at" is the line of code that was executing when the exception occurred. Note that even though this is the point where the exception occurred, it may not be the point where the exception was caused. To determine that, let's walk through the whole trace.
1. The first line indicates it was executing doActivity.
2. doActivity was called from the CallActivity method.
3. This was called by invoke.
4. Which was called by resolveAndinvokeFunctionViaReflection.
executesla_3f878d06811a288900aafea845f0efb1.step19_circum0
processevent
BatchRequestorTask.run
AgentQueue.run
Putting these items together with our knowledge of agents, we can determine that the SLA agent, which runs on a batch requestor, was processing events from the agent queue; one of those events was executing its SLA, and the system threw the exception at step19. We could open the executesla activity and look at step 19, but our knowledge of SLAs tells us that since this SLA was calling an activity, the "Activity name not specified" error was most likely caused by an undefined exception path.
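The walk-through above can be done mechanically: pull out the "at" frames, which read from the most recent call backwards to the outermost caller. The frame strings below are simplified stand-ins for the sample trace, not Pega's exact output.

```python
# Sketch: extracting the call chain from a Java-style stack trace.
# The frame strings are simplified stand-ins for the sample trace.
trace = """\
ERROR ... com.pega.pegarules.pub.PRRuntimeException: Activity name not specified
\tat com.pegarules.generated.executesla_3f87.step19_circum0(...)
\tat com.pega.pegarules.session.internal.mgmt.Executable.doActivity(...)
\tat com.pega.pegarules.session.internal.async.BatchRequestorTask.run(...)"""

frames = [line.strip()[3:].split("(")[0]   # drop "at " and the "(...)" suffix
          for line in trace.splitlines()
          if line.strip().startswith("at ")]

# frames[0] is where the exception surfaced; the last frame is the outermost caller.
print(frames)
```

Reading the list front to back reproduces the walk we did by hand: the exception surfaced in step19 of the generated SLA activity, called ultimately from the batch requestor task.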
This was a relatively simple stack trace, but by applying this same logic to any exception we encounter, we should be able to determine the root cause and subsequently debug any issue.
Conclusion
So what did we all learn? First off, it's important to note that debugging is essential. We're always going to encounter bugs as a byproduct of development, but by using the tools available to us we can stay on top of them and repair them early during development.
Launching PAL
There are a few ways to launch PAL. We can launch it from the Performance option in the toolbar of the Designer Studio.
Or we can open PAL on a landing page. To do this, we select PAL from the Performance sub-menu of the System menu.
The last approach is to access the PAL readings through the System Management Application (SMA).
This can be done from the Requestor Management screen by selecting a requestor and clicking
Performance Details.
It is important to note that viewing the results through the SMA provides only a current summary and does not allow us to add incremental readings.
Taking Readings
No matter which approach we use to launch PAL, the way we gather performance statistics is the same. For the sake of this demonstration, I'll be using the PAL landing page.
The first step in taking measurements is to click Reset Data. Since the system is constantly monitoring performance, clicking this button eliminates any previously recorded entries from our results.
Next, we need to take some readings. We have two options for adding a reading: with or without the clipboard size. There is no difference between the two other than the addition of the clipboard size, which takes a bit of extra time to calculate. When adding a reading, it's a good practice to have defined points that identify what occurred during that reading. I prefer to use one reading per flow action or screen render, depending on what process I'm measuring.
Let's go add a couple of readings now so that we have some details to analyze.
Here's our performance analyzer with readings added. Note that each reading added is shown as a delta. This indicates it's the change from the previous reading. At the top of the list, above the column labels, is a reading shown as Full. The Full reading is the total sum of all the statistics since the last time the data was reset. Note that this can differ from the sum of all the individual deltas, as it is the reading from reset to current.
In the readings shown, we can see the top delta has a reading of 3.42s for RA Elapsed. RA Elapsed represents the time spent in rule assembly. These results can skew performance readings, as rule assembly, also known as first use assembly or FUA, is very expensive but only occurs once. This is evidenced by the results we see here. The total elapsed time was 5.95s, of which 3.42s was spent in rule assembly. If we didn't have the additional 3.42s, our total time would be less than half the measured number. Rule assembly also affects the other readings, such as the total rules executed, the reads from the database and various I/O counts.
To obtain good results, we should run through the process once to ensure all rules have been assembled. I did not do that here, in order to demonstrate the impact this has on performance readings.
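The arithmetic behind that claim is worth making explicit. Using the two figures from the sample reading:

```python
# Checking the claim above: removing first-use assembly (FUA) time from the
# total shows how much it skews a first-run PAL reading.
total_elapsed = 5.95   # seconds, from the sample reading
ra_elapsed = 3.42      # rule assembly time within that reading

without_fua = round(total_elapsed - ra_elapsed, 2)
print(without_fua)                      # 2.53
print(without_fua < total_elapsed / 2)  # True: less than half the measured time
```

This is why a warm-up run matters: on a second pass the 3.42s of assembly simply would not be there.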
Clicking on either a delta or the full reading shows more details about that reading. There are many different results available to us for analyzing performance. There's no magic number when it comes to these results. In one situation a result of 10 minutes may be acceptable, whereas in another anything over 100 milliseconds is considered too slow. It is up to us to work with the LSA and the business to determine what is an acceptable result for each step of the process.
DB Trace
The DB Trace tool is useful for tuning the application if there are any database performance issues. DB Trace should be run when PAL readings indicate performance issues in database operations. It can trace all the SQL operations, such as queries or commits, performed in PRPC.
Like the performance analyzer, DB Trace is available from the Performance button on the Designer Studio toolbar, the System Management Application and the Performance landing pages. We'll do this demonstration using the Performance landing pages.
This opens a window that lists all the possible events to trace. By default, all of them are selected. If an event does not apply to a situation, it should be removed from the list to streamline the results. We also have the option to generate a stack trace. Generating the stack trace is an expensive process and should only be used when required. In most situations, we'll want to accept these defaults as they're shown.
Running DB Trace
To run the tracer, we just need to click the green play button. After performing our steps, we can click the red stop button to complete the trace.
After stopping the tool, the table is updated with the results for all the PRThreads it traced. We'll want to identify the results for the thread corresponding to the process where we performed our work; in this case, the first item, PS1. When in doubt, look for the largest size, as it is most likely the one we want.
The results are a tab-delimited file, so they can be opened in any spreadsheet program, such as Excel, to review the DB Trace findings.
Performance Profiler
The third tool in our Performance landing pages, the Performance Profiler, is like a hybrid of the tracer and PAL.
We run the Performance Profiler the same way we run DB Trace, by clicking on the green play icon,
performing some actions and then clicking the red stop.
Just like DB Trace, we want the record that matches the one where we did work; again, this is most likely the largest one. The difference here is in the output: unlike DB Trace, which stores its results as a tab-delimited TXT file, the data associated with the profiler is stored as a simple CSV.
The profiler can show us the CPU time and wall time across a series of transactions. It is useful when
determining which part of the process might be having performance issues, or identifying the particular
step of a Data Transform that might have a performance issue. This should be run in conjunction with
PAL to narrow down both the specific step (profiler) and the cause (PAL).
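As a sketch of how that CSV might be aggregated, the snippet below totals CPU time per step to find the step worth pairing with PAL readings. The column names are assumptions for illustration; a real export's header row defines the actual fields.

```python
# Sketch: totalling CPU time per step from a profiler CSV export.
# Column names ("Rule", "Step", "CPU(ms)") are illustrative assumptions.
import csv, io
from collections import defaultdict

sample = """Rule,Step,CPU(ms)
MyDataTransform,1,2
MyDataTransform,2,110
MyDataTransform,1,3
"""

cpu_by_step = defaultdict(float)
for row in csv.DictReader(io.StringIO(sample)):
    cpu_by_step[(row["Rule"], row["Step"])] += float(row["CPU(ms)"])

hot_step = max(cpu_by_step, key=cpu_by_step.get)
print(hot_step, cpu_by_step[hot_step])  # ('MyDataTransform', '2') 110.0
```

Here step 2 dominates the CPU time, so that is the step to investigate further with PAL.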
Once PLA is launched, we need to upload the Pega, Alert and GC log files from the system. PLA formats these logs and provides us with a summary table that separates out each day. We have the option to export a single day to Excel or, using the buttons above, export the entire result set.
The copy in Excel provides us with summaries for each of the alerts, exceptions and garbage collections. Using these summaries we can identify the critical events that cause poor performance and address them.
Log Usage
Unlike the PAL tool, which shows data for one requestor only, the Log Usage report shows overall system activity. Log Usage reports are normally accessed in the SMA.
Based on the number of interactions, Log Usage shows various Performance Analyzer counts of system usage. This enables the system administrator to see what activities are going on from a system-level perspective.
For example, if a user complains that the system is slow at 1 p.m., the system administrator can choose
Start and Stop parameters to highlight that time and see whether there was a sudden spike in the number
of users, or in activities run by existing users, or some other occurrence that might cause a system
slowdown.
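The 1 p.m. scenario above amounts to filtering usage snapshots to a time window and looking for a spike. The record shape in this sketch is hypothetical; the real report exposes its own columns.

```python
# Sketch of the 1 p.m. scenario: filter usage snapshots to a Start/Stop
# window and look for a spike. The (timestamp, interactions) shape is
# a hypothetical stand-in for the report's actual columns.
from datetime import datetime, time

snapshots = [
    ("2015-02-11 12:00", 40),   # (timestamp, interactions in the interval)
    ("2015-02-11 13:00", 95),
    ("2015-02-11 14:00", 42),
]

start, stop = time(12, 30), time(13, 30)
window = [(ts, n) for ts, n in snapshots
          if start <= datetime.strptime(ts, "%Y-%m-%d %H:%M").time() <= stop]
print(window)  # [('2015-02-11 13:00', 95)] -- the 1 p.m. spike
```

The report's Start and Stop parameters do this filtering for us; the point is simply that a narrow window makes the spike obvious.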
Produces a scorecard.
Sends emails.
AES aggregates the most serious alerts and exceptions into work items for use in a workflow and assigns them for diagnosis and resolution. Each work item can be transferred to a different operator, has SLAs defined for it, and provides reports for tracking the work items.
AES is covered in detail in another course.
Compliance Score
The compliance score is a new feature in Pega 7 and is also available to be installed on previous PRPC versions. The compliance score is a weighted measure that provides transparency into how well a system is adhering to the guardrails.
To access the compliance score, we select it from the Guardrails sub-menu of the Application menu. This
provides us with the compliance score landing page.
The first section of the landing page allows us to apply filters to the results, so that we can narrow down possible trending situations. In most cases we shouldn't need to use these and can just leave the filters wide open to see the score across all possible rules.
Similarly, the last section of the landing page contains a view of how alerts have trended over the last four weeks. This can be helpful in identifying the impact of correcting warnings and how the system is trending overall. In this section, we should see the number of alerts decrease as the warnings get corrected.
The most important part of this landing page is the compliance score (identified with a 1). The compliance score is a calculation of the total number of rules in the system with warnings, weighted by their severity and justification state, as compared to the overall total number of rules. This score is based on all the rules available to this application, except for properties and any rules in Pega-provided rulesets. This is because the number of properties or Pega-provided rules, in comparison to the rest of the rules in the system, can greatly skew the percentages. By eliminating these rules, the system compares only rules that have warnings to rules that could have had warnings.
The row of results below the compliance score displays the data used to generate the compliance score (identified with a 2). This represents the total number of rules in the system as they compare to the subset of them that have warnings. Here we can see the percentage of rules without warnings, the total number of rules used to produce that percentage, and two sets of numbers that represent how many rules have warnings and how many have unjustified warnings. These latter two are links to reports that list the rules, providing us an easy way to access these rules and repair the cause of their warnings.
We won't go into the exact formula used, but it is important to point out that both justified and unjustified rules affect this score. So the only way to ever get a full 100 is to have no rules with warnings. For the rules that do have warnings, our best option is to fix the rule and remove the warning. Otherwise we should provide a good justification for why this rule requires breaking the guardrails. This provides future developers with information about the reason the rule was written outside the guardrails, as well as having the added benefit of improving our compliance score.
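To make the idea of a severity- and justification-weighted score concrete, here is an illustrative calculation. This is NOT Pega's actual compliance formula; the weights and the justification discount are assumptions invented for the example.

```python
# An illustrative weighted score in the spirit described above -- NOT Pega's
# actual compliance formula. Weights and the justification discount are
# assumptions for the sake of the example.
rules_total = 200          # rules considered (properties/Pega rulesets excluded)
warnings = [
    # (severity_weight, justified)
    (3, False),            # severe, unjustified
    (2, True),             # moderate, justified
    (2, True),             # moderate, justified
]

JUSTIFIED_DISCOUNT = 0.5   # assumption: a justification halves the penalty
penalty = sum(w * (JUSTIFIED_DISCOUNT if j else 1.0) for w, j in warnings)
score = 100 - 100 * penalty / rules_total
print(score)  # 97.5
```

Note how the justified warnings still cost points: any warning at all, justified or not, keeps the score below 100, which matches the behavior described above.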
If the system does not already have this new feature available, it can be obtained from the Pega
Exchange area of the PDN.
Guardrail Reports
Aside from the compliance score, the system also provides a series of reports that can be used to narrow down rules that have warnings and are therefore outside the guardrails. The reports are accessed from the same landing page as the other guardrail features. The help files provide details on each of these reports, but let's take a high-level look at their uses.
Summary
The Summary report provides an at-a-glance view of the state of the system. With this report we can see the total number of rules with warnings as well as the total number of warnings. In this example, we can see there are 32 rules with warnings, but 37 warnings total. This means some of the rules have more than one warning. This report uses a color-coding system to indicate the severity of the warnings: red indicates the most severe, orange indicates moderate warnings and yellow indicates cautionary ones. At an absolute minimum we'll want to correct the severe warnings, but we should review all the warnings in the system to see how many of the moderate and cautionary warnings can also be removed.
Expanding any of the rows in the report allows us to drill down into the individual rules that contain these warnings.
All Warnings
The All Warnings report provides the same information as the Summary report, but in an easy-to-read tabular form. This report lets us sort on any of the column headers, such as severity or rule type, so that we can group together the warnings we want to address.
Charts
The Charts report again provides the same information, but as visual comparisons of the different warning types. This gives us another means to organize our approach to addressing these warnings.
Alternative Reports
The guardrails landing page is not the only place to obtain a view of the warnings in the system. The Application Inventory landing page contains another report that can be used to identify rules with warnings and address them.
Heat Map
The heat map report also provides the means to view rules with warnings. First we need to switch the shading option to Number of Warnings. Once this has been selected, the system displays the categories with warnings, shaded according to the percentage of rules in that category that have warnings. Ideally, we would like to see all white here, indicating there are no warnings, but that is not the case.
Hovering over any shaded category shows the number of rules with warnings, and right-clicking on the category brings up a detailed report drilled down to the list of rules with warnings. This report is very useful for identifying categories of rules that are outside the guardrails. In this case, flows are our least compliant category, so we should focus on repairing the 16 warnings in flows first.
In PRPC, every operator who has Rule Check-out enabled in their operator record, as shown here, receives a personal ruleset. This personal ruleset holds the copy of a rule when it's checked out.
We can identify a personal ruleset because the operator has a ruleset with the same name as their operator ID, followed by the @ symbol. Since these rulesets sit at the top of the ruleset stack, each creates a unique rule cache applicable to only that one operator.
So, as a best practice, rule check-out should be disabled for every operator who is not actively checking out rules. In most cases, this applies to every operator, even developers, outside of the development system.
This reduces the number of unique ruleset stacks, thereby reducing the number of unique rule caches, which in turn reduces the overall size of the rule cache.
Conclusion
So what did we all learn? We already knew that performance is key to the success of any project. In this lesson, we covered different tools we can use to identify and measure the performance of the system. We discussed various reports we have in the system to identify how closely we're adhering to best practices, and we outlined a few best practices that can be applied to any system to improve its performance.
From the System Management Application, we can view the current rule cache from the Rule Cache Management page under the Administration category. There is a lot of good information here, all of which is explained in the help files, but what we're interested in is the values for items that have been pruned. Having pruned values indicates that the rule cache has been exceeded at some point. This doesn't necessarily mean there is a problem, just that we need to do more investigation. This is especially true in development environments, where there is a high turnover in the number of rule instances.
To determine whether this is a one-time occurrence or an ongoing problem, we need to clear the cache by clicking the Clear Cache button and then thoroughly exercise the system. A good time to do this test is during load testing, as the load test ensures the system has been thoroughly executed.
Once the system has been fully run, with multiple scenarios representing the current business case, we can review this report a second time and validate whether instances are still being pruned.
If instances are still being pruned, our next task is to see if we can reduce the overall number of rules in the system. The lessons on designing for performance and on reuse and specialization both provide information on how to do this. After reducing the rules, we should clear the cache again and re-run the system to evaluate whether the rule cache still needs to be tuned.
If all of these actions have taken place and we are still encountering pruned records, it may be necessary to increase the cache size. Increasing the cache has its own impact on performance by increasing the system's possible memory footprint, which is why this approach should only be taken once all other possible solutions for reducing the overall number of rules have been exhausted.
Tuning the rule cache involves setting the instancecountlimit setting in prconfig.xml. There is information on how to do this in the help topic "Understanding caches".
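As a rough sketch, prconfig.xml settings take the form of env name/value entries. The exact setting path and an appropriate value depend on the PRPC version, so both should be confirmed against the "Understanding caches" help topic before changing a system.

```xml
<!-- Sketch of a prconfig.xml cache entry. The setting path and value shown
     here are assumptions to illustrate the entry format; confirm both
     against the "Understanding caches" help topic for your version. -->
<env name="fua/global/instancecountlimit" value="10000" />
```

A server restart is typically required for prconfig.xml changes to take effect.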
PRPC provides an easy-to-set-up wizard for establishing an archival strategy, called the Purge and Archive wizard. To launch this wizard, we select Configuration from the Purge/Archive sub-menu of the Tools sub-menu under the System menu.
The first step of the wizard asks for the work pool that we want archived. Remember that work tables are mapped to work pool entries, which is why individual classes are not listed here. After we select a work pool, we provide a name for this configuration and click Next.
The next screen allows us to select specific classes within that work pool. In most cases, we'll want to select all the classes of the work pool; rarely do we need to archive classes at different intervals. Clicking Next brings us to the next step of the configuration.
On the third step, we specify how many days old an object needs to be in order to be archived. In this case, we specify 365 days, since we're setting up to archive anything over one year old. Clicking Next brings us to the last step of the configuration.
At this point, we can review what we've configured and, if it's acceptable, click Finish. It's that easy to set up an archival strategy.
One thing I'd like to point out is the word resolved. It's important to note that this tool only archives work that has been seen through to completion and does not impact any in-flight work. This is fine in production, but often we won't find objects to archive in development or QA, as work there is rarely seen through to completion. Work objects used in these environments for testing should be either completed or cancelled so that the archive wizard can clean them up too.
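The selection rule the wizard applies can be sketched in a few lines: only resolved work older than the configured age qualifies. The work-item shape below is a hypothetical stand-in, not the actual work table layout.

```python
# Sketch of the archival selection rule: only resolved work older than the
# configured age is archived. The work-item dicts are hypothetical stand-ins.
from datetime import date, timedelta

AGE_DAYS = 365
today = date(2015, 2, 11)

work_items = [
    {"id": "W-1", "status": "Resolved-Completed", "resolved": date(2013, 6, 1)},
    {"id": "W-2", "status": "Open",               "resolved": None},
    {"id": "W-3", "status": "Resolved-Cancelled", "resolved": date(2014, 12, 1)},
]

cutoff = today - timedelta(days=AGE_DAYS)
to_archive = [w["id"] for w in work_items
              if w["status"].startswith("Resolved-") and w["resolved"] <= cutoff]
print(to_archive)  # ['W-1'] -- W-3 is resolved but not yet a year old
```

This also shows why in-flight work (W-2 here) is never touched: it has no resolved status, so it never qualifies.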
This landing page allows us to set up how we want the archive agent to behave. An important item to note here is the option to either Archive and Purge, or just Purge the identified work objects. In most cases, we will only need to use Archive in production. In Development, QA and other lower environments, we can often just select Purge to avoid the overhead of producing the archive files.
Other information we set up deals with the number of items to process during each run and how often to process them. In most cases, it is sufficient to retain the default 30-day cycle; however, check with the business to confirm their requirements before implementing.
More information on the Purge and Archive wizard, including the details on accessing the archive files,
can be found in the help files.
Conclusion
We've just covered two of the most important maintenance tasks: keeping the caches trimmed and purging old work. But there are still other processes we can implement to achieve a clean, lean system.
Every project needs at least one person who acts as the system administrator. This person should be
responsible for these additional tasks and the overall maintenance of the system. The kinds of tasks
expected of this person include:
Working with the database administrator to identify database indices to be tuned for this specific
application.
Using the performance tools available and making sure developers are adhering to the
performance best practices.
Rolling log files off the server to keep the physical disk space lean.
Migrating an Application
Introduction
Welcome to the lesson on application migration. In this lesson we will look at the features and tools
available to package, export, and import applications.
An application is typically packaged or exported when we want to move it to another system, for example
when migrating among development, testing and production environments.
The package, export, and import features can be found under distribution in the application menu.
At the end of this lesson, you should be able to:
As we create data instances of certain classes, they are automatically associated with the ruleset in which
the current application is defined. The same is true if that data instance is created by a wizard.
It is possible to change the associated ruleset or clear it so that the data instance is not associated with
any ruleset. Clearing the associated ruleset results in a warning since it is a best practice to always have
an associated ruleset for each data instance.
Associating a ruleset with a data instance does not affect or restrict any runtime behavior. The data
instance remains available to all applications regardless of the associated ruleset. If the data instance
was imported from another system, it might even reference a ruleset that doesn't exist on the current
system.
This page lists the application selected and the chain of built-on applications ending with the base
PegaRULES application.
Depending on the applications present on the target system, we may not need to include the entire stack
in our package.
The next page lists all Organizational elements associated with our application.
This page lists the Access Groups associated with our application.
The next page lists the Operators associated with the application. We want to include all of them, so
we leave the list as is and click the Next button.
The next page lists the Work Groups associated with the application.
This page lists the Data Tables associated with the application. We can uncheck any that we want to
exclude from the application package.
This page lists the Java code archives that are used by our application.
The next page lists Database and Database Table instances associated with the application in the
system.
This page lists all the Integration Resources defined in the system. By default, the ones associated with
one of the application rulesets are selected.
Click the Finish button to create the Product rule. The final page of the wizard provides the following
options.
Preview shows the content of the archive that will be created from the Product rule.
Migrate starts the Product Migration wizard. The Product Migration wizard lets us automatically
archive, migrate, and import a Product rule to one or more destination systems. See Pega 7 help
for more information on the Product Migration wizard.
If we want to generate the archive as we go through this wizard, we need to lock the rulesets that are
referenced directly or indirectly in the Product rule before using the Export button. Typically the Product
rule itself resides in a referenced ruleset, and at least that ruleset has been unlocked up to this point. The
archive can also be exported at any point from the Product rule form.
The wizard names the file after the application, in our case Purchasing. The version is set to the date and
time it was created.
Select Include Associated Data to include the data instances associated with the rulesets in this
application. Selecting this prevents us from having to include the data instances as individual instances
below.
Specify rulesets that are not part of an application. The rulesets can be entered in any order. During
import the system determines a valid order to load the rules.
Use the Minimum Ruleset Version and Maximum Ruleset Version fields to define a range within a major
version that we want to include in the product. Leave both fields blank to include all versions. Select
Exclude Non Rule Resolved Rules to exclude rule types that are associated with a Ruleset but not with a
version, for example, class and application rules.
It is possible to include instances from any class. During upload, data instances already present are by
default not overwritten. However, we have the option to overwrite existing instances if desired. In addition
to concrete classes, it is also possible to enter an abstract class that has concrete subclasses.
Use the ListView Filter and When Filter fields to specify a list view or when rule to filter the class instances
to include. It is not possible to use both a list view and a when rule in one row of this array. Leave the
fields blank to include all instances of the class.
Use a list view if the filter criteria depends only on single value properties that are exposed in the
database and the when rule if the filter criteria involves calculations or properties that are not exposed as
database columns.
Select Include Descendants to include instances in classes that are subclasses of the class entered in the
class field. If this option is selected the ListView Filter and When Filter are ignored.
By default, all descendant classes are included when we select Include Descendants. Use the Exclude
Classes field to enter the names of descendant classes we do not want to include. It is possible to enter
more than one class, using a comma (,) to delimit the names.
Add specific class instances to include in the Individual Instances to Include section. We can specify any
class that contains instances to be included as part of our Product.
Complete the next section to include JAR files within the product archive.
Click the Query Jars button to list available JARs to include. Add the JAR files to include in the product.
The date in the Creation Date field displays on the destination system before the archive is imported. The
date value entered persists even when the file is copied or renamed. Enter a text description of the
contents of this file in the Short Description field. This value also appears on the destination system
before the archive is imported.
Select 5.4 Compatibility to create an archive that can be imported into a v5.4 or earlier system. This
setting affects only the format of the output file; it does not assure backward compatibility of the
contents of the file.
Select Allow Unlocked Ruleset Versions if we want to be able to include unlocked ruleset versions.
Click the Preview Product File button to review the contents of the product definition before exporting or
migrating it.
Click the Create Product File button to create an archive in the ServiceExport directory on the current
application server node. Typically the Product rule itself is in a ruleset included in the product, which
means that the ruleset needs to be locked before the archive can be created.
On the Installation tab we can identify an HTML rule to be displayed at the end of the import operation for
this product rule. For example, it may display further post-import instructions to the person who imported
the archive.
The user must have the @baseclass.zipMoveExport privilege to use the Export tab. The privilege is part
of the SysAdm4 role.
Select Export Mode to define the scope of the contents of the exported archive.
By RuleSet/Version creates an archive containing all the rules in a ruleset and version or all
versions.
By Product creates an archive containing rules and data defined by an existing product rule.
By Patch creates an archive containing rules and data defined by an existing Product Patch rule.
Note that the Product Patch rule is deprecated; use a Product rule instead.
Enter a file name for the archive in the File Name field. Many special characters are allowed, but we can't
include spaces or equal signs.
Click the Perform Export button to create the file.
The Zip file created link provides access to the new file. The file is typically placed in the ServiceExport
subdirectory of the server.
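Before copying an exported archive to the target system, it can be useful to verify its contents. A minimal sketch using Python's standard zipfile module; the file name below is hypothetical.

```python
# Sketch: list the entries of an exported product archive so we can confirm
# the expected rules and data instances were packaged.
import zipfile

def list_archive(path):
    """Return the names of all entries in the archive at the given path."""
    with zipfile.ZipFile(path) as z:
        return z.namelist()

# Example (hypothetical file name):
# for name in list_archive("ServiceExport/Purchasing.zip"):
#     print(name)
```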
If errors are reported, click the Total Number of Errors link to see the error messages.
The user must have the @baseclass.zipMoveImport privilege to use the Import tab.
Select the archive to upload. By default, it is not possible to upload a file larger than 25 MB. For larger
files, use FTP or another means to place the file in the ServiceExport directory. The maximum file upload
size is defined by the Initialization/MaximumFileUploadSizeMB setting in the prconfig.xml file. This field
can be left empty if the file to import already exists on the server. Click Next.
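For example, the upload limit could be raised with an entry like the following in prconfig.xml; the 50 MB value is an illustration, not a recommendation.

```xml
<!-- prconfig.xml: raise the maximum file upload size (example value) -->
<env name="Initialization/MaximumFileUploadSizeMB" value="50" />
```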
Select a file from the server to import. If a file was uploaded it will be selected by default.
It is possible to view the contents of the file by using the Show Content Details button.
Select Enable advanced mode to provide more granular control over the import process if we want to be
able to select individual instances in the archive.
Click Next to start the import. The system attempts to upload the rules in an order that minimizes
processing.
After processing completes, adjust access groups or application rules to provide access to the new
rulesets, versions, and class groups as needed.
If we import rules in a ruleset version that users can already access, they may begin executing them
immediately, possibly before other rules in the same archive are present. Similarly, declarative rules begin
executing immediately, which means that they might fail if the elements or properties they reference
haven't been uploaded yet. This needs to be planned for when an archive is imported on a system with
active users.
Conclusion
In this lesson, we explained features and tools available to simplify the packaging, exporting, and
importing of an application and components of an application.
Now, we should understand the impact of associating a data instance with a ruleset and how the Product
rule, also known as a RAP, is configured. We should also know how to use the Application Packaging
wizard and the Export and Import gadgets.