ApacheDS Advanced User Guide v1.5.8-SNAPSHOT (Draft)

Licensed under the Apache License, Version 2.0
http://www.apache.org/licenses/LICENSE-2.0

© 2003-2010 The Apache Software Foundation
Table of Contents
Work in progress
1. Community
1.1. Reporting Bugs
1.1.1. Introduction
1.1.2. Creating a testcase project: using the apacheds-archetype-testcase
1.1.3. Installing the apacheds-archetype-testcase
1.1.4. Invoking (running) the archetype: generating the testcase project
1.1.4.1. For IDEA
1.1.4.2. For Eclipse
1.1.4.3. For Netbeans
1.1.5. Note about version
1.1.6. Why a Maven Archetype for Testing?
1.2. Building trunks
1.2.1. Project Hierarchy
1.2.2. Prerequisites for building
1.2.2.1. Maven
1.2.2.2. JDK 5
1.2.3. Getting the code
1.2.4. Building the trunks
1.2.4.1. Enabling Snapshot Repositories
1.2.4.2. Building the trunks
1.2.5. Building the installers
1.2.6. Starting the server without installation
1.2.7. Integration test
1.2.8. Eclipse
1.2.8.1. Building eclipse files
1.2.8.2. Maven settings
1.2.8.3. Eclipse hints
1.2.8.4. Eclipse plugins
1.2.8.5. Coding standards
1.3. Contributing
2. Architecture
2.1. Architectural Overview
2.1.1. Partitions
2.2. Interceptors
2.2.1. What is it?
2.2.2. How does it work?
2.2.3. JNDI Implementation
2.2.4. The nexus proxy object
2.2.5. Operation handling within interceptors
2.2.6. Bind Operation
2.2.7. Normalization interceptor
2.2.8. Authentication interceptor
2.2.9. Add operation
2.3. The Administrative Model
2.3.1. Introduction
2.3.2. What exactly are subentries?
2.3.3. Administrative Areas, Entries and Points
2.3.4. How are administrative areas defined?
2.3.5. Subentries under an IAA or an AAA
2.3.6. Base parameter
2.3.7. Chop parameters
2.3.7.1. chopBefore and chopAfter
2.3.7.2. minimum and maximum
2.3.8. Specification filter parameter
List of Figures
2.1. Interceptors
2.2. Interceptor chaining
4.1. Schema Browser Person
4.2. Schema Browser Tree
4.3. Schemas view
4.4. Schemas view with loaded schemas
4.5. Schema browser ship
4.6. Entry editor with ship
4.7. SubEntry
5.1. ApacheDS as a Web Application
5.2. Tomcat Manager App in Browser
5.3. New LDAP Connection Directory Studio 1
5.4. New LDAP Connection Directory Studio 2
5.5. Properties New LDAP Connection Directory Studio
5.6. WebSphere Admin Console
5.7. RootDSE Servlet in a Browser
6.1. The activated keyDerivationInterceptor automatically creates the krb5Key attributes
6.2. Authenticate using Apache Directory Studio
6.3. Windows Security
6.4. Windows Change Password
6.5.
7.1. Hello World UML
7.2. Hello World LDAP Browser
7.3. Hello World Entry Editor
7.4. PasswordHash Interceptor UML
7.5. PasswordHash Interceptor PasswordEditor
7.6. PasswordHash Interceptor ModificationLog
7.7. PasswordHash Interceptor EntryEditor
List of Tables
2.1. Nexus Proxy Object
2.2. Administrative areas
2.3. LDAP related RFCs
3.1. SASL QoP levels
3.2. To be named
3.3. ACI Trails
3.4. Access Control Subentries
3.5. ACIItem fields
3.6. Simple User Classes
5.1. Test Annotations
6.1. Protocol Providers
6.2. Environment parameters
6.3. Parameters common to all protocol providers
6.4. Parameters common to all protocol providers 1
6.5. LDAP-Specific Configuration Parameters 1
6.6. LDAP-Specific Configuration Parameters 2
6.7. Kerberos-Specific Configuration Parameters
6.8. Change Password-Specific Configuration Parameters
6.9. NTP-Specific configuration parameters
6.10. Replication Startup Configuration
6.11. Common Service Configuration Parameters
6.12. LDAP-Specific Configuration Parameters
6.13. Common Service Configuration Parameters
6.14. Kerberos-Specific Configuration Parameters
6.15. Download the unlimited strength policy JAR files
6.16. Extract the unlimited strength policy JAR files
6.17. Install the unlimited strength policy JAR files
6.18. Common Service Configuration Parameters
6.19. Change Password-Specific Configuration Parameters
6.20. Abstract objectClass used to build all DNS record objectclasses
6.21. Address (A) record
6.22. Pointer (PTR) record
6.23. Name Server (NS) record
6.24. Start Of Authority (SOA) record
6.25. Common Service Configuration Parameters
6.26. Common Service Configuration Parameters
Work in progress
Unfortunately, the Basic User's Guide for ApacheDS 1.5 is not finished yet. We have started to move
and revise the content; what you find here is a work in progress, but it should be valid for ApacheDS
1.5.5. In the meantime you can have a look at the ApacheDS 1.0 Basic User's Guide, which is currently
more complete.
Chapter 1. Community
1.1. Reporting Bugs
This site was updated for ApacheDS 1.5.5.
1.1.1. Introduction
So you found a bug in ApacheDS. Don't worry; this is a good thing! We can fix it really fast, but we
need your help. There are different degrees to which you can help out. Some of you have developer
skills, so you might be able to write a test case that pinpoints the bug. If you can do this, we will
prioritize your bug report above all others. Yes, we will put your bug at the top of the list of fixes that
should be made first. And if you can't do this but your bug is serious, we'll prioritize it ahead of others
anyway.
This page shows you how you can help us to help you!
So you can write client code in your test case immediately. Just add your code, tar gzip the project,
and attach it to your JIRA issue on the ApacheDS JIRA here:
https://issues.apache.org/jira/browse/DIRSERVER
We'll prioritize your bug higher than others and probably fix it rapidly because the problem is isolated
thanks to your testcase submission. We will in fact strip out your testcase and add it to our suite of test
cases to make sure ApacheDS always passes this integration test you've provided.
svn co http://svn.apache.org/repos/asf/directory/samples/trunk/apacheds-archetype-testcase
cd apacheds-archetype-testcase
mvn install
This will install the archetype onto your local repository. Now you can invoke the archetype.
This will generate the default test case project with the following tree structure:
~/foo-test$ tree
.
|-- pom.xml
`-- src
|-- main
| `-- java
| `-- com
| `-- acme
| `-- Dummy.java
`-- test
|-- java
| `-- com
| `-- acme
| |-- AdvancedTest.java
| |-- AdvancedTestApacheDsFactory.java
| `-- MinimalTest.java
`-- resources
|-- log4j.properties
|-- sevenSeas_data.ldif
`-- sevenSeas_schema.ldif
10 directories, 8 files
• Dummy.java - this is just a placeholder file to make sure that Maven works properly.
• MinimalTest - a minimal ApacheDS Integration Test. It contains two test methods to demonstrate
usage of JNDI and the ApacheDS core API. Add your own test method here.
• AdvancedTest - an advanced ApacheDS Integration Test in case you need a special configuration
for your test. It demonstrates how to add a new partition, how to enable LDAPS, how to enable a
disabled schema, how to inject a custom schema, and how to inject custom test data.
• pom.xml - the Maven Project Object Model (POM) for your new testcase project (can remain as is).
• log4j.properties - Log4j configuration file that controls ApacheDS logging for your convenience;
edit this file to control logging output.
Once you have done this, you can build and test the project to see what happens. This will build and
run the sample test cases (they should pass) that come packaged with the project you just created.
cd into foo-test and run the following command:
mvn test
Now you can customize the MinimalTest.java or AdvancedTest.java file to isolate your bug. Open the
classes with your favorite editor and go to town. However, if you want to pull this project into your IDE
and edit it there, you can use Maven's IDEA, Eclipse and Netbeans integration to create IDE project
descriptors. Then you can import the project into your IDE. Here's how:
mvn eclipse:eclipse
Tests isolating custom bugs can be incorporated into our community test suite for ApacheDS.
If the build hangs or you get an out of memory exception, please increase the heap space:
• For Linux:
export MAVEN_OPTS="-Xmx256m"
• For Windows:
SET MAVEN_OPTS="-Xmx256m"
Then run the build again:
mvn clean install
1.2.2.1. Maven
Download [http://maven.apache.org/download.html] and install Maven 2.0.9. (Attention: do NOT use
an older version of Maven.)
Add a MAVEN_HOME environment variable and add MAVEN_HOME/bin to your system path:
On a Linux box you could add the following to the .bashrc file (.bashrc is a file you'll find in your
home directory)
...
export MAVEN_HOME=/opt/maven-2.0.9
export PATH=$JAVA_HOME:$JAVA_HOME/bin:$MAVEN_HOME/bin:$PATH
...
Windows users, use Control Panel -> System -> Advanced -> Environment Variables
1.2.2.2. JDK 5
Before building the trunks, you must configure Maven 2 to use the snapshot repository for
Apache. Snapshot repositories are typically configured per user at ~/.m2/settings.xml. The
following example, added to your settings.xml, will add a profile for the Apache snapshot
repository.
<settings>
<profiles>
...
<profile>
<id>apache</id>
<repositories>
<repository>
<id>apache.org</id>
<name>Maven Snapshots</name>
<url>http://people.apache.org/repo/m2-snapshot-repository</url>
<releases>
<enabled>false</enabled>
</releases>
<snapshots>
<enabled>true</enabled>
</snapshots>
</repository>
</repositories>
<pluginRepositories>
<pluginRepository>
<id>apache.org</id>
<name>Maven Plugin Snapshots</name>
<url>http://people.apache.org/repo/m2-snapshot-repository</url>
<releases>
<enabled>false</enabled>
</releases>
<snapshots>
<enabled>true</enabled>
</snapshots>
</pluginRepository>
</pluginRepositories>
</profile>
...
</profiles>
</settings>
You may either specify the profile on the command line each time you use 'mvn', or you may configure
the profile to always be active.
<settings>
...
<activeProfiles>
<activeProfile>apache</activeProfile>
</activeProfiles>
...
</settings>
cd apacheds-trunk
mvn clean install
You must make sure you build the shared, installers, and daemon project modules in addition
to the apacheds module to prevent problems with stale Maven SNAPSHOT jars in the snapshot
repository from causing compilation errors. This can be guaranteed by performing all Maven
operations above in the top directory that you checked out: the apacheds-trunk directory.
A lot of plugins will be downloaded. If you are curious, you can look at ~/.m2/repository to see what
has been downloaded during this step. The build should finish with lines like these:
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESSFUL
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 8 minutes 30 seconds
[INFO] Finished at: Mon Oct 30 23:32:41 CET 2006
[INFO] Final Memory: 18M/32M
[INFO] ------------------------------------------------------------------------
cd apacheds-trunk
mvn install
cd installers/apacheds
mvn -Pserver-installer install
Linux:
cd apacheds-trunk/installers/apacheds-noarch
./apacheds.sh
cd apacheds-trunk
mvn -Dintegration test
1.2.8. Eclipse
1.2.8.1. Building eclipse files
To build the .project and .classpath files for eclipse, type the following commands :
cd apacheds-trunk
mvn clean install
mvn eclipse:eclipse
cd apacheds/bootstrap-partition
mvn clean install
(The last two commands are necessary because eclipse:eclipse purges the target directory, and we need
some generated files which have been removed. This is why we do another mvn clean install in the
bootstrap-partition module.) Then import all the existing projects which have been created.
You can declare new variables in Eclipse by opening Preferences... and selecting Build Path ->
Classpath Variables.
You may also declare a specific workspace when launching Eclipse. I have created a workspace-
apacheDS directory in my HOME directory, where the whole ApacheDS project is built when I use
Eclipse.
Launch Eclipse:
<eclipse_root>/eclipse-apacheDS.sh
1.3. Contributing
Chapter 2. Architecture
2.1. Architectural Overview
2.1.1. Partitions
A partition is a physically distinct store for a subset of the entries contained within a DSA (Directory
System Agent, a.k.a. the LDAP server). The entries of a partition all share the same suffix, which is
the distinguished name of the namingContext from which the stored entries hang in the DIT. A
partition can be implemented using any storage mechanism, or can even be backed by memory. The
default storage mechanism for a partition is JDBM. The addition of such a partition is described
in the Basic User's Guide [http://directory.apache.org/apacheds/1.5/apacheds-v15-basic-users-guide.html].
A partition with a different storage mechanism simply has to implement the Partition interface, and
by doing so can be mounted in the server at its suffix/namingContext (described
here [http://cwiki.apache.org/confluence/pages/createpage.action?
spaceKey=DIRxSRVx11&title=6.1.%20Implementing%20an%20alternative
%20Backend&linkCreation=true&fromPageId=55216] ).
The server can have any number of partitions (with any implementation) attached to various
namingContexts, which are published by the RootDSE (the entry with the empty DN "") using the
namingContexts operational attribute. So if you want to see the partitions served by the server, you
can query the RootDSE for this information.
2.2. Interceptors
2.2.1. What is it?
The interceptor mechanism is a means for injecting and isolating orthogonal services into calls against
the nexus. The nexus is the hub used to route calls to partitions to perform CRUD operations upon
entries. By injecting these services at this level, partition implementors need not duplicate functionality.
Services such as authentication, authorization, schema checking, normalization, operational attribute
maintenance and more are introduced using interceptors. By using interceptors, partition implementors
need not be concerned with these aspects and can focus on raw CRUD operations against their backing
stores, whatever they may be.
• DeadContext
• JavaLdapSupport
• ServerContext
• ServerDirContext
• ServerLdapContext
• AbstractContextFactory
• CoreContextFactory
• ServerDirObjectFactory
• ServerDirStateFactory
Every JNDI Context implementation in the provider holds a dedicated reference to a nexus proxy
object. This proxy contains all the operations that the nexus contains. The proxy object is at the heart
of the mechanism. We will discuss it more after covering the rest of the JNDI provider.
Calls made against JNDI Contexts take relative names as arguments. These names are relative to the
distinguished name of the JNDI Context. Within the Context implementations these relative names
are transformed into absolute distinguished names. The transformed names are used to make calls
against the proxy.
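As a rough illustration of this transformation, the standard javax.naming.ldap.LdapName class (part of the JDK since Java 5) can compose a context's distinguished name with a relative name. This is only a sketch of the idea, not the actual provider code, and the DNs used below are made-up examples:

```java
import javax.naming.InvalidNameException;
import javax.naming.ldap.LdapName;

public class NameComposition
{
    /**
     * Turns a name relative to a context into an absolute DN by
     * appending the relative components to the context's own DN.
     */
    public static String toAbsolute( String contextDn, String relativeName )
        throws InvalidNameException
    {
        LdapName absolute = new LdapName( contextDn );
        // addAll appends the relative RDNs as the least significant components
        absolute.addAll( new LdapName( relativeName ) );
        return absolute.toString();
    }

    public static void main( String[] args ) throws Exception
    {
        // A context at ou=system resolving the relative name cn=acme,ou=users
        System.out.println( toAbsolute( "ou=system", "cn=acme,ou=users" ) );
    }
}
```

The absolute DN is what would then be passed to the proxy.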
Additional processing may occur before or after a call is made by a context on its proxy to manage
JNDI provider specific functions. One such example is the handling of Java objects for serialization
and the use of object and state factories.
The basic idea is to allow pre and post actions to be executed before and after the call to the next
interceptor:
Each interceptor processes the pre action, calls the next interceptor, waits for the response, executes
the post action, and returns. We have to implement this chain of interceptors in a way which allows us
to add new interceptors, or new pre or post actions, without having to modify the existing code or
mechanism.
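The chaining scheme described above can be sketched in plain Java. This is a simplified model written with modern Java lambdas, not the actual ApacheDS interceptor API; the interceptor and operation names are invented for the example:

```java
import java.util.ArrayList;
import java.util.List;

public class InterceptorChainDemo
{
    /** A minimal interceptor: runs a pre action, delegates, then runs a post action. */
    interface Interceptor
    {
        void intercept( String operation, Chain next, List<String> trace );
    }

    /** The remaining chain; the last link is the target (the nexus in ApacheDS). */
    interface Chain
    {
        void proceed( String operation, List<String> trace );
    }

    /** Builds a chain where each interceptor wraps the next one. */
    static Chain build( List<Interceptor> interceptors, Chain target )
    {
        Chain chain = target;
        for ( int i = interceptors.size() - 1; i >= 0; i-- )
        {
            final Interceptor interceptor = interceptors.get( i );
            final Chain next = chain;
            chain = ( op, trace ) -> interceptor.intercept( op, next, trace );
        }
        return chain;
    }

    static Interceptor named( String name )
    {
        return ( op, next, trace ) -> {
            trace.add( name + ":pre" );   // pre action
            next.proceed( op, trace );    // call the next interceptor and wait
            trace.add( name + ":post" );  // post action, after the response
        };
    }

    public static List<String> run()
    {
        List<Interceptor> interceptors = new ArrayList<>();
        interceptors.add( named( "normalization" ) );
        interceptors.add( named( "authentication" ) );
        List<String> trace = new ArrayList<>();
        build( interceptors, ( op, t ) -> t.add( "nexus:" + op ) ).proceed( "bind", trace );
        return trace;
    }

    public static void main( String[] args )
    {
        System.out.println( run() );
    }
}
```

Note that adding a new interceptor only means adding one entry to the list: neither the existing interceptors nor the chain mechanism needs to change.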
public void bind( LdapDN bindDn, byte[] credentials, List mechanisms, String saslAuthId,
    Collection bypass ) throws NamingException
{
    ...
    this.configuration.getInterceptorChain().bind( bindDn, credentials, mechanisms, saslAuthId );
    ...
}
This will call the first configured interceptor from a chain which is declared in the configuration file
server.xml. The first interceptor is the NormalizationService.
It is the first interceptor in the chain because, as we will manipulate the DN through all interceptors,
it is important that we normalize it as soon as possible.
The normalized DN will be stored in a special form, useful for internal comparisons. This operation
can be costly, but as the DN has already been parsed, it is quite efficient.
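The idea can be illustrated with a crude normalizer. Real normalization in ApacheDS is schema-driven (each attribute type's matching rule decides how its values compare), so the string-level version below, which lowercases types and values and strips insignificant spaces, is only a rough sketch under that simplifying assumption:

```java
import java.util.StringJoiner;

public class DnNormalizerSketch
{
    /**
     * Naive DN normalization: split the DN into RDNs, trim spaces around
     * the '=' and ',' separators, and lowercase both type and value.
     * Multi-valued RDNs and escaped characters are ignored for simplicity.
     */
    public static String normalize( String dn )
    {
        StringJoiner normalized = new StringJoiner( "," );
        for ( String rdn : dn.split( "," ) )
        {
            int eq = rdn.indexOf( '=' );
            String type = rdn.substring( 0, eq ).trim().toLowerCase();
            String value = rdn.substring( eq + 1 ).trim().toLowerCase();
            normalized.add( type + "=" + value );
        }
        return normalized.toString();
    }

    public static void main( String[] args )
    {
        // Two syntactically different but equivalent DNs normalize to the same form
        System.out.println( normalize( "CN = Acme , OU=Users, OU=SYSTEM" ) );
    }
}
```

Once every interceptor sees DNs in this canonical form, equality checks reduce to plain string comparison.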
In the first case, we will have to search for the password in the backend; this will be a lookup operation,
which will be applied through another chain of interceptors.
Let's assume we are in the second case, because if we are in the first case, we will have to ask the
backend for the entry whose DN is equal to the one we received, to get its associated password, thus
calling a specific chain of interceptors ( FINAL_INTERCEPTOR ).
The password is compared using the given mechanism (which should be simple on a new server), and
if it matches, we create a principal object which will be stored in the connection context for future
usage.
• A partition name
• An entry name
For instance, when adding an entry whose DN is cn=acme, ou=users, ou=system , we will have:
• Partition = "ou=system"
The first two elements must exist in the base. We can't add an entry in a non-existing partition, and
we can't add an entry whose parent path does not exist.
For this reason we intend to remain ahead of the curve by implementing these aspects of administration
using Subentries and Administrative Areas similar to X.500 Directories.
Subentries are hidden leaf entries (which cannot have children). These entries are immediately
subordinate to an administrative point (AP) within the directory. They are used to specify administrative
information for a part of the Directory Information Tree (DIT). Subentries can contain administrative
information for aspects of access control, schema administration, and collective attributes (and others
which have not been defined in any specification yet).
• 11.1.1 administrative area: A subtree of the DIT considered from the perspective of administration.
• 11.1.5 autonomous administrative area: A subtree of the DIT whose entries are all administered by
the same Administrative Authority. Autonomous administrative areas are non-overlapping.
• 11.1.11 inner administrative area: A specific administrative area whose scope is wholly contained
within the scope of another specific administrative area of the same type.
• 11.1.17 specific administrative area: A subset (in the form of a subtree) of an autonomous
administrative area defined for a particular aspect of administration: access control, subschema or
entry collection administration. When defined, specific administrative areas of a particular kind
partition an autonomous administrative area.
• 11.1.18 specific administrative point: The root vertex of a specific administrative area.
Now take a step back because the above definitions are, well, from a sleep inducing spec. Let's just
talk about some situations.
Presume you're the uber directory administrator over at WallyWorld (a Walmart competitor). Let's
say WallyWorld uses their corporate directory for various things, including their product catalog. As
the uber admin, you're going to have a bunch of people wanting to access, update and even administer
your directory. Entire departments within WallyWorld are going to want to control different parts of
the directory. Sales may want to manage the product catalog, while operations may want to manage
information in other areas dealing with suppliers and store locations. Whatever the domain, some
department will need to manage the information as the authority.
Each department will probably designate different people to manage different aspects of their domain.
You're not going to want to deal with their little fiefdoms; instead you can delegate the administration
of access control policy to a departmental contact. You will want to empower your users and
administrative contacts in these departments so they can do part of the job for you. Plus it's much
better than having to communicate with everyone in the company to meet their needs. This is where
the delegation of authority comes into the picture.
Usually administrators already do this to an extent without defining administrative areas. Giving users
the ability to change their own passwords, for example, is a form of delegation. This is generally a
good idea because you don't want to set passwords for people: first because you don't want to see the
password, and secondly because of the management nightmare you'd have to deal with. Expand this
idea out a little further and think about delegating administration not of users on their passwords but
of entire subtrees in the directory to administrative contacts in various departments.
Do you really want to manage the corporate product catalog or just let the sales department manage it?
But what do we mean by manage? You want sales people to create and delete entries, but they may only
trust a few people to do this. Others may just view the catalog. Who are the people with add/remove
powers, and why should you have to be involved in deciding this ever-changing departmental policy?
Instead you can delegate the management of access controls in this area to an administrative contact
in the sales department. The sales contact can then administer access controls for their department.
They're closer to the people in sales than you are and they probably have more bandwidth to handle
sales related needs than you do. Delegating authority in this fashion is what X.500 engineers pioneered
in the early 80s with the telecom boom in Europe. They knew different authorities would want to manage
different aspects of directory administration for themselves. These X.500 definitions exist so we can
talk about administrative areas within the directory. Now let's get back to what these things
are exactly.
An administrative area is some part of the directory tree that is arbitrarily defined. The tree can
be split into different administrative areas to delegate authority for managing various aspects of
administration. For example you can have a partition hanging off of 'dc=example,dc=com' with an
'ou=product catalog' area. You may want this area to be managed by the sales department with respect
to the content, schema, its visibility, and collective attributes. Perhaps you want to delegate
only one aspect of administration, access control, since you don't want people messing around with
schema. To do so you can define everything under 'ou=product catalog' to be an administrative
area specifically for access control and delegate that aspect only. In that case the entry, 'ou=product
catalog,dc=example,dc=com' becomes an administrative entry. It is also the administrative point for
the area which is the tree rooted at this entry.
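For illustration, marking such an entry as an administrative point amounts to adding the administrativeRole operational attribute to it. Here is a hedged LDIF sketch; the DN follows the example above, but the modification itself is hypothetical and assumes the entry already exists:

```
# Mark 'ou=product catalog' as an access control specific administrative point
dn: ou=product catalog,dc=example,dc=com
changetype: modify
add: administrativeRole
administrativeRole: accessControlSpecificArea
```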
Not all administrative areas are equal. There are really two kinds: autonomous and inner areas.
Autonomous areas are areas of administration that cannot overlap, meaning someone is assigned as
the supreme authority for that subtree. Inner areas are, as their name suggests, nested administrative
areas within autonomous areas and other inner areas. Yes, you can nest these inner areas as deep as
you like. You may be asking yourself what the point to all this is. Well, say you're the supreme admin
of admins. You delegate the authority to manage access control for the corporate catalog to the sales
admin. That admin may in turn decide to delegate yet another area of the catalog to another contact
within a different department. The sales admin realizes that the job is way bigger than he can
manage, so he delegates
administration of subtrees in the catalog to various contacts in different departments. For example
regions of the catalog under 'ou=electronics' and 'ou=produce' may be delegated to different contacts
in their respective departments. However the sales admin still reserves the ability to override access
controls in the catalog. The sales admin can change who manages access controls for different parts
of the catalog. This chain of delegation is possible using inner administrative areas.
OID NAME
2.5.23.1 autonomousArea
2.5.23.2 accessControlSpecificArea
2.5.23.3 accessControlInnerArea
2.5.23.4 subschemaAdminSpecificArea
2.5.23.5 collectiveAttributeSpecificArea
2.5.23.6 collectiveAttributeInnerArea
As you can see, three aspects, schema, collective attributes, and access control, are considered. An
autonomous administrative area can hence be considered with respect to all three specific aspects of
administration. If an AP is marked as an autonomousArea it generally means that administration of
all aspects is allowed by the authority. If marked with a specific aspect then only that aspect of
administration is delegated. The administrativeRole operational attribute is multivalued so the uber
admin can delegate any number of specific administration aspects as he likes.
Also notice that two aspects, collective attribute and access controls, allow administrative points to
be inner areas. Delegated authorities for these two aspects can create inner administrative areas to
further delegate their administrative powers. The schema aspect, unlike the others, cannot have inner
areas because of the potential conflicts this may cause, which would lead to data integrity issues. For
this reason only the authority of an autonomous area can manage schema for the entire subtree.
An autonomous administrative area (AAA) includes the AP and spans all descendants below the AP
down to the leaf entries of the subtree with one exception. If another AAA, let's call it AAA' (prime)
is present and rooted below the first AAA then the first AAA does not include the entries of AAA'.
Translation: an AAA spans down until other AAAs or leaf entries are encountered within the subtree.
This is because AAAs, unlike inner AAs (IAAs), do not overlap.
A subtree specification uses various parameters described below to define the set of entries. Note that
entries need not exist yet; they become part of the collection as soon as they are added.
When chopBefore is used, the entry specified is excluded from the collection. When chopAfter is
used, the entry is included but all descendants below the entry are excluded.
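To make this concrete, here is a sketch of a subtreeSpecification combining both exclusion types, assuming the subentry sits under an AP at 'ou=system' (the 'ou=untrusted' and 'ou=legacy' names are hypothetical):

```
{
base "ou=users",
specificExclusions { chopBefore: "ou=untrusted", chopAfter: "ou=legacy" }
}
```

With this spec, 'ou=untrusted,ou=users,ou=system' and everything below it are excluded, while 'ou=legacy,ou=users,ou=system' itself remains selected but none of its descendants do.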
The maximum parameter specifies the maximum arc length between the base and the target allowed
before entries are excluded from the collection.
So with a filter you have the ability to "refine" the subtree already specified with the chop and base
parameters. This "refinement" makes it so the collection is not really a contiguous subtree of entries
but a possibly disconnected set of entries selected based on their objectClass characteristics. This
feature of a subtreeSpecification is very powerful. For example, I can define a subtree to cover a region
of an AA yet include only inetOrgPersons within this region.
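A minimal refinement along those lines might look like the following sketch (the base name 'ou=people' is hypothetical):

```
{
base "ou=people",
specificationFilter item:inetOrgPerson
}
```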
The kinds of subentries allowed though are limited by the administrativeRole of the AP. If the AP is
for an access control AA then you can't add a subentry to it for schema administration. The AP must
have the role for schema administration as well to allow both types of subentries.
ApacheDS does not manage schema using subentries in the formal X.500 sense right now. There is a
single global subentry defined at 'cn=schema' for the entire DSA. The schema is static and cannot be
updated at runtime even by the administrator. Pretty rough for now but it's the only lagging subsystem.
We'll of course make sure this subsystem catches up.
ApacheDS does however manage collective attributes using subentries. An AP that takes the
administrativeRole for managing collective attributes can have subentries added. These subentries are
described in greater detail here: Section 4.2, “Collective Attributes”. In short, collective attributes
added to subentries show up within entries included by the subtreeSpecification. Adding, removing,
and modifying the values of collective attributes within the subentries instantly manifest changes in
the entries selected by the subtreeSpecification. Again consult Section 4.2, “Collective Attributes” for
a hands on explanation of how to use this feature.
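As a sketch of what such a subentry can look like in LDIF (the DN, cn, base, and c-ou value are hypothetical; c-ou is one of the collective attribute types defined in RFC 3671):

```
dn: cn=salesCollective,dc=example,dc=com
objectClass: top
objectClass: subentry
objectClass: collectiveAttributeSubentry
cn: salesCollective
subtreeSpecification: { base "ou=sales" }
c-ou: Sales Department
```

Every entry selected by the subtreeSpecification would then appear to carry the c-ou attribute with the value 'Sales Department'.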
ApacheDS performs access control and allows delegation using subentries, AAAs, and IAAs.
ApacheDS uses the Basic Access Control Scheme from X.501 to manage access control. By default
this subsystem is deactivated because it locks down everything except access by the admin. More
information about hands-on use is available here: Section 3.5, “Authorization”. However, to summarize
its association with subentries, access control information (ACI) can be added to subentries under
an AP for access control AAs. When one or more ACI are added in this fashion, the access rules of
the ACI set apply to all entries selected by the subtreeSpecification. Even with this powerful feature,
individual entries can still have their own ACI added to control access to them specifically. Also there are things you
can do with ACI added to subentries that cannot be done with entry level ACI. For example you cannot
allow entry addition with entry ACI. You must use subtreeSpecifications to define where entries may
be added because those entries and their parents may not exist yet.
{}
This basically selects the entire contiguous subtree below the AP. The base is the empty name and
it's rooted at the AP.
{ base "ou=users" }
If this is the subtreeSpecification under the AP, 'ou=system', then it selects every entry under
'ou=users,ou=system'.
OK that was easy, so now let's slice and dice the tree using the minimum and maximum chop
parameters.
{ minimum 3, maximum 5 }
This selects all entries below 'ou=system' which have a DN size of at least 3 name components,
but no more than 5. So for example 'uid=jdoe,ou=users,ou=system' would be included but
'uid=jack,ou=do,ou=not,ou=select,ou=users,ou=system' would not be included. Let's continue and
combine the base with just a minimum parameter:
{ base "ou=users", minimum 4 }
Here the subtree starts at 'ou=users,ou=system' if the subentry subordinates to the AP at 'ou=system'.
The user 'uid=jdoe,ou=deepenough,ou=users,ou=system' is selected by the spec whereas
'uid=jbean,ou=users,ou=system' is not. Next let's add a specific exclusion to the mix:
{
base "ou=users",
minimum 4,
specificExclusions { chopBefore: "ou=untrusted" }
}
Note that you can add as many exclusions as you like by comma delimiting them. For example:
{
base "ou=users",
minimum 4,
specificExclusions { chopBefore: "ou=untrusted", chopAfter: "ou=ugly", chopBefore: "ou=bad" }
}
The final example includes a refinement. Again any combination of chop, filter and base parameters
can be used. The following refinement makes sure the users selected are of the objectClass
inetOrgPerson and specialUser where the OID for the specialUser class is 32.5.2.1 (fictitious).
{
base "ou=users",
minimum 4,
specificExclusions { chopBefore: "ou=untrusted", chopAfter: "ou=ugly", chopBefore: "ou=bad" },
specificationFilter and:{ item:32.5.2.1, item:inetOrgPerson }
}
If you'd like to see the whole specification of the grammar used for the subtreeSpecification take a
look at Appendix A in RFC 3672 [http://www.faqs.org/rfcs/rfc3672.html] .
Of course we will revamp the schema subsystem of ApacheDS to use subentries in AAA to manage
the schema in effect within different regions of the DIT. Today most LDAP servers just have a global
schema in effect for the entire DIT served by a DSA. We don't think that is reasonable at all. So expect
some serious advances in the design of a new schema subsystem based on subentries.
Replication is yet another excellent candidate for using subentries. Replication of specific collections
of entries can be managed for each cluster rather than replicating the entire DIT served by a DSA to
replicas. This way we don't only control what is replicated but we can also control how and where
it is replicated.
2.3.12. Conclusions
ApacheDS has implemented subentries for the administration of various aspects of the directory
and gains several powerful features as a result: namely precision application of control to entry
collections and the ability to delegate administrative authority. For details on the administration of each
aspect using subentries (Section 4.2, “Collective Attributes” and Section 3.5, “Authorization”)
please see the respective documentation.
As ApacheDS progresses it will gain an immense advantage from subentries, both for existing LDAP
features like schema and for new experimental features like triggers and replication.
3.1.2. Architecture
SASL workflow is implemented in the LDAP Protocol Provider's BindHandler. At the start of a
Bind, the BindHandler handles SASL negotiation. During SASL negotiation, the LDAP client is
first authenticated. After successful authentication, an LDAP context is established and a SUCCESS
message is returned.
Backend #1 is a lookup to authenticate the user using an administrative (internal) directory context.
Backend #2 is an LdapContext establishment for the user that is stored in the user's MINA session.
The DIGEST-MD5 and GSSAPI SASL mechanisms can provide message integrity and, optionally,
message confidentiality by "wrapping" or "unwrapping" data with a security layer. After the Bind has
completed the BindHandler will insert a MINA filter that handles security layer processing into the
IoFilterChain for the session that was SASL-authenticated. All subsequent LDAP operations will be
wrapped or unwrapped by the SaslFilter (assuming message integrity or privacy are negotiated). For
example, a subsequent search would arrive wrapped and thus must be unwrapped by the SaslFilter
prior to being ASN.1 decoded into a SearchRequest. Similarly, all outbound responses, including
errors and unbinds, will be wrapped by the SaslFilter.
3.1.3. CRAM-MD5
Password must be stored as plaintext in the 'userPassword' attribute.
3.1.4. DIGEST-MD5
Password must be stored as plaintext in the 'userPassword' attribute.
Realm must match realms advertised by the LDAP server, but there is no multi-realm support yet.
3.1.5. GSSAPI
Principal name is matched to the 'krb5PrincipalName' attribute under a base DN.
Principal configuration (user, service, krbtgt) can all occur on LDIF load.
When anonymous authentication is disabled, queries below the RootDSE will require authentication.
The following command will fail if anonymous access is disabled.
GSSAPI will use the Kerberos credentials of the current user. GSSAPI supports the concept of "realm,"
but the realm is part of the username, e.g. 'hnelson@EXAMPLE.COM'.
3.1.8. Resources
IMAP/POP AUTHorize Extension for Simple Challenge/Response
http://www.ietf.org/rfc/rfc2195.txt
RFC 4513 - Lightweight Directory Access Protocol (LDAP): Authentication Methods and Security
Mechanisms
http://www.faqs.org/rfcs/rfc4513.html
This document obsoletes RFC 2251, RFC 2829, and RFC 2830.
2. You can double-check your version of ApacheDS by interrogating the RootDSE for the supported
SASL mechanisms. Note the use of the fully-qualified domain name (FQDN), 'ldap.example.com'.
Regardless of the enabled authentication mechanisms, you will always be able to query the
RootDSE. You must see 'GSSAPI' in this returned list.
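With the OpenLDAP client tools, such a RootDSE query might look like the following (the port is an assumption; 10389 is the ApacheDS default LDAP port):

```
$ ldapsearch -x -H ldap://ldap.example.com:10389 -s base -b "" supportedSASLMechanisms
```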
3. (OPTIONAL) Install GSSAPI support for LDAP tools on Linux. By default, some Linux variants
do not have SASL GSSAPI support installed. If Cyrus SASL GSSAPI is not present, install it with
an RPM maintenance tool such as 'yum'. Note that the SASL support in ApacheDS is unrelated to
the SASL library implementation being installed here.
$ cd <trunk>/server-main
$ vi server.xml
</list>
</property>
6. Set the FQDN of the host. The FQDN must resolve, whether by hosts file or DNS. Elements of the SASL
GSSAPI mechanism are extremely picky about the FQDN you use. The FQDN should be the top-most
entry in your hosts file or matching A and PTR records in DNS. If you are running the client
and the server on the same machine, you may need to set the FQDN to be your hostname. You
will likely find a sniffer (like WireShark) very handy for figuring out what hostnames are being
assumed and whether DNS is working properly.
<!-- The FQDN of this SASL host, validated during SASL negotiation. -->
<property name="saslHost" value="ldap.example.com" />
7. Set the service principal name that the server-side of the LDAP protocol provider will use to
"accept" a GSSAPI context initiated by the LDAP client. The SASL principal MUST follow
the name-form ldap/<fqdn>@<realm>. The 'ldap' name component and the @<realm> will be
automatically added to the FQDN by the LDAP client. The LDAP client will then use this as the
service principal name when requesting a service ticket from a KDC. In our case, the KDC is
ApacheDS, itself.
<!-- The Kerberos principal name for this LDAP service, used by GSSAPI. -->
<property name="saslPrincipal" value="ldap/ldap.example.com@EXAMPLE.COM" />
8. (OPTIONAL) Enforce quality-of-protection (QoP). The QoP level directly maps to the JNDI levels.
Listing all possible levels means any level will be accepted. Listing only 'auth-conf' will allow
only 'auth-conf' connections. These SASL QoP levels are global; they affect all connections using
DIGEST-MD5 or GSSAPI.
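In server.xml this might be expressed with a property like the following sketch; the property name 'saslQop' is an assumption modeled on the other SASL properties shown in this guide:

```
<!-- The QoP levels accepted for DIGEST-MD5 and GSSAPI connections. -->
<property name="saslQop">
<list>
<value>auth</value>
<value>auth-int</value>
<value>auth-conf</value>
</list>
</property>
```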
9. Configure SASL realms. If the realm is not enabled, the connection will be rejected. Note that if
your realm does not appear here, you will see an error similar to "Nonexistent realm: dummy.com."
<!-- The realms serviced by this SASL host, used by DIGEST-MD5 and GSSAPI. -->
<property name="saslRealms">
<list>
<value>example.com</value>
<value>apache.org</value>
</list>
</property>
10.Set the search base DN. The search base DN is where a subtree-scoped DIT search will be
performed. This is BOTH where the LDAP service principal must reside, as well as where user
principals must reside. That all principals must reside in a single sub-tree is currently (4-JUN-2007)
a limitation of the SASL implementation. Work is underway to enable "multi-realm" capability, as
well as "split realm" capability. "Split realm" capability will allow you to split principals (users,
admins, services, machines) into separate subtrees.
<!-- The base DN containing users that can be SASL authenticated. -->
<property name="searchBaseDn" value="ou=users,dc=example,dc=com" />
11.Configure your host so that it knows where to get Kerberos tickets. On Linux this is configured
in '/etc/krb5.conf'. The minimum config file must list the default Kerberos realm and the location of
at least one key distribution center (KDC). With ApacheDS, the KDC and LDAP server are the same,
so we'll re-use our 'ldap.example.com' hostname here.
[libdefaults]
default_realm = EXAMPLE.COM
[realms]
EXAMPLE.COM = {
kdc = ldap.example.com
}
[domain_realm]
.example.com = EXAMPLE.COM
example.com = EXAMPLE.COM
12.Enable the Kerberos protocol provider. By default, the LDAP protocol is enabled, but the Kerberos
protocol is not. You may also change the Kerberos port so that Kerberos can bind if you're logged-
in as a non-root user. If you change the default port of '88', you must change the KDC port in the
krb5.conf, as well.
13.Enable the KeyDerivationService interceptor. This interceptor derives Kerberos keys from the
'userPassword' attribute, so principals can be configured on-the-fly with a plaintext password.
<bean class="org.apache.directory.server.core.configuration.MutableInterceptorConfiguration">
<property name="name" value="keyDerivationService" />
<property name="interceptor">
<bean class="org.apache.directory.server.core.kerberos.KeyDerivationService" />
</property>
</bean>
14.Pre-load principals using an LDIF file. If the LDAP SASL GSSAPI mechanism is enabled but
the service principal is not found, then you may see a WARN message in the server logs. With the
KeyDerivationService enabled, you should be able to use LDIFs or LDAP to configure principals
on-the-fly. For this example, since the LDIF format is concise, we review some LDIF entries.
You will find attached to this page an example LDIF. Download the LDIF [data/sasl-gssapi-
example.ldif] and configure the 'ldifDirectory' in server.xml.
<property name="ldifDirectory">
<value>/path/to/sasl-gssapi-example.ldif</value>
</property>
15.Review the LDIF entries. The metaphor for Kerberos comes from the fact that it is "three-headed";
there is always a KDC principal, service principal, and user principal. All of these principals use
the same objectClasses. The attributes are the minimum to satisfy their respective schema, with
the exception of the Kerberos schema. Because we are using the KeyDerivationService, we don't
need to specify the Kerberos key, key types, or key version number (kvno); they are automatically
added by the interceptor, which will also increment the kvno when the password changes. Looking
at the LDIF file you'll see the ASL license, an organizational unit (ou) for our 'users' subcontext,
and the following entries:
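The entries themselves are in the downloadable LDIF. As a sketch of what a user principal entry along those lines might look like, with all values illustrative (the objectClasses combine the standard person chain with the Kerberos schema; no key or kvno attributes are needed because the KeyDerivationService adds them):

```
dn: uid=hnelson,ou=users,dc=example,dc=com
objectClass: top
objectClass: person
objectClass: organizationalPerson
objectClass: inetOrgPerson
objectClass: krb5principal
objectClass: krb5kdcentry
cn: Horatio Nelson
sn: Nelson
uid: hnelson
userPassword: s3crEt
krb5PrincipalName: hnelson@EXAMPLE.COM
```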
16.You are now ready to start the server. Upon startup, the server will load the entries from the LDIF.
$ cd <trunk>/server-main
$ ./apacheds.sh
17.Request a ticket-granting ticket (TGT) using 'kinit'. If you have not already "logged in," you must
request a fresh TGT. Without a TGT, 'ldapsearch', for example, will fail with error "No credentials
cache found." Also, if you don't specify the user principal, kinit will guess the principal name based
on the logged-in user and the realm configured in the krb5.conf.
$ kinit hnelson@EXAMPLE.COM
Password for hnelson@EXAMPLE.COM: <s3crEt>
18.You should now be able to query the DIT using Kerberos credentials. GSSAPI will use the Kerberos
credentials (TGT) of the current user. GSSAPI supports the concept of "realm," but the realm is part
of the username, e.g. 'hnelson@EXAMPLE.COM'. This is in contrast to other SASL mechanisms
where the realm is separately and explicitly specified.
19.(OPTIONAL) List your Kerberos credentials. You'll see that in addition to a TGT, you also now
have a service ticket for the LDAP server.
$ klist -5fea
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: hnelson@EXAMPLE.COM
Valid starting Expires Service principal
06/04/07 20:42:19 06/05/07 20:41:37 krbtgt/EXAMPLE.COM@EXAMPLE.COM
Etype (skey, tkt): DES cbc mode with RSA-MD5, DES cbc mode with RSA-MD5
Addresses: (none)
06/04/07 20:42:22 06/05/07 20:41:37 ldap/ldap.example.com@EXAMPLE.COM
Etype (skey, tkt): DES cbc mode with RSA-MD5, DES cbc mode with RSA-MD5
Addresses: (none)
You can also use TLS (and SSL) with the SASL authentication mechanisms.
Note that SSL certificates may be verified, depending on the LDAP client, so you should use the
FQDN of the LDAP server that matches the cn in the certificate.
3.3.3. Resources
RFC 2830 - Lightweight Directory Access Protocol (v3): Extension for Transport Layer Security
http://www.faqs.org/rfcs/rfc2830.html
import javax.naming.NamingException;
import org.apache.directory.server.core.authn.AbstractAuthenticator;
import org.apache.directory.server.core.authn.LdapPrincipal;
import org.apache.directory.server.core.jndi.ServerContext;
import org.apache.directory.shared.ldap.aci.AuthenticationLevel;
import org.apache.directory.shared.ldap.name.LdapDN;

// A minimal skeleton reconstructed from the description below;
// exact signatures may vary between ApacheDS versions.
public class MyAuthenticator extends AbstractAuthenticator
{
    public MyAuthenticator()
    {
        // the authentication type this authenticator handles
        super( "simple" );
    }

    public LdapPrincipal authenticate( LdapDN bindDn, ServerContext ctx ) throws NamingException
    {
        // verify the credentials taken from the server context here, then
        // return the principal; throw LdapNoPermissionException on failure
        return new LdapPrincipal( bindDn, AuthenticationLevel.SIMPLE );
    }
}
You can optionally implement the init() method to initialize your authenticator class. This will be
called when the authenticator is loaded by ApacheDS during start-up.
When a client performs an authentication, ApacheDS will call the authenticate() method. You
can get the client authentication info from the server context. After you authenticate the
client, you need to return the authorization id. If the authentication fails, you should throw an
LdapNoPermissionException.
When there are multiple authenticators registered with the same authentication type, ApacheDS will
try to use them in the order they were registered. If one fails it will use the next one, until it finds one
that successfully authenticates the client.
To tell ApacheDS to load your custom authenticators, you need to specify them in the server.xml. You
can also optionally specify the location of a .properties file containing the initialization parameters.
See the following example:
server.authenticators=myauthenticator yourauthenticator
server.authenticator.class.myauthenticator=com.mycompany.MyAuthenticator
server.authenticator.properties.myauthenticator=myauthenticator.properties
server.authenticator.class.yourauthenticator=com.yourcompany.YourAuthenticator
server.authenticator.properties.yourauthenticator=yourauthenticator.properties
3.5. Authorization
ApacheDS uses an adaptation of the X.500 basic access control scheme in combination with X.500
subentries to control access to entries and attributes within the DIT. This document will show you
how to enable the basic access control mechanism and how to define access control information to
manage access to protected resources.
Once enabled, operations are denied by default until access is granted using an ACIItem. For this
reason enabling basic access controls is a configuration option.
To turn on the basic access control mechanism you need to set the accessControlEnabled property
in the configuration to true. This can be set programmatically on the StartupConfiguration or via the
server.xml.
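In server.xml the setting amounts to a one-line property; a sketch, following the property style used elsewhere in this guide:

```
<property name="accessControlEnabled" value="true" />
```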
There is one exception to the rule of consulting entryACI attributes within ApacheDS: add
operations do not consult the entryACI within the entry being added. This is a security
precaution. If this were allowed, users could arbitrarily add entries wherever they wanted by putting
entryACI into the new entry being added. This could compromise the DSA.
Prescriptive ACI can save much effort when trying to control access to a collection of resources.
Prescriptive ACI can even be specified to apply access controls to entries that do not yet exist within
the DIT. They are a very powerful mechanism and for this reason they are the preferred mechanism
for managing access to protected resources. ApacheDS is optimized specifically for managing access
to collections of entries rather than point entries themselves.
Users should try to avoid entry ACIs whenever possible, and use prescriptive ACIs instead. Entry
ACIs are more for managing exceptional cases and should not be used excessively.
This however is not the most intuitive mechanism to use for explicitly controlling access to subentries.
A more explicit mechanism is used to specify ACIs specifically for protecting subentries. ApacheDS
uses the multivalued operational attribute, subentryACI , within administrative entries to control
access to immediately subordinate subentries.
Protection policies for ACIs themselves can be managed within the entry of an administrative point.
We start with simple examples that focus on different protection mechanisms offered by the ACIItem
syntax. We do this instead of specifying the grammar, which is not the best way to learn a language.
Please don't go any further until you have read up on the use of Subentries. Knowledge of
subentries, subtreeSpecifications, administrative areas, and administrative roles is required
to properly digest the following material.
Before going on to these trails you might want to set up an Administrative Area for managing
access control via prescriptiveACI. Both subentryACI and prescriptiveACI require the presence of an
Administrative Point entry. For more information and code examples see Section 3.5.4, “ACAreas”.
3.5.4. ACAreas
3.5.4.1. Introduction
This guide will show you how to create an Access Control Specific Area and Access Control
Inner Areas for administering access controls within ApacheDS. Basic knowledge of the X.500
administrative model is presumed along with an understanding of the Basic Access Control Scheme
in X.501. For quick primers please take a look at the following documentation:
Under the AP, you can add subentries that contain prescriptiveACI attributes. Zero or more subentries
can be added, each with one or more prescriptiveACI. These subentries apply access control
information (ACI) in these prescriptiveACI attributes to collections of entries within the ACSA.
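Putting those pieces together, a subentry carrying a prescriptiveACI might be sketched as follows. The DN, cn, base, and ACI values are illustrative, and the accessControlSubentry objectClass is an assumption based on ApacheDS conventions:

```
dn: cn=salesReadAccess,dc=example,dc=com
objectClass: top
objectClass: subentry
objectClass: accessControlSubentry
cn: salesReadAccess
subtreeSpecification: { base "ou=sales" }
prescriptiveACI: { identificationTag "grantSalesRead", precedence 10,
 authenticationLevel simple, itemOrUserFirst userFirst: { userClasses
 { allUsers }, userPermissions { { protectedItems { entry,
 allUserAttributeTypesAndValues }, grantsAndDenials { grantRead,
 grantReturnDN, grantBrowse } } } } }
```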
Most of the time users will create partitions in the server and set the root context of the partition
(its suffix) to be the AP for an ACSA. For example the default server.xml for ApacheDS ships with a
partition with the suffix, 'dc=example,dc=com'. We can use this suffix entry as the AP and our ACSA
can cover all entries under and including 'dc=example,dc=com'.
The code below binds to the server as admin ('uid=admin,ou=system') and modifies the suffix entry
to become an ACSA. Note that we check to make sure the attribute does not already exist before
attempting the add operation.
...
// Get a DirContext on the dc=example,dc=com entry
Hashtable env = new Hashtable();
env.put( "java.naming.factory.initial", "com.sun.jndi.ldap.LdapCtxFactory" );
env.put( "java.naming.provider.url", "ldap://localhost:389/dc=example,dc=com" );
env.put( "java.naming.security.principal", "uid=admin,ou=system" );
env.put( "java.naming.security.credentials", "secret" );
env.put( "java.naming.security.authentication", "simple" );
DirContext ctx = new InitialDirContext( env );

// Lookup the administrativeRole operational attribute on the suffix entry
Attributes current = ctx.getAttributes( "", new String[] { "administrativeRole" } );
Attribute administrativeRole = current.get( "administrativeRole" );

// If it does not exist or has no ACSA value then add the attribute
if ( administrativeRole == null || ! administrativeRole.contains( "accessControlSpecificArea" ) )
{
Attributes changes = new BasicAttributes( "administrativeRole", "accessControlSpecificArea", true );
ctx.modifyAttributes( "", DirContext.ADD_ATTRIBUTE, changes );
}
...
3.5.5. AllowSelfPasswordModify
{
identificationTag "allowSelfAccessAndModification",
precedence 14,
authenticationLevel none,
itemOrUserFirst userFirst:
{
userClasses { thisEntry },
userPermissions
{
{ protectedItems {entry}, grantsAndDenials { grantModify, grantBrowse, grantRead } },
{ protectedItems {allAttributeValues {userPassword}}, grantsAndDenials { grantAdd, grantRemove } }
}
}
}
3.5.5.1. Commentary
Note that two different user permissions are used to accurately specify self access and self modification
of the userPassword attribute within the entry. So with the first userPermission of this ACI a user
would be able to read all attributes and values within his/her entry. They also have the ability to modify
the entry but this is moot since they cannot add, remove or replace any attributes within their entry. The
second user permission completes the picture by granting add and remove permissions to all values
of userPassword. This means the user can replace the password.
3.5.6. EnableSearchForAllUsers
3.5.6.1. Enable Authenticated Users to Browse and Read Entries
in a Subtree
We presume this is your first encounter and so many bases will be covered this time around.
Every other trail will build on this information. So expect a little less to read as you gain
momentum.
Since the entire directory is locked down for all but the superuser, you're going to want to grant read
and browse access to users for certain regions of the DIT. This will probably be the first thing you'll
want to do after turning on access controls.
Before you can add a subentry with the prescriptiveACI you'll need to create an administrative area.
For now we'll make the root of the partition the administrative point (AP). Every entry including this
entry and those underneath it will be part of the autonomous administrative area for managing access
controls. To do this we must add the administrativeRole operational attribute to the AP entry. See ???
for code and information about creating access control administrative areas.
Before we cover the anatomy of this ACIItem, you might want to add the subentry and test access
with a normal non-super user to make sure access is now granted.
{
identificationTag "enableSearchForAllUsers",
precedence 14,
authenticationLevel simple,
itemOrUserFirst userFirst:
{
userClasses { allUsers },
userPermissions
{
{
protectedItems {entry, allUserAttributeTypesAndValues},
grantsAndDenials { grantRead, grantReturnDN, grantBrowse }
}
}
}
}
There are several parameters to this simple ACIItem. Here's a brief explanation of each field and its
meaning or significance.
Field                 Description
identificationTag     A tag used to address the ACIItem within entries.
precedence            Determines which ACIItem applies when two or more ACIItems conflict.
authenticationLevel   The minimum authentication requirement for the ACI to be applied.
itemOrUserFirst       Determines the order: item permissions or user permissions first.
userClasses           The set of users the permissions apply to.
userPermissions       Permissions on the protected items.
3.5.6.1.4.1. identificationTag
The identificationTag is just that: a tag. It's often used with a substring search filter to look up a specific
ACIItem within an entry. One or more ACIItems may be present within a subentry, zero or more in
entries, so this serves as a means to address the ACIItem within entries.
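For instance, assuming the tag from the example above, a substring filter like the following sketch could be used to find the subentry holding the ACIItem (whether substring matching is available depends on the matching rules configured for the ACI attributes):

```
(prescriptiveACI=*enableSearchForAllUsers*)
```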
3.5.6.1.4.2. precedence
Precedence is used to determine the ACI to apply when two or more ACIItems applied to an entry
conflict. The ACIItem with the highest precedence is applied over other conflicting ACIItems.
When two or more conflicting ACIItems are encountered with the same precedence, the
ACIItems with denials overpower ACIItems with grants.
Right now the use of this field may not mean too much to you. We're dealing with a very simple
situation with a single access control area. Later as you add more subentries their subtreeSpecifications
may define collections that intersect. When this happens two or more conflicting ACIItems may apply
to the same entry. Precedence is then applied to determine which permissions apply.
Another complex situation requiring precedence is the use of inner areas. These nested inner
administrative areas overlap and so do their effects. The authority within an AA may deny some
operation to all entries but grant access to subentries of inner areas so minor authorities can control
access to inner areas. Their grants to users may need to have a higher precedence than denials in outer
areas. Such situations will arise and precedence will need to be used. In this example we just assign
an arbitrary value to the precedence.
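To illustrate, here is a sketch of two conflicting ACIItems; the identificationTags and precedence values are hypothetical. Because the denial carries the higher precedence, it wins wherever both items apply to the same entry:

```
{
identificationTag "denySensitiveEntries",
precedence 20,
authenticationLevel simple,
itemOrUserFirst userFirst:
{
userClasses { allUsers },
userPermissions { { protectedItems {entry}, grantsAndDenials { denyRead, denyBrowse } } }
}
}
{
identificationTag "grantGeneralAccess",
precedence 10,
authenticationLevel simple,
itemOrUserFirst userFirst:
{
userClasses { allUsers },
userPermissions { { protectedItems {entry}, grantsAndDenials { grantRead, grantBrowse } } }
}
}
```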
3.5.6.1.4.3. authenticationLevel
The authenticationLevel is the minimum authentication requirement for the requestor for the ACI to be
applied. According to X.501:
The authenticationLevel can have three values: none, simple and strong. It's used to associate
permissions with the level of trust in users. For none, the identity of the user is anonymous or does
not matter. The user can be anyone. The simple authenticationLevel means the user has authenticated
but is using a simple bind with clear text passwords. The strong authenticationLevel represents users
that bind to the directory using strong authentication mechanisms via SASL.
SASL can allow anonymous binds as well, so there is a distinction here. Using SASL alone does
not mean the authenticationLevel is strong. As we add SASL mechanisms to the server, we'll qualify
each one with none, simple or strong. This will be reflected in the authenticationLevel property of the
principal making requests.
3.5.6.1.4.4. itemOrUserFirst
This field describes the order of information within the ACI: whether protected items are described first,
or user classes and permissions are described first. For simplicity we will only describe the userFirst
configuration in this tutorial.
3.5.6.1.4.5. userClasses
UserClasses is used to list the sets of users to which this permission applies. Several mechanisms can
be used here to define userClasses. They can be defined by name per user, by group membership, or
by the superset of all users possible and many more. In our example we have applied the ACI to all
users that have authenticated by simple or strong means.
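As a sketch, the main userClasses constructs look like this (the DNs are hypothetical):

```
userClasses
{
allUsers,
thisEntry,
name { "uid=jdoe,ou=users,ou=system" },
userGroup { "cn=Administrators,ou=groups,ou=system" }
}
```

allUsers matches every user, thisEntry the user whose own entry is being accessed, name lists individual users by DN, and userGroup matches the members of the named group entry.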
3.5.6.1.4.6. userPermissions
These are the permissions granted or denied to those users included by the userClasses field. The
grants or denials however are qualified by the protected items operated upon. In our example we grant
read, return DN and browse to all entries, their attributes and all possible values they may have.
3.5.7. UserClasses
3.5.7.1. What are User Classes?
A large part of managing access control information involves the specification of who can perform
which operation on what protected resource (entries, attributes, values etc). At evaluation time a
requestor of an operation is known. The identity of the requestor is checked to see if it falls into the
set of users authorized to perform the operation. User classes are hence definitions of a set of zero or
more users to which permissions apply. Several constructs exist for specifying a user class.
These are pretty intuitive. Two other user classes may be a bit less easy to understand or may require
some explanation; we discuss them in the sections below.
ApacheDS associates users within a group using the groupOfNames and groupOfUniqueNames
objectClasses. To define groups an entry of either of these objectClasses is added anywhere in the
server's DIT. member or uniqueMember attributes, whose values are the DNs of user entries, are
present within the entry to represent membership in the group.
Although such group entries can be added anywhere within the DIT to be recognized by the
Authorization subsystem, a recommended convention exists. Use the 'ou=groups' container
under a namingContext/partition within the server to localize groups. Most of the time group
information can be stored under 'ou=groups,ou=system'.
Just like the name construct, the userGroup construct takes a single parameter: the DN of the group
entry. During ACI evaluation ApacheDS checks to see if the requestor's DN is contained within the
group. Below is a section from X.501 specification which explains just how this is done:
In order to determine whether the requestor is a member of a userGroup user class, the following
criteria apply:
• The entry named by the userGroup specification shall be an instance of the object class
groupOfNames or groupOfUniqueNames.
• The name of the requestor shall be a value of the member or uniqueMember attribute of that entry.
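A minimal group entry satisfying these criteria might look like this in LDIF (the group name and members are hypothetical):

```ldif
dn: cn=Administrators,ou=groups,ou=system
objectClass: top
objectClass: groupOfNames
cn: Administrators
member: uid=jbean,ou=users,ou=system
member: uid=jdoe,ou=users,ou=system
```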
For more information on how to define a subtreeSpecification please see ??? and Section 2.3, “The
Administrative Model” .
For this purpose a subtree is not refined, meaning it does not evaluate refinement filters. This
restricts the information needed to make a determination to just the DN of the requestor
and not the entry of the requestor.
{ identificationTag "deleteAci",
precedence 255,
authenticationLevel simple,
itemOrUserFirst userFirst:
{
userClasses
{
thisEntry,
name { "uid=jbean,ou=users,ou=system" },
name { "uid=jdoe,ou=users,ou=system" },
userGroup { "cn=Administrators,ou=groups,ou=system" }
},
userPermissions { { protectedItems {entry}, grantsAndDenials { grantRemove } } }
}
}
• Section 4.1.5, “Using Apache Directory Studio Schema Editor to load the new schema elements”
• Section 4.1.6, “Using LDIF to load schema elements in RFC 4512 format”
• Section 4.1.8, “Using JNDI to add schema elements in RFC 4512 format programmatically”
4.1.1. Motivation
The schema of an LDAP server is comprised of object classes, attributes, syntaxes and matching rules.
Basically it defines which entries are allowed within the server and how the server should handle them.
In contrast to the 1.0 release, ApacheDS 1.5.0 comes with a completely redesigned schema subsystem.
It enables dynamic schema updates, like the creation of new attribute types or object classes at runtime
(i.e. without restarting the server).
No. ApacheDS comes with a comprehensive set of predefined, standardized schema elements
(like inetOrgPerson ). It is quite common to solely use the predefined schema. The same holds
true for other directory servers, by the way.
In the following text the addition of user defined schema elements to the schema is described in tutorial
style.
The output (formatted as defined in RFC 4512 [http://www.ietf.org/rfc/rfc4512.txt] ) contains all things
which are interesting to know about an object class (required attributes, optional attributes etc.), but
is not easy to read by a human user. It is therefore often appropriate to use a GUI tool to browse the
schema (which basically performs the same search operations but presents the output prettily). One
option is Apache Directory Studio [http://directory.apache.org/studio/] , an Eclipse based LDAP tool
set which contains a powerful graphical Schema browser:
The techniques described above work for all LDAP v3 compliant servers. The ability to browse the
schema gives us a chance to check whether our future changes to the schema really took place.
The schema subsystem of ApacheDS 1.5 stores the schema elements as entries in the DIT. You can
find them within a special partition with suffix ou=schema ; simply browse the content with your
favorite LDAP Browser. With Apache Directory Studio, it looks like this:
Browsing the schema like this gives a good impression of the ApacheDS implementation of the schema
subsystem and an even better way to analyze effects during schema updates. But keep in mind that the
storage scheme is server dependent; not all LDAP server implementations store the schema elements
in the DIT.
How is this accomplished? OIDs are assigned hierarchically: The owner of an OID is allowed to create
new IDs by simply appending numbers. S/he is also allowed to delegate ownership of newly created
OIDs to someone else. This way every person or organization is able to allocate an arbitrary number
of new OIDs after obtaining one from "higher command", and they are still unique world-wide.
But if you plan to use your schema elements in a production environment (an object class for instance
which describes employees with company specific attributes), or to ship your schema elements with
a product (e.g. a CRM or portal solution), you should definitely use unique OIDs. In order to do this
you have to obtain OIDs from a branch assigned to your company or organization (your network
administrators will be helpful here, do not invent OIDs without asking or obtaining a branch from
someone who owns the prefix OID). If your company or organization does not own an OID, there
are several options to obtain one; one is the IANA (Internet Assigned Numbers Authority). It is also
possible to get an OID branch as an individual.
You can ask for your own PEN (Private Enterprise Number) here: http://pen.iana.org/pen/PenApplication.page
It takes a few weeks to have a private OID assigned to you, so be patient, or do it early!
A ship entry is comprised of a mandatory value for common name ( cn ) of the ship, description values
and the number of guns ( numberOfGuns ). Thus a new object class ship and a new attribute type
numberOfGuns have to be added to the schema. There are different ways to accomplish the task. In
any case, we have to add the attribute type first, because the object class refers to it.
( 1.3.6.1.4.1.18060.0.4.3.2.1
NAME 'numberOfGuns' DESC 'Number of guns of a ship'
EQUALITY integerMatch SYNTAX 1.3.6.1.4.1.1466.115.121.1.27
SINGLE-VALUE
)
( 1.3.6.1.4.1.18060.0.4.3.3.1
NAME 'ship' DESC 'An entry which represents a ship'
SUP top STRUCTURAL
MUST cn MAY ( numberOfGuns $ description )
)
attributetype ( 1.3.6.1.4.1.18060.0.4.3.2.1
NAME 'numberOfGuns'
DESC 'Number of guns of a ship'
EQUALITY integerMatch
SYNTAX 1.3.6.1.4.1.1466.115.121.1.27
SINGLE-VALUE
)
objectclass ( 1.3.6.1.4.1.18060.0.4.3.3.1
NAME 'ship'
DESC 'An entry which represents a ship'
SUP top
STRUCTURAL
MUST cn
MAY ( numberOfGuns $ description )
)
In Eclipse with the Apache Directory Studio plugins installed (or alternatively the standalone RCP
application of Apache Directory Studio, if you prefer this), open the Schemas view.
When it is shown, press the Open a schema file button in its toolbar. In the file dialog, open
sevenSeas.schema. It is loaded and added to the schemas within the view.
Select Export For ApacheDS ... in the context menu of the sevenSeas elements. The schema will
be stored in an LDIF file which can directly be imported into ApacheDS (we choose the file name
sevenSeas.ldif [data/sevenSeas.ldif] ).
Use Apache Directory Studio to open a connection to your ApacheDS server, bind as administrator,
and select Import | LDIF Import ... in the context menu of the DIT. Choose the LDIF file previously
stored, and import the file. Alternatively, you can use command line tools like ldapmodify to load the
file. If no errors are displayed, you are done.
There are several options to check whether the addition has been successful. One is to use the browse
techniques described above. You can also use specific search commands like this one:
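For instance, a command line tool like ldapsearch (from OpenLDAP) can retrieve the object class definitions from the subschema entry. The following is only a sketch: host and port match the connection data used elsewhere in this chapter, and the admin password is a placeholder:

```
ldapsearch -h zanzibar -p 10389 -D "uid=admin,ou=system" -w <password> \
    -b "cn=schema" -s base "(objectclass=subschema)" objectclasses
```

The new ship object class should appear among the returned objectClasses values.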
Of course it is visible within the Apache Directory Studio UI as well. You will likely have to
refresh the schema to see the new elements in the schema browser.
Now you are done: The schema elements are ready to use. Feel free to add your fleet!
dn: cn=schema
changetype: modify
add: attributeTypes
attributeTypes: ( 1.3.6.1.4.1.18060.0.4.3.2.1
NAME 'numberOfGuns'
DESC 'Number of guns of a ship'
EQUALITY integerMatch
SYNTAX 1.3.6.1.4.1.1466.115.121.1.27
SINGLE-VALUE
)
-
add: objectClasses
objectClasses: ( 1.3.6.1.4.1.18060.0.4.3.3.1
NAME 'ship'
DESC 'An entry which represents a ship'
SUP top
STRUCTURAL
MUST cn
MAY ( numberOfGuns $ description )
)
-
java.naming.factory.initial=com.sun.jndi.ldap.LdapCtxFactory
java.naming.provider.url=ldap://zanzibar:10389/
java.naming.security.principal=uid=admin,ou=system
java.naming.security.credentials=******
java.naming.security.authentication=simple
import javax.naming.NamingException;
import javax.naming.directory.Attributes;
import javax.naming.directory.BasicAttributes;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;
schema.createSubcontext("AttributeDefinition/numberOfGuns", attrs);
}
}
and this one (file CreateObjectClass.java [data/CreateObjectClass.java] ) the object class ship
import javax.naming.NamingException;
import javax.naming.directory.Attribute;
import javax.naming.directory.Attributes;
import javax.naming.directory.BasicAttribute;
import javax.naming.directory.BasicAttributes;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;
schema.createSubcontext("ClassDefinition/ship", attrs);
}
}
After successfully running the programs against the LDAP server defined in the connection data, the
schema elements can also be found within a schema browser (you may have to refresh the schema
here as well). They are ready to use.
A final note for the JNDI approach: With the code shown above, the elements will be created by the
ApacheDS schema subsystem within the other schema (in contrast to the sevenSeas schema ). You
can also create them programmatically in a specific schema with the help of the X-SCHEMA attribute:
...
attrs.put("X-SCHEMA", "sevenSeas");
...
but in this case you have to create the skeleton entries for the sevenSeas schema before (loading an
LDIF file like the one generated by Apache Directory Studio as depicted above). Otherwise you will
get a NamingException ("LDAP: error code 54 - failed to modify entry cn=schema ...").
import javax.naming.NamingException;
import javax.naming.directory.Attributes;
import javax.naming.directory.BasicAttributes;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;
4.1.9. Resources
• Internet Assigned Numbers Authority (IANA)
http://www.iana.org
4.2.1. Introduction
Collective attributes are attributes whose values are shared across a collection of entries. It's very
common to encounter situations where a bunch of entries have the same value for an attribute.
Collective attributes for LDAP are defined in RFC 3671 [http://www.faqs.org/rfcs/rfc3671.html] .
ApacheDS implements this RFC.
Rather than manage the value for this attribute in each entry, a single collective attribute can be used
in a subentry. Changes to the value of this attribute are immediately reflected in the entries
selected by the subtreeSpecification of the subentry. For more information on specifying subtrees take
a look at ???.
4.2.2.1. Example
For the use case above we can presume a partition at the namingContext 'dc=example,dc=com' with an
'ou=engineering' entry below containing users from the engineering team in Sunnyvale. Let's presume
no AA has yet been defined so we have to create one. We'll set the partition root 'dc=example,dc=com'
as the AP of an AA that spans the entire subtree. For this simple example the AA will be autonomous
for the collective aspect. Setting this up is just a matter of modifying the 'dc=example,dc=com' entry so
it contains the operational attribute administrativeRole with the value collectiveAttributeSpecificArea.
The code below sets up this AAA for collective attribute administration.
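A sketch of this modification in LDIF form, using the partition suffix presumed above:

```ldif
dn: dc=example,dc=com
changetype: modify
add: administrativeRole
administrativeRole: collectiveAttributeSpecificArea
```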
Now 'dc=example,dc=com' is the AP for a collective attribute AAA that spans the entire subtree under
and including it down to every leaf entry. All that remains is the addition of the subentry with the
collective attributes we want included in the entries of all engineering users. Here's what the LDIF
would look like for this subentry given that its commonName is 'engineeringLocale'.
dn: cn=engineeringLocale,dc=example,dc=com
objectClass: top
objectClass: subentry
objectClass: collectiveAttributeSubentry
cn: engineeringLocale
c-l: Sunnyvale
subtreeSpecification: {base "ou=engineering", minimum 4}
4. Its subtreeSpecification excludes entries whose number of DN name components is less than 4
Note that the minimum value of 4 is used in the subtreeSpecification to make sure that the entry
'ou=engineering,dc=example,dc=com' does not have c-l: Sunnyvale added to it. It's got 3 components
to the DN so minimum 4 chops it out of the collection.
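To make the chop concrete, count the name components of some hypothetical DNs under the AP (uid=jdoe is an invented example entry):

```
dc=example,dc=com                            2 components: the AP itself, below minimum 4
ou=engineering,dc=example,dc=com             3 components: below minimum 4, excluded
uid=jdoe,ou=engineering,dc=example,dc=com    4 components: selected, receives c-l: Sunnyvale
```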
We have included this list from RFC 3671 into the collective.schema which comes standard with
ApacheDS.
will go through a very simple example step by step to show you how this can be done.
.
|-- pom.xml
`-- src
`-- main
`-- java
`-- org
`-- apache
`-- directory
`-- EmbeddedADS.java
• data/EmbeddedADS.java
• pom.xml [data/pom.xml]
If you don't want to use Maven you need to add the following dependencies to your classpath
in order to compile and run this sample.
• apacheds-all-1.5.5.jar
• slf4j-api-1.5.6.jar
• slf4j-log4j12-1.5.6.jar
• log4j-1.2.14.jar
package org.apache.directory;
/**
* A simple example exposing how to embed Apache Directory Server
* into an application.
*
* @author <a href="mailto:dev@directory.apache.org">Apache Directory Project</a>
* @version $Rev: 985583 $, $Date: 2010-08-14 23:19:48 +0200 (Sat, 14 Aug 2010) $
*/
public class EmbeddedADS
{
...
/**
* Creates a new instance of EmbeddedADS. It initializes the directory service.
*
* @throws Exception If something went wrong
*/
public EmbeddedADS() throws Exception
{
init();
}
/**
* Main class. We just do a lookup on the server to check that it's available.
*
* @param args Not used.
*/
public static void main( String[] args ) //throws Exception
{
try
{
// Create the server
EmbeddedADS ads = new EmbeddedADS();
// Read an entry
Entry result = ads.service.getAdminSession().lookup( new LdapDN( "dc=apache,dc=org" ) );
}
catch ( Exception e )
{
// Ouch, something went wrong
e.printStackTrace();
}
}
As you can see, we first initialize the server, and immediately do a lookup to check that we can read
an entry from it.
A partition is a storage point associated with a DN, the root point for this partition. It's a bit like
a mount point on Unix. We also need a context entry associated with this DN.
Here, we will create the apache partition, associated with the 'dc=apache,dc=org' DN.
...
/**
* Initialize the server. It creates the partition, adds the index, and
* injects the context entries for the created partitions.
*
* @throws Exception if there were some problems while initializing the system
*/
private void init() throws Exception
{
// Initialize the LDAP service
service = new DefaultDirectoryService();
...
We disabled the ChangeLog service because it's useless in our case. As you can see, the steps to
initialize the server are:
• add a partition
One important point: as the data are persistent, we have to check that the added context entry does
not exist already before adding it.
Some helper methods have been used: addPartition and addIndex. Here they are:
...
/**
* Add a new partition to the server
*
* @param partitionId The partition Id
* @param partitionDn The partition DN
* @return The newly added partition
* @throws Exception If the partition can't be added
*/
private Partition addPartition( String partitionId, String partitionDn ) throws Exception
{
// Create a new partition with the given id and suffix
Partition partition = new JdbmPartition();
partition.setId( partitionId );
partition.setSuffix( partitionDn );
service.addPartition( partition );
return partition;
}
/**
* Add a new set of indexes on the given attributes
*
* @param partition The partition on which we want to add index
* @param attrs The list of attributes to index
*/
private void addIndex( Partition partition, String... attrs )
{
// Index some attributes on the apache partition
HashSet<Index<?, ServerEntry>> indexedAttributes = new HashSet<Index<?, ServerEntry>>();

for ( String attribute : attrs )
{
indexedAttributes.add( new JdbmIndex<String, ServerEntry>( attribute ) );
}
((JdbmPartition)partition).setIndexedAttributes( indexedAttributes );
}
...
That's it! (The attached code will contain the needed imports.)
When the main method is run, you should obtain something like:
That's it!
The idea is to use ADS as an embedded server for LDAP JUnit tests. We will build an environment in
which it will be convenient to test LDAP applications.
We also want to avoid launching the server for every test, as it's an expensive operation. We have built
ApacheDS so that you can start a server, inject some data, launch a test, then revert the data and go
on to another test. At the end of the tests, the server is stopped.
We will simply launch only one server (if one wants to test referrals, it might be necessary to initialize
2 or more servers).
We have to define a layout for the files and directories we will use in this tutorial. Let's use the Maven
layout:
/
|
+--src
|
+--test
|
+--java : we will put all the sources into this directory
|
+--resources : we will put the resources files into this directory
package org.apache.directory.server.test;
import javax.naming.NamingEnumeration;
import javax.naming.directory.SearchControls;
import javax.naming.directory.SearchResult;
import javax.naming.ldap.LdapContext;
import org.apache.directory.server.annotations.CreateLdapServer;
import org.apache.directory.server.annotations.CreateTransport;
import org.apache.directory.server.core.integ.AbstractLdapTestUnit;
import org.apache.directory.server.core.integ.FrameworkRunner;
import org.apache.directory.server.ldap.LdapServer;
import org.junit.Test;
import org.junit.runner.RunWith;
// static imports for the helper and the assertion used below
// (the helper's location is an assumption; check the apacheds-server-integ jar)
import static org.apache.directory.server.integ.ServerIntegrationUtils.getWiredContext;
import static org.junit.Assert.assertTrue;
/**
* Tests
*/
public class SimpleTest extends AbstractLdapTestUnit
{
@Test
public void testSearchAllAttrs() throws Exception
{
LdapContext ctx = ( LdapContext ) getWiredContext( ldapServer, null ).lookup( "ou=system" );
NamingEnumeration<SearchResult> res = ctx.search( "", "(objectClass=*)", new SearchControls() );
assertTrue( res.hasMore() );
while ( res.hasMoreElements() )
{
SearchResult result = ( SearchResult ) res.next();
System.out.println( result.getName() );
}
}
}
In order to have this test running, you will need to declare some libraries. The best solution is clearly
to define a pom.xml file for that purpose. Here it is:
<description>
Unit test for ApacheDS Server JNDI Provider
</description>
<dependencies>
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
<version>4.7</version>
<scope>provided</scope>
</dependency>
<dependency>
<groupId>log4j</groupId>
<artifactId>log4j</artifactId>
<version>1.2.14</version>
<scope>provided</scope>
</dependency>
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-log4j12</artifactId>
<version>1.5.10</version>
<scope>provided</scope>
</dependency>
<dependency>
<groupId>org.apache.directory.server</groupId>
<artifactId>apacheds-all</artifactId>
<version>1.5.6-SNAPSHOT</version>
</dependency>
<dependency>
<groupId>org.apache.directory.server</groupId>
<artifactId>apacheds-server-integ</artifactId>
<version>1.5.6-SNAPSHOT</version>
</dependency>
<dependency>
<groupId>org.apache.directory.server</groupId>
<artifactId>apacheds-core-integ</artifactId>
<version>1.5.6-SNAPSHOT</version>
</dependency>
<dependency>
<groupId>commons-io</groupId>
<artifactId>commons-io</artifactId>
<version>1.4</version>
</dependency>
</dependencies>
<build>
<pluginManagement>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<version>2.0.2</version>
<configuration>
<source>1.5</source>
<target>1.5</target>
<optimize>true</optimize>
<showDeprecations>true</showDeprecations>
<encoding>ISO-8859-1</encoding>
</configuration>
</plugin>
<plugin>
<artifactId>maven-surefire-plugin</artifactId>
<configuration>
<argLine>-Xmx1024m</argLine>
</configuration>
</plugin>
</plugins>
</pluginManagement>
</build>
</project>
Is that it? Pretty much. All you have to do now is to run the test, using a Java 5 JVM and Maven 2.0.9:
-------------------------------------------------------
T E S T S
-------------------------------------------------------
Running SimpleTest
log4j:WARN No appenders could be found for logger (org.apache.directory.server.integ.SiRunner).
log4j:WARN Please initialize the log4j system properly.
Ldap service started.
Ldap service stopped.
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.114 sec
Results :
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESSFUL
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 10 seconds
[INFO] Finished at: Fri Nov 21 15:40:32 CET 2008
[INFO] Final Memory: 11M/83M
[INFO] ------------------------------------------------------------------------
#user:~/ws-ads-1.5.4/UnitTest$
You have written your very first test using the test framework provided in ADS 1.5.5!
• SearchTests.java [data/unit-tests/SearchTests.java]
• pom.xml [data/unit-tests/pom.xml]
5.2.2.1.1. Annotations
The first interesting part is the annotations we are using. As we said, the server is launched
automatically, and we are using the ChangeLog mechanism to restore the base to a pristine state
between each test. This is done with annotations.
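As a sketch, the relevant annotations on the test class look roughly like this, matching the imports shown earlier (the exact annotation attributes may differ between versions):

```java
@RunWith ( FrameworkRunner.class )
@CreateLdapServer ( transports = { @CreateTransport( protocol = "LDAP" ) } )
public class SimpleTest extends AbstractLdapTestUnit
{
    ...
}
```

FrameworkRunner starts the server once for the class, and CreateLdapServer declares the transport the embedded server should listen on.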
The initialization will update a static ldapServer instance which is declared in the inherited
AbstractLdapTestUnit class.
/**
* Creates a JNDI LdapContext with a connection over the wire using the
* SUN LDAP provider. The connection is made using the administrative
* user as the principalDN. The context is to the rootDSE.
*
* @param ldapServer the LDAP server to get the connection to
* @return an LdapContext as the administrative user to the RootDSE
* @throws Exception if there are problems creating the context
*/
public static LdapContext getWiredContext( LdapServer ldapServer )
/**
* Creates a JNDI LdapContext with a connection over the wire using the
* SUN LDAP provider. The connection is made using the administrative
* user as the principalDN. The context is to the rootDSE.
*
* @param ldapServer the LDAP server to get the connection to
* @param controls the request controls to use when creating the context
* @return an LdapContext as the administrative user to the RootDSE
* @throws Exception if there are problems creating the context
*/
public static LdapContext getWiredContext( LdapServer ldapServer, Control[] controls )
/**
* Creates a JNDI LdapContext with a connection over the wire using the
* SUN LDAP provider. The connection is made using the given principal DN
* and password. The context is to the rootDSE.
*
* @param ldapServer the LDAP server to get the connection to
* @param principalDn the DN of the user to bind as
* @param password the password of the user
* @return an LdapContext as the given user to the RootDSE
* @throws Exception if there are problems creating the context
*/
public static LdapContext getWiredContext( LdapServer ldapServer, String principalDn, String password )
• first, you have to add this API jar to the dependencies in the pom.xml file
• second, you will use the getWiredConnection() method instead of getWiredContext().
The API is:
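The two overloads are, as a sketch (these signatures are assumptions modeled on the getWiredContext helpers described above; check the companion utility class in your version):

```java
public static LdapConnection getWiredConnection( LdapServer ldapServer )

public static LdapConnection getWiredConnection( LdapServer ldapServer, String principalDn, String password )
```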
Both methods are similar to the getWiredContext method described before, except that they
return an LdapConnection instance.
The setUp() method will be completed with all the needed instructions to create a new partition.
import java.io.File;
import java.util.HashSet;
import java.util.Hashtable;
import java.util.Set;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.naming.directory.Attribute;
import javax.naming.directory.Attributes;
import javax.naming.directory.BasicAttribute;
import javax.naming.directory.BasicAttributes;
import javax.naming.directory.DirContext;
import org.apache.directory.server.core.configuration.MutablePartitionConfiguration;
import org.apache.directory.server.unit.AbstractServerTest;
...
/**
* Initialize the server.
*/
public void setUp() throws Exception
{
// Add partition 'sevenSeas'
MutablePartitionConfiguration pcfg = new MutablePartitionConfiguration();
pcfg.setName( "sevenSeas" );
pcfg.setSuffix( "o=sevenseas" );
// Create the set of partition configurations and register ours
Set<MutablePartitionConfiguration> pcfgs = new HashSet<MutablePartitionConfiguration>();
pcfgs.add( pcfg );
configuration.setContextPartitionConfigurations( pcfgs );
// Now, let's call the upper class which is responsible for the
// partitions creation
super.setUp();
}
Ok, now the partition sevenSeas should be created. How can we be sure of that? Let's write a test to
replace the emptytest() method:
/**
* Test that the partition has been correctly created
*/
public void testPartition() throws NamingException
{
Hashtable<Object, Object> env = new Hashtable<Object, Object>( configuration.toJndiEnvironment() );
The test should succeed. Is that all? Well, almost. As you can see, a working space has been created
("server-work", at the end of the setup). Do we have to take care of this working space? No: it is
cleaned by the super class!
So everything is fine, the partition is up and running, and you are ready to add more tests.
The setup has tried to load an LDIF file to inject some data into the partition, but as we didn't specify
any LDIF file to be loaded, nothing was done. Let's add some data!
First, we need a valid LDIF file containing the entries. We will create two branches in our
sevenSeas organization:
dn: ou=groups,o=sevenSeas
objectClass: organizationalUnit
objectClass: top
ou: groups
description: Contains entries which describe groups (crews, for instance)
dn: ou=people,o=sevenSeas
objectClass: organizationalUnit
objectClass: top
ou: people
description: Contains entries which describe persons (seamen)
Save it as a text file into a directory where we will be able to read it directly. But where?
We have created the test in the directory src/test/java/org/apache/directory/demo; this is the Maven
way to organize the sources, as seen before. Let's create another directory for the resources: src/test/
resources/org/apache/directory/demo. This is the place where we will save the LDIF file.
Now, to let the server know about the LDIF file, just add this line after the call to the setUp() method:
...
// Now, let's call the upper class which is responsible for the
// partitions creation
super.setUp();
It is important to add the import after the setUp() call: you can't import data before the partition
has been created.
The getResourceAsStream call will automatically read the file from the resources directory, based
on the current class package.
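The resolution rule is worth spelling out. The sketch below is a simplified model of how a relative resource name is mapped to an absolute classpath location (it is not the JDK's actual implementation):

```java
public class ResourcePathDemo {

    /**
     * Simplified model of Class.getResourceAsStream() name resolution:
     * a relative name is prefixed with the package of the class (dots
     * replaced by slashes); an absolute name (leading '/') is used as-is.
     */
    static String resolve( Class<?> clazz, String name ) {
        if ( name.startsWith( "/" ) ) {
            return name;
        }
        Package pkg = clazz.getPackage();
        String prefix = ( pkg == null ) ? "" : pkg.getName().replace( '.', '/' ) + "/";
        return "/" + prefix + name;
    }

    public static void main( String[] args ) {
        // A class in org.apache.directory.demo loading "demo.ldif" would
        // therefore read /org/apache/directory/demo/demo.ldif
        System.out.println( resolve( String.class, "demo.ldif" ) ); // /java/lang/demo.ldif
    }
}
```

This is why saving the LDIF file under src/test/resources/org/apache/directory/demo makes it visible to a test class in the org.apache.directory.demo package.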
How can we be sure that the data has been imported ? Let's do a search request !
/**
* Create a context pointing to a partition
*/
private DirContext createContext( String partition ) throws NamingException
{
// Create a environment container
Hashtable<Object, Object> env =
new Hashtable<Object, Object>( configuration.toJndiEnvironment() );
// Create a context pointing at the requested partition
env.put( Context.PROVIDER_URL, partition );
DirContext appRoot = new InitialDirContext( env );
assertNotNull( appRoot );
return appRoot;
}
This method is added to the body of the test class. It is simple and straightforward: given a partition
name, it creates an initial context pointing to the requested partition and returns a directory context
on it.
/**
* Test that the partition has been correctly created
*/
public void testPartition() throws NamingException
{
DirContext appRoot = createContext( "o=sevenSeas" );
We just replaced the first lines with a call to the newly created createContext() method.
/**
* Test that the ldif data has correctly been imported
*/
public void testImport() throws NamingException
{
// Searching for all
Set<String> result = searchDNs( "(ObjectClass=*)", "o=sevenSeas", "",
SearchControls.ONELEVEL_SCOPE );
Here, we are looking for all the entries one level below the top of the partition; we should get only
two entries.
This is not enough, though: the searchDNs() method does not exist yet. It is a private method we have
created to avoid duplicating code all over the unit tests. Here is its code:
/**
* Performs a single level search from a root base and
* returns the set of DNs found.
*/
private Set<String> searchDNs( String filter, String partition, String base, int scope )
throws NamingException
{
DirContext appRoot = createContext( partition );
// Set the requested scope and perform the search
SearchControls controls = new SearchControls();
controls.setSearchScope( scope );
NamingEnumeration result = appRoot.search( base, filter, controls );
// Collect the DN of each entry found
Set<String> entries = new HashSet<String>();
while ( result.hasMore() )
{
{
SearchResult entry = ( SearchResult ) result.next();
entries.add( entry.getName() );
}
return entries;
}
As for the partition test, we call the createContext() method, then we just do some JNDI magic:
• creating a SearchControls instance with the requested scope, running the search, and collecting the
DN of each entry found.
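To see why the one-level search returns exactly the two branches, here is a naive model of ONELEVEL_SCOPE over DN strings (it ignores DN escaping and attribute case/whitespace normalization, so it is an illustration only):

```java
public class OneLevelScopeDemo {

    /**
     * Naive model of ONELEVEL_SCOPE: an entry matches if it is a direct
     * child of the search base, i.e. exactly one RDN below it. Ignores
     * DN escaping and normalization -- an illustration, not a parser.
     */
    static boolean isDirectChild( String entryDn, String baseDn ) {
        String suffix = "," + baseDn;
        if ( !entryDn.endsWith( suffix ) ) {
            return false;
        }
        String rdn = entryDn.substring( 0, entryDn.length() - suffix.length() );
        return !rdn.contains( "," ); // exactly one RDN below the base
    }

    public static void main( String[] args ) {
        System.out.println( isDirectChild( "ou=people,o=sevenSeas", "o=sevenSeas" ) ); // true
        System.out.println( isDirectChild( "cn=john,ou=people,o=sevenSeas", "o=sevenSeas" ) ); // false
    }
}
```

Only ou=groups and ou=people sit one level below o=sevenSeas, hence the two expected entries.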
Ok, let's go deeper into the server configuration. Working with the default schema is fine, but at some
point you may want to use your own ObjectClasses and AttributeTypes. Let's assume you have created
them, and that you have been through the process of generating the class files for them (this process is
described in Custom Schema [http://directory.apache.org/apacheds/DIRxSRVx10/custom-schema.html]) into a
new package (org.apache.directory.demo.schema) where all the generated files are put.
You will just have to add these lines at the end of the setUp() method (just before the call to the
super.setUp() method):
...
import org.apache.directory.server.core.configuration.MutablePartitionConfiguration;
import org.apache.directory.server.core.schema.bootstrap.AbstractBootstrapSchema;
import org.apache.directory.server.unit.AbstractServerTest;
import org.apache.directory.demo.schema.DemoSchema;
...
...
///
/// add the Demo schema
///
Set<AbstractBootstrapSchema> schemas = configuration.getBootstrapSchemas();
schemas.add( new DemoSchema() );
configuration.setBootstrapSchemas(schemas);
// Now, let's call the upper class which is responsible for the
// partitions creation
super.setUp();
}
If we launch the test, nothing special will happen, except that the test will succeed. That's not very
impressive...
5.2.8. Conclusion
Ok, this tutorial was a short one, but you now have everything you need to use Apache Directory
Server as a unit test engine for your LDAP application. Just create your own partition, define
your schema, import your LDIF file, and add all the tests you need. It's as simple as that.
If you have any problem, feel free to post a mail to users@directory.apache.org; we will be there
to help!
5.2.9. Resources
Embedding ApacheDS - Conference Materials [http://directory.apache.org/community%26resources/
embedding-apacheds.html]
My initial aim was to demonstrate embedding ApacheDS in a very simple, but nevertheless impressive
way. I thought about embedding the server in Apache Tomcat first. But then I got a better plan:
Creating a standard web application which wraps ApacheDS and can be deployed on any compliant
application server. ApacheDS in a war-archive!
• Section 5.3.2, “Step 1: The web component which starts and stops the server”
• Section 5.3.4, “Step 2: Adding functionality: A servlet which displays the Root DSE”
Although the concepts depicted below apply to all versions of ApacheDS (even before 1.0),
the configuration for starting and stopping the embedded server uses the style introduced with
ApacheDS 1.5.5. Be sure that you use this version of the server, or a later one.
The solution is quite simple. A web application carries all the necessary jar files for ApacheDS within
the lib directory of the WEB-INF folder. When the web application is started by the servlet container,
appropriate code has to be executed to start ApacheDS, and the server has to be stopped if the
web application goes down (for instance, if the server shuts down). There are (at least) two standards-
compliant ways to accomplish this:
• A Servlet (automatically started with the web application, using the lifecycle methods init and
destroy)
• A ServletContextListener
The following class diagram visualizes the complete example. The gray elements will be developed
in two steps and use the Servlet and ApacheDS APIs.
• contextInitialized() is executed if the web application is started by the servlet container, it starts
ApacheDS embedded
• contextDestroyed() is executed if the web application is stopped by the servlet container, it stops
the embedded server
Finally, the DirectoryService component is stored in the application context of the web application.
This is done in order to provide it to embedded clients in the same web app (see the servlet below
for an example).
The method contextDestroyed simply stops the protocol and shuts down the service.
StartStopListener.java
package org.apache.directory.samples.embed.webapp;
import java.io.File;
import javax.servlet.ServletContext;
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import org.apache.directory.server.core.DefaultDirectoryService;
import org.apache.directory.server.core.DirectoryService;
import org.apache.directory.server.ldap.LdapServer;
import org.apache.directory.server.protocol.shared.transport.TcpTransport;
/**
* A Servlet context listener to start and stop ApacheDS.
*
* @author <a href="mailto:dev@directory.apache.org">Apache Directory Project</a>
*/
public class StartStopListener implements ServletContextListener
{
private DirectoryService directoryService;
private LdapServer ldapServer;
/**
* Startup ApacheDS embedded.
*/
public void contextInitialized( ServletContextEvent evt )
{
try
{
directoryService = new DefaultDirectoryService();
directoryService.setShutdownHookEnabled( true );
// Create the LDAP server and attach a TCP transport
ldapServer = new LdapServer();
ldapServer.setDirectoryService( directoryService );
ldapServer.setTransports( new TcpTransport( 10389 ) );
directoryService.startup();
ldapServer.start();
// Store the DirectoryService in the application context
// so that embedded clients (e.g. servlets) can use it
ServletContext servletContext = evt.getServletContext();
servletContext.setAttribute( DirectoryService.JNDI_KEY, directoryService );
}
catch ( Exception e )
{
throw new RuntimeException( e );
}
}
/**
* Shutdown ApacheDS embedded.
*/
public void contextDestroyed( ServletContextEvent evt )
{
try
{
ldapServer.stop();
directoryService.shutdown();
}
catch ( Exception e )
{
throw new RuntimeException( e );
}
}
}
web.xml
<web-app>
...
<listener>
<listener-class>
org.apache.directory.samples.embed.webapp.StartStopListener
</listener-class>
</listener>
</web-app>
To use the archetype you'll need to check it out and install it to your local repository:
svn co http://svn.apache.org/repos/asf/directory/samples/trunk/apacheds-archetype-webapp
cd apacheds-archetype-webapp
mvn install
Then change to your preferred location to create the new project and execute the following command:
Then change to the created directory and run the following command:
mvn package
mvn jetty:run
One option is a command line tool like ldapsearch (see ApacheDS Basic User's Guide
[http://directory.apache.org/apacheds/1.5/apacheds-v15-basic-users-guide.html] for details on how to
connect to ApacheDS with such tools in general). Here is an example of how to connect as administrator
(simple bind) and fetch the Root DSE of our embedded ApacheDS instance:
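The command itself did not survive in this draft. A plausible invocation, assuming OpenLDAP's ldapsearch and a server listening on port 10389 (both assumptions, adjust to your setup), would be:

```shell
# Simple bind as the default admin, base-scope search on the empty DN
# (the Root DSE); "+" requests the operational attributes as well.
ldapsearch -h localhost -p 10389 \
  -D "uid=admin,ou=system" -w secret \
  -b "" -s base "(objectClass=*)" +
```

The empty base DN with base scope is the standard way to address the Root DSE of any LDAP server.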
Another option is a graphical LDAP client (see the ApacheDS Basic User's Guide [http://
directory.apache.org/apacheds/1.5/apacheds-v15-basic-users-guide.html] for details on how to
connect to ApacheDS with such tools in general).
With our popular Eclipse RCP application Directory Studio [http://directory.apache.org/studio/] for
instance, connecting goes like this: In the Connections view, select "New connection ...". Within a
wizard dialog, you provide the connection data (host name, port, bind DN and password).
After successfully connecting to the embedded ApacheDS, you can browse the tree, add and
manipulate entries and so on. If you check the connection properties, you can study the Root DSE
as well.
Here is a screen shot of the web based administration console of WebSphere Application Server 6.1
with the ApacheDS.war deployed and running; no changes to the deployment archive were needed.
The following servlet, which will be deployed together with the other class in the web archive, connects
to ApacheDS directly, i.e. via the internal JNDI provider. No network access is needed. In the doGet
method it performs a search operation against the Root DSE of the server, as the examples above do.
RootDseServlet.java
package org.apache.directory.samples.embed.webapp;
import java.io.PrintWriter;
import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.NamingEnumeration;
import javax.naming.directory.Attribute;
import javax.naming.directory.Attributes;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;
import javax.naming.directory.SearchControls;
import javax.naming.directory.SearchResult;
import javax.servlet.ServletContext;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.apache.directory.server.core.DirectoryService;
import org.apache.directory.server.core.jndi.CoreContextFactory;
/**
* A servlet which displays the Root DSE of the embedded server.
*
* @author <a href="mailto:dev@directory.apache.org">Apache Directory
* Project</a>
*/
public class RootDseServlet extends HttpServlet {
/**
 * Answers GET requests with a plain-text rendering of the Root DSE.
 */
protected void doGet(HttpServletRequest req, HttpServletResponse resp)
throws ServletException, java.io.IOException {
try {
resp.setContentType("text/plain");
PrintWriter out = resp.getWriter();
out.flush();
} catch (Exception e) {
throw new ServletException(e);
}
}
/**
* Creates an environment configuration for JNDI access.
*/
protected Hashtable<Object, Object> createEnv() {
// Access the directory in-VM through the core context factory
Hashtable<Object, Object> env = new Hashtable<Object, Object>();
env.put(Context.PROVIDER_URL, "");
env.put(Context.INITIAL_CONTEXT_FACTORY, CoreContextFactory.class.getName());
env.put(Context.SECURITY_PRINCIPAL, "uid=admin,ou=system");
env.put(Context.SECURITY_CREDENTIALS, "secret");
env.put(Context.SECURITY_AUTHENTICATION, "simple");
return env;
}
}
In order to make the servlet available to clients, it has to be declared in the deployment descriptor
web.xml. Here are the additions: a servlet named RootDseServlet for the class above, and a URL
mapping.
web.xml, extended
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE web-app PUBLIC "-//Sun Microsystems, Inc.//DTD Web Application 2.3//EN"
"http://java.sun.com/dtd/web-app_2_3.dtd">
<web-app>
...
<servlet>
<servlet-name>RootDseServlet</servlet-name>
<servlet-class>
org.apache.directory.samples.embed.webapp.RootDseServlet
</servlet-class>
</servlet>
<servlet-mapping>
<servlet-name>RootDseServlet</servlet-name>
<url-pattern>/RootDse</url-pattern>
</servlet-mapping>
</web-app>
Redeploy the web application. If you point to your Tomcat server with the appropriate URL
(http://localhost:8080/ApacheDS/RootDse), you'll see the content of the Root DSE as depicted below:
You are viewing pre-release documentation that contains changes to configuration that are
scheduled for the Apache Directory 1.5.1 release.
• Easy POJO embeddability for containers such as Geronimo, JBoss, and OSGi
Also due to the common configuration used by all protocol providers, individual protocols are no
longer enabled in MutableServerStartupConfiguration. Instead, individual services are enabled using
the parameter 'enabled' on their individual beans.
The Kerberos protocol provider is no longer configured with a Map of properties. All configuration
properties are now available on a bean and configurable using Spring XML.
The Change Password protocol provider is no longer configured with a Map of properties. All
configuration properties are now available on a bean and configurable using Spring XML.
The NTP protocol provider is no longer configured with a Map of properties. All configuration
properties are now available on a bean and configurable using Spring XML.
DNS has now been enabled in ServerContextFactory. The DNS protocol provider is no longer
configured with a Map of properties. All configuration properties are now available on a bean and
configurable using Spring XML.
This page lists all configuration parameters which can be used in conf/server.xml in Version 1.5.1.
For a more detailed description look at the corresponding section in the Advanced User's Guide.
if ( install != null )
{
log.info( "server: loading settings from {}", install.getConfigurationFile() );
...
env = ( Properties ) factory.getBean( "environment" );
...
The "environment" bean is read from the Spring configuration file, server.xml, shown below:
The bean name ("environment") may be renamed to something more explicit, like
"serverEnvironment", IMHO
The admin password should be changed when the server is started. Ideally, the server would
refuse to start if this password is left unchanged.
This last parameter was included with the recent SASL addition. The description does not give
much information about what this parameter is for, except that it relates to SASL authentication.
The parameter name is not meaningful, and another one should be selected, IMHO.
<!-- The FQDN of this SASL host, validated during SASL negotiation. -->
<property name="saslHost" value="ldap.example.com" />
<!-- The Kerberos principal name for this LDAP service, used by GSSAPI. -->
<property name="saslPrincipal" value="ldap/ldap.example.com@EXAMPLE.COM" />
<!-- The realms serviced by this SASL host, used by DIGEST-MD5 and GSSAPI. -->
<property name="saslRealms">
<list>
<value>example.com</value>
<value>apache.org</value>
</list>
</property>
<!-- The base DN containing users that can be SASL authenticated. -->
<property name="searchBaseDn" value="ou=users,ou=system" />
<!-- limits searches to max size of 1000 entries: default value is 100 -->
<bean class="org.apache.directory.server.ldap.support.extended.LaunchDiagnosticUiHandler"/>
</list>
</property>
</bean>
We have to figure out if we should reactivate this GSSAPI configuration, or not. Not a simple
matter, right now. If SASL is to be moved to another configuration, then maybe it should be
activated as a default value. TO BE DISCUSSED...
Just wanted to know if both UDP and TCP should be enabled, or if the server just accepts TCP?
<bean class="org.apache.directory.server.core.configuration.MutableInterceptorConfiguration">
<property name="name" value="replicationService" />
<property name="interceptor">
<bean class="org.apache.directory.mitosis.service.ReplicationService">
<property name="configuration">
<bean class="org.apache.directory.mitosis.configuration.ReplicationConfiguration">
<property name="replicaId">
<bean class="org.apache.directory.mitosis.common.ReplicaId">
<constructor-arg>
<value>instance_a</value>
</constructor-arg>
</bean>
</property>
<property name="serverPort" value="10390" />
<property name="peerReplicas" value="instance_b@localhost:10392" />
</bean>
</property>
</bean>
</property>
</bean>
LDAP Protocol configuration is currently being revamped in the SASL branch, as part of
making SASL configurable.
6.3.1. Before
Previously, LDAP protocol configuration existed in the MutableServerStartupConfiguration, along
with Core and Partition configuration.
<!-- limits searches to max size of 1000 entries: default value is 100 -->
<property name="maxSizeLimit" value="1000" />
<property name="extendedOperationHandlers">
<list>
<bean class="org.apache.directory.server.ldap.support.starttls.StartTlsHandler"/>
<bean class="org.apache.directory.server.ldap.support.extended.GracefulShutdownHandler"/>
<bean class="org.apache.directory.server.ldap.support.extended.LaunchDiagnosticUiHandler"/>
</list>
</property>
</bean>
6.3.2. After
At the same time as the addition of numerous configuration parameters for SASL, LDAP protocol
configuration has all moved to an LdapConfiguration bean.
</list>
</property>
<!-- The FQDN of this SASL host, validated during SASL negotiation. -->
<property name="saslHost" value="ldap.example.com" />
<!-- The Kerberos principal name for this LDAP service, used by GSSAPI. -->
<property name="saslPrincipal" value="ldap/ldap.example.com@EXAMPLE.COM" />
<!-- The realms serviced by this SASL host, used by DIGEST-MD5 and GSSAPI. -->
<property name="saslRealms">
<list>
<value>example.com</value>
<value>apache.org</value>
</list>
</property>
<!-- The base DN containing users that can be SASL authenticated. -->
<property name="searchBaseDn" value="ou=users,dc=example,dc=com" />
6.4.1. Introduction
The Kerberos provider for Apache Directory implements RFC 1510 [http://www.ietf.org/rfc/
rfc1510.txt] , the Kerberos V5 Network Authentication Service. The purpose of Kerberos is to verify
the identities of principals (users or services) on an unprotected network. While generally thought
of as a single-sign-on technology, Kerberos' true strength is in authenticating users without ever
sending their password over the network. Kerberos is designed for use on open (untrusted) networks
and, therefore, operates under the assumption that packets traveling along the network can be read,
modified, and inserted at will. This chart [http://www.computerworld.com/computerworld/records/
images/pdf/kerberos_chart.pdf] provides a good description of the protocol workflow.
Kerberos is named for the three-headed dog that guards the gates to Hades. The three heads are the
client, the Kerberos server, and the network service being accessed.
The Kerberos provider for Apache Directory, in conjunction with MINA and the Apache Directory
store, provides an easy-to-use yet fully-featured network authentication service. As implemented
within the Apache Directory, the Kerberos provider will provide:
• Easy POJO embeddability for containers such as Geronimo, JBoss, and OSGi
6.4.3. Resources
6.4.3.1. Kerberos Articles
• Centralized Authentication with Kerberos 5, Part I [http://www.linuxjournal.com/article/7336]
6.4.3.3. Standards
• Encryption and Checksum Specifications for Kerberos 5 [http://www.ietf.org/internet-drafts/draft-
ietf-krb-wg-crypto-07.txt]
• Lock down J2ME applications with Kerberos, Part 1: Introducing Kerberos data formats [http://
www-106.ibm.com/developerworks/wireless/library/wi-kerberos/]
• Lock down J2ME applications with Kerberos, Part 2: Authoring a request for a Kerberos ticket
[http://www-106.ibm.com/developerworks/wireless/library/wi-kerberos2.html]
• Lock down J2ME applications with Kerberos, Part 3: Establish secure communication with an e-
bank [http://www-106.ibm.com/developerworks/wireless/library/wi-kerberos3/]
6.4.4.1. Before
Previously, Kerberos protocol configuration existed in a PropertiesFactoryBean, along with JNDI
environment properties.
6.4.4.2. After
At the same time as the addition of numerous configuration parameters for SASL to the LDAP
protocol, Kerberos configuration has all moved to a KdcConfiguration bean.
6.4.5.1. Introduction
Due to export control restrictions, JDK 5.0 environments do not ship with support for AES-256
enabled. Kerberos uses AES-256 in the 'aes256-cts-hmac-sha1-96' encryption type. To enable
AES-256, you must download "unlimited strength" policy JAR files for your JRE. Policy JAR files
are signed by the JRE vendor so you must download policy JAR files for Sun, IBM, etc. separately.
Also, policy files may be different for each platform, such as i386, Solaris, or HP.
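Whether the unlimited strength policy is actually active can be checked from Java with a standard JCE call (on recent JREs unlimited strength is enabled by default, so this typically reports Integer.MAX_VALUE):

```java
import javax.crypto.Cipher;

public class AesPolicyCheck {

    public static void main( String[] args ) throws Exception {
        // Returns the maximum AES key length permitted by the installed
        // jurisdiction policy files; >= 256 means AES-256 is available.
        int max = Cipher.getMaxAllowedKeyLength( "AES" );
        System.out.println( "Max AES key length: " + max );
        System.out.println( "AES-256 enabled: " + ( max >= 256 ) );
    }
}
```

If this prints 128, the restricted policy is still in place and the 'aes256-cts-hmac-sha1-96' encryption type will not work.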
6.4.5.2. Installation
1. Download the unlimited strength policy JAR files.
3. Install the unlimited strength policy JAR files by copying them to the standard location. <jre-home>
refers to the directory where the J2SE Runtime Environment (JRE) was installed. Adjust pathname
separators for your environment.
6.4.6.1. Overview
This page shows how to activate and setup the KDC server of ApacheDS 1.5.5 (build from trunk
2009-08-04). This is a very simple setup (host: localhost, realm: EXAMPLE.COM). Need to check
the setup for other hosts and realms...
server.xml
<spring:beans ...>
<defaultDirectoryService ...>
...
<interceptors>
...
<keyDerivationInterceptor/>
...
</interceptors>
</defaultDirectoryService>
...
<!--
+============================================================+
| Kerberos server configuration |
+============================================================+
-->
<kdcServer id="kdcServer" searchBaseDn="ou=Users,dc=example,dc=com">
<transports>
<tcpTransport port="60088" nbThreads="4" backLog="50"/>
<udpTransport port="60088" nbThreads="4" backLog="50"/>
</transports>
<directoryService>#directoryService</directoryService>
</kdcServer>
...
<ldapServer ...
saslHost="localhost"
saslPrincipal="ldap/localhost@EXAMPLE.COM"
searchBaseDn="ou=users,dc=example,dc=com"
...>
...
</spring:beans>
log4j.logger.org.apache.directory.server.kerberos=DEBUG
A minimal /etc/krb5.conf file looks as follows (make sure the port matches!):
[libdefaults]
default_realm = EXAMPLE.COM
[realms]
EXAMPLE.COM = {
kdc = localhost:60088
}
[domain_realm]
.example.com = EXAMPLE.COM
example.com = EXAMPLE.COM
[login]
krb4_convert = true
krb4_get_tickets = false
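For a pure-Java Kerberos client, the same information can alternatively be supplied through the standard JVM system properties (the values below mirror the krb5.conf above; the port is carried in the KDC value):

```java
public class Krb5ClientSetup {

    public static void main( String[] args ) {
        // Point the JVM's Kerberos implementation at the embedded KDC,
        // equivalent to default_realm and the kdc entry in krb5.conf.
        System.setProperty( "java.security.krb5.realm", "EXAMPLE.COM" );
        System.setProperty( "java.security.krb5.kdc", "localhost:60088" );

        System.out.println( System.getProperty( "java.security.krb5.realm" ) );
    }
}
```

These properties must be set before the first Kerberos operation (e.g. a JAAS login with Krb5LoginModule), since the configuration is read once.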
stefan@r61:~$ klist
the original Kerberos Change Password protocol, while adding the ability for an administrator to set
a password for a new user.
The Change Password service is implemented as a protocol-provider plugin for the Apache Directory
server. As a plugin, Change Password leverages Apache MINA for front-end services and the Apache
Directory read-optimized backing store via JNDI for persistent directory services.
Change Password, in conjunction with MINA and the Apache Directory, provides an easy-to-use yet
fully-featured password service. As implemented within the Apache Directory, Change Password will
provide:
• Easy POJO embeddability for containers such as Geronimo, JBoss, and OSGi
3. Enter the Old Password and New Password (twice) and click OK.
6.5.3.1. Before
Previously, Change Password protocol configuration existed in a PropertiesFactoryBean, along with
JNDI environment properties.
6.5.3.2. After
At the same time as the addition of numerous configuration parameters for SASL to the LDAP
protocol, Change Password configuration has all moved to a ChangePasswordConfiguration bean.
</bean>
<directoryService>#directoryService</directoryService>
</changePasswordServer>
6.6.1. Introduction
ApacheDS Domain Name Service (DNS) provider implements RFC 1034 [http://www.faqs.org/
rfcs/rfc1034.html] and RFC 1035 [http://www.faqs.org/rfcs/rfc1035.html] to service DNS protocol
requests.
The DNS provider plugs into the Apache Directory server. As a plugin, the DNS provider uses the
network layer (MINA) for front-end services and the Apache Directory read-optimized backing store
via JNDI for a persistent store.
The ApacheDS DNS provider, in conjunction with MINA and the ApacheDS LDAP JNDI store,
provides an easy-to-use yet fully-featured name resolution service. As implemented within the Apache
Directory, it will provide:
• LDAP/JMX management
• Easy POJO embeddability for containers such as Geronimo, JBoss, and OSGi
If no type argument is supplied, dig will perform a lookup for an A record. For example:
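The dig example was lost in this draft; a typical invocation (the hostname is purely illustrative) looks like:

```shell
# Without a type argument dig asks for an A record...
dig www.example.com
# ...while an explicit type overrides that default:
dig www.example.com MX
```

Pointing dig at the embedded server with @host and -p <port> works the same way, with the port depending on your DNS provider configuration.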
Table 6.20. Abstract objectClass used to build all DNS record objectclasses

objectclass apacheDnsAbstractRecord
  apacheDnsName   A sequence of labels representing a domain name or host name
  apacheDnsType   The type of a resource record
  apacheDnsClass  The class of a resource record
  apacheDnsTtl    An integer denoting time to live
105
© 2003-2010 The Apache Software Foundation Privacy Policy
Draft Protocol Providers Draft
objectclass apacheDnsStartOfAuthorityRecord
  apacheDnsTtl         An integer denoting time to live
  apacheDnsSoaMName    The domain name of the server that was the primary source of data for this zone
  apacheDnsSoaRName    The domain name which specifies the mailbox of the person responsible for this zone
  apacheDnsSoaSerial   The unsigned 32 bit version number of the original copy of the zone
  apacheDnsSoaRefresh  A 32 bit time interval before the zone should be refreshed
  apacheDnsSoaRetry    A 32 bit time interval that should elapse before a failed refresh should be retried
  apacheDnsSoaExpire   A 32 bit time value that specifies the upper limit on the time interval that can elapse before the zone is no longer authoritative
  apacheDnsSoaMinimum  The unsigned 32 bit minimum TTL field that should be exported with any RR from this zone.
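As a purely hypothetical sketch of how such an entry might sit in the DIT (the DN placement and attribute values are assumptions; only the attribute names come from the table above), a SOA record could look like:

```
dn: apacheDnsName=example.com,dc=example,dc=com
objectClass: top
objectClass: apacheDnsStartOfAuthorityRecord
apacheDnsName: example.com
apacheDnsClass: IN
apacheDnsTtl: 3600
apacheDnsSoaMName: ns1.example.com
apacheDnsSoaRName: hostmaster.example.com
apacheDnsSoaSerial: 2010010100
apacheDnsSoaRefresh: 3600
apacheDnsSoaRetry: 600
apacheDnsSoaExpire: 86400
apacheDnsSoaMinimum: 3600
```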
dn: dc=tcp,dc=example,dc=com
objectClass: top
objectClass: domain
dc: tcp
description: a placeholder entry used with SRV records
dn: dc=example,dc=com
objectClass: top
objectClass: organization
objectClass: dcObject
dc: example
o: Example Inc.
6.6.1.3.2.2. Resources
There are other tools available from the same people, at www.dnsstuff.com [http://www.dnsstuff.com/
] , but I have not tested any of them.
1. MX - Change MX records from CNAMEs to A records. This is supposed to improve lookup speed,
and MX records pointing to CNAMEs are an RFC violation.
2. SOA - Change SOA values to come in line with recommended values, per dnsreports.com.
3. PTR - Add PTR records for server1.example.com. This addresses an error generated by AOL and Hotmail, which use reverse lookups on mail servers to weed out spam. Mail on the example.com mailing lists has increasingly been bounced by AOL and Hotmail as spam, and header inspection points to the lack of a PTR record. Setting PTR records at the hosting provider is a relatively new feature, probably added to address this problem.
6.6.3. Notes
6.6.3.1. A Zone is a Pruned Subtree
• 4.2: a zone is a "pruned subtree" of 1..n nodes/domainNames.
• Zones are split along lines of organizational control.
• A zone is a set of record types.
• The highest node of a zone contains the SOA record; the SOA is 1..1 with the highest node.
• Everything below the SOA is authoritative.
• The highest node contains 1..n NS records.
• Authoritative NS records appear only at the top of the zone.
• A domain name identifies a node.
NS in leaf is:
• non-authoritative
• referral
A in leaf is:
• non-authoritative
A non-recursive response (4.3.1) is one of:
1. error
2. answer
3. referral
6.7.1. Introduction
The Apache NTP Protocol Provider is a Java server that implements RFC 2030 [http://www.faqs.org/
rfcs/rfc2030.html] to service Simple Network Time Protocol requests. The Network Time Protocol is
used to synchronize computer clocks on the Internet.
Apache NTP, in conjunction with MINA and the Apache Directory, provides an easy-to-use yet fully-
featured network time synchronization service. As implemented within the Apache Directory, Apache
NTP will provide:
• Easy POJO embeddability for containers such as Geronimo, JBoss, and OSGi
6.7.3. Resources
6.7.3.1. SNTP RFC's
• RFC 2030 - Simple Network Time Protocol (SNTP) Version 4 for IPv4, IPv6 and OSI http://
www.faqs.org/rfcs/rfc2030.html
• RFC 1305 - Network Time Protocol (Version 3) Specification, Implementation and Analysis http://
www.faqs.org/rfcs/rfc1305.html
The server.xml file contains the base configuration for the NTP server:
With such a configuration, the NTP server will listen to port 60123, with UDP and TCP transports
selected.
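The configuration fragment itself did not survive in this extract. A sketch of what it may look like, following the bean-style configuration used elsewhere in this guide (the element and attribute names are assumptions, not verified against the shipped server.xml):

```xml
<ntpServer>
  <transports>
    <tcpTransport port="60123"/>
    <udpTransport port="60123"/>
  </transports>
</ntpServer>
```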
The last six parameters should be used only if one wants to set up a single transport, or when the UDP and TCP transports don't share the same port.
6.8.1. Introduction
The ApacheDS Dynamic Host Configuration Protocol (DHCP) provider implements RFC 2131
[http://www.faqs.org/rfcs/rfc2131.html] and RFC 2132 [http://www.faqs.org/rfcs/rfc2132.html] to
pass configuration information to hosts on a TCP/IP network.
The DHCP provider is implemented as a plugin into the ApacheDS network layer (MINA). As a plugin, DHCP leverages Apache MINA for front-end services and the Apache Directory read-optimized backing store, via JNDI, as a persistent store.
The ApacheDS DHCP plugin, in conjunction with MINA and the Apache Directory store, provides
an easy-to-use yet fully-featured dynamic configuration service. As implemented within the Apache
Directory, DHCP will provide:
• Easy POJO embeddability for containers such as Geronimo, JBoss, and OSGi
On the mailing list, people regularly ask how to write a custom partition. If you simply plan to add another suffix to ApacheDS (besides dc=example,dc=com, for instance) in order to store data, it is not necessary to write any code. You can simply add some lines to the configuration. The following is for developers who plan to implement a storage mechanism other than the provided default.
Implementing your own partition basically means implementing the Partition interface from the org.apache.directory.server.core.partition package. Please note that this is not an easy task. Nevertheless, I'll try to give you a starting point with some simple examples.
• contains one entry, which contains the famous "hello, world" message in an attribute value
Draft Extending the server Draft
• does not support any modification operations like delete, add etc.
• http://svn.apache.org/repos/asf/directory/sandbox/szoerner/helloWorldPartition
In order to build it, simply check it out and type "mvn install".
...
public void init(DirectoryService core) throws Exception {
// Create LDAP DN
suffixDn = new LdapDN(suffix);
suffixDn.normalize(core.getRegistries().getAttributeTypeRegistry().getNormalizerMapping());
Rdn rdn = suffixDn.getRdn();
entry.put(SchemaConstants.OU_AT, rdn.getUpValue().toString());
entry.put("description", "hello, world", "a minimal partition");
this.helloEntry = entry;
}
...
We assume that the suffix starts with "ou=" in order to create an entry of object class organizationalUnit. If someone tries to set a suffix whose RDN starts with another attribute, the setSuffix method will throw an exception.
The Partition interface requires implementing many methods for all the operations a partition should support (adding, deleting, modifying entries, ...). Because this is a read-only partition, the implementation in our case is minimalistic. Here is the delete method as an example.
...
public void delete(DeleteOperationContext opContext)
throws LdapOperationNotSupportedException {
throw new LdapOperationNotSupportedException(
MODIFICATION_NOT_ALLOWED_MSG, ResultCodeEnum.UNWILLING_TO_PERFORM);
}
...
Although this example should stay minimal, some methods need more attention, at least if we want to see the partition via LDAP and not only in the error logs. The important methods are hasEntry, lookup and search. The following code is the search method. Please note that, in order to keep the code simple, it ignores search scopes other than BASE and ignores search filters completely.
if (ctx.getDn().equals(this.suffixDn)) {
switch (ctx.getScope()) {
case OBJECT:
// return a result with the only entry we have
return new BaseEntryFilteringCursor(
new SingletonCursor<ServerEntry>(this.helloEntry), ctx);
}
}
package org.apache.directory.samples.partition.hello;
import org.apache.directory.server.core.DefaultDirectoryService;
import org.apache.directory.server.core.DirectoryService;
import org.apache.directory.server.ldap.LdapServer;
import org.apache.directory.server.protocol.shared.transport.TcpTransport;
/**
* Starts the server with the HelloWorld partition.
*/
public class Main {
directoryService.addPartition(helloPartition);
directoryService.startup();
ldapServer.start();
}
}
server.xml
<spring:beans xmlns:spring="http://xbean.apache.org/schemas/spring/1.0"
xmlns:s="http://www.springframework.org/schema/beans"
xmlns="http://apacheds.org/config/1.0">
...
<defaultDirectoryService ...>
...
<partitions>
...
<s:bean
id="helloPartition"
class="org.apache.directory.samples.partition.hello.HelloWorldPartition">
<s:property name="suffix" value="ou=helloWorld" />
</s:bean>
</partitions>
...
</defaultDirectoryService>
...
Note that the class HelloWorldPartition has to be in the class path of the server. Without it, starting the server leads to a ClassNotFoundException. You can copy the jar file which results from the build to the lib/ext directory.
7.1.2.4. Verification
After adding the HelloWorldPartition to the directory service like above (embedded or via
configuration in server.xml ), you can browse it with an LDAP browser like the one from Apache
Directory Studio. Here are some screen shots.
version: 1
dn: ou=helloWorld
objectClass: organizationalUnit
objectClass: top
description: hello, world
description: a minimal partition
ou: helloWorld
7.1.3. To be continued
We plan to add more sophisticated examples on this topic in the near future. Stay tuned on the mailing lists.
The following is for developers who plan to implement their own interceptors in order to extend or
modify the functionality of Apache Directory Server. It contains a simple example as a starting point.
• org.apache.directory.server.core.normalization.NormalizationInterceptor
• org.apache.directory.server.core.authn.AuthenticationInterceptor
• org.apache.directory.server.core.referral.ReferralInterceptor
• org.apache.directory.server.core.authz.AciAuthorizationInterceptor
• org.apache.directory.server.core.authz.DefaultAuthorizationInterceptor
• org.apache.directory.server.core.exception.ExceptionInterceptor
• org.apache.directory.server.core.changelog.ChangeLogInterceptor
• org.apache.directory.server.core.operational.OperationalAttributeInterceptor
• org.apache.directory.server.core.schema.SchemaInterceptor
• org.apache.directory.server.core.subtree.SubentryInterceptor
• org.apache.directory.server.core.collective.CollectiveAttributeInterceptor
• org.apache.directory.server.core.event.EventInterceptor
• org.apache.directory.server.core.trigger.TriggerInterceptor
• org.apache.directory.server.core.journal.JournalInterceptor
Interceptors should usually pass control of the current invocation to the next interceptor by calling an appropriate method on NextInterceptor. Control returns when the next interceptor's filter method returns. You can therefore implement pre-, post-, or around-invocation handlers depending on where you place that call.
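To illustrate the placement idea outside of ApacheDS, here is a minimal, self-contained chain sketch. The names (Next, aroundHandler) are illustrative for this example, not the actual interceptor API:

```java
// A minimal chain-of-responsibility sketch showing how the placement of the
// delegation call yields pre-, post- or around-invocation behavior.
public class ChainSketch {

    interface Next {
        String invoke(String input);
    }

    // "Around" handler: code before the delegation is pre-invocation work,
    // code after it is post-invocation work.
    static String aroundHandler(Next next, String input) {
        String pre = "pre(" + input + ")";   // runs before the rest of the chain
        String result = next.invoke(pre);    // pass control down the chain
        return "post(" + result + ")";       // runs after control returns
    }

    public static void main(String[] args) {
        Next terminal = s -> "core(" + s + ")"; // stands in for the actual operation
        System.out.println(aroundHandler(terminal, "op")); // post(core(pre(op)))
    }
}
```

Moving the work before the next.invoke call yields a pre-invocation handler; moving it after yields a post-invocation handler; doing both, as here, yields an around handler.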
Interceptors are a powerful way to extend and modify the server's behavior. But be warned: an incorrectly written interceptor may lead to a dysfunctional or corrupted server.
To be more concrete:
• If a userPassword is set by an LDAP client in plain text, a message digest algorithm [http://en.wikipedia.org/wiki/Cryptographic_hash_function] should be applied to the value, and the one-way encrypted value should be stored
• the algorithm should be applied if new entries are created or existing entries are modified (hence
modify and add operations will be intercepted)
• If the value given by the client is already provided in hashed form, nothing happens, and the given
value is stored in the directory without modification
• http://svn.apache.org/repos/asf/directory/sandbox/szoerner/passwordHashInterceptor
In order to build it, simply check it out and type "mvn install".
The class HashTools contains two simple methods related to hashing. isAlreadyHashed detects whether a value has already been hashed with a known message digest algorithm. applyHashAlgorithm applies a hash algorithm to a sequence of bytes. See the source code and the unit tests of this class for details; they have not much to do with the interceptor itself.
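As an illustration, the two helpers can be sketched in plain Java, using java.util.Base64 and the usual "{ALG}" prefix convention for LDAP password values. The method names mirror HashTools, but this is an illustrative reimplementation under those assumptions, not the project source:

```java
import java.security.MessageDigest;
import java.util.Base64;

public class HashSketch {

    // Does the value already start with a scheme prefix such as {MD5} or {SHA}?
    public static boolean isAlreadyHashed(String value) {
        return value.matches("\\{[A-Za-z0-9-]+\\}.*");
    }

    // Hash the plain value and prepend the algorithm name in braces,
    // the common LDAP convention for userPassword values.
    public static String applyHashAlgorithm(String plain, String algorithm) throws Exception {
        MessageDigest digest = MessageDigest.getInstance(algorithm);
        byte[] hashed = digest.digest(plain.getBytes("UTF-8"));
        return "{" + algorithm + "}" + Base64.getEncoder().encodeToString(hashed);
    }

    public static void main(String[] args) throws Exception {
        String stored = applyHashAlgorithm("secret", "MD5");
        System.out.println(stored); // {MD5}Xr4ilOzQ4PCOq3aQ0qbuaQ==
        System.out.println(isAlreadyHashed(stored)); // true
        System.out.println(isAlreadyHashed("secret")); // false
    }
}
```

Running it hashes "secret" into "{MD5}Xr4ilOzQ4PCOq3aQ0qbuaQ==", which isAlreadyHashed then recognizes by its prefix.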
The central class is PasswordHashInterceptor. Every interceptor has to implement the Interceptor interface from package org.apache.directory.server.core.interceptor. PasswordHashInterceptor does so by extending the convenience class BaseInterceptor from the same package.
The property hashAlgorithm allows configuring the algorithm used for hashing the passwords. It defaults to MD5 (Message-Digest algorithm 5) [http://en.wikipedia.org/wiki/MD5]. The property passwordAttributeName allows configuration of the attribute type which stores the user password. Its value will be hashed if needed. The property defaults to "userPassword", which is quite common and used for instance in the inetOrgPerson [http://www.ietf.org/rfc/rfc2798.txt] object class.
The most interesting methods of the class are add and modify. They intercept the requests and modify the attribute values, if needed. See below the complete source code of the class.
package org.apache.directory.samples.interceptor.pwdhash;
import java.util.List;
import org.apache.directory.server.core.entry.ClonedServerEntry;
import org.apache.directory.server.core.interceptor.BaseInterceptor;
import org.apache.directory.server.core.interceptor.NextInterceptor;
import org.apache.directory.server.core.interceptor.context.AddOperationContext;
import org.apache.directory.server.core.interceptor.context.ModifyOperationContext;
import org.apache.directory.shared.ldap.entry.EntryAttribute;
import org.apache.directory.shared.ldap.entry.Modification;
import org.apache.directory.shared.ldap.entry.ModificationOperation;
/**
* Intercepts the add operation in order to replace plain password values
* with hashed ones.
*/
@Override
public void add(NextInterceptor next, AddOperationContext opContext)
throws Exception {
super.add(next, opContext);
}
/**
* Intercepts the modify operation in order to replace plain password values
* with hashed ones.
*/
@Override
public void modify(NextInterceptor next, ModifyOperationContext opContext)
throws Exception {
// ... (method body elided in this excerpt)
}
After that, add the interceptor to the server.xml file in APACHEDS_INSTALLDIR/conf/. Make sure to back up the file before your modifications. Within server.xml, find the XML element which lists the interceptors. The easiest way to add a custom interceptor is to add a spring bean (namespace "s"). You may set configuration properties on the interceptor as well, if it supports some.
The following fragment shows the interceptor list with the example interceptor added just behind
normalization. For demonstration purposes, the hash algorithm is set to "MD5" (which is the default
of our interceptor anyway).
...
<interceptors>
<normalizationInterceptor/>
<s:bean class="org.apache.directory.samples.interceptor.pwdhash.PasswordHashInterceptor">
<s:property name="hashAlgorithm" value="MD5" />
</s:bean>
<authenticationInterceptor/>
<referralInterceptor/>
<aciAuthorizationInterceptor/>
<defaultAuthorizationInterceptor/>
<exceptionInterceptor/>
<operationalAttributeInterceptor/>
...
</interceptors>
...
As an alternative, the following Java code starts an ApacheDS embedded in a main method. The list of interceptors is complemented with the example interceptor. We insert it exactly behind the NormalizationInterceptor (the position is a little bit tricky to determine).
package org.apache.directory.samples.interceptor.pwdhash;
import java.util.List;
import org.apache.directory.server.core.DefaultDirectoryService;
import org.apache.directory.server.core.DirectoryService;
import org.apache.directory.server.core.interceptor.Interceptor;
import org.apache.directory.server.core.normalization.NormalizationInterceptor;
import org.apache.directory.server.ldap.LdapServer;
import org.apache.directory.server.protocol.shared.transport.TcpTransport;
/**
* Main class which starts an embedded server with the interceptor inserted into
* the chain.
*/
public class Main {
directoryService.startup();
ldapServer.start();
}
}
7.2.2.4. Verification
Let's check whether our new interceptor does its job! In order to do so, we use Apache Directory
Studio and connect to the server with the interceptor enabled (see above).
First we create a new entry with the following data, using "New Entry ..." within Studio.
Then we add a new attribute userPassword in the entry editor. For the value, a special editor appears:
Select "Plaintext" as the hash method and enter a new password. We selected "secret" (see screen shot
above). After pressing OK, a modify operation is sent to the server, which will be intercepted by our
example class.
After that, the value of userPassword is not "secret" but its MD5 digest.
The user Kate Bush is still capable of authenticating with the password "secret", because Apache Directory Server supports authentication with passwords hashed with this algorithm. You can verify this by connecting with Studio and using "cn=Kate Bush,ou=users,ou=system" as the bind DN.
Here it is demonstrated with the help of the ldapsearch command line tool. The result also shows that
the userPassword value is hashed with MD5.
• Exception handling is poor. E.g. if someone configures an unsupported hash algorithm, the
interceptor fails to create an appropriate LDAP error.
• If a multivalued password attribute is used, the interceptor will simply ignore that fact (does not
apply to userPassword as of RFC 2256).
8.2. Replication
TODO ...
We already defined a simplistic objectClass and attributeTypes for representing a class within an entry.
A very primitive ClassLoader (CL) was able to load classes on demand by searching for them within
the DIT. We need to go a step further though. Every user will have their own view of the DIT with
ACI in effect, so every user will need to execute procedures in their own CL. A user's CL needs to
pull in all classes in the DIT visible to the user. This CL can be used to execute stored procedures.
Thus the user specific CL needs to load the visible classes (seen by a user) as they are needed to
execute procedures. This could really slow things down though. Some caching may be in order here
but how that's to be done properly and efficiently is yet to be determined.
We proposed some ACIItem extensions to make sure we can easily and efficiently isolate code
across users. The new creator and notCreator userClasses have been proposed here: ??? . With these
userClasses we can define a single ACIItem in a subentry at each ACSA with a subtreeSpecification
refinement that makes javaClass entries only visible to their creators.
Code reuse will also come into the picture here. The administrator may expose some classes as libraries
that users can build on. Making these classes visible to all users may in turn result in some conflicts.
Draft Triggers & Stored Procedures Draft
For example, users may load libraries of a newer version. What will our policy be here? Should this policy be decided by the administrator? Should users be able to override it?
Conventions are good, but admins should have options. By default the java subsystem will exhaust the possibilities. We will allow administrators to configure the java subsystem by specifying a specific search order for classes. This search order is a list of search operations. Under the ou=configuration,ou=system area we can manage this list of operations. Basically the admin can specify each search operation as an LDAP URL to search under for javaClasses. Perhaps each URL can be prefixed with an 'insert' directive that defines how it is inserted into the list of search operations.
User profiles can also manage this configuration by inserting their own search operations into the list.
The resultant list of search operations is used by the user's ClassLoader to discover classes within the
DIT. Users should be able to see the search order of the system so they can override or inject their own
bypass. This may be a good mechanism for users to control situations where libraries and classes in
the system might conflict with their own version. Perhaps the CL search order for the system should
be published either in the RootDSE or exposed in the ou=system configuration area.
As stated before, a stored procedure is close to an LDAP extended operation. However, we will not register a new extended operation for each stored procedure. Instead, we'll have a generic Stored Procedure extended operation, where the stored procedure to be called is specified in the parameter list of the extended operation. (Of course this does not prevent some stored procedures from being published as standalone extended operations.) Here is the proposed stored procedure specification in ASN.1 format:
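The ASN.1 listing itself is missing from this extract. Reconstructed from the field descriptions that follow (language, procedure, and a list of typed parameters, all encoded as OCTETSTRING), it should look roughly like this sketch, not the normative text:

```
StoredProcedure ::= SEQUENCE {
    language    OCTETSTRING,
    procedure   OCTETSTRING,
    parameters  SEQUENCE OF Parameter
}

Parameter ::= SEQUENCE {
    type    OCTETSTRING,
    value   OCTETSTRING
}
```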
9.1.1.6.1. BER
0x30 LL 0x04 LL abcd [ [ 0x31 LL 0x12 LL abcd 0x30 LL ( 0x04 LL abcd... [ 0x0A 0x01 0x0[0..2] ] )
[ 0x30 LL 0x30 LL [ 0x04 LL abcd ] 0x84 LL abcd ] | [ 0x30 LL 0x30 LL [ 0x04 LL abcd ] 0x84 LL abcd ] ]
9.1.1.7. Explanations
The language field is used to specify the implementation language of the stored procedure to be called. This field allows the server to provide any kind, or more than one kind, of stored procedure implementation. We'll support compiled Java SPs by default; support for Jython is scheduled for after the 1.0 release.
The procedure field is used to specify the fully qualified name of the procedure to be called. An example would be "Foo.Bar.proc".
The parameters field is used to specify a list of parameters (with their types and values) to be given to the procedure to be called. Type information is needed to enable maximum implementation generalization. Encoding these fields as OCTETSTRING also helps generalization. Interpreting these fields' values is up to the server. By default we'll require the type field to include the fully qualified class name of a Java type, and we'll require the value field to include a string representation of the parameter value if it's a primitive one, and a byte[] if it's a complex Java object.
The return value of stored procedures will be provided by extended operation responses, with the same semantics mentioned above.
As an implementation tip, what we're doing here is like adding a reflection capability to our server to call stored procedures. Thinking in terms of the Java reflection mechanism helps us better design the system here.
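The reflection analogy can be sketched in plain Java. The "fully.qualified.Class:method" procedure syntax and the restriction to static methods are assumptions made for this illustration, not the server's actual dispatch code:

```java
import java.lang.reflect.Method;

// A sketch of reflective dispatch for a generic "call procedure by name" operation.
public class SpCaller {

    public static Object call(String procedure, Object... params) throws Exception {
        int sep = procedure.lastIndexOf(':');
        Class<?> clazz = Class.forName(procedure.substring(0, sep));
        String methodName = procedure.substring(sep + 1);

        // Derive the parameter types from the argument values. Autoboxing means
        // primitives arrive as wrapper classes, so only methods declared with
        // reference types will resolve here.
        Class<?>[] types = new Class<?>[params.length];
        for (int i = 0; i < params.length; i++) {
            types[i] = params[i].getClass();
        }
        Method method = clazz.getMethod(methodName, types);
        return method.invoke(null, params); // null target: a static "procedure"
    }

    public static void main(String[] args) throws Exception {
        // Integer.parseInt(String) resolved and invoked purely by name
        System.out.println(SpCaller.call("java.lang.Integer:parseInt", "42")); // 42
    }
}
```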
According to the definitions here, stored procedures in our server will enable users to define and use their own standard extended operations. We'll explore new usage scenarios of stored procedures, like Triggers and Task Scheduling, in the near future.
Our approach provides independence from any particular client, server, and language implementation, which should enable us to write a good IETF RFC about the enhancement.
9.1.1.8. Security
http://www.oracle.com/technology/oramag/oracle/03-jul/o43devjvm.html
In this example, a stored procedure named "Logger.logAddOperation" is executed with three operation-specific arguments after an LDAP Add operation is performed. The operation-specific arguments will be discussed later, as will the not-yet-specified set of entries the Trigger is defined on.
TODO: Order of execution of Triggered Actions when there is more than one Triggered Action with respect to an Event.
• Modify
• Add
• Delete
• ModifyDN.Rename
• ModifyDN.Export:Base
• ModifyDN.Export:Subtree
• ModifyDN.Import:Base
• ModifyDN.Import:Subtree
AFTER Modify
WHEN ChangedAttributes or:{ userPassword, sambaNTPassword }
CALL "com.mycompany.ldap.utils.sp.Logger:logModifiedEntry" ( $object, $modification );
Adding an index on an attribute is pretty simple. The configuration in the server.xml file needs to be
altered before bulk loading data into the server. Otherwise your index will not work properly.
Indices must be configured before loading data into the server. Indices configured after loading
entries into the server will NOT work properly unless they are built using the index builder
command supplied with the ApacheDS tools command line program. More information on this
in the Building Indices section below.
<property name="indexedAttributes">
<set>
<bean class="org.apache.directory.server.core.partition.impl.btree.MutableIndexConfiguration">
<property name="attributeId"><value>1.2.6.1.4.1.18060.1.1.1.3.1</value></property>
<property name="cacheSize"><value>100</value></property>
</bean>
<bean class="org.apache.directory.server.core.partition.impl.btree.MutableIndexConfiguration">
<property name="attributeId"><value>1.2.6.1.4.1.18060.1.1.1.3.2</value></property>
<property name="cacheSize"><value>100</value></property>
</bean>
<bean class="org.apache.directory.server.core.partition.impl.btree.MutableIndexConfiguration">
<property name="attributeId"><value>1.2.6.1.4.1.18060.1.1.1.3.3</value></property>
<property name="cacheSize"><value>100</value></property>
</bean>
<bean class="org.apache.directory.server.core.partition.impl.btree.MutableIndexConfiguration">
<property name="attributeId"><value>1.2.6.1.4.1.18060.1.1.1.3.4</value></property>
<property name="cacheSize"><value>100</value></property>
</bean>
<bean class="org.apache.directory.server.core.partition.impl.btree.MutableIndexConfiguration">
<property name="attributeId"><value>1.2.6.1.4.1.18060.1.1.1.3.5</value></property>
<property name="cacheSize"><value>10</value></property>
</bean>
<bean class="org.apache.directory.server.core.partition.impl.btree.MutableIndexConfiguration">
<property name="attributeId"><value>1.2.6.1.4.1.18060.1.1.1.3.6</value></property>
<property name="cacheSize"><value>10</value></property>
</bean>
<bean class="org.apache.directory.server.core.partition.impl.btree.MutableIndexConfiguration">
<property name="attributeId"><value>1.2.6.1.4.1.18060.1.1.1.3.7</value></property>
<property name="cacheSize"><value>10</value></property>
</bean>
<bean class="org.apache.directory.server.core.partition.impl.btree.MutableIndexConfiguration">
<property name="attributeId"><value>dc</value></property>
<property name="cacheSize"><value>100</value></property>
</bean>
<bean class="org.apache.directory.server.core.partition.impl.btree.MutableIndexConfiguration">
<property name="attributeId"><value>ou</value></property>
<property name="cacheSize"><value>100</value></property>
</bean>
<bean class="org.apache.directory.server.core.partition.impl.btree.MutableIndexConfiguration">
<property name="attributeId"><value>krb5PrincipalName</value></property>
<property name="cacheSize"><value>100</value></property>
</bean>
<bean class="org.apache.directory.server.core.partition.impl.btree.MutableIndexConfiguration">
<property name="attributeId"><value>uid</value></property>
<property name="cacheSize"><value>100</value></property>
</bean>
<bean class="org.apache.directory.server.core.partition.impl.btree.MutableIndexConfiguration">
<property name="attributeId"><value>objectClass</value></property>
<property name="cacheSize"><value>100</value></property>
</bean>
</set>
</property>
As you can see, indices are specified using a MutableIndexConfiguration spring bean. Just add one of these to your existing configuration, setting the attributeId to the OID or name of the attribute you want to index. There is a cacheSize parameter used to set the amount of cache on your index as well. Most of the time 100 will suffice, no matter how big in capacity your server is.
This number (100) is the number of entries stored in the cache, regardless of their size. Be careful when dealing with huge entries, such as those which contain JPEG images.
So if I wanted to index the attribute initials, all I have to do is append the following XML fragment to this set of indexed attributes:
<bean class="org.apache.directory.server.core.partition.impl.btree.MutableIndexConfiguration">
<property name="attributeId"><value>initials</value></property>
<property name="cacheSize"><value>100</value></property>
</bean>
That's it. Now queries on initials alone should perform about 50X faster.