Mirror of https://git.openldap.org/openldap/openldap.git

New TOC, new sdf files and merging/reworking of existing data. Makefile updated and tested also.

commit 88c66bfe89, parent b3e4305131
11 changed files with 750 additions and 763 deletions
@@ -18,16 +18,19 @@ sdf-src: \
	../plain.sdf \
	../preamble.sdf \
	abstract.sdf \
	appendix-configs.sdf \
	backends.sdf \
	config.sdf \
	dbtools.sdf \
	glossary.sdf \
	guide.sdf \
	install.sdf \
	intro.sdf \
	maintenance.sdf \
	master.sdf \
	monitoringslapd.sdf \
	overlays.sdf \
	preface.sdf \
	proxycache.sdf \
	quickstart.sdf \
	referrals.sdf \
	replication.sdf \
@@ -36,9 +39,9 @@ sdf-src: \
	schema.sdf \
	security.sdf \
	slapdconfig.sdf \
	syncrepl.sdf \
	title.sdf \
	tls.sdf \
	troubleshooting.sdf \
	tuning.sdf

sdf-img: \
doc/guide/admin/appendix-configs.sdf (new file, 13 lines)
@@ -0,0 +1,13 @@
# Copyright 2007 The OpenLDAP Foundation, All Rights Reserved.
# COPYING RESTRICTIONS APPLY, see COPYRIGHT.

H1: Configuration File Examples


H2: slapd.conf


H2: ldap.conf


H2: a-n-other.conf
doc/guide/admin/backends.sdf (new file, 100 lines)
@@ -0,0 +1,100 @@
# Copyright 2007 The OpenLDAP Foundation, All Rights Reserved.
# COPYING RESTRICTIONS APPLY, see COPYRIGHT.

H1: Backends


H2: Berkeley DB Backends


H3: Overview


H3: back-bdb/back-hdb Configuration


H3: Further Information


H2: LDAP


H3: Overview


H3: back-ldap Configuration


H3: Further Information


H2: LDIF


H3: Overview


H3: back-ldif Configuration


H3: Further Information


H2: Metadirectory


H3: Overview


H3: back-meta Configuration


H3: Further Information


H2: Monitor


H3: Overview


H3: back-monitor Configuration


H3: Further Information


H2: Relay


H3: Overview


H3: back-relay Configuration


H3: Further Information


H2: Perl/Shell


H3: Overview


H3: back-perl/back-shell Configuration


H3: Further Information


H2: SQL


H3: Overview


H3: back-sql Configuration


H3: Further Information
@@ -154,6 +154,12 @@ LDAP also supports data security (integrity and confidentiality)
services.


H2: When should I use LDAP?


H2: When should I not use LDAP?


H2: How does LDAP work?

LDAP utilizes a {{client-server model}}. One or more LDAP servers
@@ -221,6 +227,9 @@ simultaneously is quite problematic. LDAPv2 should be avoided.
LDAPv2 is disabled by default.


H2: LDAP vs RDBMS


H2: What is slapd and what can it do?

{{slapd}}(8) is an LDAP directory server that runs on many different
doc/guide/admin/maintenance.sdf (new file, 15 lines)
@@ -0,0 +1,15 @@
# Copyright 2007 The OpenLDAP Foundation, All Rights Reserved.
# COPYING RESTRICTIONS APPLY, see COPYRIGHT.

H1: Maintenance


H2: Directory Backups


H2: Berkeley DB Logs


H2: Checkpointing
@@ -48,6 +48,12 @@ PB:
!include "dbtools.sdf"; chapter
PB:

!include "backends.sdf"; chapter
PB:

!include "overlays.sdf"; chapter
PB:

!include "schema.sdf"; chapter
PB:
@@ -60,25 +66,29 @@ PB:
!include "tls.sdf"; chapter
PB:

!include "monitoringslapd.sdf"; chapter
PB:

#!include "tuning.sdf"; chapter
#PB:

!include "referrals.sdf"; chapter
PB:

!include "replication.sdf"; chapter
PB:

!include "syncrepl.sdf"; chapter
!include "maintenance.sdf"; chapter
PB:

!include "proxycache.sdf"; chapter
!include "monitoringslapd.sdf"; chapter
PB:

!include "tuning.sdf"; chapter
PB:

!include "troubleshooting.sdf"; chapter
PB:

# Appendices
# Config file examples
!include "appendix-configs.sdf"; appendix
PB:

# Terms
!include "glossary.sdf"; appendix
PB:
@@ -1,8 +1,64 @@
# $OpenLDAP$
# Copyright 2003-2007 The OpenLDAP Foundation, All Rights Reserved.
# Copyright 2007 The OpenLDAP Foundation, All Rights Reserved.
# COPYING RESTRICTIONS APPLY, see COPYRIGHT.

H1: The Proxy Cache Engine
H1: Overlays


H2: Access Logging


H3: Overview


H3: Access Logging Configuration


H2: Audit Logging


H3: Overview


H3: Audit Logging Configuration


H2: Constraints


H3: Overview


H3: Constraint Configuration


H2: Dynamic Directory Services


H3: Overview


H3: Dynamic Directory Service Configuration


H2: Dynamic Groups


H3: Overview


H3: Dynamic Group Configuration


H2: Dynamic Lists


H3: Overview


H3: Dynamic List Configuration


H2: The Proxy Cache Engine

{{TERM:LDAP}} servers typically hold one or more subtrees of a
{{TERM:DIT}}. Replica (or shadow) servers hold shadow copies of
@@ -11,7 +67,7 @@ from the master server to replica (slave) servers using LDAP Sync
replication. An LDAP cache is a special type of replica which holds
entries corresponding to search filters instead of subtrees.

H2: Overview
H3: Overview

The proxy cache extension of slapd is designed to improve the
responsiveness of the ldap and meta backends. It handles a search
@@ -52,14 +108,14 @@ The Proxy Cache paper
design and implementation details.


H2: Proxy Cache Configuration
H3: Proxy Cache Configuration

The cache configuration specific directives described below must
appear after a {{EX:overlay proxycache}} directive within a
{{EX:"database meta"}} or {{EX:database ldap}} section of
the server's {{slapd.conf}}(5) file.
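
Schematically (a sketch, not a complete configuration):

> database ldap
> # ... backend directives for the proxied database ...
> overlay proxycache
> # proxycache directives (described below) go here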

H3: Setting cache parameters
H4: Setting cache parameters

> proxyCache <DB> <maxentries> <nattrsets> <entrylimit> <period>
@@ -75,7 +131,7 @@ entries in a cachable query. The <period> specifies the consistency
check period (in seconds). In each period, queries with expired
TTLs are removed.
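
For instance (values are illustrative, not tuned recommendations), a
cache over a {{EX:bdb}} database holding at most 100000 entries, with
1 attribute set, at most 1000 entries per cacheable query, and a
100-second consistency check period would be declared as:

> proxycache bdb 100000 1 1000 100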

H3: Defining attribute sets
H4: Defining attribute sets

> proxyAttrset <index> <attrs...>
@@ -84,7 +140,7 @@ set is associated with an index number from 0 to <numattrsets>-1.
These indices are used by the proxyTemplate directive to define
cacheable templates.
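
For example, the following sketch defines attribute set 0, matching
the set referenced later in this chapter's example:

> proxyAttrset 0 mail postaladdress telephonenumber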

H3: Specifying cacheable templates
H4: Specifying cacheable templates

> proxyTemplate <prototype_string> <attrset_index> <TTL>
@@ -94,7 +150,7 @@ its prototype filter string and set of required attributes identified
by <attrset_index>.
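
A sketch of a template declaration (the prototype filter and TTL are
illustrative): queries whose filters match the prototype
{{EX:(&(sn=)(givenName=))}} are cached with attribute set 0 for
3600 seconds:

> proxyTemplate (&(sn=)(givenName=)) 0 3600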

H3: Example
H4: Example

An example {{slapd.conf}}(5) database section for a caching server
which proxies for the {{EX:"dc=example,dc=com"}} subtree held
@@ -117,7 +173,7 @@ at server {{EX:ldap.example.com}}.
> index cn,sn,uid,mail pres,eq,sub


H4: Cacheable Queries
H5: Cacheable Queries

An LDAP search query is cacheable when its filter matches one of the
templates as defined in the "proxyTemplate" statements and when it references
@@ -126,7 +182,7 @@ In the example above the attribute set number 0 defines that only the
attributes: {{EX:mail postaladdress telephonenumber}} are cached for the following
proxyTemplates.

H4: Examples:
H5: Examples:

> Filter: (&(sn=Richard*)(givenName=jack))
> Attrs: mail telephoneNumber
@@ -145,4 +201,87 @@ H4: Examples:

is not cacheable, because the filter does not match the template (logical
OR "|" condition instead of logical AND "&")


H2: Password Policies


H3: Overview


H3: Password Policy Configuration


H2: Referential Integrity


H3: Overview


H3: Referential Integrity Configuration


H2: Return Code


H3: Overview


H3: Return Code Configuration


H2: Rewrite/Remap


H3: Overview


H3: Rewrite/Remap Configuration


H2: Sync Provider


H3: Overview


H3: Sync Provider Configuration


H2: Translucent Proxy


H3: Overview


H3: Translucent Proxy Configuration


H2: Attribute Uniqueness


H3: Overview


H3: Attribute Uniqueness Configuration


H2: Value Sorting


H3: Overview


H3: Value Sorting Configuration


H2: Overlay Stacking


H3: Overview


H3: Example Scenarios


H4: Samba
@@ -9,7 +9,7 @@ P1: Preface
# document's copyright
P2[notoc] Copyright

Copyright 1998-2006, The {{ORG[expand]OLF}}, {{All Rights Reserved}}.
Copyright 1998-2007, The {{ORG[expand]OLF}}, {{All Rights Reserved}}.

Copyright 1992-1996, Regents of the {{ORG[expand]UM}}, {{All Rights Reserved}}.
@@ -1,356 +1,436 @@
# $OpenLDAP$
# Copyright 1999-2007 The OpenLDAP Foundation, All Rights Reserved.
# COPYING RESTRICTIONS APPLY, see COPYRIGHT.

H1: Replication with slurpd

Note: this section is provided for historical reasons. {{slurpd}}(8)
is deprecated in favor of LDAP Sync based replication, commonly
referred to as {{syncrepl}}. Syncrepl is discussed in the
{{SECT:LDAP Sync Replication}} section of this document.
H1: Replication

In certain configurations, a single {{slapd}}(8) instance may be
insufficient to handle the number of clients requiring
directory service via LDAP. It may become necessary to
run more than one slapd instance. At many sites,
for instance, there are multiple slapd servers: one
master and one or more slaves. {{TERM:DNS}} can be set up such that
a lookup of {{EX:ldap.example.com}} returns the {{TERM:IP}} addresses
of these servers, distributing the load among them (or
just the slaves). This master/slave arrangement provides
a simple and effective way to increase capacity, availability
and reliability.

{{slurpd}}(8) provides the capability for a master slapd to
propagate changes to slave slapd instances,
implementing the master/slave replication scheme
described above. slurpd runs on the same host as the
master slapd instance.
H2: Replication Strategies


H3: Working with Firewalls

H2: Overview

{{slurpd}}(8) provides replication services "in band". That is, it
uses the LDAP protocol to update a slave database from
the master. Perhaps the easiest way to illustrate this is
with an example. In this example, we trace the propagation
of an LDAP modify operation from its initiation by the LDAP
client to its distribution to the slave slapd instance.
H2: Replication Types


{{B: Sample replication scenario:}}
H3: syncrepl replication

^ The LDAP client submits an LDAP modify operation to
the slave slapd.

+ The slave slapd returns a referral to the LDAP
client referring the client to the master slapd.
H3: delta-syncrepl replication

+ The LDAP client submits the LDAP modify operation to
the master slapd.

+ The master slapd performs the modify operation,
writes out the change to its replication log file and returns
a success code to the client.
H3: N-Way Multi-Master

+ The slurpd process notices that a new entry has
been appended to the replication log file, reads the
replication log entry, and sends the change to the slave
slapd via LDAP.
H3: MirrorMode


H2: LDAP Sync Replication

The {{TERM:LDAP Sync}} Replication engine, {{TERM:syncrepl}} for
short, is a consumer-side replication engine that enables the
consumer {{TERM:LDAP}} server to maintain a shadow copy of a
{{TERM:DIT}} fragment. A syncrepl engine resides at the consumer-side
as one of the {{slapd}}(8) threads. It creates and maintains a
consumer replica by connecting to the replication provider to perform
the initial DIT content load followed either by periodic content
polling or by timely updates upon content changes.

Syncrepl uses the LDAP Content Synchronization (or LDAP Sync for
short) protocol as the replica synchronization protocol. It provides
a stateful replication which supports both pull-based and push-based
synchronization and does not mandate the use of a history store.

Syncrepl keeps track of the status of the replication content by
maintaining and exchanging synchronization cookies. Because the
syncrepl consumer and provider maintain their content status, the
consumer can poll the provider content to perform incremental
synchronization by asking for the entries required to make the
consumer replica up-to-date with the provider content. Syncrepl
also enables convenient management of replicas by maintaining replica
status. The consumer replica can be constructed from a consumer-side
or a provider-side backup at any synchronization status. Syncrepl
can automatically resynchronize the consumer replica up-to-date
with the current provider content.

Syncrepl supports both pull-based and push-based synchronization.
In its basic refreshOnly synchronization mode, the provider uses
pull-based synchronization where the consumer servers need not be
tracked and no history information is maintained. The information
required for the provider to process periodic polling requests is
contained in the synchronization cookie of the request itself. To
optimize the pull-based synchronization, syncrepl utilizes the
present phase of the LDAP Sync protocol as well as its delete phase,
instead of falling back on frequent full reloads. To further optimize
the pull-based synchronization, the provider can maintain a per-scope
session log as a history store. In its refreshAndPersist mode of
synchronization, the provider uses a push-based synchronization.
The provider keeps track of the consumer servers that have requested
a persistent search and sends them necessary updates as the provider
replication content gets modified.

With syncrepl, a consumer server can create a replica without
changing the provider's configurations and without restarting the
provider server, if the consumer server has appropriate access
privileges for the DIT fragment to be replicated. The consumer
server can stop the replication also without the need for provider-side
changes and restart.

Syncrepl supports both partial and sparse replications. The shadow
DIT fragment is defined by a general search criteria consisting of
base, scope, filter, and attribute list. The replica content is
also subject to the access privileges of the bind identity of the
syncrepl replication connection.


H3: The LDAP Content Synchronization Protocol

The LDAP Sync protocol allows a client to maintain a synchronized
copy of a DIT fragment. The LDAP Sync operation is defined as a set
of controls and other protocol elements which extend the LDAP search
operation. This section introduces the LDAP Content Sync protocol
only briefly. For more information, refer to {{REF:RFC4533}}.

The LDAP Sync protocol supports both polling and listening for
changes by defining two respective synchronization operations:
{{refreshOnly}} and {{refreshAndPersist}}. Polling is implemented
by the {{refreshOnly}} operation. The client copy is synchronized
to the server copy at the time of polling. The server finishes the
search operation by returning {{SearchResultDone}} at the end of
the search operation as in the normal search. The listening is
implemented by the {{refreshAndPersist}} operation. Instead of
finishing the search after returning all entries currently matching
the search criteria, the synchronization search remains persistent
in the server. Subsequent updates to the synchronization content
in the server cause additional entry updates to be sent to the
client.

The {{refreshOnly}} operation and the refresh stage of the
{{refreshAndPersist}} operation can be performed with a present
phase or a delete phase.

In the present phase, the server sends the client the entries updated
within the search scope since the last synchronization. The server
sends all requested attributes, whether changed or not, of the updated
entries. For each unchanged entry which remains in the scope, the
server sends a present message consisting only of the name of the
entry and the synchronization control representing state present.
The present message does not contain any attributes of the entry.
After the client receives all update and present entries, it can
reliably determine the new client copy by adding the entries added
to the server, by replacing the entries modified at the server, and
by deleting entries in the client copy which have not been updated
nor specified as being present at the server.

The transmission of the updated entries in the delete phase is the
same as in the present phase. The server sends all the requested
attributes of the entries updated within the search scope since the
last synchronization to the client. In the delete phase, however,
the server sends a delete message for each entry deleted from the
search scope, instead of sending present messages. The delete
message consists only of the name of the entry and the synchronization
control representing state delete. The new client copy can be
determined by adding, modifying, and removing entries according to
the synchronization control attached to the {{SearchResultEntry}}
message.

In the case that the LDAP Sync server maintains a history store and
can determine which entries are scoped out of the client copy since
the last synchronization time, the server can use the delete phase.
If the server does not maintain any history store, cannot determine
the scoped-out entries from the history store, or the history store
does not cover the outdated synchronization state of the client,
the server should use the present phase. The use of the present
phase is much more efficient than a full content reload in terms
of the synchronization traffic. To reduce the synchronization
traffic further, the LDAP Sync protocol also provides several
optimizations such as the transmission of the normalized {{EX:entryUUID}}s
and the transmission of multiple {{EX:entryUUIDs}} in a single
{{syncIdSet}} message.

At the end of the {{refreshOnly}} synchronization, the server sends
a synchronization cookie to the client as a state indicator of the
client copy after the synchronization is completed. The client
will present the received cookie when it requests the next incremental
synchronization to the server.

When {{refreshAndPersist}} synchronization is used, the server sends
a synchronization cookie at the end of the refresh stage by sending
a Sync Info message with TRUE refreshDone. It also sends a
synchronization cookie by attaching it to {{SearchResultEntry}}
generated in the persist stage of the synchronization search. During
the persist stage, the server can also send a Sync Info message
containing the synchronization cookie at any time the server wants
to update the client-side state indicator. The server also updates
a synchronization indicator of the client at the end of the persist
stage.

In the LDAP Sync protocol, entries are uniquely identified by the
{{EX:entryUUID}} attribute value. It can function as a reliable
identifier of the entry. The DN of the entry, on the other hand,
can be changed over time and hence cannot be considered as the
reliable identifier. The {{EX:entryUUID}} is attached to each
{{SearchResultEntry}} or {{SearchResultReference}} as a part of the
synchronization control.


H3: Syncrepl Details

The syncrepl engine utilizes both the {{refreshOnly}} and the
{{refreshAndPersist}} operations of the LDAP Sync protocol. If a
syncrepl specification is included in a database definition,
{{slapd}}(8) launches a syncrepl engine as a {{slapd}}(8) thread
and schedules its execution. If the {{refreshOnly}} operation is
specified, the syncrepl engine will be rescheduled at the interval
time after a synchronization operation is completed. If the
{{refreshAndPersist}} operation is specified, the engine will remain
active and process the persistent synchronization messages from the
provider.

The syncrepl engine utilizes both the present phase and the delete
phase of the refresh synchronization. It is possible to configure
a per-scope session log in the provider server which stores the
{{EX:entryUUID}}s of a finite number of entries deleted from a
replication content. Multiple replicas of single provider content
share the same per-scope session log. The syncrepl engine uses the
delete phase if the session log is present and the state of the
consumer server is recent enough that no session log entries are
truncated after the last synchronization of the client. The syncrepl
engine uses the present phase if no session log is configured for
the replication content or if the consumer replica is too outdated
to be covered by the session log. The current design of the session
log store is memory based, so the information contained in the
session log is not persistent over multiple provider invocations.
It is not currently supported to access the session log store by
using LDAP operations. It is also not currently supported to impose
access control to the session log.

As a further optimization, even in the case the synchronization
search is not associated with any session log, no entries will be
transmitted to the consumer server when there has been no update
in the replication context.

The syncrepl engine, which is a consumer-side replication engine,
can work with any backend. The LDAP Sync provider can be configured
as an overlay on any backend, but works best with the {{back-bdb}}
or {{back-hdb}} backend.

The LDAP Sync provider maintains a {{EX:contextCSN}} for each
database as the current synchronization state indicator of the
provider content. It is the largest {{EX:entryCSN}} in the provider
context such that no transactions for an entry having smaller
{{EX:entryCSN}} value remains outstanding. The {{EX:contextCSN}}
could not just be set to the largest issued {{EX:entryCSN}} because
{{EX:entryCSN}} is obtained before a transaction starts and
transactions are not committed in the issue order.

The provider stores the {{EX:contextCSN}} of a context in the
{{EX:contextCSN}} attribute of the context suffix entry. The attribute
is not written to the database after every update operation though;
instead it is maintained primarily in memory. At database start
time the provider reads the last saved {{EX:contextCSN}} into memory
and uses the in-memory copy exclusively thereafter. By default,
changes to the {{EX:contextCSN}} as a result of database updates
will not be written to the database until the server is cleanly
shut down. A checkpoint facility exists to cause the contextCSN to
be written out more frequently if desired.

Note that at startup time, if the provider is unable to read a
{{EX:contextCSN}} from the suffix entry, it will scan the entire
database to determine the value, and this scan may take quite a
long time on a large database. When a {{EX:contextCSN}} value is
read, the database will still be scanned for any {{EX:entryCSN}}
values greater than it, to make sure the {{EX:contextCSN}} value
truly reflects the greatest committed {{EX:entryCSN}} in the database.
On databases which support inequality indexing, setting an eq index
on the {{EX:entryCSN}} attribute and configuring {{contextCSN}}
checkpoints will greatly speed up this scanning step.
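
On a back-bdb/back-hdb provider this is a one-line addition to
{{slapd.conf}}(5), as in the fuller example below (which also indexes
{{EX:entryUUID}} for the session log):

> index entryCSN,entryUUID eq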

If no {{EX:contextCSN}} can be determined by reading and scanning
the database, a new value will be generated. Also, if scanning the
database yielded a greater {{EX:entryCSN}} than was previously
recorded in the suffix entry's {{EX:contextCSN}} attribute, a
checkpoint will be immediately written with the new value.

The consumer also stores its replica state, which is the provider's
{{EX:contextCSN}} received as a synchronization cookie, in the
{{EX:contextCSN}} attribute of the suffix entry. The replica state
maintained by a consumer server is used as the synchronization state
indicator when it performs subsequent incremental synchronization
with the provider server. It is also used as a provider-side
synchronization state indicator when it functions as a secondary
provider server in a cascading replication configuration. Since
the consumer and provider state information are maintained in the
same location within their respective databases, any consumer can
be promoted to a provider (and vice versa) without any special
actions.

Because a general search filter can be used in the syncrepl
specification, some entries in the context may be omitted from the
synchronization content. The syncrepl engine creates a glue entry
to fill in the holes in the replica context if any part of the
replica content is subordinate to the holes. The glue entries will
not be returned in the search result unless {{ManageDsaIT}} control
is provided.

Also as a consequence of the search filter used in the syncrepl
specification, it is possible for a modification to remove an entry
from the replication scope even though the entry has not been deleted
on the provider. Logically the entry must be deleted on the consumer
but in {{refreshOnly}} mode the provider cannot detect and propagate
this change without the use of the session log.


H3: Configuring Syncrepl

Because syncrepl is a consumer-side replication engine, the syncrepl
specification is defined in {{slapd.conf}}(5) of the consumer
server, not in the provider server's configuration file. The initial
loading of the replica content can be performed either by starting
the syncrepl engine with no synchronization cookie or by populating
the consumer replica by adding an {{TERM:LDIF}} file dumped as a
backup at the provider.

When loading from a backup, it is not required to perform the initial
loading from the up-to-date backup of the provider content. The
syncrepl engine will automatically synchronize the initial consumer
replica to the current provider content. As a result, it is not
required to stop the provider server in order to avoid the replica
inconsistency caused by the updates to the provider content during
the content backup and loading process.

When replicating a large-scale directory, especially in a
bandwidth-constrained environment, it is advised to load the consumer
replica from a backup instead of performing a full initial load using
syncrepl.


H4: Set up the provider slapd

The provider is implemented as an overlay, so the overlay itself
must first be configured in {{slapd.conf}}(5) before it can be
used. The provider has only two configuration directives, for setting
checkpoints on the {{EX:contextCSN}} and for configuring the session
log. Because the LDAP Sync search is subject to access control,
proper access control privileges should be set up for the replicated
content.

The {{EX:contextCSN}} checkpoint is configured by the

> syncprov-checkpoint <ops> <minutes>

directive. Checkpoints are only tested after successful write
operations. If {{<ops>}} operations or more than {{<minutes>}}
time has passed since the last checkpoint, a new checkpoint is
performed.

The session log is configured by the

> syncprov-sessionlog <size>

directive, where {{<size>}} is the maximum number of session log
entries the session log can record. When a session log is configured,
it is automatically used for all LDAP Sync searches within the
database.

Note that using the session log requires searching on the {{entryUUID}}
attribute. Setting an eq index on this attribute will greatly benefit
the performance of the session log on the provider.

A more complete example of the {{slapd.conf}}(5) content is thus:

> database bdb
> suffix dc=Example,dc=com
> rootdn dc=Example,dc=com
> directory /var/ldap/db
> index objectclass,entryCSN,entryUUID eq
>
> overlay syncprov
> syncprov-checkpoint 100 10
> syncprov-sessionlog 100


H4: Set up the consumer slapd

The syncrepl replication is specified in the database section of
{{slapd.conf}}(5) for the replica context. The syncrepl engine
is backend independent and the directive can be defined with any
database type.

> database hdb
> suffix dc=Example,dc=com
> rootdn dc=Example,dc=com
> directory /var/ldap/db
> index objectclass,entryCSN,entryUUID eq
>
> syncrepl rid=123
>         provider=ldap://provider.example.com:389
>         type=refreshOnly
>         interval=01:00:00:00
>         searchbase="dc=example,dc=com"
>         filter="(objectClass=organizationalPerson)"
>         scope=sub
>         attrs="cn,sn,ou,telephoneNumber,title,l"
>         schemachecking=off
>         bindmethod=simple
>         binddn="cn=syncuser,dc=example,dc=com"
>         credentials=secret

In this example, the consumer will connect to the provider {{slapd}}(8)
at port 389 of {{FILE:ldap://provider.example.com}} to perform a
polling ({{refreshOnly}}) mode of synchronization once a day. It
will bind as {{EX:cn=syncuser,dc=example,dc=com}} using simple
authentication with password "secret". Note that the access control
privilege of {{EX:cn=syncuser,dc=example,dc=com}} should be set
appropriately in the provider to retrieve the desired replication
content. Also the search limits must be high enough on the provider
to allow the syncuser to retrieve a complete copy of the requested
content. The consumer uses the rootdn to write to its database so
it always has full permissions to write all content.

The synchronization search in the above example will search for the
entries whose objectClass is organizationalPerson in the entire
subtree rooted at {{EX:dc=example,dc=com}}. The requested attributes
are {{EX:cn}}, {{EX:sn}}, {{EX:ou}}, {{EX:telephoneNumber}},
{{EX:title}}, and {{EX:l}}. The schema checking is turned off, so
that the consumer {{slapd}}(8) will not enforce entry schema
checking when it processes updates from the provider {{slapd}}(8).

For more detailed information on the syncrepl directive, see the
{{SECT:syncrepl}} section of the {{SECT:The slapd Configuration File}}
chapter of this admin guide.


H4: Start the provider and the consumer slapd

The provider {{slapd}}(8) is not required to be restarted.
{{contextCSN}} is automatically generated as needed: it might be
originally contained in the {{TERM:LDIF}} file, generated by
{{slapadd}}(8), generated upon changes in the context, or generated
when the first LDAP Sync search arrives at the provider. If an
LDIF file is being loaded which did not previously contain the
{{contextCSN}}, the {{-w}} option should be used with {{slapadd}}(8)
to cause it to be generated. This will allow the server to
start up a little quicker the first time it runs.
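
For example (a sketch; the file names are illustrative):

> slapadd -w -f slapd.conf -l backup.ldif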

When starting a consumer {{slapd}}(8), it is possible to provide
a synchronization cookie as the {{-c cookie}} command line option
in order to start the synchronization from a specific state. The
cookie is a comma separated list of name=value pairs. Currently
supported syncrepl cookie fields are {{csn=<csn>}} and {{rid=<rid>}}.
{{<csn>}} represents the current synchronization state of the
consumer replica. {{<rid>}} identifies a consumer replica locally
within the consumer server. It is used to relate the cookie to the
syncrepl definition in {{slapd.conf}}(5) which has the matching
replica identifier. The {{<rid>}} must have no more than 3 decimal
digits. The command line cookie overrides the synchronization
cookie stored in the consumer replica database.
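
For example (a sketch, with {{EX:<csn>}} standing for a previously
saved synchronization state value):

> slapd -c rid=123,csn=<csn>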

H2: N-Way Multi-Master


H2: MirrorMode

+ The slave slapd performs the modify operation and
returns a success code to the slurpd process.

Note: {{ldapmodify}}(1) and other clients distributed as part of
OpenLDAP Software do not support automatic referral chasing
(for security reasons).


H2: Replication Logs

When slapd is configured to generate a replication logfile, it
writes out a file containing {{TERM:LDIF}} change records. The
replication log gives the replication site(s), a timestamp, the DN
of the entry being modified, and a series of lines which specify
the changes to make. In the example below, Barbara ({{EX:uid=bjensen}})
has replaced the {{EX:description}} value. The change is to be
propagated to the slapd instance running on {{EX:slave.example.com}}.
Changes to various operational attributes, such as {{EX:modifiersName}}
and {{EX:modifyTimestamp}}, are included in the change record and
will be propagated to the slave slapd.

> replica: slave.example.com:389
> time: 809618633
> dn: uid=bjensen,dc=example,dc=com
> changetype: modify
> replace: description
> description: A dreamer...
> -
> replace: modifiersName
> modifiersName: uid=bjensen,dc=example,dc=com
> -
> replace: modifyTimestamp
> modifyTimestamp: 20000805073308Z
> -

The modifications to {{EX:modifiersName}} and {{EX:modifyTimestamp}}
operational attributes were added by the master {{slapd}}.


H2: Command-Line Options

This section details commonly used {{slurpd}}(8) command-line options.

> -d <level> | ?

This option sets the slurpd debug level to {{EX:<level>}}. When
level is a `?' character, the various debugging levels are printed
and slurpd exits, regardless of any other options you give it.
Current debugging levels (a subset of slapd's debugging levels) are

!block table; colaligns="RL"; align=Center; \
title="Table 13.1: Debugging Levels"
Level	Description
4	heavy trace debugging
64	configuration file processing
65535	enable all debugging
!endblock

Debugging levels are additive. That is, if you want heavy trace
debugging and want to watch the config file being processed, you
would set level to the sum of those two levels (in this case, 68).
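
For example, to combine heavy trace debugging (4) with configuration
file processing (64):

> slurpd -d 68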

> -f <filename>

This option specifies an alternate slapd configuration file. Slurpd
does not have its own configuration file. Instead, all configuration
information is read from the slapd configuration file.

> -r <filename>

This option specifies an alternate slapd replication log file.
Under normal circumstances, slurpd reads the name of the slapd
replication log file from the slapd configuration file. However,
you can override this with the -r flag, to cause slurpd to process
a different replication log file. See the {{SECT:Advanced slurpd
Operation}} section for a discussion of how you might use this
option.

> -o

Operate in "one-shot" mode. Under normal circumstances, when slurpd
finishes processing a replication log, it remains active and
periodically checks to see if new entries have been added to the
replication log. In one-shot mode, by comparison, slurpd processes
a replication log and exits immediately. If the -o option is given,
the replication log file must be explicitly specified with the -r
option. See the {{SECT:One-shot mode and reject files}} section
for a discussion of this mode.

> -t <directory>

Specify an alternate directory for slurpd's temporary copies of
replication logs. The default location is {{F:/usr/tmp}}.


H2: Configuring slurpd and a slave slapd instance

To bring up a replica slapd instance, you must configure the master
and slave slapd instances for replication, then shut down the master
slapd so you can copy the database. Finally, you bring up the master
slapd instance, the slave slapd instance, and the slurpd instance.
These steps are detailed in the following sections. You can set up
as many slave slapd instances as you wish.


H3: Set up the master {{slapd}}

The following section assumes you have a properly working {{slapd}}(8)
instance. To configure your working {{slapd}}(8) server as a
replication master, you need to make the following changes to your
{{slapd.conf}}(5).

^ Add a {{EX:replica}} directive for each replica. The {{EX:binddn=}}
parameter should match the {{EX:updatedn}} option in the corresponding
slave slapd configuration file, and should name an entry with write
permission to the slave database (e.g., an entry allowed access via
{{EX:access}} directives in the slave slapd configuration file).
This DN generally {{should not}} be the same as the master's
{{EX:rootdn}}.

+ Add a {{EX:replogfile}} directive, which tells slapd where to log
changes. This file will be read by slurpd.


H3: Set up the slave {{slapd}}

Install the slapd software on the host which is to be the slave
slapd server. The configuration of the slave server should be
identical to that of the master, with the following exceptions:

^ Do not include a {{EX:replica}} directive. While it is possible
to create "chains" of replicas, in most cases this is inappropriate.

+ Do not include a {{EX:replogfile}} directive.

+ Do include an {{EX:updatedn}} line. The DN given should match the
DN given in the {{EX:binddn=}} parameter of the corresponding
{{EX:replica=}} directive in the master slapd config file. The
{{EX:updatedn}} generally {{should not}} be the same as the
{{EX:rootdn}} of the master database.

+ Make sure the DN given in the {{EX:updatedn}} directive has
permission to write the database (e.g., it is allowed {{EX:access}}
by one or more access directives).

+ Use the {{EX:updateref}} directive to define the URL the slave
should return if an update request is received.
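
For example, assuming the master listens at
{{EX:ldap://master.example.com}} (an illustrative URL):

> updateref ldap://master.example.com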

H3: Shut down the master server

In order to ensure that the slave starts with an exact copy of the
master's data, you must shut down the master slapd. Do this by
sending the master slapd process an interrupt signal with
{{EX:kill -INT <pid>}}, where {{EX:<pid>}} is the process-id of the master
slapd process.

If you like, you may restart the master slapd in read-only mode
while you are replicating the database. During this time, the master
slapd will return an "unwilling to perform" error to clients that
attempt to modify data.


H3: Copy the master slapd's database to the slave

Copy the master's database(s) to the slave. For {{TERM:BDB}} and
{{TERM:HDB}} databases, you must copy all database files located
in the database {{EX:directory}} specified in {{slapd.conf}}(5).
In general, you should copy each file found in the database
{{EX:directory}} unless you know it is not used by {{slapd}}(8).

Note: This copy process assumes homogeneous servers with identically
configured OpenLDAP installations. Alternatively, you may use
{{slapcat}} to output the master's database in LDIF format and use
the LDIF with {{slapadd}} to populate the slave. Using LDIF avoids
any potential incompatibilities due to differing server architectures
or software configurations. See the {{SECT:Database Creation and
Maintenance Tools}} chapter for details on these tools.


H3: Configure the master slapd for replication

To configure slapd to generate a replication logfile, you add a
"{{EX:replica}}" configuration option to the master slapd's config
file. For example, if we wish to propagate changes to the slapd
instance running on host {{EX:slave.example.com}}:

> replica uri=ldap://slave.example.com:389
>         binddn="cn=Replicator,dc=example,dc=com"
>         bindmethod=simple credentials=secret

In this example, changes will be sent to port 389 (the standard
LDAP port) on host slave.example.com. The slurpd process will bind
to the slave slapd as "{{EX:cn=Replicator,dc=example,dc=com}}" using
simple authentication with password "{{EX:secret}}".

If we wish to perform the same replication using ldaps on port 636:

> replica uri=ldaps://slave.example.com:636
>         binddn="cn=Replicator,dc=example,dc=com"
>         bindmethod=simple credentials=secret

The host option is deprecated in favor of uri, but the following
replica configuration is still supported:

> replica host=slave.example.com:389
>         binddn="cn=Replicator,dc=example,dc=com"
>         bindmethod=simple credentials=secret

Note that the DN given by the {{EX:binddn=}} directive must exist
in the slave slapd's database (or be the rootdn specified in the
slapd config file) in order for the bind operation to succeed. The
DN should also be listed as the {{EX:updatedn}} for the database
in the slave's slapd.conf(5). It is generally recommended that
this DN be different from the {{EX:rootdn}} of the master database.

Note: The use of strong authentication and transport security is
highly recommended.


H3: Restart the master slapd and start the slave slapd

Restart the master slapd process. To check that it is
generating replication logs, perform a modification of any
entry in the database, and check that data has been
written to the log file.


H3: Start slurpd

Start the slurpd process. Slurpd should immediately send
the test modification you made to the slave slapd. Watch
the slave slapd's logfile to be sure that the modification
was sent.

> slurpd -f <masterslapdconfigfile>


H2: Advanced slurpd Operation

H3: Replication errors

When slurpd propagates a change to a slave slapd and receives an
error return code, it writes the reason for the error and the
replication record to a reject file. The reject file is located in
the same directory as the per-replica replication logfile, and has
the same name, but with the string "{{F:.rej}}" appended. For
example, for a replica running on host {{EX:slave.example.com}},
port 389, the reject file, if it exists, will be named

> /usr/local/var/openldap/replog.slave.example.com:389.rej

A sample rejection log entry follows:

> ERROR: No such attribute
> replica: slave.example.com:389
> time: 809618633
> dn: uid=bjensen,dc=example,dc=com
> changetype: modify
> replace: description
> description: A dreamer...
> -
> replace: modifiersName
> modifiersName: uid=bjensen,dc=example,dc=com
> -
> replace: modifyTimestamp
> modifyTimestamp: 20000805073308Z
> -

Note that this is precisely the same format as the original replication
log entry, but with an {{EX:ERROR}} line prepended to the entry.


H3: One-shot mode and reject files

It is possible to use slurpd to process a rejection log with its
"one-shot mode." In normal operation, slurpd watches for more
replication records to be appended to the replication log file. In
one-shot mode, by contrast, slurpd processes a single log file and
exits. Slurpd ignores {{EX:ERROR}} lines at the beginning of
replication log entries, so it's not necessary to edit them out
before feeding it the rejection log.

To use one-shot mode, specify the name of the rejection log on the
command line as the argument to the -r flag, and specify one-shot
mode with the -o flag. For example, to process the rejection log
file {{F:/usr/local/var/openldap/replog.slave.example.com:389.rej}}
and exit, use the command

> slurpd -r /usr/local/var/openldap/replog.slave.example.com:389.rej -o
@ -1,404 +0,0 @@
|
|||
# $OpenLDAP$
|
||||
# Copyright 2003-2007 The OpenLDAP Foundation, All Rights Reserved.
|
||||
# COPYING RESTRICTIONS APPLY, see COPYRIGHT.
|
||||
|
||||
H1: LDAP Sync Replication
|
||||
|
||||
The {{TERM:LDAP Sync}} Replication engine, {{TERM:syncrepl}} for
|
||||
short, is a consumer-side replication engine that enables the
|
||||
consumer {{TERM:LDAP}} server to maintain a shadow copy of a
|
||||
{{TERM:DIT}} fragment. A syncrepl engine resides at the consumer-side
|
||||
as one of the {{slapd}}(8) threads. It creates and maintains a
|
||||
consumer replica by connecting to the replication provider to perform
|
||||
the initial DIT content load followed either by periodic content
|
||||
polling or by timely updates upon content changes.
|
||||
|
||||
Syncrepl uses the LDAP Content Synchronization (or LDAP Sync for
|
||||
short) protocol as the replica synchronization protocol. It provides
|
||||
a stateful replication which supports both pull-based and push-based
|
||||
synchronization and does not mandate the use of a history store.
|
||||
|
||||
Syncrepl keeps track of the status of the replication content by
|
||||
maintaining and exchanging synchronization cookies. Because the
|
||||
syncrepl consumer and provider maintain their content status, the
|
||||
consumer can poll the provider content to perform incremental
|
||||
synchronization by asking for the entries required to make the
|
||||
consumer replica up-to-date with the provider content. Syncrepl
|
||||
also enables convenient management of replicas by maintaining replica
|
||||
status. The consumer replica can be constructed from a consumer-side
|
||||
or a provider-side backup at any synchronization status. Syncrepl
|
||||
can automatically resynchronize the consumer replica up-to-date
|
||||
with the current provider content.
|
||||
|
||||
Syncrepl supports both pull-based and push-based synchronization.
|
||||
In its basic refreshOnly synchronization mode, the provider uses
|
||||
pull-based synchronization where the consumer servers need not be
|
||||
tracked and no history information is maintained. The information
|
||||
required for the provider to process periodic polling requests is
|
||||
contained in the synchronization cookie of the request itself. To
|
||||
optimize the pull-based synchronization, syncrepl utilizes the
|
||||
present phase of the LDAP Sync protocol as well as its delete phase,
|
||||
instead of falling back on frequent full reloads. To further optimize
|
||||
the pull-based synchronization, the provider can maintain a per-scope
|
||||
session log as a history store. In its refreshAndPersist mode of
|
||||
synchronization, the provider uses a push-based synchronization.
|
||||
The provider keeps track of the consumer servers that have requested
|
||||
a persistent search and sends them necessary updates as the provider
|
||||
replication content gets modified.
|
||||
|
||||
With syncrepl, a consumer server can create a replica without
|
||||
changing the provider's configurations and without restarting the
|
||||
provider server, if the consumer server has appropriate access
|
||||
privileges for the DIT fragment to be replicated. The consumer
|
||||
server can stop the replication also without the need for provider-side
|
||||
changes and restart.
|
||||
|
||||
Syncrepl supports both partial and sparse replications. The shadow
|
||||
DIT fragment is defined by a general search criteria consisting of
|
||||
base, scope, filter, and attribute list. The replica content is
|
||||
also subject to the access privileges of the bind identity of the
|
||||
syncrepl replication connection.
|
||||
|
||||
|
||||
H2: The LDAP Content Synchronization Protocol
|
||||
|
||||
The LDAP Sync protocol allows a client to maintain a synchronized
|
||||
copy of a DIT fragment. The LDAP Sync operation is defined as a set
|
||||
of controls and other protocol elements which extend the LDAP search
|
||||
operation. This section introduces the LDAP Content Sync protocol
|
||||
only briefly. For more information, refer to {{REF:RFC4533}}.
|
||||
|
||||
The LDAP Sync protocol supports both polling and listening for
|
||||
changes by defining two respective synchronization operations:
|
||||
{{refreshOnly}} and {{refreshAndPersist}}. Polling is implemented
|
||||
by the {{refreshOnly}} operation. The client copy is synchronized
|
||||
to the server copy at the time of polling. The server finishes the
|
||||
search operation by returning {{SearchResultDone}} at the end of
|
||||
the search operation as in the normal search. The listening is
|
||||
implemented by the {{refreshAndPersist}} operation. Instead of
|
||||
finishing the search after returning all entries currently matching
|
||||
the search criteria, the synchronization search remains persistent
|
||||
in the server. Subsequent updates to the synchronization content
|
||||
in the server cause additional entry updates to be sent to the
|
||||
client.
|
||||
|
||||
The {{refreshOnly}} operation and the refresh stage of the
|
||||
{{refreshAndPersist}} operation can be performed with a present
|
||||
phase or a delete phase.
|
||||
|
||||
In the present phase, the server sends the client the entries updated
|
||||
within the search scope since the last synchronization. The server
|
||||
sends all requested attributes, be it changed or not, of the updated
|
||||
entries. For each unchanged entry which remains in the scope, the
|
||||
server sends a present message consisting only of the name of the
|
||||
entry and the synchronization control representing state present.
|
||||
The present message does not contain any attributes of the entry.
|
||||
After the client receives all update and present entries, it can
|
||||
reliably determine the new client copy by adding the entries added
|
||||
to the server, by replacing the entries modified at the server, and
|
||||
by deleting entries in the client copy which have not been updated
|
||||
nor specified as being present at the server.
|
||||
|
||||
The transmission of the updated entries in the delete phase is the
|
||||
same as in the present phase. The server sends all the requested
|
||||
attributes of the entries updated within the search scope since the
|
||||
last synchronization to the client. In the delete phase, however,
|
||||
the server sends a delete message for each entry deleted from the
|
||||
search scope, instead of sending present messages. The delete
|
||||
message consists only of the name of the entry and the synchronization
|
||||
control representing state delete. The new client copy can be
|
||||
determined by adding, modifying, and removing entries according to
|
||||
the synchronization control attached to the {{SearchResultEntry}}
|
||||
message.
|
||||
|
||||
If the LDAP Sync server maintains a history store and can determine
which entries are scoped out of the client copy since the last
synchronization time, the server can use the delete phase. If the
server does not maintain any history store, cannot determine the
scoped-out entries from the history store, or the history store
does not cover the outdated synchronization state of the client,
the server should use the present phase. The use of the present
phase is much more efficient than a full content reload in terms
of synchronization traffic. To reduce the synchronization traffic
further, the LDAP Sync protocol also provides several optimizations
such as the transmission of the normalized {{EX:entryUUID}}s and
the transmission of multiple {{EX:entryUUID}}s in a single
{{syncIdSet}} message.

At the end of the {{refreshOnly}} synchronization, the server sends
a synchronization cookie to the client as a state indicator of the
client copy after the synchronization is completed. The client
will present the received cookie when it requests the next incremental
synchronization from the server.

When {{refreshAndPersist}} synchronization is used, the server sends
a synchronization cookie at the end of the refresh stage by sending
a Sync Info message with refreshDone set to TRUE. It also sends a
synchronization cookie by attaching it to {{SearchResultEntry}}
messages generated in the persist stage of the synchronization
search. During the persist stage, the server can also send a Sync
Info message containing the synchronization cookie at any time it
wants to update the client-side state indicator. The server also
updates a synchronization indicator of the client at the end of the
persist stage.

In the LDAP Sync protocol, entries are uniquely identified by the
{{EX:entryUUID}} attribute value. It can function as a reliable
identifier of the entry. The DN of the entry, on the other hand,
can change over time and hence cannot be considered a reliable
identifier. The {{EX:entryUUID}} is attached to each
{{SearchResultEntry}} or {{SearchResultReference}} as a part of the
synchronization control.


H2: Syncrepl Details

The syncrepl engine utilizes both the {{refreshOnly}} and the
{{refreshAndPersist}} operations of the LDAP Sync protocol. If a
syncrepl specification is included in a database definition,
{{slapd}}(8) launches a syncrepl engine as a {{slapd}}(8) thread
and schedules its execution. If the {{refreshOnly}} operation is
specified, the syncrepl engine will be rescheduled to run at the
configured interval after a synchronization operation is completed.
If the {{refreshAndPersist}} operation is specified, the engine
will remain active and process the persistent synchronization
messages from the provider.

The syncrepl engine utilizes both the present phase and the delete
phase of the refresh synchronization. It is possible to configure
a per-scope session log in the provider server which stores the
{{EX:entryUUID}}s of a finite number of entries deleted from a
replication content. Multiple replicas of a single provider content
share the same per-scope session log. The syncrepl engine uses the
delete phase if the session log is present and the state of the
consumer server is recent enough that no session log entries are
truncated after the last synchronization of the client. The syncrepl
engine uses the present phase if no session log is configured for
the replication content or if the consumer replica is too outdated
to be covered by the session log. The current design of the session
log store is memory based, so the information contained in the
session log is not persistent over multiple provider invocations.
Accessing the session log store via LDAP operations is not currently
supported, nor is imposing access control on the session log.

As a further optimization, even when the synchronization search is
not associated with any session log, no entries will be transmitted
to the consumer server when there has been no update in the
replication context.

The syncrepl engine, which is a consumer-side replication engine,
can work with any backend. The LDAP Sync provider can be configured
as an overlay on any backend, but works best with the {{back-bdb}}
or {{back-hdb}} backend.

The LDAP Sync provider maintains a {{EX:contextCSN}} for each
database as the current synchronization state indicator of the
provider content. It is the largest {{EX:entryCSN}} in the provider
context such that no transactions for an entry having a smaller
{{EX:entryCSN}} value remain outstanding. The {{EX:contextCSN}}
could not just be set to the largest issued {{EX:entryCSN}} because
{{EX:entryCSN}} is obtained before a transaction starts and
transactions are not committed in the issue order.

The provider stores the {{EX:contextCSN}} of a context in the
{{EX:contextCSN}} attribute of the context suffix entry. The attribute
is not written to the database after every update operation though;
instead it is maintained primarily in memory. At database start
time the provider reads the last saved {{EX:contextCSN}} into memory
and uses the in-memory copy exclusively thereafter. By default,
changes to the {{EX:contextCSN}} as a result of database updates
will not be written to the database until the server is cleanly
shut down. A checkpoint facility exists to cause the {{EX:contextCSN}}
to be written out more frequently if desired.

Note that at startup time, if the provider is unable to read a
{{EX:contextCSN}} from the suffix entry, it will scan the entire
database to determine the value, and this scan may take quite a
long time on a large database. When a {{EX:contextCSN}} value is
read, the database will still be scanned for any {{EX:entryCSN}}
values greater than it, to make sure the {{EX:contextCSN}} value
truly reflects the greatest committed {{EX:entryCSN}} in the database.
On databases which support inequality indexing, setting an eq index
on the {{EX:entryCSN}} attribute and configuring {{EX:contextCSN}}
checkpoints will greatly speed up this scanning step.

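For example, in a {{back-bdb}} or {{back-hdb}} database section this
amounts to directives along the following lines (the checkpoint
values are illustrative only):

> index entryCSN eq
> overlay syncprov
> syncprov-checkpoint 100 10
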
If no {{EX:contextCSN}} can be determined by reading and scanning
the database, a new value will be generated. Also, if scanning the
database yielded a greater {{EX:entryCSN}} than was previously
recorded in the suffix entry's {{EX:contextCSN}} attribute, a
checkpoint will be immediately written with the new value.

The consumer also stores its replica state, which is the provider's
{{EX:contextCSN}} received as a synchronization cookie, in the
{{EX:contextCSN}} attribute of the suffix entry. The replica state
maintained by a consumer server is used as the synchronization state
indicator when it performs subsequent incremental synchronization
with the provider server. It is also used as a provider-side
synchronization state indicator when it functions as a secondary
provider server in a cascading replication configuration. Since
the consumer and provider state information are maintained in the
same location within their respective databases, any consumer can
be promoted to a provider (and vice versa) without any special
actions.

Because a general search filter can be used in the syncrepl
specification, some entries in the context may be omitted from the
synchronization content. The syncrepl engine creates a glue entry
to fill in the holes in the replica context if any part of the
replica content is subordinate to the holes. The glue entries will
not be returned in the search result unless the {{ManageDsaIT}}
control is provided.

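For illustration only, the control can be requested with the
{{EX:-M}} option of {{ldapsearch}}(1), which makes any glue entries
visible in the result; the consumer URI below is hypothetical, and
{{EX:1.1}} requests no attributes so only the DNs are returned:

> ldapsearch -x -M -H ldap://consumer.example.com \
>     -b "dc=example,dc=com" "(objectClass=*)" 1.1
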
Also as a consequence of the search filter used in the syncrepl
specification, it is possible for a modification to remove an entry
from the replication scope even though the entry has not been deleted
on the provider. Logically the entry must be deleted on the consumer
but in {{refreshOnly}} mode the provider cannot detect and propagate
this change without the use of the session log.


H2: Configuring Syncrepl

Because syncrepl is a consumer-side replication engine, the syncrepl
specification is defined in {{slapd.conf}}(5) of the consumer
server, not in the provider server's configuration file. The initial
loading of the replica content can be performed either by starting
the syncrepl engine with no synchronization cookie or by populating
the consumer replica from an {{TERM:LDIF}} file dumped as a backup
at the provider.

When loading from a backup, it is not required to perform the initial
loading from an up-to-date backup of the provider content. The
syncrepl engine will automatically synchronize the initial consumer
replica to the current provider content. As a result, it is not
required to stop the provider server in order to avoid the replica
inconsistency caused by the updates to the provider content during
the content backup and loading process.

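A minimal sketch of priming a consumer from a backup (the file name
is illustrative):

> # on the provider: dump the replicated database
> slapcat -l backup.ldif
>
> # on the consumer: load the dump before starting slapd(8)
> slapadd -l backup.ldif
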
When replicating a large scale directory, especially in a bandwidth
constrained environment, it is advised to load the consumer replica
from a backup instead of performing a full initial load using
syncrepl.


H3: Set up the provider slapd

The provider is implemented as an overlay, so the overlay itself
must first be configured in {{slapd.conf}}(5) before it can be
used. The provider has only two configuration directives: one for
setting checkpoints on the {{EX:contextCSN}} and one for configuring
the session log. Because the LDAP Sync search is subject to access
control, proper access control privileges should be set up for the
replicated content.

The {{EX:contextCSN}} checkpoint is configured by the

> syncprov-checkpoint <ops> <minutes>

directive. Checkpoints are only tested after successful write
operations. If {{EX:<ops>}} operations have occurred or more than
{{EX:<minutes>}} minutes have passed since the last checkpoint, a
new checkpoint is performed.

The session log is configured by the

> syncprov-sessionlog <size>

directive, where {{EX:<size>}} is the maximum number of session log
entries the session log can record. When a session log is configured,
it is automatically used for all LDAP Sync searches within the
database.

Note that using the session log requires searching on the
{{EX:entryUUID}} attribute. Setting an eq index on this attribute
will greatly benefit the performance of the session log on the
provider.

A more complete example of the {{slapd.conf}}(5) content is thus:

> database bdb
> suffix dc=Example,dc=com
> rootdn dc=Example,dc=com
> directory /var/ldap/db
> index objectclass,entryCSN,entryUUID eq
>
> overlay syncprov
> syncprov-checkpoint 100 10
> syncprov-sessionlog 100


H3: Set up the consumer slapd

The syncrepl replication is specified in the database section of
{{slapd.conf}}(5) for the replica context. The syncrepl engine
is backend independent and the directive can be defined with any
database type.

> database hdb
> suffix dc=Example,dc=com
> rootdn dc=Example,dc=com
> directory /var/ldap/db
> index objectclass,entryCSN,entryUUID eq
>
> syncrepl rid=123
>         provider=ldap://provider.example.com:389
>         type=refreshOnly
>         interval=01:00:00:00
>         searchbase="dc=example,dc=com"
>         filter="(objectClass=organizationalPerson)"
>         scope=sub
>         attrs="cn,sn,ou,telephoneNumber,title,l"
>         schemachecking=off
>         bindmethod=simple
>         binddn="cn=syncuser,dc=example,dc=com"
>         credentials=secret

In this example, the consumer will connect to the provider {{slapd}}(8)
at port 389 of {{FILE:ldap://provider.example.com}} to perform a
polling ({{refreshOnly}}) mode of synchronization once a day. It
will bind as {{EX:cn=syncuser,dc=example,dc=com}} using simple
authentication with password "secret". Note that the access control
privileges of {{EX:cn=syncuser,dc=example,dc=com}} should be set
appropriately in the provider to retrieve the desired replication
content. Also the search limits must be high enough on the provider
to allow the syncuser to retrieve a complete copy of the requested
content. The consumer uses the rootdn to write to its database so
it always has full permissions to write all content.

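A sketch of what the corresponding provider-side settings might look
like; the access rule and limit values are illustrative, not
prescriptive:

> # in the provider's database section
> access to *
>         by dn.exact="cn=syncuser,dc=example,dc=com" read
>         by * break
> limits dn.exact="cn=syncuser,dc=example,dc=com"
>         time.soft=unlimited time.hard=unlimited
>         size.soft=unlimited size.hard=unlimited
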
The synchronization search in the above example will search for
entries whose objectClass is organizationalPerson in the entire
subtree rooted at {{EX:dc=example,dc=com}}. The requested attributes
are {{EX:cn}}, {{EX:sn}}, {{EX:ou}}, {{EX:telephoneNumber}},
{{EX:title}}, and {{EX:l}}. The schema checking is turned off, so
that the consumer {{slapd}}(8) will not enforce entry schema
checking when it processes updates from the provider {{slapd}}(8).

For more detailed information on the syncrepl directive, see the
{{SECT:syncrepl}} section of {{SECT:The slapd Configuration File}}
chapter of this admin guide.


H3: Start the provider and the consumer slapd

The provider {{slapd}}(8) is not required to be restarted.
{{EX:contextCSN}} is automatically generated as needed: it might be
originally contained in the {{TERM:LDIF}} file, generated by
{{slapadd}}(8), generated upon changes in the context, or generated
when the first LDAP Sync search arrives at the provider. If an
LDIF file is being loaded which did not previously contain the
{{EX:contextCSN}}, the {{-w}} option should be used with
{{slapadd}}(8) to cause it to be generated. This will allow the
server to start up a little quicker the first time it runs.

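For example (the LDIF file name is illustrative):

> slapadd -w -l backup.ldif
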
When starting a consumer {{slapd}}(8), it is possible to provide
a synchronization cookie as the {{-c cookie}} command line option
in order to start the synchronization from a specific state. The
cookie is a comma separated list of name=value pairs. Currently
supported syncrepl cookie fields are {{EX:csn=<csn>}} and
{{EX:rid=<rid>}}. {{EX:<csn>}} represents the current synchronization
state of the consumer replica. {{EX:<rid>}} identifies a consumer
replica locally within the consumer server. It is used to relate
the cookie to the syncrepl definition in {{slapd.conf}}(5) which
has the matching replica identifier. The {{EX:<rid>}} must have no
more than 3 decimal digits. The command line cookie overrides the
synchronization cookie stored in the consumer replica database.

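For illustration only, such an invocation might look like the
following; the CSN value is a made-up example of the CSN syntax and
must be replaced with a real state value:

> slapd -c rid=123,csn=20070601120000Z#000000#00#000000
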
22
doc/guide/admin/troubleshooting.sdf
Normal file

# Copyright 2007 The OpenLDAP Foundation, All Rights Reserved.
# COPYING RESTRICTIONS APPLY, see COPYRIGHT.

H1: Troubleshooting


H2: Checklist


H2: User or Software errors?


H2: How to contact the OpenLDAP Project


H2: How to present your problem


H2: Debugging slapd


H2: Commercial Support