ITS#9275 -- Update wording to remove slave and master terms, consolidate on provider/consumer

Quanah Gibson-Mount 2020-06-15 20:06:35 +00:00
parent 24b45f57f2
commit 21eef84a49
124 changed files with 1185 additions and 1191 deletions

View file

@ -84,8 +84,7 @@ Currently simple and kerberos-based authentication, are supported.
To use LDAP and still have reasonable security in a networked,
Internet/Intranet environment, secure shell can be used to setup
secure, encrypted connections between client machines and the LDAP
server, and between the LDAP server and any replica or slave servers
that might be used.
server, and between all LDAP nodes that might be used.
To perform the LDAP "bind" operation:

View file

@ -60,7 +60,7 @@ attribute is updated on each successful bind operation.
.B lastbind_forward_updates
Specify that updates of the authTimestamp attribute
on a consumer should be forwarded
to a master instead of being written directly into the consumer's local
to a provider instead of being written directly into the consumer's local
database. This setting is only useful on a replication consumer, and
also requires the
.B updateref

View file

@ -69,7 +69,7 @@ sdf-img: \
intro_tree.png \
ldap-sync-refreshandpersist.png \
ldap-sync-refreshonly.png \
n-way-multi-master.png \
n-way-multi-provider.png \
push-based-complete.png \
push-based-standalone.png \
refint.png \

View file

@ -45,9 +45,9 @@ H2: Replicated Directory Service
slapd(8) includes support for {{LDAP Sync}}-based replication, called
{{syncrepl}}, which may be used to maintain shadow copies of directory
information on multiple directory servers. In its most basic
configuration, the {{master}} is a syncrepl provider and one or more
{{slave}} (or {{shadow}}) are syncrepl consumers. An example
master-slave configuration is shown in figure 3.3. Multi-Master
configuration, the {{provider}} is a syncrepl provider and one or more
{{consumer}} (or {{shadow}}) are syncrepl consumers. An example
provider-consumer configuration is shown in figure 3.3. Multi-Provider
configurations are also supported.
!import "config_repl.png"; align="center"; title="Replicated Directory Services"

View file

@ -33,7 +33,7 @@ tuned to give quick response to high-volume lookup or search
operations. They may have the ability to replicate information
widely in order to increase availability and reliability, while
reducing response time. When directory information is replicated,
temporary inconsistencies between the replicas may be okay, as long
temporary inconsistencies between the consumers may be okay, as long
as inconsistencies are resolved in a timely manner.
There are many different ways to provide a directory service.
@ -430,11 +430,11 @@ a pool of threads. This reduces the amount of system overhead
required while providing high performance.
{{B:Replication}}: {{slapd}} can be configured to maintain shadow
copies of directory information. This {{single-master/multiple-slave}}
copies of directory information. This {{single-provider/multiple-consumer}}
replication scheme is vital in high-volume environments where a
single {{slapd}} installation just doesn't provide the necessary availability
or reliability. For extremely demanding environments where a
single point of failure is not acceptable, {{multi-master}} replication
single point of failure is not acceptable, {{multi-provider}} replication
is also available. {{slapd}} includes support for {{LDAP Sync}}-based
replication.

View file

@ -87,7 +87,7 @@ type are:
.{{S: }}
+{{B: Start the server}}
Obviously this doesn't cater for any complicated deployments like {{SECT: MirrorMode}} or {{SECT: N-Way Multi-Master}},
Obviously this doesn't cater for any complicated deployments like {{SECT: MirrorMode}} or {{SECT: N-Way Multi-Provider}},
but following the above sections and using either commercial support or community support should help. Also check the
{{SECT: Troubleshooting}} section.

View file

(image file changed; size unchanged at 46 KiB)

View file

@ -79,7 +79,7 @@ or in raw form.
It is also used for {{SECT:delta-syncrepl replication}}
Note: An accesslog database is unique to a given master. It should
Note: An accesslog database is unique to a given provider. It should
never be replicated.
H3: Access Logging Configuration
@ -255,13 +255,13 @@ default when {{B:--enable-ldap}}.
H3: Chaining Configuration
In order to demonstrate how this overlay works, we shall discuss a typical
scenario which might be one master server and three Syncrepl slaves.
scenario which might be one provider server and three Syncrepl replicas.
On each replica, add this near the top of the {{slapd.conf}}(5) file
(global), before any database definitions:
> overlay chain
> chain-uri "ldap://ldapmaster.example.com"
> chain-uri "ldap://ldapprovider.example.com"
> chain-idassert-bind bindmethod="simple"
> binddn="cn=Manager,dc=example,dc=com"
> credentials="<secret>"
@ -271,48 +271,48 @@ On each replica, add this near the top of the {{slapd.conf}}(5) file
Add this below your {{syncrepl}} statement:
> updateref "ldap://ldapmaster.example.com/"
> updateref "ldap://ldapprovider.example.com/"
The {{B:chain-tls}} statement enables TLS from the slave to the ldap master.
The {{B:chain-tls}} statement enables TLS from the replica to the ldap provider.
The DITs are exactly the same between these machines, therefore whatever user
bound to the slave will also exist on the master. If that DN does not have
update privileges on the master, nothing will happen.
bound to the replica will also exist on the provider. If that DN does not have
update privileges on the provider, nothing will happen.
You will need to restart the slave after these {{slapd.conf}} changes.
You will need to restart the replica after these {{slapd.conf}} changes.
Then, if you are using {{loglevel stats}} (256), you can monitor an
{{ldapmodify}} on the slave and the master. (If you're using {{cn=config}}
{{ldapmodify}} on the replica and the provider. (If you're using {{cn=config}}
no restart is required.)
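
For reference, a modification of the kind that produces the log output below could be issued against the replica with something along these lines (the hostname, DN and mail value are illustrative, not part of this commit):

> ldapmodify -H ldap://replica1.example.com -ZZ -x \
>       -D "uid=user1,ou=People,dc=example,dc=com" -W <<EOF
> dn: uid=user1,ou=People,dc=example,dc=com
> changetype: modify
> replace: mail
> mail: user1@example.com
> EOF
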
Now start an {{ldapmodify}} on the slave and watch the logs. You should expect
Now start an {{ldapmodify}} on the replica and watch the logs. You should expect
something like:
> Sep 6 09:27:25 slave1 slapd[29274]: conn=11 fd=31 ACCEPT from IP=143.199.102.216:45181 (IP=143.199.102.216:389)
> Sep 6 09:27:25 slave1 slapd[29274]: conn=11 op=0 STARTTLS
> Sep 6 09:27:25 slave1 slapd[29274]: conn=11 op=0 RESULT oid= err=0 text=
> Sep 6 09:27:25 slave1 slapd[29274]: conn=11 fd=31 TLS established tls_ssf=256 ssf=256
> Sep 6 09:27:28 slave1 slapd[29274]: conn=11 op=1 BIND dn="uid=user1,ou=people,dc=example,dc=com" method=128
> Sep 6 09:27:28 slave1 slapd[29274]: conn=11 op=1 BIND dn="uid=user1,ou=People,dc=example,dc=com" mech=SIMPLE ssf=0
> Sep 6 09:27:28 slave1 slapd[29274]: conn=11 op=1 RESULT tag=97 err=0 text=
> Sep 6 09:27:28 slave1 slapd[29274]: conn=11 op=2 MOD dn="uid=user1,ou=People,dc=example,dc=com"
> Sep 6 09:27:28 slave1 slapd[29274]: conn=11 op=2 MOD attr=mail
> Sep 6 09:27:28 slave1 slapd[29274]: conn=11 op=2 RESULT tag=103 err=0 text=
> Sep 6 09:27:28 slave1 slapd[29274]: conn=11 op=3 UNBIND
> Sep 6 09:27:28 slave1 slapd[29274]: conn=11 fd=31 closed
> Sep 6 09:27:28 slave1 slapd[29274]: syncrepl_entry: LDAP_RES_SEARCH_ENTRY(LDAP_SYNC_MODIFY)
> Sep 6 09:27:28 slave1 slapd[29274]: syncrepl_entry: be_search (0)
> Sep 6 09:27:28 slave1 slapd[29274]: syncrepl_entry: uid=user1,ou=People,dc=example,dc=com
> Sep 6 09:27:28 slave1 slapd[29274]: syncrepl_entry: be_modify (0)
> Sep 6 09:27:25 replica1 slapd[29274]: conn=11 fd=31 ACCEPT from IP=143.199.102.216:45181 (IP=143.199.102.216:389)
> Sep 6 09:27:25 replica1 slapd[29274]: conn=11 op=0 STARTTLS
> Sep 6 09:27:25 replica1 slapd[29274]: conn=11 op=0 RESULT oid= err=0 text=
> Sep 6 09:27:25 replica1 slapd[29274]: conn=11 fd=31 TLS established tls_ssf=256 ssf=256
> Sep 6 09:27:28 replica1 slapd[29274]: conn=11 op=1 BIND dn="uid=user1,ou=people,dc=example,dc=com" method=128
> Sep 6 09:27:28 replica1 slapd[29274]: conn=11 op=1 BIND dn="uid=user1,ou=People,dc=example,dc=com" mech=SIMPLE ssf=0
> Sep 6 09:27:28 replica1 slapd[29274]: conn=11 op=1 RESULT tag=97 err=0 text=
> Sep 6 09:27:28 replica1 slapd[29274]: conn=11 op=2 MOD dn="uid=user1,ou=People,dc=example,dc=com"
> Sep 6 09:27:28 replica1 slapd[29274]: conn=11 op=2 MOD attr=mail
> Sep 6 09:27:28 replica1 slapd[29274]: conn=11 op=2 RESULT tag=103 err=0 text=
> Sep 6 09:27:28 replica1 slapd[29274]: conn=11 op=3 UNBIND
> Sep 6 09:27:28 replica1 slapd[29274]: conn=11 fd=31 closed
> Sep 6 09:27:28 replica1 slapd[29274]: syncrepl_entry: LDAP_RES_SEARCH_ENTRY(LDAP_SYNC_MODIFY)
> Sep 6 09:27:28 replica1 slapd[29274]: syncrepl_entry: be_search (0)
> Sep 6 09:27:28 replica1 slapd[29274]: syncrepl_entry: uid=user1,ou=People,dc=example,dc=com
> Sep 6 09:27:28 replica1 slapd[29274]: syncrepl_entry: be_modify (0)
And on the master you will see this:
And on the provider you will see this:
> Sep 6 09:23:57 ldapmaster slapd[2961]: conn=55902 op=3 PROXYAUTHZ dn="uid=user1,ou=people,dc=example,dc=com"
> Sep 6 09:23:57 ldapmaster slapd[2961]: conn=55902 op=3 MOD dn="uid=user1,ou=People,dc=example,dc=com"
> Sep 6 09:23:57 ldapmaster slapd[2961]: conn=55902 op=3 MOD attr=mail
> Sep 6 09:23:57 ldapmaster slapd[2961]: conn=55902 op=3 RESULT tag=103 err=0 text=
> Sep 6 09:23:57 ldapprovider slapd[2961]: conn=55902 op=3 PROXYAUTHZ dn="uid=user1,ou=people,dc=example,dc=com"
> Sep 6 09:23:57 ldapprovider slapd[2961]: conn=55902 op=3 MOD dn="uid=user1,ou=People,dc=example,dc=com"
> Sep 6 09:23:57 ldapprovider slapd[2961]: conn=55902 op=3 MOD attr=mail
> Sep 6 09:23:57 ldapprovider slapd[2961]: conn=55902 op=3 RESULT tag=103 err=0 text=
Note: You can clearly see the PROXYAUTHZ line on the master, indicating the
proper identity assertion for the update on the master. Also note the slave
immediately receiving the Syncrepl update from the master.
Note: You can clearly see the PROXYAUTHZ line on the provider, indicating the
proper identity assertion for the update on the provider. Also note the replica
immediately receiving the Syncrepl update from the provider.
H3: Handling Chaining Errors
@ -678,8 +678,8 @@ H2: The Proxy Cache Engine
{{TERM:LDAP}} servers typically hold one or more subtrees of a
{{TERM:DIT}}. Replica (or shadow) servers hold shadow copies of
entries held by one or more master servers. Changes are propagated
from the master server to replica (slave) servers using LDAP Sync
entries held by one or more provider servers. Changes are propagated
from the provider server to replica servers using LDAP Sync
replication. An LDAP cache is a special type of replica which holds
entries corresponding to search filters instead of subtrees.

View file

@ -37,12 +37,12 @@ short, is a consumer-side replication engine that enables the
consumer {{TERM:LDAP}} server to maintain a shadow copy of a
{{TERM:DIT}} fragment. A syncrepl engine resides at the consumer
and executes as one of the {{slapd}}(8) threads. It creates and maintains a
consumer replica by connecting to the replication provider to perform
replica by connecting to the replication provider to perform
the initial DIT content load followed either by periodic content
polling or by timely updates upon content changes.
Syncrepl uses the LDAP Content Synchronization protocol (or LDAP Sync for
short) as the replica synchronization protocol. LDAP Sync provides
short) as the consumer synchronization protocol. LDAP Sync provides
a stateful replication which supports both pull-based and push-based
synchronization and does not mandate the use of a history store.
In pull-based replication the consumer periodically
@ -58,11 +58,11 @@ maintaining and exchanging synchronization cookies. Because the
syncrepl consumer and provider maintain their content status, the
consumer can poll the provider content to perform incremental
synchronization by asking for the entries required to make the
consumer replica up-to-date with the provider content. Syncrepl
also enables convenient management of replicas by maintaining replica
status. The consumer replica can be constructed from a consumer-side
consumer up-to-date with the provider content. Syncrepl
also enables convenient management of consumers by maintaining replication
status. The consumer database can be constructed from a consumer-side
or a provider-side backup at any synchronization status. Syncrepl
can automatically resynchronize the consumer replica up-to-date
can automatically resynchronize the consumer database to be up-to-date
with the current provider content.
Syncrepl supports both pull-based and push-based synchronization.
@ -81,7 +81,7 @@ The provider keeps track of the consumer servers that have requested
a persistent search and sends them necessary updates as the provider
replication content gets modified.
With syncrepl, a consumer server can create a replica without
With syncrepl, a consumer can create a replication agreement without
changing the provider's configurations and without restarting the
provider server, if the consumer server has appropriate access
privileges for the DIT fragment to be replicated. The consumer
@ -90,7 +90,7 @@ changes and restart.
Syncrepl supports partial, sparse, and fractional replications. The shadow
DIT fragment is defined by a general search criteria consisting of
base, scope, filter, and attribute list. The replica content is
base, scope, filter, and attribute list. The consumer content is
also subject to the access privileges of the bind identity of the
syncrepl replication connection.
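
As a rough sketch, a fractional consumer specification might look something like the following (the hostname, DNs and attribute list are illustrative, not part of this commit). Schema checking is left off here because a fractional replica may lack attributes the schema would otherwise require:

> syncrepl rid=004
>         provider=ldap://provider.example.com
>         type=refreshOnly
>         interval=00:01:00:00
>         searchbase="ou=People,dc=example,dc=com"
>         scope=sub
>         filter="(objectClass=inetOrgPerson)"
>         attrs="cn,sn,mail"
>         schemachecking=off
>         bindmethod=simple
>         binddn="cn=replicator,dc=example,dc=com"
>         credentials=secret
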
@ -204,13 +204,12 @@ The syncrepl engine utilizes both the present phase and the delete
phase of the refresh synchronization. It is possible to configure
a session log in the provider which stores the
{{EX:entryUUID}}s of a finite number of entries deleted from a
database. Multiple replicas share the same session log. The syncrepl
engine uses the
delete phase if the session log is present and the state of the
consumer server is recent enough that no session log entries are
database. Multiple consumers share the same session log. The syncrepl
engine uses the delete phase if the session log is present and the state
of the consumer server is recent enough that no session log entries are
truncated after the last synchronization of the client. The syncrepl
engine uses the present phase if no session log is configured for
the replication content or if the consumer replica is too outdated
the replication content or if the consumer is too outdated
to be covered by the session log. The current design of the session
log store is memory based, so the information contained in the
session log is not persistent over multiple provider invocations.
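
On the provider side the session log is enabled through the syncprov overlay; a minimal sketch, with illustrative values, might be:

> overlay syncprov
> syncprov-checkpoint 100 10
> syncprov-sessionlog 100

Here {{EX:syncprov-sessionlog}} sets the maximum number of session log entries kept in memory.
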
@ -265,9 +264,9 @@ database yielded a greater {{EX:entryCSN}} than was previously
recorded in the suffix entry's {{EX:contextCSN}} attribute, a
checkpoint will be immediately written with the new value.
The consumer also stores its replica state, which is the provider's
The consumer also stores its replication state, which is the provider's
{{EX:contextCSN}} received as a synchronization cookie, in the
{{EX:contextCSN}} attribute of the suffix entry. The replica state
{{EX:contextCSN}} attribute of the suffix entry. The replication state
maintained by a consumer server is used as the synchronization state
indicator when it performs subsequent incremental synchronization
with the provider server. It is also used as a provider-side
@ -281,8 +280,8 @@ actions.
Because a general search filter can be used in the syncrepl
specification, some entries in the context may be omitted from the
synchronization content. The syncrepl engine creates a glue entry
to fill in the holes in the replica context if any part of the
replica content is subordinate to the holes. The glue entries will
to fill in the holes in the consumer context if any part of the
consumer content is subordinate to the holes. The glue entries will
not be returned in the search result unless {{ManageDsaIT}} control
is provided.
@ -320,7 +319,7 @@ multiple objects.
For example, suppose you have a database consisting of 102,400 objects of 1 KB
each. Further, suppose you routinely run a batch job to change the value of
a single two-byte attribute value that appears in each of the 102,400 objects
on the master. Not counting LDAP and TCP/IP protocol overhead, each time you
on the provider. Not counting LDAP and TCP/IP protocol overhead, each time you
run this job each consumer will transfer and process {{B:100 MB}} of data to
process {{B:200KB of changes!}}
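
The arithmetic behind those figures, for reference (assuming each modified entry is re-transferred in full):

> 102,400 entries x 1 KB/entry  = 100 MB transferred per consumer per run
> 102,400 entries x 2 bytes     = 200 KB of data actually changed
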
@ -338,7 +337,7 @@ situations like the one described above. Delta-syncrepl works by maintaining a
changelog of a selectable depth in a separate database on the provider. The replication consumer
checks the changelog for the changes it needs and, as long as
the changelog contains the needed changes, the consumer fetches the changes
from the changelog and applies them to its database. If, however, a replica
from the changelog and applies them to its database. If, however, a consumer
is too far out of sync (or completely empty), conventional syncrepl is used to
bring it up to date and replication then switches back to the delta-syncrepl
mode.
@ -351,12 +350,12 @@ it to another machine.
For configuration, please see the {{SECT:Delta-syncrepl}} section.
H3: N-Way Multi-Master replication
H3: N-Way Multi-Provider Replication
Multi-Master replication is a replication technique using Syncrepl to replicate
data to multiple provider ("Master") Directory servers.
Multi-Provider replication is a replication technique using Syncrepl to replicate
data to multiple provider Directory servers.
H4: Valid Arguments for Multi-Master replication
H4: Valid Arguments for Multi-Provider replication
* If any provider fails, other providers will continue to accept updates
* Avoids a single point of failure
@ -364,21 +363,21 @@ H4: Valid Arguments for Multi-Master replication
the network/globe.
* Good for Automatic failover/High Availability
H4: Invalid Arguments for Multi-Master replication
H4: Invalid Arguments for Multi-Provider replication
(These are often claimed to be advantages of Multi-Master replication but
(These are often claimed to be advantages of Multi-Provider replication but
those claims are false):
* It has {{B:NOTHING}} to do with load balancing
* Providers {{B:must}} propagate writes to {{B:all}} the other servers, which
means the network traffic and write load spreads across all
of the servers the same as for single-master.
of the servers the same as for single-provider.
* Server utilization and performance are at best identical for
Multi-Master and Single-Master replication; at worst Single-Master is
Multi-Provider and Single-Provider replication; at worst Single-Provider is
superior because indexing can be tuned differently to optimize for the
different usage patterns between the provider and the consumers.
H4: Arguments against Multi-Master replication
H4: Arguments against Multi-Provider replication
* Breaks the data consistency guarantees of the directory model
* {{URL:http://www.openldap.org/faq/data/cache/1240.html}}
@ -387,18 +386,18 @@ H4: Arguments against Multi-Master replication
* Typically, a particular machine cannot distinguish between losing contact
with a peer because that peer crashed, or because the network link has failed
* If a network is partitioned and multiple clients start writing to each of the
"masters" then reconciliation will be a pain; it may be best to simply deny
"providers" then reconciliation will be a pain; it may be best to simply deny
writes to the clients that are partitioned from the single provider
For configuration, please see the {{SECT:N-Way Multi-Master}} section below
For configuration, please see the {{SECT:N-Way Multi-Provider}} section below
H3: MirrorMode replication
MirrorMode is a hybrid configuration that provides all of the consistency
guarantees of single-master replication, while also providing the high
availability of multi-master. In MirrorMode two providers are set up to
replicate from each other (as a multi-master configuration), but an
guarantees of single-provider replication, while also providing the high
availability of multi-provider. In MirrorMode two providers are set up to
replicate from each other (as a multi-provider configuration), but an
external frontend is employed to direct all writes to only one of
the two servers. The second provider will only be used for writes if
the first provider crashes, at which point the frontend will switch to
@ -417,7 +416,7 @@ can be ready to take over (hot standby)
H4: Arguments against MirrorMode
* MirrorMode is not what is termed as a Multi-Master solution. This is because
* MirrorMode is not what is termed as a Multi-Provider solution. This is because
writes have to go to just one of the mirror nodes at a time
* MirrorMode can be termed as Active-Active Hot-Standby, therefore an external
server (slapd in proxy mode) or device (hardware load balancer)
@ -452,21 +451,21 @@ H4: Syncrepl configuration
Because syncrepl is a consumer-side replication engine, the syncrepl
specification is defined in {{slapd.conf}}(5) of the consumer
server, not in the provider server's configuration file. The initial
loading of the replica content can be performed either by starting
loading of the consumer content can be performed either by starting
the syncrepl engine with no synchronization cookie or by populating
the consumer replica by loading an {{TERM:LDIF}} file dumped as a
the consumer by loading an {{TERM:LDIF}} file dumped as a
backup at the provider.
When loading from a backup, it is not required to perform the initial
loading from the up-to-date backup of the provider content. The
syncrepl engine will automatically synchronize the initial consumer
replica to the current provider content. As a result, it is not
required to stop the provider server in order to avoid the replica
to the current provider content. As a result, it is not
required to stop the provider server in order to avoid the replication
inconsistency caused by the updates to the provider content during
the content backup and loading process.
When replicating a large scale directory, especially in a bandwidth
constrained environment, it is advised to load the consumer replica
constrained environment, it is advised to load the consumer
from a backup instead of performing a full initial load using
syncrepl.
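
A minimal sketch of such a backup-based load, assuming the target database is number 1 on both servers (the database number and file path are illustrative):

> # On the provider: dump the database to LDIF
> slapcat -n 1 -l /tmp/provider-backup.ldif
>
> # On the consumer: load the LDIF before starting slapd
> slapadd -n 1 -l /tmp/provider-backup.ldif
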
@ -540,8 +539,8 @@ A more complete example of the {{slapd.conf}}(5) content is thus:
H4: Set up the consumer slapd
The syncrepl replication is specified in the database section of
{{slapd.conf}}(5) for the replica context. The syncrepl engine
The syncrepl directive is specified in the database section of
{{slapd.conf}}(5) for the consumer context. The syncrepl engine
is backend independent and the directive can be defined with any
database type.
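
A minimal sketch of where the directive sits (the URI and DNs are illustrative):

> database mdb
> suffix "dc=example,dc=com"
>
> syncrepl rid=001
>         provider=ldap://provider.example.com
>         type=refreshAndPersist
>         retry="60 +"
>         searchbase="dc=example,dc=com"
>         bindmethod=simple
>         binddn="cn=replicator,dc=example,dc=com"
>         credentials=secret
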
@ -608,27 +607,27 @@ in order to start the synchronization from a specific state. The
cookie is a comma separated list of name=value pairs. Currently
supported syncrepl cookie fields are {{csn=<csn>}} and {{rid=<rid>}}.
{{<csn>}} represents the current synchronization state of the
consumer replica. {{<rid>}} identifies a consumer replica locally
consumer. {{<rid>}} identifies a consumer locally
within the consumer server. It is used to relate the cookie to the
syncrepl definition in {{slapd.conf}}(5) which has the matching
replica identifier. The {{<rid>}} must have no more than 3 decimal
{{<rid>}}. The {{<rid>}} must have no more than 3 decimal
digits. The command line cookie overrides the synchronization
cookie stored in the consumer replica database.
cookie stored in the consumer database.
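
For instance, a consumer could be started with a cookie along these lines (the URL and CSN value are purely illustrative):

> slapd -h ldap://consumer.example.com/ \
>       -c rid=0,csn=20200615120000.000000Z#000000#000#000000
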
H3: Delta-syncrepl
H4: Delta-syncrepl Provider configuration
Setting up delta-syncrepl requires configuration changes on both the master and
Setting up delta-syncrepl requires configuration changes on both the provider and
replica servers:
> # Give the replica DN unlimited read access. This ACL needs to be
> # Give the replicator DN unlimited read access. This ACL needs to be
> # merged with other ACL statements, and/or moved within the scope
> # of a database. The "by * break" portion causes evaluation of
> # subsequent rules. See slapd.access(5) for details.
> access to *
> by dn.base="cn=replicator,dc=symas,dc=com" read
> by dn.base="cn=replicator,dc=example,dc=com" read
> by * break
>
> # Set the module path location
@ -655,8 +654,8 @@ replica servers:
> syncprov-nopresent TRUE
> syncprov-reloadhint TRUE
>
> # Let the replica DN have limitless searches
> limits dn.exact="cn=replicator,dc=symas,dc=com" time.soft=unlimited time.hard=unlimited size.soft=unlimited size.hard=unlimited
> # Let the replicator DN have limitless searches
> limits dn.exact="cn=replicator,dc=example,dc=com" time.soft=unlimited time.hard=unlimited size.soft=unlimited size.hard=unlimited
>
> # Primary database definitions
> database mdb
@ -681,8 +680,8 @@ replica servers:
> # scan the accesslog DB every day, and purge entries older than 7 days
> logpurge 07+00:00 01+00:00
>
> # Let the replica DN have limitless searches
> limits dn.exact="cn=replicator,dc=symas,dc=com" time.soft=unlimited time.hard=unlimited size.soft=unlimited size.hard=unlimited
> # Let the replicator DN have limitless searches
> limits dn.exact="cn=replicator,dc=example,dc=com" time.soft=unlimited time.hard=unlimited size.soft=unlimited size.hard=unlimited
For more information, always consult the relevant man pages ({{slapo-accesslog}}(5) and {{slapd.conf}}(5))
@ -702,11 +701,11 @@ H4: Delta-syncrepl Consumer configuration
>
> # syncrepl directives
> syncrepl rid=0
> provider=ldap://ldapmaster.symas.com:389
> provider=ldap://ldapprovider.example.com:389
> bindmethod=simple
> binddn="cn=replicator,dc=symas,dc=com"
> binddn="cn=replicator,dc=example,dc=com"
> credentials=secret
> searchbase="dc=symas,dc=com"
> searchbase="dc=example,dc=com"
> logbase="cn=accesslog"
> logfilter="(&(objectClass=auditWriteObject)(reqResult=0))"
> schemachecking=on
@ -714,20 +713,20 @@ H4: Delta-syncrepl Consumer configuration
> retry="60 +"
> syncdata=accesslog
>
> # Refer updates to the master
> updateref ldap://ldapmaster.symas.com
> # Refer updates to the provider
> updateref ldap://ldapprovider.example.com
The above configuration assumes that you have a replicator identity defined
in your database that can be used to bind to the provider.
Note: An accesslog database is unique to a given master. It should
Note: An accesslog database is unique to a given provider. It should
never be replicated.
H3: N-Way Multi-Master
H3: N-Way Multi-Provider
For the following example we will be using 3 Master nodes. Keeping in line with
{{B:test050-syncrepl-multimaster}} of the OpenLDAP test suite, we will be configuring
For the following example we will be using 3 Provider nodes. Keeping in line with
{{B:test050-syncrepl-multiprovider}} of the OpenLDAP test suite, we will be configuring
{{slapd(8)}} via {{B:cn=config}}
This sets up the config database:
@ -754,7 +753,7 @@ second and third servers will have a different olcServerID obviously:
> olcDatabase: {0}config
> olcRootPW: secret
This sets up syncrepl as a provider (since these are all masters):
This sets up syncrepl as a provider (since these are all providers):
> dn: cn=module,cn=config
> objectClass: olcModuleList
@ -762,7 +761,7 @@ This sets up syncrepl as a provider (since these are all masters):
> olcModulePath: /usr/local/libexec/openldap
> olcModuleLoad: syncprov.la
Now we setup the first Master Node (replace $URI1, $URI2 and $URI3 etc. with your actual ldap urls):
Now we setup the first Provider Node (replace $URI1, $URI2 and $URI3 etc. with your actual ldap urls):
> dn: cn=config
> changetype: modify
@ -793,9 +792,9 @@ Now we setup the first Master Node (replace $URI1, $URI2 and $URI3 etc. with you
> add: olcMirrorMode
> olcMirrorMode: TRUE
Now start up the Master and a consumer/s, also add the above LDIF to the first consumer, second consumer etc. It will then replicate {{B:cn=config}}. You now have N-Way Multimaster on the config database.
Now start up the provider and one or more consumers, then add the above LDIF to the first consumer, second consumer, etc. It will then replicate {{B:cn=config}}. You now have N-Way Multi-Provider replication on the config database.
We still have to replicate the actual data, not just the config, so add to the master (all active and configured consumers/masters will pull down this config, as they are all syncing). Also, replace all {{${}}} variables with whatever is applicable to your setup:
We still have to replicate the actual data, not just the config, so add to the provider (all active and configured consumers/providers will pull down this config, as they are all syncing). Also, replace all {{${}}} variables with whatever is applicable to your setup:
> dn: olcDatabase={1}$BACKEND,cn=config
> objectClass: olcDatabaseConfig
@ -912,8 +911,8 @@ can either setup in normal {{SECT:syncrepl replication}} mode, or in
H4: MirrorMode Summary
You will now have a directory architecture that provides all of the
consistency guarantees of single-master replication, while also providing the
high availability of multi-master replication.
consistency guarantees of single-provider replication, while also providing the
high availability of multi-provider replication.
H3: Syncrepl Proxy
@ -924,7 +923,7 @@ FT[align="Center"] Figure X.Y: Replacing slurpd
The following example is for a self-contained push-based replication solution:
> #######################################################################
> # Standard OpenLDAP Master/Provider
> # Standard OpenLDAP Provider
> #######################################################################
>
> include /usr/local/etc/openldap/schema/core.schema
@ -966,7 +965,7 @@ The following example is for a self-contained push-based replication solution:
> overlay syncprov
> syncprov-checkpoint 1000 60
>
> # Let the replica DN have limitless searches
> # Let the replicator DN have limitless searches
> limits dn.exact="cn=replicator,dc=suretecsystems,dc=com" time.soft=unlimited time.hard=unlimited size.soft=unlimited size.hard=unlimited
>
> database monitor
@ -1008,7 +1007,7 @@ The following example is for a self-contained push-based replication solution:
A replica configuration for this type of setup could be:
> #######################################################################
> # Standard OpenLDAP Slave without Syncrepl
> # Standard OpenLDAP Replica without Syncrepl
> #######################################################################
>
> include /usr/local/etc/openldap/schema/core.schema
@ -1031,8 +1030,9 @@ A replica configuration for this type of setup could be:
>
> database mdb
> suffix "dc=suretecsystems,dc=com"
> directory /usr/local/var/openldap-slave/data
> directory /usr/local/var/openldap-consumer/data
>
> maxsize 85899345920
> checkpoint 1024 5
>
> index objectClass eq
@ -1042,12 +1042,12 @@ A replica configuration for this type of setup could be:
> rootdn "cn=admin,dc=suretecsystems,dc=com"
> rootpw testing
>
> # Let the replica DN have limitless searches
> # Let the replicator DN have limitless searches
> limits dn.exact="cn=replicator,dc=suretecsystems,dc=com" time.soft=unlimited time.hard=unlimited size.soft=unlimited size.hard=unlimited
>
> updatedn "cn=replicator,dc=suretecsystems,dc=com"
>
> # Refer updates to the master
> # Refer updates to the provider
> updateref ldap://localhost:9011
>
> database monitor
@ -1057,7 +1057,7 @@ A replica configuration for this type of setup could be:
You can see we use the {{updatedn}} directive here and example ACLs ({{F:usr/local/etc/openldap/slapd.acl}}) for this could be:
> # Give the replica DN unlimited read access. This ACL may need to be
> # Give the replicator DN unlimited read access. This ACL may need to be
> # merged with other ACL statements.
>
> access to *
@ -1082,10 +1082,10 @@ You can see we use the {{updatedn}} directive here and example ACLs ({{F:usr/loc
In order to support more replicas, just add more {{database ldap}} sections and
increment the {{syncrepl rid}} number accordingly.
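
A rough sketch of one such additional section follows (the suffix, rootdn and port numbers mirror the example above but are illustrative; consult the complete configuration for the remaining back-ldap directives):

> database ldap
> hidden on
> suffix "dc=suretecsystems,dc=com"
> rootdn "cn=slapd-syncrepl,dc=suretecsystems,dc=com"
> uri ldap://localhost:9012/
>
> syncrepl rid=002
>         provider=ldap://localhost:9011/
>         binddn="cn=replicator,dc=suretecsystems,dc=com"
>         bindmethod=simple
>         credentials=secret
>         searchbase="dc=suretecsystems,dc=com"
>         type=refreshAndPersist
>         retry="5 5 300 5"
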
Note: You must populate the Master and Slave directories with the same data,
Note: You must populate the Provider and Replica directories with the same data,
unlike when using normal Syncrepl
If you do not have access to modify the master directory configuration you can
If you do not have access to modify the provider directory configuration you can
configure a standalone ldap proxy, which might look like:
!import "push-based-standalone.png"; align="center"; title="Syncrepl Standalone Proxy Mode"

View file

@ -567,12 +567,12 @@ H4: olcSyncrepl
> [syncdata=default|accesslog|changelog]
This directive specifies the current database as a replica of the
master content by establishing the current {{slapd}}(8) as a
This directive specifies the current database as a consumer of the
provider content by establishing the current {{slapd}}(8) as a
replication consumer site running a syncrepl replication engine.
The master database is located at the replication provider site
specified by the {{EX:provider}} parameter. The replica database is
kept up-to-date with the master content using the LDAP Content
The provider database is located at the provider site
specified by the {{EX:provider}} parameter. The consumer database is
kept up-to-date with the provider content using the LDAP Content
Synchronization protocol. See {{REF:RFC4533}}
for more information on the protocol.
@ -583,19 +583,16 @@ described by the current {{EX:syncrepl}} directive. {{EX:<replica ID>}}
is non-negative and is no more than three decimal digits in length.
The {{EX:provider}} parameter specifies the replication provider site
containing the master content as an LDAP URI. The {{EX:provider}}
containing the provider content as an LDAP URI. The {{EX:provider}}
parameter specifies a scheme, a host and optionally a port where the
provider slapd instance can be found. Either a domain name or IP
address may be used for <hostname>. Examples are
{{EX:ldap://provider.example.com:389}} or {{EX:ldaps://192.168.1.1:636}}.
If <port> is not given, the standard LDAP port number (389 or 636) is used.
Note that the syncrepl uses a consumer-initiated protocol, and hence its
specification is located at the consumer site, whereas the {{EX:replica}}
specification is located at the provider site. {{EX:syncrepl}} and
{{EX:replica}} directives define two independent replication
mechanisms. They do not represent the replication peers of each other.
specification is located on the consumer.
The content of the syncrepl replica is defined using a search
The content of the syncrepl consumer is defined using a search
specification as its result set. The consumer slapd will
send search requests to the provider slapd according to the search
specification. The search specification includes {{EX:searchbase}},
@ -618,7 +615,7 @@ synchronization operation finishes. The interval is specified
by the {{EX:interval}} parameter. It is set to one day by default.
In the {{EX:refreshAndPersist}} operation, a synchronization search
remains persistent in the provider {{slapd}} instance. Further updates to the
master replica will generate {{EX:searchResultEntry}} to the consumer slapd
provider will generate {{EX:searchResultEntry}} to the consumer slapd
as the search responses to the persistent synchronization search.
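
For illustration, such a persistent consumer could be added to a {{B:cn=config}} database with an LDIF snippet along these lines (the URI and DNs are illustrative):

> dn: olcDatabase={1}mdb,cn=config
> changetype: modify
> add: olcSyncrepl
> olcSyncrepl: rid=001
>   provider=ldap://provider.example.com
>   type=refreshAndPersist
>   retry="60 +"
>   searchbase="dc=example,dc=com"
>   bindmethod=simple
>   binddn="cn=replicator,dc=example,dc=com"
>   credentials=secret
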
If an error occurs during replication, the consumer will attempt to reconnect
@ -631,8 +628,8 @@ indefinite number of retries until success.
The schema checking can be enforced at the LDAP Sync consumer site
by turning on the {{EX:schemachecking}} parameter.
If it is turned on, every replicated entry will be checked for its
schema as the entry is stored into the replica content.
Every entry in the replica should contain those attributes
schema as the entry is stored on the consumer.
Every entry in the consumer should contain those attributes
required by the schema definition.
If it is turned off, entries will be stored without checking
schema conformance. The default is off.
@ -640,7 +637,7 @@ schema conformance. The default is off.
The {{EX:binddn}} parameter gives the DN to bind as for the
syncrepl searches to the provider slapd. It should be a DN
which has read access to the replication content in the
master database.
provider database.
The {{EX:bindmethod}} is {{EX:simple}} or {{EX:sasl}},
depending on whether simple password-based authentication or
@ -705,14 +702,15 @@ for more details.
H4: olcUpdateref: <URL>
This directive is only applicable in a slave slapd. It
This directive is only applicable in a {{replica}} (or {{shadow}})
{{slapd}}(8) instance. It
specifies the URL to return to clients which submit update
requests upon the replica.
If specified multiple times, each {{TERM:URL}} is provided.
\Example:
> olcUpdateref: ldap://master.example.net
> olcUpdateref: ldap://provider.example.net
H4: Sample Entries

View file

@ -425,12 +425,12 @@ H4: syncrepl
> [syncdata=default|accesslog|changelog]
This directive specifies the current database as a replica of the
master content by establishing the current {{slapd}}(8) as a
This directive specifies the current database as a consumer of the
provider content by establishing the current {{slapd}}(8) as a
replication consumer site running a syncrepl replication engine.
The master database is located at the replication provider site
specified by the {{EX:provider}} parameter. The replica database is
kept up-to-date with the master content using the LDAP Content
The provider database is located at the replication provider site
specified by the {{EX:provider}} parameter. The consumer database is
kept up-to-date with the provider content using the LDAP Content
Synchronization protocol. See {{REF:RFC4533}}
for more information on the protocol.
@ -441,19 +441,16 @@ described by the current {{EX:syncrepl}} directive. {{EX:<replica ID>}}
is non-negative and is no more than three decimal digits in length.
The {{EX:provider}} parameter specifies the replication provider site
containing the master content as an LDAP URI. The {{EX:provider}}
containing the provider content as an LDAP URI. The {{EX:provider}}
parameter specifies a scheme, a host and optionally a port where the
provider slapd instance can be found. Either a domain name or IP
address may be used for <hostname>. Examples are
{{EX:ldap://provider.example.com:389}} or {{EX:ldaps://192.168.1.1:636}}.
If <port> is not given, the standard LDAP port number (389 or 636) is used.
Note that the syncrepl uses a consumer-initiated protocol, and hence its
specification is located at the consumer site, whereas the {{EX:replica}}
specification is located at the provider site. {{EX:syncrepl}} and
{{EX:replica}} directives define two independent replication
mechanisms. They do not represent the replication peers of each other.
specification is located on the consumer.
The content of the syncrepl replica is defined using a search
The content of the syncrepl consumer is defined using a search
specification as its result set. The consumer slapd will
send search requests to the provider slapd according to the search
specification. The search specification includes {{EX:searchbase}},
@ -477,7 +474,7 @@ synchronization operation finishes. The interval is specified
by the {{EX:interval}} parameter. It is set to one day by default.
In the {{EX:refreshAndPersist}} operation, a synchronization search
remains persistent in the provider {{slapd}} instance. Further updates to the
master replica will generate {{EX:searchResultEntry}} to the consumer slapd
provider will generate {{EX:searchResultEntry}} to the consumer slapd
as the search responses to the persistent synchronization search.
If an error occurs during replication, the consumer will attempt to reconnect
@ -490,8 +487,8 @@ indefinite number of retries until success.
The schema checking can be enforced at the LDAP Sync consumer site
by turning on the {{EX:schemachecking}} parameter.
If it is turned on, every replicated entry will be checked for its
schema as the entry is stored into the replica content.
Every entry in the replica should contain those attributes
schema as the entry is stored on the consumer.
Every entry in the consumer should contain those attributes
required by the schema definition.
If it is turned off, entries will be stored without checking
schema conformance. The default is off.
@ -505,7 +502,7 @@ defaults for these parameters come from {{ldap.conf}}(5).
The {{EX:binddn}} parameter gives the DN to bind as for the
syncrepl searches to the provider slapd. It should be a DN
which has read access to the replication content in the
master database.
provider database.
The {{EX:bindmethod}} is {{EX:simple}} or {{EX:sasl}},
depending on whether simple password-based authentication or
@ -570,7 +567,7 @@ more information on how to use this directive.
H4: updateref <URL>
This directive is only applicable in a {{slave}} (or {{shadow}})
This directive is only applicable in a {{replica}} (or {{shadow}})
{{slapd}}(8) instance. It
specifies the URL to return to clients which submit update
requests upon the replica.
@ -578,7 +575,7 @@ If specified multiple times, each {{TERM:URL}} is provided.
\Example:
> updateref ldap://master.example.net
> updateref ldap://provider.example.net
H3: MDB Database Directives
@ -805,7 +802,7 @@ controls).
The next section of the configuration file defines a MDB
backend that will handle queries for things in the
"dc=example,dc=com" portion of the tree. The
database is to be replicated to two slave slapds, one on
database is to be replicated to two replica slapds, one on
truelies, the other on judgmentday. Indices are to be
maintained for several attributes, and the {{EX:userPassword}}
attribute is to be protected from unauthorized access.

View file

@ -4621,7 +4621,7 @@
x="96.974648"
y="113.75929"
style="font-size:18px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Arial" /></flowRegion><flowPara
id="flowPara27617">Master/Provider</flowPara></flowRoot> <flowRoot
id="flowPara27617">Provider</flowPara></flowRoot> <flowRoot
xml:space="preserve"
id="flowRoot3120"
style="font-size:18px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Arial"

(image file changed; size unchanged at 155 KiB)

View file

@ -5015,7 +5015,7 @@
x="137.38075"
y="681.46503"
style="font-size:18px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Arial" /></flowRegion><flowPara
id="flowPara15542">Replica Pool</flowPara></flowRoot> <flowRoot
id="flowPara15542">Consumer Pool</flowPara></flowRoot> <flowRoot
xml:space="preserve"
id="flowRoot15534"
style="font-size:18px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Arial"
@ -5027,7 +5027,7 @@
x="137.38075"
y="681.46503"
style="font-size:18px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Arial" /></flowRegion><flowPara
id="flowPara15544">Replica Pool</flowPara></flowRoot> <path
id="flowPara15544">Consumer Pool</flowPara></flowRoot> <path
style="fill:none;fill-rule:evenodd;stroke:#000000;stroke-width:0.71494228px;stroke-linecap:butt;stroke-linejoin:miter;marker-start:url(#Arrow1Lstart);marker-end:url(#Arrow1Lend);stroke-opacity:1"
d="M 254.55844,186.23712 L 254.55844,261.49474"
id="path16515" />

(image file changed; size unchanged at 289 KiB)

View file

@ -13,12 +13,12 @@
id="svg7893"
inkscape:version="0.46"
sodipodi:docbase="/home/ghenry/Desktop"
sodipodi:docname="n-way-multi-master.svg"
sodipodi:docname="n-way-multi-provider.svg"
sodipodi:version="0.32"
width="744.09448"
inkscape:output_extension="org.inkscape.output.svg.inkscape"
version="1.0"
inkscape:export-filename="/home/ghenry/Desktop/n-way-multi-master.png"
inkscape:export-filename="/home/ghenry/Desktop/n-way-multi-provider.png"
inkscape:export-xdpi="90"
inkscape:export-ydpi="90">
<metadata
@ -4573,7 +4573,7 @@
x="194.28572"
y="475.52304"
style="font-size:24px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Arial" /></flowRegion><flowPara
id="flowPara6968">N-Way Multi-Master</flowPara></flowRoot> <text
id="flowPara6968">N-Way Multi-Provider</flowPara></flowRoot> <text
xml:space="preserve"
style="font-size:40px;font-style:normal;font-weight:normal;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;font-family:Bitstream Vera Sans"
x="316"

(image file changed; size unchanged at 178 KiB)

View file

@ -4667,7 +4667,7 @@
x="96.974648"
y="113.75929"
style="font-size:18px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Arial" /></flowRegion><flowPara
id="flowPara27617">Master/Provider</flowPara></flowRoot> <g
id="flowPara27617">Provider</flowPara></flowRoot> <g
id="g3073"
transform="matrix(0.1267968,0,0,0.1710106,264.00249,370.01498)">
<path
@ -4739,7 +4739,7 @@
x="412.14224"
y="279.42432"
style="font-size:18px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Arial" /></flowRegion><flowPara
id="flowPara3136">Primary directory also contains back-ldap databases that replicate from the Master directory and push out changes to the replicas</flowPara></flowRoot> <flowRoot
id="flowPara3136">Primary directory also contains back-ldap databases that replicate from the provider directory and push out changes to the replicas</flowPara></flowRoot> <flowRoot
xml:space="preserve"
id="flowRoot6975"
style="font-size:12px;font-style:normal;font-weight:normal;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;font-family:Bitstream Vera Sans"

(image file changed; size unchanged at 152 KiB)

View file

@ -4691,7 +4691,7 @@
x="96.974648"
y="113.75929"
style="font-size:18px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Arial" /></flowRegion><flowPara
id="flowPara27617">Master/Provider</flowPara></flowRoot> <g
id="flowPara27617">Provider</flowPara></flowRoot> <g
id="g3073"
transform="matrix(0.1267968,0,0,0.1710106,264.00249,370.01498)"
inkscape:export-filename="/anything/src/openldap/ldap/doc/guide/images/src/push-based-complete.png"
@ -4772,7 +4772,7 @@
x="412.14224"
y="279.42432"
style="font-size:18px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;text-align:start;line-height:125%;writing-mode:lr-tb;text-anchor:start;font-family:Arial" /></flowRegion><flowPara
id="flowPara3136">Primary directory is a standard OpenLDAP Master, ldap proxy using Syncrepl pulls in changes from the master and pushes out to replicas. Useful if you don't have access to original master.</flowPara></flowRoot> <flowRoot
id="flowPara3136">Primary directory is a standard OpenLDAP provider, ldap proxy using Syncrepl pulls in changes from the provider and pushes out to replicas. Useful if you don't have access to original provider.</flowPara></flowRoot> <flowRoot
xml:space="preserve"
id="flowRoot6975"
style="font-size:12px;font-style:normal;font-weight:normal;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;font-family:Bitstream Vera Sans"

(image file changed; size unchanged at 157 KiB)

View file

@ -235,7 +235,7 @@ the attribute type is non-operational.
.TP
.B LDAP_SCHEMA_DIRECTORY_OPERATION
the attribute type is operational and is pertinent to the directory
itself, i.e. it has the same value on all servers that master the
itself, i.e. it has the same value on all servers that provide the
entry containing this attribute type.
.TP
.B LDAP_SCHEMA_DISTRIBUTED_OPERATION
@ -245,7 +245,7 @@ shadowing or other distributed directory aspect. TBC.
.B LDAP_SCHEMA_DSA_OPERATION
the attribute type is operational and is pertinent to the directory
server itself, i.e. it may have different values for the same entry
when retrieved from different servers that master the entry.
when retrieved from different servers that provide the entry.
.LP
Object classes can be of three kinds:
.TP

View file

@ -711,7 +711,7 @@ environment. The attribute "cmusaslsecretOTP" is the default value.
.B olcSaslAuxpropsDontUseCopyIgnore TRUE | FALSE
Used to disable replication of the attribute(s) defined by
olcSaslAuxpropsDontUseCopy and instead use a local value for the attribute. This
allows the SASL mechanism to continue to work if the master is offline. This can
allows the SASL mechanism to continue to work if the provider is offline. This can
cause replication inconsistency. Defaults to FALSE.
.TP
.B olcSaslHost: <fqdn>
@ -773,15 +773,15 @@ Specify an integer ID from 0 to 4095 for this server (limited
to 3 hexadecimal digits). The ID may also be specified as a
hexadecimal ID by prefixing the value with "0x".
Non-zero IDs are
required when using multimaster replication and each master must have a
unique non-zero ID. Note that this requirement also applies to separate masters
required when using multi-provider replication and each provider must have a
unique non-zero ID. Note that this requirement also applies to separate providers
contributing to a glued set of databases.
If the URL is provided, this directive may be specified
multiple times, providing a complete list of participating servers
and their IDs. The fully qualified hostname of each server should be
used in the supplied URLs. The IDs are used in the "replica id" field
of all CSNs generated by the specified server. The default value is zero, which
is only valid for single master replication.
is only valid for single provider replication.
Example:
.LP
.nf
@ -1624,7 +1624,7 @@ Specifies the maximum number of aliases to dereference when trying to
resolve an entry, used to avoid infinite alias loops. The default is 15.
.TP
.B olcMirrorMode: TRUE | FALSE
This option puts a replica database into "mirror" mode. Update
This option puts a consumer database into "mirror" mode. Update
operations will be accepted from any user, not just the updatedn. The
database must already be configured as syncrepl consumer
before this keyword may be set. This mode also requires a
@ -1780,13 +1780,13 @@ FALSE, meaning the contextCSN is stored in the context entry.
.B [syncdata=default|accesslog|changelog]
.B [lazycommit]
.RS
Specify the current database as a replica which is kept up-to-date with the
master content by establishing the current
Specify the current database as a consumer which is kept up-to-date with the
provider content by establishing the current
.BR slapd (8)
as a replication consumer site running a
.B syncrepl
replication engine.
The replica content is kept synchronized to the master content using
The consumer content is kept synchronized to the provider content using
the LDAP Content Synchronization protocol. Refer to the
"OpenLDAP Administrator's Guide" for detailed information on
setting up a replicated
@ -1802,13 +1802,13 @@ directive within the replication consumer site.
It is a non-negative integer having no more than three decimal digits.
.B provider
specifies the replication provider site containing the master content
specifies the replication provider site containing the provider content
as an LDAP URI. If <port> is not given, the standard LDAP port number
(389 or 636) is used.
The content of the
.B syncrepl
replica is defined using a search
consumer is defined using a search
specification as its result set. The consumer
.B slapd
will send search requests to the provider
@ -1843,7 +1843,7 @@ after each synchronization operation finishes.
In the
.B refreshAndPersist
operation, a synchronization search remains persistent in the provider slapd.
Further updates to the master replica will generate
Further updates to the provider will generate
.B searchResultEntry
to the consumer slapd as the search responses to the persistent
synchronization search. If the initial search fails due to an error, the
@ -1972,7 +1972,7 @@ for the consumer, while sacrificing safety or durability.
.RE
.TP
.B olcUpdateDN: <dn>
This option is only applicable in a slave
This option is only applicable in a replica
database.
It specifies the DN permitted to update (subject to access controls)
the replica. It is only needed in certain push-mode
@ -1980,7 +1980,7 @@ replication scenarios. Generally, this DN
.I should not
be the same as the
.B rootdn
used at the master.
used at the provider.
.TP
.B olcUpdateRef: <url>
Specify the referral to pass back when

View file

@ -861,7 +861,7 @@ environment. The attribute "cmusaslsecretOTP" is the default value.
.B sasl\-auxprops\-dontusecopy\-ignore on | off
Used to disable replication of the attribute(s) defined by
sasl-auxprops-dontusecopy and instead use a local value for the attribute. This
allows the SASL mechanism to continue to work if the master is offline. This can
allows the SASL mechanism to continue to work if the provider is offline. This can
cause replication inconsistency. Defaults to off.
.TP
.B sasl\-host <fqdn>
@ -962,15 +962,15 @@ Specify an integer ID from 0 to 4095 for this server (limited
to 3 hexadecimal digits). The ID may also be specified as a
hexadecimal ID by prefixing the value with "0x".
Non-zero IDs are
required when using multimaster replication and each master must have a
unique non-zero ID. Note that this requirement also applies to separate masters
required when using multi-provider replication and each provider must have a
unique non-zero ID. Note that this requirement also applies to separate providers
contributing to a glued set of databases.
If the URL is provided, this directive may be specified
multiple times, providing a complete list of participating servers
and their IDs. The fully qualified hostname of each server should be
used in the supplied URLs. The IDs are used in the "replica id" field
of all CSNs generated by the specified server. The default value is zero, which
is only valid for single master replication.
is only valid for single provider replication.
Example:
.LP
.nf
@ -1567,7 +1567,7 @@ Specifies the maximum number of aliases to dereference when trying to
resolve an entry, used to avoid infinite alias loops. The default is 15.
.TP
.B mirrormode on | off
This option puts a replica database into "mirror" mode. Update
This option puts a consumer database into "mirror" mode. Update
operations will be accepted from any user, not just the updatedn. The
database must already be configured as a syncrepl consumer
before this keyword may be set. This mode also requires a
@ -1759,13 +1759,13 @@ the contextCSN is stored in the context entry.
.B [syncdata=default|accesslog|changelog]
.B [lazycommit]
.RS
Specify the current database as a replica which is kept up-to-date with the
master content by establishing the current
Specify the current database as a consumer which is kept up-to-date with the
provider content by establishing the current
.BR slapd (8)
as a replication consumer site running a
.B syncrepl
replication engine.
The replica content is kept synchronized to the master content using
The consumer content is kept synchronized to the provider content using
the LDAP Content Synchronization protocol. Refer to the
"OpenLDAP Administrator's Guide" for detailed information on
setting up a replicated
@ -1782,13 +1782,13 @@ It is a non-negative integer not greater than 999 (limited
to three decimal digits).
.B provider
specifies the replication provider site containing the master content
specifies the replication provider site containing the provider content
as an LDAP URI. If <port> is not given, the standard LDAP port number
(389 or 636) is used.
The content of the
.B syncrepl
replica is defined using a search
consumer is defined using a search
specification as its result set. The consumer
.B slapd
will send search requests to the provider
@ -1838,7 +1838,7 @@ after each synchronization operation finishes.
In the
.B refreshAndPersist
operation, a synchronization search remains persistent in the provider slapd.
Further updates to the master replica will generate
Further updates to the provider will generate
.B searchResultEntry
to the consumer slapd as the search responses to the persistent
synchronization search. If the initial search fails due to an error, the
@ -1983,7 +1983,7 @@ for the consumer, while sacrificing safety or durability.
.RE
.TP
.B updatedn <dn>
This option is only applicable in a slave
This option is only applicable in a replica
database.
It specifies the DN permitted to update (subject to access controls)
the replica. It is only needed in certain push-mode
@ -1991,7 +1991,7 @@ replication scenarios. Generally, this DN
.I should not
be the same as the
.B rootdn
used at the master.
used at the provider.
.TP
.B updateref <url>
Specify the referral to pass back when

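Taken together, the directives changed above describe a consumer database definition. A minimal sketch, with illustrative URIs, DNs and credentials that are not part of this commit, might look like:

    syncrepl rid=001
            provider=ldap://provider.example.com
            type=refreshAndPersist
            retry="60 +"
            searchbase="dc=example,dc=com"
            bindmethod=simple
            binddn="cn=replicator,dc=example,dc=com"
            credentials=secret
    updateref ldap://provider.example.com

With refreshAndPersist the synchronization search stays open and the provider pushes changes as they occur; updateref refers writing clients back to the provider.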
View file

@ -223,8 +223,8 @@ access to dn.onelevel="cn=Meetings"
.SH REPLICATION
This implementation of RFC 2589 provides a restricted interpretation of how
dynamic objects replicate. Only the master takes care of handling dynamic
object expiration, while replicas simply see the dynamic object as a plain
dynamic objects replicate. Only the provider takes care of handling dynamic
object expiration, while consumers simply see the dynamic object as a plain
object.
When replicating these objects, one needs to explicitly exclude the

View file

@ -107,9 +107,9 @@ The memberof overlay may be used with any backend that provides full
read-write functionality, but it is mainly intended for use
with local storage backends. The maintenance operations it performs
are internal to the server on which the overlay is configured and
are never replicated. Replica servers should be configured with their
are never replicated. Consumer servers should be configured with their
own instances of the memberOf overlay if it is desired to maintain
these memberOf attributes on the replicas. Note that slapo-memberOf
these memberOf attributes on the consumers. Note that slapo-memberOf
is not compatible with syncrepl based replication, and should not be
used in a replicated environment. An alternative is to use slapo-dynlist
to emulate slapo-memberOf behavior.
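Where the overlay is used at all (the text above cautions against combining it with syncrepl-based replication), each consumer would carry its own instance. A minimal sketch, showing the default object class and attribute names:

    overlay memberof
    memberof-group-oc groupOfNames
    memberof-member-ad member
    memberof-memberof-ad memberOf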

View file

@ -62,7 +62,7 @@ and no default is given, then no policies will be enforced.
.B ppolicy_forward_updates
Specify that policy state changes that result from Bind operations (such
as recording failures, lockout, etc.) on a consumer should be forwarded
to a master instead of being written directly into the consumer's local
to a provider instead of being written directly into the consumer's local
database. This setting is only useful on a replication consumer, and
also requires the
.B updateref
@ -692,12 +692,12 @@ module.
Note that the current IETF Password Policy proposal does not define
how these operational attributes are expected to behave in a
replication environment. In general, authentication attempts on
a slave server only affect the copy of the operational attributes
on that slave and will not affect any attributes for
a user's entry on the master server. Operational attribute changes
resulting from authentication attempts on a master server
will usually replicate to the slaves (and also overwrite
any changes that originated on the slave).
a replica server only affect the copy of the operational attributes
on that replica and will not affect any attributes for
a user's entry on the provider. Operational attribute changes
resulting from authentication attempts on a provider
will usually replicate to the replicas (and also overwrite
any changes that originated on the replica).
These behaviors are not guaranteed and are subject to change
when a formal specification emerges.
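A hedged sketch of a consumer database that forwards ppolicy state changes to its provider, as described above (the DN and URI are illustrative only):

    updateref ldap://provider.example.com
    overlay ppolicy
    ppolicy_default "cn=default,ou=policies,dc=example,dc=com"
    ppolicy_forward_updates

The updateref line satisfies the requirement noted earlier; without it the forwarded updates have no referral target.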

View file

@ -164,7 +164,7 @@ database will be unusable.
.B \-s
disable schema checking. This option is intended to be used when loading
databases containing special objects, such as fractional objects on a
partial replica. Loading normal objects which do not conform to
partial consumer. Loading normal objects which do not conform to
schema may result in unexpected and ill behavior.
.TP
.BI \-S \ SID

View file

@ -282,12 +282,12 @@ having the matching replication identifier in its definition. The
.B rid
must be provided in order for any other specified values to be used.
.B sid
is the server id in a multi-master/mirror-mode configuration.
is the server id in a multi-provider configuration.
.B csn
is the commit sequence number received by a previous synchronization
and represents the state of the consumer replica content which the
and represents the state of the consumer content which the
syncrepl engine will synchronize to the current provider content.
In case of \fImirror-mode\fP or \fImulti-master\fP replication agreement,
In case of \fImulti-provider\fP replication agreement,
multiple
.B csn
values, semicolon separated, can appear.
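For illustration only (the values are made up), a cookie carrying state from two providers with SIDs 001 and 002 could look like:

    rid=001,sid=001,csn=20200615093000.000000Z#000000#001#000000;20200615093010.000000Z#000000#002#000000

Each csn value embeds the originating server's SID in its third field, which is how the semicolon-separated list is kept distinct per provider.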

View file

@ -6,7 +6,7 @@
# This file should be world readable but not world writable.
#BASE dc=example,dc=com
#URI ldap://ldap.example.com ldap://ldap-master.example.com:666
#URI ldap://ldap.example.com ldap://ldap-provider.example.com:666
#SIZELIMIT 12
#TIMELIMIT 15

View file

@ -293,7 +293,7 @@ fe_op_add( Operation *op, SlapReply *rs )
/*
* do the add if 1 && (2 || 3)
* 1) there is an add function implemented in this backend;
* 2) this backend is master for what it holds;
* 2) this backend is the provider for what it holds;
* 3) it's a replica and the dn supplied is the updatedn.
*/
if ( op->o_bd->be_add ) {

View file

@ -231,7 +231,7 @@ glue_op_func ( Operation *op, SlapReply *rs )
op->o_bd = glue_back_select (b0, &op->o_req_ndn);
/* If we're on the master backend, let overlay framework handle it */
/* If we're on the primary backend, let overlay framework handle it */
if ( op->o_bd == b0 )
return SLAP_CB_CONTINUE;
@ -285,7 +285,7 @@ glue_response ( Operation *op, SlapReply *rs )
BackendDB *be = op->o_bd;
be = glue_back_select (op->o_bd, &op->o_req_ndn);
/* If we're on the master backend, let overlay framework handle it.
/* If we're on the primary backend, let overlay framework handle it.
* Otherwise, bail out.
*/
return ( op->o_bd == be ) ? SLAP_CB_CONTINUE : SLAP_CB_BYPASS;
@ -349,7 +349,7 @@ glue_chk_controls ( Operation *op, SlapReply *rs )
/* ITS#4615 - overlays configured above the glue overlay should be
* invoked for the entire glued tree. Overlays configured below the
* glue overlay should only be invoked on the master backend.
* glue overlay should only be invoked on the primary backend.
* So, if we're searching on any subordinates, we need to force the
* current overlay chain to stop processing, without stopping the
* overall callback flow.
@ -358,7 +358,7 @@ static int
glue_sub_search( Operation *op, SlapReply *rs, BackendDB *b0,
slap_overinst *on )
{
/* Process any overlays on the master backend */
/* Process any overlays on the primary backend */
if ( op->o_bd == b0 && on->on_next ) {
BackendInfo *bi = op->o_bd->bd_info;
int rc = SLAP_CB_CONTINUE;

View file

@ -1324,7 +1324,7 @@ config_generic(ConfigArgs *c) {
break;
case CFG_MIRRORMODE:
if ( SLAP_SHADOW(c->be))
c->value_int = (SLAP_MULTIMASTER(c->be) != 0);
c->value_int = (SLAP_MULTIPROVIDER(c->be) != 0);
else
rc = 1;
break;
@ -4042,7 +4042,7 @@ config_shadow( ConfigArgs *c, slap_mask_t flag )
} else {
SLAP_DBFLAGS(c->be) |= (SLAP_DBFLAG_SHADOW | flag);
if ( !SLAP_MULTIMASTER( c->be ))
if ( !SLAP_MULTIPROVIDER( c->be ))
SLAP_DBFLAGS(c->be) |= SLAP_DBFLAG_SINGLE_SHADOW;
}

View file

@ -1995,7 +1995,7 @@ slap_client_connect( LDAP **ldp, slap_bindconf *sb )
int rc;
struct timeval tv;
/* Init connection to master */
/* Init connection to provider */
rc = ldap_initialize( &ld, sb->sb_uri.bv_val );
if ( rc != LDAP_SUCCESS ) {
Debug( LDAP_DEBUG_ANY,

View file

@ -159,7 +159,7 @@ fe_op_delete( Operation *op, SlapReply *rs )
/*
* do the delete if 1 && (2 || 3)
* 1) there is a delete function implemented in this backend;
* 2) this backend is master for what it holds;
* 2) this backend is the provider for what it holds;
* 3) it's a replica and the dn supplied is the update_ndn.
*/
if ( op->o_bd->be_delete ) {

View file

@ -497,7 +497,7 @@ int main( int argc, char **argv )
urls = optarg;
break;
case 'c': /* provide sync cookie, override if exist in replica */
case 'c': /* provide sync cookie, override if exist in consumer */
scp = (struct sync_cookie *) ch_calloc( 1,
sizeof( struct sync_cookie ));
ber_str2bv( optarg, 0, 1, &scp->octet_str );

View file

@ -275,7 +275,7 @@ fe_op_modify( Operation *op, SlapReply *rs )
/*
* do the modify if 1 && (2 || 3)
* 1) there is a modify function implemented in this backend;
* 2) this backend is master for what it holds;
* 2) this backend is the provider for what it holds;
* 3) it's a replica and the dn supplied is the update_ndn.
*/
if ( op->o_bd->be_modify ) {

View file

@ -305,7 +305,7 @@ fe_op_modrdn( Operation *op, SlapReply *rs )
/*
* do the modrdn if 1 && (2 || 3)
* 1) there is a modrdn function implemented in this backend;
* 2) this backend is master for what it holds;
* 2) this backend is the provider for what it holds;
* 3) it's a replica and the dn supplied is the update_ndn.
*/
if ( op->o_bd->be_modrdn ) {

View file

@ -1102,7 +1102,7 @@ dds_op_extended( Operation *op, SlapReply *rs )
ttl = di->di_min_ttl;
}
/* This does not apply to multi-master case */
/* This does not apply to multi-provider case */
if ( !( !SLAP_SINGLE_SHADOW( op->o_bd ) || be_isupdate( op ) ) ) {
/* we SHOULD return a referral in this case */
BerVarray defref = op->o_bd->be_update_refs

View file

@ -2104,7 +2104,7 @@ ppolicy_add(
if ( ppolicy_restrict( op, rs ) != SLAP_CB_CONTINUE )
return rs->sr_err;
/* If this is a replica, assume the master checked everything */
/* If this is a replica, assume the provider checked everything */
if ( SLAPD_SYNC_IS_SYNCCONN( op->o_connid ) )
return SLAP_CB_CONTINUE;
@ -2253,7 +2253,7 @@ ppolicy_modify( Operation *op, SlapReply *rs )
if ( pi->disable_write ) return SLAP_CB_CONTINUE;
/* If this is a replica, we may need to tweak some of the
* master's modifications. Otherwise, just pass it through.
* provider's modifications. Otherwise, just pass it through.
*/
if ( SLAPD_SYNC_IS_SYNCCONN( op->o_connid ) ) {
Modifications **prev;

View file

@ -165,7 +165,7 @@ int passwd_extop(
goto error_return;
}
/* This does not apply to multi-master case */
/* This does not apply to multi-provider case */
if(!( !SLAP_SINGLE_SHADOW( op->o_bd ) || be_isupdate( op ))) {
/* we SHOULD return a referral in this case */
BerVarray defref = op->o_bd->be_update_refs

View file

@ -1858,14 +1858,14 @@ struct BackendDB {
#define SLAP_DBFLAG_DYNAMIC 0x0400U /* this db allows dynamicObjects */
#define SLAP_DBFLAG_MONITORING 0x0800U /* custom monitoring enabled */
#define SLAP_DBFLAG_SHADOW 0x8000U /* a shadow */
#define SLAP_DBFLAG_SINGLE_SHADOW 0x4000U /* a single-master shadow */
#define SLAP_DBFLAG_SINGLE_SHADOW 0x4000U /* a single-provider shadow */
#define SLAP_DBFLAG_SYNC_SHADOW 0x1000U /* a sync shadow */
#define SLAP_DBFLAG_SLURP_SHADOW 0x2000U /* a slurp shadow */
#define SLAP_DBFLAG_SHADOW_MASK (SLAP_DBFLAG_SHADOW|SLAP_DBFLAG_SINGLE_SHADOW|SLAP_DBFLAG_SYNC_SHADOW|SLAP_DBFLAG_SLURP_SHADOW)
#define SLAP_DBFLAG_CLEAN 0x10000U /* was cleanly shutdown */
#define SLAP_DBFLAG_ACL_ADD 0x20000U /* check attr ACLs on adds */
#define SLAP_DBFLAG_SYNC_SUBENTRY 0x40000U /* use subentry for context */
#define SLAP_DBFLAG_MULTI_SHADOW 0x80000U /* uses mirrorMode/multi-master */
#define SLAP_DBFLAG_MULTI_SHADOW 0x80000U /* uses multi-provider */
#define SLAP_DBFLAG_DISABLED 0x100000U
#define SLAP_DBFLAG_LASTBIND 0x200000U
slap_mask_t be_flags;
@ -1893,7 +1893,7 @@ struct BackendDB {
#define SLAP_SYNC_SHADOW(be) (SLAP_DBFLAGS(be) & SLAP_DBFLAG_SYNC_SHADOW)
#define SLAP_SLURP_SHADOW(be) (SLAP_DBFLAGS(be) & SLAP_DBFLAG_SLURP_SHADOW)
#define SLAP_SINGLE_SHADOW(be) (SLAP_DBFLAGS(be) & SLAP_DBFLAG_SINGLE_SHADOW)
#define SLAP_MULTIMASTER(be) (SLAP_DBFLAGS(be) & SLAP_DBFLAG_MULTI_SHADOW)
#define SLAP_MULTIPROVIDER(be) (SLAP_DBFLAGS(be) & SLAP_DBFLAG_MULTI_SHADOW)
#define SLAP_DBCLEAN(be) (SLAP_DBFLAGS(be) & SLAP_DBFLAG_CLEAN)
#define SLAP_DBACL_ADD(be) (SLAP_DBFLAGS(be) & SLAP_DBFLAG_ACL_ADD)
#define SLAP_SYNC_SUBENTRY(be) (SLAP_DBFLAGS(be) & SLAP_DBFLAG_SYNC_SUBENTRY)
@ -1975,7 +1975,7 @@ struct BackendDB {
slap_access_t be_dfltaccess; /* access given if no acl matches */
AttributeName *be_extra_anlist; /* attributes that need to be added to search requests (ITS#6513) */
/* Replica Information */
/* Consumer Information */
struct berval be_update_ndn; /* allowed to make changes (in replicas) */
BerVarray be_update_refs; /* where to refer modifying clients to */
struct be_pcl *be_pending_csn_list;

View file

@ -799,7 +799,7 @@ slap_tool_init(
break;
}
/* If the named base is a glue master, operate on the
/* If the named base is a glue primary, operate on the
* entire context
*/
if ( SLAP_GLUE_INSTANCE( be ) ) {
@ -827,7 +827,7 @@ slap_tool_init(
continue;
/* If just doing the first by default and it is a
* glue subordinate, find the master.
* glue subordinate, find the primary.
*/
if ( SLAP_GLUE_SUBORDINATE(be) ) {
nosubordinates = 1;

View file

@ -184,7 +184,7 @@ static int syncrepl_dirsync_cookie(
static int syncrepl_dsee_update( syncinfo_t *si, Operation *op ) ;
/* delta-mmr overlay handler */
/* delta-mpr overlay handler */
static int syncrepl_op_modify( Operation *op, SlapReply *rs );
/* callback functions */
@ -195,7 +195,7 @@ static AttributeDescription *sync_descs[4];
static AttributeDescription *dsee_descs[7];
/* delta-mmr */
/* delta-mpr */
static AttributeDescription *ad_reqMod, *ad_reqDN;
typedef struct logschema {
@ -272,7 +272,7 @@ init_syncrepl(syncinfo_t *si)
overlay_register( &syncrepl_ov );
}
/* delta-MMR needs the overlay, nothing else does.
/* delta-MPR needs the overlay, nothing else does.
* This must happen before accesslog overlay is configured.
*/
if ( si->si_syncdata &&
@ -822,7 +822,7 @@ do_syncrep1(
si->si_syncCookie.rid = si->si_rid;
/* whenever there are multiple data sources possible, advertise sid */
si->si_syncCookie.sid = ( SLAP_MULTIMASTER( si->si_be ) || si->si_be != si->si_wbe ) ?
si->si_syncCookie.sid = ( SLAP_MULTIPROVIDER( si->si_be ) || si->si_be != si->si_wbe ) ?
slap_serverID : -1;
#ifdef LDAP_CONTROL_X_DIRSYNC
@ -1432,7 +1432,7 @@ logerr:
}
ber_scanf( ber, /*"{"*/ "}" );
}
if ( SLAP_MULTIMASTER( op->o_bd ) && check_syncprov( op, si )) {
if ( SLAP_MULTIPROVIDER( op->o_bd ) && check_syncprov( op, si )) {
slap_sync_cookie_free( &syncCookie_req, 0 );
slap_dup_sync_cookie( &syncCookie_req, &si->si_syncCookie );
}
@ -1619,7 +1619,7 @@ logerr:
continue;
}
if ( SLAP_MULTIMASTER( op->o_bd ) && check_syncprov( op, si )) {
if ( SLAP_MULTIPROVIDER( op->o_bd ) && check_syncprov( op, si )) {
slap_sync_cookie_free( &syncCookie_req, 0 );
slap_dup_sync_cookie( &syncCookie_req, &si->si_syncCookie );
}
@ -1791,10 +1791,10 @@ do_syncrepl(
* in use. This may be complicated by the use of the glue
* overlay.
*
* Typically there is a single syncprov mastering the entire
* Typically there is a single syncprov controlling the entire
* glued tree. In that case, our contextCSN updates should
* go to the master DB. But if there is no syncprov on the
* master DB, then nothing special is needed here.
* go to the primary DB. But if there is no syncprov on the
* primary DB, then nothing special is needed here.
*
* Alternatively, there may be individual syncprov overlays
* on each glued branch. In that case, each syncprov only
@ -2844,14 +2844,14 @@ syncrepl_message_to_op(
OpExtraSync oes;
op->orm_modlist = modlist;
op->o_bd = si->si_wbe;
/* delta-mmr needs additional checks in syncrepl_op_modify */
if ( SLAP_MULTIMASTER( op->o_bd )) {
/* delta-mpr needs additional checks in syncrepl_op_modify */
if ( SLAP_MULTIPROVIDER( op->o_bd )) {
oes.oe.oe_key = (void *)syncrepl_message_to_op;
oes.oe_si = si;
LDAP_SLIST_INSERT_HEAD( &op->o_extra, &oes.oe, oe_next );
}
rc = op->o_bd->be_modify( op, &rs );
if ( SLAP_MULTIMASTER( op->o_bd )) {
if ( SLAP_MULTIPROVIDER( op->o_bd )) {
LDAP_SLIST_REMOVE( &op->o_extra, &oes.oe, OpExtra, oe_next );
BER_BVZERO( &op->o_csn );
}
@ -4333,11 +4333,11 @@ syncrepl_del_nonpresent(
op->ors_limit = NULL;
op->ors_attrsonly = 0;
op->ors_filter = filter_dup( si->si_filter, op->o_tmpmemctx );
/* In multimaster, updates can continue to arrive while
/* In multi-provider, updates can continue to arrive while
* we're searching. Limit the search result to entries
* older than our newest cookie CSN.
*/
if ( SLAP_MULTIMASTER( op->o_bd )) {
if ( SLAP_MULTIPROVIDER( op->o_bd )) {
Filter *f;
int i;
@ -4367,7 +4367,7 @@ syncrepl_del_nonpresent(
rc = be->be_search( op, &rs_search );
if ( SLAP_MULTIMASTER( op->o_bd )) {
if ( SLAP_MULTIPROVIDER( op->o_bd )) {
op->ors_filter = of;
}
if ( op->ors_filter ) filter_free_x( op, op->ors_filter, 1 );

View file

@ -45,11 +45,11 @@ uid: user3
uidNumber: 5387
homeDirectory: /home/user3
loginShell: /bin/false
gecos: Slave
gecos: Consumer
gidNumber: 100
userPassword: abc
cn: Slave
sn: Slave
cn: Consumer
sn: Consumer
dn: uid=user2,ou=people,dc=example,dc=com
objectClass: person

View file

@ -1,4 +1,4 @@
# master slapd config -- for testing
# provider slapd config -- for testing
# $OpenLDAP$
## This work is part of OpenLDAP Software <http://www.openldap.org/>.
##

View file

@ -41,7 +41,7 @@ fi
echo "This test tracks a case where changes are incorrectly skipped"
echo "See https://bugs.openldap.org/show_bug.cgi?id=8444 for more information."
MMR=4
MPR=4
XDIR=$TESTDIR/srv
mkdir -p $TESTDIR
@ -54,7 +54,7 @@ ITSDIR=$DATADIR/regressions/its$ITS
echo "Initializing server configurations..."
n=1
while [ $n -le $MMR ]; do
while [ $n -le $MPR ]; do
DBDIR=${XDIR}$n/db
CFDIR=${XDIR}$n/slapd.d
@ -66,7 +66,7 @@ done
KILLPIDS=
n=1
while [ $n -le $MMR ]; do
while [ $n -le $MPR ]; do
MYURI=`eval echo '$URI'$n`
MYLOG=`eval echo '$LOG'$n`
CFDIR=${XDIR}$n/slapd.d
@ -204,7 +204,7 @@ EOF
TOON4="cn=Bugs_Bunny,ou=People,$BASEDN"
for member in $TOON1 $TOON2 $TOON3 $TOON4; do
n=1
while [ $n -le $MMR ]; do
while [ $n -le $MPR ]; do
>$SEARCHOUT
echo "# Searching member $member after removal from Cartoonia group, provider $n" >> $SEARCHOUT
MYURI=`eval echo '$URI'$n`
@ -247,7 +247,7 @@ EOF
echo "Searching entire database on each provider after deleting Cartoonia group"
n=1
while [ $n -le $MMR ]; do
while [ $n -le $MPR ]; do
echo "# Searching the entire database after deleting Cartoonia, provider $n" >> $SEARCHOUT
MYURI=`eval echo '$URI'$n`
$LDAPSEARCH -S "" -b "$BASEDN" -H $MYURI -D "cn=manager,$BASEDN" -w $PASSWD \
@ -287,7 +287,7 @@ EOF
echo "Searching entire database on each provider after re-adding Cartoonia group"
n=1
while [ $n -le $MMR ]; do
while [ $n -le $MPR ]; do
>$SEARCHOUT
echo "# Searching the entire database after re-adding Cartoonia, provider $n" >> $SEARCHOUT
MYURI=`eval echo '$URI'$n`

View file

@ -37,7 +37,7 @@ if test $dtest = N; then
fi
# This mimics the scenario where a single server has been used until now (no
# syncprov either, so no contextCSN) and we convert it to a delta-MMR setup:
# syncprov either, so no contextCSN) and we convert it to a delta-MPR setup:
# 1. stop the server (note that there is likely no contextCSN in the DB at this point)
# 2. configure all servers to delta-replicate from each other and start them up
# - empty servers will start with a refresh of the main DB
@ -47,7 +47,7 @@ fi
echo "This test tracks a case where slapd deadlocks during a significant write load"
echo "See https://bugs.openldap.org/show_bug.cgi?id=8752 for more information."
MMR=4
MPR=4
iterations=20000
check_sync_every=100
MAPSIZE=`expr 100 \* 1024 \* 1024`
@ -59,7 +59,7 @@ ITS=8752
ITSDIR=$DATADIR/regressions/its$ITS
n=1
while [ $n -le $MMR ]; do
while [ $n -le $MPR ]; do
DBDIR=${XDIR}$n/db
mkdir -p ${XDIR}$n $DBDIR.1 $DBDIR.2
n=`expr $n + 1`
@ -158,22 +158,22 @@ if test $RC != 0 ; then
exit $RC
fi
echo "Stopping slapd and reworking configuration for MMR..."
echo "Stopping slapd and reworking configuration for MPR..."
kill -HUP $KILLPIDS
wait $KILLPIDS
KILLPIDS=
n=1
while [ $n -le $MMR ]; do
while [ $n -le $MPR ]; do
MYURI=`eval echo '$URI'$n`
MYLOG=`eval echo '$LOG'$n`
MYCONF=`eval echo '$CONF'$n`
echo "Starting provider slapd on TCP/IP URI $MYURI"
. $CONFFILTER $BACKEND $MONITORDB < $ITSDIR/slapd.conf.mmr > $TESTDIR/slapd.conf
sed -e "s/MMR/$n/g" -e "s/wronglog/log/" -e "s/@MAPSIZE@/$MAPSIZE/" $TESTDIR/slapd.conf > $MYCONF
. $CONFFILTER $BACKEND $MONITORDB < $ITSDIR/slapd.conf.mpr > $TESTDIR/slapd.conf
sed -e "s/MPR/$n/g" -e "s/wronglog/log/" -e "s/@MAPSIZE@/$MAPSIZE/" $TESTDIR/slapd.conf > $MYCONF
j=1
while [ $j -le $MMR ]; do
while [ $j -le $MPR ]; do
MMCURI=`eval echo '$URI'$j`
sed -e "s|MMC${j}|${MMCURI}|" $MYCONF > $TESTDIR/slapd.conf
mv $TESTDIR/slapd.conf $MYCONF
@ -211,10 +211,10 @@ while [ $n -le $MMR ]; do
n=`expr $n + 1`
done
echo "Setting up accesslog on each master..."
echo "Setting up accesslog on each provider..."
n=1
while [ $n -le $MMR ]; do
echo "Modifying dn: cn=Elmer_Fudd,ou=People,$BASEDN on master $n"
while [ $n -le $MPR ]; do
echo "Modifying dn: cn=Elmer_Fudd,ou=People,$BASEDN on provider $n"
MYURI=`eval echo '$URI'$n`
$LDAPMODIFY -v -D "$MANAGERDN" -H $MYURI -w $PASSWD > \
$TESTOUT 2>&1 << EOMODS
@ -246,7 +246,7 @@ done
for i in 0 1 2 3 4 5; do
j=1
while [ $j -le $MMR ]; do
while [ $j -le $MPR ]; do
MYURI=`eval echo '$URI'$j`
$LDAPSEARCH -b "$BASEDN" -H "$MYURI" \
'*' '+' >"$TESTDIR/server$j.out" 2>&1
@ -262,7 +262,7 @@ for i in 0 1 2 3 4 5; do
in_sync=1
j=1
while [ $j -lt $MMR ]; do
while [ $j -lt $MPR ]; do
k=$j
j=`expr $j + 1`
$CMP "$TESTDIR/server$k.flt" "$TESTDIR/server$j.flt" > $CMPOUT
@ -287,7 +287,7 @@ fi
echo "The next step of the test will perform $iterations random write operations and may take some time."
echo "As this test is for a deadlock, it will take manual intervention to exit the test if one occurs."
echo "Starting random master/entry modifications..."
echo "Starting random provider/entry modifications..."
DN1="cn=Elmer_Fudd,ou=People,$BASEDN"
VAL1="Fudd"
@ -304,7 +304,7 @@ n=1
while [ $n -le $iterations ]; do
seed=`date +%N|sed s/...$//`
rvalue=`echo|awk "BEGIN {srand($seed)
{print int(1+rand()*$MMR)}}"`
{print int(1+rand()*$MPR)}}"`
MYURI=`eval echo '$URI'$rvalue`
seed=`date +%N|sed s/...$//`
rvalue=`echo|awk "BEGIN {srand($seed)
@ -332,7 +332,7 @@ EOMODS
echo "Checking replication status before we start iteration $n..."
for i in 0 1 2 3 4 5; do
j=1
while [ $j -le $MMR ]; do
while [ $j -le $MPR ]; do
MYURI=`eval echo '$URI'$j`
echo "Reading database from server $j..."
$LDAPSEARCH -b "$BASEDN" -H "$MYURI" \
@ -349,7 +349,7 @@ EOMODS
in_sync=1
j=1
while [ $j -lt $MMR ]; do
while [ $j -lt $MPR ]; do
k=`expr $j + 1`
$CMP "$TESTDIR/server$j.flt" "$TESTDIR/server$k.flt" > $CMPOUT
if test $? != 0 ; then
@ -383,15 +383,15 @@ echo "As this test is for a deadlock, it will take manual intervention to exit t
echo "Starting servers again, this time with the wrong logbase setting..."
KILLPIDS=
n=1
while [ $n -le $MMR ]; do
while [ $n -le $MPR ]; do
MYURI=`eval echo '$URI'$n`
MYLOG=`eval echo '$LOG'$n`
MYCONF=`eval echo '$CONF'$n`
echo "Starting provider slapd on TCP/IP URI $MYURI"
. $CONFFILTER $BACKEND $MONITORDB < $ITSDIR/slapd.conf.mmr > $TESTDIR/slapd.conf
sed -e "s/MMR/$n/g" -e "s/@MAPSIZE@/$MAPSIZE/" $TESTDIR/slapd.conf > $MYCONF
. $CONFFILTER $BACKEND $MONITORDB < $ITSDIR/slapd.conf.mpr > $TESTDIR/slapd.conf
sed -e "s/MPR/$n/g" -e "s/@MAPSIZE@/$MAPSIZE/" $TESTDIR/slapd.conf > $MYCONF
j=1
while [ $j -le $MMR ]; do
while [ $j -le $MPR ]; do
MMCURI=`eval echo '$URI'$j`
sed -e "s|MMC${j}|${MMCURI}|" $MYCONF > $TESTDIR/slapd.conf
mv $TESTDIR/slapd.conf $MYCONF
@ -429,12 +429,12 @@ while [ $n -le $MMR ]; do
n=`expr $n + 1`
done
echo "Starting random master/entry modifications..."
echo "Starting random provider/entry modifications..."
n=1
while [ $n -le $iterations ]; do
seed=`date +%N|sed s/...$//`
rvalue=`echo|awk "BEGIN {srand($seed)
{print int(1+rand()*$MMR)}}"`
{print int(1+rand()*$MPR)}}"`
MYURI=`eval echo '$URI'$rvalue`
seed=`date +%N|sed s/...$//`
rvalue=`echo|awk "BEGIN {srand($seed)
@ -462,7 +462,7 @@ if [ "$check_sync_every" -gt 0 ] && [ `expr $n % $check_sync_every` = 0 ]; then
echo "Checking replication status before we start iteration $n..."
for i in 0 1 2 3 4 5; do
j=1
while [ $j -le $MMR ]; do
while [ $j -le $MPR ]; do
MYURI=`eval echo '$URI'$j`
echo "Reading database from server $j..."
$LDAPSEARCH -b "$BASEDN" -H "$MYURI" \
@ -479,7 +479,7 @@ if [ "$check_sync_every" -gt 0 ] && [ `expr $n % $check_sync_every` = 0 ]; then
in_sync=1
j=1
while [ $j -lt $MMR ]; do
while [ $j -lt $MPR ]; do
k=`expr $j + 1`
$CMP "$TESTDIR/server$j.flt" "$TESTDIR/server$k.flt" > $CMPOUT
if test $? != 0 ; then

View file

@ -21,10 +21,10 @@ include @SCHEMADIR@/nis.schema
include @DATADIR@/test.schema
#
pidfile @TESTDIR@/slapd.MMR.pid
argsfile @TESTDIR@/slapd.MMR.args
pidfile @TESTDIR@/slapd.MPR.pid
argsfile @TESTDIR@/slapd.MPR.args
serverid MMR
serverid MPR
#mod#modulepath ../servers/slapd/back-@BACKEND@/:../servers/slapd/overlays
#mod#moduleload back_@BACKEND@.la
#monitormod#modulepath ../servers/slapd/back-monitor/
@ -41,7 +41,7 @@ database @BACKEND@
suffix "dc=example,dc=com"
rootdn "cn=Manager,dc=example,dc=com"
rootpw secret
#~null~#directory @TESTDIR@/srvMMR/db.1
#~null~#directory @TESTDIR@/srvMPR/db.1
#indexdb#index objectClass eq
#indexdb#index cn,sn,uid pres,eq,sub
@ -132,7 +132,7 @@ logpurge 24:00 01+00:00
database @BACKEND@
suffix "cn=log"
rootdn "cn=Manager,dc=example,dc=com"
#~null~#directory @TESTDIR@/srvMMR/db.2
#~null~#directory @TESTDIR@/srvMPR/db.2
#indexdb#index objectClass eq
#indexdb#index entryCSN,entryUUID,reqEnd,reqResult,reqStart eq
#mdb#maxsize @MAPSIZE@

View file

@ -38,7 +38,7 @@ fi
echo "This test tracks a case where changes are not refreshed when an old db is reloaded"
echo "See https://bugs.openldap.org/show_bug.cgi?id=8800 for more information."
MMR=4
MPR=4
XDIR=$TESTDIR/srv
mkdir -p $TESTDIR
@ -50,8 +50,8 @@ ITSDIR=$DATADIR/regressions/its$ITS
n=1
while [ $n -le $MMR ]; do
echo "Initializing server configuration for MMR$n..."
while [ $n -le $MPR ]; do
echo "Initializing server configuration for MPR$n..."
DBDIR=${XDIR}$n/db
CFDIR=${XDIR}$n/slapd.d
@ -64,7 +64,7 @@ done
KILLPIDS=
n=1
while [ $n -le $MMR ]; do
while [ $n -le $MPR ]; do
MYURI=`eval echo '$URI'$n`
MYLOG=`eval echo '$LOG'$n`
CFDIR=${XDIR}$n/slapd.d
@ -128,14 +128,14 @@ echo -n "Sleeping 1 minute to ensure consumers catch up..."
sleep 60
echo "done"
echo -n "Stopping MMR1 slapd..."
echo -n "Stopping MPR1 slapd..."
kill -HUP $MPID
wait $MPID
KILLPIDS=`echo "$KILLPIDS " | sed -e "s/ $MPID / /"`;
sleep $SLEEP2
echo "done"
echo -n "Wiping primary and accesslog databases for MMR1..."
echo -n "Wiping primary and accesslog databases for MPR1..."
DBDIR="$TESTDIR/srv1/db"
CFDIR="$TESTDIR/srv1/slapd.d"
mv $DBDIR.1 $DBDIR.1.orig

View file

@ -1,4 +1,4 @@
# master slapd config -- for testing
# provider slapd config -- for testing
# $OpenLDAP$
## This work is part of OpenLDAP Software <http://www.openldap.org/>.
##

View file

@ -1,4 +1,4 @@
# master slapd config -- for testing
# provider slapd config -- for testing
# $OpenLDAP$
## This work is part of OpenLDAP Software <http://www.openldap.org/>.
##

View file

@ -1,4 +1,4 @@
# master slapd config -- for proxy cache testing
# provider slapd config -- for proxy cache testing
# $OpenLDAP$
## This work is part of OpenLDAP Software <http://www.openldap.org/>.
##

View file

@ -1,4 +1,4 @@
# master slapd config -- for proxy cache testing
# provider slapd config -- for proxy cache testing
# $OpenLDAP$
## This work is part of OpenLDAP Software <http://www.openldap.org/>.
##

View file

@ -1,4 +1,4 @@
# master slapd config -- for testing
# provider slapd config -- for testing
# $OpenLDAP$
## This work is part of OpenLDAP Software <http://www.openldap.org/>.
##

View file

@ -1,4 +1,4 @@
# master slapd config -- for testing
# provider slapd config -- for testing
# $OpenLDAP$
## This work is part of OpenLDAP Software <http://www.openldap.org/>.
##

View file

@ -1,4 +1,4 @@
# master slapd config -- for testing
# provider slapd config -- for testing
# $OpenLDAP$
## This work is part of OpenLDAP Software <http://www.openldap.org/>.
##

View file

@ -1,4 +1,4 @@
# slave slapd config -- for testing of Delta SYNC replication
# consumer slapd config -- for testing of Delta SYNC replication
# $OpenLDAP$
## This work is part of OpenLDAP Software <http://www.openldap.org/>.
##
@ -44,7 +44,7 @@ argsfile @TESTDIR@/slapd.2.args
database @BACKEND@
suffix "dc=example,dc=com"
rootdn "cn=Replica,dc=example,dc=com"
rootdn "cn=consumer,dc=example,dc=com"
rootpw secret
#null#bind on
#~null~#directory @TESTDIR@/db.2.a

View file

@ -1,4 +1,4 @@
# master slapd config -- for testing of Delta SYNC replication
# provider slapd config -- for testing of Delta SYNC replication
# $OpenLDAP$
## This work is part of OpenLDAP Software <http://www.openldap.org/>.
##
@ -32,7 +32,7 @@ argsfile @TESTDIR@/slapd.1.args
#accesslogmod#moduleload accesslog.la
#######################################################################
# master database definitions
# provider database definitions
#######################################################################
database @BACKEND@

View file

@ -1,4 +1,4 @@
# slave slapd config -- for testing of MSAD DIRSYNC replication
# consumer slapd config -- for testing of MSAD DIRSYNC replication
# $OpenLDAP$
## This work is part of OpenLDAP Software <http://www.openldap.org/>.
##

View file

@ -1,4 +1,4 @@
# slave slapd config -- for testing of SYNC replication
# consumer slapd config -- for testing of SYNC replication
# $OpenLDAP$
## This work is part of OpenLDAP Software <http://www.openldap.org/>.
##

View file

@ -1,4 +1,4 @@
# slave slapd config -- for testing of SYNC replication
# consumer slapd config -- for testing of SYNC replication
# $OpenLDAP$
## This work is part of OpenLDAP Software <http://www.openldap.org/>.
##

View file

@ -1,4 +1,4 @@
# master slapd config -- for testing
# provider slapd config -- for testing
# $OpenLDAP$
## This work is part of OpenLDAP Software <http://www.openldap.org/>.
##

View file

@ -1,4 +1,4 @@
# master slapd config -- for testing
# provider slapd config -- for testing
# $OpenLDAP$
## This work is part of OpenLDAP Software <http://www.openldap.org/>.
##

View file

@ -1,4 +1,4 @@
# master slapd config -- for testing
# provider slapd config -- for testing
# $OpenLDAP$
## This work is part of OpenLDAP Software <http://www.openldap.org/>.
##

View file

@ -1,4 +1,4 @@
# master slapd config -- for testing
# provider slapd config -- for testing
# $OpenLDAP$
## This work is part of OpenLDAP Software <http://www.openldap.org/>.
##

View file

@ -1,4 +1,4 @@
# master slapd config -- for testing
# provider slapd config -- for testing
# $OpenLDAP$
## This work is part of OpenLDAP Software <http://www.openldap.org/>.
##

View file

@ -1,4 +1,4 @@
# master slapd config -- for testing
# provider slapd config -- for testing
# $OpenLDAP$
## This work is part of OpenLDAP Software <http://www.openldap.org/>.
##

View file

@ -1,4 +1,4 @@
# master slapd config -- for testing
# provider slapd config -- for testing
# $OpenLDAP$
## This work is part of OpenLDAP Software <http://www.openldap.org/>.
##

View file

@ -1,4 +1,4 @@
# master slapd config -- for testing
# provider slapd config -- for testing
# $OpenLDAP$
## This work is part of OpenLDAP Software <http://www.openldap.org/>.
##

View file

@ -1,4 +1,4 @@
# master slapd config -- for testing (needs updating)
# provider slapd config -- for testing (needs updating)
# $OpenLDAP$
## This work is part of OpenLDAP Software <http://www.openldap.org/>.
##

View file

@ -1,4 +1,4 @@
# master slapd config -- for testing
# provider slapd config -- for testing
# $OpenLDAP$
## This work is part of OpenLDAP Software <http://www.openldap.org/>.
##

View file

@ -1,4 +1,4 @@
# master slapd config -- for testing
# provider slapd config -- for testing
# $OpenLDAP$
## This work is part of OpenLDAP Software <http://www.openldap.org/>.
##

View file

@ -1,4 +1,4 @@
# master slapd config -- for testing
# provider slapd config -- for testing
# $OpenLDAP$
## This work is part of OpenLDAP Software <http://www.openldap.org/>.
##

View file

@ -1,4 +1,4 @@
# master slapd config -- for testing
# provider slapd config -- for testing
# $OpenLDAP$
## This work is part of OpenLDAP Software <http://www.openldap.org/>.
##

View file

@ -1,4 +1,4 @@
# slave slapd config -- for default referral testing
# consumer slapd config -- for default referral testing
# $OpenLDAP$
## This work is part of OpenLDAP Software <http://www.openldap.org/>.
##

View file

@ -1,4 +1,4 @@
# master slapd config -- for testing
# provider slapd config -- for testing
# $OpenLDAP$
## This work is part of OpenLDAP Software <http://www.openldap.org/>.
##

View file

@ -1,4 +1,4 @@
# slave slapd config -- for testing of replication
# consumer slapd config -- for testing of replication
# $OpenLDAP$
## This work is part of OpenLDAP Software <http://www.openldap.org/>.
##
@ -50,7 +50,7 @@ access to *
database @BACKEND@
suffix "dc=example,dc=com"
rootdn "cn=Replica,dc=example,dc=com"
rootdn "cn=consumer,dc=example,dc=com"
rootpw secret
# HACK: use the RootDN of the monitor database as UpdateDN so ACLs apply
# without the need to write the UpdateDN before starting replication

View file

@ -1,4 +1,4 @@
# master slapd config -- for testing
# provider slapd config -- for testing
# $OpenLDAP$
## This work is part of OpenLDAP Software <http://www.openldap.org/>.
##

View file

@ -1,4 +1,4 @@
# master slapd config -- for testing
# provider slapd config -- for testing
# $OpenLDAP$
## This work is part of OpenLDAP Software <http://www.openldap.org/>.
##

View file

@ -1,4 +1,4 @@
# slave slapd config -- for testing of SYNC replication with intermediate proxy
# consumer slapd config -- for testing of SYNC replication with intermediate proxy
# $OpenLDAP$
## This work is part of OpenLDAP Software <http://www.openldap.org/>.
##

View file

@ -1,4 +1,4 @@
# slave slapd config -- for testing of SYNC replication
# consumer slapd config -- for testing of SYNC replication
# $OpenLDAP$
## This work is part of OpenLDAP Software <http://www.openldap.org/>.
##
@ -44,7 +44,7 @@ argsfile @TESTDIR@/slapd.4.args
database @BACKEND@
suffix "dc=example,dc=com"
rootdn "cn=Replica,dc=example,dc=com"
rootdn "cn=consumer,dc=example,dc=com"
rootpw secret
#null#bind on
#~null~#directory @TESTDIR@/db.4.a

View file

@ -1,4 +1,4 @@
# slave slapd config -- for testing of SYNC replication
# consumer slapd config -- for testing of SYNC replication
# $OpenLDAP$
include @SCHEMADIR@/core.schema
@ -21,7 +21,7 @@ argsfile @TESTDIR@/slapd.5.args
database @BACKEND@
suffix "dc=example,dc=com"
rootdn "cn=Replica,dc=example,dc=com"
rootdn "cn=consumer,dc=example,dc=com"
rootpw secret
#~null~#directory @TESTDIR@/db.5.a
#indexdb#index objectClass eq
@ -33,7 +33,7 @@ rootpw secret
# Don't change syncrepl spec yet
syncrepl rid=1
provider=@URI4@
binddn="cn=Replica,dc=example,dc=com"
binddn="cn=consumer,dc=example,dc=com"
bindmethod=simple
credentials=secret
searchbase="dc=example,dc=com"

View file

@ -1,4 +1,4 @@
# slave slapd config -- for testing of SYNC replication
# consumer slapd config -- for testing of SYNC replication
# $OpenLDAP$
## This work is part of OpenLDAP Software <http://www.openldap.org/>.
##
@ -33,7 +33,7 @@ argsfile @TESTDIR@/slapd.6.args
database @BACKEND@
suffix "dc=example,dc=com"
rootdn "cn=Replica,dc=example,dc=com"
rootdn "cn=consumer,dc=example,dc=com"
rootpw secret
#~null~#directory @TESTDIR@/db.6.a
#indexdb#index objectClass eq

View file

@ -1,4 +1,4 @@
# slave slapd config -- for testing of SYNC replication
# consumer slapd config -- for testing of SYNC replication
# $OpenLDAP$
## This work is part of OpenLDAP Software <http://www.openldap.org/>.
##
@ -35,7 +35,7 @@ argsfile @TESTDIR@/slapd.2.args
database @BACKEND@
suffix "dc=example,dc=com"
rootdn "cn=Replica,dc=example,dc=com"
rootdn "cn=consumer,dc=example,dc=com"
rootpw secret
#null#bind on
#~null~#directory @TESTDIR@/db.2.a

View file

@ -1,4 +1,4 @@
# slave slapd config -- for testing of SYNC replication
# consumer slapd config -- for testing of SYNC replication
# $OpenLDAP$
## This work is part of OpenLDAP Software <http://www.openldap.org/>.
##
@ -33,7 +33,7 @@ argsfile @TESTDIR@/slapd.3.args
database @BACKEND@
suffix "dc=example,dc=com"
rootdn "cn=Replica,dc=example,dc=com"
rootdn "cn=consumer,dc=example,dc=com"
rootpw secret
#~null~#directory @TESTDIR@/db.3.a
#indexdb#index objectClass eq
@ -45,7 +45,7 @@ rootpw secret
# Don't change syncrepl spec yet
syncrepl rid=1
provider=@URI2@
binddn="cn=Replica,dc=example,dc=com"
binddn="cn=consumer,dc=example,dc=com"
bindmethod=simple
credentials=secret
searchbase="dc=example,dc=com"

View file

@ -1,4 +1,4 @@
# slave slapd config -- for testing of SYNC replication with intermediate proxy
# consumer slapd config -- for testing of SYNC replication with intermediate proxy
# $OpenLDAP$
## This work is part of OpenLDAP Software <http://www.openldap.org/>.
##
@ -32,7 +32,7 @@ argsfile @TESTDIR@/slapd.1.args
#ldapmod#moduleload back_ldap.la
#######################################################################
# master database definitions
# provider database definitions
#######################################################################
database @BACKEND@

View file

@ -1,4 +1,4 @@
# master slapd config -- for testing of SYNC replication
# provider slapd config -- for testing of SYNC replication
# $OpenLDAP$
## This work is part of OpenLDAP Software <http://www.openldap.org/>.
##
@ -30,7 +30,7 @@ argsfile @TESTDIR@/slapd.1.args
#syncprovmod#moduleload syncprov.la
#######################################################################
# master database definitions
# provider database definitions
#######################################################################
database @BACKEND@

View file

@ -1,4 +1,4 @@
# master slapd config -- for testing
# provider slapd config -- for testing
# $OpenLDAP$
## This work is part of OpenLDAP Software <http://www.openldap.org/>.
##

View file

@ -1,4 +1,4 @@
# master slapd config -- for testing
# provider slapd config -- for testing
# $OpenLDAP$
## This work is part of OpenLDAP Software <http://www.openldap.org/>.
##

View file

@ -87,29 +87,29 @@ CLIENTDIR=../clients/tools
CONF=$DATADIR/slapd.conf
CONFTWO=$DATADIR/slapd2.conf
CONF2DB=$DATADIR/slapd-2db.conf
MCONF=$DATADIR/slapd-master.conf
MCONF=$DATADIR/slapd-provider.conf
COMPCONF=$DATADIR/slapd-component.conf
PWCONF=$DATADIR/slapd-pw.conf
WHOAMICONF=$DATADIR/slapd-whoami.conf
ACLCONF=$DATADIR/slapd-acl.conf
RCONF=$DATADIR/slapd-referrals.conf
SRMASTERCONF=$DATADIR/slapd-syncrepl-master.conf
DSRMASTERCONF=$DATADIR/slapd-deltasync-master.conf
DSRSLAVECONF=$DATADIR/slapd-deltasync-slave.conf
SRPROVIDERCONF=$DATADIR/slapd-syncrepl-provider.conf
DSRPROVIDERCONF=$DATADIR/slapd-deltasync-provider.conf
DSRCONSUMERCONF=$DATADIR/slapd-deltasync-consumer.conf
PPOLICYCONF=$DATADIR/slapd-ppolicy.conf
PROXYCACHECONF=$DATADIR/slapd-proxycache.conf
PROXYAUTHZCONF=$DATADIR/slapd-proxyauthz.conf
CACHEMASTERCONF=$DATADIR/slapd-cache-master.conf
PROXYAUTHZMASTERCONF=$DATADIR/slapd-cache-master-proxyauthz.conf
R1SRSLAVECONF=$DATADIR/slapd-syncrepl-slave-refresh1.conf
R2SRSLAVECONF=$DATADIR/slapd-syncrepl-slave-refresh2.conf
P1SRSLAVECONF=$DATADIR/slapd-syncrepl-slave-persist1.conf
P2SRSLAVECONF=$DATADIR/slapd-syncrepl-slave-persist2.conf
P3SRSLAVECONF=$DATADIR/slapd-syncrepl-slave-persist3.conf
CACHEPROVIDERCONF=$DATADIR/slapd-cache-provider.conf
PROXYAUTHZPROVIDERCONF=$DATADIR/slapd-cache-provider-proxyauthz.conf
R1SRCONSUMERCONF=$DATADIR/slapd-syncrepl-consumer-refresh1.conf
R2SRCONSUMERCONF=$DATADIR/slapd-syncrepl-consumer-refresh2.conf
P1SRCONSUMERCONF=$DATADIR/slapd-syncrepl-consumer-persist1.conf
P2SRCONSUMERCONF=$DATADIR/slapd-syncrepl-consumer-persist2.conf
P3SRCONSUMERCONF=$DATADIR/slapd-syncrepl-consumer-persist3.conf
DIRSYNC1CONF=$DATADIR/slapd-dirsync1.conf
DSEESYNC1CONF=$DATADIR/slapd-dsee-slave1.conf
DSEESYNC2CONF=$DATADIR/slapd-dsee-slave2.conf
REFSLAVECONF=$DATADIR/slapd-ref-slave.conf
DSEESYNC1CONF=$DATADIR/slapd-dsee-consumer1.conf
DSEESYNC2CONF=$DATADIR/slapd-dsee-consumer2.conf
REFCONSUMERCONF=$DATADIR/slapd-ref-consumer.conf
SCHEMACONF=$DATADIR/slapd-schema.conf
TLSCONF=$DATADIR/slapd-tls.conf
TLSSASLCONF=$DATADIR/slapd-tls-sasl.conf
@ -130,7 +130,7 @@ CHAINCONF2=$DATADIR/slapd-chain2.conf
GLUESYNCCONF1=$DATADIR/slapd-glue-syncrepl1.conf
GLUESYNCCONF2=$DATADIR/slapd-glue-syncrepl2.conf
SQLCONF=$DATADIR/slapd-sql.conf
SQLSRMASTERCONF=$DATADIR/slapd-sql-syncrepl-master.conf
SQLSRPROVIDERCONF=$DATADIR/slapd-sql-syncrepl-provider.conf
TRANSLUCENTLOCALCONF=$DATADIR/slapd-translucent-local.conf
TRANSLUCENTREMOTECONF=$DATADIR/slapd-translucent-remote.conf
METACONF=$DATADIR/slapd-meta.conf
@ -141,9 +141,9 @@ GLUELDAPCONF=$DATADIR/slapd-glue-ldap.conf
ACICONF=$DATADIR/slapd-aci.conf
VALSORTCONF=$DATADIR/slapd-valsort.conf
DYNLISTCONF=$DATADIR/slapd-dynlist.conf
RSLAVECONF=$DATADIR/slapd-repl-slave-remote.conf
PLSRSLAVECONF=$DATADIR/slapd-syncrepl-slave-persist-ldap.conf
PLSRMASTERCONF=$DATADIR/slapd-syncrepl-multiproxy.conf
RCONSUMERCONF=$DATADIR/slapd-repl-consumer-remote.conf
PLSRCONSUMERCONF=$DATADIR/slapd-syncrepl-consumer-persist-ldap.conf
PLSRPROVIDERCONF=$DATADIR/slapd-syncrepl-multiproxy.conf
DDSCONF=$DATADIR/slapd-dds.conf
PASSWDCONF=$DATADIR/slapd-passwd.conf
UNDOCONF=$DATADIR/slapd-config-undo.conf
@ -298,7 +298,7 @@ MONITOR=""
REFDN="c=US"
BASEDN="dc=example,dc=com"
MANAGERDN="cn=Manager,$BASEDN"
UPDATEDN="cn=Replica,$BASEDN"
UPDATEDN="cn=consumer,$BASEDN"
PASSWD=secret
BABSDN="cn=Barbara Jensen,ou=Information Technology DivisioN,ou=People,$BASEDN"
BJORNSDN="cn=Bjorn Jensen,ou=Information Technology DivisioN,ou=People,$BASEDN"
@ -354,29 +354,29 @@ SERVER5FLT=$TESTDIR/server5.flt
SERVER6OUT=$TESTDIR/server6.out
SERVER6FLT=$TESTDIR/server6.flt
MASTEROUT=$SERVER1OUT
MASTERFLT=$SERVER1FLT
SLAVEOUT=$SERVER2OUT
SLAVE2OUT=$SERVER3OUT
SLAVEFLT=$SERVER2FLT
SLAVE2FLT=$SERVER3FLT
PROVIDEROUT=$SERVER1OUT
PROVIDERFLT=$SERVER1FLT
CONSUMEROUT=$SERVER2OUT
CONSUMER2OUT=$SERVER3OUT
CONSUMERFLT=$SERVER2FLT
CONSUMER2FLT=$SERVER3FLT
MTREADOUT=$TESTDIR/mtread.out
# original outputs for cmp
PROXYCACHEOUT=$DATADIR/proxycache.out
REFERRALOUT=$DATADIR/referrals.out
SEARCHOUTMASTER=$DATADIR/search.out.master
SEARCHOUTPROVIDER=$DATADIR/search.out.provider
SEARCHOUTX=$DATADIR/search.out.xsearch
COMPSEARCHOUT=$DATADIR/compsearch.out
MODIFYOUTMASTER=$DATADIR/modify.out.master
ADDDELOUTMASTER=$DATADIR/adddel.out.master
MODRDNOUTMASTER0=$DATADIR/modrdn.out.master.0
MODRDNOUTMASTER1=$DATADIR/modrdn.out.master.1
MODRDNOUTMASTER2=$DATADIR/modrdn.out.master.2
MODRDNOUTMASTER3=$DATADIR/modrdn.out.master.3
ACLOUTMASTER=$DATADIR/acl.out.master
REPLOUTMASTER=$DATADIR/repl.out.master
MODIFYOUTPROVIDER=$DATADIR/modify.out.provider
ADDDELOUTPROVIDER=$DATADIR/adddel.out.provider
MODRDNOUTPROVIDER0=$DATADIR/modrdn.out.provider.0
MODRDNOUTPROVIDER1=$DATADIR/modrdn.out.provider.1
MODRDNOUTPROVIDER2=$DATADIR/modrdn.out.provider.2
MODRDNOUTPROVIDER3=$DATADIR/modrdn.out.provider.3
ACLOUTPROVIDER=$DATADIR/acl.out.provider
REPLOUTPROVIDER=$DATADIR/repl.out.provider
MODSRCHFILTERS=$DATADIR/modify.search.filters
CERTIFICATETLS=$DATADIR/certificate.tls
CERTIFICATEOUT=$DATADIR/certificate.out

View file

@ -122,7 +122,7 @@ test "$KILLSERVERS" != no && kill -HUP $KILLPIDS
echo "Assuming everything is fine."
#echo "Comparing results"
#$CMP $TESTOUT $SEARCHOUTMASTER
#$CMP $TESTOUT $SEARCHOUTPROVIDER
#if test $? != 0 ; then
# echo "Comparison failed"
# exit 1

View file

@ -34,7 +34,7 @@ fi
mkdir -p $TESTDIR $DBDIR2A
echo "Starting slapd on TCP/IP port $PORT1..."
. $CONFFILTER $BACKEND $MONITORDB < $SQLSRMASTERCONF > $CONF1
. $CONFFILTER $BACKEND $MONITORDB < $SQLSRPROVIDERCONF > $CONF1
$SLAPD -f $CONF1 -h $URI1 -d $LVL $TIMING > $LOG1 2>&1 &
PID=$!
if test $WAIT != 0 ; then
@ -61,17 +61,17 @@ if test $RC != 0 ; then
exit $RC
fi
echo "Starting slave slapd on TCP/IP port $PORT2..."
. $CONFFILTER $BACKEND $MONITORDB < $R1SRSLAVECONF > $CONF2
echo "Starting consumer slapd on TCP/IP port $PORT2..."
. $CONFFILTER $BACKEND $MONITORDB < $R1SRCONSUMERCONF > $CONF2
$SLAPD -f $CONF2 -h $URI2 -d $LVL $TIMING > $LOG2 2>&1 &
SLAVEPID=$!
CONSUMERPID=$!
if test $WAIT != 0 ; then
echo SLAVEPID $SLAVEPID
echo CONSUMERPID $CONSUMERPID
read foo
fi
KILLPIDS="$KILLPIDS $SLAVEPID"
KILLPIDS="$KILLPIDS $CONSUMERPID"
echo "Using ldapsearch to check that slave slapd is running..."
echo "Using ldapsearch to check that consumer slapd is running..."
for i in 0 1 2 3 4 5; do
$LDAPSEARCH -s base -b "$MONITOR" -h $LOCALHOST -p $PORT2 \
'objectclass=*' > /dev/null 2>&1
@ -91,8 +91,8 @@ fi
cat /dev/null > $SEARCHOUT
echo "Using ldapsearch to retrieve all the entries from the master..."
echo "# Using ldapsearch to retrieve all the entries from the master..." \
echo "Using ldapsearch to retrieve all the entries from the provider..."
echo "# Using ldapsearch to retrieve all the entries from the provider..." \
>> $SEARCHOUT
$LDAPSEARCH -S "" -h $LOCALHOST -p $PORT1 -b "$BASEDN" \
-D "$MANAGERDN" -w $PASSWD \
@ -107,8 +107,8 @@ fi
cat /dev/null > $SEARCHOUT2
echo "Using ldapsearch to retrieve all the entries from the slave..."
echo "# Using ldapsearch to retrieve all the entries from the slave..." \
echo "Using ldapsearch to retrieve all the entries from the consumer..."
echo "# Using ldapsearch to retrieve all the entries from the consumer..." \
>> $SEARCHOUT2
$LDAPSEARCH -S "" -h $LOCALHOST -p $PORT2 -b "$BASEDN" \
-D "$UPDATEDN" -w $PASSWD \
@ -121,9 +121,9 @@ if test $RC != 0 ; then
exit $RC
fi
echo "Filtering ldapsearch results from master..."
echo "Filtering ldapsearch results from provider..."
$LDIFFILTER < $SEARCHOUT > $SEARCHFLT
echo "Filtering ldapsearch results from slave..."
echo "Filtering ldapsearch results from consumer..."
$LDIFFILTER < $SEARCHOUT2 > $SEARCHFLT2
echo "Comparing filter output..."
$CMP $SEARCHFLT $SEARCHFLT2 > $CMPOUT
@ -632,13 +632,13 @@ EOMODS
exit 1
fi
echo "Waiting 25 seconds for master to send changes..."
echo "Waiting 25 seconds for provider to send changes..."
sleep 25
cat /dev/null > $SEARCHOUT
echo "Using ldapsearch to retrieve all the entries from the master..."
echo "# Using ldapsearch to retrieve all the entries from the master..." \
echo "Using ldapsearch to retrieve all the entries from the provider..."
echo "# Using ldapsearch to retrieve all the entries from the provider..." \
>> $SEARCHOUT
$LDAPSEARCH -S "" -h $LOCALHOST -p $PORT1 -b "$BASEDN" \
-D "$MANAGERDN" -w $PASSWD \
@ -653,8 +653,8 @@ EOMODS
cat /dev/null > $SEARCHOUT2
echo "Using ldapsearch to retrieve all the entries from the slave..."
echo "# Using ldapsearch to retrieve all the entries from the slave..." \
echo "Using ldapsearch to retrieve all the entries from the consumer..."
echo "# Using ldapsearch to retrieve all the entries from the consumer..." \
>> $SEARCHOUT2
$LDAPSEARCH -S "" -h $LOCALHOST -p $PORT2 -b "$BASEDN" \
-D "$UPDATEDN" -w $PASSWD \
@ -667,9 +667,9 @@ EOMODS
exit $RC
fi
echo "Filtering ldapsearch results from master..."
echo "Filtering ldapsearch results from provider..."
$LDIFFILTER < $SEARCHOUT > $SEARCHFLT
echo "Filtering ldapsearch results from slave..."
echo "Filtering ldapsearch results from consumer..."
$LDIFFILTER < $SEARCHOUT2 > $SEARCHFLT2
echo "Comparing filter output..."
$CMP $SEARCHFLT $SEARCHFLT2 > $CMPOUT

View file

@ -29,9 +29,9 @@ NIS_LDIF=$SRCDIR/data/nis_sample.ldif
# Sample configuration file for your LDAP server
if test "$BACKEND" = "bdb2" ; then
NIS_CONF=$DATADIR/slapd-bdb2-nis-master.conf
NIS_CONF=$DATADIR/slapd-bdb2-nis-provider.conf
else
NIS_CONF=$DATADIR/slapd-nis-master.conf
NIS_CONF=$DATADIR/slapd-nis-provider.conf
fi
echo "Cleaning up in $DBDIR..."
@ -47,7 +47,7 @@ if [ $RC != 0 ]; then
fi
echo "Starting slapd on TCP/IP port $PORT..."
$SLAPD -f $NIS_CONF -p $PORT -d $LVL $TIMING > $MASTERLOG 2>&1 &
$SLAPD -f $NIS_CONF -p $PORT -d $LVL $TIMING > $PROVIDERLOG 2>&1 &
PID=$!
echo ">>>>> LDAP server with NIS schema is up! PID=$PID"

View file

@ -134,7 +134,7 @@ fi
test $KILLSERVERS != no && kill -HUP $KILLPIDS
LDIF=$SEARCHOUTMASTER
LDIF=$SEARCHOUTPROVIDER
echo "Filtering ldapsearch results..."
$LDIFFILTER < $SEARCHOUT > $SEARCHFLT

View file

@ -101,7 +101,7 @@ if test $RC != 0 ; then
exit $RC
fi
LDIF=$MODIFYOUTMASTER
LDIF=$MODIFYOUTPROVIDER
echo "Filtering ldapsearch results..."
$LDIFFILTER < $SEARCHOUT > $SEARCHFLT

View file

@ -95,7 +95,7 @@ if test $RC != 0 ; then
fi
LDIF=$MODRDNOUTMASTER1
LDIF=$MODRDNOUTPROVIDER1
echo "Filtering ldapsearch results..."
$LDIFFILTER < $SEARCHOUT > $SEARCHFLT
@ -122,7 +122,7 @@ if test $RC != 0 ; then
fi
LDIF=$MODRDNOUTMASTER2
LDIF=$MODRDNOUTPROVIDER2
echo "Filtering ldapsearch results..."
$LDIFFILTER < $SEARCHOUT > $SEARCHFLT
@ -166,7 +166,7 @@ if test $RC != 0 ; then
exit $RC
fi
LDIF=$MODRDNOUTMASTER0
LDIF=$MODRDNOUTPROVIDER0
echo "Filtering ldapsearch results..."
$LDIFFILTER < $SEARCHOUT > $SEARCHFLT
@ -206,7 +206,7 @@ if test $RC != 0 ; then
exit $RC
fi
LDIF=$MODRDNOUTMASTER3
LDIF=$MODRDNOUTPROVIDER3
echo "Filtering ldapsearch results..."
$LDIFFILTER < $SEARCHOUT > $SEARCHFLT

Some files were not shown because too many files have changed in this diff.