Design Notes: slurpd

This new version differs significantly from previous versions:

- It uses a multithreaded, single-process model.  Previous versions
  forked a separate process for each replica.  This new design should
  facilitate porting to NT, and is more straightforward.

- Only one copy of the replication log is made.  Previous versions
  made one copy of the replication log for each replica.

- This newer version is more object-oriented.  Although still written
  in ANSI C (and compilable with K&R compilers), it encapsulates within
  the major data structures the methods used to access "private" data.

General design overview:

The main data structure in slurpd is a replication queue (struct rq).
The rq structure is currently implemented as a linked list of
replication entries (struct re).  The rq structure contains member
functions used to initialize the queue, add to it, remove from it, and
return the next re struct.  (A sketch of this structure appears at the
end of these notes.)

In addition to the rq structure, there is one ri (replication
information) struct for each replica.  The ri struct encapsulates all
information about a particular replica, e.g. hostname, port, and bind
DN.  Its single public member function, ri_process, is called to begin
processing the replication entries in the queue.

There is also a status structure (struct st) which contains the
timestamp of the last successful replication operation for each
replica.  The contents of the st struct are flushed to disk after
every successful operation.  This disk file is read upon startup and
allows slurpd to "pick up where it left off."  (The status file
handling is also sketched at the end of these notes.)

Threading notes:

The LDAP liblthread quasi-pthreads interface is used for threading.
At this point, machines which do not support pthreads, Sun threads,
or lwp will probably not be able to run slurpd.  Given the current
threading method, discussed in the next paragraph, it will probably
be necessary to have a separate hunk of code which handles
non-threaded architectures (or we might just not worry about it).
This needs further discussion.

Upon startup, command-line arguments and the slapd configuration file
are processed, and one thread is started for each replica.  A replica
thread waits on a condition variable when it has no more work to do,
and the main thread's file manager routine broadcasts on that
condition variable when new work is added to the queue.  (A sketch of
this wakeup scheme appears at the end of these notes.)

Additional notes:

See doc/devel/replication-notes.txt
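
Illustrative sketches:

The fragment below sketches how the replication queue described above
might look: a singly linked list of re structs, with the access
routines installed as function pointers on the rq struct so callers go
through the structure's "methods".  The field and function names shown
here (re_data, rq_head, rq_add, rq_getnext, rq_init) are assumptions
made for illustration and need not match the actual slurpd headers.

    /* Illustrative sketch only; names are hypothetical. */
    #include <stdlib.h>

    typedef struct re {
        char        *re_data;   /* one parsed replication log entry */
        struct re   *re_next;   /* next entry in the queue */
    } Re;

    typedef struct rq {
        Re          *rq_head;   /* oldest entry */
        Re          *rq_tail;   /* newest entry */

        /* "member functions," installed by rq_init() */
        int        (*rq_add)( struct rq *rq, char *data );
        Re        *(*rq_getnext)( struct rq *rq, Re *cur );
    } Rq;

    /* return the first entry, or the one following cur */
    static Re *
    rq_getnext( Rq *rq, Re *cur )
    {
        return ( cur == NULL ) ? rq->rq_head : cur->re_next;
    }

    /* append one entry to the tail of the queue */
    static int
    rq_add( Rq *rq, char *data )
    {
        Re *re = (Re *) calloc( 1, sizeof( Re ));

        if ( re == NULL ) {
            return -1;
        }
        re->re_data = data;
        if ( rq->rq_tail == NULL ) {
            rq->rq_head = re;
        } else {
            rq->rq_tail->re_next = re;
        }
        rq->rq_tail = re;
        return 0;
    }

    /* fill in the "method" pointers so callers go through the struct */
    static void
    rq_init( Rq *rq )
    {
        rq->rq_head = rq->rq_tail = NULL;
        rq->rq_add = rq_add;
        rq->rq_getnext = rq_getnext;
    }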
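
Next, a minimal sketch of the per-replica status structure and its
flush-to-disk step.  The st layout and the one-line-per-replica file
format shown here are assumptions for illustration only; the real
status file format is whatever the slurpd sources define.

    /* Hypothetical sketch of the status (st) file handling. */
    #include <stdio.h>
    #include <time.h>

    typedef struct stel {
        char   *hostname;       /* replica host */
        int     port;           /* replica port */
        time_t  last;           /* timestamp of last successful op */
    } Stel;

    typedef struct st {
        Stel   *st_data;        /* one entry per replica */
        int     st_nreplicas;
        char   *st_fname;       /* on-disk status file */
    } St;

    /* rewrite the status file after a successful replication op */
    static int
    st_write( St *st )
    {
        FILE *fp = fopen( st->st_fname, "w" );
        int   i;

        if ( fp == NULL ) {
            return -1;
        }
        for ( i = 0; i < st->st_nreplicas; i++ ) {
            fprintf( fp, "%s:%d %ld\n", st->st_data[i].hostname,
                st->st_data[i].port, (long) st->st_data[i].last );
        }
        fclose( fp );
        return 0;
    }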
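
Finally, a sketch of the wakeup scheme described under "Threading
notes", written against plain POSIX pthreads for clarity.  slurpd
itself goes through the liblthread quasi-pthreads wrappers, and the
rq_mutex / rq_more / rq_nonempty names below are hypothetical.

    #include <pthread.h>

    static pthread_mutex_t rq_mutex = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  rq_more  = PTHREAD_COND_INITIALIZER;
    static int             rq_nonempty = 0;  /* "queue has work" flag */

    /* replica thread: sleep until the file manager signals new work */
    static void *
    replica_thread( void *arg )
    {
        for ( ;; ) {
            pthread_mutex_lock( &rq_mutex );
            while ( !rq_nonempty ) {
                pthread_cond_wait( &rq_more, &rq_mutex );
            }
            /* ... pull the next re struct off the queue here ... */
            rq_nonempty = 0;
            pthread_mutex_unlock( &rq_mutex );

            /* ... ri_process()-style work happens outside the lock ... */
        }
        return NULL;
    }

    /* file manager (main thread): add work, wake all replica threads */
    static void
    fm_new_work( void )
    {
        pthread_mutex_lock( &rq_mutex );
        /* ... append the new entry to the replication queue ... */
        rq_nonempty = 1;
        pthread_mutex_unlock( &rq_mutex );
        pthread_cond_broadcast( &rq_more );
    }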