Commit 75407982 authored by Patrick D. Hunt's avatar Patrick D. Hunt

ZOOKEEPER-23. Auto reset of watches on reconnect

git-svn-id: https://svn.apache.org/repos/asf/hadoop/zookeeper/trunk@706834 13f79535-47bb-0310-9956-ffa450edef68
parent c11e11a9
......@@ -37,6 +37,8 @@ Backward compatibile changes:
BUGFIXES:
ZOOKEEPER-23. Auto reset of watches on reconnect (breed via phunt)
ZOOKEEPER-191. forrest docs for upgrade. (mahadev via phunt)
ZOOKEEPER-201. validate magic number when reading snapshot and transaction
......
......@@ -518,7 +518,6 @@ the connection comes back up.
case SyncConnected:
// Everything is happy. Lets kick things off
// again by checking the existence of the znode
zk.exists(znode, true, this, null);
break;
case Expired:
// It's all over
......@@ -782,7 +781,6 @@ public class DataMonitor implements Watcher, StatCallback {
case SyncConnected:
// Everything is happy. Lets kick things off
// again by checking the existence of the znode
zk.exists(znode, true, this, null);
break;
case Expired:
// It's all over
......
......@@ -191,6 +191,9 @@ document.write("Last Published: " + document.lastModified);
<a href="#migration_code">Migrating Client Code</a>
<ul class="minitoc">
<li>
<a href="#Watch+Management">Watch Management</a>
</li>
<li>
<a href="#Java+API">Java API</a>
</li>
<li>
......@@ -266,7 +269,23 @@ Note: ZooKeeper increments the major version number (major.minor.fix) when backw
</ul>
<a name="N1003F"></a><a name="migration_code"></a>
<h3 class="h4">Migrating Client Code</h3>
<a name="N10045"></a><a name="Java+API"></a>
<a name="N10045"></a><a name="Watch+Management"></a>
<h4>Watch Management</h4>
<p>
In previous releases of ZooKeeper, any watches registered by clients were lost if the client lost its connection to a ZooKeeper server.
This meant that developers had to track the watches they were interested in and reregister them if a session disconnect event was received.
In this release the client library tracks watches that a client has registered and reregisters the watches when a connection is made to a new server.
Applications that still manually reregister interest should continue working properly as long as they are able to handle unsolicited watches.
For example, an old application may register a watch for /foo and /goo, lose the connection, and reregister only /goo.
As long as the application is able to receive a notification for /foo (probably ignoring it), the application does not need to be changed.
One caveat to the watch management: it is possible to miss an event for the creation and deletion of a znode if the client is watching for creation and both the create and the delete happen while the client is disconnected from ZooKeeper.
</p>
<p>
This release also allows clients to specify call-specific watch functions.
This gives the developer the ability to modularize logic into different watch functions rather than cramming everything into the watch function attached to the ZooKeeper handle.
Call-specific watch functions receive all session events for as long as they are active, but only receive the watch callbacks for which they are registered.
</p>
<a name="N10052"></a><a name="Java+API"></a>
<h4>Java API</h4>
<ol>
......@@ -288,7 +307,7 @@ Note: ZooKeeper increments the major version number (major.minor.fix) when backw
Also see <a href="http://hadoop.apache.org/zookeeper/docs/current/api/index.html">the current java API</a>
</p>
<a name="N10077"></a><a name="C+API"></a>
<a name="N10084"></a><a name="C+API"></a>
<h4>C API</h4>
<ol>
......@@ -297,7 +316,7 @@ Also see <a href="http://hadoop.apache.org/zookeeper/docs/current/api/index.html
</li>
</ol>
<a name="N1008A"></a><a name="migration_data"></a>
<a name="N10097"></a><a name="migration_data"></a>
<h3 class="h4">Migrating Server Data</h3>
<p>
The following issues resulted in changes to the on-disk data format (the snapshot and transaction log files contained within the ZK data directory) and require a migration utility to be run.
......@@ -446,7 +465,7 @@ The following issues resulted in changes to the on-disk data format (the snapsho
</div>
<a name="N10120"></a><a name="changes"></a>
<a name="N1012D"></a><a name="changes"></a>
<h2 class="h3">Changes Since ZooKeeper 2.2.1</h2>
<div class="section">
<p>
......
......@@ -822,11 +822,13 @@ document.write("Last Published: " + document.lastModified);
</ul>
<p>Watches are maintained locally at the ZooKeeper server to which the
client is connected. This allows watches to be lightweight to set,
maintain, and dispatch. It also means if a client connects to a different
server, the new server is not going to know about its watches. So, when a
client gets a disconnect event, it must consider that an implicit trigger
of all watches. When a client reconnects to a new server, the client
should re-set any watches that it is still interested in.</p>
maintain, and dispatch. When a client connects to a new server, the watch
will be triggered for any session events. Watches will not be received
while disconnected from a server. When a client reconnects, any previously
registered watches will be reregistered and triggered if needed. In
general this all occurs transparently. There is one case where a watch
may be missed: a watch for the existence of a znode not yet created will
be missed if the znode is created and deleted while disconnected.</p>
<a name="N101E9"></a><a name="sc_WatchGuarantees"></a>
<h3 class="h4">What ZooKeeper Guarantees about Watches</h3>
<p>With regard to watches, ZooKeeper maintains these
......@@ -894,10 +896,26 @@ document.write("Last Published: " + document.lastModified);
<li>
<p>A watch object, or function/context pair, will only be
triggered once for a given notification. For example, if the same
watch object is registered for an exists and a getData call for the
same file and that file is then deleted, the watch object would
only be invoked once with the deletion notification for the file.
</p>
</li>
</ul>
<ul>
<li>
<p>When you disconnect from a server (for example, when the
server fails), all of the watches you have registered are lost, so
you should treat this case as if all your watches were
triggered.</p>
server fails), you will not get any watches until the connection
is reestablished. For this reason session events are sent to all
outstanding watch handlers. Use session events to go into a safe
mode: you will not be receiving events while disconnected, so your
process should act conservatively in that mode.</p>
</li>
......@@ -905,13 +923,13 @@ document.write("Last Published: " + document.lastModified);
</div>
<a name="N10231"></a><a name="sc_ZooKeeperAccessControl"></a>
<a name="N1023A"></a><a name="sc_ZooKeeperAccessControl"></a>
<h2 class="h3">ZooKeeper access control using ACLs</h2>
<div class="section">
<p>ZooKeeper uses ACLs to control access to its znodes (the data nodes of a ZooKeeper data tree). The ACL implementation is quite similar to UNIX file access permissions: it employs permission bits to allow/disallow various operations against a node and the scope to which the bits apply. Unlike standard UNIX permissions, a ZooKeeper node is not limited by the three standard scopes for user (owner of the file), group, and world (other). ZooKeeper does not have a notion of an owner of a znode. Instead, an ACL specifies sets of ids and permissions that are associated with those ids.</p>
<p>ZooKeeper supports pluggable authentication schemes. Ids are specified using the form <em>scheme:id</em>, where <em>scheme</em> is the authentication scheme that the id corresponds to. For example, <em>host:host1.corp.com</em> is an id for a host named <em>host1.corp.com</em>.</p>
<p>When a client connects to ZooKeeper and authenticates itself, ZooKeeper associates all the ids that correspond to a client with the client's connection. These ids are checked against the ACLs of znodes when a client tries to access a node. ACLs are made up of pairs of <em>(scheme:expression, perms)</em>. The format of the <em>expression</em> is specific to the scheme. For example, the pair <em>(ip:19.22.0.0/16, READ)</em> gives the <em>READ</em> permission to any clients with an IP address that starts with 19.22.</p>
<a name="N10258"></a><a name="sc_ACLPermissions"></a>
<a name="N10261"></a><a name="sc_ACLPermissions"></a>
<h3 class="h4">ACL Permissions</h3>
<p>Zookeeper supports the following permissions:</p>
<ul>
......@@ -947,7 +965,7 @@ document.write("Last Published: " + document.lastModified);
<p>
<em>CREATE</em> without <em>DELETE</em>: clients create requests by creating zookeeper nodes in a parent directory. You want all clients to be able to add, but only request processor can delete. (This is kind of like the APPEND permission for files.)</p>
<p>Also, the <em>ADMIN</em> permission is there since Zookeeper doesn&rsquo;t have a notion of file owner. In some sense the <em>ADMIN</em> permission designates the entity as the owner. Zookeeper doesn&rsquo;t support the LOOKUP permission (execute permission bit on directories to allow you to LOOKUP even though you can't list the directory). Everyone implicitly has LOOKUP permission. This allows you to stat a node, but nothing more. (The problem is, if you want to call zoo_exists() on a node that doesn't exist, there is no permission to check.)</p>
<a name="N102AE"></a><a name="sc_BuiltinACLSchemes"></a>
<a name="N102B7"></a><a name="sc_BuiltinACLSchemes"></a>
<h4>Builtin ACL Schemes</h4>
<p>ZooKeeper has the following built-in schemes:</p>
<ul>
......@@ -978,7 +996,7 @@ document.write("Last Published: " + document.lastModified);
</li>
</ul>
<a name="N10303"></a><a name="Zookeeper+C+client+API"></a>
<a name="N1030C"></a><a name="Zookeeper+C+client+API"></a>
<h4>Zookeeper C client API</h4>
<p>The following constants are provided by the zookeeper C library:</p>
<ul>
......@@ -1165,7 +1183,7 @@ int main(int argc, char argv) {
</div>
<a name="N10420"></a><a name="ch_zkGuarantees"></a>
<a name="N10429"></a><a name="ch_zkGuarantees"></a>
<h2 class="h3">Consistency Guarantees</h2>
<div class="section">
<p>ZooKeeper is a high performance, scalable service. Both reads and
......@@ -1291,12 +1309,12 @@ int main(int argc, char argv) {
</div>
<a name="N10487"></a><a name="ch_bindings"></a>
<a name="N10490"></a><a name="ch_bindings"></a>
<h2 class="h3">Bindings</h2>
<div class="section">
<p>The ZooKeeper client libraries come in two languages: Java and C.
The following sections describe these.</p>
<a name="N10490"></a><a name="Java+Binding"></a>
<a name="N10499"></a><a name="Java+Binding"></a>
<h3 class="h4">Java Binding</h3>
<p>There are two packages that make up the ZooKeeper Java binding:
<strong>org.apache.zookeeper</strong> and <strong>org.apache.zookeeper.data</strong>. The rest of the
......@@ -1363,7 +1381,7 @@ int main(int argc, char argv) {
(SESSION_EXPIRED and AUTH_FAILED), the ZooKeeper object becomes invalid,
the two threads shut down, and any further ZooKeeper calls throw
errors.</p>
<a name="N104D9"></a><a name="C+Binding"></a>
<a name="N104E2"></a><a name="C+Binding"></a>
<h3 class="h4">C Binding</h3>
<p>The C binding has a single-threaded and multi-threaded library.
The multi-threaded library is easiest to use and is most similar to the
......@@ -1380,7 +1398,7 @@ int main(int argc, char argv) {
(i.e. FreeBSD 4.x). In all other cases, application developers should
link with zookeeper_mt, as it includes support for both Sync and Async
API.</p>
<a name="N104E8"></a><a name="Installation"></a>
<a name="N104F1"></a><a name="Installation"></a>
<h4>Installation</h4>
<p>If you're building the client from a check-out from the Apache
repository, follow the steps outlined below. If you're building from a
......@@ -1511,7 +1529,7 @@ int main(int argc, char argv) {
</li>
</ol>
<a name="N10591"></a><a name="Using+the+Client"></a>
<a name="N1059A"></a><a name="Using+the+Client"></a>
<h4>Using the Client</h4>
<p>You can test your client by running a zookeeper server (see
instructions on the project wiki page on how to run it) and connecting
......@@ -1564,7 +1582,7 @@ int main(int argc, char argv) {
</div>
<a name="N105D0"></a><a name="ch_guideToZkOperations"></a>
<a name="N105D9"></a><a name="ch_guideToZkOperations"></a>
<h2 class="h3">Building Blocks: A Guide to ZooKeeper Operations</h2>
<div class="section">
<p>This section surveys all the operations a developer can perform
......@@ -1582,25 +1600,25 @@ int main(int argc, char argv) {
</li>
</ul>
<a name="N105E4"></a><a name="sc_connectingToZk"></a>
<a name="N105ED"></a><a name="sc_connectingToZk"></a>
<h3 class="h4">Connecting to ZooKeeper</h3>
<p></p>
<a name="N105ED"></a><a name="sc_readOps"></a>
<a name="N105F6"></a><a name="sc_readOps"></a>
<h3 class="h4">Read Operations</h3>
<p></p>
<a name="N105F6"></a><a name="sc_writeOps"></a>
<a name="N105FF"></a><a name="sc_writeOps"></a>
<h3 class="h4">Write Operations</h3>
<p></p>
<a name="N105FF"></a><a name="sc_handlingWatches"></a>
<a name="N10608"></a><a name="sc_handlingWatches"></a>
<h3 class="h4">Handling Watches</h3>
<p></p>
<a name="N10608"></a><a name="sc_miscOps"></a>
<a name="N10611"></a><a name="sc_miscOps"></a>
<h3 class="h4">Miscellaneous ZooKeeper Operations</h3>
<p></p>
</div>
<a name="N10612"></a><a name="ch_programStructureWithExample"></a>
<a name="N1061B"></a><a name="ch_programStructureWithExample"></a>
<h2 class="h3">Program Structure, with Simple Example</h2>
<div class="section">
<p>
......@@ -1609,7 +1627,7 @@ int main(int argc, char argv) {
</div>
<a name="N1061D"></a><a name="ch_gotchas"></a>
<a name="N10626"></a><a name="ch_gotchas"></a>
<h2 class="h3">Gotchas: Common Problems and Troubleshooting</h2>
<div class="section">
<p>So now you know ZooKeeper. It's fast, simple, your application
......@@ -1620,13 +1638,10 @@ int main(int argc, char argv) {
<li>
<p>If you are using watches, you must look for the connected watch
event. When a ZooKeeper client disconnects from a server, all the
watches are removed, so a client must treat the disconnect event as an
implicit trigger of watches. The easiest way to deal with this is to
act like the connected watch event is a watch trigger for all your
watches. The connected event makes a better trigger than the
disconnected event because you can access ZooKeeper and reestablish
watches when you are connected.</p>
event. When a ZooKeeper client disconnects from a server, you will
not receive notification of changes until reconnected. If you are
watching for a znode to come into existence, you will miss the event
if the znode is created and deleted while you are disconnected.</p>
</li>
......
......@@ -3,7 +3,7 @@ include $(top_srcdir)/aminclude.am
AM_CPPFLAGS = -Iinclude -Igenerated
AM_CFLAGS = -Wall -Werror
CXXFLAGS += -Wall
CXXFLAGS = -Wall -g
LIB_LDFLAGS = -no-undefined -version-info 2
......@@ -70,8 +70,9 @@ EXTRA_DIST+=$(wildcard tests/*.cc) $(wildcard tests/*.h) \
TEST_SOURCES = tests/TestDriver.cc tests/LibCMocks.cc tests/LibCSymTable.cc \
tests/MocksBase.cc tests/ZKMocks.cc tests/Util.cc tests/ThreadingUtil.cc \
tests/TestWatchers.cc tests/TestHashtable.cc \
tests/TestOperations.cc tests/TestZookeeperInit.cc tests/TestZookeeperClose.cc
tests/TestWatchers.cc \
tests/TestOperations.cc tests/TestZookeeperInit.cc \
tests/TestZookeeperClose.cc tests/TestClient.cc
SYMBOL_WRAPPERS=$(shell cat tests/wrappers.opt)
......
......@@ -28,6 +28,7 @@ AC_CONFIG_HEADER([config.h])
AM_PATH_CPPUNIT(1.10.2)
AC_PROG_CC
AM_PROG_CC_C_O
AC_PROG_CXX
AC_PROG_INSTALL
AC_PROG_LN_S
......
......@@ -35,6 +35,7 @@ static const int SYNC_OP=9;
static const int PING_OP=11;
static const int CLOSE_OP=-11;
static const int SETAUTH_OP=100;
static const int SETWATCHES_OP=101;
#ifdef __cplusplus
}
......
......@@ -26,6 +26,7 @@
#include <sys/time.h>
#include <time.h>
#include <errno.h>
#include <assert.h>
#ifdef YCA
#include <yca/yca.h>
......@@ -68,7 +69,10 @@ void watcher(zhandle_t *zzh, int type, int state, const char *path,void* context
if (!fh) {
perror(clientIdFile);
} else {
fwrite(&myid, sizeof(myid), 1, fh);
int rc = fwrite(&myid, sizeof(myid), 1, fh);
if (rc != 1) { /* fwrite returns the number of items written */
perror("writing client id");
}
fclose(fh);
}
}
......@@ -130,7 +134,7 @@ void my_data_completion(int rc, const char *value, int value_len,
fprintf(stderr, "%s: rc = %d\n", (char*)data, rc);
if (value) {
fprintf(stderr, " value_len = %d\n", value_len);
write(2, value, value_len);
assert(write(2, value, value_len) == value_len);
}
fprintf(stderr, "\nStat:\n");
dumpStat(stat);
......@@ -396,7 +400,9 @@ int main(int argc, char **argv) {
clientIdFile = argv[2];
fh = fopen(clientIdFile, "r");
if (fh) {
fread(&myid, sizeof(myid), 1, fh);
if (fread(&myid, sizeof(myid), 1, fh) != 1) { /* one item expected */
memset(&myid, 0, sizeof(myid));
}
fclose(fh);
}
}
......
......@@ -271,6 +271,7 @@ void *do_io(void *v)
int interest;
int timeout;
int maxfd=1;
int rc;
zookeeper_interest(zh, &fd, &interest, &tv);
if (fd != -1) {
......@@ -292,7 +293,7 @@ void *do_io(void *v)
while(read(adaptor_threads->self_pipe[0],b,sizeof(b))==sizeof(b)){}
}
// dispatch zookeeper events
zookeeper_process(zh, interest);
rc = zookeeper_process(zh, interest);
// check the current state of the zhandle and terminate
// if it is_unrecoverable()
if(is_unrecoverable(zh))
......
......@@ -29,6 +29,7 @@
#define WATCHER_EVENT_XID -1
#define PING_XID -2
#define AUTH_XID -4
#define SET_WATCHES_XID -8
/* zookeeper state constants */
#define EXPIRED_SESSION_STATE_DEF -112
......@@ -194,7 +195,8 @@ struct _zhandle {
* available in the socket recv buffer */
struct timeval socket_readable;
zk_hashtable* active_node_watchers;
zk_hashtable* active_node_watchers;
zk_hashtable* active_exist_watchers;
zk_hashtable* active_child_watchers;
};
......@@ -224,11 +226,11 @@ int32_t inc_ref_counter(zhandle_t* zh,int i);
// atomic post-increment
int32_t fetch_and_add(volatile int32_t* operand, int incr);
// in mt mode process session event asynchronously by the completion thread
int queue_session_event(zhandle_t *zh, int state);
#define PROCESS_SESSION_EVENT(zh,newstate) queue_session_event(zh,newstate)
#else
// in single-threaded mode process session event immediately
#define PROCESS_SESSION_EVENT(zh,newstate) deliverWatchers(zh,ZOO_SESSION_EVENT,newstate,0)
//#define PROCESS_SESSION_EVENT(zh,newstate) deliverWatchers(zh,ZOO_SESSION_EVENT,newstate,0)
#define PROCESS_SESSION_EVENT(zh,newstate) queue_session_event(zh,newstate)
#endif
#ifdef __cplusplus
......
......@@ -39,9 +39,9 @@ hashtable_impl* getImpl(zk_hashtable* ht){
return ht->ht;
}
typedef struct _watcher_object_list_t {
struct watcher_object_list {
watcher_object_t* head;
} watcher_object_list_t;
};
watcher_object_t* getFirstWatcher(zk_hashtable* ht,const char* path)
{
......@@ -54,6 +54,7 @@ watcher_object_t* getFirstWatcher(zk_hashtable* ht,const char* path)
watcher_object_t* clone_watcher_object(watcher_object_t* wo)
{
watcher_object_t* res=calloc(1,sizeof(watcher_object_t));
assert(res);
res->watcher=wo->watcher;
res->context=wo->context;
return res;
......@@ -78,6 +79,7 @@ static int string_equal(void *key1,void *key2)
watcher_object_t* create_watcher_object(watcher_fn watcher,void* ctx)
{
watcher_object_t* wo=calloc(1,sizeof(watcher_object_t));
assert(wo);
wo->watcher=watcher;
wo->context=ctx;
return wo;
......@@ -86,6 +88,7 @@ watcher_object_t* create_watcher_object(watcher_fn watcher,void* ctx)
static watcher_object_list_t* create_watcher_object_list(watcher_object_t* head)
{
watcher_object_list_t* wl=calloc(1,sizeof(watcher_object_list_t));
assert(wl);
wl->head=head;
return wl;
}
......@@ -106,6 +109,7 @@ static void destroy_watcher_object_list(watcher_object_list_t* list)
zk_hashtable* create_zk_hashtable()
{
struct _zk_hashtable *ht=calloc(1,sizeof(struct _zk_hashtable));
assert(ht);
#ifdef THREADED
pthread_mutex_init(&ht->lock, 0);
#endif
......@@ -113,42 +117,6 @@ zk_hashtable* create_zk_hashtable()
return ht;
}
int get_element_count(zk_hashtable *ht)
{
int res;
#ifdef THREADED
pthread_mutex_lock(&ht->lock);
#endif
res=hashtable_count(ht->ht);
#ifdef THREADED
pthread_mutex_unlock(&ht->lock);
#endif
return res;
}
int get_watcher_count(zk_hashtable* ht,const char* path)
{
int res=0;
watcher_object_list_t* wl;
watcher_object_t* wo;
#ifdef THREADED
pthread_mutex_lock(&ht->lock);
#endif
wl=hashtable_search(ht->ht,(void*)path);
if(wl==0)
goto done;
wo=wl->head;
while(wo!=0){
res++;
wo=wo->next;
}
done:
#ifdef THREADED
pthread_mutex_unlock(&ht->lock);
#endif
return res;
}
static void do_clean_hashtable(zk_hashtable* ht)
{
struct hashtable_itr *it;
......@@ -156,11 +124,11 @@ static void do_clean_hashtable(zk_hashtable* ht)
if(hashtable_count(ht->ht)==0)
return;
it=hashtable_iterator(ht->ht);
do{
do {
watcher_object_list_t* w=hashtable_iterator_value(it);
destroy_watcher_object_list(w);
hasMore=hashtable_iterator_remove(it);
}while(hasMore);
} while(hasMore);
free(it);
}
......@@ -190,9 +158,9 @@ void destroy_zk_hashtable(zk_hashtable* ht)
// searches for a watcher object instance in a watcher object list;
// two watcher objects are equal if their watcher function and context pointers
// are equal
static watcher_object_t* search_watcher(watcher_object_list_t* wl,watcher_object_t* wo)
static watcher_object_t* search_watcher(watcher_object_list_t** wl,watcher_object_t* wo)
{
watcher_object_t* wobj=wl->head;
watcher_object_t* wobj=(*wl)->head;
while(wobj!=0){
if(wobj->watcher==wo->watcher && wobj->context==wo->context)
return wobj;
......@@ -201,10 +169,29 @@ static watcher_object_t* search_watcher(watcher_object_list_t* wl,watcher_object
return 0;
}
int add_to_list(watcher_object_list_t **wl, watcher_object_t *wo, int clone)
{
if (search_watcher(wl, wo)==0) {
watcher_object_t* cloned=wo;
if (clone) {
cloned = clone_watcher_object(wo);
assert(cloned);
}
cloned->next = (*wl)->head;
(*wl)->head = cloned;
return 1;
} else if (!clone) {
// If it's here and we aren't supposed to clone, we must destroy
free(wo);
}
return 0;
}
static int do_insert_watcher_object(zk_hashtable *ht, const char *path, watcher_object_t* wo)
{
int res=1;
watcher_object_list_t* wl;
wl=hashtable_search(ht->ht,(void*)path);
if(wl==0){
int res;
......@@ -213,15 +200,29 @@ static int do_insert_watcher_object(zk_hashtable *ht, const char *path, watcher_
assert(res);
}else{
/* path already exists; check if the watcher already exists */
if(search_watcher(wl,wo)==0){
wo->next=wl->head;
wl->head=wo; // insert the new watcher at the head
}else
res=0; // the watcher already exists -- do not insert!
res = add_to_list(&wl, wo, 1);
}
return res;
}
char **collect_keys(zk_hashtable *ht, int *count)
{
char **list;
struct hashtable_itr *it;
int i;
*count = hashtable_count(ht->ht);
list = calloc(*count, sizeof(char*));
it=hashtable_iterator(ht->ht);
for(i = 0; i < *count; i++) {
list[i] = strdup(hashtable_iterator_key(it));
hashtable_iterator_advance(it);
}
free(it);
return list;
}
int insert_watcher_object(zk_hashtable *ht, const char *path, watcher_object_t* wo)
{
int res;
......@@ -235,102 +236,63 @@ int insert_watcher_object(zk_hashtable *ht, const char *path, watcher_object_t*
return res;
}
static void copy_watchers(zk_hashtable* dst,const char* path,watcher_object_list_t* wl)
static void copy_watchers(watcher_object_list_t *from, watcher_object_list_t *to, int clone)
{
if(wl==0)
return;
watcher_object_t* wo=wl->head;
while(wo!=0){
int res;
watcher_object_t* cloned=clone_watcher_object(wo);
res=do_insert_watcher_object(dst,path,cloned);
// was it a duplicate?
if(res==0)
free(cloned); // yes, didn't get inserted
wo=wo->next;
watcher_object_t* wo=from->head;
while(wo){
watcher_object_t *next = wo->next;
add_to_list(&to, wo, clone);
wo=next;
}
}
static void copy_table(zk_hashtable* dst,zk_hashtable* src)
{
static void copy_table(zk_hashtable *from, watcher_object_list_t *to) {
struct hashtable_itr *it;
int hasMore;
if(hashtable_count(src->ht)==0)
if(hashtable_count(from->ht)==0)
return;
it=hashtable_iterator(src->ht);
do{