Laurent Luce's Blog

Python threads synchronization: Locks, RLocks, Semaphores,
Conditions, Events and Queues
February 5, 2011
This article describes the Python threading synchronization mechanisms in detail. We are
going to study the following types: Lock, RLock, Semaphore, Condition, Event and Queue.
We are also going to look at the Python internals behind those mechanisms.
The source code of the programs below can be found at github.com/laurentluce/python-
tutorials under threads/.
First, let's look at a simple program using the threading module with no synchronization.
Threading
We want to write a program that fetches the content of some URLs and writes it to a file. We
could do it serially with no threads, but to speed things up we are going to create two threads,
each processing half of the URLs.
Note: the best approach here would be to use a queue of URLs to fetch, but this simpler
example is a better starting point for the tutorial.
The class FetchUrls subclasses threading.Thread. It takes a list of URLs to fetch and a file object to
write the content to.
The main function starts the two threads and then waits for them to finish.
The issue is that both threads are going to write to the file at the same time, resulting in a big
mess. We need a way to make sure that only one thread writes to the file at a given time. One
way to do that is to use a synchronization mechanism like a lock.
01 class FetchUrls(threading.Thread):
02 """
03 Thread checking URLs.
04 """
05
06 def __init__(self, urls, output):
07 """
08 Constructor.
09
10 @param urls list of urls to check
11 @param output file to write urls output
12 """
13 threading.Thread.__init__(self)
14 self.urls = urls
15 self.output = output
16
17 def run(self):
18 """
19 Thread run method. Check URLs one by one.
20 """
21 while self.urls:
22 url = self.urls.pop()
23 req = urllib2.Request(url)
24 try:
25 d = urllib2.urlopen(req)
26 except urllib2.URLError, e:
27 print 'URL %s failed: %s' % (url, e.reason)
28 continue  # skip the write if the URL failed
29 self.output.write(d.read())
30 print 'write done by %s' % self.name
31 print 'URL %s fetched by %s' % (url, self.name)
01 def main():
02 # list 1 of urls to fetch
03 urls1 = ['http://www.google.com', 'http://www.facebook.com']
04 # list 2 of urls to fetch
05 urls2 = ['http://www.yahoo.com', 'http://www.youtube.com']
06 f = open('output.txt', 'w+')
07 t1 = FetchUrls(urls1, f)
08 t2 = FetchUrls(urls2, f)
09 t1.start()
10 t2.start()
11 t1.join()
12 t2.join()
13 f.close()
14
15 if __name__ == '__main__':
16 main()
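As an aside, subclassing threading.Thread is not the only way to start threads: you can also pass a target callable to the Thread constructor. Here is a minimal sketch under the same Python 2 setup as the article (the fetch_urls helper and the URL lists below are made up for illustration, they are not part of the article's code):

import threading
import urllib2

def fetch_urls(urls, output):
    # Fetch each URL and write its content to the shared file object.
    for url in urls:
        try:
            d = urllib2.urlopen(url)
        except urllib2.URLError, e:
            print 'URL %s failed: %s' % (url, e.reason)
            continue
        output.write(d.read())
        print 'URL %s fetched by %s' % (url, threading.current_thread().name)

f = open('output.txt', 'w')
t1 = threading.Thread(target=fetch_urls, args=(['http://www.google.com'], f))
t2 = threading.Thread(target=fetch_urls, args=(['http://www.yahoo.com'], f))
t1.start()
t2.start()
t1.join()
t2.join()
f.close()

This version has the exact same problem as the class-based one: both threads can end up writing to the file at the same time, which is what the next section fixes.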
Lock
Locks have two states: locked and unlocked. Two methods are used to manipulate them: acquire()
and release(). These are the rules:
if the state is unlocked: a call to acquire() changes the state to locked.
if the state is locked: a call to acquire() blocks until another thread calls release().
if the state is unlocked: a call to release() raises a ThreadError exception.
if the state is locked: a call to release() changes the state to unlocked.
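To make these rules concrete, here is a tiny sketch (not from the article) where the main thread holds the lock while a second thread blocks in acquire() until the main thread releases it:

import threading
import time

lock = threading.Lock()

def worker():
    lock.acquire()    # blocks here until the main thread calls release()
    print 'worker acquired the lock'
    lock.release()

lock.acquire()        # state is now locked
t = threading.Thread(target=worker)
t.start()
time.sleep(1)         # the worker is blocked in acquire() during this second
lock.release()        # state is unlocked again, the worker wakes up
t.join()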
To solve our issue of two threads writing to the same file at the same time, we pass a lock to
the FetchUrls constructor and we use it to protect the file write operation. I am just going to
highlight the modifications relevant to locks. The source code can be found in threads/lock.py.
Let's look at the program output when we run it:
The write operation is now protected by a lock and we don't have two threads writing to the file
at the same time.
Let's take a look at the Python internals. I am using Python 2.6.6 on Linux.
The method Lock() of the threading module is an alias for thread.allocate_lock. You can see the
code in Lib/threading.py.
The C implementation can be found in Python/thread_pthread.h. We assume that our system
supports POSIX semaphores. sem_init() initializes the semaphore at the address pointed to by
lock. The initial value of the semaphore is 1, which means unlocked. The semaphore is shared
between the threads of the process.
01 class FetchUrls(threading.Thread):
02 ...
03
04 def __init__(self, urls, output, lock):
05 ...
06 self.lock = lock
07
08 def run(self):
09 ...
10 while self.urls:
11 ...
12 self.lock.acquire()
13 print 'lock acquired by %s' % self.name
14 self.output.write(d.read())
15 print 'write done by %s' % self.name
16 print 'lock released by %s' % self.name
17 self.lock.release()
18 ...
19
20 def main():
21 ...
22 lock = threading.Lock()
23 ...
24 t1 = FetchUrls(urls1, f, lock)
25 t2 = FetchUrls(urls2, f, lock)
26 ...
01 $ python locks.py
02 lock acquired by Thread-2
03 write done by Thread-2
04 lock released by Thread-2
05 URL http://www.youtube.com fetched by Thread-2
06 lock acquired by Thread-1
07 write done by Thread-1
08 lock released by Thread-1
09 URL http://www.facebook.com fetched by Thread-1
10 lock acquired by Thread-2
11 write done by Thread-2
12 lock released by Thread-2
13 URL http://www.yahoo.com fetched by Thread-2
14 lock acquired by Thread-1
15 write done by Thread-1
16 lock released by Thread-1
17 URL http://www.google.com fetched by Thread-1
1 Lock = _allocate_lock
2 _allocate_lock = thread.allocate_lock
01 PyThread_type_lock
02 PyThread_allocate_lock(void)
03 {
04 ...
05 lock = (sem_t *)malloc(sizeof(sem_t));
06
07 if (lock) {
08 status = sem_init(lock,0,1);
09 CHECK_STATUS("sem_init");
10 ....
11 }
12 ...
13 }
When the acquire() method is called, the following C code is executed. waitflag is equal to 1
by default, which means the call blocks until the lock is unlocked. sem_wait() decrements the
semaphore's value or blocks until the value is greater than 0, i.e. until the lock is released by
another thread.
When the release() method is called, the following C code is executed. sem_post() increments
the semaphore's value, i.e. unlocks the semaphore.
You can also use the with statement. The Lock object can be used as a context manager. The
advantage of using with is that acquire() is called when the with block is entered and release()
is called when the block is exited. Let's rewrite the class FetchUrls using the with statement;
the rewritten run() method is shown after the two C listings below.
01 int
02 PyThread_acquire_lock(PyThread_type_lock lock, int waitflag)
03 {
04 ...
05 do {
06 if (waitflag)
07 status = fix_status(sem_wait(thelock));
08 else
09 status = fix_status(sem_trywait(thelock));
10 } while (status == EINTR); /* Retry if interrupted by a signal */
11 ....
12 }
1 void
2 PyThread_release_lock(PyThread_type_lock lock)
3 {
4 ...
5 status = sem_post(thelock);
6 ...
7 }
01 class FetchUrls(threading.Thread):
02 ...
03 def run(self):
04 ...
05 while self.urls:
06 ...
07 with self.lock:
08 print 'lock acquired by %s' % self.name
09 self.output.write(d.read())
10 print 'write done by %s' % self.name
11 print 'lock released by %s' % self.name
12 ...
RLock
RLock is a reentrant lock: acquire() can be called multiple times by the same thread without
blocking. Keep in mind that release() needs to be called the same number of times to unlock
the resource.
Using a Lock, the second call to acquire() by the same thread will block:
If you use an RLock, the second call to acquire() won't block.
RLock also uses thread.allocate_lock(), but it keeps track of the owner thread to support the
reentrant feature. Following is the RLock acquire() method implementation. If the thread
calling acquire() is the owner of the resource, the counter is incremented by one. If not, it
tries to acquire the lock; the first time it acquires it, the owner is saved and the counter is
initialized to 1.
Let's look at the RLock release() method. First is a check to make sure the thread calling the
method is the owner of the lock. The counter is then decremented and, if it reaches 0, the
resource is unlocked and available for another thread to grab.
1 lock = threading.Lock()
2 lock.acquire()
3 lock.acquire()
1 rlock = threading.RLock()
2 rlock.acquire()
3 rlock.acquire()
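A typical case where reentrancy helps is an object whose methods all take the same lock and sometimes call each other. Here is a minimal sketch, not from the article (the Counter class and its methods are made up for illustration):

import threading

class Counter(object):
    def __init__(self):
        self.lock = threading.RLock()
        self.value = 0

    def increment(self):
        with self.lock:
            self.value += 1

    def increment_twice(self):
        # The lock is already held by this thread when increment() grabs it
        # again; with a plain Lock the nested acquire() would block forever.
        with self.lock:
            self.increment()
            self.increment()

counter = Counter()
counter.increment_twice()
print counter.value    # prints 2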
01 def acquire(self, blocking=1):
02 me = _get_ident()
03 if self.__owner == me:
04 self.__count = self.__count + 1
05 ...
06 return 1
07 rc = self.__block.acquire(blocking)
08 if rc:
09 self.__owner = me
10 self.__count = 1
11 ...
12 ...
13 return rc
1 def release(self):
2 if self.__owner != _get_ident():
3 raise RuntimeError("cannot release un-acquired lock")
4 self.__count = count = self.__count - 1
5 if not count:
6 self.__owner = None
7 self.__block.release()
8 ...
9 ...
Condition
This is a synchronization mechanism where a thread waits for a specific condition and another
thread signals that this condition has happened. Once the condition has happened, the thread
acquires the lock to get exclusive access to the shared resource.
A good way to illustrate this mechanism is to look at a producer/consumer example. The
producer appends random integers to a list at random intervals and the consumer retrieves those
integers from the list. The source code can be found in threads/condition.py.
Let's look at the producer class. The producer acquires the lock, appends an integer, notifies
the consumer thread that there is something to retrieve and releases the lock. It does that
forever, with a random pause between each append operation.
Next is the consumer class. It acquires the lock, checks if there is an integer in the list and, if
there is nothing, waits to be notified by the producer. Once the element is retrieved from the list,
it releases the lock.
Note that a call to wait() releases the lock so the producer can acquire the resource and do its
work.
01 class Producer(threading.Thread):
02 """
03 Produces random integers to a list
04 """
05
06 def __init__(self, integers, condition):
07 """
08 Constructor.
09
10 @param integers list of integers
11 @param condition condition synchronization object
12 """
13 threading.Thread.__init__(self)
14 self.integers = integers
15 self.condition = condition
16
17 def run(self):
18 """
19 Thread run method. Append random integers to the integers list
20 at random time.
21 """
22 while True:
23 integer = random.randint(0, 256)
24 self.condition.acquire()
25 print 'condition acquired by %s' % self.name
26 self.integers.append(integer)
27 print '%d appended to list by %s' % (integer, self.name)
28 print 'condition notified by %s' % self.name
29 self.condition.notify()
30 print 'condition released by %s' % self.name
31 self.condition.release()
32 time.sleep(1)
01 class Consumer(threading.Thread):
02 """
03 Consumes random integers from a list
04 """
05
06 def __init__(self, integers, condition):
07 """
08 Constructor.
09
10 @param integers list of integers
11 @param condition condition synchronization object
12 """
13 threading.Thread.__init__(self)
14 self.integers = integers
15 self.condition = condition
16
17 def run(self):
18 """
19 Thread run method. Consumes integers from list
20 """
21 while True:
22 self.condition.acquire()
23 print 'condition acquired by %s' % self.name
24 while True:
25 if self.integers:
26 integer = self.integers.pop()
27 print '%d popped from list by %s' % (integer, self.name)
28 break
29 print 'condition wait by %s' % self.name
30 self.condition.wait()
31 print 'condition released by %s' % self.name
32 self.condition.release()
We need to write a main function creating the two threads and starting them:
The output of this program looks like this:
Thread-1 appends 159 to the list, then notifies the consumer and releases the lock. Thread-2
acquires the lock, retrieves 159 and releases the lock. The producer is still sleeping at that point
because of the time.sleep(1), so the consumer acquires the lock again and then waits to get
notified by the producer. When wait() is called, it releases the lock so the producer can
acquire it and append a new integer to the list before notifying the consumer.
Let's look at the Python internals of the condition synchronization mechanism. The
condition's constructor creates an RLock object if no existing lock is passed to it.
This lock is used when acquire() and release() are called.
Next is the wait() method. We assume that we are calling wait() with no timeout value, to
simplify the explanation of the wait() method's code. A new lock named waiter is created and
its state is set to locked. The waiter lock is used for communication between the threads: the
producer notifies the consumer by releasing this waiter lock. The lock object is added to
the waiters list and the method blocks at waiter.acquire(). Note that the condition lock's
state is saved at the beginning and restored when wait() returns.
01 def main():
02 integers = []
03 condition = threading.Condition()
04 t1 = Producer(integers, condition)
05 t2 = Consumer(integers, condition)
06 t1.start()
07 t2.start()
08 t1.join()
09 t2.join()
10
11 if __name__ == '__main__':
12 main()
01 $ python condition.py
02 condition acquired by Thread-1
03 159 appended to list by Thread-1
04 condition notified by Thread-1
05 condition released by Thread-1
06 condition acquired by Thread-2
07 159 popped from list by Thread-2
08 condition released by Thread-2
09 condition acquired by Thread-2
10 condition wait by Thread-2
11 condition acquired by Thread-1
12 116 appended to list by Thread-1
13 condition notified by Thread-1
14 condition released by Thread-1
15 116 popped from list by Thread-2
16 condition released by Thread-2
17 condition acquired by Thread-2
18 condition wait by Thread-2
1 class _Condition(_Verbose):
2
3 def __init__(self, lock=None, verbose=None):
4 _Verbose.__init__(self, verbose)
5 if lock is None:
6 lock = RLock()
7 self.__lock = lock
01 def wait(self, timeout=None):
02 ...
03 waiter = _allocate_lock()
04 waiter.acquire()
05 self.__waiters.append(waiter)
06 saved_state = self._release_save()
07 try: # restore state no matter what (e.g., KeyboardInterrupt)
08 if timeout is None:
09 waiter.acquire()
10 ...
11 ...
12 finally:
13 self._acquire_restore(saved_state)
The notify() method is used to release the waiter lock. The producer calls notify() to notify the
consumer blocked on wait().
You can also use the with statement with the Condition object so acquire() and release()
are called for us. Let's rewrite the producer class and the consumer class using with.
01 def notify(self, n=1):
02 ...
03 __waiters = self.__waiters
04 waiters = __waiters[:n]
05 ...
06 for waiter in waiters:
07 waiter.release()
08 try:
09 __waiters.remove(waiter)
10 except ValueError:
11 pass
01 class Producer(threading.Thread):
02 ...
03 def run(self):
04 while True:
05 integer = random.randint(0, 256)
06 with self.condition:
07 print 'condition acquired by %s' % self.name
08 self.integers.append(integer)
09 print '%d appended to list by %s' % (integer, self.name)
10 print 'condition notified by %s' % self.name
11 self.condition.notify()
12 print 'condition released by %s' % self.name
13 time.sleep(1)
14
15 class Consumer(threading.Thread):
16 ...
17 def run(self):
18 while True:
19 with self.condition:
20 print 'condition acquired by %s' % self.name
21 while True:
22 if self.integers:
23 integer = self.integers.pop()
24 print '%d popped from list by %s' % (integer, self.name)
25 break
26 print 'condition wait by %s' % self.name
27 self.condition.wait()
28 print 'condition released by %s' % self.name
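The example above has a single consumer, so notify() is enough. With several consumers waiting on the same condition, notify_all() wakes every waiter so that each one can re-check the list. Here is a minimal sketch, not from the article (the thread body and the number of consumers are made up):

import threading

integers = []
condition = threading.Condition()

def consumer():
    with condition:
        # Re-check the predicate after every wake-up.
        while not integers:
            condition.wait()
        integer = integers.pop()
        print '%d popped by %s' % (integer, threading.current_thread().name)

consumers = [threading.Thread(target=consumer) for i in range(2)]
for c in consumers:
    c.start()

with condition:
    integers.extend([1, 2])
    condition.notify_all()    # wake every waiting consumer, not just one

for c in consumers:
    c.join()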
Semaphore
A semaphore is based on an internal counter which is decremented each time acquire() is
called and incremented each time release() is called. If the counter is equal to 0, then acquire()
blocks. It is the Python implementation of the Dijkstra semaphore concept: P() and V(). Using
a semaphore makes sense when you want to control access to a resource with limited
capacity, like a server.
Here is a simple example:
1 semaphore = threading.Semaphore()
2 semaphore.acquire()
3 # work on a shared resource
4 ...
5 semaphore.release()
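Here is a slightly fuller sketch (not from the article) where the semaphore's counter limits how many threads use a resource at the same time; at most two of the five workers are inside the with block at any moment:

import threading
import time

pool = threading.Semaphore(2)   # at most 2 concurrent holders

def worker(number):
    with pool:
        print 'worker %d using the resource' % number
        time.sleep(1)
    print 'worker %d done' % number

threads = [threading.Thread(target=worker, args=(i,)) for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()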
Let's look at the Python internals. The constructor takes a value, which is the counter's initial
value; it defaults to 1. A condition instance is created with a lock to protect the counter and to
notify the other threads when the semaphore is released.
1 class _Semaphore(_Verbose):
2 ...
3 def __init__(self, value=1, verbose=None):
4 _Verbose.__init__(self, verbose)
5 self.__cond = Condition(Lock())
6 self.__value = value
7 ...
Next is the acquire() method. If the semaphore's counter is equal to 0, it blocks on the
condition's wait() method until it gets notified by a different thread. If the semaphore's
counter is greater than 0, it decrements the value.
01 def acquire(self, blocking=1):
02 rc = False
03 self.__cond.acquire()
04 while self.__value == 0:
05 ...
06 self.__cond.wait()
07 else:
08 self.__value = self.__value - 1
09 rc = True
10 self.__cond.release()
11 return rc
The semaphore's release() method increments the counter and then notifies the other threads.
Note that there is also a bounded semaphore you can use to make sure you never call
release() too many times. Here is the Python internal code used for it:
You can also use the with statement with the Semaphore object so acquire() and release()
are called for us.
1 def release(self):
2 self.__cond.acquire()
3 self.__value = self.__value + 1
4 self.__cond.notify()
5 self.__cond.release()
01 class _BoundedSemaphore(_Semaphore):
02 """Semaphore that checks that # releases is <= # acquires"""
03 def __init__(self, value=1, verbose=None):
04 _Semaphore.__init__(self, value, verbose)
05 self._initial_value = value
06
07 def release(self):
08 if self._Semaphore__value >= self._initial_value:
09 raise ValueError, "Semaphore released too many times"
10 return _Semaphore.release(self)
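A quick sketch (not from the article) of the difference in behavior: an extra release() on a plain Semaphore silently grows the counter, while a BoundedSemaphore raises ValueError:

import threading

semaphore = threading.Semaphore(1)
semaphore.release()              # counter silently becomes 2

bounded = threading.BoundedSemaphore(1)
try:
    bounded.release()            # counter would exceed its initial value
except ValueError:
    print 'BoundedSemaphore refused the extra release()'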
1 semaphore = threading.Semaphore()
2 with semaphore:
3 # work on a shared resource
4 ...
Event
This is a simple mechanism: a thread signals an event and the other thread(s) wait for it.
Let's go back to our producer and consumer example and convert it to use an event instead
of a condition. The source code can be found in threads/event.py.
First the producer class. We pass an Event instance to the constructor instead of a Condition
instance. Each time an integer is added to the list, the event is set and then cleared right away to
notify the consumer. The event instance is cleared by default.
Next is the consumer class. We also pass an Event instance to the constructor. The consumer
instance blocks on wait() until the event is set, indicating that there is an integer to
consume.
01 class Producer(threading.Thread):
02 """
03 Produces random integers to a list
04 """
05
06 def __init__(self, integers, event):
07 """
08 Constructor.
09
10 @param integers list of integers
11 @param event event synchronization object
12 """
13 threading.Thread.__init__(self)
14 self.integers = integers
15 self.event = event
16
17 def run(self):
18 """
19 Thread run method. Append random integers to the integers list
20 at random time.
21 """
22 while True:
23 integer = random.randint(0, 256)
24 self.integers.append(integer)
25 print '%d appended to list by %s' % (integer, self.name)
26 print 'event set by %s' % self.name
27 self.event.set()
28 self.event.clear()
29 print 'event cleared by %s' % self.name
30 time.sleep(1)
01 class Consumer(threading.Thread):
02 """
03 Consumes random integers from a list
04 """
05
06 def __init__(self, integers, event):
07 """
08 Constructor.
09
10 @param integers list of integers
11 @param event event synchronization object
12 """
13 threading.Thread.__init__(self)
14 self.integers = integers
15 self.event = event
16
17 def run(self):
18 """
19 Thread run method. Consumes integers from list
20 """
21 while True:
22 self.event.wait()
23 try:
24 integer = self.integers.pop()
25 print '%d popped from list by %s' % (integer, self.name)
26 except IndexError:
27 # catch pop on empty list
28 time.sleep(1)
This is the output when we run the program. Thread-1 appends 124 to the list and then sets
the event to notify the consumer. The consumer's call to wait() stops blocking and the integer
is retrieved from the list.
Let's look at the Python internals. First is the Event constructor. A condition instance is
created with a lock to protect the event flag value and to notify the other threads when the
event has been set.
Following is the set() method. It sets the flag to True and notifies the other threads. The
condition object is used to protect the critical section where the flag's value is changed.
Its opposite is the clear() method, which sets the flag to False.
The wait() method blocks until set() is called; it returns right away if the flag is already set.
1 $ python event.py
2 124 appended to list by Thread-1
3 event set by Thread-1
4 event cleared by Thread-1
5 124 popped from list by Thread-2
6 223 appended to list by Thread-1
7 event set by Thread-1
8 event cleared by Thread-1
9 223 popped from list by Thread-2
1 class _Event(_Verbose):
2 def __init__(self, verbose=None):
3 _Verbose.__init__(self, verbose)
4 self.__cond = Condition(Lock())
5 self.__flag = False
1 def set(self):
2 self.__cond.acquire()
3 try:
4 self.__flag = True
5 self.__cond.notify_all()
6 finally:
7 self.__cond.release()
1 def clear(self):
2 self.__cond.acquire()
3 try:
4 self.__flag = False
5 finally:
6 self.__cond.release()
1 def wait(self, timeout=None):
2 self.__cond.acquire()
3 try:
4 if not self.__flag:
5 self.__cond.wait(timeout)
6 finally:
7 self.__cond.release()
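Besides the producer/consumer rewrite, an Event also works well as a one-shot start signal shared by several threads: they all block in wait() until the main thread calls set(). Here is a minimal sketch (not from the article, the worker function is made up):

import threading
import time

start = threading.Event()

def worker(number):
    start.wait()    # every worker blocks here until the event is set
    print 'worker %d started' % number

threads = [threading.Thread(target=worker, args=(i,)) for i in range(3)]
for t in threads:
    t.start()

time.sleep(1)
start.set()         # releases all the waiting workers at once

for t in threads:
    t.join()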
Queue
Queues are a great mechanism when we need to exchange information between threads, as
they take care of the locking for us.
We are interested in the following four Queue methods:
put: Put an item into the queue.
get: Remove and return an item from the queue.
task_done: Needs to be called each time an item has been processed.
join: Blocks until all items have been processed.
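Before converting the producer/consumer program, here is a tiny sketch (not from the article) showing how put(), get(), task_done() and join() fit together: join() only returns once task_done() has been called for every item that was put:

import threading
import Queue

queue = Queue.Queue()

def worker():
    while True:
        item = queue.get()      # blocks until an item is available
        print 'processing %d' % item
        queue.task_done()       # one task_done() call per item

t = threading.Thread(target=worker)
t.setDaemon(True)               # daemon thread: the program can exit while it loops
t.start()

for i in range(3):
    queue.put(i)
queue.join()                    # blocks until the 3 items are processed
print 'all items processed'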
Let's convert our producer/consumer program to use a queue. The source code can be found
in threads/queue.py.
First the producer class. We don't need to pass the integers list anymore because we use the
queue to store the generated integers. The thread generates and puts the integers into the
queue in an infinite loop.
Next is our consumer class. The thread gets an integer from the queue and indicates that it is
done working on it using task_done().
Here is the output of the program.
The Queue module takes care of locking for us, which is a great advantage. It is interesting to
look at the Python internals to understand how the locking mechanism works underneath.
The Queue constructor creates a lock to protect the queue when an element is added or
removed. Some condition objects are created to signal events like: the queue is not empty
(a get() call stops blocking), the queue is not full (a put() call stops blocking) and all items have
been processed (a join() call stops blocking).
The put() method adds an item to the queue, or waits if the queue is full. It notifies the threads
blocked on get() that the queue is not empty. See the Condition section above for more details.
01 class Producer(threading.Thread):
02 """
03 Produces random integers into a queue
04 """
05
06 def __init__(self, queue):
07 """
08 Constructor.
09
10 @param queue queue synchronization object
11 """
12 threading.Thread.__init__(self)
13 self.queue = queue
14
15 def run(self):
16 """
17 Thread run method. Put random integers into the queue at
18 random intervals.
19 """
20 while True:
21 integer = random.randint(0, 256)
22 self.queue.put(integer)
23 print '%d put to queue by %s' % (integer, self.name)
24 time.sleep(1)
01 class Consumer(threading.Thread):
02 """
03 Consumes random integers from a queue
04 """
05
06 def __init__(self, queue):
07 """
08 Constructor.
09
10 @param queue queue synchronization object
11 """
12 threading.Thread.__init__(self)
13 self.queue = queue
14
15 def run(self):
16 """
17 Thread run method. Consumes integers from the queue
18 """
19 while True:
20 integer = self.queue.get()
21 print '%d popped from list by %s' % (integer, self.name)
22 self.queue.task_done()
1 $ python queue.py
2 61 put to queue by Thread-1
3 61 popped from list by Thread-2
4 6 put to queue by Thread-1
5 6 popped from list by Thread-2
1 class Queue:
2 def __init__(self, maxsize=0):
3 ...
4 self.mutex = threading.Lock()
5 self.not_empty = threading.Condition(self.mutex)
6 self.not_full = threading.Condition(self.mutex)
7 self.all_tasks_done = threading.Condition(self.mutex)
8 self.unfinished_tasks = 0
01 def put(self, item, block=True, timeout=None):
02 ...
03 self.not_full.acquire()
04 try:
05 if self.maxsize > 0:
06 ...
07 elif timeout is None:
08 while self._qsize() == self.maxsize:
09 self.not_full.wait()
10 self._put(item)
11 self.unfinished_tasks += 1
12 self.not_empty.notify()
13 finally:
14 self.not_full.release()
The get() method removes an element from the queue, or waits if the queue is empty. It
notifies the threads blocked on put() that the queue is not full.
When task_done() is called, the number of unfinished tasks is decremented. If the
counter reaches 0, the threads waiting on the queue's join() method continue their
execution.
01 def get(self, block=True, timeout=None):
02 ...
03 self.not_empty.acquire()
04 try:
05 ...
06 elif timeout is None:
07 while not self._qsize():
08 self.not_empty.wait()
09 item = self._get()
10 self.not_full.notify()
11 return item
12 finally:
13 self.not_empty.release()
01 def task_done(self):
02 self.all_tasks_done.acquire()
03 try:
04 unfinished = self.unfinished_tasks - 1
05 if unfinished <= 0:
06 if unfinished < 0:
07 raise ValueError('task_done() called too many times')
08 self.all_tasks_done.notify_all()
09 self.unfinished_tasks = unfinished
10 finally:
11 self.all_tasks_done.release()
12
13 def join(self):
14 self.all_tasks_done.acquire()
15 try:
16 while self.unfinished_tasks:
17 self.all_tasks_done.wait()
18 finally:
19 self.all_tasks_done.release()
That's it for now. I hope you enjoyed this article. Please write a comment if you have any
feedback. If you need help with a project written in Python or with building a new web
service, I am available as a freelancer: LinkedIn profile. Follow me on Twitter @laurentluce.
tags: Python
posted in Uncategorized by Laurent Luce