The `concurrent.futures` module is part of the standard library and provides a high-level API for launching async tasks. We will discuss and go through code samples for the common usages of this module.

This module features the `Executor` class, which is an abstract class and can not be used directly. However, it has two very useful concrete subclasses – `ThreadPoolExecutor` and `ProcessPoolExecutor`. As their names suggest, one uses multithreading and the other uses multiprocessing. In both cases, we get a pool of threads or processes and we can submit tasks to this pool. The pool assigns tasks to the available resources (threads or processes) and schedules them to run.

ThreadPoolExecutor

```python
from concurrent.futures import ThreadPoolExecutor
from time import sleep

def return_after_5_secs(message):
    sleep(5)
    return message

pool = ThreadPoolExecutor(3)

future = pool.submit(return_after_5_secs, "hello")
print(future.done())    # False – the task is still sleeping
sleep(5)
print(future.done())    # True – the task has had time to finish
print(future.result())  # "hello"
```

I hope the code is pretty self-explanatory. We first construct a `ThreadPoolExecutor` with the number of threads we want in the pool. If we don't pass a number, a default based on the machine's CPU count is used, but we chose 3 just because we can ;-). Then we submit a task to the thread pool executor which waits 5 seconds before returning the message it gets as its first argument.

When we `submit()` a task, we get back a `Future`. As we can see in the docs, the `Future` object has a method – `done()` – which tells us if the future has resolved, that is, whether a value has been set for that particular future object. When a task finishes (returns a value or is interrupted by an exception), the thread pool executor sets that value on the future object. In our example, the task doesn't complete until 5 seconds have passed, so the first call to `done()` will return `False`. We take a really short nap for 5 secs and then it's done. We can get the result of the future by calling the `result()` method on it.

A good understanding of the `Future` object and knowing its methods would be really beneficial for understanding and doing async programming in Python. So I highly recommend taking the time to read through the docs.

ProcessPoolExecutor

The process pool executor has a very similar API. So let's modify our previous example and use a process pool instead:

```python
from concurrent.futures import ProcessPoolExecutor
from time import sleep

def return_after_5_secs(message):
    sleep(5)
    return message

# Worker processes re-import this module, so on platforms that
# spawn new processes the pool must be created under this guard.
if __name__ == "__main__":
    pool = ProcessPoolExecutor(3)

    future = pool.submit(return_after_5_secs, "hello")
    print(future.done())    # False
    sleep(5)
    print(future.done())
    print(future.result())  # "hello"
```

It works perfectly! But of course, we would want to use the `ProcessPoolExecutor` for CPU-intensive tasks; the `ThreadPoolExecutor` is better suited for network operations or I/O. While the API is similar, we must remember that the `ProcessPoolExecutor` uses the `multiprocessing` module and is not affected by the Global Interpreter Lock. However, we can not use any object that is not picklable, so we need to carefully choose what we use/return inside the callable passed to the process pool executor.

A note on metaprogramming and code generation

The flexibility of Python's data model generally means that in a lot of cases where you'd want a macro in another language, or costly reflection, you can just dynamically create a function or class (with `type()`), dynamically change things to be properties (by calling the `property` decorator as a function), mess with how `isinstance` and `issubclass` work if you want interface enforcement, etc. I do agree macros could reduce boilerplate, but they add the extra complexity that someone has to understand AST transformations instead of just regular old Python-style dynamic code.

Stepping back, though, whether with macros or metaprogramming in the dynamic data model, the biggest rule of thumb is: don't do this stuff. Don't solve your problem (like arg parsing in the Rust example) with code gen. Always just write a bit of unmysterious extra boilerplate code that your colleagues of mixed skill will thank you for. That verbose thing that is easier for a wide audience to quickly grok, despite uglier code with more boilerplate, is great software, with mature design decision making. It will live a lot longer and be way easier to incrementally modify than the esoteric code-gen approach.
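As a quick illustration of that dynamic data model, here is a small sketch of building a class at runtime with `type()` and applying `property()` as a plain function. All names in it (`Point`, `scaled`) are made up for illustration, not taken from any library:

```python
# Sketch: creating a class dynamically instead of using code generation.
# type(name, bases, namespace) builds a new class object at runtime.

def init(self, x, y):
    self.x = x
    self.y = y

def scaled(self):
    """Computed attribute: the coordinates doubled."""
    return (self.x * 2, self.y * 2)

Point = type(
    "Point",          # class name
    (object,),        # base classes
    {
        "__init__": init,
        # property() called as a plain function, not as a decorator.
        "scaled": property(scaled),
    },
)

p = Point(1, 2)
print(p.scaled)  # (2, 4)
```

This is exactly the kind of trick the comment above warns about: handy, but a plain `class` statement is almost always easier for colleagues to read.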
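Returning to `concurrent.futures`: besides `submit()`, both executor classes also provide a `map()` method, which schedules one task per item of an iterable and yields the results in input order. A minimal sketch, with a made-up worker function standing in for a real task:

```python
from concurrent.futures import ThreadPoolExecutor

def square(n):
    # Trivial stand-in for a real task (e.g. an I/O-bound call).
    return n * n

# Using the executor as a context manager shuts the pool down
# cleanly; map() blocks as needed and preserves input order.
with ThreadPoolExecutor(max_workers=3) as pool:
    results = list(pool.map(square, [1, 2, 3, 4]))

print(results)  # [1, 4, 9, 16]
```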