https://learning.oreilly.com/library/view/using-asyncio-in/9781492075325/ch01.html
Chapter 1:
Threading, as a programming model, is best suited to computational tasks that require multiple CPUs and shared memory for efficient communication between threads. In such tasks, multicore processing with shared memory is a necessary evil because the problem domain demands it. For example, matrix multiplication across multiple cores with shared memory, as done by NumPy.
But network programming is not one of those domains. The key insight is that network programming involves a great deal of "waiting for things to happen," and because of this, we don't need the operating system to efficiently distribute our tasks over multiple CPUs. Furthermore, we don't need the risks that preemptive multitasking brings, such as race conditions when working with shared memory. This is where asyncio helps. Operating systems typically limit the number of threads a process can create. But since asyncio is a single-threaded model, it offers a simple way to support many thousands of simultaneous socket connections, including many long-lived connections for newer technologies like WebSockets or MQTT.
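A minimal sketch (not from the book) of why this scales: thousands of concurrent "connections," each simulated here by a coroutine that just waits, can all be serviced by a single thread because their waits overlap on the event loop.

```python
import asyncio
import time

async def handle_connection(i: int) -> int:
    # Stands in for waiting on a socket read/write.
    await asyncio.sleep(0.1)
    return i

async def main() -> float:
    start = time.perf_counter()
    # 10,000 concurrent waiters, one OS thread.
    results = await asyncio.gather(*(handle_connection(i) for i in range(10_000)))
    assert len(results) == 10_000
    return time.perf_counter() - start

elapsed = asyncio.run(main())
# All 10,000 waits overlap, so the total wall time is on the order of
# 0.1 s rather than 10,000 * 0.1 s.
```

Doing the same with one OS thread per connection would hit thread limits and context-switching costs long before 10,000.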
Chapter 2:
Drawbacks of threads:
1. Resource-intensive: Threads require extra operating system resources to create, such as preallocated, per-thread stack space that consumes process virtual memory up front.
2. Threading can affect throughput: At very high concurrency levels (say, more than 5,000 threads), there can also be an impact on throughput due to context-switching costs, assuming you can figure out how to configure your operating system to even allow you to create that many threads!
3. Threading is inflexible: The operating system will continually share CPU time with all threads, regardless of whether a thread is ready to do work or not.
https://github.com/dharm0us/python_threading_cutlery_threadbots
Chapter 3:
Asyncio provides another tool for concurrent programming in Python, one that is more lightweight than threads or multiprocessing. In a very simple sense, it works by having an event loop execute a collection of tasks, with the key difference that each task chooses when to yield control back to the event loop.
Quickstart:
You only need to know about seven functions to use asyncio for everyday use.
They cover:
1. Starting the asyncio event loop
2. Calling async/await functions
3. Creating a task to be run on the loop
4. Waiting for multiple tasks to complete
5. Closing the loop after all concurrent tasks have completed
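The five steps above can be sketched in one short program (a minimal example, not the book's own code; the worker names and delays are made up):

```python
import asyncio

async def work(name: str, delay: float) -> str:
    await asyncio.sleep(delay)   # 2. calling an async/await function
    return name

async def main() -> list:
    # 3. creating tasks to be run on the loop
    t1 = asyncio.create_task(work("a", 0.02))
    t2 = asyncio.create_task(work("b", 0.01))
    # 4. waiting for multiple tasks to complete
    return await asyncio.gather(t1, t2)

# 1. starting the event loop; 5. it is closed for us when main() completes
results = asyncio.run(main())
print(results)  # ['a', 'b'] — gather preserves argument order, not finish order
```

Note that `asyncio.run()` handles both starting and closing the loop, which is why modern code rarely touches the loop object directly.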
asyncio provides both a loop specification, AbstractEventLoop, and an implementation, BaseEventLoop. The clear separation between specification and implementation makes it possible for third-party developers to make alternative implementations of the event loop, and this has already happened with the uvloop project, which provides a much faster loop implementation than the one in the asyncio standard library module.
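You can observe this spec/implementation split directly: whatever concrete loop is running (the stdlib's, or a third-party one like uvloop's), it presents itself as an AbstractEventLoop. A small sketch:

```python
import asyncio

async def main() -> str:
    loop = asyncio.get_running_loop()
    # Whatever the concrete implementation is, it satisfies the spec.
    assert isinstance(loop, asyncio.AbstractEventLoop)
    return type(loop).__name__

name = asyncio.run(main())
print(name)  # e.g. '_UnixSelectorEventLoop' on Linux; platform-dependent
```

Swapping in uvloop (via its `uvloop.install()` helper) changes the concrete class here without changing any of your coroutine code.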
Task is a subclass of Future, but they could easily be considered to be in the same tier. A Future instance represents some sort of ongoing action that will return a result via notification on the event loop, while a Task represents a coroutine running on the event loop. The short version is: a future is “loop-aware,” while a task is both “loop-aware” and “coroutine-aware.” As an end-user developer, you will be working with tasks much more than futures, but for a framework designer, the proportion might be the other way around, depending on the details.
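A sketch contrasting the two (illustrative names, not from the book): a Future gets its result set manually by some other code, while a Task drives a coroutine to completion on the loop; and since Task subclasses Future, both can be awaited the same way.

```python
import asyncio

async def produce(fut: asyncio.Future) -> None:
    await asyncio.sleep(0.01)
    fut.set_result("from future")   # waiters are notified via the loop

async def compute() -> str:
    await asyncio.sleep(0.01)
    return "from task"

async def main() -> tuple:
    loop = asyncio.get_running_loop()
    fut = loop.create_future()                 # "loop-aware" only
    task = asyncio.create_task(compute())      # loop- AND coroutine-aware
    producer = asyncio.create_task(produce(fut))
    assert isinstance(task, asyncio.Future)    # Task is a subclass of Future
    results = (await fut, await task)
    await producer
    return results

print(asyncio.run(main()))  # ('from future', 'from task')
```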
Coroutine internals
>>> async def f():
... return 123
...
>>> type(f)
<class 'function'>
>>> import inspect
>>> inspect.iscoroutinefunction(f)
True
>>> coro = f()
>>> type(coro)
<class 'coroutine'>
Coroutine internals: using send() and StopIteration
>>> async def f():
... return 123
>>> coro = f()
>>> try:
... coro.send(None)
... except StopIteration as e:
... print('The answer was:', e.value)
...
The answer was: 123
A coroutine is initiated by “sending” it a None. Internally, this is what the event loop is going to be doing to your precious coroutines; you will never have to do this manually. All the coroutines you make will be executed either with loop.create_task(coro) or await coro. It’s the loop that does the .send(None) behind the scenes.
When the coroutine returns, a special kind of exception is raised, called StopIteration. Note that we can access the return value of the coroutine via the value attribute of the exception itself. Again, you don’t need to know that it works like this: from your point of view, async def functions will simply return a value with the return statement, just like normal functions.
These two points, the send() and the StopIteration, define the start and end of the executing coroutine, respectively.
The New await Keyword
This new keyword await always takes a parameter and will accept only a thing called an awaitable, which is defined as one of these (exclusively!):
A coroutine (i.e., the result of a called async def function)
OR
any object implementing the __await__() special method. That special method must return an iterator.
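The second kind of awaitable is easy to demonstrate. Here is a small sketch (the `Doubled` wrapper is an invented name for illustration): any object whose `__await__()` returns an iterator can appear after `await`.

```python
import asyncio

class Doubled:
    """Wraps a coroutine; awaiting it yields double the coroutine's result."""
    def __init__(self, coro):
        self.coro = coro

    def __await__(self):
        # __await__ must return an iterator; delegating with `yield from`
        # to the wrapped coroutine's own iterator satisfies that.
        result = yield from self.coro.__await__()
        return result * 2

async def f() -> int:
    await asyncio.sleep(0)
    return 21

async def main() -> int:
    return await Doubled(f())   # legal: Doubled implements __await__

print(asyncio.run(main()))  # 42
```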
It is useful to look at how coroutines may be fed exceptions. This is most commonly used for cancellation: when you call task.cancel(), the event loop will internally use coro.throw() to raise asyncio.CancelledError inside your coroutine.
>>> import asyncio
>>> async def f():
... try:
... while True: await asyncio.sleep(0)
... except asyncio.CancelledError:
... print('I was cancelled!')
... else:
... return 111
>>> coro = f()
>>> coro.send(None)
>>> coro.send(None)
>>> coro.throw(asyncio.CancelledError)
I was cancelled!
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
StopIteration
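The same cancellation can be triggered through the public API rather than by calling throw() ourselves: task.cancel() asks the loop to throw CancelledError into the coroutine at its next suspension point. A sketch (structure mirrors the REPL example above; re-raising after cleanup is the conventional response in modern asyncio):

```python
import asyncio

async def f() -> None:
    try:
        while True:
            await asyncio.sleep(0)
    except asyncio.CancelledError:
        print('I was cancelled!')
        raise  # let the cancellation propagate after cleanup

async def main() -> bool:
    task = asyncio.create_task(f())
    await asyncio.sleep(0.01)   # give the task a chance to start running
    task.cancel()               # loop will use coro.throw() internally
    try:
        await task
    except asyncio.CancelledError:
        return True
    return False

print(asyncio.run(main()))  # prints 'I was cancelled!' then True
```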
Event Loop
==================
Further reading/videos:
1. Dave Beazley, “Python Concurrency from the Ground Up: LIVE!”,