diff --git a/dubbo-user-book-en/demos/thread-model.md b/dubbo-user-book-en/demos/thread-model.md
index cee3c63..f77ae11 100644
--- a/dubbo-user-book-en/demos/thread-model.md
+++ b/dubbo-user-book-en/demos/thread-model.md
@@ -32,3 +32,31 @@ Thus, we need different dispatch strategies and different thread pool configurat
 * fixed: A fixed-size thread pool. Threads are created at startup and held forever, never shut down. (default)
 * cached: A cached thread pool. Threads idle for one minute are deleted automatically and recreated when needed.
 * limit: An elastic thread pool whose size can only grow. It never shrinks, in order to avoid the performance problem caused by a traffic spike arriving right after the pool shrinks.
+
+## Thread call graph
+
+Taking the default configuration as an example, the following diagram shows the thread model of a service invocation:
+![dubbo-protocol](../sources/images/thread-model.png)
+
+### Thread pools
+On the provider side, there are three thread pools:
+
+* boss thread pool: owned by netty, containing only one NioEventLoop by default. It accepts connection channels from clients and then registers each channel with one NioEventLoop in the worker thread pool (more precisely, with the Selector owned by that NioEventLoop);
+* worker thread pool: owned by netty; in dubbo it contains "number of CPU cores + 1" NioEventLoops by default (netty's own default is 2 * number of cores). Each NioEventLoop in the worker pool blocks in Selector.select(), waiting for ready events on the channels registered with it, and then handles them accordingly;
+* server thread pool: dubbo's business thread pool on the provider side; by default, a worker thread hands each decoded request message to this pool for processing.
+
+On the consumer side, there are two thread pools:
+
+* worker thread pool: the same as the provider's worker thread pool.
+* client thread pool: dubbo's business thread pool on the consumer side; by default, a worker thread hands each decoded response message to this pool for processing.
+
+### Communication process
+Let's walk through the communication process from the perspective of the thread model, taking a synchronous invocation as an example:
+
+* Before sending a request, the user thread on the consumer side creates a DefaultFuture object and stores it in `Map<Long, DefaultFuture> FUTURES`, keyed by the requestID (note: each requestID uniquely identifies one request, and the responseID of the corresponding Response equals that requestID);
+* It then has netty encode and send the request, and immediately calls DefaultFuture#get, blocking until the response is no longer null;
+* When the netty server on the provider side receives the request, it decodes it and hands it to the server thread pool for processing;
+* When the server thread pool finishes processing, it has netty encode and send the response message back to the consumer side;
+* When the consumer side receives the response, it decodes it and hands it to the client thread pool; the client thread pool retrieves the DefaultFuture object with key = responseID from `Map<Long, DefaultFuture> FUTURES`, fills its response attribute with the response message, and wakes up the blocked user thread on the consumer side;
+* Finally, the consumer obtains the response.
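+
+To map the steps above onto code, here is a minimal, self-contained sketch of the request/future correlation. The class and method names (`RequestFutures`, `send`, `received`) are illustrative only, not dubbo's actual internals; the real DefaultFuture blocks and wakes the user thread with its own lock and condition, but the correlate-by-requestID idea is the same:
+
+```java
+import java.util.Map;
+import java.util.concurrent.CompletableFuture;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.atomic.AtomicLong;
+
+public class RequestFutures {
+
+    private static final AtomicLong ID = new AtomicLong();
+    // requestID -> pending future, playing the role of Map<Long, DefaultFuture> FUTURES.
+    private static final Map<Long, CompletableFuture<Object>> FUTURES = new ConcurrentHashMap<>();
+
+    // Runs on the consumer's user thread: register a future, send, then block.
+    public static Object send(Object request) throws Exception {
+        long requestId = ID.incrementAndGet();
+        CompletableFuture<Object> future = new CompletableFuture<>();
+        FUTURES.put(requestId, future);
+        // ... hand (requestId, request) to the IO layer for encoding and sending ...
+        return future.get(); // blocks until received() completes the future
+    }
+
+    // Runs on the client thread pool after a response is decoded:
+    // look up the future by responseID (== requestID) and wake the user thread.
+    public static void received(long responseId, Object response) {
+        CompletableFuture<Object> future = FUTURES.remove(responseId);
+        if (future != null) {
+            future.complete(response);
+        }
+    }
+}
+```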
+
diff --git a/dubbo-user-book-en/sources/images/thread-model.png b/dubbo-user-book-en/sources/images/thread-model.png
new file mode 100644
index 0000000..28d0940
Binary files /dev/null and b/dubbo-user-book-en/sources/images/thread-model.png differ
diff --git a/dubbo-user-book/demos/thread-model.md b/dubbo-user-book/demos/thread-model.md
index ef662e2..6b901c4 100644
--- a/dubbo-user-book/demos/thread-model.md
+++ b/dubbo-user-book/demos/thread-model.md
@@ -15,7 +15,7 @@
 ```
 
-Dispatcher
+## Dispatcher
 
 * `all` All messages are dispatched to the thread pool, including requests, responses, connect events, disconnect events, heartbeats, etc.
 * `direct` No messages are dispatched to the thread pool; all of them are executed directly on the IO thread.
@@ -23,9 +23,36 @@
 * `execution` Only request messages are dispatched to the thread pool; responses and other messages such as connect/disconnect events and heartbeats are executed directly on the IO thread.
 * `connection` On the IO thread, disconnect events are put into a queue and executed one by one in order; all other messages are dispatched to the thread pool.
 
-ThreadPool
+## ThreadPool
 
 * `fixed` A fixed-size thread pool. Threads are created at startup and held forever, never shut down. (default)
 * `cached` A cached thread pool. Threads idle for one minute are deleted automatically and recreated when needed.
 * `limited` An elastic thread pool, but the number of threads in the pool can only grow, never shrink. It never shrinks in order to avoid the performance problem caused by a traffic spike arriving right after the pool shrinks.
-* `eager` Creates `Worker` threads eagerly. When the number of tasks is greater than `corePoolSize` but less than `maximumPoolSize`, new `Worker` threads are created first to handle tasks. When the number of tasks exceeds `maximumPoolSize`, tasks are put into the blocking queue, and a `RejectedExecutionException` is thrown once the blocking queue is full. (Compared with `cached`: when the number of tasks exceeds `maximumPoolSize`, `cached` throws an exception directly instead of putting tasks into the blocking queue.)
\ No newline at end of file
+* `eager` Creates `Worker` threads eagerly. When the number of tasks is greater than `corePoolSize` but less than `maximumPoolSize`, new `Worker` threads are created first to handle tasks. When the number of tasks exceeds `maximumPoolSize`, tasks are put into the blocking queue, and a `RejectedExecutionException` is thrown once the blocking queue is full. (Compared with `cached`: when the number of tasks exceeds `maximumPoolSize`, `cached` throws an exception directly instead of putting tasks into the blocking queue.)
+
+## Thread call graph
+
+Taking the default configuration as an example, the following diagram shows the thread model of a service invocation:
+![dubbo-protocol](../sources/images/thread-model.png)
+
+### Thread pools
+On the provider side, there are three thread pools:
+
+* boss thread pool: owned by netty, containing only one NioEventLoop by default. It accepts connection channels from clients and then registers each channel with one NioEventLoop in the worker thread pool (more precisely, with the Selector owned by that NioEventLoop);
+* worker thread pool: owned by netty; in dubbo it contains "number of CPU cores + 1" NioEventLoops by default (netty's own default is 2 * number of cores). Each NioEventLoop in the worker pool blocks in Selector.select(), waiting for ready events on the channels registered with it, and then handles them accordingly;
+* server thread pool: dubbo's business thread pool on the provider side; by default, a worker thread hands each decoded request message to this pool for processing.
+
+On the consumer side, there are two thread pools:
+
+* worker thread pool: the same as the provider's worker thread pool.
+* client thread pool: dubbo's business thread pool on the consumer side; by default, a worker thread hands each decoded response message to this pool for processing.
+
+### Communication process
+Let's look at the communication process from the perspective of the thread model, taking a synchronous invocation as an example:
+
+* Before sending a request, the user thread on the consumer side creates a DefaultFuture object and stores it in `Map<Long, DefaultFuture> FUTURES`, keyed by the requestID (note: each requestID uniquely identifies one request, and the responseID of the corresponding Response equals that requestID);
+* It then has netty encode and send the request, and immediately calls DefaultFuture#get, blocking until the response is no longer null;
+* When the netty server on the provider side receives the request, it decodes it and hands it to the server thread pool for processing;
+* When the server thread pool finishes processing, it has netty encode and send the response message back to the consumer side;
+* When the consumer side receives the response, it decodes it and hands it to the client thread pool; the client thread pool retrieves the DefaultFuture object with key = responseID from `Map<Long, DefaultFuture> FUTURES`, fills its response attribute with the response message, and wakes up the blocked user thread on the consumer side;
+* Finally, the consumer obtains the response.
diff --git a/dubbo-user-book/sources/images/thread-model.png b/dubbo-user-book/sources/images/thread-model.png
new file mode 100644
index 0000000..28d0940
Binary files /dev/null and b/dubbo-user-book/sources/images/thread-model.png differ
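A side note on the `eager` entry in the hunk above: the snippet below is a minimal, self-contained sketch of its "create threads before queueing" behavior on top of a plain `ThreadPoolExecutor`. The class names (`EagerTaskQueue`, `EagerPoolDemo`) are illustrative; this is not dubbo's actual `EagerThreadPoolExecutor` implementation.

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// A queue that reports "full" while the pool can still grow, so the executor
// creates a new worker thread instead of queueing the task.
class EagerTaskQueue extends LinkedBlockingQueue<Runnable> {
    private ThreadPoolExecutor executor; // set right after the executor is built

    EagerTaskQueue(int capacity) { super(capacity); }

    void setExecutor(ThreadPoolExecutor executor) { this.executor = executor; }

    @Override
    public boolean offer(Runnable task) {
        // Below maximumPoolSize: refuse to queue, which makes the executor
        // spawn another worker ("create threads first, queue later").
        if (executor.getPoolSize() < executor.getMaximumPoolSize()) {
            return false;
        }
        // At maximumPoolSize: queue the task; once the queue is full, the
        // executor ends up throwing RejectedExecutionException.
        return super.offer(task);
    }

    // Used by the rejection handler: offer() may have returned false just as
    // the pool reached its maximum, so try the queue one more time.
    boolean retryOffer(Runnable task) {
        return super.offer(task);
    }
}

public class EagerPoolDemo {
    public static void main(String[] args) {
        EagerTaskQueue queue = new EagerTaskQueue(1024);
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 8, 60, TimeUnit.SECONDS, queue,
                (task, executor) -> {
                    if (!queue.retryOffer(task)) {
                        throw new RejectedExecutionException("thread pool and queue are both full");
                    }
                });
        queue.setExecutor(pool);
        for (int i = 0; i < 16; i++) {
            int id = i;
            pool.execute(() ->
                System.out.println("task " + id + " on " + Thread.currentThread().getName()));
        }
        pool.shutdown();
    }
}
```

The trick is in `offer()`: by refusing to queue while the pool can still grow, it inverts `ThreadPoolExecutor`'s default "queue first, then grow" order, which is exactly the difference from `cached` described in the bullet above.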