These notes should be read together with the following articles; what follows is my own record of working through them.
https://blog.csdn.net/u011564172/article/details/62043236
https://blog.csdn.net/u011564172/article/details/60875013
https://blog.csdn.net/u011564172/article/details/60143168
https://blog.csdn.net/u011564172/article/details/59113617
In Master's main method, RpcEnv.create is called and returns a NettyRpcEnv instance (NettyRpcEnv extends RpcEnv). The create method ultimately starts the Netty server (see "Spark RPC之Netty啟動(dòng)" for details); the flow is shown in the figure below:
After RpcEnv.create returns the NettyRpcEnv instance, setupEndpoint is called on it:
val rpcEnv = RpcEnv.create(SYSTEM_NAME, host, port, conf, securityMgr)
val masterEndpoint = rpcEnv.setupEndpoint(ENDPOINT_NAME,
  new Master(rpcEnv, rpcEnv.address, webUiPort, securityMgr, conf))
This actually delegates to Dispatcher's registerRpcEndpoint method:
// code in NettyRpcEnv.scala
override def setupEndpoint(name: String, endpoint: RpcEndpoint): RpcEndpointRef = {
  dispatcher.registerRpcEndpoint(name, endpoint)
}
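For context, here is roughly what registerRpcEndpoint does, as a simplified sketch of the Spark 2.x Dispatcher source with error handling trimmed: it wraps the endpoint in an EndpointData (whose Inbox immediately queues an OnStart message) and returns a NettyRpcEndpointRef that callers use to talk to the endpoint.
// Simplified sketch of Dispatcher.registerRpcEndpoint (Spark 2.x, error handling trimmed).
def registerRpcEndpoint(name: String, endpoint: RpcEndpoint): NettyRpcEndpointRef = {
  val addr = RpcEndpointAddress(nettyEnv.address, name)
  val endpointRef = new NettyRpcEndpointRef(nettyEnv.conf, addr, nettyEnv)
  synchronized {
    // EndpointData creates the endpoint's Inbox, which immediately enqueues OnStart.
    if (endpoints.putIfAbsent(name, new EndpointData(name, endpoint, endpointRef)) != null) {
      throw new IllegalArgumentException(s"There is already an RpcEndpoint called $name")
    }
    val data = endpoints.get(name)
    endpointRefs.put(data.endpoint, data.ref)
    receivers.offer(data) // mark this endpoint's Inbox as having pending work
  }
  endpointRef
}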
NettyRpcEnv.scala also creates a TransportContext:
private val transportContext = new TransportContext(transportConf,
  new NettyRpcHandler(dispatcher, this, streamManager))
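This transportContext is what later starts the Netty server. A rough sketch of NettyRpcEnv.startServer, based on the Spark 2.x source with the security bootstrap setup omitted:
// Sketch of NettyRpcEnv.startServer (security bootstraps omitted).
def startServer(port: Int): Unit = {
  val bootstraps = java.util.Collections.emptyList[TransportServerBootstrap]
  server = transportContext.createServer(port, bootstraps)
  // A built-in endpoint that lets remote clients check whether a named endpoint exists.
  dispatcher.registerRpcEndpoint(
    RpcEndpointVerifier.NAME, new RpcEndpointVerifier(this, dispatcher))
}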
A NettyRpcHandler is created here and passed into the TransportContext constructor. NettyRpcHandler extends RpcHandler; part of the class looks like this:
private[netty] class NettyRpcHandler(
    dispatcher: Dispatcher,
    nettyEnv: NettyRpcEnv,
    streamManager: StreamManager) extends RpcHandler with Logging {

  override def receive(
      client: TransportClient,
      message: ByteBuffer,
      callback: RpcResponseCallback): Unit = {
    val messageToDispatch = internalReceive(client, message)
    dispatcher.postRemoteMessage(messageToDispatch, callback)
  }

  override def receive(
      client: TransportClient,
      message: ByteBuffer): Unit = {
    val messageToDispatch = internalReceive(client, message)
    dispatcher.postOneWayMessage(messageToDispatch)
  }
}
There are two overridden receive methods. We know receive is used to accept RPC messages sent from the remote side; both versions ultimately call Dispatcher's postMessage (via postRemoteMessage and postOneWayMessage respectively).
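Concretely, the Dispatcher side looks roughly like this (simplified from the Spark 2.x source, error handling trimmed). Both posts funnel into the private postMessage, which enqueues the message into the target endpoint's Inbox:
// Simplified sketch of Dispatcher.postRemoteMessage / postOneWayMessage (Spark 2.x).
def postRemoteMessage(message: RequestMessage, callback: RpcResponseCallback): Unit = {
  // The callback is how the endpoint's reply travels back to the remote client.
  val rpcCallContext =
    new RemoteNettyRpcCallContext(nettyEnv, callback, message.senderAddress)
  val rpcMessage = RpcMessage(message.senderAddress, message.content, rpcCallContext)
  postMessage(message.receiver.name, rpcMessage, e => callback.onFailure(e))
}

def postOneWayMessage(message: RequestMessage): Unit = {
  // One-way messages carry no callback: no reply is expected.
  postMessage(message.receiver.name,
    OneWayMessage(message.senderAddress, message.content), e => throw e)
}

private def postMessage(
    endpointName: String,
    message: InboxMessage,
    callbackIfStopped: Exception => Unit): Unit = synchronized {
  val data = endpoints.get(endpointName)
  data.inbox.post(message) // enqueue into the target endpoint's Inbox
  receivers.offer(data)    // wake a MessageLoop thread to drain the Inbox
}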
So where is receive ultimately called from? It is invoked through the rpcHandler member of TransportRequestHandler.
TransportRequestHandler's rpcHandler member holds a reference to the NettyRpcHandler. Let's trace, step by step, how the NettyRpcHandler gets passed into TransportRequestHandler's rpcHandler:
First, TransportContext's rpcHandler member holds the NettyRpcHandler reference:
public TransportContext(TransportConf conf, RpcHandler rpcHandler) {
  this(conf, rpcHandler, false);
}

public TransportContext(...RpcHandler rpcHandler) {
  ...
  this.rpcHandler = rpcHandler;
}
TransportContext then passes rpcHandler to TransportServer:
public TransportServer createServer(int port, List<TransportServerBootstrap> bootstraps) {
  return new TransportServer(this, null, port, rpcHandler, bootstraps);
}
TransportServer's appRpcHandler member now holds the NettyRpcHandler reference:
public TransportServer(...RpcHandler appRpcHandler) {
  ...
  this.appRpcHandler = appRpcHandler;
}
In TransportServer's init method, appRpcHandler (possibly wrapped by the server bootstraps) is passed to TransportContext's initializePipeline method:
private void init(String hostToBind, int portToBind) {
  ...
  context.initializePipeline(ch, rpcHandler);
}
Now look at TransportContext's initializePipeline method:
public TransportChannelHandler initializePipeline(SocketChannel channel, RpcHandler channelRpcHandler) {
  ...
  TransportChannelHandler channelHandler = createChannelHandler(channel, channelRpcHandler);
  // Add the TransportChannelHandler to the pipeline.
  channel.pipeline()
    .addLast("encoder", ENCODER)
    .addLast(TransportFrameDecoder.HANDLER_NAME, NettyUtils.createFrameDecoder())
    .addLast("decoder", DECODER)
    .addLast("idleStateHandler", new IdleStateHandler(0, 0, conf.connectionTimeoutMs() / 1000))
    // NOTE: Chunks are currently guaranteed to be returned in the order of request, but this
    // would require more logic to guarantee if this were not part of the same event loop.
    .addLast("handler", channelHandler);
  return channelHandler;
}
initializePipeline creates the TransportChannelHandler and returns it. Note that "encoder" is an outbound handler, while the frame decoder and "decoder" are inbound handlers, so inbound bytes pass through the frame decoder and decoder before reaching channelHandler.
Here is the createChannelHandler method:
private TransportChannelHandler createChannelHandler(...RpcHandler rpcHandler) {
  TransportRequestHandler requestHandler = new TransportRequestHandler(channel, client,
    rpcHandler);
  return new TransportChannelHandler(client, responseHandler, requestHandler,
    conf.connectionTimeoutMs(), closeIdleConnections);
}
So ultimately, TransportRequestHandler's rpcHandler member holds the NettyRpcHandler reference.
Now let's look at where TransportRequestHandler uses rpcHandler:
private void processRpcRequest(final RpcRequest req) {
  rpcHandler.receive(reverseClient, req.body().nioByteBuffer(), new RpcResponseCallback() {
    @Override
    public void onSuccess(ByteBuffer response) {
      respond(new RpcResponse(req.requestId, new NioManagedBuffer(response)));
    }

    @Override
    public void onFailure(Throwable e) {
      respond(new RpcFailure(req.requestId, Throwables.getStackTraceAsString(e)));
    }
  });
}

private void processOneWayMessage(OneWayMessage req) {
  rpcHandler.receive(reverseClient, req.body().nioByteBuffer());
}
This is where the receive methods overridden in NettyRpcHandler are finally called back, via rpcHandler.receive.
The two methods above are invoked from handle:
@Override
public void handle(RequestMessage request) {
  if (request instanceof ChunkFetchRequest) {
    processFetchRequest((ChunkFetchRequest) request);
  } else if (request instanceof RpcRequest) {
    processRpcRequest((RpcRequest) request);
  } else if (request instanceof OneWayMessage) {
    processOneWayMessage((OneWayMessage) request);
  } else if (request instanceof StreamRequest) {
    processStreamRequest((StreamRequest) request);
  } else {
    throw new IllegalArgumentException("Unknown request type: " + request);
  }
}
The handle method dispatches on the concrete RequestMessage type, which confirms what we described above.
handle itself is called in TransportChannelHandler.java:
@Override
public void channelRead(ChannelHandlerContext ctx, Object request) throws Exception {
  if (request instanceof RequestMessage) {
    requestHandler.handle((RequestMessage) request);
  } else if (request instanceof ResponseMessage) {
    responseHandler.handle((ResponseMessage) request);
  } else {
    ctx.fireChannelRead(request);
  }
}
The messages arriving in channelRead are the ones the client sent over RPC.
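To tie this together, here is a rough sketch of the client side that produces such an RpcRequest or OneWayMessage. The endpoint name "echo", hosts, and ports are made up for illustration; RpcEnv.create, setupEndpointRef, send, and ask are the real RpcEnv APIs (askSync is Spark 2.1+; earlier versions use askWithRetry):
// Hypothetical client sketch; "echo", hosts, and ports are illustrative only.
val clientEnv = RpcEnv.create("client", "localhost", 0, conf, securityMgr, clientMode = true)
// Look up a remote endpoint by address and name (checked via RpcEndpointVerifier).
val ref: RpcEndpointRef = clientEnv.setupEndpointRef(RpcAddress("master-host", 7077), "echo")
ref.send("fire-and-forget")             // arrives as OneWayMessage -> postOneWayMessage
val reply = ref.askSync[String]("ping") // arrives as RpcRequest -> postRemoteMessage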
Handling the client's RpcRequest
When RpcEndpointRef and RpcEndpoint are on different machines
Step 3 in the figure above is a simplified flow; that simplified flow is exactly what we analyzed above.
When they are not on the same machine, Netty is required. The rough steps are:
- As described in "Spark RPC之Netty啟動(dòng)", the Netty server is started when the RpcEnv is created, and the TransportChannelHandler is added to the pipeline.
- As shown in the figure above, TransportChannelHandler processes the data Netty receives and hands it in turn to TransportRequestHandler and NettyRpcHandler.
- Finally the message is handed to the Dispatcher and Inbox; see "Spark RPC之Dispatcher、Inbox、Outbox". The Dispatcher flow is shown in the figure, and sketched in code after this list:
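As a reference for that flow, here is a simplified sketch of how the Dispatcher drains Inboxes, based on the Spark 2.x source with shutdown handling trimmed:
// Simplified sketch of Dispatcher's MessageLoop (Spark 2.x, shutdown handling trimmed).
private class MessageLoop extends Runnable {
  override def run(): Unit = {
    while (true) {
      val data = receivers.take()         // an EndpointData with pending messages
      data.inbox.process(Dispatcher.this) // drain its Inbox on this thread
    }
  }
}
// Inside Inbox.process, each InboxMessage is routed to the endpoint, roughly:
//   case RpcMessage(sender, content, context) => endpoint.receiveAndReply(context)(...)
//   case OneWayMessage(sender, content)      => endpoint.receive(...)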
When RpcEndpointRef and RpcEndpoint are on the same machine
When they are on the same machine, Netty is not needed: the RpcEndpoint is accessed directly, and as the figure above shows, the message is still handled by the Dispatcher and Inbox.
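The local/remote split can be seen in NettyRpcEnv.send (a simplified sketch of the Spark 2.x source, exception handling trimmed):
// Simplified sketch of NettyRpcEnv.send (Spark 2.x, exception handling trimmed).
private[netty] def send(message: RequestMessage): Unit = {
  val remoteAddr = message.receiver.address
  if (remoteAddr == address) {
    // Same RpcEnv: bypass Netty and post straight to the Dispatcher.
    dispatcher.postOneWayMessage(message)
  } else {
    // Remote endpoint: serialize and queue on an Outbox, which sends via Netty.
    postToOutbox(message.receiver, OneWayOutboxMessage(serialize(message)))
  }
}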