Introduction
My previous article on IO models, 《IO 模型知多少 -- 理論篇》 (the theory piece), was fairly abstract, and many readers said it was hard to follow. In this article let's switch perspective and look at IO models from the code's point of view.
Socket programming basics
Before we start, let's go over a few concepts you need to know up front:
socket: the word literally means a plug socket; in computer communication it is the familiar "socket". A socket is a convention, or mechanism, by which computers exchange data: through it, one computer can receive data from other computers and send data to them. Just as plugging into a wall socket gets you electricity from the grid, an application that wants to exchange data with a remote machine needs to connect to the network, and the socket is the tool it uses to make that connection.
You also need to know the basic flow of socket programming: on the server side, socket -> bind -> listen -> accept -> receive/send; on the client side, socket -> connect -> send/receive.
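To make the client half of that flow concrete (later in the article we simply use nc as the client), here is a minimal C# client sketch. It is illustration only, not part of the original samples; it assumes the usual namespaces (System, System.Net, System.Net.Sockets, System.Text) and that one of the echo servers shown below is listening on 127.0.0.1:5001.
public static void StartClient()
{
    // socket -> connect -> send -> receive
    using var clientSocket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
    clientSocket.Connect(new IPEndPoint(IPAddress.Loopback, 5001)); // connect to the server
    clientSocket.Send(Encoding.UTF8.GetBytes("hello"));             // send a request
    var buffer = new byte[512];
    int readLength = clientSocket.Receive(buffer);                  // wait for the echo
    Console.WriteLine(Encoding.UTF8.GetString(buffer, 0, readLength));
}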
Synchronous blocking IO
First, a quick recap of the definition: an IO operation is blocking IO if, from the moment a thread in the application issues the IO call until the kernel finishes the operation and returns the result, the calling thread sits in a waiting state.
public static void Start()
{
//1. Create a TCP socket
var serverSocket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
var ipEndpoint = new IPEndPoint(IPAddress.Loopback, 5001);
//2. Bind the IP address and port
serverSocket.Bind(ipEndpoint);
//3. Start listening, specifying the maximum number of pending connections (the backlog)
serverSocket.Listen(10);
Console.WriteLine($"服務端已啟動({ipEndpoint})-等待連接...");
while (true)
{
//4. Wait for a client to connect
var clientSocket = serverSocket.Accept();//blocks
Console.WriteLine($"{clientSocket.RemoteEndPoint}-已連接");
Span<byte> buffer = new Span<byte>(new byte[512]);
Console.WriteLine($"{clientSocket.RemoteEndPoint}-開始接收數據...");
int readLength = clientSocket.Receive(buffer);//blocks
var msg = Encoding.UTF8.GetString(buffer.ToArray(), 0, readLength);
Console.WriteLine($"{clientSocket.RemoteEndPoint}-接收數據:{msg}");
var sendBuffer = Encoding.UTF8.GetBytes($"received:{msg}");
clientSocket.Send(sendBuffer);
}
}
The code is simple enough that the comments tell the whole story, and the run results are shown in the screenshot above. A few points, however, deserve emphasis:
- At the accept call, serverSocket.Accept(), the thread blocks!
- At the receive call, clientSocket.Receive(buffer), the thread blocks!
What problems does this cause?
- The next connection request can be accepted only after the current read has completed.
- Each connection receives data exactly once.
Synchronous non-blocking IO
Having read that, you might say these two problems are easy to fix: just create a new thread to receive the data. Hence the improved code below.
public static void Start2()
{
//1. Create a TCP socket
var serverSocket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
var ipEndpoint = new IPEndPoint(IPAddress.Loopback, 5001);
//2. Bind the IP address and port
serverSocket.Bind(ipEndpoint);
//3. Start listening, specifying the maximum number of pending connections (the backlog)
serverSocket.Listen(10);
Console.WriteLine($"服務端已啟動({ipEndpoint})-等待連接...");
while (true)
{
//4. Wait for a client to connect
var clientSocket = serverSocket.Accept();//blocks
Task.Run(() => ReceiveData(clientSocket));
}
}
private static void ReceiveData(Socket clientSocket)
{
Console.WriteLine($"{clientSocket.RemoteEndPoint}-已連接");
Span<byte> buffer = new Span<byte>(new byte[512]);
while (true)
{
if (clientSocket.Available == 0) continue;//busy-poll: spin until data is available
Console.WriteLine($"{clientSocket.RemoteEndPoint}-開始接收數據...");
int readLength = clientSocket.Receive(buffer);//blocks
var msg = Encoding.UTF8.GetString(buffer.ToArray(), 0, readLength);
Console.WriteLine($"{clientSocket.RemoteEndPoint}-接收數據:{msg}");
var sendBuffer = Encoding.UTF8.GetBytes($"received:{msg}");
clientSocket.Send(sendBuffer);
}
}
Yes, multithreading solves the problems above. But if you watch the animation, you'll notice something else: with only four client connections established, CPU usage already climbs steeply.
The root of the problem is that the server's IO model is still blocking IO. To work around the blocking we poll in a loop (the clientSocket.Available check), which issues a stream of pointless system calls and keeps driving the CPU up.
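For contrast, here is a minimal sketch (not the article's sample) of what a truly non-blocking receive loop looks like at the socket level, assuming the same namespaces as above plus System.Threading: the socket is put into non-blocking mode and the code polls for SocketError.WouldBlock. Every pass through the loop is still a system call that usually returns nothing, which is exactly the wasted work described above.
private static void ReceiveDataNonBlocking(Socket clientSocket)
{
    clientSocket.Blocking = false; // Receive now returns immediately instead of waiting
    var buffer = new byte[512];
    while (true)
    {
        try
        {
            int readLength = clientSocket.Receive(buffer); // data or an immediate error, never a wait
            if (readLength == 0) return;                   // peer closed the connection
            var msg = Encoding.UTF8.GetString(buffer, 0, readLength);
            clientSocket.Send(Encoding.UTF8.GetBytes($"received:{msg}"));
        }
        catch (SocketException ex) when (ex.SocketErrorCode == SocketError.WouldBlock)
        {
            // No data yet: this branch is hit over and over, and each pass is a wasted syscall.
            Thread.Sleep(1); // without even this small pause, one core would be pegged
        }
    }
}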
IO multiplexing
Now that we know the cause, let's rework the code and use asynchronous calls to handle connections and to receive and send data.
public static class NioServer
{
private static ManualResetEvent _acceptEvent = new ManualResetEvent(true);
private static ManualResetEvent _readEvent = new ManualResetEvent(true);
public static void Start()
{
//1. Create a TCP socket
var serverSocket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
// serverSocket.Blocking = false;//set the socket to non-blocking
var ipEndpoint = new IPEndPoint(IPAddress.Loopback, 5001);
//2. Bind the IP address and port
serverSocket.Bind(ipEndpoint);
//3. Start listening, specifying the maximum number of pending connections (the backlog)
serverSocket.Listen(10);
Console.WriteLine($"服務端已啟動({ipEndpoint})-等待連接...");
while (true)
{
_acceptEvent.Reset();//reset the accept event
serverSocket.BeginAccept(OnClientConnected, serverSocket);
_acceptEvent.WaitOne();//block until the callback signals a new connection
}
}
private static void OnClientConnected(IAsyncResult ar)
{
_acceptEvent.Set();//a client has connected, let the accept loop continue
var serverSocket = ar.AsyncState as Socket;
Debug.Assert(serverSocket != null, nameof(serverSocket) + " != null");
var clientSocket = serverSocket.EndAccept(ar);
Console.WriteLine($"{clientSocket.RemoteEndPoint}-已連接");
while (true)
{
_readEvent.Reset();//reset the read event
var stateObj = new StateObject { ClientSocket = clientSocket };
clientSocket.BeginReceive(stateObj.Buffer, 0, stateObj.Buffer.Length, SocketFlags.None, OnMessageReceived, stateObj);
_readEvent.WaitOne();//block until this round's reply has been sent
}
}
private static void OnMessageReceived(IAsyncResult ar)
{
var state = ar.AsyncState as StateObject;
Debug.Assert(state != null, nameof(state) + " != null");
var receiveLength = state.ClientSocket.EndReceive(ar);
if (receiveLength > 0)
{
var msg = Encoding.UTF8.GetString(state.Buffer, 0, receiveLength);
Console.WriteLine($"{state.ClientSocket.RemoteEndPoint}-接收數據:{msg}");
var sendBuffer = Encoding.UTF8.GetBytes($"received:{msg}");
state.ClientSocket.BeginSend(sendBuffer, 0, sendBuffer.Length, SocketFlags.None,
SendMessage, state.ClientSocket);
}
}
private static void SendMessage(IAsyncResult ar)
{
var clientSocket = ar.AsyncState as Socket;
Debug.Assert(clientSocket != null, nameof(clientSocket) + " != null");
clientSocket.EndSend(ar);
_readEvent.Set(); //send finished, release the read loop
}
}
public class StateObject
{
// Client socket.
public Socket ClientSocket = null;
// Size of receive buffer.
public const int BufferSize = 1024;
// Receive buffer.
public byte[] Buffer = new byte[BufferSize];
}
First, the results: as the figure below shows, apart from a brief CPU spike while connections are being established, CPU usage stays flat and low during message receiving and sending.
Looking at the code, we find:
- CPU usage came down, but code complexity went up.
- Client connections are accepted with the asynchronous pair BeginAccept / EndAccept.
- Data is received with the asynchronous pair BeginReceive / EndReceive.
- Data is sent with the asynchronous pair BeginSend / EndSend.
- ManualResetEvent is used for thread synchronization so that no thread spins idle. (A Task-based sketch of the same server follows this list.)
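As promised above, here is a Task-based sketch of the same echo server (illustration only, not the original sample; it assumes the same namespaces plus System.Threading.Tasks). AcceptAsync/ReceiveAsync/SendAsync replace the Begin/End pairs and the ManualResetEvent plumbing; they are driven by the same underlying async socket engine as the Begin/End calls, which the strace session below shows is epoll-based on Linux.
public static class AsyncServer
{
    public static async Task StartAsync()
    {
        var serverSocket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
        serverSocket.Bind(new IPEndPoint(IPAddress.Loopback, 5001));
        serverSocket.Listen(10);
        while (true)
        {
            var clientSocket = await serverSocket.AcceptAsync(); // no thread sits blocked here
            _ = HandleClientAsync(clientSocket);                 // handle each client concurrently
        }
    }
    private static async Task HandleClientAsync(Socket clientSocket)
    {
        var buffer = new byte[512];
        while (true)
        {
            int readLength = await clientSocket.ReceiveAsync(new ArraySegment<byte>(buffer), SocketFlags.None);
            if (readLength == 0) { clientSocket.Close(); return; } // peer closed the connection
            var msg = Encoding.UTF8.GetString(buffer, 0, readLength);
            await clientSocket.SendAsync(new ArraySegment<byte>(Encoding.UTF8.GetBytes($"received:{msg}")), SocketFlags.None);
        }
    }
}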
You might now wonder: which flavor of IO multiplexing is this, exactly?
Good question. Let's find out.
Verifying the I/O model
To verify which IO model an application uses, we only need to determine which system calls it makes at runtime. On Linux, we can use the strace command to trace the system calls and signals of a given application.
Verifying the system calls made by synchronous blocking I/O
Connect to your Linux machine with VS Code Remote, create a project named Io.Demo, and use the synchronous blocking IO code from the first example above for the test. Start tracing with the following command:
shengjie@ubuntu:~/coding/dotnet$ ls
Io.Demo
shengjie@ubuntu:~/coding/dotnet$ strace -ff -o Io.Demo/strace/io dotnet run --project Io.Demo/
Press any key to start!
服務端已啟動(127.0.0.1:5001)-等待連接...
127.0.0.1:36876-已連接
127.0.0.1:36876-開始接收數據...
127.0.0.1:36876-接收數據:1
In another terminal, run nc localhost 5001 to simulate a client connection.
shengjie@ubuntu:~/coding/dotnet/Io.Demo$ nc localhost 5001
1
received:1
Use the netstat command to inspect the established connections.
shengjie@ubuntu:/proc/3763$ netstat -natp | grep 5001
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
tcp 0 0 127.0.0.1:5001 0.0.0.0:* LISTEN 3763/Io.Demo
tcp 0 0 127.0.0.1:36920 127.0.0.1:5001 ESTABLISHED 3798/nc
tcp 0 0 127.0.0.1:5001 127.0.0.1:36920 ESTABLISHED 3763/Io.Demo
In another terminal, run ps -h | grep dotnet to grab the process ID.
shengjie@ubuntu:~/coding/dotnet/Io.Demo$ ps -h | grep dotnet
3694 pts/1 S+ 0:11 strace -ff -o Io.Demo/strace/io dotnet run --project Io.Demo/
3696 pts/1 Sl+ 0:01 dotnet run --project Io.Demo/
3763 pts/1 Sl+ 0:00 /home/shengjie/coding/dotnet/Io.Demo/bin/Debug/netcoreapp3.0/Io.Demo
3779 pts/2 S+ 0:00 grep --color=auto dotnet
shengjie@ubuntu:~/coding/dotnet$ ls Io.Demo/strace/ # list the generated syscall trace files
io.3696 io.3702 io.3708 io.3714 io.3720 io.3726 io.3732 io.3738 io.3744 io.3750 io.3766 io.3772 io.3782 io.3827
io.3697 io.3703 io.3709 io.3715 io.3721 io.3727 io.3733 io.3739 io.3745 io.3751 io.3767 io.3773 io.3786 io.3828
io.3698 io.3704 io.3710 io.3716 io.3722 io.3728 io.3734 io.3740 io.3746 io.3752 io.3768 io.3774 io.3787
io.3699 io.3705 io.3711 io.3717 io.3723 io.3729 io.3735 io.3741 io.3747 io.3763 io.3769 io.3777 io.3797
io.3700 io.3706 io.3712 io.3718 io.3724 io.3730 io.3736 io.3742 io.3748 io.3764 io.3770 io.3780 io.3799
io.3701 io.3707 io.3713 io.3719 io.3725 io.3731 io.3737 io.3743 io.3749 io.3765 io.3771 io.3781 io.3800
From the above, the process ID is 3763. The following commands show that process's threads and the file descriptors it has open:
shengjie@ubuntu:~/coding/dotnet/Io.Demo$ cd /proc/3763 # enter the process directory
shengjie@ubuntu:/proc/3763$ ls
attr cmdline environ io mem ns pagemap sched smaps_rollup syscall wchan
autogroup comm exe limits mountinfo numa_maps patch_state schedstat stack task
auxv coredump_filter fd loginuid mounts oom_adj personality sessionid stat timers
cgroup cpuset fdinfo map_files mountstats oom_score projid_map setgroups statm timerslack_ns
clear_refs cwd gid_map maps net oom_score_adj root smaps status uid_map
shengjie@ubuntu:/proc/3763$ ll task # list the threads started by this process
total 0
dr-xr-xr-x 9 shengjie shengjie 0 5月 10 16:36 ./
dr-xr-xr-x 9 shengjie shengjie 0 5月 10 16:34 ../
dr-xr-xr-x 7 shengjie shengjie 0 5月 10 16:36 3763/
dr-xr-xr-x 7 shengjie shengjie 0 5月 10 16:36 3765/
dr-xr-xr-x 7 shengjie shengjie 0 5月 10 16:36 3766/
dr-xr-xr-x 7 shengjie shengjie 0 5月 10 16:36 3767/
dr-xr-xr-x 7 shengjie shengjie 0 5月 10 16:36 3768/
dr-xr-xr-x 7 shengjie shengjie 0 5月 10 16:36 3769/
dr-xr-xr-x 7 shengjie shengjie 0 5月 10 16:36 3770/
shengjie@ubuntu:/proc/3763$ ll fd # list the file descriptors this process has open
total 0
dr-x------ 2 shengjie shengjie 0 5月 10 16:36 ./
dr-xr-xr-x 9 shengjie shengjie 0 5月 10 16:34 ../
lrwx------ 1 shengjie shengjie 64 5月 10 16:37 0 -> /dev/pts/1
lrwx------ 1 shengjie shengjie 64 5月 10 16:37 1 -> /dev/pts/1
lrwx------ 1 shengjie shengjie 64 5月 10 16:37 10 -> 'socket:[44292]'
lr-x------ 1 shengjie shengjie 64 5月 10 16:37 100 -> /dev/random
lrwx------ 1 shengjie shengjie 64 5月 10 16:37 11 -> 'socket:[41675]'
lr-x------ 1 shengjie shengjie 64 5月 10 16:37 13 -> 'pipe:[45206]'
l-wx------ 1 shengjie shengjie 64 5月 10 16:37 14 -> 'pipe:[45206]'
lr-x------ 1 shengjie shengjie 64 5月 10 16:37 15 -> /home/shengjie/coding/dotnet/Io.Demo/bin/Debug/netcoreapp3.0/Io.Demo.dll
lr-x------ 1 shengjie shengjie 64 5月 10 16:37 16 -> /home/shengjie/coding/dotnet/Io.Demo/bin/Debug/netcoreapp3.0/Io.Demo.dll
lr-x------ 1 shengjie shengjie 64 5月 10 16:37 17 -> /usr/share/dotnet/shared/Microsoft.NETCore.App/3.0.0/System.Runtime.dll
lr-x------ 1 shengjie shengjie 64 5月 10 16:37 18 -> /usr/share/dotnet/shared/Microsoft.NETCore.App/3.0.0/System.Console.dll
lr-x------ 1 shengjie shengjie 64 5月 10 16:37 19 -> /usr/share/dotnet/shared/Microsoft.NETCore.App/3.0.0/System.Threading.dll
lrwx------ 1 shengjie shengjie 64 5月 10 16:37 2 -> /dev/pts/1
lr-x------ 1 shengjie shengjie 64 5月 10 16:37 20 -> /usr/share/dotnet/shared/Microsoft.NETCore.App/3.0.0/System.Runtime.Extensions.dll
lrwx------ 1 shengjie shengjie 64 5月 10 16:37 21 -> /dev/pts/1
lr-x------ 1 shengjie shengjie 64 5月 10 16:37 22 -> /usr/share/dotnet/shared/Microsoft.NETCore.App/3.0.0/System.Text.Encoding.Extensions.dll
lr-x------ 1 shengjie shengjie 64 5月 10 16:37 23 -> /dev/urandom
lr-x------ 1 shengjie shengjie 64 5月 10 16:37 24 -> /usr/share/dotnet/shared/Microsoft.NETCore.App/3.0.0/System.Net.Sockets.dll
lr-x------ 1 shengjie shengjie 64 5月 10 16:37 25 -> /usr/share/dotnet/shared/Microsoft.NETCore.App/3.0.0/System.Net.Primitives.dll
lr-x------ 1 shengjie shengjie 64 5月 10 16:37 26 -> /usr/share/dotnet/shared/Microsoft.NETCore.App/3.0.0/Microsoft.Win32.Primitives.dll
lr-x------ 1 shengjie shengjie 64 5月 10 16:37 27 -> /usr/share/dotnet/shared/Microsoft.NETCore.App/3.0.0/System.Diagnostics.Tracing.dll
lr-x------ 1 shengjie shengjie 64 5月 10 16:37 28 -> /usr/share/dotnet/shared/Microsoft.NETCore.App/3.0.0/System.Threading.Tasks.dll
lrwx------ 1 shengjie shengjie 64 5月 10 16:37 29 -> 'socket:[43429]'
lr-x------ 1 shengjie shengjie 64 5月 10 16:37 3 -> 'pipe:[42148]'
lr-x------ 1 shengjie shengjie 64 5月 10 16:37 30 -> /usr/share/dotnet/shared/Microsoft.NETCore.App/3.0.0/System.Threading.ThreadPool.dll
lrwx------ 1 shengjie shengjie 64 5月 10 16:37 31 -> 'socket:[42149]'
lr-x------ 1 shengjie shengjie 64 5月 10 16:37 32 -> /usr/share/dotnet/shared/Microsoft.NETCore.App/3.0.0/System.Memory.dll
l-wx------ 1 shengjie shengjie 64 5月 10 16:37 4 -> 'pipe:[42148]'
lr-x------ 1 shengjie shengjie 64 5月 10 16:37 42 -> /dev/urandom
lrwx------ 1 shengjie shengjie 64 5月 10 16:37 5 -> /dev/pts/1
lrwx------ 1 shengjie shengjie 64 5月 10 16:37 6 -> /dev/pts/1
lrwx------ 1 shengjie shengjie 64 5月 10 16:37 7 -> /dev/pts/1
lr-x------ 1 shengjie shengjie 64 5月 10 16:37 9 -> /usr/share/dotnet/shared/Microsoft.NETCore.App/3.0.0/System.Private.CoreLib.dll
lr-x------ 1 shengjie shengjie 64 5月 10 16:37 99 -> /dev/urandom
From this output, the .NET Core console app started multiple threads and opened sockets on file descriptors 10, 11, 29 and 31. So which descriptor is listening on port 5001?
shengjie@ubuntu:~/coding/dotnet/Io.Demo$ cat /proc/net/tcp | grep 1389 # TCP entries related to port 5001 (0x1389 is 5001 in hex)
4: 0100007F:1389 00000000:0000 0A 00000000:00000000 00:00000000 00000000 1000 0 43429 1 0000000000000000 100 0 0 10 0
12: 0100007F:9038 0100007F:1389 01 00000000:00000000 00:00000000 00000000 1000 0 44343 1 0000000000000000 20 4 30 10 -1
13: 0100007F:1389 0100007F:9038 01 00000000:00000000 00:00000000 00000000 1000 0 42149 1 0000000000000000 20 4 29 10 -1
We can see that the socket with inode 43429 is listening on port 5001. That matches the line lrwx------ 1 shengjie shengjie 64 5月 10 16:37 29 -> 'socket:[43429]' in the output above, so the socket listening on port 5001 corresponds to file descriptor 29.
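(If you would rather verify the hex/decimal port mapping than trust the comment, a two-line C# check, purely illustrative, does it:)
Console.WriteLine(0x1389);             // prints 5001: the hex port shown in /proc/net/tcp
Console.WriteLine(5001.ToString("X")); // prints 1389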
Of course, we can also find clues in the log files written to the strace directory. As mentioned earlier, server-side socket programming generally goes through the socket -> bind -> listen -> accept -> read -> write sequence, so we can grep for those keywords to locate the relevant system calls.
shengjie@ubuntu:~/coding/dotnet/Io.Demo$ grep 'bind' strace/ -rn
strace/io.3696:4570:bind(10, {sa_family=AF_UNIX, sun_path="/tmp/dotnet-diagnostic-3696-327175-socket"}, 110) = 0
strace/io.3763:2241:bind(11, {sa_family=AF_UNIX, sun_path="/tmp/dotnet-diagnostic-3763-328365-socket"}, 110) = 0
strace/io.3763:2949:bind(29, {sa_family=AF_INET, sin_port=htons(5001), sin_addr=inet_addr("127.0.0.1")}, 16) = 0
strace/io.3713:4634:bind(11, {sa_family=AF_UNIX, sun_path="/tmp/dotnet-diagnostic-3713-327405-socket"}, 110) = 0
As shown above, in the trace file of the main thread, io.3763, file descriptor 29 is bound to the socket listening on 127.0.0.1:5001. It also becomes clear that the other two sockets .NET Core creates automatically are diagnostics-related.
Next, let's focus on the system calls made by thread 3763.
shengjie@ubuntu:~/coding/dotnet/Io.Demo$ cd strace/
shengjie@ubuntu:~/coding/dotnet/Io.Demo/strace$ cat io.3763 # only the relevant portion is shown
socket(AF_INET, SOCK_STREAM|SOCK_CLOEXEC, IPPROTO_TCP) = 29
setsockopt(29, SOL_SOCKET, SO_REUSEADDR, [1], 4) = 0
bind(29, {sa_family=AF_INET, sin_port=htons(5001), sin_addr=inet_addr("127.0.0.1")}, 16) = 0
listen(29, 10)
write(21, "\346\234\215\345\212\241\347\253\257\345\267\262\345\220\257\345\212\250(127.0.0.1:500"..., 51) = 51
accept4(29, {sa_family=AF_INET, sin_port=htons(36920), sin_addr=inet_addr("127.0.0.1")}, [16], SOCK_CLOEXEC) = 31
write(21, "127.0.0.1:36920-\345\267\262\350\277\236\346\216\245\n", 26) = 26
write(21, "127.0.0.1:36920-\345\274\200\345\247\213\346\216\245\346\224\266\346\225\260\346"..., 38) = 38
recvmsg(31, {msg_name=NULL, msg_namelen=0, msg_iov=[{iov_base="1\n", iov_len=512}], msg_iovlen=1, msg_controllen=0, msg_flags=0}, 0) = 2
write(21, "127.0.0.1:36920-\346\216\245\346\224\266\346\225\260\346\215\256\357\274\2321"..., 34) = 34
sendmsg(31, {msg_name=NULL, msg_namelen=0, msg_iov=[{iov_base="received:1\n", iov_len=11}], msg_iovlen=1, msg_controllen=0, msg_flags=0}, 0) = 11
accept4(29, 0x7fecf001c978, [16], SOCK_CLOEXEC) = ? ERESTARTSYS (To be restarted if SA_RESTART is set)
--- SIGWINCH {si_signo=SIGWINCH, si_code=SI_KERNEL} ---
Several key system calls stand out:
- socket
- bind
- listen
- accept4
- recvmsg
- sendmsg
Using the man command, we can look up what the accept4 and recvmsg system calls do:
shengjie@ubuntu:~/coding/dotnet/Io.Demo/strace$ man accept4
If no pending connections are present on the queue, and the socket is not marked as nonblocking, accept() blocks the caller until a
connection is present.
shengjie@ubuntu:~/coding/dotnet/Io.Demo/strace$ man recvmsg
If no messages are available at the socket, the receive calls wait for a message to arrive, unless the socket is nonblocking (see fcntl(2))
In other words, accept4 and recvmsg are blocking system calls here.
Verifying the system calls made by I/O multiplexing
Now verify the I/O multiplexing code above in the same way; the steps are similar:
shengjie@ubuntu:~/coding/dotnet$ strace -ff -o Io.Demo/strace2/io dotnet run --project Io.Demo/
Press any key to start!
服務端已啟動(127.0.0.1:5001)-等待連接...
127.0.0.1:37098-已連接
127.0.0.1:37098-接收數據:1
127.0.0.1:37098-接收數據:2
shengjie@ubuntu:~/coding/dotnet/Io.Demo$ nc localhost 5001
1
received:1
2
received:2
shengjie@ubuntu:/proc/2449$ netstat -natp | grep 5001
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
tcp 0 0 127.0.0.1:5001 0.0.0.0:* LISTEN 2449/Io.Demo
tcp 0 0 127.0.0.1:5001 127.0.0.1:56296 ESTABLISHED 2449/Io.Demo
tcp 0 0 127.0.0.1:56296 127.0.0.1:5001 ESTABLISHED 2499/nc
shengjie@ubuntu:~/coding/dotnet/Io.Demo$ ps -h | grep dotnet
2400 pts/3 S+ 0:10 strace -ff -o ./Io.Demo/strace2/io dotnet run --project Io.Demo/
2402 pts/3 Sl+ 0:01 dotnet run --project Io.Demo/
2449 pts/3 Sl+ 0:00 /home/shengjie/coding/dotnet/Io.Demo/bin/Debug/netcoreapp3.0/Io.Demo
2516 pts/5 S+ 0:00 grep --color=auto dotnet
shengjie@ubuntu:~/coding/dotnet/Io.Demo$ cd /proc/2449/
shengjie@ubuntu:/proc/2449$ ll task
total 0
dr-xr-xr-x 11 shengjie shengjie 0 5月 10 22:15 ./
dr-xr-xr-x 9 shengjie shengjie 0 5月 10 22:15 ../
dr-xr-xr-x 7 shengjie shengjie 0 5月 10 22:15 2449/
dr-xr-xr-x 7 shengjie shengjie 0 5月 10 22:15 2451/
dr-xr-xr-x 7 shengjie shengjie 0 5月 10 22:15 2452/
dr-xr-xr-x 7 shengjie shengjie 0 5月 10 22:15 2453/
dr-xr-xr-x 7 shengjie shengjie 0 5月 10 22:15 2454/
dr-xr-xr-x 7 shengjie shengjie 0 5月 10 22:15 2455/
dr-xr-xr-x 7 shengjie shengjie 0 5月 10 22:15 2456/
dr-xr-xr-x 7 shengjie shengjie 0 5月 10 22:15 2459/
dr-xr-xr-x 7 shengjie shengjie 0 5月 10 22:15 2462/
shengjie@ubuntu:/proc/2449$ ll fd
total 0
dr-x------ 2 shengjie shengjie 0 5月 10 22:15 ./
dr-xr-xr-x 9 shengjie shengjie 0 5月 10 22:15 ../
lrwx------ 1 shengjie shengjie 64 5月 10 22:16 0 -> /dev/pts/3
lrwx------ 1 shengjie shengjie 64 5月 10 22:16 1 -> /dev/pts/3
lrwx------ 1 shengjie shengjie 64 5月 10 22:16 10 -> 'socket:[35001]'
lr-x------ 1 shengjie shengjie 64 5月 10 22:16 100 -> /dev/random
lrwx------ 1 shengjie shengjie 64 5月 10 22:16 11 -> 'socket:[34304]'
lr-x------ 1 shengjie shengjie 64 5月 10 22:16 13 -> 'pipe:[31528]'
l-wx------ 1 shengjie shengjie 64 5月 10 22:16 14 -> 'pipe:[31528]'
lr-x------ 1 shengjie shengjie 64 5月 10 22:16 15 -> /home/shengjie/coding/dotnet/Io.Demo/bin/Debug/netcoreapp3.0/Io.Demo.dll
lr-x------ 1 shengjie shengjie 64 5月 10 22:16 16 -> /home/shengjie/coding/dotnet/Io.Demo/bin/Debug/netcoreapp3.0/Io.Demo.dll
lr-x------ 1 shengjie shengjie 64 5月 10 22:16 17 -> /usr/share/dotnet/shared/Microsoft.NETCore.App/3.0.0/System.Runtime.dll
lr-x------ 1 shengjie shengjie 64 5月 10 22:16 18 -> /usr/share/dotnet/shared/Microsoft.NETCore.App/3.0.0/System.Console.dll
lr-x------ 1 shengjie shengjie 64 5月 10 22:16 19 -> /usr/share/dotnet/shared/Microsoft.NETCore.App/3.0.0/System.Threading.dll
lrwx------ 1 shengjie shengjie 64 5月 10 22:16 2 -> /dev/pts/3
lr-x------ 1 shengjie shengjie 64 5月 10 22:16 20 -> /usr/share/dotnet/shared/Microsoft.NETCore.App/3.0.0/System.Runtime.Extensions.dll
lrwx------ 1 shengjie shengjie 64 5月 10 22:16 21 -> /dev/pts/3
lr-x------ 1 shengjie shengjie 64 5月 10 22:16 22 -> /usr/share/dotnet/shared/Microsoft.NETCore.App/3.0.0/System.Text.Encoding.Extensions.dll
lr-x------ 1 shengjie shengjie 64 5月 10 22:16 23 -> /dev/urandom
lr-x------ 1 shengjie shengjie 64 5月 10 22:16 24 -> /usr/share/dotnet/shared/Microsoft.NETCore.App/3.0.0/System.Net.Sockets.dll
lr-x------ 1 shengjie shengjie 64 5月 10 22:16 25 -> /usr/share/dotnet/shared/Microsoft.NETCore.App/3.0.0/System.Net.Primitives.dll
lr-x------ 1 shengjie shengjie 64 5月 10 22:16 26 -> /usr/share/dotnet/shared/Microsoft.NETCore.App/3.0.0/Microsoft.Win32.Primitives.dll
lr-x------ 1 shengjie shengjie 64 5月 10 22:16 27 -> /usr/share/dotnet/shared/Microsoft.NETCore.App/3.0.0/System.Diagnostics.Tracing.dll
lr-x------ 1 shengjie shengjie 64 5月 10 22:16 28 -> /usr/share/dotnet/shared/Microsoft.NETCore.App/3.0.0/System.Threading.Tasks.dll
lrwx------ 1 shengjie shengjie 64 5月 10 22:16 29 -> 'socket:[31529]'
lr-x------ 1 shengjie shengjie 64 5月 10 22:16 3 -> 'pipe:[32055]'
lr-x------ 1 shengjie shengjie 64 5月 10 22:16 30 -> /usr/share/dotnet/shared/Microsoft.NETCore.App/3.0.0/System.Threading.ThreadPool.dll
lr-x------ 1 shengjie shengjie 64 5月 10 22:16 31 -> /usr/share/dotnet/shared/Microsoft.NETCore.App/3.0.0/System.Collections.Concurrent.dll
lrwx------ 1 shengjie shengjie 64 5月 10 22:16 32 -> 'anon_inode:[eventpoll]'
lr-x------ 1 shengjie shengjie 64 5月 10 22:16 33 -> 'pipe:[32059]'
l-wx------ 1 shengjie shengjie 64 5月 10 22:16 34 -> 'pipe:[32059]'
lrwx------ 1 shengjie shengjie 64 5月 10 22:16 35 -> 'socket:[35017]'
lr-x------ 1 shengjie shengjie 64 5月 10 22:16 36 -> /usr/share/dotnet/shared/Microsoft.NETCore.App/3.0.0/System.Memory.dll
lr-x------ 1 shengjie shengjie 64 5月 10 22:16 37 -> /dev/urandom
lr-x------ 1 shengjie shengjie 64 5月 10 22:16 38 -> /usr/share/dotnet/shared/Microsoft.NETCore.App/3.0.0/System.Diagnostics.Debug.dll
l-wx------ 1 shengjie shengjie 64 5月 10 22:16 4 -> 'pipe:[32055]'
lrwx------ 1 shengjie shengjie 64 5月 10 22:16 5 -> /dev/pts/3
lrwx------ 1 shengjie shengjie 64 5月 10 22:16 6 -> /dev/pts/3
lrwx------ 1 shengjie shengjie 64 5月 10 22:16 7 -> /dev/pts/3
lr-x------ 1 shengjie shengjie 64 5月 10 22:16 9 -> /usr/share/dotnet/shared/Microsoft.NETCore.App/3.0.0/System.Private.CoreLib.dll
lr-x------ 1 shengjie shengjie 64 5月 10 22:16 99 -> /dev/urandom
shengjie@ubuntu:/proc/2449$ cat /proc/net/tcp | grep 1389
0: 0100007F:1389 00000000:0000 0A 00000000:00000000 00:00000000 00000000 1000 0 31529 1 0000000000000000 100 0 0 10 0
8: 0100007F:1389 0100007F:DBE8 01 00000000:00000000 00:00000000 00000000 1000 0 35017 1 0000000000000000 20 4 29 10 -1
12: 0100007F:DBE8 0100007F:1389 01 00000000:00000000 00:00000000 00000000 1000 0 28496 1 0000000000000000 20 4 30 10 -1
Filter the logs in the strace2 directory to find the file descriptor of the socket listening on localhost:5001.
shengjie@ubuntu:~/coding/dotnet/Io.Demo$ grep 'bind' strace2/ -rn
strace2/io.2449:2243:bind(11, {sa_family=AF_UNIX, sun_path="/tmp/dotnet-diagnostic-2449-23147-socket"}, 110) = 0
strace2/io.2449:2950:bind(29, {sa_family=AF_INET, sin_port=htons(5001), sin_addr=inet_addr("127.0.0.1")}, 16) = 0
strace2/io.2365:4568:bind(10, {sa_family=AF_UNIX, sun_path="/tmp/dotnet-diagnostic-2365-19043-socket"}, 110) = 0
strace2/io.2420:4634:bind(11, {sa_family=AF_UNIX, sun_path="/tmp/dotnet-diagnostic-2420-22262-socket"}, 110) = 0
strace2/io.2402:4569:bind(10, {sa_family=AF_UNIX, sun_path="/tmp/dotnet-diagnostic-2402-22042-socket"}, 110) = 0
Again it is file descriptor 29, and the relevant system calls are recorded in io.2449. Opening that file, we find the following calls:
shengjie@ubuntu:~/coding/dotnet/Io.Demo$ cat strace2/io.2449 # only the relevant calls are shown
socket(AF_INET, SOCK_STREAM|SOCK_CLOEXEC, IPPROTO_TCP) = 29
setsockopt(29, SOL_SOCKET, SO_REUSEADDR, [1], 4) = 0
bind(29, {sa_family=AF_INET, sin_port=htons(5001), sin_addr=inet_addr("127.0.0.1")}, 16) = 0
listen(29, 10)
accept4(29, 0x7fa16c01b9e8, [16], SOCK_CLOEXEC) = -1 EAGAIN (Resource temporarily unavailable)
epoll_create1(EPOLL_CLOEXEC) = 32
epoll_ctl(32, EPOLL_CTL_ADD, 29, {EPOLLIN|EPOLLOUT|EPOLLET, {u32=0, u64=0}}) = 0
accept4(29, 0x7fa16c01cd60, [16], SOCK_CLOEXEC) = -1 EAGAIN (Resource temporarily unavailable)
Here accept4 returns -1 (EAGAIN) immediately instead of blocking. File descriptor 29, the socket listening on 127.0.0.1:5001, is then passed to epoll_ctl and registered with file descriptor 32, which was created by epoll_create1. It is descriptor 32 that epoll_wait ultimately blocks on, waiting for connection requests and other readiness events. We can grep the epoll-related system calls to confirm:
shengjie@ubuntu:~/coding/dotnet/Io.Demo$ grep 'epoll' strace2/ -rn
strace2/io.2459:364:epoll_ctl(32, EPOLL_CTL_ADD, 35, {EPOLLIN|EPOLLOUT|EPOLLET, {u32=1, u64=1}}) = 0
strace2/io.2462:21:epoll_wait(32, [{EPOLLIN, {u32=0, u64=0}}], 1024, -1) = 1
strace2/io.2462:42:epoll_wait(32, [{EPOLLOUT, {u32=1, u64=1}}], 1024, -1) = 1
strace2/io.2462:43:epoll_wait(32, [{EPOLLIN|EPOLLOUT, {u32=1, u64=1}}], 1024, -1) = 1
strace2/io.2462:53:epoll_wait(32,
strace2/io.2449:3033:epoll_create1(EPOLL_CLOEXEC) = 32
strace2/io.2449:3035:epoll_ctl(32, EPOLL_CTL_ADD, 33, {EPOLLIN|EPOLLET, {u32=4294967295, u64=18446744073709551615}}) = 0
strace2/io.2449:3061:epoll_ctl(32, EPOLL_CTL_ADD, 29, {EPOLLIN|EPOLLOUT|EPOLLET, {u32=0, u64=0}}) = 0
We can therefore conclude that this asynchronous example is built on non-blocking sockets driven by epoll-based I/O multiplexing.
As for the epoll calls themselves, the man pages describe epoll_create1, epoll_ctl, and epoll_wait:
shengjie@ubuntu:~/coding/dotnet/Io.Demo/strace$ man epoll_create
DESCRIPTION
epoll_create() creates a new epoll(7) instance. Since Linux 2.6.8, the size argument is ignored, but must be
greater than zero; see NOTES below.
epoll_create() returns a file descriptor referring to the new epoll instance. This file descriptor is used
for all the subsequent calls to the epoll interface.
shengjie@ubuntu:~/coding/dotnet/Io.Demo/strace$ man epoll_ctl
DESCRIPTION
This system call performs control operations on the epoll(7) instance referred to by the file descriptor
epfd. It requests that the operation op be performed for the target file descriptor, fd.
Valid values for the op argument are:
EPOLL_CTL_ADD
Register the target file descriptor fd on the epoll instance referred to by the file descriptor epfd
and associate the event event with the internal file linked to fd.
EPOLL_CTL_MOD
Change the event event associated with the target file descriptor fd.
EPOLL_CTL_DEL
Remove (deregister) the target file descriptor fd from the epoll instance referred to by epfd. The
event is ignored and can be NULL (but see BUGS below).
shengjie@ubuntu:~/coding/dotnet/Io.Demo/strace$ man epoll_wait
DESCRIPTION
The epoll_wait() system call waits for events on the epoll(7) instance referred to by the file descriptor
epfd. The memory area pointed to by events will contain the events that will be available for the caller.
Up to maxevents are returned by epoll_wait(). The maxevents argument must be greater than zero.
The timeout argument specifies the number of milliseconds that epoll_wait() will block. Time is measured
against the CLOCK_MONOTONIC clock. The call will block until either:
* a file descriptor delivers an event;
* the call is interrupted by a signal handler; or
* the timeout expires.
In short, epoll moves the blocking onto a single separate file descriptor: the (non-blocking) socket descriptors you care about are registered with an epoll instance via epoll_ctl, the thread blocks on that one epoll descriptor in epoll_wait, and when events arrive or the timeout expires the kernel reports which descriptors are ready, so they can be handled without ever blocking on any of them individually.
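For completeness, .NET also exposes readiness-based multiplexing directly. The following sketch (illustration only, not part of the original samples; it assumes System.Collections.Generic in addition to the namespaces used earlier) uses the static Socket.Select method, which maps to a select/poll-style readiness check rather than epoll, but the idea is the same: block once on many sockets, then touch only the ones that are ready.
public static class SelectServer
{
    public static void Start()
    {
        var listener = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
        listener.Bind(new IPEndPoint(IPAddress.Loopback, 5001));
        listener.Listen(10);
        var clients = new List<Socket>();
        var buffer = new byte[512];
        while (true)
        {
            // Check readiness of the listener and of every connected client in one call.
            var readSet = new List<Socket>(clients) { listener };
            Socket.Select(readSet, null, null, 1_000_000); // wait up to 1s; readSet then holds only ready sockets
            foreach (var socket in readSet)
            {
                if (socket == listener)
                {
                    clients.Add(listener.Accept()); // will not block: a connection is already pending
                }
                else
                {
                    int readLength = socket.Receive(buffer); // will not block: data (or EOF) is ready
                    if (readLength == 0) { clients.Remove(socket); socket.Close(); continue; }
                    var msg = Encoding.UTF8.GetString(buffer, 0, readLength);
                    socket.Send(Encoding.UTF8.GetBytes($"received:{msg}"));
                }
            }
        }
    }
}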
Summary
Writing this article deepened my own understanding of I/O models, but my knowledge of Linux is limited, so there are bound to be oversights; corrections are welcome.
I also can't help marveling at the power of Linux: the "everything is a file" design philosophy means everything leaves a trace you can follow. Now that .NET is fully cross-platform, it is well worth getting familiar with Linux.