1. Tomcat Cluster
(1) httpd + tomcat cluster
httpd: mod_proxy, mod_proxy_http, mod_proxy_balancer
tomcat cluster: http connector
(2) httpd + tomcat cluster
httpd: mod_proxy, mod_proxy_ajp, mod_proxy_balancer
tomcat cluster: ajp connector
(3) nginx + tomcat cluster
Lab environment
1. Synchronize the clocks on the three hosts
2. Set the three hostnames as shown in the diagram above
3. Edit /etc/hosts on each host so the three hosts can resolve one another's hostnames
4. Install OpenJDK and tomcat on A and B, then start tomcat
Example 1: nginx reverse proxy to a tomcat cluster
1. Configuration on A and B
[root@node1 tomcat]#mkdir -pv /usr/share/tomcat/webapps/myapp/WEB-INF
vim /usr/share/tomcat/webapps/myapp/index.jsp
<%@ page language="java" %>
<html>
<head><title>TomcatA</title></head>
<body>
<h1><font color="red">TomcatA.magedu.com</font></h1> ---on B change the color to green and TomcatA to TomcatB
<table align="center" border="1">
<tr>
<td>Session ID</td>
<% session.setAttribute("magedu.com","magedu.com"); %>
<td><%= session.getId() %></td>
</tr>
<tr>
<td>Created on</td>
<td><%= session.getCreationTime() %></td>
</tr>
</table>
</body>
</html>
Test
http://172.18.21.107:8080/myapp/
http://172.18.21.7:8080/myapp/
2. Configuration on the director
yum install nginx
vim /etc/nginx/nginx.conf
upstream tcsrvs{
server 172.18.21.107:8080;
server 172.18.21.7:8080;
}
vim /etc/nginx/conf.d/default.conf
location / {
proxy_pass http://tcsrvs;
}
Test: http://172.18.21.106/myapp/
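The default round-robin scheduling that the tcsrvs upstream group uses can be sketched in Python. This is a minimal illustration of the scheduling idea, not nginx's actual implementation; the class and method names are hypothetical, and the addresses come from the upstream block above.

```python
from itertools import cycle

# Minimal sketch of round-robin upstream selection over the tcsrvs group
# (illustration only, not nginx's implementation).
class RoundRobinUpstream:
    def __init__(self, servers):
        self._cycle = cycle(servers)

    def pick(self):
        # Each request takes the next server in turn.
        return next(self._cycle)

upstream = RoundRobinUpstream(["172.18.21.107:8080", "172.18.21.7:8080"])
picks = [upstream.pick() for _ in range(4)]
print(picks)  # the two backends alternate
```

With two equal-weight members, successive requests simply alternate between them, which is what the two test URLs above demonstrate.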
Example 2: httpd reverse proxy to a tomcat cluster
1. The configuration on A and B is the same as above
2. Configuration on the director
yum install httpd -y
cd /etc/httpd/conf.d
vim vhost.conf
<Proxy balancer://tcsrvs> ---defines a group of backend servers
BalancerMember http://172.18.21.107:8080 ---to talk to the backends over the ajp protocol instead, change http to ajp and the port to 8009
BalancerMember http://172.18.21.7:8080
ProxySet lbmethod=byrequests ---lbmethod selects the scheduling algorithm; there are three: byrequests (like rr/wrr), bybusyness (like LC), bytraffic (schedules by traffic volume)
</Proxy>
NameVirtualHost *:80
<VirtualHost *:80>
ServerName www.magedu.com
DocumentRoot /app/website1
<Directory /app/website1>
Require all granted
</Directory>
ProxyVia On
ProxyRequests Off
ProxyPreserveHost On
ProxyPass / balancer://tcsrvs/
ProxyPassReverse / balancer://tcsrvs/
</VirtualHost>
httpd -t
service httpd start
Test
http://172.18.21.106/myapp/
httpd's load-balancing cluster also performs health checking:
stop one of the backend tomcats,
then visit http://172.18.21.106/myapp/
and you will see that requests are no longer scheduled to the stopped tomcat.
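The failover behaviour just described can be sketched with a simplified model in which a member marked dead is skipped during scheduling. The Balancer class and its method names are hypothetical, not httpd's API; real mod_proxy_balancer marks a member as being in an error state after failed connections and retries it later.

```python
# Hypothetical sketch of how a balancer skips unhealthy members: a member
# marked dead (e.g. after a failed connection) is left out of the rotation
# until it recovers.
class Balancer:
    def __init__(self, members):
        self.members = members          # all configured members
        self.dead = set()               # members currently failing

    def mark_dead(self, member):
        self.dead.add(member)

    def mark_alive(self, member):
        self.dead.discard(member)

    def schedule(self, n):
        """Round-robin over healthy members only, for n requests."""
        healthy = [m for m in self.members if m not in self.dead]
        return [healthy[i % len(healthy)] for i in range(n)]

b = Balancer(["172.18.21.107:8080", "172.18.21.7:8080"])
b.mark_dead("172.18.21.107:8080")     # one tomcat stopped
print(b.schedule(3))                  # only the healthy member is chosen
```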
Notes:
Key=value pairs can be appended after BalancerMember http://172.18.21.107:8080, for example to set the backend's status and weight by hand
e.g. BalancerMember http://172.18.21.107:8080 status=D loadfactor=2 marks this backend as disabled and gives it a weight of 2
BalancerMember:
BalancerMember [balancerurl] url [key=value [key=value ...]]
status: manually set the backend's state
D: Worker is disabled and will not accept any requests.
S: Worker is administratively stopped.
I: Worker is in ignore-errors mode and will always be considered available.
H: Worker is in hot-standby mode and will only be used if no other viable workers are available.
E: Worker is in an error state.
N: Worker is in drain mode and will only accept existing sticky sessions destined for itself and ignore all other requests.
loadfactor: the load factor, i.e. the weight
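The effect of loadfactor and status=D can be sketched with a naive weighted rotation. This is an illustration only: build_rotation is a hypothetical helper, and mod_proxy_balancer's real byrequests algorithm interleaves requests more smoothly than this simple slot expansion.

```python
# Hypothetical sketch of loadfactor (weight) and status=D: a disabled
# member gets no requests, and a member with loadfactor=2 receives twice
# the share of a loadfactor=1 member.
def build_rotation(members):
    """members: list of (url, loadfactor, disabled) tuples."""
    rotation = []
    for url, loadfactor, disabled in members:
        if not disabled:                 # status=D: accepts no requests
            rotation.extend([url] * loadfactor)
    return rotation

rotation = build_rotation([
    ("http://172.18.21.107:8080", 2, False),   # loadfactor=2
    ("http://172.18.21.7:8080", 1, False),
])
print(rotation)  # the weighted member appears twice per cycle
```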
2. Session stickiness with httpd and nginx
A tomcat cluster can preserve client sessions in three ways:
(1) session sticky: configured on the director; a cookie is added to the response the backend tomcat returns to the client, the client presents that cookie on its next visit and is scheduled to the same backend tomcat, which binds the session
(2) session cluster: configured on the backend tomcats; each client session is replicated to every tomcat in the cluster so all nodes hold the same sessions, and if one server fails the others can still serve it
tomcat delta manager
(3) session server: a dedicated cache server, such as memcached, stores the sessions
This example follows the topology diagram above
Example 1: httpd session stickiness
Configuration on the two backend tomcats
vim /etc/tomcat/server.xml
<Engine name="Catalina" defaultHost="localhost" jvmRoute="tomcatA"> ---add jvmRoute="tomcatA" to this line; on B add jvmRoute="tomcatB"
systemctl restart tomcat
Configuration on the director
vim vhost.conf
Header add Set-Cookie "ROUTEID=.%{BALANCER_WORKER_ROUTE}e; path=/" env=BALANCER_ROUTE_CHANGED ---the cookie is set only when the route changes, so the client keeps being scheduled to the same backend unless that host goes down, which binds the session
<Proxy balancer://tcsrvs>
BalancerMember http://172.18.21.107:8080 route=tomcatA
BalancerMember http://172.18.21.7:8080 route=tomcatB
ProxySet lbmethod=byrequests
ProxySet stickysession=ROUTEID
</Proxy>
NameVirtualHost *:80
<VirtualHost *:80>
ServerName www.magedu.com
DocumentRoot /app/website1
<Directory /app/website1>
Require all granted
</Directory>
ProxyVia On
ProxyRequests Off
ProxyPreserveHost On
ProxyPass / balancer://tcsrvs/
ProxyPassReverse / balancer://tcsrvs/
</VirtualHost>
service httpd reload
Test
http://172.18.21.106/myapp/
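What stickysession=ROUTEID does can be sketched as a cookie lookup: the request's ROUTEID cookie names the route, and the member configured with that route is chosen. The pick_member helper and its fixed fallback are hypothetical simplifications; httpd actually falls back to normal load balancing when no cookie matches.

```python
# Hypothetical sketch of stickysession=ROUTEID routing. The routes and
# members mirror the BalancerMember route=... lines above; the Header
# directive sets a cookie like ".tomcatA" on the first response.
ROUTES = {"tomcatA": "http://172.18.21.107:8080",
          "tomcatB": "http://172.18.21.7:8080"}

def pick_member(cookies, fallback_route="tomcatA"):
    route_id = cookies.get("ROUTEID", "").lstrip(".")
    # A known route wins; otherwise fall back (simplified here to a
    # fixed route instead of real load balancing).
    route = route_id if route_id in ROUTES else fallback_route
    return ROUTES[route], route

member, route = pick_member({"ROUTEID": ".tomcatB"})
print(member)   # sticky: the tomcatB member is chosen every time
```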
Example 2: nginx session stickiness
vim /etc/nginx/nginx.conf
upstream tcsrvs{
server 172.18.21.107:8080;
server 172.18.21.7:8080;
hash $request_uri consistent;
}
vim /etc/nginx/conf.d/default.conf
location / {
proxy_pass http://tcsrvs;
}
nginx -t
nginx -s reload
Test
http://172.18.21.106/myapp/ ---requests for a given URI are always scheduled to the backend that served the first visit, which keeps the session bound
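The `hash $request_uri consistent` directive selects backends by consistent hashing, so the same URI always maps to the same server and removing a server only remaps a fraction of keys. A minimal sketch of the idea follows; the md5 ring, the `#i` virtual-node naming, and the vnode count are illustrative choices, not nginx's exact ketama scheme.

```python
import hashlib

# Hypothetical sketch of consistent hashing: each server owns many
# points on a hash ring; a URI is hashed and assigned to the first
# server point at or after it (wrapping around).
def _h(s):
    return int(hashlib.md5(s.encode()).hexdigest(), 16)

def build_ring(servers, vnodes=100):
    return sorted((_h(f"{srv}#{i}"), srv)
                  for srv in servers for i in range(vnodes))

def pick(ring, uri):
    key = _h(uri)
    for point, srv in ring:
        if point >= key:
            return srv
    return ring[0][1]                  # wrap around the ring

ring = build_ring(["172.18.21.107:8080", "172.18.21.7:8080"])
print(pick(ring, "/myapp/"))           # same URI -> same backend, always
```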
3. Keeping identical sessions on both backend tomcat servers
This experiment uses the topology from section 1
Configuration on A and B
Open the official tomcat documentation:
http://172.18.21.107:8080/docs/cluster-howto.html
Documentation ----> Clustering
Copy the following content from the official documentation into the tomcat config file, inside <Engine> or <Host>; in this experiment it goes inside <Engine>
vim /etc/tomcat/server.xml
<Engine name="Catalina" defaultHost="localhost" jvmRoute="tomcatA"> ---on B use jvmRoute="tomcatB"
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"
channelSendOptions="8">
<Manager className="org.apache.catalina.ha.session.DeltaManager"
expireSessionsOnShutdown="false"
notifyListenersOnReplication="true"/>
<Channel className="org.apache.catalina.tribes.group.GroupChannel">
<Membership className="org.apache.catalina.tribes.membership.McastService"
address="228.0.21.4" ---multicast address the two backend tomcats use to exchange heartbeats, proving they belong to the same cluster and telling each other whether they are alive; change it to avoid clashing with other experiments on the same network
port="45564"
frequency="500" ---send a heartbeat every 0.5 s to tell the other members this node is still alive
dropTime="3000"/> ---a member that has sent nothing for 3 s is considered dead
<Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
address="172.18.21.107" ---on the other host change this to 172.18.21.7
port="4000"
autoBind="100"
selectorTimeout="5000"
maxThreads="6"/> ---maximum number of threads for communicating with the other members of the multicast group; with only three hosts, 2 is enough
<Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
<Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"/>
</Sender>
<Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
<Interceptor className="org.apache.catalina.tribes.group.interceptors.MessageDispatch15Interceptor"/>
</Channel>
<Valve className="org.apache.catalina.ha.tcp.ReplicationValve"
filter=""/>
<Valve className="org.apache.catalina.ha.session.JvmRouteBinderValve"/>
<Deployer className="org.apache.catalina.ha.deploy.FarmWarDeployer"
tempDir="/tmp/war-temp/"
deployDir="/tmp/war-deploy/"
watchDir="/tmp/war-listen/"
watchEnabled="false"/>
<ClusterListener className="org.apache.catalina.ha.session.JvmRouteSessionIDBinderListener"/>
<ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
</Cluster>
Note: the cluster configuration example in the documentation bundled with tomcat on CentOS 7 has a syntax error: the trailing / is missing from these two lines
<ClusterListener className="org.apache.catalina.ha.session.JvmRouteSessionIDBinderListener"/>
<ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
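The frequency/dropTime membership parameters in the McastService element above amount to a simple heartbeat timeout: a member is dropped once nothing has been heard from it for dropTime milliseconds. A sketch of that rule, with made-up member names and timestamps:

```python
# Hypothetical sketch of heartbeat-based membership: every member
# multicasts a heartbeat each `frequency` ms; a member silent for more
# than `dropTime` ms is dropped from the cluster view.
FREQUENCY_MS = 500
DROP_TIME_MS = 3000

def alive_members(last_heard, now_ms):
    """last_heard: {member: timestamp of last heartbeat, in ms}."""
    return {m for m, t in last_heard.items() if now_ms - t <= DROP_TIME_MS}

last_heard = {"tomcatA": 10_000, "tomcatB": 6_500}
print(alive_members(last_heard, now_ms=10_200))
# tomcatB has been silent for 3700 ms > dropTime, so only tomcatA remains
```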
cp /etc/tomcat/web.xml /usr/share/tomcat/webapps/myapp/WEB-INF/ ---in production the application ships with its own web.xml in WEB-INF, so this copy is unnecessary
cd /usr/share/tomcat/webapps/myapp/WEB-INF/
vim web.xml
Add the following element in an uncommented spot inside this file:
<distributable/> ---it must go inside the <web-app> element, not be appended after the end of the file
systemctl restart tomcat
Configuration on the director
vim vhost.conf
#Header add Set-Cookie "ROUTEID=.%{BALANCER_WORKER_ROUTE}e; path=/" env=BALANCER_ROUTE_CHANGED
<Proxy balancer://tcsrvs>
BalancerMember http://172.18.21.107:8080
BalancerMember http://172.18.21.7:8080
ProxySet lbmethod=byrequests
#ProxySet stickysession=ROUTEID
</Proxy>
NameVirtualHost *:80
<VirtualHost *:80>
ServerName www.magedu.com
DocumentRoot /app/website1
<Directory /app/website1>
Require all granted
</Directory>
ProxyVia On
ProxyRequests Off
ProxyPreserveHost On
ProxyPass / balancer://tcsrvs/
ProxyPassReverse / balancer://tcsrvs/
</VirtualHost>
service httpd reload
Test
http://172.18.21.106/myapp/
No matter which backend host serves the request, the session is the same, showing that the client's session has been successfully replicated to both tomcat servers
4. memcached
- Overview
memcached is a high-performance, distributed in-memory object cache. Keys and values are both stored in memory, so access is very fast. memcached relies on a smart client: the client first requests the backend tomcat server, then asks memcached to cache the data it received, and subsequent requests read that cache from memcached directly. This differs from varnish, where the client talks to the varnish cache server directly and, on a miss, varnish fetches from the backend on the client's behalf and caches the result. memcached is therefore called a look-aside (bypass) cache and depends heavily on client-side intelligence.
varnish works like recursion, while memcached works like iteration.
- Features
cache: no persistent storage, since everything lives in memory and is lost on power-off
bypass cache: relies on a smart client
k/v cache: only serializable ("streamable") data can be stored, i.e. data that can be broken apart before sending and reassembled at the receiving end
- Installation and configuration
yum install memcached
listening ports: 11211/tcp, 11211/udp
systemctl start memcached
memcached -h ---shows the program's common command-line options
- Commands:
statistics: stats, stats items, stats slabs, stats sizes
storage: set, add, replace, append (insert after the value), prepend (insert before the value)
command format: <command name> <key> <flags> <exptime> <bytes>
<cas unique>
retrieval: get, delete, incr/decr (increment and decrement)
flush: flush_all
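The storage command format can be made concrete with a small helper that frames a command the way the text protocol requires: a header line with the byte count, then a data block of exactly that many bytes. storage_command is a hypothetical helper for illustration, not part of any client library.

```python
# Hypothetical sketch of the memcached text protocol storage framing:
# <command> <key> <flags> <exptime> <bytes>\r\n followed by exactly
# <bytes> bytes of data. The server rejects the command if the data
# length does not match <bytes>.
def storage_command(command, key, value, flags=1, exptime=600):
    data = value.encode("utf-8")
    header = f"{command} {key} {flags} {exptime} {len(data)}\r\n"
    return header.encode("utf-8") + data + b"\r\n"

cmd = storage_command("add", "mykey", " helloo")
print(cmd)   # b'add mykey 1 600 7\r\n helloo\r\n'
```

This is why the byte count in the telnet example below must match the value exactly: " helloo" with its leading space is 7 bytes.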
Example
[root@node1 ~]#telnet 127.0.0.1 11211
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
add mykey 1 600 7 ---mykey is the key, 1 is the flags field (arbitrary), 600 is the expiry time (10 minutes), 7 is the number of bytes to store
 helloo ---the value; the byte count must match what was declared: the declared 7 bytes include the leading space, and without it the command fails
STORED
get mykey
VALUE mykey 1 7
helloo
END
append mykey 1 600 7 ---append content after the existing value
system
STORED
get mykey
VALUE mykey 1 14
helloo system
END
prepend mykey 1 600 4 ---prepend content before the existing value
new
STORED
get mykey
VALUE mykey 1 18
new helloo system
END
add count 1 1200 1 ---add a count key
0
STORED
get count
VALUE count 1 1
0
END
incr count 2 ---increment by 2
2
get count
VALUE count 1 1
2
END
incr count 2
4
decr count 1 ---decrement by 1
3
delete count ---delete the key
DELETED
get count
END
stats ---show server statistics
flush_all ---flush all keys and values
OK
5. Persisting sessions to a memcached server
To store sessions on backend memcached servers, with every memcached holding the same sessions, you need the memcached-session-manager project
Project page: https://github.com/magro/memcached-session-manager
Download the following jar files into /usr/share/tomcat/lib/ on each tomcat; replace ${version} with the version you need and tc${6,7,8} with the number matching your tomcat version.
1羹蚣、Add memcached-session-manager jars to tomcat
memcached-session-manager-2.1.1.jar
memcached-session-manager-tc7-2.1.1.jar ---must match the tomcat version; tomcat 7 was used in this experiment, so download tc7
spymemcached-2.9.1.jar
2告匠、Add custom serializers to your webapp (optional)
這里下載的是kryo-serializer,有如下jar文件需要下載
msm-kryo-serializer-2.1.1.jar
kryo-serializers-0.42.jar
kryo-4.0.1.jar
minlog-1.3.0.jar
reflectasm-1.11.3-shaded.jar
reflectasm-1.11.3.jar
asm-5.2.jar
objenesis-2.6.jar
The procedure:
1. On the director, set up nginx or httpd as a reverse proxy to the tomcat cluster; httpd is used in this experiment
vim /etc/httpd/conf.d/vhost.conf
<Proxy balancer://tcsrvs>
BalancerMember http://172.18.21.107:8080
BalancerMember http://172.18.21.7:8080
ProxySet lbmethod=byrequests
</Proxy>
NameVirtualHost *:80
<VirtualHost *:80>
ServerName www.magedu.com
DocumentRoot /app/website1
<Directory /app/website1>
Require all granted
</Directory>
ProxyVia On
ProxyRequests Off
ProxyPreserveHost On
ProxyPass / balancer://tcsrvs/
ProxyPassReverse / balancer://tcsrvs/
</VirtualHost>
service httpd start
2. Configuration on the two backend servers
Install tomcat and memcached and start both services
[root@node1 app]#ls /app
asm-5.2.jar memcached-session-manager-2.1.1.jar msm-kryo-serializer-2.1.1.jar reflectasm-1.11.3-shaded.jar
kryo-4.0.1.jar memcached-session-manager-tc7-2.1.1.jar objenesis-2.6.jar spymemcached-2.9.1.jar
kryo-serializers-0.42.jar minlog-1.3.0.jar reflectasm-1.11.3.jar
[root@node1 app]#cd /app
[root@node1 app]#cp * /usr/share/tomcat/lib/ ---copy the .jar files into this directory
vim /etc/tomcat/server.xml ---copy the following snippet from the official documentation into the tomcat config file
<Context path="/myapp" docBase="/usr/share/tomcat/webapps/myapp" reloadable="true"> ---the URI /myapp actually maps to /usr/share/tomcat/webapps/myapp
<Manager className="de.javakaffee.web.msm.MemcachedBackupSessionManager"
memcachedNodes="n1:172.18.21.107:11211,n2:172.18.21.7:11211"
sticky="false"
sessionBackupAsync="false"
lockingMode="uriPattern:/path1|/path2"
requestUriIgnorePattern=".*\.(ico|png|gif|jpg|css|js)$"
transcoderFactoryClass="de.javakaffee.web.msm.serializer.kryo.KryoTranscoderFactory"
/>
</Context>
systemctl restart tomcat
3. Create the test directory and .jsp file on each tomcat
mkdir -pv /usr/share/tomcat/webapps/myapp/WEB-INF/
vim /usr/share/tomcat/webapps/myapp/index.jsp
<%@ page language="java" %>
<html>
<head><title>TomcatA</title></head> ---on B change A to B
<body>
<h1><font color="red">TomcatA.magedu.com</font></h1> ---on B change to TomcatB and green
<table align="center" border="1">
<tr>
<td>Session ID</td>
<% session.setAttribute("magedu.com","magedu.com"); %>
<td><%= session.getId() %></td>
</tr>
<tr>
<td>Created on</td>
<td><%= session.getCreationTime() %></td>
</tr>
</table>
</body>
</html>
4. Test
Install the client tools:
yum install -y libmemcached ---without this package, client tools such as memdump are unavailable
http://172.18.21.106/myapp/ ---requests are scheduled to different tomcat hosts, but the session stays the same
[root@node2 myapp]#memdump --server 172.18.21.107:11211 ---lists the values cached in memcached
validity:643C757E5D5176595045F4BC02048072-n1
643C757E5D5176595045F4BC02048072-n1
bak:643C757E5D5176595045F4BC02048072-n2
[root@node2 myapp]#systemctl stop memcached ---stop one of the memcached instances
http://172.18.21.106/myapp/ ---the session is still unchanged on further visits, proving it was cached in both memcached servers
Summary: the client sends a request and the front-end director schedules it to a backend tomcat. If tomcatA handles it, the session is cached in memcached A and backed up to memcached B, so both cache servers hold the client's session. On the next visit, whichever tomcat is chosen fetches the session from the backend memcached and returns the same session content.
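The primary/backup behaviour just demonstrated (and visible in the memdump output, where the backup copy carries a "bak:" prefix) can be sketched as follows. SessionStore and its methods are hypothetical; memcached-session-manager's real node selection and backup handling are more involved.

```python
# Hypothetical sketch of a primary/backup session store: a session is
# written to a primary memcached node and backed up to another; reads
# fall back to the backup copy when the primary is down.
class SessionStore:
    def __init__(self, nodes):
        self.nodes = {name: {} for name in nodes}   # node -> key/value cache
        self.down = set()                           # nodes currently stopped

    def put(self, session_id, data, primary, backup):
        self.nodes[primary][session_id] = data
        self.nodes[backup]["bak:" + session_id] = data   # backup copy

    def get(self, session_id, primary, backup):
        if primary not in self.down:
            return self.nodes[primary].get(session_id)
        return self.nodes[backup].get("bak:" + session_id)  # failover

store = SessionStore(["n1", "n2"])
store.put("sessionid-n1", {"user": "magedu"}, primary="n1", backup="n2")
store.down.add("n1")                     # systemctl stop memcached on n1
print(store.get("sessionid-n1", primary="n1", backup="n2"))  # still served
```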