Problems and Solutions When Deploying an Nginx + Keepalived Dual-Master Cluster on VMware
Nginx is used for load balancing at the very front of the architecture or as a middle tier. As traffic keeps growing, the load balancer itself needs a high-availability setup: keepalived removes the single point of failure, so that if an nginx node goes down, traffic fails over quickly to the backup server.
Solutions to problems you may run into with the VMware network configuration
- Start the two services VMware DHCP Service and VMware NAT Service
- On the host's network adapter, enable network sharing, tick the "allow other networks" option, save, and restart the virtual machine
Installation
Node deployment
| Node | Address | Services |
|---|---|---|
| centos7_1 | 192.168.211.130 | Keepalived+Nginx |
| centos7_2 | 192.168.211.131 | Keepalived+Nginx |
| centos7_3 | 192.168.211.132 | Redis server |
| web1 (physical host) | 192.168.211.128 | FastApi+Celery |
| web2 (physical host) | 192.168.211.129 | FastApi+Celery |
Web server configuration
Start a Python HTTP server on web1
vim index.html
<html> <body> <h1>Web Svr 1</h1> </body> </html>
nohup python -m SimpleHTTPServer 8080 > running.log 2>&1 &
Start a Python HTTP server on web2
vim index.html
<html> <body> <h1>Web Svr 2</h1> </body> </html>
nohup python -m SimpleHTTPServer 8080 > running.log 2>&1 &
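The commands above assume Python 2, where the built-in module is SimpleHTTPServer. If the web hosts only have Python 3, the equivalent built-in module is http.server, so an otherwise identical one-liner would be:

nohup python3 -m http.server 8080 > running.log 2>&1 &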
Disable the firewall
firewall-cmd --state
systemctl stop firewalld.service
systemctl disable firewalld.service
Now the pages are reachable from a browser and show Web Svr 1 and Web Svr 2.
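The same can be verified from the command line of any host on the 192.168.211.0/24 network, for example with curl (a quick check, not part of the original steps); each request should return the small index.html created above:

curl http://192.168.211.128:8080/
curl http://192.168.211.129:8080/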
Install Nginx on centos 1 and 2
First configure the Aliyun yum repository
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
wget -O /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
sed -i -e '/mirrors.cloud.aliyuncs.com/d' -e '/mirrors.aliyuncs.com/d' /etc/yum.repos.d/CentOS-Base.repo
Install the build dependencies
yum -y install gcc
yum install -y pcre pcre-devel
yum install -y zlib zlib-devel
yum install -y openssl openssl-devel
Download and extract nginx
wget http://nginx.org/download/nginx-1.8.0.tar.gz
tar -zxvf nginx-1.8.0.tar.gz
Build and install nginx
cd nginx-1.8.0
./configure --user=nobody --group=nobody --prefix=/usr/local/nginx --with-http_stub_status_module --with-http_gzip_static_module --with-http_realip_module --with-http_sub_module --with-http_ssl_module
make
make install
cd /usr/local/nginx/sbin/
# Check the configuration file
./nginx -t
# Start nginx
./nginx
Open port 80 for nginx in the firewall
firewall-cmd --zone=public --add-port=80/tcp --permanent
systemctl restart firewalld.service
At this point the default nginx welcome page is reachable on both 130 and 131.
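Without a browser, the same check can be done with curl from another machine on the network; the response headers should come back with HTTP/1.1 200 OK and a Server header reporting nginx:

curl -I http://192.168.211.130/
curl -I http://192.168.211.131/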
Create the nginx startup script
An nginx startup script needs to be created in the init.d directory, so that the init process starts Nginx automatically every time the server reboots.
cd /etc/init.d/
vim nginx
#!/bin/sh
#
# nginx - this script starts and stops the nginx daemon
#
# chkconfig: - 85 15
# description: Nginx is an HTTP(S) server, HTTP(S) reverse \
# proxy and IMAP/POP3 proxy server
# processname: nginx
# config: /etc/nginx/nginx.conf
# pidfile: /var/run/nginx.pid
# user: nginx
# Source function library.
. /etc/rc.d/init.d/functions
# Source networking configuration.
. /etc/sysconfig/network
# Check that networking is up.
[ "$NETWORKING" = "no" ] && exit 0
nginx="/usr/local/nginx/sbin/nginx"
prog=$(basename $nginx)
NGINX_CONF_FILE="/usr/local/nginx/conf/nginx.conf"
lockfile=/var/run/nginx.lock
start() {
    [ -x $nginx ] || exit 5
    [ -f $NGINX_CONF_FILE ] || exit 6
    echo -n $"Starting $prog: "
    daemon $nginx -c $NGINX_CONF_FILE
    retval=$?
    echo
    [ $retval -eq 0 ] && touch $lockfile
    return $retval
}

stop() {
    echo -n $"Stopping $prog: "
    killproc $prog -QUIT
    retval=$?
    echo
    [ $retval -eq 0 ] && rm -f $lockfile
    return $retval
}

restart() {
    configtest || return $?
    stop
    start
}

reload() {
    configtest || return $?
    echo -n $"Reloading $prog: "
    killproc $nginx -HUP
    RETVAL=$?
    echo
}

force_reload() {
    restart
}

configtest() {
    $nginx -t -c $NGINX_CONF_FILE
}

rh_status() {
    status $prog
}

rh_status_q() {
    rh_status >/dev/null 2>&1
}

case "$1" in
    start)
        rh_status_q && exit 0
        $1
        ;;
    stop)
        rh_status_q || exit 0
        $1
        ;;
    restart|configtest)
        $1
        ;;
    reload)
        rh_status_q || exit 7
        $1
        ;;
    force-reload)
        force_reload
        ;;
    status)
        rh_status
        ;;
    condrestart|try-restart)
        rh_status_q || exit 0
        ;;
    *)
        echo $"Usage: $0 {start|stop|status|restart|condrestart|try-restart|reload|force-reload|configtest}"
        exit 2
esac
Register the service with chkconfig by entering the following commands in order
chkconfig --add nginx
chkconfig --level 345 nginx on
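To confirm the registration took effect, chkconfig can list the runlevels in which the nginx service is now enabled:

chkconfig --list nginx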
Give the script execute permission
chmod +x nginx
ls
functions  netconsole  network  nginx  README
Start the Nginx service
service nginx start
service nginx status
service nginx reload
Nginx reverse proxy and load balancing (centos_1)
Modify the nginx.conf configuration file, stripping out the commented lines
cd /usr/local/nginx/conf/
mv nginx.conf nginx.conf.bak
egrep -v '^#' nginx.conf.bak
egrep -v '^#|^[ ]*#' nginx.conf.bak
egrep -v '^#|^[ ]*#|^$' nginx.conf.bak
egrep -v '^#|^[ ]*#|^$' nginx.conf.bak >> nginx.conf
cat nginx.conf
The output is as follows
worker_processes 1;
events {
    worker_connections 1024;
}
http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;
    server {
        listen 80;
        server_name localhost;
        location / {
            root html;
            index index.html index.htm;
        }
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }
}
Reload the nginx configuration
# Test whether the configuration file is valid
../sbin/nginx -t
# Reload the nginx configuration
../sbin/nginx -s reload
Configure the nginx reverse proxy and load balancing
worker_processes 1;
events {
    worker_connections 1024;
}
http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;
    # websvr server cluster (also known as the load-balancing pool)
    upstream websvr {
        server 192.168.211.128:8080 weight=1;
        server 192.168.211.129:8080 weight=2;
    }
    server {
        listen 80;
        # The IP address or domain name to serve; separate multiple values with spaces
        server_name 192.168.211.130;
        location / {
            # Hand all requests to the websvr cluster
            proxy_pass http://websvr;
        }
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }
}
Now reload nginx
sbin/nginx -s reload
The upstream name websvr can be anything; pick a name that describes what these servers are for. In other words, adding the upstream websvr block and the proxy_pass directive is all that is needed for load balancing.
Now, when you access 130, the page alternates between Web Svr 1 and Web Svr 2. Servers are chosen according to their weight: the larger the weight value, the more often the server is picked, so with repeated refreshes Web Svr 2 appears on average twice for every appearance of Web Svr 1.
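The weighted round-robin behaviour can also be observed from the command line instead of refreshing the browser; a small sketch that sends 30 requests to the load balancer on 130 and counts which backend answered:

for i in $(seq 1 30); do curl -s http://192.168.211.130/ | grep -o 'Web Svr [12]'; done | sort | uniq -c

With weights 1 and 2, roughly two thirds of the responses should come from Web Svr 2.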
So far this is still not highly available. The web tier is covered, since a failed web server is handled by the upstream pool, but if the nginx service itself fails, the whole system becomes essentially unreachable. Multiple Nginx instances are needed to guard against that.
Multiple Nginx instances working together: Nginx high availability (dual-machine master/backup mode)
Add a second nginx service on the 131 server (centos_2), configured exactly as before; only nginx.conf needs to be changed
worker_processes 1;
events {
    worker_connections 1024;
}
http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;
    upstream websvr {
        server 192.168.211.128:8080 weight=1;
        server 192.168.211.129:8080 weight=2;
    }
    server {
        listen 80;
        server_name 192.168.211.131;
        location / {
            proxy_pass http://websvr;
        }
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }
}
# Reload nginx
sbin/nginx -s reload
Now accessing http://192.168.211.130/ gives the same kind of result as http://192.168.211.131/.
The two Nginx servers have different IPs, so how can they be made to work together behind a single address? This is where keepalived comes in.
Install the software on both CentOS machines
yum install keepalived pcre-devel -y
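To confirm the package was installed, the version can be printed (keepalived supports a -v flag):

keepalived -v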
Configure keepalived
Back up the original configuration file on both machines
cp /etc/keepalived/keepalived.conf keepalived.conf.bak
Configure the Keepalived MASTER on centos_1
[root@localhost keepalived]# cat keepalived.conf
! Configuration File for keepalived
global_defs {
    script_user root
    enable_script_security
}
vrrp_script chk_nginx {
    # Monitoring script that checks whether the nginx service is running
    script "/etc/keepalived/chk_nginx.sh"
    # Check interval: run every 10 seconds
    interval 10
    # Priority change driven by the script result: a failed check (non-zero exit) lowers the priority by 5
    # weight -5
    # Only treat the check as failed after 2 consecutive failures; the priority is then reduced by weight (1-255)
    # fall 2
    # One successful check marks the service healthy again, without changing the priority
    # rise 1
}
vrrp_instance VI_1 {
    # Role of this keepalived node: MASTER on the primary, BACKUP on the standby
    state MASTER
    # Interface used for the HA health-check traffic; on CentOS 7, find it with `ip addr`
    interface ens33
    # virtual_router_id must be identical on master and backup; e.g. the last octet of the IP (must be between 1 and 255)
    virtual_router_id 51
    # Priority: within the same vrrp_instance the MASTER must be higher than the BACKUP; when the MASTER recovers, the BACKUP hands the VIP back automatically
    priority 100
    # VRRP advertisement interval in seconds; if no advertisement is received, the peer is considered down and a failover occurs
    advert_int 1
    # Authentication type and password; must match on master and backup
    authentication {
        # VRRP authentication type, mainly PASS or AH
        auth_type PASS
        # Password; must be identical on both servers or they cannot communicate
        auth_pass 1111
    }
    track_script {
        # Script to track: references the name defined in the vrrp_script block above; it is run periodically to adjust the priority
        chk_nginx
    }
    virtual_ipaddress {
        # VRRP HA virtual address; add more VIPs on separate lines
        192.168.211.140
    }
}
Send the configuration file to the 131 node
scp /etc/keepalived/keepalived.conf 192.168.211.131:/etc/keepalived/keepalived.conf
On the 131 node, only the following two settings need to be changed
state BACKUP
priority 90
Configure the keepalived monitoring script chk_nginx.sh on the master
Create a script for keepalived to execute
vi /etc/keepalived/chk_nginx.sh
#!/bin/bash
# Count the nginx processes and store the result in the variable counter
counter=`ps -C nginx --no-header |wc -l`
# If there is no nginx process, counter is 0
if [ $counter -eq 0 ];then
    # Try to start nginx
    echo "Keepalived Info: Try to start nginx" >> /var/log/messages
    /usr/local/nginx/sbin/nginx
    sleep 3
    if [ `ps -C nginx --no-header |wc -l` -eq 0 ];then
        # Log to the system messages file
        echo "Keepalived Info: Unable to start nginx" >> /var/log/messages
        # If nginx still did not start, stop the keepalived process
        # killall keepalived
        # or stop it via systemd
        systemctl stop keepalived
        exit 1
    else
        echo "Keepalived Info: Nginx service has been restored" >> /var/log/messages
        exit 0
    fi
else
    # nginx is running normally
    echo "Keepalived Info: Nginx detection is normal" >> /var/log/messages
    exit 0
fi
Then grant the script execute permission and test it
chmod +x chk_nginx.sh
./chk_nginx.sh
Restart keepalived on both machines
systemctl restart keepalived
systemctl status keepalived
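If keepalived should also come back automatically after a reboot, it is worth enabling the unit as well; this is not in the original steps but is standard systemd practice:

systemctl enable keepalived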
At this point, accessing .140 also works, which means the virtual IP was bound successfully. While testing, the log output written to messages can be followed in real time with the command below
tail -f /var/log/messages
# If nginx has been stopped
Keepalived Info: Try to start nginx
Keepalived Info: Nginx service has been restored
# If nginx is running normally
Keepalived Info: Nginx detection is normal
When the nginx check succeeds the script returns 0; when nginx is gone it returns 1. In this setup, however, the failover is not driven by that return value alone: when the script cannot restart nginx it stops keepalived itself, the node stops sending VRRP advertisements and releases the local VIP, and the virtual IP finally moves over to the other server.
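To exercise the failover end to end, a failure can be simulated on the master and the VIP watched as it moves; a sketch based on the addresses and interface used above:

# On the master (192.168.211.130): stop keepalived to simulate a node failure
systemctl stop keepalived
# On the backup (192.168.211.131): within a few seconds the VIP should appear on ens33
ip addr show ens33 | grep 192.168.211.140
# From any client: the site should still answer on the virtual IP
curl -s http://192.168.211.140/
# Back on the master: start keepalived again and the VIP moves back, since the master has the higher priority
systemctl start keepalived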