
A Detailed Guide to Efficient Load Balancing with nginx

With the explosive growth of Internet traffic, web servers must deliver both high performance and high availability. Load balancing and reverse proxying exist to meet exactly this need. This article walks through both techniques and shows how to use them to improve a site's performance and reliability.
### Load balancing: making servers work together
When traffic exceeds what a single server can handle, requests have to be spread across several servers; that is the whole idea of load balancing. Distributing one machine's load over many raises overall throughput and improves both scalability and reliability.
Concretely, user requests first arrive at a load-balancing server, which forwards each request to one of the web servers according to the configured rules. Each web server then only handles a fraction of the traffic, which greatly reduces the pressure on any single machine.
### Reverse proxying: raising server capacity
A reverse proxy lets the web tier handle more requests. Clients send their requests to the reverse proxy, which then forwards them to the backend servers.
With a reverse proxy in place, static resources can be served directly from the proxy itself instead of being fetched from the backend, leaving the backend servers free to concentrate on dynamic requests.
### nginx: a tool for both load balancing and reverse proxying
nginx is a high-performance web server that can act as both a load balancer and a reverse proxy. Simple load balancing is done with the upstream module: you define a list of servers, and requests are distributed round-robin by default. If requests from the same visitor should always be handled by the same backend server, you can enable ip_hash.
Once the upstream group is defined, you also add a matching proxy_pass inside the server block so that requests are forwarded to the servers in that group.
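As a quick sketch of how the two pieces fit together (the upstream name and addresses here are placeholders, not the environment used later in this article):

upstream backend {
    ip_hash;                        # pin each client IP to one backend
    server 192.168.0.11:80;
    server 192.168.0.12:80;
}

server {
    listen 80;
    location / {
        proxy_pass http://backend;  # forward requests to the upstream group
    }
}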
### A highly available nginx load balancer: keeping the service up
To make the nginx load balancer itself highly available, we can use Keepalived. Keepalived is a failover and high-availability solution: when the master node fails, it automatically moves the VIP (virtual IP) to the backup node, so the service stays reachable.
Keepalived is configured on both the master and the backup node with parameters such as the virtual IP address and the node priority. We also write a monitoring script that watches the state of Keepalived and nginx and reacts when something fails.
Together these techniques greatly improve a site's performance and reliability. What follows is the concrete setup; in practice the details will need to be adapted to your own environment.

Contents

A Detailed Guide to Efficient Load Balancing with nginx

nginx load balancing

nginx load balancing overview

Reverse proxying and load balancing

nginx load balancing configuration

Keepalived high availability for the nginx load balancer

Setting the Web servers' default pages

Enabling nginx load balancing and reverse proxying

Installing Keepalived

Configuring Keepalived

Writing a script to monitor Keepalived and nginx

Adding the monitoring script to the keepalived configuration

nginx load balancing overview

Load balancing is one of nginx's main use cases. When traffic is heavy, requests can be spread across multiple servers, so the load that one machine would otherwise carry is shared by several; this raises throughput. And if one server goes down, the others keep serving, which improves scalability and reliability.

The typical flow: a user request first reaches the load-balancing server, which then forwards it to one of the web servers according to the configured rules.

Reverse proxying and load balancing

nginx is commonly used as a reverse proxy in front of backend servers. This makes it easy to separate static from dynamic content and to balance load, which greatly increases overall capacity.

Dynamic/static separation with nginx simply means that, while reverse proxying, static resources are read directly from the path nginx publishes instead of being fetched from the backend servers.
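A minimal sketch of such a split might look like the following (the upstream name, paths and extension list are illustrative and not part of the deployment described below):

upstream appserver {
    server 192.168.0.21:8080;
}

server {
    listen 80;

    # static assets are served straight from the path published by nginx
    location ~* \.(html|css|js|png|jpg|gif)$ {
        root    /usr/local/nginx/html;
        expires 30d;
    }

    # everything else goes to the backend application servers
    location / {
        proxy_pass http://appserver;
    }
}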

Note that in this setup the static content on the proxy must be kept in sync with what the backend application expects; you can use Rsync for server-side synchronization, or shared storage such as NFS or MFS.
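For example, pushing the static files from the machine where they are maintained to the proxy could be as simple as the following (hostname and paths are placeholders):

rsync -az --delete /data/static/ root@proxy:/usr/local/nginx/html/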

The HTTP proxy module offers a lot of functionality; the directives used most often are proxy_pass and proxy_cache.

To use proxy_cache together with the ability to purge specific URLs from the cache, you need the third-party ngx_cache_purge module. It has to be compiled in when nginx is built, for example:

./configure --add-module=../ngx_cache_purge-1.0 ......
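Once the module is compiled in, a typical cache setup pairs proxy_cache_path in the http block with a purge location. The zone name, cache path and purge URL below are illustrative only; proxy_cache_purge is the directive provided by the ngx_cache_purge module:

http {
    proxy_cache_path /data/nginx/cache levels=1:2 keys_zone=webcache:64m max_size=1g inactive=60m;

    server {
        location / {
            proxy_pass http://backend;              # some upstream group
            proxy_cache webcache;
            proxy_cache_key $host$uri$is_args$args;
            proxy_cache_valid 200 302 10m;
        }

        # e.g. requesting /purge/index.html clears the cached copy of /index.html
        location ~ /purge(/.*) {
            proxy_cache_purge webcache $host$1$is_args$args;
        }
    }
}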

nginx implements simple load balancing through the upstream module; the upstream block must be defined inside the http block.

Inside the upstream block you define a list of servers. The default distribution is round-robin; if requests from the same visitor should always be handled by the same backend, enable ip_hash, for example:

upstream idfsoft.com {
  ip_hash;
  server 127.0.0.1:9080 weight=5;
  server 127.0.0.1:8080 weight=5;
  server 127.0.0.1:1111;
}

Note: this is still hash-based round-robin underneath, and because a client's IP can change (dynamic IPs, proxies, VPNs and so on), ip_hash cannot fully guarantee that the same client is always served by the same backend.
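If stronger affinity is needed, one alternative (not used in the setup below) is the generic hash directive from the stock upstream module, available since nginx 1.7.2, keyed on something more stable than the client IP, such as a session cookie:

upstream idfsoft.com {
  hash $cookie_jsessionid consistent;
  server 127.0.0.1:9080;
  server 127.0.0.1:8080;
}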

Once the upstream group is defined, add the following inside the server block:

server {
  location / {
    proxy_pass http://idfsoft.com;
  }
}

nginx load balancing configuration

Environment

| System  | IP              | Role                | Service |
| ------- | --------------- | ------------------- | ------- |
| centos8 | 192.168.222.250 | nginx load balancer | nginx   |
| centos8 | 192.168.222.137 | Web1 server         | apache  |
| centos8 | 192.168.222.138 | Web2 server         | nginx   |

The nginx load balancer is built from source, while the two web servers install nginx and apache from the yum repositories.

See my separate nginx post for the detailed source installation.
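For reference, the build on the load balancer boils down to roughly the following (the version number and configure options are illustrative; the dedicated post covers the full procedure):

[root@nginx ~]# yum -y install gcc make pcre-devel zlib-devel openssl-devel
[root@nginx ~]# tar xf nginx-1.22.0.tar.gz && cd nginx-1.22.0
[root@nginx nginx-1.22.0]# ./configure --prefix=/usr/local/nginx
[root@nginx nginx-1.22.0]# make && make install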

Setting the Web servers' default pages
Web1:

[root@Web1 ~]# yum -y install httpd   // install the service
[root@Web1 ~]# systemctl stop firewalld.service  // stop the firewall
[root@Web1 ~]# vim /etc/selinux/config 
SELINUX=disabled
[root@Web1 ~]# setenforce 0
[root@Web1 ~]# systemctl disable --now firewalld.service 
Removed /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@Web1 ~]# cd /var/www/html/
[root@Web1 html]# ls
[root@Web1 html]# echo "apache" > index.html  // write the test page
[root@Web1 html]# cat index.html 
apache
[root@Web1 html]# systemctl enable --now httpd
Created symlink /etc/systemd/system/multi-user.target.wants/httpd.service → /usr/lib/systemd/system/httpd.service.
[root@Web1 html]# ss -antl
State      Recv-Q     Send-Q          Local Address:Port           Peer Address:Port     process     
LISTEN     0          128                   0.0.0.0:22                  0.0.0.0:*                    
LISTEN     0          128                      [::]:22                     [::]:*                    
LISTEN     0          128                         *:80                        *:*                    

Visit the page in a browser:

Web2:

[root@Web2 ~]# yum -y install nginx  // install the service
[root@Web2 ~]# systemctl stop firewalld.service // stop the firewall 
[root@Web2 ~]# vim /etc/selinux/config 
SELINUX=disabled
[root@Web2 ~]# setenforce 0
[root@Web2 ~]# systemctl disable --now firewalld.service 
Removed /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@Web2 ~]# cd /usr/share/nginx/html/
[root@Web2 html]# ls
404.html  50x.html  index.html  nginx-logo.png  poweredby.png
[root@Web2 html]# echo "nginx" > index.html  // write the test page
[root@Web2 html]# cat index.html 
nginx
[root@Web2 html]# systemctl enable --now nginx.service 
Created symlink /etc/systemd/system/multi-user.target.wants/nginx.service → /usr/lib/systemd/system/nginx.service.
[root@Web2 html]# ss -antl
State      Recv-Q     Send-Q          Local Address:Port           Peer Address:Port     Process     
LISTEN     0          128                   0.0.0.0:111                 0.0.0.0:*                    
LISTEN     0          128                   0.0.0.0:80                  0.0.0.0:*                    
LISTEN     0          32              192.168.122.1:53                  0.0.0.0:*                    
LISTEN     0          128                   0.0.0.0:22                  0.0.0.0:*                    
LISTEN     0          128                      [::]:111                    [::]:*                    
LISTEN     0          128                      [::]:80                     [::]:*                    
LISTEN     0          128                      [::]:22                     [::]:*                    

Visit the page in a browser:

Enabling nginx load balancing and reverse proxying

[root@nginx ~]# vim /usr/local/nginx/conf/nginx.conf
...

upstream webserver {              // added inside the http block
    server 192.168.222.137;
    server 192.168.222.138;
}
...

 location / {               // modified inside the server block
            root   html;
            proxy_pass http://webserver;
        }

[root@nginx ~]# systemctl reload nginx.service 
// reload the configuration

Test: enter the load balancer's IP address in a browser.

Next, edit the nginx configuration on the load balancer to add a weight:

[root@nginx ~]# vim /usr/local/nginx/conf/nginx.conf
 upstream webserver {      // modified inside the http block
    server 192.168.222.137 weight=3;
    server 192.168.222.138;
}
[root@nginx ~]# systemctl reload nginx.service 
// reload the configuration
[root@nginx ~]# curl 192.168.222.250
apache
[root@nginx ~]# curl 192.168.222.250
apache
[root@nginx ~]# curl 192.168.222.250
apache
[root@nginx ~]# curl 192.168.222.250
nginx
[root@nginx ~]# curl 192.168.222.250
apache
[root@nginx ~]# curl 192.168.222.250
apache
[root@nginx ~]# curl 192.168.222.250
apache
[root@nginx ~]# curl 192.168.222.250
nginx
// With weight=3 on the apache backend, roughly three requests go to apache for every one that goes to nginx: three consecutive requests hit the weighted server before the rotation moves on. Weights are useful when the cluster contains older or lower-spec machines, because they let you shift load away from those servers.
[root@nginx ~]# vim /usr/local/nginx/conf/nginx.conf
 upstream webserver {    // modified inside the http block
     ip_hash; 
    server 192.168.222.137 weight=3;
    server 192.168.222.138;
}
[root@nginx ~]# systemctl reload nginx.service 
// reload the configuration
[root@nginx ~]# curl 192.168.222.250
nginx
[root@nginx ~]# curl 192.168.222.250
nginx
[root@nginx ~]# curl 192.168.222.250
nginx
[root@nginx ~]# curl 192.168.222.250
nginx
[root@nginx ~]# curl 192.168.222.250
nginx
[root@nginx ~]# curl 192.168.222.250
nginx
[root@nginx ~]# curl 192.168.222.250
nginx
// Every response now comes from the nginx backend: with ip_hash, once a client has been mapped to a backend, that backend keeps answering it, which is why this client keeps seeing nginx. As noted earlier, the mechanism is still hash-based distribution, so it cannot absolutely guarantee that a client is always served by the same backend.

Keepalived high availability for the nginx load balancer

Test environment

| System  | Role                        | Service           | IP              |
| ------- | --------------------------- | ----------------- | --------------- |
| centos8 | nginx load balancer, master | nginx, keepalived | 192.168.222.250 |
| centos8 | nginx load balancer, backup | nginx, keepalived | 192.168.222.139 |
| centos8 | Web1 server                 | apache            | 192.168.222.137 |
| centos8 | Web2 server                 | nginx             | 192.168.222.138 |

See my separate nginx post for the detailed source installation.
VIP: 192.168.222.133

Setting the Web servers' default pages

Web1 and Web2 are prepared exactly as in the previous section: install apache/nginx, stop and disable firewalld, disable SELinux, and write "apache" and "nginx" respectively into the default index.html.

Enabling nginx load balancing and reverse proxying

On the master node of the Keepalived pair, nginx needs to be enabled to start at boot.
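Assuming the same nginx.service unit used throughout this article, that is just:

[root@master ~]# systemctl enable --now nginx.service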
master:

[root@master ~]# systemctl status nginx.service 
● nginx.service - nginx server daemon
   Loaded: loaded (/usr/lib/systemd/system/nginx.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2022-10-18 21:27:54 CST; 1h 1min ago
  Process: 46768 ExecStart=/usr/local/nginx/sbin/nginx (code=exited, status=0/SUCCESS)
 Main PID: 46769 (nginx)
    Tasks: 2 (limit: 12221)
   Memory: 2.6M
   CGroup: /system.slice/nginx.service
           ├─46769 nginx: master process /usr/local/nginx/sbin/nginx
           └─46770 nginx: worker process

Oct 18 21:27:54 nginx systemd[1]: Starting nginx server daemon...
Oct 18 21:27:54 nginx systemd[1]: Started nginx server daemon.
[root@master ~]# vim /usr/local/nginx/conf/nginx.conf
...

upstream webserver {              // added inside the http block
    server 192.168.222.137;
    server 192.168.222.138;
}
...

 location / {               // modified inside the server block
            root   html;
            proxy_pass http://webserver;
        }

[root@master ~]# systemctl reload nginx.service 
// reload the configuration

Test: enter the load balancer's IP address in a browser.

backup:
On the backup node, nginx is not enabled at boot: if it were running all the time, requests to the VIP might not be answered by the intended node. Start it manually when you want to test.

[root@backup ~]# systemctl status nginx.service 
● nginx.service - nginx server daemon
   Loaded: loaded (/usr/lib/systemd/system/nginx.service; disabled; vendor preset: disabled)
   Active: active (running) since Tue 2022-10-18 22:25:31 CST; 1s ago
  Process: 73641 ExecStart=/usr/local/nginx/sbin/nginx (code=exited, status=0/SUCCESS)
 Main PID: 73642 (nginx)
    Tasks: 2 (limit: 12221)
   Memory: 2.7M
   CGroup: /system.slice/nginx.service
           ├─73642 nginx: master process /usr/local/nginx/sbin/nginx
           └─73643 nginx: worker process

Oct 18 22:25:31 backup systemd[1]: Starting nginx server daemon...
Oct 18 22:25:31 backup systemd[1]: Started nginx server daemon.
[root@backup ~]# vim /usr/local/nginx/conf/nginx.conf
...

upstream webserver {              // added inside the http block
    server 192.168.222.137;
    server 192.168.222.138;
}
...

 location / {               // modified inside the server block
            root   html;
            proxy_pass http://webserver;
        }
[root@backup ~]# systemctl reload nginx.service 
// reload the configuration

Visit the page in a browser using the load balancer's IP address.

Installing Keepalived

master:

[root@master ~]# dnf list all |grep keepalived  // check that the package is available
Failed to set locale, defaulting to C.UTF-8
keepalived.x86_64                                      2.1.5-6.el8                                            AppStream 
[root@master ~]# dnf -y install keepalived

backup:

[root@backup ~]# dnf list all |grep keepalived // check that the package is available
Failed to set locale, defaulting to C.UTF-8
Module yaml error: Unexpected key in data: static_context [line 9 col 3]
Module yaml error: Unexpected key in data: static_context [line 9 col 3]
Module yaml error: Unexpected key in data: static_context [line 9 col 3]
Module yaml error: Unexpected key in data: static_context [line 9 col 3]
Module yaml error: Unexpected key in data: static_context [line 9 col 3]
Module yaml error: Unexpected key in data: static_context [line 9 col 3]
Module yaml error: Unexpected key in data: static_context [line 9 col 3]
Module yaml error: Unexpected key in data: static_context [line 9 col 3]
keepalived.x86_64                                                 2.1.5-6.el8                                            AppStream   
[root@backup ~]# dnf -y install keepalived

Configuring Keepalived

master

[root@master ~]# cd /etc/keepalived/
[root@master keepalived]# ls
keepalived.conf
[root@master keepalived]# mv keepalived.conf{,-bak}  // back up the original config
[root@master keepalived]# ls
keepalived.conf-bak
[root@master keepalived]# vim keepalived.conf  // write a new config
[root@master keepalived]# cat keepalived.conf
! Configuration File for keepalived

global_defs {
   router_id lb01
}

vrrp_instance VI_1 {        // the instance name must match on master and backup
    state BACKUP
    interface ens33      // network interface
    virtual_router_id 51
    priority 100     // higher than the backup node
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass tushanbu   // password (any shared secret)
    }
    virtual_ipaddress {
        192.168.222.133    // the highly available virtual IP (VIP)
    }
}

virtual_server 192.168.222.133 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    persistence_timeout 50
    protocol TCP

    real_server 192.168.222.250 80 {  // master node IP
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }

    real_server 192.168.222.139 80 {   // backup node IP
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
[root@master keepalived]# systemctl enable --now keepalived.service 
Created symlink /etc/systemd/system/multi-user.target.wants/keepalived.service → /usr/lib/systemd/system/keepalived.service.

backup:

[root@backup ~]# cd /etc/keepalived/
[root@backup keepalived]# ls
keepalived.conf
[root@backup keepalived]# mv keepalived.conf{,-bak} // back up the original config
[root@backup keepalived]# ls
keepalived.conf-bak
[root@backup keepalived]# vim keepalived.conf // write a new config
[root@backup keepalived]# cat keepalived.conf
! Configuration File for keepalived

global_defs {
   router_id lb02    
}

vrrp_instance VI_1 {       // the instance name must match on master and backup
    state BACKUP
    interface ens33      // network interface
    virtual_router_id 51
    priority 90     // lower than the master node
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass tushanbu   // password (any shared secret)
    }
    virtual_ipaddress {
        192.168.222.133    // the highly available virtual IP (VIP)
    }
}

virtual_server 192.168.222.133 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    persistence_timeout 50
    protocol TCP

    real_server 192.168.222.250 80 {   // master node IP
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }

    real_server 192.168.222.139 80 {   // backup node IP
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
[root@backup keepalived]# systemctl enable --now keepalived.service 
Created symlink /etc/systemd/system/multi-user.target.wants/keepalived.service → /usr/lib/systemd/system/keepalived.service.
[root@backup keepalived]# systemctl start nginx
// start nginx now so that the failover can be tested

Check where the VIP is
master:

[root@master keepalived]# ip a
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
2: ens33: mtu 1500 qdisc fq_codel state UP group default qlen 1000
    inet 192.168.222.250/24 brd 192.168.222.255 scope global noprefixroute ens33
    inet 192.168.222.133/32 scope global ens33

backup:

[root@backup keepalived]# ip a
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
2: ens33: mtu 1500 qdisc fq_codel state UP group default qlen 1000
    inet 192.168.222.139/24 brd 192.168.222.255 scope global noprefixroute ens33

// The VIP sits on the master, as expected: in the Keepalived configuration the master's priority is higher than the backup's.

Visit the VIP in a browser:

master:

[root@master keepalived]# curl 192.168.222.133
apache
[root@master keepalived]# curl 192.168.222.133
nginx

Now stop nginx and keepalived on the master:

[root@master keepalived]# systemctl stop nginx.service 
[root@master keepalived]# systemctl stop keepalived.service 
[root@master keepalived]# ip a
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
2: ens33: mtu 1500 qdisc fq_codel state UP group default qlen 1000
    inet 192.168.222.250/24 brd 192.168.222.255 scope global noprefixroute ens33
// the VIP is no longer on the master

backup:

[root@backup keepalived]# systemctl enable --now keepalived
[root@backup keepalived]# systemctl start nginx.service 
[root@backup keepalived]# ip a
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
2: ens33: mtu 1500 qdisc fq_codel state UP group default qlen 1000
    inet 192.168.222.139/24 brd 192.168.222.255 scope global noprefixroute ens33
    inet 192.168.222.133/32 scope global ens33
// the VIP has appeared on the backup, which has taken over the master role
[root@backup keepalived]# curl 192.168.222.133
apache
[root@backup keepalived]# curl 192.168.222.133
nginx

Visit the VIP in a browser:

Even with one of the nginx load balancers down, the site stays reachable; that is exactly what the highly available load balancer configuration buys you.

Restart nginx and keepalived on the master:

[root@master keepalived]# systemctl enable --now keepalived
[root@master keepalived]# systemctl enable --now nginx
[root@master keepalived]# ip a
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
2: ens33: mtu 1500 qdisc fq_codel state UP group default qlen 1000
    inet 192.168.222.250/24 brd 192.168.222.255 scope global noprefixroute ens33
    inet 192.168.222.133/32 scope global ens33
// the VIP is back on the master node

Writing a script to monitor Keepalived and nginx

master:

[root@master keepalived]# cd
[root@master ~]# mkdir /scripts
[root@master ~]# cd /scripts/
[root@master scripts]# vim check_nginx.sh
[root@master scripts]# cat check_nginx.sh
#!/bin/bash
# Count running nginx processes; if none are left, stop keepalived so the VIP fails over to the backup.
nginx_status=$(ps -ef | grep -Ev "grep|$0" | grep 'nginx' | wc -l)
if [ $nginx_status -lt 1 ]; then
    systemctl stop keepalived
fi
[root@master scripts]# chmod +x check_nginx.sh 
[root@master scripts]# ll
total 4
-rwxr-xr-x. 1 root root 151 Oct 19 00:32 check_nginx.sh
[root@master scripts]# vim notify.sh
[root@master scripts]# cat notify.sh 
#!/bin/bash
# Called by keepalived on state transitions:
#   master -> make sure nginx is running
#   backup -> make sure nginx is stopped
case "$1" in
    master)
        nginx_status=$(ps -ef | grep -Ev "grep|$0" | grep 'nginx' | wc -l)
        if [ $nginx_status -lt 1 ]; then
            systemctl start nginx
        fi
    ;;
    backup)
        nginx_status=$(ps -ef | grep -Ev "grep|$0" | grep 'nginx' | wc -l)
        if [ $nginx_status -gt 0 ]; then
            systemctl stop nginx
        fi
    ;;
    *)
        echo "Usage: $0 master|backup VIP"
    ;;
esac

[root@master scripts]# chmod +x notify.sh 
[root@master scripts]# ll
total 8
-rwxr-xr-x. 1 root root 151 Oct 19 00:32 check_nginx.sh
-rwxr-xr-x. 1 root root 399 Oct 19 00:35 notify.sh

backup:
Create the directory that will hold the scripts ahead of time:

[root@backup keepalived]# cd
[root@backup ~]# mkdir  /scripts
[root@backup ~]# cd /scripts/

Copy the script from the master into the directory just created on the backup:

[root@master scripts]# scp notify.sh 192.168.222.139:/scripts/
root@192.168.222.139's password: 
notify.sh                                                          100%  399   216.0KB/s   00:00    
[root@backup scripts]# ls
notify.sh
[root@backup scripts]# cat notify.sh 
#!/bin/bash
# Called by keepalived on state transitions:
#   master -> make sure nginx is running
#   backup -> make sure nginx is stopped
case "$1" in
    master)
        nginx_status=$(ps -ef | grep -Ev "grep|$0" | grep 'nginx' | wc -l)
        if [ $nginx_status -lt 1 ]; then
            systemctl start nginx
        fi
    ;;
    backup)
        nginx_status=$(ps -ef | grep -Ev "grep|$0" | grep 'nginx' | wc -l)
        if [ $nginx_status -gt 0 ]; then
            systemctl stop nginx
        fi
    ;;
    *)
        echo "Usage: $0 master|backup VIP"
    ;;
esac

Adding the monitoring script to the keepalived configuration

master:

[root@master scripts]# cd
[root@master ~]# vim /etc/keepalived/keepalived.conf
[root@master ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived
 
global_defs {
   router_id lb01
}
 
vrrp_script nginx_check {                               // added
    script "/scripts/check_nginx.sh"                    // added
    interval 1                                          // added
    weight -20                                          // added
}                                                       // added
 
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass tushanbu
    }
    virtual_ipaddress {
        192.168.222.133
    }
    track_script {                      // added
        nginx_check                     // added
    }                                   // added
    notify_master "/scripts/notify.sh master"  // added
}
virtual_server 192.168.222.133 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    persistence_timeout 50
    protocol TCP
 
    real_server 192.168.222.250 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
 
    real_server 192.168.222.139 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
[root@master ~]# systemctl restart keepalived.service 

backup:
The backup does not need to check whether nginx is healthy: it simply starts nginx when it is promoted to MASTER and stops it when it drops back to BACKUP.

[root@backup scripts]# cd
[root@backup ~]# vim /etc/keepalived/keepalived.conf
[root@backup ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived
 
global_defs {
   router_id lb02
}
 
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass tushanbu
    }
    virtual_ipaddress {
        192.168.222.133
    }
    notify_master "/scripts/notify.sh master"           // added
    notify_backup "/scripts/notify.sh backup"           // added
}
virtual_server 192.168.222.133 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    persistence_timeout 50
    protocol TCP
 
    real_server 192.168.222.250 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
 
    real_server 192.168.222.139 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
[root@backup ~]# systemctl restart keepalived.service 

Testing
First, check the state while everything is running normally:

[root@master ~]# ip a
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
2: ens33: mtu 1500 qdisc fq_codel state UP group default qlen 1000
    inet 192.168.222.250/24 brd 192.168.222.255 scope global noprefixroute ens33
    inet 192.168.222.133/32 scope global ens33
// the VIP is on the master node
[root@master ~]# curl 192.168.222.133
apache
[root@master ~]# curl 192.168.222.133
nginx

Stop nginx on the master:

[root@master ~]# systemctl stop nginx.service 
[root@master ~]# ss -antl
State      Recv-Q     Send-Q          Local Address:Port           Peer Address:Port     Process     
LISTEN     0          128                   0.0.0.0:22                  0.0.0.0:*                    
LISTEN     0          128                      [::]:22                     [::]:*                    
[root@master ~]# ip a
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
2: ens33: mtu 1500 qdisc fq_codel state UP group default qlen 1000
    inet 192.168.222.250/24 brd 192.168.222.255 scope global noprefixroute ens33
// the VIP is gone from the master

backup:

[root@backup ~]# ip a
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
2: ens33: mtu 1500 qdisc fq_codel state UP group default qlen 1000
    inet 192.168.222.139/24 brd 192.168.222.255 scope global noprefixroute ens33
    inet 192.168.222.133/32 scope global ens33
// the backup has taken over as the master node
[root@backup ~]# curl 192.168.222.133
apache
[root@backup ~]# curl 192.168.222.133
nginx

Bring nginx back up on the master:

[root@master ~]# systemctl restart keepalived.service 
[root@master ~]# systemctl restart nginx.service 
[root@master ~]# ip a
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
2: ens33: mtu 1500 qdisc fq_codel state UP group default qlen 1000
    inet 192.168.222.250/24 brd 192.168.222.255 scope global noprefixroute ens33
    inet 192.168.222.133/32 scope global ens33
// the VIP has moved back to the master
[root@master ~]# curl 192.168.222.133
apache
[root@master ~]# curl 192.168.222.133
nginx

