Docker Advanced: Container Networking, Compose, Harbor, Swarm

Continuing from the intro article (Docker-入门): last time we covered the core Docker commands, image and container operations, the two ways of building custom images (committing a container and writing a Dockerfile), and data volumes for persisting container data and sharing it between containers — the things used most in day-to-day work.

I. NICs and Network Virtualization in Linux

Communication between containers on the same Docker host is built on the network virtualization features the OS provides, so a look at Linux NICs and virtualization helps in understanding Docker's container networking.

1. Viewing NIC information

Command               Description
ip a                  Show all NIC information
ip link show          Show NIC link status
ls /sys/class/net     The sysfs path where NICs live; lists all NIC entries

Run the command:

-bash-4.2# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:8c:4f:3b brd ff:ff:ff:ff:ff:ff
    inet 192.168.42.4/24 brd 192.168.42.255 scope global noprefixroute dynamic ens33
       valid_lft 2610sec preferred_lft 2610sec
    inet6 2409:8920:5020:94ce:803d:cd84:9c91:3d7d/64 scope global noprefixroute dynamic
       valid_lft 1167sec preferred_lft 1167sec
    inet6 2409:8921:5010:fe4:e1c5:8f4d:7a68:ecd5/64 scope global noprefixroute dynamic
       valid_lft 3510sec preferred_lft 3510sec
    inet6 fe80::9de3:e8e0:6be0:4b41/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:1a:96:3a:45 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:1aff:fe96:3a45/64 scope link
       valid_lft forever preferred_lft forever

NIC types:

  • lo: the local loopback interface
  • ens33: the NIC connected to the network
  • docker0: Docker's bridge interface

Key fields:

  • state: UP/DOWN/UNKNOWN, etc.
  • link/ether: the MAC address
  • inet: the bound IP address

2. NIC operations

Command                           Description
ip addr add IP/mask dev NIC       Add an IP to a NIC
ip addr delete IP/mask dev NIC    Delete an IP from a NIC
ip link set NIC up                Enable a NIC
ip link set NIC down              Disable a NIC
ifup NIC                          Enable a NIC
ifdown NIC                        Disable a NIC

Add an IP to a NIC:

-bash-4.2# ip addr add 192.168.42.5/24 dev ens33
-bash-4.2# ip a |grep ens33
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    inet 192.168.42.4/24 brd 192.168.42.255 scope global noprefixroute dynamic ens33
    inet 192.168.42.5/24 scope global secondary ens33

Delete the IP:

-bash-4.2# ip addr delete 192.168.42.5/24 dev ens33
-bash-4.2# ip a |grep ens33
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    inet 192.168.42.4/24 brd 192.168.42.255 scope global noprefixroute dynamic ens33

3. Network namespaces

A network namespace is an isolated network environment; the kernel can hold many of them, and each has its own interfaces, addresses, and routes. Namespaces are the key building block of network virtualization.

3.1 Namespace operations

Command                       Description
ip netns add NAME             Create a namespace
ip netns list                 List all namespaces
ip netns delete NAME          Delete a namespace
ip netns exec NAME COMMAND    Run a NIC command inside the namespace

Create a namespace, then check its NIC information:

-bash-4.2# ip netns add ns1
-bash-4.2# ip netns list
ns1
-bash-4.2# ip netns exec ns1 ip a
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

As you can see, the loopback interface in a freshly created namespace is down by default.
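
If you want loopback usable inside the namespace, bring it up first — a quick sketch with the same iproute2 tooling:

# enable loopback inside ns1
ip netns exec ns1 ip link set lo up
# check: 127.0.0.1 should now answer from within the namespace
ip netns exec ns1 ping -c 1 127.0.0.1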

3.2 veth pair

So far the namespace has only its loopback interface and cannot talk to anything outside it, including the host.
A veth pair is a pair of connected virtual ports — the bridge through which a namespace communicates with the outside world.

Command                                        Description
ip link add NAME1 type veth peer name NAME2    Create a veth pair
ip link set NIC netns NAMESPACE                Assign a NIC to a namespace

Create a veth pair:

-bash-4.2# ip link add veth-ns1 type veth peer name veth-ns2
-bash-4.2# ip a | grep veth
80: veth-ns2@veth-ns1: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN group default qlen 1000
81: veth-ns1@veth-ns2: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN group default qlen 1000
-bash-4.2# ip addr add 192.168.42.1/24 dev veth-ns1
-bash-4.2# ip a | grep veth
80: veth-ns2@veth-ns1: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN group default qlen 1000
81: veth-ns1@veth-ns2: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN group default qlen 1000
    inet 192.168.42.1/24 scope global veth-ns1

Move veth-ns1 into the ns1 namespace, assign it an IP, and bring it up:

-bash-4.2# ip link set veth-ns1 netns ns1
-bash-4.2# ip netns exec ns1 ip addr add 192.168.42.1/24 dev veth-ns1
-bash-4.2# ip netns exec ns1 ip link set veth-ns1 up
-bash-4.2# ip netns exec ns1 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
81: veth-ns1@if80: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state LOWERLAYERDOWN group default qlen 1000
    link/ether 66:66:f8:22:c4:ae brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 192.168.42.1/24 scope global veth-ns1
       valid_lft forever preferred_lft forever
    inet6 fe80::6466:f8ff:fe22:c4ae/64 scope link
       valid_lft forever preferred_lft forever

On the host, create a second namespace, ns2, and repeat the same steps:

-bash-4.2# ip netns add ns2
-bash-4.2# ip netns list
ns2
ns1 (id: 0)
-bash-4.2# ip link set veth-ns2 netns ns2
-bash-4.2# ip netns exec ns2 ip addr add 192.168.42.2/24 dev veth-ns2
-bash-4.2# ip netns exec ns2 ip link set veth-ns2 up
-bash-4.2# ip netns exec ns2 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
80: veth-ns2@if81: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether c2:cd:24:23:ae:bc brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 192.168.42.2/24 scope global veth-ns2
       valid_lft forever preferred_lft forever
    inet6 fe80::c0cd:24ff:fe23:aebc/64 scope link
       valid_lft forever preferred_lft forever

Ping from one namespace to the other:

-bash-4.2# ip netns exec ns1 ping 192.168.42.2
PING 192.168.42.2 (192.168.42.2) 56(84) bytes of data.
64 bytes from 192.168.42.2: icmp_seq=1 ttl=64 time=0.618 ms
64 bytes from 192.168.42.2: icmp_seq=2 ttl=64 time=0.045 ms
^C
--- 192.168.42.2 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1005ms
rtt min/avg/max/mdev = 0.045/0.331/0.618/0.287 ms
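
When you are done experimenting, delete the namespaces; removing a namespace also removes the interfaces it holds, which destroys the veth pair:

ip netns delete ns1
ip netns delete ns2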

II. Docker Container Network Communication

1. Preparation

The official Tomcat image does not ship with extra utilities, so we build our own Tomcat image.

The Dockerfile: ADD copies the Tomcat archive into the image (extracting it automatically), then a JDK is installed:

# base image
FROM centos-yum:1.0
MAINTAINER aruba
# declare a variable
ENV path /usr/local
# set the working directory
WORKDIR $path
# install tomcat and java; ADD auto-extracts the archive
ADD apache-tomcat-8.5.81.tar.gz $path
RUN yum install -y java-1.8.0-openjdk
EXPOSE 8080
# startup command: bash blocks on stdin (containers are run with -i),
# keeping a foreground process alive while startup.sh launches Tomcat
CMD /bin/bash | $path/apache-tomcat-8.5.81/bin/startup.sh

Build the image:

docker build -f dockerfile3 -t centos-tomcat:latest .

Finally, create two containers:

-bash-4.2# docker run -i -d --name tm1 -p 8081:8080 tomcat
-bash-4.2# docker run -i -d --name tm2 -p 8082:8080 tomcat

2. Container networking: Bridge

2.1 The containers' networks

Containers use bridge mode by default. Check both containers' IP information:

-bash-4.2# docker exec -it tm1 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
140: eth0@if141: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
-bash-4.2# docker exec -it tm2 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
142: eth0@if143: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.17.0.3/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever

tm1's outward-facing NIC eth0@if141 has IP 172.17.0.2, and tm2's eth0@if143 has 172.17.0.3; the two containers can ping each other:

-bash-4.2# docker exec -it tm2 ping 172.17.0.2
PING 172.17.0.2 (172.17.0.2) 56(84) bytes of data.
64 bytes from 172.17.0.2: icmp_seq=1 ttl=64 time=1.37 ms
64 bytes from 172.17.0.2: icmp_seq=2 ttl=64 time=0.047 ms
^C
--- 172.17.0.2 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 0.047/0.710/1.373/0.663 ms

2.2 The host's network information

Check the host's IP information:

-bash-4.2# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:8c:4f:3b brd ff:ff:ff:ff:ff:ff
    inet 192.168.42.4/24 brd 192.168.42.255 scope global noprefixroute dynamic ens33
       valid_lft 3446sec preferred_lft 3446sec
    inet6 2409:8921:5010:fe4:e1c5:8f4d:7a68:ecd5/64 scope global noprefixroute dynamic
       valid_lft 3599sec preferred_lft 3599sec
    inet6 fe80::9de3:e8e0:6be0:4b41/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:1a:96:3a:45 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:1aff:fe96:3a45/64 scope link
       valid_lft forever preferred_lft forever
141: veth346c0f4@if140: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
    link/ether 52:27:e5:72:82:1d brd ff:ff:ff:ff:ff:ff link-netnsid 2
    inet6 fe80::5027:e5ff:fe72:821d/64 scope link
       valid_lft forever preferred_lft forever
143: veth3ed01fc@if142: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
    link/ether 72:38:9b:25:ca:98 brd ff:ff:ff:ff:ff:ff link-netnsid 3
    inet6 fe80::7038:9bff:fe25:ca98/64 scope link
       valid_lft forever preferred_lft forever

Besides the three NICs from before, two new ones have appeared: veth346c0f4@if140 and veth3ed01fc@if142, both attached to docker0 (whose IP is 172.17.0.1). In other words, for every container it creates, Docker also creates a veth-pair-like mapping on the host. Containers share docker0's network, and every container on that subnet can reach the others.
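
You can verify the pairing yourself — a small sketch using iproute2 (the interface names and indexes will differ on your machine):

# list the interfaces enslaved to the docker0 bridge
bridge link | grep docker0
# a container's eth0 reports the ifindex of its host-side peer:
# here it prints 141, and host interface 141 is veth346c0f4@if140
docker exec tm1 cat /sys/class/net/eth0/iflink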

3. Docker network operations

Command                                                 Description
docker network ls                                       List the networks
docker network inspect NETWORK                          Show a network's details
docker network create [--subnet=CIDR -d DRIVER] NAME    Create a new network
docker network connect NETWORK CONTAINER                Attach a container to a network
docker network rm NETWORK                               Remove a network
docker run --network NETWORK IMAGE                      Set a container's network at creation

Docker ships with only three by default:

-bash-4.2# docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
965bd4c85719   bridge    bridge    local
38f75ca6b94e   host      host      local
1c340dce736e   none      null      local

3.1 Creating a new network

-bash-4.2# docker network create --subnet=172.18.0.0/24 tomcat-net
1b9a69c1df7359eda6652827e137661e6796f9524245b734200c69ed377d571d

3.2 Assigning a network to a container

Specify the network when creating the container:

-bash-4.2# docker run -i -d --name tm3 -p 8083:8080 --network tomcat-net centos-tomcat
b847abe2ab98501261a075e4a3282fd8fc804b7a727ba802cadee8c1e7e114c1

Check its network information:

-bash-4.2# docker exec -it tm3 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
151: eth0@if152: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:ac:12:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.18.0.2/24 brd 172.18.0.255 scope global eth0
       valid_lft forever preferred_lft forever

3.3 Attaching an additional network

Because tm3 and tm1 sit on different subnets, pings between them fail:

-bash-4.2# docker exec -it tm3 ping 172.17.0.2
PING 172.17.0.2 (172.17.0.2) 56(84) bytes of data.
^C
--- 172.17.0.2 ping statistics ---
10 packets transmitted, 0 received, 100% packet loss, time 9003ms

Attach tm1 to tomcat-net and try again; note that on a user-defined network, containers can also be reached by name:

-bash-4.2# docker network connect tomcat-net tm1
-bash-4.2# docker exec -it tm3 ping tm1
PING tm1 (172.18.0.3) 56(84) bytes of data.
64 bytes from tm1.tomcat-net (172.18.0.3): icmp_seq=1 ttl=64 time=0.117 ms
64 bytes from tm1.tomcat-net (172.18.0.3): icmp_seq=2 ttl=64 time=0.048 ms
64 bytes from tm1.tomcat-net (172.18.0.3): icmp_seq=3 ttl=64 time=0.048 ms
^C
--- tm1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2006ms
rtt min/avg/max/mdev = 0.048/0.071/0.117/0.032 ms

4. Network mode: Host

Host mode shares the host machine's network stack, so no port mapping is needed.

-bash-4.2# docker run -d -i --name tm4 --network host centos-tomcat
3520c1b8d102bad92248d1193938aa8eabf62af456780f651c8857d7e2151992
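
Because tm4 shares the host's stack, Tomcat answers directly on the host's own port 8080 — a quick check (assuming curl is installed):

# no -p mapping was given; the service listens on the host port itself
curl -I http://localhost:8080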

Opening the host's IP on port 8080 in a browser also reaches Tomcat (screenshot omitted).

5. Network mode: None

None mode allows no outside communication; the container has only its loopback NIC:

-bash-4.2# docker run -i -d --name tm5 --network none centos-tomcat
18fe219a4f0e77f4a4cd8f756eec72cc90889f34b7def57326cceb09813c0e29
-bash-4.2# docker exec -it tm5 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever

III. Docker Compose

A host often runs many containers, and the containers may depend on each other; starting them all by hand every time is tedious. Compose is a tool for defining and running multi-container applications: a YAML file configures every service the application needs, and a single docker-compose up creates and starts them all.

Official docs: https://docs.docker.com/compose

1. Installing Compose

Install it directly with yum:

-bash-4.2# yum install -y docker-compose-plugin
-bash-4.2# docker compose version
Docker Compose version v2.6.0

2. Starting from the official demo

We'll take Compose for a first spin with the official demo: https://docs.docker.com/compose/gettingstarted/

2.1 Create the project directory

mkdir composetest
cd composetest

2.2 Create app.py

vi app.py

Contents:

import time

import redis
from flask import Flask

app = Flask(__name__)
cache = redis.Redis(host='redis', port=6379)

def get_hit_count():
    retries = 5
    while True:
        try:
            return cache.incr('hits')
        except redis.exceptions.ConnectionError as exc:
            if retries == 0:
                raise exc
            retries -= 1
            time.sleep(0.5)

@app.route('/')
def hello():
    count = get_hit_count()
    return 'Hello World! I have been seen {} times.\n'.format(count)

2.3 Create requirements.txt

vi requirements.txt

Contents:

flask
redis

2.4 Create the Dockerfile

vi Dockerfile

Contents:

# syntax=docker/dockerfile:1
FROM python:3.7-alpine
WORKDIR /code
ENV FLASK_APP=app.py
ENV FLASK_RUN_HOST=0.0.0.0
RUN apk add --no-cache gcc musl-dev linux-headers
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
EXPOSE 5000
COPY . .
CMD ["flask", "run"]

2.5 Create docker-compose.yml

vi docker-compose.yml

Contents:

version: "3.9"
services:
  web:
    build: .
    ports:
      - "8000:5000"
  redis:
    image: "redis:alpine"

2.6 One-command startup

docker compose up

Once the image pulls and the build finish, log output from both services indicates a successful start (screenshot omitted).

In another terminal, docker images and docker ps show the newly built image and the running containers (screenshot omitted).

Visit http://localhost:8000 in a browser to see the hit counter (screenshot omitted).

3. Configuration reference

As the demo shows, the heart of Compose is the yml file, which configures each container and the dependencies between containers. There are many options; see the official reference: https://docs.docker.com/compose/compose-file/compose-file-v3/

The most common ones:

version: "3.9"            # compose file version
services:                 # the services
  service1:
    build: .              # path to the Dockerfile
    image: centos         # image the container uses
    expose:               # port exposed to other services
      - "8080"
    restart: always       # restart automatically
  service2:
    depends_on:           # start after service1
      - service1
    ports:                # port mappings
      - "8000:5000"
    volumes:              # data volumes
      # - host absolute path or volume name : path inside the container
      - db_data:/var/lib/mysql        # named-volume form
      - /root/mysql:/var/lib/mysql2   # absolute-path form
    environment:          # environment variables
volumes:                  # named-volume declarations
  # name: [driver options, or {} for an auto-managed volume]
  db_data: {}
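
A convenient way to check a file against this reference is docker compose config, which validates the YAML and prints the fully resolved configuration:

# run from the project directory; prints the merged config, or an error if invalid
docker compose config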

4. Common commands

docker compose commands must all be run from the project's working directory, e.g. the composetest directory above.

Command                            Description
docker compose up                  Create services from the yml; -f picks the file, -d runs detached
docker compose ps                  List the running services
docker compose images              List the images
docker compose stop/start          Stop / start the services
docker compose down                Stop and remove containers and networks (add -v to also remove volumes)
docker compose exec SERVICE CMD    Run a command in a service
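
For example, to run the demo above detached with an explicitly chosen file (file name as in the demo):

docker compose -f docker-compose.yml up -d   # create and start, detached
docker compose ps                            # confirm the services are up
docker compose down                          # stop and remove (add -v to drop volumes too)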

5. Deploying a WordPress blog

Create a directory:

mkdir my_wordpress
cd my_wordpress

The docker-compose.yml file:

version: "3.9"

services:
  db:
    image: mysql:5.7
    volumes:
      - db_data:/var/lib/mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: somewordpress
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress

  wordpress:
    depends_on:
      - db
    image: wordpress:latest
    volumes:
      - wordpress_data:/var/www/html
    ports:
      - "8000:80"
    restart: always
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: wordpress
      WORDPRESS_DB_NAME: wordpress

volumes:
  db_data: {}
  wordpress_data: {}

Bring it up with one command:

docker compose up

Visit http://localhost:8000 in a browser to reach the WordPress setup page (screenshot omitted).

6. Horizontal scaling

Scaling out is simple too; it takes one command:

docker compose up --scale SERVICE=COUNT -d

In practice:

-bash-4.2# docker compose ps
NAME                       COMMAND                  SERVICE     STATUS    PORTS
my_wordpress-db-1          "docker-entrypoint.s…"   db          running   33060/tcp
my_wordpress-wordpress-1   "docker-entrypoint.s…"   wordpress   running   0.0.0.0:8000->80/tcp, :::8000->80/tcp
-bash-4.2# docker compose up --scale db=4 -d
[+] Running 5/5
 ⠿ Container my_wordpress-db-4          St...     4.3s
 ⠿ Container my_wordpress-db-1          St...     4.7s
 ⠿ Container my_wordpress-db-2          St...     5.2s
 ⠿ Container my_wordpress-db-3          St...     5.1s
 ⠿ Container my_wordpress-wordpress-1   Started   4.3s

IV. Harbor

Harbor is a private image registry. Real projects inevitably build their own images, and those need to be pushed to a registry so other developers and ops staff can pull them. Let's first look at pushing to Docker Hub.

1. Pushing images to Docker Hub

1.1 Log in to Docker Hub

If you don't have an account yet, register at https://hub.docker.com/

docker login

After entering your account and password, a success message appears:

WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded

1.2 Renaming an image with docker tag

Pushing to Docker Hub has one rule: the image name must include your account ID (username), i.e. userId/imageName.

Rename with: docker tag SOURCE_IMAGE TARGET_IMAGE (note the order: the existing name comes first, then the new name).

After renaming, push:

-bash-4.2# docker images
REPOSITORY        TAG      IMAGE ID       CREATED         SIZE
composetest_web   latest   0f00ff5df20b   2 hours ago     185MB
centos-tomcat     latest   8175d2379676   5 hours ago     643MB
centos-share      1.0      281ec8de6b48   9 hours ago     458MB
centos-vim        1.0      eb36a8966f6b   10 hours ago    458MB
centos-yum        1.0      ab2bd0073604   10 hours ago    366MB
tomcat-hello      1.0      ba5599c90061   14 hours ago    680MB
tomcat            latest   fb5657adc892   8 months ago    680MB
wordpress         latest   c3c92cc3dcb1   8 months ago    616MB
mysql             5.7      c20987f18b13   8 months ago    448MB
redis             alpine   3900abf41552   9 months ago    32.4MB
centos            latest   5d0da3dc9764   11 months ago   231MB
-bash-4.2# docker tag centos-tomcat aruba233/centos-tomcat
-bash-4.2# docker push aruba233/centos-tomcat
Using default tag: latest
The push refers to repository [docker.io/aruba233/centos-tomcat]
1b2f91c23757: Pushed
3cf600e944ba: Pushed
3cc84259d05d: Pushed
74ddd0ec08fa: Mounted from library/centos
latest: digest: sha256:c04d539be1ae8e21d9293ff7adeb74de9acbde9b96a509e3ce9037415edae408 size: 1167

Once the push succeeds, the image can be found by searching Docker Hub (screenshot omitted).

2. Pushing images to Alibaba Cloud

If Docker Hub is too slow for you, Alibaba Cloud's registry is an alternative: https://cr.console.aliyun.com/cn-hangzhou/instances/repositories

2.1 Create an instance (console screenshot omitted)

2.2 Create a namespace (console screenshot omitted)

2.3 Create a repository (console screenshots omitted)

After the repository is created, the console shows instructions for using it (screenshot omitted).

2.4 Push the image

Simply follow those instructions.

Log in:

docker login --username=aliyun0098478676 registry.cn-hangzhou.aliyuncs.com

Rename the image (8175 is the image ID prefix of centos-tomcat from the listing above):

docker tag 8175 registry.cn-hangzhou.aliyuncs.com/aruba/centos-tomcat:latest

Push the image:

docker push registry.cn-hangzhou.aliyuncs.com/aruba/centos-tomcat:latest

3. Harbor

Both of the above are public registries; a registry running on your own server is indispensable too. Harbor is an open-source Docker image registry management project from VMware.

3.1 Installing Harbor

Website: https://goharbor.io/

Downloads: https://github.com/goharbor/harbor/releases

After downloading, copy it to the Linux host and extract:

tar -xvf harbor-offline-installer-v2.6.0.tgz
mv harbor /usr/local

Edit the configuration file:

cd /usr/local/harbor
cp harbor.yml.tmpl harbor.yml
vi harbor.yml

Change the following three places, substituting your own IP (screenshots omitted).
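
For reference, the relevant harbor.yml fields look roughly like this — a sketch assuming the host IP 192.168.42.4 and the /data/cert paths used below:

# harbor.yml (excerpt)
hostname: 192.168.42.4                        # 1. your server's IP or domain
https:
  port: 443                                   # HTTPS port
  certificate: /data/cert/192.168.42.4.crt    # 2. CA-signed certificate
  private_key: /data/cert/192.168.42.4.key    # 3. server private key
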
3.2 Configuring certificates

Internally Harbor uses an nginx reverse proxy with HTTPS, so a CA-signed certificate has to be configured.

3.2.1 Generate a CA certificate

-bash-4.2# openssl genrsa -out ca.key 4096
-bash-4.2# openssl req -x509 -new -nodes -sha512 -days 3650 \
    -subj "/C=CN/ST=Beijing/L=Beijing/O=example/OU=Personal/CN=192.168.42.4" \
    -key ca.key -out ca.crt

3.2.2 Generate the server key pair and CSR

-bash-4.2# openssl genrsa -out 192.168.42.4.key
-bash-4.2# openssl req -sha512 -new \
    -subj "/C=CN/ST=Beijing/L=Beijing/O=example/OU=Personal/CN=192.168.42.4" \
    -key 192.168.42.4.key -out 192.168.42.4.csr

3.2.3 Generate an x509 v3 extension file

  • Domain-name form:

-bash-4.2# cat > v3.ext <<-EOF
authorityKeyIdentifier=keyid,issuer
basicConstraints=CA:FALSE
keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment
extendedKeyUsage = serverAuth
subjectAltName = @alt_names

[alt_names]
DNS.1=harbor.od.com
DNS.2=harbor.od.com
DNS.3=harbor.od.com
EOF

  • IP form:

-bash-4.2# cat > v3.ext <<-EOF
authorityKeyIdentifier=keyid,issuer
basicConstraints=CA:FALSE
keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment
extendedKeyUsage = serverAuth
subjectAltName = IP:192.168.42.4
EOF

3.2.4 Generate the CA-signed server certificate

-bash-4.2# openssl x509 -req -sha512 -days 3650 \
    -extfile v3.ext \
    -CA ca.crt -CAkey ca.key -CAcreateserial \
    -in 192.168.42.4.csr \
    -out 192.168.42.4.crt

3.2.5 Copy the signed certificate and the server private key into /data/cert/

-bash-4.2# mkdir -p /data/cert/
-bash-4.2# cp 192.168.42.4.crt /data/cert/
-bash-4.2# cp 192.168.42.4.key /data/cert/

3.3 Configuring the certificate for Docker

We prepared certificates for Harbor above; Docker needs its own certificate configuration too, since Docker is what talks to the registry.

3.3.1 Convert the signed certificate from .crt to .cert

-bash-4.2# openssl x509 -inform PEM -in 192.168.42.4.crt -out 192.168.42.4.cert

3.3.2 Copy the files into Docker's certificate directory

Docker needs the CA public certificate, the CA-signed certificate (the .cert file), and the server private key:

-bash-4.2# mkdir -p /etc/docker/certs.d/192.168.42.4/
-bash-4.2# cp 192.168.42.4.key /etc/docker/certs.d/192.168.42.4/
-bash-4.2# cp 192.168.42.4.cert /etc/docker/certs.d/192.168.42.4/
-bash-4.2# cp ca.crt /etc/docker/certs.d/192.168.42.4/

3.3.3 Restart Docker

systemctl restart docker

3.4 Initializing Harbor

Run Harbor's prepare script, which configures HTTPS for nginx; this step pulls the nginx image:

-bash-4.2# ./prepare

Run Harbor's install script to initialize Harbor:

-bash-4.2# ./install.sh

Run Harbor:

-bash-4.2# docker compose up

Open https://192.168.42.4 in a browser (login page screenshot omitted).

The default login credentials can be found in the yml: admin / Harbor12345 (screenshot omitted).

After logging in, the Harbor console appears (screenshot omitted).

3.5 Setting up a Harbor project

Create a project in Harbor (screenshot omitted).

Then create a user account (screenshot omitted).

Grant that account administrator privileges (screenshot omitted).

Finally, add the account as a member of the newly created project (screenshot omitted).

3.6 Pushing an image to Harbor

Log Docker in to Harbor:

-bash-4.2# docker login 192.168.42.4

Push the image:

-bash-4.2# docker tag centos-tomcat 192.168.42.4/aruba/centos-tomcat:latest
-bash-4.2# docker push 192.168.42.4/aruba/centos-tomcat:latest
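
Any Docker host that trusts the CA (as configured in section 3.3) can then pull the image back — a minimal sketch:

docker pull 192.168.42.4/aruba/centos-tomcat:latest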

Once the push completes, the image shows up in the project's repository in Harbor (screenshot omitted).

V. Swarm

Earlier we saw how Docker containers on one host communicate, via virtual networking. That leaves a question: how do containers deployed across multiple servers communicate?
Swarm is Docker's official cluster management tool: several Docker manager machines are abstracted into a single whole, and each manager in turn centrally manages the worker machines beneath it. Its purpose is similar to Kubernetes, but it is lighter weight, with a correspondingly smaller feature set.

Like a Redis cluster, Swarm follows the majority rule: if half or more of the manager nodes go down, the swarm loses quorum and the cluster becomes unusable. With three managers, for example, it tolerates the loss of one, but not two.

1. Building the cluster

1.1 Clone the VMs

Clone three VMs and give each its own HOSTNAME so the machines can be told apart later.

Edit /etc/sysconfig/network:

-bash-4.2# vi /etc/sysconfig/network

Contents:

# Created by anaconda
NETWORKING=yes
HOSTNAME=manager

Edit /etc/hosts:

-bash-4.2# vi /etc/hosts

Append:

192.168.42.4 manager

Make the corresponding changes on the other two machines (only the hostname differs), then reboot them; a sketch of the resulting hosts entries follows.
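
For reference, each machine's /etc/hosts might end up with entries along these lines (the node1/node2 addresses here are hypothetical — substitute your own):

192.168.42.4 manager
192.168.42.5 node1     # hypothetical address
192.168.42.6 node2     # hypothetical address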

1.2 Initialize the swarm

Run the following on the manager node:

-bash-4.2# docker swarm init --advertise-addr 192.168.42.4
Swarm initialized: current node (7ltgt5p0vggy3w876p0k5xfrn) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-1w3lb9uhu3hnsipp1iz7ogmvsatcon2zeufxysagd3sbf663fh-dqqdm5d5l7a96fe4g7qh96ffl 192.168.42.4:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

On each of the other nodes, run the join command printed above to join the cluster:

-bash-4.2# docker swarm join --token SWMTKN-1-1w3lb9uhu3hnsipp1iz7ogmvsatcon2zeufxysagd3sbf663fh-dqqdm5d5l7a96fe4g7qh96ffl 192.168.42.4:2377
This node joined a swarm as a worker.

1.3 Listing the nodes with docker node ls

List the cluster's nodes; the one whose MANAGER STATUS reads Leader is the manager node:

-bash-4.2# docker node ls
ID                            HOSTNAME   STATUS   AVAILABILITY   MANAGER STATUS   ENGINE VERSION
7ltgt5p0vggy3w876p0k5xfrn *   manager    Ready    Active         Leader           20.10.17
rtavw2dgchlmhjmfth2uv0pnc     node1      Ready    Active                          20.10.17
979ie68u7wn73w3nvlc315ldp     node2      Ready    Active                          20.10.17

1.4 Changing a node's role

A manager node can promote or demote worker nodes:

Command                        Description
docker node promote HOSTNAME   Promote the node to a manager
docker node demote HOSTNAME    Demote the node to a worker

Promote a node to manager:

-bash-4.2# docker node promote node1
Node node1 promoted to a manager in the swarm.
-bash-4.2# docker node ls
ID                            HOSTNAME   STATUS   AVAILABILITY   MANAGER STATUS   ENGINE VERSION
7ltgt5p0vggy3w876p0k5xfrn *   manager    Ready    Active         Leader           20.10.17
rtavw2dgchlmhjmfth2uv0pnc     node1      Ready    Active         Reachable        20.10.17
979ie68u7wn73w3nvlc315ldp     node2      Ready    Active                          20.10.17

2. The cluster majority rule (Raft consensus)

Since we only have three nodes, testing cluster availability means promoting all three to managers. With all of them promoted, shut down the Leader host and check the node list again: one of the former workers has taken over as Leader:

-bash-4.2# docker node ls
ID                            HOSTNAME   STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
7ltgt5p0vggy3w876p0k5xfrn     manager    Unknown   Active         Unreachable      20.10.17
rtavw2dgchlmhjmfth2uv0pnc *   node1      Ready     Active         Leader           20.10.17
979ie68u7wn73w3nvlc315ldp     node2      Ready     Active         Reachable        20.10.17

3. Service orchestration with Swarm

Compose, as shown above, scales horizontally on a single host; Swarm does the same across a Docker cluster. What Docker calls a container is managed by Swarm as a service (Service).

The Service commands:

Command                                             Description
docker service create --name NAME -p PORTS IMAGE    Create a service
docker service ls                                   List all swarm services
docker service logs SERVICE                         Show a service's log output
docker service inspect SERVICE                      Show service details
docker service ps SERVICE                           List the service's containers, including the host each runs on
docker service scale SERVICE=N                      Scale the service out across the cluster's nodes
docker service rm SERVICE                           Remove a service

3.1 Creating a service

-bash-4.2# docker service create --name tomcat-service tomcat

Once it succeeds, list the services:

-bash-4.2# docker service ls
ID             NAME             MODE         REPLICAS   IMAGE           PORTS
ay4xeyz5qd1a   tomcat-service   replicated   1/1        tomcat:latest

Then check the container placement: it was created on node2:

-bash-4.2# docker service ps tomcat-service
ID             NAME               IMAGE           NODE    DESIRED STATE   CURRENT STATE           ERROR   PORTS
g90gcs36uzlv   tomcat-service.1   tomcat:latest   node2   Running         Running 8 minutes ago

3.2 Horizontal scaling

Scale the service to 3 replicas:

-bash-4.2# docker service scale tomcat-service=3
tomcat-service scaled to 3
overall progress: 3 out of 3 tasks
1/3: running
2/3: running
3/3: running
verify: Service converged

Check the placement again: now all three nodes run one:

-bash-4.2# docker service ps tomcat-service
ID             NAME               IMAGE           NODE      DESIRED STATE   CURRENT STATE                    ERROR   PORTS
g90gcs36uzlv   tomcat-service.1   tomcat:latest   node2     Running         Running 11 minutes ago
gh45bx3jwn57   tomcat-service.2   tomcat:latest   manager   Running         Running about a minute ago
xfmeqdob533o   tomcat-service.3   tomcat:latest   node1     Running         Running less than a second ago

3.3 Automatic recreation and restart

If a container is stopped or deleted by hand, or crashes and exits, Swarm automatically creates and starts a replacement container to keep the service up.

Stop the tomcat-service container on any node, and a new container is created and started in its place:

-bash-4.2# docker ps | grep tomcat-service
58ac6e5387bf   tomcat:latest   "catalina.sh run"   3 minutes ago   Up 3 minutes   8080/tcp   tomcat-service.3.abmm4wcn8l7kabndc7jz3xset
-bash-4.2# docker stop 58ac6e5387bf
58ac6e5387bf
-bash-4.2# docker ps | grep tomcat-service
4a4902622d93   tomcat:latest   "catalina.sh run"   6 seconds ago   Up Less than a second   8080/tcp   tomcat-service.3.mxt6k34ad4jltrnstf7qarbja

4. Multi-host communication

Swarm implements multi-host communication with VXLAN (Virtual eXtensible LAN), currently the dominant overlay networking standard; a passing familiarity is enough for our purposes.
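
You can also create overlay networks of your own for service-to-service traffic — a small sketch (the network and service names here are illustrative):

# create an overlay network spanning the swarm
docker network create -d overlay my-overlay
# replicas attached to it can reach each other by service name,
# across hosts, over the VXLAN tunnels
docker service create --name web --network my-overlay -p 8080:8080 tomcat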

Deploy a fresh Tomcat service:

-bash-4.2# docker service create -d --name mytomcat -p 8080:8080 tomcat
ph2b51mqz9tbcogsiiu7e71ny
-bash-4.2# docker service ps mytomcat
ID             NAME         IMAGE           NODE      DESIRED STATE   CURRENT STATE           ERROR   PORTS
8yzqrwmtj41j   mytomcat.1   tomcat:latest   manager   Running         Running 6 seconds ago

As shown above, the service currently runs on just one node, and the manager's IP is 192.168.42.4. Now try the other nodes' IPs in a browser (screenshot omitted).

It turns out the other nodes serve the request just the same — this is what VXLAN enables. Take a look with the network command:

-bash-4.2# docker network ls
NETWORK ID     NAME              DRIVER    SCOPE
48ea34f8d77f   bridge            bridge    local
2d68d63c8396   docker_gwbridge   bridge    local
9a32710598fb   harbor_harbor     bridge    local
38f75ca6b94e   host              host      local
f4cb49asob14   ingress           overlay   swarm
1c340dce736e   none              null      local

Note that the ingress network's driver is overlay.
