The learning environment is based on Alibaba Cloud, running CentOS Linux release 7.8.2003 (Core).

Docker introductory learning map

1. Docker Overview

1.1 What is Docker?

Docker is an open-source application container engine written in Go and released under the Apache 2.0 license. Docker lets developers package an application together with its dependencies into a lightweight, portable container and then publish it to any Linux server; it can also be used as a form of virtualization.
Containers are fully sandboxed and isolated from one another, with no interfaces exposed between them; more importantly, the performance overhead of a container is extremely low.

1.2 Docker vs. VMware

Docker is one of today's hottest container technologies. Before container technology appeared, we relied on virtualization. VMware, for example, can virtualize several complete computers on one machine, but this approach is quite heavyweight: on a machine with modest hardware, starting just a few VMs is enough to bring it to a crawl!

VMware: a native Linux/CentOS image, effectively a whole separate computer. It gives you isolation, but booting several virtual machines costs a lot of time!
Docker: also gives you isolation, but each image is tiny; the core environment image (mysql + jdk) is only a few MB. You just run the image, and it starts in seconds!

Docker vs. VMware runtime architecture:

On the left, VMware virtualization: one host OS virtualizes two complete guest systems, and the different apps run in different virtual machines. On the right, a single host OS runs a Docker engine that hosts multiple containers; the apps run in separate containers, isolated from one another!

So why is Docker faster than a VM?

  1. Docker has fewer abstraction layers than a virtual machine. Because Docker does not need a hypervisor to virtualize hardware resources, applications running in Docker use the physical machine's hardware directly, so Docker has a clear efficiency advantage in CPU and memory utilization!
  2. Docker reuses the host's kernel and does not need a Guest OS. When a container is created, Docker does not have to load an operating system the way a virtual machine does, so it skips the slow, resource-hungry boot and OS-loading phase. Creating a virtual machine means loading a Guest OS, which takes minutes; Docker reuses the host operating system and skips that step, so creating a container takes only seconds!

                      Docker                                      Virtual machine (VM)
  Operating system    Shares the host OS                          Runs a Guest OS on top of the host OS
  Storage size        Small images, easy to store and transfer    Huge images
  Performance         Almost no extra overhead                    Extra CPU and memory consumed by the Guest OS
  Portability         Light and flexible                          Heavy, tightly coupled to the VM
  Hardware affinity   Aimed at software developers                Aimed at hardware/operations engineers

Source: https://www.cnblogs.com/fanqisoft/

1.3 Why did Docker appear?

A problem with the traditional hand-off between development and operations:
The developer finishes the application and hands it to the operations engineer to deploy. Then the trouble starts: the program runs fine in the developer's own environment but refuses to run after the ops engineer deploys it, and every version upgrade may require reconfiguring the environment. For operations, environment setup becomes extremely tedious.

Docker solves this by packaging the finished project into an image (runtime environment included) and publishing it to a Docker registry; the operations engineer only needs to pull the image from the registry and run it directly!

1.4 What can Docker do?

Early virtualization technology

Drawbacks:

  • Slow to start
  • Many redundant steps
  • Heavy resource usage

Containerization technology


Each App + LIB is an independent container, which you can think of as a minimal Linux system!
Note: containerization does not simulate a complete operating system!

Comparing Docker with traditional virtualization

  • A traditional virtual machine virtualizes a full set of hardware, boots a complete operating system, and then installs and runs software on that OS.
  • Applications inside a Docker container run directly on the host; the container has no kernel of its own and does not virtualize hardware, so it is much lighter.
  • Containers are isolated from each other; each container has its own file system, and they do not affect one another!

2. Installing Docker

2.1 The basic components of Docker

  • Image (image)

An image is like a template from which containers are created. For example, from a tomcat image you can create any number of tomcat containers, and the project or service ultimately runs inside such a container.

  • Container (container)

Docker uses container technology to run one application or a group of applications in isolation. A container is created from an image; you can think of it as a minimal Linux system. Containers support basic operations such as start, stop, and delete!

  • Repository (repository)

A repository is where images are stored. Repositories can be public or private; Docker Hub is the well-known public one abroad, and Alibaba Cloud also provides its own image registry in China. Configuring a registry mirror (accelerator) improves download speed.

2.2 Docker installation steps

Install on CentOS with yum:

# Step 1: install the required system tools
sudo yum install -y yum-utils
# Step 2: add the repository information
sudo yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Step 3: refresh the cache and install Docker CE (Community Edition)
sudo yum makecache fast
sudo yum -y install docker-ce
# Step 4: start the Docker service
sudo service docker start

After installation, check the Docker version.
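
The version check itself was shown as a screenshot in the original; a minimal equivalent on the command line:

[root@liuzeyu12a ~]# docker -v         # short version string
[root@liuzeyu12a ~]# docker version    # detailed client/server version information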

2.3 Configuring the Alibaba Cloud registry mirror

Go to the Alibaba Cloud console:

If the Container Registry service has not been enabled yet, enable it manually first!

The following four steps configure the Alibaba Cloud registry mirror!
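
The four steps were shown as screenshots; they boil down to writing the mirror address into the Docker daemon configuration and restarting Docker. A sketch of the usual commands (the mirror URL is account-specific, so the one below is only a placeholder):

sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://<your-accelerator-id>.mirror.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker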

2.4 Running Docker

Run hello-world with Docker.
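
The hello-world run was shown as a screenshot; the command is simply:

[root@liuzeyu12a ~]# docker run hello-world
# if the image does not exist locally, Docker pulls it from Docker Hub first, then runs it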

How it works internally:

How does Docker work? Docker uses a client-server (C/S) architecture. The Docker daemon runs on the host and is accessed by the client through a socket; when Docker-Server receives an instruction from Docker-Client, it executes it.

The MySQL and Tomcat containers in the diagram behave like two small virtual machines!

3. Common Docker commands

3.1 Basic image commands

When we want to use an image that does not exist locally, Docker downloads it from the remote registry (Docker Hub by default) and then uses it.

  1. List local images (see the example below)

    Column description:
  • REPOSITORY: image name
  • TAG: image tag
  • IMAGE ID: image ID (unique)
  • CREATED: creation time
  • SIZE: image size

The same repository can hold multiple image tags representing different versions; use the TAG to pick the image you want!
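
The listing itself was a screenshot; the command that produces it:

[root@liuzeyu12a ~]# docker images       # list local images
[root@liuzeyu12a ~]# docker images -a    # also list intermediate image layers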

Filtering the image list

For example, list all images but show only their image IDs:
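
A minimal example of that query:

[root@liuzeyu12a ~]# docker images -aq   # -q prints only the IMAGE IDs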

  2. Search for images on Docker Hub

    Column description:
  • NAME: image name
  • DESCRIPTION: image description
  • STARS: number of stars
  • OFFICIAL: whether it is an official Docker image
  • AUTOMATED: whether it is an automated build

Check the other optional parameters:

For example, search for mysql images with more than 500 stars:
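
The search commands were shown as screenshots; a minimal sketch of the same queries:

[root@liuzeyu12a ~]# docker search mysql
[root@liuzeyu12a ~]# docker search mysql --filter=stars=500   # only images with at least 500 stars
[root@liuzeyu12a ~]# docker search --help                     # shows the other optional parameters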

  3. Download images

Use docker pull to download an image; by default the latest version is pulled.

To download a specific version, specify its tag; the available tags can be looked up on Docker Hub.
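
A minimal sketch (tomcat is used here purely as an illustration; the original showed the equivalent screenshots):

[root@liuzeyu12a ~]# docker pull tomcat        # equivalent to docker pull tomcat:latest
[root@liuzeyu12a ~]# docker pull tomcat:8.5    # pull the image with a specific tag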

Because layers that already exist locally are reused, this way of downloading pushes storage utilization to the limit, which is why Docker is called light and small!

  4. Delete images

    Delete an image by its IMAGE ID
    Delete all images; -f forces the removal
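
The corresponding commands:

[root@liuzeyu12a ~]# docker rmi -f <image id>               # delete a single image by ID
[root@liuzeyu12a ~]# docker rmi -f $(docker images -aq)     # delete all images, forced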

3.2 Basic container commands

Note: you need an image before you can create a container!
We do not have a centos image locally, so we can pull one from Docker Hub.

  1. Start a container
docker run
# Parameter description:
--name="Name"   give the container a name, to tell containers apart
-d              run in detached (background) mode
-it             run interactively, so you can enter the container and look around
-p              publish a port
	-p ip:hostPort:containerPort
	-p hostPort:containerPort (most common)
	-p containerPort
-P              publish the exposed ports to random host ports

A normal interactive start:
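
The screenshot showed starting a centos container interactively; the command is:

[root@liuzeyu12a ~]# docker run -it centos /bin/bash   # start centos and open a shell inside it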

  2. Exit a container:
exit     : exit and stop the container
ctrl+p+q : detach, leaving the container running in the background


  3. List containers. Show the most recently created containers; -n limits how many are shown.
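
The corresponding commands:

[root@liuzeyu12a ~]# docker ps           # currently running containers
[root@liuzeyu12a ~]# docker ps -a        # all containers, including stopped ones
[root@liuzeyu12a ~]# docker ps -a -n=1   # only the most recently created container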

Detach so the container keeps running in the background.

  4. Remove containers


Remove all containers
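
The corresponding commands:

[root@liuzeyu12a ~]# docker rm <container id>             # remove a stopped container
[root@liuzeyu12a ~]# docker rm -f $(docker ps -aq)        # force-remove all containers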

  5. Start and stop containers

docker start [container id]
docker restart [container id]
docker stop [container id]
docker kill [container id]

  6. Start a container in the background
[root@liuzeyu12a ~]# docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
centos              latest              831691599b88        2 weeks ago         215MB
hello-world         latest              bf756fb1ae65        6 months ago        13.3kB
[root@liuzeyu12a ~]# docker run -d centos
ca02cf2075dfcc633a97814267e6e97ea36a93a4928597b01da94b6f2d988e15
[root@liuzeyu12a ~]# docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
[root@liuzeyu12a ~]#

So why did docker run -d centos not leave a running container? The reason is that a container running in the background must have a foreground process; otherwise it exits immediately. Earlier, docker run -it centos /bin/bash opened an interactive session and CTRL+p+q detached it to the background, so a foreground process stayed alive; docker run -d centos starts centos with nothing running in the foreground, so the container stops right away!

If you want it to keep running in the background, give the container a small shell loop to execute, so that /bin/bash stays busy in the foreground.
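
A sketch of such a loop (this exact command shows up again in the docker inspect output below):

[root@liuzeyu12a ~]# docker run -d centos /bin/bash -c "while true;do echo liuzeyu12a;sleep 5;done"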

  7. View the log output of a background container


-t adds timestamps, -f follows the output, and --tail 10 shows the last 10 log lines
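
The corresponding command:

[root@liuzeyu12a ~]# docker logs -tf --tail 10 <container id>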

  8. View the process information inside a container

The output looks just like the Linux top command.
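
The corresponding command:

[root@liuzeyu12a ~]# docker top <container id>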

  9. View a container's metadata
[root@liuzeyu12a ~]# docker inspect 86f343885d65
[
    {
   
        "Id": "86f343885d650d2bfd0a92b310173bb505bfc5f115803814fa804e8b0a2b0549",
        "Created": "2020-07-05T03:12:51.265306484Z",
        "Path": "/bin/bash",
        "Args": [
            "-c",
            "while true;do echo liuzeyu12a;sleep 5;done"
        ],
        "State": {
   
            "Status": "running",
            "Running": true,
            "Paused": false,
            "Restarting": false,
            "OOMKilled": false,
            "Dead": false,
            "Pid": 26836,
            "ExitCode": 0,
            "Error": "",
            "StartedAt": "2020-07-05T03:12:51.520695574Z",
            "FinishedAt": "0001-01-01T00:00:00Z"
        },
        "Image": "sha256:831691599b88ad6cc2a4abbd0e89661a121aff14cfa289ad840fd3946f274f1f",
        "ResolvConfPath": "/var/lib/docker/containers/86f343885d650d2bfd0a92b310173bb505bfc5f115803814fa804e8b0a2b0549/resolv.conf",
        "HostnamePath": "/var/lib/docker/containers/86f343885d650d2bfd0a92b310173bb505bfc5f115803814fa804e8b0a2b0549/hostname",
        "HostsPath": "/var/lib/docker/containers/86f343885d650d2bfd0a92b310173bb505bfc5f115803814fa804e8b0a2b0549/hosts",
        "LogPath": "/var/lib/docker/containers/86f343885d650d2bfd0a92b310173bb505bfc5f115803814fa804e8b0a2b0549/86f343885d650d2bfd0a92b310173bb505bfc5f115803814fa804e8b0a2b0549-json.log",
        "Name": "/musing_dubinsky",
        "RestartCount": 0,
        "Driver": "overlay2",
        "Platform": "linux",
        "MountLabel": "",
        "ProcessLabel": "",
        "AppArmorProfile": "",
        "ExecIDs": null,
        "HostConfig": {
   
            "Binds": null,
            "ContainerIDFile": "",
            "LogConfig": {
   
                "Type": "json-file",
                "Config": {
   }
            },
            "NetworkMode": "default",
            "PortBindings": {
   },
            "RestartPolicy": {
   
                "Name": "no",
                "MaximumRetryCount": 0
            },
            "AutoRemove": false,
            "VolumeDriver": "",
            "VolumesFrom": null,
            "CapAdd": null,
            "CapDrop": null,
            "Capabilities": null,
            "Dns": [],
            "DnsOptions": [],
            "DnsSearch": [],
            "ExtraHosts": null,
            "GroupAdd": null,
            "IpcMode": "private",
            "Cgroup": "",
            "Links": null,
            "OomScoreAdj": 0,
            "PidMode": "",
            "Privileged": false,
            "PublishAllPorts": false,
            "ReadonlyRootfs": false,
            "SecurityOpt": null,
            "UTSMode": "",
            "UsernsMode": "",
            "ShmSize": 67108864,
            "Runtime": "runc",
            "ConsoleSize": [
                0,
                0
            ],
            "Isolation": "",
            "CpuShares": 0,
            "Memory": 0,
            "NanoCpus": 0,
            "CgroupParent": "",
            "BlkioWeight": 0,
            "BlkioWeightDevice": [],
            "BlkioDeviceReadBps": null,
            "BlkioDeviceWriteBps": null,
            "BlkioDeviceReadIOps": null,
            "BlkioDeviceWriteIOps": null,
            "CpuPeriod": 0,
            "CpuQuota": 0,
            "CpuRealtimePeriod": 0,
            "CpuRealtimeRuntime": 0,
            "CpusetCpus": "",
            "CpusetMems": "",
            "Devices": [],
            "DeviceCgroupRules": null,
            "DeviceRequests": null,
            "KernelMemory": 0,
            "KernelMemoryTCP": 0,
            "MemoryReservation": 0,
            "MemorySwap": 0,
            "MemorySwappiness": null,
            "OomKillDisable": false,
            "PidsLimit": null,
            "Ulimits": null,
            "CpuCount": 0,
            "CpuPercent": 0,
            "IOMaximumIOps": 0,
            "IOMaximumBandwidth": 0,
            "MaskedPaths": [
                "/proc/asound",
                "/proc/acpi",
                "/proc/kcore",
                "/proc/keys",
                "/proc/latency_stats",
                "/proc/timer_list",
                "/proc/timer_stats",
                "/proc/sched_debug",
                "/proc/scsi",
                "/sys/firmware"
            ],
            "ReadonlyPaths": [
                "/proc/bus",
                "/proc/fs",
                "/proc/irq",
                "/proc/sys",
                "/proc/sysrq-trigger"
            ]
        },
        "GraphDriver": {
   
            "Data": {
   
                "LowerDir": "/var/lib/docker/overlay2/8d8e20774da8102e5e3bf3ca09cf972cf67ccfca1b26ce75ea869a3701055a4f-init/diff:/var/lib/docker/overlay2/57778160b70fcfa973aaec5e3e7e1d8ef87c36f0aa3d734ccaebae1b1d8d897c/diff",
                "MergedDir": "/var/lib/docker/overlay2/8d8e20774da8102e5e3bf3ca09cf972cf67ccfca1b26ce75ea869a3701055a4f/merged",
                "UpperDir": "/var/lib/docker/overlay2/8d8e20774da8102e5e3bf3ca09cf972cf67ccfca1b26ce75ea869a3701055a4f/diff",
                "WorkDir": "/var/lib/docker/overlay2/8d8e20774da8102e5e3bf3ca09cf972cf67ccfca1b26ce75ea869a3701055a4f/work"
            },
            "Name": "overlay2"
        },
        "Mounts": [],
        "Config": {
   
            "Hostname": "86f343885d65",
            "Domainname": "",
            "User": "",
            "AttachStdin": false,
            "AttachStdout": false,
            "AttachStderr": false,
            "Tty": false,
            "OpenStdin": false,
            "StdinOnce": false,
            "Env": [
                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
            ],
            "Cmd": [
                "/bin/bash",
                "-c",
                "while true;do echo liuzeyu12a;sleep 5;done"
            ],
            "Image": "centos",
            "Volumes": null,
            "WorkingDir": "",
            "Entrypoint": null,
            "OnBuild": null,
            "Labels": {
   
                "org.label-schema.build-date": "20200611",
                "org.label-schema.license": "GPLv2",
                "org.label-schema.name": "CentOS Base Image",
                "org.label-schema.schema-version": "1.0",
                "org.label-schema.vendor": "CentOS"
            }
        },
        "NetworkSettings": {
   
            "Bridge": "",
            "SandboxID": "e15b43f04fdffa47c3001f63ef7a4ef7d9e3a9fbc904d77a598311d277dde2a5",
            "HairpinMode": false,
            "LinkLocalIPv6Address": "",
            "LinkLocalIPv6PrefixLen": 0,
            "Ports": {
   },
            "SandboxKey": "/var/run/docker/netns/e15b43f04fdf",
            "SecondaryIPAddresses": null,
            "SecondaryIPv6Addresses": null,
            "EndpointID": "eb2aeee7348406c528c1dff58d983b240a907f164f29671ec3f524c229004b38",
            "Gateway": "172.17.0.1",
            "GlobalIPv6Address": "",
            "GlobalIPv6PrefixLen": 0,
            "IPAddress": "172.17.0.2",
            "IPPrefixLen": 16,
            "IPv6Gateway": "",
            "MacAddress": "02:42:ac:11:00:02",
            "Networks": {
   
                "bridge": {
   
                    "IPAMConfig": null,
                    "Links": null,
                    "Aliases": null,
                    "NetworkID": "9e5c8c57724aec25417c318ddb60b1174bd357f9bea4b9cad2f8e869c10a13d5",
                    "EndpointID": "eb2aeee7348406c528c1dff58d983b240a907f164f29671ec3f524c229004b38",
                    "Gateway": "172.17.0.1",
                    "IPAddress": "172.17.0.2",
                    "IPPrefixLen": 16,
                    "IPv6Gateway": "",
                    "GlobalIPv6Address": "",
                    "GlobalIPv6PrefixLen": 0,
                    "MacAddress": "02:42:ac:11:00:02",
                    "DriverOpts": null
                }
            }
        }
    }
]
[root@liuzeyu12a ~]#

  10. Enter a currently running container

Option 1: open a new process inside the container and work in it (most common)

Option 2: attach to the terminal the container is already running; no new process is started
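
The corresponding commands:

# Option 1: docker exec starts a new shell process inside the container
[root@liuzeyu12a ~]# docker exec -it <container id> /bin/bash
# Option 2: docker attach attaches to the container's current main process
[root@liuzeyu12a ~]# docker attach <container id>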

  11. Copy a file from inside a container to the local Linux host
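
A minimal sketch (the file name is only an illustration):

[root@liuzeyu12a ~]# docker cp <container id>:/home/test.java /home   # copy container:/home/test.java to /home on the host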

3.3 Summary of common commands


4. Deploying servers

4.1 Deploying an Nginx server


Enter the container

Try accessing it from inside the host (the private network):

Now access the container's Nginx from the public network.

Analysis:
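
The pull/run/access commands in this walkthrough were screenshots in the original; a minimal sketch of the same flow (the host port 3344 and the container name nginx01 are assumptions for illustration):

[root@liuzeyu12a ~]# docker pull nginx
[root@liuzeyu12a ~]# docker run -d --name nginx01 -p 3344:80 nginx   # map host port 3344 to the container's port 80
[root@liuzeyu12a ~]# curl localhost:3344                             # test from inside the host
[root@liuzeyu12a ~]# docker exec -it nginx01 /bin/bash               # enter the container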

4.2 Deploying a Tomcat server

Download Tomcat 8.5.

Run it using the officially suggested method:

docker run -it --rm tomcat:8.5

Open a new terminal; you can see that Tomcat has indeed started.

Press CTRL+C to stop it, then look at the containers that have run:

With the officially recommended way of running it (--rm), the container's record is removed as soon as it stops.

So we start it our own way instead:

docker run -d --name tomcat01 -p 3355:8080 tomcat:8.5


One thing stands out: in this containerized Tomcat the webapps directory is empty, because the image ships a slimmed-down Tomcat. Can it still serve the welcome page?

Clearly not: since webapps is empty, the sites under webapps can no longer be served.
Solution: copy the contents of webapps.dist into webapps.
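
A minimal sketch of that copy, done inside the tomcat01 container (the official image's working directory is the Tomcat home):

[root@liuzeyu12a ~]# docker exec -it tomcat01 /bin/bash
# inside the container, in /usr/local/tomcat:
cp -r webapps.dist/* webapps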


Still unreachable from outside; possible causes to check:

  • the Alibaba Cloud security group is blocking the port
  • the Linux firewall is blocking the port
  • a firewall inside the tomcat image is blocking the port

Go to the Alibaba Cloud security group rules and open port 3355.

Visit the page again:

Think about this for a moment: after installing a server such as nginx or tomcat, if every configuration change required entering the container, that would be very tedious. Docker provides volume technology for exactly this: a mapped path between the inside and the outside of the container, so files can be edited outside and the change is synchronized inside!

4.3 A visual management panel

Portainer: a lightweight, Go-based graphical management tool that makes it easy to manage a Docker host. It is not widely used in production; good for practice.

[root@liuzeyu12a ~]# docker run -d -p 8088:9000 --restart=always -v /var/run/docker.sock:/var/run/docker.sock --privileged=true portainer/portainer


We likewise need to open port 8088 in the Alibaba Cloud security group.

The login page

Inside the Portainer panel

You can see that Portainer can manage Docker images and containers.

5. Docker images

5.1 The union file system

The Union File System (UnionFS) is the foundation of Docker images: a layered, lightweight, high-performance file system. Each modification is committed as a layer and stacked on top of the previous layers, and different directories can be mounted under a single virtual file system. Images inherit from a base image (one without a parent image) and are extended layer by layer.
In addition, different Docker containers can share the same underlying file-system layers while each adds its own layer of changes on top, which greatly improves storage efficiency.

For example, when we download MySQL:

you can see that 12 layers are downloaded. If we then pull version 5.7 specifically,

only 4 new layers are downloaded, because the other 8 layers already exist locally and are reused directly. This is the union file system at work, pushing storage utilization to the limit!
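
The pull commands behind those screenshots:

[root@liuzeyu12a ~]# docker pull mysql       # pulls mysql:latest
[root@liuzeyu12a ~]# docker pull mysql:5.7   # layers already present are reported as "Already exists" and are not downloaded again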

5.2 How images are loaded

A Docker image is made up of layered file systems; this layered structure is the union file system.

The image loading process is as follows:

bootfs (boot file system) mainly contains the bootloader and the kernel; the bootloader is responsible for loading the kernel. When Linux starts it loads bootfs, and bootfs is the lowest layer of a Docker image. This layer is the same as in a typical Linux/Unix system, containing the boot loader and the kernel. Once booting finishes and the kernel is in memory, control of memory passes from bootfs to the kernel, and the system can then unmount bootfs.

rootfs (root file system) sits on top of bootfs and contains the standard directories and files of a typical Linux system, such as /dev, /proc and /etc. rootfs is what differs between operating system distributions such as Ubuntu and CentOS.

Question: why does a virtual machine take several GB while a Docker image is only around 200 MB?
For a stripped-down OS, rootfs can be very small; it only needs the most basic commands, tools and libraries, because the underlying kernel is the host's, so the image only has to provide rootfs. And since bootfs is essentially identical across Linux distributions, different distributions can share the same bootfs.

Docker images are read-only. When a container starts, a new writable layer is loaded on top of the image; this layer is called the container layer, and everything below it is called the image layer!

5.3 Committing images

As we saw, the Tomcat we just downloaded is a slimmed-down version with nothing under webapps; we need to copy a set of contents over from webapps.dist.
Steps for building a customized image with commit, layer on top of layer:

docker commit -a "author" -m "message" containerID targetImageName:[tag]

Check the Docker images:

You can find our newly packaged tomcat image, with tag = V1.0.
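
A concrete sketch of the commit that would produce that image (the author string and commit message are placeholders):

[root@liuzeyu12a ~]# docker commit -a "liuzeyu" -m "copy webapps.dist into webapps" <container id> tomcat01:V1.0
[root@liuzeyu12a ~]# docker images   # the new tomcat01:V1.0 image now shows up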

6. Container volumes

6.1 Introduction to volumes

Why does this technology exist?
After installing a server such as nginx or tomcat, if every configuration change required entering the container, that would be very tedious. And consider a container running MySQL: if the database files live only inside the container, deleting the container loses all the data, and there is no way to persist it.
Wouldn't it be great if data could be synchronized between the inside and the outside of the container, so that a change made outside also appears inside?
Docker's volume technology does exactly that: a mapped path between the inside and the outside of the container, so files can be modified outside and the changes are synchronized inside!

In one sentence: volumes solve data synchronization between container and host, and data persistence for the container!

As shown in the figure above, the host directory /home/mysql is mapped to the /mysql directory inside the container.

[root@liuzeyu12a]# docker run -it -v /home/ceshi/:/home centos /bin/bash
[root@liuzeyu12a ceshi]# vim test.java
[root@b018f36e1e51 home]# ls
test.java
[root@b018f36e1e51 home]#

You can see that data is now synchronized between the container and the host, and even if the container is stopped or deleted, the data remains on the host.

6.2 MySQL in practice

[root@liuzeyu12a ceshi]# docker run -d -p 3310:3306 -v /home/mysql/conf:/etc/mysql/conf.d -v /home/mysql/data:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=809080 mysql:5.7
[root@liuzeyu12a ceshi]# docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                                         NAMES
b14829011d64        mysql:5.7           "docker-entrypoint.s…"   4 seconds ago       Up 3 seconds        3306/tcp, 33060/tcp, 0.0.0.0:3310->3310/tcp   ecstatic_volhard

View the container's metadata:

[root@liuzeyu12a conf]# docker inspect b14829011d64

"Mounts": [
            {
   
                "Type": "bind",
                "Source": "/home/mysql/data",
                "Destination": "/var/lib/mysql",
                "Mode": "",
                "RW": true,
                "Propagation": "rprivate"
            },
            {
   
                "Type": "bind",
                "Source": "/home/mysql/conf",
                "Destination": "/etc/mysql/conf.d",
                "Mode": "",
                "RW": true,
                "Propagation": "rprivate"
            }
        ],

You can see the source and destination paths of the volumes. Now check whether the data on the host is in sync.

data is already synchronized, but conf is still empty; nothing has been written into the container's /etc/mysql/conf.d yet, so there is nothing there to synchronize!

Try connecting to MySQL, inserting some data, and checking that it is synchronized.

[root@liuzeyu12a mysql]# ls data
auto.cnf    client-cert.pem  ibdata1      mysql               public_key.pem   sys
ca-key.pem  client-key.pem   ib_logfile0  performance_schema  server-cert.pem
ca.pem      ib_buffer_pool   ib_logfile1  private_key.pem     server-key.pem
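
Between the two listings, data is written through the published port 3310; a minimal sketch assuming a mysql command-line client is available (the original used a GUI client):

mysql -h <server ip> -P 3310 -uroot -p809080
mysql> create database test;   # after this, a test directory appears under /home/mysql/data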

[root@liuzeyu12a mysql]# ls data
auto.cnf    client-cert.pem  ibdata1      ibtmp1              private_key.pem  server-key.pem
ca-key.pem  client-key.pem   ib_logfile0  mysql               public_key.pem   sys
ca.pem      ib_buffer_pool   ib_logfile1  performance_schema  server-cert.pem  test

Now delete the MySQL container and test whether the data has been persisted.

[root@liuzeyu12a mysql]# docker rm -f ff5f4efc9428
ff5f4efc9428
[root@liuzeyu12a mysql]# docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
[root@liuzeyu12a mysql]#

The data is still there — it has been persisted.

6.3 Named and anonymous mounts

  • Anonymous mount
[root@liuzeyu12a ~]# docker run -d -P --name nginx01 -v /etc/nginx nginx
df4b2f1785563e43f78cf4c8f39521fbd6ce0227976bb1ae39c73487af747f21
[root@liuzeyu12a ~]# docker volume ls #list all volumes
DRIVER              VOLUME NAME
local               0dbd0a271c66f28438adab6cdeef1996dee63b79235685b0f5b90b73a2c83b86
local               6dfed4586a8823a925c88a439ca4f22006fd631b06a9498032cfd18bfd1340fd
local               7a72c88bbfd2d63c72838b957b918f99ff3cfdc8bee37dc1eaf2ebcd10c840a7
local               88a2d030a5ce67bb4426e47b95c97033b7eaecbecf8b694dd5244fb25c2021c9
[root@liuzeyu12a ~]#

An anonymous mount specifies only the path inside the container, without a path (or name) outside it!

  • Named mount (most common)
[root@liuzeyu12a ~]# docker run -d -P --name nginx02 -v juming-nginx:/etc/nginx nginx
3b89b02ca02f467ddf2d1917380aab18b5181733aee426c7d3b07bc11fea5604
[root@liuzeyu12a ~]# docker volume ls
DRIVER              VOLUME NAME
local               0dbd0a271c66f28438adab6cdeef1996dee63b79235685b0f5b90b73a2c83b86
local               6dfed4586a8823a925c88a439ca4f22006fd631b06a9498032cfd18bfd1340fd
local               7a72c88bbfd2d63c72838b957b918f99ff3cfdc8bee37dc1eaf2ebcd10c840a7
local               88a2d030a5ce67bb4426e47b95c97033b7eaecbecf8b694dd5244fb25c2021c9
local               juming-nginx
[root@liuzeyu12a ~]#

A named mount uses a name in place of the host path. Where does the data actually live?

[root@liuzeyu12a ~]# docker volume inspect juming-nginx
[
    {
   
        "CreatedAt": "2020-07-05T23:26:53+08:00",
        "Driver": "local",
        "Labels": null,
        "Mountpoint": "/var/lib/docker/volumes/juming-nginx/_data",
        "Name": "juming-nginx",
        "Options": null,
        "Scope": "local"
    }
]
[root@liuzeyu12a ~]#

You can see the host-side mount point, "Mountpoint": "/var/lib/docker/volumes/juming-nginx/_data"; let's look at it:

[root@liuzeyu12a ~]# cd /var/lib/docker/volumes/juming-nginx/_data
[root@liuzeyu12a _data]# ls
conf.d  fastcgi_params  koi-utf  koi-win  mime.types  modules  nginx.conf  scgi_params  uwsgi_params  win-utf
[root@liuzeyu12a _data]# docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                   NAMES
3b89b02ca02f        nginx               "/docker-entrypoint.…"   3 minutes ago       Up 3 minutes        0.0.0.0:32769->80/tcp   nginx02
df4b2f178556        nginx               "/docker-entrypoint.…"   7 minutes ago       Up 7 minutes        0.0.0.0:32768->80/tcp   nginx01
[root@liuzeyu12a _data]# docker exec -it 3b89b02ca02f /bin/bash
root@3b89b02ca02f:/# cd /etc/nginx
root@3b89b02ca02f:/etc/nginx# ls
conf.d  fastcgi_params  koi-utf  koi-win  mime.types  modules  nginx.conf  scgi_params  uwsgi_params  win-utf
root@3b89b02ca02f:/etc/nginx#

The container's /etc/nginx and the host's /var/lib/docker/volumes/juming-nginx/_data have identical contents.
So with a named mount, the data lives under /var/lib/docker/volumes/<name>/_data.

How to tell whether a mount is anonymous, named, or a host-path mount:

-v containerPath              anonymous mount
-v name:containerPath         named mount
-v /hostPath:containerPath    host-path (bind) mount
  • Extension
# The read/write flag on the mount determines what the container may do with the mounted content.
docker run -d -P --name nginx02 -v juming-nginx:/etc/nginx:ro nginx:latest
docker run -d -P --name nginx02 -v juming-nginx:/etc/nginx:rw nginx:latest

What does the :ro or :rw appended after the container path mean?
ro means read-only: the files under that path are read-only inside the container and can only be changed from the host; rw means the container can both read and write them!

7. Dockerfile

7.1 Dockerfile basics

A Dockerfile is the build file used to construct a Docker image. Images are built layer by layer, and each instruction in the Dockerfile builds one of those layers!

[root@liuzeyu12a docker-test-volume]# cat dockerfile1 #create dockerfile1 with the instructions below
FROM centos
VOLUME ["volume01","volume02"]
CMD echo "---end---"
CMD /bin/bash
[root@liuzeyu12a docker-test-volume]#

#build the dockerfile to create the shared volumes
[root@liuzeyu12a ~]# docker images 
REPOSITORY            TAG                 IMAGE ID            CREATED              SIZE
centos                latest              470325a2b6fd        About a minute ago   215MB
[root@liuzeyu12a docker-test-volume]# docker build -f dockerfile1 -t 831691599b88 ./
Sending build context to Docker daemon  2.048kB
Step 1/4 : FROM centos
 ---> 470325a2b6fd
Step 2/4 : VOLUME ["volume01","volume02"]
 ---> Running in 026871088dcc
Removing intermediate container 026871088dcc
 ---> 8a3b2e7f086f
Step 3/4 : CMD echo "---end---"
 ---> Running in 795a23085868
Removing intermediate container 795a23085868
 ---> 8e7c0a827cad
Step 4/4 : CMD /bin/bash
 ---> Running in 5724be3fa05d
Removing intermediate container 5724be3fa05d
 ---> 634c60d01d00
Successfully built 634c60d01d00
Successfully tagged 831691599b88:latest
# run the image
[root@liuzeyu12a ~]# docker run -it 831691599b88 /bin/bash
[root@c6c45ef04079 /]# ls
bin  dev  etc  home  lib  lib64  lost+found  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var  volume01  volume02
[root@c6c45ef04079 /]#

You can see that volume01 and volume02 have been created in the container's root directory.
Note: if you run the container by image name instead of image ID, you must include the tag!

After the shared volumes are created, volume01 and volume02 should be mapped to directories on the host. Where are those host paths?
Check the container's metadata:

[root@liuzeyu12a docker-test-volume]# docker inspect c6c45ef04079
 "Mounts": [
            {
   
                "Type": "volume",
                "Name": "1d991e5a80edb172c6fff332919dbd0169296e1b877250b074c3dda930823033",
                "Source": "/var/lib/docker/volumes/1d991e5a80edb172c6fff332919dbd0169296e1b877250b074c3dda930823033/_data",
                "Destination": "volume02",
                "Driver": "local",
                "Mode": "",
                "RW": true,
                "Propagation": ""
            },
            {
   
                "Type": "volume",
                "Name": "555d7417c455b312c817fd630f07cb41f9a5286e1981f424881df34fa3ecadd1",
                "Source": "/var/lib/docker/volumes/555d7417c455b312c817fd630f07cb41f9a5286e1981f424881df34fa3ecadd1/_data",
                "Destination": "volume01",
                "Driver": "local",
                "Mode": "",
                "RW": true,
                "Propagation": ""
            }
        ],

The mappings are:

volume01-----/var/lib/docker/volumes/555d7417c455b312c817fd630f07cb41f9a5286e1981f424881df34fa3ecadd1/_data

volume02----/var/lib/docker/volumes/1d991e5a80edb172c6fff332919dbd0169296e1b877250b074c3dda930823033/_data

So the volumes here are mounted anonymously. Taking volume01 as an example, open /var/lib/docker/volumes/555d7417c455b312c817fd630f07cb41f9a5286e1981f424881df34fa3ecadd1/_data:

[root@liuzeyu12a docker-test-volume]# cd /var/lib/docker/volumes/555d7417c455b312c817fd630f07cb41f9a5286e1981f424881df34fa3ecadd1/_data
[root@liuzeyu12a _data]# ls
[root@liuzeyu12a _data]# touch liu.text #create a file on the host
[root@liuzeyu12a _data]#


[root@c6c45ef04079 /]# ls
bin  dev  etc  home  lib  lib64  lost+found  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var  volume01  volume02
[root@c6c45ef04079 /]# cd volume01 
[root@c6c45ef04079 volume01]# ls #check inside the container
liu.text

7.2 Volume containers

From what we have learned so far, volumes let us synchronize and persist data between the host and a container. Can containers also use volumes to synchronize and persist data among themselves?
Create three containers: one provides the volumes, and the others link to it with --volumes-from, a bit like "inheritance" in Java!

[root@liuzeyu12a _data]# docker images # using the liuze***ntos image
REPOSITORY            TAG                 IMAGE ID            CREATED             SIZE
centos                latest              470325a2b6fd        35 minutes ago      215MB
liuze***ntos        1.0                 16af046ab7b2        48 minutes ago      215MB

#build the image that declares the volumes
[root@liuzeyu12a docker-test-volume]# docker build -f dockerfile1 -t liuze***ntos:1.0 ./
Sending build context to Docker daemon  2.048kB
Step 1/4 : FROM centos
 ---> 470325a2b6fd
Step 2/4 : VOLUME ["volume01","volume02"]
 ---> Running in 2bf233ffdb9f
Removing intermediate container 2bf233ffdb9f
 ---> 00201ded9178
Step 3/4 : CMD echo "---end---"
 ---> Running in 012e337fdf84
Removing intermediate container 012e337fdf84
 ---> e6b565888546
Step 4/4 : CMD /bin/bash
 ---> Running in f619d5df0698
Removing intermediate container f619d5df0698
 ---> 46feb771d84f
Successfully built 46feb771d84f
Successfully tagged liuze***ntos:1.0
[root@liuzeyu12a docker-test-volume]# docker iamges
docker: 'iamges' is not a docker command.
See 'docker --help'

#start docker01
[root@liuzeyu12a docker-test-volume]# docker run -it --name docker01 46feb771d84f /bin/bash
[root@7d0426c6bb42 /]# ls -l
total 56
lrwxrwxrwx  1 root root    7 May 11  2019 bin -> usr/bin
drwxr-xr-x  5 root root  360 Jul  6 02:20 dev
drwxr-xr-x  1 root root 4096 Jul  6 02:20 etc
drwxr-xr-x  2 root root 4096 May 11  2019 home
lrwxrwxrwx  1 root root    7 May 11  2019 lib -> usr/lib
lrwxrwxrwx  1 root root    9 May 11  2019 lib64 -> usr/lib64
drwx------  2 root root 4096 Jun 11 02:35 lost+found
drwxr-xr-x  2 root root 4096 May 11  2019 media
drwxr-xr-x  2 root root 4096 May 11  2019 mnt
drwxr-xr-x  2 root root 4096 May 11  2019 opt
dr-xr-xr-x 89 root root    0 Jul  6 02:20 proc
dr-xr-x---  2 root root 4096 Jun 11 02:35 root
drwxr-xr-x 11 root root 4096 Jun 11 02:35 run
lrwxrwxrwx  1 root root    8 May 11  2019 sbin -> usr/sbin
drwxr-xr-x  2 root root 4096 May 11  2019 srv
dr-xr-xr-x 13 root root    0 Jul  6 01:35 sys
drwxrwxrwt  7 root root 4096 Jun 11 02:35 tmp
drwxr-xr-x 12 root root 4096 Jun 11 02:35 usr
drwxr-xr-x 20 root root 4096 Jun 11 02:35 var
drwxr-xr-x  2 root root 4096 Jul  6 02:20 volume01
drwxr-xr-x  2 root root 4096 Jul  6 02:20 volume02

You can see volume01 and volume02. Create A.java inside docker01's volume01:

[root@7d0426c6bb42 /]# cd volume01
[root@7d0426c6bb42 volume01]# ls
[root@7d0426c6bb42 volume01]# touch A.java
[root@7d0426c6bb42 volume01]# ls
A.java
[root@7d0426c6bb42 volume01]#

Create docker02 and docker03:

[root@liuzeyu12a ~]# docker run -it --name docker02 --volumes-from docker01 liuze***ntos:1.0
[root@3f07c4697007 /]# ls
bin  dev  etc  home  lib  lib64  lost+found  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var  volume01  volume02
[root@3f07c4697007 /]# cd volume01
[root@3f07c4697007 volume01]# ls
A.java
[root@3f07c4697007 volume01]#

[root@liuzeyu12a ~]# docker run -it --name docker03 --volumes-from docker01 liuze***ntos:1.0
[root@75509ae25da2 /]# ls
bin  dev  etc  home  lib  lib64  lost+found  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var  volume01  volume02
[root@75509ae25da2 /]# cd volume01
[root@75509ae25da2 volume01]# ls
A.java
[root@75509ae25da2 volume01]#

Now docker02 and docker03 are synchronized with docker01's data. If we then create docker04 with --volumes-from docker03, will it also receive the volume data?

[root@liuzeyu12a ~]# docker run -it --name docker04 --volumes-from docker03 liuze***ntos:1.0
[root@897f32127f3b /]# ls
bin  dev  etc  home  lib  lib64  lost+found  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var  volume01  volume02
[root@897f32127f3b /]# cd volume01
[root@897f32127f3b volume01]# ls
A.java
[root@897f32127f3b volume01]#

Clearly the data is synchronized there as well!
If docker01 is stopped and removed, do the other containers keep the data?

[root@liuzeyu12a docker-test-volume]# docker ps
CONTAINER ID        IMAGE                COMMAND                  CREATED              STATUS              PORTS               NAMES
897f32127f3b        liuze***ntos:1.0   "/bin/sh -c /bin/bash"   About a minute ago   Up About a minute                       docker04
75509ae25da2        liuze***ntos:1.0   "/bin/sh -c /bin/bash"   5 minutes ago        Up 5 minutes                            docker03
3f07c4697007        liuze***ntos:1.0   "/bin/sh -c /bin/bash"   8 minutes ago        Up 8 minutes                            docker02

The data in the other containers is still there — it has been persisted!

[root@897f32127f3b /]# cd volume01
[root@897f32127f3b volume01]# ls
A.java
[root@897f32127f3b volume01]#

Summary

  • Configuration and data can be passed between containers this way, and the sharing lasts until no container uses the volume anymore.
  • Once data has been persisted to a local host directory, that local directory is not deleted when containers are removed.

7.3 Dockerfile in detail

7.3.1 Dockerfile introduction

A Dockerfile is the build file used to create a Docker image; it is essentially a script of commands!

Build steps

  1. Write the dockerfile
  2. docker build it into an image
  3. docker run the image
  4. docker push the image (to Docker Hub or Alibaba Cloud)

Go to the Docker Hub website and, taking centos as an example, look at how it is built.

Pick any version.

You end up on GitHub, where you find the following script:

FROM scratch
ADD centos-7-x86_64-docker.tar.xz /

LABEL \
    org.label-schema.schema-version="1.0" \
    org.label-schema.name="CentOS Base Image" \
    org.label-schema.vendor="CentOS" \
    org.label-schema.license="GPLv2" \
    org.label-schema.build-date="20200504" \
    org.opencontainers.image.title="CentOS Base Image" \
    org.opencontainers.image.vendor="CentOS" \
    org.opencontainers.image.licenses="GPL-2.0-only" \
    org.opencontainers.image.created="2020-05-04 00:00:00+01:00"

CMD ["/bin/bash"]

This is the Dockerfile that builds the centos image; each uppercase instruction builds a new image layer!

7.3.2 Building with a Dockerfile

The most basic official images often lack a lot of functionality — they are slimmed-down systems — so we usually rebuild our own image on top of them before use.

Things to note when writing a Dockerfile:

  • Every instruction keyword must be uppercase
  • Instructions are executed from top to bottom
  • # marks a comment
  • Every instruction creates and commits a new image layer

  • A Dockerfile is written from the developer's point of view: to ship a project as an image, you write a Dockerfile for it
  • Docker images are becoming the standard unit of delivery for enterprises
  • Dockerfile: the build file that defines the sequence of steps — the source code of the image
  • Docker image: the artifact produced by the build, used for publishing and running the product; compared with a jar or war package, an image also bundles the runtime environment
  • Docker container: the image brought to life, providing the actual service

Dockerfile instructions

FROM            #base image
MAINTAINER      #image author information (name<email>)
RUN             #command executed while building the image
CMD             #command run when the container starts (only the last CMD takes effect; can be overridden)
ENTRYPOINT      #command run when the container starts; extra arguments are appended to it
USER            #user the container runs as
EXPOSE          #port the container exposes to the host
ENV             #set environment variables
ADD             #copy files from src to dest; archives are unpacked automatically
VOLUME          #declare mount points
WORKDIR         #working directory
ONBUILD         #runs when this image is used as the base of another build
COPY            #like ADD, copies files into the image, but does not unpack archives


Requirement: use the plain centos image as the base, add extra functionality such as the vim and ifconfig commands, and build a new image.

  1. Write the dockerfile
[root@liuzeyu12a dockerfile]# cat mydockerfile
FROM centos
MAINTAINER liuzeyu<liuzeyu12a@163.com>

ENV MYPATH /usr/local
WORKDIR $MYPATH

RUN yum -y install vim
RUN yum -y install net-tools

EXPOSE 80

CMD echo $MYPATH
CMD echo "---end---"
CMD /bin/bash

  2. Build the image
[root@liuzeyu12a dockerfile]# docker build -f mydockerfile -t centos:latest .
Sending build context to Docker daemon  2.048kB
Step 1/10 : FROM centos
 ---> 470325a2b6fd   #built from the local base image, so nothing needs to be downloaded
Step 2/10 : MAINTAINER liuzeyu<liuzeyu12a@163.com>
....
Step 3/10 : ENV MYPATH /usr/local
....
Step 4/10 : WORKDIR $MYPATH
....
Step 5/10 : RUN yum -y install vim
....
Step 6/10 : RUN yum -y install net-tools
....
Step 7/10 : EXPOSE 80
....
Step 8/10 : CMD echo $MYPATH
....
Step 9/10 : CMD echo "---end---"
....
Step 10/10 : CMD /bin/bash
....
Successfully tagged centos:latest

# The trailing "." in docker build specifies the directory used as the build context.

You can see the dockerfile has 10 instructions, so the build runs in 10 steps and produces 10 layers!

  3. Test the new image
[root@liuzeyu12a dockerfile]# docker run -it 5db2845f9925 
[root@fc900825c29c local]# pwd
/usr/local
[root@fc900825c29c local]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.17.0.2  netmask 255.255.0.0  broadcast 172.17.255.255
        ether 02:42:ac:11:00:02  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@fc900825c29c local]# ls
bin  etc  games  include  lib  lib64  libexec  sbin  share  src
[root@fc900825c29c local]# vim L.java
[root@fc900825c29c local]#

The missing functionality is now built in. You do not need to append /bin/bash when starting the container, because the last line of the dockerfile already runs it when the container starts.

  4. Examine how an image was built
[root@liuzeyu12a dockerfile]# docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS               NAMES
fc900825c29c        5db2845f9925        "/bin/sh -c /bin/bash"   10 minutes ago      Up 10 minutes       80/tcp              brave_sutherland
[root@liuzeyu12a dockerfile]# docker history 5db2845f9925
IMAGE               CREATED             CREATED BY                                      SIZE                COMMENT
5db2845f9925        10 minutes ago      /bin/sh -c #(nop) CMD ["/bin/sh" "-c" "/bin… 0B
70f4c94dc229        10 minutes ago      /bin/sh -c #(nop) CMD ["/bin/sh" "-c" "echo… 0B
7965b5568e75        10 minutes ago      /bin/sh -c #(nop) CMD ["/bin/sh" "-c" "echo… 0B
ab2cc2e9201d        10 minutes ago      /bin/sh -c #(nop) EXPOSE 80 0B
5757e3146515        10 minutes ago      /bin/sh -c yum -y install net-tools             22.7MB
27fa9f7f1a64        10 minutes ago      /bin/sh -c yum -y install vim                   57.2MB
a21d2752d9af        11 minutes ago      /bin/sh -c #(nop) WORKDIR /usr/local 0B
c82227413f4b        11 minutes ago      /bin/sh -c #(nop) ENV MYPATH=/usr/local 0B
a787f8ee6549        11 minutes ago      /bin/sh -c #(nop) MAINTAINER liuzeyu<liuzey… 0B
470325a2b6fd        3 hours ago         /bin/sh -c #(nop) CMD ["/bin/sh" "-c" "/bin… 0B
9ef80623e347        3 hours ago         /bin/sh -c #(nop) CMD ["/bin/sh" "-c" "echo… 0B
59f46cc7d171        3 hours ago         /bin/sh -c #(nop) VOLUME [volume01 volume02] 0B
8e7aa8e282e1        3 hours ago         /bin/sh -c #(nop) CMD ["/bin/sh" "-c" "/bin… 0B
f7e478f27834        3 hours ago         /bin/sh -c #(nop) CMD ["/bin/sh" "-c" "echo… 0B
bf7611fd9b48        3 hours ago         /bin/sh -c #(nop) VOLUME [volume01 volume02] 0B
16af046ab7b2        3 hours ago         /bin/sh -c #(nop) CMD ["/bin/sh" "-c" "/bin… 0B
afd045525cb9        3 hours ago         /bin/sh -c #(nop) CMD ["/bin/sh" "-c" "echo… 0B
e8d821f4f1a8        3 hours ago         /bin/sh -c #(nop) VOLUME [volume01 volume02] 0B
831691599b88        2 weeks ago         /bin/sh -c #(nop) CMD ["/bin/bash"] 0B
<missing>           2 weeks ago         /bin/sh -c #(nop) LABEL org.label-schema.sc… 0B
<missing>           2 weeks ago         /bin/sh -c #(nop) ADD file:84700c11fcc969ac0… 215MB
[root@liuzeyu12a dockerfile]#

You can see every stage of the image's construction — its build history!

7.3.3 CMD vs. ENTRYPOINT

CMD: executed when the container starts; each CMD overrides the previous one, so only the last CMD takes effect!

#write the dockerfile
[root@liuzeyu12a dockerfile]# cat cmd-dockerfile
FROM centos
CMD ["ls","-a"]

#build the image
[root@liuzeyu12a dockerfile]# docker build -f cmd-dockerfile -t centos .
Sending build context to Docker daemon  3.072kB
Step 1/2 : FROM centos
 ---> 00d0ba211c11
Step 2/2 : CMD ["ls","-a"]
 ---> Running in 95115c5d2bb1
Removing intermediate container 95115c5d2bb1
 ---> 72a381d54dc4
Successfully built 72a381d54dc4
Successfully tagged centos:latest

#run the container
[root@liuzeyu12a dockerfile]# docker run 72a381d54dc4
.
..
bin
etc
games
include
lib
lib64
libexec
sbin
share
src

#If an argument such as -l is appended when running the container, it replaces the CMD (ls -a) entirely and Docker tries to execute -l on its own; that is not a valid executable, so it fails!

[root@liuzeyu12a dockerfile]# docker run 72a381d54dc4 -l
docker: Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused "exec: \"-l\": executable file not found in $PATH": unknown.

ENTRYPOINT: also executed when the container starts, but unlike CMD, any extra arguments are appended after the ENTRYPOINT command

#write the dockerfile
[root@liuzeyu12a dockerfile]# cat entrypoint-dockerfile
FROM centos
ENTRYPOINT ["ls","-a"]

#build the image; note the space between ENTRYPOINT and the [] list
[root@liuzeyu12a dockerfile]# docker build -f entrypoint-dockerfile -t centos .
Sending build context to Docker daemon  4.096kB
Step 1/2 : FROM centos
 ---> 72a381d54dc4
Step 2/2 : ENTRYPOINT ["ls","-a"]
 ---> Running in abe8ef1c61a2
Removing intermediate container abe8ef1c61a2
 ---> 9d3dc8bf3008
Successfully built 9d3dc8bf3008
Successfully tagged centos:latest

#run the container
[root@liuzeyu12a dockerfile]# docker run 9d3dc8bf3008
.
..
bin
etc
games
include
lib
lib64
libexec
sbin
share
src

#Append an argument; this time it works, because -l is appended after ls -a, turning it into ls -al
[root@liuzeyu12a dockerfile]# docker run 9d3dc8bf3008 -l
total 48
drwxr-xr-x 12 root root 4096 Jun 11 02:35 .
drwxr-xr-x  1 root root 4096 Jun 11 02:35 ..
drwxr-xr-x  2 root root 4096 May 11  2019 bin
drwxr-xr-x  2 root root 4096 May 11  2019 etc
drwxr-xr-x  2 root root 4096 May 11  2019 games
drwxr-xr-x  2 root root 4096 May 11  2019 include
drwxr-xr-x  2 root root 4096 May 11  2019 lib
drwxr-xr-x  2 root root 4096 May 11  2019 lib64
drwxr-xr-x  2 root root 4096 May 11  2019 libexec
drwxr-xr-x  2 root root 4096 May 11  2019 sbin
drwxr-xr-x  5 root root 4096 Jun 11 02:35 share
drwxr-xr-x  2 root root 4096 May 11  2019 src
[root@liuzeyu12a dockerfile]#

7.4 Building a Tomcat image (ADD, COPY)

  1. Prepare the image files: the tomcat and jdk tarballs


2. Write the Dockerfile

[root@liuzeyu12a tomcat]# cat Dockerfile
FROM centos
MAINTAINER liuzeyu<liuzeyu12a@163.com>

COPY readme.txt /usr/local/readme.txt

ADD apache-tomcat-8.5.56.tar.gz /usr/local/
ADD jdk-8u144-linux-x64.tar.gz /usr/local/

RUN yum -y install vim

ENV MYPATH /usr/local
WORKDIR $MYPATH

ENV JAVA_HOME /usr/local/jdk1.8.0_144
ENV CLASSPATH $JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
ENV CATALINA_HOME /usr/local/apache-tomcat-8.5.56
ENV CATALINA_BASH /usr/local/apache-tomcat-8.5.56
ENV PATH $PATH:$JAVA_HOME/bin:$CATALINA_HOME/lib:$CATALINA_HOME/bin

EXPOSE 8080

CMD /usr/local/apache-tomcat-8.5.56/bin/startup.sh && tail -F /usr/local/apache-tomcat-8.5.56/logs/catalina.out
[root@liuzeyu12a tomcat]#

  3. Build the image
[root@liuzeyu12a tomcat]# docker build -t diytomcat .
Sending build context to Docker daemon  195.9MB
Step 1/15 : FROM centos
 ---> 9d3dc8bf3008
Step 2/15 : MAINTAINER liuzeyu<liuzeyu12a@163.com>
 ---> Using cache
 ---> dd6ce2e768a7
Step 3/15 : COPY readme.txt /usr/local/readme.txt
 ---> c5dad16763be
Step 4/15 : ADD apache-tomcat-8.5.56.tar.gz /usr/local/
 ---> 3f7df1ab7800
Step 5/15 : ADD jdk-8u144-linux-x64.tar.gz /usr/local/
 ---> 0b71609d5306
Step 6/15 : RUN yum -y install vim
 ---> Running in ae164124179c
Last metadata expiration check: 5:29:05 ago on Mon Jul  6 04:17:49 2020.
Package vim-enhanced-2:8.0.1763-13.el8.x86_64 is already installed.
Dependencies resolved.
Nothing to do.
Complete!
Removing intermediate container ae164124179c
 ---> a8d8ea362dc9
Step 7/15 : ENV MYPATH /usr/local
 ---> Running in 97b926fc5fff
Removing intermediate container 97b926fc5fff
 ---> 5df614ad66f6
Step 8/15 : WORKDIR $MYPATH
 ---> Running in f590e2d91db5
Removing intermediate container f590e2d91db5
 ---> 5e52e10e3521
Step 9/15 : ENV JAVA_HOME /usr/local/jdk1.8.0_144
 ---> Running in a9fd7eac4ddd
Removing intermediate container a9fd7eac4ddd
 ---> 358306c4cce3
Step 10/15 : ENV CLASSPATH $JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
 ---> Running in c9a4919b82bd
Removing intermediate container c9a4919b82bd
 ---> 26e38ca39fa0
Step 11/15 : ENV CATALINA_HOME /usr/local/apache-tomcat-8.5.56
 ---> Running in e44fb7fcea91
Removing intermediate container e44fb7fcea91
 ---> c889c8b69275
Step 12/15 : ENV CATALINA_BASH /usr/local/apache-tomcat-8.5.56
 ---> Running in 5cf4c0e69543
Removing intermediate container 5cf4c0e69543
 ---> 527578b98314
Step 13/15 : ENV PATH $PATH:$JAVA_HOME/bin:$CATALINA_HOME/lib:$CATALINA_HOME/bin
 ---> Running in 8c604d7d2a5b
Removing intermediate container 8c604d7d2a5b
 ---> fd0270144262
Step 14/15 : EXPOSE 8080
 ---> Running in b1013a997bff
Removing intermediate container b1013a997bff
 ---> daa2a5037a1d
Step 15/15 : CMD /usr/local/apache-tomcat-8.5.56/bin/startup.sh && tail -F /usr/local/apache-tomcat-8.5.56/logs/catalina.out
 ---> Running in 6e33a6bfedda
Removing intermediate container 6e33a6bfedda
 ---> 21324f96c673
Successfully built 21324f96c673
Successfully tagged diytomcat:latest
[root@liuzeyu12a tomcat]#

[root@liuzeyu12a tomcat]# docker images
REPOSITORY            TAG                 IMAGE ID            CREATED             SIZE
diytomcat             latest              21324f96c673        25 seconds ago      696MB
centos                latest              9d3dc8bf3008        45 minutes ago      295MB
liuze***ntos        1.0                 46feb771d84f        7 hours ago         215MB
tomcat01              V1.0                1167f3e95095        25 hours ago        534MB
centos                <none>              831691599b88        2 weeks ago         215MB
tomcat                8.5                 e010d327a904        3 weeks ago         529MB
nginx                 latest              2622e6cca7eb        3 weeks ago         132MB
mysql                 5.7                 9cfcce23593a        3 weeks ago         448MB
portainer/portainer   latest              cd645f5a4769        4 weeks ago         79.1MB
hello-world           latest              bf756fb1ae65        6 months ago        13.3kB

  4. Run the container and set up the mounted directories
docker run -d -p 9090:8080 --name tomcat -v /home/tomcat/build/test:/usr/local/apache-tomcat-8.5.56/webapps/test -v /home/tomcat/build/tomcatlogs:/usr/local/apache-tomcat-8.5.56/logs/ diytomcat
  5. Check that the data is synchronized

[root@liuzeyu12a build]# ls tomcatlogs/
catalina.2020-07-06.log  host-manager.2020-07-06.log  localhost_access_log.2020-07-06.txt
catalina.out             localhost.2020-07-06.log     manager.2020-07-06.log

  6. Check the running container
[root@liuzeyu12a build]# docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                    NAMES
9cc1c049cb06        diytomcat           "/bin/sh -c '/usr/lo…"   4 minutes ago       Up 4 minutes        0.0.0.0:9090->8080/tcp   tomcat03

# enter the running container
[root@liuzeyu12a build]# docker exec -it tomcat03 /bin/bash
[root@9cc1c049cb06 local]# ls
aegis  apache-tomcat-8.5.56  bin  etc  games  include  jdk1.8.0_144  lib  lib64  libexec  readme.txt  sbin  share  src

#show the current directory
[root@9cc1c049cb06 local]# pwd
/usr/local
[root@9cc1c049cb06 local]# ls
aegis  apache-tomcat-8.5.56  bin  etc  games  include  jdk1.8.0_144  lib  lib64  libexec  readme.txt  sbin  share  src

# look at the tomcat directory
[root@9cc1c049cb06 local]# ls -l apache-tomcat-8.5.56/
total 148
-rw-r----- 1 root root 19318 Jun  3 20:22 BUILDING.txt
-rw-r----- 1 root root  5408 Jun  3 20:22 CONTRIBUTING.md
-rw-r----- 1 root root 57011 Jun  3 20:22 LICENSE
-rw-r----- 1 root root  1726 Jun  3 20:22 NOTICE
-rw-r----- 1 root root  3255 Jun  3 20:22 README.md
-rw-r----- 1 root root  7136 Jun  3 20:22 RELEASE-NOTES
-rw-r----- 1 root root 16262 Jun  3 20:22 RUNNING.txt
drwxr-x--- 2 root root  4096 Jun  3 20:22 bin
drwx------ 1 root root  4096 Jul  6 13:37 conf
drwxr-x--- 2 root root  4096 Jun  3 20:19 lib
drwxr-xr-x 2 root root  4096 Jul  6 13:37 logs
drwxr-x--- 2 root root  4096 Jun  3 20:19 temp
drwxr-x--- 1 root root  4096 Jul  6 13:37 webapps
drwxr-x--- 1 root root  4096 Jul  6 13:37 work
  7. Access the Tomcat home page
[root@9cc1c049cb06 local]# curl -I 127.0.0.1:8080
HTTP/1.1 200
Content-Type: text/html;charset=UTF-8
Transfer-Encoding: chunked
Date: Mon, 06 Jul 2020 13:49:01 GMT

Access it from a browser:

  8. Try deploying your own website on the Tomcat running in Docker

Since the host directory is already mapped into the container:

/home/tomcat/build/test:/usr/local/apache-tomcat-8.5.56/webapps/test

we only need to place the site in
/home/tomcat/build/test
on the host!

When deploying a directory under Tomcat's webapps, pay attention to the directory structure:

-webapps
	-projectName
		-page.html
		-WEB-INF
			-web.xml
This is the minimal layout a deployable project needs!

Set up the project:

[root@bfe9666af268 webapps]# cd test/hello-tomcat/
[root@bfe9666af268 hello-tomcat]# ls
WEB-INF  hello.html
[root@bfe9666af268 hello-tomcat]# ls WEB-INF
web.xml
[root@bfe9666af268 hello-tomcat]#

Provide web.xml and the hello.html page:

<?xml version="1.0" encoding="UTF-8"?>
 <web-app xmlns="http://java.sun.com/xml/ns/javaee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd" version="2.5">

 </web-app>
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<title>liuzeyu12a</title>
</head>
<body>
    <h1>LiuZeyu cool!!!</h1>
</body>
</html>

Test it:

See also: how Tomcat publishes projects

8. Publishing images

8.1 Publishing an image to Docker Hub

  1. First you need a Docker Hub account

  2. Package the image and tag it

Note the required format: dockerAccount/imageName:[tag]

[root@liuzeyu12a tomcat]# docker tag diytomcat:latest liuzeyu12a/diytomcat:1.0
[root@liuzeyu12a tomcat]# docker images
REPOSITORY             TAG                 IMAGE ID            CREATED             SIZE
diytomcat              latest              659eedd626b7        2 hours ago         663MB
liuzeyu12a/diytomcat   1.0                 659eedd626b7        2 hours ago         663MB

  3. docker push to the remote registry
#log in
[root@liuzeyu12a tomcat]# docker login -u liuzeyu12a
Password:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded
[root@liuzeyu12a tomcat]# docker images

#push; the upload can take up to a few hours...
[root@liuzeyu12a tomcat]# docker push liuzeyu12a/diytomcat:1.0
The push refers to repository [docker.io/liuzeyu12a/diytomcat]
9feb96716809: Pushing [=============================================>     ]  52.34MB/57.17MB
73db2c879a3a: Pushing [=======>                                           ]  54.26MB/376.2MB
5696afaceeb2: Pushed
4d0d67147ca1: Pushed
eb29745b8228: Mounted from library/centos

Published successfully!

8.2 Publishing an image to the Alibaba Cloud registry

  1. Open the Alibaba Cloud Container Registry service
  2. Create a namespace to organize repositories
  3. Create an image repository




Click into the repository to view its details!

It shows the exact steps for using it.

  4. Following those steps, push the local diytomcat image to Alibaba Cloud
[root@liuzeyu12a ~]# docker images
REPOSITORY                                                   TAG                 IMAGE ID            CREATED             SIZE
diytomcat-aliyun/diytomcat                                   latest              659eedd626b7        13 hours ago        663MB
diytomcat                                                    latest              659eedd626b7        13 hours ago        663MB
liuzeyu12a/diytomcat                                         1.0                 659eedd626b7        13 hours ago        663MB

  • Log in to the Alibaba Cloud Docker Registry
$ sudo docker login --username=lzy15359809080 registry.cn-hangzhou.aliyuncs.com
  • Note that the image must be re-tagged first
docker tag [ImageId] registry.cn-hangzhou.aliyuncs.com/liuzeyu12a-hub/diytomcat:latest
  • push
[root@liuzeyu12a ~]# docker push registry.cn-hangzhou.aliyuncs.com/liuzeyu12a-hub/diytomcat:latest
The push refers to repository [registry.cn-hangzhou.aliyuncs.com/liuzeyu12a-hub/diytomcat]
9feb96716809: Pushing [==============>                                    ]  16.76MB/57.17MB
73db2c879a3a: Pushing [>                                                  ]  6.551MB/376.2MB
5696afaceeb2: Pushing [==================>                                ]  5.391MB/14.53MB
4d0d67147ca1: Pushed
eb29745b8228: Pushing [==>                                                ]  12.55MB/215.3MB
9feb96716809: Pushing [========================>                          ]  27.46MB/57.17MB
73db2c879a3a: Pushing [=>                                                 ]  9.864MB/376.2MB
5696afaceeb2: Pushing [===============================>                   ]  9.137MB/14.53MB
eb29745b8228: Pushing [====>                                              ]  21.36MB/215.3MB

Wait a few minutes and the push completes successfully!

9. Docker networking

9.1 docker0 in detail

Look at the host's network interfaces.

There is a docker0 interface; it is created automatically when Docker is installed, and it provides routing and forwarding for the containers.

Start two containers, tomcat01 and tomcat02:

[root@liuzeyu12a ~]# docker run -d -P --name tomcat01 tomcat
[root@liuzeyu12a ~]# docker run -d -P --name tomcat02 tomcat


# check the host's network interfaces:
[root@liuzeyu12a ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:16:3e:14:33:52 brd ff:ff:ff:ff:ff:ff
    inet 172.16.17.65/20 brd 172.16.31.255 scope global dynamic eth0
       valid_lft 315193979sec preferred_lft 315193979sec
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:75:5d:9d:4b brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
131: veth6e30570@if130: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
    link/ether 66:74:6c:e9:2f:2b brd ff:ff:ff:ff:ff:ff link-netnsid 0
133: veth8e4ce19@if132: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
    link/ether 6e:02:7d:c9:77:e5 brd ff:ff:ff:ff:ff:ff link-netnsid 1

# check the interfaces inside the tomcat01 container
[root@liuzeyu12a ~]# docker exec 866b3a6c9c65 ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
130: eth0@if131: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever

# check the interfaces inside the tomcat02 container
[root@liuzeyu12a ~]# docker exec 4c36f6b19701 ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
132: eth0@if133: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.17.0.3/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever

This reveals a pattern: every time we start a tomcat container, a new virtual interface appears on the host, and the host and container ends of the pair can reach each other; this is the veth-pair technique.

Test the connectivity between tomcat01 and tomcat02:

# tomcat02 pings tomcat01
[root@liuzeyu12a ~]# docker exec tomcat02 ping 172.17.0.2
PING 172.17.0.2 (172.17.0.2) 56(84) bytes of data.
64 bytes from 172.17.0.2: icmp_seq=1 ttl=64 time=0.107 ms
64 bytes from 172.17.0.2: icmp_seq=2 ttl=64 time=0.116 ms
64 bytes from 172.17.0.2: icmp_seq=3 ttl=64 time=0.077 ms
^C

# tomcat01 pings tomcat02
[root@liuzeyu12a ~]# docker exec tomcat01 ping 172.17.0.3
PING 172.17.0.3 (172.17.0.3) 56(84) bytes of data.
64 bytes from 172.17.0.3: icmp_seq=1 ttl=64 time=0.082 ms
64 bytes from 172.17.0.3: icmp_seq=2 ttl=64 time=0.074 ms
^C

# tomcat01 pings docker0
[root@liuzeyu12a ~]# docker exec tomcat01 ping 172.17.0.1
PING 172.17.0.1 (172.17.0.1) 56(84) bytes of data.
64 bytes from 172.17.0.1: icmp_seq=1 ttl=64 time=0.095 ms
64 bytes from 172.17.0.1: icmp_seq=2 ttl=64 time=0.070 ms
^C
[root@liuzeyu12a ~]#

They can all ping one another!
However, if we ping by container name instead of IP, it does not work!

[root@liuzeyu12a ~]# docker exec tomcat01 ping tomcat02
ping: tomcat02: Name or service not known

So how can we make that work? Enter custom networks!

9.2 Custom networks

[root@liuzeyu12a ~]# docker network --help

Usage:  docker network COMMAND

Manage networks

Commands:
  connect     Connect a container to a network
  create      Create a network
  disconnect  Disconnect a container from a network
  inspect     Display detailed information on one or more networks
  ls          List networks
  prune       Remove all unused networks
  rm          Remove one or more networks

Run 'docker network COMMAND --help' for more information on a command.

# list docker's networks
[root@liuzeyu12a ~]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
4a10d56fc7d1        bridge              bridge              local
92572394aa82        host                host                local
9d233e7bc390        none                null                local
[root@liuzeyu12a ~]#
  • bridge: bridged networking, the default mode for Docker containers
  • host: share the host's network stack
  • none: no network configuration
[root@liuzeyu12a ~]# docker run -d -P --name tomcat01 --network bridge tomcat
# is equivalent to
[root@liuzeyu12a ~]# docker run -d -P --name tomcat01 tomcat

Create a custom network instead of using docker0:

[root@liuzeyu12a ~]# docker network create --driver bridge --subnet 172.20.0.0/16 --gateway 172.20.0.1 mynet
33c9b9028d0b28abfe63e90a1f819aac089af113498b64e39a76e61228a5f869
[root@liuzeyu12a ~]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
4a10d56fc7d1        bridge              bridge              local
92572394aa82        host                host                local
33c9b9028d0b        mynet               bridge              local
9d233e7bc390        none                null                local
[root@liuzeyu12a ~]# docker inspect mynet
[
    {
   
        "Name": "mynet",
        "Id": "33c9b9028d0b28abfe63e90a1f819aac089af113498b64e39a76e61228a5f869",
        "Created": "2020-07-07T13:55:18.648611917+08:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
   
            "Driver": "default",
            "Options": {
   },
            "Config": [
                {
   
                    "Subnet": "172.20.0.0/16",
                    "Gateway": "172.20.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
   
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
   },
        "Options": {
   },
        "Labels": {
   }
    }
]

Start two tomcats on the custom network with --net:

[root@liuzeyu12a ~]# docker run -d -P --name tomat-net-01 --net mynet tomcat
8efd6cc5784dbff9e08e45359cfb33970d93b32f4873a0ad86d7b62cbb1d8056
[root@liuzeyu12a ~]# docker run -d -P --name tomat-net-02 --net mynet tomcat
45c1584c51dcd3b2e344f3bdcb3293e69dc640e1897d58658a77e8936ca7197c
[root@liuzeyu12a ~]# docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS                     NAMES
45c1584c51dc        tomcat              "catalina.sh run"   7 seconds ago       Up 6 seconds        0.0.0.0:32771->8080/tcp   tomat-net-02
8efd6cc5784d        tomcat              "catalina.sh run"   12 seconds ago      Up 11 seconds       0.0.0.0:32770->8080/tcp   tomat-net-01

Look at our custom network again:

[root@liuzeyu12a ~]# docker inspect mynet
[
    {
   
        "Name": "mynet",
        "Id": "33c9b9028d0b28abfe63e90a1f819aac089af113498b64e39a76e61228a5f869",
        "Created": "2020-07-07T13:55:18.648611917+08:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
   
            "Driver": "default",
            "Options": {
   },
            "Config": [
                {
   
                    "Subnet": "172.20.0.0/16",
                    "Gateway": "172.20.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
   
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
   
            "45c1584c51dcd3b2e344f3bdcb3293e69dc640e1897d58658a77e8936ca7197c": {
   
                "Name": "tomat-net-02",
                "EndpointID": "d5aa3ab8baeb2b30d3fe2a514fa3865a00a2f0207a6c3a1e9045dd382f821911",
                "MacAddress": "02:42:ac:14:00:03",
                "IPv4Address": "172.20.0.3/16",
                "IPv6Address": ""
            },
            "8efd6cc5784dbff9e08e45359cfb33970d93b32f4873a0ad86d7b62cbb1d8056": {
   
                "Name": "tomat-net-01",
                "EndpointID": "b185d7148060e59e54d98101a8bf041f4ba763ae6388dea646d1e1dbc46d5e6c",
                "MacAddress": "02:42:ac:14:00:02",
                "IPv4Address": "172.20.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {
   },
        "Labels": {
   }
    }
]

Under Containers there are now two entries, tomat-net-01 and tomat-net-02, which means the two newly created tomcats belong to the custom mynet network!

Now test connectivity between tomat-net-01 and tomat-net-02:

[root@liuzeyu12a ~]# docker exec tomat-net-01 ping 172.20.0.3
PING 172.20.0.3 (172.20.0.3) 56(84) bytes of data.
64 bytes from 172.20.0.3: icmp_seq=1 ttl=64 time=0.120 ms
64 bytes from 172.20.0.3: icmp_seq=2 ttl=64 time=0.100 ms
^C
[root@liuzeyu12a ~]# docker exec tomat-net-02 ping 172.20.0.2
PING 172.20.0.2 (172.20.0.2) 56(84) bytes of data.
64 bytes from 172.20.0.2: icmp_seq=1 ttl=64 time=0.102 ms
64 bytes from 172.20.0.2: icmp_seq=2 ttl=64 time=0.080 ms
^C
[root@liuzeyu12a ~]# docker exec tomcat-net-01 ping tomcat-net-02
Error: No such container: tomcat-net-01
[root@liuzeyu12a ~]# docker exec tomat-net-01 ping tomat-net-02
PING tomat-net-02 (172.20.0.3) 56(84) bytes of data.
64 bytes from tomat-net-02.mynet (172.20.0.3): icmp_seq=1 ttl=64 time=0.092 ms
64 bytes from tomat-net-02.mynet (172.20.0.3): icmp_seq=2 ttl=64 time=0.079 ms
^C

Because they belong to the same network, the two tomcats can reach each other, and they can also reach each other by container name, which fixes the limitation we saw on the docker0 network!

9.3 Communicating across networks

Question: if containers sit on different networks, can they reach each other?


[root@liuzeyu12a ~]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
4a10d56fc7d1        bridge              bridge              local
92572394aa82        host                host                local
33c9b9028d0b        mynet               bridge              local
9d233e7bc390        none                null                local
[root@liuzeyu12a ~]#

假设tomcat01,tomcat02处在docker0中,而tomat-net-01,tomat-net-02处在mynet中,它们是否可以互相访问?

  1. 创建tomcat01,tomcat02
[root@liuzeyu12a ~]# docker run -d -P --name tomcat01 tomcat
2854ac02bf57f66208e20e61da24a103a4522522ab17ae02d6ce71f9e434ca28
[root@liuzeyu12a ~]# docker run -d -P --name tomcat02 tomcat
eba676ca0a8d8422d880e7a0e24e9b721382930571e65ef4ba6cd215d73b828e
[root@liuzeyu12a ~]# docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS                     NAMES
eba676ca0a8d        tomcat              "catalina.sh run"   2 seconds ago       Up 1 second         0.0.0.0:32773->8080/tcp   tomcat02
2854ac02bf57        tomcat              "catalina.sh run"   6 seconds ago       Up 6 seconds        0.0.0.0:32772->8080/tcp   tomcat01
45c1584c51dc        tomcat              "catalina.sh run"   26 minutes ago      Up 26 minutes       0.0.0.0:32771->8080/tcp   tomat-net-02
8efd6cc5784d        tomcat              "catalina.sh run"   26 minutes ago      Up 26 minutes       0.0.0.0:32770->8080/tcp   tomat-net-01

[root@liuzeyu12a ~]# docker inspect bridge
[
    ...
        "Containers": {
            "2854ac02bf57f66208e20e61da24a103a4522522ab17ae02d6ce71f9e434ca28": {
                "Name": "tomcat01",
                "EndpointID": "178c442f9159ffbfd0a19c302893365326ee93719aa9479a4b6aa3a53ae3c48a",
                "MacAddress": "02:42:ac:11:00:02",
                "IPv4Address": "172.17.0.2/16",
                "IPv6Address": ""
            },
            "eba676ca0a8d8422d880e7a0e24e9b721382930571e65ef4ba6cd215d73b828e": {
                "Name": "tomcat02",
                "EndpointID": "905dcdbdf1aed4c8fe0f6959beb8b13620c9e97453b356628ff663823a05d92d",
                "MacAddress": "02:42:ac:11:00:03",
                "IPv4Address": "172.17.0.3/16",
                "IPv6Address": ""
            }
        },
    ...
]
  2. 尝试访问:
[root@liuzeyu12a ~]# docker exec tomcat01 ping tomat-net-01
ping: tomat-net-01: Name or service not known
[root@liuzeyu12a ~]# docker exec tomcat01 ping 172.20.0.2
^C
[root@liuzeyu12a ~]#

结果是无法访问的:tomcat01 在 172.17.0.0/16 网段,tomat-net-01 在 172.20.0.0/16 网段,两个网段之间没有打通路由,自然无法通信!
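
也可以用 --format 直接对比两个网络的网段(示例写法,输出以实际环境为准):

[root@liuzeyu12a ~]# docker network inspect -f '{{range .IPAM.Config}}{{.Subnet}}{{end}}' bridge mynet
172.17.0.0/16
172.20.0.0/16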

  3. 画图分析:
  4. 打通路由
[root@liuzeyu12a ~]# docker network --help

Usage:  docker network COMMAND

Manage networks

Commands:
  connect     Connect a container to a network  #连接
  create      Create a network 
  disconnect  Disconnect a container from a network
  inspect     Display detailed information on one or more networks
  ls          List networks
  prune       Remove all unused networks
  rm          Remove one or more networks

Run 'docker network COMMAND --help' for more information on a command.

# 将tomcat01添加到mynet的网络下
[root@liuzeyu12a ~]# docker network connect mynet tomcat01
[root@liuzeyu12a ~]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
4a10d56fc7d1        bridge              bridge              local
92572394aa82        host                host                local
33c9b9028d0b        mynet               bridge              local
9d233e7bc390        none                null                local
[root@liuzeyu12a ~]# docker inspect mynet
[
    {
        "Name": "mynet",
        "Id": "33c9b9028d0b28abfe63e90a1f819aac089af113498b64e39a76e61228a5f869",
        "Created": "2020-07-07T13:55:18.648611917+08:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.20.0.0/16",
                    "Gateway": "172.20.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "2854ac02bf57f66208e20e61da24a103a4522522ab17ae02d6ce71f9e434ca28": {
                "Name": "tomcat01",
                "EndpointID": "3b145e5c819f2ddb93e2b4f6abae8a7484c9e3410a1ba77967d82736e1cc94f9",
                "MacAddress": "02:42:ac:14:00:04",
                "IPv4Address": "172.20.0.4/16",
                "IPv6Address": ""
            },
            "45c1584c51dcd3b2e344f3bdcb3293e69dc640e1897d58658a77e8936ca7197c": {
                "Name": "tomat-net-02",
                "EndpointID": "d5aa3ab8baeb2b30d3fe2a514fa3865a00a2f0207a6c3a1e9045dd382f821911",
                "MacAddress": "02:42:ac:14:00:03",
                "IPv4Address": "172.20.0.3/16",
                "IPv6Address": ""
            },
            "8efd6cc5784dbff9e08e45359cfb33970d93b32f4873a0ad86d7b62cbb1d8056": {
                "Name": "tomat-net-01",
                "EndpointID": "b185d7148060e59e54d98101a8bf041f4ba763ae6388dea646d1e1dbc46d5e6c",
                "MacAddress": "02:42:ac:14:00:02",
                "IPv4Address": "172.20.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]

可以清楚地看到,mynet网络下多了tomcat01容器的信息!
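
此时 tomcat01 同时接入了 bridge 和 mynet 两个网络,拥有两个IP,可以用 --format 快速确认(示例写法,输出以实际环境为准):

[root@liuzeyu12a ~]# docker inspect -f '{{range $net, $conf := .NetworkSettings.Networks}}{{$net}}:{{$conf.IPAddress}} {{end}}' tomcat01
bridge:172.17.0.2 mynet:172.20.0.4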

  5. 重新访问
[root@liuzeyu12a ~]# docker exec tomcat01 ping 172.20.0.2
PING 172.20.0.2 (172.20.0.2) 56(84) bytes of data.
64 bytes from 172.20.0.2: icmp_seq=1 ttl=64 time=0.092 ms
64 bytes from 172.20.0.2: icmp_seq=2 ttl=64 time=0.078 ms
^C
[root@liuzeyu12a ~]# docker exec tomcat01 ping 172.20.0.3
PING 172.20.0.3 (172.20.0.3) 56(84) bytes of data.
64 bytes from 172.20.0.3: icmp_seq=1 ttl=64 time=0.109 ms
^C
[root@liuzeyu12a ~]# docker exec tomcat01 ping tomat-net-01
PING tomat-net-01 (172.20.0.2) 56(84) bytes of data.
64 bytes from tomat-net-01.mynet (172.20.0.2): icmp_seq=1 ttl=64 time=0.054 ms
^C
[root@liuzeyu12a ~]# docker exec tomcat01 ping tomat-net-02
PING tomat-net-02 (172.20.0.3) 56(84) bytes of data.
64 bytes from tomat-net-02.mynet (172.20.0.3): icmp_seq=1 ttl=64 time=0.055 ms
64 bytes from tomat-net-02.mynet (172.20.0.3): icmp_seq=2 ttl=64 time=0.075 ms
^C
[root@liuzeyu12a ~]#

可见此时,无论通过IP地址还是容器名,tomcat01 都可以访问 mynet 中的容器,说明两个网络之间的路由已经打通!
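
顺带一提,docker network connect 还支持 --ip 指定固定地址;如果不再需要互通,也可以用 disconnect 把容器从网络中移除(示例命令,IP仅为演示):

# 将tomcat01从mynet网络中移除
docker network disconnect mynet tomcat01
# 连接时指定固定IP(需在mynet网段172.20.0.0/16内)
docker network connect --ip 172.20.0.10 mynet tomcat02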

10. Redis集群与微服务项目的部署

10.1 Redis集群部署

#创建集群所需的网络环境
[root@liuzeyu12a ~]# docker network create redis --subnet 172.30.0.0/16

# 执行shell脚本
[root@liuzeyu12a ~]# for port in $(seq 1 6);\
> do \
> mkdir -p /mydata/redis/node-${port}/conf
> touch /mydata/redis/node-${port}/conf/redis.conf
> cat << EOF >/mydata/redis/node-${port}/conf/redis.conf
> port 6379
> bind 0.0.0.0
> cluster-enabled yes
> cluster-config-file nodes.conf
> cluster-node-timeout 5000
> cluster-announce-ip 172.30.0.1${port}
> cluster-announce-port 6379
> cluster-announce-bus-port 16379
> appendonly yes
> EOF
> done

#查看目录结构
[root@liuzeyu12a ~]# tree /mydata/redis/
/mydata/redis/
├── node-
│   └── conf
│       └── redis.conf
├── node-1
│   └── conf
│       └── redis.conf
├── node-2
│   └── conf
│       └── redis.conf
├── node-3
│   └── conf
│       └── redis.conf
├── node-4
│   └── conf
│       └── redis.conf
├── node-5
│   └── conf
│       └── redis.conf
└── node-6
    └── conf
        └── redis.conf

14 directories, 7 files
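
注意:上面多出的一个无编号的 node- 目录并不是这次循环生成的,猜测是此前脚本变量未展开时留下的残留,不影响后续操作,也可以直接删除:

[root@liuzeyu12a ~]# rm -rf /mydata/redis/node-/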


# 依次启动6台redis容器
docker run -p 6371:6379 -p 16371:16379 --name redis-1 \
-v /mydata/redis/node-1/data:/data \
-v /mydata/redis/node-1/conf/redis.conf:/etc/redis/redis.conf \
-d --net redis --ip 172.30.0.11 redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf

docker run -p 6372:6379 -p 16372:16379 --name redis-2 \
-v /mydata/redis/node-2/data:/data \
-v /mydata/redis/node-2/conf/redis.conf:/etc/redis/redis.conf \
-d --net redis --ip 172.30.0.12 redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf

docker run -p 6373:6379 -p 16373:16379 --name redis-3 \
-v /mydata/redis/node-3/data:/data \
-v /mydata/redis/node-3/conf/redis.conf:/etc/redis/redis.conf \
-d --net redis --ip 172.30.0.13 redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf

docker run -p 6374:6379 -p 16374:16379 --name redis-4 \
-v /mydata/redis/node-4/data:/data \
-v /mydata/redis/node-4/conf/redis.conf:/etc/redis/redis.conf \
-d --net redis --ip 172.30.0.14 redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf

docker run -p 6375:6379 -p 16375:16379 --name redis-5 \
-v /mydata/redis/node-5/data:/data \
-v /mydata/redis/node-5/conf/redis.conf:/etc/redis/redis.conf \
-d --net redis --ip 172.30.0.15 redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf

docker run -p 6376:6379 -p 16376:16379 --name redis-6 \
-v /mydata/redis/node-6/data:/data \
-v /mydata/redis/node-6/conf/redis.conf:/etc/redis/redis.conf \
-d --net redis --ip 172.30.0.16 redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf

#查看运行的redis服务器
[root@liuzeyu12a ~]# docker ps
CONTAINER ID        IMAGE                    COMMAND                  CREATED              STATUS              PORTS                                              NAMES
1b00219293c6        redis:5.0.9-alpine3.11   "docker-entrypoint.s…"   14 seconds ago       Up 13 seconds       0.0.0.0:6376->6379/tcp, 0.0.0.0:16376->16379/tcp   redis-6
d372dafeb050        redis:5.0.9-alpine3.11   "docker-entrypoint.s…"   20 seconds ago       Up 19 seconds       0.0.0.0:6375->6379/tcp, 0.0.0.0:16375->16379/tcp   redis-5
47f978bd6def        redis:5.0.9-alpine3.11   "docker-entrypoint.s…"   29 seconds ago       Up 28 seconds       0.0.0.0:6374->6379/tcp, 0.0.0.0:16374->16379/tcp   redis-4
0aa99a8a9ccb        redis:5.0.9-alpine3.11   "docker-entrypoint.s…"   36 seconds ago       Up 35 seconds       0.0.0.0:6373->6379/tcp, 0.0.0.0:16373->16379/tcp   redis-3
05112691a646        redis:5.0.9-alpine3.11   "docker-entrypoint.s…"   42 seconds ago       Up 42 seconds       0.0.0.0:6372->6379/tcp, 0.0.0.0:16372->16379/tcp   redis-2
1ec1e6bea27c        redis:5.0.9-alpine3.11   "docker-entrypoint.s…"   About a minute ago   Up About a minute   0.0.0.0:6371->6379/tcp, 0.0.0.0:16371->16379/tcp   redis-1
[root@liuzeyu12a ~]#
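
上面6条 docker run 命令只有容器名、端口、挂载目录和IP不同,也可以写成一个等价的循环一次性启动(示例写法):

for port in $(seq 1 6); do
  docker run -p 637${port}:6379 -p 1637${port}:16379 --name redis-${port} \
  -v /mydata/redis/node-${port}/data:/data \
  -v /mydata/redis/node-${port}/conf/redis.conf:/etc/redis/redis.conf \
  -d --net redis --ip 172.30.0.1${port} redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf
done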

进入其中一台redis服务器,创建redis集群

[root@liuzeyu12a ~]# docker exec -it redis-1 /bin/sh

/data # redis-cli --cluster create 172.30.0.11:6379 172.30.0.12:6379 172.30.0.13:6379 172.30.0.14:6379 172.30.0.15:6379 172.30.0.16:6379 --cluster-replicas 1
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 172.30.0.15:6379 to 172.30.0.11:6379
Adding replica 172.30.0.16:6379 to 172.30.0.12:6379
Adding replica 172.30.0.14:6379 to 172.30.0.13:6379
M: f9146db11593345068a6c875b5368fbda70dabf2 172.30.0.11:6379
   slots:[0-5460] (5461 slots) master
M: bea543b0fc119f7ece762e0e3093352e91f37967 172.30.0.12:6379
   slots:[5461-10922] (5462 slots) master
M: 64d658c17efca8d4e495312119ff7bb6cf5ac7e1 172.30.0.13:6379
   slots:[10923-16383] (5461 slots) master
S: b8a453a4a774edd6084b8cd0dd3aa38bddf58b53 172.30.0.14:6379
   replicates 64d658c17efca8d4e495312119ff7bb6cf5ac7e1
S: d66dcf94a9402f8252022f95786f53476364d427 172.30.0.15:6379
   replicates f9146db11593345068a6c875b5368fbda70dabf2
S: cd32e746c0ad3195daa3a9fff68ad78d6a09629e 172.30.0.16:6379
   replicates bea543b0fc119f7ece762e0e3093352e91f37967
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
...
>>> Performing Cluster Check (using node 172.30.0.11:6379)
M: f9146db11593345068a6c875b5368fbda70dabf2 172.30.0.11:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
M: 64d658c17efca8d4e495312119ff7bb6cf5ac7e1 172.30.0.13:6379
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
M: bea543b0fc119f7ece762e0e3093352e91f37967 172.30.0.12:6379
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: cd32e746c0ad3195daa3a9fff68ad78d6a09629e 172.30.0.16:6379
   slots: (0 slots) slave
   replicates bea543b0fc119f7ece762e0e3093352e91f37967
S: b8a453a4a774edd6084b8cd0dd3aa38bddf58b53 172.30.0.14:6379
   slots: (0 slots) slave
   replicates 64d658c17efca8d4e495312119ff7bb6cf5ac7e1
S: d66dcf94a9402f8252022f95786f53476364d427 172.30.0.15:6379
   slots: (0 slots) slave
   replicates f9146db11593345068a6c875b5368fbda70dabf2
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
/data #

查看创建好的redis集群

/data # clear
/data # redis-cli -c
127.0.0.1:6379> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:147
cluster_stats_messages_pong_sent:143
cluster_stats_messages_sent:290
cluster_stats_messages_ping_received:138
cluster_stats_messages_pong_received:147
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:290
127.0.0.1:6379>


#三主三从

127.0.0.1:6379> cluster nodes
64d658c17efca8d4e495312119ff7bb6cf5ac7e1 172.30.0.13:6379@16379 master - 0 1594106800076 3 connected 10923-16383
bea543b0fc119f7ece762e0e3093352e91f37967 172.30.0.12:6379@16379 master - 0 1594106800576 2 connected 5461-10922
cd32e746c0ad3195daa3a9fff68ad78d6a09629e 172.30.0.16:6379@16379 slave bea543b0fc119f7ece762e0e3093352e91f37967 0 1594106800576 6 connected
f9146db11593345068a6c875b5368fbda70dabf2 172.30.0.11:6379@16379 myself,master - 0 1594106800000 1 connected 0-5460
b8a453a4a774edd6084b8cd0dd3aa38bddf58b53 172.30.0.14:6379@16379 slave 64d658c17efca8d4e495312119ff7bb6cf5ac7e1 0 1594106801580 4 connected
d66dcf94a9402f8252022f95786f53476364d427 172.30.0.15:6379@16379 slave f9146db11593345068a6c875b5368fbda70dabf2 0 1594106800000 5 connected
127.0.0.1:6379>

测试集群的故障转移是否生效:将 master 节点 redis-3(172.30.0.13)停掉,看它的 slave(172.30.0.14)是否会顶替成为新的 master。

# 向redis集群写入 key a
127.0.0.1:6379> set a b
-> Redirected to slot [15495] located at 172.30.0.13:6379
OK
172.30.0.13:6379> quit


# 关闭redis-3
[root@liuzeyu12a ~]# docker stop redis-3
redis-3
[root@liuzeyu12a ~]#


重新连接集群再次获取 a,请求被重定向到 172.30.0.14,由它返回值:
/data # redis-cli -c
127.0.0.1:6379> get a
-> Redirected to slot [15495] located at 172.30.0.14:6379
"b"
172.30.0.14:6379>

可见主从已经完成切换:172.30.0.13 因故障被标记为 fail,172.30.0.14 被提升为新的 master。
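
如果想再整体校验一次集群,也可以在任意一个存活节点的容器内使用 redis-cli 自带的 check 子命令(示例命令,以实际输出为准):

/data # redis-cli --cluster check 172.30.0.11:6379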

Redis集群部署完成!!

10.2 微服务项目的部署

  1. 创建基础的SpringBoot项目
  2. 打包成jar包
  3. 编写Dockerfile文件

# 基础镜像:java 8
FROM java:8
# 将当前目录下的jar包复制进镜像,命名为app.jar
COPY *.jar /app.jar

# CMD 的内容会作为参数追加给 ENTRYPOINT,这里用来指定端口
CMD ["--server.port=8080"]

# 声明容器对外暴露8080端口
EXPOSE 8080
# 容器启动时执行 java -jar /app.jar
ENTRYPOINT ["java","-jar","/app.jar"]
  4. 构建镜像
[root@liuzeyu12a jar]# docker build -t liuzeyu-sprinboot .
Sending build context to Docker daemon  28.04MB
Step 1/5 : FROM java:8
8: Pulling from library/java
5040bd298390: Pull complete
fce5728aad85: Pull complete
76610ec20bf5: Pull complete
60170fec2151: Pull complete
e98f73de8f0d: Pull complete
11f7af24ed9c: Pull complete
49e2d6393f32: Pull complete
bb9cdec9c7f3: Pull complete
Digest: sha256:c1ff613e8ba25833d2e1940da0940c3824f03f802c449f3d1815a66b7f8c0e9d
Status: Downloaded newer image for java:8
 ---> d23bdf5b1b1b
Step 2/5 : COPY *.jar /app.jar
 ---> f40b3b25b09a
Step 3/5 : CMD ["--server.port=8080"]
 ---> Running in 6e5aa2d2e51b
Removing intermediate container 6e5aa2d2e51b
 ---> 2bbe36dcc4af
Step 4/5 : EXPOSE 8080
 ---> Running in 44cc4a0e8210
Removing intermediate container 44cc4a0e8210
 ---> aed4ba83142e
Step 5/5 : ENTRYPOINT ["java","-jar","/app.jar"]
 ---> Running in d0cb9e9c783a
Removing intermediate container d0cb9e9c783a
 ---> b0fa94a0c8cb
Successfully built b0fa94a0c8cb
Successfully tagged liuzeyu-sprinboot:latest
[root@liuzeyu12a jar]# docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
[root@liuzeyu12a jar]# docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
liuzeyu-sprinboot   latest              b0fa94a0c8cb        56 seconds ago      671MB
redis               5.0.9-alpine3.11    3661c84ee9d0        2 months ago        29.8MB
java                8                   d23bdf5b1b1b        3 years ago         643MB

  5. 运行容器

[root@liuzeyu12a jar]# docker run -d -P liuzeyu-sprinboot
0a40478508c704e93ff9f32af402354040de066c67ead51ea8b40631b9f51f9d

  6. 测试项目
[root@liuzeyu12a jar]# docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                     NAMES
0a40478508c7        liuzeyu-sprinboot   "java -jar /app.jar …"   2 minutes ago       Up 2 minutes        0.0.0.0:32775->8080/tcp   quirky_wozniak

#进入容器
[root@liuzeyu12a jar]# docker exec -it 0a40478508c7 /bin/bash
root@0a40478508c7:/# ls
app.jar  bin  boot  dev  etc  home  lib  lib64  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var
root@0a40478508c7:/# curl localhost:8080/hello/sayHello
Hello Swaggerroot@0a40478508c7:/#

可以看到在容器内部,项目可以访问!

再以指定的宿主机端口启动一次(-p 9090:8080),便于用外部浏览器访问:

[root@liuzeyu12a jar]# docker run -d -p 9090:8080 liuzeyu-sprinboot
51e388e7dac55ba47d1eca7004640984a35ee8495db5c989e17bbf1f1d5bfe9c
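
在宿主机上可以先用 curl 验证端口映射是否生效(若是云服务器,还需在安全组/防火墙中放行9090端口):

[root@liuzeyu12a jar]# curl localhost:9090/hello/sayHello
# 预期返回与容器内相同的结果:Hello Swagger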

至此,SpringBoot项目已成功部署到Docker中!

参考学习:https://www.bilibili.com/video/BV1og4y1q7M4