Docker's recent advances have made it the darling of startups and innovators throughout the IT world, but one pain point causes admins and developers alike to bite their nails: networking. Managing the interaction between Docker containers and networks has always been fraught with complications. To some intrepid developers, that's a challenge and not an obstacle. Here are five of the most significant projects currently in the works to solve Docker networking issues, with the possibility that one or more of them might become part of Docker itself.

Weave

Created by Zett.io, Weave "makes the network fit the application, not the other way round," as the company CEO puts it. With Weave, Docker containers are all part of a virtual network switch no matter where they're running. Services can be selectively exposed across the network to the outside world through firewalls, with encryption used for wide-area connections.

The gist: Possibly the best place to start, since it solves most of the really immediate problems in a straightforward way. But other solutions might offer more depending on your ambitions.

Kubernetes

Google's first big leap into the Docker world was an orchestration project: a way to tackle node balancing and many other valuable orchestration-related functions with Docker containers. It also provides, rather deliberately, some solutions to Docker's networking issues, though it goes only so far in that realm.

The gist: A good start as-is, but heavy lifting is still required to get the most out of it.

CoreOS, Flannel

Given that CoreOS is a project meant to architect an entire Linux distribution around Docker, it makes sense for networking to be part of the package. A project named Flannel (previously Rudder) is at the heart of how CoreOS manages container networking, connecting all the containers in a cluster via their own private mesh network. Presto, no more clumsy port mapping!

The gist: Best with CoreOS; requires Kubernetes at least.

Pipework

A solution devised by one of Docker's engineers, Pipework allows containers to be connected "in arbitrarily complex scenarios." It was devised as an interim solution and will probably end up as one.

The gist: It's best as a curiosity or a technology concept in the long run; as its creator admits, "Docker will [eventually] allow complex scenarios, and Pipework should become obsolete."

SocketPlane

Right now SocketPlane's work consists of little more than a press announcement "to bring software-defined networking to Docker." The idea is to use the same devops tools used to deploy Docker to manage virtualized networks for containers, and build what amounts to an OpenDaylight/Open vSwitch fabric for Docker. It all sounds promising, but we won't be able to see any product until at least the first quarter of 2015.

The gist: Don't hold your breath until it's out.
else case "$IFNAME" in br*) IFTYPE=bridge BRTYPE=linux ;; ovs*) if ! $(which ovs-vsctl >/dev/null) then echo "Need OVS installed on the system to create an ovs bridge" exit 1 fi IFTYPE=bridge BRTYPE=openvswitch ;; *) echo "I do not know how to setup interface $IFNAME." exit 1 ;; esacpipework会自动创建peer, 配置netns, 判断IP是否重复, 配置路由, 等, 配置好后最后还会删除netns, 所以我们用netns看不到.
At the end of the script, pipework sends a gratuitous ARP for the new address and removes the netns symlink:

# Give our ARP neighbors a nudge about the new interface
if which arping > /dev/null 2>&1
then
  IPADDR=$(echo $IPADDR | cut -d/ -f1)
  ip netns exec $NSPID arping -c 1 -A -I $CONTAINER_IFNAME $IPADDR > /dev/null 2>&1 || true
else
  echo "Warning: arping not found; interface may not be immediately reachable"
fi

# Remove NSPID to avoid `ip netns` catch it.
[ -f /var/run/netns/$NSPID ] && rm -f /var/run/netns/$NSPID

exit 0
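To reproduce the demo below, the script can be fetched from the upstream jpetazzo/pipework repository; the URL is assumed to still be current, and the file is saved as pipework.sh to match the commands used later.

curl -o pipework.sh https://raw.githubusercontent.com/jpetazzo/pipework/master/pipework
chmod +x pipework.sh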
First, create an Open vSwitch bridge on the host and give it an address:

[root@localhost ~]# ovs-vsctl add-br ovs0
[root@localhost ~]# ip link set ovs0 up
[root@localhost ~]# ip addr add 172.0.42.1/24 dev ovs0
[root@localhost ~]# ovs-vsctl show
f345b7e3-fcb0-4ef3-8295-36d3ef69ceef
    Bridge "ovs0"
        Port "ovs0"
            Interface "ovs0"
                type: internal
    ovs_version: "2.3.0"

Create a container without specifying any network:
[root@localhost ~]# docker run --rm -t -i --net=none --name=testpipework centos:centos7 /bin/bash
[root@7eb07fe4bbf8 /]# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

Use pipework to configure the container's IP; it automates the netns configuration that would otherwise be done by hand:
[root@localhost ~]# ./pipework.sh ovs0 testpipework 172.0.42.2/24

The newly added interface is now visible inside the container:
[root@7eb07fe4bbf8 /]# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
56: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000
    link/ether 3e:15:74:c0:64:21 brd ff:ff:ff:ff:ff:ff

The network is working:

[root@7eb07fe4bbf8 /]# ping 172.0.42.1
PING 172.0.42.1 (172.0.42.1) 56(84) bytes of data.
64 bytes from 172.0.42.1: icmp_seq=1 ttl=64 time=0.369 ms
64 bytes from 172.0.42.1: icmp_seq=2 ttl=64 time=0.055 ms
^C
--- 172.0.42.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.055/0.212/0.369/0.157 ms

The route has also been added:

[root@7eb07fe4bbf8 /]# ip route
172.0.42.0/24 dev eth1 proto kernel scope link src 172.0.42.2
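Beyond the basic form used above, pipework accepts a few more invocation patterns. The examples below follow my reading of the upstream README rather than the original demo, so the names and addresses are illustrative:

# Attach the container to a Linux bridge instead of OVS (the bridge is created if it does not exist)
./pipework.sh br1 testpipework 192.168.1.2/24

# Append @gateway to the address to also set a default route inside the container
./pipework.sh ovs0 testpipework 172.0.42.3/24@172.0.42.1

# Use -i to choose the interface name created inside the container (eth1 by default)
./pipework.sh ovs0 -i eth2 testpipework 172.0.42.4/24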