Kube-OVN 1.7: Hybrid overlay and underlay networking, multiple OVN native networks, VPC enhancements, active-active subnet gateway, and more
With contributions from China Telecom, Inspur, Intel, and ByteDance, the Kube-OVN team is excited to announce the release of Kube-OVN 1.7.
During this release cycle we received many requirements for orchestrating complex network infrastructure in real enterprise environments. Kube-OVN 1.7 addresses these complexities with the following capabilities:
- Hybrid overlay and underlay network
- Pods with multiple OVN native network interfaces
- VPC gateway with FloatingIP and NAT abilities
- Active-Active subnet gateway
Hybrid overlay and underlay network
Previously, overlay or underlay was a cluster-level installation option: all subnets in a cluster had to be the same type, either overlay or underlay. However, more and more enterprise users need different types of networks for different use cases in one Kubernetes cluster. For example, they run web applications in an overlay network for high flexibility, while running middleware such as databases in an underlay network for high performance and direct connectivity with the external network.
In Kube-OVN 1.7, the installation network type only takes effect for the default subnet; users can now select the network type dynamically at the subnet level. We also provide a new CRD, ProviderNetwork, to map the underlay network into the container network and provide options for complex real-world network setups, e.g. different nodes belonging to different physical networks, network interface names differing across the cluster, or a certain underlay network existing only on a group of nodes.
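As an illustrative sketch, a ProviderNetwork together with a Vlan and an underlay Subnet might look like the following. Interface names, node names, and addresses here are hypothetical, and the field names reflect our reading of the 1.7 CRDs; consult the Vlan Support document for the authoritative schema.

```yaml
# Map the physical underlay network into the container network.
# eth1 is the default host interface; node-a uses eth2 instead,
# and node-b is excluded from this provider network entirely.
apiVersion: kubeovn.io/v1
kind: ProviderNetwork
metadata:
  name: net1
spec:
  defaultInterface: eth1
  customInterfaces:
    - interface: eth2
      nodes:
        - node-a
  excludeNodes:
    - node-b
---
# A Vlan bound to the provider network above.
apiVersion: kubeovn.io/v1
kind: Vlan
metadata:
  name: vlan100
spec:
  id: 100
  provider: net1
---
# An underlay subnet that references the Vlan; other subnets in the
# same cluster can remain overlay, since type is now per-subnet.
apiVersion: kubeovn.io/v1
kind: Subnet
metadata:
  name: underlay-subnet
spec:
  protocol: IPv4
  cidrBlock: 172.17.0.0/16
  gateway: 172.17.0.1
  vlan: vlan100
```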
For more information, please read Vlan Support
Pod with multiple OVN native network interfaces
With the increasing number of users who run VMs in Kubernetes, the need for VMs with multiple network interfaces has also increased. In previous versions, Kube-OVN only provided an OVN-type network for the primary interface; attachment networks had to be provisioned by other CNIs. From this version on, users can use Multus CRDs and Kube-OVN annotations to define a Pod with multiple network interfaces, all provisioned by Kube-OVN. This also allows VMs on Kubernetes to run on multiple virtual networks, all provided by OVN.
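A minimal sketch of the pieces involved: a Multus NetworkAttachmentDefinition whose delegate is Kube-OVN itself, a subnet tied to that attachment via the provider field, and a Pod requesting the extra interface. Names and CIDRs are hypothetical; the exact config keys should be checked against the multi-network documentation.

```yaml
# Attachment network whose CNI delegate is Kube-OVN, so the
# secondary interface is also an OVN port.
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: attachnet
  namespace: default
spec:
  config: '{
    "cniVersion": "0.3.0",
    "type": "kube-ovn",
    "server_socket": "/run/openvswitch/kube-ovn-daemon.sock",
    "provider": "attachnet.default.ovn"
  }'
---
# Subnet that serves the attachment network; the provider value
# matches "<name>.<namespace>.ovn" of the attachment above.
apiVersion: kubeovn.io/v1
kind: Subnet
metadata:
  name: attach-subnet
spec:
  protocol: IPv4
  cidrBlock: 10.66.0.0/16
  provider: attachnet.default.ovn
---
# Pod with a primary OVN interface plus the attachment interface.
apiVersion: v1
kind: Pod
metadata:
  name: multi-nic-pod
  annotations:
    k8s.v1.cni.cncf.io/networks: attachnet
spec:
  containers:
    - name: app
      image: nginx
```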
For more information, please read Multi network interface
VPC gateway with FloatingIP and NAT abilities
The VPC in 1.6 only provided independent address spaces for multiple tenants. However, Pods in a VPC could not connect to external networks and could not be reached from outside the VPC. The 1.7 release introduces a new CRD, VpcNatGateway, that connects VPCs and external networks.
In the VpcNatGateway, users can define the eips allocated to the VPC and select how these eips are used. It provides abilities like:
- SNAT: map the egress traffic of a group of Pods to a selected eip
- DNAT: map an eip IP:Port to a Pod IP:Port
- FloatingIP: map a Pod's ingress and egress IP to a selected eip
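The three abilities above could be expressed in one VpcNatGateway resource roughly as follows. All names, addresses, and ports are hypothetical, and the field names are our reading of the 1.7 CRD; the VPC Usage document is the authoritative reference.

```yaml
apiVersion: kubeovn.io/v1
kind: VpcNatGateway
metadata:
  name: ngw
spec:
  vpc: test-vpc            # VPC this gateway serves
  subnet: sn               # subnet inside the VPC hosting the gateway
  lanIp: 10.0.1.254        # gateway address inside the VPC
  eips:                    # external IPs allocated to this VPC
    - eipCIDR: 10.0.0.20/24
      gateway: 10.0.0.254
    - eipCIDR: 10.0.0.21/24
      gateway: 10.0.0.254
  snatRules:               # egress of the whole subnet leaves via one eip
    - eip: 10.0.0.20
      internalCIDR: 10.0.1.0/24
  dnatRules:               # external 10.0.0.20:8888 -> Pod 10.0.1.10:80
    - eip: 10.0.0.20
      externalPort: "8888"
      protocol: tcp
      internalIp: 10.0.1.10
      internalPort: "80"
  floatingIpRules:         # 1:1 mapping for both ingress and egress
    - eip: 10.0.0.21
      internalIp: 10.0.1.5
```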
For more information, please read VPC Usage
Active-Active subnet gateway
Kube-OVN subnets support two egress modes: distributed and centralized. In centralized mode, users can select a group of nodes to act as the egress gateway. In previous versions these nodes worked in active-backup mode: only one node was active and processed egress traffic, which made that node a bottleneck under a large volume of egress traffic. In the 1.7 release, all selected nodes work together at the same time; egress traffic is distributed across them, failed nodes are detected automatically, and traffic is rebalanced.
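From the subnet's point of view, enabling the active-active gateway is just a matter of listing several nodes in a centralized subnet; a sketch, with hypothetical node names and CIDR (the ECMP-based active-active behavior may additionally need to be switched on at install time; check the installation options):

```yaml
# Centralized subnet whose egress gateway spans three nodes; in 1.7
# all listed nodes forward egress traffic concurrently instead of
# one active node with cold standbys.
apiVersion: kubeovn.io/v1
kind: Subnet
metadata:
  name: centralized-subnet
spec:
  protocol: IPv4
  cidrBlock: 10.166.0.0/16
  gatewayType: centralized
  gatewayNode: "node-a,node-b,node-c"
```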
New to Kube-OVN? Follow the Installation Guide.