k8s resources: Service



• In a k8s cluster, a Service is an abstraction: through a virtual IP and a set of mapped ports, it proxies client requests to one pod out of a backend group. Why is this needed? The containers in pods are constantly being destroyed and recreated, so pod IPs keep changing and clients would lose track of them. A Service sits between clients and pods as a stable middle layer: it exposes a fixed virtual IP, and anything inside the cluster can reach the concrete pods through that IP.
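All the Service examples in this post select pods labeled app: nginx. As a minimal backing workload to try them against (the Deployment name, image, and replica count here are my own illustrative choices, not from the original), something like this would give the Services endpoints to route to:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80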

Common commands:

• kubectl get svc

• kubectl label svc ServiceName type=s1        # add the label type=s1

• kubectl label svc ServiceName type-          # remove the type label

• kubectl get svc -l type=s1                   # list services matching a label

• kubectl describe svc ServiceName

• kubectl edit svc ServiceName

• kubectl delete svc ServiceName

• kubectl delete svc -l type=s1

• kubectl delete svc --all -n namespace

• kubectl annotate svc ServiceName type=s1     # add the annotation type=s1

• kubectl annotate svc ServiceName type-       # remove the type annotation

• kubectl patch service nginx-clusterip-svc -p '{"metadata":{"labels":{"aa":"bb"}}}'

• kubectl get svc nginx-clusterip-svc -o yaml

• kubectl get svc nginx-clusterip-svc -o json

• kubectl get svc -o wide

How a Service works (kube-proxy modes):

• Userspace mode

• iptables mode

• ipvs mode

Userspace mode:


When a Client Pod accesses a Server Pod, the request first hits the service rules in the node's kernel space, which redirect it to kube-proxy listening on a socket in user space. kube-proxy picks a backend Server Pod, proxies the request to it, and hands the response back through the kernel-space service rules to the Client Pod.
Because every request crosses back and forth between user space and kernel space, this mode performs poorly.

iptables mode:


The iptables rules in the kernel accept the Client Pod's request directly and, after processing, forward it straight to the chosen Server Pod; no user-space proxy is involved.

ipvs mode:


The ipvs rules in the kernel accept and process the Client Pod's request; the kernel then repackages it and sends it directly to the chosen Server Pod.

In all three modes, kube-proxy watches the kube-apiserver (whose state is persisted in etcd) for the latest Pod status. As soon as it detects that a Pod has been deleted or created, it immediately reflects the change in the iptables or ipvs rules, so that a Client Pod's request is never scheduled to a Server Pod that no longer exists.
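To see the rules kube-proxy actually programs, you can inspect them on a node. A quick sketch (chain contents and output vary by cluster):

iptables -t nat -L KUBE-SERVICES   # iptables mode: per-service KUBE-SVC-* chains
ipvsadm -Ln                        # ipvs mode: one virtual server per service IP and port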

How to enable a mode:

When kube-proxy runs as a systemd service, pass one of:

--proxy-mode=userspace

--proxy-mode=iptables

--proxy-mode=ipvs

When kube-proxy runs as a pod, edit its ConfigMap:

kubectl edit cm kube-proxy -n kube-system

mode: "ipvs"   (or "iptables" or "userspace")

then restart the kube-proxy pod on every node, as shown below.
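One way to do the restart, assuming a kubeadm-style cluster where the kube-proxy pods carry the k8s-app=kube-proxy label (the DaemonSet immediately recreates the deleted pods with the new mode):

kubectl delete pod -n kube-system -l k8s-app=kube-proxy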

Service types:

•ExternalName

•ClusterIP

•NodePort

•LoadBalancer

ClusterIP:

[root@master01 service]# cat nginx-clusterIp-svc.yaml 
apiVersion: v1
kind: Service
metadata:
  name: nginx-clusterip-svc
spec:
  selector:
    app: nginx
  type: ClusterIP
  ports:
  - name: http
    port: 8000
    targetPort: 80
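After applying it, the Service gets a virtual cluster IP reachable from inside the cluster. The IP below is illustrative; yours will differ:

kubectl apply -f nginx-clusterIp-svc.yaml
kubectl get svc nginx-clusterip-svc
# NAME                  TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)    AGE
# nginx-clusterip-svc   ClusterIP   10.68.23.45   <none>        8000/TCP   5s
curl http://10.68.23.45:8000       # from any node or pod in the cluster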

[root@master01 service]# cat nginx-clusterIp-svc-assign-address.yaml 
apiVersion: v1
kind: Service
metadata:
  name: nginx-clusterip-svc-with-ip
spec:
  selector:
    app: nginx
  clusterIP: 10.68.100.100
  type: ClusterIP
  ports:
  - name: http
    port: 8000
    targetPort: 80

[root@master01 service]# cat nginx-clusterIp-svc-headless.yaml 
apiVersion: v1
kind: Service
metadata:
  name: nginx-clusterip-svc-headless
spec:
  selector:
    app: nginx
  type: ClusterIP
  clusterIP: None
  ports:
  - name: http
    port: 8000
    targetPort: 80
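A headless Service (clusterIP: None) gets no virtual IP at all; cluster DNS instead answers with the IPs of the matching pods directly, one A record per pod. A quick way to see this (the busybox test pod is my own illustrative choice):

kubectl run dns-test --rm -it --image=busybox --restart=Never -- nslookup nginx-clusterip-svc-headless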

[root@master01 service]# cat nginx-clusterIp-svc-with-externalip.yaml 
apiVersion: v1
kind: Service
metadata:
  name: nginx-clusterip-svc-with-externalip
spec:
  selector:
    app: nginx
  type: ClusterIP
  externalIPs:
  - 192.168.198.111
  ports:
  - name: http
    port: 8000
    targetPort: 80

ExternalName:

[root@master01 service]# cat nginx-externalName-svc.yaml 
apiVersion: v1
kind: Service
metadata:
  name: nginx-externalname-svc
spec:
  externalName: www.baidu.com
  type: ExternalName
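An ExternalName Service does no proxying; cluster DNS simply answers with a CNAME record pointing at the external name, so in-cluster clients get a stable internal name for an external dependency. For example (same illustrative busybox pod as above):

kubectl run dns-test --rm -it --image=busybox --restart=Never -- nslookup nginx-externalname-svc
# ...canonical name = www.baidu.com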

[root@master01 service]# cat nginx-externalName-svc-with-externalips.yaml 
apiVersion: v1
kind: Service
metadata:
  name: nginx-externalname-svc-with-externalips
spec:
  externalName: www.baidu.com
  type: ExternalName
  externalIPs:
  - 192.168.198.16
  - 192.168.198.17

LoadBalancer:

[root@master01 service]# cat nginx-loadbalancer-svc.yaml 
apiVersion: v1
kind: Service
metadata:
  name: nginx-loadbalancer-svc
spec:
  selector:
    app: nginx
  type: LoadBalancer
  loadBalancerIP: 1.2.3.4
  ports:
  - name: http
    port: 8000
    targetPort: 80

[root@master01 service]# cat nginx-loadbalancer-svc-without-ip.yaml 
apiVersion: v1
kind: Service
metadata:
  name: nginx-loadbalancer-withoutip-svc
spec:
  selector:
    app: nginx
  type: LoadBalancer
  ports:
  - name: http
    port: 8000
    targetPort: 80

[root@master01 service]# cat nginx-loadbalancer-svc-with-externalips.yaml 
apiVersion: v1
kind: Service
metadata:
  name: nginx-loadbalancer-with-externalips
spec:
  selector:
    app: nginx
  type: LoadBalancer
  externalIPs:
  - 192.168.198.13
  - 192.168.198.14
  ports:
  - name: http
    port: 8000
    targetPort: 80

NodePort:

[root@master01 service]# cat nginx-nodePort-svc.yaml 
apiVersion: v1
kind: Service
metadata:
  name: nginx-nodeport-svc
spec:
  selector:
    app: nginx
  type: NodePort
  ports:
  - name: http
    port: 8000
    targetPort: 80

[root@master01 service]# cat nginx-nodePort-svc-with-port.yaml 
apiVersion: v1
kind: Service
metadata:
  name: nginx-nodeport-svc-with-port
spec:
  selector:
    app: nginx
  type: NodePort
  ports:
  - name: http
    port: 8000
    targetPort: 80
    nodePort: 31000
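With a fixed nodePort, the service is reachable on port 31000 of every node in the cluster, including from outside the cluster:

curl http://<any-node-ip>:31000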

[root@master01 service]# cat nginx-nodePort-svc-with-externalip.yaml 
apiVersion: v1
kind: Service
metadata:
  name: nginx-nodeport-svc-with-externalips
spec:
  selector:
    app: nginx
  type: NodePort
  externalIPs:
  - 192.168.198.10
  - 192.168.198.11
  ports:
  - name: http
    port: 8000
    targetPort: 80

sessionAffinity:

[root@master01 service]# cat nginx-clusterIp-svc-sessionAffinity.yaml 
apiVersion: v1
kind: Service
metadata:
  name: nginx-clusterip-svc-with-sessionaffinity
spec:
  selector:
    app: nginx
  type: ClusterIP
  sessionAffinity: ClientIP
  ports:
  - name: http
    port: 8000
    targetPort: 80

[root@master01 service]# cat nginx-nodePort-svc-with-sessionaffinity.yaml 
apiVersion: v1
kind: Service
metadata:
  name: nginx-nodeport-svc-with-sessionaffinity
spec:
  selector:
    app: nginx
  type: NodePort
  sessionAffinity: ClientIP
  ports:
  - name: http
    port: 8000
    targetPort: 80

[root@master01 service]# cat nginx-loadbalancer-svc-with-sessionaffinity.yaml 
apiVersion: v1
kind: Service
metadata:
  name: nginx-loadbalancer-with-sessionaffinity
spec:
  selector:
    app: nginx
  type: LoadBalancer
  sessionAffinity: ClientIP
  ports:
  - name: http
    port: 8000
    targetPort: 80

[root@master01 service]# cat nginx-externalName-svc-with-sessionaffinity.yaml 
apiVersion: v1
kind: Service
metadata:
  name: nginx-externalname-svc-with-sessionaffinity
spec:
  externalName: www.baidu.com
  type: ExternalName
  sessionAffinity: ClientIP

externalTrafficPolicy:


Applies to LoadBalancer and NodePort type Services.

If a Service needs to route external traffic either to node-local endpoints or to cluster-wide endpoints, i.e. its type is LoadBalancer or NodePort, set this field. There are two options: "Cluster" (the default) and "Local". "Cluster" hides the client source IP and may cause a second hop to another node, but it spreads load well across the whole cluster. "Local" preserves the client source IP and avoids the second hop for LoadBalancer and NodePort Services, but risks unevenly distributed load across nodes.

[root@master01 service]# cat nginx-nodePort-svc-externalTrafficPolicy.yaml 
apiVersion: v1
kind: Service
metadata:
  name: nginx-nodeport-svc-externaltrafficpolicy
spec:
  selector:
    app: nginx
  type: NodePort
  externalTrafficPolicy: Local
  ports:
  - name: http
    port: 8000
    targetPort: 80

[root@master01 service]# cat nginx-loadbalancer-svc-externalTrafficPolicy.yaml 
apiVersion: v1
kind: Service
metadata:
  name: nginx-loadbalancer-svc-externaltrafficpolicy
spec:
  selector:
    app: nginx
  type: LoadBalancer
  externalTrafficPolicy: Local
  ports:
  - name: http
    port: 8000
    targetPort: 80

healthCheckNodePort:

This field only takes effect when type is set to "LoadBalancer" and externalTrafficPolicy is set to "Local". It sets the node port that the external load balancer's health checks hit, so the balancer can tell which nodes actually have local endpoints for the Service.
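A minimal sketch of how the two fields combine (the Service name and port value here are my own illustrative choices, not from the original; the port must lie inside the cluster's node port range, 30000-32767 by default):

apiVersion: v1
kind: Service
metadata:
  name: nginx-loadbalancer-svc-healthcheck
spec:
  selector:
    app: nginx
  type: LoadBalancer
  externalTrafficPolicy: Local
  healthCheckNodePort: 32000
  ports:
  - name: http
    port: 8000
    targetPort: 80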

ipFamily:

[root@master01 service]# cat nginx-clusterIp-svc-ipFamily.yaml 
apiVersion: v1
kind: Service
metadata:
  name: nginx-clusterip-svc-ipfamily
spec:
  selector:
    app: nginx
  type: ClusterIP
  ipFamily: IPv6
  ports:
  - name: http
    port: 8000
    targetPort: 80


To enable IPv4/IPv6 dual-stack, enable the IPv6DualStack feature gate for the relevant components of your cluster, and set dual-stack cluster network assignments:


kube-controller-manager:

--feature-gates="IPv6DualStack=true"

--cluster-cidr=<IPv4 CIDR>,<IPv6 CIDR>   e.g. --cluster-cidr=10.244.0.0/16,fc00::/24

--service-cluster-ip-range=<IPv4 CIDR>,<IPv6 CIDR>

--node-cidr-mask-size-ipv4 | --node-cidr-mask-size-ipv6   (defaults to /24 for IPv4 and /64 for IPv6)

kubelet:

--feature-gates="IPv6DualStack=true"

kube-proxy:

--proxy-mode=ipvs

--cluster-cidr=<IPv4 CIDR>,<IPv6 CIDR>

--feature-gates="IPv6DualStack=true"


loadBalancerSourceRanges:

loadBalancerSourceRanges restricts which client CIDR blocks are allowed to reach a LoadBalancer Service; traffic from any other source IP is blocked. This feature is currently supported on Google Compute Engine, Google Kubernetes Engine, AWS Elastic Kubernetes Service, Azure Kubernetes Service, and IBM Cloud Kubernetes Service.
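A minimal sketch (the Service name and CIDR blocks are my own illustrative choices, not from the original):

apiVersion: v1
kind: Service
metadata:
  name: nginx-loadbalancer-svc-sourceranges
spec:
  selector:
    app: nginx
  type: LoadBalancer
  loadBalancerSourceRanges:
  - 10.0.0.0/8
  - 192.168.198.0/24
  ports:
  - name: http
    port: 8000
    targetPort: 80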



sessionAffinityConfig:


[root@master01 service]# cat nginx-clusterIp-svc-sessionAffinityConfig.yaml 
apiVersion: v1
kind: Service
metadata:
  name: nginx-clusterip-svc-sessionaffinityconfig
spec:
  selector:
    app: nginx
  type: ClusterIP
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 2
  ports:
  - name: http
    port: 8000
    targetPort: 80
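With sessionAffinity: ClientIP, requests from the same client IP keep going to the same pod until the sticky timeout expires; if timeoutSeconds is omitted, it defaults to 10800 (3 hours). The value of 2 above makes the stickiness expire after just two seconds, which makes it easy to watch it wear off while testing.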



