Amazon Web Services EKS 1.11+; Istio 1.0.6+; Cilium 1.4.1, Post https://istio-sidecar-injector.istio-system.svc:443/inject?timeout=30s: Address is not allowed

Here are the steps to reproduce the error:

1) Install an AWS EKS cluster (1.11)

2) Install Cilium v1.4.1 following this guide

3) Install Istio 1.0.6

$ kubectl apply -f install/kubernetes/helm/helm-service-account.yaml

$ helm init --service-account tiller

$ helm install install/kubernetes/helm/istio --name istio --namespace istio-system

4) Try the nginx sample

$ kubectl create ns nginx

$ kubectl label namespace nginx istio-injection=enabled

$ kubectl create deployment --image nginx nginx -n nginx

$ kubectl expose deployment nginx --port=80 --type=LoadBalancer -n nginx
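
Before digging into the failure, it may be worth confirming that the namespace label and the chart's injection webhook registration are both in place (a sketch; istio-sidecar-injector is the Istio 1.0.x default webhook name):

$ kubectl get namespace nginx --show-labels
$ kubectl get mutatingwebhookconfiguration istio-sidecar-injector
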
The problem I ran into:

$ kubectl get deploy -n nginx
NAME    DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx   1         0         0            0           27m

$ kubectl get deploy -n nginx -oyaml
apiVersion: v1
items:
- apiVersion: extensions/v1beta1
  kind: Deployment
  metadata:
    annotations:
      deployment.kubernetes.io/revision: "1"
      traffic.sidecar.istio.io/includeOutboundIPRanges: 172.20.0.0/16
    creationTimestamp: "2019-03-08T13:13:58Z"
    generation: 3
    labels:
      app: nginx
    name: nginx
    namespace: nginx
    resourceVersion: "36034"
    selfLink: /apis/extensions/v1beta1/namespaces/nginx/deployments/nginx
    uid: 0888b279-41a4-11e9-8f26-1274e185a192
  spec:
    progressDeadlineSeconds: 600
    replicas: 1
    revisionHistoryLimit: 10
    selector:
      matchLabels:
        app: nginx
    strategy:
      rollingUpdate:
        maxSurge: 25%
        maxUnavailable: 25%
      type: RollingUpdate
    template:
      metadata:
        creationTimestamp: null
        labels:
          app: nginx
      spec:
        containers:
        - image: nginx
          imagePullPolicy: Always
          name: nginx
          resources: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
        dnsPolicy: ClusterFirst
        restartPolicy: Always
        schedulerName: default-scheduler
        securityContext: {}
        terminationGracePeriodSeconds: 30
  status:
    conditions:
    - lastTransitionTime: "2019-03-08T13:13:58Z"
      lastUpdateTime: "2019-03-08T13:13:58Z"
      message: Deployment does not have minimum availability.
      reason: MinimumReplicasUnavailable
      status: "False"
      type: Available
    - lastTransitionTime: "2019-03-08T13:13:58Z"
      lastUpdateTime: "2019-03-08T13:13:58Z"
      message: 'Internal error occurred: failed calling admission webhook "sidecar-injector.istio.io":
        Post https://istio-sidecar-injector.istio-system.svc:443/inject?timeout=30s:
        Address is not allowed'
      reason: FailedCreate
      status: "True"
      type: ReplicaFailure
    - lastTransitionTime: "2019-03-08T13:23:59Z"
      lastUpdateTime: "2019-03-08T13:23:59Z"
      message: ReplicaSet "nginx-78f5d695bd" has timed out progressing.
      reason: ProgressDeadlineExceeded
      status: "False"
      type: Progressing
    observedGeneration: 3
    unavailableReplicas: 1
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
Investigation A: updated the includeOutboundIPRanges annotation as shown below, which did not help.

$ kubectl edit deploy -n nginx
  annotations:
    traffic.sidecar.istio.io/includeOutboundIPRanges: 172.20.0.0/20
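
One thing worth noting: the injector reads the traffic.sidecar.istio.io annotations from the pod's metadata (see the template dump further below), so the annotation likely needs to sit on the pod template rather than on the Deployment object itself. A hypothetical non-interactive equivalent of the edit, targeting spec.template.metadata.annotations:

$ kubectl -n nginx patch deployment nginx --type merge -p '
spec:
  template:
    metadata:
      annotations:
        traffic.sidecar.istio.io/includeOutboundIPRanges: "172.20.0.0/20"
'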

Investigation B: removed Cilium, reinstalled Istio, then reinstalled nginx. Injection worked and the nginx pod ran fine.

Investigation C: as a comparison, I swapped installation steps 2) and 3). Injection then worked and the nginx welcome page was visible. But after manually terminating the EC2 worker instances (the ASG automatically recreates them all), the "Address is not allowed" error reappears.

FYI, Cilium and Istio status:

$ kubectl -n kube-system exec -ti cilium-4wzgd cilium-health status
Probe time:   2019-03-08T16:35:57Z
Nodes:
  ip-10-250-206-54.ec2.internal (localhost):
    Host connectivity to 10.250.206.54:
      ICMP to stack:   OK, RTT=440.788µs
      HTTP to agent:   OK, RTT=665.779µs
  ip-10-250-198-72.ec2.internal:
    Host connectivity to 10.250.198.72:
      ICMP to stack:   OK, RTT=799.994µs
      HTTP to agent:   OK, RTT=1.594971ms
  ip-10-250-199-154.ec2.internal:
    Host connectivity to 10.250.199.154:
      ICMP to stack:   OK, RTT=770.777µs
      HTTP to agent:   OK, RTT=1.692356ms
  ip-10-250-205-177.ec2.internal:
    Host connectivity to 10.250.205.177:
      ICMP to stack:   OK, RTT=460.927µs
      HTTP to agent:   OK, RTT=1.383852ms
  ip-10-250-213-68.ec2.internal:
    Host connectivity to 10.250.213.68:
      ICMP to stack:   OK, RTT=766.769µs
      HTTP to agent:   OK, RTT=1.401989ms
  ip-10-250-214-179.ec2.internal:
    Host connectivity to 10.250.214.179:
      ICMP to stack:   OK, RTT=781.72µs
      HTTP to agent:   OK, RTT=2.614356ms

$ kubectl -n kube-system exec -ti cilium-4wzgd -- cilium status
KVStore:                Ok   etcd: 1/1 connected: https://cilium-etcd-client.kube-system.svc:2379 - 3.3.11 (Leader)
ContainerRuntime:       Ok   docker daemon: OK
Kubernetes:             Ok   1.11+ (v1.11.5-eks-6bad6d) [linux/amd64]
Kubernetes APIs:        ["CustomResourceDefinition", "cilium/v2::CiliumNetworkPolicy", "core/v1::Endpoint", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "networking.k8s.io/v1::NetworkPolicy"]
Cilium:                 Ok   OK
NodeMonitor:            Disabled
Cilium health daemon:   Ok   
IPv4 address pool:      6/65535 allocated from 10.54.0.0/16
Controller Status:      34/34 healthy
Proxy Status:           OK, ip 10.54.0.1, port-range 10000-20000
Cluster health:   6/6 reachable   (2019-03-08T16:36:57Z)

$ kubectl get namespace -L istio-injection
NAME           STATUS   AGE   ISTIO-INJECTION
default        Active   4h    
istio-system   Active   4m    
kube-public    Active   4h    
kube-system    Active   4h    
nginx          Active   4h    enabled

$ for pod in $(kubectl -n istio-system get pod -listio=sidecar-injector -o jsonpath='{.items[*].metadata.name}'); do kubectl -n istio-system logs ${pod}; done
2019-03-08T16:35:02.948778Z info    version root@464fc845-2bf8-11e9-b805-0a580a2c0506-docker.io/istio-1.0.6-98598f88f6ee9c1e6b3f03b652d8e0e3cd114fa2-dirty-Modified
2019-03-08T16:35:02.950343Z info    New configuration: sha256sum cf9491065c492014f0cb69c8140a415f0f435a81d2135efbfbab070cf6f16554
2019-03-08T16:35:02.950377Z info    Policy: enabled
2019-03-08T16:35:02.950398Z info    Template: |
  initContainers:
  - name: istio-init
    image: "docker.io/istio/proxy_init:1.0.6"
    args:
    - "-p"
    - [[ .MeshConfig.ProxyListenPort ]]
    - "-u"
    - 1337
    - "-m"
    - [[ annotation .ObjectMeta `sidecar.istio.io/interceptionMode` .ProxyConfig.InterceptionMode ]]
    - "-i"
    - "[[ annotation .ObjectMeta `traffic.sidecar.istio.io/includeOutboundIPRanges`  "172.20.0.0/16"  ]]"
    - "-x"
    - "[[ annotation .ObjectMeta `traffic.sidecar.istio.io/excludeOutboundIPRanges`  ""  ]]"
    - "-b"
    - "[[ annotation .ObjectMeta `traffic.sidecar.istio.io/includeInboundPorts` (includeInboundPorts .Spec.Containers) ]]"
    - "-d"
    - "[[ excludeInboundPort (annotation .ObjectMeta `status.sidecar.istio.io/port`  0 ) (annotation .ObjectMeta `traffic.sidecar.istio.io/excludeInboundPorts`  "" ) ]]"
    imagePullPolicy: IfNotPresent
    securityContext:
      capabilities:
        add:
        - NET_ADMIN
      privileged: true
    restartPolicy: Always
  containers:
  - name: istio-proxy
    image: [[ annotation .ObjectMeta `sidecar.istio.io/proxyImage`  "docker.io/istio/proxyv2:1.0.6"  ]]

    ports:
    - containerPort: 15090
      protocol: TCP
      name: http-envoy-prom

    args:
    - proxy
    - sidecar
    - --configPath
    - [[ .ProxyConfig.ConfigPath ]]
    - --binaryPath
    - [[ .ProxyConfig.BinaryPath ]]
    - --serviceCluster
    [[ if ne "" (index .ObjectMeta.Labels "app") -]]
    - [[ index .ObjectMeta.Labels "app" ]]
    [[ else -]]
    - "istio-proxy"
    [[ end -]]
    - --drainDuration
    - [[ formatDuration .ProxyConfig.DrainDuration ]]
    - --parentShutdownDuration
    - [[ formatDuration .ProxyConfig.ParentShutdownDuration ]]
    - --discoveryAddress
    - [[ annotation .ObjectMeta `sidecar.istio.io/discoveryAddress` .ProxyConfig.DiscoveryAddress ]]
    - --discoveryRefreshDelay
    - [[ formatDuration .ProxyConfig.DiscoveryRefreshDelay ]]
    - --zipkinAddress
    - [[ .ProxyConfig.ZipkinAddress ]]
    - --connectTimeout
    - [[ formatDuration .ProxyConfig.ConnectTimeout ]]
    - --proxyAdminPort
    - [[ .ProxyConfig.ProxyAdminPort ]]
    [[ if gt .ProxyConfig.Concurrency 0 -]]
    - --concurrency
    - [[ .ProxyConfig.Concurrency ]]
    [[ end -]]
    - --controlPlaneAuthPolicy
    - [[ annotation .ObjectMeta `sidecar.istio.io/controlPlaneAuthPolicy` .ProxyConfig.ControlPlaneAuthPolicy ]]
  [[- if (ne (annotation .ObjectMeta `status.sidecar.istio.io/port`  0 ) "0") ]]
    - --statusPort
    - [[ annotation .ObjectMeta `status.sidecar.istio.io/port`  0  ]]
    - --applicationPorts
    - "[[ annotation .ObjectMeta `readiness.status.sidecar.istio.io/applicationPorts` (applicationPorts .Spec.Containers) ]]"
  [[- end ]]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: INSTANCE_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
    - name: ISTIO_META_POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: ISTIO_META_INTERCEPTION_MODE
      value: [[ or (index .ObjectMeta.Annotations "sidecar.istio.io/interceptionMode") .ProxyConfig.InterceptionMode.String ]]
    [[ if .ObjectMeta.Annotations ]]
    - name: ISTIO_METAJSON_ANNOTATIONS
      value: |
             [[ toJson .ObjectMeta.Annotations ]]
    [[ end ]]
    [[ if .ObjectMeta.Labels ]]
    - name: ISTIO_METAJSON_LABELS
      value: |
             [[ toJson .ObjectMeta.Labels ]]
    [[ end ]]
    imagePullPolicy: IfNotPresent
    [[ if (ne (annotation .ObjectMeta `status.sidecar.istio.io/port`  0 ) "0") ]]
    readinessProbe:
      httpGet:
        path: /healthz/ready
        port: [[ annotation .ObjectMeta `status.sidecar.istio.io/port`  0  ]]
      initialDelaySeconds: [[ annotation .ObjectMeta `readiness.status.sidecar.istio.io/initialDelaySeconds`  1  ]]
      periodSeconds: [[ annotation .ObjectMeta `readiness.status.sidecar.istio.io/periodSeconds`  2  ]]
      failureThreshold: [[ annotation .ObjectMeta `readiness.status.sidecar.istio.io/failureThreshold`  30  ]]
    [[ end -]]securityContext:

      readOnlyRootFilesystem: true
      [[ if eq (annotation .ObjectMeta `sidecar.istio.io/interceptionMode` .ProxyConfig.InterceptionMode) "TPROXY" -]]
      capabilities:
        add:
        - NET_ADMIN
      runAsGroup: 1337
      [[ else -]]
      runAsUser: 1337
      [[ end -]]
    restartPolicy: Always
    resources:
      [[ if (isset .ObjectMeta.Annotations `sidecar.istio.io/proxyCPU`) -]]
      requests:
        cpu: "[[ index .ObjectMeta.Annotations `sidecar.istio.io/proxyCPU` ]]"
        memory: "[[ index .ObjectMeta.Annotations `sidecar.istio.io/proxyMemory` ]]"
    [[ else -]]
      requests:
        cpu: 10m

    [[ end -]]
    volumeMounts:
    - mountPath: /etc/istio/proxy
      name: istio-envoy
    - mountPath: /etc/certs/
      name: istio-certs
      readOnly: true
  volumes:
  - emptyDir:
      medium: Memory
    name: istio-envoy
  - name: istio-certs
    secret:
      optional: true
      [[ if eq .Spec.ServiceAccountName "" -]]
      secretName: istio.default
      [[ else -]]
      secretName: [[ printf "istio.%s" .Spec.ServiceAccountName ]]
      [[ end -]]

$ kubectl get svc 
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   172.20.0.1   <none>        443/TCP   5h

$ kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok                   
scheduler            Healthy   ok                   
etcd-0               Healthy   {"health": "true"}

$ kubectl get nodes
NAME                             STATUS   ROLES    AGE   VERSION
ip-*1.ec2.internal    Ready    <none>   5h    v1.11.5
ip-*2.ec2.internal   Ready    <none>   5h    v1.11.5
ip-*3.ec2.internal   Ready    <none>   5h    v1.11.5
ip-*4.ec2.internal    Ready    <none>   5h    v1.11.5
ip-*5.ec2.internal    Ready    <none>   5h    v1.11.5
ip-*6.ec2.internal   Ready    <none>   5h    v1.11.5

$ kubectl get pods --all-namespaces
NAMESPACE      NAME                                      READY   STATUS    RESTARTS   AGE
istio-system   istio-citadel-796c94878b-jt5tb            1/1     Running   0          13m
istio-system   istio-egressgateway-864444d6ff-vwptk      1/1     Running   0          13m
istio-system   istio-galley-6c68c5dbcf-fmtvp             1/1     Running   0          13m
istio-system   istio-ingressgateway-694576c7bb-kmk8k     1/1     Running   0          13m
istio-system   istio-pilot-79f5f46dd5-kbr45              2/2     Running   0          13m
istio-system   istio-policy-5bd5578b94-qzzhd             2/2     Running   0          13m
istio-system   istio-sidecar-injector-6d8f88c98f-slr6x   1/1     Running   0          13m
istio-system   istio-telemetry-5598f86cd8-z7kr5          2/2     Running   0          13m
istio-system   prometheus-76db5fddd5-hw9pb               1/1     Running   0          13m
kube-system    aws-node-5wv4g                            1/1     Running   0          4h
kube-system    aws-node-gsf7l                            1/1     Running   0          4h
kube-system    aws-node-ksddt                            1/1     Running   0          4h
kube-system    aws-node-lszrr                            1/1     Running   0          4h
kube-system    aws-node-r4gcg                            1/1     Running   0          4h
kube-system    aws-node-wtcvj                            1/1     Running   0          4h
kube-system    cilium-4wzgd                              1/1     Running   0          4h
kube-system    cilium-56sq5                              1/1     Running   0          4h
kube-system    cilium-etcd-4vndb7tl6w                    1/1     Running   0          4h
kube-system    cilium-etcd-operator-6d9975f5df-zcb5r     1/1     Running   0          4h
kube-system    cilium-etcd-r9h4txhgld                    1/1     Running   0          4h
kube-system    cilium-etcd-t2fldlwxzh                    1/1     Running   0          4h
kube-system    cilium-fkx8d                              1/1     Running   0          4h
kube-system    cilium-glc8l                              1/1     Running   0          4h
kube-system    cilium-gvm5f                              1/1     Running   0          4h
kube-system    cilium-jscn8                              1/1     Running   0          4h
kube-system    cilium-operator-7df75f5cc8-tnv54          1/1     Running   0          4h
kube-system    coredns-7bcbfc4774-fr59z                  1/1     Running   0          5h
kube-system    coredns-7bcbfc4774-xxwbg                  1/1     Running   0          5h
kube-system    etcd-operator-7b9768bc99-8fxf2            1/1     Running   0          4h
kube-system    kube-proxy-bprmp                          1/1     Running   0          5h
kube-system    kube-proxy-ccb2q                          1/1     Running   0          5h
kube-system    kube-proxy-dv2mn                          1/1     Running   0          5h
kube-system    kube-proxy-qds2r                          1/1     Running   0          5h
kube-system    kube-proxy-rf466                          1/1     Running   0          5h
kube-system    kube-proxy-rz2ck                          1/1     Running   0          5h
kube-system    tiller-deploy-57c574bfb8-cd6rn            1/1     Running   0          4h

A similar "Address is not allowed" failure also occurs for the Galley validation webhook:

Internal error occurred: failed calling admission webhook "mixer.validation.istio.io": Post https://istio-galley.istio-system.svc:443/admitmixer?timeout=30s: Address is not allowed
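
A possible next diagnostic step (a sketch, using the default Istio 1.0.x resource names that appear in the outputs above) is to confirm that the webhook Services the API server calls still resolve to live pod endpoints:

$ kubectl -n istio-system get svc istio-sidecar-injector istio-galley
$ kubectl -n istio-system get endpoints istio-sidecar-injector istio-galley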