CKA Exam Knowledge Summary

This is a long, long article: the notes I made while preparing for the CKA exam shortly after it launched last year. Based on the questions I worked through, it lists the concrete knowledge points. The exam was still on v1.7 back then; by now it should be on v1.12.

The CKA Certificate

As Kubernetes has risen to fame, a crowd of domestic companies are pushing all kinds of expensive "guaranteed pass" training courses. I will just say: the CNCF is somewhat lacking in community spirit here and leans more toward a business model. Still, as long as it pushes cloud native forward as a whole, so be it~

To make this article more convincing, I am also posting my certificate to show off a little (the certification ID ends in 0100, which by rights makes me exactly the 100th person to pass). Be gentle!

Study Notes

Enough preamble; on to the material.

Job

Q: Create a Job that runs 60 times, with 2 Pods running in parallel

参考资料: https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/

YAML:

apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  completions: 60
  parallelism: 2
  activeDeadlineSeconds: 3600
  template:
    metadata:
      name: pi
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never

Parallel Jobs

Job type | Example use | Behavior | completions | parallelism | Notes
One-shot Job | database migration | creates one Pod and runs it until it succeeds | 1 | 1 |
Fixed-completion-count Job | a Pod working through a queue | creates Pods one after another until completions of them have succeeded | 2 | 1 |
Fixed-completion-count parallel Job | several Pods working through a queue at once | creates several Pods at a time until completions of them have succeeded | 2 | 2 |
Parallel Job | several Pods working through a queue at once | creates one or more Pods and runs them until one succeeds | 1 | 2 | does not create several here; a single Pod is created directly
  • kubectl scale job

    A job can be scaled up using the kubectl scale command. For example, the following command sets .spec.parallelism of a job called myjob to 10:

    $ kubectl scale  --replicas=10 jobs/myjob
    job "myjob" scaled
    
  • Notes

    1. parallelism: the number of Pods that run in parallel;
    2. completions: the Job is finished once this many runs have succeeded;
    3. restartPolicy only supports Never or OnFailure;
    4. activeDeadlineSeconds caps how long the Job may keep running (and retrying failed Pods); once exceeded, no further retries are made;
    5. kubectl scale actually modifies the Job's parallelism field; it has no effect on completions.

Cronjob

Cron expression format:

If a field is */5, it means "every 5" of that unit; in the minute field, for example, it means every 5 minutes.
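
For reference, the five fields of a cron schedule are, in order:

# *  *  *  *  *
# |  |  |  |  |
# |  |  |  |  +-- day of week (0-6, Sunday = 0)
# |  |  |  +----- month (1-12)
# |  |  +-------- day of month (1-31)
# |  +----------- hour (0-23)
# +-------------- minute (0-59)

So the "*/1 * * * *" schedule used below fires every minute.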

root@test-9:~# kubectl run cronjob --schedule="*/1 * * * *" --restart=OnFailure --image=busybox -- /bin/sh -c "sleep 99"
cronjob "cronjob" created
root@test-9:~#
root@test-9:~# kubectl get cronjob
NAME      SCHEDULE      SUSPEND   ACTIVE    LAST SCHEDULE   AGE
cronjob   */1 * * * *   False     0         <none>
root@test-9:~#
root@test-9:~# kubectl get job
NAME                 DESIRED   SUCCESSFUL   AGE
cronjob-1510581480   1         0            1m
cronjob-1510581540   1         0            14s
root@test-9:~#
root@test-9:~# kubectl get pod
NAME                       READY     STATUS    RESTARTS   AGE
cronjob-1510581480-r49rq   1/1       Running   0          1m
cronjob-1510581540-tl4hn   1/1       Running   0          16s

kubectl top

Q: Find which Pod is taking max CPU

Use kubectl top to find CPU usage per pod

kubectl top node

root@test-9:~/henry# kubectl top nodes
NAME      CPU(cores)   CPU%      MEMORY(bytes)   MEMORY%
test-10   41m          1%        2230Mi          14%
test-9    104m         2%        4931Mi          31%
root@test-9:~/henry#
root@test-9:~/henry#
root@test-9:~/henry# kubectl top nodes | awk '{print $1 "\t" $3|"sort -r -n"}'
test-9        2%
test-10        1%
NAME        CPU%

sort options: -r reverses the order; -n compares numerically.

When printing in awk, "\t" separates the two columns, and the output is piped into sort from inside awk.
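
Tying this back to the original question, a sketch that prints the single busiest pod (assumes heapster/metrics are available so that kubectl top works):

kubectl top pods --all-namespaces | tail -n +2 | sort -k3 -rn | head -1

tail -n +2 drops the header row, sort -k3 -rn orders by the CPU column in descending numeric order, and head -1 keeps the top entry.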

Sorting Output

Q: List all PersistentVolumes sorted by their name

Use kubectl get pv --sort-by=.metadata.name <- this problem is buggy, since by default kubectl already gives the output sorted by name.

Sorting (kcs here is presumably a shell alias for kubectl -n kube-system):

root@test-9:~/henry# kcs get svc --sort-by=.metadata.uid
NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
tiller-deploy          ClusterIP   10.43.155.15    <none>        44134/TCP       2h
monitoring-influxdb    ClusterIP   10.43.227.43    <none>        8086/TCP        2h
monitoring-grafana     ClusterIP   10.43.217.185   <none>        80/TCP          2h
kube-dns               ClusterIP   10.43.0.10      <none>        53/UDP,53/TCP   2h
kubernetes-dashboard   ClusterIP   10.43.36.245    <none>        9090/TCP        2h
heapster               ClusterIP   10.43.250.217   <none>        80/TCP          2h
root@test-9:~/henry#
root@test-9:~/henry# kcs get svc --sort-by=.metadata.name
NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
heapster               ClusterIP   10.43.250.217   <none>        80/TCP          2h
kube-dns               ClusterIP   10.43.0.10      <none>        53/UDP,53/TCP   2h
kubernetes-dashboard   ClusterIP   10.43.36.245    <none>        9090/TCP        2h
monitoring-grafana     ClusterIP   10.43.217.185   <none>        80/TCP          2h
monitoring-influxdb    ClusterIP   10.43.227.43    <none>        8086/TCP        2h
tiller-deploy          ClusterIP   10.43.155.15    <none>        44134/TCP       2h

root@test-9:~/henry# kcs get svc heapster -o json
{
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {
        "creationTimestamp": "2017-11-12T03:27:51Z",
        "labels": {
            "kubernetes.io/cluster-service": "true",
            "kubernetes.io/name": "Heapster",
            "task": "monitoring"
        },
        "name": "heapster",
        "namespace": "kube-system",
        "resourceVersion": "229",
        "selfLink": "/api/v1/namespaces/kube-system/services/heapster",
        "uid": "769529c5-c759-11e7-8dee-02cdc7a8bd69"
    },
    "spec": {
        "clusterIP": "10.43.250.217",
        "ports": [
            {
                "port": 80,
                "protocol": "TCP",
                "targetPort": 8082
            }
        ],
        "selector": {
            "k8s-app": "heapster"
        },
        "sessionAffinity": "None",
        "type": "ClusterIP"
    },
    "status": {
        "loadBalancer": {}
    }
}

Querying Resources

# Get commands with basic output
$ kubectl get services                          # List all services in the namespace
$ kubectl get pods --all-namespaces             # List all pods in all namespaces
$ kubectl get pods -o wide                      # List all pods in the namespace, with more details
$ kubectl get deployment my-dep                 # List a particular deployment
$ kubectl get pods --include-uninitialized      # List all pods in the namespace, including uninitialized ones

# Describe commands with verbose output
$ kubectl describe nodes my-node
$ kubectl describe pods my-pod

$ kubectl get services --sort-by=.metadata.name # List Services Sorted by Name

# List pods Sorted by Restart Count
$ kubectl get pods --sort-by='.status.containerStatuses[0].restartCount'

# Get the version label of all pods with label app=cassandra
$ kubectl get pods --selector=app=cassandra rc -o \
    jsonpath='{.items[*].metadata.labels.version}'

# Get ExternalIPs of all nodes
$ kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="ExternalIP")].address}'

# List Names of Pods that belong to Particular RC
# "jq" command useful for transformations that are too complex for jsonpath, it can be found at https://stedolan.github.io/jq/
$ sel=${$(kubectl get rc my-rc --output=json | jq -j '.spec.selector | to_entries | .[] | "\(.key)=\(.value),"')%?}
$ echo $(kubectl get pods --selector=$sel --output=jsonpath={.items..metadata.name})

# Check which nodes are ready
$ JSONPATH='{range .items[*]}{@.metadata.name}:{range @.status.conditions[*]}{@.type}={@.status};{end}{end}' \
    && kubectl get nodes -o jsonpath="$JSONPATH" | grep "Ready=True"

# List all Secrets currently in use by a pod
$ kubectl get pods -o json | jq '.items[].spec.containers[].env[]?.valueFrom.secretKeyRef.name' | grep -v null | sort | uniq

Common Commands

kubectl run

root@test-9:~# kubectl run demo-1 --image=busybox:latest --env="env1=wise2c" --port=80 --hostport=30098 --restart='Always' --image-pull-policy='Always' --limits="cpu=200m,memory=512Mi" --replicas=2 -- sleep 60
deployment "demo-1" created
root@test-9:~#
root@test-9:~# kubectl get pod
NAME                      READY     STATUS              RESTARTS   AGE
demo-1-4031462666-1m6lc   0/1       ContainerCreating   0          4s
demo-1-4031462666-3sph3   0/1       ContainerCreating   0          4s
root@test-9:~#
root@test-9:~# kubectl get deploy demo-1 -o yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: 2017-11-12T06:20:52Z
  generation: 1
  labels:
    run: demo-1
  name: demo-1
  namespace: default
  resourceVersion: "13667"
  selfLink: /apis/extensions/v1beta1/namespaces/default/deployments/demo-1
  uid: a24a6b2b-c771-11e7-8dee-02cdc7a8bd69
spec:
  replicas: 2
  selector:
    matchLabels:
      run: demo-1
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: demo-1
    spec:
      containers:
      - args:
        - sleep
        - "60"
        env:
        - name: env1
          value: wise2c
        image: busybox:latest
        imagePullPolicy: Always
        name: demo-1
        ports:
        - containerPort: 80
          hostPort: 30098
          protocol: TCP
        resources:
          limits:
            cpu: 200m
            memory: 512Mi
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 2
  conditions:
  - lastTransitionTime: 2017-11-12T06:22:03Z
    lastUpdateTime: 2017-11-12T06:22:03Z
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  observedGeneration: 1
  readyReplicas: 2
  replicas: 2
  updatedReplicas: 2

kubectl expose

root@test-9:~# kubectl expose deploy nginx2 --name=nginx --port=80 --target-port=80 --protocol=TCP --type=ClusterIP
service "nginx" exposed
root@test-9:~# kubectl get svc nginx -o yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2017-11-12T07:32:48Z
  labels:
    run: nginx2
  name: nginx
  namespace: default
  resourceVersion: "20097"
  selfLink: /api/v1/namespaces/default/services/nginx
  uid: ae7774d4-c77b-11e7-8dee-02cdc7a8bd69
spec:
  clusterIP: 10.43.221.216
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: nginx2
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
root@test-9:~#
root@test-9:~# kubectl expose deploy nginx2 --name=nginx --port=80 --target-port=80 --protocol=TCP --type=NodePort
service "nginx" exposed
root@test-9:~# kubectl get svc nginx -o yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2017-11-12T07:35:21Z
  labels:
    run: nginx2
  name: nginx
  namespace: default
  resourceVersion: "20296"
  selfLink: /api/v1/namespaces/default/services/nginx
  uid: 0a19d690-c77c-11e7-8dee-02cdc7a8bd69
spec:
  clusterIP: 10.43.120.19
  externalTrafficPolicy: Cluster
  ports:
  - nodePort: 30014
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: nginx2
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}

port-forward

root@test-9:~# kubectl get pod -o wide
NAME                    READY     STATUS    RESTARTS   AGE       IP            NODE
nginx-f7d4dc847-bzlzq   1/1       Running   0          11h       10.244.0.24   test-9
nginx-f7d4dc847-lcq57   1/1       Running   0          11h       10.244.1.45   test-10
nginx-f7d4dc847-qs28j   1/1       Running   0          11h       10.244.0.25   test-9
nginx-f7d4dc847-s4xml   1/1       Running   0          11h       10.244.1.44   test-10
nginx-f7d4dc847-skb74   1/1       Running   0          11h       10.244.1.43   test-10
nginx-f7d4dc847-x9vh4   1/1       Running   0          11h       10.244.0.26   test-9
root@test-9:~# kubectl port-forward nginx-f7d4dc847-bzlzq 9090:80
Forwarding from 127.0.0.1:9090 -> 80
Handling connection for 9090
root@test-9:~#
root@test-9:~# curl 127.0.0.1:9090
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
...
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
root@test-9:~#

NetworkPolicy

Q: Create a NetworkPolicy that allows connections to port 8080 only from the busybox pod

https://kubernetes.io/docs/concepts/services-networking/network-policies/

Make sure to use apiVersion: extensions/v1beta1 which works on both 1.6 and 1.7

  • Before the policy takes effect, an annotation must first be set to deny all requests;
  • podSelector.matchLabels: defines which (destination) Pods the rule applies to;
  • ingress: allows only Pods carrying the label "access=true" to reach those services;
root@test-9:~# kubectl annotate ns default "net.beta.kubernetes.io/network-policy={\"ingress\": {\"isolation\": \"DefaultDeny\"}}"
namespace "default" annotated
root@test-9:~#
root@test-9:~#
root@test-9:~# kubectl describe ns default
Name:         default
Labels:       <none>
Annotations:  net.beta.kubernetes.io/network-policy={"ingress": {"isolation": "DefaultDeny"}}
Status:       Active
No resource quota.
No resource limits.
root@test-9:~#
root@test-9:~/henry# kubectl get pod --show-labels
NAME                      READY     STATUS    RESTARTS   AGE       LABELS
nginx2-2627548522-6f5kf   1/1       Running   0          22m       pod-template-hash=2627548522,run=nginx
nginx2-2627548522-8w87b   1/1       Running   0          22m       pod-template-hash=2627548522,run=nginx
root@test-9:~/henry# kubectl get svc nginx --show-labels
NAME      TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE       LABELS
nginx     NodePort   10.43.120.19   <none>        80:30014/TCP   16m       run=nginx
root@test-9:~/henry# cat network-policy.yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: access-nginx
spec:
  podSelector:
    matchLabels:
      run: nginx
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          project: myproject
    - podSelector:
        matchLabels:
          access: "true"
    ports:
    - protocol: TCP
      port: 80
root@test-9:~/henry# kubectl get netpol
NAME           POD-SELECTOR   AGE
access-nginx   run=nginx      2m
root@test-9:~/henry# kubectl get netpol access-nginx -o yaml
apiVersion: extensions/v1beta1
kind: NetworkPolicy
metadata:
  creationTimestamp: 2017-11-12T07:40:38Z
  generation: 1
  name: access-nginx
  namespace: default
  resourceVersion: "20699"
  selfLink: /apis/extensions/v1beta1/namespaces/default/networkpolicies/access-nginx
  uid: c72191d1-c77c-11e7-8dee-02cdc7a8bd69
spec:
  ingress:
  - from:
    - podSelector:
        matchLabels:
          access: "true"
    ports:
    - port: 80
      protocol: TCP
  podSelector:
    matchLabels:
      run: nginx
root@test-9:~/henry#
root@test-9:~/henry# kubectl run busybox --rm -ti --labels="access=true" --image=busybox /bin/sh
If you don't see a command prompt, try pressing enter.
/ # wget nginx
Connecting to nginx (10.43.120.19:80)
index.html           100% |********************************************************************************************|   612   0:00:00 ETA
/ #
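
Note that the DefaultDeny annotation used above is the v1.7-era mechanism. On clusters with the networking.k8s.io/v1 API, the same effect comes from a deny-all policy instead; a minimal sketch:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}   # selects every pod in the namespace
  policyTypes:
  - Ingress         # no ingress rules are listed, so all ingress is denied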

Node Broken

  • Node parameters

root@test-9:~# kubectl describe nodes
Name:               test-10
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/instance-type=rancher
                    beta.kubernetes.io/os=linux
                    failure-domain.beta.kubernetes.io/region=Region1
                    failure-domain.beta.kubernetes.io/zone=FailureDomain1
                    io.rancher.host.docker_version=1.12
                    io.rancher.host.linux_kernel_version=4.4
                    kubernetes.io/hostname=test-10
Annotations:        io.rancher.labels.io.rancher.host.docker_version=
                    io.rancher.labels.io.rancher.host.linux_kernel_version=
                    node.alpha.kubernetes.io/ttl=0
                    volumes.kubernetes.io/controller-managed-attach-detach=true
Taints:             <none>
CreationTimestamp:  Sun, 12 Nov 2017 11:27:45 +0800
Conditions:
    Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
    ----             ------  -----------------                 ------------------                ------                       -------
    OutOfDisk        False   Sun, 12 Nov 2017 15:16:39 +0800   Sun, 12 Nov 2017 11:27:45 +0800   KubeletHasSufficientDisk     kubelet has sufficient disk space available
    MemoryPressure   False   Sun, 12 Nov 2017 15:16:39 +0800   Sun, 12 Nov 2017 11:27:45 +0800   KubeletHasSufficientMemory   kubelet has sufficient memory available
    DiskPressure     False   Sun, 12 Nov 2017 15:16:39 +0800   Sun, 12 Nov 2017 11:27:45 +0800   KubeletHasNoDiskPressure     kubelet has no disk pressure
    Ready            True    Sun, 12 Nov 2017 15:16:39 +0800   Sun, 12 Nov 2017 11:27:45 +0800   KubeletReady                 kubelet is posting ready status
Addresses:
    InternalIP:  10.144.102.117
    ExternalIP:  10.144.102.117
    Hostname:    test-10
Capacity:
    cpu:     4
    memory:  16301460Ki
    pods:    110
Allocatable:
    cpu:     4
    memory:  16199060Ki
    pods:    110
System Info:
    Machine ID:
    System UUID:                4ABB25CA-B353-450A-9787-28477ED72344
    Boot ID:                    689e31dc-e05d-48de-9068-e8460d15a9b6
    Kernel Version:             4.4.0-91-generic
    OS Image:                   Ubuntu 16.04.1 LTS
    Operating System:           linux
    Architecture:               amd64
    Container Runtime Version:  docker://1.12.6
    Kubelet Version:            v1.7.7-rancher1
    Kube-Proxy Version:         v1.7.7-rancher1
ExternalID:                  3cb02e3d-cb58-42c6-9a54-2fb5cfb836d2
Non-terminated Pods:         (4 in total)
    Namespace                  Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits
    ---------                  ----                                    ------------  ----------  ---------------  -------------
    default                    demo-1-4031462666-1m6lc                 200m (5%)     200m (5%)   512Mi (3%)       512Mi (3%)
    default                    nginx-4217019353-k3mqk                  0 (0%)        0 (0%)      0 (0%)           0 (0%)
    kube-system                kube-dns-638003847-q28hb                260m (6%)     0 (0%)      110Mi (0%)       170Mi (1%)
    kube-system                kubernetes-dashboard-716739405-42t14    100m (2%)     100m (2%)   50Mi (0%)        50Mi (0%)
Allocated resources:
    (Total limits may be over 100 percent, i.e., overcommitted.)
    CPU Requests  CPU Limits  Memory Requests  Memory Limits
    ------------  ----------  ---------------  -------------
    560m (14%)    300m (7%)   672Mi (4%)       732Mi (4%)
Events:         <none>
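
The exam variant of this topic is usually "a node is NotReady; investigate and bring it back". A typical triage, assuming a systemd-managed kubelet:

kubectl get nodes                    # spot the NotReady node
kubectl describe node test-10        # check Conditions and recent Events
ssh test-10
systemctl status kubelet             # is the kubelet running at all?
journalctl -u kubelet | tail -n 50   # recent kubelet logs
systemctl restart kubelet            # restart it if it has stopped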

Etcd

  • Start Etcd

    #start script:
    #========================================
    etcd --name 'default' \
    --data-dir '/root/data.etcd' \
    --ca-file '/pki/ca.crt' --cert-file '/pki/cert.crt' --key-file '/pki/key.key' \
    --peer-ca-file '/pki/ca.crt' --peer-cert-file '/pki/cert.crt' --peer-key-file '/pki/key.key' \
    --client-cert-auth \
    --peer-client-cert-auth \
    --listen-peer-urls https://localhost:2380 \
    --listen-client-urls https://localhost:2379 \
    --advertise-client-urls https://localhost:2379 \
    --initial-advertise-peer-urls https://localhost:2380 \
    --initial-cluster default=https://localhost:2380 \
    --initial-cluster-state 'new' \
    --initial-cluster-token 'etcd-cluster' \
    --debug
    
    
    #operate:
    #========================================
    etcdctl  --endpoint=https://localhost:2379 --ca-file=/pki/ca.crt --cert-file=/pki/cert.crt --key-file=/pki/key.key ls /
    

    To enable certificates:

    1. prefix the URLs you access with https;
    2. set the CA / cert / key flags shown in the start script above.
  • Replacing a failed etcd member

    1. Get the member ID of the failed member1:

      etcdctl --endpoints=http://10.0.0.2,http://10.0.0.3 member list
      
    2. The following message is displayed:

      8211f1d0f64f3269, started, member1, http://10.0.0.1:12380, http://10.0.0.1:2379
      91bc3c398fb3c146, started, member2, http://10.0.0.1:2380, http://10.0.0.2:2379
      fd422379fda50e48, started, member3, http://10.0.0.1:2380, http://10.0.0.3:2379
      
    3. Remove the failed member:

      etcdctl member remove 8211f1d0f64f3269
      
    4. The following message is displayed:

      Removed member 8211f1d0f64f3269 from cluster
      
    5. Add the new member:

      ./etcdctl member add member4 --peer-urls=http://10.0.0.4:2380
      
    6. The following message is displayed:

      Member 2be1eb8f84b7f63e added to cluster ef37ad9dc622a7c4
      
    7. Start the newly added member on a machine with the IP 10.0.0.4:

      export ETCD_NAME="member4"
      export ETCD_INITIAL_CLUSTER="member2=http://10.0.0.2:2380,member3=http://10.0.0.3:2380,member4=http://10.0.0.4:2380"
      export ETCD_INITIAL_CLUSTER_STATE=existing
      etcd [flags]
      

      Note: the new member is first added to the cluster, and only then is the corresponding etcd process started.

      Also, the newly started etcd member must be given the initial cluster state "existing".

  • Backing up an etcd cluster

    ETCDCTL_API=3 etcdctl --endpoints $ENDPOINT snapshot save snapshotdb
    # exit 0
    
    # verify the snapshot
    ETCDCTL_API=3 etcdctl --write-out=table snapshot status snapshotdb
    +----------+----------+------------+------------+
    |   HASH   | REVISION | TOTAL KEYS | TOTAL SIZE |
    +----------+----------+------------+------------+
    | fe01cf57 |       10 |          7 | 2.1 MB     |
    +----------+----------+------------+------------+
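
    Restoring is the counterpart operation; a sketch that reuses the cluster settings from the start script above (the target data dir must not exist yet):

    ETCDCTL_API=3 etcdctl snapshot restore snapshotdb \
      --name default \
      --initial-cluster default=https://localhost:2380 \
      --initial-cluster-token etcd-cluster \
      --initial-advertise-peer-urls https://localhost:2380 \
      --data-dir /root/data.etcd.restored

    Afterwards, start etcd pointing at the restored data dir.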
    

initContainer

Q: You have a container with a volume mount. Add an init container that creates an empty file in the volume. (The only trick is to mount the volume into the init container as well.)

https://kubernetes.io/docs/concepts/workloads/pods/init-containers/

apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - name: myapp-container
    image: busybox
    command: ['sh', '-c', 'echo The app is running! && sleep 3600']
    volumeMounts:
    - mountPath: /cache
      name: cache-volume
  initContainers:
  - name: init-touch-file
    image: busybox
    volumeMounts:
    - mountPath: /data
      name: cache-volume
    command: ['sh', '-c', 'echo "" > /data/harshal.txt']
  volumes:
  - name: cache-volume
    emptyDir: {}
root@test-9:~/henry# cat init-container.yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
  - name: myapp-container
    image: busybox
    command: ['sh', '-c', 'echo The app is running! && sleep 3600']
  initContainers:
  - name: init-baidu
    image: busybox
    command: ['sh', '-c', 'until nslookup www.baidu.com; do echo waiting for baidu.com; sleep 2; done;']
  - name: init-google
    image: busybox
    command: ['sh', '-c', 'until nslookup www.google.com; do echo waiting for google.com; sleep 2; done;']

root@test-9:~/henry#
root@test-9:~/henry# kubectl get pod -a
NAME                      READY     STATUS    RESTARTS   AGE
myapp-pod                 1/1       Running   0          1m
nginx2-2627548522-6f5kf   1/1       Running   0          2h
nginx2-2627548522-8w87b   1/1       Running   0          2h
root@test-9:~/henry#
root@test-9:~/henry# kubectl describe pod myapp-pod
Name:         myapp-pod
Namespace:    default
Node:         test-9/10.144.96.185
Start Time:   Sun, 12 Nov 2017 17:43:49 +0800
Labels:       app=myapp
Annotations:  pod.alpha.kubernetes.io/init-container-statuses=[{"name":"init-baidu","state":{"terminated":{"exitCode":0,"reason":"Completed","startedAt":"2017-11-12T09:43:54Z","finishedAt":"2017-11-12T09:43:54Z","c...
                pod.alpha.kubernetes.io/init-containers=[{"name":"init-baidu","image":"busybox","command":["sh","-c","until nslookup www.baidu.com; do echo waiting for baidu.com; sleep 2; done;"],"resources":{},"volu...
                pod.beta.kubernetes.io/init-container-statuses=[{"name":"init-baidu","state":{"terminated":{"exitCode":0,"reason":"Completed","startedAt":"2017-11-12T09:43:54Z","finishedAt":"2017-11-12T09:43:54Z","co...
                pod.beta.kubernetes.io/init-containers=[{"name":"init-baidu","image":"busybox","command":["sh","-c","until nslookup www.baidu.com; do echo waiting for baidu.com; sleep 2; done;"],"resources":{},"volum...
Status:       Running
IP:           10.42.107.11
Init Containers:
    init-baidu:
    Container ID:  docker://9497c4dc7c111870022e5dd873daba13f00797308b505f6e82fd1f1545744062
    Image:         busybox
    Image ID:      docker-pullable://busybox@sha256:bbc3a03235220b170ba48a157dd097dd1379299370e1ed99ce976df0355d24f0
    Port:          <none>
    Command:
        sh
        -c
        until nslookup www.baidu.com; do echo waiting for baidu.com; sleep 2; done;
    State:          Terminated
        Reason:       Completed
        Exit Code:    0
        Started:      Sun, 12 Nov 2017 17:43:54 +0800
        Finished:     Sun, 12 Nov 2017 17:43:54 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
        /var/run/secrets/kubernetes.io/serviceaccount from default-token-5qfpj (ro)
    init-google:
    Container ID:  docker://5ff45db07f52c51e40b0bb77ad650aa4fbd29aa7112a4197de33ed880a04376d
    Image:         busybox
    Image ID:      docker-pullable://busybox@sha256:bbc3a03235220b170ba48a157dd097dd1379299370e1ed99ce976df0355d24f0
    Port:          <none>
    Command:
        sh
        -c
        until nslookup www.google.com; do echo waiting for google.com; sleep 2; done;
    State:          Terminated
        Reason:       Completed
        Exit Code:    0
        Started:      Sun, 12 Nov 2017 17:43:59 +0800
        Finished:     Sun, 12 Nov 2017 17:43:59 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
        /var/run/secrets/kubernetes.io/serviceaccount from default-token-5qfpj (ro)
Containers:
    myapp-container:
    Container ID:  docker://88cf1ddb39e7b468d9d06c37a7d3ff1ca0d39ae9b0f46d0cf2f1788cb1482118
    Image:         busybox
    Image ID:      docker-pullable://busybox@sha256:bbc3a03235220b170ba48a157dd097dd1379299370e1ed99ce976df0355d24f0
    Port:          <none>
    Command:
        sh
        -c
        echo The app is running! && sleep 3600
    State:          Running
        Started:      Sun, 12 Nov 2017 17:44:04 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
        /var/run/secrets/kubernetes.io/serviceaccount from default-token-5qfpj (ro)
Conditions:
    Type           Status
    Initialized    True
    Ready          True
    PodScheduled   True
Volumes:
    default-token-5qfpj:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-5qfpj
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.alpha.kubernetes.io/notReady:NoExecute for 300s
                    node.alpha.kubernetes.io/unreachable:NoExecute for 300s
Events:
    Type    Reason                 Age   From               Message
    ----    ------                 ----  ----               -------
    Normal  Scheduled              1m    default-scheduler  Successfully assigned myapp-pod to test-9
    Normal  SuccessfulMountVolume  1m    kubelet, test-9    MountVolume.SetUp succeeded for volume "default-token-5qfpj"
    Normal  Pulling                1m    kubelet, test-9    pulling image "busybox"
    Normal  Pulled                 1m    kubelet, test-9    Successfully pulled image "busybox"
    Normal  Created                1m    kubelet, test-9    Created container
    Normal  Started                1m    kubelet, test-9    Started container
    Normal  Pulling                1m    kubelet, test-9    pulling image "busybox"
    Normal  Pulled                 1m    kubelet, test-9    Successfully pulled image "busybox"
    Normal  Created                1m    kubelet, test-9    Created container
    Normal  Started                1m    kubelet, test-9    Started container
    Normal  Pulling                1m    kubelet, test-9    pulling image "busybox"
    Normal  Pulled                 1m    kubelet, test-9    Successfully pulled image "busybox"
    Normal  Created                1m    kubelet, test-9    Created container
    Normal  Started                1m    kubelet, test-9    Started container
root@test-9:~/henry#

Volume

Q: When running a redis key-value store in your pre-production environments many deployments are incoming from CI and leaving behind a lot of stale cache data in redis which is causing test failures. The CI admin has requested that each time a redis key-value-store is deployed in staging that it not persist its data. Create a pod named non-persistent-redis that specifies a named-volume with name app-cache, and mount path /data/redis. It should launch in the staging namespace and the volume MUST NOT be persistent. Create a Pod with EmptyDir and in the YAML file add namespace: CI

YAML format:

apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: gcr.io/google_containers/busybox:latest
    name: test-container
    command: ["/bin/sh", "-c", "sleep 9999"]
    volumeMounts:
    - mountPath: /cache
      name: cache-volume
  volumes:
  - name: cache-volume
    emptyDir: {}
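
Adapted to the wording of the question itself, a sketch (it assumes the staging namespace already exists):

apiVersion: v1
kind: Pod
metadata:
  name: non-persistent-redis
  namespace: staging
spec:
  containers:
  - name: redis
    image: redis
    volumeMounts:
    - mountPath: /data/redis
      name: app-cache
  volumes:
  - name: app-cache
    emptyDir: {}   # an emptyDir lives and dies with the pod, so nothing persists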

Mounting a file into a Pod (here from a ConfigMap, via subPath):

apiVersion: v1
items:
- apiVersion: extensions/v1beta1
  kind: Deployment
  metadata:
    labels:
      io.wise2c.service: xx
      io.wise2c.stack: stack001
    name: stack001-xx
  spec:
    replicas: 1
    template:
      metadata:
        labels:
          io.wise2c.service: xx
          io.wise2c.stack: stack001
      spec:
        containers:
        - image: nginx
          name: xx
          resources:
            limits:
              cpu: 200m
              memory: 1073741824
          volumeMounts:
          - mountPath: /etc/resolv.conf
            name: xx
            subPath: resolv.conf
        volumes:
        - configMap:
            name: stack001-xx
          name: xx
- apiVersion: v1
  data:
    resolv.conf: "\nnameserver 10.96.0.10 \n\nsearch stack001.ns-team-2-env-44.svc.cluster.local\
      \ ns-team-2-env-44.svc.cluster.local svc.cluster.local cluster.local\noptions\
      \ ndots:6"
  kind: ConfigMap
  metadata:
    labels:
      io.wise2c.stack: stack001
    name: stack001-xx
kind: List

Mounting the same volume into different containers, each under a different subPath:

apiVersion: v1
kind: Pod
metadata:
  name: my-lamp-site
spec:
  containers:
  - name: mysql
    image: busybox
    command: ["/bin/sh", "-c", "sleep 999"]
    volumeMounts:
    - mountPath: /haha/mysql
      name: site-data
      subPath: mysql
  - name: php
    image: busybox
    command: ["/bin/sh", "-c", "sleep 999"]
    volumeMounts:
    - mountPath: /haha/html
      name: site-data
      subPath: html
  volumes:
  - name: site-data
    hostPath:
      path: /data

Two kinds of persistent volumes

PV: static provisioning. The user has to create the PV object themselves.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0003
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: slow
  nfs:
    path: /tmp
    server: 172.17.0.2

PVC: the user does not deal with PVs directly. They only state what kind of storage they need, i.e. create a PVC, and the matching PV is then provisioned automatically from the StorageClass.

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myclaim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
  storageClassName: gold
  selector:
    matchLabels:
      release: "stable"
    matchExpressions:
    - {key: environment, operator: In, values: [dev]}

Storage Class:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gold
provisioner: kubernetes.io/cinder
parameters:
  type: fast
  availability: nova

Pod:

kind: Pod
apiVersion: v1
metadata:
  name: mypod
spec:
  containers:
  - name: myfrontend
    image: dockerfile/nginx
    volumeMounts:
    - mountPath: "/var/www/html"
      name: mypd
  volumes:
  - name: mypd
    persistentVolumeClaim:
      claimName: myclaim

Log

Q: Find the error message with the string “Some-error message here”.

https://kubernetes.io/docs/concepts/cluster-administration/logging/

see kubectl logs and /var/log for system services

[root@dev-7 henry]# kcc logs -f --tail=10  orchestration-2080965958-khwfx -c orchestration
[root@dev-7 henry]# kcc logs -f --since=1h  orchestration-2080965958-khwfx -c orchestration

The kubelet log is located under /var/log/kubelet.
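
For the question itself, grep is usually enough (pod name and message below are placeholders):

kubectl logs some-pod | grep "Some-error message here"

# and for system components on the node:
grep -r "Some-error message here" /var/log/
journalctl -u kubelet | grep "Some-error message here"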

static pod

Q: Run a Jenkins Pod on a specified node only.

https://kubernetes.io/docs/tasks/administer-cluster/static-pod/

Create the Pod manifest at the specified location, then edit the systemd service file for the kubelet (/etc/systemd/system/kubelet.service) to include --pod-manifest-path=/specified/path. Once done, restart the service.

  1. Choose a node where we want to run the static pod. In this example, it’s my-node1.

    [joe@host ~] $ ssh my-node1
    
  2. Choose a directory, say /etc/kubelet.d and place a web server pod definition there, e.g. /etc/kubelet.d/static-pod.yaml :

    [root@my-node1 ~] $ mkdir /etc/kubelet.d/
    [root@my-node1 ~] $ cat <<EOF >/etc/kubelet.d/static-pod.yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: static-pod
    spec:
      containers:
      - image: busybox
        name: test-container
        command: ["/bin/sh", "-c", "sleep 9999"]
    EOF
    
  3. Configure your kubelet daemon on the node to use this directory by running it with --pod-manifest-path=/etc/kubelet.d/ argument. On Fedora edit /etc/kubernetes/kubelet to include this line:

    KUBELET_ARGS="--cluster-dns=10.254.0.10 --cluster-domain=kube.local --pod-manifest-path=/etc/kubelet.d/"
    
  4. Instructions for other distributions or Kubernetes installations may vary. Restart kubelet. On Fedora, this is:

    [root@my-node1 ~] $ systemctl restart kubelet
    

The result looks like this:

[root@dev-9 manifests]# kubectl get pod
NAME                                      READY     STATUS        RESTARTS   AGE
static-pod-dev-9                          1/1       Running       0          34s
[root@dev-9 manifests]#
[root@dev-9 manifests]# kubectl describe pod static-pod-dev-9
Name:                static-pod-dev-9
Namespace:        default
Node:                dev-9/192.168.1.190
Start Time:        Sun, 12 Nov 2017 21:21:48 +0800
Labels:                <none>
Annotations:        kubernetes.io/config.hash=1dcad4affd910f45b5c3a8dbdeec8933
                kubernetes.io/config.mirror=1dcad4affd910f45b5c3a8dbdeec8933
                kubernetes.io/config.seen=2017-11-12T21:21:48.15196949+08:00
                kubernetes.io/config.source=file
Status:                Running
IP:                10.244.3.45
Containers:
    test-container:
    Container ID:        docker://ef3e28e45e280e4a50942fc472fd025cb84a7014a64dbc57308cddbfeb1bd979
    Image:                busybox
    Image ID:                docker-pullable://busybox@sha256:bbc3a03235220b170ba48a157dd097dd1379299370e1ed99ce976df0355d24f0
    Port:                <none>
    Command:
        /bin/sh
        -c
        sleep 9999
    State:                Running
        Started:                Sun, 12 Nov 2017 21:21:52 +0800
    Ready:                True
    Restart Count:        0
    Environment:        <none>
    Mounts:                <none>
Conditions:
    Type                Status
    Initialized         True
    Ready         True
    PodScheduled         True
Volumes:        <none>
QoS Class:        BestEffort
Node-Selectors:        <none>
Tolerations:        :NoExecute
Events:                <none>
[root@dev-9 manifests]#

DNS

Q: Use the utility nslookup to look up the DNS records of the service and pod.

From this guide, https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/

Look for “Quick Diagnosis”

Services

$ kubectl exec -ti busybox -- nslookup mysvc.myns.svc.cluster.local

Naming conventions for services and pods:

  1. For a regular service, this resolves to the port number and the CNAME (resolving to the Cluster IP):

    my-svc.my-namespace.svc.cluster.local.

root@test-9:~/henry# kubectl exec -ti busybox-2520568787-kkmrw -- nslookup nginx.default.svc.cluster.local
Server:    10.43.0.10
Address 1: 10.43.0.10 kube-dns.kube-system.svc.cluster.local

Name:      nginx.default
Address 1: 10.43.120.19 nginx.default.svc.cluster.local
root@test-9:~/henry#
  2. For a headless service, this resolves to multiple answers (round-robin over the backing Pod IPs), one for each pod that is backing the service, and contains the port number and a CNAME of the pod of the form

    auto-generated-name.my-svc.my-namespace.svc.cluster.local

Pods

When enabled, pods are assigned a DNS A record in the form of

pod-ip-address.my-namespace.pod.cluster.local

For example, a pod with IP 1.2.3.4 in the namespace default with a DNS name of cluster.local would have an entry: 1-2-3-4.default.pod.cluster.local

root@test-9:~/henry# kubectl exec -ti busybox-2520568787-kkmrw -- nslookup 10-42-236-215.default.pod.cluster.local
Server:    10.43.0.10
Address 1: 10.43.0.10 kube-dns.kube-system.svc.cluster.local

Name:      10-42-236-215.default.pod.cluster.local
Address 1: 10.42.236.215
root@test-9:~/henry#

Ingress

Q 17: Create an Ingress resource, Ingress controller and a Service that resolves to cs.rocks.ch.
  1. First, create the controller and the default backend

    kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress/master/controllers/nginx/examples/default-backend.yaml
    kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress/master/examples/deployment/nginx/nginx-ingress-controller.yaml
    
  2. Second, create a service and expose it

    kubectl run ingress-pod --image=nginx --port 80
    kubectl expose deployment ingress-pod --port=80 --target-port=80 --type=NodePort
    
  3. Create the Ingress

    cat <<EOF >ingress-cka.yaml
    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: ingress-service
    spec:
      rules:
      - host: "cs.rocks.ch"
        http:
          paths:
          - backend:
              serviceName: ingress-pod
              servicePort: 80
    EOF
    
  4. To test, run a curl pod

    kubectl run -i --tty client --image=tutum/curl
    curl -I -L --resolve cs.rocks.ch:80:10.240.0.5 http://cs.rocks.ch/
    

In my view, to reach the ingress on a flannel network, you should also be able to use hostPort to expose ports 80 and 443 of ingress-nginx.
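
That would amount to patching the controller's container spec along these lines (a sketch, not the upstream manifest):

ports:
- name: http
  containerPort: 80
  hostPort: 80
- name: https
  containerPort: 443
  hostPort: 443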

  • Mandatory commands

    curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/namespace.yaml | kubectl apply -f -
    
    curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/default-backend.yaml | kubectl apply -f -
    
    curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/configmap.yaml | kubectl apply -f -
    
    curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/tcp-services-configmap.yaml  | kubectl apply -f -
    
    curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/udp-services-configmap.yaml | kubectl apply -f -
    
  • Install with RBAC roles

    curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/rbac.yaml | kubectl apply -f -
    
    curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/with-rbac.yaml | kubectl apply -f -
    
  • Verify installation:

    kubectl get pods --all-namespaces -l app=ingress-nginx --watch
    

TLS

  • Download the cfssl binaries
    1. Visit: https://pkg.cfssl.org
    2. Download: cfssl_linux-amd64 => cfssl
    3. Download: cfssljson_linux-amd64 => cfssljson
    4. Download: cfssl-certinfo_linux-amd64 => cfssl-certinfo
  • Certificate creation flow

    1. Create a self-signed CA certificate;
    2. Use the CA certificate, the CA private key, the CA config file, and the client's CSR to generate the client's certificate;
  • Walkthrough

    1. Generate the default config files

      root@test-9:~/henry# ./cfssl print-defaults list
      Default configurations are available for:
              config
              csr
      root@test-9:~/henry# ./cfssl print-defaults config > ca-config.json
      root@test-9:~/henry# ./cfssl print-defaults csr > ca-csr.json
      root@test-9:~/henry#
      
    2. Configure the CA's CSR and generate the CA's own cert:

      root@test-9:~/henry# cat ca-csr.json
      {
          "CN": "Chen Leiji CA",
          "key": {
              "algo": "ecdsa",
              "size": 256
          },
          "names": [
              {
                  "C": "US",
                  "L": "CA",
                  "ST": "San Francisco",
                  "O": "WISE2C",
                  "OU": "development"
              }
          ]
      }
      
      root@test-9:~/henry#
      root@test-9:~/henry# ./cfssl gencert -initca ca-csr.json | ./cfssljson -bare ca
      2017/11/15 13:16:04 [INFO] generating a new CA key and certificate from CSR
      2017/11/15 13:16:04 [INFO] generate received request
      2017/11/15 13:16:04 [INFO] received CSR
      2017/11/15 13:16:04 [INFO] generating key: ecdsa-256
      2017/11/15 13:16:04 [INFO] encoded CSR
      2017/11/15 13:16:04 [INFO] signed certificate with serial number 84349438505086023342597169366385715026517648791
      root@test-9:~/henry# ls
      ca-config.json  ca.csr  ca-key.pem  ca.pem  cfssl  cfssljson  csr.json
      root@test-9:~/henry#
      
    3. Inspect the generated CA certificate:

      root@test-9:~/henry# ./cfssl-certinfo -cert ca.pem
      {
      "subject": {
          "common_name": "Chen Leiji CA",
          "country": "US",
          "organization": "WISE2C",
          "organizational_unit": "development",
          "locality": "CA",
          "province": "San Francisco",
          "names": [
          "US",
          "San Francisco",
          "CA",
          "WISE2C",
          "development",
          "Chen Leiji CA"
          ]
      },
      "issuer": {
          "common_name": "Chen Leiji CA",
          "country": "US",
          "organization": "WISE2C",
          "organizational_unit": "development",
          "locality": "CA",
          "province": "San Francisco",
          "names": [
          "US",
          "San Francisco",
          "CA",
          "WISE2C",
          "development",
          "Chen Leiji CA"
          ]
      },
      "serial_number": "84349438505086023342597169366385715026517648791",
      "not_before": "2017-11-15T05:11:00Z",
      "not_after": "2022-11-14T05:11:00Z",
      "sigalg": "ECDSAWithSHA256",
      "authority_key_id": "D4:54:B3:F5:DF:CA:4A:22:E5:E:99:F0:BE:5A:4E:B:89:3C:DC:53",
      "subject_key_id": "D4:54:B3:F5:DF:CA:4A:22:E5:E:99:F0:BE:5A:4E:B:89:3C:DC:53",
      "pem": "-----BEGIN CERTIFICATE-----\nMIICSjCCAfCgAwIBAgIUDsZcEST3fVKpcGgiDP+IvVG1ZZcwCgYIKoZIzj0EAwIw\ncTELMAkGA1UEBhMCVVMxFjAUBgNVBAgTDVNhbiBGcmFuY2lzY28xCzAJBgNVBAcT\nAkNBMQ8wDQYDVQQKEwZXSVNFMkMxFDASBgNVBAsTC2RldmVsb3BtZW50MRYwFAYD\nVQQDEw1DaGVuIExlaWppIENBMB4XDTE3MTExNTA1MTEwMFoXDTIyMTExNDA1MTEw\nMFowcTELMAkGA1UEBhMCVVMxFjAUBgNVBAgTDVNhbiBGcmFuY2lzY28xCzAJBgNV\nBAcTAkNBMQ8wDQYDVQQKEwZXSVNFMkMxFDASBgNVBAsTC2RldmVsb3BtZW50MRYw\nFAYDVQQDEw1DaGVuIExlaWppIENBMFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAE\nCaeC0bFStNdMcaWjMWtc0/HyC/VrcuALsa7ie5xE1lB6wNtQE1JlDxQUPbOJvHXh\nXJ8Lhtp+GKR3nPWiy6+j36NmMGQwDgYDVR0PAQH/BAQDAgEGMBIGA1UdEwEB/wQI\nMAYBAf8CAQIwHQYDVR0OBBYEFNRUs/Xfykoi5Q6Z8L5aTguJPNxTMB8GA1UdIwQY\nMBaAFNRUs/Xfykoi5Q6Z8L5aTguJPNxTMAoGCCqGSM49BAMCA0gAMEUCIQCIG5Hx\n6Pmhj3LT2dRpGGJW3yj3+r9txDjMUH7+CtvJ/AIga5REzrYKSByjSKrMmoa2NPl2\nIIKQ2hASUgXI3565qdc=\n-----END CERTIFICATE-----\n"
      }
      root@test-9:~/henry#
      
    4. Configure the CA profiles (each profile here corresponds to a --profile value passed when signing certificates):

      root@test-9:~/henry# cat ca-config.json
      {
          "signing": {
              "default": {
                  "expiry": "168h"
              },
              "profiles": {
                  "server": {
                      "expiry": "8760h",
                      "usages": [
                          "signing",
                          "key encipherment",
                          "server auth"
                      ]
                  },
                  "client": {
                      "expiry": "8760h",
                      "usages": [
                          "signing",
                          "key encipherment",
                          "client auth"
                      ]
                  },
                  "peer": {
                      "expiry": "8760h",
                      "usages": [
                          "signing",
                          "key encipherment",
                          "server auth"
                      ]
                  }
              }
          }
      }
      
      root@test-9:~/henry#
      
    5. Edit the client csr.json:

      Get the template file:

      root@test-9:~/henry# ./cfssl print-defaults csr 
      {
          "CN": "example.net",
          "hosts": [
              "example.net",
              "www.example.net"
          ],
          "key": {
              "algo": "ecdsa",
              "size": 256
          },
          "names": [
              {
                  "C": "US",
                  "L": "CA",
                  "ST": "San Francisco"
              }
          ]
      }
      
      root@test-9:~/henry#
      

      Edit the CSR; this mainly concerns the hosts field:

      root@test-9:~/henry# cat csr.json
      {
          "CN": "Chen Leiji Server",
          "key": {
              "algo": "ecdsa",
              "size": 256
          },
          "hosts":[
              "www.wise2c.com"
          ],
          "names": [
              {
                  "C": "US",
                  "L": "CA",
                  "ST": "San Francisco",
                  "O": "WISE2C",
                  "OU": "development"
              }
          ]
      }
      
    6. Generate the server certificate

      root@test-9:~/henry# ./cfssl gencert -ca=ca.pem -ca-key=ca-key.pem --config=ca-config.json --hostname="www.wise2c.com" --profile="server" csr.json | ./cfssljson -bare server
      2017/11/15 14:34:07 [INFO] generate received request
      2017/11/15 14:34:07 [INFO] received CSR
      2017/11/15 14:34:07 [INFO] generating key: ecdsa-256
      2017/11/15 14:34:07 [INFO] encoded CSR
      2017/11/15 14:34:07 [INFO] signed certificate with serial number 408368599847170747880405926931506246283785194006
      root@test-9:~/henry#
      root@test-9:~/henry# ls
      ca-config.json  ca.csr  ca-key.pem  ca.pem  cfssl  cfssl-certinfo  cfssljson  csr.json  server.csr  server-key.pem  server.pem
      root@test-9:~/henry#
      root@test-9:~/henry# ./cfssl-certinfo -cert server.pem
      {
      "subject": {
          "common_name": "Chen Leiji Server",
          "country": "US",
          "organization": "WISE2C",
          "organizational_unit": "development",
          "locality": "CA",
          "province": "San Francisco",
          "names": [
          "US",
          "San Francisco",
          "CA",
          "WISE2C",
          "development",
          "Chen Leiji Server"
          ]
      },
      "issuer": {
          "common_name": "Chen Leiji CA",
          "country": "US",
          "organization": "WISE2C",
          "organizational_unit": "development",
          "locality": "CA",
          "province": "San Francisco",
          "names": [
          "US",
          "San Francisco",
          "CA",
          "WISE2C",
          "development",
          "Chen Leiji CA"
          ]
      },
      "serial_number": "408368599847170747880405926931506246283785194006",
      "sans": [
          "www.wise2c.com"
      ],
      "not_before": "2017-11-15T06:29:00Z",
      "not_after": "2018-11-15T06:29:00Z",
      "sigalg": "ECDSAWithSHA256",
      "authority_key_id": "D4:54:B3:F5:DF:CA:4A:22:E5:E:99:F0:BE:5A:4E:B:89:3C:DC:53",
      "subject_key_id": "1D:DB:C:A:34:9D:50:98:B0:F7:7D:E5:43:AF:3:D:9E:7D:92:C4",
      "pem": "-----BEGIN CERTIFICATE-----\nMIICeTCCAiCgAwIBAgIUR4fhn28TfjY12WtKZvStTxZMyhYwCgYIKoZIzj0EAwIw\ncTELMAkGA1UEBhMCVVMxFjAUBgNVBAgTDVNhbiBGcmFuY2lzY28xCzAJBgNVBAcT\nAkNBMQ8wDQYDVQQKEwZXSVNFMkMxFDASBgNVBAsTC2RldmVsb3BtZW50MRYwFAYD\nVQQDEw1DaGVuIExlaWppIENBMB4XDTE3MTExNTA2MjkwMFoXDTE4MTExNTA2Mjkw\nMFowdTELMAkGA1UEBhMCVVMxFjAUBgNVBAgTDVNhbiBGcmFuY2lzY28xCzAJBgNV\nBAcTAkNBMQ8wDQYDVQQKEwZXSVNFMkMxFDASBgNVBAsTC2RldmVsb3BtZW50MRow\nGAYDVQQDExFDaGVuIExlaWppIFNlcnZlcjBZMBMGByqGSM49AgEGCCqGSM49AwEH\nA0IABNS8mQT/DGMqig0Ju4VwcKtJoleoUF/lJokUhGKVudxIDRPMlgfHI7etIOBD\nPPhgrDdBWMEZHqZ0qLhmvp2v4G6jgZEwgY4wDgYDVR0PAQH/BAQDAgWgMBMGA1Ud\nJQQMMAoGCCsGAQUFBwMBMAwGA1UdEwEB/wQCMAAwHQYDVR0OBBYEFB3bDAo0nVCY\nsPd95UOvAw2efZLEMB8GA1UdIwQYMBaAFNRUs/Xfykoi5Q6Z8L5aTguJPNxTMBkG\nA1UdEQQSMBCCDnd3dy53aXNlMmMuY29tMAoGCCqGSM49BAMCA0cAMEQCIGou6e5c\nhQR0E3+piwH7VDchIlFUvU3OEttxqPnwYUqSAiBOgjYLgbJH07nf2mYqbegRkmpY\nwSmMdvZBSHvLyw81lA==\n-----END CERTIFICATE-----\n"
      }
      root@test-9:~/henry#
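
      The chain can also be cross-checked with openssl (optional):

      openssl verify -CAfile ca.pem server.pem
      openssl x509 -in server.pem -noout -subject -dates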
      
    7. Copy the certificate into the system trust store and refresh it:

      chmod 600 *-key.pem
      cp ~/cfssl/ca.pem /etc/ssl/certs/
      
      update-ca-trust
      

Installation

Q: Set up the K8s master components from binaries / tarballs
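
A rough sketch of the shape of that task (flag sets heavily abbreviated; a real control plane needs many more):

wget https://dl.k8s.io/v1.7.7/kubernetes-server-linux-amd64.tar.gz
tar xzf kubernetes-server-linux-amd64.tar.gz && cd kubernetes/server/bin
./kube-apiserver --etcd-servers=http://127.0.0.1:2379 --service-cluster-ip-range=10.43.0.0/16 ... &
./kube-controller-manager --master=http://127.0.0.1:8080 ... &
./kube-scheduler --master=http://127.0.0.1:8080 ... &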

life-cycle

Q: Perform a rolling update on a Deployment, then roll it back

  • RollingUpdate (for a Deployment, kubectl set apparently only lets you change image, resources, selector, and subject)
[root@dev-7 henry]# kubectl run demo --image=nginx --port=80 --replicas=2 --labels="cka=true"
[root@dev-7 henry]#
[root@dev-7 henry]# kubectl get deploy
NAME                     DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
demo                     2         2         2            2           4m
[root@dev-7 henry]#
[root@dev-7 henry]# kubectl get pod -l cka=true
NAME                    READY     STATUS    RESTARTS   AGE
demo-2959463917-gbv3r   1/1       Running   0          1m
demo-2959463917-j76m9   1/1       Running   0          1m
[root@dev-7 henry]#
[root@dev-7 henry]# kubectl set --help
Configure application resources

These commands help you make changes to existing application resources.

Available Commands:
    image       Update image of a pod template
    resources   Update resource requests/limits on objects with pod templates
    selector    Set the selector on a resource
    subject     Update User, Group or ServiceAccount in a RoleBinding/ClusterRoleBinding
[root@dev-7 henry]#
[root@dev-7 henry]# kubectl set image deploy/demo demo=mysql
[root@dev-7 henry]#
[root@dev-7 henry]# kubectl rollout history deploy/demo
deployments "demo"
REVISION        CHANGE-CAUSE
1                <none>
2                <none>
[root@dev-7 henry]# kubectl rollout history deploy/demo --revision=2
deployments "demo" with revision #2
Pod Template:
    Labels:        cka=true
        pod-template-hash=2216264665
    Containers:
    demo:
    Image:        mysql
    Port:        80/TCP
    Environment:     
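
To complete the question, roll the Deployment back:

kubectl rollout undo deploy/demo                   # back to the previous revision
kubectl rollout undo deploy/demo --to-revision=1   # or to an explicit revision
kubectl rollout status deploy/demo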
                    