Димитар Георгиев

Managing Deployments Using Kubernetes Engine



# Storing Docker images in Google Container Registry (GCR)

$ gcloud config list project
$ gcloud config set compute/zone us-central1-a
Examine the Deployment object::
$ kubectl explain deployment

We can also see all of the fields using the --recursive option::
$ kubectl explain deployment --recursive

Use the explain command as you go to understand what individual fields do::
$ kubectl explain deployment.metadata.name

Update the deployments/auth.yaml file::
$ vi deployments/auth.yaml

Examine the deployment configuration file::
$ cat deployments/auth.yaml

Create your deployment object using kubectl create::
$ kubectl create -f k8s/deployment.yaml
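For reference, a deployment manifest of this kind generally looks like the following sketch (the image name and port are illustrative assumptions, not the lab's exact values):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth
  template:
    metadata:
      labels:
        app: auth
    spec:
      containers:
      - name: auth
        image: example/auth:1.0.0   # placeholder image
        ports:
        - containerPort: 80
```

The spec.template labels must match spec.selector.matchLabels, or the Deployment will be rejected.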

Verify that it was created:: $ kubectl get deployments
Verify that a ReplicaSet was created:: $ kubectl get replicasets
Verify that a Pod was created:: $ kubectl get pods
Create the auth service:
$ kubectl create -f k8s/service.yaml
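A minimal service manifest for the auth Deployment might look like this sketch (port numbers are assumptions):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: auth
spec:
  selector:
    app: auth        # routes traffic to Pods carrying this label
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
```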

Repeat the process once more to create and expose the frontend Deployment:
$ kubectl create secret generic tls-certs --from-file tls/
$ kubectl create configmap nginx-frontend-conf --from-file=nginx/frontend.conf
$ kubectl create -f deployments/frontend.yaml
$ kubectl create -f services/frontend.yaml
Interact with the frontend by grabbing its external IP and curling it:
$ kubectl get services frontend
$ curl -ks https://<EXTERNAL-IP>

The same curl as a one-liner::
$ curl -ks https://`kubectl get svc frontend -o=jsonpath="{.status.loadBalancer.ingress[0].ip}"`

Scale a Deployment by updating the spec.replicas field.
The easiest way is with the kubectl scale command::
$ kubectl scale deployment valkyrie-dev --replicas=5

Verify that there are now 5 valkyrie-dev Pods running:: $ kubectl get pods | grep valkyrie-dev | wc -l

To update your Deployment, run the following command::
$ kubectl edit deployment valkyrie-dev

See the new ReplicaSet that Kubernetes creates:: $ kubectl get replicaset

# troubleshooting

$ kubectl describe pod
$ docker pull valkyrie-dev

Add the registry credential Secret to Kubernetes. First log in to Docker; when prompted, enter your Docker username and password::
$ docker login

View the config.json file:: cat ~/.docker/config.json
To remove any doubt, try the full path instead of ~:

$ kubectl create secret generic regcred \
    --from-file=.dockerconfigjson=/home/auser/.docker/config.json \
    --type=kubernetes.io/dockerconfigjson

Alternatively, create the Secret directly from the credentials::
$ kubectl create secret docker-registry regcred \
    --docker-server=https://index.docker.io/v1/ \
    --docker-username=qwiklabs \
    --docker-password=0#Cr4!1At

The same command can be scoped to a namespace::
$ kubectl --namespace=<namespace> \
    create secret docker-registry regcred \

Inspecting the Secret regcred:: $ kubectl get secret regcred --output=yaml

Add a reference to the Secret in your Pod definition under imagePullSecrets::
  imagePullSecrets:
  - name: regcred


Download the example Pod manifest from the Kubernetes docs::

wget -O my-private-reg-pod.yaml https://k8s.io/examples/pods/private-reg-pod.yaml
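The downloaded manifest is, roughly, a Pod spec that pulls a private image using the regcred Secret (the image name is a placeholder, as in the Kubernetes docs example):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: private-reg
spec:
  containers:
  - name: private-reg-container
    image: <your-private-image>   # placeholder: replace with your private image
  imagePullSecrets:
  - name: regcred                 # the Secret created above
```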


You can also see a new entry in the rollout history:
$ kubectl rollout history deployment/valkyrie-dev

If you detect problems with a running rollout, pause it to stop the update::
$ kubectl rollout pause deployment/valkyrie-dev
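Rollout pacing is controlled by the Deployment's strategy stanza; a typical configuration looks like the following fragment (the values here are illustrative, not the lab's):

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # at most 1 extra Pod above spec.replicas during the update
      maxUnavailable: 0    # keep full serving capacity while rolling
```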

Verify the current state of the rollout::
$ kubectl rollout status deployment/hello

You can also verify this on the Pods directly::
$ kubectl get pods -o jsonpath --template='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].image}{"\n"}{end}'

We can continue the rollout using the resume command::
$ kubectl rollout resume deployment/hello

Use the rollout command to roll back to the previous version:
$ kubectl rollout undo deployment/hello

Verify the roll back in the history::
$ kubectl rollout history deployment/hello 

Finally, verify that all the Pods have rolled back to their previous versions::
$ kubectl get pods -o jsonpath --template='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].image}{"\n"}{end}'



Push the Docker image to the Container Registry.
project = qwiklabs-gcp-02-c4528486fc14

$ docker tag valkyrie-app:v0.0.1 gcr.io/qwiklabs-gcp-02-c4528486fc14/valkyrie-app:v0.0.1

$ docker push gcr.io/qwiklabs-gcp-02-c4528486fc14/valkyrie-app:v0.0.1
Create and expose a deployment in Kubernetes::

$ sed -i s#valkyrie-dev#gcr.io/qwiklabs-gcp-03-9bc0a9233e9f/valkyrie-app:v0.0.1#g k8s/deployment.yaml
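The sed command above swaps the plain image name in the manifest for the full GCR path. A self-contained sketch of the same substitution (using a scratch file and a placeholder project ID, "my-project" — not the lab's real one):

```shell
# Write a minimal manifest fragment containing the bare image name.
cat > /tmp/deployment-demo.yaml <<'EOF'
spec:
  containers:
  - name: app
    image: valkyrie-dev
EOF

# Same pattern as the lab command; '#' is used as the sed delimiter
# because the replacement string contains slashes.
sed -i 's#valkyrie-dev#gcr.io/my-project/valkyrie-app:v0.0.1#g' /tmp/deployment-demo.yaml

# Show the rewritten image line.
grep 'image:' /tmp/deployment-demo.yaml
```

After the substitution, the image: line points at the full GCR path instead of the bare name.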

Get the Kubernetes credentials before you deploy the image onto the Kubernetes cluster::
$ gcloud container clusters get-credentials valkyrie-dev --zone=us-east1-d
# Canary deployments

Canary deployments allow you to release a change to a small subset of your users,
mitigating the risk associated with new releases. A canary deployment consists of
a separate deployment running your new version, plus a service that targets both
your normal, stable deployment and your canary deployment.

First, examine the manifest for the new canary deployment:
$ cat deployments/hello-canary.yaml
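The canary manifest typically looks something like the following sketch — a low-replica Deployment sharing the service's app: hello label but carrying an extra track: canary label (the image and version values here are assumptions):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-canary
spec:
  replicas: 1                  # small subset of total traffic
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello             # matched by the shared hello service
        track: canary          # distinguishes canary Pods from stable ones
    spec:
      containers:
      - name: hello
        image: example/hello:2.0.0   # placeholder new version
```

Because the service selects only on app: hello, a fraction of requests proportional to the replica counts lands on the canary Pods.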

Now create the canary deployment:
$ kubectl create -f deployments/hello-canary.yaml

Verify it with this kubectl command::
$ kubectl get deployments 

Make sure the version is set to 1.0.0 (update it if it points to any other version)::
$ kubectl edit deployment hello-canary 

# Blue-green deployments
Use the existing hello service, but update it so that its selector is app: hello with version: 1.0.0.
First update the service:
$ kubectl apply -f services/hello-blue.yaml
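The "blue" service manifest pins traffic to a specific version via the selector; a sketch of what services/hello-blue.yaml plausibly contains (ports are assumptions):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: hello
spec:
  selector:
    app: hello
    version: 1.0.0    # pins the service to the blue (current) version
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
```

The green variant would be identical except for version: 2.0.0, so cutting over is just re-applying the service with the other selector.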

Create the green deployment::
$ kubectl create -f deployments/hello-green.yaml
Once the green deployment exists and has started up properly, verify
that the current version, 1.0.0, is still being served:
$ curl -ks https://`kubectl get svc frontend -o=jsonpath="{.status.loadBalancer.ingress[0].ip}"`
Now, update the service to point to the new version:
$ kubectl apply -f services/hello-green.yaml
Blue-green rollback: to roll back, re-apply the blue service::
$ kubectl apply -f services/hello-blue.yaml


