Димитар Георгиев

Provisioning Jenkins

Set your default compute zone:
gcloud config set compute/zone us-east1-d
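The $DEVSHELL_PROJECT_ID variable used further down is set automatically in Cloud Shell. If you are working from another terminal, you can point gcloud at your project explicitly (a small addition, not a lab step; YOUR_PROJECT_ID is a placeholder):
gcloud config set project YOUR_PROJECT_ID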
Then clone the lab's sample code:
$ git clone https://github.com/GoogleCloudPlatform/continuous-deployment-on-kubernetes.git

Now change to the correct directory:
$ cd continuous-deployment-on-kubernetes
Run the following command to provision a Kubernetes cluster:
gcloud container clusters create jenkins-cd \
--num-nodes 2 \
--machine-type n1-standard-2 \
--scopes "https://www.googleapis.com/auth/source.read_write,cloud-platform"
Once that operation completes, confirm that the cluster is running:
gcloud container clusters list

Now, get the credentials for your cluster:
gcloud container clusters get-credentials jenkins-cd

Kubernetes Engine uses these credentials to access your newly provisioned cluster. Confirm that you can connect to it by running the following command:
kubectl cluster-info
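As an extra check beyond the lab steps, you can also confirm that both worker nodes registered and report Ready:
kubectl get nodes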
/*.......................  Install Helm  .......................*/


Download and install the Helm binary:
wget https://storage.googleapis.com/kubernetes-helm/helm-v2.14.1-linux-amd64.tar.gz
Unzip the file in Cloud Shell:
tar zxfv helm-v2.14.1-linux-amd64.tar.gz
cp linux-amd64/helm .

Add yourself as a cluster administrator in the cluster's RBAC so that you can give Jenkins permissions in the cluster:
kubectl create clusterrolebinding cluster-admin-binding --clusterrole=cluster-admin --user=$(gcloud config get-value account)
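To verify the binding took effect, you can ask the API server whether your account is now allowed to do everything (an optional check, not part of the lab):
kubectl auth can-i '*' '*' --all-namespaces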

Grant Tiller, the server side of Helm, the cluster-admin role in 
your cluster:
kubectl create serviceaccount tiller --namespace kube-system
kubectl create clusterrolebinding tiller-admin-binding --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
Initialize Helm. This ensures that the server side of Helm (Tiller) 
is properly installed in your cluster.
./helm init --service-account=tiller
./helm repo update
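helm init deploys Tiller as a pod in the kube-system namespace (named tiller-deploy by default in Helm 2); before checking versions, you can confirm that it is running:
kubectl get pods -n kube-system -l name=tiller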
Ensure Helm is properly installed by running the following command:
$ ./helm version
/*..........................  Configure and Install Jenkins  ..........................*/

Use the Helm CLI to deploy the chart with your configuration settings.
./helm install -n cd stable/jenkins -f jenkins/values.yaml --version 1.2.2 --wait
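The --wait flag makes the command return only once the release's resources are ready. If you want to watch the Jenkins pod come up yourself, you can use the same labels the chart applies (the cd release name comes from the command above):
kubectl get pods -l "app.kubernetes.io/instance=cd" -w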

Configure the Jenkins service account to be able to deploy to the cluster.
$ kubectl create clusterrolebinding jenkins-deploy --clusterrole=cluster-admin --serviceaccount=default:cd-jenkins
Run the following command to set up port forwarding to the Jenkins UI from Cloud Shell:
export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/component=jenkins-master" -l "app.kubernetes.io/instance=cd" -o jsonpath="{.items[0].metadata.name}")
kubectl port-forward $POD_NAME 8080:8080 >> /dev/null &

Now, check that the Jenkins Service was created properly:
kubectl get svc
/* .................  Connect to Jenkins      ...............................*/

The Jenkins chart will automatically create an admin password for you. 
To retrieve it, run:
printf $(kubectl get secret cd-jenkins -o jsonpath="{.data.jenkins-admin-password}" | base64 --decode);echo

The command prints your admin password, for example:
F54YfesjFs
Deploying the Application
You will deploy the application into two different environments:

Production: The live site that your users access.
Canary: A smaller-capacity site that receives only a percentage of your
user traffic.
Use this environment to validate your software with live traffic before 
it's released to all of your users.
Change to the sample application directory:
cd sample-app
Create the Kubernetes namespace to logically isolate the deployment:
kubectl create ns production

Create the production and canary deployments and services using the kubectl apply commands:
kubectl apply -f k8s/production -n production
kubectl apply -f k8s/canary -n production
kubectl apply -f k8s/services -n production

Scale up the production environment frontends by running the following command:
kubectl scale deployment gceme-frontend-production -n production --replicas 4
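If you want to follow the scale-out until all four production frontend replicas are available, an optional check is:
kubectl rollout status deployment/gceme-frontend-production -n production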

Now confirm that you have 5 pods running for the frontend: 4 for production traffic and 1 for canary releases (changes to the canary release will only affect 1 out of 5, or 20%, of users):
kubectl get pods -n production -l app=gceme -l role=frontend
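The 20% split works because the gceme-frontend Service selects pods by the shared app=gceme and role=frontend labels (the same ones used in the pod query above), so the production and canary pods sit behind the same Service and receive traffic roughly in proportion to their replica counts (4 vs 1). You can inspect the selector yourself if you like:
kubectl get service gceme-frontend -n production -o jsonpath='{.spec.selector}'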
Also confirm that you have 2 pods for the backend, 1 for production
and 1 for canary:
kubectl get pods -n production -l app=gceme -l role=backend

Retrieve the external IP for the production services:
kubectl get service gceme-frontend -n production 
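The EXTERNAL-IP column may show pending for a minute or two while the Cloud load balancer is provisioned; you can watch until an address appears:
kubectl get service gceme-frontend -n production -w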

Now, store the frontend service load balancer IP in an environment 
variable for use later:
export FRONTEND_SERVICE_IP=$(kubectl get -o jsonpath="{.status.loadBalancer.ingress[0].ip}" --namespace=production services gceme-frontend)

Confirm that both services are working by hitting the version endpoint:
curl http://$FRONTEND_SERVICE_IP/version
Creating the Jenkins Pipeline
Creating a repository to host the sample app source code
Create a copy of the gceme sample app and push it to a 
Cloud Source Repository:
gcloud source repos create default

Initialize the sample-app directory as its own Git repository:
git init
git config credential.helper gcloud.sh

Add the newly created repository as a remote:
git remote add origin https://source.developers.google.com/p/$DEVSHELL_PROJECT_ID/r/default
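To double-check that the remote was registered correctly:
git remote -v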

Set the username and email address for your Git commits.
Replace [EMAIL_ADDRESS] with your Git email address and 
[USERNAME] 
with your Git username:
git config --global user.email "dim.georgiev1976@yandex.com"

git config --global user.name "dimgeorgiev1976"

Add, commit, and push the files:
git add .
git commit -m "Initial commit"
git push origin master 

Create a development branch:
git checkout -b new-feature

Open the Jenkinsfile in your terminal editor, for example vi:
vi Jenkinsfile
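The lab typically has you set your own project ID in this file. If your copy of the Jenkinsfile uses a placeholder for it (REPLACE_WITH_YOUR_PROJECT_ID is only an assumption here, so check the file first), you could also substitute it non-interactively from Cloud Shell:
# REPLACE_WITH_YOUR_PROJECT_ID is an assumed placeholder - verify the actual name in the Jenkinsfile
sed -i "s/REPLACE_WITH_YOUR_PROJECT_ID/${DEVSHELL_PROJECT_ID}/g" Jenkinsfile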
/*...................... Deploying a Canary Release ...........................*/

Create a canary branch and push it to the Git server:
git checkout -b canary 
git push origin canary

Once the canary build completes, check the service URL to ensure that some of the traffic is being served by your new version, 2.0.0:
export FRONTEND_SERVICE_IP=$(kubectl get -o jsonpath="{.status.loadBalancer.ingress[0].ip}" --namespace=production services gceme-frontend)
while true; do curl http://$FRONTEND_SERVICE_IP/version; sleep 1; done

You should see about 1 in 5 requests (the canary's 20% share) returning version 2.0.0.
/*...................... Deploying to Production ......................*/

git checkout master
git merge canary
git push origin master

Check the service URL to ensure that all of the traffic is being served by your new version, 2.0.0:
export FRONTEND_SERVICE_IP=$(kubectl get -o jsonpath="{.status.loadBalancer.ingress[0].ip}" --namespace=production services gceme-frontend)
while true; do curl http://$FRONTEND_SERVICE_IP/version; sleep 1; done

Here's the command again to get the external IP address so you can check
it out:
kubectl get service gceme-frontend -n production


 
