kubernetes dashboard install

install dashboard

1. Reference: [GitHub - kubernetes/dashboard: General-purpose web UI for Kubernetes clusters](https://github.com/kubernetes/dashboard)

2. Install the dashboard:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
3. Start the proxy:
kubectl proxy
If the proxy fails to start, check whether the dashboard pod is running (a quick check is sketched after these steps).
4. Open the URL:
http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
5. Generate a token (reference: https://github.com/kubernetes/dashboard/blob/master/docs/user/access-control/creating-sample-user.md):
kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')
6. Enter the token on the dashboard login page.
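If the pod check in step 3 fails, or the admin-user secret in step 5 does not exist yet, the following is a minimal sketch and not part of the original steps. It assumes the dashboard pod lives in kube-system (where the v1.10.1 manifest deploys it) and follows the admin-user / cluster-admin setup from the creating-sample-user.md doc referenced above; names and namespace may need adjusting for your install:

# check that the dashboard pod is actually running (v1.10.1 deploys into kube-system)
kubectl get pods -n kube-system | grep kubernetes-dashboard
# minimal sample user per the referenced doc (assumed names/namespace;
# create the namespace first if it does not exist: kubectl create namespace kubernetes-dashboard)
kubectl create serviceaccount admin-user -n kubernetes-dashboard
kubectl create clusterrolebinding admin-user --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:admin-user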

k8s

helm install istio

Prerequisites
1. docker-desktop for Mac is already installed
2. The Kubernetes environment is already deployed
3. Helm is already installed
4. Network access is working (i.e. you can get past the GFW)
5. Reference: ([Istio / Traffic Management](https://istio.io/docs/concepts/traffic-management/))

Install with helm
1. Create a namespace for the istio-system components:
➜ kubectl version
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:40:16Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.8", GitCommit:"211047e9a1922595eaa3a1127ed365e9299a6c23", GitTreeState:"clean", BuildDate:"2019-10-15T12:02:12Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}
➜ kubectl create namespace istio-system
namespace/istio-system created
2. Install all the Istio [Custom Resource Definitions](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/#customresourcedefinitions) (CRDs) using kubectl apply:
➜ code git clone https://github. [Read More]
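The excerpt above stops at the git clone. As a rough sketch of how the CRD and chart installation typically continued with Helm 2 on an Istio 1.x release checkout (the chart paths and the istio-init/istio release names come from the Istio docs of that era and may differ for your version):

# from the root of the downloaded Istio 1.x release
helm install install/kubernetes/helm/istio-init --name istio-init --namespace istio-system
# wait for the CRD-creating jobs to finish (the expected count depends on the Istio version)
kubectl get crds | grep 'istio.io' | wc -l
# then install the main chart with the default profile
helm install install/kubernetes/helm/istio --name istio --namespace istio-system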

Kubernetes troubleshooting: Orphaned pod

Check the logs
[root@iZbp1c4tqwd9kaykklejmxZ ~]# journalctl -u kubelet -f
-- Logs begin at Fri 2019-10-04 11:50:57 CST. --
Nov 01 13:28:21 iZbp1c4tqwd9kaykklejmxZ kubelet[9731]: E1101 13:28:21.575137 9731 kubelet_volumes.go:154] Orphaned pod "bb5c8fa3-b4b1-11e9-96a8-0a1877e1c33d" found, but volume paths are still present on disk : There were a total of 1 errors similar to this. Turn up verbosity to see them.
Nov 01 13:28:22 iZbp1c4tqwd9kaykklejmxZ kubelet[9731]: W1101 13:28:22.826291 9731 reflector.go:270] object-"monitoring"/"grafana-dashboard-statefulset": watch of *v1.ConfigMap ended with: too old resource version: 2345008003 (2345014010)
Nov 01 13:28:23 iZbp1c4tqwd9kaykklejmxZ kubelet[9731]: E1101 13:28:23. [Read More]
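The kubelet message names the pod UID whose volume directories were left behind on disk. A hedged cleanup sketch, not taken from the original post (/var/lib/kubelet is the default kubelet root dir; confirm the pod is truly gone before deleting anything):

# inspect the leftover volume directories for the UID from the log
ls /var/lib/kubelet/pods/bb5c8fa3-b4b1-11e9-96a8-0a1877e1c33d/volumes
# make sure no live pod still has this UID
kubectl get pods --all-namespaces -o custom-columns=UID:.metadata.uid | grep bb5c8fa3-b4b1-11e9-96a8-0a1877e1c33d
# only if nothing matches: remove the orphaned directory, then watch journalctl again
rm -rf /var/lib/kubelet/pods/bb5c8fa3-b4b1-11e9-96a8-0a1877e1c33d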

Kubernetes troubleshooting: pod migration

Relevant commands
# list nodes
kubectl get nodes
# mark a node as unschedulable
kubectl cordon <NodeName>
# mark a node as schedulable again
kubectl uncordon <NodeName>
# evict pods so they drift to schedulable nodes
kubectl drain <unschedulable node that still has pods> --force --ignore-daemonsets

In practice, mark "cn-hangzhou.i-bp1bt6np98dbi2xmno0o,cn-hangzhou.i-bp1bt6np98dbi2xmno0p" as unschedulable:
# list the nodes
kube-shell> kubectl get nodes
NAME STATUS ROLES AGE VERSION
cn-hangzhou.i-bp1bt6np98dbi2xmno0o Ready <none> 24d v1.12.6-aliyun.1
cn-hangzhou.i-bp1bt6np98dbi2xmno0p Ready <none> 24d v1.12.6-aliyun.1
cn-hangzhou.i-bp1cu5nvk55l2usiufx5 Ready <none> 35m v1.12.6-aliyun.1
kube-shell>
# mark the nodes as unschedulable
kube-shell> kubectl cordon cn-hangzhou.i-bp1bt6np98dbi2xmno0o
node/cn-hangzhou.i-bp1bt6np98dbi2xmno0o cordoned
kube-shell> kubectl cordon cn-hangzhou.i-bp1bt6np98dbi2xmno0p
node/cn-hangzhou.i-bp1bt6np98dbi2xmno0p cordoned
kube-shell>

Use kubectl drain to move the pods off the unschedulable nodes
# check the nodes after the scheduling change; two of them are now unschedulable
kube-shell> kubectl get nodes
NAME STATUS ROLES AGE VERSION
cn-hangzhou. [Read More]
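Not part of the original excerpt, but a small follow-up sketch that often completes this workflow (node names reused from above; skip the uncordon step if the nodes are being decommissioned):

# confirm nothing except DaemonSet pods is left on the drained nodes
kubectl get pods --all-namespaces -o wide | grep cn-hangzhou.i-bp1bt6np98dbi2xmno0o
# once maintenance is finished, make the nodes schedulable again
kubectl uncordon cn-hangzhou.i-bp1bt6np98dbi2xmno0o
kubectl uncordon cn-hangzhou.i-bp1bt6np98dbi2xmno0p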

Docker for Mac install Kubernetes

install Docker for Mac
Install it with brew (note: built-in Kubernetes support only appeared in Docker 18.06 and later):
brew cask install docker
Or download the docker.dmg from the official site: https://store.docker.com/editions/community/docker-ce-desktop-mac
After installation, sign in with your Docker account to start it (screenshot of the post-install UI omitted).
Set the official Docker China registry mirror: https://registry.docker-cn.com
Because of the GFW, pre-pull the images Kubernetes needs from the Aliyun mirrors; otherwise, after enabling Kubernetes, it will stay stuck on "Kubernetes is starting".
➜ images cat images
k8s.gcr.io/kube-proxy-amd64:v1.10.3=registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy-amd64:v1.10.3
k8s.gcr.io/kube-controller-manager-amd64:v1.10.3=registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager-amd64:v1.10.3
k8s.gcr.io/kube-scheduler-amd64:v1.10.3=registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler-amd64:v1.10.3
k8s.gcr.io/kube-apiserver-amd64:v1.10.3=registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver-amd64:v1.10.3
k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.8=registry.cn-hangzhou.aliyuncs.com/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.8
k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.8=registry.cn-hangzhou.aliyuncs.com/google_containers/k8s-dns-sidecar-amd64:1.14.8
k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.8=registry.cn-hangzhou.aliyuncs.com/google_containers/k8s-dns-kube-dns-amd64:1.14.8
k8s.gcr.io/pause-amd64:3.1=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.1
k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0=registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.0
k8s.gcr.io/etcd-amd64:3.1.12=registry.cn-hangzhou.aliyuncs.com/google_containers/etcd-amd64:3.1.12
#gcr.io/kubernetes-helm/tiller:v2.10.0=registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.10.0
➜ images cat load_images.sh
#!/bin/bash
file="images"
if [ -f "$file" ]
then
    echo "$file found."
    while IFS='=' read -r key value
    do
        #echo "${key}=${value}"
        docker pull ${value}
        docker tag ${value} ${key}
        docker rmi ${value}
    done < "$file"
else
    echo "$file not found. [Read More]
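Since the listing cuts the script off, here is a self-contained sketch of the same pull-tag-untag loop; it is an assumed reconstruction rather than the original file, reading the images mapping shown above and skipping comment lines:

#!/bin/bash
# pull each image from the Aliyun mirror, retag it with its k8s.gcr.io name, drop the mirror tag
file="images"
if [ -f "$file" ]; then
    while IFS='=' read -r key value; do
        case "$key" in "#"*|"") continue ;; esac
        docker pull "${value}"
        docker tag "${value}" "${key}"
        docker rmi "${value}"
    done < "$file"
else
    echo "$file not found."
fi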

Kubernetes REST API

Start the proxy
➜ _posts kubectl proxy --port=8080
Starting to serve on 127.0.0.1:8080
Query the API server
➜ http 127.0.0.1:8080
HTTP/1.1 200 OK
Content-Length: 2644
Content-Type: application/json
Date: Tue, 15 Jan 2019 05:19:42 GMT
{ "paths": [ "/api", "/api/v1", "/apis", "/apis/", "/apis/admissionregistration.k8s.io", "/apis/admissionregistration.k8s.io/v1beta1", "/apis/apiextensions.k8s.io", "/apis/apiextensions.k8s.io/v1beta1", "/apis/apiregistration.k8s.io", "/apis/apiregistration.k8s.io/v1", "/apis/apiregistration.k8s.io/v1beta1", "/apis/apps", "/apis/apps/v1", "/apis/apps/v1beta1", "/apis/apps/v1beta2", "/apis/authentication.k8s.io", "/apis/authentication.k8s.io/v1", "/apis/authentication.k8s.io/v1beta1", "/apis/authorization.k8s.io", "/apis/authorization.k8s.io/v1", "/apis/authorization.k8s.io/v1beta1", "/apis/autoscaling", "/apis/autoscaling/v1", "/apis/autoscaling/v2beta1", "/apis/batch", "/apis/batch/v1", "/apis/batch/v1beta1", "/apis/ceph.rook.io", "/apis/ceph.rook.io/v1", "/apis/certificates.k8s.io", "/apis/certificates.k8s.io/v1beta1", "/apis/compose.docker.com", "/apis/compose.docker.com/v1beta1", "/apis/compose.docker.com/v1beta2", "/apis/events.k8s.io", "/apis/events.k8s.io/v1beta1", "/apis/extensions", "/apis/extensions/v1beta1", "/apis/networking. [Read More]
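With the proxy running, individual resources are reachable the same way; a short sketch (the httpie `http` client from the excerpt is assumed, plain curl works just as well):

# list pods in the default namespace through the local proxy
http 127.0.0.1:8080/api/v1/namespaces/default/pods
# the same kind of call with curl
curl -s http://127.0.0.1:8080/api/v1/namespaces/kube-system/services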