Today, Trivy was adopted by well-known container platforms: the Harbor registry and Mirantis Docker Enterprise. Trivy is an easy-to-use open-source vulnerability scanner for container images. It detects vulnerabilities not only in OS packages but also in application dependencies, e.g. npm (JavaScript) and gem (Ruby) packages.
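For illustration, a scan of a public image is a single command. This is a minimal sketch: it assumes Trivy is already installed, and the image name is arbitrary.

```shell
# Scan an image for vulnerabilities in OS packages and app dependencies,
# reporting only the higher severities. "alpine:3.10" is just an example.
trivy image --severity HIGH,CRITICAL alpine:3.10
```

Recent Trivy releases use the `image` subcommand; very early versions took the image name directly as the first argument.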
I have been using the Go language (Golang). I like Go because it has great official tools and it is easy to start developing and improving products. For example, Go has an official code formatter, gofmt, so I do not have to think about coding style while writing Go. Today I will introduce pprof, the official Go profiling tool, which can also visualize profiling data. It is very easy to use.
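As a taste of the workflow (a sketch; the package path and profile file name are placeholders), CPU-profiling a benchmark and inspecting the result takes only a couple of commands:

```shell
# Run benchmarks with CPU profiling enabled, writing the profile to cpu.out.
go test -run='^$' -bench=. -cpuprofile=cpu.out ./mypkg

# Text summary of the hottest functions.
go tool pprof -top cpu.out

# Interactive graph/flame-graph view in the browser.
go tool pprof -http=:8080 cpu.out
```

Long-running services can instead expose profiles over HTTP by importing the net/http/pprof package.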
Some of our front-end projects are hosted on Google Cloud Storage and cached by Cloudflare. Today I will introduce how to host a site with GCS and Cloudflare.
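In outline (a sketch assuming the Cloud SDK's gsutil is installed; the bucket and domain names are examples), publishing a static site to GCS looks like this:

```shell
# Create a bucket named after the site's domain and upload the build output.
gsutil mb -l us-central1 gs://www.example.com
gsutil rsync -r ./dist gs://www.example.com

# Make objects publicly readable and configure index/error pages.
gsutil iam ch allUsers:objectViewer gs://www.example.com
gsutil web set -m index.html -e 404.html gs://www.example.com
```

On the Cloudflare side, a proxied CNAME record for the domain pointing at c.storage.googleapis.com typically lets Cloudflare cache and serve the bucket's contents.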
I created a development environment in minikube. It took me some time to set things up so that volume data is kept on my own machine.
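One way to keep data across pod restarts is a hostPath PersistentVolume inside the minikube VM. The manifest below is a minimal sketch; the name, size, and path are examples.

```shell
# Write a minimal hostPath PersistentVolume manifest.
cat <<EOF > local-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/local-pv
EOF
```

Apply it with `kubectl apply -f local-pv.yaml`. To keep the data on the host machine itself (surviving VM rebuilds), `minikube mount $HOME/data:/data` can map a host directory into the VM first.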
The Go language has become popular around the world, and many well-known open-source products are written in it. Today I will introduce simple tips for creating a secure project in Go. Before starting, I recommend reading OWASP's (Open Web Application Security Project) Secure Coding Practices, even if your application is not a web application; it is one of the best books on secure coding in Go. Today I will cover tips that are not included in the OWASP Go guidelines.
Glossary

Kubeadm

What is it?

> kubeadm helps you bootstrap a minimum viable Kubernetes cluster that conforms to best practices. With kubeadm, your cluster should pass Kubernetes Conformance tests. Kubeadm also supports other cluster lifecycle functions, such as upgrades, downgrades, and managing bootstrap tokens.

source: Creating a single control-plane cluster with kubeadm

Spec

Before you begin:

- One or more machines running a deb/rpm-compatible OS, for example Ubuntu or CentOS
- 2 GB or more of RAM per machine. Any less leaves little room for your apps.
- 2 CPUs or more on the control-plane node
- Full network connectivity among all machines in the cluster. A public or private network is fine.

We only need three steps to create a cluster with 1 master node and 1 worker node:

1. run "kubeadm init" on the master node
2. install a CNI plugin on the master node ("kubectl apply")
3. run "kubeadm join --token=xxxx" on the worker node

Calico

> Calico is an open source networking and network security solution for containers, virtual machines, and native host-based workloads.

Calico is a popular CNI (Container Network Interface) plugin. CNI makes it easy to configure container networking when containers are created or destroyed. Calico has good performance, flexibility, and security.

source: Benchmark results of Kubernetes network plugins (CNI) over 10Gbit/s network

Overview

1. Create instances
2. Create firewalls
3. Install docker, kubelet, and kubeadm on each server
4. Initialize kubeadm on the first master node
5. Install Calico
6. Add the other master nodes to the control plane
7. Join worker nodes to the cluster

Notation in the code blocks below:

#  : comments on the following commands
#> : expected stdout
<xxx> : variables
1. Install docker

Run the following commands on each server.

```shell
# kubelet/kubernetes needs swap to be off
# https://github.com/kubernetes/kubernetes/issues/53533
sudo swapoff -a

# persist swapoff across reboots
sudo sh -c "cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness = 0
EOF"
sudo sysctl --system

# Set SELinux in permissive mode (effectively disabling it)
# to allow containers to access the host filesystem
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

# install docker and configure the docker daemon
sudo yum remove docker \
  docker-client \
  docker-client-latest \
  docker-common \
  docker-latest \
  docker-latest-logrotate \
  docker-logrotate \
  docker-engine
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
sudo yum-config-manager \
  --add-repo \
  https://download.docker.com/linux/centos/docker-ce.repo
sudo yum makecache fast
sudo yum update -y
sudo yum install -y docker-ce docker-ce-cli containerd.io

# the docker daemon should use the systemd cgroup driver
sudo mkdir /etc/docker
sudo sh -c 'cat <<EOF > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF'
sudo mkdir -p /etc/systemd/system/docker.service.d
sudo systemctl enable docker
sudo systemctl daemon-reload
sudo systemctl start docker
```
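If everything succeeded, a quick sanity check confirms the daemon is up and picked up the systemd cgroup driver configured above (this check is optional):

```shell
# the service should be running
sudo systemctl is-active docker
#> active

# the daemon should report the cgroup driver from daemon.json
sudo docker info --format '{{.CgroupDriver}}'
#> systemd
```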
2. Install kubeadm and kubelet

```shell
sudo sh -c "cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF"
sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
sudo mkdir /var/lib/kubelet

# run with systemd
# https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/kubelet-integration/#the-kubelet-drop-in-file-for-systemd
sudo sh -c 'cat <<EOF > /var/lib/kubelet/config.yaml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
cgroupDriver: "systemd"
EOF'
sudo mkdir /etc/systemd/kubelet.service.d
sudo sh -c 'cat <<EOF > /etc/systemd/kubelet.service.d/20-extra-args.conf
[Service]
Environment="KUBELET_EXTRA_ARGS=--fail-swap-on=false"
EOF'
sudo systemctl enable --now kubelet
sudo systemctl daemon-reload

# check the cgroup driver
sudo docker info | grep Cgroup
#> Cgroup Driver: systemd
```
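Optionally, verify the installed versions and that the kubelet service is registered. Note that kubelet will restart in a loop until "kubeadm init" runs; that is expected at this point.

```shell
# confirm the tool versions that yum installed
kubeadm version -o short
kubectl version --client

# confirm kubelet will start on boot
systemctl is-enabled kubelet
#> enabled
```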
3. Initialize kubeadm on the master nodes

On the 1st master node

```shell
# create a Load Balancer by opening port 6443
# 192.168.0.0/16 is used as the pod subnet for Calico
sudo sh -c 'cat <<EOF > kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: stable
apiServer:
  certSANs:
  - <MASTER_NODE>
controlPlaneEndpoint: "<MASTER_NODE>:6443"
networking:
  podSubnet: "192.168.0.0/16"
EOF'
sudo kubeadm reset --force
sudo kubeadm init --config=kubeadm-config.yaml
# Copy the displayed output to a notepad
#> kubeadm join <MASTER_NODE>:6443 --token xxxxxx --discovery-token-ca-cert-hash xxxxxx
```

"kubeadm init" runs the following steps:

1. preflight checks
2. start kubelet
3. generate configuration files
4. generate the manifest for etcd
5. start the control plane
6. labeling
7. install kube-proxy and CoreDNS

The next step is registering the credential information with kubectl.

```shell
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```

Now we can access <MASTER_NODE>:6443/api/.

Apply the Calico CNI

Next, install Calico so that pods on different servers can communicate. Calico is one of the most popular network plugins for Kubernetes.

```shell
# apply Calico and create the initial configuration files on the master1 server
# https://docs.projectcalico.org/v3.11/getting-started/kubernetes/installation/calico
kubectl apply -f https://docs.projectcalico.org/v3.11/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml
kubectl apply -f https://docs.projectcalico.org/v3.11/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml

# check the pods labeled "calico-node"
kubectl get pods -n kube-system -l k8s-app=calico-node
```

Copy the credentials and configuration files from the first master node to the other master nodes.

```shell
# Copy files to the other masters
CIPS="<master2ip> <master3ip>"
USER=<USERNAME>
for host in ${CIPS}; do
  scp /etc/kubernetes/pki/ca.crt "${USER}"@$host:
  scp /etc/kubernetes/pki/ca.key "${USER}"@$host:
  scp /etc/kubernetes/pki/sa.key "${USER}"@$host:
  scp /etc/kubernetes/pki/sa.pub "${USER}"@$host:
  scp /etc/kubernetes/pki/front-proxy-ca.crt "${USER}"@$host:
  scp /etc/kubernetes/pki/front-proxy-ca.key "${USER}"@$host:
  scp /etc/kubernetes/pki/etcd/ca.crt "${USER}"@$host:etcd-ca.crt
  scp /etc/kubernetes/pki/etcd/ca.key "${USER}"@$host:etcd-ca.key
  scp /etc/kubernetes/admin.conf "${USER}"@$host:
done
```

On the other master nodes

Move the copied credentials and configuration files into place, then run the "kubeadm join" command that was printed by "kubeadm init" on the 1st master node.

```shell
sudo mkdir -p /etc/kubernetes/pki/etcd
sudo mv -vi /home/$USER/ca.crt /etc/kubernetes/pki/
sudo mv -vi /home/$USER/ca.key /etc/kubernetes/pki/
sudo mv -vi /home/$USER/sa.key /etc/kubernetes/pki/
sudo mv -vi /home/$USER/sa.pub /etc/kubernetes/pki/
sudo mv -vi /home/$USER/front-proxy-ca.crt /etc/kubernetes/pki/
sudo mv -vi /home/$USER/front-proxy-ca.key /etc/kubernetes/pki/
sudo mv -vi /home/$USER/etcd-ca.crt /etc/kubernetes/pki/etcd/ca.crt
sudo mv -vi /home/$USER/etcd-ca.key /etc/kubernetes/pki/etcd/ca.key
sudo mv -vi /home/$USER/admin.conf /etc/kubernetes/admin.conf
sudo mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# join the cluster as a master node with the --control-plane option
# paste the command you copied earlier and append --control-plane
sudo kubeadm join <MASTER_URL>:6443 --token <COPIED_TOKEN> \
  --discovery-token-ca-cert-hash <COPIED_HASH> \
  --control-plane
```

Now we can check the iptables rules changed by Calico.

```shell
sudo iptables -L
```

Worker nodes

Worker nodes only need to run "kubeadm join".
```shell
sudo kubeadm join <MASTER_URL>:6443 --token <COPIED_TOKEN> \
  --discovery-token-ca-cert-hash <COPIED_HASH>
```

Run an application

```shell
kubectl create ns hello-test
kubectl create deployment hello-node --image=gcr.io/hello-minikube-zero-install/hello-node -n hello-test
kubectl expose deployment hello-node --type=LoadBalancer --port=80 -n hello-test
kubectl get po -n hello-test
#> NAME                          READY   STATUS    RESTARTS   AGE
#> hello-node-7676b5fb8d-xzkb8   1/1     Running   0          32s
kubectl get svc hello-node -n hello-test
#> NAME         TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)            AGE
#> hello-node   LoadBalancer   10.111.27.104   <pending>     80:<PODPORT>/TCP   27s
kubectl get po/<podname> -o jsonpath='{.status.hostIP}'
#> <HOSTIP>
curl <HOSTIP>:<PODPORT>
#> Hello, World!

# delete all of them
kubectl delete ns hello-test
```

Delete a worker node

On a master node:

```shell
kubectl cordon <node>
kubectl drain <node> --delete-local-data --force --ignore-daemonsets
kubectl delete node <node>
```

On the deleted node:

"kubectl delete node" does not clean up the iptables rules on the target node, so we need to reset them ourselves.

```shell
sudo kubeadm reset
sudo sh -c "iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X"
```
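As a final check after joining or deleting nodes, the node list on a master should reflect the change:

```shell
# on a master node: joined nodes appear with STATUS=Ready once the CNI is
# up, and a deleted node should no longer be listed
kubectl get nodes
```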
Helm is a popular package manager for Kubernetes. This article introduces what Helm is and how to host a private Helm repository.
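In outline (a sketch; the chart name, repository URL, and hosting location are placeholders), a chart repository is just an HTTP server that serves an index.yaml plus packaged charts:

```shell
# Package a chart and (re)generate the repository index.
helm package ./mychart
helm repo index . --url https://charts.example.com

# Upload index.yaml and the .tgz to that URL, then consume the repo
# (Helm 3 syntax shown):
helm repo add myrepo https://charts.example.com
helm repo update
helm search repo myrepo
```

Because the repository is static files, it can be hosted on almost anything, e.g. a GCS bucket or an internal web server, with access control handled at that layer.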