Blog & Insights

Tail Logs from Multiple K8 Pods

March 9, 2023

By: Saikrishna Madupu – Sr. DevOps Engineer

This article reviews how to tail logs from multiple Kubernetes pods using Stern.

Kubernetes (K8s) is a scalable container orchestrator. It is lightweight enough to support IoT appliances, yet it can also handle huge business systems with hundreds of applications and hosts.

Stern is a tool for tailing multiple Kubernetes pods and the containers that make up each pod. To facilitate faster debugging, each result is color-coded.

Because the query is a regular expression, pod names can be filtered easily and the exact ID is not required: for instance, you can omit the deployment ID. When a pod is deleted, it is removed from the tail, and when a new pod is added, it is automatically tailed.
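
Conceptually, this works like matching an extended regular expression against the pod list, much as grep would (the pod names below are made-up examples):

```shell
# Simulated pod list; stern matches its query against names like these.
pods='nginx-cd55c47f5-gwtkn
nginx-cd55c47f5-bm55t
redis-7d9c5b5bf-x2l8q'

# "nginx" is enough -- no need to spell out the replicaset/pod suffix.
printf '%s\n' "$pods" | grep -E 'nginx'
```

So a query of just `nginx` matches every nginx pod in the namespace, and keeps matching as pods come and go.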

Stern can tail all of the containers in a pod instead of having to tail each one manually. You can specify the container flag to limit which containers are displayed; by default, all containers are tailed.

Deploying an nginx service (nginx-svc.yaml):
kind: Service
apiVersion: v1
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  selector:
    app: nginx
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 80
          protocol: TCP
kubectl apply --filename nginx-svc.yaml -n keyva

Output:
service/nginx unchanged
deployment.apps/nginx created

We can verify that the service and pods are up and running:
kubectl get all -n keyva
NAME                        READY   STATUS    RESTARTS   AGE
pod/nginx-cd55c47f5-gwtkn   1/1     Running   0          12s

NAME            TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
service/nginx   ClusterIP   10.96.58.31   <none>        80/TCP    88d

NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx   1/1     1            1           12s

NAME                              DESIRED   CURRENT   READY   AGE
replicaset.apps/nginx-cd55c47f5   1         1         1       12s

kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
nginx-cd55c47f5-gwtkn   1/1     Running   0          30s

kubectl has limits:

Using a label selector, kubectl can read logs from multiple pods, but this technique has a drawback.

The reason is that --follow streams logs through the API server. You open one connection to the API server per pod, and it in turn opens a connection to the associated kubelet to stream logs continually. This does not scale well and results in many incoming and outgoing connections to the API server, so restricting the number of concurrent connections became a design decision.

Using Stern:

The command is fairly straightforward: Stern retrieves the logs for the specified application from the given namespace. With Stern, you can view not only logs from a single Kubernetes object, such as a deployment or service, but also logs from all related objects. Example:

stern -n keyva nginx                           
+ nginx-cd55c47f5-86ql5 › nginx
+ nginx-cd55c47f5-bm55t › nginx
+ nginx-cd55c47f5-gwtkn › nginx
nginx-cd55c47f5-gwtkn nginx /docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
nginx-cd55c47f5-gwtkn nginx /docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
nginx-cd55c47f5-gwtkn nginx /docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
nginx-cd55c47f5-gwtkn nginx 10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
nginx-cd55c47f5-gwtkn nginx 10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
nginx-cd55c47f5-gwtkn nginx /docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
nginx-cd55c47f5-gwtkn nginx /docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
nginx-cd55c47f5-gwtkn nginx /docker-entrypoint.sh: Configuration complete; ready for start up
nginx-cd55c47f5-gwtkn nginx 2023/01/17 10:42:19 [notice] 1#1: using the "epoll" event method
nginx-cd55c47f5-gwtkn nginx 2023/01/17 10:42:19 [notice] 1#1: nginx/1.23.3
nginx-cd55c47f5-gwtkn nginx 2023/01/17 10:42:19 [notice] 1#1: built by gcc 10.2.1 20210110 (Debian 10.2.1-6) 
nginx-cd55c47f5-gwtkn nginx 2023/01/17 10:42:19 [notice] 1#1: OS: Linux 5.10.124-linuxkit
nginx-cd55c47f5-gwtkn nginx 2023/01/17 10:42:19 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576
nginx-cd55c47f5-gwtkn nginx 2023/01/17 10:42:19 [notice] 1#1: start worker processes
nginx-cd55c47f5-gwtkn nginx 2023/01/17 10:42:19 [notice] 1#1: start worker process 35
nginx-cd55c47f5-gwtkn nginx 2023/01/17 10:42:19 [notice] 1#1: start worker process 36
nginx-cd55c47f5-gwtkn nginx 2023/01/17 10:42:19 [notice] 1#1: start worker process 37
nginx-cd55c47f5-gwtkn nginx 2023/01/17 10:42:19 [notice] 1#1: start worker process 38
nginx-cd55c47f5-bm55t nginx /docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
nginx-cd55c47f5-bm55t nginx /docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
nginx-cd55c47f5-bm55t nginx /docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
nginx-cd55c47f5-bm55t nginx 10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
nginx-cd55c47f5-bm55t nginx 10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
nginx-cd55c47f5-bm55t nginx /docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
nginx-cd55c47f5-bm55t nginx /docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
nginx-cd55c47f5-bm55t nginx /docker-entrypoint.sh: Configuration complete; ready for start up
nginx-cd55c47f5-bm55t nginx 2023/01/17 10:47:26 [notice] 1#1: using the "epoll" event method
nginx-cd55c47f5-bm55t nginx 2023/01/17 10:47:26 [notice] 1#1: nginx/1.23.3
nginx-cd55c47f5-bm55t nginx 2023/01/17 10:47:26 [notice] 1#1: built by gcc 10.2.1 20210110 (Debian 10.2.1-6) 
nginx-cd55c47f5-86ql5 nginx /docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
nginx-cd55c47f5-86ql5 nginx /docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
nginx-cd55c47f5-bm55t nginx 2023/01/17 10:47:26 [notice] 1#1: OS: Linux 5.10.124-linuxkit
nginx-cd55c47f5-bm55t nginx 2023/01/17 10:47:26 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576
nginx-cd55c47f5-bm55t nginx 2023/01/17 10:47:26 [notice] 1#1: start worker processes
nginx-cd55c47f5-86ql5 nginx /docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
nginx-cd55c47f5-bm55t nginx 2023/01/17 10:47:26 [notice] 1#1: start worker process 36
nginx-cd55c47f5-86ql5 nginx 10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
nginx-cd55c47f5-bm55t nginx 2023/01/17 10:47:26 [notice] 1#1: start worker process 37
nginx-cd55c47f5-86ql5 nginx 10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
nginx-cd55c47f5-86ql5 nginx /docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
nginx-cd55c47f5-bm55t nginx 2023/01/17 10:47:26 [notice] 1#1: start worker process 38
nginx-cd55c47f5-bm55t nginx 2023/01/17 10:47:26 [notice] 1#1: start worker process 39
nginx-cd55c47f5-86ql5 nginx /docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh

If you want to run Stern inside Kubernetes pods, you need to create the following ClusterRole and bind it to a ServiceAccount.

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: stern
rules:
- apiGroups: [""]
  resources: ["pods", "pods/log"]
  verbs: ["get", "watch", "list"]
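
To complete the setup, bind that ClusterRole to the ServiceAccount the pod runs under. A minimal sketch (the ServiceAccount name and namespace here are assumptions; adjust them to your cluster):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: stern
  namespace: keyva
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: stern
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: stern
subjects:
- kind: ServiceAccount
  name: stern
  namespace: keyva
```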

Stern supports customizing its log output. Using the --output flag, you can choose from the following predefined templates:

output    description
default   Displays the namespace, pod and container, and decorates it with color depending on --color
raw       Only outputs the log message itself; useful when your logs are JSON and you want to pipe them to jq
json      Marshals the log struct to JSON; useful for programmatic purposes

Stern also takes a custom template through the --template flag, which is compiled into a Go template and applied to each log message. The following struct is passed to this Go template:

property        type     description
Message         string   The log message itself
NodeName        string   The node name the pod is scheduled on
Namespace       string   The namespace of the pod
PodName         string   The name of the pod
ContainerName   string   The name of the container

In addition to the built-in functions, the template includes the following functions:

func        arguments            description
json        object               Marshal the object and output it as JSON text
color       color.Color, string  Wrap the text in color (.ContainerColor and .PodColor provided)
parseJSON   string               Parse a string as JSON
extjson     string               Parse the object as JSON and output colorized JSON
ppextjson   string               Parse the object as JSON and output pretty-printed, colorized JSON

Kubernetes can add complexity, and software developers need logs quickly to fix problems. If you are using Kubernetes and have access to view logs on your cluster, set up your CLI with a few aliases and start tailing logs from your apps in real time.
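
As a sketch of such aliases, here are two wrapper functions (the klog/kraw names are our own invention, not part of stern; pick whatever suits your workflow):

```shell
# Tail an app's logs in a namespace with one short command.
klog() { stern --namespace "${1:?usage: klog <ns> [query]}" "${2:-.}"; }

# Same, but raw output -- handy for piping JSON logs to jq.
kraw() { stern --namespace "${1:?usage: kraw <ns> [query]}" --output raw "${2:-.}"; }

# Example: klog keyva nginx
```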


Reloader - K8 - Rapid - Rollouts

February 28, 2023

By: Saikrishna Madupu – Sr. DevOps Engineer

Reloader is a Kubernetes tool that automatically reloads configuration in running containers when a change is detected. This is useful for rolling out updated configuration without manually restarting your application. This blog walks through setting up Reloader in Kubernetes, using a MySQL deployment as an example: annotations on the deployment enable Reloader, and the deployment YAML watches for changes to its secrets.


Installation:

• kubectl:
  kubectl apply -f https://raw.githubusercontent.com/stakater/Reloader/master/deployments/kubernetes/reloader.yaml
• Helm:
  helm repo add stakater https://stakater.github.io/stakater-charts
  helm repo update
  helm install reloader stakater/reloader

Notes:

By default, Reloader is deployed in the default namespace and monitors all namespaces for changes to secrets and configmaps.
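
Besides the blanket reloader.stakater.com/auto annotation used below, the Reloader documentation also describes resource-specific annotations that restart a workload only when named ConfigMaps or Secrets change. A sketch (the mysql-config name is hypothetical):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
  annotations:
    # Restart only when these specific resources change:
    secret.reloader.stakater.com/reload: "demo-secret"
    configmap.reloader.stakater.com/reload: "mysql-config"
# ...rest of the Deployment spec unchanged
```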

Example:

Create a secret for the MySQL DB:

secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: demo-secret
  annotations:
    reloader.stakater.com/auto: "true"
data:
  password: dGVzdGluZzEyMzQK


kubectl apply -f secret.yaml

kubectl describe secret demo-secret
Name:         demo-secret
Namespace:    keyva
Labels:       <none>
Annotations:  reloader.stakater.com/auto: true

Type:  Opaque

Data
====
password:  13 bytes
username:  8 bytes
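
Note that Secret data is base64-encoded, not encrypted. The value in secret.yaml decodes straight back to the plaintext password, and its trailing K reveals a newline that sneaks in when the value is encoded with plain echo:

```shell
# Decode the value from secret.yaml (it ends in a newline):
printf 'dGVzdGluZzEyMzQK' | base64 -d

# Encode without the stray newline by using printf instead of echo:
printf '%s' 'testing1234' | base64    # dGVzdGluZzEyMzQ=
```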

Create a PV and PVC for MySQL:

persistent-volume.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi

kubectl apply -f persistent-volume.yaml

kubectl describe pvc
Name: mysql-pv-claim
Namespace: keyva
StorageClass: manual
Status: Bound
Volume: mysql-pv-volume
Labels: <none>
Annotations: pv.kubernetes.io/bind-completed: yes
			pv.kubernetes.io/bound-by-controller: yes
Finalizers: [kubernetes.io/pvc-protection]
Capacity: 2Gi
Access Modes: RWO
VolumeMode: Filesystem
Used By: mysql-7ddf8fdbb8-bbbms
			mysql-7ddf8fdbb8-thdv9
			mysql-7ddf8fdbb8-z2rjj
Events: 	<none>

kubectl describe pv 
Name: mysql-pv-volume
Labels: type=local
Annotations: pv.kubernetes.io/bound-by-controller: yes
Finalizers: [kubernetes.io/pv-protection]
StorageClass: manual
Status: Bound
Claim: keyva/mysql-pv-claim
Reclaim Policy: Retain
Access Modes: RWO
VolumeMode: Filesystem
Capacity: 2Gi
Node Affinity: <none>
Message:         
Source:
		Type: HostPath (bare host directory volume)
		Path: /mnt/data
		HostPathType:  
Events: <none>

Create the deployment YAML for the MySQL container using the above Secret, PVC, and PV:

apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
  - port: 3306
  selector:
    app: mysql
  clusterIP: None
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
  annotations:
    reloader.stakater.com/auto: "true"
spec:
  selector:
    matchLabels:
      app: mysql
  replicas: 3
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: mysql:5.6
        name: mysql
        env:
        # Password comes from the demo-secret created earlier
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: demo-secret
              key: password
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim

kubectl apply -f my-sql.yaml

Once deployed, you can verify the pod status:

kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
mysql-5b8d8bd99b-277mf   1/1     Running   0          10s
mysql-5b8d8bd99b-6x5wz   1/1     Running   0          10s
mysql-5b8d8bd99b-srlnb   1/1     Running   0          10s

Describe the deployment to view details:

kubectl describe deployment mysql
Name: mysql
Namespace: keyva
CreationTimestamp: Sun, 19 Feb 2023 20:28:02 -0600
Labels: <none>
Annotations: deployment.kubernetes.io/revision: 1
			reloader.stakater.com/auto: true
Selector: app=mysql
Replicas: 3 desired | 3 updated | 3 total | 3 available | 0 unavailable
StrategyType: Recreate
MinReadySeconds: 0
Pod Template:
	Labels:  app=mysql
	Containers:
	mysql:
		Image:      mysql:5.6
		Port:       3306/TCP
		Host Port:  0/TCP
		Environment:
		MYSQL_ROOT_PASSWORD:  <set to the key 'password' in secret 'demo-secret'>  Optional: false
		Mounts:
			/var/lib/mysql from mysql-persistent-storage (rw)
	Volumes:
		mysql-persistent-storage:
		Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
		ClaimName: mysql-pv-claim
		ReadOnly: false
Conditions:
	Type           Status  Reason
	----           ------  ------
	Progressing    True    NewReplicaSetAvailable
	Available      True    MinimumReplicasAvailable
OldReplicaSets: <none>
NewReplicaSet: mysql-5b8d8bd99b (3/3 replicas created)
Events:
	Type    Reason             Age   From                   Message
	----    ------             ----  ----                   -------
	Normal  ScalingReplicaSet  11m   deployment-controller  Scaled up replica set mysql-5b8d8bd99b to 3

Reloader can monitor changes to ConfigMaps and Secrets, and perform rolling upgrades on Pods and their associated DeploymentConfigs, Deployments, DaemonSets, StatefulSets, and Rollouts.
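
To trigger the behavior validated below, change the secret's value. The new base64 payload can be computed locally (newpassword is an arbitrary example value):

```shell
# Compute the base64 for the replacement password:
printf '%s' 'newpassword' | base64    # bmV3cGFzc3dvcmQ=

# Against a live cluster you would then patch the secret, e.g.:
#   kubectl patch secret demo-secret -n keyva \
#     -p '{"data":{"password":"bmV3cGFzc3dvcmQ="}}'
# Reloader notices the change and rolls the annotated mysql Deployment.
```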

Validation of reloader:

kubectl logs reloader-reloader-7f6b8d49f7-9lxrx

time="2023-02-22T01:59:48Z" level=info msg="Changes detected in 'demo-secret' of type 'SECRET' in namespace 'keyva', Updated 'mysql' of type 'Deployment' in namespace 'keyva'"

Verify the age of the pods:

kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
mysql-7bd6b6d789-fqfgd   1/1     Running   0          3s
mysql-7bd6b6d789-tbxxp   1/1     Running   0          3s
mysql-7bd6b6d789-trd7v   1/1     Running   0          3s

Ref: K8-Reloader


Keyva ServiceNow App for Red Hat Ansible Automation Platform – Certified for Utah Release

February 10, 2023

Keyva is pleased to announce the certification of the Keyva Integration for Red Hat Ansible Automation Platform for the new ServiceNow Utah release. Clients can now seamlessly upgrade their ServiceNow App from previous ServiceNow releases (Tokyo, Rome, San Diego) to the Utah release. The Utah release includes various updates to products, applications, and features for Customer Experience, Technology Excellence, Automation & Low-Code, Now Platform, and Industries. Learn more about the Keyva ServiceNow integrations for Red Hat products and view all the ServiceNow releases for which Keyva has been certified at the ServiceNow Store by visiting bit.ly/3lsquOb.
Keyva ServiceNow App for Red Hat Ansible – Certified for Tokyo Release

December 6, 2022

Keyva is pleased to announce the certification of our ServiceNow App for Red Hat Ansible for the new ServiceNow Tokyo release. Clients can now seamlessly upgrade their ServiceNow App from previous ServiceNow releases (Rome, San Diego) to the Tokyo release.

According to ServiceNow, the Tokyo release is purpose-built to deliver better employee and customer experiences, supercharge automation and trust in operations, and accelerate value in ways that are good for people, good for the planet, and good for profits. Tokyo is the company's latest release to date.

Learn more about the Keyva ServiceNow App for Ansible and view all the ServiceNow releases for which it has been certified at the ServiceNow Store by visiting bit.ly/3VTl0ZM.

Keyva: Cloud Migration

November 23, 2022
https://youtu.be/8YNWdeSLsGk
Keyva CEO Jaime Gmach discusses how we help clients move data, applications, and other workflows from on-premise to the cloud.
Keyva: Integrations

November 2, 2022
https://youtu.be/9zr1gzEEYCs

Keyva CEO Jaime Gmach discusses how Keyva helps build and maintain integrations between the tools and solutions our clients frequently use in a way that is secure, scalable, and supportable.

Keyva: Automation

October 26, 2022
https://youtu.be/81mmUgw2DGQ
Keyva CEO Jaime Gmach shares how end-to-end automation is at the core of how we help clients drive business value and technical capabilities from their IT environment.
Keyva: Modern DevOps

October 5, 2022
https://youtu.be/QsMbR1-6Arc

Keyva CEO Jaime Gmach shares our approach to modern DevOps.

[post_title] => Keyva: Modern DevOps [post_excerpt] => [post_status] => publish [comment_status] => closed [ping_status] => closed [post_password] => [post_name] => keyva-modern-devops [to_ping] => [pinged] => [post_modified] => 2024-05-16 15:47:19 [post_modified_gmt] => 2024-05-16 15:47:19 [post_content_filtered] => [post_parent] => 0 [guid] => https://keyvatech.com/?p=3419 [menu_order] => 0 [post_type] => post [post_mime_type] => [comment_count] => 0 [filter] => raw ) ) [post_count] => 8 [current_post] => -1 [before_loop] => 1 [in_the_loop] => [post] => WP_Post Object ( [ID] => 3751 [post_author] => 14 [post_date] => 2023-03-09 14:50:56 [post_date_gmt] => 2023-03-09 14:50:56 [post_content] =>

By: Saikrishna Madupu – Sr Devops Engineer

This article reviews how to tail logs from multiple pods via Kubernetes and Stern.

Kubernetes (K8) is a scalable container orchestrator. It is fairly lightweight to support IoT appliances and it can also handle huge business systems with hundreds of apps and hosts .

Stern is a tool for the tailing of numerous Kubernetes pods and the numerous containers that make up each pod. To facilitate faster debugging, each result is color coded.

As the query is a regular expression, the pod name can be easily filtered, and the exact id is not required. For instance, for instance omitting the deployment id. When a pod is deleted, it is removed from the tail, and when a new pod is added, it is automatically tailed.

Stern can tail all of the containers in a pod instead of having to do each one manually. You can simply specify the container flag to limit the number of containers displayed. By default, all containers are monitored.

Deploying a nginx svc:
kind: Service
apiVersion: v1
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  selector:
    app: nginx
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 80
          protocol: TCP%  
kubectl apply --filename nginx-svc.yaml -n keyva

O/p: service/nginx unchanged
deployment.apps/nginx created

we can validate and verify the svc and pods that being up and running:
kubectl get all -n keyva
NAME                        READY   STATUS    RESTARTS   AGE
pod/nginx-cd55c47f5-gwtkn   1/1     Running   0          12s

NAME            TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
service/nginx   ClusterIP   10.96.58.31   <none>        80/TCP    88d

NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx   1/1     1            1           12s

NAME                              DESIRED   CURRENT   READY   AGE
replicaset.apps/nginx-cd55c47f5   1         1         1       12s

kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
nginx-cd55c47f5-gwtkn   1/1     Running   0          30s

KubeCtl has limits:

Using the label selection, it is evident that kubectl can read logs from numerous pods, however this technique has a drawback.

The reason for this is that –follow streams the API server's logs. You open a connection to the API server per pod, which opens a connection to the associated kubelet to stream logs continually. This does not scale well and results in many incoming and outgoing connections to the API server. As a result, it became a design decision to restrict the number of concurrent connections. Using Stern:

The command is fairly straightforward. Stern retrieves the logs from the given namespace for the specified application. In the case of Stern, you can view not only logs from a single Kubernetes object, such as a deployment or service, but also logs from all related objects. Example:

Stern -n keyva nginx

stern -n keyva nginx                           
+ nginx-cd55c47f5-86ql5 › nginx
+ nginx-cd55c47f5-bm55t › nginx
+ nginx-cd55c47f5-gwtkn › nginx
nginx-cd55c47f5-gwtkn nginx /docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
nginx-cd55c47f5-gwtkn nginx /docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
nginx-cd55c47f5-gwtkn nginx /docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
nginx-cd55c47f5-gwtkn nginx 10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
nginx-cd55c47f5-gwtkn nginx 10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
nginx-cd55c47f5-gwtkn nginx /docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
nginx-cd55c47f5-gwtkn nginx /docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
nginx-cd55c47f5-gwtkn nginx /docker-entrypoint.sh: Configuration complete; ready for start up
nginx-cd55c47f5-gwtkn nginx 2023/01/17 10:42:19 [notice] 1#1: using the "epoll" event method
nginx-cd55c47f5-gwtkn nginx 2023/01/17 10:42:19 [notice] 1#1: nginx/1.23.3
nginx-cd55c47f5-gwtkn nginx 2023/01/17 10:42:19 [notice] 1#1: built by gcc 10.2.1 20210110 (Debian 10.2.1-6) 
nginx-cd55c47f5-gwtkn nginx 2023/01/17 10:42:19 [notice] 1#1: OS: Linux 5.10.124-linuxkit
nginx-cd55c47f5-gwtkn nginx 2023/01/17 10:42:19 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576
nginx-cd55c47f5-gwtkn nginx 2023/01/17 10:42:19 [notice] 1#1: start worker processes
nginx-cd55c47f5-gwtkn nginx 2023/01/17 10:42:19 [notice] 1#1: start worker process 35
nginx-cd55c47f5-gwtkn nginx 2023/01/17 10:42:19 [notice] 1#1: start worker process 36
nginx-cd55c47f5-gwtkn nginx 2023/01/17 10:42:19 [notice] 1#1: start worker process 37
nginx-cd55c47f5-gwtkn nginx 2023/01/17 10:42:19 [notice] 1#1: start worker process 38
nginx-cd55c47f5-bm55t nginx /docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
nginx-cd55c47f5-bm55t nginx /docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
nginx-cd55c47f5-bm55t nginx /docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
nginx-cd55c47f5-bm55t nginx 10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
nginx-cd55c47f5-bm55t nginx 10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
nginx-cd55c47f5-bm55t nginx /docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
nginx-cd55c47f5-bm55t nginx /docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
nginx-cd55c47f5-bm55t nginx /docker-entrypoint.sh: Configuration complete; ready for start up
nginx-cd55c47f5-bm55t nginx 2023/01/17 10:47:26 [notice] 1#1: using the "epoll" event method
nginx-cd55c47f5-bm55t nginx 2023/01/17 10:47:26 [notice] 1#1: nginx/1.23.3
nginx-cd55c47f5-bm55t nginx 2023/01/17 10:47:26 [notice] 1#1: built by gcc 10.2.1 20210110 (Debian 10.2.1-6) 
nginx-cd55c47f5-86ql5 nginx /docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
nginx-cd55c47f5-86ql5 nginx /docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
nginx-cd55c47f5-bm55t nginx 2023/01/17 10:47:26 [notice] 1#1: OS: Linux 5.10.124-linuxkit
nginx-cd55c47f5-bm55t nginx 2023/01/17 10:47:26 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576
nginx-cd55c47f5-bm55t nginx 2023/01/17 10:47:26 [notice] 1#1: start worker processes
nginx-cd55c47f5-86ql5 nginx /docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
nginx-cd55c47f5-bm55t nginx 2023/01/17 10:47:26 [notice] 1#1: start worker process 36
nginx-cd55c47f5-86ql5 nginx 10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
nginx-cd55c47f5-bm55t nginx 2023/01/17 10:47:26 [notice] 1#1: start worker process 37
nginx-cd55c47f5-86ql5 nginx 10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
nginx-cd55c47f5-86ql5 nginx /docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
nginx-cd55c47f5-bm55t nginx 2023/01/17 10:47:26 [notice] 1#1: start worker process 38
nginx-cd55c47f5-bm55t nginx 2023/01/17 10:47:26 [notice] 1#1: start worker process 39
nginx-cd55c47f5-86ql5 nginx /docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh

If you want to run stern inside a Kubernetes Pod, you need to create the following ClusterRole and bind it to the Pod's ServiceAccount.

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: stern
rules:
- apiGroups: [""]
  resources: ["pods", "pods/log"]
  verbs: ["get", "watch", "list"]
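
The ClusterRole above only declares the permissions; it still needs to be bound. A minimal ClusterRoleBinding sketch, assuming a ServiceAccount named stern in the default namespace (adjust the names to match your environment):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: stern
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: stern
subjects:
- kind: ServiceAccount
  name: stern
  namespace: default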

Stern also lets you customize the log output. Using the --output flag, you can choose from the following predefined templates:

Output    Description
default   Displays the namespace, pod, and container, and colorizes the output depending on --color
raw       Outputs only the log message itself; useful when your logs are JSON and you want to pipe them to jq
json      Marshals the log struct to JSON; useful for programmatic consumption

Stern also accepts a custom template through the --template flag, which is compiled as a Go template and applied to each log message. The following struct is passed to the template:

Property        Type     Description
Message         string   The log message itself
NodeName        string   The name of the node the pod is scheduled on
Namespace       string   The namespace of the pod
PodName         string   The name of the pod
ContainerName   string   The name of the container
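
To illustrate how such a template is evaluated, here is a small standalone Go sketch. The Log struct is a simplified stand-in for stern's internal type (its field names match the documented properties above), and the template string mirrors what you would pass to --template:

```go
package main

import (
	"fmt"
	"strings"
	"text/template"
)

// Log is a simplified stand-in for the struct stern passes to
// custom templates; field names match the documented properties.
type Log struct {
	Message       string
	NodeName      string
	Namespace     string
	PodName       string
	ContainerName string
}

// logTmpl renders pod, container, and message, followed by a newline.
var logTmpl = template.Must(template.New("log").Parse(
	`{{.PodName}} {{.ContainerName}} {{.Message}}{{"\n"}}`))

// render applies the template to one log entry and returns the result.
func render(l Log) string {
	var b strings.Builder
	logTmpl.Execute(&b, l) // writes into an in-memory builder
	return b.String()
}

func main() {
	entry := Log{
		Message:       "Configuration complete; ready for start up",
		Namespace:     "default",
		PodName:       "nginx-cd55c47f5-gwtkn",
		ContainerName: "nginx",
	}
	fmt.Print(render(entry))
}
```

On the command line, the equivalent would be something like stern backend --template '{{.PodName}} {{.ContainerName}} {{.Message}}{{"\n"}}', where backend is a hypothetical pod query.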

In addition to Go's built-in template functions, the following functions are available inside the template:

Func        Arguments              Description
json        object                 Marshals the object and outputs it as JSON text
color       color.Color, string    Wraps the text in the given color (.ContainerColor and .PodColor are provided)
parseJSON   string                 Parses the string as JSON
extjson     string                 Parses the object as JSON and outputs colorized JSON
ppextjson   string                 Parses the object as JSON and outputs pretty-printed, colorized JSON
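
Functions like json are wired into the template as custom functions. A minimal sketch of how that works in Go, using a toJSON helper as a stand-in for stern's json function (the Log struct is again a simplified stand-in):

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
	"text/template"
)

// Log is a simplified stand-in for the struct stern passes to templates.
type Log struct {
	Message       string
	NodeName      string
	Namespace     string
	PodName       string
	ContainerName string
}

// toJSON mimics the behavior of the json template function:
// marshal the object and emit it as JSON text.
func toJSON(v interface{}) string {
	b, _ := json.Marshal(v)
	return string(b)
}

// jsonTmpl registers the function under the name "json" before parsing.
var jsonTmpl = template.Must(template.New("log").
	Funcs(template.FuncMap{"json": toJSON}).
	Parse(`{{json .}}{{"\n"}}`))

// render applies the template to one log entry and returns the result.
func render(l Log) string {
	var b strings.Builder
	jsonTmpl.Execute(&b, l)
	return b.String()
}

func main() {
	fmt.Print(render(Log{
		Message:       "ready",
		Namespace:     "default",
		PodName:       "nginx-1",
		ContainerName: "nginx",
	}))
}
```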

Kubernetes adds operational complexity, and developers need fast access to logs to troubleshoot problems. If you use Kubernetes and have permission to view logs on your cluster, set up a few CLI aliases and start tailing logs from your applications in real time.
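
For example, a couple of hypothetical aliases (these assume stern is on your PATH; the pod query "." matches all pods in the current namespace, and flag names should be checked against your stern version):

```shell
alias logs='stern . --tail 20'          # last 20 lines from every pod in the current namespace
alias rawlogs='stern . --output raw'    # message text only, ready to pipe into jq
```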



Tail Logs from Multiple K8 Pods

By: Saikrishna Madupu – Sr. DevOps Engineer This article reviews how to tail logs from multiple pods via Kubernetes and Stern. Kubernetes (K8) is a scalable container orchestrator. It is fairly lightweight ...
