Wednesday, November 20, 2024

How to Use the ipinfo Tool

The ipinfo CLI tool lets you look up IP address details, including geolocation and ASN information, and supports bulk lookups. Below is a guide to get started quickly:


Installation

  1. Install with Homebrew (macOS/Linux):

    brew install ipinfo-cli

  2. Install via Script: Download and install the tool using the provided script:
    curl -Ls https://github.com/ipinfo/cli/releases/latest/download/ipinfo.sh | sh

    Add the binary to your PATH:

    export PATH="$HOME/.ipinfo/bin:$PATH"

  3. Download Binary Manually (Linux/macOS/Windows):
    Download the appropriate binary from the GitHub releases page.

Basic Usage

Lookup a Single IP Address

To get detailed information about an IP address:

ipinfo <IP_ADDRESS>

Example:

ipinfo 8.8.8.8


Lookup Your Own IP

ipinfo myip


Bulk Lookup

Look up multiple IP addresses by piping them from a file.

Note: bulk IP lookups require an API token, so make sure you have created an account at https://ipinfo.io and have your token at hand.

cat ips.txt | ipinfo -t YOUR_TOKEN

Alternatively, you can register your token with the following command:

$ ipinfo init
1) Enter an existing API token
2) Sign up or log in at ipinfo.io with your browser

Press 1 and paste the token you received when creating your account on ipinfo.io.

Advanced Features

Filter Fields

Use the -f flag to specify the fields you want in the output:

ipinfo 8.8.8.8 -f ip,country,city
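Field filtering combines naturally with piped bulk input and standard shell tools. The sketch below is a rough illustration rather than official usage: access.log is a placeholder file name, and it assumes the -f flag applies to piped bulk lookups the same way it does to single lookups.

# extract unique IPv4 addresses from a log and look them up in bulk
grep -oE '([0-9]{1,3}\.){3}[0-9]{1,3}' access.log | sort -u | ipinfo -t YOUR_TOKEN -f ip,country,city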


For further details, refer to the official documentation.



Monday, October 21, 2024

Mimir and VictoriaMetrics

Mimir and VictoriaMetrics are both highly efficient, scalable time-series databases and monitoring solutions, often used in Kubernetes environments to handle metrics data. They provide an alternative to plain Prometheus for managing, storing, and querying large volumes of metrics efficiently.

Mimir in Kubernetes

Grafana Mimir is a large-scale, high-performance time-series database (TSDB) built for long-term metrics storage and highly available, multi-tenant Prometheus use cases. It extends Prometheus' capabilities, providing horizontal scalability and long-term retention of metrics.

Key Use Cases of Mimir in Kubernetes:

  1. Horizontal Scalability:

    • Mimir allows you to scale Prometheus horizontally across multiple Kubernetes nodes. This makes it suitable for large environments where a single Prometheus instance might not suffice due to the high cardinality of metrics or large workloads.
  2. Long-Term Storage:

    • Mimir stores metrics long-term, enabling efficient querying of historical data. In large Kubernetes environments, you might need to store metrics data for extended periods (months/years), and Mimir makes this possible while managing storage space efficiently.
  3. Multi-Tenancy:

    • Mimir supports multi-tenancy, allowing different teams or projects to use the same instance but maintain isolation of their metrics and queries. This is particularly useful in multi-tenant Kubernetes clusters where multiple teams or workloads coexist.
  4. High Availability:

    • By replicating data across multiple nodes, Mimir ensures high availability for querying metrics, even during failures or maintenance events.
  5. Seamless Prometheus Integration:

    • Mimir is fully compatible with Prometheus and can be deployed in Kubernetes as a drop-in replacement or extension for Prometheus to provide scalability, durability, and multi-tenant support. It integrates with Grafana for visualizing metrics.
  6. Federated Clusters:

    • Mimir allows you to aggregate metrics from multiple Prometheus instances across different Kubernetes clusters, giving you a global view of your infrastructure.

Deploying Mimir in Kubernetes:

  • You can deploy Mimir in Kubernetes using Helm charts provided by Grafana Labs or manually configure it to collect metrics from Prometheus and other systems.

Example:


helm repo add grafana https://grafana.github.io/helm-charts
helm install mimir grafana/mimir-distributed
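
Once the chart is running, you can exercise Mimir's multi-tenant, Prometheus-compatible query API. The sketch below is a minimal check, not a chart guarantee: the gateway service name mimir-nginx and the tenant team-a are assumptions, so verify the actual service with kubectl get svc.

# forward the Mimir gateway locally
kubectl port-forward svc/mimir-nginx 8080:80 &
# Mimir isolates tenants via the X-Scope-OrgID header
curl -H 'X-Scope-OrgID: team-a' 'http://localhost:8080/prometheus/api/v1/query?query=up'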

VictoriaMetrics in Kubernetes

VictoriaMetrics is another scalable time-series database built as a cost-efficient alternative to Prometheus for high-performance, long-term metrics storage. It’s designed to ingest and store high cardinality time-series data in an efficient way, making it suitable for cloud-native applications in Kubernetes environments.

Key Use Cases of VictoriaMetrics in Kubernetes:

  1. High Ingestion Rate:

    • VictoriaMetrics is optimized for high ingestion rates, allowing it to handle millions of active time series without the performance degradation that Prometheus might experience at such scales.
  2. Long-Term Storage:

    • Like Mimir, VictoriaMetrics is designed for long-term storage of metrics. You can store months or years of metrics data without worrying about the performance overhead, thanks to its efficient compression techniques.
  3. Prometheus Drop-In Replacement:

    • VictoriaMetrics is fully compatible with Prometheus and can act as a replacement for Prometheus' backend storage. It can receive metrics in the same format as Prometheus and works with Prometheus’ remote write and read features.
  4. Query Efficiency:

    • It provides high query performance, enabling fast access to large datasets. This makes it highly efficient for environments where querying across large time spans (e.g., months) is necessary.
  5. Multi-Tenancy:

    • Like Mimir, VictoriaMetrics supports multi-tenancy, allowing multiple teams or environments in Kubernetes to store and query their own isolated metrics.
  6. Cost Efficiency:

    • VictoriaMetrics is known for its efficient use of memory and storage resources, which makes it a cost-effective solution for managing large-scale time-series data in Kubernetes clusters.
  7. Global Monitoring:

    • VictoriaMetrics supports scraping metrics from multiple Prometheus instances or clusters, making it ideal for global monitoring solutions that aggregate data from multiple Kubernetes clusters into a single storage backend.

Deploying VictoriaMetrics in Kubernetes:

  • You can deploy VictoriaMetrics using Helm charts or Kubernetes manifests.

Example Helm chart deployment:


helm repo add vm https://victoriametrics.github.io/helm-charts/
helm install vm-cluster vm/victoria-metrics-cluster

  • Once deployed, VictoriaMetrics will handle metrics ingestion and storage while being fully compatible with Grafana for visualization.
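
In cluster mode you can query metrics through the vmselect component, which serves a Prometheus-compatible API under a per-tenant URL path. A minimal sketch; the service name below follows the chart's usual <release>-<chart>-vmselect pattern but is an assumption, so confirm it with kubectl get svc.

# forward vmselect locally (tenant 0 is the default account ID in the path)
kubectl port-forward svc/vm-cluster-victoria-metrics-cluster-vmselect 8481 &
curl 'http://localhost:8481/select/0/prometheus/api/v1/query?query=up'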

Comparison: Mimir vs. VictoriaMetrics in Kubernetes

Feature                 | Grafana Mimir                                   | VictoriaMetrics
------------------------|-------------------------------------------------|-----------------------------------------------------------
Scalability             | Horizontally scalable across multiple nodes     | Highly scalable, designed for high ingestion rates
Long-Term Storage       | Yes, optimized for long-term metrics retention  | Yes, with efficient compression for long-term data
Multi-Tenancy           | Full multi-tenant support                       | Supports multi-tenancy for isolated environments
Query Performance       | High-performance queries, even at scale         | Very fast and efficient queries over large datasets
Prometheus Integration  | Full Prometheus compatibility                   | Drop-in Prometheus replacement, remote write/read
Cost Efficiency         | Optimized for large workloads, with redundancy  | Highly cost-efficient, minimal resource usage
High Availability       | Supports high availability and durability       | Not inherently HA, but can be configured for it
Federated Clusters      | Yes, can aggregate metrics across clusters      | Yes, supports scraping from multiple Prometheus instances

Conclusion

Both Mimir and VictoriaMetrics are powerful tools for observability in Kubernetes environments, especially when dealing with high volumes of time-series data and requiring long-term storage. They provide solutions for scaling beyond Prometheus' typical capacity, enhancing the ability to monitor and troubleshoot large Kubernetes deployments.

  • Mimir is ideal for teams looking for advanced features like horizontal scalability, multi-tenancy, and high availability, particularly when integrating with Grafana.
  • VictoriaMetrics offers an efficient, cost-effective alternative for high-volume metrics storage and retrieval, with fast query performance and reduced resource usage.

Both tools significantly extend the capabilities of Prometheus and are suitable for Kubernetes clusters that need to manage vast amounts of observability data.

What is Helm charts - K8s

Helm charts are a package management tool for Kubernetes, designed to simplify the deployment, configuration, and management of applications in Kubernetes clusters. They encapsulate Kubernetes resources (like pods, services, config maps, etc.) into reusable, configurable packages called charts, making it easier to deploy complex applications with minimal manual intervention.

How Helm Charts Work with Kubernetes:

  1. Helm Chart Structure:

    • A Helm chart is a collection of YAML templates and files that define a Kubernetes application (a minimal chart layout is sketched after this list).
    • Each chart consists of a set of Kubernetes manifests (e.g., deployment.yaml, service.yaml) that define the resources to be deployed.
  2. Templating:

    • Helm uses templating to inject dynamic values into these manifests, allowing for flexible and repeatable deployments.
    • Values can be customized using a values.yaml file, which holds default configurations that can be overridden at deployment time.
  3. Repositories:

    • Helm charts are stored in repositories (similar to package repositories in other package managers).
    • Public repositories like the official Helm Hub or Artifact Hub host thousands of pre-configured charts for common applications (e.g., NGINX, MySQL, Prometheus, etc.).
    • You can also create private repositories for custom applications.
  4. Installing Applications with Helm:

    • Using Helm, you can deploy an application with a single command that references a chart, pulling the chart from a repository and deploying it into your Kubernetes cluster.
    • Example:

      helm repo add stable https://charts.helm.sh/stable
      helm install my-release stable/nginx
    • This command pulls the NGINX Helm chart, installs it into the Kubernetes cluster, and creates resources like deployments, services, etc.
  5. Versioning and Rollbacks:

    • Helm keeps track of chart releases and their versions.
    • You can upgrade, roll back, or uninstall chart releases easily, making it ideal for maintaining application lifecycles.
      • Upgrade: helm upgrade <release-name> <chart-name>
      • Rollback: helm rollback <release-name> <revision-number>
  6. Helm Values:

    • values.yaml allows you to define custom configurations. When deploying a Helm chart, you can override these values to change settings like replica counts, image versions, or environment variables.

      helm install my-release stable/nginx --set replicaCount=3
  7. Chart Dependencies:

    • Helm charts can have dependencies on other charts. This enables complex applications to be built using smaller, reusable charts.
    • Dependencies are listed in the Chart.yaml file, and Helm ensures all dependencies are installed alongside the main chart.
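
To make the chart structure concrete, helm create scaffolds a fully working chart that you can inspect, customize, and install. The layout in the comments is the standard scaffold, trimmed to the files discussed above; custom-values.yaml is a hypothetical overrides file of your own.

helm create mychart
# mychart/
#   Chart.yaml       -> chart metadata and dependencies
#   values.yaml      -> default, overridable configuration
#   templates/       -> Kubernetes manifests with templating
#     deployment.yaml
#     service.yaml
helm install my-release ./mychart -f custom-values.yaml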

Typical Use Cases for Helm Charts in Kubernetes:

  • Application Deployment: Helm charts make it easy to deploy applications (databases, web servers, monitoring tools) with a single command.
  • Managing Configurations: By using Helm’s templating system, you can manage environment-specific configurations without hardcoding values into Kubernetes manifests.
  • Complex Workloads: Helm is ideal for deploying applications with many interconnected resources (e.g., microservices), where you need to manage dependencies and complex configurations.
  • Version Control: Helm allows you to version control your application deployments and quickly roll back if something goes wrong.

In summary, Helm charts bring simplicity, modularity, and flexibility to Kubernetes application deployments, offering a powerful way to manage complex workloads and configurations in an efficient, reusable manner.

Key Components of Observability in Kubernetes

Deploying comprehensive observability in Kubernetes clusters involves monitoring key metrics, gathering logs, and tracing distributed transactions across various microservices and components. To achieve this, you'll need an integrated set of tools covering the three observability pillars: metrics, logs, and tracing.

Here’s a guide to deploying comprehensive observability in Kubernetes, along with recommended tools for each aspect.

Key Components of Observability in Kubernetes:

  1. Metrics Monitoring: Track resource usage, performance, and system health.
  2. Logging: Collect and aggregate logs for debugging and auditing purposes.
  3. Distributed Tracing: Trace requests across microservices to diagnose latency and performance issues.
  4. Visualization and Alerting: Use dashboards and alerts to provide actionable insights and notifications.

Step-by-Step Guide to Deploy Comprehensive Observability in Kubernetes

1. Metrics Monitoring with Prometheus and Grafana

Prometheus is the de facto standard for monitoring metrics in Kubernetes. It collects metrics from applications, Kubernetes components, and infrastructure, and stores them for analysis. Grafana is typically paired with Prometheus to visualize metrics through dashboards.

Steps to Deploy Prometheus and Grafana:

  • Install Prometheus:

    • Use Helm (a package manager for Kubernetes) to install the Prometheus stack.

      helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
      helm repo update
      helm install prometheus prometheus-community/kube-prometheus-stack
    • Prometheus will automatically scrape metrics from Kubernetes components such as the API server, Kubelet, etc., using the kube-state-metrics component.
  • Install Grafana:

    • Grafana can be included in the same Helm chart (as part of kube-prometheus-stack) or installed separately.
    • Access Grafana, then add Prometheus as a data source and import Kubernetes-related dashboards from the Grafana community or create custom ones.
  • Alerting: Configure alerting rules in Prometheus to trigger alerts (email, Slack, etc.) when certain conditions are met (e.g., high CPU usage, failing pods).
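
To quickly verify that the stack is up and scraping, you can port-forward Prometheus and issue a test query. This is a minimal sketch: prometheus-operated is the operator's usual headless service, but confirm the name in your cluster with kubectl get svc.

kubectl port-forward svc/prometheus-operated 9090 &
# 'up' should return one series per scraped target
curl 'http://localhost:9090/api/v1/query?query=up'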

2. Logging with Fluentd/Fluentbit and Elasticsearch (ELK Stack) or Loki

Logs are critical for diagnosing issues in a Kubernetes environment. Fluentd or Fluentbit is commonly used to collect, transform, and route logs to a backend, like Elasticsearch (for ELK stack) or Loki.

Steps to Deploy Logging Stack:

  • Install Fluentd or Fluentbit:

    • Fluentbit is a lightweight log processor, while Fluentd is more feature-rich. Both can be used to collect logs from Kubernetes containers.
    • Install Fluentbit via Helm:

      helm repo add fluent https://fluent.github.io/helm-charts
      helm install fluentbit fluent/fluent-bit
  • Install Elasticsearch and Kibana (for ELK):

    • Elasticsearch will store the logs, and Kibana will visualize them.
    • You can install the ELK stack (Elasticsearch, Logstash, Kibana) or use OpenSearch as an alternative. This can be installed using Helm charts or through managed services from cloud providers (like AWS OpenSearch).
  • Alternative with Loki:

    • Loki is a lightweight, log aggregation system from Grafana Labs that integrates well with Prometheus and Grafana for log visualization.
    • To install Loki via Helm:

      helm repo add grafana https://grafana.github.io/helm-charts
      helm install loki grafana/loki-stack
    • Logs can be visualized directly within Grafana.
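
To confirm that logs are flowing into Loki before wiring up dashboards, you can hit its HTTP API directly. A rough sketch, assuming the chart's default loki service on port 3100 and that your log collector attaches a namespace label:

kubectl port-forward svc/loki 3100 &
# LogQL query for recent kube-system logs
curl -G 'http://localhost:3100/loki/api/v1/query_range' --data-urlencode 'query={namespace="kube-system"}'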

3. Distributed Tracing with Jaeger or OpenTelemetry

Distributed tracing is essential in microservices architectures to track how requests propagate through various services, helping diagnose latency and bottlenecks.

Steps to Deploy Jaeger or OpenTelemetry:

  • Install Jaeger:

    • Jaeger is a popular open-source tracing tool designed for distributed systems. It integrates well with Kubernetes and can trace requests across services.
    • Install Jaeger using Helm:

      helm repo add jaegertracing https://jaegertracing.github.io/helm-charts
      helm install jaeger jaegertracing/jaeger
  • Integrate with Microservices:

    • To capture trace data, instrument your microservices with Jaeger or OpenTelemetry SDKs. If your services are already using frameworks like gRPC or HTTP, these frameworks might already support Jaeger integration.
  • Use OpenTelemetry:

    • OpenTelemetry is a vendor-neutral observability framework that combines metrics, logs, and traces. It can be used in place of or alongside Jaeger.
    • Install OpenTelemetry Collector using Helm:

      helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
      helm install otel open-telemetry/opentelemetry-collector
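
Once Jaeger is running, its web UI is the quickest way to inspect traces. A minimal sketch, assuming the chart exposes the usual jaeger-query service (the UI listens on port 16686); check the exact name with kubectl get svc.

kubectl port-forward svc/jaeger-query 16686 &
# then open http://localhost:16686 in a browser to browse traces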

4. Visualization and Alerting with Grafana

Grafana plays a key role in visualizing observability data from multiple sources, including Prometheus (metrics), Loki (logs), and Jaeger (traces).

  • Configure Dashboards: Import or create dashboards for Kubernetes, and integrate alerts with communication platforms like Slack, email, or PagerDuty.
  • Unified Observability: Grafana allows you to have a unified view of metrics, logs, and traces, making it easier to correlate data across different layers of your Kubernetes cluster.
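
If Grafana was installed as part of kube-prometheus-stack in step 1, you can reach it by port-forwarding its service and reading the generated admin password from the release secret. The service and secret names below assume the release was named prometheus, as in the earlier install command.

kubectl port-forward svc/prometheus-grafana 3000:80 &
# print the auto-generated admin password
kubectl get secret prometheus-grafana -o jsonpath='{.data.admin-password}' | base64 -d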

Popular Tools and Platforms for Kubernetes Observability

  1. Metrics Monitoring:

    • Prometheus: For real-time metrics collection and alerting.
    • Grafana: For visualizing metrics from Prometheus and other sources.
    • Thanos: For long-term storage and scaling of Prometheus metrics.
  2. Logging:

    • Fluentd or Fluentbit: For log collection and forwarding.
    • Elasticsearch, Logstash, Kibana (ELK): For storing, processing, and visualizing logs.
    • Loki: A log aggregation system designed to work well with Prometheus.
  3. Distributed Tracing:

    • Jaeger: For distributed tracing, offering a complete solution for monitoring the flow of requests in microservices.
    • OpenTelemetry: A unified platform for collecting traces, metrics, and logs.
    • Zipkin: Another tracing tool, similar to Jaeger.
  4. Alerting:

    • Alertmanager: Prometheus’ alerting tool.
    • PagerDuty, Opsgenie, Slack: For receiving alerts.

Managed Observability Platforms

In addition to open-source tools, several managed platforms provide comprehensive observability for Kubernetes:

  1. Datadog: Full-stack monitoring and observability for Kubernetes clusters, offering metrics, traces, and logs in a single platform.
  2. New Relic: Offers a Kubernetes observability solution with detailed insights into applications, infrastructure, and logs.
  3. AWS CloudWatch: A fully managed service from AWS for monitoring Kubernetes clusters on EKS.
  4. Azure Monitor: For monitoring AKS clusters and applications.
  5. Google Cloud Operations (formerly Stackdriver): For monitoring GKE clusters.

Conclusion

Deploying observability in Kubernetes involves combining metrics, logs, and tracing tools to provide a full view of the cluster and application health. Prometheus, Grafana, Jaeger, Fluentd/Fluentbit, and Elasticsearch or Loki are the most popular open-source tools for achieving comprehensive observability. Managed solutions like Datadog, New Relic, and CloudWatch provide an all-in-one solution for teams preferring less operational overhead.

Tuesday, October 15, 2024

Kubernetes - Advanced Kubeconfig setup

A Kubeconfig file is a YAML file that stores the details Kubernetes clients need to reach a cluster: the cluster's API server address, user authentication credentials, and contexts that pair a cluster and user with a default namespace.


Kubernetes tools like kubectl and Helm follow an order of precedence to locate the active Kubeconfig.


Lowest Priority: Default Location


This location is used unless another option overrides it.


Linux

~/.kube/config

macOS

/Users/<username>/.kube/config

Windows

%USERPROFILE%\.kube\config


Next priority: Environment variable


The KUBECONFIG environment variable overrides the default location by specifying one or more Kubeconfig files. For a single file named config-1, the syntax is:


export KUBECONFIG=config-1


Highest priority: Command line


The --kubeconfig command line option has the highest priority, meaning all other Kubeconfig files are ignored. For example, to use the config-1 file located in the ~/.kube/ directory to request secret objects, you would use the following command:


kubectl --kubeconfig=~/.kube/config-1 get secrets



Merge multiple kubeconfig files


To merge multiple files, list them in the KUBECONFIG environment variable separated by colons (on Linux and macOS; Windows uses semicolons):


export KUBECONFIG=~/.kube/config-1:~/.kube/config-2
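
With both files active in the same session, kubectl sees the contexts from each and can switch between them; the context name below is just an example from a hypothetical merged setup:

kubectl config get-contexts
kubectl config use-context dev-cluster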


Physically merge config files


To physically merge multiple Kubeconfig files into a single file, first add them to the KUBECONFIG environment variable as before. Then, use the --flatten option to combine them, redirect the output to a new file, and use the merged file as your new Kubeconfig. Here's how you can do it:


export KUBECONFIG=config1:config2:config3

kubectl config view --flatten > merged-kubeconfig


You can then set the KUBECONFIG environment variable to point to the newly merged file:


export KUBECONFIG=merged-kubeconfig

Wednesday, April 26, 2023

Linux KVM Hypervisor: A Beginner's Guide

KVM (Kernel-based Virtual Machine) is a virtualization technology for Linux that allows you to create and run virtual machines (VMs) on a Linux host.


Before we begin, make sure that you have a Linux machine with KVM installed. You can install KVM on Ubuntu or Debian with the following command:

sudo apt-get install qemu-kvm libvirt-daemon-system libvirt-clients bridge-utils virt-manager


For Red Hat-based systems, you can use the following command:

sudo yum install qemu-kvm libvirt bridge-utils virt-manager


Once you have KVM installed, you can start creating VMs.

Creating a Virtual Machine

You can create a VM using the virt-install command. Here is an example command:

sudo virt-install \
--name my-vm \
--ram 2048 \
--vcpus 2 \
--disk path=/var/lib/libvirt/images/my-vm.qcow2,size=10 \
--os-type linux \
--os-variant ubuntu20.04 \
--network bridge=virbr0 \
--graphics none \
--console pty,target_type=serial \
--location 'http://archive.ubuntu.com/ubuntu/dists/focal/main/installer-amd64/'


This command creates a VM named my-vm with 2 vCPUs and 2 GB of RAM. The VM has a 10 GB disk and runs Ubuntu 20.04. It is connected to a virtual bridge named virbr0 and has no graphics output; the console is redirected to a serial port. Finally, the VM is installed from the Ubuntu 20.04 network installation tree at the given URL.

You can adjust the parameters of this command to suit your needs. For example, you can change the name of the VM, the amount of RAM and vCPUs, the disk size and location, and the network settings.


Starting and Stopping a Virtual Machine

To start a VM, use the virsh start command followed by the name of the VM:

sudo virsh start my-vm

To stop a VM, use the virsh shutdown command followed by the name of the VM:

sudo virsh shutdown my-vm
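
Because the example VM was created with a serial console and no graphics, you can attach to its console after starting it (press Ctrl+] to detach):

sudo virsh console my-vm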


Listing Virtual Machines

To list all the VMs on your system, use the virsh list command:

sudo virsh list --all

This command lists all the running and stopped VMs on your system.

Managing Virtual Machines with Virt-Manager

Virt-Manager is a graphical user interface for managing virtual machines. To start Virt-Manager, type the following command in a terminal:

virt-manager


This command opens the Virt-Manager window, where you can view and manage your VMs.

To create a new VM using Virt-Manager, click the Create a new virtual machine button in the toolbar. This opens a wizard that guides you through the process of creating a new VM.

Conclusion

In this tutorial, we have covered the basics of creating, starting, and stopping virtual machines using KVM on Linux. We have also covered some of the CLI commands and configuration examples for managing virtual machines. With this knowledge, you can start using KVM to create and manage virtual machines on your Linux host.


Saturday, April 15, 2023

How to remove (^M) characters from a file in Linux

The "^M" control characters in a text file are carriage-return characters (\r), part of the CRLF line endings used by Windows/DOS operating systems. These characters can cause issues when you try to use the file in Linux, which expects LF-only line endings. Fortunately, it is easy to remove them using the tr command:

tr -d '\r' < original_file.txt > new_file.txt
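
If you prefer to modify the file in place, GNU sed strips the trailing carriage returns directly, and the dedicated dos2unix utility (if installed) does the same job:

sed -i 's/\r$//' original_file.txt
dos2unix original_file.txt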


That's it! Any of these one-liners is a quick and simple way to remove the "^M" control characters from a text file in Linux.