Wednesday, December 4, 2024

Step-by-step guide to install and configure an OpenVPN server on Ubuntu

Step-by-step guide to install and configure an OpenVPN server on Ubuntu, followed by instructions for connecting to it using a mobile client.


Step 1: Update Your System

Before installing OpenVPN, ensure your system is up to date.


sudo apt update && sudo apt upgrade -y

Step 2: Install OpenVPN and Easy-RSA

Install OpenVPN and the Easy-RSA package, which will be used to set up a Certificate Authority (CA).


sudo apt install openvpn easy-rsa -y

Step 3: Set Up the Easy-RSA Directory

Create and configure the Easy-RSA directory.

make-cadir ~/easy-rsa
cd ~/easy-rsa

Step 4: Configure Variables

Edit the vars file to set custom values for your certificates.

nano vars

Modify the following lines as needed:


set_var EASYRSA_REQ_COUNTRY "US"
set_var EASYRSA_REQ_PROVINCE "State"
set_var EASYRSA_REQ_CITY "City"
set_var EASYRSA_REQ_ORG "YourOrg"
set_var EASYRSA_REQ_EMAIL "email@example.com"
set_var EASYRSA_REQ_OU "MyOrganizationalUnit"

Save and exit the editor (Ctrl+O, Enter, then Ctrl+X).


Step 5: Build the Certificate Authority (CA)

Initialize the PKI directory and build the CA.


./easyrsa init-pki
./easyrsa build-ca

When prompted, set a password for the CA and confirm it.
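
To confirm the CA was created correctly, you can inspect the resulting certificate with openssl (a quick check, assuming you are still in ~/easy-rsa):

openssl x509 -in pki/ca.crt -noout -subject -dates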


Step 6: Generate Server and Client Certificates

Generate the server certificate and key.


./easyrsa gen-req server nopass
./easyrsa sign-req server server

Generate client certificates for the first client (e.g., client1).


./easyrsa gen-req client1 nopass
./easyrsa sign-req client client1
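
For each additional client, repeat the same pair of commands with a unique name (client2 here is just an example):

./easyrsa gen-req client2 nopass
./easyrsa sign-req client client2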

Step 7: Generate Diffie-Hellman Parameters and TLS Key

Generate Diffie-Hellman parameters and a static key for encryption.


./easyrsa gen-dh
openvpn --genkey --secret ta.key

Step 8: Configure the OpenVPN Server

Copy the generated certificates, keys, and other necessary files to the OpenVPN directory.


sudo cp pki/ca.crt pki/issued/server.crt pki/private/server.key pki/dh.pem ta.key /etc/openvpn

Create a server configuration file.

sudo nano /etc/openvpn/server.conf

Add the following configuration:


port 1194

proto udp

dev tun

ca ca.crt

cert server.crt

key server.key

dh dh.pem

;tls-auth ta.key 0

tls-crypt ta.key

topology subnet

server 10.8.0.0 255.255.255.0

push "redirect-gateway def1 bypass-dhcp"

push "dhcp-option DNS 8.8.8.8"

push "dhcp-option DNS 8.8.4.4"

keepalive 10 120

cipher AES-256-GCM

auth SHA256

user nobody

group nogroup

persist-key

persist-tun

status openvpn-status.log

verb 3

Save and exit.


Step 9: Start and Enable the OpenVPN Service

Start the OpenVPN server and enable it to start on boot.


sudo systemctl start openvpn@server
sudo systemctl enable openvpn@server
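
Before continuing, you can verify that the service came up cleanly:

sudo systemctl status openvpn@server
sudo journalctl -u openvpn@server --no-pager -n 50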

Step 10: Configure Firewall Rules

Allow OpenVPN traffic through the firewall.


sudo ufw allow 1194/udp
sudo ufw allow OpenSSH
sudo ufw enable

Enable IP forwarding by editing the following file:


sudo nano /etc/sysctl.conf

Uncomment or add the following line:


net.ipv4.ip_forward=1

Apply the changes:


sudo sysctl -p
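
For VPN clients to reach the internet through the server (as implied by the redirect-gateway push), the server typically also needs a NAT (masquerade) rule for the VPN subnet. A minimal sketch, assuming the public interface is eth0 (check the actual name with the first command); a rule added this way does not survive a reboot unless you persist it, for example via iptables-persistent or ufw's before.rules:

ip route show default
sudo iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE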

Step 11: Generate Client Configuration

Create a client configuration file:


nano client1.ovpn

Add the following content:


client

dev tun

proto udp

remote <YOUR_SERVER_IP> 1194

resolv-retry infinite

nobind

persist-key

persist-tun

remote-cert-tls server

cipher AES-256-GCM

auth SHA256

key-direction 1

verb 3

<ca>
# Insert the content of ca.crt
</ca>

<cert>
# Insert the content of client1.crt
</cert>

<key>
# Insert the content of client1.key
</key>

<tls-crypt>

# Insert the content of ta.key

</tls-crypt>

Replace <YOUR_SERVER_IP> with your server's public IP address.

Export the client configuration to your client device. For example:


scp client1.ovpn user@client-device:/path/to/destination
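
If you omit the inline <ca>, <cert>, <key>, and <tls-crypt> blocks from the template above, you can append them from the Easy-RSA directory instead of pasting them by hand. This is only a sketch and assumes the files are still under ~/easy-rsa and that client1.ovpn sits in the same directory:

cd ~/easy-rsa
{
  echo "<ca>";        cat pki/ca.crt;              echo "</ca>"
  echo "<cert>";      cat pki/issued/client1.crt;  echo "</cert>"
  echo "<key>";       cat pki/private/client1.key; echo "</key>"
  echo "<tls-crypt>"; cat ta.key;                  echo "</tls-crypt>"
} >> client1.ovpn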

Step 12: Connect from a Mobile Client

  1. Download the OpenVPN app on your mobile device.
  2. Transfer the client1.ovpn file to your mobile.
  3. Open the OpenVPN app, import the configuration file, and connect.

Wednesday, November 20, 2024

How to Use the ipinfo Tool

The ipinfo CLI tool allows you to look up IP address details, including bulk lookups, ASN, and geolocation information. Below is a guide to get started quickly:


Installation

  1. Install with Homebrew (macOS/Linux):

    brew install ipinfo-cli

  2. Install via Script: Download and install the tool using the provided script:
    curl -Ls https://github.com/ipinfo/cli/releases/latest/download/ipinfo.sh | sh

    Add the binary to your PATH:

    export PATH="$HOME/.ipinfo/bin:$PATH"

  3. Download Binary Manually (Linux/macOS/Windows):
    Download the appropriate binary from the GitHub releases page.

Basic Usage

Lookup a Single IP Address

To get detailed information about an IP address:

ipinfo <IP_ADDRESS>

Example:

ipinfo 8.8.8.8


Lookup Your Own IP

ipinfo myip


Bulk Lookup

Look up multiple IP addresses from a file by piping them into the tool.

Note: bulk IP lookups require an API token, so make sure you have created an account on https://ipinfo.io and have your token ready.

cat ips.txt | ipinfo -t YOUR_TOKEN

Alternatively, you can register your token with the following command:

$ ipinfo init
1) Enter an existing API token
2) Sign up or log in at ipinfo.io with your browser

Press 1 and paste the token you received when creating your account on ipinfo.io.

Advanced Features

Filter Fields

Use the -f flag to specify the fields you want in the output:

ipinfo 8.8.8.8 -f ip,country,city
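
The field filter can also be combined with the bulk lookup shown earlier; whether the flags behave identically for piped input is worth confirming with the CLI's built-in help, so treat this as a sketch:

cat ips.txt | ipinfo -f ip,city,country -t YOUR_TOKEN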


For further details, refer to the official documentation.



Monday, October 21, 2024

Mimir and VictoriaMetrics

Mimir and VictoriaMetrics are both highly efficient, scalable time-series databases and monitoring solutions often used in Kubernetes environments to handle metrics data. They provide an alternative to Prometheus for managing, storing, and querying large volumes of metrics efficiently.

Mimir in Kubernetes

Grafana Mimir is a large-scale, high-performance time-series database (TSDB) built for long-term metrics storage and highly available, multi-tenant Prometheus use cases. It extends Prometheus' capabilities, providing horizontal scalability and long-term retention of metrics.

Key Use Cases of Mimir in Kubernetes:

  1. Horizontal Scalability:

    • Mimir allows you to scale Prometheus horizontally across multiple Kubernetes nodes. This makes it suitable for large environments where a single Prometheus instance might not suffice due to the high cardinality of metrics or large workloads.
  2. Long-Term Storage:

    • Mimir stores metrics long-term, enabling efficient querying of historical data. In large Kubernetes environments, you might need to store metrics data for extended periods (months/years), and Mimir makes this possible while managing storage space efficiently.
  3. Multi-Tenancy:

    • Mimir supports multi-tenancy, allowing different teams or projects to use the same instance but maintain isolation of their metrics and queries. This is particularly useful in multi-tenant Kubernetes clusters where multiple teams or workloads coexist.
  4. High Availability:

    • By replicating data across multiple nodes, Mimir ensures high availability for querying metrics, even during failures or maintenance events.
  5. Seamless Prometheus Integration:

    • Mimir is fully compatible with Prometheus and can be deployed in Kubernetes as a drop-in replacement or extension for Prometheus to provide scalability, durability, and multi-tenant support. It integrates with Grafana for visualizing metrics.
  6. Federated Clusters:

    • Mimir allows you to aggregate metrics from multiple Prometheus instances across different Kubernetes clusters, giving you a global view of your infrastructure.

Deploying Mimir in Kubernetes:

  • You can deploy Mimir in Kubernetes using Helm charts provided by Grafana Labs or manually configure it to collect metrics from Prometheus and other systems.

Example:


helm repo add grafana https://grafana.github.io/helm-charts
helm install mimir grafana/mimir-distributed
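
Once Mimir is running, Prometheus pushes metrics to it through its remote_write endpoint (/api/v1/push). A minimal sketch of the Prometheus configuration follows; the service name assumes the release name used above and the chart's nginx/gateway service, so adjust it to whatever your deployment actually exposes:

# prometheus.yml (excerpt)
remote_write:
  - url: http://mimir-nginx.default.svc:80/api/v1/push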

VictoriaMetrics in Kubernetes

VictoriaMetrics is another scalable time-series database built as a cost-efficient alternative to Prometheus for high-performance, long-term metrics storage. It’s designed to ingest and store high cardinality time-series data in an efficient way, making it suitable for cloud-native applications in Kubernetes environments.

Key Use Cases of VictoriaMetrics in Kubernetes:

  1. High Ingestion Rate:

    • VictoriaMetrics is optimized for high ingestion rates, allowing it to handle millions of active time series without the performance degradation that Prometheus might experience at such scales.
  2. Long-Term Storage:

    • Like Mimir, VictoriaMetrics is designed for long-term storage of metrics. You can store months or years of metrics data without worrying about the performance overhead, thanks to its efficient compression techniques.
  3. Prometheus Drop-In Replacement:

    • VictoriaMetrics is fully compatible with Prometheus and can act as a replacement for Prometheus' backend storage. It can receive metrics in the same format as Prometheus and works with Prometheus’ remote write and read features.
  4. Query Efficiency:

    • It provides high query performance, enabling fast access to large datasets. This makes it highly efficient for environments where querying across large time spans (e.g., months) is necessary.
  5. Multi-Tenancy:

    • Like Mimir, VictoriaMetrics supports multi-tenancy, allowing multiple teams or environments in Kubernetes to store and query their own isolated metrics.
  6. Cost Efficiency:

    • VictoriaMetrics is known for its efficient use of memory and storage resources, which makes it a cost-effective solution for managing large-scale time-series data in Kubernetes clusters.
  7. Global Monitoring:

    • VictoriaMetrics supports scraping metrics from multiple Prometheus instances or clusters, making it ideal for global monitoring solutions that aggregate data from multiple Kubernetes clusters into a single storage backend.

Deploying VictoriaMetrics in Kubernetes:

  • You can deploy VictoriaMetrics using Helm charts or Kubernetes manifests.

Example Helm chart deployment:


helm repo add vm https://victoriametrics.github.io/helm-charts/
helm install vm-cluster vm/victoria-metrics-cluster

  • Once deployed, VictoriaMetrics will handle metrics ingestion and storage while being fully compatible with Grafana for visualization.
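
Prometheus can likewise forward samples to the cluster's vminsert component via remote_write. A sketch of the Prometheus configuration; the service name is a placeholder derived from the release name above, and VictoriaMetrics cluster expects writes under /insert/<accountID>/prometheus/:

# prometheus.yml (excerpt)
remote_write:
  - url: http://vm-cluster-victoria-metrics-cluster-vminsert.default.svc:8480/insert/0/prometheus/api/v1/write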

Comparison: Mimir vs. VictoriaMetrics in Kubernetes

| Feature | Grafana Mimir | VictoriaMetrics |
|---|---|---|
| Scalability | Horizontally scalable across multiple nodes | Highly scalable, designed for high ingestion rates |
| Long-Term Storage | Yes, optimized for long-term metrics retention | Yes, with efficient compression for long-term data |
| Multi-Tenancy | Full multi-tenant support | Supports multi-tenancy for isolated environments |
| Query Performance | High-performance queries, even at scale | Very fast and efficient queries over large datasets |
| Prometheus Integration | Full Prometheus compatibility | Drop-in Prometheus replacement, remote write/read |
| Cost Efficiency | Optimized for large workloads, with redundancy | Highly cost-efficient, minimal resource usage |
| High Availability | Supports high availability and durability | Not inherently HA, but can be configured for it |
| Federated Clusters | Yes, can aggregate metrics across clusters | Yes, supports scraping from multiple Prometheus instances |

Conclusion

Both Mimir and VictoriaMetrics are powerful tools for observability in Kubernetes environments, especially when dealing with high volumes of time-series data and requiring long-term storage. They provide solutions for scaling beyond Prometheus' typical capacity, enhancing the ability to monitor and troubleshoot large Kubernetes deployments.

  • Mimir is ideal for teams looking for advanced features like horizontal scalability, multi-tenancy, and high availability, particularly when integrating with Grafana.
  • VictoriaMetrics offers an efficient, cost-effective alternative for high-volume metrics storage and retrieval, with fast query performance and reduced resource usage.

Both tools significantly extend the capabilities of Prometheus and are suitable for Kubernetes clusters that need to manage vast amounts of observability data.

What is Helm charts - K8s

Helm charts are a package management tool for Kubernetes, designed to simplify the deployment, configuration, and management of applications in Kubernetes clusters. They encapsulate Kubernetes resources (like pods, services, config maps, etc.) into reusable, configurable packages called charts, making it easier to deploy complex applications with minimal manual intervention.

How Helm Charts Work with Kubernetes:

  1. Helm Chart Structure:

    • A Helm chart is a collection of YAML templates and files that define a Kubernetes application.
    • Each chart consists of a set of Kubernetes manifests (e.g., deployment.yaml, service.yaml) that define the resources to be deployed.
  2. Templating:

    • Helm uses templating to inject dynamic values into these manifests, allowing for flexible and repeatable deployments.
    • Values can be customized using a values.yaml file, which holds default configurations that can be overridden at deployment time.
  3. Repositories:

    • Helm charts are stored in repositories (similar to package repositories in other package managers).
    • Public repositories like the official Helm Hub or Artifact Hub host thousands of pre-configured charts for common applications (e.g., NGINX, MySQL, Prometheus, etc.).
    • You can also create private repositories for custom applications.
  4. Installing Applications with Helm:

    • Using Helm, you can deploy an application with a single command that references a chart, pulling the chart from a repository and deploying it into your Kubernetes cluster.
    • Example:

      helm repo add stable https://charts.helm.sh/stable
      helm install my-release stable/nginx
    • This command pulls the NGINX Helm chart, installs it into the Kubernetes cluster, and creates resources like deployments, services, etc.
  5. Versioning and Rollbacks:

    • Helm keeps track of chart releases and their versions.
    • You can upgrade, roll back, or uninstall chart releases easily, making it ideal for maintaining application lifecycles.
      • Upgrade: helm upgrade <release-name> <chart-name>
      • Rollback: helm rollback <release-name> <revision-number>
  6. Helm Values:

    • values.yaml allows you to define custom configurations. When deploying a Helm chart, you can override these values to change settings like replica counts, image versions, or environment variables.

      helm install my-release stable/nginx --set replicaCount=3
  7. Chart Dependencies:

    • Helm charts can have dependencies on other charts. This enables complex applications to be built using smaller, reusable charts.
    • Dependencies are listed in the Chart.yaml file, and Helm ensures all dependencies are installed alongside the main chart.
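
As a sketch of what a dependency declaration looks like (the chart name, versions, and repository below are illustrative, not taken from the original post):

# Chart.yaml (excerpt)
apiVersion: v2
name: my-app
version: 0.1.0
dependencies:
  - name: postgresql
    version: "15.x.x"
    repository: https://charts.bitnami.com/bitnami

Running helm dependency update then downloads the declared charts into the chart's charts/ directory before installation.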

Typical Use Cases for Helm Charts in Kubernetes:

  • Application Deployment: Helm charts make it easy to deploy applications (databases, web servers, monitoring tools) with a single command.
  • Managing Configurations: By using Helm’s templating system, you can manage environment-specific configurations without hardcoding values into Kubernetes manifests.
  • Complex Workloads: Helm is ideal for deploying applications with many interconnected resources (e.g., microservices), where you need to manage dependencies and complex configurations.
  • Version Control: Helm allows you to version control your application deployments and quickly roll back if something goes wrong.

In summary, Helm charts bring simplicity, modularity, and flexibility to Kubernetes application deployments, offering a powerful way to manage complex workloads and configurations in an efficient, reusable manner.

Key Components of Observability in Kubernetes

Deploying comprehensive observability in Kubernetes clusters involves monitoring key metrics, gathering logs, and tracing distributed transactions across various microservices and components. To achieve this, you’ll need to set up a set of integrated tools to cover the three key observability pillars: metrics, logs, and tracing.

Here’s a guide to deploying comprehensive observability in Kubernetes, along with recommended tools for each aspect.

Key Components of Observability in Kubernetes:

  1. Metrics Monitoring: Track resource usage, performance, and system health.
  2. Logging: Collect and aggregate logs for debugging and auditing purposes.
  3. Distributed Tracing: Trace requests across microservices to diagnose latency and performance issues.
  4. Visualization and Alerting: Use dashboards and alerts to provide actionable insights and notifications.

Step-by-Step Guide to Deploy Comprehensive Observability in Kubernetes

1. Metrics Monitoring with Prometheus and Grafana

Prometheus is the de facto standard for monitoring metrics in Kubernetes. It collects metrics from applications, Kubernetes components, and infrastructure, and stores them for analysis. Grafana is typically paired with Prometheus to visualize metrics through dashboards.

Steps to Deploy Prometheus and Grafana:

  • Install Prometheus:

    • Use Helm (a package manager for Kubernetes) to install the Prometheus stack.

      helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
      helm repo update
      helm install prometheus prometheus-community/kube-prometheus-stack
    • Prometheus will automatically scrape metrics from Kubernetes components such as the API server, Kubelet, etc., using the kube-state-metrics component.
  • Install Grafana:

    • Grafana can be included in the same Helm chart (as part of kube-prometheus-stack) or installed separately.
    • Access Grafana, then add Prometheus as a data source and import Kubernetes-related dashboards from the Grafana community or create custom ones.
  • Alerting: Configure alerting rules in Prometheus to trigger alerts (email, Slack, etc.) when certain conditions are met (e.g., high CPU usage, failing pods).
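
With the kube-prometheus-stack, alerting rules are typically defined as PrometheusRule resources. The expression, threshold, and labels below are illustrative and assume the stack was installed with the release name "prometheus" (which is what its rule selector matches by default):

apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: pod-restart-alerts
  labels:
    release: prometheus   # must match the label selector of the chart's Prometheus
spec:
  groups:
    - name: pod-health
      rules:
        - alert: PodCrashLooping
          expr: increase(kube_pod_container_status_restarts_total[15m]) > 3
          for: 5m
          labels:
            severity: warning
          annotations:
            summary: "Pod {{ $labels.namespace }}/{{ $labels.pod }} is restarting frequently"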

2. Logging with Fluentd/Fluentbit and Elasticsearch (ELK Stack) or Loki

Logs are critical for diagnosing issues in a Kubernetes environment. Fluentd or Fluentbit is commonly used to collect, transform, and route logs to a backend, like Elasticsearch (for ELK stack) or Loki.

Steps to Deploy Logging Stack:

  • Install Fluentd or Fluentbit:

    • Fluentbit is a lightweight log processor, while Fluentd is more feature-rich. Both can be used to collect logs from Kubernetes containers.
    • Install Fluentbit via Helm:

      helm repo add fluent https://fluent.github.io/helm-charts
      helm install fluentbit fluent/fluent-bit
  • Install Elasticsearch and Kibana (for ELK):

    • Elasticsearch will store the logs, and Kibana will visualize them.
    • You can install the ELK stack (Elasticsearch, Logstash, Kibana) or use OpenSearch as an alternative. This can be installed using Helm charts or through managed services from cloud providers (like AWS OpenSearch).
  • Alternative with Loki:

    • Loki is a lightweight, log aggregation system from Grafana Labs that integrates well with Prometheus and Grafana for log visualization.
    • To install Loki via Helm:

      helm repo add grafana https://grafana.github.io/helm-charts
      helm install loki grafana/loki-stack
    • Logs can be visualized directly within Grafana.
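
Once Loki is added as a Grafana data source, logs are queried with LogQL. The label names below depend on how your log collector tags streams, so treat them as examples:

{namespace="production", app="nginx"} |= "error"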

3. Distributed Tracing with Jaeger or OpenTelemetry

Distributed tracing is essential in microservices architectures to track how requests propagate through various services, helping diagnose latency and bottlenecks.

Steps to Deploy Jaeger or OpenTelemetry:

  • Install Jaeger:

    • Jaeger is a popular open-source tracing tool designed for distributed systems. It integrates well with Kubernetes and can trace requests across services.
    • Install Jaeger using Helm:

      helm repo add jaegertracing https://jaegertracing.github.io/helm-charts
      helm install jaeger jaegertracing/jaeger
  • Integrate with Microservices:

    • To capture trace data, instrument your microservices with Jaeger or OpenTelemetry SDKs. If your services are already using frameworks like gRPC or HTTP, these frameworks might already support Jaeger integration.
  • Use OpenTelemetry:

    • OpenTelemetry is a vendor-neutral observability framework that combines metrics, logs, and traces. It can be used in place of or alongside Jaeger.
    • Install OpenTelemetry Collector using Helm:

      helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
      helm install otel open-telemetry/opentelemetry-collector

4. Visualization and Alerting with Grafana

Grafana plays a key role in visualizing observability data from multiple sources, including Prometheus (metrics), Loki (logs), and Jaeger (traces).

  • Configure Dashboards: Import or create dashboards for Kubernetes, and integrate alerts with communication platforms like Slack, email, or PagerDuty.
  • Unified Observability: Grafana allows you to have a unified view of metrics, logs, and traces, making it easier to correlate data across different layers of your Kubernetes cluster.

Popular Tools and Platforms for Kubernetes Observability

  1. Metrics Monitoring:

    • Prometheus: For real-time metrics collection and alerting.
    • Grafana: For visualizing metrics from Prometheus and other sources.
    • Thanos: For long-term storage and scaling of Prometheus metrics.
  2. Logging:

    • Fluentd or Fluentbit: For log collection and forwarding.
    • Elasticsearch, Logstash, Kibana (ELK): For storing, processing, and visualizing logs.
    • Loki: A log aggregation system designed to work well with Prometheus.
  3. Distributed Tracing:

    • Jaeger: For distributed tracing, offering a complete solution for monitoring the flow of requests in microservices.
    • OpenTelemetry: A unified platform for collecting traces, metrics, and logs.
    • Zipkin: Another tracing tool, similar to Jaeger.
  4. Alerting:

    • Alertmanager: Prometheus’ alerting tool.
    • PagerDuty, Opsgenie, Slack: For receiving alerts.

Managed Observability Platforms

In addition to open-source tools, several managed platforms provide comprehensive observability for Kubernetes:

  1. Datadog: Full-stack monitoring and observability for Kubernetes clusters, offering metrics, traces, and logs in a single platform.
  2. New Relic: Offers a Kubernetes observability solution with detailed insights into applications, infrastructure, and logs.
  3. AWS CloudWatch: A fully managed service from AWS for monitoring Kubernetes clusters on EKS.
  4. Azure Monitor: For monitoring AKS clusters and applications.
  5. Google Cloud Operations (formerly Stackdriver): For monitoring GKE clusters.

Conclusion

Deploying observability in Kubernetes involves combining metrics, logs, and tracing tools to provide a full view of the cluster and application health. Prometheus, Grafana, Jaeger, Fluentd/Fluentbit, and Elasticsearch or Loki are the most popular open-source tools for achieving comprehensive observability. Managed solutions like Datadog, New Relic, and CloudWatch provide an all-in-one solution for teams preferring less operational overhead.

Tuesday, October 15, 2024

Kubernetes - Advanced Kubeconfig setup

A Kubeconfig file is a YAML file that stores details about the cluster's API server, the namespace, and user authentication credentials.


Kubernetes tools such as kubectl and Helm follow an order of precedence to locate the active Kubeconfig.


Lowest Priority: Default Location


This location is used unless another option overrides it.


Linux

~/.kube/config

macOS

/Users/<username>/.kube/config

Windows

%USERPROFILE%\.kube\config


Next priority: Environment variable


The KUBECONFIG environment variable overrides the default location by specifying one or more Kubeconfig files. For a single file named config-1, the syntax is:


export KUBECONFIG=config-1


Highest priority: Command line


The --kubeconfig command line option has the highest priority, meaning all other Kubeconfig files are ignored. For example, to use a Kubeconfig file named config-1 located in the ~/.kube/ directory to request Secret objects, you would use the following command:


kubectl --kubeconfig=~/.kube/config-1 get secrets



Merge multiple kubeconfig files


To merge multiple files, list them in the KUBECONFIG environment variable, separated by colons (on Linux):


export KUBECONFIG=~/.kube/config-1:~/.kube/config-2
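
After exporting the variable, the listed files are merged in memory; you can confirm which contexts are now visible and switch between them:

kubectl config get-contexts
kubectl config use-context <context-name>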


Physically merge config files


To physically merge multiple Kubeconfig files into a single file, first add them to the KUBECONFIG environment variable as before. Then, use the --flatten option to combine them, redirect the output to a new file, and use the merged file as your new Kubeconfig. Here's how you can do it:


export KUBECONFIG=config1:config2:config3

kubectl config view --flatten > merged-kubeconfig


You can then set the KUBECONFIG environment variable to point to the newly merged file:


export KUBECONFIG=merged-kubeconfig