Redpanda in Kubernetes: Get started with this dreamy duo

The most popular container orchestration platform meets the simplest streaming data platform! Learn the basics of Redpanda in K8s to make the most of this dreamy duo.

April 27, 2023

Kubernetes (K8s) is the de facto platform for cloud-native environments, so it’s not surprising that many developers choose it to manage their Redpanda clusters. But when things go wrong, recovery isn’t as simple as “kill it, dump it, and rebuild,” just as with other data-intensive software like databases, messaging systems, and even Apache Kafka®. This is especially true when you’re streaming vast amounts of data at high throughput.

Redpanda is all about making things simple. So to make deploying Redpanda in K8s a breeze, many coffee-fueled hours were spent tuning and optimizing its performance in containers, ensuring quick recovery from broker failure through continuous rebalancing and separating the persistence layer.

To help you better understand how everything works under the hood, this blog walks through the components installed when you deploy Redpanda, so they’ll be much easier to manage, scale, and troubleshoot if needed.

Understanding Redpanda’s components

Redpanda and Kubernetes use similar terms to describe their structure, which makes the architecture simpler to wrap your head around.

Diagram showing the similarities between a Redpanda and K8s cluster

Here’s a brief description of each component:

  • Redpanda broker: The single binary instance of Redpanda with built-in schema registry, HTTP proxy, and message broker capabilities.
  • Redpanda cluster: One or more Redpanda brokers, each aware of all members in the cluster. Provides scale, reliability, and coordination using the Raft consensus algorithm.
  • K8s worker node: A physical or virtual machine that runs containers and does any work assigned to it by the K8s control plane.
  • K8s cluster: A group of K8s worker nodes and control plane nodes that orchestrate the containers running on top of them, with defined CPU, memory, network, and storage resources.
  • Pod: A runtime deployment of the container that encapsulates a Redpanda broker. Pods are ephemeral by nature and share storage and network resources within the same K8s cluster.

The architecture of Redpanda in Kubernetes

We currently recommend using the Redpanda Helm chart to deploy a Redpanda cluster in Kubernetes. The Helm chart makes it straightforward to deploy a multi-broker Redpanda cluster, and it provides the Redpanda Console for easy administration. Whichever way you deploy (Helm or the Redpanda Operator), the fundamental components in Kubernetes stay the same.
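
For reference, here’s a minimal sketch of a Helm-based install. The chart repository URL follows the public Redpanda Helm chart, but the release name and namespace are just examples; check the current Redpanda docs for the exact steps.

    helm repo add redpanda https://charts.redpanda.com
    helm repo update
    # "redpanda" is an example release name and namespace
    helm install redpanda redpanda/redpanda --namespace redpanda --create-namespace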

Here’s a diagram of what your K8s cluster will look like after deployment:

Diagram of a K8s cluster post-deployment

Let’s focus on the Redpanda broker pods for a minute. You can break them down into three logical sections:

  • Redpanda broker: The actual engine of the streaming platform. It’s in charge of streaming events between producers and consumers, writing data into storage tiers, and replicating partitions.
  • The networks: These are for communication between the brokers and clients, both inside and outside the K8s cluster.
  • Persistent storage: For storing events and the shadow index cache, if Tiered Storage is turned on.

You can change the Redpanda configuration by customizing settings in the Helm chart. We recommend defining your configuration settings in your own values.yaml file when installing the chart.
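
As a rough sketch, a values.yaml override might look like the following. The key names match the public Redpanda Helm chart at the time of writing, but the values themselves are illustrative, so verify them against the chart’s values reference.

    # values.yaml (illustrative values only)
    statefulset:
      replicas: 3            # number of brokers
    storage:
      persistentVolume:
        enabled: true
        size: 100Gi
    tls:
      enabled: true

You’d then apply it with helm install (or helm upgrade) using the -f values.yaml flag.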

There’s a lot more to learn about these three sections, so let’s dig a little deeper into each one.

Redpanda brokers

Diagram of Redpanda brokers

Redpanda relies on a StatefulSet to guarantee the uniqueness of each broker. Partitions are replicated and distributed evenly across brokers, so every broker needs to keep its state (and remember who it was) even after a restart to pick up where it left off.

We specified K8s rules that prevent broker pods from being scheduled together on the same K8s node (see the sketch after this list). This ensures:

  • Redpanda brokers don’t compete for the same CPU cores and memory resources on a machine
  • Multiple brokers won’t go down at once if the underlying K8s worker node goes offline
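
In Kubernetes terms, this is expressed with pod anti-affinity. Below is a minimal sketch of such a rule; the exact labels and scheduling mode used by the Redpanda chart may differ.

    # Pod anti-affinity: never co-locate two Redpanda broker pods on one node
    affinity:
      podAntiAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app.kubernetes.io/name: redpanda   # assumed label
            topologyKey: kubernetes.io/hostname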

For the best performance, allocate at least 80% of the K8s node’s memory and CPU cores to each container. Leave the remainder for utilities and K8s processes.
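
For example, on a node with 16 cores and 64Gi of memory, a values.yaml override along these lines would reserve roughly 80% for the broker. The resources key layout follows the public Redpanda chart, but the numbers are illustrative.

    # Illustrative sizing for a 16-core, 64Gi worker node
    resources:
      cpu:
        cores: 12
      memory:
        container:
          max: 51Gi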

The way Redpanda handles cluster-level and topic-level configuration is no different from other deployment methods. Settings are replicated across the Redpanda cluster and stored internally within it. The best way to configure them is through rpk, Redpanda's command line interface (CLI) utility. Broker-specific configuration is mounted from a ConfigMap into local storage under /etc/redpanda.
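
For instance, you can exec into a broker pod and use rpk to read or change cluster properties. The pod and namespace names below are assumptions carried over from the example release name.

    # Inspect and change a cluster property with rpk (names are examples)
    kubectl exec -it redpanda-0 -n redpanda -- rpk cluster config get log_retention_ms
    kubectl exec -it redpanda-0 -n redpanda -- rpk cluster config set log_retention_ms 604800000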

If TLS is enabled, Redpanda takes advantage of the cert-manager operator to generate the certificates (which helps with automatic renewal). By default, we set Let’s Encrypt as the certificate issuer, but you can override it with your preferred certificate issuer. The issued certificate is then stored in a Secret, mounted into the broker pod, and used by the broker for encryption.
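
Overriding the issuer is, roughly, a values.yaml change like the one below. The issuerRef layout follows the public Redpanda chart, while the issuer name itself is a hypothetical cert-manager resource you’d have created separately.

    # Point the default certificate at your own cert-manager issuer
    tls:
      enabled: true
      certs:
        default:
          issuerRef:
            name: my-issuer        # hypothetical ClusterIssuer
            kind: ClusterIssuer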

The networks

Diagram showing how K8s manages networks

K8s normally distributes network traffic among multiple pods of the same Service. But for solutions like Kafka or Redpanda, that just won’t work: clients have to reach the specific broker that hosts the leader of a partition directly.

We use a headless Service to give each pod running a Redpanda broker a unique identity. Intra-broker communication (such as leader election and partition replication) and clients living inside the same K8s cluster use the pod’s internal K8s address/IP to communicate.

For external communication, Redpanda uses a NodePort Service by default. This exposes the listeners on a static port on each K8s node’s IP. It’s best to customize a domain name and set up DNS to mask the IPs. Alternatively, you can set up a LoadBalancer Service for each broker, where the network traffic is routed via the internal K8s controller.
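
In the Helm chart, external listeners are controlled by the external block in values.yaml. Here’s a sketch; the key names come from the public chart, and the domain is an assumption.

    # External access via NodePort, with a custom domain for advertised addresses
    external:
      enabled: true
      type: NodePort
      domain: redpanda.example.com   # assumed DNS domain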

You’ll also have to configure the right advertised Kafka address so clients can locate the brokers correctly. For a deep dive into how this works, check out our blog on "what is advertised Kafka address?". Pay special attention to the section about using the Kafka address and advertised Kafka address in K8s.

Persistent storage

Diagram illustrating persistent storage with Redpanda in K8s

Storage plays the most important role. It’s defined as a PersistentVolume (PV) claimed via a PersistentVolumeClaim (PVC) for each pod running a Redpanda broker. The PVs can be predefined or dynamically provisioned, and they determine both the size of your storage and the type of storage (SSD, NVMe SSD).
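
In values.yaml, that typically means pointing the chart at a StorageClass backed by fast disks. The storage keys below follow the public chart; the class name is a placeholder for whatever your cluster offers.

    # Request a PV per broker from an SSD-backed StorageClass
    storage:
      persistentVolume:
        enabled: true
        size: 500Gi
        storageClass: "premium-ssd"   # placeholder class name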

Storage placement heavily impacts IOPS capacity and the location of your data. Here’s a good rule of thumb when choosing between local and remote storage: the further away from the K8s cluster, the higher the latency.

You can choose to use local, ephemeral storage on the K8s worker node, but there’s a risk of data loss if the node fails. If that happens, Redpanda automatically attempts to replicate the data from other brokers to minimize that risk.

Always consider enabling Tiered Storage to leverage its benefits, including reduced storage cost and improved recoverability. Tiered Storage can help avoid “disk full” problems by offloading old data into the cloud object store automatically, which in turn reduces the local storage requirements for the Redpanda cluster.
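
Tiered Storage is driven by cluster properties. A minimal sketch with rpk is below; the bucket name and region are assumptions, and credential setup (IAM roles or access keys) is omitted.

    # Enable Tiered Storage against an object store (illustrative values)
    rpk cluster config set cloud_storage_enabled true
    rpk cluster config set cloud_storage_bucket my-redpanda-bucket
    rpk cluster config set cloud_storage_region us-east-1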

One thing to keep in mind with Tiered Storage is the partition cache: a portion of the disk space dedicated to temporarily holding data fetched from the object store. Make sure you account for it when planning your disk capacity.

If you absolutely need to expand the size of your storage, note that not all storage types support volume expansion. Volume types that allow resizing include, for example, gcePersistentDisk, awsElasticBlockStore, and Azure Disk.
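
When the StorageClass has allowVolumeExpansion set to true, expanding is a matter of patching each broker’s PVC. The PVC name below assumes the chart’s default volume claim template and the example release name.

    # Grow an existing broker volume (only works on expandable volume types)
    kubectl patch pvc datadir-redpanda-0 -n redpanda \
      -p '{"spec":{"resources":{"requests":{"storage":"1Ti"}}}}'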

Maintaining, monitoring, and optimizing Redpanda in Kubernetes

Deploying your Redpanda cluster in K8s doesn’t mean it’s time to put your feet up. To keep the cluster running smoothly, you’ll want to continuously maintain, monitor, and optimize the system, a practice known as “day two operations.”

Day two activities typically include:

  • Scaling the cluster
  • Upgrading to access new features and bug fixes
  • Changing the replication or partition setting
  • Modifying the segment size
  • Setting back pressure
  • Tweaking data retention policies

One important “day two” operation is the rolling upgrade, which lets you safely move the software release forward while minimizing client impact. Let’s take a closer look at how to manage rolling upgrades in K8s.

Rolling upgrades with Kubernetes

K8s adds a layer of complexity when doing upgrades. Here’s what you need to do (see the sketch after these steps):

  1. Turn off some of K8s’ self-healing features so they won’t interfere with the broker’s plans for partition relocations.
  2. Make sure you have the latest Helm chart from Redpanda, then delete the StatefulSet with cascade set to orphan so it keeps the old brokers running.
  3. Use Helm to upgrade, which redeploys a new StatefulSet with the latest version of Redpanda configured (with the OnDelete strategy so it doesn’t immediately restart the brokers).
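
Steps 2 and 3 boil down to commands along these lines. The release name, namespace, and values file are assumptions carried over from the earlier examples.

    # Orphan the pods so the old brokers keep running, then upgrade the chart
    kubectl delete statefulset redpanda -n redpanda --cascade=orphan
    helm repo update
    helm upgrade redpanda redpanda/redpanda -n redpanda -f values.yaml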

Once that’s set, carry on as you normally would when upgrading a broker: put one broker into maintenance mode, wait for it to drain, let the cluster rebalance, and delete the pod. The new StatefulSet will schedule a new pod running the latest version of the broker. Because the PV is a separate entity, the data remains and can be reused, allowing Redpanda to recover faster and spring back to 100% capacity.
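
Per broker, that cycle looks roughly like this with rpk; broker ID 0 and the pod name are assumptions.

    # Drain one broker, then let the StatefulSet replace its pod
    rpk cluster maintenance enable 0        # broker ID 0 is an example
    rpk cluster maintenance status          # wait until draining finishes
    kubectl delete pod redpanda-0 -n redpanda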

Tutorial: Learn how to deploy Redpanda in Kubernetes

The best way to learn is by doing it yourself, so we’ve set up an interactive scenario that guides you through deploying Redpanda in Kubernetes using the Redpanda Helm chart. The scenario is broken down into four steps:

  1. Deploy Redpanda in K8s
  2. Use rpk to create topics, consumers, and producers
  3. Understand the networking using NodePort and LoadBalancer
  4. Customize and upgrade using Helm, and meet Redpanda Console

The entire tutorial is hosted on Killercoda, an interactive learning platform, and it’s completely free to use. All you have to do is click here to start.

Screenshot of the Redpanda in Kubernetes scenario in Killercoda

Conclusion

This blog went over the components installed after deploying Redpanda in Kubernetes, what they’re used for, and what's important to be aware of. The skinny is that Redpanda has made deploying and operating in Kubernetes simple with tools like Helm. This allows you to take full advantage of the declarative nature of the platform to achieve better flexibility and scalability.

To learn more about Redpanda, check out our documentation and browse the Redpanda blog for tutorials on how to easily integrate with Redpanda. For a more hands-on approach, take Redpanda's free Community edition for a spin!

If you get stuck, have a question, or just want to chat with our engineers and fellow Redpanda users, join our Redpanda Community on Slack.

