Building the future of Kubernetes support for Redpanda

Check out how we're making it simpler to run Redpanda on Kubernetes, and my vision for the future of this powerful integration.

By Joe Julian on April 20, 2023

TL;DR Takeaways:
What are the benefits of using Redpanda on Kubernetes?

Using Redpanda on Kubernetes combines the powerful capabilities of a modern, high-performance data streaming engine with the orchestration advantages of Kubernetes. This combination allows you to create a flexible, resilient, and scalable infrastructure that’s ideal for various streaming use cases. You can deploy Redpanda on Kubernetes in minutes, either locally or in your cloud of choice.

How can I deploy a Redpanda cluster in Kubernetes?

This blog recommends using the Redpanda Helm chart to deploy a Redpanda cluster in Kubernetes. The Helm chart makes it straightforward to deploy a multi-broker Redpanda cluster and provides Redpanda Console for easy administration. You can customize the Redpanda configuration by defining your settings in your own values.yaml file when installing the chart.
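As a sketch of what that looks like in practice (the chart repository URL and value names here follow the public Redpanda docs, but verify them against the current chart before relying on them):

```shell
# Add the Redpanda chart repository and refresh the local index.
helm repo add redpanda https://charts.redpanda.com
helm repo update

# values.yaml holds your overrides, for example:
#   statefulset:
#     replicas: 3
helm install redpanda redpanda/redpanda \
  --namespace redpanda --create-namespace \
  --values values.yaml
```

Keeping your overrides in values.yaml (rather than passing `--set` flags ad hoc) means the file can live in version control alongside the rest of your infrastructure.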

Learn more at Redpanda University
“Kubernetes makes it easier to maintain and scale Redpanda clusters, resulting in better performance and reliability of your streaming applications.” - Joe Julian

Focusing my career on Kubernetes (K8s) happened completely by accident. I stumbled into it when I applied for an opening with Samsung and, despite not knowing Go and never having worked with Kubernetes before, I got the job. I’d worked a lot with OpenStack, and it became clear pretty quickly that Kubernetes solved the biggest shortcomings that made OpenStack difficult.

Kubernetes, for the unfamiliar, is an open-source project for automating the deployment, scaling, and management of containerized applications. It began at Google in 2014 and has since grown into a global movement with hundreds of contributors, adopted by over 60 percent of organizations.

A large portion of the Redpanda Community runs their deployments on Kubernetes, so my role is to lead a team of developers to build the tools that make Redpanda a “product” for Kubernetes. Basically, we make the tools to make installation and operation consistent with the Kubernetes ecosystem.

In this post, I’ll explain why you’d run Redpanda in Kubernetes in the first place, point you to some solid sources to learn more, and end with a few tips for developers just getting started with Redpanda in K8s.

Why run Redpanda in Kubernetes?

Kubernetes is a declarative state-resolution engine whose primary job is reducing MTTR (mean time to resolve) in the event of failures. In a nutshell, reducing recovery time improves application reliability and reduces cost. That declarative nature is also what makes Kubernetes a valuable platform to run Redpanda on.

Here are a few reasons why:

  • GitOps: Because the state of the application is defined as declarative resources, this makes it easy to version control and automate the deployment and management of Redpanda clusters.
  • Immutable infrastructure: Kubernetes treats your infrastructure as immutable. This means that once you deploy your Kubernetes cluster, any changes to the infrastructure are made by creating new resources rather than modifying existing ones. This approach ensures consistency and predictability. Redpanda benefits by having those changes abstracted away, allowing it to do what it does best.
  • Self-healing: Kubernetes is designed to be self-healing. This means that if a pod in your Redpanda cluster fails or becomes unavailable, Kubernetes will automatically replace it with a new one, ensuring that your cluster continues to operate as expected.
  • Scalability: Kubernetes allows you to easily scale your Redpanda cluster up or down by adding or removing pods. This is done by simply updating the desired state of your infrastructure in a YAML file, rather than manually provisioning or de-provisioning resources.
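For example, the scaling described in the last bullet is just a change to the chart's desired replica count; the field name follows the Redpanda Helm chart's values, but check it against your chart version:

```shell
# Scale from 3 to 5 brokers by updating the desired state, not by
# provisioning machines by hand. Kubernetes reconciles the rest.
helm upgrade redpanda redpanda/redpanda \
  --namespace redpanda \
  --reuse-values \
  --set statefulset.replicas=5
```

In a GitOps workflow you'd make the same change by committing the new replica count to values.yaml instead of using `--set`.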

Overall, the declarative nature of Kubernetes allows you to define and manage your Redpanda infrastructure in a more efficient and reliable way. It provides a simple and unified interface for managing infrastructure, which makes it easier to maintain and scale Redpanda clusters, resulting in better performance and reliability of your streaming applications.

The vision for our Redpanda on Kubernetes integration

In the long term, I hope we can build an active community that’s involved in driving the future of Kubernetes support for Redpanda. By focusing on using Helm for installation, we have opened up the installer to non-developers.

To ensure that we can continue to engage the community without requiring esoteric knowledge, our upgraded Redpanda Operator will build on the Helm chart, using the chart for the installation and lifecycle management of the application itself while adding day-two operations such as user, ACL, topic, and schema management. We want to enable GitOps-driven management of every aspect of Redpanda and its connectors.
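To make that concrete, here's a sketch of what operator-managed, GitOps-friendly resources could look like. The API group, kinds, and fields below are illustrative assumptions, not a published API; check the operator's documentation for the real resource definitions.

```yaml
# Illustrative only: a cluster resource whose spec mirrors the Helm
# chart's values, plus a day-two Topic resource managed declaratively.
apiVersion: cluster.redpanda.com/v1alpha1
kind: Redpanda
metadata:
  name: redpanda
spec:
  clusterSpec:          # hypothetical field mirroring chart values
    statefulset:
      replicas: 3
---
apiVersion: cluster.redpanda.com/v1alpha1
kind: Topic             # hypothetical day-two resource
metadata:
  name: orders
spec:
  partitions: 6
  replicationFactor: 3
```

Because both resources are plain YAML, they can be committed to Git and applied by any CD tool, which is exactly the workflow the operator is meant to enable.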

I don’t see a finish line, per se. It’s more about expanding Redpanda’s footprint until it can easily be deployed everywhere. We're even considering packaging Redpanda with popular tools for specific use cases, so people can simply click to install and everything will just work. There are so many possibilities for us in the future, and as a developer-first company, the Redpanda Community plays a major role in shaping it.

Exciting Redpanda-Kubernetes developments in the works

I’ve hinted at an updated operator. In the first phase, it will let you use your CI/CD platform to deploy and manage Redpanda clusters, users, ACLs, and topics. Next, we’ll add the same for managing schemas. From there it’s a short leap to simply pointing the operator at a Git repository and letting the operator be the CD tool. We also want to add a wrapper Helm chart that provides commonly used optional tools such as external-dns, MetalLB, and cert-manager.

Getting started with Redpanda in K8s for real-time streaming

My best advice is to start with a simple local cluster by following our quickstart guide on deploying a local cluster. Learn how it works with the default settings, and read the Best Practices guide to understand why some choices were made, like using NodePorts instead of LoadBalancers. Next, run the same thing on one of the cloud providers and test your expected workload. If you need more performance, consider building a new image for your worker instances tuned with the autotuner. If you have any questions or need help with any of the steps, reach out on our Redpanda Community Slack or on GitHub.
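If you want a feel for the flow before diving into the docs, a minimal local run might look like the following. This assumes you have kind, Helm, and kubectl installed; the command shapes follow the Redpanda quickstart, but verify them against the current docs.

```shell
# 1. Create a local Kubernetes cluster to experiment in.
kind create cluster --name redpanda-quickstart

# 2. Install Redpanda with the Helm chart's default settings.
helm repo add redpanda https://charts.redpanda.com
helm install redpanda redpanda/redpanda \
  --namespace redpanda --create-namespace

# 3. Exercise the cluster with rpk from inside a broker pod.
kubectl -n redpanda exec -it redpanda-0 -- rpk topic create demo
kubectl -n redpanda exec -it redpanda-0 -- rpk topic produce demo
```

Once you're comfortable with the defaults, swap the kind cluster for a managed cloud cluster and repeat the same steps against your real workload.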

Join the team at Redpanda

If you’re all about Kubernetes and want to help make developers’ lives easier, we’re always on the lookout for engineers to join us in tackling complex problems and building simple solutions. Our team is filled with smart people from around the world who thrive on code, caffeine, and a good dose of nerdy humor. If that sounds like you, check out our Redpanda careers page.

To ping me directly, you can find me on our Redpanda Community on Slack as @joe and on Kubernetes Slack as @joejulian. You can also find me on LinkedIn and Mastodon.

Joe Julian
Author
Related articles

  • Real-time AI: what is it and why it needs streaming data, by Jenny Medeiros (September 4, 2025)
  • Integrating OpenID Connect with Redpanda, by Ben Barkhouse (September 2, 2025)
  • Setting up Redpanda observability in Datadog, by Kavya Shivashankar (August 27, 2025)