
Streamfest day 2: Smarter streaming in the cloud and the future of Kafka
Highlights from the second day of Redpanda Streamfest 2025
Check out how we're making it simpler to run Redpanda on Kubernetes, and my vision for the future of this powerful integration.
“Kubernetes makes it easier to maintain and scale Redpanda clusters, resulting in better performance and reliability of your streaming applications.” - Joe Julian
Focusing my career on Kubernetes (K8s) happened completely by accident. I stumbled into it when I applied for an opening with Samsung and, despite not knowing Go and never having worked with Kubernetes before, I got the job. I’d worked a lot with OpenStack, and it became clear pretty quickly that Kubernetes addressed the biggest shortcomings that made OpenStack difficult to operate.
Kubernetes, for the unfamiliar, is an open-source platform for automating the deployment, scaling, and management of containerized applications. It was open-sourced by Google in 2014 and has since evolved into a global project with hundreds of contributors, and it has been adopted by over 60 percent of organizations.
A large portion of the Redpanda Community runs their deployments on Kubernetes, so my role is to lead a team of developers who build the tools that make Redpanda a “product” for Kubernetes. Basically, we build the tooling that makes installation and operation consistent with the Kubernetes ecosystem.
In this post, I’ll explain why you’d run Redpanda on Kubernetes in the first place, give you some solid sources to learn more, and end with a few tips for developers just getting started with Redpanda in K8s.
Kubernetes is a declarative state-resolution engine whose primary job is to reduce MTTR (mean time to resolve) in the event of failures. In a nutshell, faster recovery improves application reliability and also reduces cost. Combined with its declarative model, this makes Kubernetes a valuable platform to run Redpanda on.
Here are a few reasons why:

- Declarative configuration: you describe the desired state of your Redpanda cluster, and Kubernetes continuously works to make reality match it.
- Automated recovery: when a pod or node fails, Kubernetes reschedules the workload on its own, which is what drives MTTR down.
- A unified interface: the same tooling manages Redpanda alongside the rest of your infrastructure, which makes clusters easier to maintain and scale.

Overall, the declarative nature of Kubernetes lets you define and manage your Redpanda infrastructure more efficiently and reliably. Its simple, unified interface for managing infrastructure makes it easier to maintain and scale Redpanda clusters, resulting in better performance and reliability for your streaming applications.
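To make that declarative model concrete, here’s a sketch of what defining a Redpanda cluster as a Kubernetes custom resource can look like. The kind and field names follow our operator’s published CRDs, but treat the exact layout and values as illustrative rather than authoritative:

```yaml
# Illustrative Redpanda custom resource: you declare the desired state,
# and the operator continuously reconciles the cluster toward it.
apiVersion: cluster.redpanda.com/v1alpha2
kind: Redpanda
metadata:
  name: redpanda
spec:
  clusterSpec:
    statefulset:
      replicas: 3          # desired broker count; Kubernetes drives toward it
    storage:
      persistentVolume:
        enabled: true
        size: 20Gi         # per-broker persistent storage
```

If a broker pod dies, nothing in this file changes — the declared state still says three replicas, so Kubernetes brings one back. That is the MTTR win in practice.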
In the long term, I hope we can build an active community that’s involved in driving the future of Kubernetes support for Redpanda. By standardizing on Helm for installation, we’ve opened the installer up to non-developers.
To ensure that we can continue to engage the community without requiring esoteric knowledge, our upgraded Redpanda Operator will build on the Helm chart — using the chart for installation and lifecycle management of the application itself, while adding day-two operations like user, ACL, topic, and schema management. We want to enable GitOps-driven management of every aspect of Redpanda and its connectors.
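As a sketch of those day-two operations, a topic managed this way might be declared as its own custom resource that lives in Git alongside the rest of your manifests. The kind and fields here mirror the operator’s public CRDs, but take the details as illustrative:

```yaml
# Illustrative Topic resource: the operator creates and keeps this topic
# in sync with what's declared, which is what makes it GitOps-friendly.
apiVersion: cluster.redpanda.com/v1alpha2
kind: Topic
metadata:
  name: orders
spec:
  partitions: 6
  replicationFactor: 3
  additionalConfig:
    cleanup.policy: compact   # topic-level Kafka config, declared in-line
```

Users, ACLs, and (eventually) schemas follow the same pattern: one resource per object, reviewed and merged like any other code change.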
I don’t see a finish line, per se. It’s more about expanding Redpanda’s footprint until it can easily be deployed everywhere. We're even considering packaging Redpanda with popular tools for specific use cases, so people can simply click to install and everything will just work. There are so many possibilities for us in the future, and as a developer-first company, the Redpanda Community plays a major role in shaping it.
I’ve hinted at an updated operator. For the first phase, it will let you use your CI/CD platform to deploy and manage Redpanda clusters, users, ACLs, and topics. Next, we’ll add the same for managing schemas. It’s a short leap from there to simply pointing the operator at a Git repository and letting the operator be the CD tool. We also want to add a wrapper Helm chart that provides various commonly used optional tools like external-dns, metallb, cert-manager, etc.
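A hypothetical values file for such a wrapper chart might look like the following. The sub-chart names are the real upstream projects mentioned above, but the toggle layout is purely illustrative — nothing here is a shipped interface:

```yaml
# Hypothetical wrapper-chart values.yaml: enable the optional companion
# tools you need, and configure Redpanda itself under its own key.
cert-manager:
  enabled: true        # TLS certificate automation
metallb:
  enabled: true        # LoadBalancer support on bare metal
external-dns:
  enabled: false       # DNS record management, off by default here
redpanda:
  statefulset:
    replicas: 3
```

The appeal of this shape is that one `helm install` can stand up Redpanda plus the supporting pieces a real deployment usually needs anyway.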
My best advice is to start with a simple local cluster by following our quick-start guide on deploying a local cluster. Learn how it works with the default settings, and understand why some choices were made — like using NodePorts instead of LoadBalancers — by reading the Best Practices. Next, run the same thing on one of the cloud providers and test your expected workload. If you need more performance, consider building a new image for your worker instances that has been tuned with the autotuner. If you have any questions or need help with any of the steps, reach out to us on our Redpanda Community Slack or on GitHub.
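As an illustrative starting point for that first local cluster, a minimal Helm values override can keep the footprint small enough for a laptop. The field names below follow the public Redpanda chart’s layout, but adjust the numbers to your environment:

```yaml
# Illustrative values.yaml for a small local/dev install, applied with
# something like: helm install redpanda redpanda/redpanda -f values.yaml
statefulset:
  replicas: 1            # a single broker is fine for local experimentation
resources:
  cpu:
    cores: 1             # keep CPU reservation modest on a dev machine
  memory:
    container:
      max: 2Gi           # cap container memory for laptop-friendly sizing
external:
  enabled: false         # no external access needed for a local test
```

Once the defaults make sense locally, the same chart and values structure carries over to a cloud deployment — you scale the numbers up rather than learning a new install path.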
If you’re all about Kubernetes and want to help make developers’ lives easier, we’re always on the lookout for engineers to join us in tackling complex problems and building simple solutions. Our team is filled with smart people from around the world who thrive on code, caffeine, and a good dose of nerdy humor. If that sounds like you, check out our Redpanda careers page.
To ping me directly, you can find me on our Redpanda Community on Slack as @joe and on Kubernetes Slack as @joejulian. You can also find me on LinkedIn and Mastodon.