Summer’s in full swing, and so are the latest updates from Decodable! We’re excited to bring you a fresh wave of features and improvements designed to supercharge your real-time data platform. From fully managed PyFlink support to enhanced declarative resource management, our summer release is all about making your data movement smoother and more powerful. Dive in and discover how these updates can transform your real-time workloads.
Fully Managed PyFlink
Great news for Python enthusiasts: Support for PyFlink running as a fully managed service on Decodable is now publicly available. Already have a PyFlink job? Upload it through the Decodable UI and give it a try. Want to learn how to build a PyFlink application from scratch instead? Check out our comprehensive example to get started. Additionally, our declarative resource management fully supports managing PyFlink pipelines.
For those new to it, PyFlink is the Python API for Apache Flink. It opens up Flink’s powerful stream processing capabilities to anyone proficient in Python and familiar with libraries like Pandas, NumPy, PyTorch, or TensorFlow. Whether you're developing real-time machine learning pipelines or keeping vector search-enabled databases up to date as part of Retrieval Augmented Generation (RAG) pipelines, PyFlink ensures a smooth integration.
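To give a flavor of the API, here is a minimal, self-contained PyFlink Table API sketch. It is not Decodable-specific, and the table names plus the built-in datagen/print connectors are illustrative stand-ins for real sources and sinks:

```python
from pyflink.table import EnvironmentSettings, TableEnvironment

# Entry point for PyFlink's Table API, in streaming mode.
t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

# A synthetic source using Flink's built-in datagen connector.
t_env.execute_sql("""
    CREATE TABLE orders (
        order_id BIGINT,
        amount DOUBLE
    ) WITH (
        'connector' = 'datagen',
        'rows-per-second' = '5',
        'fields.amount.min' = '0',
        'fields.amount.max' = '1'
    )
""")

# A sink that prints rows to stdout, handy for local experiments.
t_env.execute_sql("""
    CREATE TABLE large_orders (
        order_id BIGINT,
        amount DOUBLE
    ) WITH ('connector' = 'print')
""")

# Continuously filter the stream into the sink; wait() keeps the
# local process alive while the streaming job runs.
t_env.execute_sql("""
    INSERT INTO large_orders
    SELECT order_id, amount FROM orders WHERE amount > 0.9
""").wait()
```

In a managed deployment the same program would read from and write to real connectors rather than datagen and print, but the API surface is the same.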
Declarative Execution
Earlier this year, we introduced our YAML-based declarative resource management feature, an Infrastructure as Code (IaC) solution that supports GitOps workflows: you can keep all Decodable resources in source control and integrate them with your CI/CD pipelines. The feedback has been overwhelmingly positive. Building on this success, we're excited to introduce declarative execution. With this new feature, you specify the desired execution state directly within your YAML file, and the platform automatically reconciles the actual runtime state to match. This simplifies your workflow, making it even more seamless and efficient.
Here is an example `example-sql-pipeline.yaml` for a pipeline:
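The sketch below is illustrative rather than definitive: its overall shape follows Decodable's declarative resource format, but treat the exact field names as assumptions and confirm them against the declarative resource reference.

```yaml
# Illustrative sketch of a declaratively managed SQL pipeline.
# Field names approximate Decodable's declarative resource format;
# consult the declarative resource reference for the exact schema.
kind: pipeline
metadata:
  name: example-sql-pipeline
  description: Filter high-value orders into a dedicated stream
spec:
  type: SQL
  sql: |
    INSERT INTO high_value_orders
    SELECT order_id, customer_id, amount
    FROM orders
    WHERE amount > 1000
  execution:
    # Desired execution state: the platform reconciles the actual
    # runtime state to match (here, create the pipeline and run it).
    active: true
```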
Run the command below and the platform will ensure this pipeline is created and running:
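Assuming the Decodable CLI is installed and authenticated, applying the file is a single command (`apply` is the declarative entry point; run `decodable help` if your CLI version differs):

```sh
decodable apply example-sql-pipeline.yaml
```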
Check out our blog for a step-by-step guide with an end-to-end real-time ETL use case.
Expanded RDBMS Integration for Real-time ETL
Our latest connectors focus on RDBMS integrations, including new CDC connectors for a broader range of relational source systems.
Running analytics directly from operational databases is simple but can overwhelm your system, leading to degraded service and poor user experience. Our RDBMS connectors solve this by offloading heavy analytical computations to Decodable pipelines, ensuring continuous streaming updates while your existing databases efficiently handle queries. If you already use an analytical system or data warehouse, our new CDC connectors enable real-time data ingestion from a broader range of source systems.
Setup is easy: just provide the connectivity configuration and select the resources to connect. All of these connectors support change streams and multiple streams, optimizing for both low latency and resource efficiency.
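As a rough illustration, a declaratively managed CDC source connection can look like the sketch below. The connector name, property keys, and stream mapping layout are assumptions patterned on the declarative resource format, so check the individual connector reference for the real configuration keys:

```yaml
# Illustrative sketch of a CDC source connection for a Postgres
# database; property keys and the stream mapping layout are
# assumptions -- see the connector reference for exact names.
kind: connection
metadata:
  name: orders-db-cdc
spec:
  connector: postgres-cdc
  type: source
  properties:
    hostname: db.example.com
    port: 5432
    database-name: shop
  stream_mappings:
    # One connection can feed multiple streams, one per table,
    # which keeps latency low without duplicating database load.
    - stream_name: orders
      external_resource_specifier:
        schema-name: public
        table-name: orders
    - stream_name: customers
      external_resource_specifier:
        schema-name: public
        table-name: customers
```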
Snapshot Management UI
Managing pipeline snapshots is now easier than ever. Whether you want to set up a cron job for periodic backups of your pipeline state or trigger a one-time snapshot before making changes, our UI makes the process simple and straightforward.
This is paired with the ability to easily restart a pipeline from any snapshot. Together, these features make it easier than ever to manage upgrades or reprocess data with deterministic results.
Improved Connector Configuration Interface
It’s no longer necessary to open yet another tab in your browser to figure out how to configure a connection. We've baked our connector documentation right into the app, giving you all the info you need at your fingertips.
Docs Updates
New Home Page
Our docs home page 🏠 has a new look! The intuitive layout provides quick access to:
- ✅ Begin your journey with clear, step-by-step instructions.
- 🧑‍💻 Learn through practical, hands-on examples.
- 📖 Dive into in-depth documentation on connectors, APIs, and more.
Visit Decodable Docs to start exploring.
SQL Function References
We’ve made a major improvement to the SQL function documentation with:
- 📝 Proper code formatting
- 📚 Separate pages by function category
Developer's Hub
Discover the Decodable developer experience through our newly published in-depth blogs:
- How to get data from Apache Kafka to Apache Iceberg on S3 with Decodable
- Decodable vs. Amazon MSF: Getting Started with Flink SQL
- Denormalizing Change Event Streams for Document Stores
Don’t forget to subscribe to our Checkpoint Chronicle newsletter to stay ahead in the data and streaming space. Curated by industry experts Gunnar Morling and Robin Moffatt, each monthly issue delivers a roundup of the most interesting developments, insights, and innovations in real-time data processing.