Simple and predictable pricing.
We scale with you: pay only for what you use, and get the performance you need when you need it.
All Decodable plans include:
Developer
Team
Enterprise
Compare features
A data engineering service that makes it easy for developers and data engineers to build and deploy real-time data pipelines for data-driven applications.
- Github
- Google
- Github
- Google
- Active Directory / LDAP
- ADFS
- Azure AD
- Github
- Google
- Microsoft Live
- OpenID Connect
- Ping Federate
- SAML
- Community Slack
- Private Slack channel
- Email
Frequently asked questions
Our mission is to put you in control of your destiny and success.
A task is a connection or pipeline worker that performs data collection or processing. All connections and pipelines have at least one task, and frequently more based on the configured parallelism. You control the maximum number of tasks when you create a connection or pipeline. Tasks receive a dedicated CPU and memory allocation in Decodable.
Decodable measures task usage once per minute and bills you for the average task usage each hour, rounded to the nearest whole number of tasks.
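The metering rule above can be sketched as a small calculation. This is an illustrative sketch only, not Decodable's actual billing code; the function name and sample format are assumptions:

```python
def billed_tasks(minute_samples):
    """Return the billed task count for one hour of per-minute usage samples.

    Usage is sampled once per minute; the hour is billed at the average
    sample, rounded to the nearest whole number of tasks.
    """
    average = sum(minute_samples) / len(minute_samples)
    return round(average)

# A job that ran 3 tasks for 40 minutes, then scaled down to 1 task
# for the remaining 20 minutes, averages 2.33 tasks for the hour:
print(billed_tasks([3] * 40 + [1] * 20))  # → 2
```

Because billing uses the hourly average, briefly scaling up does not immediately cost a full extra task-hour.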
No! You’re only billed for active connections and pipelines.
No. The Developer and Team plans include a limited number of concurrent real-time previews, while the Enterprise plan includes an unlimited number of previews.
Yes! You can switch plans at any time. If you select a plan with a lower task limit, any connections or pipelines that put you over that limit will be deactivated. If you transition from a paid plan to a free plan, you’ll be charged for any tasks you’ve already used.
If you’re interested in pre-purchasing task capacity or volume discounts, the Enterprise plan is probably for you! Contact us for a custom quote.
When data is written to a stream, it is retained for a fixed amount of time and up to a fixed size, whichever limit is reached first. This retention is what allows pipelines to tolerate failures, restarts, slow consumers, and other operational tasks you perform, without losing data. When stream data exceeds the retention time or size, it is automatically deleted, from the oldest data to the newest.
You control both time- and size-based retention settings on a per-stream basis. Accounts on plans with retention maximums may not exceed those limits. Size limits are per-account, while time limits are per-stream.
Example: Under the Team plan, every stream may retain data for up to 7 days; however, the total size of all streams may not exceed 100GB.
If you’d like to retain more than 100GB of data, or retain data for longer than 14 days, please contact support.