August 16, 2023 · 8 min read

Introduction to Apache Flink and Stream Processing

Most of the data produced and consumed by businesses and their customers is created in continuous streams. Until recently, however, our data processing technology was limited to processing data in batches, often overnight. Apache Flink® is an open source engine for processing streaming data, serving the fast-growing need for low-latency, high-throughput handling of continuous streams of data.

In this article, we'll explore what Flink is, what differentiates it from traditional batch processing, how it's used to drive modern applications and services, and why it matters in the context of today's business needs, and we'll touch on how it works at a high level. This article is intended for technical decision makers as well as developers who are looking to build applications using real-time data.

Batch processing inevitably introduces latency and complexity into the flow of data, since batches must be collected and then often moved to another application for processing. Stream processing offers several advantages over batch, including lower latency, higher throughput, easier scaling, and reduced demands on infrastructure and developers. 

Stream processing enables organizations to identify and act on meaningful events as they occur. Use cases enabled by stream processing include fraud detection, financial analysis, social media analysis, recommendation engines, and handling IoT data.

Although the data industry has been using batch processing for decades, stream processing technology and demand for it have both increased to a point where many organizations depend on stream processing for mission-critical applications and functions. Apache Flink is the premier open source engine for processing streaming data.

Stream processing engines like Flink work in a straightforward manner: data, whether from a continuous stream, from stored historical data, or from other sources, is ingested. The application then performs some kind of computation or transformation, the result of which is stored as the state in Flink. From there, the data stream continues as an output stream.

Tools and processes for stream processing are not yet as mature as for batch, although that is changing. New products and platforms, including Decodable, make it easier to work with and gain the advantages of Flink and data stream processing.

What Is Apache Flink?

Apache Flink is an open source engine for processing streaming data, serving the fast-growing need for low-latency, high-throughput handling of continuous streams of data.

Much of the data produced and consumed by modern applications is created in continuous streams; common examples include online transactions, IoT devices, and application data logs. It is only in the past decade or so, however, that technology for stream processing has been broadly available. So while data is very often produced continuously, businesses have been forced to rely on computer systems that required it to be processed in discrete batches. As the volume and velocity of data continue to increase and users’ expectations for real-time and near-real-time responses and insights become more demanding, many organizations are transitioning to stream processing for their mission-critical applications and use cases.

Flink processes real-time data, an approach often called stream processing, in which an application ingests and listens to events, updating its state, updating its models, or performing some other transformation on that information. It is also suited to batch processing, working with static or historical data in the same manner. And Flink performs well for the expanding use case of event-driven applications such as fraud detection and real-time analytics. In addition to low latency and high throughput, Flink provides stateful stream processing, where operations in a dataflow remember information across multiple events. Flink also leverages its distributed architecture to enable scaling up or down depending on current needs.
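
To give a rough flavor of what stateful stream processing looks like in Flink's DataStream API, the sketch below keeps a running total for each key. This is a minimal example under assumed inputs (a stream of user IDs and amounts, keyed by user); the class and field names are illustrative, not from any particular application.

```java
// A minimal sketch of Flink keyed state: a per-user running total that is
// remembered across events. The (userId, amount) event shape is an assumption.
import org.apache.flink.api.common.functions.RichFlatMapFunction;
import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.util.Collector;

public class RunningTotal
        extends RichFlatMapFunction<Tuple2<String, Double>, Tuple2<String, Double>> {

    // One ValueState slot is scoped to each key (here, each userId).
    private transient ValueState<Double> total;

    @Override
    public void open(Configuration parameters) {
        total = getRuntimeContext().getState(
                new ValueStateDescriptor<>("runningTotal", Double.class));
    }

    @Override
    public void flatMap(Tuple2<String, Double> event,
                        Collector<Tuple2<String, Double>> out) throws Exception {
        Double current = total.value();                 // null on the first event for this key
        double updated = (current == null ? 0.0 : current) + event.f1;
        total.update(updated);                          // persisted by Flink across events
        out.collect(Tuple2.of(event.f0, updated));
    }
}
```

Applied with something like `events.keyBy(e -> e.f0).flatMap(new RunningTotal())`, Flink manages, checkpoints, and restores that per-key state automatically.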

Apache Flink works by taking in data streams and performing computations on them. The results of the computations are represented as states within Flink, and can be output as data streams to other systems for further processing. It provides a unified programming model for both streaming and batch data analysis, and its core is a distributed streaming dataflow engine that optimizes, schedules, and executes computations across distributed clusters.
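
That unified model is visible in the API itself: the same program can be executed over an unbounded stream or a bounded historical dataset simply by switching the runtime mode. A minimal sketch follows; the elements and job name are placeholders.

```java
// A minimal sketch of Flink's unified streaming/batch model.
import org.apache.flink.api.common.RuntimeExecutionMode;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class UnifiedModeExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // STREAMING processes records continuously as they arrive; BATCH runs
        // the identical pipeline over bounded input and then terminates.
        env.setRuntimeMode(RuntimeExecutionMode.STREAMING); // or RuntimeExecutionMode.BATCH

        env.fromElements("a", "b", "a")
           .map(String::toUpperCase)
           .print();

        env.execute("unified-pipeline");
    }
}
```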

One of the key advantages of Flink is its ability to process data in real time. This is made possible by its in-memory processing engine, which allows it to perform many operations on data directly in memory, without the need to read from and write to disk or to retrieve and store data from a database. However, Flink is able to gracefully degrade to disk in case it runs out of available memory, making the system resilient and scalable to large data sizes.
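
How far state may grow beyond memory is a configuration choice. One common setup, sketched below, uses the RocksDB state backend (shipped as the separate flink-statebackend-rocksdb dependency) so that state spills to local disk rather than exhausting the heap; the checkpoint interval and the toy pipeline are arbitrary example values.

```java
// A sketch of trading memory for disk: the RocksDB backend keeps operator
// state on local disk, bounding state size by disk capacity, not JVM heap.
import org.apache.flink.contrib.streaming.state.EmbeddedRocksDBStateBackend;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class StateBackendExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        env.setStateBackend(new EmbeddedRocksDBStateBackend());
        env.enableCheckpointing(10_000); // snapshot state every 10 s for fault tolerance

        env.fromElements(1, 2, 3, 4)
           .keyBy(n -> n % 2)   // key by parity; per-key aggregates live in RocksDB
           .sum(0)              // position 0 sums the value itself for basic types
           .print();

        env.execute("rocksdb-backed-job");
    }
}
```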

These qualities allow Flink to address three main use cases: data analytics, event-driven applications, and streaming ETL. Analytics and event-driven applications are discussed in detail below. Flink’s ability to replace traditional ETL processing comes from its ability to perform transformations on data continuously, rather than waiting for a batch, and to write the results downstream.

Another core benefit of Flink is that it reduces the demands placed on data infrastructure, maintenance, and developers’ time. And because data and state are kept in a single, local location, scaling is much easier to achieve by adding machines, without refactoring applications or updating schemas.

How Flink Is Used

Data stream processing, sometimes called event stream processing, is used to detect patterns, trends, and other insights from high-volume, fast-moving data sources. Stream processing enables organizations to identify and act on meaningful events as they occur, rather than grouping data and collecting it at some predetermined interval, as with batch processing.

Flink’s abilities and speed have made it attractive for a wide range of use cases. Its users include Alibaba (which relies on Flink during China’s Singles’ Day shopping event, with more than $80 billion of ecommerce in a single day), eBay (for real-time metrics and analytics), and Lyft (for real-time pricing and demand forecasting). A more in-depth look at Flink users is available on the Apache Flink website.

There are several broad groups of applications that can benefit from a stream-based approach, depending on the particular business needs.

  • Event-driven applications. Events from a data stream are ingested and trigger an action, such as a state update, a computation, or an alert. Rather than writing data to and reading it from a transactional database, event-driven applications perform operations in application memory or on local storage.
  • Data analytics applications. While analytics have traditionally been performed as batch queries, Flink allows for real-time analytics by ingesting event streams and updating results as those events are consumed. For example, a dashboard can query the internal state of the application as it continuously evolves.
  • Data pipeline applications. An alternative to moving data via ETL pipelines, a data pipeline can clean, transform, enrich, aggregate, and move data, all while working in a continuous streaming mode. These pipelines can handle continuously produced data and process it with low latency (see the sketch following this list).
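
As a concrete sketch of the pipeline case, the example below reads raw records from a Kafka topic, cleans and normalizes them, and emits them onward as one continuous job. It assumes the Kafka connector dependency (flink-connector-kafka); the broker address, topic name, and record format are placeholders.

```java
// A minimal continuous data pipeline: ingest from Kafka, clean, transform,
// and emit. Topic, broker, and record format are illustrative assumptions.
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class StreamingEtl {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        KafkaSource<String> source = KafkaSource.<String>builder()
                .setBootstrapServers("localhost:9092")
                .setTopics("orders-raw")
                .setStartingOffsets(OffsetsInitializer.latest())
                .setValueOnlyDeserializer(new SimpleStringSchema())
                .build();

        env.fromSource(source, WatermarkStrategy.noWatermarks(), "orders-raw")
           .filter(line -> !line.isBlank())  // clean: drop empty records
           .map(String::trim)                // transform: normalize whitespace
           .print();                         // stand-in for a real sink (warehouse, topic, ...)

        env.execute("streaming-etl");
    }
}
```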

A good use case example for Flink is fraud detection, which is a highly event-driven application. By analyzing transaction data as it comes in, data stream processing can help detect potential fraudulent activity in real time, allowing financial institutions and consumers to take immediate action to prevent losses. Credit card transactions form a massive, rapid, and unending stream of data. Fraud detection looks for patterns in those transactions and identifies purchases that fall outside of those patterns. With more than one billion credit card transactions happening every day, the task requires very low latency and very high throughput. Even a very fast batch processing approach could not keep up, but a stream processing approach, which does not need to move data around for processing, can.
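
To make the pattern concrete, here is a heavily simplified sketch in Flink's DataStream API, in the spirit of the fraud-detection walkthrough in the Flink documentation. The rule (a large charge immediately after a tiny one) is a toy stand-in for a real model, and the Transaction and Alert types are assumptions for the example.

```java
// A toy fraud rule over keyed, stateful events: flag a large charge that
// immediately follows a very small one on the same card.
import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
import org.apache.flink.util.Collector;

public class FraudDetector extends KeyedProcessFunction<String, Transaction, Alert> {

    // Last transaction amount seen for the current card (one slot per key).
    private transient ValueState<Double> lastAmount;

    @Override
    public void open(Configuration parameters) {
        lastAmount = getRuntimeContext().getState(
                new ValueStateDescriptor<>("lastAmount", Double.class));
    }

    @Override
    public void processElement(Transaction tx, Context ctx, Collector<Alert> out)
            throws Exception {
        Double previous = lastAmount.value();
        if (previous != null && previous < 1.00 && tx.amount > 500.00) {
            out.collect(new Alert(tx.cardId, tx.amount)); // suspicious pattern
        }
        lastAmount.update(tx.amount);
    }
}

// Minimal illustrative event types.
class Transaction { String cardId; double amount; }
class Alert {
    String cardId; double amount;
    Alert(String cardId, double amount) { this.cardId = cardId; this.amount = amount; }
}
```

Keyed by card, as in `transactions.keyBy(tx -> tx.cardId).process(new FraudDetector())`, each card's state is isolated, and Flink distributes the keys, and their state, across the cluster.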

Other prominent use cases for Flink include:

  • Internet of Things (IoT): In the context of IoT, data stream processing can be used to analyze data from sensors and other devices in real time, allowing businesses to better monitor and control their operations or react to changes in the environment.
  • Financial market analysis: By analyzing financial data as it is generated, data stream processing can help traders and investors make more informed decisions and respond to market changes.
  • Social media analysis: Data stream processing can be used to analyze data from social media platforms in real time, allowing businesses to track and quickly respond to customer sentiment and feedback.
  • Real-time recommendations: Stream processing allows for real-time analysis of an online customer’s behavior and shopping cart, as well as providing a view of what other users are buying. This allows systems to make faster, more accurate recommendations.

Why Flink Matters

At Decodable, we believe that real-time streaming is the future of data.

Understanding that future, though, begins by understanding the past. In the earliest days of data processing, data was gathered in actual, physical batches of punch cards, which were carried to a mainframe, queued up, and processed together. This process was dictated by the computing technology of the day, which was thousands or millions of times slower than today. That model of batch processing, both the technological and the mental model, carried over to the modern era of databases, which began in the 1970s with the invention of the relational database. 

As the volume of data took off exponentially, the continuous streams of data produced by our devices and activities seemed too overwhelmingly large to be valuable. Observers called it “data exhaust” and “data smog.” While the early 2000s saw the rise of “Big Data,” the truth is that truly robust tools to handle that continuous data and gain value from it were not yet available.

At the same time, businesses and consumers came to expect fast, often instantaneous, responses to their requests or questions. There is no doubt that batch processing has come a long way. Thanks to the ever-increasing speed of computation and the development of micro-batch processing, it can sometimes approximate true real-time processing. However, there is still latency, often to a degree that impacts the needs of data consumers and their productivity. In many cases, businesses must wait hours, days, or even longer for things like sales analytics to be processed.

The real world does not produce data in batches. The data streams from our mobile devices, the sensors in our cars, the IoT devices in factories, the activity logs of our applications, never stop. There are vanishingly few examples of “natural,” neat divisions that can be processed in a batch.

This is the contradiction that has persisted for decades: data is mostly produced in continuous streams, but is still mostly processed in batches. This disconnect between the reality of streaming data and the methods of handling it has a number of negative effects. For example, metrics or other insights into a stream of data are not produced in true real time. There is always a delay, some amount of latency, as an arbitrary batch of data is created and processed, whether that’s a couple of seconds or a couple of days. It puts pressure on software developers to hide that fundamental real-time vs. batch contradiction through a kind of digital sleight of hand, processing batches as fast as possible to disguise the time interval between them. It puts more demands on data infrastructure, since data needs to be moved to a data lake or data warehouse through ETL processes. And all of that additional work and maintenance consumes developers’ time, creating substantial opportunity costs in the form of new, innovative work that developers never get to take on.

The core of this contradiction is that for many types of data and many use cases, batch processing is not a good fit. That doesn’t mean that no use cases are appropriate for batch processing. Applications such as payroll and overnight trade settlement may remain appropriate for batch processing. Robert Metzger, one of the original creators of Flink and now a software engineer at Decodable, shared one rule of thumb: “If your queries are changing faster than your data, then a batch approach might make sense. The streaming case is when your data is changing faster than your queries.”


Stream Processing vs. Batch Processing

The demand for real-time data processing continues to grow, along with the need for the real-time insights and business intelligence that can come from it. Zion Market Research projects the market for stream processing to reach $2.6 billion by 2025.

Increasing demand also points to the need for more developers fluent in stream processing. Moving from batch processing to stream processing requires not only fluency with new tools, but a significant mental shift as well. Indeed, the longer a developer has been immersed in batch processing, the harder it can be to make the shift. It demands a new way of thinking about data and data flows. With the advent of SQL for stream processing, this transition has become easier in recent years, making that a viable path for developers to consider.

How Flink Works

Stream processing engines like Flink work in a straightforward manner. Input data, whether from a continuous stream, from stored historical data, or from other sources, is ingested. The application then performs some kind of computation or transformation, the result of which is stored as the state in Flink. From there, the data stream continues as an output stream, often moving to a new processing pipeline or to a data sink, such as a database or another application.

Two sample architectures shown below, for an event-driven application and a real-time analysis application, illustrate more specifically how data flows in a Flink-centered system.

Event-Driven Application Architecture

A push notification system is an example of an event-driven application. Imagine a car-sharing service’s app ingesting a data stream indicating where cars are and their predicted arrival times. When a car reports that its arrival will be late, the app sends a push notification to the passenger.

In a traditional event-driven application, events would first be written to and read from a transactional database, the application would compute or transform the data, and an action would be triggered. The time required for the write and computation creates some latency before the notification is pushed out.

(Figure: Event-driven application. Source: https://flink.apache.org/usecases.html)

Using Flink, the computation happens on a continuous basis, before any read or write to a database, greatly reducing processing time and thus latency. In this case, the push notification goes out in real time.
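
A minimal sketch of that flow is shown below, assuming a stream of car-status events with a predicted delay in minutes; the event type, the five-minute rule, and all names are illustrative assumptions.

```java
// Detect late cars directly in the stream and emit a notification, with no
// database round trip in the hot path.
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class LateArrivalNotifier {

    // Minimal illustrative event type (a valid Flink POJO).
    public static class CarStatus {
        public String rideId;
        public int delayMinutes;
        public CarStatus() {}
        public CarStatus(String rideId, int delayMinutes) {
            this.rideId = rideId;
            this.delayMinutes = delayMinutes;
        }
    }

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        env.fromElements(new CarStatus("ride-1", 0), new CarStatus("ride-2", 7))
           .filter(status -> status.delayMinutes > 5)          // the event-driven trigger
           .map(status -> "Notify rider on " + status.rideId
                   + ": about " + status.delayMinutes + " min late")
           .returns(Types.STRING)                              // help Flink's type inference
           .print();                                           // stand-in for a push-notification sink

        env.execute("late-arrival-notifier");
    }
}
```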

Real-Time Analytics Application Architecture

Traditionally, analytics have been run as batch processing jobs, where a set amount of data is collected and processed, and the results are stored. With stream processing, the analytics are computed within the stream processing engine, eliminating the latency introduced by batching, and can be output directly to a dashboard as well as stored in a database.
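
For instance, a continuously updated metric such as page views per minute can be computed in-stream with a windowed aggregation and pushed straight to wherever the dashboard reads from. A minimal sketch follows; the page names and the one-minute window are arbitrary choices.

```java
// Continuous analytics: per-page view counts over tumbling one-minute windows.
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.assigners.TumblingProcessingTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;

public class PageViewCounts {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        env.fromElements(Tuple2.of("/home", 1), Tuple2.of("/cart", 1), Tuple2.of("/home", 1))
           .keyBy(view -> view.f0)                                      // group by page
           .window(TumblingProcessingTimeWindows.of(Time.minutes(1)))  // one-minute buckets
           .sum(1)                                                     // count per page per window
           .print();                                                   // stand-in for a dashboard sink

        env.execute("page-view-counts");
    }
}
```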

(Figure: Streaming analytics. Source: https://flink.apache.org/usecases.html)

Flink and Decodable

Stream processing is still a relatively new approach to handling and extracting value from data, which means that the tools available are not yet as mature as those for traditional batch processing. Building a complete stream processing solution from scratch can involve combining several open-source elements as well as custom code.

There are new products and platforms that greatly simplify working with Flink, including Decodable, a serverless, real-time, streaming data platform. Decodable takes care of most of the complexity involved in setting up and running Flink and allows developers to focus on their applications rather than infrastructure.

With Decodable, developers define pipelines in SQL that process real-time streams of data that are attached to the data infrastructure by connections. Connections can be between Decodable and messaging systems, object stores, operational database systems, data warehouses, data lakes, and microservices. In this context, real time means sub-second end-to-end latency. In most cases, pipelines receive, process, and send data in tens of milliseconds.
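
To give a flavor of what a SQL-defined streaming pipeline looks like, the sketch below uses Flink's own Table API with its built-in datagen and print connectors so that it is self-contained; Decodable's connection and pipeline definitions use its own platform constructs, so treat the table names and options here as illustrative assumptions.

```java
// A self-contained SQL pipeline: generated input, one continuous query, and
// a printed output stream.
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class SqlPipeline {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().inStreamingMode().build());

        tEnv.executeSql(
                "CREATE TABLE orders (order_id STRING, amount DOUBLE) " +
                "WITH ('connector' = 'datagen', 'rows-per-second' = '5')");

        tEnv.executeSql(
                "CREATE TABLE big_orders (order_id STRING, amount DOUBLE) " +
                "WITH ('connector' = 'print')");

        // The pipeline itself: a continuous SQL statement from source to sink.
        tEnv.executeSql(
                "INSERT INTO big_orders SELECT order_id, amount FROM orders WHERE amount > 100");
    }
}
```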

For more information, visit decodable.co.

Further Reading

Apache Flink and the discipline of stream processing are still relatively young, but there are already a number of resources for building Flink knowledge and skills.

The official Apache Flink site includes an overview, downloads, and documentation.

Additionally, these O’Reilly books provide an in-depth treatment of Flink and stream processing.

  • Ellen Friedman and Kostas Tzoumas: Introduction to Apache Flink, O’Reilly Media, 2016.
  • Fabian Hueske and Vasiliki Kalavri: Stream Processing with Apache Flink, O’Reilly Media, 2019.

David Fabritius

