From Continuous Delivery to Continuous Data Delivery: Laying the Foundations

Modern DevOps practices of continuous testing, integration, deployment/delivery, and monitoring form the backbone of a smooth deployment pipeline that continuously feeds back into itself for improvement. 

Organizations across every industry are pushing to realize Continuous Delivery to improve development velocity and accelerate time to market. The main aim of CI/CD is to deliver infrastructure and code changes consistently throughout the full software development pipeline. However, data engineering can become a major constraint within that process.

Data preparation (also referred to as ETL/ELT) and testing can be two of the biggest impediments, and typically encompass a large number of manual tasks in the data engineering process. Streamlining these components to become faster and more automated is an ideal for many data engineers; the ambition, of course, being fully automated continuous data delivery (CDD).

Examining Continuous Delivery

Optimizing CI/CD capabilities for deployment automation and orchestration frames many cutting-edge DevOps operations. Continuous delivery (CD) is the DevOps practice of consistently and automatically delivering quality code changes into production.

The CD best practice approach is based on producing software in short, iterative cycles. By developing in short cycles and reducing workload to small batch sizes, teams can reliably release their code into production at any time. Accordingly, CD allows development teams to build, test, and release code faster and more frequently.

Continuous delivery works best as a fully automated workflow. The process can be partially automated with manual checkpoints at critical stages, although this should be avoided wherever possible. When continuous delivery is implemented effectively, developers always have a deployment-ready, immutable build artifact that has passed through a standardized and automated test process.

So how do you ensure risk-free and repeatable deployment cycles? Constructing a robust deployment pipeline comes down to improving resiliency through immutable artifacts. The path to efficient CD lies in three keywords: testing, resiliency, and immutability. These concepts are the foundation we need to lay before we add the additional component of data to Continuous Delivery…

Continuous Delivery and Testing

Optimizing Testing

Traditionally, testing was predominantly manual, but Agile, DevOps, and numerous other best-practice methodologies have spurred the rising use of iterative, automated testing suites, tools, and self-written checks to improve code quality assurance at every step of the deployment process.

An effective deployment pipeline comprises a set of activities, including exploratory, usability, and acceptance testing around code change packages, that are designed, automated, and iterated to catch potential bugs and issues prior to release.

If no problems are detected, engineers should feel totally comfortable releasing any code changes that have been processed through the pipeline. If defects are discovered later, it means the pipeline needs improvement—perhaps by adding or updating some of the tests. This feedback loop allows teams to optimize based on best practices of continuous improvement (another facet of the DevOps methodology).

Automated Testing

Automated testing adds multiple layers of quality assurance throughout the deployment pipeline, rather than waiting until deployment to discover the impact of updates before they reach customers and end users. Developers can release changes in a more sustainable manner that maintains zero downtime for the customer or end user.

Automated software testing can comprise many different tools with varying capabilities, ranging from tests that cover isolated code syntax checks, to those that simulate a full human-driven manual testing experience. 

If CD is all about delivering new code changes to customers in the quickest and most risk-free manner possible, then automated testing is critical to meeting that goal. There’s no way to automate code deployments while clunky, error-prone manual steps remain within the delivery process.

Automated testing ensures quality at all stages of development (rather than leaving it as a single, time-consuming step at the end of the process) by ensuring that all new code commits do not introduce any bugs into the production environment. With this quality assured approach, code remains deployable and deliverable at all times.

Different Types of Testing

It’s important to assess, revise, and improve your test process frequently. Analyze the application, the development and testing processes, and the test environments within your deployment pipeline to create a resilient strategy for implementing an automated testing suite that uses a variety of test types.

Different test types include (but are not limited to):

  • Unit tests
  • Integration tests
  • Functional tests
  • End-to-end tests
  • Acceptance tests
  • Performance tests
  • Smoke tests
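
To make a couple of these layers concrete, here is a minimal, hypothetical pytest-style sketch; the normalize_address function and ListingStore class are illustrative stand-ins, not Cherre code.

    # test_listings.py -- hypothetical sketch of two testing layers.
    # normalize_address and ListingStore are illustrative stand-ins, not Cherre code.

    def normalize_address(raw: str) -> str:
        """Toy function under test: collapses whitespace and title-cases an address."""
        return " ".join(raw.strip().split()).title()

    class ListingStore:
        """Toy in-memory store standing in for a real persistence layer."""
        def __init__(self):
            self._rows = {}

        def add(self, listing_id: int, address: str) -> None:
            self._rows[listing_id] = normalize_address(address)

        def get(self, listing_id: int) -> str:
            return self._rows[listing_id]

    # Unit test: exercises a single function in isolation.
    def test_normalize_address():
        assert normalize_address("  123  main st ") == "123 Main St"

    # Integration-style test: exercises the store and the function working together.
    def test_store_returns_normalized_address():
        store = ListingStore()
        store.add(1, " 456 park AVE ")
        assert store.get(1) == "456 Park Ave"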

The Cherre testing suite includes the following tools: 

  • Karate: Cherre uses Karate to run pre-release checks on Hasura permissions, relationships, and remote schemas as well as API client use cases.
  • dbt: Our team runs dbt (data build tool) tests on its BigQuery database as part of the data ELT process. dbt allows us to transform data with the same practices that software engineers use to build applications. dbt tests improve our confidence that the changes we are making in the pipeline will not break existing behavior.
  • Pytest: Pytest is a mature Python testing tool that we use in pre-release checks and to run tests across the migrations testing framework. It also acts as an ad hoc logical data validator (a sketch of this kind of check follows below).
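
As a rough illustration of the kind of ad hoc logical data validation mentioned above, the pytest-style sketch below checks a handful of business rules against sample rows; the fetch_rows helper and the column names are assumptions, not Cherre’s actual schema.

    # test_data_validation.py -- hypothetical sketch of logical data checks.
    # fetch_rows and the column names are assumptions, not Cherre's schema.

    def fetch_rows():
        """Stand-in for a warehouse query; returns sample rows for the sketch."""
        return [
            {"listing_id": 1, "price": 500_000, "square_feet": 900},
            {"listing_id": 2, "price": 750_000, "square_feet": 1_200},
        ]

    def test_primary_key_is_unique():
        ids = [row["listing_id"] for row in fetch_rows()]
        assert len(ids) == len(set(ids)), "duplicate listing_id values found"

    def test_values_are_logically_consistent():
        for row in fetch_rows():
            # Business-rule checks: prices and floor areas must be positive.
            assert row["price"] > 0
            assert row["square_feet"] > 0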

Continuous Delivery and Resiliency

What Is Resiliency?

A resilient system operates successfully even when under duress from attacks or component failures. There are many classes of failure, and each type stresses a different element of system resiliency. Some failures should never occur, some can be especially costly, and some safety-critical systems should never fail but occasionally do. In software engineering, as in life, it’s worth heeding John Allspaw’s advice that, “being able to recover quickly from failure is more important than having failures less often.”

Causes of System Failure

Due to its ever-changing nature, software development is an inherently error-prone process. Such errors can lead to failures of huge magnitude in the systems where the software runs. Engineering teams spend a great deal of energy and effort identifying potential points of failure and putting steps in place to mitigate the risk.

However, despite rigorous testing and processes, many major technical incidents are, at their root, caused by human error. Human error in software engineering can stem from many causes and can occur even when everyone is putting in their best effort. Every engineer has, at some point, missed something, exercised poor judgement, lacked adequate training, made a simple mistake, or simply suffered a lapse in attention. The ramifications of ‘simple mistakes’ can be costly and long-term, which is why blameless post-mortems are critical to continuous improvement.

While it is impossible to eradicate human error from any process that still contains manual steps, automating as much of the pipeline as possible is the first step to improving system resilience and optimizing risk management.

Optimizing for Resiliency

As outlined above, testing is an obvious approach to ensuring resilience. A second approach involves designing for resilience. Designing for resilience starts with a clear understanding of, and clearly outlined requirements for, the software and the business before the design process even begins. It is critical as part of resilience engineering to ensure that the overall design captures vital aspects related to failure and resilience, such as recovery modes that can bring a system back from hardware failure, incorrect inputs, human error, and any expected degradation. When these aspects are included early in the design, it becomes easier to support resilience even at advanced stages of development.
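
As a generic illustration of a recovery mode designed for up front (not a description of Cherre’s implementation), the Python sketch below retries a flaky operation with exponential backoff and then degrades gracefully to a fallback instead of failing outright.

    import time

    def call_with_recovery(operation, fallback, retries=3, base_delay=1.0):
        """Generic recovery-mode sketch: retry with exponential backoff,
        then fall back to a degraded result rather than crashing."""
        for attempt in range(retries):
            try:
                return operation()
            except Exception:
                if attempt < retries - 1:
                    # Back off before the next attempt: 1s, 2s, 4s, ...
                    time.sleep(base_delay * (2 ** attempt))
        # Expected degradation: serve a cached or default value instead of failing.
        return fallback()

    # Hypothetical usage (fetch_from_primary and read_from_cache are assumed helpers):
    # data = call_with_recovery(fetch_from_primary, read_from_cache)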

An organization’s infrastructure needs to maintain a safety margin of resiliency capable of absorbing any failure type through defense in depth, and recovery modes need to prioritize the most likely and highest-impact risks.

Optimizing a system for resilience helps improve reliability, which in turn encourages the adoption and implementation of Continuous Delivery. While there are no set guidelines for successful Continuous Delivery, resiliency is a strong CD enabler alongside its partner in crime: immutability.

To achieve resiliency, Cherre employs the following practices:

  • Peer reviews: Cherre adopts a code-owners-style practice whereby a cross-functional team owns a select list of services, and any change that impacts a service is reviewed by the owning team. This mirrors what is commonly seen in the open-source world.
  • Paired and group coding sessions: Cherre developers often work together in pairs and small groups to optimize the quality of the code they produce. This approach improves our shared understanding of how the code works, and it reduces the back and forth of a peer review, which increases throughput. It also makes it easier for the next person downstream who reads the code to modify it if necessary. The practice also reduces the danger that the only person on the team who knows how part of the code works may leave, taking that precious knowledge with them.
  • Rollbacks: A reactive resiliency practice that Cherre employs is system state capture and rollback functionality upon failure (a generic sketch of this pattern follows below).
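
Below is a minimal, generic sketch of the capture-and-rollback pattern referenced in the last bullet; it illustrates the idea only and is not Cherre’s actual tooling.

    import copy

    def deploy_with_rollback(system_state: dict, apply_change) -> dict:
        """Capture the current state, attempt the change, and restore the
        snapshot if anything fails. Illustrative pattern only."""
        snapshot = copy.deepcopy(system_state)   # capture system state
        try:
            apply_change(system_state)           # attempt the deployment/change
            return system_state
        except Exception:
            system_state.clear()                 # roll back to the captured snapshot
            system_state.update(snapshot)
            raise

    # Hypothetical usage:
    # state = {"version": "1.4.2"}
    # deploy_with_rollback(state, lambda s: s.update(version="1.5.0"))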

Continuous Delivery and Immutability

Defining Immutability

In enterprise development, best practices have progressed to the point where the ideal scenario is to promote a build artifact that is exactly the same from testing, through pre-production, and into production. Whatever container, file type, or piece of code you are working with, you should ideally build it once and then deploy that same thing throughout the whole pipeline.

To become immutable, these deployable artifacts need to remain in the same unchangeable state as they are deployed to quality assurance, internally verified, promoted to staging, externally verified, and then promoted to production. Automated testing is key here. If you find an issue during this process, you fix the issue, build a new version of the artifact, and start the process again. Only with testing can you create immutable artifacts that progress through all environments, from development and staging to pre-production and production, in the same way.
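
Here is a minimal sketch of the build-once, promote-many idea, assuming a content digest is enough to identify an artifact; the function names are illustrative, not a real deployment tool.

    import hashlib

    def build_artifact(bundle: bytes) -> str:
        """Build once: identify the artifact by a content digest, so every
        environment receives exactly the same bytes."""
        return hashlib.sha256(bundle).hexdigest()

    def promote(digest: str, environment: str) -> str:
        """Promote the already-built artifact; nothing is rebuilt per environment."""
        return f"deploying artifact {digest[:12]} to {environment}"

    # One build, several deployments to progressive environments.
    digest = build_artifact(b"application release bundle v1.5.0")
    for env in ["qa", "staging", "pre-production", "production"]:
        print(promote(digest, env))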

Benefits of Immutability

  • Building a deployment pipeline around the process of creating immutable artifacts helps to improve the overall robustness of the CD process.
  • Immutable artifacts reduce 2:00 am pager emergencies that involve trying to work out exactly what changed between the pre-production build—where everything worked perfectly—and production—where everything is failing. 
  • Creating immutable deployable artifacts means that artifacts are built once (for time-saving and efficiency), yet deployed many times to progressive environments (improving system resiliency and reliability).

To achieve immutability, Cherre is leveraging the following:

Defining Continuous Data Delivery

Continuous Data Delivery (CDD) is a process that aims to accelerate, automate, and shrink the slow, manual processes surrounding data engineering. More on this in the next article. Watch this space.


Cherre is the leader in real estate data and insight. We help companies connect and unify their disparate real estate data so they can make faster, smarter decisions. Contact us today to learn more.

Stefan Thorpe

▻ VP of Engineering @ Cherre ▻ Cloud Solutions Architect ▻ DevOps Evangelist 

Stefan is an IT professional with 20+ years of management and hands-on experience providing technical and DevOps solutions to support strategic business objectives.