PostgreSQL checksums vs. Performance - Why we willingly sacrifice ~3% TPS (and sleep better)

Last Updated: December 1, 2025 | Reading Time: 22 minutes

At Hosted Power we are performance addicts. We tune kernels for fun, benchmark things nobody asked us to test, and debate database settings over lunch. So when a PostgreSQL feature costs roughly three percent of throughput, you would expect us to reject it immediately.

Instead, we turned it on everywhere. On purpose.

In this article, we explain why a small performance trade off results in a major increase in data integrity and operational confidence, especially for ecommerce platforms and DevOps teams running mission critical workloads.

 

Table of contents

  1. Why a deliberate three percent trade off makes sense
  2. Benchmark environment
  3. Benchmark results
  4. Interpreting the impact
  5. Why Hosted Power enables checksums by default
  6. What this means inside TurboStack
  7. Conclusion

 

Why a deliberate three percent trade off makes sense

At its core, high performance hosting is about speed, stability, and predictable scalability. Yet even in that context, raw speed is only one part of the equation. Data correctness is equally critical.

PostgreSQL data checksums detect silent corruption inside table and index pages. These are the most dangerous types of errors because:

  • nothing crashes
  • no warnings appear
  • the system continues serving incorrect data

In ecommerce this can mean:

  • order totals that are subtly wrong
  • stock counts that drift over time
  • replicas that spread corrupted pages
  • backups storing incorrect data for weeks

By enabling checksums, PostgreSQL validates the integrity of each page when it is read from disk. If something is wrong, the system alerts you immediately instead of silently returning invalid results.
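
For reference, and assuming a standard PostgreSQL installation (the data directory path below is just a placeholder), this is one way to confirm that checksums are active and to verify pages offline:

  # Confirm that page checksums are active on a running cluster
  psql -c "SHOW data_checksums;"

  # Verify every page of a cleanly stopped cluster offline
  pg_checksums --check -D /srv/pgdata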

 

What it costs

During our internal benchmarks, activating data checksums resulted in:

  • roughly three to three point five percent lower throughput (TPS)
  • about one to two percent higher average latency

A tiny cost, considering the operational risk it eliminates.

 

Benchmark environment

To make a fair assessment, we built a controlled and repeatable benchmarking setup. We benchmarked PostgreSQL 18, with and without data checksums, on a node with 16 CPU cores and 64 GB of RAM.

 

Hardware

Component    Specification
CPU cores    16
RAM          64 GB
Storage      NVMe, production grade

 

Software

Component        Version
Database         Percona Server for PostgreSQL 18.1.1
Benchmark tool   pgbench 16.10

 

Database

We used pgbench with a scale factor of 1500, which creates roughly 24 GB of tables and indexes. In other words: a realistic dataset size, similar to real ecommerce cases.
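
As a rough sketch (the database name below is a placeholder), such a dataset can be created with pgbench's built-in initialisation step:

  # Create and populate the pgbench schema at scale factor 1500 (~24 GB of tables and indexes)
  pgbench -i -s 1500 shop_bench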

 

Workloads

For this experiment we wanted to mimic the kind of traffic an active ecommerce platform experiences on a normal day. That means two very different types of behaviour: customers who interact with the shop and change things, and visitors who simply browse.

 

Read write OLTP workload

This workload represents real customer activity. It follows the default pgbench transaction model, which performs a mix of operations:

  • several UPDATE statements to change data
  • a SELECT to read information
  • an INSERT into the history table

In practice, this is similar to a shopper browsing products, adding items to their cart, refreshing the page, or completing an order. It is a good proxy for the busy, ever changing traffic of an online store.
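
For reference, the default pgbench transaction (the built-in tpcb-like script) looks roughly like this:

  \set aid random(1, 100000 * :scale)
  \set bid random(1, 1 * :scale)
  \set tid random(1, 10 * :scale)
  \set delta random(-5000, 5000)
  BEGIN;
  UPDATE pgbench_accounts SET abalance = abalance + :delta WHERE aid = :aid;
  SELECT abalance FROM pgbench_accounts WHERE aid = :aid;
  UPDATE pgbench_tellers SET tbalance = tbalance + :delta WHERE tid = :tid;
  UPDATE pgbench_branches SET bbalance = bbalance + :delta WHERE bid = :bid;
  INSERT INTO pgbench_history (tid, bid, aid, delta, mtime)
    VALUES (:tid, :bid, :aid, :delta, CURRENT_TIMESTAMP);
  END;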

 

Read only workload

This workload focuses entirely on reading data, using the pgbench -S mode.

  • It fires a large number of SELECT queries
  • It mimics customers who browse catalog pages, filter products, or search through a large product collection

This is useful to see how PostgreSQL behaves under heavy read pressure without any writes happening in the background.
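
Under the hood, -S swaps the default transaction for pgbench's built-in select-only script, which boils down to a single indexed lookup per transaction:

  \set aid random(1, 100000 * :scale)
  SELECT abalance FROM pgbench_accounts WHERE aid = :aid;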

 

Concurrency levels

To understand performance under different levels of stress, we tested two scenarios:

  • High concurrency: 256 clients running simultaneously, across 16 threads, for 300 seconds. This represents true peak load conditions.
  • Moderate concurrency: 32 clients over 8 threads. We used this mainly to illustrate how small tests can introduce noise or misleading results.
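
In terms of command lines, the two scenarios map onto pgbench invocations roughly like this (the database name is a placeholder, and the run length for the moderate test is assumed to match the 300 second window):

  # High concurrency: 256 clients across 16 threads for 300 seconds
  pgbench -c 256 -j 16 -T 300 shop_bench        # read write (default script)
  pgbench -S -c 256 -j 16 -T 300 shop_bench     # read only

  # Moderate concurrency: 32 clients over 8 threads
  pgbench -c 32 -j 8 -T 300 shop_bench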

 

Checksum modes

To isolate the impact of data checksums, we ran the benchmarks in two configurations:

  • a cluster with checksums enabled
  • an identical cluster with checksums disabled

Everything else in the setup stayed exactly the same, ensuring the results compared only the effect of checksums and nothing else.
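
As a sketch of how the two clusters can be prepared (data directory paths are placeholders; on releases where checksums are already the initdb default, the explicit disable flag is what creates the second variant):

  # Cluster with data checksums enabled
  initdb --data-checksums -D /srv/pgdata/with_checksums

  # Identical cluster with checksums disabled
  initdb --no-data-checksums -D /srv/pgdata/without_checksums

  # Checksums can also be toggled offline on an existing, cleanly stopped cluster
  pg_checksums --enable -D /srv/pgdata/without_checksums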

 

Benchmark results

Read write OLTP at high concurrency

Configuration    TPS       Average latency
Checksums OFF    879.99    267.49 ms
Checksums ON     853.61    271.27 ms

 

Impact: roughly 3% lower throughput and about 1.4% higher latency.

 

Read only workload at high concurrency

Configuration    TPS        Average latency
Checksums OFF    8601.48    16.50 ms
Checksums ON     8310.75    16.84 ms

Even under pure select traffic, the throughput impact of using checksums remains around 3.4%.

 

Moderate concurrency test

When we reduced the load and ran the test with only 32 clients, something strange happened. The results suddenly suggested that enabling checksums made PostgreSQL dramatically faster.

 

Here is what the numbers looked like:

 

Mode            TPS        Average latency
Checksums off   841.70     36.17 ms
Checksums on    1533.26    20.05 ms

 

If you take this at face value, it seems to say that turning checksums on makes the system more than eighty percent faster. That would be amazing, but also impossible.

What actually happened is simple: this run was influenced by normal system noise. At low concurrency, even small differences become exaggerated. Things like whether the cache was warm, what the OS was doing in the background, or even timing quirks can skew results.

This run is not evidence that checksums improve performance. Instead, it is a perfect reminder that:

  • single runs can be misleading
  • low concurrency tests are especially noisy
  • proper benchmarking always requires multiple runs and healthy skepticism

We include this example not for its numbers, but for the lesson it teaches: always validate your results.
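
A minimal way to build that validation in, sketched below with placeholder names, is to repeat each configuration several times and compare the spread before trusting any single number:

  # Repeat the same benchmark five times and keep the full output of each run
  for run in 1 2 3 4 5; do
      pgbench -c 256 -j 16 -T 300 shop_bench > pgbench_run_${run}.log
  done

  # Compare the reported throughput across runs before drawing conclusions
  grep "tps" pgbench_run_*.log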

 

Interpreting the impact

When we look at the high concurrency test runs, the ones with 256 clients on a 24 GB database, the impact of checksums becomes very clear and very small.

 

For the heavy workloads we observed:

 

Workload type        TPS impact                    Latency impact
Read write OLTP      about 3 percent lower TPS     about 1.4 percent higher average latency
Read only workload   about 3.4 percent lower TPS   about 2 percent higher latency

 

On a machine with 16 CPU cores and 64 GB of RAM, this is honestly a very small impact. To put it into perspective, you can see similar variations from everyday events such as:

  • the database choosing a slightly different query plan
  • a cron job waking up at an inconvenient moment
  • someone launching a large internal report in the middle of the day

     

The true cost of silent corruption

Without checksums, PostgreSQL may read a corrupted page without realizing it. The database presents the data as if nothing is wrong. This can lead to:

  • undefined behavior in queries
  • incorrect business logic decisions
  • broken replicas
  • corrupted backups

With checksums enabled, corruption is detected immediately, allowing you to:

  • fail over to a healthy node
  • restore from backup with confidence
  • prevent replicas from spreading invalid data
  • investigate root causes before drift becomes severe

For ecommerce stores handling thousands of orders per hour, this difference is massive.
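
As an illustration of what "detected immediately" can look like in practice, monitoring can poll the per-database checksum counters that PostgreSQL exposes (available since version 12); any non-zero value is a reason to act:

  # Databases with at least one checksum failure; an empty result is the healthy state
  psql -c "SELECT datname, checksum_failures, checksum_last_failure
           FROM pg_stat_database
           WHERE checksum_failures > 0;"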

 

Why Hosted Power enables checksums by default

We specialise in high performance hosting for ecommerce platforms, digital agencies, SaaS products, and DevOps teams. Our customers depend on predictable performance, correct data, and a platform that remains stable during heavy peak periods.

Based on our testing and operational experience, our official position is:

PostgreSQL data checksums are enabled by default on all production databases.

We accept a small synthetic benchmark penalty in exchange for:

  • early detection of issues
  • reliable replicas and backups
  • reduced risk during failovers and migrations
  • improved long term data integrity

This is not a downgrade in performance but an upgrade in reliability.

 

What this means inside TurboStack

TurboStack, our high performance PaaS, provides a unified interface to manage applications, databases, caching, containers, and deployments across multiple cloud providers. It is engineered for speed, automation, and stability.

Enabling checksums aligns with TurboStack's core principles:

  • Performance with safety: maximum speed without compromising integrity
  • Predictability: fewer unexpected failures and clearer diagnostics
  • Confidence for teams: ecommerce managers, developers, and DevOps engineers get a more trustworthy platform.

 

Value for our customer profiles

  • Ecommerce platforms benefit from correct pricing, consistent inventory, and stable order processing even during peak events like Black Friday.
  • Digital agencies experience fewer production issues, smoother project delivery, and a hosting partner who takes operational reliability seriously.
  • DevOps teams enjoy a predictable data layer, easier troubleshooting, and full control through TurboStack's automation and APIs.

 

Checksums strengthen exactly what TurboStack stands for: a high performance platform that prioritises both speed and accuracy.

 

Conclusion

Enabling PostgreSQL data checksums is a deliberate, informed trade off. A small three percent decrease in synthetic performance provides a substantial increase in operational integrity and long term stability.

For ecommerce businesses, agencies, and DevOps teams, correct data is more important than squeezing out marginal gains in TPS. The cost of unnoticed corruption far outweighs the micro overhead of validation.

Hosted Power therefore confidently enables checksums by default, ensuring that every application running on our platform benefits from stronger reliability, safer failovers, and predictable behaviour under pressure.

If you want to learn how we can boost the performance and stability of your infrastructure, we are here to help.

 

Are you looking for database performance with maximum speed and accuracy?

Contact us
