Production Check

Crunchy Bridge provides a checklist to assist you in making sure that a cluster is ready for production. Production Check identifies some of the most important production-related settings and tools available on the platform.

To run Production Check on one of your clusters, navigate to the Cluster Overview page and click "Run check" at the bottom of the overview panel.

High availability

Verifies that High Availability (HA) is enabled on your cluster. With HA, Crunchy Bridge maintains a standby of your production database that is promoted if the primary fails. If a failure occurs without HA, a new cluster must be prepared to replace the failed primary. That process starts as soon as the failure is identified, but can take several hours to complete, depending on the size of your cluster.

Cluster protected

Cluster protection prevents the accidental deletion of your cluster. Turning cluster protection on or off has no functional impact on the cluster itself, and is safe to turn on at any time. Make sure your cluster is protected in the cluster settings or the cluster overview panel.

Consistent maintenance window

You can set a maintenance window for each cluster that specifies when maintenance tasks should be run. This includes maintenance initiated by you or by Crunchy Bridge. Setting the maintenance window helps to ensure that cluster management operations, such as resizes or cluster refreshes, are completed at a more convenient time for your cluster. This can reduce interruptions for your application and users.

Log drain configured

We encourage you to use a third-party logging tool to store and parse your Postgres logs. This check will fail if you have not set up a log drain to your logging provider. Read more about setting up a logging provider and see sample configurations for several providers.

Statement timeout

Postgres allows you to set statement_timeout, which aborts any statement that runs longer than the specified time. You can set it at several levels, including per session, per role, or per database.

Setting a default statement_timeout for each database is a good starting point. It ensures that, by default, any application or person connecting to that database cannot run queries longer than desired. A reasonable statement_timeout is 30 or 60 seconds, for example:

ALTER DATABASE mydatabase SET statement_timeout = '60s';
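You can also scope statement_timeout more narrowly when a particular workload needs a different limit. A sketch (the role name here is illustrative, not a Crunchy Bridge default):

```sql
-- Tighter timeout for a specific role (illustrative role name)
ALTER ROLE reporting_user SET statement_timeout = '30s';

-- Override for the current session only
SET statement_timeout = '5min';

-- Or for a single transaction
BEGIN;
SET LOCAL statement_timeout = '10min';
-- long-running maintenance query goes here
COMMIT;
```

Role and database settings take effect on new connections; SET and SET LOCAL apply immediately to the current session or transaction.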

Query minimum log timeout

The log_min_duration_statement setting tells Postgres to log any query that takes longer than the specified time to run. This is very helpful for identifying long-running queries that could potentially be optimized. You can set log_min_duration_statement at several levels, including per session, per role, or per database.

You can start by setting log_min_duration_statement to 1,000 milliseconds at the database level, meaning that queries taking longer than 1 second to run will be logged:

ALTER DATABASE mydatabase SET log_min_duration_statement = '1000ms';
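As with statement_timeout, you can scope the logging threshold more narrowly. A sketch, with an illustrative role name:

```sql
-- Log any query from this role that runs longer than 250 ms
ALTER ROLE app_user SET log_min_duration_statement = '250ms';

-- Disable duration-based logging for the current session
SET log_min_duration_statement = -1;
```

A value of -1 disables the threshold entirely, while 0 logs the duration of every statement.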

Configuration and use of pgBouncer

pgBouncer is included with Crunchy Bridge clusters to provide connection pooling, but it must be enabled on each new cluster before it can be used. This check verifies that pgBouncer is configured and reviews the idle connection count. If you have more than 40 connections, you will get a warning to review your connection settings. Read more about managing connections.
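If you want to inspect your connection mix yourself, you can group the pg_stat_activity view by state to spot a high idle count:

```sql
-- Count client connections by state (active, idle, idle in transaction, ...)
SELECT state, count(*)
FROM pg_stat_activity
WHERE backend_type = 'client backend'
GROUP BY state
ORDER BY count(*) DESC;
```

A large number of idle connections is a common sign that pooling through pgBouncer would help.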

Connecting with Postgres user

Crunchy Bridge provides an application role that you can use to connect your application to your database, rather than using the postgres role. This check evaluates how many connections use the postgres role and shows a warning if it finds more than three. It also compares that number with the number of connections using roles other than postgres.

If you see this showing yellow or red, check that you're connecting to your cluster using the appropriate roles wherever you are making connections.
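One way to audit which roles your connections are using is to group pg_stat_activity by role:

```sql
-- Count current connections per role; a high count for "postgres"
-- suggests an application is still connecting with the superuser role
SELECT usename, count(*)
FROM pg_stat_activity
WHERE usename IS NOT NULL
GROUP BY usename
ORDER BY count(*) DESC;
```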

Potentially exhausted primary keys

Auto-incrementing integer keys can overflow, especially when they use the smallint or int data types. This check warns you if any integer fields are close to their overflow threshold.

Check out Integer Overflow in Postgres on the blog to learn more about integer overflow, and for guidance on what to do if an integer column is close to overflow or has already reached its limit.
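If you want to check sequence headroom yourself, the pg_sequences view (available in Postgres 10 and later) exposes each sequence's last value alongside its maximum:

```sql
-- Percentage of each sequence's range already consumed
SELECT schemaname, sequencename, last_value, max_value,
       round(100.0 * last_value / max_value, 2) AS pct_used
FROM pg_sequences
WHERE last_value IS NOT NULL
ORDER BY pct_used DESC;
```

Sequences near 100% on a smallint- or int-backed column are candidates for migration to bigint.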