Changelog

The following are recent changes in the Crunchy Bridge product. Each entry is tagged to indicate whether it applies to a core Postgres feature, a UI change in the Crunchy Dashboard, an update to our docs, an update to the Crunchy CLI (command line interface), or a change to the platform's REST API.

feature

Hobby tier no longer supports HA

We have disabled the high availability (HA) feature on hobby-tier clusters. Hobby instances are not intended for production use. If you need high availability, you can resize your cluster to the standard or memory tier.

postgres

pg_incremental extension now available

The pg_incremental extension is now available for new clusters running Postgres 16 and 17.

This extension is used in conjunction with pg_cron for running incremental batch processing jobs. See the Crunchy blog post for more information and sample use cases for pg_incremental.
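As a quick illustrative sketch, a time-interval pipeline might look something like the following (the table and pipeline names are placeholders, and the exact function signature may differ between versions; see the extension's documentation for the authoritative reference):

```sql
-- Illustrative only: aggregate raw page views into daily counts.
-- pg_incremental supplies the $1/$2 batch boundaries and uses pg_cron to run the job.
CREATE EXTENSION IF NOT EXISTS pg_incremental CASCADE;

SELECT incremental.create_time_interval_pipeline(
  'daily-view-counts',
  time_interval   := '1 day',
  batched_command := $$
    INSERT INTO view_counts (day, views)
    SELECT date_trunc('day', viewed_at), count(*)
    FROM page_views
    WHERE viewed_at >= $1 AND viewed_at < $2
    GROUP BY 1
  $$
);
```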

api

The event list API endpoint supports a delay parameter

The event list API endpoint supports a new delay parameter that poll loops can use for a stronger guarantee that events from transactions committing out of order won't be missed. We recommend a value of 10s as a reasonable compromise between timeliness of event delivery and protection against outlying long transactions.

postgres

Postgres 17 is now default

With Postgres 17 released in September, and 17.2 now available with a number of CVE patches, bug fixes, and improvements, we've made it the default major version for newly created clusters.

feature

Incremental backups are now the default

Crunchy Bridge now takes two incremental backups per day and a full backup once a week. This allows for more frequent, less resource-intensive backups and faster recovery. The change is only enabled on new clusters and existing ones that are upgraded or refreshed, but will eventually be rolled out to all clusters.

feature

Crunchy Data Warehouse

We have released a new product offering, Crunchy Data Warehouse. It replaces the earlier Crunchy Bridge for Analytics with an expanded feature set and built-in managed object storage. Crunchy Data Warehouse has full Iceberg support for reads, writes, and updates, as well as extensive features for reading data lake files in Parquet, JSON, and CSV.

The new clusters can be created from a dropdown toggle next to the Create Cluster options. See our documentation on Crunchy Data Warehouse. Note that Crunchy Data Warehouse has its own pricing structure and additional costs for storage.

postgres

Postgres 17 is now available

Postgres 17 is now available. Changes include a new memory management system for VACUUM, the JSON_TABLE function for converting JSON data to a table representation, and improved logical replication (failover control, pg_upgrade preserving replication slots for publishers and subscribers, and pg_createsubscriber for creating logical replicas from physical standbys). See the release notes for more details.
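For instance, JSON_TABLE can project a JSON document into rows and columns directly in the FROM clause (a minimal illustration):

```sql
-- Turn a JSON array into a relational rowset on Postgres 17.
SELECT *
FROM JSON_TABLE(
  '[{"name": "alice", "logins": 3}, {"name": "bob", "logins": 7}]'::jsonb,
  '$[*]'
  COLUMNS (
    name   text PATH '$.name',
    logins int  PATH '$.logins'
  )
) AS users;
```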

feature

Spatial support for Crunchy Bridge for Analytics

We have released new options for working with spatial data in Crunchy Bridge for Analytics. Analytics clusters now support reading GeoParquet, Overture, GeoJSON, Shapefiles, CSV, and other GDAL-compatible file formats. Remote object storage can be accessed via private or public S3 buckets or public HTTPS URLs.

See our documentation on Crunchy Bridge for Analytics - Spatial on how to get started.

dashboard

Network usage metric view

A new metric view is available for network usage, showing the number of bytes in and out of an instance across all of its network interfaces.

dashboard

Saved query folders are no longer shared between clusters

Previously, saved query folders appeared for every cluster across a team, which produced odd display behavior when a folder contained saved queries in one cluster but was empty in another. This has now changed: like saved queries, saved query folders are always assigned to a specific cluster, and each cluster's queries view shows only that cluster's folders. Preexisting folders that contained queries from multiple clusters have been duplicated, with saved queries reassigned to the duplicate for their specific cluster.

postgres

Crunchy Scheduler now available

Crunchy Scheduler is now available for Postgres clusters. Jobs can be scheduled to run at specific times or intervals, managed through the Bridge Dashboard.

postgres

pgx_ulid extension now available

The pgx_ulid extension is now available for your Postgres clusters.

pgx_ulid adds support for creating and using ULID types in Postgres.
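A minimal usage sketch (pgx_ulid typically installs as the ulid extension; the table below is a placeholder):

```sql
-- Enable the extension and use ULIDs as sortable, unique identifiers.
CREATE EXTENSION ulid;

CREATE TABLE events (
  id      ulid PRIMARY KEY DEFAULT gen_ulid(),
  payload jsonb
);

SELECT gen_ulid();  -- generate a standalone ULID
```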

postgres

pgvector version 0.7.0 is now available

The pgvector extension has been updated to v0.7.0 and now supports halfvec and sparsevec types, as well as new indexing capabilities. Existing clusters can use the Refresh Instance button from the Settings tab to receive the update.
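For example, half-precision vectors can roughly halve storage and index size (a minimal sketch; the table and dimensions are placeholders):

```sql
-- Store embeddings at half precision and index them with HNSW.
CREATE TABLE items (
  id        bigserial PRIMARY KEY,
  embedding halfvec(1536)
);

CREATE INDEX ON items USING hnsw (embedding halfvec_l2_ops);
```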

For full release notes, please review the pgvector changelog.

postgres

Postgres 13 has been retired

To encourage users to use more modern versions of Postgres, it's no longer generally possible to provision new clusters on Postgres 13. Teams that already have Postgres 13 clusters may continue to run them for the time being (approximately the next year), but we'd encourage them to start looking into upgrading major versions as well.

feature

Saved Queries are copied on fork

When forking a cluster, saved queries are now copied into the new cluster along with the bulk of other data. These queries are copies, and updating one won't propagate back to its original. Deprovisioning the fork will also have no effect on the original cluster's saved queries.

feature

Crunchy Bridge for Analytics

We have released a new cluster type for Analytics. Analytics clusters support reading CSV, JSON, and Parquet files from remote object storage (e.g. S3). These clusters also include a fast, vectorized query engine and functions for copying files to and from remote storage.
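As a rough sketch of what this looks like in practice (the bucket and path are placeholders, and the server name reflects our reading of the Analytics docs; confirm the exact options there):

```sql
-- Illustrative only: query Parquet files in S3 directly as a foreign table.
CREATE FOREIGN TABLE sales ()
SERVER crunchy_lake_analytics
OPTIONS (path 's3://my-bucket/sales/*.parquet');

SELECT count(*) FROM sales;
```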

The new clusters can be created from a dropdown toggle next to the Create Cluster options. See our documentation on Crunchy Bridge for Analytics. Note that analytics has its own pricing structure.

postgres

pgpodman version 0.3 is now available

The pgpodman extension that powers Container Apps has been updated to v0.3. The stop_container SQL function now takes an optional second parameter specifying how long to wait, in seconds, for the container to stop; the default is 10 minutes (600 seconds). This gives control over how long to wait before a kill signal is sent to the container.

The new timeout behavior can be obtained by running ALTER EXTENSION pgpodman UPDATE after performing a server refresh.
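For example (the container name below is a placeholder):

```sql
-- Pick up the new extension version after a server refresh.
ALTER EXTENSION pgpodman UPDATE;

-- Stop a container, waiting up to 120 seconds before a kill signal is sent.
SELECT stop_container('my_container', 120);
```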

feature

Extended periods for metric views

Cluster metrics in the Dashboard support two new time ranges of 1 week and 30 days, significantly increasing the allowable lookback period. Extended periods are made possible by a histogram-based aggregation system that makes querying long durations less costly.

feature

Folder view for Saved Queries

Saved queries can now be sorted into folders to help organize them. Queries can either be top level or stored one level deep in a folder. Nested folders are not supported as of this release.

feature

Automatic weekly statistics reset

Clusters can opt in to having their statistics reset weekly, with pg_stat_statements_reset() run automatically at the beginning of Sunday UTC. This helps keep query-related database insights more relevant by regularly pruning stale information. Enable the feature from a cluster's Settings page using the Reset statistics weekly toggle. New clusters have it enabled by default.

feature

Self-service Private Link connections

You can connect a cluster to AWS PrivateLink, GCP Private Service Connect, or Azure Private Link from the cluster's Networking tab. See additional details in the Private Link docs.

feature

Self-service VPC peering

You can create network peering connections for AWS and GCP from inside the Dashboard under Team Settings → Networks. See additional details in the VPC peering docs.

feature

GCP storage rate increase

The price of storage on GCP has changed from $0.10 per GB to $0.23 per GB, effective February 1st, and will apply to both existing and newly provisioned clusters. The price change in Bridge is due to an increase in disk pricing on GCP.

postgres

pgvector version 0.6.0 is now available

The pgvector extension has been updated to v0.6.0 and now supports parallel index builds for HNSW. Existing clusters can use the Refresh Instance button from the Settings tab to receive the update.
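Parallel HNSW builds use Postgres' standard maintenance worker setting, for example (the table and column are placeholders):

```sql
-- Allow up to 4 parallel workers for the index build in this session.
SET max_parallel_maintenance_workers = 4;

CREATE INDEX ON items USING hnsw (embedding vector_l2_ops);
```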

For full release notes, please review the pgvector changelog.

feature

Account notification settings

The kinds of notifications received from Crunchy Bridge are now configurable in Dashboard under Account Settings → Notifications, allowing users to opt out of being emailed on actions they're not interested in.

Most notifications are configurable, but some related to account security (e.g. email changed or password changed) are not.

feature

Cluster groups with Citus support

Cluster groups are now available with support for the Citus Postgres extension, which enables horizontal scalability with distributed storage and queries, along with columnar storage.

Create a cluster group in the Crunchy Dashboard under Team Settings → Cluster Groups, then add clusters to it from the same page.
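Once a cluster group is running Citus, tables can be distributed or stored columnar with standard Citus SQL, for example (table and column names are placeholders):

```sql
-- Shard a table across the cluster group by a distribution column.
SELECT create_distributed_table('events', 'tenant_id');

-- Use columnar storage for append-heavy, analytics-style tables.
CREATE TABLE events_archive (
  tenant_id  bigint,
  event_time timestamptz,
  payload    jsonb
) USING columnar;
```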

feature

Saved queries can now return up to 50,000 rows

Saved Queries in the Dashboard or API can now return up to 50,000 rows in their CSV and JSON results, up from the previous maximum of 10,000. As before, query results are limited to 10 MB.

CSV or JSON must be used to get the extended result set. The maximum number of rows returned in the web UI is 1,000.

feature

A new standard-4 instance is now available on AWS

A new standard-4 instance is now available to provision on AWS, with 4 GB of memory and 1 vCPU, a baseline of 2,500 IOPS, and a maximum of 20,000.

standard-4 is available at a base price point of $70.

postgres

pg_uuidv7 extension now available

The pg_uuidv7 extension is now available for your Postgres clusters.

pg_uuidv7 adds support for creating and using version 7 UUIDs in Postgres.
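A minimal usage sketch (the table below is a placeholder):

```sql
-- Enable the extension and generate time-ordered version 7 UUIDs.
CREATE EXTENSION pg_uuidv7;

CREATE TABLE orders (
  id         uuid PRIMARY KEY DEFAULT uuid_generate_v7(),
  created_at timestamptz NOT NULL DEFAULT now()
);

SELECT uuid_generate_v7();
```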

postgres

postgresql_anonymizer extension now available

The postgresql_anonymizer extension is now available for your Postgres clusters.

postgresql_anonymizer is an extension to mask or replace personally identifiable information (PII) or commercially sensitive data in a PostgreSQL database.
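As a brief sketch of how masking rules are declared (the table and column are placeholders; postgresql_anonymizer installs as the anon extension):

```sql
-- Illustrative only: declare a masking rule, then apply static masking.
CREATE EXTENSION anon CASCADE;
SELECT anon.init();

SECURITY LABEL FOR anon ON COLUMN customers.email
  IS 'MASKED WITH FUNCTION anon.fake_email()';

-- Permanently replace the masked column's values in place.
SELECT anon.anonymize_table('customers');
```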

feature

Custom OpenID Connect providers

Bridge accounts can now be created by registering a custom OpenID Connect provider, enabling access to a wider variety of identity providers and self-hosted providers. Go to OpenID Connect provider registration, verify your provider's domain, fill in client details, then complete a successful login with it to be redirected back to Bridge.

OpenID Connect providers must support the WebFinger protocol so that Bridge can verify the identity of a user with a provider before it's allowed to be added.

feature

Accounts with SSO enabled can remove a password credential

Accounts that are associated with both an SSO (single sign-on) provider and a password credential can now remove the latter to help shore up the security of their account and that of the teams they're members of. Passwords are considered more susceptible to attacks like credential stuffing, and the use of SSO gives administrators a faster and more definitive way of managing membership broadly. Removing a password is a one-way operation: after removal, a password can't be added back.

Team administrators can go to the Members page of their teams and look for "SSO-only" badges to see which members only authenticate via SSO versus which also have a password, and may wish to ask the latter to remove their password.

Accounts can remove their password by visiting Account Settings → Authentication and looking for the "Remove Password" section. If there isn't one, no password is set.

feature

Teams can be configured to allow automatic joining via SSO

Teams can now be configured to allow other accounts to join them automatically, as long as those accounts are authenticated with the same SSO (single sign-on) provider and domain. For example, a team could be configured so that any new account authenticated through Google with a @crunchydata.com email address can join the team itself, without going through the traditional team member invite loop.

Automatic joining can be configured for a team under Team Settings → General.

Teams can be joined under Account Settings → Join Team.

postgres

Postgres 16 is now default

With Postgres 16 available since September and 16.1 now released with fixes for three CVEs and 55 bugs (some of which affected previous versions as well), we've made it the default major version for newly created clusters.

postgres

pgvector 0.5.1 is now available

The pgvector extension has been updated to v0.5.1. Existing clusters can use the Refresh Instance button from the Settings tab to receive the update.

For full release notes, please review the pgvector changelog.

postgres

New Postgres servers will get a random_page_cost of 1.1

Postgres' random_page_cost setting specifies the estimated cost of random reads relative to sequential ones, and helps the planner decide whether to prefer index lookups over sequential scans. Postgres' default value of 4 was originally set in 2005, a time when spinning mechanical disks were much more prevalent than the SSDs generally in use today. Our testing on the three major clouds showed roughly a 5-8% cost difference between sequential and random reads, suggesting that the default random_page_cost was much too high for these environments. New Postgres servers will get a value of 1.1 instead of 4.
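You can confirm the value on a cluster, or experiment with the old default in a single session to compare query plans (the table below is a placeholder):

```sql
SHOW random_page_cost;   -- 1.1 on newly provisioned clusters

-- Temporarily try the old default to compare planner behavior.
SET random_page_cost = 4;
EXPLAIN SELECT * FROM my_table WHERE id = 42;
```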

dashboard

Security badges for MFA and SSO-only in team member list

The list of team members for each team now shows badges indicating whether each team member has MFA (multi-factor authentication) enabled and whether their account authenticates exclusively by SSO (single sign-on) and doesn't have a password credential. This allows admins to vet the security compliance of members on their teams and reach out to those who should shore up their security posture.

dashboard

Redesigned Dashboard layout

Redesigned the layout to make navigating around the Bridge Dashboard easier. Changes include:

- Persistent team links in the top navigation bar.
- A cluster dropdown that supports switching to clusters in other teams.
- Some navigation moved to the left sidebar, where more space is available, instead of staying solely vertical.

postgres

timescaledb extension now available

timescaledb is now available for your Postgres cluster.

timescaledb provides automatic partitioning of time-series data, events, and analytics.
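A minimal usage sketch (the table below is a placeholder):

```sql
-- Enable the extension and convert a time-series table into a hypertable,
-- which timescaledb then partitions by time automatically.
CREATE EXTENSION timescaledb;

CREATE TABLE metrics (
  time   timestamptz NOT NULL,
  device text,
  value  double precision
);

SELECT create_hypertable('metrics', 'time');
```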

postgres

pglogical extension now available

pglogical is now available for your Postgres cluster.

pglogical provides logical streaming replication for PostgreSQL, using a publish/subscribe model.
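A rough sketch of the publish/subscribe setup (node names and connection strings are placeholders):

```sql
-- On the provider (publisher):
SELECT pglogical.create_node(
  node_name := 'provider1',
  dsn       := 'host=provider.example.com dbname=app'
);
SELECT pglogical.replication_set_add_all_tables('default', ARRAY['public']);

-- On the subscriber:
SELECT pglogical.create_node(
  node_name := 'subscriber1',
  dsn       := 'host=subscriber.example.com dbname=app'
);
SELECT pglogical.create_subscription(
  subscription_name := 'sub1',
  provider_dsn      := 'host=provider.example.com dbname=app'
);
```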

dashboard

Disk usage metrics

The metrics page now includes disk usage, which visualizes database sizes, log size, and WAL size.

dashboard

Saved Queries SQL Assistant

Write plain text descriptions of queries and our AI-powered SQL Assistant can generate the corresponding SQL. Opt-in to share your schema for more accurate queries.

dashboard

Saved Queries in Dashboard

Introducing Saved Queries: Create shareable SQL queries that run against a cluster. Export Saved Queries to JSON and CSV, or embed directly into Google Sheets.

dashboard

Production check in Dashboard

Ever wondered if your database cluster is ready for production use? There is now a production check link under 'Cluster Overview' in the Dashboard that provides detailed recommendations.

postgres

Postgres 15 is now default

With three patch versions of Postgres 15 now released, and having been GA since October 2022, we've made it the default major version for newly created clusters.

postgres

Postgres 12 has been retired

To encourage users to use more modern versions of Postgres, it's no longer generally possible to provision new clusters on Postgres 12. Teams that already have Postgres 12 clusters may continue to do so for the time being, but we'd encourage them to start looking into upgrading major versions as well.

dashboard

Command palette v1

We have added an experimental command palette to the Dashboard. It currently supports a series of quick navigation commands for teams and clusters, and can be opened using ⌘ + K (or Ctrl + K on Windows). More coming soon.

postgres

clickhouse_fdw and pg_repack extensions now available

Two new extensions are now available for your Postgres cluster.

The clickhouse_fdw extension allows you to connect to and interact with a foreign ClickHouse database.

The pg_repack extension allows you to remove bloat and restore the physical order of clustered indexes without holding exclusive locks.

feature

Personal teams are now normal teams

Every new Bridge account automatically has a new team created for its personal use. Previously, this team appeared as Personal in the Bridge Dashboard, and although it behaved similarly to normal teams, it had some limitations, such as not allowing additional team members to be added to it.

Personal teams have been changed so they're now just normal teams that behave the same as every other team. They now appear in Dashboard with a name like Joe's team or Jane's team depending on the name of the owner, but can be renamed to anything.

feature

Multi-factor authentication

Crunchy Bridge now supports TOTP (time-based one-time password) and WebAuthn (biometric and Yubikey) multi-factor authentication (MFA) to better secure your account. It can be enabled from My Account → Authentication.

SSO-based (single sign-on) accounts can also enable MFA to be required on sensitive operations like creating a new API key.

postgres

mongo_fdw and postgresql-hll extensions are now available

Two new extensions are now available for your Postgres cluster.

The mongo_fdw extension allows you to connect to and interact with a foreign MongoDB database.

The postgresql-hll extension provides the HyperLogLog data structure as a native data type.
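For example, HyperLogLog can track approximate daily unique users in a compact column (a sketch; tables and columns are placeholders, and the hash function should match the column type, e.g. hll_hash_bigint for bigint):

```sql
-- Illustrative only: approximate distinct user counts per day.
CREATE EXTENSION hll;

CREATE TABLE daily_uniques (
  day   date PRIMARY KEY,
  users hll
);

INSERT INTO daily_uniques
SELECT viewed_at::date, hll_add_agg(hll_hash_integer(user_id))
FROM page_views
GROUP BY 1;

SELECT day, hll_cardinality(users) FROM daily_uniques;
```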

api

Event role.password_revealed has been deprecated

The event role.password_revealed has been retired and is no longer generated. Our findings were that many users would reveal credentials programmatically, generating these events in quantities large enough to drown out other events in the audit log and make it less useful. We'd encourage users to use role-based credentials instead to improve visibility into who has database credentials.

postgres

Improved logging defaults for Postgres

We have modified our default logging configuration for Postgres, including log_min_duration_statement, log_statement, log_lock_waits, log_min_messages, and log_temp_files.

The new defaults provide better visibility into how your database is behaving and performing.
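You can inspect the values currently in effect on your cluster with a standard catalog query:

```sql
-- Show the current values of the adjusted logging settings.
SELECT name, setting, unit
FROM pg_settings
WHERE name IN (
  'log_min_duration_statement',
  'log_statement',
  'log_lock_waits',
  'log_min_messages',
  'log_temp_files'
);
```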