
DBConvert Streams FAQ

General Questions

Cross-Platform Compatibility

Q: Can DBConvert Streams convert Postgres data between different operating systems?

A: Yes, DBConvert Streams allows you to establish remote connections to PostgreSQL servers running on both Windows and Linux. This means that you can:

  • Set up a source connection pointing to a Windows-based Postgres server
  • Set up a target connection pointing to a Linux-based Postgres server
  • Transfer data between the two systems in either direction
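
For example, the source and target can be ordinary PostgreSQL connection URIs; the hostnames and credentials below are illustrative placeholders:

Source (Windows host): postgres://user:password@windows-pg-host:5432/sourcedb
Target (Linux host):   postgres://user:password@linux-pg-host:5432/targetdb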

System Requirements

Q: What are the minimum system requirements for running DBConvert Streams?

A: The minimum requirements depend on your deployment method:

For Docker deployment:

  • Docker Engine 20.10.0 or newer
  • Docker Compose v2.0.0 or newer
  • At least 2GB of available RAM
  • 2GB of free disk space
  • 2 CPU cores minimum (3+ cores recommended)
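
You can quickly verify that an installation meets these version requirements with the standard Docker CLI:

docker --version          # should report Docker Engine 20.10.0 or newer
docker compose version    # should report Docker Compose v2.0.0 or newer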

For Binary deployment:

  • Unix-based operating system (Linux or macOS)
  • At least 1GB of available RAM
  • 1GB of free disk space
  • 2 CPU cores minimum (3+ cores recommended)
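
To check RAM, CPU cores, and free disk space before a binary install, the usual system utilities are sufficient (nproc is Linux-only; use sysctl -n hw.ncpu on macOS):

free -h     # available RAM (Linux)
nproc       # CPU core count (Linux)
df -h .     # free disk space in the current directory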

Deployment Options

Q: What deployment options are available for DBConvert Streams?

A: DBConvert Streams offers two main deployment options:

  1. Docker Deployment (Recommended):

    • Complete containerized solution
    • Includes all infrastructure components
    • Simplified management and updates
  2. Binary Deployment:

    • Direct installation on Linux/macOS
    • Systemd service management
    • Manual infrastructure setup
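
For the binary deployment, the components run as systemd services. A typical management session is sketched below; the unit name is illustrative, so substitute the service names created by your installation:

sudo systemctl enable --now dbconvert-streams.service   # illustrative unit name
sudo systemctl status dbconvert-streams.service
journalctl -u dbconvert-streams.service -f               # follow the service logs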

Stream Configuration Questions

Data Bundle Size

Q: What is the recommended data bundle size for different scenarios?

A: The dataBundleSize parameter (10-1000 records) should be adjusted based on your data:

  • For simple tables with few fields: Larger values (closer to 1000)
  • For complex tables or binary data: Smaller values
  • Consider adjusting based on:
    • Table complexity
    • Record size
    • Available memory
    • Network capacity
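
As a sketch, dataBundleSize is set in the stream configuration; treat the placement below as illustrative and check it against your version's configuration schema:

{
    "dataBundleSize": 500    // closer to 1000 for simple tables, lower for "fat" records
}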

Stream Limits

Q: Can I set limits on stream operations?

A: Yes, you can set two types of limits:

  1. numberOfEvents: Maximum number of events to process before stopping
  2. elapsedTime: Maximum duration in seconds before stopping

Example configuration:

{
    "limits": {
        "numberOfEvents": 1000000,    // Stop after 1M events
        "elapsedTime": 3600          // Stop after 1 hour
    }
}

Slow Consumer Issues

Q: What does the "slow consumer, messages dropped" error in NATS indicate?

A: This error means that a consumer cannot keep up with the message flow from the NATS server, so messages are dropped because of processing lag.

Q: How can I address the "slow consumer" issue when transferring wide ("fat") rows?

A: To alleviate this issue, lower the dataBundleSize parameter in the stream configuration so that each transmitted bundle stays small enough for consumers to process, which prevents slow-consumer errors and dropped messages.

Q: Is the default setting sufficient for handling all types of data transfers?

A: The default settings work well for regular tables, but for tables with larger or "fat" records you should lower dataBundleSize to maintain performance and avoid slow-consumer errors.

For more information, see the related NATS errors article.

Payload Size Issues

Q: How do I resolve the error: "[source] data size 2.0 MB exceeds max payload 1.0 MB"?

A: This error occurs when records in the source table are too large. To resolve this:

  1. First try reducing the dataBundleSize parameter to a lower value
  2. If the issue persists even with dataBundleSize=1, modify the NATS configuration:
    • Increase the max_payload parameter to 8MB

Example NATS configuration:

debug: true
trace: false

# Each server accepts client connections on the internal port 4222
# (mapped to external ports in our docker-compose)
port: 4222

# Persistent JetStream data store
jetstream = {
  # Each server persists messages within the docker container
  # at /data/nats-server (mounted as ./persistent-data/server-n… 
  # in our docker-compose)
  store_dir: "/data/nats-server/"
}
max_payload: 8MB

NATS Configuration

Q: What's the recommended NATS configuration for large data transfers?

A: Here's a recommended NATS configuration for handling large data transfers:

debug: true
trace: false

# Internal client connection port
port: 4222

# Persistent JetStream data store
jetstream = {
    store_dir: "/data/nats-server/"
}

# Increased payload size for large records
max_payload: 8MB

Connection Timeout Issues

Q: How do I fix the "SendStandbyStatusUpdate failed: write failed: closed" error after 30 minutes of inactivity?

A: This error occurs due to connection timeout from inactivity. To resolve:

  1. Increase the pool_max_conn_idle_time runtime parameter
  2. Use a connection string with extended idle time:
    postgres://<user>:<password>@<host>:5432/mydb?pool_max_conn_idle_time=10h
    This sets the maximum idle time to 10 hours.

Large Data Transfer Issues

Q: How do I handle checkpoint frequency errors during large data transfers?

A: If you see errors like:

checkpoints are occurring too frequently (29 seconds apart)
HINT: Consider increasing the configuration parameter "max_wal_size"

To resolve:

  1. Increase the value of max_wal_size in postgresql.conf
  2. Or modify the checkpoint_timeout parameter
  3. These adjustments help manage WAL file generation and retention during large transfers
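
For example, the relevant settings in postgresql.conf might look like this; the values are illustrative and should be tuned to your workload and available disk space:

# postgresql.conf
max_wal_size = 4GB            # default is 1GB; raise it for large bulk loads
checkpoint_timeout = 15min    # default is 5min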

Table Structure Handling

Q: How does DBConvert Streams handle table structure creation in the target database?

A: DBConvert Streams provides several options for table structure handling:

  1. Automatic structure creation (createStructure: true):

    • Automatically creates tables in target
    • Maps data types between different databases
    • Creates corresponding indexes
  2. Optional index creation control:

    • Use noCreateIndexes: true for faster initial loads
    • Create indexes after data load for better performance
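
As a sketch, these flags belong in the stream configuration for the target; the exact nesting may differ between versions, so verify it against your configuration schema:

{
    "target": {
        "createStructure": true,     // create target tables automatically
        "noCreateIndexes": true      // skip index creation during the initial load
    }
}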

Security Questions

Credential Management

Q: How does DBConvert Streams handle sensitive information?

A: DBConvert Streams uses HashiCorp Vault to securely manage:

  • Database passwords and credentials
  • SSL/TLS certificates
  • Client certificates
  • API keys
  • Other sensitive connection information

SSL Configuration

Q: What SSL/TLS security options are available?

A: DBConvert Streams supports multiple SSL modes:

  1. Disable: No encryption (development only)
  2. Require: Basic encryption
  3. Verify-CA: Server certificate verification
  4. Verify-Full: Complete verification with hostname validation
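
For PostgreSQL connections, the mode is typically selected through standard libpq connection parameters; the host, credentials, and certificate path below are placeholders:

postgres://user:password@dbhost:5432/mydb?sslmode=verify-full&sslrootcert=/path/to/ca.crt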

Performance Optimization

Q: How can I optimize performance for large data transfers?

A: Consider these optimization strategies:

  1. For initial loads:

    • Use Convert mode instead of CDC
    • Skip index creation initially
    • Create indexes after data load
    • Adjust data bundle size appropriately
  2. For continuous replication:

    • Use CDC mode for minimal impact
    • Configure appropriate reporting intervals
    • Monitor system resource usage
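
Putting these recommendations together, an initial-load configuration might look like the sketch below. Only dataBundleSize, createStructure, noCreateIndexes, and limits are parameter names taken from this FAQ; the surrounding structure (including the mode field) is assumed and should be checked against your version's schema:

{
    "mode": "convert",               // assumed field name for Convert vs. CDC mode
    "dataBundleSize": 500,
    "target": {
        "createStructure": true,
        "noCreateIndexes": true      // build indexes after the load completes
    },
    "limits": {
        "elapsedTime": 7200          // stop after 2 hours
    }
}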

Additional Resources

DBConvert Streams - event-driven replication for databases