Migrate from MySQL to PostgreSQL

Overview

This tutorial shows you how to migrate your SuperTokens database from MySQL to PostgreSQL. The migration involves exporting data from MySQL, setting up a PostgreSQL database with the same schema version, and importing the data with proper format conversions.

Before you start

The tutorial assumes the following:

  • You have access to both your MySQL and PostgreSQL databases
  • Both databases are running on the same version of SuperTokens Core
  • You have administrative privileges on both databases

Steps

1. Create a backup of your MySQL database

Create a final backup of your MySQL database. The instructions for this step are specific to your database management system.

2. Prepare the PostgreSQL database

Start the same version of supertokens-postgresql to initialize the schema in the database.

warning

Make sure to use the same version as the supertokens-mysql instance that you are currently running.
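
As a sketch, the PostgreSQL-backed core can be started with Docker; the image tag and connection URI below are placeholders, so match the tag to your current supertokens-mysql version and point the URI at your own PostgreSQL instance:

```shell
# Hypothetical example: start the PostgreSQL-backed core once so that it
# creates the schema in the target database. Replace the tag and the
# connection string with your own values before running.
docker run -d \
  -e POSTGRESQL_CONNECTION_URI="postgresql://user:password@pg-host:5432/supertokens" \
  -p 3567:3567 \
  supertokens/supertokens-postgresql:9.0
```

Once the core reports healthy, the schema exists and the container can be stopped until the data import is complete.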

3. Export data from MySQL

3.1 Export standard tables

Run the following command to export most of your data:

mysqldump supertokens --fields-terminated-by ',' --fields-enclosed-by '"' --fields-escaped-by '\' --no-create-info --tab /var/lib/mysql-files/
This creates CSV files for all tables in the /var/lib/mysql-files/ directory.

info

If you do not have permission to write to the database server's filesystem, you can export the tables one by one with a script instead.
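
For exporting without filesystem access on the server, a minimal Python sketch along these lines works, assuming the mysql-connector-python package and a directory you can write to locally (credentials and paths are placeholders):

```python
import csv

def row_to_csv_values(row):
    # Mirror mysqldump's NULL convention: emit \N so that PostgreSQL's
    # COPY ... NULL as '\N' recognises missing values on import.
    return [r"\N" if value is None else value for value in row]

def export_all_tables(conn, out_dir):
    # Dump every table in the connected database to <out_dir>/<table>.csv,
    # matching the mysqldump flags used above: ',' delimiter, '"' quote,
    # and '\' as the escape character.
    cur = conn.cursor()
    cur.execute("SHOW TABLES")
    tables = [row[0] for row in cur.fetchall()]
    for table in tables:
        cur.execute(f"SELECT * FROM `{table}`")
        with open(f"{out_dir}/{table}.csv", "w", newline="") as f:
            writer = csv.writer(f, quotechar='"', escapechar="\\",
                                doublequote=False)
            for row in cur:
                writer.writerow(row_to_csv_values(row))

# Usage sketch (requires the mysql-connector-python package):
#   import mysql.connector
#   conn = mysql.connector.connect(host="localhost", user="DB_USER",
#                                  password="DB_PASS", database="supertokens")
#   export_all_tables(conn, "/var/lib/mysql-files")
```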

3.2 Export the WebAuthn credentials table

The webauthn_credentials table requires special handling because the public_key column is stored as a binary type (BLOB).

SELECT id, app_id, rp_id, user_id, counter, HEX(public_key) AS public_key, transports, created_at, updated_at 
FROM webauthn_credentials
INTO OUTFILE '/var/lib/mysql-files/webauthn_credentials_hex.txt'
FIELDS TERMINATED BY ','
OPTIONALLY ENCLOSED BY '"'
ESCAPED BY '\\'
LINES TERMINATED BY '\n';

This exports the public_key field as hexadecimal text for proper conversion to PostgreSQL's Binary Data (BYTEA) format.
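
The encoding round trip can be sanity-checked outside either database: MySQL's HEX() produces hexadecimal text that PostgreSQL's decode(..., 'hex') later turns back into the original bytes. A few lines of Python illustrate the same pair of operations (the key value is a stand-in):

```python
# A stored public key is just opaque bytes; use a stand-in value here.
original = b"\x01\x02\xfe\xff"

# What MySQL's HEX(public_key) emits: uppercase hexadecimal text.
hex_text = original.hex().upper()

# What PostgreSQL's decode(public_key, 'hex') reconstructs on import.
restored = bytes.fromhex(hex_text)

assert restored == original
print(hex_text)  # 0102FEFF
```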

4. Transfer data files

If necessary, copy the exported CSV files to a location where the PostgreSQL database can access them.

5. Import data into the PostgreSQL database

Next, you need to import the data into your PostgreSQL database.

5.1. Disable triggers

Connect to your PostgreSQL database and disable triggers to prevent constraint violations during import.

SET session_replication_role = 'replica';
info

If you cannot disable the triggers, import the tables one by one in an order that satisfies the foreign key constraints, so that no constraint fires against a row that has not been imported yet.

5.2 Import the standard tables

For most tables, you can import the data directly.

COPY <table_name> FROM '/pg-data-host/<table_name>.csv' 
CSV DELIMITER ',' QUOTE '"' ESCAPE '\' NULL as '\N';
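
With many tables, the COPY statement can be generated per exported file. A rough shell sketch, assuming the CSV files are named after their tables and psql runs as a user with COPY privileges (the directory and database name are placeholders):

```shell
# Hypothetical loop: run one COPY per exported CSV file, deriving the
# table name from the file name.
for f in /pg-data-host/*.csv; do
    table=$(basename "$f" .csv)
    psql -d supertokens -c \
        "COPY $table FROM '$f' CSV DELIMITER ',' QUOTE '\"' ESCAPE '\\' NULL as '\N';"
done
```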

5.3 Handle the third-party provider clients table

The tenant_thirdparty_provider_clients table requires special handling because of differences between MySQL's JSON type and PostgreSQL's text[] type for the scope column.

1. Create a staging table:

CREATE TABLE tenant_thirdparty_provider_clients_raw (
    connection_uri_domain VARCHAR(256) DEFAULT '' NOT NULL,
    app_id VARCHAR(64) DEFAULT 'public' NOT NULL,
    tenant_id VARCHAR(64) DEFAULT 'public' NOT NULL,
    third_party_id VARCHAR(28) NOT NULL,
    client_type VARCHAR(64) DEFAULT '' NOT NULL,
    client_id VARCHAR(256) NOT NULL,
    client_secret TEXT,
    scope JSONB,
    force_pkce BOOLEAN,
    additional_config TEXT
);

2. Import data into the staging table:

COPY tenant_thirdparty_provider_clients_raw FROM '/host/tenant_thirdparty_provider_clients.txt'
CSV DELIMITER ',' QUOTE '"' ESCAPE '\' NULL as '\N';

3. Convert and insert into the final table:

INSERT INTO tenant_thirdparty_provider_clients (
    connection_uri_domain, app_id, tenant_id, third_party_id, client_type,
    client_id, client_secret, force_pkce, additional_config, scope
)
SELECT
    connection_uri_domain, app_id, tenant_id, third_party_id, client_type,
    client_id, client_secret, force_pkce, additional_config,
    ARRAY(
        SELECT jsonb_array_elements_text(scope->jsonb_object_keys(scope))
    )
FROM tenant_thirdparty_provider_clients_raw;
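
The shape of that conversion can be illustrated in Python, assuming a hypothetical scope value stored as a single-key JSON object wrapping an array, which is what the jsonb_object_keys/jsonb_array_elements_text pair above unwraps:

```python
import json

# Hypothetical MySQL-exported scope value: one wrapper key holding an array.
raw = '{"scope": ["openid", "email", "profile"]}'

obj = json.loads(raw)
(wrapper_key,) = obj.keys()                   # jsonb_object_keys(scope)
scopes = [str(s) for s in obj[wrapper_key]]   # jsonb_array_elements_text(...)

print(scopes)  # ['openid', 'email', 'profile']
```

The result is the flat list of strings that PostgreSQL stores in the text[] column.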

5.4 Handle the WebAuthn credentials table

The webauthn_credentials table requires converting the public_key column from MySQL's BLOB (Binary Large Object) type to PostgreSQL's BYTEA (binary data) type.

1. Create a staging table:

CREATE TABLE IF NOT EXISTS webauthn_credentials_staging (
    id VARCHAR(256) NOT NULL,
    app_id VARCHAR(64) DEFAULT 'public' NOT NULL,
    rp_id VARCHAR(256) NOT NULL,
    user_id CHAR(36),
    counter BIGINT NOT NULL,
    public_key TEXT NOT NULL,
    transports TEXT NOT NULL,
    created_at BIGINT NOT NULL,
    updated_at BIGINT NOT NULL
);

2. Import the hexadecimal data:

COPY webauthn_credentials_staging FROM '/host/webauthn_credentials_hex.txt'
CSV DELIMITER ',' QUOTE '"' ESCAPE '\' NULL as '\N';

3. Convert and insert into the final table:

INSERT INTO webauthn_credentials (
    id, app_id, rp_id, user_id, counter, public_key, transports, created_at, updated_at
)
SELECT
    id, app_id, rp_id, user_id, counter,
    decode(public_key, 'hex'),
    transports, created_at, updated_at
FROM webauthn_credentials_staging;

5.5 Delete the staging tables

Drop the two staging tables:

DROP TABLE webauthn_credentials_staging;
DROP TABLE tenant_thirdparty_provider_clients_raw;

5.6 Re-enable triggers

After importing all data, re-enable the triggers:

SET session_replication_role = 'origin';

6. Verify the migration

Verify that all data migrated successfully by comparing record counts between your MySQL and PostgreSQL databases:

#!/bin/bash

MYSQL_HOST="DB_HOST"
MYSQL_PORT="3306"
MYSQL_USER="DB_USER"
MYSQL_DB="DB_NAME"
MYSQL_PASS="DB_PASS"

PG_HOST="PG_HOST"
PG_PORT="5432"
PG_USER="PG_USER"
PG_DB="PG_DB"
PG_PASS="PG_PASS"

echo "Comparing table row counts between MySQL and PostgreSQL"

echo "Getting table list from MySQL"
TABLES=$(mysql -h "$MYSQL_HOST" -P "$MYSQL_PORT" -u "$MYSQL_USER" -p"$MYSQL_PASS" \
    -N -e "SHOW TABLES" "$MYSQL_DB")

if [ $? -ne 0 ]; then
    echo "ERROR: Failed to get table list from MySQL"
    exit 1
fi

TOTAL_TABLES=$(echo "$TABLES" | wc -l)
echo "Found $TOTAL_TABLES tables"
echo ""

MATCH_COUNT=0
MISMATCH_COUNT=0
ERROR_COUNT=0

for TABLE in $TABLES; do
    MYSQL_COUNT=$(mysql -h "$MYSQL_HOST" -P "$MYSQL_PORT" -u "$MYSQL_USER" -p"$MYSQL_PASS" \
        -N -e "SELECT COUNT(*) FROM $TABLE" "$MYSQL_DB" 2>&1)

    if [ $? -ne 0 ]; then
        echo "$TABLE - MySQL: ERROR, PostgreSQL: -, Status: ERROR"
        ERROR_COUNT=$((ERROR_COUNT + 1))
        continue
    fi

    PG_COUNT=$(PGPASSWORD=$PG_PASS psql -h "$PG_HOST" -p "$PG_PORT" -U "$PG_USER" -d "$PG_DB" \
        -t -c "SELECT COUNT(*) FROM $TABLE" 2>&1)

    if [ $? -ne 0 ]; then
        echo "$TABLE - MySQL: $MYSQL_COUNT, PostgreSQL: ERROR, Status: ERROR"
        ERROR_COUNT=$((ERROR_COUNT + 1))
        continue
    fi

    MYSQL_COUNT=$(echo $MYSQL_COUNT | xargs)
    PG_COUNT=$(echo $PG_COUNT | xargs)

    if [ "$MYSQL_COUNT" = "$PG_COUNT" ]; then
        echo "$TABLE - MySQL: $MYSQL_COUNT, PostgreSQL: $PG_COUNT, Status: ✓ MATCH"
        MATCH_COUNT=$((MATCH_COUNT + 1))
    else
        echo "$TABLE - MySQL: $MYSQL_COUNT, PostgreSQL: $PG_COUNT, Status: ✗ MISMATCH"
        MISMATCH_COUNT=$((MISMATCH_COUNT + 1))
    fi
done

echo "Summary:"
echo " Total tables: $TOTAL_TABLES"
echo " Matching: $MATCH_COUNT"
echo " Mismatched: $MISMATCH_COUNT"
echo " Errors: $ERROR_COUNT"
echo ""

if [ $MISMATCH_COUNT -eq 0 ] && [ $ERROR_COUNT -eq 0 ]; then
    echo "✓ All tables match!"
    exit 0
else
    echo "✗ Some tables have mismatches or errors"
    exit 1
fi

If the numbers match, you have successfully migrated your SuperTokens data from MySQL to PostgreSQL 🎉