mattermost/server/channels/store/sqlstore/schema_dump.go
Jesse Hallam 41e5c7286b
Remove vestigial MySQL support (#34865)
* Remove legacy quoteColumnName() utility

Since Mattermost only supports PostgreSQL, the quoteColumnName() helper
that was designed to handle database-specific column quoting is no longer
needed. The function was a no-op that simply returned the column name
unchanged.

Remove the function from utils.go and update status_store.go to use
the "Manual" column name directly.

* Remove legacy driver checks from store.go

Since Mattermost only supports PostgreSQL, remove conditional checks
for different database drivers:

- Simplify specialSearchChars() to always return PostgreSQL-compatible chars
- Remove driver check from computeBinaryParam()
- Remove driver check from computeDefaultTextSearchConfig()
- Simplify GetDbVersion() to use PostgreSQL syntax directly
- Remove switch statement from ensureMinimumDBVersion()
- Remove unused driver parameter from versionString()

* Remove MySQL alternatives for batch delete operations

Since Mattermost only supports PostgreSQL, remove the MySQL-specific
DELETE...LIMIT syntax and keep only the PostgreSQL array-based approach:

- reaction_store.go: Use PostgreSQL array syntax for PermanentDeleteBatch
- file_info_store.go: Use PostgreSQL array syntax for PermanentDeleteBatch
- preference_store.go: Use PostgreSQL tuple IN subquery for DeleteInvalidVisibleDmsGms

* Remove MySQL alternatives for UPDATE...FROM syntax

Since Mattermost only supports PostgreSQL, remove the MySQL-specific
UPDATE syntax that joins tables differently:

- thread_store.go: Use PostgreSQL UPDATE...FROM syntax in
  MarkAllAsReadByChannels and MarkAllAsReadByTeam
- post_store.go: Use PostgreSQL UPDATE...FROM syntax in deleteThreadFiles

* Remove MySQL alternatives for JSON and subquery operations

Since Mattermost only supports PostgreSQL, remove the MySQL-specific
JSON and subquery syntax:

- thread_store.go: Use PostgreSQL JSONB operators for updating participants
- access_control_policy_store.go: Use PostgreSQL JSONB @> operator for
  querying JSON imports
- session_store.go: Use PostgreSQL subquery syntax for Cleanup
- job_store.go: Use PostgreSQL subquery syntax for Cleanup

* Remove MySQL alternatives for CTE queries

Since Mattermost only supports PostgreSQL, simplify code that
uses CTEs (Common Table Expressions):

- channel_store.go: Remove MySQL CASE-based fallback in
  UpdateLastViewedAt and use PostgreSQL CTE exclusively
- draft_store.go: Remove driver checks in DeleteEmptyDraftsByCreateAtAndUserId,
  DeleteOrphanDraftsByCreateAtAndUserId, and determineMaxDraftSize

* Remove driver checks in migrate.go and schema_dump.go

Simplify migration code to use PostgreSQL driver directly since
PostgreSQL is the only supported database.

* Remove driver checks in sqlx_wrapper.go

Always apply lowercase named parameter transformation since PostgreSQL
is the only supported database.

* Remove driver checks in user_store.go

Simplify user store functions to use PostgreSQL-only code paths:
- Remove isPostgreSQL parameter from helper functions
- Use LEFT JOIN pattern instead of subqueries for bot filtering
- Always use case-insensitive LIKE with lower() for search
- Remove MySQL-specific role filtering alternatives

* Remove driver checks in post_store.go

Simplify post_store.go to use PostgreSQL-only code paths:
- Inline getParentsPostsPostgreSQL into getParentsPosts
- Use PostgreSQL TO_CHAR/TO_TIMESTAMP for date formatting in analytics
- Use PostgreSQL array syntax for batch deletes
- Simplify determineMaxPostSize to always use information_schema
- Use PostgreSQL jsonb subtraction for thread participants
- Always execute RefreshPostStats (PostgreSQL materialized views)
- Use materialized views for AnalyticsPostCountsByDay
- Simplify AnalyticsPostCountByTeam to always use countByTeam

* Remove driver checks in channel_store.go

Simplify channel_store.go to use PostgreSQL-only code paths:
- Always use sq.Dollar.ReplacePlaceholders for UNION queries
- Use PostgreSQL LEFT JOIN for retention policy exclusion
- Use PostgreSQL jsonb @> operator for access control policy imports
- Simplify buildLIKEClause to always use LOWER() for case-insensitive search
- Simplify buildFulltextClauseX to always use PostgreSQL to_tsvector/to_tsquery
- Simplify searchGroupChannelsQuery to use ARRAY_TO_STRING/ARRAY_AGG

* Remove driver checks in file_info_store.go

Simplify file_info_store.go to use PostgreSQL-only code paths:
- Always use PostgreSQL to_tsvector/to_tsquery for file search
- Use file_stats materialized view for CountAll()
- Use file_stats materialized view for GetStorageUsage() when not including deleted
- Always execute RefreshFileStats() for materialized view refresh

* Remove driver checks in attributes_store.go

Simplify attributes_store.go to use PostgreSQL-only code paths:
- Always execute RefreshAttributes() for materialized view refresh
- Remove isPostgreSQL parameter from generateSearchQueryForExpression
- Always use PostgreSQL LOWER() LIKE LOWER() syntax for case-insensitive search

* Remove driver checks in retention_policy_store.go

Simplify retention_policy_store.go to use PostgreSQL-only code paths:
- Remove isPostgres parameter from scanRetentionIdsForDeletion
- Always use pq.Array for scanning retention IDs
- Always use pq.Array for inserting retention IDs
- Remove unused json import

* Remove driver checks in property stores

Simplify property_field_store.go and property_value_store.go to use
PostgreSQL-only code paths:
- Always use PostgreSQL type casts (::text, ::jsonb, ::bigint, etc.)
- Remove isPostgres variable and conditionals

* Remove driver checks in channel_member_history_store.go

Simplify PermanentDeleteBatch to use PostgreSQL-only code path:
- Always use ctid-based subquery for DELETE with LIMIT

* Remove remaining driver checks in user_store.go

Simplify user_store.go to use PostgreSQL-only code paths:
- Use LEFT JOIN for bot exclusion in AnalyticsActiveCountForPeriod
- Use LEFT JOIN for bot exclusion in IsEmpty

* Simplify fulltext search by consolidating buildFulltextClause functions

Remove convertMySQLFullTextColumnsToPostgres and consolidate
buildFulltextClause and buildFulltextClauseX into a single function
that takes variadic column arguments and returns sq.Sqlizer.

* Simplify SQL stores leveraging PostgreSQL-only support

- Simplify UpdateMembersRole in channel_store.go and team_store.go
  to use UPDATE...RETURNING instead of SELECT + UPDATE
- Simplify GetPostReminders in post_store.go to use DELETE...RETURNING
- Simplify DeleteOrphanedRows queries by removing MySQL workarounds
  for subquery locking issues
- Simplify UpdateUserLastSyncAt to use UPDATE...FROM...RETURNING
  instead of fetching user first then updating
- Remove MySQL index hint workarounds in ORDER BY clauses
- Update outdated comments referencing MySQL
- Consolidate buildFulltextClause and remove convertMySQLFullTextColumnsToPostgres

* Remove MySQL-specific test artifacts

- Delete unused MySQLStopWords variable and stop_word.go file
- Remove redundant testSearchEmailAddressesWithQuotes test
  (already covered by testSearchEmailAddresses)
- Update comment that referenced MySQL query planning

* Remove MySQL references from server code outside sqlstore

- Update config example and DSN parsing docs to reflect PostgreSQL-only support
- Remove mysql:// scheme check from IsDatabaseDSN
- Simplify SanitizeDataSource to only handle PostgreSQL
- Remove outdated MySQL comments from model and plugin code

* Remove MySQL references from test files

- Update test DSNs to use PostgreSQL format
- Remove dead mysql-replica flag and replicaFlag variable
- Simplify tests that had MySQL/PostgreSQL branches

* Update docs and test config to use PostgreSQL

- Update mmctl config set example to use postgres driver
- Update test-config.json to use PostgreSQL DSN format

* Remove MySQL migration scripts, test data, and docker image

Delete MySQL-related files that are no longer needed:
- ESR upgrade scripts (esr.*.mysql.*.sql)
- MySQL schema dumps (mattermost-mysql-*.sql)
- MySQL replication test scripts (replica-*.sh, mysql-migration-test.sh)
- MySQL test warmup data (mysql_migration_warmup.sql)
- MySQL docker image reference from mirror-docker-images.json

* Remove MySQL references from webapp

- Simplify minimumHashtagLength description to remove MySQL-specific configuration note
- Remove unused HIDE_MYSQL_STATS_NOTIFICATION preference constant
- Update en.json i18n source file

* clean up e2e-tests

* rm server/tests/template.load

* Use teamMemberSliceColumns() in UpdateMembersRole RETURNING clause

Refactor to use the existing helper function instead of hardcoding
the column names, ensuring consistency if the columns are updated.

* u.id -> u.Id

* address code review feedback

---------

Co-authored-by: Mattermost Build <build@mattermost.com>
2026-01-20 21:01:59 +00:00

303 lines
8.3 KiB
Go

// Copyright (c) 2015-present Mattermost, Inc. All Rights Reserved.
// See LICENSE.txt for license information.

package sqlstore

import (
	"database/sql"
	"strings"

	"github.com/hashicorp/go-multierror"
	sq "github.com/mattermost/squirrel"
	"github.com/pkg/errors"

	"github.com/mattermost/mattermost/server/public/model"
)

// GetSchemaDefinition dumps the database schema.
func (ss *SqlStore) GetSchemaDefinition() (*model.SupportPacketDatabaseSchema, error) {
	var schemaInfo model.SupportPacketDatabaseSchema
	var rErr *multierror.Error

	// Get the database collation
	dbCollation, err := ss.getDatabaseCollation()
	if err != nil {
		rErr = multierror.Append(rErr, err)
	} else {
		schemaInfo.DatabaseCollation = dbCollation
	}

	// Get the database encoding
	dbEncoding, err := ss.getDatabaseEncoding()
	if err != nil {
		rErr = multierror.Append(rErr, err)
	} else {
		schemaInfo.DatabaseEncoding = dbEncoding
	}

	// Get table options
	tableOptions, err := ss.getTableOptions()
	if err != nil {
		rErr = multierror.Append(rErr, err)
	}

	// Get table schema information
	tablesMap, tableCollations, err := ss.getTableSchemaInformation()
	if err != nil {
		rErr = multierror.Append(rErr, err)
	}

	// Get table indexes
	tableIndexes, err := ss.getTableIndexes()
	if err != nil {
		rErr = multierror.Append(rErr, err)
	}

	// Process and combine table metadata
	for _, table := range tablesMap {
		// Add table collation if it exists
		if collation, ok := tableCollations[table.Name]; ok {
			table.Collation = collation
		}

		// Add table options if they exist
		if options, ok := tableOptions[table.Name]; ok && len(options) > 0 {
			table.Options = options
		}

		// Add table indexes if they exist
		if indexes, ok := tableIndexes[table.Name]; ok {
			table.Indexes = indexes
		}

		schemaInfo.Tables = append(schemaInfo.Tables, *table)
	}

	return &schemaInfo, rErr.ErrorOrNil()
}

// getDatabaseCollation retrieves the database collation for PostgreSQL
func (ss *SqlStore) getDatabaseCollation() (string, error) {
	var dbCollation sql.NullString
	collationQuery := sq.Select("datcollate").
		From("pg_database").
		Where(sq.Expr("datname = current_database()"))

	sqlString, args, err := collationQuery.PlaceholderFormat(sq.Dollar).ToSql()
	if err != nil {
		return "", errors.Wrap(err, "failed to build database collation query")
	}

	err = ss.GetMaster().DB.QueryRow(sqlString, args...).Scan(&dbCollation)
	if err != nil {
		return "", errors.Wrap(err, "failed to get database collation")
	}

	if !dbCollation.Valid {
		return "", nil
	}

	return dbCollation.String, nil
}

// getDatabaseEncoding retrieves the database encoding for PostgreSQL
func (ss *SqlStore) getDatabaseEncoding() (string, error) {
	var dbEncoding sql.NullString
	encodingQuery := sq.Select("pg_encoding_to_char(encoding)").
		From("pg_database").
		Where(sq.Expr("datname = current_database()"))

	sqlString, args, err := encodingQuery.PlaceholderFormat(sq.Dollar).ToSql()
	if err != nil {
		return "", errors.Wrap(err, "failed to build database encoding query")
	}

	err = ss.GetMaster().DB.QueryRow(sqlString, args...).Scan(&dbEncoding)
	if err != nil {
		return "", errors.Wrap(err, "failed to get database encoding")
	}

	if !dbEncoding.Valid {
		return "", nil
	}

	return dbEncoding.String, nil
}

// getTableOptions retrieves table-specific options from PostgreSQL system catalogs
func (ss *SqlStore) getTableOptions() (map[string]map[string]string, error) {
	tableOptions := make(map[string]map[string]string)

	optionsQuery := sq.Select("c.relname as table_name", "unnest(c.reloptions) as option_value").
		From("pg_class c").
		Join("pg_namespace n ON n.oid = c.relnamespace").
		Where(sq.And{
			sq.Expr("n.nspname = current_schema()"),
			sq.Eq{"c.relkind": "r"},
			sq.NotEq{"c.reloptions": nil},
		})

	optionsSql, optionsArgs, err := optionsQuery.PlaceholderFormat(sq.Dollar).ToSql()
	if err != nil {
		return nil, errors.Wrap(err, "failed to build table options query")
	}

	optionsRows, err := ss.GetMaster().DB.Query(optionsSql, optionsArgs...)
	if err != nil {
		return nil, errors.Wrap(err, "failed to query table options")
	}
	defer optionsRows.Close()

	// Process table options
	var rErr *multierror.Error
	for optionsRows.Next() {
		var tableName string
		var optionValue string
		err = optionsRows.Scan(&tableName, &optionValue)
		if err != nil {
			rErr = multierror.Append(rErr, errors.Wrap(err, "failed to scan table options row"))
			continue
		}

		// Parse option in format key=value
		parts := strings.SplitN(optionValue, "=", 2)
		if len(parts) != 2 {
			continue
		}
		key := parts[0]
		value := parts[1]

		// Initialize the options map for this table if needed
		if _, ok := tableOptions[tableName]; !ok {
			tableOptions[tableName] = make(map[string]string)
		}

		// Add option to the table
		tableOptions[tableName][key] = value
	}

	return tableOptions, rErr.ErrorOrNil()
}

// getTableSchemaInformation retrieves table and column information from information_schema
func (ss *SqlStore) getTableSchemaInformation() (map[string]*model.DatabaseTable, map[string]string, error) {
	tablesMap := make(map[string]*model.DatabaseTable)
	tableCollations := make(map[string]string)

	schemaQuery := sq.Select(
		"t.table_name",
		"c.column_name",
		"c.data_type",
		"c.character_maximum_length",
		"c.is_nullable",
		"c.collation_name",
	).
		From("information_schema.tables t").
		LeftJoin("information_schema.columns c ON t.table_name = c.table_name AND t.table_schema = c.table_schema").
		Where(sq.Expr("t.table_schema = current_schema()")).
		OrderBy("t.table_name", "c.ordinal_position")

	schemaSql, schemaArgs, err := schemaQuery.PlaceholderFormat(sq.Dollar).ToSql()
	if err != nil {
		return nil, nil, errors.Wrap(err, "failed to build schema information query")
	}

	rows, err := ss.GetMaster().DB.Query(schemaSql, schemaArgs...)
	if err != nil {
		return nil, nil, errors.Wrap(err, "failed to query schema information")
	}
	defer rows.Close()

	var rErr *multierror.Error
	for rows.Next() {
		var tableName, columnName, dataType, isNullable string
		var characterMaxLength sql.NullInt64
		var collationName sql.NullString
		err = rows.Scan(&tableName, &columnName, &dataType, &characterMaxLength, &isNullable, &collationName)
		if err != nil {
			rErr = multierror.Append(rErr, errors.Wrap(err, "failed to scan database schema row"))
			continue
		}

		// Track collation names for tables.
		// Only the first non-null collation encountered per table is stored in tableCollations.
		if collationName.Valid && collationName.String != "" {
			if _, ok := tableCollations[tableName]; !ok {
				tableCollations[tableName] = collationName.String
			}
		}

		// Initialize table in map if it doesn't exist
		if _, ok := tablesMap[tableName]; !ok {
			tablesMap[tableName] = &model.DatabaseTable{
				Name:    tableName,
				Columns: []model.DatabaseColumn{},
			}
		}

		// Add column to table
		if columnName != "" {
			maxLength := int64(0)
			if characterMaxLength.Valid {
				maxLength = characterMaxLength.Int64
			}
			tablesMap[tableName].Columns = append(tablesMap[tableName].Columns, model.DatabaseColumn{
				Name:       columnName,
				DataType:   dataType,
				MaxLength:  maxLength,
				IsNullable: isNullable == "YES",
			})
		}
	}

	return tablesMap, tableCollations, rErr.ErrorOrNil()
}

// getTableIndexes retrieves index information for all tables
func (ss *SqlStore) getTableIndexes() (map[string][]model.DatabaseIndex, error) {
	tableIndexes := make(map[string][]model.DatabaseIndex)

	// Query pg_indexes for index information
	indexQuery := sq.Select(
		"tablename",
		"indexname",
		"indexdef",
	).
		From("pg_indexes").
		Where(sq.Expr("schemaname = current_schema()"))

	indexSql, indexArgs, err := indexQuery.PlaceholderFormat(sq.Dollar).ToSql()
	if err != nil {
		return nil, errors.Wrap(err, "failed to build index query")
	}

	rows, err := ss.GetMaster().DB.Query(indexSql, indexArgs...)
	if err != nil {
		return nil, errors.Wrap(err, "failed to query index information")
	}
	defer rows.Close()

	var rErr *multierror.Error
	for rows.Next() {
		var tableName, indexName, indexDef string
		err = rows.Scan(&tableName, &indexName, &indexDef)
		if err != nil {
			rErr = multierror.Append(rErr, errors.Wrap(err, "failed to scan index row"))
			continue
		}

		index := model.DatabaseIndex{
			Name:       indexName,
			Definition: indexDef,
		}
		tableIndexes[tableName] = append(tableIndexes[tableName], index)
	}

	return tableIndexes, rErr.ErrorOrNil()
}