Remove vestigial MySQL support (#34865)

* Remove legacy quoteColumnName() utility

Since Mattermost only supports PostgreSQL, the quoteColumnName() helper
that was designed to handle database-specific column quoting is no longer
needed. The function was a no-op that simply returned the column name
unchanged.

Remove the function from utils.go and update status_store.go to use
the "Manual" column name directly.

* Remove legacy driver checks from store.go

Since Mattermost only supports PostgreSQL, remove conditional checks
for different database drivers:

- Simplify specialSearchChars() to always return PostgreSQL-compatible chars
- Remove driver check from computeBinaryParam()
- Remove driver check from computeDefaultTextSearchConfig()
- Simplify GetDbVersion() to use PostgreSQL syntax directly
- Remove switch statement from ensureMinimumDBVersion()
- Remove unused driver parameter from versionString()

* Remove MySQL alternatives for batch delete operations

Since Mattermost only supports PostgreSQL, remove the MySQL-specific
DELETE...LIMIT syntax and keep only the PostgreSQL array-based approach:

- reaction_store.go: Use PostgreSQL array syntax for PermanentDeleteBatch
- file_info_store.go: Use PostgreSQL array syntax for PermanentDeleteBatch
- preference_store.go: Use PostgreSQL tuple IN subquery for DeleteInvalidVisibleDmsGms
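
For illustration, the array-based batch delete typically splits into a limited id
select followed by a `= ANY($1)` delete (the ids passed as a single array
parameter, e.g. via `pq.Array`). This is a hedged sketch; `buildBatchDeleteByIds`
and its column names are hypothetical, not helpers from the Mattermost codebase:

```go
package main

import "fmt"

// buildBatchDeleteByIds sketches the PostgreSQL-only pattern: collect up to
// a limit of ids first, then delete exactly those rows with `= ANY($1)`.
// MySQL's DELETE ... LIMIT needed no such two-step approach.
func buildBatchDeleteByIds(table string) (selectSQL, deleteSQL string) {
	selectSQL = fmt.Sprintf("SELECT Id FROM %s WHERE CreateAt < $1 LIMIT $2", table)
	deleteSQL = fmt.Sprintf("DELETE FROM %s WHERE Id = ANY($1)", table)
	return
}

func main() {
	sel, del := buildBatchDeleteByIds("Reactions")
	fmt.Println(sel)
	fmt.Println(del)
}
```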

* Remove MySQL alternatives for UPDATE...FROM syntax

Since Mattermost only supports PostgreSQL, remove the MySQL-specific
UPDATE syntax that joins tables differently:

- thread_store.go: Use PostgreSQL UPDATE...FROM syntax in
  MarkAllAsReadByChannels and MarkAllAsReadByTeam
- post_store.go: Use PostgreSQL UPDATE...FROM syntax in deleteThreadFiles

* Remove MySQL alternatives for JSON and subquery operations

Since Mattermost only supports PostgreSQL, remove the MySQL-specific
JSON and subquery syntax:

- thread_store.go: Use PostgreSQL JSONB operators for updating participants
- access_control_policy_store.go: Use PostgreSQL JSONB @> operator for
  querying JSON imports
- session_store.go: Use PostgreSQL subquery syntax for Cleanup
- job_store.go: Use PostgreSQL subquery syntax for Cleanup
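
As a plain-Go illustration of what the JSONB containment predicate
`Data->'imports' @> '"<id>"'::jsonb` asks of the database, the check below
mirrors it in memory. This is a sketch for intuition only, not code from the
store layer:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// importsContain mirrors the PostgreSQL predicate
//   Data->'imports' @> '"<id>"'::jsonb
// in plain Go: does the "imports" JSON array contain the given id?
func importsContain(data []byte, id string) (bool, error) {
	var doc struct {
		Imports []string `json:"imports"`
	}
	if err := json.Unmarshal(data, &doc); err != nil {
		return false, err
	}
	for _, imp := range doc.Imports {
		if imp == id {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, _ := importsContain([]byte(`{"imports":["parent1","parent2"]}`), "parent1")
	fmt.Println(ok)
}
```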

* Remove MySQL alternatives for CTE queries

Since Mattermost only supports PostgreSQL, simplify code that
uses CTEs (Common Table Expressions):

- channel_store.go: Remove MySQL CASE-based fallback in
  UpdateLastViewedAt and use PostgreSQL CTE exclusively
- draft_store.go: Remove driver checks in DeleteEmptyDraftsByCreateAtAndUserId,
  DeleteOrphanDraftsByCreateAtAndUserId, and determineMaxDraftSize

* Remove driver checks in migrate.go and schema_dump.go

Simplify migration code to use PostgreSQL driver directly since
PostgreSQL is the only supported database.

* Remove driver checks in sqlx_wrapper.go

Always apply lowercase named parameter transformation since PostgreSQL
is the only supported database.

* Remove driver checks in user_store.go

Simplify user store functions to use PostgreSQL-only code paths:
- Remove isPostgreSQL parameter from helper functions
- Use LEFT JOIN pattern instead of subqueries for bot filtering
- Always use case-insensitive LIKE with lower() for search
- Remove MySQL-specific role filtering alternatives

* Remove driver checks in post_store.go

Simplify post_store.go to use PostgreSQL-only code paths:
- Inline getParentsPostsPostgreSQL into getParentsPosts
- Use PostgreSQL TO_CHAR/TO_TIMESTAMP for date formatting in analytics
- Use PostgreSQL array syntax for batch deletes
- Simplify determineMaxPostSize to always use information_schema
- Use PostgreSQL jsonb subtraction for thread participants
- Always execute RefreshPostStats (PostgreSQL materialized views)
- Use materialized views for AnalyticsPostCountsByDay
- Simplify AnalyticsPostCountByTeam to always use countByTeam

* Remove driver checks in channel_store.go

Simplify channel_store.go to use PostgreSQL-only code paths:
- Always use sq.Dollar.ReplacePlaceholders for UNION queries
- Use PostgreSQL LEFT JOIN for retention policy exclusion
- Use PostgreSQL jsonb @> operator for access control policy imports
- Simplify buildLIKEClause to always use LOWER() for case-insensitive search
- Simplify buildFulltextClauseX to always use PostgreSQL to_tsvector/to_tsquery
- Simplify searchGroupChannelsQuery to use ARRAY_TO_STRING/ARRAY_AGG
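
The `sq.Dollar.ReplacePlaceholders` step is needed because the CTE query is
assembled with `sq.Question` placeholders (to work around squirrel's prefix
limitation) and must be rewritten to `$1, $2, ...` before PostgreSQL sees it.
A simplified sketch of that rewrite (the real squirrel version also handles
`??` escapes):

```go
package main

import (
	"fmt"
	"strings"
)

// toDollar is a simplified model of sq.Dollar.ReplacePlaceholders: each `?`
// placeholder becomes `$1`, `$2`, ... in order of appearance.
func toDollar(sql string) string {
	var b strings.Builder
	n := 0
	for _, r := range sql {
		if r == '?' {
			n++
			fmt.Fprintf(&b, "$%d", n)
			continue
		}
		b.WriteRune(r)
	}
	return b.String()
}

func main() {
	fmt.Println(toDollar("SELECT Id FROM Channels WHERE Id = ? AND DeleteAt = ?"))
}
```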

* Remove driver checks in file_info_store.go

Simplify file_info_store.go to use PostgreSQL-only code paths:
- Always use PostgreSQL to_tsvector/to_tsquery for file search
- Use file_stats materialized view for CountAll()
- Use file_stats materialized view for GetStorageUsage() when not including deleted
- Always execute RefreshFileStats() for materialized view refresh

* Remove driver checks in attributes_store.go

Simplify attributes_store.go to use PostgreSQL-only code paths:
- Always execute RefreshAttributes() for materialized view refresh
- Remove isPostgreSQL parameter from generateSearchQueryForExpression
- Always use PostgreSQL LOWER() LIKE LOWER() syntax for case-insensitive search
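
The `LIKE ... escape '*'` clause pairs with term sanitization on the Go side:
user-supplied wildcard characters are escaped with `*` (the declared escape
character) before the surrounding `%` wildcards are added. A hedged sketch;
`escapeLikeTerm` is a hypothetical helper, not the codebase's exact function:

```go
package main

import (
	"fmt"
	"strings"
)

// escapeLikeTerm escapes LIKE wildcards in a user term using `*` as the
// escape character (matching `escape '*'` in the SQL), then wraps the term
// in `%` for substring matching.
func escapeLikeTerm(term string) string {
	r := strings.NewReplacer("*", "**", "%", "*%", "_", "*_")
	return "%" + r.Replace(term) + "%"
}

func main() {
	fmt.Println(escapeLikeTerm("50%_done"))
}
```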

* Remove driver checks in retention_policy_store.go

Simplify retention_policy_store.go to use PostgreSQL-only code paths:
- Remove isPostgres parameter from scanRetentionIdsForDeletion
- Always use pq.Array for scanning retention IDs
- Always use pq.Array for inserting retention IDs
- Remove unused json import

* Remove driver checks in property stores

Simplify property_field_store.go and property_value_store.go to use
PostgreSQL-only code paths:
- Always use PostgreSQL type casts (::text, ::jsonb, ::bigint, etc.)
- Remove isPostgres variable and conditionals

* Remove driver checks in channel_member_history_store.go

Simplify PermanentDeleteBatch to use PostgreSQL-only code path:
- Always use ctid-based subquery for DELETE with LIMIT
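
PostgreSQL's DELETE has no LIMIT clause, so the batch size is enforced by
selecting physical row ids (`ctid`) in a limited subquery. A sketch of the
resulting query shape, with hypothetical helper and parameter names:

```go
package main

import "fmt"

// ctidDeleteBatch builds the PostgreSQL delete-with-limit shape: the rows
// to remove are picked by ctid in a LIMIT-ed subquery, since DELETE itself
// cannot take a LIMIT.
func ctidDeleteBatch(table, where string) string {
	sub := fmt.Sprintf("SELECT ctid FROM %s WHERE %s LIMIT $1", table, where)
	return fmt.Sprintf("DELETE FROM %s WHERE ctid IN (%s)", table, sub)
}

func main() {
	fmt.Println(ctidDeleteBatch("ChannelMemberHistory",
		"LeaveTime IS NOT NULL AND LeaveTime <= $2"))
}
```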

* Remove remaining driver checks in user_store.go

Simplify user_store.go to use PostgreSQL-only code paths:
- Use LEFT JOIN for bot exclusion in AnalyticsActiveCountForPeriod
- Use LEFT JOIN for bot exclusion in IsEmpty

* Simplify fulltext search by consolidating buildFulltextClause functions

Remove convertMySQLFullTextColumnsToPostgres and consolidate
buildFulltextClause and buildFulltextClauseX into a single function
that takes variadic column arguments and returns sq.Sqlizer.
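
The consolidated variadic helper roughly concatenates the columns into one
`to_tsvector` and matches it against a `to_tsquery` built from the search term.
The sketch below is an illustrative approximation of that shape, not the exact
Mattermost implementation (which returns an `sq.Sqlizer`):

```go
package main

import (
	"fmt"
	"strings"
)

// fulltextClause approximates the consolidated helper: variadic columns are
// joined into a single to_tsvector expression, and the term's words are
// AND-ed together for to_tsquery.
func fulltextClause(term string, columns ...string) (clause, arg string) {
	cols := strings.Join(columns, " || ' ' || ")
	clause = fmt.Sprintf("to_tsvector('english', %s) @@ to_tsquery('english', $1)", cols)
	arg = strings.Join(strings.Fields(term), " & ")
	return
}

func main() {
	c, a := fulltextClause("town square", "Name", "DisplayName")
	fmt.Println(c)
	fmt.Println(a)
}
```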

* Simplify SQL stores leveraging PostgreSQL-only support

- Simplify UpdateMembersRole in channel_store.go and team_store.go
  to use UPDATE...RETURNING instead of SELECT + UPDATE
- Simplify GetPostReminders in post_store.go to use DELETE...RETURNING
- Simplify DeleteOrphanedRows queries by removing MySQL workarounds
  for subquery locking issues
- Simplify UpdateUserLastSyncAt to use UPDATE...FROM...RETURNING
  instead of fetching user first then updating
- Remove MySQL index hint workarounds in ORDER BY clauses
- Update outdated comments referencing MySQL
- Consolidate buildFulltextClause and remove convertMySQLFullTextColumnsToPostgres
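
The RETURNING-based simplifications above collapse a SELECT + UPDATE pair into
a single round trip: the UPDATE itself hands back the affected rows. A hedged
sketch of the query shape (table, columns, and helper name are illustrative):

```go
package main

import (
	"fmt"
	"strings"
)

// updateRoleReturning sketches the UPDATE...RETURNING pattern: instead of
// selecting members first and updating them afterwards, one statement
// updates the rows and returns them.
func updateRoleReturning(columns []string) string {
	return fmt.Sprintf(
		"UPDATE ChannelMembers SET Roles = $1 WHERE ChannelId = $2 AND UserId = ANY($3) RETURNING %s",
		strings.Join(columns, ", "))
}

func main() {
	fmt.Println(updateRoleReturning([]string{"ChannelId", "UserId", "Roles"}))
}
```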

* Remove MySQL-specific test artifacts

- Delete unused MySQLStopWords variable and stop_word.go file
- Remove redundant testSearchEmailAddressesWithQuotes test
  (already covered by testSearchEmailAddresses)
- Update comment that referenced MySQL query planning

* Remove MySQL references from server code outside sqlstore

- Update config example and DSN parsing docs to reflect PostgreSQL-only support
- Remove mysql:// scheme check from IsDatabaseDSN
- Simplify SanitizeDataSource to only handle PostgreSQL
- Remove outdated MySQL comments from model and plugin code

* Remove MySQL references from test files

- Update test DSNs to use PostgreSQL format
- Remove dead mysql-replica flag and replicaFlag variable
- Simplify tests that had MySQL/PostgreSQL branches

* Update docs and test config to use PostgreSQL

- Update mmctl config set example to use postgres driver
- Update test-config.json to use PostgreSQL DSN format

* Remove MySQL migration scripts, test data, and docker image

Delete MySQL-related files that are no longer needed:
- ESR upgrade scripts (esr.*.mysql.*.sql)
- MySQL schema dumps (mattermost-mysql-*.sql)
- MySQL replication test scripts (replica-*.sh, mysql-migration-test.sh)
- MySQL test warmup data (mysql_migration_warmup.sql)
- MySQL docker image reference from mirror-docker-images.json

* Remove MySQL references from webapp

- Simplify minimumHashtagLength description to remove MySQL-specific configuration note
- Remove unused HIDE_MYSQL_STATS_NOTIFICATION preference constant
- Update en.json i18n source file

* clean up e2e-tests

* rm server/tests/template.load

* Use teamMemberSliceColumns() in UpdateMembersRole RETURNING clause

Refactor to use the existing helper function instead of hardcoding
the column names, ensuring consistency if the columns are updated.

* u.id -> u.Id

* address code review feedback

---------

Co-authored-by: Mattermost Build <build@mattermost.com>
Jesse Hallam 2026-01-20 17:01:59 -04:00 committed by GitHub
parent dcda5304ff
commit 41e5c7286b
70 changed files with 449 additions and 7194 deletions


@ -74,7 +74,6 @@
"mochawesome-merge": "4.4.1",
"mochawesome-report-generator": "6.2.0",
"moment-timezone": "0.6.0",
"mysql": "2.18.1",
"path": "0.12.7",
"pdf-parse": "1.1.1",
"pg": "8.16.3",
@ -4160,16 +4159,6 @@
"tweetnacl": "^0.14.3"
}
},
"node_modules/bignumber.js": {
"version": "9.0.0",
"resolved": "https://registry.npmjs.org/bignumber.js/-/bignumber.js-9.0.0.tgz",
"integrity": "sha512-t/OYhhJ2SD+YGBQcjY8GzzDHEk9f3nerxjtfa6tlMXfe7frs/WozhvCNoGvpM0P3bNf3Gq5ZRMlGr5f3r4/N8A==",
"dev": true,
"license": "MIT",
"engines": {
"node": "*"
}
},
"node_modules/blob-util": {
"version": "2.0.2",
"resolved": "https://registry.npmjs.org/blob-util/-/blob-util-2.0.2.tgz",
@ -4829,13 +4818,6 @@
"node": ">=6.6.0"
}
},
"node_modules/core-util-is": {
"version": "1.0.3",
"resolved": "https://registry.npmjs.org/core-util-is/-/core-util-is-1.0.3.tgz",
"integrity": "sha512-ZQBvi1DcpJ4GDqanjucZ2Hj3wEO5pZDS89BWbkcrvdxksJorwUDDZamX9ldFkp9aw2lmBDLgkObEA4DWNJ9FYQ==",
"dev": true,
"license": "MIT"
},
"node_modules/cross-env": {
"version": "10.0.0",
"resolved": "https://registry.npmjs.org/cross-env/-/cross-env-10.0.0.tgz",
@ -7942,13 +7924,6 @@
"url": "https://github.com/sponsors/ljharb"
}
},
"node_modules/isarray": {
"version": "1.0.0",
"resolved": "https://registry.npmjs.org/isarray/-/isarray-1.0.0.tgz",
"integrity": "sha512-VLghIWNM6ELQzo7zwmcg0NmTVyWKYjvIeM83yjp0wRDTmUnrM678fQbcKBo6n2CJEF0szoG//ytg+TKla89ALQ==",
"dev": true,
"license": "MIT"
},
"node_modules/isexe": {
"version": "2.0.0",
"resolved": "https://registry.npmjs.org/isexe/-/isexe-2.0.0.tgz",
@ -9324,29 +9299,6 @@
"dev": true,
"license": "MIT"
},
"node_modules/mysql": {
"version": "2.18.1",
"resolved": "https://registry.npmjs.org/mysql/-/mysql-2.18.1.tgz",
"integrity": "sha512-Bca+gk2YWmqp2Uf6k5NFEurwY/0td0cpebAucFpY/3jhrwrVGuxU2uQFCHjU19SJfje0yQvi+rVWdq78hR5lig==",
"dev": true,
"license": "MIT",
"dependencies": {
"bignumber.js": "9.0.0",
"readable-stream": "2.3.7",
"safe-buffer": "5.1.2",
"sqlstring": "2.3.1"
},
"engines": {
"node": ">= 0.6"
}
},
"node_modules/mysql/node_modules/safe-buffer": {
"version": "5.1.2",
"resolved": "https://registry.npmjs.org/safe-buffer/-/safe-buffer-5.1.2.tgz",
"integrity": "sha512-Gd2UZBJDkXlY7GbJxfsE8/nvKkUEU1G38c1siN6QP6a9PT9MmHB8GnpscSmMJSoF8LOIrt8ud/wPtojys4G6+g==",
"dev": true,
"license": "MIT"
},
"node_modules/natural-compare": {
"version": "1.4.0",
"resolved": "https://registry.npmjs.org/natural-compare/-/natural-compare-1.4.0.tgz",
@ -10214,13 +10166,6 @@
"node": ">= 0.6.0"
}
},
"node_modules/process-nextick-args": {
"version": "2.0.1",
"resolved": "https://registry.npmjs.org/process-nextick-args/-/process-nextick-args-2.0.1.tgz",
"integrity": "sha512-3ouUOpQhtgrbOa17J7+uxOTpITYWaGP7/AhoR3+A+/1e9skrzelGi/dXzEYyvbxubEF6Wn2ypscTKiKJFFn1ag==",
"dev": true,
"license": "MIT"
},
"node_modules/prop-types": {
"version": "15.8.1",
"resolved": "https://registry.npmjs.org/prop-types/-/prop-types-15.8.1.tgz",
@ -10391,29 +10336,6 @@
"dev": true,
"license": "MIT"
},
"node_modules/readable-stream": {
"version": "2.3.7",
"resolved": "https://registry.npmjs.org/readable-stream/-/readable-stream-2.3.7.tgz",
"integrity": "sha512-Ebho8K4jIbHAxnuxi7o42OrZgF/ZTNcsZj6nRKyUmkhLFq8CHItp/fy6hQZuZmP/n3yZ9VBUbp4zz/mX8hmYPw==",
"dev": true,
"license": "MIT",
"dependencies": {
"core-util-is": "~1.0.0",
"inherits": "~2.0.3",
"isarray": "~1.0.0",
"process-nextick-args": "~2.0.0",
"safe-buffer": "~5.1.1",
"string_decoder": "~1.1.1",
"util-deprecate": "~1.0.1"
}
},
"node_modules/readable-stream/node_modules/safe-buffer": {
"version": "5.1.2",
"resolved": "https://registry.npmjs.org/safe-buffer/-/safe-buffer-5.1.2.tgz",
"integrity": "sha512-Gd2UZBJDkXlY7GbJxfsE8/nvKkUEU1G38c1siN6QP6a9PT9MmHB8GnpscSmMJSoF8LOIrt8ud/wPtojys4G6+g==",
"dev": true,
"license": "MIT"
},
"node_modules/readdirp": {
"version": "4.1.2",
"resolved": "https://registry.npmjs.org/readdirp/-/readdirp-4.1.2.tgz",
@ -11201,16 +11123,6 @@
"node": ">= 10.x"
}
},
"node_modules/sqlstring": {
"version": "2.3.1",
"resolved": "https://registry.npmjs.org/sqlstring/-/sqlstring-2.3.1.tgz",
"integrity": "sha512-ooAzh/7dxIG5+uDik1z/Rd1vli0+38izZhGzSa34FwR7IbelPWCCKSNIl8jlL/F7ERvy8CB2jNeM1E9i9mXMAQ==",
"dev": true,
"license": "MIT",
"engines": {
"node": ">= 0.6"
}
},
"node_modules/sshpk": {
"version": "1.18.0",
"resolved": "https://registry.npmjs.org/sshpk/-/sshpk-1.18.0.tgz",


@ -69,7 +69,6 @@
"mochawesome-merge": "4.4.1",
"mochawesome-report-generator": "6.2.0",
"moment-timezone": "0.6.0",
"mysql": "2.18.1",
"path": "0.12.7",
"pdf-parse": "1.1.1",
"pg": "8.16.3",


@ -1,13 +1,6 @@
// Copyright (c) 2015-present Mattermost, Inc. All Rights Reserved.
// See LICENSE.txt for license information.
/**
* Functions here are expected to work with MySQL and PostgreSQL (known as dialect).
* When updating this file, make sure to test in both dialect.
* You'll find table and columns names are being converted to lowercase. Reason being is that
* in MySQL, first letter is capitalized.
*/
const mapKeys = require('lodash.mapkeys');
function convertKeysToLowercase(obj) {
@ -33,12 +26,12 @@ const dbGetActiveUserSessions = async ({dbConfig, params: {username, userId, lim
try {
let user;
if (username) {
user = await knexClient(toLowerCase(dbConfig, 'Users')).where('username', username).first();
user = await knexClient('users').where('username', username).first();
user = convertKeysToLowercase(user);
}
const now = Date.now();
const sessions = await knexClient(toLowerCase(dbConfig, 'Sessions')).
const sessions = await knexClient('sessions').
where('userid', user ? user.id : userId).
where('expiresat', '>', now).
orderBy('lastactivityat', 'desc').
@ -60,7 +53,7 @@ const dbGetUser = async ({dbConfig, params: {username}}) => {
}
try {
const user = await knexClient(toLowerCase(dbConfig, 'Users')).where('username', username).first();
const user = await knexClient('users').where('username', username).first();
return {user: convertKeysToLowercase(user)};
} catch (error) {
@ -75,7 +68,7 @@ const dbGetUserSession = async ({dbConfig, params: {sessionId}}) => {
}
try {
const session = await knexClient(toLowerCase(dbConfig, 'Sessions')).
const session = await knexClient('sessions').
where('id', '=', sessionId).
first();
@ -92,7 +85,7 @@ const dbUpdateUserSession = async ({dbConfig, params: {sessionId, userId, fields
}
try {
let user = await knexClient(toLowerCase(dbConfig, 'Users')).where('id', userId).first();
let user = await knexClient('users').where('id', userId).first();
if (!user) {
return {errorMessage: `No user found with id: ${userId}.`};
}
@ -102,12 +95,12 @@ const dbUpdateUserSession = async ({dbConfig, params: {sessionId, userId, fields
user = convertKeysToLowercase(user);
await knexClient(toLowerCase(dbConfig, 'Sessions')).
await knexClient('sessions').
where('id', '=', sessionId).
where('userid', '=', user.id).
update(fieldsToUpdate);
const session = await knexClient(toLowerCase(dbConfig, 'Sessions')).
const session = await knexClient('sessions').
where('id', '=', sessionId).
where('userid', '=', user.id).
first();
@ -119,27 +112,11 @@ const dbUpdateUserSession = async ({dbConfig, params: {sessionId, userId, fields
}
};
function toLowerCase(config, name) {
if (config.client === 'mysql') {
return name;
}
return name.toLowerCase();
}
const dbRefreshPostStats = async ({dbConfig}) => {
if (!knexClient) {
knexClient = getKnexClient(dbConfig);
}
// Only run for PostgreSQL
if (dbConfig.client !== 'postgres') {
return {
skipped: true,
message: 'Refresh post stats is only supported for PostgreSQL',
};
}
try {
await knexClient.raw('REFRESH MATERIALIZED VIEW posts_by_team_day;');
await knexClient.raw('REFRESH MATERIALIZED VIEW bot_posts_by_team_day;');


@ -635,7 +635,7 @@ func (ps *PlatformService) LdapDiagnostic() einterfaces.LdapDiagnosticInterface
return ps.ldapDiagnostic
}
// DatabaseTypeAndSchemaVersion returns the Database type (postgres or mysql) and current version of the schema
// DatabaseTypeAndSchemaVersion returns the database type and current version of the schema
func (ps *PlatformService) DatabaseTypeAndSchemaVersion() (string, string, error) {
schemaVersion, err := ps.Store.GetDBSchemaVersion()
if err != nil {


@ -242,13 +242,9 @@ func TestDatabaseTypeAndMattermostVersion(t *testing.T) {
databaseType, schemaVersion, err := th.Service.DatabaseTypeAndSchemaVersion()
require.NoError(t, err)
if *th.Service.Config().SqlSettings.DriverName == model.DatabaseDriverPostgres {
assert.Equal(t, "postgres", databaseType)
} else {
assert.Equal(t, "mysql", databaseType)
}
assert.Equal(t, "postgres", databaseType)
// It's hard to check wheather the schema version is correct or not.
// It's hard to check whether the schema version is correct or not.
// So, we just check if it's greater than 1.
assert.GreaterOrEqual(t, schemaVersion, strconv.Itoa(1))
}


@ -4,21 +4,14 @@
package users
import (
"flag"
"testing"
"github.com/mattermost/mattermost/server/v8/channels/testlib"
)
var mainHelper *testlib.MainHelper
var replicaFlag bool
func TestMain(m *testing.M) {
if f := flag.Lookup("mysql-replica"); f == nil {
flag.BoolVar(&replicaFlag, "mysql-replica", false, "")
flag.Parse()
}
var options = testlib.HelperOptions{
EnableStore: true,
EnableResources: true,


@ -1,7 +0,0 @@
// Copyright (c) 2015-present Mattermost, Inc. All Rights Reserved.
// See LICENSE.txt for license information.
package searchlayer
var MySQLStopWords = []string{"a", "about", "an", "are", "as", "at", "be", "by", "com", "de", "en", "for", "from", "how", "i", "in", "is", "it", "la", "of",
"on", "or", "that", "the", "this", "to", "was", "what", "when", "where", "who", "will", "with", "und", "the", "www"}


@ -41,17 +41,10 @@ var searchPostStoreTests = []searchTest{
Tags: []string{EnginePostgres},
},
{
// Postgres supports search with and without quotes
Name: "Should be able to search for email addresses with or without quotes",
Fn: testSearchEmailAddresses,
Tags: []string{EnginePostgres, EngineElasticSearch},
},
{
// MySql supports search with quotes only
Name: "Should be able to search for email addresses with quotes",
Fn: testSearchEmailAddressesWithQuotes,
Tags: []string{EngineElasticSearch},
},
{
Name: "Should be able to search when markdown underscores are applied",
Fn: testSearchMarkdownUnderscores,
@ -557,21 +550,6 @@ func testSearchEmailAddresses(t *testing.T, th *SearchTestHelper) {
})
}
func testSearchEmailAddressesWithQuotes(t *testing.T, th *SearchTestHelper) {
p1, err := th.createPost(th.User.Id, th.ChannelBasic.Id, "email test@test.com", "", model.PostTypeDefault, 0, false)
require.NoError(t, err)
_, err = th.createPost(th.User.Id, th.ChannelBasic.Id, "email test2@test.com", "", model.PostTypeDefault, 0, false)
require.NoError(t, err)
defer th.deleteUserPosts(th.User.Id)
params := &model.SearchParams{Terms: "\"test@test.com\""}
results, err := th.Store.Post().SearchPostsForUser(th.Context, []*model.SearchParams{params}, th.User.Id, th.Team.Id, 0, 20)
require.NoError(t, err)
require.Len(t, results.Posts, 1)
th.checkPostInSearchResults(t, p1.Id, results.Posts)
}
func testSearchMarkdownUnderscores(t *testing.T, th *SearchTestHelper) {
p1, err := th.createPost(th.User.Id, th.ChannelBasic.Id, "_start middle end_ _another_", "", model.PostTypeDefault, 0, false)
require.NoError(t, err)


@ -359,12 +359,7 @@ func (s *SqlAccessControlPolicyStore) SetActiveStatus(rctx request.CTX, id strin
if existingPolicy.Type == model.AccessControlPolicyTypeParent {
// if the policy is a parent, we need to update the child policies
var expr sq.Sqlizer
if s.DriverName() == model.DatabaseDriverPostgres {
expr = sq.Expr("Data->'imports' @> ?::jsonb", fmt.Sprintf("%q", id))
} else {
expr = sq.Expr("JSON_CONTAINS(JSON_EXTRACT(Data, '$.imports'), ?)", fmt.Sprintf("%q", id))
}
expr := sq.Expr("Data->'imports' @> ?::jsonb", fmt.Sprintf("%q", id))
query, args, err = s.getQueryBuilder().Update("AccessControlPolicies").Set("Active", active).Where(expr).ToSql()
if err != nil {
return nil, errors.Wrapf(err, "failed to build query for policy with id=%s", id)
@ -541,11 +536,7 @@ func (s *SqlAccessControlPolicyStore) GetAll(_ request.CTX, opts model.GetAccess
query := s.selectQueryBuilder
if opts.ParentID != "" {
if s.DriverName() == model.DatabaseDriverPostgres {
query = query.Where(sq.Expr("Data->'imports' @> ?", fmt.Sprintf("%q", opts.ParentID)))
} else {
query = query.Where(sq.Expr("JSON_CONTAINS(JSON_EXTRACT(Data, '$.imports'), ?)", fmt.Sprintf("%q", opts.ParentID)))
}
query = query.Where(sq.Expr("Data->'imports' @> ?", fmt.Sprintf("%q", opts.ParentID)))
}
if opts.Type != "" {


@ -51,10 +51,8 @@ func newSqlAttributesStore(sqlStore *SqlStore, metrics einterfaces.MetricsInterf
}
func (s *SqlAttributesStore) RefreshAttributes() error {
if s.DriverName() == model.DatabaseDriverPostgres {
if _, err := s.GetMaster().Exec("REFRESH MATERIALIZED VIEW AttributeView"); err != nil {
return errors.Wrap(err, "error refreshing materialized view AttributeView")
}
if _, err := s.GetMaster().Exec("REFRESH MATERIALIZED VIEW AttributeView"); err != nil {
return errors.Wrap(err, "error refreshing materialized view AttributeView")
}
return nil
@ -143,8 +141,8 @@ func (s *SqlAttributesStore) SearchUsers(rctx request.CTX, opts model.SubjectSea
}
if term := opts.Term; strings.TrimSpace(term) != "" {
_, query = generateSearchQueryForExpression(query, strings.Fields(term), searchFields, s.DriverName() == model.DatabaseDriverPostgres, argCount)
_, count = generateSearchQueryForExpression(count, strings.Fields(term), searchFields, s.DriverName() == model.DatabaseDriverPostgres, argCount)
_, query = generateSearchQueryForExpression(query, strings.Fields(term), searchFields, argCount)
_, count = generateSearchQueryForExpression(count, strings.Fields(term), searchFields, argCount)
}
q, args, err := query.ToSql()
@ -211,25 +209,17 @@ func (s *SqlAttributesStore) GetChannelMembersToRemove(rctx request.CTX, channel
return members, nil
}
func generateSearchQueryForExpression(query sq.SelectBuilder, terms []string, fields []string, isPostgreSQL bool, prevArgs int) (int, sq.SelectBuilder) {
func generateSearchQueryForExpression(query sq.SelectBuilder, terms []string, fields []string, prevArgs int) (int, sq.SelectBuilder) {
for _, term := range terms {
searchFields := []string{}
termArgs := []any{}
for _, field := range fields {
if isPostgreSQL {
prevArgs++
searchFields = append(searchFields, fmt.Sprintf("lower(%s) LIKE lower($%d) escape '*' ", field, prevArgs))
} else {
searchFields = append(searchFields, fmt.Sprintf("%s LIKE ? escape '*' ", field))
}
prevArgs++
searchFields = append(searchFields, fmt.Sprintf("lower(%s) LIKE lower($%d) escape '*' ", field, prevArgs))
termArgs = append(termArgs, fmt.Sprintf("%%%s%%", strings.TrimLeft(term, "@")))
}
if isPostgreSQL {
prevArgs++
searchFields = append(searchFields, fmt.Sprintf("lower(%s) LIKE lower($%d) escape '*' ", "Id", prevArgs))
} else {
searchFields = append(searchFields, "Id = ?")
}
prevArgs++
searchFields = append(searchFields, fmt.Sprintf("lower(%s) LIKE lower($%d) escape '*' ", "Id", prevArgs))
termArgs = append(termArgs, strings.TrimLeft(term, "@"))
query = query.Where(fmt.Sprintf("(%s)", strings.Join(searchFields, " OR ")), termArgs...)
}


@ -248,16 +248,12 @@ func (s SqlChannelMemberHistoryStore) PermanentDeleteBatchForRetentionPolicies(r
// DeleteOrphanedRows removes entries from ChannelMemberHistory when a corresponding channel no longer exists.
func (s SqlChannelMemberHistoryStore) DeleteOrphanedRows(limit int) (deleted int64, err error) {
// TODO: https://mattermost.atlassian.net/browse/MM-63368
// We need the extra level of nesting to deal with MySQL's locking
const query = `
DELETE FROM ChannelMemberHistory WHERE (ChannelId, UserId, JoinTime) IN (
SELECT ChannelId, UserId, JoinTime FROM (
SELECT ChannelId, UserId, JoinTime FROM ChannelMemberHistory
LEFT JOIN Channels ON ChannelMemberHistory.ChannelId = Channels.Id
WHERE Channels.Id IS NULL
LIMIT ?
) AS A
DELETE FROM ChannelMemberHistory WHERE ctid IN (
SELECT ChannelMemberHistory.ctid FROM ChannelMemberHistory
LEFT JOIN Channels ON ChannelMemberHistory.ChannelId = Channels.Id
WHERE Channels.Id IS NULL
LIMIT $1
)`
result, err := s.GetMaster().Exec(query, limit)
if err != nil {
@ -268,39 +264,22 @@ func (s SqlChannelMemberHistoryStore) DeleteOrphanedRows(limit int) (deleted int
}
func (s SqlChannelMemberHistoryStore) PermanentDeleteBatch(endTime int64, limit int64) (int64, error) {
var (
query string
args []any
err error
)
if s.DriverName() == model.DatabaseDriverPostgres {
var innerSelect string
innerSelect, args, err = s.getQueryBuilder().
Select("ctid").
From("ChannelMemberHistory").
Where(sq.And{
sq.NotEq{"LeaveTime": nil},
sq.LtOrEq{"LeaveTime": endTime},
}).Limit(uint64(limit)).
ToSql()
if err != nil {
return 0, errors.Wrap(err, "channel_member_history_to_sql")
}
query, _, err = s.getQueryBuilder().
Delete("ChannelMemberHistory").
Where(fmt.Sprintf(
"ctid IN (%s)", innerSelect,
)).ToSql()
} else {
query, args, err = s.getQueryBuilder().
Delete("ChannelMemberHistory").
Where(sq.And{
sq.NotEq{"LeaveTime": nil},
sq.LtOrEq{"LeaveTime": endTime},
}).
Limit(uint64(limit)).ToSql()
innerSelect, args, err := s.getQueryBuilder().
Select("ctid").
From("ChannelMemberHistory").
Where(sq.And{
sq.NotEq{"LeaveTime": nil},
sq.LtOrEq{"LeaveTime": endTime},
}).Limit(uint64(limit)).
ToSql()
if err != nil {
return 0, errors.Wrap(err, "channel_member_history_to_sql")
}
query, _, err := s.getQueryBuilder().
Delete("ChannelMemberHistory").
Where(fmt.Sprintf(
"ctid IN (%s)", innerSelect,
)).ToSql()
if err != nil {
return 0, errors.Wrap(err, "channel_member_history_to_sql")
}


@ -2531,51 +2531,44 @@ func (s SqlChannelStore) PermanentDeleteMembersByUser(rctx request.CTX, userId s
func (s SqlChannelStore) UpdateLastViewedAt(channelIds []string, userId string) (map[string]int64, error) {
lastPostAtTimes := []struct {
Id string
LastPostAt int64
TotalMsgCount int64
TotalMsgCountRoot int64
Id string
LastPostAt int64
}{}
if len(channelIds) == 0 {
return map[string]int64{}, nil
}
// We use the question placeholder format for both databases, because
// we replace that with the dollar format later on.
// It's needed to support the prefix CTE query. See: https://github.com/Masterminds/squirrel/issues/285.
// We use the question placeholder format because we replace it with the
// dollar format later on. It's needed to support the prefix CTE query.
// See: https://github.com/Masterminds/squirrel/issues/285.
query := sq.StatementBuilder.PlaceholderFormat(sq.Question).
Select("Id, LastPostAt, TotalMsgCount, TotalMsgCountRoot").
From("Channels").
Where(sq.Eq{"Id": channelIds})
// TODO: use a CTE for mysql too when version 8 becomes the minimum supported version.
if s.DriverName() == model.DatabaseDriverPostgres {
with := query.Prefix("WITH c AS (").Suffix(") ,")
update := sq.StatementBuilder.PlaceholderFormat(sq.Question).
Update("ChannelMembers cm").
Set("MentionCount", 0).
Set("MentionCountRoot", 0).
Set("UrgentMentionCount", 0).
Set("MsgCount", sq.Expr("greatest(cm.MsgCount, c.TotalMsgCount)")).
Set("MsgCountRoot", sq.Expr("greatest(cm.MsgCountRoot, c.TotalMsgCountRoot)")).
Set("LastViewedAt", sq.Expr("greatest(cm.LastViewedAt, c.LastPostAt)")).
Set("LastUpdateAt", sq.Expr("greatest(cm.LastViewedAt, c.LastPostAt)")).
SuffixExpr(sq.Expr("FROM c WHERE cm.UserId = ? AND c.Id = cm.ChannelId", userId))
updateWrap := update.Prefix("updated AS (").Suffix(")")
query = with.SuffixExpr(updateWrap).Suffix("SELECT Id, LastPostAt FROM c")
}
with := query.Prefix("WITH c AS (").Suffix(") ,")
update := sq.StatementBuilder.PlaceholderFormat(sq.Question).
Update("ChannelMembers cm").
Set("MentionCount", 0).
Set("MentionCountRoot", 0).
Set("UrgentMentionCount", 0).
Set("MsgCount", sq.Expr("greatest(cm.MsgCount, c.TotalMsgCount)")).
Set("MsgCountRoot", sq.Expr("greatest(cm.MsgCountRoot, c.TotalMsgCountRoot)")).
Set("LastViewedAt", sq.Expr("greatest(cm.LastViewedAt, c.LastPostAt)")).
Set("LastUpdateAt", sq.Expr("greatest(cm.LastViewedAt, c.LastPostAt)")).
SuffixExpr(sq.Expr("FROM c WHERE cm.UserId = ? AND c.Id = cm.ChannelId", userId))
updateWrap := update.Prefix("updated AS (").Suffix(")")
query = with.SuffixExpr(updateWrap).Suffix("SELECT Id, LastPostAt FROM c")
sql, args, err := query.ToSql()
if err != nil {
return nil, errors.Wrap(err, "UpdateLastViewedAt_CTE_Tosql")
}
if s.DriverName() == model.DatabaseDriverPostgres {
sql, err = sq.Dollar.ReplacePlaceholders(sql)
if err != nil {
return nil, errors.Wrap(err, "UpdateLastViewedAt_ReplacePlaceholders")
}
sql, err = sq.Dollar.ReplacePlaceholders(sql)
if err != nil {
return nil, errors.Wrap(err, "UpdateLastViewedAt_ReplacePlaceholders")
}
err = s.GetMaster().Select(&lastPostAtTimes, sql, args...)
@ -2588,53 +2581,9 @@ func (s SqlChannelStore) UpdateLastViewedAt(channelIds []string, userId string)
}
times := map[string]int64{}
if s.DriverName() == model.DatabaseDriverPostgres {
for _, t := range lastPostAtTimes {
times[t.Id] = t.LastPostAt
}
return times, nil
}
msgCountQuery, msgCountQueryRoot, lastViewedQuery := sq.Case("ChannelId"), sq.Case("ChannelId"), sq.Case("ChannelId")
for _, t := range lastPostAtTimes {
times[t.Id] = t.LastPostAt
msgCountQuery = msgCountQuery.When(
sq.Expr("?", t.Id),
sq.Expr("GREATEST(MsgCount, ?)", t.TotalMsgCount))
msgCountQueryRoot = msgCountQueryRoot.When(
sq.Expr("?", t.Id),
sq.Expr("GREATEST(MsgCountRoot, ?)", t.TotalMsgCountRoot))
lastViewedQuery = lastViewedQuery.When(
sq.Expr("?", t.Id),
sq.Expr("GREATEST(LastViewedAt, ?)", t.LastPostAt))
}
updateQuery := s.getQueryBuilder().Update("ChannelMembers").
Set("MentionCount", 0).
Set("MentionCountRoot", 0).
Set("UrgentMentionCount", 0).
Set("MsgCount", msgCountQuery).
Set("MsgCountRoot", msgCountQueryRoot).
Set("LastViewedAt", lastViewedQuery).
Set("LastUpdateAt", sq.Expr("LastViewedAt")).
Where(sq.Eq{
"UserId": userId,
"ChannelId": channelIds,
})
sql, args, err = updateQuery.ToSql()
if err != nil {
return nil, errors.Wrap(err, "UpdateLastViewedAt_Update_Tosql")
}
if _, err := s.GetMaster().Exec(sql, args...); err != nil {
return nil, errors.Wrapf(err, "failed to update ChannelMembers with userId=%s and channelId in %v", userId, channelIds)
}
return times, nil
}
@ -3199,7 +3148,7 @@ func (s SqlChannelStore) AutocompleteInTeamForSearch(teamID string, userID strin
}
} else {
// build the full text search clause
full := s.buildFulltextClauseX(term, "Name", "DisplayName", "Purpose")
full := s.buildFulltextClause(term, "Name", "DisplayName", "Purpose")
// build the LIKE query
likeSQL, likeArgs, err := query.Where(like).ToSql()
if err != nil {
@ -3218,15 +3167,11 @@ func (s SqlChannelStore) AutocompleteInTeamForSearch(teamID string, userID strin
args = append(likeArgs, fullArgs...)
}
var err error
// since the UNION is not part of squirrel, we need to assemble it and then update
// the placeholders manually
if s.DriverName() == model.DatabaseDriverPostgres {
sql, err = sq.Dollar.ReplacePlaceholders(sql)
if err != nil {
return nil, errors.Wrap(err, "AutocompleteInTeamForSearch_Placeholder")
}
sql, err := sq.Dollar.ReplacePlaceholders(sql)
if err != nil {
return nil, errors.Wrap(err, "AutocompleteInTeamForSearch_Placeholder")
}
// query the database
@@ -3393,13 +3338,9 @@ func (s SqlChannelStore) channelSearchQuery(opts *store.ChannelSearchOpts) sq.Se
InnerJoin("RetentionPoliciesChannels ON c.Id = RetentionPoliciesChannels.ChannelId").
Where(sq.Eq{"RetentionPoliciesChannels.PolicyId": opts.PolicyID})
} else if opts.ExcludePolicyConstrained {
if s.DriverName() == model.DatabaseDriverPostgres {
query = query.
LeftJoin("RetentionPoliciesChannels ON c.Id = RetentionPoliciesChannels.ChannelId").
Where("RetentionPoliciesChannels.ChannelId IS NULL")
} else {
query = query.Where(sq.Expr(`c.Id NOT IN (SELECT ChannelId FROM RetentionPoliciesChannels)`))
}
query = query.
LeftJoin("RetentionPoliciesChannels ON c.Id = RetentionPoliciesChannels.ChannelId").
Where("RetentionPoliciesChannels.ChannelId IS NULL")
} else if opts.IncludePolicyID {
query = query.
LeftJoin("RetentionPoliciesChannels ON c.Id = RetentionPoliciesChannels.ChannelId")
@@ -3420,12 +3361,10 @@ func (s SqlChannelStore) channelSearchQuery(opts *store.ChannelSearchOpts) sq.Se
likeTerms[i] = likeTerm
}
likeClause = strings.ReplaceAll(likeClause, ":LikeTerm", "?")
fulltextClause, fulltextTerm := s.buildFulltextClause(opts.Term, "c.Name, c.DisplayName, c.Purpose")
fulltextClause = strings.ReplaceAll(fulltextClause, ":FulltextTerm", "?")
query = query.Where(sq.Or{
sq.Expr(likeClause, likeTerms...),
sq.Expr(fulltextClause, fulltextTerm),
s.buildFulltextClause(opts.Term, "c.Name", "c.DisplayName", "c.Purpose"),
})
}
@@ -3474,11 +3413,7 @@ func (s SqlChannelStore) channelSearchQuery(opts *store.ChannelSearchOpts) sq.Se
if opts.ExcludeAccessControlPolicyEnforced {
query = query.Where("c.Id NOT IN (SELECT ID From AccessControlPolicies WHERE Type = ?)", model.AccessControlPolicyTypeChannel)
} else if opts.ParentAccessControlPolicyId != "" {
if s.DriverName() == model.DatabaseDriverPostgres {
query = query.Where(sq.Expr("c.Id IN (SELECT ID From AccessControlPolicies WHERE Type = ? AND Data->'imports' @> ?)", model.AccessControlPolicyTypeChannel, fmt.Sprintf("%q", opts.ParentAccessControlPolicyId)))
} else {
query = query.Where(sq.Expr("c.Id IN (SELECT ID From AccessControlPolicies WHERE Type = ? AND JSON_CONTAINS(JSON_EXTRACT(Data, '$.imports'), ?))", model.AccessControlPolicyTypeChannel, fmt.Sprintf("%q", opts.ParentAccessControlPolicyId)))
}
query = query.Where(sq.Expr("c.Id IN (SELECT ID From AccessControlPolicies WHERE Type = ? AND Data->'imports' @> ?)", model.AccessControlPolicyTypeChannel, fmt.Sprintf("%q", opts.ParentAccessControlPolicyId)))
} else if opts.AccessControlPolicyEnforced {
query = query.InnerJoin("AccessControlPolicies acp ON acp.ID = c.Id")
}
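The `fmt.Sprintf("%q", ...)` in the retained branch above is what makes the JSONB containment check work: it turns the policy ID into a JSON string literal, which `Data->'imports' @> ?` can then match inside the imports array. A quick sketch confirming that `%q` agrees with `json.Marshal` for plain ASCII IDs (jsonQuote is an illustrative helper):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// jsonQuote produces the JSON string literal the store passes as the
// right-hand side of the JSONB containment operator @>.
func jsonQuote(id string) string {
	return fmt.Sprintf("%q", id)
}

func main() {
	j, _ := json.Marshal("policy123")
	// For simple ASCII identifiers, Go's %q and JSON encoding coincide.
	fmt.Println(jsonQuote("policy123") == string(j))
}
```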
@@ -3556,11 +3491,7 @@ func (s SqlChannelStore) buildLIKEClause(term string, searchColumns string) (lik
// Prepare the LIKE portion of the query.
var searchFields []string
for field := range strings.SplitSeq(searchColumns, ", ") {
if s.DriverName() == model.DatabaseDriverPostgres {
searchFields = append(searchFields, fmt.Sprintf("lower(%s) LIKE lower(%s) escape '*'", field, ":LikeTerm"))
} else {
searchFields = append(searchFields, fmt.Sprintf("%s LIKE %s escape '*'", field, ":LikeTerm"))
}
searchFields = append(searchFields, fmt.Sprintf("lower(%s) LIKE lower(%s) escape '*'", field, ":LikeTerm"))
}
likeClause = fmt.Sprintf("(%s)", strings.Join(searchFields, " OR "))
@@ -3582,13 +3513,8 @@ func (s SqlChannelStore) buildLIKEClauseX(term string, searchColumns ...string)
var searchFields sq.Or
for _, field := range searchColumns {
if s.DriverName() == model.DatabaseDriverPostgres {
expr := fmt.Sprintf("LOWER(%s) LIKE LOWER(?) ESCAPE '*'", field)
searchFields = append(searchFields, sq.Expr(expr, likeTerm))
} else {
expr := fmt.Sprintf("%s LIKE ? ESCAPE '*'", field)
searchFields = append(searchFields, sq.Expr(expr, likeTerm))
}
expr := fmt.Sprintf("LOWER(%s) LIKE LOWER(?) ESCAPE '*'", field)
searchFields = append(searchFields, sq.Expr(expr, likeTerm))
}
return searchFields
@@ -3596,71 +3522,28 @@ func (s SqlChannelStore) buildLIKEClauseX(term string, searchColumns ...string)
const spaceFulltextSearchChars = "<>+-()~:*\"!@&"
func (s SqlChannelStore) buildFulltextClause(term string, searchColumns string) (fulltextClause, fulltextTerm string) {
// Copy the terms as we will need to prepare them differently for each search type.
fulltextTerm = term
func (s SqlChannelStore) buildFulltextClause(term string, searchColumns ...string) sq.Sqlizer {
// These chars must be treated as spaces in the fulltext query.
fulltextTerm = strings.Map(func(r rune) rune {
fulltextTerm := strings.Map(func(r rune) rune {
if strings.ContainsRune(spaceFulltextSearchChars, r) {
return ' '
}
return r
}, fulltextTerm)
}, term)
// Prepare the FULLTEXT portion of the query.
// Remove all pipes |
fulltextTerm = strings.ReplaceAll(fulltextTerm, "|", "")
// Split the search term and append :* to each part for prefix matching
splitTerm := strings.Fields(fulltextTerm)
for i, t := range strings.Fields(fulltextTerm) {
for i, t := range splitTerm {
splitTerm[i] = t + ":*"
}
// Join the search terms with & for AND matching
fulltextTerm = strings.Join(splitTerm, " & ")
fulltextClause = fmt.Sprintf("((to_tsvector('%[1]s', %[2]s)) @@ to_tsquery('%[1]s', :FulltextTerm))", s.pgDefaultTextSearchConfig, convertMySQLFullTextColumnsToPostgres(searchColumns))
return
}
func (s SqlChannelStore) buildFulltextClauseX(term string, searchColumns ...string) sq.Sqlizer {
// Copy the terms as we will need to prepare them differently for each search type.
fulltextTerm := term
// These chars must be treated as spaces in the fulltext query.
fulltextTerm = strings.Map(func(r rune) rune {
if strings.ContainsRune(spaceFulltextSearchChars, r) {
return ' '
}
return r
}, fulltextTerm)
// Prepare the FULLTEXT portion of the query.
if s.DriverName() == model.DatabaseDriverPostgres {
// remove all pipes |
fulltextTerm = strings.ReplaceAll(fulltextTerm, "|", "")
// split the search term and append :* to each part
splitTerm := strings.Fields(fulltextTerm)
for i, t := range splitTerm {
splitTerm[i] = t + ":*"
}
// join the search term with &
fulltextTerm = strings.Join(splitTerm, " & ")
expr := fmt.Sprintf("((to_tsvector('%[1]s', %[2]s)) @@ to_tsquery('%[1]s', ?))", s.pgDefaultTextSearchConfig, strings.Join(searchColumns, " || ' ' || "))
return sq.Expr(expr, fulltextTerm)
}
splitTerm := strings.Fields(fulltextTerm)
for i, t := range splitTerm {
splitTerm[i] = "+" + t + "*"
}
fulltextTerm = strings.Join(splitTerm, " ")
expr := fmt.Sprintf("MATCH(%s) AGAINST (? IN BOOLEAN MODE)", strings.Join(searchColumns, ", "))
expr := fmt.Sprintf("((to_tsvector('%[1]s', %[2]s)) @@ to_tsquery('%[1]s', ?))", s.pgDefaultTextSearchConfig, strings.Join(searchColumns, " || ' ' || "))
return sq.Expr(expr, fulltextTerm)
}
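The merged buildFulltextClause above rewrites the raw search term into a Postgres `to_tsquery` expression: special characters become spaces, pipes are dropped, each word gets a `:*` suffix for prefix matching, and the words are joined with `&`. The same transformation in isolation (toTsQuery is an illustrative stand-in for the inline logic):

```go
package main

import (
	"fmt"
	"strings"
)

// Characters treated as whitespace in the fulltext query, mirroring the
// spaceFulltextSearchChars constant in the store.
const spaceFulltextSearchChars = "<>+-()~:*\"!@&"

// toTsQuery mirrors the term preparation in buildFulltextClause.
func toTsQuery(term string) string {
	term = strings.Map(func(r rune) rune {
		if strings.ContainsRune(spaceFulltextSearchChars, r) {
			return ' '
		}
		return r
	}, term)
	// Pipes would be parsed as OR by to_tsquery, so remove them entirely.
	term = strings.ReplaceAll(term, "|", "")
	words := strings.Fields(term)
	for i, w := range words {
		words[i] = w + ":*" // prefix matching
	}
	return strings.Join(words, " & ") // AND semantics
}

func main() {
	fmt.Println(toTsQuery("town-square dev|ops"))
}
```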
@@ -3685,57 +3568,19 @@ func (s SqlChannelStore) searchClause(term string) sq.Sqlizer {
return nil
}
fulltextClause := s.buildFulltextClauseX(term, "c.Name", "c.DisplayName", "c.Purpose")
return sq.Or{
likeClause,
fulltextClause,
s.buildFulltextClause(term, "c.Name", "c.DisplayName", "c.Purpose"),
}
}
func (s SqlChannelStore) searchGroupChannelsQuery(userId, term string, isPostgreSQL bool) sq.SelectBuilder {
var baseLikeTerm string
func (s SqlChannelStore) searchGroupChannelsQuery(userId, term string) sq.SelectBuilder {
baseLikeTerm := "ARRAY_TO_STRING(ARRAY_AGG(u.Username), ', ') LIKE ?"
terms := strings.Fields((strings.ToLower(term)))
having := sq.And{}
if isPostgreSQL {
baseLikeTerm = "ARRAY_TO_STRING(ARRAY_AGG(u.Username), ', ') LIKE ?"
cc := s.getSubQueryBuilder().Select("c.Id").
From("Channels c").
Join("ChannelMembers cm ON c.Id=cm.ChannelId").
Join("Users u on u.Id = cm.UserId").
Where(sq.Eq{
"c.Type": model.ChannelTypeGroup,
"u.id": userId,
}).
GroupBy("c.Id")
for _, term := range terms {
term = sanitizeSearchTerm(term, "\\")
having = append(having, sq.Expr(baseLikeTerm, "%"+term+"%"))
}
subq := s.getSubQueryBuilder().Select("cc.id").
FromSelect(cc, "cc").
Join("ChannelMembers cm On cc.Id = cm.ChannelId").
Join("Users u On u.Id = cm.UserId").
GroupBy("cc.Id").
Having(having).
Limit(model.ChannelSearchDefaultLimit)
return s.getQueryBuilder().Select(channelSliceColumns(true)...).
From("Channels").
Where(sq.Expr("Id IN (?)", subq))
}
baseLikeTerm = "GROUP_CONCAT(u.Username SEPARATOR ', ') LIKE ?"
for _, term := range terms {
term = sanitizeSearchTerm(term, "\\")
having = append(having, sq.Expr(baseLikeTerm, "%"+term+"%"))
}
cc := s.getSubQueryBuilder().Select(channelSliceColumns(true, "c")...).
cc := s.getSubQueryBuilder().Select("c.Id").
From("Channels c").
Join("ChannelMembers cm ON c.Id=cm.ChannelId").
Join("Users u on u.Id = cm.UserId").
@@ -3745,18 +3590,26 @@ func (s SqlChannelStore) searchGroupChannelsQuery(userId, term string, isPostgre
}).
GroupBy("c.Id")
return s.getQueryBuilder().Select(channelSliceColumns(true, "cc")...).
for _, term := range terms {
term = sanitizeSearchTerm(term, "\\")
having = append(having, sq.Expr(baseLikeTerm, "%"+term+"%"))
}
subq := s.getSubQueryBuilder().Select("cc.id").
FromSelect(cc, "cc").
Join("ChannelMembers cm on cc.Id = cm.ChannelId").
Join("Users u on u.Id = cm.UserId").
Join("ChannelMembers cm On cc.Id = cm.ChannelId").
Join("Users u On u.Id = cm.UserId").
GroupBy("cc.Id").
Having(having).
Limit(model.ChannelSearchDefaultLimit)
return s.getQueryBuilder().Select(channelSliceColumns(true)...).
From("Channels").
Where(sq.Expr("Id IN (?)", subq))
}
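sanitizeSearchTerm isn't shown in this diff; assuming it escapes LIKE metacharacters with the supplied escape character, the `HAVING ... LIKE ? ESCAPE` clause above matches user input literally even when the term contains `%` or `_`. A guessed sketch of that behavior (escapeLikeTerm is hypothetical, not the actual helper):

```go
package main

import (
	"fmt"
	"strings"
)

// escapeLikeTerm escapes the escape character itself first, then the
// LIKE wildcards, so a user-supplied term is matched literally.
func escapeLikeTerm(term, escape string) string {
	term = strings.ReplaceAll(term, escape, escape+escape)
	term = strings.ReplaceAll(term, "%", escape+"%")
	term = strings.ReplaceAll(term, "_", escape+"_")
	return term
}

func main() {
	// Wrapped in % for a contains-style match, as the store does.
	fmt.Println("%" + escapeLikeTerm("50%_off", "\\") + "%")
}
```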
func (s SqlChannelStore) SearchGroupChannels(userId, term string) (model.ChannelList, error) {
isPostgreSQL := s.DriverName() == model.DatabaseDriverPostgres
query := s.searchGroupChannelsQuery(userId, term, isPostgreSQL)
query := s.searchGroupChannelsQuery(userId, term)
sql, params, err := query.ToSql()
if err != nil {
@@ -4240,23 +4093,14 @@ func (s SqlChannelStore) UserBelongsToChannels(userId string, channelIds []strin
// UpdateMembersRole updates all the members of channelID in the adminIDs string array to be admins and sets all other
// users as not being admin.
// It returns the list of userIDs whose roles got updated.
// It returns the list of members whose roles got updated.
//
// TODO: parameterize adminIDs
func (s SqlChannelStore) UpdateMembersRole(channelID string, adminIDs []string) (_ []*model.ChannelMember, err error) {
transaction, err := s.GetMaster().Beginx()
if err != nil {
return nil, err
}
defer finalizeTransactionX(transaction, &err)
// On MySQL it's not possible to update a table and select from it in the same query.
// A SELECT and a UPDATE query are needed.
// Once we only support PostgreSQL, this can be done in a single query using RETURNING.
query, args, err := s.getQueryBuilder().
Select(channelMemberSliceColumns()...).
From("ChannelMembers").
Where(sq.Eq{"ChannelID": channelID}).
func (s SqlChannelStore) UpdateMembersRole(channelID string, adminIDs []string) ([]*model.ChannelMember, error) {
query := s.getQueryBuilder().
Update("ChannelMembers").
Set("SchemeAdmin", sq.Case().When(sq.Eq{"UserId": adminIDs}, "true").Else("false")).
Where(sq.Eq{"ChannelId": channelID}).
Where(sq.Or{sq.Eq{"SchemeGuest": false}, sq.Expr("SchemeGuest IS NULL")}).
Where(
sq.Or{
@@ -4271,42 +4115,14 @@ func (s SqlChannelStore) UpdateMembersRole(channelID string, adminIDs []string)
sq.NotEq{"UserId": adminIDs},
},
},
).ToSql()
if err != nil {
return nil, errors.Wrap(err, "channel_tosql")
}
).
Suffix("RETURNING " + strings.Join(channelMemberSliceColumns(), ", "))
var updatedMembers []*model.ChannelMember
if err = transaction.Select(&updatedMembers, query, args...); err != nil {
return nil, errors.Wrap(err, "failed to get list of updated users")
}
// Update SchemeAdmin field as the data from the SQL is not updated yet
for _, member := range updatedMembers {
if slices.Contains(adminIDs, member.UserId) {
member.SchemeAdmin = true
} else {
member.SchemeAdmin = false
}
}
query, args, err = s.getQueryBuilder().
Update("ChannelMembers").
Set("SchemeAdmin", sq.Case().When(sq.Eq{"UserId": adminIDs}, "true").Else("false")).
Where(sq.Eq{"ChannelId": channelID}).
Where(sq.Or{sq.Eq{"SchemeGuest": false}, sq.Expr("SchemeGuest IS NULL")}).ToSql()
if err != nil {
return nil, errors.Wrap(err, "team_tosql")
}
if _, err = transaction.Exec(query, args...); err != nil {
if err := s.GetMaster().SelectBuilder(&updatedMembers, query); err != nil {
return nil, errors.Wrap(err, "failed to update ChannelMembers")
}
if err = transaction.Commit(); err != nil {
return nil, errors.Wrap(err, "commit_transaction")
}
return updatedMembers, nil
}
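For reviewers: the deleted SELECT-then-UPDATE sequence had to patch SchemeAdmin on the selected rows by hand, because the SELECT ran before the UPDATE committed. With `RETURNING` the database hands back the already-updated rows, so that step disappears. The removed post-processing in isolation (member and applyAdminFlags are illustrative, simplified types):

```go
package main

import (
	"fmt"
	"slices"
)

type member struct {
	UserId      string
	SchemeAdmin bool
}

// applyAdminFlags recomputes SchemeAdmin from the admin ID list,
// mirroring the manual fix-up the old two-query code performed.
func applyAdminFlags(members []member, adminIDs []string) {
	for i := range members {
		members[i].SchemeAdmin = slices.Contains(adminIDs, members[i].UserId)
	}
}

func main() {
	ms := []member{{UserId: "u1"}, {UserId: "u2"}}
	applyAdminFlags(ms, []string{"u2"})
	fmt.Println(ms)
}
```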


@@ -229,22 +229,18 @@ func (s *SqlDraftStore) GetMaxDraftSize() int {
func (s *SqlDraftStore) determineMaxDraftSize() int {
var maxDraftSizeBytes int32
if s.DriverName() == model.DatabaseDriverPostgres {
// The Draft.Message column in Postgres has historically been VARCHAR(4000), but
// may be manually enlarged to support longer drafts.
if err := s.GetReplica().Get(&maxDraftSizeBytes, `
SELECT
COALESCE(character_maximum_length, 0)
FROM
information_schema.columns
WHERE
table_name = 'drafts'
AND column_name = 'message'
`); err != nil {
mlog.Warn("Unable to determine the maximum supported draft size", mlog.Err(err))
}
} else {
mlog.Warn("No implementation found to determine the maximum supported draft size")
// The Draft.Message column has historically been VARCHAR(4000), but
// may be manually enlarged to support longer drafts.
if err := s.GetReplica().Get(&maxDraftSizeBytes, `
SELECT
COALESCE(character_maximum_length, 0)
FROM
information_schema.columns
WHERE
table_name = 'drafts'
AND column_name = 'message'
`); err != nil {
mlog.Warn("Unable to determine the maximum supported draft size", mlog.Err(err))
}
// Assume a worst-case representation of four bytes per rune.
@@ -288,31 +284,28 @@ func (s *SqlDraftStore) GetLastCreateAtAndUserIdValuesForEmptyDraftsMigration(cr
}
func (s *SqlDraftStore) DeleteEmptyDraftsByCreateAtAndUserId(createAt int64, userId string) error {
var builder Builder
if s.DriverName() == model.DatabaseDriverPostgres {
builder = s.getQueryBuilder().
Delete("Drafts d").
PrefixExpr(s.getQueryBuilder().Select().
Prefix("WITH dd AS (").
Columns("UserId", "ChannelId", "RootId").
From("Drafts").
Where(sq.Or{
sq.Gt{"CreateAt": createAt},
sq.And{
sq.Eq{"CreateAt": createAt},
sq.Gt{"UserId": userId},
},
}).
OrderBy("CreateAt", "UserId").
Limit(100).
Suffix(")"),
).
Using("dd").
Where("d.UserId = dd.UserId").
Where("d.ChannelId = dd.ChannelId").
Where("d.RootId = dd.RootId").
Where("d.Message = ''")
}
builder := s.getQueryBuilder().
Delete("Drafts d").
PrefixExpr(s.getQueryBuilder().Select().
Prefix("WITH dd AS (").
Columns("UserId", "ChannelId", "RootId").
From("Drafts").
Where(sq.Or{
sq.Gt{"CreateAt": createAt},
sq.And{
sq.Eq{"CreateAt": createAt},
sq.Gt{"UserId": userId},
},
}).
OrderBy("CreateAt", "UserId").
Limit(100).
Suffix(")"),
).
Using("dd").
Where("d.UserId = dd.UserId").
Where("d.ChannelId = dd.ChannelId").
Where("d.RootId = dd.RootId").
Where("d.Message = ''")
if _, err := s.GetMaster().ExecBuilder(builder); err != nil {
return errors.Wrapf(err, "failed to delete empty drafts")
@@ -322,31 +315,28 @@ func (s *SqlDraftStore) DeleteEmptyDraftsByCreateAtAndUserId(createAt int64, use
}
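The drafts cleanup keeps only the Postgres CTE form: a `WITH` clause selects a bounded, keyset-ordered batch, and `DELETE ... USING` joins the target table back against it. Roughly the rendered statement (cteDelete is an illustrative rendering, not the builder's exact output):

```go
package main

import "fmt"

// cteDelete sketches the shape of the drafts cleanup statement the
// squirrel builder above assembles.
func cteDelete() string {
	return "WITH dd AS (" +
		"SELECT UserId, ChannelId, RootId FROM Drafts " +
		"WHERE CreateAt > ? OR (CreateAt = ? AND UserId > ?) " + // keyset pagination
		"ORDER BY CreateAt, UserId LIMIT 100" +
		") DELETE FROM Drafts d USING dd " +
		"WHERE d.UserId = dd.UserId AND d.ChannelId = dd.ChannelId " +
		"AND d.RootId = dd.RootId AND d.Message = ''"
}

func main() {
	fmt.Println(cteDelete())
}
```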
func (s *SqlDraftStore) DeleteOrphanDraftsByCreateAtAndUserId(createAt int64, userId string) error {
var builder Builder
if s.DriverName() == model.DatabaseDriverPostgres {
builder = s.getQueryBuilder().
Delete("Drafts d").
PrefixExpr(s.getQueryBuilder().Select().
Prefix("WITH dd AS (").
Columns("UserId", "ChannelId", "RootId").
From("Drafts").
Where(sq.Or{
sq.Gt{"CreateAt": createAt},
sq.And{
sq.Eq{"CreateAt": createAt},
sq.Gt{"UserId": userId},
},
}).
OrderBy("CreateAt", "UserId").
Limit(100).
Suffix(")"),
).
Using("dd").
Where("d.UserId = dd.UserId").
Where("d.ChannelId = dd.ChannelId").
Where("d.RootId = dd.RootId").
Suffix("AND (d.RootId IN (SELECT Id FROM Posts WHERE DeleteAt <> 0) OR NOT EXISTS (SELECT 1 FROM Posts WHERE Posts.Id = d.RootId))")
}
builder := s.getQueryBuilder().
Delete("Drafts d").
PrefixExpr(s.getQueryBuilder().Select().
Prefix("WITH dd AS (").
Columns("UserId", "ChannelId", "RootId").
From("Drafts").
Where(sq.Or{
sq.Gt{"CreateAt": createAt},
sq.And{
sq.Eq{"CreateAt": createAt},
sq.Gt{"UserId": userId},
},
}).
OrderBy("CreateAt", "UserId").
Limit(100).
Suffix(")"),
).
Using("dd").
Where("d.UserId = dd.UserId").
Where("d.ChannelId = dd.ChannelId").
Where("d.RootId = dd.RootId").
Suffix("AND (d.RootId IN (SELECT Id FROM Posts WHERE DeleteAt <> 0) OR NOT EXISTS (SELECT 1 FROM Posts WHERE Posts.Id = d.RootId))")
if _, err := s.GetMaster().ExecBuilder(builder); err != nil {
return errors.Wrapf(err, "failed to delete orphan drafts")


@@ -417,7 +417,6 @@ func (fs SqlFileInfoStore) AttachToPost(rctx request.CTX, fileId, postId, channe
count, err := sqlResult.RowsAffected()
if err != nil {
// RowsAffected should never fail with the MySQL or Postgres drivers
return errors.Wrap(err, "unable to retrieve rows affected")
} else if count == 0 {
// Could not attach the file to the post
@@ -494,12 +493,7 @@ func (fs SqlFileInfoStore) PermanentDelete(rctx request.CTX, fileId string) erro
}
func (fs SqlFileInfoStore) PermanentDeleteBatch(rctx request.CTX, endTime int64, limit int64) (int64, error) {
var query string
if fs.DriverName() == "postgres" {
query = "DELETE from FileInfo WHERE Id = any (array (SELECT Id FROM FileInfo WHERE CreateAt < ? AND CreatorId != ? LIMIT ?))"
} else {
query = "DELETE from FileInfo WHERE CreateAt < ? AND CreatorId != ? LIMIT ?"
}
query := "DELETE from FileInfo WHERE Id = any (array (SELECT Id FROM FileInfo WHERE CreateAt < ? AND CreatorId != ? LIMIT ?))"
sqlResult, err := fs.GetMaster().Exec(query, endTime, model.BookmarkFileOwner, limit)
if err != nil {
@@ -625,14 +619,14 @@ func (fs SqlFileInfoStore) Search(rctx request.CTX, paramsList []*model.SearchPa
terms := params.Terms
excludedTerms := params.ExcludedTerms
for _, c := range fs.specialSearchChars() {
for _, c := range specialSearchChars {
terms = strings.Replace(terms, c, " ", -1)
excludedTerms = strings.Replace(excludedTerms, c, " ", -1)
}
if terms == "" && excludedTerms == "" {
// we've already confirmed that we have a channel or user to search for
} else if fs.DriverName() == model.DatabaseDriverPostgres {
} else {
// Parse text for wildcards
if wildcard, err := regexp.Compile(`\*($| )`); err == nil {
terms = wildcard.ReplaceAllLiteralString(terms, ":* ")
@@ -683,17 +677,9 @@ func (fs SqlFileInfoStore) CountAll() (int64, error) {
}
func (fs SqlFileInfoStore) CountAll() (int64, error) {
var query sq.SelectBuilder
if fs.DriverName() == model.DatabaseDriverPostgres {
query = fs.getQueryBuilder().
Select("num").
From("file_stats")
} else {
query = fs.getQueryBuilder().
Select("COUNT(*)").
From("FileInfo").
Where("DeleteAt = 0")
}
query := fs.getQueryBuilder().
Select("num").
From("file_stats")
var count int64
err := fs.GetReplica().GetBuilder(&count, query)
@@ -733,7 +719,7 @@ func (fs SqlFileInfoStore) GetFilesBatchForIndexing(startTime int64, startFileID
func (fs SqlFileInfoStore) GetStorageUsage(_, includeDeleted bool) (int64, error) {
var query sq.SelectBuilder
if fs.DriverName() == model.DatabaseDriverPostgres && !includeDeleted {
if !includeDeleted {
query = fs.getQueryBuilder().
Select("usage").
From("file_stats")
@@ -741,10 +727,6 @@ func (fs SqlFileInfoStore) GetStorageUsage(_, includeDeleted bool) (int64, error
query = fs.getQueryBuilder().
Select("COALESCE(SUM(Size), 0)").
From("FileInfo")
if !includeDeleted {
query = query.Where("DeleteAt = 0")
}
}
var size int64
@@ -812,15 +794,13 @@ func (fs SqlFileInfoStore) RestoreForPostByIds(rctx request.CTX, postId string,
}
func (fs SqlFileInfoStore) RefreshFileStats() error {
if fs.DriverName() == model.DatabaseDriverPostgres {
// CONCURRENTLY is not used deliberately because as per Postgres docs,
// not using CONCURRENTLY takes less resources and completes faster
// at the expense of locking the mat view. Since viewing admin console
// is not a very frequent activity, we accept the tradeoff to let the
// refresh happen as fast as possible.
if _, err := fs.GetMaster().Exec("REFRESH MATERIALIZED VIEW file_stats"); err != nil {
return errors.Wrap(err, "error refreshing materialized view file_stats")
}
// CONCURRENTLY is not used deliberately because as per Postgres docs,
// not using CONCURRENTLY takes less resources and completes faster
// at the expense of locking the mat view. Since viewing admin console
// is not a very frequent activity, we accept the tradeoff to let the
// refresh happen as fast as possible.
if _, err := fs.GetMaster().Exec("REFRESH MATERIALIZED VIEW file_stats"); err != nil {
return errors.Wrap(err, "error refreshing materialized view file_stats")
}
return nil
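Several of these stores now share the same Postgres batch-delete idiom: `DELETE` has no `LIMIT` in Postgres, so a bounded subquery selects the ids and the outer `DELETE` matches them with `= ANY(ARRAY(...))`. A sketch of the shared shape (batchDelete is an illustrative helper):

```go
package main

import "fmt"

// batchDelete renders the PostgreSQL batch-delete pattern used by
// PermanentDeleteBatch in the file, post, and reaction stores: select a
// bounded id set first, then delete by id.
func batchDelete(table, cond string) string {
	return fmt.Sprintf(
		"DELETE FROM %[1]s WHERE Id = ANY (ARRAY (SELECT Id FROM %[1]s WHERE %[2]s LIMIT ?))",
		table, cond)
}

func main() {
	fmt.Println(batchDelete("FileInfo", "CreateAt < ? AND CreatorId != ?"))
}
```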


@@ -434,12 +434,7 @@ func (jss SqlJobStore) Delete(id string) (string, error) {
}
func (jss SqlJobStore) Cleanup(expiryTime int64, batchSize int) error {
var query string
if jss.DriverName() == model.DatabaseDriverPostgres {
query = "DELETE FROM Jobs WHERE Id IN (SELECT Id FROM Jobs WHERE CreateAt < ? AND (Status != ? AND Status != ?) ORDER BY CreateAt ASC LIMIT ?)"
} else {
query = "DELETE FROM Jobs WHERE CreateAt < ? AND (Status != ? AND Status != ?) ORDER BY CreateAt ASC LIMIT ?"
}
query := "DELETE FROM Jobs WHERE Id IN (SELECT Id FROM Jobs WHERE CreateAt < ? AND (Status != ? AND Status != ?) ORDER BY CreateAt ASC LIMIT ?)"
var rowsAffected int64 = 1


@@ -17,7 +17,6 @@ import (
"github.com/mattermost/mattermost/server/public/shared/mlog"
"github.com/mattermost/mattermost/server/v8/channels/db"
"github.com/mattermost/morph"
"github.com/mattermost/morph/drivers"
ps "github.com/mattermost/morph/drivers/postgres"
"github.com/mattermost/morph/models"
mbindata "github.com/mattermost/morph/sources/embedded"
@@ -111,13 +110,7 @@ func (ss *SqlStore) initMorph(dryRun, enableLogging bool) (*morph.Morph, error)
return nil, err
}
var driver drivers.Driver
switch ss.DriverName() {
case model.DatabaseDriverPostgres:
driver, err = ps.WithInstance(ss.GetMaster().DB.DB)
default:
err = fmt.Errorf("unsupported database type %s for migration", ss.DriverName())
}
driver, err := ps.WithInstance(ss.GetMaster().DB.DB)
if err != nil {
return nil, err
}


@@ -1698,10 +1698,7 @@ func (s *SqlPostStore) getPostsAround(rctx request.CTX, before bool, options mod
}
query = query.From("Posts p").
Where(conditions).
// Adding ChannelId and DeleteAt order columns
// to let mysql choose the "idx_posts_channel_id_delete_at_create_at" index always.
// See MM-24170.
OrderBy("p.ChannelId", "p.DeleteAt", "p.CreateAt "+sort).
OrderBy("p.CreateAt " + sort).
Limit(uint64(options.PerPage)).
Offset(uint64(offset))
@@ -1785,10 +1782,7 @@ func (s *SqlPostStore) getPostIdAroundTime(channelId string, time int64, before
Select("Id").
From("Posts").
Where(conditions).
// Adding ChannelId and DeleteAt order columns
// to let mysql choose the "idx_posts_channel_id_delete_at_create_at" index always.
// See MM-23369.
OrderBy("Posts.ChannelId", "Posts.DeleteAt", "Posts.CreateAt "+sort).
OrderBy("Posts.CreateAt " + sort).
Limit(1)
var postId string
@@ -1812,10 +1806,7 @@ func (s *SqlPostStore) GetPostAfterTime(channelId string, time int64, collapsedT
}
query := s.postsQuery.
Where(conditions).
// Adding ChannelId and DeleteAt order columns
// to let mysql choose the "idx_posts_channel_id_delete_at_create_at" index always.
// See MM-23369.
OrderBy("Posts.ChannelId", "Posts.DeleteAt", "Posts.CreateAt ASC").
OrderBy("Posts.CreateAt ASC").
Limit(1)
var post model.Post
@@ -1853,76 +1844,6 @@ func (s *SqlPostStore) getRootPosts(channelId string, offset int, limit int, ski
}
func (s *SqlPostStore) getParentsPosts(channelId string, offset int, limit int, skipFetchThreads bool, includeDeleted bool) ([]*model.Post, error) {
if s.DriverName() == model.DatabaseDriverPostgres {
return s.getParentsPostsPostgreSQL(channelId, offset, limit, skipFetchThreads, includeDeleted)
}
deleteAtCondition := "AND DeleteAt = 0"
if includeDeleted {
deleteAtCondition = ""
}
// query parent Ids first
roots := []string{}
rootQuery := `
SELECT DISTINCT
q.RootId
FROM
(SELECT
Posts.RootId
FROM
Posts
WHERE
ChannelId = ? ` + deleteAtCondition + `
ORDER BY CreateAt DESC
LIMIT ? OFFSET ?) q
WHERE q.RootId != ''`
err := s.GetReplica().Select(&roots, rootQuery, channelId, limit, offset)
if err != nil {
return nil, errors.Wrap(err, "failed to find Posts")
}
if len(roots) == 0 {
return nil, nil
}
cols := postSliceColumnsWithName("p")
var where sq.Sqlizer
where = sq.Eq{"p.Id": roots}
if skipFetchThreads {
col := "(SELECT COUNT(*) FROM Posts WHERE Posts.RootId = (CASE WHEN p.RootId = '' THEN p.Id ELSE p.RootId END)) as ReplyCount"
if !includeDeleted {
col = "(SELECT COUNT(*) FROM Posts WHERE Posts.RootId = (CASE WHEN p.RootId = '' THEN p.Id ELSE p.RootId END) AND Posts.DeleteAt = 0) as ReplyCount"
}
cols = append(cols, col)
} else {
where = sq.Or{
where,
sq.Eq{"p.RootId": roots},
}
}
query := s.getQueryBuilder().
Select(cols...).
From("Posts p").
Where(sq.And{
where,
sq.Eq{"p.ChannelId": channelId},
}).
OrderBy("p.CreateAt")
if !includeDeleted {
query = query.Where(sq.Eq{"p.DeleteAt": 0})
}
posts := []*model.Post{}
if err := s.GetReplica().SelectBuilder(&posts, query); err != nil {
return nil, errors.Wrap(err, "failed to find Posts")
}
return posts, nil
}
func (s *SqlPostStore) getParentsPostsPostgreSQL(channelId string, offset int, limit int, skipFetchThreads bool, includeDeleted bool) ([]*model.Post, error) {
posts := []*model.Post{}
replyCountQuery := ""
onStatement := "q1.RootId = q2.Id"
@@ -2159,7 +2080,7 @@ func (s *SqlPostStore) search(teamId string, userId string, params *model.Search
}
}
for _, c := range s.specialSearchChars() {
for _, c := range specialSearchChars {
if !params.IsHashtag {
terms = strings.Replace(terms, c, " ", -1)
}
@@ -2275,9 +2196,9 @@ func (s *SqlPostStore) search(teamId string, userId string, params *model.Search
// TODO: convert to squirrel HW
func (s *SqlPostStore) AnalyticsUserCountsWithPostsByDay(teamId string) (model.AnalyticsRows, error) {
var args []any
query := `SELECT DISTINCT
DATE(FROM_UNIXTIME(Posts.CreateAt / 1000)) AS Name,
COUNT(DISTINCT Posts.UserId) AS Value
query :=
`SELECT
TO_CHAR(DATE(TO_TIMESTAMP(Posts.CreateAt / 1000)), 'YYYY-MM-DD') AS Name, COUNT(DISTINCT Posts.UserId) AS Value
FROM Posts`
if teamId != "" {
@@ -2288,28 +2209,10 @@ func (s *SqlPostStore) AnalyticsUserCountsWithPostsByDay(teamId string) (model.A
}
query += ` Posts.CreateAt >= ? AND Posts.CreateAt <= ?
GROUP BY DATE(FROM_UNIXTIME(Posts.CreateAt / 1000))
GROUP BY DATE(TO_TIMESTAMP(Posts.CreateAt / 1000))
ORDER BY Name DESC
LIMIT 30`
if s.DriverName() == model.DatabaseDriverPostgres {
query = `SELECT
TO_CHAR(DATE(TO_TIMESTAMP(Posts.CreateAt / 1000)), 'YYYY-MM-DD') AS Name, COUNT(DISTINCT Posts.UserId) AS Value
FROM Posts`
if teamId != "" {
query += " INNER JOIN Channels ON Posts.ChannelId = Channels.Id AND Channels.TeamId = ? AND"
args = []any{teamId}
} else {
query += " WHERE"
}
query += ` Posts.CreateAt >= ? AND Posts.CreateAt <= ?
GROUP BY DATE(TO_TIMESTAMP(Posts.CreateAt / 1000))
ORDER BY Name DESC
LIMIT 30`
}
end := utils.MillisFromTime(utils.EndOfDay(utils.Yesterday()))
start := utils.MillisFromTime(utils.StartOfDay(utils.Yesterday().AddDate(0, 0, -31)))
args = append(args, start, end)
@@ -2385,55 +2288,16 @@ func (s *SqlPostStore) countPostsByDay(teamID, startDay, endDay string) (model.A
// TODO: convert to squirrel HW
func (s *SqlPostStore) AnalyticsPostCountsByDay(options *model.AnalyticsPostCountsOptions) (model.AnalyticsRows, error) {
if s.DriverName() == model.DatabaseDriverPostgres {
endDay := utils.Yesterday().Format("2006-01-02")
startDay := utils.Yesterday().AddDate(0, 0, -31).Format("2006-01-02")
if options.YesterdayOnly {
startDay = utils.Yesterday().AddDate(0, 0, -1).Format("2006-01-02")
}
// Use materialized views
if options.BotsOnly {
return s.countBotPostsByDay(options.TeamId, startDay, endDay)
}
return s.countPostsByDay(options.TeamId, startDay, endDay)
}
var args []any
query := `SELECT
DATE(FROM_UNIXTIME(Posts.CreateAt / 1000)) AS Name,
COUNT(Posts.Id) AS Value
FROM Posts`
if options.BotsOnly {
query += " INNER JOIN Bots ON Posts.UserId = Bots.Userid"
}
if options.TeamId != "" {
query += " INNER JOIN Channels ON Posts.ChannelId = Channels.Id AND Channels.TeamId = ? AND"
args = []any{options.TeamId}
} else {
query += " WHERE"
}
query += ` Posts.CreateAt <= ?
AND Posts.CreateAt >= ?
GROUP BY DATE(FROM_UNIXTIME(Posts.CreateAt / 1000))
ORDER BY Name DESC
LIMIT 30`
end := utils.MillisFromTime(utils.EndOfDay(utils.Yesterday()))
start := utils.MillisFromTime(utils.StartOfDay(utils.Yesterday().AddDate(0, 0, -31)))
endDay := utils.Yesterday().Format("2006-01-02")
startDay := utils.Yesterday().AddDate(0, 0, -31).Format("2006-01-02")
if options.YesterdayOnly {
start = utils.MillisFromTime(utils.StartOfDay(utils.Yesterday().AddDate(0, 0, -1)))
startDay = utils.Yesterday().AddDate(0, 0, -1).Format("2006-01-02")
}
args = append(args, end, start)
rows := model.AnalyticsRows{}
err := s.GetReplica().Select(&rows, query, args...)
if err != nil {
return nil, errors.Wrapf(err, "failed to find Posts with teamId=%s", options.TeamId)
// Use materialized views
if options.BotsOnly {
return s.countBotPostsByDay(options.TeamId, startDay, endDay)
}
return rows, nil
return s.countPostsByDay(options.TeamId, startDay, endDay)
}
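The retained analytics queries bucket posts by `TO_CHAR(DATE(TO_TIMESTAMP(CreateAt / 1000)), 'YYYY-MM-DD')`. The equivalent bucketing for an epoch-millisecond CreateAt in Go, assuming UTC (the actual `DATE()` result depends on the Postgres session timezone):

```go
package main

import (
	"fmt"
	"time"
)

// dayName converts a CreateAt stored in epoch milliseconds into the
// YYYY-MM-DD bucket the analytics queries group by, rendered in UTC.
func dayName(createAtMillis int64) string {
	return time.UnixMilli(createAtMillis).UTC().Format("2006-01-02")
}

func main() {
	fmt.Println(dayName(0))
}
```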
func (s *SqlPostStore) countByTeam(teamID string) (int64, error) {
@@ -2455,11 +2319,7 @@ func (s *SqlPostStore) countByTeam(teamID string) (int64, error) {
}
func (s *SqlPostStore) AnalyticsPostCountByTeam(teamID string) (int64, error) {
if s.DriverName() == model.DatabaseDriverPostgres {
return s.countByTeam(teamID)
}
return s.AnalyticsPostCount(&model.PostCountOptions{TeamId: teamID})
return s.countByTeam(teamID)
}
func (s *SqlPostStore) AnalyticsPostCount(options *model.PostCountOptions) (int64, error) {
@@ -2630,12 +2490,7 @@ func (s *SqlPostStore) PermanentDeleteBatchForRetentionPolicies(retentionPolicyB
}
func (s *SqlPostStore) PermanentDeleteBatch(endTime int64, limit int64) (int64, error) {
var query string
if s.DriverName() == model.DatabaseDriverPostgres {
query = "DELETE from Posts WHERE Id = any (array (SELECT Id FROM Posts WHERE CreateAt < ? LIMIT ?))"
} else {
query = "DELETE from Posts WHERE CreateAt < ? LIMIT ?"
}
query := "DELETE from Posts WHERE Id = any (array (SELECT Id FROM Posts WHERE CreateAt < ? LIMIT ?))"
sqlResult, err := s.GetMaster().Exec(query, endTime, limit)
if err != nil {
@@ -2668,22 +2523,18 @@ func (s *SqlPostStore) GetOldest() (*model.Post, error) {
func (s *SqlPostStore) determineMaxPostSize() int {
var maxPostSizeBytes int32
if s.DriverName() == model.DatabaseDriverPostgres {
// The Post.Message column in Postgres has historically been VARCHAR(4000), but
// may be manually enlarged to support longer posts.
if err := s.GetReplica().Get(&maxPostSizeBytes, `
SELECT
COALESCE(character_maximum_length, 0)
FROM
information_schema.columns
WHERE
table_name = 'posts'
AND column_name = 'message'
`); err != nil {
mlog.Warn("Unable to determine the maximum supported post size", mlog.Err(err))
}
} else {
mlog.Error("No implementation found to determine the maximum supported post size")
// The Post.Message column in Postgres has historically been VARCHAR(4000), but
// may be manually enlarged to support longer posts.
if err := s.GetReplica().Get(&maxPostSizeBytes, `
SELECT
COALESCE(character_maximum_length, 0)
FROM
information_schema.columns
WHERE
table_name = 'posts'
AND column_name = 'message'
`); err != nil {
mlog.Warn("Unable to determine the maximum supported post size", mlog.Err(err))
}
// Assume a worst-case representation of four bytes per rune.
@@ -2994,20 +2845,13 @@ func (s *SqlPostStore) deleteThread(transaction *sqlxTxWrapper, postId string, d
}
func (s *SqlPostStore) deleteThreadFiles(transaction *sqlxTxWrapper, postID string, deleteAtTime int64) error {
query := s.getQueryBuilder().Update("FileInfo").
Set("DeleteAt", deleteAtTime).
From("Posts").
Where(sq.And{
sq.Expr("FileInfo.PostId = Posts.Id"),
sq.Eq{"Posts.RootId": postID},
})
_, err := transaction.ExecBuilder(query)
if err != nil {
@ -3039,14 +2883,7 @@ func (s *SqlPostStore) updateThreadAfterReplyDeletion(transaction *sqlxTxWrapper
updateQuery := s.getQueryBuilder().Update("Threads")
if count == 0 {
updateQuery = updateQuery.Set("Participants", sq.Expr("Participants - ?", userId))
}
lastReplyAtSubquery := sq.Select("COALESCE(MAX(CreateAt), 0)").
@ -3261,40 +3098,12 @@ func (s *SqlPostStore) SetPostReminder(reminder *model.PostReminder) error {
return nil
}
func (s *SqlPostStore) GetPostReminders(now int64) ([]*model.PostReminder, error) {
reminders := []*model.PostReminder{}
err := s.GetMaster().Select(&reminders, `DELETE FROM PostReminders WHERE TargetTime <= $1 RETURNING PostId, UserId`, now)
if err != nil {
return nil, errors.Wrap(err, "failed to get and delete post reminders")
}
return reminders, nil
}
@ -3324,19 +3133,17 @@ func (s *SqlPostStore) GetPostReminderMetadata(postID string) (*store.PostRemind
}
func (s *SqlPostStore) RefreshPostStats() error {
// CONCURRENTLY is not used deliberately because as per Postgres docs,
// not using CONCURRENTLY takes less resources and completes faster
// at the expense of locking the mat view. Since viewing admin console
// is not a very frequent activity, we accept the tradeoff to let the
// refresh happen as fast as possible.
if _, err := s.GetMaster().Exec("REFRESH MATERIALIZED VIEW posts_by_team_day"); err != nil {
return errors.Wrap(err, "error refreshing materialized view posts_by_team_day")
}
if _, err := s.GetMaster().Exec("REFRESH MATERIALIZED VIEW bot_posts_by_team_day"); err != nil {
return errors.Wrap(err, "error refreshing materialized view bot_posts_by_team_day")
}
return nil


@ -224,15 +224,12 @@ func (s SqlPreferenceStore) DeleteCategoryAndName(category string, name string)
// DeleteOrphanedRows removes entries from Preferences (flagged post) when a
// corresponding post no longer exists.
func (s *SqlPreferenceStore) DeleteOrphanedRows(limit int) (deleted int64, err error) {
const query = `
DELETE FROM Preferences WHERE ctid IN (
SELECT Preferences.ctid FROM Preferences
LEFT JOIN Posts ON Preferences.Name = Posts.Id
WHERE Posts.Id IS NULL AND Category = $1
LIMIT $2
)`
result, err := s.GetMaster().Exec(query, model.PreferenceCategoryFlaggedPost, limit)
@ -284,12 +281,8 @@ func (s SqlPreferenceStore) CleanupFlagsBatch(limit int64) (int64, error) {
// Delete preference for limit_visible_dms_gms where their value is greater than "40" or less than "1"
func (s SqlPreferenceStore) DeleteInvalidVisibleDmsGms() (int64, error) {
// We need to pad the value field with zeros when doing comparisons because the value is stored as a string.
// Having them the same length allows Postgres to compare them correctly.
whereClause := sq.And{
sq.Eq{"Category": model.PreferenceCategorySidebarSettings},
sq.Eq{"Name": model.PreferenceLimitVisibleDmsGms},
@ -298,28 +291,17 @@ func (s SqlPreferenceStore) DeleteInvalidVisibleDmsGms() (int64, error) {
sq.Lt{"SUBSTRING(CONCAT('000000000000000', Value), LENGTH(Value) + 1, 15)": "000000000000001"},
},
}
subQuery := s.getQueryBuilder().
Select("UserId, Category, Name").
From("Preferences").
Where(whereClause).
Limit(100)
queryString, args, err := s.getQueryBuilder().
Delete("Preferences").
Where(sq.Expr("(userid, category, name) IN (?)", subQuery)).
ToSql()
if err != nil {
return int64(0), errors.Wrap(err, "could not build sql query to delete preference")
}
result, err := s.GetMaster().Exec(queryString, args...)
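The zero-padding in the WHERE clause above works because equal-length numeric strings sort lexicographically in numeric order. A hypothetical Go analogue of the SQL `SUBSTRING(CONCAT('000000000000000', Value), LENGTH(Value) + 1, 15)` trick (assumes values of at most 15 digits, as the SQL does):

```go
package main

import (
	"fmt"
	"strings"
)

// padForCompare left-pads a numeric string to a fixed width so that plain
// string comparison orders values numerically, mirroring the SQL
// SUBSTRING/CONCAT trick used against the Preferences.Value text column.
func padForCompare(value string) string {
	const width = 15
	padded := strings.Repeat("0", width) + value
	return padded[len(padded)-width:]
}

func main() {
	// Raw strings compare lexicographically ("40" < "9"); padded ones don't.
	fmt.Println(padForCompare("40") > padForCompare("9"))
}
```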


@ -200,7 +200,6 @@ func (s *SqlPropertyFieldStore) Update(groupID string, fields []*model.PropertyF
defer finalizeTransactionX(transaction, &err)
updateTime := model.GetMillis()
nameCase := sq.Case("id")
typeCase := sq.Case("id")
attrsCase := sq.Case("id")
@ -217,21 +216,12 @@ func (s *SqlPropertyFieldStore) Update(groupID string, fields []*model.PropertyF
ids[i] = field.ID
whenID := sq.Expr("?", field.ID)
nameCase = nameCase.When(whenID, sq.Expr("?::text", field.Name))
typeCase = typeCase.When(whenID, sq.Expr("?::property_field_type", field.Type))
attrsCase = attrsCase.When(whenID, sq.Expr("?::jsonb", field.Attrs))
targetIDCase = targetIDCase.When(whenID, sq.Expr("?::text", field.TargetID))
targetTypeCase = targetTypeCase.When(whenID, sq.Expr("?::text", field.TargetType))
deleteAtCase = deleteAtCase.When(whenID, sq.Expr("?::bigint", field.DeleteAt))
}
builder := s.getQueryBuilder().


@ -203,7 +203,6 @@ func (s *SqlPropertyValueStore) Update(groupID string, values []*model.PropertyV
defer finalizeTransactionX(transaction, &err)
updateTime := model.GetMillis()
valueCase := sq.Case("id")
deleteAtCase := sq.Case("id")
ids := make([]string, len(values))
@ -220,13 +219,8 @@ func (s *SqlPropertyValueStore) Update(groupID string, values []*model.PropertyV
valueJSON = AppendBinaryFlag(valueJSON)
}
valueCase = valueCase.When(sq.Expr("?", value.ID), sq.Expr("?::jsonb", valueJSON))
deleteAtCase = deleteAtCase.When(sq.Expr("?", value.ID), sq.Expr("?::bigint", value.DeleteAt))
}
builder := s.getQueryBuilder().


@ -345,12 +345,7 @@ func (s *SqlReactionStore) DeleteOrphanedRowsByIds(r *model.RetentionIdsForDelet
}
func (s *SqlReactionStore) PermanentDeleteBatch(endTime int64, limit int64) (int64, error) {
query := "DELETE from Reactions WHERE CreateAt = any (array (SELECT CreateAt FROM Reactions WHERE CreateAt < ? LIMIT ?))"
sqlResult, err := s.GetMaster().Exec(query, endTime, limit)
if err != nil {


@ -5,7 +5,6 @@ package sqlstore
import (
"database/sql"
"fmt"
"strconv"
"strings"
@ -31,8 +30,7 @@ func newSqlRetentionPolicyStore(sqlStore *SqlStore, metrics einterfaces.MetricsI
}
}
// executePossiblyEmptyQuery only executes the query if it is non-empty.
func executePossiblyEmptyQuery(txn *sqlxTxWrapper, query string, args ...any) (sql.Result, error) {
if query == "" {
return nil, nil
@ -641,15 +639,11 @@ func subQueryIN(property string, query sq.SelectBuilder) sq.Sqlizer {
// DeleteOrphanedRows removes entries from RetentionPoliciesChannels and RetentionPoliciesTeams
// where a channel or team no longer exists.
func (s *SqlRetentionPolicyStore) DeleteOrphanedRows(limit int) (deleted int64, err error) {
rpcSubQuery := sq.Select("ChannelId").
From("RetentionPoliciesChannels").
LeftJoin("Channels ON RetentionPoliciesChannels.ChannelId = Channels.Id").
Where("Channels.Id IS NULL").
Limit(uint64(limit))
rpcDeleteQuery, rpcArgs, err := s.getQueryBuilder().
Delete("RetentionPoliciesChannels").
@ -659,15 +653,11 @@ func (s *SqlRetentionPolicyStore) DeleteOrphanedRows(limit int) (deleted int64,
return int64(0), errors.Wrap(err, "retention_policies_channels_tosql")
}
rptSubQuery := sq.Select("TeamId").
From("RetentionPoliciesTeams").
LeftJoin("Teams ON RetentionPoliciesTeams.TeamId = Teams.Id").
Where("Teams.Id IS NULL").
Limit(uint64(limit))
rptDeleteQuery, rptArgs, err := s.getQueryBuilder().
Delete("RetentionPoliciesTeams").
@ -817,26 +807,14 @@ func (s *SqlRetentionPolicyStore) GetChannelPoliciesCountForUser(userID string)
return count, nil
}
func scanRetentionIdsForDeletion(rows *sql.Rows) ([]*model.RetentionIdsForDeletion, error) {
idsForDeletion := []*model.RetentionIdsForDeletion{}
for rows.Next() {
var row model.RetentionIdsForDeletion
if err := rows.Scan(
&row.Id, &row.TableName, pq.Array(&row.Ids),
); err != nil {
return nil, errors.Wrap(err, "unable to scan columns")
}
idsForDeletion = append(idsForDeletion, &row)
@ -867,8 +845,7 @@ func (s *SqlRetentionPolicyStore) GetIdsForDeletionByTableName(tableName string,
}
defer rows.Close()
idsForDeletion, err := scanRetentionIdsForDeletion(rows)
if err != nil {
return nil, errors.Wrap(err, "failed to scan ids for deletion")
}
@ -880,18 +857,8 @@ func insertRetentionIdsForDeletion(txn *sqlxTxWrapper, row *model.RetentionIdsFo
row.PreSave()
insertBuilder := s.getQueryBuilder().
Insert("RetentionIdsForDeletion").
Columns("Id", "TableName", "Ids").
Values(row.Id, row.TableName, pq.Array(row.Ids))
insertQuery, insertArgs, err := insertBuilder.ToSql()
if err != nil {
return err


@ -15,12 +15,7 @@ import (
)
// GetSchemaDefinition dumps the database schema.
// Only Postgres is supported.
func (ss *SqlStore) GetSchemaDefinition() (*model.SupportPacketDatabaseSchema, error) {
var schemaInfo model.SupportPacketDatabaseSchema
var rErr *multierror.Error


@ -369,12 +369,7 @@ func (me SqlSessionStore) AnalyticsSessionCount() (int64, error) {
}
func (me SqlSessionStore) Cleanup(expiryTime int64, batchSize int64) error {
query := "DELETE FROM Sessions WHERE Id IN (SELECT Id FROM Sessions WHERE ExpiresAt != 0 AND ? > ExpiresAt LIMIT ?)"
var rowsAffected int64 = 1


@ -4,7 +4,6 @@
package sqlstore
import (
"database/sql"
"fmt"
"strings"
@ -825,30 +824,25 @@ func (s SqlSharedChannelStore) GetUsersForSync(filter model.GetUsersForSyncFilte
// UpdateUserLastSyncAt updates the LastSyncAt timestamp for the specified SharedChannelUser.
func (s SqlSharedChannelStore) UpdateUserLastSyncAt(userID string, channelID string, remoteID string) error {
// Use UPDATE FROM with RETURNING to do this in a single query. The RETURNING clause lets us detect
// if the user doesn't exist (no rows returned).
query := s.getQueryBuilder().
Update("SharedChannelUsers AS scu").
Set("LastSyncAt", sq.Expr("GREATEST(scu.LastSyncAt, GREATEST(u.UpdateAt, u.LastPictureUpdate))")).
From("Users AS u").
Where("u.Id = scu.UserId").
Where(sq.Eq{
"scu.UserId": userID,
"scu.ChannelId": channelID,
"scu.RemoteId": remoteID,
}).
Suffix("RETURNING scu.UserId")
var returnedID string
if err := s.GetMaster().GetBuilder(&returnedID, query); err != nil {
if err == sql.ErrNoRows {
return store.NewErrNotFound("User", userID)
}
return fmt.Errorf("failed to update LastSyncAt for SharedChannelUser with userId=%s, channelId=%s, remoteId=%s: %w",
userID, channelID, remoteID, err)
}


@ -17,7 +17,6 @@ import (
"github.com/jmoiron/sqlx"
"github.com/mattermost/mattermost/server/public/model"
"github.com/mattermost/mattermost/server/public/shared/mlog"
sq "github.com/mattermost/squirrel"
@ -63,9 +62,7 @@ type sqlxExecutor interface {
SelectBuilder(dest any, builder Builder) error
}
// namedParamRegex is used to capture all named parameters and convert them to lowercase.
// This will also lowercase any constant strings containing a :, but sqlx
// will fail the query, so it won't be checked in inadvertently.
var namedParamRegex = regexp.MustCompile(`:\w+`)
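The regex above rewrites only the named-parameter tokens, leaving the rest of the query untouched. A standalone sketch of that rewrite (the wrapper function name is hypothetical; the real code applies `ReplaceAllStringFunc` inline in `NamedExec`/`NamedQuery`):

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// namedParamRegex matches each named parameter token (e.g. :UserId) so it can
// be lowercased before sqlx binds it, keeping parameter names consistent
// regardless of how they were capitalized in the query text.
var namedParamRegex = regexp.MustCompile(`:\w+`)

func lowercaseNamedParams(query string) string {
	return namedParamRegex.ReplaceAllStringFunc(query, strings.ToLower)
}

func main() {
	fmt.Println(lowercaseNamedParams("UPDATE Status SET Status = :Status WHERE UserId = :UserId"))
}
```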
@ -134,9 +131,7 @@ func (w *sqlxDBWrapper) GetBuilder(dest any, builder Builder) error {
}
func (w *sqlxDBWrapper) NamedExec(query string, arg any) (sql.Result, error) {
query = namedParamRegex.ReplaceAllStringFunc(query, strings.ToLower)
ctx, cancel := context.WithTimeout(context.Background(), w.queryTimeout)
defer cancel()
@ -192,9 +187,7 @@ func (w *sqlxDBWrapper) ExecRaw(query string, args ...any) (sql.Result, error) {
}
func (w *sqlxDBWrapper) NamedQuery(query string, arg any) (*sqlx.Rows, error) {
query = namedParamRegex.ReplaceAllStringFunc(query, strings.ToLower)
ctx, cancel := context.WithTimeout(context.Background(), w.queryTimeout)
defer cancel()
@ -348,9 +341,7 @@ func (w *sqlxTxWrapper) ExecRaw(query string, args ...any) (sql.Result, error) {
}
func (w *sqlxTxWrapper) NamedExec(query string, arg any) (sql.Result, error) {
query = namedParamRegex.ReplaceAllStringFunc(query, strings.ToLower)
ctx, cancel := context.WithTimeout(context.Background(), w.queryTimeout)
defer cancel()
@ -364,9 +355,7 @@ func (w *sqlxTxWrapper) NamedExec(query string, arg any) (sql.Result, error) {
}
func (w *sqlxTxWrapper) NamedQuery(query string, arg any) (*sqlx.Rows, error) {
query = namedParamRegex.ReplaceAllStringFunc(query, strings.ToLower)
ctx, cancel := context.WithTimeout(context.Background(), w.queryTimeout)
defer cancel()


@ -26,12 +26,11 @@ func newSqlStatusStore(sqlStore *SqlStore) store.StatusStore {
SqlStore: sqlStore,
}
s.statusSelectQuery = s.getQueryBuilder().
Select(
"COALESCE(UserId, '') AS UserId",
"COALESCE(Status, '') AS Status",
"COALESCE(Manual, FALSE) AS Manual",
"COALESCE(LastActivityAt, 0) AS LastActivityAt",
"COALESCE(DNDEndTime, 0) AS DNDEndTime",
"COALESCE(PrevStatus, '') AS PrevStatus",
@ -44,7 +43,7 @@ func newSqlStatusStore(sqlStore *SqlStore) store.StatusStore {
func (s SqlStatusStore) SaveOrUpdate(st *model.Status) error {
query := s.getQueryBuilder().
Insert("Status").
Columns("UserId", "Status", "Manual", "LastActivityAt", "DNDEndTime", "PrevStatus").
Values(st.UserId, st.Status, st.Manual, st.LastActivityAt, st.DNDEndTime, st.PrevStatus)
query = query.SuffixExpr(sq.Expr("ON CONFLICT (userid) DO UPDATE SET Status = EXCLUDED.Status, Manual = EXCLUDED.Manual, LastActivityAt = EXCLUDED.LastActivityAt, DNDEndTime = EXCLUDED.DNDEndTime, PrevStatus = EXCLUDED.PrevStatus"))
@ -70,7 +69,7 @@ func (s SqlStatusStore) SaveOrUpdateMany(statuses map[string]*model.Status) erro
query := s.getQueryBuilder().
Insert("Status").
Columns("UserId", "Status", "Manual", "LastActivityAt", "DNDEndTime", "PrevStatus")
// Add values for each unique status
for _, st := range statuses {
@ -125,7 +124,7 @@ func (s SqlStatusStore) UpdateExpiredDNDStatuses() (_ []*model.Status, err error
Set("Status", sq.Expr("PrevStatus")).
Set("PrevStatus", model.StatusDnd).
Set("DNDEndTime", 0).
Set("Manual", false).
Suffix("RETURNING *")
statuses := []*model.Status{}
@ -138,7 +137,7 @@ func (s SqlStatusStore) UpdateExpiredDNDStatuses() (_ []*model.Status, err error
}
func (s SqlStatusStore) ResetAll() error {
if _, err := s.GetMaster().Exec("UPDATE Status SET Status = ? WHERE Manual = false", model.StatusOffline); err != nil {
return errors.Wrap(err, "failed to update Statuses")
}
return nil


@ -345,42 +345,23 @@ func (ss *SqlStore) DriverName() string {
}
// specialSearchChars have special meaning and can be treated as spaces
var specialSearchChars = []string{
"<",
">",
"+",
"-",
"(",
")",
"~",
":",
}
// computeBinaryParam returns whether the data source uses binary_parameters
// when using Postgres
func (ss *SqlStore) computeBinaryParam() (bool, error) {
return DSNHasBinaryParam(*ss.settings.DataSource)
}
func (ss *SqlStore) computeDefaultTextSearchConfig() (string, error) {
var defaultTextSearchConfig string
err := ss.GetMaster().Get(&defaultTextSearchConfig, `SHOW default_text_search_config`)
return defaultTextSearchConfig, err
@ -395,14 +376,10 @@ func (ss *SqlStore) IsBinaryParamEnabled() bool {
// that can be parsed by callers.
func (ss *SqlStore) GetDbVersion(numerical bool) (string, error) {
var sqlVersion string
if numerical {
sqlVersion = `SHOW server_version_num`
} else {
sqlVersion = `SHOW server_version`
}
var version string
@ -952,19 +929,6 @@ func (ss *SqlStore) hasLicense() bool {
return hasLicense
}
// IsDuplicate checks whether an error is a duplicate key error, which comes when processes are competing on creating the same
// tables in the database.
func IsDuplicate(err error) bool {
@ -981,15 +945,12 @@ func IsDuplicate(err error) bool {
// ensureMinimumDBVersion gets the DB version and ensures it is
// above the required minimum version requirements.
func (ss *SqlStore) ensureMinimumDBVersion(ver string) (bool, error) {
intVer, err := strconv.Atoi(ver)
if err != nil {
return false, fmt.Errorf("cannot parse DB version: %v", err)
}
if intVer < minimumRequiredPostgresVersion {
return false, fmt.Errorf("minimum Postgres version requirements not met. Found: %s, Wanted: %s", versionString(intVer), versionString(minimumRequiredPostgresVersion))
}
return true, nil
}
@ -998,7 +959,7 @@ func (ss *SqlStore) ensureMinimumDBVersion(ver string) (bool, error) {
// to a pretty-printed string.
// Postgres doesn't follow three-part version numbers from 10.0 onwards:
// https://www.postgresql.org/docs/13/libpq-status.html#LIBPQ-PQSERVERVERSION.
func versionString(v int) string {
minor := v % 10000
major := v / 10000
return strconv.Itoa(major) + "." + strconv.Itoa(minor)
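The decode above splits Postgres's numeric `server_version_num` into its major and minor parts: from version 10 onward, Postgres encodes versions as `major*10000 + minor` with no third component. A standalone copy for illustration:

```go
package main

import (
	"fmt"
	"strconv"
)

// versionString converts Postgres's server_version_num (e.g. 120005) into
// major.minor form. The encoding is major*10000 + minor, so 120005 is 12.5
// and 90603 (a pre-10 release) decodes as 9.603.
func versionString(v int) string {
	minor := v % 10000
	major := v / 10000
	return strconv.Itoa(major) + "." + strconv.Itoa(minor)
}

func main() {
	fmt.Println(versionString(120005))
}
```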


@ -763,28 +763,24 @@ func TestVersionString(t *testing.T) {
versions := []struct {
input int
output string
}{
{
input: 100000,
output: "10.0",
},
{
input: 90603,
output: "9.603",
},
{
input: 120005,
output: "12.5",
},
}
for _, v := range versions {
out := versionString(v.input)
assert.Equal(t, v.output, out)
}
}


@ -6,7 +6,6 @@ package sqlstore
import (
"database/sql"
"fmt"
"strings"
sq "github.com/mattermost/squirrel"
@ -1614,19 +1613,11 @@ func (s SqlTeamStore) UserBelongsToTeams(userId string, teamIds []string) (bool,
// UpdateMembersRole updates all the members of teamID in the adminIDs string array to be admins and sets all other
// users as not being admin.
// It returns the list of members whose roles got updated.
func (s SqlTeamStore) UpdateMembersRole(teamID string, adminIDs []string) ([]*model.TeamMember, error) {
query := s.getQueryBuilder().
Update("TeamMembers").
Set("SchemeAdmin", sq.Case().When(sq.Eq{"UserId": adminIDs}, "true").Else("false")).
Where(sq.Eq{"TeamId": teamID, "DeleteAt": 0}).
Where(sq.Or{sq.Eq{"SchemeGuest": false}, sq.Expr("SchemeGuest IS NULL")}).
Where(
@ -1642,42 +1633,14 @@ func (s SqlTeamStore) UpdateMembersRole(teamID string, adminIDs []string) (_ []*
sq.NotEq{"UserId": adminIDs},
},
},
).
Suffix("RETURNING " + strings.Join(teamMemberSliceColumns(), ", "))
var updatedMembers []*model.TeamMember
if err := s.GetMaster().SelectBuilder(&updatedMembers, query); err != nil {
return nil, errors.Wrap(err, "failed to update TeamMembers")
}
return updatedMembers, nil
}


@ -5,7 +5,6 @@ package sqlstore
import (
"database/sql"
"time"
sq "github.com/mattermost/squirrel"
@ -626,14 +625,8 @@ func (s *SqlThreadStore) MarkAllAsReadByChannels(userID string, channelIDs []str
now := model.GetMillis()
query := s.getQueryBuilder().Update("ThreadMemberships").From("Threads").
Set("LastViewed", now).
Set("UnreadMentions", 0).
Set("LastUpdated", now).
Where(sq.Eq{"ThreadMemberships.UserId": userID}).
@ -672,14 +665,7 @@ func (s *SqlThreadStore) MarkAllAsRead(userId string, threadIds []string) error
func (s *SqlThreadStore) MarkAllAsReadByTeam(userId, teamId string) error {
timestamp := model.GetMillis()
query := s.getQueryBuilder().Update("ThreadMemberships").From("Threads").
Where("Threads.PostId = ThreadMemberships.PostId").
Where(sq.Eq{"ThreadMemberships.UserId": userId}).
Where(sq.Or{sq.Eq{"Threads.ThreadTeamId": teamId}, sq.Eq{"Threads.ThreadTeamId": ""}}).
@ -1111,30 +1097,19 @@ func (s *SqlThreadStore) SaveMultipleMemberships(memberships []*model.ThreadMemb
}
func (s *SqlThreadStore) updateThreadParticipantsForUserTx(trx *sqlxTxWrapper, postID, userID string) error {
if s.DriverName() == model.DatabaseDriverPostgres {
userIdParam, err := jsonArray([]string{userID}).Value()
if err != nil {
return err
}
if s.IsBinaryParamEnabled() {
userIdParam = AppendBinaryFlag(userIdParam.([]byte))
}
userIdParam, err := jsonArray([]string{userID}).Value()
if err != nil {
return err
}
if s.IsBinaryParamEnabled() {
userIdParam = AppendBinaryFlag(userIdParam.([]byte))
}
if _, err := trx.ExecRaw(`UPDATE Threads
SET participants = participants || $1::jsonb
WHERE postid=$2
AND NOT participants ? $3`, userIdParam, postID, userID); err != nil {
return err
}
} else {
// CONCAT('$[', JSON_LENGTH(Participants), ']') just generates $[n]
// which is the positional syntax required for appending.
if _, err := trx.Exec(`UPDATE Threads
SET Participants = JSON_ARRAY_INSERT(Participants, CONCAT('$[', JSON_LENGTH(Participants), ']'), ?)
WHERE PostId=?
AND NOT JSON_CONTAINS(Participants, ?)`, userID, postID, strconv.Quote(userID)); err != nil {
return err
}
if _, err := trx.ExecRaw(`UPDATE Threads
SET participants = participants || $1::jsonb
WHERE postid=$2
AND NOT participants ? $3`, userIdParam, postID, userID); err != nil {
return err
}
return nil


@ -640,8 +640,6 @@ func (us SqlUserStore) GetEtagForAllProfiles() string {
}
func (us SqlUserStore) GetAllProfiles(options *model.UserGetOptions) ([]*model.User, error) {
isPostgreSQL := us.DriverName() == model.DatabaseDriverPostgres
// Determine ordering based on Sort option - default to Username ASC for backwards compatibility
orderBy := "Users.Username ASC"
if options.Sort == "update_at_asc" {
@ -654,8 +652,8 @@ func (us SqlUserStore) GetAllProfiles(options *model.UserGetOptions) ([]*model.U
query = applyViewRestrictionsFilter(query, options.ViewRestrictions, true)
query = applyRoleFilter(query, options.Role, isPostgreSQL)
query = applyMultiRoleFilters(query, options.Roles, []string{}, []string{}, isPostgreSQL)
query = applyRoleFilter(query, options.Role)
query = applyMultiRoleFilters(query, options.Roles, []string{}, []string{})
if options.Inactive {
query = query.Where("Users.DeleteAt != 0")
@ -679,22 +677,16 @@ func (us SqlUserStore) GetAllProfiles(options *model.UserGetOptions) ([]*model.U
return users, nil
}
func applyRoleFilter(query sq.SelectBuilder, role string, isPostgreSQL bool) sq.SelectBuilder {
func applyRoleFilter(query sq.SelectBuilder, role string) sq.SelectBuilder {
if role == "" {
return query
}
if isPostgreSQL {
roleParam := fmt.Sprintf("%%%s%%", sanitizeSearchTerm(role, "\\"))
return query.Where("Users.Roles LIKE LOWER(?)", roleParam)
}
roleParam := fmt.Sprintf("%%%s%%", sanitizeSearchTerm(role, "*"))
return query.Where("Users.Roles LIKE ? ESCAPE '*'", roleParam)
roleParam := fmt.Sprintf("%%%s%%", sanitizeSearchTerm(role, "\\"))
return query.Where("Users.Roles LIKE LOWER(?)", roleParam)
}
func applyMultiRoleFilters(query sq.SelectBuilder, systemRoles []string, teamRoles []string, channelRoles []string, isPostgreSQL bool) sq.SelectBuilder {
func applyMultiRoleFilters(query sq.SelectBuilder, systemRoles []string, teamRoles []string, channelRoles []string) sq.SelectBuilder {
sqOr := sq.Or{}
if len(systemRoles) > 0 && systemRoles[0] != "" {
@ -706,11 +698,7 @@ func applyMultiRoleFilters(query sq.SelectBuilder, systemRoles []string, teamRol
sqOr = append(sqOr, sq.Eq{"Users.Roles": role})
case model.SystemGuestRoleId, model.SystemAdminRoleId, model.SystemUserManagerRoleId, model.SystemReadOnlyAdminRoleId, model.SystemManagerRoleId:
// If querying for any other roles search using a wildcard.
if isPostgreSQL {
sqOr = append(sqOr, sq.ILike{"Users.Roles": queryRole})
} else {
sqOr = append(sqOr, sq.Like{"Users.Roles": queryRole})
}
sqOr = append(sqOr, sq.ILike{"Users.Roles": queryRole})
}
}
}
@ -719,17 +707,9 @@ func applyMultiRoleFilters(query sq.SelectBuilder, systemRoles []string, teamRol
for _, channelRole := range channelRoles {
switch channelRole {
case model.ChannelAdminRoleId:
if isPostgreSQL {
sqOr = append(sqOr, sq.And{sq.Eq{"cm.SchemeAdmin": true}, sq.NotILike{"Users.Roles": wildcardSearchTerm(model.SystemAdminRoleId)}})
} else {
sqOr = append(sqOr, sq.And{sq.Eq{"cm.SchemeAdmin": true}, sq.NotLike{"Users.Roles": wildcardSearchTerm(model.SystemAdminRoleId)}})
}
sqOr = append(sqOr, sq.And{sq.Eq{"cm.SchemeAdmin": true}, sq.NotILike{"Users.Roles": wildcardSearchTerm(model.SystemAdminRoleId)}})
case model.ChannelUserRoleId:
if isPostgreSQL {
sqOr = append(sqOr, sq.And{sq.Eq{"cm.SchemeUser": true}, sq.Eq{"cm.SchemeAdmin": false}, sq.NotILike{"Users.Roles": wildcardSearchTerm(model.SystemAdminRoleId)}})
} else {
sqOr = append(sqOr, sq.And{sq.Eq{"cm.SchemeUser": true}, sq.Eq{"cm.SchemeAdmin": false}, sq.NotLike{"Users.Roles": wildcardSearchTerm(model.SystemAdminRoleId)}})
}
sqOr = append(sqOr, sq.And{sq.Eq{"cm.SchemeUser": true}, sq.Eq{"cm.SchemeAdmin": false}, sq.NotILike{"Users.Roles": wildcardSearchTerm(model.SystemAdminRoleId)}})
case model.ChannelGuestRoleId:
sqOr = append(sqOr, sq.Eq{"cm.SchemeGuest": true})
}
@ -740,17 +720,9 @@ func applyMultiRoleFilters(query sq.SelectBuilder, systemRoles []string, teamRol
for _, teamRole := range teamRoles {
switch teamRole {
case model.TeamAdminRoleId:
if isPostgreSQL {
sqOr = append(sqOr, sq.And{sq.Eq{"tm.SchemeAdmin": true}, sq.NotILike{"Users.Roles": wildcardSearchTerm(model.SystemAdminRoleId)}})
} else {
sqOr = append(sqOr, sq.And{sq.Eq{"tm.SchemeAdmin": true}, sq.NotLike{"Users.Roles": wildcardSearchTerm(model.SystemAdminRoleId)}})
}
sqOr = append(sqOr, sq.And{sq.Eq{"tm.SchemeAdmin": true}, sq.NotILike{"Users.Roles": wildcardSearchTerm(model.SystemAdminRoleId)}})
case model.TeamUserRoleId:
if isPostgreSQL {
sqOr = append(sqOr, sq.And{sq.Eq{"tm.SchemeUser": true}, sq.Eq{"tm.SchemeAdmin": false}, sq.NotILike{"Users.Roles": wildcardSearchTerm(model.SystemAdminRoleId)}})
} else {
sqOr = append(sqOr, sq.And{sq.Eq{"tm.SchemeUser": true}, sq.Eq{"tm.SchemeAdmin": false}, sq.NotLike{"Users.Roles": wildcardSearchTerm(model.SystemAdminRoleId)}})
}
sqOr = append(sqOr, sq.And{sq.Eq{"tm.SchemeUser": true}, sq.Eq{"tm.SchemeAdmin": false}, sq.NotILike{"Users.Roles": wildcardSearchTerm(model.SystemAdminRoleId)}})
case model.TeamGuestRoleId:
sqOr = append(sqOr, sq.Eq{"tm.SchemeGuest": true})
}
@ -821,7 +793,6 @@ func (us SqlUserStore) GetEtagForProfiles(teamId string) string {
}
func (us SqlUserStore) GetProfiles(options *model.UserGetOptions) ([]*model.User, error) {
isPostgreSQL := us.DriverName() == model.DatabaseDriverPostgres
query := us.usersQuery.
Join("TeamMembers tm ON ( tm.UserId = Users.Id AND tm.DeleteAt = 0 )").
Where("tm.TeamId = ?", options.InTeamId).
@ -830,8 +801,8 @@ func (us SqlUserStore) GetProfiles(options *model.UserGetOptions) ([]*model.User
query = applyViewRestrictionsFilter(query, options.ViewRestrictions, true)
query = applyRoleFilter(query, options.Role, isPostgreSQL)
query = applyMultiRoleFilters(query, options.Roles, options.TeamRoles, options.ChannelRoles, isPostgreSQL)
query = applyRoleFilter(query, options.Role)
query = applyMultiRoleFilters(query, options.Roles, options.TeamRoles, options.ChannelRoles)
if options.Inactive {
query = query.Where("Users.DeleteAt != 0")
@ -868,7 +839,7 @@ func (us SqlUserStore) GetProfilesInChannel(options *model.UserGetOptions) ([]*m
query = query.Where("Users.DeleteAt = 0")
}
query = applyMultiRoleFilters(query, options.Roles, options.TeamRoles, options.ChannelRoles, us.DriverName() == model.DatabaseDriverPostgres)
query = applyMultiRoleFilters(query, options.Roles, options.TeamRoles, options.ChannelRoles)
users := []*model.User{}
if err := us.GetReplica().SelectBuilder(&users, query); err != nil {
@ -1030,7 +1001,6 @@ func (us SqlUserStore) GetProfilesNotInChannel(teamId string, channelId string,
}
func (us SqlUserStore) GetProfilesWithoutTeam(options *model.UserGetOptions) ([]*model.User, error) {
isPostgreSQL := us.DriverName() == model.DatabaseDriverPostgres
query := us.usersQuery.
Where(`(
SELECT
@ -1046,7 +1016,7 @@ func (us SqlUserStore) GetProfilesWithoutTeam(options *model.UserGetOptions) ([]
query = applyViewRestrictionsFilter(query, options.ViewRestrictions, true)
query = applyRoleFilter(query, options.Role, isPostgreSQL)
query = applyRoleFilter(query, options.Role)
if options.Inactive {
query = query.Where("Users.DeleteAt != 0")
@ -1448,17 +1418,12 @@ func (us SqlUserStore) Count(options model.UserCountOptions) (int64, error) {
query = query.Where(sq.Or{sq.Eq{"Users.RemoteId": ""}, sq.Eq{"Users.RemoteId": nil}})
}
isPostgreSQL := us.DriverName() == model.DatabaseDriverPostgres
if options.IncludeBotAccounts {
if options.ExcludeRegularUsers {
query = query.Join("Bots ON Users.Id = Bots.UserId")
}
} else {
if isPostgreSQL {
query = query.LeftJoin("Bots ON Users.Id = Bots.UserId").Where("Bots.UserId IS NULL")
} else {
query = query.Where(sq.Expr("Users.Id NOT IN (SELECT UserId FROM Bots)"))
}
query = query.LeftJoin("Bots ON Users.Id = Bots.UserId").Where("Bots.UserId IS NULL")
if options.ExcludeRegularUsers {
// Currently this doesn't make sense because it will always return 0
@ -1472,11 +1437,9 @@ func (us SqlUserStore) Count(options model.UserCountOptions) (int64, error) {
query = query.LeftJoin("ChannelMembers AS cm ON Users.Id = cm.UserId").Where("cm.ChannelId = ?", options.ChannelId)
}
query = applyViewRestrictionsFilter(query, options.ViewRestrictions, false)
query = applyMultiRoleFilters(query, options.Roles, options.TeamRoles, options.ChannelRoles, isPostgreSQL)
query = applyMultiRoleFilters(query, options.Roles, options.TeamRoles, options.ChannelRoles)
if isPostgreSQL {
query = query.PlaceholderFormat(sq.Dollar)
}
query = query.PlaceholderFormat(sq.Dollar)
queryString, args, err := query.ToSql()
if err != nil {
@ -1496,11 +1459,7 @@ func (us SqlUserStore) AnalyticsActiveCount(timePeriod int64, options model.User
query := us.getQueryBuilder().Select("COUNT(*)").From("Status AS s").Where("LastActivityAt > ?", time)
if !options.IncludeBotAccounts {
if us.DriverName() == model.DatabaseDriverPostgres {
query = query.LeftJoin("Bots ON s.UserId = Bots.UserId").Where("Bots.UserId IS NULL")
} else {
query = query.Where(sq.Expr("UserId NOT IN (SELECT UserId FROM Bots)"))
}
query = query.LeftJoin("Bots ON s.UserId = Bots.UserId").Where("Bots.UserId IS NULL")
}
if !options.IncludeRemoteUsers || !options.IncludeDeleted {
@ -1532,11 +1491,7 @@ func (us SqlUserStore) AnalyticsActiveCountForPeriod(startTime int64, endTime in
query := us.getQueryBuilder().Select("COUNT(*)").From("Status AS s").Where("LastActivityAt > ? AND LastActivityAt <= ?", startTime, endTime)
if !options.IncludeBotAccounts {
if us.DriverName() == model.DatabaseDriverPostgres {
query = query.LeftJoin("Bots ON s.UserId = Bots.UserId").Where("Bots.UserId IS NULL")
} else {
query = query.Where(sq.Expr("UserId NOT IN (SELECT UserId FROM Bots)"))
}
query = query.LeftJoin("Bots ON s.UserId = Bots.UserId").Where("Bots.UserId IS NULL")
}
if !options.IncludeRemoteUsers || !options.IncludeDeleted {
@ -1735,16 +1690,12 @@ func (us SqlUserStore) SearchNotInGroup(groupID string, term string, options *mo
return us.performSearch(query, term, options)
}
func generateSearchQuery(query sq.SelectBuilder, terms []string, fields []string, isPostgreSQL bool) sq.SelectBuilder {
func generateSearchQuery(query sq.SelectBuilder, terms []string, fields []string) sq.SelectBuilder {
for _, term := range terms {
searchFields := []string{}
termArgs := []any{}
for _, field := range fields {
if isPostgreSQL {
searchFields = append(searchFields, fmt.Sprintf("lower(%s) LIKE lower(?) escape '*' ", field))
} else {
searchFields = append(searchFields, fmt.Sprintf("%s LIKE ? escape '*' ", field))
}
searchFields = append(searchFields, fmt.Sprintf("lower(%s) LIKE lower(?) escape '*' ", field))
termArgs = append(termArgs, fmt.Sprintf("%%%s%%", strings.TrimLeft(term, "@")))
}
searchFields = append(searchFields, "Id = ?")
@ -1773,17 +1724,15 @@ func (us SqlUserStore) performSearch(query sq.SelectBuilder, term string, option
}
}
isPostgreSQL := us.DriverName() == model.DatabaseDriverPostgres
query = applyRoleFilter(query, options.Role, isPostgreSQL)
query = applyMultiRoleFilters(query, options.Roles, options.TeamRoles, options.ChannelRoles, isPostgreSQL)
query = applyRoleFilter(query, options.Role)
query = applyMultiRoleFilters(query, options.Roles, options.TeamRoles, options.ChannelRoles)
if !options.AllowInactive {
query = query.Where("Users.DeleteAt = 0")
}
if strings.TrimSpace(term) != "" {
query = generateSearchQuery(query, strings.Fields(term), searchType, isPostgreSQL)
query = generateSearchQuery(query, strings.Fields(term), searchType)
}
query = applyViewRestrictionsFilter(query, options.ViewRestrictions, true)
@ -1807,19 +1756,12 @@ func (us SqlUserStore) performSearch(query sq.SelectBuilder, term string, option
func (us SqlUserStore) AnalyticsGetInactiveUsersCount() (int64, error) {
query := us.getQueryBuilder().
Select("COUNT(Id)").
From("Users")
if us.DriverName() == model.DatabaseDriverPostgres {
query = query.LeftJoin("Bots ON Users.ID = Bots.UserId").
Where(sq.And{
sq.Gt{"Users.DeleteAt": 0},
sq.Eq{"Bots.UserId": nil},
})
} else {
query = query.Where(sq.And{
sq.Expr("Users.Id NOT IN (SELECT UserId FROM Bots)"),
From("Users").
LeftJoin("Bots ON Users.ID = Bots.UserId").
Where(sq.And{
sq.Gt{"Users.DeleteAt": 0},
sq.Eq{"Bots.UserId": nil},
})
}
var count int64
err := us.GetReplica().GetBuilder(&count, query)
@ -2354,11 +2296,7 @@ func (us SqlUserStore) IsEmpty(excludeBots bool) (bool, error) {
From("Users")
if excludeBots {
if us.DriverName() == model.DatabaseDriverPostgres {
builder = builder.LeftJoin("Bots ON Users.Id = Bots.UserId").Where("Bots.UserId IS NULL")
} else {
builder = builder.Where(sq.Expr("Users.Id NOT IN (SELECT UserId FROM Bots)"))
}
builder = builder.LeftJoin("Bots ON Users.Id = Bots.UserId").Where("Bots.UserId IS NULL")
}
builder = builder.Suffix(")")
@ -2409,19 +2347,14 @@ func (us SqlUserStore) GetUsersWithInvalidEmails(page int, perPage int, restrict
}
func (us SqlUserStore) RefreshPostStatsForUsers() error {
if us.DriverName() == model.DatabaseDriverPostgres {
if _, err := us.GetMaster().Exec("REFRESH MATERIALIZED VIEW poststats"); err != nil {
return errors.Wrap(err, "users_refresh_post_stats_exec")
}
} else {
mlog.Debug("Skipped running refresh post stats, only available on Postgres")
if _, err := us.GetMaster().Exec("REFRESH MATERIALIZED VIEW poststats"); err != nil {
return errors.Wrap(err, "users_refresh_post_stats_exec")
}
return nil
}
func applyUserReportFilter(query sq.SelectBuilder, filter *model.UserReportOptions, isPostgres bool) sq.SelectBuilder {
query = applyRoleFilter(query, filter.Role, isPostgres)
func applyUserReportFilter(query sq.SelectBuilder, filter *model.UserReportOptions) sq.SelectBuilder {
query = applyRoleFilter(query, filter.Role)
if filter.HasNoTeam {
query = query.Where(sq.Expr("Users.Id NOT IN (SELECT UserId FROM TeamMembers WHERE DeleteAt = 0)"))
} else if filter.Team != "" {
@ -2436,25 +2369,20 @@ func applyUserReportFilter(query sq.SelectBuilder, filter *model.UserReportOptio
}
if strings.TrimSpace(filter.SearchTerm) != "" {
query = generateSearchQuery(query, strings.Fields(sanitizeSearchTerm(filter.SearchTerm, "*")), UserSearchTypeAll, isPostgres)
query = generateSearchQuery(query, strings.Fields(sanitizeSearchTerm(filter.SearchTerm, "*")), UserSearchTypeAll)
}
return query
}
func (us SqlUserStore) GetUserCountForReport(filter *model.UserReportOptions) (int64, error) {
isPostgres := us.DriverName() == model.DatabaseDriverPostgres
query := us.getQueryBuilder().
Select("COUNT(Users.Id)").
From("Users")
From("Users").
LeftJoin("Bots ON Users.Id = Bots.UserId").
Where("Bots.UserId IS NULL")
if isPostgres {
query = query.LeftJoin("Bots ON Users.Id = Bots.UserId").Where("Bots.UserId IS NULL")
} else {
query = query.Where(sq.Expr("Users.Id NOT IN (SELECT UserId FROM Bots)"))
}
query = applyUserReportFilter(query, filter, isPostgres)
query = applyUserReportFilter(query, filter)
queryStr, args, err := query.ToSql()
if err != nil {
return 0, errors.Wrap(err, "user_count_report_tosql")
@ -2468,15 +2396,12 @@ func (us SqlUserStore) GetUserCountForReport(filter *model.UserReportOptions) (i
}
func (us SqlUserStore) GetUserReport(filter *model.UserReportOptions) ([]*model.UserReportQuery, error) {
isPostgres := us.DriverName() == model.DatabaseDriverPostgres
selectColumns := append(getUsersColumns(), "MAX(s.LastActivityAt) AS LastStatusAt")
if isPostgres {
selectColumns = append(selectColumns,
"MAX(ps.LastPostDate) AS LastPostDate",
"COUNT(ps.Day) AS DaysActive",
"SUM(ps.NumPosts) AS TotalPosts",
)
}
selectColumns := append(getUsersColumns(),
"MAX(s.LastActivityAt) AS LastStatusAt",
"MAX(ps.LastPostDate) AS LastPostDate",
"COUNT(ps.Day) AS DaysActive",
"SUM(ps.NumPosts) AS TotalPosts",
)
sortDirection := "ASC"
if filter.SortDesc {
@ -2522,24 +2447,22 @@ func (us SqlUserStore) GetUserReport(filter *model.UserReportOptions) ([]*model.
query = query.Limit(uint64(filter.PageSize))
}
if isPostgres {
joinSql := sq.And{}
if filter.StartAt > 0 {
startDate := time.UnixMilli(filter.StartAt)
joinSql = append(joinSql, sq.GtOrEq{"ps.Day": startDate.Format("2006-01-02")})
}
if filter.EndAt > 0 {
endDate := time.UnixMilli(filter.EndAt)
joinSql = append(joinSql, sq.Lt{"ps.Day": endDate.Format("2006-01-02")})
}
sql, args, err := joinSql.ToSql()
if err != nil {
return nil, err
}
query = query.LeftJoin("PostStats ps ON ps.UserId = Users.Id AND "+sql, args...)
joinSql := sq.And{}
if filter.StartAt > 0 {
startDate := time.UnixMilli(filter.StartAt)
joinSql = append(joinSql, sq.GtOrEq{"ps.Day": startDate.Format("2006-01-02")})
}
if filter.EndAt > 0 {
endDate := time.UnixMilli(filter.EndAt)
joinSql = append(joinSql, sq.Lt{"ps.Day": endDate.Format("2006-01-02")})
}
sql, args, err := joinSql.ToSql()
if err != nil {
return nil, err
}
query = query.LeftJoin("PostStats ps ON ps.UserId = Users.Id AND "+sql, args...)
query = applyUserReportFilter(query, filter, isPostgres)
query = applyUserReportFilter(query, filter)
parentQuery := query
// If we're going a page back...
@ -2560,7 +2483,7 @@ func (us SqlUserStore) GetUserReport(filter *model.UserReportOptions) ([]*model.
}
userResults := []*model.UserReportQuery{}
err := us.GetReplica().SelectBuilder(&userResults, parentQuery)
err = us.GetReplica().SelectBuilder(&userResults, parentQuery)
if err != nil {
return nil, errors.Wrap(err, "failed to get users for reporting")
}


@ -159,11 +159,6 @@ func trimInput(input string) string {
return input
}
// Returns the column name for PostgreSQL.
func quoteColumnName(driver string, columnName string) string {
return columnName
}
// scanRowsIntoMap scans SQL rows into a map, using a provided scanner function to extract key-value pairs
func scanRowsIntoMap[K comparable, V any](rows *sql.Rows, scanner func(rows *sql.Rows) (K, V, error), defaults map[K]V) (map[K]V, error) {
results := make(map[K]V, len(defaults))


@ -5973,8 +5973,7 @@ func testGetPostsForReporting(t *testing.T, rctx request.CTX, ss store.Store, s
//
// For reporting queries, we expect the query to use index seeks, not table scans
//
// Note: The actual query plan depends on the database (PostgreSQL vs MySQL),
// data distribution, and statistics. This test just verifies the query executes
// Note: The actual query plan depends on data distribution and statistics. This test just verifies the query executes
// efficiently by checking that it completes in a reasonable time.
// Create a larger dataset to better test index usage


@ -254,7 +254,6 @@ func (h *MainHelper) setupResources() {
//
// Re-generate the files with:
// pg_dump -a -h localhost -U mmuser -d <> --no-comments --inserts -t roles -t systems
// mysqldump -u root -p <> --no-create-info --extended-insert=FALSE Systems Roles
// And keep only the permission related rows in the systems table output.
func preloadMigrations(driverName string, sqlStore *sqlstore.SqlStore) {
var buf []byte


@ -1,41 +0,0 @@
-- MySQL dump 10.13 Distrib 5.7.12, for Linux (x86_64)
--
-- Host: localhost Database: mattermost_test
-- ------------------------------------------------------
-- Server version 5.7.12
/*!40101 SET @OLD_CHARACTER_SET_CLIENT=@@CHARACTER_SET_CLIENT */;
/*!40101 SET @OLD_CHARACTER_SET_RESULTS=@@CHARACTER_SET_RESULTS */;
/*!40101 SET @OLD_COLLATION_CONNECTION=@@COLLATION_CONNECTION */;
/*!40101 SET NAMES utf8 */;
/*!40103 SET @OLD_TIME_ZONE=@@TIME_ZONE */;
/*!40103 SET TIME_ZONE='+00:00' */;
/*!40014 SET @OLD_UNIQUE_CHECKS=@@UNIQUE_CHECKS, UNIQUE_CHECKS=0 */;
/*!40014 SET @OLD_FOREIGN_KEY_CHECKS=@@FOREIGN_KEY_CHECKS, FOREIGN_KEY_CHECKS=0 */;
/*!40101 SET @OLD_SQL_MODE=@@SQL_MODE, SQL_MODE='NO_AUTO_VALUE_ON_ZERO' */;
/*!40111 SET @OLD_SQL_NOTES=@@SQL_NOTES, SQL_NOTES=0 */;
--
-- Dumping data for table `focalboard_system_settings`
--
LOCK TABLES `focalboard_system_settings` WRITE;
/*!40000 ALTER TABLE `focalboard_system_settings` DISABLE KEYS */;
INSERT INTO `focalboard_system_settings` VALUES ('CategoryUuidIdMigrationComplete','true');
INSERT INTO `focalboard_system_settings` VALUES ('DeDuplicateCategoryBoardTableComplete','true');
INSERT INTO `focalboard_system_settings` VALUES ('DeletedMembershipBoardsMigrationComplete','true');
INSERT INTO `focalboard_system_settings` VALUES ('TeamLessBoardsMigrationComplete','true');
INSERT INTO `focalboard_system_settings` VALUES ('UniqueIDsMigrationComplete','true');
/*!40000 ALTER TABLE `focalboard_system_settings` ENABLE KEYS */;
UNLOCK TABLES;
/*!40103 SET TIME_ZONE=@OLD_TIME_ZONE */;
/*!40101 SET SQL_MODE=@OLD_SQL_MODE */;
/*!40014 SET FOREIGN_KEY_CHECKS=@OLD_FOREIGN_KEY_CHECKS */;
/*!40014 SET UNIQUE_CHECKS=@OLD_UNIQUE_CHECKS */;
/*!40101 SET CHARACTER_SET_CLIENT=@OLD_CHARACTER_SET_CLIENT */;
/*!40101 SET CHARACTER_SET_RESULTS=@OLD_CHARACTER_SET_RESULTS */;
/*!40101 SET COLLATION_CONNECTION=@OLD_COLLATION_CONNECTION */;
/*!40111 SET SQL_NOTES=@OLD_SQL_NOTES */;
-- Dump completed on 2023-03-31 11:37:35

File diff suppressed because one or more lines are too long


@ -45,7 +45,7 @@ var ConfigSetCmd = &cobra.Command{
Use: "set",
Short: "Set config setting",
Long: "Sets the value of a config setting by its name in dot notation. Accepts multiple values for array settings",
Example: "config set SqlSettings.DriverName mysql\nconfig set SqlSettings.DataSourceReplicas \"replica1\" \"replica2\"",
Example: "config set SqlSettings.DriverName postgres\nconfig set SqlSettings.DataSourceReplicas \"replica1\" \"replica2\"",
Args: cobra.MinimumNArgs(2),
RunE: withClient(configSetCmdF),
}


@ -20,7 +20,7 @@ Examples
::
config set SqlSettings.DriverName mysql
config set SqlSettings.DriverName postgres
config set SqlSettings.DataSourceReplicas "replica1" "replica2"
Options


@ -84,8 +84,6 @@ func NewDatabaseStore(dsn string) (ds *DatabaseStore, err error) {
}
// initializeConfigurationsTable ensures the requisite tables in place to form the backing store.
//
// Uses MEDIUMTEXT on MySQL, and TEXT on sane databases.
func (ds *DatabaseStore) initializeConfigurationsTable() error {
assetsList, err := assets.ReadDir(filepath.Join("migrations", ds.driverName))
if err != nil {
@ -132,18 +130,9 @@ func (ds *DatabaseStore) initializeConfigurationsTable() error {
return engine.ApplyAll()
}
// parseDSN splits up a connection string into a driver name and data source name.
// parseDSN parses a PostgreSQL connection string and validates the scheme.
//
// For example:
//
// mysql://mmuser:mostest@localhost:5432/mattermost_test
//
// returns
//
// driverName = mysql
// dataSourceName = mmuser:mostest@localhost:5432/mattermost_test
//
// By contrast, a Postgres DSN is returned unmodified.
// Accepts postgres:// or postgresql:// schemes and returns the DSN unmodified.
func parseDSN(dsn string) (string, string, error) {
// Treat the DSN as the URL that it is.
s := strings.SplitN(dsn, "://", 2)


@ -209,7 +209,7 @@ func TestDatabaseStoreNew(t *testing.T) {
_, err := NewDatabaseStore("")
require.Error(t, err)
_, err = NewDatabaseStore("mysql")
_, err = NewDatabaseStore("postgres")
require.Error(t, err)
})
@ -1054,17 +1054,10 @@ func TestDatabaseStoreString(t *testing.T) {
require.NotNil(t, ds)
defer ds.Close()
if *mainHelper.GetSQLSettings().DriverName == "postgres" {
maskedDSN := ds.String()
assert.True(t, strings.HasPrefix(maskedDSN, "postgres://"))
assert.False(t, strings.Contains(maskedDSN, "mmuser"))
assert.False(t, strings.Contains(maskedDSN, "mostest"))
} else {
maskedDSN := ds.String()
assert.False(t, strings.HasPrefix(maskedDSN, "mysql://"))
assert.False(t, strings.Contains(maskedDSN, "mmuser"))
assert.False(t, strings.Contains(maskedDSN, "mostest"))
}
maskedDSN := ds.String()
assert.True(t, strings.HasPrefix(maskedDSN, "postgres://"))
assert.False(t, strings.Contains(maskedDSN, "mmuser"))
assert.False(t, strings.Contains(maskedDSN, "mostest"))
}
func TestCleanUp(t *testing.T) {


@ -64,10 +64,10 @@ func TestMigrate(t *testing.T) {
files[4],
}
cfg.SqlSettings.DataSourceReplicas = []string{
"mysql://mmuser:password@tcp(replicahost:3306)/mattermost",
"postgres://mmuser:password@replicahost:5432/mattermost",
}
cfg.SqlSettings.DataSourceSearchReplicas = []string{
"mysql://mmuser:password@tcp(searchreplicahost:3306)/mattermost",
"postgres://mmuser:password@searchreplicahost:5432/mattermost",
}
_, _, err := source.Set(cfg)


@ -169,8 +169,7 @@ func Merge(cfg *model.Config, patch *model.Config, mergeConfig *utils.MergeConfi
}
func IsDatabaseDSN(dsn string) bool {
return strings.HasPrefix(dsn, "mysql://") ||
strings.HasPrefix(dsn, "postgres://") ||
return strings.HasPrefix(dsn, "postgres://") ||
strings.HasPrefix(dsn, "postgresql://")
}


@ -168,11 +168,6 @@ func TestIsDatabaseDSN(t *testing.T) {
DSN string
Expected bool
}{
{
Name: "Mysql DSN",
DSN: "mysql://localhost",
Expected: true,
},
{
Name: "Postgresql 'postgres' DSN",
DSN: "postgres://localhost",
@ -231,7 +226,6 @@ func TestIsJSONMap(t *testing.T) {
{name: "array json", data: `["test1", "test2"]`, want: false},
{name: "bad json", data: `{huh?}`, want: false},
{name: "filename", data: "/tmp/logger.conf", want: false},
{name: "mysql dsn", data: "mysql://mmuser:@tcp(localhost:3306)/mattermost?charset=utf8mb4,utf8&readTimeout=30s", want: false},
{name: "postgres dsn", data: "postgres://mmuser:passwordlocalhost:5432/mattermost?sslmode=disable&connect_timeout=10", want: false},
}
for _, tt := range tests {


@ -5061,49 +5061,39 @@ func (o *Config) Sanitize(pluginManifests []*Manifest, opts *SanitizeOptions) {
o.PluginSettings.Sanitize(pluginManifests)
}
// SanitizeDataSource redacts sensitive information (username and password) from a database
// SanitizeDataSource redacts sensitive information (username and password) from a PostgreSQL
// connection string while preserving other connection parameters.
//
// Parameters:
// - driverName: The database driver name (postgres or mysql)
// - dataSource: The database connection string to sanitize
// Example:
//
// Returns:
// - The sanitized connection string with username/password replaced by SanitizedPassword
// - An error if the driverName is not supported or if parsing fails
//
// Examples:
// - PostgreSQL: "postgres://user:pass@host:5432/db" -> "postgres://****:****@host:5432/db"
// - MySQL: "user:pass@tcp(host:3306)/db" -> "****:****@tcp(host:3306)/db"
// "postgres://user:pass@host:5432/db" -> "postgres://****:****@host:5432/db"
func SanitizeDataSource(driverName, dataSource string) (string, error) {
// Handle empty data source
if dataSource == "" {
return "", nil
}
switch driverName {
case DatabaseDriverPostgres:
u, err := url.Parse(dataSource)
if err != nil {
return "", err
}
u.User = url.UserPassword(SanitizedPassword, SanitizedPassword)
// Remove username and password from query string
params := u.Query()
params.Del("user")
params.Del("password")
u.RawQuery = params.Encode()
// Unescape the URL to make it human-readable
out, err := url.QueryUnescape(u.String())
if err != nil {
return "", err
}
return out, nil
default:
return "", errors.New("invalid drivername. Not postgres or mysql.")
if driverName != DatabaseDriverPostgres {
return "", errors.New("invalid drivername: only postgres is supported")
}
u, err := url.Parse(dataSource)
if err != nil {
return "", err
}
u.User = url.UserPassword(SanitizedPassword, SanitizedPassword)
// Remove username and password from query string
params := u.Query()
params.Del("user")
params.Del("password")
u.RawQuery = params.Encode()
// Unescape the URL to make it human-readable
out, err := url.QueryUnescape(u.String())
if err != nil {
return "", err
}
return out, nil
}
type FilterTag struct {


@ -2525,7 +2525,7 @@ func TestFilterConfig(t *testing.T) {
require.NoError(t, err)
require.Empty(t, m)
cfg.SqlSettings.DriverName = NewPointer("mysql")
cfg.SqlSettings.DriverName = NewPointer("postgresql")
m, err = FilterConfig(cfg, ConfigFilterOptions{
GetConfigOptions: GetConfigOptions{
RemoveDefaults: true,
@ -2534,7 +2534,7 @@ func TestFilterConfig(t *testing.T) {
})
require.NoError(t, err)
require.NotEmpty(t, m)
require.Equal(t, "mysql", m["SqlSettings"].(map[string]any)["DriverName"])
require.Equal(t, "postgresql", m["SqlSettings"].(map[string]any)["DriverName"])
})
t.Run("should not clear non primitive types", func(t *testing.T) {


@ -53,7 +53,7 @@ type FileInfo struct {
Width int `json:"width,omitempty"`
Height int `json:"height,omitempty"`
HasPreviewImage bool `json:"has_preview_image,omitempty"`
MiniPreview *[]byte `json:"mini_preview"` // declared as *[]byte to avoid postgres/mysql differences in deserialization
MiniPreview *[]byte `json:"mini_preview"` // pointer to distinguish NULL (no preview) from empty data
Content string `json:"-"`
RemoteId *string `json:"remote_id"`
Archived bool `json:"archived"`


@ -136,10 +136,8 @@ func (o *LinkMetadata) DeserializeDataToConcreteType() error {
var b []byte
switch t := o.Data.(type) {
case []byte:
// MySQL uses a byte slice for JSON
b = t
case string:
// Postgres uses a string for JSON
b = []byte(t)
}


@ -64,7 +64,7 @@ const (
PostFilenamesMaxRunes = 4000
PostHashtagsMaxRunes = 1000
PostMessageMaxRunesV1 = 4000
PostMessageMaxBytesV2 = 65535 // Maximum size of a TEXT column in MySQL
PostMessageMaxBytesV2 = 65535
PostMessageMaxRunesV2 = PostMessageMaxBytesV2 / 4 // Assume a worst-case representation
// Reporting API constants


@ -54,9 +54,7 @@ type Driver interface {
// TODO: add this
// RowsColumnScanType(rowsID string, index int) reflect.Type
// Note: the following cannot be implemented because either MySQL or PG
// does not support it. So this implementation has to be a common subset
// of both DB implementations.
// Note: the following are not currently implemented.
// RowsColumnTypeLength(rowsID string, index int) (int64, bool)
// RowsColumnTypeNullable(rowsID string, index int) (bool, bool)
// ResetSession(ctx context.Context) error


@ -1,160 +0,0 @@
/* Product notices are controlled externally, via the mattermost/notices repository.
When there is a new notice specified there, the server may have time, right after
the migration and before it is shut down, to download it and modify the
ProductNoticeViewState table, adding a row for all users that have not seen it or
removing old notices that no longer need to be shown. This can happen in the
UpdateProductNotices function that is executed periodically to update the notices
cache. The script will never do this, so we need to remove all rows in that table
to avoid any unwanted diff. */
DELETE FROM ProductNoticeViewState;
/* The script does not update the Systems row that tracks the version, so it is manually updated
here so that it does not show in the diff. */
UPDATE Systems SET Value = '6.3.0' WHERE Name = 'Version';
/* The script does not update the schema_migrations table, which is automatically used by the
migrate library to track the version, so we drop it altogether to avoid spurious errors in
the diff */
DROP TABLE IF EXISTS schema_migrations;
/* Migration 000054_create_crt_channelmembership_count.up sets
ChannelMembers.LastUpdateAt to the results of SELECT ROUND(UNIX_TIMESTAMP(NOW(3))*1000)
which will be different each time the migration is run. Thus, the column will always be
different when comparing the server and script migrations. To bypass this, we update all
rows in ChannelMembers so that they contain the same value for such column. */
UPDATE ChannelMembers SET LastUpdateAt = 1;
/* Migration 000055_create_crt_thread_count_and_unreads.up sets
ThreadMemberships.LastUpdated to the results of SELECT ROUND(UNIX_TIMESTAMP(NOW(3))*1000)
which will be different each time the migration is run. Thus, the column will always be
different when comparing the server and script migrations. To bypass this, we update all
rows in ThreadMemberships so that they contain the same value for such column. */
UPDATE ThreadMemberships SET LastUpdated = 1;
/* The security update check in the server may update the LastSecurityTime system value. To
avoid any spurious difference in the migrations, we update it to a fixed value. */
UPDATE Systems SET Value = 1 WHERE Name = 'LastSecurityTime';
/* The server migration contains an in-app migration that adds new roles for Playbooks:
doPlaybooksRolesCreationMigration, defined in https://github.com/mattermost/mattermost-server/blob/282bd351e3767dcfd8c8340da2e0915197c0dbcb/app/migrations.go#L345-L469
The roles are the ones defined in https://github.com/mattermost/mattermost-server/blob/282bd351e3767dcfd8c8340da2e0915197c0dbcb/model/role.go#L874-L929
When this migration finishes, it also adds a new row to the Systems table with the key of the migration.
This in-app migration does not happen in the script, so we remove those rows here. */
DELETE FROM Roles WHERE Name = 'playbook_member';
DELETE FROM Roles WHERE Name = 'playbook_admin';
DELETE FROM Roles WHERE Name = 'run_member';
DELETE FROM Roles WHERE Name = 'run_admin';
DELETE FROM Systems WHERE Name = 'PlaybookRolesCreationMigrationComplete';
/* The server migration contains an in-app migration that adds playbooks permissions to certain roles:
getAddPlaybooksPermissions, defined in https://github.com/mattermost/mattermost-server/blob/f9b996934cabf9a8fad5901835e7e9b418917402/app/permissions_migrations.go#L918-L951
The specific roles ('%playbook%') are removed in the procedure below, but the migration also adds a new row to the Systems table marking the migration as complete.
This in-app migration does not happen in the script, so we remove that row here. */
DELETE FROM Systems WHERE Name = 'playbooks_permissions';
/* The rest of this script defines and executes a procedure to update the Roles table. It performs several changes:
1. Set the UpdateAt column of all rows to a fixed value, so that the server migration changes to this column
do not appear in the diff.
2. Remove the set of specific permissions added in the server migration that is not covered by the script, as
this logic happens all in-app after the normal DB migrations.
3. Set a consistent order in the Permissions column, which is modelled as a space-separated string containing each of
the different permissions each role has. This change is the reason why we need a complex procedure, which creates
a temporary table that pairs each single permission to its corresponding ID. So if the Roles table contains two
rows like:
Id: 'abcd'
Permissions: 'view_team read_public_channel invite_user'
Id: 'efgh'
Permissions: 'view_team create_emojis'
then the new temporary table will contain five rows like:
Id: 'abcd'
Permissions: 'view_team'
Id: 'abcd'
Permissions: 'read_public_channel'
Id: 'abcd'
Permissions: 'invite_user'
Id: 'efgh'
Permissions: 'view_team'
Id: 'efgh'
Permissions: 'create_emojis'
*/
DROP PROCEDURE IF EXISTS splitPermissions;
DROP PROCEDURE IF EXISTS sortAndFilterPermissionsInRoles;
DROP TEMPORARY TABLE IF EXISTS temp_roles;
CREATE TEMPORARY TABLE temp_roles(id varchar(26), permission longtext);
DELIMITER //
/* Auxiliary procedure that splits the space-separated permissions string into single rows that are inserted
in the temporary temp_roles table along with their corresponding ID. */
CREATE PROCEDURE splitPermissions(
IN id varchar(26),
IN permissionsString longtext
)
BEGIN
DECLARE idx INT DEFAULT 0;
SELECT TRIM(permissionsString) INTO permissionsString;
SELECT LOCATE(' ', permissionsString) INTO idx;
WHILE idx > 0 DO
INSERT INTO temp_roles SELECT id, TRIM(LEFT(permissionsString, idx));
SELECT SUBSTR(permissionsString, idx+1) INTO permissionsString;
SELECT LOCATE(' ', permissionsString) INTO idx;
END WHILE;
INSERT INTO temp_roles(id, permission) VALUES(id, TRIM(permissionsString));
END; //
/* Main procedure that does update the Roles table */
CREATE PROCEDURE sortAndFilterPermissionsInRoles()
BEGIN
DECLARE done INT DEFAULT FALSE;
DECLARE rolesId varchar(26) DEFAULT '';
DECLARE rolesPermissions longtext DEFAULT '';
DECLARE cur1 CURSOR FOR SELECT Id, Permissions FROM Roles;
DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = TRUE;
/* 1. Set a fixed value in the UpdateAt column for all rows in Roles table */
UPDATE Roles SET UpdateAt = 1;
/* Call splitPermissions for every row in the Roles table, thus populating the
temp_roles table. */
OPEN cur1;
read_loop: LOOP
FETCH cur1 INTO rolesId, rolesPermissions;
IF done THEN
LEAVE read_loop;
END IF;
CALL splitPermissions(rolesId, rolesPermissions);
END LOOP;
CLOSE cur1;
/* 2. Filter out the new permissions added by the in-app migrations */
DELETE FROM temp_roles WHERE permission LIKE '%playbook%';
DELETE FROM temp_roles WHERE permission LIKE 'run_create';
DELETE FROM temp_roles WHERE permission LIKE 'run_manage_members';
DELETE FROM temp_roles WHERE permission LIKE 'run_manage_properties';
DELETE FROM temp_roles WHERE permission LIKE 'run_view';
/* Temporarily set to the maximum permitted value, since the call to group_concat
below needs a value bigger than the default */
SET group_concat_max_len = 18446744073709551615;
/* 3. Update the Permissions column in the Roles table with the filtered, sorted permissions,
concatenated again as a space-separated string */
UPDATE
Roles INNER JOIN (
SELECT temp_roles.id as Id, TRIM(group_concat(temp_roles.permission ORDER BY temp_roles.permission SEPARATOR ' ')) as Permissions
FROM Roles JOIN temp_roles ON Roles.Id = temp_roles.id
GROUP BY temp_roles.id
) AS Sorted
ON Roles.Id = Sorted.Id
SET Roles.Permissions = Sorted.Permissions;
/* Reset group_concat_max_len to its default value */
SET group_concat_max_len = 1024;
END; //
DELIMITER ;
CALL sortAndFilterPermissionsInRoles();
DROP TEMPORARY TABLE IF EXISTS temp_roles;


@ -1,695 +0,0 @@
/* ==> mysql/000054_create_crt_channelmembership_count.up.sql <== */
/* fixCRTChannelMembershipCounts fixes the channel counts, i.e. the total message count,
total root message count, mention count, and mention count in root messages for users
who have viewed the channel after the last post in the channel */
DELIMITER //
CREATE PROCEDURE MigrateCRTChannelMembershipCounts ()
BEGIN
IF(
SELECT
EXISTS (
SELECT
* FROM Systems
WHERE
Name = 'CRTChannelMembershipCountsMigrationComplete') = 0) THEN
UPDATE
ChannelMembers
INNER JOIN Channels ON Channels.Id = ChannelMembers.ChannelId SET
MentionCount = 0, MentionCountRoot = 0, MsgCount = Channels.TotalMsgCount, MsgCountRoot = Channels.TotalMsgCountRoot, LastUpdateAt = (
SELECT
(SELECT ROUND(UNIX_TIMESTAMP(NOW(3))*1000)))
WHERE
ChannelMembers.LastViewedAt >= Channels.LastPostAt;
INSERT INTO Systems
VALUES('CRTChannelMembershipCountsMigrationComplete', 'true');
END IF;
END//
DELIMITER ;
CALL MigrateCRTChannelMembershipCounts ();
DROP PROCEDURE IF EXISTS MigrateCRTChannelMembershipCounts;
/* ==> mysql/000055_create_crt_thread_count_and_unreads.up.sql <== */
/* fixCRTThreadCountsAndUnreads Marks threads as read for users where the last
reply time of the thread is earlier than the time the user viewed the channel.
Marking a thread as read means setting the mention count to zero and setting
the last viewed at time of the thread to the last viewed at time
of the channel */
DELIMITER //
CREATE PROCEDURE MigrateCRTThreadCountsAndUnreads ()
BEGIN
IF(SELECT EXISTS(SELECT * FROM Systems WHERE Name = 'CRTThreadCountsAndUnreadsMigrationComplete') = 0) THEN
UPDATE
ThreadMemberships
INNER JOIN (
SELECT
PostId,
UserId,
ChannelMembers.LastViewedAt AS CM_LastViewedAt,
Threads.LastReplyAt
FROM
Threads
INNER JOIN ChannelMembers ON ChannelMembers.ChannelId = Threads.ChannelId
WHERE
Threads.LastReplyAt <= ChannelMembers.LastViewedAt) AS q ON ThreadMemberships.Postid = q.PostId
AND ThreadMemberships.UserId = q.UserId SET LastViewed = q.CM_LastViewedAt + 1, UnreadMentions = 0, LastUpdated = (
SELECT
(SELECT ROUND(UNIX_TIMESTAMP(NOW(3))*1000)));
INSERT INTO Systems
VALUES('CRTThreadCountsAndUnreadsMigrationComplete', 'true');
END IF;
END//
DELIMITER ;
CALL MigrateCRTThreadCountsAndUnreads ();
DROP PROCEDURE IF EXISTS MigrateCRTThreadCountsAndUnreads;
/* ==> mysql/000056_upgrade_channels_v6.0.up.sql <== */
SET @preparedStatement = (SELECT IF(
(
SELECT COUNT(*) FROM INFORMATION_SCHEMA.STATISTICS
WHERE table_name = 'Channels'
AND table_schema = DATABASE()
AND index_name = 'idx_channels_team_id_display_name'
) > 0,
'SELECT 1',
'CREATE INDEX idx_channels_team_id_display_name ON Channels(TeamId, DisplayName);'
));
PREPARE createIndexIfNotExists FROM @preparedStatement;
EXECUTE createIndexIfNotExists;
DEALLOCATE PREPARE createIndexIfNotExists;
SET @preparedStatement = (SELECT IF(
(
SELECT COUNT(*) FROM INFORMATION_SCHEMA.STATISTICS
WHERE table_name = 'Channels'
AND table_schema = DATABASE()
AND index_name = 'idx_channels_team_id_type'
) > 0,
'SELECT 1',
'CREATE INDEX idx_channels_team_id_type ON Channels(TeamId, Type);'
));
PREPARE createIndexIfNotExists FROM @preparedStatement;
EXECUTE createIndexIfNotExists;
DEALLOCATE PREPARE createIndexIfNotExists;
SET @preparedStatement = (SELECT IF(
(
SELECT COUNT(*) FROM INFORMATION_SCHEMA.STATISTICS
WHERE table_name = 'Channels'
AND table_schema = DATABASE()
AND index_name = 'idx_channels_team_id'
) > 0,
'DROP INDEX idx_channels_team_id ON Channels;',
'SELECT 1'
));
PREPARE removeIndexIfExists FROM @preparedStatement;
EXECUTE removeIndexIfExists;
DEALLOCATE PREPARE removeIndexIfExists;
/* ==> mysql/000057_upgrade_command_webhooks_v6.0.up.sql <== */
DELIMITER //
CREATE PROCEDURE MigrateRootId_CommandWebhooks ()
BEGIN
DECLARE ParentId_EXIST INT;
SELECT COUNT(*)
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'CommandWebhooks'
AND table_schema = DATABASE()
AND COLUMN_NAME = 'ParentId' INTO ParentId_EXIST;
IF(ParentId_EXIST > 0) THEN
UPDATE CommandWebhooks SET RootId = ParentId WHERE RootId = '' AND RootId != ParentId;
END IF;
END//
DELIMITER ;
CALL MigrateRootId_CommandWebhooks ();
DROP PROCEDURE IF EXISTS MigrateRootId_CommandWebhooks;
SET @preparedStatement = (SELECT IF(
(
SELECT COUNT(*) FROM INFORMATION_SCHEMA.COLUMNS
WHERE table_name = 'CommandWebhooks'
AND table_schema = DATABASE()
AND column_name = 'ParentId'
) > 0,
'ALTER TABLE CommandWebhooks DROP COLUMN ParentId;',
'SELECT 1'
));
PREPARE alterIfExists FROM @preparedStatement;
EXECUTE alterIfExists;
DEALLOCATE PREPARE alterIfExists;
/* ==> mysql/000058_upgrade_channelmembers_v6.0.up.sql <== */
SET @preparedStatement = (SELECT IF(
(
SELECT COUNT(*) FROM INFORMATION_SCHEMA.COLUMNS
WHERE table_name = 'ChannelMembers'
AND table_schema = DATABASE()
AND column_name = 'NotifyProps'
AND column_type != 'JSON'
) > 0,
'ALTER TABLE ChannelMembers MODIFY COLUMN NotifyProps JSON;',
'SELECT 1'
));
PREPARE alterIfExists FROM @preparedStatement;
EXECUTE alterIfExists;
DEALLOCATE PREPARE alterIfExists;
SET @preparedStatement = (SELECT IF(
(
SELECT COUNT(*) FROM INFORMATION_SCHEMA.STATISTICS
WHERE table_name = 'ChannelMembers'
AND table_schema = DATABASE()
AND index_name = 'idx_channelmembers_user_id'
) > 0,
'DROP INDEX idx_channelmembers_user_id ON ChannelMembers;',
'SELECT 1'
));
PREPARE removeIndexIfExists FROM @preparedStatement;
EXECUTE removeIndexIfExists;
DEALLOCATE PREPARE removeIndexIfExists;
SET @preparedStatement = (SELECT IF(
(
SELECT COUNT(*) FROM INFORMATION_SCHEMA.STATISTICS
WHERE table_name = 'ChannelMembers'
AND table_schema = DATABASE()
AND index_name = 'idx_channelmembers_user_id_channel_id_last_viewed_at'
) > 0,
'SELECT 1',
'CREATE INDEX idx_channelmembers_user_id_channel_id_last_viewed_at ON ChannelMembers(UserId, ChannelId, LastViewedAt);'
));
PREPARE createIndexIfNotExists FROM @preparedStatement;
EXECUTE createIndexIfNotExists;
DEALLOCATE PREPARE createIndexIfNotExists;
SET @preparedStatement = (SELECT IF(
(
SELECT COUNT(*) FROM INFORMATION_SCHEMA.STATISTICS
WHERE table_name = 'ChannelMembers'
AND table_schema = DATABASE()
AND index_name = 'idx_channelmembers_channel_id_scheme_guest_user_id'
) > 0,
'SELECT 1',
'CREATE INDEX idx_channelmembers_channel_id_scheme_guest_user_id ON ChannelMembers(ChannelId, SchemeGuest, UserId);'
));
PREPARE createIndexIfNotExists FROM @preparedStatement;
EXECUTE createIndexIfNotExists;
DEALLOCATE PREPARE createIndexIfNotExists;
/* ==> mysql/000059_upgrade_users_v6.0.up.sql <== */
SET @preparedStatement = (SELECT IF(
(
SELECT COUNT(*) FROM INFORMATION_SCHEMA.COLUMNS
WHERE table_name = 'Users'
AND table_schema = DATABASE()
AND column_name = 'Props'
AND column_type != 'JSON'
) > 0,
'ALTER TABLE Users MODIFY COLUMN Props JSON;',
'SELECT 1'
));
PREPARE alterIfExists FROM @preparedStatement;
EXECUTE alterIfExists;
DEALLOCATE PREPARE alterIfExists;
SET @preparedStatement = (SELECT IF(
(
SELECT COUNT(*) FROM INFORMATION_SCHEMA.COLUMNS
WHERE table_name = 'Users'
AND table_schema = DATABASE()
AND column_name = 'NotifyProps'
AND column_type != 'JSON'
) > 0,
'ALTER TABLE Users MODIFY COLUMN NotifyProps JSON;',
'SELECT 1'
));
PREPARE alterIfExists FROM @preparedStatement;
EXECUTE alterIfExists;
DEALLOCATE PREPARE alterIfExists;
SET @preparedStatement = (SELECT IF(
(
SELECT COUNT(*) FROM INFORMATION_SCHEMA.COLUMNS
WHERE table_name = 'Users'
AND table_schema = DATABASE()
AND column_name = 'Timezone'
AND column_default IS NOT NULL
) > 0,
'ALTER TABLE Users ALTER Timezone DROP DEFAULT;',
'SELECT 1'
));
PREPARE alterIfExists FROM @preparedStatement;
EXECUTE alterIfExists;
DEALLOCATE PREPARE alterIfExists;
SET @preparedStatement = (SELECT IF(
(
SELECT COUNT(*) FROM INFORMATION_SCHEMA.COLUMNS
WHERE table_name = 'Users'
AND table_schema = DATABASE()
AND column_name = 'Timezone'
AND column_type != 'JSON'
) > 0,
'ALTER TABLE Users MODIFY COLUMN Timezone JSON;',
'SELECT 1'
));
PREPARE alterIfExists FROM @preparedStatement;
EXECUTE alterIfExists;
DEALLOCATE PREPARE alterIfExists;
SET @preparedStatement = (SELECT IF(
(
SELECT COUNT(*) FROM INFORMATION_SCHEMA.COLUMNS
WHERE table_name = 'Users'
AND table_schema = DATABASE()
AND column_name = 'Roles'
AND column_type != 'text'
) > 0,
'ALTER TABLE Users MODIFY COLUMN Roles text;',
'SELECT 1'
));
PREPARE alterIfExists FROM @preparedStatement;
EXECUTE alterIfExists;
DEALLOCATE PREPARE alterIfExists;
/* ==> mysql/000060_upgrade_jobs_v6.0.up.sql <== */
SET @preparedStatement = (SELECT IF(
(
SELECT COUNT(*) FROM INFORMATION_SCHEMA.COLUMNS
WHERE table_name = 'Jobs'
AND table_schema = DATABASE()
AND column_name = 'Data'
AND column_type != 'JSON'
) > 0,
'ALTER TABLE Jobs MODIFY COLUMN Data JSON;',
'SELECT 1'
));
PREPARE alterIfExists FROM @preparedStatement;
EXECUTE alterIfExists;
DEALLOCATE PREPARE alterIfExists;
/* ==> mysql/000061_upgrade_link_metadata_v6.0.up.sql <== */
SET @preparedStatement = (SELECT IF(
(
SELECT COUNT(*) FROM INFORMATION_SCHEMA.COLUMNS
WHERE table_name = 'LinkMetadata'
AND table_schema = DATABASE()
AND column_name = 'Data'
AND column_type != 'JSON'
) > 0,
'ALTER TABLE LinkMetadata MODIFY COLUMN Data JSON;',
'SELECT 1'
));
PREPARE alterIfExists FROM @preparedStatement;
EXECUTE alterIfExists;
DEALLOCATE PREPARE alterIfExists;
/* ==> mysql/000062_upgrade_sessions_v6.0.up.sql <== */
SET @preparedStatement = (SELECT IF(
(
SELECT COUNT(*) FROM INFORMATION_SCHEMA.COLUMNS
WHERE table_name = 'Sessions'
AND table_schema = DATABASE()
AND column_name = 'Props'
AND column_type != 'JSON'
) > 0,
'ALTER TABLE Sessions MODIFY COLUMN Props JSON;',
'SELECT 1'
));
PREPARE alterIfExists FROM @preparedStatement;
EXECUTE alterIfExists;
DEALLOCATE PREPARE alterIfExists;
/* ==> mysql/000063_upgrade_threads_v6.0.up.sql <== */
SET @preparedStatement = (SELECT IF(
(
SELECT COUNT(*) FROM INFORMATION_SCHEMA.COLUMNS
WHERE table_name = 'Threads'
AND table_schema = DATABASE()
AND column_name = 'Participants'
AND column_type != 'JSON'
) > 0,
'ALTER TABLE Threads MODIFY COLUMN Participants JSON;',
'SELECT 1'
));
PREPARE alterIfExists FROM @preparedStatement;
EXECUTE alterIfExists;
DEALLOCATE PREPARE alterIfExists;
SET @preparedStatement = (SELECT IF(
(
SELECT COUNT(*) FROM INFORMATION_SCHEMA.STATISTICS
WHERE table_name = 'Threads'
AND table_schema = DATABASE()
AND index_name = 'idx_threads_channel_id_last_reply_at'
) > 0,
'SELECT 1',
'CREATE INDEX idx_threads_channel_id_last_reply_at ON Threads(ChannelId, LastReplyAt);'
));
PREPARE createIndexIfNotExists FROM @preparedStatement;
EXECUTE createIndexIfNotExists;
DEALLOCATE PREPARE createIndexIfNotExists;
SET @preparedStatement = (SELECT IF(
(
SELECT COUNT(*) FROM INFORMATION_SCHEMA.STATISTICS
WHERE table_name = 'Threads'
AND table_schema = DATABASE()
AND index_name = 'idx_threads_channel_id'
) > 0,
'DROP INDEX idx_threads_channel_id ON Threads;',
'SELECT 1'
));
PREPARE removeIndexIfExists FROM @preparedStatement;
EXECUTE removeIndexIfExists;
DEALLOCATE PREPARE removeIndexIfExists;
/* ==> mysql/000064_upgrade_status_v6.0.up.sql <== */
SET @preparedStatement = (SELECT IF(
(
SELECT COUNT(*) FROM INFORMATION_SCHEMA.STATISTICS
WHERE table_name = 'Status'
AND table_schema = DATABASE()
AND index_name = 'idx_status_status_dndendtime'
) > 0,
'SELECT 1',
'CREATE INDEX idx_status_status_dndendtime ON Status(Status, DNDEndTime);'
));
PREPARE createIndexIfNotExists FROM @preparedStatement;
EXECUTE createIndexIfNotExists;
DEALLOCATE PREPARE createIndexIfNotExists;
SET @preparedStatement = (SELECT IF(
(
SELECT COUNT(*) FROM INFORMATION_SCHEMA.STATISTICS
WHERE table_name = 'Status'
AND table_schema = DATABASE()
AND index_name = 'idx_status_status'
) > 0,
'DROP INDEX idx_status_status ON Status;',
'SELECT 1'
));
PREPARE removeIndexIfExists FROM @preparedStatement;
EXECUTE removeIndexIfExists;
DEALLOCATE PREPARE removeIndexIfExists;
/* ==> mysql/000065_upgrade_groupchannels_v6.0.up.sql <== */
SET @preparedStatement = (SELECT IF(
(
SELECT COUNT(*) FROM INFORMATION_SCHEMA.STATISTICS
WHERE table_name = 'GroupChannels'
AND table_schema = DATABASE()
AND index_name = 'idx_groupchannels_schemeadmin'
) > 0,
'SELECT 1',
'CREATE INDEX idx_groupchannels_schemeadmin ON GroupChannels(SchemeAdmin);'
));
PREPARE createIndexIfNotExists FROM @preparedStatement;
EXECUTE createIndexIfNotExists;
DEALLOCATE PREPARE createIndexIfNotExists;
/* ==> mysql/000066_upgrade_posts_v6.0.up.sql <== */
DELIMITER //
CREATE PROCEDURE MigrateRootId_Posts ()
BEGIN
DECLARE ParentId_EXIST INT;
DECLARE Alter_FileIds INT;
DECLARE Alter_Props INT;
SELECT COUNT(*) FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'Posts'
AND table_schema = DATABASE()
AND COLUMN_NAME = 'ParentId' INTO ParentId_EXIST;
SELECT COUNT(*) FROM INFORMATION_SCHEMA.COLUMNS
WHERE table_name = 'Posts'
AND table_schema = DATABASE()
AND column_name = 'FileIds'
AND column_type != 'text' INTO Alter_FileIds;
SELECT COUNT(*) FROM INFORMATION_SCHEMA.COLUMNS
WHERE table_name = 'Posts'
AND table_schema = DATABASE()
AND column_name = 'Props'
AND column_type != 'JSON' INTO Alter_Props;
IF (Alter_Props OR Alter_FileIds) THEN
IF(ParentId_EXIST > 0) THEN
UPDATE Posts SET RootId = ParentId WHERE RootId = '' AND RootId != ParentId;
ALTER TABLE Posts MODIFY COLUMN FileIds text, MODIFY COLUMN Props JSON, DROP COLUMN ParentId;
ELSE
ALTER TABLE Posts MODIFY COLUMN FileIds text, MODIFY COLUMN Props JSON;
END IF;
END IF;
END//
DELIMITER ;
CALL MigrateRootId_Posts ();
DROP PROCEDURE IF EXISTS MigrateRootId_Posts;
SET @preparedStatement = (SELECT IF(
(
SELECT COUNT(*) FROM INFORMATION_SCHEMA.STATISTICS
WHERE table_name = 'Posts'
AND table_schema = DATABASE()
AND index_name = 'idx_posts_root_id_delete_at'
) > 0,
'SELECT 1',
'CREATE INDEX idx_posts_root_id_delete_at ON Posts(RootId, DeleteAt);'
));
PREPARE createIndexIfNotExists FROM @preparedStatement;
EXECUTE createIndexIfNotExists;
DEALLOCATE PREPARE createIndexIfNotExists;
SET @preparedStatement = (SELECT IF(
(
SELECT COUNT(*) FROM INFORMATION_SCHEMA.STATISTICS
WHERE table_name = 'Posts'
AND table_schema = DATABASE()
AND index_name = 'idx_posts_root_id'
) > 0,
'DROP INDEX idx_posts_root_id ON Posts;',
'SELECT 1'
));
PREPARE removeIndexIfExists FROM @preparedStatement;
EXECUTE removeIndexIfExists;
DEALLOCATE PREPARE removeIndexIfExists;
/* ==> mysql/000067_upgrade_channelmembers_v6.1.up.sql <== */
SET @preparedStatement = (SELECT IF(
(
SELECT COUNT(*) FROM INFORMATION_SCHEMA.COLUMNS
WHERE table_name = 'ChannelMembers'
AND table_schema = DATABASE()
AND column_name = 'Roles'
AND column_type != 'text'
) > 0,
'ALTER TABLE ChannelMembers MODIFY COLUMN Roles text;',
'SELECT 1'
));
PREPARE alterIfExists FROM @preparedStatement;
EXECUTE alterIfExists;
DEALLOCATE PREPARE alterIfExists;
/* ==> mysql/000068_upgrade_teammembers_v6.1.up.sql <== */
SET @preparedStatement = (SELECT IF(
(
SELECT COUNT(*) FROM INFORMATION_SCHEMA.COLUMNS
WHERE table_name = 'TeamMembers'
AND table_schema = DATABASE()
AND column_name = 'Roles'
AND column_type != 'text'
) > 0,
'ALTER TABLE TeamMembers MODIFY COLUMN Roles text;',
'SELECT 1'
));
PREPARE alterIfExists FROM @preparedStatement;
EXECUTE alterIfExists;
DEALLOCATE PREPARE alterIfExists;
/* ==> mysql/000069_upgrade_jobs_v6.1.up.sql <== */
SET @preparedStatement = (SELECT IF(
(
SELECT COUNT(*) FROM INFORMATION_SCHEMA.STATISTICS
WHERE table_name = 'Jobs'
AND table_schema = DATABASE()
AND index_name = 'idx_jobs_status_type'
) > 0,
'SELECT 1',
'CREATE INDEX idx_jobs_status_type ON Jobs(Status, Type);'
));
PREPARE createIndexIfNotExists FROM @preparedStatement;
EXECUTE createIndexIfNotExists;
DEALLOCATE PREPARE createIndexIfNotExists;
/* ==> mysql/000070_upgrade_cte_v6.1.up.sql <== */
DELIMITER //
CREATE PROCEDURE Migrate_LastRootPostAt ()
BEGIN
DECLARE
LastRootPostAt_EXIST INT;
SELECT
COUNT(*)
FROM
INFORMATION_SCHEMA.COLUMNS
WHERE
TABLE_NAME = 'Channels'
AND table_schema = DATABASE()
AND COLUMN_NAME = 'LastRootPostAt' INTO LastRootPostAt_EXIST;
IF(LastRootPostAt_EXIST = 0) THEN
ALTER TABLE Channels ADD COLUMN LastRootPostAt bigint DEFAULT 0;
UPDATE
Channels
INNER JOIN (
SELECT
Channels.Id channelid,
COALESCE(MAX(Posts.CreateAt), 0) AS lastrootpost
FROM
Channels
LEFT JOIN Posts FORCE INDEX (idx_posts_channel_id_update_at) ON Channels.Id = Posts.ChannelId
WHERE
Posts.RootId = ''
GROUP BY
Channels.Id) AS q ON q.channelid = Channels.Id SET LastRootPostAt = lastrootpost;
END IF;
END//
DELIMITER ;
CALL Migrate_LastRootPostAt ();
DROP PROCEDURE IF EXISTS Migrate_LastRootPostAt;
/* ==> mysql/000071_upgrade_sessions_v6.1.up.sql <== */
SET @preparedStatement = (SELECT IF(
(
SELECT COUNT(*) FROM INFORMATION_SCHEMA.COLUMNS
WHERE table_name = 'Sessions'
AND table_schema = DATABASE()
AND column_name = 'Roles'
AND column_type != 'text'
) > 0,
'ALTER TABLE Sessions MODIFY COLUMN Roles text;',
'SELECT 1'
));
PREPARE alterIfExists FROM @preparedStatement;
EXECUTE alterIfExists;
DEALLOCATE PREPARE alterIfExists;
/* ==> mysql/000072_upgrade_schemes_v6.3.up.sql <== */
SET @preparedStatement = (SELECT IF(
(
SELECT COUNT(*) FROM INFORMATION_SCHEMA.COLUMNS
WHERE table_name = 'Schemes'
AND table_schema = DATABASE()
AND column_name = 'DefaultPlaybookAdminRole'
) > 0,
'SELECT 1',
'ALTER TABLE Schemes ADD COLUMN DefaultPlaybookAdminRole VARCHAR(64) DEFAULT "";'
));
PREPARE alterIfNotExists FROM @preparedStatement;
EXECUTE alterIfNotExists;
DEALLOCATE PREPARE alterIfNotExists;
SET @preparedStatement = (SELECT IF(
(
SELECT COUNT(*) FROM INFORMATION_SCHEMA.COLUMNS
WHERE table_name = 'Schemes'
AND table_schema = DATABASE()
AND column_name = 'DefaultPlaybookMemberRole'
) > 0,
'SELECT 1',
'ALTER TABLE Schemes ADD COLUMN DefaultPlaybookMemberRole VARCHAR(64) DEFAULT "";'
));
PREPARE alterIfNotExists FROM @preparedStatement;
EXECUTE alterIfNotExists;
DEALLOCATE PREPARE alterIfNotExists;
SET @preparedStatement = (SELECT IF(
(
SELECT COUNT(*) FROM INFORMATION_SCHEMA.COLUMNS
WHERE table_name = 'Schemes'
AND table_schema = DATABASE()
AND column_name = 'DefaultRunAdminRole'
) > 0,
'SELECT 1',
'ALTER TABLE Schemes ADD COLUMN DefaultRunAdminRole VARCHAR(64) DEFAULT "";'
));
PREPARE alterIfNotExists FROM @preparedStatement;
EXECUTE alterIfNotExists;
DEALLOCATE PREPARE alterIfNotExists;
SET @preparedStatement = (SELECT IF(
(
SELECT COUNT(*) FROM INFORMATION_SCHEMA.COLUMNS
WHERE table_name = 'Schemes'
AND table_schema = DATABASE()
AND column_name = 'DefaultRunMemberRole'
) > 0,
'SELECT 1',
'ALTER TABLE Schemes ADD COLUMN DefaultRunMemberRole VARCHAR(64) DEFAULT "";'
));
PREPARE alterIfNotExists FROM @preparedStatement;
EXECUTE alterIfNotExists;
DEALLOCATE PREPARE alterIfNotExists;
/* ==> mysql/000073_upgrade_plugin_key_value_store_v6.3.up.sql <== */
SET @preparedStatement = (SELECT IF(
(
SELECT Count(*) FROM Information_Schema.Columns
WHERE table_name = 'PluginKeyValueStore'
AND table_schema = DATABASE()
AND column_name = 'PKey'
AND column_type != 'varchar(150)'
) > 0,
'ALTER TABLE PluginKeyValueStore MODIFY COLUMN PKey varchar(150);',
'SELECT 1'
));
PREPARE alterTypeIfExists FROM @preparedStatement;
EXECUTE alterTypeIfExists;
DEALLOCATE PREPARE alterTypeIfExists;
/* ==> mysql/000074_upgrade_users_v6.3.up.sql <== */
SET @preparedStatement = (SELECT IF(
(
SELECT COUNT(*) FROM INFORMATION_SCHEMA.COLUMNS
WHERE table_name = 'Users'
AND table_schema = DATABASE()
AND column_name = 'AcceptedTermsOfServiceId'
) > 0,
'ALTER TABLE Users DROP COLUMN AcceptedTermsOfServiceId;',
'SELECT 1'
));
PREPARE alterIfExists FROM @preparedStatement;
EXECUTE alterIfExists;
DEALLOCATE PREPARE alterIfExists;


@ -1,199 +0,0 @@
/* Product notices are controlled externally, via the mattermost/notices repository.
When there is a new notice specified there, the server may have time, right after
the migration and before it is shut down, to download it and modify the
ProductNoticeViewState table, adding a row for all users that have not seen it or
removing old notices that no longer need to be shown. This can happen in the
UpdateProductNotices function that is executed periodically to update the notices
cache. The script will never do this, so we need to remove all rows in that table
to avoid any unwanted diff. */
DELETE FROM ProductNoticeViewState;
/* Remove migration-related tables that are only updated through the server to track which
migrations have been applied */
DROP TABLE IF EXISTS db_lock;
DROP TABLE IF EXISTS db_migrations;
/* Migration 000054_create_crt_channelmembership_count.up sets
ChannelMembers.LastUpdateAt to the results of SELECT ROUND(UNIX_TIMESTAMP(NOW(3))*1000)
which will be different each time the migration is run. Thus, the column will always be
different when comparing the server and script migrations. To bypass this, we update all
rows in ChannelMembers so that they contain the same value for such column. */
UPDATE ChannelMembers SET LastUpdateAt = 1;
/* Migration 000055_create_crt_thread_count_and_unreads.up sets
ThreadMemberships.LastUpdated to the results of SELECT ROUND(UNIX_TIMESTAMP(NOW(3))*1000)
which will be different each time the migration is run. Thus, the column will always be
different when comparing the server and script migrations. To bypass this, we update all
rows in ThreadMemberships so that they contain the same value for such column. */
UPDATE ThreadMemberships SET LastUpdated = 1;
/* The security update check in the server may update the LastSecurityTime system value. To
avoid any spurious difference in the migrations, we update it to a fixed value. */
UPDATE Systems SET Value = 1 WHERE Name = 'LastSecurityTime';
/* The server migration may contain a row in the Systems table marking the onboarding as complete.
There are no migrations related to this, so we can simply drop it here. */
DELETE FROM Systems WHERE Name = 'FirstAdminSetupComplete';
/* The server migration contains an in-app migration that adds new roles for Playbooks:
doPlaybooksRolesCreationMigration, defined in https://github.com/mattermost/mattermost-server/blob/282bd351e3767dcfd8c8340da2e0915197c0dbcb/app/migrations.go#L345-L469
The roles are the ones defined in https://github.com/mattermost/mattermost-server/blob/282bd351e3767dcfd8c8340da2e0915197c0dbcb/model/role.go#L874-L929
When this migration finishes, it also adds a new row to the Systems table with the key of the migration.
This in-app migration does not happen in the script, so we remove those rows here. */
DELETE FROM Roles WHERE Name = 'playbook_member';
DELETE FROM Roles WHERE Name = 'playbook_admin';
DELETE FROM Roles WHERE Name = 'run_member';
DELETE FROM Roles WHERE Name = 'run_admin';
DELETE FROM Systems WHERE Name = 'PlaybookRolesCreationMigrationComplete';
/* The server migration contains two in-app migrations that add playbooks permissions to certain roles:
getAddPlaybooksPermissions and getPlaybooksPermissionsAddManageRoles, defined in https://github.com/mattermost/mattermost-server/blob/282bd351e3767dcfd8c8340da2e0915197c0dbcb/app/permissions_migrations.go#L1021-L1072
The specific roles ('%playbook%') are removed in the procedure below, but the migrations also add new rows to the Systems table marking the migrations as complete.
These in-app migrations do not happen in the script, so we remove those rows here. */
DELETE FROM Systems WHERE Name = 'playbooks_manage_roles';
DELETE FROM Systems WHERE Name = 'playbooks_permissions';
/* The server migration contains an in-app migration that adds boards permissions to certain roles:
getProductsBoardsPermissions, defined in https://github.com/mattermost/mattermost-server/blob/282bd351e3767dcfd8c8340da2e0915197c0dbcb/app/permissions_migrations.go#L1074-L1093
The specific roles (sysconsole_read_product_boards and sysconsole_write_product_boards) are removed in the procedure below,
but the migration also adds a new row to the Systems table marking the migration as complete.
This in-app migration does not happen in the script, so we remove that row here. */
DELETE FROM Systems WHERE Name = 'products_boards';
/* TODO: REVIEW STARTING HERE */
/* The server migration contains an in-app migration that adds Ids to the Teams whose InviteId is an empty string:
doRemainingSchemaMigrations, defined in https://github.com/mattermost/mattermost-server/blob/282bd351e3767dcfd8c8340da2e0915197c0dbcb/app/migrations.go#L515-L540
The migration is not replicated in the script, since it happens in-app, but the server adds a new row to the
Systems table marking the migration as complete, which the script doesn't do, so we remove that row here. */
DELETE FROM Systems WHERE Name = 'RemainingSchemaMigrations';
/* The server migration contains three in-app migrations that add a new role and new permissions
related to custom groups. The migrations are:
- doCustomGroupAdminRoleCreationMigration https://github.com/mattermost/mattermost-server/blob/282bd351e3767dcfd8c8340da2e0915197c0dbcb/app/migrations.go#L345-L469
- getAddCustomUserGroupsPermissions https://github.com/mattermost/mattermost-server/blob/282bd351e3767dcfd8c8340da2e0915197c0dbcb/app/permissions_migrations.go#L974-L995
- getAddCustomUserGroupsPermissionRestore https://github.com/mattermost/mattermost-server/blob/282bd351e3767dcfd8c8340da2e0915197c0dbcb/app/permissions_migrations.go#L997-L1019
The specific roles and permissions are removed in the procedure below, but the migrations also
add a new row to the Roles table for the new role and new rows to the Systems table marking the
migrations as complete.
These in-app migrations do not happen in the script, so we remove those rows here. */
DELETE FROM Roles WHERE Name = 'system_custom_group_admin';
DELETE FROM Systems WHERE Name = 'CustomGroupAdminRoleCreationMigrationComplete';
DELETE FROM Systems WHERE Name = 'custom_groups_permissions';
DELETE FROM Systems WHERE Name = 'custom_groups_permission_restore';
/* The server migration contains an in-app migration that updates the config, setting ServiceSettings.PostPriority
to true, doPostPriorityConfigDefaultTrueMigration, defined in https://github.com/mattermost/mattermost-server/blob/282bd351e3767dcfd8c8340da2e0915197c0dbcb/app/migrations.go#L542-L560
The migration is not replicated in the script, since it happens in-app, but the server adds a new row to the
Systems table marking the migration as complete, which the script doesn't do, so we remove that row here. */
DELETE FROM Systems WHERE Name = 'PostPriorityConfigDefaultTrueMigrationComplete';
/* The rest of this script defines and executes a procedure to update the Roles table. It performs several changes:
1. Set the UpdateAt column of all rows to a fixed value, so that the server migration changes to this column
do not appear in the diff.
2. Remove the set of specific permissions added in the server migration that is not covered by the script, as
this logic happens all in-app after the normal DB migrations.
3. Set a consistent order in the Permissions column, which is modelled as a space-separated string containing each of
the different permissions each role has. This change is the reason why we need a complex procedure, which creates
a temporary table that pairs each single permission to its corresponding ID. So if the Roles table contains two
rows like:
Id: 'abcd'
Permissions: 'view_team read_public_channel invite_user'
Id: 'efgh'
Permissions: 'view_team create_emojis'
then the new temporary table will contain five rows like:
Id: 'abcd'
Permissions: 'view_team'
Id: 'abcd'
Permissions: 'read_public_channel'
Id: 'abcd'
Permissions: 'invite_user'
Id: 'efgh'
Permissions: 'view_team'
Id: 'efgh'
Permissions: 'create_emojis'
*/
DROP PROCEDURE IF EXISTS splitPermissions;
DROP PROCEDURE IF EXISTS sortAndFilterPermissionsInRoles;
DROP TEMPORARY TABLE IF EXISTS temp_roles;
CREATE TEMPORARY TABLE temp_roles(id varchar(26), permission longtext);
DELIMITER //
/* Auxiliary procedure that splits the space-separated permissions string into single rows that are inserted
in the temporary temp_roles table along with their corresponding ID. */
CREATE PROCEDURE splitPermissions(
IN id varchar(26),
IN permissionsString longtext
)
BEGIN
DECLARE idx INT DEFAULT 0;
SELECT TRIM(permissionsString) INTO permissionsString;
SELECT LOCATE(' ', permissionsString) INTO idx;
WHILE idx > 0 DO
INSERT INTO temp_roles SELECT id, TRIM(LEFT(permissionsString, idx));
SELECT SUBSTR(permissionsString, idx+1) INTO permissionsString;
SELECT LOCATE(' ', permissionsString) INTO idx;
END WHILE;
INSERT INTO temp_roles(id, permission) VALUES(id, TRIM(permissionsString));
END; //
/* Main procedure that updates the Roles table */
CREATE PROCEDURE sortAndFilterPermissionsInRoles()
BEGIN
DECLARE done INT DEFAULT FALSE;
DECLARE rolesId varchar(26) DEFAULT '';
DECLARE rolesPermissions longtext DEFAULT '';
DECLARE cur1 CURSOR FOR SELECT Id, Permissions FROM Roles;
DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = TRUE;
/* 1. Set a fixed value in the UpdateAt column for all rows in Roles table */
UPDATE Roles SET UpdateAt = 1;
/* Call splitPermissions for every row in the Roles table, thus populating the
temp_roles table. */
OPEN cur1;
read_loop: LOOP
FETCH cur1 INTO rolesId, rolesPermissions;
IF done THEN
LEAVE read_loop;
END IF;
CALL splitPermissions(rolesId, rolesPermissions);
END LOOP;
CLOSE cur1;
/* 2. Filter out the new permissions added by the in-app migrations */
DELETE FROM temp_roles WHERE permission LIKE 'sysconsole_read_products_boards';
DELETE FROM temp_roles WHERE permission LIKE 'sysconsole_write_products_boards';
DELETE FROM temp_roles WHERE permission LIKE '%playbook%';
DELETE FROM temp_roles WHERE permission LIKE 'run_create';
DELETE FROM temp_roles WHERE permission LIKE 'run_manage_members';
DELETE FROM temp_roles WHERE permission LIKE 'run_manage_properties';
DELETE FROM temp_roles WHERE permission LIKE 'run_view';
DELETE FROM temp_roles WHERE permission LIKE '%custom_group%';
/* Temporarily set group_concat_max_len to the maximum permitted value, since the GROUP_CONCAT
call below needs a value bigger than the default */
SET group_concat_max_len = 18446744073709551615;
/* 3. Update the Permissions column in the Roles table with the filtered, sorted permissions,
concatenated again as a space-separated string */
UPDATE
Roles INNER JOIN (
SELECT temp_roles.id as Id, TRIM(group_concat(temp_roles.permission ORDER BY temp_roles.permission SEPARATOR ' ')) as Permissions
FROM Roles JOIN temp_roles ON Roles.Id = temp_roles.id
GROUP BY temp_roles.id
) AS Sorted
ON Roles.Id = Sorted.Id
SET Roles.Permissions = Sorted.Permissions;
/* Reset group_concat_max_len to its default value */
SET group_concat_max_len = 1024;
END; //
DELIMITER ;
CALL sortAndFilterPermissionsInRoles();
DROP TEMPORARY TABLE IF EXISTS temp_roles;
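/* For comparison only, not part of the original script: the cursor-and-procedure dance above is a
MySQL workaround. On PostgreSQL, now the only supported database, the same split/filter/sort/re-join
can be sketched as a single statement, assuming the same Roles(Id, Permissions) schema; the filter
patterns mirror two of the DELETEs above. */

```sql
-- Hedged sketch of a PostgreSQL equivalent: split each role's space-separated
-- Permissions string with string_to_array/unnest, filter out the in-app-added
-- permissions, then sort and re-join with string_agg, all in one UPDATE.
UPDATE Roles
SET Permissions = sub.sorted
FROM (
    SELECT r.Id,
           string_agg(p.perm, ' ' ORDER BY p.perm) AS sorted
    FROM Roles r
    CROSS JOIN LATERAL unnest(string_to_array(r.Permissions, ' ')) AS p(perm)
    WHERE p.perm NOT LIKE '%playbook%'
      AND p.perm NOT LIKE '%custom_group%'
    GROUP BY r.Id
) AS sub
WHERE Roles.Id = sub.Id;
```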

File diff suppressed because it is too large


@@ -1,168 +0,0 @@
/* Product notices are controlled externally, via the mattermost/notices repository.
When there is a new notice specified there, the server may have time, right after
the migration and before it is shut down, to download it and modify the
ProductNoticeViewState table, adding a row for all users that have not seen it or
removing old notices that no longer need to be shown. This can happen in the
UpdateProductNotices function that is executed periodically to update the notices
cache. The script will never do this, so we need to remove all rows in that table
to avoid any unwanted diff. */
DELETE FROM ProductNoticeViewState;
/* Remove migration-related tables that are only updated through the server to track which
migrations have been applied */
DROP TABLE IF EXISTS db_lock;
DROP TABLE IF EXISTS db_migrations;
/* The security update check in the server may update the LastSecurityTime system value. To
avoid any spurious difference in the migrations, we update it to a fixed value. */
UPDATE Systems SET Value = 1 WHERE Name = 'LastSecurityTime';
/* The server migration may contain a row in the Systems table marking the onboarding as complete.
There are no migrations related to this, so we can simply drop it here. */
DELETE FROM Systems WHERE Name = 'FirstAdminSetupComplete';
/* The server migration contains an in-app migration that adds playbooks permissions to certain roles:
getPlaybooksPermissionsAddManageRoles, defined in https://github.com/mattermost/mattermost-server/blob/56a093ceaee6389a01a35b6d4626ef5a9fea4759/app/permissions_migrations.go#L1056-L1072
The specific permissions ('%playbook%') are removed in the procedure below, but the migration also adds a new row to the Systems table marking it as complete.
This in-app migration does not happen in the script, so we remove that row here. */
DELETE FROM Systems WHERE Name = 'playbooks_manage_roles';
/* The server migration contains an in-app migration that adds boards permissions to certain roles:
getProductsBoardsPermissions, defined in https://github.com/mattermost/mattermost-server/blob/282bd351e3767dcfd8c8340da2e0915197c0dbcb/app/permissions_migrations.go#L1074-L1093
The specific permissions (sysconsole_read_products_boards and sysconsole_write_products_boards) are removed in the procedure below,
but the migration also adds a new row to the Systems table marking it as complete.
This in-app migration does not happen in the script, so we remove that row here. */
DELETE FROM Systems WHERE Name = 'products_boards';
/* The server migration contains an in-app migration that adds Ids to the Teams whose InviteId is an empty string:
doRemainingSchemaMigrations, defined in https://github.com/mattermost/mattermost-server/blob/282bd351e3767dcfd8c8340da2e0915197c0dbcb/app/migrations.go#L515-L540
The migration is not replicated in the script, since it happens in-app, but the server adds a new row to the
Systems table marking the migration as complete, which the script doesn't do, so we remove that row here. */
DELETE FROM Systems WHERE Name = 'RemainingSchemaMigrations';
/* The server migration contains three in-app migrations that add a new role and new permissions
related to custom groups. The migrations are:
- doCustomGroupAdminRoleCreationMigration https://github.com/mattermost/mattermost-server/blob/282bd351e3767dcfd8c8340da2e0915197c0dbcb/app/migrations.go#L345-L469
- getAddCustomUserGroupsPermissions https://github.com/mattermost/mattermost-server/blob/282bd351e3767dcfd8c8340da2e0915197c0dbcb/app/permissions_migrations.go#L974-L995
- getAddCustomUserGroupsPermissionRestore https://github.com/mattermost/mattermost-server/blob/282bd351e3767dcfd8c8340da2e0915197c0dbcb/app/permissions_migrations.go#L997-L1019
The specific roles and permissions are removed in the procedure below, but the migrations also
add a new row to the Roles table for the new role and new rows to the Systems table marking the
migrations as complete.
These in-app migrations do not happen in the script, so we remove those rows here. */
DELETE FROM Roles WHERE Name = 'system_custom_group_admin';
DELETE FROM Systems WHERE Name = 'CustomGroupAdminRoleCreationMigrationComplete';
DELETE FROM Systems WHERE Name = 'custom_groups_permissions';
DELETE FROM Systems WHERE Name = 'custom_groups_permission_restore';
/* The server migration contains an in-app migration that updates the config, setting ServiceSettings.PostPriority
to true, doPostPriorityConfigDefaultTrueMigration, defined in https://github.com/mattermost/mattermost-server/blob/282bd351e3767dcfd8c8340da2e0915197c0dbcb/app/migrations.go#L542-L560
The migration is not replicated in the script, since it happens in-app, but the server adds a new row to the
Systems table marking the migration as complete, which the script doesn't do, so we remove that row here. */
DELETE FROM Systems WHERE Name = 'PostPriorityConfigDefaultTrueMigrationComplete';
/* The rest of this script defines and executes a procedure to update the Roles table. It performs several changes:
1. Set the UpdateAt column of all rows to a fixed value, so that the server migration changes to this column
do not appear in the diff.
2. Remove the set of specific permissions added in the server migration that is not covered by the script, as
this logic happens all in-app after the normal DB migrations.
3. Set a consistent order in the Permissions column, which is modelled as a space-separated string containing each of
the different permissions each role has. This change is the reason why we need a complex procedure, which creates
a temporary table that pairs each single permission to its corresponding ID. So if the Roles table contains two
rows like:
Id: 'abcd'
Permissions: 'view_team read_public_channel invite_user'
Id: 'efgh'
Permissions: 'view_team create_emojis'
then the new temporary table will contain five rows like:
Id: 'abcd'
Permissions: 'view_team'
Id: 'abcd'
Permissions: 'read_public_channel'
Id: 'abcd'
Permissions: 'invite_user'
Id: 'efgh'
Permissions: 'view_team'
Id: 'efgh'
Permissions: 'create_emojis'
*/
DROP PROCEDURE IF EXISTS splitPermissions;
DROP PROCEDURE IF EXISTS sortAndFilterPermissionsInRoles;
DROP TEMPORARY TABLE IF EXISTS temp_roles;
CREATE TEMPORARY TABLE temp_roles(id varchar(26), permission longtext);
DELIMITER //
/* Auxiliary procedure that splits the space-separated permissions string into single rows that are inserted
in the temporary temp_roles table along with their corresponding ID. */
CREATE PROCEDURE splitPermissions(
IN id varchar(26),
IN permissionsString longtext
)
BEGIN
DECLARE idx INT DEFAULT 0;
SELECT TRIM(permissionsString) INTO permissionsString;
SELECT LOCATE(' ', permissionsString) INTO idx;
WHILE idx > 0 DO
INSERT INTO temp_roles SELECT id, TRIM(LEFT(permissionsString, idx));
SELECT SUBSTR(permissionsString, idx+1) INTO permissionsString;
SELECT LOCATE(' ', permissionsString) INTO idx;
END WHILE;
INSERT INTO temp_roles(id, permission) VALUES(id, TRIM(permissionsString));
END; //
/* Main procedure that updates the Roles table */
CREATE PROCEDURE sortAndFilterPermissionsInRoles()
BEGIN
DECLARE done INT DEFAULT FALSE;
DECLARE rolesId varchar(26) DEFAULT '';
DECLARE rolesPermissions longtext DEFAULT '';
DECLARE cur1 CURSOR FOR SELECT Id, Permissions FROM Roles;
DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = TRUE;
/* 1. Set a fixed value in the UpdateAt column for all rows in Roles table */
UPDATE Roles SET UpdateAt = 1;
/* Call splitPermissions for every row in the Roles table, thus populating the
temp_roles table. */
OPEN cur1;
read_loop: LOOP
FETCH cur1 INTO rolesId, rolesPermissions;
IF done THEN
LEAVE read_loop;
END IF;
CALL splitPermissions(rolesId, rolesPermissions);
END LOOP;
CLOSE cur1;
/* 2. Filter out the new permissions added by the in-app migrations */
DELETE FROM temp_roles WHERE permission LIKE 'sysconsole_read_products_boards';
DELETE FROM temp_roles WHERE permission LIKE 'sysconsole_write_products_boards';
DELETE FROM temp_roles WHERE permission LIKE 'playbook_public_manage_roles';
DELETE FROM temp_roles WHERE permission LIKE 'playbook_private_manage_roles';
DELETE FROM temp_roles WHERE permission LIKE '%custom_group%';
/* Temporarily set group_concat_max_len to the maximum permitted value, since the GROUP_CONCAT
call below needs a value bigger than the default */
SET group_concat_max_len = 18446744073709551615;
/* 3. Update the Permissions column in the Roles table with the filtered, sorted permissions,
concatenated again as a space-separated string */
UPDATE
Roles INNER JOIN (
SELECT temp_roles.id as Id, TRIM(group_concat(temp_roles.permission ORDER BY temp_roles.permission SEPARATOR ' ')) as Permissions
FROM Roles JOIN temp_roles ON Roles.Id = temp_roles.id
GROUP BY temp_roles.id
) AS Sorted
ON Roles.Id = Sorted.Id
SET Roles.Permissions = Sorted.Permissions;
/* Reset group_concat_max_len to its default value */
SET group_concat_max_len = 1024;
END; //
DELIMITER ;
CALL sortAndFilterPermissionsInRoles();
DROP TEMPORARY TABLE IF EXISTS temp_roles;


@@ -1,599 +0,0 @@
/* ==> mysql/000041_create_upload_sessions.up.sql <== */
/* Release 5.37 was meant to contain the index idx_uploadsessions_type, but a bug prevented that.
This part of migration #41 adds that index */
SET @preparedStatement = (SELECT IF(
(
SELECT COUNT(*) FROM INFORMATION_SCHEMA.STATISTICS
WHERE table_name = 'UploadSessions'
AND table_schema = DATABASE()
AND index_name = 'idx_uploadsessions_type'
) > 0,
'SELECT 1',
'CREATE INDEX idx_uploadsessions_type ON UploadSessions(Type);'
));
PREPARE createIndexIfNotExists FROM @preparedStatement;
EXECUTE createIndexIfNotExists;
DEALLOCATE PREPARE createIndexIfNotExists;
/* ==> mysql/000075_alter_upload_sessions_index.up.sql <== */
DELIMITER //
CREATE PROCEDURE AlterIndex()
BEGIN
DECLARE columnName varchar(26) default '';
SELECT IFNULL(GROUP_CONCAT(column_name ORDER BY seq_in_index), '') INTO columnName
FROM information_schema.statistics
WHERE table_schema = DATABASE()
AND table_name = 'UploadSessions'
AND index_name = 'idx_uploadsessions_user_id'
GROUP BY index_name;
IF columnName = 'Type' THEN
DROP INDEX idx_uploadsessions_user_id ON UploadSessions;
CREATE INDEX idx_uploadsessions_user_id ON UploadSessions(UserId);
END IF;
END//
DELIMITER ;
CALL AlterIndex();
DROP PROCEDURE IF EXISTS AlterIndex;
/* ==> mysql/000076_upgrade_lastrootpostat.up.sql <== */
DELIMITER //
CREATE PROCEDURE Migrate_LastRootPostAt_Default ()
BEGIN
IF (
SELECT COUNT(*) FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'Channels'
AND TABLE_SCHEMA = DATABASE()
AND COLUMN_NAME = 'LastRootPostAt'
AND (COLUMN_DEFAULT IS NULL OR COLUMN_DEFAULT != 0)
) = 1 THEN
ALTER TABLE Channels ALTER COLUMN LastRootPostAt SET DEFAULT 0;
END IF;
END//
DELIMITER ;
CALL Migrate_LastRootPostAt_Default ();
DROP PROCEDURE IF EXISTS Migrate_LastRootPostAt_Default;
DELIMITER //
CREATE PROCEDURE Migrate_LastRootPostAt_Fix ()
BEGIN
IF (
SELECT COUNT(*)
FROM Channels
WHERE LastRootPostAt IS NULL
) > 0 THEN
-- Fixes the earlier CTE migration and sets LastRootPostAt for channels that don't have it set
UPDATE
Channels
INNER JOIN (
SELECT
Channels.Id channelid,
COALESCE(MAX(Posts.CreateAt), 0) AS lastrootpost
FROM
Channels
LEFT JOIN Posts FORCE INDEX (idx_posts_channel_id_update_at) ON Channels.Id = Posts.ChannelId
WHERE
Posts.RootId = ''
GROUP BY
Channels.Id) AS q ON q.channelid = Channels.Id
SET
LastRootPostAt = lastrootpost
WHERE
LastRootPostAt IS NULL;
-- sets LastRootPostAt to 0, for channels with no posts
UPDATE Channels SET LastRootPostAt=0 WHERE LastRootPostAt IS NULL;
END IF;
END//
DELIMITER ;
CALL Migrate_LastRootPostAt_Fix ();
DROP PROCEDURE IF EXISTS Migrate_LastRootPostAt_Fix;
/* ==> mysql/000077_upgrade_users_v6.5.up.sql <== */
SET @preparedStatement = (SELECT IF(
(
SELECT COUNT(*) FROM INFORMATION_SCHEMA.COLUMNS
WHERE table_name = 'Users'
AND table_schema = DATABASE()
AND column_name = 'AcceptedServiceTermsId'
) > 0,
'ALTER TABLE Users DROP COLUMN AcceptedServiceTermsId;',
'SELECT 1'
));
PREPARE alterIfExists FROM @preparedStatement;
EXECUTE alterIfExists;
DEALLOCATE PREPARE alterIfExists;
/* ==> mysql/000078_create_oauth_mattermost_app_id.up.sql <== */
SET @preparedStatement = (SELECT IF(
(
SELECT COUNT(*) FROM INFORMATION_SCHEMA.COLUMNS
WHERE table_name = 'OAuthApps'
AND table_schema = DATABASE()
AND column_name = 'MattermostAppID'
) > 0,
'SELECT 1',
'ALTER TABLE OAuthApps ADD COLUMN MattermostAppID varchar(32);'
));
PREPARE alterIfExists FROM @preparedStatement;
EXECUTE alterIfExists;
DEALLOCATE PREPARE alterIfExists;
/* ==> mysql/000079_usergroups_displayname_index.up.sql <== */
SET @preparedStatement = (SELECT IF(
(
SELECT COUNT(*) FROM INFORMATION_SCHEMA.STATISTICS
WHERE table_name = 'UserGroups'
AND table_schema = DATABASE()
AND index_name = 'idx_usergroups_displayname'
) > 0,
'SELECT 1',
'CREATE INDEX idx_usergroups_displayname ON UserGroups(DisplayName);'
));
PREPARE createIndexIfNotExists FROM @preparedStatement;
EXECUTE createIndexIfNotExists;
DEALLOCATE PREPARE createIndexIfNotExists;
/* ==> mysql/000080_posts_createat_id.up.sql <== */
SET @preparedStatement = (SELECT IF(
(
SELECT COUNT(*) FROM INFORMATION_SCHEMA.STATISTICS
WHERE table_name = 'Posts'
AND table_schema = DATABASE()
AND index_name = 'idx_posts_create_at_id'
) > 0,
'SELECT 1;',
'CREATE INDEX idx_posts_create_at_id on Posts(CreateAt, Id) LOCK=NONE;'
));
PREPARE createIndexIfNotExists FROM @preparedStatement;
EXECUTE createIndexIfNotExists;
DEALLOCATE PREPARE createIndexIfNotExists;
/* ==> mysql/000081_threads_deleteat.up.sql <== */
-- Replaced by 000083_threads_threaddeleteat.up.sql
/* ==> mysql/000082_upgrade_oauth_mattermost_app_id.up.sql <== */
SET @preparedStatement = (SELECT IF(
(
SELECT COUNT(*) FROM INFORMATION_SCHEMA.COLUMNS
WHERE table_name = 'OAuthApps'
AND table_schema = DATABASE()
AND column_name = 'MattermostAppID'
) > 0,
'UPDATE OAuthApps SET MattermostAppID = "" WHERE MattermostAppID IS NULL;',
'SELECT 1'
));
PREPARE alterIfExists FROM @preparedStatement;
EXECUTE alterIfExists;
DEALLOCATE PREPARE alterIfExists;
SET @preparedStatement = (SELECT IF(
(
SELECT COUNT(*) FROM INFORMATION_SCHEMA.COLUMNS
WHERE table_name = 'OAuthApps'
AND table_schema = DATABASE()
AND column_name = 'MattermostAppID'
) > 0,
'ALTER TABLE OAuthApps MODIFY MattermostAppID varchar(32) NOT NULL DEFAULT "";',
'SELECT 1'
));
PREPARE alterIfExists FROM @preparedStatement;
EXECUTE alterIfExists;
DEALLOCATE PREPARE alterIfExists;
/* ==> mysql/000083_threads_threaddeleteat.up.sql <== */
-- Drop any existing DeleteAt column from 000081_threads_deleteat.up.sql
SET @preparedStatement = (SELECT IF(
EXISTS(
SELECT 1 FROM INFORMATION_SCHEMA.COLUMNS
WHERE table_name = 'Threads'
AND table_schema = DATABASE()
AND column_name = 'DeleteAt'
),
'ALTER TABLE Threads DROP COLUMN DeleteAt;',
'SELECT 1;'
));
PREPARE removeColumnIfExists FROM @preparedStatement;
EXECUTE removeColumnIfExists;
DEALLOCATE PREPARE removeColumnIfExists;
SET @preparedStatement = (SELECT IF(
NOT EXISTS(
SELECT 1 FROM INFORMATION_SCHEMA.COLUMNS
WHERE table_name = 'Threads'
AND table_schema = DATABASE()
AND column_name = 'ThreadDeleteAt'
),
'ALTER TABLE Threads ADD COLUMN ThreadDeleteAt bigint(20);',
'SELECT 1;'
));
PREPARE addColumnIfNotExists FROM @preparedStatement;
EXECUTE addColumnIfNotExists;
DEALLOCATE PREPARE addColumnIfNotExists;
UPDATE Threads, Posts
SET Threads.ThreadDeleteAt = Posts.DeleteAt
WHERE Posts.Id = Threads.PostId
AND Threads.ThreadDeleteAt IS NULL;
/* ==> mysql/000084_recent_searches.up.sql <== */
CREATE TABLE IF NOT EXISTS RecentSearches (
UserId CHAR(26),
SearchPointer int,
Query json,
CreateAt bigint NOT NULL,
PRIMARY KEY (UserId, SearchPointer)
);
/* ==> mysql/000085_fileinfo_add_archived_column.up.sql <== */
SET @preparedStatement = (SELECT IF(
(
SELECT COUNT(*) FROM INFORMATION_SCHEMA.COLUMNS
WHERE table_name = 'FileInfo'
AND table_schema = DATABASE()
AND column_name = 'Archived'
) > 0,
'SELECT 1',
'ALTER TABLE FileInfo ADD COLUMN Archived boolean NOT NULL DEFAULT false;'
));
PREPARE alterIfExists FROM @preparedStatement;
EXECUTE alterIfExists;
DEALLOCATE PREPARE alterIfExists;
/* ==> mysql/000086_add_cloud_limits_archived.up.sql <== */
SET @preparedStatement = (SELECT IF(
NOT EXISTS(
SELECT 1 FROM INFORMATION_SCHEMA.COLUMNS
WHERE table_name = 'Teams'
AND table_schema = DATABASE()
AND column_name = 'CloudLimitsArchived'
),
'ALTER TABLE Teams ADD COLUMN CloudLimitsArchived BOOLEAN NOT NULL DEFAULT FALSE;',
'SELECT 1'
));
PREPARE alterIfNotExists FROM @preparedStatement;
EXECUTE alterIfNotExists;
DEALLOCATE PREPARE alterIfNotExists;
/* ==> mysql/000087_sidebar_categories_index.up.sql <== */
SET @preparedStatement = (SELECT IF(
(
SELECT COUNT(*) FROM INFORMATION_SCHEMA.STATISTICS
WHERE table_name = 'SidebarCategories'
AND table_schema = DATABASE()
AND index_name = 'idx_sidebarcategories_userid_teamid'
) > 0,
'SELECT 1;',
'CREATE INDEX idx_sidebarcategories_userid_teamid on SidebarCategories(UserId, TeamId) LOCK=NONE;'
));
PREPARE createIndexIfNotExists FROM @preparedStatement;
EXECUTE createIndexIfNotExists;
DEALLOCATE PREPARE createIndexIfNotExists;
/* ==> mysql/000088_remaining_migrations.up.sql <== */
DROP TABLE IF EXISTS JobStatuses;
DROP TABLE IF EXISTS PasswordRecovery;
SET @preparedStatement = (SELECT IF(
(
SELECT COUNT(*) FROM INFORMATION_SCHEMA.COLUMNS
WHERE table_name = 'Users'
AND table_schema = DATABASE()
AND column_name = 'ThemeProps'
) > 0,
'INSERT INTO Preferences(UserId, Category, Name, Value) SELECT Id, \'\', \'\', ThemeProps FROM Users WHERE Users.ThemeProps != \'null\'',
'SELECT 1'
));
PREPARE migrateTheme FROM @preparedStatement;
EXECUTE migrateTheme;
DEALLOCATE PREPARE migrateTheme;
-- We have to do this twice because the prepared statement doesn't support multiple SQL queries
-- in a single string.
SET @preparedStatement = (SELECT IF(
(
SELECT COUNT(*) FROM INFORMATION_SCHEMA.COLUMNS
WHERE table_name = 'Users'
AND table_schema = DATABASE()
AND column_name = 'ThemeProps'
) > 0,
'ALTER TABLE Users DROP COLUMN ThemeProps',
'SELECT 1'
));
PREPARE migrateTheme FROM @preparedStatement;
EXECUTE migrateTheme;
DEALLOCATE PREPARE migrateTheme;
/* ==> mysql/000089_add-channelid-to-reaction.up.sql <== */
SET @preparedStatement = (SELECT IF(
NOT EXISTS(
SELECT 1 FROM INFORMATION_SCHEMA.COLUMNS
WHERE table_name = 'Reactions'
AND table_schema = DATABASE()
AND column_name = 'ChannelId'
),
'ALTER TABLE Reactions ADD COLUMN ChannelId varchar(26) NOT NULL DEFAULT "";',
'SELECT 1;'
));
PREPARE addColumnIfNotExists FROM @preparedStatement;
EXECUTE addColumnIfNotExists;
DEALLOCATE PREPARE addColumnIfNotExists;
UPDATE Reactions SET ChannelId = COALESCE((select ChannelId from Posts where Posts.Id = Reactions.PostId), '') WHERE ChannelId="";
SET @preparedStatement = (SELECT IF(
(
SELECT COUNT(*) FROM INFORMATION_SCHEMA.STATISTICS
WHERE table_name = 'Reactions'
AND table_schema = DATABASE()
AND index_name = 'idx_reactions_channel_id'
) > 0,
'SELECT 1',
'CREATE INDEX idx_reactions_channel_id ON Reactions(ChannelId);'
));
PREPARE createIndexIfNotExists FROM @preparedStatement;
EXECUTE createIndexIfNotExists;
DEALLOCATE PREPARE createIndexIfNotExists;
/* ==> mysql/000090_create_enums.up.sql <== */
SET @preparedStatement = (SELECT IF(
(
SELECT COUNT(*) FROM INFORMATION_SCHEMA.COLUMNS
WHERE table_name = 'Channels'
AND table_schema = DATABASE()
AND column_name = 'Type'
AND column_type != 'ENUM("D", "O", "G", "P")'
) > 0,
'ALTER TABLE Channels MODIFY COLUMN Type ENUM("D", "O", "G", "P");',
'SELECT 1'
));
PREPARE alterIfExists FROM @preparedStatement;
EXECUTE alterIfExists;
DEALLOCATE PREPARE alterIfExists;
SET @preparedStatement = (SELECT IF(
(
SELECT COUNT(*) FROM INFORMATION_SCHEMA.COLUMNS
WHERE table_name = 'Teams'
AND table_schema = DATABASE()
AND column_name = 'Type'
AND column_type != 'ENUM("I", "O")'
) > 0,
'ALTER TABLE Teams MODIFY COLUMN Type ENUM("I", "O");',
'SELECT 1'
));
PREPARE alterIfExists FROM @preparedStatement;
EXECUTE alterIfExists;
DEALLOCATE PREPARE alterIfExists;
SET @preparedStatement = (SELECT IF(
(
SELECT COUNT(*) FROM INFORMATION_SCHEMA.COLUMNS
WHERE table_name = 'UploadSessions'
AND table_schema = DATABASE()
AND column_name = 'Type'
AND column_type != 'ENUM("attachment", "import")'
) > 0,
'ALTER TABLE UploadSessions MODIFY COLUMN Type ENUM("attachment", "import");',
'SELECT 1'
));
PREPARE alterIfExists FROM @preparedStatement;
EXECUTE alterIfExists;
DEALLOCATE PREPARE alterIfExists;
/* ==> mysql/000091_create_post_reminder.up.sql <== */
CREATE TABLE IF NOT EXISTS PostReminders (
PostId varchar(26) NOT NULL,
UserId varchar(26) NOT NULL,
TargetTime bigint,
PRIMARY KEY (PostId, UserId)
);
SET @preparedStatement = (SELECT IF(
(
SELECT COUNT(*) FROM INFORMATION_SCHEMA.STATISTICS
WHERE table_name = 'PostReminders'
AND table_schema = DATABASE()
AND index_name = 'idx_postreminders_targettime'
) > 0,
'SELECT 1',
'CREATE INDEX idx_postreminders_targettime ON PostReminders(TargetTime);'
));
PREPARE createIndexIfNotExists FROM @preparedStatement;
EXECUTE createIndexIfNotExists;
DEALLOCATE PREPARE createIndexIfNotExists;
/* ==> mysql/000092_add_createat_to_teammembers.up.sql <== */
SET @preparedStatement = (SELECT IF(
NOT EXISTS(
SELECT 1 FROM INFORMATION_SCHEMA.COLUMNS
WHERE table_name = 'TeamMembers'
AND table_schema = DATABASE()
AND column_name = 'CreateAt'
),
'ALTER TABLE TeamMembers ADD COLUMN CreateAt bigint DEFAULT 0;',
'SELECT 1;'
));
PREPARE addColumnIfNotExists FROM @preparedStatement;
EXECUTE addColumnIfNotExists;
DEALLOCATE PREPARE addColumnIfNotExists;
SET @preparedStatement = (SELECT IF(
(
SELECT COUNT(*) FROM INFORMATION_SCHEMA.STATISTICS
WHERE table_name = 'TeamMembers'
AND table_schema = DATABASE()
AND index_name = 'idx_teammembers_createat'
) > 0,
'SELECT 1',
'CREATE INDEX idx_teammembers_createat ON TeamMembers(CreateAt);'
));
PREPARE createIndexIfNotExists FROM @preparedStatement;
EXECUTE createIndexIfNotExists;
DEALLOCATE PREPARE createIndexIfNotExists;
/* ==> mysql/000093_notify_admin.up.sql <== */
CREATE TABLE IF NOT EXISTS NotifyAdmin (
UserId varchar(26) NOT NULL,
CreateAt bigint(20) DEFAULT NULL,
RequiredPlan varchar(26) NOT NULL,
RequiredFeature varchar(100) NOT NULL,
Trial BOOLEAN NOT NULL,
PRIMARY KEY (UserId, RequiredFeature, RequiredPlan)
);
/* ==> mysql/000094_threads_teamid.up.sql <== */
-- Replaced by 000096_threads_threadteamid.up.sql
/* ==> mysql/000095_remove_posts_parentid.up.sql <== */
-- While upgrading from 5.x to 6.x with manual queries, there is a chance that this
-- migration is skipped. In that case, we need to make sure that the column is dropped.
SET @preparedStatement = (SELECT IF(
(
SELECT COUNT(*) FROM INFORMATION_SCHEMA.COLUMNS
WHERE table_name = 'Posts'
AND table_schema = DATABASE()
AND column_name = 'ParentId'
) > 0,
'ALTER TABLE Posts DROP COLUMN ParentId;',
'SELECT 1'
));
PREPARE alterIfExists FROM @preparedStatement;
EXECUTE alterIfExists;
DEALLOCATE PREPARE alterIfExists;
/* ==> mysql/000096_threads_threadteamid.up.sql <== */
-- Drop any existing TeamId column from 000094_threads_teamid.up.sql
SET @preparedStatement = (SELECT IF(
EXISTS(
SELECT 1 FROM INFORMATION_SCHEMA.COLUMNS
WHERE table_name = 'Threads'
AND table_schema = DATABASE()
AND column_name = 'TeamId'
),
'ALTER TABLE Threads DROP COLUMN TeamId;',
'SELECT 1;'
));
PREPARE removeColumnIfExists FROM @preparedStatement;
EXECUTE removeColumnIfExists;
DEALLOCATE PREPARE removeColumnIfExists;
SET @preparedStatement = (SELECT IF(
NOT EXISTS(
SELECT 1 FROM INFORMATION_SCHEMA.COLUMNS
WHERE table_name = 'Threads'
AND table_schema = DATABASE()
AND column_name = 'ThreadTeamId'
),
'ALTER TABLE Threads ADD COLUMN ThreadTeamId varchar(26) DEFAULT NULL;',
'SELECT 1;'
));
PREPARE addColumnIfNotExists FROM @preparedStatement;
EXECUTE addColumnIfNotExists;
DEALLOCATE PREPARE addColumnIfNotExists;
UPDATE Threads, Channels
SET Threads.ThreadTeamId = Channels.TeamId
WHERE Channels.Id = Threads.ChannelId
AND Threads.ThreadTeamId IS NULL;
/* ==> mysql/000097_create_posts_priority.up.sql <== */
CREATE TABLE IF NOT EXISTS PostsPriority (
PostId varchar(26) NOT NULL,
ChannelId varchar(26) NOT NULL,
Priority varchar(32) NOT NULL,
RequestedAck tinyint(1),
PersistentNotifications tinyint(1),
PRIMARY KEY (PostId)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
SET @preparedStatement = (SELECT IF(
NOT EXISTS(
SELECT 1 FROM INFORMATION_SCHEMA.COLUMNS
WHERE table_name = 'ChannelMembers'
AND table_schema = DATABASE()
AND column_name = 'UrgentMentionCount'
),
'ALTER TABLE ChannelMembers ADD COLUMN UrgentMentionCount bigint(20);',
'SELECT 1;'
));
PREPARE alterIfNotExists FROM @preparedStatement;
EXECUTE alterIfNotExists;
DEALLOCATE PREPARE alterIfNotExists;
/* ==> mysql/000098_create_post_acknowledgements.up.sql <== */
CREATE TABLE IF NOT EXISTS PostAcknowledgements (
PostId varchar(26) NOT NULL,
UserId varchar(26) NOT NULL,
AcknowledgedAt bigint(20) DEFAULT NULL,
PRIMARY KEY (PostId, UserId)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
/* ==> mysql/000099_create_drafts.up.sql <== */
CREATE TABLE IF NOT EXISTS Drafts (
CreateAt bigint(20) DEFAULT NULL,
UpdateAt bigint(20) DEFAULT NULL,
DeleteAt bigint(20) DEFAULT NULL,
UserId varchar(26) NOT NULL,
ChannelId varchar(26) NOT NULL,
RootId varchar(26) DEFAULT '',
Message text,
Props text,
FileIds text,
PRIMARY KEY (UserId, ChannelId, RootId)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
/* ==> mysql/000100_add_draft_priority_column.up.sql <== */
SET @preparedStatement = (SELECT IF(
(
SELECT COUNT(*) FROM INFORMATION_SCHEMA.COLUMNS
WHERE table_name = 'Drafts'
AND table_schema = DATABASE()
AND column_name = 'Priority'
) > 0,
'SELECT 1',
'ALTER TABLE Drafts ADD COLUMN Priority text;'
));
PREPARE alterIfExists FROM @preparedStatement;
EXECUTE alterIfExists;
DEALLOCATE PREPARE alterIfExists;
/* ==> mysql/000101_create_true_up_review_history.up.sql <== */
CREATE TABLE IF NOT EXISTS TrueUpReviewHistory (
DueDate bigint(20),
Completed boolean,
PRIMARY KEY (DueDate)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
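/* For comparison only, not part of the original migrations: all of the SET @preparedStatement /
PREPARE / EXECUTE / DEALLOCATE ceremony in the file above exists because most MySQL DDL lacks
conditional guards. PostgreSQL supports them directly, which is why the PostgreSQL side of these
migrations never needed the pattern. Object names below are taken from the blocks above. */

```sql
-- Equivalent idempotent DDL in PostgreSQL; no prepared-statement workaround
-- is needed, each guard is built into the statement itself.
CREATE INDEX IF NOT EXISTS idx_uploadsessions_type ON UploadSessions(Type);
ALTER TABLE FileInfo ADD COLUMN IF NOT EXISTS Archived boolean NOT NULL DEFAULT false;
ALTER TABLE Users DROP COLUMN IF EXISTS AcceptedServiceTermsId;
```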


@@ -1,23 +0,0 @@
/* The sessions in the DB dump may have expired before the CI tests run, making
the server remove the rows and generating a spurious diff that we want to avoid.
In order to do so, we set all sessions' ExpiresAt value to 0, so they never expire. */
UPDATE Sessions SET ExpiresAt = 0;
/* The dump may not contain a system-bot user, in which case the server will create
one if it's not shut down before a job requests it. This situation creates a flaky
test in which, on rare occasions, the system-bot is indeed created, generating a
spurious diff. We avoid this by making sure that there is a system-bot user and a
corresponding bot */
DELIMITER //
CREATE PROCEDURE AddSystemBotIfNeeded ()
BEGIN
DECLARE CreateSystemBot BOOLEAN;
SELECT COUNT(*) = 0 FROM Users WHERE Username = 'system-bot' INTO CreateSystemBot;
IF CreateSystemBot THEN
/* These values are retrieved from a real system-bot created by a server */
INSERT INTO `Bots` VALUES ('nc7y5x1i8jgr9btabqo5m3579c','','phxrtijfrtfg7k4bwj9nophqyc',0,1681308600015,1681308600015,0);
INSERT INTO `Users` VALUES ('nc7y5x1i8jgr9btabqo5m3579c',1681308600014,1681308600014,0,'system-bot','',NULL,'','system-bot@localhost',0,'','System','','','system_user',0,'{}','{\"push\": \"mention\", \"email\": \"true\", \"channel\": \"true\", \"desktop\": \"mention\", \"comments\": \"never\", \"first_name\": \"false\", \"push_status\": \"away\", \"mention_keys\": \"\", \"push_threads\": \"all\", \"desktop_sound\": \"true\", \"email_threads\": \"all\", \"desktop_threads\": \"all\"}',1681308600014,0,0,'en','{\"manualTimezone\": \"\", \"automaticTimezone\": \"\", \"useAutomaticTimezone\": \"true\"}',0,'',NULL);
END IF;
END//
DELIMITER ;
CALL AddSystemBotIfNeeded();
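/* On PostgreSQL the same fixture needs no stored procedure: INSERT ... ON CONFLICT
gives the idempotent guard directly. A sketch of the equivalent, with the column
list abbreviated for illustration (the real row carries the full set of values
shown above): */
UPDATE Sessions SET ExpiresAt = 0;

/* Idempotent system-bot creation; ON CONFLICT replaces the
   COUNT(*)-guarded procedure that MySQL required. */
INSERT INTO Users (Id, Username, Email, Roles)
VALUES ('nc7y5x1i8jgr9btabqo5m3579c', 'system-bot', 'system-bot@localhost', 'system_user')
ON CONFLICT (Id) DO NOTHING;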


@@ -5,9 +5,6 @@
"13": "postgres:13@sha256:1b154a7bbf474aa1a2e67dc7c976835645fe6c3425320e7ad3f5a926d509e8fc",
"14": "postgres:14@sha256:1c418702ab77adc7e84c7e726c2ab4f9cb63b8f997341ffcfab56629bab1429d"
},
"mysql": {
"8.0.32": "mysql/mysql-server:8.0.32@sha256:d6c8301b7834c5b9c2b733b10b7e630f441af7bc917c74dba379f24eeeb6a313"
},
"minio": {
"RELEASE.2019-10-11T00-38-09Z-1": "minio/minio:RELEASE.2019-10-11T00-38-09Z@sha256:0d02f16a1662653f9b961211b21ed7de04bf04492f44c2b7594bacbfcc519eb5",
"RELEASE.2024-06-22T05-26-45Z": "minio/minio:RELEASE.2024-06-22T05-26-45Z@sha256:dda5e13d3df07fae2c1877701998742bcbe3bbb2b9c24c18ed5b9469cc777761"


@@ -1,59 +0,0 @@
#!/bin/bash
./scripts/jq-dep-check.sh
TMPDIR=`mktemp -d 2>/dev/null || mktemp -d -t 'tmpConfigDir'`
DUMPDIR=`mktemp -d 2>/dev/null || mktemp -d -t 'dumpDir'`
SCHEMA_VERSION=$1
echo "Creating databases"
docker exec mattermost-mysql mysql -uroot -pmostest -e "CREATE DATABASE migrated; CREATE DATABASE latest; GRANT ALL PRIVILEGES ON migrated.* TO mmuser; GRANT ALL PRIVILEGES ON latest.* TO mmuser"
echo "Importing mysql dump from version ${SCHEMA_VERSION}"
docker exec -i mattermost-mysql mysql -D migrated -uroot -pmostest < $(pwd)/scripts/mattermost-mysql-$SCHEMA_VERSION.sql
docker exec -i mattermost-mysql mysql -D migrated -uroot -pmostest -e "INSERT INTO Systems (Name, Value) VALUES ('Version', '$SCHEMA_VERSION')"
echo "Setting up config for db migration"
cat config/config.json | \
jq '.SqlSettings.DataSource = "mmuser:mostest@tcp(localhost:3306)/migrated?charset=utf8mb4&readTimeout=30s&writeTimeout=30s"' | \
jq '.SqlSettings.DriverName = "mysql"' > $TMPDIR/config.json
echo "Running the migration"
make ARGS="db migrate --config $TMPDIR/config.json" run-cli
echo "Setting up config for fresh db setup"
cat config/config.json | \
jq '.SqlSettings.DataSource = "mmuser:mostest@tcp(localhost:3306)/latest?charset=utf8mb4&readTimeout=30s&writeTimeout=30s"' | \
jq '.SqlSettings.DriverName = "mysql"' > $TMPDIR/config.json
echo "Setting up fresh db"
make ARGS="db migrate --config $TMPDIR/config.json" run-cli
if [ "$SCHEMA_VERSION" == "5.0.0" ]; then
for i in "ChannelMembers SchemeGuest" "ChannelMembers MsgCountRoot" "ChannelMembers MentionCountRoot" "Channels TotalMsgCountRoot"; do
a=( $i );
echo "Ignoring known MySQL mismatch: ${a[0]}.${a[1]}"
docker exec mattermost-mysql mysql -D migrated -uroot -pmostest -e "ALTER TABLE ${a[0]} DROP COLUMN ${a[1]};"
docker exec mattermost-mysql mysql -D latest -uroot -pmostest -e "ALTER TABLE ${a[0]} DROP COLUMN ${a[1]};"
done
fi
echo "Generating dump"
docker exec mattermost-mysql mysqldump --skip-opt --no-data --compact -u root -pmostest migrated > $DUMPDIR/migrated.sql
docker exec mattermost-mysql mysqldump --skip-opt --no-data --compact -u root -pmostest latest > $DUMPDIR/latest.sql
echo "Removing databases created for db comparison"
docker exec mattermost-mysql mysql -uroot -pmostest -e "DROP DATABASE migrated; DROP DATABASE latest"
echo "Generating diff"
git diff --word-diff=color $DUMPDIR/migrated.sql $DUMPDIR/latest.sql > $DUMPDIR/diff.txt
diffErrorCode=$?
if [ $diffErrorCode -eq 0 ]; then
echo "Both schemas are the same"
else
echo "Schema mismatch"
cat $DUMPDIR/diff.txt
fi
rm -rf $TMPDIR $DUMPDIR
exit $diffErrorCode


@@ -1,4 +0,0 @@
#!/bin/bash
stmt="STOP SLAVE SQL_THREAD FOR CHANNEL '';CHANGE MASTER TO MASTER_DELAY = $1;START SLAVE SQL_THREAD FOR CHANNEL '';SHOW SLAVE STATUS\G;"
docker exec mattermost-mysql-read-replica sh -c "export MYSQL_PWD=mostest; mysql -u root -e \"$stmt\"" | grep SQL_Delay


@@ -1,32 +0,0 @@
#!/bin/bash
until docker exec mattermost-mysql sh -c 'mysql -u root -pmostest -e ";"'
do
echo "Waiting for mattermost-mysql database connection..."
sleep 4
done
priv_stmt='GRANT REPLICATION SLAVE ON *.* TO "mmuser"@"%" IDENTIFIED BY "mostest"; FLUSH PRIVILEGES;'
docker exec mattermost-mysql sh -c "mysql -u root -pmostest -e '$priv_stmt'"
until docker compose -f docker-compose.makefile.yml exec mysql-read-replica sh -c 'mysql -u root -pmostest -e ";"'
do
echo "Waiting for mysql-read-replica database connection..."
sleep 4
done
docker-ip() {
docker inspect --format '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' "$@"
}
MS_STATUS=`docker exec mattermost-mysql sh -c 'mysql -u root -pmostest -e "SHOW MASTER STATUS"'`
CURRENT_LOG=`echo $MS_STATUS | awk '{print $6}'`
CURRENT_POS=`echo $MS_STATUS | awk '{print $7}'`
start_slave_stmt="CHANGE MASTER TO MASTER_HOST='$(docker-ip mattermost-mysql)',MASTER_USER='mmuser',MASTER_PASSWORD='mostest',MASTER_LOG_FILE='$CURRENT_LOG',MASTER_LOG_POS=$CURRENT_POS; START SLAVE;"
start_slave_cmd='mysql -u root -pmostest -e "'
start_slave_cmd+="$start_slave_stmt"
start_slave_cmd+='"'
docker exec mattermost-mysql-read-replica sh -c "$start_slave_cmd"
docker exec mattermost-mysql-read-replica sh -c "mysql -u root -pmostest -e 'SHOW SLAVE STATUS \G'"


@@ -1,45 +0,0 @@
LOAD DATABASE
FROM mysql://{{ .mysql_user }}:{{ .mysql_password }}@mysql:3306/{{ .source_schema }}
INTO pgsql://{{ .pg_user }}:{{ .pg_password }}@postgres:5432/{{ .target_schema }}
WITH data only,
workers = 8, concurrency = 1,
multiple readers per thread, rows per range = 10000,
prefetch rows = 10000, batch rows = 2500,
create no tables, create no indexes,
preserve index names
SET PostgreSQL PARAMETERS
maintenance_work_mem to '128MB',
work_mem to '12MB'
SET MySQL PARAMETERS
net_read_timeout = '120',
net_write_timeout = '120'
CAST column Channels.Type to "channel_type" drop typemod,
column Teams.Type to "team_type" drop typemod,
column UploadSessions.Type to "upload_session_type" drop typemod,
column ChannelBookmarks.Type to "channel_bookmark_type" drop typemod,
column Drafts.Priority to text,
type int when (= precision 11) to integer drop typemod,
type bigint when (= precision 20) to bigint drop typemod,
type text to varchar drop typemod using remove-null-characters,
type tinyint when (<= precision 4) to boolean using tinyint-to-boolean,
type json to jsonb drop typemod using remove-null-characters
EXCLUDING TABLE NAMES MATCHING ~<IR_>, ~<focalboard>
BEFORE LOAD DO
$$ ALTER SCHEMA public RENAME TO {{ .source_schema }}; $$,
$$ TRUNCATE TABLE {{ .source_schema }}.systems; $$,
$$ DROP INDEX IF EXISTS {{ .source_schema }}.idx_posts_message_txt; $$,
$$ DROP INDEX IF EXISTS {{ .source_schema }}.idx_fileinfo_content_txt; $$
AFTER LOAD DO
$$ UPDATE {{ .source_schema }}.db_migrations set name='add_createat_to_teamembers' where version=92; $$,
$$ CREATE INDEX IF NOT EXISTS idx_posts_message_txt ON {{ .source_schema }}.posts USING gin(to_tsvector('english', message)); $$,
$$ CREATE INDEX IF NOT EXISTS idx_fileinfo_content_txt ON {{ .source_schema }}.fileinfo USING gin(to_tsvector('english', content)); $$,
$$ ALTER SCHEMA {{ .source_schema }} RENAME TO public; $$,
$$ SELECT pg_catalog.set_config('search_path', '"$user", public', false); $$,
$$ ALTER USER {{ .pg_user }} SET SEARCH_PATH TO 'public'; $$;
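/* The CAST rules in this load file perform the MySQL-to-PostgreSQL type mapping
in bulk. Done by hand on an already-loaded table, the tinyint-to-boolean and
json-to-jsonb rules correspond roughly to the following (table and column
names are illustrative, not taken from the load file): */
ALTER TABLE Users
    ALTER COLUMN MfaActive TYPE boolean USING (MfaActive <> 0),
    ALTER COLUMN Props TYPE jsonb USING Props::jsonb;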


@@ -89,8 +89,8 @@
"IosMinVersion": ""
},
"SqlSettings": {
"DriverName": "mysql",
"DataSource": "mmuser:mostest@tcp(localhost:3306)/mattermost_test?charset=utf8mb4\u0026readTimeout=30s\u0026writeTimeout=30s\u0026maxAllowedPacket=4194304",
"DriverName": "postgres",
"DataSource": "postgres://mmuser:mostest@localhost:5432/mattermost_test?sslmode=disable\u0026connect_timeout=10",
"DataSourceReplicas": [],
"DataSourceSearchReplicas": [],
"Trace": false,


@@ -219,13 +219,8 @@ exports[`components/DatabaseSettings should match snapshot 1`] = `
disabled={false}
helpText={
<Memo(MemoizedFormattedMessage)
defaultMessage="Minimum number of characters in a hashtag. This must be greater than or equal to 2. MySQL databases must be configured to support searching strings shorter than three characters, <link>see documentation</link>."
defaultMessage="Minimum number of characters in a hashtag. This must be greater than or equal to 2."
id="admin.service.minimumHashtagLengthDescription"
values={
Object {
"link": [Function],
}
}
/>
}
id="minimumHashtagLength"


@@ -67,7 +67,7 @@ const messages = defineMessages({
connMaxIdleTimeTitle: {id: 'admin.sql.connMaxIdleTimeTitle', defaultMessage: 'Maximum Connection Idle Time:'},
connMaxIdleTimeDescription: {id: 'admin.sql.connMaxIdleTimeDescription', defaultMessage: 'Maximum idle time for a connection to the database in milliseconds.'},
minimumHashtagLengthTitle: {id: 'admin.service.minimumHashtagLengthTitle', defaultMessage: 'Minimum Hashtag Length:'},
minimumHashtagLengthDescription: {id: 'admin.service.minimumHashtagLengthDescription', defaultMessage: 'Minimum number of characters in a hashtag. This must be greater than or equal to 2. MySQL databases must be configured to support searching strings shorter than three characters, <link>see documentation</link>.'},
minimumHashtagLengthDescription: {id: 'admin.service.minimumHashtagLengthDescription', defaultMessage: 'Minimum number of characters in a hashtag. This must be greater than or equal to 2.'},
traceTitle: {id: 'admin.sql.traceTitle', defaultMessage: 'SQL Statement Logging: '},
traceDescription: {id: 'admin.sql.traceDescription', defaultMessage: '(Development Mode) When true, executing SQL statements are written to the log.'},
});
@@ -321,19 +321,7 @@ export default class DatabaseSettings extends OLDAdminSettings<Props, State> {
}
placeholder={defineMessage({id: 'admin.service.minimumHashtagLengthExample', defaultMessage: 'E.g.: "3"'})}
helpText={
<FormattedMessage
{...messages.minimumHashtagLengthDescription}
values={{
link: (msg) => (
<ExternalLink
location='database_settings'
href='https://dev.mysql.com/doc/refman/8.0/en/fulltext-fine-tuning.html'
>
{msg}
</ExternalLink>
),
}}
/>
<FormattedMessage {...messages.minimumHashtagLengthDescription}/>
}
value={this.state.minimumHashtagLength}
onChange={this.handleChange}


@@ -80,11 +80,3 @@ table.systemUsersTable {
}
}
}
.systemUsers__mySqlAlertBanner {
margin-bottom: 20px;
.systemUsers__mySqlAlertBanner-buttons {
margin-top: 12px;
}
}


@@ -2709,7 +2709,7 @@
"admin.service.maximumPayloadSizeDescription": "The maximum number of bytes allowed in the payload of incoming HTTP calls",
"admin.service.mfaDesc": "When true, users with AD/LDAP or email login can add multi-factor authentication to their account using an authenticator app.",
"admin.service.mfaTitle": "Enable Multi-factor Authentication:",
"admin.service.minimumHashtagLengthDescription": "Minimum number of characters in a hashtag. This must be greater than or equal to 2. MySQL databases must be configured to support searching strings shorter than three characters, <link>see documentation</link>.",
"admin.service.minimumHashtagLengthDescription": "Minimum number of characters in a hashtag. This must be greater than or equal to 2.",
"admin.service.minimumHashtagLengthExample": "E.g.: \"3\"",
"admin.service.minimumHashtagLengthTitle": "Minimum Hashtag Length:",
"admin.service.mobileSessionHours": "Session Length Mobile (hours):",


@@ -75,7 +75,6 @@ const Preferences = {
CATEGORY_REPORTING: 'reporting',
HIDE_BATCH_EXPORT_CONFIRM_MODAL: 'hide_batch_export_confirm_modal',
HIDE_MYSQL_STATS_NOTIFICATION: 'hide_mysql_stats_notifcation',
CATEGORY_OVERAGE_USERS_BANNER: 'overage_users_banner',
CATEGORY_POST_HISTORY_LIMIT_BANNER: 'post_history_limit_banner',