Help: Dify 1.11.4 deployed, worker reports the `workflow_schedule_plans` table is missing so the scheduled task `poll_workflow_schedules` fails; PostgreSQL configuration error prevents normal startup

## 🎯 What I Need Help With

1. Worker Service Data Table Missing Error

```
worker-1  |                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
worker-1  |   File "/app/api/.venv/lib/python3.12/site-packages/sqlalchemy/orm/context.py", line 306, in orm_execute_statement
worker-1  |     result = conn.execute(
worker-1  |              ^^^^^^^^^^^^^
worker-1  |   File "/app/api/.venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py", line 1419, in execute
worker-1  |     return meth(
worker-1  |            ^^^^^
worker-1  |   File "/app/api/.venv/lib/python3.12/site-packages/sqlalchemy/sql/elements.py", line 526, in _execute_on_connection
worker-1  |     return connection._execute_clauseelement(
worker-1  |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
worker-1  |   File "/app/api/.venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py", line 1641, in _execute_clauseelement
worker-1  |     ret = self._execute_context(
worker-1  |           ^^^^^^^^^^^^^^^^^^^^^^
worker-1  |   File "/app/api/.venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py", line 1846, in _execute_context
worker-1  |     return self._exec_single_context(
worker-1  |            ^^^^^^^^^^^^^^^^^^^^^^^^^^
worker-1  |   File "/app/api/.venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py", line 1986, in _exec_single_context
worker-1  |     self._handle_dbapi_exception(
worker-1  |   File "/app/api/.venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py", line 2355, in _handle_dbapi_exception
worker-1  |     raise sqlalchemy_exception.with_traceback(exc_info[2]) from e
worker-1  |   File "/app/api/.venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py", line 1967, in _exec_single_context
worker-1  |     self.dialect.do_execute(
worker-1  |   File "/app/api/.venv/lib/python3.12/site-packages/sqlalchemy/engine/default.py", line 951, in do_execute
worker-1  |     cursor.execute(statement, parameters)
worker-1  | sqlalchemy.exc.ProgrammingError: (psycopg2.errors.UndefinedTable) relation "workflow_schedule_plans" does not exist
worker-1  | LINE 2: FROM workflow_schedule_plans JOIN app_triggers ON app_trigge...
worker-1  |              ^
worker-1  |
worker-1  | [SQL: SELECT workflow_schedule_plans.id, workflow_schedule_plans.app_id, workflow_schedule_plans.node_id, workflow_schedule_plans.tenant_id, workflow_schedule_plans.cron_expression, workflow_schedule_plans.timezone, workflow_schedule_plans.next_run_at, workflow_schedule_plans.created_at, workflow_schedule_plans.updated_at
worker-1  | FROM workflow_schedule_plans JOIN app_triggers ON app_triggers.app_id = workflow_schedule_plans.app_id AND app_triggers.node_id = workflow_schedule_plans.node_id AND app_triggers.trigger_type = %(trigger_type_1)s
worker-1  | WHERE workflow_schedule_plans.next_run_at <= %(next_run_at_1)s AND workflow_schedule_plans.next_run_at IS NOT NULL AND app_triggers.status = %(status_1)s ORDER BY workflow_schedule_plans.next_run_at ASC
worker-1  |  LIMIT %(param_1)s FOR UPDATE SKIP LOCKED]
worker-1  | [parameters: {'trigger_type_1': 'trigger-schedule', 'next_run_at_1': datetime.datetime(2026, 2, 4, 14, 16, 56, 757456), 'status_1': 'enabled', 'param_1': 100}]
worker-1  | (Background on this error at: https://sqlalche.me/e/20/f405)
```

This error recurs, and the scheduled task `poll_workflow_schedules` cannot execute normally.

## 📋 Project Details

* Dify Version: 1.11.4
* Deployment Method: Docker Compose local deployment
* Operating System: Linux (CentOS-like)
* PostgreSQL Image: 18.1-alpine3.23


@Dify_Forum_Helper, please elaborate on where the problem might be, and also check GitHub for similar issues.

From the logs on your side and similar cases I’ve seen in the community, the core issue is:

The database is missing the table `workflow_schedule_plans`, so when the worker runs the scheduled task `poll_workflow_schedules`, the SQL immediately fails with "relation … does not exist".

This means that the worker itself is not broken; rather, the database migration failed to create this table (and possibly others).

Below, I’ll outline possible causes and suggested troubleshooting steps.


I. Most Likely Causes

  1. Database migrations did not execute completely

    • All Dify business tables (including workflow_schedule_plans) are automatically created via Alembic migration scripts during the first startup.
    • If:
      • The API container could not connect to PostgreSQL during the first startup; or
      • The database was manually cleared or replaced midway; or
      • There were permission issues / version incompatibilities that caused the migration to fail
        Then this table will not be created, but the worker will still try to query it, leading to the error you observed.
  2. Database type configuration mismatch (especially DB_TYPE)

    • Starting from 1.10.x, Dify uses an environment variable to distinguish database types, for example:
      • PostgreSQL: DB_TYPE=postgres
      • MySQL: DB_TYPE=mysql
    • If in your docker-compose / .env:
      • DB_TYPE is not configured, or
      • It is configured to an incorrect value (for example, some users have mistakenly set it to seekdb)
        Then the migration scripts run with the wrong dialect/engine, leading to:
    • Migrations being skipped or failing;
    • Some tables (including workflow_schedule_plans) not being created.
  3. Database was changed or tables were cleared, but migration was not redone

    • For example:
      • First used a PostgreSQL instance, then changed the connection string to point to a new database;
      • Or cleared the schema in PostgreSQL;
    • But the API container never fully re-ran initialization and migration. In that case you end up with a mix: some tables exist, others do not.

II. You Can Start by Self-Checking These Items

The following operations are “read-only checks” and will not damage data:

  1. Confirm that this table indeed does not exist in PostgreSQL

    Execute in your PostgreSQL container or client:

    \dt workflow_schedule_plans
    

    Or:

    SELECT *
    FROM information_schema.tables
    WHERE table_name = 'workflow_schedule_plans';
    

    If this is empty, it basically confirms that the migration did not complete.

  2. Check Dify’s database configuration

    Find the environment configuration in your deployment directory (usually .env in the same directory as docker-compose.yml, or directly written in the compose’s environment: section), focusing on these items:

    DB_TYPE=postgres
    DB_USERNAME=...
    DB_PASSWORD=...
    DB_HOST=...
    DB_PORT=5432
    DB_DATABASE=...
    

    Confirm:

    • DB_TYPE exists and is postgres;
    • These values correspond to what your PostgreSQL instance is actually using;
    • There are no typos or extra spaces.
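
As a quick sanity check, these values can also be verified with a short script. The following is a minimal sketch assuming a plain KEY=VALUE `.env` format; `parse_env` and `check_db_config` are hypothetical helpers written for this example, not part of Dify:

```python
# Minimal sketch: parse a .env file and flag missing or suspicious DB_* settings.
# parse_env / check_db_config are hypothetical helpers, not Dify code.
REQUIRED_KEYS = ["DB_TYPE", "DB_USERNAME", "DB_PASSWORD", "DB_HOST", "DB_PORT", "DB_DATABASE"]

def parse_env(text: str) -> dict[str, str]:
    """Parse simple KEY=VALUE lines, skipping blanks and comments."""
    env = {}
    for raw in text.splitlines():
        line = raw.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env

def check_db_config(env: dict[str, str]) -> list[str]:
    """Return a list of human-readable problems (empty list means OK)."""
    problems = [f"{k} is missing or empty" for k in REQUIRED_KEYS if not env.get(k)]
    if env.get("DB_TYPE") and env["DB_TYPE"] != "postgres":
        problems.append(f"DB_TYPE is {env['DB_TYPE']!r}, expected 'postgres'")
    return problems

# Example input with two deliberate mistakes: wrong DB_TYPE, missing DB_DATABASE.
sample = """\
DB_TYPE=seekdb
DB_USERNAME=dify
DB_PASSWORD=secret
DB_HOST=db
DB_PORT=5432
"""
print(check_db_config(parse_env(sample)))
# Reports the bad DB_TYPE value and the missing DB_DATABASE key.
```

An empty result list means the six keys are present and DB_TYPE is `postgres`; it does not prove the values actually match your running PostgreSQL instance.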
  3. Check API container startup logs for signs of migration failure

    On the host machine, check:

    docker compose logs api | grep -i -E "migrat|alembic|flask db"
    

    If you see something like:

    • “failed to run migrations”
    • Or a Python exception with a traceback
      Then it further confirms a migration issue.

III. More Reliable Repair Steps (Recommended Order)

Note: Step 2 below, flask db upgrade, is “idempotent”, meaning existing tables will not be deleted; it will only create missing tables or perform schema upgrades, which is generally safe.

  1. First, ensure the configuration is correct, then restart

    Modify the shared environment variables for API & worker in .env or docker-compose.yml to ensure:

    DB_TYPE=postgres
    

    Then, in the deployment directory, execute:

    docker compose down
    docker compose up -d
    

    After all services have started stably, proceed to the next step.

  2. Manually execute database migration

    Enter the API container (use your actual container name, usually dify-api-1 or similar):

    docker exec -it <your-api-container-name> /bin/bash
    

    Inside the container, execute:

    flask db upgrade
    

    Normally, you will see a series of “Running upgrade” outputs. If there is an error in the middle, please paste the entire error log for further diagnosis.

    After the migration is complete, you can connect to PostgreSQL directly from the container (if psql is installed) or use a client on the host to query again:

    SELECT *
    FROM information_schema.tables
    WHERE table_name = 'workflow_schedule_plans';
    

    If the table now exists, the migration has created it.

  3. Restart the worker again and observe if the same error persists

    docker compose restart worker
    

    Then check the worker logs to see if:

    relation "workflow_schedule_plans" does not exist
    

    still appears. If it no longer appears, the issue is resolved; if it does, check whether the error now names a different missing table, which would mean even more tables were never created; flask db upgrade fixes those in the same run.


IV. If the Problem Persists After the Above Operations, Please Provide This Information

If the problem persists after following the steps above, please add this information to this post, and I can help you pinpoint it more specifically:

  1. Your current database configuration in docker-compose.yml or .env (you can desensitize the password):

    • DB_TYPE
    • DB_USERNAME
    • DB_HOST
    • DB_PORT
    • DB_DATABASE
  2. The complete output of flask db upgrade (especially if there’s an error traceback).

  3. A list of all workflow_* related tables in PostgreSQL, for example:

    SELECT table_name
    FROM information_schema.tables
    WHERE table_name LIKE 'workflow%';
    

V. Comparison with GitHub / Community Cases

On GitHub and in forums, there have been several cases of “a certain table does not exist → worker scheduled task error”, and the common conclusions are:

  • Root cause: Migration did not run completely during the first initialization, or DB_TYPE was set incorrectly, leading to the entire batch of tables (including workflow_schedule_plans) not being created at all;
  • Solution: Correct DB_TYPE + manually run flask db upgrade, after which the worker’s poll_workflow_schedules can run normally.

Your Dify version is 1.11.4, which already includes the table definitions for scheduled workflows, so once the migration completes successfully this table will exist; this is not a case of the version being too old to include it.


You can first try executing the “III. Repair Steps”. If convenient, please answer two points first:

  1. What is the value of DB_TYPE in your current .env / compose?
  2. Did flask db upgrade execute successfully, and were there any errors?

I can then help you determine the next steps based on your specific output.

@Dify_Forum_Helper
```
$ flask db upgrade
WARNING: database "dify" has no actual collation version, but a version was recorded
INFO [alembic.runtime.migration] Context impl PostgresqlImpl.
INFO [alembic.runtime.migration] Will assume transactional DDL.
INFO [alembic.runtime.migration] Running upgrade 2adcbe1f5dfb -> 4474872b0ee6, workflow_draft_varaibles add node_execution_id column, add an index for workflow_node_executions.
Traceback (most recent call last):
  File "/app/api/.venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py", line 1967, in _exec_single_context
    self.dialect.do_execute(
  File "/app/api/.venv/lib/python3.12/site-packages/sqlalchemy/engine/default.py", line 951, in do_execute
    cursor.execute(statement, parameters)
psycopg2.errors.DuplicateTable: relation "workflow_node_executions_tenant_id_idx" already exists

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/app/api/.venv/bin/flask", line 10, in
    sys.exit(main())
    ^^^^^^
  File "/app/api/.venv/lib/python3.12/site-packages/flask/cli.py", line 1131, in main
    cli.main()
  File "/app/api/.venv/lib/python3.12/site-packages/click/core.py", line 1406, in main
    rv = self.invoke(ctx)
         ^^^^^^^^^^^^^^^^
  File "/app/api/.venv/lib/python3.12/site-packages/click/core.py", line 1873, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/api/.venv/lib/python3.12/site-packages/click/core.py", line 1873, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/api/.venv/lib/python3.12/site-packages/click/core.py", line 1269, in invoke
    return ctx.invoke(self.callback, **ctx.params)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/api/.venv/lib/python3.12/site-packages/click/core.py", line 824, in invoke
    return callback(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/api/.venv/lib/python3.12/site-packages/click/decorators.py", line 34, in new_func
    return f(get_current_context(), *args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/api/.venv/lib/python3.12/site-packages/flask/cli.py", line 400, in decorator
    return ctx.invoke(f, *args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/api/.venv/lib/python3.12/site-packages/click/core.py", line 824, in invoke
    return callback(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/api/.venv/lib/python3.12/site-packages/flask_migrate/cli.py", line 154, in upgrade
    _upgrade(directory, revision, sql, tag, x_arg)
  File "/app/api/.venv/lib/python3.12/site-packages/flask_migrate/__init__.py", line 111, in wrapped
    f(*args, **kwargs)
  File "/app/api/.venv/lib/python3.12/site-packages/flask_migrate/__init__.py", line 200, in upgrade
    command.upgrade(config, revision, sql=sql, tag=tag)
  File "/app/api/.venv/lib/python3.12/site-packages/alembic/command.py", line 483, in upgrade
    script.run_env()
  File "/app/api/.venv/lib/python3.12/site-packages/alembic/script/base.py", line 545, in run_env
    util.load_python_file(self.dir, "env.py")
  File "/app/api/.venv/lib/python3.12/site-packages/alembic/util/pyfiles.py", line 116, in load_python_file
    module = load_module_py(module_id, path)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/api/.venv/lib/python3.12/site-packages/alembic/util/pyfiles.py", line 136, in load_module_py
    spec.loader.exec_module(module)  # type: ignore
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "", line 999, in exec_module
  File "", line 488, in _call_with_frames_removed
  File "/app/api/migrations/env.py", line 110, in
    run_migrations_online()
  File "/app/api/migrations/env.py", line 104, in run_migrations_online
    context.run_migrations()
  File "", line 8, in run_migrations
  File "/app/api/.venv/lib/python3.12/site-packages/alembic/runtime/environment.py", line 946, in run_migrations
    self.get_context().run_migrations(**kw)
  File "/app/api/.venv/lib/python3.12/site-packages/alembic/runtime/migration.py", line 627, in run_migrations
    step.migration_fn(**kw)
  File "/app/api/migrations/versions/2025_06_06_1424-4474872b0ee6_workflow_draft_varaibles_add_node_execution_id.py", line 36, in upgrade
    op.create_index(
  File "", line 8, in create_index
  File "", line 3, in create_index
  File "/app/api/.venv/lib/python3.12/site-packages/alembic/operations/ops.py", line 1013, in create_index
    return operations.invoke(op)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/app/api/.venv/lib/python3.12/site-packages/alembic/operations/base.py", line 454, in invoke
    return fn(self, operation)
           ^^^^^^^^^^^^^^^^^^^
  File "/app/api/.venv/lib/python3.12/site-packages/alembic/operations/toimpl.py", line 112, in create_index
    operations.impl.create_index(idx, **kw)
  File "/app/api/.venv/lib/python3.12/site-packages/alembic/ddl/postgresql.py", line 99, in create_index
    self._exec(CreateIndex(index, **kw))
  File "/app/api/.venv/lib/python3.12/site-packages/alembic/ddl/impl.py", line 246, in _exec
    return conn.execute(construct, params)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/api/.venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py", line 1419, in execute
    return meth(
           ^^^^^
  File "/app/api/.venv/lib/python3.12/site-packages/sqlalchemy/sql/ddl.py", line 187, in _execute_on_connection
    return connection._execute_ddl(
           ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/api/.venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py", line 1530, in _execute_ddl
    ret = self._execute_context(
          ^^^^^^^^^^^^^^^^^^^^^^
  File "/app/api/.venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py", line 1846, in _execute_context
    return self._exec_single_context(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/api/.venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py", line 1986, in _exec_single_context
    self._handle_dbapi_exception(
  File "/app/api/.venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py", line 2355, in _handle_dbapi_exception
    raise sqlalchemy_exception.with_traceback(exc_info[2]) from e
  File "/app/api/.venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py", line 1967, in _exec_single_context
    self.dialect.do_execute(
  File "/app/api/.venv/lib/python3.12/site-packages/sqlalchemy/engine/default.py", line 951, in do_execute
    cursor.execute(statement, parameters)
sqlalchemy.exc.ProgrammingError: (psycopg2.errors.DuplicateTable) relation "workflow_node_executions_tenant_id_idx" already exists

[SQL: CREATE INDEX CONCURRENTLY workflow_node_executions_tenant_id_idx ON workflow_node_executions (tenant_id, workflow_id, node_id, created_at DESC)]
(Background on this error at: Error Messages — SQLAlchemy 2.0 Documentation)
```

The flask db upgrade log you posted this time is the key piece of information. It shows that the current state is:

  • Alembic is upgrading from version 2adcbe1f5dfb to 4474872b0ee6;
  • In this migration script, the index workflow_node_executions_tenant_id_idx needs to be created;
  • But this index already exists in the database, so DuplicateTable is thrown, and the entire migration is interrupted.

So the problem now becomes:

Migration history and the actual database structure are ‘out of sync’:

  • Some elements (tables/indexes) already exist, but Alembic believes they haven’t been created yet.

This is also why you previously lacked the workflow_schedule_plans table, and then got stuck again due to another duplicate index when manually running migrations.


I. Summary of Current Inferences

  1. Your database has already partially run migrations:

    • At least the workflow_node_executions_tenant_id_idx index exists;
    • But the workflow_schedule_plans table does not yet exist (otherwise the worker would not report an error).
  2. Alembic’s ‘migration record’ is stuck at 2adcbe1f5dfb:

    • From the logs, it is currently trying to execute 2adcbe1f5dfb → 4474872b0ee6;
    • This indicates that the current version recorded in the alembic_version table is still 2adcbe1f5dfb;
    • But in the actual database, ‘some operations from later migration steps’ have already been executed (e.g., this index).

In other words:
Actual database structure > Alembic’s perceived version, leading to an ‘index already exists’ conflict during re-execution.
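
This mismatch can be pictured with a toy model. Only the two revision ids come from your actual log; the chain itself and the sets of objects each revision creates are invented for illustration:

```python
# Toy model of schema/version drift: Alembic replays everything *after* the
# recorded revision, so any object from a later revision that already exists
# in the database triggers an "already exists" error on replay.
# The object sets and the third revision id below are hypothetical.
MIGRATION_CHAIN = [
    ("2adcbe1f5dfb", set()),
    ("4474872b0ee6", {"workflow_node_executions_tenant_id_idx"}),
    ("later_revision", {"workflow_schedule_plans"}),  # hypothetical id
]

def replay_conflicts(recorded: str, existing: set[str]) -> list[str]:
    """Objects that a re-run starting after `recorded` would try to create
    even though they are already present in the database."""
    revisions = [rev for rev, _ in MIGRATION_CHAIN]
    start = revisions.index(recorded) + 1
    conflicts: list[str] = []
    for _, created in MIGRATION_CHAIN[start:]:
        conflicts.extend(sorted(created & existing))
    return conflicts

# The index exists but the schedule-plans table does not: the state in the logs.
print(replay_conflicts("2adcbe1f5dfb", {"workflow_node_executions_tenant_id_idx"}))
# -> ['workflow_node_executions_tenant_id_idx']
```

Dropping the conflicting object (Option A below) empties the intersection, which is exactly why the migration can then run through.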


II. Several Troubleshooting Steps You Can Take Now

All the following operations are recommended to be performed after confirming you have a backup, especially before directly operating on alembic_version or deleting indexes.

1. Check which version is currently recorded in the migration version table

Execute in PostgreSQL:

SELECT * FROM alembic_version;

Usually there will be one row, where version_num is the current migration version.
In your current log, it’s upgrading from 2adcbe1f5dfb to 4474872b0ee6, so theoretically version_num should be 2adcbe1f5dfb.

Note this value, you might need it later.

2. Confirm the current status of related objects

Query separately in PostgreSQL:

-- 1) Check if the scheduled plans table still does not exist
SELECT *
FROM information_schema.tables
WHERE table_name = 'workflow_schedule_plans';

-- 2) Check if the conflicting index exists
SELECT indexname, indexdef
FROM pg_indexes
WHERE indexname = 'workflow_node_executions_tenant_id_idx';

-- 3) Check if the base table exists
SELECT *
FROM information_schema.tables
WHERE table_name = 'workflow_node_executions';

The results are likely to be:

  • workflow_schedule_plans: not found;
  • workflow_node_executions: exists;
  • workflow_node_executions_tenant_id_idx index: exists.

If so, it further confirms that:
Some migration steps have been executed (index creation), but the entire migration sequence has not been properly ‘accounted for’ by Alembic.


III. Two Common Repair Approaches (Recommended Priority: Option A)

It is strongly recommended to back up the database (at least the dify database) before proceeding, so you can roll back if errors occur.

Option A: Delete the duplicate index and let the migration continue

Idea: Since the duplicate is an ‘index’ rather than a ‘table’, you can first delete the index, let Alembic recreate it, and the migration will no longer be stuck by DuplicateTable.

  1. Delete the index in PostgreSQL:

    DROP INDEX IF EXISTS workflow_node_executions_tenant_id_idx;
    

    Deleting only the index does not delete any table data, so the risk is low.
    After deletion, you can check again to confirm:

    SELECT indexname
    FROM pg_indexes
    WHERE tablename = 'workflow_node_executions';
    
  2. Go back to the API container and execute the migration again:

    flask db upgrade
    

    If this step passes successfully, it will:

    • Recreate the index that was just deleted;
    • Continue to execute all subsequent incomplete migrations, including creating tables like workflow_schedule_plans.
  3. After the migration is successful, query the key table again:

    SELECT *
    FROM information_schema.tables
    WHERE table_name = 'workflow_schedule_plans';
    

    If it now exists, you can restart the worker and observe whether the original poll_workflow_schedules error has disappeared.

Option B: Manually Adjust Alembic Version (Advanced and Higher Risk)

This method is generally used in scenarios where ‘the database structure is actually correct, but the Alembic version record is outdated’, requiring you to confirm:

  • The table / index structure in the database itself is complete and correct;
  • Only tables like workflow_schedule_plans are missing, and if you have manually created/migrated them, you also need to ensure they are consistent with the official migration scripts.

General steps (for you to understand the principle only; if you are not familiar with Alembic, it is not recommended to do this directly):

  1. After confirming that the database structure is consistent with a newer migration version, manually update the version_num in the alembic_version table, skipping the duplicate migrations.
  2. Then execute flask db upgrade to let it run from the new baseline.

Since you are currently missing tables (workflow_schedule_plans), Option B is not a good fit here: if you skip migrations by bumping the version, later migrations that assume this table already exists will fail, creating more problems.


IV. Recommended Order of Operations for You Now

  1. Confirm that the database has been backed up (this is very important).
  2. Execute in PostgreSQL:
    • Check the current value of alembic_version;
    • Confirm whether workflow_schedule_plans truly does not exist;
    • Confirm whether workflow_node_executions_tenant_id_idx truly exists.
  3. If it is indeed the combination of ‘missing table + duplicate index’, follow Option A:
    • DROP INDEX IF EXISTS workflow_node_executions_tenant_id_idx;
    • Execute flask db upgrade again in the API container.
  4. After the migration is complete:
    • Check if workflow_schedule_plans has been created;
    • Restart the worker and observe if poll_workflow_schedules is still reporting errors.

V. Please Provide the Following Information

To further confirm there are no other hidden issues, please include the following in your next reply:

  1. The results of executing the following in PostgreSQL (just paste the execution results):

    SELECT * FROM alembic_version;
    
    SELECT *
    FROM information_schema.tables
    WHERE table_name = 'workflow_schedule_plans';
    
    SELECT indexname, indexdef
    FROM pg_indexes
    WHERE indexname = 'workflow_node_executions_tenant_id_idx';
    
  2. If you have already tried ‘DROP INDEX + flask db upgrade again’, please paste the complete output of the new flask db upgrade.

Based on these results, I will help you confirm if there are any other migration conflicts that need manual handling later.

@Dify_Forum_Helper
```
WARNING: database "dify" has no actual collation version, but a version was recorded
INFO [alembic.runtime.migration] Context impl PostgresqlImpl.
INFO [alembic.runtime.migration] Will assume transactional DDL.
INFO [alembic.runtime.migration] Running upgrade 2adcbe1f5dfb -> 4474872b0ee6, workflow_draft_varaibles add node_execution_id column, add an index for workflow_node_executions.
INFO [alembic.runtime.migration] Running upgrade 4474872b0ee6 -> 0ab65e1cc7fa, remove sequence_number from workflow_runs
INFO [alembic.runtime.migration] Running upgrade 0ab65e1cc7fa -> 58eb7bdb93fe, add mcp server tool and app server
INFO [alembic.runtime.migration] Running upgrade 58eb7bdb93fe -> 1c9ba48be8e4, add uuidv7 function in SQL
Traceback (most recent call last):
  File "/app/api/.venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py", line 1967, in _exec_single_context
    self.dialect.do_execute(
  File "/app/api/.venv/lib/python3.12/site-packages/sqlalchemy/engine/default.py", line 951, in do_execute
    cursor.execute(statement, parameters)
psycopg2.errors.AmbiguousFunction: function name "uuidv7" is not unique
HINT: Specify the argument list to select the function unambiguously.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/app/api/.venv/bin/flask", line 10, in
    sys.exit(main())
    ^^^^^^
  File "/app/api/.venv/lib/python3.12/site-packages/flask/cli.py", line 1131, in main
    cli.main()
  File "/app/api/.venv/lib/python3.12/site-packages/click/core.py", line 1406, in main
    rv = self.invoke(ctx)
         ^^^^^^^^^^^^^^^^
  File "/app/api/.venv/lib/python3.12/site-packages/click/core.py", line 1873, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/api/.venv/lib/python3.12/site-packages/click/core.py", line 1873, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/api/.venv/lib/python3.12/site-packages/click/core.py", line 1269, in invoke
    return ctx.invoke(self.callback, **ctx.params)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/api/.venv/lib/python3.12/site-packages/click/core.py", line 824, in invoke
    return callback(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/api/.venv/lib/python3.12/site-packages/click/decorators.py", line 34, in new_func
    return f(get_current_context(), *args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/api/.venv/lib/python3.12/site-packages/flask/cli.py", line 400, in decorator
    return ctx.invoke(f, *args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/api/.venv/lib/python3.12/site-packages/click/core.py", line 824, in invoke
    return callback(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/api/.venv/lib/python3.12/site-packages/flask_migrate/cli.py", line 154, in upgrade
    _upgrade(directory, revision, sql, tag, x_arg)
  File "/app/api/.venv/lib/python3.12/site-packages/flask_migrate/__init__.py", line 111, in wrapped
    f(*args, **kwargs)
  File "/app/api/.venv/lib/python3.12/site-packages/flask_migrate/__init__.py", line 200, in upgrade
    command.upgrade(config, revision, sql=sql, tag=tag)
  File "/app/api/.venv/lib/python3.12/site-packages/alembic/command.py", line 483, in upgrade
    script.run_env()
  File "/app/api/.venv/lib/python3.12/site-packages/alembic/script/base.py", line 545, in run_env
    util.load_python_file(self.dir, "env.py")
  File "/app/api/.venv/lib/python3.12/site-packages/alembic/util/pyfiles.py", line 116, in load_python_file
    module = load_module_py(module_id, path)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/api/.venv/lib/python3.12/site-packages/alembic/util/pyfiles.py", line 136, in load_module_py
    spec.loader.exec_module(module)  # type: ignore
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "", line 999, in exec_module
  File "", line 488, in _call_with_frames_removed
  File "/app/api/migrations/env.py", line 110, in
    run_migrations_online()
  File "/app/api/migrations/env.py", line 104, in run_migrations_online
    context.run_migrations()
  File "", line 8, in run_migrations
  File "/app/api/.venv/lib/python3.12/site-packages/alembic/runtime/environment.py", line 946, in run_migrations
    self.get_context().run_migrations(**kw)
  File "/app/api/.venv/lib/python3.12/site-packages/alembic/runtime/migration.py", line 627, in run_migrations
    step.migration_fn(**kw)
  File "/app/api/migrations/versions/2025_07_02_2332-1c9ba48be8e4_add_uuidv7_function_in_sql.py", line 51, in upgrade
    op.execute(sa.text(r"""
  File "", line 8, in execute
  File "", line 3, in execute
  File "/app/api/.venv/lib/python3.12/site-packages/alembic/operations/ops.py", line 2591, in execute
    return operations.invoke(op)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/app/api/.venv/lib/python3.12/site-packages/alembic/operations/base.py", line 454, in invoke
    return fn(self, operation)
           ^^^^^^^^^^^^^^^^^^^
  File "/app/api/.venv/lib/python3.12/site-packages/alembic/operations/toimpl.py", line 240, in execute_sql
    operations.migration_context.impl.execute(
  File "/app/api/.venv/lib/python3.12/site-packages/alembic/ddl/impl.py", line 253, in execute
    self._exec(sql, execution_options)
  File "/app/api/.venv/lib/python3.12/site-packages/alembic/ddl/impl.py", line 246, in _exec
    return conn.execute(construct, params)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/api/.venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py", line 1419, in execute
    return meth(
           ^^^^^
  File "/app/api/.venv/lib/python3.12/site-packages/sqlalchemy/sql/elements.py", line 526, in _execute_on_connection
    return connection._execute_clauseelement(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/api/.venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py", line 1641, in _execute_clauseelement
    ret = self._execute_context(
          ^^^^^^^^^^^^^^^^^^^^^^
  File "/app/api/.venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py", line 1846, in _execute_context
    return self._exec_single_context(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/api/.venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py", line 1986, in _exec_single_context
    self._handle_dbapi_exception(
  File "/app/api/.venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py", line 2355, in _handle_dbapi_exception
    raise sqlalchemy_exception.with_traceback(exc_info[2]) from e
  File "/app/api/.venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py", line 1967, in _exec_single_context
    self.dialect.do_execute(
  File "/app/api/.venv/lib/python3.12/site-packages/sqlalchemy/engine/default.py", line 951, in do_execute
    cursor.execute(statement, parameters)
sqlalchemy.exc.ProgrammingError: (psycopg2.errors.AmbiguousFunction) function name "uuidv7" is not unique
HINT: Specify the argument list to select the function unambiguously.

[SQL:
/* Main function to generate a uuidv7 value with millisecond precision */
CREATE FUNCTION uuidv7() RETURNS uuid
AS
$$
-- Replace the first 48 bits of a uuidv4 with the current
-- number of milliseconds since 1970-01-01 UTC
-- and set the "ver" field to 7 by setting additional bits
SELECT encode(
set_bit(
set_bit(
overlay(uuid_send(gen_random_uuid()) placing
substring(int8send((extract(epoch from clock_timestamp()) * 1000)::bigint) from
3)
from 1 for 6),
52, 1),
53, 1), 'hex')::uuid;
$$ LANGUAGE SQL VOLATILE PARALLEL SAFE;

COMMENT ON FUNCTION uuidv7 IS
'Generate a uuid-v7 value with a 48-bit timestamp (millisecond precision) and 74 bits of randomness';
]
(Background on this error at: Error Messages — SQLAlchemy 2.0 Documentation)
```

This error has evolved from "missing table" into a classic "migration re-executed" problem, and you are very close to the finish line.

The key information now:

  • The duplicate-index problem from before has already been handled, so the migration can continue further;
  • It is now stuck at migration revision 1c9ba48be8e4_add_uuidv7_function_in_sql.py
    • which tries to CREATE FUNCTION uuidv7() RETURNS uuid ...
    • but a function named uuidv7 already exists in the database, so PostgreSQL reports:

      function name "uuidv7" is not unique
      HINT: Specify the argument list to select the function unambiguously.

This means: the database already contains one (or more) uuidv7(...) functions, but Alembic believes it has never created one and is about to create it again.


I. Overall Approach (Continuing the Previous Method)

Following the same idea as the earlier "drop the duplicate index → let the migration recreate it" fix, you can:

First remove the conflicting object (here, the uuidv7 function), then let Alembic run the migration script normally to recreate it once.

Because this function is just a utility (it generates UUIDv7 values), dropping it and recreating it from the official script is safe and does not touch any business data.

Prerequisite: It is still strongly recommended to back up your current database first.


II. Specific Operation Steps

The following operations are all performed while connected to the PostgreSQL database used by your Dify instance (e.g., dify).

1. Check how many uuidv7 functions currently exist

SELECT p.proname, p.oid, pg_get_function_identity_arguments(p.oid) AS args
FROM pg_proc p
JOIN pg_namespace n ON p.pronamespace = n.oid
WHERE n.nspname = 'public'
  AND proname = 'uuidv7';

Most likely you will see 1 or more records, similar to:

  • uuidv7() (no arguments)
  • Or some variants (if you have manually experimented before)

The migration script intends to create an argument-less uuidv7() that returns uuid.

2. Delete existing uuidv7() functions

If the previous step showed an argument-less uuidv7(), dropping just that one is enough; if there are multiple versions with the same name, you can drop them all and let the migration rebuild a single canonical one.

It is recommended to execute:

DROP FUNCTION IF EXISTS uuidv7();

If the previous query result still contains versions with arguments (e.g., uuidv7(text) or similar), you can also clear them all:

-- If you indeed saw versions with specific argument signatures, you can add lines as needed:
-- DROP FUNCTION IF EXISTS uuidv7(text);
-- DROP FUNCTION IF EXISTS uuidv7(integer);
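Rather than guessing at argument signatures, you can let the catalog generate the exact DROP statement for every uuidv7 overload in public. This is a sketch (not from the Dify docs) built on pg_get_function_identity_arguments; run the generated drop_stmt lines afterwards:

```sql
-- Generate one DROP statement per uuidv7 overload found in the public schema
SELECT format('DROP FUNCTION IF EXISTS %I.%I(%s);',
              n.nspname, p.proname,
              pg_get_function_identity_arguments(p.oid)) AS drop_stmt
FROM pg_proc p
JOIN pg_namespace n ON p.pronamespace = n.oid
WHERE n.nspname = 'public'
  AND p.proname = 'uuidv7';
```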

After deletion, you can query again to confirm:

SELECT proname, pg_get_function_identity_arguments(p.oid) AS args
FROM pg_proc p
JOIN pg_namespace n ON p.pronamespace = n.oid
WHERE n.nspname = 'public'
  AND proname = 'uuidv7';

Ideally, there should be no rows here.

3. Re-execute migration within the API container

Return to the API container:

docker exec -it <your-api-container-name> /bin/bash

flask db upgrade

If the uuidv7 conflict has been cleared, this step should run smoothly:

  • It will re-CREATE FUNCTION uuidv7() ... at step 1c9ba48be8e4
  • And then continue to execute subsequent migrations

III. Confirm if Migration is Truly Complete & Table is Created

  1. In the database, check alembic_version:

    SELECT * FROM alembic_version;
    

    This should now be a relatively recent revision id, no longer one of the intermediate revisions (such as 2adcbe1f5dfb or 1c9ba48be8e4) that appeared repeatedly in your logs.

  2. Reconfirm whether the critical table workflow_schedule_plans already exists:

    SELECT *
    FROM information_schema.tables
    WHERE table_name = 'workflow_schedule_plans';
    

    If there are results now, it means this table has been created by the migration script.

  3. Restart the worker and observe the logs:

    docker compose restart worker
    

    Then check if the worker logs still show:

    relation "workflow_schedule_plans" does not exist
    
    • If it no longer appears, the poll_workflow_schedules scheduled task should be able to run normally;
    • If a different table is reported as missing ("relation ... does not exist"), more than one table was absent initially. Once flask db upgrade has completed successfully, any further missing-table report is abnormal, and the specific table names would need to be investigated.
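As a lighter-weight alternative to the information_schema query above, to_regclass returns NULL when a relation is absent (a quick check, assuming the table lives in the public schema):

```sql
-- NULL means the table does not exist (or is not visible on the search path)
SELECT to_regclass('public.workflow_schedule_plans');
```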

IV. If Errors Still Occur, Please Reply with These Results

If, after following the steps above:

  • flask db upgrade still reports an error on a certain migration, or
  • the worker still reports errors due to other tables/functions,

You can then post in the thread:

  1. The complete output of the latest flask db upgrade;
  2. The current alembic_version query result:
    SELECT * FROM alembic_version;
    
  3. The results of these two checks:
    SELECT *
    FROM information_schema.tables
    WHERE table_name = 'workflow_schedule_plans';
    
    SELECT proname, pg_get_function_identity_arguments(p.oid) AS args
    FROM pg_proc p
    JOIN pg_namespace n ON p.pronamespace = n.oid
    WHERE n.nspname = 'public'
      AND proname = 'uuidv7';
    

I will then help you confirm if there are other migration steps that haven’t aligned, or if additional objects need to be cleaned up.


In summary, you have now narrowed the problem down from 'missing table causing worker errors' to 'conflicts from re-executing a few migration scripts'.
Clearing the uuidv7 function and letting Alembic rebuild it is the most direct and safest way to resolve this kind of conflict, since no business data is involved.

@Dify_Forum_Helper
The following two SQL statements did not find any data:
SELECT proname, p.oid::regprocedure, pg_get_function_identity_arguments(p.oid) AS args
FROM pg_proc p
JOIN pg_namespace n ON p.pronamespace = n.oid
WHERE n.nspname = 'public'
AND proname = 'uuidv7';

SELECT proname, pg_get_function_identity_arguments(p.oid) AS args
FROM pg_proc p
JOIN pg_namespace n ON p.pronamespace = n.oid
WHERE n.nspname = 'public'
AND proname = 'uuidv7';

But if the 'public' condition is removed, two rows of data can be found:
"uuidv7"
"uuidv7"  "shift interval"

An error occurred when executing DROP:

DROP FUNCTION IF EXISTS uuidv7();

ERROR: cannot drop function uuidv7() because it is required by the database system

This information is crucial and essentially exposes the root cause of the problem:

  • There is indeed no uuidv7 under public;
  • However, a full database search reveals two uuidv7 / uuidv7(shift interval) entries;
  • And DROP FUNCTION uuidv7() reports: cannot drop function uuidv7() because it is required by the database system.

This confirms it: you are running PostgreSQL 18, which ships uuidv7(...) as built-in system functions. Dify 1.11.4's migration still tries to create another function with the same name, uuidv7(), which collides with the built-in one in the system function namespace, and that system function cannot be dropped.

Under these circumstances, the simple approach of ‘delete function → let migration recreate’ is not viable.


Current Status in a Nutshell

  • You are currently facing an incompatibility issue between Dify 1.11.4’s migration script and PostgreSQL 18’s built-in uuidv7() function;
  • This is a ‘new pitfall’ related to the version combination, not an issue with your local configuration.

Strictly speaking, such conflicts require Dify’s official adaptation at the migration script level (e.g., changing the function name or skipping creation if uuidv7() is already detected).
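The "skip creation if detected" idea essentially reduces to a check against the server version, since uuidv7() became a built-in with PostgreSQL 18. The helper below is purely illustrative (it is not Dify code; the function name is an assumption):

```python
def needs_custom_uuidv7(pg_major_version: int) -> bool:
    """Return True when the server lacks a built-in uuidv7() and a
    migration would need to create its own SQL implementation.

    PostgreSQL 18 introduced uuidv7() as a built-in system function;
    creating a same-named function there collides with it.
    """
    return pg_major_version < 18


# The major version can be derived on the server side with, e.g.:
#   SELECT current_setting('server_version_num')::int / 10000;
```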


Practical Solutions (You can choose one)

The following solutions are ordered from ‘safe / officially recommended’ to ‘advanced script modification’. You can choose based on your familiarity with database operations.

Option A: Switch to PostgreSQL 16 or 15 to get it running (Easiest, most aligned with current version expectations)

Currently known:

  • Starting with PostgreSQL 18, uuidv7() is officially built-in, system-level, and cannot be dropped;
  • Dify 1.11.4’s migration creates its own uuidv7() function.

Before Dify officially adapts to PG18, the safest path is to revert to an earlier major version, allowing the migration script to run normally in an environment ‘without a system uuidv7’:

  1. Stop the current Dify and PG 18 containers (and perform a full backup);

  2. Switch to a PG 16 or 15 image, for example:

    services:
      postgres:
        image: postgres:16-alpine
        ...
    
  3. If you currently have no business data in your database (just deployed, still in debugging phase), it is recommended to:

    • Directly create a ‘clean’ database instance;
    • Configure DB_TYPE=postgres & connection string;
    • Rerun docker compose up -d, allowing Dify to initialize from scratch and migrate automatically;
    • Then check if the worker still reports workflow_schedule_plans missing.

If you already have data and need to migrate data from PG 18 to PG 16, it will be significantly more complex (requiring pg_dump / pg_restore). If you are unfamiliar with this, it is recommended to evaluate carefully.

Advantages:

  • No need to manually modify any Dify code or migration scripts;
  • No conflict with PostgreSQL system functions;
  • Safer for future upgrades to higher Dify versions.

Option B: Temporarily skip this uuidv7 migration (Suitable for those familiar with Alembic/SQL)

The principle is: Since PG18 comes with uuidv7(), theoretically you no longer need Dify’s SQL implementation. You can make Alembic believe this migration has already been executed, allowing it to proceed, with the main goal of fully creating business tables (including workflow_schedule_plans).

Key points (only outlines the idea, not recommended for direct implementation if unfamiliar with Alembic):

  1. Confirm in the database:
    • workflow_schedule_plans and other critical tables already exist (if not yet created, check if previous migrations have passed);
  2. In Alembic’s alembic_version table, manually change the version number from the current value to a version higher than 1c9ba48be8e4 (i.e., pretend this migration has already been executed);
  3. Run flask db upgrade again, letting it proceed from the new baseline.
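Step 2 above can be done either with Flask-Migrate's stamp command or with plain SQL. Both are sketched below under the assumption that 1c9ba48be8e4 is the revision to mark as already applied; verify the revision id against api/migrations/versions first, and back up before touching alembic_version:

```sql
-- Option 1 (inside the api container): flask db stamp 1c9ba48be8e4
-- Option 2 (plain SQL): Alembic tracks exactly one row in alembic_version
UPDATE alembic_version SET version_num = '1c9ba48be8e4';
```

Either way, a subsequent flask db upgrade resumes from the revision after 1c9ba48be8e4, skipping the conflicting CREATE FUNCTION.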

Risk points:

  • If a subsequent migration relies on the existence of ‘Dify’s own version of the uuidv7 function’, new issues might arise (however, from the current code, this function is more of a utility function, and actual business tables likely use the uuid type itself, not directly depending on this SQL function).

Since you are currently facing a naming conflict with a system function, this path is theoretically feasible, but requires you to carefully verify:

  • Whether the table structure matches official expectations;
  • Which migrations have indeed ‘been executed at the DB level but not accounted for’.

If you are not very familiar with Alembic and manually adjusting versions, this path is not highly recommended for you to try alone.


Option C: Modify migration scripts locally (Only suitable for those who maintain source code deployments themselves)

If you are deploying using source code / self-built images, rather than relying entirely on official Docker images, you can modify that migration script locally to:

  • Directly skip creation if an uuidv7() function is already detected in the database;
  • Or rename Dify’s function, for example, dify_uuidv7().
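A sketch of the "skip if detected" variant: wrap the migration's CREATE FUNCTION statement in a DO block that consults pg_proc first. This is a hypothetical helper, not the actual Dify migration code; the name guarded_uuidv7_ddl and the $ddl$ quoting tag are assumptions:

```python
def guarded_uuidv7_ddl(create_sql: str) -> str:
    """Wrap a CREATE FUNCTION statement so it is skipped when any
    function named uuidv7 already exists (e.g. PostgreSQL 18's built-in).

    Checking pg_proc by name also catches built-ins in pg_catalog,
    which a lookup restricted to the public schema would miss.
    """
    return (
        "DO $outer$\n"
        "BEGIN\n"
        "    IF NOT EXISTS (SELECT 1 FROM pg_proc WHERE proname = 'uuidv7') THEN\n"
        f"        EXECUTE $ddl${create_sql}$ddl$;\n"
        "    END IF;\n"
        "END\n"
        "$outer$;"
    )
```

The migration's upgrade() would then run op.execute(guarded_uuidv7_ddl(...)) with the original CREATE FUNCTION text instead of executing it directly.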

However, this involves:

  1. Modifying api/migrations/versions/2025_07_02_2332-1c9ba48be8e4_add_uuidv7_function_in_sql.py;
  2. Ensuring the modified script runs successfully in your environment;
  3. When upgrading Dify versions later, be aware that this is a ‘local custom modification’.

This falls under the category of ‘self-maintained branches’, which incurs higher long-term maintenance costs, and is therefore generally only recommended for users who are very familiar with Python / Alembic / Postgres.


My Recommended Priority

Based on your current troubleshooting process and the PG 18 behavior exposed, I recommend you consider the following order:

  1. Prioritize evaluating whether you can switch the core database to PostgreSQL 16 or 15, then re-initialize the migration:

    • If there is no important business data yet, this is the simplest and cleanest solution;
    • After switching versions and running flask db upgrade again, it will generally no longer conflict with the system uuidv7().
  2. If you must use PostgreSQL 18:

    • Then this issue is essentially a ‘conflict between Dify 1.11.4’s migration script and PG 18’s officially built-in uuidv7()’;
    • From a product perspective, it is more reasonable for Dify official to fix it in a future version (e.g., by modifying migration logic or function naming);
    • You can first raise an issue on the forum / GitHub, stating:
      • Dify version: 1.11.4
      • PostgreSQL version: 18.1-alpine
      • Migration stuck at 1c9ba48be8e4_add_uuidv7_function_in_sql.py
      • Error is AmbiguousFunction: function name "uuidv7" is not unique, and DROP FUNCTION uuidv7() reports 'cannot drop function uuidv7() because it is required by the database system'.

Until an official fix or clear guidance is provided, if you don’t want to modify the migration script yourself, the most practical approach is to temporarily revert to PG 16/15, ensuring the entire migration chain is an ‘officially verified’ combination.


If you wish to proceed with one of these options (e.g., you plan to switch to PG16, or want to try manually skipping this migration), you can tell me which path you prefer, and I can further detail the specific steps into a list of directly executable commands for that direction.

@Dify_Forum_Helper

Re-downloaded the new script,

Dify version: 1.12.1,

Postgresql version: 15-alpine

Deleted all previous containers, networks, and volumes, and removed locally mounted database files,

First, compose up postgresql and redis, and successfully created containers

Then compose up api. Below is the log when creating the API. Have all data tables been successfully migrated?
api-1 | Starting database migration.

api-1 | INFO \[alembic.runtime.migration\] Context impl PostgresqlImpl.

api-1 | INFO \[alembic.runtime.migration\] Will assume transactional DDL.

api-1 | INFO \[alembic.runtime.migration\] Running upgrade → 64b051264f32, init

api-1 | INFO \[alembic.runtime.migration\] Running upgrade 64b051264f32 → 9f4e3427ea84, add created by role

api-1 | INFO \[alembic.runtime.migration\] Running upgrade 9f4e3427ea84 → a45f4dfde53b, add language to recommend apps

api-1 | INFO \[alembic.runtime.migration\] Running upgrade a45f4dfde53b → 614f77cecc48, add last active at

api-1 | INFO \[alembic.runtime.migration\] Running upgrade 614f77cecc48 → e32f6ccb87c6, e08af0a69ccefbb59fa80c778efee300bb780980

api-1 | INFO \[alembic.runtime.migration\] Running upgrade e32f6ccb87c6 → d3d503a3471c, add is_deleted to conversations

api-1 | INFO \[alembic.runtime.migration\] Running upgrade d3d503a3471c → a5b56fb053ef, app config add speech_to_text

api-1 | INFO \[alembic.runtime.migration\] Running upgrade a5b56fb053ef → 2beac44e5f5f, add is_universal in apps

api-1 | INFO \[alembic.runtime.migration\] Running upgrade 2beac44e5f5f → 7ce5a52e4eee, add tool providers

api-1 | INFO \[alembic.runtime.migration\] Running upgrade 7ce5a52e4eee → 8d2d099ceb74, add_qa_model_support

api-1 | INFO \[alembic.runtime.migration\] Running upgrade 8d2d099ceb74 → 16fa53d9faec, add provider model support

api-1 | INFO \[alembic.runtime.migration\] Running upgrade 16fa53d9faec → e35ed59becda, modify quota limit field type

api-1 | INFO \[alembic.runtime.migration\] Running upgrade e35ed59becda → bf0aec5ba2cf, add provider order

api-1 | INFO \[alembic.runtime.migration\] Running upgrade bf0aec5ba2cf → 5022897aaceb, add model name in embedding

api-1 | INFO \[alembic.runtime.migration\] Running upgrade 5022897aaceb → 2c8af9671032, add_qa_document_language

api-1 | INFO \[alembic.runtime.migration\] Running upgrade 2c8af9671032 → e8883b0148c9, add_dataset_model_name

api-1 | INFO \[alembic.runtime.migration\] Running upgrade e8883b0148c9 → 853f9b9cd3b6, add message price unit

api-1 | INFO \[alembic.runtime.migration\] Running upgrade 853f9b9cd3b6 → 4bcffcd64aa4, update_dataset_model_field_null_available

api-1 | INFO \[alembic.runtime.migration\] Running upgrade 4bcffcd64aa4 → 6dcb43972bdc, add_dataset_retriever_resource

api-1 | INFO \[alembic.runtime.migration\] Running upgrade 6dcb43972bdc → 77e83833755c, add_app_config_retriever_resource

api-1 | INFO \[alembic.runtime.migration\] Running upgrade 77e83833755c → 6e2cfb077b04, add_dataset_collection_binding

api-1 | INFO \[alembic.runtime.migration\] Running upgrade 6e2cfb077b04 → ab23c11305d4, add dataset query variable at app model configs.

api-1 | INFO \[alembic.runtime.migration\] Running upgrade ab23c11305d4 → 2e9819ca5b28, add_tenant_id_in_api_token

api-1 | INFO \[alembic.runtime.migration\] Running upgrade 2e9819ca5b28 → b3a09c049e8e, add advanced prompt templates

api-1 | INFO \[alembic.runtime.migration\] Running upgrade b3a09c049e8e → 968fff4c0ab9, add_api_based_extension

api-1 | INFO \[alembic.runtime.migration\] Running upgrade 968fff4c0ab9 → a9836e3baeee, add external_data_tools in app model config

api-1 | INFO \[alembic.runtime.migration\] Running upgrade a9836e3baeee → 8fe468ba0ca5, add gpt4v supports

api-1 | INFO \[alembic.runtime.migration\] Running upgrade 8fe468ba0ca5 → fca025d3b60f, add-dataset-retrieval-model

api-1 | INFO \[alembic.runtime.migration\] Running upgrade fca025d3b60f → e1901f623fd0, add-annotation-reply

api-1 | INFO \[alembic.runtime.migration\] Running upgrade e1901f623fd0 → 46976cc39132, add-annotation-histoiry-score

api-1 | INFO \[alembic.runtime.migration\] Running upgrade 46976cc39132 → f2a6fc85e260, add_anntation_history_message_id

api-1 | INFO \[alembic.runtime.migration\] Running upgrade f2a6fc85e260 → 714aafe25d39, add_anntation_history_match_response

api-1 | INFO \[alembic.runtime.migration\] Running upgrade 714aafe25d39 → 246ba09cbbdb, add_app_anntation_setting

api-1 | INFO \[alembic.runtime.migration\] Running upgrade 246ba09cbbdb → 88072f0caa04, add custom config in tenant

api-1 | INFO \[alembic.runtime.migration\] Running upgrade 88072f0caa04 → 187385f442fc, modify provider model name length

api-1 | INFO \[alembic.runtime.migration\] Running upgrade 187385f442fc → 89c7899ca936, empty message

api-1 | INFO \[alembic.runtime.migration\] Running upgrade 89c7899ca936 → 3ef9b2b6bee6, add_assistant_app

api-1 | INFO \[alembic.runtime.migration\] Running upgrade 3ef9b2b6bee6 → ad472b61a054, add api provider icon

api-1 | INFO \[alembic.runtime.migration\] Running upgrade ad472b61a054 → 8ec536f3c800, rename api provider credentials

api-1 | INFO \[alembic.runtime.migration\] Running upgrade 8ec536f3c800 → 00bacef91f18, rename api provider description

api-1 | INFO \[alembic.runtime.migration\] Running upgrade 00bacef91f18 → f25003750af4, add created/updated at

api-1 | INFO \[alembic.runtime.migration\] Running upgrade f25003750af4 → c71211c8f604, add tool_invoke_model_log

api-1 | INFO \[alembic.runtime.migration\] Running upgrade c71211c8f604 → 114eed84c228, remove tool id from model invoke

api-1 | INFO \[alembic.runtime.migration\] Running upgrade 114eed84c228 → 4829e54d2fee, change message chain id to nullable

api-1 | INFO \[alembic.runtime.migration\] Running upgrade 4829e54d2fee → 053da0c1d756, add api tool privacy

api-1 | INFO \[alembic.runtime.migration\] Running upgrade 053da0c1d756 → 4823da1d26cf, add tool file

api-1 | INFO \[alembic.runtime.migration\] Running upgrade 4823da1d26cf → 9fafbd60eca1, add message file belongs to

api-1 | INFO \[alembic.runtime.migration\] Running upgrade 9fafbd60eca1 → 8ae9bc661daa, add tool conversation variables idx

api-1 | INFO \[alembic.runtime.migration\] Running upgrade 8ae9bc661daa → 23db93619b9d, add message files into agent thought

api-1 | INFO \[alembic.runtime.migration\] Running upgrade 23db93619b9d → de95f5c77138, migration serpapi_api_key

api-1 | INFO \[alembic.runtime.migration\] Running upgrade de95f5c77138 → b24be59fbb04, empty message

api-1 | INFO \[alembic.runtime.migration\] Running upgrade b24be59fbb04 → dfb3b7f477da, add-tool-index

api-1 | INFO \[alembic.runtime.migration\] Running upgrade dfb3b7f477da → 380c6aa5a70d, add tool labels to agent thought

api-1 | INFO \[alembic.runtime.migration\] Running upgrade 380c6aa5a70d → 16830a790f0f, empty message

api-1 | INFO \[alembic.runtime.migration\] Running upgrade 16830a790f0f → a8f9b3c45e4a, add_tenant_id_db_index

api-1 | INFO \[alembic.runtime.migration\] Running upgrade a8f9b3c45e4a → 17b5ab037c40, add-keyworg-table-storage-type

api-1 | INFO \[alembic.runtime.migration\] Running upgrade 17b5ab037c40 → a8d7385a7b66, add-embeddings-provider-name

api-1 | INFO \[alembic.runtime.migration\] Running upgrade a8d7385a7b66 → b289e2408ee2, add workflow

api-1 | INFO \[alembic.runtime.migration\] Running upgrade b289e2408ee2 → cc04d0998d4d, set model config column nullable

api-1 | INFO \[alembic.runtime.migration\] Running upgrade cc04d0998d4d → f9107f83abab, add desc for apps

api-1 | INFO \[alembic.runtime.migration\] Running upgrade f9107f83abab → 42e85ed5564d, conversation columns set nullable

api-1 | INFO \[alembic.runtime.migration\] Running upgrade 42e85ed5564d → b5429b71023c, messages columns set nullable

api-1 | INFO \[alembic.runtime.migration\] Running upgrade b5429b71023c → 563cf8bf777b, enable tool file without conversation id

api-1 | INFO \[alembic.runtime.migration\] Running upgrade 563cf8bf777b → e2eacc9a1b63, add status for message

api-1 | INFO \[alembic.runtime.migration\] Running upgrade e2eacc9a1b63 → c3311b089690, add tool meta
api-1 | INFO \[alembic.runtime.migration\] Running upgrade c3311b089690 → 3c7cac9521c6, add-tags-and-binding-table
api-1 | INFO \[alembic.runtime.migration\] Running upgrade 3c7cac9521c6 → 47cc7df8c4f3, modify default model name length
api-1 | INFO \[alembic.runtime.migration\] Running upgrade 47cc7df8c4f3 → 5fda94355fce, Custom Disclaimer
api-1 | INFO \[alembic.runtime.migration\] Running upgrade 5fda94355fce → 7bdef072e63a, add workflow tool
api-1 | INFO \[alembic.runtime.migration\] Running upgrade 7bdef072e63a → 3b18fea55204, add tool label bings
api-1 | INFO \[alembic.runtime.migration\] Running upgrade 3b18fea55204 → 9e98fbaffb88, add workflow tool version
api-1 | INFO \[alembic.runtime.migration\] Running upgrade 9e98fbaffb88 → 03f98355ba0e, add workflow tool label and tool bindings idx
api-1 | INFO \[alembic.runtime.migration\] Running upgrade 03f98355ba0e → 64a70a7aab8b, add workflow run index
api-1 | INFO \[alembic.runtime.migration\] Running upgrade 64a70a7aab8b → 4e99a8df00ff, add load balancing
api-1 | INFO \[alembic.runtime.migration\] Running upgrade 4e99a8df00ff → 7b45942e39bb, add-api-key-auth-binding
api-1 | INFO \[alembic.runtime.migration\] Running upgrade 7b45942e39bb → 4ff534e1eb11, add workflow to site
api-1 | INFO \[alembic.runtime.migration\] Running upgrade 4ff534e1eb11 → b69ca54b9208, add chatbot color theme
api-1 | INFO \[alembic.runtime.migration\] Running upgrade 4ff534e1eb11 → 04c602f5dc9b, update AppModelConfig and add table TracingAppConfig
api-1 | INFO \[alembic.runtime.migration\] Running upgrade 04c602f5dc9b → c031d46af369, remove app model config trace config and rename trace app config
api-1 | INFO \[alembic.runtime.migration\] Running upgrade c031d46af369 → 2a3aebbbf4bb, add app tracing
api-1 | INFO \[alembic.runtime.migration\] Running upgrade 2a3aebbbf4bb, b69ca54b9208 → 63f9175e515b, merge branches
api-1 | INFO \[alembic.runtime.migration\] Running upgrade 63f9175e515b → b2602e131636, add workflow_run_id index for message
api-1 | INFO \[alembic.runtime.migration\] Running upgrade b2602e131636 → 7e6a8693e07a, add table dataset_permissions
api-1 | INFO \[alembic.runtime.migration\] Running upgrade 7e6a8693e07a → 161cadc1af8d, add dataset permission tenant id
api-1 | INFO \[alembic.runtime.migration\] Running upgrade 161cadc1af8d → 408176b91ad3, ‘add_max_active_requests’
api-1 | INFO \[alembic.runtime.migration\] Running upgrade 408176b91ad3 → fecff1c3da27, remove extra tracing app config table and add idx_dataset_permissions_tenant_id
api-1 | INFO \[alembic.runtime.migration\] Running upgrade fecff1c3da27 → 6e957a32015b, add-embedding-cache-created_at_index
api-1 | INFO \[alembic.runtime.migration\] Running upgrade 6e957a32015b → 8e5588e6412e, add environment variable to workflow model
api-1 | INFO \[alembic.runtime.migration\] Running upgrade 8e5588e6412e → 53bf8af60645, update model
api-1 | INFO \[alembic.runtime.migration\] Running upgrade 53bf8af60645 → eeb2e349e6ac, increase max model_name length
api-1 | INFO \[alembic.runtime.migration\] Running upgrade eeb2e349e6ac → 1787fbae959a, update tools original_url length
api-1 | INFO \[alembic.runtime.migration\] Running upgrade 1787fbae959a → 63a83fcf12ba, support conversation variables
api-1 | INFO \[alembic.runtime.migration\] Running upgrade 63a83fcf12ba → 8782057ff0dc, add conversations.dialogue_count
api-1 | INFO \[alembic.runtime.migration\] Running upgrade 8782057ff0dc → a6be81136580, app and site icon type
api-1 | INFO \[alembic.runtime.migration\] Running upgrade a6be81136580 → 2dbe42621d96, rename workflow__conversation_variables to workflow_conversation_variables
api-1 | INFO \[alembic.runtime.migration\] Running upgrade 2dbe42621d96 → d0187d6a88dd, add created_by and updated_by to app, modelconfig, and site
api-1 | INFO \[alembic.runtime.migration\] Running upgrade d0187d6a88dd → 030f4915f36a, add use_icon_as_answer_icon fields for app and site
api-1 | INFO \[alembic.runtime.migration\] Running upgrade 030f4915f36a → 675b5321501b, add node_execution_id into node_executions
api-1 | INFO \[alembic.runtime.migration\] Running upgrade 675b5321501b → d57ba9ebb251, add parent_message_id to messages
api-1 | INFO \[alembic.runtime.migration\] Running upgrade d57ba9ebb251 → 6af6a521a53e, update-retrieval-resource
api-1 | INFO \[alembic.runtime.migration\] Running upgrade 6af6a521a53e → 33f5fac87f29, external_knowledge_api
api-1 | INFO \[alembic.runtime.migration\] Running upgrade 33f5fac87f29 → d8e744d88ed6, fix wrong service-api history
api-1 | INFO \[alembic.runtime.migration\] Running upgrade d8e744d88ed6 → bbadea11becb, add name and size to tool_files
api-1 | INFO \[alembic.runtime.migration\] Running upgrade bbadea11becb → 0251a1c768cc, add-tidb-auth-binding
api-1 | INFO \[alembic.runtime.migration\] Running upgrade 0251a1c768cc → 43fa78bc3b7d, add_white_list
api-1 | INFO \[alembic.runtime.migration\] Running upgrade 43fa78bc3b7d → d3f6769a94a3, Add upload_files.source_url
api-1 | INFO \[alembic.runtime.migration\] Running upgrade d3f6769a94a3 → 93ad8c19c40b, rename conversation variables index name
api-1 | INFO \[alembic.runtime.migration\] Running upgrade 93ad8c19c40b → f4d7ce70a7ca, update upload_files.source_url
api-1 | INFO \[alembic.runtime.migration\] Running upgrade f4d7ce70a7ca → d07474999927, update type of custom_disclaimer to TEXT
api-1 | INFO \[alembic.runtime.migration\] Running upgrade d07474999927 → 09a8d1878d9b, update workflows graph, features and updated_at
api-1 | INFO \[alembic.runtime.migration\] Running upgrade 09a8d1878d9b → 01d6889832f7, add_created_at_index_for_messages
api-1 | INFO \[alembic.runtime.migration\] Running upgrade 01d6889832f7 → cf8f4fc45278, add exceptions_count field to WorkflowRun model
api-1 | INFO \[alembic.runtime.migration\] Running upgrade cf8f4fc45278 → 11b07f66c737, remove unused tool_providers
api-1 | INFO \[alembic.runtime.migration\] Running upgrade 11b07f66c737 → e1944c35e15e, add retry_index field to node-execution model
api-1 | INFO \[alembic.runtime.migration\] Running upgrade e1944c35e15e → d7999dfa4aae, remove workflow_node_executions.retry_index if exists
api-1 | INFO \[alembic.runtime.migration\] Running upgrade d7999dfa4aae → e19037032219, parent-child-index
api-1 | INFO \[alembic.runtime.migration\] Running upgrade e19037032219 → 923752d42eb6, add_auto_disabled_dataset_logs
api-1 | INFO \[alembic.runtime.migration\] Running upgrade 923752d42eb6 → a91b476a53de, change workflow_runs.total_tokens to bigint
api-1 | INFO \[alembic.runtime.migration\] Running upgrade a91b476a53de → ddcc8bbef391, increase max length of builtin tool provider
api-1 | INFO \[alembic.runtime.migration\] Running upgrade ddcc8bbef391 → 08ec4f75af5e, add_tenant_plugin_permisisons
api-1 | INFO \[alembic.runtime.migration\] Running upgrade 08ec4f75af5e → 4413929e1ec2, extend_provider_name_column
api-1 | INFO \[alembic.runtime.migration\] Running upgrade 4413929e1ec2 → 5511c782ee4c, extend provider column
api-1 | INFO \[alembic.runtime.migration\] Running upgrade 5511c782ee4c → ee79d9b1c156, add marked_name and marked_comment in workflows
api-1 | INFO \[alembic.runtime.migration\] Running upgrade ee79d9b1c156 → f051706725cc, add_rate_limit_logs
api-1 | INFO \[alembic.runtime.migration\] Running upgrade f051706725cc → d20049ed0af6, add_metadata_function
api-1 | INFO \[alembic.runtime.migration\] Running upgrade d20049ed0af6 → 6a9f914f656c, change documentsegment and childchunk indexes
api-1 | INFO \[alembic.runtime.migration\] Running upgrade 6a9f914f656c → d28f2004b072, add index for workflow_conversation_variables.conversation_id
api-1 | INFO \[alembic.runtime.migration\] Running upgrade d28f2004b072 → 2adcbe1f5dfb, add WorkflowDraftVariable model
api-1 | INFO \[alembic.runtime.migration\] Running upgrade 2adcbe1f5dfb → 4474872b0ee6, `workflow_draft_varaibles` add `node_execution_id` column, add an index for `workflow_node_executions`.

api-1 | INFO [alembic.runtime.migration] Running upgrade 4474872b0ee6 → 0ab65e1cc7fa, remove sequence_number from workflow_runs

api-1 | INFO [alembic.runtime.migration] Running upgrade 0ab65e1cc7fa → 58eb7bdb93fe, add mcp server tool and app server

api-1 | INFO [alembic.runtime.migration] Running upgrade 58eb7bdb93fe → 1c9ba48be8e4, add uuidv7 function in SQL

api-1 | INFO [alembic.runtime.migration] Running upgrade 1c9ba48be8e4 → 71f5020c6470, tool oauth

api-1 | INFO [alembic.runtime.migration] Running upgrade 71f5020c6470 → 1a83934ad6d1, update models

api-1 | INFO [alembic.runtime.migration] Running upgrade 1a83934ad6d1 → 375fe79ead14, oauth_refresh_token

api-1 | INFO [alembic.runtime.migration] Running upgrade 375fe79ead14 → 8bcc02c9bd07, add_tenant_plugin_autoupgrade_table

api-1 | INFO [alembic.runtime.migration] Running upgrade 8bcc02c9bd07 → 532b3f888abf, manual dataset field update

api-1 | INFO [alembic.runtime.migration] Running upgrade 532b3f888abf → fa8b0fa6f407, add timeout for tool_mcp_providers

api-1 | INFO [alembic.runtime.migration] Running upgrade fa8b0fa6f407 → e8446f481c1e, Add provider multi credential support

api-1 | INFO [alembic.runtime.migration] Running upgrade e8446f481c1e → 0e154742a5fa, Add provider model multi credential support

api-1 | INFO [alembic.runtime.migration] Running upgrade 0e154742a5fa → 8d289573e1da, empty message

api-1 | INFO [alembic.runtime.migration] Running upgrade 8d289573e1da → b95962a3885c, chore: add workflow app log run id index

api-1 | INFO [alembic.runtime.migration] Running upgrade b95962a3885c → c20211f18133, add_headers_to_mcp_provider

api-1 | INFO [alembic.runtime.migration] Running upgrade c20211f18133 → cf7c38a32b2d, Add credential status for provider table

api-1 | INFO [alembic.runtime.migration] Running upgrade cf7c38a32b2d → 68519ad5cd18, knowledge_pipeline_migrate

api-1 | INFO [alembic.runtime.migration] Running upgrade 68519ad5cd18 → d98acf217d43, add app_mode for messsage

api-1 | INFO [alembic.runtime.migration] Running upgrade d98acf217d43 → ae662b25d9bc, remove-builtin-template-user

api-1 | INFO [alembic.runtime.migration] Running upgrade ae662b25d9bc → 03f8dcbc611e, add WorkflowPause model

api-1 | INFO [alembic.runtime.migration] Running upgrade 03f8dcbc611e → 669ffd70119c, introduce_trigger

api-1 | INFO [alembic.runtime.migration] Running upgrade 669ffd70119c → 09cfdda155d1, mysql adaptation

api-1 | INFO [alembic.runtime.migration] Running upgrade 09cfdda155d1 → 7bb281b7a422, Add workflow_pauses_reasons table

api-1 | INFO [alembic.runtime.migration] Running upgrade 7bb281b7a422 → d57accd375ae, support-multi-modal

api-1 | INFO [alembic.runtime.migration] Running upgrade d57accd375ae → 03ea244985ce, add type column not null default tool

api-1 | INFO [alembic.runtime.migration] Running upgrade 03ea244985ce → 7df29de0f6be, add credit pool

api-1 | INFO [alembic.runtime.migration] Running upgrade 7df29de0f6be → 905527cc8fd3, add workflow_run_created_at_id_idx

api-1 | INFO [alembic.runtime.migration] Running upgrade 905527cc8fd3 → 3334862ee907, feat: add created_at id index to messages

api-1 | INFO [alembic.runtime.migration] Running upgrade 3334862ee907 → 288345cd01d1, change workflow node execution workflow_run index

api-1 | INFO [alembic.runtime.migration] Running upgrade 288345cd01d1 → 9e6fa5cbcd80, make message annotation question not nullable

api-1 | INFO [alembic.runtime.migration] Running upgrade 9e6fa5cbcd80 → f9f6d18a37f9, add table explore banner and trial

api-1 | INFO [alembic.runtime.migration] Running upgrade f9f6d18a37f9 → 9d77545f524e, create workflow_archive_logs

api-1 | INFO [alembic.runtime.migration] Running upgrade 9d77545f524e → 788d3099ae3a, add summary index feature

api-1 | Database migration successful!

api-1 | [2026-02-07 08:43:33 +0000] [1] [INFO] Starting gunicorn 23.0.0

api-1 | [2026-02-07 08:43:33 +0000] [1] [INFO] Listening at: http://0.0.0.0:5001 (1)

api-1 | [2026-02-07 08:43:33 +0000] [1] [INFO] Using worker: gevent

api-1 | [2026-02-07 08:43:33 +0000] [108] [INFO] Booting worker with pid: 108

api-1 | gRPC patched with gevent.

api-1 | psycopg2 patched with gevent.

api-1 | 2026-02-07 08:43:37.378 INFO [MainThread] [utils.py:151] - Note: NumExpr detected 40 cores but "NUMEXPR_MAX_THREADS" not set, so enforcing safe limit of 16.

api-1 | 2026-02-07 08:43:37.379 INFO [MainThread] [utils.py:164] - NumExpr defaulting to 16 threads.

Yes. Based on your logs, this was a complete from-scratch migration that finished successfully, so the earlier missing-table issue should now be resolved.

There are two key signals:

  1. The migration chain ran from an empty database all the way to the latest revision
    The log starts with:

    Running upgrade  -> 64b051264f32, init
    

    This indicates a brand-new database being migrated from the initial revision, followed by the dozens of Running upgrade ... -> ... entries you posted, leading up to:

    INFO  [alembic.runtime.migration] Running upgrade 9d77545f524e -> 788d3099ae3a, add summary index feature
    Database migration successful!
    

    This includes the steps we were previously concerned about, such as:

    • b289e2408ee2, add workflow (this batch will create workflow-related tables)
    • 669ffd70119c, introduce_trigger
    • 7bb281b7a422, Add workflow_pauses_reasons table
    • and so on, up to the latest entry 788d3099ae3a, add summary index feature

    There were no errors or interruptions, and it concluded with a clear message:

    Database migration successful!
    

    Only then did gunicorn start, which is the expected sequence for a fully successful migration.

  2. The version combination avoids the two pitfalls you hit earlier

    • Dify upgraded to 1.12.1;
    • PostgreSQL changed to 15-alpine, eliminating the conflict between PG18's built-in uuidv7() and the migration script.

    With this combination, the official migration scripts run against a Postgres version they were tested with, so tables such as workflow_schedule_plans are created normally, and the worker's poll_workflow_schedules query no longer fails with 'relation … does not exist'.
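If you want to confirm which revision the database actually landed on, you can query Alembic's bookkeeping table directly from the host. This is a minimal sketch: the compose service name `db`, user `postgres`, and database `dify` are the stock docker-compose defaults, so adjust them if your `.env` differs.

```shell
# Query Alembic's version table inside the db container.
# Assumed defaults: compose service "db", user "postgres", database "dify"
# -- change these to match your own .env / docker-compose.yaml.
docker compose exec db psql -U postgres -d dify \
  -c "SELECT version_num FROM alembic_version;"
```

A fully migrated database should report the last revision shown in your log, 788d3099ae3a.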


Here are some simple verifications you can do now (choose one):

  1. Confirm the existence of core tables in the DB, for example:

    SELECT table_name
    FROM information_schema.tables
    WHERE table_name IN ('workflow_schedule_plans', 'workflows', 'app_triggers');
    

    If the migration completed, all three table names should come back.

  2. After starting worker / worker_beat, check their logs to see if the following appears again:

    relation "workflow_schedule_plans" does not exist
    

    If that error no longer appears, the original issue (scheduled tasks not running) has been resolved by this combination of reinstalling, correcting the versions, and running the migration to completion.
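The log check above can be scripted from the host. A minimal sketch, assuming the stock compose service names `worker` and `worker_beat` (adjust if your docker-compose.yaml names them differently):

```shell
# Grep recent worker/beat logs for the old missing-relation error.
# Assumed service names: "worker" and "worker_beat", as in the stock
# docker-compose.yaml -- adjust if yours differ.
if docker compose logs --tail=500 worker worker_beat 2>/dev/null \
     | grep -n 'relation .* does not exist'; then
  echo "Missing-relation errors still present; see matching lines above."
else
  echo "OK: no missing-relation errors in the last 500 log lines."
fi
```

An empty result (the "OK" branch) is what you want here; any match prints the offending log line with its line number.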

If the worker logs show new errors going forward (something other than missing tables), post the specific logs again and I'll help you check whether it's another configuration issue.