[Help] Dify 1.11.4: after deployment, worker reports missing table workflow_schedule_plans (the poll_workflow_schedules task fails) and PostgreSQL configuration errors prevent normal startup

:bullseye: What I Need Help With

1. Worker service: missing database table error

worker-1  |                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
worker-1  |   File "/app/api/.venv/lib/python3.12/site-packages/sqlalchemy/orm/context.py", line 306, in orm_execute_statement
worker-1  |     result = conn.execute(
worker-1  |              ^^^^^^^^^^^^^
worker-1  |   File "/app/api/.venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py", line 1419, in execute
worker-1  |     return meth(
worker-1  |            ^^^^^
worker-1  |   File "/app/api/.venv/lib/python3.12/site-packages/sqlalchemy/sql/elements.py", line 526, in _execute_on_connection
worker-1  |     return connection._execute_clauseelement(
worker-1  |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
worker-1  |   File "/app/api/.venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py", line 1641, in _execute_clauseelement
worker-1  |     ret = self._execute_context(
worker-1  |           ^^^^^^^^^^^^^^^^^^^^^^
worker-1  |   File "/app/api/.venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py", line 1846, in _execute_context
worker-1  |     return self._exec_single_context(
worker-1  |            ^^^^^^^^^^^^^^^^^^^^^^^^^^
worker-1  |   File "/app/api/.venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py", line 1986, in _exec_single_context
worker-1  |     self._handle_dbapi_exception(
worker-1  |   File "/app/api/.venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py", line 2355, in _handle_dbapi_exception
worker-1  |     raise sqlalchemy_exception.with_traceback(exc_info[2]) from e
worker-1  |   File "/app/api/.venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py", line 1967, in _exec_single_context
worker-1  |     self.dialect.do_execute(
worker-1  |   File "/app/api/.venv/lib/python3.12/site-packages/sqlalchemy/engine/default.py", line 951, in do_execute
worker-1  |     cursor.execute(statement, parameters)
worker-1  | sqlalchemy.exc.ProgrammingError: (psycopg2.errors.UndefinedTable) relation "workflow_schedule_plans" does not exist
worker-1  | LINE 2: FROM workflow_schedule_plans JOIN app_triggers ON app_trigge...
worker-1  |              ^
worker-1  | 
worker-1  | [SQL: SELECT workflow_schedule_plans.id, workflow_schedule_plans.app_id, workflow_schedule_plans.node_id, workflow_schedule_plans.tenant_id, workflow_schedule_plans.cron_expression, workflow_schedule_plans.timezone, workflow_schedule_plans.next_run_at, workflow_schedule_plans.created_at, workflow_schedule_plans.updated_at 
worker-1  | FROM workflow_schedule_plans JOIN app_triggers ON app_triggers.app_id = workflow_schedule_plans.app_id AND app_triggers.node_id = workflow_schedule_plans.node_id AND app_triggers.trigger_type = %(trigger_type_1)s 
worker-1  | WHERE workflow_schedule_plans.next_run_at <= %(next_run_at_1)s AND workflow_schedule_plans.next_run_at IS NOT NULL AND app_triggers.status = %(status_1)s ORDER BY workflow_schedule_plans.next_run_at ASC 
worker-1  |  LIMIT %(param_1)s FOR UPDATE SKIP LOCKED]
worker-1  | [parameters: {'trigger_type_1': 'trigger-schedule', 'next_run_at_1': datetime.datetime(2026, 2, 4, 14, 16, 56, 757456), 'status_1': 'enabled', 'param_1': 100}]
worker-1  | (Background on this error at: https://sqlalche.me/e/20/f405)

This error recurs repeatedly, and the scheduled task poll_workflow_schedules cannot run.

:clipboard: Project Details

    • Dify version: 1.11.4
    • Deployment: local Docker Compose
    • Operating system: Linux (CentOS-like)
    • PostgreSQL image: 18.1-alpine3.23

@Dify_Forum_Helper Please explain in detail where the problem might be, and also check GitHub for similar issues.

Based on your logs and the similar cases visible in the community, the essence of this problem is:

The core database is missing the table workflow_schedule_plans, so when the worker runs the scheduled task poll_workflow_schedules, the SQL fails immediately with "relation … does not exist".

In other words: the worker itself is not broken; the database migrations simply never created this table (and possibly others).

Below is a breakdown of the likely causes plus suggested troubleshooting steps.


I. Most Likely Causes

  1. The database migrations never ran to completion

    • All of Dify's business tables (including workflow_schedule_plans) are created automatically by Alembic migration scripts on first startup.
    • If:
      • the API container could not reach PostgreSQL during the first startup; or
      • the database was manually wiped or swapped at some point; or
      • a permissions problem / version incompatibility made the migration fail
        then this table is never created, yet the worker still queries it, producing exactly the error you see.
  2. Database type misconfiguration (especially DB_TYPE)

    • Since 1.10.x, Dify uses an environment variable to distinguish the database type, for example:
      • PostgreSQL: DB_TYPE=postgres
      • MySQL: DB_TYPE=mysql
    • If in your docker-compose / .env you:
      • did not set DB_TYPE at all, or
      • set it to a wrong value (historically some users mistakenly wrote seekdb)
        the migration scripts are processed with the wrong dialect/engine, and as a result:
    • migrations get skipped or fail;
    • some tables (including workflow_schedule_plans) are never created.
  3. The database was swapped or truncated at some point, but migrations were never re-run

    • For example:
      • you first used one PostgreSQL instance, then changed the connection string to point at a new database;
      • or you emptied the schema inside PostgreSQL;
    • but the API container never re-ran the full "initialize + migrate" sequence, leaving you in a state where "some tables exist and some do not".

II. Checks You Can Run First

All of the following are read-only checks and will not damage data:

  1. Confirm that the table really is missing from PostgreSQL

    In your PostgreSQL container or client, run:

    \dt workflow_schedule_plans
    

    Or:

    SELECT *
    FROM information_schema.tables
    WHERE table_name = 'workflow_schedule_plans';
    

    If this comes back empty, it essentially confirms that the migrations never completed.

  2. Check Dify's database configuration

    Locate the environment configuration in your deployment directory (usually the .env next to docker-compose.yml, or values written directly under environment: in the compose file) and focus on these entries:

    DB_TYPE=postgres
    DB_USERNAME=...
    DB_PASSWORD=...
    DB_HOST=...
    DB_PORT=5432
    DB_DATABASE=...
    

    Confirm that:

    • DB_TYPE is present and set to postgres;
    • these values match what your PostgreSQL instance actually uses;
    • there are no typos or stray whitespace.
  3. Check the API container's startup log for traces of a failed migration

    On the host, run:

    docker compose logs api | grep -i -E "migrat|alembic|flask db"
    

    If you see something like:

    • "failed to run migrations"
    • or a Python exception with a traceback
      that further confirms a migration problem.

III. A Safer Repair Procedure (recommended order)

Note: flask db upgrade in step 2 below is "idempotent" in the sense that tables that already exist are never dropped; it only creates what is missing or applies schema upgrades, so it is generally safe to re-run.
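That "safe to re-run" property can be illustrated with a minimal, self-contained sketch. This is an illustration of idempotent schema steps in general, using Python's built-in SQLite rather than Dify's actual Alembic migrations:

```python
import sqlite3

# Illustration only (SQLite, not Dify's migrations): "IF NOT EXISTS" DDL
# can be re-run safely -- the second run is a no-op instead of an error,
# and existing tables are never dropped.
conn = sqlite3.connect(":memory:")
for _ in range(2):  # running the same step twice is harmless
    conn.execute(
        "CREATE TABLE IF NOT EXISTS workflow_schedule_plans (id TEXT PRIMARY KEY)"
    )
tables = [row[0] for row in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'"
)]
print(tables)  # ['workflow_schedule_plans']
```

Alembic achieves the same safety differently: it records applied revisions in the alembic_version table and skips them on re-run, rather than putting IF NOT EXISTS on every statement.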

  1. First make sure the configuration is correct, then restart

    In .env / docker-compose.yml, fix the environment variables shared by the API & worker so that:

    DB_TYPE=postgres
    

    Then, in the deployment directory, run:

    docker compose down
    docker compose up -d
    

    Wait until all services are up and stable before continuing.

  2. Run the database migrations manually

    Enter the API container (use your actual container name; it is usually dify-api-1 or similar):

    docker exec -it <your-api-container-name> /bin/bash
    

    Inside the container, run:

    flask db upgrade
    

    Normally you will see a series of "Running upgrade" lines. If anything errors out midway, please paste the full error log so it can be diagnosed further.

    After the migration finishes, query PostgreSQL again, either from inside the container (if psql is installed) or from a client on the host:

    SELECT *
    FROM information_schema.tables
    WHERE table_name = 'workflow_schedule_plans';
    

    If the table now exists, the migration has filled in the gap.

  3. Restart the worker and watch for the same error

    docker compose restart worker
    

    Then check whether the worker log still shows:

    relation "workflow_schedule_plans" does not exist
    

    If it no longer appears, the problem is solved. If it does, check whether the "does not exist" now names a different table; that would mean more tables were missing, and a single successful flask db upgrade should create them all at once.


IV. If the Problem Persists, Add This Information

If the problem remains after the steps above, please append the following to this thread so I can help pinpoint it more precisely:

  1. The database-related configuration in your current docker-compose.yml / .env (password redacted):

    • DB_TYPE
    • DB_USERNAME
    • DB_HOST
    • DB_PORT
    • DB_DATABASE
  2. The full output of flask db upgrade (especially any error traceback).

  3. The list of all workflow_* tables in PostgreSQL, for example:

    SELECT table_name
    FROM information_schema.tables
    WHERE table_name LIKE 'workflow%';
    

V. Comparison with GitHub / Community Reports

On GitHub and in this forum there have already been several "some table does not exist → worker scheduled task fails" cases, and the common conclusion is:

  • Root cause: the initial migrations did not run to completion, or DB_TYPE was set incorrectly, so a whole batch of tables (including workflow_schedule_plans) was never created;
  • Fix: correct DB_TYPE and run flask db upgrade manually, after which the worker's poll_workflow_schedules runs normally.

Your Dify version is 1.11.4, which already contains the scheduled-workflow table definitions, so as long as the migrations succeed this table will exist; this is not a "version too old to have this table" situation.


You can start by following the repair procedure in section III. If convenient, first answer two things:

  1. What is the current value of DB_TYPE in your .env / compose file?
  2. Did flask db upgrade run successfully, and did it report any errors?

I can advise on the next step based on your actual output.

@Dify_Forum_Helper
$ flask db upgrade
WARNING:  database "dify" has no actual collation version, but a version was recorded
INFO  [alembic.runtime.migration] Context impl PostgresqlImpl.
INFO  [alembic.runtime.migration] Will assume transactional DDL.
INFO  [alembic.runtime.migration] Running upgrade 2adcbe1f5dfb -> 4474872b0ee6, workflow_draft_varaibles add node_execution_id column, add index for workflow_node_executions.
Traceback (most recent call last):
File "/app/api/.venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py", line 1967, in _exec_single_context
self.dialect.do_execute(
File "/app/api/.venv/lib/python3.12/site-packages/sqlalchemy/engine/default.py", line 951, in do_execute
cursor.execute(statement, parameters)
psycopg2.errors.DuplicateTable: relation "workflow_node_executions_tenant_id_idx" already exists

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File “/app/api/.venv/bin/flask”, line 10, in
sys.exit(main())
^^^^^^
File “/app/api/.venv/lib/python3.12/site-packages/flask/cli.py”, line 1131, in main
cli.main()
File “/app/api/.venv/lib/python3.12/site-packages/click/core.py”, line 1406, in main
rv = self.invoke(ctx)
^^^^^^^^^^^^^^^^
File “/app/api/.venv/lib/python3.12/site-packages/click/core.py”, line 1873, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File “/app/api/.venv/lib/python3.12/site-packages/click/core.py”, line 1873, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File “/app/api/.venv/lib/python3.12/site-packages/click/core.py”, line 1269, in invoke
return ctx.invoke(self.callback, **ctx.params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File “/app/api/.venv/lib/python3.12/site-packages/click/core.py”, line 824, in invoke
return callback(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File “/app/api/.venv/lib/python3.12/site-packages/click/decorators.py”, line 34, in new_func
return f(get_current_context(), *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File “/app/api/.venv/lib/python3.12/site-packages/flask/cli.py”, line 400, in decorator
return ctx.invoke(f, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File “/app/api/.venv/lib/python3.12/site-packages/click/core.py”, line 824, in invoke
return callback(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File “/app/api/.venv/lib/python3.12/site-packages/flask_migrate/cli.py”, line 154, in upgrade
_upgrade(directory, revision, sql, tag, x_arg)
File “/app/api/.venv/lib/python3.12/site-packages/flask_migrate/init.py”, line 111, in wrapped
f(*args, **kwargs)
File “/app/api/.venv/lib/python3.12/site-packages/flask_migrate/init.py”, line 200, in upgrade
command.upgrade(config, revision, sql=sql, tag=tag)
File “/app/api/.venv/lib/python3.12/site-packages/alembic/command.py”, line 483, in upgrade
script.run_env()
File “/app/api/.venv/lib/python3.12/site-packages/alembic/script/base.py”, line 545, in run_env
util.load_python_file(self.dir, “env.py”)
File “/app/api/.venv/lib/python3.12/site-packages/alembic/util/pyfiles.py”, line 116, in load_python_file
module = load_module_py(module_id, path)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File “/app/api/.venv/lib/python3.12/site-packages/alembic/util/pyfiles.py”, line 136, in load_module_py
spec.loader.exec_module(module) # type: ignore
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File “”, line 999, in exec_module
File “”, line 488, in _call_with_frames_removed
File “/app/api/migrations/env.py”, line 110, in
run_migrations_online()
File “/app/api/migrations/env.py”, line 104, in run_migrations_online
context.run_migrations()
File “”, line 8, in run_migrations
File “/app/api/.venv/lib/python3.12/site-packages/alembic/runtime/environment.py”, line 946, in run_migrations
self.get_context().run_migrations(**kw)
File “/app/api/.venv/lib/python3.12/site-packages/alembic/runtime/migration.py”, line 627, in run_migrations
step.migration_fn(**kw)
File “/app/api/migrations/versions/2025_06_06_1424-4474872b0ee6_workflow_draft_varaibles_add_node_execution_id.py”, line 36, in upgrade
op.create_index(
File “”, line 8, in create_index
File “”, line 3, in create_index
File “/app/api/.venv/lib/python3.12/site-packages/alembic/operations/ops.py”, line 1013, in create_index
return operations.invoke(op)
^^^^^^^^^^^^^^^^^^^^^
File “/app/api/.venv/lib/python3.12/site-packages/alembic/operations/base.py”, line 454, in invoke
return fn(self, operation)
^^^^^^^^^^^^^^^^^^^
File “/app/api/.venv/lib/python3.12/site-packages/alembic/operations/toimpl.py”, line 112, in create_index
operations.impl.create_index(idx, **kw)
File “/app/api/.venv/lib/python3.12/site-packages/alembic/ddl/postgresql.py”, line 99, in create_index
self._exec(CreateIndex(index, **kw))
File “/app/api/.venv/lib/python3.12/site-packages/alembic/ddl/impl.py”, line 246, in _exec
return conn.execute(construct, params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File “/app/api/.venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py”, line 1419, in execute
return meth(
^^^^^
File “/app/api/.venv/lib/python3.12/site-packages/sqlalchemy/sql/ddl.py”, line 187, in _execute_on_connection
return connection._execute_ddl(
^^^^^^^^^^^^^^^^^^^^^^^^
File “/app/api/.venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py”, line 1530, in _execute_ddl
ret = self._execute_context(
^^^^^^^^^^^^^^^^^^^^^^
File “/app/api/.venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py”, line 1846, in _execute_context
return self._exec_single_context(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File “/app/api/.venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py”, line 1986, in _exec_single_context
self._handle_dbapi_exception(
File “/app/api/.venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py”, line 2355, in _handle_dbapi_exception
raise sqlalchemy_exception.with_traceback(exc_info[2]) from e
File “/app/api/.venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py”, line 1967, in _exec_single_context
self.dialect.do_execute(
File “/app/api/.venv/lib/python3.12/site-packages/sqlalchemy/engine/default.py”, line 951, in do_execute
cursor.execute(statement, parameters)
sqlalchemy.exc.ProgrammingError: (psycopg2.errors.DuplicateTable) relation "workflow_node_executions_tenant_id_idx" already exists

[SQL: CREATE INDEX CONCURRENTLY workflow_node_executions_tenant_id_idx ON workflow_node_executions (tenant_id, workflow_id, node_id, created_at DESC)]
(Background on this error at: https://sqlalche.me/e/20/f405)

The flask db upgrade log you just posted is very informative. The current state is:

  • Alembic is upgrading from revision 2adcbe1f5dfb to 4474872b0ee6;
  • that migration script needs to create the index workflow_node_executions_tenant_id_idx;
  • but that index already exists in the database, so it throws DuplicateTable and the whole migration aborts.

So the problem has now become:

The migration history and the real database schema are out of sync:

  • some tables/indexes already exist, but Alembic believes they were never created.

That is also why you were missing the table workflow_schedule_plans earlier, yet the manual migration run got stuck on a different, duplicated index.


I. Summary of the Current Diagnosis

  1. Your database has already had migrations partially applied

    • at the very least, the index workflow_node_executions_tenant_id_idx exists;
    • the table workflow_schedule_plans still does not exist (otherwise the worker would not error).
  2. Alembic's migration record is stuck at 2adcbe1f5dfb

    • the log shows it attempting 2adcbe1f5dfb → 4474872b0ee6;
    • which means the alembic_version table still records 2adcbe1f5dfb as the current revision;
    • yet in the actual database, parts of the later migrations (such as this index) have already been executed.

In other words:
the real database schema is ahead of the revision Alembic has recorded, so re-running the migrations runs into an "index already exists" conflict.
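This out-of-sync failure mode can be sketched in a few lines. The following is a toy model of Alembic's bookkeeping (an illustration only, not Alembic's real implementation):

```python
# Toy model (not Alembic itself): Alembic only records whole revisions in
# alembic_version, so DDL that ran inside a revision that later failed is
# NOT recorded, and the next upgrade replays it and collides with the
# leftover object.
applied_revisions = {"2adcbe1f5dfb"}                           # what alembic_version says
existing_objects = {"workflow_node_executions_tenant_id_idx"}  # what the DB really has

def upgrade(revision, ddl_objects):
    if revision in applied_revisions:
        return "skipped"          # already recorded, nothing to do
    for obj in ddl_objects:
        if obj in existing_objects:
            raise RuntimeError(f'relation "{obj}" already exists')
        existing_objects.add(obj)
    applied_revisions.add(revision)
    return "applied"

try:
    upgrade("4474872b0ee6", ["workflow_node_executions_tenant_id_idx"])
except RuntimeError as e:
    print(e)  # relation "workflow_node_executions_tenant_id_idx" already exists
```

Because only whole revisions get recorded, removing the leftover object restores the invariant "not recorded ⇒ not present", which is why dropping it and re-running the upgrade then succeeds.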


II. Checks You Can Do Now

All of the following should only be done after confirming you have a backup, especially before touching alembic_version or dropping any index.

1. See which revision the migration version table currently records

In PostgreSQL, run:

SELECT * FROM alembic_version;

There is normally a single row, and version_num is the current migration revision.
Your log shows an upgrade from 2adcbe1f5dfb to 4474872b0ee6, so version_num should be 2adcbe1f5dfb.

Note this value down; you may need it later.

2. Confirm the current state of the relevant objects

In PostgreSQL, check each of the following:

-- 1) Is the schedule-plan table still missing?
SELECT *
FROM information_schema.tables
WHERE table_name = 'workflow_schedule_plans';

-- 2) Does the conflicting index exist?
SELECT indexname, indexdef
FROM pg_indexes
WHERE indexname = 'workflow_node_executions_tenant_id_idx';

-- 3) Does the underlying table exist?
SELECT *
FROM information_schema.tables
WHERE table_name = 'workflow_node_executions';

The results will most likely be:

  • workflow_schedule_plans: not found;
  • workflow_node_executions: exists;
  • the index workflow_node_executions_tenant_id_idx: exists.

If so, this further confirms that some migration steps (index creation) were already executed, but the migration sequence was never correctly "booked" by Alembic.


III. Two Common Repair Approaches (Plan A preferred)

Strongly recommended: back up the database (at least the dify database) before any of this, so you can roll back if something goes wrong.

Plan A: drop the duplicate index and let the migration continue

Rationale: since the duplicate object is an index rather than a table, you can drop it and let Alembic create it again, so the migration no longer aborts on DuplicateTable.

  1. Drop the index in PostgreSQL:

    DROP INDEX IF EXISTS workflow_node_executions_tenant_id_idx;
    

    Dropping only an index does not delete table data, so the risk is relatively contained.
    Afterwards you can double-check:

    SELECT indexname
    FROM pg_indexes
    WHERE tablename = 'workflow_node_executions';
    
  2. Back in the API container, run the migration again:

    flask db upgrade
    

    If this step passes, it will:

    • recreate the index you just dropped;
    • continue through all remaining migrations, including creating workflow_schedule_plans and the other missing tables.
  3. After the migration succeeds, check the key table once more:

    SELECT *
    FROM information_schema.tables
    WHERE table_name = 'workflow_schedule_plans';
    

    If it now exists, restart the worker and watch whether the original poll_workflow_schedules error is gone.

Plan B: manually adjust the Alembic revision (advanced, riskier)

This approach is normally for the case where the database schema is actually already correct and only Alembic's revision record has fallen behind. It requires you to confirm that:

  • the tables/indexes currently in the database are complete and correct;
  • what is missing is only tables like workflow_schedule_plans, and if you created or migrated anything by hand, it must match the official migration scripts exactly.

Rough steps (just to convey the idea; not recommended if you are unfamiliar with Alembic):

  1. After confirming the schema matches some newer migration revision, manually update version_num in the alembic_version table to skip the duplicated migrations.
  2. Run flask db upgrade again so it continues from the new baseline.

Since your database is currently still missing tables (workflow_schedule_plans), Plan B is not a good fit: if a later migration assumes that table already exists, you would only create more problems.


IV. Suggested Order of Operations

  1. Confirm the database has been backed up (this is important).
  2. In PostgreSQL:
    • check the current value in alembic_version;
    • confirm that workflow_schedule_plans really does not exist;
    • confirm that workflow_node_executions_tenant_id_idx really does exist.
  3. If it is indeed the "missing table + duplicate index" combination, follow Plan A:
    • DROP INDEX IF EXISTS workflow_node_executions_tenant_id_idx;
    • run flask db upgrade again inside the API container.
  4. After the migration completes:
    • check that workflow_schedule_plans has been created;
    • restart the worker and watch whether poll_workflow_schedules still errors.

V. Please Report Back With These Results

To rule out other hidden problems, please attach the following in your next reply:

  1. The results of running these in PostgreSQL (pasting the raw output is fine):

    SELECT * FROM alembic_version;
    
    SELECT *
    FROM information_schema.tables
    WHERE table_name = 'workflow_schedule_plans';
    
    SELECT indexname, indexdef
    FROM pg_indexes
    WHERE indexname = 'workflow_node_executions_tenant_id_idx';
    
  2. If you have already tried "DROP INDEX + flask db upgrade again", please paste the full output of the new flask db upgrade run.

Based on these I can confirm whether any other migration conflicts still need manual handling.

@Dify_Forum_Helper
WARNING:  database "dify" has no actual collation version, but a version was recorded
INFO  [alembic.runtime.migration] Context impl PostgresqlImpl.
INFO  [alembic.runtime.migration] Will assume transactional DDL.
INFO  [alembic.runtime.migration] Running upgrade 2adcbe1f5dfb -> 4474872b0ee6, workflow_draft_varaibles add node_execution_id column, add index for workflow_node_executions.
INFO  [alembic.runtime.migration] Running upgrade 4474872b0ee6 -> 0ab65e1cc7fa, remove sequence_number from workflow_runs
INFO  [alembic.runtime.migration] Running upgrade 0ab65e1cc7fa -> 58eb7bdb93fe, add mcp server tool and app server
INFO  [alembic.runtime.migration] Running upgrade 58eb7bdb93fe -> 1c9ba48be8e4, add uuidv7 function in sql
Traceback (most recent call last):
File "/app/api/.venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py", line 1967, in _exec_single_context
self.dialect.do_execute(
File "/app/api/.venv/lib/python3.12/site-packages/sqlalchemy/engine/default.py", line 951, in do_execute
cursor.execute(statement, parameters)
psycopg2.errors.AmbiguousFunction: function name "uuidv7" is not unique
HINT:  Specify the argument list to select the function unambiguously.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File “/app/api/.venv/bin/flask”, line 10, in
sys.exit(main())
^^^^^^
File “/app/api/.venv/lib/python3.12/site-packages/flask/cli.py”, line 1131, in main
cli.main()
File “/app/api/.venv/lib/python3.12/site-packages/click/core.py”, line 1406, in main
rv = self.invoke(ctx)
^^^^^^^^^^^^^^^^
File “/app/api/.venv/lib/python3.12/site-packages/click/core.py”, line 1873, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File “/app/api/.venv/lib/python3.12/site-packages/click/core.py”, line 1873, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File “/app/api/.venv/lib/python3.12/site-packages/click/core.py”, line 1269, in invoke
return ctx.invoke(self.callback, **ctx.params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File “/app/api/.venv/lib/python3.12/site-packages/click/core.py”, line 824, in invoke
return callback(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File “/app/api/.venv/lib/python3.12/site-packages/click/decorators.py”, line 34, in new_func
return f(get_current_context(), *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File “/app/api/.venv/lib/python3.12/site-packages/flask/cli.py”, line 400, in decorator
return ctx.invoke(f, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File “/app/api/.venv/lib/python3.12/site-packages/click/core.py”, line 824, in invoke
return callback(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File “/app/api/.venv/lib/python3.12/site-packages/flask_migrate/cli.py”, line 154, in upgrade
_upgrade(directory, revision, sql, tag, x_arg)
File “/app/api/.venv/lib/python3.12/site-packages/flask_migrate/init.py”, line 111, in wrapped
f(*args, **kwargs)
File “/app/api/.venv/lib/python3.12/site-packages/flask_migrate/init.py”, line 200, in upgrade
command.upgrade(config, revision, sql=sql, tag=tag)
File “/app/api/.venv/lib/python3.12/site-packages/alembic/command.py”, line 483, in upgrade
script.run_env()
File “/app/api/.venv/lib/python3.12/site-packages/alembic/script/base.py”, line 545, in run_env
util.load_python_file(self.dir, “env.py”)
File “/app/api/.venv/lib/python3.12/site-packages/alembic/util/pyfiles.py”, line 116, in load_python_file
module = load_module_py(module_id, path)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File “/app/api/.venv/lib/python3.12/site-packages/alembic/util/pyfiles.py”, line 136, in load_module_py
spec.loader.exec_module(module) # type: ignore
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File “”, line 999, in exec_module
File “”, line 488, in _call_with_frames_removed
File “/app/api/migrations/env.py”, line 110, in
run_migrations_online()
File “/app/api/migrations/env.py”, line 104, in run_migrations_online
context.run_migrations()
File “”, line 8, in run_migrations
File “/app/api/.venv/lib/python3.12/site-packages/alembic/runtime/environment.py”, line 946, in run_migrations
self.get_context().run_migrations(**kw)
File “/app/api/.venv/lib/python3.12/site-packages/alembic/runtime/migration.py”, line 627, in run_migrations
step.migration_fn(**kw)
File “/app/api/migrations/versions/2025_07_02_2332-1c9ba48be8e4_add_uuidv7_function_in_sql.py”, line 51, in upgrade
op.execute(sa.text(r“””
File “”, line 8, in execute
File “”, line 3, in execute
File “/app/api/.venv/lib/python3.12/site-packages/alembic/operations/ops.py”, line 2591, in execute
return operations.invoke(op)
^^^^^^^^^^^^^^^^^^^^^
File “/app/api/.venv/lib/python3.12/site-packages/alembic/operations/base.py”, line 454, in invoke
return fn(self, operation)
^^^^^^^^^^^^^^^^^^^
File “/app/api/.venv/lib/python3.12/site-packages/alembic/operations/toimpl.py”, line 240, in execute_sql
operations.migration_context.impl.execute(
File “/app/api/.venv/lib/python3.12/site-packages/alembic/ddl/impl.py”, line 253, in execute
self._exec(sql, execution_options)
File “/app/api/.venv/lib/python3.12/site-packages/alembic/ddl/impl.py”, line 246, in _exec
return conn.execute(construct, params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File “/app/api/.venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py”, line 1419, in execute
return meth(
^^^^^
File “/app/api/.venv/lib/python3.12/site-packages/sqlalchemy/sql/elements.py”, line 526, in _execute_on_connection
return connection._execute_clauseelement(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File “/app/api/.venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py”, line 1641, in _execute_clauseelement
ret = self._execute_context(
^^^^^^^^^^^^^^^^^^^^^^
File “/app/api/.venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py”, line 1846, in _execute_context
return self._exec_single_context(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File “/app/api/.venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py”, line 1986, in _exec_single_context
self._handle_dbapi_exception(
File “/app/api/.venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py”, line 2355, in _handle_dbapi_exception
raise sqlalchemy_exception.with_traceback(exc_info[2]) from e
File “/app/api/.venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py”, line 1967, in _exec_single_context
self.dialect.do_execute(
File “/app/api/.venv/lib/python3.12/site-packages/sqlalchemy/engine/default.py”, line 951, in do_execute
cursor.execute(statement, parameters)
sqlalchemy.exc.ProgrammingError: (psycopg2.errors.AmbiguousFunction) function name "uuidv7" is not unique
HINT:  Specify the argument list to select the function unambiguously.

[SQL:
/* Main function to generate uuidv7 values with millisecond precision */
CREATE FUNCTION uuidv7() RETURNS uuid
AS
$$
-- Replace the first 48 bits of a uuidv4 with the current
-- number of milliseconds since 1970-01-01 UTC
-- and set the "ver" field to 7 by setting additional bits
SELECT encode(
set_bit(
set_bit(
overlay(uuid_send(gen_random_uuid()) placing
substring(int8send((extract(epoch from clock_timestamp()) * 1000)::bigint) from
3)
from 1 for 6),
52, 1),
53, 1), 'hex')::uuid;
$$ LANGUAGE SQL VOLATILE PARALLEL SAFE;

COMMENT ON FUNCTION uuidv7 IS
'Generate a uuid-v7 value with a 48-bit timestamp (millisecond precision) and 74 bits of randomness';

]
(Background on this error at: https://sqlalche.me/e/20/f405)

This error has progressed from "missing table" to a classic "migration re-execution" conflict, and you are very close to done.

The key facts now:

  • the earlier duplicate-index problem has been cleared, so the migration can proceed further;
  • it is now stuck at migration revision 1c9ba48be8e4_add_uuidv7_function_in_sql.py:
    • the script runs CREATE FUNCTION uuidv7() RETURNS uuid ...
    • but a function named uuidv7 already exists in the database, so PostgreSQL reports:

      function name "uuidv7" is not unique
      HINT: Specify the argument list to select the function unambiguously.

This means the database already has one (or more) uuidv7(...) functions, but Alembic believes it has never created one and tries to create it again.


I. Overall Approach (same as before)

Following the same idea as "drop the duplicate index → let the migration recreate it", here you can:

First remove the conflicting object (the uuidv7 function), then let Alembic run the migration script normally to recreate it.

Because this function is only a utility (it generates uuidv7 values), dropping it and letting the official script recreate it is safe and touches no business data.
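For context, what this utility produces is a standard UUIDv7 layout. A rough Python sketch of that layout (an illustration only; not Dify's SQL function or PostgreSQL's implementation):

```python
import os
import time
import uuid

def uuidv7_sketch() -> uuid.UUID:
    # UUIDv7 layout: 48-bit Unix timestamp in milliseconds, then the
    # version nibble set to 7 and the variant bits set to 10 -- the same
    # layout the migration's SQL builds with overlay() and set_bit().
    ts_ms = int(time.time() * 1000)
    b = bytearray(ts_ms.to_bytes(6, "big") + os.urandom(10))
    b[6] = (b[6] & 0x0F) | 0x70  # version = 7
    b[8] = (b[8] & 0x3F) | 0x80  # variant = RFC 4122 (10xx)
    return uuid.UUID(bytes=bytes(b))

u = uuidv7_sketch()
print(u.version)  # 7
```

The SQL version in the migration builds the same bytes by overlaying the millisecond timestamp onto uuid_send(gen_random_uuid()) and flipping the version bits with set_bit().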

As before, the strong recommendation stands: back up the current database first.


II. Concrete Steps

Run all of the following while connected to the PostgreSQL database your Dify uses (e.g. dify).

1. See how many uuidv7 functions currently exist

SELECT p.proname, p.oid, pg_get_function_identity_arguments(p.oid) AS args
FROM pg_proc p
JOIN pg_namespace n ON p.pronamespace = n.oid
WHERE n.nspname = 'public'
  AND p.proname = 'uuidv7';

You will most likely see one or more rows, for example:

  • uuidv7() (no arguments)
  • or some variants (if you experimented manually before)

The migration script wants to create a zero-argument uuidv7() returning uuid.

2. Drop the existing uuidv7() function

If the previous step shows a single zero-argument uuidv7(), dropping just that one is enough;
if there are several overloads with the same name, you can remove them all and let the migration recreate a single clean one.

Run:

DROP FUNCTION IF EXISTS uuidv7();

If the query also showed versions taking parameters (e.g. something like uuidv7(text)), you can clear those as well:

-- If you actually saw versions with specific argument signatures, add lines as needed:
-- DROP FUNCTION IF EXISTS uuidv7(text);
-- DROP FUNCTION IF EXISTS uuidv7(integer);

After dropping, query once more to confirm:

SELECT p.proname, pg_get_function_identity_arguments(p.oid) AS args
FROM pg_proc p
JOIN pg_namespace n ON p.pronamespace = n.oid
WHERE n.nspname = 'public'
  AND p.proname = 'uuidv7';

Ideally no rows remain here.

3. Run the migration again inside the API container

Back in the API container:

docker exec -it <your-api-container-name> /bin/bash

flask db upgrade

If the uuidv7 conflict has been cleared, this should now pass:

  • revision 1c9ba48be8e4 will re-run CREATE FUNCTION uuidv7() ...
  • and the remaining migrations will continue afterwards

III. Confirm the Migration Really Finished and the Table Exists

  1. Check alembic_version in the database:

    SELECT * FROM alembic_version;
    

    This should now show a fairly recent revision, no longer the intermediate ones you kept seeing in your logs (2adcbe1f5dfb, 1c9ba48be8e4, etc.).

  2. Confirm again that the key table workflow_schedule_plans now exists:

    SELECT *
    FROM information_schema.tables
    WHERE table_name = 'workflow_schedule_plans';
    

    If this returns a row, the migration script has created the table.

  3. Restart the worker and watch its log:

    docker compose restart worker
    

    Then check whether the worker log still shows:

    relation "workflow_schedule_plans" does not exist
    
    • If it no longer appears, the poll_workflow_schedules task should now run normally;
    • if some other table is reported as missing even though flask db upgrade fully succeeded, that would be abnormal and worth investigating by the specific table name.

IV. If It Still Fails, Please Reply With These Results

If after the steps above:

  • flask db upgrade still fails on some migration, or
  • the worker still errors on some other table/function,

please paste in the thread:

  1. the full output of the most recent flask db upgrade;
  2. the current alembic_version query result:
    SELECT * FROM alembic_version;
    
  3. the results of these two checks:
    SELECT *
    FROM information_schema.tables
    WHERE table_name = 'workflow_schedule_plans';
    
    SELECT proname, pg_get_function_identity_arguments(p.oid) AS args
    FROM pg_proc p
    JOIN pg_namespace n ON p.pronamespace = n.oid
    WHERE n.nspname = 'public'
      AND proname = 'uuidv7';
    

I will then confirm whether any other migration steps are out of sync, or whether other objects need cleanup.


In short, you have already narrowed the problem from "missing table breaks the worker" down to "a few migration scripts colliding with objects that already exist".
Clearing the uuidv7 function and letting Alembic recreate it is the most direct fix for this class of conflict, and the safest for business data.

@Dify_Forum_Helper
The following two SQL queries returned no rows at all:

SELECT proname, p.oid::regprocedure, pg_get_function_identity_arguments(p.oid) AS args
FROM pg_proc p
JOIN pg_namespace n ON p.pronamespace = n.oid
WHERE n.nspname = 'public'
AND proname = 'uuidv7';

SELECT proname, pg_get_function_identity_arguments(p.oid) AS args
FROM pg_proc p
JOIN pg_namespace n ON p.pronamespace = n.oid
WHERE n.nspname = 'public'
AND proname = 'uuidv7';

But if I remove the public filter, two rows come back:
"uuidv7"
"uuidv7" "shift interval"

And running the drop reports an error:

DROP FUNCTION IF EXISTS uuidv7();

ERROR: cannot drop function uuidv7() because it is required by the database system

This message is crucial; it essentially exposes the root cause:

  • there is indeed no uuidv7 under public;
  • but a database-wide search finds two entries, uuidv7 / uuidv7(shift interval);
  • DROP FUNCTION uuidv7() reports: cannot drop function uuidv7() because it is required by the database system.

This confirms it:
you are on PostgreSQL 18, which ships its own built-in family of system functions uuidv7(...), while this Dify 1.11.4 migration still tries to create another uuidv7() with the same name. The names collide in the system function namespace, and the system function cannot be dropped.

Under these conditions, the simple "drop the function → let the migration recreate it" route no longer works.


One-sentence summary of the current state

  • what you are hitting is an incompatibility between Dify 1.11.4's migration script and PostgreSQL 18's built-in uuidv7() function;
  • it is a new pitfall of this particular version combination, not a mistake in your local configuration.

Strictly speaking, this class of conflict needs to be fixed by Dify upstream in the migration script (for example by renaming the function, or skipping creation when a uuidv7() already exists).


Realistic Options (pick one)

The options below are ordered from "safe / closest to the supported path" to "advanced, edit-the-scripts". Choose based on how comfortable you are with database operations.

Plan A: switch to PostgreSQL 16 or 15 first (easiest, and what this Dify version expects)

What we know:

  • starting with PostgreSQL 18, uuidv7() is an official built-in, system-level and undroppable;
  • the Dify 1.11.4 migration is written to create its own uuidv7() function.

Until Dify officially supports PG 18, the safest path is to step back one major version, so the migration runs in an environment without a system uuidv7():

  1. Stop the current Dify and PG 18 containers (and take a full backup);

  2. Switch to a PG 16 or 15 image, for example:

    services:
      postgres:
        image: postgres:16-alpine
        ...
    
  3. If the database holds no real business data yet (fresh deployment, still in testing), the recommendation is:

    • start a clean, new database instance;
    • set DB_TYPE=postgres and the connection settings;
    • run docker compose up -d again so Dify initializes and migrates from scratch;
    • then check whether the worker still reports workflow_schedule_plans missing.

If you do already have data, moving it from PG 18 to PG 16 is considerably more involved (pg_dump / pg_restore); evaluate carefully if you are not familiar with that process.

Advantages:

  • no changes to any Dify code or migration scripts;
  • no fighting with PostgreSQL system functions;
  • safer for future upgrades to newer Dify versions too.

Plan B: temporarily skip the uuidv7 migration (for those comfortable with Alembic/SQL)

The idea: since PG 18 already provides uuidv7(), in theory you no longer need Dify's SQL implementation; you can make Alembic consider this migration applied and continue, the priority being to get all business tables (including workflow_schedule_plans) created.

Outline only (not recommended if you are unfamiliar with Alembic):

  1. Confirm in the database that workflow_schedule_plans and the other key tables already exist (if they do not, check whether all earlier revisions completed);
  2. Manually update version_num in the alembic_version table to a revision after 1c9ba48be8e4 (i.e. pretend that migration already ran);
  3. Run flask db upgrade again so it continues from the new baseline.

Risk:

  • if a later migration depends on Dify's own uuidv7 implementation existing, you may hit new problems (though judging by the code, this function is mostly a utility; the business tables largely use the uuid type itself rather than this SQL function directly).

Because your conflict is with a system function's name, this route is theoretically viable, but you must carefully verify:

  • that the table structures match the official expectations;
  • which migrations were already "executed but unrecorded" at the DB level.

If you are not comfortable hand-adjusting Alembic revisions, this route is not recommended on your own.


Plan C: patch the migration script locally (only if you maintain a source-based deployment)

If you deploy from source / build your own image rather than relying entirely on the official Docker image, you can modify that migration locally to:

  • skip creation when the database already has a uuidv7() function;
  • or rename Dify's function, e.g. dify_uuidv7().

But this involves:

  1. editing api/migrations/versions/2025_07_02_2332-1c9ba48be8e4_add_uuidv7_function_in_sql.py;
  2. making sure the modified script actually runs in your environment;
  3. remembering that this is a local customization whenever you upgrade Dify later.

This amounts to maintaining your own fork, with higher long-term operational cost, so it is generally only advisable for users who know Python / Alembic / Postgres well.


My Recommended Priorities

Given your troubleshooting so far and the PG 18 behavior this has exposed, I suggest considering the options in this order:

  1. First evaluate whether you can switch the core database to PostgreSQL 16 or 15 and re-initialize the migrations:

    • if there is no important business data yet, this is the simplest and cleanest option;
    • after switching versions, re-running flask db upgrade should no longer collide with the system uuidv7().
  2. If you must stay on PostgreSQL 18:

    • then the problem is fundamentally "Dify 1.11.4's migration script conflicts with PG 18's official built-in uuidv7()";
    • from a product standpoint, the proper fix is for Dify to address it in a later release (for example by changing the migration logic or the function name);
    • you can open an issue on the forum / GitHub first, stating:
      • Dify version: 1.11.4
      • PostgreSQL version: 18.1-alpine
      • migration stuck at 1c9ba48be8e4_add_uuidv7_function_in_sql.py
      • errors: AmbiguousFunction: function name "uuidv7" is not unique, and DROP FUNCTION uuidv7() reporting "required by the database system".

Until an official fix or explicit guidance lands, if you would rather not patch the migration script yourself, the most pragmatic move is to fall back to PG 16/15 so that the whole migration chain runs on a combination the project has already validated.


If you want to pursue one of these options (say you plan to switch to PG 16, or you want to try skipping this migration manually), tell me which direction you prefer and I can turn it into a concrete list of ready-to-run commands.

@Dify_Forum_Helper

I downloaded the latest scripts.

Dify version: 1.12.1

PostgreSQL version: 15-alpine

I removed all the previous containers, networks and volumes, and deleted the locally mounted database files.

First I ran compose up for postgresql and redis, and the containers came up successfully.

Then I ran compose up for api. Below is the log from creating the api container; has it successfully migrated all the database tables?
api-1 | Starting database migration.

api-1 | INFO [alembic.runtime.migration] Context impl PostgresqlImpl.

api-1 | INFO [alembic.runtime.migration] Will assume transactional DDL.

api-1 | INFO [alembic.runtime.migration] Running upgrade → 64b051264f32, init

api-1 | INFO [alembic.runtime.migration] Running upgrade 64b051264f32 → 9f4e3427ea84, add created by role

api-1 | INFO [alembic.runtime.migration] Running upgrade 9f4e3427ea84 → a45f4dfde53b, add language to recommend apps

api-1 | INFO [alembic.runtime.migration] Running upgrade a45f4dfde53b → 614f77cecc48, add last active at

api-1 | INFO [alembic.runtime.migration] Running upgrade 614f77cecc48 → e32f6ccb87c6, e08af0a69ccefbb59fa80c778efee300bb780980

api-1 | INFO [alembic.runtime.migration] Running upgrade e32f6ccb87c6 → d3d503a3471c, add is_deleted to conversations

api-1 | INFO [alembic.runtime.migration] Running upgrade d3d503a3471c → a5b56fb053ef, app config add speech_to_text

api-1 | INFO [alembic.runtime.migration] Running upgrade a5b56fb053ef → 2beac44e5f5f, add is_universal in apps

api-1 | INFO [alembic.runtime.migration] Running upgrade 2beac44e5f5f → 7ce5a52e4eee, add tool providers

api-1 | INFO [alembic.runtime.migration] Running upgrade 7ce5a52e4eee → 8d2d099ceb74, add_qa_model_support

api-1 | INFO [alembic.runtime.migration] Running upgrade 8d2d099ceb74 → 16fa53d9faec, add provider model support

api-1 | INFO [alembic.runtime.migration] Running upgrade 16fa53d9faec → e35ed59becda, modify quota limit field type

api-1 | INFO [alembic.runtime.migration] Running upgrade e35ed59becda → bf0aec5ba2cf, add provider order

api-1 | INFO [alembic.runtime.migration] Running upgrade bf0aec5ba2cf → 5022897aaceb, add model name in embedding

api-1 | INFO [alembic.runtime.migration] Running upgrade 5022897aaceb → 2c8af9671032, add_qa_document_language

api-1 | INFO [alembic.runtime.migration] Running upgrade 2c8af9671032 → e8883b0148c9, add_dataset_model_name

api-1 | INFO [alembic.runtime.migration] Running upgrade e8883b0148c9 → 853f9b9cd3b6, add message price unit

api-1 | INFO [alembic.runtime.migration] Running upgrade 853f9b9cd3b6 → 4bcffcd64aa4, update_dataset_model_field_null_available

api-1 | INFO [alembic.runtime.migration] Running upgrade 4bcffcd64aa4 → 6dcb43972bdc, add_dataset_retriever_resource

api-1 | INFO [alembic.runtime.migration] Running upgrade 6dcb43972bdc → 77e83833755c, add_app_config_retriever_resource

api-1 | INFO [alembic.runtime.migration] Running upgrade 77e83833755c → 6e2cfb077b04, add_dataset_collection_binding

api-1 | INFO [alembic.runtime.migration] Running upgrade 6e2cfb077b04 → ab23c11305d4, add dataset query variable at app model configs.

api-1 | INFO [alembic.runtime.migration] Running upgrade ab23c11305d4 → 2e9819ca5b28, add_tenant_id_in_api_token

api-1 | INFO [alembic.runtime.migration] Running upgrade 2e9819ca5b28 → b3a09c049e8e, add advanced prompt templates

api-1 | INFO [alembic.runtime.migration] Running upgrade b3a09c049e8e → 968fff4c0ab9, add_api_based_extension

api-1 | INFO [alembic.runtime.migration] Running upgrade 968fff4c0ab9 → a9836e3baeee, add external_data_tools in app model config

api-1 | INFO [alembic.runtime.migration] Running upgrade a9836e3baeee → 8fe468ba0ca5, add gpt4v supports

api-1 | INFO [alembic.runtime.migration] Running upgrade 8fe468ba0ca5 → fca025d3b60f, add-dataset-retrieval-model

api-1 | INFO [alembic.runtime.migration] Running upgrade fca025d3b60f → e1901f623fd0, add-annotation-reply

api-1 | INFO [alembic.runtime.migration] Running upgrade e1901f623fd0 → 46976cc39132, add-annotation-histoiry-score

api-1 | INFO [alembic.runtime.migration] Running upgrade 46976cc39132 → f2a6fc85e260, add_anntation_history_message_id

api-1 | INFO [alembic.runtime.migration] Running upgrade f2a6fc85e260 → 714aafe25d39, add_anntation_history_match_response

api-1 | INFO [alembic.runtime.migration] Running upgrade 714aafe25d39 → 246ba09cbbdb, add_app_anntation_setting

api-1 | INFO [alembic.runtime.migration] Running upgrade 246ba09cbbdb → 88072f0caa04, add custom config in tenant

api-1 | INFO [alembic.runtime.migration] Running upgrade 88072f0caa04 → 187385f442fc, modify provider model name length

api-1 | INFO [alembic.runtime.migration] Running upgrade 187385f442fc → 89c7899ca936, empty message

api-1 | INFO [alembic.runtime.migration] Running upgrade 89c7899ca936 → 3ef9b2b6bee6, add_assistant_app

api-1 | INFO [alembic.runtime.migration] Running upgrade 3ef9b2b6bee6 → ad472b61a054, add api provider icon

api-1 | INFO [alembic.runtime.migration] Running upgrade ad472b61a054 → 8ec536f3c800, rename api provider credentials

api-1 | INFO [alembic.runtime.migration] Running upgrade 8ec536f3c800 → 00bacef91f18, rename api provider description

api-1 | INFO [alembic.runtime.migration] Running upgrade 00bacef91f18 → f25003750af4, add created/updated at

api-1 | INFO [alembic.runtime.migration] Running upgrade f25003750af4 → c71211c8f604, add tool_invoke_model_log

api-1 | INFO [alembic.runtime.migration] Running upgrade c71211c8f604 → 114eed84c228, remove tool id from model invoke

api-1 | INFO [alembic.runtime.migration] Running upgrade 114eed84c228 → 4829e54d2fee, change message chain id to nullable

api-1 | INFO [alembic.runtime.migration] Running upgrade 4829e54d2fee → 053da0c1d756, add api tool privacy

api-1 | INFO [alembic.runtime.migration] Running upgrade 053da0c1d756 → 4823da1d26cf, add tool file

api-1 | INFO [alembic.runtime.migration] Running upgrade 4823da1d26cf → 9fafbd60eca1, add message file belongs to

api-1 | INFO [alembic.runtime.migration] Running upgrade 9fafbd60eca1 → 8ae9bc661daa, add tool conversation variables idx

api-1 | INFO [alembic.runtime.migration] Running upgrade 8ae9bc661daa → 23db93619b9d, add message files into agent thought

api-1 | INFO [alembic.runtime.migration] Running upgrade 23db93619b9d → de95f5c77138, migration serpapi_api_key

api-1 | INFO [alembic.runtime.migration] Running upgrade de95f5c77138 → b24be59fbb04, empty message

api-1 | INFO [alembic.runtime.migration] Running upgrade b24be59fbb04 → dfb3b7f477da, add-tool-index

api-1 | INFO [alembic.runtime.migration] Running upgrade dfb3b7f477da → 380c6aa5a70d, add tool labels to agent thought

api-1 | INFO [alembic.runtime.migration] Running upgrade 380c6aa5a70d → 16830a790f0f, empty message

api-1 | INFO [alembic.runtime.migration] Running upgrade 16830a790f0f → a8f9b3c45e4a, add_tenant_id_db_index

api-1 | INFO [alembic.runtime.migration] Running upgrade a8f9b3c45e4a → 17b5ab037c40, add-keyworg-table-storage-type

api-1 | INFO [alembic.runtime.migration] Running upgrade 17b5ab037c40 → a8d7385a7b66, add-embeddings-provider-name

api-1 | INFO [alembic.runtime.migration] Running upgrade a8d7385a7b66 → b289e2408ee2, add workflow

api-1 | INFO [alembic.runtime.migration] Running upgrade b289e2408ee2 → cc04d0998d4d, set model config column nullable

api-1 | INFO [alembic.runtime.migration] Running upgrade cc04d0998d4d → f9107f83abab, add desc for apps

api-1 | INFO [alembic.runtime.migration] Running upgrade f9107f83abab → 42e85ed5564d, conversation columns set nullable

api-1 | INFO [alembic.runtime.migration] Running upgrade 42e85ed5564d → b5429b71023c, messages columns set nullable

api-1 | INFO [alembic.runtime.migration] Running upgrade b5429b71023c → 563cf8bf777b, enable tool file without conversation id

api-1 | INFO [alembic.runtime.migration] Running upgrade 563cf8bf777b → e2eacc9a1b63, add status for message

api-1 | INFO [alembic.runtime.migration] Running upgrade e2eacc9a1b63 → c3311b089690, add tool meta

api-1 | INFO [alembic.runtime.migration] Running upgrade c3311b089690 → 3c7cac9521c6, add-tags-and-binding-table

api-1 | INFO [alembic.runtime.migration] Running upgrade 3c7cac9521c6 → 47cc7df8c4f3, modify default model name length

api-1 | INFO [alembic.runtime.migration] Running upgrade 47cc7df8c4f3 → 5fda94355fce, Custom Disclaimer

api-1 | INFO [alembic.runtime.migration] Running upgrade 5fda94355fce → 7bdef072e63a, add workflow tool

api-1 | INFO [alembic.runtime.migration] Running upgrade 7bdef072e63a → 3b18fea55204, add tool label bings

api-1 | INFO [alembic.runtime.migration] Running upgrade 3b18fea55204 → 9e98fbaffb88, add workflow tool version

api-1 | INFO [alembic.runtime.migration] Running upgrade 9e98fbaffb88 → 03f98355ba0e, add workflow tool label and tool bindings idx

api-1 | INFO [alembic.runtime.migration] Running upgrade 03f98355ba0e → 64a70a7aab8b, add workflow run index

api-1 | INFO [alembic.runtime.migration] Running upgrade 64a70a7aab8b → 4e99a8df00ff, add load balancing

api-1 | INFO [alembic.runtime.migration] Running upgrade 4e99a8df00ff → 7b45942e39bb, add-api-key-auth-binding

api-1 | INFO [alembic.runtime.migration] Running upgrade 7b45942e39bb → 4ff534e1eb11, add workflow to site

api-1 | INFO [alembic.runtime.migration] Running upgrade 4ff534e1eb11 → b69ca54b9208, add chatbot color theme

api-1 | INFO [alembic.runtime.migration] Running upgrade 4ff534e1eb11 → 04c602f5dc9b, update AppModelConfig and add table TracingAppConfig

api-1 | INFO [alembic.runtime.migration] Running upgrade 04c602f5dc9b → c031d46af369, remove app model config trace config and rename trace app config

api-1 | INFO [alembic.runtime.migration] Running upgrade c031d46af369 → 2a3aebbbf4bb, add app tracing

api-1 | INFO [alembic.runtime.migration] Running upgrade 2a3aebbbf4bb, b69ca54b9208 → 63f9175e515b, merge branches

api-1 | INFO [alembic.runtime.migration] Running upgrade 63f9175e515b → b2602e131636, add workflow_run_id index for message

api-1 | INFO [alembic.runtime.migration] Running upgrade b2602e131636 → 7e6a8693e07a, add table dataset_permissions

api-1 | INFO [alembic.runtime.migration] Running upgrade 7e6a8693e07a → 161cadc1af8d, add dataset permission tenant id

api-1 | INFO [alembic.runtime.migration] Running upgrade 161cadc1af8d → 408176b91ad3, ‘add_max_active_requests’

api-1 | INFO [alembic.runtime.migration] Running upgrade 408176b91ad3 → fecff1c3da27, remove extra tracing app config table and add idx_dataset_permissions_tenant_id

api-1 | INFO [alembic.runtime.migration] Running upgrade fecff1c3da27 → 6e957a32015b, add-embedding-cache-created_at_index

api-1 | INFO [alembic.runtime.migration] Running upgrade 6e957a32015b → 8e5588e6412e, add environment variable to workflow model

api-1 | INFO [alembic.runtime.migration] Running upgrade 8e5588e6412e → 53bf8af60645, update model

api-1 | INFO [alembic.runtime.migration] Running upgrade 53bf8af60645 → eeb2e349e6ac, increase max model_name length

api-1 | INFO [alembic.runtime.migration] Running upgrade eeb2e349e6ac → 1787fbae959a, update tools original_url length

api-1 | INFO [alembic.runtime.migration] Running upgrade 1787fbae959a → 63a83fcf12ba, support conversation variables

api-1 | INFO [alembic.runtime.migration] Running upgrade 63a83fcf12ba → 8782057ff0dc, add conversations.dialogue_count

api-1 | INFO [alembic.runtime.migration] Running upgrade 8782057ff0dc → a6be81136580, app and site icon type

api-1 | INFO [alembic.runtime.migration] Running upgrade a6be81136580 → 2dbe42621d96, rename workflow__conversation_variables to workflow_conversation_variables

api-1 | INFO [alembic.runtime.migration] Running upgrade 2dbe42621d96 → d0187d6a88dd, add created_by and updated_by to app, modelconfig, and site

api-1 | INFO [alembic.runtime.migration] Running upgrade d0187d6a88dd → 030f4915f36a, add use_icon_as_answer_icon fields for app and site

api-1 | INFO [alembic.runtime.migration] Running upgrade 030f4915f36a → 675b5321501b, add node_execution_id into node_executions

api-1 | INFO [alembic.runtime.migration] Running upgrade 675b5321501b → d57ba9ebb251, add parent_message_id to messages

api-1 | INFO [alembic.runtime.migration] Running upgrade d57ba9ebb251 → 6af6a521a53e, update-retrieval-resource

api-1 | INFO [alembic.runtime.migration] Running upgrade 6af6a521a53e → 33f5fac87f29, external_knowledge_api

api-1 | INFO [alembic.runtime.migration] Running upgrade 33f5fac87f29 → d8e744d88ed6, fix wrong service-api history

api-1 | INFO [alembic.runtime.migration] Running upgrade d8e744d88ed6 → bbadea11becb, add name and size to tool_files

api-1 | INFO [alembic.runtime.migration] Running upgrade bbadea11becb → 0251a1c768cc, add-tidb-auth-binding

api-1 | INFO [alembic.runtime.migration] Running upgrade 0251a1c768cc → 43fa78bc3b7d, add_white_list

api-1 | INFO [alembic.runtime.migration] Running upgrade 43fa78bc3b7d → d3f6769a94a3, Add upload_files.source_url

api-1 | INFO [alembic.runtime.migration] Running upgrade d3f6769a94a3 → 93ad8c19c40b, rename conversation variables index name

api-1 | INFO [alembic.runtime.migration] Running upgrade 93ad8c19c40b → f4d7ce70a7ca, update upload_files.source_url

api-1 | INFO [alembic.runtime.migration] Running upgrade f4d7ce70a7ca → d07474999927, update type of custom_disclaimer to TEXT

api-1 | INFO [alembic.runtime.migration] Running upgrade d07474999927 → 09a8d1878d9b, update workflows graph, features and updated_at

api-1 | INFO [alembic.runtime.migration] Running upgrade 09a8d1878d9b → 01d6889832f7, add_created_at_index_for_messages

api-1 | INFO [alembic.runtime.migration] Running upgrade 01d6889832f7 → cf8f4fc45278, add exceptions_count field to WorkflowRun model

api-1 | INFO [alembic.runtime.migration] Running upgrade cf8f4fc45278 → 11b07f66c737, remove unused tool_providers

api-1 | INFO [alembic.runtime.migration] Running upgrade 11b07f66c737 → e1944c35e15e, add retry_index field to node-execution model

api-1 | INFO [alembic.runtime.migration] Running upgrade e1944c35e15e → d7999dfa4aae, remove workflow_node_executions.retry_index if exists

api-1 | INFO [alembic.runtime.migration] Running upgrade d7999dfa4aae → e19037032219, parent-child-index

api-1 | INFO [alembic.runtime.migration] Running upgrade e19037032219 → 923752d42eb6, add_auto_disabled_dataset_logs

api-1 | INFO [alembic.runtime.migration] Running upgrade 923752d42eb6 → a91b476a53de, change workflow_runs.total_tokens to bigint

api-1 | INFO [alembic.runtime.migration] Running upgrade a91b476a53de → ddcc8bbef391, increase max length of builtin tool provider

api-1 | INFO [alembic.runtime.migration] Running upgrade ddcc8bbef391 → 08ec4f75af5e, add_tenant_plugin_permisisons

api-1 | INFO [alembic.runtime.migration] Running upgrade 08ec4f75af5e → 4413929e1ec2, extend_provider_name_column

api-1 | INFO [alembic.runtime.migration] Running upgrade 4413929e1ec2 → 5511c782ee4c, extend provider column

api-1 | INFO [alembic.runtime.migration] Running upgrade 5511c782ee4c → ee79d9b1c156, add marked_name and marked_comment in workflows

api-1 | INFO [alembic.runtime.migration] Running upgrade ee79d9b1c156 → f051706725cc, add_rate_limit_logs

api-1 | INFO [alembic.runtime.migration] Running upgrade f051706725cc → d20049ed0af6, add_metadata_function

api-1 | INFO [alembic.runtime.migration] Running upgrade d20049ed0af6 → 6a9f914f656c, change documentsegment and childchunk indexes

api-1 | INFO [alembic.runtime.migration] Running upgrade 6a9f914f656c → d28f2004b072, add index for workflow_conversation_variables.conversation_id

api-1 | INFO [alembic.runtime.migration] Running upgrade d28f2004b072 → 2adcbe1f5dfb, add WorkflowDraftVariable model

api-1 | INFO [alembic.runtime.migration] Running upgrade 2adcbe1f5dfb → 4474872b0ee6, `workflow_draft_varaibles` add `node_execution_id` column, add an index for `workflow_node_executions`.

api-1 | INFO [alembic.runtime.migration] Running upgrade 4474872b0ee6 → 0ab65e1cc7fa, remove sequence_number from workflow_runs

api-1 | INFO [alembic.runtime.migration] Running upgrade 0ab65e1cc7fa → 58eb7bdb93fe, add mcp server tool and app server

api-1 | INFO [alembic.runtime.migration] Running upgrade 58eb7bdb93fe → 1c9ba48be8e4, add uuidv7 function in SQL

api-1 | INFO [alembic.runtime.migration] Running upgrade 1c9ba48be8e4 → 71f5020c6470, tool oauth

api-1 | INFO [alembic.runtime.migration] Running upgrade 71f5020c6470 → 1a83934ad6d1, update models

api-1 | INFO [alembic.runtime.migration] Running upgrade 1a83934ad6d1 → 375fe79ead14, oauth_refresh_token

api-1 | INFO [alembic.runtime.migration] Running upgrade 375fe79ead14 → 8bcc02c9bd07, add_tenant_plugin_autoupgrade_table

api-1 | INFO [alembic.runtime.migration] Running upgrade 8bcc02c9bd07 → 532b3f888abf, manual dataset field update

api-1 | INFO [alembic.runtime.migration] Running upgrade 532b3f888abf → fa8b0fa6f407, add timeout for tool_mcp_providers

api-1 | INFO [alembic.runtime.migration] Running upgrade fa8b0fa6f407 → e8446f481c1e, Add provider multi credential support

api-1 | INFO [alembic.runtime.migration] Running upgrade e8446f481c1e → 0e154742a5fa, Add provider model multi credential support

api-1 | INFO [alembic.runtime.migration] Running upgrade 0e154742a5fa → 8d289573e1da, empty message

api-1 | INFO [alembic.runtime.migration] Running upgrade 8d289573e1da → b95962a3885c, chore: add workflow app log run id index

api-1 | INFO [alembic.runtime.migration] Running upgrade b95962a3885c → c20211f18133, add_headers_to_mcp_provider

api-1 | INFO [alembic.runtime.migration] Running upgrade c20211f18133 → cf7c38a32b2d, Add credential status for provider table

api-1 | INFO [alembic.runtime.migration] Running upgrade cf7c38a32b2d → 68519ad5cd18, knowledge_pipeline_migrate

api-1 | INFO [alembic.runtime.migration] Running upgrade 68519ad5cd18 → d98acf217d43, add app_mode for messsage

api-1 | INFO [alembic.runtime.migration] Running upgrade d98acf217d43 → ae662b25d9bc, remove-builtin-template-user

api-1 | INFO [alembic.runtime.migration] Running upgrade ae662b25d9bc → 03f8dcbc611e, add WorkflowPause model

api-1 | INFO [alembic.runtime.migration] Running upgrade 03f8dcbc611e → 669ffd70119c, introduce_trigger

api-1 | INFO [alembic.runtime.migration] Running upgrade 669ffd70119c → 09cfdda155d1, mysql adaptation

api-1 | INFO [alembic.runtime.migration] Running upgrade 09cfdda155d1 → 7bb281b7a422, Add workflow_pauses_reasons table

api-1 | INFO [alembic.runtime.migration] Running upgrade 7bb281b7a422 → d57accd375ae, support-multi-modal

api-1 | INFO [alembic.runtime.migration] Running upgrade d57accd375ae → 03ea244985ce, add type column not null default tool

api-1 | INFO [alembic.runtime.migration] Running upgrade 03ea244985ce → 7df29de0f6be, add credit pool

api-1 | INFO [alembic.runtime.migration] Running upgrade 7df29de0f6be → 905527cc8fd3, add workflow_run_created_at_id_idx

api-1 | INFO [alembic.runtime.migration] Running upgrade 905527cc8fd3 → 3334862ee907, feat: add created_at id index to messages

api-1 | INFO [alembic.runtime.migration] Running upgrade 3334862ee907 → 288345cd01d1, change workflow node execution workflow_run index

api-1 | INFO [alembic.runtime.migration] Running upgrade 288345cd01d1 → 9e6fa5cbcd80, make message annotation question not nullable

api-1 | INFO [alembic.runtime.migration] Running upgrade 9e6fa5cbcd80 → f9f6d18a37f9, add table explore banner and trial

api-1 | INFO [alembic.runtime.migration] Running upgrade f9f6d18a37f9 → 9d77545f524e, create workflow_archive_logs

api-1 | INFO [alembic.runtime.migration] Running upgrade 9d77545f524e → 788d3099ae3a, add summary index feature

api-1 | Database migration successful!

api-1 | [2026-02-07 08:43:33 +0000] [1] [INFO] Starting gunicorn 23.0.0

api-1 | [2026-02-07 08:43:33 +0000] [1] [INFO] Listening at: http://0.0.0.0:5001 (1)

api-1 | [2026-02-07 08:43:33 +0000] [1] [INFO] Using worker: gevent

api-1 | [2026-02-07 08:43:33 +0000] [108] [INFO] Booting worker with pid: 108

api-1 | gRPC patched with gevent.

api-1 | psycopg2 patched with gevent.

api-1 | 2026-02-07 08:43:37.378 INFO [MainThread] [utils.py:151] - Note: NumExpr detected 40 cores but “NUMEXPR_MAX_THREADS” not set, so enforcing safe limit of 16.

api-1 | 2026-02-07 08:43:37.379 INFO [MainThread] [utils.py:164] - NumExpr defaulting to 16 threads.

Yes — from this log, this was a complete "from scratch" migration and it finished successfully; the earlier missing-table problem can be considered resolved.

There are two key signals:

  1. The migration chain ran from an empty database all the way to the latest revision
    The log starts with:

    Running upgrade  -> 64b051264f32, init
    

    which means this is a brand-new database starting from the initial revision; the dozens of Running upgrade ... -> ... lines you pasted then run straight through to:

    INFO  [alembic.runtime.migration] Running upgrade 9d77545f524e -> 788d3099ae3a, add summary index feature
    Database migration successful!
    

    That span includes the steps we cared about earlier, for example:

    • b289e2408ee2, add workflow (this batch creates the workflow-related tables)
    • 669ffd70119c, introduce_trigger
    • 7bb281b7a422, Add workflow_pauses_reasons table
    • and so on, all the way to the latest revision, 788d3099ae3a, add summary index feature

    Nothing aborted the run with an error, and it ends with an explicit:

    Database migration successful!
    

    Only after that does gunicorn start — exactly the expected "full migration succeeded" path.

  2. This version combination sidesteps both of the earlier pitfalls

    • Dify upgraded to 1.12.1
    • PostgreSQL switched to 15-alpine, so the conflict between PG 18's built-in uuidv7() and the migration script no longer applies.

    With this combination, the stock migration scripts have been tested against exactly this kind of Postgres version, so tables like workflow_schedule_plans get created normally, and the worker's poll_workflow_schedules query will no longer hit "relation … does not exist".
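One extra sanity check, if you want it: alembic records the single current head revision in the alembic_version table. A sketch, assuming the default compose service name db, user postgres, and database dify (adjust to the credentials in your .env):

```shell
# Should print exactly one row holding the latest revision id
# seen at the end of the migration log (788d3099ae3a).
docker compose exec db psql -U postgres -d dify \
  -c "SELECT version_num FROM alembic_version;"
```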


Simple ways to verify right now (pick either):

  1. Check in the DB that the core tables exist, for example:

    SELECT table_name
    FROM information_schema.tables
    WHERE table_name IN ('workflow_schedule_plans', 'workflows', 'app_triggers');
    

    Normally all of them will come back.

  2. Start worker / worker_beat and check their logs for any recurrence of:

    relation "workflow_schedule_plans" does not exist
    

    If that error is gone, the original problem (scheduled tasks failing to run) has been solved by this reinstall with the correct version combination and a complete migration.
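Both verifications can be run from the host in one go. A sketch, again assuming the default service names (db, worker) and database credentials from the standard .env:

```shell
# 1. Confirm the core tables exist (three rows expected).
docker compose exec db psql -U postgres -d dify -c \
  "SELECT table_name FROM information_schema.tables
   WHERE table_name IN ('workflow_schedule_plans', 'workflows', 'app_triggers');"

# 2. Scan recent worker logs for the old missing-relation error;
# the fallback echo fires only when grep finds nothing.
docker compose logs --tail=500 worker | grep 'does not exist' \
  || echo "no missing-relation errors"
```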

If a new error shows up in the worker logs later (rather than the missing table), paste the specific log here and I can help you check whether it's a different configuration issue.