I used an external seekdb as a metadata and vector database, and encountered the following error during deployment:
2026-01-20 08:45:16.496 INFO [MainThread] [strategy.py:161] - Task schedule.workflow_schedule_task.poll_workflow_schedules[41ad48fa-86ec-4cf0-8234-377662073ce5] received
2026-01-20 08:45:16.539 ERROR [Dummy-5] [trace.py:267] f09508577c1758ca8c361351a3612490 - Task schedule.workflow_schedule_task.poll_workflow_schedules[41ad48fa-86ec-4cf0-8234-377662073ce5] raised unexpected: ProgrammingError('(pymysql.err.ProgrammingError) (1146, "Table \'test.workflow_schedule_plans\' doesn\'t exist")')
Traceback (most recent call last):
  File "/app/api/.venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py", line 1967, in _exec_single_context
    self.dialect.do_execute(
  File "/app/api/.venv/lib/python3.12/site-packages/sqlalchemy/engine/default.py", line 951, in do_execute
    cursor.execute(statement, parameters)
  File "/app/api/.venv/lib/python3.12/site-packages/pymysql/cursors.py", line 153, in execute
    result = self._query(query)
  File "/app/api/.venv/lib/python3.12/site-packages/pymysql/cursors.py", line 322, in _query
    conn.query(q)
  File "/app/api/.venv/lib/python3.12/site-packages/pymysql/connections.py", line 575, in query
    self._affected_rows = self._read_query_result(unbuffered=unbuffered)
  File "/app/api/.venv/lib/python3.12/site-packages/pymysql/connections.py", line 826, in _read_query_result
    result.read()
  File "/app/api/.venv/lib/python3.12/site-packages/pymysql/connections.py", line 1203, in read
    first_packet = self.connection._read_packet()
  File "/app/api/.venv/lib/python3.12/site-packages/pymysql/connections.py", line 782, in _read_packet
    packet.raise_for_error()
  File "/app/api/.venv/lib/python3.12/site-packages/pymysql/protocol.py", line 219, in raise_for_error
    err.raise_mysql_exception(self._data)
  File "/app/api/.venv/lib/python3.12/site-packages/pymysql/err.py", line 150, in raise_mysql_exception
    raise errorclass(errno, errval)
pymysql.err.ProgrammingError: (1146, "Table 'test.workflow_schedule_plans' doesn't exist")

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/app/api/.venv/lib/python3.12/site-packages/celery/app/trace.py", line 453, in trace_task
    R = retval = fun(*args, **kwargs)
  File "/app/api/extensions/ext_celery.py", line 54, in __call__
    return self.run(*args, **kwargs)
  File "/app/api/schedule/workflow_schedule_task.py", line 34, in poll_workflow_schedules
    due_schedules = _fetch_due_schedules(session)
  File "/app/api/schedule/workflow_schedule_task.py", line 68, in _fetch_due_schedules
    due_schedules = session.scalars(
  File "/app/api/.venv/lib/python3.12/site-packages/sqlalchemy/orm/session.py", line 2459, in scalars
    return self._execute_internal(
  File "/app/api/.venv/lib/python3.12/site-packages/sqlalchemy/orm/session.py", line 2249, in _execute_internal
    result: Result[Any] = compile_state_cls.orm_execute_statement(
  File "/app/api/.venv/lib/python3.12/site-packages/sqlalchemy/orm/context.py", line 306, in orm_execute_statement
    result = conn.execute(
  File "/app/api/.venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py", line 1419, in execute
    return meth(
  File "/app/api/.venv/lib/python3.12/site-packages/sqlalchemy/sql/elements.py", line 526, in _execute_on_connection
    return connection._execute_clauseelement(
  File "/app/api/.venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py", line 1641, in _execute_clauseelement
    ret = self._execute_context(
  File "/app/api/.venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py", line 1846, in _execute_context
    return self._exec_single_context(
  File "/app/api/.venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py", line 1986, in _exec_single_context
    self._handle_dbapi_exception(
  File "/app/api/.venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py", line 2355, in _handle_dbapi_exception
    raise sqlalchemy_exception.with_traceback(exc_info[2]) from e
  File "/app/api/.venv/lib/python3.12/site-packages/sqlalchemy/engine/base.py", line 1967, in _exec_single_context
    self.dialect.do_execute(
  File "/app/api/.venv/lib/python3.12/site-packages/sqlalchemy/engine/default.py", line 951, in do_execute
    cursor.execute(statement, parameters)
  File "/app/api/.venv/lib/python3.12/site-packages/pymysql/cursors.py", line 153, in execute
    result = self._query(query)
  File "/app/api/.venv/lib/python3.12/site-packages/pymysql/cursors.py", line 322, in _query
    conn.query(q)
  File "/app/api/.venv/lib/python3.12/site-packages/pymysql/connections.py", line 575, in query
    self._affected_rows = self._read_query_result(unbuffered=unbuffered)
  File "/app/api/.venv/lib/python3.12/site-packages/pymysql/connections.py", line 826, in _read_query_result
    result.read()
  File "/app/api/.venv/lib/python3.12/site-packages/pymysql/connections.py", line 1203, in read
    first_packet = self.connection._read_packet()
  File "/app/api/.venv/lib/python3.12/site-packages/pymysql/connections.py", line 782, in _read_packet
    packet.raise_for_error()
  File "/app/api/.venv/lib/python3.12/site-packages/pymysql/protocol.py", line 219, in raise_for_error
    err.raise_mysql_exception(self._data)
  File "/app/api/.venv/lib/python3.12/site-packages/pymysql/err.py", line 150, in raise_mysql_exception
    raise errorclass(errno, errval)
sqlalchemy.exc.ProgrammingError: (pymysql.err.ProgrammingError) (1146, "Table 'test.workflow_schedule_plans' doesn't exist")
[SQL: SELECT workflow_schedule_plans.id, workflow_schedule_plans.app_id, workflow_schedule_plans.node_id, workflow_schedule_plans.tenant_id, workflow_schedule_plans.cron_expression, workflow_schedule_plans.timezone, workflow_schedule_plans.next_run_at, workflow_schedule_plans.created_at, workflow_schedule_plans.updated_at
FROM workflow_schedule_plans INNER JOIN app_triggers ON app_triggers.app_id = workflow_schedule_plans.app_id AND app_triggers.node_id = workflow_schedule_plans.node_id AND app_triggers.trigger_type = %(trigger_type_1)s
WHERE workflow_schedule_plans.next_run_at <= %(next_run_at_1)s AND workflow_schedule_plans.next_run_at IS NOT NULL AND app_triggers.status = %(status_1)s ORDER BY workflow_schedule_plans.next_run_at ASC
LIMIT %(param_1)s FOR UPDATE SKIP LOCKED]
[parameters: {'trigger_type_1': 'trigger-schedule', 'next_run_at_1': datetime.datetime(2026, 1, 20, 8, 45, 16, 497796), 'status_1': 'enabled', 'param_1': 100}]
(Background on this error at: Error Messages — SQLAlchemy 2.0 Documentation)
@Dify_Forum_Helper, could you elaborate on where the problem might be, and also check GitHub for similar issues?
Hello! This issue is mainly related to the database types supported by Dify.
Based on your error message pymysql.err.ProgrammingError and your mention of using seekdb (OceanBase/MySQL compatible) as the metadata database, the core reason for the problem is:
Dify’s core metadata database currently only officially supports PostgreSQL, not MySQL or OceanBase.
Here’s a detailed analysis:
1. Why the error “Table doesn’t exist”?
Dify uses SQLAlchemy and Alembic for database migrations (automatic table creation). Since Dify’s code and migration scripts are specifically written for PostgreSQL (using many PG-specific features like UUID types, JSONB field indexes, etc.), when you force the DB_TYPE configuration to mysql or connect to a MySQL protocol database:
- The automatic migration scripts may not execute correctly, leading to tables not being created.
- Or, PG-specific features relied upon by the application at startup are unavailable on MySQL, resulting in some tables being missing.
The dify_plugin database in your screenshot might have some tables, but these are likely attempts by the plugin system to create them, while core business tables (like workflow_schedule_plans, used for scheduled workflows) failed to be created.
2. Database Compatibility Explanation
- Core Business Database (Metadata DB): Must be PostgreSQL (version usually requires v14+). This is where application configurations, users, workflow definitions, logs, and other core data are stored.
- Vector Database (Vector DB): Dify supports various vector stores, including Weaviate, Qdrant, Milvus, PgVector, OceanBase (as a vector store), etc.
Important distinction: You can use OceanBase/SeekDB as a vector database, but you cannot use it to replace PostgreSQL as the core business database.
3. Solution
Please revert the core database configuration in docker-compose.yml or environment variables to PostgreSQL.
Recommended correct architecture configuration:
- Core Database (db): Use the default Postgres container in your Docker deployment, or an externally hosted PostgreSQL database.
- Vector Database (vector_store): If you want to use OceanBase/SeekDB, configure it only as the vector storage backend.
Reference configuration (.env):
# Core Database (must be PostgreSQL)
DB_TYPE=postgres
DB_HOST=<Your Postgres address>
DB_PORT=5432
DB_DATABASE=dify
# Vector Database (can use OceanBase)
VECTOR_STORE=oceanbase
OCEANBASE_VECTOR_HOST=<Your SeekDB/OceanBase address>
OCEANBASE_VECTOR_PORT=2881
...
Summary
The current error is because Dify has a strong dependency on PostgreSQL and cannot properly initialize the core table structure on MySQL/OceanBase. Please switch the main database back to PostgreSQL to resolve this issue.
@Dify_Forum_Helper Incorrect answer. When retrieving this message in the future, remember to check the Dify releases: MySQL has been supported since v1.10.1. Please confirm the version before giving suggestions to users.
I apologize, my previous answer was incorrect. Thanks to @lysonober for the correction. Dify officially supports MySQL and OceanBase as core databases starting from v1.10.1.
Regarding the Table 'test.workflow_schedule_plans' doesn't exist error you encountered, this means that although Dify successfully connected to SeekDB (OceanBase), the business table structure was not successfully created in the database (the presence of the dify_plugin table might be a result of the plugin system’s independent initialization).
This is usually because the database migration step was skipped or failed during initialization.
Please try the following steps to fix and troubleshoot:
1. Check Environment Variable Configuration
Please ensure that DB_TYPE is explicitly configured in your .env file, which is a key configuration introduced in v1.10.1:
DB_TYPE=mysql
Note: If this is not specified, Dify might default to executing migrations using PostgreSQL dialect, leading to failure on MySQL/OceanBase.
2. Manually Execute Database Migration (Most Recommended Solution)
Since the automatic startup script might skip migrations due to certain judgment logic (there are similar reports in the community, such as Database migration skipped), it is recommended to manually force the migration command.
Please execute the following commands on the server:
# 1. Enter the API container (please confirm your actual container name using docker ps, usually docker-api-1 or dify-api-1)
docker exec -it docker-api-1 /bin/bash
# 2. Manually run the migration command inside the container
flask db upgrade
Observe after execution:
- If successful: You will see a series of Running upgrade ... -> ... logs. After completion, exit the container and restart the Dify service to resolve the issue.
- If failed: The console will print specific Python/SQL error messages. Please paste that error message; it might be a compatibility issue between SeekDB (OceanBase) and certain Dify table-creation statements (e.g., index length, JSON field handling), and we would need to locate the problem from the error logs.
3. Verify the Database
After executing the above commands, you can go back to SeekDB to check the test database:
use test;
show tables;
If you can see dozens of tables like apps, workflows, workflow_schedule_plans, it means the fix was successful.
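If you prefer to script this check rather than eyeball the SHOW TABLES output, here is a minimal sketch. The table names are a small sample subset for illustration, not Dify's full schema:

```python
# Hypothetical helper: given the table names returned by SHOW TABLES,
# report which core Dify tables the migration still has not created.
REQUIRED_TABLES = {"apps", "workflows", "workflow_schedule_plans"}  # sample subset

def missing_tables(existing):
    """Return the required tables absent from a SHOW TABLES result."""
    return REQUIRED_TABLES - set(existing)

# Before a successful migration, the schedule table is typically the one missing:
print(missing_tables(["apps", "workflows"]))  # {'workflow_schedule_plans'}
```

An empty set means the sampled core tables all exist and the migration most likely succeeded.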
Related References:
- Dify v1.10.1 Release Note: Confirms support for MySQL/OceanBase, contributed by the OceanBase team.
- Similar Issue: There have been community reports of migrations being skipped after switching to SeekDB; manually executing flask db upgrade is usually an effective solution. A bug related to SeekDB has been fixed.
Actually, I have already configured the seekdb-related parameters in the .env file. The information is as follows, where db_host has been manually desensitized by me:
DB_TYPE=seekdb
DB_USERNAME=root
DB_PASSWORD=seekdb
DB_HOST=X.X.X.X
DB_PORT=2887
DB_DATABASE=test
I deployed dify on host A, and the db_host in the .env configuration file is host B. I have already observed connections from dify on host B, but it doesn’t seem to have automatically created the ‘test’ database and related tables.
@Dify_Forum_Helper Please analyze this problem in context.
Hello! The problem is most likely with the DB_TYPE configuration value.
Core Reason Analysis
You configured the following in .env:
DB_TYPE=seekdb
This is the key cause of the problem. In Dify’s source code logic, DB_TYPE typically only accepts postgres or mysql as values.
Although you are using SeekDB (OceanBase compatible), at the ORM (SQLAlchemy) layer, it uses the MySQL protocol. If you set DB_TYPE to seekdb, Dify’s startup script cannot recognize this type, thus failing to load the correct database driver and migration configuration. This ultimately leads to database migration steps being skipped (Migration Skipped), meaning no tables are created.
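To illustrate why the exact value matters: SQLAlchemy connection URIs start with a dialect+driver scheme, and a configuration layer typically maps DB_TYPE to that scheme. The sketch below is illustrative only, not Dify's actual code; the function and mapping names are assumptions:

```python
# Sketch (assumed names, not Dify's real config module): map DB_TYPE to a
# SQLAlchemy dialect+driver scheme and build the connection URI from it.
DIALECT_BY_DB_TYPE = {
    "postgres": "postgresql+psycopg2",
    "mysql": "mysql+pymysql",  # also covers MySQL-protocol DBs like OceanBase/SeekDB
}

def build_db_uri(db_type, user, password, host, port, database):
    dialect = DIALECT_BY_DB_TYPE.get(db_type)
    if dialect is None:
        # DB_TYPE=seekdb would land here: no dialect, no valid URI, no migration
        raise ValueError(f"unsupported DB_TYPE: {db_type!r}")
    return f"{dialect}://{user}:{password}@{host}:{port}/{database}"

print(build_db_uri("mysql", "root", "seekdb", "X.X.X.X", 2887, "test"))
# mysql+pymysql://root:seekdb@X.X.X.X:2887/test
```

The point of the sketch: a MySQL-protocol database still needs DB_TYPE=mysql so the pymysql dialect is selected; an unrecognized label gives the config layer nothing to map from.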
Solution Steps
Please follow these steps to modify the configuration and redeploy:
1. Modify Environment Variables
Please change DB_TYPE in your docker-compose.yml or .env file to mysql:
# Incorrect configuration ❌
# DB_TYPE=seekdb
# Correct configuration ✅ (even for OceanBase/SeekDB, use mysql here)
DB_TYPE=mysql
# Other settings remain unchanged
DB_USERNAME=root
DB_PASSWORD=seekdb
DB_HOST=X.X.X.X
DB_PORT=2887
DB_DATABASE=test
2. Ensure Database Exists
Although Dify automatically creates tables, it usually does not automatically create the database itself. Please ensure you have manually connected to SeekDB and created an empty database named test:
CREATE DATABASE test;
3. Restart and Remedy
After modifying the configuration, restart the Dify containers:
docker compose down
docker compose up -d
Observe the logs: Check the api container logs (docker logs -f docker-api-1).
- If you see Running upgrade ..., the migration has started automatically.
- If it still reports that tables do not exist or that the migration was skipped, manually execute the migration command (this is the most reliable method):
# Enter the container
docker exec -it docker-api-1 /bin/bash
# Manually execute database upgrade
flask db upgrade
After executing flask db upgrade, you should see a large amount of SQL logs for table creation. Once completed, you can access Dify again.
Summary
Please change DB_TYPE back to mysql. seekdb is not a valid database type identifier recognized internally by Dify.
Hello! Many thanks to @lysonober for the correction. Dify has indeed introduced support for MySQL protocol databases (including OceanBase/SeekDB) in v1.10.1 and higher versions. I apologize for the outdated information I provided earlier.
Regarding your latest configuration and error situation, the problem is very likely with the configuration value DB_TYPE=seekdb.
Cause Analysis
Dify uses SQLAlchemy as its ORM framework. Although you are using SeekDB, in Dify’s configuration file, for the system to correctly load the MySQL driver (pymysql) and dialect, DB_TYPE must be set to mysql.
When configured with a non-standard value like seekdb, Dify may not recognize the database type, leading to the database migration scripts (Migration) being skipped or failing during initialization. Consequently, the database connection succeeds (you can see connection records), but core business tables (such as workflow_schedule_plans) are not created.
Solution
Please try the following steps to correct the configuration and initialize the database:
1. Modify .env Configuration
Change DB_TYPE to mysql:
# Before modification
# DB_TYPE=seekdb
# After modification
DB_TYPE=mysql
# Others remain unchanged
DB_USERNAME=root
DB_PASSWORD=seekdb
DB_HOST=X.X.X.X
DB_PORT=2887
DB_DATABASE=test
2. Restart and Check
After modification, restart the Dify service:
docker compose down
docker compose up -d
3. Manually Execute Migration (If Errors Persist After Restart)
If tables are still not created after restarting, it might be because a previous error state prevented automatic migration from triggering. It is recommended to manually run the migration command, which will also show specific error details:
# 1. Enter the API container (please confirm your container name is docker-api-1 or dify-api-1)
docker exec -it docker-api-1 /bin/bash
# 2. Manually execute database upgrade
flask db upgrade
- If successful: you will see a series of Running upgrade ... logs, and the table structure will be created on completion.
- If it fails: please post the error logs from flask db upgrade, which will help determine whether certain SQL syntax is incompatible with SeekDB.
Documentation Reference:
- Environment Variable Description: In v1.10.1+, DB_TYPE supports postgres or mysql.
Hope this solves your problem!
@midon
Hi, please note that Forum Helper has just launched and is in its early stages, so its knowledge base might not be sufficient yet, haha.
Okay, thanks for the reminder, this AI feature is truly impressive!
According to Forum Helper’s explanation, I encountered an error when executing migration tasks within the container:
pymysql.err.ProgrammingError: (1064, "You have an error in your SQL syntax; check the manual that corresponds to your OceanBase version for the right syntax to use near 'uuid_generate_v4(), \n\ttenant_id CHAR(36) NOT NULL, \n\tpool_type VARCHAR(40) NOT N' at line 2")
This specific Dify migration script (2025_12_25_1039-7df29de0f6be_add_credit_pool.py) attempts to use the PostgreSQL-specific function uuid_generate_v4() as a default value when creating a table.
This is PostgreSQL database syntax and is not compatible with MySQL and SeekDB. Is this an issue left over from adapting to MySQL?
@lysonober
@Dify_Forum_Helper Is there any relevant discussion on GitHub?
Seems like I’ve hit the limit, lol. I won’t test the bot for now. You can find the answer to your question here: MySQL migration fails due to PostgreSQL-specific uuid_generate_v4() default in tenant_credit_pools · Issue #31295 · langgenius/dify · GitHub
Hello! Your analysis is very keen; this is indeed a Dify code bug in multi-database adaptation.
Problem Root Cause Analysis
The error you encountered:
pymysql.err.ProgrammingError: (1064, "... syntax to use near 'uuid_generate_v4(), ...")
Indicates that when executing the migration script 2025_12_25_1039-7df29de0f6be_add_credit_pool.py, SQLAlchemy attempted to send an SQL statement containing uuid_generate_v4() to OceanBase (MySQL protocol).
- uuid_generate_v4() is a PostgreSQL-specific extension function (it usually requires enabling the uuid-ossp extension).
- MySQL / OceanBase does not support this function (MySQL typically uses UUID()).
This suggests that the author of this migration script might have overlooked MySQL/OceanBase compatibility checks, hardcoding Postgres’s default value syntax without differentiating based on DB_TYPE. This is a typical adaptation oversight problem since Dify started supporting multiple databases.
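For context on why dropping the database-level default is usually safe: the application layer can produce an identifier of the same shape with Python's standard library. This is a sketch, not Dify's actual model code:

```python
import uuid

def generate_id():
    # RFC 4122 version-4 UUID string: same shape (36 chars, 4 hyphens) as what
    # PostgreSQL's uuid_generate_v4() or MySQL's UUID() would produce
    return str(uuid.uuid4())

new_id = generate_id()
print(len(new_id), new_id.count("-"))  # 36 4
```

If every insert goes through the ORM with such an application-side default, the server_default on the column is redundant and can be omitted for MySQL-protocol databases.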
Temporary Solution (Workaround)
Since the problematic script has been identified, you can bypass this error by manually modifying the script within the container.
Please follow these steps:
1. Enter the container:
docker exec -it docker-api-1 /bin/bash
2. Locate and edit the migration script:
According to the error, the file name is 2025_12_25_1039-7df29de0f6be_add_credit_pool.py. It is usually located in the /app/api/migrations/versions/ directory.
# Find the file path
find /app/api/migrations/versions -name "*add_credit_pool.py"
# Edit the file (if vim/nano is not available in the container, you may need to install one or use sed)
# Assuming the path is /app/api/migrations/versions/7df29de0f6be_add_credit_pool.py
3. Modify the code logic:
Find code similar to server_default=sa.text('uuid_generate_v4()'). You need to make it take effect only under Postgres, or directly remove this database-level default value (Dify's application-layer code usually generates UUIDs as well, so removing the DB default is generally safe), or change it to a MySQL-compatible syntax.
Modification suggestion (simple and direct version): delete the server_default=sa.text('uuid_generate_v4()') part from the column definition:
# Original code might look like this:
# sa.Column('id', sa.String(36), server_default=sa.text('uuid_generate_v4()'), ...)
# Modify to: remove the server_default parameter
sa.Column('id', sa.String(36), ...)
Note: if it is an add_column operation and the column is nullable=False, removing the default value might cause an error. In that case, first change it to nullable=True or provide a static default value. However, ID fields are usually generated by the application, so removing server_default is the fastest fix.
4. Re-execute the migration:
After saving the changes, run again inside the container:
flask db upgrade
Report to the Official
Since @lysonober is already in the thread, this is clearly a bug that needs an official fix. The correct fix should be for Dify’s official team to introduce Dialect checks in the migration script:
from alembic import op
import sqlalchemy as sa

def upgrade():
    # ...
    conn = op.get_bind()
    if conn.dialect.name == 'postgresql':
        server_default = sa.text('uuid_generate_v4()')
    else:
        # MySQL/OceanBase: do not set a DB-level default; rely on the application layer to generate UUIDs
        server_default = None
    op.add_column('...', sa.Column(..., server_default=server_default))
It is recommended that you temporarily patch it manually using the “Temporary Solution” above to get the deployment running.

