When I try to upload a new file to the KB, the upload fails with the error message shown below.
If anyone knows how to fix this, please let me know.
You're running a self-hosted environment, right?
Weaviate is currently unreachable and throwing an error, but it's hard to pin down the cause from just the snippet you provided; a full error message or a screenshot would help me comment on it.
Also, depending on the exact error, I'd suggest checking the containers' running state and looking through the logs of the api, worker, and weaviate containers.
Yes, I'm using the self-hosted version (an environment built with Docker).
When uploading a file, the following error appears:
The (!) indicator's message reads: "Could not connect to Weaviate: Connection to Weaviate failed. Details: ."
■ Container status
Confirmed via docker compose ps: all containers, including api / worker / weaviate, are Up and running normally.
(Output below)
docker compose ps
NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
docker-api-1 langgenius/dify-api:1.11.2 "/bin/bash /entrypoi…" api 16 hours ago Up 16 hours 5001/tcp
docker-db_postgres-1 postgres:15-alpine "docker-entrypoint.s…" db_postgres 16 hours ago Up 16 hours (healthy) 5432/tcp
docker-nginx-1 nginx:latest "sh -c 'cp /docker-e…" nginx 16 hours ago Up 16 hours 0.0.0.0:80->80/tcp, [::]:80->80/tcp, 0.0.0.0:443->443/tcp, [::]:443->443/tcp
docker-pgvector-1 pgvector/pgvector:pg16 "/docker-entrypoint.…" pgvector 5 weeks ago Up 6 days (healthy) 5432/tcp
docker-plugin_daemon-1 langgenius/dify-plugin-daemon:0.5.2-local "/bin/bash -c /app/e…" plugin_daemon 16 hours ago Up 16 hours 0.0.0.0:5003->5003/tcp, [::]:5003->5003/tcp
docker-redis-1 redis:6-alpine "docker-entrypoint.s…" redis 16 hours ago Up 16 hours (healthy) 6379/tcp
docker-sandbox-1 langgenius/dify-sandbox:0.2.12 "/main" sandbox 16 hours ago Up 16 hours (healthy)
docker-ssrf_proxy-1 ubuntu/squid:latest "sh -c 'cp /docker-e…" ssrf_proxy 16 hours ago Up 16 hours 3128/tcp
docker-weaviate-1 semitechnologies/weaviate:1.27.0 "/bin/weaviate --hos…" weaviate 16 hours ago Up 16 hours 0.0.0.0:8080->8080/tcp, [::]:8080->8080/tcp
docker-web-1 langgenius/dify-web:1.11.2 "/bin/sh ./entrypoin…" web 16 hours ago Up 16 hours 3000/tcp
docker-worker-1 langgenius/dify-api:1.11.2 "/bin/bash /entrypoi…" worker 16 hours ago Up 16 hours 5001/tcp
docker-worker_beat-1 langgenius/dify-api:1.11.2 "/bin/bash /entrypoi…" worker_beat 16 hours ago Up 16 hours 5001/tcp
■ Container logs
1. api container
Checked with docker logs docker-api-1; no startup errors or Weaviate connection errors were found, and communication with plugin_daemon appears normal.
2. worker container
docker logs docker-worker-1 --tail=100
File "/usr/local/lib/python3.12/concurrent/futures/_base.py", line 456, in result
return self.__get_result()
^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/concurrent/futures/_base.py", line 401, in __get_result
raise self._exception
File "/usr/local/lib/python3.12/concurrent/futures/thread.py", line 59, in run
result = self.fn(*self.args, **self.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/api/core/indexing_runner.py", line 684, in _process_chunk
index_processor.load(
File "/app/api/core/rag/index_processor/processor/parent_child_index_processor.py", line 125, in load
vector = Vector(dataset)
^^^^^^^^^^^^^^^
File "/app/api/core/rag/datasource/vdb/vector_factory.py", line 45, in __init__
self._vector_processor = self._init_vector()
^^^^^^^^^^^^^^^^^^^
File "/app/api/core/rag/datasource/vdb/vector_factory.py", line 65, in _init_vector
return vector_factory_cls().init_vector(self._dataset, self._attributes, self._embeddings)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/api/core/rag/datasource/vdb/weaviate/weaviate_vector.py", line 463, in init_vector
return WeaviateVector(
^^^^^^^^^^^^^^^
File "/app/api/core/rag/datasource/vdb/weaviate/weaviate_vector.py", line 79, in __init__
self._client = self._init_client(config)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/api/core/rag/datasource/vdb/weaviate/weaviate_vector.py", line 122, in _init_client
client = weaviate.connect_to_custom(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/api/.venv/lib/python3.12/site-packages/weaviate/connect/helpers.py", line 347, in connect_to_custom
return __connect(
^^^^^^^^^^
File "/app/api/.venv/lib/python3.12/site-packages/weaviate/connect/helpers.py", line 371, in __connect
raise e
File "/app/api/.venv/lib/python3.12/site-packages/weaviate/connect/helpers.py", line 367, in __connect
client.connect()
File "/app/api/.venv/lib/python3.12/site-packages/weaviate/client_executor.py", line 149, in connect
return executor.execute(
^^^^^^^^^^^^^^^^^
File "/app/api/.venv/lib/python3.12/site-packages/weaviate/connect/executor.py", line 99, in execute
return cast(T, exception_callback(e))
^^^^^^^^^^^^^^^^^^^^^
File "/app/api/.venv/lib/python3.12/site-packages/weaviate/connect/executor.py", line 38, in raise_exception
raise e
File "/app/api/.venv/lib/python3.12/site-packages/weaviate/connect/executor.py", line 80, in execute
call = method(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^
File "/app/api/.venv/lib/python3.12/site-packages/weaviate/connect/v4.py", line 934, in connect
raise WeaviateStartUpError(f"Could not connect to Weaviate:{e}.") from e
weaviate.exceptions.WeaviateStartUpError: Could not connect to Weaviate:Connection to Weaviate failed. Details: .
2026-01-15 00:29:13.492 INFO [Dummy-1047] [document_indexing_task.py:102] - \u001b[32mProcessed dataset: 12ea72e3-c9ff-4b33-a519-e288ce5cabb4 latency: 0.5789082440314814\u001b[0m
2026-01-15 00:29:13.493 INFO [Dummy-1047] [document_indexing_task.py:131] - document indexing tenant isolation queue 081fa228-d2d2-4b9c-a586-dca51415c1b5 next tasks: [ ]
2026-01-15 00:29:13.502 INFO [Dummy-1047] [trace.py:128] - Task tasks.document_indexing_task.priority_document_indexing_task[16c8ad3d-b01e-4049-a916-ed73dae6870a] succeeded in 0.5901001710444689s: None
2026-01-15 00:29:42.851 INFO [MainThread] [strategy.py:161] - Task schedule.workflow_schedule_task.poll_workflow_schedules[550440d9-6185-49f4-97ae-18678d7aaf1d] received
2026-01-15 00:29:42.865 INFO [Dummy-1048] [trace.py:128] - Task schedule.workflow_schedule_task.poll_workflow_schedules[550440d9-6185-49f4-97ae-18678d7aaf1d] succeeded in 0.013193074963055551s: None
2026-01-15 00:30:00.002 INFO [MainThread] [strategy.py:161] - Task schedule.check_upgradable_plugin_task.check_upgradable_plugin_task[5ba74d5e-703e-43c2-992e-1b55e53dc80f] received
2026-01-15 00:30:00.003 WARNING [Dummy-1049] [log.py:232] - Start check upgradable plugin.
2026-01-15 00:30:00.003 WARNING [Dummy-1049] [log.py:232] - Now seconds of day: 1770.0032949447632
2026-01-15 00:30:00.007 WARNING [Dummy-1049] [log.py:232] - Total strategies: 0
2026-01-15 00:30:00.008 WARNING [Dummy-1049] [log.py:232] - Checked upgradable plugin success latency: 0.004957632045261562
2026-01-15 00:30:00.019 INFO [Dummy-1049] [trace.py:128] - Task schedule.check_upgradable_plugin_task.check_upgradable_plugin_task[5ba74d5e-703e-43c2-992e-1b55e53dc80f] succeeded in 0.01647809403948486s: None
2026-01-15 00:30:42.851 INFO [MainThread] [strategy.py:161] - Task schedule.workflow_schedule_task.poll_workflow_schedules[c8788ac7-0ce2-46ee-91a0-de6a630afba6] received
2026-01-15 00:30:42.863 INFO [Dummy-1050] [trace.py:128] - Task schedule.workflow_schedule_task.poll_workflow_schedules[c8788ac7-0ce2-46ee-91a0-de6a630afba6] succeeded in 0.011534405988641083s: None
2026-01-15 00:31:42.851 INFO [MainThread] [strategy.py:161] - Task schedule.workflow_schedule_task.poll_workflow_schedules[bc91dfa5-fb3c-4622-ac85-f9ef9f647f67] received
2026-01-15 00:31:42.863 INFO [Dummy-1051] [trace.py:128] - Task schedule.workflow_schedule_task.poll_workflow_schedules[bc91dfa5-fb3c-4622-ac85-f9ef9f647f67] succeeded in 0.011604055063799024s: None
2026-01-15 00:32:42.852 INFO [MainThread] [strategy.py:161] - Task schedule.workflow_schedule_task.poll_workflow_schedules[d243d57b-10de-4756-8a12-4775cef1fd22] received
2026-01-15 00:32:42.865 INFO [Dummy-1052] [trace.py:128] - Task schedule.workflow_schedule_task.poll_workflow_schedules[d243d57b-10de-4756-8a12-4775cef1fd22] succeeded in 0.012991473078727722s: None
2026-01-15 00:33:42.852 INFO [MainThread] [strategy.py:161] - Task schedule.workflow_schedule_task.poll_workflow_schedules[c599d787-27cf-4eb7-9feb-da14ded388d1] received
2026-01-15 00:33:42.867 INFO [Dummy-1053] [trace.py:128] - Task schedule.workflow_schedule_task.poll_workflow_schedules[c599d787-27cf-4eb7-9feb-da14ded388d1] succeeded in 0.014653912978246808s: None
2026-01-15 00:34:42.851 INFO [MainThread] [strategy.py:161] - Task schedule.workflow_schedule_task.poll_workflow_schedules[3c418b4e-cfe6-497f-9be7-554cc210b621] received
2026-01-15 00:34:42.864 INFO [Dummy-1054] [trace.py:128] - Task schedule.workflow_schedule_task.poll_workflow_schedules[3c418b4e-cfe6-497f-9be7-554cc210b621] succeeded in 0.012223441968671978s: None
2026-01-15 00:35:42.851 INFO [MainThread] [strategy.py:161] - Task schedule.workflow_schedule_task.poll_workflow_schedules[450bf627-115d-4317-9679-d992d16ea990] received
2026-01-15 00:35:42.865 INFO [Dummy-1055] [trace.py:128] - Task schedule.workflow_schedule_task.poll_workflow_schedules[450bf627-115d-4317-9679-d992d16ea990] succeeded in 0.013502061017788947s: None
2026-01-15 00:36:42.852 INFO [MainThread] [strategy.py:161] - Task schedule.workflow_schedule_task.poll_workflow_schedules[28dfc0f2-4617-4059-b0e7-d3897c32d123] received
2026-01-15 00:36:42.865 INFO [Dummy-1056] [trace.py:128] - Task schedule.workflow_schedule_task.poll_workflow_schedules[28dfc0f2-4617-4059-b0e7-d3897c32d123] succeeded in 0.012684233952313662s: None
2026-01-15 00:37:42.852 INFO [MainThread] [strategy.py:161] - Task schedule.workflow_schedule_task.poll_workflow_schedules[389d8a7c-ca0a-4d79-a546-5de234e4c341] received
2026-01-15 00:37:42.863 INFO [Dummy-1057] [trace.py:128] - Task schedule.workflow_schedule_task.poll_workflow_schedules[389d8a7c-ca0a-4d79-a546-5de234e4c341] succeeded in 0.010889502940699458s: None
2026-01-15 00:38:42.853 INFO [MainThread] [strategy.py:161] - Task schedule.workflow_schedule_task.poll_workflow_schedules[ef3e0bac-cd20-4065-8255-fd1bdf5ecfb4] received
2026-01-15 00:38:42.866 INFO [Dummy-1058] [trace.py:128] - Task schedule.workflow_schedule_task.poll_workflow_schedules[ef3e0bac-cd20-4065-8255-fd1bdf5ecfb4] succeeded in 0.012295693974010646s: None
2026-01-15 00:39:42.852 INFO [MainThread] [strategy.py:161] - Task schedule.workflow_schedule_task.poll_workflow_schedules[c5ce7903-eb95-490d-8b6a-17a9f3631eb0] received
2026-01-15 00:39:42.865 INFO [Dummy-1059] [trace.py:128] - Task schedule.workflow_schedule_task.poll_workflow_schedules[c5ce7903-eb95-490d-8b6a-17a9f3631eb0] succeeded in 0.012386396992951632s: None
2026-01-15 00:40:42.852 INFO [MainThread] [strategy.py:161] - Task schedule.workflow_schedule_task.poll_workflow_schedules[f5084a94-fbfb-407b-9be3-8ff83a636105] received
2026-01-15 00:40:42.864 INFO [Dummy-1060] [trace.py:128] - Task schedule.workflow_schedule_task.poll_workflow_schedules[f5084a94-fbfb-407b-9be3-8ff83a636105] succeeded in 0.011906969011761248s: None
2026-01-15 00:41:42.851 INFO [MainThread] [strategy.py:161] - Task schedule.workflow_schedule_task.poll_workflow_schedules[764d16ac-cfe8-4303-a5df-c13219fcebab] received
2026-01-15 00:41:42.865 INFO [Dummy-1061] [trace.py:128] - Task schedule.workflow_schedule_task.poll_workflow_schedules[764d16ac-cfe8-4303-a5df-c13219fcebab] succeeded in 0.013232778990641236s: None
2026-01-15 00:42:42.852 INFO [MainThread] [strategy.py:161] - Task schedule.workflow_schedule_task.poll_workflow_schedules[edd67a18-d230-4d6c-97e2-f1a8c936742e] received
2026-01-15 00:42:42.865 INFO [Dummy-1062] [trace.py:128] - Task schedule.workflow_schedule_task.poll_workflow_schedules[edd67a18-d230-4d6c-97e2-f1a8c936742e] succeeded in 0.013191823964007199s: None
2026-01-15 00:43:42.852 INFO [MainThread] [strategy.py:161] - Task schedule.workflow_schedule_task.poll_workflow_schedules[58c62185-ebe0-4747-85cd-83f2cb3d0eb9] received
2026-01-15 00:43:42.864 INFO [Dummy-1063] [trace.py:128] - Task schedule.workflow_schedule_task.poll_workflow_schedules[58c62185-ebe0-4747-85cd-83f2cb3d0eb9] succeeded in 0.012177053955383599s: None
2026-01-15 00:44:42.852 INFO [MainThread] [strategy.py:161] - Task schedule.workflow_schedule_task.poll_workflow_schedules[ffd45049-93e7-4218-b226-50e63ff2c8cb] received
2026-01-15 00:44:42.865 INFO [Dummy-1064] [trace.py:128] - Task schedule.workflow_schedule_task.poll_workflow_schedules[ffd45049-93e7-4218-b226-50e63ff2c8cb] succeeded in 0.012952374061569571s: None
2026-01-15 00:45:00.005 INFO [MainThread] [strategy.py:161] - Task schedule.check_upgradable_plugin_task.check_upgradable_plugin_task[9faa50ff-0305-4727-84ea-10ea78288e17] received
2026-01-15 00:45:00.007 WARNING [Dummy-1065] [log.py:232] - Start check upgradable plugin.
2026-01-15 00:45:00.007 WARNING [Dummy-1065] [log.py:232] - Now seconds of day: 2670.007495164871
2026-01-15 00:45:00.011 WARNING [Dummy-1065] [log.py:232] - Total strategies: 0
2026-01-15 00:45:00.012 WARNING [Dummy-1065] [log.py:232] - Checked upgradable plugin success latency: 0.004497095942497253
2026-01-15 00:45:00.022 INFO [Dummy-1065] [trace.py:128] - Task schedule.check_upgradable_plugin_task.check_upgradable_plugin_task[9faa50ff-0305-4727-84ea-10ea78288e17] succeeded in 0.01553877501282841s: None
2026-01-15 00:45:42.852 INFO [MainThread] [strategy.py:161] - Task schedule.workflow_schedule_task.poll_workflow_schedules[fa448614-8c08-4142-8291-08f261611692] received
2026-01-15 00:45:42.880 INFO [Dummy-1066] [trace.py:128] - Task schedule.workflow_schedule_task.poll_workflow_schedules[fa448614-8c08-4142-8291-08f261611692] succeeded in 0.028055537957698107s: None
2026-01-15 00:46:42.852 INFO [MainThread] [strategy.py:161] - Task schedule.workflow_schedule_task.poll_workflow_schedules[06097d2c-05f7-4af4-b523-d4f42b9c36e7] received
2026-01-15 00:46:42.864 INFO [Dummy-1067] [trace.py:128] - Task schedule.workflow_schedule_task.poll_workflow_schedules[06097d2c-05f7-4af4-b523-d4f42b9c36e7] succeeded in 0.012092349003069103s: None
3. weaviate container
docker logs docker-weaviate-1 --tail=100
{"action":"startup","build_git_commit":"6c571ff","build_go_version":"go1.22.8","build_image_tag":"","build_wv_version":"","default_vectorizer_module":"none","level":"info","msg":"the default vectorizer modules is set to \"none\", as a result all new schema classes without an explicit vectorizer setting, will use this vectorizer","time":"2026-01-14T08:18:23Z"}
{"build_git_commit":"6c571ff","build_go_version":"go1.22.8","build_image_tag":"","build_wv_version":"","level":"info","msg":"No resource limits set, weaviate will use all available memory and CPU. To limit resources, set LIMIT_RESOURCES=true","time":"2026-01-14T08:18:23Z"}
{"build_git_commit":"6c571ff","build_go_version":"go1.22.8","build_image_tag":"","build_wv_version":"","level":"info","msg":"module offload-s3 is enabled","time":"2026-01-14T08:18:23Z"}
{"action":"startup","build_git_commit":"6c571ff","build_go_version":"go1.22.8","build_image_tag":"","build_wv_version":"","default_vectorizer_module":"none","level":"info","msg":"the default vectorizer modules is set to \"none\", as a result all new schema classes without an explicit vectorizer setting, will use this vectorizer","time":"2026-01-14T08:20:58Z"}
{"action":"startup","auto_schema_enabled":true,"build_git_commit":"6c571ff","build_go_version":"go1.22.8","build_image_tag":"","build_wv_version":"","level":"info","msg":"auto schema enabled setting is set to \"true\"","time":"2026-01-14T08:20:58Z"}
{"build_git_commit":"6c571ff","build_go_version":"go1.22.8","build_image_tag":"","build_wv_version":"","level":"info","msg":"No resource limits set, weaviate will use all available memory and CPU. To limit resources, set LIMIT_RESOURCES=true","time":"2026-01-14T08:20:58Z"}
{"build_git_commit":"6c571ff","build_go_version":"go1.22.8","build_image_tag":"","build_wv_version":"","level":"info","msg":"module offload-s3 is enabled","time":"2026-01-14T08:20:58Z"}
{"action":"startup","build_git_commit":"6c571ff","build_go_version":"go1.22.8","build_image_tag":"","build_wv_version":"","default_vectorizer_module":"none","level":"info","msg":"the default vectorizer modules is set to \"none\", as a result all new schema classes without an explicit vectorizer setting, will use this vectorizer","time":"2026-01-14T08:47:13Z"}
{"action":"startup","auto_schema_enabled":true,"build_git_commit":"6c571ff","build_go_version":"go1.22.8","build_image_tag":"","build_wv_version":"","level":"info","msg":"auto schema enabled setting is set to \"true\"","time":"2026-01-14T08:47:13Z"}
{"build_git_commit":"6c571ff","build_go_version":"go1.22.8","build_image_tag":"","build_wv_version":"","level":"info","msg":"No resource limits set, weaviate will use all available memory and CPU. To limit resources, set LIMIT_RESOURCES=true","time":"2026-01-14T08:47:13Z"}
{"build_git_commit":"6c571ff","build_go_version":"go1.22.8","build_image_tag":"","build_wv_version":"","level":"info","msg":"module offload-s3 is enabled","time":"2026-01-14T08:47:13Z"}
Sorry the formatting isn't very clean.
Thanks for your understanding.
Hmm, there doesn't seem to be any very direct error message...
This is a bit of a long shot, but could you try completely stopping and restarting the environment?
docker compose down
docker compose up -d
Also, if there are other possibilities, it might be the contents of the .env file... have you changed any Weaviate-related settings?
This is probably unrelated to the error, just an aside, but your docker compose ps output shows a pgvector container started 6 days ago still running; if it's no longer in use, it might be worth cleaning up.
When pasting command output or logs, wrapping the content in triple backticks (```) on their own lines makes it much easier to read. For reference:
```
这里粘贴日志
```
It appears that Weaviate is not actually listening on port 8080, which is why Dify cannot connect to it and new files cannot be uploaded to the knowledge base.
docker exec -it docker-weaviate-1 sh
/ # ps aux | grep weaviate
1 root 0:07 /bin/weaviate --host 0.0.0.0 --port 8080 --scheme http
27 root 0:00 grep weaviate
/ # netstat -tlnp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.11:43709 0.0.0.0:* LISTEN -
tcp 0 0 :::7946 :::* LISTEN 1/weaviate
tcp 0 0 :::6060 :::* LISTEN 1/weaviate
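For reference, the same reachability check can be made from the Docker host with a small TCP probe (a minimal sketch in Python; the host and port values are assumptions for a default setup that publishes 8080):

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Hypothetical values: probe the published Weaviate port on the host.
    print("weaviate 8080 reachable:", port_open("127.0.0.1", 8080))
```

If this prints `False` while `docker compose ps` shows the container as Up, the process is running but never bound its listener, which matches the `netstat` output above.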
Do you have any suggestions for getting Weaviate to listen on port 8080 correctly and restoring the connection?
I really appreciate your help. Thank you!
@rose_1001
Thanks for the update!
I took another look at your docker compose ps output and noticed that Weaviate is exposing port 8080 to the host network.
docker-weaviate-1 ... 0.0.0.0:8080->8080/tcp, [::]:8080->8080/tcp
This is not the default configuration, so I suspect you may have customized something like docker-compose.yaml or docker-compose.override.yaml.
Could you share how weaviate and worker are defined in your compose files?
Also, since the output of docker logs docker-weaviate-1 is very short, Weaviate may have failed to start properly due to a port conflict or some network-related issue; that could be caused by the custom configuration, but it might not be.
@kurokobo
Thank you very much for your guidance.
I updated my docker-compose configuration as suggested and restarted the services, but I still cannot upload new files to the knowledge base; the problem remains unresolved.
Here is my current Weaviate configuration in docker-compose.yml:
weaviate:
image: semitechnologies/weaviate:1.27.0
profiles:
- weaviate
restart: always
volumes:
- ./volumes/weaviate:/var/lib/weaviate
environment:
PERSISTENCE_DATA_PATH: ${WEAVIATE_PERSISTENCE_DATA_PATH:-/var/lib/weaviate}
QUERY_DEFAULTS_LIMIT: ${WEAVIATE_QUERY_DEFAULTS_LIMIT:-25}
AUTHENTICATION_ANONYMOUS_ACCESS_ENABLED: ${WEAVIATE_AUTHENTICATION_ANONYMOUS_ACCESS_ENABLED:-true}
DEFAULT_VECTORIZER_MODULE: ${WEAVIATE_DEFAULT_VECTORIZER_MODULE:-none}
CLUSTER_HOSTNAME: ${WEAVIATE_CLUSTER_HOSTNAME:-node1}
AUTHENTICATION_APIKEY_ENABLED: ${WEAVIATE_AUTHENTICATION_APIKEY_ENABLED:-true}
AUTHENTICATION_APIKEY_ALLOWED_KEYS: ${WEAVIATE_AUTHENTICATION_APIKEY_ALLOWED_KEYS:-WVF5YThaHlkYwhGUSmCRgsX3tD5ngdN8pkih}
AUTHENTICATION_APIKEY_USERS: ${WEAVIATE_AUTHENTICATION_APIKEY_USERS:-hello@dify.ai}
AUTHORIZATION_ADMINLIST_ENABLED: ${WEAVIATE_AUTHORIZATION_ADMINLIST_ENABLED:-true}
AUTHORIZATION_ADMINLIST_USERS: ${WEAVIATE_AUTHORIZATION_ADMINLIST_USERS:-hello@dify.ai}
DISABLE_TELEMETRY: ${WEAVIATE_DISABLE_TELEMETRY:-false}
ENABLE_TOKENIZER_GSE: ${WEAVIATE_ENABLE_TOKENIZER_GSE:-false}
ENABLE_TOKENIZER_KAGOME_JA: ${WEAVIATE_ENABLE_TOKENIZER_KAGOME_JA:-false}
ENABLE_TOKENIZER_KAGOME_KR: ${WEAVIATE_ENABLE_TOKENIZER_KAGOME_KR:-false}
After restarting Weaviate, I checked the logs; this is everything I see so far:
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
8331ab3b0bcb nginx:latest "sh -c 'cp /docker-e…" About a minute ago Up About a minute 0.0.0.0:80->80/tcp, [::]:80->80/tcp, 0.0.0.0:443->443/tcp, [::]:443->443/tcp docker-nginx-1
d2ca69e4a8d5 langgenius/dify-api:1.11.2 "/bin/bash /entrypoi…" About a minute ago Up About a minute 5001/tcp docker-worker_beat-1
139ece472876 langgenius/dify-api:1.11.2 "/bin/bash /entrypoi…" About a minute ago Up About a minute 5001/tcp docker-worker-1
3009802ec24e langgenius/dify-api:1.11.2 "/bin/bash /entrypoi…" About a minute ago Up About a minute 5001/tcp docker-api-1
a1a0cb827a47 langgenius/dify-plugin-daemon:0.5.2-local "/bin/bash -c /app/e…" About a minute ago Up About a minute 0.0.0.0:5003->5003/tcp, [::]:5003->5003/tcp docker-plugin_daemon-1
fdad17ccce73 postgres:15-alpine "docker-entrypoint.s…" About a minute ago Up About a minute (healthy) 5432/tcp docker-db_postgres-1
e8bec6c3a779 redis:6-alpine "docker-entrypoint.s…" About a minute ago Up About a minute (healthy) 6379/tcp docker-redis-1
60cf986d8a46 langgenius/dify-web:1.11.2 "/bin/sh ./entrypoin…" About a minute ago Up About a minute 3000/tcp docker-web-1
f4f3dc3785c5 semitechnologies/weaviate:1.27.0 "/bin/weaviate --hos…" About a minute ago Up About a minute docker-weaviate-1
04a5ff4b82a6 langgenius/dify-sandbox:0.2.12 "/main" About a minute ago Up About a minute (healthy) docker-sandbox-1
cf007e85bd6a ubuntu/squid:latest "sh -c 'cp /docker-e…" About a minute ago Up About a minute 3128/tcp docker-ssrf_proxy-1
53f4d6cbcf72 pgvector/pgvector:pg16 "/docker-entrypoint.…" 6 weeks ago Up 10 days (healthy) 5432/tcp docker-pgvector-1
docker logs docker-weaviate-1
{"action":"startup","build_git_commit":"6c571ff","build_go_version":"go1.22.8","build_image_tag":"","build_wv_version":"","default_vectorizer_module":"none","level":"info","msg":"the default vectorizer modules is set to \"none\", as a result all new schema classes without an explicit vectorizer setting, will use this vectorizer","time":"2026-01-19T00:20:19Z"}
{"action":"startup","auto_schema_enabled":true,"build_git_commit":"6c571ff","build_go_version":"go1.22.8","build_image_tag":"","build_wv_version":"","level":"info","msg":"auto schema enabled setting is set to \"true\"","time":"2026-01-19T00:20:19Z"}
{"build_git_commit":"6c571ff","build_go_version":"go1.22.8","build_image_tag":"","build_wv_version":"","level":"info","msg":"No resource limits set, weaviate will use all available memory and CPU. To limit resources, set LIMIT_RESOURCES=true","time":"2026-01-19T00:20:19Z"}
{"build_git_commit":"6c571ff","build_go_version":"go1.22.8","build_image_tag":"","build_wv_version":"","level":"info","msg":"module offload-s3 is enabled","time":"2026-01-19T00:20:19Z"}
I would appreciate it if you could take another look and see whether this configuration could still cause problems with startup, port listening, or modules.
@kurokobo
I changed
volumes:
- ./volumes/weaviate:/var/lib/weaviate
to
volumes:
- weaviate_data:/var/lib/weaviate
and added the following at the end of the compose file:
volumes:
weaviate_data:
After making this change, I was able to upload files to the knowledge base successfully.
Thank you so much for your kind and detailed help; I sincerely appreciate it and hope to receive your support again in the future.
@rose_1001
Glad to hear the problem seems to be solved.
If your fix worked, the issue was probably related to filesystem permissions, ACLs, SELinux, or a similar mechanism on the Docker host. Alternatively, Docker may be running in rootless mode.
In any case, with a named volume (weaviate_data:/var/lib/weaviate) instead of a bind mount (./volumes/weaviate:/var/lib/weaviate), you should not normally hit any problems in day-to-day use. But if you ever plan file-based backup or restore of the environment, note that Weaviate's files are no longer under ./volumes/weaviate on the Docker host. Keep that in mind.
Enjoy!
@Dify_Forum_Helper please summarize this Q&A.
Q:
Uploading documents to the knowledge base on self-hosted Dify (Docker) fails with the error:
Could not connect to Weaviate: Connection to Weaviate failed. Details: .
All containers (including weaviate) are Up, but inside the Weaviate container, netstat shows no listener on port 8080. Why did this happen, and how was it resolved?
Summary
- Environment: self-hosted Dify (Docker, v1.11.2) with Weaviate 1.27.0 as the vector store.
- Symptoms:
  - "Could not connect to Weaviate" error when uploading files to the knowledge base (KB).
  - `docker compose ps` shows `docker-weaviate-1` as Up.
  - Inside the container, however, the Weaviate process is running but nothing is listening on `:8080`.
Root cause
Weaviate's data directory was assigned as a bind mount of a host directory, and this is believed to be the cause.
weaviate:
...
volumes:
- ./volumes/weaviate:/var/lib/weaviate # ← bind mount
On the host side of this path, one of the following problems likely existed:
- Mismatched permissions / ownership
- Restrictions such as ACLs / SELinux / AppArmor
- Docker running in rootless mode, restricting access to the path
As a result, Weaviate's storage initialization failed: the process itself existed, but it never bound HTTP port 8080.
From Dify's side this looked like "the container is Up, but TCP connections fail", which produced the error.
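When investigating a suspected bind-mount problem like this, the host-side state of the path can be inspected with a short standard-library sketch (the path below is hypothetical; run it as the user that starts Docker):

```python
import os
import stat

def describe_path(path: str) -> dict:
    """Report existence, ownership, mode, and writability of a path
    for the current user -- the basics to check on a bind-mount source."""
    if not os.path.exists(path):
        return {"exists": False}
    st = os.stat(path)
    return {
        "exists": True,
        "uid": st.st_uid,
        "gid": st.st_gid,
        "mode": stat.filemode(st.st_mode),
        "writable": os.access(path, os.W_OK),
    }

if __name__ == "__main__":
    # Hypothetical bind-mount source directory.
    print(describe_path("./volumes/weaviate"))
```

Note this only covers classic Unix permissions; SELinux contexts or ACL denials would still need `ls -Z` / `getfacl` on the host.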
Countermeasures taken and the resolution
- A restart with `docker compose down && docker compose up -d` was tried first, but did not resolve the problem.
- The Weaviate volume setting was then changed from a host-directory bind mount to a Docker named volume.
weaviate:
image: semitechnologies/weaviate:1.27.0
profiles:
- weaviate
restart: always
volumes:
- weaviate_data:/var/lib/weaviate # changed here
environment:
PERSISTENCE_DATA_PATH: ${WEAVIATE_PERSISTENCE_DATA_PATH:-/var/lib/weaviate}
...
# appended at the end of the file
volumes:
weaviate_data:
- After the change, the containers were restarted; Weaviate started normally and began listening on port 8080, and file uploads to the knowledge base completed successfully.
Best practices from this case
- Prefer a named volume for Weaviate
  - As in the official docker-compose.yaml, using a named volume such as `weaviate_data:/var/lib/weaviate` makes it easier to avoid problems caused by host-OS permissions or SELinux settings.
  - Unless there is a specific need for a bind mount (`./volumes/weaviate:/var/lib/weaviate`), a named volume is the safer choice.
- Things to check when using a bind mount
  - Does the directory (`./volumes/weaviate`) actually exist?
  - Use `ls -ld ./volumes ./volumes/weaviate` to check that the user running Docker has read/write access.
  - In an SELinux-enabled environment, the `:z` / `:Z` mount options, or adjusting the context, may need to be considered.
  - With rootless Docker, confirm that the user can access the path.
-
当“Could not connect to Weaviate”但容器处于 Up 状态时的故障排除
- 从容器内部检查端口是否实际被监听:
docker exec -it docker-weaviate-1 sh netstat -tlnp | grep 8080 - 如果 8080 没有出现,则 Weaviate 启动失败,应首先怀疑卷设置或环境变量。
- 从容器内部检查端口是否实际被监听:
- Backup considerations
  - With a named volume, Weaviate's data is not placed in an arbitrary directory on the host (such as `./volumes/weaviate`).
  - If file-based backups are needed, explicit volume-handling steps are required, e.g. `docker run --rm -v weaviate_data:/data ...`.
The conclusion of this topic:
By switching Weaviate's data directory from a host bind mount to a Docker named volume, port 8080 was listened on correctly and Dify's connection error was resolved.
This is a Q&A.