Describe the bug
The full news briefing audio doesn't play when accessing the feed in development mode; the logs show a crash in the TTS service (see below).
This happens because React mounts each component twice in development mode (a StrictMode development behavior), so two audio generation requests are triggered at the same time, and the TTS server can't handle both at once, either because of GPU resource exhaustion or thread-unsafe inference code.
The server runs without any issues with make prod.
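If the culprit is thread-unsafe inference code rather than pure VRAM exhaustion, serializing access to the model would likely mask the problem in dev mode. Below is a minimal sketch of what that could look like in tts-server.py; the endpoint shape is inferred from the traceback and the /api/tts?text=...&languagecode=... requests in the logs, while the model name, speaker reference file, and response handling are assumptions, not the project's actual code.

```python
import io
import threading

import numpy as np
import soundfile as sf
from fastapi import FastAPI, Response
from TTS.api import TTS

app = FastAPI()

# Model name assumed; the repo's actual model selection may differ.
tts_api = TTS("tts_models/multilingual/multi-dataset/xtts_v2")
tts_lock = threading.Lock()  # allow only one GPU inference at a time


@app.get("/api/tts")
def text_to_wav_audio(text: str, languagecode: str):
    # FastAPI runs sync endpoints in a threadpool, so two simultaneous
    # dev-mode requests would otherwise run tts() on two threads
    # against the same shared model instance.
    with tts_lock:
        wav = tts_api.tts(text=text, language=languagecode,
                          speaker_wav="reference.wav")  # speaker file assumed
    buf = io.BytesIO()
    sf.write(buf, np.asarray(wav), samplerate=24000, format="WAV")  # XTTS outputs 24 kHz
    return Response(content=buf.getvalue(), media_type="audio/wav")
```

The lock only serializes the two StrictMode-triggered requests; it doesn't fix whatever makes the model thread-unsafe, which would still bite under real concurrent load.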
To Reproduce
- Clone the repo (commit 275b813abb8be23f3268e2d79460c585743f3d50 at the time of writing)
- make dev
- Log in, add a source
- Try accessing the feed.
- Notice that only the first sentence of the briefing plays; then the TTS service throws a bunch of errors.
Expected behavior
For the news briefing audio to play exactly as it does with make prod.
Screenshots
N/A
Client information:
- Browser: Brave v1.73.97
Server information:
- OS: Ubuntu Server 24.04
- Newsbridge version (or latest commit hash): 275b813abb8be23f3268e2d79460c585743f3d50
- Server CPU: AMD Ryzen 5 1400
- Server RAM: 64GB
- Server GPU: Nvidia RTX 3060 12GB
Additional context
Server logs from beginning to first crash:
db-1 |
db-1 | PostgreSQL Database directory appears to contain a database; Skipping initialization
db-1 |
db-1 | 2024-12-20 20:01:36.887 UTC [1] LOG: starting PostgreSQL 17.0 (Debian 17.0-1.pgdg120+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 12.2.0-14) 12.2.0, 64-bit
db-1 | 2024-12-20 20:01:36.894 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
db-1 | 2024-12-20 20:01:36.894 UTC [1] LOG: listening on IPv6 address "::", port 5432
db-1 | 2024-12-20 20:01:36.900 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db-1 | 2024-12-20 20:01:36.917 UTC [29] LOG: database system was shut down at 2024-12-20 19:53:01 UTC
db-1 | 2024-12-20 20:01:36.926 UTC [1] LOG: database system is ready to accept connections
client-1 |
client-1 | > login-ui@0.1.0 start
client-1 | > react-scripts start
client-1 |
server-1 | INFO: Will watch for changes in these directories: ['/usr/src/app']
server-1 | INFO: Uvicorn running on http://0.0.0.0:5000 (Press CTRL+C to quit)
server-1 | INFO: Started reloader process [1] using StatReload
llm-1 | INFO: Will watch for changes in these directories: ['/app']
llm-1 | INFO: Uvicorn running on http://0.0.0.0:11000 (Press CTRL+C to quit)
llm-1 | INFO: Started reloader process [1] using StatReload
server-1 | INFO: Started server process [8]
server-1 | INFO: Waiting for application startup.
server-1 | INFO: Application startup complete.
llm-1 | INFO: Started server process [8]
llm-1 | INFO: Waiting for application startup.
llm-1 | 2024/12/20 20:01:39 routes.go:1259: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false ROCR_VISIBLE_DEVICES: http_proxy: https_proxy: no_proxy:]"
llm-1 | time=2024-12-20T20:01:39.871Z level=INFO source=images.go:757 msg="total blobs: 12"
llm-1 | time=2024-12-20T20:01:39.871Z level=INFO source=images.go:764 msg="total unused blobs removed: 0"
llm-1 | [GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.
llm-1 |
llm-1 | [GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.
llm-1 | - using env: export GIN_MODE=release
llm-1 | - using code: gin.SetMode(gin.ReleaseMode)
llm-1 |
llm-1 | [GIN-debug] POST /api/pull --> github.com/ollama/ollama/server.(*Server).PullHandler-fm (5 handlers)
llm-1 | [GIN-debug] POST /api/generate --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (5 handlers)
llm-1 | [GIN-debug] POST /api/chat --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (5 handlers)
llm-1 | [GIN-debug] POST /api/embed --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (5 handlers)
llm-1 | [GIN-debug] POST /api/embeddings --> github.com/ollama/ollama/server.(*Server).EmbeddingsHandler-fm (5 handlers)
llm-1 | [GIN-debug] POST /api/create --> github.com/ollama/ollama/server.(*Server).CreateHandler-fm (5 handlers)
llm-1 | [GIN-debug] POST /api/push --> github.com/ollama/ollama/server.(*Server).PushHandler-fm (5 handlers)
llm-1 | [GIN-debug] POST /api/copy --> github.com/ollama/ollama/server.(*Server).CopyHandler-fm (5 handlers)
llm-1 | [GIN-debug] DELETE /api/delete --> github.com/ollama/ollama/server.(*Server).DeleteHandler-fm (5 handlers)
llm-1 | [GIN-debug] POST /api/show --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (5 handlers)
llm-1 | [GIN-debug] POST /api/blobs/:digest --> github.com/ollama/ollama/server.(*Server).CreateBlobHandler-fm (5 handlers)
llm-1 | [GIN-debug] HEAD /api/blobs/:digest --> github.com/ollama/ollama/server.(*Server).HeadBlobHandler-fm (5 handlers)
llm-1 | [GIN-debug] GET /api/ps --> github.com/ollama/ollama/server.(*Server).PsHandler-fm (5 handlers)
llm-1 | [GIN-debug] POST /v1/chat/completions --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (6 handlers)
llm-1 | [GIN-debug] POST /v1/completions --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (6 handlers)
llm-1 | [GIN-debug] POST /v1/embeddings --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (6 handlers)
llm-1 | [GIN-debug] GET /v1/models --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (6 handlers)
llm-1 | [GIN-debug] GET /v1/models/:model --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (6 handlers)
llm-1 | [GIN-debug] GET / --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
llm-1 | [GIN-debug] GET /api/tags --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers)
llm-1 | [GIN-debug] GET /api/version --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
llm-1 | [GIN-debug] HEAD / --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
llm-1 | [GIN-debug] HEAD /api/tags --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers)
llm-1 | [GIN-debug] HEAD /api/version --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
llm-1 | time=2024-12-20T20:01:39.872Z level=INFO source=routes.go:1310 msg="Listening on 127.0.0.1:11434 (version 0.5.4)"
llm-1 | time=2024-12-20T20:01:39.873Z level=INFO source=routes.go:1339 msg="Dynamic LLM libraries" runners="[cpu_avx2 cuda_v11_avx cuda_v12_avx rocm_avx cpu cpu_avx]"
llm-1 | time=2024-12-20T20:01:39.873Z level=INFO source=gpu.go:226 msg="looking for compatible GPUs"
llm-1 | time=2024-12-20T20:01:40.103Z level=INFO source=types.go:131 msg="inference compute" id=GPU-f105236b-47b2-c587-59b6-5caed1905a05 library=cuda variant=v12 compute=8.6 driver=12.7 name="NVIDIA GeForce RTX 3060" total="11.7 GiB" available="10.9 GiB"
client-1 | (node:25) [DEP_WEBPACK_DEV_SERVER_ON_AFTER_SETUP_MIDDLEWARE] DeprecationWarning: 'onAfterSetupMiddleware' option is deprecated. Please use the 'setupMiddlewares' option.
client-1 | (Use `node --trace-deprecation ...` to show where the warning was created)
client-1 | (node:25) [DEP_WEBPACK_DEV_SERVER_ON_BEFORE_SETUP_MIDDLEWARE] DeprecationWarning: 'onBeforeSetupMiddleware' option is deprecated. Please use the 'setupMiddlewares' option.
llm-1 | Waiting for Ollama server to start
llm-1 | Downloading 'llama3.2' if not already downloaded...
client-1 | Starting the development server...
client-1 |
llm-1 | [GIN] 2024/12/20 - 20:01:41 | 200 | 760.326837ms | 127.0.0.1 | POST "/api/pull"
llm-1 | INFO: Application startup complete.
client-1 | Compiled successfully!
client-1 |
client-1 | You can now view login-ui in the browser.
client-1 |
client-1 | Local: http://localhost:3000
client-1 | On Your Network: http://172.19.0.6:3000
client-1 |
client-1 | Note that the development build is not optimized.
client-1 | To create a production build, use npm run build.
client-1 |
client-1 | webpack compiled successfully
tts-1 | INFO: Will watch for changes in these directories: ['/app']
tts-1 | INFO: Uvicorn running on http://0.0.0.0:5002 (Press CTRL+C to quit)
tts-1 | INFO: Started reloader process [1] using StatReload
tts-1 | INFO: Started server process [32]
tts-1 | INFO: Waiting for application startup.
tts-1 | INFO: Application startup complete.
server-1 | INFO: 172.19.0.1:59620 - "OPTIONS /login HTTP/1.1" 200 OK
server-1 | INFO: 172.19.0.1:59630 - "POST /login HTTP/1.1" 200 OK
server-1 | INFO: 172.19.0.1:59630 - "OPTIONS /get-headers/eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJlbWFpbCI6ImFAYS5jb20iLCJsYW5nIjoiZW4ifQ.zzMReba3bcBEfuW4sgg1oUHa_Tt2NQVpeq3FOFXqBkE HTTP/1.1" 200 OK
db-1 | 2024-12-20 20:03:06.466 UTC [34] LOG: unexpected EOF on client connection with an open transaction
db-1 | 2024-12-20 20:03:06.466 UTC [35] LOG: unexpected EOF on client connection with an open transaction
server-1 | /usr/src/app/src/parse_rss.py:63: RuntimeWarning: coroutine 'get_article_content' was never awaited
server-1 | if (get_article_content(entry.link)) == "Could not find the article body.": # TODO: this is not needed
server-1 | RuntimeWarning: Enable tracemalloc to get the object allocation traceback
server-1 | INFO: 172.19.0.1:59630 - "GET /get-headers/eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJlbWFpbCI6ImFAYS5jb20iLCJsYW5nIjoiZW4ifQ.zzMReba3bcBEfuW4sgg1oUHa_Tt2NQVpeq3FOFXqBkE HTTP/1.1" 200 OK
server-1 | INFO: 172.19.0.1:59562 - "OPTIONS /get-headers/eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJlbWFpbCI6ImFAYS5jb20iLCJsYW5nIjoiZW4ifQ.zzMReba3bcBEfuW4sgg1oUHa_Tt2NQVpeq3FOFXqBkE HTTP/1.1" 200 OK
server-1 | INFO: 172.19.0.1:59568 - "GET /get-audio/eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJlbWFpbCI6ImFAYS5jb20iLCJsYW5nIjoiZW4ifQ.zzMReba3bcBEfuW4sgg1oUHa_Tt2NQVpeq3FOFXqBkE HTTP/1.1" 200 OK
server-1 | INFO: 172.19.0.1:59562 - "GET /get-headers/eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJlbWFpbCI6ImFAYS5jb20iLCJsYW5nIjoiZW4ifQ.zzMReba3bcBEfuW4sgg1oUHa_Tt2NQVpeq3FOFXqBkE HTTP/1.1" 200 OK
db-1 | 2024-12-20 20:03:09.993 UTC [36] LOG: unexpected EOF on client connection with an open transaction
db-1 | 2024-12-20 20:03:09.993 UTC [37] LOG: unexpected EOF on client connection with an open transaction
db-1 | 2024-12-20 20:03:09.994 UTC [38] LOG: unexpected EOF on client connection with an open transaction
server-1 | INFO: 172.19.0.1:59576 - "GET /get-audio/eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJlbWFpbCI6ImFAYS5jb20iLCJsYW5nIjoiZW4ifQ.zzMReba3bcBEfuW4sgg1oUHa_Tt2NQVpeq3FOFXqBkE HTTP/1.1" 200 OK
db-1 | 2024-12-20 20:03:12.492 UTC [39] LOG: unexpected EOF on client connection with an open transaction
db-1 | 2024-12-20 20:03:12.492 UTC [40] LOG: unexpected EOF on client connection with an open transaction
llm-1 | INFO: 172.19.0.5:58516 - "GET /api/generate?prompt=I'm+gonna+give+you+a+news+story.-+Summarize+it+in+English.-+Talk+as+if+YOU+are+reporting+the+news+on+English+TV.+DON'T+mention+%22The+article+...%22.-+State+the+news+agency+(not+the+literal+link)+before+you+start+summarizing.+(e.g.+This+story+is+from+...)-+DO+NOT+use+AI+phrases+or+commentary.-+ONLY+USE+PARAGRAPH+FORMATTING+-+no+lists,+bolding,+or+italics.+ONLY+PLAIN+TEXT.-+Summarize+the+main+points+in+ONLY+1-2+paragraphs.+Be+clear+and+informative.Here's+the+story:%22%22%22(from+https://www.tabnak.ir/fa/rss/1/mostvisited)%D8%A8%D9%87+%DA%AF%D8%B2%D8%A7%D8%B1%D8%B4+%D8%AA%D8%A7%D8%A8%D9%86%D8%A7%DA%A9%D8%9B+%D8%B9%D8%A8%D8%AF%D8%A7%D9%84%D8%B1%D8%B6%D8%A7+%D8%AF%D8%A7%D9%88%D8%B1%DB%8C+%D8%A8%D8%A7+%D8%B7%D8%B1%D8%AD+%D8%B3%D9%88%D8%A7%D9%84%D8%A7%D8%AA%DB%8C+%D8%AF%D8%B1%D8%A8%D8%A7%D8%B1%D9%87+%DA%A9%D9%86%D8%B3%D8%B1%D8%AA+%D9%BE%D8%B1%D8%B3%D8%AA%D9%88+%D8%A7%D8%AD%D9%85%D8%AF%DB%8C+%D8%AF%D8%B1+%D8%B4%D8%A8%DA%A9%D9%87+%D8%A7%D8%AC%D8%AA%D9%85%D8%A7%D8%B9%DB%8C+%D8%A7%DB%8C%DA%A9%D8%B3+%D9%86%D9%88%D8%B4%D8%AA:%D9%A1-%DA%A9%D8%A7%D8%B1%D9%88%D8%A7%D9%86%D8%B3%D8%B1%D8%A7%DB%8C+%D8%AF%DB%8C%D8%B1+%DA%AF%DA%86%DB%8C%D9%86%D8%8C+%D9%85%D8%AD%D9%84+%D8%A7%D8%AC%D8%B1%D8%A7%DB%8C+%DA%A9%D9%86%D8%B3%D8%B1%D8%AA%D8%8C+%D8%AF%D8%B1+%D9%A8%D9%A0+%DA%A9%DB%8C%D9%84%D9%88%D9%85%D8%AA%D8%B1%DB%8C+%D9%82%D9%85%D8%8C+%D8%B2%DB%8C%D8%B1%D9%86%D8%B8%D8%B1+%D9%88%D8%B2%D8%A7%D8%B1%D8%AA+%DA%AF%D8%B1%D8%AF%D8%B4%DA%AF%D8%B1%DB%8C+%D9%81%D8%B9%D8%A7%D9%84%DB%8C%D8%AA+%D9%85%DB%8C%DA%A9%D9%86%D8%AF+%DA%A9%D9%87+%D8%B5%D8%B1%D9%81%D8%A7+%D8%A8%D8%B1%D8%A7%DB%8C+%D8%A8%D8%A7%D8%B2%D8%AF%DB%8C%D8%AF+%D8%A2%D9%86%D8%8C+%D9%87%D8%B1+%D9%86%D9%81%D8%B1+%D8%A8%D8%A7%DB%8C%D8%AF+%D8%A8%D9%84%DB%8C%D8%B7+%DB%B4%D9%A0+%D9%87%D8%B2%D8%A7%D8%B1+%D8%AA%D9%88%D9%85%D8%A7%D9%86%DB%8C+%D8%A8%D8%AE%D8%B1%D8%AF.+%D9%BE%D8%B1%D8%B3%D8%AA%D9%88+%D8%A7%D8%AD%D9%85%D8%AF%DB%8C+%D8%A8%D8%B1%D8%A7%D8%B3%D8%A7%D8%B3+%DA%86%D9%87+%D8%B1%D9%88%D8%B4+%DB%8C%D8%A7+%D9%82%D8%B1%D8%A7%D8%B1%D8%AF%D8%A7%D8%AF%DB%8C+%D8%AA%D9%88%D8%A7%D9%86%D8%B3%D8%AA%D9%87+%D8%A7%D8%B2+%DA%A9%D8%A7%D8%B1%D9%88%D8%A7%D9%86%D8%B3%D8%B1%D8%A7+%D8%A8%D8%B9%D9%86%D9%88%D8%A7%D9%86+%D9%85%D8%AD%D9%84+%DA%A9%D9%86%D8%B3%D8%B1%D8%AA+%D8%A7%D8%B3%D8%AA%D9%81%D8%A7%D8%AF%D9%87+%DA%A9%D9%86%D8%AF%D8%9F%E2%80%8F%D9%A2-%D8%A7%D9%86%D8%AA%D9%82%D8%A7%D9%84+%D9%88+%D9%86%D8%B5%D8%A8+%D8%B3%D8%A7%D8%B2%D9%87%D8%A7%D8%8C+%D8%AA%D8%AC%D9%87%DB%8C%D8%B2%D8%A7%D8%AA+%D8%B5%D8%AD%D9%86%D9%87%D8%8C+%D8%B5%D9%88%D8%AA%D8%8C+%D8%AA%D8%B5%D9%88%DB%8C%D8%B1%E2%80%8C+%D9%88+%D9%86%D9%88%D8%B1%D9%BE%D8%B1%D8%AF%D8%A7%D8%B2%DB%8C+%D8%A8%D9%87+%DA%A9%D8%A7%D8%B1%D9%88%D8%A7%D9%86%D8%B3%D8%B1%D8%A7%DB%8C+%D8%AF%DB%8C%D8%B1+%DA%AF%DA%86%DB%8C%D9%86%D8%8C+%D8%B2%D9%85%D8%A7%D9%86%E2%80%8C%D8%A8%D8%B1+%D9%88+%D9%86%DB%8C%D8%A7%D8%B2%D9%85%D9%86%D8%AF+%D9%85%D8%AC%D9%88%D8%B2+%D8%A7%D8%B3%D8%AA+%D9%88+%D8%AF%D9%88%D8%B1+%D8%A7%D8%B2+%DA%86%D8%B4%D9%85+%D9%85%D9%82%D8%A7%D9%85%D8%A7%D8%AA+%D9%85%D9%86%D8%B7%D9%82%D9%87+%D9%85%DB%8C%D8%B3%D8%B1+%D9%86%DB%8C%D8%B3%D8%AA.%D9%A3-%D8%A8%D8%A7+%D8%A7%D8%B9%D9%84%D8%A7%D9%85+%D9%87%D9%88%D8%A7%D8%B4%D9%86%D8%A7%D8%B3%DB%8C+%D9%82%D9%85%D8%8C+%D8%AF%D9%85%D8%A7%DB%8C+%D9%85%D9%86%D8%B7%D9%82%D9%87+%DA%A9%D8%A7%D8%B1%D9%88%D8%A7%D9%86%D8%B3%D8%B1%D8%A7%DB%8C+%DA%AF%DA%86%DB%8C%D9%86+%D8%AF%D8%B1+%D8%B2%D9%85%D8%A7%D9%86+%DA%A9%D9%86%D8%B3%D8%B1%D8%AA%D8%8C+%D9%85%D9%86%D9%81%DB%8C+%DB%8C%DA%A9+%D8%AF%D8%B1%D8%AC%D9%87+%D8%A8%D9%88%D8%AF%D9%87+%DA%A9%D9%87+%D9%84%D8%A8%D8%A7%D8%B3+%D8%AF%DA%
A9%D9%88%D9%84%D8%AA%D9%87+%D9%BE%D8%B1%D8%B3%D8%AA%D9%88+%D8%A7%D8%AD%D9%85%D8%AF%DB%8C+%D8%AF%D8%B1+%D8%A7%DB%8C%D9%86+%D8%AF%D9%85%D8%A7+%D8%B9%D8%AC%DB%8C%D8%A8+%D8%A7%D8%B3%D8%AA!%E2%80%8F%DB%B4-%D8%B5%D9%81%D8%AD%D9%87+%D9%88%DB%8C%DA%A9%DB%8C+%D9%BE%D8%AF%DB%8C%D8%A7%DB%8C+%D9%BE%D8%B1%D8%B3%D8%AA%D9%88+%D8%A7%D8%AD%D9%85%D8%AF%DB%8C%D8%8C+%D9%87%D9%85%D8%B2%D9%85%D8%A7%D9%86+%D8%A8%D8%A7+%D8%A8%D8%B1%DA%AF%D8%B2%D8%A7%D8%B1%DB%8C+%DA%A9%D9%86%D8%B3%D8%B1%D8%AA+%D8%AF%D8%B1+%D8%B5%D8%A8%D8%AD+%D9%BE%D9%86%D8%AC%D8%B4%D9%86%D8%A8%D9%87+%DB%B1%DB%B2+%D8%AF%D8%B3%D8%A7%D9%85%D8%A8%D8%B1+%D8%AA%D9%88%D8%B3%D8%B7+%E2%80%8E%D8%AD%D8%B3%D9%8A%D9%86+%D8%B1%D9%88%D9%86%D9%82%DB%8C+%D8%A8%D8%B9%D9%86%D9%88%D8%A7%D9%86+%DB%8C%DA%A9%DB%8C+%D8%A7%D8%B2+%D9%86%D9%88%DB%8C%D8%B3%D9%86%D8%AF%DA%AF%D8%A7%D9%86+%D9%88+%D8%A7%D8%AF%DB%8C%D8%AA%D9%88%D8%B1%D9%87%D8%A7%DB%8C+%D8%B5%D9%81%D8%AD%D9%87+%D9%88%DB%8C+%D8%A7%DB%8C%D8%AC%D8%A7%D8%AF+%D9%88+%C2%A0%D8%AA%DA%A9%D9%85%DB%8C%D9%84+%D8%B4%D8%AF%D9%87+%D8%A7%D8%B3%D8%AA.%D8%A8%D9%86%D8%A7+%D8%A8%D8%B1+%D8%A7%D8%B3%D9%86%D8%A7%D8%AF+%D9%85%D9%88%D8%AC%D9%88%D8%AF%D8%8C+%DA%A9%D9%84+%D9%87%D8%B2%DB%8C%D9%86%D9%87+%DA%A9%D9%86%D8%B3%D8%B1%D8%AA+%D9%BE%D8%B1%D8%B3%D8%AA%D9%88+%D8%A7%D8%AD%D9%85%D8%AF%DB%8C%D8%8C+%D8%B4%D8%A7%D9%85%D9%84+%D8%AF%D8%B3%D8%AA%D9%85%D8%B2%D8%AF+%D8%AE%D9%88%D8%A7%D9%86%D9%86%D8%AF%D9%87+%D9%88+%D9%86%D9%88%D8%A7%D8%B2%D9%86%D8%AF%DA%AF%D8%A7%D9%86+%D9%88+%D8%AA%D8%A7%D9%85%DB%8C%D9%86+%D8%AA%D8%AC%D9%87%DB%8C%D8%B2%D8%A7%D8%AA+%D9%88+%D8%A7%D8%B3%D8%AA%DB%8C%D8%AC%D8%8C+%D8%AA%D9%88%D8%B3%D8%B7+%DA%A9%D9%85%D9%BE%D8%A7%D9%86%DB%8C+%D8%A8%DB%8C%D9%86+%D8%A7%D9%84%D9%85%D9%84%D9%84%DB%8C+%22%D9%81%D8%B1%D8%B4+%D8%B9%D8%B1%D8%B3%DB%8C%D9%86%22+%DA%A9%D9%87+%D8%AF%D8%B1+%D8%AA%D9%87%D8%B1%D8%A7%D9%86+%D9%88+%D9%87%DB%8C%D9%88%D8%B3%D8%AA%D9%88%D9%86+%D8%A2%D9%85%D8%B1%DB%8C%DA%A9%D8%A7+%D8%B4%D8%B9%D8%A8%D9%87+%D8%AF%D8%A7%D8%B1%D8%AF%D8%8C+%D9%BE%D8%B1%D8%AF%D8%A7%D8%AE%D8%AA+%D8%B4%D8%AF%D9%87+%D8%A7%D8%B3%D8%AA.+%D8%A8%D8%B2%D9%88%D8%AF%DB%8C+%D8%AF%D8%B1%D8%A8%D8%A7%D8%B1%D9%87+%D8%A7%D9%82%D8%AF%D8%A7%D9%85%D8%A7%D8%AA+%D9%81%D8%B1%D9%87%D9%86%DA%AF%DB%8C+%D8%A7%DB%8C%D9%86+%DA%A9%D9%85%D9%BE%D8%A7%D9%86%DB%8C+%D8%AC%D9%87%D8%A7%D9%86%DB%8C+%D8%AE%D9%88%D8%A7%D9%87%D9%85+%D9%86%D9%88%D8%B4%D8%AA.%D8%B9%D8%A8%D8%A7%D8%B3+%D9%88+%D8%B3%D8%B9%DB%8C%D8%AF+%D8%B9%D8%B1%D8%B3%DB%8C%D9%86+(%D9%A1)%D9%88(%D9%A2)%22%22%22 HTTP/1.1" 200 OK
llm-1 | INFO: 172.19.0.5:58524 - "GET /api/generate?prompt=I'm+gonna+give+you+a+news+story.-+Summarize+it+in+English.-+Talk+as+if+YOU+are+reporting+the+news+on+English+TV.+DON'T+mention+%22The+article+...%22.-+State+the+news+agency+(not+the+literal+link)+before+you+start+summarizing.+(e.g.+This+story+is+from+...)-+DO+NOT+use+AI+phrases+or+commentary.-+ONLY+USE+PARAGRAPH+FORMATTING+-+no+lists,+bolding,+or+italics.+ONLY+PLAIN+TEXT.-+Summarize+the+main+points+in+ONLY+1-2+paragraphs.+Be+clear+and+informative.Here's+the+story:%22%22%22(from+https://www.tabnak.ir/fa/rss/1/mostvisited)%D8%A8%D9%87+%DA%AF%D8%B2%D8%A7%D8%B1%D8%B4+%D8%AA%D8%A7%D8%A8%D9%86%D8%A7%DA%A9%D8%9B+%D8%B9%D8%A8%D8%AF%D8%A7%D9%84%D8%B1%D8%B6%D8%A7+%D8%AF%D8%A7%D9%88%D8%B1%DB%8C+%D8%A8%D8%A7+%D8%B7%D8%B1%D8%AD+%D8%B3%D9%88%D8%A7%D9%84%D8%A7%D8%AA%DB%8C+%D8%AF%D8%B1%D8%A8%D8%A7%D8%B1%D9%87+%DA%A9%D9%86%D8%B3%D8%B1%D8%AA+%D9%BE%D8%B1%D8%B3%D8%AA%D9%88+%D8%A7%D8%AD%D9%85%D8%AF%DB%8C+%D8%AF%D8%B1+%D8%B4%D8%A8%DA%A9%D9%87+%D8%A7%D8%AC%D8%AA%D9%85%D8%A7%D8%B9%DB%8C+%D8%A7%DB%8C%DA%A9%D8%B3+%D9%86%D9%88%D8%B4%D8%AA:%D9%A1-%DA%A9%D8%A7%D8%B1%D9%88%D8%A7%D9%86%D8%B3%D8%B1%D8%A7%DB%8C+%D8%AF%DB%8C%D8%B1+%DA%AF%DA%86%DB%8C%D9%86%D8%8C+%D9%85%D8%AD%D9%84+%D8%A7%D8%AC%D8%B1%D8%A7%DB%8C+%DA%A9%D9%86%D8%B3%D8%B1%D8%AA%D8%8C+%D8%AF%D8%B1+%D9%A8%D9%A0+%DA%A9%DB%8C%D9%84%D9%88%D9%85%D8%AA%D8%B1%DB%8C+%D9%82%D9%85%D8%8C+%D8%B2%DB%8C%D8%B1%D9%86%D8%B8%D8%B1+%D9%88%D8%B2%D8%A7%D8%B1%D8%AA+%DA%AF%D8%B1%D8%AF%D8%B4%DA%AF%D8%B1%DB%8C+%D9%81%D8%B9%D8%A7%D9%84%DB%8C%D8%AA+%D9%85%DB%8C%DA%A9%D9%86%D8%AF+%DA%A9%D9%87+%D8%B5%D8%B1%D9%81%D8%A7+%D8%A8%D8%B1%D8%A7%DB%8C+%D8%A8%D8%A7%D8%B2%D8%AF%DB%8C%D8%AF+%D8%A2%D9%86%D8%8C+%D9%87%D8%B1+%D9%86%D9%81%D8%B1+%D8%A8%D8%A7%DB%8C%D8%AF+%D8%A8%D9%84%DB%8C%D8%B7+%DB%B4%D9%A0+%D9%87%D8%B2%D8%A7%D8%B1+%D8%AA%D9%88%D9%85%D8%A7%D9%86%DB%8C+%D8%A8%D8%AE%D8%B1%D8%AF.+%D9%BE%D8%B1%D8%B3%D8%AA%D9%88+%D8%A7%D8%AD%D9%85%D8%AF%DB%8C+%D8%A8%D8%B1%D8%A7%D8%B3%D8%A7%D8%B3+%DA%86%D9%87+%D8%B1%D9%88%D8%B4+%DB%8C%D8%A7+%D9%82%D8%B1%D8%A7%D8%B1%D8%AF%D8%A7%D8%AF%DB%8C+%D8%AA%D9%88%D8%A7%D9%86%D8%B3%D8%AA%D9%87+%D8%A7%D8%B2+%DA%A9%D8%A7%D8%B1%D9%88%D8%A7%D9%86%D8%B3%D8%B1%D8%A7+%D8%A8%D8%B9%D9%86%D9%88%D8%A7%D9%86+%D9%85%D8%AD%D9%84+%DA%A9%D9%86%D8%B3%D8%B1%D8%AA+%D8%A7%D8%B3%D8%AA%D9%81%D8%A7%D8%AF%D9%87+%DA%A9%D9%86%D8%AF%D8%9F%E2%80%8F%D9%A2-%D8%A7%D9%86%D8%AA%D9%82%D8%A7%D9%84+%D9%88+%D9%86%D8%B5%D8%A8+%D8%B3%D8%A7%D8%B2%D9%87%D8%A7%D8%8C+%D8%AA%D8%AC%D9%87%DB%8C%D8%B2%D8%A7%D8%AA+%D8%B5%D8%AD%D9%86%D9%87%D8%8C+%D8%B5%D9%88%D8%AA%D8%8C+%D8%AA%D8%B5%D9%88%DB%8C%D8%B1%E2%80%8C+%D9%88+%D9%86%D9%88%D8%B1%D9%BE%D8%B1%D8%AF%D8%A7%D8%B2%DB%8C+%D8%A8%D9%87+%DA%A9%D8%A7%D8%B1%D9%88%D8%A7%D9%86%D8%B3%D8%B1%D8%A7%DB%8C+%D8%AF%DB%8C%D8%B1+%DA%AF%DA%86%DB%8C%D9%86%D8%8C+%D8%B2%D9%85%D8%A7%D9%86%E2%80%8C%D8%A8%D8%B1+%D9%88+%D9%86%DB%8C%D8%A7%D8%B2%D9%85%D9%86%D8%AF+%D9%85%D8%AC%D9%88%D8%B2+%D8%A7%D8%B3%D8%AA+%D9%88+%D8%AF%D9%88%D8%B1+%D8%A7%D8%B2+%DA%86%D8%B4%D9%85+%D9%85%D9%82%D8%A7%D9%85%D8%A7%D8%AA+%D9%85%D9%86%D8%B7%D9%82%D9%87+%D9%85%DB%8C%D8%B3%D8%B1+%D9%86%DB%8C%D8%B3%D8%AA.%D9%A3-%D8%A8%D8%A7+%D8%A7%D8%B9%D9%84%D8%A7%D9%85+%D9%87%D9%88%D8%A7%D8%B4%D9%86%D8%A7%D8%B3%DB%8C+%D9%82%D9%85%D8%8C+%D8%AF%D9%85%D8%A7%DB%8C+%D9%85%D9%86%D8%B7%D9%82%D9%87+%DA%A9%D8%A7%D8%B1%D9%88%D8%A7%D9%86%D8%B3%D8%B1%D8%A7%DB%8C+%DA%AF%DA%86%DB%8C%D9%86+%D8%AF%D8%B1+%D8%B2%D9%85%D8%A7%D9%86+%DA%A9%D9%86%D8%B3%D8%B1%D8%AA%D8%8C+%D9%85%D9%86%D9%81%DB%8C+%DB%8C%DA%A9+%D8%AF%D8%B1%D8%AC%D9%87+%D8%A8%D9%88%D8%AF%D9%87+%DA%A9%D9%87+%D9%84%D8%A8%D8%A7%D8%B3+%D8%AF%DA%
A9%D9%88%D9%84%D8%AA%D9%87+%D9%BE%D8%B1%D8%B3%D8%AA%D9%88+%D8%A7%D8%AD%D9%85%D8%AF%DB%8C+%D8%AF%D8%B1+%D8%A7%DB%8C%D9%86+%D8%AF%D9%85%D8%A7+%D8%B9%D8%AC%DB%8C%D8%A8+%D8%A7%D8%B3%D8%AA!%E2%80%8F%DB%B4-%D8%B5%D9%81%D8%AD%D9%87+%D9%88%DB%8C%DA%A9%DB%8C+%D9%BE%D8%AF%DB%8C%D8%A7%DB%8C+%D9%BE%D8%B1%D8%B3%D8%AA%D9%88+%D8%A7%D8%AD%D9%85%D8%AF%DB%8C%D8%8C+%D9%87%D9%85%D8%B2%D9%85%D8%A7%D9%86+%D8%A8%D8%A7+%D8%A8%D8%B1%DA%AF%D8%B2%D8%A7%D8%B1%DB%8C+%DA%A9%D9%86%D8%B3%D8%B1%D8%AA+%D8%AF%D8%B1+%D8%B5%D8%A8%D8%AD+%D9%BE%D9%86%D8%AC%D8%B4%D9%86%D8%A8%D9%87+%DB%B1%DB%B2+%D8%AF%D8%B3%D8%A7%D9%85%D8%A8%D8%B1+%D8%AA%D9%88%D8%B3%D8%B7+%E2%80%8E%D8%AD%D8%B3%D9%8A%D9%86+%D8%B1%D9%88%D9%86%D9%82%DB%8C+%D8%A8%D8%B9%D9%86%D9%88%D8%A7%D9%86+%DB%8C%DA%A9%DB%8C+%D8%A7%D8%B2+%D9%86%D9%88%DB%8C%D8%B3%D9%86%D8%AF%DA%AF%D8%A7%D9%86+%D9%88+%D8%A7%D8%AF%DB%8C%D8%AA%D9%88%D8%B1%D9%87%D8%A7%DB%8C+%D8%B5%D9%81%D8%AD%D9%87+%D9%88%DB%8C+%D8%A7%DB%8C%D8%AC%D8%A7%D8%AF+%D9%88+%C2%A0%D8%AA%DA%A9%D9%85%DB%8C%D9%84+%D8%B4%D8%AF%D9%87+%D8%A7%D8%B3%D8%AA.%D8%A8%D9%86%D8%A7+%D8%A8%D8%B1+%D8%A7%D8%B3%D9%86%D8%A7%D8%AF+%D9%85%D9%88%D8%AC%D9%88%D8%AF%D8%8C+%DA%A9%D9%84+%D9%87%D8%B2%DB%8C%D9%86%D9%87+%DA%A9%D9%86%D8%B3%D8%B1%D8%AA+%D9%BE%D8%B1%D8%B3%D8%AA%D9%88+%D8%A7%D8%AD%D9%85%D8%AF%DB%8C%D8%8C+%D8%B4%D8%A7%D9%85%D9%84+%D8%AF%D8%B3%D8%AA%D9%85%D8%B2%D8%AF+%D8%AE%D9%88%D8%A7%D9%86%D9%86%D8%AF%D9%87+%D9%88+%D9%86%D9%88%D8%A7%D8%B2%D9%86%D8%AF%DA%AF%D8%A7%D9%86+%D9%88+%D8%AA%D8%A7%D9%85%DB%8C%D9%86+%D8%AA%D8%AC%D9%87%DB%8C%D8%B2%D8%A7%D8%AA+%D9%88+%D8%A7%D8%B3%D8%AA%DB%8C%D8%AC%D8%8C+%D8%AA%D9%88%D8%B3%D8%B7+%DA%A9%D9%85%D9%BE%D8%A7%D9%86%DB%8C+%D8%A8%DB%8C%D9%86+%D8%A7%D9%84%D9%85%D9%84%D9%84%DB%8C+%22%D9%81%D8%B1%D8%B4+%D8%B9%D8%B1%D8%B3%DB%8C%D9%86%22+%DA%A9%D9%87+%D8%AF%D8%B1+%D8%AA%D9%87%D8%B1%D8%A7%D9%86+%D9%88+%D9%87%DB%8C%D9%88%D8%B3%D8%AA%D9%88%D9%86+%D8%A2%D9%85%D8%B1%DB%8C%DA%A9%D8%A7+%D8%B4%D8%B9%D8%A8%D9%87+%D8%AF%D8%A7%D8%B1%D8%AF%D8%8C+%D9%BE%D8%B1%D8%AF%D8%A7%D8%AE%D8%AA+%D8%B4%D8%AF%D9%87+%D8%A7%D8%B3%D8%AA.+%D8%A8%D8%B2%D9%88%D8%AF%DB%8C+%D8%AF%D8%B1%D8%A8%D8%A7%D8%B1%D9%87+%D8%A7%D9%82%D8%AF%D8%A7%D9%85%D8%A7%D8%AA+%D9%81%D8%B1%D9%87%D9%86%DA%AF%DB%8C+%D8%A7%DB%8C%D9%86+%DA%A9%D9%85%D9%BE%D8%A7%D9%86%DB%8C+%D8%AC%D9%87%D8%A7%D9%86%DB%8C+%D8%AE%D9%88%D8%A7%D9%87%D9%85+%D9%86%D9%88%D8%B4%D8%AA.%D8%B9%D8%A8%D8%A7%D8%B3+%D9%88+%D8%B3%D8%B9%DB%8C%D8%AF+%D8%B9%D8%B1%D8%B3%DB%8C%D9%86+(%D9%A1)%D9%88(%D9%A2)%22%22%22 HTTP/1.1" 200 OK
llm-1 | time=2024-12-20T20:03:14.205Z level=INFO source=sched.go:714 msg="new model will fit in available VRAM in single GPU, loading" model=/root/.ollama/models/blobs/sha256-dde5aa3fc5ffc17176b5e8bdc82f587b24b2678c6c66101bf7da77af9f7ccdff gpu=GPU-f105236b-47b2-c587-59b6-5caed1905a05 parallel=4 available=7605846016 required="3.7 GiB"
llm-1 | time=2024-12-20T20:03:14.398Z level=INFO source=server.go:104 msg="system memory" total="62.7 GiB" free="45.2 GiB" free_swap="64.0 GiB"
llm-1 | time=2024-12-20T20:03:14.399Z level=INFO source=memory.go:356 msg="offload to cuda" layers.requested=-1 layers.model=29 layers.offload=29 layers.split="" memory.available="[7.1 GiB]" memory.gpu_overhead="0 B" memory.required.full="3.7 GiB" memory.required.partial="3.7 GiB" memory.required.kv="896.0 MiB" memory.required.allocations="[3.7 GiB]" memory.weights.total="2.4 GiB" memory.weights.repeating="2.1 GiB" memory.weights.nonrepeating="308.2 MiB" memory.graph.full="424.0 MiB" memory.graph.partial="570.7 MiB"
llm-1 | time=2024-12-20T20:03:14.400Z level=INFO source=server.go:376 msg="starting llama server" cmd="/usr/local/lib/ollama/runners/cuda_v12_avx/ollama_llama_server runner --model /root/.ollama/models/blobs/sha256-dde5aa3fc5ffc17176b5e8bdc82f587b24b2678c6c66101bf7da77af9f7ccdff --ctx-size 8192 --batch-size 512 --n-gpu-layers 29 --threads 4 --parallel 4 --port 45907"
llm-1 | time=2024-12-20T20:03:14.400Z level=INFO source=sched.go:449 msg="loaded runners" count=1
llm-1 | time=2024-12-20T20:03:14.400Z level=INFO source=server.go:555 msg="waiting for llama runner to start responding"
llm-1 | time=2024-12-20T20:03:14.401Z level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server error"
llm-1 | time=2024-12-20T20:03:14.455Z level=INFO source=runner.go:945 msg="starting go runner"
llm-1 | ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
llm-1 | ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
llm-1 | ggml_cuda_init: found 1 CUDA devices:
llm-1 | Device 0: NVIDIA GeForce RTX 3060, compute capability 8.6, VMM: yes
llm-1 | time=2024-12-20T20:03:14.472Z level=INFO source=runner.go:946 msg=system info="CUDA : ARCHS = 600,610,620,700,720,750,800,860,870,890,900 | USE_GRAPHS = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | LLAMAFILE = 1 | AARCH64_REPACK = 1 | cgo(gcc)" threads=4
llm-1 | time=2024-12-20T20:03:14.472Z level=INFO source=.:0 msg="Server listening on 127.0.0.1:45907"
llm-1 | llama_load_model_from_file: using device CUDA0 (NVIDIA GeForce RTX 3060) - 7253 MiB free
llm-1 | time=2024-12-20T20:03:14.652Z level=INFO source=server.go:589 msg="waiting for server to become available" status="llm server loading model"
llm-1 | llama_model_loader: loaded meta data with 30 key-value pairs and 255 tensors from /root/.ollama/models/blobs/sha256-dde5aa3fc5ffc17176b5e8bdc82f587b24b2678c6c66101bf7da77af9f7ccdff (version GGUF V3 (latest))
llm-1 | llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llm-1 | llama_model_loader: - kv 0: general.architecture str = llama
llm-1 | llama_model_loader: - kv 1: general.type str = model
llm-1 | llama_model_loader: - kv 2: general.name str = Llama 3.2 3B Instruct
llm-1 | llama_model_loader: - kv 3: general.finetune str = Instruct
llm-1 | llama_model_loader: - kv 4: general.basename str = Llama-3.2
llm-1 | llama_model_loader: - kv 5: general.size_label str = 3B
llm-1 | llama_model_loader: - kv 6: general.tags arr[str,6] = ["facebook", "meta", "pytorch", "llam...
llm-1 | llama_model_loader: - kv 7: general.languages arr[str,8] = ["en", "de", "fr", "it", "pt", "hi", ...
llm-1 | llama_model_loader: - kv 8: llama.block_count u32 = 28
llm-1 | llama_model_loader: - kv 9: llama.context_length u32 = 131072
llm-1 | llama_model_loader: - kv 10: llama.embedding_length u32 = 3072
llm-1 | llama_model_loader: - kv 11: llama.feed_forward_length u32 = 8192
llm-1 | llama_model_loader: - kv 12: llama.attention.head_count u32 = 24
llm-1 | llama_model_loader: - kv 13: llama.attention.head_count_kv u32 = 8
llm-1 | llama_model_loader: - kv 14: llama.rope.freq_base f32 = 500000.000000
llm-1 | llama_model_loader: - kv 15: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llm-1 | llama_model_loader: - kv 16: llama.attention.key_length u32 = 128
llm-1 | llama_model_loader: - kv 17: llama.attention.value_length u32 = 128
llm-1 | llama_model_loader: - kv 18: general.file_type u32 = 15
llm-1 | llama_model_loader: - kv 19: llama.vocab_size u32 = 128256
llm-1 | llama_model_loader: - kv 20: llama.rope.dimension_count u32 = 128
llm-1 | llama_model_loader: - kv 21: tokenizer.ggml.model str = gpt2
llm-1 | llama_model_loader: - kv 22: tokenizer.ggml.pre str = llama-bpe
llm-1 | llama_model_loader: - kv 23: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llm-1 | llama_model_loader: - kv 24: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llm-1 | llama_model_loader: - kv 25: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llm-1 | llama_model_loader: - kv 26: tokenizer.ggml.bos_token_id u32 = 128000
llm-1 | llama_model_loader: - kv 27: tokenizer.ggml.eos_token_id u32 = 128009
llm-1 | llama_model_loader: - kv 28: tokenizer.chat_template str = {{- bos_token }}\n{%- if custom_tools ...
llm-1 | llama_model_loader: - kv 29: general.quantization_version u32 = 2
llm-1 | llama_model_loader: - type f32: 58 tensors
llm-1 | llama_model_loader: - type q4_K: 168 tensors
llm-1 | llama_model_loader: - type q6_K: 29 tensors
llm-1 | llm_load_vocab: special tokens cache size = 256
llm-1 | llm_load_vocab: token to piece cache size = 0.7999 MB
llm-1 | llm_load_print_meta: format = GGUF V3 (latest)
llm-1 | llm_load_print_meta: arch = llama
llm-1 | llm_load_print_meta: vocab type = BPE
llm-1 | llm_load_print_meta: n_vocab = 128256
llm-1 | llm_load_print_meta: n_merges = 280147
llm-1 | llm_load_print_meta: vocab_only = 0
llm-1 | llm_load_print_meta: n_ctx_train = 131072
llm-1 | llm_load_print_meta: n_embd = 3072
llm-1 | llm_load_print_meta: n_layer = 28
llm-1 | llm_load_print_meta: n_head = 24
llm-1 | llm_load_print_meta: n_head_kv = 8
llm-1 | llm_load_print_meta: n_rot = 128
llm-1 | llm_load_print_meta: n_swa = 0
llm-1 | llm_load_print_meta: n_embd_head_k = 128
llm-1 | llm_load_print_meta: n_embd_head_v = 128
llm-1 | llm_load_print_meta: n_gqa = 3
llm-1 | llm_load_print_meta: n_embd_k_gqa = 1024
llm-1 | llm_load_print_meta: n_embd_v_gqa = 1024
llm-1 | llm_load_print_meta: f_norm_eps = 0.0e+00
llm-1 | llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm-1 | llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm-1 | llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm-1 | llm_load_print_meta: f_logit_scale = 0.0e+00
llm-1 | llm_load_print_meta: n_ff = 8192
llm-1 | llm_load_print_meta: n_expert = 0
llm-1 | llm_load_print_meta: n_expert_used = 0
llm-1 | llm_load_print_meta: causal attn = 1
llm-1 | llm_load_print_meta: pooling type = 0
llm-1 | llm_load_print_meta: rope type = 0
llm-1 | llm_load_print_meta: rope scaling = linear
llm-1 | llm_load_print_meta: freq_base_train = 500000.0
llm-1 | llm_load_print_meta: freq_scale_train = 1
llm-1 | llm_load_print_meta: n_ctx_orig_yarn = 131072
llm-1 | llm_load_print_meta: rope_finetuned = unknown
llm-1 | llm_load_print_meta: ssm_d_conv = 0
llm-1 | llm_load_print_meta: ssm_d_inner = 0
llm-1 | llm_load_print_meta: ssm_d_state = 0
llm-1 | llm_load_print_meta: ssm_dt_rank = 0
llm-1 | llm_load_print_meta: ssm_dt_b_c_rms = 0
llm-1 | llm_load_print_meta: model type = 3B
llm-1 | llm_load_print_meta: model ftype = Q4_K - Medium
llm-1 | llm_load_print_meta: model params = 3.21 B
llm-1 | llm_load_print_meta: model size = 1.87 GiB (5.01 BPW)
llm-1 | llm_load_print_meta: general.name = Llama 3.2 3B Instruct
llm-1 | llm_load_print_meta: BOS token = 128000 '<|begin_of_text|>'
llm-1 | llm_load_print_meta: EOS token = 128009 '<|eot_id|>'
llm-1 | llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
llm-1 | llm_load_print_meta: EOM token = 128008 '<|eom_id|>'
llm-1 | llm_load_print_meta: LF token = 128 'Ä'
llm-1 | llm_load_print_meta: EOG token = 128008 '<|eom_id|>'
llm-1 | llm_load_print_meta: EOG token = 128009 '<|eot_id|>'
llm-1 | llm_load_print_meta: max token length = 256
llm-1 | llm_load_tensors: offloading 28 repeating layers to GPU
llm-1 | llm_load_tensors: offloading output layer to GPU
llm-1 | llm_load_tensors: offloaded 29/29 layers to GPU
llm-1 | llm_load_tensors: CPU_Mapped model buffer size = 308.23 MiB
llm-1 | llm_load_tensors: CUDA0 model buffer size = 1918.35 MiB
llm-1 | llama_new_context_with_model: n_seq_max = 4
llm-1 | llama_new_context_with_model: n_ctx = 8192
llm-1 | llama_new_context_with_model: n_ctx_per_seq = 2048
llm-1 | llama_new_context_with_model: n_batch = 2048
llm-1 | llama_new_context_with_model: n_ubatch = 512
llm-1 | llama_new_context_with_model: flash_attn = 0
llm-1 | llama_new_context_with_model: freq_base = 500000.0
llm-1 | llama_new_context_with_model: freq_scale = 1
llm-1 | llama_new_context_with_model: n_ctx_per_seq (2048) < n_ctx_train (131072) -- the full capacity of the model will not be utilized
llm-1 | llama_kv_cache_init: CUDA0 KV buffer size = 896.00 MiB
llm-1 | llama_new_context_with_model: KV self size = 896.00 MiB, K (f16): 448.00 MiB, V (f16): 448.00 MiB
llm-1 | llama_new_context_with_model: CUDA_Host output buffer size = 2.00 MiB
llm-1 | llama_new_context_with_model: CUDA0 compute buffer size = 424.00 MiB
llm-1 | llama_new_context_with_model: CUDA_Host compute buffer size = 22.01 MiB
llm-1 | llama_new_context_with_model: graph nodes = 902
llm-1 | llama_new_context_with_model: graph splits = 2
llm-1 | time=2024-12-20T20:03:16.159Z level=INFO source=server.go:594 msg="llama runner started in 1.76 seconds"
tts-1 | INFO: 172.19.0.5:36400 - "GET /api/tts?text=This+story+is+from+BBC+News.%0A&languagecode=en HTTP/1.1" 500 Internal Server Error
tts-1 | ERROR: Exception in ASGI application
tts-1 | Traceback (most recent call last):
tts-1 | File "/app/.venv/lib/python3.10/site-packages/uvicorn/protocols/http/h11_impl.py", line 403, in run_asgi
tts-1 | result = await app( # type: ignore[func-returns-value]
tts-1 | File "/app/.venv/lib/python3.10/site-packages/uvicorn/middleware/proxy_headers.py", line 60, in __call__
tts-1 | return await self.app(scope, receive, send)
tts-1 | File "/app/.venv/lib/python3.10/site-packages/fastapi/applications.py", line 1054, in __call__
tts-1 | await super().__call__(scope, receive, send)
tts-1 | File "/app/.venv/lib/python3.10/site-packages/starlette/applications.py", line 113, in __call__
tts-1 | await self.middleware_stack(scope, receive, send)
tts-1 | File "/app/.venv/lib/python3.10/site-packages/starlette/middleware/errors.py", line 187, in __call__
tts-1 | raise exc
tts-1 | File "/app/.venv/lib/python3.10/site-packages/starlette/middleware/errors.py", line 165, in __call__
tts-1 | await self.app(scope, receive, _send)
tts-1 | File "/app/.venv/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 62, in __call__
tts-1 | await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
tts-1 | File "/app/.venv/lib/python3.10/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
tts-1 | raise exc
tts-1 | File "/app/.venv/lib/python3.10/site-packages/starlette/_exception_handler.py", line 42, in wrapped_app
tts-1 | await app(scope, receive, sender)
tts-1 | File "/app/.venv/lib/python3.10/site-packages/starlette/routing.py", line 715, in __call__
tts-1 | await self.middleware_stack(scope, receive, send)
tts-1 | File "/app/.venv/lib/python3.10/site-packages/starlette/routing.py", line 735, in app
tts-1 | await route.handle(scope, receive, send)
tts-1 | File "/app/.venv/lib/python3.10/site-packages/starlette/routing.py", line 288, in handle
tts-1 | await self.app(scope, receive, send)
tts-1 | File "/app/.venv/lib/python3.10/site-packages/starlette/routing.py", line 76, in app
tts-1 | await wrap_app_handling_exceptions(app, request)(scope, receive, send)
tts-1 | File "/app/.venv/lib/python3.10/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
tts-1 | raise exc
tts-1 | File "/app/.venv/lib/python3.10/site-packages/starlette/_exception_handler.py", line 42, in wrapped_app
tts-1 | await app(scope, receive, sender)
tts-1 | File "/app/.venv/lib/python3.10/site-packages/starlette/routing.py", line 73, in app
tts-1 | response = await f(request)
tts-1 | File "/app/.venv/lib/python3.10/site-packages/fastapi/routing.py", line 301, in app
tts-1 | raw_response = await run_endpoint_function(
tts-1 | File "/app/.venv/lib/python3.10/site-packages/fastapi/routing.py", line 214, in run_endpoint_function
tts-1 | return await run_in_threadpool(dependant.call, **values)
tts-1 | File "/app/.venv/lib/python3.10/site-packages/starlette/concurrency.py", line 39, in run_in_threadpool
tts-1 | return await anyio.to_thread.run_sync(func, *args)
tts-1 | File "/app/.venv/lib/python3.10/site-packages/anyio/to_thread.py", line 56, in run_sync
tts-1 | return await get_async_backend().run_sync_in_worker_thread(
tts-1 | File "/app/.venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 2505, in run_sync_in_worker_thread
tts-1 | return await future
tts-1 | File "/app/.venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 1005, in run
tts-1 | result = context.run(func, *args)
tts-1 | File "/app/tts-server.py", line 27, in text_to_wav_audio
tts-1 | wav_arr = api.tts(
tts-1 | File "/app/.venv/lib/python3.10/site-packages/TTS/api.py", line 312, in tts
tts-1 | wav = self.synthesizer.tts(
tts-1 | File "/app/.venv/lib/python3.10/site-packages/TTS/utils/synthesizer.py", line 406, in tts
tts-1 | outputs = self.tts_model.synthesize(
tts-1 | File "/app/.venv/lib/python3.10/site-packages/TTS/tts/models/xtts.py", line 401, in synthesize
tts-1 | return self.inference(text, language, gpt_cond_latent, speaker_embedding, **settings)
tts-1 | File "/app/.venv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
tts-1 | return func(*args, **kwargs)
tts-1 | File "/app/.venv/lib/python3.10/site-packages/TTS/tts/models/xtts.py", line 532, in inference
tts-1 | gpt_codes = self.gpt.generate(
tts-1 | File "/app/.venv/lib/python3.10/site-packages/TTS/tts/layers/xtts/gpt.py", line 512, in generate
tts-1 | gen = self.gpt_inference.generate(
tts-1 | File "/app/.venv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
tts-1 | return func(*args, **kwargs)
tts-1 | File "/app/.venv/lib/python3.10/site-packages/transformers/generation/utils.py", line 2215, in generate
tts-1 | result = self._sample(
tts-1 | File "/app/.venv/lib/python3.10/site-packages/transformers/generation/utils.py", line 3206, in _sample
tts-1 | outputs = self(**model_inputs, return_dict=True)
tts-1 | File "/app/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
tts-1 | return self._call_impl(*args, **kwargs)
tts-1 | File "/app/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
tts-1 | return forward_call(*args, **kwargs)
tts-1 | File "/app/.venv/lib/python3.10/site-packages/TTS/tts/layers/xtts/gpt_inference.py", line 98, in forward
tts-1 | transformer_outputs = self.transformer(
tts-1 | File "/app/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
tts-1 | return self._call_impl(*args, **kwargs)
tts-1 | File "/app/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
tts-1 | return forward_call(*args, **kwargs)
tts-1 | File "/app/.venv/lib/python3.10/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1032, in forward
tts-1 | hidden_states = inputs_embeds + position_embeds
tts-1 | RuntimeError: The size of tensor a (55) must match the size of tensor b (51) at non-singleton dimension 1
server-1 | ERROR: Exception in ASGI application
server-1 | Traceback (most recent call last):
server-1 | File "/usr/local/lib/python3.13/site-packages/starlette/responses.py", line 259, in __call__
server-1 | await wrap(partial(self.listen_for_disconnect, receive))
server-1 | File "/usr/local/lib/python3.13/site-packages/starlette/responses.py", line 255, in wrap
server-1 | await func()
server-1 | File "/usr/local/lib/python3.13/site-packages/starlette/responses.py", line 232, in listen_for_disconnect
server-1 | message = await receive()
server-1 | ^^^^^^^^^^^^^^^
server-1 | File "/usr/local/lib/python3.13/site-packages/uvicorn/protocols/http/h11_impl.py", line 534, in receive
server-1 | await self.message_event.wait()
server-1 | File "/usr/local/lib/python3.13/asyncio/locks.py", line 213, in wait
server-1 | await fut
server-1 | asyncio.exceptions.CancelledError: Cancelled by cancel scope 7db769726850
server-1 |
server-1 | During handling of the above exception, another exception occurred:
server-1 |
server-1 | + Exception Group Traceback (most recent call last):
server-1 | | File "/usr/local/lib/python3.13/site-packages/uvicorn/protocols/http/h11_impl.py", line 406, in run_asgi
server-1 | | result = await app( # type: ignore[func-returns-value]
server-1 | | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
server-1 | | self.scope, self.receive, self.send
server-1 | | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
server-1 | | )
server-1 | | ^
server-1 | | File "/usr/local/lib/python3.13/site-packages/uvicorn/middleware/proxy_headers.py", line 60, in __call__
server-1 | | return await self.app(scope, receive, send)
server-1 | | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
server-1 | | File "/usr/local/lib/python3.13/site-packages/fastapi/applications.py", line 1054, in __call__
server-1 | | await super().__call__(scope, receive, send)
server-1 | | File "/usr/local/lib/python3.13/site-packages/starlette/applications.py", line 113, in __call__
server-1 | | await self.middleware_stack(scope, receive, send)
server-1 | | File "/usr/local/lib/python3.13/site-packages/starlette/middleware/errors.py", line 187, in __call__
server-1 | | raise exc
server-1 | | File "/usr/local/lib/python3.13/site-packages/starlette/middleware/errors.py", line 165, in __call__
server-1 | | await self.app(scope, receive, _send)
server-1 | | File "/usr/local/lib/python3.13/site-packages/starlette/middleware/cors.py", line 85, in __call__
server-1 | | await self.app(scope, receive, send)
server-1 | | File "/usr/local/lib/python3.13/site-packages/starlette/middleware/exceptions.py", line 62, in __call__
server-1 | | await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
server-1 | | File "/usr/local/lib/python3.13/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
server-1 | | raise exc
server-1 | | File "/usr/local/lib/python3.13/site-packages/starlette/_exception_handler.py", line 42, in wrapped_app
server-1 | | await app(scope, receive, sender)
server-1 | | File "/usr/local/lib/python3.13/site-packages/starlette/routing.py", line 715, in __call__
server-1 | | await self.middleware_stack(scope, receive, send)
server-1 | | File "/usr/local/lib/python3.13/site-packages/starlette/routing.py", line 735, in app
server-1 | | await route.handle(scope, receive, send)
server-1 | | File "/usr/local/lib/python3.13/site-packages/starlette/routing.py", line 288, in handle
server-1 | | await self.app(scope, receive, send)
server-1 | | File "/usr/local/lib/python3.13/site-packages/starlette/routing.py", line 76, in app
server-1 | | await wrap_app_handling_exceptions(app, request)(scope, receive, send)
server-1 | | File "/usr/local/lib/python3.13/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
server-1 | | raise exc
server-1 | | File "/usr/local/lib/python3.13/site-packages/starlette/_exception_handler.py", line 42, in wrapped_app
server-1 | | await app(scope, receive, sender)
server-1 | | File "/usr/local/lib/python3.13/site-packages/starlette/routing.py", line 74, in app
server-1 | | await response(scope, receive, send)
server-1 | | File "/usr/local/lib/python3.13/site-packages/starlette/responses.py", line 252, in __call__
server-1 | | async with anyio.create_task_group() as task_group:
server-1 | | ~~~~~~~~~~~~~~~~~~~~~~~^^
server-1 | | File "/usr/local/lib/python3.13/site-packages/anyio/_backends/_asyncio.py", line 763, in __aexit__
server-1 | | raise BaseExceptionGroup(
server-1 | | "unhandled errors in a TaskGroup", self._exceptions
server-1 | | )
server-1 | | ExceptionGroup: unhandled errors in a TaskGroup (1 sub-exception)
server-1 | +-+---------------- 1 ----------------
server-1 | | Traceback (most recent call last):
server-1 | | File "/usr/local/lib/python3.13/site-packages/starlette/responses.py", line 255, in wrap
server-1 | | await func()
server-1 | | File "/usr/local/lib/python3.13/site-packages/starlette/responses.py", line 244, in stream_response
server-1 | | async for chunk in self.body_iterator:
server-1 | | ...<2 lines>...
server-1 | | await send({"type": "http.response.body", "body": chunk, "more_body": True})
server-1 | | File "/usr/src/app/src/server.py", line 121, in get_all_sources_summary_audios
server-1 | | audio = await tts.text_to_audio(phrase, lang)
server-1 | | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
server-1 | | File "/usr/src/app/src/tts.py", line 18, in text_to_audio
server-1 | | raise Exception(f"Error: {response.status}, {await response.text()}")
server-1 | | Exception: Error: 500, Internal Server Error
server-1 | +------------------------------------