# Local development of Chat App

You can only run locally **after** having successfully run the `azd up` command. If you haven't yet, follow the steps in [Azure deployment](../README.md#azure-deployment) above.

1. Run `azd auth login`
2. Change directory to `app`
3. Run `./start.ps1` or `./start.sh` (or use the "VS Code Task: Start App") to start the project locally.
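
For reference, and assuming you start from the repository root in a Unix-like shell (use `./start.ps1` instead on Windows PowerShell), the same steps look like this as a single terminal session:

```shell
# Sign in with the Azure Developer CLI
azd auth login

# Change to the app directory and start the app locally
cd app
./start.sh
```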

## Hot reloading frontend and backend files

When you run `./start.ps1` or `./start.sh`, the backend files are watched and reloaded automatically. The frontend files, however, are not.

To enable hot reloading of frontend files, open a new terminal and navigate to the frontend directory:

```shell
cd app/frontend
```

Then run:

```shell
npm run dev
```

You should see:

```shell

> vite


  VITE v4.5.1  ready in 957 ms

  ➜  Local:   http://localhost:5173/
  ➜  Network: use --host to expose
  ➜  press h to show help
```

Navigate to the URL shown in the terminal (in this case, `http://localhost:5173/`). This local server will watch and reload frontend files. All backend requests will be routed to the Python server according to `vite.config.ts`.

Then, whenever you make changes to frontend files, they will be reloaded automatically, without any browser refresh needed.

## Using a local OpenAI-compatible API

You may want to save costs by developing against a local LLM server, such as [llamafile](https://github.com/Mozilla-Ocho/llamafile/). Note that a local LLM will generally be slower and not as sophisticated.
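
As a rough sketch of what that can look like with llamafile (the file name below is only an example; use whichever model you download, and check the llamafile documentation for the exact invocation), you make the downloaded file executable and run it, which by default serves an OpenAI-compatible API on port 8080:

```shell
# Example only: substitute the llamafile you actually downloaded
chmod +x llava-v1.5-7b-q4.llamafile
./llava-v1.5-7b-q4.llamafile
```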

Once you've got your local LLM running and serving an OpenAI-compatible endpoint, set these environment variables:

```shell
azd env set OPENAI_HOST local
azd env set OPENAI_BASE_URL <your local endpoint>
```

For example, to point at a local llamafile server running on its default port:

```shell
azd env set OPENAI_BASE_URL http://localhost:8080/v1
```

If you're running inside a dev container, use this local URL instead:

```shell
azd env set OPENAI_BASE_URL http://host.docker.internal:8080/v1
```
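
You can run `azd env get-values` to confirm that `OPENAI_HOST` and `OPENAI_BASE_URL` were saved. To double-check that the local endpoint is reachable before starting the app, you can also send a test request with `curl`, assuming your server accepts the standard OpenAI chat completions payload (adjust the host to match the base URL you set above; the model name is a placeholder that many local servers ignore):

```shell
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "local-model", "messages": [{"role": "user", "content": "Hello"}]}'
```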