diff --git a/.github/copilot-instructions.md b/.github/copilot-instructions.md
index 3250e3279ecb6..3e50de2cc8828 100644
--- a/.github/copilot-instructions.md
+++ b/.github/copilot-instructions.md
@@ -16,7 +16,7 @@ llama.cpp is a large-scale C/C++ project for efficient LLM (Large Language Model
 
 ### Prerequisites
 - CMake 3.14+ (primary build system)
-- C++17 compatible compiler (GCC 13.3+, Clang, MSVC)
+- C++23 compatible compiler (GCC 11+, Clang 12+, MSVC 2022+)
 - Optional: ccache for faster compilation
 
 ### Basic Build (CPU-only)
@@ -229,7 +229,7 @@ Primary tools:
 
 ### Required Tools
 - CMake 3.14+ (install via system package manager)
-- Modern C++ compiler with C++17 support
+- Modern C++ compiler with C++23 support
 - Git (for submodule management)
 - Python 3.9+ with virtual environment (`.venv` is provided)
 
diff --git a/docs/backend/SYCL.md b/docs/backend/SYCL.md
index 6e9b88935da97..e13f8fb3285ae 100644
--- a/docs/backend/SYCL.md
+++ b/docs/backend/SYCL.md
@@ -15,7 +15,7 @@
 
 ## Background
 
-**SYCL** is a high-level parallel programming model designed to improve developers productivity writing code across various hardware accelerators such as CPUs, GPUs, and FPGAs. It is a single-source language designed for heterogeneous computing and based on standard C++17.
+**SYCL** is a high-level parallel programming model designed to improve developers productivity writing code across various hardware accelerators such as CPUs, GPUs, and FPGAs. It is a single-source language designed for heterogeneous computing and based on standard C++23.
 
 **oneAPI** is an open ecosystem and a standard-based specification, supporting multiple architectures including but not limited to Intel CPUs, GPUs and FPGAs. The key components of the oneAPI ecosystem include: