Commit a466f09
Upstream merge 24 10 08 (#226)
* [Build/CI] Upgrade to gcc 10 in the base build Docker image (vllm-project#8814)
* [Docs] Add README to the build docker image (vllm-project#8825)
* [CI/Build] Fix missing ci dependencies (vllm-project#8834)
* [misc][installation] build from source without compilation (vllm-project#8818)
* [ci] Soft fail Entrypoints, Samplers, LoRA, Decoder-only VLM (vllm-project#8872)
Signed-off-by: kevin <[email protected]>
* [Bugfix] Include encoder prompts len to non-stream api usage response (vllm-project#8861)
* [Misc] Change dummy profiling and BOS fallback warns to log once (vllm-project#8820)
* [Bugfix] Fix print_warning_once's line info (vllm-project#8867)
* fix validation: Only set tool_choice `auto` if at least one tool is provided (vllm-project#8568)
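A minimal sketch of the validation above (vllm-project#8568), using the openai Python client against a vLLM OpenAI-compatible server; the server URL, model name, and get_weather tool are placeholders:

```python
# Sketch: tool_choice="auto" must be accompanied by at least one tool
# (per vllm-project#8568). Server URL and model name are placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool for illustration
        "description": "Look up current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

# Valid: tools are provided alongside tool_choice="auto".
resp = client.chat.completions.create(
    model="meta-llama/Llama-3.1-8B-Instruct",
    messages=[{"role": "user", "content": "Weather in Paris?"}],
    tools=tools,
    tool_choice="auto",
)

# After this fix, tool_choice="auto" with no tools is rejected by
# request validation instead of being silently accepted.
```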
* [Bugfix] Fixup advance_step.cu warning (vllm-project#8815)
* [BugFix] Fix test breakages from transformers 4.45 upgrade (vllm-project#8829)
* [Installation] Allow lower versions of FastAPI to maintain Ray 2.9 compatibility (vllm-project#8764)
* [Feature] Add support for Llama 3.1 and 3.2 tool use (vllm-project#8343)
Signed-off-by: Max de Bayser <[email protected]>
* [Core] rename `PromptInputs` and `inputs` (vllm-project#8876)
* [misc] fix collect env (vllm-project#8894)
* [MISC] Fix invalid escape sequence '\' (vllm-project#8830)
Signed-off-by: Peter Pan <[email protected]>
* [Bugfix][VLM] Fix Fuyu batching inference with `max_num_seqs>1` (vllm-project#8892)
* [TPU] Update pallas.py to support trillium (vllm-project#8871)
* [torch.compile] use empty tensor instead of None for profiling (vllm-project#8875)
* [Kernel] AQ AZP 4/4: Integrate asymmetric quantization to linear method (vllm-project#7271)
* [Bugfix] fix for deepseek w4a16 (vllm-project#8906)
Co-authored-by: mgoin <[email protected]>
* [Core] Multi-Step + Single Step Prefills via Chunked Prefill code path (vllm-project#8378)
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
* [misc][distributed] add VLLM_SKIP_P2P_CHECK flag (vllm-project#8911)
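A short sketch of the flag above; the assumption is that VLLM_SKIP_P2P_CHECK must be set before engine startup, and that it merely bypasses the peer-to-peer capability probe rather than guaranteeing P2P works on the machine (model name is a placeholder):

```python
import os

# Assumption: set the flag before vLLM initializes distributed state.
# It skips the GPU peer-to-peer probe, which matters for multi-GPU
# (tensor-parallel) setups where the probe itself misbehaves.
os.environ["VLLM_SKIP_P2P_CHECK"] = "1"

from vllm import LLM

llm = LLM(model="facebook/opt-125m", tensor_parallel_size=2)  # placeholder model
```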
* [Core] Priority-based scheduling in async engine (vllm-project#8850)
* [misc] fix wheel name (vllm-project#8919)
* [Bugfix][Intel] Fix XPU Dockerfile Build (vllm-project#7824)
Signed-off-by: tylertitsworth <[email protected]>
Co-authored-by: youkaichao <[email protected]>
* [Misc] Remove vLLM patch of `BaichuanTokenizer` (vllm-project#8921)
* [Bugfix] Fix code for downloading models from modelscope (vllm-project#8443)
* [Bugfix] Fix PP for Multi-Step (vllm-project#8887)
* [CI/Build] Update models tests & examples (vllm-project#8874)
Co-authored-by: Roger Wang <[email protected]>
* [Frontend] Make beam search emulator temperature modifiable (vllm-project#8928)
Co-authored-by: Eduard Balzin <[email protected]>
* [Bugfix] Support testing prefill throughput with benchmark_serving.py --hf-output-len 1 (vllm-project#8891)
* [doc] organize installation doc and expose per-commit docker (vllm-project#8931)
* [Core] Improve choice of Python multiprocessing method (vllm-project#8823)
Signed-off-by: Russell Bryant <[email protected]>
Co-authored-by: youkaichao <[email protected]>
* [Bugfix] Block manager v2 with preemption and lookahead slots (vllm-project#8824)
* [Bugfix] Fix Marlin MoE act order when is_k_full == False (vllm-project#8741)
Co-authored-by: Tyler Michael Smith <[email protected]>
* [CI/Build] Add test decorator for minimum GPU memory (vllm-project#8925)
* [Build/CI] Set FETCHCONTENT_BASE_DIR to one location for better caching (vllm-project#8930)
* [Model] Support Qwen2.5-Math-RM-72B (vllm-project#8896)
* [Model][LoRA] LoRA support added for MiniCPMV2.5 (vllm-project#7199)
* [BugFix] Fix seeded random sampling with encoder-decoder models (vllm-project#8870)
Co-authored-by: Roger Wang <[email protected]>
* [Misc] Fix typo in BlockSpaceManagerV1 (vllm-project#8944)
* [Frontend] Added support for HF's new `continue_final_message` parameter (vllm-project#8942)
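A sketch of the semantics behind the parameter above, shown via the Hugging Face tokenizer option it mirrors (model name is a placeholder):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-8B-Instruct")  # placeholder
messages = [
    {"role": "user", "content": "Write a haiku about autumn."},
    {"role": "assistant", "content": "Crisp leaves drift and fall"},  # partial turn
]

# continue_final_message=True keeps the last assistant message open so
# generation resumes mid-turn; add_generation_prompt must then be False.
prompt = tok.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=False,
    continue_final_message=True,
)
print(prompt)
```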
* [Kernel][Model] Varlen prefill + Prefill chunking support for mamba kernels and Jamba model (vllm-project#8533)
* [Model] support input embeddings for qwen2vl (vllm-project#8856)
* [Misc][CI/Build] Include `cv2` via `mistral_common[opencv]` (vllm-project#8951)
* [Model][LoRA] LoRA support added for MiniCPMV2.6 (vllm-project#8943)
Co-authored-by: DarkLight1337 <[email protected]>
* [Model] Expose InternVL2 max_dynamic_patch as a mm_processor_kwarg (vllm-project#8946)
* [Core] Make scheduling policy settable via EngineArgs (vllm-project#8956)
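A hedged sketch of the new knob, assuming the LLM entrypoint forwards scheduling_policy to EngineArgs as this change describes (model name is a placeholder; the server-side equivalent would be --scheduling-policy priority):

```python
from vllm import LLM, SamplingParams

# scheduling_policy is the EngineArgs field added here; "fcfs" remains
# the default, "priority" enables the priority-based scheduler.
llm = LLM(model="facebook/opt-125m", scheduling_policy="priority")  # placeholder model

outputs = llm.generate(
    ["Explain KV caching in one sentence."],
    SamplingParams(max_tokens=64),
)
```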
* [Misc] Adjust max_position_embeddings for LoRA compatibility (vllm-project#8957)
* [ci] Add CODEOWNERS for test directories (vllm-project#8795)
Signed-off-by: kevin <[email protected]>
* [CI][SpecDecode] Fix spec decode tests, use flash attention backend for spec decode CI tests. (vllm-project#8975)
* [Frontend][Core] Move guided decoding params into sampling params (vllm-project#8252)
Signed-off-by: Joe Runde <[email protected]>
Co-authored-by: Nick Hill <[email protected]>
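With guided decoding folded into sampling params (vllm-project#8252), a constrained request can be sketched as below; the model name is a placeholder and the field names follow the GuidedDecodingParams container this change introduces:

```python
from vllm import LLM, SamplingParams
from vllm.sampling_params import GuidedDecodingParams

# GuidedDecodingParams is the container moved into sampling params by
# this change; `choice` constrains output to one of the listed strings.
guided = GuidedDecodingParams(choice=["positive", "negative"])
params = SamplingParams(guided_decoding=guided, max_tokens=8)

llm = LLM(model="facebook/opt-125m")  # placeholder model
out = llm.generate("Sentiment of 'what a great movie!':", params)
print(out[0].outputs[0].text)
```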
* [CI/Build] Fix machete generated kernel files ordering (vllm-project#8976)
Signed-off-by: kevin <[email protected]>
Co-authored-by: Cody Yu <[email protected]>
* [torch.compile] fix tensor alias (vllm-project#8982)
* [Misc] add process_weights_after_loading for DummyLoader (vllm-project#8969)
* [Bugfix] Fix Fuyu tensor parallel inference (vllm-project#8986)
* [Bugfix] Fix Token IDs Reference for MiniCPM-V When Images are Provided With No Placeholders (vllm-project#8991)
Signed-off-by: Alex-Brooks <[email protected]>
* [Core] [Frontend] Priority scheduling for embeddings and in the OpenAI-API (vllm-project#8965)
* [Doc] Update list of supported models (vllm-project#8987)
* Update benchmark_serving.py to read and write JSON datasets and results in UTF-8, for better compatibility with Windows (vllm-project#8997)
* [Spec Decode] (1/2) Remove batch expansion (vllm-project#8839)
* [Core] Combined support for multi-step scheduling, chunked prefill & prefix caching (vllm-project#8804)
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Andrew Feldman <[email protected]>
* [Misc] Update Default Image Mapper Error Log (vllm-project#8977)
Signed-off-by: Alex-Brooks <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
* [Core] CUDA Graphs for Multi-Step + Chunked-Prefill (vllm-project#8645)
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
* [OpenVINO] Enable GPU support for OpenVINO vLLM backend (vllm-project#8192)
* [Model] Adding Granite MoE. (vllm-project#8206)
Co-authored-by: Nick Hill <[email protected]>
* [Doc] Update Granite model docs (vllm-project#9025)
* [Bugfix] example template should not add parallel_tool_prompt if tools is none (vllm-project#9007)
* [Misc] log when using default MoE config (vllm-project#8971)
* [BugFix] Enforce Mistral ToolCall id constraint when using the Mistral tool call parser (vllm-project#9020)
* [Core] Make BlockSpaceManagerV2 the default BlockManager to use. (vllm-project#8678)
* [Frontend] [Neuron] Parse literals out of override-neuron-config (vllm-project#8959)
Co-authored-by: Jerzy Zagorski <[email protected]>
* [misc] add forward context for attention (vllm-project#9029)
* Fix failing spec decode test (vllm-project#9054)
* [Bugfix] Weight loading fix for OPT model (vllm-project#9042)
Co-authored-by: dvres <[email protected]>
* [Frontend][Feature] support tool calling for internlm/internlm2_5-7b-chat model (vllm-project#8405)
* [CI/Build] Per file CUDA Archs (improve wheel size and dev build times) (vllm-project#8845)
* [Misc] Enable multi-step output streaming by default (vllm-project#9047)
* [Models] Add remaining model PP support (vllm-project#7168)
Signed-off-by: Muralidhar Andoorveedu <[email protected]>
Signed-off-by: Murali Andoorveedu <[email protected]>
Co-authored-by: DarkLight1337 <[email protected]>
* [Misc] Move registry to its own file (vllm-project#9064)
* [Bugfix] Reshape the dimensions of the input image embeddings in Qwen2VL (vllm-project#9071)
* [Bugfix] Flash attention arches not getting set properly (vllm-project#9062)
* [Model] add a bunch of supported lora modules for mixtral (vllm-project#9008)
Signed-off-by: Prashant Gupta <[email protected]>
* Remove AMD Ray Summit Banner (vllm-project#9075)
* [Hardware][PowerPC] Make oneDNN dependency optional for Power (vllm-project#9039)
Signed-off-by: Varad Ahirwadkar <[email protected]>
* [Core][VLM] Test registration for OOT multimodal models (vllm-project#8717)
Co-authored-by: DarkLight1337 <[email protected]>
* Adds truncate_prompt_tokens param for embeddings creation (vllm-project#8999)
Signed-off-by: Flavia Beo <[email protected]>
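A sketch of the new embeddings parameter; since truncate_prompt_tokens is a vLLM extension, it travels in the OpenAI client's extra_body (URL and model are placeholders):

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

# truncate_prompt_tokens is a vLLM-specific extension, so it is passed
# via extra_body; the input is truncated server-side to 512 tokens.
resp = client.embeddings.create(
    model="intfloat/e5-mistral-7b-instruct",  # placeholder embedding model
    input="a very long document ...",
    extra_body={"truncate_prompt_tokens": 512},
)
print(len(resp.data[0].embedding))
```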
* [Kernel] Zero point support in fused MarlinMoE kernel + AWQ Fused MoE (vllm-project#8973)
Co-authored-by: Dipika <[email protected]>
Co-authored-by: Dipika Sikka <[email protected]>
* [CI] Update performance benchmark: upgrade trt-llm to r24.07, and add SGLang (vllm-project#7412)
* [Misc] Improved prefix cache example (vllm-project#9077)
* [Misc] Add random seed for prefix cache benchmark (vllm-project#9081)
* [Misc] Fix CI lint (vllm-project#9085)
* [Hardware][Neuron] Add on-device sampling support for Neuron (vllm-project#8746)
Co-authored-by: Ashraf Mahgoub <[email protected]>
* [torch.compile] improve allreduce registration (vllm-project#9061)
* [Doc] Update README.md with Ray summit slides (vllm-project#9088)
* [Bugfix] use blockmanagerv1 for encoder-decoder (vllm-project#9084)
Co-authored-by: Roger Wang <[email protected]>
* [Bugfix] Fixes Phi3v & Ultravox Multimodal EmbeddingInputs (vllm-project#8979)
* [Model] Support Gemma2 embedding model (vllm-project#9004)
* [Bugfix] Deprecate registration of custom configs to huggingface (vllm-project#9083)
* [Bugfix] Fix order of arguments matters in config.yaml (vllm-project#8960)
* [core] use forward context for flash infer (vllm-project#9097)
* [Bugfix] Fix try-catch conditions to import correct Flash Attention Backend in Draft Model (vllm-project#9101)
* [Frontend] API support for beam search (vllm-project#9087)
Co-authored-by: youkaichao <[email protected]>
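A hedged sketch of the frontend beam-search path above, assuming use_beam_search and best_of remain the vLLM extra parameters on the completions endpoint after beam search moved out of the engine core (URL and model are placeholders):

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

# Assumption: use_beam_search / best_of are vLLM extra parameters on the
# completions endpoint, now served by the frontend beam-search emulator.
resp = client.completions.create(
    model="facebook/opt-125m",  # placeholder model
    prompt="The capital of France is",
    max_tokens=16,
    temperature=0.0,
    extra_body={"use_beam_search": True, "best_of": 4},
)
print(resp.choices[0].text)
```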
* [Misc] Remove user-facing error for removed VLM args (vllm-project#9104)
* [Model] PP support for embedding models and update docs (vllm-project#9090)
Co-authored-by: Roger Wang <[email protected]>
* [Bugfix] fix tool_parser error handling when serve a model not support it (vllm-project#8709)
* [Bugfix] Fix incorrect updates to num_computed_tokens in multi-step scheduling (vllm-project#9038)
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
* [Bugfix][Hardware][CPU] Fix CPU model input for decode (vllm-project#9044)
* [BugFix][Core] Fix BlockManagerV2 when Encoder Input is None (vllm-project#9103)
* [core] remove beam search from the core (vllm-project#9105)
* [Model] Explicit interface for vLLM models and support OOT embedding models (vllm-project#9108)
* [Hardware][CPU] Cross-attention and Encoder-Decoder models support on CPU backend (vllm-project#9089)
* [Core] Refactor GGUF parameters packing and forwarding (vllm-project#8859)
* [Model] Support NVLM-D and fix QK Norm in InternViT (vllm-project#9045)
Co-authored-by: Roger Wang <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
* [Doc]: Add deploying_with_k8s guide (vllm-project#8451)
* [CI/Build] Add linting for github actions workflows (vllm-project#7876)
Signed-off-by: Russell Bryant <[email protected]>
* [Doc] Include performance benchmark in README (vllm-project#9135)
* [misc] fix comment and variable name (vllm-project#9139)
* Add Slack to README (vllm-project#9137)
* [misc] update utils to support comparing multiple settings (vllm-project#9140)
* [Intel GPU] Fix xpu decode input (vllm-project#9145)
* [misc] improve ux on readme (vllm-project#9147)
* [Frontend] API support for beam search for MQLLMEngine (vllm-project#9117)
* [Core][Frontend] Add Support for Inference Time mm_processor_kwargs (vllm-project#9131)
Signed-off-by: Alex-Brooks <[email protected]>
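A sketch of the per-request override above: after vllm-project#9131, mm_processor_kwargs can accompany an individual prompt instead of being fixed at engine construction (the model, prompt format, and num_crops value are illustrative):

```python
from PIL import Image
from vllm import LLM

llm = LLM(model="microsoft/Phi-3.5-vision-instruct", trust_remote_code=True)  # placeholder

image = Image.open("example.jpg")  # placeholder image path

# The per-request mm_processor_kwargs entry overrides the engine-level
# default for this prompt only; num_crops is an illustrative kwarg.
outputs = llm.generate({
    "prompt": "<|user|>\n<|image_1|>\nDescribe the image.<|end|>\n<|assistant|>\n",
    "multi_modal_data": {"image": image},
    "mm_processor_kwargs": {"num_crops": 16},
})
print(outputs[0].outputs[0].text)
```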
* Factor out common weight loading code
* Fix EAGLE model loading
* [Frontend] Add Early Validation For Chat Template / Tool Call Parser (vllm-project#9151)
Signed-off-by: Alex-Brooks <[email protected]>
* Improve efficiency
* Rename
* Update LLaVA-NeXT-Video
* [CI/Build] Add examples folder into Docker image so that we can leverage the templates*.jinja when serving models (vllm-project#8758)
Signed-off-by: Peter Pan <[email protected]>
* [Bugfix] fix OpenAI API server startup with --disable-frontend-multiprocessing (vllm-project#8537)
* Automatic loading and save memory
* Rename
* Update docstring
* Simplify
* Cleanup
* Fully enable recursive loading
* Clarify
* [Doc] Update vlm.rst to include an example on videos (vllm-project#9155)
Co-authored-by: Cyrus Leung <[email protected]>
* Fix incorrect semantics
* Move function
* Update error message
* Fix Ultravox loading
* spacing
* [Doc] Improve contributing and installation documentation (vllm-project#9132)
Signed-off-by: Rafael Vasquez <[email protected]>
* Fix server
* [Bugfix] Try to handle older versions of pytorch (vllm-project#9086)
---------
Signed-off-by: kevin <[email protected]>
Signed-off-by: Max de Bayser <[email protected]>
Signed-off-by: Peter Pan <[email protected]>
Signed-off-by: tylertitsworth <[email protected]>
Signed-off-by: Russell Bryant <[email protected]>
Signed-off-by: Joe Runde <[email protected]>
Signed-off-by: Alex-Brooks <[email protected]>
Signed-off-by: Muralidhar Andoorveedu <[email protected]>
Signed-off-by: Murali Andoorveedu <[email protected]>
Signed-off-by: Prashant Gupta <[email protected]>
Signed-off-by: Varad Ahirwadkar <[email protected]>
Signed-off-by: Flavia Beo <[email protected]>
Signed-off-by: Rafael Vasquez <[email protected]>
Co-authored-by: Tyler Michael Smith <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
Co-authored-by: fyuan1316 <[email protected]>
Co-authored-by: youkaichao <[email protected]>
Co-authored-by: Kevin H. Luu <[email protected]>
Co-authored-by: Pernekhan Utemuratov <[email protected]>
Co-authored-by: Chirag Jain <[email protected]>
Co-authored-by: Nick Hill <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: Maximilien de Bayser <[email protected]>
Co-authored-by: Peter Pan <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
Co-authored-by: Brittany <[email protected]>
Co-authored-by: Luka Govedič <[email protected]>
Co-authored-by: Lucas Wilkinson <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Sebastian Schoennenbeck <[email protected]>
Co-authored-by: Tyler Titsworth <[email protected]>
Co-authored-by: youkaichao <[email protected]>
Co-authored-by: tastelikefeet <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
Co-authored-by: Edouard B. <[email protected]>
Co-authored-by: Eduard Balzin <[email protected]>
Co-authored-by: Chen Zhang <[email protected]>
Co-authored-by: Russell Bryant <[email protected]>
Co-authored-by: sroy745 <[email protected]>
Co-authored-by: ElizaWszola <[email protected]>
Co-authored-by: Zilin Zhu <[email protected]>
Co-authored-by: Jee Jee Li <[email protected]>
Co-authored-by: juncheoll <[email protected]>
Co-authored-by: danieljannai21 <[email protected]>
Co-authored-by: Mor Zusman <[email protected]>
Co-authored-by: whyiug <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
Co-authored-by: Lily Liu <[email protected]>
Co-authored-by: Joe Runde <[email protected]>
Co-authored-by: Cody Yu <[email protected]>
Co-authored-by: Divakar Verma <[email protected]>
Co-authored-by: Alex Brooks <[email protected]>
Co-authored-by: vlsav <[email protected]>
Co-authored-by: afeldman-nm <[email protected]>
Co-authored-by: Andrew Feldman <[email protected]>
Co-authored-by: Sergey Shlyapnikov <[email protected]>
Co-authored-by: Shawn Tan <[email protected]>
Co-authored-by: Travis Johnson <[email protected]>
Co-authored-by: Guillaume Calmettes <[email protected]>
Co-authored-by: xendo <[email protected]>
Co-authored-by: Jerzy Zagorski <[email protected]>
Co-authored-by: Domen Vreš <[email protected]>
Co-authored-by: dvres <[email protected]>
Co-authored-by: 代君 <[email protected]>
Co-authored-by: Murali Andoorveedu <[email protected]>
Co-authored-by: Prashant Gupta <[email protected]>
Co-authored-by: Simon Mo <[email protected]>
Co-authored-by: Varad Ahirwadkar <[email protected]>
Co-authored-by: Flávia Béo <[email protected]>
Co-authored-by: Dipika <[email protected]>
Co-authored-by: Dipika Sikka <[email protected]>
Co-authored-by: Kuntai Du <[email protected]>
Co-authored-by: Andy Dai <[email protected]>
Co-authored-by: Chongming Ni <[email protected]>
Co-authored-by: Ashraf Mahgoub <[email protected]>
Co-authored-by: Zhuohan Li <[email protected]>
Co-authored-by: hhzhang16 <[email protected]>
Co-authored-by: Xin Yang <[email protected]>
Co-authored-by: TJian <[email protected]>
Co-authored-by: Brendan Wong <[email protected]>
Co-authored-by: Yanyi Liu <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
Co-authored-by: TimWang <[email protected]>
Co-authored-by: Kunshang Ji <[email protected]>
Co-authored-by: Daniele <[email protected]>
Co-authored-by: Sayak Paul <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: Rafael Vasquez <[email protected]>
Co-authored-by: bnellnm <[email protected]>

1 parent: b51fe69 · commit: a466f09
File tree
411 files changed, +18718 −9884 lines changed

- .buildkite
- lm-eval-harness
- configs
- nightly-benchmarks
- scripts
- tests
- .github
- workflows
- matchers
- scripts
- benchmarks
- profiling
- cmake
- csrc
- core
- mamba
- causal_conv1d
- mamba_ssm
- moe
- marlin_kernels
- prepare_inputs
- quantization
- cutlass_w8a8
- fp8
- gptq_marlin
- machete
- marlin
- dense
- qqq
- sparse
- docs
- source
- dev
- multimodal
- offline_inference
- getting_started
- models
- serving
- examples
- tests
- async_engine
- basic_correctness
- core
- block
- e2e
- data
- distributed
- engine
- entrypoints
- llm
- openai
- kernels
- lora
- model_executor
- models
- decoder_only
- language
- vision_language
- embedding/language
- encoder_decoder
- language
- vision_language
- mq_llm_engine
- multi_step
- multimodal
- plugins/vllm_add_dummy_model/vllm_add_dummy_model
- quantization
- samplers
- spec_decode
- e2e
- tool_use
- weight_loading
- tools
- vllm
- attention/backends
- core
- block
- distributed
- device_communicators
- engine
- multiprocessing
- output_processor
- entrypoints
- openai
- tool_parsers
- sync_openai
- executor
- inputs
- lora
- model_executor
- guided_decoding
- layers
- fused_moe
- mamba/ops
- quantization
- compressed_tensors
- schemes
- kernels
- utils
- model_loader
- models
- multimodal
- platforms
- spec_decode
- transformers_utils
- configs
- tokenizers
- worker