|Deep Learning| Intel® Neural Compressor (INC) | [Intel® Neural Compressor (INC) Quantization Aware Training](INC_QuantizationAwareTraining_TextClassification) | Fine-tune a BERT tiny model for an emotion classification task using Quantization Aware Training and inference from Intel® Neural Compressor (INC).
|Deep Learning| Intel® Extension for PyTorch (IPEX) | [Optimize PyTorch Models using Intel® Extension for PyTorch* (IPEX)](https://github.com/intel/intel-extension-for-pytorch/blob/main/examples/cpu/inference/python/jupyter-notebooks/README.md) | Apply Intel® Extension for PyTorch (IPEX) to a PyTorch workload to gain a performance boost.
|Deep Learning| Intel® Extension for PyTorch (IPEX) | [IntelPyTorch Extensions GPU Inference Optimization with AMP](IntelPyTorch_GPU_InferenceOptimization_with_AMP) | Perform transfer learning and inference with a PyTorch ResNet50 model on the CIFAR10 dataset on an Intel discrete GPU using Intel® Extension for PyTorch (IPEX).
|Deep Learning| Intel® Extension for PyTorch (IPEX)| [PyTorch Inference Optimizations with Intel® Advanced Matrix Extensions (Intel® AMX) Bfloat16 Integer8](https://github.com/intel/intel-extension-for-pytorch/blob/main/examples/cpu/inference/python/jupyter-notebooks/README.md) | Analyze inference performance improvements using Intel® Extension for PyTorch (IPEX) with Advanced Matrix Extensions (Intel® AMX) Bfloat16 and Integer8.
|Deep Learning| PyTorch | [IntelPyTorch TrainingOptimizations Intel® AMX BF16](IntelPyTorch_TrainingOptimizations_AMX_BF16) | Analyze training performance improvements using Intel® Extension for PyTorch (IPEX) with Intel® AMX Bfloat16.
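
The Intel® Neural Compressor (INC) sample above centers on its quantization aware training workflow. The sketch below illustrates the general shape of that workflow, assuming the INC 2.x training API (`QuantizationAwareTrainingConfig`, `prepare_compression`); the tiny placeholder model and the omitted fine-tuning loop stand in for the BERT tiny emotion-classification setup used in the actual notebook.

```python
import torch
from neural_compressor import QuantizationAwareTrainingConfig
from neural_compressor.training import prepare_compression

# Placeholder classifier standing in for the BERT tiny model from the sample.
model = torch.nn.Sequential(
    torch.nn.Linear(128, 64),
    torch.nn.ReLU(),
    torch.nn.Linear(64, 6),
)

# Wrap the model for quantization aware training.
conf = QuantizationAwareTrainingConfig()
compression_manager = prepare_compression(model, conf)
compression_manager.callbacks.on_train_begin()
model = compression_manager.model

# ... fine-tuning loop over the emotion-classification data goes here ...

compression_manager.callbacks.on_train_end()
compression_manager.save("./quantized_model")
```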
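The Intel® Extension for PyTorch (IPEX) samples share one basic pattern: take an eager-mode PyTorch model, pass it through `ipex.optimize`, and run under bfloat16 autocast where Intel® AMX is available. The sketch below shows that pattern for CPU inference; the ResNet50 model and random input are illustrative placeholders, not taken from the notebooks.

```python
import torch
import torchvision.models as models
import intel_extension_for_pytorch as ipex

# Load a stock PyTorch model (ResNet50 is just an illustration).
model = models.resnet50(weights=None)
model.eval()

# Apply IPEX optimizations; dtype=torch.bfloat16 enables Intel AMX BF16
# kernels on supported CPUs.
model = ipex.optimize(model, dtype=torch.bfloat16)

# Run inference under bfloat16 autocast.
data = torch.rand(1, 3, 224, 224)
with torch.no_grad(), torch.cpu.amp.autocast(dtype=torch.bfloat16):
    output = model(data)
print(output.shape)
```

The same `ipex.optimize` call (without the training-specific arguments shown in the notebooks) also underlies the Intel® AMX BF16 training sample.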