AI-and-Analytics/Features-and-Functionality/README.md
3 additions & 3 deletions
@@ -19,15 +19,15 @@ Third party program Licenses can be found here: [third-party-programs.txt](https
|Deep Learning| Intel® Neural Compressor (INC) | [Intel® Neural Compressor (INC) Quantization Aware Training](INC_QuantizationAwareTraining_TextClassification) | Fine-tune a BERT tiny model for an emotion classification task using Quantization Aware Training and inference from Intel® Neural Compressor (INC).
|Deep Learning| Intel® Extension for PyTorch (IPEX) | [IntelPyTorch Extensions Inference Optimization](IntelPyTorch_Extensions_Inference_Optimization) | Apply Intel® Extension for PyTorch (IPEX) to a PyTorch workload to gain a performance boost.
|Deep Learning| Intel® Extension for PyTorch (IPEX) | [IntelPyTorch Extensions GPU Inference Optimization with AMP](IntelPyTorch_GPU_InferenceOptimization_with_AMP) | Perform transfer learning and inference with the PyTorch ResNet50 model on the CIFAR10 dataset using an Intel discrete GPU with Intel® Extension for PyTorch (IPEX).
-|Deep Learning| Intel® Extension for PyTorch (IPEX)| [IntelPyTorch_InferenceOptimizations_AMX_BF16_INT8](IntelPyTorch_InferenceOptimizations_AMX_BF16_INT8) | Analyze inference performance improvements using Intel® Extension for PyTorch (IPEX) with Advanced Matrix Extensions (Intel® AMX) Bfloat16 and Integer8.
+|Deep Learning| Intel® Extension for PyTorch (IPEX)| [PyTorch Inference Optimizations with Intel® Advanced Matrix Extensions (Intel® AMX) Bfloat16 Integer8](https://github.com/intel/intel-extension-for-pytorch/blob/main/examples/cpu/inference/python/jupyter-notebooks/README.md) | Analyze inference performance improvements using Intel® Extension for PyTorch (IPEX) with Advanced Matrix Extensions (Intel® AMX) Bfloat16 and Integer8.
|Deep Learning| PyTorch | [IntelPyTorch TrainingOptimizations Intel® AMX BF16](IntelPyTorch_TrainingOptimizations_AMX_BF16) | Analyze training performance improvements using Intel® Extension for PyTorch (IPEX) with Intel® AMX Bfloat16.
|Data Analytics | Numpy, Numba | [IntelPython Numpy Numba dpex kNN](IntelPython_Numpy_Numba_dpex_kNN) | Optimize a k-NN model with numba_dpex operations without sacrificing accuracy.
|Classical Machine Learning| XGBoost | [IntelPython XGBoost Performance](IntelPython_XGBoost_Performance) | Analyze the performance benefit of using Intel-optimized XGBoost compared to unoptimized XGBoost 0.81.
|Classical Machine Learning| XGBoost | [IntelPython XGBoost daal4pyPrediction](IntelPython_XGBoost_daal4pyPrediction) | Analyze the performance benefit of porting a pre-trained XGBoost model to daal4py prediction with minimal code changes for much faster prediction than XGBoost prediction.
|Classical Machine Learning| daal4py | [IntelPython daal4py DistributedKMeans](IntelPython_daal4py_DistributedKMeans) | Train and predict with a distributed k-means model using the Python API package daal4py, powered by the oneAPI Data Analytics Library.
|Classical Machine Learning| daal4py | [IntelPython daal4py DistributedLinearRegression](IntelPython_daal4py_DistributedLinearRegression) | Run a distributed linear regression model using daal4py memory objects from the oneAPI Data Analytics Library (oneDAL).
-|Deep Learning| PyTorch | [IntelPytorch Interactive Chat Quantization](IntelPytorch_Interactive_Chat_Quantization) | Create interactive chat based on pre-trained DialoGPT model and add the Intel® Extension for PyTorch (IPEX) quantization to it.
-|Deep Learning| PyTorch | [IntelPytorch Quantization](IntelPytorch_Quantization) | Inference performance improvements using Intel® Extension for PyTorch (IPEX) with feature quantization.
+|Deep Learning| PyTorch | [Interactive Chat Based on DialoGPT Model Using Intel® Extension for PyTorch* Quantization](https://github.com/intel/intel-extension-for-pytorch/blob/main/examples/cpu/inference/python/jupyter-notebooks/README.md) | Create an interactive chat based on the pre-trained DialoGPT model and add Intel® Extension for PyTorch (IPEX) quantization to it.
+|Deep Learning| PyTorch | [Optimize PyTorch Models using Intel® Extension for PyTorch* (IPEX) Quantization](https://github.com/intel/intel-extension-for-pytorch/blob/main/examples/cpu/inference/python/jupyter-notebooks/README.md) | Analyze inference performance improvements using Intel® Extension for PyTorch (IPEX) with feature quantization.
|Deep Learning| TensorFlow | [IntelTensorFlow Intel® AMX BF16 Inference](IntelTensorFlow_AMX_BF16_Inference) | Enable auto-mixed precision to use low-precision datatypes, like bfloat16, for model inference with TensorFlow*.
|Deep Learning| TensorFlow | [IntelTensorFlow Intel® AMX BF16 Training](IntelTensorFlow_AMX_BF16_Training) | Training performance improvements with Intel® AMX BF16.
|Deep Learning| TensorFlow | [IntelTensorFlow Enabling Auto Mixed Precision for TransferLearning](IntelTensorFlow_Enabling_Auto_Mixed_Precision_for_TransferLearning) | Enable auto-mixed precision to use low-precision datatypes, like bfloat16, for transfer learning with TensorFlow*.
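
Several of the IPEX rows above (inference optimization, AMX BF16/INT8) follow one basic pattern: optimize the model with IPEX, then run inference under CPU autocast. The following is a minimal sketch, not the notebooks' exact code; it assumes torchvision and a recent intel_extension_for_pytorch build on an AMX-capable CPU.

```python
import torch
import torchvision.models as models
import intel_extension_for_pytorch as ipex

# Load a pretrained ResNet50 and put it in inference mode.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.eval()

# ipex.optimize applies operator fusion; with dtype=torch.bfloat16 it
# prepares weights for bfloat16 execution, which maps onto Intel AMX
# tile units on 4th-gen Xeon and later.
model = ipex.optimize(model, dtype=torch.bfloat16)

# Run inference under CPU autocast so activations are also bfloat16.
x = torch.randn(1, 3, 224, 224)
with torch.no_grad(), torch.cpu.amp.autocast(dtype=torch.bfloat16):
    y = model(x)
print(y.shape)  # torch.Size([1, 1000])
```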
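The XGBoost daal4pyPrediction row describes a convert-then-predict workflow. A minimal sketch under the assumption that daal4py's get_gbt_model_from_xgboost converter is available (it ships with recent daal4py releases); the sample's own dataset and parameters differ, and the random data here is purely illustrative.

```python
import numpy as np
import xgboost as xgb
import daal4py as d4p

# Train an ordinary XGBoost regressor (stand-in for a pre-trained model).
X = np.random.rand(1000, 20).astype(np.float32)
y = np.random.rand(1000).astype(np.float32)
booster = xgb.train({"objective": "reg:squarederror", "max_depth": 5},
                    xgb.DMatrix(X, label=y), num_boost_round=50)

# One-line conversion, then predict with daal4py's faster GBT inference.
d4p_model = d4p.get_gbt_model_from_xgboost(booster)
pred = d4p.gbt_regression_prediction().compute(X, d4p_model).prediction
print(pred[:5])
```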
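The TensorFlow AMX BF16 rows rely on auto-mixed precision. A minimal sketch using the generic Keras mixed-precision policy API, which is one common way to enable bfloat16 compute; the samples themselves may use a different switch.

```python
import tensorflow as tf

# Run compute in bfloat16 while keeping variables in float32; on
# AMX-capable CPUs the bfloat16 matmuls execute on the AMX tile units.
tf.keras.mixed_precision.set_global_policy("mixed_bfloat16")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
    tf.keras.layers.Dense(10),
])
model.compile(optimizer="adam",
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))
print(model.layers[0].compute_dtype)  # bfloat16
```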