Commit e62ed4a

Update README.md
1 parent 2f8b1c8 commit e62ed4a

File tree: AI-and-Analytics/Features-and-Functionality (1 file changed, +3 −3 lines)


AI-and-Analytics/Features-and-Functionality/README.md

Lines changed: 3 additions & 3 deletions
@@ -19,15 +19,15 @@ Third party program Licenses can be found here: [third-party-programs.txt](https
 |Deep Learning| Intel® Neural Compressor (INC) | [Intel® Neural Compressor (INC) Quantization Aware Training](INC_QuantizationAwareTraining_TextClassification) | Fine-tune a BERT tiny model for an emotion classification task using Quantization Aware Training and Inference from Intel® Neural Compressor (INC).
 |Deep Learning| Intel® Extension for PyTorch (IPEX) | [IntelPyTorch Extensions Inference Optimization](IntelPyTorch_Extensions_Inference_Optimization) | Apply Intel® Extension for PyTorch (IPEX) to a PyTorch workload to gain a performance boost.
 |Deep Learning| Intel® Extension for PyTorch (IPEX) | [IntelPyTorch Extensions GPU Inference Optimization with AMP](IntelPyTorch_GPU_InferenceOptimization_with_AMP) | Perform transfer learning and inference with the PyTorch ResNet50 model on the CIFAR10 dataset on an Intel discrete GPU with Intel® Extension for PyTorch (IPEX).
-|Deep Learning| Intel® Extension for PyTorch (IPEX)| [IntelPyTorch_InferenceOptimizations_AMX_BF16_INT8](IntelPyTorch_InferenceOptimizations_AMX_BF16_INT8) | Analyze inference performance improvements using Intel® Extension for PyTorch (IPEX) with Advanced Matrix Extensions (Intel® AMX) Bfloat16 and Integer8.
+|Deep Learning| Intel® Extension for PyTorch (IPEX)| [PyTorch Inference Optimizations with Intel® Advanced Matrix Extensions (Intel® AMX) Bfloat16 Integer8](https://github.com/intel/intel-extension-for-pytorch/blob/main/examples/cpu/inference/python/jupyter-notebooks/README.md) | Analyze inference performance improvements using Intel® Extension for PyTorch (IPEX) with Advanced Matrix Extensions (Intel® AMX) Bfloat16 and Integer8.
 |Deep Learning| PyTorch | [IntelPyTorch TrainingOptimizations Intel® AMX BF16](IntelPyTorch_TrainingOptimizations_AMX_BF16) | Analyze training performance improvements using Intel® Extension for PyTorch (IPEX) with Intel® AMX Bfloat16.
 |Data Analytics | Numpy, Numba | [IntelPython Numpy Numba dpex kNN](IntelPython_Numpy_Numba_dpex_kNN) | Optimize a k-NN model with numba_dpex operations without sacrificing accuracy.
 |Classical Machine Learning| XGBoost | [IntelPython XGBoost Performance](IntelPython_XGBoost_Performance) | Analyze the performance benefit of using Intel-optimized XGBoost compared to un-optimized XGBoost 0.81.
 |Classical Machine Learning| XGBoost | [IntelPython XGBoost daal4pyPrediction](IntelPython_XGBoost_daal4pyPrediction) | Analyze the performance benefit of minimal code changes that port a pre-trained XGBoost model to daal4py prediction, which is much faster than XGBoost prediction.
 |Classical Machine Learning| daal4py | [IntelPython daal4py DistributedKMeans](IntelPython_daal4py_DistributedKMeans) | Train and predict with a distributed k-means model using the Python API package daal4py, powered by the oneAPI Data Analytics Library.
 |Classical Machine Learning| daal4py | [IntelPython daal4py DistributedLinearRegression](IntelPython_daal4py_DistributedLinearRegression) | Run a distributed linear regression model with oneAPI Data Analytics Library (oneDAL) daal4py library memory objects.
-|Deep Learning| PyTorch | [IntelPytorch Interactive Chat Quantization](IntelPytorch_Interactive_Chat_Quantization) | Create an interactive chat based on a pre-trained DialoGPT model and add Intel® Extension for PyTorch (IPEX) quantization to it.
-|Deep Learning| PyTorch | [IntelPytorch Quantization](IntelPytorch_Quantization) | Inference performance improvements using Intel® Extension for PyTorch (IPEX) with feature quantization.
+|Deep Learning| PyTorch | [Interactive Chat Based on DialoGPT Model Using Intel® Extension for PyTorch* Quantization](https://github.com/intel/intel-extension-for-pytorch/blob/main/examples/cpu/inference/python/jupyter-notebooks/README.md) | Create an interactive chat based on a pre-trained DialoGPT model and add Intel® Extension for PyTorch (IPEX) quantization to it.
+|Deep Learning| PyTorch | [Optimize PyTorch Models using Intel® Extension for PyTorch* (IPEX) Quantization](https://github.com/intel/intel-extension-for-pytorch/blob/main/examples/cpu/inference/python/jupyter-notebooks/README.md) | Inference performance improvements using Intel® Extension for PyTorch (IPEX) with feature quantization.
 |Deep Learning| TensorFlow | [IntelTensorFlow Intel® AMX BF16 Inference](IntelTensorFlow_AMX_BF16_Inference) | Enable auto-mixed precision to use low-precision datatypes, like bfloat16, for model inference with TensorFlow*.
 |Deep Learning| TensorFlow | [IntelTensorFlow Intel® AMX BF16 Training](IntelTensorFlow_AMX_BF16_Training) | Training performance improvements with Intel® AMX BF16.
 |Deep Learning| TensorFlow | [IntelTensorFlow Enabling Auto Mixed Precision for TransferLearning](IntelTensorFlow_Enabling_Auto_Mixed_Precision_for_TransferLearning) | Enable auto-mixed precision to use low-precision datatypes, like bfloat16, for transfer learning with TensorFlow*.
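The three rows this commit repoints to the intel-extension-for-pytorch repo all describe the same IPEX inference workflow: BF16 via `ipex.optimize()` plus static INT8 post-training quantization. For orientation, here is a minimal sketch of that pattern, assuming `torch`, `torchvision`, and `intel_extension_for_pytorch` are installed. The ResNet50 model, the random calibration data, and the `default_static_qconfig_mapping` name (which varies across IPEX releases) are illustrative assumptions, not code from the linked notebooks.

```python
import torch
import torchvision.models as models
import intel_extension_for_pytorch as ipex
from intel_extension_for_pytorch.quantization import prepare, convert

# Illustrative workload; the linked notebooks use their own models and data.
model = models.resnet50(weights=None).eval()
example_input = torch.rand(1, 3, 224, 224)

# BF16 path: ipex.optimize() applies layout/operator optimizations; under
# autocast, bfloat16 kernels run on Intel AMX-capable CPUs.
bf16_model = ipex.optimize(model, dtype=torch.bfloat16)
with torch.no_grad(), torch.autocast("cpu", dtype=torch.bfloat16):
    bf16_out = bf16_model(example_input)

# INT8 path: static post-training quantization. The qconfig name below is an
# assumption tracking recent IPEX releases; older ones expose
# ipex.quantization.default_static_qconfig instead.
qconfig = ipex.quantization.default_static_qconfig_mapping
prepared = prepare(model, qconfig, example_inputs=example_input, inplace=False)

# Calibration pass over representative inputs (random here, for illustration only).
with torch.no_grad():
    for _ in range(10):
        prepared(torch.rand(1, 3, 224, 224))

quantized = convert(prepared)
with torch.no_grad():
    traced = torch.jit.trace(quantized, example_input)
    traced = torch.jit.freeze(traced)
    int8_out = traced(example_input)
```

The trace-and-freeze step follows the pattern the IPEX documentation recommends so the quantized graph can be fused at JIT time; treat the exact qconfig and autocast spellings as version-dependent.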
