Commit 0d4013d

Update README.md
1 parent 9d47c4b commit 0d4013d

2 files changed: +2 -0 lines changed

README.md

Lines changed: 1 addition & 0 deletions
@@ -16,6 +16,7 @@
 <br><br>

 # News
+* 2023.5.9 Chinese-CLIP has been adapted to PyTorch 2.0.
 * 2023.3.20 Added [gradient accumulation](#gradient_accumulation) support for contrastive learning, to simulate the training effect of a larger batch size.
 * 2023.2.16 Added [FlashAttention](https://github.com/HazyResearch/flash-attention) support to speed up training and reduce memory usage; see [flash_attention.md](flash_attention.md) for details.
 * 2023.1.15 Added support for deploying [ONNX](https://onnx.ai/) and [TensorRT](https://developer.nvidia.com/tensorrt) models (pretrained TensorRT models are provided) to speed up feature inference and meet deployment needs; see [deployment.md](deployment.md) for details.

README_En.md

Lines changed: 1 addition & 0 deletions
@@ -16,6 +16,7 @@ This is the Chinese version of CLIP. We use a large-scale Chinese image-text pai
 <br><br>

 # News
+* 2023.5.9 Chinese-CLIP has been adapted to PyTorch 2.0.
 * 2023.3.20 Support [gradient accumulation](#gradient-accumulation) in contrastive learning to simulate the training effect of a larger batch size.
 * 2023.2.16 Support [FlashAttention](https://github.com/HazyResearch/flash-attention) to improve training speed and reduce memory usage. See [flash_attention_En.md](flash_attention_En.md) for more information.
 * 2023.1.15 Support the conversion of PyTorch models into [ONNX](https://onnx.ai/) or [TensorRT](https://developer.nvidia.com/tensorrt) formats (pretrained TensorRT models are provided) to improve inference speed and meet deployment requirements. See [deployment_En.md](deployment_En.md) for more information.
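The gradient-accumulation entry above refers to a standard training technique; the sketch below shows only its core arithmetic in plain Python (the toy squared loss and all names are illustrative, not Chinese-CLIP code). The idea: the gradient of a loss averaged over a full batch equals the average of the gradients of its equal-size micro-batches, so running several backward passes before one optimizer step simulates a larger batch.

```python
# Illustrative sketch of gradient accumulation (not Chinese-CLIP code).
# Toy loss per batch: L(w) = mean((w - x)^2), with gradient 2 * mean(w - x).

def grad(w, batch):
    """Gradient of the mean squared loss over one batch w.r.t. w."""
    return sum(2.0 * (w - x) for x in batch) / len(batch)

def accumulated_grad(w, data, micro_batch_size):
    """Accumulate micro-batch gradients, then divide by the number of
    accumulation steps instead of stepping after each micro-batch."""
    micros = [data[i:i + micro_batch_size]
              for i in range(0, len(data), micro_batch_size)]
    total = sum(grad(w, mb) for mb in micros)
    return total / len(micros)

data = [1.0, 2.0, 3.0, 4.0]
w = 0.5
full = grad(w, data)                  # one big batch of 4
accum = accumulated_grad(w, data, 2)  # two micro-batches of 2
assert abs(full - accum) < 1e-12      # both are -4.0
```

Note that this equivalence holds for losses that decompose per sample; for a contrastive loss such as CLIP's, the in-batch negatives depend on which samples share a batch, which is presumably why the repository provides dedicated gradient-accumulation support rather than the naive loop above.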
