
Commit 5fa5522

Merge pull request #4805 from FederatedAI/develop-1.11.1
Merge 1.11.1 into master
2 parents 5ac0567 + 50be383 commit 5fa5522

File tree

38 files changed: +33044 -112 lines changed


RELEASE.md

Lines changed: 7 additions & 0 deletions
@@ -1,3 +1,10 @@
+## Release 1.11.1
+### Major Features and Improvements
+> FederatedML
+* Support Homo Graph Neural Network
+* PSI-DH protocol enhancement: use Oakley MODP modulus groups
+
+
 ## Release 1.11.0
 ### Major Features and Improvements
 > FederatedML

deploy/cluster-deploy/doc/fate_on_spark/fate_on_spark_deployment_guide.md

Lines changed: 1 addition & 1 deletion
@@ -187,7 +187,7 @@ wget https://webank-ai-1251170195.cos.ap-guangzhou.myqcloud.com/fate/${version}/
 scp *.tar.gz [email protected]:/data/projects/install
 scp *.tar.gz [email protected]:/data/projects/install
 ```
-Note: The current document needs to be deployed with FATE version>=1.7.0, ${version} is replaced with e.g. 1.11.0, without the v character.
+Note: The current document needs to be deployed with FATE version>=1.7.0, ${version} is replaced with e.g. 1.11.1, without the v character.

 ### 5.2 Operating system parameter checking

deploy/cluster-deploy/doc/fate_on_spark/fate_on_spark_deployment_guide.zh.md

Lines changed: 1 addition & 1 deletion
@@ -183,7 +183,7 @@ wget https://webank-ai-1251170195.cos.ap-guangzhou.myqcloud.com/fate/${version}/
 scp *.tar.gz [email protected]:/data/projects/install
 scp *.tar.gz [email protected]:/data/projects/install
 ```
-Note: This document requires the deployed FATE version>=1.7.0; replace ${version} with e.g. 1.11.0, without the v character
+Note: This document requires the deployed FATE version>=1.7.0; replace ${version} with e.g. 1.11.1, without the v character
 ### 5.2 Operating system parameter checking

 **Execute as the app user on the target servers (192.168.0.1 192.168.0.2 192.168.0.3)**

deploy/standalone-deploy/README.md

Lines changed: 1 addition & 1 deletion
@@ -41,7 +41,7 @@ export version={FATE version for this deployment}
 example:

 ```bash
-export version=1.11.0
+export version=1.11.1
 ```

 ### 2.2 Pulling mirrors

deploy/standalone-deploy/README.zh.md

Lines changed: 2 additions & 2 deletions
@@ -35,13 +35,13 @@
 Set the environment variables required for deployment (note: environment variables set this way are only valid in the current terminal session; if you open a new terminal session, e.g. after logging in again or in a new window, set them again)

 ```bash
-export version={FATE version for this deployment, e.g. 1.11.0}
+export version={FATE version for this deployment, e.g. 1.11.1}
 ```

 Example:

 ```bash
-export version=1.11.0
+export version=1.11.1
 ```

 ### 2.2 Pulling images

doc/federatedml_component/README.md

Lines changed: 2 additions & 0 deletions
@@ -62,6 +62,8 @@ provide:
 | [Hetero SSHE Logistic Regression](logistic_regression.md) | HeteroSSHELR | Build hetero logistic regression model without arbiter | Table, values are Instances | Table, values are Instances | | SSHE LR Model |
 | [Hetero SSHE Linear Regression](linear_regression.md) | HeteroSSHELinR | Build hetero linear regression model without arbiter | Table, values are Instances | Table, values are Instances | | SSHE LinR Model |
 | [Positive Unlabeled Learning](positive_unlabeled.md) | PositiveUnlabeled | Build positive unlabeled learning model | Table, values are Instances | Table, values are Instances | | |
+| [FATE-LLM](fate_llm.md) | FATE_LLM | Federated Large Language Model | Torch DataSet | | PreTrained Large Language Model | FineTuned Large Language Model |
+

 ## Secure Protocol

doc/federatedml_component/README.zh.md

Lines changed: 1 addition & 0 deletions
@@ -52,6 +52,7 @@ Federatedml模块包括许多常见机器学习算法联邦化实现。所有模
 | [Hetero SSHE Logistic Regression](logistic_regression.md) | HeteroSSHELR | Build hetero logistic regression between two parties (without a trusted third party) | Table, values are Instance | Table, values are Instance | | SSHE LR Model |
 | [Hetero SSHE Linear Regression](linear_regression.md) | HeteroSSHELinR | Build hetero linear regression between two parties (without a trusted third party) | Table, values are Instance | Table, values are Instance | | SSHE LinR Model |
 | [Positive Unlabeled Learning](positive_unlabeled.md) | PositiveUnlabeled | Build a positive unlabeled learning (PU learning) model | Table, values are Instance | Table, values are Instance | | |
+| [FATE-LLM](fate_llm.md) | FATE_LLM | Federated Large Language Model | Torch DataSet | | PreTrained Large Language Model | FineTuned Large Language Model |


 ## Secure Protocol
Lines changed: 42 additions & 0 deletions
@@ -0,0 +1,42 @@
# FATE-LLM
FATE-LLM is a framework to support federated training with large language models; it also provides multiple parameter-efficient fine-tuning strategies[1][2] for industrial applications.

## Features
In the current version, it supports the following features:
* Integration of various large language models for federated learning, including BERT, ALBERT, RoBERTa, GPT-2, BART, DeBERTa, DistilBERT, etc.
These models are widely used in natural language understanding and generation tasks and can meet the needs of different application scenarios[3][4][5].
* Integration of multiple parameter-efficient tuning methods: Bottleneck Adapters (including Houlsby, Pfeiffer, and Parallel schemes), Invertible Adapters, LoRA, IA3, and Compacter.
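As a brief illustration of what such parameter-efficient tuning looks like in practice (not part of this commit, and not necessarily FATE-LLM's own API), the sketch below wraps a Hugging Face GPT-2 model with a LoRA adapter via the `peft` library; the checkpoint name and hyperparameters are assumptions chosen for the example.

```python
# Illustrative sketch only: standalone LoRA setup with the Hugging Face `peft`
# library. FATE-LLM's own integration may differ; the model name and
# hyperparameters below are example assumptions, not values from this commit.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained("gpt2")  # pretrained backbone

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,  # GPT-2 is a causal language model
    r=8,                           # rank of the low-rank adapter matrices
    lora_alpha=16,                 # scaling factor applied to the adapter output
    lora_dropout=0.1,
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only the small adapter matrices are trainable
```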

## Experiment Data

### Model Parameter Sizes
The current version of FATE-LLM supports various classic large language models, with parameter counts ranging from tens of millions to 1.5 billion.
The following table lists the parameter counts of the commonly used versions of the models we support:
![llm model parameters](../images/llm_model_parameter_amount.png)
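For reference (an illustration outside this commit), totals like those in the table can be reproduced by instantiating a model architecture and summing tensor sizes; the checkpoint names below are assumed examples.

```python
# Illustrative sketch: count total parameters of a couple of Hugging Face
# checkpoints. The names are example assumptions, not taken from this commit.
from transformers import AutoConfig, AutoModelForCausalLM

for name in ["gpt2", "gpt2-xl"]:  # gpt2-xl is the ~1.5B-parameter variant
    config = AutoConfig.from_pretrained(name)
    model = AutoModelForCausalLM.from_config(config)  # build the architecture without downloading weights
    total = sum(p.numel() for p in model.parameters())
    print(f"{name}: {total / 1e6:.1f}M parameters")
```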

### Trainable Parameter Sizes of Parameter-Efficient Methods
To give users a more intuitive sense of how much FATE-LLM reduces federated training and transmission costs, we take GPT-2 as an example and show the number of parameters involved in federated training and transmission.
![parameter_efficient](../images/parameter_efficient_of_gpt-2.png)
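To make concrete which parameters a client would actually train and exchange with the aggregation server, the hedged sketch below (again outside this commit, reusing the same assumed `peft` setup as the earlier example) compares the trainable adapter parameters against the full GPT-2 parameter count.

```python
# Illustrative sketch: measure how small the trainable (and hence transmitted)
# part of a LoRA-wrapped GPT-2 is. The setup mirrors the earlier example and is
# an assumption, not FATE-LLM's actual federated aggregation code.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

model = get_peft_model(
    AutoModelForCausalLM.from_pretrained("gpt2"),
    LoraConfig(task_type=TaskType.CAUSAL_LM, r=8, lora_alpha=16),
)

# Only the adapter tensors require gradients; they are what a client would train
# locally and send for aggregation in a federated round.
trainable = {n: p for n, p in model.named_parameters() if p.requires_grad}
trainable_params = sum(p.numel() for p in trainable.values())
total_params = sum(p.numel() for p in model.parameters())
print(f"trainable: {trainable_params:,} / total: {total_params:,} "
      f"({100 * trainable_params / total_params:.2f}%)")

# The update a client would transmit instead of the full model state dict.
adapter_update = {n: p.detach().cpu() for n, p in trainable.items()}
```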

### Training Time Improvement
We present a comparison of training times between different adapter methods and fine-tuning the complete model in a homo (horizontal) federated learning scenario, for a text sentiment classification task on the IMDB dataset:
- Scenario: homo (horizontal) federated learning
- Task type: text sentiment classification
- Participants: two client parties involved in model building and one server for aggregation
- Data & basic parameters: IMDB dataset (25,000 samples), batch_size=64, padding_length=200
- Environment: each modeling party uses 2x V100 32GB GPUs; experiments are conducted in a local area network environment
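As a minimal sketch of how input data matching these settings could be prepared (an illustration under assumed libraries, not the exact pipeline used in the experiments), the snippet below tokenizes IMDB reviews padded or truncated to 200 tokens and batches them 64 at a time.

```python
# Illustrative sketch of the data preparation implied by the settings above
# (IMDB, batch_size=64, padding_length=200). The datasets/transformers/torch
# stack and the tokenizer choice are assumptions, not taken from this commit.
from datasets import load_dataset
from transformers import AutoTokenizer
from torch.utils.data import DataLoader

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def tokenize(batch):
    # Pad or truncate every review to a fixed length of 200 tokens.
    return tokenizer(batch["text"], padding="max_length", truncation=True, max_length=200)

imdb = load_dataset("imdb", split="train")  # 25,000 labelled reviews
imdb = imdb.map(tokenize, batched=True)
imdb.set_format("torch", columns=["input_ids", "attention_mask", "label"])

loader = DataLoader(imdb, batch_size=64, shuffle=True)
```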

The table below compares the per-epoch training time (in seconds) of the various adapters against fine-tuning the complete model.
It can be observed that the federated adapter + language model combination significantly reduces training time.

![GPT-2 Training Time Improvement](../images/gpt-2_training_time_improvement.png)


## References
[1] Cai D, Wu Y, Wang S, et al. Autofednlp: An efficient fednlp framework[J]. arXiv preprint arXiv:2205.10162, 2022.
[2] Zhang Z, Yang Y, Dai Y, et al. When Federated Learning Meets Pre-trained Language Models' Parameter-Efficient Tuning Methods[J]. arXiv preprint arXiv:2212.10025, 2022.
[3] Zhou C, Li Q, Li C, et al. A comprehensive survey on pretrained foundation models: A history from bert to chatgpt[J].
[4] Devlin J, Chang M W, Lee K, et al. Bert: Pre-training of deep bidirectional transformers for language understanding[J]. arXiv preprint arXiv:1810.04805, 2018.
[5] Radford A, Wu J, Child R, et al. Language models are unsupervised multitask learners[J]. OpenAI blog, 2019, 1(8): 9.
