Commit 14a73eb

Merge pull request #188 from Adamyuanyuan/master
add workspace favorites and resolve conflicts
2 parents: dfa434d + bf242e1

File tree: 28 files changed (+595, -3 lines)

Lines changed: 82 additions & 0 deletions
# DSS User Test Sample 1: Scala

The DSS user test samples are intended to give new users of the platform a set of test cases for getting familiar with common DSS operations and for verifying that the DSS platform behaves correctly.

![image-20200408211243941](../../../images/zh_CN/chapter3/tests/home.png)

## 1.1 Spark Core (entry point: sc)

In Scriptis a SparkContext is already registered for you by default, so you can use `sc` directly:

### 1.1.1 Single-value operators (mapValues as an example)
```scala
val rddMap = sc.makeRDD(Array((1,"a"),(1,"d"),(2,"b"),(3,"c")), 4)
val res = rddMap.mapValues(data => data + "||||")
res.collect().foreach(data => println(data._1 + "," + data._2))
```

### 1.1.2 Two-value operators (union as an example)

```scala
val rdd1 = sc.makeRDD(1 to 5)
val rdd2 = sc.makeRDD(6 to 10)
val rddCustom = rdd1.union(rdd2)
rddCustom.collect().foreach(println)
```

### 1.1.3 Key-value operators (reduceByKey as an example)

```scala
val rdd1 = sc.makeRDD(List(("female",1),("male",2),("female",3),("male",4)))
val rdd2 = rdd1.reduceByKey((x, y) => x + y)
rdd2.collect().foreach(println)
```

### 1.1.4 Action operators (collect, as used above)

The `collect()` calls in the snippets above are action operators: transformations such as `mapValues`, `union`, and `reduceByKey` are lazy, and nothing is actually computed until an action like `collect()` triggers execution and returns the results to the driver.

### 1.1.5 Reading a file from HDFS and doing simple processing

```scala
case class Person(name: String, age: String)

val file = sc.textFile("/test.txt")
val person = file.map(line => {
  val values = line.split(",")
  Person(values(0), values(1))
})

import spark.implicits._  // needed for toDF() and the $"..." column syntax
val df = person.toDF()
df.select($"name").show()
```

## 1.2 UDF function test

### 1.2.1 Function definition

```scala
def ScalaUDF3(str: String): String = "hello, " + str + ", this is a third attempt"
```

### 1.2.2 Registering the function

Functions -> Personal functions -> right-click "Add Spark function". Registration then works the same as in regular Spark development.

![img](../../../images/zh_CN/chapter3/tests/udf1.png)
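As a sketch of what "the same as regular Spark development" means here, the function defined above could be registered and then called from SQL roughly like this. This assumes a `SparkSession` named `spark` is available (as it is in a normal Spark shell or job); it is an illustration, not the exact steps DSS performs behind the scenes:

```scala
// Sketch only: assumes a SparkSession named `spark` is in scope,
// as in regular Spark development.
def ScalaUDF3(str: String): String = "hello, " + str + ", this is a third attempt"

// Register the Scala function as a SQL-callable UDF under the same name
spark.udf.register("ScalaUDF3", ScalaUDF3 _)

// Once registered, the UDF can be invoked from Spark SQL
spark.sql("SELECT ScalaUDF3('world')").show()
```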
## 1.3 UDAF function test

### 1.3.1 Uploading the jar

Develop an averaging UDAF in IDEA, package it into a jar (here named wordcount), and upload it to the DSS jar folder.

![img](../../../images/zh_CN/chapter3/tests/udf2.png)
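For reference, an averaging UDAF of the kind described above could look like the following sketch, written against the Spark 2.x `UserDefinedAggregateFunction` API. The class name and details are hypothetical; the actual class inside the uploaded jar may differ:

```scala
import org.apache.spark.sql.Row
import org.apache.spark.sql.expressions.{MutableAggregationBuffer, UserDefinedAggregateFunction}
import org.apache.spark.sql.types._

// Hypothetical sketch of an averaging UDAF (Spark 2.x API).
class AverageUDAF extends UserDefinedAggregateFunction {
  // Input: a single double column
  override def inputSchema: StructType =
    StructType(StructField("value", DoubleType) :: Nil)

  // Aggregation buffer: running sum and count
  override def bufferSchema: StructType =
    StructType(StructField("sum", DoubleType) :: StructField("count", LongType) :: Nil)

  override def dataType: DataType = DoubleType
  override def deterministic: Boolean = true

  override def initialize(buffer: MutableAggregationBuffer): Unit = {
    buffer(0) = 0.0
    buffer(1) = 0L
  }

  override def update(buffer: MutableAggregationBuffer, input: Row): Unit = {
    if (!input.isNullAt(0)) {
      buffer(0) = buffer.getDouble(0) + input.getDouble(0)
      buffer(1) = buffer.getLong(1) + 1L
    }
  }

  override def merge(buffer1: MutableAggregationBuffer, buffer2: Row): Unit = {
    buffer1(0) = buffer1.getDouble(0) + buffer2.getDouble(0)
    buffer1(1) = buffer1.getLong(1) + buffer2.getLong(1)
  }

  // Final result: sum / count, guarding against an empty group
  override def evaluate(buffer: Row): Double =
    if (buffer.getLong(1) == 0L) 0.0 else buffer.getDouble(0) / buffer.getLong(1)
}
```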
### 1.3.2 Registering the function

Functions -> Personal functions -> right-click "Add normal function". Registration then works the same as in regular Spark development.

![img](../../../images/zh_CN/chapter3/tests/udf-3.png)
Lines changed: 148 additions & 0 deletions
# DSS User Test Sample 2: Hive

The DSS user test samples are intended to give new users of the platform a set of test cases for getting familiar with common DSS operations and for verifying that the DSS platform behaves correctly.

![image-20200408211243941](../../../images/zh_CN/chapter3/tests/home.png)

## 2.1 Creating warehouse tables

Go to the "Database" page, click "+", and enter the table information, table structure, and partition information in turn to create a table:

<img src="../../../images/zh_CN/chapter3/tests/hive1.png" alt="image-20200408212604929" style="zoom:50%;" />

![img](../../../images/zh_CN/chapter3/tests/hive2.png)

Following this flow, create the department table dept, the employee table emp, and the partitioned employee table emp_partition. The DDL statements are:

```sql
create external table if not exists default.dept(
  deptno int,
  dname string,
  loc int
)
row format delimited fields terminated by '\t';

create external table if not exists default.emp(
  empno int,
  ename string,
  job string,
  mgr int,
  hiredate string,
  sal double,
  comm double,
  deptno int
)
row format delimited fields terminated by '\t';

create table if not exists emp_partition(
  empno int,
  ename string,
  job string,
  mgr int,
  hiredate string,
  sal double,
  comm double,
  deptno int
)
partitioned by (month string)
row format delimited fields terminated by '\t';
```
**Importing data**

For now, data has to be imported manually in batches from the back end; individual rows can also be inserted from the page with `insert` statements.

```sql
load data local inpath 'dept.txt' into table default.dept;
load data local inpath 'emp.txt' into table default.emp;
-- emp_partition is partitioned by month, so each load needs a partition clause
load data local inpath 'emp1.txt' into table default.emp_partition partition (month='202001');
load data local inpath 'emp2.txt' into table default.emp_partition partition (month='202002');
load data local inpath 'emp3.txt' into table default.emp_partition partition (month='202003');
```
The remaining data is imported with statements like the above; the sample data files are under `examples\ch3`.

## 2.2 Basic SQL syntax tests

### 2.2.1 Simple query

```sql
select * from dept;
```

### 2.2.2 Joins

```sql
select * from emp
left join dept
on emp.deptno = dept.deptno;
```

### 2.2.3 Aggregate functions

```sql
select dept.dname, avg(sal) as avg_salary
from emp left join dept
on emp.deptno = dept.deptno
group by dept.dname;
```

### 2.2.4 Built-in functions

```sql
select ename, job, sal,
rank() over(partition by job order by sal desc) sal_rank
from emp;
```

### 2.2.5 Simple queries on the partitioned table

```sql
show partitions emp_partition;
select * from emp_partition where month='202001';
```

### 2.2.6 Union queries on the partitioned table

```sql
select * from emp_partition where month='202001'
union
select * from emp_partition where month='202002'
union
select * from emp_partition where month='202003';
```

## 2.3 UDF function test

### 2.3.1 Uploading the jar

On the Scriptis page, right-click a directory in the tree to upload the jar:

![img](../../../images/zh_CN/chapter3/tests/hive3.png)

The sample jar is at `examples\ch3\rename.jar`.
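As an illustration of the kind of class such a jar contains, a minimal Hive UDF can be written by extending Hive's `UDF` base class and providing an `evaluate` method. The class below is a hypothetical sketch of what a `rename`-style function might look like; the actual implementation inside `rename.jar` may differ:

```scala
import org.apache.hadoop.hive.ql.exec.UDF

// Hypothetical sketch: a simple Hive UDF that rewrites a name.
// Hive resolves the function through the evaluate() method.
class Rename extends UDF {
  def evaluate(name: String): String =
    if (name == null) null else "renamed_" + name
}
```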
### 2.3.2 Defining the function

Go to the "UDF functions" tab (step 1 in the screenshot), right-click the "Personal functions" directory, and choose "Add function":

<img src="../../../images/zh_CN/chapter3/tests/hive4.png" alt="image-20200408214033801" style="zoom: 50%;" />

Enter the function name, select the jar, and fill in the registration format and the input/output formats to create the function:

![img](../../../images/zh_CN/chapter3/tests/hive5.png)

<img src="../../../images/zh_CN/chapter3/tests/hive-6.png" alt="image-20200409155418424" style="zoom: 67%;" />

The resulting function looks like this:

![img](../../../images/zh_CN/chapter3/tests/hive7.png)

### 2.3.3 Using the custom function in a SQL query

Once the function is registered, you can create a .hql file in the workspace page and use the function:

```sql
select deptno, ename, rename(ename) as new_name
from emp;
```
Lines changed: 61 additions & 0 deletions
# DSS User Test Sample 3: SparkSQL

The DSS user test samples are intended to give new users of the platform a set of test cases for getting familiar with common DSS operations and for verifying that the DSS platform behaves correctly.

![image-20200408211243941](../../../images/zh_CN/chapter3/tests/home.png)

## 3.1 Converting between RDD and DataFrame

### 3.1.1 RDD to DataFrame

```scala
case class MyList(id: Int)

val lis = List(1, 2, 3, 4)
val listRdd = sc.makeRDD(lis)
import spark.implicits._
val df = listRdd.map(value => MyList(value)).toDF()

df.show()
```

### 3.1.2 DataFrame to RDD

```scala
case class MyList(id: Int)

val lis = List(1, 2, 3, 4)
val listRdd = sc.makeRDD(lis)
import spark.implicits._
val df = listRdd.map(value => MyList(value)).toDF()
println("------------------")

val dfToRdd = df.rdd

dfToRdd.collect().foreach(print(_))
```

## 3.2 DSL-style queries

```scala
// df1 and df2 are assumed to be DataFrames with the same schema,
// each containing a `department` column
val df = df1.union(df2)
val dfSelect = df.select($"department")
dfSelect.show()
```

## 3.3 SQL-style queries (entry point: sqlContext)

```scala
// df1 and df2 are assumed to be DataFrames as in section 3.2
val df = df1.union(df2)

df.createOrReplaceTempView("dfTable")
val innerSql = """
  SELECT department
  FROM dfTable
"""
val sqlDF = sqlContext.sql(innerSql)
sqlDF.show()
```
dss-server/src/main/java/com/webank/wedatasphere/dss/server/dao/WorkspaceMapper.java

Lines changed: 12 additions & 0 deletions
```diff
@@ -2,6 +2,11 @@
 import com.webank.wedatasphere.dss.server.dto.response.*;
 import com.webank.wedatasphere.dss.server.entity.*;
+import com.webank.wedatasphere.dss.server.dto.response.HomepageDemoInstanceVo;
+import com.webank.wedatasphere.dss.server.dto.response.HomepageDemoMenuVo;
+import com.webank.wedatasphere.dss.server.dto.response.HomepageVideoVo;
+import com.webank.wedatasphere.dss.server.dto.response.WorkspaceFavoriteVo;
+import org.apache.ibatis.annotations.Param;

 import java.util.List;

@@ -36,4 +41,11 @@ public interface WorkspaceMapper {
     List<OnestopMenuAppInstanceVo> getMenuAppInstancesCn(Long id);
     List<OnestopMenuAppInstanceVo> getMenuAppInstanceEn(Long id);

+    List<WorkspaceFavoriteVo> getWorkspaceFavoritesCn(@Param("username") String username, @Param("workspaceId") Long workspaceId);
+
+    List<WorkspaceFavoriteVo> getWorkspaceFavoritesEn(@Param("username") String username, @Param("workspaceId") Long workspaceId);
+
+    void addFavorite(DWSFavorite dwsFavorite);
+
+    void deleteFavorite(Long favouritesId);
 }
```
