Description
Search before asking
- I had searched in the issues and found no similar issues.
What happened
Version: Dinky 1.2.5, Flink 1.20.3
Description: When a job is started in Dinky for the second (or any subsequent) time, the exception "The number of retries exceeds the limit, check the K8S cluster for more information" is thrown. The job itself is running normally in K8s and reports no error, so it is Dinky's get job list call that fails. The problem disappears after restarting the Dinky service, so could some local cache in Dinky be causing this?
What you expected to happen
I expect that when Dinky launches a Flink job, whether it is the first launch or any subsequent one, Dinky should always detect the job status correctly.
How to reproduce
K8s (Alibaba Cloud ACK)
Write a demo Flink SQL job and click Run. After it completes, click Run again: fetching the job status fails, even though the job is actually running normally in K8s.
CREATE TABLE datagen (
f_sequence INT,
f_random INT,
f_random_str STRING,
ts AS localtimestamp,
WATERMARK FOR ts AS ts
) WITH (
'connector' = 'datagen',
-- optional options --
'rows-per-second'='2',
'fields.f_sequence.kind'='sequence',
'fields.f_sequence.start'='1',
'fields.f_sequence.end'='300',
'fields.f_random.min'='1',
'fields.f_random.max'='500',
'fields.f_random_str.length'='10'
);
CREATE TABLE print_table (
f_sequence INT,
f_random INT,
f_random_str STRING
) WITH (
'connector' = 'print'
);
INSERT INTO print_table SELECT f_sequence, f_random, f_random_str FROM datagen;
Anything else
No response
Version
dev
Are you willing to submit PR?
- Yes I am willing to submit a PR!
Code of Conduct
- I agree to follow this project's Code of Conduct