42 | 42 | codename: '[k8s.io] Container Runtime blackbox test on terminated container should
43 | 43 | report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy
44 | 44 | FallbackToLogsOnError is set [NodeConformance] [Conformance]'
45 | | - description: 'Name: Container Runtime, TerminationMessage, from log output of succeeding
46 | | - container Create a pod with an container. Container''s output is recorded in log
47 | | - and container exits successfully without an error. When container is terminated,
| 45 | + description: 'Create a pod with an container. Container''s output is recorded in
| 46 | + log and container exits successfully without an error. When container is terminated,
48 | 47 | terminationMessage MUST have no content as container succeed. [LinuxOnly]: Cannot
49 | 48 | mount files in Windows Containers.'
50 | 49 | release: v1.15
53 | 52 | codename: '[k8s.io] Container Runtime blackbox test on terminated container should
54 | 53 | report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy
55 | 54 | FallbackToLogsOnError is set [NodeConformance] [Conformance]'
56 | | - description: 'Name: Container Runtime, TerminationMessage, from file of succeeding
57 | | - container Create a pod with an container. Container''s output is recorded in a
58 | | - file and the container exits successfully without an error. When container is
| 55 | + description: 'Create a pod with an container. Container''s output is recorded in
| 56 | + a file and the container exits successfully without an error. When container is
59 | 57 | terminated, terminationMessage MUST match with the content from file. [LinuxOnly]:
60 | 58 | Cannot mount files in Windows Containers.'
61 | 59 | release: v1.15
64 | 62 | codename: '[k8s.io] Container Runtime blackbox test on terminated container should
65 | 63 | report termination message [LinuxOnly] from log output if TerminationMessagePolicy
66 | 64 | FallbackToLogsOnError is set [NodeConformance] [Conformance]'
67 | | - description: 'Name: Container Runtime, TerminationMessage, from container''s log
68 | | - output of failing container Create a pod with an container. Container''s output
69 | | - is recorded in log and container exits with an error. When container is terminated,
70 | | - termination message MUST match the expected output recorded from container''s
71 | | - log. [LinuxOnly]: Cannot mount files in Windows Containers.'
| 65 | + description: 'Create a pod with an container. Container''s output is recorded in
| 66 | + log and container exits with an error. When container is terminated, termination
| 67 | + message MUST match the expected output recorded from container''s log. [LinuxOnly]:
| 68 | + Cannot mount files in Windows Containers.'
72 | 69 | release: v1.15
73 | 70 | file: test/e2e/common/runtime.go
74 | 71 | - testname: ""
75 | 72 | codename: '[k8s.io] Container Runtime blackbox test on terminated container should
76 | 73 | report termination message [LinuxOnly] if TerminationMessagePath is set as non-root
77 | 74 | user and at a non-default path [NodeConformance] [Conformance]'
78 | | - description: 'Name: Container Runtime, TerminationMessagePath, non-root user and
79 | | - non-default path Create a pod with a container to run it as a non-root user with
80 | | - a custom TerminationMessagePath set. Pod redirects the output to the provided
81 | | - path successfully. When the container is terminated, the termination message MUST
82 | | - match the expected output logged in the provided custom path. [LinuxOnly]: Tagged
83 | | - LinuxOnly due to use of ''uid'' and unable to mount files in Windows Containers.'
| 75 | + description: 'Create a pod with a container to run it as a non-root user with a
| 76 | + custom TerminationMessagePath set. Pod redirects the output to the provided path
| 77 | + successfully. When the container is terminated, the termination message MUST match
| 78 | + the expected output logged in the provided custom path. [LinuxOnly]: Tagged LinuxOnly
| 79 | + due to use of ''uid'' and unable to mount files in Windows Containers.'
84 | 80 | release: v1.15
85 | 81 | file: test/e2e/common/runtime.go
86 | 82 | - testname: Container Runtime, Restart Policy, Pod Phases
104 | 100 | - testname: Docker containers, with command
105 | 101 | codename: '[k8s.io] Docker Containers should be able to override the image''s default
106 | 102 | command (docker entrypoint) [NodeConformance] [Conformance]'
107 | | - description: 'Note: when you override the entrypoint, the image''s arguments (docker
108 | | - cmd) are ignored. Default command from the docker image entrypoint MUST NOT be
109 | | - used when Pod specifies the container command. Command from Pod spec MUST override
110 | | - the command in the image.'
| 103 | + description: Default command from the docker image entrypoint MUST NOT be used when
| 104 | + Pod specifies the container command. Command from Pod spec MUST override the
| 105 | + command in the image.
111 | 106 | release: v1.9
112 | 107 | file: test/e2e/common/docker_containers.go
113 | 108 | - testname: Docker containers, with command and arguments
793 | 788 | - testname: Garbage Collector, dependency cycle
794 | 789 | codename: '[sig-api-machinery] Garbage collector should not be blocked by dependency
795 | 790 | circle [Conformance]'
796 | | - description: 'TODO: should be an integration test Create three pods, patch them
797 | | - with Owner references such that pod1 has pod3, pod2 has pod1 and pod3 has pod2
798 | | - as owner references respectively. Delete pod1 MUST delete all pods. The dependency
799 | | - cycle MUST not block the garbage collection.'
| 791 | + description: Create three pods, patch them with Owner references such that pod1
| 792 | + has pod3, pod2 has pod1 and pod3 has pod2 as owner references respectively. Delete
| 793 | + pod1 MUST delete all pods. The dependency cycle MUST not block the garbage collection.
800 | 794 | release: v1.9
801 | 795 | file: test/e2e/apimachinery/garbage_collector.go
802 | 796 | - testname: Garbage Collector, multiple owners
803 | 797 | codename: '[sig-api-machinery] Garbage collector should not delete dependents that
804 | 798 | have both valid owner and owner that''s waiting for dependents to be deleted [Conformance]'
805 | | - description: 'TODO: this should be an integration test Create a replication controller
806 | | - RC1, with maximum allocatable Pods between 10 and 100 replicas. Create second
807 | | - replication controller RC2 and set RC2 as owner for half of those replicas. Once
808 | | - RC1 is created and the all Pods are created, delete RC1 with deleteOptions.PropagationPolicy
809 | | - set to Foreground. Half of the Pods that has RC2 as owner MUST not be deleted
810 | | - but have a deletion timestamp. Deleting the Replication Controller MUST not delete
811 | | - Pods that are owned by multiple replication controllers.'
| 799 | + description: Create a replication controller RC1, with maximum allocatable Pods
| 800 | + between 10 and 100 replicas. Create second replication controller RC2 and set
| 801 | + RC2 as owner for half of those replicas. Once RC1 is created and the all Pods
| 802 | + are created, delete RC1 with deleteOptions.PropagationPolicy set to Foreground.
| 803 | + Half of the Pods that has RC2 as owner MUST not be deleted but have a deletion
| 804 | + timestamp. Deleting the Replication Controller MUST not delete Pods that are owned
| 805 | + by multiple replication controllers.
812 | 806 | release: v1.9
813 | 807 | file: test/e2e/apimachinery/garbage_collector.go
814 | 808 | - testname: Garbage Collector, delete deployment, propagation policy orphan
1385 | 1379 | - testname: Kubectl, proxy port zero
1386 | 1380 | codename: '[sig-cli] Kubectl client Proxy server should support proxy with --port
1387 | 1381 | 0 [Conformance]'
1388 | | - description: 'TODO: test proxy options (static, prefix, etc) Start a proxy server
1389 | | - on port zero by running ''kubectl proxy'' with --port=0. Call the proxy server
1390 | | - by requesting api versions from unix socket. The proxy server MUST provide at
1391 | | - least one version string.'
| 1382 | + description: Start a proxy server on port zero by running 'kubectl proxy' with --port=0.
| 1383 | + Call the proxy server by requesting api versions from unix socket. The proxy server
| 1384 | + MUST provide at least one version string.
1392 | 1385 | release: v1.9
1393 | 1386 | file: test/e2e/kubectl/kubectl.go
1394 | 1387 | - testname: Kubectl, replication controller
1473 | 1466 | - testname: Networking, intra pod http
1474 | 1467 | codename: '[sig-network] Networking Granular Checks: Pods should function for intra-pod
1475 | 1468 | communication: http [NodeConformance] [Conformance]'
1476 | | - description: Try to hit all endpoints through a test container, retry 5 times, expect
1477 | | - exactly one unique hostname. Each of these endpoints reports its own hostname.
1478 | | - Create a hostexec pod that is capable of curl to netcat commands. Create a test
1479 | | - Pod that will act as a webserver front end exposing ports 8080 for tcp and 8081
1480 | | - for udp. The netserver service proxies are created on specified number of nodes.
1481 | | - The kubectl exec on the webserver container MUST reach a http port on the each
1482 | | - of service proxy endpoints in the cluster and the request MUST be successful.
| 1469 | + description: Create a hostexec pod that is capable of curl to netcat commands. Create
| 1470 | + a test Pod that will act as a webserver front end exposing ports 8080 for tcp
| 1471 | + and 8081 for udp. The netserver service proxies are created on specified number
| 1472 | + of nodes. The kubectl exec on the webserver container MUST reach a http port on
| 1473 | + the each of service proxy endpoints in the cluster and the request MUST be successful.
1483 | 1474 | Container will execute curl command to reach the service port within specified
1484 | 1475 | max retry limit and MUST result in reporting unique hostnames.
1485 | 1476 | release: v1.9, v1.18
1540 | 1531 | file: test/e2e/network/proxy.go
1541 | 1532 | - testname: Proxy, logs service endpoint
1542 | 1533 | codename: '[sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]'
1543 | | - description: using the porter image to serve content, access the content (of multiple
1544 | | - pods?) from multiple (endpoints/services?) Select any node in the cluster to invoke /logs
1545 | | - endpoint using the /nodes/proxy subresource from the kubelet port. This endpoint
1546 | | - MUST be reachable.
| 1534 | + description: Select any node in the cluster to invoke /logs endpoint using the
| 1535 | + /nodes/proxy subresource from the kubelet port. This endpoint MUST be reachable.
1547 | 1536 | release: v1.9
1548 | 1537 | file: test/e2e/network/proxy.go
1549 | 1538 | - testname: Service endpoint latency, thresholds
1711 | 1700 | - testname: Scheduler, resource limits
1712 | 1701 | codename: '[sig-scheduling] SchedulerPredicates [Serial] validates resource limits
1713 | 1702 | of pods that are allowed to run [Conformance]'
1714 | | - description: 'This test verifies we don''t allow scheduling of pods in a way that
1715 | | - sum of resource requests of pods is greater than machines capacity. It assumes
1716 | | - that cluster add-on pods stay stable and cannot be run in parallel with any other
1717 | | - test that touches Nodes or Pods. It is so because we need to have precise control
1718 | | - on what''s running in the cluster. Test scenario: 1. Find the amount CPU resources
1719 | | - on each node. 2. Create one pod with affinity to each node that uses 70% of the
1720 | | - node CPU. 3. Wait for the pods to be scheduled. 4. Create another pod with no
1721 | | - affinity to any node that need 50% of the largest node CPU. 5. Make sure this
1722 | | - additional pod is not scheduled. Scheduling Pods MUST fail if the resource requests
1723 | | - exceed Machine capacity.'
| 1703 | + description: Scheduling Pods MUST fail if the resource requests exceed Machine capacity.
1724 | 1704 | release: v1.9
1725 | 1705 | file: test/e2e/scheduling/predicates.go
1726 | 1706 | - testname: Scheduler, node selector matching
1734 | 1714 | - testname: Scheduler, node selector not matching
1735 | 1715 | codename: '[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector
1736 | 1716 | is respected if not matching [Conformance]'
1737 | | - description: Test Nodes does not have any label, hence it should be impossible to
1738 | | - schedule Pod with nonempty Selector set. Create a Pod with a NodeSelector set
1739 | | - to a value that does not match a node in the cluster. Since there are no nodes
1740 | | - matching the criteria the Pod MUST not be scheduled.
| 1717 | + description: Create a Pod with a NodeSelector set to a value that does not match
| 1718 | + a node in the cluster. Since there are no nodes matching the criteria the Pod
| 1719 | + MUST not be scheduled.
1741 | 1720 | release: v1.9
1742 | 1721 | file: test/e2e/scheduling/predicates.go
1743 | 1722 | - testname: Scheduling, HostPort and Protocol match, HostIPs different but one is
2127 | 2106 | - testname: Projected Volume, multiple projections
2128 | 2107 | codename: '[sig-storage] Projected combined should project all components that make
2129 | 2108 | up the projection API [Projection][NodeConformance] [Conformance]'
2130 | | - description: Test multiple projections A Pod is created with a projected volume
2131 | | - source for secrets, configMap and downwardAPI with pod name, cpu and memory limits
2132 | | - and cpu and memory requests. Pod MUST be able to read the secrets, configMap values
2133 | | - and the cpu and memory limits as well as cpu and memory requests from the mounted
2134 | | - DownwardAPIVolumeFiles.
| 2109 | + description: A Pod is created with a projected volume source for secrets, configMap
| 2110 | + and downwardAPI with pod name, cpu and memory limits and cpu and memory requests.
| 2111 | + Pod MUST be able to read the secrets, configMap values and the cpu and memory
| 2112 | + limits as well as cpu and memory requests from the mounted DownwardAPIVolumeFiles.
2135 | 2113 | release: v1.9
2136 | 2114 | file: test/e2e/common/projected_combined.go
2137 | 2115 | - testname: Projected Volume, ConfigMap, create, update and delete
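For reference, one full entry assembled from the Kubectl proxy hunk above shows how a record reads after this cleanup, with the description starting directly at the test behavior instead of a 'Name:' or 'TODO:' prefix. This is a minimal sketch: the field values are taken verbatim from the diff, but the YAML indentation is approximated rather than copied from the file.

# Sketch of a resulting conformance.yaml entry; indentation approximated.
- testname: Kubectl, proxy port zero
  codename: '[sig-cli] Kubectl client Proxy server should support proxy with --port
    0 [Conformance]'
  description: Start a proxy server on port zero by running 'kubectl proxy' with --port=0.
    Call the proxy server by requesting api versions from unix socket. The proxy server
    MUST provide at least one version string.
  release: v1.9
  file: test/e2e/kubectl/kubectl.go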