[Enhancement][zos_data_set] Support for NOSCRATCH option when deleting datasets #2210

Open · wants to merge 17 commits into dev
@@ -0,0 +1,10 @@
minor_changes:
- zos_data_set - Adds `noscratch` option to allow uncataloging
a data set without deleting it from the volume's VTOC.
(https://github.com/ansible-collections/ibm_zos_core/pull/2202)
trivial:
- data_set - Internal updates to support the `noscratch` option.
(https://github.com/ansible-collections/ibm_zos_core/pull/2202)
- test_zos_data_set_func - added a test case to verify the `noscratch` option
functionality in the zos_data_set module.
(https://github.com/ansible-collections/ibm_zos_core/pull/2202)
3 changes: 3 additions & 0 deletions changelogs/fragments/2207-SYSIN-support-zos_job_output.yml
@@ -0,0 +1,3 @@
minor_changes:
- zos_job_output - Adds support to query SYSIN DDs from a job with the new `input` option.
(https://github.com/ansible-collections/ibm_zos_core/pull/2207)
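
For orientation, a minimal task sketch of how this might be used; the option name `input` follows the changelog wording above, and the job ID is a placeholder:

# Sketch only: `input` per the changelog entry; JOB00134 is a placeholder.
- name: Fetch job output, including SYSIN DDs
  ibm.ibm_zos_core.zos_job_output:
    job_id: "JOB00134"
    input: true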
@@ -0,0 +1,10 @@
minor_changes:
- zos_data_set - Adds `noscratch` option to allow uncataloging
a data set without deleting it from the volume's VTOC.
(https://github.com/ansible-collections/ibm_zos_core/pull/2210)
trivial:
- data_set - Internal updates to support the `noscratch` option.
(https://github.com/ansible-collections/ibm_zos_core/pull/2210)
- test_zos_data_set_func - added a test case to verify the `noscratch` option
functionality in the zos_data_set module.
(https://github.com/ansible-collections/ibm_zos_core/pull/2210)
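
As a usage sketch (the data set name is a placeholder, and `noscratch` is presumably relevant only with `state: absent`):

# Sketch only: USER.TEST.DATA is a placeholder. With noscratch, the data
# set is uncataloged but stays in the volume's VTOC.
- name: Uncatalog a data set without scratching it from the volume
  ibm.ibm_zos_core.zos_data_set:
    name: USER.TEST.DATA
    state: absent
    noscratch: true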
16 changes: 16 additions & 0 deletions changelogs/fragments/2213-test-case-conditional-failure-2-19.yml
@@ -0,0 +1,16 @@
trivial:
- test_zos_copy_func.py - modified test case `test_job_script_async`
to resolve porting issues to ansible 2.19.
(https://github.com/ansible-collections/ibm_zos_core/pull/2213).

- test_zos_job_submit_func.py - modified test case `test_job_submit_async`
to resolve porting issues to ansible 2.19.
(https://github.com/ansible-collections/ibm_zos_core/pull/2213).

- test_zos_script_func.py - modified test case `test_job_script_async`
to resolve porting issues to ansible 2.19.
(https://github.com/ansible-collections/ibm_zos_core/pull/2213).

- test_zos_unarchive_func.py - modified test case `test_zos_unarchive_async`
to resolve porting issues to ansible 2.19.
(https://github.com/ansible-collections/ibm_zos_core/pull/2213).
7 changes: 7 additions & 0 deletions changelogs/fragments/2229-job-typrun-support.yml
@@ -0,0 +1,7 @@
minor_changes:
- zos_job_submit - Adds support for jobs with TYPRUN=JCLHOLD and TYPRUN=HOLD.
(https://github.com/ansible-collections/ibm_zos_core/pull/2229).
trivial:
- zos_job_submit - Fixes a regression on ZOAU v1.3.6.0 where a job submitted
with TYPRUN=COPY would return an error.
(https://github.com/ansible-collections/ibm_zos_core/pull/2229).
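
Illustratively (the path and `location` value are placeholders; TYPRUN lives in the JCL job card, not in the module options), submitting held JCL would now report the job as held rather than failing:

# Sketch only: /u/omvsadm/hold.jcl is a hypothetical file whose job card
# specifies TYPRUN=HOLD; the module reports the job as held.
- name: Submit a job whose job card specifies TYPRUN=HOLD
  ibm.ibm_zos_core.zos_job_submit:
    src: /u/omvsadm/hold.jcl
    location: uss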
37 changes: 36 additions & 1 deletion docs/source/modules/zos_job_submit.rst
@@ -383,6 +383,7 @@ jobs
"asid": 0,
"class": "K",
"content_type": "JOB",
"cpu_time": 1,
"creation_date": "2023-05-03",
"creation_time": "12:13:00",
"ddnames": [
@@ -579,10 +580,12 @@ jobs
"stepname": "DLORD6"
}
],
"execution_node": "STL1",
"execution_time": "00:00:10",
"job_class": "K",
"job_id": "JOB00361",
"job_name": "DBDGEN00",
"origin_node": "STL1",
"owner": "OMVSADM",
"priority": 1,
"program_name": "IEBGENER",
@@ -763,7 +766,9 @@ jobs

Job status `TYPRUN=SCAN` indicates that the job had the TYPRUN parameter with the SCAN option.

Job status `NOEXEC` indicates that the job had the TYPRUN parameter with the COPY option.
Job status `TYPRUN=COPY` indicates that the job had the TYPRUN parameter with the COPY option.

Job status `HOLD` indicates that the job had the TYPRUN parameter with either the HOLD or JCLHOLD options.

Jobs where status cannot be determined will result in None (NULL).

@@ -858,4 +863,34 @@ jobs
| **type**: str
| **sample**: IEBGENER

system
The job entry system that MVS uses to do work.

| **type**: str
| **sample**: STL1

subsystem
The job entry subsystem that MVS uses to do work.

| **type**: str
| **sample**: STL1

cpu_time
Sum of the CPU time used by each job step, in microseconds.

| **type**: int
| **sample**: 5

execution_node
Execution node that picked the job and executed it.

| **type**: str
| **sample**: STL1

origin_node
Origin node that submitted the job.

| **type**: str
| **sample**: STL1
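
Taken together, a registered result for a submitted job would now carry the new fields, roughly as in this excerpt assembled from the samples above (all other documented fields omitted):

# Excerpt only; values copied from the documentation samples.
jobs:
  - job_id: JOB00361
    job_name: DBDGEN00
    cpu_time: 1
    execution_node: STL1
    origin_node: STL1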


26 changes: 15 additions & 11 deletions plugins/module_utils/data_set.py
@@ -241,7 +241,7 @@ def ensure_present(
return True

@staticmethod
def ensure_absent(name, volumes=None, tmphlq=None):
def ensure_absent(name, volumes=None, tmphlq=None, noscratch=False):
"""Deletes provided data set if it exists.

Parameters
@@ -252,13 +252,15 @@ def ensure_absent(name, volumes=None, tmphlq=None):
The volumes the data set may reside on.
tmphlq : str
High Level Qualifier for temporary datasets.
noscratch : bool
If True, the data set is uncataloged but not physically removed from the volume.

Returns
-------
bool
Indicates if changes were made.
"""
changed, present = DataSet.attempt_catalog_if_necessary_and_delete(name, volumes, tmphlq=tmphlq)
changed, present = DataSet.attempt_catalog_if_necessary_and_delete(name, volumes, tmphlq=tmphlq, noscratch=noscratch)
return changed

# ? should we do additional check to ensure member was actually created?
@@ -1003,7 +1005,7 @@ def attempt_catalog_if_necessary(name, volumes, tmphlq=None):
return present, changed

@staticmethod
def attempt_catalog_if_necessary_and_delete(name, volumes, tmphlq=None):
def attempt_catalog_if_necessary_and_delete(name, volumes, tmphlq=None, noscratch=False):
"""Attempts to catalog a data set if not already cataloged, then deletes
the data set.
This is helpful when a data set currently cataloged is not the data
@@ -1019,6 +1021,8 @@ def attempt_catalog_if_necessary_and_delete(name, volumes, tmphlq=None):
The volumes the data set may reside on.
tmphlq : str
High Level Qualifier for temporary datasets.
noscratch : bool
If True, the data set is uncataloged but not physically removed from the volume.

Returns
-------
@@ -1039,7 +1043,7 @@ def attempt_catalog_if_necessary_and_delete(name, volumes, tmphlq=None):
present = DataSet.data_set_cataloged(name, volumes, tmphlq=tmphlq)

if present:
DataSet.delete(name)
DataSet.delete(name, noscratch=noscratch)
changed = True
present = False
else:
@@ -1074,7 +1078,7 @@ def attempt_catalog_if_necessary_and_delete(name, volumes, tmphlq=None):

if present:
try:
DataSet.delete(name)
DataSet.delete(name, noscratch=noscratch)
except DatasetDeleteError:
try:
DataSet.uncatalog(name, tmphlq=tmphlq)
@@ -1101,14 +1105,14 @@ def attempt_catalog_if_necessary_and_delete(name, volumes, tmphlq=None):
present = DataSet.data_set_cataloged(name, volumes, tmphlq=tmphlq)

if present:
DataSet.delete(name)
DataSet.delete(name, noscratch=noscratch)
changed = True
present = False
else:
present = DataSet.data_set_cataloged(name, None, tmphlq=tmphlq)
if present:
try:
DataSet.delete(name)
DataSet.delete(name, noscratch=noscratch)
changed = True
present = False
except DatasetDeleteError:
@@ -1414,7 +1418,7 @@ def create(
return changed

@staticmethod
def delete(name):
def delete(name, noscratch=False):
"""A wrapper around zoautil_py
datasets.delete() to raise exceptions on failure.

@@ -1428,7 +1432,7 @@ def delete(name):
DatasetDeleteError
When data set deletion fails.
"""
rc = datasets.delete(name)
rc = datasets.delete(name, noscratch=noscratch)
if rc > 0:
raise DatasetDeleteError(name, rc)

@@ -2721,7 +2725,7 @@ def ensure_present(self, tmp_hlq=None, replace=False, force=False):
self.set_state("present")
return rc

def ensure_absent(self, tmp_hlq=None):
def ensure_absent(self, tmp_hlq=None, noscratch=False):
"""Removes the data set.

Parameters
@@ -2734,7 +2738,7 @@ def ensure_absent(self, tmp_hlq=None):
int
Indicates if changes were made.
"""
rc = DataSet.ensure_absent(self.name, self.volumes, tmphlq=tmp_hlq)
rc = DataSet.ensure_absent(self.name, self.volumes, tmphlq=tmp_hlq, noscratch=noscratch)
if rc == 0:
self.set_state("absent")
return rc
14 changes: 10 additions & 4 deletions plugins/module_utils/job.py
@@ -58,7 +58,7 @@
])


def job_output(job_id=None, owner=None, job_name=None, dd_name=None, dd_scan=True, duration=0, timeout=0, start_time=timer()):
def job_output(job_id=None, owner=None, job_name=None, dd_name=None, sysin=False, dd_scan=True, duration=0, timeout=0, start_time=timer()):
"""Get the output from a z/OS job based on various search criteria.

Keyword Parameters
@@ -71,6 +71,8 @@ def job_output(job_id=None, owner=None, job_name=None, dd_name=None, dd_scan=Tru
The job name to search for (default: {None}).
dd_name : str
The data definition to retrieve (default: {None}).
sysin : bool
If True, also retrieve the SYSIN DDs for the job (default: {False}).
dd_scan : bool
Whether or not to pull information from the DDs for this job (default: {True}).
duration : int
@@ -112,6 +114,7 @@ def job_output(job_id=None, owner=None, job_name=None, dd_name=None, dd_scan=Tru
job_name=job_name,
dd_name=dd_name,
duration=duration,
sysin=sysin,
dd_scan=dd_scan,
timeout=timeout,
start_time=start_time
@@ -128,6 +131,7 @@ def job_output(job_id=None, owner=None, job_name=None, dd_name=None, dd_scan=Tru
owner=owner,
job_name=job_name,
dd_name=dd_name,
sysin=sysin,
dd_scan=dd_scan,
duration=duration,
timeout=timeout,
@@ -287,7 +291,7 @@ def _parse_steps(job_str):
return stp


def _get_job_status(job_id="*", owner="*", job_name="*", dd_name=None, dd_scan=True, duration=0, timeout=0, start_time=timer()):
def _get_job_status(job_id="*", owner="*", job_name="*", dd_name=None, sysin=False, dd_scan=True, duration=0, timeout=0, start_time=timer()):
"""Get job status.

Parameters
@@ -300,6 +304,8 @@ def _get_job_status(job_id="*", owner="*", job_name="*", dd_name=None, dd_scan=T
The job name to search for (default: {None}).
dd_name : str
The data definition to retrieve (default: {None}).
sysin : bool
If True, also retrieve the SYSIN DDs for the job (default: {False}).
dd_scan : bool
Whether or not to pull information from the DDs for this job (default: {True}).
duration : int
@@ -405,7 +411,7 @@ def _get_job_status(job_id="*", owner="*", job_name="*", dd_name=None, dd_scan=T
list_of_dds = []

try:
list_of_dds = jobs.list_dds(entry.job_id)
list_of_dds = jobs.list_dds(entry.job_id, sysin=sysin)
except exceptions.DDQueryException:
is_dd_query_exception = True

@@ -424,7 +430,7 @@ def _get_job_status(job_id="*", owner="*", job_name="*", dd_name=None, dd_scan=T
try:
# Note, in the event of an exception, eg job has TYPRUN=HOLD
# list_of_dds will still be populated with valuable content
list_of_dds = jobs.list_dds(entry.job_id)
list_of_dds = jobs.list_dds(entry.job_id, sysin=sysin)
is_jesjcl = True if search_dictionaries("dd_name", "JESJCL", list_of_dds) else False
is_job_error_status = True if entry.status in JOB_ERROR_STATUSES else False
except exceptions.DDQueryException: