
Commit aae8019

Docs: jobs methods work with finished jobs by default

1 parent ebfcde7 commit aae8019

File tree

1 file changed: +9 -0 lines changed


scrapinghub/client/jobs.py

Lines changed: 9 additions & 0 deletions
@@ -60,6 +60,9 @@ def count(self, spider=None, state=None, has_tag=None, lacks_tag=None,
         :return: jobs count.
         :rtype: :class:`int`
 
+        The endpoint used by the method counts only finished jobs by default,
+        use ``state`` parameter to count jobs in other states.
+
         Usage::
 
         >>> spider = project.spiders.get('spider1')
@@ -99,6 +102,9 @@ def iter(self, count=None, start=None, spider=None, state=None,
             for a given filter params.
         :rtype: :class:`types.GeneratorType[dict]`
 
+        The endpoint used by the method returns only finished jobs by default,
+        use ``state`` parameter to return jobs in other states.
+
         Usage:
 
         - retrieve all jobs for a spider::
@@ -166,6 +172,9 @@ def list(self, count=None, start=None, spider=None, state=None,
         :return: list of dictionaries of jobs summary for a given filter params.
         :rtype: :class:`list[dict]`
 
+        The endpoint used by the method returns only finished jobs by default,
+        use ``state`` parameter to return jobs in other states.
+
        Please note that :meth:`list` can use a lot of memory and for a large
        amount of logs it's recommended to iterate through it via :meth:`iter`
        method (all params and available filters are same for both methods).
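The semantics the new docstring lines describe can be sketched with a toy in-memory stand-in for the jobs endpoint. This is not the real scrapinghub client; `FakeJobs` and `JOBS` are illustrative names, and the only behaviour modelled is the documented default: without an explicit ``state``, only finished jobs come back.

```python
# Toy model of the jobs endpoint's default-state behaviour described in the
# docstrings above. FakeJobs and JOBS are hypothetical, not scrapinghub APIs.

JOBS = [
    {'key': '1/2/1', 'state': 'finished'},
    {'key': '1/2/2', 'state': 'running'},
    {'key': '1/2/3', 'state': 'pending'},
    {'key': '1/2/4', 'state': 'finished'},
]


class FakeJobs:
    def iter(self, state=None):
        # Mirrors the endpoint default: no ``state`` means finished jobs only.
        wanted = state or 'finished'
        return (job for job in JOBS if job['state'] == wanted)

    def count(self, state=None):
        return sum(1 for _ in self.iter(state=state))


jobs = FakeJobs()
print(jobs.count())                 # finished only, the default -> 2
print(jobs.count(state='running'))  # explicit state -> 1
```

The same pattern applies to the real ``project.jobs.count()`` / ``.iter()`` / ``.list()`` calls: pass ``state='running'``, ``'pending'``, etc. to look beyond finished jobs.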
