52 changes: 52 additions & 0 deletions .github/workflows/ci-code.yml
@@ -133,3 +133,55 @@ jobs:
verdi devel check-load-time
verdi devel check-undesired-imports
.github/workflows/verdi.sh


test-pytest-fixtures:
# Who watches the watchmen?
# Here we test the pytest fixtures in isolation from the rest of the aiida-core test suite,
# since they can be used outside of the aiida-core context, e.g. in plugins.
# Unlike the other workflows in this file, we purposefully don't set up a test profile.

runs-on: ubuntu-24.04
timeout-minutes: 10

services:
postgres:
image: postgres:10
env:
POSTGRES_DB: test_aiida
POSTGRES_PASSWORD: ''
POSTGRES_HOST_AUTH_METHOD: trust
options: >-
--health-cmd pg_isready
--health-interval 10s
--health-timeout 5s
--health-retries 5
ports:
- 5432:5432
rabbitmq:
image: rabbitmq:3.8.14-management
ports:
- 5672:5672
- 15672:15672

steps:
- uses: actions/checkout@v4

- name: Install aiida-core
uses: ./.github/actions/install-aiida-core
with:
python-version: '3.9'
from-lock: 'true'
extras: tests

- name: Test legacy pytest fixtures
run: pytest --cov aiida --noconftest src/aiida/manage/tests/test_pytest_fixtures.py

- name: Upload coverage report
if: github.repository == 'aiidateam/aiida-core'
uses: codecov/codecov-action@v5
with:
token: ${{ secrets.CODECOV_TOKEN }}
name: test-pytest-fixtures
files: ./coverage.xml
fail_ci_if_error: false # don't fail the job if the coverage upload fails
2 changes: 1 addition & 1 deletion docs/source/topics/calculations/usage.rst
@@ -635,7 +635,7 @@ Using the ``COPY`` mode, the target path defines another location (on the same f
In addition to the ``COPY`` mode, the following storage-efficient modes are also available:
``COMPRESS_TAR``, ``COMPRESS_TARBZ2``, ``COMPRESS_TARGZ``, ``COMPRESS_TARXZ``.

The stashed files and folders are represented by an output node that is attached to the calculation node through the label ``remote_stash``, as a ``RemoteStashFolderData`` node.
The stashed files and folders are represented by an output node that is attached to the calculation node through the label ``remote_stash``, as a ``RemoteStashCopyData`` node.
Just like the ``remote_folder`` node, this represents a location of files on a remote machine and so is equivalent to a "symbolic link".
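
A minimal sketch of how stashing could be requested through the calculation options (the paths and file names below are hypothetical; the option keys follow the standard ``CalcJob`` stashing interface)::

    from aiida.common.datastructures import StashMode

    builder.metadata.options.stash = {
        'source_list': ['aiida.out', 'output.xml'],  # files in the remote working directory to stash
        'target_base': '/storage/project/stash',     # base path on the same remote computer
        'stash_mode': StashMode.COPY.value,          # or one of the COMPRESS_* modes
    }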

.. important::
18 changes: 13 additions & 5 deletions docs/source/topics/cli.rst
@@ -52,11 +52,19 @@ For example, ``verdi process kill --help`` shows::
Kill running processes.

Options:
-t, --timeout FLOAT Time in seconds to wait for a response before timing
out. [default: 5.0]
--wait / --no-wait Wait for the action to be completed otherwise return as
soon as it's scheduled.
-h, --help Show this message and exit.
-a, --all Kill all processes if no specific processes
are specified.
-t, --timeout FLOAT Time in seconds to wait for a response
before timing out. If timeout <= 0, the
command does not wait for a response.
[default: inf]
-F, --force Kills the process without waiting for
confirmation that the job has been killed.
Note: This may lead to orphaned jobs on your
HPC and should be used with caution.
-v, --verbosity [notset|debug|info|report|warning|error|critical]
Set the verbosity of the output.
-h, --help Show this message and exit.

All help strings consist of three parts:

10 changes: 6 additions & 4 deletions docs/source/topics/processes/usage.rst
@@ -728,10 +728,12 @@ If the runner has successfully received the request and scheduled the callback,
The word 'scheduled' indicates that the actual killing might not have happened just yet.
This means that even after having called ``verdi process kill`` and getting the success message, the corresponding process may still be listed as active in the output of ``verdi process list``.

By default, the ``pause``, ``play`` and ``kill`` commands will only ask for the confirmation of the runner that the request has been scheduled and not actually wait for the command to have been executed.
To change this behavior, you can use the ``--wait`` flag to actually wait for the action to be completed.
If workers are under heavy load, it may take some time for them to respond to the request and for the command to finish.
If you know that your daemon runners may be experiencing a heavy load, you can also increase the time that the command waits before timing out, with the ``-t/--timeout`` flag.
To change this behavior, you can use the ``-t / --timeout <FLOAT>`` option to specify a timeout after which the command stops waiting for the action to complete.
If you set the timeout to ``0``, the command returns immediately without waiting for a response.
A process is only gracefully killed if AiiDA is able to cancel the associated scheduler job.
By default, the ``pause``, ``play`` and ``kill`` commands wait until the action has completed, whether it failed or succeeded.
If you want to kill the process regardless of whether the scheduler job is successfully cancelled, you can use the ``-F / --force`` option.
In this case, a cancellation request is still sent to the scheduler, but the command does not wait for a successful response and proceeds to kill the AiiDA process.
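
For example, for a hypothetical process with pk ``1234``::

    verdi process kill 1234         # wait indefinitely for the kill to complete
    verdi process kill -t 10 1234   # give up waiting for a response after 10 seconds
    verdi process kill -t 0 1234    # return immediately without waiting for a response
    verdi process kill -F 1234      # force the kill, even if the scheduler job cannot be cancelled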


.. rubric:: Footnotes
Expand Down
3 changes: 2 additions & 1 deletion pyproject.toml
@@ -119,7 +119,8 @@ requires-python = '>=3.9'
'core.remote' = 'aiida.orm.nodes.data.remote.base:RemoteData'
'core.remote.stash' = 'aiida.orm.nodes.data.remote.stash.base:RemoteStashData'
'core.remote.stash.compress' = 'aiida.orm.nodes.data.remote.stash.compress:RemoteStashCompressedData'
'core.remote.stash.folder' = 'aiida.orm.nodes.data.remote.stash.folder:RemoteStashFolderData'
'core.remote.stash.copy' = 'aiida.orm.nodes.data.remote.stash.copy:RemoteStashCopyData'
'core.remote.stash.folder' = 'aiida.orm.nodes.data.remote.stash.folder:RemoteStashFolderData' # legacy, to be removed in AiiDA 3.0
'core.singlefile' = 'aiida.orm.nodes.data.singlefile:SinglefileData'
'core.str' = 'aiida.orm.nodes.data.str:Str'
'core.structure' = 'aiida.orm.nodes.data.structure:StructureData'
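
As a sketch of how the new entry point is consumed, the class can be loaded through the standard plugin factory:

    from aiida.plugins import DataFactory

    # Resolves the 'core.remote.stash.copy' entry point registered above
    RemoteStashCopyData = DataFactory('core.remote.stash.copy')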
2 changes: 1 addition & 1 deletion src/aiida/__init__.py
@@ -27,7 +27,7 @@
'For further information please visit http://www.aiida.net/. All rights reserved.'
)
__license__ = 'MIT license, see LICENSE.txt file.'
__version__ = '2.6.4.post0'
__version__ = '2.7.0pre2'
__authors__ = 'The AiiDA team.'
__paper__ = (
'S. P. Huber et al., "AiiDA 1.0, a scalable computational infrastructure for automated reproducible workflows and '
36 changes: 17 additions & 19 deletions src/aiida/cmdline/commands/cmd_process.py
@@ -25,6 +25,16 @@
verdi daemon start
"""

ACTION_TIMEOUT = OverridableOption(
'-t',
'--timeout',
type=click.FLOAT,
default=float('inf'),
show_default=True,
help='Time in seconds to wait for a response before timing out. '
'If timeout <= 0, the command does not wait for a response.',
)
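
Since ``OverridableOption`` merges keyword arguments supplied at decoration time over its stored defaults, individual commands could in principle tailor this shared option without redefining it. A minimal, hypothetical sketch:

    import click
    from aiida.cmdline.params.options.overridable import OverridableOption

    TIMEOUT = OverridableOption('-t', '--timeout', type=click.FLOAT, default=float('inf'))

    @click.command()
    @TIMEOUT(help='Time in seconds to wait for the pause request to complete.')  # override only the help text
    def pause(timeout):
        click.echo(f'waiting up to {timeout}s')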


def valid_projections():
"""Return list of valid projections for the ``--project`` option of ``verdi process list``.
@@ -320,15 +330,7 @@ def process_status(call_link_label, most_recent_node, max_depth, processes):
@verdi_process.command('kill')
@arguments.PROCESSES()
@options.ALL(help='Kill all processes if no specific processes are specified.')
@OverridableOption(
'-t',
'--timeout',
type=click.FLOAT,
default=5.0,
show_default=True,
help='Time in seconds to wait for a response of the kill task before timing out.',
)()
@options.WAIT()
@ACTION_TIMEOUT()
@OverridableOption(
'-F',
'--force',
@@ -338,7 +340,7 @@ def process_status(call_link_label, most_recent_node, max_depth, processes):
'Note: This may lead to orphaned jobs on your HPC and should be used with caution.',
)()
@decorators.with_dbenv()
def process_kill(processes, all_entries, timeout, wait, force):
def process_kill(processes, all_entries, timeout, force):
"""Kill running processes.

Kill one or multiple running processes."""
@@ -368,7 +370,6 @@ def process_kill(processes, all_entries, timeout, wait, force):
force=force,
all_entries=all_entries,
timeout=timeout,
wait=wait,
)
except control.ProcessTimeoutException as exception:
echo.echo_critical(f'{exception}\n{REPAIR_INSTRUCTIONS}')
@@ -380,10 +381,9 @@ def process_kill(processes, all_entries, timeout, wait, force):
@verdi_process.command('pause')
@arguments.PROCESSES()
@options.ALL(help='Pause all active processes if no specific processes are specified.')
@options.TIMEOUT()
@options.WAIT()
@ACTION_TIMEOUT()
@decorators.with_dbenv()
def process_pause(processes, all_entries, timeout, wait):
def process_pause(processes, all_entries, timeout):
"""Pause running processes.

Pause one or multiple running processes."""
@@ -404,7 +404,6 @@ def process_pause(processes, all_entries, timeout, wait):
msg_text='Paused through `verdi process pause`',
all_entries=all_entries,
timeout=timeout,
wait=wait,
)
except control.ProcessTimeoutException as exception:
echo.echo_critical(f'{exception}\n{REPAIR_INSTRUCTIONS}')
@@ -416,10 +415,9 @@ def process_pause(processes, all_entries, timeout, wait):
@verdi_process.command('play')
@arguments.PROCESSES()
@options.ALL(help='Play all paused processes if no specific processes are specified.')
@options.TIMEOUT()
@options.WAIT()
@ACTION_TIMEOUT()
@decorators.with_dbenv()
def process_play(processes, all_entries, timeout, wait):
def process_play(processes, all_entries, timeout):
"""Play (unpause) paused processes.

Play (unpause) one or multiple paused processes."""
@@ -435,7 +433,7 @@ def process_play(processes, all_entries, timeout, wait):

with capture_logging() as stream:
try:
control.play_processes(processes, all_entries=all_entries, timeout=timeout, wait=wait)
control.play_processes(processes, all_entries=all_entries, timeout=timeout)
except control.ProcessTimeoutException as exception:
echo.echo_critical(f'{exception}\n{REPAIR_INSTRUCTIONS}')

7 changes: 0 additions & 7 deletions src/aiida/cmdline/params/options/main.py
@@ -125,7 +125,6 @@
'USER_LAST_NAME',
'VERBOSITY',
'VISUALIZATION_FORMAT',
'WAIT',
'WITH_ELEMENTS',
'WITH_ELEMENTS_EXCLUSIVE',
'active_process_states',
@@ -690,12 +689,6 @@ def set_log_level(ctx, _param, value):
help='Time in seconds to wait for a response before timing out.',
)

WAIT = OverridableOption(
'--wait/--no-wait',
default=False,
help='Wait for the action to be completed otherwise return as soon as it is scheduled.',
)

FORMULA_MODE = OverridableOption(
'-f',
'--formula-mode',
15 changes: 8 additions & 7 deletions src/aiida/engine/daemon/execmanager.py
@@ -34,6 +34,7 @@
from aiida.orm.utils.log import get_dblogger_extra
from aiida.repository.common import FileType
from aiida.schedulers.datastructures import JobState
from aiida.transports import has_magic

if TYPE_CHECKING:
from aiida.transports import Transport
@@ -436,7 +437,7 @@ async def stash_calculation(calculation: CalcJobNode, transport: Transport) -> N
:param transport: an already opened transport.
"""
from aiida.common.datastructures import StashMode
from aiida.orm import RemoteStashCompressedData, RemoteStashFolderData
from aiida.orm import RemoteStashCompressedData, RemoteStashCopyData

logger_extra = get_dblogger_extra(calculation)

Expand Down Expand Up @@ -465,7 +466,7 @@ async def stash_calculation(calculation: CalcJobNode, transport: Transport) -> N
target_basepath = target_base / uuid[:2] / uuid[2:4] / uuid[4:]

for source_filename in source_list:
if transport.has_magic(source_filename):
if has_magic(source_filename):
copy_instructions = []
for globbed_filename in await transport.glob_async(source_basepath / source_filename):
target_filepath = target_basepath / Path(globbed_filename).relative_to(source_basepath)
@@ -487,10 +488,10 @@ async def stash_calculation(calculation: CalcJobNode, transport: Transport) -> N
else:
EXEC_LOGGER.debug(f'stashed {source_filepath} to {target_filepath}')

remote_stash = RemoteStashFolderData(
remote_stash = RemoteStashCopyData(
computer=calculation.computer,
target_basepath=str(target_basepath),
stash_mode=StashMode(stash_mode),
target_basepath=str(target_basepath),
source_list=source_list,
).store()

@@ -511,8 +512,8 @@ async def stash_calculation(calculation: CalcJobNode, transport: Transport) -> N

remote_stash = RemoteStashCompressedData(
computer=calculation.computer,
target_basepath=target_destination,
stash_mode=StashMode(stash_mode),
target_basepath=target_destination,
source_list=source_list,
dereference=dereference,
)
@@ -679,7 +680,7 @@ async def retrieve_files_from_list(
if isinstance(item, (list, tuple)):
tmp_rname, tmp_lname, depth = item
# if there are more than one file I do something differently
if transport.has_magic(tmp_rname):
if has_magic(tmp_rname):
remote_names = await transport.glob_async(workdir.joinpath(tmp_rname))
local_names = []
for rem in remote_names:
Expand All @@ -702,7 +703,7 @@ async def retrieve_files_from_list(
else:
abs_item = item if item.startswith('/') else str(workdir.joinpath(item))

if transport.has_magic(abs_item):
if has_magic(abs_item):
remote_names = await transport.glob_async(abs_item)
local_names = [os.path.split(rem)[1] for rem in remote_names]
else:
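
The refactored calls above use the module-level ``has_magic`` helper instead of the ``Transport`` method; a minimal sketch of its assumed behavior (it reports whether a path contains glob wildcards):

    from aiida.transports import has_magic

    has_magic('output_*.dat')  # True: the pattern is expanded via glob_async
    has_magic('aiida.out')     # False: treated as a literal path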