
Additional test files added #167

Merged: jcharkow merged 20 commits into OpenMS:main from Srajald:additional-tests on Mar 18, 2025

Conversation

@Srajald Srajald (Contributor) commented Mar 10, 2025

Here are the test cases for:
test_run_subprocess.py
test_simple_workflow.py

Summary by CodeRabbit

  • Tests

    • Introduced automated tests for the Run Subprocess and Simple Workflow pages, validating file selection, input handling, and button interactions.
  • Chores

    • Expanded the continuous integration process to cover a broader range of scenarios, enhancing overall quality and stability.

These updates aim to deliver a more robust and user-friendly experience.

@coderabbitai coderabbitai bot commented Mar 10, 2025

Walkthrough

The changes introduce two new test suites to verify user interactions in both the Run Subprocess and Simple Workflow pages of the Streamlit application. The tests validate dropdown selections, button clicks, number input handling, and the presence of UI elements. Additionally, the GitHub Actions workflow has been updated to expand test coverage, and the Simple Workflow code has been completely removed.

Changes

  • tests/test_run_subprocess.py, tests/test_simple_workflow.py: Added new test files. The Run Subprocess tests verify dropdown file selection and "Extract IDs" button functionality, while the Simple Workflow tests cover number inputs, invalid values, and the download button.
  • .github/workflows/ci.yml: Updated the CI workflow to run tests on both test_gui.py and the entire tests/ directory.
  • content/simple_workflow.py: Removed the simple_workflow.py file, which contained the Streamlit application for the Simple Workflow page.

Sequence Diagram(s)

sequenceDiagram
    participant U as User/Automation
    participant RS as RunSubprocessApp
    U->>RS: Launch Run Subprocess Page
    RS-->>RS: Load dropdown options
    U->>RS: Select file from dropdown
    RS-->>U: Update UI with selection
    U->>RS: Click "Extract IDs" button
    RS-->>U: Initiate subprocess and update UI

sequenceDiagram
    participant U as User/Automation
    participant SW as SimpleWorkflowApp
    U->>SW: Launch Simple Workflow Page
    SW-->>SW: Display number input widgets
    U->>SW: Enter x and y dimensions
    SW-->>U: Generate and display data table
    U->>SW: Click "Download Table" button
    SW-->>U: Trigger table download process

Poem

Oh, I’m a rabbit, hopping with cheer,
New tests and tweaks are finally here!
From dropdown hops to buttons that gleam,
Our Streamlit bounds are a coder's dream.
With workflows on track and bugs at bay,
I twirl my ears and brighten the day! 🐇✨


📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 15a680b and edbdb16.

📒 Files selected for processing (1)
  • content/simple_workflow.py (0 hunks)
💤 Files with no reviewable changes (1)
  • content/simple_workflow.py
⏰ Context from checks skipped due to timeout of 90000ms (3)
  • GitHub Check: build-full-app
  • GitHub Check: build-simple-app
  • GitHub Check: build-openms


@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (4)
tests/test_run_subprocess.py (1)

39-40: Remove debugging print statements.

These print statements appear to be for debugging purposes and should be removed before merging to keep the test output clean.

-    print("TEXT OUTPUT:", launch.text)
-    print("SESSION STATE:", launch.session_state)
tests/test_simple_workflow.py (3)

4-8: Simplify fixture return.

The intermediate variable assignment is unnecessary.

@pytest.fixture
def launch():
    """Launch the Simple Workflow page for testing."""
-    app = AppTest.from_file("content/simple_workflow.py")
-    return app  
+    return AppTest.from_file("content/simple_workflow.py")

45-46: Remove debugging print statements.

These print statements appear to be for debugging purposes and should be removed before merging.

-    print("Available buttons:", [btn.label for btn in launch.button])
-    print("Available markdown elements:", [md.value for md in launch.markdown])

48-49: Consider a more direct button verification if possible.

The current approach checks for "Download Table" text in markdown elements, which works but is indirect. If Streamlit's testing API allows for direct button identification, consider using that instead.

# If button testing API supports direct label checking:
assert any(btn.label == "Download Table" for btn in launch.button), "Download Table button is missing!"
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 7183499 and 2b6cb9a.

📒 Files selected for processing (2)
  • tests/test_run_subprocess.py (1 hunks)
  • tests/test_simple_workflow.py (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (4)
  • GitHub Check: build-full-app
  • GitHub Check: build
  • GitHub Check: build-openms
  • GitHub Check: build-simple-app
🔇 Additional comments (6)
tests/test_run_subprocess.py (4)

4-7: Good fixture setup with clear documentation.

The fixture is well-defined with a descriptive docstring that clarifies its purpose.


9-18: Well-structured test for file selection functionality.

The test properly verifies the existence of the dropdown and attempts to select an option when available, which is a good approach for testing UI interactions.


20-31: Good test structure with appropriate skip condition.

The test properly checks for prerequisites and uses pytest.skip when no files are available, which prevents false failures.
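
For readers without the diff in front of them, the select-then-skip pattern these two comments describe looks roughly like the sketch below. The page path, widget access, and option handling are illustrative assumptions, not the reviewed code:

import pytest
from streamlit.testing.v1 import AppTest


def test_extract_ids_sketch():
    """Illustrative only: select an input file if one exists, otherwise skip."""
    at = AppTest.from_file("content/run_subprocess.py")  # hypothetical page path
    at.run(timeout=10)

    # The page is expected to render a file-selection dropdown
    assert len(at.selectbox) > 0, "File selection dropdown not found!"

    options = at.selectbox[0].options
    if not options:
        # Prerequisite missing: skip instead of reporting a false failure
        pytest.skip("No input files available to select")

    at.selectbox[0].select(options[0])
    at.run(timeout=10)
    assert at.selectbox[0].value == options[0], "Dropdown selection was not applied"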


42-43: Comprehensive assertion with helpful error message.

The assertions appropriately check that the subprocess was executed successfully and provide informative error messages, which is excellent practice.

tests/test_simple_workflow.py (2)

10-28: Well-structured test for number inputs.

The test properly validates input existence, updates, and verifies the effects on both session state and output data. The assertions have clear error messages.


30-44: Good test for table generation and dimensionality.

The test effectively verifies that the table is generated with the correct dimensions based on the input values.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (6)
tests/test_simple_workflow.py (6)

9-27: Consider making tests more robust to UI changes.

The test makes assumptions about the order of number inputs (indexes 0 and 1), which could break if the order changes in the application.

- launch.number_input[0].set_value(5)  
- launch.number_input[1].set_value(4)
+ # Find inputs by label or key if possible
+ x_input = next((inp for inp in launch.number_input if "x" in inp.label.lower()), launch.number_input[0])
+ y_input = next((inp for inp in launch.number_input if "y" in inp.label.lower()), launch.number_input[1])
+ x_input.set_value(5)
+ y_input.set_value(4)

23-26: Add validation for table content.

The test verifies the dimensions of the table but doesn't validate its content.

df = launch.dataframe[0].value
assert df.shape == (5, 4), f"Expected table size (5,4) but got {df.shape}"
+ # Validate that table contains expected data types or patterns
+ assert df.dtypes.apply(lambda x: x.name).unique().tolist() == ['int64'], "Expected all columns to be integers"
+ # Verify data bounds if applicable
+ assert df.values.min() >= 0, "Expected all values to be non-negative"

29-45: Test download button functionality.

The test verifies the presence of the download button but doesn't test its functionality.

# Check for "Download Table" text in markdown
assert any(btn.label == "Download Table" for btn in launch.button), "Download Table button is missing!"
+ # Find the download button
+ download_button = next((btn for btn in launch.button if btn.label == "Download Table"), None)
+ assert download_button is not None, "Download Table button not found"
+ 
+ # Test button click if possible
+ # Note: This may require mocking file system interactions
+ # download_button.click()
+ # Assert expected behavior after button click

1-45: Add negative test cases and error handling scenarios.

The test suite only covers the happy path. Consider adding tests for invalid inputs or edge cases.

Examples of additional test cases to consider:

  1. Test with minimum allowed values (e.g., 0 or 1)
  2. Test with very large values
  3. Test behavior when inputs are invalid
def test_invalid_inputs(launch):
    """Test behavior with invalid inputs."""
    launch.run()
    
    # Test with negative values (if these should be rejected)
    launch.number_input[0].set_value(-1)
    launch.number_input[1].set_value(-2)
    launch.run()
    
    # Assert expected error message or handled state
    # e.g., check for error message in warning elements
    assert any("invalid" in warn.value.lower() for warn in launch.warning), "Expected warning for negative values"
    
    # Verify table is not generated with invalid inputs
    assert len(launch.dataframe) == 0, "Table should not be generated with invalid inputs"

13-21: Avoid hardcoded session state keys.

The test uses hardcoded session state keys which makes it brittle if key names change.

Consider extracting these keys to constants or using a more flexible approach:

- assert launch.session_state["example-x-dimension"] == 5, "X-dimension not updated in session state!"
- assert launch.session_state["example-y-dimension"] == 4, "Y-dimension not updated in session state!"
+ # Find keys related to dimensions in the session state
+ x_key = next((k for k in launch.session_state.keys() if "x" in k.lower() and "dimension" in k.lower()), None)
+ y_key = next((k for k in launch.session_state.keys() if "y" in k.lower() and "dimension" in k.lower()), None)
+ 
+ assert x_key is not None, "X-dimension key not found in session state"
+ assert y_key is not None, "Y-dimension key not found in session state"
+ assert launch.session_state[x_key] == 5, "X-dimension not updated in session state!"
+ assert launch.session_state[y_key] == 4, "Y-dimension not updated in session state!"

1-3: Consider adding docstring or module-level comment.

Adding a module-level docstring would improve documentation and clarify the purpose of this test file.

import pytest
from streamlit.testing.v1 import AppTest

+ """
+ Tests for the Simple Workflow page functionality.
+ 
+ These tests verify:
+ - Number input widgets function correctly
+ - Session state updates properly
+ - Table generation with correct dimensions
+ - Download button presence
+ """
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 2b6cb9a and 7fa851f.

📒 Files selected for processing (2)
  • tests/test_run_subprocess.py (1 hunks)
  • tests/test_simple_workflow.py (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • tests/test_run_subprocess.py
⏰ Context from checks skipped due to timeout of 90000ms (3)
  • GitHub Check: build-simple-app
  • GitHub Check: build-openms
  • GitHub Check: build-full-app
🔇 Additional comments (1)
tests/test_simple_workflow.py (1)

4-7: LGTM: Good use of pytest fixtures.

The fixture is well-designed to create an isolated test environment for each test case.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (4)
tests/test_simple_workflow.py (4)

21-23: Remove redundant run() call.

The launch.run(timeout=10) call is redundant as the fixture already runs the application.

def test_number_inputs(launch):
    """Ensure x and y dimension inputs exist and update correctly."""
-    launch.run(timeout=10)  

44-46: Remove redundant run() call in second test.

Similar to the first test, this run() call is redundant as the fixture already handles it.

def test_download_button(launch):
    """Ensure the 'Download Table' button appears after table generation."""
-    launch.run(timeout=10)

49-50: Consider using more robust widget selectors in second test.

Same issue as in the first test - using numeric indices makes the test fragile to UI changes.


59-59: Remove debug print statement.

The print statement appears to be for debugging purposes and should be removed from the final test code.

-    print("Available downloads:", [dl.label for dl in launch.download])
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 7fa851f and df9aab5.

📒 Files selected for processing (1)
  • tests/test_simple_workflow.py (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (3)
  • GitHub Check: build-simple-app
  • GitHub Check: build-openms
  • GitHub Check: build-full-app
🔇 Additional comments (5)
tests/test_simple_workflow.py (5)

1-12: Well-structured test file with clear documentation.

The imports and docstring are appropriate for a Streamlit application test file. The docstring clearly explains the purpose of the tests and what they verify.


14-19: LGTM! Good fixture setup for Streamlit testing.

The fixture correctly initializes the testing environment by loading the Simple Workflow application with a reasonable timeout.


61-63: Good implementation of button verification.

The approach to verify the download button's presence using a generator expression is clean and effective.


29-30:


✅ Verification successful

Consider using more robust widget selectors.

Accessing number input widgets by index (0, 1) makes the test brittle to UI changes. Consider using key, label, or other identifiers if the Streamlit testing API supports it.

-    launch.number_input[0].set_value(5)  
-    launch.number_input[1].set_value(4)  
+    # Example of more robust selectors (if supported by the API):
+    launch.number_input(key="example-x-dimension").set_value(5)  
+    launch.number_input(key="example-y-dimension").set_value(4)  

🌐 Web query:

Does Streamlit testing API support selecting widgets by key or label?

💡 Result:

The Streamlit testing API supports selecting widgets by key through the AppTest class, which allows precise identification and interaction with widgets during testing. Here's how it works:

Key-Based Widget Selection

  1. Assigned Keys
    Widgets with a unique key parameter can be directly accessed using AppTest methods:

    at.selectbox(key="contact_method")  # Access selectbox with key="contact_method"

    This retrieves the widget regardless of its position on the page[1][4][6].

  2. Key Verification
    Properties like label, options, and value can be inspected:

    assert at.selectbox[0].label == "Preferred Contact"
    assert at.selectbox[0].options == ["Email", "Phone"]

    This validates both widget configuration and state[3][11].

Label Limitations

While labels are visible in the UI, they aren't used for widget identification in tests. Two widgets with identical labels but different keys are treated as distinct entities[6][10].

Best Practices

  • Unique Keys
    Always assign explicit keys to widgets requiring test interaction (e.g., key="user_age_input")[6][10].
  • Avoid Index Reliance
    Use key instead of positional indexes (e.g., at.selectbox[0]) to prevent test failures from UI order changes[3][4].

Example Workflow

from streamlit.testing.v1 import AppTest

def test_contact_selection():
    at = AppTest.from_file("app.py")
    at.run()
    
    # Select by key and modify value
    contact_widget = at.selectbox(key="contact_method")
    contact_widget.select("Mobile phone")
    at.run()
    
    # Verify update
    assert at.success[0].value == "You selected: Mobile phone"

This demonstrates key-based interaction and state validation[4][13][14].

For complex scenarios, combine key selection with container targeting:

at.sidebar.selectbox(key="theme_selector")  # Widget in sidebar



Switch to key-based widget selectors for increased robustness.

The Streamlit testing API supports selecting widgets by key. In this case, replace index-based access with key-based access to prevent brittleness as the UI evolves.

  • In tests/test_simple_workflow.py (lines 29-30), update:
        launch.number_input[0].set_value(5)  
        launch.number_input[1].set_value(4)  
    to
    -    launch.number_input[0].set_value(5)  
    -    launch.number_input[1].set_value(4)  
    +    # Example using key selectors:
    +    launch.number_input(key="example-x-dimension").set_value(5)  
    +    launch.number_input(key="example-y-dimension").set_value(4)  

This change leverages the API's support for key-based selection, making tests more resilient to changes in widget order or UI structure.


1-63:


❌ Incorrect review comment

Suggest adding test for button click functionality.

The tests currently verify the presence of inputs and the download button, but don't test what happens when the button is clicked. Consider adding a test that simulates clicking the download button to ensure it functions correctly.

def test_download_button_click(launch):
    """Ensure the 'Download Table' button functions correctly when clicked."""
    # Set x and y dimensions
    launch.number_input[0].set_value(3)
    launch.number_input[1].set_value(2)
    launch.run(timeout=10)
    
    # Find and click the download button
    download_button = next((dl for dl in launch.download if dl.label == "Download Table"), None)
    assert download_button, "Download Table button is missing!"
    
    # Here you'd test the click functionality if the Streamlit testing API supports it
    # This might involve checking that certain session state values are updated
    # or that a download is initiated
    # For example: download_button.click()
    # Then assert the expected behavior

🌐 Web query:

Does Streamlit testing API support download button click simulation?

💡 Result:

The Streamlit testing API (AppTest) currently does not support simulating clicks on st.download_button based on the available documentation and community discussions[1][6]. Here's a detailed breakdown:


Current Limitations

  1. Missing Support in AppTest Class:

    • The AppTest framework allows interaction with widgets like st.button, st.text_input, and st.form_submit_button, but st.download_button is explicitly excluded from supported elements[3][6].
    • A GitHub feature request (July 2024) highlights this gap, noting that automated testing for download buttons is not yet possible[6].
  2. Workaround Challenges:

    • While st.download_button can trigger actions via on_click callbacks, these interactions can’t be programmatically validated in tests due to the lack of API support[2][4].
    • Testing frameworks like Selenium or Playwright are recommended alternatives for end-to-end testing of download functionality[9][12].

Key Insights from Documentation

  • AppTest Capabilities:

    • Simulates user input for buttons, text inputs, and form submissions[3][11].
    • Inspects rendered outputs like text, dataframes, and charts[3][5].
    • Supports session state manipulation and multipage app testing[13][14].
  • Unsupported Features:

    • Download button states (e.g., verifying downloaded content or click triggers)[6][12].
    • Direct handling of file generation/download workflows in tests[6][12].

Recommendations

  • Track the GitHub Issue: Follow streamlit/streamlit#9003 for updates on adding st.download_button support to AppTest[6].
  • Use Browser Automation: For critical download workflows, combine AppTest with tools like Playwright to validate end-to-end behavior[9][12].
  • Mock File Generation: Test the logic generating downloadable files separately using pytest, while avoiding UI-dependent validations[3][14].

In summary, while Streamlit's native testing framework excels at validating most UI interactions, testing st.download_button requires external tools or workarounds until official support is added.
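
As a concrete illustration of the "mock file generation" route, the data behind the download button can be exercised without the UI at all. The helper below is hypothetical and only stands in for the app's table-building logic:

import numpy as np
import pandas as pd


def build_example_table(x_dim: int, y_dim: int) -> pd.DataFrame:
    """Hypothetical stand-in for the app's table-generation logic."""
    return pd.DataFrame(np.random.randint(0, 100, size=(x_dim, y_dim)))


def test_downloadable_table_csv():
    """Validate the data that would back the download button, without clicking it."""
    df = build_example_table(5, 4)
    csv_text = df.to_csv(index=False)

    assert df.shape == (5, 4)
    # One header line plus one line per data row
    assert len(csv_text.strip().splitlines()) == 5 + 1

This keeps the UI-independent logic under regular pytest coverage while leaving the actual click to a browser-level tool if that is ever needed.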



Clarification on Download Button Click Testing

Based on current capabilities of the Streamlit testing API (AppTest), simulating a click on a download button (i.e. st.download_button) is not supported. Rather than adding a test for button click functionality that the API cannot simulate, please consider these alternatives:

  • Test the underlying logic: Isolate and test the file generation or callback functionality separately.
  • Use E2E testing tools: For full UI interaction tests, consider an end-to-end framework (e.g., Selenium or Playwright) which can simulate actual button clicks.
  • Monitor framework improvements: Track streamlit/streamlit#9003 for future support on simulating download button clicks.

Likely an incorrect or invalid review comment.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (4)
tests/test_simple_workflow.py (4)

22-23: Remove redundant .run() call

Your test function has a redundant launch.run(timeout=10) call right at the beginning. Since the fixture already runs the app, this call is unnecessary and might cause delays in your test execution.

def test_number_inputs(launch):
    """Ensure x and y dimension inputs exist and update correctly."""
-   launch.run(timeout=10)  

    # Ensure at least 2 number input widgets exist

46-50: Apply consistent timeout parameters

There's inconsistency in how timeout parameters are used across different run() calls. For reliability, especially in CI environments, it's good practice to explicitly set timeouts consistently.

-   launch.run()
    
    launch.number_input[0].set_value(-1)
    launch.number_input[1].set_value(-2)
-   launch.run()
+   launch.run(timeout=10)
    
    launch.number_input[0].set_value(-1)
    launch.number_input[1].set_value(-2)
+   launch.run(timeout=10)

73-73: Remove debug print statement

The print statement should be removed as it's not necessary for the test execution and could clutter test output logs. Debug statements shouldn't be included in committed test code.

-   print("Available downloads:", [dl.label for dl in launch.download])

56-57: Fix extra blank line

PEP 8 separates top-level function definitions with two blank lines; remove the extra blank line here.

    # Verify table is not generated with invalid inputs
    assert len(launch.dataframe) == 0, "Table should not be generated with invalid inputs"

-

def test_download_button(launch):
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between df9aab5 and 160ad96.

📒 Files selected for processing (1)
  • tests/test_simple_workflow.py (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (3)
  • GitHub Check: build-full-app
  • GitHub Check: build-openms
  • GitHub Check: build-simple-app
🔇 Additional comments (1)
tests/test_simple_workflow.py (1)

1-77: Overall well-structured test suite

The test suite is well-organized with clear docstrings, appropriate assertions with helpful error messages, and good coverage of different aspects of the Simple Workflow page. The tests verify number inputs, session state updates, error conditions, and the presence of UI elements like tables and download buttons.

I particularly like the thorough validation of session state keys and values, as well as the table dimension checks.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (3)
tests/test_simple_workflow.py (3)

42-54: Add timeout parameter to run() calls for consistency.

While the test logic for invalid inputs is good, you should include the timeout parameter in the run() calls for consistency with other tests.

-    launch.run()
+    launch.run(timeout=10)
    
    launch.number_input[0].set_value(-1)
    launch.number_input[1].set_value(-2)
-    launch.run()
+    launch.run(timeout=10)

56-72: Consider refactoring duplicate table generation logic.

This test has similar table generation logic to test_number_inputs. Consider extracting a helper function to reduce duplication.

def _generate_table(app, x_dim, y_dim):
    """Helper to set dimensions and generate a table."""
    app.number_input[0].set_value(x_dim)
    app.number_input[1].set_value(y_dim)
    app.run(timeout=10)
    
    # Verify table generation
    assert len(app.dataframe) > 0, "Table not generated!"
    df = app.dataframe[0].value
    assert df.shape == (x_dim, y_dim), f"Expected table size ({x_dim},{y_dim}) but got {df.shape}"
    return df

This would simplify both test functions and improve maintainability.


50-50: Consider a more specific assertion for warning messages.

The current assertion checks if any warning contains 'invalid', but a more specific check would be more robust.

-    assert any("invalid" in warn.value.lower() for warn in launch.warning), "Expected warning for negative values"
+    assert any("dimensions must be positive" in warn.value.lower() for warn in launch.warning), "Expected specific warning for negative values"

This assumes the actual error message contains "dimensions must be positive" - please adjust to match the actual expected message.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 160ad96 and a2070ea.

📒 Files selected for processing (1)
  • tests/test_simple_workflow.py (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (4)
  • GitHub Check: build-simple-app
  • GitHub Check: build
  • GitHub Check: build-openms
  • GitHub Check: build-full-app
🔇 Additional comments (5)
tests/test_simple_workflow.py (5)

14-19: Well-structured test fixture for streamlit app testing.

The fixture correctly initializes the Streamlit page for testing with an appropriate timeout value. This approach makes the tests clean and maintainable.
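
For reference, such a fixture is essentially the standard AppTest setup; the exact path and timeout below are illustrative:

import pytest
from streamlit.testing.v1 import AppTest


@pytest.fixture
def launch():
    """Load and run the Simple Workflow page once per test."""
    app = AppTest.from_file("content/simple_workflow.py")
    app.run(timeout=10)
    return app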


24-40: Good verification of number inputs and table generation.

This test thoroughly verifies that:

  1. Number inputs exist and update correctly
  2. Session state keys are properly updated
  3. The generated table has the expected dimensions

Consider adding a test case with larger dimensions to verify scalability.
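
A larger-dimension case along those lines might look like the following sketch, assuming the documented 1-20 bounds and the key-based access discussed elsewhere in this review:

def test_max_dimensions(launch):
    """Illustrative only: generate a table at the documented 20x20 upper bound."""
    x_input = next((ni for ni in launch.number_input if ni.key == "example-x-dimension"), None)
    y_input = next((ni for ni in launch.number_input if ni.key == "example-y-dimension"), None)
    assert x_input is not None and y_input is not None, "Dimension inputs not found!"

    x_input.set_value(20)
    y_input.set_value(20)
    launch.run(timeout=10)

    assert len(launch.dataframe) > 0, "Table not generated!"
    assert launch.dataframe[0].value.shape == (20, 20)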


1-3: LGTM! Standard pytest and Streamlit testing imports.

The test file correctly imports the necessary libraries for Streamlit application testing.


4-12: Clear and informative module docstring.

The docstring effectively communicates the purpose and scope of the test suite, which improves maintainability.


71-72: Good use of next() with generator expression to find the download button.

This is an elegant way to search for a specific button by label in a collection of elements.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (3)
tests/test_simple_workflow.py (3)

47-47: Replace fixed sleep with deterministic waiting

Using time.sleep() can lead to flaky tests that sometimes pass and sometimes fail based on timing variations.

Consider using more deterministic waiting mechanisms offered by the testing framework, or checking for specific UI elements to appear instead of arbitrary sleep.


70-70: Replace fixed sleep with deterministic waiting

Using time.sleep() can lead to flaky tests that sometimes pass and sometimes fail based on timing variations.

Consider waiting for specific UI elements to appear instead of using a fixed sleep time:

-time.sleep(5)  
+# Wait for the dataframe to appear by checking its presence in a loop with timeout
+max_wait = 5
+start_time = time.time()
+while time.time() - start_time < max_wait:
+    if len(launch.dataframe) > 0:
+        break
+    time.sleep(0.1)

75-79: Standardize approach for locating the download button

The code uses two different approaches to find the download button, which suggests uncertainty about the correct approach.

If possible, identify which approach is more reliable and standardize on that approach. If both approaches are needed for resilience, consider adding a comment explaining why:

# Find the "Download Table" button correctly
-download_button = next((btn for btn in launch.button if hasattr(btn, "label") and "Download" in btn.label), None)
-download_component = next((comp for comp in launch.main if hasattr(comp, "label") and "Download" in comp.label), None)
+# Primary approach: Look for button components
+download_button = next((btn for btn in launch.button if hasattr(btn, "label") and "Download" in btn.label), None)
+
+# Fallback approach: Look in main components (in case button is wrapped in another component)
+download_component = next((comp for comp in launch.main if hasattr(comp, "label") and "Download" in comp.label), None)

# Final assertion
assert download_button or download_component, "Download Table button is missing!"
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between a2070ea and b4209d7.

📒 Files selected for processing (1)
  • tests/test_simple_workflow.py (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (4)
  • GitHub Check: build-full-app
  • GitHub Check: build-simple-app
  • GitHub Check: build
  • GitHub Check: build-openms
🔇 Additional comments (4)
tests/test_simple_workflow.py (4)

28-29: Update index-based access to use key-based identification

The test is using index-based access to number inputs which makes tests brittle if widget order changes.

Replace these lines with key-based access as used in your other test functions:

-launch.number_input[0].set_value(5)  
-launch.number_input[1].set_value(4)  
+x_input = next((ni for ni in launch.number_input if ni.key == "example-x-dimension"), None)
+y_input = next((ni for ni in launch.number_input if ni.key == "example-y-dimension"), None)
+assert x_input is not None, "X-dimension input not found!"
+assert y_input is not None, "Y-dimension input not found!"
+x_input.set_value(5)
+y_input.set_value(4)

1-14: LGTM! Well-structured imports and clear test documentation.

The imports are appropriate and the docstring clearly explains the test coverage.


15-21: LGTM! Well-defined test fixture.

The fixture appropriately launches the Streamlit application for testing with a reasonable timeout.


32-42: LGTM! Good validation of session state and table dimensions.

The assertions effectively validate that the session state is updated correctly and that the table is generated with the expected dimensions.

Comment on lines +43 to +55
def test_invalid_inputs(launch):
    """Ensure invalid inputs are handled properly."""

    launch.run(timeout=10)
    time.sleep(2)

    x_input = next((ni for ni in launch.number_input if ni.key == "example-x-dimension"), None)
    y_input = next((ni for ni in launch.number_input if ni.key == "example-y-dimension"), None)

    assert x_input is not None, "X-dimension input not found!"
    assert y_input is not None, "Y-dimension input not found!"


@coderabbitai coderabbitai bot commented Mar 10, 2025

🛠️ Refactor suggestion

Complete implementation of invalid inputs test

The test_invalid_inputs function doesn't actually test any invalid inputs. It finds the input elements but doesn't set invalid values or verify how the application handles them.

Consider implementing the actual test case by adding code to set invalid values and verify the application's response. For example:

def test_invalid_inputs(launch):
    """Ensure invalid inputs are handled properly."""

    launch.run(timeout=10)
-   time.sleep(2) 
    
    x_input = next((ni for ni in launch.number_input if ni.key == "example-x-dimension"), None)
    y_input = next((ni for ni in launch.number_input if ni.key == "example-y-dimension"), None)

    assert x_input is not None, "X-dimension input not found!"
    assert y_input is not None, "Y-dimension input not found!"

+   # Test negative values
+   x_input.set_value(-1)
+   y_input.set_value(-1)
+   launch.run(timeout=10)
+   
+   # Verify appropriate error handling
+   error_message = next((text for text in launch.markdown if "invalid" in text.value.lower()), None)
+   assert error_message is not None, "No error message displayed for negative values"
+   
+   # Verify no table was generated with invalid inputs
+   assert len(launch.dataframe) == 0, "Table was generated with invalid inputs!"
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

def test_invalid_inputs(launch):
    """Ensure invalid inputs are handled properly."""

    launch.run(timeout=10)

    x_input = next((ni for ni in launch.number_input if ni.key == "example-x-dimension"), None)
    y_input = next((ni for ni in launch.number_input if ni.key == "example-y-dimension"), None)

    assert x_input is not None, "X-dimension input not found!"
    assert y_input is not None, "Y-dimension input not found!"

    # Test negative values
    x_input.set_value(-1)
    y_input.set_value(-1)
    launch.run(timeout=10)

    # Verify appropriate error handling
    error_message = next((text for text in launch.markdown if "invalid" in text.value.lower()), None)
    assert error_message is not None, "No error message displayed for negative values"

    # Verify no table was generated with invalid inputs
    assert len(launch.dataframe) == 0, "Table was generated with invalid inputs!"

@jcharkow (Contributor) commented:

Please see this suggestion above. I would try testing an x value greater than 20 and it should be invalid.

@coderabbitai coderabbitai bot replied:

🧩 Analysis chain

@jcharkow I'll modify the test to check for x values greater than 20 as you suggested. Let me first examine the implementation to ensure the test accurately reflects the expected behavior.


🏁 Script executed:

#!/bin/bash
# Fetch the content of the simple_workflow.py file
cat content/simple_workflow.py

Length of output: 1429


@jcharkow I've examined the simple_workflow.py file and see that the x dimension has a max_value constraint of 20. Here's an updated version of the test_invalid_inputs function that tests values exceeding this limit:

def test_invalid_inputs(launch):
    """Ensure invalid inputs are handled properly."""

    launch.run(timeout=10)
    
    x_input = next((ni for ni in launch.number_input if ni.key == "example-x-dimension"), None)
    y_input = next((ni for ni in launch.number_input if ni.key == "example-y-dimension"), None)

    assert x_input is not None, "X-dimension input not found!"
    assert y_input is not None, "Y-dimension input not found!"

    # Test value exceeding maximum allowed (max is 20)
    x_input.set_value(25)  # This should be clamped to 20
    y_input.set_value(10)
    launch.run(timeout=10)
    
    # Verify the value was clamped to the maximum allowed
    assert launch.session_state["example-x-dimension"] == 20, "X-dimension not clamped to maximum!"
    
    # Check that a table was generated with the correct dimensions
    assert len(launch.dataframe) > 0, "Table not generated!"
    df = launch.dataframe[0].value
    assert df.shape == (20, 10), f"Expected table size (20,10) but got {df.shape}"

This test verifies that when a user attempts to enter a value exceeding the maximum limit, Streamlit correctly clamps it to the maximum allowed value and the application still functions properly.


✏️ Learnings added
Learnt from: jcharkow
PR: OpenMS/streamlit-template#167
File: tests/test_simple_workflow.py:43-55
Timestamp: 2025-03-11T15:23:45.825Z
Learning: In the simple workflow application, input validation for number inputs is handled through Streamlit's min_value and max_value parameters. The x and y dimensions are constrained to values between 1 and 20. When testing invalid inputs in Streamlit, we need to verify that values exceeding the limits are properly clamped to the allowed range.

Note: Learnings are effective only in the context of similar code segments. To apply general coding guidelines, please configure review instructions. You can manage existing learnings in the app.

@jcharkow jcharkow (Contributor) left a comment

Good progress so far! Please see comments above.

Some additional comments:

  1. It does not seem that sample.mzML is used, so please remove that file.
  2. These tests are currently not triggered by GitHub Actions. I would recommend adding these functions to test_gui.py instead of adding new files. Alternatively, if you think it is necessary to add new files, please edit .github/ci.yml so these files are also tested.


@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (6)
.github/ci.yml (3)

11-11: Remove trailing spaces.

Remove the trailing spaces at the end of this line to address the YAMLlint warning.

-        # Requirements file generated with python=3.11 
+        # Requirements file generated with python=3.11
🧰 Tools
🪛 YAMLlint (1.35.1)

[error] 11-11: trailing spaces

(trailing-spaces)


31-31: Add newline at end of file.

Add a newline character at the end of the file to follow standard file formatting conventions and fix the YAMLlint warning.

        python -m pytest test_gui.py tests/
+
🧰 Tools
🪛 YAMLlint (1.35.1)

[error] 31-31: no new line character at the end of file

(new-line-at-end-of-file)


29-29: Consider adding test result output formats.

For better visibility of test results in GitHub Actions, consider adding options to generate test reports or summaries.

        pip install pytest
+        pip install pytest-html
    - name: Test
      run: |
-        python -m pytest test_gui.py tests/
+        python -m pytest test_gui.py tests/ --html=test-report.html --self-contained-html
tests/test_simple_workflow.py (3)

47-47: Avoid using hard-coded sleep times.

Using time.sleep() with hard-coded values makes tests slower than necessary and can lead to flaky tests. Consider using a more reliable approach to wait for UI updates.

-    time.sleep(5)
+    # Wait for UI to update by polling for the expected state
+    def wait_for_condition(condition_func, timeout=10, interval=0.5):
+        start_time = time.time()
+        while time.time() - start_time < timeout:
+            if condition_func():
+                return True
+            time.sleep(interval)
+        return False
+    
+    # Use the waiting function instead of sleep
+    wait_for_condition(lambda: "example-x-dimension" in launch.session_state)

This approach can be implemented as a helper function at the module level and reused in all test functions.

Also applies to: 59-59, 83-83


92-92: Add newline at end of file.

Add a newline character at the end of the file to follow standard file formatting conventions.

    assert download_button or download_component, "Download Table button is missing!"
+

43-67: Test additional invalid input scenarios.

The test_invalid_inputs function currently only tests values exceeding the maximum. Consider adding tests for:

  1. Values below the minimum
  2. Non-numeric inputs (if applicable)
  3. Edge cases at exactly the boundary values
def test_invalid_inputs(launch):
    """Ensure invalid inputs are handled properly."""

    launch.run(timeout=15)
    time.sleep(5)
    
    x_input = next((ni for ni in launch.number_input if ni.key == "example-x-dimension"), None)
    y_input = next((ni for ni in launch.number_input if ni.key == "example-y-dimension"), None)

    assert x_input is not None, "X-dimension input not found!"
    assert y_input is not None, "Y-dimension input not found!"

    # Test value exceeding maximum allowed (max is 20)
    x_input.set_value(25)  
    y_input.set_value(10)
    launch.run(timeout=10)
    time.sleep(5)

    # Verify the value was clamped to the maximum allowed
    assert launch.session_state["example-x-dimension"] == 20, "X-dimension not clamped to maximum!"
    
    assert len(launch.dataframe) > 0, "Table not generated!"
    df = launch.dataframe[0].value
    assert df.shape == (20, 10), f"Expected table size (20,10) but got {df.shape}"

+    # Test value below minimum allowed (min is 1)
+    x_input.set_value(-5)
+    y_input.set_value(5)
+    launch.run(timeout=10)
+    time.sleep(5)
+
+    # Verify the value was clamped to the minimum allowed
+    assert launch.session_state["example-x-dimension"] == 1, "X-dimension not clamped to minimum!"
+    
+    # Test edge case at exactly the boundary
+    x_input.set_value(1)  # Minimum value
+    y_input.set_value(20)  # Maximum value
+    launch.run(timeout=10)
+    time.sleep(5)
+
+    # Verify boundary values are accepted
+    assert launch.session_state["example-x-dimension"] == 1, "X-dimension minimum not accepted!"
+    assert launch.session_state["example-y-dimension"] == 20, "Y-dimension maximum not accepted!"
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between b4209d7 and 1f76990.

📒 Files selected for processing (3)
  • .github/ci.yml (1 hunks)
  • tests/test_run_subprocess.py (1 hunks)
  • tests/test_simple_workflow.py (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • tests/test_run_subprocess.py
🧰 Additional context used
🧠 Learnings (1)
tests/test_simple_workflow.py (1)
Learnt from: jcharkow
PR: OpenMS/streamlit-template#167
File: tests/test_simple_workflow.py:43-55
Timestamp: 2025-03-12T15:07:46.046Z
Learning: In the simple workflow application, input validation for number inputs is handled through Streamlit's min_value and max_value parameters. The x and y dimensions are constrained to values between 1 and 20. When testing invalid inputs in Streamlit, we need to verify that values exceeding the limits are properly clamped to the allowed range.
🪛 YAMLlint (1.35.1)
.github/ci.yml

[error] 11-11: trailing spaces

(trailing-spaces)


[error] 31-31: no new line character at the end of file

(new-line-at-end-of-file)

⏰ Context from checks skipped due to timeout of 90000ms (3)
  • GitHub Check: build-openms
  • GitHub Check: build-simple-app
  • GitHub Check: build-full-app
🔇 Additional comments (1)
tests/test_simple_workflow.py (1)

28-29: Use key-based widget access instead of indices.

Using index-based access (number_input[0], number_input[1]) makes tests brittle if the order of widgets changes in the UI. For consistency with other tests in this file, use key-based access instead.

-    launch.number_input[0].set_value(5)  
-    launch.number_input[1].set_value(4)  
+    x_input = next((ni for ni in launch.number_input if ni.key == "example-x-dimension"), None)
+    y_input = next((ni for ni in launch.number_input if ni.key == "example-y-dimension"), None)
+    
+    assert x_input is not None, "X-dimension input not found!"
+    assert y_input is not None, "Y-dimension input not found!"
+    
+    x_input.set_value(5)
+    y_input.set_value(4)

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (2)
.github/workflows/ci.yml (2)

5-12: Job Matrix Configuration and Trailing Whitespace Issue
The matrix is set up properly to run on Ubuntu with Python 3.11. However, note that line 11 contains trailing spaces; please remove them to adhere to YAML lint standards.

🧰 Tools
🪛 YAMLlint (1.35.1)

[error] 11-11: trailing spaces

(trailing-spaces)


29-31: Test Step Command Needs Clarification and Newline Fix
The test step runs python -m pytest test_gui.py tests/, which may lead to ambiguity. Please confirm if explicitly listing test_gui.py is intentional given that the PR introduces tests for test_run_subprocess.py and test_simple_workflow.py. If these are the intended targets, consider updating this command to ensure all relevant tests are executed without duplication.
Additionally, line 31 lacks a newline character at the end of the file; adding one will meet YAML file formatting standards.

🧰 Tools
🪛 YAMLlint (1.35.1)

[error] 31-31: no new line character at the end of file

(new-line-at-end-of-file)

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 1f76990 and 94bc3a7.

📒 Files selected for processing (1)
  • .github/workflows/ci.yml (1 hunks)
🧰 Additional context used
🪛 YAMLlint (1.35.1)
.github/workflows/ci.yml

[error] 11-11: trailing spaces

(trailing-spaces)


[error] 31-31: no new line character at the end of file

(new-line-at-end-of-file)

🔇 Additional comments (4)
.github/workflows/ci.yml (4)

1-4: CI Workflow Metadata Defined Correctly
The workflow header correctly names the pipeline ("continuous-integration") and triggers on push events.


13-20: Checkout and Miniconda Setup Steps Look Good
The steps for checking out the repository and setting up the Miniconda environment (with the specified channels and environment activation) are implemented correctly.


21-24: Installation of OpenMS is Handled Appropriately
The "Install OpenMS" step uses a straightforward conda installation command, which should work as intended.


25-28: Dependency Installation Step is Effective
Upgrading pip and installing dependencies—including pytest—is well organized. The inline comment regarding dependabot updates is a good practice.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (1)
.github/workflows/ci.yml (1)

31-33: Remove extra blank line.
YAMLlint flagged an extra blank line at line 32. Please remove this to adhere to standard YAML formatting practices.

🧰 Tools
🪛 YAMLlint (1.35.1)

[warning] 32-32: too many blank lines

(1 > 0) (empty-lines)

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 94bc3a7 and a509a08.

📒 Files selected for processing (1)
  • .github/workflows/ci.yml (1 hunks)
🧰 Additional context used
🪛 YAMLlint (1.35.1)
.github/workflows/ci.yml

[warning] 32-32: too many blank lines

(1 > 0) (empty-lines)

⏰ Context from checks skipped due to timeout of 90000ms (3)
  • GitHub Check: build-full-app
  • GitHub Check: build-openms
  • GitHub Check: build-simple-app
🔇 Additional comments (1)
.github/workflows/ci.yml (1)

29-32: Test command update valid.
The updated test command now runs pytest on both test_gui.py and the tests/ directory, ensuring all new tests are executed during CI.

🧰 Tools
🪛 YAMLlint (1.35.1)

[warning] 32-32: too many blank lines

(1 > 0) (empty-lines)

@Srajald Srajald requested a review from jcharkow March 13, 2025 06:49
@jcharkow jcharkow (Contributor) commented Mar 13, 2025

Although the tests are passing above, the new tests that you have added seem to be failing. Please double check this.
https://github.com/Srajald/streamlit-template/actions/runs/13819268665/job/38660331144

Note that since you changed the ci.yml file, the new tests will only trigger on your personal branch and not on the PR.

Also please fix merge conflicts. This can be done by merging the main branch locally with your branch and then pushing.

@jcharkow jcharkow (Contributor) left a comment

Seems that tests are still failing; see the comment above.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (2)
tests/test_simple_workflow.py (2)

47-47: Avoid hardcoded time.sleep() calls

Using hardcoded time.sleep(5) calls makes tests slower and potentially flaky.

The Streamlit testing library already provides a timeout parameter in the run() method. Instead of using explicit sleep calls, rely on the timeout mechanism:

- launch.run(timeout=15)
- time.sleep(5)
+ launch.run(timeout=15)  # This already waits for components to load

If you need to wait for specific elements to appear, consider using a polling approach with a reasonable timeout instead of fixed sleep durations.

Also applies to: 60-60, 81-81


86-89: Improve button detection logic

The current approach uses two different methods to locate the download button, which could be simplified.

Consider consolidating the button detection:

- download_button = next((btn for btn in launch.button if hasattr(btn, "label") and "Download" in btn.label), None)
- download_component = next((comp for comp in launch.main if hasattr(comp, "label") and "Download" in comp.label), None)
- 
- assert download_button or download_component, "Download Table button is missing!"
+ # Look for download button across all component types
+ download_element = next(
+     (comp for comp in launch.main if hasattr(comp, "label") and "Download" in comp.label),
+     None
+ )
+ 
+ assert download_element is not None, "Download Table button is missing!"

This approach is more robust as it searches all components, not just those identified specifically as buttons.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between edb7fcc and 2a1a7e5.

📒 Files selected for processing (2)
  • content/simple_workflow.py (2 hunks)
  • tests/test_simple_workflow.py (1 hunks)
🧰 Additional context used
🧠 Learnings (1)
tests/test_simple_workflow.py (1)
Learnt from: jcharkow
PR: OpenMS/streamlit-template#167
File: tests/test_simple_workflow.py:43-55
Timestamp: 2025-03-13T10:26:04.804Z
Learning: In the simple workflow application, input validation for number inputs is handled through Streamlit's min_value and max_value parameters. The x and y dimensions are constrained to values between 1 and 20. When testing invalid inputs in Streamlit, we need to verify that values exceeding the limits are properly clamped to the allowed range.
⏰ Context from checks skipped due to timeout of 90000ms (3)
  • GitHub Check: build-simple-app
  • GitHub Check: build-full-app
  • GitHub Check: build-openms
🔇 Additional comments (4)
content/simple_workflow.py (2)

20-20: Good defensive programming technique

Adding min(20, params["example-x-dimension"]) ensures that values from the params dictionary are capped at the maximum allowed value, which aligns perfectly with the max_value=20 constraint already defined for this input.


29-29: Good consistency in constraint handling

Applying the same min(20, ...) pattern to the y-dimension ensures consistent behavior across both inputs. This defensive programming approach prevents potential issues with out-of-range values.

tests/test_simple_workflow.py (2)

15-20: Good use of fixture pattern

The fixture for launching the app is well-designed, making it reusable across multiple tests and reducing code duplication.


27-29: Use key-based widget access instead of indexes

Using index-based access to number inputs makes tests brittle if the UI layout changes.

The code in simple_workflow.py does define explicit keys for the inputs. Please modify this test to use key-based access like you do in the other tests:

- launch.number_input[0].set_value(5)  
- launch.number_input[1].set_value(4)
+ x_input = next((ni for ni in launch.number_input if ni.key == "example-x-dimension"), None)
+ y_input = next((ni for ni in launch.number_input if ni.key == "example-y-dimension"), None)
+ assert x_input is not None, "X-dimension input not found!"
+ assert y_input is not None, "Y-dimension input not found!"
+ x_input.set_value(5)
+ y_input.set_value(4)
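
As a side note, if this lookup is repeated across several tests, a tiny helper could keep them consistent; the get_number_input name below is a hypothetical addition, not something defined in this PR:

def get_number_input(app, key):
    """Return the number_input widget with the given key, or fail the test."""
    widget = next((ni for ni in app.number_input if ni.key == key), None)
    assert widget is not None, f"Number input with key '{key}' not found!"
    return widget

# x_input = get_number_input(launch, "example-x-dimension")
# y_input = get_number_input(launch, "example-y-dimension")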

Comment on lines +43 to +65
def test_invalid_inputs(launch):
    """Ensure invalid inputs prevent table generation."""

    launch.run(timeout=15)
    time.sleep(5)

    x_input = next((ni for ni in launch.number_input if ni.key == "example-x-dimension"), None)
    y_input = next((ni for ni in launch.number_input if ni.key == "example-y-dimension"), None)

    assert x_input is not None, "X-dimension input not found!"
    assert y_input is not None, "Y-dimension input not found!"

    # Set invalid values
    x_input.set_value(25)
    y_input.set_value(10)

    launch.run(timeout=15)
    time.sleep(5)

    # Check if table is missing
    table = next((tbl for tbl in launch.table), None)

    assert table is None, "Table should not be generated when inputs are invalid!"

⚠️ Potential issue

Fix incorrect test logic for invalid inputs

This test has a critical logical error. When x_input is set to 25, it won't prevent table generation because the application code now uses min(20, params["example-x-dimension"]) to cap the value at 20.

The test expects no table to be generated (assert table is None), but with the capping logic, the input becomes valid and a table should be generated. This test directly contradicts the changes made in content/simple_workflow.py.

Based on the learning feedback from jcharkow, modify this test to verify value clamping instead:

def test_invalid_inputs(launch):
    """Ensure invalid inputs are properly clamped."""

    launch.run(timeout=15)
-   time.sleep(5)

    x_input = next((ni for ni in launch.number_input if ni.key == "example-x-dimension"), None)
    y_input = next((ni for ni in launch.number_input if ni.key == "example-y-dimension"), None)

    assert x_input is not None, "X-dimension input not found!"
    assert y_input is not None, "Y-dimension input not found!"

    # Set values exceeding maximum
    x_input.set_value(25)
    y_input.set_value(10)

    launch.run(timeout=15)
-   time.sleep(5)

-   # Check if table is missing
-   table = next((tbl for tbl in launch.table), None)
-
-   assert table is None, "Table should not be generated when inputs are invalid!"
+   # Verify values are clamped
+   assert launch.session_state["example-x-dimension"] == 20, "X-dimension not clamped to maximum!"
+   
+   # Check that table was generated with clamped dimensions
+   assert len(launch.dataframe) > 0, "Table not generated!"
+   df = launch.dataframe[0].value
+   assert df.shape == (20, 10), f"Expected table size (20,10) but got {df.shape}"

Committable suggestion skipped: line range outside the PR's diff.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (1)
tests/test_simple_workflow.py (1)

28-29: 🛠️ Refactor suggestion

Use key-based widget access instead of indexes

Using index-based access (number_input[0], number_input[1]) makes tests brittle if the order of widgets changes. This has already been pointed out in previous reviews.

-    launch.number_input[0].set_value(5)  
-    launch.number_input[1].set_value(4)  
+    x_input = next((ni for ni in launch.number_input if ni.key == "example-x-dimension"), None)
+    y_input = next((ni for ni in launch.number_input if ni.key == "example-y-dimension"), None)
+    
+    assert x_input is not None, "X-dimension input not found!"
+    assert y_input is not None, "Y-dimension input not found!"
+    
+    x_input.set_value(5)
+    y_input.set_value(4)
🧹 Nitpick comments (2)
tests/test_simple_workflow.py (2)

57-57: Avoid hard-coded sleep times

Using time.sleep(5) can make tests slow and potentially flaky. Consider using Streamlit's built-in waiting mechanisms or more specific checks for the UI state.

-    time.sleep(5)  
+    # Instead of sleeping, wait for a specific condition
+    # For example, check that the dataframe is populated
+    assert len(launch.dataframe) > 0, "Table not generated after timeout!"

62-65: Simplify download button detection

The current implementation uses two different methods to find the download button. Consider consolidating this into a clearer approach.

-    # Find the "Download Table" button correctly
-    download_button = next((btn for btn in launch.button if hasattr(btn, "label") and "Download" in btn.label), None)
-    download_component = next((comp for comp in launch.main if hasattr(comp, "label") and "Download" in comp.label), None)
-
-    assert download_button or download_component, "Download Table button is missing!"
+    # Find any component that has "Download" in its label
+    download_elements = [comp for comp in launch.main if hasattr(comp, "label") and "Download" in comp.label]
+    
+    assert len(download_elements) > 0, "Download Table button is missing!"
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 2a1a7e5 and 5c8bd14.

📒 Files selected for processing (2)
  • content/simple_workflow.py (2 hunks)
  • tests/test_simple_workflow.py (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • content/simple_workflow.py
🧰 Additional context used
🧠 Learnings (1)
tests/test_simple_workflow.py (1)
Learnt from: jcharkow
PR: OpenMS/streamlit-template#167
File: tests/test_simple_workflow.py:43-55
Timestamp: 2025-03-15T14:56:47.833Z
Learning: In the simple workflow application, input validation for number inputs is handled through Streamlit's min_value and max_value parameters. The x and y dimensions are constrained to values between 1 and 20. When testing invalid inputs in Streamlit, we need to verify that values exceeding the limits are properly clamped to the allowed range.
⏰ Context from checks skipped due to timeout of 90000ms (3)
  • GitHub Check: build-simple-app
  • GitHub Check: build-full-app
  • GitHub Check: build-openms
🔇 Additional comments (2)
tests/test_simple_workflow.py (2)

5-13: Great documentation with clear test scope!

The docstring provides a clear overview of what these tests verify. This is excellent practice for maintainability.


15-21: LGTM - Well-structured pytest fixture

The fixture properly initializes the testing environment with an appropriate timeout.

Comment on lines +1 to +42
import pytest
import time
from streamlit.testing.v1 import AppTest

"""
Tests for the Simple Workflow page functionality.

These tests verify:
- Number input widgets function correctly
- Session state updates properly
- Table generation with correct dimensions
- Download button presence
"""

@pytest.fixture
def launch():
    """Launch the Simple Workflow page for testing."""
    app = AppTest.from_file("content/simple_workflow.py")
    app.run(timeout=15)
    return app

def test_number_inputs(launch):
    """Ensure x and y dimension inputs exist and update correctly."""

    assert len(launch.number_input) >= 2, f"Expected at least 2 number inputs, found {len(launch.number_input)}"

    # Set x and y dimensions
    launch.number_input[0].set_value(5)
    launch.number_input[1].set_value(4)
    launch.run(timeout=10)

    # Validate session state updates
    assert "example-x-dimension" in launch.session_state, "X-dimension key missing in session state!"
    assert "example-y-dimension" in launch.session_state, "Y-dimension key missing in session state!"
    assert launch.session_state["example-x-dimension"] == 5, "X-dimension not updated!"
    assert launch.session_state["example-y-dimension"] == 4, "Y-dimension not updated!"

    assert len(launch.dataframe) > 0, "Table not generated!"

    df = launch.dataframe[0].value
    assert df.shape == (5, 4), f"Expected table size (5,4) but got {df.shape}"


💡 Verification agent

❓ Verification inconclusive

Missing test for invalid inputs

The file documentation mentions validating invalid inputs (line 10-11), but the implementation of test_invalid_inputs is missing. This was pointed out in previous reviews.

Based on previous review comments, please add the test_invalid_inputs function:

def test_invalid_inputs(launch):
    """Ensure invalid inputs are properly clamped."""

    launch.run(timeout=15)

    x_input = next((ni for ni in launch.number_input if ni.key == "example-x-dimension"), None)
    y_input = next((ni for ni in launch.number_input if ni.key == "example-y-dimension"), None)

    assert x_input is not None, "X-dimension input not found!"
    assert y_input is not None, "Y-dimension input not found!"

    # Set values exceeding maximum
    x_input.set_value(25)
    y_input.set_value(10)

    launch.run(timeout=15)

    # Verify values are clamped
    assert launch.session_state["example-x-dimension"] == 20, "X-dimension not clamped to maximum!"
    
    # Check that table was generated with clamped dimensions
    assert len(launch.dataframe) > 0, "Table not generated!"
    df = launch.dataframe[0].value
    assert df.shape == (20, 10), f"Expected table size (20,10) but got {df.shape}"

🏁 Script executed:

#!/bin/bash
# Check if there's a test_invalid_inputs function in another file
grep -r "def test_invalid_inputs" tests/

Length of output: 42


Action Required: Add Missing Invalid Input Test

Our verification confirms that the test_invalid_inputs function is not present in the tests. This missing test leaves the invalid input handling unverified, which is a critical oversight.

Please add the following test to ensure invalid inputs are properly clamped:

def test_invalid_inputs(launch):
    """Ensure invalid inputs are properly clamped."""
    
    launch.run(timeout=15)

    x_input = next((ni for ni in launch.number_input if ni.key == "example-x-dimension"), None)
    y_input = next((ni for ni in launch.number_input if ni.key == "example-y-dimension"), None)

    assert x_input is not None, "X-dimension input not found!"
    assert y_input is not None, "Y-dimension input not found!"

    # Set values exceeding maximum
    x_input.set_value(25)
    y_input.set_value(10)

    launch.run(timeout=15)

    # Verify values are clamped
    assert launch.session_state["example-x-dimension"] == 20, "X-dimension not clamped to maximum!"
    
    # Check that table was generated with clamped dimensions
    assert len(launch.dataframe) > 0, "Table not generated!"
    df = launch.dataframe[0].value
    assert df.shape == (20, 10), f"Expected table size (20,10) but got {df.shape}"

Ensure this test is added in the appropriate test file (e.g., tests/test_simple_workflow.py) to address the critical issue.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

♻️ Duplicate comments (1)
tests/test_simple_workflow.py (1)

10-11: ⚠️ Potential issue

Missing test for invalid inputs

The docstring mentions validating invalid inputs, but the implementation of this test is missing from the file.

Based on previous review feedback, please add the test_invalid_inputs function:

def test_invalid_inputs(launch):
    """Ensure invalid inputs are properly clamped."""
    
    launch.run(timeout=15)

    x_input = next((ni for ni in launch.number_input if ni.key == "example-x-dimension"), None)
    y_input = next((ni for ni in launch.number_input if ni.key == "example-y-dimension"), None)

    assert x_input is not None, "X-dimension input not found!"
    assert y_input is not None, "Y-dimension input not found!"

    # Set values exceeding maximum
    x_input.set_value(25)
    y_input.set_value(10)

    launch.run(timeout=15)

    # Verify values are clamped
    assert launch.session_state["example-x-dimension"] == 20, "X-dimension not clamped to maximum!"
    
    # Check that table was generated with clamped dimensions
    assert len(launch.dataframe) > 0, "Table not generated!"
    df = launch.dataframe[0].value
    assert df.shape == (20, 10), f"Expected table size (20,10) but got {df.shape}"
🧹 Nitpick comments (2)
tests/test_simple_workflow.py (2)

49-70: Reduce reliance on static sleep times

While the download button test is generally well-structured, using time.sleep(5) can make tests flaky and unnecessarily slow.

Consider removing the static sleep and relying on the timeout parameter in the run() method:

-    launch.run(timeout=15)  
-    time.sleep(5)  
+    launch.run(timeout=15)

Streamlit's testing framework should wait for the application to stabilize after changes, making the explicit sleep unnecessary.


68-69: Consider adding button click verification

The test verifies the Download button's presence but doesn't test if clicking it works correctly.

Consider extending the test to verify the button's functionality by simulating a click and checking for expected behavior. While this might require mocking file downloads, it would provide more comprehensive test coverage.
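
A minimal sketch of such an extension, checking only the data that backs the download rather than simulating the browser download itself (the test name and the shape assertion are assumptions, not code from this PR):

def test_download_button_data(launch):
    """Sketch: verify the table behind the Download button matches the inputs."""
    launch.run(timeout=15)

    # Locate the download element by label, as in the existing test.
    download_element = next(
        (comp for comp in launch.main if hasattr(comp, "label") and "Download" in comp.label),
        None,
    )
    assert download_element is not None, "Download Table button is missing!"

    # The dataframe that would be exported should match the dimensions
    # currently stored in session state.
    df = launch.dataframe[0].value
    assert df.shape == (
        launch.session_state["example-x-dimension"],
        launch.session_state["example-y-dimension"],
    ), "Downloaded table would not match the selected dimensions!"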

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 5c8bd14 and 15a680b.

📒 Files selected for processing (1)
  • tests/test_simple_workflow.py (1 hunks)
🧰 Additional context used
🧠 Learnings (1)
tests/test_simple_workflow.py (1)
Learnt from: jcharkow
PR: OpenMS/streamlit-template#167
File: tests/test_simple_workflow.py:43-55
Timestamp: 2025-03-15T14:56:47.833Z
Learning: In the simple workflow application, input validation for number inputs is handled through Streamlit's min_value and max_value parameters. The x and y dimensions are constrained to values between 1 and 20. When testing invalid inputs in Streamlit, we need to verify that values exceeding the limits are properly clamped to the allowed range.
⏰ Context from checks skipped due to timeout of 90000ms (3)
  • GitHub Check: build-full-app
  • GitHub Check: build-openms
  • GitHub Check: build-simple-app
🔇 Additional comments (3)
tests/test_simple_workflow.py (3)

1-13: Good test documentation and structure!

The imports and docstring clearly outline the test objectives, providing a solid foundation for the test suite. The documentation effectively communicates what functionality is being tested.


15-20: Clear and well-structured fixture!

The fixture provides a clean setup for the tests with an appropriate timeout value. Using a fixture to initialize the app once and reuse it across tests is an efficient approach.


22-47: Well-implemented test for number inputs!

The test effectively validates that the number inputs exist, updates correctly, and verifies the generated table dimensions. Using key-based widget access (rather than index-based) is a good practice that makes tests more resilient to UI changes.

@Srajald Srajald requested a review from jcharkow March 18, 2025 07:32
Contributor

@jcharkow jcharkow left a comment


Looks good, just some very minor changes.

min_value=1,
max_value=20,
- value=params["example-y-dimension"],
+ value=max(1, min(20, params.get("example-y-dimension", 1))),
Contributor


same as above.

@Srajald
Contributor Author

Srajald commented Mar 18, 2025

@jcharkow
I have updated that. I had added it only as a precautionary measure; it is now removed and the PR is ready to be merged.
Thanks!

@Srajald Srajald requested a review from jcharkow March 18, 2025 14:26
Contributor

@jcharkow jcharkow left a comment


Looks good! Ready to merge

@jcharkow jcharkow merged commit 81d599e into OpenMS:main Mar 18, 2025
7 checks passed
