Conversation

**Walkthrough**

The changes introduce two new test suites that verify user interactions on both the Run Subprocess and Simple Workflow pages of the Streamlit application. The tests validate dropdown selections, button clicks, number input handling, and the presence of UI elements. Additionally, the GitHub Actions workflow has been updated to expand test coverage, and the Simple Workflow code has been completely removed.
**Sequence Diagram(s)**

```mermaid
sequenceDiagram
    participant U as User/Automation
    participant RS as RunSubprocessApp
    U->>RS: Launch Run Subprocess Page
    RS-->>RS: Load dropdown options
    U->>RS: Select file from dropdown
    RS-->>U: Update UI with selection
    U->>RS: Click "Extract IDs" button
    RS-->>U: Initiate subprocess and update UI
```

```mermaid
sequenceDiagram
    participant U as User/Automation
    participant SW as SimpleWorkflowApp
    U->>SW: Launch Simple Workflow Page
    SW-->>SW: Display number input widgets
    U->>SW: Enter x and y dimensions
    SW-->>U: Generate and display data table
    U->>SW: Click "Download Table" button
    SW-->>U: Trigger table download process
```
Actionable comments posted: 0
🧹 Nitpick comments (4)
tests/test_run_subprocess.py (1)
39-40: Remove debugging print statements.

These print statements appear to be for debugging purposes and should be removed before merging to keep the test output clean.

```diff
- print("TEXT OUTPUT:", launch.text)
- print("SESSION STATE:", launch.session_state)
```

tests/test_simple_workflow.py (3)
4-8: Simplify fixture return.

The intermediate variable assignment is unnecessary.

```diff
 @pytest.fixture
 def launch():
     """Launch the Simple Workflow page for testing."""
-    app = AppTest.from_file("content/simple_workflow.py")
-    return app
+    return AppTest.from_file("content/simple_workflow.py")
```
45-46: Remove debugging print statements.

These print statements appear to be for debugging purposes and should be removed before merging.

```diff
- print("Available buttons:", [btn.label for btn in launch.button])
- print("Available markdown elements:", [md.value for md in launch.markdown])
```
48-49: Consider a more direct button verification if possible.

The current approach checks for "Download Table" text in markdown elements, which works but is indirect. If Streamlit's testing API allows for direct button identification, consider using that instead.

```python
# If button testing API supports direct label checking:
assert any(btn.label == "Download Table" for btn in launch.button), "Download Table button is missing!"
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- tests/test_run_subprocess.py (1 hunks)
- tests/test_simple_workflow.py (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (4)
- GitHub Check: build-full-app
- GitHub Check: build
- GitHub Check: build-openms
- GitHub Check: build-simple-app
🔇 Additional comments (6)
tests/test_run_subprocess.py (4)
4-7: Good fixture setup with clear documentation.

The fixture is well-defined with a descriptive docstring that clarifies its purpose.

9-18: Well-structured test for file selection functionality.

The test properly verifies the existence of the dropdown and attempts to select an option when available, which is a good approach for testing UI interactions.

20-31: Good test structure with appropriate skip condition.

The test properly checks for prerequisites and uses pytest.skip when no files are available, which prevents false failures.

42-43: Comprehensive assertion with helpful error message.

The assertions appropriately check that the subprocess was executed successfully and provide informative error messages, which is excellent practice.
tests/test_simple_workflow.py (2)
10-28: Well-structured test for number inputs.

The test properly validates input existence, updates, and verifies the effects on both session state and output data. The assertions have clear error messages.

30-44: Good test for table generation and dimensionality.

The test effectively verifies that the table is generated with the correct dimensions based on the input values.
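The dimensionality check praised here can be sketched without Streamlit at all: generate a table of the requested shape and assert its dimensions. The generator below is a hypothetical stand-in for the app's data source (plain lists instead of its dataframe):

```python
import random

def make_table(x_dim, y_dim, seed=0):
    """Stand-in for the app's table generation: x_dim rows by y_dim columns of random ints."""
    rng = random.Random(seed)
    return [[rng.randint(0, 100) for _ in range(y_dim)] for _ in range(x_dim)]

# Mirror of the dimensionality assertion in the reviewed test
table = make_table(5, 4)
assert len(table) == 5, "Expected 5 rows"
assert all(len(row) == 4 for row in table), "Expected 4 columns per row"
```

Seeding the generator keeps the sketch deterministic, which is what makes a shape assertion reliable in CI.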
Actionable comments posted: 0
🧹 Nitpick comments (6)
tests/test_simple_workflow.py (6)
9-27: Consider making tests more robust to UI changes.

The test makes assumptions about the order of number inputs (indexes 0 and 1), which could break if the order changes in the application.

```diff
- launch.number_input[0].set_value(5)
- launch.number_input[1].set_value(4)
+ # Find inputs by label or key if possible
+ x_input = next((inp for inp in launch.number_input if "x" in inp.label.lower()), launch.number_input[0])
+ y_input = next((inp for inp in launch.number_input if "y" in inp.label.lower()), launch.number_input[1])
+ x_input.set_value(5)
+ y_input.set_value(4)
```
23-26: Add validation for table content.

The test verifies the dimensions of the table but doesn't validate its content.

```diff
 df = launch.dataframe[0].value
 assert df.shape == (5, 4), f"Expected table size (5,4) but got {df.shape}"
+# Validate that table contains expected data types or patterns
+assert df.dtypes.apply(lambda x: x.name).unique().tolist() == ['int64'], "Expected all columns to be integers"
+# Verify data bounds if applicable
+assert df.values.min() >= 0, "Expected all values to be non-negative"
```
29-45: Test download button functionality.

The test verifies the presence of the download button but doesn't test its functionality.

```diff
 # Check for "Download Table" text in markdown
 assert any(btn.label == "Download Table" for btn in launch.button), "Download Table button is missing!"
+# Find the download button
+download_button = next((btn for btn in launch.button if btn.label == "Download Table"), None)
+assert download_button is not None, "Download Table button not found"
+
+# Test button click if possible
+# Note: This may require mocking file system interactions
+# download_button.click()
+# Assert expected behavior after button click
```
1-45: Add negative test cases and error handling scenarios.

The test suite only covers the happy path. Consider adding tests for invalid inputs or edge cases.
Examples of additional test cases to consider:
- Test with minimum allowed values (e.g., 0 or 1)
- Test with very large values
- Test behavior when inputs are invalid
```python
def test_invalid_inputs(launch):
    """Test behavior with invalid inputs."""
    launch.run()

    # Test with negative values (if these should be rejected)
    launch.number_input[0].set_value(-1)
    launch.number_input[1].set_value(-2)
    launch.run()

    # Assert expected error message or handled state,
    # e.g., check for error message in warning elements
    assert any("invalid" in warn.value.lower() for warn in launch.warning), "Expected warning for negative values"

    # Verify table is not generated with invalid inputs
    assert len(launch.dataframe) == 0, "Table should not be generated with invalid inputs"
```
13-21: Avoid hardcoded session state keys.

The test uses hardcoded session state keys, which makes it brittle if key names change. Consider extracting these keys to constants or using a more flexible approach:

```diff
- assert launch.session_state["example-x-dimension"] == 5, "X-dimension not updated in session state!"
- assert launch.session_state["example-y-dimension"] == 4, "Y-dimension not updated in session state!"
+ # Find keys related to dimensions in the session state
+ x_key = next((k for k in launch.session_state.keys() if "x" in k.lower() and "dimension" in k.lower()), None)
+ y_key = next((k for k in launch.session_state.keys() if "y" in k.lower() and "dimension" in k.lower()), None)
+
+ assert x_key is not None, "X-dimension key not found in session state"
+ assert y_key is not None, "Y-dimension key not found in session state"
+ assert launch.session_state[x_key] == 5, "X-dimension not updated in session state!"
+ assert launch.session_state[y_key] == 4, "Y-dimension not updated in session state!"
```
1-3: Consider adding a docstring or module-level comment.

Adding a module-level docstring would improve documentation and clarify the purpose of this test file. Note that a module docstring must appear before the imports:

```diff
+"""
+Tests for the Simple Workflow page functionality.
+
+These tests verify:
+- Number input widgets function correctly
+- Session state updates properly
+- Table generation with correct dimensions
+- Download button presence
+"""
 import pytest
 from streamlit.testing.v1 import AppTest
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- tests/test_run_subprocess.py (1 hunks)
- tests/test_simple_workflow.py (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- tests/test_run_subprocess.py
⏰ Context from checks skipped due to timeout of 90000ms (3)
- GitHub Check: build-simple-app
- GitHub Check: build-openms
- GitHub Check: build-full-app
🔇 Additional comments (1)
tests/test_simple_workflow.py (1)
4-7: LGTM: Good use of pytest fixtures.

The fixture is well-designed to create an isolated test environment for each test case.
Actionable comments posted: 0
🧹 Nitpick comments (4)
tests/test_simple_workflow.py (4)
21-23: Remove redundant run() call.

The `launch.run(timeout=10)` call is redundant, as the fixture already runs the application.

```diff
 def test_number_inputs(launch):
     """Ensure x and y dimension inputs exist and update correctly."""
-    launch.run(timeout=10)
```

44-46: Remove redundant run() call in second test.

Similar to the first test, this run() call is redundant as the fixture already handles it.

```diff
 def test_download_button(launch):
     """Ensure the 'Download Table' button appears after table generation."""
-    launch.run(timeout=10)
```

49-50: Consider using more robust widget selectors in second test.

Same issue as in the first test - using numeric indices makes the test fragile to UI changes.
59-59: Remove debug print statement.

The print statement appears to be for debugging purposes and should be removed from the final test code.

```diff
- print("Available downloads:", [dl.label for dl in launch.download])
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- tests/test_simple_workflow.py (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (3)
- GitHub Check: build-simple-app
- GitHub Check: build-openms
- GitHub Check: build-full-app
🔇 Additional comments (5)
tests/test_simple_workflow.py (5)
1-12: Well-structured test file with clear documentation.

The imports and docstring are appropriate for a Streamlit application test file. The docstring clearly explains the purpose of the tests and what they verify.

14-19: LGTM! Good fixture setup for Streamlit testing.

The fixture correctly initializes the testing environment by loading the Simple Workflow application with a reasonable timeout.

61-63: Good implementation of button verification.

The approach to verify the download button's presence using a generator expression is clean and effective.
29-30: ✅ Verification successful

Consider using more robust widget selectors.

Accessing number input widgets by index (0, 1) makes the test brittle to UI changes. Consider using key, label, or other identifiers if the Streamlit testing API supports it.

```diff
- launch.number_input[0].set_value(5)
- launch.number_input[1].set_value(4)
+ # Example of more robust selectors (if supported by the API):
+ launch.number_input(key="example-x-dimension").set_value(5)
+ launch.number_input(key="example-y-dimension").set_value(4)
```
🌐 Web query: Does Streamlit testing API support selecting widgets by key or label?

💡 Result:

The Streamlit testing API supports selecting widgets by key through the `AppTest` class, which allows precise identification of and interaction with widgets during testing. Here's how it works:

Key-Based Widget Selection

Assigned Keys: widgets with a unique `key` parameter can be directly accessed using `AppTest` methods:

```python
at.selectbox(key="contact_method")  # Access selectbox with key="contact_method"
```

This retrieves the widget regardless of its position on the page[1][4][6].

Key Verification: properties like `label`, `options`, and `value` can be inspected:

```python
assert at.selectbox[0].label == "Preferred Contact"
assert at.selectbox[0].options == ["Email", "Phone"]
```

This validates both widget configuration and state[3][11].

Label Limitations

While labels are visible in the UI, they aren't used for widget identification in tests. Two widgets with identical labels but different keys are treated as distinct entities[6][10].
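The key-versus-label distinction above can be illustrated with a plain dictionary model of widget identity (the widget names here are hypothetical, not from the app):

```python
# Minimal model of widget lookup: identity comes from `key`, never from `label`.
widgets = {
    "x_dim": {"label": "Dimension", "value": 5},
    "y_dim": {"label": "Dimension", "value": 4},
}

# Same label, distinct keys — key-based lookup is unambiguous:
assert widgets["x_dim"]["value"] == 5
assert widgets["y_dim"]["value"] == 4

# Label-based lookup would match both widgets, so it cannot identify one:
matches = [k for k, w in widgets.items() if w["label"] == "Dimension"]
assert len(matches) == 2
```

This is why the review pushes tests toward `key`-based selectors: a label can legitimately be shared, a key cannot.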
Best Practices
- Unique Keys
Always assign explicit keys to widgets requiring test interaction (e.g.,key="user_age_input")[6][10].- Avoid Index Reliance
Usekeyinstead of positional indexes (e.g.,at.selectbox) to prevent test failures from UI order changes[3][4].Example Workflow
from streamlit.testing.v1 import AppTest def test_contact_selection(): at = AppTest.from_file("app.py") at.run() # Select by key and modify value contact_widget = at.selectbox(key="contact_method") contact_widget.select("Mobile phone") at.run() # Verify update assert at.success[0].value == "You selected: Mobile phone"This demonstrates key-based interaction and state validation[4][13][14].
For complex scenarios, combine key selection with container targeting:
at.sidebar.selectbox(key="theme_selector") # Widget in sidebarCitations:
- 1: https://docs.streamlit.io/develop/api-reference/widgets/st.selectbox
- 2: https://www.youtube.com/watch?v=yMcUU_OiSVQ
- 3: https://docs.streamlit.io/develop/concepts/app-testing/get-started
- 4: https://docs.streamlit.io/develop/api-reference/app-testing/st.testing.v1.apptest
- 5: https://docs.streamlit.io/1.27.0/develop/api-reference/widgets/st.selectbox
- 6: https://discuss.streamlit.io/t/how-to-use-the-key-field-in-interactive-widgets-api/1007
- 7: https://docs.streamlit.io/develop/api-reference/widgets
- 8: https://docs.streamlit.io/develop/api-reference/widgets/st.select_slider
- 9: https://github.com/streamlit/streamlit/blob/develop/lib/streamlit/testing/v1/app_test.py
- 10: https://docs.streamlit.io/develop/concepts/architecture/widget-behavior
- 11: https://docs.streamlit.io/develop/api-reference/app-testing/testing-element-classes
- 12: https://www.youtube.com/watch?v=4Vne7mHXApg
- 13: https://docs.streamlit.io/develop/api-reference/app-testing
- 14: https://discuss.streamlit.io/t/help-with-selecting-the-best-widgets-for-appraoch/70527
- 15: https://docs.streamlit.io/develop/api-reference
- 16: https://python-textbook.pythonhumanities.com/05_streamlit/05_01_04_input_widgets.html
- 17: https://docs.streamlit.io/develop/concepts/app-testing/cheat-sheet
- 18: https://www.youtube.com/watch?v=99OEoP5sy0U
Switch to key-based widget selectors for increased robustness.

The Streamlit testing API supports selecting widgets by key. In this case, replace index-based access with key-based access to prevent brittleness as the UI evolves. In tests/test_simple_workflow.py (lines 29-30), update:

```python
launch.number_input[0].set_value(5)
launch.number_input[1].set_value(4)
```

to

```diff
- launch.number_input[0].set_value(5)
- launch.number_input[1].set_value(4)
+ # Example using key selectors:
+ launch.number_input(key="example-x-dimension").set_value(5)
+ launch.number_input(key="example-y-dimension").set_value(4)
```

This change leverages the API's support for key-based selection, making tests more resilient to changes in widget order or UI structure.
1-63: ❌ Incorrect review comment

Suggest adding test for button click functionality.

The tests currently verify the presence of inputs and the download button, but don't test what happens when the button is clicked. Consider adding a test that simulates clicking the download button to ensure it functions correctly.

```python
def test_download_button_click(launch):
    """Ensure the 'Download Table' button functions correctly when clicked."""
    # Set x and y dimensions
    launch.number_input[0].set_value(3)
    launch.number_input[1].set_value(2)
    launch.run(timeout=10)

    # Find and click the download button
    download_button = next((dl for dl in launch.download if dl.label == "Download Table"), None)
    assert download_button, "Download Table button is missing!"

    # Here you'd test the click functionality if the Streamlit testing API supports it.
    # This might involve checking that certain session state values are updated
    # or that a download is initiated, e.g.:
    # download_button.click()
    # then assert the expected behavior
```
🌐 Web query: Does Streamlit testing API support download button click simulation?

💡 Result:

The Streamlit testing API (`AppTest`) currently does not support simulating clicks on `st.download_button`, based on the available documentation and community discussions[1][6]. Here's a detailed breakdown:

Current Limitations

Missing Support in AppTest Class:

- The `AppTest` framework allows interaction with widgets like `st.button`, `st.text_input`, and `st.form_submit_button`, but `st.download_button` is explicitly excluded from supported elements[3][6].
- A GitHub feature request (July 2024) highlights this gap, noting that automated testing for download buttons is not yet possible[6].

Workaround Challenges:

- While `st.download_button` can trigger actions via `on_click` callbacks, these interactions can't be programmatically validated in tests due to the lack of API support[2][4].
- Testing frameworks like Selenium or Playwright are recommended alternatives for end-to-end testing of download functionality[9][12].
Key Insights from Documentation
AppTest Capabilities:
- Simulates user input for buttons, text inputs, and form submissions[3][11].
- Inspects rendered outputs like text, dataframes, and charts[3][5].
- Supports session state manipulation and multipage app testing[13][14].
Unsupported Features:
- Download button states (e.g., verifying downloaded content or click triggers)[6][12].
- Direct handling of file generation/download workflows in tests[6][12].
Recommendations

- Track the GitHub Issue: follow streamlit/streamlit#9003 for updates on adding `st.download_button` support to `AppTest`[6].
- Use Browser Automation: for critical download workflows, combine `AppTest` with tools like Playwright to validate end-to-end behavior[9][12].
- Mock File Generation: test the logic generating downloadable files separately using pytest, while avoiding UI-dependent validations[3][14].

In summary, while Streamlit's native testing framework excels at validating most UI interactions, testing `st.download_button` requires external tools or workarounds until official support is added.

Citations:
- 1: https://www.restack.io/p/using-streamlit-answer-api-testing-cat-ai
- 2: https://www.restack.io/docs/streamlit-knowledge-streamlit-download-button-on-click
- 3: https://docs.streamlit.io/develop/api-reference/app-testing
- 4: https://www.restack.io/docs/streamlit-knowledge-streamlit-download-button-reload-fix
- 5: https://docs.streamlit.io/develop/concepts/app-testing
- 6: streamlit/streamlit#9003
- 7: https://www.youtube.com/watch?v=99OEoP5sy0U
- 8: https://discuss.streamlit.io/t/why-download-button-reload-page/24380
- 9: https://blog.streamlit.io/apptest-faster-automated-testing/
- 10: https://discuss.streamlit.io/t/create-download-file-upon-clicking-a-button/32613
- 11: https://docs.streamlit.io/develop/api-reference/app-testing/st.testing.v1.apptest
- 12: streamlit/streamlit#4382
- 13: https://docs.streamlit.io/develop/concepts/app-testing/beyond-the-basics
- 14: https://docs.streamlit.io/develop/api-reference/widgets/st.download_button
- 15: https://docs.streamlit.io/develop/concepts/app-testing/get-started
- 16: https://docs.streamlit.io/develop/api-reference
Clarification on Download Button Click Testing

Based on the current capabilities of the Streamlit testing API (`AppTest`), simulating a click on a download button (i.e. `st.download_button`) is not supported. Rather than adding a test for button click functionality that the API cannot simulate, please consider these alternatives:
- Test the underlying logic: Isolate and test the file generation or callback functionality separately.
- Use E2E testing tools: For full UI interaction tests, consider an end-to-end framework (e.g., Selenium or Playwright) which can simulate actual button clicks.
- Monitor framework improvements: Track streamlit/streamlit#9003 for future support on simulating download button clicks.
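The first alternative — testing the file-generation logic apart from the UI — might look like the sketch below. The serialization helper is hypothetical; the real app's download payload may be built differently:

```python
import csv
import io

def table_to_csv(table):
    """Serialize a table (list of rows) to CSV text — the kind of payload a download button would serve."""
    buf = io.StringIO()
    csv.writer(buf).writerows(table)
    return buf.getvalue()

# The logic is verifiable with plain asserts, no UI or browser involved:
csv_text = table_to_csv([[1, 2], [3, 4]])
assert csv_text.splitlines() == ["1,2", "3,4"]
```

Splitting the payload builder out of the Streamlit callback is what makes this testable without `AppTest` support for download buttons.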
Likely an incorrect or invalid review comment.
Actionable comments posted: 0
🧹 Nitpick comments (4)
tests/test_simple_workflow.py (4)
22-23: Remove redundant .run() call

Your test function has a redundant `launch.run(timeout=10)` call right at the beginning. Since the fixture already runs the app, this call is unnecessary and might cause delays in your test execution.

```diff
 def test_number_inputs(launch):
     """Ensure x and y dimension inputs exist and update correctly."""
-    launch.run(timeout=10)

     # Ensure at least 2 number input widgets exist
```
46-50: Apply consistent timeout parameters

There's inconsistency in how timeout parameters are used across different run() calls. For reliability, especially in CI environments, it's good practice to set timeouts explicitly and consistently.

```diff
- launch.run()
+ launch.run(timeout=10)
  launch.number_input[0].set_value(-1)
  launch.number_input[1].set_value(-2)
- launch.run()
+ launch.run(timeout=10)
```
73-73: Remove debug print statement

The print statement should be removed, as it's not necessary for the test execution and could clutter test output logs. Debug statements shouldn't be included in committed test code.

```diff
- print("Available downloads:", [dl.label for dl in launch.download])
```
56-57: Fix extra blank line

According to PEP 8 style guidelines, there should be only one blank line between function definitions.

```diff
     # Verify table is not generated with invalid inputs
     assert len(launch.dataframe) == 0, "Table should not be generated with invalid inputs"

-
 def test_download_button(launch):
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- tests/test_simple_workflow.py (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (3)
- GitHub Check: build-full-app
- GitHub Check: build-openms
- GitHub Check: build-simple-app
🔇 Additional comments (1)
tests/test_simple_workflow.py (1)
1-77: Overall well-structured test suite

The test suite is well-organized with clear docstrings, appropriate assertions with helpful error messages, and good coverage of different aspects of the Simple Workflow page. The tests verify number inputs, session state updates, error conditions, and the presence of UI elements like tables and download buttons.
I particularly like the thorough validation of session state keys and values, as well as the table dimension checks.
Actionable comments posted: 1
🧹 Nitpick comments (3)
tests/test_simple_workflow.py (3)
42-54: Add timeout parameter to run() calls for consistency.

While the test logic for invalid inputs is good, you should include the timeout parameter in the run() calls for consistency with other tests.

```diff
- launch.run()
+ launch.run(timeout=10)
  launch.number_input[0].set_value(-1)
  launch.number_input[1].set_value(-2)
- launch.run()
+ launch.run(timeout=10)
```
56-72: Consider refactoring duplicate table generation logic.

This test has similar table generation logic to test_number_inputs. Consider extracting a helper function to reduce duplication:

```python
def _generate_table(app, x_dim, y_dim):
    """Helper to set dimensions and generate a table."""
    app.number_input[0].set_value(x_dim)
    app.number_input[1].set_value(y_dim)
    app.run(timeout=10)

    # Verify table generation
    assert len(app.dataframe) > 0, "Table not generated!"
    df = app.dataframe[0].value
    assert df.shape == (x_dim, y_dim), f"Expected table size ({x_dim},{y_dim}) but got {df.shape}"
    return df
```

This would simplify both test functions and improve maintainability.
50-50: Consider a more specific assertion for warning messages.

The current assertion checks if any warning contains 'invalid', but a more specific check would be more robust.

```diff
- assert any("invalid" in warn.value.lower() for warn in launch.warning), "Expected warning for negative values"
+ assert any("dimensions must be positive" in warn.value.lower() for warn in launch.warning), "Expected specific warning for negative values"
```

This assumes the actual error message contains "dimensions must be positive" - please adjust to match the actual expected message.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- tests/test_simple_workflow.py (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (4)
- GitHub Check: build-simple-app
- GitHub Check: build
- GitHub Check: build-openms
- GitHub Check: build-full-app
🔇 Additional comments (5)
tests/test_simple_workflow.py (5)
14-19: Well-structured test fixture for streamlit app testing.

The fixture correctly initializes the Streamlit page for testing with an appropriate timeout value. This approach makes the tests clean and maintainable.

24-40: Good verification of number inputs and table generation.

This test thoroughly verifies that:
- Number inputs exist and update correctly
- Session state keys are properly updated
- The generated table has the expected dimensions
Consider adding a test case with larger dimensions to verify scalability.
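On larger dimensions: later in this thread the x dimension turns out to carry a max_value constraint of 20, so a boundary-value sketch is a natural companion to a scalability case. The validation helper below is hypothetical — only the limit of 20 comes from the discussion:

```python
MAX_X = 20  # upper bound on the x dimension, per the review discussion

def dims_are_valid(x, y, max_x=MAX_X):
    """Hypothetical validation mirroring the app's constraints: positive dims, x capped at max_x."""
    return 0 < x <= max_x and y > 0

assert dims_are_valid(5, 4)        # typical case
assert dims_are_valid(20, 1)       # boundary value is still valid
assert not dims_are_valid(21, 4)   # exceeds the x limit
assert not dims_are_valid(-1, 4)   # negative values rejected
```

Testing the exact boundary (20) and one step past it (21) is what distinguishes a boundary case from just "a larger value".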
1-3: LGTM! Standard pytest and Streamlit testing imports.

The test file correctly imports the necessary libraries for Streamlit application testing.

4-12: Clear and informative module docstring.

The docstring effectively communicates the purpose and scope of the test suite, which improves maintainability.

71-72: Good use of next() with generator expression to find the download button.

This is an elegant way to search for a specific button by label in a collection of elements.
Actionable comments posted: 1
🧹 Nitpick comments (3)
tests/test_simple_workflow.py (3)
47-47: Replace fixed sleep with deterministic waiting

Using `time.sleep()` can lead to flaky tests that sometimes pass and sometimes fail based on timing variations. Consider using more deterministic waiting mechanisms offered by the testing framework, or checking for specific UI elements to appear instead of an arbitrary sleep.

70-70: Replace fixed sleep with deterministic waiting

Using `time.sleep()` can lead to flaky tests that sometimes pass and sometimes fail based on timing variations. Consider waiting for specific UI elements to appear instead of using a fixed sleep time:

```diff
-time.sleep(5)
+# Wait for the dataframe to appear by checking its presence in a loop with timeout
+max_wait = 5
+start_time = time.time()
+while time.time() - start_time < max_wait:
+    if len(launch.dataframe) > 0:
+        break
+    time.sleep(0.1)
```
75-79: Standardize approach for locating the download button

The code uses two different approaches to find the download button, which suggests uncertainty about the correct approach. If possible, identify which approach is more reliable and standardize on it. If both approaches are needed for resilience, consider adding a comment explaining why:

```diff
 # Find the "Download Table" button correctly
-download_button = next((btn for btn in launch.button if hasattr(btn, "label") and "Download" in btn.label), None)
-download_component = next((comp for comp in launch.main if hasattr(comp, "label") and "Download" in comp.label), None)
+# Primary approach: look for button components
+download_button = next((btn for btn in launch.button if hasattr(btn, "label") and "Download" in btn.label), None)
+
+# Fallback approach: look in main components (in case the button is wrapped in another component)
+download_component = next((comp for comp in launch.main if hasattr(comp, "label") and "Download" in comp.label), None)

 # Final assertion
 assert download_button or download_component, "Download Table button is missing!"
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- tests/test_simple_workflow.py (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (4)
- GitHub Check: build-full-app
- GitHub Check: build-simple-app
- GitHub Check: build
- GitHub Check: build-openms
🔇 Additional comments (4)
tests/test_simple_workflow.py (4)
28-29: Update index-based access to use key-based identification

The test is using index-based access to number inputs, which makes tests brittle if the widget order changes. Replace these lines with key-based access as used in your other test functions:

```diff
-launch.number_input[0].set_value(5)
-launch.number_input[1].set_value(4)
+x_input = next((ni for ni in launch.number_input if ni.key == "example-x-dimension"), None)
+y_input = next((ni for ni in launch.number_input if ni.key == "example-y-dimension"), None)
+assert x_input is not None, "X-dimension input not found!"
+assert y_input is not None, "Y-dimension input not found!"
+x_input.set_value(5)
+y_input.set_value(4)
```
1-14: LGTM! Well-structured imports and clear test documentation.

The imports are appropriate and the docstring clearly explains the test coverage.

15-21: LGTM! Well-defined test fixture.

The fixture appropriately launches the Streamlit application for testing with a reasonable timeout.

32-42: LGTM! Good validation of session state and table dimensions.

The assertions effectively validate that the session state is updated correctly and that the table is generated with the expected dimensions.
tests/test_simple_workflow.py (outdated)

```python
def test_invalid_inputs(launch):
    """Ensure invalid inputs are handled properly."""
    launch.run(timeout=10)
    time.sleep(2)

    x_input = next((ni for ni in launch.number_input if ni.key == "example-x-dimension"), None)
    y_input = next((ni for ni in launch.number_input if ni.key == "example-y-dimension"), None)

    assert x_input is not None, "X-dimension input not found!"
    assert y_input is not None, "Y-dimension input not found!"
```
🛠️ Refactor suggestion
Complete implementation of invalid inputs test
The test_invalid_inputs function doesn't actually test any invalid inputs. It finds the input elements but doesn't set invalid values or verify how the application handles them.
Consider implementing the actual test case by adding code to set invalid values and verify the application's response. For example:
```diff
 def test_invalid_inputs(launch):
     """Ensure invalid inputs are handled properly."""
     launch.run(timeout=10)
-    time.sleep(2)
     x_input = next((ni for ni in launch.number_input if ni.key == "example-x-dimension"), None)
     y_input = next((ni for ni in launch.number_input if ni.key == "example-y-dimension"), None)
     assert x_input is not None, "X-dimension input not found!"
     assert y_input is not None, "Y-dimension input not found!"
+    # Test negative values
+    x_input.set_value(-1)
+    y_input.set_value(-1)
+    launch.run(timeout=10)
+
+    # Verify appropriate error handling
+    error_message = next((text for text in launch.markdown if "invalid" in text.value.lower()), None)
+    assert error_message is not None, "No error message displayed for negative values"
+
+    # Verify no table was generated with invalid inputs
+    assert len(launch.dataframe) == 0, "Table was generated with invalid inputs!"
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
def test_invalid_inputs(launch):
    """Ensure invalid inputs are handled properly."""
    launch.run(timeout=10)
    x_input = next((ni for ni in launch.number_input if ni.key == "example-x-dimension"), None)
    y_input = next((ni for ni in launch.number_input if ni.key == "example-y-dimension"), None)
    assert x_input is not None, "X-dimension input not found!"
    assert y_input is not None, "Y-dimension input not found!"
    # Test negative values
    x_input.set_value(-1)
    y_input.set_value(-1)
    launch.run(timeout=10)
    # Verify appropriate error handling
    error_message = next((text for text in launch.markdown if "invalid" in text.value.lower()), None)
    assert error_message is not None, "No error message displayed for negative values"
    # Verify no table was generated with invalid inputs
    assert len(launch.dataframe) == 0, "Table was generated with invalid inputs!"
```
Please see this suggestion above. I would try testing an x value greater than 20 and it should be invalid.
🧩 Analysis chain
@jcharkow I'll modify the test to check for x values greater than 20 as you suggested. Let me first examine the implementation to ensure the test accurately reflects the expected behavior.
🏁 Script executed:

```bash
#!/bin/bash
# Fetch the content of the simple_workflow.py file
cat content/simple_workflow.py
```

Length of output: 1429
@jcharkow I've examined the simple_workflow.py file and see that the x dimension has a max_value constraint of 20. Here's an updated version of the test_invalid_inputs function that tests values exceeding this limit:
```python
def test_invalid_inputs(launch):
    """Ensure invalid inputs are handled properly."""
    launch.run(timeout=10)
    x_input = next((ni for ni in launch.number_input if ni.key == "example-x-dimension"), None)
    y_input = next((ni for ni in launch.number_input if ni.key == "example-y-dimension"), None)
    assert x_input is not None, "X-dimension input not found!"
    assert y_input is not None, "Y-dimension input not found!"
    # Test value exceeding maximum allowed (max is 20)
    x_input.set_value(25)  # This should be clamped to 20
    y_input.set_value(10)
    launch.run(timeout=10)
    # Verify the value was clamped to the maximum allowed
    assert launch.session_state["example-x-dimension"] == 20, "X-dimension not clamped to maximum!"
    # Check that a table was generated with the correct dimensions
    assert len(launch.dataframe) > 0, "Table not generated!"
    df = launch.dataframe[0].value
    assert df.shape == (20, 10), f"Expected table size (20,10) but got {df.shape}"
```

This test verifies that when a user attempts to enter a value exceeding the maximum limit, Streamlit correctly clamps it to the maximum allowed value and the application still functions properly.
✏️ Learnings added
Learnt from: jcharkow
PR: OpenMS/streamlit-template#167
File: tests/test_simple_workflow.py:43-55
Timestamp: 2025-03-11T15:23:45.825Z
Learning: In the simple workflow application, input validation for number inputs is handled through Streamlit's min_value and max_value parameters. The x and y dimensions are constrained to values between 1 and 20. When testing invalid inputs in Streamlit, we need to verify that values exceeding the limits are properly clamped to the allowed range.
Note: Learnings are effective only in the context of similar code segments. To apply general coding guidelines, please configure review instructions. You can manage existing learnings in the app.
jcharkow
left a comment
Good progress so far! Please see comments above.
Some additional comments:
- It does not seem that sample.mzML is used so please remove that file.
- These tests are currently not triggered by github actions. I would recommend adding these functions to test_gui.py instead of adding new files. Alternatively if you think it is necessary to add new files please edit .github/ci.yml so these files are also tested
Actionable comments posted: 0
🧹 Nitpick comments (6)
.github/ci.yml (3)
11-11: Remove trailing spaces.
Remove the trailing spaces at the end of this line to address the YAMLlint warning.

```diff
-      # Requirements file generated with python=3.11 
+      # Requirements file generated with python=3.11
```

🧰 Tools
🪛 YAMLlint (1.35.1)
[error] 11-11: trailing spaces
(trailing-spaces)
31-31: Add newline at end of file.
Add a newline character at the end of the file to follow standard file formatting conventions and fix the YAMLlint warning.

```diff
           python -m pytest test_gui.py tests/
+
```

🧰 Tools
🪛 YAMLlint (1.35.1)
[error] 31-31: no new line character at the end of file
(new-line-at-end-of-file)
29-29: Consider adding test result output formats.
For better visibility of test results in GitHub Actions, consider adding options to generate test reports or summaries.

```diff
           pip install pytest
+          pip install pytest-html
       - name: Test
         run: |
-          python -m pytest test_gui.py tests/
+          python -m pytest test_gui.py tests/ --html=test-report.html --self-contained-html
```

tests/test_simple_workflow.py (3)
47-47: Avoid using hard-coded sleep times.
Using `time.sleep()` with hard-coded values makes tests slower than necessary and can lead to flaky tests. Consider using a more reliable approach to wait for UI updates.

```diff
-    time.sleep(5)
+    # Wait for UI to update by polling for the expected state
+    def wait_for_condition(condition_func, timeout=10, interval=0.5):
+        start_time = time.time()
+        while time.time() - start_time < timeout:
+            if condition_func():
+                return True
+            time.sleep(interval)
+        return False
+
+    # Use the waiting function instead of sleep
+    wait_for_condition(lambda: "example-x-dimension" in launch.session_state)
```

This approach can be implemented as a helper function at the module level and reused in all test functions.
Also applies to: 59-59, 83-83
92-92: Add newline at end of file.
Add a newline character at the end of the file to follow standard file formatting conventions.

```diff
 assert download_button or download_component, "Download Table button is missing!"
+
```
43-67: Test additional invalid input scenarios.
The test_invalid_inputs function currently only tests values exceeding the maximum. Consider adding tests for:
- Values below the minimum
- Non-numeric inputs (if applicable)
- Edge cases at exactly the boundary values
```diff
 def test_invalid_inputs(launch):
     """Ensure invalid inputs are handled properly."""
     launch.run(timeout=15)
     time.sleep(5)
     x_input = next((ni for ni in launch.number_input if ni.key == "example-x-dimension"), None)
     y_input = next((ni for ni in launch.number_input if ni.key == "example-y-dimension"), None)
     assert x_input is not None, "X-dimension input not found!"
     assert y_input is not None, "Y-dimension input not found!"
     # Test value exceeding maximum allowed (max is 20)
     x_input.set_value(25)
     y_input.set_value(10)
     launch.run(timeout=10)
     time.sleep(5)
     # Verify the value was clamped to the maximum allowed
     assert launch.session_state["example-x-dimension"] == 20, "X-dimension not clamped to maximum!"
     assert len(launch.dataframe) > 0, "Table not generated!"
     df = launch.dataframe[0].value
     assert df.shape == (20, 10), f"Expected table size (20,10) but got {df.shape}"
+    # Test value below minimum allowed (min is 1)
+    x_input.set_value(-5)
+    y_input.set_value(5)
+    launch.run(timeout=10)
+    time.sleep(5)
+
+    # Verify the value was clamped to the minimum allowed
+    assert launch.session_state["example-x-dimension"] == 1, "X-dimension not clamped to minimum!"
+
+    # Test edge case at exactly the boundary
+    x_input.set_value(1)   # Minimum value
+    y_input.set_value(20)  # Maximum value
+    launch.run(timeout=10)
+    time.sleep(5)
+
+    # Verify boundary values are accepted
+    assert launch.session_state["example-x-dimension"] == 1, "X-dimension minimum not accepted!"
+    assert launch.session_state["example-y-dimension"] == 20, "Y-dimension maximum not accepted!"
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- `.github/ci.yml` (1 hunks)
- `tests/test_run_subprocess.py` (1 hunks)
- `tests/test_simple_workflow.py` (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- tests/test_run_subprocess.py
🧰 Additional context used
🧠 Learnings (1)
tests/test_simple_workflow.py (1)
Learnt from: jcharkow
PR: OpenMS/streamlit-template#167
File: tests/test_simple_workflow.py:43-55
Timestamp: 2025-03-12T15:07:46.046Z
Learning: In the simple workflow application, input validation for number inputs is handled through Streamlit's min_value and max_value parameters. The x and y dimensions are constrained to values between 1 and 20. When testing invalid inputs in Streamlit, we need to verify that values exceeding the limits are properly clamped to the allowed range.
🪛 YAMLlint (1.35.1)
.github/ci.yml
[error] 11-11: trailing spaces
(trailing-spaces)
[error] 31-31: no new line character at the end of file
(new-line-at-end-of-file)
⏰ Context from checks skipped due to timeout of 90000ms (3)
- GitHub Check: build-openms
- GitHub Check: build-simple-app
- GitHub Check: build-full-app
🔇 Additional comments (1)
tests/test_simple_workflow.py (1)
28-29: Use key-based widget access instead of indices.
Using index-based access (`number_input[0]`, `number_input[1]`) makes tests brittle if the order of widgets changes in the UI. For consistency with other tests in this file, use key-based access instead.

```diff
-    launch.number_input[0].set_value(5)
-    launch.number_input[1].set_value(4)
+    x_input = next((ni for ni in launch.number_input if ni.key == "example-x-dimension"), None)
+    y_input = next((ni for ni in launch.number_input if ni.key == "example-y-dimension"), None)
+
+    assert x_input is not None, "X-dimension input not found!"
+    assert y_input is not None, "Y-dimension input not found!"
+
+    x_input.set_value(5)
+    y_input.set_value(4)
```
Actionable comments posted: 0
🧹 Nitpick comments (2)
.github/workflows/ci.yml (2)
5-12: Job Matrix Configuration and Trailing Whitespace Issue
The matrix is set up properly to run on Ubuntu with Python 3.11. However, note that line 11 contains trailing spaces; please remove them to adhere to YAML lint standards.
🧰 Tools
🪛 YAMLlint (1.35.1)
[error] 11-11: trailing spaces
(trailing-spaces)
29-31: Test Step Command Needs Clarification and Newline Fix
The test step runs `python -m pytest test_gui.py tests/`, which may lead to ambiguity. Please confirm whether explicitly listing `test_gui.py` is intentional, given that the PR introduces tests in `test_run_subprocess.py` and `test_simple_workflow.py`. If these are the intended targets, consider updating this command to ensure all relevant tests are executed without duplication.
Additionally, line 31 lacks a newline character at the end of the file; adding one will meet YAML file formatting standards.
🧰 Tools
🪛 YAMLlint (1.35.1)
[error] 31-31: no new line character at the end of file
(new-line-at-end-of-file)
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- `.github/workflows/ci.yml` (1 hunks)
🧰 Additional context used
🪛 YAMLlint (1.35.1)
.github/workflows/ci.yml
[error] 11-11: trailing spaces
(trailing-spaces)
[error] 31-31: no new line character at the end of file
(new-line-at-end-of-file)
🔇 Additional comments (4)
.github/workflows/ci.yml (4)
1-4: CI Workflow Metadata Defined Correctly
The workflow header correctly names the pipeline ("continuous-integration") and triggers on push events.
13-20: Checkout and Miniconda Setup Steps Look Good
The steps for checking out the repository and setting up the Miniconda environment (with the specified channels and environment activation) are implemented correctly.
21-24: Installation of OpenMS is Handled Appropriately
The "Install OpenMS" step uses a straightforward conda installation command, which should work as intended.
25-28: Dependency Installation Step is Effective
Upgrading pip and installing dependencies—including pytest—is well organized. The inline comment regarding dependabot updates is a good practice.
Actionable comments posted: 0
🧹 Nitpick comments (1)
.github/workflows/ci.yml (1)
31-33: Remove extra blank line.
YAMLlint flagged an extra blank line at line 32. Please remove this to adhere to standard YAML formatting practices.
🧰 Tools
🪛 YAMLlint (1.35.1)
[warning] 32-32: too many blank lines
(1 > 0) (empty-lines)
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- `.github/workflows/ci.yml` (1 hunks)
🧰 Additional context used
🪛 YAMLlint (1.35.1)
.github/workflows/ci.yml
[warning] 32-32: too many blank lines
(1 > 0) (empty-lines)
⏰ Context from checks skipped due to timeout of 90000ms (3)
- GitHub Check: build-full-app
- GitHub Check: build-openms
- GitHub Check: build-simple-app
🔇 Additional comments (1)
.github/workflows/ci.yml (1)
29-32: Test command update valid.
The updated test command now runs `pytest` on both `test_gui.py` and the `tests/` directory, ensuring all new tests are executed during CI.
🧰 Tools
🪛 YAMLlint (1.35.1)
[warning] 32-32: too many blank lines
(1 > 0) (empty-lines)
Although the tests are passing above, the new tests that you have added seem to be failing. Please double check this. Note that since you changed the ci.yml file, the new tests will only trigger on your personal branch and not on the PR. Also please fix merge conflicts. This can be done by merging the main branch locally with your branch and then pushing.
jcharkow
left a comment
There was a problem hiding this comment.
Seems that tests are still failing; see the comment above.
Actionable comments posted: 1
🧹 Nitpick comments (2)
tests/test_simple_workflow.py (2)
47-47: Avoid hardcoded time.sleep() callsUsing hardcoded
time.sleep(5)calls makes tests slower and potentially flaky.The Streamlit testing library already provides a timeout parameter in the
run()method. Instead of using explicit sleep calls, rely on the timeout mechanism:- launch.run(timeout=15) - time.sleep(5) + launch.run(timeout=15) # This already waits for components to loadIf you need to wait for specific elements to appear, consider using a polling approach with a reasonable timeout instead of fixed sleep durations.
Also applies to: 60-60, 81-81
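The polling approach suggested above can be made concrete with a small module-level helper. The sketch below is illustrative only, not code from this PR; the name `wait_for` and its default timeout/interval are assumptions:

```python
import time

def wait_for(condition, timeout=10.0, interval=0.25):
    """Poll `condition` until it returns truthy or `timeout` seconds elapse.

    Returns True as soon as the condition holds, False on timeout.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return False

# Example: wait for a key that another step populates (here, already present).
state = {"example-x-dimension": 5}
assert wait_for(lambda: "example-x-dimension" in state)
```

A test would then call something like `wait_for(lambda: len(launch.dataframe) > 0)` in place of a fixed `time.sleep(5)`, returning as soon as the condition is met instead of always paying the full delay.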
86-89: Improve button detection logic
The current approach uses two different methods to locate the download button, which could be simplified.
Consider consolidating the button detection:

```diff
-    download_button = next((btn for btn in launch.button if hasattr(btn, "label") and "Download" in btn.label), None)
-    download_component = next((comp for comp in launch.main if hasattr(comp, "label") and "Download" in comp.label), None)
-
-    assert download_button or download_component, "Download Table button is missing!"
+    # Look for download button across all component types
+    download_element = next(
+        (comp for comp in launch.main if hasattr(comp, "label") and "Download" in comp.label),
+        None
+    )
+
+    assert download_element is not None, "Download Table button is missing!"
```

This approach is more robust as it searches all components, not just those identified specifically as buttons.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- `content/simple_workflow.py` (2 hunks)
- `tests/test_simple_workflow.py` (1 hunks)
🧰 Additional context used
🧠 Learnings (1)
tests/test_simple_workflow.py (1)
Learnt from: jcharkow
PR: OpenMS/streamlit-template#167
File: tests/test_simple_workflow.py:43-55
Timestamp: 2025-03-13T10:26:04.804Z
Learning: In the simple workflow application, input validation for number inputs is handled through Streamlit's min_value and max_value parameters. The x and y dimensions are constrained to values between 1 and 20. When testing invalid inputs in Streamlit, we need to verify that values exceeding the limits are properly clamped to the allowed range.
⏰ Context from checks skipped due to timeout of 90000ms (3)
- GitHub Check: build-simple-app
- GitHub Check: build-full-app
- GitHub Check: build-openms
🔇 Additional comments (4)
content/simple_workflow.py (2)
20-20: Good defensive programming technique
Adding `min(20, params["example-x-dimension"])` ensures that values from the params dictionary are capped at the maximum allowed value, which aligns perfectly with the `max_value=20` constraint already defined for this input.
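For readers unfamiliar with the idiom, this capping pattern can be factored into a tiny helper. This is an illustrative sketch, not code from the PR; `clamp` is a hypothetical name, with defaults mirroring the widget's `min_value`/`max_value` bounds of 1 and 20:

```python
def clamp(value, lo=1, hi=20):
    """Constrain `value` to the inclusive range [lo, hi]."""
    return max(lo, min(hi, value))

# Stale values from a params dict are forced back into the widget's bounds.
print(clamp(25))   # 20 (clamped down to the maximum)
print(clamp(-3))   # 1 (clamped up to the minimum)
print(clamp(7))    # 7 (in range, unchanged)
```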
29-29: Good consistency in constraint handling
Applying the same `min(20, ...)` pattern to the y-dimension ensures consistent behavior across both inputs. This defensive programming approach prevents potential issues with out-of-range values.
tests/test_simple_workflow.py (2)
15-20: Good use of fixture pattern
The fixture for launching the app is well-designed, making it reusable across multiple tests and reducing code duplication.
27-29: Use key-based widget access instead of indexes
Using index-based access to number inputs makes tests brittle if the UI layout changes.
The code in simple_workflow.py does define explicit keys for the inputs. Please modify this test to use key-based access like you do in the other tests:

```diff
-    launch.number_input[0].set_value(5)
-    launch.number_input[1].set_value(4)
+    x_input = next((ni for ni in launch.number_input if ni.key == "example-x-dimension"), None)
+    y_input = next((ni for ni in launch.number_input if ni.key == "example-y-dimension"), None)
+    assert x_input is not None, "X-dimension input not found!"
+    assert y_input is not None, "Y-dimension input not found!"
+    x_input.set_value(5)
+    y_input.set_value(4)
```
tests/test_simple_workflow.py
Outdated
```python
def test_invalid_inputs(launch):
    """Ensure invalid inputs prevent table generation."""
    launch.run(timeout=15)
    time.sleep(5)
    x_input = next((ni for ni in launch.number_input if ni.key == "example-x-dimension"), None)
    y_input = next((ni for ni in launch.number_input if ni.key == "example-y-dimension"), None)
    assert x_input is not None, "X-dimension input not found!"
    assert y_input is not None, "Y-dimension input not found!"
    # Set invalid values
    x_input.set_value(25)
    y_input.set_value(10)
    launch.run(timeout=15)
    time.sleep(5)
    # Check if table is missing
    table = next((tbl for tbl in launch.table), None)
    assert table is None, "Table should not be generated when inputs are invalid!"
```
Fix incorrect test logic for invalid inputs
This test has a critical logical error. When `x_input` is set to 25, it won't prevent table generation because the application code now uses `min(20, params["example-x-dimension"])` to cap the value at 20.
The test expects no table to be generated (`assert table is None`), but with the capping logic, the input becomes valid and a table should be generated. This test directly contradicts the changes made in content/simple_workflow.py.
Based on the learning feedback from jcharkow, modify this test to verify value clamping instead:
```diff
 def test_invalid_inputs(launch):
     """Ensure invalid inputs are properly clamped."""
     launch.run(timeout=15)
-    time.sleep(5)
     x_input = next((ni for ni in launch.number_input if ni.key == "example-x-dimension"), None)
     y_input = next((ni for ni in launch.number_input if ni.key == "example-y-dimension"), None)
     assert x_input is not None, "X-dimension input not found!"
     assert y_input is not None, "Y-dimension input not found!"
     # Set values exceeding maximum
     x_input.set_value(25)
     y_input.set_value(10)
     launch.run(timeout=15)
-    time.sleep(5)
-    # Check if table is missing
-    table = next((tbl for tbl in launch.table), None)
-
-    assert table is None, "Table should not be generated when inputs are invalid!"
+    # Verify values are clamped
+    assert launch.session_state["example-x-dimension"] == 20, "X-dimension not clamped to maximum!"
+
+    # Check that table was generated with clamped dimensions
+    assert len(launch.dataframe) > 0, "Table not generated!"
+    df = launch.dataframe[0].value
+    assert df.shape == (20, 10), f"Expected table size (20,10) but got {df.shape}"
```

Committable suggestion skipped: line range outside the PR's diff.
Actionable comments posted: 1
♻️ Duplicate comments (1)
tests/test_simple_workflow.py (1)
28-29: 🛠️ Refactor suggestion
Use key-based widget access instead of indexes
Using index-based access (`number_input[0]`, `number_input[1]`) makes tests brittle if the order of widgets changes. This has already been pointed out in previous reviews.

```diff
-    launch.number_input[0].set_value(5)
-    launch.number_input[1].set_value(4)
+    x_input = next((ni for ni in launch.number_input if ni.key == "example-x-dimension"), None)
+    y_input = next((ni for ni in launch.number_input if ni.key == "example-y-dimension"), None)
+
+    assert x_input is not None, "X-dimension input not found!"
+    assert y_input is not None, "Y-dimension input not found!"
+
+    x_input.set_value(5)
+    y_input.set_value(4)
```
🧹 Nitpick comments (2)
tests/test_simple_workflow.py (2)
57-57: Avoid hard-coded sleep times
Using `time.sleep(5)` can make tests slow and potentially flaky. Consider using Streamlit's built-in waiting mechanisms or more specific checks for the UI state.

```diff
-    time.sleep(5)
+    # Instead of sleeping, wait for a specific condition
+    # For example, check that the dataframe is populated
+    assert len(launch.dataframe) > 0, "Table not generated after timeout!"
```
62-65: Simplify download button detection
The current implementation uses two different methods to find the download button. Consider consolidating this into a clearer approach.

```diff
-    # Find the "Download Table" button correctly
-    download_button = next((btn for btn in launch.button if hasattr(btn, "label") and "Download" in btn.label), None)
-    download_component = next((comp for comp in launch.main if hasattr(comp, "label") and "Download" in comp.label), None)
-
-    assert download_button or download_component, "Download Table button is missing!"
+    # Find any component that has "Download" in its label
+    download_elements = [comp for comp in launch.main if hasattr(comp, "label") and "Download" in comp.label]
+
+    assert len(download_elements) > 0, "Download Table button is missing!"
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- `content/simple_workflow.py` (2 hunks)
- `tests/test_simple_workflow.py` (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- content/simple_workflow.py
🧰 Additional context used
🧠 Learnings (1)
tests/test_simple_workflow.py (1)
Learnt from: jcharkow
PR: OpenMS/streamlit-template#167
File: tests/test_simple_workflow.py:43-55
Timestamp: 2025-03-15T14:56:47.833Z
Learning: In the simple workflow application, input validation for number inputs is handled through Streamlit's min_value and max_value parameters. The x and y dimensions are constrained to values between 1 and 20. When testing invalid inputs in Streamlit, we need to verify that values exceeding the limits are properly clamped to the allowed range.
⏰ Context from checks skipped due to timeout of 90000ms (3)
- GitHub Check: build-simple-app
- GitHub Check: build-full-app
- GitHub Check: build-openms
🔇 Additional comments (2)
tests/test_simple_workflow.py (2)
5-13: Great documentation with clear test scope!
The docstring provides a clear overview of what these tests verify. This is excellent practice for maintainability.
15-21: LGTM - Well-structured pytest fixture
The fixture properly initializes the testing environment with an appropriate timeout.
```python
import pytest
import time
from streamlit.testing.v1 import AppTest

"""
Tests for the Simple Workflow page functionality.

These tests verify:
- Number input widgets function correctly
- Session state updates properly
- Table generation with correct dimensions
- Download button presence
"""

@pytest.fixture
def launch():
    """Launch the Simple Workflow page for testing."""
    app = AppTest.from_file("content/simple_workflow.py")
    app.run(timeout=15)
    return app

def test_number_inputs(launch):
    """Ensure x and y dimension inputs exist and update correctly."""
    assert len(launch.number_input) >= 2, f"Expected at least 2 number inputs, found {len(launch.number_input)}"
    # Set x and y dimensions
    launch.number_input[0].set_value(5)
    launch.number_input[1].set_value(4)
    launch.run(timeout=10)
    # Validate session state updates
    assert "example-x-dimension" in launch.session_state, "X-dimension key missing in session state!"
    assert "example-y-dimension" in launch.session_state, "Y-dimension key missing in session state!"
    assert launch.session_state["example-x-dimension"] == 5, "X-dimension not updated!"
    assert launch.session_state["example-y-dimension"] == 4, "Y-dimension not updated!"
    assert len(launch.dataframe) > 0, "Table not generated!"
    df = launch.dataframe[0].value
    assert df.shape == (5, 4), f"Expected table size (5,4) but got {df.shape}"
```
💡 Verification agent
❓ Verification inconclusive
Missing test for invalid inputs
The file documentation mentions validating invalid inputs (line 10-11), but the implementation of test_invalid_inputs is missing. This was pointed out in previous reviews.
Based on previous review comments, please add the test_invalid_inputs function:
```python
def test_invalid_inputs(launch):
    """Ensure invalid inputs are properly clamped."""
    launch.run(timeout=15)
    x_input = next((ni for ni in launch.number_input if ni.key == "example-x-dimension"), None)
    y_input = next((ni for ni in launch.number_input if ni.key == "example-y-dimension"), None)
    assert x_input is not None, "X-dimension input not found!"
    assert y_input is not None, "Y-dimension input not found!"
    # Set values exceeding maximum
    x_input.set_value(25)
    y_input.set_value(10)
    launch.run(timeout=15)
    # Verify values are clamped
    assert launch.session_state["example-x-dimension"] == 20, "X-dimension not clamped to maximum!"
    # Check that table was generated with clamped dimensions
    assert len(launch.dataframe) > 0, "Table not generated!"
    df = launch.dataframe[0].value
    assert df.shape == (20, 10), f"Expected table size (20,10) but got {df.shape}"
```

🏁 Script executed:

```bash
#!/bin/bash
# Check if there's a test_invalid_inputs function in another file
grep -r "def test_invalid_inputs" tests/
```

Length of output: 42
Action Required: Add Missing Invalid Input Test
Our verification confirms that the test_invalid_inputs function is not present in the tests. This missing test leaves the invalid input handling unverified, which is a critical oversight.
Please add the following test to ensure invalid inputs are properly clamped:
```python
def test_invalid_inputs(launch):
    """Ensure invalid inputs are properly clamped."""
    launch.run(timeout=15)
    x_input = next((ni for ni in launch.number_input if ni.key == "example-x-dimension"), None)
    y_input = next((ni for ni in launch.number_input if ni.key == "example-y-dimension"), None)
    assert x_input is not None, "X-dimension input not found!"
    assert y_input is not None, "Y-dimension input not found!"
    # Set values exceeding maximum
    x_input.set_value(25)
    y_input.set_value(10)
    launch.run(timeout=15)
    # Verify values are clamped
    assert launch.session_state["example-x-dimension"] == 20, "X-dimension not clamped to maximum!"
    # Check that table was generated with clamped dimensions
    assert len(launch.dataframe) > 0, "Table not generated!"
    df = launch.dataframe[0].value
    assert df.shape == (20, 10), f"Expected table size (20,10) but got {df.shape}"
```

Ensure this test is added in the appropriate test file (e.g., tests/test_simple_workflow.py) to address the critical issue.
Actionable comments posted: 0
♻️ Duplicate comments (1)
tests/test_simple_workflow.py (1)
10-11: ⚠️ Potential issue
Missing test for invalid inputs
The docstring mentions validating invalid inputs, but the implementation of this test is missing from the file.
Based on previous review feedback, please add the `test_invalid_inputs` function:

```python
def test_invalid_inputs(launch):
    """Ensure invalid inputs are properly clamped."""
    launch.run(timeout=15)
    x_input = next((ni for ni in launch.number_input if ni.key == "example-x-dimension"), None)
    y_input = next((ni for ni in launch.number_input if ni.key == "example-y-dimension"), None)
    assert x_input is not None, "X-dimension input not found!"
    assert y_input is not None, "Y-dimension input not found!"
    # Set values exceeding maximum
    x_input.set_value(25)
    y_input.set_value(10)
    launch.run(timeout=15)
    # Verify values are clamped
    assert launch.session_state["example-x-dimension"] == 20, "X-dimension not clamped to maximum!"
    # Check that table was generated with clamped dimensions
    assert len(launch.dataframe) > 0, "Table not generated!"
    df = launch.dataframe[0].value
    assert df.shape == (20, 10), f"Expected table size (20,10) but got {df.shape}"
```
🧹 Nitpick comments (2)
tests/test_simple_workflow.py (2)
49-70: Reduce reliance on static sleep times
While the download button test is generally well-structured, using `time.sleep(5)` can make tests flaky and unnecessarily slow.
Consider removing the static sleep and relying on the `timeout` parameter in the `run()` method:

```diff
-    launch.run(timeout=15)
-    time.sleep(5)
+    launch.run(timeout=15)
```

Streamlit's testing framework should wait for the application to stabilize after changes, making the explicit sleep unnecessary.
68-69: Consider adding button click verification
The test verifies the Download button's presence but doesn't test if clicking it works correctly.
Consider extending the test to verify the button's functionality by simulating a click and checking for expected behavior. While this might require mocking file downloads, it would provide more comprehensive test coverage.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- `tests/test_simple_workflow.py` (1 hunks)
🧰 Additional context used
🧠 Learnings (1)
tests/test_simple_workflow.py (1)
Learnt from: jcharkow
PR: OpenMS/streamlit-template#167
File: tests/test_simple_workflow.py:43-55
Timestamp: 2025-03-15T14:56:47.833Z
Learning: In the simple workflow application, input validation for number inputs is handled through Streamlit's min_value and max_value parameters. The x and y dimensions are constrained to values between 1 and 20. When testing invalid inputs in Streamlit, we need to verify that values exceeding the limits are properly clamped to the allowed range.
⏰ Context from checks skipped due to timeout of 90000ms (3)
- GitHub Check: build-full-app
- GitHub Check: build-openms
- GitHub Check: build-simple-app
🔇 Additional comments (3)
tests/test_simple_workflow.py (3)
1-13: Good test documentation and structure!
The imports and docstring clearly outline the test objectives, providing a solid foundation for the test suite. The documentation effectively communicates what functionality is being tested.
15-20: Clear and well-structured fixture!
The fixture provides a clean setup for the tests with an appropriate timeout value. Using a fixture to initialize the app once and reuse it across tests is an efficient approach.
22-47: Well-implemented test for number inputs!
The test effectively validates that the number inputs exist and update correctly, and verifies the generated table dimensions. Using key-based widget access (rather than index-based) is a good practice that makes tests more resilient to UI changes.
jcharkow
left a comment
Looks good, just some very minor changes.
content/simple_workflow.py
Outdated
```diff
     min_value=1,
     max_value=20,
-    value=params["example-y-dimension"],
+    value=max(1, min(20, params.get("example-y-dimension", 1))),
```
@jcharkow |
jcharkow
left a comment
Looks good! Ready to merge
Here are the test cases for:
- `test_run_subprocess.py`
- `test_simple_workflow.py`
Summary by CodeRabbit
Tests
Chores
These updates aim to deliver a more robust and user-friendly experience.