Conversation

@aseembits93 (Contributor) commented May 9, 2025

User description

Test generation should take significantly less time thanks to a more streamlined call to the aiservice.


PR Type

Enhancement


Description

  • Log optimization duration in aiservice

  • Measure concolic tests generation time

  • Record and log test generation duration

  • Add time imports for performance tracking
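
Each bullet applies the same direct time.perf_counter() measurement pattern, which appears verbatim in the review snippets further down. A minimal, self-contained sketch of that pattern (the logging setup here is illustrative, not the project's):

    import logging
    import time

    logging.basicConfig(level=logging.INFO)
    logger = logging.getLogger(__name__)

    start_time = time.perf_counter()
    time.sleep(0.1)  # stand-in for the timed work (AI request, test generation)
    end_time = time.perf_counter()
    logger.info(f"Step took {end_time - start_time:.2f} seconds.")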


Changes walkthrough 📝

Relevant files (Enhancement)

aiservice.py (codeflash/api/aiservice.py): Timing logs for code optimization (+5/-0)

  • Imported time module
  • Recorded start_time before request
  • Measured end_time after response
  • Logged duration of optimize_python_code

concolic_testing.py (codeflash/verification/concolic_testing.py): Timing logs for concolic testing (+5/-0)

  • Imported time module
  • Started timer in generate_concolic_tests
  • Measured end timer at function end
  • Logged concolic tests generation time

verifier.py (codeflash/verification/verifier.py): Timing logs for test generation (+5/-1)

  • Imported time module
  • Recorded start_time at generate_tests start
  • Measured end_time after test generation
  • Logged test generation duration

@github-actions bot commented May 9, 2025

PR Reviewer Guide 🔍

Here are some key observations to aid the review process:

⏱️ Estimated effort to review: 2 🔵🔵⚪⚪⚪
🧪 No relevant tests
🔒 No security concerns identified
⚡ Recommended focus areas for review

Timing reliability

The timing measurement in optimize_python_code is not wrapped in a try/finally block, so exceptions prevent logging of the end time.

    start_time = time.perf_counter()
    payload = {
        "source_code": source_code,
        "dependency_code": dependency_code,
        "num_variants": num_candidates,
        "trace_id": trace_id,
        "python_version": platform.python_version(),
        "experiment_metadata": experiment_metadata,
        "codeflash_version": codeflash_version,
    }
    
    logger.info("Generating optimized candidates…")
    console.rule()
    try:
        response = self.make_ai_service_request("/optimize", payload=payload, timeout=600)
    except requests.exceptions.RequestException as e:
        logger.exception(f"Error generating optimized candidates: {e}")
        ph("cli-optimize-error-caught", {"error": str(e)})
        return []
    
    if response.status_code == 200:
        optimizations_json = response.json()["optimizations"]
        logger.info(f"Generated {len(optimizations_json)} candidates.")
        console.rule()
        end_time = time.perf_counter()
        logger.info(f"Optimization took {end_time - start_time:.2f} seconds.")
        return [
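
A minimal sketch of the suggested fix, assuming nothing beyond the standard library (the timed_call name and logging setup are hypothetical, not the merged code): moving the second perf_counter() read into a finally clause guarantees the duration is logged even when the request raises.

    import logging
    import time

    logging.basicConfig(level=logging.INFO)
    logger = logging.getLogger(__name__)

    def timed_call(label, fn, *args, **kwargs):
        # Hypothetical helper: the finally block runs on every exit path,
        # including exceptions raised by fn, so the log is never skipped.
        start_time = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            end_time = time.perf_counter()
            logger.info(f"{label} took {end_time - start_time:.2f} seconds.")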

Incomplete timing

generate_concolic_tests may return early or error out without executing the end_time log; consider using a try/finally to ensure consistent timing logs.

    start_time = time.perf_counter()
    function_to_concolic_tests = {}
    concolic_test_suite_code = ""
    if (
        test_cfg.concolic_test_root_dir
        and isinstance(function_to_optimize_ast, (ast.FunctionDef, ast.AsyncFunctionDef))
        and has_typed_parameters(function_to_optimize_ast, function_to_optimize.parents)
    ):
        logger.info("Generating concolic opcode coverage tests for the original code…")
        console.rule()
        try:
            cover_result = subprocess.run(
                [
                    SAFE_SYS_EXECUTABLE,
                    "-m",
                    "crosshair",
                    "cover",
                    "--example_output_format=pytest",
                    "--per_condition_timeout=64",
                    ".".join(
                        [
                            function_to_optimize.file_path.relative_to(args.project_root)
                            .with_suffix("")
                            .as_posix()
                            .replace("/", "."),
                            function_to_optimize.qualified_name,
                        ]
                    ),
                ],
                capture_output=True,
                text=True,
                cwd=args.project_root,
                check=False,
                timeout=600,
            )
        except subprocess.TimeoutExpired:
            logger.debug("CrossHair Cover test generation timed out")
            return function_to_concolic_tests, concolic_test_suite_code
    
        if cover_result.returncode == 0:
            generated_concolic_test: str = cover_result.stdout
            concolic_test_suite_code: str = clean_concolic_tests(generated_concolic_test)
            concolic_test_suite_dir = Path(tempfile.mkdtemp(dir=test_cfg.concolic_test_root_dir))
            concolic_test_suite_path = concolic_test_suite_dir / "test_concolic_coverage.py"
            concolic_test_suite_path.write_text(concolic_test_suite_code, encoding="utf8")
    
            concolic_test_cfg = TestConfig(
                tests_root=concolic_test_suite_dir,
                tests_project_rootdir=test_cfg.concolic_test_root_dir,
                project_root_path=args.project_root,
                test_framework=args.test_framework,
                pytest_cmd=args.pytest_cmd,
            )
            function_to_concolic_tests = discover_unit_tests(concolic_test_cfg)
            num_discovered_concolic_tests: int = sum([len(value) for value in function_to_concolic_tests.values()])
            logger.info(
                f"Created {num_discovered_concolic_tests} "
                f"concolic unit test case{'s' if num_discovered_concolic_tests != 1 else ''} "
            )
            console.rule()
            ph("cli-optimize-concolic-tests", {"num_tests": num_discovered_concolic_tests})
    
        else:
            logger.debug(f"Error running CrossHair Cover {': ' + cover_result.stderr if cover_result.stderr else '.'}")
            console.rule()
    end_time = time.perf_counter()
    logger.info(f"Generated concolic tests in {end_time - start_time:.2f} seconds")

Missing error-path logging

The end_time log in generate_tests is only reached on the successful path; failures or exceptions won’t report durations.

    start_time = time.perf_counter()
    test_module_path = Path(module_name_from_file_path(test_path, test_cfg.tests_project_rootdir))
    response = aiservice_client.generate_regression_tests(
        source_code_being_tested=source_code_being_tested,
        function_to_optimize=function_to_optimize,
        helper_function_names=helper_function_names,
        module_path=module_path,
        test_module_path=test_module_path,
        test_framework=test_cfg.test_framework,
        test_timeout=test_timeout,
        trace_id=function_trace_id,
        test_index=test_index,
    )
    if response and isinstance(response, tuple) and len(response) == 3:
        generated_test_source, instrumented_behavior_test_source, instrumented_perf_test_source = response
        temp_run_dir = get_run_tmp_file(Path()).as_posix()
    
        instrumented_behavior_test_source = instrumented_behavior_test_source.replace(
            "{codeflash_run_tmp_dir_client_side}", temp_run_dir
        )
        instrumented_perf_test_source = instrumented_perf_test_source.replace(
            "{codeflash_run_tmp_dir_client_side}", temp_run_dir
        )
    else:
        logger.warning(f"Failed to generate and instrument tests for {function_to_optimize.function_name}")
        return None
    end_time = time.perf_counter()
    logger.info(f"Generated tests in {end_time - start_time:.2f} seconds")

@github-actions bot commented May 9, 2025

PR Code Suggestions ✨

No code suggestions found for the PR.

github-actions bot added the workflow-modified label (This PR modifies GitHub Actions workflows) May 12, 2025
aseembits93 merged commit e266864 into main May 12, 2025 (16 checks passed)
aseembits93 deleted the faster-testgen branch May 12, 2025 23:34