
Add missing documentation for exponentiation methods #194

Closed

ChrisRackauckas-Claude wants to merge 2 commits into SciML:master from ChrisRackauckas-Claude:add-missing-documentation

Conversation

@ChrisRackauckas-Claude
Contributor

Summary

  • Added comprehensive documentation for previously undocumented exponentiation methods
  • Added clarifying comments for internal helper functions
  • Formatted code with JuliaFormatter (SciMLStyle)

Details

This PR adds documentation for several previously undocumented functions:

Public API Functions

  • exponential!(A::GPUArraysCore.AbstractGPUArray): GPU-specific exponential function
  • expv(t, Ks::KrylovSubspace): Compute exp(tA)b using precomputed Krylov subspace
  • expv!(w::AbstractVector{Complex{Tw}}, ...): Complex version of expv!
  • expv!(w::GPUArraysCore.AbstractGPUVector{Tw}, ...): GPU-optimized expv!
  • phiv(t, Ks::KrylovSubspace, k): Compute phi functions using precomputed Krylov subspace
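The point of the `KrylovSubspace` variants is that the (expensive) Arnoldi iteration can be run once and reused across several evaluations. A minimal sketch of that workflow, assuming the `arnoldi(A, b; m = ...)` constructor exported by ExponentialUtilities alongside the `expv`/`phiv` methods documented here:

```julia
using ExponentialUtilities, LinearAlgebra

n = 100
A = randn(n, n) - n * I   # shifted so exp(t*A)*b stays well behaved
b = randn(n)
t = 0.1

# Build the Krylov subspace once...
Ks = arnoldi(A, b; m = 30)       # m: subspace dimension

# ...then reuse it for multiple evaluations
w = expv(t, Ks)                  # ≈ exp(t*A) * b
W = phiv(t, Ks, 2)               # columns ≈ phi_0(t*A)b, phi_1(t*A)b, phi_2(t*A)b
```

Here `m = 30` and the shift are illustrative choices, not values taken from the PR.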

Internal Functions (comments added for clarity)

  • exp_gen! functions: Internal Padé approximation implementations (orders 1-13)
  • getmem, ldiv_for_generated!: Internal helper functions for generated code
  • _expv_hb, _expv_ee: Internal helper functions for Krylov methods

Test Results

All tests pass successfully:

  • Quality Assurance: 10/10 tests pass
  • Basic Tests: 318/319 tests pass (1 broken test was pre-existing)

Notes

  • All docstrings follow Julia documentation conventions
  • Formatting applied with JuliaFormatter using SciMLStyle
  • No functional changes made, only documentation improvements

🤖 Generated with Claude Code

claude added 2 commits August 19, 2025 16:33
This commit adds documentation for several previously undocumented functions:
- exponential!(A::GPUArraysCore.AbstractGPUArray): GPU-specific exponential function
- expv(t, Ks::KrylovSubspace): Compute exp(tA)b using precomputed Krylov subspace
- expv!(w::AbstractVector{Complex{Tw}}, ...): Complex version of expv!
- expv!(w::GPUArraysCore.AbstractGPUVector{Tw}, ...): GPU-optimized expv!
- phiv(t, Ks::KrylovSubspace, k): Compute phi functions using precomputed Krylov subspace

Also added clarifying comments for internal helper functions:
- exp_gen! functions: Internal Padé approximation implementations
- getmem, ldiv_for_generated!: Internal helper functions for generated code
- _expv_hb, _expv_ee: Internal helper functions for Krylov methods

All docstrings follow Julia documentation conventions with proper formatting.
Tests pass successfully after these documentation additions.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
Added comprehensive docstring to exp_gen!(cache, A, ::Val{1}) explaining:
- Purpose: Internal Padé approximation implementation
- Parameters and their meanings
- Connection to Higham (2005) algorithm
- Note that these are internal functions

Added brief reference comments to exp_gen! functions for orders 2-13
pointing to the main documentation in order 1.

All exp_gen! functions now have proper documentation explaining their
role in the matrix exponential computation.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
@ChrisRackauckas-Claude
Contributor Author

I've added additional documentation for the exp_gen! functions as suggested:

  • Added a comprehensive docstring to exp_gen!(cache, A, ::Val{1}) explaining the purpose, parameters, and connection to the Higham (2005) algorithm
  • Added reference comments to all other exp_gen! functions (orders 2-13) pointing to the main documentation

All 13 exp_gen! functions now have proper documentation. Tests continue to pass successfully.
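Based on the description above (purpose, parameters, the Higham (2005) connection, and the internal-API note), a hypothetical sketch of the shape such a docstring takes, following standard Julia docstring conventions:

```julia
"""
    exp_gen!(cache, A, ::Val{1})

Internal order-1 Padé approximation step for the matrix exponential of `A`,
writing intermediate results into `cache`. Part of the scaling-and-squaring
algorithm of Higham (2005). Not part of the public API.
"""
```

The exact wording in the PR may differ; only the signature and the documented topics are taken from this conversation.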

