
Conversation


@sabilmakbar sabilmakbar commented Feb 11, 2025

Closes #198

@sabilmakbar changed the title from "fix cache_kwargs on inference using cache" to "Closes #198 | fix cache_kwargs on inference using cache" on Feb 11, 2025
@AbhisarJ

I still get the error: `AttributeError: 'StaticCache' object has no attribute 'max_batch_size'`.
Please help me out; I am trying to follow the compilation approach to optimise inference.
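
For context, the compile-based inference path that hits this error looks roughly like the sketch below (the checkpoint id, prompts, and exact generation kwargs are illustrative, not taken from the report above):

```python
import torch
from transformers import AutoTokenizer
from parler_tts import ParlerTTSForConditionalGeneration

device = "cuda:0" if torch.cuda.is_available() else "cpu"
repo_id = "parler-tts/parler-tts-mini-v1"  # illustrative checkpoint

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = ParlerTTSForConditionalGeneration.from_pretrained(repo_id).to(device)

# torch.compile needs a static KV cache; this is the code path where the
# StaticCache kwargs are assembled and where the AttributeError surfaces on
# affected parler-tts / transformers version combinations.
model.generation_config.cache_implementation = "static"
model.forward = torch.compile(model.forward, mode="default")

description = tokenizer("A calm female voice with clear audio.", return_tensors="pt").to(device)
prompt = tokenizer("Hello, this is a test.", return_tensors="pt").to(device)

# On affected versions this call raises:
#   AttributeError: 'StaticCache' object has no attribute 'max_batch_size'
audio = model.generate(
    input_ids=description.input_ids,
    prompt_input_ids=prompt.input_ids,
)
```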

@sabilmakbar (Author)

Sorry @AbhisarJ, I didn't see your message.

Would you mind reinstalling your parler-tts package from my forked repo (https://github.com/sabilmakbar/parler-tts-patch-fix.git) and seeing whether that fixes it for you as well? It works on my end, and this would confirm that the proposed solution is reproducible.
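
(For reference, reinstalling from the fork is typically something along the lines of `pip uninstall parler-tts` followed by `pip install git+https://github.com/sabilmakbar/parler-tts-patch-fix.git`; the package name is assumed here, so adjust it to match however parler-tts was originally installed.)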

Let me know if you need assistance with that. If it works for you as well, I'll ping the maintainers for the PR review.


Successfully merging this pull request may close the following issue: Deprecated kwargs in `_get_config` method for model generation with caching.