I would like a feature, in both the Python package and the GUI, where a single loaded LLM can process multiple requests concurrently (with no queue) when enough hardware resources are available.
I have heard that llama.cpp supports this (parallel decoding / continuous batching), but I could not find an equivalent option in LM Studio.
Using AsyncOpenAI against the current version does not help: the requests still get queued and answered one at a time!
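For reference, this is roughly the client-side pattern I have in mind (a minimal sketch; the base URL, port, API key, and model name are placeholders for a local LM Studio-style server and should be adjusted). Today, even though the requests are sent concurrently, they appear to be processed sequentially by the server:

```python
import asyncio
from openai import AsyncOpenAI

# Placeholder endpoint and credentials for a local server (adjust as needed).
client = AsyncOpenAI(base_url="http://localhost:1234/v1", api_key="not-needed")

async def ask(prompt: str) -> str:
    # Each call is an independent request; ideally the server would decode
    # several of these in parallel when enough VRAM/compute is available.
    response = await client.chat.completions.create(
        model="local-model",  # placeholder model identifier
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

async def main() -> None:
    prompts = ["Summarize document A", "Summarize document B", "Summarize document C"]
    # The requests are issued concurrently on the client side, but currently
    # the server still queues them and answers one at a time.
    answers = await asyncio.gather(*(ask(p) for p in prompts))
    for prompt, answer in zip(prompts, answers):
        print(prompt, "->", answer)

asyncio.run(main())
```

If I understand correctly, llama.cpp's server already handles this kind of workload through its parallel slots / continuous batching settings, so exposing a similar setting in LM Studio would cover this use case.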