Not sure why other people use stream mode, but for me it's to start reading as soon as the first token is received.
Currently the viewer auto-scrolls down, and it starts scrolling before I can finish reading the first sentence.
IMO, it'd be better to not auto-scroll and only re-render the message as markdown once all data is received.
Live render the markdown
Regardless of where the scroll position is before the markdown is rendered, it's always jarring to see the screen suddenly change. It'd be better to live render the message, just as most modern AI chat apps do.
I'm not sure how feasible this is, though, given that KOReader mostly runs on low-compute e-ink devices. As a compromise, maybe we can throttle the update loop by time or token count, with an interval long enough to avoid performance and visual issues (see the sketch below)? Or just render the first page and re-render after all data is received?
My use case is simply to start reading the first page instead of waiting for the whole response.
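To make the compromise above concrete, here is a minimal sketch of what such throttling could look like, in plain Lua (the language KOReader plugins are written in). All names are hypothetical (`StreamThrottle`, `onToken`) and the 2-second / 40-token thresholds are arbitrary; this is not based on the plugin's actual code.

```lua
-- Hypothetical throttle helper: allow a screen refresh only when enough
-- time has passed or enough new tokens have arrived since the last one.
local StreamThrottle = {}
StreamThrottle.__index = StreamThrottle

function StreamThrottle.new(min_interval_s, min_tokens)
    return setmetatable({
        min_interval_s = min_interval_s or 2,  -- at most one refresh every 2 s...
        min_tokens     = min_tokens or 40,     -- ...or every 40 tokens, whichever comes first
        last_refresh   = 0,
        pending_tokens = 0,
    }, StreamThrottle)
end

-- Call this for every token received from the stream; returns true when
-- the accumulated text should be re-rendered on screen.
function StreamThrottle:onToken()
    self.pending_tokens = self.pending_tokens + 1
    local now = os.time()
    if now - self.last_refresh >= self.min_interval_s
            or self.pending_tokens >= self.min_tokens then
        self.last_refresh = now
        self.pending_tokens = 0
        return true
    end
    return false
end

return StreamThrottle
```

The idea is that the streaming callback would feed every incoming token to `onToken()` and only repaint the viewer when it returns true, so a slow e-ink screen refreshes a few times per response instead of on every token.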
A non-auto-scroll mode has been added in commit 71d8b0c; unchecking the switch in the settings dialog disables auto-scroll. The idea is nice, and it is especially useful with fast-output LLMs: on my old Kobo device the screen would become completely glitchy and unreadable during rapid updates. Newer devices with faster e-ink refresh rates, however, can handle rapid text updates while remaining readable :)
Live rendering still seems like a distant dream, I think. The smooth experience you see on other platforms comes from their use of powerful browser engines, which natively support dynamic rendering through JavaScript DOM manipulation. KOReader, on the other hand, is built on a lightweight rendering engine, mupdf: it's designed for static content, doesn't include JavaScript at all, and lacks DOM manipulation capabilities.