Auto-scrolling in chat can fight against user scrolling #1988

@wch

Description

When content is streaming into the chat UI, the chat auto-scrolls downward. If the user scrolls up, but not quickly enough, the programmatic downward scroll and the manual upward scroll happen at the same time, and the view bounces up and down very rapidly.

(Video: shiny-chat-scroll-half.mp4)

Live app

To stop the bouncing that happens when the manual upward scroll and the programmatic downward smooth scroll overlap and fight each other, we can do something like the following:

  • If the chat is currently scrolled to the bottom, set stickToBottom to true. While this is true, each time content streams in, call el.scroll({ top: el.scrollHeight, behavior: "smooth" });.
  • Listen for scroll events. Note that these events are triggered by both manual and automatic scrolling.
    • If the scroll is upward, it must have been triggered manually by the user. This can be detected by comparing the element's scrollTop to its value on the previous scroll event. When an upward scroll happens, stop the downward smooth scroll immediately with an instant scroll to the current position, and set stickToBottom to false. This stops the automatic scrolling.
    • If the scroll position is at the bottom, set stickToBottom back to true, so the chat keeps scrolling to the bottom as content streams in. A sketch combining these steps appears after the snippet below.

Code for instant scroll to current position:

// Force immediate scroll to current position, which will cause smooth
// scrolling to stop.
element.scrollTo({
  top: element.scrollTop,
  behavior: "instant",
});
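
Putting these pieces together, a minimal sketch of the scroll handling might look like the following. The chatEl selector, the onContentStreamed hook, and the 2-pixel at-bottom tolerance are assumptions for illustration, not the actual implementation:

// Assumed element reference and threshold; names are illustrative only.
const chatEl = document.querySelector("#chat");

let stickToBottom = true;            // assume the chat starts at the bottom
let lastScrollTop = chatEl.scrollTop;

chatEl.addEventListener("scroll", () => {
  const scrolledUp = chatEl.scrollTop < lastScrollTop;
  const atBottom =
    chatEl.scrollHeight - chatEl.scrollTop - chatEl.clientHeight < 2;

  if (scrolledUp) {
    // Upward scrolls only come from the user: cancel any in-flight smooth
    // scroll with an instant scroll to the current position, and stop
    // following the stream.
    chatEl.scrollTo({ top: chatEl.scrollTop, behavior: "instant" });
    stickToBottom = false;
  } else if (atBottom) {
    // Back at the bottom: resume auto-scrolling as content streams in.
    stickToBottom = true;
  }

  lastScrollTop = chatEl.scrollTop;
});

// Hypothetical hook: call this whenever a new chunk of streamed content is
// appended to the chat.
function onContentStreamed() {
  if (stickToBottom) {
    chatEl.scrollTo({ top: chatEl.scrollHeight, behavior: "smooth" });
  }
}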

Code for the demo app linked above:

import asyncio
from typing import AsyncGenerator

from shiny import App, Inputs, Outputs, Session, ui

app_ui = ui.page_fillable(
    ui.panel_title("Lorem Ipsum Shiny Chat"),
    ui.chat_ui("chat"),
    fillable_mobile=True,
)

welcome = """
Say anything and I will respond with lorem ipsum...
"""


async def lorem_ipsum_generator(n: int = 100) -> AsyncGenerator[str, None]:
    """Generate a stream of lorem ipsum words."""
    words = """lorem ipsum dolor sit amet consectetur adipiscing elit sed do
        eiusmod tempor incididunt ut labore et dolore magna aliqua ut enim ad
        minim veniam quis nostrud exercitation ullamco laboris nisi ut aliquip
        ex ea commodo consequat.""".split()

    word_count = len(words)
    for i in range(n):
        word = words[i % word_count]  # cycle through words

        # Occasionally yield a block of text all at once
        if (i / 2) % word_count == word_count - 1:
            yield "\n```\nline 1\nline 2\nline 3\nline 4\nline 5\nline 6\nline 7\nline 8\nline 9\nline 10\nline 11\nline 12\n```\n"

        await asyncio.sleep(1 / 30)
        yield word + " "


def server(input: Inputs, output: Outputs, session: Session):
    chat = ui.Chat(id="chat", messages=[welcome])

    # Define a callback to run when the user submits a message
    @chat.on_user_submit
    async def handle_user_input(user_input: str):
        # Append a response to the chat
        stream = lorem_ipsum_generator(1000)

        await chat.append_message_stream(stream)


app = App(app_ui, server)
