Mirror of https://github.com/microsoft/autogen.git (synced 2025-12-30 00:30:23 +00:00)
fix: ensure streaming chunks are immediately flushed to console (#6424)
Added `flush=True` to the `aprint` call when handling `ModelClientStreamingChunkEvent` messages, so each chunk is displayed immediately as it arrives.

## Why are these changes needed?

When handling a `ModelClientStreamingChunkEvent` message, streaming chunks weren't guaranteed to be displayed immediately, because Python's stdout may buffer output unless it is explicitly flushed. This could cause a visible delay between when `chunk_event` objects are added to the message queue and when users actually see the content rendered in the console.

## Related issue number

None

## Checks

- [x] I've included any doc changes needed for <https://microsoft.github.io/autogen/>. See <https://github.com/microsoft/autogen/blob/main/CONTRIBUTING.md> to build and test documentation locally.
- [x] I've added tests (if relevant) corresponding to the changes introduced in this PR.
- [x] I've made sure all auto checks have passed.
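For context, the fix hinges on forwarding `flush=True` down to the underlying `print` call. The sketch below uses a minimal, hypothetical async print helper (`aprint_sketch`, not necessarily AutoGen's actual `aprint` implementation) to illustrate why the keyword matters: when printing with `end=""` to a line-buffered terminal, chunks can sit in the stdout buffer until a newline arrives, so `flush=True` is what makes each chunk show up right away.

```python
import asyncio


async def aprint_sketch(text: str, end: str = "\n", flush: bool = False) -> None:
    # Hypothetical helper: run the blocking built-in print in a worker thread
    # so the event loop is not blocked while writing to stdout.
    await asyncio.to_thread(print, text, end=end, flush=flush)


async def demo() -> None:
    # Simulate streaming chunks arriving one at a time. With end="" and no
    # newline, a line-buffered terminal may hold these back; flush=True
    # forces each chunk onto the screen as soon as it is printed.
    for chunk in ["Hel", "lo, ", "wor", "ld!"]:
        await aprint_sketch(chunk, end="", flush=True)
        await asyncio.sleep(0.3)  # stand-in for model latency between chunks
    await aprint_sketch("", end="\n", flush=True)


if __name__ == "__main__":
    asyncio.run(demo())
```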
This commit is contained in:
parent 2792359ef0
commit 7c29704525
```diff
@@ -177,7 +177,7 @@ async def Console(
                     f"{'-' * 10} {message.__class__.__name__} ({message.source}) {'-' * 10}", end="\n", flush=True
                 )
             if isinstance(message, ModelClientStreamingChunkEvent):
-                await aprint(message.to_text(), end="")
+                await aprint(message.to_text(), end="", flush=True)
                 streaming_chunks.append(message.content)
             else:
                 if streaming_chunks:
```
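As a quick way to observe the change in practice, the hedged sketch below wires a streaming agent to `Console`. It assumes the AutoGen AgentChat 0.4-style API (`AssistantAgent`, `OpenAIChatCompletionClient`, `model_client_stream=True`, `run_stream`); treat the exact names and the model id as assumptions rather than a verified snippet from this PR. With the fix, each `ModelClientStreamingChunkEvent` chunk should appear in the terminal as soon as it arrives instead of in buffered bursts.

```python
# Assumed AutoGen AgentChat (v0.4+) API; names may differ in other versions.
import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.ui import Console
from autogen_ext.models.openai import OpenAIChatCompletionClient


async def main() -> None:
    model_client = OpenAIChatCompletionClient(model="gpt-4o-mini")
    agent = AssistantAgent(
        name="assistant",
        model_client=model_client,
        model_client_stream=True,  # emit ModelClientStreamingChunkEvent messages
    )
    # Console prints each streaming chunk with flush=True, so tokens show up
    # in the terminal as soon as they are received.
    await Console(agent.run_stream(task="Write a haiku about buffered output."))


if __name__ == "__main__":
    asyncio.run(main())
```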