[Code smell]: `_extract_reasoning_step` should have returned `current_reasoning` as an `Optional[...]` instead of a `List[...]` #10568
Comments
🤖 Hey @tslmy, great to see you diving into the intricacies of our code again! 🚀 Thank you for your feedback on the `_extract_reasoning_step` method. Here is the relevant code snippet:

```python
def _extract_reasoning_step(
    self, output: ChatResponse, is_streaming: bool = False
) -> Tuple[str, List[BaseReasoningStep], bool]:
    """
    Extracts the reasoning step from the given output.

    This method parses the message content from the output,
    extracts the reasoning step, and determines whether the processing is
    complete. It also performs validation checks on the output and
    handles possible errors.
    """
    if output.message.content is None:
        raise ValueError("Got empty message.")
    message_content = output.message.content
    current_reasoning = []
    try:
        reasoning_step = self._output_parser.parse(message_content, is_streaming)
    except BaseException as exc:
        raise ValueError(f"Could not parse output: {message_content}") from exc
    if self._verbose:
        print_text(f"{reasoning_step.get_content()}\n", color="pink")
    current_reasoning.append(reasoning_step)
    if reasoning_step.is_done:
        return message_content, current_reasoning, True
    reasoning_step = cast(ActionReasoningStep, reasoning_step)
    if not isinstance(reasoning_step, ActionReasoningStep):
        raise ValueError(f"Expected ActionReasoningStep, got {reasoning_step}")
    return message_content, current_reasoning, False
```

You can find this function in the llama_index/agent/react/step.py file. I hope this clarifies the design choice for the `current_reasoning` list.
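For contrast, the `Optional[...]`-returning shape the issue proposes could look something like the following. This is a hypothetical sketch with stand-in names (`ReasoningStep`, `extract_reasoning_step`), not the actual llama_index code:

```python
from dataclasses import dataclass
from typing import Optional, Tuple


@dataclass
class ReasoningStep:
    """Stand-in for llama_index's BaseReasoningStep (hypothetical)."""
    content: str
    is_done: bool = False


def extract_reasoning_step(
    message_content: str,
) -> Tuple[str, Optional[ReasoningStep], bool]:
    """Parse at most one reasoning step from the message.

    Returning Optional[ReasoningStep] encodes "zero or one step" in the
    type itself, whereas List[ReasoningStep] merely implies it.
    """
    if not message_content.strip():
        # Nothing parseable: signal that with None, not an empty list.
        return message_content, None, False
    step = ReasoningStep(
        content=message_content,
        is_done=message_content.startswith("Answer:"),
    )
    return message_content, step, step.is_done
```

The point of the sketch is only the signature: a value that is "an empty list or a singleton list" by construction is usually better expressed as `Optional`.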
@dosu-bot, I think it's more readable to declare that list in an outer function. It's too much responsibility for one function to both declare a container that is used in later stages and construct its initial step. On the other hand, if ...
🤖 Thank you for your suggestion to improve the readability of the LlamaIndex framework. I agree that the `current_reasoning` list could be declared in an outer function. Here is an example of how you might declare it:

```python
def initialize_current_reasoning() -> List[BaseReasoningStep]:
    """Initialize the current reasoning list."""
    return []
```

You would then replace all occurrences of `current_reasoning = []` with a call to `initialize_current_reasoning()`.

As for renaming the function, here is an example of how you might rename `_extract_reasoning_step`:

```python
def _prepare_current_reasoning_container(
    self, output: ChatResponse, is_streaming: bool = False
) -> Tuple[str, List[BaseReasoningStep], bool]:
    """Prepare the current reasoning container."""
    # function body...
```

You would then replace all occurrences of `_extract_reasoning_step` with `_prepare_current_reasoning_container`.

However, these changes should be made carefully to ensure that they do not introduce any bugs or unexpected behavior. It's also important to update any references to these variables or functions throughout the codebase. I hope this helps! If you have any further questions or suggestions, please don't hesitate to ask.
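A runnable sketch of the helper-function idea, with the container owned by the outer scope rather than by the extraction function (the step class and `record_step` are hypothetical stand-ins, not llama_index's API):

```python
from typing import List


class BaseReasoningStep:
    """Stand-in for llama_index's BaseReasoningStep (hypothetical)."""

    def __init__(self, content: str) -> None:
        self.content = content


def initialize_current_reasoning() -> List[BaseReasoningStep]:
    """Initialize the current reasoning list."""
    return []


def record_step(
    current_reasoning: List[BaseReasoningStep], content: str
) -> None:
    """Append one parsed step; the container is owned by the caller."""
    current_reasoning.append(BaseReasoningStep(content))


# The outer scope declares the container once and threads it through,
# so no single inner function both creates the list and fills it.
reasoning = initialize_current_reasoning()
record_step(reasoning, "Thought: search the index")
record_step(reasoning, "Observation: found 3 documents")
```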
Bug Description
Seeing this line:

llama_index/llama_index/agent/react/step.py
Line 235 in e3a169b

I thought 1) `_extract_reasoning_step` was capable of extracting multiple reasoning steps from the `output: ChatResponse`, and 2) we are ruthlessly discarding everything before that. It's only after I read through `_extract_reasoning_step` that I realized `current_reasoning: List[...]` is either an empty list or a singleton list.

I kinda consider this a code smell. `_extract_reasoning_step` should have returned a `current_reasoning: Optional[...]` instead... unless there are some portability considerations?
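At a call site, the difference the report describes might look like this (illustrative stand-ins only; the real caller in step.py is more involved):

```python
from typing import List, Optional


class Step:
    """Hypothetical stand-in for BaseReasoningStep."""


def merge_list_style(returned: List[Step], acc: List[Step]) -> None:
    # List-returning API: the "0 or 1 elements" invariant is invisible
    # to the type checker, and extend() hides it at the call site.
    acc.extend(returned)


def merge_optional_style(returned: Optional[Step], acc: List[Step]) -> None:
    # Optional-returning API: "zero or one" is explicit in the type,
    # and the append is visible where it happens.
    if returned is not None:
        acc.append(returned)
```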
Version
e3a169b
Steps to Reproduce
Not a bug.
Relevant Logs/Tracebacks
No response