[Question]: Returning more source nodes #10160
Comments
🤖 Hello @aminolroayaXgen! I'm Dosu, a friendly bot here to help with your questions and issues while we wait for a human maintainer. I'm ready and eager to assist you with your LlamaIndex queries. Let's navigate through this together!

To increase the number of returned nodes, you can raise the `max_top_k` and `similarity_top_k` parameters of the `VectorIndexAutoRetriever`. Here is how you can do it:

```python
retriever = VectorIndexAutoRetriever(
    index=my_index,
    vector_store_info=my_vector_store_info,
    max_top_k=50,         # Increase this value as per your requirement
    similarity_top_k=20,  # This is your current setting
)
```

Please note that the effective top-k is capped by both `max_top_k` and `similarity_top_k`. This is supported by the following code snippet from the context:

```python
# if query_spec.top_k is specified, then use it
# as long as below max_top_k and similarity_top_k
if spec.top_k is not None:
    similarity_top_k = min(spec.top_k, self._max_top_k, similarity_top_k)
```

You can find more details in the `VectorIndexAutoRetriever` class in the LlamaIndex repository.

This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research. If you want to continue the conversation, start your reply with @dosu-bot.
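The clamping rule quoted above can be sketched as a runnable function. The name `effective_top_k` is my own stand-in, not the actual `VectorIndexAutoRetriever` API; it only mirrors the `min()` clamp from the snippet:

```python
# Sketch of the clamping behavior quoted above (function name is hypothetical).
def effective_top_k(spec_top_k, max_top_k, similarity_top_k):
    """If a query-time top_k is given, clamp it by max_top_k and similarity_top_k."""
    if spec_top_k is not None:
        return min(spec_top_k, max_top_k, similarity_top_k)
    return similarity_top_k

print(effective_top_k(None, 50, 20))  # 20: falls back to similarity_top_k
print(effective_top_k(100, 50, 20))   # 20: clamped by similarity_top_k
print(effective_top_k(10, 50, 20))    # 10: query top_k wins when it is smallest
```

Note that under this clamp, raising `max_top_k` beyond `similarity_top_k` has no effect on its own; `similarity_top_k` remains the binding limit.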
@aminolroayaXgen it's working with the following code. You can check the screenshot that it's returning 15 nodes. For debugging, it would be better to share your code.
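Since the screenshot does not survive in this thread, here is a minimal stand-in for the check being described. `FakeResponse` is a placeholder class, not a LlamaIndex type; in LlamaIndex you would call `chat_engine.chat(...)` and then inspect `len(response.source_nodes)`:

```python
# Placeholder sketch: a stub response object stands in for the real one so the
# node-count check itself is clear.
class FakeResponse:
    def __init__(self, source_nodes):
        self.source_nodes = source_nodes

# Simulate a response carrying 15 retrieved nodes, as in the screenshot.
response = FakeResponse(source_nodes=[f"node-{i}" for i in range(15)])
print(len(response.source_nodes))  # 15
```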
Thanks @ravi03071991. Now it retrieves the nodes, but the response time has increased roughly tenfold: it used to be around 4 s and now it is 20 s. Maybe it is because I am dumping JSON files as documents when indexing, and I don't want to chunk them. Is there a way to directly retrieve document fields instead of nodes? BTW, the nodes do not have metadata currently.
@aminolroayaXgen https://docs.llamaindex.ai/en/latest/examples/query_engine/json_query_engine.html - did you check this? This might help. Another workaround is to create nodes by reading the JSON files one after the other and use those.
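The per-file workaround above can be sketched with the standard library alone: build one document per JSON file so retrieval returns whole files rather than chunks. The function name, folder layout, and the `text`/`metadata` dict keys are assumptions for illustration, not the LlamaIndex `Document` API:

```python
# Hypothetical sketch: one document per JSON file, no chunking.
import json
import pathlib

def load_json_documents(folder):
    """Read every *.json file in `folder` and wrap each whole file as one document."""
    docs = []
    for path in sorted(pathlib.Path(folder).glob("*.json")):
        data = json.loads(path.read_text())
        docs.append({
            "text": json.dumps(data),           # the full file as a single unit
            "metadata": {"source": path.name},  # keep the origin for later lookup
        })
    return docs
```

Keeping the source filename in metadata also addresses the earlier comment that the nodes currently carry no metadata.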
Hi, @aminolroayaXgen, I'm helping the LlamaIndex team manage their backlog and am marking this issue as stale. From what I understand, you were experiencing a limitation with the number of nodes returned in `response.source_nodes`. Could you please confirm if this issue is still relevant to the latest version of the LlamaIndex repository? If it is, please let the LlamaIndex team know by commenting on the issue. Otherwise, feel free to close the issue yourself, or the issue will be automatically closed in 7 days. Thank you for your understanding and contribution to the LlamaIndex project. If you have any further questions or need assistance, feel free to reach out.
Question Validation
Question
With llama-index 0.8.62, I want to use document fields from the highest-scoring nodes that are retrieved and used by the LLama chat engine to answer queries. However, the number of nodes returned in
response.source_nodes
is very low (about one or two nodes), considering I have 300 nodes in total. I also passed a retriever with a similarity top-k of 20 to my chat engine; however, no change in the number of returned nodes occurred. Is there any solution for returning more nodes? Thanks.
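What `similarity_top_k` controls can be illustrated with a self-contained top-k selection over similarity scores. The node names and score values here are made up for illustration:

```python
# Illustrative sketch: keep only the k highest-scoring nodes, which is what a
# similarity top-k retriever does after scoring candidates.
import heapq

scores = {"n1": 0.91, "n2": 0.72, "n3": 0.88, "n4": 0.40, "n5": 0.85}
top3 = heapq.nlargest(3, scores.items(), key=lambda kv: kv[1])
print([name for name, _ in top3])  # ['n1', 'n3', 'n5']
```

With only 300 nodes total and a top-k of 20, getting one or two source nodes back suggests the limit is being applied (or overridden) elsewhere in the chat engine, which is why the thread above asks to see the full code.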