
[Bug]: map_httpcore_exceptions while invoking query_engine.query in colab #12670

Open
sunilnagpal opened this issue Apr 9, 2024 · 16 comments
Labels: bug (Something isn't working), triage (Issue needs to be triaged/prioritized)

@sunilnagpal
Bug Description

New to llama_index. Thank you for all the efforts towards developing and maintaining it. I'm trying to reproduce the starter example in Colab using my own tabulated data. It works fine until I try to query.

The following works fine:

from llama_index.core import VectorStoreIndex, SimpleDirectoryReader, Settings
from llama_index.core.embeddings import resolve_embed_model
from llama_index.llms.ollama import Ollama

documents = SimpleDirectoryReader("data").load_data()

# bge embedding model
Settings.embed_model = resolve_embed_model("local:BAAI/bge-small-en-v1.5")

# ollama
Settings.llm = Ollama(model="mistral", request_timeout=30.0)

index = VectorStoreIndex.from_documents(
    documents,
)
query_engine = index.as_query_engine()

An error is thrown when trying to query:
response = query_engine.query("What did the author write about?")

I have shared the error in the logs section.

Version

llama-index-0.10.27

Steps to Reproduce

Trying to reproduce the local starter example from https://docs.llamaindex.ai/en/stable/getting_started/starter_example_local/ in Colab.

Relevant Logs/Tracebacks

Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/httpx/_transports/default.py", line 69, in map_httpcore_exceptions
    yield
  File "/usr/local/lib/python3.10/dist-packages/httpx/_transports/default.py", line 233, in handle_request
    resp = self._pool.handle_request(req)
  File "/usr/local/lib/python3.10/dist-packages/httpcore/_sync/connection_pool.py", line 216, in handle_request
    raise exc from None
  File "/usr/local/lib/python3.10/dist-packages/httpcore/_sync/connection_pool.py", line 196, in handle_request
    response = connection.handle_request(
  File "/usr/local/lib/python3.10/dist-packages/httpcore/_sync/connection.py", line 99, in handle_request
    raise exc
  File "/usr/local/lib/python3.10/dist-packages/httpcore/_sync/connection.py", line 76, in handle_request
    stream = self._connect(request)
  File "/usr/local/lib/python3.10/dist-packages/httpcore/_sync/connection.py", line 122, in _connect
    stream = self._network_backend.connect_tcp(**kwargs)
  File "/usr/local/lib/python3.10/dist-packages/httpcore/_backends/sync.py", line 205, in connect_tcp
    with map_exceptions(exc_map):
  File "/usr/lib/python3.10/contextlib.py", line 153, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/usr/local/lib/python3.10/dist-packages/httpcore/_exceptions.py", line 14, in map_exceptions
    raise to_exc(exc) from exc
httpcore.ConnectError: [Errno 99] Cannot assign requested address

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/content/starter.py", line 18, in <module>
    response = query_engine.query("What did the author write about?")
  File "/usr/local/lib/python3.10/dist-packages/llama_index/core/instrumentation/dispatcher.py", line 211, in wrapper
    result = func(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/llama_index/core/base/base_query_engine.py", line 53, in query
    query_result = self._query(str_or_query_bundle)
  File "/usr/local/lib/python3.10/dist-packages/llama_index/core/instrumentation/dispatcher.py", line 211, in wrapper
    result = func(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/llama_index/core/query_engine/retriever_query_engine.py", line 190, in _query
    response = self._response_synthesizer.synthesize(
  File "/usr/local/lib/python3.10/dist-packages/llama_index/core/instrumentation/dispatcher.py", line 211, in wrapper
    result = func(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/llama_index/core/response_synthesizers/base.py", line 241, in synthesize
    response_str = self.get_response(
  File "/usr/local/lib/python3.10/dist-packages/llama_index/core/instrumentation/dispatcher.py", line 211, in wrapper
    result = func(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/llama_index/core/response_synthesizers/compact_and_refine.py", line 43, in get_response
    return super().get_response(
  File "/usr/local/lib/python3.10/dist-packages/llama_index/core/instrumentation/dispatcher.py", line 211, in wrapper
    result = func(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/llama_index/core/response_synthesizers/refine.py", line 183, in get_response
    response = self._give_response_single(
  File "/usr/local/lib/python3.10/dist-packages/llama_index/core/response_synthesizers/refine.py", line 238, in _give_response_single
    program(
  File "/usr/local/lib/python3.10/dist-packages/llama_index/core/response_synthesizers/refine.py", line 84, in __call__
    answer = self._llm.predict(
  File "/usr/local/lib/python3.10/dist-packages/llama_index/core/instrumentation/dispatcher.py", line 211, in wrapper
    result = func(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/llama_index/core/llms/llm.py", line 430, in predict
    chat_response = self.chat(messages)
  File "/usr/local/lib/python3.10/dist-packages/llama_index/core/llms/callbacks.py", line 144, in wrapped_llm_chat
    f_return_val = f(_self, messages, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/llama_index/llms/ollama/base.py", line 101, in chat
    response = client.post(
  File "/usr/local/lib/python3.10/dist-packages/httpx/_client.py", line 1145, in post
    return self.request(
  File "/usr/local/lib/python3.10/dist-packages/httpx/_client.py", line 827, in request
    return self.send(request, auth=auth, follow_redirects=follow_redirects)
  File "/usr/local/lib/python3.10/dist-packages/httpx/_client.py", line 914, in send
    response = self._send_handling_auth(
  File "/usr/local/lib/python3.10/dist-packages/httpx/_client.py", line 942, in _send_handling_auth
    response = self._send_handling_redirects(
  File "/usr/local/lib/python3.10/dist-packages/httpx/_client.py", line 979, in _send_handling_redirects
    response = self._send_single_request(request)
  File "/usr/local/lib/python3.10/dist-packages/httpx/_client.py", line 1015, in _send_single_request
    response = transport.handle_request(request)
  File "/usr/local/lib/python3.10/dist-packages/httpx/_transports/default.py", line 232, in handle_request
    with map_httpcore_exceptions():
  File "/usr/lib/python3.10/contextlib.py", line 153, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/usr/local/lib/python3.10/dist-packages/httpx/_transports/default.py", line 86, in map_httpcore_exceptions
    raise mapped_exc(message) from exc
httpx.ConnectError: [Errno 99] Cannot assign requested address
sunilnagpal added the bug (Something isn't working) and triage (Issue needs to be triaged/prioritized) labels on Apr 9, 2024

dosubot bot commented Apr 9, 2024

Hey @sunilnagpal, great to see you back! Hope you're doing well despite this pesky connection error giving you a bit of a headache. Let's see what we can do to sort it out.

Based on the error message you've shared, it seems like you're encountering a network-related issue. This error is typically not directly related to the LlamaIndex code, but rather to the network settings in your Google Colab environment. Here are some general steps you can take to address this issue:

  1. Retry Mechanism: Implement a retry mechanism in your query execution logic. Network issues can be transient, and retrying the request after a short delay may resolve the issue (a minimal sketch follows this list).

  2. Check Resource Limits: Ensure you're not exceeding any resource limits in Google Colab, such as the number of open connections or outbound requests. Google Colab has limits to prevent abuse, and exceeding these limits might result in network-related errors.

  3. Use a Different Network Library: If your own code makes HTTP calls and hits this error, consider switching libraries (for example, from httpx to requests, or vice versa) to see if the issue persists. Note, however, that the Ollama integration here uses httpx internally.

  4. Review Colab Network Restrictions: Google Colab might have restrictions on certain types of network traffic or destinations. Ensure that your query's destination is not blocked or restricted by Google Colab's environment.

  5. Local Execution: As a last resort, if the network issue cannot be resolved within Google Colab, consider running your LlamaIndex query execution code locally or in another environment where you have more control over the network configuration.
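
For the retry mechanism in item 1, a minimal sketch (my own illustration, not a LlamaIndex API; query_with_retries is a hypothetical helper that simply wraps query_engine.query in a loop):

import time

import httpx

def query_with_retries(query_engine, question, retries=3, delay=2.0):
    # Retry transient connection failures with a growing delay.
    for attempt in range(retries):
        try:
            return query_engine.query(question)
        except httpx.ConnectError:
            if attempt == retries - 1:
                raise  # give up after the last attempt
            time.sleep(delay * (attempt + 1))

response = query_with_retries(query_engine, "What did the author write about?")

Note that retries only help with transient failures; they will not help if no server is listening at the target address, which is what [Errno 99] usually indicates.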

Regarding your code snippet, it seems correct. However, there was a similar issue reported in the past where the user was not correctly loading the documents. Make sure you're correctly loading your documents using the SimpleDirectoryReader class. Here's the corrected code snippet from the similar issue:

from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
documents = SimpleDirectoryReader(input_files=["test2.txt"]).load_data()
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()
query_engine.query("what is my name?")

You can find more details about this issue here.

Lastly, please note a limitation specific to this setup: the local starter example assumes an Ollama server running on the same machine, and a fresh Google Colab VM does not have one. (Your traceback shows Colab running Python 3.10, so interpreter compatibility is not the problem here.)

I hope this helps! If you have any other questions or need further clarification, feel free to ask.

About Dosu: This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.

@sunilnagpal (Author)

Thanks Dosu, but this didn't resolve the issue. Also, why is a network library needed while querying?

dosubot bot commented Apr 9, 2024

Based on the error message you provided, it seems like you're encountering a network-related issue when trying to perform a query with LlamaIndex in a Google Colab environment. This error typically occurs when an application is trying to bind or connect to an IP address and port that is not available or not allowed. In the context of Google Colab, this could be due to restrictions on outbound connections or the specific way networking is handled within the Colab environment.

As for why a network library is needed while querying: your Settings.llm is an Ollama instance, and the Ollama integration in llama-index is a thin HTTP client. Every query ultimately calls Ollama.chat, which uses httpx to POST the prompt to {base_url}/api/chat on a running Ollama server; you can see this call in llama_index/llms/ollama/base.py in your traceback. The default base_url is http://localhost:11434, so the ConnectError means nothing is listening at that address inside the Colab VM.
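
To make that concrete, the failing call boils down to something like the following standalone request (reconstructed from the traceback as an illustration, not the exact library source):

import httpx

# Roughly what Ollama.chat() does, per the traceback: POST the model name
# and chat messages to {base_url}/api/chat on the Ollama server.
with httpx.Client(timeout=httpx.Timeout(30.0)) as client:
    response = client.post(
        url="http://localhost:11434/api/chat",  # default base_url; nothing listens here in Colab
        json={
            "model": "mistral",
            "messages": [{"role": "user", "content": "What did the author write about?"}],
            "stream": False,
        },
    )
    response.raise_for_status()
    print(response.json())

If no process is listening on that host and port, this raises exactly the httpx.ConnectError seen above.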

To resolve this issue, you might want to consider the following steps:

  1. Check Network Permissions: Ensure that Google Colab allows the type of network connection that llama-index is attempting to make. Google Colab has limitations on outbound connections, especially to non-standard ports or addresses. (A quick reachability check follows this list.)

  2. Update Libraries: The pyproject.toml file specifies llama-index version ^0.10.12, which means it will use the latest version compatible with 0.10.x. If 0.10.27 is causing issues, you might want to explicitly set a version of llama-index that you know works well within Google Colab, if such a version exists.

  3. Use a Proxy or Different Network Configuration: If llama-index is attempting to connect to a service or address that is not reachable from Google Colab, configuring it to use a different address may resolve the issue; the Ollama constructor accepts a base_url argument for exactly this.

  4. Contact Support or Check Documentation: Check the documentation for llama-index and llama-index-networks for any notes on compatibility with Google Colab or similar environments.
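
For the reachability check in item 1, a running Ollama server answers a plain GET on its root URL with "Ollama is running", so a one-off probe looks like this (my own sketch, assuming the default base_url):

import httpx

try:
    r = httpx.get("http://localhost:11434", timeout=5.0)  # default Ollama address
    print(r.status_code, r.text)  # expect: 200 Ollama is running
except httpx.ConnectError as exc:
    print("No Ollama server reachable at this address:", exc)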

I also found a similar issue in the LlamaIndex repository where the user encountered a problem while querying in a Colab environment. The suggested solution was to correct the document-loading argument: since the input is a file, pass it in via the input_files argument. Here's the corrected code:

from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
documents = SimpleDirectoryReader(input_files=["test2.txt"]).load_data()
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()
query_engine.query("what is my name?")

You can find more details about this issue here.

I hope this helps! If you have any other questions or need further clarification, feel free to ask.


@sunilnagpal (Author)

Tried it on my local system and got a similar network error:

---------------------------------------------------------------------------
ConnectError                              Traceback (most recent call last)
File ~\anaconda3\lib\site-packages\httpx\_transports\default.py:69, in map_httpcore_exceptions()
     68 try:
---> 69     yield
     70 except Exception as exc:

File ~\anaconda3\lib\site-packages\httpx\_transports\default.py:233, in HTTPTransport.handle_request(self, request)
    232 with map_httpcore_exceptions():
--> 233     resp = self._pool.handle_request(req)
    235 assert isinstance(resp.stream, typing.Iterable)

File ~\anaconda3\lib\site-packages\httpcore\_sync\connection_pool.py:216, in ConnectionPool.handle_request(self, request)
    215     self._close_connections(closing)
--> 216     raise exc from None
    218 # Return the response. Note that in this case we still have to manage
    219 # the point at which the response is closed.

File ~\anaconda3\lib\site-packages\httpcore\_sync\connection_pool.py:196, in ConnectionPool.handle_request(self, request)
    194 try:
    195     # Send the request on the assigned connection.
--> 196     response = connection.handle_request(
    197         pool_request.request
    198     )
    199 except ConnectionNotAvailable:
    200     # In some cases a connection may initially be available to
    201     # handle a request, but then become unavailable.
    202     #
    203     # In this case we clear the connection and try again.

File ~\anaconda3\lib\site-packages\httpcore\_sync\connection.py:99, in HTTPConnection.handle_request(self, request)
     98     self._connect_failed = True
---> 99     raise exc
    101 return self._connection.handle_request(request)

File ~\anaconda3\lib\site-packages\httpcore\_sync\connection.py:76, in HTTPConnection.handle_request(self, request)
     75 if self._connection is None:
---> 76     stream = self._connect(request)
     78     ssl_object = stream.get_extra_info("ssl_object")

File ~\anaconda3\lib\site-packages\httpcore\_sync\connection.py:122, in HTTPConnection._connect(self, request)
    121 with Trace("connect_tcp", logger, request, kwargs) as trace:
--> 122     stream = self._network_backend.connect_tcp(**kwargs)
    123     trace.return_value = stream

File ~\anaconda3\lib\site-packages\httpcore\_backends\sync.py:205, in SyncBackend.connect_tcp(self, host, port, timeout, local_address, socket_options)
    200 exc_map: ExceptionMapping = {
    201     socket.timeout: ConnectTimeout,
    202     OSError: ConnectError,
    203 }
--> 205 with map_exceptions(exc_map):
    206     sock = socket.create_connection(
    207         address,
    208         timeout,
    209         source_address=source_address,
    210     )

File ~\anaconda3\lib\contextlib.py:153, in _GeneratorContextManager.__exit__(self, typ, value, traceback)
    152 try:
--> 153     self.gen.throw(typ, value, traceback)
    154 except StopIteration as exc:
    155     # Suppress StopIteration *unless* it's the same exception that
    156     # was passed to throw().  This prevents a StopIteration
    157     # raised inside the "with" statement from being suppressed.

File ~\anaconda3\lib\site-packages\httpcore\_exceptions.py:14, in map_exceptions(map)
     13     if isinstance(exc, from_exc):
---> 14         raise to_exc(exc) from exc
     15 raise

ConnectError: [WinError 10061] No connection could be made because the target machine actively refused it

The above exception was the direct cause of the following exception:

ConnectError                              Traceback (most recent call last)
Cell In[9], line 1
----> 1 response = query_engine.query("What did the author write in episode 1?")

File ~\anaconda3\lib\site-packages\llama_index\core\instrumentation\dispatcher.py:274, in Dispatcher.span.<locals>.wrapper(func, instance, args, kwargs)
    270 self.span_enter(
    271     id_=id_, bound_args=bound_args, instance=instance, parent_id=parent_id
    272 )
    273 try:
--> 274     result = func(*args, **kwargs)
    275 except BaseException as e:
    276     self.event(SpanDropEvent(span_id=id_, err_str=str(e)))

File ~\anaconda3\lib\site-packages\llama_index\core\base\base_query_engine.py:53, in BaseQueryEngine.query(self, str_or_query_bundle)
     51     if isinstance(str_or_query_bundle, str):
     52         str_or_query_bundle = QueryBundle(str_or_query_bundle)
---> 53     query_result = self._query(str_or_query_bundle)
     54 dispatch_event(QueryEndEvent())
     55 return query_result

File ~\anaconda3\lib\site-packages\llama_index\core\instrumentation\dispatcher.py:274, in Dispatcher.span.<locals>.wrapper(func, instance, args, kwargs)
    270 self.span_enter(
    271     id_=id_, bound_args=bound_args, instance=instance, parent_id=parent_id
    272 )
    273 try:
--> 274     result = func(*args, **kwargs)
    275 except BaseException as e:
    276     self.event(SpanDropEvent(span_id=id_, err_str=str(e)))

File ~\anaconda3\lib\site-packages\llama_index\core\query_engine\retriever_query_engine.py:190, in RetrieverQueryEngine._query(self, query_bundle)
    186 with self.callback_manager.event(
    187     CBEventType.QUERY, payload={EventPayload.QUERY_STR: query_bundle.query_str}
    188 ) as query_event:
    189     nodes = self.retrieve(query_bundle)
--> 190     response = self._response_synthesizer.synthesize(
    191         query=query_bundle,
    192         nodes=nodes,
    193     )
    194     query_event.on_end(payload={EventPayload.RESPONSE: response})
    196 return response

File ~\anaconda3\lib\site-packages\llama_index\core\instrumentation\dispatcher.py:274, in Dispatcher.span.<locals>.wrapper(func, instance, args, kwargs)
    270 self.span_enter(
    271     id_=id_, bound_args=bound_args, instance=instance, parent_id=parent_id
    272 )
    273 try:
--> 274     result = func(*args, **kwargs)
    275 except BaseException as e:
    276     self.event(SpanDropEvent(span_id=id_, err_str=str(e)))

File ~\anaconda3\lib\site-packages\llama_index\core\response_synthesizers\base.py:241, in BaseSynthesizer.synthesize(self, query, nodes, additional_source_nodes, **response_kwargs)
    235     query = QueryBundle(query_str=query)
    237 with self._callback_manager.event(
    238     CBEventType.SYNTHESIZE,
    239     payload={EventPayload.QUERY_STR: query.query_str},
    240 ) as event:
--> 241     response_str = self.get_response(
    242         query_str=query.query_str,
    243         text_chunks=[
    244             n.node.get_content(metadata_mode=MetadataMode.LLM) for n in nodes
    245         ],
    246         **response_kwargs,
    247     )
    249     additional_source_nodes = additional_source_nodes or []
    250     source_nodes = list(nodes) + list(additional_source_nodes)

File ~\anaconda3\lib\site-packages\llama_index\core\instrumentation\dispatcher.py:274, in Dispatcher.span.<locals>.wrapper(func, instance, args, kwargs)
    270 self.span_enter(
    271     id_=id_, bound_args=bound_args, instance=instance, parent_id=parent_id
    272 )
    273 try:
--> 274     result = func(*args, **kwargs)
    275 except BaseException as e:
    276     self.event(SpanDropEvent(span_id=id_, err_str=str(e)))

File ~\anaconda3\lib\site-packages\llama_index\core\response_synthesizers\compact_and_refine.py:43, in CompactAndRefine.get_response(self, query_str, text_chunks, prev_response, **response_kwargs)
     39 # use prompt helper to fix compact text_chunks under the prompt limitation
     40 # TODO: This is a temporary fix - reason it's temporary is that
     41 # the refine template does not account for size of previous answer.
     42 new_texts = self._make_compact_text_chunks(query_str, text_chunks)
---> 43 return super().get_response(
     44     query_str=query_str,
     45     text_chunks=new_texts,
     46     prev_response=prev_response,
     47     **response_kwargs,
     48 )

File ~\anaconda3\lib\site-packages\llama_index\core\instrumentation\dispatcher.py:274, in Dispatcher.span.<locals>.wrapper(func, instance, args, kwargs)
    270 self.span_enter(
    271     id_=id_, bound_args=bound_args, instance=instance, parent_id=parent_id
    272 )
    273 try:
--> 274     result = func(*args, **kwargs)
    275 except BaseException as e:
    276     self.event(SpanDropEvent(span_id=id_, err_str=str(e)))

File ~\anaconda3\lib\site-packages\llama_index\core\response_synthesizers\refine.py:183, in Refine.get_response(self, query_str, text_chunks, prev_response, **response_kwargs)
    179 for text_chunk in text_chunks:
    180     if prev_response is None:
    181         # if this is the first chunk, and text chunk already
    182         # is an answer, then return it
--> 183         response = self._give_response_single(
    184             query_str, text_chunk, **response_kwargs
    185         )
    186     else:
    187         # refine response if possible
    188         response = self._refine_response_single(
    189             prev_response, query_str, text_chunk, **response_kwargs
    190         )

File ~\anaconda3\lib\site-packages\llama_index\core\response_synthesizers\refine.py:238, in Refine._give_response_single(self, query_str, text_chunk, **response_kwargs)
    234 if response is None and not self._streaming:
    235     try:
    236         structured_response = cast(
    237             StructuredRefineResponse,
--> 238             program(
    239                 context_str=cur_text_chunk,
    240                 **response_kwargs,
    241             ),
    242         )
    243         query_satisfied = structured_response.query_satisfied
    244         if query_satisfied:

File ~\anaconda3\lib\site-packages\llama_index\core\response_synthesizers\refine.py:84, in DefaultRefineProgram.__call__(self, *args, **kwds)
     82     answer = answer.json()
     83 else:
---> 84     answer = self._llm.predict(
     85         self._prompt,
     86         **kwds,
     87     )
     88 return StructuredRefineResponse(answer=answer, query_satisfied=True)

File ~\anaconda3\lib\site-packages\llama_index\core\instrumentation\dispatcher.py:274, in Dispatcher.span.<locals>.wrapper(func, instance, args, kwargs)
    270 self.span_enter(
    271     id_=id_, bound_args=bound_args, instance=instance, parent_id=parent_id
    272 )
    273 try:
--> 274     result = func(*args, **kwargs)
    275 except BaseException as e:
    276     self.event(SpanDropEvent(span_id=id_, err_str=str(e)))

File ~\anaconda3\lib\site-packages\llama_index\core\llms\llm.py:430, in LLM.predict(self, prompt, **prompt_args)
    428 if self.metadata.is_chat_model:
    429     messages = self._get_messages(prompt, **prompt_args)
--> 430     chat_response = self.chat(messages)
    431     output = chat_response.message.content or ""
    432 else:

File ~\anaconda3\lib\site-packages\llama_index\core\llms\callbacks.py:144, in llm_chat_callback.<locals>.wrap.<locals>.wrapped_llm_chat(_self, messages, **kwargs)
    128 dispatcher.event(
    129     LLMChatStartEvent(
    130         model_dict=_self.to_dict(),
   (...)
    134     )
    135 )
    136 event_id = callback_manager.on_event_start(
    137     CBEventType.LLM,
    138     payload={
   (...)
    142     },
    143 )
--> 144 f_return_val = f(_self, messages, **kwargs)
    146 if isinstance(f_return_val, Generator):
    147     # intercept the generator and add a callback to the end
    148     def wrapped_gen() -> ChatResponseGen:

File ~\anaconda3\lib\site-packages\llama_index\llms\ollama\base.py:101, in Ollama.chat(self, messages, **kwargs)
     85 payload = {
     86     "model": self.model,
     87     "messages": [
   (...)
     97     **kwargs,
     98 }
    100 with httpx.Client(timeout=Timeout(self.request_timeout)) as client:
--> 101     response = client.post(
    102         url=f"{self.base_url}/api/chat",
    103         json=payload,
    104     )
    105     response.raise_for_status()
    106     raw = response.json()

File ~\anaconda3\lib\site-packages\httpx\_client.py:1145, in Client.post(self, url, content, data, files, json, params, headers, cookies, auth, follow_redirects, timeout, extensions)
   1124 def post(
   1125     self,
   1126     url: URLTypes,
   (...)
   1138     extensions: RequestExtensions | None = None,
   1139 ) -> Response:
   1140     """
   1141     Send a `POST` request.
   1142 
   1143     **Parameters**: See `httpx.request`.
   1144     """
-> 1145     return self.request(
   1146         "POST",
   1147         url,
   1148         content=content,
   1149         data=data,
   1150         files=files,
   1151         json=json,
   1152         params=params,
   1153         headers=headers,
   1154         cookies=cookies,
   1155         auth=auth,
   1156         follow_redirects=follow_redirects,
   1157         timeout=timeout,
   1158         extensions=extensions,
   1159     )

File ~\anaconda3\lib\site-packages\httpx\_client.py:827, in Client.request(self, method, url, content, data, files, json, params, headers, cookies, auth, follow_redirects, timeout, extensions)
    812     warnings.warn(message, DeprecationWarning)
    814 request = self.build_request(
    815     method=method,
    816     url=url,
   (...)
    825     extensions=extensions,
    826 )
--> 827 return self.send(request, auth=auth, follow_redirects=follow_redirects)

File ~\anaconda3\lib\site-packages\httpx\_client.py:914, in Client.send(self, request, stream, auth, follow_redirects)
    906 follow_redirects = (
    907     self.follow_redirects
    908     if isinstance(follow_redirects, UseClientDefault)
    909     else follow_redirects
    910 )
    912 auth = self._build_request_auth(request, auth)
--> 914 response = self._send_handling_auth(
    915     request,
    916     auth=auth,
    917     follow_redirects=follow_redirects,
    918     history=[],
    919 )
    920 try:
    921     if not stream:

File ~\anaconda3\lib\site-packages\httpx\_client.py:942, in Client._send_handling_auth(self, request, auth, follow_redirects, history)
    939 request = next(auth_flow)
    941 while True:
--> 942     response = self._send_handling_redirects(
    943         request,
    944         follow_redirects=follow_redirects,
    945         history=history,
    946     )
    947     try:
    948         try:

File ~\anaconda3\lib\site-packages\httpx\_client.py:979, in Client._send_handling_redirects(self, request, follow_redirects, history)
    976 for hook in self._event_hooks["request"]:
    977     hook(request)
--> 979 response = self._send_single_request(request)
    980 try:
    981     for hook in self._event_hooks["response"]:

File ~\anaconda3\lib\site-packages\httpx\_client.py:1015, in Client._send_single_request(self, request)
   1010     raise RuntimeError(
   1011         "Attempted to send an async request with a sync Client instance."
   1012     )
   1014 with request_context(request=request):
-> 1015     response = transport.handle_request(request)
   1017 assert isinstance(response.stream, SyncByteStream)
   1019 response.request = request

File ~\anaconda3\lib\site-packages\httpx\_transports\default.py:232, in HTTPTransport.handle_request(self, request)
    218 assert isinstance(request.stream, SyncByteStream)
    220 req = httpcore.Request(
    221     method=request.method,
    222     url=httpcore.URL(
   (...)
    230     extensions=request.extensions,
    231 )
--> 232 with map_httpcore_exceptions():
    233     resp = self._pool.handle_request(req)
    235 assert isinstance(resp.stream, typing.Iterable)

File ~\anaconda3\lib\contextlib.py:153, in _GeneratorContextManager.__exit__(self, typ, value, traceback)
    151     value = typ()
    152 try:
--> 153     self.gen.throw(typ, value, traceback)
    154 except StopIteration as exc:
    155     # Suppress StopIteration *unless* it's the same exception that
    156     # was passed to throw().  This prevents a StopIteration
    157     # raised inside the "with" statement from being suppressed.
    158     return exc is not value

File ~\anaconda3\lib\site-packages\httpx\_transports\default.py:86, in map_httpcore_exceptions()
     83     raise
     85 message = str(exc)
---> 86 raise mapped_exc(message) from exc

ConnectError: [WinError 10061] No connection could be made because the target machine actively refused it

@sunilnagpal (Author)

Hello, I would appreciate assistance on the above, please. Thanks.

@Shahik15

Have you been able to resolve this issue? I am getting the same connection error.

@sunilnagpal (Author)

No, @Shahik15, it remains without resolution or support so far...
If you know someone who should be tagged to get their attention, please do.

@sunilnagpal (Author)

@jerryjliu Can you please help here if possible? Thanks.

@kumailkhan1

Experiencing the same issue: it works well in a local notebook but not on Google Colab...

@Shahik15

@kumailkhan1 can you share the code for the local notebook? I am getting the connection error in a local notebook (PyCharm) as well.

@kumailkhan1

Can't share the exact code, but it is almost identical to what you shared...

@sunilnagpal (Author)

It's been two weeks since the first issue report, but I'm still awaiting the attention of the repo maintainers. I would appreciate any input on whom to tag to catch their attention.

@sunilnagpal (Author)

@logan-markewich, is it possible for you to help bring this to someone's attention?

@iwill220182

LlamaIndex is just unbelievable; it has so many problems.

@rjtormis

Hi! Any updates on this issue?

@vishnupriyavr

I was facing the same issue running Ollama and LlamaIndex in Docker. I added the base URL in this line to connect to the Ollama Docker instance, and this error did not appear again. I hope it helps.

llm = Ollama(model="llama3:latest", request_timeout=500.0, base_url=OLLAMA_BASE_URL)

In my case the base_url was http://<container_name>:11434
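
Applying this to the starter example above, a minimal sketch (the OLLAMA_BASE_URL environment variable is my own convention; use whatever address your Ollama server actually listens on):

import os

from llama_index.core import Settings
from llama_index.llms.ollama import Ollama

# Point the client at a reachable Ollama server instead of the default
# http://localhost:11434, e.g. http://<container_name>:11434 in Docker.
OLLAMA_BASE_URL = os.environ.get("OLLAMA_BASE_URL", "http://localhost:11434")

Settings.llm = Ollama(
    model="mistral",
    request_timeout=120.0,
    base_url=OLLAMA_BASE_URL,
)

On Colab specifically there is no Ollama server inside the VM by default, so you would need to install and start one in the runtime, or point base_url at a server you can reach, before the query can succeed.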
