
More detailed information on test failures in Actions page #10713

Open
RaymondDashWu opened this issue Oct 19, 2023 · 8 comments
Labels: enhancement (This PR modified some existing files)

Comments

@RaymondDashWu
Contributor

Feature description

When creating tests for the various algorithm files, there isn't a way to see which test failed on the GitHub Actions page. You see something like this:
[screenshot: GitHub Actions end-of-run summary]

Something like doctest.testmod() provides more usable data to work from:
[screenshot: detailed doctest.testmod() failure output]

@RaymondDashWu added the enhancement label on Oct 19, 2023
@RaymondDashWu
Contributor Author

I realize this depends on the individual file, as adding the doctests to the file gave me identical errors. I'm unsure whether I should close this, since it would be nice to have a unified way of displaying meaningful errors.

@tianyizheng02
Contributor

You actually can see which test failed in the build logs—it's just after the progress bar:

web_programming/fetch_anime_and_play.py ...                              [ 99%]
web_programming/fetch_well_rx_price.py .                                 [ 99%]
web_programming/get_imdbtop.py .                                         [ 99%]
web_programming/get_top_billionaires.py .                                [ 99%]
web_programming/instagram_crawler.py .                                   [ 99%]
/opt/hostedtoolcache/Python/3.12.0/x64/lib/python3.12/site-packages/coverage/report_core.py:115: CoverageWarning: Couldn't parse '/home/runner/work/Python/Python/config-3.py': No source for code: '/home/runner/work/Python/Python/config-3.py'. (couldnt-parse)
  coverage._warn(msg, slug="couldnt-parse")
/opt/hostedtoolcache/Python/3.12.0/x64/lib/python3.12/site-packages/coverage/report_core.py:115: CoverageWarning: Couldn't parse '/home/runner/work/Python/Python/config.py': No source for code: '/home/runner/work/Python/Python/config.py'. (couldnt-parse)
  coverage._warn(msg, slug="couldnt-parse")
web_programming/test_fetch_github_info.py .                              [100%]

=================================== FAILURES ===================================
__________ [doctest] graphs.check_bipartite_graph_bfs.check_bipartite __________
051     ... {-1: [0, 2], 0: [-1, 1], 1: [0, 2], 2: [-1, 1]}
052     ... )
053     True
054     >>> check_bipartite(
055     ... {0.9: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
056     ... )
057     Traceback (most recent call last):
058         ...
059     KeyError: 0
060     >>> check_bipartite(
UNEXPECTED EXCEPTION: TypeError('list indices must be integers or slices, not float')
Traceback (most recent call last):
  File "/opt/hostedtoolcache/Python/3.12.0/x64/lib/python3.12/doctest.py", line 1357, in __run
    exec(compile(example.source, filename, "single",
  File "<doctest graphs.check_bipartite_graph_bfs.check_bipartite[10]>", line 1, in <module>
  File "/home/runner/work/Python/Python/graphs/check_bipartite_graph_bfs.py", line 103, in check_bipartite
    if bfs() is False:
       ^^^^^
  File "/home/runner/work/Python/Python/graphs/check_bipartite_graph_bfs.py", line 90, in bfs
    if color[neighbour] == -1:
       ~~~~~^^^^^^^^^^^
TypeError: list indices must be integers or slices, not float
/home/runner/work/Python/Python/graphs/check_bipartite_graph_bfs.py:60: UnexpectedException

I know, it's hard to find. I had tried to separate the test logs and coverage logs into two separate Actions steps in a past PR, but that didn't work out.

@RaymondDashWu
Contributor Author

RaymondDashWu commented Oct 20, 2023

Ah, you're right. I see it now. It's a bit hard to find, since opening the build automatically scrolls to the end summary that I took a screenshot of in my first post. I'm not sure this is a PR a contributor can make, since I believe it's tied to the GitHub organization, but would it be better to put that info under, or in place of, "short test summary info"? Contributors are only supposed to submit one file at a time, so the size of "short test summary info" will only ever be 1, making that section redundant. Or, if it is something a PR can fix, can you describe what steps you took and what issues you encountered? I'll take a stab at it.

[screenshot: "short test summary info" section of the Actions log]
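For reference, a minimal sketch of the standard pytest flags that control those sections (none of these are in the current workflow; the trailing "." just stands in for the repository's usual arguments):

# -r selects what appears under "short test summary info"; --tb controls how much
# traceback is printed for each failure above it
pytest -rN .               # drop the short-summary section entirely
pytest -ra --tb=long .     # or keep it, with a line per non-passing test and full tracebacks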

@tianyizheng02
Contributor

The problem is that the huge wall of logs is just the output of the single pytest command run by the GitHub Action. As I said, I tried to separate this output into two separate Actions steps, but to no avail. If you want to take a stab at it, you're welcome to open a PR. You'll have to experiment with pytest and pytest-cov to see if you can somehow separate the testing output (the progress bar and the test warnings/errors) from the coverage output (the coverage statistics). The difficulty is that, as far as I can tell, there's no way to only check the coverage without also running all the tests.
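One possible workaround, sketched here under the assumption that coverage.py's standard CLI is acceptable (this is not what the workflow currently does): drive pytest through coverage.py instead of pytest-cov, so the test output and the coverage statistics come from two different commands that could live in two different Actions steps.

# step 1: run the tests under coverage measurement; only pytest's own output appears here
coverage run --source=. -m pytest .      # plus the usual --ignore flags from the workflow
# step 2, in a separate Actions step: read the .coverage file written above and print only the statistics
coverage report --show-missing --skip-covered

Whether this plays nicely with the rest of the existing setup is exactly the kind of thing a PR would need to experiment with.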

@SOBAN50

SOBAN50 commented Oct 21, 2023

@tianyizheng02 As I've understood it, you want to run the pytest tests and the coverage report separately, right?

@SOBAN50

SOBAN50 commented Oct 21, 2023

If so, then why not run pytest and coverage separately, i.e. in separate commands?

@SOBAN50

SOBAN50 commented Oct 21, 2023

Actually, you can run both of them separately, but there is a prerequisite.

To run 'coverage report' separately, you have to run a testing library (pytest) first so that it can generate the .coverage file.

To generate the .coverage file, we must first run this command (already present in Actions):

pytest --ignore=quantum/q_fourier_transform.py --ignore=project_euler/ --ignore=scripts/validate_solutions.py --ignore=web_programming/instagram_crawler.py --cov-report=term-missing:skip-covered --cov=. .

This generates the .coverage file, which we can later use to run 'coverage report' in a separate GitHub Actions step, so its results can be visualized independently.

So, in summary, you cannot run them entirely separately, but only after running the above command.
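Put concretely, a rough sketch, assuming pytest-cov's "--cov-report=" option (an empty value, which, if I'm reading the pytest-cov docs right, suppresses the inline terminal report while still writing .coverage):

# same command as above, but without a terminal coverage report
pytest --ignore=quantum/q_fourier_transform.py --ignore=project_euler/ --ignore=scripts/validate_solutions.py --ignore=web_programming/instagram_crawler.py --cov-report= --cov=. .
# then, in a separate Actions step, print the statistics from the .coverage file on their own
coverage report --skip-covered --show-missing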

@RaymondDashWu
Contributor Author

RaymondDashWu commented Oct 24, 2023

So I took a quick look and found two potential solutions, @tianyizheng02. They seem similar to @SOBAN50's ideas, but I couldn't find the option for a .coverage file in the documentation.

  1. pytest-cov has the option to output to a file such as JSON without reporting on the terminal (source). Another Actions step could then be created to parse this file and display it separately.
  2. If you're willing to try another coverage tool, coverage.py can generate a separate webpage that we could point users to. It has a 'coverage html' command that seems suited for this (source); the result would look like this (rough commands for both options are sketched below):
    [screenshot: coverage.py HTML report]
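Rough commands for both options (just a sketch; the output file names are the tools' defaults, not anything this repo currently produces):

# option 1: write the coverage data to coverage.json instead of the terminal (pytest-cov)
pytest --cov=. --cov-report=json .        # plus the usual --ignore flags
# option 2: build a browsable HTML report from an existing .coverage file (coverage.py)
coverage html                             # writes htmlcov/index.html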
