
[Test] Fix build break #777

Merged
merged 11 commits into from
Apr 24, 2024
Conversation

riedgar-ms
Collaborator

@riedgar-ms riedgar-ms commented Apr 24, 2024

There are currently three separate build breaks:

  • A mysterious push to main broke our tests; skip the offending items for now.
  • llama-cpp-python has started segfaulting; refactoring the workflow files a little appears to have fixed this, for reasons that are not entirely clear.
  • The macos-latest runner image has changed and no longer provides Python 3.8 and 3.9.
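One common way to handle a runner image dropping interpreter versions is a matrix exclude in the workflow file. The snippet below is a hedged sketch only, assuming a standard `strategy.matrix` layout; the actual job names and matrix keys in this repository's workflows may differ:

```yaml
strategy:
  matrix:
    os: [ubuntu-latest, windows-latest, macos-latest]
    python-version: ["3.8", "3.9", "3.10", "3.11", "3.12"]
    exclude:
      # macos-latest no longer ships Python 3.8 / 3.9
      - os: macos-latest
        python-version: "3.8"
      - os: macos-latest
        python-version: "3.9"
```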

@riedgar-ms riedgar-ms requested a review from Harsha-Nori April 24, 2024 12:22
@riedgar-ms
Collaborator Author

I will merge once the build is green.

@codecov-commenter

codecov-commenter commented Apr 24, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 62.12%. Comparing base (d2525a3) to head (da6d73a).

❗ Your organization needs to install the Codecov GitHub app to enable full functionality.

Additional details and impacted files
@@            Coverage Diff             @@
##             main     #777      +/-   ##
==========================================
+ Coverage   55.46%   62.12%   +6.65%     
==========================================
  Files          55       55              
  Lines        4071     4071              
==========================================
+ Hits         2258     2529     +271     
+ Misses       1813     1542     -271     

☔ View full report in Codecov by Sentry.

@riedgar-ms riedgar-ms requested a review from paulbkoch April 24, 2024 15:43
Comment on lines +60 to 61
@pytest.mark.skip("Don't overload the build machines")
def test_phi3_loading():
Collaborator


LGTM for now, but perhaps we can configure these to run on larger-spec machines after this PR?

Collaborator Author


If you can provision them....
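If larger machines did become available, one option would be to make these heavy tests opt-in rather than unconditionally skipped. This is a hypothetical sketch, not the PR's actual change: the `RUN_HEAVY_TESTS` environment variable and the elided test body are illustrative assumptions.

```python
import os

import pytest

# Hypothetical opt-in guard: run heavy model-loading tests only when
# RUN_HEAVY_TESTS=1 is set (e.g. on a larger self-hosted runner),
# instead of skipping them everywhere with @pytest.mark.skip.
RUN_HEAVY = os.environ.get("RUN_HEAVY_TESTS") == "1"


@pytest.mark.skipif(not RUN_HEAVY, reason="Don't overload the build machines")
def test_phi3_loading():
    ...  # model-loading body elided; see the actual test in the diff
```

With this pattern the default CI run stays cheap, while a dedicated job can set the variable to exercise the full suite.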

@@ -50,7 +50,7 @@ jobs:
- name: GPU pip installs
run: |
pip install accelerate
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install "llama-cpp-python!=0.2.58"
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install "llama-cpp-python==0.2.59"
Collaborator Author

@riedgar-ms riedgar-ms Apr 24, 2024


I'm considering going back to the previous version. For llama-cpp-python, v0.2.59 is the best I've seen: it passes on 3.8, 3.9, and 3.12, but fails on 3.10 and 3.11. The previous version tended to take multiple attempts, though.

@riedgar-ms riedgar-ms merged commit c23c8b0 into main Apr 24, 2024
98 checks passed
@riedgar-ms riedgar-ms deleted the riedgar-ms-patch-1 branch April 24, 2024 19:51