[Test] Fix build break #777
Conversation
I will merge once the build is green.
Codecov Report
All modified and coverable lines are covered by tests ✅

@@            Coverage Diff             @@
##             main     #777      +/-   ##
==========================================
+ Coverage   55.46%   62.12%   +6.65%
==========================================
  Files          55       55
  Lines        4071     4071
==========================================
+ Hits         2258     2529     +271
+ Misses       1813     1542     -271
@pytest.mark.skip("Don't overload the build machines")
def test_phi3_loading():
LGTM for now, but perhaps we can configure these to run on larger spec'd machines after this PR?
If you can provision them....
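As a follow-up to the idea of running these tests on larger machines: one possible sketch, assuming an invented opt-in environment variable `RUN_HEAVY_TESTS` (which this PR does not define), would replace the unconditional skip with an environment-gated `skipif` so the same test can run where capacity exists:

```python
import os
import pytest

# Hypothetical sketch: gate the heavyweight test on an opt-in environment
# variable so it can run on larger-spec machines. RUN_HEAVY_TESTS is an
# invented name, not something this PR or repo defines.
RUN_HEAVY_TESTS = os.environ.get("RUN_HEAVY_TESTS") == "1"


@pytest.mark.skipif(
    not RUN_HEAVY_TESTS,
    reason="Don't overload the build machines",
)
def test_phi3_loading():
    # Model-loading body elided; the skip guard above is the point here.
    pass
```

A larger-spec runner's workflow could then export `RUN_HEAVY_TESTS=1` to opt in, while default CI keeps skipping the test.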
.github/workflows/unit_tests_gpu.yml (Outdated)
@@ -50,7 +50,7 @@ jobs:
       - name: GPU pip installs
         run: |
           pip install accelerate
-          CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install "llama-cpp-python!=0.2.58"
+          CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install "llama-cpp-python==0.2.59"
I'm considering going back to the previous version. For llama-cpp-python, v0.2.59 is the best I've seen: it passes on 3.8, 3.9, and 3.12, but fails on 3.10 and 3.11. That tended to take multiple attempts, though.
There are actually three current build breaks:

- main broke our tests. Skip the offending items.
- llama-cpp-python has started segfaulting. Refactoring the workflow files a little appears to have solved this, for reasons which are not entirely clear.
- The macos-latest runner has changed and no longer has Python 3.8 and 3.9.