
Commit

Merge branch 'ACEsuit:main' into SpheriCart_Ylm
zhanglw0521 authored Nov 10, 2023
2 parents 46dce96 + 6c02075 commit ce5d5ed
Showing 6 changed files with 186 additions and 0 deletions.
39 changes: 39 additions & 0 deletions .github/workflows/benchmark.yml
@@ -0,0 +1,39 @@
name: Run benchmarks

on:
  pull_request:
    types: [labeled, opened, synchronize, reopened]

env:
  JULIA_NUM_THREADS: 2

# Only trigger the benchmark job when the `run benchmark` label is added to the PR
jobs:
  Benchmark:
    runs-on: ubuntu-latest
    if: contains(github.event.pull_request.labels.*.name, 'run benchmark')
    steps:
      - uses: actions/checkout@v3
      - uses: julia-actions/setup-julia@latest
        with:
          version: 1
      - run: |
          using Pkg
          Pkg.pkg"registry add https://github.com/ACEsuit/ACEregistry"
        shell: bash -c "julia --color=yes {0}"
      - uses: julia-actions/julia-buildpkg@latest
      - name: Install dependencies
        run: julia -e 'using Pkg; Pkg.add(["PkgBenchmark", "BenchmarkCI"])'
      - name: Run benchmarks
        run: julia -e 'using BenchmarkCI; BenchmarkCI.judge(baseline = "origin/main")'
      - name: Post results
        # displayjudgement prints the comparison in the CI log; postjudge posts it to the PR thread (it may fail without write permission)
        run: julia -e 'using BenchmarkCI; BenchmarkCI.displayjudgement(); BenchmarkCI.postjudge()'
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
      # Uncomment this if you want the benchmark results to be pushed to the repo
      #- name: Push results
      #  run: julia -e "using BenchmarkCI; BenchmarkCI.pushresult()"
      #  env:
      #    GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
      #    SSH_KEY: ${{ secrets.DOCUMENTER_KEY }}
1 change: 1 addition & 0 deletions .gitignore
@@ -3,3 +3,4 @@
/docs/build/
.vscode
literate_tutorials
benchmark/tune.json
5 changes: 5 additions & 0 deletions benchmark/Project.toml
@@ -0,0 +1,5 @@
[deps]
BenchmarkTools = "6e4b80f9-dd63-53aa-95a3-0cdb28fa8baf"
LuxCore = "bb33d45b-7691-41d6-9220-0943567d0623"
Random = "9a3f8284-a2c9-5f02-9a11-845980a1fd5c"
Zygote = "e88e6eb3-aa80-5325-afca-941959d7151f"
81 changes: 81 additions & 0 deletions benchmark/benchmarks.jl
@@ -0,0 +1,81 @@
using Polynomials4ML
using BenchmarkTools
using LuxCore, Random, Zygote

const P4ML = Polynomials4ML


SUITE = BenchmarkGroup()


## Test polynomials

SUITE["Polynomials"] = BenchmarkGroup()

N = 100     # number of evaluation points
Np = 10     # number of basis functions
r = 2 * rand(N) .- 1
tmp = zeros(N, Np)       # output buffer: one row per point, one column per basis function
tmp_d = similar(tmp)     # first derivatives
tmp_d2 = similar(tmp)    # second derivatives

# Chebyshev
ch_basis = ChebBasis(Np)

SUITE["Polynomials"]["Chebyshev"] = BenchmarkGroup()
SUITE["Polynomials"]["Chebyshev"]["evaluation"] = @benchmarkable evaluate!($tmp, $ch_basis, $r)
SUITE["Polynomials"]["Chebyshev"]["derivative"] = @benchmarkable evaluate_ed!($tmp, $tmp_d, $ch_basis, $r)
SUITE["Polynomials"]["Chebyshev"]["2nd derivative"] = @benchmarkable evaluate_ed2!($tmp, $tmp_d, $tmp_d2, $ch_basis, $r)

# OrthPolyBasis1D3T

op_basis = OrthPolyBasis1D3T(randn(Np), randn(Np), randn(Np))

SUITE["Polynomials"]["OrtoPoly1d3"] = BenchmarkGroup()
SUITE["Polynomials"]["OrtoPoly1d3"]["evaluation"] = @benchmarkable evaluate!($tmp, $op_basis, $r)
SUITE["Polynomials"]["OrtoPoly1d3"]["derivative"] = @benchmarkable evaluate_ed!($tmp, $tmp_d, $op_basis, $r)
SUITE["Polynomials"]["OrtoPoly1d3"]["2nd derivative"] = @benchmarkable evaluate_ed2!($tmp, $tmp_d, $tmp_d2, $op_basis, $r)


## ACE pooling
# copied from profile/ace/profile_sparseprodpool.jl

# Helpers
function _generate_basis(; order = 3, len = 50)
   # random sparse specification: `len` sorted index tuples of length `order`
   NN = [ rand(10:30) for _ = 1:order ]
   spec = sort([ ntuple(t -> rand(1:NN[t]), order) for _ = 1:len ])
   return PooledSparseProduct(spec)
end

# a single random input tuple (one point, no pooling)
function _rand_input1(basis::PooledSparseProduct{ORDER}) where {ORDER}
   NN = [ maximum(b[i] for b in basis.spec) for i = 1:ORDER ]
   return ntuple(i -> randn(NN[i]), ORDER)
end

# a batch of `nX` random inputs (pooled evaluation)
function _rand_input(basis::PooledSparseProduct{ORDER}; nX = 10) where {ORDER}
   NN = [ maximum(b[i] for b in basis.spec) for i = 1:ORDER ]
   return ntuple(i -> randn(nX, NN[i]), ORDER)
end

#

SUITE["ACE"] = BenchmarkGroup()
SUITE["ACE"]["SparceProduct"] = BenchmarkGroup()

order = 4
basis1 = _generate_basis(; order=order)
BB = _rand_input1(basis1)

nX = 64
order = 3
basis2 = _generate_basis(; order=order)
bBB = _rand_input(basis2; nX = nX)

SUITE["ACE"]["SparceProduct"]["no pooling"] = @benchmarkable evaluate($basis1, $BB)
SUITE["ACE"]["SparceProduct"]["pooling"] = @benchmarkable evaluate($basis2, $bBB)

l = Polynomials4ML.lux(basis2)
ps, st = LuxCore.setup(MersenneTwister(1234), l)

SUITE["ACE"]["SparceProduct"]["lux evaluation"] = @benchmarkable l($bBB, $ps, $st)
SUITE["ACE"]["SparceProduct"]["Zygote gradient"] = @benchmarkable Zygote.gradient(x -> sum($l(x, $ps, $st)[1]), $bBB)
3 changes: 3 additions & 0 deletions docs/make.jl
@@ -42,6 +42,9 @@ makedocs(;
"ace.md", ],
"Docstrings" => "docstrings.md",
"Experimental" => "experimental.md",
"Developter Documentation" => [
"benchmarking.md",
],
],
)

57 changes: 57 additions & 0 deletions docs/src/benchmarking.md
@@ -0,0 +1,57 @@
# Benchmark Instructions

For general reference, see the BenchmarkTools [manual](https://juliaci.github.io/BenchmarkTools.jl/stable/manual/).

A simple way to run benchmarks is to call

```julia
using BenchmarkTools
using PkgBenchmark
using Polynomials4ML

bench = benchmarkpkg(Polynomials4ML)
results = bench.benchmarkgroup

# You can filter the results with the @tagged macro
results[@tagged "derivative" && "Chebyshev"]
```

You can create a `BenchmarkConfig` to control how the benchmarks are run

```julia
t2 = BenchmarkConfig(env = Dict("JULIA_NUM_THREADS" => 2))
bench_t2 = benchmarkpkg(Polynomials4ML, t2)
```

Benchmark results can be saved to a file with

```julia
export_markdown("results.md", bench)
```

To compare the current branch against another branch:

```julia
# compare the current branch against "origin/main"
j = judge(Polynomials4ML, "origin/main")
```

To benchmark scaling across different numbers of threads:

```julia
t4 = BenchmarkConfig(env = Dict("JULIA_NUM_THREADS" => 4))
t8 = BenchmarkConfig(env = Dict("JULIA_NUM_THREADS" => 8))

# compare how much going from 4 threads to 8 improves performance
j = judge(Polynomials4ML, t8, t4)

show(j.benchmarkgroup)
```
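
The comparison itself can also be written to a file; PkgBenchmark's `export_markdown` accepts the judgement object as well as plain results:

```julia
export_markdown("judge_results.md", j)
```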

## CI Benchmarks

Benchmarks can be run automatically on a PR by adding the `run benchmark` label to it (this is the label name that `.github/workflows/benchmark.yml` checks for).
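
The CI job runs the same comparison you can reproduce locally. A rough sketch of the equivalent calls (assuming the benchmark dependencies are installed and `origin/main` has been fetched):

```julia
using BenchmarkCI

# judge the current checkout against the baseline branch, then print the verdict
BenchmarkCI.judge(baseline = "origin/main")
BenchmarkCI.displayjudgement()
```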

## Adding more benchmarks

Take a look at `benchmark/benchmarks.jl` for an example, and see the sketch below. If your benchmark depends on additional packages, add them to `benchmark/Project.toml`.
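
As a minimal sketch of what a new entry could look like (the `"MyBasis"` group name is a hypothetical placeholder, and `ChebBasis` stands in for whatever you actually want to benchmark):

```julia
# in benchmark/benchmarks.jl, after SUITE = BenchmarkGroup() has been defined
my_basis = ChebBasis(20)     # hypothetical basis under test
x = 2 * rand(100) .- 1       # 100 sample points in [-1, 1]
out = zeros(100, 20)         # output buffer: (points, basis length)

SUITE["Polynomials"]["MyBasis"] = BenchmarkGroup()
SUITE["Polynomials"]["MyBasis"]["evaluation"] =
    @benchmarkable evaluate!($out, $my_basis, $x)
```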
