error with /attention_head_record endpoint #25

Open
Ruibn opened this issue May 8, 2024 · 0 comments
Ruibn commented May 8, 2024

Hi Team,

I hit an error when I tried the following request body with /attention_head_record:
{
  "dst": "logits",
  "layerIndex": 9,
  "activationIndex": 8,
  "datasets": [
    "https://openaipublic.blob.core.windows.net/neuron-explainer/gpt2_small_data/collated-activations/"
  ]
}
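
For reference, here is roughly how the request is sent (a minimal sketch using Python's requests library; the http://localhost:8000 host/port is an assumption, substitute whatever your activation server is actually listening on):

import requests

# Sketch only: the activation server URL is an assumption.
ACTIVATION_SERVER_URL = "http://localhost:8000"

response = requests.post(
    f"{ACTIVATION_SERVER_URL}/attention_head_record",
    json={
        "dst": "logits",
        "layerIndex": 9,
        "activationIndex": 8,
        "datasets": [
            "https://openaipublic.blob.core.windows.net/neuron-explainer/gpt2_small_data/collated-activations/"
        ],
    },
)
response.raise_for_status()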

The error is as follows:
File "/home/ruibn/debug/transformer-debugger/neuron_explainer/activation_server/read_routes.py", line 265, in attention_head_record
) = convert_activation_records_to_token_and_attention_activations_lists(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ruibn/debug/transformer-debugger/neuron_explainer/activation_server/read_routes.py", line 191, in convert_activation_records_to_token_and_attention_activations_lists
return normalize_attention_token_scalars(zipped_tokens_and_raw_attention_activations)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ruibn/debug/transformer-debugger/neuron_explainer/activation_server/read_routes.py", line 455, in normalize_attention_token_scalars
) = compute_scalar_summary_and_scale(compute_max_scalar_out, list_of_sequence_lists)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ruibn/debug/transformer-debugger/neuron_explainer/activation_server/read_routes.py", line 427, in compute_scalar_summary_and_scale
scalar_indexed_by_token_sequence_list: list[list[list[float]]] = [
^
File "/home/ruibn/debug/transformer-debugger/neuron_explainer/activation_server/read_routes.py", line 428, in
[compute_scalar_summary_function(sequence) for sequence in sequence_list]
File "/home/ruibn/debug/transformer-debugger/neuron_explainer/activation_server/read_routes.py", line 428, in
[compute_scalar_summary_function(sequence) for sequence in sequence_list]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ruibn/debug/transformer-debugger/neuron_explainer/activation_server/read_routes.py", line 421, in compute_max_scalar_out
return compute_summary_of_scalar_out(attn_token_sequence_list, _max_non_none)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ruibn/debug/transformer-debugger/neuron_explainer/activation_server/read_routes.py", line 392, in compute_summary_of_scalar_out
scalar_out = [
^
File "/home/ruibn/debug/transformer-debugger/neuron_explainer/activation_server/read_routes.py", line 393, in
operation(
File "/home/ruibn/debug/transformer-debugger/neuron_explainer/activation_server/read_routes.py", line 380, in _max_non_none
return max([a for a in scalars if a is not None])
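
The exception line itself got cut off in the paste, but the final frame suggests the failure mode: when every element of scalars is None, the list comprehension in _max_non_none produces an empty list, and max() on an empty sequence raises ValueError. A minimal standalone reproduction of that frame (the type hint is my guess; the body is copied from the trace):

def _max_non_none(scalars: list[float | None]) -> float:
    # Copied from read_routes.py line 380: if every element is None, the
    # comprehension yields [] and max() raises
    # "ValueError: max() arg is an empty sequence".
    return max([a for a in scalars if a is not None])

try:
    _max_non_none([None, None, None])
except ValueError as e:
    print(e)  # -> max() arg is an empty sequence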

Any idea? The same request body works fine with /neuron_record.
Could it be caused by the dataset I'm using? @WuTheFWasThat
