Accessing Attention on MLX #3246
Unanswered
tburleyinfo
asked this question in Q&A
Replies: 0 comments
“How can intermediate attention activations (e.g., attention heads or attention weights) be accessed or inspected when running transformer models with MLX?”
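One possible approach, sketched below as an assumption rather than an MLX-provided API: as far as I know, the fused attention path (`mx.fast.scaled_dot_product_attention`) and `mlx.nn.MultiHeadAttention` return only the attention output, not the softmax weights, so a common workaround is to re-implement the attention step with plain `mlx.core` ops inside the layers you care about and return (or stash) the weights alongside the output. The function name `attention_with_weights` and the toy shapes are illustrative, not part of MLX.

```python
# A minimal sketch (not MLX's built-in API): compute scaled dot-product
# attention explicitly with mlx.core ops so the softmax weights can be
# captured and inspected. Shapes below are illustrative assumptions.

import math
import mlx.core as mx

def attention_with_weights(q, k, v, mask=None):
    """Scaled dot-product attention that also returns the attention weights.

    q, k, v: arrays of shape (batch, num_heads, seq_len, head_dim)
    Returns (output, weights), where weights has shape
    (batch, num_heads, seq_len, seq_len).
    """
    scale = 1.0 / math.sqrt(q.shape[-1])
    # (batch, heads, seq, seq) attention scores
    scores = (q * scale) @ k.transpose(0, 1, 3, 2)
    if mask is not None:
        scores = scores + mask
    # These are the intermediate activations to inspect.
    weights = mx.softmax(scores, axis=-1)
    output = weights @ v
    return output, weights

# Toy example: 1 sequence, 2 heads, 4 tokens, 8-dim heads.
q = mx.random.normal((1, 2, 4, 8))
k = mx.random.normal((1, 2, 4, 8))
v = mx.random.normal((1, 2, 4, 8))

out, attn = attention_with_weights(q, k, v)
print(attn.shape)   # (1, 2, 4, 4)
print(attn[0, 0])   # per-token weights for head 0 of the first sequence
```

For an existing model, the same idea would mean editing or subclassing its attention modules to call a function like this (or to store `weights` on the module) instead of the fused kernel, accepting the extra memory and speed cost while inspecting.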