- 🔭 I’m currently a Ph.D. student at SJTU (IWIN Lab), and I am now a research intern at DAMO Academy (达摩院).
- 🌱 I’m currently learning about LLMs/MLLMs, explainable attention, information flow, and truthful AI.
- 💬 Ask me about anything
- 📫 How to reach me: [email protected]. WeChat: SemiZxf
- 🌱 Homepage: zhangbaijin.github.io
- 💬 Google Scholar
Pinned repositories:
- From-Redundancy-to-Relevance: code for the paper "From Redundancy to Relevance: Enhancing Explainability in Multimodal Large Language Models".
- FanshuoZeng/Simignore [AAAI 2025]: code for the paper "Enhancing Multimodal Large Language Models' Complex Reasoning via Similarity Computation".
- Massive-activations-VLMs: code for the paper "Focus on Attention Sink Anchor Tokens to Alleviate Hallucination in LVLMs".
- SpA-Former-shadow-removal [IJCNN 2023 Oral]: SpA-Former, an effective and lightweight Transformer for image shadow removal.
- Wakeup-Darkness [ACM TOMM] (forked from QihanZhao/enlighten-anything): Wakeup-Darkness: When Multimodal Meets Unsupervised Low-light Image Enhancement.