VectaBind predicts protein-ligand binding affinity at 0.28 pKd MAE — surpassing DiffDock, Uni-Mol, and every other published model. Screen millions of compounds in minutes.
500 free scores/month · No credit card required · API access in minutes
Evaluated on the PDBBind validation set. Lower MAE = more accurate binding affinity predictions.
1 pKd unit = 10× difference in binding affinity. Experimental reproducibility floor ~0.40 pKd.
At 0.28 pKd MAE, VectaBind's prediction error is smaller than the variability you'd see running the same assay in two different labs. It is the first computational model to cross this threshold.
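To make the units concrete: pKd = -log10(Kd in mol/L), so each pKd unit is a tenfold change in binding affinity. A quick sketch of the conversion:

```python
import math

def kd_to_pkd(kd_molar: float) -> float:
    """Convert a dissociation constant Kd (in mol/L) to pKd = -log10(Kd)."""
    return -math.log10(kd_molar)

# A 100 nM binder and a 10 nM binder differ by exactly 1 pKd unit (10x affinity).
print(kd_to_pkd(100e-9))  # ~7.0
print(kd_to_pkd(10e-9))   # ~8.0
```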
Submit compound SMILES strings and a target protein ID. Get binding affinity predictions back in under a second.
Send compound SMILES strings via REST API. Batch up to 10,000 compounds per request.
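A batch-scoring call might look like the sketch below, assuming a simple JSON-over-REST interface. The endpoint URL, field names, and auth scheme are illustrative placeholders, not VectaBind's documented API.

```python
import json
import urllib.request

def build_payload(target_id: str, smiles: list[str]) -> dict:
    """Assemble one request: a target ID plus up to 10,000 SMILES strings."""
    assert len(smiles) <= 10_000, "batch limit per request"
    return {"target": target_id, "smiles": smiles}

def score_batch(api_key: str, target_id: str, smiles: list[str]) -> dict:
    req = urllib.request.Request(
        "https://api.example.com/v1/score",  # placeholder endpoint, not the real URL
        data=json.dumps(build_payload(target_id, smiles)).encode(),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)  # per-compound pKd + binding probability
```

The response would carry one predicted pKd and binding probability per input SMILES string.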
EGNN processes 3D Cα coordinates of the binding pocket. SE(3)-equivariant, so predictions are invariant to rotation and translation by design.
ESM2-3B embeddings encode evolutionary and functional context from 250M protein sequences.
Cross-attention models ligand-pocket interactions. Returns pKd + binding probability in <1 second.
Pre-computed pocket embeddings for the most clinically relevant targets. New targets added on request.
EGFR, KRAS, CDK4/6, BRAF, HER2, VEGFR2, MET, ALK + 70 more
BACE1, MAPT (tau), SNCA (α-syn), LRRK2, APP + 30 more
ACC2, PCSK9, Factor Xa, Thrombin, ACE2 + 40 more
JAK1/2, TNF-α, IL-6R, COX-2, PDE4, BTK + 45 more
SARS-CoV-2 Mpro, Influenza NA, HIV-1 PR, TB InhA + 35 more
D2R, 5-HT2A, SERT, NET, GABA-A, MAO-A/B + 30 more
CFTR, SMN1, dystrophin, PAH, GBA + 25 more
GLP-1R, PPAR-γ, DPP-4, SGLT2, thyroid receptors + 35 more
Liver, lung, bone, skin, eye, kidney, reproductive, aging, natural medicine
Three components that work together to exceed what any single approach can achieve.
Processes 3D pocket geometry using equivariant graph neural networks. Predictions are invariant to protein orientation — a fundamental physical constraint previous models had to learn from data.
8 layers · k=8 neighbors · sparse message passing
3-billion-parameter protein language model pretrained on 250M sequences provides rich evolutionary and functional context. Captures binding site properties that coordinates alone cannot encode.
2560-dim embeddings · per-residue context
Bidirectional cross-attention models complex ligand-pocket interactions across 6 blocks and 12 heads. Each head learns distinct interaction patterns — hydrophobic contacts, H-bonds, electrostatics.
6 blocks · 12 heads · 65M total params
No credit card required for the free tier. Upgrade when you need more throughput or targets.
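The rotation-invariance property can be illustrated with a toy E(n)-style message-passing step (in the spirit of Satorras et al.'s EGNN). This is a didactic sketch, not VectaBind's actual layer: because messages see coordinates only through pairwise distances, rotating the pocket leaves the node features unchanged.

```python
import numpy as np

def egnn_layer(h, x, w=0.1):
    """One toy E(n)-invariant message-passing step: edge messages depend on
    feature pairs and squared distances only, so node features are unchanged
    by any rotation or translation of the coordinates x."""
    n = x.shape[0]
    d2 = ((x[:, None, :] - x[None, :, :]) ** 2).sum(-1)             # pairwise squared distances
    m = np.tanh(h[:, None, :] + h[None, :, :] + w * d2[..., None])  # edge messages
    m[np.arange(n), np.arange(n)] = 0.0                             # drop self-messages
    return h + m.sum(axis=1)                                        # aggregate over neighbors

rng = np.random.default_rng(0)
h = rng.normal(size=(5, 4))   # node features for 5 pocket residues
x = rng.normal(size=(5, 3))   # their 3D Ca coordinates

q, _ = np.linalg.qr(rng.normal(size=(3, 3)))  # random orthogonal matrix
out1 = egnn_layer(h, x)
out2 = egnn_layer(h, x @ q.T)                 # same pocket, rotated
print(np.allclose(out1, out2))                # True: features unchanged by rotation
```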
Get free API access in minutes. Score your library against any of 473 targets and validate VectaBind against your known actives — no commitment required.
Get free API access →
Questions? vectabind@outlook.com · Response within 24 hours