FAIR Chemistry Leaderboard

Welcome to the FAIR Chemistry Leaderboard! 🧪

This space hosts comprehensive leaderboards across chemical domains, including molecules, catalysts, and materials.

Note: Leaderboards previously hosted on EvalAI (such as OC20) will be migrated here in the near future.

🧬 OMol25

This leaderboard evaluates performance on the Open Molecules 2025 (OMol25) dataset—a diverse, high-quality collection that uniquely combines elemental, chemical, and structural diversity.

📖 Learn more: OMol25 Paper

Benchmarks

S2EF (Structure to Energy and Forces)

  • Test and validation sets across different molecular categories
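As a rough illustration of the task itself (not the official evaluation code), S2EF asks a model to map an atomic structure to a total energy and per-atom forces. A minimal, runnable sketch using ASE, with the built-in EMT calculator standing in for a trained model:

```python
# S2EF in miniature: structure in, energy and forces out.
# EMT is only a stand-in so the example runs; in practice you would
# attach your trained MLIP's ASE calculator here instead.
from ase.build import molecule
from ase.calculators.emt import EMT

atoms = molecule("H2O")
atoms.calc = EMT()  # stand-in calculator; replace with your model

energy = atoms.get_potential_energy()  # total energy (eV)
forces = atoms.get_forces()            # (n_atoms, 3) forces (eV/Å)
print(energy, forces.shape)
```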

Evaluations

Downstream chemistry tasks that evaluate practical applications (a sketch of the definitions behind several of these follows the list):

  • Ligand Pocket: Protein-ligand interaction energy as a proxy for binding energy
  • Ligand Strain: Ligand-strain energy crucial for understanding protein-ligand binding
  • Conformers: Identifying the lowest energy conformer
  • Protonation: Energy differences between protonated structures (proxy for pKa prediction)
  • Distance Scaling: Short and long range intermolecular interactions
  • IE/EA: Ionization energy and electron affinity
  • Spin Gap: Energy differences between varying spin states
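To make these quantities concrete, here is a small illustrative sketch of the textbook definitions behind several of the tasks. The energy values are placeholders, and the exact evaluation protocols are defined in the OMol25 paper:

```python
# Textbook definitions behind several evaluation tasks, with made-up
# energies (eV). The leaderboard's exact protocols are in the OMol25 paper.
e_neutral, e_cation, e_anion = -2078.41, -2071.13, -2079.02

ionization_energy = e_cation - e_neutral  # IE: cost of removing an electron
electron_affinity = e_neutral - e_anion   # EA: gain from adding an electron

e_low_spin, e_high_spin = -2078.41, -2077.85
spin_gap = e_high_spin - e_low_spin       # Spin Gap: difference between spin states

conformer_energies = [-310.2, -310.9, -310.5]
lowest = conformer_energies.index(min(conformer_energies))  # Conformers task

e_bound_pose, e_relaxed_free = -310.1, -310.9
ligand_strain = e_bound_pose - e_relaxed_free  # Ligand Strain proxy
```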

📋 Getting Started

Ready to submit your model? Check out our steps for running benchmarks and generating prediction files:

🔗 Submission Documentation

Copy the following snippet to cite these results:

@article{levine2025open,
  title={The Open Molecules 2025 (OMol25) Dataset, Evaluations, and Models},
  author={Levine, Daniel S and Shuaibi, Muhammed and Spotte-Smith, Evan Walter Clark and Taylor, Michael G and Hasyim, Muhammad R and Michel, Kyle and Batatia, Ilyes and Cs{\'a}nyi, G{\'a}bor and Dzamba, Misko and Eastman, Peter and others},
  journal={arXiv preprint arXiv:2505.08762},
  year={2025}
}

Evaluations

Overview rankings are based on the average rank across all evaluations.

S2EF

How to Submit

To submit your model predictions:

  • 📝 Step 1: Generate prediction files for the appropriate task (see here for details)
  • 🔐 Step 2: Sign in with Hugging Face
  • 📋 Step 3: Fill in the submission metadata (name, organization, contact info, etc.)
  • 🎯 Step 4: Select the evaluation type that matches your prediction file
  • 📤 Step 5: Upload your file and click Submit Eval
  • ⏱️ Step 6: Wait for the evaluation to complete and see the "✅" message in the Status bar

📊 Submission Limits: Users are limited to 5 successful submissions per month for each evaluation type.

⚠️ Important Notes:

  • File Format: Ensure your prediction file matches the expected format for the selected evaluation: .npz for S2EF and .json for Evaluations (see the sketch after these notes)
  • 🔐 Privacy: Your email will be stored privately and only used for communication regarding your submission
  • 📈 Results: Results will appear on the leaderboard after successful validation
  • ⏱️ Wait Time: Remain on the page until you see the "Success" message. Evaluations can take several minutes, so please be patient
  • 🗑️ Removal: If you wish to have your model removed from the leaderboard, please reach out to mshuaibi@meta.com with the model name and submission date
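For orientation only, the sketch below shows how prediction files in the two containers might be assembled. The field names are assumptions for illustration; the authoritative schema is in the submission documentation linked above:

```python
# Illustrative only: the key names ("ids", "energy", "forces", ...) are
# assumptions; follow the schema in the submission documentation.
import json
import numpy as np

# S2EF predictions go in a .npz archive of arrays.
np.savez_compressed(
    "s2ef_predictions.npz",
    ids=np.array(["mol-0", "mol-1"]),
    energy=np.array([-2078.41, -310.92]),                         # eV, placeholders
    forces=np.concatenate([np.zeros((3, 3)), np.zeros((5, 3))]),  # per-atom rows
)

# Evaluations predictions go in a .json file keyed by example id.
with open("evaluations_predictions.json", "w") as f:
    json.dump({"mol-0": -2078.41, "mol-1": -310.92}, f, indent=2)
```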

💬 Need Help?

This leaderboard is actively being developed, and we welcome feedback and contributions!

📞 Contact us: mshuaibi@meta.com
