
Prize Tracks

Prizes

We're excited to offer a total prize pool of $125,000, distributed across various categories to reward innovation and impact in agentic scientific research.

1. Scientific Outcomes Track ($75,000 Total)

This track focuses on projects that generate meaningful scientific outputs, emphasizing information organization, knowledge graph generation, and hypothesis generation.

  • Projects that effectively utilize existing tools and connect them to Eliza will be given high priority.

  • Teams that contribute to the on-chain publication of scientific research artifacts will also be highly favored.

  • Participants are encouraged to consult the judging panel's provided ideas and information to refine their project concepts.

  • Prize distribution (size and number) will be determined by the judges based on submission volume and quality.

  • Given the dynamic nature of scientific research, judges will exercise their discretion in selecting winners.

Prize Breakdown

  • Plugin Sprint ($10,000): In Plug-ins for every piece of research tooling known to humankind, we list the plugins for research tools that we think an agent should be able to use when conducting research. These plugins serve as the basis for the rest of the event. After the first two weeks of work, the Bio.xyz team will award prizes to teams based on the number of plugins created that meet a baseline level of quality.

  • Mid-Hackathon Checkpoint ($25,000): At this checkpoint, the Bio.xyz and Solana teams (in consultation with the judges) will award a portion of the prize money to the teams making the most interesting progress in the event. Prizes will be awarded at the Solana Accelerate event in New York City on May 19th. Projects are not required to build on Solana, but we encourage integration where possible.

  • Grand Prize ($40,000): The remaining portion will be awarded at the DeSci Berlin conference on June 10th.

    • $25,000 of the grand prize is specifically allocated to teams building their projects on the Solana blockchain.

2. CoreAgent Prize ($35,000)

This prize is dedicated to projects that address specific challenges faced by bioDAOs.

  • Solutions that streamline community management, governance orchestration, onboarding, and other time-consuming tasks are highly valued.

  • The goal is to empower bioDAO founders to focus on sourcing, evaluating, and funding high-quality research.

  • Judges will hold recurring office hours to provide guidance and will be available for asynchronous questions.

3. Discretionary Prizes ($15,000)

This portion of the prize pool will be allocated at the judges' discretion to recognize various valuable contributions.

While specific prize categories will not be announced in advance, potential areas of focus include:

  • Effective event promotion and outreach.

  • Significant open-source code contributions to existing Bio.xyz repositories.

  • Thoughtful and active participation throughout the hackathon.

Additional Information

Purposeful Flexibility in Evaluation

We have intentionally designed the prize criteria with a degree of flexibility to allow judges to exercise their expert judgment. The scientific process is inherently creative and multifaceted, and we believe that prescriptive evaluation metrics might inadvertently constrain innovation or miss breakthrough approaches that don't fit neatly into predefined boxes.

Judges have the autonomy to identify and reward excellence as they see it in their domain of expertise. That said, the following general guidelines help frame their evaluation:

General Evaluation Guidelines

1. Scientific Impact

  • Potential to meaningfully advance scientific research or knowledge

  • Novelty and originality of the approach to knowledge graph creation or utilization

  • Quality of insights or hypotheses generated

  • Anticipation of future scientific research needs in the age of AI

2. Technical Implementation

  • Quality and robustness of the knowledge graph model or architecture

  • Appropriate use of AI techniques for knowledge extraction and connection

  • Technical sophistication and innovation

  • Data handling and processing methods

3. Usability and Accessibility

  • Ease of use for scientific researchers

  • Clear and intuitive visualization or presentation of complex data

  • Documentation quality and comprehensiveness

  • Potential for adoption by the broader scientific community

4. Sustainability and Scalability

  • Potential for continued development and improvement

  • Ability to scale to larger datasets or different scientific domains

  • Approach to maintaining data quality and accuracy over time

  • Open-source commitment and community engagement potential

Important Notes

  • The judges' decisions are final.

  • Prize distribution is subject to change based on submission quality and volume.

  • We strongly encourage all participants to ask questions and reach out to the judges and mentors.

  • Bio Protocol will require an invoice from prize recipients, including at least one full name and address, in order to disburse funds.

  • All funds will be disbursed in the form of $BIO tokens, which will unlock linearly over the course of a year.
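For teams budgeting around the vesting schedule, here is a minimal sketch of how a one-year linear unlock might be computed. The day-by-day schedule and parameters below are illustrative assumptions, not confirmed details of the $BIO distribution mechanism.

```python
from datetime import date

def unlocked_amount(total_tokens: float, start: date, today: date, vesting_days: int = 365) -> float:
    """Estimate tokens unlocked under a simple linear vesting schedule.

    Assumes a hypothetical day-by-day linear unlock over `vesting_days`;
    the actual $BIO unlock mechanics may differ.
    """
    elapsed = (today - start).days
    fraction = min(max(elapsed / vesting_days, 0.0), 1.0)  # clamp to [0, 1]
    return total_tokens * fraction

# Example: roughly halfway through the year, about half the award is unlocked.
print(unlocked_amount(10_000, date(2025, 6, 10), date(2025, 12, 10)))  # ~5013.7
```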

