
BioAgents

A fleet of one million scientific Agents for knowledge work



BioAgents are a leap beyond static predictions, harnessing a user-run, decentralized, agentic system built on the Eliza framework and utilizing new software for running the agents either at home or in the cloud.

Modified from the FutureHouse Schematic

When Google DeepMind released AlphaFold, it changed how we approach protein structure prediction. Incredibly, over 2 million people donated their compute and electricity for years, completely altruistically. The program resulted in over 200,000 protein structures being solved and made publicly available, and in DeepMind's team winning the 2024 Nobel Prize in Chemistry.

Looking forward, we have to ask two questions: how many more people will contribute to research if they receive the financial benefits of their compute? And what if that compute, instead of running a mechanistic model, has the ever-expanding reasoning capabilities of LLM agents?

Each BioAgent runs continuously and becomes part of the BioAgent fleet. The individual agents take on tasks assigned by the BioDAOs, and as a whole the fleet becomes capable of the following:

  1. Reading and Analyzing Vast Scientific Literature

    • Modern biology produces over one million new publications annually, making it nearly impossible for any single human to keep pace. BioAgents can split up and digest these papers in parallel.

    • Specialized agents each focus on a particular domain—genomics, proteomics, or a specific disease area—then reconvene to synthesize findings with near-perfect recall.

  2. Seamless Interaction with a Decentralized Knowledge Graph (DKG)

    • Beyond just reading, BioAgents also write findings to a shared knowledge layer. Using a fork of OriginTrail’s DKG plugin, our Eliza agent automatically adds new nodes and edges based on insights drawn from literature. Over time, this builds a publicly accessible “BioGraph” of all relevant scientific information.

  3. Executing Complex Reasoning and Decision Trees

    • Biology research often demands step-by-step logic: if an assay reveals a certain result, pivot to the next experimental protocol. AI fleets can collectively brainstorm hypotheses, refine them, and then plan multifaceted studies—much like a team of expert scientists collaborating in real time.

  4. High-Throughput Data Analysis (Multi-Omics & Imaging)

    • From patient genomic data to massive libraries of microscopy images, data in biotech is extensive and unwieldy. Each BioAgent can tackle a slice of the dataset—one agent for gene expression, another for imaging, yet another for proteomics—and share insights to form a cohesive interpretation.

  5. Building Claims and Corroborating Data for IP-NFTs
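As a rough sketch of how the fleet might divide this work, the hypothetical TypeScript below routes papers to domain-specialized agents and then reconvenes their findings. All names (`DomainAgent`, `dispatch`) are illustrative assumptions, not part of the Eliza framework API.

```typescript
// Hypothetical sketch of fleet dispatch. Everything here is illustrative:
// a real BioAgent would call an LLM and the Eliza runtime instead.
type Domain = "genomics" | "proteomics" | "imaging";

interface Paper { id: string; domain: Domain; abstract: string; }
interface Finding { paperId: string; domain: Domain; summary: string; }

class DomainAgent {
  readonly domain: Domain;
  constructor(domain: Domain) { this.domain = domain; }
  // A real agent would analyze the text; here we just tag the paper.
  analyze(paper: Paper): Finding {
    return { paperId: paper.id, domain: this.domain, summary: `analyzed ${paper.id}` };
  }
}

// Route each paper to the agent for its domain, then reconvene:
// collect every agent's findings into one shared list.
function dispatch(papers: Paper[], agents: Map<Domain, DomainAgent>): Finding[] {
  return papers.map((paper) => {
    const agent = agents.get(paper.domain);
    if (!agent) throw new Error(`no agent for domain ${paper.domain}`);
    return agent.analyze(paper);
  });
}

const agents = new Map<Domain, DomainAgent>([
  ["genomics", new DomainAgent("genomics")],
  ["proteomics", new DomainAgent("proteomics")],
]);

const findings = dispatch(
  [
    { id: "p1", domain: "genomics", abstract: "..." },
    { id: "p2", domain: "proteomics", abstract: "..." },
  ],
  agents,
);
console.log(findings.length); // 2
```

The key design point is the "reconvene" step: specialization happens per paper, but the merged `Finding[]` is what gets synthesized into the shared knowledge layer.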

To complete the comparison to AlphaFold: with BioAgents, we move from donation-based protein prediction to an incentivized, decentralized, collaborative intelligence that accelerates every step of biotech R&D.

We are selecting specific, repeatable steps of the scientific process and perfecting an agent to complete each step extremely well. One way to think about this: we are automating the movement of data from a less valuable format to a more valuable one.

research paper → knowledge graph → hypothesis generation → experiment design → experimental validation → IP claim → IP productization/sale
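The value ladder above can be written down as an ordered list of stages. The helper below is purely illustrative (not real BioAgent code) and simply advances a dataset one rung at a time:

```typescript
// The data-value pipeline from the text, as an ordered tuple of stages.
const STAGES = [
  "research paper",
  "knowledge graph",
  "hypothesis generation",
  "experiment design",
  "experimental validation",
  "IP claim",
  "IP productization/sale",
] as const;

type Stage = (typeof STAGES)[number];

// Advance to the next, more valuable format; null once fully productized.
function nextStage(current: Stage): Stage | null {
  const i = STAGES.indexOf(current);
  return i >= 0 && i < STAGES.length - 1 ? STAGES[i + 1] : null;
}

console.log(nextStage("knowledge graph")); // "hypothesis generation"
```

Each BioDAO enters this ladder at a different rung, which is exactly where the "unique bottlenecks" mentioned below come from.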

Each BioDAO is going through this process, and has unique bottlenecks according to their scientific field and research goals. This is where BioAgents will help the most.

Let’s take a moment to talk about what BioAgents are not:

  • They do not come with a new token

  • They are not chatbots

  • They are not social media influencers

BioAgents are agentic science machines, performing non-stop knowledge work in service of the BioDAOs.

BioAgents run on $BIO

Instead of relying on donations of compute, we properly reward people for their agentic contributions. Below, we also discuss a new buy & burn mechanism for BIO that covers any external usage of the BioAgent fleet.

Parameters and Caps

Since each agent requires real network resources, we’ll set a cap on how many agents can be utilized at once—adjustable as our capacity and research demands evolve. If all BioDAO research needs are satisfied at a given moment, we anticipate finding useful tasks for the BioAgents from biotech startups and pharma firms.

The fees generated from external partners utilizing the BioAgent fleet will be burnt. This adds a new deflationary element to the BIO token, proportionate to the size of the BioAgent fleet and the value of the tasks it can be assigned.
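A minimal sketch of this cap-and-burn accounting follows. The parameter values (`MAX_ACTIVE_AGENTS`, `BURN_RATE`) are invented for illustration and are not actual BIO tokenomics:

```typescript
// Hypothetical parameters, illustrative only, not actual BIO tokenomics.
const MAX_ACTIVE_AGENTS = 10_000; // adjustable fleet cap
const BURN_RATE = 1.0;            // fraction of external fees burnt

interface ExternalTask {
  client: string;
  feeInBio: number;
  agentsRequested: number;
}

// Accept an external task only if it fits under the fleet cap, and
// compute how much BIO its fee removes from circulation.
function settleTask(
  task: ExternalTask,
  agentsBusy: number,
): { accepted: boolean; burned: number } {
  if (agentsBusy + task.agentsRequested > MAX_ACTIVE_AGENTS) {
    return { accepted: false, burned: 0 };
  }
  return { accepted: true, burned: task.feeInBio * BURN_RATE };
}

// 9,900 agents already busy + 200 requested exceeds the 10,000 cap.
console.log(settleTask({ client: "pharmaco", feeInBio: 500, agentsRequested: 200 }, 9_900));
```

Note how the burn is proportional to fee value while the cap bounds total resource usage, matching the two levers described above.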


BioAgents can perform their own experimental design and submit actual experiments for testing to a virtual wet lab appropriate to the experiment, such as Ora, Gingko, and many more.

The crowning achievement: generating robust, evidence-based claims that can be packaged into IP-NFTs. Once a BioAgent in the fleet uncovers a novel discovery—like a promising drug target or an innovative synthetic biology design—it can format a claim for an IP submission and store that IP onchain through Molecule, ensuring transparent, tamper-proof provenance.
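For illustration, the record a BioAgent assembles before such a submission might look like the sketch below. The field names are assumptions for this example, not an actual IP-NFT schema:

```typescript
// Illustrative claim record; field names are assumptions, not a real
// IP-NFT schema. The point is pairing every assertion with evidence
// and provenance metadata before anything goes onchain.
interface EvidenceRef {
  source: string; // e.g. a paper or dataset identifier
  doi?: string;
}

interface IpClaim {
  title: string;
  assertion: string;
  evidence: EvidenceRef[];
  createdBy: string; // agent id, for provenance
  createdAt: string; // ISO timestamp
}

function buildClaim(
  title: string,
  assertion: string,
  evidence: EvidenceRef[],
  agentId: string,
): IpClaim {
  // A claim without corroborating data should never reach submission.
  if (evidence.length === 0) throw new Error("a claim needs corroborating evidence");
  return {
    title,
    assertion,
    evidence,
    createdBy: agentId,
    createdAt: new Date().toISOString(),
  };
}
```

The hard requirement that `evidence` be non-empty mirrors the "corroborating data" half of capability 5 above.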

The BIO token is the key way we incentivize and manage the growth of the BioAgent fleet. Critically, we see blockchain and network incentivization through BIO as what AlphaFold is missing to scale beyond 2 million nodes at home.
