
Creating an Iterative Publication Stack by Linking Together Existing Tooling

Problem Statement:

Researchers rely on a diverse set of tools for their daily work, including lab notebooks, data analysis software, collaboration platforms, and code repositories. These tools often operate in silos, hindering seamless data sharing and FAIR publication. Rather than building a new publication platform from scratch, retrofitting these existing tools with decentralized web integrations can foster a more open and collaborative research environment.

Challenge:

Develop a system that integrates existing research tools (e.g., Jupyter Notebooks, lab notebooks, GitHub, data repositories) with decentralized web technologies to enable iterative, FAIR-compliant publication of research outputs.
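
One possible shape for such a system is a common integration interface that every tool plugin implements, so a Jupyter, GitHub, or lab-notebook integration can all hand artifacts to the same publication pipeline. The sketch below is illustrative only; the names (Artifact, ToolIntegration, extract_artifacts, describe) are assumptions, not a prescribed API.

```python
# Illustrative only: one possible shape for the integration layer.
# All class and method names here are assumptions, not a prescribed API.
from dataclasses import dataclass, field
from typing import Protocol


@dataclass
class Artifact:
    """A single publishable research output (data file, notebook, protocol, ...)."""
    name: str
    path: str                      # local path to the file
    kind: str                      # e.g. "data", "code", "protocol", "documentation"
    metadata: dict = field(default_factory=dict)


class ToolIntegration(Protocol):
    """Interface each tool plugin (Jupyter, GitHub, ELN, ...) would implement."""

    def extract_artifacts(self) -> list[Artifact]:
        """Collect publishable outputs from the underlying tool."""
        ...

    def describe(self, artifact: Artifact) -> dict:
        """Return FAIR metadata (e.g. a schema.org JSON-LD record) for an artifact."""
        ...
```

With a shared interface like this, the decentralized storage and FAIRification steps described below never need tool-specific logic.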

Detailed Description:

  • Tool Integration:

    • Identify commonly used research tools and develop integration modules or plugins (a minimal Jupyter sketch follows this group).

    • Focus on tools that handle data, code, protocols, and documentation.

    • Prioritize open-source tools and APIs.
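
As one concrete starting point for an integration module, the sketch below reads a Jupyter notebook with nbformat and pulls out its code cells as a publishable code artifact. It assumes nbformat is installed; the file paths are placeholders.

```python
# Illustrative Jupyter integration: extract code cells from a notebook so they
# can be published alongside the data they produced.
# Assumes `pip install nbformat`; the notebook path is a placeholder.
import nbformat


def extract_code_cells(notebook_path: str) -> str:
    """Concatenate the source of all code cells in a notebook."""
    nb = nbformat.read(notebook_path, as_version=4)
    code_cells = [cell.source for cell in nb.cells if cell.cell_type == "code"]
    return "\n\n".join(code_cells)


if __name__ == "__main__":
    code = extract_code_cells("analysis.ipynb")   # placeholder path
    with open("analysis_code.py", "w") as fh:
        fh.write(code)
```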

  • Decentralized Web Integration:

    • Utilize decentralized web technologies (e.g., IPFS, Solana) to store and share research outputs; IPFS is required as the storage layer to ensure interoperability (see the storage sketch after this group).

    • Implement decentralized identifiers (DIDs) for researchers and research artifacts.

    • Enable verifiable data provenance and integrity.
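
For the required IPFS storage layer, a minimal sketch might add a file to a local node and record the returned CID. It assumes a local Kubo (go-ipfs) daemon exposing its default RPC API on port 5001 and the requests library; DID issuance and signing are left out.

```python
# Illustrative sketch: pin a research artifact to a local IPFS node and keep its CID.
# Assumes a Kubo (go-ipfs) daemon is running locally with the default RPC port 5001
# and that the `requests` package is installed.
import requests

IPFS_API = "http://127.0.0.1:5001/api/v0"


def add_to_ipfs(path: str) -> str:
    """Add a file to the local IPFS node and return its content identifier (CID)."""
    with open(path, "rb") as fh:
        resp = requests.post(f"{IPFS_API}/add", files={"file": fh})
    resp.raise_for_status()
    return resp.json()["Hash"]


if __name__ == "__main__":
    cid = add_to_ipfs("dataset.csv")       # placeholder file
    print(f"Stored at ipfs://{cid}")
```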

  • FAIRification Modules:

    • Develop modules that guide researchers through the FAIRification process within their existing tools.

    • Automate metadata generation, ontology mapping, and data licensing (a minimal metadata sketch follows this group).

    • Provide tools for creating and managing persistent identifiers (PIDs).
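
A FAIRification module could emit something like the schema.org JSON-LD record sketched below. The field values are placeholders, and the identifier handling is simplified to a plain checksum standing in for a minted PID.

```python
# Illustrative metadata generation: a minimal schema.org Dataset record in JSON-LD.
# Field values are placeholders; a real module would pull them from the tool context.
import hashlib
import json
from datetime import date
from pathlib import Path


def dataset_metadata(path: str, creator: str, license_url: str) -> dict:
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    return {
        "@context": "https://schema.org",
        "@type": "Dataset",
        "name": Path(path).name,
        "creator": {"@type": "Person", "name": creator},
        "license": license_url,
        "dateModified": date.today().isoformat(),
        # A persistent identifier (DOI, CID, ...) would be minted elsewhere;
        # here a plain checksum stands in.
        "identifier": f"sha256:{digest}",
    }


if __name__ == "__main__":
    record = dataset_metadata(
        "dataset.csv", "Ada Lovelace", "https://creativecommons.org/licenses/by/4.0/"
    )
    print(json.dumps(record, indent=2))
```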

  • Interoperability and Data Sharing:

    • Design the system to ensure interoperability between different research tools and decentralized web platforms.

    • Implement standardized data formats and APIs for data exchange, as illustrated after this group.

    • Enable researchers to easily share and reuse data across different tools and platforms.
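
One lightweight route to cross-tool exchange is a shared manifest that every integration reads and writes. The layout below is purely an assumption meant to illustrate the idea; the project name and CIDs are placeholders.

```python
# Illustrative exchange manifest: a tool-neutral description of what a project has
# published, which any integration could read or extend. The layout is an assumption.
import json

manifest = {
    "project": "example-study",                              # placeholder name
    "version": 3,
    "artifacts": [
        {"kind": "data", "name": "dataset.csv", "cid": "<cid-of-dataset>"},
        {"kind": "code", "name": "analysis.ipynb", "cid": "<cid-of-notebook>"},
        {"kind": "protocol", "name": "protocol.md", "cid": "<cid-of-protocol>"},
    ],
}

with open("manifest.json", "w") as fh:
    json.dump(manifest, fh, indent=2)
```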

  • Iterative Publication Workflow:

    • Develop a workflow that allows researchers to publish their research outputs incrementally.

    • Enable version control and provenance tracking for all published artifacts (a minimal provenance sketch follows this group).

    • Support the publication of data, code, protocols, and preliminary findings.
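
Incremental publication can be modelled as a chain of version records, each pointing at the hash of the previous one. The sketch below uses plain SHA-256 hashing and placeholder CIDs; it illustrates the idea rather than a full signed-provenance scheme.

```python
# Illustrative provenance chain: every published version records what changed and
# points at the hash of the previous version, so the full history stays verifiable.
import hashlib
import json
from datetime import datetime, timezone


def publish_version(artifact_cids: dict, note: str, previous_hash: str | None) -> dict:
    record = {
        "published_at": datetime.now(timezone.utc).isoformat(),
        "artifacts": artifact_cids,        # e.g. {"dataset.csv": "<cid>", ...}
        "note": note,                      # what changed in this iteration
        "previous": previous_hash,         # None for the first release
    }
    record["self"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    return record


v1 = publish_version({"dataset.csv": "<cid-1>"}, "initial raw data", None)
v2 = publish_version({"dataset.csv": "<cid-2>"}, "cleaned outliers", v1["self"])
print(json.dumps(v2, indent=2))
```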

  • User Experience:

    • Design the integrations to be seamless and non-intrusive for researchers.

    • Minimize the learning curve and provide clear documentation.

    • Focus on automating as many FAIRification steps as possible.

  • Output:

    • Working integrations or plugins for existing research tools.

    • A decentralized storage and sharing system for research outputs.

    • A workflow for iterative, FAIR-compliant publication.

    • Documentation and guidelines for researchers.

  • Potential Technologies:

    • IPFS, Filecoin, Solid, distributed databases.

    • Decentralized identifiers (DIDs).

    • Web APIs and integration frameworks.

    • Metadata standards and ontologies (e.g., Dublin Core, DataCite, schema.org).

    • Version control systems (Git).

  • Evaluation Metrics:

    • Ease of integration with existing tools.

    • FAIR compliance of published outputs.

    • Interoperability between different tools and platforms.

    • User-friendliness and adoption by researchers.

    • Security and reliability of the decentralized storage system.

Desired Outcomes:

  • A set of working integrations that empower researchers to publish FAIR-compliant outputs using their existing tools.

  • A shift towards more decentralized and collaborative research practices.

  • Improved data sharing and reuse within the scientific community.

  • A reduction in data silos and artifact fragmentation.
