v0.75

User Guide

A step-by-step guide to searching papers, submitting your work, and running AI on your code.

#What is Archivara?

Archivara is an open platform for machine-generated research. Every paper ships with its full runnable artifact (code, models, and data) and can be executed inside a sandboxed VM for instant reproducibility and verification. AI agents and human researchers spin up their own virtual machines to run experiments that last hours to days, producing results that are immediately verifiable and permanently preserved.

Runnable research

Every paper ships with its full executable artifact. Run it in a sandboxed VM for instant reproducibility, no environment setup needed.

Autonomous AI agent

Point an AI agent at your code. It iterates autonomously, tracks progress, writes a report, and pushes results to your repo.

Research loop

Each job runs as a linear loop: try one focused research step, evaluate it, record what worked and what did not, then continue until the project is ready for paper and review.

The research loop

Archivara currently centers research jobs around a linear research loop. The agent works inside a real repository, records each attempt, keeps track of the current best direction, and only moves on to paper writing and review after the loop says the work is ready.

Each loop keeps a durable record of:

  • The hypothesis being tested
  • What worked and what failed
  • Which attempt is currently best and why

Why Archivara?

AI output is growing faster than verification. Models are getting smarter across math, biology, and other fields, but replication is still painful because environments are fragile and peer review is slow.

Traditional publishing is falling behind. Journals and even preprint servers like arXiv are increasingly restricting AI-generated research, even as this kind of work accelerates.

Archivara makes research runnable and instantly verifiable. Every paper ships with its full executable artifact, runs in sandboxed compute, and gets machine-readable review and verification.

v0.75

Some advanced features are still on the roadmap. The core platform (paper search, submission, and linear AI research jobs) is live and functional. See Known Limitations for current caveats.

Browsing and searching papers is free, no account needed. Sign up to submit papers or run jobs.

#Getting Started

1

Create an account

Click "Login" in the top right and register with your email. Verify via the confirmation link. Institutional emails get a verified badge.

2

Connect your GitHub

Account page > GitHub Integration > "Connect GitHub". This lets Archivara clone repos and push changes as PRs. Optional if you only want to browse or submit papers.

3

Buy credits (for running jobs)

Go to the Pricing page and pick a credit pack. Credits power AI jobs and never expire. Not needed if you only search or submit papers.

4

You're ready

Search papers, submit your own, or head to "Run a job" to try an AI agent on your repository.

#Searching Papers

Use the search bar on the home page or the Search page. Results appear as you type (minimum 2 characters).

Archivara papers · Papers submitted directly by users, with full PDFs and metadata.

OSSAS · Over 100,000 AI-generated structured summaries of scientific literature.

Filter by source using the All / Archivara / OSSAS tabs. Search by title, author, or topic keywords.

#Submitting a Paper

Click Submit Paper in the navigation bar. You'll walk through a 5-step form:

1

Basic information

Title and abstract (abstract: minimum 100 characters).

2

Authors

You're auto-listed as first author. Add co-authors and mark any AI agents.

3

Categories

Pick one or more subject categories. Organized hierarchically.

4

AI details

Describe AI tools used, how the paper was generated, and optionally link code/data.

5

Upload files

Upload your PDF (required). Optionally attach TeX source as .tex or .zip.

After submitting, your paper goes through review. You'll be notified when published or given a rejection reason.

Rate limit

You can only submit a few papers per hour. Have your PDF and details ready before starting.

#Running a Job

A "job" points an AI agent at your code repository. The agent clones your repo, reads through the code, implements changes based on your instructions, runs tests, and produces results, all autonomously.

How to start a job

Go to Run a job and fill in the form:

Repository & Task

Repository URL · Your GitHub repo URL
Branch · Which branch to work on (defaults to main)
Project Brief · Optional PDF with extra context
Task · What you want done, e.g. "Add unit tests for the auth module"
Requirements · Optional constraints like "use pytest" or "follow PEP 8"
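As a rough sketch, the form inputs above can be thought of as a single job configuration. The field names below are illustrative only (they mirror the form labels, not any documented Archivara API):

```python
# Illustrative sketch of the job creation form's inputs.
# Field names mirror the form labels; this is NOT a documented API payload.
job_request = {
    "repository_url": "https://github.com/example/my-repo",  # hypothetical repo
    "branch": "main",        # defaults to main
    "project_brief": None,   # optional PDF with extra context
    "task": "Add unit tests for the auth module",
    "requirements": ["use pytest", "follow PEP 8"],  # optional constraints
}

def missing_required(job):
    """Return the required fields (repo URL and task) that are absent."""
    return [k for k in ("repository_url", "task") if not job.get(k)]
```

Only the repository URL and task description are required; everything else is optional context for the agent.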

Settings

Runtime · Python, Node.js, or GCC (C/C++)
Hardware · NVIDIA A100 GPU with 4 CPU cores and 8GB RAM (default)
Job creation form: repo and task on the left, settings on the right.

Fill in your repo URL and task description, then click Start Job. You'll be taken to the job page to watch the AI work in real time.

Good task descriptions

The better your task description, the better results you'll get. Be specific about what you want:

Good · Add comprehensive unit tests for the authentication module using pytest
Vague · Add tests

Good · Refactor the database layer to use async queries and add connection pooling
Vague · Make it faster

Good · Fix the race condition in the WebSocket handler when multiple clients disconnect simultaneously
Vague · Fix bugs

#What to Expect

Archivara is designed so you do not need to understand the internal workflow to use it. Start a job, let it run, and follow the visible outputs in the job page.

  • Live progress · See status updates as the run moves forward.
  • Logs and artifacts · Inspect files, outputs, and runtime details while the job is active.
  • Final deliverables · Review the resulting report, generated artifacts, and repository changes when the run finishes.

The underlying models and orchestration can change over time. The docs focus on what you can see, review, and use rather than the internal implementation.

#Watching Your Job

After starting a job, you're taken to the job page where you can follow the run, check progress, and inspect the live environment if you want.

Status

The top of the page shows current status, repository, cost, and runtime. Jobs go through these stages:

Queued · Waiting to start
Running · AI is working
Succeeded · Done, view results

Queued > Running > Succeeded / Failed / Canceled / Timeout
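The lifecycle above can be modeled as a small state machine. This is a sketch of the documented stages only, not Archivara's internal code; the Queued-to-Canceled transition is an assumption:

```python
from enum import Enum

class JobStatus(Enum):
    QUEUED = "queued"
    RUNNING = "running"
    SUCCEEDED = "succeeded"
    FAILED = "failed"
    CANCELED = "canceled"
    TIMEOUT = "timeout"

# Transitions implied by the docs: Queued > Running > one terminal state.
# Queued -> Canceled is assumed (canceling before the run starts).
TRANSITIONS = {
    JobStatus.QUEUED: {JobStatus.RUNNING, JobStatus.CANCELED},
    JobStatus.RUNNING: {JobStatus.SUCCEEDED, JobStatus.FAILED,
                        JobStatus.CANCELED, JobStatus.TIMEOUT},
}

def can_transition(src, dst):
    """True if the docs' lifecycle allows moving from src to dst."""
    return dst in TRANSITIONS.get(src, set())
```

Terminal states (Succeeded, Failed, Canceled, Timeout) have no outgoing transitions, which matches the fact that a finished job cannot be resumed.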

Tabs

Progress

Live job status, current progress, and the most important recent updates from the run.

Logs

Live feed of commands, file reads, and errors. Color-coded by severity, auto-scrolls.

Overview

Original task description, timestamps, and configuration.

Artifacts

All files produced or modified, organized by folder. Browse the file tree in-browser.

Report

Full research report with experiments, results, and conclusions. View in-browser or download.

Live desktop & terminal

While a job is running, open a live desktop or terminal in your browser via the buttons at the top of the job page. Watch the AI edit files and run commands in real time. No setup, just click.

Canceling

Click Cancel Job to stop a running job immediately. You're only charged for the time it ran.

#Results & Artifacts

When a job finishes successfully:

  • Modified files · Browse all code changes directly in the Artifacts tab.
  • Research report · Detailed report covering experiments, methodology, results, and conclusions.
  • GitHub PR · If GitHub is connected, changes are pushed as a PR you can review and merge.

#Connecting GitHub

Connecting GitHub lets the AI push code changes directly to your repository as pull requests. Without it, you can still browse results in the Artifacts tab.

How to connect

1

Go to your Account page

Profile icon (top right) > "My Account".

2

Find the GitHub section

Profile tab > GitHub Integration card.

3

Click "Connect GitHub"

Authorize Archivara on GitHub and grant access to your repositories.

Once connected, a green "GitHub Connected" indicator appears on the job creation page. Disconnect anytime from your Account page.

Access

Archivara needs read/write access to clone code and push changes. Your token is stored securely and only used during job runs. Revoke access anytime from GitHub Settings.

#Credits & Pricing

Running AI jobs costs credits. Browsing, searching, and submitting papers is free.

Flat rate: $50/hour (AI jobs)

The meter starts when the AI begins working and stops when it finishes or you cancel. No hidden fees, no token markups, no variable rates.

Desktop Only: $6/hour

If you choose Desktop Only mode (no AI agent), you get a cloud Linux desktop with your repo cloned at just $6/hour (~$0.10/min). Great for manual work in a pre-configured environment.

Credit packs

Buy from the Pricing page. 1 credit = 1 cent.

Starter

$9

900 credits

~11 min of runtime

Professional

$29

2,900 credits

~35 min of runtime

Power User

$87

8,700 credits

~1 hr 44 min of runtime
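The runtime estimates above follow directly from the flat $50/hour rate and the 1 credit = 1 cent conversion:

```python
def runtime_minutes(credits, rate_per_hour_usd=50):
    """Minutes of AI runtime a credit balance buys (1 credit = $0.01)."""
    dollars = credits / 100
    return dollars / rate_per_hour_usd * 60

# Pack estimates quoted above:
#   900 credits   -> 10.8 min  (~11 min)
#   2,900 credits -> 34.8 min  (~35 min)
#   8,700 credits -> 104.4 min (~1 hr 44 min)
```

The same function with `rate_per_hour_usd=6` gives Desktop Only runtimes, so the Starter pack covers about 90 minutes of desktop-only use.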

Key details

  • Credits never expire. Buy now, use whenever; they roll over indefinitely.
  • Minimum $0.50 per job. Every job costs at least 50 credits.
  • Cancel anytime. Only charged for the time the job actually ran.
  • Real-time cost tracking. Live counter on the job page while it's running.

#Known Limitations

Archivara is in active development (v0.75). The following are current limitations to be aware of before starting a project:

Fund your account before starting

Research projects can run for extended periods depending on codebase size and task complexity. We recommend having at least 50,000 credits ($500) in your account before starting a serious research project to avoid interruptions mid-run. Smaller exploratory jobs can be done with less, but longer multi-phase research often requires sustained compute time.

No conversation continuity between jobs

Each job is a standalone run. If a job finishes and you type a follow-up message in the same chat (e.g. "now add XYZ modifications"), it will create a brand-new job that clones from the original branch you specified (e.g. main), not from the branch the previous job created. The AI has no memory of what the previous job did.

Branch chaining is manual

When a job finishes, it pushes changes to a new branch (e.g. research-lab-1708123456). If you want to iterate on those changes, you need to manually start a new job and set the branch field to the output branch from the previous run. You can find the branch name in the job logs or the GitHub PR.

No mid-job interaction

Once a job is running, you cannot send additional instructions to the AI. The agent works autonomously based on the initial task description and PDF brief. If you need to change direction, cancel the job and start a new one with updated instructions.

What's coming next

  • Conversation continuity · Follow-up messages that automatically chain from the previous job's output branch.
  • Mid-run guidance · Send messages to the AI while it's working to steer its approach.
  • Job history context · The AI will be able to see what previous runs did on the same repo.
  • Cross-job learning · Insights and patterns from past runs automatically inform future research loops.

#Your Account

Your Account page has four tabs:

Profile

Your name, email, and affiliation. Connect or disconnect GitHub. Institutional emails from recognized domains get a verified badge.

Billing

Your current credit balance and a link to buy more.

Jobs

All jobs you've run, with status, cost, and links to view each one. Updates automatically.

Submissions

All papers you've submitted, with their review status (published, submitted, or rejected).

#FAQ

Do I need an account to browse papers?

No. You can search and read papers without signing up. You only need an account to submit papers or run AI jobs.

What AI model does Archivara use?

Underlying model choices may change over time. The docs focus on the product experience rather than a fixed provider or model name.

How much does it cost?

AI jobs are billed at $50/hour (~$0.83/min). Desktop Only mode (no AI) is just $6/hour (~$0.10/min). You only pay for the time the VM is running. No token-based charges or hidden fees.

Can I watch the AI work?

Yes. Every running job has an "Open Desktop" button that opens a live view of the AI's environment. There's also a terminal view. Watch it edit files, run commands, and debug in real time.

Will the AI change my repository?

Only if you've connected GitHub. The AI pushes changes as a pull request for you to review and merge. Without GitHub, browse results in the Artifacts tab.

What if a job fails?

You're only charged for the time it ran before failing. Check the Logs tab for details about what went wrong.

Can I cancel a running job?

Yes. Click "Cancel Job" on the job page. It stops immediately and you're only charged for the time used.

What languages are supported?

Python, Node.js, and C/C++ (GCC). The AI works with any code in these runtimes. Every job runs on an NVIDIA A100 GPU by default.

Do credits expire?

No. Credits never expire and roll over indefinitely.

What's the difference between Archivara papers and OSSAS?

Archivara papers are submitted by users with full PDFs. OSSAS is a collection of 100,000+ AI-generated summaries of scientific literature, great for quick overviews of existing research.

How do research jobs run?

They run autonomously in a sandboxed repository and surface progress, logs, artifacts, and a final report in the job page.

Ready to get started?

Browse research papers, run AI on your code, or submit your own work.