Collaboration Opportunities

Partners

Systems research and real pilots for agentic work.

We build decision systems that plan, act, verify, and recover—with audit trails. We apply them to measured slices of a business today and extend toward AGI/SAI/ASI without hype.

Who should contact us

Universities & Labs.

Joint studies on orchestration vs capacity, agent audits, and evaluation harnesses.

Grantmakers.

Applied research in multilingual content operations, employability, and agentic decision-making.

Strategic Pilots.

Retail/catalog, candidate ops, and support workflows with acceptance metrics.

Investors.

Funding to productize agentic systems that run work under constraints.

What we do together

We co-design scoped work with clear acceptance thresholds and auditable traces.

What you get:

Publishable protocols or pilot reports; unit-economics deltas; decision audits.

What we bring:

Multi-model orchestration, verifier stacks, mixture-of-agents routing, memory modes, and plan–act–verify loops with rollback.
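
For concreteness, here is a minimal sketch of a plan–act–verify loop with rollback and a decision log. The names (Step, run_plan) and the dict-based state are illustrative assumptions, not our production interfaces.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Step:
    name: str
    act: Callable[[dict], dict]      # returns the next state
    verify: Callable[[dict], bool]   # acceptance check for this step

def run_plan(steps: list[Step], state: dict, max_retries: int = 1) -> dict:
    """Run steps in order; verify each result, roll back and retry on failure."""
    audit: list[dict] = []           # decision log: one entry per attempt
    for step in steps:
        for attempt in range(max_retries + 1):
            checkpoint = dict(state)         # snapshot the last good state
            state = step.act(state)
            ok = step.verify(state)
            audit.append({"step": step.name, "attempt": attempt, "ok": ok})
            if ok:
                break
            state = checkpoint               # rollback: discard the bad result
        else:
            raise RuntimeError(f"step {step.name!r} exhausted retries")
    return {**state, "audit": audit}

# Example: one step that trims a note and verifies the result is non-empty.
steps = [Step("clean", lambda s: {**s, "text": s["note"].strip()},
              lambda s: len(s["text"]) > 0)]
print(run_plan(steps, {"note": "  hello  "}))
```

The returned audit list is what we mean by an auditable trace: every attempt, pass or fail, is recorded alongside the output.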

Engagement models

Joint Research.

Fixed scope. Pre-declared metrics. Co-authored notes. Redacted datasets when possible.

Pilot.

Real data under a data-processing scope. Success = accepted outputs at target cost/latency/error. Short written report.
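
As a hypothetical example of how that success condition could be pre-declared before a pilot starts (the function name, fields, and targets below are placeholders):

```python
def pilot_success(accepted: int, total: int, cost_per_task: float,
                  p95_latency_s: float, error_rate: float,
                  targets: dict) -> bool:
    """Pre-declared pilot gate: accepted outputs at target cost/latency/error."""
    return (accepted / total >= targets["min_acceptance"]
            and cost_per_task <= targets["max_cost_per_task"]
            and p95_latency_s <= targets["max_p95_latency_s"]
            and error_rate <= targets["max_error_rate"])

print(pilot_success(accepted=188, total=200, cost_per_task=0.04,
                    p95_latency_s=8.0, error_rate=0.01,
                    targets={"min_acceptance": 0.90, "max_cost_per_task": 0.05,
                             "max_p95_latency_s": 10.0, "max_error_rate": 0.02}))
```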

Funding.

Capital to scale agentic execution in PageMind and Emplo, and to extend the agentic firm kernel.

Example scopes

Retail/catalog.
Inputs: supplier PDFs/images.
Work: attribute extraction, glossary enforcement, verification.
Outputs: CSV with per-row source trace.
Gate: corrections ≤ threshold on N sampled SKUs (see the sketch after this list).

Candidate ops.
Inputs: CV + job corpus.
Work: contextual match + packet preparation.
Outputs: review bundle + links for manual portals.
Gate: review time ≤ target minutes per application.

Agent audit.
Inputs: fixed task set.
Work: plan–act–verify loops with decision logs and seeded failures.
Outputs: audit pack.
Gate: recovery rate ≥ target with stable drift.
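
To illustrate the retail gate, the check reduces to a sampled correction rate compared against a pre-declared threshold. This is a sketch only; the field name needed_correction and the sampling scheme are assumptions.

```python
import random

def passes_retail_gate(rows: list[dict], n_sample: int,
                       max_correction_rate: float, seed: int = 0) -> bool:
    """Sample N SKUs and compare the human-correction rate to the threshold."""
    rng = random.Random(seed)        # fixed seed keeps the audit reproducible
    sample = rng.sample(rows, min(n_sample, len(rows)))
    corrected = sum(1 for r in sample if r["needed_correction"])
    return corrected / len(sample) <= max_correction_rate

# Example: 200 extracted rows; pass if ≤ 5% of 50 sampled SKUs needed fixes.
rows = [{"sku": i, "needed_correction": i % 40 == 0} for i in range(200)]
print(passes_retail_gate(rows, n_sample=50, max_correction_rate=0.05))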

Selection criteria

  • Moves tasks along assist → approve → auto-with-review → auto, with measurable gates (sketched after this list).
  • Improves the cost–latency–quality frontier on real workloads.
  • Reduces oversight minutes per 100 tasks and increases recovery from bad states.
  • Produces artifacts we can share—protocols, audits, and deltas—without exposing private IP.
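
To make "measurable gates" concrete, here is a minimal sketch of promotion along the assist → approve → auto-with-review → auto ladder. The thresholds and names are placeholders, not our production values.

```python
LADDER = ["assist", "approve", "auto-with-review", "auto"]

# Placeholder gates: a task class climbs one rung only when every threshold
# for the next rung holds over the review window.
GATES = {
    "approve":          {"min_acceptance": 0.95, "max_error": 0.02},
    "auto-with-review": {"min_acceptance": 0.98, "max_error": 0.01},
    "auto":             {"min_acceptance": 0.99, "max_error": 0.005},
}

def next_mode(current: str, acceptance: float, error: float) -> str:
    """Promote one rung at a time; stay put if the next gate is not met."""
    idx = LADDER.index(current)
    if idx + 1 == len(LADDER):
        return current
    gate = GATES[LADDER[idx + 1]]
    met = acceptance >= gate["min_acceptance"] and error <= gate["max_error"]
    return LADDER[idx + 1] if met else current

print(next_mode("assist", acceptance=0.96, error=0.015))  # -> approve
```

One rung at a time is deliberate: each promotion is a separate, auditable decision rather than a jump to full autonomy.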

Data & compliance prerequisites

Allowed data forms include PDFs, images, audio, tables, and private chats/emails with explicit permission or redacted/synthetic surrogates. Outputs carry a source trace for audits. Data is processed under EU-only hosting during private beta. Deletion on request; backups purged within 30 days. See Legal for full terms.

Outcomes & IP

We publish problem statements, evaluation protocols, decision audits, literature maps, and selected redacted datasets. We keep proprietary code, full system diagrams, and internal datasets private. Deeper materials are shared selectively under agreement. Demos are arranged by email when relevant.

Proof points

Evaluation protocols:

  • OVC-1: Orchestration vs Capacity
  • ADA-1: Agentic Decision Audit
  • KCR-1: Knowledge Creation & Replication

Read more on Research. Dated milestones are listed in News.

How to start

Send a brief by email with: organisation, audience type (Lab/Grant/Pilot/Investor), goal, dataset type, acceptance metrics, and constraints. We reply by email. No intro calls by default.

Partner brief (optional)

Email remains the default. This optional brief keeps the essentials structured if you want to share context immediately.

The brief covers three sections: personal information, project details, and dataset type.

Submissions are tagged partners-brief and route to the same secure inbox as our waitlist.