
Research Overview

The scientific foundation behind Tachikoma's design.

Quick Lookup

| Technique | What | Used In | Research |
| --- | --- | --- | --- |
| Position-Aware Loading | Optimizes context placement | Context Management | Position Bias |
| Verification Loops | GVR pattern | Skill Chains | Verification |
| Reflection Phase | Freedom to revisit, rethink | All skills | Verification |
| Model-Aware Editing | Optimal edit format | Skill Execution | Model Harness |
| RLM | Large context handling | Subagents | RLM |
| Cost-Aware Routing | Match complexity to strategy | Intent Routing | Cost-Aware |
| Skill Composition | Modular architecture | Skill Chains | Modularity |

Research Areas

Position Bias

U-shaped attention bias in transformers: attention is strongest at the beginning and end of the context and weakest in the middle, so placement of important content matters (sketched after the papers).

Papers:

  • "Found in the Middle" (Hsieh et al., ACL 2024)
  • "On the Emergence of Position Bias" (Wu et al., ICML 2025)

Verification Loops

Why verification beats retries.

Papers:

  • "Towards Autonomous Mathematics Research" (arXiv:2602.10177)
  • "Accelerating Scientific Research with Gemini" (arXiv:2602.03837)

Model Harness

Edit format selection matters as much as model choice.

Source: Can.ac blog (Feb 2026)
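
A sketch of what model-aware edit format selection can look like; the model names, format names, and mapping below are illustrative assumptions, not Tachikoma's defaults or the blog's findings.

```ts
// Hypothetical mapping from model family to the edit format it handles best.
type EditFormat = "unified-diff" | "search-replace" | "whole-file";

const preferredFormat: Record<string, EditFormat> = {
  // Illustrative entries only.
  "frontier-large": "unified-diff",
  "frontier-small": "search-replace",
  "legacy": "whole-file",
};

function chooseEditFormat(model: string): EditFormat {
  return preferredFormat[model] ?? "whole-file"; // safest fallback
}
```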

RLM

10M+ token contexts through adaptive chunking.

Paper: "Recursive Language Models" (arXiv:2512.24601)

Cost-Aware Routing

Matching strategy to task complexity.

Paper: "When Do Tools and Planning Help LLMs Think?" (arXiv:2601.02663)

Modularity

Why focused components beat monolithic approaches.

Paper: "Agentic Proposing" (arXiv:2602.03279)

Reading Order

  1. Position Bias — Context loading (foundational)
  2. Cost-Aware Routing — Speed vs accuracy
  3. Modularity — Why skills beat monolithic
  4. RLM — Large context (advanced)
  5. Verification Loops — Reliability
  6. Model Harness — Edit optimization

Caveat

Some claims from papers are reported but not independently verified. Always verify with original sources if you need certainty.
