Research Overview
The scientific foundation behind Tachikoma's design.
Quick Lookup
| Technique | What | Used In | Research |
|---|---|---|---|
| Position-Aware Loading | Optimizes context placement | Context Management | Position Bias |
| Verification Loops | GVR pattern | Skill Chains | Verification |
| Reflection Phase | Freedom to revisit, rethink | All skills | Verification |
| Model-Aware Editing | Optimal edit format | Skill Execution | Model Harness |
| RLM | Large context handling | Subagents | RLM |
| Cost-Aware Routing | Match complexity to strategy | Intent Routing | Cost-Aware |
| Skill Composition | Modular architecture | Skill Chains | Modularity |
Research Areas
Position Bias
Transformers attend most strongly to the start and end of the context and least to the middle (a U-shaped bias), so placement within the prompt affects recall.
Papers:
- "Found in the Middle" (Hsieh et al., ACL 2024)
- "On the Emergence of Position Bias" (Wu et al., ICML 2025)
Verification Loops
Why verification beats retries.
Papers:
- "Towards Autonomous Mathematics Research" (arXiv:2602.10177)
- "Accelerating Scientific Research with Gemini" (arXiv:2602.03837)
Model Harness
Edit format selection matters as much as model choice.
Source: Can.ac blog (Feb 2026)
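One way this could look in practice is a lookup from model family to preferred edit format. The model names and format choices below are illustrative assumptions, not measurements from the blog post.

```python
# Hypothetical sketch: choose an edit format per model family.
# Entries are illustrative placeholders, not benchmark results.
EDIT_FORMATS = {
    "small-fast-model": "whole_file",     # weaker models: rewrite whole files
    "mid-tier-model": "search_replace",   # exact search/replace blocks
    "frontier-model": "unified_diff",     # strongest models: compact diffs
}

def pick_edit_format(model: str) -> str:
    # Fall back to the safest (if verbose) format for unknown models.
    return EDIT_FORMATS.get(model, "whole_file")
```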
RLM
10M+ token contexts through adaptive chunking.
Paper: "Recursive Language Models" (arXiv:2512.24601)
Cost-Aware Routing
Matching strategy to task complexity.
Paper: "When Do Tools and Planning Help LLMs Think?" (arXiv:2601.02663)
Modularity
Why focused components beat monolithic approaches.
Paper: "Agentic Proposing" (arXiv:2602.03279)
Reading Order
- Position Bias — Context loading (foundational)
- Cost-Aware Routing — Speed vs accuracy
- Modularity — Why skills beat monolithic
- RLM — Large context (advanced)
- Verification Loops — Reliability
- Model Harness — Edit optimization
Caveat
Some claims from these papers are reported here but have not been independently verified. Consult the original sources if you need certainty.