
SR&ED and Agile Development

CRA's T661 form expects a linear R&D narrative: hypothesis, experiment, conclusion. Agile teams work in sprints, iterate constantly, and rarely think in those terms. Here's how to bridge the gap.

The agile-SR&ED disconnect

Most SR&ED guidance assumes a waterfall-style R&D process: define a problem, form a hypothesis, run experiments, analyze results, draw conclusions. This maps neatly onto CRA's T661 Part 2 project description template.
Agile development doesn't work this way. Your team works in two-week sprints, ships incremental changes via pull requests, pivots based on retrospectives, and rarely documents a formal "hypothesis" before writing code. The uncertainty is real, but it's distributed across dozens of commits and conversations rather than captured in a single experiment report.
This doesn't mean agile work is ineligible for SR&ED. It means you need a deliberate strategy for translating agile artifacts into the narrative structure CRA expects.

How CRA thinks about R&D projects

CRA evaluates SR&ED at the project level, not the sprint or ticket level. A "project" in SR&ED terms is a body of work aimed at resolving a specific technological uncertainty. It might span months and many sprints.
For each project, CRA wants answers to five questions:
  1. What scientific or technological uncertainties did you attempt to overcome?
  2. What hypotheses did you formulate to reduce or eliminate the uncertainties?
  3. What procedures or experiments did you adopt to test your hypotheses?
  4. What progress did you achieve as a result of the work performed?
  5. What conclusions did you draw from the work?
The challenge is that in an agile environment, these five questions are answered incrementally across sprints, not in a single linear pass.

Mapping agile artifacts to T661 requirements

The good news: agile teams produce a wealth of artifacts that serve as excellent SR&ED evidence. The key is knowing which artifacts map to which parts of the claim.
Epics and feature initiatives → Technological uncertainty
Large-scale features or technical initiatives often represent the "project" in SR&ED terms. The epic description (what you're trying to achieve and why it's technically challenging) maps to the uncertainty statement in the T661. Not every epic is SR&ED, but the ones involving genuine technical unknowns are strong candidates.
Sprint work and pull requests → Systematic investigation
Individual PRs and sprint tasks represent the "experiments" CRA wants to see. A PR that implements approach A, followed by another that tries approach B after A failed, is exactly the kind of systematic investigation the T661 asks about. The commit history provides a dated, verifiable record of each experiment.
Code reviews and PR discussions → Hypothesis formulation
Technical discussions in PR reviews often capture the reasoning behind an approach: why a particular algorithm was chosen, what trade-offs were considered, what alternatives were rejected. This is CRA's "hypothesis" in action, documented contemporaneously.
Retrospectives and post-mortems → Conclusions
Sprint retrospectives that discuss what worked and what didn't, especially around technical challenges, map directly to the "conclusions" section of the T661. Post-incident reviews for failed approaches are particularly valuable evidence.
Benchmarks and test results → Progress achieved
Performance tests, accuracy metrics, load test results, and A/B experiment data show measurable progress (or lack thereof) toward resolving the uncertainty. This is the evidence CRA uses to verify that real work was done.
How SRED AI helps
SRED AI reads your pull request history and automatically reconstructs a chronological narrative from your development artifacts. Instead of manually combing through months of sprints to build a T661 project description, you get a structured first draft that maps your PRs to the five-question framework CRA expects, with dated evidence already indexed.

Building a T661 project from agile workflows

Here's a practical framework for converting agile work into a SR&ED project description:
  1. Identify the overarching uncertainty. Look across sprints for a recurring technical challenge that your team worked on over weeks or months. This becomes the SR&ED project.
  2. Group related sprints and PRs. Collect the tickets, branches, and PRs that relate to this uncertainty. These form the evidence base for the project.
  3. Extract the hypothesis-experiment-conclusion arc. From the grouped PRs, identify the approaches that were tried (hypotheses), how they were implemented and tested (experiments), and what was learned (conclusions).
  4. Build the timeline. Order the evidence chronologically. CRA expects to see a logical progression, even if the actual work was iterative. Pull request merge dates provide a natural timeline.
  5. Write the narrative. Using the five questions as a framework, write a project description that tells the story of how your team investigated and resolved (or attempted to resolve) the uncertainty.
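Steps 2 and 4 above (grouping the evidence and ordering it chronologically) are mechanical enough to automate. As a minimal sketch, assuming your team tags SR&ED-relevant pull requests with a hypothetical `sred` label, a few lines of Python can assemble the timeline from PR data (the field names mirror what GitHub's API returns, but the records here are illustrative, not real):

```python
from datetime import date

def build_sred_timeline(prs, label="sred"):
    """Collect PRs tagged with an SR&ED label and order them by merge date.

    Each PR is a dict with 'title', 'labels', and 'merged_at' (a date, or
    None if unmerged). Returns the chronological evidence base for one
    SR&ED project, as described in steps 2 and 4 above.
    """
    tagged = [pr for pr in prs if label in pr["labels"] and pr["merged_at"]]
    return sorted(tagged, key=lambda pr: pr["merged_at"])

# Illustrative data: two approaches to the same uncertainty, tried in
# sequence, plus routine work that should be excluded from the claim.
prs = [
    {"title": "Try approach B: cache-aside strategy",
     "labels": ["sred"], "merged_at": date(2025, 3, 2)},
    {"title": "Try approach A: write-through cache",
     "labels": ["sred"], "merged_at": date(2025, 1, 15)},
    {"title": "Routine dependency bump",
     "labels": [], "merged_at": date(2025, 2, 1)},
]

for pr in build_sred_timeline(prs):
    print(pr["merged_at"], pr["title"])
```

The ordered output (approach A before approach B) is exactly the hypothesis-experiment progression the narrative in step 5 needs to tell.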

Concurrent documentation in agile teams

CRA places significant weight on contemporaneous documentation: evidence created during the work, not reconstructed after the fact. Agile teams have a natural advantage here, because their workflows already produce dated artifacts:
  • Pull request descriptions. Encourage developers to write meaningful PR descriptions that explain the "why", not just the "what". A sentence about the technical challenge being addressed turns a routine PR into SR&ED evidence.
  • Commit messages. Detailed commit messages that reference the problem being solved create a searchable trail of investigation.
  • Sprint planning notes. When technical uncertainties are discussed during planning, capture them. Even brief notes like "unsure if approach X will meet latency targets" become evidence.
  • Architecture decision records. For significant technical decisions, a short ADR explaining the problem, options considered, and rationale is gold-standard SR&ED documentation.
  • Slack or Teams threads. Technical discussions about trade-offs and alternative approaches are valuable, but they're ephemeral. Consider periodically exporting or summarizing key discussions.
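To make the first point concrete, here is one possible PR description template. The headings are our suggestion, not a CRA requirement; the content is illustrative:

```markdown
## What
Replace the exact-match deduplication pass with a locality-sensitive
hashing approach.

## Why (technical challenge)
Exact matching cannot scale past our current record volume within the
latency budget, and it was uncertain whether LSH could hold recall at an
acceptable level at that scale.

## Approach tried
MinHash signatures with banding, with parameters tuned empirically.

## Result / what we learned
Recall held on the benchmark set, but memory use roughly doubled; we will
investigate the memory trade-off next sprint.
```

Each section maps to one of CRA's five questions, so a PR written this way is SR&ED evidence as-is, dated by the merge timestamp.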

Common pitfalls for agile teams

  • Defining SR&ED projects too narrowly. A single sprint task is rarely a SR&ED project. Look for the larger technical challenge that spans multiple sprints.
  • Defining projects too broadly. "Build the platform" is not a SR&ED project. Each project should focus on a specific, identifiable technological uncertainty.
  • Losing the thread across sprints. When work on a technical challenge is spread across many sprints with other work in between, it's easy to lose the narrative. Tag or label SR&ED-relevant tickets consistently.
  • No per-project time tracking. CRA needs hours attributed to each SR&ED project. If your team only tracks time by sprint, you'll need to estimate retroactively, which is less defensible.
  • Waiting until year-end. Reconstructing 12 months of agile work into T661 narratives at filing time is painful and produces weaker claims. Review quarterly at minimum.
How SRED AI helps
By connecting directly to your GitHub repository, SRED AI eliminates the year-end scramble. It continuously tracks pull requests and commits, so when it's time to file, your T661 project descriptions are already drafted, with evidence mapped to each of the five questions CRA asks.

Stop reconstructing what your team built

SRED AI extracts technological uncertainties and experiments from your GitHub pull requests, giving you a T661-ready first draft in minutes instead of weeks.
