SR&ED Documentation Best Practices
What CRA reviewers expect to see, what evidence carries the most weight, and how to build a defensible documentation trail without disrupting your development workflow.
Why documentation matters
Documentation serves two purposes in a SR&ED claim. First, it's the raw material for writing your T661 project descriptions: without good records, you're reconstructing the work from memory, which produces weaker narratives. Second, it's your defense during a CRA review: the RTA (Research and Technology Advisor) will ask to see evidence that supports your project descriptions.
CRA does not require a specific format or level of detail. What they do require is that documentation exists, that it was created during or close to the time of the work (contemporaneous), and that it substantiates the claims made in the T661.
Contemporaneous vs. reconstructed evidence
This distinction is one of the most important concepts in SR&ED documentation:
Contemporaneous documentation
Created during or shortly after the work was performed. Examples: commit messages, pull request descriptions, design documents, meeting notes, test results, architecture decision records. This type of evidence is highly credible because it wasn't created with the SR&ED claim in mind: it's a natural byproduct of the development process.
Reconstructed documentation
Created after the fact, specifically for the SR&ED claim. Examples: project descriptions written months later, developer interviews conducted at filing time, retrospective timelines assembled from memory. This evidence is inherently weaker because it relies on recall and can be influenced by the desire to support the claim.
CRA will accept both types, but contemporaneous documentation is significantly more persuasive during a review. An RTA who sees a pull request from March that describes the exact uncertainty mentioned in your T661 will be far more convinced than one who reads a summary written in December.
How SRED AI helps
SRED AI builds your T661 drafts from contemporaneous evidence by default. Since it reads directly from your GitHub pull requests and commits (artifacts created by your developers during the actual work), the resulting documentation is grounded in dated, verifiable evidence rather than post-hoc reconstruction.
The five questions framework for documentation
Organize your evidence around CRA's five questions. For each SR&ED project, you should be able to point to documentation that supports each answer:
1. Uncertainty: "We didn't know if/how this could be achieved"
Evidence: design documents that discuss technical challenges, Slack messages where developers express uncertainty about feasibility, architecture decision records listing approaches considered and rejected, research notes on existing solutions and their limitations.
2. Hypothesis: "We thought this approach might work because..."
Evidence: pull request descriptions explaining the approach being tried, sprint planning notes that discuss potential solutions, technical proposals or RFC documents, comments in code reviews explaining the rationale.
3. Experiment: "We built and tested this to evaluate our hypothesis"
Evidence: branches and pull requests showing the implementation, commit history showing iterative changes, benchmark scripts and test code, CI/CD pipeline results, proof-of-concept or spike branches.
4. Results: "Here's what happened"
Evidence: test results and performance benchmarks, code review comments discussing outcomes, reverted or abandoned PRs (evidence of approaches that didn't work), monitoring dashboards showing before/after metrics.
5. Conclusions: "Here's what we learned"
Evidence: retrospective notes, final PR descriptions summarizing the approach taken and why, architecture decision records documenting the chosen solution and alternatives rejected, post-mortem documents.
Types of evidence that carry weight
Not all documentation is equally valuable. Here's a rough hierarchy from most to least persuasive:
- Dated code artifacts. Git commits, pull requests, branches, and code reviews with timestamps. These are difficult to fabricate convincingly after the fact and provide a verifiable timeline of the investigation.
- Technical documents created during the work. Design docs, RFCs, architecture decision records, and technical proposals written before or during the investigation.
- Test results and benchmarks. Quantitative evidence of experiments conducted: performance comparisons, accuracy metrics, failure rates, load test results.
- Meeting notes and communications. Sprint planning notes, retrospective summaries, Slack threads discussing technical challenges. These show the investigation as it unfolded.
- Project management records. Jira tickets, Linear issues, or similar tools that track the work at a task level. Useful for corroborating timelines and scope.
- After-the-fact summaries. Developer interviews and written recollections. Necessary when contemporaneous documentation is thin, but the weakest form of evidence.
Code-based evidence for software SR&ED
Software companies have a unique advantage: your version control system is a time-stamped ledger of every experiment, hypothesis, and conclusion. Here's how to use it:
- Pull request descriptions. The single most valuable piece of SR&ED documentation for software teams. A well-written PR description that explains the problem, the approach, and the expected outcome is contemporaneous evidence of systematic investigation.
- Commit history. Shows the chronological progression of work. Commits that show iterative refinement, alternative approaches, and performance tuning are evidence of experimentation.
- Branch history. Abandoned or experimental branches that were never merged are evidence of approaches that were tried and failed: exactly the kind of investigation CRA values.
- Code review threads. Discussions about technical trade-offs, alternative approaches, and concerns about feasibility provide rich evidence of the uncertainty and analysis involved.
- CI/CD pipeline results. Build and test results tied to specific commits can demonstrate the testing and iteration cycle that characterizes systematic investigation.
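As a concrete illustration of the branch-history point above, here is a minimal sketch of how unmerged branches can be surfaced as evidence of abandoned approaches. It builds a throwaway scratch repository purely for demonstration; the branch name and commit messages are hypothetical, and in practice you would run only the last two commands inside your real repository.

```shell
# Scratch repo for illustration only; names and messages are hypothetical.
tmp=$(mktemp -d) && cd "$tmp" && git init -q
git config user.email dev@example.com && git config user.name "Dev"
echo baseline > impl.txt && git add impl.txt && git commit -qm "baseline implementation"
git branch -M main

# An experiment that was tried and set aside, never merged into main.
git checkout -qb spike/lock-free-queue
echo attempt > impl.txt && git commit -qam "spike: lock-free queue, abandoned after correctness issues"
git checkout -q main

# List branches never merged into main: each is a dated record of a failed approach.
git branch --no-merged main
# Show that branch's commits with dates, ready for an evidence timeline.
git log --date=short --pretty='%h %ad %s' main..spike/lock-free-queue
```

Because the branch was never deleted, its commits retain their original timestamps, which is exactly what makes this evidence contemporaneous rather than reconstructed.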
How SRED AI helps
SRED AI is purpose-built to extract SR&ED evidence from code-based artifacts. It reads your pull requests, commit messages, and code review discussions to build an evidence index for each project: linking specific development artifacts to the five questions in your T661. This turns your existing Git history into organized, audit-ready evidence.
Building a documentation habit
The best SR&ED documentation is created as a side effect of good engineering practices, not as a separate activity. Here are lightweight habits that produce strong evidence:
- Write meaningful PR descriptions. Include a sentence or two about why this approach was chosen. Mention what alternatives were considered. This takes 30 seconds and is valuable for code reviews regardless of SR&ED.
- Use descriptive commit messages. Instead of "fix bug", write "try approach B for connection pooling after approach A exceeded memory limits". This documents the investigation naturally.
- Keep architecture decision records (ADRs). For significant technical decisions, write a brief document: problem, options considered, decision made, rationale. This is excellent engineering practice and strong SR&ED evidence.
- Save benchmark results. When you run performance tests or comparisons, commit the results to the repository or save them in a shared document. Quantitative evidence is compelling.
- Note uncertainties during sprint planning. When the team identifies work where the outcome is uncertain, note it. Even a brief mention in meeting notes creates evidence.
Common documentation pitfalls
- No documentation until claim time. The most common mistake. Trying to reconstruct 12 months of technical investigation from memory produces vague, unconvincing narratives and leaves you vulnerable during a review.
- Documenting the product, not the investigation. CRA doesn't care about your feature roadmap. They want to see evidence of technological uncertainty, experiments, and conclusions. Focus on the technical journey, not the destination.
- Overly formal documentation requirements. Don't create a burdensome SR&ED-specific documentation process. It won't be maintained. Instead, enhance existing processes (PR descriptions, commit messages, retrospectives) with SR&ED-relevant detail.
- Inconsistent record-keeping. Documentation for only some projects or some months undermines the credibility of the entire claim. CRA may question why evidence exists for some projects but not others.
- Unsupportable time records. Employee time allocations should be supportable. If you claim an employee spent 60% of their time on SR&ED, you should be able to explain how that estimate was derived.
What "good" documentation looks like
A well-documented SR&ED project for a software company typically includes:
- Project summary. A brief T661-style narrative covering the five questions, referencing specific dates and artifacts
- Evidence timeline. A chronological list of relevant PRs, commits, and documents with dates and brief descriptions of their SR&ED relevance
- Key technical documents. Design docs, ADRs, or technical proposals created during the investigation
- Test results and benchmarks. Quantitative evidence of experiments, including negative results
- Time allocation records. How many hours each team member spent on this project, and the basis for the estimate
- Cross-references. Links between the T661 narrative and the supporting artifacts, so a CRA reviewer can easily verify claims against evidence
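The evidence timeline above can be drafted mechanically from Git history. The sketch below builds a throwaway scratch repository so it runs anywhere; the commit messages are hypothetical, and in your real repository only the final `git log` command is needed, typically bounded to your fiscal year with `--since` and `--until`.

```shell
# Scratch repo for illustration only; commit messages are hypothetical.
tmp=$(mktemp -d) && cd "$tmp" && git init -q
git config user.email dev@example.com && git config user.name "Dev"
echo v1 > parser.txt && git add parser.txt && git commit -qm "try streaming parser to cut memory use"
echo v2 > parser.txt && git commit -qam "benchmark: streaming parser uses 40% less memory at p95"

# One dated line per commit, oldest first: raw material for the evidence timeline.
# In a real repo, bound it to the claim period, e.g. --since=2024-01-01 --until=2024-12-31
git log --reverse --date=short --pretty='%ad  %h  %s'
```

The output is a chronological, timestamped list that maps directly onto the "evidence timeline" item in the package above; each line can then be annotated with its SR&ED relevance.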
This may seem like a lot, but most of it already exists in your development workflow. The challenge is organizing it, not creating it from scratch.
How SRED AI helps
SRED AI automates the organization step. It reads your repository, identifies SR&ED-relevant development activity, and produces a structured evidence package: project narrative, chronological timeline, and indexed references to supporting artifacts. What normally takes days of manual assembly is generated in minutes.
Stop reconstructing what your team built
SRED AI extracts technological uncertainties and experiments from your GitHub pull requests, giving you a T661-ready first draft in minutes instead of weeks.