Your capstone project is built. This module structures the peer review process: giving and receiving constructive feedback, evaluating architecture decisions against quality criteria, and completing the final assessment for CPD certification.
With the learning outcomes established, this module begins with an in-depth look at why peer review matters in professional development.
Professional development is not complete until your work has been tested by someone other than yourself. Peer review trains two skills simultaneously: the ability to give structured, evidence-based feedback, and the ability to receive critique without defensiveness. Both are essential in professional engineering environments.
CPD (Continuing Professional Development) is the structured process by which professionals maintain and develop their competence after initial qualification. The CPD Standards Office, which sets the benchmark for CPD programmes in the UK, requires that learning outcomes be evidenced through demonstrable outputs, not just participation. In this course, the capstone project is the output, and the peer review process is the evidence of quality.
“CPD should be a documented, structured approach to learning and development that is planned, recorded, and evaluated. Evidence of CPD must demonstrate that learning has been applied, not merely that attendance or participation occurred.”
CPD Standards Office - CPD Quality Mark criteria, Professional Development Standards
The CPD Standards Office distinguishes between passive learning (reading, watching) and applied learning (building, reviewing, receiving feedback). Your capstone submission and the peer review you give and receive are the applied evidence that satisfies CPD documentation requirements.
The review process in this course mirrors professional practice. You submit your project, two reviewers evaluate it independently against a rubric, you respond to their feedback, and the course coordinator marks the module complete. You also review at least one peer project. Both roles are required for certification.
With an understanding of why peer review matters in professional development in place, the discussion can now turn to the review process, which builds directly on these foundations.
The peer review cycle proceeds through these stages:
1. You submit your capstone project.
2. Two reviewers independently evaluate it against the rubric.
3. You respond to their feedback and make any required revisions.
4. The course coordinator marks the module complete.
You are also required to review at least one peer project using the same rubric. This is not a lighter obligation. Giving good review feedback requires you to read a submission carefully, apply criteria rigorously, and communicate findings clearly. The review you give is itself evidence of your competence.
With an understanding of the review process in place, the discussion can now turn to review criteria, which builds directly on these foundations.
Reviewers evaluate submissions against five categories. Each category is rated Strong, Adequate, or Needs Work. The required minimum rating for each category is noted below. A submission that receives "Needs Work" in a required category must be revised before certification.
Functionality (required: Adequate or Strong). Does the agent run end-to-end on five test inputs without crashing? Do unit tests pass? Do tools handle errors gracefully, returning structured error objects rather than raw exceptions? Is there a step limit that prevents the agent loop from running indefinitely?
Architecture (required: Adequate or Strong). Does the chosen pattern match the stated requirements, with a justification rather than a default choice? Is the tool inventory complete, with schemas and permission levels documented? Is there an ADR (Architecture Decision Record)? Are security controls implemented, not just mentioned?
Code quality (required: Adequate). Are there no secrets in the code? Are tool descriptions specific, with examples of when to use and when not to use each tool? Are inputs validated using Pydantic or an equivalent? Is there structured logging for every tool call (a sketch after these criteria illustrates validation and logging)? Is code execution sandboxed where applicable?
Documentation (required: Adequate or Strong). Does the README cover setup with step-by-step instructions that have been verified? Does it state at least one known limitation honestly? Does the runbook cover at least two failure scenarios? Does the architecture documentation match the implemented system?
Ethics and safety (required: Adequate or Strong). Is least privilege applied, with tools scoped to minimum required permissions? Is there a human approval gate for high-risk actions, implemented and tested rather than just planned? Is at least one bias or edge case test documented? Is data handling compliant, with GDPR (General Data Protection Regulation) and data retention considerations noted?
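Two of the code quality checks above, input validation and structured logging, are easiest to see in code. The following is a minimal sketch assuming a Python implementation with Pydantic; the tool name search_orders, its fields, and the logging setup are illustrative assumptions rather than requirements of the rubric.

```python
import json
import logging
import time

from pydantic import BaseModel, Field, ValidationError

logger = logging.getLogger("agent.tools")


# Hypothetical input schema: the field names and limits are illustrative only.
class SearchOrdersInput(BaseModel):
    customer_id: str = Field(min_length=1)
    limit: int = Field(default=10, ge=1, le=50)


def search_orders(raw_args: dict) -> dict:
    """Validate inputs with Pydantic and emit one structured log record per call."""
    started = time.time()
    try:
        args = SearchOrdersInput(**raw_args)
    except ValidationError as exc:
        # Invalid input is logged and returned as data, never raised into the loop.
        logger.warning(json.dumps({"tool": "search_orders", "status": "invalid_input",
                                   "errors": exc.errors()}))
        return {"error": "invalid_input", "details": exc.errors()}

    # ... perform the actual lookup here (omitted) ...
    result = {"orders": [], "customer_id": args.customer_id}

    logger.info(json.dumps({"tool": "search_orders", "status": "ok",
                            "duration_ms": round((time.time() - started) * 1000)}))
    return result
```

The same pattern applies to every tool: validate first, log one structured record per call, and return a dictionary either way.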
With an understanding of review criteria in place, the discussion can now turn to how to give feedback that is actionable, which builds directly on these foundations.
Good peer feedback follows three rules that distinguish professional review from general comment. First, reference the code or documentation directly. Cite the specific file and line number rather than describing the issue in general terms. This makes the feedback immediately actionable. Second, separate observations from recommendations. State what you observed, then state what could be improved and why. Third, acknowledge what is working. Begin with two or three things that are done well. This demonstrates that you read the whole submission and makes critique easier to receive.
The contrast between weak and strong feedback is significant. A weak comment says "the tool descriptions are not good." A strong comment says: "The search_web tool description says 'searches the web for information' with no guidance on when to use or avoid it. This is likely to cause the agent to call it unnecessarily. Suggested revision: include explicit 'use when' and 'do not use when' guidance." Both comments describe the same issue. Only the second gives the author something to do.
For security observations, state the specific vulnerability pattern, reference the module where the correct approach was taught, and describe the fix with a test to verify it. For documentation gaps, quote the section that is missing or unclear, and describe what information a new engineer would need to find there.
Common misconception
“A peer reviewer at the same level as the author cannot provide useful feedback.”
Familiarity blindness affects every author, regardless of skill level. A fresh reader sees what the author became blind to: naming that is confusing, assumptions that are implicit, and steps in documentation that were skipped because the author considered them obvious. Peer review value comes from the fresh perspective, not from the reviewer being more experienced.
Common misconception
“Certification proves competence.”
Certification documents that specific learning outcomes were achieved and evidenced at a point in time. Competence is demonstrated through repeated application over time. The CPD Standards Office is explicit that its certification recognises structured, evidenced learning, not an absolute competence claim. Use certification as a foundation, not a finish line.
With an understanding of how to give feedback that is actionable in place, the discussion can now turn to self-assessment before submission, which builds directly on these foundations.
Complete this self-assessment before submitting your capstone project for peer review. Self-assessment catches the issues that you are able to catch. Peer review catches the rest. Both are necessary.
Technical competencies. Can you explain what a context window is and why it limits agent memory? Can you write a tool schema description that makes the agent select the tool correctly in a test? Can you trace a bug through the agent loop using log output? Can you implement exponential backoff for a rate-limited API (a sketch follows this checklist)? Can you explain the difference between prompt injection and jailbreaking? Can you describe the OWASP (Open Web Application Security Project) Agentic AI Top 10 and apply three of them to your project?
Architecture competencies. Can you justify your pattern choice with reference to the specific requirements you documented? Can you draw a C4 Level 1 context diagram for your system without looking at a template? Can you name two trade-offs you accepted in your chosen architecture? Can you write an ADR for your most significant architectural decision?
Professional competencies. Could another engineer at your level set up your project from the README alone, without asking you questions? Have you documented at least two failure modes and the steps to recover from each? Have you tested your agent on at least one input that you expected it to handle badly, and documented what happened?
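The exponential backoff item on the technical checklist is the most code-oriented, so a minimal sketch follows. It assumes a generic Python client; the RateLimitError name and the retry parameters are illustrative, not tied to any particular API library.

```python
import random
import time


# Hypothetical exception type: substitute whatever your API client raises on HTTP 429.
class RateLimitError(Exception):
    pass


def call_with_backoff(request_fn, max_retries=5, base_delay=1.0, max_delay=30.0):
    """Retry a rate-limited call with exponential backoff and jitter."""
    for attempt in range(max_retries):
        try:
            return request_fn()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise  # Give up after the final attempt.
            # Double the wait on each retry, cap it, and add jitter to avoid retry storms.
            delay = min(base_delay * (2 ** attempt), max_delay)
            time.sleep(delay + random.uniform(0, delay * 0.1))
```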
With an understanding of self-assessment before submission in place, the discussion can now turn to the three most common quality gaps, which builds directly on these foundations.
Based on review cycles from this course, three gaps appear most frequently. Knowing them in advance lets you fix them before your reviewer finds them.
Gap 1: Vague tool descriptions. This is the most common reason agent behaviour is difficult to control. A description that says "searches the database" gives the model no basis for deciding when to use the tool versus an alternative. Fix: rewrite every tool description to include "use when," "do not use when," and at least one example trigger phrase. Test each description by asking yourself: if another engineer read only this description, would they route requests to this tool the same way you do?
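One way to apply this fix is sketched below. The layout follows the common name/description/parameters convention used by most function-calling APIs; the exact keys depend on your framework, and the search_web wording is an illustrative rewrite of the weak description quoted earlier.

```python
# Illustrative rewrite of a vague tool description. Adjust the schema keys
# to match whatever tool-definition format your agent framework expects.
search_web_tool = {
    "name": "search_web",
    "description": (
        "Search the public web for current information. "
        "Use when: the user asks about events, prices, or facts that may have changed "
        "since the model's training data, e.g. 'what is the latest version of X?'. "
        "Do not use when: the answer is in the internal knowledge base, the question is "
        "about the user's own account, or the page was already retrieved this turn."
    ),
    "parameters": {
        "type": "object",
        "properties": {
            "query": {"type": "string", "description": "Plain-language search query."}
        },
        "required": ["query"],
    },
}
```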
Gap 2: No structured error handling in tools. Tools that raise raw exceptions cause the agent loop to crash rather than recover. Fix: wrap every tool function in a try/except block and return a structured error dictionary on failure, with at minimum an error message and an error type field. The agent loop can then handle the error gracefully and either retry, escalate, or stop cleanly.
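A minimal sketch of this fix, assuming a Python tool function: fetch_invoice and the stub billing client are hypothetical, but the shape of the returned dictionary matches the minimum described above, an error message plus an error type field.

```python
# Hypothetical backend client, defined only so the example is self-contained.
class _BillingAPI:
    def get(self, invoice_id: str) -> dict:
        raise TimeoutError("billing service did not respond")


billing_api = _BillingAPI()


def fetch_invoice(invoice_id: str) -> dict:
    """Tool wrapper: always returns a dict so the agent loop never sees a raw exception."""
    try:
        record = billing_api.get(invoice_id)
        return {"ok": True, "invoice": record}
    except TimeoutError as exc:
        # Retryable failure: the loop can back off and try again.
        return {"ok": False, "error": str(exc), "error_type": "timeout"}
    except KeyError:
        # Permanent failure: report it rather than retry.
        return {"ok": False, "error": f"unknown invoice {invoice_id}", "error_type": "not_found"}
    except Exception as exc:
        # Anything unexpected: escalate or stop cleanly instead of crashing.
        return {"ok": False, "error": str(exc), "error_type": "unexpected"}
```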
Gap 3: Missing limitations section. Nearly every project has limitations. Submissions that do not document them are likely unaware of them, which is more concerning than the limitations themselves. Fix: deliberately test five adversarial or unusual inputs before writing your README, document which ones produce incorrect or incomplete results, and state what an operator should do in each case.
Self-assessment before peer review catches the issues the author can catch. Peer review catches the rest. Both stages are required for certification.
With an understanding of the three most common quality gaps in place, the discussion can now turn to the evidence package for CPD certification, which builds directly on these foundations.
To receive CPD (Continuing Professional Development) credit for this course, your evidence package must contain six items:
1. Your capstone project as a GitHub repository (public or shared with your reviewer).
2. Architecture documents in Markdown within the repository, including a context diagram, a tool inventory, and at least one ADR.
3. Test results showing all unit tests passing, either as CI (Continuous Integration) output or verified screenshots.
4. Written peer review feedback that you received, covering at least two review categories.
5. Written peer review feedback that you gave on at least one peer project.
6. A signed self-assessment checklist with all items answered.
The CPD Standards Office requires that evidence be structured and documented rather than claimed. Each item in this package serves as verifiable evidence of a specific learning outcome. The peer review you gave is evidence of evaluation skill. The peer review you received and responded to is evidence of professional receptivity to feedback. Both matter.
CPD Standards Office, CPD Quality Mark criteria
Professional Development Standards, Evidence requirements
The UK body that sets quality standards for CPD programmes. Quoted in Section 24.1 to define the distinction between passive participation and applied evidence required for CPD certification.
Google Engineering Practices Documentation
Code Review Developer Guide, published 2018
Used as the opening real-world story to establish why peer review from someone who did not write the code produces quality improvements that self-review cannot. Cited to support the familiarity blindness argument.
ISO/IEC 42001:2023, Artificial Intelligence Management Systems
Clause 7.5, Documented information
Referenced in Section 24.7 to establish that structured, documented evidence is the correct format for AI development records, which the CPD evidence package satisfies.
OWASP Top 10 for Agentic AI Applications (2025)
LLM01 to LLM10, 2025 edition
Referenced in the self-assessment checklist (Section 24.5) as the security standard against which technical competence in agent security is assessed.
Module 24 of 25 · Capstone and Certification