Lecture 4: Acceptance Testing in Agile
Overview
This lecture provides a comprehensive guide to Acceptance Testing in Agile development. Acceptance testing verifies that software meets business requirements and is ready for release. We'll explore Acceptance Test-Driven Development (ATDD), the Three Amigos model, and various automated testing tools that help ensure quality throughout the development process.
Lecture Material (PDF)
ATDD & Three Amigos
Acceptance Test-Driven Development (ATDD) encourages collaboration between developers, testers, and business stakeholders before development begins. The 'Three Amigos' model clarifies requirements and defines the 'Definition of Done' (DoD) for Agile projects.
Definition of Done
Acceptance tests define when a feature is complete. They establish clear criteria that must be met before a story can be considered finished.
Three Amigos Collaboration
Brings together developers, testers, and business stakeholders to discuss requirements, examples, and acceptance criteria before coding begins.
User Stories & Scenarios
Write scenarios with expected results. Example: a user creates an account, uploads a profile picture, and adds friends; the expected result is that the profile is created and visible to others.
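ATDD scenarios like the one above are commonly written in Gherkin's Given/When/Then form so the Three Amigos can review them before coding. A sketch of the profile example (feature and step wording are illustrative, not from the lecture):

```gherkin
Feature: User profile creation
  Scenario: New user builds a visible profile
    Given a user has created an account
    When the user uploads a profile picture
    And the user adds friends
    Then the profile is created successfully
    And the profile is visible to other users
```

Each step later maps to an automated step definition, which is how ATDD scenarios become executable acceptance tests.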
Acceptance Criteria
Clear, testable conditions that software must meet. Examples include displaying classifications correctly, calculating projections accurately, and ensuring multi-device accessibility.
Scenario Testing
Test specific user journeys. Example: Student with 70% average should see 'First Class' classification; student with failed module should see warning about impact.
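A minimal sketch of how such a scenario could be automated, assuming a hypothetical `classify` helper; the threshold values below the First Class boundary are illustrative assumptions, not from the lecture:

```javascript
// Map an average mark to a UK-style degree classification.
// Only the 70% First Class boundary comes from the lecture;
// the other thresholds are illustrative assumptions.
function classify(average) {
  if (average >= 70) return "First Class";
  if (average >= 60) return "Upper Second";
  if (average >= 50) return "Lower Second";
  if (average >= 40) return "Third Class";
  return "Fail";
}

// Acceptance-style checks mirroring the scenario wording.
console.assert(classify(70) === "First Class", "70% average should see First Class");
console.assert(classify(69) !== "First Class", "just below the boundary");
```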
Validation & Automation
Confirm that ATDD scenarios trace back to user stories and use cases. These checks can be automated to ensure consistent verification throughout development.
Testing Tools & Frameworks
Modern web development relies on automated testing tools to ensure quality, performance, accessibility, and security. These tools integrate into development workflows to provide continuous feedback and catch issues early.
Selenium
Automates web browser interactions for end-to-end testing. Selenium is among the most widely adopted web automation frameworks and is used to test workflows such as login, navigation, and transactions.
W3C Validator
Checks HTML and CSS for compliance with web standards. Validating code reduces cross-browser rendering issues and is a useful first step toward accessibility.
Lighthouse
Open-source tool assessing web performance, accessibility, SEO, and best practices. The issues Lighthouse flags, such as slow page loads, also affect how well sites perform in search.
Accessibility Tools
Pa11y and Axe automate accessibility checks for standards compliance (e.g. WCAG). Axe-core is one of the most widely used accessibility testing engines.
Gatling
Open-source load and performance testing tool. Gatling simulates many concurrent users to stress-test an application, detecting bottlenecks early so scalability problems can be fixed before release.
OWASP ZAP
Automates security testing to detect common vulnerabilities, such as those in the OWASP Top Ten. Passing OWASP ZAP checks removes the 'low-hanging fruit' that attackers and penetration testers look for first.
Continuous Integration & Testing
Continuous Integration (CI) integrates code changes frequently and runs automated tests to ensure quality in each iteration. Automating tests in CI pipelines shortens feedback loops and reduces manual testing effort, which in turn speeds up deployment.
CI Pipeline Integration
Jenkins, GitHub Actions, and other CI tools automate checks such as Selenium tests, W3C validation, and Lighthouse audits. High-performing teams routinely integrate these automated tests into their CI pipelines.
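As a sketch, a GitHub Actions workflow that runs automated checks on every push might look like the following; the npm script names are assumptions standing in for whatever wraps the project's Selenium and Lighthouse runs:

```yaml
name: ci
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      # Assumed package.json scripts wrapping the project's
      # Selenium end-to-end tests and Lighthouse audits.
      - run: npm run test:e2e
      - run: npm run lighthouse
```

Because the workflow runs on every push, failing acceptance tests block a change before it reaches production, which is the "quick feedback" benefit described below.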
Quick Feedback
Selenium tests in CI pipelines provide immediate feedback on code changes, catching issues before they reach production and cutting the time spent on manual regression testing.
Quality Assurance
Automated testing ensures every code change meets quality standards. This continuous verification prevents regression and maintains application stability throughout development.
Usability Testing
Ensure applications are user-friendly and meet accessibility standards. Inclusive design widens the audience a site can serve and tends to improve engagement for all users.
Labs & Practical Exercises
Work through hands-on activities building a grade submission system with validation, manual testing workflows, DOM manipulation exercises, and optional Selenium automation.
Activity 1: Simple HTML Setup
Create a basic HTML page with input fields and buttons. Set up the foundation for testing user interactions and input validation.
Activity 2: Input Handling
Implement grade submission logic with pass/fail boundaries. Display results based on user input and boundary value of 40.
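A sketch of the submission logic with the pass boundary at 40; the function name and the wiring comment are assumptions about how the lab page might be structured:

```javascript
// Return a result message for a submitted mark, using the
// pass/fail boundary of 40 described in the activity.
function gradeResult(mark) {
  return mark >= 40 ? "Pass" : "Fail";
}

// In the lab page this would be attached to the submit button,
// e.g. (selector names are illustrative):
// document.querySelector("#submit").addEventListener("click", () => {
//   output.textContent = gradeResult(Number(input.value));
// });

console.assert(gradeResult(40) === "Pass", "40 sits exactly on the boundary");
console.assert(gradeResult(39) === "Fail", "39 is just below the boundary");
```

Checking 39, 40, and 41 by hand is exactly the boundary-value testing Activity 3 asks for.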
Activity 3: Boundary Testing
Manually test boundary values including edge cases and invalid inputs. Verify correct behavior at critical transition points.
Activity 4: Regex Validation
Add regular expression validation for non-numeric input detection. Prevent invalid characters from being processed.
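One way to sketch the validation; the exact pattern is an assumption, since the activity only asks that non-numeric input be detected with a regular expression:

```javascript
// Accept only strings made entirely of digits; anything containing
// a non-numeric character (letters, symbols, spaces) is rejected.
function isValidInteger(text) {
  return /^\d+$/.test(text);
}

console.assert(isValidInteger("42") === true);
console.assert(isValidInteger("4a2") === false, "embedded letter rejected");
console.assert(isValidInteger("") === false, "empty input rejected");
```

Validating before parsing means invalid characters never reach the grading logic.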
Activity 5: Dynamic Parsing
Extend functionality to handle floating-point numbers and whitespace. Support multiple input formats while maintaining validation.
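The validation above can be extended as sketched below; the pattern and helper name are illustrative assumptions:

```javascript
// Parse a mark that may be an integer or a decimal, tolerating
// surrounding whitespace. Returns null for anything non-numeric,
// so the earlier validation guarantee is preserved.
function parseMark(text) {
  const trimmed = text.trim();
  if (!/^\d+(\.\d+)?$/.test(trimmed)) return null;
  return parseFloat(trimmed);
}

console.assert(parseMark("  39.5 ") === 39.5, "whitespace is tolerated");
console.assert(parseMark("40") === 40, "plain integers still work");
console.assert(parseMark("4a") === null, "non-numeric input still rejected");
```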
Activity 6: Table Parsing
Parse HTML tables and store data in sessionStorage. Practice working with structured data entirely client-side.
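A sketch of the storage half of this activity. In a browser the cell text would come from `document.querySelectorAll("td")`; here the rows arrive as arrays so the sketch also runs outside a browser, and the storage key is an assumption:

```javascript
// Serialize table rows and persist them client-side.
function storeTableRows(rows, key = "grades") {
  const json = JSON.stringify(rows);
  // sessionStorage only exists in a browser context; guard so the
  // sketch degrades gracefully when run elsewhere (e.g. Node).
  if (typeof sessionStorage !== "undefined") {
    sessionStorage.setItem(key, json);
  }
  return json;
}

const rows = [["Alice", 72], ["Bob", 38]];
console.assert(storeTableRows(rows) === '[["Alice",72],["Bob",38]]');
```

Because sessionStorage holds only strings, the data is serialized with `JSON.stringify` on the way in and would be read back with `JSON.parse`.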
Tools & Further Reading
VLE Page
Course management and resources.