🧪 Testing Explainer

How J4H is tested — unit tests, smoke tests, and regression testing

J4H has two test files that together cover 43 automated tests. Unit tests verify that the Flask app behaves correctly in isolation. Smoke tests verify that the live production site is reachable and serving pages correctly. Together they form a lightweight but meaningful safety net.
2 Test Files
25 Unit Tests
18 Smoke Tests
43 Total Tests
📚
Two Types of Tests
Unit tests vs smoke tests — what each one does

🔬 Unit / Integration Tests

Run entirely locally against a temporary SQLite database. No internet required. Tests Flask routes and API behavior in isolation. Fast — completes in under 2 seconds. File: test_app.py

๐ŸŒ Smoke Tests

Make real HTTP requests to the live j4h.org and Heroku URLs. Require internet. Verify the production site is up, HTTPS is enforced, and every page returns 200. File: test_smoke.py

Running the tests
# Run unit tests only (no internet needed)
pytest test_app.py -v

# Run smoke tests only (hits live production)
pytest test_smoke.py -v

# Run all tests
pytest -v
🔬
test_app.py — 25 Unit Tests
Flask routes and API endpoints tested in isolation
TestPasscodeAuth 4 tests
  • Passcode page loads and contains "J4H"
  • Correct passcode (8903) redirects to /
  • Wrong passcode stays on the passcode page
  • Logout clears the session (localStorage.removeItem in response)
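The route logic these four tests pin down reduces to a compare-and-redirect. A minimal sketch of it (the function name and return shape are hypothetical; only the 8903 passcode and the / redirect come from the tests above):

```python
# Hypothetical sketch of the passcode check that TestPasscodeAuth exercises.
# Only the passcode value (8903) and the redirect target (/) come from the
# tests themselves; all names here are illustrative.
PASSCODE = "8903"

def handle_passcode(submitted: str) -> tuple[int, str]:
    """Return (status, location): redirect home on match, stay put otherwise."""
    if submitted == PASSCODE:
        return 302, "/"         # correct passcode redirects to /
    return 200, "/passcode"     # wrong passcode re-renders the passcode page
```

The logout test covers the separate concern of tearing state down: the response must also tell the browser to drop its client-side key (the localStorage.removeItem the test looks for).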
TestPageRoutes 8 tests
  • 7 parametrized pages each return HTTP 200: /, /entries, /calendar, /chart, /patient, /health-record, /quizzes
  • SMART info page (/smart-info) returns 200
TestEntriesAPI 7 tests
  • GET /api/entries returns a JSON list
  • POST with valid content returns 200 with success: true and an entry_id
  • POST with blank content (whitespace only) is rejected with 400
  • POST with missing content field is rejected with 400
  • DELETE /api/entries/:id removes the entry and returns success
  • Date filter parameters (start_date, end_date) are accepted
  • Limit parameter returns at most N entries
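The two 400-rejection tests imply validation along these lines on the server side. This is a hedged sketch, not the real handler (whose names and error strings differ): missing or whitespace-only content must never reach the database.

```python
# Hypothetical sketch of the input validation that the blank-content and
# missing-content tests pin down. Names and messages are illustrative.
def validate_entry(payload: dict) -> tuple[int, dict]:
    content = payload.get("content")
    if content is None:
        return 400, {"error": "content is required"}
    if not content.strip():
        return 400, {"error": "content cannot be blank"}
    return 200, {"success": True}
```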
TestSummariesAPI 5 tests
  • GET /api/summaries returns a JSON list
  • POST /api/summary/save with valid fields returns success
  • POST with missing date fields is rejected with 400
  • POST with empty summary text is rejected with 400
  • POST /api/summary with no API key configured returns 503
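The 503 case mirrors the summarizer = None pattern mentioned later in this page: when no API key is configured, the endpoint reports "service unavailable" instead of crashing with a 500. A hypothetical sketch of that guard (names illustrative):

```python
# Hypothetical sketch of the guard behind the 503 test: a None summarizer
# means "not configured", which is a deliberate 503, never a crash.
def generate_summary(summarizer, text: str) -> tuple[int, dict]:
    if summarizer is None:
        return 503, {"error": "summary service not configured"}
    return 200, {"summary": summarizer(text)}
```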
TestSmartStatus 1 test
  • GET /api/smart/status returns JSON with authenticated and auth_method fields
How test isolation works
@pytest.fixture
def client():
    # Create a fresh temp SQLite file for each test
    tmp = tempfile.NamedTemporaryFile(suffix='.db', delete=False)
    tmp.close()
    import app as flask_app
    import database
    # Replace the global db with the temp one
    flask_app.db = database.Database(db_path=tmp.name)
    flask_app.app.config['TESTING'] = True
    with flask_app.app.test_client() as client:
        yield client
    os.unlink(tmp.name)  # Delete the temp db after the test
Why replace the database? The real database (app.db or PostgreSQL) is created when app.py is imported โ€” before any fixture runs. By replacing flask_app.db directly after import, each test gets a clean, empty database with no leftover data from previous runs.
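The swap-the-global idea can be demonstrated with nothing but the standard library. This stand-in Database class is illustrative (the project's real wrapper differs), but the isolation mechanics are the same: a brand-new file per test, deleted afterwards.

```python
import os
import sqlite3
import tempfile

class Database:
    """Minimal stand-in for the project's database wrapper (illustrative)."""
    def __init__(self, db_path):
        self.db_path = db_path
        conn = sqlite3.connect(db_path)
        conn.execute("CREATE TABLE IF NOT EXISTS entries (content TEXT)")
        conn.commit()
        conn.close()

    def count(self):
        conn = sqlite3.connect(self.db_path)
        n = conn.execute("SELECT COUNT(*) FROM entries").fetchone()[0]
        conn.close()
        return n

# Each test gets a brand-new file, so no state leaks between runs.
tmp = tempfile.NamedTemporaryFile(suffix=".db", delete=False)
tmp.close()
db = Database(db_path=tmp.name)   # fresh and empty, like the fixture's swap
assert db.count() == 0
os.unlink(tmp.name)               # cleanup, mirroring the fixture teardown
```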
🌐
test_smoke.py — 18 Smoke Tests
Live production site verified after every deployment
test_root_reachable 3 tests
  • Parametrized across 3 URLs: https://j4h.org, https://www.j4h.org, https://j4h-be058543801b.herokuapp.com
  • Each must return HTTP 200 (following any redirects)
test_https_enforced 3 tests
  • Parametrized across the same 3 URLs
  • After following redirects, the final URL must start with https://
  • Catches accidental HTTP serving or broken Cloudflare SSL
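Stripped of the network call, the HTTPS assertion is just a check on the scheme of the final resolved URL. An offline sketch (the real test follows redirects with a live HTTP client first; the function name here is hypothetical):

```python
# Hypothetical offline version of the HTTPS assertion: the live smoke test
# follows redirects, then checks the scheme of wherever it ended up.
from urllib.parse import urlparse

def final_url_is_https(final_url: str) -> bool:
    return urlparse(final_url).scheme == "https"

# What the smoke test effectively asserts for each of the three hosts:
assert final_url_is_https("https://j4h.org/")
assert not final_url_is_https("http://j4h.org/")  # would fail the smoke test
```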
test_page_loads_on_j4h_org 10 tests
  • Parametrized across 10 pages: /passcode, /entries, /calendar, /chart, /vitals, /export, /import, /frontend-explainer, /backend-explainer, /project-explainer
  • Each must return HTTP 200 on the live site
Standalone tests 2 tests
  • test_api_entries_reachable — the entries API must return 200, 302, or 401 (not 500)
  • test_static_crypto_js — /static/crypto.js returns 200 and contains deriveAndStoreKey
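The "not 500" check amounts to a small status whitelist: any of the accepted codes proves the route is wired up, while a 500 means the server itself is broken. A sketch (names hypothetical):

```python
# Hypothetical sketch of the API-reachability assertion. Redirects (302) and
# auth challenges (401) still prove the route exists; a 500 never does.
ACCEPTABLE = {200, 302, 401}

def api_entries_ok(status_code: int) -> bool:
    return status_code in ACCEPTABLE

assert api_entries_ok(401)       # auth-gated is fine
assert not api_entries_ok(500)   # a crash is never acceptable
```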
When to run smoke tests? After every deployment (git push heroku main). They confirm that the new code is live, all pages are reachable, HTTPS is working, and the encryption library was served correctly.
⚠️
What Is Not Tested (and Why)
Intentional gaps in the test suite
  • Quiz pages — the 7 quiz templates are in neither the unit nor the smoke tests. They are static content with no server logic, and testing that HTML renders is low value.
  • Client-side encryption — crypto.js runs in the browser using the Web Crypto API, which is not available in the Python test environment. Browser-level testing would require a tool like Playwright or Selenium.
  • FHIR API calls — the GTRI FHIR server is an external third-party service. Tests that depend on external services are fragile and slow. The unit tests set summarizer = None as the pattern for disabling external services.
  • Email sending — Flask-Mail calls require real SMTP credentials and would send actual emails during tests. These are kept out of scope by not testing the /api/email-summary or /api/send-history endpoints.
  • Android app — the Capacitor Android app is a WebView wrapper. Its behavior matches the browser's and is verified manually in Android Studio.
The rule of thumb used in this project: Test server logic (routes, database, validation) automatically. Test browser UI and external services manually. Don't test static content.
🔁
Regression Testing
How the test suite prevents old bugs from returning
  • A regression is a bug that was fixed but later reappears — usually because a new change broke something that used to work
  • Every test in test_app.py is a regression guard — if a future change breaks a route or API, the test catches it before deployment
  • The smoke tests catch deployment regressions — cases where the app deploys but a page fails to load or HTTPS breaks
  • Real regressions caught during this project include: the PBKDF2 unlock freezing on iOS, the "crypto.subtle not available" error over HTTP, and the Heroku DB fixture being replaced too late in the test lifecycle
  • Adding a test when you fix a bug ensures that same bug cannot silently return
Example — a regression test for input validation
def test_add_entry_empty_content_rejected(self, client):
    """Blank content should be rejected — regression guard.
    If someone removes the validation, this test fails."""
    r = client.post('/api/entries', json={'content': ' '})
    assert r.status_code == 400
    assert 'error' in r.get_json()
The test suite as documentation: reading the tests tells you exactly what the app is supposed to do. test_correct_passcode_redirects documents that 8903 redirects to /. test_generate_summary_without_api_key documents that a missing API key returns 503, not 500.

The testing strategy in one sentence

Run pytest test_app.py before every deploy to catch broken routes and API regressions locally, then run pytest test_smoke.py after every deploy to confirm the live site is healthy.