
Chapter 3: Operational Guide

This chapter describes the standard procedures for developing, maintaining, and administering the "Augmented Financial Analyst" application.

3.1. The Ideal Development Workflow

Any change to the project, whether it's a bug fix, a new feature, or a simple text update, must follow this rigorous process to ensure production stability.

3.1.1. Create a Working Branch

Never work directly on the main branch. Every new task begins by creating a dedicated branch from the most up-to-date version of main.

# Ensure the local `main` branch is up-to-date
git checkout main
git pull origin main

# Create and switch to a new descriptive branch
git checkout -b <change-type>/<short-description>
Branch name examples: feature/add-new-chart, fix/login-page-bug.

3.1.2. Local Code Modification

Make all necessary code changes in your local development environment (e.g., VS Code).

3.1.3. Local Test Validation

Before submitting your work, run the entire test suite to ensure your changes have not introduced any regressions. This command must be run from the root of the application repository.

# From the project root
python -m pytest -v
All tests must pass (or be intentionally skipped).

3.1.4. Save Your Work (Commit)

Save your changes to the Git history with a clear and concise commit message, following the "Conventional Commits" convention.

# Add the modified files
git add .

# Create the commit
git commit -m "type(scope): description of the change"
Examples: feat(dashboard): Add 52-week analysis panel, fix(pipeline): Correct currency conversion logic.

3.1.5. Share and Review (Pull Request)

Push your working branch to the GitHub repository and open a "Pull Request" (PR).

git push --set-upstream origin <your-branch-name>
On GitHub, create the Pull Request targeting the main branch. The PR is the opportunity to describe your changes and allow the CI/CD system to run the tests one last time in a clean environment.

3.1.6. Merging

Once the Pull Request is approved (reviewed and CI tests passed), merge it into main.

3.2. Data Maintenance Procedures

3.2.1. Update Portfolio Composition

  • When: When you buy or sell stocks, or modify quantities.
  • Procedure:
    1. Open the data/tipranks_raw.csv file on your local machine.
    2. Modify, add, or delete the necessary rows.
    3. Crucial: For any new row, ensure you fill in the Marketstack_Ticker and Marketstack_Currency mapping columns after researching them on the Marketstack website.
    4. Follow the Ideal Development Workflow (commit, push, merge). The data pipeline will process the changes from the CSV.
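For illustration, a row in data/tipranks_raw.csv might look like the following. Only the Marketstack_Ticker and Marketstack_Currency mapping columns are named in this guide; the other columns shown here (Ticker, Quantity) are assumptions about the TipRanks export layout:

```csv
Ticker,Quantity,Marketstack_Ticker,Marketstack_Currency
AAPL,10,AAPL,USD
RDSA,25,SHEL.XLON,GBP
```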

3.2.2. Manually Run the Data Pipeline

  • When: For a one-time data refresh or to force processing for a specific day after an error.
  • Procedure:
    1. SSH into the VPS.
    2. Navigate to the project directory: cd /var/www/qa-automated-pipeline.
    3. Run the command: docker compose exec app python -m code_source_simule.pipeline.
  • Important Behavior: This command targets the previous market day. Sunday runs are blocked; Monday runs are blocked by default and allowed only as a morning catch-up when the previously recorded run failed.
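The guardrail behavior above can be sketched as a date-resolution function. This is an illustrative model only (the function name and signature are hypothetical, not the pipeline's real API):

```python
from datetime import date, timedelta
from typing import Optional


def resolve_pipeline_target(today: date, last_run_failed: bool) -> Optional[date]:
    """Sketch of the run guardrails: returns the market day to process,
    or None when the run is blocked.

    - Sunday runs are always blocked.
    - Monday runs are blocked unless the previously recorded run failed
      (morning catch-up only).
    - Otherwise the target is the previous market day, skipping the weekend.
    """
    weekday = today.weekday()  # Monday == 0 ... Sunday == 6
    if weekday == 6:  # Sunday: always blocked
        return None
    if weekday == 0 and not last_run_failed:  # Monday: catch-up only
        return None
    target = today - timedelta(days=1)
    while target.weekday() >= 5:  # skip Saturday (5) and Sunday (6)
        target -= timedelta(days=1)
    return target
```

For example, a Monday catch-up run (after a recorded failure) resolves to the previous Friday.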

3.2.3. Configure SMTP Alerts

  • When: Before enabling production operator email notifications.
  • Procedure: Configure these environment variables on the target runtime (local shell, VPS, or CI secret store):
    • SMTP_HOST
    • SMTP_PORT
    • SMTP_USER
    • SMTP_PASSWORD
    • ALERT_EMAIL_TO
    • SMTP_SECURITY (starttls, ssl, none) — default starttls
    • SMTP_TIMEOUT_SECONDS — default 10
  • Recommended profiles:
    • 587 + SMTP_SECURITY=starttls
    • 465 + SMTP_SECURITY=ssl
  • Fail-open behavior: if SMTP is misconfigured or unreachable, the pipeline continues and logs a warning/event for diagnosis.
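The fail-open contract can be sketched as follows. This is a minimal illustration of the behavior described above, not the pipeline's actual alerting module; the function name and message structure are assumptions:

```python
import logging
import os
import smtplib
import ssl
from email.message import EmailMessage

log = logging.getLogger("pipeline.alerts")


def send_alert(subject: str, body: str) -> bool:
    """Fail-open alert sender sketch: never raises, returns True only on
    a successful send. Misconfiguration or network errors are logged."""
    host = os.environ.get("SMTP_HOST")
    recipient = os.environ.get("ALERT_EMAIL_TO")
    if not host or not recipient:
        log.warning("SMTP not configured; skipping alert '%s'", subject)
        return False
    port = int(os.environ.get("SMTP_PORT", "587"))
    security = os.environ.get("SMTP_SECURITY", "starttls").lower()
    timeout = float(os.environ.get("SMTP_TIMEOUT_SECONDS", "10"))

    msg = EmailMessage()
    msg["Subject"], msg["To"] = subject, recipient
    msg["From"] = os.environ.get("SMTP_USER", "pipeline@localhost")
    msg.set_content(body)

    try:
        if security == "ssl":  # 465 + SMTP_SECURITY=ssl profile
            server = smtplib.SMTP_SSL(host, port, timeout=timeout)
        else:  # 587 + SMTP_SECURITY=starttls profile, or plain
            server = smtplib.SMTP(host, port, timeout=timeout)
            if security == "starttls":
                server.starttls(context=ssl.create_default_context())
        with server:
            user = os.environ.get("SMTP_USER")
            password = os.environ.get("SMTP_PASSWORD")
            if user and password:
                server.login(user, password)
            server.send_message(msg)
        return True
    except (OSError, smtplib.SMTPException) as exc:
        log.warning("Alert delivery failed (fail-open): %s", exc)
        return False
```

The key property is that the caller (the pipeline) can invoke this unconditionally: a broken SMTP setup degrades to a logged warning, never a pipeline failure.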

3.2.4. Troubleshoot Missing API Prices (Marketstack)

  • When: UI history stops updating or alerts report a high number of missing API prices.
  • Procedure:
    1. Confirm cron timing first (crontab -l) and verify the last run log (tail -n 80 /var/log/cron-pipeline.log).
    2. Reproduce the failing date directly against provider endpoints in Postman or curl.
    3. Compare v1/eod and v2/eod for the same symbols and date range before any code change.
    4. If v2/eod returns valid data and v1/eod does not, treat this as endpoint compatibility and align pipeline endpoint usage.
    5. Re-run one manual import and verify DB/UI refresh before closing the incident.
  • Rule: For external API incidents, provider-side endpoint/version reproduction is mandatory before implementing mitigations.
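To make the v1/v2 comparison reproducible, the two request URLs can be built side by side. The https://api.marketstack.com/&lt;version&gt;/eod URL pattern and query parameters below are assumptions based on the provider's public URL scheme; verify them against the current Marketstack documentation before use:

```python
from urllib.parse import urlencode


def eod_url(version: str, symbols: str, day: str, access_key: str) -> str:
    """Build a Marketstack end-of-day request URL for manual reproduction
    (e.g., with curl or Postman). URL shape is an assumption, not verified."""
    query = urlencode({
        "access_key": access_key,
        "symbols": symbols,
        "date_from": day,
        "date_to": day,
    })
    return f"https://api.marketstack.com/{version}/eod?{query}"


# Print both versions for the same failing symbol/date, then curl each:
for version in ("v1", "v2"):
    print(eod_url(version, "AAPL", "2024-01-05", "YOUR_ACCESS_KEY"))
```

Comparing the two responses for the same symbols and date range satisfies the provider-side reproduction rule above before any pipeline change is made.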

3.2.5. Run User E2E/API Commands (local Windows)

  • When: To manually validate application login, reusable authentication state, dashboard rendering, and Marketstack API scenarios.
  • Standard procedure:
    1. Open PowerShell in e2e/.
    2. Save secrets once (or after credential changes):
      • npm run secrets:save:windows
    3. Rebuild login session state:
      • npm run test:auth:setup:windows
    4. Validate access without re-entering login:
      • npm run test:skip-login:windows
    5. If only the Marketstack API key changes:
      • npm run secrets:save:windows:marketstack
    6. Run secure Marketstack API tests:
      • npm run test:marketstack:windows
    7. Run the dedicated dashboard suite without opening the browser:
      • npm run test:dashboard
    8. Run the same dashboard suite with the browser visible:
      • npm run test:dashboard:headed
  • Useful shortcuts:
    • Full E2E suite: npm run test:e2e
    • Local CI subset aligned with the Marketstack business suite: npm run test:e2e:ci
    • Local Marketstack CI subset: npm run test:marketstack:ci
  • Security rule: never store MARKETSTACK_API_KEY in plaintext in the repository; use only local Windows secrets (DPAPI) and GitHub secrets in CI.
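The npm scripts above are defined in e2e/package.json. As a purely hypothetical illustration of how a few of them might be wired with Playwright (only the script names come from this guide; the real script bodies may differ):

```json
{
  "scripts": {
    "test:e2e": "playwright test",
    "test:dashboard": "playwright test tests/dashboard",
    "test:dashboard:headed": "playwright test tests/dashboard --headed",
    "test:auth:setup:windows": "playwright test tests/auth.setup.ts"
  }
}
```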

3.3. Production Server Administration (VPS)

3.3.1. Check Application Status

  • View active containers: docker ps (should show qa-automated-pipeline-app-1 and qa-automated-pipeline-db-1 with status Up).
  • Application logs (Gunicorn/Flask): cd /var/www/qa-automated-pipeline && docker compose logs -f app (-f to follow in real-time).
  • Web server error logs: sudo tail -f /var/log/nginx/error.log.

3.3.2. Restart Services

  • Full restart (App + DB): cd /var/www/qa-automated-pipeline && docker compose restart.
  • Restart the application only: cd /var/www/qa-automated-pipeline && docker compose restart app.
  • Restart Nginx: sudo systemctl restart nginx.

3.3.3. Manage the Automated Pipeline (cron)

  • List scheduled tasks: crontab -l.
  • Edit scheduled tasks: crontab -e.
    • Recommendation: Keep a simple schedule from Monday to Saturday morning (server local time), and let the pipeline guardrails enforce Sunday block and Monday catch-up-only-on-failure logic.
  • Check logs of the last execution: cat /var/log/cron-pipeline.log.
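As a sketch, a crontab entry matching the recommendation above might look like this. The 07:30 time is an example only (adjust to server local time); the command and log path follow sections 3.2.2 and 3.2.4:

```cron
# Run the pipeline Monday-Saturday at 07:30 server local time.
# Sunday blocking and Monday catch-up logic are enforced by the pipeline
# guardrails, not by this schedule.
30 7 * * 1-6 cd /var/www/qa-automated-pipeline && docker compose exec -T app python -m code_source_simule.pipeline >> /var/log/cron-pipeline.log 2>&1
```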

3.3.4. Interact with the Database

  1. SSH into the VPS.
  2. Navigate to the project directory: cd /var/www/qa-automated-pipeline.
  3. Load environment variables: source .env.
  4. Launch the MariaDB client: docker compose exec db mysql -u"$DB_PROD_USER" -p"$DB_PROD_PASSWORD" "$DB_PROD_NAME".

3.4. Dependency Management

To add a new Python library (e.g., new-library):

  1. On your local machine, with the venv activated, install the library: pip install new-library.
  2. Update the requirements.txt file. This is the most important step for reproducibility:

pip freeze > requirements.txt

  3. Verify that the application still works by running the local tests (pytest -v).
  4. Follow the Ideal Development Workflow (commit, push, etc.). The deployment process rebuilds the Docker image with the new library (--build), making the dependency available in production.

3.5. Project Documentation Management

The project documentation (the chapters you are currently reading), the test coverage report, and the architect's log are managed with MkDocs.

3.5.1. Build and View Documentation Locally

The documentation source files live directly in the docs/ directory. To preview locally, run the MkDocs development server from the project root:

mkdocs serve

You can now open your browser to http://127.0.0.1:8000 to see the live preview.

Note: You must have the documentation dependencies installed: pip install -r docs/docs-requirements.txt.

3.5.2. Update the Documentation

  • To modify the main documentation (like this guide), edit the Markdown files located in docs/.
  • To update the functional test cases, edit the Markdown files in test_cases/.
  • Changes are reflected immediately in the local preview server.

3.6. Formalized Debugging Plan

To ensure consistent, reliable, and well-documented bug fixes, all bugs must follow this standardized debugging workflow:

3.6.1. Overview of the Standardized Plan

  1. Create a GitHub Issue — Brief statement of the problem (the initial observation, or constat) without diagnosis details.
  2. Create a Dedicated Branch — Branch naming: fix/<short-issue-description>.
  3. Develop TDD Tests — Write tests that validate the expected behavior or expose the bug.
  4. Implement the Fix — Make code changes to pass the new tests.
  5. Test the Fix — Run new tests and validate the fix works.
  6. Execute Regression Tests — Ensure all existing tests still pass.
  7. Update Test Coverage — Generate and update coverage reports.
  8. Document in an Activity Report — Create a report following the established format (see examples in docs/activity_report/).
  9. Update Related Documentation — Review and update chapters 1, 2, and 3 if the fix impacts architecture or procedures.
  10. Formalize as a Project Rule — Verify this plan remains codified and updated in Chapter 3.

3.6.2. GitHub Issue Template

Every issue should be concise and state only the initial observation (no investigation or diagnosis):

## Bug Title

### Constat

Brief description of the symptom or failure observed in production.
Include error messages, dates, and reproducibility information if available.

---

(Investigation and solution details will be documented in the PR and activity report.)

3.6.3. Branch Naming Convention

  • Bug fixes: fix/<short-description> (e.g., fix/ticker-column-length-bug)
  • Features: feature/<short-description>
  • Refactors: refactor/<short-description>

3.6.4. Writing TDD Tests

New tests should be added to the appropriate test file and should:

  • Use descriptive test IDs (e.g., tc-pipe-ticker01).
  • Include a docstring explaining what is being validated.
  • Validate both the happy path and edge cases (e.g., maximum length, empty values, invalid types).

Example structure:

@pytest.mark.test_id("tc-pipe-ticker01")
def test_insert_ticker_with_reasonable_length(self, clean_db):
    """Validates that a ticker of reasonable length inserts successfully."""
    # GIVEN: setup
    # WHEN: action
    # THEN: assertion

3.6.5. Checklist Before Merging

  • GitHub issue created with clear constat
  • Dedicated branch created and pushed
  • TDD tests written and passing
  • Bug fix implemented and working
  • All regression tests pass (40+ tests)
  • Test coverage reports updated
  • Activity report documented
  • Related documentation chapters reviewed and updated
  • Commit messages follow "Conventional Commits" standard
  • Pull Request reviewed and approved
  • Merged into main and scheduled for production deployment