DevOps Pipelines & System Architecture
A behind-the-scenes look at how I ship production-ready software: from Azure DevOps CI/CD pipelines to the cloud-native architecture powering my insurance PDF comparison platform and other automation projects.
Azure DevOps CI/CD Pipeline
I use Azure DevOps to continuously build, test, and deploy both front-end and back-end services. Typical stacks here include React-based UIs and Node.js / serverless backends (Firebase Cloud Functions and Azure Functions) that integrate with external APIs, LLMs, and databases.
Pipelines are defined as code in YAML so that every change to the build or release process is versioned alongside the source code and goes through the same pull request workflow.
Sample Multi-Stage Pipeline (Frontend + Functions)
trigger:
  branches:
    include:
      - main

pool:
  vmImage: 'ubuntu-latest'

variables:
  - group: 'ppi-shared-secrets' # API keys, connection strings, etc.

stages:
  - stage: Build
    displayName: 'Build & Test'
    jobs:
      - job: build
        steps:
          - checkout: self
          - task: NodeTool@0
            inputs:
              versionSpec: '18.x'
          - script: |
              npm install
              npm run lint
              npm test -- --watch=false
            displayName: 'Install & Test'
          - script: npm run build
            displayName: 'Build App'
          - task: PublishBuildArtifacts@1
            inputs:
              pathToPublish: 'build'
              artifactName: 'drop'

  - stage: Deploy
    displayName: 'Deploy to Production'
    dependsOn: Build
    jobs:
      - deployment: deploy
        environment: 'production'
        strategy:
          runOnce:
            deploy:
              steps:
                - task: DownloadBuildArtifacts@0
                  inputs:
                    artifactName: 'drop'
                - script: |
                    npm install -g firebase-tools
                    firebase deploy --only hosting,functions
                  displayName: 'Deploy to Firebase'
This pattern gives me automated tests on every push, gated deployments, and a reproducible way to ship both UI and backend changes together.
Deployment Workflow & Branching Strategy
To keep deployments predictable and easy to reason about, I follow a clean branching model and environment-based promotion flow:
- main – Always deployable. Represents production.
- develop (optional) – Integration branch for upcoming features.
- feature/* – Short-lived branches for individual features or fixes.
- hotfix/* – Emergency fixes applied on top of main, then merged back.
Pull requests into develop or main automatically trigger the CI pipeline, running tests and builds. Approved PRs to main then flow into the Deploy stage, which pushes changes to production (Firebase Hosting, Cloud Functions, Azure Functions, or other targets depending on the project).
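The promotion rules above can be sketched as a small helper. This is purely illustrative — the function name and the policy of which branches trigger which stages are just a restatement of the branching model described here, not code from a real pipeline:

```javascript
// Sketch of the branching model above: given a branch name, decide
// which pipeline stages should run. Branch prefixes mirror the list
// above (main, develop, feature/*, hotfix/*).
function stagesFor(branch) {
  if (branch === "main") {
    return ["Build", "Deploy"]; // production: full pipeline, gated deploy
  }
  if (
    branch === "develop" ||
    branch.startsWith("feature/") ||
    branch.startsWith("hotfix/")
  ) {
    return ["Build"]; // CI only: lint, tests, and build — no deploy
  }
  return []; // unknown branches trigger nothing
}

console.log(stagesFor("main"));              // [ 'Build', 'Deploy' ]
console.log(stagesFor("feature/ocr-retry")); // [ 'Build' ]
```

Whether hotfix/* branches also deploy directly is a policy choice; here they go through CI and reach production only after merging to main.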
Insurance PDF Comparison – High-Level Architecture
The Insurance PDF Comparison platform is built as a cloud-native, serverless system. It ingests carrier and public adjuster estimates as PDFs, runs OCR, stores normalized text, and then uses an LLM to generate side-by-side comparisons and exportable reports.
At a high level, the architecture looks like this:
Browser (React App)
  ├─ UploadPDF.js     # Claim ID + file upload UI
  ├─ DocumentList.js  # Real-time Firestore document list
  └─ ComparePDFs.js   # Compare + Download PDF actions
        │
        ▼
Firebase Hosting / React SPA
        │
        ▼
Firebase Cloud Functions
  ├─ OCR Trigger
  │    └─ On PDF upload → run Google Vision / OCR
  │         → Store extractedText in Firestore
  └─ compareInsurancePDFs (Callable)
       ├─ Fetch texts for 2 PDFs
       ├─ Call Gemini / LLM with custom prompt
       └─ Return Markdown comparison result
        │
        ▼
Firestore & Storage
  ├─ claims/{claimId}/documents/{fileName}
  └─ extractedText + metadata
        │
        ▼
Client
  ├─ Renders Markdown comparison
  └─ Uses html2canvas + jsPDF for export
The front-end is a React SPA that talks directly to Firebase (for auth, Firestore, and Storage) and calls Cloud Functions for heavy lifting such as OCR and LLM comparison logic. All long-running or sensitive operations stay off the client to keep the UI fast and secure.
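Inside the compareInsurancePDFs function, the core of the work is assembling a structured prompt from the two extracted texts before calling the LLM. A minimal sketch of that assembly — the function name, prompt wording, and document labels are illustrative, not the production prompt:

```javascript
// Hypothetical sketch of the prompt-assembly step inside the callable.
// The real function would first fetch extractedText for both documents
// from Firestore; here the texts are passed in directly.
function buildComparisonPrompt(carrierText, adjusterText) {
  return [
    "You are comparing two insurance repair estimates.",
    "Return a Markdown table of line items, noting scope and price differences.",
    "",
    "--- DOCUMENT A (carrier estimate) ---",
    carrierText,
    "",
    "--- DOCUMENT B (public adjuster estimate) ---",
    adjusterText,
  ].join("\n");
}
```

Keeping prompt construction in a pure function like this makes it easy to unit-test the prompt shape without touching Firestore or the LLM.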
Security, Secrets & Access Control
Handling insurance documents and claim details means security is a first-class concern. I keep secrets, access control, and data isolation baked into the architecture and pipeline:
- Secrets in Azure DevOps – API keys (Gemini/LLM, OCR providers, etc.), service account JSON, and connection strings are stored in secure variable groups. Pipelines consume them via $(SECRET_NAME) and they never appear in the repo.
- Environment Config – Local development uses .env files (git-ignored), while production uses environment variables configured at the platform level (Firebase Functions config, Azure Function App settings).
- Firestore Rules – Access to claim documents and extracted text can be restricted to authenticated roles, with reads/writes limited to internal tools and service accounts.
- Logging & Observability – Cloud Functions log structured events for OCR runs, LLM calls, and comparison generation. Failures are visible in function logs and can be wired into alerts if needed.
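On Google Cloud, any JSON object a function writes to stdout is ingested as a structured log entry, so a tiny helper is enough to get filterable events. A sketch — the event names and fields below are illustrative, not the actual log schema:

```javascript
// Minimal structured-logging helper: Cloud Logging parses JSON lines
// from stdout and promotes fields like "severity" into log metadata.
function formatEvent(severity, message, fields = {}) {
  return JSON.stringify({ severity, message, ...fields });
}

function logEvent(severity, message, fields) {
  console.log(formatEvent(severity, message, fields));
}

// Example events for the flows described above (names are illustrative):
logEvent("INFO", "ocr.completed", { claimId: "smith", file: "carrier.pdf", chars: 12840 });
logEvent("ERROR", "llm.call_failed", { claimId: "smith", model: "gemini" });
```

Because every line is machine-parseable, alerts can later be keyed on severity or event name without changing the functions themselves.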
The combination of CI/CD, environment-based config, and strong access control makes it easy to iterate quickly without sacrificing security or traceability.
End-to-End Workflow
Putting everything together, a typical “compare two PDFs” flow looks like this:
- The user enters a Claim ID (typically the client’s name) and uploads two PDFs.
- Files are uploaded to Firebase Storage under claims/{ClaimID}.
- A Cloud Function trigger runs OCR and writes extractedText to Firestore.
- When the user clicks Compare, the React app calls the compareInsurancePDFs callable function.
- The function pulls both texts from Firestore, sends them to the LLM with a structured prompt, and returns a Markdown-formatted comparison.
- The UI renders the result and lets the user export a clean A4 PDF via html2canvas + jsPDF.
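Everything in this flow is keyed off the Claim ID, so the Storage and Firestore paths have to stay in lockstep. A sketch of that path convention — the helper name and the ID normalization rule are hypothetical; only the claims/{ClaimID} and claims/{claimId}/documents/{fileName} layouts come from the architecture above:

```javascript
// Hypothetical helper mirroring the claims/{ClaimID} layout described
// above: normalize the user-entered Claim ID, then build the Storage
// path and the matching Firestore document path for one uploaded file.
function claimPaths(claimId, fileName) {
  const id = claimId.trim().toLowerCase().replace(/\s+/g, "-");
  return {
    storagePath: `claims/${id}/${fileName}`,
    firestorePath: `claims/${id}/documents/${fileName}`,
  };
}

console.log(claimPaths("John Smith", "carrier.pdf"));
// { storagePath: 'claims/john-smith/carrier.pdf',
//   firestorePath: 'claims/john-smith/documents/carrier.pdf' }
```

Centralizing the convention in one function means the upload UI, the OCR trigger, and the compare function can never drift apart on where a document lives.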
This page exists to show not just what I build, but how I think about pipelines, architecture, and keeping complex systems maintainable over time.