We engaged with Raw Blend in late April. What started as an audit of their flagship store became a full rebuild of their e-commerce ops layer — across SEO, structured data, supplier integrations and warehouse automation.
A single-store stock sync running on Windows Task Scheduler: the prototype that proved the patterns — auth, idempotency, local-first deployment — that every later automation was built on.
A single Flask dashboard wrapping six storefronts, six production automations, and the headline ShipStation → QuickBooks tracking sync running hourly without human touch.
Bringing EcoVessel and U-Konserve under the autonomous platform, generalising purchase orders across every carrier, cross-store reporting, and webhook-driven real-time syncs.
"Make the warehouse stop re-typing tracking numbers."
That was the visible pain. As we walked through the system map, it became clear the warehouse problem was a symptom of something deeper: the integration layer between Shopify, QuickBooks, and ShipStation didn't really exist. Each store was a silo. Each tool spoke its own dialect. Each automation was a one-off script that broke every few months.
The real brief, after we refined it together, became:
Build a single autonomous platform that connects all six storefronts to QuickBooks and ShipStation, automates every operational handoff that humans currently do by hand, and gives the team a dashboard so they can see what's working without me being involved.
Raw Blend, the company, sells across six independent Shopify stores — each with its own brand, customers, and product catalogue. Behind those six fronts sits one shared back office: one warehouse, one QuickBooks Online file, one ShipStation account.
The operational glue between these systems looked like this:
.bat files on the warehouse PC kicking the workflow via webhook (Cloudflare blocks n8n's IPs, so the .bat had to run from an Australian residential IP)
That last item was the headline pain. Every shipped order incurred a ~30-second copy-paste round trip — fetch tracking from ShipStation, find the matching invoice in QB, paste, save. Multiplied across the six stores, it was eating ~1 hour of staff time daily, with a non-trivial typo rate.
Layered on top: no observability. If an n8n workflow failed silently overnight, nobody knew until a customer complained two days later about wrong stock or a missing tracking number.
A self-contained Python application that lives on the warehouse PC. Flask serves a localhost dashboard. Windows Task Scheduler runs the automations on schedule. Every credential lives in one .env file. No cloud bills, no recurring subscriptions, no platforms to be locked into.
```
┌──────────────────────────────────────────────────────────────────┐
│ Flask dashboard (localhost:5000)                                 │
│ ┌─────────┬─────┬─────┬──────────┬───────────┬────────────┐      │
│ │ RawBlend│ CE  │ CF  │ Blenders │ U-Konserve│ EcoVessel  │      │
│ └────┬────┴──┬──┴─────┴──────────┴───────────┴────────────┘      │
│      │       │                                                   │
│      │       └─── 3 inventory automations                        │
│      │                                                           │
│      └─── 3 ops automations (purchase orders,                    │
│           ShipStation tracking sync, Roband cost sync)           │
└──────┬───────────┬───────────────┬────────────────────────────┬──┘
       │           │               │                            │
       ▼           ▼               ▼                            ▼
   Shopify     QuickBooks     ShipStation                    Windows
  (6 stores)     Online         V2 API                         Task
   GraphQL    OAuth refresh     API key                     Scheduler
   + REST     sparse update                                (autonomous)
```
One Python codebase. Six store tabs. Six automations. Three external APIs. Zero recurring SaaS fees.
PurchaseCost is written only on items where the wholesale price actually changed (a sparse update, not a blanket overwrite).

The dashboard isn't an afterthought wrapper around the runners — it's the single source of truth for staff. One bookmarkable URL on the warehouse PC, one tab per store, one card per automation. They never need to phone us to ask "did anything fail?"
Each scheduled automation (Uropa, AG, ShipStation Tracking) renders a card showing last-run timestamp, exit code, summary line, and a link to the latest log. Run now kicks off a manual run; View latest log opens the file directly. Status freshness is colour-coded so a glance tells the story.
Status lands in status/<id>.json; each card is registered in registry.py. Roband cost sync and FED stock sync run on an irregular cadence — the supplier emails an Excel sheet whenever they feel like it. The card has a drop zone: drag the .xlsx onto it, click Run, the runner parses, syncs, writes status. Files land in uploads/; no "where do I save the file?" decisions required.

OAuth client_credentials grant, GraphQL + REST helpers. Same code paths for all six stores.
QBO OAuth refresh, paginated queries, sparse-update writes, invoice PDF fetch.
ShipStation V2 — labels + shipments (we need the latter for the multi-store routing trick).
Three-tier credential resolution: per-store JSON → hub-root .env → dev fallback.
Per-run timestamped logs in YYYY-MM-DD_HHMMSS.log, one folder per automation.
JSON contract between runners and dashboard — every automation writes one status file when done.
SMTP for the AU Post purchase-orders flow — emails warehouse with invoice PDFs attached.
Local printer integration for warehouse-side label and document output.
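The three-tier credential resolution above can be sketched in a few lines. The file layout (`stores/<id>/credentials.json`) and env-var naming convention here are illustrative assumptions, not the production scheme:

```python
import json
import os

def resolve_credential(store_id, key, store_dir="stores", default=None):
    """Resolve a credential with three-tier priority:
    1. per-store JSON file  (stores/<store_id>/credentials.json)  -- hypothetical layout
    2. hub-root .env        (assumed already loaded into os.environ, e.g. via python-dotenv)
    3. dev fallback         (explicit default)
    """
    # Tier 1: per-store JSON overrides everything
    path = os.path.join(store_dir, store_id, "credentials.json")
    if os.path.exists(path):
        with open(path) as f:
            creds = json.load(f)
        if key in creds:
            return creds[key]
    # Tier 2: hub-root .env, keyed like CE_API_KEY (naming is an assumption)
    env_key = f"{store_id.upper()}_{key.upper()}"
    if env_key in os.environ:
        return os.environ[env_key]
    # Tier 3: dev fallback so local runs never crash on a missing secret
    return default
```

The win is that every runner asks one function for secrets, so adding a store means adding a JSON file, not editing code.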
The client's PC is shared with non-technical staff. We don't remote in for updates. The deploy flow has to be extract-and-go with credentials handled on a separate channel.
package_for_client.bat on the dev machine produces a ~93 KB zip. Gitignored stuff (logs, cache, the live .env) is excluded automatically.
.env sent via a secure side channel — never in the same email as the code.
Client extracts to C:\RawBlend\automation-hub\ and runs pip install -r requirements.txt.
python scripts\qb_oauth_bootstrap.py — one-time browser-based QB OAuth flow. Persists refresh token to cache/quickbooks/tokens.json.
Running the install_*_schedule.bat files as admin registers Windows Task Scheduler entries for Uropa, AG and ShipStation Tracking.
install_desktop_shortcut.bat drops a custom-iconned RB Hub shortcut on the desktop — staff just double-click.
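In spirit, each install script registers a task like this. The builder below is an illustrative Python stand-in (the real installers are .bat files), and the task name and schedule are examples — but the schtasks flags are standard Windows ones:

```python
def schtasks_create_cmd(task_name, command, schedule="HOURLY"):
    """Build the schtasks argument list an install_*_schedule.bat effectively runs.

    Dry-run builder: pass the result to subprocess.run(...) from an admin
    shell to actually register the task.
    """
    return [
        "schtasks", "/Create",
        "/TN", task_name,   # task name shown in Task Scheduler
        "/TR", command,     # command line the task executes
        "/SC", schedule,    # e.g. HOURLY for the tracking sync
        "/F",               # overwrite if a task with this name already exists
    ]
```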
This was the one that mattered most to staff, and the one that took the most thinking to design. The naive approach — "for each ShipStation label, look up the matching QB invoice and update it" — runs into three real obstacles.
Each label has a numeric external_order_id (a Shopify order ID), but no store_id to pivot on. With six stores, that's a multi-tenant lookup problem.
QB invoices are created from Shopify orders by a third-party sync tool. The Shopify order number lands somewhere on the QB invoice — but where, exactly? DocNumber? PrivateNote? A custom field?
PrivateNote isn't queryable in QuickBooks.
The most surprising discovery: the QBO API supports a SQL-like query language, but PrivateNote is projectable (you can SELECT it), not queryable (you can't WHERE on it). My first draft of the sync issued a query per ShipStation label and got an immediate 400 error.
Working through them in order:
Each ShipStation shipment (one level above the label) carries a shipment_number field — the prefixed Shopify order name. RB32708 for Raw Blend. CE5776 for Commercial Equipment. UK2534 for U-Konserve. The two-letter prefix unambiguously identifies the source store. No multi-tenant fan-out needed.
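Routing by prefix is then trivial. The mapping below covers only the three prefixes named above; the production map covers all six stores:

```python
import re

# Prefixes named in this write-up; the full map has one entry per store.
STORE_PREFIXES = {
    "RB": "Raw Blend",
    "CE": "Commercial Equipment",
    "UK": "U-Konserve",
}

def parse_shipment_number(shipment_number):
    """Split e.g. 'RB32708' into ('RB', '32708').

    Returns (None, None) for anything that isn't a known two-letter
    prefix followed by digits (e.g. hand-typed warranty strings).
    """
    m = re.fullmatch(r"([A-Z]{2})(\d+)", shipment_number or "")
    if not m or m.group(1) not in STORE_PREFIXES:
        return None, None
    return m.group(1), m.group(2)
```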
The third-party Shopify→QB sync tool writes the prefixed order name into PrivateNote exactly. So the bridge: ShipStation.shipment_number == QuickBooks.Invoice.PrivateNote.
Switched to building an in-memory invoice index once per run. MetaData.CreateTime is queryable, so it bounds the pull cleanly. Each invocation pages QB invoices from the last 60 days, builds two dictionaries ({PrivateNote: invoice}, {DocNumber: invoice}), and looks up each shipment_number against the indexes. ~900 invoices, ~8 paginated queries, ~5 seconds per run.
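The index build can be sketched like this. `fetch_invoice_page` is a stand-in for the paginated QBO query call, and the pagination and field handling are simplified — the shape of the query string is the point, since MetaData.CreateTime can go in the WHERE clause while PrivateNote cannot:

```python
from collections import defaultdict
from datetime import datetime, timedelta, timezone

def build_invoice_indexes(fetch_invoice_page, days=60):
    """Page recent QB invoices once per run and index them twice.

    `fetch_invoice_page(query, start)` is an assumed helper wrapping the
    QBO query endpoint; it returns a list of invoice dicts, empty when done.
    """
    since = (datetime.now(timezone.utc) - timedelta(days=days)).strftime("%Y-%m-%dT%H:%M:%SZ")
    # CreateTime IS queryable, so it bounds the pull cleanly.
    query = f"SELECT * FROM Invoice WHERE MetaData.CreateTime >= '{since}'"

    by_private_note = defaultdict(list)
    by_doc_number = defaultdict(list)
    start = 1
    while True:
        page = fetch_invoice_page(query, start)
        if not page:
            break
        for inv in page:
            note = (inv.get("PrivateNote") or "").strip()
            if note:
                by_private_note[note].append(inv)
            doc = (inv.get("DocNumber") or "").strip()
            if doc:
                by_doc_number[doc].append(inv)
        start += len(page)
    return by_private_note, by_doc_number
```

The per-shipment lookup then cascades through the fallbacks described next.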
```python
def find_invoice_for_shipment(shipment_number, by_private_note, by_doc_number):
    """Lookup priority:
    1. PrivateNote == shipment_number (regular Shopify orders)
    2. DocNumber == shipment_number (warranty entries with full INV-XXXX)
    3. DocNumber == "INV-" + shipment_number (warranty entries with bare number)
    """
    matches = by_private_note.get(shipment_number) or []
    if matches:
        return "private_note", matches

    matches = by_doc_number.get(shipment_number) or []
    if matches:
        return "doc_number", matches

    if shipment_number.isdigit():
        matches = by_doc_number.get(f"INV-{shipment_number}") or []
        if matches:
            return "doc_number", matches

    return "none", []
```
The two fallbacks emerged from real data. Some warranty claims and one-off shipments don't have a Shopify order — the warehouse types the QB invoice number directly into the ShipStation shipment field. Sometimes they type INV-62142, sometimes just 62142. The cascade handles both.
```
383 labels scanned: 361 updated, 20 no QB match, 2 multi-match
Watermark advanced to 2026-05-07T04:48:30Z
```
361 invoices got their tracking number auto-populated in 18 minutes. The 20 unmatched were exactly what you'd expect: customer-name strings the warehouse had typed for one-off warranty shipments (BRANDON, EMMA), and a few invoices outside the 60-day index window. The 2 multi-match cases were duplicate invoices in QB — a separate cleanup task for the bookkeeper, which the sync correctly refused to write to.
From that point forward, the warehouse stopped touching tracking numbers.
The hourly scheduled task handles it.
Every Shopify tutorial says "create app, paste the shpat_ token in your code." The newer pattern — OAuth client_credentials grant — exchanges client_id + client_secret for a fresh 24-hour token at runtime. Built it that way for the first store. Adding the next five was zero code change.
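The runtime pattern is easy to wrap. In the sketch below the actual HTTPS exchange is abstracted behind an assumed `request_token(store)` helper (returning a token and its lifetime in seconds); the cache-until-near-expiry wrapper is what lets six stores share one code path:

```python
import time

class TokenCache:
    """Per-store access tokens, refreshed shortly before they expire.

    `request_token(store)` is an assumed helper performing the
    client_credentials exchange; it returns (access_token, ttl_seconds).
    """
    def __init__(self, request_token, refresh_margin=300):
        self._request_token = request_token
        self._margin = refresh_margin          # refresh 5 min early by default
        self._tokens = {}                      # store -> (token, expiry_epoch)

    def get(self, store):
        token, expiry = self._tokens.get(store, (None, 0.0))
        if time.time() >= expiry - self._margin:
            token, ttl = self._request_token(store)
            self._tokens[store] = (token, time.time() + ttl)
        return token
```

Every API helper calls `cache.get(store)` before each request; nothing else in the codebase knows tokens expire.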
When the bridge field isn't queryable on the server, page locally and index in memory. Cheaper than fighting the API, and gives you a clean place to layer fallback strategies (DocNumber, INV- prefix) without exploding query count.
Every external write is safe to repeat. Re-running pushes the same SKU quantities (Shopify accepts the no-op). Re-running tracking sync skips invoices whose tracking already matches. When something fails, recovery is "just run it again." No state to repair.
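The tracking-sync version of that rule, in miniature. `write_invoice` stands in for the QBO sparse-update call, and the field names are illustrative:

```python
def sync_tracking(invoice, tracking_number, write_invoice):
    """Write tracking onto an invoice only if it differs from what's there.

    Returns 'skipped' or 'updated', so a re-run over the same data is a
    visible no-op rather than a pile of redundant writes.
    """
    if invoice.get("TrackingNum") == tracking_number:
        return "skipped"  # idempotent: re-running repairs nothing because nothing broke
    write_invoice({"Id": invoice["Id"], "sparse": True, "TrackingNum": tracking_number})
    return "updated"
```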
Runs on the warehouse PC. Flask serves localhost only. Windows Task Scheduler triggers automations. State, logs, credentials all on local disk. Supplier feeds need an Australian residential IP anyway — and adding cloud hosting would mean a recurring bill, network round-trips, and another vendor in the failure chain.
Each automation writes a status JSON when it finishes — current state, summary, last-run timestamp, log path. The Flask dashboard polls these and renders one card per automation per store tab. Staff get one URL to bookmark for "did anything fail today?", one-click run, one-click logs. The dashboard isn't sophisticated — three Jinja templates, vanilla JS for polling. That's the point. Less code to break = the client doesn't need me.
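The whole runner/dashboard contract fits in two small functions. Field names below are illustrative rather than the production schema; the atomic-rename detail matters so the polling dashboard never reads a half-written file:

```python
import json
import os
from datetime import datetime, timezone

def write_status(automation_id, state, summary, log_path, status_dir="status"):
    """Runner side: write status/<id>.json when a run finishes."""
    os.makedirs(status_dir, exist_ok=True)
    status = {
        "state": state,        # e.g. "ok" | "error"
        "summary": summary,    # one-line human summary for the dashboard card
        "last_run": datetime.now(timezone.utc).isoformat(),
        "log_path": log_path,  # link target for "View latest log"
    }
    tmp = os.path.join(status_dir, f"{automation_id}.json.tmp")
    final = os.path.join(status_dir, f"{automation_id}.json")
    with open(tmp, "w") as f:
        json.dump(status, f)
    os.replace(tmp, final)  # atomic rename: readers see old or new, never partial

def read_status(automation_id, status_dir="status"):
    """Dashboard side: poll the same file; a missing file means 'never ran'."""
    try:
        with open(os.path.join(status_dir, f"{automation_id}.json")) as f:
            return json.load(f)
    except FileNotFoundError:
        return None
```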
| Metric | Before | After |
|---|---|---|
| Stores under unified ops | 0 | 6 |
| Production automations | 1 (legacy .exe) | 6 |
| Daily warehouse minutes re-typing | ~60 | 0 |
| "Did anything fail?" surface | None | One dashboard, six tabs |
| Time to add new store | ~1 week | ~30 min |
| Time to add new automation | "Talk to Zac first" | 1 file + 1 registry entry |
The 60 minutes per day is the headline — at the client's wage rates, roughly $15,000/year of recovered productive time, against a one-off build cost. The harder-to-quantify wins are bigger though: the bookkeeper trusts that tracking numbers are right. Warehouse staff spend their day on packing rather than data entry. And when the next store gets added, or the next supplier needs a feed, the marginal cost is hours rather than weeks.
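Back-of-envelope on the headline figure. The write-up states only the ~60 minutes/day and ~$15,000/year endpoints; the working-day count and the ~$60/h loaded rate below are assumptions chosen to make those endpoints meet:

```python
minutes_per_day = 60    # re-typing time eliminated (stated above)
working_days = 250      # ASSUMED: ~5 days/week, 50 weeks/year
loaded_rate = 60        # ASSUMED hourly cost in AUD; not stated in the write-up

hours_per_year = minutes_per_day * working_days / 60   # 250 hours recovered
annual_value = hours_per_year * loaded_rate            # ~15,000/year
```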
If your team is re-typing data between systems, watching workflows fail silently, or paying for tools that should talk to each other and don't — there's a build for that.
Discovery is free · Quote in 48 hours · No retainer required