TopoloCommerce
Public overview of the multi-vertical commerce platform for venue operations, guest runtimes, and staff execution.
What It Is
TopoloCommerce is the multi-vertical commerce platform for teams that need one operating system across many venues.
It supports:
- one Topolo org with many venues
- shared commerce foundations
- venue-level module packs for hospitality, retail, or service workflows
- guest-facing and staff-facing runtime surfaces in the same platform model
Architecture
TopoloCommerce is organized as a multi-surface workspace:
- Worker API
- signed-in operations app
- public guest web runtime
- managed mobile guest runtime
The platform resolves effective venue behavior from org defaults, venue overrides, and preset-based module packs instead of hardcoding one vertical operating model.
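As a rough illustration of that layering, effective venue behavior can be thought of as a merge where later layers win. The shapes and the `resolveVenueBehavior` name below are illustrative assumptions, not the platform's actual contract:

```typescript
// Hypothetical shapes; the real resolution contract lives in the Commerce Worker API.
type ModuleSettings = Record<string, unknown>;

interface VenueBehavior {
  modules: Record<string, ModuleSettings>;
}

// Later layers win per setting: org defaults < preset module pack < venue overrides.
function resolveVenueBehavior(
  orgDefaults: VenueBehavior,
  presetPack: VenueBehavior,
  venueOverrides: VenueBehavior,
): VenueBehavior {
  const modules: Record<string, ModuleSettings> = {};
  for (const layer of [orgDefaults, presetPack, venueOverrides]) {
    for (const [name, settings] of Object.entries(layer.modules)) {
      modules[name] = { ...(modules[name] ?? {}), ...settings };
    }
  }
  return { modules };
}
```

In this sketch a hospitality preset can enable modules the org default leaves off, while a single venue can still override individual settings without forking the whole pack.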
Runtime Surfaces
See /systems/topolo-commerce for the current runtime inventory.
The key points for public consumers are:
- staff and chain operators use the signed-in ops app
- the signed-in ops app now uses canonical venue-scoped URLs in the form /venues/:venueId/...
- guests use venue-scoped web or managed-device surfaces that can now submit live orders, service requests, and bookings
- venue teams can manage sections, assign staff resources, and work live queue items as they arrive
- venue operators can now edit the live venue catalog from ops instead of only viewing seeded catalog content
- guest and ops surfaces now keep cached local state and replay queued actions after temporary cloud interruption
- signed-in venue workspaces are scoped to the authenticated Topolo org and no longer fall back to demo venues when a different org signs in
- public guest venue discovery now resolves from live active venues instead of a hidden default demo org
- normal guest and ops runtimes now use only live Commerce data or cached real snapshots; demo content is reserved for explicit local demo mode
- replayed guest and ops writes now carry stable clientMutationId values so temporary connectivity loss does not create duplicate venue records during recovery
- venue operators can now inspect a dedicated resilience view showing the current cloud journal, recovery posture, and enrolled Venue Edge nodes for a venue
- venue boards and DOOH outputs publish through the existing Nodo runtime
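The cached-local replay behavior above can be sketched as a browser outbox where each queued action is assigned a stable clientMutationId once and reuses it on every retry, so the server can deduplicate. The `Outbox` class and `send` callback here are hypothetical, not the actual runtime code:

```typescript
import { randomUUID } from "node:crypto";

// Hypothetical outbox entry: the clientMutationId is minted once when the
// action is queued and reused verbatim on every replay attempt.
interface OutboxEntry {
  clientMutationId: string;
  path: string;
  body: unknown;
}

class Outbox {
  private queue: OutboxEntry[] = [];

  enqueue(path: string, body: unknown): OutboxEntry {
    const entry = { clientMutationId: randomUUID(), path, body };
    this.queue.push(entry);
    return entry;
  }

  // Replay in order. A server that already applied a clientMutationId can
  // return the original result instead of creating a duplicate record.
  async replay(send: (e: OutboxEntry) => Promise<boolean>): Promise<void> {
    while (this.queue.length > 0) {
      const ok = await send(this.queue[0]);
      if (!ok) return; // still offline: keep the entry and its id for next time
      this.queue.shift();
    }
  }
}
```

The important property is that a failed replay does not re-enqueue with a fresh id; the same id arriving twice is how the cloud side distinguishes a retry from a new write.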
API Reference
The current route families cover org context, module resolution, venue detail, catalog reads and saves, venue resilience state, Venue Edge node enrollment and sync, guest venue discovery, guest-session creation, live order and request submission, voice-intent resolution, queue reads and transitions, live queue streaming, team management, import approvals, and payment-session creation.
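Of those families, queue transitions are the easiest to mis-implement across surfaces. A minimal sketch of a shared queue-item state machine follows; the states and the `transition` helper are illustrative assumptions, not the published contract:

```typescript
// Hypothetical queue-item lifecycle; the real transition contract lives in
// the Commerce Worker API and may use different state names.
type QueueState = "new" | "claimed" | "in_progress" | "done" | "cancelled";

const allowed: Record<QueueState, QueueState[]> = {
  new: ["claimed", "cancelled"],
  claimed: ["in_progress", "new", "cancelled"],
  in_progress: ["done", "cancelled"],
  done: [],
  cancelled: [],
};

// Reject transitions the contract does not allow, so every surface that
// works the live queue agrees on the same state machine.
function transition(current: QueueState, next: QueueState): QueueState {
  if (!allowed[current].includes(next)) {
    throw new Error(`illegal queue transition: ${current} -> ${next}`);
  }
  return next;
}
```

Centralizing the allowed-transition table is what keeps staff apps, guest surfaces, and the live stream from disagreeing about what a queue item can do next.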
The current resilience contract also includes replay-safe guest and staff mutations for the browser outbox model plus the first cloud-side Venue Edge bootstrap and journal-sync contract.
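The journal-sync side of that contract can be pictured as a cursor loop: a Venue Edge node pulls journal pages in order, applies events, and persists the advancing cursor so a restart resumes cleanly. `fetchJournalPage`, `JournalEvent`, and the page shape below are illustrative only:

```typescript
// Hypothetical journal-sync client for a Venue Edge node. The real
// bootstrap and sync routes live in the Commerce Worker API.
interface JournalEvent {
  seq: number;        // monotonically increasing position in the venue journal
  kind: string;
  payload: unknown;
}

interface JournalPage {
  events: JournalEvent[];
  nextCursor: number; // resume point for the next sync call
}

async function syncJournal(
  fetchJournalPage: (cursor: number) => Promise<JournalPage>,
  apply: (event: JournalEvent) => void,
  cursor: number,
): Promise<number> {
  // Pull pages until the journal is drained, applying events in order.
  for (;;) {
    const page = await fetchJournalPage(cursor);
    for (const event of page.events) apply(event);
    if (page.nextCursor === cursor) return cursor; // caught up
    cursor = page.nextCursor;
  }
}
```

An unchanged cursor is the caught-up signal in this sketch; the caller would persist the returned cursor before the next sync so the node never reapplies or skips journal events.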
Use /systems/topolo-commerce plus the internal handbook for the current route inventory.
Auth and Permissions
The signed-in ops surface uses shared Topolo Auth and the first-party suite launcher contract.
Guest surfaces are venue-scoped runtimes and do not behave like ordinary signed-in Topolo suite apps.
Data Ownership
TopoloCommerce owns venue configuration, module settings, catalog and menu data, guest sessions, carts, orders, service requests, bookings, queue state, device-assignment intent, and venue-specific publish metadata.
Topolo Pay, Topolo Nexus, Topolo MDM, and Nodo continue to own their own execution domains.
Deployments
TopoloCommerce currently deploys to:
- https://topolo-commerce-api.topolo.workers.dev
- https://commerce.topolo.app
- https://guest.topolo.app
CloudControl remains the source of truth for the current Worker, Pages, and storage bindings.
Failure Modes
- venue behavior drifts from the shared module-resolution contract
- guest and staff boundaries blur and the wrong runtime surface inherits the wrong auth or launcher behavior
- venue team and assignment state drifts from the live queue model and breaks traceability
- venues depend on cloud reachability because cached-local replay behavior is removed or bypassed
- venues depend on replaying non-idempotent writes and create duplicates after short connectivity loss
- venues lack visibility into their own recovery posture because resilience state is not surfaced in the signed-in workspace
- venues cannot advance toward full local authority if edge enrollment and cursor sync drift from the canonical Commerce contract
- platform integrations such as MDM, Pay, or Nodo are treated as local Commerce responsibilities instead of contract boundaries
Debugging
Start with /systems/topolo-commerce and the current internal handbook when checking route availability, venue behavior, or module-driven UI visibility.
Change Log / Verification
- Added canonical TopoloCommerce public coverage on 2026-04-10 for the new multi-vertical org-to-venue commerce platform
- Updated public coverage on 2026-04-10 to include the live queue stream and team-management contract now present in production Commerce
- Updated public coverage on 2026-04-10 to include the current cached-local resilience foundation for guest and ops surfaces
- Updated public coverage on 2026-04-10 to include replay-safe guest and ops writes for the current cached-local resilience foundation
- Updated public coverage on 2026-04-10 to include the signed-in venue resilience view and cloud-side venue event journal
- Updated public coverage on 2026-04-10 to include Venue Edge node enrollment and the cloud-side bootstrap plus journal-sync contract
- Updated public coverage on 2026-04-10 to include live catalog editing from the signed-in ops surface
- Updated public coverage on 2026-04-10 to include authenticated org-scoped staff workspaces and canonical /venues/:venueId/... ops URLs
- Updated public coverage on 2026-04-10 to clarify that demo content is not a normal runtime fallback outside explicit local demo mode
- Updated public coverage on 2026-04-10 to clarify that public guest venue discovery reads live active venues instead of a hidden demo-org fallback