OneSciencePlace is informed by three decades of research cyberinfrastructure development and by contributors to HUBzero, SeedMeLab, CIPRES, Apache Airavata, and Tapis. We partner with Tapis and extend this legacy of gateways with a modern, composable, integrated platform focused on usability, scale, and sustainability.

The table below compares OneSciencePlace with Open OnDemand, the tool stakeholders most often ask about, highlighting differences in architecture, extensibility, and suitability for full-lifecycle research and education. 

Notes: capabilities reflect typical deployments and planned items (e.g., Globus, workflows, open-source distribution). Snapshot: Oct 19, 2025. Suggestions for updating the comparison are welcome.


Platform Comparison: OneSciencePlace vs. Open OnDemand

Section 1 — Scope & Identity

Feature | OneSciencePlace (OSP) | Open OnDemand (OOD)
Delivery | SaaS (managed); open-source distribution planned | Self-hosted (per site), open source
Multi-tenant | Platform: yes; apps are tenant-scoped | No (per-site instance)
Identity & account mapping | Heterogeneous IdPs; per-system mapping; optional key-bridging; no shared IdP required | One portal IdP (LDAP/SAML/OIDC); per-cluster Unix account mapping
Scale | Lab → department → institution → multi-institution | Lab → large site
Execution targets | 1+ VM/host (no scheduler) and 1+ SLURM clusters | Scheduler-backed clusters (SLURM/PBS/LSF/SGE)
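
To make the identity row above concrete, here is a minimal sketch, in Python, of per-system account mapping when users arrive from heterogeneous IdPs. The map shape, names, and resolve_account function are illustrative assumptions, not OSP's actual data model or API; the point is only that one person can hold different local accounts on different execution targets with no shared IdP.

    from typing import Optional

    # Hypothetical per-system account map (not OSP's real schema):
    # (idp_issuer, subject) -> {system_name: local_account}
    ACCOUNT_MAP: dict[tuple[str, str], dict[str, str]] = {
        ("https://idp.university-a.edu", "jdoe"): {
            "campus-slurm": "jdoe",
            "partner-vm": "jane.doe",
        },
        ("https://orcid.org", "0000-0002-1825-0097"): {
            "campus-slurm": "jdoe",  # same person arriving via a different IdP
        },
    }

    def resolve_account(issuer: str, subject: str, system: str) -> Optional[str]:
        """Return the local account for this identity on this system, or None."""
        return ACCOUNT_MAP.get((issuer, subject), {}).get(system)

    if __name__ == "__main__":
        # The same user resolves to the right local account per execution target.
        print(resolve_account("https://idp.university-a.edu", "jdoe", "partner-vm"))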

Section 2 — User Features

Feature | OSP | OOD
Web UI | Yes | Yes
Workflows | Planned | Partial (via templates/add-ons)
FAIR / publishing | Yes (DOIs/metadata) | No (not built-in)
Low / no-code UI | Yes (forms/pages) | No (admin templates)
Bring-your-own apps | Containers: single-port web apps run out of the box; GUI via embedded VNC/DCV/Xpra; standalone executables also work | Interactive App wrappers (form + submit; web or VNC/desktop)
App visibility | Single-user or site-wide; groups planned | Per-site policy
Job clone / restart | Yes / Yes* (if app supports checkpointing) | Yes (clone via Job Composer) / Partial (site scripts)
Job data archiving | Yes (policy hooks to POSIX/S3) | Partial (site automation)

* Restart depends on the application providing checkpoints.
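
That caveat is worth making concrete: a platform can clone and resubmit any job, but a restart only resumes prior work if the application saves and reloads its own state. Below is a minimal sketch of that pattern in plain Python; the checkpoint.json file name and the toy workload are hypothetical, not a convention of either platform.

    import json
    import os

    CHECKPOINT = "checkpoint.json"  # hypothetical path in the job's work dir

    def load_state() -> dict:
        """Resume from the last checkpoint if one exists, else start fresh."""
        if os.path.exists(CHECKPOINT):
            with open(CHECKPOINT) as f:
                return json.load(f)
        return {"step": 0, "result": 0.0}

    def save_state(state: dict) -> None:
        """Write atomically so a killed job never leaves a torn checkpoint."""
        tmp = CHECKPOINT + ".tmp"
        with open(tmp, "w") as f:
            json.dump(state, f)
        os.replace(tmp, CHECKPOINT)

    def main(total_steps: int = 1000) -> None:
        state = load_state()
        for step in range(state["step"], total_steps):
            state["result"] += step * 0.001  # stand-in for real work
            state["step"] = step + 1
            if state["step"] % 100 == 0:     # checkpoint periodically
                save_state(state)
        save_state(state)
        print("done:", state["result"])

    if __name__ == "__main__":
        main()

An app structured this way can be killed and resubmitted at any point and will pick up from its last saved step, which is exactly what makes platform-level restart meaningful.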

Section 3 — Runtime & Integration

Capability | OSP | OOD
Web apps | Yes (single-port containers) | Yes
GUI / desktop apps | Yes (container-embedded VNC/DCV/Xpra; proxied) | Yes (TurboVNC + noVNC/websockify; others possible)
MPI / parallel | Yes | Yes
Distributed / HTC | Yes (scheduler arrays/templates; same pattern as OOD) | Yes (arrays/templates)
Serial | Yes | Yes
Private nodes (NAT) | Yes (gateway proxy to non-public nodes) | Yes (PUN/reverse proxy to internal nodes)
Multi-cluster | Yes (cross-site) | Yes (per-cluster config)
Remote partners | Yes (heterogeneous IdPs supported) | Partial (mapping per partner)
Cloud | Yes (hybrid) | Yes (if deployed in cloud)
Data | POSIX, S3; Globus planned | POSIX; S3/Globus via site add-ons

Notes: capabilities reflect typical deployments; site engineering can extend either platform. Snapshot: Oct 19, 2025.
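
To ground the "Job data archiving" and "Data" rows, here is a sketch of a post-job hook that copies outputs from POSIX scratch to S3 using the standard boto3 client. The bucket, prefix, and archive_job_outputs signature are illustrative assumptions about what such a policy hook could look like, not OSP's actual interface.

    import pathlib

    import boto3

    def archive_job_outputs(job_dir: str, bucket: str, prefix: str) -> int:
        """Upload every regular file under job_dir to s3://bucket/prefix/...
        and return the number of files archived."""
        s3 = boto3.client("s3")
        root = pathlib.Path(job_dir)
        count = 0
        for path in root.rglob("*"):
            if path.is_file():
                key = f"{prefix}/{path.relative_to(root)}"
                s3.upload_file(str(path), bucket, key)
                count += 1
        return count

    if __name__ == "__main__":
        # Hypothetical job directory, bucket, and prefix for illustration only.
        n = archive_job_outputs("/scratch/jobs/12345", "osp-archive", "jdoe/12345")
        print(f"archived {n} files")

A production hook would also want idempotent keys and retry handling; the point here is only that archiving is a small, policy-driven step rather than custom portal code.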


The OneSciencePlace Advantage

The choice between a self-hosted portal and a managed platform comes down to three factors: Scope, Usability, and Sustainability.

  1. Unified scope. OSP spans the research lifecycle: app execution (single VM or SLURM clusters), data sharing, and publishing, with heterogeneous IdP support and cross-site execution targets. (Workflows and Globus support: planned.)
  2. Faster onboarding. A container that exposes one port runs out of the box (see the sketch after this list); GUI apps embed VNC/DCV/Xpra. A no-code UI builder, job clone/restart, and policy-driven archiving reduce custom development and speed deployment. HPC codes follow the same pattern.
  3. Sustainable operations. Delivered as a managed service, OSP shifts upgrades and security maintenance off local teams and lowers total cost of ownership versus maintaining a full portal stack in-house. (Open-source distribution planned.)
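
For the single-port claim in item 2, a minimal example: any app whose entrypoint binds one HTTP port can be proxied by the gateway without integration work. The sketch below uses only the Python standard library; the port number is an arbitrary illustrative choice, not a platform requirement.

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Hello(BaseHTTPRequestHandler):
        def do_GET(self):
            body = b"Hello from a single-port app\n"
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        # Bind all interfaces so a gateway proxy can reach the container.
        HTTPServer(("0.0.0.0", 8080), Hello).serve_forever()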


Conclusion: OSP is designed to provide a complete, sustainable platform from lab to multi-institution scale, freeing local staff to focus on enabling the science.