OneSciencePlace: Composable research computing, data, and publishing

Composable research platform · Open-source distribution planned

OneSciencePlace

Run apps, manage data, and publish outputs on a single, integrated platform.

  • No local setup or installation required; deliver a small or large portal/gateway quickly.
  • Run native and containerized apps on VMs/hosts or Slurm clusters located anywhere, all within the same OSP tenant.
  • Apps run in user-space.
  • No-code user interface to deploy native or container apps.
  • Publish apps, datasets, and other artifacts with rich metadata and DOIs.

Lineage & Acknowledgements

OneSciencePlace was originally initiated within NSF’s Science Gateways Community Institute and is informed by three decades of research on cyberinfrastructure, with contributors who helped develop Hubzero, SeedMeLab, CIPRES, Apache Airavata, and Tapis. The National Science Foundation funded this work under award numbers 1547611, 2311206, 2311207, and 2311208.

What Is OSP?

A managed, multi-tenant platform that unifies app execution, data sharing, publishing, and site content, eliminating the need to stitch together, maintain, and manage separate tools.

Apps

Run native or container-based command-line, web, or graphical apps through a no-code launch UI in the web browser.

Data

POSIX and S3/object storage; optional archiving of job outputs.

Publishing

Publish apps, datasets, and other content with metadata and DOIs.
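As an illustration of the kind of metadata a DOI registration typically carries (DataCite-style field names; the DOI, helper, and values below are placeholders, not OSP's actual publishing API):

```python
def datacite_record(doi, title, creators, publisher, year, resource_type):
    """Build a minimal DataCite-style metadata record as a plain dict.
    Field names follow the public DataCite schema; all values here
    are illustrative placeholders."""
    return {
        "doi": doi,
        "titles": [{"title": title}],
        "creators": [{"name": name} for name in creators],
        "publisher": publisher,
        "publicationYear": year,
        "types": {"resourceTypeGeneral": resource_type},
    }

record = datacite_record(
    "10.1234/example.5678",          # hypothetical DOI
    "Sample Simulation Outputs",
    ["Doe, Jane", "Roe, Richard"],
    "OneSciencePlace",
    2024,
    "Dataset",
)
```

Richer records add affiliations, related identifiers, and funding references on top of these required fields.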

Outcomes

Faster deployment and onboarding

Deliver portals, gateways, or repositories in less time than it takes to write their specification.

Lower barrier

No specialized web/backend development required for common use cases.

Scales with your needs

Lab → department → institution → multi-institution.

Apps

  • Containers: single-port web containers run out of the box.
  • Native: command-line serial, parallel, or distributed executables are supported.
  • Graphical apps: embed VNC/DCV/Xpra in the container; OSP proxies to the browser.
  • No-code launch UI: build forms and complex UIs without writing code.
  • Access: apps can be private or tenant-wide; group-based restrictions are planned.
  • Reproducibility: Job parameters are tracked and stored.
  • Clone & restart: one-click re-run; checkpoint/resume when the app supports it.
Have existing containers or native codes?
Set them up for launch in minutes.
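"Single-port web containers run out of the box" means any app serving HTTP on one port can be proxied to the browser. A minimal stdlib sketch of such an app (the handler, port, and message are illustrative, not an OSP requirement):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class AppHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Any HTTP response on the container's single exposed port works;
        # the platform proxy forwards browser traffic to that port.
        body = b"hello from a single-port web app\n"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep logs quiet for this sketch

def make_server(host="0.0.0.0", port=8080):
    """Bind the app to one port (host and port are illustrative)."""
    return HTTPServer((host, port), AppHandler)
```

Inside a container, `make_server().serve_forever()` exposes the single port a gateway proxy can map to the user's browser.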

Compute Integration

Targets

Single VMs/hosts (no scheduler) and Slurm clusters; multiple systems per tenant.

Private nodes

Proxy to non-public compute nodes behind NAT; only the gateway is internet-facing.

Parallel & distributed

MPI and many-task/array workloads supported; GPU when available on the target.
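On a Slurm target, a launch's form values ultimately become a batch script. A sketch of the kind of script generation involved (directive names are standard Slurm; the function and all values are placeholders):

```python
def slurm_batch_script(job_name, ntasks, command, partition=None, gpus=None):
    """Render a minimal Slurm batch script for an MPI or many-task
    launch. Real deployments would also add account, walltime, and
    environment/container setup lines."""
    lines = [
        "#!/bin/bash",
        f"#SBATCH --job-name={job_name}",
        f"#SBATCH --ntasks={ntasks}",
    ]
    if partition:
        lines.append(f"#SBATCH --partition={partition}")
    if gpus:
        lines.append(f"#SBATCH --gpus={gpus}")
    lines.append(f"srun {command}")
    return "\n".join(lines) + "\n"
```

The same form-to-script idea extends to array directives for many-task workloads and `--gpus` when the target has accelerators.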

Data & Publishing

  • Storage today: POSIX and S3/object storage, plus file sharing.
  • Globus: Globus data transfer planned.
  • Publish: publish datasets and apps with rich metadata and DOIs; FAIR data support is planned.
  • Archiving: post-run configuration supports archiving job outputs to POSIX or S3.
Want to discuss data management, storage, and archival policies?
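A sketch of the POSIX side of post-run archiving (the function name and path layout are hypothetical; an S3 target would upload the same tarball instead, e.g. with boto3's `upload_file`):

```python
import tarfile
import time
from pathlib import Path

def archive_job_outputs(job_dir, archive_root, job_id):
    """Bundle a finished job's output directory into a timestamped
    tarball under a POSIX archive root. Names and layout here are
    illustrative, not OSP's actual archiving scheme."""
    dest = Path(archive_root)
    dest.mkdir(parents=True, exist_ok=True)
    stamp = time.strftime("%Y%m%dT%H%M%S")
    tar_path = dest / f"{job_id}-{stamp}.tar.gz"
    with tarfile.open(tar_path, "w:gz") as tar:
        tar.add(job_dir, arcname=job_id)  # store outputs under the job id
    return tar_path
```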

Identity & Access

Federated sign-in

OIDC/SAML (e.g., CILogon, InCommon, Globus Auth) and LDAP are supported.

Heterogeneous IdPs

No shared IdP required across systems; per-system identity mapping.

SSH key bridging

Optional SSH key bridging to bind web users to remote Unix accounts.

Use Cases

PI Lab Portal

A handful of apps on one host; publish results with DOIs.

Course & Workshop

Prebuilt apps, simple sign-in; minimal ops burden.

Institutional Gateway

Many apps across multiple clusters; tenant branding and content.

Multi-Institution

Mixed IdPs and remote systems with or without a shared identity.

Featured Project: Quakeworx

Quakeworx is an extensible framework for earthquake simulations delivered with OSP.

See the project at quakeworx.org

Have a project you want to deliver as a compute, education, or HPC portal, a science gateway, or a data repository?

How It Works

  1. Schedule a demo — see the platform and discuss needs.
  2. Setup — we provision a secure environment and brand it.
  3. Auth & compute — connect LDAP/OIDC; integrate clusters or a Linux host.
  4. Deploy apps — onboard containers and configure UIs with the no-code builder.
  5. Launch — go live; we handle hosting and maintenance.

Does this sound familiar?

  • Researchers need HPC access; the command line blocks many of them.
  • Staff time is limited for evaluating or building DIY portals.
  • Maintaining a custom gateway consumes scarce FTEs.
  • Sponsors require Open Access; you need DOIs and metadata without building a repo.

Open Source

OneSciencePlace is built on open-source software and open standards. Core components of the platform are available in public repositories and are actively maintained within their respective communities. The integrated distribution and orchestration layer that ties these components together is currently maintained as a managed release. This ensures stability, security, and coordinated versioning across the platform.

In alignment with NSF CSSI objectives, we are working toward a structured public release of the unified distribution. Release milestones are tied to funded development phases and institutional partnerships. If you are interested in early collaboration or source-code access, please contact us.


Capabilities noted above reflect typical deployments; extensions and customizations are feasible.

Projects · Events · Docs · Contact