Introduction: From “All or Nothing” Breaches to Bounded Blast Radius
Modern SaaS platforms sit on top of massive, multi-tenant data stores. When those stores are breached, the damage is rarely limited to a single record; it is often “wholesale” compromise of large slices of the user base. For a CISO or CTO, this is the critical risk: not that a record can be stolen, but that everything a given system knows becomes available in one incident.
Cloud providers and SaaS security guidance have converged on a simple principle: design for tenant isolation and blast-radius reduction. You assume compromise is possible and work to ensure that any single failure affects as few tenants and as little data as possible, instead of the entire corpus (Amazon Web Services).
Database and infrastructure sharding emerged first as a scalability technique, but security literature increasingly frames sharding as a way to structurally prevent widespread data compromise, especially in multi-tenant SaaS (Amazon Web Services; Roberts and Hart).
This article explains how sharding can be used as a deliberate security strategy and then introduces Mimir’s “Shard on User Access” model: a data-in-use protection paradigm where the server never has enough context (cryptographically or structurally) to “monkey-branch” from authorized data into data the user is not allowed to see.
1. Blast Radius as a First-Class Security Objective
Traditional access control focuses on whether a principal is allowed to access a resource. Blast-radius thinking focuses on how much can go wrong if that principal—or the infrastructure around it—is compromised.
Zero Trust guidance from major vendors emphasizes three recurring themes:
Explicit verification of identity and context.
Least-privilege access aligned to business roles.
Assume breach and limit impact, often described explicitly as reducing the “blast radius.”
From a data perspective, this means:
Do not rely on a single, large, flat data store guarded only by perimeter controls and application logic.
Assume application bugs, misconfigured IAM, or SQL/NoSQL injection will occasionally succeed.
Architect the storage so that a successful exploit can only ever reach a bounded, well-defined slice of information.
Sharding is a natural way to make that boundary explicit and enforceable.
2. Sharding 101: From Scalability to Security
Data sharding is the practice of splitting a logical dataset into multiple physical partitions—shards—based on some key (e.g., tenant ID, region, or another domain-specific dimension). In cloud-native SaaS, providers routinely use sharding to scale beyond the limits of a single database instance and to improve performance and operability (Roberts and Hart).
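As a sketch of the mechanics, the shard key is typically just a stable hash of the partitioning dimension. The example below routes tenants to shards by hashing a tenant ID; the shard count and IDs are hypothetical, not drawn from any specific system:

```python
import hashlib

NUM_SHARDS = 8  # hypothetical fleet size

def shard_for(tenant_id: str, num_shards: int = NUM_SHARDS) -> int:
    """Map a tenant ID to a shard index by hashing the key, so the
    mapping is stable and roughly uniform across the fleet."""
    digest = hashlib.sha256(tenant_id.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big") % num_shards

# The same tenant always routes to the same shard.
assert shard_for("tenant-42") == shard_for("tenant-42")
```

Plain modulo hashing forces mass re-mapping whenever the shard count changes; production routers typically use consistent hashing or an explicit lookup table for that reason.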
From a security point of view, sharding changes the failure model:
A single compromised shard exposes only the data assigned to that shard.
Operational issues (e.g., replication lag, backups, schema changes) are handled per shard.
Performance hot spots and noisy tenants can be isolated or migrated without touching the rest of the fleet.
Security guidance from cloud vendors explicitly calls out cellular architectures and shard-based partitioning as techniques to limit the blast radius of infrastructure failure or compromise (Amazon Web Services; Roberts and Hart).
The key design choice, from a CISO/CTO perspective, is what to shard on:
By tenant: all data for a tenant lives together. Good for operational isolation and customer off-boarding.
By geography or regulatory zone: close to data localization / residency patterns.
By product or business unit: reflecting organizational boundaries.
By user or access domain: aligning shards with the actual security boundaries that matter in practice. This is where Mimir’s philosophy comes in.
3. Sharding as Deliberate Blast-Radius Control
Sharding is not automatically a security control; it becomes one when you use it to constrain lateral movement and bound compromise.
3.1 Coarse-grained isolation: tenants and cells
Cloud-native SaaS isolation guidance distinguishes between:
Silo (per-tenant) isolation – each tenant has a fully separate stack (accounts, VPCs, databases). This gives strong isolation but is expensive operationally.
Pooled isolation – tenants share infrastructure but rely on strict policy and runtime enforcement to keep tenants separated logically (Amazon Web Services).
In practice, many architectures adopt a cellular or cell-based design: a set of semi-independent “cells,” each hosting a subset of tenants and their data. If a cell fails or is compromised, only the tenants in that cell are affected. Sharding is then the mechanism that maps tenants into cells and keeps data partitioned.
Incidents at large SaaS platforms have demonstrated both the performance and resiliency benefits of sharded, cell-based architectures and the trade-offs that surface when a shard becomes overloaded or unbalanced (Mokhtar et al.).
3.2 Shuffle sharding and probabilistic isolation
Extending this further, shuffle sharding assigns tenants to overlapping but distinct subsets of shards. This technique, popularized in systems reliability engineering, allows you to make strong probabilistic statements about how many tenants can be affected by any one failure, at the cost of extra complexity. Cloud and observability vendors document shuffle sharding as a way to further reduce blast radius beyond simple one-tenant-per-shard or one-shard-per-cell models (Amazon Web Services; Grafana Labs).
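A minimal sketch of the assignment step, assuming a hypothetical fleet of 12 shards with 3 shards per tenant (real implementations, such as those documented by AWS and Grafana Labs, are considerably more involved):

```python
import hashlib
import random
from itertools import combinations

NUM_SHARDS = 12        # total shards in the fleet (hypothetical)
SHARDS_PER_TENANT = 3  # each tenant's personal subset

def shuffle_shard(tenant_id: str) -> frozenset:
    """Deterministically assign a tenant to a k-of-n subset of shards,
    seeded by the tenant's hash so the mapping is stable."""
    seed = int.from_bytes(hashlib.sha256(tenant_id.encode()).digest()[:8], "big")
    rng = random.Random(seed)
    return frozenset(rng.sample(range(NUM_SHARDS), SHARDS_PER_TENANT))

# With 12 shards taken 3 at a time there are 220 possible subsets, so
# very few tenant pairs share an identical subset: a failure pinned to
# one tenant's subset fully overlaps almost no other tenant.
tenants = [f"tenant-{i}" for i in range(100)]
assignments = {t: shuffle_shard(t) for t in tenants}
full_overlaps = sum(1 for a, b in combinations(tenants, 2)
                    if assignments[a] == assignments[b])
```

This is what makes the probabilistic statements auditable: the expected number of fully overlapping tenant pairs falls directly out of the subset count.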
For CISOs, the key takeaway is that shard topology can be used to:
Encode business-critical isolation requirements.
Limit the maximum number of tenants affected by any one compromise path.
Provide auditable, structural guarantees about worst-case impact.
4. Beyond Infrastructure: Data-Centric and Cryptographic Sharding
Sharding at the database or infrastructure level is valuable, but it still assumes a trusted database engine and control plane. Data-in-use protection pushes this boundary further:
The database server is treated as honest-but-curious at best, and potentially compromised.
Sensitive operations are pushed to client-side runtimes or hardened enclaves.
Data is stored and processed as ciphertext, with keys held outside the server’s trust domain.
In this model, simply placing different tenants on different database instances is no longer sufficient. What matters is that:
The plaintext graph of data relationships the server can ever assemble is strictly bounded.
The server never possesses keys that allow decryption across those boundaries.
A data-centric sharding strategy therefore needs two components:
Structural isolation – the way data is organized (tables, packages, shards, indices) enforces separation along security boundaries.
Cryptographic isolation – distinct key material is used for each shard, tenant, or user-access domain, and key usage is tightly scoped.
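The cryptographic half can be sketched with standard key derivation: one root key held outside the server's trust domain, and a distinct derived key per access domain. The HKDF below is a minimal RFC 5869 implementation for illustration; `shard_key`, the domain labels, and the placeholder root key are all hypothetical:

```python
import hashlib
import hmac

def hkdf_sha256(master_key: bytes, info: bytes, length: int = 32) -> bytes:
    """Minimal HKDF (RFC 5869) with SHA-256: extract, then expand."""
    prk = hmac.new(b"\x00" * 32, master_key, hashlib.sha256).digest()
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]),
                         hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

def shard_key(root_key: bytes, access_domain: str) -> bytes:
    """Derive a distinct key per access domain; compromising one derived
    key reveals nothing about sibling keys or the root."""
    return hkdf_sha256(root_key, b"shard:" + access_domain.encode())

root = b"\x01" * 32  # placeholder: in practice, held outside the server's trust domain
k_alice = shard_key(root, "user:alice")
k_team = shard_key(root, "group:deal-team-7")
assert k_alice != k_team
```

The point of the derivation tree is scoping: the server can store ciphertext for thousands of access domains while never holding a key that spans more than one of them.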
5. Mimir’s “Shard on User Access” Model
Mimir’s philosophy starts from a harsh but realistic assumption: if an adversary has fully rooted a client machine and can scrape process memory, they can generally impersonate the user and obtain data through legitimate channels. The right response is not to pretend we can magically prevent all memory scraping; it is to ensure that no infrastructure compromise, however deep, lets an attacker jump from legitimately accessible data into anything more.
In the “Shard on User Access” model, this is achieved by aligning both structure and cryptography with who is allowed to see what:
Client-side packaging and encryption
Sensitive records and files are packaged and encrypted client-side (for example, in a WASM-based runtime in the browser or a local agent). The server only ever receives opaque, authenticated ciphertext plus minimal routing metadata.
Shards defined by access domains
Instead of sharding purely by tenant or by database instance, Mimir shards along user (or group) access boundaries:
Personal data shards: data that a single user can access.
Shared group shards: data that a well-defined group (e.g., a care team, deal team, or project) can access.
Global or public shards: data that is intentionally non-sensitive.
Each shard is encrypted with keys specific to that access domain. The same physical database might hold thousands of shards, but it never sees the keys.
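To make the packaging flow concrete, here is an illustrative sketch. The cipher is a deliberately simplified toy (SHA-256 keystream plus HMAC) so the example stays self-contained; a real client runtime would use a vetted AEAD such as AES-GCM, and all function and field names here are hypothetical, not Mimir's actual API:

```python
import hashlib
import hmac
import os

def _keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    """Toy keystream for illustration only -- not real cryptography."""
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def package(domain_key: bytes, shard_id: str, plaintext: bytes) -> dict:
    """Client-side packaging: the server receives only opaque ciphertext,
    an integrity tag, and the minimal routing metadata (shard_id)."""
    nonce = os.urandom(16)
    ks = _keystream(domain_key, nonce, len(plaintext))
    ciphertext = bytes(a ^ b for a, b in zip(plaintext, ks))
    tag = hmac.new(domain_key, nonce + shard_id.encode() + ciphertext,
                   hashlib.sha256).digest()
    return {"shard_id": shard_id, "nonce": nonce,
            "ciphertext": ciphertext, "tag": tag}

def unpack(domain_key: bytes, pkg: dict) -> bytes:
    """Client-side decryption; fails closed if the package was tampered
    with or the caller holds the wrong domain key."""
    expected = hmac.new(domain_key,
                        pkg["nonce"] + pkg["shard_id"].encode() + pkg["ciphertext"],
                        hashlib.sha256).digest()
    if not hmac.compare_digest(expected, pkg["tag"]):
        raise ValueError("package failed authentication")
    ks = _keystream(domain_key, pkg["nonce"], len(pkg["ciphertext"]))
    return bytes(a ^ b for a, b in zip(pkg["ciphertext"], ks))
```

The structural property to notice: the server-side record carries nothing but `shard_id` and ciphertext, so there is no plaintext field for a query layer to join across access domains.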
No structural path for lateral movement
Because server-resident structures (tables, indices, package headers) never link across access boundaries in a way that the server can traverse, there is no structural path to “monkey-branch” from one user’s accessible data into another user’s data:
A compromised query layer can only assemble ciphertext belonging to access domains for which it has been given valid, scoped decryption tokens.
Even if an attacker gains full database read access, any aggregation across shards is limited by the set of keys they can obtain from user-side or key-management workflows.
Keys follow policy, not topology
In traditional sharding, keys and credentials are often scoped to coarse infrastructure boundaries (database instance, cluster, or account). In the Shard on User Access model:
Keys are scoped to logical access domains.
Rotations, revocations, and audits are expressed in terms of who can see what, not “which server is compromised.”
Compromising a single key affects only the specific shard(s) tied to that access domain; it does not automatically unlock entire tables, regions, or tenant fleets.
Compatibility with existing cloud patterns
Importantly, this is additive to existing SaaS isolation guidance:
You can still use AWS’s silo/pool models for tenant-level isolation (Amazon Web Services).
You can still deploy cellular architectures and shuffle sharding to limit infrastructure blast radius (Amazon Web Services; Grafana Labs).
The Shard on User Access model sits “above” this, ensuring that even if infrastructure isolation fails, attackers cannot traverse from one user’s ciphertext to another’s plaintext.
From a CISO’s perspective, this yields a layered story:
If the app is exploited: the attacker can only access data that the compromised identity can legitimately decrypt.
If the database is breached: the attacker sees ciphertext partitioned by access domain, with no global key.
If a key is stolen: the effect is constrained to a specific shard; there is no backdoor into the broader tenant or platform.
6. Compliance and Data Localization: Sharding Is Necessary but Not Sufficient
Regulatory regimes (GDPR, HIPAA, sector-specific data residency rules) increasingly influence how we store and process data across borders. Data localization guidance often encourages keeping certain classes of data within specific jurisdictions, but legal and technical sources are clear: localization alone does not guarantee adequate protection (Imperva; Richter).
For CISOs and CTOs, this means:
Where you host a shard (region, data center) matters for jurisdiction, incident reporting, and lawful access.
How that shard is encrypted and structurally separated matters more for actual security outcomes.
A well-designed sharding strategy can support:
Jurisdictional boundaries (e.g., EU vs. US shards).
Business boundaries (e.g., each regulated business line in its own set of shards).
Access boundaries (e.g., Mimir’s user/group shards) in one coherent model.
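One way to hold all three boundary types in a single model is a composite shard key, where physical placement follows the jurisdictional component and cryptographic scope follows the access component. The names and the region/label formats below are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ShardKey:
    """A composite shard key expressing all three boundary types."""
    jurisdiction: str   # e.g. "eu-west": controls where the shard may live
    business_unit: str  # e.g. "wealth-mgmt": regulated line of business
    access_domain: str  # e.g. "group:deal-team-7": who may decrypt

def placement(key: ShardKey) -> str:
    """Physical placement follows the jurisdictional boundary."""
    return f"db-{key.jurisdiction}"

def key_label(key: ShardKey) -> str:
    """Cryptographic scope follows the access boundary."""
    return f"kms:{key.access_domain}"

eu_team = ShardKey("eu-west", "wealth-mgmt", "group:deal-team-7")
assert placement(eu_team) == "db-eu-west"
```

Because the two concerns are routed off different components of the same key, a residency audit and an access-control audit can both read from one shard inventory.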
The Shard on User Access approach is particularly useful here because it lets you:
Prove that a breach in one jurisdictional shard cannot expose data from another.
Demonstrate cryptographic and structural isolation as part of your DPIAs, threat models, and audit narratives.
Align Zero Trust goals (“assume breach, minimize impact”) with concrete, data-layer designs.
7. Practical Questions for CISOs and CTOs
When evaluating your current architecture—or a vendor’s claims—around sharding and blast radius reduction, it is useful to ask:
What is the maximum unit of compromise?
Is it “the whole production database,” “a tenant,” “a cell,” or “an access domain”?
What keys exist that could decrypt large swaths of data?
If a DBA’s credentials or KMS admin role is compromised, how much data is practically exposed?
How does the architecture limit lateral movement after an initial breach?
Are there structural and cryptographic barriers, not just policy statements?
Can we demonstrate these guarantees to auditors?
Can we show how many records, tenants, or shards are affected under specific key- or infrastructure-compromise scenarios?
How are sharding and Zero Trust integrated?
Are blast radius concepts visible in risk registers, tabletop exercises, and incident runbooks?
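The first two questions lend themselves to a concrete, auditable answer: given an inventory of which key unlocks which shards, worst-case exposure is a one-line computation. The inventory below is entirely hypothetical:

```python
# Hypothetical inventory: which access-domain key unlocks which shards,
# and how many records each shard holds.
key_to_shards = {
    "user:alice": {"s1"},
    "group:deal-team-7": {"s2", "s3"},
    "tenant:acme": {"s4", "s5", "s6"},
}
records_per_shard = {"s1": 120, "s2": 900, "s3": 300,
                     "s4": 5000, "s5": 4000, "s6": 100}

def blast_radius(key: str) -> int:
    """Records practically exposed if this one key is compromised."""
    return sum(records_per_shard[s] for s in key_to_shards[key])

# Worst-case unit of compromise, as a single auditable number.
worst_key = max(key_to_shards, key=blast_radius)
```

Running this kind of query against a real shard inventory turns "what is the maximum unit of compromise?" from a policy statement into a number you can put in a tabletop exercise or an audit narrative.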
Mimir’s Shard on User Access model is one answer to these questions: it embeds blast-radius control directly into how data is structured and encrypted, not just how it is routed or firewalled.
Conclusion
Sharding began as a way to scale databases but has matured into a powerful tool for containing the impact of inevitable failures—both operational and security-related. When aligned with Zero Trust principles and implemented as a data-centric, cryptographically enforced strategy, sharding can transform your breach profile from “catastrophic all-at-once exposure” to “bounded, auditable, and recoverable incidents.”
The Shard on User Access model extends this thinking to the data-in-use layer, ensuring that neither the server nor an attacker who compromises it can ever assemble more plaintext than the current user is legitimately allowed to see. For CISOs and CTOs charged with protecting highly aggregated cloud data, this shift—from protecting the perimeter to structuring and encrypting the data itself—is where meaningful blast-radius reduction truly begins.
Endnotes (MLA Format)
Amazon Web Services. SaaS Tenant Isolation Strategies: Isolating Resources in a Multi-Tenant Environment. AWS Whitepaper, 1 Aug. 2020, docs.aws.amazon.com/whitepapers/latest/saas-tenant-isolation-strategies/. Accessed 1 Dec. 2025.
Roberts, Dave, and Josh Hart. “Scale Your Relational Database for SaaS, Part 2: Sharding and Routing.” AWS Database Blog, 30 Apr. 2024, aws.amazon.com/blogs/database/scale-your-relational-database-for-saas-part-2-sharding-and-routing/. Accessed 1 Dec. 2025.
Microsoft. “What Is Zero Trust?” Microsoft Learn, Microsoft, 2023, learn.microsoft.com/security/zero-trust/zero-trust-overview. Accessed 1 Dec. 2025.
Microsoft. “Secure Data with Zero Trust.” Microsoft Learn, Microsoft, 2023, learn.microsoft.com/security/zero-trust/deploy/secure-data. Accessed 1 Dec. 2025.
Grafana Labs. “Configure Grafana Mimir Shuffle Sharding.” Grafana Mimir Documentation, Grafana Labs, 2024, grafana.com/docs/mimir/latest/operators-guide/configure/configure-shuffle-sharding/. Accessed 1 Dec. 2025.
Mokhtar, Emad, et al. “The Query Strikes Again.” Engineering at Slack, Slack Technologies, 15 Nov. 2023, slack.engineering/the-query-strikes-again/. Accessed 1 Dec. 2025.
Imperva. “Data Localization.” Imperva Learning Center, Imperva, n.d., imperva.com/learn/data-security/data-localization/. Accessed 1 Dec. 2025.
Richter, AJ. “Does Server Location Really Matter under the GDPR?” TechGDPR Blog, TechGDPR, n.d., techgdpr.com/blog/server-location-gdpr/. Accessed 1 Dec. 2025.