Best for Backup: Sizing and Scaling with S3 for CapEx and Consumption
In the age of ransomware, backing up data isn't just a best practice; it's a necessity. But not all backups are created equal. To truly protect data, organizations must implement immutable backups.
With its built-in immutability via Object Lock technology, S3 object storage is the go-to choice for ransomware resilience. Yet sizing and scaling backup infrastructure in this context is far from straightforward.
Why backups are crucial and why sizing backup infrastructure is hard
Ransomware attacks continue to rise, and immutable backups are the ultimate defence. If your backup data can't be altered or deleted, even with compromised credentials, you're in a much stronger position to recover without paying a ransom.
That's why S3 object storage with immutability enabled (for example, through Veeam's support for S3 Object Lock) is now a cornerstone of modern backup strategies. But adopting this approach introduces new complexity, especially when it comes to sizing.
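For context, here is a minimal sketch of what enabling that immutability looks like at the storage layer, using Python and boto3 against a hypothetical S3-compatible endpoint. The endpoint, credentials, bucket name, and 30-day default retention are illustrative assumptions; in a Veeam deployment the bucket is typically created with Object Lock enabled and the backup software then manages per-object retention for you.

```python
# Minimal sketch: enabling S3 Object Lock on a backup bucket with boto3.
# Endpoint, credentials, bucket name, and the 30-day default retention are
# illustrative assumptions, not recommendations.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.example.internal",  # hypothetical S3-compatible endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Object Lock can only be enabled when the bucket is created.
s3.create_bucket(Bucket="veeam-backups", ObjectLockEnabledForBucket=True)

# Apply a default retention so every new object is locked for 30 days.
s3.put_object_lock_configuration(
    Bucket="veeam-backups",
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 30}},
    },
)
```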
Even without immutability, calculating how much backup storage is needed has always been tricky. You're not just storing today's data; you're planning for the factors below (a rough first-pass estimate is sketched after the list):
- Retention policies (daily, weekly, monthly, yearly)
- Growth rates (which can be unpredictable)
- Change rates (how much data changes between backups)
- Compression and deduplication ratios
- Restore performance requirements
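To make those factors concrete, here is a back-of-the-envelope estimator in Python. The inputs in the example (100 TB of source data, 20% annual growth, 10% daily change, a 2:1 reduction ratio, 30 daily restore points, a 3-year horizon) are illustrative assumptions, not recommendations; real sizing should use your backup vendor's own calculator.

```python
# Back-of-the-envelope backup capacity estimate (illustrative assumptions only).
def estimate_backup_tb(
    source_tb: float,        # protected data today
    annual_growth: float,    # e.g. 0.20 = 20% growth per year
    daily_change: float,     # fraction of data changed between backups
    reduction_ratio: float,  # combined compression/dedup, e.g. 2.0 = 2:1
    daily_points: int,       # incremental restore points kept
    years: int = 3,          # planning horizon
) -> float:
    # Project source data out to the end of the planning horizon.
    future_tb = source_tb * (1 + annual_growth) ** years
    # One full backup plus the chain of daily incrementals, after reduction.
    full_tb = future_tb / reduction_ratio
    incrementals_tb = daily_points * (future_tb * daily_change) / reduction_ratio
    return full_tb + incrementals_tb

# Example: 100 TB source, 20% yearly growth, 10% daily change,
# 2:1 reduction, 30 daily restore points, 3-year horizon -> ~345.6 TB.
print(f"{estimate_backup_tb(100, 0.20, 0.10, 2.0, 30):.1f} TB")
```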
Over-provisioning leads to wasted budget and idle hardware. Under-provisioning risks failed backups, missed SLAs, and costly emergency upgrades. And once you add immutability into the mix, things get even more complicated.
S3 object storage: unique sizing challenges
S3-based backup targets are ideal for immutability, scalability, and ransomware resilience—but they behave differently from traditional block or file storage.
Here’s why sizing S3 backups can be particularly challenging:
1. Immutability settings inflate capacity requirements: With S3 Object Lock, backups can't be deleted until their retention period expires. This means:
- No space reclamation during the lock period
- Longer retention = more storage required
- GFS (Grandfather/Father/Son) policies—e.g., monthly/yearly backups—can dramatically increase the storage footprint
2. Versioning and metadata overhead: S3 stores data as objects, often with multiple versions and associated metadata. This adds overhead that's easy to overlook in sizing calculations.
3. Deduplication: S3 doesn’t natively deduplicate data. Backup software such as Veeam generally handles this, but it’s critical to model it accurately.
4. Performance considerations: S3 performance depends on object size, concurrency, and backend architecture. Sizing must account for both capacity and restore throughput, ensuring sufficient free space and system headroom to maintain performance during high-load operations.
5. Restore and full backup headroom requirements: Some backup software vendors may require sufficient free capacity to write a full backup before initiating a restore, especially during support scenarios. Sizing should include headroom to accommodate these operations; a rough sketch combining this headroom with the GFS effect from point 1 follows below.
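As a hedged illustration of how points 1 and 5 combine, the sketch below extends the earlier estimator with GFS restore points that remain locked under Object Lock, plus free space reserved for one extra full backup. The GFS counts and the 20% headroom figure are assumptions for illustration, not vendor guidance.

```python
# Illustrative refinement for an Object Lock target: GFS restore points that
# cannot be reclaimed while locked, plus headroom for one extra full backup
# (e.g. before a large restore). All figures below are example assumptions.
def estimate_locked_backup_tb(
    full_tb: float,          # size of one full backup after reduction
    incremental_tb: float,   # size of one daily incremental after reduction
    dailies: int = 30,       # daily (son) restore points
    weeklies: int = 4,       # weekly (father) fulls
    monthlies: int = 12,     # monthly (grandfather) fulls
    yearlies: int = 3,       # yearly fulls
    headroom: float = 0.20,  # spare capacity fraction for performance
) -> float:
    # Treat every GFS full as an independent locked object set until retention expires.
    locked_fulls = (1 + weeklies + monthlies + yearlies) * full_tb
    locked_incrementals = dailies * incremental_tb
    # Reserve space for one additional full backup before a restore.
    restore_reserve = full_tb
    subtotal = locked_fulls + locked_incrementals + restore_reserve
    return subtotal * (1 + headroom)

# Example: 86 TB full, 9 TB daily incremental, default GFS counts -> ~2491 TB.
print(f"{estimate_locked_backup_tb(86, 9):.0f} TB")
```

In practice, backup software can often reuse unchanged blocks across fulls, which shrinks these numbers considerably; that variability is exactly why vendor-specific modeling matters.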
Mitigating sizing risk with consumption-based models
One way to reduce the risk of mis-sizing is to adopt a Consumption-based subscription model.
Instead of guessing how much capacity you’ll need for the next 3–5 years, you:
- Pay only for what you use
- Scale up as needed
- Avoid upfront hardware purchases
- Let the vendor handle refreshes and upgrades
However, even with a Consumption model, accurate sizing is essential to avoid surprise costs or performance bottlenecks. So, make sure to ask your vendor these questions:
- Do they offer automated sizing tools?
- Can they model immutability settings like GFS?
- Do they provide expert-led sizing sessions?
- Can they simulate growth scenarios over time, across both performance and capacity tiers?
Object First, for example, offers both a self-service Storage Calculator and a (mandatory) SE-led sizing session for Consumption customers—ensuring you start with the right capacity step and scale smoothly.
CapEx vs. Consumption: which is right for you?
Not sure which is best for you? Here’s a framework to help you decide:
| | CapEx | Consumption |
| --- | --- | --- |
| Upfront budget | Necessary | None |
| Data growth suitability | Best when predictable | Best when variable or fast-growing |
| Hardware lifecycle | Customer-managed | Vendor-managed |
| Who bears sizing risk? | Customer | Vendor |
| Total cost over time | Potentially lower | Potentially higher, but more predictable |
Also, when evaluating vendors, make sure to consider the following factors:
- Do they charge based on storage only, or also compute?
- Can they model S3-specific behaviours accurately?
- Do they offer tools to simulate retention and immutability impact?
- Is support included—or an add-on?
- How do they handle hardware refreshes and scaling?
These can significantly affect your total cost of ownership and operational efficiency.
The Object First advantage
At Object First, we specialize in S3 backup sizing, especially for Veeam environments. We apply that expertise in modeling multi-tier backup schemas through our online Storage Calculator and SE-led sizing sessions, both designed to help customers avoid sizing pitfalls and optimize cost.
With our Consumption model, you get:
- Immutable backup storage from 10 TB to 7 PB+
- No upfront payment
- Simple monthly billing
- All updates and hardware refreshes included
- Expert guidance to right-size your environment
Whether you choose CapEx or Consumption, we’re here to help you build a backup strategy that’s secure, scalable, and complements your favoured financial model.
Ready to size your backup environment? Try our free Backup Storage Calculator and accurately estimate the capacity needed for your Veeam backup data.



