How to Optimize Cloud Storage Costs & Performance on Google Cloud Platform

Optimize your Cloud Storage costs and performance on Google Cloud Platform using its newer features.

When you run your own data center, storage tends to get lost in your overall infrastructure costs, making proper cost management difficult. But on Google Cloud Platform (GCP), where Cloud Storage is billed as a separate line item, paying attention to storage utilization and configuration can yield substantial cost savings.

Unlike compute needs (think Compute Engine), storage needs are hard to pin down in advance because they are always changing. The storage class you picked when you first set up your environment may no longer be appropriate for a given workload. Cloud Storage has also come a long way: it offers many features that weren't there just a year ago.

Storage classes: Google Cloud Storage offers a variety of storage classes:

1. Standard - Best for data that is frequently accessed ("hot" data) and/or stored for only brief periods of time.

2. Nearline - A low-cost, highly durable storage service for data you access less than once a month.

3. Coldline - A very-low-cost, highly durable storage service for data you access less than once a quarter.

4. Archive - The lowest-cost, highly durable storage service for data archiving, online backup, and disaster recovery, best suited to data you access less than once a year.

Each class has its own pricing and best-fit use cases.

If you're only using the Standard storage class, it's time to look at your workloads and re-evaluate how frequently your data is actually accessed.
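
If a bucket's access pattern has changed, you can switch its default storage class and rewrite existing objects. Below is a minimal sketch using the google-cloud-storage Python client; the bucket and object names are placeholders for illustration.

```python
from google.cloud import storage

client = storage.Client()

# "my-analytics-bucket" and the object path are placeholder names.
bucket = client.get_bucket("my-analytics-bucket")

# Change the bucket's default storage class. This only affects
# objects written from now on, not objects already in the bucket.
bucket.storage_class = "NEARLINE"
bucket.patch()

# Existing objects keep their class until rewritten, so move a
# rarely read object down to Coldline explicitly:
blob = bucket.blob("reports/2020-q1.csv")
blob.update_storage_class("COLDLINE")
```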

Lifecycle policies: With Google Cloud Storage you can not only save money by using different storage classes, you can make it happen automatically with object lifecycle management. By configuring a lifecycle policy on a bucket, you can programmatically have objects change storage class based on a set of conditions, or even be deleted entirely once they're no longer needed.

Example: Suppose you and your team analyze data within the first month after it's created; beyond that, you only keep it for regulatory purposes. In that case, you can simply set a policy that transitions objects to Coldline or Archive once they are 31 days old.
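
Here is a minimal sketch of that policy with the google-cloud-storage Python client; the bucket name and the seven-year regulatory window are hypothetical.

```python
from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("my-analytics-bucket")  # placeholder name

# Move objects to Coldline once they are 31 days old...
bucket.add_lifecycle_set_storage_class_rule("COLDLINE", age=31)

# ...and delete them once a (hypothetical) seven-year regulatory
# retention window has passed.
bucket.add_lifecycle_delete_rule(age=7 * 365)

bucket.patch()  # persist the lifecycle rules on the bucket
```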

Deduplication: Duplicate data is another very common way to waste cloud storage. Of course, there are times when duplication is necessary.

Example: You duplicate a particular dataset across multiple geographic regions so that local teams can access it with low latency.

In our experience working with customers, however, a lot of duplicate data is the result of lax version control, which is cumbersome and expensive to manage.

Fortunately, there are plenty of ways to prevent duplicate data, as well as tools to keep data from being deleted in error.

When you're trying to maintain resiliency from a single source, it can make more sense to use a multi-region bucket in Google Cloud Storage than to create multiple copies in various buckets. With a multi-region bucket, geo-redundancy is enabled automatically for the objects you store.

A lot of duplicate data also comes from not properly using Cloud Storage's object versioning feature. Versioning keeps your data from being overwritten or accidentally deleted, but the noncurrent versions it retains can really add up. Do you really need five copies of your data? As long as it's protected, one may be enough. You can pair object versioning with lifecycle rules to keep only an appropriate number of copies.

And if you're worried about losing something accidentally, the Bucket Lock feature helps ensure that objects can't be deleted before a specific date or time.
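
The sketch below, again using the google-cloud-storage Python client, ties these pieces together: a multi-region bucket with versioning plus a lifecycle rule that caps the number of retained versions, and a separate bucket with a retention policy for Bucket Lock. The bucket names, version cap, and 30-day retention period are all placeholders, not recommendations.

```python
from google.cloud import storage

client = storage.Client()

# One multi-region bucket ("US") instead of hand-copied regional
# duplicates; geo-redundancy comes with the location.
versioned = client.bucket("my-resilient-bucket")  # placeholder name
versioned.versioning_enabled = True  # protects against overwrites/deletes
versioned = client.create_bucket(versioned, location="US")

# Keep noncurrent versions from piling up: delete any version that
# already has three newer versions of the same object.
versioned.add_lifecycle_delete_rule(number_of_newer_versions=3)
versioned.patch()

# Bucket Lock lives on a separate bucket, since retention policies
# and object versioning can't be combined on the same bucket.
archive = client.create_bucket("my-archive-bucket", location="US")
archive.retention_period = 30 * 24 * 60 * 60  # 30 days, in seconds
archive.patch()
# archive.lock_retention_policy()  # locking is permanent; uncomment
#                                  # only once the period is final
```

Note that locking a retention policy is irreversible, which is exactly what makes it useful for regulatory holds; until you lock it, the retention period can still be adjusted.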
