Cost-optimization with Google Cloud Operations (Stackdriver)

Kasia Pryczek
19 May 2020
4 min read

If your applications run on Google Cloud Platform (GCP), you've probably already searched for ways to optimize the costs your app generates. You may have looked for opportunities to scale your apps up and down. You may have adopted GCP's recommendations to only use services that best suit your startup’s needs. 

With Google Cloud Operations, you pay only for what you use. It's one of the most significant benefits of cloud-based platforms. But if you are new to Google Cloud Platform, you may not know how to control the resources you pay for yet. You may also not know where to look for cost cuts. If your application is up and running and you're using tools like Cloud Operations to check its performance, now is a good time to look at monitoring from a cost optimization perspective.

What is Google Cloud Operations?

Google Cloud Operations (formerly known as Stackdriver) is a monitoring, logging, and diagnostics service natively integrated with Google Cloud Platform. It helps ensure the optimal performance of your application. It gathers different types of data and displays them in custom dashboards, charts, and reports. That's how Cloud Operations diagnoses your application's health. Its key features include monitoring, error reporting, debugging, and logging.

In March 2020, GCP rebranded Stackdriver. Now called Google Cloud Operations, it merged the Stackdriver Metrics UI into the Google Cloud Console to provide a unified user experience and faster access to essential data.

What can I check with Google Cloud Operations?

When it comes to logging and monitoring, developers deal with a vast amount of data. Having all of these services integrated within Cloud Operations is an advantage: logs, errors, dashboards, and charts are easily accessible, which improves the troubleshooting process.

Check your app's health with Cloud Monitoring

GCP offers visibility into the performance of your applications with Cloud Monitoring. It measures your application's overall health based on collected usage, uptime, and memory metrics, and makes that data visible to developers through flexible, customizable dashboards. You can set custom alerts and integrate them with tools such as Slack to react quickly if the system detects any performance issues.

Dashboards and charts let you navigate between monitoring and the detailed log data managed by Cloud Logging. This service enables you to store and search real-time log data. As with Cloud Monitoring, you can set custom alerts fully managed by GCP. Log data can be exported to Cloud Storage or BigQuery for longer retention, or sent via Pub/Sub to any endpoint of your choice.

Analyze your apps’ errors with Error Reporting

For gathering and analyzing an application's errors, GCP offers Error Reporting. Through a dedicated interface, it displays error details such as occurrences over time, time charts, and the number of affected users. The displayed data is already processed and parsed. It supports many programming languages, including Ruby, Python, Node.js, and Java.

Watch how your apps are doing with Cloud Debugger

Last but not least is Cloud Debugger, a service used to inspect the state of applications. It allows you to take a snapshot of an app running in a production environment at a specific code location. It's useful to inspect the app without stopping it, slowing it down, or affecting users on the other end.

How to optimize costs with Google Cloud Operations?

One benefit that Google Cloud Platform has always offered its customers is paying only for what you use. This model is genuinely beneficial, but it also creates the challenge of controlling and optimizing those costs. It's especially true if you are new to GCP.

First of all, it's crucial to understand the billing reports provided by the platform. Monitoring and logging details are also helpful in understanding where the costs come from. Prices for Cloud Logging and Cloud Monitoring are based on the volume of ingested logs and metrics, and on the number of API calls. Analyzing and understanding your usage data can help you find areas that need optimization.

Review your logs on Cloud Logging

Cloud Logging provides a detailed list of ingested logs. It shows you volumes for the previous and current month as well as the projected end-of-month volume. With this information, you can analyze how your usage changes over time. You can also see which types of logs generate the most cost. Once you have a good overview of your log data, there are several ways to reduce your costs.

To reduce the volume of logs ingested into the platform, you can apply exclusion filters in Cloud Logging. You can exclude any types of logs that are not useful to your team, and add custom exclusion rules based on, for example, high log volume or lack of value to your project. You can even set a rule to collect only a certain percentage of, say, successful request logs. Excluded logs will not appear in the interface, which keeps reporting relevant and cost-optimized.
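As a sketch, an exclusion filter written in the Logging query language could look like the following. The resource type and status value are assumptions for a hypothetical App Engine app; the documented sample() function matches a deterministic fraction of entries, so a filter that matches 95% of successful request logs excludes them and keeps roughly a 5% sample:

```
resource.type="gae_app"
httpRequest.status=200
sample(insertId, 0.95)
```

Attached as an exclusion rule, every log entry matching all three lines is dropped before ingestion, so it never counts toward your Cloud Logging bill.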

Export your log data to Cloud Storage or BigQuery

Another way to cut logging costs is to export your log data to Cloud Storage or BigQuery. This method makes sense if you plan any long-term log analysis. It gives you access to your data without any extra ingestion into Cloud Logging. When using this solution, remember to exclude useless logs from your exports, as they might generate unnecessary costs in Cloud Storage or BigQuery.
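Exports are configured as log sinks. As a minimal sketch (the sink and bucket names here are hypothetical, and the account running the command needs permission to manage sinks), a sink routing only warnings and above to a Cloud Storage bucket might look like this:

```shell
# Hypothetical names: archive-sink, my-log-archive-bucket.
# Routes WARNING-and-above entries to Cloud Storage for long-term analysis.
gcloud logging sinks create archive-sink \
    storage.googleapis.com/my-log-archive-bucket \
    --log-filter='severity>=WARNING'
```

For BigQuery, the destination would instead point at a dataset; the --log-filter flag is where you exclude the useless logs mentioned above.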

Reduce your volume of ingested data on Cloud Monitoring

Saving costs with Cloud Monitoring requires reducing the volume of ingested data. The charges are based on the metrics volume and the number of API calls. To cut costs, watch only the metrics that provide useful, actionable information. That's why it's good to either exclude logs and metrics from environments other than production, or lower their level of visibility.

Since you pay for the data you send, it's a good idea to track your development, staging, or beta environments at a lower sampling frequency.

Limit the number of labels

Another saving tip is to limit the number of labels added to custom metrics, since each combination of label values creates a new time series. When you create a label, give it values with low cardinality.
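The effect of label cardinality can be sketched with a little arithmetic: the number of time series one custom metric can generate is the product of the distinct values each label can take. A minimal illustration (label names here are made up for the example):

```python
from math import prod

def estimated_time_series(label_cardinalities):
    """Upper bound on the time series created by one custom metric:
    every combination of label values becomes a separate time series."""
    return prod(label_cardinalities.values()) if label_cardinalities else 1

# Low-cardinality labels keep the series count (and cost) manageable:
print(estimated_time_series({"env": 3, "status_class": 5}))   # 15
# One high-cardinality label, e.g. a user id, explodes the count:
print(estimated_time_series({"env": 3, "user_id": 100_000}))  # 300000
```

This is why a label like a status class (a handful of values) is cheap, while a label carrying user or request identifiers multiplies your metric volume by orders of magnitude.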

At the time of writing, Error Reporting only triggers minor charges for the errors ingested through Cloud Logging. Cloud Debugger is free of charge.

What we think of it

Cloud Logging and Cloud Monitoring are integrated in a very convenient way, letting you navigate between entries and troubleshoot your application. They also provide detailed information about usage over the months, so you can track changes in your usage trends. With a good understanding of your app's performance reports, you can adapt the services' configurations to optimize your costs.

Do you want to know more about what GCP can do for you? Drop us a line or send an email to hello@start-up.house
