Sean Bradley

Welcome to my course on Grafana

Grafana is an analytics platform for all of your metrics. Grafana allows you to query, visualize, alert on, and understand your metrics no matter where they are stored. Create, explore, and share dashboards with your team and foster a data-driven culture. Trusted and loved by the community.

This is a learn-by-example course, where I demonstrate all the concepts discussed so that you can see them working and try them out for yourself as well.


This course comes with accompanying documentation that you can access for free. You will then be able to match what you see in the videos, copy/paste directly from my documentation, and see the same results.

In this course we will:

  • Install Grafana from packages

  • Create a domain name, install an SSL certificate and change the default port

  • Explore the Graph, Stat, Gauge, Bar Gauge, Table, Text, Heatmap and Logs panels

  • Create many different types of data sources, including MySQL, Zabbix, InfluxDB, Prometheus and Loki

  • Configure their various collection processes, such as the MySQL Event Scheduler, Telegraf, Node Exporters, SNMP agents and Promtail

  • Look at graphing time series data versus non-time-series data

  • Install dashboards for each of the data sources, experimenting with community-created dashboards as well as our own

  • Monitor SNMP devices using the Telegraf agent and InfluxDB data sources

  • Set up Elasticsearch with Filebeat and Metricbeat services

  • Create annotation queries and link the Logs and Graph panels together

  • Look at dynamic dashboard variables, dynamic tables and graphs

  • Look at creating value groups/tags and how to use them with different kinds of data sources

  • Set up alerting channels/contact points, understand the different alerting options, configure an example to detect offline SNMP devices and demonstrate receiving email alerts via a local SMTP server

At the end of the course, you will have your own dedicated working Grafana server in the cloud, with SSL, a domain name and many example data sources and collectors configured, that you can call your own, ready for you to take to the next level.

Once again, this is a learn-by-example course, with all the example commands available for you to copy and paste. I demonstrate them working, and you will be able to do that too.

You are now ready to continue.

Thanks for taking part in my course, and I'll see you there.


What's inside

Learning objectives

  • Explore the Graph, Stat, Gauge, Bar Gauge, Table, Text, Heatmap and Logs panels
  • Install and configure a MySQL data source, dashboard and collector
  • Install and configure a Zabbix server data source and dashboards
  • Install and configure InfluxDB with Telegraf
  • Use dashboard variables to create dynamic dashboards with automatic visualisation placement
  • Install an SNMP agent and configure the Telegraf SNMP input
  • Install a Loki data source that queries a Loki service ingesting data from a Promtail service
  • Graph time series as well as non-time-series SQL data
  • Create custom MySQL time series queries
  • Install Grafana from packages
  • Add an Nginx reverse proxy for Grafana
  • Create a domain name and install an SSL certificate for the Grafana server
  • Explore the dashboard panel options
  • Install an SMTP server and set up an email notification channel
  • Set up alerts for when SNMP devices go offline or return no data
  • Set up a Telegram contact point
  • Use annotation queries to link Logs panels and Graph panels
  • Install Prometheus with several Node Exporters and a dashboard
  • Set up an Elasticsearch server with Filebeat and Metricbeat services

Syllabus

Introduction

The official documentation for this course is hosted for FREE at https://sbcode.net/grafana. I have written it specifically to be used by the students of this course. Please check the documentation links in the resources for each lesson, as there may be some extra notes added to each lesson.

In this course, all my examples are executed on an unrestricted Ubuntu 20.04 LTS server. You can get a minimal Ubuntu 20.04 LTS server from cloud providers.
In this course I used DigitalOcean. With this link https://m.do.co/c/23d277be9014 you will normally get $50 of credit for 30 days (occasionally the offer changes), and you can create and delete as many VMs as you wish during the initial offer period.


I have many options for where to install Grafana.

In this course I want the server to be on 24 hours a day, and to be easily accessible from many physical locations.

I decide that hosting it using a cloud provider is my best option.

Grafana is excellent for monitoring time series data.

In this video, I will install Grafana.
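A typical package installation on Ubuntu follows Grafana's documented APT repository steps. Treat the commands below as a sketch and check the current official instructions before copying, since the repository URL and key handling have changed over time:

$ sudo apt-get install -y apt-transport-https software-properties-common wget

$ wget -q -O - https://packages.grafana.com/gpg.key | sudo apt-key add -

$ echo "deb https://packages.grafana.com/oss/deb stable main" | sudo tee /etc/apt/sources.list.d/grafana.list

$ sudo apt-get update

$ sudo apt-get install grafana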


After the successful install, you can start the Grafana service

$ sudo service grafana-server start


Your Grafana server will be hosted at

http://[your Grafana server url]:3000


The default login,

Username : admin

Password : admin

Grafana is updated very regularly.

When using the open source version of Grafana, you need to manage updates yourself and at your own risk.

The open source version comes with no promise of backwards compatibility, so upgrading may affect how any dashboards, visualizations, data sources, etc., work.

Remember that upgrading is done at your own risk. If you consider that the upgrade was detrimental, you can downgrade back to your previous version by using the same method.

To minimize potential backwards compatibility issues when upgrading, it is better to upgrade one patch release at a time until you reach the version that you want. That way you can consult the release notes at https://grafana.com/docs/grafana/latest/release-notes/ for clues about how to fix any errors that appear.
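For example, with the APT packages you can list the versions available in the repository and then install a specific one. The version numbers below are placeholders only; the --allow-downgrades flag lets you roll back with the same syntax:

$ apt-cache madison grafana

$ sudo apt-get install grafana=8.5.3

$ sudo apt-get install --allow-downgrades grafana=8.5.2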

Using my domain name provider, I create an A record pointing to my Grafana server's IP address.

Now to add a proxy. I will use Nginx. The Nginx proxy will also allow us to more easily configure our Grafana server's public URL and bind an SSL certificate to it. It is possible instead to change the grafana.ini settings to use a specific port number, SSL certificates and the HTTPS protocol, but you would then also need to manage the file permissions that the Grafana server process needs. Using a dedicated proxy is more versatile, especially if your server will host multiple other web applications.
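A minimal reverse proxy server block, shown here only as a sketch (replace YOUR-DOMAIN-NAME with your own domain; the file name matches the one Certbot modifies later in this section):

$ sudo tee /etc/nginx/sites-enabled/YOUR-DOMAIN-NAME <<'EOF'
server {
    listen 80;
    server_name YOUR-DOMAIN-NAME;

    location / {
        # Forward requests to the local Grafana service on its default port
        proxy_pass http://localhost:3000/;
        proxy_set_header Host $host;
    }
}
EOF

$ sudo nginx -t && sudo systemctl reload nginx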

I add SSL to the Grafana web server to ensure all traffic is encrypted between the server and web browser.

I use Let's Encrypt by following the Certbot instructions.

For Web Server software, I choose Nginx

For Operating system, I choose Ubuntu 24.04 LTS

I then SSH onto my new Grafana server.

I ensure snap is installed.

# sudo snap list


Make sure I have the latest version of snap

# sudo snap install core; sudo snap refresh core


I install the classic certbot

# sudo snap install --classic certbot


Create a symlink so that the certbot command can be executed from the command line.

# sudo ln -s /snap/bin/certbot /usr/bin/certbot


Start the process of installing the SSL certificate for my domain name.

# sudo certbot --nginx


Follow the prompts, and enter the domain name that you want to secure.

After completion, you should now be able to visit your Grafana server using the URL

https://YOUR-DOMAIN-NAME

Note that Certbot has changed the settings of the Nginx configuration file you created earlier.

You can see those changes by using the cat command.

# cat /etc/nginx/sites-enabled/YOUR-DOMAIN-NAME

I use the 'TestData' data source. This is perfect for beginning to learn about Grafana visualisations.

We add a panel to the existing TestData DB dashboard, and more specifically, we first look at managing rows and the various presentation options we have.

We look at some of the presentation options for panels, such as positioning, size, keyboard shortcuts, duplication, deleting and more.

When creating and editing dashboards, it is advisable to make regular saves in case you need to go back to a previous version.

I demonstrate many of the visualisation settings found in the panel tab and using different TestDataDB data source scenarios.

I demonstrate the settings found in the Overrides tabs.

I demonstrate using the Reduce and Add field from calculation transformations on the data from the TestData we have so far. These two transformations are likely to be the most common use case for using transforms within Grafana and are a good base to understand the remaining transforms.

We explore the Stat panel, which shows a single value from a series.

We explore the Gauge panel.

We explore the Bar Gauge panel.

We explore the Table panel.

Overview So Far

In this video, I demonstrate how to set up the MySQL data source with a collector and its related dashboard.

I also demonstrate how to install a MySQL database server.

I install the MySQL database onto a new server to also demonstrate the types of issues that you may have when connecting Grafana to another server.

Add the MySQL data source in Grafana. If you try to save it now, it will report some issues. We will next set up a MySQL server ready for the data source configuration.

After sourcing your new server (you can always use your existing Grafana server if you prefer), we can next install MySQL onto the new server. SSH onto your new MySQL server.

We create a custom MySQL time series query that reads data from a table in our MySQL database, formats the result set in a way that Grafana can use as time series data, and presents that data in a graph, with the ability to filter it using the Grafana time range picker.
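As an illustration only (the table and column names below are hypothetical), a Grafana-friendly time series query returns time, metric and value columns and uses the $__timeFilter macro so the dashboard time picker controls the rows returned. Set the query's format to Time series in the editor.

SELECT
  UNIX_TIMESTAMP(created_at) AS "time",
  sensor_name AS metric,
  temperature AS value
FROM example_readings
WHERE $__timeFilter(created_at)
ORDER BY created_at;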

Grafana graphs Time Series data from many types of data sources extremely well.

But sometimes you just want to graph simple non-time-series data, i.e., data without timestamps: flat tables that show simple statistics or values.
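For that case you can return a plain result set and set the query's format to Table, for example (again with hypothetical table and column names):

SELECT sensor_name AS metric, COUNT(*) AS value
FROM example_readings
GROUP BY sensor_name;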

To keep this as simple as possible, we will install the Loki binary as a service on our existing Grafana server.

Now we will create the Promtail service that will act as the collector for Loki.
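A Promtail configuration along these lines pushes to the local Loki service and reads the systemd journal. This is a sketch only, not necessarily the exact file used in the videos; it assumes a Promtail build with journal support, and the label names are just examples:

server:
  http_listen_port: 9080
  grpc_listen_port: 0

positions:
  filename: /tmp/positions.yaml

clients:
  # Push to the Loki service running locally on the Grafana server
  - url: http://localhost:3100/loki/api/v1/push

scrape_configs:
  - job_name: systemd-journal
    journal:
      labels:
        job: systemd-journal
    relabel_configs:
      # Copy the systemd unit name into a queryable label
      - source_labels: ['__journal__systemd_unit']
        target_label: unit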

Now that we have a Loki data source, we can query it with the LogQL query language.

In this video, we will try out many LogQL queries using the 'systemd-journal' stream selector that we just set up.

There are two types of LogQL queries:

  1. Log queries returning the contents of log lines as streams.

  2. Metric queries that convert logs into value matrices.

A LogQL query consists of:

  1. The log stream selector

  2. Filter expression

We can use operations on both the log stream selectors and filter expressions to refine them.
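A few illustrative examples, assuming the job and unit labels from a Promtail journal configuration like the one sketched earlier:

{job="systemd-journal"}

{job="systemd-journal"} |= "error"

{unit="ssh.service"} |~ "Invalid user|Failed password"

count_over_time({job="systemd-journal"} |= "error" [5m])

The first returns every line in the matching streams, the next two apply a line filter and a regex filter, and the last is a metric query that counts matching lines over 5 minute ranges.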

Topics discussed in the video are:

  • Log Stream Selectors

  • Filter Expressions

  • Expression Operators

  • Range and Instant Vectors

  • Aggregate Functions

  • Aggregate Group

  • Comparison Operators

  • Logical Operators

  • Arithmetic Operators

  • Operator order


LogQL Filter Expressions

We can install a Promtail service on other servers, and point them to an existing Loki service already running on a different server. If you have multiple Promtail services distributed around your network, and all pushing data to one main Loki service, then there are a few more considerations.

We then need to make sure that the job label in each Promtail configuration's scrape_configs is unique from the perspective of the Loki service it pushes to.

If using a Promtail service, or Loki service across the network, then it is important that you consider who can access it, or whether it needs to be encrypted since the transmitted data is likely to contain sensitive information about your server and other services.

In this video, I demonstrate setting up annotation queries to help me visualize invalid user login attempts on my servers.

We will add to our Promtail scrape configs, the ability to read the Nginx access logs.

We need to add a new job_name to our existing Promtail config_promtail.yml
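The new job might look roughly like this (a sketch; the label names assume a default Nginx install, and Promtail will need read permission on /var/log/nginx, which on Ubuntu is readable by the adm group):

  - job_name: nginx
    static_configs:
      - targets:
          - localhost
        labels:
          job: nginx
          __path__: /var/log/nginx/access.log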

Restart the Promtail service and check its status.

If I go back into Grafana, I will see the new nginx job inside the Explore panel.

This is pretty good now, but we can make it better.

I want to be able to filter by the status code and other http properties.

We can use the Loki pattern parser to dynamically create labels for query refinement.
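For example, a query along these lines parses the default Nginx combined log format into labels and then filters on one of them (illustrative only; adjust the pattern to match your actual log format):

{job="nginx"} | pattern `<ip> - <_> [<_>] "<method> <uri> <_>" <status> <size> "<_>" "<agent>"` | status >= 400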

Now I can create a dashboard for Nginx status codes.

Prometheus is already available in the default Ubuntu 20.04 repositories, so we can just install it and it will be set up as a service automatically.

$ sudo apt install prometheus

$ sudo service prometheus status


Test it by visiting http://[your domain or ip]:9090/graph

Restrict internet access with:

$ sudo iptables -A INPUT -p tcp -s localhost --dport 9090 -j ACCEPT

$ sudo iptables -A INPUT -p tcp --dport 9090 -j DROP

$ sudo iptables -L

We will install two dashboards: one for the Prometheus service and the other for the Node Exporter.

The Prometheus dashboard can be found in the dashboards tab for the Prometheus data source configuration in the Grafana UI.

We will get the Node Exporter dashboard from the official Grafana Dashboards link.

https://grafana.com/grafana/dashboards

The dashboard ID is 11074 for the English version or 8919 for the Chinese version.

English Version: https://grafana.com/grafana/dashboards/11074

Chinese Version: https://grafana.com/grafana/dashboards/8919

Depending on your Grafana and Prometheus versions, the pre-built Grafana Metrics dashboard may partly work or not at all.

In this video, I will show the steps that I used to get it to work.

Install the Grafana Metrics dashboard from the Prometheus Datasource --> Dashboards tab.

Since the Prometheus service is local, it will retrieve Grafana stats from the URL http://127.0.0.1:3000/metrics

Grafana will return metrics data by default.

You can verify or change the settings in the grafana.ini file.
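For reference, the relevant grafana.ini section and a matching Prometheus scrape job look roughly like this (sketches only; the job name is an example):

[metrics]
enabled = true

And in /etc/prometheus/prometheus.yml:

scrape_configs:
  - job_name: grafana
    static_configs:
      - targets: ['localhost:3000']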

Go back into Grafana, and there are likely to be some issues with the visualizations.

In the video, I fix each problem in turn and demonstrate my problem solving process.

I will install a Prometheus Node Exporter on a different server and connect to it using the main Prometheus service.

It is now exposing the metrics endpoint on http://[your domain or ip]:9100

We can create a scrape config on the Prometheus server that retrieves metrics from that URL.

But since my new node exporter is accessible from the internet, I will block port 9100.

Next, go back onto the main Prometheus server and edit the existing scrape config for node, adding the new metrics endpoint for the other server.
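In /etc/prometheus/prometheus.yml the existing node job then lists both targets, something like this (the remote address is a placeholder), followed by a Prometheus restart:

  - job_name: node
    static_configs:
      - targets: ['localhost:9100', 'OTHER-SERVER-IP:9100']

$ sudo service prometheus restart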

You can connect directly to Zabbix via the API method. If you want faster performance, then it is advised to also set up a MySQL data source for your Zabbix connection. The MySQL data source will instead connect directly to the Zabbix database and bypass the API layer for certain queries.

We import the 3 supplied Zabbix dashboards:

  1. Zabbix System Status

  2. Zabbix Template Linux Server

  3. Zabbix Server Dashboard

In the next few lectures, we will look at how to create and use dashboard variables.

We will manually recreate a fully dynamic dashboard to query the SNMP devices that we set up in the previous lectures.



We create Dynamic Tables from a Prometheus query using a dashboard variable.

We create repeating dynamic timeseries graphs from queries using our dashboard variables.
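As a concrete illustration with the Prometheus data source (variable and label names are examples only): create a dashboard variable named instance with the query below, reference it in a panel query, and enable the panel's repeat option so one panel is rendered per selected value.

label_values(up, instance)

up{instance=~"$instance"}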

I have an SNMP enabled MikroTik 260GS switch configured as a host in my Zabbix system. I demonstrate creating a Grafana dashboard for it.

We will create an alerting rule for high CPU usage reported by any of our node exporters.

We will create alerting rules for when Prometheus detects that a Node Exporter is not up.

And we will create an alerting rule for when Prometheus is down.

When a node exporter is down, then Prometheus will return a 0 for the up metric. This is easy to write an alerting rule for.

But when Prometheus is down, then Grafana will get a timeout. There will be no response from Prometheus at all.

We will need to handle the symptom, rather than a metric value like the one returned when a Node Exporter is down.
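For illustration, the first two rules can be built on PromQL expressions like these (the job label and thresholds are examples only); the Prometheus-down case is instead handled by setting the rule's no data / error handling to Alerting, since the query itself fails rather than returning a value:

up{job="node"} == 0

100 - (avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100) > 80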

I want to send alerts using Grafana, but I need to first create an alert notification channel.

In this lecture I will create a new channel for email alerts.

I don't have an SMTP server available, so I install a local send-only SMTP server on my Grafana server.
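Once a local MTA is listening on port 25 (a send-only Postfix install is one common choice), the relevant grafana.ini section looks roughly like this (a sketch; the from address is a placeholder), followed by a Grafana restart:

[smtp]
enabled = true
host = localhost:25
skip_verify = true
from_address = grafana@YOUR-DOMAIN-NAME

$ sudo service grafana-server restart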


We can also add Telegram as a Contact Point for Alerting.

To do this, install the Telegram app on your phone or PC. It will be easiest to set this up on the PC first.

You will need a BOT API Token and Chat ID.

See the video for an example of how to do this.

We add users to our Grafana system using several different methods, such as:

  • Add User

  • Invite User

  • User Sign Up

  • Anonymous Users

We also set the various roles of our users and manage the various permissions of our dashboards to allow certain roles to view and/or edit.

Teams help to segregate groups of users further.

You can assign team permissions to your dashboards and add/remove users from the teams instead without needing to go into each individual dashboard and reassign/add/remove particular user permissions each time.

Orgs can be used to create a new Grafana configuration, with its own dashboards, data sources, users, teams and alerts, on an existing Grafana install.

Users can be shared between Orgs if you need to.

Course Conclusion

I introduce you to the Grafana Cloud managed version. If you have done most of this course, then I can recommend the upgraded 28-day PRO trial coupon for Grafana Cloud, available using the link https://grafana.com/auth/sign-up?refCode=gr83y6yqNccjKnX

In this video I demonstrate setting up the Loki data source in the Grafana Cloud version.

In this video I demonstrate setting up the Prometheus data source in the Grafana Cloud version.

We are going to install InfluxDB v2, the InfluxDB data source, a Telegraf agent and then collect some data.

InfluxDB is a database useful for storing large amounts of timestamped data.

Telegraf is an agent that supports plugins, and it will save its data into InfluxDB.

Now to install the Telegraf agent and configure the output plugin to save data into the InfluxDB.
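The output section of /etc/telegraf/telegraf.conf ends up looking something like this (the token, organization and bucket values are placeholders copied from your own InfluxDB v2 setup):

[[outputs.influxdb_v2]]
  urls = ["http://127.0.0.1:8086"]
  token = "YOUR-INFLUXDB-API-TOKEN"
  organization = "YOUR-ORG"
  bucket = "YOUR-BUCKET"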

Now that we have a System dashboard visible in InfluxDB, we can also reproduce this same dashboard in Grafana.

In this video, I demonstrate the process of doing that.

SNMP stands for Simple Network Management Protocol. 

We can configure Telegraf to read SNMP, save it into InfluxDB and view it in Grafana.

Common devices that support SNMP are routers, switches, printers, servers, workstations and other devices found on IP networks.

Not every network device supports SNMP, or has it enabled, and there is a good chance you don't have an SNMP enabled device available that you can use in this lecture.

So, I will also show you how to install and configure SNMP on your server, as well as read the SNMP data with Telegraf, save it into InfluxDB and view it in Grafana.
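Roughly, that means installing the snmpd daemon and then pointing a Telegraf SNMP input at it. A sketch only; the community string and OID are examples, with sysUpTime used here just to prove the pipeline works:

$ sudo apt install snmpd snmp

And in the Telegraf configuration:

[[inputs.snmp]]
  agents = ["udp://127.0.0.1:161"]
  version = 2
  community = "public"

  [[inputs.snmp.field]]
    name = "uptime"
    oid = "1.3.6.1.2.1.1.3.0"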


I will add several more SNMP agents to the Telegraf config.

In order for Telegraf to connect to the external SNMP agents, those other SNMP agents will need to be configured to allow my Telegraf agent on my Grafana server to connect remotely.

At the end of the lecture, I have 3 separate SNMP agents I can query for the next lecture.

We import an SNMP dashboard that uses the InfluxDB data source and the Telegraf collector.


I demonstrate installing and querying Elasticsearch 7.16.

Elasticsearch uses the Java VM, so I recommend a minimum spec of 2 GB RAM for the server that you use for the Elasticsearch service.

I am using Debian Package Instructions from https://www.elastic.co/guide/en/elasticsearch/reference/current/install-elasticsearch.html


I demonstrate how to set up a Filebeat service to read systemd logs.
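One common way to do this is with Filebeat's system module, which on Ubuntu reads the syslog and auth logs that are fed from the journal (a sketch; the videos may use a different input, so treat these commands as one possible path). The setup step loads the index template into Elasticsearch:

$ sudo filebeat modules enable system

$ sudo filebeat setup --index-management

$ sudo systemctl enable --now filebeat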

We can then set up a new data source in Grafana, or modify the existing one, and test it using the Explore tab.

I demonstrate how to set up a Metricbeat service to send data to the Elasticsearch server.

Now to install an advanced dashboard that uses both the Filebeat and Metricbeat data sources at the same time.

I will set this up on my Linux server where the Filebeat process is already running.

Download Metricbeat for your OS from https://www.elastic.co/downloads/beats/metricbeat

My OS is a Debian-based Ubuntu 20.04.


Activities

Be better prepared before your course. Deepen your understanding during and after it. Supplement your coursework and achieve mastery of the topics covered in Grafana with these activities:
Review Time Series Data Concepts
Reinforce your understanding of time series data, which is fundamental to Grafana's functionality.
  • Review the definition of time series data and its characteristics.
  • Study examples of time series data in different domains.
  • Practice identifying time series patterns and anomalies.
Create a Grafana Resource Compilation
Improve your understanding of Grafana by compiling a list of useful resources.
  • Gather links to Grafana documentation, tutorials, blog posts, and community forums.
  • Organize the resources into categories (e.g., data sources, visualizations, alerting).
  • Add brief descriptions of each resource.
Review: Learning Grafana 7
Deepen your understanding of Grafana by studying a dedicated book on the subject.
  • Obtain a copy of 'Learning Grafana 7'.
  • Read the chapters relevant to the course topics.
  • Experiment with the examples provided in the book.
Review: Grafana: Up and Running
Gain a practical understanding of Grafana through a hands-on guide.
  • Obtain a copy of 'Grafana: Up and Running'.
  • Work through the examples in the book to create dashboards.
  • Experiment with different data sources and visualizations.
Write a Blog Post on Grafana Best Practices
Solidify your understanding of Grafana by sharing your knowledge and insights with others.
  • Research Grafana best practices for dashboard design, data source configuration, and alerting.
  • Write a blog post summarizing your findings and providing practical tips.
  • Publish your blog post on a platform like Medium or your personal website.
Build a Monitoring Dashboard for a Home Server
Apply your Grafana knowledge by creating a real-world monitoring solution for a home server.
  • Set up a home server with services to monitor (e.g., CPU, memory, network).
  • Install and configure data collectors (e.g., Telegraf, Prometheus node exporter).
  • Create a Grafana dashboard to visualize the collected data.
  • Configure alerts for critical metrics.
Develop a Grafana Plugin
Extend Grafana's functionality by creating a custom plugin for a specific use case.
  • Identify a need for a new Grafana plugin (e.g., a custom data source or panel).
  • Study the Grafana plugin development documentation.
  • Develop and test your plugin.
  • Share your plugin with the Grafana community.

Career center

Learners who complete Grafana will develop knowledge and skills that may be useful to these careers:
Monitoring Specialist
A monitoring specialist concentrates primarily on ensuring the health, performance, and security of IT systems. They make use of many monitoring tools to detect anomalies, troubleshoot problems, and analyze patterns. This course helps a monitoring specialist to set up Grafana, create data sources from a multitude of platforms, configure alerting channels, and design custom dashboards. The extensive coverage of data sources and alerting mechanisms makes this course essential for anyone focused on monitoring.
Data Visualization Specialist
Data visualization specialists translate complex data sets into easy to understand visuals. The work of data visualization encompasses dashboards, reports, and presentations. This Grafana course builds a foundation in data visualization. This course is particularly helpful, as it covers creating dynamic dashboards, graphing time series data, and exploring various visualization options within Grafana. The knowledge gained here helps a data visualization specialist present data in a compelling and insightful manner.
Site Reliability Engineer
The Site Reliability Engineer focuses on ensuring the reliability, availability, and performance of systems and services. They use monitoring and automation tools to proactively identify and address potential issues. This Grafana course builds the foundational knowledge needed by the Site Reliability Engineer. The focus on setting up alerts, integrating with Prometheus and other data sources, and creating custom dashboards, helps ensure system reliability.
Data Analyst
A data analyst examines data using tools like Grafana to identify trends, develop charts, and create visual presentations that help organizations make better decisions. Data analysts often work with data visualization tools to create reports and dashboards. This Grafana course helps build a foundation for a data analyst by demonstrating how to install Grafana and create different types of data sources from MySQL, Zabbix, InfluxDB, Prometheus, and Loki. The modules on exploring dashboards panels and creating custom MySQL time series queries may also prove to be helpful.
DevOps Engineer
The DevOps engineer focuses on automation, collaboration, and infrastructure as code. They often use monitoring and visualization tools to gain insights into application performance. The Grafana course helps a DevOps engineer learn how to use Grafana for monitoring, alerting, and visualizing metrics from various data sources. The modules on setting up Prometheus, Loki, and Elasticsearch integrations provide practical skills for monitoring applications and infrastructure.
Business Intelligence Analyst
The Business Intelligence Analyst transforms data into insights that drive business decisions. Business intelligence analysts frequently use data visualization tools to present findings to stakeholders. This Grafana course helps build the visualization skills needed for this role. The focus on creating dashboards and exploring panel options will enable a business intelligence analyst to effectively communicate complex data to a non-technical audience.
Systems Administrator
The systems administrator ensures the stability, integrity, and efficient operation of an organization's IT systems. Systems administrators use tools like Grafana to monitor system performance. This course helps a systems administrator learn how to install Grafana, create data sources such as MySQL and Prometheus, and set up alerts for critical system events. The modules on installing SNMP agents and configuring Telegraf SNMP input may be particularly useful for monitoring network devices.
Cloud Engineer
The cloud engineer is responsible for designing, deploying, and managing cloud-based infrastructure and applications. As cloud environments are dynamic and distributed, monitoring is crucial. This Grafana course builds proficiency in visualizing cloud metrics. The course's focus on installing Grafana on cloud platforms helps build expertise. Learning to connect to data sources like Prometheus and Elasticsearch will be valuable for monitoring cloud application performance.
Application Support Engineer
The application support engineer troubleshoots and resolves issues with software applications. They use monitoring tools to identify the root cause of problems and ensure application uptime. The Grafana course helps application support engineers learn how to monitor applications using Grafana. The integration with data sources like Prometheus, Loki, and Elasticsearch can help monitor application logs, metrics, and performance.
Database Administrator
The database administrator is responsible for the performance, integrity, and security of databases. They use monitoring tools to identify performance bottlenecks and ensure database uptime. The Grafana course helps a database administrator learn how to connect Grafana to databases like MySQL and InfluxDB. Creating custom time series queries and setting up alerts will assist in proactively managing database health and performance.
Network Engineer
The network engineer designs, implements, and manages computer networks. A key role of this job is ensuring network uptime and performance. This course may be useful since it covers installing and configuring InfluxDB with Telegraf and monitoring SNMP devices. The ability to create dynamic dashboards with automatic visualization placement will also assist a network engineer in monitoring network health and performance with Grafana.
Security Analyst
A security analyst monitors systems and networks for security breaches and investigates security incidents. Security analysts can leverage tools like Grafana to visualize security logs and metrics. This course may be helpful for security analysts. The course covers integrating Grafana with Elasticsearch, Filebeat, and Metricbeat, which are commonly used in security information and event management (SIEM) systems. This combination helps correlate logs and metrics for effective threat detection.
Technical Support Engineer
The technical support engineer provides expert technical assistance to customers, resolving complex issues and ensuring customer satisfaction. Technical support engineers benefit from system performance and health overviews. The Grafana course helps technical support engineers gain skills in setting up Grafana servers, configuring data connectors, and visualizing data. They will also learn how to automate the process of creating dynamic dashboards with automatic visualization placement.
IT Support Specialist
The IT support specialist provides technical assistance to end users, troubleshooting hardware, software, and network issues. This Grafana course may be useful for learning how to use Grafana to monitor system performance and identify potential problems before they impact users. Setting up alerts, demonstrated in the course, further empowers an IT support specialist to proactively address issues. Knowledge of Grafana is a valuable asset in providing efficient IT support.
IT Manager
The IT manager oversees the IT department, ensuring that IT systems and services meet the needs of the organization. IT Managers need tools that provide oversight of system performance and health. This Grafana course may be useful, as IT managers can use Grafana to gain visibility into the performance of IT systems and infrastructure, track key metrics, and make data-driven decisions to improve efficiency and reliability.

Reading list

We've selected one book that we think will supplement your learning. Use it to develop background knowledge, enrich your coursework, and gain a deeper understanding of the topics covered in Grafana.
The selected book provides a comprehensive guide to Grafana 7, covering dashboard creation, data visualization, and data analysis. It is a useful reference for understanding Grafana's features and capabilities, and offers practical examples and step-by-step instructions for building dashboards and visualizations. It can be used as a reference text to expand on and add depth to the course.
