How to check the Prometheus retention period
By default, Prometheus stores time-series data for 15 days. Retention can also be bounded by size, that is, by specifying a maximum amount of disk space used by blocks. Prometheus is a monitoring tool initially built by SoundCloud to monitor their servers; it is now open source and completely community driven. For capacity planning, the storage documentation gives this simple formula: needed_disk_space = retention_time_seconds * ingested_samples_per_second * bytes_per_sample. To delete time series metrics that match some job or instance before their retention expires, you can use the admin API. Note that those API calls don't delete data immediately: the data still exists on disk and will be cleaned up in a future compaction. Size-based retention is still a bit best effort, as it does not (yet) include the space taken by the WAL or by blocks being populated by compaction. One pitfall to watch for: when setting the retention flag on the command line, make sure you don't clobber the other default arguments — in one case, setting the retention flag also changed the default location of the config file.
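The formula above can be sanity-checked with a quick shell calculation. The ingestion rate and bytes-per-sample figures below are illustrative assumptions, not measurements from any particular server:

```shell
# Estimate disk needs for the default 15 days of retention.
# Assumed workload: 100,000 samples/s ingested, ~2 bytes per sample on disk.
retention_seconds=$((15 * 24 * 3600))              # 1296000
ingested_samples_per_second=100000                 # assumption
bytes_per_sample=2                                 # typical 1-2 bytes after compression
needed_bytes=$((retention_seconds * ingested_samples_per_second * bytes_per_sample))
needed_gib=$((needed_bytes / 1024 / 1024 / 1024))
echo "~${needed_gib} GiB needed"                   # ~241 GiB
```

Plugging in your own rate(prometheus_tsdb_head_samples_appended_total[1d]) figure gives a far more realistic estimate than these placeholder numbers.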
The Prometheus TSDB is part in-memory, part on-disk. Recent data is kept in memory and protected by a write-ahead log (WAL); after a few hours, the in-memory data is compacted into persistent blocks on disk. Retention is controlled by the --storage.tsdb.retention.time flag. Loki works similarly but has its own setting: retention there is configured with retention_period, and if no global retention_period is specified, a default of 744h (31 days) is used. To run the stack on Kubernetes, you will learn to deploy a Prometheus server and metrics exporters, set up kube-state-metrics, pull and collect those metrics, and configure alerts with Alertmanager and dashboards with Grafana. Local retention is a deliberate design choice: Prometheus is a particularly good solution for short-term retention of metrics, and for persisting data over longer periods an "external storage" mechanism is used.
The --storage.tsdb.retention.time flag is set when you start Prometheus. Each Prometheus server is configured to scrape a list of targets (i.e., HTTP endpoints) at a certain frequency — in our case, starting at 60s. Telemetry older than the retention window is purged. With Kubecost, see the Helm chart configuration options to adjust both the retention period and the storage size. In a Thanos setup, the receiver is configured to populate the object storage bucket and also has its own local retention period, while the querier is configured to query data on both the receiver and the storage bucket via the store gateway. For the deployment below, ensure you are connected to your cluster and that kubectl and helm are installed on your local machine. Before external storage was wired in, it was possible to scrape metrics and handle alerts in real time, but everything was stored in local volumes. So, as mentioned, we will use the stable/prometheus-operator chart.
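As a minimal sketch of passing the flag (the mount path is illustrative), note that overriding a container's default arguments also overrides the default --config.file value — the pitfall mentioned earlier — so restate it explicitly:

```shell
# Run Prometheus with both time- and size-based retention bounds.
docker run -d --name prometheus -p 9090:9090 \
  -v /path/to/prometheus.yml:/etc/prometheus/prometheus.yml \
  prom/prometheus \
  --config.file=/etc/prometheus/prometheus.yml \
  --storage.tsdb.retention.time=30d \
  --storage.tsdb.retention.size=10GB
```

Whichever of the two limits is hit first triggers block deletion.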
Remote storage solutions are usually optimized for much longer retention periods than Prometheus's local TSDB; Prometheus does not offer a clustered storage solution of its own. With downsampled data (for example via Thanos), we can "zoom out" to a larger time range and still get fast queries. Be aware that the retention option has gone through several renames across Prometheus versions, so check the documentation for the release you run. Given the per-block ratios between bytes and samples, your sample ingestion rate, your retention period, an extra 10% to allow for blocks straddling the retention boundary, plus another 10% for temporary space during compaction, you can calculate your block storage needs with: bytes per sample * ingestion rate * retention time * 1.1 * 1.1. Any data you wish to keep longer should be shipped to another location with a longer retention period. As a side note for logs, I've set the Loki retention_period to 14 days to get rid of old logs and keep disk usage low.
I don't need much history, so I reduced the retention. Prometheus 2.7 introduced an option for size-based retention with --storage.tsdb.retention.size, alongside the time-based --storage.tsdb.retention.time flag (which defaults to 15d). In Thanos you can set retention by resolution using the --retention.resolution-raw, --retention.resolution-5m and --retention.resolution-1h flags; if you set --retention.resolution-raw lower than the downsampled resolutions, you may run into the problem of not being able to "zoom in" to your historical data. Some distributions configure Prometheus by default with an 8Gi persistent volume and a 30d retention period; that can be changed with --set prometheus.server.retention=<days> and the persistentVolume settings. For bytes_per_sample, we used rate(prometheus_tsdb_compaction_chunk_size_bytes_sum[1d]) divided by the corresponding samples rate. Many teams inevitably put more pressure on Prometheus than it was designed to handle.
You can combine the basic formula with the compaction overheads: add an extra 10% to allow for blocks straddling the retention boundary and another 10% for temporary space during compaction, i.e., bytes per sample * ingestion rate * retention time * 1.1 * 1.1. In a Thanos deployment, the querier is configured to query data on both the receiver and the object storage bucket via the store gateway. While Prometheus's built-in storage intentionally does not support long-term retention (see Julius Volz's talk "Integrating Long-Term Storage with Prometheus"), it integrates cleanly with systems that do. A common federation pattern: on each cluster's local Prometheus servers, leave the default two-week retention for metrics and remove the local Alertmanager, using a "central" monitoring Alertmanager with already-defined routes and alert receivers instead.
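Here is that calculation with both 10% overheads applied, again using assumed ingestion figures rather than measured ones:

```shell
# Block storage estimate: bytes/sample * ingestion rate * retention * 1.1 * 1.1
# Integer math: multiply by 121 and divide by 100 for the two stacked 10% overheads.
bytes_per_sample=2
ingestion_rate=100000                      # samples/s, assumption
retention_seconds=$((15 * 24 * 3600))      # 15 days
blocks_bytes=$((bytes_per_sample * ingestion_rate * retention_seconds * 121 / 100))
echo "$((blocks_bytes / 1024 / 1024 / 1024)) GiB including overheads"   # 292 GiB
```

Compared with the bare formula, the overheads add roughly a fifth to the raw estimate, which is worth budgeting for before the disk fills mid-compaction.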
Set the retention time to 30 days and create a persistent volume (based on Scaleway Block Storage, in this guide) to store the Prometheus data. This guide explains how to implement Kubernetes monitoring with Prometheus: it works by scraping "targets", which are endpoints that expose metrics. It is possible to extend the default command-line options for Prometheus by using a custom variable; for example, on Prometheus 1.x, to set remote timeouts to 30 seconds and the data retention period to 2 days: prometheus_cmdline_extras: "-storage.remote.timeout 30s -storage.local.retention 48h". Recent data is kept in memory and backed up on disk in WAL (write-ahead log) segments, as it is the most frequently accessed. At Coveo (as Thomas De Giacinto wrote in March 2021), we use Prometheus 2 for collecting all of our monitoring metrics — so when our pod was hitting its 30Gi memory limit, we decided to dive into it. While you usually care only about the most recent day or so worth of data, a longer retention period is desirable for long-term capacity planning. Prometheus is known for being able to handle millions of time series with only a few resources.
rules: this file contains all the Prometheus alerting rules. Prometheus is the cornerstone of many monitoring solutions, and sooner or later Prometheus federation will appear on your radar. What I have in mind is to set the retention time on the slave (client) Prometheus servers to a short period — probably two or three days, so they don't take much space — and instead store the metrics on the primary Prometheus server for 15 days (the default retention time). Prometheus can also write incoming data to local storage and replicate it to remote storage in parallel. Even though short-lived cluster metrics are usually enough to troubleshoot issues, there are scenarios in which a longer history helps: organizations may want to analyze these metrics for patterns that occur over the course of a few months or even a year. To determine when to remove old data, use --storage.tsdb.retention.time, and optionally set a ceiling on disk usage with --storage.tsdb.retention.size.
To demonstrate the steps required to reproduce the issue, start by getting the latest Docker image of Prometheus with docker pull prom/prometheus, then create a Docker service from the image, leaving everything defaulted. The default Kubecost installation comes with a 32Gb persistent volume and a 15-day retention period for Prometheus metrics — enough space to retain data for roughly 300 pods, depending on your exact node and container count. Since we have set up the Prometheus Adapter to collect custom metrics, we now have pods/my_http_requests, which measures requests per second over a two-minute window. In Loki, if a per-tenant retention_period is specified it will be applied; otherwise the global retention_period is selected. In Cortex, distributors are typically capable of processing between 20,000 and 100,000 samples/sec with 1 CPU core; their CPU utilization depends on the specific cluster setup, while they don't need much RAM. The amount of data stored on disk depends on retention — higher retention means more disk. Note that once some newer TSDB features (such as WAL compression) are enabled, downgrading Prometheus below the version that introduced them will require deleting the WAL.
The difference between the stable/prometheus and stable/prometheus-operator charts is that the Operator has built-in Grafana with a set of ready-to-use dashboards and a set of ServiceMonitors to collect metrics from cluster services such as CoreDNS, the API server, the scheduler, etc. In Loki, the retention period is configured within the limits_config section. Deploy the Prometheus stack in a dedicated Kubernetes namespace called monitoring. In Thanos, by default there is NO retention set for object storage data — data is kept for unlimited time, which is a valid and recommended way of running Thanos. Beware of data inconsistency: just putting a load balancer in front of multiple Prometheus servers assumes that all of them were up and able to scrape the same metrics. For long-term storage, M3DB — released as part of Uber's M3 effort — is a distributed time-series store and reverse index that can serve as a scalable storage backend for Prometheus.
You can increase this period on the Prometheus command line. To increase the retention time to a year, set something like --storage.tsdb.retention.time=365d (or equivalently --storage.tsdb.retention.time=1y; see the Prometheus documentation for the duration syntax). After changing flags, restart Prometheus and check that the new command-line flags have been accepted. There are really two knobs: how much disk the data may use, and how long we store the metrics — the latter is the "retention period" in Prometheus, and this property has suffered a lot of changes during the different versions; older 1.x setups used flags such as -storage.local.retention=12h. One thing you also want to check out in the documentation is how to set up the Alertmanager. Prometheus stores an average of only 1-2 bytes per sample, which is what makes the capacity-planning formula practical. Finally, if your metrics are also sent to a long-term backend such as a Logz.io Infrastructure Monitoring account and persisted there, you can reduce the retention configuration on your local instance to save storage space. In this blog post we also look at two topics, compaction and retention, which happen in the background while Prometheus is running.
For example, the default METRICS_RETENTION value of 30 days is expressed as 720h. Be aware there is a practical lower bound: I wanted to store metrics for just a 30-minute period, but Prometheus does not delete old metrics that aggressively — data is only removed when an entire on-disk block falls outside the retention window, so very short retention values (seconds or minutes) behave as if they were a couple of hours. Once running, you can check Prometheus by visiting its web UI (port 9090 by default). Prometheus and the various exporters listed on this page are bundled in the Omnibus GitLab package. The default 15-day retention prevents the amount of data from growing indefinitely and helps keep the data size in check, as Prometheus deletes metrics data older than 15 days. The retention time of a single Prometheus server is therefore limited: it can't store and retain time-series data long-term, for months or years. With the Helm chart, you can decrease the retention period — for example, to 8 days — via prometheus.server.retention=8d. The main config file, prometheus.yaml, holds all the scrape configs, service discovery details, storage locations, data retention configs, and so on.
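With the community Helm chart, the same knobs look roughly like the following (the repository alias, release name, and volume size here are assumptions for illustration):

```shell
# Install/upgrade Prometheus with an 8-day retention and a sized persistent volume.
helm upgrade --install prometheus prometheus-community/prometheus \
  --namespace monitoring \
  --set server.retention=8d \
  --set server.persistentVolume.size=16Gi
```

In umbrella charts that embed Prometheus (Kubecost, for instance), the same values are nested under a prefix, e.g. prometheus.server.retention.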
According to the retention time period specified in the configuration, values are kept in memory and later stored on the disk where Prometheus is running. Metric data grows over time while, per tenant, its value decreases — combating that growth and the resulting costs is a large part of running monitoring at scale; cloud-native environments emit a massive amount of data, especially as developers add more labels to their metrics, causing large cardinality spikes. In Thanos, leaving object-storage retention unset means you store data for unlimited time, which is a valid and recommended way of running Thanos. Part of Prometheus's appeal is how easily and quickly you can set it up and get value out of it. Resource consumption of the Prometheus pod scales with the number of nodes in the cluster, where the count includes worker, control plane and etcd nodes. Prometheus writes incoming data to local storage and replicates it to remote storage in parallel, and remote storage systems may accept data from multiple Prometheus instances. In the federation setup described earlier, a central Prometheus server scraped data from multiple client Prometheus servers using Prometheus federation; this approach had several drawbacks — the data retention policy needed to be low to avoid huge, costly volumes.
On startup you may see this deprecation notice: the "storage.tsdb.retention" flag has been deprecated, use "storage.tsdb.retention.time" instead. For sizing, each million series (including churn) consumes roughly 15GB of chunk storage and 4GB of index per day, so multiply by the retention period. The goal of downsampling is to get fast results for range queries over big time intervals like months or years. For the capacity formula, retention_time_seconds for our 720-hour retention converts to 2,592,000 seconds. If you already run Prometheus, you can query prometheus_tsdb_head_series to see how many active series you are storing. The Prometheus default retention is 15 days. A per-job retention period is what I need for my use case, but Prometheus does not support it: any data you wish to keep longer should be scraped into another location with a longer retention period.
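The unit conversions used in these formulas are easy to get wrong, so here they are spelled out:

```shell
# Convert a 720h retention into days and seconds for the capacity formula.
retention_hours=720
retention_days=$((retention_hours / 24))          # 30 days
retention_seconds=$((retention_hours * 3600))     # 2592000 seconds
echo "${retention_hours}h = ${retention_days}d = ${retention_seconds}s"
```

The seconds value is what plugs into needed_disk_space = retention_time_seconds * ingested_samples_per_second * bytes_per_sample.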
A concrete use case: I pull 4 metrics from my solar panel every 30 seconds and want to store them forever (so I can, for example, go 6 months back and see the production at that moment), but I don't need that for all the other metrics (like Prometheus's own metrics). Prometheus cannot do per-metric retention natively; the usual answer is to federate or remote-write the long-lived series to a second system with its own, longer retention — all metrics are otherwise stored on the local disk with a per-server retention period (a minimum of 4 months for the initial goal, in this case). In this example, --storage.tsdb.retention.time is set to 360 days. To allow deleting series, start Prometheus with the admin API enabled, e.g. ARGS="--web.enable-admin-api". You can then make calls to the admin API: in my example I want to delete all time series for the instance="sbcode.net:9100" target, so I run the delete_series API endpoint, providing the value to match.
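With the admin API enabled, the deletion described above looks like the following (the host and port assume a default local install):

```shell
# Delete all series for one scrape target; -g stops curl from globbing the {} braces.
curl -g -X POST \
  'http://localhost:9090/api/v1/admin/tsdb/delete_series?match[]={instance="sbcode.net:9100"}'

# Deletion only writes tombstones; to reclaim the disk space immediately:
curl -X POST 'http://localhost:9090/api/v1/admin/tsdb/clean_tombstones'
```

Without the clean_tombstones call, the space is freed during a future compaction, as noted earlier.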
What did you do? I'd like to use these new flags and not the deprecated one: kubectl -n monitoring exec prometheus-prometheus-operator-prometheus-0 -c ... If neither time- nor size-based retention is set, the startup log confirms the default: level=info ... msg="no time or size retention was set so using the default time retention" duration=15d. Since Prometheus stores data only on the local machine, you are limited by how much disk space you can fit on that machine; its native storage was designed only for short-period data and needs to be kept short in order to stay responsive and operational. For long-term needs, pair it with a remote backend — for example VictoriaMetrics, a fast time-series database that works as long-term remote storage for Prometheus, or an integration such as Elasticsearch. In Loki, stream matching for retention rules uses the same syntax as Prometheus label matching.
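To actually check the retention period of a running server — the question this article set out to answer — you can ask the HTTP API directly (localhost:9090 assumed):

```shell
# All flag values the server started with, including the retention flags:
curl -s http://localhost:9090/api/v1/status/flags | grep -o '"storage\.tsdb\.retention[^,]*'

# Newer Prometheus versions also report the effective retention in the runtime info:
curl -s http://localhost:9090/api/v1/status/runtimeinfo
```

Alternatively, the Status → Command-Line Flags page of the web UI shows the same information.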
A combination of Prometheus and Grafana can be considered the standard open-source monitoring stack. Once the time-series retention period has passed, or the data size has reached the configured threshold, old data is removed from the Prometheus server, no longer consumes disk resources, and is no longer available for querying. After changing retention settings, restart the service and verify it is healthy: sudo service prometheus restart, then sudo service prometheus status.