
Why kube-prometheus-stack Isn’t Enough for Kubernetes Observability

2025/10/28 14:04

Observability in Kubernetes has become a hot topic in recent years. Teams everywhere deploy the popular kube-prometheus-stack, which bundles Prometheus and Grafana into an opinionated setup for monitoring Kubernetes workloads. On the surface, it looks like the answer to all your monitoring needs. But here is the catch: monitoring is not observability. And if you confuse the two, you will hit a wall when your cluster scales or your incident response gets messy.

In this first post of my observability series, I want to break down the real difference between monitoring and observability, highlight the gaps in kube-prometheus-stack, and suggest how we can move toward true Kubernetes observability.

The question I keep hearing

I worked with a team running microservices on Kubernetes. They had kube-prometheus-stack deployed, beautiful Grafana dashboards, and alerts configured. Everything looked great until 3 AM on a Tuesday when API requests started timing out.

The on-call engineer got paged. Prometheus showed CPU spikes. Grafana showed pod restarts. When the team jumped on Slack, they asked me: “Do you have tools for understanding what causes these timeouts?” They spent two hours manually correlating logs across CloudWatch, checking recent deployments, and guessing at database queries before finding the culprit: a batch job with an unoptimized query hammering the production database.

I had seen this pattern before. Their monitoring stack told them something was broken, but not why. With distributed tracing, they would have traced the slow requests back to that exact query in minutes, not hours. This is the observability gap I keep running into: teams confuse monitoring dashboards with actual observability.

The lesson for them was clear: monitoring answers “what broke” while observability answers “why it broke.” And fixing this requires shared ownership. Developers need to instrument their code for visibility. DevOps engineers need to provide the infrastructure to capture and expose that behavior. When both sides own observability together, incidents get resolved faster and systems become more reliable.

Monitoring vs Observability

Most engineers use the terms interchangeably, but they are not the same. Monitoring tells you when something is wrong, while observability helps you understand why it went wrong.

  • Monitoring: Answers “what is happening?” You collect predefined metrics (CPU, memory, disk) and set alerts when thresholds are breached. Your alert fires: “CPU usage is 95%.” Now what?
  • Observability: Answers “why is this happening?” You investigate using interconnected data you didn’t know you’d need. Which pod is consuming CPU? What user request triggered it? Which database query is slow? What changed in the last deployment?

The classic definition of observability relies on the three pillars:

  • Metrics: Numerical values over time (CPU, latency, request counts).
  • Logs: Timestamped event records (structured or unstructured) that provide context.
  • Traces: Request flow across services.

Prometheus and Grafana excel at metrics, but Kubernetes observability requires all three pillars working together. The CNCF observability landscape shows how the ecosystem has evolved beyond simple monitoring. If you only deploy kube-prometheus-stack, you will only get one piece of the puzzle.

The Dominance of kube-prometheus-stack

Let’s be fair. kube-prometheus-stack is the default for a reason. It provides:

  • Prometheus for metrics scraping
  • Grafana for dashboards
  • Alertmanager for rule-based alerts
  • Node Exporter for hardware and OS metrics

With Helm, you can set it up in minutes. This is why it dominates Kubernetes monitoring setups today. But it’s not the full story.

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update

helm install kube-prometheus-stack prometheus-community/kube-prometheus-stack \
  --namespace monitoring \
  --create-namespace

Within minutes, you’ll have Prometheus scraping metrics, Grafana running on port 3000, and a collection of pre-configured dashboards. It feels like magic at first.
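
Before opening Grafana, it is worth a quick sanity check that everything actually started, assuming the release name and namespace used above:

kubectl get pods -n monitoring

You should see pods for Prometheus, Grafana, Alertmanager, the operator, and node-exporter in the Running state.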

Access Grafana to see your dashboards:

kubectl port-forward -n monitoring svc/kube-prometheus-stack-grafana 3000:80

Default credentials are admin / prom-operator. You’ll immediately see dashboards for Kubernetes cluster monitoring, node exporter metrics, and pod resource usage. The data flows in automatically.

In many projects, I’ve seen teams proudly display dashboards full of red and green panels yet still struggle during incidents. Why? Because the dashboards told them what broke, not why.

Common Pitfalls with kube-prometheus-stack

Metric Cardinality Explosion

Cardinality is the number of unique time series created by combining a metric name with all possible label value combinations. Each unique combination creates a separate time series that Prometheus must store and query. The Prometheus documentation on metric and label naming provides official guidance on avoiding cardinality issues.

Prometheus loves labels, but too many labels can crash your cluster. If you add dynamic labels like user_id or transaction_id, you end up with millions of time series. This causes both storage and query performance issues. I’ve witnessed a production cluster go down not because of the application but because Prometheus itself was choking.

Here’s a bad example that will destroy your Prometheus instance:

from prometheus_client import Counter

# BAD: High cardinality labels
http_requests = Counter(
    'http_requests_total',
    'Total HTTP requests',
    ['method', 'endpoint', 'user_id', 'transaction_id']  # AVOID!
)

# With 1000 users and 10000 transactions per user, you get:
# 5 methods * 20 endpoints * 1000 users * 10000 transactions = 1 billion time series

Instead, use low-cardinality labels and track high-cardinality data elsewhere:

from prometheus_client import Counter

# GOOD: Low cardinality labels
http_requests = Counter(
    'http_requests_total',
    'Total HTTP requests',
    ['method', 'endpoint', 'status_code']  # Limited set of values
)

# Now you have: 5 methods * 20 endpoints * 5 status codes = 500 time series

You can check your cardinality with this PromQL query:

count({__name__=~".+"}) by (__name__)

If you see metrics with hundreds of thousands of series, you’ve found your culprit.
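
To go one level deeper, sort that count to find the worst offenders, then check which label on a suspect metric is driving the growth. A small PromQL sketch, reusing the hypothetical user_id label from the earlier example:

# Top 10 metric names by number of active series
topk(10, count by (__name__)({__name__=~".+"}))

# For one suspect metric, count the distinct values of a single label
count(count by (user_id)(http_requests_total))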

Lack of Scalability

In small clusters, a single Prometheus instance works fine. In large enterprises with multiple clusters, it becomes a nightmare. Without federation or sharding, Prometheus does not scale well. If you’re building multi-cluster infrastructure, understanding Kubernetes deployment patterns becomes critical for running monitoring components reliably.

For multi-cluster setups, the Prometheus federation documentation describes how a global Prometheus instance can scrape selected series from cluster-local instances. Here’s a basic configuration for that global instance:

scrape_configs:
  - job_name: 'federate'
    scrape_interval: 15s
    honor_labels: true
    metrics_path: '/federate'
    params:
      'match[]':
        - '{job="kubernetes-pods"}'
        - '{__name__=~"job:.*"}'
    static_configs:
      - targets:
          - 'prometheus-cluster-1.monitoring:9090'
          - 'prometheus-cluster-2.monitoring:9090'
          - 'prometheus-cluster-3.monitoring:9090'

Even with federation, you hit storage limits. A single Prometheus instance struggles beyond 10-15 million active time series.
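
Before re-architecting around federation or sharding, it often pays to drop series you never query at scrape time. A minimal sketch using metric_relabel_configs, assuming a scrape job named kubernetes-pods and a histogram metric you don’t need (the metric name here is hypothetical):

scrape_configs:
  - job_name: 'kubernetes-pods'
    metric_relabel_configs:
      # Drop a noisy histogram that nobody queries (hypothetical metric name)
      - source_labels: [__name__]
        regex: 'http_request_duration_seconds_bucket'
        action: drop

With kube-prometheus-stack, the same rule usually lives as metricRelabelings on the relevant ServiceMonitor rather than in a raw scrape config.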

Alert Fatigue

Kube-prometheus-stack ships with a bunch of default alerts. While they are useful at first, they quickly generate alert fatigue. Engineers drown in notifications that don’t actually help them resolve issues.

Check your current alert rules:

kubectl get prometheusrules -n monitoring

You’ll likely see dozens of pre-configured alerts. Here’s an example of a noisy alert that fires too often:

- alert: KubePodCrashLooping
  annotations:
    description: 'Pod {{ $labels.namespace }}/{{ $labels.pod }} is crash looping'
    summary: Pod is crash looping.
  expr: |
    max_over_time(kube_pod_container_status_waiting_reason{reason="CrashLoopBackOff"}[5m]) >= 1
  for: 15m
  labels:
    severity: warning

The problem? This fires for every pod in CrashLoopBackOff, including those in development namespaces or expected restarts during deployments. You end up with alert spam.

A better approach is to tune alerts based on criticality:

- alert: CriticalPodCrashLooping
  annotations:
    description: 'Critical pod {{ $labels.namespace }}/{{ $labels.pod }} is crash looping'
    summary: Production-critical pod is failing.
  expr: |
    max_over_time(kube_pod_container_status_waiting_reason{
      reason="CrashLoopBackOff",
      namespace=~"production|payment|auth"
    }[5m]) >= 1
  for: 5m
  labels:
    severity: critical

Now you only get alerted for crashes in critical namespaces, and you can respond faster because the signal-to-noise ratio is higher.
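
Routing is the other half of the fix. Alertmanager, which ships with the stack, can send anything below critical to a low-priority channel instead of paging. A minimal sketch, assuming receivers named slack-low-priority and pagerduty-oncall (both hypothetical names you would define yourself):

route:
  receiver: slack-low-priority
  routes:
    # Only critical alerts page the on-call engineer; everything else stays in Slack
    - matchers:
        - severity = "critical"
      receiver: pagerduty-oncall
receivers:
  - name: slack-low-priority
    # slack_configs omitted for brevity
  - name: pagerduty-oncall
    # pagerduty_configs omitted for brevity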

Dashboards That Show What but Not Why

Grafana panels look impressive, but most of them only highlight symptoms. High CPU, failing pods, dropped requests. They don’t explain the underlying cause. This is the observability gap.

Here’s a typical PromQL query you’ll see in Grafana dashboards:

# Shows CPU usage percentage
100 - (avg by(instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100)

This tells you what: CPU is at 95%. But it doesn’t tell you why. Which process? Which pod? What triggered the spike?

You can try drilling down with more queries:

# Top 10 pods by CPU usage
topk(10, sum by (namespace, pod) (rate(container_cpu_usage_seconds_total[5m])))

Even this shows you the pod name, but not the request path, user action, or external dependency that caused the spike. Without distributed tracing, you’re guessing. You end up in Slack asking, “Did anyone deploy something?” or “Is the database slow?”
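
Those Slack questions can at least be answered from the command line while you wait for proper tracing. A couple of manual checks I usually reach for (the deployment and namespace names here are hypothetical):

# Did anyone deploy something recently?
kubectl rollout history deployment/payment-api -n production

# What happened in the namespace lately?
kubectl get events -n production --sort-by=.lastTimestamp | tail -20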

Why kube-prometheus-stack Alone Is Not Enough for Kubernetes Observability

Here is the opinionated part: kube-prometheus-stack is monitoring, not observability. It’s a foundation, but not the endgame. Kubernetes observability requires:

  • Logs (e.g., Loki, Elasticsearch)
  • Traces (e.g., Jaeger, Tempo)
  • Correlated context (not isolated metrics)

Without these, you will continue firefighting with partial visibility.

Building a Path Toward Observability

So, how do we close the observability gap?

  • Start with kube-prometheus-stack, but acknowledge its limits.
  • Add a centralized logging solution (Loki, Elasticsearch, or your preferred stack).
  • Adopt distributed tracing with Jaeger or Tempo.
  • Prepare for the next step: OpenTelemetry.

Here’s how to add Loki for centralized logging alongside your existing Prometheus setup:

helm repo add grafana https://grafana.github.io/helm-charts
helm repo update

# Install Loki for log aggregation
helm install loki grafana/loki \
  --namespace monitoring \
  --create-namespace

For distributed tracing, Tempo integrates seamlessly with Grafana:

# Install Tempo for traces
helm install tempo grafana/tempo \
  --namespace monitoring

Now configure Grafana to use Loki and Tempo as data sources. You can add them through the Grafana UI or provision them declaratively with a file like this:

apiVersion: 1
datasources:
  - name: Loki
    type: loki
    access: proxy
    url: http://loki:3100
  - name: Tempo
    type: tempo
    access: proxy
    url: http://tempo:3100

With this setup, you can jump from a metric spike in Prometheus to related logs in Loki and traces in Tempo. This is when monitoring starts becoming observability.
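
To make that jump clickable rather than manual, Grafana can link trace IDs found in Loki log lines straight to Tempo. A sketch of the Loki data source with a derived field, assuming your applications log a traceID=<id> token and the Tempo data source has the UID tempo (both assumptions about your setup):

apiVersion: 1
datasources:
  - name: Loki
    type: loki
    access: proxy
    url: http://loki:3100
    jsonData:
      derivedFields:
        # Extract the trace ID from log lines and link it to the Tempo data source
        - name: TraceID
          matcherRegex: 'traceID=(\w+)'
          url: '$${__value.raw}'
          datasourceUid: tempo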

OpenTelemetry introduces a vendor-neutral way to capture metrics, logs, and traces in a single pipeline. Instead of bolting together siloed tools, you get a unified foundation. I’ll cover this in detail in the next post on OpenTelemetry in Kubernetes.

Conclusion

Kubernetes observability is more than Prometheus and Grafana dashboards. Kube-prometheus-stack gives you a strong monitoring foundation, but it leaves critical gaps in logs, traces, and correlation. If you only rely on it, you will face cardinality explosions, alert fatigue, and dashboards that tell you what went wrong but not why.

True Kubernetes observability requires a mindset shift. You’re not just collecting metrics anymore. You’re building a system that helps you ask questions you didn’t know you’d need to answer. When an incident happens at 3 AM, you want to trace a slow API call from the user request, through your microservices, down to the database query that’s timing out. Prometheus alone won’t get you there.

To build true Kubernetes observability:

  • Accept kube-prometheus-stack as monitoring, not observability
  • Add logs and traces into your pipeline
  • Watch out for metric cardinality and alert noise
  • Move toward OpenTelemetry pipelines for a unified solution

The monitoring foundation you build today shapes how quickly you can respond to incidents tomorrow. Start with kube-prometheus-stack, acknowledge its limits, and plan your path toward full observability. Your future self (and your on-call team) will thank you.

In the next part of this series, I will show how to deploy OpenTelemetry in Kubernetes for centralized observability. That is where the real transformation begins.

Read next: OpenTelemetry in Kubernetes for centralized observability.
