
2025, the Year Artificial Intelligence Conquered Venture Capital


In 2025, something occurred that, until a few years ago, seemed difficult even to imagine: artificial intelligence (AI) absorbed about 50% of all global venture capital, raising 211 billion dollars, nearly double the 114 billion of 2024. This is not just a quantitative increase, but a structural shift in how venture capital perceives innovation, value, and growth.

The HumanX + Crunchbase 2025 AI Funding Report accurately captures this historic transition, demonstrating how AI is no longer a “technological gamble,” but the foundational infrastructure upon which entire industrial sectors are being redefined.

An Industry That Doubles in Just One Year

From 2016 to 2022, global investment in AI grew steadily, with an initial acceleration in the post-pandemic period. The real breakthrough, however, came between 2024 and 2025: +85% year-over-year, a pace rarely observed at global scale.
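
That growth rate follows directly from the two annual totals reported above; as a quick back-of-the-envelope check (our own arithmetic, not a figure taken from the report, with small gaps due to rounding):

$$\frac{211 - 114}{114} \approx 0.85$$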

The most significant data point is not just the total amount, but rather the structure of the rounds:

  • $163 billion has flowed into rounds of $100 million or more
  • 233 companies have closed megadeals
  • These rounds represent 77% of all capital invested in AI in 2025, compared to 67% in 2024
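
These shares can be verified against the totals reported above; a reader's quick check (rounding explains any small discrepancy):

$$\frac{163}{211} \approx 0.77$$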

Venture capital, in other words, is increasingly focusing on a few winners perceived as systemic.

Foundation Models: Dominant, But Not Alone

Foundation models remain the symbolic heart of the AI ecosystem. In 2025 they raised $87 billion, up nearly 180% from the previous year.

OpenAI and Anthropic alone have attracted $58.5 billion, solidifying valuations that place them among the largest private companies in the world:

  • OpenAI: estimated valuation of approximately $500 billion
  • Anthropic: approximately $183 billion

However, perhaps the most interesting data point is that 59% of AI investments did not go to foundation models, but to everything that makes AI usable, scalable, and monetizable.
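
That split is consistent with the headline figures (again, our own back-of-the-envelope arithmetic rather than a number from the report): the $87 billion raised by foundation models is roughly 41% of the $211 billion total, leaving about 59% for everything else:

$$1 - \frac{87}{211} \approx 0.59$$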

Where Capital Really Goes: Infrastructure, Applications, and Deep Tech

Analyzing rounds above 100 million dollars reveals a much more intricate distribution:

  • 19% towards AI infrastructure (cloud, data labeling, platforms)
  • 15% towards vertical AI software, with a particular focus on healthcare and security
  • 11% towards deep tech, particularly robotics and defense

This shift indicates a market maturation: the focus is moving from pure computational power to the creation of measurable value.

Not surprisingly, several industry leaders emphasize that the issue is not ambition, but foundations. According to research cited in the report, 95% of AI pilot projects do not produce measurable ROI, often due to infrastructural or organizational shortcomings. Companies that manage to bridge this gap are currently achieving average returns between 15% and 20%, with significant room for rapid improvement.

Geography of Power: The United States (and the Bay Area) Dominate

The geographical concentration of investments is impressive:

  • 79% of all AI capital in 2025 went to U.S. companies
  • $166 billion invested in the USA

Within the United States, the San Francisco Bay Area remains the absolute epicenter:

  • 60% of global AI funding (approximately 126 billion dollars)
  • 81% of all regional startup capital invested in AI
  • 92 companies with rounds over $100 million

Yet the Bay Area accounts for only 22% of the total number of deals, indicating that the global ecosystem is vast, but capital concentrates where iteration speed, talent, and money collide fastest.
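
Both geographic shares line up with the $211 billion global total (a back-of-the-envelope check on the reported figures, subject to rounding):

$$\frac{166}{211} \approx 0.79 \qquad \frac{126}{211} \approx 0.60$$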

Women and AI: A Statistic That Requires Careful Consideration

One of the most surprising data points of 2025 concerns the presence of female co-founders in AI-funded companies in North America and Europe:

  • 47% of AI capital has flowed into companies with at least one female founder
  • A total of $84.7 billion

However, the report calls for a critical reading: the effect is heavily influenced by the mega-rounds of foundation models. Looking at the number of rounds, the percentage stabilizes around 20%, consistent with previous years.

The signal is positive, but it highlights how structural parity is still a long way off.

HumanX: Where Capital Meets Narrative

The report is not just a macro analysis, but also a snapshot of the ecosystem surrounding HumanX, the global summit dedicated to enterprise AI.

The more than 130 companies taking the stage have collectively raised over $72 billion since 2018. Among them are names like Databricks, Cerebras Systems, Synthesia, Runway, Cohere, and many others, active in sectors ranging from cloud to semiconductors, from generative video to automated coding.

HumanX positions itself as a “collision” space: data, founders, and investors don’t just discuss trends, they put them to the test in the field.

Looking Ahead to 2026: IPOs, M&A, and New Liquidity

Thanks to Crunchbase’s predictive intelligence, the report also attempts to look ahead:

  • Out of approximately 6,600 AI companies funded since 2023, over 2,300 are considered likely acquisition candidates
  • 443 companies show a high likelihood of IPO
  • Among the companies present at HumanX, 27 could go public and 30 could be acquired
  • Over half are expected to raise new rounds in the short term

After years of slowing exits, 2026 could mark a tangible reopening of the market.


Conclusion: Not a Bubble, but a Historic Reallocation

The message emerging from the HumanX + Crunchbase 2025 AI Funding Report is clear: AI is not merely experiencing a phase of hype, but a structural reallocation of capital.

Venture capital is betting on companies that:

  • solve complex problems,
  • generate measurable value,
  • build lasting infrastructures.

In this sense, 2025 is not just the year when AI “captured” venture capital. It is the year when venture capital acknowledged that the future of innovation will come almost entirely from AI.


Megadeals and Capital Concentration: The New Paradigm of Venture

One of the key elements that distinguishes 2025 from previous years is the intense concentration of capital. Venture capital is not just investing more in AI: it is investing more selectively.

Megadeals (rounds over 100 million dollars) have become the dominant tool for financing AI innovation. This results in two structural effects:

  1. Reduction of perceived risk: large funds prefer to double down on already validated companies rather than fragment capital across dozens of early-stage bets.
  2. Building systemic champions: many AI companies are not conceived as mere startups, but as future infrastructural layers of the digital economy.

This model is more reminiscent of the industrialization of the 20th century than the “spray and pray” venture approach of the 2010s.


AI as Economic Infrastructure, Not as a Feature

A key point of the report is the shift in narrative: AI is no longer treated as an additional feature, but as primary economic infrastructure.

The companies attracting significant capital in 2025 share certain characteristics:

  • direct or privileged control of proprietary data;
  • strong integration with core processes (supply chain, compliance, healthcare, security);
  • business models focused on enterprise recurring revenue;
  • ability to demonstrate measurable operational improvements.

In this context, AI becomes comparable to electricity or the Internet: invisible to the end user, yet essential for competitiveness.


ROI: From Myth to Metric

The report addresses one of the most sensitive issues of AI adoption: the return on investment.

According to the cited data, 95% of AI pilot projects fail to produce a measurable ROI. Not because the technology doesn’t work, but because there is a lack of:

  • integration with legacy systems;
  • data governance;
  • internal training;
  • redefinition of decision-making processes.

Companies that surpass this initial phase enter a virtuous cycle. The current average ROI, estimated between 15% and 20%, is set to grow rapidly thanks to:

  • reduction of marginal inference costs;
  • model improvement;
  • standardization of AI pipelines.

For venture capital, this means one thing: less hype, more execution.


Invisible Infrastructures: The Real Battleground

If foundation models represent the tip of the iceberg, AI infrastructures are the submerged mass.

In 2025, an increasing share of capital flowed into:

  • cloud platforms specialized for AI workloads;
  • high-energy-density data centers;
  • data labeling and synthetic data companies;
  • chips and semiconductors optimized for training and inference.

These investments are less visible in the media, but often more defensible in the long term. They build technological moats that are difficult to replicate and bind customers through high switching costs.


Vertical Applications: When AI Becomes Business

Another sign of maturity is the growth of vertical applications.

Healthcare, cybersecurity, legaltech, defense, and finance are among the sectors that attract the most capital because they combine:

  • high regulatory complexity;
  • need for automation;
  • availability of enterprise budget.

Here, AI is not experimentation, but a direct competitive advantage. Companies that manage to deeply integrate it into their workflows quickly become difficult to replace.


United States vs the Rest of the World: A Widening Gap

The U.S. dominance is not only quantitative but also qualitative.

The United States is home to:

  • the largest VC funds;
  • the most advanced universities and research centers;
  • big tech companies capable of acquiring or funding AI startups.

The result is a flywheel effect: more capital generates more talent, which generates more companies, attracting even more capital.

The rest of the world remains active and innovative, but faces increasing difficulties in competing in late-stage rounds.


Bay Area: Global AI Laboratory

The Bay Area emerges as a true global laboratory.

The region is home to:

  • the most advanced foundation models;
  • the most sophisticated cloud infrastructures;
  • an ecosystem of serial founders and specialized investors.

The most emblematic data point is that 81% of regional startup capital went to AI. This means that, in effect, the Bay Area is betting almost exclusively on this technology as a driver of future growth.


The Role of HumanX: Validation Platform

HumanX is not simply a conference, but a market validation platform.

The companies taking the stage are not concepts, but operating businesses that:

  • generate revenue;
  • have enterprise clients;
  • attract significant rounds.

This makes HumanX a prime observatory for understanding which business models are truly working.


Predictive Intelligence: Venture Capital Looks Ahead

The integration of Crunchbase’s predictive intelligence introduces a new element: the systematic forecasting of financial events.

Through the analysis of billions of signals, Crunchbase is able to estimate:

  • probability of new rounds;
  • likelihood of acquisition;
  • IPO potential.

The fact that thousands of predictions have already been confirmed suggests a paradigm shift: venture capital no longer merely reacts, but seeks to anticipate.


IPO and M&A: Towards a New Liquidity Window

After years of contraction in exits, 2026 could represent a turning point.

The more mature AI companies demonstrate:

  • stronger growth metrics;
  • clear monetization paths;
  • growing interest from corporates and public markets.

This could unlock new liquidity, reactivating the entire venture cycle.


Extended Conclusion: AI as the Architecture of the Future

2025 marks a dividing line.

Artificial intelligence is no longer a promise, but a backbone of the global economy. Venture capital has understood this and has reallocated resources accordingly.

Not all companies will succeed. The selection will be tough. But one thing is clear: the future of innovation, productivity, and industrial competitiveness will largely stem from here.

The year 2025 was not only the year when AI captured venture capital. It was the year when capital itself agreed to be transformed.

